Cato Op-Eds

Individual Liberty, Free Markets, and Peace

In Washington earlier this month, one person’s words in the New York Times were deemed a threat to national security by those at whom they were aimed.

An anonymous Trump administration official was labeled “a seditious traitor who must be identified and prosecuted for illegal conduct” for exercising his or her First Amendment rights by publishing an op-ed in the September 5 edition of the New York Times. Vice President Pence stated that the op-ed writer’s actions inside the Administration—trying to limit what the writer believes is the damage President Trump is doing daily to the United States—are “an assault on our democracy”—a notion unhinged from any semblance of reality.

Like everyone else working in the Trump administration, the author of the op-ed took the same oath I did when I served in the federal government, an oath whose text is set by federal law, 5 U.S.C. § 3331:

I, AB, do solemnly swear (or affirm) that I will support and defend the Constitution of the United States against all enemies, foreign and domestic; that I will bear true faith and allegiance to the same; that I take this obligation freely, without any mental reservation or purpose of evasion; and that I will well and faithfully discharge the duties of the office on which I am about to enter. So help me God.

The oath makes no reference to pledging fealty to whoever happens to be President. It is a pledge of loyalty to our form of government, not an individual. The notion that the Justice Department even has a basis to prosecute the writer does not pass the laugh test, much less constitutional muster.

The anonymous Trump administration official—and if he or she is to be believed, many more working for Trump—views him as a domestic threat to the American people and the Constitution itself. Democrats and others on the political left have viewed Trump that way since he won the Electoral College vote in November 2016. Clearly others in the Administration now view Trump the same way.

The anonymous op-ed writer is hardly the first person working for a federal chief executive to believe that an increasingly mentally unhinged boss needed to be contained, or even removed. Nixon White House counsel-turned-Watergate whistleblower John Dean is perhaps the most prominent, but he was not the only Nixon administration official prepared to ignore or even countermand a presidential order deemed a threat to the Republic.

As Daily Beast reporter Gil Troy reminded us less than a month into the Trump presidency, then-Secretary of Defense James Schlesinger made certain in the summer of 1974 that any Nixon order to the military would not be carried out unless Schlesinger approved it. Would Nixon really have tried to order the Old Guard to do something crazy, like march on Capitol Hill and round up those who voted for the articles of impeachment against him? Probably not, but Schlesinger made sure there was no way it could happen.

I think it’s fair to argue that the author of the op-ed in question should’ve resigned, then published. By remaining anonymous and in the Administration, the author has forced his or her colleagues to engage in the very public and humiliating spectacle of going out of their way to say, “It wasn’t me.” The chaos of Trump’s governing “style” has been deepened by the op-ed writer’s action, something that carries its own risks, however ill-defined they may be. 

But the reality is that the day-to-day business of keeping America’s government running is handled by hundreds of thousands of effectively anonymous civil servants, all of whom have taken the oath outlined above, the overwhelming majority of whom execute that oath faithfully every day. It is they who will help ensure that America and its government survive the Trump era, even if enduring it sometimes feels like the political equivalent of passing a kidney stone.

The existence of government infrastructure deters or “crowds out” private investment. Many airports, bridges, and urban transit systems in the United States used to be private, but during the mid-20th century entrepreneurs were squeezed out by governments.

The provision of federal aid or subsidies to government-owned airports, bridges, and transit facilities was a key factor in pushing out private enterprise. That is one reason why I favor repealing federal aid for transportation.

AIRPORTS

In the early years of commercial aviation, private airports served many American cities. For example, the main airports in Los Angeles, Miami, Philadelphia, and Washington D.C. were for-profit business ventures in the 1930s.

The airports were generally successful and innovative, but they lost ground over time due to unfair government competition:

  • City governments were often eager to set up their own airports, even if private airports already served an area.
  • Cities issued tax-exempt bonds to finance their airports, giving them a financial edge over private airports.
  • Private airports pay taxes. Government airports do not, giving them another financial edge.
  • The U.S. military and the Post Office promoted government airports over private ones.
  • Federal New Deal programs provided aid to government airports, not private ones.
  • Congress provided aid to government airports for national defense purposes during World War II.
  • The federal Surplus Property Act after the war transferred excess military bases to the states for government airport use.
  • The federal Airport Act of 1946 began regular federal aid to government airports, not private ones.
  • The new Federal Aviation Administration in 1958 “prohibited private airports from offering commercial service.”

So governments banished entrepreneurs from a major part of America’s aviation industry. In the early 1930s, about half of the nation’s more than 1,100 airports were private, but by the 1960s, private commercial airports had mainly disappeared. Very sad, as I discuss here.

However, there is good news about airports. A privatized commercial airport industry is booming abroad, particularly in Europe. U.S. policymakers should let entrepreneurs take another crack at our airport industry.

BRIDGES

Bob Poole discusses government crowd out of private bridges in his new book Rethinking America’s Highways. In the 1920s, four main bridges built in the San Francisco area were private toll facilities. In the 1930s, the Golden Gate Bridge and Oakland Bay Bridge were built as government toll facilities.

Poole picks up the story:

All six of these bridges suffered declines in traffic and revenue due to the Depression, but the Bay Bridge and the Golden Gate opened closer to its end and were therefore less affected. Their financing costs were also lower, with the Bay Bridge getting low-cost financing from the New Deal’s Reconstruction Finance Corporation, and the Golden Gate being able to issue tax-exempt toll revenue bonds, rather than the taxable bonds issued by the toll bridge companies.

In addition, the California legislature voted in 1933 to relieve the Bay Bridge of having to cover operating and maintenance costs out of toll revenues, allocating state highway fund (gas tax) monies to cover those costs. The four private toll bridges all went into receivership by 1940. Unlike the Ambassador Bridge (in Michigan), they were unable to work out refinancing plans and were eventually acquired by the state, with the Dumbarton and San Mateo transfers not taking place until the early 1950s; their shares traded on the Pacific Coast Exchange until then.

A similar fate befell many of the other 200-odd private toll bridges during the Depression. The Reconstruction Finance Corporation provided low-cost loans to public-sector toll bridges, but not to investor-owned ones. Relatively new government toll agencies offered buyouts to struggling bridge owners during those years. The New York State Bridge Commission bought four private toll bridges over the Hudson River; the Delaware River Joint Toll Bridge Commission acquired at least six private toll bridges; and the city of Dallas bought the toll bridge on the Trinity River in order to eliminate tolls.

By 1940, the Public Roads Administration (the former Bureau of Public Roads, now part of the Federal Works Agency) reported that the number of US toll bridges had declined to 241, of which 142 were still investor-owned. But nearly all the bridges had been bought out by toll agencies or state and local governments by the mid-1950s.

URBAN TRANSIT

The early history of urban transit in America is one of private-sector funding and innovation, as Randal O’Toole discusses in this study. Hundreds of cities had private streetcar and bus companies moving people in downtowns and the growing suburbs in the early 20th century.

As the century progressed, however, the rise of automobiles undermined the demand for transit. At the same time, transit firms had difficulty cutting costs because their workforces were heavily unionized, and governments resisted allowing them to cut service on unprofitable routes.

The nail in the coffin for private transit was the Urban Mass Transportation Act of 1964, which provided federal aid to government-owned bus and rail systems. The act encouraged state and local governments to take over private systems, and a century of private transit investment came to a close.

This Transportation Research Board study discusses the decline of private transit:

As the declining fortunes of America’s cities gained national recognition during the 1960s, Congress passed legislation that for the first time gave the federal government a prominent role in the provision of urban transit. The Urban Mass Transportation Act of 1964 (later redesignated the Federal Transit Act) provided loans and grants for transit capital acquisition, construction, and planning activities.

… Notably, only public entities could apply for the federal grants. Given the availability of federal aid, many cities, states, and counties purchased or otherwise took over their local rail and bus systems. Thus by the 1970s, a largely new model of transit provision—public ownership—had become increasingly prevalent in the United States. Many jurisdictions consolidated the operations of smaller private and public systems under the auspices of regional transit authorities. A few states, such as Connecticut, Rhode Island, and New Jersey, formed statewide transit agencies.

… In 1940, only 20 transit systems in the country were publicly owned, and they accounted for just 2 percent of ridership. By 1960, although the vast majority of all systems were still in private ownership, properties in public ownership accounted for nearly half of all transit ridership, mainly because the country’s very largest systems were publicly owned. By 1980, more than 500 systems were publicly owned, accounting for 95 percent of ridership nationally.

In sum, the bad news is that when the government advances, the private sector retreats. But the good news we have seen around the world in recent decades is that when the government gets out of the way, the private sector steps in to provide better services at lower costs.

Further reading:

https://www.downsizinggovernment.org/transportation

https://www.downsizinggovernment.org/infrastructure-investment

https://www.downsizinggovernment.org/privatization

In principle, the federal housing-voucher program known as Section 8 ought to win points as a market-oriented alternative to the old command-and-control approach of planning and constructing public housing projects. While allowing recipients wider choice about where to live, it has also enabled private landlords to decide whether to participate and, if so, what mix of voucher-holding and conventionally paying tenants makes the most sense for a location. 

But there is another possibility, which is that Section 8 will in time bring with it onerous new restrictions on the private landlord-tenant relationship. For landlords, participation in the program has long carried with it some significant burdens of inspection, certification, and reporting paperwork. So long as participation was voluntary, these conditions were presumably worth it in exchange for the chance to reach voucher-holders as a class of potential tenants. When accepting Section 8 tenants stops being a voluntary choice, however, the balance is likely to shift. And one of the big policy pushes of the past decade – zealously promoted by the Obama administration – was the local enactment of laws and ordinances prohibiting so-called source-of-income discrimination, which in practice can mean making it a legal offense for a landlord to maintain a policy of declining Section 8 vouchers. Once that sort of control is in place, and landlords cannot opt out of the program, there will no longer be any natural check on Washington’s imposition of ever more burdensome conditions via Section 8 program rules on private landlords, including conditions that affect their relations with conventional non-voucher tenants. 

Now, in an en banc ruling, the Third Circuit has made clear another source of legal exposure for landlords participating in the program. A specialized portion of the program provides so-called enhanced housing vouchers to enable tenants to go on living in properties that once received “project-based” Section 8 support (akin to traditional low-income housing) but have been converted by their owners to conventional market-rate housing. Philip Harvey owned one such property, a unit of which had long been rented to Florence Hayes. When Ms. Hayes died in 2015, Harvey sought to renovate the apartment for use by his daughter, while Ms. Hayes’s son wanted to take over as primary tenant. Litigation ensued, and a three-judge panel of the Third Circuit ruled, over a dissent, that once Ms. Hayes’s lease expired, the law placed Harvey under no obligation to sign a new lease with her successor.

On Aug. 31, however, the full Third Circuit by a lopsided margin overturned the panel opinion and ruled that Ms. Hayes’s son had the right to take over as tenant and obtain lease renewals from Harvey during good behavior, as did anyone else who had been on the lease (even as a child) at the time of such a property’s conversion. It construed language about how a tenant “may elect to remain” in a converted project as binding not just HUD, in its obligation to provide assistance, but the landlord as well. Only Judges D. Michael Fisher and Thomas Hardiman, who had prevailed on the original panel, dissented. Various tenants’-rights amicus filers, as well as the City of Philadelphia, took the son’s side.

Judge Fisher, in dissent, says the majority “overlooks the basic design of the enhanced voucher program as an incentive-based program, not a compulsory one.” But “overlooks” may not be the right verb. Maybe a better one is “takes another step to subvert.”

In my last post I wrote about the lawsuit TNB USA Inc. has filed against the New York Fed, which has refused to grant the would-be bank a Master Account. I argued that, despite its name (TNB stands for “The Narrow Bank”), and despite what some commentators (now including, alas, The Wall Street Journal’s editorial staff) seem to think, TNB isn’t meant to supply ordinary persons with a safer alternative to deposits at ordinary banks. Instead, TNB’s purpose is to receive deposits from non-bank financial institutions only, to allow them to take advantage, indirectly, of the Fed’s policy of paying interest on bank reserves — thereby potentially earning more than they might either by investing directly in securities or by taking advantage of the Fed’s reverse repo program, which is open to them but which presently offers a rate 20 basis points lower than the Fed’s IOER rate.

A Hollow Victory?

Yet for all the controversy TNB’s lawsuit has generated, its outcome may no longer matter as much as it might once have. For one thing, TNB’s success can no longer undermine the Fed’s ON-RRP program, which is designed to implement the Fed’s target interest rate lower bound, for the simple reason that that program is already moribund. Commenting on my post, J.P. Koning observed that, while the Fed’s ON-RRP facility, first established in December 2013, once supplied non-bank financial institutions with an attractive investment alternative, it ceased being so this year. As the chart below, reproduced from J.P.’s comment, shows, the facility — which once accommodated hundreds of billions of dollars in bids — is now completely inactive:

The decline in ON-RRP activity since the beginning of this year is a byproduct of the general increase in market rates of interest, both absolutely and relative to the Fed’s ON-RRP offer rate, that has made the program both less attractive to potential participants and unnecessary as a means for establishing a lower bound for the effective fed funds rate. But that decline is but one symptom of a more general development, to wit: the tendency of the Fed’s policy rate settings to lag further and further behind increases in market-determined interest rates, thanks in no small part to the Trump administration’s fiscal profligacy. Here, for example, is a FRED chart comparing the Fed’s policy rate settings to the yield on 1-month Treasury bills:

In the figure the “Lower Limit” of the Fed’s federal funds target range is also the Fed’s ON-RRP facility offer rate, while the “Upper Limit” is the same as the Fed’s IOER rate until mid-June 2018, and 5 basis points above the IOER rate afterwards.

Although an overnight repurchase agreement is a more liquid investment than a one-month Treasury bill, it’s easy to appreciate how that difference ceased, in the last year or so, to compensate for the gap between the ON-RRP rate and other money market rates. But those rates have also increased relative to the IOER rate, with the Fed’s June decision to reduce the IOER – ON-RRP rate spread from 25 to 20 basis points reducing the attractiveness of IOER relative to money market rates by another 5 bps. Consequently, bank reserves are also much less attractive relative to money market instruments, and especially to shorter-term Treasury bills, than they were a year ago.

All of which means that TNB’s efforts could end up being in vain even if the Fed ends up granting it an account. As J.P. Koning points out in his own post concerning the TNB case, “even if TNB succeeds in its lawsuit, there is a larger threat. The gap the bank is trying to exploit is shrinking.” In contrast, when the TNB plan was originally developed in 2016, that gap was about 25 basis points.
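To make that shrinking-gap arithmetic concrete, here is a minimal sketch in Python of the pass-through trade TNB proposes. The rates are illustrative round numbers in the spirit of the figures above, not official Fed data:

    # Sketch of the pass-through arbitrage TNB proposes (illustrative only).
    def tnb_gross_spread(ioer, alternative_rate):
        """Extra yield (in percentage points) a non-bank earns by placing
        funds with TNB, which parks them at the Fed earning IOER, rather
        than in its next-best alternative (ON-RRP or short Treasury bills)."""
        return ioer - alternative_rate

    # Circa 2016, when the TNB plan was developed: ON-RRP sat 25 bps below IOER.
    print(tnb_gross_spread(ioer=0.50, alternative_rate=0.25))  # 0.25, i.e. 25 bps

    # 2018: the IOER/ON-RRP spread narrowed to 20 bps while short T-bill yields
    # rose toward IOER, leaving little or nothing for TNB to pass through.
    print(tnb_gross_spread(ioer=1.95, alternative_rate=1.90))  # 0.05, i.e. 5 bps

Any fee TNB charged would come out of this gross spread, so as the gap shrinks, so does the product’s appeal to potential customers.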

It’s possible, of course, that future changes will see the IOER rate ruling the interest-rate roost once again instead of becoming a bit player. But until that happens, TNB USA Inc. may find landing customers just as difficult as landing a Master Account.

Whither the Floor System?

Although rising market rates may cause TNB’s efforts to come to naught, that possibility should not offer Fed officials much comfort, for the tendency for those rates to outpace its own policy rate settings poses no less a threat to its own operating framework than it does to TNB’s business plan.

That operating framework, called a “floor” system, depends on banks’ willingness to hoard reserves, so that changes in the amount of reserves in the banking system, instead of causing banks to increase their lending — thereby putting downward pressure on market interest rates — lead to like changes in banks’ excess reserve holdings. The Fed is then able, in principle, to expand or shrink its balance sheet without altering the stance of monetary policy. Instead of depending on the quantity of reserves the Fed creates, that stance will depend mainly on the interest rate the Fed pays on excess reserves, or the IOER rate, for short.

If, on the other hand, excess reserves cease to be attractive relative to other assets banks might acquire, those banks will no longer be inclined to hold substantial quantities of excess reserves. Instead, they’ll exchange them for other assets, and Treasury securities especially, since such securities are just as useful as reserves when it comes to meeting Basel III’s Liquidity Coverage Ratio rules. The Federal Home Loan Banks, for their part, are increasingly inclined to offer their surplus Fed balances on the private repo market instead of lending them to banks in return for a piece of the IOER pie. Eventually, either Treasury yields and private-market repo rates must decline enough, relative to the IOER rate, to make reserve hoarding attractive once again, or the passing of the reserve balance “hot potato” must raise the quantity of bank deposits enough to convert unwanted excess reserves into required reserves.

Prior to October 2008, when the Fed first put its floor system in place, the “hot potato” effect had been the norm: bank reserves paid no interest at all, while even one-month Treasury bills yielded over 2 percent. Consequently, banks held only trivial amounts of excess reserves, disposing of the rest first in the fed funds market but ultimately by acquiring other assets until deposit expansion eliminated any surplus reserves. Monetary policy in turn meant adjusting the quantity of reserves to keep the Fed’s policy rate on target.
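As a stylized illustration of that hot-potato process, here is a minimal Python sketch under a textbook reserve requirement. The 10 percent ratio and the dollar figure are illustrative assumptions, not actual regulatory or balance-sheet numbers:

    # Stylized "hot potato": with no interest on reserves, banks shed excess
    # reserves until deposit growth turns them all into required reserves.
    reserve_ratio = 0.10      # illustrative textbook requirement (assumption)
    total_reserves = 50e9     # $50 billion of system reserves (placeholder)

    # Deposits expand until required reserves absorb the whole reserve stock:
    equilibrium_deposits = total_reserves / reserve_ratio
    print(equilibrium_deposits)  # 500e9 -- $500 billion of deposits

    # Under a floor system, by contrast, IOER at or above market rates lets
    # banks hoard the $50 billion as excess reserves, so deposits need not
    # expand at all when the Fed adds reserves.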

A look at the next chart suggests why the same money market developments that might render TNB’s efforts nugatory also threaten to cause the Fed’s floor system to unravel. The chart’s red line shows the spread between the yield on 1-month Treasuries and the IOER rate (left scale), while its blue line shows the banking system’s ratio of excess reserves to total deposits (right scale).

Until October 2008, with IOER = 0, a very high Treasury-IOER spread kept excess reserves at a minimum. Afterwards, in contrast, a negative spread encouraged banks to accumulate trillions in excess reserves instead of using those reserves to support a proportional increase in deposits. But lately the Treasury-IOER spread has been back in positive territory. (Indeed, the last observations for the red line should be 5 bps higher than what’s shown, because in June the Fed established a 5 bps difference between its target rate “upper limit,” used in the chart as a proxy for the IOER rate, and the IOER rate itself.) As the chart also shows, banks have responded accordingly, by reducing their excess reserve holdings relative to their total deposits.
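For readers reconstructing the chart’s red line, here is a minimal Python sketch of that 5 bps correction, assuming the offset took effect with the Fed’s mid-June 2018 rate decision (the exact effective date is an assumption here):

    from datetime import date

    # The chart proxies IOER with the target-range upper limit; after
    # mid-June 2018 IOER sits 5 bps below that limit (date assumed).
    OFFSET_START = date(2018, 6, 14)

    def ioer_from_upper_limit(obs_date, upper_limit):
        """Recover the IOER rate from the upper-limit proxy used in the chart."""
        return upper_limit - 0.05 if obs_date >= OFFSET_START else upper_limit

    def tbill_ioer_spread(obs_date, tbill_1m, upper_limit):
        return tbill_1m - ioer_from_upper_limit(obs_date, upper_limit)

    # Example: a 1.98% one-month bill against a 2.00% upper limit in August 2018
    # gives a corrected spread of +0.03, not the -0.02 the raw proxy implies.
    print(round(tbill_ioer_spread(date(2018, 8, 1), 1.98, 2.00), 2))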

In conclusion, while the Fed may succeed in fending off TNB’s attempt to give non-bank financial institutions access to IOER, it may find preserving its IOER-based operating system much harder. What’s more, if you ask me, the Fed’s attempts to preserve that operating system, by abandoning its plan to shrink its balance sheet or by resorting to more aggressive IOER rate increases, could ultimately do all of us a lot more harm than its treatment of TNB.

[Cross-posted from Alt-M.org]

The total number of American workers who usually commute by transit declined from 7.65 million in 2016 to 7.64 million in 2017. This continues a downward trend from 2015, when there were 7.76 million transit commuters. Meanwhile, the number of people who drove alone to work grew by nearly 2 million, from 114.77 million in 2016 to 116.74 million in 2017.

These figures are from table B08301 of the 2017 American Community Survey, which the Census Bureau posted online on September 13. According to the table, the total number of workers in America grew from 150.4 million in 2016 to 152.8 million in 2017. Virtually all new workers drove to work, used a taxi or ride-hailing service, or worked at home, as most other forms of commuting, including walking and bicycling as well as transit, declined.

Transit commuting has fallen so low that more people now work at home than take transit to work. Workers at home totaled nearly 8.0 million in 2017, up from just under 7.6 million in 2016.

Two other tables, B08119 and B08121, reveal the incomes and median incomes of American workers by how they get to work. A decade ago, the average income of transit riders was almost exactly the same as the average for all workers. Today it is 5 percent higher, as the number of low-income transit riders has declined while the number of high-income riders (those earning $60,000 or more) has grown rapidly. Median incomes are usually a little lower than average incomes because very high-income people pull up the average. In 2017, the median income of transit riders exceeded the median income of all workers for the first time.

For those interested in commuting numbers in their states, cities, or regions, I’ve posted a file showing commute data for every state, about 390 counties, 259 major cities, and 220 urbanized areas. The Census Bureau didn’t report data from smaller counties, cities, and urbanized areas because it deemed the results for those areas to be less statistically reliable. 

The file includes the raw numbers plus calculations showing the percentage of commuters (leaving out people who work at home) who drove alone, carpooled, took transit (with rail and bus transit broken out separately), bicycled, and walked to work. A separate column shows the percentage of the total who worked at home. The last column estimates the number of cars used for commuting, counting both solo drivers and carpoolers.
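For anyone who wants to replicate those calculations from the raw counts, here is a minimal Python sketch. Only the drove-alone, transit, and work-at-home figures come from the text above; the other counts and the carpool-occupancy figure are placeholders of mine, not necessarily what the posted file uses:

    # Sketch of the mode-share calculations described above (2017-style inputs).
    commuters = {
        "drove_alone": 116_740_000,  # from the text above
        "carpooled":    13_000_000,  # placeholder
        "transit":       7_640_000,  # from the text above
        "bicycled":        800_000,  # placeholder
        "walked":        4_000_000,  # placeholder
    }
    worked_at_home = 8_000_000       # roughly the 2017 figure cited above

    # Mode shares leave out people who work at home, as in the posted file.
    commuting_total = sum(commuters.values())
    shares = {mode: n / commuting_total for mode, n in commuters.items()}

    # The work-at-home share is taken against all workers instead.
    home_share = worked_at_home / (commuting_total + worked_at_home)

    # Cars used for commuting: solo drivers plus carpools, assuming an average
    # of about 2.2 occupants per carpool (my assumption, not the file's).
    cars_used = commuters["drove_alone"] + commuters["carpooled"] / 2.2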

I’ve also posted similar files for 2016, 2015, 2014, 2010, 2007, and 2006. The formats of these files may differ slightly as I’ve posted them at various times in the past. Soon, I’ll post more files for commuting by income and other pertinent topics.

Dan Cadman of the Center for Immigration Studies (CIS) has written a blog post purporting to identify issues in a short brief that I wrote about U.S. citizens in Texas for whom ICE filed detainers. In it, he makes numerous inaccurate and unsupported assertions. Cadman presents zero evidence to rebut the conclusion of the brief and instead accuses an ICE supervisory officer of perjury because his statements fail to support Cadman’s position.

My brief uses data from Travis County, Texas to identify people who claimed U.S. citizenship and presented Social Security Numbers to local authorities, but ICE submitted a detainer request for them anyway, only to later cancel or not execute it. Cadman responds:

While it’s true that people who later prove to be U.S. citizens sometimes find themselves in removal proceedings (something I’ve previously commented on and explained), most often this occurs because an individual doesn’t even know he is a U.S. citizen…

In the link he offers in support of his “most often” claim, he cites a single case where the person didn’t know he was a U.S. citizen, while we know of many individual cases (dozens are linked in the original post) in which detainers were filed for U.S. citizens who asserted their citizenship at the start of the process. In any case, every person in my brief asserted U.S. citizenship at the outset, from the time of their booking by the Travis County Sheriff’s Office until ICE finally cancelled their detainer. Cadman continues:

[Bier] would have us believe that ICE agents actively “target” American citizens even though it is clear that they have no hand at all into what individuals are arrested by police and booked into Travis County (or any Texas) jail, and merely respond to the information passed to them as a consequence.

I never claimed that ICE agents “actively” seek out people who they know are American citizens. In the executive summary of my brief, I state that these are “mistakes” that ICE only belatedly attempts to correct. In any case, if a law enforcement agency arrests hundreds of innocent people, it is perfectly legitimate to say that hundreds of “innocent people” were targeted by that agency, even if the individual agents didn’t know or intend to target innocent people. Moreover, it is incorrect to claim that ICE agents “merely respond to information passed to them”—the Travis County Sheriff’s Office doesn’t make assessments of removability or citizenship, nor does it issue detainers. ICE makes those determinations.

Cadman attempts to argue that even though ICE canceled the detainers for these people, we cannot suppose that it was because they were U.S. citizens. He attempts to sketch out what he believes is happening:

ICE agents don’t, nor should they, always accept such assertions [of U.S. citizenship] at face value because they know the frequency with which false claims are made. One strategy they exercise is to immediately file the detainer while concurrently obtaining the release date of the individual being held by the police. They then work against the clock to either verify the claim or disprove it… . Keep in mind that when ICE agents withdraw a detainer, it doesn’t mean the claim isn’t false — it just means they couldn’t break it in the time frame they had to investigate.

If this is what ICE agents are doing, it would violate current ICE policies, which require agents to issue detainers based on what they believe to be “probable cause” of removability. A simple assertion of U.S. citizenship would never overcome a determination based on actual probable cause (such as a biometric record of a prior deportation). In the bad days before even agent-determined probable cause was required, an assertion of U.S. citizenship would not have triggered cancelation either. Again, ICE would require the U.S. citizen to substantiate the claim first.

Cadman’s scenario implies that ICE agents are issuing detainers for people claiming U.S. citizenship based on their gut instincts and then hoping to prove that the person is lying before they are released. If this is what is occurring, it would indeed explain why U.S. citizens are regularly targeted by ICE, and it would show that the agency is breaking its own policy. That is a poor defense of ICE’s actions.

In any case, my brief quoted court testimony under oath from ICE Supervisory Detention and Deportation Officer John Drane of Rhode Island stating that, in fact, a detainer canceled for a person claiming U.S. citizenship was almost certainly canceled because the person was a U.S. citizen. Cadman responds:

while even ICE agents in the northeast would not be completely immune to the phenomenon of false claims, the claims would be of a significantly smaller scale and different character from those in Texas. This would certainly have had an impact on how Drane framed his response to the question of withdrawing a detainer, because his experiences would be nothing like those of ICE agents working in south or central Texas.

This is simply incorrect. The rate of U.S. citizenship claims overall was actually higher in Rhode Island around this time (7.2 percent) than in Travis County (5.7 percent), so Drane dealt with the same issue: some people do make false claims, while others, including the litigant in the case, make valid claims of U.S. citizenship when targeted with detainers. Cadman continues:

The time frame of Drane’s deposition (April 2015) is also significant. In November 2014, President Obama and then-Homeland Security Secretary Jeh Johnson announced a host of new “executive actions” that would govern how immigration agencies administered their responsibilities… . . many detainers were withdrawn as not meeting the new criteria of criminality drawn up by Secretary Johnson and his cohorts… .

Cadman presents no data or even anecdotes to support the claim that many detainers were withdrawn due to the Jeh Johnson enforcement criteria. In fact, the Johnson policies changed the criteria for issuing a detainer, so detainers for people who were not subject to enforcement priorities were not issued to begin with, leading to a significant decline in detainers issued. In any case, 90 percent of the U.S. citizens identified in my brief were targeted before Johnson’s new enforcement priorities were in effect or after the Trump administration rescinded them. In addition, the rate of cancelations for people claiming U.S. citizenship actually decreased during those years. Cadman continues:

It’s not a surprise that Drane avoided speaking to these very real, very major reasons that many detainers were withdrawn by ICE. One can surmise that he sidestepped the issue of agents being obliged to cancel detainers under the imposed-from-above priority system for fear of his job.

Here, Cadman actually accuses an ICE supervisory agent of lying under oath to avoid disclosing the reasons for the detainer cancelations. I don’t understand how Cadman can have complete faith in ICE under some circumstances while assuming the worst about them in others without any evidence. More importantly, Cadman’s claims about Drane are simply false. He has zero incentive to lie. The Obama administration was not hiding its looser enforcement policies in 2015—it was bragging about them. More importantly, in the context of this case, Drane is admitting something that would place blame on his office for wrongfully targeting U.S. citizens—something that the Obama administration would certainly not want to disclose. Lastly, why would he risk potential jail time by perjuring himself on this point? It simply makes no sense. Cadman concludes:

Bier has taken what are clearly dubious conclusions about the number of U.S. citizens against whom detainers were filed in the Travis County jail after arrest for criminal offenses, and then through extrapolation and aggregation, applied them to assert that, if this many were caught up in ICE “targeting” of citizens in the county, then as a matter of simple multiplication one can derive how many U.S. citizens must have been “targeted” statewide… . . Each county and each state is sufficiently unique in population and demographics that using any one of them to extrapolate to a whole is different entirely than using legitimate random sampling techniques.

Cadman is correct that a statewide random sample would provide far more useful data. Every county in Texas should release this information if it has it. But the data that we do have allow us to learn something about Travis County, at a minimum. Maybe Travis County is an outlier in either direction; we simply don’t know. But I never claimed that my extrapolation from Travis County to the whole state of Texas is anything but an estimate.

Travis County, Texas is the third largest recipient of detainers in the state of Texas, providing a significant sample of the detainers in the state. Moreover, the dynamics in Travis County are substantially similar to other counties in Texas—all are fairly close to the border and all are subject to Texas law with regard to immigration enforcement. Cadman takes issue with my hedging this extrapolation, but that is simply what prudent analysts do when the evidence is incomplete.
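To make explicit the hedged extrapolation being debated, here is a minimal Python sketch of the arithmetic at issue. Every number below is a placeholder for illustration, not a figure from the brief:

    # Extrapolating one county's detainer experience to a state total.
    travis_detainers = 10_000        # detainers ICE filed in Travis County (placeholder)
    travis_citizen_detainers = 80    # of those, canceled after citizenship claims (placeholder)
    texas_detainers = 150_000        # statewide detainer count (placeholder)

    citizen_rate = travis_citizen_detainers / travis_detainers  # 0.8 percent here

    # The point estimate assumes Travis County is representative of Texas --
    # precisely the assumption Cadman disputes and the brief explicitly hedges.
    statewide_estimate = citizen_rate * texas_detainers
    print(round(statewide_estimate))  # 1200 with these placeholder inputs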

My brief shows that ICE often issues detainer requests for people who claim U.S. citizenship and present Social Security Numbers to local authorities, only to then cancel those requests. The best explanation—based on ICE policies and ICE testimony—is that ICE issued detainers for hundreds of U.S. citizens. It is noteworthy that ICE itself in a statement to the Washington Post did not use any of Cadman’s poor defenses, but only asserted that it works to improve its processes over time. That may be true, but severe deficiencies still remain.

Not long after the limited-government U.S. Constitution was ratified and the new government resumed operation, numerous political leaders began pushing to expand federal power. Leading politicians of the 1790s did not agree with each other about the proper scope of federal authority, either legally or practically.

Treasury Secretary Alexander Hamilton proposed ideas for top-down manipulation of the economy. And fellow Federalist President John Adams signed into law the infamous Alien and Sedition Acts in 1798, which among other things outlawed any “false, scandalous and malicious writing” against the government, the Congress, and the president.

An article in the Washington Post the other day discussed some interesting details regarding the enforcement of the sedition statute:

Adams and his Federalist Party supporters in Congress passed the Alien and Sedition Acts under the guise of national security, supposedly to safeguard the nation at a time of preparing for possible war with France. The “Alien” part of the law allowed the government to deport immigrants and made it harder for naturalized citizens to vote. But the law mainly was designed to mute backers of the opposition Democratic-Republican Party led by Thomas Jefferson, who also happened to be the vice president. Jefferson had finished second to Adams in the 1796 presidential election and again ran against him in 1800.

An early target of the new law was Rep. Matthew Lyon, who had accused Adams of “ridiculous pomp.” In the fall of 1798 the government accused the Vermont congressman of being “a malicious and seditious person, and of a depraved mind and a wicked and diabolical disposition.” He was convicted of sedition, fined $1,000 and sentenced to four months in prison. Lyon campaigned for reelection from jail and won in a landslide. On his release in February 1799, supporters greeted him with a parade and hailed him as “a martyr to the cause of liberty and the rights of man.”

… Another target was James Callender, a pro-Jefferson journalist for the Richmond Examiner and the man who had exposed Federalist Alexander Hamilton’s extramarital affair. In 1800, Callender wrote an election campaign pamphlet that said of Adams: “As President he has never opened his lips, or lifted his pen, without threatening and scolding; the grand object of his administration has been to exasperate the rage of contending parties … and destroy every man who differs from his opinions.” Callender was convicted of sedition, fined $200 and sent to federal prison for nine months. He continued to write from his prison cell, calling Adams “a gross hypocrite and an unprincipled oppressor.”

… The government also came after critics of some members of the Adams administration, such as Treasury Secretary Hamilton. In 1799, Charles Holt, editor of the New London Bee in Connecticut, published an article accusing Hamilton of seeking to expand the U.S. military into a standing army. He also took personal jabs at Hamilton, asking, “Are our young officers and soldiers to learn virtue from General Hamilton? Or like their generals are they to be found in the bed of adultery?” The government promptly charged Holt with being a “wicked, malicious seditious and ill-disposed person — greatly disaffected” to the U.S. government. He was fined $200 and sent to jail for three months.

The speech crackdown extended even to private remarks, as Luther Baldwin, the skipper of a garbage boat in Newark, discovered. In July 1798, while passing through Newark on his way to his summer home in Massachusetts, Adams rode in his coach in a downtown parade complete with a 16-cannon salute. When Baldwin and his buddy Brown Clark heard the cannon shots while drinking heavily at a local tavern, Clark remarked, “There goes the president, and they are firing at his arse.” Baldwin responded that he didn’t care “if they fired thro’ his arse.” The tavern owner reported the conversation, and both drinkers were fined and jailed for sedition.

Thomas Jefferson and James Madison led the opposition to the big government Federalist policies of the 1790s, and “in the end, widespread anger over the Alien and Sedition Acts fueled Jefferson’s victory over Adams in the bitterly contested 1800 presidential election.” Free speech was restored and the incoming president would focus on cutting the excess spending, taxes, and debt built up by the prior Federalist administrations.

Hardly a day goes by without a report in the press about some new addiction. There are warnings about addiction to coffee. Popular psychology publications talk of “extreme sports addiction.” Some news reports even alert us to the perils of chocolate addiction. One gets the impression that life is awash in threats of addiction. People tend to equate the word “addiction” with “abuse.” Ironically, “addiction” is a subject of abuse.

The American Society of Addiction Medicine defines addiction as a “chronic disease of brain reward, motivation, memory and related circuitry…characterized by the inability to consistently abstain, impairment in behavioral control, craving” that continues despite resulting destruction of relationships, economic conditions, and health. A major feature is compulsiveness. Addiction has a biopsychosocial basis with a genetic predisposition and involves neurotransmitters and interactions within reward centers of the brain. This compulsiveness is why alcoholics and other drug addicts will return to their substance of abuse even after they have been “detoxed” and despite the fact that they know it will further damage their lives.

Addiction is not the same as dependence. Yet politicians and many in the media use the two words interchangeably. Physical dependence represents an adaptation to the drug such that abrupt cessation or tapering off too rapidly can precipitate a withdrawal syndrome, which in some cases can be life-threatening. Physical dependence is seen with many categories of drugs besides drugs commonly abused. It is seen for example with many antidepressants, such as fluoxetine (Prozac) and sertraline (Zoloft), and with beta blockers like atenolol and propranolol, used to treat a variety of conditions including hypertension and migraines. Once a patient is properly tapered off the drug on which they have become physically dependent, they do not feel a craving or compulsion to return to the drug.

Some also confuse tolerance with addiction. Similar to dependency, tolerance is another example of physical adaptation. Tolerance refers to the decrease in one or more effects a drug has on a person after repeated exposure, requiring increases in the dose.

Science journalist Maia Szalavitz, writing in the Columbia Journalism Review, ably details how journalists perpetuate this lack of understanding and fuel misguided opioid policies.

Many in the media share responsibility for the mistaken belief that prescription opioids rapidly and readily addict patients—despite the fact that Drs. Nora Volkow and Thomas McLellan of the National Institute on Drug Abuse point out addiction is very uncommon, “even among those with preexisting vulnerabilities.” Cochrane systematic reviews in 2010 and 2012 of chronic pain patients found addiction rates in the 1 percent range, and a report on over 568,000 patients in the Aetna database who were prescribed opioids for acute postoperative pain between 2008 and 2016 found a total “misuse” rate of 0.6 percent.

Equating dependency with addiction has led lawmakers to impose opioid prescription limits that are not evidence-based, and it is making patients suffer needlessly after being tapered too abruptly or cut off entirely from their pain medicine. Many, in desperation, seek relief in the black market, where they are exposed to heroin and fentanyl. Some resort to suicide. There have been enough reports of suicides that the US Senate is poised to vote on opioid legislation that “would require HHS and the Department of Justice to conduct a study on the effect that federal and state opioid prescribing limits have had on patients — and specifically whether such limits are associated with higher suicide rate.” And complaints about the lack of evidence behind present prescribing policy led Food and Drug Administration Commissioner Scott Gottlieb to announce plans last month for the FDA to develop its own set of evidence-based guidelines.

Now there is talk in media and political circles about the threats of “social media addiction.” But there is not enough evidence to conclude that spending extreme amounts of time on the internet and with social media is an addictive disorder. One of the leading researchers on the subject stresses that most reports on the phenomenon are anecdotal and peer-reviewed scientific research is scarce. A recent Pew study found the majority of social media users would not find it difficult to give it up. The American Psychiatric Association does not consider social media addiction or “internet addiction” a disorder and does not include it in its Diagnostic and Statistical Manual of Mental Disorders (DSM), considering it an area that requires further research.

This doesn’t stop pundits from warning us about the dangers of social media addiction. Some warnings might be politically motivated. Recent reports suggest Congress might soon get into the act. If that happens, it could threaten freedom of speech and freedom of the press. It could also generate billions of dollars in government spending on social media addiction treatment.

Before people see more of their rights infringed or are otherwise harmed by unintended consequences, it would do us all a great deal of good to be more accurate and precise in our terminology. It would also help if lawmakers learned more about the matters on which they create policy.

As Hurricane Florence spins toward the Carolina coast, the nation’s attention will be on the disaster readiness and response of governments and the affected communities. Have lessons been learned since the deeply flawed government response to Hurricane Katrina back in 2005?

I examined FEMA and the Katrina response in this study, discussing both the government failures and the impressive private-sector relief efforts.

Last year, Hurricane Maria devastated Puerto Rico, again exposing all sorts of government failures. Well-known chef José Andrés has a new book on the Maria response. He had an eye-opening experience on the island volunteering on relief efforts with his World Central Kitchen.

The Washington Post’s review of the book says that Andrés saw the flaws of top-down bureaucratic relief efforts and embraces more of a spontaneous order view of effective disaster relief:

With We Fed an Island, chef-and-restaurateur-turned-relief worker José Andrés doesn’t just tell the story about how he and a fleet of volunteers cooked millions of meals for the Americans left adrift on Puerto Rico after Hurricane Maria. He exposes what he views as an outdated top-down, para-military-type model of disaster relief that proved woefully ineffective on an island knocked flat by the Category 4 hurricane.

… ‘My original plan was to cook maybe ten thousand meals a day for five days, and then return home,’ Andrés writes. Instead, Andrés and the thousands of volunteers who composed Chefs for Puerto Rico remained for months, preparing and delivering more than 3 million meals to every part of the island. They didn’t wait for permission from FEMA.

… These grass-roots culinary efforts didn’t always sit well with administration officials or with executives at hidebound charities, in part because Andrés was no diplomat. He trolled Trump on Twitter over the situation on Puerto Rico. He badgered FEMA for large contracts to ramp up production to feed even more hungry citizens. He infamously told Time magazine that the “American government has failed” in Puerto Rico. A chef used to fast-moving kitchens, Andrés had zero patience for slow-footed bureaucracy, especially in a time of crisis.

… After dealing with so much red tape and mismanagement (remember the disastrous $156 million contract that FEMA awarded to a small, inexperienced company to prepare 30 million hot meals?), Andrés wants the government and nonprofit groups to rethink the way they handle food after a large-scale natural disaster. He wants them to drop the authoritarian, top-down style and embrace the chaos inherent in crisis. Work with available local resources, whether residents or idle restaurants and schools. Give people the authority and the means to help themselves. Stimulate the local economy.

‘What we did was embrace complexity every single second,’ Andrés writes. ‘Not planning, not meeting, just improvising. The old school wants you to plan, but we needed to feed the people.’

Andrés and World Central Kitchen have embraced complexity. 

Hail to the chef!

As of this writing, Tuesday, September 11, Hurricane Florence is threatening millions of folks from South Carolina to Delaware. It’s currently forecast to be near the threshold of the dreaded Category 5 by tomorrow afternoon. Current thinking is that its environment will become a bit less conducive as it nears the North Carolina coast on Thursday afternoon, but that it will still hit as a major hurricane (Category 3+). It’s also forecast to slow down or stall shortly thereafter, which means it will dump disastrous amounts of water on southeastern North Carolina, with isolated totals of over two feet quite possible.

At the same time that it makes landfall, the celebrity-studded “Global Climate Action Summit” will be underway in San Francisco, and no doubt Florence will be its poster girl.

There’s likely to be the usual hype about tropical cyclones (the generic term for hurricanes) getting worse because of global warming, even though their integrated energy and frequency, as published by Cato Adjunct Scholar Ryan Maue, show no warming-related trend whatsoever.

Maue’s Accumulated Cyclone Energy index shows no increase in global power or strength.

Here is the prevailing consensus opinion of the National Oceanic and Atmospheric Administration’s Geophysical Fluid Dynamics Laboratory (NOAA GFDL): “In the Atlantic, it is premature to conclude that human activities–and particularly greenhouse gas emissions that cause global warming–have already had a detectable impact on hurricane activity.”

We’ll also hear that associated rainfall is increasing along with oceanic heat content. Everything else being equal (dangerous words in science), that’s true. And if Florence does stall out, hey, we’ve got a climate change explanation for that, too! The jet stream is “weirding” because of atmospheric blocking induced by Arctic sea-ice depletion. This is a triple bank shot on the climate science billiards table. If that seems a stretch, it is, but climate models can be and are “parameterized” to give what the French climatologist Pierre Hourdin recently called “an anticipated acceptable range” of results.

The fact is that hurricanes are temperamental beasts. On September 11, 1984, Hurricane Diana, also a Category 4, took aim at pretty much the same spot where Florence is forecast to make landfall—Wilmington, North Carolina. And then—34 years ago—it stalled and turned a tight loop for a day, upwelling the cold water that lies beneath the surface, and it rapidly withered into a Category 1 before finally moving inland. (Some recent model runs for Florence have it looping over the exact same place.) The point is that what is forecast to happen on Thursday night—a major Category 3+ landfall—darned near happened over three decades earlier… and exactly 30 years before that, in 1954, Hurricane Hazel made a destructive Category 4 landfall just south of the NC/SC border. The shape of the Carolina coastlines and barrier islands makes the two states very susceptible to destructive hits. Fortunately, this proclivity toward taking direct hits from hurricanes has also taught the locals to adapt—many homes are on stilts, and there is a resilience built into their infrastructure that is lacking further north.

There’s long been a running research thread on how hurricanes may change in a warmer world. One thing that seems plausible is that the zone of maximum potential power may shift a bit further north. What would that look like? Dozens of computers have cranked away at thousands of years of simulations, and we have a mixture of results, but the consensus is that there will be slightly fewer but more intense hurricanes by the end of the 21st century.

We actually have an example of how far north a Category 4 can make landfall: on August 27, 1667, in the Tidewater region of southeast Virginia. The storm prompted the publication of a pamphlet in London called “Strange News from Virginia, being a true relation of the great tempest in Virginia.” The late, great weather historian David Ludlum published an excerpt:

Having this opportunity, I cannot but acquaint you with the Relation of a very strange Tempest which hath been in these parts (with us called a Hurricane) which began on Aug. 27 and continued with such Violence that it overturned many houses, burying in the Ruines much Goods and many people, beating to the ground such as were in any ways employed in the fields, blowing many Cattle that were near the Sea or Rivers, into them, (!!-eds), whereby unknown numbers have perished, to the great affliction of all people, few escaped who have not suffered in their persons or estates, much Corn was blown away, and great quantities of Tobacco have been lost, to the great damage of many, and the utter undoing of others. Neither did it end here, but the Trees were torn up by their roots, and in many places the whole Woods blown down, so that they cannot go from plantation to plantation. The Sea (by the violence of the winds) swelled twelve Foot above its usual height, drowning the whole country before it, with many of the inhabitants, their Cattle and Goods, the rest being forced to save themselves in the Mountains nearest adjoining, where they were forced to remain many days in great want.

Ludlum also quotes from a letter from Thomas Ludwell to Virginia Governor Lord Berkeley about the great tempest:

This poore Country…is now reduced to a very miserable condition by a continual course of misfortune…on the 27th of August followed the most dreadful Harry Cane that ever the colony groaned under. It lasted 24 hours, began at North East and went around to Northerly till it came to South East when it ceased. It was accompanied by a most violent raine, but no thunder. The night of it was the most dismal time I ever knew or heard of, for the wind and rain raised so confused a noise, mixed with the continual cracks of falling houses…the waves were impetuously beaten against the shores and by that violence forced and as it were crowded the creeks, rivers and bays to that prodigious height that it hazarded the drownding of many people who lived not in sight of the rivers, yet were then forced to climb to the top of their houses to keep themselves above water…But then the morning came and the sun risen it would have comforted us after such a night, hat it not lighted to us the ruins of our plantations, of which I think not one escaped. The nearest computation is at least 10,000 house blown down.

It is too bad that there were no anemometers at the time, but the damage and storm surge are certainly consistent with a Category 4 storm. And this was in 1667, at the nadir of the Little Ice Age.

A Maryland story in the Washington Post last week presents a classic case of local political corruption. The broader message of the story is that when we give government the power to regulate an activity—in this case liquor sales—we open the door to corruption.

Even if you believe that regulatory regimes are created with good intentions, the politicians and officials in charge inevitably get swarmed by lobbyists and some of them will focus on lining their own pockets. With respect to the public interest, the resulting policy outcomes are a crapshoot.

Former Maryland state delegate Michael L. Vaughn (D) was sentenced to 48 months in federal prison Tuesday after he was convicted of accepting cash in exchange for votes that would expand liquor sales in Prince George’s County.

A jury found Vaughn guilty of conspiracy and bribery in March. During his six-day trial in U.S. District Court in Maryland, Vaughn and his attorneys argued that the bundles of cash he received from liquor store owners and a lobbyist in 2015 and 2016 were campaign contributions that he failed to report because he had personal financial problems.

But prosecutors for the government argued that the more than $15,000 that changed hands in a coffee shop bathroom, a dark restaurant and other locations throughout the county were bribes.

… Sentencing Judge Paula Xinis called Vaughn’s misconduct ‘exceptionally serious’ and ‘grievous bribery.’

Vaughn was one of seven arrested last year in a federal corruption case that investigators called “Operation Dry Saloon.” Liquor store owners, lobbyists, former liquor board commissioners and former Prince George’s County Council member William A. Campos (D) conspired to pass laws that would allow for Sunday liquor sales in the county in exchange for cash.

… Prosecutors, however, argued that Vaughn and former chief liquor inspector David Son hashed out a scheme in which local liquor store owners Young Paig and Shin Ja Lee would pay Vaughn $20,000 over two years to clear the way for Sunday sales.

… ‘He fully embraced the pay-to-play culture that has been a repeat phrase in this court for a decade,’ Windom said, alluding to the 87-month sentence former Prince George’s County executive Jack Johnson received for bribery and corruption.

Local governments have large and excessive power over private land development, and that power has long been a source of corruption. Here’s what the Washington Post said about Jack Johnson’s crimes in a 2011 story:

Jack Johnson, a Democrat who was county executive from 2002 until December 2010, came to the attention of federal authorities in 2006, when the FBI began investigating allegations of corruption, campaign finance violations and tax fraud. Authorities found massive corruption centered around a “pay-to-play culture” that began months after Johnson took office.

‘Under Jack Johnson’s leadership, government in Prince George’s County literally was for sale,’ the [sentencing] memo said.

The pay-to-play scheme involved several developers, including Laurel physician and developer Mirza H. Baig … In his plea agreement, Jack Johnson acknowledged accepting up to $400,000 from the scheme.

Johnson, 62, was charged last November with evidence tampering and destruction of evidence after federal agents arrested him and his wife, 59, at their Mitchellville home. They were overheard on a wiretap scheming to stash $79,600 in cash in Leslie Johnson’s underwear and flush a $100,000 check that Jack Johnson received as a bribe from a developer.

… On the day of their arrests, Johnson was at Baig’s office picking up a cash bribe and talking about how he would continue the corruption ‘through his wife’s new position on the county council,’ the memorandum said.

‘He proudly bragged about how he was going to orchestrate approval of various funding and approvals by the County Council for Baig’s projects,’ according to the memo.

Federal officials valued the benefits that Baig received in exchange for illegal payments to Johnson at more than $10 million on two development projects.

With public healthcare programs accounting for over a trillion dollars of federal spending, efforts to identify and remedy sources of waste are increasing. A new working paper finds: 

There is substantial waste in U.S. healthcare, but little consensus on how to identify or combat it. We identify one specific source of waste: long-term care hospitals (LTCHs). These post-acute care facilities began as a regulatory carve-out for a few dozen specialty hospitals, but have expanded into an industry with over 400 hospitals and $5.4 billion in annual Medicare spending in 2014. We use the entry of LTCHs into local hospital markets and an event study design to estimate LTCHs’ impact. We find that most LTCH patients would have counterfactually received care at Skilled Nursing Facilities (SNFs) – post-acute care facilities that provide medically similar care to LTCHs but are paid significantly less – and that substitution to LTCHs leaves patients unaffected or worse off on all measurable dimensions. Our results imply that Medicare could save about $4.6 billion per year – with no harm to patients – by not allowing for discharge to LTCHs.

The cost of healthcare in the United States remains a significant problem, but eliminating regulatory carve-outs such as LTCHs is one way to address this growing issue.

Research assistant Erin Partin contributed to this blog post.

 

Dedicated readers may recall my report here several years ago on the suit filed by Colorado’s Fourth Corner Credit Union against the Kansas City Fed — after the Fed refused it a Master Account on the grounds that it planned to cater to Colorado’s marijuana-related businesses. Until then the episode was almost unique, for the Fed had scarcely ever refused a Master Account to any properly licensed depository institution. Eventually the Fed and Fourth Corner reached a compromise of sorts, with the Fed agreeing to grant the credit union an account so long as it promised not to do business with the very firms it was originally intended to serve!

Well, as The Wall Street Journal’s Michael Derby reported last week, the Fed once again finds itself being sued for failing to grant a Master Account to a duly chartered depository institution. Only the circumstances couldn’t be more different. The plaintiff this time, TNB USA Inc, is a Connecticut-chartered bank; and its intended clients, far from being small businesses that cater to herbalistas, include some of Wall Street’s most venerable establishments. Also, although TNB is suing the New York Fed for not granting it a Master Account, opposition to its request comes mainly, not from the New York Fed itself, but from the Federal Reserve System’s head honchos in Washington. Finally, those honchos are opposed to TNB’s plan, not because they worry that TNB’s clients might be breaking Federal laws, but because of unspecified “policy concerns.”

Just what are those concerns? The rest of this post explains. But I’ll drop a hint or two by observing that the whole affair (1) has nothing to do with either promoting or opposing safe banking and (2) has everything to do with (you guessed it) the Fed’s post-2008 “floor” system of monetary control and the interest it pays on bank reserves to support that system.

What’s In a Name?

To understand the Fed’s concerns, one has first to consider TNB’s business plan. Doing that in turn means demolishing a myth that has already taken root concerning that enterprise — one based entirely on its name.

You see, “TNB” stands for “The Narrow Bank.” And some commentators, including John Cochrane, initially took this to mean that TNB was supposed to be a narrow bank in the conventional sense of the term, meaning one that would cater to ordinary but risk-averse depositors — like your grandma — by investing their money entirely in perfectly safe assets, such as cash reserves or Treasury securities. For example, the Niskanen Center’s Daniel Takash says that, if TNB wins its suit,

it would offer many businesses (and potentially consumers) the option [to] save their money in a safer financial institution and increase interest-rate competition in the banking industry.

Fans of narrow banking see it as a superior alternative to the present practice of insuring bank deposits while allowing banks to use such deposits to fund risky investments.

The assumption that TNB has no other aim than that of being a safer alternative to already established banks naturally makes the Fed’s opposition to it seem irrational: “Fed Rejects Bank for Being Too Safe,” is the attention-getting (but equally question-begging) headline assigned to Matt Levine’s Bloomberg article about the lawsuit. It seems irrational, that is, unless one assumes that Fed officials place other interests above that of financial-system safety. “That the Fed, which is a banker’s bank, protects the profits of the big banks’ system against competition, would be the natural public-choice speculation,” Cochrane observes. Alternatively, he wonders whether his vision of a narrow banking system might not be

as attractive to the Fed as it should be. If deposits are handled by narrow banks, which don’t need asset risk regulation, and risky investment is handled by equity-financed banks, which don’t need asset risk regulation, a lot of regulators and “macro-prudential” policy makers, who want to use regulatory tools to control the economy, are going to be out of work.

Get Lost, Grandma!

No one who knows me will imagine that I’d go out of my way to defend the Fed against the charge that it doesn’t always have the general public’s best interests in mind. Yet I’m compelled to say that explanations like Cochrane’s for the Fed’s treatment of TNB, let alone ones that suppose that the Fed has it in for safety-minded bankers, miss their mark. Such explanations badly misconstrue TNB’s business plan, especially by failing to grasp the significance of the declaration, included in its complaint against the New York Fed, that its “sole business will be to accept deposits only from the most financially secure institutions” (my emphasis).

You see, despite what Cochrane and Levine and some others have suggested, TNB was never meant to be a bank for me, thee, or the fellow behind the tree. Nor would it cater to any of our grandmothers. And why would it bother to? After all, unless grandma keeps over $250,000 in her checking account, her ordinary bank deposit is already safer than a mouse in a malt-heap. There’s no need, therefore, for any Fed conspiracy to keep a safe bank aimed at ordinary depositors from getting off the ground.

Instead TNB is exclusively meant to serve non-bank financial institutions, and money market mutual funds (MMMF) especially. Its purpose is to allow such institutions, which are not able to directly take advantage of the Fed’s policy of paying interest on excess reserves (IOER), to do so indirectly. In other words, TNB is meant to serve as a “back door” by which non-banks may gain access to the Fed’s IOER payments, with their TNB deposits serving as surrogate Fed balances, thereby allowing non-banks to realize higher returns, with less risk, than they might realize by investing directly in Treasury securities. J.P. Koning gets this (and much else) right in his own post about TNB, published while yours truly was readying this one for press:

TNB is designed as a pure warehousing bank. It does not make loans to businesses or write mortgages. All it is designed to do is accept funds from depositors and pass these funds directly through to the Fed by redepositing them in its Fed master account. The Fed pays interest on these funds, which flow through TNB back to the original depositors, less a fee for TNB. Interestingly, TNB hasn’t bothered to get insurance from the Federal Deposit Insurance Corporation (FDIC). The premiums it would have to pay would add extra costs to its lean business model. Any depositor who understands TNB’s model wouldn’t care much anyways if the deposits are uninsured, since a deposit at the Fed is perfectly safe.

Once one realizes what TNB is about, explaining the Fed’s reluctance to grant it a Master Account becomes as easy as winking. The explanation, in a phrase, is that, were it to gain a charter, TNB could cause the Fed’s present operating system, or a substantial part of it, to unravel. Having gone to great lengths to get that system up and running, the Fed doesn’t want to see that happen. Since the present operating system is chiefly the brainchild of the Federal Reserve Board, it’s no puzzle that the Board is leading the effort to deny TNB its license.

How would TNB’s presence matter? The Fed has been paying interest on banks’ reserve balances, including their excess reserves, since October 2008. Ever since then, IOER rates have exceeded yields on many shorter-term Treasury securities — while being free from the interest-rate risk associated with holdings of longer-term securities. But banks alone (that is, “depository institutions”) are eligible for IOER. Other financial firms, including MMMFs, have had to settle for whatever they could earn on their own security holdings or for the fixed offering rate on the Fed’s Overnight Reverse Repurchase (ON-RRP) facility, which is presently 20 basis points lower than the IOER rate.

Naturally, any self-respecting MMMF would relish the opportunity to tap into the Fed’s IOER program. But how can any of them do so? Not being depository institutions, they can’t earn it directly. Nor will placing funds in an established bank work, since such a bank will only “pass through” a modest share of its IOER earnings, keeping some — and probably well over 20 basis points — to cover its expenses and profits. But a bank specifically designed to cater to MMMFs’ needs — now that’s a horse of a different color.
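To see the arithmetic behind a money fund’s incentive, here is a minimal sketch in Python. Every number is a hypothetical stand-in except the 20-basis-point gap between the IOER and ON-RRP rates, which comes from the discussion above; the bank spread and the TNB-style fee are assumptions for illustration only.

    # All rates are hypothetical except the 20 bp IOER/ON-RRP gap noted above.
    ioer = 0.0195                # assumed IOER rate
    on_rrp = ioer - 0.0020       # ON-RRP offering rate, 20 bp below IOER
    bank_spread = 0.0050         # assumed share an ordinary bank keeps
    tnb_fee = 0.0005             # assumed fee for a TNB-style pass-through bank

    principal = 100_000_000      # a hypothetical $100 million MMMF balance

    options = {
        "ON-RRP facility":       principal * on_rrp,
        "ordinary bank deposit": principal * (ioer - bank_spread),
        "TNB-style deposit":     principal * (ioer - tnb_fee),
    }
    for name, annual_return in options.items():
        print(f"{name}: ${annual_return:,.0f} per year")

On these assumed numbers, the TNB route beats the ON-RRP facility by 15 basis points a year, with no added risk — which is the whole appeal.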

What would happen, then, if TNB, and perhaps some other firms like it, had their way? That would be the end, first of all, of the Fed’s ON-RRP facility and, therefore, of the lower limit of the Fed’s interest rate target range that that facility is designed to maintain.

Second, the Fed would face a massive increase in the real demand for excess reserve balances that would complicate both its monetary control efforts and its plan to shrink its balance sheet.

TANSTAAFL

OK, so the Fed may not like what TNB is up to. But why should the rest of us mind it? So what if the Fed’s leaky “floor-type” operating system lacks a “subfloor” to limit the extent to which the effective fed funds rate can wander below the IOER rate? Why not have the Fed pay IOER to the money funds, and to the GSEs while it’s at it, and have a leak-free floor instead? Besides, many of us have money in money funds, so that we stand to earn a little more from those funds once they can help themselves to the Fed’s interest payments. What’s not to like about that?

Plenty, actually. Consider, first of all, what the change means. The Fed would find itself playing surrogate to a large chunk of the money market fund industry: instead of investing their clients’ funds in some portfolio of Treasury securities, money market funds would leave the investing to the Fed, for a return — the IOER rate — which, instead of depending directly upon the yield on the Fed’s own asset portfolio, is chosen by Fed bureaucrats.

Now ask yourself: Just how is it that the Fed’s IOER payments could allow MMMFs to earn more than they might by investing money directly into securities themselves? Because the Fed has less overhead? Don’t make me laugh. Because Fed bureaucrats are more astute investors? I told you not to make me laugh! No, sir: it’s because the Fed can fob off risk — like the duration risk it assumed by investing in so many longer-term securities — on third parties, meaning taxpayers, who bear it in the form of reduced Fed remittances to the Treasury. That means in turn that any gain the MMMFs would realize by having a bank that’s basically nothing but a shell operation designed to let them bank with the Fed would really amount to an implicit taxpayer subsidy. There Ain’t No Such Thing As A Free Lunch.

As it stands, of course, ordinary banks are already taking advantage of that same subsidy. But two wrongs don’t make a right. Or so my grandmother told me.

[Cross-posted from Alt-M.org]

 The Reason Foundation’s Bob Poole has published a new book, Rethinking America’s Highways: A 21st Century Vision for Better Infrastructure.

The book examines the structure of U.S. highway ownership and financing and describes why major reforms are needed. Bob has a deep understanding of both the economics and engineering of highways.

Bob puts U.S. highways in international context. He describes, for example, how Europe has more experience with private highways than we do. The photo below shows the Millau Viaduct in southern France, which Wikipedia says is “ranked as one of the great engineering achievements of all time.” The structure includes the tallest bridge tower in the world, and it was built entirely with private money. Isn’t that beautiful? I mean both the bridge and the fact that it is private enterprise.

Bob’s book concerns the institutional structure for highways, which sets it apart from the often superficial highway discussions in D.C. Those discussions usually revolve around the total amount of money the government spends. But the more important issue is ensuring that we spend on projects whose returns outweigh their costs.

D.C. policymakers often focus on the jobs created by highway construction. But labor is a cost of projects, not a benefit. Instead, policymakers should focus on generating long-term net value.

Finally, spending advocates often decry potholes and deficient bridges, but the optimal amount of wear-and-tear on infrastructure is not zero, else we would spend an infinite amount.

So the challenge is to spend the right amount, and to focus it on the most needed repairs and expansions. To do that, we need to get the institutional structure right, and that is what Bob’s book is about.

Every policy wonk and politician interested in infrastructure should read Bob’s book.

 

The Independent said this of the bridge: “The viaduct, costing €400m (£278m), has been built in record time (just over three years) for a project of this size. The French construction company, Eiffage, the direct descendant of the company started by Gustav Eiffel, the builder of the celebrated tower beside the Seine, has raised the money entirely from private financing. In return, the company has been given a 75-year concession to run the viaduct as a toll-bridge.”

Shortly after Iowa prosecutors charged illegal immigrant Christian Rivera with the murder of Mollie Tibbetts in August, his Iowa employer erroneously stated that E-Verify had approved him for legal work. That turned out to be false: his employer, Yarrabee Farms, had actually run his name and Social Security number (SSN) through a different system, the Social Security Number Verification Service (SSNVS), which merely verified that the name and number matched. That mix-up has inspired many to argue that an E-Verify mandate for all new hires would have stopped Rivera from working and, thus, prevented the murder of Mollie Tibbetts. That’s almost certainly not true. New details reveal that E-Verify would likely not have prevented Rivera from working.

E-Verify is an electronic employment-eligibility verification system run by the federal government at taxpayer expense. Created as a pilot program in 1996, E-Verify is intended to prevent the hiring of illegal immigrants by checking the identity information they submit for employment against federal government databases at the Social Security Administration and the Department of Homeland Security. The theory behind E-Verify is that illegal immigrants won’t have the identity documents to pass E-Verify (hold your laughter), so they won’t be able to work, thus sending them all home and preventing more from coming. That naïve theory fails when confronted with the reality of the Rivera case.

Rivera submitted the name John Budd, an out-of-state driver’s license, and an SSN that matched that name to his employer, Yarrabee Farms, when he was hired in 2014. Yarrabee Farms ran the SSN and the name John Budd through the Social Security Number Verification Service (SSNVS) to confirm that they matched for tax purposes (this is the system Yarrabee Farms confused with E-Verify). SSNVS matched the name with the SSN and approved Rivera-disguised-as-Budd for work.

E-Verify would also have matched the name with the SSN and approved Rivera for work. The systematic design flaw in E-Verify is that it only verifies the documents that a worker hands his employer, not the worker himself. Thus, if an illegal immigrant hands the identity documents of an American citizen to an E-Verify-using employer, the system verifies the documents and the worker holding them gets the job – just as happened here when Rivera handed Yarrabee Farms the identity of John Budd. That’s why 54 percent of illegal immigrants run through E-Verify are approved for legal work. E-Verify is worse than a coin toss at identifying known illegal immigrants.
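The flaw is easy to model in miniature. Below is a toy sketch in Python — purely illustrative, not E-Verify’s actual interface, records, or API — of a document-matching check that approves whoever presents a valid name-and-SSN pair:

    # Toy model of a name/SSN check (illustrative only; not E-Verify's real
    # systems). It validates the documents, not the person holding them.
    RECORDS = {"123-45-6789": "John Budd"}  # hypothetical record on file

    def verify(name: str, ssn: str) -> bool:
        """Approve if the submitted name matches the SSN on file."""
        return RECORDS.get(ssn) == name

    print(verify("John Budd", "123-45-6789"))  # True for the real Budd...
    print(verify("John Budd", "123-45-6789"))  # ...and True for an impostor
                                               # presenting the same documents
    print(verify("Jane Doe", "123-45-6789"))   # False only when the documents
                                               # themselves don't match

Nothing in the check ties the documents to the person in the room, which is exactly how a borrowed identity sails through.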

Rivera’s identity would even have gotten around the DRIVE program in Iowa, because he handed his employer an out-of-state driver’s license. DRIVE is intended to link additional identity information from the Iowa DMV to job applicants as an extra layer of security. If any of that information doesn’t match what the applicant gives his employer, the employer is supposed to realize the applicant is an illegal worker. The flaw in DRIVE, however, is that it only works with the state’s own DMV records and adds no extra security for out-of-state driver’s licenses. Thus, Rivera’s out-of-state identity would not have been caught by DRIVE.

Rivera is a low-skilled and poor illegal immigrant from Mexico whose English language skills are so bad that he needs an interpreter in court.  Yet he would easily have been able to fool E-Verify, a sophisticated government immigration enforcement program praised by members of Congress, the President, and the head of at least one DC think-tank, by using somebody else’s name and SSN with a driver’s license from another state. 

A law passed in 1986 requires workers in the United States to present government identification to work legally – a requirement that has resulted in an explosion of identity theft. Rivera likely stole Budd’s identity to get a job, an unintended consequence of that 1986 law. A national E-Verify mandate would vastly expand identity theft.

As a further wrinkle, if Yarrabee Farms found any of Rivera’s identity documents or information suspicious and confronted Rivera with their suspicions concerning Rivera’s identity, his name, race, or age, then Yarrabee Farms would likely have run afoul of other labor laws and exposed itself to a serious lawsuit.  The federal government expects employers to enforce immigration laws but not to the point that they can profile applicants.  The safe choice is not to profile anyone and hire those who present documents so long as they are not obviously fake.

The last wrinkle is that many businesses don’t comply with E-Verify in states where it is mandated. In the second quarter of 2017, only 59 percent of new hires in Arizona were run through E-Verify, even though the law mandates that 100 percent be. Arizona has the harshest state-level immigration enforcement laws in the country, and even it cannot guarantee compliance with E-Verify. There is even evidence that Arizona’s E-Verify mandate temporarily increased property crime committed by a subpopulation more likely to be illegally present in the United States, before that population learned that E-Verify is easy to fool. South Carolina, the state with the best-reputed enforcement of E-Verify, had only 55 percent compliance in the same quarter. The notion that a lackluster Washington will do better than Arizona or South Carolina is too unserious a claim to merit rebuttal.

Since SSNVS matched the name John Budd with a valid SSN and Rivera used an out-of-state driver’s license, E-Verify would not have caught him. E-Verify is a lemon, not a silver bullet to stop illegal immigration. It wouldn’t have stopped Rivera from working legally in Iowa. E-Verify’s cheerleaders should stop using the tragic murder of Mollie Tibbetts as a sales pitch for their failed government program.

 

Cato released my study today on “Tax Reform and Interstate Migration.”

The 2017 federal tax law increased the tax pain of living in a high-tax state for millions of people. Will the law induce those folks to flee to lower-tax states?

To find clues, the study looks at recent IRS data and reviews academic studies on interstate migration.

For each state, the study calculated the ratio of domestic in-migration to out-migration for 2016. States losing population have ratios of less than 1.0; states gaining population have ratios of more than 1.0. New York’s ratio is 0.65, meaning that for every 100 households that left the state, only 65 moved in. Florida’s ratio is 1.45, meaning that 145 households moved in for every 100 that left.
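The ratio itself is trivial to compute. As a minimal sketch in Python, with made-up flow counts chosen only to reproduce the two ratios above (they are not the IRS figures from the study):

    # Hypothetical household flows (not IRS data); ratio = in / out.
    flows = {
        "New York": {"in": 130_000, "out": 200_000},  # ratio 0.65
        "Florida":  {"in": 290_000, "out": 200_000},  # ratio 1.45
    }
    for state, f in flows.items():
        ratio = f["in"] / f["out"]
        print(f"{state}: {ratio:.2f}")  # <1.0 losing movers, >1.0 gaining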

Figure 1 maps the ratios. People are generally moving out of the Northeast and Midwest to the South and West, but they are also leaving California, on net.

People move between states for many reasons, including climate, housing costs, and job opportunities. But when you look at the detailed patterns of movement, it is clear that taxes also play a role.

I divided the country into the 25 highest-tax and 25 lowest-tax states by a measure of household taxes. In 2016, almost 600,000 people moved, on net, from the former to the latter.

People are moving into low-tax New Hampshire and out of Massachusetts. Into low-tax South Dakota and out of its neighbors. Into low-tax Tennessee and out of Kentucky. And into low-tax Florida from New York, Connecticut, New Jersey, and just about every other high-tax state.

On the West Coast, California is a high-tax state, while Oregon and Washington fall just inside the lower-tax half.

Of the 25 highest-tax states, 24 had net out-migration in 2016.

Of the 25 lowest-tax states, 17 had net in-migration.  

 

https://object.cato.org/sites/cato.org/files/pubs/pdf/tbb-84-revised.pdf

A new report from the American Public Transportation Association (APTA) comes out firmly in support of the belief that correlation proves causation. The report observes that traffic fatality rates are lower in urban areas with high rates of transit ridership, and claims that this proves “that modest increases in public transit mode share can provide disproportionally larger traffic safety benefits.”


Here is one of the charts that APTA claims proves that modest increases in transit ridership will reduce traffic fatalities; it appears in APTA’s document. Note that, in urban areas with fewer than 25 annual transit trips per capita – which is the vast majority of them – the relationship between transit and traffic fatalities is virtually nil.

In fact, APTA’s data show no such thing. New York has the nation’s highest per capita transit ridership and a low traffic fatality rate. But there are urban areas with very low ridership rates that had even lower fatality rates in 2012, while there are other urban areas with fairly high ridership rates that also had high fatality rates. APTA claims the correlation between transit and traffic fatalities is a high 0.71 (where 1.0 is a perfect correlation), but that’s only when you include New York and a few other large urban areas: among urban areas of 2 million people or less, APTA admits the correlation is a low 0.28.

The United States has two kinds of urban areas: New York and everything else. Including New York in any analysis of urban areas will always bias any statistical correlations in ways that have no application to other urban areas.
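How much a single outlier can do to a correlation is easy to demonstrate. Here is a minimal sketch in Python using fabricated data (not APTA’s figures): twenty “urban areas” with no underlying relationship between ridership and fatalities, plus one New York-like point with very high ridership and a low fatality rate.

    import numpy as np

    # Fabricated data: 20 areas whose ridership and fatality numbers are
    # drawn independently, so there is no real relationship between them.
    rng = np.random.default_rng(0)
    trips = rng.uniform(5, 40, 20)        # annual transit trips per capita
    fatalities = rng.uniform(4, 10, 20)   # deaths per 100,000 residents

    # One New York-like outlier: very high ridership, low fatality rate.
    trips_ny = np.append(trips, 230.0)
    fatalities_ny = np.append(fatalities, 3.0)

    print(np.corrcoef(trips, fatalities)[0, 1])        # weak, near zero
    print(np.corrcoef(trips_ny, fatalities_ny)[0, 1])  # strong and negative,
                                                       # driven by one point

One extreme point with high leverage manufactures a “strong” correlation out of noise — which is just what including New York does to APTA’s numbers.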

In most urban areas outside of New York, transit ridership is so low that it has no real impact on urban travel. Among major urban areas other than New York, APTA’s data show 2012 ridership ranging from 55 trips per person per year in Los Angeles to 105 in Washington DC to 133 in San Francisco-Oakland. From the 2012 National Transit Database, transit passenger miles per capita ranged from 287 in Los Angeles to 544 in Washington to 817 in San Francisco.

Since these urban areas typically see around 14,000 passenger miles of per capita travel on highways and streets per year, the 530-mile difference in transit usage between Los Angeles and San Francisco (less than 4 percent of total travel) is pretty much irrelevant. Thus, even if there is a weak correlation between transit ridership and traffic fatalities, transit isn’t the cause of that correlation.

San Francisco and Washington actually saw slightly more per capita driving than Los Angeles in 2012, yet APTA says they had significantly lower fatality rates (3.7 fatalities per 100,000 residents in San Francisco and 3.6 in Washington vs. 6.4 in Los Angeles). Clearly, some other factor must be influencing both transit ridership and traffic fatalities.

With transit ridership declining almost everywhere, this is just a desperate attempt by APTA to make transit appear more relevant than it really is. In reality, contrary to APTA’s unsupported conclusion, modest increases in transit ridership will have zero measurable effect on traffic fatality rates.

Content moderation remains in the news following President Trump’s accusation that Google manipulated its searches to harm conservatives. Yesterday Congress held two hearings on content moderation, one mostly about foreign influence and the other mostly about political bias. The Justice Department also announced Attorney General Sessions will meet soon with state attorneys general “to discuss a growing concern that these companies may be hurting competition and intentionally stifling the free exchange of ideas on their platforms.” 

None of this is welcome news. The First Amendment sharply limits government power over speech; it does not limit private governance of speech. The Cato Institute is free to select speakers and topics for our “platform.” The tech companies have that right as well, even if they are politically biased. Government officials should also support a culture of free speech, and bullying private companies contravenes that culture. Needless to say, having the Justice Department investigate those companies looks a lot like a threat to the companies’ freedom.

So much for law and theory. Here I want to offer some Madisonian thoughts on these issues. No one can doubt James Madison’s liberalism, but he wanted limited government in fact as well as in theory. Madison thought hard about practical politics in order to realize liberal ideals. We should too.

Let’s begin with the question of bias. The evidence for bias against conservatives is anecdotal and episodic. The tech companies deny any political bias, and their incentives raise doubts about partisan censorship. Why take the chance you might drive away millions of customers and invite the wrath of Congress and the executive branch on your business? Are the leaders of these companies really such political fanatics that they would run such risks? 

Yet these questions miss an important point. The problem of content moderation bias is not really a question of truth or falsity. It is rather a difficult political problem with roots in both passion and reason. 

Now, as in the past, politicians have powerful reasons to foster fear and anger among voters. People who are afraid and angry are more likely to vote for a party or a person who promises to remedy an injustice or protect the innocent. And fear and anger are always about someone threatening vital values. For a Republican president, a perfect “someone” might be tech companies who seem to be filled with Progressives and in control of the most important public forums in the nation. 

But the content moderation puzzle is not just about the passions. The fears of the right (and, to a lesser degree, the left) are reasonable. To see this, consider the following alternative world. Imagine the staff of the Heritage Foundation has gained control over much of the online news people see and what they might say to others about politics. Imagine also that after a while Progressives start to complain that the Heritage folks are removing their content or manipulating news feeds. The leaders of Heritage deny the charges. Would you believe them?

Logically it is true that this “appearance of bias” is not the same as bias, and bias may be a vice but cannot be a crime for private managers. But politically that may not matter much, and politics may yet determine the fate of free speech in the online era. 

Companies like Google have to somehow foster legitimacy for their moderation of content, moderation that cannot be avoided if they are to maximize shareholder value. They have to convince most people that they have a right to govern their platforms even when their decisions seem wrong. 

Perhaps recognizing that some have reasonable as well as unreasonable doubts about their legitimacy would be a positive step forward. And people who harbor those reasonable doubts should keep in mind the malign incentives of politicians who benefit from fostering fear and anger against big companies. 

If the tech companies fail to gain legitimacy, we all will have a problem worse than bias. Politicians might act, theory and law notwithstanding. The First Amendment might well stop them. But we all would be better off with numerous, legitimate private governors of speech on the internet. Google’s problem is ours.
