Cato Op-Eds

Individual Liberty, Free Markets, and Peace

The New York Times recently reported on a proposed policy change at the Environmental Protection Agency that would require the agency to only rely on scientific research with publicly available data when setting pollution exposure standards. Proponents of the rule argue that the practice would allow other researchers to examine and replicate findings, an essential characteristic of the scientific method. Opponents argue the rule would exclude large amounts of research that rely on confidential health information that cannot be public. The Times quotes opponents who view the policy change as an attempt by the Trump administration to attack regulations they don’t agree with by undermining the scientific results on which they are based.

Increased transparency in data used in empirical research and the facilitation of replication of studies are like mom and apple pie. In an ideal world, such practices are the very essence of the scientific method. In practice, academic journals in many disciplines already require that data used in empirical and experimental work be available for replication. The International Committee of Medical Journal Editors, no strangers to the limitations of data that include personal information, recently affirmed their commitment to responsible data sharing.

While opposition to transparency certainly has bad optics, the opponents of this rule change do have a point. The struggle over transparency isn’t really about transparency. Instead, it is simply the latest chapter in the scrum over two studies whose results are the bases of EPA decisions about appropriate clean air exposure standards.

The Harvard Six Cities Study (SCS) and the American Cancer Society Study (ACS), both published in the 1990s, provide the data on which the EPA bases all of its estimates of mortality risk from air pollution. Both studies looked at the health damage caused by particulate matter (PM), which accounts for 90 percent of the estimated health benefits from emission regulation, and found that higher levels of PM exposure were associated with increased mortality. Both studies rely on individual health data provided under conditions of confidentiality.

So the opponents are probably correct that the transparency rule is a clever attempt to undermine the current basis for EPA regulation of PM. And reduced PM exposure is the exclusive basis on which current conventional pollution regulation is justified because the benefits of additional emissions controls on other conventional pollutants are low. In this environmentalists’ nightmare, a transparency mandate ends additional regulation of air quality by the EPA.

But contrary to the Times assertion, this is not simply a tale of good science versus evil polluters. The SCS has been the subject of intense scientific scrutiny and much criticism because of results that are biologically puzzling. The increased mortality was found in men but not women, in those with less than a high school education but not more, and in those who were moderately active but not sedentary or very active. Among those who migrated away from the six cities, the PM effect disappeared. The cities that lost population in the 1980s were Rust Belt cities with higher PM levels, and those who migrated away were younger and better educated. Thus, had the migrants stayed in place, it is possible that the observed PM effect would have been attenuated.

Furthermore, a survey of 12 experts (including 3 authors of the ACS and SCS) asked whether concentration-response functions between PM and mortality were causal. Four of the 12 experts attached nontrivial probabilities (ranging from 10 percent to 65 percent) to the relationship between PM concentration and mortality not being causal. Three experts said there is a 5 percent probability of noncausality. Five said a 0-2 percent probability of noncausality. Thus 7 out of the 12 experts would not reject the hypothesis that there is no causality between PM levels and mortality. Based on these findings, a 95 percent confidence interval would include zero mortality effect for any reduction in PM concentration below 16 micrograms per cubic meter.
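The expert tally above can be checked with simple arithmetic; only the group counts and probability bounds come from the survey as described, and the 5 percent rejection threshold is the conventional one:

```python
# Tallying the expert-elicitation results described in the text.
experts_nontrivial = 4   # non-causality probabilities ranging from 10% to 65%
experts_5pct = 3         # 5% probability of non-causality
experts_low = 5          # 0-2% probability of non-causality

total = experts_nontrivial + experts_5pct + experts_low
assert total == 12       # all surveyed experts accounted for

# Experts attaching at least a 5% probability to non-causality would not
# reject the no-causality hypothesis at the conventional 95% level.
would_not_reject = experts_nontrivial + experts_5pct
print(would_not_reject)  # 7 of 12
```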

So the scientific fragility of the two studies has been known for some time. Despite that fragility, in December 2012, EPA set a much lower fine-PM standard of 12 micrograms per cubic meter of air to be met by 2020.

If the EPA is forced to use other studies, such as two recent papers by Michael L. Anderson and by Tatyana Deryugina, Garth Heutel, Nolan H. Miller, David Molitor, and Julian Reif, the estimated benefits of reducing PM exposure shrink considerably. Anderson’s estimate is 60 percent smaller. Deryugina et al. conclude that declining PM concentrations from 1999 to 2011 resulted in an additional 150,000 life-years per year, which, if valued at $100,000 per life-year, would equal $15 billion in annual benefits. The EPA estimates that the annual compliance costs of the 1990 Clean Air Act standards were $44 billion in 2010. With costs and benefits this lopsided, eliminating the ACS and SCS studies from consideration and forcing reliance on other studies would certainly result in less stringent regulation, except in areas with bad geography, such as Los Angeles, that prevents pollution dispersion.
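The cost-benefit comparison above is straightforward to reproduce; the $100,000-per-life-year valuation is the illustrative figure cited in the text, not an official EPA parameter:

```python
# Back-of-the-envelope check of the Deryugina et al. benefit estimate
# against the EPA's compliance-cost figure, using the numbers quoted above.
life_years_saved_per_year = 150_000       # from Deryugina et al. (1999-2011 PM decline)
value_per_life_year = 100_000             # illustrative valuation, dollars

annual_benefits = life_years_saved_per_year * value_per_life_year
annual_costs = 44_000_000_000             # EPA estimate of 1990 CAA compliance costs, 2010

print(f"Annual benefits: ${annual_benefits / 1e9:.0f} billion")     # $15 billion
print(f"Annual costs:    ${annual_costs / 1e9:.0f} billion")        # $44 billion
print(f"Benefit/cost ratio: {annual_benefits / annual_costs:.2f}")  # ~0.34
```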

The Times article mentions that Congress has had the opportunity to enact language achieving the same goal of transparency but has not done so. Libertarians have long criticized the growth of the discretionary administrative state. Thus, because Congress has explicitly considered but failed to enact a research transparency requirement, libertarians should be cautious about using administrative discretion to achieve their preferred outcome.

Written with research assistance from David Kemp.

Former U.S. Secretary of Education Arne Duncan has taken to the pages of the Washington Post to let you know that you shouldn’t listen to people who tell you that “education reform” hasn’t worked well. At least, that is, reforms that he likes—he ignores the evidence that private school choice works because, as far as can be gathered from the op-ed, he thinks such choice lacks “accountability.” Apparently, parents able to take their kids, and money to educate them, from schools they don’t like to ones they do is not accountability.

Anyway, I don’t actually want to re-litigate whether reforms since the early 1970s have worked because as time has gone on I’ve increasingly concluded that we do not agree on what “success” means and the measures we have of what we think might be “success” often don’t tell us what we believe they do. These are, by the way, major concerns that I’ll be tackling with Dr. Patrick Wolf in a special Facebook live event on Wednesday. Join us!

Rather than assessing the impacts of specific reforms on what are often fuzzy and moving targets, I want to examine one crucial assertion that Duncan says needs to be “noted”: students today are “relatively poorer than in 1971.”

To back this, Duncan links to a Post article from 2015 that said, “For the first time in at least 50 years, a majority of U.S. public school students come from low-income families.” The article is based on a report from the Southern Education Foundation, which only mentions low-income rates as far back as 1989. More important, it is based on the share of students eligible for free and reduced-price lunches (FRPL), a flawed indicator of child poverty.

As the National Center for Education Statistics (NCES) has pointed out, families earning up to 185 percent of the poverty level are eligible for reduced-price lunches, and now many students get free lunches no matter their income if their schools use the Community Eligibility option. As the NCES summarizes:

[T]he percentage of students receiving free or reduced price lunch includes all students at or below 185 percent of the poverty threshold, plus some additional non-poor children who meet other eligibility criteria, plus other students in schools and districts that have exercised the Community Eligibility option, which results in a percentage that is more than double the official poverty rate [italics added].

What is the poverty rate for families with children? In 1971, according to the U.S. Census, 12 percent of families with children under the age of 18 had incomes at or below the poverty level. By 2016 the rate was 15 percent. Up, but not hugely. And you have to know what “poverty” means: It is about cash income, and excludes major benefits such as food stamps, housing subsidies, and tax credits. Include those, according to the Center for Budget and Policy Priorities, and incomes for the poorest fifth of Americans have risen from about $20,000 in 1973 to over $22,000 in 2011. And with technological change, what that money can buy has afforded a much higher standard of living. Smartphones versus stuck-to-your-wall phones, anyone?

Finally, national test scores don’t gauge the performance of just the poor, but of all Americans. And while the poor are almost certainly better off today than in 1971, the nation as a whole is definitely better off. Indeed, as the figure above shows, inflation-adjusted, per-capita income nearly doubled, from $18,603 in 1971 to $33,205 in 2016. And returning to the telephone theme, 13 percent of Americans didn’t have regular access to a telephone in 1971, versus about 2 percent today without a phone in their “housing unit.”

It is misleading, at best, to say that the “student population is relatively poorer” than in 1971. Burrow into the evidence and it is clear that American students are appreciably better off today than they were in 1971. It’s a basic reality we at least need to acknowledge before crediting broad “reform” for supposedly better outcomes.

In 2014 the government of Ecuador, under then-President Rafael Correa, announced with great fanfare that the Ecuadorian Central Bank (BCE) would soon begin issuing an electronic money (dinero electrónico, or DE). Users would keep account balances on the central bank’s own balance sheet and transfer them using a mobile phone app. Enabling legislation was passed in September, qualified users could open accounts beginning in December, and the accounts became spendable in February 2015. A headline on CNBC’s website declared: “Ecuador becomes the first country to roll out its own digital cash.”

The subsequent fate of the electronic money project has received less attention in the American press. Less than three years after opening, the system is now shutting down. In December 2017 Ecuador’s National Assembly, at the urging of President Lenin Moreno, Correa’s hand-picked successor who took office earlier in the year, passed legislation to decommission the central bank electronic money system. The legislation simultaneously opens the market to mobile payment alternatives from the country’s private commercial banks and savings institutions. As described below, the state system had failed to attract a significant number of users or volume of payments. Account holders now have until the end of March 2018 to withdraw their funds. Complete deactivation is scheduled for mid-April.

The substitution of open competition for state monopoly in mobile money is an important victory for the people of Ecuador. The entire episode is important internationally for the lesson it teaches us about the limits to a central bank’s ability to launch a new form of money when the public prefers established forms. The lesson provides an instructive contrast to “the case for central bank electronic money” recently made by Aleksander Berentsen and Fabian Schär in the pages of the Federal Reserve Bank of St. Louis Review.

The Birth of the Project

There is an important backstory to the episode: Ecuador had suffered a hyperinflation of its domestic currency, the sucre, in 1999, prompting residents to dollarize their own payments and finances. In January 2000 the government, bowing to the popular verdict, announced that it would officially dollarize, fixing a parity of the sucre to the US dollar and retiring all sucres from circulation by September.  (Concerning Ecuador’s dollarized system see my earlier post here.)

The electronic money project was born in 2014 legislation that gave the state a monopoly in electronic money. Only the central bank could issue electronic dollars, and only the state-owned mobile phone company CNT could provide mobile payment services. The law barred the private mobile phone companies and private financial institutions from providing competing systems. The legislature also banned cryptocurrencies.

Because President Correa (in office 2007-2017) had often complained about the discipline that dollarization imposed on his government, observers wondered whether the electronic money system was intended merely as a way for the government to gain some monopoly profits, or was a first step toward de-dollarization. To calm fears that the electronic money would become a forced currency to be followed by de-dollarization, the law declared that use of the electronic money would be voluntary, and that even public employees and state contractors would not be obliged to accept it in payments from the state. (Everyone knew that the Assembly could later revise that provision of the law, of course.)

The government was quite optimistic that the system would rapidly prove popular. The leading newspaper El Comercio reported on Christmas Day of 2014: “Fausto Villavicencio, responsible for the new payment mechanism in Ecuador, said that the authorities expect that some 500,000 people will use e-money in 2015.”[1]  The actual number of accounts opened in 2015 turned out to be fewer than 5,000. The economist Diego Grijalva of the Universidad de San Francisco de Quito, citing the Ecuadorian central bank’s balance sheet, noted in early 2016 that “the Ecuadorian Electronic Money System is already implemented, but it has an uncertain future. In particular, financial institutions are not obliged to use it and the use thereof (less than US $ 800,000 for the end of January 2016) corresponded to less than 0.003% of the monetary liabilities of the Ecuadorian financial system.”

Its Failure to Achieve Popularity

One had to be skeptical of the stated rationale for the central bank electronic money project, namely, to benefit the unbanked. Invited to speak in Ecuador about the dollarization regime in November 2014 (working paper in English, later published in Spanish) at events organized by the USFQ and the think-tank IEEP, I added some critical comments on the new project:

There is no reason to believe that a national government can run a mobile payment system more efficiently than private firms … If the government sincerely wishes to help the poor and unbanked, it should let private providers enter the competition, which will drive down the fees that the poor and unbanked will have to pay.

Private bankers in Ecuador made similar arguments during the life of the project. In its December 2017 legislation, the government conceded the case. According to one news account, “the Government hopes that with the transfer to the private financial system the means of payment can reach more unbanked population.”

I attributed a fiscal rationale to the project:

[W]hy does the government want to issue mobile payment credits as a monopolist? It seems likely that the project is meant as a fiscal measure. One million dollars held by the public in the form of government-issued credits is a million-dollar interest-free loan from the public to the government.

From the fact that the government is now closing its service, I infer that the central bank failed to make a profit even as a statutory monopolist. Float was smaller, and expenses higher, than had been hoped (see below). The new administration had no fiscal reason to keep it open.

Although I did not foresee the system’s failure to achieve sustainability, I did add one final dig at the system’s low trustworthiness:

Personally, I would find dollar-denominated account credits that are claims on [the leading private mobile phone companies] Movistar or Claro more credible than claims on the government of Ecuador. After all, unlike the government, neither company defaulted on its bonds in the past 12 years.

Trust, it turned out, was the crucial issue.

Unlike what is usually envisioned under the rubric “central bank electronic money,” the BCE was not creating nominally default-risk-free accounts denominated in its own domestic fiat money. It was issuing claims to US dollars that it might become unable or unwilling to repay. The government under Correa had in fact defaulted on sovereign dollar-denominated bonds in 2008. Although the sucre hyperinflation of 1999 had brought with it a banking crisis, since dollarization the commercial banks had by all indications become stable and prudently run.

Consequently it was reasonable for an informed citizen in 2014-17 to think that dollars on deposit at a private commercial bank in Ecuador were less risky than dollars on deposit at the central bank. The private banks had better incentives to behave prudently than the BCE had. A private bank could be taken to court if it failed to pay, but not so the government central bank with its sovereign immunity.[2] The enabling legislation specified no limit on the volume of electronic dollars the BCE could create, and no prudential requirement that the central bank hold adequate assets to redeem them.

The Ecuadorian public recognized a risk of default or devaluation with the central bank’s electronic money accounts, and stayed away from them, defying the optimistic projections of government officials promoting the system. In June 2016 President Correa recognized that the project had critics, but he dismissed them as merely members of the opposition party and certain private bankers annoyed that the business was not going to them. In fact, mistrust in the system was much more widespread.

At least one print commentator at the time pointed to the BCE’s lack of trustworthiness.  El Comercio’s economic columnist Gabriela Calderón de Burgos in a June 2016 column clearly predicted that because of public mistrust the DE system would not succeed. She noted that, unlike the private bankers with their own wealth at stake, the BCE could behave irresponsibly, and would be pressured to do so by the Treasury with its chronic financing problems. Thus the electronic claims on the BCE are “a currency that does not inspire confidence,” and as a result “the DE will not work because it will not enjoy widespread acceptance. It would only achieve this if the government declares it to be a curso forzoso [forced tender].” But the government knows that such a move “would lead to chaos.”

Later that month, in a column on the “antics” of the central bank, she observed: “The government has intensified its campaign for people to deposit their dollars in the BCE and use ‘electronic money.’ I suspect that the campaign will have little success because of the justified distrust that the government and the BCE have earned in terms of their ability to take care of others’ funds.”

In May 2017 Calderón de Burgos returned to the theme, in a column entitled “Only the Dollar is Trusted.” The project for the Ecuadorian government to issue its own digital currency, denominated in US dollars, she wrote, “faces an insurmountable inconvenience: people will not voluntarily accept the new currency. That’s why they’ve been trying to convince us to use electronic money for three years and still few use it.” The US dollar itself, because it is something that Ecuadorian politicians cannot devalue, “generates much more confidence than any alternative that can occur to our politicians, even and particularly in times of financial crisis.”

A news article in December 2017 reported the answers ordinary people gave when asked directly why they weren’t using the BCE’s electronic money. Their answers confirm that many found the system not creditworthy. For example: “Mistrust is among the reasons, says Frank Guijarro, owner of a tire network.” And: “I do not trust opening an account with the Central Bank, so I pay in cash and sometimes with a debit card when I cannot get out,” says Katherine Alcivar, 26. The president of the association of cooperative savings banks gave a similar answer in an interview: “The greatest confidence we can give is that your resources are in your financial institution and not in the BCE.” The BCE system was haunted by the “ghost” of the previous government’s default. In addition, the BCE did too little to promote acceptance by shopkeepers and other businesses: “Not enough strength was given to the reception channels.”

As a result of these shortcomings, the system peaked at only $11.3 million in account balances, less than 5 hundredths of 1% of the country’s narrow money stock M1 ($24.5 billion). According to the deputy general manager of the central bank electronic money system, before the announcement of the coming shutdown ironically raised the average level of activity due to withdrawals, the system averaged only about 1,100 transactions per day.  The total value transacted over the entire life of the system was only about $65 million. Only 7,067 businesses ever conducted transactions with the electronic money. While a total of 402,515 accounts were eventually opened, the BCE found in retrospect that only 41,966 were ever used to acquire goods and services or to make payments. Another 76,105 were used only to upload and download money. The remaining 286,207 accounts (71%) that were opened were never used. (I do not know why the three reported component figures do not sum exactly to the reported total.)
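The reported account figures, and the discrepancy the parenthetical flags, can be verified directly from the numbers above:

```python
# Checking the BCE account figures reported at the system's shutdown.
total_opened = 402_515
used_for_purchases = 41_966      # acquired goods/services or made payments
upload_download_only = 76_105    # only uploaded and downloaded money
never_used = 286_207

component_sum = used_for_purchases + upload_download_only + never_used
print(component_sum)                  # 404,278 -- does not match the reported total
print(component_sum - total_opened)   # a discrepancy of 1,763 accounts

# Peak balances as a share of the narrow money stock M1
peak_balances = 11_300_000
m1 = 24_500_000_000
print(f"{peak_balances / m1:.4%}")    # 0.0461%, i.e. less than 5 hundredths of 1%
```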

Lessons from the Failure of the Project

We can make a back-of-the-envelope calculation of the Ecuadorian government’s profit from its monopoly electronic money project. Between 2014 and the present, the Ecuadorian government has been paying roughly 8% interest on the bonds it sells in international markets. Replacing $11.3 million of 8% bonds with zero-interest liabilities of the central bank provides an annual debt-service savings of less than $1 million, specifically $904,000. From the BCE’s 2014 income statement (the most recent that seems to be available), its “administrative expenses” (presumably payroll) were roughly $38 million. Thus the project would have turned a loss if it enlarged the BCE payroll by as little as 2.4%, even leaving aside non-salary expenditures on promoting and operating the project.
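The fiscal arithmetic in the paragraph above works out as follows (all inputs are the rounded figures stated in the text):

```python
# Reproducing the back-of-the-envelope seigniorage calculation.
float_balance = 11_300_000        # peak electronic-money balances, dollars
bond_rate = 0.08                  # approximate rate Ecuador paid on international bonds

annual_savings = float_balance * bond_rate
print(f"${annual_savings:,.0f}")  # $904,000 per year in avoided debt service

bce_admin_expenses = 38_000_000   # BCE administrative expenses, 2014 income statement
breakeven_payroll_growth = annual_savings / bce_admin_expenses
print(f"{breakeven_payroll_growth:.1%}")  # ~2.4%: any larger payroll increase means a loss
```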

An accounting report on the DE project issued by GPR, an Ecuadorian government accounting office, puts the government’s expenditures on the project at $7,967,553.78. Comparing that figure to the estimated debt service savings of only $904,000, the fiscal loss is clear. My thanks to Luis Espinosa Goded and Santiago Gangotena of the USFQ Department of Economics for pointing me toward the GPR report and helping me read it.

It is instructive to contrast the outcome in Ecuador with the optimistic picture of central bank electronic money drawn by Berentsen and Schär, who write:

We believe that there is a strong case for central bank money in electronic form, and it would be easy to implement. Central banks would only need to allow households and firms to open accounts with them, which would allow them to make payments with central bank electronic money instead of commercial bank deposits. As explained earlier, the main benefit is that central bank electronic money satisfies the population’s need for virtual money without facing counterparty risk.

The BCE deposits, by contrast to the scenario they have in mind, were not free of counterparty risk. More generally, in a sound banking system a commercial bank’s counterparty risk for depositors can be negligible, very close to zero, so that the central bank’s zero default risk need not be a big draw. In episodes where the central bank and the commercial banks simultaneously circulate banknote liabilities (e.g. today’s Scotland or Northern Ireland), no public concern about a risk difference is evident.

The Ecuadorian case also shows that implementation of a central bank electronic money system isn’t so easy. It requires more than merely setting up a website (the US federal government has sometimes proven not even competent at that) and letting households and firms open deposits. A convenient point-of-sale deposit-transfer mechanism, requiring both hardware and software, must be provided to many thousands of merchants. Consumer service and marketing are part of the business of providing retail payments. There is no reason to think that central banks are or would be good at a commercial business operation. In short, it is far from clear that asking bureaucrats to build a “public option” electronic money system would have benefits in excess of its cost.

[1] All quoted statements from Ecuadorean sources are my own translations, assisted by Google Translate.

[2] George Selgin and I raised this point some years ago as part of a general case for preferring private competition to sovereign monopoly in currency.

[Cross-posted from]

On March 30, Sally Satel, a psychiatrist specializing in substance abuse at Yale University School of Medicine, co-authored an article with addiction medicine specialist Stefan Kertesz of the University of Alabama at Birmingham School of Medicine condemning the plans of the Centers for Medicare and Medicaid Services to place limits on the amount of opioids Medicare patients can receive. The agency will decide in April whether to cap the opioid doses it will cover at 90 morphine milligram equivalents (MME) per day. Any opioids beyond that amount will not be paid for by Medicare. One year earlier, Dr. Kertesz made similar condemnations in a column for The Hill. While 90 MME is considered a high dose, they point out that many patients with chronic severe pain have required such doses or higher for prolonged periods of time to control their pain. Promoting the rapid reduction of opioid doses in such people will return many to a life of anguish and desperation.

CMS’s plan to limit opioid prescriptions mimics similar limitations put into effect in more than half of the states and is not evidence-based. These restrictions are rooted in the false narrative that the opioid overdose problem is mostly the result of doctors over-prescribing opioids to patients in pain, even though it is primarily the result of non-medical opioid users accessing drugs in the illicit market. Policymakers are implementing these restrictions based upon a flawed interpretation of opioid prescribing guidelines published by the Centers for Disease Control and Prevention in 2016.

Drs. Satel and Kertesz point out that research has yet to show a distinct correlation between the overdose rate and the dosages on which patients are maintained, and that the data show a majority of overdoses involve multiple drugs. (2016 data from New York City show 97 percent involved multiple drugs, and 46 percent of the time one of them was cocaine.)

Not only are the Medicare opioid reduction proposals without scientific foundation, but they run counter to the recommendations of the CDC in its 2016 guidelines. As Dr. Kertesz stated in 2017:

“In its 7th recommendation, the CDC urged that care of patients already receiving opioids be based not on the number of milligrams, but on the balance of risks and benefits for that patient. That two major agencies have chosen to defy the CDC ignores lessons we should have learned from prior episodes in American medicine, where the appeal of management by easy numbers overwhelmed patient-centered considerations.”

In an effort to dissuade the agency, Dr. Kertesz sent a letter to CMS in early March signed by 220 health professionals, including eight who had official roles in formulating the 2016 CDC guidelines. The letter called attention to the flaws in the proposal and to its great potential to cause unintentional harm. CMS will render its verdict as early as today.

Until policymakers cast off their misguided notions about the forces behind the overdose crisis, patients will suffer needlessly and overdose rates will continue to climb. 

I am saddened to report that Pat Korten, Cato’s vice president for communications from 1996 to 1999, died Thursday evening after suffering a stroke earlier in the week. 

Pat was a personal friend of mine. We served together in the administration of President Reagan, first for four years at the Office of Personnel Management, then for two years at the Department of Justice where Pat continued to serve under President George H.W. Bush. 

Pat was a movement classical liberal from his first days as an undergraduate at the University of Wisconsin. He was an informed and sharp communicator of the first order in all the positions he held, including with the Knights of Columbus, where he spent the last ten years of his career. More than anything, however, he was a man of deeply held principle, who at the same time could fill a room with his infectious laugh. We’ve lost a wonderful spokesman for liberty. May he rest in peace.

A judge in Los Angeles ruled Wednesday that Starbucks, Peet’s, and many other retailers face potentially massive liability under California law for not warning consumers that naturally occurring substances in roasted coffee beans can cause cancer, at least in lab animals. Absurd? Outrageous? Yes. But the scorn and outrage should be directed not at the judge but at the law whose terms he was required to enforce – Proposition 65, adopted by state voters through the initiative process in 1986 – as well as the lawyer-swayed California political system that still, more than 30 years later, is unwilling to address the measure’s gross flaws. 

Acrylamide is a naturally occurring substance formed when many foods are browned or otherwise subjected to high heat, including in many cases grilled burgers, fried chicken, bread, almonds, and potato chips. Like many other constituents of everyday life, it appears to cause cancer in some animals at high dosages. And that brings it under the terms of Prop 65, which has already led to a proliferation of warnings on and around thousands of common goods and services in California, from office furniture to hotel corridors to garages (car exhaust). Almost everyone agrees by now that the over-proliferation of warnings makes it less likely that consumers will pay attention to those few warnings that actually flag notable risks. Although on paper the law provides exemptions for some risks that are not “significant” or are balanced by benefits, these have been hard for defendants to use in practice, and the coffee vendors were not saved by the argument that java overall provides (scientifically uncertain) net health benefits, which may even perhaps include net anti-cancer benefits, that outweigh the (also scientifically uncertain) risks. 

What happens next? As the Post reports, “In addition to the warning signs likely to result from the lawsuit, the Council for Education and Research on Toxics, which brought the lawsuit, has asked for fines as much as $2,500 for every person exposed to the chemical since 2002, potentially opening the door to massive settlements.” And the financial shakedown value here is far from incidental; it’s the very motor that keeps the law going. Way back in 2001 – yes, Overlawyered has been covering this for nearly 20 years – I noted of the idealistic-sounding CERT, then involved in a suit against Starbucks over minute amounts of the Chinese herb ma huang in chai tea, and its lawyer Raphael Metzger:

While CERT is previously unknown, the same is not true of attorney Metzger, based in Long Beach, who runs a large “toxic-tort” practice whose website is publicizing the Starbucks action…  “The constitutional right of Californians to pursue and obtain safety could be an untapped source of riches that plaintiffs’ attorneys should consider on behalf of their clients and the public,” Metzger wrote a while back in the San Francisco Daily Journal regarding the prospect of tort claims based on the California Constitution’s “inalienable rights” provision. 

Metzger is involved in CERT’s current coffee litigation as well. Meanwhile the California political system, which listens carefully to the small industry of nonprofits and attorneys that make a living by filing suits, has been unwilling to do more than nibble around the browned edges of Prop 65’s famous irrationalities. The warnings of potentially chaotic results like today’s – like Prop 65 warnings in general – have gone unheeded.  Overlawyered has covered in detail both Prop 65 in general (including its use against scented candles, matches, brass knobs, light bulbs, playground sand, and billiard cue chalk) and acrylamide in particular. 

The NRA cites this pronouncement by the Brady Center’s co-founder, Pete Shields:  “The first problem is to slow down the number of handguns being … sold….  The second problem is to get handguns registered.  The final problem is to make possession … totally illegal.”  There’s the proof, says the NRA, that liberals just want to get rid of our guns and kill the Second Amendment.  That narrative had traction among hardcore gun rights people, but Heller actually defused the argument by affirming that the Second Amendment is here to stay, and it secures a fundamental, individual right.  

Then comes Justice Stevens — for many years, the intellectual leader of the liberal wing of the Court — and breathes new life into the NRA’s storyline.  What better evidence that the left wants a gun-free America?  A liberal icon calls for repeal of the Second Amendment – a proposal that will never be implemented, and would have limited effect if it were.  The Second Amendment doesn’t prevent states from enacting reasonable regulations; and its repeal wouldn’t prevent states from allowing assault weapons or high capacity magazines.  It’s state law, not the Second Amendment, that “calls the shots.”

But if so, then why the Second Amendment?  To prevent government from constructively banning a large class of weapons in common use for self-defense.  That was tried in DC (until Heller), and in Chicago (until McDonald), and perhaps in a few other localities.  That’s what would happen again if the Second Amendment were repealed.  And that’s why the NRA’s slippery slope argument still resonates with millions of gun owners.

Venezuela has the largest oil reserves in the world. Crude exports earn the country 95 per cent of its foreign exchange. That figure used to be lower, but relentless nationalization and the government’s insistence on controlling prices and exchange rates have made other exports unviable. Not that productive activity has reoriented inward: the IMF expects Venezuelan GDP to have dropped by 15 per cent in real terms each year in 2016 and 2017, and to do so again in 2018. This is a country in freefall.

Nor have price controls helped to sustain Venezuela’s currency. The bolivar, dubbed with cruel irony ‘strong’ because it replaced the old, weaker, bolivar at a 1:1,000 rate, has itself lost 99.9 per cent of its value against the U.S. dollar since March 2016. Shortages induced by controls, inept state management of nationalized companies, and capital flight have joined unlimited central bank money-printing to extinguish the purchasing power of Venezuelan money.

Rational policymakers would react to such a catastrophic state of affairs by enacting a dramatic U-turn and committing to it. Previous episodes of hyperinflation in Latin America were most effectively quelled by dollarization and the subsequent liberalization of goods and capital markets. But the extreme form of socialism that is the ruling regime’s ideology makes the leadership unwilling to countenance change.

Instead, they regale the population with a mixture of repression and gimmicks. The launch of the Petro, a state-sponsored cryptocurrency announced late last year, belongs in the latter category.

The Petro, which according to the Venezuelan government’s clumsy white paper is available for purchase as of yesterday, is supposed to be linked to the price of Venezuelan oil.

The white paper expresses the link as a pricing formula. In words: the government vows to accept Petros in payment for taxes and government fees at a rate determined by the previous day’s price of Venezuelan oil. Dv is a discount rate applied to that price.
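Putting that verbal description into symbols (an illustrative sketch only; the notation here is mine, not the white paper’s exact formula):

\[
P_{\text{Petro}} = P_{\text{oil}} \times (1 - D_v)
\]

where \(P_{\text{oil}}\) is the previous day’s price of a barrel of Venezuelan crude and \(D_v\) is the discount rate the government sets.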

Because the quantity of Petros in circulation is fixed and governance of the cryptocurrency is technically decentralized, the government argues that future manipulation of the Petro’s value is outside its control.

The reality, unsurprisingly, is more complicated. The Petro will run on the NEM blockchain, where transactions are validated differently from most cryptocurrencies. Bitcoin, for example, relies on a proof-of-work consensus algorithm, where computing power determines which transactions are validated. NEM, on the other hand, uses a proof-of-importance system where transactions are confirmed by the most important nodes, with importance defined as the number of coins owned and the frequency of transactions.

The Petro will ostensibly be accepted for payment of Venezuelan taxes and government fees, but little else. Moreover, the government has issued 100 million tokens but only 82.4 million are available for sale. Venezuelan authorities will presumably retain the remainder, so they will play an outsize role in the governance of the cryptocurrency under the POI system, despite the nominally decentralized blockchain.
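A toy calculation shows why the retained tokens matter. This is a deliberately naive sketch – importance here is simply proportional to holdings, which is not NEM’s actual proof-of-importance formula (that also weights transaction activity), and the assumption of 10,000 equal private buyers is hypothetical:

```python
# Naive model: a node's "importance" is just its share of all tokens.
TOTAL_TOKENS = 100_000_000
SOLD = 82_400_000
state_holding = TOTAL_TOKENS - SOLD          # 17.6M tokens retained by the state

# Hypothetical: the sold tokens are spread evenly over 10,000 buyers.
buyer_holding = SOLD / 10_000

state_share = state_holding / TOTAL_TOKENS   # 0.176
buyer_share = buyer_holding / TOTAL_TOKENS   # 0.0000824

print(f"state node's importance share: {state_share:.4f}")
print(f"typical private node's share:  {buyer_share:.7f}")
print(f"ratio: {state_share / buyer_share:,.0f}x")
```

Even under this crude model, the single state node carries thousands of times the weight of any individual buyer, which is the sense in which the “decentralized” governance is nominal.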

Second, the supposed “backing” of the Petro by oil reserves is nothing of the kind. There is a link between the market price of Venezuelan oil and the Petro’s bolivar exchange rate, but ownership of the cryptocurrency gives its owner no claim on sovereign oil assets. By buying Petros, one is giving the country’s socialist government full faith and credit that it will fulfill its promise to redeem liabilities at the prevailing oil price. Given recent experience and Venezuela’s multiple oil commitments to sovereign creditors such as Russia and China, that would be a lot of faith indeed.

The Petro might offer Venezuelan citizens a distraction from their nation’s dire problems; it may even allow the Venezuelan dictatorship to evade some U.S. sanctions, despite President Trump’s decision to ban U.S. citizens from buying Petros. But as a cryptocurrency, it is doomed to fail.

A true crypto-asset might instead represent claims on real barrels of oil, which could be traded in a decentralized market through transactions recorded on the blockchain. It’s unclear how much of an improvement this would yield over futures trading on an exchange, although it’s probably cheaper for some market participants to transact on the blockchain. But the Petro offers no true claim on anything, so its utility is dubious given the likelihood that the Petro’s state sponsor will default on its promises.

If you needed another reason not to buy any currency, digital or otherwise, issued by the Venezuelan government, then this is it.


President Donald Trump has dismissed Secretary of Veterans Affairs Dr. David Shulkin amid disagreement within the administration over the future of the beleaguered  Veterans’ Health Administration, a single-payer health system whose closest analogue is the United Kingdom’s National Health Service. 

In a farewell printed in the New York Times, Shulkin criticizes proposals to improve health care for veterans by privatizing the VHA:

The private sector, already struggling to provide adequate access to care in many communities, is ill-prepared to handle the number and complexity of patients that would come from closing or downsizing V.A. hospitals and clinics, particularly when it involves the mental health needs of people scarred by the horrors of war. Working with community providers to adequately ensure that veterans’ needs are met is a good practice. But privatization leading to the dismantling of the department’s extensive health care system is a terrible idea. The department’s understanding of service-related health problems, its groundbreaking research and its special ability to work with military veterans cannot be easily replicated in the private sector.

Actually, Shulkin is probably right. The VHA has built expertise in treating the special challenges veterans face (which is not to say the VHA always treats veterans well). If privatization “dismantl[es] the department’s extensive health care system,” it could take the private sector years to fill in the gap. Simply “closing or downsizing V.A. hospitals and clinics” could well be “a terrible idea.”

Fortunately, that is not what privatization means. To privatize does not mean to dismantle. It means to transfer ownership of a resource from the government to private individuals. 

Privatization of the VHA need not dismantle any aspect of that unique system. All that privatization would or need do is transfer ownership of VA hospitals and clinics–of all the system’s physical capital–to the people that system exists to serve: veterans. The VHA would continue to exist as the nation’s largest integrated health system, and would preserve its capacity to meet the unique needs of veterans, but under the control of veterans themselves rather than politicians who persistently renege on the commitments they make to veterans.

Cato Vice President for Defense and Foreign Policy studies Christopher A. Preble and I explain in the New York Times how privatization can have bipartisan appeal:

The alternative system we propose combines the universal goal of improving veterans’ benefits with conservative Republicans’ preference for market incentives and antiwar Democrats’ desire to make it harder to wage war. 

Read more about this bipartisan VA privatization proposal in Chapter 14, “Veterans Benefits,” of Cato’s Handbook for Policymakers (8th ed.).

I take a look at the federal budget situation in The Hill:

The 2,232-page omnibus spending deal signed into law last week threw fiscal sanity out the window. While entitlement spending has continued to grow, the relative restraint in discretionary spending had provided hope that federal budget control was possible.

But that hope is now dashed under this president and Congress. The omnibus hiked discretionary spending 13 percent in a single year, while scrapping the budget caps that were the singular achievement of reformers after the landmark 2010 election.

President Trump included substantial cuts in his recent budget, but signing the omnibus made a joke of his own proposals for fiscal restraint. 

The GOP’s discretionary budget actions and the relentless rise of health care and retirement spending have put the budget on a catastrophic course.

You can read the rest at The Hill.

President Trump continued his grumbling about Amazon this morning, echoing common but misguided views about the states being hurt by the rise of retail sales over the Internet. NPR has said, “The big problem is a loss of sales tax revenues as online sales climb.” And a coalition of states recently complained that online sales are imposing an “ever-increasing toll on the states’ fiscal health.”

But government data does not show any substantial “toll.” The chart shows total state-local revenues from income and sales taxes as a percentage of gross domestic product (GDP). E-commerce sales have grown to nine percent of retail sales, but sales tax revenues have nonetheless roughly kept pace with economic growth.

Since 1990, sales tax revenues have dipped only slightly from 3.1 percent to 3.0 percent of GDP.

Meanwhile, state-local income tax revenues have fluctuated with the economy, but have trended upward. They have risen from 1.8 percent of GDP in 1990 to 2.1 percent today.

Overall, state-local tax revenues (including property and other taxes) have edged up since 1990 from 8.7 percent of GDP to 8.8 percent.

It is not a lack of revenue that is taking “a toll on the states’ fiscal health,” but, rather, ever-increasing spending on Medicaid and worker pensions, as the Wall Street Journal discusses today.

The Washington Post reports that “House Republicans are considering a vote on a ‘balanced-budget amendment’” (BBA) to the constitution, having just backed a $1.3 trillion omnibus spending bill which will worsen the deficit considerably.

With deficits now projected to rise as high as 5.3 percent of GDP by 2019, this move amounts to the worst kind of “fiscal virtue signaling” by the GOP leadership. The vote appears designed to tell voters that the GOP favors fiscal restraint, despite all recent evidence to the contrary, safe in the knowledge that the amendment is near-certain to fail given the hurdles in the Senate alone.

There will therefore be a lot of rightful mocking and dismissiveness from the commentariat on this move. But two points from the conclusions of my recent paper on fiscal rules should be borne in mind.

First, lots of people will use this hook to come out and say a BBA is bad economics, particularly given that mainstream economists overwhelmingly oppose a requirement at the federal level for the books to balance every year.

But countries around the world have developed much more sophisticated fiscal rules which in effect balance budgets over the economic cycle. Switzerland’s is even part of its constitution, and it appears to work pretty well. Fiscal rules really can help to shape responsible budget outcomes, provided they smooth spending by capping it around trend revenues (rather than requiring balance every year), and avoid scope for overoptimistic assumptions or creative accounting by politicians.

Second and crucially, though, fiscal discipline – even to get to the stage of introducing and abiding by rules – requires political and public buy-in. At the moment, the equilibrium in Washington is instead for higher spending and more borrowing, and a continual reluctance to countenance reform of entitlement programs which drive the dreadful long-term debt projections.

Republicans had the opportunity, after the tax cuts, to explain to voters that if they liked their tax cuts, and wanted to keep their tax cuts, then fiscal restraint over a number of years was necessary. Now, even getting to a stage where a BBA could kick in would likely take years given the high deficit, and the political difficulties of cutting spending.

No doubt there are some Republicans who still care and worry about balancing the books. But with this proposed vote, the GOP instead is praying like St. Augustine: “Lord, give me fiscal discipline, but not yet.” The best way of locking in fiscal responsibility is to practice it.

Read my full paper on fiscal rules and the experience of other countries here.

Taiwan’s supporters in Congress and the Trump administration are pushing unprecedented measures to increase Washington’s backing for the island’s de facto independence from China. On March 1, the Senate passed the Taiwan Travel Act, which the House of Representatives had previously approved in January. The TTA states that it should be the policy of the United States to authorize officials at all levels to visit Taiwan to meet with their counterparts and allow high-level Taiwanese officials to enter the United States for meetings with U.S. officials. Notably, the TTA specifically encouraged interaction by “cabinet-level national security officials.”

As I note in a new article in China-U.S. Focus, although the measure does not compel the executive branch to change policy, it clearly underscores the congressional desire for closer U.S. ties, especially defense ties, with Taiwan’s government. Since the Senate passed the legislation with no dissenting votes, it reinforced the intensity of the congressional position. That President Trump signed the legislation instead of letting it go into effect without his signature signaled his agreement with the substance.

Although it was not a legal requirement, Washington’s policy since it switched official diplomatic relations from Taipei to Beijing in 1979 has been to authorize only low-level (usually economic) policymakers to interact with their Taiwanese counterparts. Prominent officials such as the President, Secretary of State, and Secretary of Defense refrain from doing so. That situation is now likely to change.

Congressional activists also are pushing a new gesture of support for Taiwan, even though Beijing’s strong protests in response to the TTA have barely begun to subside. Two key Republican senators, John Cornyn (R-TX) and James Inhofe (R-OK), are urging President Trump to approve the sale of F-35 fighters to Taipei. Cornyn is the assistant majority leader and Inhofe is a senior member of the Armed Services Committee, so their support for such a sale is not a minor matter.   

U.S. arms sales to Taiwan always are a sensitive issue with the Chinese government. Beijing contends that the communique President Reagan signed in 1982 committed the United States to phase out all such sales. U.S. leaders respond that the promise was conditional on Beijing’s willingness to rule out the use of force to compel Taiwan’s reunification with the mainland—a renunciation China has never made. A provision in the 1979 Taiwan Relations Act authorizes the sale of defensive arms to Taipei, but it is quite a stretch to regard F-35s as a defensive weapon system.

Since President Trump’s election, Beijing’s suspicions have grown that the United States intends to dilute, if not abandon, the “one-China” policy that has governed bilateral relations since the 1970s. The concerns soared with the much-discussed December 2016 telephone conversation between President-elect Trump and Taiwanese President Tsai Ing-wen. No previous president-elect since Washington’s recognition of the PRC as China’s rightful government had ever interacted with a Taiwanese leader. Trump alleviated Beijing’s concerns when he assured President Xi Jinping in February 2017 that Washington remained fully committed to the one-China policy, but passage of the Taiwan Travel Act and the new congressional push for F-35 sales undoubtedly revive China’s worries.  

Trump’s appointment of John Bolton as his new national security advisor also likely elevates Beijing’s apprehension. Bolton is a longtime, passionate supporter of an independent Taiwan. Not only did he previously urge the United States to establish diplomatic relations with Taipei, he even suggested redeploying U.S. troops currently stationed on Okinawa to Taiwan to demonstrate the firmness of Washington’s commitment to the island’s security.

It is hard not to empathize with the aspirations of a vibrant, capitalist democracy like Taiwan. In a just world, the Taiwanese would have every right to determine their own political destiny and not be pressured into reunifying with the mainland—especially as long as the PRC remains a repressive, one-party state. But we do not live in a just world, and China regards reunification as a vital interest for which it is prepared to go to war.

The Taiwan Travel Act and the proposed F-35 sale signify an emphatic pro-Taiwan tilt and a serious policy change. Even if the Trump administration does not fully implement the TTA and approve the arms sale, a future administration now has congressional authorization and encouragement to do so. Some of the statements already coming from China’s state-controlled media are worrisome. The semi-official Global Times suggested that Beijing’s response to the latest provocations might need to be “military” in nature. That is not a minor concern. The Taiwan Relations Act states that Washington would regard any Chinese military coercion of Taiwan as a grave breach of the peace in East Asia. There is little doubt that America would be entangled in such a conflict.

U.S. leaders are playing a very dangerous game when they flirt with measures that undermine the one-China policy. Greater caution is imperative.

Ever since President Trump appointed John Bolton to be the new national security advisor last week, a torrent of commentary has poured forth about the hawkish Fox News pundit and American Enterprise Institute senior fellow, who once served as United Nations Ambassador for 18 months in the George W. Bush administration. Two pieces published today, however, stand out for their precision and insight. 

The first is by The Atlantic’s Peter Beinart, whose central argument is that Bolton is not the learned foreign policy scholar many believe him to be. While Bolton certainly has years of experience, it hasn’t been of the right kind. Bolton’s “militancy,” his “incessant, almost casual, advocacy of war,” Beinart argues, is positively “Trumpian: The less evidence you have, the more certain you sound.”

Bolton’s analysis and prognostications – particularly about Iraq, Iran, and North Korea – have so frequently been proven wrong by events that it can be tedious to lay it all out. Beinart does a good job of it, but his real insight is to suggest a possible explanation for why Bolton has been so extremely hawkish, and wrong, for so long. 

[I]f Kissinger is right that “[high] office teaches decision making, not substance” and that it “consumes intellectual capital; it does not create it,” then the narrow professional experience through which Bolton has amassed his intellectual capital matters a great deal. He has never served in the military. He has never studied another region of the world, or another period of history, at the graduate level. He has spent his entire adult life in the interlocking world of hawkish think tanks, Washington law firms, Republican politics, and the right-wing media. And he manifests that narrowness in the smugly insular worldview he brings to his new job.

Over the past two decades, Bolton has written dozens of columns and essays, often for the flagship publications of the American right. To read them is to enter a cocoon. His writing is filled with assertions—about the purity of America’s intentions, the motivations of its adversaries, the uselessness of diplomacy, and the efficacy of war—for which he offers either feeble evidence or no evidence at all.

Do read the whole thing.

The second must-read on Bolton’s appointment comes from Josh Shifrinson, assistant professor of international affairs with the Bush School of Government at Texas A&M University. In the Washington Post’s Monkey Cage blog, Shifrinson argues that an extremist like Bolton can rise to the top of television punditry, and now to immense power as the president’s right-hand man on all things national security, only because of America’s peculiar place atop the international system. The unusual outsize power of the United States in the post-Cold War era has several implications for foreign policy.

First, it is a permissive environment for foreign policy activism in that there are few external constraints on the exercise of U.S. might. We face fewer negative consequences for strategic blunders and foolish wars, compared at least to states that face retaliation from peer competitors.

Second, this peculiar position of U.S. dominance means that domestic politics and the idiosyncrasies of individual leaders matter more for foreign policy than it otherwise would amid a more equal balance of power. “All this means sage leadership that screens policy ideas is especially important,” Shifrinson writes. “With an inexperienced leader like Trump in the Oval Office, Bolton’s views can gain traction partly because America still reigns as the sole superpower.”

Third, while other powers, like China, are beginning to compete with the U.S. in the economic and diplomatic spheres, America still reigns supreme in the military arena. Using force and projecting power are our comparative advantage, and so Washington’s incentive is to play to this strength, wisely or not.

Both Beinart and Shifrinson illustrate just how hazardous it is to have a man like Bolton in the Oval Office advising a man like Trump. If his history of erroneous analysis and impulsive support for elective wars is any guide, Americans should be bracing for a bumpy remainder of the Trump presidency. 

It looks like we have another terrible case of cherry-picking the evidence. But this time it’s shockingly misleading. Instead of simply pretending that the evidence on school choice is “mixed,” the Center for American Progress took it a step further by saying that the voucher evidence is “highly negative.” They are absolutely wrong. Here’s why.

The Four Evaluations

Their review of the research relies on only four voucher studies – Indiana, Ohio, Louisiana, and D.C. Two of these studies – Indiana and Ohio – are non-experimental, meaning that the researchers could not establish definitive causal relationships. But let’s go ahead and entertain them anyway.

The Ohio study used an econometric technique called regression discontinuity design, which can replicate experimental results only when a large number of students are observed right around a treatment cutoff point. The intuition behind the method is that it is essentially random chance that students fall just on either side of the cut point, and therefore the students near it are as good as randomly assigned to the voucher treatment or not.

The Ohio program used a cutoff variable – the performance of the child’s public school – to determine program eligibility. However, the researchers used student observations that were not right around the cut point and even removed the observations that were closest to the discontinuity. In other words, the authors could not establish causality, and it is more likely that the children assigned to receive the voucher program were less advantaged than those who were ineligible. After all, students in lower-performing public schools were the ones that were eligible for the choice program.
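The intuition can be sketched with simulated data. This is a toy illustration, not the Ohio study’s actual specification: the running variable, sample size, and functional form below are invented, but the sketch shows why a local linear fit recovers the treatment effect only when observations close to the cutoff are used.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated running variable: school performance, with the cutoff at 0.
# Students below the cutoff are voucher-eligible ("treated").
n = 5_000
x = rng.uniform(-1, 1, n)
treated = x < 0
true_effect = 0.5

# Outcome depends smoothly (and nonlinearly) on the running variable,
# plus the treatment jump at the cutoff and some noise.
y = 2.0 + 1.5 * x + x**3 + true_effect * treated + rng.normal(0, 0.3, n)

def rdd_estimate(bandwidth):
    """Fit a local linear regression on each side of the cutoff and
    return the estimated jump in the outcome at x = 0."""
    left = (-bandwidth <= x) & (x < 0)
    right = (0 <= x) & (x <= bandwidth)
    fit_left = np.polyfit(x[left], y[left], 1)
    fit_right = np.polyfit(x[right], y[right], 1)
    return np.polyval(fit_left, 0.0) - np.polyval(fit_right, 0.0)

print(rdd_estimate(0.2))  # narrow window: close to the true effect, 0.5
print(rdd_estimate(1.0))  # whole sample: biased by the underlying curve
```

With a narrow bandwidth the estimate lands near the true jump; using the whole sample, the linear fits absorb the curvature of the outcome function and the estimate drifts away from it, which is why discarding the observations nearest the cutoff undermines the design.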

Even then, the model with the largest sample size actually found that being eligible for the program led to positive test score impacts. But the authors at CAP never mentioned that.

The Indiana study was also non-experimental, as it compared voucher students to those remaining in traditional public schools. But let’s look at it anyway. While the authors did find small negative effects of the program on test scores initially, voucher students caught up to public school students in math and performed better in reading after four years. How in the world can a positive result like this be “highly negative”? Weird.

The Louisiana experiment did find large negative effects on test scores in the first two years. However, voucher students caught up to their public school peers in both math and reading after three years. The CAP authors argue that the main model – although clearly preferred by the Louisiana research team – is less “accurate” because of the “restricted sample size.” That is odd, as using more control variables (and a consistent sample) usually makes econometric models more accurate – not less. Another thing that is odd: the CAP authors chose not to report the positive Ohio results – which came from their larger sample of students – and instead chose to report the negative results – which came from a sample that was less than a tenth of the size. Why the change in criteria?

The CAP review heavily relies on the most recent experimental evaluation of the D.C. voucher program. It just so happens to be one of the only two voucher experiments in the world to find negative effects on student test scores.

The first-year evaluation of the D.C. voucher program found a 7.3 point loss in math scores and no effects on reading scores. Stanford University’s CREDO converts standardized effect sizes to “days of learning” by multiplying effects by 7.2. This means the first-year math loss in D.C. would be around 53 days of learning. However, the CAP authors overstated this loss by more than 28 percent by saying that voucher students lost 68 days of learning.
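The arithmetic is easy to check directly, using the 7.2 conversion factor described above:

```python
# CREDO's rule of thumb: multiply the effect size by 7.2 to express
# it in "days of learning".
math_loss = 7.3          # first-year D.C. math loss, in points
days_per_point = 7.2     # CREDO conversion factor

days_lost = math_loss * days_per_point
print(round(days_lost))  # 53 days, not 68

# How much does the 68-day figure overstate the loss?
overstatement = 68 / days_lost - 1
print(f"{overstatement:.0%}")  # about 29 percent
```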

What’s more – prior research has found that switching schools – for whatever reason – reduces student achievement by at least 10 percent of a standard deviation (or at least 72 days of learning). After all, students and schools need to adjust to their new environments. That the average voucher student only lost 7.3 points – rather than 10 – from switching schools suggests that the private schools in D.C. may have actually had positive effects on academic outcomes net of the temporary negative effects of a one-time school switch.

Further, the recent D.C. evaluation only looks at students after one year – when they are still adjusting to their new schools. And the meta-analysis of 19 voucher experiments shows that voucher programs’ effects on test scores get better over time. In fact, the positive test score trend was found in both Louisiana and Indiana. In addition, about half of the students in the control group in the D.C. experiment went to schools of choice. In other words, the first-year loss in math scores was relative to a mix of students in both traditional public schools and public charter schools.

And we cannot forget about the unequal playing field in our nation’s capital. D.C. voucher students receive only around $9,600 per year, while children in charter schools receive 46 percent more resources and students in traditional public schools receive around three times the amount of education dollars. It’s amazing that D.C. voucher students are doing as well as they are with such a huge funding disadvantage.

The True State of the Evidence

So what does the evidence actually say?

When synthesizing any body of research, we ought to rely on the most rigorous studies – the experiments. We should also look at all of the studies so we are sure not to fall prey to cherry-picking.

Eleven of the 17 existing voucher experiments in the United States find positive effects on test scores for some or all students, and a recent meta-analysis of 19 voucher experiments around the world finds positive effects overall. Only 2 of the 17 experimental evaluations find any negative effects on student test scores – and those are also the only two evaluations solely looking at effects after the first year.

But what about the students that are left behind in public schools? It turns out that competition benefits those students as well. At least 24 studies exist on this topic. And 23 of the 24 studies find positive effects on student achievement for kids in public schools. None of these studies find negative effects.

But we shouldn’t only look at test scores. After all, families do not care all that much about test scores, especially since test scores are weak predictors of long-term outcomes. It just so happens that private school choice programs have much more positive effects on non-test score outcomes.

I found 11 studies in my review of the most rigorous studies linking private school choice programs to civic outcomes like student tolerance levels and political participation. The majority of the studies found large positive effects. For instance, researchers from Harvard University and the University of Arkansas found that children that won a random lottery to use the D.C. voucher program were about 90 percent more likely to permit individuals from groups they oppose to give a speech in their community. No studies found negative effects. And another review by Patrick J. Wolf similarly found that private school choice largely improves civic outcomes.

Only one experiment – in D.C. – links a voucher program to high school graduation. And it finds that winning the lottery to use a voucher increases the likelihood that a student will graduate high school by 21 percentage points. That is huge.

Another systematic review of the evidence finds that voucher programs lead to racial integration. In fact, 7 of the 8 rigorous studies that exist on the topic find positive effects. None of the studies find negative effects. Unsurprisingly, when vouchers allow disadvantaged children to leave their segregated neighborhood schools, society becomes more integrated.

It’s time we set the record straight. The preponderance of the evidence suggests that private school choice improves test scores, high school graduation rates, tolerance, civic engagement, racial integration, and public school performance, while reducing criminality. And, of course, all of these benefits come at a lower cost to the taxpayer.

With the substantial body of scientific evidence suggesting precisely the opposite, claiming that voucher impacts are “highly negative” is almost as absurd as saying that the Earth is flat. Anyone making such a claim needs to seriously reevaluate their position.

Alex Nowrasteh had an excellent post yesterday on how the western tradition on immigration and naturalization formed the basis of the Founders’ views on those subjects and resulted in the most liberal policies in the world at the time. The debates at the Constitutional Convention highlight his point, showing just how liberal the Founders had become on immigration and naturalization.

At one point, Gouverneur Morris offered an amendment that would require 14 years of citizenship, rather than four, before a person could serve as a senator, “urging the danger of admitting strangers into our public Councils.” Charles Pinckney of South Carolina seconded the motion, recalling “the jealousy of the Athenians on this subject who made it death for any stranger to intrude his voice into their legislative proceedings.”

Yet as Alex notes, the Romans—rather than the Greeks—informed the views of most founders on naturalization, and most of the representatives at the convention opposed the Morris amendment for fear of, as future Chief Justice of the Supreme Court Oliver Ellsworth put it, “discouraging meritorious aliens from emigrating to this Country.” Alexander Hamilton argued that the “advantage of encouraging foreigners was obvious and admitted,” asserting that “persons in Europe of moderate fortunes will be fond of coming here where they will be on a level with the first Citizens.”

Father of the Constitution James Madison “was not averse to some restrictions on this subject, but could never agree to the proposed amendment” in part “because it will discourage the most desirable class of people from emigrating to the U.S.” In other words, not only were the Founders opposed to restricting the free movement of people into the United States, but they opposed restrictions on citizenship that they felt would discourage immigrants from using that freedom. Madison spoke of “great numbers” who would wish to come to the United States.

The goal of a “liberal” Constitution was one that the representatives repeated often (if not always pursued). In a separate conversation on the issue of qualifications to serve in office, Benjamin Franklin noted that the “Constitution will be much read and attended to in Europe, and if it should betray a great partiality to the rich, it will not only hurt us in the esteem of the most liberal and enlightened men there, but discourage the common people from removing to this Country.” On this amendment, he made the same point, stating he “was not against a reasonable time, but should be very sorry to see anything like illiberality inserted in the Constitution.”

Madison agreed, further arguing that the amendment “will give a tincture of illiberality to the Constitution.” Edmund Randolph of Virginia “reminded the Convention of the language held by our patriots during the Revolution, and the principles laid down in all our American Constitutions.”

James Wilson of Pennsylvania, who helped produce the first draft of the Constitution, was himself an immigrant from Scotland and raised the possibility of himself “being incapacitated from holding a place under the very Constitution which he had shared in the trust of making.” He noted that two other representatives—Robert Morris, originally of England, and Thomas Fitzsimons, originally of Ireland—shared the same situation. Furthermore, Wilson described “the discouragement & mortification [immigrants] must feel from the degrading discrimination now proposed,” noting that he had himself “experienced this mortification.” He said it “was wrong to deprive the government of the talents virtue and abilities of such foreigners as might chose to remove to this country.”

Madison and Franklin argued against the anti-immigrant conspiracy theories of the day that held that foreign governments would leverage their expatriates to their advantage. Franklin noted, “When foreigners after looking about for some other Country in which they can obtain more happiness, give a preference to ours, it is a proof of attachment which ought to excite our confidence and affection.” Madison added that foreign governments’ “bribes would be expended on men whose circumstances would rather stifle than excite jealousy and watchfulness in the public.”

Even those who favored a longer restriction on citizenship made clear that it was by no means out of opposition to immigration. George Mason of Virginia stated that he was “for opening a wide door for emigrants.” Moreover, he opposed an outright ban on citizenship in light of those foreigners who supported the cause of independence. Even Gouverneur Morris “ran over the privileges which emigrants would enjoy among us, though they should be deprived of that of being eligible to the great offices of Government; observing that they exceeded the privileges allowed to foreigners in any part of the world.”

Morris lost the vote 7 to 4, but the convention did adopt a higher standard of nine years on a second vote of 6 to 4. Yet despite this, their statements make clear that the Founding Fathers had a conception of citizenship and immigration that shares little in common with today’s nationalists. They wanted the most open possible society where foreigners could aspire to full citizenship in a reasonable time frame and receive equal treatment to citizens as soon as possible.

A letter in the New York Times from Joel Berg, the chief executive of Hunger Free America, caught my eye because it encapsulates the political debate about financial poverty and what to do about it.

Progressives believe that increasing the disposable incomes of the poor via minimum wage rises, expansions of tax credits and benefits (reform conservatives agree here), and government provision of services is the way to go. Plenty of conservatives want to reform existing welfare programs with work requirements or changes that reduce disincentives, encouraging people to earn their way to higher incomes.

Let’s put aside debate about what the “correct” measure of poverty is. What links the two is that both consider financial poverty (understood commonly) as being about nominal incomes. In one view, you alleviate it by transferring money or having government fund services to reduce out-of-pocket costs. In the other, you incentivize people to earn it.

Income, however obtained, is of course crucially important to individual well-being. Money matters, as do the debates about the efficiency and trade-offs of all these programs.

But the focus of policy experts, politicians, and the media on the “income-based” narrative of poverty alleviation has left a huge blind spot: many government policies worsen the finances of the poor by raising the prices of important everyday goods and services. This means any level of income goes less far in satisfying needs, worsening the financial plight of the poor directly but also driving the very demands for more redistribution and higher minimum wages we see.

Think about housing and the role of zoning and land-use planning laws in raising prices. Child care costs are likewise driven up by stringent staff-child ratios in certain states, without appearing to raise overall quality. Highly regressive tariffs are imposed on imported clothing. Sugar programs and milk marketing orders raise both sugar and dairy prices.

The poor spend, on average, the highest proportion of their budgets on what we might consider “essential” goods and services. Shelter, food, transport, utilities and apparel together account for 68.3 percent of the $25,318 spent on average by the poorest fifth of households. And yet in all these areas, policies at the federal, state and local levels often structurally raise market prices by restricting supply, raising compliance costs, institutionalizing monopoly power and much else.

Of course, all of these interventions are introduced for other reasons: to prevent urban sprawl, to raise the quality of child care, to deal with environmental externalities or to “protect” certain industries, and much else. But the fact is these policies cumulatively raise the cost of living significantly for the poor, and increase the demand for higher government spending and intervention to alleviate poverty. In fact, in most cases they are doubly damaging, as they often reduce economic efficiency too.

This presents an opportunity for libertarians to offer a different perspective on the poverty debate. We should highlight how existing government interventions drive up the cost of living for the poor, and propose a targeted assault on them as a significant “first do no harm” anti-poverty agenda. Serious analysis on how these policies are regressive has been done before, but they are rarely all pulled together into a single narrative that says governments should prioritize undoing these interventions as a nationwide poverty reduction effort.

There are theoretical reasons to think that such an argument – that freer markets are part of the solution to poverty, rather than its cause – could get a better hearing today.

With the US deficit already projected to rise to 5.3 percent of GDP by 2019, the scope for raising structural government spending is low. Liberalization of these markets could even reduce the need for spending in certain areas by reducing the demand for government. We appear to have hit diminishing returns where redistribution is concerned anyway. Poverty rates have remained stubborn despite huge increases in transfer spending since the 1970s. Housing and child care costs are regularly in the news. Left-wing commentators and public intellectuals worry about the regressive nature of zoning and occupational licensing laws, and conservatives worry about regulations which impede economic growth. More and more evidence (not least the recent paper on Seattle) now suggests that there are significant trade-offs for policies such as minimum wage increases too.

A “cost of living” agenda would be neutral on the welfare state and so could attract bipartisan support too. You do not have to believe existing anti-poverty programs have completely failed to acknowledge their effectiveness can be undermined by bad policies elsewhere which drive up living costs. You do not have to believe they are a success to believe that it is prudent and just to improve the financial position of the poor by reducing living costs as a quid pro quo for cutting welfare programs.

The main political barriers to such an agenda are two-fold. First, the vested interests who “win” from the various interventions and protections will resist. Second, comprehensive supply-side reform across a number of different areas and levels of government is tough to coordinate, and does not provide the focus that campaigning on one area – wages or tax credits – does.

Nevertheless, it is a worthwhile agenda. Before calling for major changes to welfare spending, one way or the other, politicians should realize the destructive consequences of their own policies on the living standards of the least well-off. That’s why over the coming few months, I am going to try to map out the contours of what an anti-poverty cost of living agenda might look like.

Martha Bebinger reports for National Public Radio station WBUR about the rise in fentanyl-laced cocaine. She cites numerous accounts of college students using cocaine to stay awake while studying for exams, or while attending campus parties, and then falling into a deep sleep after the initial cocaine rush. Some don’t wake up. Others get revived by the opioid overdose antidote naloxone.

Massachusetts state police recorded a nearly three-fold increase in seizures of cocaine laced with fentanyl over the past year. And the Drug Enforcement Administration lists Massachusetts among the top three states in the US for seizures of cocaine/fentanyl combinations. The DEA says the mixture is popularly used for “speedballing.” The original recipe mixed heroin with cocaine in order to minimize the negative effects of the “come-down” after the rush of cocaine. Cocaine mixed with heroin is very unpredictable and dangerous. When it is mixed with fentanyl—roughly 50 times the potency of heroin—it is even more dangerous.

There is a debate among law enforcement as to whether the cocaine is accidentally laced with fentanyl by sloppy underground drug manufacturers, or whether the mixture is intentional. There have been several reports of cocaine users who were unaware that the cocaine they were snorting or smoking contained fentanyl.

Connecticut state health statisticians keep track of opioid overdoses that included cocaine. While the majority of the time the overdose is from the classic “speedball” combination of heroin and cocaine, they have noted a 420 percent increase in fentanyl/cocaine overdoses over the last three years. However, Massachusetts does not register drug combinations when it records “opioid overdoses,” so it is unknown just what percentage of the 1,977 estimated opioid overdose deaths in Massachusetts last year were in combination with cocaine or other drugs. New York City keeps detailed statistics. In 2016, cocaine was found in 46 percent of the city’s opioid deaths, heroin and fentanyl were involved in 72 percent of opioid overdose deaths, and 97 percent of all opioid overdose deaths involved multiple drugs.

Meanwhile, President Trump and most state and local policymakers remain stuck on the misguided notion that the way to stem the overdose rate is to clamp down on the number and dose of opioids that doctors can prescribe to their patients in pain, and to curtail opioid production by the nation’s pharmaceutical manufacturers. And while patients are made to suffer needlessly as doctors, fearing a visit from a DEA agent, are cutting them off from relief, the overdose rate continues to climb.

The overdose crisis has always primarily been a product of drug prohibition—not of doctors treating patients.

At Politico Jeff Greenfield writes about “The Hollywood Hit Movie That Urged FDR to Become a Fascist.” The movie was “Gabriel Over the White House” in 1933 and, Greenfield writes, “it was designed as a clear message to President Franklin Delano Roosevelt that he might need to embrace dictatorial powers to solve the crisis of the Great Depression.” Greenfield assures us that FDR did not become a dictator, but he notes that “the impulse toward strongman rule” often stems from a sense of populist grievance, along with the scapegoating of “subversive enemies undermining the nation.” Depending on the time and the strongman, those subversive enemies can be Jews, capitalists, Wall Street, the 1 percent, the homosexuals, or in some countries the Americans.

Gene Healy wrote about “Gabriel” 10 years ago in The Cult of the Presidency and in this column in 2012:

…many of us still believe in authoritarian powers for the president.

In a November 2011 column, the Washington Post’s Dana Milbank offered “A Machiavellian model for Obama” in Jack Kennedy’s “kneecapping” and “mob-style threats” against steel-company executives who’d dared to raise prices.

Despite the obligatory caveat: “President Obama doesn’t need to sic the FBI on his opponents,” Milbank observed that “the price increase was rolled back” only after “subpoenas flew [and] FBI agents marched into steel executives’ offices”: “Sometimes, that’s how it must be. Can Obama understand that?”

Greenfield says “Gabriel” was both a commercial and critical hit, but “faded into obscurity, in large measure because the idea of a ‘benevolent dictatorship’ seemed a lot less attractive after the degradation of Hitler, Mussolini and Stalin.”

But that wasn’t so obvious in 1933. As I wrote in a review of Three New Deals by Wolfgang Schivelbusch, there was a lot of enthusiasm in the United States for central planning and “Fascist means to gain liberal ends.” Two months after Roosevelt’s inauguration, the New York Times reporter Anne O’Hare McCormick wrote that the atmosphere in Washington was “strangely reminiscent of Rome in the first weeks after the march of the Blackshirts, of Moscow at the beginning of the Five-Year Plan.… America today literally asks for orders.”

And Roosevelt was prepared to give those orders. In his inaugural address he proclaimed:

If we are to go forward, we must move as a trained and loyal army willing to sacrifice for the good of a common discipline. We are, I know, ready and willing to submit our lives and property to such discipline, because it makes possible a leadership which aims at a larger good. I assume unhesitatingly the leadership of this great army.… I shall ask the Congress for the one remaining instrument to meet the crisis — broad executive power to wage a war against the emergency, as great as the power that would be given to me if we were in fact invaded by a foreign foe.

Fortunately, American institutions did not collapse. The Supreme Court declared some New Deal measures unconstitutional. Some business leaders resisted it. Intellectuals on both the right and the left, some of whom ended up in the early libertarian movement, railed against Roosevelt. Republican politicians (those were the days!) tended to oppose both the flow of power to Washington and the shift to executive authority. But we’re being reminded again, in Washington as well as Moscow and Beijing and Budapest and Istanbul, that liberal institutions are always threatened by populism and authoritarianism and especially the combination of the two.

“Gabriel Over the White House” will air on TCM on April 27.


In their highly influential book describing behavioral economics, Nudge, Richard H. Thaler and Cass R. Sunstein devote two pages to the notion of “bad nudges.” They describe a “nudge” as any aspect of the choice architecture that alters people’s behavior in a predictable way without forbidding any options or significantly changing their economic incentives. The classic example of a nudge is an employer’s decision to “opt-in” or “opt-out” employees from a 401(k) plan while allowing the employee to reverse that choice; the empirical evidence strongly suggests that opting employees into such plans dramatically raises 401(k) participation. Many parts of the book advocate for more deliberate choice architecture on the part of the government in order to “nudge” individuals in the social planner’s preferred direction.

Thaler and Sunstein provide only a brief discussion of bad nudges, supported by uncompelling examples. They correctly note: “In offering supposedly helpful nudges, choice architects may have their own agendas. Those who favor one default rule over another may do so because their own economic interests are at stake.” (p. 239) With respect to nudges by the government, their view is: “One question is whether we should worry even more about public choice architects than private choice architects. Maybe so, but we worry about both. On the face of it, it is odd to say that the public architects are always more dangerous than the private ones. After all, managers in the public sector have to answer to voters, and managers in the private sector have as their mandate the job of maximizing profits and share prices, not consumer welfare.”

In my recent work (with Jim Marton and Jeff Talbert), we show how bad nudges by public officials can work in practice through a compelling example from Kentucky. In 2012, Kentucky implemented Medicaid managed care statewide, auto-assigned enrollees to three plans, and allowed switching. This fits the “choice architecture” and “nudge” design described by Thaler and Sunstein. One of the three plans – called KY Spirit – was of decidedly lower quality than the other two, especially in eastern Kentucky. For example, KY Spirit was not able to contract with the dominant health care provider in eastern Kentucky due to unsuccessful rate negotiations. KY Spirit’s difficulties in eastern Kentucky were widely reported in the press, so we would expect greater awareness there of differences in managed care organization (MCO) provider network quality.

Given the virtually nonexistent financial differences across the three Medicaid plans (they were essentially free to Medicaid clients), the standard economic framework with rational consumers and trivial transaction costs would predict that all enrollees would switch out of lower quality plans. In this case, it would suggest mass defections from KY Spirit. In contrast, the “nudge” framework suggests enrollees would be far more likely to remain in inferior plans. The nudge – in this case a bad nudge – worked. In each of the other two plans – both of higher quality – approximately 95% of those assigned stayed in them. For KY Spirit, the percentage was lower, but very far from the prediction of full-scale exit. Specifically, 57% of those assigned to KY Spirit remained enrolled in the plan in 2012, despite its well-documented problems. Among sicker individuals, 44% remained in KY Spirit, despite the serious problems in accessing healthcare providers. Very few individuals who opted out of their assigned health plan made the active choice to enroll in KY Spirit, consistent with the notion of its low quality. Of more than 37,000 individuals in eastern Kentucky assigned to the other two health plans, slightly more than 100 actively moved into KY Spirit.

Why would public officials assign Medicaid enrollees to a low quality health care plan? After all, virtually all examples of government nudges in the Thaler and Sunstein book portray officials as steering clients in the right direction. In the Kentucky context, the underlying motivation appears to be program costs. The state paid different reimbursement rates to each of the three health plans, and most of the time, KY Spirit was the “low cost, low quality” plan. In reality, this “bad nudge” – from the Medicaid enrollee’s perspective – was a cost saving from the taxpayer’s point of view. Compared to an objective of maximizing plan quality for Medicaid enrollees, the actual plan assignment, which included some “bad nudges,” reduced program costs by approximately 5%.

Although policymakers might be applauded in this case for reining in program costs through behavioral economics, the result is far from the optimistic framework portrayed by Thaler and Sunstein, in which choice architects maximize client interest.