Policy Institutes

A few weeks ago, President Trump passed his 500th day in office. That’s a good vantage point from which to appraise his economic policies to Make America Great Again.

Over at the Library of Economics and Liberty’s Econlog, I offer my assessment. It’s not good.

This may seem surprising, given current economic conditions. But economic policy isn’t merely about the current moment; it is predominantly about improving economic conditions long-term. Aside from a couple of provisions in the December 2017 tax law, President Trump has done precious little in that regard and much to harm the economy long-term, from borrow-and-spend fiscal policy, to harmful trade and immigration policies, to disinterest in serious regulatory reform, to his refusal to face the country’s dreary long-term fiscal challenges.

From my conclusion:

MAGAnomics appears to be little more than an impulsive dislike of free trade and immigration, a hazy desire for less regulation, disinterest in (or perhaps a lack of courage to face) the nation’s long-term fiscal problems, and a desire to temporarily lower taxes without making the hard choices necessary to fiscally balance those cuts and make them enduring. In other words, MAGAnomics is a slogan supporting a few weak and many harmful initiatives, not a serious collection of policies thoughtfully designed to strengthen the nation’s economic health.

Take a look and see if you agree.

In a 2013 Regulation article, Jonathan Lesser described how subsidies to renewable energy generators could actually increase electricity prices by reducing the profits, and thus the long-run supply, of unsubsidized conventional alternatives like natural gas generators.

According to Catherine Wolfram of the University of California, Berkeley Haas School of Business, Lesser’s predictions have become reality. Natural gas generators in the Pennsylvania-New Jersey-Maryland (PJM) regional electricity market have not received revenues sufficient to cover their capital costs in most years since 2009. Under such circumstances, existing plants eventually will cease operation and no new plants will be built. Higher prices and uncertain supply are inevitable.

Calpine, an operator of natural gas plants, asked the Federal Energy Regulatory Commission (FERC) to require PJM to fix the generation capacity market—a government-created market that pays firms for reserve generation capacity—to account for the subsidized competitors. Last month, FERC agreed with Calpine that the capacity market is currently “unjust and unreasonable” and issued an order requiring PJM to extend a price floor, which so far applies only to natural gas generators, to all resource types.

However, the FERC order falls short of the first-best option: eliminating subsidies to all resources. Federal regulators, Congress, and states should work to repeal the regulations, mandates, and subsidies that complicate the capacity market. An even bolder move would be to mimic Texas, which has no capacity market; generators are paid only for the energy they generate.

Written with research assistance from David Kemp.

Yesterday, Chris Edwards and I co-authored a piece for The Hill on “opportunity zones.” Opportunity zones were one element of last year’s tax reform law.

They’re more or less what would happen if the Low-Income Housing Tax Credit (LIHTC) and Community Development Block Grant (CDBG) produced offspring: opportunity zones both aim at generating economic development in declining areas (similar to CDBG) and use the tax code to incentivize public-private partnerships (like LIHTC).

There are other similarities to CDBG and LIHTC. Opportunity zones may benefit investors and developers more than they benefit the poor, which makes them like LIHTC.

The law has no provision to measure opportunity zones’ effectiveness, and measuring effectiveness would be hard anyway, which makes opportunity zones like CDBG. Currently, advocates simply cite the number of projects built with CDBG or LIHTC funding, which doesn’t tell a savvy information-consumer whether the programs are meeting their objectives.

As a result, opportunity zones will likely run on autopilot, while special interest groups claim they are effective based on the number of projects funded through the new tax mechanism. We won’t know how many of those projects would have been built anyway.

Lawyers, accountants, and financial advisors will make money advising investors and developers on program rules; those investors and developers will then make money deferring and reducing their capital gains taxes.

There’s nothing wrong with cutting taxes, but opportunity zones are the wrong way to accomplish that. And national policy shouldn’t play favorites or pretend that Congress or even state governors know where businesses or people should locate. (Hint: the best places for businesses and poor people to locate probably aren’t declining areas.)

Rather than federal “help,” states can create their own statewide opportunity zones by reforming their tax codes and fixing their zoning, occupational licensing, and childcare regulations. Zoning regulations keep low-skilled workers trapped in declining places and excluded from economic opportunity, and occupational licensing makes it harder to relocate to new economic opportunities.

Local reforms would really help poor workers, and regardless of whether they brought declining places back, they would improve poor workers’ ability to locate in non-declining places where the jobs are. Opportunity zones? Not so much.

Last month, we summarized evidence for the long-term stability of Greenland’s ice cap, even in the face of dramatically warmer summer temperatures. We drew particular attention to the heat in northwest Greenland at the beginning of the previous (as opposed to the current) interglacial. A detailed ice core shows around 6,000 years of summer temperatures averaging 6-8°C (11-14°F) warmer than the 20th-century average, beginning around 118,000 years ago. Despite six millennia of temperatures likely warmer than anything we could produce in a mere 500 years, Greenland lost only about 30% of its ice. That translates to only about five inches of sea level rise per century from meltwater.

We also cited evidence that, after the beginning of the current interglacial (nominally 10,800 years ago), it was again several degrees warmer than the 20th century, though not as warm as at the beginning of the previous interglacial.

Not so fast. Work just published online in the Proceedings of the National Academy of Sciences by Jamie McFarlin (Northwestern University) and several coauthors now shows that July temperatures over northwestern Greenland averaged 4-7°C (7-13°F) warmer than the 1952-2014 average from 8 to 10 thousand years ago. She also had some less precise data for maximum temperatures in the last interglacial, and they agree with (and may even be a tad warmer than) what was found in the ice core data mentioned in the first paragraph.

Award McFarlin some serious hard-duty points. Her paleoclimate indicator was the assemblage of midges buried in the annual sediments under Wax Lips Lake (we don’t make this stuff up), a small freshwater body in northwest Greenland between the ice cap and Thule Air Base, on the shore of the channel between Greenland and Ellesmere Island. Midges are horrifically irritating, tiny biting flies that infest most high-latitude summer locations. They’re also known as no-see-ums, and they are just as nasty now as they were thousands of years ago.

Getting the core samples from Wax Lips Lake means being out there during the height of midge season.

She acknowledges the seeming paradox of the ice core data: how could it have been so warm even as Greenland retained so much of its ice? Her (reasonable) hypothesis is that it must have snowed more over the ice cap—recently demonstrated to be occurring for the last 200 years in Antarctica as the surrounding ocean warmed a tad. 

The major moisture source for snow in northwesternmost Greenland is the Arctic Ocean and the broad passage between Greenland and Ellesmere. The only way it would snow so much as to compensate for the two massive warmings that have now been detected is for the water to have been warmer, increasing the amount of moisture in the air. As we noted in our last Greenland piece, the Arctic Ocean was periodically ice-free for millennia after the ice age.

McFarlin’s results are further consistent, at least in spirit, with other research showing northern Eurasia to have been much warmer than previously thought at the beginning of the current interglacial.

Global warming apocalypse scenarios are driven largely by the rapid loss of massive amounts of Greenland ice, but the evidence keeps coming in that, in toto, it’s remarkably immune to extreme changes in temperature, and that an ice-free Arctic Ocean has been common in both the current and the last interglacial period. 

Federal Reserve Chairman Jerome Powell was before the Senate Banking Committee today to present the semiannual Monetary Policy Report to Congress. Unfortunately, there was little discussion of monetary policy during the proceedings.

The Senators spent nearly all of their time asking the Chairman about the recent stress tests, changes to the tax code, and concerns over additional tariffs. On tariffs, Powell deserves credit for plainly stating that “in general, countries that have remained open to trade and haven’t erected barriers, including tariffs, have grown faster, have had higher incomes, [and] higher productivity, and countries that have…gone in a more protectionist direction have done worse.”

While many Senators ignored monetary policy, the one notable exception came when Senator Pat Toomey asked whether the flattening yield curve on bonds would cause the Fed to adjust either its path for interest rate increases or the pace of its balance sheet reduction.

A flattening yield curve means the difference, or spread, between short- and long-term bonds is narrowing. When short-term bond yields end up higher than those on long-term bonds, then the yield curve has inverted. The concern that Toomey’s question points to is that, in the past, an inverted yield curve has typically signaled a coming recession.
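
For readers who want to track the spread Toomey had in mind, here is a minimal sketch in Python, assuming the pandas-datareader package and FRED’s DGS2 and DGS10 constant-maturity Treasury series; it computes the 2-year/10-year spread and flags any inversion:

from pandas_datareader import data as pdr

# Daily 2- and 10-year Treasury constant-maturity yields from FRED
curve = pdr.DataReader(["DGS2", "DGS10"], "fred", start="2014-01-01")
curve["spread"] = curve["DGS10"] - curve["DGS2"]  # in percentage points

print(curve["spread"].tail())       # a shrinking spread means a flattening curve
print((curve["spread"] < 0).any())  # True would mean the curve inverted at some point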

Rather than respond directly to what the flatter yield curve potentially means for normalizing monetary policy, Powell delivered his weakest answer of the day. He admitted that the Fed has discussed yield curve dynamics in policy meetings, that “different people think about it different ways,” and that he tries to understand the yield curve in terms of what it says about neutral interest rates. He ignored the part of the question about whether the narrowing spread was signaling a potential economic slowdown—something not lost on seasoned Fed watchers.

While the Senators’ questions left a lot to be desired on the monetary front, the Chairman’s prepared remarks were a bit more encouraging. There, as David Beckworth notes, Powell once again highlighted the FOMC’s use of monetary policy rules when setting policy. It was only a year ago that the Fed added a new section on monetary policy rules to its semiannual report. That the Fed has continued to update and expand that section in subsequent reports is welcome news. However, Powell discusses monetary policy rules as useful insofar as they guide FOMC decisions on the path of interest rates. Because interest rates do not accurately reflect the stance of monetary policy, this laser focus on them can be problematic.

To truly improve the Fed’s performance, Powell should move beyond policy rules that fixate on interest rates and instead explore a monetary regime that would enhance macroeconomic stability.

Powell will be on the Hill again tomorrow, before the House Committee on Financial Services.

The heat and humidity are now on the rise again after a quite pleasant respite. But the last heatwave was exceedingly uncomfortable and prompted an examination of just how miserable Mid-Atlantic summers can be. My own weather equipment, in Marshall, VA, showed that the maximum heat index—a weighted combination of temperature and humidity that’s akin to heat stress—topped out at an astounding 125°F late in the afternoon of July 3.
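
For the curious, the heat index reported for U.S. stations is commonly approximated with the National Weather Service’s Rothfusz regression, which is meant for temperatures of roughly 80°F and above. A minimal Python sketch:

def heat_index(temp_f, rh_pct):
    # NWS Rothfusz regression: approximate heat index (deg F) from air
    # temperature (deg F) and relative humidity (percent).
    t, r = temp_f, rh_pct
    return (-42.379 + 2.04901523*t + 10.14333127*r
            - 0.22475541*t*r - 6.83783e-3*t*t - 5.481717e-2*r*r
            + 1.22874e-3*t*t*r + 8.5282e-4*t*r*r - 1.99e-6*t*t*r*r)

print(round(heat_index(100, 60)))  # about 129, already dangerous territory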

This wasn’t a nationwide event, unlike the dust-bowl summers of 1934 and 1936. Instead, as shown on climatologist Roy Spencer’s blog, the unusual heat was rather circumscribed, with a fairly even distribution of above- and below-normal temperatures across North America.

It’s worth having a look at the national history of very hot temperatures, shown below:

Figure 1. Despite warmer global average temperatures, there’s no trend in extremely hot days in the US record.

The heat of the 1930s has yet to be topped. In our region, none of the recent heat holds a melting candle to the summer of 1930, which was also exceedingly dry. Except for a few locations that got hit-or-miss thunderstorms, much of the Mid-Atlantic saw less than an inch of rain between June 20th and the end of August, with reports of a mere 10% of normal rain being common.

Here’s how hot it was. Leander McCormick Observatory is Charlottesville’s long-term climate station. For 23 days, beginning on July 19, 1930, the high temperature averaged 100°. Most Mid-Atlantic stations see about one such day per year. During that heatwave, on July 20, Woodstock, in the heart of the Shenandoah Valley, set the all-time credible Virginia record with 109°. (There is a 110° reading at Balcony Falls, VA, in 1954, but it’s not consistent with nearby temperatures.)

Urban Washington, DC was largely without air conditioning, and residents took to the parks to sleep. But that’s not a safe option now, and it’s also not clear that we have enough grid power to handle that much heat. The hottest days in the eastern U.S. come perilously close to bringing down the electrical grid.

Lack of, or loss of, air conditioning in a major urban heatwave kills people. This happened in Chicago in 1995, with 739 excess deaths as the heat index went astronomical. Nearby southern Wisconsin and eastern Iowa saw values above 130°, and one location (Appleton, WI) hit an astounding 148° at 5pm on July 13, the most uncomfortable heat ever measured in the western hemisphere. That was an official airport reading made on calibrated instruments.

A peculiarity of urban heatwaves, at least in the continental U.S., is that as they become more frequent—which they must, thanks mostly to urban sprawl, as well as a slight nudge from carbon dioxide—heat-related deaths begin to decline. This was noted both in Chicago, post-1995, and in France, post-2003, as subsequent temperature extremes resulted in far fewer fatalities than would have been expected by heat/death models.

The response to extreme heat is both political and personal. Because of the Chicago tragedy, cities nationwide developed heat emergency plans, which include both publicity and cooling centers. The French decided that—très gauche—American-style air conditioning wasn’t so bad after all, as they descended in droves upon big-box stores to buy units for granny’s room.

The decline in heat-related mortality is therefore a function of adaptation. Two of the hottest cities in the US are Phoenix and Tampa, and they also have some of the oldest (and therefore most susceptible) populations. Only in Seattle, where heatwaves are very rare, is there increasing heat-related mortality. And as urban heat becomes more frequent nationwide, heat-related mortality should decline as long as the power stays on.

As a historian of the Cold War, I have a passing knowledge of a number of meetings between Soviet/Russian leaders and U.S. presidents. Some are famous for getting relations off on the wrong foot (e.g. Kennedy and Khrushchev at Vienna in 1961); others set the stage for great breakthroughs, but were seen as failures at the time (e.g. Reagan and Gorbachev at Reykjavik in 1986); still others are largely forgotten (e.g. Johnson and Kosygin at Glassboro, NJ in 1967). It is impossible to predict how we will remember the first substantive meeting between Donald Trump and Vladimir Putin.

We can see, however, what President Trump wants us to remember. “I think we have great opportunities together as two countries that, frankly,…have not been getting along very well for the last number of years,” Trump said at the opening of the meeting in Helsinki. “I think we will end up having an extraordinary relationship.” 

President Trump has long said, going back to his campaign, that it is important to have good relations with Russia. I agree. I’ve never seen meetings between American leaders and senior government officials and their foreign counterparts as a “reward” for good or bad behavior. It’s called diplomacy. If this first meeting does set a tone for cooperation between the two countries, historians might eventually judge it worthwhile.

Unfortunately, the context surrounding this meeting is not conducive to long-term success. Credible evidence of Russian interference in the 2016 election, affirmed in detail as recently as Friday, casts a long shadow and makes it very difficult to make progress on matters of mutual interest. Any genuine breakthrough will immediately run afoul of U.S. domestic politics. That President Trump continues to dismiss the Mueller investigation as a “rigged witch hunt” and mostly blames his predecessor for failing to call the Russian election hack to the attention of the American people merely confirms a widespread perception that he doesn’t take it seriously.

In addition, on the heels of last week’s NATO summit and the G-7 meeting last month, there is the unsettling fact that President Trump seems to prefer meeting with autocrats to meeting with leaders of democracies. We saw that again today, with President Trump praising Vladimir Putin effusively days after he humiliated European leaders. He also spoke warmly of their mutual friend, China’s Xi Jinping. Last month, the president joked about how North Koreans “sit up at attention” when Kim Jong Un speaks, and said he would like “my people to do the same.” He seems particularly impressed by how others are able to stifle domestic dissent. This behavior and rhetoric play into his critics’ warnings about Donald Trump’s authoritarian instincts, and today’s meeting does nothing to ease such concerns.

President Trump’s idiosyncrasies notwithstanding, however, I will be paying attention to what, if anything, emerges from his meeting with Vladimir Putin. Possible outcomes include an agreement to discuss nuclear arms control, steps to tamp down the civil war in Syria, and possibly some resolution on Ukraine. But we’d all be well-advised to wait a bit before rendering a definitive judgment.

As regular Alt-M readers know, I’ve been saying for over a year now that, despite their promise to “normalize” monetary policy, Fed officials have been determined to maintain the Fed’s post-crisis “floor” system of monetary control, in which changes to the Fed’s monetary policy stance are mainly achieved by means of adjustments to the rate of interest the Fed pays on banks’ excess reserve balances, or the IOER rate, for short.

Until recently the Fed’s intentions had to be inferred by reading between the lines of its official press releases, or by referring to personal preferences expressed by leading Fed officials. But with today’s release of the Fed’s official Monetary Policy Report by the Board of Governors, it’s no longer necessary to speculate. The section “Interest on Reserves and Its Importance for Monetary Policy,” on pp. 44-46, leaves hardly any room for doubt that the Board of Governors still regards the IOER rate as “the principal tool the FOMC [sic] uses to anchor the federal funds rate,” and that it plans to keep on doing so after it “normalizes” monetary policy by completing its ongoing balance sheet unwind and by further raising its fed funds rate target upper limit by another percentage point or so.[1]

An Awkward Start

Having already spilled several gallons of ink criticizing the Fed’s floor system, on these pages and in Floored!, my forthcoming book on the subject, I don’t see the point of reviewing those criticisms here by way of a comprehensive reply to the Board’s recent remarks defending that arrangement. Still, I can’t resist pointing out some especially galling aspects of those remarks, starting with this opening passage:

The financial crisis that began in 2007 triggered the deepest recession in the United States since the Great Depression. In response, the Federal Open Market Committee (FOMC) cut its target for the federal funds rate to nearly zero by late 2008. Other short-term interest rates declined roughly in line with the federal funds rate. Additional monetary stimulus was necessary to address the significant economic downturn and the associated downward pressure on inflation. The FOMC undertook other monetary policy actions to put downward pressure on longer-term interest rates, including large-scale purchases of longer-term Treasury securities and agency-guaranteed mortgage-backed securities.

These policy actions made financial conditions more accommodative and helped spur an economic recovery that has become a long-lasting economic expansion.

Although the passage itself doesn’t refer to interest on reserves, its purpose is to introduce a discussion devoted to singing the praises of that policy instrument. It’s in light of that intention that the passage raises my hackles. For what the Fed’s report doesn’t say is that, when the Fed introduced IOER in early October 2008, it did so, not because it thought “monetary stimulus was necessary to address the significant economic downturn and the associated downward pressure on inflation,” but because it was determined to prevent its then-ongoing emergency lending from having any stimulus effect, and from thereby becoming a source of unwanted upward pressure on inflation! IOER was, in other words, originally intended to serve as a contractionary monetary policy measure, just when monetary expansion was desperately needed.

And boy did it work! NGDP, which had been growing, albeit at a snail’s pace, went into a tailspin. Nor was that all. Because the Fed’s IOER rate — first set at 75 basis points, briefly lowered to 65 bps, then quickly raised to 100 basis points, and finally lowered again (in early December 2008) to 25 basis points, where it remained for the duration of the crisis — was designed to prop up the fed funds rate by encouraging banks to accumulate excess reserves, when the Fed finally determined that the U.S. economy could use a little stimulus after all, it had no choice but to resort to “other monetary policy actions to put downward pressure on longer-term interest rates, including large-scale purchases of longer-term Treasury securities and agency-guaranteed mortgage-backed securities.”

But we mustn’t be too hard on the authors of the report. After all, it would have been awkward for them to laud the Fed’s floor system after first pointing out how, during the last months of 2008 and the start of 2009, that system played an important part in bringing the U.S. economy to its knees.

Not a Popular System

Another irksome passage in the Board’s report is the one declaring that “Interest on reserves is a monetary policy tool used by all of the world’s major central banks.” Yes, and no. Plenty of central banks pay interest on bank reserves. But the policy the report defends isn’t simply that of paying interest on bank reserve balances, including excess reserve balances. It’s that of using the IOER rate as the Fed’s chief instrument of monetary control, which is the essence of a “floor” operating system. And that means setting an IOER rate high enough to encourage banks to stock up on excess reserves, instead of trading them for other assets.

Although the central banks of several other nations have employed floor systems in the past, today, besides the Fed itself, only the Bank of England and the ECB still rely on floor systems — or something close. Most central banks now rely on “corridor” systems of some kind, in which the central bank’s IOER (“deposit”) rate sets a lower bound on movements in its policy rate, and open-market operations are routinely employed to keep the actual policy rate at a target set somewhere between that lower bound and an upper bound consisting of the central bank’s own lending rate. Finally, a number of other central banks that either used floor systems before the crisis or adopted such systems during it, including the Swiss National Bank, the Bank of Japan, Norges Bank, and the Reserve Bank of New Zealand, switched to “tiered” or “quota” systems afterwards. In a tiered system, reserves may earn interest at a rate that makes them seem attractive relative to other safe assets, but they do so only up to a fixed limit. Beyond that limit they earn only a relatively modest return — if not a zero or negative return. Because the marginal opportunity cost of reserves remains positive in tiered systems, such systems operate more like corridor systems than like a floor system.
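
To make the distinction concrete, here is a stylized sketch, with purely hypothetical rates and quota rather than any particular central bank’s actual parameters, of how tiered remuneration keeps the marginal return on reserves low even when the average return is attractive:

def tiered_interest(reserves, quota, tier_rate, excess_rate):
    # Balances up to the quota earn the attractive tier rate; anything
    # beyond it earns the lower (possibly negative) excess rate.
    return min(reserves, quota) * tier_rate + max(0.0, reserves - quota) * excess_rate

# A bank holding 150 against a quota of 100 earns tier_rate only on the
# first 100; the marginal unit of reserves earns excess_rate, so trading
# it for another safe asset still pays, unlike under a floor system.
print(tiered_interest(150, 100, 0.005, -0.001))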

Just How Low Has the Fed Really Gone?

But of all the irritating claims of the Board’s report, the one that has gone furthest in putting me in high dudgeon is this one:

The rate of interest the Federal Reserve pays on banks’ reserve balances is far lower than the rate that banks can earn on alternative safe assets, including most U.S. government or agency securities, municipal securities, and loans to businesses and consumers. Indeed, the bank prime rate — the base rate that banks use for loans to many of their customers — is currently around 300 basis points above the level of interest on reserves.

To which the following footnote is appended:

The Congress’s authorization allows the Federal Reserve to pay interest on deposits maintained by depository institutions at a rate not to exceed the “general level of short-term interest rates.” The Federal Reserve Board’s Regulation D defines short-term interest rates for the purposes of this authority as “rates on obligations with maturities of no more than one year, such as the primary credit rate and rates on term federal funds, term repurchase agreements, commercial paper, term Eurodollar deposits, and other similar instruments.” The rate of interest on reserves has been well within a range of short-term interest rates as defined in Board regulations.

Where to begin?

It’s absurd, first of all, to treat interest rates on “loans to businesses and consumers,” the prime rate included, as rates on safe assets. But don’t take my word for it: consider what two senior Fed economists, one of whom works at the Board of Governors, have to say on the subject in a Liberty Street Economics post entitled “What Makes a Safe Asset?” Safe assets, they write,

are those with a very high likelihood of repayment, and are easy to value and trade …. As a result, safe assets typically trade at a premium, known in the academic literature as a “convenience yield,” which reflects the nonpecuniary benefits investors receive for holding them …

In today’s financial system, the prime example of a safe asset is U.S. Treasury securities. These securities are considered to have zero credit risk, can be easily sold, and can be used as collateral either to raise funding or to post as margin in derivatives positions. … Treasuries’ safe asset status translates into an average yield reduction of 73 basis points. This yield spread can be interpreted as a measure of the convenience yield embedded in Treasuries.

However, Treasuries differ significantly in maturity and that affects their safe asset characteristics. Treasury bills (T-bills) have the shortest maturities and are often thought of as “money-like” assets, that is, assets similar to physical currency. Because of this moneyness, yields on short-term T-bills are typically lower than those on comparable assets….

The private sector can also create safe assets. For example, many of the benefits ascribed to public safe assets are also attributed to private short-term debt of certain issuers. An important difference between public and private safe assets, however, is that the reliability of private safe assets can come into question.

Stretch the notion as much as you like; you will never get “safe assets” to include even the safest bank loans. That is, you won’t be able to do it unless you are a Fed official trying to claim that the Fed’s IOER rate has been “far lower than the rate that banks can earn on alternative safe assets.”

Nor is it possible to justify comparing the Fed’s IOER rate — a rate on assets (reserves) of essentially zero maturity — to rates on otherwise safe longer-term assets. Instead, to sustain the claim that the Fed’s IOER rate has been low relative to that on assets of comparable safety, including comparably low exposure to interest-rate (or duration) risk, Fed officials would have to show that the IOER rate is below rates on safe assets with very short (if not zero) maturities. That rules out comparisons to Treasury and agency bonds and notes, leaving only Treasury bills. Even then the comparison is a bit unfair, as even the shortest-term Treasury bills have longer terms — and are therefore less liquid and safe — than bank reserves.

But let that pass. Instead, let’s just consider how the report’s assertion that the Fed’s IOER rate “is far lower than the rate that banks can earn on alternative safe assets” stacks up against the record regarding yields on various Treasury bills. Let FRED do the talking:
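
(For readers who want to draw the comparison themselves, a minimal sketch, assuming the pandas-datareader and matplotlib packages and FRED’s IOER, DTB4WK, and DGS1 series:)

import matplotlib.pyplot as plt
from pandas_datareader import data as pdr

# IOER rate vs. 4-week and 1-year Treasury bill yields, from FRED
rates = pdr.DataReader(["IOER", "DTB4WK", "DGS1"], "fred", start="2008-10-01")
rates.columns = ["IOER", "4-week T-bill", "1-year T-bill"]
rates.plot(title="IOER vs. Treasury bill yields (percent)")
plt.show()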

As the chart shows, throughout most of its existence the IOER rate has been well above, not just rates on shorter-term Treasury bills, but those on 1-year T-bills; indeed, for a long interval banks had to hold T-bills of 2-year maturities or longer to earn as much interest as excess reserves paid. And while the situation isn’t nearly so bad today, it remains the case that reserves pay more than one-month Treasury bills. That’s not “far lower than the rate that banks can earn on alternative safe assets.” It’s not even a little lower. It’s higher. Nor could things be otherwise, because having a floor system means having an IOER rate that’s high enough “to remove the opportunity cost to commercial banks of holding reserve balances,” which it wouldn’t be if it were really “far lower than the rate that banks can earn on alternative safe assets.”

“D” for Deception

And what about that footnote? It just adds insult to injury by showing the lengths to which the Fed has been willing to go to twist and bend the law authorizing it to pay interest on bank reserves. As the note correctly observes, that law requires that the Fed’s IOER rate not exceed “the general level of short-term interest rates.” Since the IOER rate is itself, as we’ve seen, a rate on a riskless zero-maturity asset, any reasonable interpretation of the statute would have it refer to the general level of rates on other short-term, riskless assets, such as 4-week Treasury bills or, perhaps, overnight Treasury-secured repos.

So, in preparing Regulation D, how did the Fed define short-term rates for the purpose of implementing the statute? Why, as “rates on obligations with maturities of no more than one year, such as the primary credit rate and rates on term federal funds, term repurchase agreements, commercial paper, term Eurodollar deposits, and other similar instruments” (my emphasis). If you can’t see how self-serving — not to say dishonest — the Fed’s definition is, please read it again, carefully, bearing in mind what “term” rates are and that the Fed’s “primary credit rate” is what’s more commonly known as its “discount” rate — that is, “the interest rate charged to commercial banks and other depository institutions on loans they receive from their regional Federal Reserve Bank’s lending facility–the discount window.”

That Regulation D refers to “term” rates rather than overnight rates, when the latter are obviously more appropriate, is the least of it. The inclusion on the Fed’s list of comparable rates of the Fed’s primary credit rate is the real kicker. First of all, that rate isn’t a market rate but one that the Fed itself administers. What’s more, the Fed has long had a policy of setting it well “above the usual level of short-term market interest rates” (my emphasis again). These days, for example, it sets it “at a rate 50 basis points above the Federal Open Market Committee’s (FOMC) target rate for federal funds.” Because the IOER rate once defined the upper limit of the FOMC’s fed funds target rate range, and is now set 5 basis points below that limit, any interest rate that the Fed pays on reserves is bound to be lower than the Fed’s primary credit rate. Thus the Fed has cleverly interpreted and implemented the statute in a manner that allows it to claim that it is obeying the law requiring that its IOER rate not exceed “the general level of short-term interest rates” no matter how it sets that rate, including when it sets it well above truly comparable market-determined short-term rates!

Now I hope you’re at least starting to see why the Fed’s report has got my goat.

_______________________
[1] “Sic” because it is the Board of Governors, rather than the FOMC, that sets the IOER rate. Concerning this anomalous exception to the rule assigning responsibility for the conduct of monetary policy to the FOMC, see my January 10, 2018 testimony before the Monetary Policy and Trade Subcommittee of the House Financial Services Committee.

[Cross-posted from Alt-M.org]

As a physician licensed to prescribe narcotics, I am legally permitted to prescribe the powerful opioid methadone (also known by the brand name Dolophine) to my patients suffering from severe, intractable pain that hasn’t been adequately controlled by other, less powerful painkillers. Most patients I encounter who might fall into that category are likely to be terminal cancer patients. I’ve often wondered why I am approved to prescribe methadone to my patients as a treatment for pain, but I am not allowed to prescribe methadone to taper my patients off of a physical dependence they may have developed from long-term opioid use, so as to help them avoid the horrible acute withdrawal syndrome. I am also not permitted to prescribe methadone as a medication-assisted treatment for addiction. These last two uses of the drug require special licensing and permits and must comply with strict federal guidelines.

The synthetic opioid methadone was invented in Germany in 1937. By the 1960s, methadone was found to be effective as medication-assisted treatment for heroin addiction, and by the 1970s methadone treatment centers were established throughout the US, providing specialized and highly structured care for patients suffering from substance use disorder. The Narcotic Addict Treatment Act of 1974 codified the methadone clinic structure. Today, methadone clinics are strictly regulated by the Drug Enforcement Administration, the National Institute on Drug Abuse, the Substance Abuse and Mental Health Services Administration, and the Food and Drug Administration. These regulations establish guidelines for the creation, structure, and operation of methadone clinics, in most cases requiring patients to obtain their methadone in person at one fixed site. After a period of time, some of these patients are allowed to take methadone home from the facility to self-administer while they remain closely monitored. This onerous regulatory system has led to an undersupply of methadone treatment facilities for patients in need. Furthermore, the need for patients to travel, often long distances, each day to the clinic to receive their daily dose has been an obstacle to obtaining and complying with the treatment program.

Earlier this month, addiction specialists from the Boston University School of Medicine and Public Health and the Massachusetts Department of Public Health argued in the New England Journal of Medicine that community physicians interested in treating substance use disorder should be allowed to prescribe methadone to patients they see in their offices and clinics. Doctors have been allowed to prescribe the opioid buprenorphine for medication-assisted treatment of addiction for years, and in recent years nurse practitioners and physicians’ assistants have been able to obtain waivers that allow them to engage in medication-assisted treatment as well.

The authors noted that methadone has been legally prescribed by primary care providers to treat opioid addiction in other countries for many years—in Canada since 1963, in the UK since 1968, and in Australia since 1970, for example. They state,

Methadone prescribing in primary care is standard practice and not controversial in these places because it benefits the patient, the care team, and the community and is viewed as a way of expanding the delivery of an effective medication to an at-risk population.

Policymakers serious about addressing the ever-increasing overdose rate from (mostly) heroin and fentanyl afflicting our population should take a serious look at reforming the antiquated regulations that hamstring the use of methadone to treat addiction.

In the few days since President Trump nominated him to be an Associate Justice on the Supreme Court, Judge Brett Kavanaugh has seen his life put under the microscope. It turns out that the U.S. Court of Appeals for the D.C. Circuit judge really likes baseball, volunteers to help the homeless, and has strong connections to the Republican Party – especially the George W. Bush administration. More consequentially, Kavanaugh is an influential judge with solid conservative credentials. For libertarians, Kavanaugh’s record includes much to applaud, especially when it comes to reining in the power of regulatory authorities. However, at least one of Kavanaugh’s concurrences reveals arguments that should concern those who value civil liberties. Members of the Senate Committee on the Judiciary should press Kavanaugh on these arguments at his upcoming confirmation hearing.

In 2015, Kavanaugh wrote a solo concurrence in the denial of rehearing en banc in Klayman v. Obama (full opinion below), in which the plaintiffs challenged the constitutionality of the National Security Agency’s (NSA) bulk telephony metadata program. According to Kavanaugh, this program was “entirely consistent” with the Fourth Amendment, which protects against unreasonable searches and seizures.

The opening of the concurrence is ordinary enough, with Kavanaugh mentioning that the NSA’s program is consistent with the Third Party Doctrine. According to this doctrine, people don’t have a reasonable expectation of privacy in information they volunteer to third parties, such as phone companies and banks. This allows law enforcement to access details about your communications and your credit card purchases without search warrants. My colleagues have been critical of the Third Party Doctrine, filing an amicus brief taking aim at the doctrine in the recently decided Fourth Amendment case Carpenter v. United States.

Because the Third Party Doctrine remains binding precedent, Kavanaugh argues, the government’s collection of telephony metadata is not a Fourth Amendment search. Regardless of one’s opinion of the Third Party Doctrine, this is a reasonable interpretation of Supreme Court precedent from an appellate judge.

Yet in the next paragraph the concurrence takes an odd turn. Kavanaugh argues that even if the government’s collection of millions of Americans’ telephony metadata did constitute a search, it would nonetheless not run afoul of the Fourth Amendment:

Even if the bulk collection of telephony metadata constitutes a search,[…] the Fourth Amendment does not bar all searches and seizures. It bars only unreasonable searches and seizures. And the Government’s metadata collection program readily qualifies as reasonable under the Supreme Court’s case law. The Fourth Amendment allows governmental searches and seizures without individualized suspicion when the Government demonstrates a sufficient “special need” – that is, a need beyond the normal need for law enforcement – that outweighs the intrusion on individual liberty. Examples include drug testing of students, roadblocks to detect drunk drivers, border checkpoints, and security screening at airports. […] The Government’s program for bulk collection of telephony metadata serves a critically important special need – preventing terrorist attacks on the United States. See THE 9/11 COMMISSION REPORT (2004). In my view, that critical national security need outweighs the impact on privacy occasioned by this program. The Government’s program does not capture the content of communications, but rather the time and duration of calls, and the numbers called. In short, the Government’s program fits comfortably within the Supreme Court precedents applying the special needs doctrine.

This paragraph includes a few points worth unpacking: 1) that the collection of telephony metadata is permitted under the “Special Needs” Doctrine, and 2) that the 9/11 Commission Report buttresses the claim that “The Government’s program for bulk collection of telephony metadata serves a critically important special need – preventing terrorist attacks on the United States.”

Kavanaugh asserts that the NSA’s program serves a special need, and is therefore exempt from the Fourth Amendment’s warrant requirement. The so-called Special Needs Doctrine usually applies when government officials are acting in a manner beyond what is associated with ordinary criminal law enforcement. Justice Blackmun explained the justification for the doctrine in his New Jersey v. T.L.O. (1985) concurrence:

Only in those exceptional circumstances in which special needs, beyond the normal need for law enforcement, make the warrant and probable cause requirement impracticable, is a court entitled to substitute its balancing of interests for that of the Framers.

Kavanaugh’s concurrence includes a few notable examples of the Special Needs Doctrine, such as drug tests for high school athletes and drunk driving roadblocks. Unlike Klayman, which concerned the indiscriminate bulk collection of millions of citizens’ telephony metadata, these cases involved limited searches specific to an isolated government interest.

In United States v. United States District Court (1972) – the so-called “Keith Case” – the Supreme Court rejected the government’s argument that “the special circumstances applicable to domestic security surveillances necessitate a further exception to the warrant requirement.”

The Court did not find the government’s arguments persuasive:

But we do not think a case has been made for the requested departure from Fourth Amendment standards. The circumstances described do not justify complete exemption of domestic security surveillance from prior judicial scrutiny. Official surveillance, whether its purpose be criminal investigation or ongoing intelligence gathering, risks infringement of constitutionally protected privacy of speech. Security surveillances are especially sensitive because of the inherent vagueness of the domestic security concept, the necessarily broad and continuing nature of intelligence gathering, and the temptation to utilize such surveillances to oversee political dissent. We recognize, as we have before, the constitutional basis of the President’s domestic security role, but we think it must be exercised in a manner compatible with the Fourth Amendment. In this case we hold that this requires an appropriate prior warrant procedure.

Kavanaugh’s argument that the NSA’s domestic spying can override Fourth Amendment protections thanks to “special needs” is at odds with the Supreme Court’s holding in the Keith Case. If the Court expanded special needs to cover the bulk collection of telephony metadata, it would be the most expansive application of the doctrine to date.

It’s important to consider why Kavanaugh believes “bulk collection of telephony metadata serves a critically important special need – preventing terrorist attacks on the United States.”

In making this claim, Kavanaugh cited the 2004 9/11 Commission Report. This report does not directly recommend the bulk collection surveillance at issue in Klayman, nor does it make the argument that such a program would have prevented the 9/11 attacks.  

In fact, the Privacy and Civil Liberties Oversight Board’s (PCLOB) 2014 report on the NSA’s bulk telephony surveillance program, published before Kavanaugh’s Klayman concurrence, found that the program was not a critically important part of the ongoing War on Terror:

Based on the information provided to the Board, we have not identified a single instance involving a threat to the United States in which the telephone records program made a concrete difference in the outcome of a counterterrorism investigation. Moreover, we are aware of no instance in which the program directly contributed to the discovery of a previously unknown terrorist plot or the disruption of a terrorist attack. And we believe that in only one instance over the past seven years has the program arguably contributed to the identification of an unknown terrorism suspect. In that case, moreover, the suspect was not involved in planning a terrorist attack and there is reason to believe that the FBI may have discovered him without the contribution of the NSA’s program.

Even in those instances where telephone records collected under Section 215 offered additional information about the contacts of a known terrorism suspect, in nearly all cases the benefits provided have been minimal — generally limited to corroborating information that was obtained independently by the FBI.

Kavanaugh’s assertion that the NSA’s invasive surveillance program is justified on national security grounds is simply not supported by the 9/11 Commission Report or the PCLOB’s report.

If the Senate does vote to confirm Kavanaugh, as is widely expected, he will likely be on the bench for decades. In that time, he will hear cases involving warrantless surveillance justified on national security grounds. This surveillance may involve facial recognition, drones, and other emerging surveillance methods. That a potential Supreme Court justice might view such warrantless surveillance as justified because of a national security-based “special needs” exception to the Fourth Amendment should worry everyone who values civil liberties. Members of the Senate Committee on the Judiciary must ask Kavanaugh to better explain his reasoning in Klayman.

Klayman v. Obama by Matthew Feeney on Scribd

Nationwide transit ridership in May 2018 was 3.3 percent less than in the same month of 2017. May transit ridership fell in 36 of the nation’s 50 largest urban areas. Ridership in the first five months of 2018 was lower than the same months of 2017 in 41 of the 50 largest urban areas. Buses, light rail, heavy rail, and streetcars all lost riders. 

These numbers are from the Federal Transit Administration’s monthly data report. I’ve posted an enhanced spreadsheet that has annual totals in columns GY through HO, mode totals for major modes in rows 2123 through 2129, agency totals in rows 2130 through 3129, and urban area totals for the nation’s 200 largest urban areas in rows 3131 through 3330.
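
For those who would rather script the comparison than scroll the spreadsheet, here is a minimal sketch, assuming you have exported the monthly ridership data to a CSV with one row per agency and mode and one column per month (hypothetical column names like "2017-05"):

import pandas as pd

# Hypothetical export of the FTA monthly ridership data
df = pd.read_csv("ridership.csv")

yoy = df["2018-05"].sum() / df["2017-05"].sum() - 1
print(f"May 2018 vs. May 2017: {yoy:.1%}")  # about -3.3% nationwide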

Declines in 2018 continue a trend that began in 2014. Year-on-year monthly ridership has fallen in 21 of the last 24 months, including each of the last seven. The principal cause is likely the growth of Uber, Lyft, and other ride-hailing services, but whatever the cause, there seems to be no positive future for public transit.

Of the urban areas that saw ridership increase, growth was 1.2 percent in Houston, 2.2 percent in Seattle, 2.4 percent in Denver, 1.2 percent in Portland, 5.0 percent in Indianapolis, 7.8 percent in Providence, 7.2 percent in Nashville, and an incredible 63.1 percent in Raleigh. Most of the growth in Raleigh came from students carried by North Carolina State University’s bus system.

On a percentage basis, the biggest losers were Miami, Boston, Cleveland, Kansas City, and Milwaukee, all of which saw about 11 percent fewer riders in May 2018 than May 2017. Ridership fell 9.2 percent in Phoenix, 8.0 percent in Jacksonville, 7.2 percent in Virginia Beach-Norfolk, 6.4 percent in Dallas-Fort Worth, 5.9 percent in Atlanta, and 5.6 percent in Philadelphia.

Numerically, the biggest losses were in New York, whose transit systems carried 12.7 million fewer riders in May 2018 than 2017; Boston, -4.1 million; Los Angeles, -2.4 million; Philadelphia, -1.7 million; and Miami, -1.4 million. Chicago, Washington, Atlanta, and Phoenix all lost more than half a million monthly riders.

Some people have argued that ridership is declining because of cuts to transit services. Others have concluded that cuts to transit service “mostly followed, and not led, falling ridership.” The posted spreadsheet includes data for vehicle-revenue miles of service that could support either view.

Transit service in both Houston and Seattle grew by 2.6 percent, supporting Houston’s 1.2 percent and Seattle’s 2.2 percent ridership gains. Indianapolis’ 5.0 percent increase in ridership was supported by a 9.9 percent increase in service. Service declined 2.0 percent in New York and 3.7 percent in Los Angeles, either reflecting or contributing to falling ridership in those urban areas.

However, ridership declined 2.5 percent in San Diego despite a 10.9 percent increase in service. Ridership in San Jose fell by 4.2 percent despite a 2.4 percent increase in service. Jacksonville’s 8.0 percent loss of riders came in spite of a 2.6 percent increase in service.

It seems clear that service levels are only one of the factors influencing transit ridership. Moreover, there appear to be rapidly diminishing returns to service: large service increases are needed to get small ridership gains. On the other hand, ridership declines reduce agency revenues, forcing reductions in service, leading to further ridership declines: a classic death spiral.

Transit industry leaders must be hoping for some kind of catastrophe that will send gasoline prices above $4 a gallon, for that is probably the only thing that could save the industry from its current trajectory. That is unlikely, and the industry is not worth saving any other way.

The Senate Judiciary Committee recently voted in favor of a bill that would update copyright law and apply new regulations to interactive streaming services, such as Spotify. The Music Modernization Act (MMA) addresses the issues of non-payment to copyright holders—the basis of a $1.6 billion lawsuit against Spotify—and undefined, unenforceable music property rights stemming from the lack of a comprehensive database that records the ownership of copyrights. In the current issue of Regulation, Thomas Lenard and Lawrence White recount the history of music copyright law and discuss some of the shortcomings of the MMA.

The New York Times quotes one supporter of the Act as stating, “This is going to revolutionize the way songwriters get paid in America.” But the MMA primarily incorporates streaming services into the existing framework through which distributors of music obtain permission from and provide compensation to music copyright holders.

A key provision of the MMA is that the Register of Copyrights would designate a Musical Licensing Collective (MLC) with two primary functions. The first is to serve as a collective rights organization that grants licenses for interactive streaming, receives royalties from streaming services, and remits the royalties to copyright holders. The second function is to create and manage a database of rights holders.

The revolutionary aspect of the MMA is the creation of such a database. Currently, the music industry lacks a comprehensive database that keeps track of copyrights, a gap that has created the nonpayment problem and limited music distributors’ ability to negotiate with individual copyright holders. Lenard and White contend that the database-building function of the MLC may be necessary because the economies of scale in managing such a database might be large enough to create a natural monopoly (though nongovernmental groups are already developing open-source and blockchain initiatives to solve these problems).

However, Lenard and White argue that, by linking the database function of the MLC with its role as a collective rights organization, the MMA simply extends a regulatory regime that limits competition. As it stands, the music copyright system largely consists of compulsory licenses and rates set by administrative or judicial proceedings. The MLC as outlined in the MMA would be a government-enforced monopoly with the same anticompetitive practices.

As Lenard and White state,

Whenever an opportunity for pro-competitive reform of music licensing arises, policymakers seem to revert to the traditional regulatory model that discourages competition. They never miss an opportunity…to miss an opportunity. The MMA—with its reliance on compulsory licensing, blanket licensing by a marketing collective, and regulated rates—is the latest of several recent examples.

Instead of extending the current anticompetitive regulations to streaming services, policymakers should update the music copyright registration system and allow a competitive copyright market to develop through which those copyrights are traded. Those changes would provide greater benefits for music creators, distributors, and consumers.

Written with research assistance from David Kemp.

Readers who watched the Cato forum last November on prosecutorial fallibility and accountability, or my coverage at Overlawyered, may recall the story of how a Federal Trade Commission enforcement action devastated a thriving company, LabMD, following a push from a spurned vendor. Company founder and president Mike Daugherty, who took part on the Cato panel, wrote a book about the episode entitled The Devil Inside the Beltway: The Shocking Exposé of the U.S. Government’s Surveillance and Overreach into Cybersecurity, Medicine and Small Business.

Last month two separate federal appeals courts issued rulings offering, when combined, some consolation for Daugherty and his now-shuttered company. True, a panel of the D.C. Circuit Court of Appeals, finding qualified immunity, disallowed the company’s claims that FTC staffers had violated its constitutional rights by acting in conscious retaliation for its criticism of the agency. On the other hand, an Eleventh Circuit panel sided with the company and (quoting TechFreedom) “decisively rejected the FTC’s use of broad, vague consent decrees, ruling that the Commission may only bar specific practices, and cannot require a company ‘to overhaul and replace its data-security program to meet an indeterminable standard of reasonableness.’” [More on the ruling here and here]

As usual, John Kenneth Ross’s coverage at the Institute for Justice’s Short Circuit newsletter is worth reading, both descriptions appearing in the same roundup since they were decided in such quick succession:

Allegation: Days after LabMD, a cancer-screening lab, publicly criticized the FTC’s yearslong investigation into a 2008 data breach at the lab, FTC staff recommend prosecuting the lab. Two staffers falsely represent to their superiors that sensitive patient data spread across the internet. (It hadn’t.) The FTC prosecutes; the lab lays off all workers and ceases operations. District court: Could be the staffers were unconstitutionally retaliating for the criticism. D.C. Circuit: Reversed. Qualified immunity. (Click here for some long-form journalism on the case.)…

Contrary to company policy, a billing manager at LabMD—a cancer-screening lab—installs music-sharing application on her work computer; a file containing patient data gets included in the music-sharing folder. In 2008 a cybersecurity firm finds it and tells LabMD the file has spread across the internet. (Which is false.) When LabMD declines to hire the cybersecurity firm, the firm reports the breach to the FTC, which prosecutes the case before its own FTC judge. LabMD does not settle; the expense of fighting forces the company to shutter. The FTC orders LabMD to adopt “reasonably designed” cybersecurity measures. Eleventh Circuit: The FTC’s vague order is unenforceable because it doesn’t tell LabMD how to improve its cybersecurity.

Our friend Berin Szóka of TechFreedom sums it up: “The court could hardly have been more clear: the FTC has been acting unlawfully for well over a decade.” He continues by calling this “a true David and Goliath story”:

Well over sixty companies, many of them America’s biggest corporations, have simply rolled over when the FTC threatened to sue them [over data security practices]. … Only Mike Daugherty, the entrepreneur who started and ran LabMD, had the temerity to see this case through all the way to a federal court. …After losing his business and a decade of his life, Daugherty is a hero to anyone who’s ever gotten the short end of the regulatory stick.

When a user clicks on a Google search result, the web browser transmits a “referral header” to the destination website, unless the user has disabled it. The referral header contains the URL of the search results page, which includes the user’s search terms. Websites use this information for editorial and marketing purposes.
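To make the mechanism concrete, here is a minimal sketch of how a destination website could recover a visitor’s search terms from a referral header. The URL format and the “q” parameter below are illustrative assumptions for this sketch, not a claim about how Google formats its result pages today.

```python
# Minimal sketch: extracting search terms from an HTTP referral (Referer) header.
# The URL and the "q" parameter name are assumptions for illustration only.
from urllib.parse import urlparse, parse_qs

# What a destination website might see when a user clicks a search result:
referer = "https://www.google.com/search?q=private+medical+question"

# Parse the query string out of the referring URL...
params = parse_qs(urlparse(referer).query)

# ...and pull out the user's search terms.
search_terms = params.get("q", [""])[0]
print(search_terms)  # -> private medical question
```

Nothing exotic is involved: the search URL, terms included, is simply handed to whatever site the user clicks through to.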

In 2010, Paloma Gaos filed a class action in the Northern District of California, seeking damages for the disclosure of her search terms to third-party websites through referral headers, claiming fraud, invasion of privacy, and breach of contract, among other causes of action. She eventually settled with Google on behalf of an estimated class of 129 million people in return for an $8.5 million settlement fund and an agreement from Google to revise its FAQ webpage to explain referral headers. Attorneys’ fees of $2.125 million were awarded out of the settlement fund, amounting to 25 percent of the fund and more than double the estimate based on class counsel’s actual hours worked.

But no class members other than the named plaintiffs received any money! Instead, the remainder of the settlement fund was awarded to six organizations that “promote public awareness and education, and/or…support research, development, and initiatives, related to protecting privacy on the Internet.” Three of the recipients were alma maters of class counsel.

This diversion of settlement money from the victims to causes chosen by the lawyers is referred to as cy pres. “Cy pres” means “as near as possible,” and courts have typically used the cy pres doctrine to reform the terms of a charitable trust when the stated objective of the trust is impractical or unworkable. The use of cy pres in class action settlements—particularly those that enable the defendant to control the funds—is an emerging trend that violates the due process and free speech rights of class members.

Accordingly, class members objected to the settlement, arguing that the district court abused its discretion in approving the agreement and failed to engage in the required rigorous analysis to determine whether the settlement was “fair, reasonable, and adequate.” The U.S. Court of Appeals for the Ninth Circuit affirmed the settlement, so two objecting class members, including Competitive Enterprise Institute lawyer Ted Frank, asked the Supreme Court to take the case (with a supporting brief from Cato)—which it has.

Cato has filed an amicus brief at this merits stage, arguing that the use of cy pres awards in this manner violates the Fifth Amendment’s Due Process Clause and the First Amendment’s Free Speech Clause. Specifically, each class member has a right to his claim, any compensation that arises from it, and representation that will defend the first two rights. The aggregate nature of class actions makes it easy to forget that their sole foundation is individual rights; class counsel and defendants end up ignoring that foundation and using the class as an aggregate tool for self-interest and collusion. When the settlement includes a cy pres award, it’s worse because class members’ property is involuntarily transferred to strangers. That those strangers are charitable organizations does not improve the situation, because it just gives class counsel and defendants’ collusion a philanthropic veneer. In the end, cy pres awards guarantee that every participant in the litigation derives some benefit except for the class members, the owners of the property being doled out. This perversion of the role of the judiciary is a gross violation of due process, and only a shift to an opt-out system and rigorous supervision by the courts can salvage individual rights.

This morning, USA Today published an article by Brad Heath examining data that show Baltimore (City) Police Department (BPD) activity slowed at the same time that Baltimore homicides infamously spiked, beginning in 2015. The piece is worth reading in full, and the data deserve a more detailed response, but at the outset it’s important to note what the data do not say.

Several current and former members of the BPD quoted in the piece say that front-line officers are unwilling to do their jobs because of the public backlash to Freddie Gray’s death. Recall that, following a chase, several Baltimore police officers shackled Freddie Gray but left him unsecured in the back of a police van—strongly resembling what is colloquially known as a “rough ride,” an unofficial retaliation for making police officers chase someone, also known as a “run tax”—and Gray consequently died of a broken neck suffered in that van. The subsequent, though unsuccessful, criminal prosecutions of the BPD officers involved for what looked like an illegal extrajudicial punishment that led to a man’s death apparently discourage front-line officers from being proactive in keeping the community safe. One way to read the USA Today data is that, as a consequence of this slow-down, murder rates have jumped precipitously.

It is a damning indictment indeed if BPD officers feel they need the freedom to needlessly kill Baltimore residents in order to do their jobs effectively. The data certainly show a work slow-down by Baltimore officers, and that slow-down may, in fact, be one factor contributing to the rise in homicides. But that front-line officers feel this way about the people they are sworn to protect reflects a mindset that is anathema to positive police-community relations, and it endangers a community that has no reason to trust its police force.

Rather than being the cause of Baltimore’s murder spike, the BPD work slow-down is more likely just one symptom of an unhealthy departmental culture. That department has repeatedly proven itself unworthy of the public trust, and the community suffers greatly because of it.

Watch this space for more on this topic.

Even as public opinion shifts in favor of marijuana legalization, with sixty percent of Americans supporting broad legalization and ninety percent supporting medical use, Attorney General Jeff Sessions and the Department of Justice (DOJ) continue to stonewall efforts to expand the availability of cannabis and cannabis-derived treatments for medical research.

In testimony to a Senate Appropriations subcommittee in April, Sessions argued that although recent studies have shown that access to medical marijuana reduces opioid overdose deaths, the evidence to support expanding access is still insufficient.

This is simply untrue. While DOJ and DEA policies have limited the ability of U.S. researchers to access and experiment with medical-grade marijuana, substantial peer-reviewed scientific research supports the benefits of medical marijuana.

Medical marijuana has been shown to improve the quality of life and health outcomes of patients with cancer, multiple sclerosis, Parkinson’s disease, chronic pain, PTSD, and many other ailments. Israel and many European Union countries lead the way in medical and pharmaceutical research. The market for medical marijuana is projected to be worth $55 billion by 2025, and biopharmaceutical firms are entering multi-million-dollar partnerships with universities to advance the research and development of new cannabis-based medications.

Yet despite the economic and humanitarian gains from expanding research into medical marijuana, the DOJ refuses to expand marijuana production for scientific use. In August 2016 the DEA issued a policy statement providing a legal registration process for marijuana suppliers. None of the 25 applications submitted thus far has been accepted or rejected. Instead of permitting the regulated production of marijuana for research purposes, as the law allows, the DEA is keeping applicants in bureaucratic limbo.

When questioned about the administrative inaction, Sessions argued that language in the policy violated the 1961 United Nations Single Convention on Narcotic Drugs. Yet the treaty contains broad exemptions for medical research and use, and, given the proliferation of marijuana research abroad, legal pathways exist that do not violate the treaty.

Even as the DEA refuses to take action, other federal agencies are quietly accepting medical marijuana. The FDA recently approved a drug containing CBD (cannabidiol) derived from marijuana. In a statement, FDA Commissioner Scott Gottlieb said, “We’ll continue to support rigorous scientific research on the potential medical uses of marijuana-derived products and work with product developers who are interested in bringing patients safe and effective, high quality products.”

Restricting scientific research and development within the United States will only hurt American scientists, companies, and patients. While Jeff Sessions may continue to argue fiercely against medical marijuana, the tide is turning.

Research assistant Erin Partin contributed to this blog post.

In a recent Philadelphia Inquirer opinion piece, White House economic advisor Peter Navarro hailed the christening of a new transport ship at the nearby Philly Shipyard as evidence of the “United States commercial shipbuilding industry’s rebirth.” As is typical of Navarro’s pronouncements, the reality is almost the exact opposite. In fact, a closer examination of the ship’s construction reveals it to be symptomatic not of a rebirth, but of the industry’s long downward slide.

Navarro describes the 850-foot Aloha-class vessel, named after the late Senator Daniel K. Inouye of Hawaii, as “massive” and notes that it is “the largest container ship ever built in the United States.” This, however, is somewhat akin to hailing the tallest Lilliputian. Although perhaps remarkable in a domestic context, by international standards the ship is a relative pipsqueak. Triple-E class ships produced by Daewoo Shipbuilding & Marine Engineering for Maersk Line, for example, are over 1,300 feet in length. While the Inouye’s cargo capacity is listed at 3,600 TEUs (twenty-foot equivalent units, roughly the size of a standardized shipping container), the Triple-E class can handle 18,000.

The only thing truly massive about the Inouye is its cost. The price tag for this vessel and another Aloha-class ship also under construction at the Philly Shipyard is $418 million, or $209 million each. The Triple-E vessels purchased by Maersk Line, meanwhile, cost about $190 million each. The South Korean-built ships, in other words, offer five times the cargo capacity for nearly $20 million less.
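Put those figures on a per-container basis (a rough, back-of-the-envelope calculation from the numbers above, ignoring differences in design and operating economics) and the gap is even starker:

$$\frac{\$209\ \text{million}}{3{,}600\ \text{TEU}} \approx \$58{,}000\ \text{per TEU} \quad\text{versus}\quad \frac{\$190\ \text{million}}{18{,}000\ \text{TEU}} \approx \$10{,}600\ \text{per TEU}.$$

By that crude measure, the American-built ship costs more than five times as much per unit of cargo capacity.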

But the story gets worse.

The Wall Street Journal reports that after the Philly Shipyard completes work on “two small ships”—a reference to the Inouye and its sister vessel—it has no more orders lined up. The shipyard is already laying off 20 percent of its workforce, and the dearth of future work has prompted speculation about a possible shutdown. Sadly, the Philly Shipyard’s travails are hardly atypical of the U.S. shipbuilding industry; even Navarro admits that the sector’s workforce has declined from 180,000 in 1980 to 94,000 today.

And yet we are to believe that the Inouye’s construction heralds an industry rebirth?

At least credit the White House advisor with assigning proper blame for this sad state of affairs (even if he misguidedly presents it as praise). The Inouye, Navarro says, is in large part the result of a protectionist law called the Jones Act. He’s not wrong. Formally known as the Merchant Marine Act of 1920, the law mandates that ships transporting merchandise between two domestic ports be U.S.-built, U.S.-owned, U.S.-flagged, and U.S.-crewed.

The result is that instead of purchasing cheaper foreign-built ships, Americans face enormous prices for relatively small ships. The cost of transportation, in turn, is higher than it would otherwise be, while the number of Jones Act-compliant vessels has declined, along with jobs for mariners and shipbuilders. The ships that remain, meanwhile, are far older than their foreign counterparts—no surprise given the cost deterrent to buying new ships. While the Inouye is brand new, the average Jones Act cargo ship is 34 years old; the international average is 25.2 years.

Consistent with other protectionist misadventures, the Jones Act’s list of victims includes those it was meant to help.

Rather than recommitting to the Jones Act and other failed forms of maritime protectionism as Navarro is so eager to do, the United States should instead be aggressively seeking this law’s repeal. An increasingly untenable status quo demands nothing less. Learn more about Cato’s Project on Jones Act Reform.

President Trump and his trade advisers are the most vocal in putting forward misguided views on the trade deficit, but, unfortunately, their position is a bipartisan one. Here’s something Congressman Brad Sherman of California said recently:

But Rep. Brad Sherman (D-CA), ranking member of the House Foreign Affairs Asia and the Pacific subcommittee, told Inside U.S. Trade he would be “surprised if any [bilateral] deal is finalized in the next 12 months.” Sherman met with Gerrish late last week, he said.

“Look, we spent 50 years telling the world that the only moral and correct thing to do was to have the United States run an enormous trade deficit with the entire world,” he said. “Of course, they decided to agree. Getting them to change their minds is not something that we are doing all that effectively and it’s certainly not something that is easy.”

Asked if he was confident a bilateral deal would be initiated in the near future, Sherman said “no, definitely not.”

Gerrish, he said, “was getting my input, but my input is certainly if you are dealing with a managed economy there has to be stated goals for how large the trade deficit will be or whether it will be balanced trade,” he said. “And it’s good to have people focused on the trade deficit; whether they are going about it the right way is perhaps another story. But ignoring it is a short-term strategy.”

Asked which countries might be top contenders for a bilateral, Sherman said none, adding that the criteria USTR was using to determine candidates was based on countries that trade fairly.

The U.S. will “strike deals” only with countries “that will provide for balanced and fair trade, of which there are none that I’m aware of right now,” he said.

The notion that a trade deal should lead to “balanced trade” seems to come from something like the Cuba-Venezuela trade arrangement, in which oil is traded for doctors. In the free market world of trade agreements, by contrast, the parties agree not to a barter of goods and services, but rather to remove tariffs and other protectionist barriers. The resulting bilateral trade balance is something to be determined by the market. The new trade flows are probably worth studying for various academic reasons, but they are not a measure of the success or failure of the deal.

By contrast, Congressman Sherman seems to think that the negotiation is over the trade deficit itself: “there has to be stated goals for how large the trade deficit will be.” But that’s not how U.S. trade negotiations work, or how they should work. What we negotiate is the level of tariffs and other barriers. (Ideally, both sides would agree to have no tariffs, although in practice the result is often just a lowering of tariffs.)

There can be complications from trading with the “managed economies” he refers to, but those can be dealt with in trade agreements through specific rules. For example, agreements can establish rules on how state-owned enterprises should behave. There were rules of this sort in the Trans-Pacific Partnership, and it would be a good idea for someone to propose similar rules in an agreement with China.

International rules to limit managed trade and constrain protectionism are a good idea. A (bipartisan) focus on bilateral trade deficits, by contrast, won’t address these fundamental issues, and is a big mistake.

Congressman Sherman’s comments did not surprise me, because I had a brief exchange with him on this very issue in a House Committee hearing last year on the impact of a US-UK trade agreement (starts at 1:12:28). He asked the following question and was looking for a short answer: “Would a deal with Britain that simply eliminated all tariffs be good or bad for reducing America’s trade deficit? … it’s possible that it can’t be estimated.” I knew I wouldn’t be able to have a real discussion with him in this setting on the value of trade deficits as a metric, but in answering I wanted to get the point out there that looking at trade deficits is a mistake, so I said: “I can’t estimate it but I also don’t think trade deficits are bad for the economy.” He responded by saying, “We lose 10,000 jobs for every billion dollars of trade deficits …”, but then quickly moved on.

We have spent a lot of time over the years rebutting the misunderstandings about trade deficits: See, e.g., here, here, here, and here. But clearly, there is still work to do. 

I’ve previously blogged about Allah v. Milling, a case in which a pretrial detainee was kept in extreme solitary confinement for nearly seven months, for no legitimate reason, and subsequently brought a civil-rights lawsuit against the prison officials responsible. Although every single judge in Mr. Allah’s case agreed that these defendants violated his constitutional rights, a split panel of the Second Circuit said they could not be held liable, all because there wasn’t any prior case addressing the “particular practice” used by this prison. Cato filed an amicus brief in support of Mr. Allah’s cert petition, which explicitly asks the Supreme Court to reconsider qualified immunity—a judge-made doctrine, at odds with the text and history of Section 1983, which regularly allows public officials to escape accountability for this kind of unlawful misconduct.

I also blogged about how, on June 11th, the Supreme Court called for a response to the cert petition, indicating that the Court has at least some interest in the case. The call for a response also triggered a 30-day window for additional amicus briefs, and over the last month Cato has been coordinating the drafting and filing of two such briefs—one on behalf of a group of leading qualified immunity scholars (detailing the many recent academic criticisms of the doctrine), and the other on behalf of an incredibly broad range of fifteen public interest and advocacy groups concerned with civil rights and police accountability.

The interest-group brief is especially noteworthy because it is, to my knowledge, the single most ideologically and professionally diverse amicus brief ever filed in the Supreme Court. The signatories include, for example, the ACLU, the Institute for Justice, the Second Amendment Foundation, Americans for Prosperity (the Koch brothers’ primary advocacy group), the American Association for Justice (formerly the Association of Trial Lawyers of America), the Law Enforcement Action Partnership (composed of current and former law-enforcement professionals), the Alliance Defending Freedom (a religious-liberties advocacy group), and the National Association of Criminal Defense Lawyers. The brief’s “Statement of Interest” section, after identifying and describing all of the individual signatories, concludes as follows:

The above-named amici reflect the growing cross-ideological consensus that this Court’s qualified immunity doctrine under 42 U.S.C. § 1983 misunderstands that statute and its common-law backdrop, denies justice to victims of egregious constitutional violations, and fails to provide accountability for official wrongdoing. This unworkable doctrine has diminished the public’s trust in government institutions, and it is time for this Court to revisit qualified immunity. Amici respectfully request that the Court grant certiorari and restore Section 1983’s key role in ensuring that no one remains above the law.

The primary theme of this brief is that our nation is in the midst of a major accountability crisis. The widespread availability of cell phones has led to large-scale recording, sharing, and viewing of instances of egregious police misconduct, yet more often than not that misconduct goes unpunished. Unsurprisingly, public trust in law enforcement has fallen to record lows. Qualified immunity exacerbates this crisis, because it regularly denies justice to victims whose constitutional rights are violated, and thus reinforces the sad truth that law enforcement officers are rarely held accountable, either criminally or civilly.

Moreover, qualified immunity hurts not only the direct victims of misconduct, but law enforcement professionals as well. Policing is dangerous, difficult work, and officers—most of whom do try to uphold their constitutional obligations—increasingly report that they cannot effectively carry out their responsibilities without the trust of their communities. Surveys of police officers thus show strong support for increased transparency, especially for holding wrongdoing officers accountable. Yet continued adherence to qualified immunity ensures that this worthy goal will never be reached.

The Supreme Court is in recess now, and the defendants’ response brief won’t be due until September 10th, so we’re going to have to wait until early October to find out if the Supreme Court will take the case. But the Court, the legal community, and the public at large should now be aware that criminal defense lawyers, trial lawyers, public-interest lawyers of every ideological stripe, criminal-justice reform groups, free-market & limited-government advocates, and law enforcement professionals themselves all agree on at least one thing—qualified immunity is a blight on our legal system, and the time has come to cast off this pernicious, counter-productive doctrine.

In a 2012 dissent from an opinion of the U.S. Court of Appeals for the D.C. Circuit, Supreme Court nominee Brett Kavanaugh acknowledged that “dealing with global warming is urgent and important” but argued that any sweeping regulatory program would require an act of Congress:

But as in so many cases, the question here is: Who Decides? The short answer is that Congress (with the President) sets the policy through statutes, agencies implement that policy within statutory limits, and courts in justiciable cases ensure that agencies stay within the statutory limits set by Congress.

Here he sounds much like the late Justice Antonin Scalia, writing for the majority in the 2014 case Utility Air Regulatory Group v. EPA:

When an agency claims to discover in a long-extant statute an unheralded power to regulate “a significant portion of the American economy,” we [the Court] typically greet its announcement with a measure of skepticism. We expect Congress to speak clearly if it wishes to assign to an agency decisions of vast “economic and political significance.”

Scalia held this view so strongly that, in his last public judicial act, he wrote the order (passed 5-4) staying the Obama administration’s sweeping “Clean Power Plan.” Such stays issue only when it appears that a majority of the Court would be likely to rule the same way in a related case on the merits.

This all traces back to the landmark 2007 ruling in Massachusetts v. EPA, which held, 5-4, that the EPA was indeed empowered by the 1990 Clean Air Act Amendments to regulate emissions of carbon dioxide if the agency found that they endangered human health and welfare (a finding the agency duly made in 2009). Justice Kennedy, Kavanaugh’s predecessor, voted with the majority.

Will Kavanaugh have a chance to reverse that vote? That depends on what the new Acting Administrator of the EPA plans to do about carbon dioxide emissions. If the agency simply stops regulating carbon dioxide, there will surely be some type of petition to compel it to continue, based on the 2009 endangerment finding. Alternatively, those opposed to the finding might petition on the grounds that the science has changed markedly since 2009, with increasing evidence that the computer models that were its sole basis have demonstrably overestimated warming in the current era. It’s also possible that Congress could compel the EPA to reconsider the finding, and that a watered-down version might find itself at the center of a court-adjudicated policy fight.

Whatever happens, though, it is clear that Brett Kavanaugh prefers congressional statutes to agency fiat. Assuming he is confirmed, he will surely bring that preference to bear on the Court: global warming may be “urgent and important,” but it is the job of Congress to define the regulatory statutes.
