Crony Capitalism And The Transcontinental Railroads

This article, written by Mises Daily editor Ryan W. McMaken, was originally published by the Mises Institute on March 10.

When Barack Obama used the transcontinental railroads as an example of the wonderful things that can be accomplished with grandiose government programs, he was attacked for mistakenly referring to the railroads as “intercontinental.” Notably, he was attacked by approximately no one for talking up a government program that in reality should be remembered as a pioneering feat in government corruption, corporate welfare, and immense waste.

Although no longer related in quite the heroic terms they once were, the transcontinental railroads retain their place as one of the great alleged success stories of nineteenth-century America. According to the popular myths, the same myths now exploited by the president and challenged by no one, the railroads, these supposedly great monuments to the ingenuity of American industrialists, united East and West by bringing together the economies of the West coast and the East coast. This government program then set the stage for the massive economic growth and national greatness that would occur in the United States during the early twentieth century.

And yet, few claims about the necessity or success of the transcontinental railroads are true. While few would deny that transcontinentals would eventually have become economically feasible in the private market, there was no economic justification for them during the 1860s, when the first transcontinentals took shape. This is why the first transcontinentals were all creatures, not of capitalism or the private markets, but of government. There simply were not enough people, capital, manufactured goods, or crops between Missouri and the West coast to support a private-sector railroad.

As creatures of government and of taxpayer-funded schemes to subsidize the railroads and their wealthy owners through cheap loans and outright subsidies, the railroads quickly became scandal-ridden, wasteful, and contemptuous of the public they were supposed to serve.

This tale is told in grim detail in historian Richard White’s 2011 tome on the transcontinental railroads, Railroaded: The Transcontinentals and the Making of Modern America, which exposes the near-utter disconnect between the railroads and the true geography of the markets in the mid-nineteenth century.

While it has long been assumed that the West coast benefited immensely from the transcontinentals that connected it to eastern markets, in fact the overland railroads made little difference. The West coast already had its own economy founded on exports to Europe and Asia, and Californians and Oregonians obtained all the goods they needed by sea. Indeed, for years after their completion, the railroads of the West coast were unable to effectively compete with the steamship operators (many of them also subsidized by Congress) that provided cheaper transportation of goods. Naturally, this situation degenerated into a political competition between railroads and steamship companies seeking more favorable treatment from the federal government.

In general, however, the economy of the West coast turned to the more efficient and more competitive sea carriers. By the 1860s, the sea carriers were already taking advantage of well-developed trade with the Panama Railroad across Central America, completed in 1855, that was providing true transcontinental shipping at a much lower price over a much shorter overland route.

In spite of massive subsidies and free land grants equal in size to New England, the lack of overland trade made it difficult for the railroads to turn a profit. After a series of bankruptcies, bailouts, and other schemes, railroad owners like Leland Stanford, Thomas Durant, and Jay Gould managed to make a lot of money manipulating federal largesse. Many others were ruined by the railroads’ bubble economy, including the families and ranchers who followed the flood of money and capital west during the boom, only to find themselves paupers on the western plains after the bust.

With the signing of the first bill to create the transcontinentals in 1862, it was already known that there was no economic justification for the railroads, which is why they were, according to White, “justified on the grounds of military necessity.” Lacking any privately-funded entrepreneurs willing to build a road through more than a thousand miles of territory uninhabited by whites, the 1862 Railroad Act created the Union Pacific, making it the first federally-created corporation since the Bank of the United States. Legal and economic shenanigans ensued, and it would not be until the 1890s that anyone built a privately-funded transcontinental, the Great Northern Railway.

Indeed, by the 1890s, global progress in technology and technique had greatly reduced the cost of constructing railroads. The benefits of waiting for the private sector to construct railroads when costs and consumer demand made them feasible could have been enormous. The costs of not waiting were indeed huge. The transcontinentals set the stage for the corruption and corporate capitalism that now defines the Gilded Age in the minds of many. While much of the American economy of that era was characterized by very free markets, the railroad markets west of Missouri were anything but. In the end, the railroads constituted a huge transfer of wealth to the railroad corporations from taxpayers, Indians, Mexicans, and the more efficient enterprises that found themselves competing with these subsidized behemoths.

It was the same old story of using the state to socialize costs while privatizing profits. As one opposition Congressman declared in response to the Railroad Bill, the enterprise was “substantially a proposition to build this road … on Government credit without making [the railroads] the property of the Government when built. If there be profit, the corporations may take it; if there be loss, the Government must bear it.”

Even if presented with this information today, many Americans, both left and right, are likely to just shrug and make the consequentialist argument that the railroads were “worth it” because, without the transcontinentals being built by the U.S. government, “America” (whatever that means to the one making the argument) wouldn’t be as “great” (another perfectly malleable term). This enormously presumptuous claim, however, completely ignores the opportunity cost of constructing and financing the railroads in that fashion. What else could have been funded with the resources that went to the railroads during the decades following the American Civil War? We’ll never know.

Yet even during the 1870s and ’80s, when it became apparent to many that the railroads were a gargantuan waste of money, and most of the railroad companies were in bankruptcy, the railroads’ supporters claimed that it had all been a great idea because, although the companies were bankrupt, the railroads themselves were still there, now presumed to be an immutable part of the landscape forever available to future Americans. Even that argument held no water, of course, because it turns out that railroads require an enormous amount of upkeep and maintenance. This was especially true of the first transcontinentals, which were poorly and cheaply constructed, and which required rebuilding in many places. The railroads were in fact huge white elephants that in many cases could only be maintained with cheap government financing and other forms of corporate welfare.

Interestingly, White, in his conclusions in Railroaded, appears somewhat dismayed at the chaos that reigned among the railroad companies and within the so-called markets that connected the railroads to the farmers, ranchers, and miners who used the railroads for shipping. Lacking the insights of the Austrian School, White fails to see the booms, busts, and waste of the transcontinentals as the natural outcome of a government-dominated market divorced from a functioning consumer market or price system. White’s understanding of economics remains mired in neo-classical assumptions using buzzwords like “competition” and “efficiency” as the most important aspects of markets. In this, White is very much like his nineteenth-century subjects who, we learn from White, were themselves stuck in non-Austrian economic thinking that so often concludes that when markets appear to be broken, they can be fixed by government-mandated competition and government-determined prices that are said to be more “efficient.” The central role of the consumer, so well understood by Austrians, was often ignored by even the most consistent free-marketeer of that time and place.

I’m forced to forgive White for his ignorance of economics, however, for he has done a great service in providing us with such detailed and unvarnished documentation of the crony capitalist world of the transcontinental railroads. Although he’s likely a complete stranger to the works of Bastiat, White concludes that the unseen cost of the transcontinentals is one of the great ignored realities of the railroads. Those who dogmatically defend the government’s transcontinentals, White asserts, need to “escape” thinking that assumes the “inevitability of the present.” Yes, it’s a fact that the government-financed railroads were built, and yes, it’s a fact that American standards of living increased greatly in the decades that followed. The assumed connection between those two events, however, is on far shakier ground, and the assumption that it was right to tax and defraud millions of American taxpayers to make the enormous boondoggle a reality is on the shakiest ground of all.

Mises Institute: Labor Unions And Freedom Of Association

This article by Gary M. Galles originally appeared on the Ludwig von Mises Institute website on Tuesday, March 4. 

Mandatory union membership and mandatory dues imposed on those who do not want to join are again at issue. On the heels of contentious “right to work” disputes in several states, the Supreme Court has recently heard arguments challenging an Illinois mandate requiring home health care workers to pay representation fees to a union they did not want. That case, Harris v. Quinn, has the potential even to challenge the Court’s 1977 Abood precedent upholding mandatory union dues for public sector workers. Such a result would be a victory for liberty.

Unions and their allies in Harris v. Quinn reiterate the claim, accepted in Abood, that “union security” rules are needed to prevent workers from unfairly opting out of paying for union services. But that claim, which portrays the issue as defending the property, contract, and freedom of association rights of unions (to be paid for services rendered to workers they represent), intentionally misrepresents the core issue, which is the liberty of workers and employers.

“Union security” rules clearly violate the freedom of workers and employers not to be forced to associate with certain groups against their will, a freedom unions ironically steamroll in the name of freedom of association, asserting it only for themselves despite its inconsistency with freedom of association for all. Consequently, unions must find a legitimate-sounding way of defending the coercion involved. That is where the free-rider argument comes in, which frames the issue as protecting legitimate rights, rather than the illegitimate use of government-granted coercive powers to impose employment terms violating government’s primary role: protecting individual rights.

Labor laws have made unions exclusive representatives for groups of workers. Therefore, unions assert that every worker must be forced to pay for his or her representation, or he or she will be able to “free ride” on those services. That is, workers’ rights must be abrogated to prevent non-members’ unethical behavior.

But free-riding on unions is not the fundamental problem. Mandatory exclusive representation in the form of monopoly unions imposed to the detriment of those who disagree (pro-union legislation exempted unions from antitrust laws) is the fundamental problem.

Given majority approval in a union certification election, current labor law interpretation requires all affected workers to submit to union representation and pay the union’s price for it. Those terms are imposed not only on workers who voted for the union, but also on those who supported another union, those who preferred remaining union-free, and those who did not vote (including those hired after the union is certified, who never get an effective chance to vote). Workers (or the agents they select voluntarily) and employers are prohibited from negotiating their own arrangements, including labor-management cooperation not controlled by the union and “yellow-dog” agreements requiring abstention from union involvement (which, before labor laws eliminated such rights, the Supreme Court called “part of the constitutional rights of personal liberty and private property”).

The supposed “free-riding” workers are those who would refuse union representation, but are not allowed to. They are harmed by the imposition, revealed by their unwillingness to pay the “price” for those services. They are not free-riding on the union. They are “forced riders,” required to abide by, and pay for, violations of their rights and interests, to benefit unions. That violation of workers’ (and employers’) rights, not their attempts to escape the harm unwanted representation imposes on them, is the central issue.

Despite their rhetoric, unions don’t really want to solve the “free rider” problem they hang their argument on, because it is easily fixable. But unions stop at nothing to prevent the solution. All a fix would require is ending mandatory exclusive union representation. If workers were allowed to choose representation by different unions or other agents, or to negotiate for themselves, the problem would disappear. Each union would only negotiate for its voluntary members, eliminating so-called free riders. But unions have fought with tooth, nail, and their members’ wallets to impose and maintain exclusive representation, knowingly harming all dissenters and thereby creating the “free rider” problem. And their recent behavior, as in Michigan, reveals how far they will go to maintain that power to circumvent competition in the labor markets they control, now largely in the public sector.

Despite unions’ deceptive framing of their government-granted exclusive, abusive powers in terms of freedom of association, real, general freedom of association in no way rules out workers forming unions. Scholars of the Austrian School have been at the forefront of making that clear.

As Walter Block put it in “Labor Relations, Unions, and Collective Bargaining: A Political Economic Analysis,” “unionism … admits of a voluntary and a coercive aspect. The philosophy of free enterprise is fully consistent with voluntary unionism, but is diametrically opposed to coercive unionism.” Voluntary unions are consistent with liberty because “if it is proper for one worker to quit his job, then all workers, together, have every right to do so, en masse.” And in his “The Yellow Dog Contract: Bring It Back!” he addressed this issue directly:

Are unions per se illegitimate? No. If all they do is threaten mass quits unless their demands are met, they should not be banned by law. But as a matter of fact, not a one of them limits itself in this manner. Instead, in addition, they threaten the person and property not only of the owner, but also of any workers who attempt to take up the wages and working conditions spurned by the union. They also favor labor legislation that compels the owner to deal with the union, when he wishes to ignore these workers and hire the “scabs” instead.

Ludwig von Mises, in the 1966 edition of his magnum opus, Human Action, also made the distinction between voluntary and coercive unions clear:

The issue is not the right to form associations. It is whether or not any association of private citizens should be granted the privilege of resorting with impunity to violent action. … The problem is not the right to strike, but the right — by intimidation or violence — to force other people to strike, and the further right to prevent anybody from working in a shop in which a union has called a strike.

Requiring union representation and endowing those unions with monopoly powers violates the liberty and freedom of association of dissenting workers, employers, non-union workers, and consumers. Undoing that abuse would fix every union free-riding and forced-riding problem. And it would be easy to do. As Murray Rothbard put it, in his 1973 For a New Liberty, “All that is needed, both for libertarian principle and for a healthy economy, is to remove and abolish these special privileges.” That is why Harris v. Quinn, which offers the Court another chance to see through the “free-rider” smokescreen to the central issue, presents an opportunity for a reform that would benefit the vast majority of Americans.

Gary M. Galles is a professor of economics at Pepperdine University. He is the author of The Apostle of Peace: The Radical Mind of Leonard Read.


Mises: Galbraith Was Right About Advertising

This column by Mises scholar Robert Batemarco was first posted at the Ludwig von Mises Institute’s website on February 25, 2014.

Now that I have your attention, rest assured that even when John Kenneth Galbraith got something right, he got it wrong. One of the signature ideas for which Galbraith is known is the Dependence Effect, which states that advertising convinces people that they need things that they don’t really need. In Galbraith’s own words, “If the individual’s wants are urgent, they must be original with himself. They cannot be urgent if they must be contrived for him. … One cannot defend production as satisfying wants if that production creates the wants. … The even more direct link between production and wants is provided by the institutions of modern advertising and salesmanship.” Galbraith uses this concept to undermine the foundations of microeconomics in the personal preferences of individuals.

Where Galbraith is right is that such salesmanship does indeed get people to demand things that are not in their best interests. Where he goes terribly wrong is in identifying producers in the marketplace as the primary perpetrators of this effect. In fact, it is the State that makes greatest use of salesmanship to obtain the consent of people for things that not only fail to make them better off, but usually do them harm. What makes the State’s behavior even worse is that when the State’s salesmanship fails, the State can fall back on the use of force to get people to satisfy the wants “that production creates.” Private firms, unless they are in league with the State, do not have that ability.

Examples of the Dependence Effect as implemented by the State abound. Let’s take just three.

Exhibit A is U.S. entry into World War I. That a majority of the American people did not want their sons sent to die in European trenches can be inferred from the facts that they re-elected a President who ran under the slogan, “He kept us out of war,” and that the number of volunteers was not sufficient for the U.S. to commit large armies to that conflict without conscription. After all, who among them felt an urgent desire to make sure that J.P. Morgan did not take losses on his British bonds, or to guarantee American citizens the right to travel safely on ships carrying weapons for the belligerent powers? Even the purported goal of “making the world safe for democracy” was not exactly something most Americans felt in the depths of their being until it was drummed into their heads by such agencies as the Committee on Public Information, if even then. Similar discrepancies between what most Americans wanted and what they were contrived to want are to be found in nearly every war the U.S. government has gotten its citizens into.

Exhibit B would be the Affordable Care Act. Many relatively young and healthy people do not feel that medical insurance, especially the expensive pre-paid medical plans that are currently mistaken for true insurance, is worth the cost to them. Since almost no amount of government propaganda was sufficient to get them to spend their money on something that far down their list of priorities, force was used to bring them into compliance, i.e., to get them to spend money on “things that they don’t really need.” At the same time, many other people had plans that they believed were meeting their needs and within their budgets. Here the government enacted policies that anyone could have readily foreseen (with thousands of words published to that effect to enlighten those to whom it was not obvious) would result in the removal of those policies from the market. So not only does the State see to it that people obtain products they do not want, but it also brings about the elimination of products people do want. Not only is the State the true locus of the Dependence Effect, but it is also the source of its equally evil twin, what we might call the Elimination Effect.

Finally, we have Exhibit C, a constantly devaluing currency. A fiat currency of diminishing value is hardly something that is innately and urgently needed by most people, yet we have been told over and over by leaders of the Federal Reserve System and mainstream economists, many of whom would like to be Fed chairman, that such money is all that stands between us and another Great Depression.

Yet during the nineteenth century, there were five periods of at least five years (including one of 25 years) in which the price level as measured by the GDP deflator fell, while real GDP grew at average annual rates ranging from 2.7 percent to 6.2 percent. The demand for a central-bank-created money that creates unsustainable booms and busts and generates Cantillon effects (transferring income from the middle class and the poor to wealthy cronies of those in high positions) is clearly not a “need” that originates with anyone but those on the receiving end of the wealth transfer. Depreciating money is another case in which government efforts to persuade people were unsuccessful and had to be enforced by legal tender laws and confiscation of the people’s gold.
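
To see why falling prices and real growth are compatible, recall how the deflator links the two measures:

\[
\text{real GDP} = \frac{\text{nominal GDP}}{\text{GDP deflator}/100}
\]

With purely illustrative numbers (not from the article): if nominal GDP held steady at 100 while the deflator fell from 100 to 95, real GDP would rise from 100 to about 105.3. Deflation and real growth can, and in the nineteenth century did, coexist.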

These examples could be multiplied many times over. The TSA and its privacy-violating “security theater,” NSA spying on law-abiding citizens, subsidized artwork of dubious value, a War on Poverty that generated numerous behaviors that perpetuated poverty, and a food pyramid that, when followed, seems to lead to more obesity rather than less, are just a few on a list that seems endless. In every one of these cases, people came to accept programs that satisfied no intrinsic need because of the hype generated by court economists or court historians or, failing that, had the programs shoved down their throats by force. Thus, the Dependence Effect is alive and well. Only its presence is most strongly felt in the government sector. Indeed, while John Kenneth Galbraith points the finger elsewhere in his explanation of the concept, he himself is one of the court economists who perpetrated it, as nearly his entire body of work persuaded many people they had needs “not original with themselves.”

Mises Institute: Colorado’s New Cannabis Economy

This article, written by Mises Daily editor Ryan W. McMaken, was originally published by the Mises Institute on Feb. 20. 

What would have happened if one or two States had somehow managed to legalize alcohol during Prohibition? Most likely, those States would have become centers of entrepreneurship, with retail outlets, medicines and innovation in equipment, machinery and other forms of capital serving the alcohol industry.

With the recent legalization of recreational cannabis use in both Washington State and Colorado, we’re able to see a similar experiment in action.

While the 18th Amendment prohibiting alcohol production and sales precluded State-level legalization, Federal drug laws enjoy no such Constitutional backing. On Nov. 6, 2012, Amendment 64 to the Colorado State Constitution was approved by Colorado voters in the form of a popular ballot initiative. The amendment mandated that “the use of marijuana should be legal for persons twenty-one years of age or older and taxed in a manner similar to alcohol.”

Moreover, the amendment mandated that industrial hemp be legal and that “all parts of the plant” plus seeds, oils, extracts and other forms of cannabis be legal as well.

Also legalized were “marijuana accessories,” including “any equipment, products or materials of any kind which are used, intended for use, or designed for use in planting, propagating, cultivating, growing, harvesting, composting, manufacturing, compounding, converting, producing, processing, preparing, testing, analyzing, packaging, repackaging, storing, vaporizing, or containing marijuana, or for the ingesting, inhaling, or otherwise introducing marijuana into the human body.”

The amendment was certified by the Governor of Colorado on Dec. 10, 2012, and the recreational use of cannabis has been legal under Colorado law ever since.

Amendment 64, with all its language covering “equipment, products, [and] materials,” hints at the economic complexity that has always existed behind recreational drugs, but which now, in a limited case in a limited jurisdiction, has emerged from the black market and underground operations into the light of the larger marketplace. The cannabis market is not simply a matter of putting some leaves in small bags. The new legal market, instead, is a market with far better quality control and accountability on the part of merchants. And it means economic growth for many industries that have never traditionally been connected with recreational drugs.

Supporting the cannabis merchants are a wide variety of enterprises from distribution warehouses to financial institutions, attorneys, short-haul truckers and more. The new demand for commercial real estate to serve the needs of both producers and retail outlets has created a need for real estate brokers who can specialize in the cannabis industry, while attorneys assist with the drafting of legal documents and accountants must be hired to keep track of the money. Unfortunately, many of these industries must continue to be wary of Federal law, even when State law is clear on the matter. Banks, specifically, which are regulated at the Federal level, only recently were given the green light by Federal regulators to open accounts for cannabis-related businesses. The legality of this sort of banking remains on shaky ground, however, and many banks remain loath to participate, thus crippling the financial and banking opportunities for the cannabis industry in Colorado and Washington.

All of those private-sector actors, from retail clerks to insurance brokers, are making money from the cannabis economy; yet many business leaders and politicians still mock and disregard this new entrepreneurial activity with a dismissive wave. This includes the Metro Denver Chamber of Commerce, which unconvincingly claims to work for the expansion of business opportunities in the region. The Chamber chose to support the ongoing heavy-handed prohibition of cannabis businesses because legalization in Colorado, according to the Chamber, is an effort “to profit from the legalization of marijuana at the expense of… children.”

On what grounds is this attitude by politicians and anti-cannabis activists justified? Their activism against cannabis is, the overwhelming majority of the time, purely arbitrary.

Suppose, for example, that a pro-cannabis activist were to compile information on all the jobs, tax revenue and capital created in Colorado and present it to other States as an argument for legalization. We can probably guess what would happen. In many cases, the information would be dismissed and ridiculed, with politicians and chamber of commerce executives claiming that they don’t want such unhealthy and “dangerous” products in their State and that people who consume such goods are lazy or criminal.

Yet such people never apply similar analysis to other products that are arguably far more dangerous, costly and counterproductive than anything turned out by the cannabis industry.

There can be little doubt that officials in Texas and Georgia, for example, are extremely happy about having Frito Lay and Coca-Cola in their States, respectively. In each case, State officials and their cronies in the business sector no doubt sing the praises of all the jobs and economic activity brought to the State by the snack-food and soda industries.

Yet one could easily argue that these industries are far more detrimental to American consumers than cannabis is or has ever been. Obesity-related conditions kill nearly one in five Americans, and cannabis kills almost none. Using the logic of the opponents of freedom in this matter, could not one argue that such industries provide nothing more than an opportunity for people to ruin their health while receiving virtually no nutrition in return whatsoever? How many hundreds of millions of taxpayer dollars are wasted each year through Medicare and Medicaid to finance the diabetes drugs for the soda and snack-food consumers (including children) who are sent to their graves far more quickly by the good people at Frito Lay and Coca-Cola? Indeed, Coca-Cola Corporation once explicitly sought to convince consumers to drink fewer healthy beverages such as milk and water, and drink soda instead.

Likewise, Missouri officials don’t seem keen on disparaging Budweiser beer in spite of that product’s many connections to domestic violence, alcoholism and fatal car accidents.

We can be fairly certain that the Metro Denver Chamber of Commerce would fall all over itself to welcome any of those industries to Colorado.

Meanwhile, however, the cannabis refugees who move to Colorado to buy real estate, invest in local enterprises or seek better healthcare for their children are simply regarded as borderline criminals and not as serious economic actors. The industry that attracted that capital, both human and otherwise, is to be regarded with suspicion.

I don’t mention drunk driving and obesity because I think government can or should solve those problems, of course. I simply point out that the standard by which the cannabis-haters measure the cannabis industry is rarely applied to other industries that can be shown to have economic benefits, but which can also be shown to impose heavy costs on society in other forms.

And naturally, none of this analysis even touches on the many other benefits not directly connected to the cannabis economy. For example, those of us who have no interest in consuming cannabis no longer have to worry that some imprudent house guest might leave cannabis on our property or in our cars, thus exposing us to criminal charges (at the State level). The industrial hemp economy, which we don’t even have room to discuss here, offers myriad other economic benefits totally unrelated to recreational drug use.

In a free society, it’s not up to business “leaders,” politicians, or the arbiters of public decency to decide which industries shall be lauded and welcomed, and which shall be ignored and shunted aside. It is the market, which far more reliably reflects the true preferences and desires of the population than any political process, that is the one objective and honest measure of what it is that consumers and taxpayers want. If consumers don’t want the cannabis industry in Colorado, it will surely shrink to insignificance. If, on the other hand, consumers do in fact want it, lawmakers possess no economic or moral grounds to declare otherwise.

–Ryan W. McMaken

Ryan W. McMaken is the editor of Mises Daily and The Free Market.


Business Journalist Explains Democracy And Government Spending

This article by Gregory Bresiger originally appeared on the Ludwig von Mises Institute website on Feb. 18.

Over the past 80 years or so, governments have been expected to provide more and more social services. That means bigger and bigger bills for the taxpayers of this generation and generations unborn. Since governments almost constantly run in the red, they often resemble cocaine addicts desperately looking for the next fix. The fix, in the case of governments, is the perpetual need for more money, no matter how much wealth the engine of a strong economy may be generating.

One of the delusions of any democratic government is the “other guy will pay” syndrome. Usually, in mature welfare state democracies, lawmakers look toward election cycles and understand that there is never enough money to back their promises. So they pass the bills to the future by printing money and issuing bonds that won’t come due until after their elections have been won.

However, in most democracies, to quote former pro football coach George Allen, “The future is now.” Tens of millions of people in the United States, Europe and Japan are retiring now after a lifetime of paying taxes into flawed government retirement funds. Democracies are facing problems keeping all the promises of former pols, most of whom now enjoy fat government pensions while the taxpayers struggle to pay the bills they left behind. So paying the bills is a perpetual problem for pols facing the next election.

For example, in the United States, Social Security and Medicare “trust” fund surpluses have been arrogated over the years by both right- and left-wing governments. They used them to pay off political debts and make deficits seem smaller. However, President Bill Clinton implicitly conceded the scam. At the end of his Presidency, he was urging lawmakers “to save” Social Security. Why did it have to be saved? Where had years of high payroll taxes gone?

Luckily for pols, most voters don’t seem to understand that the overspending of the past now threatens their lifestyles and the value of their currencies. The scam goes on as governments look for new taxes to make good the promises of past governments. So today the latest fiscal gimmick is a rehash of an old scheme: “The rich will pay.”

The implication of this game is that the rest of us will enjoy a free, or low-cost, ride, while a vast fortune of wealth goes into government coffers. Government services — everything from supposedly free medical services to free quality education to superb transportation services and on and on — will be provided at little or no cost because the rich — whoever they are — will pay.

Here in New York City, a new Mayor with strong teacher union connections wants to extend pre-school public school programs by assessing a higher tax on those making $500,000 or more a year. At the national level, President Barack Obama has stressed that those making $250,000 or more a year should pay higher taxes. The argument is also based on the idea that the government should step in and cure the problem of “income inequality.”

Using this kind of thinking, one might bar certain teams from winning more championships (the New York Yankees, Real Madrid, the Green Bay Packers, the Boston Celtics) because they’ve won many more championships than most other teams, so that’s unfair. The logic of the rich-must-pay pols is that the league, or the government, must correct inequalities. But inequalities among humans are many and are impossible to define, since every individual is unique. These kinds of policies have been discussed and, in some cases, tried; not only do they not work, they hurt our economy. In effect, they tell the successful: “You’re doing too well. We must do something about you.”

It reminds me of post-World-War-II tax policy, when progressive tax policy dominated the system, before the John F. Kennedy Administration tax cuts of the early 1960s. In the United States in the 1940s and 1950s, we actually had marginal tax rates of 94 percent. Think of that. If you reached a certain point and were able to keep only 6 cents of every additional dollar you made, why would you make that extra dollar? This is what happened in the 1940s and 1950s. High-income people would stop working late in the year when they were coming close to the 94 percent bracket.
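
A minimal sketch of that arithmetic (the 94 percent rate is from the text; the $10,000 raise is a hypothetical figure):

```python
def take_home(extra_income: float, marginal_rate: float) -> float:
    """Return what a worker keeps from income earned above the top bracket threshold."""
    return extra_income * (1.0 - marginal_rate)

# At the 94 percent top marginal rate of the 1940s and 1950s, an extra
# dollar leaves 6 cents, and a hypothetical $10,000 raise leaves $600.
print(take_home(1.0, 0.94))       # ~0.06
print(take_home(10_000.0, 0.94))  # ~600.0
```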

How did that help anyone?

Obviously, it hurt our economy as talented people, in a modified Atlas Shrugged scenario, withdrew for part of the year. What about the rich? I want the same of them that I want of everyone else. I want them to continue to spend and invest as much as they want without worrying about what J.S. Mill called “a success tax.” Whether they consume or help build production by starting businesses, they generate the “jobs” that our pols are perpetually baying for during these days of high unemployment and weak job growth.

But what about the rest of us, those making $250,000 or less? I doubt that raising taxes on the rich, who make up a small percentage of society, will make much of a difference. Consider that many Western governments are running deficits in the hundreds of billions of dollars each year. The whole concept of soaking the rich is silly and counterproductive. First, let us consider the people who already have wealth. If governments keep raising taxes once income reaches a certain level, these people can simply stop working just before the penalty rate is triggered. Unlike the rest of us, the rich don’t have to work for taxable wages. They already have substantial wealth.

And for the rest of us, middle and low income: does anyone actually think that, once the government collects still higher taxes from the big earners, the new money flowing to our central governments will buy the rest of us a break on our own taxes?

You can stop laughing now.

The counterargument to the rich-will-pay syndrome is this: First, even the highest marginal tax rates on the rich will never generate the amount of money the government expects. As taxes go up, people don’t work as hard or, in the case of people with modest incomes, they work under the table. That’s a tremendous problem in a high-tax nation such as Spain. Second, the history of so many central government programs is that, even when they generate a lot in taxes, tremendous amounts of the proceeds are eaten up in administrative costs.

Bureaucracies at the highest levels of governments are damn expensive to maintain. That is true whether one is speaking of aircraft carriers or Social Security. Indeed, in the case of the latter, I once heard an economist, Jeffrey Burnham, say at a conference that whenever Social Security has had a big surplus, governments have helped themselves to it. Here’s another example of the you-send-it-and-we’ll-spend-it-and-then-some philosophy that dominates democratic governments. In the 1980s in the United States there was a demand that toxic waste dumps be cleaned. The government, through its taxing power, raised hundreds of billions of dollars. What happened to the dump sites? Few of them were cleaned up.

What happened?

Most of the money was spent on administrative costs. The government shouldn’t raise taxes on the rich, or on the rest of us. It should cut taxes by closing down whole departments of government and selling government assets. Let people, all people, take home more of their hard-earned money. With Social Security and other government welfare programs facing incredible funding gaps, it is more important than ever that people have private savings and investments. I mean assets under their control, not dependent on a government program that can change the payment levels with the stroke of a pol’s pen.

Let people not depend on the other guy, or on vote-hungry pols, to pay their bills. Let people build their own assets and take control of their lives through a remarkable system — more private property.

Gregory Bresiger

Gregory Bresiger is an independent business journalist who writes for the business section of the New York Post and Financial Advisor Magazine. His new book is MoneySense. Visit his blog at GregoryBresiger.com.


Edward Snowden, The NSA And The U.S. Courts

This essay, written by Ben O’Neill, was originally published by Mises Daily on Feb. 12.

Many commentators following the National Security Agency scandals have been eagerly awaiting the recommendations of the U.S. government task force on the matter, and the proposed reforms to be implemented by President Barack Obama to bring the spy agency under control. If you’re interested in this kind of thing, you can watch the President’s recent speech and nod your head approvingly when he talks about the “tradition of limited government” in the United States and the Constitutional limits his government is at pains to respect. Oh, and just for good measure, while you’re listening to this magnificent oration being replayed to you on YouTube, the NSA will be recording your Internet browser history, or possibly even hacking your computer.[1] If you decide to click on the “like” or “dislike” buttons at the bottom of the video, that little nugget of political information can be added to their “metadata” archives, along with the rest of your Internet activities. In fact, in the 42 minutes it will take you to watch the President’s speech, the NSA will have vacuumed up about 40 million records of Internet browsing from around the world.[2] Perhaps yours will be among them.

It is instructive to note that all of this will be done by the same government that operates under an explicit Constitutional directive purportedly protecting people from “unreasonable searches and seizures” and specifying that “… no Warrants shall issue, but upon probable cause … and particularly describing the place to be searched, and the persons or things to be seized.”[3] Indeed, one of the most instructive aspects of the NSA scandal is the way the agency has succeeded, for an extensive period of time, in warding off legal challenges to the Constitutionality of its surveillance programs. This is instructive from the point of view of libertarian theory, since it illustrates the degree to which the much-vaunted “checks and balances” within the state apparatus, highlighted in the recent Obama speech, are really illusory. In practice, the judicial and executive branches of government tend to act as a legitimizing mechanism for the actions of government agencies, with rare “checks and balances” and “reforms” coming only when the legitimacy of the system is under potent attack from some outside source.

The NSA has taken great advantage of the symbiosis between the executive and judicial branches of the state, having implemented long-running programs of lawless surveillance and phony judicial review. The modus operandi of the agency in these matters has been to hide behind various secrecy requirements that have been used to hamstring attempts at open judicial review, ensuring that scrutiny of its programs and their legal basis is kept away from the prying eyes of the public. This has included the use of secret courts, where other parties are not represented and are not privy to proceedings. It has also included the use of secrecy requirements in evidence controlled by the NSA, which prevents people from showing that they have standing to challenge the agency’s programs in court or mandates that such matters are “state secrets,” beyond the scope of judicial review. And of course, it has also included an extensive regime of secret judicial rulings and secret “law,” with proceedings conducted behind a legal wall chiseled with those two ominous words: top secret!

In fact, the Obama speech on NSA reform is but a sideshow to the real cracks that are starting to appear in the NSA’s legal fortifications. More important is the recent preliminary ruling in the case of Klayman v. Obama, which has opened the actions of the NSA up to some long-overdue judicial scrutiny in the public courts. In the preliminary ruling in December, the U.S. District Court for the District of Columbia found that the NSA’s mass collection of metadata, as shown in its own leaked documents, “most likely” violates the 4th Amendment to the U.S. Constitution. (Since this was a preliminary hearing, the judge was unable to make a more definite ruling at that time.[4]) In response to a preliminary application by plaintiffs seeking an injunction to stop the NSA from collecting their metadata, Judge Richard Leon issued a scathing judgment against the NSA, dismissing several of its arguments as lacking common sense, and describing its mass surveillance technology as “almost-Orwellian.”[5] The Klayman case was followed almost immediately by a contrary ruling in ACLU v. Clapper, where Judge William Pauley examined the same legal precedents and arguments and found that there is no Constitutional protection against the mass collection of metadata by the NSA. That ruling relied heavily and uncritically on government reports on terrorist threats to the United States, and claimed that the NSA surveillance is crucial in combating terrorism.[6]

So there you have it, the system is now in action! Obama is promising reforms! The courts have stepped in! The judges are restless! All hail the finely constructed checks and balances! If all goes well, the plaintiffs in Klayman v. Obama and ACLU v. Clapper will have their final hearing in court, and the NSA will have its actions assessed against the strictures of the U.S. Constitution. Obama is promising more judicial oversight and a “public advocate” for the NSA court system. Hurrah!

But still, one is left with an uneasy feeling. After all, this is far from the first case in which plaintiffs have sought to challenge the legal basis of the NSA programs, and it has been a long time since Obama first took command of the national security apparatus. So what has changed? Why are there now promises of new reforms? Why has there been a breakthrough in this case, but not in previous cases of the same kind? For seven years, the NSA’s PRISM program was under the oversight of the same judiciary and subject to the same “checks and balances” as now. For most of those years, it was under the direction of the current President. Why is it that the program now ruled “most likely unconstitutional” in one case proceeded unimpeded for so long, under the very same system of “oversight” and “checks and balances,” while challenges from previous litigants were shot down in flames in case after case?

Well, we all know what has happened to make such a difference: Edward Snowden happened! The one antidote for the previously operating regime of secret “law” has been the leaking of classified documents from within the NSA, revealed to the public by this whistle-blower and “lawbreaker.”[7] Concerned that the NSA was acting contrary to the U.S. Constitution, Snowden released a treasure trove of documents to the media, setting out the unlawful activities of the NSA, all verified in its own words. The Klayman case represents the first post-Snowden case against the NSA, a situation where the judiciary now has to come to terms with a hostile public, which is well aware of what is hidden behind the legal walls erected around the NSA. The recent Obama speech also represents the first major reaction of the U.S. government to the prospect that it may receive an adverse Constitutional ruling discrediting its pretensions to legal observance.

Many have rightly regarded the Klayman case as a major breakthrough in judicial oversight of the NSA; but to put it more accurately, it represents the beginning of judicial oversight. In previous cases of this kind, the NSA has managed to ward off Constitutional challenges to its surveillance programs by arguing that all would-be plaintiffs lack “standing” to sue, and by appealing to the classified status of its secret programs and the privilege of “state secrets.” It has hidden behind a regime of secret judicial orders and rulings, all inaccessible to the public. The Klayman and ACLU v. Clapper cases are notable because they are the first of their kind in which the plaintiffs have been allowed to proceed with their arguments against the NSA’s activities and the courts have been allowed to examine the legal status of those activities. This has been possible only because the leaks from Snowden allowed the plaintiffs to show that they had personally been subject to surveillance, something that had been impossible in previous cases brought against the NSA.

There is certainly cause to be cheerful about the recent court ruling in Klayman, as it is the first instance where the NSA programs have been subjected to Constitutional scrutiny in a public court. In view of the facts of the case, the preliminary findings of Leon are extremely sensible and, indeed, ought to be inescapable.[8] However, the case is far from over, with appeals expected to higher courts, a final ruling on the matter and then probably more appeals. One legal commentator has suggested that the trial judge’s ruling in the Klayman case is “… best understood as a kind of [“friend of the court”] brief to the Supreme Court …” [9]

In view of this likely path of appeal, it is instructive to understand the complicity of the Supreme Court in the previous regime of secrecy that has been perpetrated by the NSA. The ultimate arbiter of Constitutionality in the U.S. legal system has shown itself, in past cases, to be highly protective of the government in these matters and has previously assented to some quite absurd doctrines and arguments to prevent any meaningful judicial review. The court has repeatedly taken assurances from the U.S. government that the opportunity for Constitutional review would arise in the future, but it has consistently sided with the government’s assertions that it cannot arise for this particular plaintiff, or this one, or this one. This has meant that, while the illusion of judicial control has been maintained, the court has taken a policy of de facto immunity from Constitutional scrutiny. As Larry Klayman put it, “Most judges are just ‘yes men’ who rubber-stamp the federal government’s agenda.”[10]

Whether the challenge in Klayman v. Obama ultimately succeeds or fails, the fact that it is being heard at all is an initial cracking of the legal barriers that the U.S. government has erected to cover its own lawlessness. While there is some cause for buoyancy, there is just as much reason to be disgusted that scrutiny of the illegal programs of the NSA has taken so long to get a genuine hearing before the public court system, and that the man responsible for allowing this to occur continues to be branded a criminal and a traitor, and threatened with imprisonment or death, by the very U.S. government whose crimes he has exposed.

Ben O’Neill

Ben O’Neill is a lecturer in statistics at the University of New South Wales (ADFA) in Canberra, Australia. He has formerly practiced as a lawyer and as a political adviser in Canberra. He is a Templeton Fellow at the Independent Institute, where he won first prize in the 2009 Sir John Templeton Fellowship essay contest.


Endangered Species, Private Property And The American Bison

This essay, written by Benjamin M. Wiegold, was originally published on Mises Daily on Feb. 10.

The political debate over what should be done about endangered species seems to be continuing without end. Calls are already emerging this year to place a multitude of species on the endangered list, including the emperor penguin, the Arizona toad, the African lion and many others.

The current endangered species list of both plants and animals numbers more than 9,000. This is considerably higher than the 78 different species mentioned under the original Endangered Species Preservation Act of 1966. Nearly 50 years later, 72 of these 78 species still remain on the list, with only two recovered, three extinct and one removed due to an error in the data.

Since Richard Nixon signed the comprehensive Endangered Species Act of 1973, intended to “halt and reverse the trend toward species extinction, whatever the cost” (emphasis added), only 30 of these 9,000 species have actually recovered, with 10 having gone extinct. This gives the Act, enforced by both the U.S. Fish and Wildlife Service (FWS) and the National Oceanic and Atmospheric Administration (NOAA), an abysmal success rate of less than 1 percent, despite an average yearly budget of nearly $2 billion.
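
That sub-1-percent figure follows directly from the numbers just given:

\[
\frac{30\ \text{recovered}}{9{,}000\ \text{listed}} \approx 0.33\%
\]

Even crediting the Act with every one of those recoveries, fewer than one listed species in 300 has come off the list by recovering.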

Also signed in 1973 by 80 different governments was the Convention on International Trade in Endangered Species of Wild Fauna and Flora (the CITES treaty), designed to “ensure that international trade in specimens of wild animals and plants does not threaten their survival.” Examples of the trade it targets include the illegal rhino horn and elephant tusk markets.

In a recent move that is likely to come as a surprise to many animal lovers around the world, the South African government is asking the international community to legalize the rhino horn trade as a means to save the animal and to fight poaching.

The justification for this is clear: By banning the sale of the horns and thus making them less available, the price for them has gone through the roof, creating incredible monetary incentives for poachers. Meanwhile, the rhino has been stripped of a major source of its value in legal markets, which has caused private owners of rhinos to question the profitability of providing heavily for the animal’s security.

Such perverse forces are at play regarding the entire endangered species list, as the very status “endangered” makes it nearly impossible to make any money from the animals.

Although some animal-protection activists no doubt abhor the idea of profiting from animals, making such profits illegal has clearly done nothing to protect those on the endangered list, as it appears that so few of these species will ever be delisted. It is far more likely that the only way to save them is through free markets and privatization, as was seen in the case of the American bison, also known as the American buffalo.

The American Bison

One major reason for the dramatic decrease in bison population figures from the tens of millions that lived in 1800 to only a few hundred living in the 1880s is what economists call a tragedy of the commons.

When property is privately owned, there are incentives to save and conserve resources because of the possibility that a more opportune time for their employment may appear down the road. To the extent that such resources generate income for their owners, plans will be made so as to preserve them into the foreseeable future. In other words, an individual who sells buffalo products for a living will be in dire financial straits if the entire bison population is eradicated.

A tragedy of the commons emerges wherever there is common (public) access to property. Under such conditions, attempting to conserve the resource imposes heavy costs on whoever conserves because, after all, if you don’t use it, someone else will. The tragic aspect of common ownership, then, is that the resource in question becomes overused at unsustainable levels, as was seen with the wild buffalo.
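
The incentive gap between the two regimes can be sketched with a toy harvest model. Every number in it is hypothetical (the herd size, a 20 percent annual growth rate, the per-hunter take); it is only meant to illustrate why a sole owner harvests sustainably while open access invites a race to deplete:

```python
def simulate_herd(herd: float, years: int, hunters: int, growth: float = 0.2) -> float:
    """Toy model: herd size after a number of hunting seasons.

    A sole owner (hunters=1) internalizes the herd's future value, so he
    harvests only the annual natural increase. Under open access, each
    hunter takes a share of the standing herd, since any animal he spares
    will likely be taken by a rival.
    """
    for _ in range(years):
        grown = herd * (1.0 + growth)                    # natural increase
        if hunters == 1:
            harvest = grown - herd                       # take only the increase
        else:
            harvest = min(grown, hunters * 0.1 * grown)  # rivals race for the stock
        herd = grown - harvest
    return herd

print(simulate_herd(1000.0, years=20, hunters=1))   # 1000.0 -- herd stable
print(simulate_herd(1000.0, years=20, hunters=50))  # 0.0 -- herd wiped out
```

With one owner, the herd holds steady because any animal spared today is still his tomorrow; with fifty rival hunters, it collapses in the first season because any animal spared is simply taken by someone else.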

For hundreds of years, the Native Americans hunted the buffalo for food and resources; but the human population was so small at the time that there were no concerns of endangering the animal.

Everything changed with the Industrial Revolution, when not only was the American population massively expanding, but so was the demand for buffalo products. Of course, most of the plains buffalo were located far enough west that the land was largely not owned by whites, resulting in an indiscriminate slaughter. The years 1872 and 1873 alone recorded more than 3 million bison killed.

Furthermore, the railroads assisted in this annihilation because buffalo herds were at one point so large that they were known to delay trains for days on end. This is another example of the commons problem, as was the increasing prevalence of hunting for sport. If the buffalo were privately owned, interfering with the railroads would constitute trespassing, a liability the bison owners would have an incentive to avoid.

The second major reason for the near extinction of the buffalo was State-controlled land and all of the moral hazards that accompany it.

Official Federal government policy also helped speed the decline of the bison. Beyond the vast tracts of unsettled land, considerable Native American populations still called the region home. The U.S. government, however, wanted the land for itself and proceeded to force the natives onto so-called “reservations.” War naturally broke out, and the U.S. military, led by the bloodthirsty Gen. William Sherman, adopted a scorched-earth policy that included attempts to eradicate the bison herds from the plains.

But even disregarding such ill-intentioned people, those within the government who truly wanted to preserve the buffalo were unsuccessful, too.

Although Idaho, Texas, New Mexico and other States passed laws similar to the Endangered Species Act, they often did so only after it was too late and the buffalo were already gone. In 1872, Yellowstone National Park was opened as a safe haven; but poaching remained a substantial problem. Henry Yount, remembered as the first national park ranger for his time at Yellowstone, resigned after only 14 months on the job because he knew his efforts alone were hopeless.

Thankfully for the bison, Charles Goodnight, James McKay, William and Charles Alloway, as well as a host of other private ranchers, began to scoop up wild buffalo throughout the 1860s and ’70s. From 1884 to 1902, the bison population in Yellowstone actually decreased from 25 to 23; but also by 1902, an estimated 700 were privately owned. This trend has continued for more than a century. By the 1990s, the ratio was 25,000 publicly owned to 250,000 privately owned bison.

Conclusion

Whether speaking of the South African government, the FWS and the NOAA, or the U.S. government in general, the continual failures to protect endangered species emerge as the result of an allocation problem.

Whereas private owners use basic accounting to determine if they are making profits or suffering losses, government activities are not subject to such constraints because the state is able to externalize its costs onto others through taxation. Private firms that displayed such an unbroken record of failure would have closed their doors years ago.

The bison experience shows us that not only are governments often responsible for putting many species in danger in the first place, but that their efforts at species preservation cannot match the success of private ownership.

–Benjamin M. Wiegold

Ben Wiegold is a staunch anarcho-capitalist and has been educating himself through the Mises Institute since 2011. He is also a self-taught musician who has played with a number of small bands in the Chicagoland area. See Benjamin M. Wiegold’s article archives.


Mises Scholar: Labor And Energy Regulations Take Us To The Cleaners

This article by Christopher Westley originally appeared on the Ludwig von Mises Institute website on Friday, February 7.

My clothes dryer went bust the day after Christmas, leading to one of the more common frustrations we face in the modern nation-state.

You see, there was a time when, if one’s dryer broke, the owner faced two options: have it repaired or buy a new one. The owner would weigh the costs and benefits of each, make a decision, and then move on to other things. But those days are gone. Now when an appliance goes on the fritz, a dreaded third option is increasingly being foisted upon us: that of fixing it yourself.

Now, self-repair was probably a more common choice back during my grandfather’s generation. But as the economy expanded and per capita incomes grew, the time spent repairing one’s own appliances meant less time working in the market. Toward the end of his career, my grandfather — the owner of a sheet metal business in Waukesha, Wisconsin — probably paid others to repair his appliances so he could better focus on serving his own customers.

This is one example of how the expansion of wealth made possible under capitalism leads to the creation of new jobs that did not exist previously. We don’t know what alternatives to appliance repair these repairmen would have chosen as careers had the repair market for labor not opened up — and even this development would never have occurred without entrepreneurs from previous generations introducing appliances to the home in the first place — but we can be sure it would have brought them less benefit. If this were not (apodictically) true, they would have chosen those alternatives instead of their actual careers in appliance repair.

With my own broken dryer, I could have dipped into savings and bought a new low-end model for about half a grand, but this was an option I wanted to avoid. I could have contacted a repair service, but the cost could have easily reached the price of a new machine.

Both outcomes result from restrictions on market forces that hinder both the supply of dryers and the availability of repair. “Energy Star” compliance standards on appliances have increased production costs so as to cartelize this industry while providing only negligible benefits in terms of power efficiency. Meanwhile, labor market interventions, especially on the entry-level side of the market, have reduced the supply of repairmen, allowing existing repairmen to command higher wages than they otherwise would. For people (like myself) who do not live in a big city, even finding a repairman can be difficult.

The end result: The effects of government failure were reaching into my home and savings. Worse, they were forcing me to embrace the dreaded third option.

I chose to fix my dryer myself.

This decision was not made gleefully. I am not a tool guy. My comparative advantages tend not to include ratchet sets and elbow grease. What’s more, I resented being forced to teach myself skills my grandfather and his generation gladly gave up when market forces developed to a point at which they could. My situation smacked of societal devolution poking its cloven hoof into my laundry room.

But I plowed ahead and soon learned I was far from being alone in my predicament and that, in fact, huge masses of individuals across the country are being forced by similar artificial circumstances to take on last-minute appliance repair training against their will.

In response, the Internet is today chock-full of repair manuals that can be accessed after a few clicks in a search engine, along with thousands of repair-oriented YouTube videos uploaded by heroic experts explaining how to diagnose and then repair seemingly any appliance problem.

Tasks such as replacing a dryer motor or belt become much less daunting when you can watch someone else do it, step-by-step, in a video that’s posted by a professional and played by a novice — all for a zero price. While I watched several such videos, it occurred to me that they were part of a vast, spontaneous, decentralized, and unregulated training system that has emerged to counter the adverse effects of government intervention. Which led me to ponder (as I unplugged my dryer and pulled it away from the wall): What similar market innovations developed in the former Soviet Union that made life somewhat bearable there? As the size and scope of government in the United States grows, what further workarounds will people be forced to instigate to make life bearable here? Will subversive videos explaining do-it-yourself surgery pop up once prices in health care are completely abolished?

These spontaneous training systems are far from perfect. They are clearly second-best, with first-best being those market options made unseen and unattainable by violent interventions in market forces. But they are more than good enough. I’m proof of it. If an economist like me, heretofore unaware that ratchets are measured in both inch and metric sizes, can replace a dryer belt and motor with the help of a few hastily found YouTube friends, then these systems serve the social good and then some.

They especially serve the poor, who suffer the most when government restricts market choice. May all of us who use these resources apply at least some of the funds we economize to fighting back against the overweening government that makes them necessary in the first place.

Christopher Westley is an associated scholar at the Mises Institute. He teaches in the College of Commerce and Business Administration at Jacksonville State University.

Mises Scholar Rates Bernanke’s Legacy: A Weak And Mediocre Economy

This essay, written by John P. Cochran, was originally published by Mises Daily on Feb. 4.

As Chairman Bernanke’s reign at the Fed comes to an end, the Wall Street Journal provides its assessment of “The Bernanke Legacy.” Overall the Journal does a reasonable job on both Greenspan and Bernanke, especially compared to the effusive praise from the usual suspects: supporters of monetary central planning. The Journal argues that when assessing Bernanke’s performance it is appropriate to review it “before, during, and after the financial panic.”

While most assessments of Bernanke’s performance as a central banker focus on the “during” and “after” financial-crisis phases, with much of the praise based on the “during” phase, the Journal joins the Austrians and John Taylor in an unfavorable assessment of the more critical “before” period. It was in this period that the Fed generated its second boom-bust cycle of the Greenspan-Bernanke era. In the Journal’s assessment, Bernanke, Greenspan, and the Fed deserve an “F.” While this pre-crisis period mostly fell under the leadership of Alan Greenspan, the Journal highlights that Bernanke was the “leading intellectual force” behind the pre-crisis policies. As a result of these too-loose, too-long policies, just as the leadership of the Fed passed from Greenspan to Bernanke, the credit boom the Fed “did so much to create turned to mania, which turned to panic, which became a deep recession.” The Journal’s description of Bernanke’s role should be highlighted in any serious analysis of the Bernanke era:

His [Bernanke’s] role goes back to 2002 when as a Fed Governor he gave a famous speech warning about deflation that didn’t exist [and if it did exist should not have been feared]. He and Mr. Greenspan nonetheless followed the advice of Paul Krugman to promote a housing bubble to offset the dot-com crash.

As Fed transcripts show, Mr. Bernanke was the board’s intellectual leader in its decision to cut the fed-funds rate to 1% in June 2003 and keep it there for a year. This was despite a rapidly accelerating economy (3.8% growth in 2004) and soaring commodity and real-estate prices. The Fed’s multiyear policy of negative real interest rates produced a credit mania that led to the housing bubble and bust.
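The “negative real interest rates” charge is simple arithmetic under the approximate Fisher relation (real rate equals nominal rate minus inflation). Here is a minimal sketch; the 1 percent fed-funds target is from the period the Journal discusses, while the inflation figure is assumed for illustration rather than taken from a specific CPI series.

```python
# Approximate Fisher relation: real rate = nominal rate - inflation.
# The inflation number below is assumed for illustration only.

fed_funds_nominal = 0.01  # 1% target, mid-2003 to mid-2004
inflation = 0.03          # assumed ~3% price inflation

real_rate = fed_funds_nominal - inflation
print(f"real fed-funds rate: {real_rate:.2%}")  # -2.00%: borrowing is subsidized
```

A negative real rate pays people to borrow, which is the mechanism behind the credit mania the Journal describes.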

For some of the best analysis of the Fed’s pre-crisis culpability, one should turn to Roger Garrison. In a 2009 Cato Journal paper, Garrison characterizes Fed policy during the “Great Moderation” as a “learning by doing” policy which, based on events post-2003, would be better classified as “so far so good” or “whistling in the dark.” The actual result of this “learning by doing” policy is described by Garrison in “Natural Rates of Interest and Sustainable Growth”:

In the earlier episode [dot-com boom-bust], the Federal Reserve moved to counter the upward pressure of interest rates, causing actual interest rates not to deviate greatly from the historical norm. In the later episode [housing-bubble boom-bust], the Federal Reserve moved to reinforce the downward pressure on interest rates, causing the actual interest rates to be exceedingly low relative to the historical norm. Although the judgment, made retrospectively by economists of virtually all stripes, was that the Fed funds target rate was “too low for too long” between mid-2003 and mid-2004, it was almost surely too low for too long relative to the natural rate in both episodes.

Given this and other strong evidence of the Fed’s role in creating the credit driven boom, the Journal faults “Mr. Bernanke’s refusal to acknowledge that the Fed made any mistake in the mania years.”

On the response to the crisis, the Journal refrains from the accolades of the many who credit the Fed, led by the leading scholar of the Great Depression, with acting strongly to prevent another such calamity. According to the Fed worshipers, things might not be good, but without the unprecedented actions and bailouts things would have been catastrophic. The Journal’s assessment is more measured:

Once the crisis hit, Mr. Bernanke and the Fed deserve the benefit of the doubt. From the safe distance of hindsight, it’s easy to forget how rapid and widespread the financial panic was. The Fed had to offset the collapse in the velocity of money with an increase in its supply, and it did so with force and dispatch. One can disagree with the Fed’s special guarantee programs, but we weren’t sitting in the financial polar vortex at the time. It’s hard to see how others would have done much better.

But discerning readers of Vern McKinley’s Financing Failure: A Century of Bailouts might disagree. Fed actions, even when not verging on the illegal, were counterproductive, unnecessary, and added to the action-freezing policy uncertainty that contributed to the collapse of the velocity of money. McKinley describes much of what was done as “seat-of-the-pants decision-making”:

“Seat of the pants” is not a flattering description of the methods of the regulators, but its use is justified to describe the panic-driven actions during the 2000s crisis. It is only natural that under the deadline of time pressure judgment will be flawed, mistakes will be made and taxpayer exposure will be magnified, and that has clearly been the case. With the possible exception of the Lehman Brothers decision … all of the major bailout decisions during the 2000s crisis were made under duress of panic over a very short period of time with very limited information at hand and with input of a limited number of objective parties involved in the decision making. Not surprisingly, these seat-of-the-pants responses did not instill confidence, and there was no clear evidence collected that the expected negative fallout would truly have occurred.

While a defense of some Fed action can be found in Hayek’s 1970s discussion of the “best” policy under bad institutions (a central bank), where he argued that during a crisis a central bank should act to prevent a secondary deflation, the Fed’s actions went clearly beyond such a recommendation. Better would have been an immediate policy to stop the credit expansion in its tracks. The Fed’s special guarantee programs and movement toward a mondustrial policy should be a great worry to anyone concerned about long-term prosperity and liberty. Whether any human running a central bank could have done better is an open question, but other monetary arrangements could clearly have led to better outcomes.

The Journal’s analysis of post-crisis policy, while not as harsh as it should be, is critical. Despite an unprecedented expansion of the Fed’s balance sheet, the “recovery is historically weak.” At some point “a Fed chairman has to take some responsibility for the mediocre growth — and lack of real income growth — on his watch.” Bernanke’s policy is also rightly criticized because “The other great cost of these post-crisis policies is the intrusion of the Fed into politics and fiscal policy.”

Because the ultimate outcome of this monetary cycle hinges on how, when, or if the Fed can unwind its unwieldy balance sheet without further damage to the economy (the most likely outcomes being continuing stagnation or a return to stagflation; less likely, but possible, hyperinflation or even a deflationary depression), the Bernanke legacy will ultimately be a Bernanke-Yellen legacy. Given, as the Journal points out, that “Politicians — and even some conservative pundits — have adopted the Bernanke standard that the Fed’s duty is to reduce unemployment and manage the business cycle,” the prospect that this legacy will be viewed favorably is less and less likely. Perhaps if the editors joined Paul Krugman in reading and fully digesting Joe Salerno’s “A Reformulation of Austrian Business Cycle Theory in Light of the Financial Crisis,” they would correctly fail Bernanke and Fed policy before, during, and after the crisis.

But what should be the main lesson of the Greenspan-Bernanke legacy? Clearly, had there been no pre-crisis credit boom, there would have been no large financial crisis, and thus no need for Bernanke or any other human to have done better during and after. While Austrian analysis has often been criticized, incorrectly, for not having policy recommendations on what to do during the crisis and recovery, it should be noted that if Austrian recommendations for eliminating central banks and allowing banking freedom had been followed, no such devastating crisis would have occurred and no heroic policy response would have been necessary in the resulting free and prosperous commonwealth.

John P. Cochran is emeritus dean of the Business School and emeritus professor of economics at Metropolitan State University of Denver and coauthor with Fred R. Glahe of The Hayek-Keynes Debate: Lessons for Current Business Cycle Research. He is also a senior scholar for the Mises Institute and serves on the editorial board of the Quarterly Journal of Austrian Economics.


U.S. Embraces Serfdom, Market Is Taking Over Sweden’s Government Healthcare

This essay, written by Per L. Bylund, was originally published on Mises Daily on Jan. 29.

While contemporary mythology has it otherwise, the market is not a distinct phenomenon: it is what exists when people interact and otherwise voluntarily transact with each other. The broad definition of the market is simply what people (choose to) do when they are not forced to do otherwise. So it is not surprising that even the Soviet Union, “despite” its anti-market rhetoric, fundamentally relied on markets: foreign markets for prices to guide planners’ economic calculation, and domestic black markets for resource allocation and goods distribution according to people’s real needs and preferences. The black market, indeed, was “a major structural feature” of the Soviet economy.

In other words, we should expect to see markets wherever governments fail. Or, to put it more accurately, markets exist where government cannot sufficiently repress or otherwise crowd out voluntary exchange.

So it should be no surprise that, as The Local reports, Swedes are en masse buying private health care insurance alongside the failing welfare systems. This is indirectly a result of the relatively vast liberalization of the Swedish economy over the course of the past 20 years (as I have noted here and here), which has resulted in the “experimental” privatization of several hospitals (even one emergency hospital is privately owned). While previously only the political elite (primarily members of the Riksdag, the Swedish parliament) had access to private health care through insurance, the country now sees a blossoming and healthy insurance market.

Private health care insurance was initially offered to employees as part of employers’ benefits packages, since it ensured direct access to care when needed, and a faster return to work. This trend was easily recognizable in service sectors heavily dependent on the skill and knowledge of individual employees. Working as a professional consultant in Sweden in the late 1990s and 2000s, I personally experienced and benefited from such private health care insurance through my employer. This type of very affordable insurance provided same-day appointments with GPs and specialists alike, whereas going to the public hospital would have entailed waiting in line during the overcrowded “open access” hours or waiting perhaps a week or more to see a GP.

My experience is first-hand with both alternatives, and at the time they were as different as night and day. While talking heads in the media cried out that private insurance created a “fast track” for “the rich,” the net effect for the already overwhelmed public health care system was relief through decreased demand. As we should expect from any shift toward markets, everybody was ultimately better off thanks to this (limited) marketization of Swedish health care (perhaps excepting bureaucrats who previously enjoyed the power to directly control health care).

Waiting for Care

Swedes maintain that they get good (they mean great) health care, and the statistics partly confirm this. In fact, Sweden’s health care was recently noted as the tenth most efficient in the world (excluding smaller countries). The decentralized regional system of government (regional governments, taxing incomes in the range 10-12 percent, are primarily responsible for health care, public transport, and cultural subsidies) has undoubtedly contributed to this, especially since the national voucher/guarantee system enacted in 1992 has increased competition between regions and thereby placed pressure on politicians and hospital administration.

The fact that one in every ten people voluntarily foregoes care even though they need it, according to the regulating authority Socialstyrelsen’s status report 2011 (3 percent of whom could not afford care, p. 64), should also lessen the pressure on the health care system. It should also be noted that Swedish bureaucracy overall is comparatively effective and efficient (likely a result of the country being very small and having a long tradition of both governmental transparency and a hardworking population), so why would this not also be the case in health care?

The main problem is naturally the central planning of health care, whether or not the planning is done by regional “competing” governments. While access and quality are guaranteed by national law, Swedes usually have to line up for care. As noted above, wait times may be days or weeks for appointments with GPs and several (or many, and increasing) hours for ER care, but the real problem appears in specialist care such as surgery, where wait times of several months, or even years, are not uncommon.

Swedish media frequently report on cases of mistreatment, extreme wait times, and deaths due to care not being offered in time. An increasingly common phenomenon is denying the severely ill an ambulance, for all sorts of symptoms: severe burns, blood poisoning, myocardial infarction, or stroke.

Even an otherwise laudatory article in The New York Times notes how wait times are the problem in Swedish health care. This remains a major shortcoming despite the national “health care guarantee” (guaranteed care within 90 days). As in any market where consumption is subsidized through artificially low (or no) fees, demand skyrockets and there is simply no way for suppliers of the service to keep up with it.

Private insurance and (semi-)private hospitals in this sense offer relief for an otherwise unsustainable system; their net effect is lower demand on public hospitals, which should make life easier for many in Sweden. Access used to be more difficult, except for those who could skip the regular system by taking advantage of personal relationships or family bonds with physicians, nurses, and other hospital personnel. My personal experience speaks to this latter fact, though it is generally dismissed by Swedes wanting to believe in the system. The fact that “knowing the right people” can open doors is irrefutable, however. And it is important in socialized systems.

A Constant Lack of Funds

As in the NYT article, all problems, including the wait times, are generally blamed on a “lack” of funds. As Jonsson and Banta note, “limited resources do result in waiting lists and other restrictions.” In the media and political discourse this is discussed in terms of “cutbacks,” yet the funds never seem to be enough.

This is symptomatic of any public system: the allocated funds are never (and can never be) sufficient. There is simply too much waste due to the lack of incentives and market prices. In order to deal with health care’s runaway costs (or the pressure to cut costs, depending on one’s view), health care providers tend to employ the same techniques as others subject to a primarily single-payer public system. These techniques may vary over time and from place to place, but they all amount to exploiting loopholes or otherwise circumventing the system’s limitations. One such technique is a type of “creative” accounting that pads the hospital’s cash inflow by recording in the patient’s medical records a more expensive treatment than the one actually given. One treatment on the books, another off the books.

This is of course an expected outcome of a centrally planned system with relatively limited health care user fees (contrary to popular myth, Sweden’s health care is not “free”). When Swedes get health care, it is generally of quite good quality. But to get it, they need the right connections, or insurance. The former offers no guarantee but only a relative improvement, while the latter is a proper market contract. No wonder Swedes take advantage of their newfound opportunity to have health care insurance.

The Future: Sweden or the United States?

Liberals tend to point to Sweden as a good example of how well an extensive welfare state functions. They are not completely wrong, since Sweden is a rather well-functioning country. But this is despite the welfare state, not because of it; such admirers live in the past, and the Sweden they describe is one part 1970s reality and two parts their own imagination. The fact is that the Swedish welfare state imploded in the early 1990s; it was crushed under its own weight after more than two decades of rapid decline.

The reason Sweden is doing so well at present is partly an illusion and partly a market story. It is an illusion because the other countries available for comparison are also welfare states (or, as in the case of the United States, a warfare-welfare state); being the best of the worst does not mean one is actually good. It is a market story because Sweden has for more than two decades consistently rolled back the welfare state, introduced market prices and private ownership, “experimented” with market-like incentives for public providers, and cut taxes.

What Sweden has done is hardly sufficient, but it appears to be in the right direction. More importantly, it is in a direction not taken by many other countries — and this explains the country’s relatively strong financial condition.

In contrast, the United States is moving toward the liberal distorted image of what Sweden is supposedly like. While Sweden is embracing a system including what appears to be real health care insurance, the U.S. is moving from a hybrid third-party payer system (inaccurately described as private health care insurance) to an all-out public health care system following ObamaCare.

While the United States is firmly going down the road to serfdom, the market appears to be taking over Sweden’s health care.

Per L. Bylund, PhD is a research professor in the Hankamer School of Business, Baylor University.

Mises President: Whatever Happened To Peace Officers?

The following is a selection from a speech by Mises Institute President Jeff Deist at the Southwest Regional Mises Circle in Houston, “The Police State: Know It When You See It,” on January 18, 2014.

Today when we use the term peace officer, it sounds antiquated and outdated. I’m sure most people in the room under 40 have never heard the term actually used by anyone; we might as well be talking about buggy whips or floppy disks. But in the 1800s and really through the 1960s, the term was used widely in America to refer generally to lawmen, whether sheriffs, constables, troopers, or marshals. Today the old moniker of peace officer has been almost eliminated in popular usage, replaced by “police officer” or the more in vogue “law enforcement officer.”

The terminology has certain legal differences in different settings; in some places peace officers and police officers are indeed different individuals with different functions, jurisdictions, or powers to execute warrants. But nobody says peace officer anymore, and it’s not just a coincidence.

The archetype of a peace officer is mostly fictitious — sheriffs in westerns often come to mind, stern lawmen carrying Colt revolvers called “Peacemakers.” But the Wyatt Earps of western myth weren’t always so peaceful, and often, at least in movies, used their Peacemakers to shoot up the place.

Outside the Old West archetype, Sheriff Andy Taylor of The Andy Griffith Show is perhaps the best and most accessible example of what it once meant, at least in the American psyche, to be a peace officer. Now of course The Andy Griffith Show was fictional. And there’s no doubt that many, many small-town sheriffs in America over the decades have been anything but peace officers. Yet it’s fascinating that just a few decades ago Americans could identify with the character of Sheriff Taylor as a recognizable ideal.

Obviously the situation today is very different, and we all know how far things have fallen. Police have suffered a very serious decline over the last several decades, both in terms of their public image and the degree to which average citizens now often fear police officers rather than trust them. We can note also that poor and minority communities have long been less trusting, or perhaps less naïve, about the real nature of police. But today that jaundiced view has found its way into middle-class consciousness.

Now the subject of police misconduct and the growing militarization and lawlessness of police departments could fill many hours, and several libertarian writers are doing a great job of documenting police malfeasance, as in the excellent work of investigative journalist William Norman Grigg.

But allow me to mention some particularly egregious recent examples of police action escalating and harming, rather than protecting and serving.

As just one example, we can point to the case in which a 90-pound, mentally ill young man was recently killed by three so-called law enforcement officers from three different agencies in Southport, North Carolina. He was apparently having a schizophrenic episode and brandishing a screwdriver when police arrived in answer to his family’s 911 call asking for “help.” The first two officers managed to calm the young man down, but the third escalated the situation, demanding that the other officers use a taser to subdue him. Once the young man hit the ground, he was brutally shot at close range by the third officer, for reasons that remain unclear.

As another example, we could note the beating death of Kelly Thomas by police in Fullerton, California. The beating was seen as so brutal and unjustified by many members of the community that it led to the recall of three members of the Fullerton City Council who defended the police department in the wake of the beating.

So here we see modern police at work. Escalation. Aggression. A lack of common sense, making a bad situation worse. Overriding concern for the safety of police officers, regardless of the consequences for those being “protected.” These are not the hallmarks of peace officers, to put it mildly.

Another troubling development that demonstrates how far we’ve strayed from the peace officer ideal can be seen in the increasing militarization of local police departments. The Florida city of Ft. Pierce (population 42,000) recently acquired an MRAP (Mine Resistant Ambush Protected) vehicle for the bargain price of $2,000. The U.S. military is unloading hundreds of these armored, tank-like vehicles as Operation Enduring Freedom winds down, and it is also unloading thousands of Afghanistan and Iraq combat vets into the ranks of local police and sheriffs’ departments. The Ft. Pierce police chief states, “The military was pretty much handing them out. … You know, it is overkill, until we need it.”

So how did we go from “peace” officers to “police” officers to “law enforcement” officers anyway? How did we go from “protect and serve” to “escalate and harm”? And what is behind the militarization of police departments and the rise of the warrior cop, as one writer terms it?

Well, as Austrians and libertarians we should hardly be surprised, and we certainly don’t need a sociological study to understand what’s happening. The deterioration in police conduct, and the militarization of local police forces, quite simply and quite predictably mirrors the rise of the total state itself.

We know that state monopolies invariably provide worse and worse services for more and more money. Police services are no exception. When it comes to your local police, there is no shopping around, there is no customer service, and there is no choice. Without market competition, market price signals, and market discipline, government has no ability or incentive to provide what people really want, which is peaceful and effective security for themselves, their families, their homes, and their property. As with everything government purports to provide, the public wants Andy Griffith but ends up with the Terminator.

There is no lack of Austrian scholarship in this area: the intersection of security services, state monopolies, public goods, and private alternatives. I would initially direct you toward two excellent primary sources to learn more about how markets could provide security services that not only produce less crime at a lower cost, but also provide those services in a peaceful manner.

My first recommendation is Murray Rothbard’s Power and Market, which opens with a chapter entitled “Defense Services on the Free Market.” Right off the bat Rothbard points out the inherent contradiction between property rights and the argument that state-provided police services are a necessary precondition to securing such property rights:

Economists have almost invariably and paradoxically assumed that the market must be kept free by the use of invasive and unfree actions — in short, by governmental institutions outside the market nexus.

In other words, we’re told that state-provided police are a necessary precondition to market activity. But Rothbard points out that many goods and services are indispensable to functioning markets, such as land, food, clothing, and shelter for market participants. Rothbard asks, “… must all these goods and services therefore be supplied by the State and the State only?”

No, he answers:

Defense in the free society (including police protection) would therefore have to be supplied by people or firms who (a) gained their revenue voluntarily rather than by coercion and (b) did not — as the State does — arrogate to themselves a compulsory monopoly of police or judicial protection.

Another excellent starting point is Hans Hoppe’s The Private Production of Defense. Hoppe makes the case that our long-held belief in collective security is nothing more than a myth, and that in fact state protection of private property — our system of police, courts, and jails — is incompatible with property rights and economic reality.

Motivated, as everyone is, by self-interest and the disutility of labor, but equipped with the unique power to tax, state agents will invariably strive to maximize expenditures on protection — and almost all of a nation’s wealth can conceivably be consumed by the cost of protection — and at the same time to minimize the actual production of protection. The more money one can spend and the less one must work for it, the better off one will be.

Both Rothbard and Hoppe discuss an “insurance” model for preventing crime and aggression, which makes sense from a market perspective. Rothbard posits that private police services likely would be provided by insurance companies which already insure lives and property, for the commonsense reason that “… it would be to their direct advantage to reduce the amount of crime as much as possible.”

Hoppe takes the insurance concept further, arguing that:

The better the protection of insured property, the lower are the damage claims and hence an insurer’s loss. Thus, to provide efficient protection appears to be in every insurer’s own financial interest. … Obviously, anyone offering protection services must appear able to deliver on his promises in order to find clients.
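Hoppe’s incentive argument can be sketched as a toy cost-minimization problem. All figures below are invented for illustration; the point is only that an insurer liable for claims has a built-in reason to buy protection up to, and not beyond, the level that pays for itself.

```python
# Toy model: an insurer chooses protection spending s to minimize
# total cost = s + expected_claims(s), where claim exposure falls
# as protection rises. All numbers are invented for illustration.

EXPOSURE = 1_000_000.0  # expected claims with zero protection spending

def total_cost(spend):
    expected_claims = EXPOSURE / (1.0 + spend / 50_000.0)
    return spend + expected_claims

# Crude search over candidate protection budgets.
best = min(range(0, 1_000_001, 10_000), key=total_cost)
print(f"cost-minimizing protection budget: ${best:,}")
print(f"total cost at that budget:         ${total_cost(best):,.0f}")
```

A tax-funded monopoly provider faces no such calculation, since its revenue does not depend on claims avoided; that is precisely the contrast Hoppe draws.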

Compare this to the “growth” model of most local police departments, which continuously lobby their city councils for more money and more officers!

Now admittedly the private provision of police and security services is a complex and controversial subject, and we’re only touching on it today. But rest assured that if you read further, both Rothbard and Hoppe address many common objections raised when discussing private police: attendant issues like political borders; differing legal systems; physical jurisdiction and violence among competing firms; the actuarial problems behind insuring against physical aggression; free riders; and so forth.

But increasingly society is moving in the direction of private security regardless: consider for example, complex insurance networks and indemnification arrangements across borders; private arbitration of disputes; the rise of gated communities and neighborhoods utilizing private security agencies; and fraud prevention mechanisms provided by private businesses like eBay and Paypal.

These trends can only intensify as governments, whether Federal, State, or local, increasingly must spend more and more of their budgets to service entitlement, pension, and debt promises.

If we want our police to act more like Sheriff Andy Taylor and less like militarized aggressors, we must look to private models — models where our interests are aligned with security providers. Only then can we bring back true “peace” officers, private security providers focused on preventing crime and defusing conflicts in cost effective and peaceful ways.


Economics Professor: We Will Be Told Hyperinflation is Necessary, Proper, Patriotic and Ethical

This essay, written by Patrick Barron, was originally published by the Ludwig von Mises Institute on Jan. 13.

Hyperinflation leads to the complete breakdown in the demand for a currency, which means simply that no one wishes to hold it. Everyone wants to get rid of that kind of money as fast as possible. Prices, denominated in the hyper-inflated currency, suddenly and dramatically go through the roof. The most famous examples, although there are many others, are Germany in the early 1920s and Zimbabwe just a few years ago. German Reichsmarks and Zim dollars were printed in million and even trillion unit denominations.

We may scoff at such insanity and assume that America could never suffer from such an event. We are modern. We know too much. Our monetary leaders are wise and have unprecedented power to prevent such an awful outcome.

Think again.

Our monetary leaders do not understand the true nature of money and banking; thus, they advocate monetary expansion as the cure for every economic ill. The multiple quantitative easing programs perfectly illustrate this mindset. Furthermore, our monetary leaders actually advocate a steady increase in the price level, what is popularly known as inflation. Any perceived reduction in the inflation rate is seen as a potentially dangerous deflationary trend, which must be countered by an increase in the money supply, a reduction in interest rates, and/or quantitative easing. So an increase in inflation will be viewed as success, which must be built upon to ensure that it continues. This mindset will prevail even when inflation runs at extremely high rates.

Like previous hyperinflations throughout time, the actions that produce an American hyperinflation will be seen as necessary, proper, patriotic, and ethical; just as they were seen by the monetary authorities in Weimar Germany and modern Zimbabwe. Neither the German nor the Zimbabwean monetary authorities were willing to admit that there was any alternative to their inflationist policies. The same will happen in America.

The most likely trigger of hyperinflation is an increase in prices following a loss of confidence in the dollar overseas and its repatriation to our shores. Committed to a low interest rate policy, our monetary authorities will dismiss the only legitimate alternative to printing more money: allowing interest rates to rise. Only noninflationary investment by the public in government bonds would prevent a rise in the price level, but such an action would trigger a recession. This necessary and inevitable event will be vehemently opposed by our government, just as it has been for several years to this date.

Instead, the government will demand and the Fed will acquiesce in even further expansions to the money supply via direct purchases of these government bonds, formerly held by our overseas trading partners. This will produce even higher levels of inflation, of course. Then, in order to prevent the loss of purchasing power by politically connected groups, the government will print even more money to fund special payouts to these groups. For example, government will demand that Social Security beneficiaries get their automatic increases; likewise for the quarter of the population getting disability benefits. Military and government employee pay will be increased. Funding for government cost-plus contracts will ratchet up. As the dollar drops in value overseas, local purchases by our overextended military will cost more in dollar terms (as the dollar buys fewer units of the local currencies), necessitating an emergency increase in funding. Of course, such action is necessary, proper, patriotic, and ethical.

Other Federal employee sectors like air traffic controllers and the TSA workers will likely threaten to go on strike and block access to air terminal gates unless they get a pay increase to restore the purchasing power of their now meager salaries.

State and local governments will also be under stress to increase the pay of their public safety workers or suffer strikes that would threaten social chaos. Lacking the ability to increase taxes or print their own money, these governments will ask the Federal government to step in and print more money to placate the police and firemen. Doing so will be seen as necessary, proper, patriotic, and ethical.

Each round of money printing eventually feeds back into the price system, creating demand for another round of money printing … and another … and another, with each successive increase larger than the previous one, as is the nature of foolishly trying to restore money’s purchasing power with even more money. The law of diminishing marginal utility applies to money as it does to all goods and services. The political and social pressure to print more money to prevent a loss of purchasing power by the politically connected and government workers will be seen as absolutely necessary, proper, patriotic, and ethical.
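The feedback loop can be caricatured numerically. The following sketch is a toy model, not a forecast: prices are assumed to rise in proportion to the money stock, and each round the government prints enough to offset the purchasing power lost in the previous round, plus a base rate of expansion.

```python
# Toy model of the print-to-compensate spiral. Prices are assumed to
# track the money stock (a crude quantity-theory shortcut), and each
# round's printing tries to offset the previous round's price rise.
# All parameters are invented; this is illustration, not forecast.

money = 100.0
price_level = 1.0

for round_no in range(1, 8):
    lost_purchasing_power = price_level - 1.0   # cumulative price rise
    new_money = money * (0.10 + lost_purchasing_power)
    money += new_money
    price_level = money / 100.0
    print(f"round {round_no}: printed {new_money:12.1f}, price level {price_level:10.2f}")
```

Each injection is larger than the last, which is the point above: chasing lost purchasing power with new money is a race the printer must lose.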

Many will not survive. Just as in Weimar Germany, the elderly who are retired on the fruits of a lifetime of savings will find themselves impoverished to the point of despair. Suicides among the elderly will be common. Prostitution will increase, as one’s body becomes the only saleable resource for many. Guns will disappear from gun shops, if not through panic buying then by outright theft by armed gangs, many of whom may be your previously law-abiding neighbors.

Businesses will be vilified for raising prices. Goods will disappear from the market as producer revenue lags behind the increase in the cost of replacement resources. Government’s knee-jerk solution is to impose wage and price controls, which simply drive the remaining goods and services from the white market to the gangster-controlled black market. Some producers will simply stop selling: better to build inventory than sell it at a loss. Better still to close up shop and wait out the insanity. So government does the necessary, proper, patriotic, and ethical thing: it prints even more money, and prices increase still more.

The money you have become accustomed to using and saving eventually becomes worthless; it no longer serves as a medium of exchange. No one will accept it. Yet the government continues to print it in ever greater quantities and attempts to force the citizens to accept it. Our military forces overseas cannot purchase food or electrical power with their now worthless dollars. They become a real danger to the local inhabitants, most of whom are unarmed. The US takes emergency steps to evacuate dependents back to the States. It even considers abandoning our bases and equipment and evacuating our uniformed troops when previously friendly allies turn hostile.

And yet the central bank continues to print money. Politically-connected constituents demand that it do so, and it is seen as the absolutely necessary, proper, patriotic, and ethical thing to do.

Patrick Barron is a private consultant in the banking industry. He teaches in the Graduate School of Banking at the University of Wisconsin, Madison, and teaches Austrian economics at the University of Iowa, in Iowa City, where he lives with his wife of 40 years.


How The Drug War Makes Drugs Less Safe

This article, written by research analyst Benjamin M. Wiegold, was originally published by the Ludwig von Mises Institute on Dec. 26.

Desomorphine, a grotesque new drug known on the street as krokodil, has been making news for its increasing popularity as a cheap substitute for heroin, albeit with a devastating range of ill effects of its own. Reports allege that it originated in Siberia in 2002 and has become common in Russia and other Eastern European countries.

The name itself seems to corroborate this: krokodil is Russian for crocodile, a reference to the reptilian-like scales caused by the severe tissue damage that comes with repeated use of this killer drug. For an addict, the life expectancy is less than two years.

Some sources dispute the prevalence of its use, arguing that the drug is not nearly as common as suggested. Special agent Jack Riley, the man in charge of the Drug Enforcement Administration’s (DEA) Chicago office, said that “200 DEA agents in five states have made finding krokodil a top priority,” but that for everything discovered so far, “the lab tells us it’s just heroin.”

The argument over whether people are dying from, and addicted to, heroin or krokodil is irrelevant to us here. What matters is that unsafe products and innovations in the recreational drug industry are a result of drug prohibition.

In both Russia and the United States, heroin is among a host of drugs that are currently illegal. Vast sums of money, amounting to $41 billion annually in the U.S., are being doled out in a protracted effort to enforce this drug prohibition. We may ask: Is it working? When we consider not only the appearance of new drugs like krokodil but also the other uses to which $41 billion could be put (such as returning the money to its owners), the answer is an indignant no.

A substance as devastating as krokodil simply wouldn’t sell in a free market, and for this reason would no longer be produced, let alone concocted in the first place. For all producers and distributors of drugs — and of all goods, for that matter — a profit opportunity exists in not killing your customers: if they live long, productive lives as patrons of your trade, they keep providing you income. Furthermore, to the extent that customers accurately perceive the benefits and safety risks associated with your products, they will tend to choose those with the least detrimental side effects. Likewise, potency will become more consistent, the content of the drug more pure, and dosage levels will be discussed with doctors and trained professionals to ensure safety for the user, just as with prescription and over-the-counter drugs today (although, ironically, legal drugs kill more people than their illegal counterparts).

In a free market, if a bad batch were to be sold and consumed, causing injury, the victim, or perhaps an acquaintance of the victim, would sue, seeking compensation and restitution in a court of law, because at this point the drug user is in fact the victim of an act of fraud.

The common arguments for heroin prohibition, on the other hand, involve little more than reference to a list of symptoms which occur with reckless or prolonged use: It kills people, it destroys families, it squanders money, it makes addicts act, often violently, with desperation, etc. But it is precisely for these very reasons that it ought to be completely legalized.

Unlike users of prescription drugs, users of illegal drugs such as heroin are necessarily ignorant of where the drug was produced, the extent of its purity — such as whether it is heroin, krokodil, or a combination — and its potency and health risks. On the other hand, the consumer benefits of drug legitimacy are enormous. When drugs are legal on the open market, there is recourse to a court of law should the drugs prove to be unsafe. In a legal drug environment, there is no fear of the law turning against anyone for admitting prior use of drugs. Such legitimacy is lacking when purchasing illegal drugs on the black market, where there are no paper trails, no guarantees, no refunds, and no manuals, not to mention no doctors involved.

The legal incentives created by prohibition lead to artificially restricted production, decreasing the supply for sale, and causing the price to skyrocket. As a result, prohibition transforms users into criminals, not only by definition, but by making their drug use a larger financial burden, enticing many to commit real crimes. All of this is a disaster for the addicts.

By utilizing the communication opportunities provided by the internet, drug outreach programs and education groups reach more people in need, and do more for addicts, recovering addicts, and their families and friends than any police action ever has.

The real solutions are to be found in families, friends, and communities — not the state’s interference and manipulation of actions. Liberty is when individuals are free — rather than prohibited — to engage with one another on their own terms and to control their own lives. Repealing prohibition and making drugs a market commodity is the only way to clean up and truly help the drug communities.

Ron Paul famously asked in his farewell address: “Has anybody noticed that authorities can’t even keep drugs out of the prisons? How can making our entire society a prison solve the problem?” Of course, the “prison approach” couldn’t ever solve the problem; it actually hasn’t insofar as it has been tried ($41 billion annually!); and it has brought with it a multitude of additional problems regarding civil liberties, drug safety, pricing, overdose and addiction rates. Perhaps an appropriate follow-up to the question posed by the former Congressman would be: How long will it take for society to realize the promise of the other option?

–Benjamin M. Wiegold


Ben Wiegold is a staunch anarcho-capitalist and has been educating himself through the Mises Institute since 2011. He is also a self-taught musician who has played with a number of small bands in the Chicagoland area. See Benjamin M. Wiegold’s article archives.



Research Analyst: Obamacare’s Many Negative Side-Effects Should Surprise No One

This article, written by research analyst Jordan Bruneau, was originally published by the Ludwig von Mises Institute on Dec. 26.

Even left liberals are coming to realize that Obamacare is fatally flawed. Perhaps this is because fewer people will be insured at the end of the year under Obamacare than at the beginning, as insurers are forced to drop coverage. Stories of such cancellations hitting cancer-stricken children certainly don’t help matters. For a program whose express purpose is to bring insurance to more people, this irony seems too much even for the interventionists to stomach.

Obamacare’s negative effects, however, are simply a microcosm of government policy in general. Virtually all well-intended (assuming they are in fact well-intended) government policies bring negative unintended consequences that hurt the very people they intend to serve. The prevalence of this paradox, called iatrogenics (originally used in the medical context to refer to doctors’ actions that hurt patients), should give pause to those who favor government intervention to solve societal problems.

Take rent control policies, for example, intended to make housing more accessible to those with lower incomes. In reality these policies shrink the amount of available housing, because potential landlords have less incentive to rent out existing units and developers have less incentive to build new ones. As a result, less housing is available for those with lower incomes. Just look at the apartment shortage in New York or San Francisco, the two cities with the most stringent rent-control policies, for proof.
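The mechanics are the standard price-ceiling result, which a toy linear market makes concrete. All numbers below are invented; nothing here is actual New York or San Francisco housing data.

```python
# Toy linear housing market: quantities in thousands of units.
# All numbers are invented for illustration.

def demanded(rent):
    return 200.0 - 0.05 * rent  # renters want more units at lower rents

def supplied(rent):
    return 0.05 * rent - 20.0   # landlords offer more units at higher rents

MARKET_RENT = 2200.0  # rent at which demanded == supplied (90k units)
CEILING = 1500.0      # legal maximum rent, set below the market rent

shortage = demanded(CEILING) - supplied(CEILING)
print(f"at the ceiling: demanded {demanded(CEILING):.0f}k units, "
      f"supplied {supplied(CEILING):.0f}k units, shortage {shortage:.0f}k units")
```

Flip the intervention around, a floor set above the market-clearing wage rather than a ceiling below the market-clearing rent, and the same model yields the surplus of unemployed labor discussed in the minimum-wage paragraphs below.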

This process of iatrogenics also exists in financial regulation. Polemicist Nassim Taleb has illustrated how increased financial regulation intended to prevent another financial crisis has actually made one more likely. Regulations entrust the fate of the financial system to a handful of big banks because they are the only ones who can afford to comply with them. This consolidation of power among the big banks makes the financial system riskier because if one of these few banks fails the damage will be much greater to the economy than from the failure of one small bank among many. “These attempts to eliminate the business cycle,” says Taleb, “lead to the mother of all fragilities.”

In terms of protecting society’s most economically disadvantaged, sociologist Charles Murray chronicles, most recently in his bestseller Coming Apart, how the federal government’s war on poverty paradoxically hurts the poor. He explains that though welfare benefits are well intentioned, what they in effect do is pay people to stay poor, hurting the very people they intend to help. These misaligned incentives are a leading reason why $15 trillion in welfare spending over the past 50 years has perversely resulted in a 50-year-high poverty rate of 15.1 percent.

Those currently advocating a higher minimum wage should first examine its iatrogenic history of bringing negative unintended consequences to the very low-wage earners it intends to help. Minimum wage increases actually hurt low-wage earners because business owners lay off staff and cut back on hours to recoup their losses from the mandated wage increases. This leaves those with a tenuous grasp on the labor market in an even more precarious position. “Unfortunately, the real minimum wage is always zero, regardless of the laws,” says economist Thomas Sowell, “and that is the wage that many workers receive in the wake of the creation or escalation of a government-mandated minimum wage, because they either lose their jobs or fail to find jobs.”

Of course it’s not only left liberal policies that generate negative unintended consequences that hurt the very people they’re intended to help, but also conservative ones like the war on drugs and the war on terror.

The war on drugs intends to help drug-blighted communities by enacting and enforcing strict penalties on drug use. What it in effect does is hurt these communities by making criminals out of a significant portion of their inhabitants. Drug users now make up nearly 25 percent of federal and state prison inmates, many of whom go in for simple possession and come out hardened criminals wreaking untold damage on their communities. Even those who do not run afoul of the law again face a lifetime of job and social struggles with a criminal record attached to their names.

The same iatrogenic story exists in the war on terror, which intends to keep us safe by waging a multipronged offensive against potential terrorists and the geographies they may inhabit. Unfortunately, as former CIA intelligence officer Michael Scheuer has illustrated, some of these prongs, such as aggressive drone warfare and support for apostate regimes, actually fan the flames of hatred of the U.S., making us less safe. “It’s American policy that enrages al-Qaeda,” says Scheuer, “not American culture and society.”

Government intervention, no matter what its form or intention, causes iatrogenics: unintended negative consequences that hurt the very people the intervention was meant to help. Nowhere is this better exemplified than with Obamacare, a policy intended to bring insurance to all that has in effect taken it away from many. Perhaps the growing coalition of people recognizing this paradox will take the revelation and apply it to other policy arenas as well. For the affected classes, we can only hope.

Mises Scholar Explains Why Bankers Created The Fed

This article, written by Ludwig von Mises Institute associate scholar Christopher Westley, was originally published by the Institute on Dec. 23.

The Democratic Party gained prominence in the first half of the 19th century as the party that opposed the Second Bank of the United States. In the process, it tapped into an anti-state sentiment so strong that we wouldn’t see its like again until the next century.

Its adversaries were Whig politicians who defended the bank and its ability to grow the government and their own personal fortunes at the same time. They were, in fact, quite open about these arrangements. It was considered standard operating procedure for Whig representatives to receive monetary compensation for their support of the Bank when leaving Congress. The Whig Daniel Webster even expected annual payments while in Congress. Once he complained to Bank of the United States president Nicholas Biddle, “I believe my retainer has not been renewed or refreshed as usual. If it be wished that my relation to the Bank should be continued, it may be well to send me my usual retainer.”

No wonder these people were often pummeled with canes on the House floor.

It is little wonder that early Democrats garnered such popular support and would demand Andrew Jackson end America’s experiment with central banking. Jackson called it “dangerous to the liberty of the American people because it represented a fantastic centralization of economic and political power under private control.”

It’s hard to believe the guy who said that is now on the $20 bill.

Jackson also warned that the Bank of the United States was “a vast electioneering engine” that could “control the Government and change its character.” These sentiments were echoed by Roger Taney, Jackson’s Treasury Secretary, who talked of the Bank’s “corrupting influence” and ability to “influence elections.” (The Whigs would later get revenge on this future chief justice when Abraham Lincoln, in response to a written opinion with which he disagreed, issued his arrest warrant.)

But the courtship between the political classes and their cronies would continue in the decades following Lincoln’s assassination. Those politically well-connected groups that benefited from early central banking continued to benefit from government finance, especially from “internal improvements,” the 19th-century term for pork. National banking would appear during the War Between the States, setting in place a banking system in which individual banks would be chartered by the Federal government. The government itself would use regulations backed by a new armed U.S. Treasury police force to encourage the banks’ inflation and protect them from the market penalties that inflation would otherwise bring them, such as the loss of specie and the occurrence of bank runs.

The boom and bust cycle, explained by the Austrian School in such detail, became worse and worse in the period leading up to 1913. And with the rise of Progressive Era spending on war and welfare, and with the pressure on banks to inflate to finance this activity, the boom and bust cycles worsened even more. If there was one saving grace about this period, it would be that banks were forced to internalize their losses. When banks faced runs on their currencies, private financiers would bail them out. But this arrangement could not last: as the losses grew, those financiers secretly organized to reintroduce central banking to America, thus engineering an urgent need for a new “lender of last resort.” The result was the Federal Reserve.

This was the implicit socialization of the banking industry in the United States. People called the Federal Reserve Act the Currency Bill, because it was to create a bureaucracy that would assume the currency-creating duties of member banks.

It was like the Patriot Act, in that both were centralizing bills that were written years in advance by people who were waiting for the appropriate political environment in which to introduce them. It was like our current healthcare bills, in which cartelized firms in private industry wrote chunks of the legislation behind closed doors long before they were introduced in Congress.

It was unnecessary. If banks were simply held to the same standards as other, more efficient industries (the rule of law at the very least), far fewer fraudulent banks would ever have come about. There were market institutions that would penalize those banks that over-issued currencies and brought about bank runs and financial crises. As Ludwig von Mises would later write:

What is needed to prevent further credit expansion is to place the banking business under the general rules of commercial and civil laws compelling each individual and firm to fulfill all obligations in full compliance with the terms of contract.

The bill was passed fairly easily, in part because the Democrats had a larger majority in both houses than they do today. There were significant differences that were resolved in conference, with one compromise resulting in the requirement that the new currency be backed by a gold reserve of only 40 percent. So instead of a 1-to-1 relationship between gold and currency issued (a ratio that had defined sound market banking since the time of Renaissance Italy), the new Federal Reserve notes would be inflated, by law, at a ratio of 1-to-2.5.
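The arithmetic behind that ratio is simple: if only 40 percent of the note issue must be covered by gold, then each dollar of gold in reserve can support

$$\frac{\$1}{0.40} = \$2.50$$

of Federal Reserve notes, which is where the 1-to-2.5 figure comes from.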

The bill that was first drawn up at Jekyll Island, Ga., was signed by Woodrow Wilson in the Oval Office shortly after the Senate approved it. At one point during the signing ceremony, as he reached for a gold pen to finish signing the bill, he jokingly declared: “I’m drawing on the gold reserve.”

Truer words were never spoken.

Central banks always end up feeding the forces that centralize and expand the nation-state. The Fed’s policies in the 1920s, so well documented by Murray N. Rothbard, would provoke the Great Depression, which in the end wrenched political power away from municipal and State governments and toward the swampland in Washington. Today, people take seriously the claim that there can be a viable Federal solution to every problem, thanks to the money printed up by the Fed, while each decade has seen a larger proportion of the population become dependent on its inflation.

Yet Jackson’s beliefs about the perniciousness of the Second Bank of the United States are just as applicable to the Federal Reserve today.

Here’s to hoping we’ll see Jackson’s hawkish nose and unkempt hair on a gold-backed, privately issued currency in the not-too-distant future.

Mises Editor’s Note: This article is based on a speech delivered at the Mises Institute’s Birth and Death of the Fed conference in Jekyll Island, Ga., on Feb. 26, 2010.


Mises Scholar Explains Possible Outcomes Of The Paper Money Experiment

This article, written by Philipp Bagus, Ludwig von Mises Institute associate scholar and associate professor at Universidad Rey Juan Carlos, was originally published by the Institute on Dec. 13.

A paper currency system contains the seeds of its own destruction. The temptation for the monopolist money producer to increase the money supply is almost irresistible. In such a system with a constantly increasing money supply and — as a consequence — constantly increasing prices, it does not make much sense to save in cash to purchase assets later. A better strategy, given this scenario, is to go into debt to purchase assets and pay back the debts later with a devalued currency. Moreover, it makes sense to purchase assets that can later be pledged as collateral to obtain further bank loans. A paper money system leads to excessive debt.

This is especially true of players who can expect that they will be bailed out with newly produced money such as big businesses, banks, and the government.
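A worked example, with illustrative numbers of my own choosing rather than figures from the article, shows why debt beats cash saving when prices constantly rise. The real value of a fixed nominal repayment R after t years of price inflation at rate π is

$$R_{\text{real}} = \frac{R}{(1 + \pi)^{t}}$$

At 5 percent annual inflation, a dollar repaid ten years from now is worth only 1/1.05^10 ≈ 0.61 of today’s dollars, so the debtor hands back roughly three-fifths of what he borrowed in real terms, while a cash saver’s purchasing power shrinks by the same factor.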

We are now in a situation that looks like a dead end for the paper money system. After the last cycle, governments have bailed out malinvestments in the private sector and boosted their public welfare spending. Deficits and debts skyrocketed. Central banks printed money to buy public debts (or accept them as collateral in loans to the banking system) in unprecedented amounts. Interest rates were cut close to zero. Deficits remain large. No substantial real growth is in sight. At the same time, banking systems and other financial players sit on large piles of public debt. A public default would immediately trigger the bankruptcy of the banking sector. Raising interest rates to more realistic levels or selling the assets purchased by the central bank would put into jeopardy the solvency of the banking sector, highly indebted companies and the government. It looks like even the slowing down of money printing (now called “QE tapering”) could trigger a bankruptcy spiral. A drastic reduction of government spending and deficits does not seem very likely either, given the incentives for politicians in democracies.

So will money printing be a constant with interest rates close to zero until people lose their confidence in the paper currencies? Can the paper money system be maintained or will we necessarily get a hyperinflation sooner or later?

There are at least seven possibilities:

1. Inflate: Governments and central banks can simply proceed on the path of inflation and print all the money necessary to bail out the banking system, governments and other overindebted agents. This will further increase moral hazard. This option ultimately leads to hyperinflation, thereby eradicating debts. Debtors profit; savers lose. The paper wealth that people have saved over their lifetimes will no longer assure the standard of living they had envisioned.

2. Default on entitlements: Governments can improve their financial positions by simply not fulfilling their promises. Governments may, for instance, drastically cut public pensions, Social Security and unemployment benefits to eliminate deficits and pay down accumulated debts. Many entitlements that people have counted on will prove to be worthless.

3. Repudiate debt: Governments can also default outright on their debts. This leads to losses for banks and insurance companies that have invested the savings of their clients in government bonds. The people see the value of their mutual funds, investment funds and insurance plummet, revealing losses that have in fact already occurred. The default of the government could lead to the collapse of the banking system. The bankruptcy spiral of overindebted agents would be an economic Armageddon. Therefore, politicians until now have done everything to prevent this option from happening.

4. Financial repression: Another way to get out of the debt trap is financial repression. Financial repression is a way of channeling more funds to the government, thereby facilitating public debt liquidation. It may consist of legislation that makes investment alternatives less attractive, or, more directly, of regulation that induces investors to buy government bonds. Together with real growth and spending cuts, financial repression may work to actually reduce government debt loads.

5. Pay off debt: The problem of overindebtedness can also be solved through fiscal measures. The idea is to eliminate the debts of governments and recapitalize banks through taxation. By reducing overindebtedness, the need for the central bank to keep interest rates low and to continue printing money is alleviated. The currency could be put on a sounder base again. To achieve this purpose, the government expropriates wealth on a massive scale to pay back government debts. The government simply increases existing tax rates or may employ one-time confiscatory expropriations of wealth. It uses these receipts to pay down its debts and recapitalize banks. Indeed, the International Monetary Fund has recently proposed a one-time 10-percent wealth tax in Europe in order to reduce the high levels of public debt. Large-scale cuts in spending could also be employed to pay off debts. After World War II, the United States managed to reduce its debt-to-GDP ratio from 130 percent in 1946 to 80 percent in 1952, with government spending cut in half, from $118 billion in 1945 to $58 billion in 1947, mostly through cuts in military spending. However, it seems unlikely that such a debt reduction through spending cuts could work again: this time, the U.S. does not stand at the end of a successful war, and similar spending cuts today would likely meet massive political resistance and bankrupt overindebted agents that depend on government spending.

6. Currency reform: There is the option of a full-fledged currency reform including a (partial) default on government debt. This option is also very attractive if one wants to eliminate overindebtedness without engaging in strong price inflation. It is like pressing the reset button and continuing with a paper money regime. Such a reform worked in Germany after World War II (after the last war, financial repression was not an option), when the old paper money, the Reichsmark, was substituted by a new paper money, the Deutsche Mark. In such a reform, savers who hold large amounts of the old currency are heavily expropriated, but debt loads for many people decline.

7. Bail-in: There could be a bail-in amounting to a halfway currency reform. In a bail-in, such as occurred in Cyprus, bank creditors (savers) are converted into bank shareholders. Bank debts decrease and equity increases. The money supply is reduced. A bail-in recapitalizes the banking system and eliminates bad debts at the same time. Equity may increase so much that a partial default on government bonds would not threaten the stability of the banking system. Savers suffer losses. For instance, people who invested in life insurance policies whose insurers, in turn, bought bank liabilities or government bonds will assume losses. As a result, the overindebtedness of banks and governments is reduced.

Any of the seven options, or combinations of two or more options, may lie ahead. In any case, they will reveal the losses already incurred and put an end to the wealth illusion. Basically, taxpayers, savers or currency users are exploited to reduce debts and put the currency on a more stable basis. A one-time wealth tax, a currency reform or a bail-in are not very popular policy options, as they make losses brutally apparent all at once. The first option of inflation is much more popular with governments, as it hides the costs of the bailout of overindebted agents. However, there is the danger that the inflation at some point gets out of control. And the monopolist money producer does not want to spoil his privilege with a monetary meltdown. Before it gets to the point of runaway inflation, governments will increasingly ponder the other options, as these alternatives could enable a reset of the system.

Bagus is an associate professor at Universidad Rey Juan Carlos. He is an associate scholar of the Ludwig von Mises Institute and was awarded the 2011 O.P. Alford III Prize in Libertarian Scholarship. He is the author of “The Tragedy of the Euro” and coauthor of “Deep Freeze: Iceland’s Economic Collapse.” “The Tragedy of the Euro” has so far been translated and published in German, French, Slovak, Polish, Italian, Romanian, Finnish, Spanish, Portuguese, British English, Dutch, Brazilian Portuguese, Bulgarian, and Chinese.


Mises Daily: The State Causes The Poverty It Later Claims To Solve

This article, written by Andreas Marquart, executive director of the Ludwig von Mises Institute Germany, was originally published on the Institute’s website on Dec. 7.

If one looks at the current paper money system and its negative social and socio-political effects, the question must arise: where are the protests by the supporters and protectors of social justice? Why don’t we hear calls to protest from politicians and social commentators, from the heads of social welfare agencies and from religious leaders, all of whom promote the general welfare as their mission?

Presumably, the answer is that many have only a weak understanding of the role of money in an economy with a division of labor, and for that reason, the consequences of today’s paper money system are being widely overlooked.

The current system of fractional reserve banking and central banking stands in stark opposition to a market economy monetary regime in which the market participants could decide themselves, without state pressure or coercion, what money they want to use, and in which it would not be possible for anyone to expand the money supply because they simply choose to do so.

The expansion of the money supply, made possible through central banks and fractional reserve banking, is in reality what allows inflation and, thus, declining real incomes. In The Theory of Money and Credit, Ludwig von Mises wrote:

The most important of the causes of a diminution in the value of money of which we have to take account is an increase in the stock of money while the demand for it remains the same, or falls off, or, if it increases, at least increases less than the stock. … A lower subjective valuation of money is then passed on from person to person because those who come into possession of an additional quantity of money are inclined to consent to pay higher prices than before.[1]

When there are price increases caused by an expansion of the money supply, the prices of various goods and services do not rise to the same degree, and do not rise at the same time. Mises explains the effects:

While the process is under way, some people enjoy the benefit of higher prices for the goods or services they sell, while the prices of the things they buy have not yet risen or have not risen to the same extent. On the other hand, there are people who are in the unhappy situation of selling commodities and services whose prices have not yet risen or not in the same degree as the prices of the goods they must buy for their daily consumption.[2]

Indeed, in the case of the price of a worker’s labor (i.e., his or her wages) increasing at a slower rate than the price of bread or rent, we see how this shift in the relationship between income and assets can impoverish many workers and consumers.

An inflationary money supply can cause impoverishment and income inequality in a variety of ways:

1. The Cantillon Effect

The uneven distribution of price inflation is known as the Cantillon effect. Those who receive the newly created money first (primarily the state and the banks, but also some large companies) are the beneficiaries of easy money. They can make purchases with the new money at goods prices that are still unchanged. Those who obtain the newly created money only later, or do not receive any of it, are harmed (wage-earners and salaried employees, retirees). They can only buy goods at prices which have, in the meantime, risen.[3] A short simulation following point 4 below illustrates this timing effect, together with the credit-market leverage described in point 3.

2. Asset Price Inflation

Investors with greater assets can spread their investments more widely and are thus in a position to invest in tangible assets such as stocks, real estate, and precious metals. When the prices of those assets rise due to an expansion of the money supply, the holders of those assets benefit as their assets gain in value. Those holding assets become wealthier, while people with few or no assets either profit little or cannot profit at all from the price increases.

3. The Credit Market Amplifies the Effects

The effects of asset price inflation can be amplified by the credit market. Those with higher incomes can take on more credit than those with lower incomes, and can use it to acquire real estate, for example, or other assets. If real estate prices rise due to an expansion of the money supply, they profit from those price increases, and the gap between rich and poor grows even faster.[4]

4. Boom and Bust Cycles Create Unemployment

The direct cause of unemployment is the inflexibility of the labor market, caused by state interference and labor union pressures. An indirect cause of unemployment is the expansion of the paper money supply, which can lead to illusory economic booms that in turn lead to malinvestment. Especially in inflexible labor markets, when these malinvestments become evident in a down economy, it ultimately leads to higher and more lasting unemployment that is often most severely felt among the lowest-income households.[5]
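Neither mechanism requires elaborate theory; a toy simulation makes both visible. The sketch below is hypothetical Python with made-up numbers (a 10 percent monetary expansion, a 20 percent down payment), written only to illustrate the timing effect of point 1 and the leverage effect of point 3, not to estimate real magnitudes.

```python
# Illustrative sketch of the Cantillon effect and credit-market leverage.
# All numbers are hypothetical; the point is the mechanism, not the magnitudes.

money_supply = 1000.0
new_money = 100.0      # a 10 percent monetary expansion
price_level = 1.0

# Cantillon effect: the early receiver spends before prices adjust.
early_receiver_goods = new_money / price_level        # buys at old prices

# Prices eventually rise in proportion to the expansion (quantity-theory shortcut).
price_level *= (money_supply + new_money) / money_supply

late_receiver_goods = new_money / price_level         # same nominal sum, fewer goods
print(f"early receiver obtains {early_receiver_goods:.1f} units of goods")
print(f"late receiver obtains  {late_receiver_goods:.1f} units of goods")

# Credit-market amplification: leverage multiplies asset-price gains.
house_price = 500_000.0
down_payment = 100_000.0           # 20 percent equity, so 5x leverage
asset_inflation = 0.10             # asset prices rise 10 percent

equity_after = house_price * (1 + asset_inflation) - (house_price - down_payment)
equity_return = equity_after / down_payment - 1
print(f"a 10% asset-price rise yields a {equity_return:.0%} return on equity")
```

On these numbers the early receiver obtains a full 100 units of goods and the late receiver about 91, while the leveraged buyer turns a 10 percent price rise into a 50 percent gain on equity, because the debt stays fixed in nominal terms.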

The State Continues to Expand

Once the gap in income distribution and asset distribution has been opened, the supporters and protectors of social justice will more and more speak out, not knowing (or not saying) that it is the state itself with its monopolistic monetary system that is responsible for the conditions described.

It’s a perfidious “business model” in which the state creates social inequality through its monopolistic monetary system, splits society into poor and rich, and makes people dependent on welfare. It then intervenes in a regulatory and distributive manner, in order to justify its existence. The economist Roland Baader observed:

The political caste must prove its right to exist, by doing something. However, because everything it does, it does much worse, it has to constantly carry out reforms, i.e., it has to do something, because it did something already. It would not have to do something, had it not already done something. If only one knew what one could do to stop it from doing things.[6]

The state even exploits the uncertainty in the population about the true reasons for the growing gap in income and asset distribution. For example, The Fourth Poverty and Wealth Report of the German Federal Government states that since 2002, there has been a clear majority among the German people in favor of carrying out measures to reduce differences in income.

Conclusion

The reigning paper money system is at the center of the growing income inequality and expanding poverty rates we find in many countries today. Nevertheless, states continue to grow in power in the name of taming the market system that has supposedly caused the impoverishment actually caused by the state and its allies.

If those who claim to speak for social justice do nothing to protest this, their silence can only have two possible reasons. They either don’t understand how our monetary system functions, in which case, they should do their research and learn about it; or they do understand it and are cynically ignoring a major source of poverty because they may in fact be benefiting from the paper money system themselves.


Mises Fellow Explains The United States’ Senseless Prohibition Of Hemp

This article, written by Ludwig von Mises Institute senior resident fellow Mark Thornton, originally appeared on the Mises Institute’s website.

Hemp is a plant from the cannabis family that is often closely associated with marijuana. Marijuana, whether Cannabis indica or Cannabis sativa, can contain high concentrations of tetrahydrocannabinol (THC), a psychoactive component of marijuana, along with a large number of other cannabinoids. THC is the primary reason why humans have used these two forms of cannabis medicinally, recreationally, and ritually for a few thousand years.

Hemp is a variety of Cannabis sativa that has been used by humans even longer, several thousand years, to produce fiber and oil seed. This variety has an extremely low or undetectable concentration of THC. It therefore cannot produce the “high” associated with marijuana, nor does it have any known medical uses. However, it is a very valuable and versatile raw material in the production of such products as paper, textiles, rope, bio-fuels, protein powder for humans, bird seed and many other products, including biodegradable plastics. It is generally considered environmentally friendly because it requires little or no herbicides, pesticides, or chemical fertilizers.

Its current economic value is difficult to determine because it has been linked with marijuana and considered illegal in many countries, including the U.S., since 1937. In that year the Marijuana Tax Act was passed, which effectively prohibited marijuana because the tax was set high enough to discourage legal transactions. It also stopped the cultivation of hemp, except during World War II, when hemp’s military usefulness earned it a temporary reprieve. The Marijuana Tax Act was overturned in 1969 and replaced with the Comprehensive Drug Abuse Prevention and Control Act of 1970.

We do know that historically hemp has been extremely useful and valuable. According to Scott Sondles, author of Hemponomics: Unleashing the Power of Sustainable Growth:

Hemp is a variety of cannabis sativa and was one of the first crops domestically cultivated. Since the beginning hemp has been an essential staple crop and up until the mid-19th century it was the most traded commodity in the world. (p. 4)

Sondles recounts how hemp was the raw material that made the sails for ships, the ropes for the pulley and other early machinery, as well as the first form of paper to be extensively used. The seed oil was also important as a base ingredient in products such as inks and paints. Christopher Columbus’s ships contained over 80 tons of hemp-based products, primarily the sails and ropes used to power and steer the ships.

That might be all well and good, but maybe hemp is no longer as valuable. The development of plastics, technology, and new raw materials may have turned hemp into an obsolete product that can no longer compete in the industrial era.

To counter that notion, it is important to point out that Australia, Canada, and many other countries have all legalized the growing of hemp in recent years. France has a long tradition of growing hemp for seed oil and China plans to substantially increase acreage and production of hemp, primarily for textile production.

So why doesn’t the U.S. take advantage of hemp production? Sondles and others point the finger at the DuPont Corporation. In 1937, the year that the Marijuana Tax Act was passed, the DuPont Corporation was awarded a patent on the production of plastics from oil. As Sondles sees it, correctly in my view, this was the turning-point case where special interests pushing a mercantilist policy agenda won out over the virtues of the free market.

Sondles lays out this sordid case of politics against the people. It begins with Harry Anslinger. Anslinger had been the chief enforcement bureaucrat of alcohol prohibition and went on to become commissioner of the Federal Bureau of Narcotics. He was appointed by Treasury Secretary Andrew Mellon, who was DuPont’s banker and Anslinger’s soon-to-be in-law.

Anslinger drew up the propaganda against marijuana and drafted the legislation that included industrial hemp. In order to avoid the tax and penalties associated with marijuana, farmers would have to process their crops on their own farms to remove all the leaves from the stalks before transport. This processing requirement made growing hemp economically prohibitive compared to the growing of other crops which would earn farmers subsidies from the federal government.

According to Sondles’s reading of history, the importance of marijuana may have been a secondary consideration compared to hemp in the passage of the Marijuana Tax Act. With Anslinger providing propaganda against marijuana from inside government, the cabal could count on William Randolph Hearst to distribute the propaganda through his large chain of newspapers. Hearst owned a “vast acreage of timberland and was investing in paper mills to manufacture newspaper using DuPont’s chemicals.”

There are two additional strengths of Sondles’s book that I would like to mention. First, it includes an introduction to hemp’s important place in world and American history. When you get done with the book you have to wonder how the textbook writers could ignore this crop in their books. The second strength of the book is that the author has a good sense of Austrian economics when it comes to politics, public policy, war, and even monetary theory, deflation and the Austrian business cycle theory. The author may be overly enthusiastic on the question of the prospects of legal hemp, but if the silly and sordid prohibition against hemp is repealed we will get the final verdict from the marketplace.

Get Thornton’s “The Economics of Prohibition” here.

General Electric’s Crony Capitalism

This is an adaptation of chapter 10 from Hunter Lewis’s book Crony Capitalism in America: 2008-2012, available in the Mises Store.

During the presidential campaign of 2012, an online commentator observed that President Obama had not met with his Jobs Council for six months. How could this be, the commentator asked, when jobs were foremost on the president’s agenda? The answer was not hard to discover.

The Council was headed by General Electric CEO Jeffrey Immelt, a noted Obama political backer. Other members included Penny Pritzker, an heiress who served as Obama’s finance chairwoman in 2008, and Richard Trumka, president of the AFL-CIO, one of the largest Obama campaign contributors. The group was established after the 2010 mid-term election losses as a device to emphasize the administration’s focus on jobs but, more importantly, to recognize political allies and campaign donors and to prepare for the 2012 presidential election. This was more or less acknowledged when, after the president’s re-election, it was disbanded, despite the persistence of high unemployment.

Why had the president chosen General Electric’s Immelt in particular as the head of this campaign arm? For one reason, Immelt was sympathetic to the president’s brand of state-led capitalism. He had gone so far as to say of China in a television interview: “The one thing that actually works, state run communism, may not be your cup of tea, but their government works.”[1]

In addition, employees of General Electric as a group had been Obama’s ninth largest campaign contributor in 2008, donating $529,855. These donations in part reflected the company’s close and indeed symbiotic relationship with government in finance, defense, green energy, television, technology, and export, and its status as a primary beneficiary of the administration’s stimulus bill. It was impossible to say where the government stopped and General Electric began and vice versa.

Even more importantly, the government rescued the company from what seemed likely to be bankruptcy in 2008-2009. It also let the company off with an exceptionally mild slap-on-the-wrist fine of $50 million for cooking its books in the late 1990s and 2000s, when there might instead have been a large fine and criminal fraud charges.[2] As a further indication of its exceptionally close ties, the Obama administration inserted language into the late 2012 fiscal cliff bill that enabled the company to avoid paying much in federal income taxes.[3]

How had General Electric come to be in need of a government rescue during the Crash of 2008? For most of its history, the company was considered the bluest of blue chip firms, the last company that anybody would have expected to need a rescue. Prior to the Crash of 2008, it enjoyed the highest possible score from the financial rating agencies. There was a problem, however: the rating was undeserved, perhaps the result of rating-agency myopia, perhaps of some behind-closed-doors deal.

GE Capital, the company’s finance arm, was the fastest growing part of the company. By 2007, it contributed almost 40 percent of revenues and almost half of profits. It generated these revenues and profits by using the company’s triple-A financial rating to borrow money for short periods at rates even lower than those paid by banks, and then relending it for longer periods to consumers, including sub-prime borrowers. This was a classic house of cards. It should have resulted in the company’s bankruptcy. But when, in September 2008, GE ran out of credit, and the survival of the company suddenly became doubtful, Immelt knew what to do.
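The mechanics of that house of cards can be sketched in a few lines of hypothetical code. The figures below are invented for illustration (they are not GE Capital’s actual balance sheet); the point is that borrowing short and lending long produces handsome spread income right up until the short-term paper can no longer be rolled over.

```python
# Hypothetical sketch of a borrow-short, lend-long finance arm.
# Invented numbers; not GE Capital's actual figures.

assets = 100.0       # long-term consumer loans (in billions, say)
equity = 8.0         # a thin equity cushion
commercial_paper = assets - equity   # short-term funding, rolled over constantly

borrow_rate = 0.03   # cheap short-term rate, courtesy of the triple-A rating
lend_rate = 0.08     # long-term lending rate, including sub-prime borrowers

# In normal times the spread looks like free money:
annual_profit = assets * lend_rate - commercial_paper * borrow_rate
print(f"spread income: {annual_profit:.2f}, roughly a {annual_profit / equity:.0%} return on equity")

# But long-term loans cannot be called in when the paper fails to roll over.
# If funding freezes and assets must be sold at even a modest discount:
fire_sale_discount = 0.10
recovered = assets * (1 - fire_sale_discount)
print(f"equity after forced sale: {recovered - commercial_paper:.2f}")
# A 10 percent haircut leaves -2.0: the equity is gone and the firm is insolvent.
```

On these made-up numbers, a mere 10 percent haircut on the long-term assets leaves the firm unable to repay its short-term funding, which is exactly the position a “lender of last resort” is called in to paper over.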

David Stockman, budget director under President Reagan and professional investor, described what happened:

The nation’s number one crony capitalist — Jeff Immelt of GE — jumped on the phone to [Treasury] Secretary Paulson and yelled “fire!” Soon the Fed and FDIC stopped the commercial-paper [short-term corporate debt] unwind dead in its track by essentially nationalizing the entire market. Even a cursory look at the data, however, shows that Immelt’s SOS call was a self-serving crock.

So in the fall of 2008, the US supposedly stood on the edge of an abyss, facing a likely shutdown of the entire financial system and a Depression from which we might never emerge. But this was actually just hyperbole, a way to scare President George W. Bush and members of Congress. No wonder the former said: “I’ve abandoned free market principles to save the free market system.” To say something so foolish in public, in a television interview, he must have actually believed it.

Secretary Paulson is also alleged to have said, after receiving Immelt’s desperate call in September 2008, that he realized the crisis had now spread from Wall Street to Main Street. But he must have known that GE was, by that time, the very embodiment of Wall Street, despite being headquartered nearby in Connecticut. No doubt “helping Main Street” provided good cover for, among other things, saving Paulson’s Goldman Sachs.

By the time the Obama administration arrived, GE was spending more money on lobbying than any other company. Immelt was asked first to join the President’s Economic Recovery Advisory Board and then, as we have noted, to chair the Council on Jobs and Competitiveness. When the administration’s Environmental Protection Agency (EPA) began enforcing new rules to reduce greenhouse gas emissions, the very first exemption was granted to a GE-powered facility, the Avenal Power Center in California.[4] Meanwhile, GE built a part for General Motors’ electric car, the Chevy Volt, a favorite project of the administration that had been given hidden subsidies of as much as $250,000 per vehicle along with buyer tax credits.[5] When that proved insufficient to get the car sold, the government bought thousands of Volts for its own fleet.

It was potentially embarrassing to the administration that GE outsourced so many jobs overseas. For example, when Congress outlawed old-fashioned incandescent light bulbs, partly at GE’s urging, manufacture of the new fluorescent bulbs was moved from GE’s light bulb plants in Ohio and Kentucky to China. Also potentially embarrassing, but little known, was that the fluorescents contained mercury, an environmental hazard, and that some of the Chinese workers had reportedly been poisoned by exposure to it.[6] None of this, however, kept GE from benefiting, directly or indirectly, from what may have been billions in Stimulus Act grants.

–Hunter Lewis


Hunter Lewis is cofounder of Against Crony Capitalism. He is the former CEO of Cambridge Associates and the author of eight books, including two new books, Free Prices Now! and Crony Capitalism in America: 2008-2012. He has served on boards and committees of 15 not-for-profit organizations, including environmental, teaching, research, and cultural organizations, as well as the World Bank. See Hunter Lewis’s article archives.


Notes


[1] Charlie Rose CBS interview, http://www.freebeacon (December 10, 2012).

[2] Grant’s Interest Rate Observer (October 5, 2012): 1.

[3] Carney, http://www.washingtonexaminer.com (January 2, 2013).

[4] http://www.washingtonexaminer.com (February 2, 2011).

[5] http://www.againstcronycapitalism.org (February 21, 2012).

[6] Times of London; also http://www.washingtonexaminer.com (May 17, 2011).


This work is licensed under a Creative Commons Attribution 3.0 Unported License.

Mises Official Explains The Hidden Role Of The IRS In Obamacare

This article, written by Professor Joseph Salerno, was originally published on Mises.org on Nov. 26.

The highly publicized glitches and failures associated with the launch of the Affordable Care Act have obscured the central role of the IRS in carrying out the law’s mandate. Stripped down to its essentials, Obamacare is not an “insurance” plan at all. It is rather a naked redistributionist scheme to coerce the young and the healthy into paying the healthcare bills of the elderly and the sickly. This means that some agency had to be enlisted to penalize the young and healthy who refuse to willingly participate in their own fleecing. What agency is better equipped to do this than the IRS? In an article in the Washington Post, Tom Hamburger and Sarah Kliff point out:

. . . the IRS also has a huge role in carrying out the law, including helping to distribute trillions of dollars in insurance subsidies and penalizing people who do not comply.

The fine is intended to encourage healthy people to enroll even if they do not have an immediate need for care. If the elderly and the sick dominate the ranks of those who sign up, it could lead to what health economists call an “insurance death spiral” of rapidly escalating costs, premium hikes and declining enrollment.

This means a massive increase in the scope and operations of the IRS, which is:

. . . charged under the act with carrying out nearly four dozen new tasks in what represents the biggest increase in its responsibilities in decades. None is more crucial than enforcing the requirement that all citizens secure health insurance or pay a penalty.

Fortunately for the American public, because the IRS has lately become so universally reviled, it has been “hamstrung” by Congress in carrying out its mandate: it is legally precluded from employing its full fascist panoply of liens, foreclosures, and criminal prosecutions. It can only garnish the tax refunds due to those uninsured who have overpaid their taxes. (The penalty is $95 or 1 percent of income, whichever is greater.)
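The penalty rule as stated here reduces to a one-line function. The sketch below encodes only the formula quoted in the paragraph above, not the full statutory computation, which contains further details.

```python
def uninsured_penalty(income: float) -> float:
    """Penalty as quoted above: $95 or 1 percent of income, whichever is greater."""
    return max(95.0, 0.01 * income)

# Below $9,500 of income the flat $95 binds; above it, the 1 percent applies.
print(uninsured_penalty(8_000))    # 95.0
print(uninsured_penalty(50_000))   # 500.0
```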

Meanwhile, some are beginning to express doubts about whether the law can be made to work given the present structure of incentives and penalties. Jon Gruber, the MIT economist who helped design the mandate in the Massachusetts insurance plan, says, “We should be absolutely clear we don’t know how this will work.” And Robert Laszewski, president of Health Policy and Strategy Associates, opines, “I now think there is little hope we are going to get enough younger healthy people to sign up, and that means that this law is in grave danger of financial collapse.”

Another thought about Obamacare: it is a redistribution scheme that egalitarians contemplate only in their most fevered dreams. It actually redistributes health itself from those endowed with it to those who are not. Because real income is directly correlated with health, financially penalizing the healthy, whether through formal penalties if they do not enroll or over-priced premiums if they do, deprives them of part of the means by which they maintain their health.

Salerno is academic vice president of the Mises Institute, professor of economics at Pace University, and editor of the Quarterly Journal of Austrian Economics. He has been interviewed in the Austrian Economics Newsletter and on Mises.org.

This work is licensed under a Creative Commons Attribution 3.0 Unported License.

Mises: Social Security Is The Most Successful Ponzi Scheme in History

This article, written by Pepperdine University economics professor Gary Galles, was originally published by Mises on Nov. 22.

“We paid our Social Security and Medicare taxes; we earned our benefits.” It is that belief among senior citizens that President Obama was pandering to when, in his second inaugural address, he claimed that those programs “strengthen us. They do not make us a nation of takers.”

If Social Security and Medicare both involved people voluntarily financing their own benefits, an argument could be made for seniors’ “earned benefits” view. But they do not. Instead, these programs have redistributed tens of trillions of dollars of wealth to seniors from those younger.

Social Security and Medicare have transferred those trillions because they have been partial Ponzi schemes.

After Social Security’s creation, those in or near retirement got benefits far exceeding their costs (Ida May Fuller, the first Social Security recipient, got 462 times what she and her employer together paid in “contributions”). Those benefits in excess of taxes paid inherently forced future Americans to pick up the tab for the difference. And the program’s almost unthinkable unfunded liabilities are no less a burden on later generations because earlier generations financed some of their own benefits, or because the government has consistently lied that they have paid their own way.

Since its creation, Social Security has been expanded multiple times. Each expansion meant those already retired paid no added taxes, and those near retirement paid more for only a few years. But both groups received increased benefits throughout retirement, increasing the unfunded benefits whose burdens had to be borne by later generations. Thus, each such expansion started another Ponzi cycle benefiting older Americans at others’ expense.
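The pay-as-you-go arithmetic behind those cycles can be made concrete with a toy model. The sketch below uses hypothetical numbers, written only to show the structure: each period’s payroll-tax receipts are paid straight out to current retirees, so the first cohort collects without ever contributing, cohorts that retire while the tax base is growing get back more than they paid, and once growth stops the windfalls end while the unfunded claims remain.

```python
# Toy pay-as-you-go model. Hypothetical numbers; this illustrates the
# structure of the scheme, not actual Social Security finances.

tax = 1_000                              # paid by each worker, per period
cohort = [100, 100, 120, 144, 144, 144]  # cohort sizes, oldest first

# In each period t, the workers of cohort t+1 fund the retirees of cohort t.
for t in range(len(cohort) - 1):
    receipts = cohort[t + 1] * tax       # collected from current workers
    benefit = receipts / cohort[t]       # split among current retirees
    paid = 0 if t == 0 else tax          # the first cohort never contributed
    print(f"cohort {t}: paid {paid:>5} each, receives {benefit:>6.0f} each")

# cohort 0 collects 1,000 each without paying; cohorts 1 and 2 enjoy a 20
# percent windfall while the worker base grows; once growth stops, cohorts 3
# and 4 merely break even, and the early windfalls remain as unfunded claims.
```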

Social Security benefits have been dramatically increased. They doubled between 1950 and 1952. They were raised 15 percent in 1970, 10 percent in 1971, and 20 percent in 1972, in a heated competition to buy the elderly vote. Benefits were tied to a measure that effectively double-counted inflation, and even now benefits are over-indexed to inflation, raising real benefit levels over time.

Disability and dependents’ benefits were added by 1960. Medicare was added in 1966, and benefits have been expanded (e.g., Medicare Part B, only one-quarter funded by recipients, and Part D’s prescription drug benefit, only one-eighth funded by recipients).

The massive expansion of Social Security is evident from the growing tax burden since its $60-per-year initial maximum (for employees and employers combined). Tax rates have risen and been applied to more earnings: Social Security now takes a combined 12.4 percent of earnings up to $113,700, while Medicare’s 2.9 percent combined rate applies to all earnings, plus a 0.9 percent surtax beyond $200,000 of earnings.
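For comparison with that $60 initial maximum, the rates and caps just quoted can be applied directly to a given wage. This sketch simply encodes the figures from the paragraph above.

```python
def combined_payroll_tax(wages: float) -> float:
    """Combined employee-and-employer payroll tax, using the figures quoted above."""
    ss_tax = 0.124 * min(wages, 113_700)         # 12.4% up to the $113,700 cap
    medicare_tax = 0.029 * wages                 # 2.9% on all earnings
    surtax = 0.009 * max(0.0, wages - 200_000)   # 0.9% beyond $200,000
    return ss_tax + medicare_tax + surtax

print(f"${combined_payroll_tax(50_000):,.0f}")    # $7,650
print(f"${combined_payroll_tax(250_000):,.0f}")   # $21,799
```

A $50,000 earner thus carries a combined payroll-tax burden of $7,650 a year, more than 125 times the program’s original maximum.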

Those multiple Ponzi giveaways to earlier recipients created Social Security’s 13-digit unfunded liability and Medicare’s far larger hole. And despite politicians’ repeated, heated denials, many studies have confirmed the results.

One recent study of lifetime payroll taxes and benefits comes from the Urban Institute. For Medicare, they calculated that (in 2012 dollars) an average-wage-earning male would get $180,000 in benefits, but pay only $61,000 in taxes — “earning” only about one-third of benefits received. A similarly situated female does even better. The cumulative “excess” benefits equal $105 trillion, with net benefits increasing over time.

The Urban Institute’s calculations revealed a different situation for Social Security. An average-earning male who retired in 2010 will receive $277,000 in lifetime benefits, $23,000 less than his lifetime taxes, while for females, their $302,000 in lifetime benefits approximates their lifetime taxes. And things are getting worse. By 2030, that man will be “shorted” 16 cents (10 cents for women) of every lifetime tax dollar paid.
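Those figures imply a simple recovery ratio. From the numbers in the paragraph above,

$$\text{lifetime taxes} = \$277{,}000 + \$23{,}000 = \$300{,}000, \qquad \frac{\$277{,}000}{\$300{,}000} \approx 0.92$$

so the 2010 retiree recovers about 92 cents per tax dollar, and being “shorted” 16 cents on the dollar by 2030 means recovering only about 84 cents.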

While those results resoundingly reject “we earned it” rhetoric for Medicare, the Social Security results, with new retirees getting less than they paid in, could be spun as “proving” Social Security is not a Ponzi scheme. However, that would be false. The reason is that Medicare is still in its expansion phase, as with Medicare Part D, piling up still bigger future IOUs. However, Social Security has essentially run out of new expansion tricks, although liberal groups are pushing to apply Social Security taxes to far more income as one last means of robbing those younger to delay the day of reckoning. That simply means that we are being forced to start facing the full consequences of the redistribution that was started in 1935. That is, the current bad deal Social Security offers retirees is just the result of the fact that it has been a Ponzi scheme for generations, and someone must get stuck “holding the bag.”

In fact, perhaps the best description of the current Social Security and Medicare situation comes from Henry Hazlitt, long ago, in Economics in One Lesson:

Today is already the tomorrow which the bad economist yesterday urged us to ignore. The long-run consequences of some economic policies may become evident in a few months. Others may not become evident for several years. Still others may not become evident for decades. But in every case those long-run consequences are contained in the policy as surely as the hen was in the egg, the flower in the seed.

Social Security and Medicare’s generational hijacking has become “the third rail of politics” in large part because seniors want to believe that they paid their own way. But they have not. They have paid for only part of what they have gotten. The rest has indeed been a Ponzi scheme. And as Social Security is already revealing, the day of reckoning cannot be put off forever, however much wishful thinking is involved. Some are already being forced to confront the exploding pot of IOUs involved, and it will get much worse.

The supposedly “most successful government program in the history of the world,” according to Harry Reid, has turned seniors into serious takers. The fact that some of them are now starting to share the pain caused by those programs does not contradict that fact. It just shows the dark side of the most successful Ponzi scheme in the history of the world.

This work is licensed under a Creative Commons Attribution 3.0 Unported License.