Spring 2010

The Scourge of Juristocracy

– James Grant

When rights are at issue, Americans instinctively turn to the courts. It is an undemocratic habit that they have exported, along with the underlying institutions, with dismaying success.

The United States may not be the world’s indispensable nation, as its secretary of state famously claimed a dozen years ago, but it has certainly been the indispensable inspiration in the global spread of democracy. The irony is that while this has not led to a great deal of imitation of American institutions such as the presidency, the single most widely replicated feature of the American political system is also its most undemocratic one.

Since the end of World War II, there has been a worldwide convergence toward U.S.-style judicial supremacy—or what some observers now call “juristocracy.” In both long-established and new democracies, as Ran Hirschl shows in his excellent book Towards Juristocracy (2004), constitutional reforms have taken political power away from elected politicians and shifted it to unelected judges. When democracies were established in Southern Europe in the 1970s, in Latin America in the 1980s, and in Central and Eastern Europe and South Africa in the 1990s, they almost all included a strong judiciary and a bill of rights.

Of the mature democracies that have embraced juristocracy in the postwar rights revolution, Israel is one of the most extreme examples. As Aharon Barak, the president of the Israeli Supreme Court from 1995 to 2006, once claimed, “Nothing falls beyond the purview of judicial review. The world is filled with law; everything and anything is justiciable.” Even the most contentious questions—such as “Who is a Jew?”—were questions for the court to answer. Barak made it clear that the main influence on his approach was the U.S. Supreme Court, the decisions of which were “shining examples of constitutional thought and constitutional action.”

From the beginning, Americans have embraced and idolized the notion of fundamental, higher-order, immutable law that is somehow superior to politics. It is a view that entails rights enshrined in a constitution and interpreted by judges, who extend their authority over ever larger domains. In the 20th century, the U.S. Supreme Court demonstrated an increasing readiness to actively resolve politically controversial issues, from Roe v. Wade (1973), which established the right to abortion, to its decision in Citizens United v. Federal Election Commission earlier this year, which overturned legislation that barred corporations from sponsoring political ads to influence elections—a “devastating” ruling, as President Barack Obama described it, which “strikes at our democracy itself.”

Modern judicial activism is in many ways an expression of the old belief that democracy must be tempered by aristocracy—an idea that was prevalent in the late 18th century and now masquerades in democratic garb. The main vehicle by which judicial activism has been brought about is, of course, the language of rights. Coinciding with the articulation of the secular, anti-religious feelings of the Enlightenment, the flourishing of constitutional debate in the 18th century witnessed regular appeals to the idea of inalienable natural rights, which took on a sacred role. But it was only in the latter half of the 20th century that the idea (now described as human rights) became an intrinsic part of legal and political discourse. For many today, a world without rights enforced by a judiciary is unthinkable. Especially in undemocratic regimes and in new or unstable democracies beset by deep corruption and other ills, rights-based judicial review is a necessary protection against arbitrary government. But in ostensibly healthier democracies, it inevitably comes at a cost.

Until recently, parliamentary systems such as Britain’s were firmly based on the belief that rights-based judicial review exacts a price not worth paying. Britain is one of the few remaining countries that lack a written (or codified) constitution. That is not to say that Britain does not have a constitution—or that it does not take rights seriously—but rather that the constitution is to some extent flexible, and that the protection of human rights is contingent on the democratic will of Parliament. From this perspective, a bill of rights is merely, in the words of one constitutional specialist, “the statement of a political conflict pretending to be a resolution of it.” Rights, as political claims, must compete with other political claims, must fight the political fight—a conflict that is not resolved by using rights as trumps. In what circumstances should, say, liberty prevail over security, or vice versa? By handing such decisions to the judiciary, juristocracy denies citizens their democratic right to participate in the political decision-making process.

In their responses to the inescapable choices—between rights and democracy, minority rule and majority rule, law and politics—the prevailing views in the United States and Britain in recent times could not have been further apart. Paradoxically, however, in the Revolutionary era—a critical period for both the U.S. and British constitutions—the American republic was largely influenced by a British practice in which public debate was suffused with a culture of law. Law was seen as a critical part of intellectual life and something that all educated gentlemen should study.

At the same time, however, the two countries began to diverge in their approaches to law. One reason for this was the different way the Enlightenment was felt on each side of the Atlantic. Whereas in Britain the Enlightenment had the effect of reducing respect for law, which was now seen as tradition-bound and reactionary, in America lawyers were among the most radical thinkers, and largely replaced the clergy as the dominant force in American culture and public affairs. Believing that parliamentary tyranny was just as bad as royal tyranny, many American colonists placed their faith in fundamental law enshrined in a constitution—as John Adams famously put it, “a government of laws and not of men.” Even Thomas Paine, an otherwise radical democrat, wrote admiringly in Common Sense (1776) that “in America the law is King. For as in absolute governments the King is law, so in free countries the law ought to be King.”

At the time of the Founding, there was a strong demand in America for information on English common law—that long tradition of judge-made law derived from the wisdom of the judicial elite but said to embody the common sense of the nation. For Americans, by far the most important defense of that tradition was Sir William Blackstone’s famous Commentaries on the Laws of England (1765–69), which became so influential that in 1775 Edmund Burke announced that “they have sold nearly as many of Blackstone’s Commentaries in America as in England.” Common law was used to support claims of natural rights. According to Roger Sherman, who helped draft the Declaration of Independence, the British constitution was rooted “in the law of God and nature,” and the colonies adopted common law “not as common law, but as the highest reason.”

In Britain, however, Blackstone’s views were severely criticized. One of the first to take him on was Edward Gibbon, who attacked Blackstone’s defense of the mysteries of common law (with its roots “in barbarous ages, and since continued from a blind reverence to antiquity”) as an attempt to perpetuate the privileged status of lawyers and judges in society. Gibbon argued that just as the clergy of all religions preferred traditional law to written law, so too did the lawyers, because it secured their status as the law’s sole interpreter. The legal establishment, for obvious reasons, had an interest in making the law as obscure as possible.

Even more scathing than Gibbon was Jeremy Bentham, the utilitarian philosopher and jurist, whose criticisms of Blackstone were published in 1776 in a book titled A Fragment on Government. Having listened to Blackstone’s lectures as a student at Oxford with “rebel ears,” Bentham pursued a lifelong campaign against his work, one that was to prove critical in the development of the British constitution. Bentham is perhaps most famous for his claim that there are no inalienable natural rights, which he dismissed as “nonsense upon stilts.” Rights for him were only political claims and opinions. The common-law tradition, he argued, was nothing more than an attempt to substitute the opinion of judges for that of the people as expressed in legislation. Why, he asked, should we prefer the opinion of the few to that of the many?

Although Bentham was opposed to the American Revolution when he wrote these criticisms of Blackstone, in his later years he came to embrace democracy, and to see America (or, as he preferred to call it, the “Anglo-American United States”) as the best example of democracy in action. But Bentham also recognized that while the “plague of despotism,” by which he meant English rule, had been driven out of the United States, there remained the “plague of lawyers.” In America, as Alexis de Tocqueville was later to put it, the aristocracy of lawyers and judges provided a bulwark against the “excesses of democracy.” They “secretly oppose their aristocratic propensities to the nation’s democratic instincts, their superstitious attachment to what is old to its love of novelty, their narrow views to its immense designs, and their habitual procrastination to its ardent impatience.” With the growth of what Alexander Hamilton called a “sacred respect for constitutional law,” the law became a “civil religion” in secular America, and even progressive liberals started to praise the Supreme Court, not as a conservative bulwark against democracy but as an instrument of evolutionary progressive change.

It has long been assumed that in Marbury v. Madison (1803) the Supreme Court unilaterally asserted the power, without any basis in the Constitution, to declare acts of Congress unconstitutional. This is wrong. The decision was relatively uncontroversial at the time, and there is overwhelming evidence to suggest that the power of judicial review was intended by some of the Framers. “Right from the nation’s beginning,” writes Gordon Wood, one of the leading historians of the period, the judiciary “acquired a special power that it has never lost.” That is not to say that the Court was at first keen to exercise its power. It was only gradually, over the following two centuries, that judicial review came to mean judicial supremacy.

However, judicial review was far from uncontested at the time. Perhaps its most important critic was Thomas Jefferson, even though he was himself a lawyer who advocated a prominent role for lawyers in public affairs. Jefferson thought that most Americans, obsessed as they were with English common law, had completely lost sight of republicanism. Another great critic was James Madison, who believed that safeguards against the “excesses of democracy” were to be found in the checks and balances of the American political (as opposed to legal) system—a system that he, contrary to Jefferson, saw as partly aristocratic in design. As Madison noted in The Federalist 51, any power supposedly outside politics, such as the judiciary, could not be trusted, because it could easily end up espousing the views of an unjust majority or result in the tyranny of the minority, and “may possibly be turned against both parties.”

Although Jefferson was committed to inalienable human rights, he had much in common with the more radical Bentham. Ignoring Madison’s advice that a bill of rights could actually limit the people’s rights (by restricting protection only to enumerated rights), Jefferson argued strenuously that the Constitution was inadequate without one—a view Madison was eventually compelled to accept. But Jefferson did not believe in a strong judiciary; in fact, he wanted to tame the judiciary and turn it into “a mere machine.” Yes, the judiciary could enforce the Bill of Rights, but such enforcement would not entail judicial review of legislation, because the judges did not have a monopoly on the interpretation of the Constitution. Jefferson was deeply opposed to the common-law tradition because he thought that the only legitimate law was legislation emanating from the will of the people.

That said, America’s democratic tradition has never really been dominant. American constitutional history has instead been defined by the view of Hamilton, who argued that the “learned professions truly form no distinct interest in society” and, as such, were “an impartial arbiter.” Responding to popular attacks on the aristocratic propensities of lawyers, but maintaining their belief that democratic politics was something to be feared, American lawyers in the early republic tried to convince themselves and the public that the judiciary was indeed independent and impartial. The ideal of the separation of powers was Montesquieu’s “enthusiastic but mistaken tribute” to the British constitution, the philosopher Isaiah Berlin lamented, which misled Blackstone and resulted in the principle’s being “much too faithfully adopted in the United States.”

One of the main problems with the separation of powers theory is that it led to the erroneous belief that a strict distinction could be drawn between lawmaking and judicial decision making, the former being the legitimate function of the legislature. Justice Antonin Scalia is among those (now in a minority) who insist that judges should not “make” law; they should simply apply and interpret legislation. Judges, he argues, should not appeal to the idea of a “living constitution” or look to the purpose of the law or the intention of the legislature. If they do, they will be making a judgment based not on what the law in fact is but on what it ought to be. Instead, judges should look to the original meaning of the text. Scalia is ardently opposed to the common-law tradition, chiefly because of his understanding of democracy: Unelected judges should not be lawmakers. As he sees it, only if judges follow the original meaning can judicial review be fully democratic and neutrally conducted.

But few today take seriously this conservative focus on the original meaning, which requires historical study (Bentham would say ancestor worship) and can give rise to countless competing interpretations. Most American legal thinkers instead take a view similar to that of the classical common-law lawyers. For them, when judges decide cases, they are applying the law that already exists in the form of the community’s common principles, which may change over time. With their training and experience, judges, in this view, are best placed to work out what the community’s common principles are (or, more accurately, what they ought to be). As Alexander Bickel wrote in his seminal work The Least Dangerous Branch (1962)—which derived its title from Hamilton’s description of the judiciary—the Supreme Court is the “guardian” of the nation’s values, a role it has vastly expanded in recent decades.

The most prominent modern defender of this kind of judicial supremacy is Ronald Dworkin, the doyen of liberal legalism. Writing in The New York Review of Books last year, Dworkin criticized Justice Sonia Sotomayor for perpetuating the myth that law can be neutral with regard to political morality when, in her confirmation hearings, she repeatedly claimed that her constitutional philosophy was simply “fidelity to the law.” Dworkin rightly saw this as a meaningless statement, and used the occasion to drive home his message that legal judgment requires a controversial decision based on principles of morality. For him, the very idea of neutrality is absurd.

Originally, liberals held quite a different view of the judiciary’s role. It had become obvious to progressives at the start of the 20th century that the courts act politically. This was a time of conservative rulings, exemplified by Lochner v. New York (1905), in which the Supreme Court, reading its laissez-faire values into the Constitution, struck down a law limiting the working hours of bakers on the grounds that it was an unconstitutional interference with freedom of contract. The era marked a turning point in America for progressive jurists, of whom Justice Oliver Wendell Holmes was a prominent early example. In his dissent in Lochner, Holmes called for judicial restraint and argued that, in a democracy, the legislature and not the courts should decide such controversial issues. The legal realists, as these jurists became known, acknowledged that judges’ political biases played a key role in judicial decision making, and that judicial decision making unavoidably entailed judicial lawmaking.

After the Court started to issue progressive rulings during the New Deal, however, liberal criticism petered out. Earl Warren’s tenure as chief justice (1953–69)—which is most famous for its decision in Brown v. Board of Education (1954) outlawing racial segregation in public schools—was every bit as political as the Lochner era. Liberals found it very easy to agree with the Court’s judicial activism because the justices were reading their liberal values into the Constitution. The difficulty, however, was that having supported the Court’s increased politicization during the Warren era, liberals found it difficult to make any tenable criticisms when conservative rulings reappeared during the tenure of Chief Justice William Rehnquist (1986–2005). Dworkin had argued that judges should decide cases according to their political morality, and that was precisely what the Rehnquist Court—with its apogee in Bush v. Gore (2000)—was doing (and what the Roberts Court continues to do).

The conversion of American liberals to the case for a political role for the Court coincided with the growth of support around the world, accelerated by World War II, for judicially enforceable human rights—culminating in the Universal Declaration of Human Rights in 1948. According to the philosopher John Gray, this trend was an important development of “the older liberal project, or illusion, of abolishing politics, or of so constraining it by legal and constitutional formulae that it no longer matters what are the outcomes of political deliberation.” In a democracy, this is unacceptable, which is why the British system, based on the legislative supremacy of Parliament, has generally sought to resolve questions of human rights by turning to elected politicians rather than unelected judges.

The British system, however, is far from perfect. Take, for example, the fact that government ministers continue to derive many powers from the monarch and not from Parliament. This and other problems (such as the ability of the government to control Parliament through its backbench members) generally stem from the insufficiency of parliamentary power. But rather than strengthen Parliament to ensure more effective accountability, Britain is strengthening the power of the courts. In The New British Constitution (2009), Oxford professor of government Vernon Bogdanor explains that, almost without anyone noticing, “a new constitution is in the process of being created before our eyes.” The traditional supremacy of Parliament is being undermined, and the judiciary is now taking center stage.

In the 1990s, the British courts—without any constitutional basis—began to use the language of fundamental constitutional rights. In 1998, Parliament itself passed the Human Rights Act, incorporating into domestic law the catalog of basic rights (such as the rights to life, privacy, and free expression) set out in the European Convention on Human Rights (1950). The act, which Bogdanor approvingly calls the “cornerstone of the new British constitution,” did not give judges the power to declare legislation unconstitutional, but it seems inevitable that they will move in that direction. This was made explicit last fall, when the new Supreme Court of the United Kingdom replaced the House of Lords as the highest court in Britain. According to Lord Collins, who is one of its 12 justices, the new court will become like its U.S. counterpart: “perhaps not so pivotal as the American Supreme Court, but certainly playing a much more central role in the legal system and approaching the American ideal of a government of laws and not of men.”

If Britain does have a new constitution, it is unique in the manner in which it has been created. More than 10 years ago, at the outset of New Labour’s constitutional reforms, David Marquand, a public intellectual and former Labour MP, described the changes as “the muddled, messy work of practical men and women, unintellectual when not positively anti-intellectual, apparently oblivious of the long tradition of political and constitutional reflection of which they are the heirs, responding piecemeal and ad hoc to conflicting pressures.” Infatuated with the U.S. Constitution and ignorant of their own, British politicians are in danger of losing a system that, in the words of Lord Balfour in 1928, is happily conducive to “the never-ending din of political conflict.”

For any defender of liberty, however, the temptation to put one’s faith in the courts is especially great when, as now, civil liberties are being eroded in the name of national security. The U.S. Supreme Court’s ruling in Boumediene v. Bush (2008), upholding the right of habeas corpus for foreign detainees in the Guantánamo Bay prison, was rightly seen as a great success. But even in this area, faith in judges can be misplaced. The usual stance of the judiciary when national security is at stake is—entirely understandably, of course—to defer to the executive. For example, in the famous wartime decision Korematsu v. United States (1944), the Supreme Court upheld an executive order authorizing the removal of American citizens of Japanese descent from the West Coast and their detention. Such decisions can sap the energy from the political process. What better way for a president to defend his actions and quash debate than to point to the favorable opinion of judges?

It would be wrong to take this objection to the judiciary’s guardian role too far. On many occasions, the courts have held the executive to account. From a democratic point of view, this is perfectly acceptable when the executive has exceeded the powers established by Congress. But precisely because Congress—unlike the courts—is democratically accountable to the people, Mark Tushnet, one of America’s shrewdest constitutional commentators, has argued that judicial review should be abolished except when expressly sanctioned by Congress. The U.S. Constitution, he writes, needs to be “taken away from the courts.”

Notwithstanding the recent constitutional reforms in Britain, Parliament continues to dominate the British constitution. “The British people,” said Lord Bingham, the recently retired senior judge in the House of Lords, “have not repelled the extraneous power of the papacy in spiritual matters and the pretensions of royal power in temporal in order to subject themselves to the unchallengeable rulings of unelected judges.” This was essentially Jefferson’s argument. Only by turning away from juristocracy and back to figures such as Jefferson can America—and the world—produce a system in which democracy will be capable of flourishing.

* * *

James Grant is the Wright Rogers Law Scholar at the University of Cambridge. He is working on a book about the influence of the Enlightenment on the British constitution in the 18th century. 