For the last thirty years or more, there has been wide agreement that politics and sound monetary policy are incompatible. If politicians control the money supply, the thinking goes, then every time an election comes around they will risk inflation by goosing the economy with easy money in order to buy support from the voters. Price stability requires long-term thinking; but the public wants instant gratification. Without constraints, democracies are bad at self-preservation. It’s for this reason that most central banks are kept formally independent of the elected branches of government; that way, monetary policy is preserved as the business of supposedly neutral technocrats.
This is the common sense that emerged after the period of widespread inflation in the 1970s, and what appeared to be the failure of central banks to buck political influence and take steps to curb it. The paradigmatic case was the refusal of the US Federal Reserve in 1972 to raise interest rates and risk a recession that would weaken Richard Nixon’s chances of re-election. When Paul Volcker was made the Fed chair in 1979, he acted without regard for the potential political fallout, raising interest rates to nearly 20 per cent and causing unemployment to rocket; in 1982, it reached a high of 10.8 per cent (its peak after the 2008 crisis, by comparison, was 10.2 per cent in October 2009). Jimmy Carter, who had appointed Volcker, was crushed by Ronald Reagan in the 1980 presidential election. But inflation was brought under control, and the lesson was learned: central banks must not concern themselves with the election cycle if they are to guarantee price stability. By the end of the 1990s, most of the world’s central banks had been granted independence. This was one of a set of liberalising reforms of the era, alongside the privatisation of state-run industries, deregulation, the removal of capital controls and the breaking of unions. Today, formally detached from democratic control, central banks are more powerful than ever.
The Bank of England was made independent after Labour’s victory in the 1997 general election. For fifty years before that it had been under the authority of the Treasury. But this arrangement, like other mid 20th-century public controls on capitalism, was anomalous. For most of the time since its inception in the late 17th century the bank had been a private institution, driven by profit like any other. It was granted charters by the government setting out its privileges as a corporation, but it wasn’t under state control. It was a long time – more than two hundred years – before it took up anything like the role allocated to most central banks today: fine-tuning monetary policy to guarantee some balance in the national economy between price stability and employment. The bank was instrumental in the rise of the modern British state and its global reach, but as David Kynaston shows in his official history, it has always had an uncertain relationship with the state, mediating awkwardly between the private imperatives of finance and the public demands of politics.
When the bank was founded in 1694, its business was to finance the war that William of Orange waged against Louis XIV after assuming the throne in the Glorious Revolution. The government was loaned £1.2 million at an interest rate of 8 per cent; in exchange, subscribers to the loan were incorporated as a joint-stock bank with a temporary charter granting it the right to issue notes, take deposits, discount bills and make private loans. (The scheme was designed by the Scottish merchant William Paterson, whose other project, the Darien scheme, which aimed to establish a Scottish colony in Panama, ended in disaster when most of the settlers died from malaria and other diseases.) Its principal backers were Whigs – it was assumed they intended to use the bank to entrench William’s new regime – and there was, Kynaston writes, ‘a significant Dutch connection’. The majority of its 26 original directors were merchants based in the City; half a dozen were Huguenots, and ‘about half were from the dissenting interest.’ Among the 1268 initial subscribers were 190 ‘esquires’, who contributed 25 per cent of the total £1.2 million, and 63 titled aristocrats (15 per cent). Most of the subscribers made contributions of less than £1000; they included ‘carriers, clothworkers, embroiderers, farmers, mariners and wharfingers’. But the largest group were members of London’s mercantile middle classes. Tories worried that the existence of a national bank would enable merchant capital to overtake land as the arbiter of economic power, and that the capacity to create a permanent national debt would tempt Britain into military adventurism. They were right: the Bank of England made possible a new system of public finance that facilitated Britain’s rise to global commercial and military supremacy.
For the next 120 years, Britain was at war almost without pause. The bank provided the funds, and managed the state’s long-term debt (Parliament allocated revenues to service it, but not to repay the principal). As Britain waged wars from Virginia to Bengal, the reach of its empire grew, and so did the debt. In 1697, the government owed its creditors £16.7 million; by the end of the Napoleonic Wars in 1815, this figure had grown to £834 million. The bank charged a hefty fee for managing the national debt, and was criticised for war profiteering. But the government had grown dependent on it, and with the outbreak of each new war, its charter was renewed. This was a mutually beneficial relationship. For the stockholders, funding the bank was a very profitable business, though it sometimes placed them in awkward positions: George Washington, for example, was a stockholder when the bank was financing the war to put down his rebellion. For the state, having easy access to a huge pool of private capital allowed it to outspend and ultimately defeat France, even though France was larger and more populous. The failure of the French monarchy to find an equivalent method of managing its debt hastened its overthrow in 1789.
The bank didn’t lend only to the state. It was a principal source of credit for London merchants, and made major loans to the East India and South Sea Companies. Positions at the bank were dominated by London’s leading mercantile families, and handed down from father to son. In the 19th century, a clerkship at the bank was a respectable career choice for a member of London’s haute bourgeoisie. But it was tedious and poorly paid. Heavy drinking and gambling on the job were common; forgery and fraud weren’t unheard of. In-house wrestling matches were a popular way for bored clerks to pass the time. After the Napoleonic Wars, the bank cracked down on the wild atmosphere. Drunkenness, cigar-smoking, singing and gambling were made sacking offences, and it was threatened that men’s moustaches would be removed by force. Few of the bank’s employees had university educations, and until the late 20th century – when economists began to fill out its ranks – it had a justified reputation for anti-intellectualism. The economist David Ricardo disparaged it as a ‘company of merchants’ with no knowledge of political economy, whose hunger for profits made them untrustworthy guardians of the public good.
One of the most controversial aspects of the bank’s business from the outset was its power to issue paper money. Before its foundation, banking in England had been dominated by London goldsmiths, who, in addition to taking deposits, issued promissory notes that could be used to settle transactions and be converted into gold on demand. The bank took over this function, issuing a variety of notes, including the ‘running cash note’, which was made out by hand for irregular amounts – £12, say – corresponding to a specific amount of gold or silver held at the bank by its original bearer. This note could be passed from one holder to another as a form of payment and presented to the bank in exchange for specie. Because the bank’s original subscription was for £1.2 million, this was, at first, the total amount of notes it could issue. In 1725, the bank began to print notes, and issued them in standard, though large and unwieldy, denominations. The smallest note was £20, roughly equivalent to £4000 today. The many country banks that emerged in the second half of the 18th century took advantage of the bank’s reluctance to issue smaller denominations and began to print their own paper money. The last stopped doing so only in 1921.
As the use of paper money expanded slowly during the 18th century, its link to metal was assumed to be inviolable. But that link broke down during the wars with Revolutionary France. The declaration of war in 1793 led to bank runs across England, at country banks and at the Bank of England itself, at the same time as Britain was sending gold abroad in the form of loans and subsidies to its Continental allies and payments for its military forces around the world. In early 1797, the landing of a French expeditionary force in Wales (the last time hostile foreign forces landed in Britain) heightened fears that the French were about to invade. As people withdrew as much gold as they could, the Bank of England’s reserves quickly fell from more than £10 million to around £1 million and Pitt suspended the convertibility of the bank’s notes into gold. It was supposed to be a temporary arrangement, but over the next 24 years, the bank ran its first long-term experiment with fiat money. The results were mixed. The bank financed the war, the British won, and there wasn’t a major financial collapse. But the expansion of paper money in the economy led to inflation (low though it was by the standards of the 20th century) and to widespread counterfeiting, a crime punishable by hanging.
When the wars ended in 1815, the bank moved to restore the convertibility of its notes into gold, which it achieved in 1821. By the following year, the value of the notes in circulation had fallen to £17 million from a peak of £26 million in 1815, and the bank’s gold reserves had increased to £14.2 million. There was continuing controversy as to whether it should be able to issue paper money as it wished, or whether it should do so according to a fixed ratio of the gold it held. The debate was settled when the bank’s charter came up for renewal. The 1844 Bank Charter Act stipulated that it could issue £14 million backed by non-metal securities, but any amount beyond this had to correspond exactly to the amount of gold it held. The bank was given a firmer hold on the issuing of notes, so as to prevent country banks from recklessly issuing paper money, and the relationship between its public duties and profit-making functions was clarified. The Act split it into an issuing department, whose freedom to issue notes was strictly curtailed, and a banking department, which could continue to turn a profit as it wished.
It was during this period that the bank began to develop some of the functions of central banking that are familiar today, in particular using its control over credit and the money supply to respond to economic crises. If the 18th century was characterised by ceaseless war, the 19th was the century of financial panic, which recurred every ten years or so after the end of the Napoleonic Wars – in 1825, 1837, 1847, 1857, 1866, 1873 and 1890. Many construed this eerie regularity as an ineluctable aspect of capitalism: a wave-like pattern of boom and bust later referred to as the ‘business cycle’. The bank gradually came round to the idea that it had a duty to respond when other banks failed during a panic. After the crisis of 1866, which began with the collapse of the leading bank Overend and Gurney and led to a string of bank panics in London, Walter Bagehot formulated the canonical definition of the bank’s role as ‘lender of last resort’: during a crisis, it should lend liberally, on any good security, though at a high enough rate of interest to prevent moral hazard. It has been said that Bagehot’s dictum helped to transform the bank’s understanding of its public duties, but this is perhaps an exaggeration: at the end of the century, it still answered to its stockholders, was still free of government control, and was still out to make a profit.
By the 1870s, most of the world’s major economies had set the value of their currencies to gold at a fixed rate. This had the effect of making exchange between them very simple: if you knew how much an ounce of gold cost in the country you were trading with, and how much it cost at home, then you could translate the value of their currency into your own. The emergence of reliable exchange rates facilitated a period of intense global economic integration at the close of the 19th century that was unrivalled until the late 20th century. The Bank of England became the ‘banker’s bank’, presiding over the complex financial networks of the City; London was the nerve centre of the global economy; and sterling was the world’s currency of exchange.
The gold standard was, at least in theory, an elegant system. Say you exported something to Belgium and received payment in Belgian francs. You could then take those francs to the National Bank of Belgium and exchange them for gold, which you could bring back to Britain and exchange for pounds. When Belgium imported something from you, they effectively exported a certain amount of gold. Countries that imported more than they exported experienced a net outflow of gold, and vice versa. Since money was issued according to a ratio of specie held in the central bank, an outflow of gold meant less money would be put into circulation in that country. This would cause prices to fall, and economic activity to slow. But as prices fell, goods would become cheaper, and more attractive to foreign buyers. Exports would rise, gold would flow back into the country, more currency would be put into circulation, and prices and employment would rise again.
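The adjustment described here can be put in the form of a toy simulation. The sketch below is not drawn from Kynaston; its figures, and the assumption that prices move in step with the money supply while trade flows respond directly to prices, are invented for illustration only.

```python
# A toy sketch of the price-specie flow mechanism described above.
# All numbers and the linear price response are illustrative assumptions,
# not figures from the historical record.

def simulate_price_specie_flow(gold=100.0, periods=10):
    """Trace how lost gold deflates prices, revives exports,
    and draws the metal back until trade balances."""
    history = []
    for _ in range(periods):
        # Notes in circulation are issued in proportion to gold reserves.
        money_supply = gold
        # Assume prices move with the money supply.
        price_level = money_supply / 100.0
        # Dearer domestic goods depress exports; cheaper ones revive them.
        exports = 50.0 / price_level
        imports = 50.0 * price_level
        trade_balance = exports - imports
        # A deficit is settled by shipping gold abroad; a surplus brings it back.
        gold += trade_balance * 0.1
        history.append((round(gold, 2), round(price_level, 3), round(trade_balance, 2)))
    return history

# Start the country short of gold and watch it drift back towards balance.
for step in simulate_price_specie_flow(gold=90.0):
    print(step)
```

Run from an initial shortfall of gold, the series drifts back towards equilibrium: this is the self-correction the system’s defenders had in mind.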
The system was supposed to be self-adjusting, but central banks played an active role. When faced with an outflow of gold from its reserves, a central bank would raise the rate at which it lent to other banks. Most commercial transactions before 1914 were settled by means of a bill of exchange, essentially a promissory note specifying the amount a merchant would be paid at a later date. If you were paid with a bill of exchange, you could exchange this at a bank for money, but only after the bank took a cut from its face value. (The amount of the cut was the rate of discount.) The bank could then exchange your bill at a central bank, which would charge its own discount rate. (The Bank of England called this the ‘bank rate’.) If the central bank raised its discount rate, other banks would bring fewer bills to cash, since they would be worth less. In this way, as discount rates rose, credit got tighter, and less money would be put into the economy. This would cause prices to fall, making local goods cheaper to foreign buyers and foreign goods more expensive. Again, exports would rise, imports would fall, and gold would flow back into the country.
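A worked example, with invented numbers rather than anything from the period, shows why a higher bank rate tightened credit: the deeper the discount, the less a bill fetches today, and the less worthwhile it is to cash.

```python
# A minimal sketch of bill discounting, using hypothetical figures to make
# the arithmetic in the paragraph above concrete.

def discounted_value(face_value, annual_rate, days_to_maturity):
    """What a bank pays today for a bill due at a future date,
    after taking its cut at the given discount rate."""
    return face_value * (1 - annual_rate * days_to_maturity / 365)

face = 1000.0   # a merchant's bill for £1000, payable in three months
days = 90

for bank_rate in (0.03, 0.07):   # a notional 'bank rate' before and after a rise
    proceeds = discounted_value(face, bank_rate, days)
    print(f"bank rate {bank_rate:.0%}: the bill fetches £{proceeds:.2f} today")

# Raising the rate from 3% to 7% shaves roughly £10 more off the bill,
# so fewer bills are brought in to be cashed and credit tightens.
```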
There is a cost, in this scenario, to sticking with the gold standard: raising discount rates leads to the tightening of credit, and the result of that is likely to be an economic downturn. Maintaining fixed exchange rates, therefore, required a willingness to stomach periodic returns to mass unemployment, and to stand aside until the system adjusted. Expansionary policies, like lowering the interest rate or increasing public spending to promote recovery, weren’t seen as an option. The Bank of England had always been run by merchants who put the needs of trade above those of industry or labour. And the government didn’t yet think of unemployment as something it should worry about: it was a problem for charity, not policy. Labour was weak, democracy was highly constrained, and the state had little control over the bank’s policies. ‘Before 1914,’ as one official wrote in 1929, ‘a change in bank rate was no more regarded as the business of the Treasury than the colour which the bank painted its front door.’ After the First World War, all this would change. As labour grew stronger and democracy more robust, the government would no longer be able to ignore the domestic consequences of the policies needed to keep sterling on the gold standard.
The period of globalisation that began in the second half of the 19th century came to an end with the outbreak of the First World War. The gold standard broke down. The state needed to spend far more on its war machine than would be permissible in a system that kept government spending firmly in check, and exporting gold freely abroad during wartime was impossible. The state assumed new and unprecedented powers over the national economy. The bank lost ground to the Treasury, which among other things took charge of approving new issues of currency. (Lord Cunliffe, governor of the bank at the time, reportedly said that ‘while the war lasted the bank would have to regard itself as a department of the Treasury.’) It also lost its singular role as war financier, as Britain and its major European allies turned to the US for loans, which would leave them heavily indebted for years to come, accelerating America’s displacement of Britain as the global financial hegemon.
After the end of the First World War, the bank’s participation in the effort to restore the rules of the prewar world economy and reclaim Britain’s position as a global financial leader was overseen by Montagu Norman, the most powerful governor in its history. The bank played a major role in the financial reconstruction of postwar Europe, issuing huge loans, particularly to the countries thrown into financial chaos by the break-up of the Russian and Austro-Hungarian Empires, and attempting to set up independent central banks, on its own model, around the world. (One notable absence in Kynaston’s book is any real discussion of the bank’s overseas operations, which stretched across the earth, from El Salvador to Siam, and encompassed the entire British Empire.) The charismatic Norman, a severe depressive (before the war he had been misdiagnosed by Carl Jung as incurably insane as a result of late-stage syphilis), is credited with a large role in the development of the ‘“mystique” of the central banker’. He hated experts, and preferred to make decisions by instinct, turning to economists not to tell him what to do, but to supply explanations for things he had already done. He was the ‘apotheosis of the English cult of the administrator as artist-leader’, one economist later wrote, ‘a kind of Künstlerführerprinzip’. In place of the politicians who had led the world to war in 1914, he dreamed of a clique of central bankers managing the international system. Detached from governments and free from democratic pressures, they would work to re-create the kind of liberal world economy that had flourished during the Gladstone era.
In Britain, however, Norman’s chief policy achievement – the return to gold in April 1925 – was a disaster. The value of the pound was set at the prewar rate, which made British goods much more expensive to foreign buyers. As exports slumped, the bank raised interest rates to protect its gold reserves. Deflation followed, along with increased unemployment and labour unrest, culminating in the General Strike of 1926. Keynes and a few others had predicted this. But their warnings went against a powerful orthodoxy. Currencies that weren’t pegged to gold were thought highly unstable, and hyperinflation was a new and terrifying spectre, appearing in the early 1920s not only in Germany, but also in Austria, Hungary and Poland. The bank still concerned itself far more with controlling inflation than with unemployment. But the experience of mass unemployment that followed the return to gold showed that deflation could be just as bad. Churchill later admitted that bending to Norman’s will when he was chancellor had been ‘the biggest blunder in his life’.
The gold standard did not last much longer. In the spring and summer of 1931, bank panics spread through Europe as investors moved their capital out of one risky country after another. The central banks of these countries began to lose gold. In July, worries about Britain’s financial state led to a sell-off of the pound, and the Bank of England lost £30 million of gold. Protecting the pound’s link to gold would have required massive interest-rate hikes during a period of rising unemployment, which, after the experience of returning to gold in 1925, was politically untenable. In September 1931, the pound was removed from gold; the US dollar followed in 1933 and the French franc in 1936. Britain’s abandonment of gold was seen at the time as a portent of doom, but in fact it was a step that had to be taken before the recovery from the Depression could begin. Leaving gold made it possible for countries to stimulate their economies. Before this, lowering interest rates would have led to further capital flight and gold outflows. But now, the bank, at the behest of the Treasury, could drop interest rates to 2 per cent, where they remained until 1951.
But the bank had lost stature. There were calls to bring it under direct government control, its heads appointed by the Treasury and its board staffed by representatives from industry, labour and agriculture. In 1937, Norman admitted that he had become simply ‘an instrument of the Treasury’. The emergence in the 1930s of the sterling area – a group of countries that fixed their currencies to the pound – gave the bank a new task in co-ordinating monetary policy through a quasi-imperial arrangement treated as an alternative to the global gold standard. But during the Second World War, the bank worked largely at the mercy of the Treasury, keeping interest rates low but otherwise sidelined in major decision-making. It was allowed to send just one representative to the Bretton Woods Conference of July 1944, for example, where the rules for a postwar international monetary system and plans for the International Monetary Fund and World Bank were drawn up.
The bank was nationalised at the end of the war. The Treasury took over its stock, and the Crown assumed authority for appointing senior officials. That didn’t stop those officials continuing to behave as if the bank were still autonomous: they remained distrustful of politicians and experts, they worked in secret, and their focus was on inflation, exchange stability and fiscal discipline. Much of their daily work remained the same: maintaining the exchange rate, issuing notes, handling the government’s debt, and providing services to and supervising other banks. They continually badgered the government to cut spending and constrain wage increases, and lobbied to keep monetary policy out of the political arena. Nonetheless, with the Treasury in charge, monetary policy largely took a back seat to Keynesian fiscal policies.
In the 1970s, the inflation that beset much of the Western world brought monetary policy back to the fore. Monetarism – the idea that price stability and output are chiefly a function of the money supply – became the new guiding doctrine, and the task of containing inflation replaced promoting employment as the first priority in countries around the world. Thatcher was a committed monetarist, but she disliked and distrusted the bank and insisted that monetary policy remain in the hands of the Treasury. This was the period when the idea of central bank independence took root, but it couldn’t be acted on until Blair took office. From 1997, the Bank of England was given the operational freedom to craft monetary policy in line with the inflation target set by the Treasury. It lost its role in banking supervision and managing government debt, but under the governorship of Mervyn King, who took over in 2003 (and who commissioned Kynaston to write this book), became a powerful machine with one overriding purpose: to hit the inflation target.
There would be criticism, later, that the bank’s narrow focus prevented it from paying attention to the signs that might have alerted it to the coming financial crisis, and that once the crisis hit in 2007-8, the bank bungled the handling of it. In the late summer of 2007, when Britain faced its first major bank run since the middle of the 19th century, King resisted pumping emergency liquidity into the markets as Northern Rock foundered. His priority was the avoidance of moral hazard – quite unlike the traditional Bank of England governor, he was a theoretical economist with little connection to or interest in the City; he was happy to let risk-takers fail – but he was taken to task for allowing the panic to develop into a full-blown crisis. Hardly less controversial was the bank’s programme of quantitative easing, which, in the three years after it began in March 2009, created £375 billion to buy assets such as government bonds in order to lower their yield and put more money into the economy as a way of stimulating spending and investment. King also got into the habit of criticising the Labour government for its fiscal deficits and for not allowing the bank broader supervisory powers over finance. After the election of 2010, he cast his lot with the coalition government, and enthusiastically supported its austerity policies. The myth of the bank’s political neutrality, or what had survived of it, was dispelled. Mark Carney, King’s successor, is now routinely attacked for aligning the bank against Brexit.
Like the US Fed and the European Central Bank, the Bank of England emerged from the 2007-8 crisis more powerful than ever. Central banks led the recovery, and have now assumed nearly exclusive responsibility for macroeconomic management. The Bank of England regained its powers of financial supervision and pumped unthinkably large amounts of money into the national economy. Such unconventional monetary policies may have prevented a depression, but they have been widely criticised for their distributional consequences: by inflating asset prices, quantitative easing has helped the well-off, but hurt people with smaller savings and pensions. These are trade-offs that should be adjudicated in public, not behind closed doors. It is unsurprising that central banks are often attacked by left and right alike, and that their independence is once more in question. Theresa May, who began her career at the Bank of England, hinted in her speech to the Tory Conference in 2016, her first as prime minister, that the bank’s autonomy over monetary policy should no longer be taken for granted. John McDonnell has said that it would be brought to heel in the first week of a Labour government.
After the 2016 US election, some assumed that Donald Trump would end the Fed’s independence. But so far he has done little besides expressing his disapproval when it raises interest rates. He replaced Janet Yellen, a high-profile Obama appointee, with a mainstream figure, Jerome Powell, who has continued the policies of ‘normalisation’ Yellen began: ending quantitative easing and gradually raising interest rates. These moves run counter to hopes of breakneck economic growth, and should there be a recession, the Fed’s independence will almost certainly come under more direct attack. So far as some are concerned, this needn’t be a bad thing: ending central bank independence, and putting monetary policy back in the hands of elected governments, will make possible new kinds of expansive policies, and dislodge the obsession with inflation, which, it can be argued, has been too low, not too high, for too long. It would also help to neutralise a winning populist argument: that a cabal of technocrats runs the economy behind closed doors. But in the short run, ending the Fed’s independence will give the president a powerful tool for leveraging the national wealth for his political gain. It’s an uncomfortable position to be in if you’re worried about democracy: wanting more of it, but not just yet.