This just in from Shweta Singh, Assistant Professor at the South Asian University, Delhi:
For those of you interested in the politics of Sri Lanka, here is my take on Mark Salter’s book ‘To End a Civil War: Norway’s Peace Engagement in Sri Lanka’ in the Asian Studies Review!
To End a Civil War: Norway’s Peace Engagement in Sri Lanka
by Mark Salter, London, C. Hurst & Co. Ltd., 2015, 531 pp., £25.00 (paperback)
Mark Salter’s To End a Civil War: Norway’s Peace Engagement in Sri Lanka brings to the fore the “story of the Norwegian effort to facilitate an end to the Sri Lankan Conflict – in the first instance as seen by the Norwegian facilitation team, but also as perceived by others involved in the process” (p. 9). While Salter writes the “story” of the Norwegian effort, beginning in 2000, he also walks the reader through the complex maze of domestic politics in Sri Lanka, and sheds valuable light on why the Norwegian effort failed to end the Sri Lankan conflict.
The book is also a strategically written narrative that in many ways argues that the peace effort failed not because there were critical pitfalls that Norway committed, but because at the domestic level in Sri Lanka, the key actors, whether the state or the Liberation Tigers of Tamil Eelam (LTTE), were not committed enough to the peace process.
The strength of the book lies in the effort to weave through the complex details of this story in 12 systematically organised chapters. Chapter 1 traces the pre-2000 origins of the Norwegian engagement in Sri Lanka, and provides context for the remainder of the analysis. The sections on the kidney saga (which involved getting Balasingham, political strategist and chief negotiator of the LTTE, out of Vanni for treatment through secret high-level discussions with Colombo) and Norwegian facilitator Erik Solheim’s first meeting with Balasingham (pp. 32–40) are particularly interesting, and provide great insight into how and why Norway as the facilitating country was able to build trust with the LTTE in the initial phase of its engagement.
From Chapter 3 to Chapter 7, the book traces the process of Norwegian facilitation. Through the strategic voices of the Norwegian facilitators, the author clearly makes an effort to set the record straight concerning Norway’s engagement, be it on critical questions related to the issue of the neutrality of Solheim as the facilitator (pp. 70–71, pp. 88–89, pp. 101–106), the issue of federalism at the Oslo round of talks (pp. 111–120), the reactions to the LTTE’s withdrawal (pp. 142–144), the critical issue of the Interim Self Governing Authority (ISGA) (pp. 156–164) or the tsunami and the politics of the Post Tsunami Operational Management Structure (PTOMS) (pp. 212–215), to highlight a few. Given that it was precisely on these issues that Norway faced severe criticisms, the book goes a long way in clarifying the story from a Norwegian perspective. However, there is clear bias, which tilts the narrative towards the Norwegian point of view on most of these issues.
The last four chapters of the book are intriguing. While they provide an insight into the challenges that Norway faced from both the Sri Lankan state and the LTTE, they also push the reader to question some of the critical arguments provided by many who were part of the Norwegian team (including Erik Solheim, Vidar Helgesen and Jon Hanssen-Bauer; p. 383) in defence of Norway’s actions in the final stages of facilitation. There is no doubt that the facilitators were constrained by the limited mandate Norway had; yet a critical question that remains unanswered in the discussion on facilitation and its ambiguities (pp. 400–402) is: did Norway, given those very ambiguities, in some ways overstep its role as a facilitator? Although one would agree with Salter’s story that the lack of political bipartisanship in Sri Lanka was a key factor leading to the failure of Norwegian efforts, the author’s effort to classify it as the only key factor is open to debate.
There is also veiled criticism vis-à-vis India’s role in the final stages of Norwegian facilitation, but the nuances of that side of the story have not been addressed in detail. For instance, recalling events in 2000 in Chapter 2, on the first meeting with Indian government representatives, Solheim recounts: “We travelled to Delhi to meet Foreign Secretary Lalit Mansingh […] He asked us to sit down and then began a third degree interrogation […] There was no protocol, [it was] like a police interrogation…” (p. 51).
It would have been useful for the reader to have more information on Norway’s engagement with India on Sri Lanka, given that India is an important regional player. In all, the book is definitely a useful account of the Norwegian story of facilitating the peace process in Sri Lanka, but it falls short of providing a comprehensive view of the factors that led to the failure of the peace engagement in Sri Lanka, in which Norway cannot be given a clean chit.
Should Aung San Suu Kyi keep her Nobel peace prize?
A strong argument from the Guardian columnist George Monbiot for a move that would prove anathema to many, both in Myanmar and abroad: strip Aung San Suu Kyi of her Nobel Prize on account of her signal failure to take a stand against the persecution of the Rohingya in Myanmar today, and, even worse as some see it, her implicit endorsement of Buddhist nationalist anti-Rohingya prejudices and alleged complicity in the crimes against humanity being visited on hapless Rohingya civilians.
Aung San Suu Kyi: ‘It is hard to think of any recent political leader by whom such high hopes have been so cruelly betrayed.’ Photograph: Edgar Su/Reuters
Few of us expect much from political leaders: to do otherwise is to invite despair. But to Aung San Suu Kyi we entrusted our hopes. To mention her name was to invoke patience and resilience in the face of suffering, courage and determination in the unyielding struggle for freedom. She was an inspiration to us all.
Friends of mine devoted their working lives to the campaign for her release from the many years of detention imposed by the military dictatorship of Myanmar, and for the restoration of democracy. We celebrated when she was awarded the Nobel peace prize in 1991; when she was finally released from house arrest in 2010; and when she won the general election in 2015.
None of this is forgotten. Nor are the many cruelties she suffered, including isolation, physical attacks and the junta’s curtailment of her family life. But it is hard to think of any recent political leader by whom such high hopes have been so cruelly betrayed.
By any standards, the treatment of the Rohingya people, a Muslim minority in Myanmar, is repugnant. By the standards Aung San Suu Kyi came to symbolise, it is grotesque. They have been described by the UN as “the world’s most persecuted minority”, a status that has not changed since she took office.
She has denied the very identity of the people being attacked, asking the US ambassador not to use the term Rohingya
The Convention on the Prevention and Punishment of Genocide describes five acts, any one of which, when “committed with intent to destroy, in whole or in part, a national, ethnical, racial or religious group”, amounts to genocide. With the obvious and often explicit purpose of destroying this group, four of them have been practised more or less continuously by Myanmar’s armed forces since Aung San Suu Kyi became de facto political leader.
I recognise that the armed forces retain great power in Myanmar, and that Aung San Suu Kyi does not exercise effective control over them. I recognise that the scope of her actions is limited. But, as well as a number of practical and legal measures that she could use directly to restrain these atrocities, she possesses one power in abundance: the power to speak out. Rather than deploying it, her response amounts to a mixture of silence, the denial of well-documented evidence, and the obstruction of humanitarian aid.
I doubt she has read the UN human rights report on the treatment of the Rohingyas, released in February. The crimes it revealed were horrific.
It documents the mass rape of women and girls, some of whom died as a result of the sexual injuries they suffered. It shows how children and adults had their throats slit in front of their families.
It reports the summary executions of teachers, elders and community leaders; helicopter gunships randomly spraying villages with gunfire; people shut in their homes and burnt alive; a woman in labour beaten by soldiers, her baby stamped to death as it was born.
It details the deliberate destruction of crops and the burning of villages to drive entire populations out of their homes; people trying to flee gunned down in their boats.
Rohingya refugees from Myanmar’s Rakhine state arrive in Bangladesh: 120,000 people have been forced to flee in the past fortnight. Photograph: KM Asad/AFP/Getty Images
And this is just one report. Amnesty International published a similar dossier last year. There is a mountain of evidence suggesting that these actions are an attempt to eliminate this ethnic group from Myanmar.
Hard as it is to imagine, this campaign of terror has escalated in recent days. Refugees arriving in Bangladesh report widespread massacres. Malnutrition ravages the Rohingya, afflicting 80,000 children.
It is true that some Rohingya people have taken up arms, and that the latest massacres were triggered by the killing of 12 members of the security forces last month, attributed to a group that calls itself the Arakan Rohingya Salvation Army. But the military response has been to attack entire populations, regardless of any possible involvement in the insurgency, and to spread such terror that 120,000 people have been forced to flee in the past fortnight.
In her Nobel lecture, Aung San Suu Kyi remarked: “Wherever suffering is ignored, there will be the seeds of conflict, for suffering degrades and embitters and enrages.” The rage of those Rohingya people who have taken up arms has been used as an excuse to accelerate an existing programme of ethnic cleansing.
She has not only denied the atrocities, attempting to shield the armed forces from criticism; she has also denied the very identity of the people being attacked, asking the US ambassador not to use the term Rohingya. This is in line with the government’s policy of disavowing their existence as an ethnic group, and classifying them – though they have lived in Myanmar for centuries – as interlopers. She has upheld the 1982 Citizenship Law, which denies these people their rights.
When a Rohingya woman provided detailed allegations about her gang rape and associated injuries by Myanmar soldiers, Aung San Suu Kyi’s office posted a banner on its Facebook page reading “Fake Rape”. Given her reputation for micromanagement, it seems unlikely that such action would have been taken without her approval.
Not only has she snubbed and obstructed UN officials who have sought to investigate the treatment of the Rohingya, but her government has prevented aid agencies from distributing food, water and medicines to people displaced or isolated by the violence. Her office has accused aid workers of helping “terrorists”, putting them at risk of attack, further impeding their attempts to help people who face starvation.
So far Aung San Suu Kyi has been insulated by the apologetics of those who refuse to believe she could so radically abandon the principles to which she once appealed. A list of excuses is proffered: that she didn’t want to jeopardise her prospects of election; that she doesn’t want to offer the armed forces a pretext to tighten their grip on power; that she has to keep China happy.
None of them stand up. As a great democracy campaigner once remarked: “It is not power that corrupts, but fear. Fear of losing power corrupts those who wield it.” Who was this person? Aung San Suu Kyi. But now, whether out of prejudice or out of fear, she denies to others the freedoms she rightly claimed for herself. Her regime excludes – and in some cases seeks to silence – the very activists who helped to ensure her own rights were recognised.
This week, to my own astonishment, I found myself signing a petition for the revocation of her Nobel peace prize. I believe the Nobel committee should retain responsibility for the prizes it awards, and withdraw them if its laureates later violate the principles for which they were recognised. There are two cases in which this appears to be appropriate. One is Barack Obama, who, bafflingly, was given the prize before he was tested in office. His programme of drone strikes, which slaughtered large numbers of civilians, should disqualify him from this honour. The other is Aung San Suu Kyi.
Please sign this petition. Why? Because we now contemplate an extraordinary situation: a Nobel peace laureate complicit in crimes against humanity.
• George Monbiot is a Guardian columnist
The Secret History of the Banking Crisis
A riveting, accessible account of the 2008 financial crisis and the hush-hush ‘swapline’ system between the US Fed and a select coterie of European central banks put in place to contain it, a system that remains with us today. Will Trump get round to disrupting it? And just as importantly, will it ever become the subject of political, ideally democratic, scrutiny and discussion?
Published in the August 2017 edition of Prospect.
A broker looks at his screens at Frankfurt’s stock exchange on September 15, 2008, as the German stock exchange fell 4.6 per cent in the afternoon in the wake of US investment banking giant Lehman Brothers filing for bankruptcy. Photograph: Thomas Lohnes/AFP/Getty Images
The secret history of the banking crisis
Accounts of the financial crisis leave out the story of the secretive deals between banks that kept the show on the road. How long can the system be propped up for?
by Adam Tooze. Published in the August 2017 issue of Prospect Magazine.
It is a decade since the first tremors of what would become the Great Financial Crisis began to convulse global markets. Across the world from China and South Korea, to Ukraine, Greece, Brexit Britain and Trump’s America it has shaken our economy, our society and latterly our politics. Indeed, it has thrown into question who “we” are. It has triggered both a remarkable wave of nationalism and a deep questioning of social and economic inequalities. Politicians promise their voters that they will “take back control.” But the basic framework of globalisation remains intact, so far at least. And to keep the show on the road, networks of financial and monetary co-operation have been pulled tighter than ever before.
In Britain the beginning of the crisis was straight out of economic history’s cabinet of horrors. Early in the morning of Friday 14th September 2007, queues of panicked savers gathered outside branches of the mortgage lender Northern Rock on high streets across Britain. It was—or at least so it seemed—a classic bank run. Within the year the crisis had circled the world. Wall Street was shaking, as was the City of London. The banks of South Korea, Russia, Germany, France, Belgium, the Netherlands, Ireland and Iceland were all in trouble. We had seen nothing like it since 1929. Soon enough Ben Bernanke, then chairman of the US Federal Reserve and an expert on the Great Depression, said that this time it was worse.
But the fact that the tumult assumed such spectacular, globe-straddling dimensions had initially taken Bernanke by surprise. In May 2007 he reassured the public that he didn’t think American subprime mortgages could bring down the house. Clearly he underestimated the crisis. But was he actually wrong? For it certainly wasn’t subprime that brought down Northern Rock. The British bank didn’t have any exposure in the United States. So what was going on?
The familiar associations evoked by the Northern Rock crisis were deceptive. It wasn’t panicking pensioners all scrambling to withdraw their savings at once that killed the bank. It wasn’t even the Rock’s giant portfolio of mortgages. The narrative of Michael Lewis’s The Big Short, of securitisation, pooling and tranching, the lugubrious details of trashy mortgage dealing, the alphabet soup of securitised loans and associated derivatives (MBS, CDO, CDS, CDO-squared) tell only one part of the story. What really did for banks like Northern Rock and for all the others that would follow—Bear Stearns, Merrill Lynch, Lehman, Hypo Real Estate, Dexia and many more—and what made this downturn different—so sharp, so sudden and so systemic, not just a recession but the Great Recession—was the implosion of a new system not just of bank lending, but of bank funding.
It is only when we examine both sides of the balance sheet—the liabilities as well as the assets—that we can appreciate how the crisis was propagated, and then how it was ultimately contained at a global level. It is a story that the crisis-fighters have chosen not to celebrate or publicise. Ten years on, the story is worth revisiting, not only to get the history right, but because the global fix that began to be put in place in the autumn of 2007 is in many ways the most significant legacy of the crisis. It is still with us today and remains largely out of sight. The hidden rewiring of the global monetary system provides reassurance to those in the know, but it has no public or political standing, no resources with which to fight back if attacked. And this matters because it is increasingly out of kilter with the nationalist turn of politics.
In the wake of the crash and its austere aftermath, voters in many countries have pointed the finger at globalisation. The monetary authorities, however, have quietly entwined themselves more closely than ever before—and they have done so in order to provide life support to that bank funding model which caused such trouble a decade ago. Ten years on, the question of whether this fix is sustainable, or indeed wise, is a question of more than historical interest.
“To keep the show on the road, networks of financial and monetary co-operation have been pulled tighter than ever before”
In 2007 economists were expecting a crisis. Not, however, the crisis they got. The standard crisis scenario through to autumn that year involved a sudden loss of confidence in American government debt and the dollar. In the Bush era, the Republicans had cut taxes and spent heavily on the War on Terror, borrowing from China. So what would happen, it was asked anxiously, if the Chinese pulled the plug? The great fear was that the dollar would plunge, interest rates would soar and both the US economy and the Chinese export sector would crash land. It was what Larry Summers termed a balance of financial terror. America’s currency seemed so doomed that in autumn 2007, the US-based supermodel Gisele Bündchen asked to be paid in euros for a Pantene campaign, and Jay-Z dissed the dollar on MTV.
But somewhat surprisingly, like the nuclear stand-off in the Cold War, the financial balance of terror has become the basis for a precarious stability. Crucially, both Beijing and Washington understand the risks involved, or at least they seemed to until the advent of President Donald Trump. Certainly during the most worrying moments in 2008 Hank Paulson, Bush’s last Treasury Secretary, made sure that Beijing understood that its interests would be protected. Beijing reciprocated by increasing its commitment to dollar assets.
In 2007, it was not the American state that lost credibility: it was the American housing market. What unfolded was a fiasco of the American dream: 8.7m homes were lost to foreclosure. But the real estate bust wasn’t limited to the US. Ireland, Spain, the UK and the Netherlands all had huge credit booms and suffered shattering busts. As homeowners defaulted some lenders went under. This is what happened early on to predatory lenders such as New Century and Countrywide. Bankruptcy also came to the Anglo Irish Bank and Spain’s notorious regional mortgage lenders, the cajas. In the fullness of time, it was—perhaps, though not necessarily—the fate that might well have befallen Northern Rock too. But before it could suffer death by a thousand foreclosures, Northern Rock was felled by a more fast-acting kind of crisis, a crisis of “maturity mismatch.”
Banks borrow money short-term at low interest and lend long at marginally higher rates. It may sound precarious, but it is how they earn their living. In the conventional model, however, the short-term funding comes from deposits, from ordinary savers. Ordinarily, in a well-run bank, their withdrawals and deposits tend to cancel each other out. Fits of uncertainty and mass withdrawals are always possible, and perhaps even inevitable once in a while. So to prevent them turning into bank runs, governments offer guarantees up to a reasonable amount.
Most of the Northern Rock depositors had little to fear. Their deposits were, like those of all other ordinary savers, guaranteed by the then Chancellor, Alistair Darling. The investors who weren’t covered by government backing were those who had provided Northern Rock with funding through a new and different channel—the wholesale money market. They had tens of billions at stake, and every reason to panic. It was the sudden withdrawal of this funding that actually killed Northern Rock.
As well as taking in money from savers, banks can also borrow from other banks and other institutional investors. The money markets offer funds overnight, or for a matter of weeks or months. It is a fiercely competitive market with financial professionals on both sides of every trade. Margins are slim, but if the volumes are large there are profits to be made. For generations this was the preserve of investment bankers—the ultimate insiders of the financial community. They didn’t bother with savers’ deposits. They borrowed in the money markets. From the 1990s commercial banks and mortgage lenders began to operate on a similar model. It was this new form of “market-based” banking combined with the famous securitisation of mortgages that enabled the huge expansion of European and US banking that began to crash in 2007.
Run for the hills: Northern Rock depositors rush to start taking out their money.
By the summer of 2007 only 23 per cent of Northern Rock’s funding came from regular deposits. More than three quarters of its operation was sustained by borrowing in capital and money markets.
For these funds there were no guarantees. For a run to develop in the money market, the mortgages did not need to default. All that needed to happen was for the probability of some of them defaulting to increase. That was enough for interbank lending and money market funding to come abruptly to a halt. The European money markets seized up on 9th August. Within a matter of days Northern Rock was in trouble, struggling to repay short-term loans with no new source of funding in prospect. And it was through the same funding channel that the crisis went global.
The attraction of money market funding was that it freed you from the cumbersome bricks-and-mortar branch network traditionally used to attract deposits. Using the markets, banks could source funding all over the world. South Korean banks borrowed dollars on the cheap to lend in Won. American banks operating out of London borrowed Yen in depressed Japan, flipped them into dollars and then lent them to booming Brazil. The biggest business of all was the “round tripping” of dollars between America and Europe. Funds were raised in America, which for reasons of history and the nation’s sheer scale, is the richest money market in the world.
Those dollars were exported to institutions and banks in Europe, who then reinvested them in the US, very often in American mortgages. The largest inflow of funds to the US came not from the reinvestment of China’s trade surplus, but through this recycling of dollars by way of Europe’s banks. Barclays didn’t need a branch in Kansas any more than Lehman did. Both simply borrowed money in the New York money markets. From the 1990s onwards, Europe’s banks, both great and small, British, Dutch, Belgian, French, Swiss and German, made themselves into a gigantic trans-Atlantic annex of the American banking system.
All was well so long as the economy was buoyant, house and other asset prices continued to go up, money markets remained confident and the dollar moved predictably in the direction that everyone expected, that is gently downwards. If you were borrowing dollars to fund a lending business the three things that you did not want to have happen were: for your own loans to go bad; money markets to lose confidence; or for dollars to suddenly become scarce, or, what amounts to the same thing, unexpectedly expensive. While the headlines were about sub-prime, the true catastrophe of the late summer of 2007 was that all three of these assumptions were collapsing, all at once, all around the world.
“The Fed effectively established itself as a lender of last resort to the entire global financial system”
The real estate market turned down. Large losses were in the pipeline, over years to come. But as soon as Bear Stearns and Banque Nationale de Paris (BNP) shut their first real estate funds, the money markets shut down too. Given the global nature of bank funding this produced an acute shortage of dollar funding across the European and Asian banking system. It was the opposite of what the best and brightest in macroeconomics had expected: strong currencies are, after all, meant to be built on thrift and industry, not shopping splurges and speculative debts. But rather than the world being glutted with dollars, quite suddenly banks both in Europe and Asia began to suffer periodic and panic-inducing dollar shortages.
The paradigmatic case of this counterintuitive crisis would eventually be South Korea. How could South Korea, a champion exporter with huge exchange reserves, be short of dollars? The answer is that in the years of the recovery from the 1997 East Asian crisis, while Korean companies Hyundai and Samsung had conquered the world, Korea’s banks had been borrowing dollars at relatively low interest rates to lend out back home in Won to the booming home economy. Not only was there an attractive interest rate margin, but thanks to South Korea’s buoyant exports, the Won was steadily appreciating. Loans taken out in dollars were easier to repay in Won. As such these loans cushioned the losses suffered by South Korean firms on their dollar export-earnings.
By the late summer of 2008 the South Korean banks operating this system owed $130bn in short-term loans. Normally this was no problem: you rolled over the loan, taking out a new short-term dollar credit to pay off the last one. But when the inter-bank market ground to a halt the South Koreans were painfully exposed. Barring emergency help, all they could do was to throw Won at the exchange markets to buy the dollars they needed, which had the effect of spectacularly devaluing their own currency and making their dollar obligations even more unpayable. South Korea, a country with a huge trade surplus and a large official dollar reserve, faced a plunging currency and a collapsing banking system.
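The mechanics of that doom loop are simple but brutal. A rough back-of-the-envelope sketch makes the point; the exchange rates below are illustrative round numbers, not historical data:

```python
# Illustrative only: how a depreciating currency inflates a dollar debt burden.
# The exchange rates are invented round numbers, not historical figures.

dollar_debt = 130e9            # short-term dollar loans owed by Korean banks (~$130bn)

won_per_dollar_before = 1000   # hypothetical pre-panic exchange rate
won_per_dollar_after = 1400    # hypothetical rate after forced selling of Won

cost_before = dollar_debt * won_per_dollar_before   # Won needed to repay before
cost_after = dollar_debt * won_per_dollar_after     # Won needed to repay after

extra_burden = (cost_after - cost_before) / cost_before
print(f"Repayment cost rises by {extra_burden:.0%} in local-currency terms")
# → Repayment cost rises by 40% in local-currency terms
```

Every Won thrown at the market to buy dollars pushed the exchange rate further in the wrong direction, making the remaining debt still more expensive, which is why only external emergency help could break the spiral.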
In Europe the likes of RBS, Barclays, UBS and Deutsche had even larger dollar liabilities than their South Korean counterparts. The BIS, the central bankers’ bank, estimated that Europe’s mega-banks needed to roll over $1–1.2 trillion in short-term funding. The margin that desperate European banks were willing to pay to borrow in sterling and euro and to swap into dollars surged. Huge losses threatened—and neither the Bank of England nor the European Central Bank (ECB) could do much to help. Unlike their East Asian counterparts, they had totally inadequate reserves.
The one advantage that the Europeans did have over the Koreans, was that the dollars they had borrowed had largely been invested in the US, the so-called “round-tripping” again. The huge portfolios of American assets they had accumulated were of uncertain value, but they amounted to trillions of dollars and somewhere between 20 and 25 per cent of the total volume of asset- and mortgage-backed securities. In extremis the Europeans could have auctioned them off. This would have closed the dollar-funding gap, but in the resulting fire sales the European banks would have been forced to take huge write downs. And most significantly, the efforts by the Fed and the US Treasury to stabilise the American mortgage market would have been fatally undercut.
“In the 60s, swaps were about stabilising exchange rates. Now they’re all about stabilising oversized banks”
This was the catastrophic causal chain that began to emerge in August 2007. How could the central banks address it? The answer they found was three-pronged. The most public face of crisis-fighting was the effort to boost the faltering value of the mortgage bonds on the banks’ books (typically securitised versions of other banks’ mortgage loans, which were becoming less reliable in the downturn), and to provide the banks with enough capital to absorb those losses that they would inevitably suffer. This was the saga of America’s Troubled Asset Relief Programme, which played out on Capitol Hill. In the case of Northern Rock this prong involved outright nationalisation. Others took government stakes of varying sizes. Warren Buffett made a lucrative investment in Goldman Sachs. Barclays has now been charged by the Serious Fraud Office with fraudulently organising its own bailout, by—allegedly—lending money to Qatar, which that state is then said to have reinvested in Barclays. Without the bailout, you ended up with Lehman: bewildered bankers standing on the pavements of the City and Wall Street carrying boxes of their belongings. The masters of the universe plunged to earth. It half-satisfied the public’s desire for revenge. But it did nothing for business confidence.
With enough capital a bank could absorb losses and stay afloat. But to actually operate, to make loans and thus to sustain demand and avert a downward spiral of prices and more bankruptcies, the banks needed liquidity. So, secondly, the central banks stepped in, taking over the function, which the money market had only relatively recently assumed but was now suddenly stepping back from, of being the short-term lenders. The ECB started as early as August 2007. The Bank of England came in late, but on a large scale. The Fed became the greatest liquidity pump, with all of Europe’s banks benefiting from its largesse. The New York branches of Barclays, Deutsche, BNP, UBS and Credit Suisse were all provided with short-term dollar funding on the same basis as Citi, Bank of America, JP Morgan and the rest.
But it was not enough. The Europeans needed even more dollars. So the Fed’s third, final and most radical innovation of the crisis was to devise a system to allow a select group of central banks to funnel dollars to their banks. To do so the Fed reanimated an almost-forgotten tool called the “swap lines,” agreements between central banks to trade their currencies in a given quantity for a given period of time. They had been used regularly in the 1960s, but had since gone out of use. Back then, the aim was stabilising exchange rates. This time, the aim was different: to stabilise a swollen banking system that was faltering, and yet abjectly too big to fail. At a moment when dollars were hard to come by, the new swap lines enabled the ECB to deposit euros with the Fed in exchange for the dollars that the eurozone banks were craving. The Bank of England benefited from the same privilege.
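In outline, a swap line works like a collateralised currency loan between central banks: the two sides exchange currencies at the spot rate, agree to reverse the exchange at that same rate on a set date, and the borrowing side pays dollar interest, so the Fed takes on no exchange-rate risk. A minimal sketch, with invented figures and simplified terms (the real arrangements involved auction pricing and a range of maturities):

```python
# Simplified model of a central-bank swap line. All numbers are hypothetical;
# real terms included auction-based pricing and varying maturities.

spot_rate = 1.40          # hypothetical dollars per euro at inception
notional_usd = 50e9       # dollars drawn by the borrowing central bank
term_days = 84            # a typical term of roughly three months
usd_interest_rate = 0.02  # hypothetical annualised rate charged on the dollars

# At inception: the ECB deposits euros with the Fed and receives dollars.
euros_posted = notional_usd / spot_rate

# At maturity: the exchange is reversed at the SAME rate, plus interest,
# so the Fed bears no currency risk however the euro has moved meanwhile.
interest_usd = notional_usd * usd_interest_rate * term_days / 360
usd_repaid = notional_usd + interest_usd
euros_returned = euros_posted

print(f"Euros posted: {euros_posted/1e9:.1f}bn; dollars repaid: {usd_repaid/1e9:.2f}bn")
```

Because the unwind happens at the original rate, the credit risk sits between the two central banks rather than with any commercial counterparty, which is what made the tool so elastic in a panic.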
Not that they were welcome at first. When the Fed first mooted the idea in the autumn of 2007, the ECB resisted. It did not want to be associated with a crisis that was still seen largely as American. If Gisele didn’t want to be paid her modelling fees in US dollars, why on earth should the ECB be interested? But as the European bank balance sheets unravelled, it would soon become obvious that Frankfurt needed all the dollars it could get. Initiated in December 2007, the swap lines would rapidly expand. By September 2008 all the major European central banks were included.
In October 2008 the network was expanded to include Brazil, Australia, South Korea, Mexico, New Zealand and Singapore. For the inner European core, plus Japan, the lines were made unrestricted in volume. The sums of liquidity were huge. All told, the Fed would make swap line loans totalling $10 trillion to the ECB, the Bank of England, the Swiss National Bank and other major banking centres. The maximum balance outstanding was $583bn in December 2008, when the swaps accounted for one quarter of the Fed’s balance sheet.
It was a remarkable moment: the Fed had effectively established itself as a lender of last resort to the entire global financial system. But it had done so in a decentralised fashion, issuing dollars on demand both in New York and by means of a global network of central banks. Not everyone was included. Russia wasn’t, which was hardly surprising given that it had come to blows with the west over Georgia’s Nato membership application only weeks earlier. Nor did the Fed help China or India.
And though it helped the ECB, it did not provide support to the “new Europe” in the east. The Fed probably imagined that the ECB itself would wish to help Poland, the Baltics and Hungary. But the ECB’s president Jean-Claude Trichet was not so generous. Instead, eastern Europe ended up having to rely on the International Monetary Fund (IMF).
Swapsies? As a scholar of the Great Depression, the Fed’s Ben Bernanke knew the importance of swap lines. Photo: MARK WILSON/GETTY IMAGES
The swap lines were central bank to central bank. But who did they really help? The reality, as all those involved understood, was that the Fed was providing preferential access to liquidity not to the “euro area” or “the Swiss economy” as a whole, but to Deutsche Bank and Credit Suisse. Of course, the justification was “systemic risk.” The mantra in Washington was: you have to help Wall Street to help Main Street. But the immediate beneficiaries were the banks, their staff, especially their highly-remunerated senior staff and their shareholders.
Though what the Fed was doing was stabilising the global banking system, it never acknowledged as much in so many words, certainly not on the record, where it said as little as it decently could about the swap line operation. The Fed’s actions have global effects. But it remains an American institution, answerable to Congress. Its mandate is to maintain employment and price stability in the US economy. The justification for the swap lines, therefore, was not global stability, but the need to prevent blowback from Europe’s de facto Americanised banks—to avoid a ruinous, multi-trillion dollar fire sale of American assets. Once the worst of the crisis had passed, Bernanke would assist the European banks in liquidating their American assets by way of the Fed’s three rounds of asset purchases, known as Quantitative Easing (QE).
The swaps were meticulously accounted for. Every cent was repaid. No losses were incurred—the Fed even earned a modest profit. They were not exactly covert. But given the extraordinary extension of its global influence that the swaps implied, they were never given publicity, nor even properly discussed. Bernanke’s name will be forever associated with QE, not swap lines. In his lengthy memoirs, The Courage to Act, the swaps merit no more than a few cursory pages, though Bernanke as a scholar of the 1930s knows very well just how crucial these instruments were. Is this an accident? Surely not. In the case of the swap lines, the courage to act was supplemented by an ample measure of discretion.
The Fed did everything it could to avoid disclosing the full extent and range of beneficiaries of its liquidity support operations. It did not want to name and shame the most vulnerable banks, for fear of worsening the panic. But there were politics involved too. Given the rise of the Bernanke-hating Tea Party in 2009, the likely response in Congress to news headlining the scale of the Fed’s global activity was unpredictable, to say the least. When asked why no one on Capitol Hill had chosen to make an issue of the swap lines, one central banker remarked to me that it felt as though “the Fed had an angel watching over it.”
One other reason for the tight lips is that the story of the swap lines is not yet over. The network was rolled out in 2007 and 2008 as an emergency measure, but since then it has become the under-girding of a new system of global financial crisis management. In October 2013, as the Fed prepared finally to begin the process of normalisation by “tapering” its QE bond purchases, it made another decision which made plain that the new normal would not be like the old. It turned the global dollar swap line system into a standing facility: that is to say, it made its emergency treatment for the crisis into a permanent feature of the global monetary system. On demand, any of the core group of central banks can now activate a swap line with any other member of the group. Most recently the swap line system was readied for activation in the summer of 2016 in case of fallout from the Brexit referendum.
As the original crisis unfolded in 2008, radical voices like Joseph Stiglitz in the west, and central bankers in the big emerging economies, called for a new Bretton Woods Conference—a successor to the 1944 meeting that decided on the post-war currency system and created the IMF and the World Bank. The Great Financial Crisis had demonstrated that the dollar’s exorbitant privilege was a recipe for macroeconomic imbalances. The centre of gravity in the world economy was inexorably shifting. It was time for a new grand bargain.
“Central banks had staged Bretton Woods 2.0. But they had not invited the public or explained their reasons”
What these visionary suggestions failed to register was that the foundations of the world’s de facto currency system were not public institutions like the IMF, but the private, dollar-based global banking system. The introduction of the swap lines gave that system unprecedented state support. The Fed had ensured that the crisis in global banking did not become a crisis of the dollar. It had signalled that global banks could rely on access to dollar liquidity in virtually unlimited amounts, even in the most extreme circumstances. The central banks had, in other words, staged their Bretton Woods 2.0. But they had omitted to invite the cameras or the public, or indeed to explain what they were doing.
The new central bank network created since 2008 is of a piece with the new networks for stress testing and regulating the world’s systemically important banks. The international economy they regulate is not one made up of a jigsaw puzzle of national economies, each with its gross national product and national trade flows. Instead they oversee, regulate and act on the interlocking, transnational matrix of bank balance sheets.
This system was put in place without fanfare. It was essential to containing the crisis, and so far it has operated effectively. But to make this technical financial network into the foundation for a new global order is a gamble.
It worked on the well-established trans-Atlantic axis. But will it work as effectively if it is asked to contain the fallout from an East Asian financial crisis? Can it continue to operate below the political radar, and is it acceptable for it to do so? With the Fed in the lead it places the resources, expertise and authority of the world’s central banks behind a market-based system of banking that has shown its capacity for over-expansion and catastrophic collapse. For all the talk of “macroprudential” regulation, Basel III and Basel IV, rather than disarming, down-sizing and constraining the global banking system, we have—through the swap lines—embarked on, if you like, a regulatory race to the top, where the authorities intervene heavily to allow the big banks in some countries to continue what they were doing before the unsustainable ceased to be sustained. And without even the political legitimacy conferred by G20 approval. Not everyone in the G20 is part of the swap line system.
The Fed’s safety net for global banking was born at the fag-end of the “great moderation,” the era when economies behaved nicely and predictably, and when a “permissive consensus” enabled globalisation. Though a child of crisis, it bore the technocratic, “evidence-based” hallmarks of that earlier era. It bears them still.
Can it survive in an age when the United States is being convulsed by a new wave of economic nationalism? Is there still a guardian angel watching over the Fed on Capitol Hill? And with Trump in the White House, how loudly should we even ask the question?
Canada’s approach to immigration: how (and why) it works
Prime Minister Justin Trudeau greets refugee families who recently arrived in Canada at an open house of the Masjid Al-Salaam Mosque in Peterborough Ont., Sunday, January 17, 2016. THE CANADIAN PRESS/Fred Thornhill
Good, balanced analysis of Canada’s approach to immigration – and the lessons it holds for Brexit Britain (and indeed other EU countries) – in the September issue of Prospect. As one of the Canadian experts cited argues, with reference to the fact that without increased immigration the UK will age rapidly:
“You are going to wake up with unbelievable problems. If you don’t have a strategy around this, your social programmes—education, health—are going to collapse. Through our immigration system we’re solving our economy in 2030 and beyond. You need to think long-term. You can spend all this money on innovation, on infrastructure, but if you don’t have anyone to use it productively, it ain’t going to matter.”
How Canada’s liberal immigration policy works—and why it could be a success here too
Canadian PM Justin Trudeau turns up at the airport to greet refugees as “new Canadians”—and personally hands out winter coats. When the arguments for immigration are so strong, why won’t Britain follow its path?
In a packed House of Commons, the Conservative Party leader rose from the benches and spoke with passion about the plight of thousands of victims of Islamic State (IS) in Iraq. They needed the protection of the west and should be welcome in our country, she argued. MP after MP rose to echo the leader’s views. No-one claimed the nation’s well of hospitality would be exhausted; no-one suggested the children should be subject to dental checks to prove their age. A vote was called on whether to accept the refugees—more than 1,000 Yazidi women and girls. There were 313 votes in favour; not a single MP voted against. The national newspapers the next day were united in their approval; this was a moment of great pride for a nation that saw itself as tolerant and generous, with a sense of fair play. This was not some parallel universe, nor some distant time in history; this was Canada, last year.
Since the election of Liberal Prime Minister Justin Trudeau in 2015, Canada has been revelling in its position as a global liberal beacon. While many European nations have balked at the idea of accepting Syrian refugees fleeing terror, Trudeau announced that Canada would invite 25,000 Syrians immediately. He even turned up at the airport to greet them as “new Canadians” and personally handed out winter coats.
As last August’s Yazidi vote showed, such sympathy is bipartisan. “We’ve always had a very generous refugee programme in Canada,” Rona Ambrose, the then Conservative Party leader who led the charge in the Commons, told me. “I thought this was something I could make a difference on.” The idea that a centre-right party wouldn’t speak up on behalf of refugees strikes Ambrose as strange (she gasps when I mention the discussion in the UK about possible dental checks of child refugees). “There is a consensus on the importance of immigration and refugees for our country,” she says. “It’s not unusual that we [the Conservatives] would do something on human rights.”
Canada’s immigration policy isn’t just about compassion—there’s calculation too. While happily accepting more refugees per capita than most western nations, its openness extends to migrants as well. Each year the government sets a target for the number of immigrants it wants, based on what the economy needs. Provinces and cities compete to host them. The government even encourages people to apply. The country’s immigration minister visited China last August in order to persuade government officials to double, and eventually triple, the number of offices where Chinese could apply for Canadian visas.
Last year the government set a target of 300,000 immigrants. That figure was broken down into more than a dozen different categories, with each given its own target. For instance, in 2015, Canada sought to bring in up to 30,000 people to work in the care industry, some 51,000 skilled workers, and around 20,000 parents and grandparents of immigrants already in the country. Unlike the UK, Canada chooses not to include international students in its overall figures, as they are temporary residents.
The largely unplanned flow of newcomers into the UK is, demographically, remarkably similar to the immigration Canada pro-actively courts. Last year 588,000 people arrived here, just shy of 1 per cent of the overall population of 65m. Canada’s 300,000 target is, likewise, a little under 1 per cent of its population. However, while the release of UK immigration statistics each quarter is greeted with a bout of hand-wringing, renewed promises to reduce the figure and anti-immigrant coverage in newspapers, in Canada it’s a little different. When the target was hit last year politicians on all sides voiced their approval; the newspapers were supportive. This year, once again, the government wants another 300,000 immigrants to come to Canada.
Canada has achieved something no other western nation can claim—it has built a liberal immigration system that takes a humanitarian attitude towards asylum seekers, while also bringing in hundreds of thousands of economic migrants through a well-planned, orderly process that—crucially—has broad public support. “We have, by and large, a public consensus for immigration,” Senator Ratna Omidvar, an independent politician who has played a prominent role in the evolution of Canada’s immigration policy, told me. “It is impossible for any political party to be anti-immigrant. We argue about who should get in—should they have education, how many refugees—but we won’t argue about whether we should have immigrants.”
The British debate, meanwhile, has been reduced to a conversation about prevention—essentially, how can we stop so many people coming? Since the referendum, liberals have been trapped in a cul-de-sac, endlessly debating the rights and wrongs of freedom of movement, but seemingly unable to have a conversation about how to build a sensible system that fits the country’s needs. If Britain leaves the European Union as expected, we will—as the slogan on the bus proclaimed—“take back control.” But what then? Once the UK has full control of its own borders, what sort of immigration policy should it have?
The Brexiteers claim they have a ready-made model in Australia. Barely a day went by during the referendum campaign without the phrase “Australian-style points-based system” passing the lips of Nigel Farage or Boris Johnson. The appeal of Australia to those angry about immigration is clear: this is an old Commonwealth nation with historic ties to the UK and—something which shouldn’t be overlooked—a majority-white population. It has also built a reputation as a nation that goes to extraordinary lengths to prevent would-be refugees from reaching its shores. The topic has swung elections—Tony Abbott’s successful campaign in 2013 owed much to the three-word slogan, “Stop the Boats.” Perhaps it is this image of a nation that literally sends out gunboats to stop people arriving that appeals to Ukip and the Conservative right.
It may be because pro-immigration politicians in Britain have been stuck in a defensive crouch that it never occurred to them to find a model of their own. But should they try to do so, they could do a lot worse than look towards another giant land of the old Commonwealth. On paper, the Australian and Canadian systems have much in common—they both operate points-based systems. But the political (not to mention cultural) emphasis between the two nations could not be more different. Where Australia gives the impression it is closed, Canada presents itself as open; while many Australians still view their nation as monocultural, Canada is comfortably multicultural.
With freedom of movement very possibly coming to an end, there is surely an opportunity for those who believe that immigration—properly managed—can be a force for good. Liberal politicians have struggled to champion an alternative. Is it time for them to start endorsing the Canadian model?
One of Canada’s big advantages is that it has been thinking in a serious way about the subject for a lot longer than we have. The very first law Canada passed after it became a democracy in 1848 was on immigration; one of the first ministries created in 1867, when it became a self-governing dominion, was a ministry of immigration and citizenship. “We’ve had 170 years to get a policy, a ministry and a bureaucracy to work properly,” says the Canadian philosopher and novelist John Ralston Saul. In fact, it goes further back—long before Canada gained independence. “When the 50,000 Loyalists fled from the United States (during the American Revolution) they were given loans and horses,” he says. “When the Sauls came in the 1840s they were given 100 acres each, some horses, cows, seed for two years and some cash.”
Admittedly, it helped that they were white. Like its neighbour to the south, Canada was both a nation of immigrants and a racist society. Unlike the US, however, where African-Americans have been a demographic reality since the 17th century, in Canada migration and race were traditionally the same issue.
“Not so many decades ago, Canada had an appallingly racist immigration policy,” says Michael Trebilcock, a law professor at the University of Toronto and the co-author of The Making of the Mosaic: A History of Canadian Immigration Policy. “We essentially only let in white folk from Europe. We refused to take any Jewish refugees during the Second World War and we interned most of our Japanese citizens.”
The progressive turn of Canada’s immigration policy came later, starting in the 1960s. A Conservative government, led by the lawyer and human rights activist John Diefenbaker, introduced a raft of legislation that made Canada a more liberal nation. Universal adult suffrage was finally introduced, as indigenous people were granted the vote, and the first ethnic minority politicians appointed to the cabinet. Diefenbaker also introduced the 1961 Immigration Act, which removed the explicit racial discrimination criteria that had previously existed in the official admissions process. Later in the decade, the Liberal government introduced the first points system, which embedded a clear set of rules governing immigration. As a result, Canada’s ethnic make-up began to change. Steadily, it began to embrace the idea of multiculturalism.
The same idea had some traction in British government in the 1960s, with home secretary Roy Jenkins being a keen advocate. But the multicultural argument proved easier to sustain in the Canadian context, where it helped not only adjust to new demographic realities, but also to distinguish a young country from its colonial past. Prior to the 1960s, the dominant story was of the British and the French making a pact to build a nation. By adopting multiculturalism, Canada “built a narrative in which immigrants can be incorporated,” says Irene Bloemraad, professor of sociology at University of California, Berkeley, and a migration expert. It became the defining part of the national story—indeed, Trudeau emphasised the strength derived from diversity when promoting his refugee policy last year. It also helped Canada distance itself from the US and Britain. “Canada was trying to assert itself on the world stage,” says Bloemraad. “Being able to say we’re multicultural made us stand out.”
Multiculturalism is embedded in the education system, too. Bloemraad moved to Canada in 1976 when she was four years old. “The history we got was stories of the Loyalists, of British people fleeing the American Revolution, fighting off the nasty Americans at the border,” she recalls. Changes have been made to the curriculum to reflect growing diversity. “That can be criticised as tokenism, but when you tell young people that that’s part of the story of the country, it’s much harder to promote an exclusive vision of the story of the country.”
A 1920s cover of Canada West designed to attract settlers from further afield
Today, around one in five people living in Canada was born elsewhere. In Toronto and Vancouver, that number rises to around half. (For comparison, 14 per cent of people living in Britain were born abroad; it is 41 per cent in inner London.) Canada’s diversity is fairly well represented politically: 49 MPs are visible minorities, as they are known in Canada—that’s 15 per cent. “Role models matter,” says Bloemraad. One of those role models, Senator Omidvar, adds: “People do talk about Canadian exceptionalism and I think our exceptionalism lies in the fact that we embrace immigration as one of the most important instruments of nation building.”
Behind these warm words lies a harsh reality: Canada needs immigrants. Even with 300,000 people arriving every year, Canada’s population growth will stall. Current estimates suggest that Canada will reach 50m people by 2050, but then plateau. That has led some prominent Canadians to argue for higher immigration. The advisory council on economic growth—a committee set up by the finance ministry—has called for the number to be increased to 450,000.
Without this increase, Canada will be left behind, warns Mark Wiseman, the former head of Canada’s national pension plan and one of the key members of the committee. If the annual immigration target stays as it is, he argues, “Canada goes from being the 11th largest economy in the world to the 29th. By every measure we become the equivalent of Romania in terms of our influence in the world. We’d no longer have a place in the G7, no longer have any influence at the United Nations and we’d struggle on a GDP and GDP per capita basis. We would become stagnant.”
Compare this to the British debate. Canada’s target is the product of an in-depth and ongoing analysis of what the nation needs, which has resulted in broad political support. Britain, meanwhile, has a target to reduce its immigration to “the tens of thousands.” This is not based on any economic study, rather it reflects short-term political tactics created to stave off Ukip and appease right-wing tabloids, while also placating the sizeable number of Britons worried about immigration. There is no sustained attempt to work out what levels of immigration, nor what type of immigrants, the British economy actually needs.
There is another problem with the target: since it was introduced in 2010, the government has failed to hit it every single year. This has further eroded trust in immigration policy, and in politicians more generally. The sense that an important promise was being broken fuelled the Leave campaign last year. But the overwhelmingly negative discourse on immigration goes further back, and beyond any one party. Chapter five of Labour’s 2010 general election manifesto was titled “Crime and Immigration.” Five years later, its offering in this area was summed up in three words, “Controls on Immigration,” plastered across a mug. This is the way British political parties tend to view immigration: not as part of a broader discussion about communities and citizenship, but as a matter of control, borders and policing. While in Canada there is a ministry dedicated to immigration and citizenship, in the UK—as in most European nations—immigration is part of the Home Office, whose principal purpose is not citizenship but the maintenance of order.
A multicultural crowd gathers for the Dundas West festival in the Little Portugal area of Toronto. Photo: TRANSCENDENTAL GRAPHICS/GETTY IMAGES, ROBERTO MACHADO NOA/LIGHTROCKET VIA GETTY IMAGES
The path to living in Canada and becoming a citizen begins abroad. Go to a Canadian High Commission in New Delhi or Chongqing and it will be “jam-packed with immigration and citizenship specialists,” explains Ralston Saul. They are there to help would-be citizens apply, not to find ways to prevent people from coming in the first place. Then, when migrants are approved, they are helped. “You take on this role as the inviter of people, not the ‘put upon,’” says Ralston Saul. That leads the government to provide free English and French lessons, allow migrants to use the health service and send their children to school. Access to the welfare state—which in the UK is the most inflammatory of all questions, the very thing David Cameron spent his doomed EU renegotiation trying to restrict—here plays a positive role in integration. “The federal government puts money into their pocket every month,” says Senator Omidvar. “They have enough money to pay for childcare. Their children go to a public school. The foundations are all there.”
“The University of Toronto is a venerable institution—it’s our Oxford. It’s less than one-third white. And you know what? No-one cares”
There is also a clear path to citizenship in just three years. “It serves you and it serves us if you become a citizen sooner rather than later,” says Omidvar, because then “they’re fully enfranchised.” Bloemraad agrees. “When migrants come they’re coming as permanent migrants. There is an understanding that if they arrive it’s a long-term relationship and they’re going to become future Canadians. It changes the way immigrants view things—they have a long-term investment in society.”
Crucially—and here the comparison with the UK is stark—there are clear rules about who can come in and what the process is. “Let’s not kid ourselves,” says Omidvar. “We look very carefully at everyone who comes into our country. That highly-managed system gives confidence to Canadians.”
No system is perfect and, despite the warm words of Trudeau, Canada’s immigration ministry is struggling to deal with the rising number of refugee arrivals. Almost 40,000 refugee claimants are stuck in limbo, waiting for their cases to be heard—some arrived five years ago. And while Canada’s multiculturalism appears deeply embedded, two recent terrorist attacks—one at the Canadian National War Memorial that was claimed to be inspired by IS, and another on a mosque in Quebec, which killed six Muslims—suggest that Canada is not immune from the challenges facing western Europe.
But overall support for immigration has not wavered—and no mainstream political party has sought to break the consensus.
Could the Canada model work in the UK? First, let’s look at some of the potential reasons why not. For a start, there’s geography. Surrounded by oceans to the east and west, ice to the north and a long border with the world’s richest nation to the south, Canada faces no natural migration flows. This means there are few worries about people “cheating” the system by entering the country without approval. Yet even in the UK, undocumented arrivals make up a small proportion of the overall influx. Despite the television footage of would-be asylum seekers in Calais jumping fences or hopping onto the backs of lorries, the numbers entering the country without prior approval are relatively small. In total, 30,000 people applied for asylum in the UK last year—and 87 per cent of those applications were made by people already in the country, rather than at a port of entry, suggesting that the fixation on pressure at ports and borders could be misplaced.
Is the sheer size of Canada somehow a factor? Its landmass is 40 times larger than Britain’s. However, much of that land is uninhabitable. The vast majority of Canadians live in a string of cities near its southern border with the US and—unsurprisingly—it is to those cities that nearly all immigrants head. They are no more sprawling than Britain’s cities: the population densities are very similar. Toronto, at 4,334 people per sq km, is more tightly-packed than Birmingham; Montreal’s density is on a par with Manchester’s; and there are 5,492 people per sq km living in Vancouver, just one more than London’s 5,491.
What about economic difference? Here, again, the two nations are relatively similar. Canada’s GDP per capita is US$42,000; the UK’s is US$40,000. In recent history, unemployment rates have been pretty comparable too, if anything tending to be lower in Britain, which should make it less prone than Canada to the “taking our jobs” argument.
What really stops Britain following Canada’s path is politics—and the atmosphere in which political debate takes place. While the three largest political parties in Canada are all pro-immigration, that can only be said with any confidence about minor parties in the UK. This could change, though. While Labour is still split on freedom of movement, the drift is in a liberal direction, and not only on the left. Since becoming leader, Jeremy Corbyn has made his party’s tone on immigration much more positive, while—from the backbenches—Yvette Cooper has pressed the government to do more for child refugees.
Within the Conservative party too, an eye-catching and potentially disruptive axis is emerging. In Scotland, Ruth Davidson has begun to distance herself from her more hardline Westminster colleagues, publicly questioning the “tens of thousands” target, while Amber Rudd—despite her bizarre abortive suggestion that companies should publicly list their foreign workers—has shown an openness to data, recently announcing that the Migration Advisory Committee (MAC) would “examine the role EU nationals play in the UK economy and society.”
The British media is another issue. “There is an unwritten consensus in Canada that the media is in favour of immigration,” says Omidvar. “The Toronto Star is aggressively pro-immigration because guess who reads their newspaper? Our national daily, the Globe and Mail, is very much an outlet that believes in pluralism. Their editorials are pro-immigration.” As for Britain: “You have your tabloids and oh gosh!” A recent study of British newspapers by King’s College London revealed that immigration stories featured on the front pages 99 times during the 10-week EU referendum campaign—79 of those were negative.
Then—and this will be the thorniest difference for many liberals—there is Canada’s comfort, and Britain’s relative discomfort, with multiculturalism. The dominant argument in the British debate, articulated by Prospect’s founding editor David Goodhart, is that much of the population feels deeply uncomfortable when rapid immigration distorts a nation’s culture, values and traditions. Canadians seem to find such an argument unconvincing. “The University of Toronto is a venerable institution—it’s our Oxford,” says Mark Wiseman, the former national pension plan CEO. “It’s less than one-third white. And you know what? No-one cares.”
For Wiseman, demographics should make debate about culture moot. Without increased immigration, the UK will rapidly age. “You are going to wake up with unbelievable problems. If you don’t have a strategy around this, your social programmes—education, health—are going to collapse. Through our immigration system, we’re solving our economy in 2030 and beyond. You need to think long-term. You can spend all this money on innovation, all this money on infrastructure, but if you don’t have anyone to use it productively, it ain’t going to matter.”
If freedom of movement really does end, there will be multiple economic challenges (as Jonathan Portes explains). But the UK will have an opportunity to create a new immigration strategy. And Bloemraad believes that could turn into something more—the chance to write the next chapter in the nation’s story. “One of the challenges for the UK is what is its story moving into the 21st century? How do you construct a story that recognises there are third or fourth generation people from all over the world who are all British?”
The economic arguments for a liberal immigration policy are strong; the only thing holding it back is politics. Maybe Canada is more exceptional than Britain. Perhaps Canadians are more tolerant, more compassionate. Or, could it be that Canada has simply ditched its racist past, worked out a plan for its economy and embraced the world? And in doing so, showed the way.
Scotland: Normal nation, neurotic neighbour
Here’s a brilliant piece from veteran writer and commentator Neal Ascherson on the age-old problématique of Anglo-Scottish relations. Required pre-election reading both north and south of the border.
Scotland: Normal nation, neurotic neighbour
The Union has been in decline for decades. The root problem is not turbulent Scots, it is a very English failure to develop a healthy nationalism south of the border
I was a guest, one day, at a royal banquet at Windsor Castle. The table, set with immaculate Victorian china and Georgian silver, stretched far into a distance where a white speck must certainly have been Her Majesty, and the dark blob Árpád Göncz, then President of Hungary.
Servants waited behind the chairs, in which Brits and Hungarians alternated. Windsor’s ancient stock of Tokaj was served. Hungarian neighbours, reluctant to be impressed, conceded it was wonderful. Conversation began. And then it happened.
The pipe band of a Highland regiment, in full tartan splendour, tramped in and began a slow march around the table with a gigantic clamour of bagpipes. Talk became instantly impossible as they made two lengthy circuits of the hall. The Brits looked decorously at their plates, as if nothing was going on. But the Hungarians looked wildly around. Your Queen—what does she mean by this?
I also wondered what she meant. But then I reflected how the Emperor Augustus no doubt brought loyal costumed Gauls to perform at his banquets. Habsburg emperors probably ordered ferocious Croats to dance with their weapons at dinners for foreign guests. So didn’t British kings and queens want to show visitors that they, too, had tamed barbarous tribes from the distant mountains and trained them up to imperial service?
Windsor is not Westminster. But lurking somewhere beneath the wrangle between the Scottish and British governments over Brexit and another independence referendum is a faint imperial stain. Why have the Scots forgotten their place? That was never a colonised, subjugated place. Scotland was not Kikuyuland or even Ireland. Its role was as a loyal, if exotic, partner and body-guard in the imperial enterprise—a grand privilege. And for centuries after the 1707 Union, most Scots did think it a privilege. Now they increasingly don’t. Why not?
Many English people can now see why not, even if they share the rising grumbles against Scots who “take our money and keep whinging.” After all, England just voted to break a Union and risk the economy, in order to get away from distant lawmakers nobody voted for. “Same thing with the Scots, if you think about it,” runs the thought for many ordinary English voters. But that’s emphatically not the way the rulers of the Anglo-British state, the political and social elites and their retinue, think about it. This is because they have a tin ear for nationalism. Even though it was English nationalism which put this Tory, Brexit government where it is.
As the quarrel between Theresa May and Nicola Sturgeon grows ruder, with May asserting that the search for independence is just “playing political games,” it’s time to think sensibly about nationalism. Time to junk the old Labour mantra, still repeated by many, that “nationalism equals racism equals fascism equals war.” (Communists, who at least had a political education, usually knew better.) The truth is that nationalism has been the world’s strongest mobilising force for two centuries. Stronger than hunger or religion, stronger than the class struggle. A sort of spectrum reaches from the enlightened, modernising, liberalising “let’s join the world” variety, across to the backward-looking, myth-making, exclusive and vengeful variety which drenched the 20th century in blood. You can call these extremes “civic” and “ethnic”—though in fact there’s no nationalism which doesn’t contain something of both: it’s the proportions that matter. Scottish nationalism, like early Indian nationalism, is firmly, primly, at the civic end. English nationalism, though rooted in one of the most tolerant peoples on earth, has ugly ethnic elements. And there are reasons for that.
The Brexit voters in England and the Scottish voters for independence or the SNP (not always the same thing) are alike in some ways, unlike in others. Both provoke the baffled horror of well-educated and well-travelled people—who are for the most part not badly off, and generally living in big English cities or in Edinburgh. These nice people find both voting masses “divisive” (a favourite word), and ask how anyone can want more borders in a globalised world. A similar liberal horror is felt by Americans who didn’t see Donald Trump coming, by Dutch people who felt disgraced by Geert Wilders, and by French, Hungarian or Polish citizens incredulous at support for Marine Le Pen, Viktor Orbán and Jarosław Kaczyński.
The injured national feelings of the English gave the Brexit vote its power and its victory. Nigel Farage wasn’t entirely wrong, as he sensed the wave of pride and delight which ran across much of England after the result, to talk about an “independence day.” People felt that they had indeed “taken back control.” I think they were quite wrong about that. The distant, oppressive “controllers” were not in Brussels: they are sitting here at home in Westminster and the City. But traits of “ethnic” nationalism deflected what could have been a reasonably “civic” campaign. The fomenting of often groundless panic over “migrants” led to outbursts of violent xenophobia, including the heartbreaking murder of MP Jo Cox.
And yet English voters seem to have made another, more mature choice. Almost nobody believed Tory and Ukip assurances that Brexit would make them richer. People accepted that leaving the European Union could bring a bumpy, unpredictable time for the economy—but they reckoned it was worth risking for the political gain: “taking back our country.” That’s authentic nationalism. Those condescending maxims—“Nobody votes to get poorer” and “It’s the economy, stupid!”—lose traction here. In Scotland, the independence camp recognise now that they overdid detailed economic reassurances in the 2014 campaign. This time, from what I hear, the “Yes” strategy will be twin-track. It will admit frankly that the Scots—workers and pensioners—may well be in for a rough first few years of independence. But it will also offer a broad account of Scotland’s resources, natural, human and intellectual, and argue only an independent Scotland can release them.
Brexit and the Scottish upsurge are similar, too, as rebellions against complacent elites. But in Scotland the urge to mutiny could be channelled towards a mildly social-democratic party—the SNP—and not towards vengeful populism. The age profiles don’t match, either. In England, older people were more likely to be Leavers. In Scotland, according to the latest Scottish Social Attitudes Survey, it’s the opposite: 72 per cent of voters under the age of 24 support independence, but only 26 per cent of those over 65. This demographic difference suggests that the survival-time left to the United Kingdom is now measurable. But it also reaffirms a sense of Scotland’s nationalism as forward-looking and optimistic, in contrast to a Brexit nationalism in retreat from the world, yearning for a lost and mostly imaginary age when the whole world knew what Britain was and Britons had no “unhealthy” doubts about their identity.
“If nationalism is normal, England is abnormal in not developing self-conscious national governance”
In Scotland, the 2014 campaign left pride but also scars. Families and friends quarrelled, and the splits have been slow to heal. But that was nothing compared to England’s post-Brexit anguish. Remainers south of the border lamented: “We suddenly find we are two nations who simply don’t know each other!” Part of the explanation lies in England’s enduring cultural segregation, in which class divisions acquire a caste-like opacity. Scotland is a more European, plebeian society, with a smaller “hereditary” middle class and a much less significant sector of private education. People may not like each other, but at least they know each other.
If nationalism is normal, it follows that England has been abnormal in not developing a self-conscious nationalist movement. But it also follows that Scotland, until recently, was even less normal. National awareness was always present—every Scottish school pupil has known that their country was once independent. But so what? In the 19th century, when nationalist revolutions were breaking out all over Europe in “submerged” nations such as Hungary, Poland, Lombardy or the Czech lands, nothing similar happened in Scotland. Instead, injured patriotism was safely displaced into the past—nostalgic adoration for the Jacobites, Robert the Bruce and William Wallace—or into the struggles of other nations (the Edinburgh bourgeoisie poured out money and poetry for Polish refugees).
Why? Because Scotland had begun to do well out of the Empire’s vast opportunities. Because pious Scottish Protestants were loyal to the Hanoverian dynasty as a bulwark against hated continental Catholics. Because of the trauma left by popular Scottish support for the French Revolution—a sympathy brutally suppressed, but still a terrifying memory for the propertied classes. Because of Scotland’s solidarity with England in the Napoleonic wars, which cut Britain off from European political development.
But a cultural nationalism, with its political implications amputated, survived. It’s a mistake to think nationalist movements only arise in backward peripheries. Most have been made by minorities who feel themselves more civilised than their metropolitan rulers. The industrialised Czechs rebelled against the semi-feudal despotism of the Habsburgs, the Poles against primitive Russian tyranny, the go-ahead Americans against the archaic British monarchy, the sophisticated Catalans and Basques against somnolent Castile. The Scots, in the 19th and 20th centuries, considered themselves far better educated and more technically sophisticated than forelock-tugging England. There was enough truth in this to preserve a sense of cultural superiority, even as the Scots continued to evade its political logic.
This began to change after the First World War. A literary and language revival was followed by the slow emergence of political nationalism (the National Party of Scotland, which would eventually become the Scottish National Party, was founded in 1928). At first seen as dotty and marginal, the SNP began to break through when young Winnie Ewing won the 1967 Hamilton by-election with the words: “Stop the world! Scotland wants to get on.”
By now, the context had changed. The Empire had collapsed, Scotland’s heavy industries were in steep decline, poverty and unemployment were provoking mass emigration. The “bargain” of the 1707 Union—prosperity in return for our independence—was seen to be failing. And Scots always have regarded it as a bargain: as a revocable Treaty of Union rather than as a once-and-for-all Act which only the London parliament could repeal.
Put it another way. Many Scots, including senior Scottish lawyers, consider that Scotland still retains a “residual sovereignty.” The “Claim of Right,” eventually signed in 1989 by most of Scotland’s (then overwhelmingly Labour) MPs, spoke of “the sovereign right of the Scottish people” to choose their form of government. English constitutional jurists—for whom Westminster’s sovereignty is and always was foundational, both before and after the Treaty of Union—think that is complete nonsense. Look for a moment at this collision between two philosophical planets—one Scottish-European and rooted in the Enlightenment, the other archaic and uniquely English—and you can feel its heat in the current angry rhetoric between May and Sturgeon.
What slowly followed after 1989 is now familiar. The glue holding UK politics together—all-British political parties—came apart. The Scottish parliament, with many domestic powers, was revived in 1999. The SNP has been governing Scotland for the last 10 years, and by destroying Labour in Scotland, has skewed the whole balance of British politics—probably forever.
Independence? The “Yes” campaign assumes that this next referendum may be even tougher to win than the last. The economy feels bleak; the enthusiasm of 2014 is probably unrepeatable. Yet it’s an odd situation. The polls show a curiously hard but static support for independence, only now showing a slight tendency to creep up to the halfway mark. But at the same time, Scotland’s place in the Union seems to grow looser almost month by month. The rejection by May of Scotland’s emphatic choice to stay in the EU—with all 32 council areas backing “Remain”—is only the most spectacular of a long chain of smaller rebuffs and apparent broken promises which began after the “No” victory in the 2014 referendum. When will those stubborn, watchful voters react to this? When will the loose tooth finally drop out?
Signs in the wind: the number of people I come across in Scottish cities now saying that “it’s going to happen some day, whether we like it or not.” Or “I’d like to see an independent Scotland, but not now with all this uncertainty in the world.” These are opinions which can easily slide into “Yes,” if an independence campaign really gains conviction. Sturgeon is still liked and trusted. The Unionist campaign, by contrast, would be weaker and more divided than “Better Together” in 2014, and, in the light of Holyrood’s changed arithmetic, probably Conservative-led. That would make the Unionist sell difficult at any time, but especially when the prospect of decades of right-wing Tory rule in London, committed to dismantling what’s left of the welfare state and the public sector—both so precious to Scotland—is everywhere found “horrendous.” But in spite of all that, a renewed impatience for change which would make those signs relevant hasn’t shown itself yet.
“The bien pensants want nothing to do with popular nationalism, which they despise”
If this is a tale of two nationalisms, one pretty normal and the other—English—still shapeless and nameless, then it’s striking that SNP leaders and thinkers would be happy for a sensible, sustainable “English National Party” to emerge. Ukip canalised plenty of English patriotic anger, but—surprisingly—Nigel Farage always insisted that “Britain” was the country he wanted taken back. English nationalism is always lurking, but—weirdly—it often takes resentment of the Scots to see it surface. As dawn broke and Scotland’s “No” vote came in on 19th September 2014, a relieved David Cameron stood outside No 10 and announced that progress on the campaign “vow” about fresh devolution to Holyrood would now depend on simultaneously introducing English-only votes at Westminster for English laws. So it turned out—at least as Downing Street saw it, before it had to back off—that Scotland’s two years of national soul-searching had, after all, been about English independence! Both Cameron and Farage seemed to be clouded by the London fog which has blinded English thinking about these islands for several centuries. For distant historical reasons, they never grasped the distinction between a nation and a state—a difference obvious to other Europeans, whether Slovaks or Scots, who could see that Great Britain was a multi-national state rather than a nation.
Nonetheless, most foreigners still call the place “England” for short. And reading Virginia Woolf’s Mrs Dalloway, you can see that the London upper classes in the 1920s would never have considered the monarchy, the government or themselves to be anything but “English.” The Empire—now yes, that was “British.” But soon after the Second World War, the idea spread that there was something a bit coarse, a bit hurtful to Welsh or Scottish feelings, to talk about stuff being “English” (except for football, of course). Very sportingly, the English brought themselves to do what seemed “inclusive” and, as it wasn’t yet called, “politically correct”; they began to talk about their own country as “Britain” and about “Britishness” instead.
There were two problems with this. One was ironic: the word-change caught on “down South” just as the Scots were discovering that they felt less and less British. The other is much more serious. The change has made talking about the English nation and Englishness seem “inappropriate,” even faintly racist, and this has helped to distort and stunt political expression in England to a frightening degree. The very reasonable movement for an English parliament is drowned out by cranky outfits like the English Democrats: “Give us back Monmouthshire!”
Elsewhere in Europe, communal self-assertion has often been tamed into a modernising, liberal force by middle-class intellectuals. But in England, the “enlightened elite” holds its nose and turns away. A class thing, yet again. The bien pensants want nothing to do with popular nationalism, which they despise as uneducated hooliganism—white vans, St George’s flags, louts murdering Poles for speaking their own language in a pizza queue. The consequence: pop-up demagogues divert well-founded grievances into stupid xenophobia. An undirected English self-awareness is spreading, but English political life still stagnates. This is why some Scots, like political theorist Tom Nairn, argue that Scottish independence could also be the liberation of England. “Britishness,” they suggest, has been a heavy veil preventing English people from seeing their own situation clearly. Nairn invented the term “Ukania” to describe the decaying Union, echoing the Austrian novelist Robert Musil’s use of “Kakania” for the fin-de-siècle Austro-Hungarian Empire on the edge of disintegration. If the UK were broken up and “Britain” reduced to a mere term of geography, wouldn’t the English be free to rediscover those principles of fairness, equality and democracy which they once helped to spread overseas—and give them a new birth at home?
Was there ever a British people? Most great empires develop a ruling caste whose members transcend mere national or racial identities. There was Civis Romanus. There was Homo Sovieticus. And there was also Homo Britannicus. From one end of the archipelago to the other, he wore the same clothes, spoke with the same public-school accent, ate the same sad food and patted the same sort of dog. Sometimes he was a headmaster or vicar, sometimes a colonel, sometimes he went out to govern New South Wales. To meet him, you’d never know if his roots were in Cornwall, County Down or Caithness. His culture and values were simply… British—perhaps the only time that Britain showed one of the symptoms of a classic nation. (He’s not yet quite extinct, but found now only in a few protected environments.) One of his assumptions was that Great Britain and its empire had risen to be something universal, leaving petty nationalism behind. That delusion has been shared by many imperial powers: the Napoleonic legacy in France, or the Weltpolitik of Wilhelmine Germany. Unfortunately, it was a delusion that Homo Britannicus bequeathed to his successors. Even today, London governments find it hard to take Welsh or Scottish nationalism seriously. Independence? These “Celts” of ours have to be joking or, as May revealingly puts it, “playing games.” They can’t really mean it, in the way Washington’s Americans or Lord Byron’s Greeks meant it. (Or as the Irish Celts meant it? Oh, don’t bring all that up again… )
To come through this tumult into a new stability, several old mental ramparts have to be bulldozed. One is that basically imperial defect of vision. In 1883, the prophet of empire JR Seeley wrote The Expansion of England, in which he presented England’s subordination of Scotland, Wales and Ireland as only the prelude to a racially English world-state dominating the globe. He conceded that these “utterly unintelligible” Celts, plus “a good many French and Dutch and a good many Caffres and Maories” could be admitted “without marring the ethnological unity of the whole.” Never mind the Caffres and Maories. Traces of Seeley’s attitude to the “unintelligibles” nearer home still survive at Westminster.
The “Ukanian” mindset also needs to drop the idea that Britain can muddle its way through this problem by embracing federalism. No month goes by now without somebody announcing that a federal constitution would end all these silly disputes over power. There are two problems here. The first is the gross British asymmetry. A federation is a coming-together of polities in a law-bound partnership. If it is to succeed, no single partner must dominate, so it helps if partners are of similar size. A federation of four in which one partner (England) has 85 per cent of the population simply wouldn’t work. The hopeful answer has been: “Ah, but England could be divided into regions—then it would all balance out.” Unhappily, England doesn’t want to be divided up. Many regions have impressive cultural identities and growing resentments of London. But when invited to vote for devolution with their own sub-parliaments, as the North-East was most recently asked in a 2004 referendum, they have said no. It may be that the English have a tradition, going back to the Tudors and beyond, that strong centralised government works best. Whatever the reason, the English have every right to spit out constitutional wheezes they don’t fancy.
The second problem with federalising the UK is the weird and antique Anglo-British power structure. The three-way scrimmage over the Brexit referendum—“the people’s voice” versus parliament versus Crown prerogative—showed that, when it comes to it, nobody really knows what the law of state is in this country. But most politicians, when it suits them, do agree that parliament at Westminster is sovereign: its authority should be absolute and its laws should not be overruled. No modern state would tolerate this bizarre idea. In most democracies, the legislature is subject to the supreme law of a Constitution. But in Britain, a sovereign parliament can’t by definition irrevocably surrender authority—and the split of powers in a federation does need to be irrevocable. So when Gordon Brown talks about “near-federalism,” he is really talking not about shared sovereignty but only about wider devolution.
So with federation “returned to sender” and the imperial monocle replaced by normal lenses, the UK’s future looks clouded. For Scotland, the remaining alternatives are slightly expanded devolution or independence. Neither seems overwhelmingly attractive to most Scots, in this pause before the new independence campaign begins. But what most Scots do want is a future which keeps their contact with the other “British” nations open, warm, easy and very special. And, surprising as it sounds, Scottish independence is probably the best way to preserve that “Britishness.”
Devolution works more and more scratchily. There’s no way to reconcile the need of this small country for strong, interventionist government with the welfare-cutting, privatising programmes of Conservative administrations, now likely to hold power in London for a generation. Even if the next “indyref” fails and Sturgeon’s SNP is discredited, the independence option will smoulder on like those fires that glow in the night from abandoned coalfields. And, though it seems unthinkable at the moment, history (not only Irish) warns that a much more radical and impatient movement might take the place of the SNP—just as Sinn Féin supplanted the moderate Irish Home Rule party.
Better, then, to “lose Scotland” (that revealing phrase), and to construct a close, affectionate relationship between independent states which respects all the personal and cultural links and tastes which do form a residual “Britishness.” If Scotland ends up inside the EU with “rump UK” outside, that intimacy—to say nothing of economic necessity—will find its way round any new frontiers.
Scotland and England know each other as no other two nations in the world do, and much of their problem at the moment comes from the fact that—having long lived in the shadow of Britannia—England does not appear to know herself. If Scotland had freedom, and England learned self-awareness, a new form of common self-respect and pride could grow: the virtuous nationalism of a virtual Britain.
Pachyderm Paradise
Here’s the venerable decorated elephant – tusker as it’s known locally – fronting the procession at this year’s Navam Perahera festival in Colombo. Organized by the Gangaramaya Temple, a local landmark, and held during the February poya (full moon), the event always draws a sizeable and enthusiastic crowd.
‘In The Cage, Trying To Get Out’
Herschel Grynszpan at his first interrogation, one day after he shot the German diplomat Ernst vom Rath at the German embassy in Paris, November 8, 1938
Here is another typically engaging and informative review by historian Timothy Snyder from the pages of The New York Review of Books, this time looking at a new set of books on Jewish life in Europe during the 1930s, as communities across the continent struggled to come to terms with the threat posed by the rise of Hitler and Nazi Germany.
I learned much from it. For example, the fact that on the eve of World War II the Jewish population of just two Polish cities, Warsaw and Łódź, was larger than the entire Jewish community in Germany. I was also reminded of a few telling details, such as the fact that right up until a few months before the war’s outbreak in September 1939, the Polish military was training young Zionist paramilitaries – principally members of the Irgun – for their departure to Palestine, with a view to making as much trouble as possible for the British authorities there in the hope of persuading them to accept the creation of a new Jewish state. Or that not long before the joint German-Soviet invasion of Poland in autumn 1939, Hitler had proposed to Warsaw an alliance with the goal of invading the Soviet Union. (Poland refused.)
As Snyder notes, moreover, to attempt to understand ‘the life and death of European Jews in the 1930s and 1940s is, almost by definition, to engage with [Hannah] Arendt’. Pointing in particular to her seminal work The Origins of Totalitarianism, Snyder notes that Arendt highlighted what he calls ‘the elemental connection between statelessness and mass murder’. In doing so Arendt was also pointing to a central theme of Snyder’s most recent book, Black Earth: The Holocaust As History and Warning. And as he expresses that thesis here with reference to Hitler’s effective annihilation of many of the states occupied by the Nazis during the early stages of World War II:
The denial of civil rights to Jews within states was one form of repression. The destruction of states themselves rendered Jews vulnerable as nothing else could. Hitler’s aspiration to rid the earth of Jews could only proceed to completion after the states themselves were destroyed.
Nor is this a purely historical point. When we consider the appalling destruction of minority communities in the ‘destroyed states’ of our time – most notably Iraq and Syria – the continued relevance of this insight becomes all too grimly apparent.
À l’intérieur du camp de Drancy, by Annette Wieviorka and Michel Laffitte, Paris: Perrin, 382 pp., €23.00
Herschel Grynszpan shot the German diplomat Ernst vom Rath in Paris on November 7, 1938. The Nazis claimed that the young man was an agent of the international Jewish conspiracy, and that his act of murder was an early salvo in the “Jewish War” against Germany. In fact, he was a confused and angry teenager who, like thousands of European Jews in late 1938, was unwanted both in Poland, where he was a citizen, and in Germany, which he knew as home. Both Germany and Poland were pursuing policies designed to get rid of Jews, Berlin with deadly but hidden purpose, Warsaw with cynicism and calculation. Anti-Semitism, however, did not unite the two governments but rather ruined their mutual relations. People like Grynszpan were caught in the middle. He was the victim not of German-Polish agreement but of a growing German-Polish conflict.
In Poland in 1938, an authoritarian clique in power had to deal with public anti-Semitism as well as opposition from an anti-Semitic party, the National Democrats, that had never run the state by itself and organized pogroms as a challenge to public order. There were three million Jews in Poland, a tenth of the total population, a third of the urban population. There were about as many Jews in the Polish cities of Warsaw and Łódź as there were in all of Germany, or for that matter in all of Palestine.
In domestic policy the Polish regime copied some of the tactics of the National Democrats, founding a ruling party that did not admit Jews and presenting mass Jewish emigration as a goal of foreign policy. Polish leaders supported the establishment of a state of Israel with the most expansive possible boundaries. In secret the foreign ministry and the ministry of defense supported the right-wing Zionist militants of Betar and Irgun. Young Jewish men were trained on Polish military bases and then sent back to Palestine to make trouble for the British Empire in the hardly hidden hope that the British could be driven out, or at least induced to permit mass emigration of Jews from Poland.
In Germany, Hitler had already made Jews second-class citizens and proclaimed his hatred of them and his intention to eliminate them. The Nazi leadership was far more anti-Semitic than the general population, for whom Jewish matters in general had little salience. Less than 1 percent of the German population was Jewish, and most German Jews would be induced to emigrate by repression and theft. “World Jewry,” the wraith that haunted Hitler’s speeches, was mostly present, even in the Nazi mind, beyond the borders. In 1938 Hitler, Göring, and Ribbentrop confused Polish leaders by proposing to them as common interests a war against the Soviet Union and the deportation of the Jews.
The Poles, though fearful of Soviet power and desirous of reducing their Jewish population, did not see how those two goals could be pursued at the same time. Surely a large-scale continental war would disrupt any plan for Jews to emigrate? The group of Polish “colonels” who ruled the country, though quite cynical after their own fashion, could not begin to anticipate where Hitler’s logic would lead after 1938: toward the mass killing of Jews under the cover of war.
In any event, German policy in 1938 was bringing Jews to Poland rather than drawing them away. After the German annexation of Austria (or Anschluss) in March 1938, some twenty thousand Jews with Polish citizenship living in Austria tried to return to Poland. After humiliating pogroms, Austrian Jews were subjected to a systematic policy of expropriation and forced emigration devised by Adolf Eichmann. As these methods were then applied to German Jews, Polish diplomats feared that the tens of thousands of Polish Jews living in Germany would also seek to return. The foreign ministry decided to exclude Polish Jews abroad from the protection of the Polish state.
Right after the Anschluss, the Polish government demanded that all of its citizens living abroad register with embassies—and in October, right before the deadline, instructed its ambassador in Berlin not to stamp the passports of Jews. The Germans could see where this was headed, and responded by deporting about 17,000 Polish Jews to Poland in late October. Very often these were people whose entire lives had been spent in Germany and whose connection to Poland was quite limited. Grynszpan’s parents, for example, had moved to Germany in 1911, before an independent Poland had been established. Their children had been born in Germany.
Grynszpan’s parents had sent their son, then fifteen years old, to an aunt and uncle in Paris in 1936 to spare him from Nazi repression. By 1938, both his Polish passport and his German visa had expired, and he had been denied legal residency in France. He faced what his biographer Jonathan Kirsch perceptively calls the “existential threat of statelessness.” His aunt and uncle had to hide him in a garret so that he would not be expelled. They shared with him a postcard from his sister, mailed right after the family was deported from Germany to Poland: “Everything was finished for us.”
The young man had some sort of disagreement with his aunt and uncle about how to react to the family tragedy, and left the house in a rage. The next day he bought a gun, took the métro to the German embassy, asked to meet a German diplomat, and shot Ernst vom Rath, the one who agreed to meet him. It was, he confessed to the French police as he allowed himself to be arrested, an act of revenge for the suffering of his family and his people.
Kirsch has a dramatic story, and he tells it well. There is a climax: Hitler and Goebbels seized upon the murder as an occasion for the first national German pogrom, the Kristallnacht of November 9 and 10, 1938. There is the long, slow denouement: Grynszpan, when the Germans later got hold of him, changed his story, and claimed that Rath was his lover. German jurists dutifully added a violation of Paragraph 175, the ban on homosexual intercourse, to the list of the charges against Grynszpan. This of course implicated Rath, whom the Nazis wished to present as a blood martyr, in crimes of a sexual and racial character involving a minor. Kirsch argues that Grynszpan believed that Hitler would not be able to tolerate his testifying about a love affair on the witness stand.
Kirsch’s version (which here follows an earlier book by Gerald Schwab*) credits Grynszpan with an intelligence he did not always display, but this defense had already been suggested to him in France by a lawyer, and he had a long time to consider his strategy. Most likely the crime was political but the defense was calculated. Rumors about a sexual connection between Grynszpan and Rath were current after the shooting but seem unlikely to be true. Kirsch, to his credit, is interested in the purported homosexual relationship only as a possibility to be considered and analyzed in order to clarify what happened.
Bernard Wasserstein has set himself a difficult task in On the Eve, his history of the Jewish Europe of the 1930s: to hold the attention of readers who already know how the story will end. His research is superb, but in an important respect he has written a work of art rather than of social science: he seeks to convey a moment rather than arrive at an explanation. The pertinent epigram is from Simon Dubnow, the founder of modern Jewish historiography: “The historian’s essential creative act is the resurrection of the dead”—which in this case means the murdered. The challenge comes with a double edge if we remember that Dubnow himself is one of those murdered, shot in Riga in 1941 during the Holocaust.
We cannot forget the Holocaust when we read of the Jews of the 1930s, nor does Wasserstein expect any such thing. But we must remember that our knowledge of a Holocaust in 1941 cannot have been shared by Jews in 1938, and more broadly that the meaning of lives cannot be reduced to the motives of the murderers. Wasserstein meets Dubnow’s challenge with a dozen thematic chapters about Jewish ways of life; one of the later ones, on “youth,” is perhaps the most representative and the finest. For young people (such as Grynszpan) formed entirely by the 1930s, this moment was everything they had, all they knew of life. In essays written by Jewish schoolchildren in Poland, Wasserstein finds a haunting collective loneliness.
In earlier sections devoted to Western and Central Europe, Wasserstein calls attention to the absence of children, seeing the smallness of Jewish families as evidence of an individualist “road toward collective oblivion.” This seems to take the demographic doomsaying of the 1930s too seriously. For one thing, as Wasserstein acknowledges a few pages later, Jews had smaller families in Western and Central Europe not because they were in despair about the fate of their people but because they had become bourgeois. Their low fertility rates, low infant mortality, and long lives anticipated the demographic transition of postwar Europe. For another, Jews in the major Jewish homeland, Poland, were still reproducing at a fairly high rate; without emigration the Jewish population grew by about 50,000 a year. And as the Polish origins of the Grynszpan family remind us, in Germany immigration rather than reproduction was the natural source of demographic growth.
The extreme difficulty of movement in the late 1930s thus becomes the theme of the book. After the United States restricted immigration in 1924 and the British limited migration to Palestine in 1936, most Jews knew that their fate, whatever it might be, would come in Europe. Although German and Polish restrictions on the citizenship of Jews set the final trap for families like the Grynszpans in 1938, these policies were part of, and in some measure a reaction to, the global constriction of emigration. The Évian Conference of July 1938, on the issue of Jewish refugees from Nazi persecution, had demonstrated that no major country was willing to take the Jews of Germany—and, to Warsaw’s frustration, the far more numerous Jews of Poland were not even discussed. Insofar as Jews in Poland were moving at all, it was from the small towns to the cities. Wasserstein gives excellent descriptions of Jewish urban misery, although much of the misery was, of course, simply urban and not particularly Jewish. Polish peasants, whose unemployment rate was even higher than that of the Jews, were also flooding the cities.
Wasserstein writes of “New Jerusalems,” the cities that Jews considered to be special. In Poland this was Vilna (Wilno in Polish, Vilnius for Lithuanians, whose capital it is today), where the historian Simon Dubnow, among many others, gathered historical and ethnographic materials for YIVO—the Institute for Jewish Research (today in New York). From the neighboring Soviet Union, the other European country with a Jewish population in the millions, Wasserstein chooses Minsk: notable indeed for its Soviet-era Yiddish culture, at least before the Stalinist Great Terror of 1937–1938 and the Holocaust.
Slovenský Národný Archív
Adolf Hitler and the Slovak leader Jozef Tiso, Salzburg, Austria, July 1940
One of Wasserstein’s many achievements is to integrate Soviet Jewish experiences, with all of their radical differences, into a European history. Because the Soviet Union is an integral part of his history, concentration camps do not loom in the future but define the present. There were twice as many Jews in Soviet camps in 1938 as there were people in German camps. The Soviets killed about a hundred times more Jews in the 1930s than did the Germans. For the most part, this was not from any special anti-Jewish animus: Jews were sometimes killed in the USSR for political reasons associated with their being Jewish, as Wasserstein tends to stress, but much more often simply because they seemed to be standing in the way of some larger policy; for example, the deliberate famine in Soviet Ukraine in 1933. Thousands of Soviet Jews were shot by the Soviet secret police as Polish spies in 1938, unlikely though that might seem. This was part of what the Soviets called the “Polish Operation” of the Great Terror, which was particularly bloody in Minsk.
The large number of Jewish victims of Soviet power was mainly a function of the repressive character of the Soviet state at the time. Despite all the bloodletting, the Soviet Union was then the only officially anti-anti-Semitic state in the world, and it assimilated more Jews into its system than any other country had done. Wasserstein points to what he considers an unmistakable sign of this integration: many Jews in the Soviet Union forsook their God. Whereas most Christians in the USSR admitted to their beliefs in the 1937 census, only 10 percent of Jews did.
Wasserstein doesn’t know Polish or Russian. Perhaps as a result his account of the integration of Jews into the two major Slavic cultures can seem a bit more exotic than it actually was; but he does know, aside from German, Yiddish, Hebrew, and French, the languages of his other two “New Jerusalems”: Dutch for Amsterdam and Ladino for Salonika. Each city is presented in impressively credible detail, and the juxtaposition of all four illustrates, about as well as can be done, the multiplicity of the different Jewish cultures in Europe. Wasserstein himself clearly loves languages, and they give him an occasion for brief moments of erudite playfulness in a work whose tone is generally calm and earnest. His confident multilingualism permits an interesting European counterhistory of mass literacy and mass politics. Christian national elites were eager in the first third of the twentieth century to raise up the Christian masses to democracy, socialism, or nationalism by teaching them to read. Male Jews, for the most part, were already literate in a language or two or three. They were bemused or afraid or, sometimes, fascinated by the cultures around them.
The missing chapter is about the Jews who tend most to fascinate us, the writers and the scientists. Leaving them out is the most interesting, and perhaps the most un-Jewish, move that Wasserstein makes. Sigmund Freud figures not as the founder of psychoanalysis but as the author of a self-reflective note about his Jewish identity; Julian Tuwim appears not as the most-read Polish poet but as an example of ambivalent self-regard; György Lukács is not the leading Marxist philosopher of his time but only an admirer of the “foggy” Jewish nationalism of Martin Buber. In a kind of postmodern chivalrous gesture, only the achievements of Jewish feminists get close attention. There is no consideration of the “contributions,” as Wasserstein says with irony, of Jews to European culture. This choice denies the reader any vicarious sense of superiority (“we made the culture and they destroyed it”) or any redeeming access to the uses of adversity (“look what we did despite it all”). With a supple but irresistible force, this insistence on the typical experience and not on exceptional achievement holds the book squarely in the category of social history: a portrait of a people, a collective one.
Wasserstein restores, as well as anyone could, a moment of life. He even begins a kind of reclamation of life from death. The suicides that followed the tragedies of 1938—the Anschluss in Austria, the deportation of Jews from Germany to Poland, and Kristallnacht in Germany—were not only predictable consequences of oppression but rather attempts, at least in some cases, to preserve the shape of a life whose continuation, in the new circumstances, could only corrupt. Yet the suspense can only be maintained for so long; these tragedies, though presented again and again in human terms by Wasserstein, are also general turning points, beginnings of an ending. By the time Wasserstein reaches Grynszpan and his deed in late 1938, in a chapter entitled “In the Cage, Trying to Get Out,” the darkness is falling.
The absorption of Austria into the Third Reich in March 1938 led to a German-Polish-Jewish refugee crisis in October, which in turn led to Grynszpan’s assassination of Rath and to Kristallnacht in November. The Anschluss was also the beginning of the end of the European state system. Hitler, much encouraged by his unexpectedly rapid success, pressed onward toward Czechoslovakia. At Munich in September 1938, the French and British abandoned their Czechoslovak ally, allowing Germany to annex the rim of mountainous territory called the Sudetenland. Hitler, further emboldened, moved in March 1939 to destroy the remaining Czechoslovak state.
The Jews of western Czechoslovakia were absorbed into the Reich along with a “Protectorate of Bohemia and Moravia.” The Jews of the farthest reaches of eastern Czechoslovakia, the region known as Subcarpathian Ruthenia, found themselves under Hungarian rule after Berlin granted Budapest this territory in the First Vienna Award of November 1938. The destruction of Czechoslovakia left these Jews stateless, and Hungary refused to recognize about 20,000 of them as its own citizens. Hungary would expel these people in 1941, and they would become the victims of the first large-scale shooting action of the Holocaust.
In Hitler’s disposition of Czechoslovakia, Slovakia became an independent state, subordinate to the Reich but formally sovereign. As James Mace Ward shows in his finely researched biography, the Slovak leader Monsignor Jozef Tiso understood this new beginning as a chance for Christian, national, and social revolution. The end of Czechoslovakia deprived the Jews of their previous civil status; the new Slovak state denied them equal citizenship and deprived them of property rights. Tiso wanted Slovaks to seize Jewish property and take up Jewish professions, and thus expand the national middle class. The Jews, suitably impoverished, could then be deported to the Reich as laborers, as was arranged in October 1941. Slovak leaders asked, that December, for assurances that Jews sent to Germany would never return. This was superfluous: the endpoint of the deportations was Auschwitz.
In March 1938 the Warsaw government expressed no objection to the annexation of Austria, and in September 1938 it actively supported the partition of Czechoslovakia. After these two easy triumphs, though, Hitler turned again to Poland, and now his tone was far less cordial. The German proposals to the Polish government in late 1938 and early 1939 remained incoherent. There was some vague assurance that Poland could share in the spoils of a German-Polish war against the Soviet Union, as well as some incomprehensible hints of a common solution to the Jewish problem. Far more precise were German demands: that Danzig, then a free city in which Poland had important interests, be ceded to the Reich; and that Poland allow an extraterritorial highway to connect Germany with East Prussia.
Polish leaders understood that even a victory against the Soviet Union alongside Germany would be a defeat, since Poland would surely become a German satellite the moment it became a German place d’armes. To Polish public opinion and Polish leaders alike, the German plans for Danzig and the highway were themselves intolerable violations of sovereignty. Poland decided to resist such German demands and risk war. Great Britain and France then endorsed Poland’s independence and offered security guarantees. When the Polish foreign minister visited London in April 1939, he still was hoping to persuade the British to allow Jewish settlement in Palestine. In May the Polish army was still training the Irgun.
Hitler wanted war in 1939, and was not choosy about allies. Although his ultimate goal was, as he had been telling the Poles for years, an attack on the USSR, he was perfectly willing to make an arrangement with Stalin if it served his immediate aims. Thus in the summer of 1939 Hitler changed his basic conception from that of an attack on the Soviet Union with Polish help to an attack on Poland with Soviet help (with the Soviets, of course, to be betrayed later on).
This is where Wasserstein, quite understandably, ends his study: with the Molotov-Ribbentrop Pact of August 1939, the de facto German-Soviet alliance that doomed Poland. We are shown a photograph of the forlorn delegates at the World Zionist Congress as they hear the news of the arrangement between Hitler and Stalin. As they immediately understood, this meant a German war on Poland and Nazi domination over millions of Jews. It also opened the way to a German attack on the Western European nations of many of the delegates. The session ended early so that the delegates could hurry home; Chaim Weizmann closed the meetings with his prayer that “we shall meet again, alive.”
Poland was quickly defeated by the joint German and Soviet invasions of September 1939. Britain and France provided no meaningful assistance to Poland but did declare war on Hitler’s Germany. Herschel Grynszpan, then still in a French jail awaiting trial, asked to be allowed to join the French army. In June 1940 France fell almost as quickly as had Poland. Grynszpan was now hastily evacuated to the south. The French often allowed people in his situation to escape, but Grynszpan, fearing the Germans, wanted to remain in French captivity. He wandered through the south of France, the territories that came to be governed by the collaborationist Vichy regime, searching for a French prison that would take him. Meanwhile German diplomats filed a formal request for his extradition, which the new Vichy authorities quickly granted.
Grynszpan’s position, here as throughout his short life, was both glaringly unusual and yet highly representative. Vichy was eager to rid itself of foreign Jews. Grynszpan was in the worst possible legal position for a Jew in France, lacking both French citizenship and foreign citizenship, since Poland, according to the Germans, had ceased to exist as a state. After his deportation to Germany, where the Nazis failed to arrange a show trial, he was killed, although the precise circumstances of his death, according to Kirsch, are unknown.
We know a good deal, thanks to the careful chronicle of Annette Wieviorka and Michel Laffitte, of the fate of Polish and stateless Jews in Vichy France in general. They were rounded up, often with the help of the French police, dispatched to the holding camp at Drancy outside Paris, and deported to Auschwitz. So despite everything, Grynszpan was in one way typical. He belonged to the largest group of victims. Polish Jews were well over half of those murdered in the Holocaust overall. And Polish Jews were also the largest group of Holocaust victims in France itself. More Polish Jews residing in France were killed than were French Jews. In this sense, the Holocaust in France was a chapter of the Holocaust in Poland, and in the history of statelessness.
Hannah Arendt noticed in her wartime writings and then in her Origins of Totalitarianism the elemental connection between statelessness and mass murder. She observed from France the events of 1938, as Jews were forced back and forth in what Wasserstein calls “refugee tennis.” The denial of civil rights to Jews within states was one form of repression. The destruction of states themselves rendered Jews vulnerable as nothing else could. Hitler’s aspiration to rid the earth of Jews could only proceed to completion after the states themselves were destroyed. Where any vestige of sovereignty remained, as in Vichy France and Slovakia, Jewish policy could change and deportations could cease, as indeed happened in both places in 1943. Where sovereignty was completely removed, Jews had no chance, either at home or abroad. Polish Jews were at greater risk of death than anyone else in German-occupied Poland—but also in Vichy France. Arendt’s point was stronger than she realized herself.
To try to understand the life and death of European Jews in the 1930s and 1940s is, almost by definition, to engage with Arendt. Ward ends his book with a citation of The Origins of Totalitarianism, but Wasserstein misses few opportunities to disagree with her, and Kirsch energetically denies her strange interpretation of the Grynszpan case as a Gestapo conspiracy. And yet the Grynszpan case itself, when considered against the broader setting of the events of 1938, confirms Arendt’s broader point. Grynszpan was not, as the Nazis claimed, a representative of a “Jewish War” declared by a Jewish international conspiracy against Germany; but he and his family were typical victims of a particular tactic of the war against the Jews, the deprivation of citizenship. Several governments acted in the late 1930s to deny Jews citizenship or to destroy states where Jews were citizens. Nazi Germany combined the ambition of eliminating the Jews with the eradication of sovereignty that allowed that ambition to be realized.
*The Day the Holocaust Began: The Odyssey of Herschel Grynszpan (Praeger, 1990).
Cuba: the paradox of US foreign policy
People sit in a bus heading to see the ashes of Cuba’s leader Fidel Castro in Bartolome Maso, on the foothills of Sierra Maestra, Cuba, Friday, Dec. 2, 2016. Castro’s ashes are on a four-day journey across Cuba from Havana to their final resting place in the eastern city of Santiago. (AP Photo/Ricardo Mazalan)
Here is a convincing recent thinkpiece from Isabel Hilton, published in Prospect, on the paradoxical role implacable US opposition to Fidel Castro played in helping consolidate and later prop up his dictatorial regime.
“Mishandling the developing world nationalism that was such a pervasive phenomenon of the post-war world was one of the significant errors of US foreign policy. Washington viewed such movements exclusively through the distorting lens of superpower rivalry, casting Moscow as the arch manipulator of every local and regional movement that manifested antipathy to the United States or its interests. It led the US to argue the virtues of democracy while simultaneously propping up right-wing dictatorships from Chile to the Philippines.”
The paradox of US foreign policy on Cuba
It created what it aimed to destroy—a hostile, pro-Soviet, long-lived regime built around Fidel Castro
It was the early 1990s. I was in Havana, on one of several frustrating reporting trips. Cuba then was an unrewarding place for a journalist to visit: the Party elite was all but impossible to meet, and if you succeeded, they spoke in the dead language of the Communist bureaucrat. The result was a notebook full of unusable political sloganising.
To call Cuba’s press operation unhelpful would be to pay it a compliment: accepting a minder was a condition of a journalist’s visa, but minders would vanish for days on end, wasting precious reporting time. Calls to them went unanswered or unreturned, the Cuban press was brain-crushingly uninformative, and blanket state surveillance served as a strong disincentive for ordinary Cubans to speak to foreign journalists.
It was before everybody had mobile phones, so waiting for the minder’s call back entailed long hours in stale hotel rooms. I filled some of those hours watching Fidel Castro on a fuzzy black-and-white TV set. He was reading what seemed to be the world’s longest shopping list.
Cuba was hungry. The Soviet Union had collapsed and had taken with it a cozy trading system that allowed Castro to supply sugar at inflated prices in exchange for pretty much everything that Cuba needed at preferential rates. With that collapse, Cuba lost around 80 per cent of both imports and exports, and GDP dropped by more than 30 per cent. Fidel was explaining to his people why the shelves were empty and were likely to remain so for some time, a task that took several hours.
The shopping list included, importantly, oil and petrochemicals: bicycles were soon to make their appearance on the streets of Havana, posing a lethal hazard in a city in which the street lighting had gone dark. But Castro’s interminable speech was testament to Cuba’s comprehensive dependency on the Soviet bloc: it included everything from matches to motor parts, powdered milk to wheat, kerosene to medicine, soap to agricultural machinery. Cuba’s neighbourhood supermarket had closed and its credit was no good at any other.
Thus began the “special period” of material hardship that gave hope to optimists in Miami that this time, Fidel Castro’s regime would surely collapse. It was, after all, a moment—now rather distant—in which liberal democracies congratulated themselves on their victory in the long struggle against totalitarianism. Commentators were casting around the surviving regimes, speculating on who would be next.
For my friends in Cuba these were hard times. One couple whom I had got to know well had started out as supporters of the revolution and had lived scrupulously by its rules. As young people, they had volunteered in literacy programmes and contributed to the once vibrant cultural life of Havana. Neither had contacts with exiles across the water; none of their family members had sought to escape; they had put their shoulders to the wheel to build the new Cuba. Both were well respected in their professions—one a television director, the other a cultural critic and broadcaster—but they lived in a house so dilapidated that it seemed as though banging the front door might bring it crashing down.
Holding hard currency had been a crime, and although a few special dollar shops had opened up, the stigma of the Miami connection had persisted. Now, though, Cubans with relatives abroad and access to dollars were becoming a new elite. Those who had none were forced to rely on a meagre and intermittent state ration system. In Havana’s hotels, highly trained professionals jostled for shifts carrying luggage for tourists, in the hope of harvesting a few dollars in tips. Young Cuban women could be seen entwined around excited male tourists and public spaces became theatres of prostitution. Havana seemed to be spiralling back to pre-revolutionary days, when vice and organised crime ruled the city.
Yet Castro’s regime did not collapse. Then, as at every other point of crisis in his long reign, Fidel was saved by the unremitting hostility of the United States, giving him a ready scapegoat for Cuba’s domestic privations and an overwhelming argument for resistance. His popular appeal remained embedded in Cuba’s fierce nationalism and the more impatient the US calls for his departure, the firmer his grip.
Much ink has been spilled over the intriguing question of whether it might have been different—whether Fidel was a Communist from the beginning and the Cuban revolution always destined to turn to Moscow. Both Cuban revolutionaries and US Cold Warriors had reason to argue that he was, and, in the absence of available Cuban archives on the subject, the question may never be definitively settled. But it is intriguing to reflect on the possibility that, had the Eisenhower administration played its cards differently, it might have spared itself and subsequent administrations five decades of provocation from a Soviet ally 90 miles off its coast, whose example inspired anti-American movements around the world.
There is evidence in support of both positions: the young revolutionary Fidel Castro was not a member of the Cuban Communist Party and his 26 July Movement encompassed a range of political views, bound together by a hatred of President Batista. The Cuban Communist Party strongly disapproved of what it saw as Fidel’s bourgeois adventurism. He overthrew a dictator whom the US had supported for far too long, but perhaps a friendlier reception by Washington would have kept Fidel from turning to Moscow.
The US was in the grip of a ferocious anti-Communism that would produce, among other things, the McCarthy hearings, but it did not seem unremittingly opposed to the Cuban revolution at the beginning. Washington recognised the new order within days and regarded Fidel as more politically moderate than his brother Raul, a misapprehension that persisted for decades.
Richard Nixon, then Eisenhower’s vice-president and himself an enthusiastic Cold Warrior, met Castro in April 1959. He wrote in his memoirs that he had identified Castro in that meeting as a man the US should not do business with, but in this, as in other things, Nixon’s word is unreliable. A memo that he wrote at the time suggests a more sympathetic view.
Nixon was deeply impressed by Castro’s leadership qualities. “The one fact we can be sure of,” he wrote, “is that he has those indefinable qualities which make him a leader of men. Whatever we may think of him he is going to be a great factor in the development of Cuba and very possibly in Latin American affairs generally. He seems to be sincere; he is either incredibly naive about Communism or under Communist discipline—my guess is the former… But because he has the power to lead… we have no choice but at least to try to orient him in the right direction.”
But by March 1960, Eisenhower had decided to try to overthrow Fidel and economic sanctions were imposed. The following year, Fidel declared himself a Marxist-Leninist. The well-known catalogue of CIA-sponsored farce, from exploding cigars to the Bay of Pigs, duly followed. The fact that Castro was—or would become—a dictator was not the point: the US went on installing and supporting dictators throughout the period, so long as they were, to paraphrase the words of diplomat Jeane Kirkpatrick, some version of “our son-of-a-bitch.” But Castro’s strength lay in not being a US “son-of-a-bitch.” The ideological overlay always came a poor second to his nationalism.
How might it have looked from Castro’s perspective? Fidel clearly benefitted from US animosity as he consolidated his power, eliminating his rivals and growing into the mythologised liberator, a familiar Latin American character. Where does nationalism get its energy and legitimacy, after all, if not from the threats of the hegemon, and what greater excuse could there be for domestic repression than the permanent national emergency of US aggression?
Even if that attitude was not embedded from the start, Fidel had plenty of evidence that the US had little compunction about overthrowing elected governments that tried to assert national interests against its corporate interests. Fidel’s triumph came only five years after the US-sponsored coup against the elected leader of Guatemala, Jacobo Arbenz, who had had the temerity to propose that the American company United Fruit should pay tax. (Guatemala subsequently suffered three decades of military dictatorship and a protracted civil war that cost tens of thousands of lives.)
Nevertheless, had Washington’s door remained open, Cuba might have normalised. It was the most prosperous country in the region and the one with intimate ties to its giant neighbour. But as time went by, the US did Fidel the further service of focussing its animus increasingly on his person, even more than his professed ideology. In 1996, long after the collapse of the USSR, Bill Clinton signed the Cuban Liberty and Democratic Solidarity Act, barring any US president from lifting sanctions on Cuba as long as Fidel was in power. What greater endorsement could they have given him?
Mishandling the developing world nationalism that was such a pervasive phenomenon of the post-war world was one of the significant errors of US foreign policy. Washington viewed such movements exclusively through the distorting lens of superpower rivalry, casting Moscow as the arch manipulator of every local and regional movement that manifested antipathy to the United States or its interests. It led the US to argue the virtues of democracy while simultaneously propping up right-wing dictatorships from Chile to the Philippines.
In the name of fighting Communism, the US entered a series of largely fruitless wars against nationalist movements that peaked in the debacle of Vietnam, and tailed off into Ronald Reagan’s military adventures in Central America. The paradox in Cuba was that US policy created almost exactly what it aimed to destroy—an unremittingly hostile, pro-Soviet and spectacularly long-lived regime, built around the personality of Fidel. Every tightening of the screw on Cuba, every failed assassination plot or invasion attempt burnished Fidel’s credentials as a heroic David against a reactionary and bullying Goliath.
Gdansk: open to migration and diversity
Here’s a short video about an excellent initiative recently launched by a long-standing favourite city of mine. Niech żyje Gdańsk! (Long live Gdańsk!)
Sri Lanka: The Nation In Question
Here is my latest piece, a review of some recent books about Sri Lanka published in the Ceylon Today newspaper.
Mark Salter, 29/12/2017
2016 has been a good year for books about Sri Lanka. (Interest disclaimer: Hurst, the publisher in focus here, released my book on the country last year.) First up was A Long Watch: War, Captivity and Return in Sri Lanka by Commodore Ajith Boyagoda, as told to Sunila Galappatti, a writer and former Director of the Galle Literary Festival.
As befits a prisoner-of-war memoir, A Long Watch is couched in direct, lucid prose. It tells an extraordinary story. In September 1994, at the height of the civil war, Boyagoda was commanding one of the Sri Lankan Navy’s largest warships, the Sagrewardene. South of Mannar it came under attack by LTTE vessels and was eventually sunk. Unlike many of his crew, Boyagoda survived the assault, only to be pulled out of the sea with the other survivors and hauled away by LTTE cadres.
The highest-ranking officer ever captured by the Tigers, Boyagoda spent the next eight years in captivity, eventually being released in 2002 as part of a prisoner exchange deal. The majority of the book covers his long years of imprisonment. The picture that emerges is a complex one. Boyagoda makes no bones about his rejection of conventional ‘evil terrorist’ characterisations of the Tigers. He is also at pains to emphasise how fairly he was treated by his jailors, expresses sympathy for the injustices visited on the Tamil population, and even shows empathy for his captors, many of whom were, as he notes, forcibly conscripted by the Tigers in their youth.
As Galappatti has acknowledged elsewhere, telling a story as exceptional and as potentially charged as this one was never going to be an easy task. As a consequence, she sticks firmly to a first-person narrative, keeping herself and her opinions firmly in the background. Inevitably, the resulting account has proved controversial. In particular, following its publication, accusations were voiced in a number of quarters that, in a wartime version of ‘Stockholm Syndrome’, Boyagoda had sold out to – even spied for – the Tigers.
Certainly, the return to the South in 2002 did not prove easy for Boyagoda: eventually released from the Navy, he initially struggled to relate to his children and family, from whose lives he had been separated for so long. Overall, the account of Boyagoda’s wartime captivity is best read for what it is: the experiences of one man – albeit a particularly thoughtful, sensitive one – as opposed to what it is not: an objective, critical account of the Sri Lankan conflict.
Next came Madurika Rasaratnam’s Tamils and the Nation: India and Sri Lanka Compared. An altogether denser, more academic work of comparative political history, Rasaratnam’s book is a magisterial effort to address a central question: why did India’s and Sri Lanka’s post-independence evolution follow such hugely differing trajectories with respect to their Tamil populations? Why was it the case, for example, that by the late 1960s previously independence-oriented political parties such as the ADMK had fully embraced the notion of Tamil Nadu’s place within the wider Indian polity, whereas in Sri Lanka the Sinhala-dominated state’s continuing failure to accommodate Tamil aspirations eventually transformed political forces that had vocally advocated independence from Britain and national unity into advocates of Tamil Eelam – and eventually into those, such as the LTTE, with no qualms over the use of violence to achieve that goal?
Not that all was perfect on the western side of the Palk Straits. As Rasaratnam’s book makes clear, for all the Indian National Congress (INC)’s success in accommodating Tamil demands within a broader pan-Indian nationalist framework, the story with respect to another key minority – Muslims – was rather less rosy. In particular, in the lead up to independence Rasaratnam highlights growing antagonism between a nascent Hindu nationalist movement and its Muslim counterpart as a source of – arguably still unresolved – tension within Indian society.
Nonetheless, the overall picture of a nation-in-the-making struggling and in a number of important respects succeeding in accommodating cultural, social and ethno-religious differences is a fascinating one. Not least, as noted above, on account of the vital successes India later achieved with respect both to Tamils and other Southern Dravidian cultures.
What then, of Sri Lanka? Space doesn’t permit a full review of Rasaratnam’s account of Ceylon’s, and later Sri Lanka’s, dealings with its minority communities. At least the post-independence part of the story is well known to students of the civil war, notably pivotal events such as the 1956 Sinhala Only Act and the succession of ultimately failed pacts negotiated between Sinhalese and Tamil political leaders in the 1950s and 1960s.
What needs underscoring here are the lessons this story carries for the Sirisena Government in its efforts to move beyond the post-war morass it inherited from the Rajapaksas. First of these – underscored by Indian experience – is the central importance of a concerted effort to articulate and promote an inclusive national consciousness. An effort, moreover, that needs to go beyond simply devising a new constitutional framework (though it undoubtedly does need to include this).
In other words, while necessary for reaching a ‘political solution’ to the ethnic conflict, devising a new Constitution incorporating a revised framework of devolved governance embodying and even going beyond the 13th Amendment won’t do the trick by itself. What’s needed is a concerted attempt to frame a new national vision in which minorities – crucially, Tamils and Muslims – are given a central place in the country’s essential self-understanding and political practice.
‘There ain’t no black in the Union Jack’, as British-Jamaican poet Benjamin Zephaniah memorably pointed out. And the related question for Sri Lanka is this: can it put the colours excised by Sinha Le supporters back in the national flag in ways that will help make Tamils and Muslims as proud to be Sri Lankan as their Sinhalese compatriots in future?