In the second issue of Distilled, we explored the debate between the organizing principles of individualism and collectivism in politics, economics, and culture. It was a lively debate that took us from labour unions in the US and Canada, to EU obstinacy in landlocked Switzerland, to patchwork communities in the UAE, to Maoist rebels in India, and finally to China, where it seems the Chinese are trying to have both.
Yet, we began to ask ourselves – is there more to the debate than merely individualism and collectivism? What are the other principles that underlie people’s beliefs and actions? Where do these principles come from? And how do these principles bear relevance to some of the critical problems we discussed in our first issue, “The Global Crisis of Confidence”?
Why is it necessary for us to debate principles and where they come from?
We believe that in today’s world, every individual is bombarded with constant information. Most of it is noise, with too little signal.
Since our inception, we have believed that the only way to truly understand our complex, interconnected global society is by telling stories. Holistic narratives about who we are, where we’ve come from, and how we intend to reach our vision for the future. Together.
But these stories require more than just a recitation of facts and figures. They need to incorporate first-hand accounts and experiences by a wide diversity of people around the world. They need to appeal to both the emotional and rational core of people’s sense of self-identity and group belonging. They need to offer a comprehensible view of the past and present, and a hope for the future.
Before we get to these requirements, however, we first need to understand the values and principles people hold, where they come from and how they’re formed, and how they influence the way we behave. Without this debate, we are left to wallow in the malarial swamps of our own stagnation and decadence – ceding what gains we achieved through our struggle for democracy, liberty, and freedom to that ancient blind leviathan, Apathy.
Distilled aims to not only be a platform for this debate and an incubator for these stories, but also a community of people around the world who yearn to join the global discourse. We are, first and foremost, an empathetic community of people from various backgrounds who seek to make sense of this world, and to simultaneously make this world a better place for everyone.
In this issue, we start off with a critical look into how virtuous we are and whether or not we need a different set of norms and morals for transhumans. After this you can delve into the world of religion by asking how a religious community can be, for better and worse, an enormous influence on a person’s beliefs and sense of belonging, and whether this religious community ought to be responsible for the education of students in a secular democracy. Following up on the question of belief, we dare you to inquire about the relationship between God and man. Diverting to more profane topics, we subsequently explore the distinction between facts and values in American political debate and the principles that underlie technocracy in Europe, America, and the rest of the world. We also ask “whatever happened to brotherly love?” in international relations, how the international defense community might gain from acting on principles, and what role human rights can play in the future. And finally, as a discussion of our core values can’t be omitted from this issue, we critically investigate Christopher Hitchens’s principles and his strong adherence to his anti-totalitarian beliefs before questioning democracy itself and the principles that lend this ubiquitous form of government credence.
A vigorous discourse is the mother of change and invention. That is why we hope you enjoy engaging the debates in our third issue, whether it’s through the comments section, letters to the editor, our upcoming podcast, our blogs Distilled Daily and Distilled Extra, our Facebook and Twitter pages, or even through your own contribution to our next issue. And as always, we remind you to stake your ground and defend it. Wisely.
Even though human rights are an entirely artificial construct, a human rights approach can be a truly universal tool in addressing world problems on a practical level. To that specific end, these rights should not be seen as normative soft law but rather as instruments to enforce minimum standards. Potentially, they could rally people with different belief systems around a common goal. By doing so, they can guide our principles in a very different way than ideological or religious concepts.
The insufficiency of ideology and religion.
The question of where mankind’s morality and its ensuing values are derived from is a quintessential challenge that has troubled people since time immemorial. The ancients attributed our sense of morals and principles to either spiritual or divine origins or – at least since the dawn of Greek philosophy in the 6th century BC – framed them in a wide range of philosophical constructs. Unfortunately, neither had any empirical foundation.
Thanks to ongoing research in the neurosciences, and in particular discoveries about consciousness, we are starting to understand the true mechanisms governing our daily lives. For some, this is worrisome. Scientists are now contesting theories put forward by philosophers and ethicists that have been common knowledge for ages.
One such example is the pervasive idea of the "Cartesian Theater", a term coined by the philosopher and cognitive scientist Daniel Dennett to criticize it. The Cartesian Theater refers to the view that somewhere in the brain there is a hub of some kind, fed a continuous stream of sensory inputs, on the basis of which it makes ‘conscious’ decisions. In reality, our decisions emerge from countless feedback loops between various parts of the brain. A groundbreaking study by Benjamin Libet showed that the brain initiates movements, for example reaching for a cup of coffee, several hundred milliseconds before we become conscious of the intention to act. In short, there is no central decision-making device through which we make conscious decisions.
These discoveries have radical implications for the oft-nurtured idea of free will – or rather, the lack of it – in contemporary western thinking. Furthermore, they touch on domains such as economics (are we really rational beings, as mainstream economic theory assumes?) and law enforcement (are we truly responsible for our own actions?). Naturally, for practical purposes, we need to sideline these newfangled notions, lest we feel inclined to plunge society into the faceless depths of nihilism.
This conclusion is not a plea to return to fanciful divine world views. Far from it. The simple understanding that rules, norms, social relations, principles and values resulting from a particular view on morality are all very much artificial, opens up opportunities. In fact, it liberates us from dogma. Religious values are based on a belief system that claims universal truths of divine origin, though textual criticism has shown extensively that these narratives are all man-made in the course of many decades or centuries.
Along the same lines, dogmatic ideologies have been deconstructed. Time and again they have proven to explain a part of society, but never the whole. This can be illustrated by the plethora of political splinter groups in existence today, from liberal nationalists to social or green liberals to capitalist communists. No ideology has the power to convince all of us which morality, and which corresponding principles, we should adopt. From a postmodern point of view, one could argue that people simply pick and choose their world view depending on their experiences, their social and geographic environment, and so on. Does this mean we should abandon the search for a universal set of principles?
Human rights and their opponents.
People will be people, and such a quest will obviously never be given up. In that respect, probably the most remarkable experiment since the close of the Second World War is the development of human rights, epitomized in the Universal Declaration of Human Rights (UDHR), drafted and ratified at the United Nations General Assembly in 1948. The essential element here is the word "universal". The declaration claims to be global in nature, and thus applicable to everyone. On a very fundamental level, the principles enshrined in the document should guide our morality.
Saying that the universality of human rights is contested is of course stating the obvious. The controversy is particularly conspicuous in the United Nations General Assembly, probably the best forum to see it in action. Fierce critics can be found amongst those who believe that the state has priority over the individual. They usually invoke sovereignty as an argument to resist the universality of human rights. Since the declaration defends exactly those individual rights in the face of an abusive state, it is perhaps not surprising that the USSR, during the preliminary debates in the Commission on Human Rights in 1947-1948, called civil and political rights an "18th-century affair".
Then there are countries ruled by religious regimes that reject the Universal Declaration. To them, the Declaration is "a secular understanding of the Judeo-Christian tradition", as an Iranian UN delegate remarked in 1982, without any argument as to how that tradition was translated into the declaration. He went on to say that the document could not be implemented by Muslims without violating Islamic law, and that an instrument should be established that was "truly universally accepted".
Later, in 1990, the Organisation of Islamic Cooperation adopted the Cairo Declaration on Human Rights in Islam, in which human rights are subordinated to Sharia. And so, like their ideological counterparts, religious zealots stick to a discriminatory approach: because dogma doesn’t allow dissenting opinions, and since diversity is intentionally eliminated, their view of universality means that everyone should adopt the one and only creed. Knowing that an agreement on morality by everyone on planet earth is impossible, it is easy to see why that line of reasoning simply won’t offer any workable solution.
That is why the only alternative is to be found in human rights, which apply to everyone, regardless of race, creed, ideology, or other traits. Much like a social protection floor, they function as a threshold that cannot be crossed. For example, only by guaranteeing the right to religion can a person truly choose his or her own belief. In fact, religious people should fully support this rights-based approach. After all, it is in their own interest to be sure their religious practices will not be threatened by others.
In short: the problem here is not human rights themselves, but the ideological or religious dogma that makes political dialogue impossible. If taken up seriously, a human rights approach can turn out to be an instrument enabling us to tackle a wide range of problems on the world stage, regardless of the ideological barriers between the different political players in the arena. As the next section will point out, this can be of much help in a time when the debate is often "depoliticized".
The problem of depoliticization.
This becomes clear when looking at the debates regarding our current economic system. In spite of the financial and economic crisis that has been ravaging the West in particular, and as a result the whole world, the dominant system of capitalism is still firmly in place. Tighter regulation has been advocated and agreed upon, resulting for example in the Basel III Accord, a set of rules strengthening capital requirements, liquidity and leverage for banks. All in all though, these changes are modest and piecemeal.
Many, particularly among the (radical) left, bemoan the simple fact that this vast and complex system is not up for serious discussion in contemporary politics, media or among citizens. True enough, vested interests are too deeply entrenched, and what’s more: in the current paradigm we’re all very much part of the problem. Save for insane dictatorships like the North Korean pandemonium, or indigenous people clinging on to their habitat, globalization has made it almost impossible to unhinge oneself from this "machine".
This "depoliticization" of our economy has been popularized by the likes of Slavoj Žižek. While it is true that the ‘most dangerous philosopher in the West’ has a good number of flaws and gaps in his theory, he’s spot on about this issue. Not only because the economy is out of reach of true political debate, but also because this depoliticization distorts what drives and motivates us. It has rationalized and materialized the principles that govern our lives along a mostly utilitarian line.
Ostensibly, many important debates are still fought between left and right. Returning to the economic arena, a lot of ink is wasted on the debate between austerity and growth spending. In reality, the proponents on both sides still very much support a capitalist route out of this crisis. When push comes to shove, their common principles all lead to economic growth measured in GDP increases. They merely bicker about the path leading up to that holy grail.
This convergence of the classical left and right, or rather the tilting of the political center towards the right-wing, utilitarian side of the spectrum, gained even greater momentum with the advent of the ‘Third Way’. People like Anthony Giddens sought to reconcile right-wing and left-wing politics. However, in doing so, they neutralized the political debate. How, then, could a human rights based approach repair the political debate, rather than avoid it, as the Third Way proponents did?
How human rights can work.
It is important to remember that human rights are an instrument, and that adopting them is a choice. The only assumption made is that they should be able to apply to anyone. They are especially useful because they are operational. Take the problem of world hunger, for instance. Everyone will agree that everybody should have enough to eat. What people disagree on is which economic model should be used to achieve it. At some point one theory will dominate the debate, after which the problem is often depoliticized to a certain extent. If that happens and hunger persists, it becomes awfully difficult to contest the approach taken by policy makers.
With a rights based approach on the other hand, the issue cannot be depoliticized. Olivier De Schutter, UN special rapporteur on the right to food explains it in very simple terms: "The world's 1 billion hungry people do not deserve charity: they have a human right to adequate food, and governments have corresponding duties, which are enshrined in international human rights law. Governments that are serious about making progress on development objectives should be asked to adopt a legislative framework for the realisation of economic and social rights, such as the right to food or the right to health care".
With such a legislative framework in place, "accountability mechanisms should be established, allowing victims to hold governments responsible for their failure to take action. This removes the stigma of charity, and it is empowering for victims. Instead of being helped because they have unsatisfied needs, they are granted remedies because their rights are being violated". It is important to note that it’s still up to society to debate on how exactly these rights should be met. With a rights based approach however, the debate can no longer be avoided, since people can effectively take matters into their own hands.
To conclude, a human rights approach can help us define our principles more clearly and put them up for debate on a regular basis. Human rights are often portrayed as ‘Western’, but even if they originated in the West, they are not Western in nature. They are merely instruments, entirely devoid of dogma, with the only assumption being that everyone should be able to enjoy them without discrimination. The toolbox can be used to guarantee basic levels of entitlements and freedoms, individual as well as collective, and it still leaves ample room for political debate. In fact, it might even revive it.
I must make one thing clear: Christopher Hitchens is one of my childhood heroes.
My first encounter with “The Hitch” was via YouTube. As a disgruntled, unemployed, undirected 18-year-old, I stumbled upon a short excerpt of a debate between him and one of his numerous opponents on the subject of religion. I was immediately impressed by his oratory skill and his use of the English language – sophisticated, yet clear and accessible. This five-minute clip led me to purchase his book “god is not great: How Religion Poisons Everything”, sparking my desire to continue reading, questioning and critiquing on a daily basis. I see his work and voice as an inspiration, one which led me back into academia and my fortuitous position at Cambridge University today.
I come back to the term “childhood” hero, as this is a very important distinction. When I first came into contact with his work, I had the mind of a child. I was willing to take things at face value, not question or probe any deeper than what made me feel comfortable. An example would be my subscription to what is essentially his reductionist view on theism, something that I have since honed and interpreted into my own views on the eternal ontological debate.
The point here is that whilst I have since come to disagree with many of Hitchens’s views on a variety of matters, what I cannot question are his underlying principles.
His death on December 15th 2011 is therefore a day I will always remember. I experienced that odd sorrow of having lost someone I felt I knew, yet never actually met. Since his death, as in life, Hitchens has been the target of criticism from a variety of fronts. Richard Seymour’s “Unhitched: The Trial of Christopher Hitchens” is perhaps one of the more vile and contemptible attempts at vilifying and slandering one of the great public intellectuals of our generation. In his book, Seymour accuses Hitchens of plagiarism and opportunism. Whilst I do not have the scope here to review Seymour’s work, James Kirchick’s piece published in the Daily Beast achieves more than I ever could.
Hitchens was, at his core, an anti-totalitarian. His mission was the defence of Enlightenment values: freedom of speech, freedom of enquiry, and the refusal to be subjected to the desires and whims of any unelected, dictatorial overlord. This core principle resonates throughout his work. At no point does he find compromise or favour with any scenario where these values are not strictly adhered to. His deterministic style, however, often led him to reductionist conclusions.
The result was a redundant labelling of his place on the political spectrum, especially in the later years of his career: neo-conservative, mouthpiece for the Bush-Blair doctrine. His principles, however, meant that he had no fixed place on the political spectrum. He was simply a contrarian who defended the right of all peoples to advocate their wishes and opinions in a liberal-democratic fashion.
The fatwa issued by Ayatollah Khomeini against Salman Rushdie in 1989, over the publication of The Satanic Verses, and Hitchens’s response to it, is one of the more notable examples of his principles in action. The idea that someone could be sentenced to death for a work of fiction is abhorrent, and Hitchens rightly condemned it as such. Let us not forget that others involved in the novel’s translation, publication and promotion were indeed killed. Hitchens, in a 2009 article, neatly summarises his position and reflects on the consequences of the weak response to the fatwa:
There is now a hidden partner in our cultural and academic and publishing and broadcasting world: a shadowy figure that has, uninvited, drawn up a chair to the table. He never speaks. He doesn’t have to. But he is very well understood.
The legacy of the attack on Rushdie has been that Anglo-American media and publishers are, without doubt, more wary or indeed fearful of causing offence.
Should we however be afraid of offending people?
Being offended does not stop anyone from protesting or forming a counter argument. But inciting violence? Or claiming someone should be killed? These acts are contemptible.
At the time of the declaration from the Ayatollah, Hitchens recalled a Muslim interviewer asking him “is nothing sacred?” to which he replied:
No, nothing is sacred…the only thing that should be upheld at all costs and without qualification is the right of free expression, because if that goes, then so do all other claims of right as well.
What is ironic about this claim is that it could be read as saying that free speech is, in fact, sacred. It is a clear example of how Hitchens’s adherence to the principle of defending Enlightenment values causes him to reduce what is obviously a complex issue (if placed in circumstances different from Rushdie’s).
We find this unwavering commitment to anti-totalitarian principles in his work and his opinions on religion. This field is where Hitchens arguably made his name in the wider public sphere.
What distinguished Hitchens’s criticism from that of the likes of Richard Dawkins, for instance, was that he did not necessarily belittle or patronise the beliefs or faiths of individuals. What he attacked were totalitarian institutions, which glorified an ever-watching, ever-present authoritarian figurehead. This is why he deemed himself an “anti-theist”. It was not enough to say that religion follows a false mandate; it is also, in itself, a frightening idea:
Even the most human and compassionate of the monotheisms and polytheisms are complicit in this quiet and irrational authoritarianism: they proclaim us in Fulke Greville’s unforgettable line, ‘Created sick – Commanded to be well’. And there are totalitarian insinuations to back this up if its appeal should fail. Christians, for example, declare me redeemed by a human sacrifice, which occurred thousands of years before I was born. I didn’t ask for it, and would willingly have foregone it, but there it is: I’m claimed and saved whether I like it or not. And if I refuse the unsolicited gift? Well, there are still some vague mutterings about an eternity of torment for my ingratitude. This is somewhat worse than a Big Brother state, because there could be no hope of its eventually passing away.
Letters to a Young Contrarian, 2001
Hitchens’s main gripe with religion here, then, is not that he views it as necessarily barbaric or redundant, as others in the atheist camp would argue, but that, in essence, it condemns its followers to a life of servitude and subjugation. His anger, or critique, is directed towards those who willingly subscribe to this state of existence. As he states in Letters, “What matters about any individual is not what he thinks, but how he thinks”.
What must be considered, however, is that Letters was published in 2001, well before Hitchens’s public notoriety and YouTube following fully developed. His debates and voice clips are, indeed, what he is more regularly quoted on. Hitchens no doubt fell in love with the fame and adulation the Internet brought him, and these clips show Hitchens at his most honest or principle-driven, as there is no filter or time to edit his responses. Yet the author Martin Amis attests to his ability to produce written pieces with minimal editing or proofing time. It is his written work we should turn to if we really wish to understand how Hitchens’s mind truly worked.
We turn now to the Iraq war. This was the episode through which Hitchens lost many friends and allies, and he remained entrenched in his convictions until his death. Hitchens’s main target and justification for intervention in Iraq was Saddam Hussein and, to use Hitchens’s own words, his “theocratic crime family”. The atrocities committed by Hussein against the Kurdish minority and Shi’ite majority were something he found inexcusable, and he believed these heinous acts validated direct action. The removal of a despot and totalitarian, regardless of the consequences, was the correct course of action. He accurately points out that involvement in Iraq had been ongoing long before 2003, with the American CIA backing the 1963 coup that first brought the Ba’ath party, and eventually Hussein, to power. The creation of a monster required direct action to atone for past sins.
There is a plain motivation behind Hitchens’s response to Iraq. It stems from his revulsion at jihadism and from the persecution of his close friend Salman Rushdie by a similarly despotic figure. He developed an affinity for the Kurdish people in particular, and their misery at the hands of Hussein troubled him deeply. We see, however, a reductionist outcome, similar to his views on religion. The consequences of the Iraq war are far-reaching, and we are left in 2013 with a nation that is fractured and chaotic. The view that the removal of one despot justifies the social and economic upheaval of a nation is far too short-sighted.
Ultimately, Hitchens held convictions that were to him beyond critique or compromise. His belief in enlightenment values drove his desire to support any cause that promoted freedom of speech and enquiry. This also came at a cost.
So staunch were his principles that they often led him to narrow and reductionist conclusions. As long as what he loved and cherished was defended and promoted, the consequences of the actions used to achieve these aims were a necessary by-product. Hitchens should, in one sense, be praised for his consistency and honesty when it came to his principles. Conviction is something we no longer see from the majority of western politicians, and that is a sad state of affairs. We instead have a meek political class who shift into whatever form they deem necessary to gain popular approval.
If there is one sentiment that I take from all of Hitch’s work it is this: “Take the risk of thinking for yourself, much more happiness, truth, beauty, and wisdom will come to you that way”.
Washington is on fire. Potomac parlours and palaces are ablaze with talk of the impending destruction of civility and credit ratings.
Cries ring out from every pulpit, “If we don’t fix the deficit, we’ll never get out of this recession!” “If we don’t get our fiscal house in order, we’ll be repo’ed by China!”
Fortunately, we have hope for salvation at the hands of a savior. Or rather, several saviors – consisting of the smartest men and women (but mostly men) from the highest echelons of academia, media, business, and politics. Under rather ominous names such as “The Gang of Ten” and “The Silent Centrist Majority”, these wealthy social engineers have the solution for our ills, revealed to them from atop Mount Aspen and Davos upon digital tablets by the gods of economics and mathematics and management science.
The solution? Austerity and painful sacrifices. Sequestration and entitlement cuts. Alas, if only the common rabble would put aside their bickering and come together in the spirit of bipartisanship and unity. If only we could put aside the electorate and strike a “Grand Bargain” in the spirit of the founding fathers.
Meanwhile, the UK has already embarked on its own disastrous expedition into the deserts of austerity. Granted, it hasn’t been 40 years (yet) since the coalition government came into power back in May 2010, but the UK still has no sight of the promised land. Sure, the London Olympics were a temporary boost, but the growth that was meant to come from greater confidence in the bond market has yet to arrive. In fact, there were several quarters where GDP shrank rather than stagnated. What about the debt? Due to the prolonged recession and the decline in government spending and investment, it seems likely that the fall in tax revenue will far exceed the savings from curtailed borrowing, worsening the deficit and debt.
What is the coalition’s response? In quarters where GDP grew, coalition leaders proclaimed that austerity is working and we need more of it for growth. In quarters where GDP shrank: austerity is working and we need more of it for growth. “Just trust us,” said the wise toffs of the Home Counties. And many of us did.
Unfortunately, the situation looks far worse in the eurozone. Cyprus has just rejected a haircut on all bank deposits proposed by the Troika (the European Commission, the International Monetary Fund, and the European Central Bank) – a proposal meant to protect Cyprus’s status as a Russian money haven while resolving Cyprus’s outstanding debt on the backs of all of its depositors.
Perhaps most tellingly, the Italian public has rejected Prime Minister Mario Monti, an unelected technocrat asked to form a government following the departure of Silvio Berlusconi. In their most recent election, the Italian people handed significant control over the reins of power to a protest party, the Five Star Movement, led by the comedian Beppe Grillo.
The technocrats have long been ascendant. Since the era of the Reagan-Thatcher dream team, technocrats have insisted that they be allowed to place their hands on the tiller of the ship of state. Yet it has become increasingly clear that technocracy has foundered, and that a populist mutiny against the technocrats is rising.
What exactly is technocracy? Technocracy is a system of governance in which technical experts make and implement policy decisions. How are these policy decisions determined?
Like engineers constructing the perfect traffic circle, social scientists have devised the best means of organizing society. In practice, these policy proposals are posited under the aegis of neoclassical economics and liberalism.
Liberalism and neoclassical economics find their foundations in some widely esteemed (and predominantly Anglo-Saxon) political philosophers and economists, such as Adam Smith, Montesquieu, John Locke, Thomas Jefferson, John Stuart Mill, and even John Maynard Keynes. Of course, this list is incomplete, as we can only manage a tip of the hat to the great role that general Enlightenment values and a belief in rational scientific progress have played in fueling the technocratic project.
The more virulent forms of radical liberal ideology can be found in the work of Austrians (or Austro-Hungarians to be more precise) such as Ludwig von Mises, Friedrich Hayek, and Joseph Schumpeter. To that we can add an American, Milton Friedman, to round out the European bunch.
The rapid global acceptance of their approach to public policy was not entirely rooted in their standing in academia – Mises and Hayek were hardly academic power brokers during most of their careers. Rather, this acceptance came about through a network of scholars and academics, public intellectuals, politicians and government officials, journalists and television personalities, vocal local and state groups, and others who held fast to their ideals and repeated them often.
Importantly, technocracy is founded on a belief in meritocracy: that only the best solutions from the most qualified people survive to implementation, and that solutions and people only rise to the top if they deserve to. If your idea doesn’t work, or if there is someone who is better than you, then you slip back down the ladder. Consequently, wealth naturally gravitates to people in accordance with their standing on that meritocratic ladder.
What’s not to like about meritocracy? Wealth and riches are no longer the domain of kings and plutocrats, but open to anyone who works hard and plays by the rules. Alas, meritocracy contains the seeds of its own destruction, even when it seems that everything is working just right. Without constant vigilance, every meritocracy descends into an oligarchy (“the iron law of oligarchy”, as fatalistically coined by the German sociologist Robert Michels).
But these are, many would argue, issues of process and implementation. The global crisis of confidence is an indication that people are not exactly upset that technocracy isn’t technically working. In fact, it’s working exactly as expected in many countries, with lower projected deficits and an uptick in stock indices. People are upset because they aren’t sure what these policies intend to accomplish. Both citizens and politicians conflate their problems, such as unemployment, with the policy solutions proposed by the technocrats without having a clear idea of how the policy prescriptions cure their particular diseases.
That is why it is essential to view technocracy not as a set of mathematical solutions, but as a complete, internally consistent ideology.
What is an ideology? Jonathan Haidt, professor and author of “The Righteous Mind”, describes ideology as “a set of beliefs about the proper order of society and how it can be achieved.”
Technocracy certainly proposes several ways we can achieve some proper order of society. Deregulation and greater competition. Austerity. Lower taxes. Freedom of movement for labor and capital. The private sale of state assets. The promotion of open democratic governments. There is nothing inherently objectionable about any of these policy prescriptions. But they raise the question:
To what end are these means? What is the set of beliefs about the proper order of society that technocrats advocate? And even if we reached a consensus on these beliefs, are the technocrats’ policy prescriptions the proper way to achieve them?
Many technocrats would prefer to skirt these questions. It’s the end of history, they would say. The great debates have all been decided (coincidentally in their favor). Liberal democracy and capitalism are the crowned victors in the great Hegelian dialectic. We already know that the end goal of civilization is a more refined version of the American ideal; now we just need to get there.
But perhaps it’s not so simple. Given that there are a lot of assumed principles behind the technocratic vision, what exactly are these moral foundations?
Jonathan Haidt helpfully breaks down moral foundations into six categories:
Care/Harm: we want to care for people in our group, whatever that may be.
Fairness/Cheating: we want justice and fairness.
Loyalty/Betrayal: we respect loyalty and are hurt when we are betrayed.
Authority/Subversion: we have people and institutions that we respect, and we are offended when these people or institutions are subverted.
Sanctity/Degradation: we have things that we consider sacred or incorruptible, and become upset when these things are defiled (in ways that we may not even initially understand).
Liberty/Oppression: we seek to be free, and resist being consciously shackled.
Political ideologies (and religions!) seek to answer the various questions posed by each of these foundations. For example:
What or who do you think is worth caring for? What do you consider fair (in your dealings at work, school, home, etc.)? Should we be more loyal to our country, our ethnic or religious compatriots, or to our family and friends? Where do we draw the line between respecting an institution’s decisions and actively questioning its every activity? Should we allow for the bulldozing of graveyards, pass a law that limits the construction of minarets, or refuse public funding to artists that desecrate religious symbols? What liberties and freedoms do we allow, and how do we determine when one person’s liberty is obstructing the liberty of another?
Technocracy has answers for all these questions and more. For example, neoclassical economists greatly value the liberty of individuals to pursue unregulated economic activity, while liberals greatly value the ability of individuals to form political groups and express themselves freely. Unfortunately, we rarely consider these value judgments in our current public debate.
There is nothing inherently wrong with these principles! But in practice, these value judgments can clearly cause harm when not regulated. Other times, the value judgments made by technocrats conflict with what a majority of the population considers morally just. For example, international technocracy rarely distinguishes between cheap, unregulated labor in China or India and labor in the Western industrialized countries. But you’ll find that even in today’s globalized world, people still care very deeply for their country and countrymen, and for the guarantee of humane working conditions.
Most distressingly, technocrats rely on the authority invested in their institutions and leaders to shut down debate. Combining this sense of authority vested in individuals with the relatively little trust that remains for collective institutions has created a situation in the USA where voters return their representatives to Congress at an alarming rate, while Congress itself is less popular than Donald Trump, head lice, and colonoscopies.
Technocracy is fundamentally radical and voters in Western democracies tend to shy away from radical change and revolution. So why is it so often associated with moderate conservatism? Like dredging bits and pieces from the Titanic, salvaging conservatism requires a prolonged campaign of deep and arduous exploration. Generally speaking, conservatives believe that gradual or no change, within the context of existing institutions, is far better than revolutionary upheaval.
What technocrats have posited is that the natural buffer to radical change is the unregulated free market. The power concentrated in the state would naturally be countered by the individual liberties of its citizens as expressed in their free exchange of goods and services in the marketplace. Were this dream utopia only true! What we have found out, repeatedly since the mid-1970s, is that rigid adherence to unregulated free markets leads to massive instability.
These policies also enrich the upper classes and the educated elite. Therefore, the natural base of democratic support for these policies is very narrow, hence why concealing and reframing the debate is so vital to technocrats.
The technocrats have an advantage in the public discourse. They are based, ostensibly, in science and numbers. They appeal to rationality. And when something doesn’t work, it’s the implementation that is at fault, and not the underlying principles. Technocracy cannot fail, it can only be failed. Hence we are left with a growing frustration with every failure, and an unvarying set of tools to solve every problem we have.
Once we transition the debate away from technical solutions to the moral foundations on which those solutions are predicated, then we can begin to have a truly political debate. What exactly is a political debate? It is a discussion that addresses who we are as a group, the values we collectively hold as a society, and how we can achieve a vision of society that accords with those values.
Why do we even need to discuss these questions? People have legitimate points of disagreement with one another, and there isn’t a clear rubric that determines who is right or wrong. That’s good! Disagreement is how we expose our assumptions, learn from others, and together evolve to a better understanding of ourselves and each other.
Were it not for moral considerations, it would seem just as sensible to us to employ children and foster slavery as it did 200 years ago. A moral dialogue is the foundation of a good society, and in a time of greater diversity, migration, and globalization, it is essential at all levels of global society. What do we want as a society and what are the necessary sacrifices and opportunity costs we are willing to accept in order to achieve that vision of the future?
A common telling of the foundation of moral philosophy and even metaphysics is that these fields originally started as offshoots of these fundamental political questions. Only by discussing with others our present state of affairs, and how we must come to terms with living together, do we begin to truly inquire into everything from what is right or wrong to the nature of existence itself. As historian Tony Judt once said, “A democracy of permanent consensus will not long remain a democracy.”
Simply put, challenging the moral foundations of technocracy (or any other set of policy prescriptions) is not just a means to develop a better society, but necessary to satisfy our natural human curiosity and creative capacities. Without an opportunity to answer these questions, people are left with a vacuum of what to believe in and hope for. Trust in institutions crumbles. Cooperation between individuals and groups is strained, even beyond the point where capitalism can function effectively.
So returning to the USA, the clear moral alternative to cutting entitlements and starving senior citizens is to simply borrow more money at the incredibly favorable interest rates Americans currently enjoy to invest in infrastructure, education, and research and to stimulate aggregate demand. Concurrently, the USA should be examining the role of money in its political systems and finding ways to regulate its political institutions (such as Congressional redistricting) in order to ensure that Americans feel some sense of ownership over their own democracy.
In the UK, there ought to be a return of some assets and functions handed over to the private sector or quangos back to the state. Rather than rely on a flawed, utopian conception of a “big society”, it’s far better to return to the post-war social democratic model that promoted a sense of national unity and efficient economy of scale in the distribution of services.
In Europe, Cyprus ought to reject the euro and its exposure to Russian depositors as a means of reclaiming its sovereignty – a far better option than allowing wealthy foreigners to demolish the savings and pensions of mostly lower and middle class Cypriots. Italy and Greece should continue rejecting the imposition of deleterious socialized solutions to crimes committed by wealthy bankers and investors in the private sector. Their integration within the EU gives them the leverage to force the European Central Bank to do what it should have done some time ago: print money to pay off bonds, even if that means bond-holders will take a haircut due to inflation.
Globally, communities of various sizes and shapes should examine what policy solutions truly work for them on both a moral and pragmatic basis. Governments should seek out solutions that allow them to maintain the trust and cooperation of their own citizens for the short to medium term.
But if we are rejecting technocracy, then what set of policies and principles do we turn to? Going back to the way things were, to the early twentieth-century age of war and statism, is no good. Surely liberal democracy plus capitalism is the best we can do. How could we possibly do better?
Unfortunately, we’ve avoided this discussion for so long, it’s difficult to propose a clear alternative to the current system for our present troubles. There are, however, some directions in which we can go. For instance, several of the principles assumed by technocracy, with minor modification, can be used to buttress more social democratic policies. By realizing that individual liberties and freedom are enhanced by social security and stability, we can then focus on how the state can ensure an adequate safety net for capitalism to be even more robust and dynamic.
Or we can question just how much individualism people truly desire. As Jonathan Haidt points out in “The Righteous Mind”, we are perhaps not so much selfish as we are “group-ish”. We are social creatures by birth, and as such we seek out whatever is in the best interest of our group – whether that’s our spouse, our family, our friends, our country, our religious comrades, our fellow language speakers, our fellow members of X ethnicity, our fellow Klingons and Jedis, our fellow philatelists and numismatists and other hobbyists, etc. We are still, of course, constrained by the powerful forces of geography, language, and nationhood, but increasingly these forces must be reconciled with other powerful affiliations borne of globalization.
This debate is essential for everyone, and not just those of us in the industrialised West. As Western institutions and culture spread throughout the world, so do their accompanying problems. If we don’t rectify the flaws in our own system now, then we risk spreading a malignant cancer to the rest of the global body politic.
Our greatest tool for exploring alternate policy solutions is principled discourse. Only then can we reconcile populist frustrations with policy solutions to our collective problems. Otherwise, as the late Vaclav Havel once remarked, people will continue to “shrug off anything that goes beyond their everyday, routine concern for their own livelihood; they seek ways of escape; they succumb to apathy, to indifference toward suprapersonal values and their fellow men, to spiritual passivity and depression”.
Josh Trank and Max Landis’ 2012 film Chronicle confronts its audience with what is ostensibly newly broken ground in the superhero genre. But scrape even slightly at this outer facade and what one begins to see is a detailed, yet unvocalised, discussion of the ethical and moral standpoints occupied by trans-human characters within a definitively human social structure.
Perhaps the first point of interest this film offers — to the viewer who recognises the theme — is the distinction it draws between being trans-human and being post-human; an issue that comes down to self-perception. Perception of oneself as trans-human — even without that label — causes one to occupy a zone on a spectrum, with extremes we can label as "human" (to be read as "a person with the same general abilities and limitations typical to homo sapiens") and "post-human" (to be read as "a being not limited by those factors that hold back a human, and/or with increased physical or mental capacities"). Trans-human, by definition, is the central point of this spectrum.
Though this distinction plays a role in the enquiry, it is not the main issue this paper seeks to address. Instead, what appears to be the most significant theme in this film is the discussion of how an insular minority of trans-human characters propose, attempt to follow, and eventually break a series of morals and laws that are based on distinctly humanist concerns.
What causes initial adherence to these rules, and how are they subverted or complied with? What triggers the need for them in the first place, and what are the outcomes?
Whilst a delineation of how a character becomes trans-human may be required for a technological transition, the process of becoming trans-human in Chronicle is left unanswered, short of the film’s three main characters — Andrew, Matt, and Steve — coming into contact with an inexplicable artefact. So it is not this paper’s intention to question how or why this transformation occurs in our characters.
It is sufficient to say that the characters can be classified as trans-human due to their development of, as the writers put it, "telekinetic" abilities that, even at their most basic, advance them beyond the capabilities of a normative human. These physical and mental advantages aside, within the film’s universe our label of "trans-human" can be used as a matter-of-fact definition. The film’s early narrative documents the increasing capabilities of our three main characters, showing them develop from the ability to stop and hold a moving baseball (0:14:22) to the ability to levitate and fly (0:29:39). That these powers are increasing implies that they could (feasibly) continue to develop, and so our trans-human label is applied on the understanding that this is a transitional state between human and something potentially unrecognisable.
With the onset of their powers, our characters can be resolutely classified as trans-human and yet, because of its transitional nature, this state of trans-humanism cannot be maintained for any extended period of time. It is here that a discussion on morality and ethics must begin.
It is through moral choices and one’s self-perception that movement on this spectrum can occur. After Andrew’s actions endanger the life of a member of the public (0:26:00), Matt suggests that our characters must conform to a set of rules that, by definition, limit the use of their powers: "Rule number one. No using it [the powers] on living things. Rule number two, you can’t use it when you’re angry." (0:28:11)
Without any modification these two rules hold true to normative humanist laws which are, ostensibly, based upon commonly held morals. Yet even with this, Matt modifies the use of "living things" by prefacing the statement saying: "You put a guy in the hospital, how do you feel about that? You hurt somebody!" (0:28:03)
Here, the writer’s use of language shows that the underlying concern here is not with the act of violence itself, but the act committed against a human. The implication is that "somebody" means human, different to an animal or to "living things" in general. A special kind of reverence is held in normative humanist morality and ethics for the human subject. The imposition of rules — applicable only to trans-humans — that hold special concern for the human subject is a conscious choice by Matt, and latterly Steve and Andrew, to shift towards the human end of what we can call the "trans-humanist pendulum".
Matt’s decision and Steve and Andrew’s compliance serves to anchor them at the human end of this spectrum. The best term for this role is the Extropian term "Human Plus" as it cannot be denied that a divide does exist between our characters and normative humans, but the training the characters undergo shows these advantages can be used without compromising what one might traditionally call one’s "humanity". Whilst this classification of our trans-human characters as Human Plus puts us in a comfortable position, it is the compliance with this humanist moral framework that is our area of interest and which causes the problems within the film.
One thing the film makes abundantly clear is the conflict the trans-human encounters when functioning as a minority within human society. The major conflict we see, embodied in Andrew, is that of facing distinctly human problems while being trans-human. The conflict gives rise to a choice. Compliance with internally imposed — though external by definition — humanist laws leaves the trans-human just as vulnerable as they were before, the rules negating their advantage. Disregarding the laws allows them their ‘natural’ advantage, but removes them from what it was that kept them ‘human’.
This realisation simmers under the surface of the film and its first spike causes the implementation of the rules. The turning point comes with a confrontation between Andrew and his abusive father (0:50:34) and the rhetoric of the film shifts from one of humanist idealism — that this form of human advancement can be used to help people, used to gain enlightenment, or bettering the self — to that of evolutionary superiority. Andrew’s threat to his father sums up this early part of the shift: "I could crush you, do you know that?!" (0:51:41)
Through Andrew we see a shift in the self-perception of the trans-human. If the trans-human ceases to see themselves as inherently human then conformity to normative humanistic ethical structures ceases to be either desirable or necessary; compliance with these ethical and moral structures allows one to retain what could be traditionally considered their "humanity". But if one believes — as Andrew does after this turning point — that they have transcended humanity, then these rules are a hindrance, a relic of the past no longer applicable, and the transition to the post-human is complete.
It almost goes without saying that the terms "trans-human" and "post-human" are used here not with a technological bias in mind, but in an evolutionary sense. In defining the post-human, N. Katherine Hayles advances an initial definition drawn solely from the technological standpoint: "The post-human view privileges informational pattern over material instantiation."
Despite the value of Hayles’ technological standpoint, our evolutionary one deals with the genetic and wider biological changes that the traditional "human" undergoes. And yet there is one point at which Hayles’ standpoint and our evolutionary view converge:
"People become post-human because they think they are post-human." This buys into the assumption that the aforementioned spectrum between human and post-human exists — at least potentially and abstractly so — and that the post-human is achieved through the specific choices the trans-human makes in regard to evolutionary superiority or adherence to humanistic ethical codes. Andrew becomes post-human because he regards himself so. It is at this stage that Andrew’s dialogue and perspective begins to change from the pursuit of enlightenment (0:36:05) to discussions about the evolutionary role of the apex predator within an ecosystem (0:57:29).
By implicitly defining himself as the apex predator alongside a program of violence against others, Andrew has fully identified with the post-human end of our spectrum.
Interestingly, and spurred on by the images evoked by the term "predator", the film begins to portray Andrew animalistically. We see him hunt and discuss his spoils covered in blood — though he retains a part of his humanity when he finds himself disgusted by his condition (0:56:50). We see him eventually lose all language faculties, beginning to roar near to the close of the film (1:14:26). Yet Andrew’s violence towards those he sees as an inferior species is almost ethically justifiable.
By no longer viewing himself in the Human Plus classification we established earlier, Andrew ceases to be covered by Ronald L. Sandler’s summation of traditional ethical concerns in relation to perceived species boundaries. Sandler states that: "1) Members of the species Homo Sapiens have a particular moral status by virtue of being a member of the species; 2) human moral agents have a special moral obligation to other members of the species Homo sapiens by virtue of their being conspecifics; and 3) members of the species Homo sapiens have special moral obligations to the continuance or furtherance of Homo sapiens by virtue of its being their species."
Even a glancing look will show that the subject that considers themselves post-human — either explicitly or otherwise — ceases to be of the species Homo Sapiens, and so the moral and ethical concerns that maintain the trans-human’s traditional humanity simply cease to apply. This realisation warrants, even justifies, Andrew’s endangerment of the lives of others, his lack of concern for the safety of others and, from a humanist perspective, his eventual death at the hands of the film’s remaining Human Plus character — the spearing, like an animal hunted, that ends Andrew’s life functioning as the final, poignant image of his transcendence and subsequent loss of humanity (1:14:52).
In a resolutely human world, the portrayal of the post-human is going to be one that resolves in the eventual defeat of this new apex predator. Of the three originally trans-human characters, the only survivor is Matt, in his role as Human Plus. The narrative backhandedly rewards him for his compliance with humanist morals and ethics, but also ensures that the second his difference is revealed he must impose exile on himself, despite his positive contribution.
We see, then, that whilst it is possible for the trans-human to function within a normative human society if they regard themselves as still bound to a humanist moral framework, it is impossible for them to continue to do so if they reveal themselves to be anything but normative. The post-human falls through non-compliance with these moral standards, and so is hunted down and ostracised or killed.
It follows, then, that in a society where trans-humanism — be it evolutionary, technological or intellectual — is fully realisable, humanist moral and ethical codes will not suffice if trans-humans, regardless of how they self-regard, are to be fully accepted within a mixed-ability society.
Whilst our thinking must continue to work towards how we realise the trans-human goal, equal thought must go into how we reconcile the possibility of two divergent streams of humanity. Sandler discusses the issues surrounding Homo Sapiens and the distinctive moral status that our species affords to itself. He argues that some things we considered inherently human, such as rationality, future planning, "interests, agency, and relationships", are shared by other terrestrial agents, and so Homo Sapiens should not be afforded a moral status distinct from some other terrestrial beings.
Our concern, using the examples illustrated above, should be for both those that remain Homo Sapiens and those that choose to become trans- or post-human. Institutions that have taken this possibility into account, in terms both noble and necessarily self-serving, have already sprung up.
The Non-Human Rights Project has grown out of the desire to, in their words, "change the common law status of at least some nonhuman animals from mere ‘things’, which lack the capacity to possess any legal right, to ‘persons’, who possess such fundamental rights as bodily integrity and bodily liberty, and those other legal rights to which evolving standards of morality, scientific discovery, and human experience entitle them". An awareness of the shared traits of the human subject and other terrestrial beings has driven the Non-Human Rights Project to move to protect those beings that aren’t covered by the term ‘persons’.
This project is noble in that it endeavours to protect those who can’t protect themselves; self-serving in that, one day, the legal definitions that will cover these other terrestrial beings can be used to legally enshrine the rights of the trans- or post-human in whatever form they appear. These legal protections will show the moral character of our species in its current form by enshrining moral codes for those that will follow us, and allow a legal and moral basis through which these divergent strains of humanity can live side by side.
This paper has shown that if or when these shared traits disappear, a distinctive moral status will need to be fashioned, and in fashioning it, co-habitation and cooperation must be at the forefront of our thinking.
Akko is a town in Northern Israel that besides being the holiest city in the Baha'i religion is relatively unimportant in the modern world. In 1291 however, Acre, as it was called, became the last bastion of Christianity in the Holy Land, before it fell to the Sultan’s forces after siege and bloody invasion. It was here that Christianity gathered its forces for the strongest stand that could be mustered against the changing world around it.
In modern times, pressed by the changing nature of local institutions, Christianity is garrisoning. The ethic of a religion that at times prided itself on diversity is receding into itself. It is slowly turning monochromatic. The Church is less an element of a balanced life and increasingly the mistaken definition of one. Where Sundays used to provide access to a particular network and time to reflect on morality, that network is becoming the primary point of reference in the lives of many Christians, while others are given a binary choice of living as a Christian, among Christians, with a prescribed Christian world-view, or drifting away.
This is the rise of evangelistic Christianity. The programme is an unpalatable one to those not subscribing to it. The option of measured involvement is increasingly difficult and requires feats of ideological acrobatics. Occupying an ever greater part of the congregation’s life, the institutions associated with churches seek more and more to extend the moralistic incubation period of Christians. In theory, this will prevent contamination through interaction with other dialogues and other forms of compromise, by bestowing the (emotional) “armour” to resist empathising with other ethics. School is an important battlefield for the minds of the flock.
In typical colleges, more Christians are undone than made, although those that are made tend to be tempered strong. Increasingly though, there is the option to get a higher education, and preserve the sheltered world view that was instilled, perhaps coercively, violently, or voluntarily, throughout childhood. To that end, one can browse a list of the best schools for homeschooled children in the US. It cannot be stated better than in the article itself:
“Perhaps you think that your homeschooling education will inoculate you against the negative influences you’ll encounter at a conventional secular or even religious institution of higher learning. But think again. Polling data show that college destroys many a young person’s faith, replacing it with a secular vision of reality.”
It is alarming that the fastest growing segment of Christianity is its extreme wing. As this wing rails against more traditional, less evangelistic Christians (deriding the “shopping-cart Christianity” of doctrinal picking and choosing), believers are either pushed away by the discomfort of emotional services and political sermons, or turn those sharing the experience into their principal support network, eschewing what most would consider a balanced life.
In 2003, the American Family Association issued a warning against the Presbyterian Church of the United States. The problem was that the Presbyterians’ general assembly had democratically called "for the elimination of laws governing the private sexual behavior between consenting adults (and the passage) of laws forbidding discrimination based on sexual orientation in employment, housing, and public accommodations." On another issue, that of religious tolerance, they were again called out, this time along with Catholics, Methodists, and even some Baptists, for demonstrating disapproval of religious profiling.
Each individual strain of Christianity has its own mechanisms for changing doctrine, or its position where there is no centralised doctrine. Disagreements resulting from these schisms account for many of the sub-denominations that exist. This is why Christianity, as an ethical system, in theory welcomes plurality, dissent, and debate in the extreme.
Newer “non-denominational” Churches imply an enlightened openness above and beyond the factions that were produced along the way. Conversely and somewhat contradictorily, a consolidated political ethic among these post-denominational Churches seems accompanied instead by a radically unitary and evangelistic modus operandi.
In short, doctrinal debate and mutual tolerance are reduced to dogma. There are examples of theological flexibility, and there are examples of its unwelcomeness. Take, for instance, the story and arguments of Matthew Vines on why Christianity can theologically be compatible with homosexuality. That it is unethical to impose an ethic on another was the implied operational standard of the former Christian status quo of diversity. The current answer among the newly coalescing front is that individuals have a responsibility to do just that. The post-denominational nature allows an agreement on nominal political issues that can bypass the deeper theological debates of more traditional branches, and facilitates this process.
Vines is refuted by a group whose slogan is “The Bible is ours to proclaim, not edit”, accompanied by a caricature of white males and one marginal female (literally on the margin) looking on approvingly. The argument can be summarised as: deviation is the result of a lack of fortitude to maintain a path-dependent normality. The response also dismisses several Churches already mentioned as “liberals”, implying a single correct interpretation of scripture, socially consequential, and at odds with the central philosophy of Protestantism that no man or group holds a superior interpretation of that scripture. The docility of uncertainty, and the accompanying humility, that characterised some forms of Christianity is the first element rejected in the new brand. It is therefore a feature of the institutions the new brand sponsors to inculcate evangelicals with an ethic of expressive terrestrial governance, rather than introverted reflection and simple consistency of example.
As this process is realised and moralistic positions homogenise among Christians, institutions can grow to serve the newly consolidated consumer base. The process is comparable to the roots of mass production and the associated vertical integration of industry in an earlier, less stratified America.
One blossoming sector of this Christian social economy is post-secondary education. With enrollment growth rates far outstripping mainstream colleges, Christian schools are taking a bigger slice of the collegiate freshman pie every year.
In Canada, these schools first became controversial over public subsidies. While some still receive public transfers, if not directly then via tax breaks, it is the ethical considerations of their place as academies that are more worrying. The context of the shifts in Christianity towards shunning plurality of interpretation is vital to understanding this – even more so the coalescence along political lines, as an increasingly rigid ethic is translated into practice. Instead of expanding thought and insight, these schools keep the narrow blinders of a static worldview on individuals, and therefore run contrary to the purpose of academic enrichment.
Tautological selection is the first step. Before entering, there must be a consensus on several issues. Most obviously, being a Christian is a requirement of entry, but additionally, most major political issues are settled at the door. This epistemic closure extends to varying degrees.
At the Christianity-lite end of the spectrum are schools like St. Francis Xavier University, comparable in some ways to Notre Dame, a nominally Catholic university renowned mostly for its sport (ironically, football of a different kind). St. FX’s handbook flaunts the sort of dreadlocked undergrads you would expect to see at any university. It has a normal dorm life, with some residences gendered and some mixed, and nobody is obligated to live in residence. There is also a campus bar, there are no rules on student sexuality, and on the checklist of things not to bring to school is “A narrow mind - residence is about learning inside and outside of the classroom”.
Also in the Canadian Maritimes, Kingswood College lies closer to the other extreme. Spurning non-traditional hairstyles, Kingswood students live in separated residences with strict limitations on gender mixing. Students not living with their parents are obligated to live on campus, where alcohol, tobacco, and movies rated for those over the age of 13 are forbidden outright. According to the student handbook, prohibited sexual activity is a more serious offence than misuse of a motor vehicle.
Other schools enforce other forms of radical conformity. Providence University College’s handbook makes explicit that “Out of a respect for God-given life, students should not support or participate in abortions or abortion-related activities.”
Rocky Mountain, Pacific Life, Redeemer, and Columbia colleges all require conformity in students’ relationship choices and general ethic, and outlaw participation in occult activities. The emergent university quidditch craze apparently will be skipping these campuses.
Vanguard College students for their part must agree “That the Second Coming of the Lord is imminent and will be personal, visible, and pre-millennial. This is the believer’s blessed hope and is a vital incentive to holy living and faithful service.” Essentially, impending judgement is good because it makes people follow the rules, which include a ban on jewellery on men that could be interpreted as transsexual. Students are therefore reminded that they may have to waive personal rights for the good of the “Body of Christ”.
The problem is that debate is settled before it has begun; something antithetical to the development of a critical mind, and to the idea of academic due process. While it is true students of many universities enter and leave college without altering their views, those at Christian universities are not even afforded the opportunity to do so. There is no growth, simply retrenchment.
This attitude exists elsewhere in modern Christianity: as society becomes less influenced by Christianity, Christians increasingly feel they must segregate themselves from it. In conversations with individuals who had left evangelical environments in the Greater Vancouver area, several trends emerged. Unlike my own experiences with Church (of the more traditional variety), they were not encouraged to seek outside networks, save for the purposes of witnessing, or of leading others as Christians. Old friends were considered reminders of, and temptations towards, a more sinful time, and time with them was to be managed accordingly. Increasingly they were brought into a monochromatic ethic that was dictated rather than dialogued. Then again, what is the Bible if not an ethical dictation?
One particularly worrying theme has to do with gender roles. Since the nuclear family is taken as the epitome of human organisation (and the only acceptable context for sexual relations), the role of the woman in this formation is infallibly pre-established. Women are thus encouraged to marry early and start families, fulfilling the role “God” intended for them. This is not a value statement in and of itself but rather a descriptive one, assuming “living faith” (evolution in the application and interpretation of scripture) is not applied at more than a glacial pace. If this archaic role of women in an irrelevant familial archetype is taken as exclusively acceptable, it necessarily represses the explicitly subordinate gender.
Nonetheless, some women repeated that they felt empowered by the role God had bestowed upon them, in what can only be understood as a display of divine Stockholm syndrome. Such a purpose in servitude means masquerading coerced self-deprecation as liberation. All rationalisation or questioning of social composition is left at the door, in favour of roles prescribed as efficient in bygone eras. This is not true of all Churches, but it is apparent in the evangelical brand that, on the presumption of the Bible as literal and temporally uniform, prescribes social context rather than responds to it.
What is surprising is the popularity of such rigidity. Alternet, in talking with women of Mars Hill, was able to hypothesise on why such Churches are appealing. In addition to proposing an easier path through life, they help make that path a reality. With counselling services, child care, and community dinners, it becomes apparent that many of one’s basic needs and additional comforts can be found in devotion to the organisation, or indeed to God. In America, and increasingly in Canada, where social services provided by the state are scarce and underfunded, these can be easily procured simply by submission to this proxy organisation. If all moral questions and uncertainties of life are settled under one roof, then mortality can just run its course until either the gates of heaven or the second coming validates their presumptions. Why pick a more difficult life when you don’t need to?
The old paradigm, one where Church is largely a Sunday event, a proxy network to one’s professional and social life, is being eroded as those networks are less and less tinted by subtle religious undertones in modern North America. In a world where prayer in schools is unacceptable, entirely new schools must be built to ensure ideological incubation.
How far will it go? It is important to remember that radical Islam is not the norm in that faith. It is a loud minority given voice by media, and able to exist in areas of weak states and infrastructure. That is unlikely to be the case with Christians in North America. Becky Fischer in the film Jesus Camp, however, is, like many, under the impression that the Islamic mainstream is something it is not. She famously pronounced:
“It's no wonder, with that kind of intense training and discipling, that those young people are ready to kill themselves for the cause of Islam. I wanna see young people who are as committed to the cause of Jesus Christ as the young people are to the cause of Islam. I wanna see them as radically laying down their lives for the Gospel as they are over in Pakistan and Israel and Palestine and all those different places, you know, because we have... excuse me, but we have the truth!”
Admiration for the devotion of another faith is different from envying their fanaticism and this comment walks the line. It is likely that Christians will continue to segregate voluntarily. They will reduce the size of their step into the secular world and moderate Christianity will likely continue to shrink.
As North American society accommodates increasing diversity with secularisation, the activities that traditionally were served with more varied networks are consolidated by the faithful. The linear paths of the lives of Christians become increasingly serviced within more narrow Christian networks. From childhood day care and homeschooling, through university, business networks, financial services, and recreational activities the whole way along, the Church provides for the needs of the flock increasingly more than government and communities.
There is nothing problematic about this at face value. The challenge, however, is that it threatens to consolidate the principles of Christian communities in opposition to the changing wider society around them. In a heterogenising North American culture, institutions that were once tacitly Christian are being forced either to secularise and become accessible, or to Christianise and serve that segment specifically. The debates over the nature of the Boy Scouts are one example of this, as Christians fight to prevent exposure to secularism.
In 1291 the Christians lost Acre and the entirety of the Holy Land fell under Islamic governance. Without implying that history was destined to compose itself this way, a cycle emerged: a golden age began, and a confident but moderate Islam encouraged the expansion of science, technology, and the arts. After dominating the region, Ottoman civilisation in crisis began to split into religious hardliners and modernists – who would become Wahhabist Saudi Arabia and secular Turkey.
The same sort of division is apparent in North America, where 400+ years of unquestioned (contextually moderate) Christian domination are under threat. The same splits are occurring. It is easy to say there are no clear geographical guiding lines for the splits now, but there were none in the nineteenth-century Middle East either; individuals followed post-hoc national borders. Whether geography will be a factor in the current Christian turmoil is difficult to say, but it is worth noting that state laws on moral issues differ vastly between some Northern and Southern US states.
Canada for its part is supposedly undergoing a “cultural shift” to the West, which to many is simply code for a rising conservative ethic from that region. This, however, is mostly due to the rising economic importance of Alberta rather than the presentation of a viable cultural alternative to the secular, tolerant, liberal character the world thought it knew Canada for. Vancouver, a small but intense battleground, mirrors the national schism, with a nominally white, conservative East hosting some of the most hardline evangelicals, while the multicultural West and downtown remain immovably liberal and a society apart.
Whether geographic trends continue to consolidate will be one of the more interesting points of development moving forward. Even if not territorially, however, Christianity does continue to hole up ideologically, just as it did in Acre when once before under threat. The Christian empire in the Holy Land was the last time before North America that Christians had established rule outside of Europe. Eventually it fell. In modern Canada and the US, secular legislation and multiculturalism have replaced the Ottoman scimitars and Devşirme that challenged Christian supremacy elsewhere. If the religion continues to draw inwards, defining itself by a unifying pentecostal ethic, it will become an increasingly unsavoury philosophical diversion, and will likely be rendered vulnerable to a more cultural variety of siege. There may be very real, and indeed possibly violent, consequences to these developments.
This means things are likely to get worse before they get better. If a culture war is to be prevented, religious ethics will need to be segregated from political ones. Terrestrial governance must focus on the problems we face down here, and no prescriptive book can fulfil that task. Tolerance must be extended to all faiths, but not to the point that those faiths are permitted to exhibit greater intolerance towards others. Finally, education and shared experience must be a priority; Canada and the US must work to prevent the continued epistemic closure of Christianity. It can lead not only to social enclaving or extremism, but indeed to the eventual incompatibility of North American society and its dominant religion.
Having successfully gotten away with a criminal act, many people become so overwhelmed with guilt that they end up turning themselves in anyway. This voluntary action of handing oneself over to the authorities for punishment is meant to bring closure to one’s conscience; it is a way to right the wrong that was committed. At the heart of this idea is the position that justice cannot be done without some form of punishment. Those turning themselves in essentially accept justice as restorative.
This way of thinking about justice goes back millennia. Hammurabi’s code – one of the earliest known written legal systems – already contained the now famous “eye for an eye” provision as a way of restoring justice. Punishment need not be strictly reciprocal to be restorative, however, and most laws since have developed specific punishments for different crimes, whether these be fines, imprisonment, bodily harm, or even death. These more often than not fall short of strict reciprocity. But what they all hold in common is that they provide a means for a perpetrator to make up for the crime in an abstract sense. For instance, the law may deem five years in prison the necessary restitution for a common assault.
"What do you think, would not one tiny crime be wiped out by thousands of good deeds?"
(Dostoyevsky, Crime and Punishment)
What’s interesting is not that this system exists but how strongly we as a society believe in restitution according to the letter of the law. In other words, it is the need an individual criminal may feel for institutionalized punishment as a result of their actions that is truly surprising. This holds true even when punishment may not be in the best interests of society from a purely practical or utilitarian angle.
Take a case where someone has committed a grisly murder and gotten away with it, either through police incompetence or sheer luck. After the initial jubilation of having escaped the long arm of the law, they may begin to feel guilt and consider turning themselves in. If they do so, they will probably end up sitting in prison for a very long time or even face the death penalty, restoring justice according to the legal principles set out by society. But beyond helping soothe the criminal’s own conscience and providing closure for the family and friends of the victim, this outcome gives little practical benefit to society at large.
In fact, governments incur large costs for keeping prisoners locked up or on death row, and getting the conviction in the first place isn’t cheap either. For the sake of argument, let’s assume that the perpetrator will not murder again and that enough other criminals are caught that this one’s escape does not lessen the effect punishment has as a deterrent.
Now take an alternative where the criminal does not turn themselves in. They have the same guilty feelings as a result of their actions, but decide to dedicate their lives to doing general good deeds instead.
The criminal in this case may volunteer for a charity, donate a large portion of their income to good causes, and generally try to be a better person than they were before. These things they do because of their feelings of guilt; had it not been for their crime, they would be doing none of them. Practically, it would seem that this option provides society with more benefit than if the criminal had sat in prison for an extended period of time.
Still, many people would be inherently uncomfortable with this idea for the simple reason that the criminal appears to have gotten away with their crime; they have not been punished, and there seems to be something deeply unsettling about that for us. Typically, we don’t plead for the perpetrator of a crime to go out and do good deeds as a way to make up for their actions. On the contrary, it is common for the police or the victim’s family to call for the criminal to turn themselves in to the authorities, telling them that this is the right thing to do.
This may be because the victim or their family has not benefited from the murderer’s subsequent good work. But it is hard to see how punishing the criminal would benefit the victim’s family anyway, unless the punishment was in the form of compensation. Most punishments for serious crimes – in Western societies at least – are not.
Criminals themselves tend to hold this viewpoint. Take Dostoyevsky’s character Raskolnikov in Crime and Punishment. Despite getting away with the murder and robbery of a miserable old woman – something he even considers to be to the benefit of society – he is driven mad throughout the rest of the novel. He even gives away much of the money he had stolen to needy people in an attempt to justify his actions, but this does little to soothe his conscience. In the end, Raskolnikov turns himself in to the authorities and accepts the harsh punishment of being sent to a work camp in the frozen tundra of Siberia.
It seems that what is at play here is the desire or need to confess one’s actions. In other words, we need others to recognize the efforts that we are making in atoning for our crimes. This is because the criminal act is not only a violation of natural law – if such an absurd concept in human conduct even exists. It is also a violation of human law because, as social animals, we sin against others in our community when we commit a crime.
For instance, killing another person is not wrong in itself or according to nature – one could even argue that aggressive and deadly competition between living things is very normal – but it is wrong precisely because it is a violation of the rules of the community to which we belong.
The death of another, understood as murder, is something only possible within a communal understanding through our rationalization of natural events by way of language. Even one's own private morals are derived from a social existence – so to believe murder is wrong is to do so because of one's interactions with others.
It is therefore not enough to seek to make amends by becoming a better person because without confession, there can be no absolution from the society one has wronged. Even in the Christian tradition where forgiveness is paramount, confession of one's sins against God is done to another person in the form of the priest. It is not the deity that is negatively affected after all – he is all-powerful – but other members of the Christian community whom we have sinned against. This is true even in cases where the sin seems of a very personal nature such as using the lord’s name in vain. It is wrong because in being members of a certain community – the church in this case – we have agreed not to do so. The covenant is made to our fellow members and it is really them that we offend, or feel we offend if they happened not to hear our utterance.
Confession is not enough, however, and the priest will make the offender say a series of prayers before he can be absolved of his sins. Similarly, in law a court does not accept a criminal’s admission of his crime as sufficient for absolution and the judge declares a fitting punishment for the offender. The punishment, just like the crime is decided upon within a social context. Speeding for instance is not naturally wrong. On the contrary, it is society that decides going over 100km/h on a certain road is an offense and that a fine of x amount of currency is the appropriate restitution.
But punishment does not only satisfy the demands of society with reference to the crime, it also satisfies the criminal’s longing for absolution. In the Philosophy of Right, Hegel describes crime, in his famous and confusing phraseology, as the negation of the rules of society. Punishment subsequently becomes the negation of this first negation and the restoration of right. In other words, it is a way of returning society to its original position because without it the law decided upon by the community would become meaningless by way of being unenforceable.
At the same time, the punishment recognizes the criminal as a fellow human being and member of the community by applying to them the law to which they too have agreed; they are not treated as an animal to be restrained but as a human whose participation in communal life and rule-making is to be respected. In essence, one’s human freedom in both making and breaking the laws is acknowledged through the enforcement of punishment.
Turning oneself in can then be seen as an attempt to return to the community from which our sin has removed us. And by accepting our punishment, we also escape the wilderness of a solitary moral life and return to our place in society – like Socrates drinking the hemlock despite being offered a chance at escape. To submit to the law voluntarily is to accept one’s own freedom, a creative freedom only possible through a life in common with others.
Monie, age eight, lay shaking among a pile of corpses, waiting for the safety of nightfall so that she could make her way to Kigali. Only weeks earlier, she had witnessed Hutus beating and shooting her father and raping and killing both her mother and her sisters. Monie was now alone in the world, but somehow, after being taken for dead like the many bodies strewn around her, she managed to make her way to an orphanage where she found shelter.
Unfortunately, somewhere between half a million and a million Tutsis and Tutsi sympathizers were not as “lucky” as Monie. They were systematically exterminated by the ruling Hutus during a period of just 100 days in Rwanda in 1994. Guns played an instrumental role in rounding people up, intimidating them into cooperation, and shooting those who fled. And a substantial portion of these guns came from France, though individuals from at least twelve additional countries including China, Russia, the United Kingdom, and Italy also arranged for arms transfers.
If ever there was a situation in which states should have acted on principle to prevent genocide, this was it. And yet there were plenty of practical reasons why it would have been smarter for countries to enforce stricter arms control regulations apart from ethical concern, including the threat of regional instability, reputation consequences, and a skewed arms production incentive structure.
Nevertheless, this same pattern of weapons-producing nations sending arms to groups that later use them to inflict human suffering can be seen throughout history. Indeed, it was the United States that armed the Afghan mujahideen and Saddam Hussein’s despotic regime in Iraq in the 1980s, while the Soviet Union and China armed Ethiopia from the 1970s to the 1990s. Ethiopia’s conflict with Eritrea from 1998 to 2000 cost the lives of 70,000 to 120,000 soldiers and civilians. And it was Russia that provided arms to the Syrian regime, which continues to use them to murder innocent civilians throughout the country to this day.
The problem is that once an arms shipment is made, a country cannot control the end use of those weapons and cannot recall them when the fighting is done. Overall, there are 875 million small arms in the world today, and 74% lie in civilian hands. Their annual trade is valued at over $50 billion, and there are substantial financial incentives for governments to turn a blind eye to their end uses—perhaps one reason why there is still no comprehensive international framework regulating how arms are bought and sold.
Fortunately, a group of Nobel Peace Prize Laureates led by former Costa Rican President Óscar Arias Sánchez recognized this shortcoming and presented an International Code of Conduct on the Transfer of Arms to the United Nations in 1997. After a number of years and much lobbying, 153 UN Member States passed Resolution 61/89 in 2006, agreeing on the necessity to create an Arms Trade Treaty (ATT). After several more years of working groups, preparatory committees, and informal discussions, the UN finally convened a diplomatic conference in July 2012 to negotiate and adopt the treaty.
Unfortunately, that conference failed to come to an agreement when the US made a last-minute request for more time. Russia, Cuba, the Democratic People’s Republic of Korea (DPRK), and Venezuela followed suit.
They have now had their time to reflect, and the second diplomatic conference convening at the UN from March 18th to the 28th must finish the job.
The strong interest that numerous countries have shown in recent years and optimism in anticipation of this conference provides a reason to hope. The treaty enjoys support from seven co-author countries (Argentina, Australia, Costa Rica, Finland, Japan, Kenya, and the United Kingdom) as well as 100 co-sponsors. Yet, the draft text from the July conference that will be used as the basis for negotiations is inadequate as it stands today. Although every country is entitled to input under the consensus-based operating procedure of the conference, it will truly be up to the big weapons producing nations such as the US, Russia, France, China, and Germany to agree to the terms of a strong ATT in order to ensure its passage.
The most glaring problem is that the scope of the draft treaty is far too narrow. Despite the fact that the world produces nearly enough bullets annually to kill all of its inhabitants twice, ammunition is not included in the scope of the current ATT text (though it is briefly referenced in Article Six). In other words, the prolific guns in regions of the world like Africa and Central America that have recently experienced conflict will continue to be used to kill as long as there are bullets to load their barrels.
Another major problem with the current text is that it would only forbid a state from authorizing an arms transfer that would be used “for the purpose of” committing genocide, crimes against humanity, or war crimes. Yet history suggests that it is near impossible to prove that a state’s initial transfer of weapons is ever made “for the purpose of” committing serious crimes. International law established in the Articles on the Responsibility for Internationally Wrongful Acts (2001), the International Court of Justice’s judgment in the Bosnia Genocide Case (2007), and other examples, makes it clear that inaction in the face of knowledge of a risk of genocide is enough to make a state responsible. That pre-established standard must not be undermined by a weak ATT.
Additionally, the wording excludes more relevant war crimes in today’s world such as intentionally attacking civilians, private property, or humanitarian aid providers. A better text would require transfer refusal based on a “knowledge or reasonable expectation” test rather than the current lax standard that would still likely permit all of the aforementioned transfers that were used to commit war crimes.
Countless stories like Monie’s should be enough to convince governments of the absolute need to fix these loopholes and pass a bulletproof treaty this March. The principle that we should seek to prevent arms from being diverted to inflict mass human suffering should be a collective and universal obligation. Yet the practical reality is that money talks, and several of the top arms manufacturers such as the US and Russia have shown resistance to the ATT out of fear of financial losses for their domestic industries.
But even if strategic gain is the only concern, and countries do not support the ATT on principle, there are still three fundamental practical arguments to be made for why weapons producing countries should support a stronger ATT.
The first reason is the threat of weapons falling into dangerous hands. As President Arias said in a speech prior to the last ATT diplomatic conference in 2012, “If it is legitimate for us to worry about the possibility that terrorist networks gain access to a nuclear weapon, it is also legitimate for us to worry about the rifles, grenades, and machine guns that are given into their hands, not to mention the hands of young people, gangs, and drug cartels.”
The second reason why arms producing countries should support a strong ATT is that the development of conflicts abroad threatens to embroil rich nations in conflicts they would prefer to avoid or else they risk severely damaging their reputations. All too often, arms shipments are diverted and used to perpetrate genocide and crimes against humanity. Then, the same nations that exported those arms are left to intervene to end an atrocity to which they themselves contributed.
Returning to the Rwandan genocide example, if governments such as those of the US, UK, and France had enforced stricter controls on the licensing of arms shipments, those arms would not have fallen into the hands of Hutus. In turn, Western nations would not have found themselves in the difficult position of having to decide whether to declare the situation in Rwanda “genocide” and to intervene. As a result, the US government in particular, along with others, suffered severe reputation consequences that could have been averted if it had done the principled thing in the first place.
The final reason why arms producers should support a strong ATT is that doing so would prevent these countries and their weapons manufacturers, which generally already have stronger export controls than most states, from being put at a competitive disadvantage.
Though some ATT opponents argue that the arms export market is too competitive for companies to turn down any buyers, no matter how corrupt, Western suppliers are currently being undercut by the poor practices of other suppliers. The ATT would hold all states to the same standards of arms transfer authorization. When producers worldwide are held to the same standards, and one producer rejects an unauthorized sale, another cannot simply make that same sale. Thus, with a strong ATT, arms exporters will not risk losing business by being ethical.
Governments with strong export controls would also benefit because current regulation inconsistencies permit actors involved in the illicit arms trade to transplant their operations to countries with more favourable conditions for illegal activities. By establishing universal standards, the ATT would discourage the proliferation of the illegal arms trade and ensure that the controls established by one government are not undermined by the lack of controls elsewhere.
It is also worth noting that the ATT would not pose any threat to private gun ownership, which many of the top arms producing nations hold dear. The purpose of the ATT is very specifically to regulate trafficking of all conventional weapons between countries, and it in no way calls upon countries to agree to anything not in accordance with their own national laws.
Arms producers and business leaders seem to agree with the competitive advantages of a strong ATT. The UK’s defence industry body (ADS, for Aerospace, Defence and Security) has supported the UK’s push for an ATT since 2003, and Europe’s defence industry (the Aerospace and Defence Industries Association of Europe) announced its support for a tough and enforceable ATT in February 2010. A group of 21 global investors controlling over $1.2 trillion in assets issued a statement in 2011 calling for a robust, comprehensive, and legally binding ATT.
So as delegations from nearly 200 countries worldwide descend upon New York in the coming days to hammer out a treaty, even if they are not moved by stories like that of Monie in Rwanda, they ought to fight for a bulletproof treaty for their own practical reasons. Even if they are not concerned with the principle that states should do all they can to alleviate human suffering, they still ought to be concerned with protecting their own stakes and realize the tangible benefits that the ATT will provide for them.
In other words, this is one situation in which principle pays.
Think about democracy. Think about alternatives to democracy. What comes to mind?
Socialism, Communism, some more –isms, and anarchy. Of course there are some shades in between, but really that’s about it. When you ask even a harsh critic of democracy why we keep pushing its wheel up the proverbial hill, the response is usually something like, “Well, it’s not perfect, but it’s all we have”. Really?
When you start to think of actual alternatives to democracy, aside from the few ensconced in a slew of manifestos, it is hard to come up with anything better.
Why? It seems asinine that humans cannot think their way out of this conundrum – the same humans that created Surrealism, Prog-Rock, and microscopic robots. We seem capable of stretching our imagination beyond any former limitation, but have been recycling the same basic governing system since it was philosophized by a few Greek guys centuries ago.
True, democracy works. Kind of. It has its ups and downs. This piece is not meant to discount democracy (entirely) so much as it is meant to get us to ponder why we think it has been the admittedly-imperfect-but-the-best-option-nonetheless name of the game for so long. Why has communism always failed? Why is anarchism considered such an elusive idea? Why hasn’t the revolution happened? Furthermore, why can’t we think of other alternatives?
Many people would argue (and have argued) that it is because of the way we are. Our nature, our inherent state of being, dictates failure for us before we even have a chance to prove it wrong.
Democracy is put forward as the best way of governing ourselves because of our nature, but does our nature also prevent us from going further than democracy? Is this it? Why can’t we think outside of a democratic framework? It really is absurd to think that we only have a few options out there, but it is even more absurd that it seems nearly impossible to think of a new one. Are there really no more boundaries to push?
This is a speculative piece meant to draw connections between very large and nebulous ideas, namely human nature and what I will call democracy culture. Democracy culture, for the sake of this short piece, can be understood as the way of life that gave rise to democracy and sustains its global presence. Democracy culture is the culture that suggests there is no better option (mostly because of our prescribed “nature”).
Let’s do an experiment:
Start with human nature, or whatever you perceive human nature to be. The way we are ‘supposed’ to act. Have been told to act. The way we do act. The way we treat each other, and the way we are treated. Then, think about our institutions. Economic, political, and social. Think about how our supposed inherent nature – our apparent greed especially – has influenced the formation of these institutions.
Really think about it. Try to remove human nature from the equation. Start to look at the institutions that surround you and understand how our “nature” went into their development. Capitalism, for example, is based on the notion that – in nature – humans inherently want more. There is no ideal stasis, no cooperation, in nature. Then, think about our three-branch system here in the U.S. Checks and balances. Each branch wants more power, so each branch has the power to monitor the others. This balance of power was written into our legal constitution because it is an inherent part of our human constitution. You start to see how many suppositions were made on your behalf, because of your nature, without any of your input.
Is it, however, part of everyone’s constitution or just the commoner’s constitution? The average Joe. Let’s look at representative democracy. Centuries ago, a few Greek guys decided what was best for the way we live our lives today. The commoners aren’t smart enough or don’t have enough time to make their own decisions. We need to elect an elite group of representatives to take care of business and to best represent our concerns. The basic framework of the U.S. governing system is built upon the foundational assumption that we are not smart enough, good enough, or concerned enough to make our own decisions. Why? Does human nature make us less smart? Or does that apply to just some of us?
In conversations, when I have asked people why they think representative democracy is a good idea, I get a response similar to the one I get about democracy itself: “It’s not perfect, but it’s the best we got… There’s no way to get everyone’s opinion. It would take too long.” In response to this critique, there are other kinds of democracy. It has been reworked time and time again and currently exists in various formats.
In Bolivia, under the guidance of indigenous leader Evo Morales, it has taken an interesting turn, granting rights to Mother Earth and promoting an overtly socialist platform. However, no matter how far democracy is stretched, it is still democracy.
Centuries ago, a few Greek guys created the basis for a governing system that has persisted until now. What about the rest of the world’s population at the time? Why didn’t the governing systems of ancient India or the Pacific carry over? Why did the West (and now through globalization – the world) stick with the Greek plan? And now, why can we not think of an alternative? Is it because they pinned down human nature? We’re wild and need to be controlled, and democracy is the best way to do it?
It seems like we are stuck on democracy. We can’t seem to think outside of it. We’re locked into this idea, which has created an entire lifestyle, a culture. We’re so entrenched we can’t think of the next step.
Antonio Gramsci (philosopher, political theorist, and all-around polymath) referred to this as hegemony. Although today the word is usually used in a negative context, the original Gramscian concept was more or less a tool. It is the “way” we are, because it is ingrained into us. We can’t think outside of it because it defines us.
With looming crises on nearly every horizon, it is high time to reevaluate our existence here. In order to move forward, according to Gramsci, we have to create a counter-hegemony, an entirely new way of seeing and understanding the world and ultimately ourselves.
Who knows for sure, but let’s continue the thought experiment:
Perhaps human nature does not exist at all, undercutting the apparent need for democracy. Sure, we may need some type of governing system, but maybe it’s time to search for alternative ideas.
This is not some anachronistic plea to return to the ways of the past. There are more than 7 billion people on the planet now. We can’t hunt and gather. Nor is this a call to drop our current modes of existence entirely. It is more of a call to see the shades in between then and now, them and us. Rather than studying these alternative existences from a curious distance, we should immerse ourselves entirely in their potential validity. In doing so, we may create something completely beyond our current imagination.
We should strive to expand our critical capacity to create a counter-hegemony. To get out of our skin a little. Perhaps this search should not be limited solely to other ways of governance. In my own work, I have encountered Buddhist coping mechanisms among Tibetan former political prisoners in India, talked with ghosts in Indonesia, spent time in Freegan and anarchist communes across Europe, studied traditional human-nature relationships in Northern Bolivia, and have recently started to study communal living in Depression-era Mississippi (tracing my roots I suppose).
For me, all of these seemingly disparate topics coalesce under the effort of trying to understand alternative existences, different ways of being. While they do not exactly correlate with governing systems, this same type of exploration is necessary to color in the gray areas between democracy, communism, socialism, and anarchism.
Again, it seems absurd that we are unable to divine some new and better system. In order to remove the hegemonic blinders, we need to stretch our perceptions of what life can be like as far as possible, taking advantage of the beautiful diversity that we have had, still have, and could have on this planet.
This is not to accept alternative existences as being absolutely true for everyone (no more manifestos!), but to accept these existences as being absolutely true for someone at some point in time. Try to see how and why a certain idea may work.
If we can’t find something different somewhere, at least these alternative existences provide an existential foil for our current mode of existence, expanding our views beyond our current way of life and giving us the critical capacity to create the necessary counter-hegemony to break out of our democracy culture.
In its first issue, Distilled Magazine discussed the crisis of confidence the Western world currently experiences. The economic malaise, political opportunism, and a number of doubtful wars seem to hide a much more ingrained sense of insecurity. In the texts they put forward, many of our contributors therefore debated the lack of faith some have in our society’s endurance. Although this certainly wasn’t their intent, I would like to rephrase the doubts they uncovered as one single question:
Are we still "modern"?
This question of course doesn’t concern modernity’s material outlook. Indeed, the forces of steam, steel, and capital changed the face of the West, and they continue to be very much present. But when mentioned in a political context, modernity’s most compelling features are the ideals often associated with it. Free press, rule of law, and democracy form the inner core of our "modern world". And when demanding world leadership on moral grounds, these are exactly the values Western countries claim to have embraced.
For many, including most historical periodization models, this modernity began in 1789 with the French Revolution rocking the Ancien Régime. With the masses revolting against kingly rule, the values mentioned above slowly untied the aristocratic stranglehold over Europe. And although the end of the Régime has notoriously been moved back and forth by historians, the notion of change associated with the French Revolution remains so strong that it still stands as a defining point in (global) history.
It therefore comes as no surprise that throughout the last 70 years world affairs have been consistently seen in the light of 1789 and its aftermath. The most recent example is of course the Arab Spring: although a winter might be coming, the hope remains that North Africa and the Middle East have finally found their way towards more freedom and democracy. For many it looks as if, after some failed episodes of ‘benevolent’ colonialism in earlier centuries, the French Revolution finally continues its march towards global adherence. And through its French origins, one revolutionary cry is again at the heart of global politics: Liberté et égalité! Freedom and equality for all!
The Modern Failure
The question, however, remains to what extent we can rely on these principles. For now the balance seems to be rather negative; whenever the French revolutionary call was heard during the last two decades, it translated into something like this:
Afghanistan changed by NATO! Iraq liberated by US forces! Khadaffi bombed out of his palaces! Le Mali sauvé!
In all these cases, a particular discourse was connected to the actual events: the West would defend its core values, punish those in violation, and introduce people to the legacy of 1789. Although doubts are well-justified, the West at least nominally holds these principles at the center of its foreign policy. Following the above-mentioned wars, they even received a more official cover under the umbrella of the ‘Responsibility to Protect’ (R2P) concept. Although intended to deal only with crimes against humanity, the potential link with other overarching principles is rather clear. No intervention relating to R2P would succeed if it were only meant to end the bloodshed without dealing with the underlying causes.
And yet we feel that the missions we have partaken in so far have failed, and rather miserably too. None of the above-mentioned cases has so far delivered a stable result. Despite our lasting belief in the worldwide applicability of the French principles, countries that tasted liberty and equality rarely managed to hold on to them. And this even holds for countries where there was no Western military intervention, such as Egypt. Although the jury is still out on what will transpire in this particular country, even here it seems that freedom and equality are more elusive than many would like. With or without our involvement, new Pharaohs always arise.
The reasons for these failures have been debated on numerous occasions. Many of the explanations stopped at simple policy mistakes, the weakness of our current world leaders, and their inability to set an honest and focused course towards worldwide modernization. Few, however, have considered the possibility that the fault also lay in the principles themselves, and not merely in their implementation.
This is of course not to say that there is something wrong with freedom and equality. Admittedly, these principles are the ones Western societies rightly can be proud of. Although the process was very gradual and there were enough bumps in the road, the post-1945 generation has managed to hold true to the heritage of 1789. Not everything is perfect as of yet, as is demonstrated by some flaws in gay rights or hidden racial and gender inequality, but from a political perspective, Western citizens are now both free and equal.
The point about the current inability to implement them is, however, that something crucial is missing, something which is required if we ever want to defend liberty and equality in a successful way. Indeed, the present day motto of La Republique française carries a third principle: fraternité.
The Modern Restored
Although it would be improper to connect brotherliness in a historical sense to the core of the French Revolution, the triad of liberté, égalité, and fraternité is widely considered to be its main set of principles. These ideas are at the heart of the French nation and its Western brothers, and have been engraved in our conception of 1789. But if we know it is a triad and not a duality, then why is its last principle never connected to foreign policy? Why do we continue to stress freedom and equality, but ignore brotherliness?
One potential answer is that fraternité remains a rather elusive concept. Therefore, it remains much less clear why we should apply it in the same way as we try to do with the other two.
Compared to the passive or external principle of liberty, brotherliness is a value that resides within us and needs explicit action to be effective. People can only be free as long as they are allowed to be so by those that govern them. Although resistance and revolution are possible, freedom remains fully dependent on the attitudes of the people in power. This dynamic is currently at work in Egypt: despite being brought to power through a revolution, it is still the new government, meaning those with actual power, who determine the amount of freedom granted. You cannot act as a free man without someone allowing you to.
The notion of brotherliness on the other hand is not dependent on the attitude of those above you. You can always consider someone like a brother to you, and there is no possibility for him/her to prevent you from doing so. The only condition, however, to this principle is that you also act upon it, that you make clear its meaning to you: It is both an internal and active principle.
To demonstrate the contrast a bit further, we might also discuss the position of equality on this sliding scale. It resides between the extremes of freedom and brotherliness, although somewhat closer to the latter than to the former. The concept is dependent on both internal and external beliefs and relates to both your active behaviour and the passive influences you experience, but ultimately depends on the status granted to us by the other. Simply put: although I might have an equal political status to you, and I display the corresponding behaviour, we will never be truly equal until you actively grant me the same status.
The Modern Restored?
This distinction between the different qualities of the principles we are talking about is crucial when it comes to foreign policy. If both liberty and equality are (partially) determined by external factors, you can defend and expand them without changing your own behaviour. Simply by altering a particular authoritarian government you can make its people free, as the West tried on so many occasions.
But with equality this already becomes much more of a problem. Although the external conditions might be changed, for example by constructing a constitution guaranteeing the same political rights to everyone, internal and active inequality still creates immediate problems. And this is not only a problem of Sunnis versus Shias or Pathans versus Uzbeks, but also of coalition versus collateral casualties. Over 12 years of campaigning in Afghanistan, Western lives have consistently prevailed over those of the local people. The external/passive conditions for equality are seemingly fulfilled; the internal/active ones, not so much.
This is of course pretty demoralizing for the implementation of brotherliness. If this principle is entirely related to an internal belief and active behaviour, the demands we place upon our foreign policy become ever larger. And yet, without this concept, there is no point in fighting wars on behalf of the other two.
The reason why there is no point in doing so becomes a bit clearer when looking back to the European background of the French triad. Fraternité as a concept is very much related to nationalism: Frenchmen are free and equal, but they are also brothers within the nation. And despite having caused some of the most atrocious wars in history, nationalism as a force within states has greatly reinforced the internal coherence of the rising Western powers. Although I am too unfamiliar with the actual historical debates, a case could be made that the brotherly nation preceded liberty and equality. The national wars of the 19th and 20th centuries, after all, antedated all-encompassing voting rights.
So whenever we ground our foreign interventions in the so-called modernity of 1789, we might want to consider that this very modernity does not have its footing in either equality or liberty, but in brotherliness. There is no point in fighting over the end products of a particular historical evolution when you tend to ignore the principle with which it all started, the third leg that stabilizes the whole tripod.
So one huge problem arises: if policy-makers would continue the analogy with the French Revolution (as has been done in many instances with the Arab Spring) and defend our modernity on a global scale, they would have to think about how to create a global brotherliness/nationalism. And despite Faisal Devji’s argument that such a notion is already growing (through people as diverse as Gandhi and Osama Bin Laden), it remains a huge question how to adapt international policy to this realization.
Nevertheless, one core point remains. Although forgotten through our focus on liberté and égalité, fraternité also remains a powerful principle. It has changed the world before and might do so again. But given the dangerous side-effects we know it possesses, we need to consider carefully whom we want to call our brothers, and through which specific acts we can demonstrate this in the most powerful way. How we tackle these questions will eventually determine how ‘modern’ we still are.
More than a few people will, when asked about the origin of their morality, answer with "God". When asked who this god is, they will usually describe a fatherly god, kind and gentle, who was very necessary and present at some point in history, but now only plays the role of a transcendent and inaccessible guarantee of human morality. Perhaps he even is an aesthetic x-factor of nature, but most importantly someone who speaks only to the heart: in other words the almost crossed-out God of modern metaphysics, the no longer necessary hypothesis. This is the modern image of God. The worst thing about it is that most people don’t realize that there are alternatives.
Almost impossible as it may be today, being God still means being the only god, with a power that is absolute. But in the first era of monotheism, it was very different. God was still one of the many gods and his supremacy was not evident at all.
Imagine yourself in God’s place. You are just starting out, as a young god, without followers, no one knows you, and you have no reputation, no street credibility, or self-confidence. What do you do? Easy. Like many childless people in antiquity, you go to the slave market (called Egypt) and you pick out a girl. Not a greedy princess, but someone you can easily influence, someone needy, a pretty woman. Israel is perfect. You had dealings with her before, and even though that didn’t go too well, you are confident that this is the perfect challenge for the both of you.
But of course, buying a slave is not very divine. Much better to "liberate" her from the clutches of the former owner! So, he tells Moses, "see that you perform before Pharaoh all the wonders I have given you the power to do. But I will harden his heart so that he will not let the people go.” (Ex. 4:21)
What a very different God this is than the one we are used to. Not at all that absent, rigid, obsolete father-figure Nietzsche and Freud couldn’t stop railing against. Yochanan Muffs calls this God "the young hero". The whole cat-and-mouse game between God and Pharaoh, with Moses running back and forth all the time, basically serves no other purpose than to show off, so that God’s fame "shall resound throughout the world". (Ex. 9:16)
But then, when He has "liberated" her, our young hero also has to take care of Israel, provide for her and lead her through the desert. Obviously, this is not the perfect honeymoon. Their relationship changes, and the street cred God had created for himself shifts from a mere nationalist fame to something deeper. God doesn’t just want to own Israel, he wants her love to reflect his glory. The hero becomes a provider, a God who provides and saves. There lies the origin of what we today call "providence". Like everything else in the Bible, it is presented not as a theological abstraction but in a series of "concrete situations figuratively conceived". (Muffs) In reading about this, one even gets the impression our young God almost "needs" to be needed. All the way through the desert, He craves situations that demand his sustaining care. His concern to provide food for his young bride is sometimes truly amazing.
When Moses sarcastically remarks: "Are there enough flocks and herds to slaughter for them? Are there enough fish in the sea to catch for them?" (Numbers 11:22), He becomes outraged! Note also how incensed God is over the people’s questioning of his ability to provide food in the desert. The answer is well known: in a rain, almost a storm, God gives his people the food they want. As if the young, frustrated husband says: "Food you want? Food you’ll get!"
In fact, there are passages when God actually begs Israel to test his goodness. Muffs puts it this way: "If only they would try me! You think that the nature gods of the nations can provide an abundance? Just test me and I will outdo them at their own game." One of the major functions of the pagan gods was to provide rain. According to Jeremiah, God can outrain the rainmakers: "Do the heavens (of themselves) provide rains?" (14:22). There is a lot of jealousy in these first moments of the relation, a lot of show as well, with all kinds of hocus-pocus. As Muffs says: this God is "pulling rabbits out of his hat in order to convince an incredulous people that he really has the power to carry out his own prophecies."
Clearly, he does not only want Israel’s obedience, but also her faith, affirmation, love and enthusiasm. He is always afraid that even when he fulfills his word, Israel will consider it a coincidence rather than a product of his divine will.
Instead of being angry at their quasi-philosophical demand for empiric proof, he actually provides it in the form of an authenticating omen that they themselves can actually choose. "One is almost tempted to say that God delights in their doubt because it provides him with another opportunity to empirically / experimentally / scientifically authenticate His being. Instead of being outraged by human doubt, He actually encourages it." (see: Isa. 7).
No "God is dead" here, although the marriage often comes very close to being dissolved. The crisis between man and God we observe all over the Biblical texts seems to be an essential part of the Scripture. In fact, most of their relationship seems to consist of these crises. As demonstrated, the prophets played a very important role in this, not so much as enforcers of obedience and piety but as mediators in a difficult and ongoing marriage. Hosea, Jeremiah, and Ezekiel further developed this relationship by adding various images and additional content. They saw Israel’s history as the story of a woman who was married and then betrayed her spouse, while God’s role in this story is that of a betrayed husband who is torn between the feelings of shame and disgrace brought upon him by his faithless wife. This causes a fierce desire for revenge, but at the same time the husband also retains a strong love for his wife, impelling him to search for any means to bring her back to him. For Israel, loyalty is based on pay. In literary terms, the Israelites relate to God as a supplier of material goods, and when he seems to have disappointed them they turn to other gods.
The modern believer, often even more an avant-garde modernist than his contemporary non-believers, will no doubt protest. He will say: "Isn’t anthropomorphism something very problematic?" Assuming that God has even one face is already a huge theological obstacle, but an endless number of faces? To us rational people, this is supposed to be unthinkable unless it comes in the guise of myth or entertainment, but we can’t be expected to take it seriously, can we? The historically sophisticated exegete of today will even caricature it as something that belonged to primitive societies. No no no! Monotheism did away with all of that nonsense! Or didn’t it?
It is the difference between modern and other understandings of religion. The modern mind thinks in terms of functions; a classical mind thinks in terms of agency. Everything has a responsible agent, a god or a metaphysical principle to justify its existence.
Everyone who sees God or the gods describes a different face, different attributes of the divine person or persons. Obviously everyone sees and describes the set of traits most congenial to his own spirit. This is not to say that he creates his own image of God; rather, each prophet sees what he can of the infinite spectrum of the divine self. The divine young hero serves the prophets of an ever-wandering people as an essential meaning-endowing figure that shapes the way both God and humans relate to each other, the way God relates to other gods, and the way we relate to other people.
If there is one conclusion we can draw, it is that there is a tension at work in the heart of both our and God’s humanness. Although the two main characters in this historical narrative resemble each other more than they would admit, there is also an irresolvable disproportionality between their respective roles. The tension between these two characters can lead to some serious drama that can absorb a great deal of the world we inhabit, if not all of it. This interpersonal drama, which modern people are so eager to ignore in favor of their image of a rigid father-figure, may just be the source of the creative dynamism of both Biblical monotheism and our own humanness.
The crisis through which the Western world is currently passing is misleadingly spoken of as "a financial crisis". We are first and foremost in a moral and cultural crisis. The misleading characterisation of the current situation as an economic or financial crisis prevents us from identifying the right remedies to overcome it. We do not just need new institutional mechanisms — these affect only the symptoms of the crisis. We also need to tackle the root cause of the crisis: the lack of belief in ethical principles and the lack of the qualities of character or virtue that allow us to act upon these principles.
Yet, acknowledging the moral dimension of the crisis places our liberal states in a serious dilemma. For it is becoming increasingly apparent that the liberal states we have inherited from the nineteenth century are incapable of fostering the very foundations on which the state and the market are grounded: they cannot inculcate in citizens the virtues necessary to preserve a liberal order and at the same time remain liberal. Most Western states see themselves forced to step in with illiberal legislation concerning matters as diverse as smoking, fast food, youth drinking culture, environmental pollution, or the bonus culture of companies listed on the stock exchange, to mention only a few examples. All this after having tolerated for decades not only these activities themselves, but sometimes even their aggressive advertisement.
In the nineteenth century, the state could refrain from such illiberal legislation because it could rely on at least two other institutions that inculcated (amongst others) those virtues that sustain the liberal order. I am thinking of course of the institutions of the family and the church. If not the church, at least the institution of the family remained strong everywhere in Europe until after the Second World War. It was only in the wake of the war that education became more "liberal" (note the change of meaning!). Mostly in the urban centres, parents in the 80s and 90s started to adopt the educational "principle" that their children could do whatever they pleased, which really is no principle at all.
Especially the generations born after, say, 1970 should appreciate that this type of education is a social experiment that has never before been undertaken. We cannot take for granted that the experiment will be successful in the long run. Outside orthodox religious circles few will deplore the liberation from philistine morality, especially sexual morality, but the question is: have the generation of 1968 and those who came after them pushed things too far? Are they not living by an ideal of autonomy without principles, an ideal that they seek to justify by appealing to a mere illusion: that societies flourish best if their members are granted boundless freedom? These questions are not a hidden attempt to glorify older forms of family life, but an invitation for a critical assessment of the current culture. Any such criticism has to start from an identification of principles.
The premise for any talk about principles of action is free will. If we had no free will, that is to say, if all our actions were determined by our nature and/or external influences, it would make little sense to attempt to identify principles of action. Only if we are able to act on or pursue principles in a self-directed way does it make sense to try to identify moral principles. Let me inquire first into the origins of the recent doubts about free will before coming to speak of the principles themselves.
From about the 1980s onwards, it became popular to think of human, animal, and plant life as being determined by genes. In many respects the reliance on the newly discovered genetic fabric was merely a more sophisticated version of Darwinism. Perhaps today most people have a more nuanced view about the role of genes in human life, but there are new ‘hopes’ that neuroscientists can uncover the chemical processes in our brains and establish that these processes determine — in a strong sense of ‘determine’ — our actions. Moreover, many sociologists and historians are still in the grip of a mechanistic explanation of social life. They think that a gifted social scientist can explain the action of an individual or group as a necessary consequence of that person’s or group’s individual and social situation. What all three groups of scientists have in common is the hope that, for instance, wars and crimes can be explained as the inevitable result of biological, chemical, and/or social causes.
Such exaggerated hopes of offering a scientific explanation of human action may serve to attract research funding and to exalt one branch of science at the cost of the others, but they offer only a partial and distorted picture of human nature and action. It is one thing to acknowledge biological and social influences on our behaviour; it is an altogether different thing to pretend that our actions are fully determined by such influences. Any scientist who claims to be able to provide a complete explanation of human action shows that he or she fails to take seriously what distinguishes human beings from other animals and plants: free will and reason.
What is needed is a comprehensive approach to human action. Luckily, some moral philosophers – mostly those sympathetic to Aristotle – have started to recover just such an approach. This should not surprise us: social and natural scientists developed their accounts of human nature in the eighteenth and nineteenth centuries as an attempt to sidestep older (Aristotelian) accounts of reason, free will, and the virtues. The inadequacy of these scientific explanations becomes ever more apparent.
Having highlighted the importance of free will and pointed to the importance of reason and the virtues, we must ask about the relationship between the virtues and moral principles. In moral matters, the term "principle" (Latin principium; Greek archē) once meant "end" or "goal of action", rather than "abstract rule" as we now use the term. For an older tradition, going back to Aristotle, the principles of action are the goals that we pursue, and the virtues are the qualities that enable us to pursue these goals. "Principles" in the modern sense of "abstract rules" can never be more than guidelines on how to realise or achieve "principles" in the sense of "goals". Moreover, Aristotelians deal with virtues together with goals because they are painfully aware that our capacity to realise most goods is not naturally given at birth, but requires a certain disposition acquired by education and training. Thus they acknowledge free will, but they also acknowledge the difficulty of using our free will to pursue most goods.
But which ends or goals do we pursue, and who is to count as virtuous? In seeking to answer this question, I suggest drawing on an insight whose most prominent contemporary proponent is Alasdair MacIntyre. MacIntyre, following Aristotle, notes that "the virtues are precisely those qualities the possession of which will enable an individual to achieve eudaimonia" (After Virtue, 2007, 148). Now, eudaimonia or "happiness" is the name that the different factions in Aristotle’s city-state give to the ultimate end of life. Aristotle identifies four factions which respectively take happiness to consist in one of the following goods: pleasure, wealth, honour, and virtue (in the focal, moral sense). Given that there are four different conceptions of happiness, there must be four different conceptions of virtue as well. For what counts as virtue depends on what happiness is. In my view, this provides a key for a critique of current mainstream culture and at the same time sharpens our understanding of the virtues we need to cultivate in order to secure the liberal character of our states.
I will illustrate this by focusing on the twenty-first-century faction that holds happiness to consist in wealth or, as the economists call it, "economic growth". If I earlier criticised the social and natural sciences for their neglect of free will and reason, I turn now to a critique of the concept of rationality used by many economists: the rationality of the homo economicus. Economists are social scientists who accept that human beings have free will and act rationally, but many (not all!) work with a (narrowly) self-interested rationality. Obviously the homo economicus is not used in economic models as an ideal of moral conduct, but as so often happens, a model initially devised for a specific question has been transferred onto new fields and has eventually come to shape our self-conception.
I would argue that the model of the homo economicus was bound to do so. The homo economicus describes the virtues one must have in order to achieve a special type of eudaimonia: wealth or economic growth. Like older accounts of ethics, the economists’ model works with different social roles. Painting with a broad brush, we can distinguish roughly between three roles. As an employee or manager, the homo economicus is hard-working, efficient, profit-oriented, skilled, etc. As a shareholder, the homo economicus seeks to maximise profits regardless of other considerations such as environmental protection, labour rights, etc. As a consumer, the homo economicus buys as many goods as possible, if need be by taking out a loan, in order to make a private contribution to the growth of the economy. The economists preach the dogma of growth and wealth to politicians and managers and promise the hope of possible future wealth to the masses. The need for more immediate justification is met by advertising: those who are not rich can at least experience the pleasure of material comfort bestowed on us by goods and services.
Although this analysis is admittedly grossly simplistic, it brings out the point at issue: the ultimate end determines what counts as virtuous; if eudaimonia is taken to consist in wealth or economic growth, this ultimate end calls for a special set of virtues. To acquire these virtues then becomes an intermediary end or principle in one’s pursuit of happiness. Of course, talk of virtues is so unpopular now that job advertisements ask not for virtues but for diplomas and skills; this, however, is merely a matter of terminology. Similarly, it is rare for anyone (including parents) to present wealth as the ultimate end of life; it is more common to refer to the acquisition of skills as a precondition for happiness, but this merely proves Aristotle’s point that the different factions all call the ultimate end "happiness".
All too often, economic growth is presented not as one option among many options of life, but as the only option or, as I recently read in an article by Botho Strauss, in short, TINA: "There Is No Alternative". Conveying such an impression of necessity allows economists to dictate our ends, hence the skills needed to achieve them, and ultimately our form of life. In my view, it is only once we become aware of the different ultimate ends and the possibility of choice that we can start to think of alternatives. I will focus on the alternative of a virtuous life.
Someone taking virtue (in the moral sense of "virtue") to be the ultimate end of life needs goods such as food, clothing, and shelter. It is indisputable that a certain degree of wealth can help as a means to being virtuous. But the virtuous person will never regard these external goods as ends in themselves. Wealth is to be used for the sake of true goods such as peace, social justice, generosity, friendship, and the arts. What, you may wonder, has all this to do with my initial concern: the threatened liberal character of the state?
There is a certain conflict between the virtues required for an ever-expanding market and the virtues required for the secure flourishing of the state. The economy, with its crude materialistic indoctrination of consumers and its demand for skilled "workers", is sapping the foundations of the state. Slowly, the task of schools and universities is being changed: instead of offering an education that prepares us for our duties as citizens, and even more so for mastering the difficulties of life as a whole (e.g. knowing where to find orientation about ideals worth living for, or where to find comfort and guidance in an existential crisis), they merely provide students with the qualifications necessary for a specific employment and the willingness to serve, unquestioningly, an economy geared towards growth at the expense of all other goods. Those parents who practise a laissez-faire education contribute their share to establishing a caste of unscrupulous aspiring business graduates.
All this is part of one big social experiment whose outcome is unknown. Currently the experiment seems to be failing. The crisis extends beyond mere financial issues; it is a cultural crisis. Amongst other things, states have reacted with illiberal policies in the hope of regaining control of the situation. These policies in turn undermine the state’s legitimacy in the eyes of many citizens: the citizens want autonomy, not a state that makes prescriptions. As a consequence, the popularity of the economy, with its promise of freedom and material happiness, increases while the state becomes ever less popular.
We, the young and youngish generation of Europe, have to break this circle of insanity. We must recognise that: 1) we are capable of pursuing ends in a self-directed manner, i.e. we have free will; 2) we have a choice between ends: economic growth is not the only option; 3) true happiness does not consist in wealth; it consists in love, friendship, justice, and truth; 4) the state can only be liberal if each and every one of us contributes to a peaceful and just social order. This requires us to cultivate what are traditionally referred to as the four cardinal virtues. These virtues cannot be inculcated by the state if it wants to remain liberal; they have to be fostered by other institutions. If families and churches fail (and, luckily, they do not fail everywhere), the regeneration needs to come from the young and spread from circles of good friends to society at large.
What we need to possess is: 1) temperance, to resist the corrupting temptations of wealth (e.g. high salaries and bonuses), especially if the pursuit of wealth comes at the cost of justice, i.e. at the expense of other human beings—near or far—or the environment; thus we also need 2) justice; 3) courage, the courage to resist one’s corrupt supervisor at work and to face corrupt politicians; and 4) practical wisdom, in order to know how to act in the particular circumstances of one’s life. What can be gained is that we free ourselves from the fetters of economists and their bleak vision of human life that treats human beings as mere consumers. What matters in life is truth, peace, justice, friendship, family, and love. These are the principles for us to act on.
The American political debate is dominated by the massive, growing gap between the left and right poles of public opinion on everything from petty scandals to fundamental questions about morality. On the Pew Research Center’s index of 48 “political values measures”, the gap between Republicans and Democrats is now larger than at any time in the past 25 years, twice as large as it was during the Clinton era.
This dizzying rise in polarization accounts for the popularity of political psychologist Jonathan Haidt’s theory that "tribal moral dynamics" are wreaking havoc on American politics, causing each side to draw further into an ideological cocoon when it feels that its sacred values are threatened. Haidt’s theory is intuitive: when we come to our views on public policy from a non-negotiable starting point (whether a belief in the sanctity of life, the injustice of preventive war, or the inviolability of private property), we’re bound to end up with positions so drastically opposed to each other’s that even debating them civilly will be difficult or unproductive.
If values are what divide us, though, why do some of our most vicious political disputes concern questions of fact, not value?
Think back to the 2012 election season. Arguably the two most consequential quotes of the campaign were Mitt Romney’s reference to the “47%” (the fraction of Americans who pay no income tax) and Senate candidate Todd Akin’s claim that “legitimate” rapes rarely result in pregnancy because “the female body has ways to try to shut that whole thing down”. Both are statements of “fact” (one more plausible than the other) not value, even though abortion and tax policy are two hotly contested subjects in American politics.
When Democrats used Mitt Romney’s “47%” speech to frame him as a class warrior, they highlighted his choice of statistic rather than the arguably more damning value claim that followed – that his “job is not to worry about those people” because Romney could “never convince them that they should take personal responsibility and care for their lives”.
To some commentators, incidents like these, where debates over value-laden issues are presented as disputes over underlying facts, are a symptom of our broken politics: a political battlefield in which each side’s army lives in its own echo chamber and consequently ends up with a distorted and selective understanding of reality.
This is the phenomenon that libertarian Julian Sanchez, referring to conservative intellectual circles in particular, has dubbed “epistemic closure”. Where Haidt’s approach sees polarization as rooted in our most fundamental values, the epistemic closure argument implies that an increasingly divergent set of factual beliefs is to blame. In combination, Haidt’s argument and Sanchez’s add up to an epigram popularly attributed to Daniel Patrick Moynihan. “Everyone is entitled to his own opinion…” goes Haidt’s reasoning, “…but not to his own facts,” says Sanchez.
I do not dispute Haidt’s point that moral polarization threatens our democracy or Sanchez’s claim that the right is particularly guilty of epistemic closure, but I would like to argue that the way of thinking summarized by Moynihan’s quote tries to separate facts from values in a way that leads to bad diagnoses of our political problems. To tell the story of polarization accurately, we need to understand the ways that facts and values interact.
The philosophical distinction between facts and values is at least as old as David Hume’s description of the “is-ought problem” – namely, that there is no obvious basis by which we can make the leap from what “is” to what “ought” to be, or how we “ought” to behave.
Since Hume, political theorists who want to use moral language in their arguments have generally needed to assume some set of shared principles providing a basis for democratic discourse. In theory, Americans have the kind of deep, shared political tradition that should allow us to debate values on some common footing. Unfortunately, polarization has driven the left and right so far apart on even the most basic questions that liberals and conservatives rarely even need to pit values against each other explicitly.
For example, consider the debate over strict voter ID legislation, which hinges on a dispute over whether large numbers of fraudulent votes are actually cast. This should be a straightforwardly empirical question, but since evidence of absence is difficult to prove, no number of government investigations or independent studies has been enough to shake the belief among activists on the right that voter fraud is rampant.
Starting from such disparate factual premises, liberals and conservatives each arrive at the conclusion that their preferred voter ID policies are the best under any reasonable value system. Tea Party activists like True The Vote claim (with little or no evidence) that strict ID laws actually boost participation by increasing voter confidence in the process, while liberal activists argue that putting up arbitrary barriers to voting undermines the integrity of elections in its own right.
An extremely powerful neutral arbiter might be able to resolve this dispute by, let’s say, conducting a massive randomized trial of each side’s proposed policies to examine their effects on voter confidence and participation. Since such neutral intervention is unlikely, however, liberals and conservatives have to fall back on their preexisting understanding of the way the world works to judge whether it is more likely that our democracy is presently under threat from fraud or from impediments to participation. This kind of judgment about what constitutes a threat is bound to be highly subjective – as political theorist Corey Robin has argued in the context of national security – so it’s unsurprising that liberals are more attuned to perceive the threat from exclusion by the powerful, while conservatives are warier of rule-breaking by the socially marginalized.
It is tempting to describe this as motivated reasoning – values inappropriately coloring views of factual questions – but only in the way that all reasoning is motivated. People tend to believe that their values express deep truths about the way the world works, so they naturally fall back on those truths when they think about complicated "empirical" questions, such as what factors promote political participation.
Thus, having framed the danger so differently, Republicans and Democrats can each claim that their solution is the one that will preserve fairness AND integrity in elections. You don’t need to find both arguments equally plausible to see that disputes in which each side claims to represent all respectable values are bound to heat up.
This state of affairs makes life difficult for the institutions that attempt to mediate conflict in American politics, from nonpartisan think tanks to newspapers to debate moderators. Institutions like these are not necessarily unaware that values are inextricable from political inquiry, but generally insist that those values do their work only after the examination of the facts has taken place. Ideally, these neutral arbiters would like to take complicated policy questions, process the facts, and boil them down to a question of values. In other words, they want to be able to say “Policy A and B are both reasonable options, but if you value X you will support A, and if you value Y you’ll favor B”.
The problem with this approach is that values can’t simply be set aside until the fact-based part of the investigation is over; facts and values are entangled every step of the way.
As an example, my former employer, the Progressive Change Campaign Committee, was one of several progressive organizations that spent much of 2011 and 2012 combatting future Republican vice-presidential candidate Paul Ryan’s budget plan. Because Ryan’s plan would have phased out Medicare’s guaranteed benefits (while guaranteeing them for current seniors) in favor of a subsidy that the elderly could use to buy private insurance, we attacked Ryan for wanting to “end Medicare”. PolitiFact.com, the Tampa Bay Times’ fact-checking operation, called this charge by Democrats its “lie of the year”, reasoning that, under Ryan’s plan, the government would still run a program named “Medicare” that provided (greatly diminished) health benefits to seniors. Progressives were unconvinced. “According to [PolitiFact's] logic”, argued blogger Jed Lewison of Daily Kos, “if the FBI were replaced with a voucher program wherein citizens would receive subsidies for hiring private investigators to look into criminal activity, but the agency running the voucher program were still called the FBI, it would be unfair to say that the FBI had been ended”.
Lewison’s argument hints at why this question is such a difficult one. Settling on a neutral vocabulary should be the first step in examining the facts underlying a policy dispute, but even that first step is liable to founder on value questions. In this case, it’s not possible to agree on the meaning of the word “Medicare” without arguing about the program’s purpose.
Those whose liberal values lead them to view the program as just one element of the broader safety net for the most vulnerable look at Ryan’s plan and see an attempt to shatter the political coalition behind the welfare state. Those who see Medicare through a conservative frame – as a hard-earned reward for seniors, particularly those whose tax dollars have already paid into the program – see Ryan’s plan as an effort to preserve the essence of Medicare in an age of tight budgets. Since there is no way to resolve this definitional dispute without making reference to questions of policy and value, it is no wonder American political debate feels so totalizing – as though we can’t agree on anything without arguing about everything.
Failing to understand the relationship between facts and values leads to nonsensical prescriptions for policy and political strategy. The most prominent of these is probably the national punditry’s obsession with bipartisanship, particularly the idea that it is possible to build consensus on hotly contested issues by finding propositions on which all reasonable Americans should be able to agree. A bipartisan commission has been proposed at one time or another as a solution to just about every major policy issue – most recently by President Obama as a response to 2012’s numerous election administration fiascoes. The idea behind commissions like this one is that, even in an age of polarization, Americans share enough fundamental values to find common ground if only they could work from roughly the same set of facts.
This kind of end-run around polarization fails because it doesn’t account for the role ideology plays in choosing which facts are worthy of consideration.
For example, most liberals were unenthusiastic about the Simpson-Bowles commission on deficit reduction from the moment it was proposed, suspecting (correctly) that it would seek large cuts to entitlement programs. To center-right commentators like Joe Scarborough, this is the left-wing equivalent of climate change denialism: after all, how can you oppose deliberating about the best way to reduce the deficit?
What opponents of the commission recognized, however, was that the moment a commission was created to focus specifically on deficit reduction, values had already done 80 percent of their work by identifying the deficit as a pressing national issue above, or even merely alongside, climate change or inequality. This framing empowers pro-austerity advocates and proposals at the expense of alternatives in a way that liberals foresaw would inevitably produce recommendations they would oppose.
For the same reason, conservatives are unlikely to embrace Obama’s commission on election administration, or any of numerous bipartisan examinations of the nation’s gun laws proposed in the wake of the Sandy Hook shootings. Technocratic solutions are workable for issues where there is widespread agreement on a policy goal, and disagreement about the best way to achieve that goal doesn’t break down along left-right lines – see, for example, the commission that makes decisions about military base closings. Absent those conditions, though, we’re not likely to find a magic bipartisan formula for eradicating disputes over values.
As an alternative to the search for common ground on values, some political observers simply embrace the fact-value distinction and argue that we need to set values aside as best we can when thinking about policy. This strategy was popular on the left during the Bush years, when Republicans seemed to have a stranglehold on the rhetoric of values (with Moynihan’s quote as a rallying cry for liberals).
Take, for example, Ezra Klein’s 2007 article “Overvaluing American Values” in the progressive American Prospect. Making the case against future Obama State Department official Anne-Marie Slaughter’s case for an American foreign policy based on liberal values, Klein wrote that he was “fed up with values. Entirely. They've failed this country. As a lodestar, there is none worse”. Instead of relying on value judgments, Klein argued, we should assess our foreign policy by its likely consequences.
But appealing to consequences doesn’t eliminate values from the discussion. For instance, we can’t judge whether consequences are good or bad without choosing criteria:
How do we weigh the lives of Americans against those of foreigners?
How valuable is liberty relative to order?
Setting those questions aside, though, it’s not possible to think about an issue as complex as the likely effects of American foreign policy without leaning heavily on values. Those who believe strongly in the importance of discipline and resolve, whether in domestic policy or individual behavior, tend to think that a foreign policy with those qualities will produce good consequences. The same is true for people who believe in the value of cooperation. Calling this phenomenon “issue constraint” – to use the language of political science – doesn’t change the fact that we have to use values in this way in order to think about complicated political issues.
As I’ve tried to demonstrate, separating facts from values in thinking about politics is bound to lead to frustration and accusations of bad faith. Humans don’t blindly deduce facts from their ideology, nor do they arrive at values after a careful consideration of empirical data.
Rather than trying to puzzle out which of our political disagreements are “really” matters of fact or value, I would propose that we think about the problem another way: people generally have some intuition about the way the world works that guides both the facts we perceive as important and the values we profess.
This point has special relevance for American liberals (like myself), who occasionally wring their hands because they have difficulty articulating unifying, bright-line principles comparable to those professed by, say, Ayn Rand acolytes. Progressives, just like libertarians, have strongly held beliefs about the way society operates that underpin our empirical sense, our policy positions, and our values.
A good example is the progressive tenet that disparities of social and economic power have far-reaching consequences. Is this a statement of values masquerading as an empirical claim, or the empirical basis from which progressives derive the values of fairness and social justice?
Again, I would argue that the distinction doesn’t matter. What does matter is the fact that beliefs like this one shape every aspect of our engagement with politics.
For example, if conservatives make an argument to the effect that some minority group is behaving irrationally and irresponsibly, the belief described above governs the way progressives will evaluate that claim – in this case, by suggesting that we should first investigate whether social marginalization is actually the root cause of the conflict. Principles like this one aren't guaranteed to lead us to an accurate understanding of the issue, but we can say that anyone who subscribes to a philosophy of social justice is essentially claiming that this worldview is a better guide to thinking about politics and society than the alternatives.
Adopting this view of the relationship between facts and values is not cause for optimism about the state of political discourse in the United States. Liberals and conservatives probably will not find it possible to set values aside to focus on the facts, or use the language of values to break the cycle of epistemic closure. Instead, left and right will have to argue about the merits of the worldviews that underlie our policy disagreements.
We can try to convince our political opponents with the language of facts or the language of values, but no number of bipartisan commissions will make this kind of debate any less necessary.