Something Worth Fighting For

Let’s say that someone you knew bought a 1968 Shelby Mustang GT500KR back when they first rolled off the assembly line. They loved this car and took very good care of it. Whenever a part began to go bad, it was immediately replaced before anything could damage the car. They kept this car for the past 50 years. Over that time, 90% of the parts in that car were replaced with new parts. They now want to sell the car to you, and they say that it is an original 1968 – are they telling you the truth? Or is it now a completely different car than the one they bought 50 years ago?

This, of course, is a modern re-telling of the Ship of Theseus. The reason I ask is that this applies to more than just objects; it applies to ideas as well. Ideas mutate and evolve over time. Some aspects become obsolete, emphases are changed, new thinking is added, and sometimes ideas are rejected completely. Like switching out the different parts in our muscle car, these changes are due to the emergence of new information and technology, along with the growing and shifting social, political, and philosophical milieu.

And when I say “fighting for” something, I don’t necessarily mean physically fighting for it, but also advocating and arguing in favor of, and being willing to align oneself and take a position for, a particular set of beliefs, ideals, and principles.

Christianity, for example, as it is understood nowadays, is very different from its original conception – so, is it still the same thing that the early followers had in mind when they were persecuted for their beliefs? Is Christianity still the same thing that the Medieval people had in mind when they persecuted others in its name?

What about the United States of America? Certainly the country is very different from the one the founders understood – now slavery is abolished, women have equal rights, and our government involves itself in the affairs of every other country. So, when someone says they are fighting for America, what does that really mean?

And, more interestingly, will people in the future think that what you now believe is worth fighting for was worth fighting for at all?

What comes immediately to mind is the Confederacy during the American Civil War. They believed they were fighting for something noble and just, and now most people regard their cause as anywhere from misguided to despicable. But what if the Confederacy had won the American Civil War – would their cause be seen as righteous and just, the way the Union is often portrayed? Which raises the question – will the way the future judges us be based solely on which ideas win out over the others, or will the future be able to judge what we fight for objectively and see an idea, even if it “loses” the fight, as better than one that may have “won” the fight?


[Protesters in Durham, North Carolina, toppled a statue called the Confederate Soldiers Monument.]

It also brings to mind the people fighting during the 30 Years War – many atrocities were committed, millions killed through warfare, disease, and starvation, ruinous destruction wrought on the people of Europe, and yet nowadays most people don’t even know that this war happened, much less what it was even about. But the people fighting it (or, at least, funding it) thought it worth the catastrophic consequences. Which raises the question – are your ideas worth fighting for if they will simply fall by the wayside in history, forgotten by posterity? What if they only remember what you did for your cause, but not why you did it? How will you be judged?


[Mass grave from Battle of Lützen, 1632, during the 30 Years War.]

What about humanity’s greatest experiment with implementing an idea – Communism in places like China and the Soviet Union? Untold millions suffered and died for this grand experiment, with the world brought to the precipice of thermonuclear annihilation, only to have it all fail. Now that we are in their future, with the Soviet Union in our past, would we deem Stalinist or Maoist Communism to have been worth fighting for? At the time, many people certainly believed in those ideals enough to kill and die for them. Now, it seems, all of that suffering was for nothing.


[Propaganda poster from Mao’s Great Leap Forward program, which resulted in the government executing 550,000 people and an estimated 16.5 million to 40 million people starving to death.]

So how does one know what to fight for now, given that it may be forgotten by posterity, or deemed misguided or even evil? Is it worth killing for a cause that will be judged so harshly by our descendants? Worth dying for? What if the ideas you believe will make the world a better place get put to the test, and it turns out that they make things worse for everyone? And if these questions are paralyzing, what if not fighting for anything is even worse than fighting for the wrong thing?

Is there something worth fighting for?

[Featured image is from Kharkov in the Soviet Union, 1933, during the Holodomor, where estimates of 2 million to 10 million people starved to death due to Communist collectivization policy.]

Sweet Little Lies

Is it better to believe in pleasant falsehoods or unpleasant truths?

Having a sense of purpose in one’s life leads to better cognition, resilience to trauma, positive self-image and lawful behavior, as well as longevity. This has been utilized by programs like Alcoholics Anonymous and Narcotics Anonymous – the insistence on accepting a higher power and devotion to helping other struggling alcoholics and addicts is said to be the key to the success of these programs. What this doesn’t say is whether someone’s chosen purpose has to be true (or even morally good, but that may be a topic for another article). In other words, even if the purpose to which one has devoted their life is false, it could still have positive benefits for the individual.

Religious people have bemoaned the decreased emphasis on religion, arguing that it leads to dissatisfaction and unhappiness. It’s also possible that this loss of higher purpose has led to sociopolitical woes – in the absence of higher meaning, people seek purpose in other, less healthy outlets, such as drugs and sex. But it could also explain the rise of identity politics – without a sense of higher purpose, people find meaning in their various intersecting identities. Being female, or male, or gay, or straight, or transgender, or black, or white, or working class, or disabled, or liberal, or conservative…these things begin to provide the meaning that people no longer get from religion. Unfortunately, as is playing out, these sources of purpose are divisive and provide no overarching, unifying narrative to give meaning and purpose to everyone as a whole. Instead, our chosen purposes put everyone at odds.

But what if something like religion is no longer able to give meaning? What if belief in a higher power that gives us an overarching, unified meaning is now obsolete or impossible? This may be the legacy of globalization, multiculturalism, and scientific progress – a strange concoction of moral relativism and materialistic reductionism. So we find out that God doesn’t exist (or, at least, has yet to intervene and set us straight on the issue at hand) and we are condemned to freedom. As such, people have come to different conclusions on how to live The Good Life here on earth – each of them the flawed brainchild of human beings in all our biased thinking, all of them equally valid without an ultimate authority, mutually incongruous, and yet unable to avoid one another in a world shrunk by globalization and constant connection, leading to polarization, division, and conflict.

So what’s the answer? Do we continue believing the truth – that there is no ‘right’ answer to the question “what is my/our purpose in life?” and that we are permitted to do anything, no matter how bad it is for ourselves and those around us? Or do we take the Leap of Faith that Soren Kierkegaard suggested – that, given what we know in our modern times, we tell ourselves Sweet Little Lies that will make our lives better? The former has obvious real world consequences, as discussed above, but the latter requires us to commit what Albert Camus called philosophical suicide by attempting to make transcendental meaning out of meaninglessness (to his credit, I imagine Camus would also condemn something like identity politics as a source of purpose as philosophical suicide, too).

This discussion may seem somewhat academic, but I contend that it has very important ramifications. How will we reconcile the truth that meaning is merely what we make it with the need for a higher purpose? I hope I am being melodramatic when I say that our civilization may depend on this reconciliation, but more and more I feel like this isn’t too much of an exaggeration.

Divided States of America

Anyone who pays attention to the news in the United States will at least be vaguely aware of the sociopolitical rift that has opened recently (1) (2) (3) (4). Whether this rift had been hidden for some time and has only just recently been brought to the surface, or if this is a new phenomenon in American culture, no one can say for sure.


When it comes to these issues, subjective truth is often more salient than objective truth. It doesn’t matter whether immigration – both legal and illegal – is up or down, whether this has consequences for job markets, crime rates, and terrorism, or whether there is a concerted, racially motivated effort to crack down on immigration. It doesn’t matter if high-profile police shootings of black people are part of an epidemic indicative of long-standing institutional racism in America or just a false narrative perpetuated by those who hate police and law-and-order or simply want special privileges for particular groups of people. It doesn’t matter if all transgender people are beautiful and inspiring heroes who are deserving of our utmost admiration, a mixed bag of people just trying to live their lives in a way that best suits them, mentally ill victims in need of psychiatric attention to alleviate their suffering without mutilating their bodies, or menacing perverts who will sink to any level of depravity to pursue their fetishistic desires. It doesn’t matter if everyone on the right is a money-grubbing, capitalistic, environment-hating white supremacist or not, or if everyone on the left is a cis-hetero-white-male hating neo-Marxist identitarian slouching toward Sharia Law or not.

What matters, as far as social interactions, voting trends, and policy decisions are concerned, is that people believe these things are true, one way or the other.

But why do people believe the things they do? And why is there a divide? And what is it that brings people together in the first place? Let’s go through these questions one at a time.

  • Why Do People Believe the Things They Do?

It turns out that there is a large biological component to why people have the sociopolitical views that they do. For example, a twin study, conducted in five democracies (Australia, Denmark, Sweden, Hungary and the US) using a sample size of 12,000 people over the years 1980-2011, found that genetics played a significant role in people’s political self-identification. These results have been confirmed in other studies (1) (2) (3).

In addition, political self-identification is strongly correlated with personality type using both the Myers-Briggs Type Indicator (MBTI) and The Big Five (1) (2) (3).


There are even brain imaging studies that show anatomical differences correlated to different political beliefs (1) (2) (3).


This can be troubling, since people use what is called motivated reasoning when it comes to accepting different conclusions. This means that we start from preferred conclusions and then look for evidence to support those conclusions, as opposed to formulating conclusions based on the known evidence. In other words, humans are lawyers, not scientists. And once we reach a conclusion, our confirmation bias allows us to continuously find evidence to support those conclusions we’ve already accepted. The backfire effect makes it difficult for anyone to convince us otherwise, regardless of the facts. And our bias blind spot leads us to believe that we alone are immune to all of this.

Thus, a genetic predisposition for certain sociopolitical beliefs, along with a propensity for motivated reasoning clad in the armor of confirmation bias, the backfire effect, and the bias blind spot, can make it difficult to achieve reconciliation when divisive issues are given primacy in the national conversation.

  • Why Has the Sociopolitical Division Become so Large?

Studies show mixed results about how much the media affects political polarization (1) (2) (3): there is a positive correlation for people who have a high interest in politics and in consuming traditional media, but this doesn’t fully explain the recent extreme polarization, especially since news media has been around for a very long time. One hypothesis is that social media, which is a new phenomenon compared to traditional media, is the driving force – whether through the creation of ideological echo chambers or the spread of fake news. However, the evidence doesn’t bear this out. The most extreme polarization comes from people who use social media the least (1).

So, what are some other possible explanations for the recent sociopolitical division?

In addition to more access to our own preferred narratives, the rhetoric has become more polarizing and divisive. When political/ideological opponents are not viewed as people who have the welfare of their fellow countrymen in mind but use a different approach to achieving that end, and instead are characterized as people acting in bad faith, or with malicious, self-serving, or spiteful intentions, it creates a whole new dynamic. In the former case, political and ideological opponents are people who can be engaged in dialogue, with whom ideas can be discussed and compromises can be reached. In the latter, political and ideological opponents are beyond the pale. They are not worth engaging in dialogue, since they will only do so in bad faith, and all that will result is giving legitimacy to ideas not worth considering, ideas that may in fact be reprehensible. Thus, it is not necessarily that our information is being catered to our preconceived notions, but the atmosphere in which the information is being discussed. Instead of constructing a reasoned argument for why a political or ideological opponent is incorrect or misinformed, all one needs to do is denounce them categorically. Virtue signaling trumps reasoned argument.

What we can say about the recent sociopolitical polarization, which may be a result of the divisive rhetoric, is that it seems not to come from increased extremism in one’s own belief in their own ideology, but from an increased mistrust of those considered to be part of the opposition. Interestingly, mistrust of one’s perceived ideological opponents is also correlated with a recent decline in trust for the government. What is also interesting is that distrust in the government leads to more calls for increased government regulation. It’s possible that the recent sociopolitical polarization could have something to do with each side perceiving the other as attempting to use the government as a weapon to impose their own ideology on others. As each side becomes more distrustful of the other, they more fervently attempt to stymie each other’s legislation, vote for people who will more vehemently oppose the other side, and enthusiastically support politicians who will use stronger verbiage to attack opponents. This leads to more distrust in the government, which results in calls for increased regulation as an attempt to rein in the other side.

But a primary driver of sociopolitical division is identity politics along with relative deprivation. The first aspect, identity politics, has been widely discussed lately. This is the phenomenon of viewing social and political interactions, decisions, and conceptual frameworks through the lens of one’s self identity. This alone isn’t enough to cause sociopolitical rifts. But along with relative deprivation – the fact or perception that one’s particular identity group(s) are being treated unfairly or having access to resources or political influence restricted by those outside the identity group – identity politics can result in division and conflict. This phenomenon has occurred on both the left and the right – the left focusing on the present and historical relative deprivation of minorities, the right focusing on the resulting relative deprivation of cisgendered, heterosexual, white males from this recent shift in ideological emphasis on the left. Where this initial ideological reorganization on the left originates is up for debate, but many issues may be contributing factors – changes in parenting styles as a result of current scholarship, the relative lack of external threats to personal safety and well-being that results from living in a civilized society, the inevitable decline of national identity and subsequent replacement by moral decadence that befalls every hegemonic power, an extreme over-correction of past wrongs on account of guilt brought on by increased awareness, paternalistic government expansion that results in infantilizing the populace, etc.

  • What Brings a Population Together?

The identification of a population as a cohesive group is based on shared qualities of race, ideology, and culture. Race is an easy one to understand – people inherently identify with those who share common physical features (1) (2) (3). Ideology refers to a shared set of specific beliefs. For instance, in the United States, the belief that all people are morally and legally equal (ie despite obvious differences in intelligence or physical strength between individuals, it is just as wrong to deprive one of life as another, and each should be treated equally by the law), that we have the right to free speech, etc. Culture refers to a shared set of traditions, social mores/taboos, values, and language. For instance, in the United States, Christmas is widely celebrated, even by people who don’t believe in Christianity; money and material possessions are valued as a measure of social status and success; and most people expect those around them (at least while within U.S. borders) to speak English.

The U.S. was unique in that it did not have origins in racial similarity. Yes, the founders were white men, but the U.S. was and is a country of immigrants (both voluntary and involuntary). Even if the credo that “all [people] are created equal” wasn’t realized from the start, it was an eventual, and inevitable, destination. The U.S. instead originated from a shared ideology – namely, the ideology that all people are morally and legally equal – that resulted in a shared culture. A culture that valued chasing the vaunted American Dream – that one’s children will live a better life than they had.

What seems to have changed recently, on both the left and the right, is that this has been turned on its head. American, and more generally, Western culture has been demonized as inherently oppressive, through the denigration or appropriation of certain heritages. The ideology stands accused of being hollow and unequally applied. Instead, we are told, race is what matters. Your rights are determined by your race (or, oftentimes, sex) – whether you are white or non-white – and all conflict is based on the relative deprivation of certain races (or sexes). This will not result in national unification, but only further division. There are those who may see this as a desirable result, and in fact may be actively working towards these ends, but I don’t foresee this resulting in anything good.

I don’t have any particular prescriptions for how to fix any of this. I can say, broadly speaking, that if the U.S., and the West in general, doesn’t find a common ideology and culture once again, we will only further divide.

And Expecting Different Results

Imagine that you live in a 200 meter by 200 meter square cage of solid walls with a ceiling 10 feet above you and a dirt floor. This ceiling contains two barred openings in two of the corners, which allow you to see the sky and obtain water from the rain. One of these barred openings is 100 feet by 100 feet; the other is only 10 feet by 10 feet. You have no idea how you got into the cage, why you are there, or even really how long you have been there.

Now imagine that you are not the only one living in this cage. There are nine other people living in the cage with you, none of them knowing how they got there, either. As far as any of you know, you have always lived in this cage. Out of the ten total people, including yourself, two groups have formed. Each group has four loyal members, led by one charismatic leader. You and one other person are only loosely tied to your respective groups. The group you are in has taken residence in the corner of the cage with the 10 foot by 10 foot opening in the ceiling, while the other group was able to take the prime real estate near the 100 foot by 100 foot opening, where they have had success in farming the dirt floor, while your group has had only modest success. This has created an imbalance within the cage, where your group depends on theirs for food.

But there are also four barred doors that lead to the outside of the cage, one on each wall. Outside these bars there is also a solid door that can be opened or closed, and the people inside the cage can determine which. Thus the people within the cage have a tenuous connection to the outside. But the doors only seem to open into a large, dark, and empty corridor. And yet, mysteriously, when the solid doors are open, food rations are deposited at the doors seemingly at random. But necessity is the mother of invention. As the other group is content to farm, whenever those solid doors are open, your group stakes out the four doors and retrieves the rations when they appear.

The result is that when the solid doors are closed, the other group thrives; when the solid doors are open, your group thrives.

Every year, on the same day, ten consoles rise from the floor in the middle of the cage. The consoles contain three buttons, labeled Open, Close, and Leave. If most people vote Open, the solid doors will be open, allowing for outside food to come in and your group to thrive. If most people vote Close, the solid doors will be closed, making it so only food grown inside the cage is available, allowing the other group to thrive. If most people vote Leave, the solid and barred doors will all open, allowing you and everyone else in the cage to leave to an unknown fate. If it is a tie, the previous year’s state of affairs will remain in effect.

Finally, one year, just days before the election, you bring up to your group the strange fact that nobody has ever voted “Leave” before. The others scoff at you. You’re that disloyal person anyway, who has voted to close the solid doors a few times in the past. You did this because you saw the other group’s devastation when the doors were open and you were sympathetic. Their crops dried up, their people begrudgingly taking scraps from your group rather than putting in the effort to make their own. But, of course, the other group has one person who has voted to open the solid doors a few times. The two of you have been chastised for being ‘undecided’ and ‘independent’ before, so your bringing up the third option is not all that surprising to your group.

“Leaving might be better than either other option,” your group leader points out, “but it will never win. Voting to leave is the same as voting to close off those solid doors. It’s throwing your vote away. It’s more important to make sure they don’t win, because closing those solid doors would be worse than having them open.”

“But having the solid doors open is worse than being able to leave,” you point out.

“We don’t know what’s outside the cage,” your group leader says, “for all we know, it might just be more cage, or someone out there might just trap us in another cage that’s even worse than the one we’re in.”

“But if we keep voting for the same two options every time,” you argue, “there is a one hundred percent chance of us being trapped in a cage. If there is even a one percent chance of no longer being in a cage by voting to escape, isn’t that worth pursuing?”

“It’s just not going to happen,” your leader says in a condescending tone, “this is just the way it’s always been.”

“But everyone agrees that they would rather not be stuck in the cage,” you say, “so why can’t we have bipartisan support for Leaving? If we all decided not to conform to the Open-Close duopoly, there wouldn’t be a need to vote strategically for the lesser evil, in which either choice, Open or Close, still means being inside the cage.”

“By voting Open or Closed,” you continue, “you are essentially voting for me, and everyone else, to remain inside the cage. The real choice isn’t between Open or Closed, but between Imprisonment and Freedom.”

“That something has been the way it is for as long as you can remember,” you plead, “is not a good reason to continue keeping it that way. Besides, there must have been a time in the past when we were not inside the cage, even if we don’t remember being outside, nor how we got inside, which was almost certainly not voluntary on our part. Someone put us here, and we have the means to escape, yet we choose not to. We continue choosing the same two options, but the ultimate result is that we remain in this cage, telling ourselves that we are free as long as it is our side who wins the next vote.”

“But this is a lie,” you say, “our side is the side that gets all of us – our group and theirs – out of the cage. There is no freedom for us if the solid doors are open, and there is no freedom for them if the solid doors are closed. We all know the two options are not good, yet we keep ourselves imprisoned merely to stop the people we think are our enemies from falsely believing they won. We think that ‘winning’ the next vote will make us free. But insanity is doing the same thing over and over again and expecting different results.”

Final vote:

Solid doors closed: 5

Solid doors open: 4

Leave: 1

You all remain in the cage.
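The cage’s voting rule – plurality winner, with ties preserving the previous year’s state – can be sketched in a few lines of Python. The function name and structure here are my own illustration of the rule as the story describes it:

```python
# Minimal sketch of the cage's voting rule: the option with the most
# votes wins, and a tie between the leading options keeps the status quo.
from collections import Counter

def resolve_vote(votes, previous_state):
    """Return the winning option, or previous_state on a tie for first."""
    tally = Counter(votes)
    top_two = tally.most_common(2)
    if len(top_two) > 1 and top_two[0][1] == top_two[1][1]:
        return previous_state  # tie: last year's state of affairs remains
    return top_two[0][0]

# The final vote from the story: 5 Close, 4 Open, 1 Leave.
votes = ["Close"] * 5 + ["Open"] * 4 + ["Leave"]
print(resolve_vote(votes, previous_state="Open"))  # → Close
```

Note that Close wins here with only five of ten votes – a plurality, not a majority – which is exactly the dynamic the story is about: as long as Leave splits off only one voter, the duopoly decides the outcome.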

American War Since the Cold War

Disclaimer: this post was inspired and largely (but not completely) influenced by the book “America’s War for the Greater Middle East” by Andrew Bacevich, a former military officer who began his career in Vietnam in 1969 and retired as a colonel after Desert Storm, and who has since become a military historian critical of American militarism (but possibly not for the reasons you think). For a much more in-depth look, read his book, or listen to it on Audible (~15 hours) like I have done over the course of a few weeks (while falling asleep after long days of being a biochemistry graduate student).

With the 2016 election coming up, militarism is something I’ve seen a bit about (with the limited time I’ve been able to devote to politics and current events). I am a self-described libertarian (or self-confessed, depending on how you view libertarians), so I have no illusions about my anti-war stance. That makes seeing pro-war people much easier – to someone like me, everyone seems pro-war to some extent. That being said, liberals make Trump and Cruz out to be warmongers on the scale of Hitler or Mussolini, and libertarians make Hillary out to be worse than either of them on war, for reasons I’ll go into.

So, the American adventure into the Middle East didn’t start on September 11, 2001. America has been involved with the Middle East since the British carved up the region after World War I (1914-1918). But the real intervention started during the Cold War (1946-1990), most notably after the Iranian coup in 1953, when the democratically elected Mohammad Mosaddegh was overthrown with American help to install the American-friendly Shah Mohammad Reza Pahlavi. The other notable American intervention was its steadfast support of Israel after 1948, which saw occasional disagreements, but remained an alliance due to A) geopolitical Cold War reasons, B) cultural reasons, such as American Christians identifying more with European Jews than with Middle Eastern Arabs, and C) religious reasons (particularly among evangelical Christians, who saw Jewish occupation in Christian prophecy).

Although the Israel/Palestine conflict has often been held up as the primary conflict in the region, nationalism has been an enormous part of Middle Eastern culture. It has sprung in many ways from the Israel/Palestine conflict since the Balfour Declaration during World War I, but even more so since World War II and the winding down of European colonialism (i.e. post-colonialism in Asia, the Middle East, and Africa). This nationalism has taken two forms. The first has been Arab nationalism, which seeks a pan-Arab nation. Muammar Muhammad Abu Minyar al-Gaddafi (leader of Libya from 1969-2011) was a self-proclaimed Arab nationalist who had run-ins with the USA as early as the 1980s. The second was Islamism, which started as early as the 1700s with Muhammad ibn Abd al-Wahhab, continued in the 1800s with Sayyid Jamāl al-Dīn al-Afghānī, and was modernized by Sayyid Qutb for people like Ayman Mohammed Rabie al-Zawahiri and Osama bin Mohammed bin Awad bin Laden.

The events in the Middle East that ramped up America’s interest in the region primarily started in 1979. The first was the Iranian revolution that overthrew the Pahlavi regime and installed the Ayatollah Khomeini regime. The second was the siege of Mecca in Saudi Arabia. And the third was the Soviet invasion of Afghanistan. The first and the third had immediate influence on American foreign policy. Iran was at best an American ally, at worst an American colony – such was the motivation for revolution. The third was an immediate threat by the then-arch-enemy Soviet Union, yet it was also motivated by the Iranian revolution. The American Cold Warriors thought that Soviet intrusion into Afghanistan would quickly and easily bring the Soviets into Iran and give them more control over the Iranian oil fields. This prompted the Americans to give arms and aid to the Mujahedin fighting in Afghanistan, despite the fact that most of them held anti-American views.

Osama bin Laden, a rich citizen of Saudi Arabia, was a huge benefactor in the Soviet-Afghan war. He made many friends among the Mujahedin by funding their war and participating in the conflict.

The Afghan war lasted through the 1980s. Also in that time was the Iran-Iraq war. Hitler used gas on the Jews during his atrocious, inconceivable, and damnable holocaust. Yet the only two times gas was regularly used on the battlefield were in World War I and during the Iran-Iraq war. The American government sided with the chemical-weapon-using Iraq, which used chemical weapons on the Iranians as well as on the Shia and Kurds who lived within Iraq (the minority Kurds received American sympathy afterwards for a time, but the majority Shia got the cold shoulder). The Americans looked the other way on the cruel practices of their allies. Even after the USS Stark incident, the American government sided with Saddam Hussein.

And this hardly covers the Iran-Contra scandal, in which Oliver North, working within the Reagan administration, sold arms to Israel, which then sold the arms to Iran, in order to get Iranian help in freeing American prisoners in Lebanon as well as to illegally make money under the table for the Contras in Nicaragua to battle the Sandinistas – which may have also involved the cocaine trade.

The Gulf War (i.e., Desert Storm), which came only shortly after the Iran-Iraq war in which America had sided with Saddam Hussein’s Iraq, happened for three reasons. The first was obviously oil. Even in the early 1990s the politicians weren’t too embarrassed to admit that, and the public wasn’t yet conscious enough to realize that nations outside the United States had issues of their own that might influence global trade. The second was that the United States was still embarrassed by what had happened in Vietnam. For more on that second issue, I definitely recommend Andrew Bacevich’s book; I could never make the case he does in this space.

But the third, and real, reason was to establish a foreign policy doctrine. Here comes the doctrine:
1) Despite the fact that the Nuremberg trials found preemptive war to be a war crime (Nazi Germany invaded the Soviet Union preemptively, claiming to forestall a Soviet invasion of Germany), America will wage what was previously judged a war crime and demonstrate that it is necessary, declaring itself exempt from the laws against preemptive war; this is because new technology supposedly makes those old prohibitions obsolete, but only for America (which, with the collapse of the Soviet Union, now exercised unrestrained power). 2) America alone has the willingness and capability to exert its diplomatic and military power, now that the Cold War is over.

The first is somewhat self-explanatory. It says that America will do what it wants, when it wants, as long as it perceives a threat to its interests. The second has a bit more impact. If America tells you to do something, you had better do it, because nobody is coming to rescue you, and you have no chance of defeating America. It is a lesson taken to heart by the American military machine. This is a conceit seen plainly in America’s involvement in the Iraqi no-fly zones, Somalia, Bosnia, and Kosovo. I could spend an entire blog post on each of these, but it would probably serve you better to just click the links and read about them, particularly the Iraqi no-fly zones. I remember hearing about them as a kid, and I simply thought that the Americans and Iraqis had agreed that Iraqis would not fly in those zones. It turns out that the Americans were shooting down Iraqis in those zones, and that the Iraqis had never agreed to any of it. In other words, it was a continued war that few realized was happening, but it went on for more than a decade, along with draconian sanctions that caused the suffering and death of literally millions.

The Iraqi embargo was an American version of the Siege of Leningrad.

The attacks of September 11, 2001 came as a surprise to everyone aware enough at the time. I think, personally, I was not quite aware enough, despite being sixteen years old. I was in homeroom when I first heard that something was going on, which must have been sometime between 9:30 a.m. and 9:45 a.m. I also remember, throughout the day, rumors that other attacks had happened in places like LA and Chicago. Obviously, none of them came to pass. I was an asshole in my teens, and about the biggest emotion I remember from the time was boredom. I remember hoping that this would upset each class’s normal schedule. I specifically remember only two classes. The first was English, with a teacher I hated, in which the normal routine went as scheduled, which disappointed me. The other was my last class of the day, creative writing, in which we got to sit and watch the news for the whole period, with the expectation that we would take notes in our journals and then write a few pages about how we felt afterward. That seemed like a sweet deal to me, since it meant we didn’t really have to do anything except watch TV. I remember, after getting home, wishing that every channel on TV would stop talking about the 9/11 attacks and get back to more entertaining programming (the coverage went on for more than a few days on many channels, and yet I was so far up my own ass I didn’t realize that this meant it was kind of a big deal). I remember complaining about seeing the same footage of the twin towers falling over and over again on TV, rather than something I would rather see (reruns of the Simpsons, anyone?). Needless to say, I had no idea the level of impact the 9/11 attacks had, and would have, on both American and world affairs.

The invasion of Afghanistan came soon after. Support for George W. Bush skyrocketed. Everyone was a super patriot. American flags were everywhere. Nobody questioned why most of the hijackers were Saudi Arabian, nor why Osama bin Laden, the easily fingered ringleader, was also Saudi. And everyone had forgotten that the Soviet Union’s adventure into Afghanistan had ended in disaster. The idea of Afghanistan as the “graveyard of empires” was already in the aether. But it was only a decade after our supposed Vietnam-redeeming victory in Desert Storm and our purported success in the Balkans during the 1990s, and to boot, this was a just war against people who had attacked us first.

There was success in Afghanistan at first. The Taliban, having seemingly forgotten the guerrilla tactics that had won them victory in the 1990s after the Soviet-Afghan war, attempted to face American forces head on. They were defeated over and over again. But after the distraction of the impending Iraq war drew American attention away, the Taliban, alongside al-Qaeda, rediscovered their guerrilla and insurgent roots. America forgot about Afghanistan as Iraq loomed on the horizon.

Afghanistan is a war that makes sense to many Americans. Osama bin Laden had been given asylum in Afghanistan after being banished from Saudi Arabia (with some time spent in Sudan in between). He was sheltered by the Taliban, whose fighters he had helped during and after the Soviet-Afghan war. And he was in Afghanistan when 9/11 happened.

Although I also imagine that many Americans don’t know that Osama bin Laden was an Arab while the Taliban are not, and that al-Qaeda and the Taliban are not the same thing. Either way, I think most Americans would see Operation Enduring Freedom as a just war, regardless of their ignorance of the war aims or the local culture being faced, of what it would actually require to ingratiate ourselves, or of whether that culture even wanted what the West was offering, despite the propaganda that says everyone wants what Americans have.

The war in Iraq, however, was a hard sell. The justification was 1) that Saddam Hussein harbored weapons of mass destruction (WMDs), with the implicit assumption that he had the willingness and capability to use them against the United States (or, at the very least, US allies such as Israel) and 2) that the Saddam Hussein regime had ties to al-Qaeda, despite al-Qaeda despising regimes like Saddam’s for not being Islamic enough. Many in the American public did not buy this, particularly liberal-leaning people.

I was one of those liberal-leaning people. I remember in March of 2003, when the war began, having a debate with my co-workers at the restaurant where I worked. It was me and one of my managers against just about everyone else working there, with the two of us voicing our disapproval of the Iraq war. The conversation stayed civil, but it was an ongoing debate throughout the evening. This is about the first time in my life I remember being political about anything, and I remember being very charged up about the issue. I don’t think I really knew much of anything about politics at the time, except that I was against war, especially unjustified war. That is a position I maintain to this day: I am a libertarian, but above all, I am against war.

The Iraq war started in 2003 was supposed to be “shock and awe.” That phrase became somewhat of a punchline after what happened in Iraq, but given what the actual aims of the war were, it makes sense. One has to understand that all sorts of justifications for the war have been attributed to George W. Bush. All of them have some truth, but were almost certainly not the war aims. The first, which I’ve heard from numerous liberals, is that George W. Bush only wanted to go into Iraq because his daddy wasn’t able to finish the job. This is a very pedestrian motive that certainly fits with the liberal notion that GWB was an infantile-minded rube who would throw around billions of dollars and human lives for petty feuds like a Roman Emperor. There might be some enmity harbored by the younger Bush, but I hardly see this as a primary motive. Another two motives, which I will combine into one, and which I myself bought into for a long time, hold that the way the Iraq war went is exactly how the Bush administration wanted it to go: a long-term occupation that benefits a) the military-industrial complex and b) the oil companies. America’s prolonged presence would simply mean bigger profits for arms manufacturers, nation builders, and oil companies.

I don’t discount any of these motives, particularly the war-profiteering ones. I’m sure there are many businesses that saw their profits climb substantially during the Iraq war. But I am saying that none of these was the primary motive for the war. Once again, I attribute two motivations: 1) establishing the idea of preemptive war, and the American prerogative to wage it, and 2) demonstrating that America can easily overthrow a regime and install a new one.

Let us imagine that the Iraq war had been a resounding success. Imagine if the American military had gone into Iraq and overthrown Saddam within a matter of weeks and installed a functioning democracy a month later. What do you think Syria, Libya, and Iran would think? What would they do if America threatened to do to them what it had done to Iraq?

That is what the American government wanted to achieve: a military preeminence that gave it the Big Stick Teddy Roosevelt talked about. Do as we say, or you know what will happen. Iraq was to be the smoking gun the government could hold up to other countries, showing that it meant business. It was supposed to permanently throw off the humiliation of the Vietnam war, showing that the American military had the teeth to back up its hardline diplomacy.

The biggest failure, as far as the Bush administration is concerned, is that we didn’t win with “shock and awe.” That was the plan. We were supposed to be in and out quickly. We were supposed to take that smoking gun from the chest of Iraq, point it all over the Middle East, and ask who was next. Regimes such as Assad’s and Qaddafi’s were supposed to quail at our ability to overthrow governments and give in to our demands. The fact that the military-industrial complex profited mightily from Iraq may have been part of the plan, or simply an opportunity that arose, but we can say for sure that the real war aims were a failure.

At least, the second aim was: America has not shown that it is capable of easily overthrowing regimes. But America has shown that it is willing to engage in war preemptively. George W. Bush codified this with Iraq. The Obama administrations have taken it to the next level in Somalia, Yemen, Pakistan, Syria, and West Africa.

But what has changed the most between George W. Bush and Barack Obama is that open war itself has become too unpopular. So the new war is waged with drones and special operations, with little approval from, or knowledge by, the American people. This allows the regime to announce minimal “boots on the ground” while obfuscating how militarily involved the US government actually is.

The American public has forgotten about our wars. The liberals who protested George W. Bush have mostly stayed silent on Barack Obama’s military actions, no matter how close those actions come to war crimes. Foreign affairs have been only a side note in the 2016 elections: Trump’s proposed ban on Muslims entering the US ignores the gargantuan refugee crisis in Europe and the actual conflict in Syria; the focus on ISIS ignores the greater Middle Eastern crises in places like Libya, Egypt, and Afghanistan; and both Trump and Sanders, promising to close America off from foreign trade, advertise it as putting America First without any clue of how economics works.

The majority of Americans are behind the drone program without understanding the effect of blowback. The majority of Americans agree with the torture program with no self-awareness of how this makes them the bad guy, or at the very best, the good guy that everyone hates.

No matter how you vote, just try to think of yourself as NOT the bad guy.

Hypothetical Island

Thought experiment: The United States launches a new satellite to study the earth. This satellite is the most advanced satellite ever engineered. It successfully goes into orbit and looks back to earth. Surprisingly, one of the first things it finds is an island that has somehow never been discovered before. An island the size of Hawaii. Somehow, no other satellite has ever seen it. No airplane has flown over it. No boat has ever accidentally run into it. But most surprisingly, we see from the satellite that there are people living on that island. Native people. Tribal people. A people that have culturally evolved with zero influence from the rest of the world for over a thousand years.

So what should the rest of the world do?

It’s certainly an interesting anthropological curiosity to study these people, a people who have been isolated from the rest of the world for over a thousand years. What is their culture like? How is their society organized? What religion do they have, given that none of the big ones the rest of the world believes in have been introduced there? Do they have the same kinds of morals that we do? What sorts of technological progress have they made (e.g., do they have bows and arrows? Metallurgy? Glass? Domesticated animals? Transportation?)?

The problem here is the one Star Trek brought up with the Prime Directive: is it moral for us to interfere? I mean, what if they practice cannibalism? Or female genital mutilation? Or Spartan-esque eugenics? Or pedophilia? Do we have a duty to stop this? But wouldn’t that be a type of cultural hegemony? Or is it simply spreading enlightenment? What about introducing them to modern medicine that can stop an easily curable disease that’s given them problems for years? What about sending missionaries to teach them about our religions?

The problem is that nobody owns that land as far as international recognition is concerned. The natives don’t have a deed proving ownership, so what recourse would they have against people coming in and taking it? And what would we even do once we got there? Perhaps we send in the anthropologists to observe. Even if observing is all they do, they will indirectly influence that society. And if anyone introduces those people to things from outside their isolated land (anything from a screw to an iPhone), that will forever change them as well. Just think of the cargo cults. What about when people decide that the natives are poor and not well off, since their diet is meager (or hard fought, as in hunting) and infant mortality is high? Then you’ll have people trying to give them charity of some kind, which will influence their culture. And what if we find out that they have no written language? Someone had better teach them to read and write, hadn’t they?

Now imagine that the island is strategically significant. Let’s say, for instance, it would give the United States easier military access to Iran. Now what does the United States do? Leave it alone and hope that Iran doesn’t take it over?

But now let’s add something else. What if that new island has an enormous reserve of oil? The biggest and most untapped oil reserve left on earth. Now what does the United States do? What does the rest of the world do? The first one to annex the island gets the oil. Who is going to let anyone else get to it first? And who is going to stop them?

The idea behind this is that when we look at how people in the past have exploited natives in lands they “discovered,” we often like to think of the explorers as monsters. They were medieval. Imperial. Racist. Sexist. Ignorant. Greedy. They didn’t have the same respect for life that we do, nor the same appreciation for diverse cultures.

But how might people react nowadays to this Hypothetical Island? Are humans biologically any different now than they were back in the “Age of Discovery”? And don’t we all want easy access to scarce materials, the same way early explorers wanted gold, metals, crops (sugar, coffee, rubber, spices, etc.), and, let’s face it, slaves? Near-slave labor still exists in places like Bangladesh (cheap clothing), China (production, such as your smartphone), and the Congo (minerals, such as coltan for phones, and diamonds), yet people continue to buy those products, even knowing that those practices exist. People are used to a certain lifestyle, and giving it up is harder for them than living with the knowledge that their lifestyle makes other people miserable. Why would anyone benefiting off the exploitation of Hypothetical Island care what’s happening to the natives of that island, so long as it provides cheap products?

But let’s say that nobody exploits the people politically or economically at first. What about ideologically? What if we discover that the governing system they came up with is strict communism? Or anarcho-capitalism? Or theocracy? Or fascism? Is it our duty to enlighten them on the benefits of some other system? If so, which system? Should we try to learn something from the natives’ system, or just assume that because they’re primitive, we know better?

The idea here is to realize that we’re just as human, biologically speaking, as people were 200, 500, 1,000, and 10,000 years ago. The biggest difference is that we realize the ramifications of our actions. But with this realization comes a complex problem about how to treat other people. Is this dark aspect of humanity something we’ll ever get over, or is it an inexorable part of human nature? And would humanity ever have achieved what it has without this dark side? Have the achievements made up for the pain and suffering we’ve caused?

100 Years of War

I recently finished listening to Dan Carlin’s sixth and final episode of his amazing Hardcore History series “Blueprint for Armageddon,” about World War I. It’s not hyperbolic to say that this six-part series, totaling almost 24 hours of listening and almost two years in the making, is a masterpiece, and I can’t recommend it enough; right now it’s still available to listen to for free. It is a masterpiece not only because it was so well done, but also because World War I still affects our lives today more than most people realize.


Predictions 2015-2025

I recently skimmed through a report released by the Institute for the Future (IFTF) in 2005 making predictions for the next 10 years. It’s been 10 years now, and the report was certainly accurate about some things (social networking, the ubiquity of mobile phones, large amounts of user-generated content such as blogs and podcasts) but off on others (the severity of climate change effects, smart roads, holographic displays, and embedded brain chips). But of course, to me, the interesting thing about future predictions isn’t being right; it’s looking back when that future time arrives and observing what was important to us when the predictions were made. With that in mind, I’m going to make some of my own predictions for the next ten years, from 2015 to 2025 (and maybe beyond), and perhaps when 2025 comes around, I can re-post this blog post and reminisce about what seemed important at the time.

So, here are just a few predictions I want to make in a few areas of science and society. These aren’t things I’ve diligently researched, but extrapolations from my own observations, filtered through the values and views I hold in 2015, composed of my knowledge but shaped by my ignorance. Feel free to leave your own predictions on these areas (or others) in the comments.

Science:

In 2015 we are in the era of biotechnology. We are currently making many discoveries in biochemistry, cell biology, physiology, and medicine, but many of these advances take time to be turned into practical uses and then opened up to a wide market. Gene therapy, organs grown in vitro from a person’s own DNA, treatments for diseases once thought insurmountable (Alzheimer’s, Parkinson’s, ALS, diabetes, cancer, AIDS, etc.), and a growing number of stem cell treatments will begin to become available in the next ten years. Treatments meant to augment the body or prevent disease may also become available, such as gene doping, smart drugs, and tissue grafting.

Advances in materials science will also hit the market. Some say we live in the digital age, but if we go by the theme of materials used, we actually live in the polymer age. Polymer muscles, self-healing polymers, and polymer sensors will become part of our everyday lives in ways that are difficult to foresee. Polymer dendrimers will be used for various biological applications, such as targeted drug delivery, biological sensors, and medical imaging.

Technology:

The current trend of Moore’s Law will continue until transistors become so small that electrons can tunnel between gates. This will prompt more 3D processors, quantum computers, and nanotechnology breakthroughs. Computer integration is often predicted when it comes to the future of technology: having it hands free (headsets, things like Google Glass), integrated into our clothing or our workspace (the desk or chair), or even integrated into our bodies (computer tattoos or RFID chips). This will probably be the future at some point, but I think there almost needs to be a cultural shift for it to happen. As it stands right now, these types of integration seem anywhere from mildly inconvenient (people would rather hold a phone than have it woven into their clothing) to socially taboo or even potentially illegal (having technology surgically implanted into your body). But I think in the next ten years the technology itself will begin to adapt to being more conducive to this type of integration, and culturally we’ll start to become more used to and accepting of it. By 2025 this type of integration will still be fairly new, but it won’t be seen as inconvenient or weird.

Social networking will still be around, but the internet will be very different. Government regulation will mean that the internet is reined in, becoming less of the Wild West it has been since the nineties. Restrictions on content and access will prompt people to turn toward decentralization, possibly in the form of mesh networks. This will mean there is more than one internet, with access based more on proximity and the number of people in the mesh network. I predict we’ll begin to see some of these arise in the next ten years, but many people will cling to the current internet during that time, meaning the ubiquity of mesh networks won’t come around until after 2025; cultural shifts can take time.

Economics:

Nothing that exists in 2015 can potentially alter economics as much as 3D printing. I predict that in the next ten years, 3D printers will become more and more affordable, showing up in the houses of middle-class people the way personal computers did in the 1980s. The first affordable 3D printers won’t make the best quality products, being somewhat of a novelty at first, but I think by 2025 we’ll start to see more practical and useful things come from them. This will make the market for schematics and for the polymer materials used in 3D printing boom, while the market for many finished products stagnates (think of what the internet did for retail stores like Circuit City and movie rental places like Blockbuster). It will create a market of user-generated schematics (the same way people make phone apps in 2015) for making any number of things. It will force a revolution in how we prohibit and regulate certain items, like guns, but also in the realm of patents: people will be able to pirate not just software, but the schematics for otherwise expensive real-world products.

The second biggest potential economic impact will come from cryptocurrency (things like Bitcoin and Dogecoin). In 2015, Bitcoin tends to take the spotlight, but I think it’s still up in the air which cryptocurrency will “win,” or whether there might be more than one that people use, although I tend to think it will settle on one the same way social networking settled on Facebook. As governments’ fiscal policies continue to devalue their currencies and younger generations turn toward technology and peer-to-peer networking, cryptocurrency will become the unit of economic exchange. Governments will take time to incorporate this new paradigm, being slow to adapt, but they will be unable to curb the replacement of old currencies with new. What we’ll see within the next ten years, though, will be more and more businesses accepting cryptocurrencies and more and more people using them. This will create a shift in the cultural mindset concerning cryptocurrency: whereas right now people think of their Bitcoin in terms of how many dollars it is worth, more people (especially young people as they enter the marketplace) will think of the cryptocurrency itself as the unit of currency, rather than mentally translating it into the old money. This will be a key step in getting off the old monetary system and onto a digital one. Governments will try to shut things down until they learn how to tax it and make money off of it, so during the next ten years, expect a lot of resistance to cryptocurrency from governments.

Culture:

As people born in the internet era (post-2000, mostly) grow up and enter the marketplace, culture will see another radical shift. We’re already seeing it emerge in what I have called Granular Bubble Culture. Looking from the top down, interconnectedness and globalization will cause somewhat of a homogenization of culture on its surface; but from the bottom up, we’ll see more and more of the Granular Bubble Culture, where pockets of subcultures keep themselves in a perpetual bubble, creating strange fads and phenomena within themselves that will seem alien to outsiders (think Bronies or people who watch PewDiePie).

However, I think a significant portion (although maybe not a majority) of the population will once again become disenchanted with the acceptance of consumerist culture we have now. The interesting thing about our modern times (2015) is that consumerism has become mainstream. Even the types of people who, back in the 1990s, were often the biggest opponents of consumerist culture (primarily the left) are now fine with consumerism. They will gladly boast about their new smartphone or computer, show off their apps, and dress stylishly. I think, as is often the case in cultural evolution, we’ll see at least somewhat of a backlash against this acceptance of consumerist culture (the way the early 1990s were a backlash against the consumerist culture of the 1980s). It will once again become fashionable (in some areas of society, not all) to be minimalist, to spurn certain types of technology, and to be thrifty and (at least somewhat) disconnected. There will most likely still be a silent majority happy to continue buying the next best thing, but this backlash will probably produce some forms of culture that garner a lot of attention.

User-generated content will continue to overtake mass media, at least in more affluent countries. We already see that many movies make the most money overseas, so movies will continue to be produced for that audience. Podcasts, YouTube channels (or whatever video services take over), blogs, forums, and social media will continue to expand in production and consumption, many of them moving to the aforementioned mesh networks. TV will become more and more the platform for intelligent mass media, although the on-demand style of services like Netflix and Hulu will begin to overtake the old paradigm of scheduled television programs. However, I predict a change in the sorts of content seen in these programs. The new millennium has been big on shows about white, male antihero types (The Sopranos, The Wire, Breaking Bad, House, The Walking Dead, Mad Men, Boardwalk Empire, True Detective, The Knick, Dexter, and the list goes on), which will probably not go away completely, but I think there will be an expansion in the types of shows we see (things like Orange is the New Black and Transparent).

I think the trend of becoming more liberal on social issues will continue. By 2025, I predict marijuana will be legal in at least the majority of states in the U.S., and that gay marriage will not only be legal in all 50 states but will start to be seen as somewhat more normal (especially by younger people who grow up in a world where it’s more accepted). However, I see the pendulum swinging too far in that direction in the form of what are often pejoratively called SJWs, or social justice warriors. In the well-intentioned pursuit of more social acceptance for those considered outside the norm, political correctness will become more and more prominent, with the court of public opinion passing swift judgment on people who don’t conform to a strict set of correct terminology for referring to people. I see the pendulum reaching its apex in the next decade and swinging back the other way in a backlash against this type of forced tolerance. My only hope is that it doesn’t swing too far in the other direction, but finds a happy middle ground where everyone is accepted and people can talk freely.

I think mistrust of authority will grow culturally (it’s already happening now, particularly when it comes to police). This will go hand in hand with the decentralization of society (more user-generated content, mesh networks, Granular Bubble Culture taking the place of central authorities, cryptocurrencies, and 3D printing) and cause a decrease in government legitimacy. Governments will never go away; in fact, they will probably only step up their surveillance and attempts to control things. But on many of the decentralization issues named above, people will continue to ignore and subvert governments the same way people now do with online piracy.

In the end, culture is probably one of the most difficult things to predict. Culture is the interaction of many actors with an assortment of different tastes and backgrounds. Most of what can be extrapolated about culture is how culture will interact with the other areas – science, technology, economics – and it would be just about impossible to predict particular things, such as what style of clothes people will wear, what genres of music people will listen to, or the brands they will be loyal to.

Concluding Remarks:

So, what do you think of my predictions? Do you think they will be accurate? Do they reflect current trends – in other words, are they good extrapolations of how things are now in 2015? Am I predicting things (science, technology) to move too slowly or too quickly? Is there anything I may have missed that would enhance or throw a wrench in my predictions? What are your predictions for the next ten years? Twenty years? Fifty years?

Guns, Germs, and Decentralization.

We live in an age of cultural decentralization but political and governmental consolidation. Decentralization has both benefits and dangers. The largest benefit is that decentralization means parallel processing: multiple paths can be attempted while moving toward a single goal. Solutions to problems can thus come quicker, since one of the approaches being tried may meet the least resistance, and more efficiently, since resources and time are not spent pushing forward on a single path (or a small number of paths) that may not be the best way to achieve the goal. The downside, obviously, is that more decentralization can lead to less oversight and a lack of a unified goal; it’s throwing everything against the wall and seeing what sticks.

One of the places where decentralization could have the largest impact on our lives and society is in science (and technology). As it stands right now, at least in America, science is a highly regulated, highly centralized institution. All funding proposals must pass rigorous scrutiny in order to be awarded grants, there are many laws concerning ethics and the acquisition of scientific instruments and materials, and even gaining access to much of this requires a person to go through years of education.

But what if science was decentralized and deregulated?

Some possible benefits:

Lifting regulations so that burgeoning scientists could acquire scientific equipment on the free market easily and cheaply, and learn how to do science from experts or knowledgeable amateurs, without A) having to pay expensive university tuition (plus other fees), B) paying for a bunch of liberal arts classes they don’t want or need, or C) acquiring a piece of paper that says they’re certified by the government to do science. This could also make testing new pharmaceuticals and GMOs cheaper, easier, and faster, if individuals were allowed to test their discoveries on consenting volunteers without government regulation. There would be countless people working on issues – medicine, materials science, green energy – from different angles and backgrounds, coming up with novel solutions.

We know that decentralization has worked well for things like FoldIt. Imagine a world where having a working scientific knowledge about biology and biotechnology is just as common as having a working knowledge of computers and smartphones is right now. Imagine if new technologies, medicines, and scientific discoveries came out just as quickly and easily as smartphone apps and websites. How different might our world be?

But does this seem like a good idea, or does government regulation of scientific institutions and who is certified to do science make us safer? Certainly, as in anything, the potential for wrongdoing also exists. Does it help or hinder scientists and science in general?

Decentralization means that nobody has a monopoly anymore. This means more freedom, but freedom never promised to be comfortable. The loss of oversight means that there is no doctrine to be followed in helping humanity, but it also means there is no doctrine holding anyone back in potentially harming humanity, either.

One of the biggest technologies of decentralization is 3D printing. I think 3D printing has the potential to alter our economy and way of life as much as the internet, which was the biggest decentralizing technology of the 20th century. The American government is already reeling from the lack of oversight that 3D printing is beginning to bring about, starting with 3D-printed guns. The worry here is that there is no way to track or regulate such guns, and the same could hold for anything else that can be 3D printed.

One technology that has only recently started to become decentralized is drone technology. Many people now own small, remote-controlled drones, but the American government has held drone supremacy on the world stage when it comes to military technology. This means, of course, that the American government has set the precedent for how UAVs can be used in battle. Once this technology spreads and becomes decentralized, that precedent will already be in place – namely, that there is no recourse for collateral damage or killing the wrong target.

But when it comes to science, I think most people’s biggest fear comes from the potential for biological weapons. If science became more decentralized, it would be much easier, and potentially more likely, for someone to produce deadly bacteria or viruses in a homemade laboratory. I’ve made mutant bacteria in the lab I work in numerous times, and it’s actually very easy to do. Just as decentralization may lead to more cures for diseases, it could also potentially create more diseases.

And it wouldn’t have to stop at bacteria and viruses. Transgenic organisms can already be made, and some people even do it as art. Imagine if gene doping were used to modify a person’s own body in a way that is artistic (think body modification a la tattoos, piercings, and implants) or beneficial in some way.


The question we would want to ask ourselves, then: is it worth it to decentralize science? I think the internet, possibly 3D printing, and who knows what other discoveries may come along in the 21st century, may all answer that question for us. Technology and scientific discoveries don’t usually stay secret forever. The computer itself began as something very exclusive, and now almost everyone carries a computer in their pocket that would put those exclusive machines to shame. Governments are slow to react and attempt to keep the world moving at their own pace, but the information age is showing that governments are slowly losing what control they had. Are you ready for the world of DIY science?

Cultural Appropriation or Appropriate Acculturation?

It’s a well-established trope that the main protagonists of most popular media – movies, TV shows, video games, literature – are straight, white males. Some people criticize this, saying that there should be more diversity in our entertainment, because the entertainment we consume can influence our behavior, or for the more mundane reason that non-diverse casts are just boring. Others say that trying to force diversity is politically correct tripe and that the people who produce entertainment are simply doing what they know will sell. There is certainly a debate to be had on the merits of trying to capture all aspects of the human condition versus doing what we know works, but this also leads to another issue: can someone who has not experienced a certain aspect of the human condition portray that condition in their art? In other words, is it possible for a white, middle-class male author to write a novel about the struggles of a poor black woman from the ghetto? And if it is possible, is it appropriate to do so?

To answer this, we must consider three questions, using the example above (although these questions are not exclusive to that scenario):

1) Is it possible for a white, middle class male to write a novel about the struggles of a poor black woman? In other words, will the story he writes be factually accurate in portraying what it is like for poor black women, or will it be completely off base?

2) If the white, middle class male does write a novel that is factually accurate as it pertains to the struggles of a poor black woman, can it still actually be said to capture that struggle since the author has never actually experienced that struggle? In other words, is there a connection between the factual accuracy of the novel and the emotional truth it is attempting to portray?

3) If the answer to the first two questions is yes, then is it appropriate for a white, middle class male to write a novel about the struggles of a poor black woman? Or, is this an example of cultural appropriation – stealing from another culture?

And remember, I used the white, middle class male writing a novel about a poor black woman as an example to illustrate the point. This could also pertain to music, movies, TV shows, and so forth, and be between people of all different races, genders, orientations, and socioeconomic statuses.

The first question is easy enough to answer. It’s very possible for a white, middle class male to write a factually accurate novel about a poor black woman. It is possible for him to scribe the words in the correct order such that they formulate thoughts that correspond to the reality of living as a poor black woman, even if he’s never met anyone like who he is writing about, nor has he ever actually experienced any of the situations he portrays.

The second question is where things begin to get a little trickier. If we accept that the author in our example can write a factually accurate depiction of life for his protagonist, it becomes a bit more of an abstract issue whether that factual accuracy corresponds to a deeper emotional truth. The author himself can formulate words and thoughts that describe the subjective truth of his protagonist’s human condition, but that doesn’t mean he actually felt that emotional truth. A reader might read his words and have different emotions evoked by them, but can we say that those emotions are genuine when they were not scribed with that deep, emotional truth behind them? This question is a bit more indeterminate. It’s certainly possible that someone who has actually had the experiences described in the author’s book could relate to the protagonist, having that deep, emotional truth evoked through the words. In that sense, the answer would be yes, there is a connection between the factual accuracy and emotional truth in the book, at least on the reader’s side. But it still leaves open the possibility that the author’s understanding of the facts doesn’t correspond to an understanding of the emotional truth behind them.

The third question is less tricky, but more hairy. If we accept that the author in our example can write a factually accurate novel and have it capture the emotional truth behind it, then we have to ask ourselves whether it is appropriate for him to do so. Does he have the right to scribe this novel, or does it constitute a theft of some kind of cultural, or at least experiential, property? And what does it even mean to say a culture can own a certain point of view? Even if we say he has the right, is there not some sense that it just seems intuitively inappropriate, the way many people intuitively feel there is something inappropriate about taking from the fashions or rituals of another culture? Isn’t there some intuitive sense that it would be wrong for a Catholic family to throw a Bar Mitzvah for their son? Or for an Asian family to dress up in Native American garb and perform a Sun Dance? In that same way, doesn’t attempting to capture the struggles of a culture that you have had no part of seem intuitively inappropriate?

I don’t pose these questions as an excuse for continuing with the straight, white male protagonists. My own work is often from the point of view of people who don’t fall into that narrow category. I think there is even a case to be made that generating content about other aspects of the human condition can help people understand and empathize with them. The question then becomes: how far is too far? And at what point does a culture get completely subsumed by another, causing it to lose its true identity? There are no simple answers to these questions, but I think it’s important to keep them in mind, especially for people who produce arts and entertainment.