Forbidden Knowledge

Knowledge is Power

There is an old adage that knowledge is power. People being able to acquire facts and information gives them power over those who wish to control them. This is the cornerstone of First Amendment rights in the United States. Preventing the government from having arbitrary power over the people by way of knowledge about their private lives and thoughts is the cornerstone of Fourth and Fifth Amendment rights in the United States. Other governments – places like Nazi Germany and those in the Communist bloc – attempted to disempower their people by banning certain books and speech critical of their ideology or governing regime; by controlling people's right to assembly (i.e., banning other political parties); by regulating or persecuting certain religions; by controlling and censoring the press; by spying on their people; and by forcing people to testify against themselves through torture and indefinite detention. The Khmer Rouge, for example, feared a knowledgeable populace so much that they would condemn and even kill people who wore glasses, because 'intellectuals' were considered to be corrupted by modernity.

The point being, knowledge is generally viewed as a good thing in a liberal democracy. It allows us the opportunity to make informed decisions about who governs us and then hold them accountable. But is there knowledge which should not be known? Knowledge that could potentially be harmful if it gets out?

The Bible

This is an argument as old as time, but a particular instance comes to mind – the Bible. For much of the Church's history, the Bible was read only by the clergy (and other high-status individuals) who could read Latin. The teachings could then be interpreted by the clergy and taught to their parishioners. This allowed a single orthodoxy to be maintained by the Church bureaucracy. In the first 1500 years of Christianity, there was only a single major schism in the church (not counting the Western Schism, which was more political than theological). However, Erasmus's printed Greek New Testament and vernacular translations like Martin Luther's German Bible helped ignite the Protestant Reformation, the result being that the Church split into numerous churches. In those early days of the printing press, it was hotly debated whether it would be a good idea to let the people have access to the Bible. There are still those who think it was a bad idea.

The Internet

A more contemporary source of perhaps forbidden knowledge is the internet. Conspiracy theories, fake news, and other such nonsense aside, the internet is arguably the greatest means of spreading knowledge to come into existence since the printing press. The biggest obstacles one might find in their way online are paywalls and subscription fees, and even those are usually easily bypassed or avoided. But what about information like how to make bombs or 3D printed guns? Sure, most people are probably responsible enough either not to use this information, or, even if they do, to use it for benign purposes. But if that information is available on the internet, it is available to everyone – even those who would use it for malicious or self-serving purposes. I am not trying to make a political argument for banning these things, but rather a more philosophical argument – would humankind be better off if this information had never become available in the first place? Or is there something intrinsically good about such information being available – i.e., knowledge is power?

What about hacked or leaked information of a private or personal sort, like pictures of a politician doing something we might find disgusting, like cheating on their spouse or doing drugs? Does our knowledge of this lapse in character or poor judgment outweigh the privacy of the individual perpetrator? What about leaked classified information about government wrongdoing that could damage national security or put agents in the field in danger? This argument is made just about any time information about government wrongdoing is made available to the public, whether it damages national security or endangers field agents or not, which further demonstrates that the government is afraid of people becoming knowledgeable. But what about in cases where public knowledge is demonstrably dangerous, even if the government is in the wrong about something? Where is the crossover point, where the information becoming public knowledge becomes an unacceptable risk?

There is knowledge of a different kind on the internet – pornography. Social conservatives often argue that access to pornography has a deleterious effect on people's minds and morals. There may be merit to this argument. Pornography can cause addiction, isolation, and unrealistic expectations about romantic love. And what about the fact that after a terrifying experience, such as the false alarm about a missile strike in Hawaii, people seem to seek comfort in pornography? So, should pornography be included in the category of knowledge that humankind would be better off without? Or is it part of knowledge as intrinsic good? Even if we argue that pornography is not harmful, psychologically or sexually, is there an argument for it being good? Or perhaps there is a cutoff point – pictures of naked people alone, or video of people having missionary-position sex, is acceptable, but people doing other sex acts is not. Maybe if it's only shown with people having safe sex – like the proposed condom law that failed in California – then it is acceptable. Once again, I'm not trying to make a political or civil liberties argument one way or the other, but I'm asking, philosophically speaking, would humankind be better off (psychologically, sexually, morally) if pornography didn't exist, or if only certain types of pornography existed?

Political Correctness

Opponents of Political Correctness contend that it is a form of censorship that stifles society from having important conversations. Political Correctness is defined as "…the avoidance, often considered as taken to extremes, of forms of expression or action that are perceived to exclude, marginalize, or insult groups of people who are socially disadvantaged or discriminated against." However, Political Correctness is often used as a way of shutting down conversation. For instance, bringing up crime and race, race and intelligence, or the idea that men and women might have differences in preference when it comes to career path choices (as opposed to systemic barriers to entry in certain careers dependent on one's sex or gender identity) are often hot-button issues. I'm not making any claims about the truth or falsity of these topics, but Political Correctness dictates that even bringing them up is taboo. People who bring these issues up will often find themselves on the receiving end of criticism, and sometimes even threats of violence. Opponents of Political Correctness will say that if these subjects aren't even up for discussion, then there is no way to find out whether the claims are true or not, and if true, find the causes of these problems and work out solutions. As a result, the problems will persist and get worse while people continue to pretend that they don't exist. The truth value of these claims ends up being judged not by reason, facts, or evidence, but by how the topics make people feel. Things that are uncomfortable to discuss then become, essentially, forbidden knowledge. Do these subjects belong in that category, or should they be up for discussion?

The issue works the other way, too. There are plenty of people who would prefer not to have LGBT issues taught to children, while proponents of Political Correctness are often in favor of doing this. Whether one believes that sexual preference or gender nonconformity are choices, pathologies, or just part of the spectrum of human experience, they are still phenomena that occur in the real world; they are still impulses that shape the lives of real people. Refusing to teach people about these issues will not prevent them from being exposed to them, and will only leave people less knowledgeable about real-world issues. It is a form of political correctness that attempts to pretend that something isn't real, which stifles dialogue and refuses to weigh truth claims about causes, effects, and society based on reason, facts, and evidence – once again relying only on how the topic makes people feel. Thus, not teaching people about LGBT phenomena relegates these issues to the realm of forbidden knowledge. Are people better off not knowing about these issues, or is knowledge still power in this case? Does knowledge necessitate acceptance – if a person is taught about LGBT people, will that person necessarily be accepting? Does acceptance necessitate knowledge – can you not accept someone's lifestyle if you are ignorant of it? And, if this should not be forbidden knowledge, at what age should people be taught about LGBT issues? What is the best way to teach them? And what exactly should be taught, as there are competing theories?


I think probably the place where the most people will accept that some knowledge may be better left unknown is when it comes to the potential end of the world. Nuclear weapons are the first thing that comes to mind. During World War II, there was a concerted effort by the United States and Britain to develop atomic weapons. Doing so opened up a Pandora's box that still affects us to this day – the Doomsday Clock was just recently reset to 2 minutes to midnight (doomsday). When the Soviet Union tested its own nuclear weapons in 1949, the term Mutually Assured Destruction (MAD) eventually came into vogue. Would it have been better if humankind had never learned how to develop nuclear weapons in the first place? What about the argument that Mutually Assured Destruction has prevented cataclysmic wars between major powers, as was the case in WWI and WWII before humans split the atom? Does that make knowledge of atomic weapons a net positive for the human race, even if the potential destruction of civilization as we know it rests on the hair-trigger whims of a few powerful people?

Nowadays, we also have to worry about a possibly even more insidious weapon of mass destruction: biological weapons. What makes these even more dangerous is that they are so cheap and easy to develop (particularly compared to nuclear weapons) that a single person could do it in a DIY lab in their garage. It's so easy that a person could develop or release one by accident. Instructions on how to do it could easily be made available online (and probably are in some dark corners of the web). And once the disease is out, it will not distinguish between friend and foe – at the very least, an atomic weapon could potentially be contained to a single geographical location. This, of course, brings up the question of whether it has been a good thing that humankind has acquired knowledge about how genetics work – with knowledge of genetic manipulation, it's not that difficult to make a dangerous pathogen. Our understanding of genetics and genetic manipulation has yielded amazing things for humanity, but if it ultimately spells our downfall, was any of it worthwhile? Or would humans have been better off never knowing?

And now, possibly in the not-too-distant future, we might have to worry about Artificial Intelligence. As is often said in AI circles, Artificial Intelligence could be the last thing humankind ever invents. So, does that mean that AI technology should be forbidden knowledge? Is humanity better off not discovering Artificial Intelligence? What if developing AI is the only way we can actually ensure that we don't wipe ourselves out via other means? Unlike most of what I've talked about here, AI is knowledge that we have not yet acquired – it is still theoretically within our power to keep this knowledge forbidden, whereas other things I've discussed are already available. It may be that the development of AI is inevitable, but it could be that we would have been better off never even considering it.

Predictions 2015-2025

I recently skimmed through a report released by the Institute for the Future (IFTF) in 2005 making predictions for the next 10 years. It's been 10 years now, and the report was certainly accurate about some things – social networking, the ubiquity of mobile phones, large amounts of user-generated content (blogs, podcasts, etc.) – but also off on some things – the severity of effects from climate change, smart roads, holographic displays, and embedded brain chips. But of course, to me, the interesting thing about future predictions isn't being right; it's looking back when that future time comes and observing what was important to ourselves back when those predictions were made. With that in mind, I'm going to make some of my own predictions for the next ten years – from 2015 to 2025 (and maybe beyond) – and perhaps when 2025 comes around, I can re-post this blog post and reminisce about what seemed important at the time.

So, here are just a few predictions I want to make on a few areas of science and society. These aren’t things I’ve diligently researched, but an extrapolation from my own observations, filtered through the values and views I hold in 2015, composed of my knowledge but shaped by my ignorance. Feel free to leave your own predictions on these areas (or others) in the comments.


In 2015 we are in the era of biotechnology. We are currently making many discoveries in biochemistry, cell biology, physiology, and medicine. But many of these advances take some time to be turned into practical uses and then opened up to a wide market. Gene therapy, organs grown in vitro from a person's own DNA, treatments for diseases once thought insurmountable (Alzheimer's, Parkinson's, ALS, diabetes, cancer, AIDS, etc.), and a growing number of stem cell treatments will begin to become available in the next ten years. Treatments meant to augment the body or prevent disease may become available, such as gene doping, smart drugs, and tissue grafting.

Materials science will also hit the market. Some say we live in the digital age, but if we go by the theme of materials used, we actually live in the polymer age. Polymer muscles, self-healing polymers, and polymer sensors will become part of our everyday lives in ways that are difficult to foresee. Polymer dendrimers will be used for various biomedical applications, such as targeted drug delivery, biological sensors, and medical imaging.


The current trend of Moore’s Law will continue until the point that transistors become so small that electrons can tunnel between gates. This will prompt more 3D processors, quantum computers, and nanotechnology breakthroughs. Computer integration is often predicted when it comes to the future of technology – having it hands free (headsets, things like Google Glass), integrated into our clothing or our workspace (the desk or chair), or even integrated into our body (computer tattoos or RFID chips) – and this will probably be the future at some point, but I think there almost needs to be a cultural shift for this to happen. As it stands right now, these types of integrations seem anywhere from mildly inconvenient (people would rather hold onto a phone than have it woven into their clothing) to socially taboo or even potentially illegal (having technology surgically implanted into your body). But, I think in the next ten years we’ll begin to see the technology itself adapt to being more conducive to this type of integration and culturally we’ll start to become more used to and accepting of this type of integration. I think by 2025 this type of integration will still be fairly new, but it won’t be seen as inconvenient or weird.
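To put rough numbers on the Moore's Law claim above, here is a back-of-the-envelope sketch. The starting node size, shrink rate, and the threshold where tunneling becomes a serious problem are all illustrative assumptions of mine, not figures from any report:

```python
def years_until_node(start_nm, target_nm, shrink_per_gen=0.7, years_per_gen=2.0):
    """Estimate years until feature size shrinks below target_nm.

    Classic Moore's-Law-style scaling: each ~2-year generation shrinks
    linear feature size by ~0.7x (so transistor area roughly halves).
    """
    size, years = start_nm, 0.0
    while size > target_nm:
        size *= shrink_per_gen
        years += years_per_gen
    return years

# Assuming a ~14 nm process in 2015 and serious tunneling trouble
# somewhere around the ~2 nm scale (an illustrative cutoff):
print(2015 + years_until_node(14, 2))
```

On those assumptions, the wall arrives sometime in the mid-to-late 2020s, which is why the pressure toward 3D stacking and alternative computing paradigms builds within this prediction window.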

Social networking will still be around, but the internet will be very different. Government regulation of the internet will mean that the internet is reined in, becoming less of the Wild West it's been since the nineties. Restrictions on content and access will prompt people to turn toward decentralization, possibly in the form of mesh networks. This will mean there is more than one internet, with access based more on proximity and the number of people in the mesh network. I predict we'll begin to see some of these arise in the next ten years, but that many people will cling to the current internet during that time, meaning the ubiquity of mesh networks won't come around until after 2025 – cultural shifts can take time.


Nothing that exists in 2015 can potentially alter economics as much as 3D printing. I predict that in the next ten years, 3D printers will become more and more affordable, showing up in the houses of middle class people the way personal computers did in the 1980's. The first 3D printers that become affordable won't make the best quality products, being somewhat of a novelty at first, but I think by 2025 we'll start to see more practical and useful things come from them. This will make the market for schematics and polymer materials used in 3D printing boom, while the market for many finished products stagnates (think what the internet did to retail stores like Circuit City and movie rental places like Blockbuster). It will create a market of user-generated schematics (the same way people make phone apps in 2015) for making any number of things. It will be a revolution in terms of how to prohibit and regulate certain things, like guns, but also in the realm of patents – people will no longer just pirate software, but pirate schematics for otherwise expensive real-world products.

The second biggest potential economic impact will be cryptocurrency (things like Bitcoin and Dogecoin). In 2015, Bitcoin tends to take the spotlight, but I think it's still up in the air which cryptocurrency will "win," or whether there might be more than one that people use – although I tend to think it will settle on one, the same way social networking settled on Facebook. As governments' fiscal policies continue to devalue their currencies and younger generations turn toward technology and peer-to-peer networking, cryptocurrency will become the unit of economic exchange. Governments will take time to incorporate this new paradigm, being slow to adapt, but they will be unable to curb the replacement of old currencies with new. What we'll see in the purview of the next ten years, though, will be more and more businesses accepting cryptocurrencies and more and more people using them. This will create a shift in the cultural mindset concerning cryptocurrency – whereas right now people think of their Bitcoin in terms of how many dollars it is worth, more people (especially young people as they enter the marketplace) will think of the cryptocurrency itself as the unit of currency, rather than mentally translating it into the old money. This will be a key step in getting off the old monetary system and onto a digital one. Governments will try to shut things down until they learn how to tax it and make money off of it, so during the next ten years, expect a lot of resistance to cryptocurrency from governments.


As people born in the internet era (post-2000, mostly) grow up and enter the marketplace, culture will see another radical shift. We're already seeing it emerge in what I called Granular Bubble Culture. I think, looking from the top down, interconnectedness and globalization will cause somewhat of a homogenization of culture on its surface, but from the bottom up, we'll see more and more of the Granular Bubble Culture, where pockets of subcultures keep themselves in a perpetual bubble, creating strange fads and phenomena within themselves that will seem alien to outsiders (think Bronies or people who watch PewDiePie).

However, I think there will be a significant portion (although maybe not the majority) of the population that once again becomes disenchanted with the acceptance of consumerist culture we have now. The interesting thing about our modern times (2015) is that consumerism has become mainstream. Even the types of people who, back in the 1990's, were often the biggest opponents of consumerist culture (primarily the left) are now fine with consumerism. They will gladly boast about their new smartphone or computer, show off their apps, and dress stylishly. I think, as is often the case in cultural evolution, we'll see at least somewhat of a backlash against the acceptance of consumerist culture (the way the early 1990's were a backlash to the consumerist culture of the 1980's). It will once again become fashionable (in some areas of society, not all) to be minimalist, to spurn certain types of technology, and to be thrifty and (at least somewhat) disconnected. There will most likely still be a silent majority happy to continue buying the next best thing, but this backlash will probably produce some forms of culture that garner a lot of attention.

User-generated content will continue to overtake mass media, at least in more affluent countries. We already see that many movies make the most money overseas, so movies will continue to be produced for that audience. Podcasts, YouTube channels (or whatever video services take over), blogs, forums, and social media will continue to expand in production and consumption, many of them moving to the aforementioned mesh networks. TV will become more and more the platform for intelligent mass media, although the on-demand style of services like Netflix and Hulu will begin to overtake the old paradigm of scheduled television programs. However, I predict a change in the sorts of content seen on these programs. The new millennium has been big on shows about white, male antihero types (The Sopranos, The Wire, Breaking Bad, House, The Walking Dead, Mad Men, Boardwalk Empire, True Detective, The Knick, Dexter, and the list goes on), which will probably not go away completely, but I think there will be an expansion in the types of shows we'll see (things like Orange is the New Black and Transparent).

I think the trend of becoming more liberal on social issues will continue. By 2025, I predict marijuana will be legal in at least the majority of states in the U.S. and that gay marriage will not only be legal in all 50 states, but will start to be seen as somewhat more normal (especially by younger people who grow up in a world where it's more accepted). However, I see the pendulum swinging too far in that direction in the form of what are often pejoratively called SJWs, or social justice warriors. In the well-intentioned pursuit of more social acceptance for those considered outside the norm, political correctness will become more and more prominent, with the court of public opinion passing swift judgment on people who don't conform to a strict set of correct terminology for referring to people. However, I see the pendulum reaching the peak of its swing in the next decade, then swinging back the other way in a backlash against this type of forced tolerance. My only hope is that it doesn't swing too far in the other direction, but can find a happy middle ground where everyone is accepted and people can talk freely.

I think culturally there will be growing mistrust of authority (it's already happening now, particularly when it comes to police). This will go hand-in-hand with the decentralization of society – more user-generated content, mesh networks, Granular Bubble Culture (taking the place of central authorities), cryptocurrencies, and 3D printing – and cause a decrease in government legitimacy. Governments will never go away – in fact, they will probably only step up their surveillance and attempts to control things – but on many of the decentralization issues named above, people will continue to ignore and subvert them the same way people do now with online piracy.

In the end, culture is probably one of the most difficult things to predict. Culture is the interaction of many actors with an assortment of different tastes and backgrounds. Most of what can be extrapolated about culture is how culture will interact with the other areas – science, technology, economics – and it would be just about impossible to predict particular things, such as what style of clothes people will wear, what genres of music people will listen to, or the brands they will be loyal to.

Concluding Remarks:

So, what do you think of my predictions? Do you think they will be accurate? Do they reflect current trends – in other words, are they good extrapolations of how things are now in 2015? Am I predicting things (science, technology) to move too slowly or too quickly? Is there anything I may have missed that would enhance or throw a wrench in my predictions? What are your predictions for the next ten years? Twenty years? Fifty years?

Guns, Germs, and Decentralization

We live in an age of cultural decentralization but political and governmental consolidation. Decentralization has benefits and dangers associated with it. The largest benefit is that decentralization means parallel processing – multiple paths can be attempted while moving toward a single goal. This means solutions to problems can come quicker, since one of the approaches being tried may turn out to have the least resistance, and more efficiently, in that resources and time are not spent trying to move forward on a single path (or a small number of paths) that may not be the best way to achieve the goal. The downside, obviously, is that more decentralization can lead to less oversight and a lack of a unified goal – it's throwing everything against the wall and seeing what sticks.

One of the places where decentralization could have the largest impact on our lives and society is in science (and technology). As it stands right now, at least in America, science is a highly regulated, highly centralized institution. All funding proposals must pass rigorous scrutiny in order to be awarded grants; there are many laws concerning ethics and the acquisition of scientific instruments and materials; and even having access to much of this requires a person to go through years of education.

But what if science was decentralized and deregulated?

Some possible benefits:

Lifting regulations such that burgeoning scientists can acquire scientific equipment on the free market easily and cheaply and learn how to do science from experts or knowledgeable amateurs without A) having to pay expensive university tuition (plus other fees), B) paying for a bunch of liberal arts classes they don't want or need, or C) acquiring a piece of paper that says they're certified by the government to do science. This could also make testing new pharmaceuticals and GMO's cheaper, easier, and faster if individuals are allowed to test their discoveries on volunteers without government regulation. There would be any number of people working on issues – medicine, materials science, green energy – from different angles and backgrounds, coming up with novel solutions.

We know that decentralization has worked well for things like FoldIt. Imagine a world where having a working scientific knowledge about biology and biotechnology is just as common as having a working knowledge of computers and smartphones is right now. Imagine if new technologies, medicines, and scientific discoveries came out just as quickly and easily as smartphone apps and websites. How different might our world be?

But does this seem like a good idea, or does government regulation of scientific institutions and who is certified to do science make us safer? Certainly, as in anything, the potential for wrongdoing also exists. Does it help or hinder scientists and science in general?

Decentralization means that nobody has a monopoly anymore. This means more freedom, but freedom never promised to be comfortable. The loss of oversight means that there is no doctrine to be followed in helping humanity, but it also means there is no doctrine holding anyone back in potentially harming humanity, either.

One of the biggest technologies of decentralization is 3D printing. I think 3D printing has the potential to alter our economy and way of life as much as the internet, which was the biggest decentralizing technology of the 20th century. The American government is already reeling from the lack of oversight that 3D printing is beginning to bring about, starting with 3D printed guns. The implication is that there is no way to track or regulate such guns, and the same could hold for anything that can be 3D printed.

One technology that has only recently started to become decentralized is drone technology. Hobbyists can own small, remote-controlled drones, but the American government has held drone supremacy on the world stage when it comes to military technology. This means, of course, that the American government has set the precedent for how UAVs can be used in battle. Once this technology spreads around, becoming decentralized, that precedent will already be set – namely, that there is no recourse for collateral damage or killing the wrong target.

But when it comes to science, I think most people's biggest fear comes from the potential for biological weapons. If science became more decentralized, it would be much easier, and potentially more likely, for someone to produce a deadly bacterium or virus in their homemade laboratory. I've made mutant bacteria in the lab I work in numerous times, and it's actually very easy to do. Just as decentralization may lead to more cures for diseases, it could also potentially create more diseases.

And it wouldn't have to stop at bacteria and viruses. Transgenic organisms can already be made, and some people even do it as art. Imagine if gene doping were used to modify a person's own body in a way that is artistic (think body modification a la tattoos, piercings, and implants) or beneficial in some way.


The question we would want to ask ourselves, then: is it worth it to decentralize science? I think the internet, possibly 3D printing, and who knows what other discoveries may come along in the 21st century, may all answer that question for us. Technology and scientific discoveries don't usually stay secret forever. The computer itself began as something very exclusive, and now almost everyone carries a computer in their pocket that would put those exclusive machines to shame. Governments are slow to react and attempt to keep the world moving at their own pace, but the information age is showing that governments are slowly losing what control they had. Are you ready for the world of DIY science?

Simulated Reality (NSFW)

What does it mean for something to be real? This seems like a question with an obvious answer: things are real if I can see and feel them. My house is real. My cup of coffee is real. My trip to Greenland was real (although ill-advised). My computer is real. But are the things you see on your computer real? What does it mean for something to be data – does it exist in the obvious way that we think something is real? Can a computer have a mind that thinks and feels – and if so, is that mind real?

These might seem like esoteric questions, or even pedantic bickering, but the answers have real-world applications. For instance, violent video game usage is often correlated with aggressive behavior. But does that make playing the violent video game itself immoral? If one takes a consequentialist view, then it is – things that cause people to do immoral things are themselves immoral. But not all people who play violent video games do violent things – it is not a direct cause-and-effect scenario. To say it is would be to take any sort of agency away from the person playing the game – their actions would merely be an effect of external factors. But there are those who would wish to seek certain types of legislation to keep certain video games off the market, or at the very least, out of the hands of impressionable children.

But I want to step away from this consequentialist argument, because there is an equally interesting question: can an act be immoral if it is simulated? Computer graphics, robotics, and artificial intelligence are starting to reach a point where it’s difficult not to get our empathy mixed up in the simulation itself. Some people can feel sadness or anger at inanimate objects to greater or lesser extents, depending on how “real” the simulation is. This means that we feel empathetic toward simulations. If we accept that much of our moral sensibility stems from our ability to be empathetic (sociopaths are people who do not feel empathy, which is why they act immorally), then it is not a stretch to say that these simulations have some level of moral character. But what sorts of implications does this have? Consider the following trailer for the video game “Hatred,” which has quite a few people in an uproar (video is NSFW):

In case you don’t want to watch the video (or don’t care to stomach it): in this game, you play a character who decides he hates and despises the world and everyone in it, and so goes on a “genocide crusade” to kill as many people as he can in as brutal a fashion as possible until finally being violently gunned down. Your goal in the game is simply to kill innocent people in a violent, and serious, fashion until you are killed. A common argument I’ve read for why people hate this game but not something like Grand Theft Auto 5 – where you can also just go around brutally killing innocent people – is that in “Hatred” it is the goal of the game to brutally gun down and stab innocent people, whereas in GTA5 it is the person’s choice to do this and has nothing to do with the goals of the game (in fact, it will hinder your ability to accomplish those goals). I think people are not wrong when they say that “Hatred” somehow has a lower moral character than GTA5 (although it’s a different subject, which I’ll address at some point, whether a work of art can have a good or bad moral character), but I think they’re wrong in using this argument.

In “Hatred” the simulations are very realistic. The victims sob, cry for mercy, and ask why. They are generated to look very lifelike, which makes committing violent acts against them look and feel very real. And yet, anyone who sees the trailer, or plays the game, knows that those people are not real. They aren’t feeling any real pain. There is no actual life being extinguished. None of those simulated people had any real hopes or dreams or memories or loved ones. They are pixels. And yet, even ignoring the consequentialist argument – that people playing this game may be influenced toward aggressive behavior – there is something that feels morally wrong about this game.

So, what happens when simulated people become even more realistic? What happens when they are able to simulate more of a sense of agency? When, if left alone, they do simulate a life that seems very real, very personal, and even very conscious? Would it be immoral to make a game – or a virtual reality – where people can just go on a killing spree and brutally murder these simulations? Would that make those players (or game developers) actual murderers?

And what about simulating sex acts that we find immoral? It’s not far-fetched to think that robots that simulate sex will one day become very real – perhaps sooner than we think.

Sex Robots

These things could become very realistic, and that doesn’t seem so bad. There are already such things as sex dolls. Does it make a difference if the robot can simulate emotions? Desires? Or even fear and pain? And what happens if we make a sex robot that simulates a child in look, feel, and simulated emotions? We all know that the chassis itself is not a child, and that the simulation of a mind isn’t a thinking, feeling child whose life will be significantly altered by what someone does to it. So is it still immoral for someone to commit sexual acts against a robot that simulates the experience of a child?

These are certainly uncomfortable questions to consider, but I think with the accelerating pace of technology, they are questions that must be considered. And once they are, there is always the hairsplitting about when a simulation of a person becomes something sentient, with its own first-person subjective experience of the world. It’s amazing and fascinating that we now live in a world where such questions can be considered, but it’s equally frightening, as we feel our way through Plato’s cave. I think these sorts of inquiries are going to become very real, possibly even within the next ten years. So don’t shy away from the discomfort, because these things will come up whether you pay attention to them or not.

Electrifying Intelligence

If someone offered you a pill that would make you smarter, how much would you be willing to pay for it?

While there are such things as nootropics – racetams and ampakines, for example – which purportedly help with memory and attention, the effects are generally somewhat small, and the chemicals can be expensive to buy or difficult to find. But there is another method with fairly well-established evidence for making you smarter, helping you concentrate, alleviating depression and anxiety, and increasing the speed at which you learn. And it all comes from zapping your brain with electricity, like at the end of “One Flew Over the Cuckoo’s Nest.” Well, not exactly like that, anyway. It’s called transcranial direct current stimulation, or tDCS.

tDCS uses low-amperage (< 2 mA) direct current (instead of alternating current) to either stimulate or inhibit (depending on the direction of polarity) a section of your brain. Neurons in the human brain send signals along their axons electrically – depolarization through an influx of sodium, followed by repolarization through an efflux of potassium (an action potential) – so running current through certain areas can cause either depolarization or hyperpolarization in that area. Anodal (positive) stimulation increases activity, while cathodal (negative) stimulation inhibits it.
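To get a feel for where that < 2 mA figure comes from, here is a back-of-the-envelope Ohm’s law sketch. The resistance value is an assumption on my part – skin-electrode resistance varies widely (very roughly 1–10 kΩ depending on electrode preparation) – so treat this as an illustration of the arithmetic, not a safety calculation.

```python
# Back-of-the-envelope tDCS current estimate using Ohm's law: I = V / R.

def tdcs_current_ma(voltage_v: float, resistance_ohm: float) -> float:
    """Return the current in milliamps for a given voltage and resistance."""
    return voltage_v / resistance_ohm * 1000.0

# A common DIY setup uses a 9 V battery; assuming a 5 kOhm scalp path
# (a hypothetical mid-range value), the current works out to:
current = tdcs_current_ma(9.0, 5000.0)
print(f"{current:.1f} mA")  # prints "1.8 mA" -- just under the < 2 mA range
```

This is also why practical tDCS devices regulate current directly rather than relying on a fixed voltage: if the skin resistance drops during a session, a constant-voltage source would push the current up with it.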

What’s great about tDCS is that the equipment for it is cheap and that it is non-invasive. The electrodes can be placed on the head without having to remove hair or break the skin. And depending on where you place the electrodes, you can receive different effects:


While it’s still too early to say whether this is a cure-all for ailments such as depression and anxiety, or a boon for late-night study sessions, there is already quite a bit of scientific evidence that the technique is effective, as well as anecdotal evidence from DIY users. This 25-minute Radiolab podcast showcases just how amazing this technique might be.


Imagine being able to get an extra boost in brainpower and clarity when you’re tired but still have to work. Imagine learning to play an instrument, picking up a second language, or studying for a math final, and retaining it all much more quickly. Imagine getting home from a stressful day, strapping a couple of electrodes to your head, and almost instantly having the stress melt away.

And while this is all great, I’d like to extrapolate a bit. Imagine having micro-electrodes implanted under the skin of your head, with currents that can be targeted at smaller, more specific areas of the brain and run simultaneously. You could get multiple tDCS effects at once without having to strap electrodes to your head. It could have a power source you carry with you like an MP3 player (or maybe even embedded in your body). Depending on what you were doing, you could have different settings to optimize your brain for the task. Does this seem possible? Is this something you might be willing to do?