Mesh networks figure extensively in my Incarnate series. They’re used by the forty-eights – and others – as a way to run parallel ‘internets’ so as not to be tracked on the original internet. But mesh networks are not all science fiction – they’re actually being used in the real world.
Augmented Reality (AR) is a technology that features heavily in my Incarnate novel series. This technology isn’t just a science fiction creation, though. Just like in my story, it’s becoming more and more a part of our lives and only promises to become a bigger part in the near future.
After two years of investigation and constant media coverage, the Mueller Report is finally finished. While no one outside the Justice Department has yet read the full report, Attorney General William Barr has released a summary. The so-called Russiagate story is not over yet, however, as there are now calls for the entire Mueller Report to be made public. Exactly what the Russiagate story is and how it started is expertly told by Matt Taibbi in his piece “It’s official: Russiagate is this generation’s WMD.” What I’m more interested in is how this whole story is indicative of human nature.
Video trailer for the sequel to Incarnate: Existence
Release date is April 18
Following the events in Incarnate: Existence, the handful of desperate freedom fighters known as the forty-eights, led by an immortal being who is reincarnated every time they die, are indelibly transformed by what happened. Beset by violent insurgents, mounting corporate influences, frantic ideological governments, and a group of zealous hackers known as the Anonymous Knights, our protagonist attempts to hold the forty-eights together, even as their own mind seems to be coming apart. Propelled into an unfamiliar future where advanced technology can alter the very genetics of a human being, where the scourge of a new drug called Shift threatens to become an epidemic, and where global conspiracies endeavor to steer the tides of destiny, our protagonist will continue seeking answers to their own puzzling existence as a possible way to ensure a better future for humanity. In this second installment of the five-part Incarnate series, old and new allies join our protagonist, no matter what body they come to inhabit. In Incarnate: Essence, success is grim, but failure is not an option.
Don’t forget to follow me on Twitter @AuthorTomHarper
Liberalism, defined here in the classical sense of the Enlightenment values of civil liberty and economic freedom, not as a narrow left-leaning ideology, holds individual freedom above all else. In the U.S., both the left and the right, except at the extremes, fall under the classical liberal philosophy. Ideas that could be considered precursors to liberalism began developing in the late seventeenth and early eighteenth centuries. But it was the American Revolution and the French Revolution that put liberalism into practice. That means the experiment has been running for a little over two hundred years. Can we draw any conclusions from the results?
Social media, and Twitter in particular, has recently become central to the conversation about freedom of speech. The debate surrounds the claim that Twitter punishes people for posting right-wing and conservative ideas more than it punishes people on the left. Alex Jones being banned and Kathy Griffin not being banned are two frequently cited cases.
The fear here is that Twitter is policing people for wrongthink. Only left-wing and liberal ideas are allowed, and with Twitter being a primary hub for communication, this threatens to silence right-wing and conservative views from the public conversation. This would give left-wing and liberal ideas de facto hegemony in western culture. This has prompted people to call for Twitter usage to be treated like a utility or even a human right, in the sense that humans have a right to free speech.
I think this is a misguided way of thinking about Twitter. Being banned from Twitter does not infringe on a person’s right to free speech. It only infringes on their ability to have that speech heard by a larger audience. This brings up the questions: do humans have a fundamental right to be heard? Is being heard a part of our right to free speech?
In this day and age, with social media and politically biased news media, it seems that truth isn’t truth. Thus it has become popular for people to talk about ‘my truth’ when they give their opinion. It’s interesting that people can believe wildly different versions of things that happen. From Russiagate to Uranium One, it all seems to depend on what ‘our truth’ is for any particular group.
The alternative, unfortunately, is to have some version of a single, orthodox ‘right Truth’ that we all must agree on, possibly enforced by a Ministry of Truth or a Truth Force.
I’m okay with a US Space Force. But what we need most is a Truth Force — one that defends against all enemies of accurate information, both foreign & domestic.
— Neil deGrasse Tyson (@neiltyson) August 20, 2018
The problem with this is obvious: who gets to say what the Right Truth is? What are their motivations for saying that A is the Right Truth but B is not the Right Truth? Is it ever possible for one source to give the truth, the whole truth, and nothing but the truth?
Knowledge is Power
There is an old adage that knowledge is power. People being able to acquire facts and information gives them power over those who wish to control them. This is the cornerstone of first amendment rights in the United States. Preventing the government from gaining arbitrary power over the people by way of knowledge of their private lives and thoughts is the cornerstone of fourth and fifth amendment rights. Other governments – places like Nazi Germany and those in the Communist bloc – attempted to disempower their people by banning certain books and speech critical of their ideology or governing regime; by controlling people’s right to assembly (i.e., banning other political parties); by regulating or persecuting certain religions; by controlling and censoring the press; by spying on their people; and by forcing people to testify against themselves through torture and indefinite detention. The Khmer Rouge, for example, feared a knowledgeable populace so much that they would condemn and even kill people who wore glasses, because ‘intellectuals’ were considered to be corrupted by modernity.
The point being, knowledge is generally viewed as a good thing in a liberal democracy. It allows us the opportunity to make informed decisions about who governs us and then hold them accountable. But is there knowledge which should not be known? Knowledge that could potentially be harmful if it gets out?
This is an argument as old as time, but a particular instance comes to mind – the Bible. For much of the Church’s history, the Bible was read only by the clergy (and other higher-status individuals), who could read Latin. The teachings could then be interpreted by the clergy and taught to their parishioners. This allowed a single orthodoxy to be run by the Church bureaucracy. In the first 1500 years of Christianity, there was only a single schism in the Church (not counting the Western Schism, which was more political than theological). However, Erasmus’s printed Greek edition of the New Testament and vernacular translations like Martin Luther’s German Bible helped ignite the Protestant Reformation, the result being that the Church split into numerous churches. During those early days of the printing press, it was hotly debated whether it would be a good idea to let the people have access to the Bible. There are still those who think it was a bad idea.
A more contemporary source of perhaps forbidden knowledge is the internet. Conspiracy theories, fake news, and other such nonsense aside, the internet is arguably the greatest means of spreading knowledge to come into existence since the printing press. The biggest obstacles one might find in their way online are paywalls and subscription fees, and even those are usually easily bypassed or avoided. But what about information like how to make bombs or 3D printed guns? Sure, most people are probably responsible enough to either not use this information or, even if they do, use it for benign purposes. But if that information is available on the internet, it is available to everyone – even those who would use it for malicious or self-serving purposes. I am not trying to make a political argument for banning these things, but rather a more philosophical one – would humankind be better off if this information had never become available in the first place? Or is there something intrinsically good about such information being available – i.e., knowledge is power?
What about hacked or leaked information of a private or personal sort, like pictures of a politician doing something we might find disgusting, like cheating on their spouse or doing drugs? Does our knowledge of this lapse in character or poor judgment outweigh the privacy of the individual perpetrator? What about leaked classified information about government wrongdoing that could damage national security or put agents in the field in danger? This argument is made just about any time information about government wrongdoing is made available to the public, whether it damages national security or endangers field agents or not, which further demonstrates that the government is afraid of people becoming knowledgeable. But what about in cases where public knowledge is demonstrably dangerous, even if the government is in the wrong about something? Where is the crossover point, where the information becoming public knowledge becomes an unacceptable risk?
There is knowledge of a different kind on the internet – pornography. Social conservatives often argue that access to pornography has a deleterious effect on people’s minds and morals. There may be merit to this argument. Pornography can cause addiction, isolation, and unrealistic expectations about romantic love. And what about the fact that after a terrifying experience, such as the false alarm about a missile strike in Hawaii, people seem to seek comfort in pornography? So, should pornography be included in the category of knowledge that humankind would be better off without? Or is it part of the knowledge that is an intrinsic good? Even if we argue that pornography is not harmful, psychologically or sexually, is there an argument for it being good? Or perhaps there is a cutoff point – pictures of naked people alone, or video of people having missionary position sex, is acceptable, but people doing other sex acts is not. Maybe if it’s only shown with people having safe sex – like the proposed condom law that failed in California – then it is acceptable. Once again, I’m not trying to make a political or civil liberties argument one way or the other, but I’m asking, philosophically speaking, would humankind be better off (psychologically, sexually, morally) if pornography didn’t exist, or if only certain types of pornography existed?
Opponents of Political Correctness contend that it is a form of censorship that stifles society from having important conversations. Political Correctness is defined as “…the avoidance, often considered as taken to extremes, of forms of expression or action that are perceived to exclude, marginalize, or insult groups of people who are socially disadvantaged or discriminated against.” However, Political Correctness is often used as a way of shutting down conversation. For instance, bringing up crime and race, race and intelligence, or the possibility that men and women might have differences in preference when it comes to career path choices (as opposed to systemic barriers to entry in certain careers dependent on one’s sex or gender identity) are often hot-button issues. I’m not making any claims about the truth or falsity of these topics, but Political Correctness dictates that even bringing them up is taboo. People who bring these issues up will often find themselves on the receiving end of criticism, and sometimes even threats of violence. Opponents of Political Correctness will say that if these subjects aren’t even up for discussion, then there is no way to find out whether the claims are true or not, and if true, find the causes of these problems and be able to work out solutions. As a result, the problems will persist and get worse while people continue to pretend that they don’t exist. The truth value of these claims is then based not on reason, facts, or evidence, but on how the topics make people feel. Things that are uncomfortable to discuss then become, essentially, forbidden knowledge. Do these subjects belong in that category, or should they be up for discussion?
The issue works the other way, too. There are plenty of people who would prefer not to have LGBT issues taught to children, while proponents of Political Correctness are often in favor of doing this. Whether one believes that sexual preference or gender nonconformity are choices, pathologies, or just part of the spectrum of human experience, they are still phenomena that occur in the real world; they are still impulses that shape the lives of real people. Refusing to teach people about these issues will not prevent them from being exposed to them, and will only leave people less knowledgeable about real-world issues. It is a form of political correctness that attempts to pretend that something isn’t real, which stifles dialogue and weighs truth claims about causes, effects, and society not on reason, facts, and evidence, but once again only on how the topic makes people feel. Thus, not teaching people about LGBT phenomena relegates these issues to the realm of forbidden knowledge. Are people better off not knowing about these issues, or is knowledge still power in this case? Does knowledge necessitate acceptance – if a person is taught about LGBT people, will that person necessarily be accepting? Does acceptance necessitate knowledge – can you not accept someone’s lifestyle if you are ignorant of it? And, if this should not be forbidden knowledge, at what age should people be taught about LGBT issues? What is the best way to teach them? And what exactly should be taught, as there are competing theories?
I think probably the place where the most people will accept that some knowledge may be better left unknown is when it comes to the potential end of the world. Nuclear weapons are the first thing that comes to mind. During World War II, there was a concerted effort by the United States and Britain to develop atomic weapons. Doing so opened up a Pandora’s box that still affects us to this day – the Doomsday Clock was just recently reset to two minutes to midnight (doomsday). When the Soviet Union tested its own nuclear weapons in 1949, the term Mutually Assured Destruction (MAD) soon came into vogue. Would it have been better if humankind had never learned how to develop nuclear weapons in the first place? What about the argument that Mutually Assured Destruction has prevented cataclysmic wars between major powers, as was the case in WWI and WWII before humans split the atom? Does that make knowledge of atomic weapons a net positive for the human race, even if it leaves the potential destruction of civilization as we know it at the hair-trigger whims of a few powerful people?
Nowadays, we also have to worry about a possibly even more insidious weapon of mass destruction: biological weapons. What makes this even more dangerous is that they are so cheap and easy to develop (particularly compared to nuclear weapons) that a single person could do it in a DIY lab in their garage. It’s so easy a person could develop or release one by accident. Instructions on how to do it could easily be made available online (and probably are in some dark corners of the web). And once the disease is out, it will not distinguish between friend and foe – at the very least, an atomic weapon could potentially be contained to a single geographical location. This, of course, brings up the question of whether it has been a good thing or not that humankind has acquired knowledge about how genetics work – with knowledge of genetic manipulation, it’s not that difficult to make a dangerous pathogen. Our understanding of genetics and genetic manipulation has yielded amazing things for humanity, but if it ultimately spells our downfall, was any of it worthwhile? Or would humans have been better off never knowing?
And now, possibly in the not-too-distant future, we might have to worry about Artificial Intelligence. As it is often popular to say in AI circles, Artificial Intelligence could be the last thing humankind ever invents. So, does that mean that AI technology should be forbidden knowledge? Is humanity better off not discovering Artificial Intelligence? What if developing AI is the only way we can actually ensure that we don’t wipe ourselves out via other means? Unlike most of what I’ve talked about here, AI is knowledge that we have not yet acquired – it is still theoretically within our power to keep this knowledge forbidden, whereas other things I’ve discussed are already available. It may be that the development of AI is inevitable, but it could be that we would have been better off never even considering it.
There is a truism that in everyone’s life, there will come a time when youth/pop culture doesn’t make sense anymore. You’ll hear the music or see the shows kids these days are listening to and you’ll just think “I don’t get it – how do people enjoy this?” The irony is that, when you’re a pre-teen, or teenager, or early-mid twenties person, you think this will never happen to you. You’ll always be hip and with it. And then it does happen to you. But you don’t think it’s that you aren’t hip and with it, it’s just that youth/pop culture became shallow and derivative. It’s them that sucks, not you.
This is all still very true. I’ll call this the gradual stratification theory. In this view, there is a more-or-less homogeneous culture spread throughout a particular age group, often categorized by the Strauss-Howe Generational Theory – the one we use to categorize people as baby boomers, generation X, millennials, and so forth. Sure, there is variation in interests and across regions, but for the most part, those things which are massively popular are either consumed by a large number of people or are at least known about by the majority of people. This is a result of there being very little choice about what to consume. Before cable in the eighties, there were three channels on TV, only a handful of movies came out each year, watching movies at home was at best a luxury, and there were no video games. And, of course, there was no internet.
It’s obvious that the internet changed these paradigms in a fundamental way. But I think it’s important to really appreciate how much has changed. And what it has changed to is what I call the granular bubble theory of culture. One of the biggest changes that has come about, mostly (but not entirely) because of the internet, is on-demand content. No longer do people need to just turn on the TV and watch whatever is on those three channels at the time slot the network decided on. Now people seek out content – it’s all demand based rather than supply based. But that part is obvious. What’s more, though, is that people find those things that suit their taste – news sites, blogs, podcasts, forums, Twitter users, Facebook users, Youtube channels – and settle on those places. Why would people need to go outside of those places that supply them with the content they want to consume?
This creates both a granular culture and a culture bubble. The granular part is because the near homogeneity seen in the purely gradual stratification theory is mostly gone. I consume the content I want and you consume the content you want, and there does not need to be much overlap. Sure, we might have the same taste in books, but then I can go immerse myself in a world of my taste in music and you in yours, and never do those worlds have to cross paths. But it also becomes a bubble culture, because I ONLY pay attention to the things I enjoy, and culture in the other granular bubbles evolves on its own without my even being aware. And then those bubbles perpetuate themselves, because all the advertisements, all the external links, all the searches – everything in that bubble – only point you to other areas of that bubble, using algorithms that figure out which granular areas you are interested in. You can watch shows you like on Hulu, and when the advertisements come on, it will literally ask you if that advertisement is relevant to you – you are not just choosing your content, but you are tailoring what is marketed to you.
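The self-reinforcing loop described above can be sketched in a few lines of code. This is a minimal, hypothetical illustration – the catalog, categories, and click behavior are all invented for the example, and no real platform's algorithm is this simple – but it shows the basic mechanism: engagement increases a category's weight, so the feed drifts toward what you already consume.

```python
# A toy sketch of a bubble-forming recommender (hypothetical, illustrative only).
from collections import Counter
import random

# Invented catalog: three "granular areas" of content.
CATALOG = {
    "indie-folk": ["song A", "song B", "song C"],
    "metal":      ["song D", "song E", "song F"],
    "synthwave":  ["song G", "song H", "song I"],
}

def recommend(clicks: Counter, n: int = 3) -> list[str]:
    """Pick n items, weighting each category by past clicks
    (plus 1, so unseen categories stay possible, just unlikely)."""
    cats = list(CATALOG)
    weights = [clicks[c] + 1 for c in cats]
    chosen = random.choices(cats, weights=weights, k=n)
    return [random.choice(CATALOG[c]) for c in chosen]

random.seed(0)  # deterministic simulation
clicks = Counter()

# Simulate a user who only ever clicks indie-folk items in their feed.
for _ in range(50):
    for item in recommend(clicks):
        if item in CATALOG["indie-folk"]:
            clicks["indie-folk"] += 1  # engagement reinforces the weight

# After a few rounds the weights are lopsided and the feed is
# dominated by the single category the user engaged with.
print(clicks.most_common(1)[0][0])  # → indie-folk
```

The point of the sketch is that nothing malicious is needed: a perfectly reasonable "show people more of what they click on" rule, iterated, produces exactly the granular bubble described above.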
Back in the 1950’s, people could go into work with the implicit assumption that three quarters of the people there watched “Leave it to Beaver” yesterday, and the other quarter at least knew what the show was. It was a part of the small amount of content that was available to consume on one of the only ways of consuming it. In the 1960’s, the culture changed as something else supplanted the popular culture of the 1950’s, but that culture changed rather homogeneously. The same was true into the 1970’s. And then in the 1980’s, cable TV, VCRs, and video games became a big(ger) thing. Now I could watch TV channels that played content I enjoyed all day, and I could buy movies and video games that had content I wanted to watch or play. This only grew in the 1990’s, when more channels, more gaming systems, compact discs, and even computers started becoming a thing. Then it really blew up in the 2000’s, when the internet came into its own, when we started to have more and more of that content available to us everywhere on our cell/smart phones, and user-generated content in the form of blogs, video hosting channels, self-published books, and social media became popular. Things have only become more granular and more enshrined in their bubbles in the twenty-teens (2010 to present) as on-demand has only picked up and solidified – user-generated content is quickly surpassing professional content in popularity. And yet, someone who is incredibly famous on the internet can probably go home on Thanksgiving and have nobody in their family who knows or understands what they do.
I’m not judging whether any of this is a good thing or a bad thing, merely noting that it’s a different thing. And probably something we’ll all have to get used to, because as of right now, I don’t see anything coming that will shift us back, or, stranger yet, push us in some completely new direction.