Are You the Master of Your Fate?

“And this is the nature of the universe. We struggle against it. We fight to deny it, but it is of course pretense. It is a lie. Beneath our poised appearance, the truth is we are completely… out of control.”

– The Merovingian, The Matrix Reloaded (2003)

My History and Systems of Psychology professor posed an interesting question to my class the other week: you arrive home and surprise your girlfriend with a large and expensive bouquet of flowers, but how does she react? Does she run up to you with tears of joy in her eyes and hug you with all the love in the world? Or does she react with scorn, convinced that you’re fruitlessly trying to repair your damaged relationship by presenting something as meaningless as a bouquet of flowers whose colors she finds ugly?

Context could help with predicting her reaction. If the relationship was healthy, then she would be more likely to give you all the love she has to offer. Alternatively, if the relationship was on the verge of dissolution, then she would be more likely to kick you out of the bedroom and force you to sleep on the couch for the night! Even without any context to clarify her reaction, one thing is certain: she is going to react one way or another.

The flower bouquet scenario was brought up during a class discussion on explanatory reductionism, the philosophical position that all universal phenomena, from the stars in the sky to the deep blue seas, can be explained by breaking them down into readily understandable terms. According to this paradigm, matter is broken down into molecules, which are broken down into atoms, which are further broken down into electrons surrounding a nucleus of protons and neutrons. Reductionists argue that by reducing complexity down to its simplest form, we can understand virtually everything in the universe, including free will.

At the same time, there is something deeply unsettling about explanatory reductionism. If we were to assume, for instance, that free will could be broken down and summed up as nothing more than an illusion the brain constructs for itself to reconcile a startling lack of control, then what does that say about us as a species? If we could use explanatory reductionism as an avenue toward perfectly predicting how our girlfriends would react when presented with an expensive bouquet of flowers, then one might argue that it would take away a lot of the luster—the charm and the mystery—that make life worth living in the first place.

Consider the difficulty of describing a long walk on the beach with a beautiful woman you are maddeningly attracted to, sipping on an apple martini and sinking your toes into the sand as a gust of refreshing cool air brushes your face and the sun sets in the distance. However uniquely placid such an experience of consciousness may be, it seemingly cannot be reduced to a few paragraphs in a science textbook that you would read once and take a test on the following week. But if a reductionist’s approach can be taken to the stars in the sky, the deep blue sea, and the molecules that constitute an element, what is to stop us from taking it to the mechanisms behind human consciousness and, for the purposes of this article, free will?

Free will has always been a sensitive religious and spiritual subject because people despise being told that they do not have it, and they will go to great lengths to convince themselves that they are somehow special for having it. “Well, of course I have free will. After all, I chose what I wanted to eat for breakfast this morning, and I chose what I wanted to wear to work.” But rarely do they ever stop to consider where their choices come from, and how they are made.

The issue of free will became apparent to me after watching HBO’s critically acclaimed Westworld (2016). In Season 1, Episode 10 (“The Bicameral Mind”), one of the hosts, or synthetic humans, discovers that what she interpreted as an awakening of self-awareness was yet another behavior that somebody had preprogrammed into her. Despite having thought she was in control the whole time, it turned out that control was simply a string of computer code. Aptly, the host snatched the tablet out of the programmer’s hands and broke it, content with perpetuating the lie that her choices belonged to her and nobody else. Maybe the lead writers of Westworld were trying to communicate something about humanity’s own understanding of power, control, and freedom. If we discovered that our choices were under the control of something more powerful than ourselves, we, too, might react in the same way. As the Architect puts it in The Matrix Reloaded, a movie that’s received heavy scrutiny for preaching determinism every five minutes, “Denial is the most predictable of all human responses.”

Whatever the case may be, I do not argue for free will because if there is no such thing as a “higher” or “lower” species, then there is very little to separate us from a spider, dog, chimpanzee, or rhinovirus. And yet, we never say that spiders, dogs, chimpanzees, or rhinoviruses possess free will—only that there is something special about the human being that endows him or her with the fantastic ability to choose. Why, then, must we always assume that our choices have any more weight to them than those of other species on the planet?

If you look at the animal kingdom, a lion’s life is essentially on rails from birth until death. The lion will hunt, eat, and procreate, but nothing the lion ever does will deviate much from evolution’s prescription of its behavior. And if we happen to intervene in a lion’s hunt for its prey, we’ll take a step back and affirm that “it’s just nature. Let the lion do its thing,” as if to say that humankind should refrain from obstructing an animal’s lifestyle because, unlike animals, humans benefit from being able to choose everything.

I’m not saying that humans are by and large incapable of choice, because that’s a tough pill to swallow. To a certain degree, denial of the relative impossibility of free will is necessary for maintaining sanity. What I am saying is that the decisions we think aren’t automatic actually are automatic, and the work of Ivan Pavlov would concur.

Ivan Pavlov was the first physiologist to systematically uncover the origins of learning. He discovered that dogs conditioned to associate the ringing of a bell with the presentation of a bowl of food would produce significantly more saliva than dogs that had not made the association; even when the bell was rung but the food was withheld, the conditioned dogs would salivate anyway. This was considered a landmark study because it demonstrated that learned behavior can occur involuntarily when the subject is conditioned into pairing a stimulus with a physiological response to that stimulus.
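To make the mechanism a little more concrete, here is a toy simulation of how an association like Pavlov’s might strengthen and then fade. It uses the Rescorla–Wagner update rule, a formal model of conditioning developed decades after Pavlov; the constants are arbitrary, and the sketch is meant only to illustrate the principle, not to reproduce Pavlov’s actual procedure.

```python
# Toy simulation of associative learning in the spirit of classical conditioning.
# The update rule is the Rescorla-Wagner model, a later formalization of
# conditioning; the constants below are arbitrary and purely illustrative.

alpha = 0.3   # salience of the bell (conditioned stimulus)
beta = 1.0    # learning rate tied to the food (unconditioned stimulus)
lam = 1.0     # maximum associative strength the food can support
V = 0.0       # current bell-food associative strength (drives salivation)

# Conditioning phase: the bell is repeatedly paired with food.
for trial in range(1, 11):
    V += alpha * beta * (lam - V)
    print(f"pairing {trial:2d}: response to the bell alone ~ {V:.2f}")

# Extinction phase: the bell rings, but the food is withheld (lam drops to 0).
for trial in range(1, 6):
    V += alpha * beta * (0.0 - V)
    print(f"bell without food {trial}: response ~ {V:.2f}")
```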

I remember learning about classical conditioning in high school and thinking about how cool it was to catch myself in the middle of an automatic behavior. For example, I clean my retainer every morning by placing it into a glass cup, filling the cup to the brim with hot water, and letting a dissolvable antibacterial tablet eradicate all of the plaque while I take a shower. One day, I happened to leave my retainer in the cup without cleaning it, so I never stowed it away into its respective plastic case. Before going to sleep, I then reached for the case and opened it, and curiously, it was empty! “Oh… That’s right,” I reminded myself. “I forgot to clean it today.”

A second example of behavioral automaticity from my experience comes from routinely charging my smartphone, as its battery power does not last long. I spend a large portion of the day sitting at my desk writing articles, completing homework, and editing YouTube videos, and thus I look behind me quite often to see if my phone has received any text messages or Facebook notifications. However, when I unplug my smartphone to listen to Pandora and type a paper, I still check behind me for notifications, even when my phone is sitting on my lap. I’m sure you’ve experienced a similar phenomenon: you suffer a mini heart attack when you can’t find your car keys, only to discover that they were in the palm of your hand the whole time. How silly of you!

Above were two rather mundane representations of what a lack of free will might look like, but they get you to contemplate the extent to which behavior is rhythmic, especially since most of our days follow a pattern: we wake up, attend to school and work responsibilities, and go to sleep. Everything in between—brushing teeth, eating breakfast, and watching Netflix—is on autopilot. If we make even the slightest alteration to our schedules, like going to the bar instead of watching a television show on Netflix, we exhaust many cognitive resources adjusting to it. After a while, however, it just becomes another box that we check off before the next day kicks in, and we don’t waste time giving it a second thought.

Next, consider the science behind dreaming versus wakefulness as it applies to the debate over free will. We still don’t know exactly how and why we dream, but we can be confident that most vivid dreams occur during REM sleep. Dreams have also been speculated to be protective mechanisms against the overwhelming bombardment of stimuli that we take in every day. In other words, the brain staves off information overload by taking these stimuli, including people’s faces, music lyrics, and the smells and tastes of Chinese food, and weaving them into cohesive narratives that the hippocampus goes on to convert into memories. If dreams are therefore an unconscious response to the stimuli that the brain has encoded over the course of the waking day, then our behaviors are likewise generated in large part by the unconscious mind, and everything we do, from going on a blind date to eating a slice of pizza at two in the morning, is nothing more than a story that’s been written out well in advance but that the brain has to “act out” by virtue of the vast amount of information it sorts through for the sake of its ongoing survival. But if wakefulness cannot function without a certain amount of sleep and dreaming, who is to say that we have any more control over our wakeful states than we do over our dreams?

Matsuhashi and Hallett (2008) might be able to answer that question. They wanted to measure the lag between when the brain consciously intends to move and when movement is actually carried out, hypothesizing that if the conscious intention to move is what generates the movement, a process they referred to as “movement genesis,” then the movement should occur after the conscious intention, and not before it.

In Matsuhashi and Hallett’s study, participants were instructed to perform brisk finger movements at random intervals of their own choosing, and to refrain from expending mental energy by counting the number of movements already made or planning when to make the next one. They were to move their fingers only once a thought of finger movement had precipitated the action. At random times, a tone was played; if participants heard the tone after becoming aware of an intention to move, it served as a stop signal, and they were to immediately cancel the upcoming movement.

The timing of the tones distinguishes two key test conditions: (1) tones that arrive before participants become aware of any conscious intention to move their fingers, and (2) tones that arrive after participants have become aware of their intentions but too late for the movements to be canceled. One subject showed a lag of about 1 second between movement genesis and his conscious intention to move; that is, his brain had begun generating the movement about a second before he had so much as thought about it. As such, Matsuhashi and Hallett concluded that movement genesis occurs on multiple levels of the unconscious mind and is not as simple as first thinking about when to move and then carrying out the movement itself. The evidence indicates that a movement is already well underway before any conscious thought of movement has formed.
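For readers who like to see the logic spelled out, below is a toy simulation of how this kind of tone-probe reasoning can reveal when an intention becomes conscious. Every number in it is invented for illustration; it is a sketch of the inferential idea as I understand it, not a reproduction of Matsuhashi and Hallett’s data or procedure.

```python
import random

# Toy simulation of the tone-probe logic (all timing constants are made up).
# Times are measured in seconds relative to the moment the finger actually moves.

random.seed(1)

GENESIS = -2.0             # assumed: the brain starts generating the movement here
INTENTION = -1.0           # assumed: the intention becomes conscious here
POINT_OF_NO_RETURN = -0.2  # assumed: after this, the movement can no longer be vetoed

vetoed = []   # tone times that successfully aborted the upcoming movement
ignored = []  # tone times that failed to abort it

for _ in range(2000):
    tone = random.uniform(-3.0, 0.0)         # a tone probes a random moment
    aware = tone >= INTENTION                # has the intention become conscious yet?
    stoppable = tone <= POINT_OF_NO_RETURN   # is there still time to veto?
    (vetoed if aware and stoppable else ignored).append(tone)

# The earliest tone that ever produced a veto approximates when the intention
# becomes conscious; movement genesis was assumed to start well before that.
print(f"estimated onset of conscious intention: {abs(min(vetoed)):.2f} s before movement")
print(f"assumed onset of movement genesis:      {abs(GENESIS):.2f} s before movement")
```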

Obviously, there is more to be said about the topic of free will—about its philosophy, psychology, and neuroscience—but even the little evidence we have at our disposal thus far suggests that there is a lot going on beneath the surface of every decision we’ve made and are going to make.

Maybe our minds just have minds of their own.

 

Reference

Matsuhashi, M., & Hallett, M. (2008). The timing of the conscious intention to move. European Journal of Neuroscience, 28(11), 2344-2351.

Why American Education Is Destined to Fail

“The mitochondria is the powerhouse of the cell.”

– Philip Siekevitz

A popular Internet meme is that adulthood is like skipping the tutorial section of a role-playing video game, and then having no idea what you’re doing as soon as you first step out into the world. For example, after creating my character in Blizzard’s MMORPG World of Warcraft (2004), I recall struggling to figure out where to go next and how to so much as progress the storyline. I would see higher-level players glide past me on their fancy mounts with their badass-looking armor, and wonder how they were able to become so powerful. Surely, they must’ve had access to some esoteric knowledge of character progression, right?

Like adulthood, character progression in video games is a never-ending process, and not a product. It’s about the journey you must undertake to acquire a fancy mount and badass-looking armor—not about a fancy mount and badass-looking armor in and of themselves.

I never stuck with World of Warcraft, but I did transition to Bungie’s shared-world shooter Destiny in 2014. Destiny, I think, was addictive for many of the same reasons WoW was: it was a loot-based role-playing video game with a heavy emphasis on teamwork and cooperation. Above all else, Destiny was about “becoming legend,” or grinding for a long period of time to create the most powerful character imaginable.

Although plagued by inexcusable problems at its launch, such as a fundamentally incoherent story mode, repetitive gameplay, and a broken reward system, Destiny has been renowned not only for its ability to cultivate relationships between people who are halfway across the world from each other, but also for its longevity. Whereas player populations in most games drop off after one or two years, Destiny maintains a strong and loyal following, with players to this day grinding to reach the maximum Light level despite the knowledge that all of the hard work and energy they expended will be nullified as soon as the next expansion or full sequel is released. Can you imagine, then, how quickly the Destiny community would have disbanded if you could max out your character within the first few hours of the game?

The answer to the preceding question is part of why the American education system is faulty. Specifically, just as video games with weaker communities operate under the assumption that character progression is something that can be completed in a matter of hours, the institutions that teach our children assume that learning can be completed in a matter of years. That is, education begins in preschool, continues throughout elementary and middle school, and ends in high school, when in reality, education begins after we’ve received our diplomas and continues until we die.

Another point to consider is that inadequate education is no different from skipping the tutorial section in a video game in that it fails to articulate the skills we must practice before our real education can begin, and so, as soon as we reach a point where we are required to achieve independence (let’s say, age 20 or 21), we suffer tremendously. That might seem cynical, as if I’m suggesting that the first 18 years of life are a waste of time, but it’s quite the opposite: there is SO much to learn beyond high school that it’s actually very exciting. In fact, there is so much to learn beyond high school that it’s impossible to learn it all in a hundred lifetimes.

None of this is to say that you should downplay education, but you should question its value. For instance, my preliminary education has a special place in my heart for teaching me the rudiments of reading and writing, but I will always criticize it for failing to teach me how to pay taxes, manage expenses, do the laundry, and cook a meal, and, at the more advanced level, for failing to teach me how to remedy negative emotions, form and sustain a meaningful relationship, and make a sensitive decision with long-lasting ramifications.

Of course, it’s good to be able to write an essay about Shakespeare’s Romeo and Juliet and use the Pythagorean theorem to solve for a missing side, but what practical applications do these principles have in the wider context of my life, and the trials and tribulations I will face along its inevitably perilous course? Few, probably. Even worse, school teaches us what to think, but never how to think. This became evident to me while taking an English examination in high school, in which I was instructed to complete the multiple-choice section after reading a short story whose language was bewilderingly antiquated. Why should I be conditioned into thinking that there is a singular “right” answer to interpreting classic literature when classic literature is supposed to warrant multiple interpretations?

Sometimes I feel there are puppeteers dictating the content that I should and shouldn’t put into my mind, or balding old men in slick suits who’ve written the exams I’ve taken all my life—who think that math equations are supposed to teach me how to solve a personal problem, and that poems and short stories from three centuries ago will allow me to develop a deeper appreciation for works of fiction. However, I’ve actually learned more from “turning my brain to mush,” watching television shows and reading Wikipedia articles.

And that, you could say, is why American education is destined to fail. It prescribes the knowledge that institutions “think” you ought to have once adulthood is on the horizon, but it doesn’t give you the tools you need to transition into adulthood and navigate it successfully.

So you know what I do? I study EVEN when I don’t have a test to take the following week. I learn new things not as part of an endless, narcissistic pursuit, but as part of a contingency, so that in the event my car breaks down in the middle of nowhere, I won’t just solve a silly math equation. I’ll repair the damn thing and make it to my destination.

You Shouldn’t Joke About That

I have to be very careful with how I word this article, or else people will think that I’m the most heartless bastard to ever exist. Regardless of its execution, I’m writing this entry for my personal blog, and thus I should be allowed the freedom and flexibility to say whatever comes to mind first, even supposing that you might not agree with me.

In my sophomore year of high school, my English teacher told the class that (and I paraphrase), “You need to be careful with your words, because you just never know when you might upset someone.” I took his advice into deep consideration because there are extremely sensitive people out there who, even at the thought of a lightly insensitive joke, will fly into an apoplectic rage. At the same time, I couldn’t help but feel that all jokes, offensive or lighthearted, sit in a metaphorical minefield: one false step and you’ll “set off” somebody else’s feelings. So where am I supposed to step in the minefield that is humor? What jokes am I allowed to make, and what jokes should I simply keep to myself for fear of striking a nerve?

There is an elevated level of ambiguity in acceptable versus unacceptable humor. What one person might find funny, another might find distasteful, so it’s important to always think carefully before you deliver that final, fateful punch line. But what if the line between acceptable and unacceptable humor, by its very subjective nature, is unknowable?

I condemn such phrases as, “You shouldn’t joke about that” and “That’s not very funny” because the line separating topics that can be joked about from topics that cannot is obscured. We’re all unique individuals who have had different past experiences from which our senses of humor have emerged. Therefore, when I make a joke that fails to comply with your standards for acceptable humor, it’s not really necessary for you to express to me that you’re offended because I couldn’t have known that you would find it offensive in the first place. In fact, I’m offended that you’re offended!

Okay, okay. I need to moderate my tone now because I promised myself that I wouldn’t let this article devolve into an angry rant. But hear me out for just a few more paragraphs.

George Carlin, my all-time favorite stand-up comedian, argued that you can make a joke about pretty much anything as long as the exaggeration that constitutes the joke is so out of line and so “out there” that it has no basis in reality. In other words, the setup of the joke should be mundane enough that the punch line completely throws it off balance. That is something I consider to be the key ingredient of a great joke—offensive or not.

Of course, there are limits that should not be pushed. For example, when an unfathomable tragedy such as a terrorist attack or school shooting occurs, we should probably put off making jokes about it for a while to give people the time and space to grieve. Making comedic references to a specific incident with the intent to downplay the enormous misfortune that it’s caused does come across as rather brash and ill-conceived. As far as sensitive topics are concerned, I don’t believe that they are entirely off-limits, and neither did Mr. Carlin.

I for one employ offensive humor as a way to emotionally distance myself from how chaotic this world can be. Many times, I feel like pointing fingers and laughing at something awful makes it less threatening.

But that’s just me. You might believe that some topics should never, under any circumstances, be joked about, and that’s okay. If we’re having a conversation and you happen to find that something I said upset or unsettled you, do not hesitate to call me out on it so that I can readjust my language accordingly. But to say that I’m not “allowed” to joke about [insert topic here] is an obstruction of my own free will.

Video: Why We Are Already Living in the Apocalypse: A Walking Dead Video Essay – Part 1 (Power)

Here is Part 1 of my five-part Walking Dead video essay. Stick around for Part 2!

Why Are Dogs So Amazing?

I have an enormous soft spot for dogs. When a dog dies in a movie, I will start crying like a baby. And when a dog is in pain, I, too, will be in pain. Simply put, I look upon dogs with a unique fondness that I simply do not feel toward humans.

The fact that dogs are so compassionate is not an accident. As much as I condemn humans for their remarkable capacity for evil and wrongdoing, humans were the ones who made dogs into what they are today. Humans were the ones who, over the past 10,000 years or more, selectively bred dogs into domestication, transforming these animals from vicious predators into lovable (and quite loyal) idiots. But what makes dogs special enough to warrant the title of “man’s best friend”?

First, dogs are unconditionally accepting of all our flaws. They do not debate, argue, or contend with us. They care more about receiving love and affection from us than undermining our self-interests to advance their own. For example, when you come home from a long and strenuous day at work, your dog isn’t going to pester you about why dinner hasn’t been cooked yet. Your dog isn’t going to steal your credit card in the middle of the night and spend hundreds of dollars on clothes. And your dog isn’t going to wake up one morning and tell you that it doesn’t love you anymore. Your dog will always be there for you, no matter what.

Second, dogs help us sustain good physical and mental health. One study published in the American Journal of Cardiology found that among roughly 400 patients who had suffered a heart attack, those who owned pets had a significantly higher survival rate than those who did not. Multiple studies have also found that dogs reduce negative feelings such as boredom, depression, anxiety, and most importantly, loneliness.

At the end of the day, dogs do not extend from or substitute for our humanity. Rather, they reflect our humanity, reminding us that despite all our moral shortcomings, there exists good in each of us. However, as delightful as dogs can be, their deaths are emotionally unfathomable. My mother told me that after our dog Charlotte passed away almost three years ago now, the grief she suffered was actually more intense than the grief she had felt over losing her own parents. There are two explanations I can offer as to why this happens. The first is that because dogs have been around for such a large portion of our evolutionary history, our brains have been rewired to think of them as babies. The second is that we have been conditioned to think of dogs as symbols of innocence, and thus when a dog dies, innocence dies with it.

The best thing that we can do for ourselves (and for our dogs) is to enjoy the time that we do have with them, and cherish the happy memories that they help us create.

The Origins of Meme Culture

“Oh God… eleven articles in and he’s writing about memes? He must be REALLY out of ideas.”

Actually, memes fascinate me. I’ve always wondered how something as innocent as Rick Astley’s 1987 hit “Never Gonna Give You Up” could be turned into an Internet sensation. And “Rickrolling” isn’t your average meme, either. You might discover that a movie trailer you’ve looked everywhere for has been cruelly switched out with this 1980s dance-pop song, at which point the disappointment and frustration you feel are overwhelming. You also can’t help but feel slightly outsmarted.

Another meme that I’ve taken a guilty pleasure in was “Darude – Sandstorm.” For those of you who are unfamiliar with the background of this particular meme, the comedic appeal was that any question asking for the name of a song or movie was answered with “Darude – Sandstorm,” and that was it. That was your answer. That one film from 1997 with Harrison Ford about the terrorists and the plane? Darude – Sandstorm. That one catchy song you heard on the radio the other day, but couldn’t remember the name of? Darude – Sandstorm. For me, the crux of the “joke,” if that’s what you want to call it, was the unapologetically apathetic nature of responding to legitimate inquiries with “Darude – Sandstorm.”

What made the song, like other viral phenomena, special enough for cyber stardom? The website KnowYourMeme.com was founded to answer these types of questions. Special Internet analysts known as “Meme Scientists” are tasked with tracking down not only the origins of memes, but also their popularity and the interest in them over time. A common trend they’ve noticed across memes of all varieties is that they are often short-lived and readily transmittable, spreading through social media like a flu virus.

The way I see it, memes are the fast food of the Internet in that they’re cheap, quick to prepare, and accessible to everyone. They’re just on the cusp of meeting the criteria for trueborn jokes, yet routinely fail in their mission to deliver anything of real substance. They might as well be caught in an identity crisis, because if they cannot be classified as jokes, what are they?

To answer this question, we need to turn back the clocks a little bit. In the 1960s, the Advanced Research Projects Agency Network, or ARPANET, was conceived by the U.S. Department of Defense to establish a single communication network between multiple computers and thereby exchange vital information (Andrews, 2013). This technology then snowballed over the next three decades, beginning in the 1970s with the groundbreaking work of Robert Kahn and Vinton Cerf, who together developed the first protocols for exchanging data across a wide array of networks. On January 1st, 1983, ARPANET adopted TCP and IP into its connectivity parameters, becoming the pioneer of the modern-day Internet. In 1990, computer scientist Tim Berners-Lee invented the World Wide Web, and helped to finally bring the first iteration of the Internet into the public eye.

With all of the technological advances made by such innovators as Robert Kahn, Vinton Cerf, and Tim Berners-Lee, and the amazing distance the Internet has traveled since then, nobody could have possibly predicted that something as anomalous as meme culture would arise. Perhaps, then, our taste in memes lies not in the roots of the Internet itself, but in our own genetic makeup.

There are many characteristics differentiating human beings from animals. Politics, language, religion, law, and art are several examples, and all tie into a fundamental need for expression, or the need to feel like we’re being listened to.

In the early days of civilization, people devised unique methods of communicating their thoughts about the world, such as cave paintings, drawing on the walls of dank caves to tell stories. Thousands of years later, they assimilated such things as writing, music, and fashion into their lifestyles, effectively becoming the only species on the planet to express itself at such a sophisticated level. But what about the Internet?

The Internet has provided us with a remarkable capacity to both connect with people and exchange information across major geographical distances. As I’ve discussed, it started off as a military communications network but progressed into an entity of its own. With it came a slew of perks that would make our lives better, easier, and more enriched every day (video pornography comes to mind right now). So what role do memes play in all of this beautiful, and sexy, chaos?

In short, memes are another, more modernized way of expressing ourselves. They extend from the rise of major social media venues such as YouTube, Facebook, Reddit, and Vine, which have all contributed to the popularization of meme culture. More importantly, people enjoy memes so much because they see a part of themselves in the posts they share. Many personality theorists would agree that one of the best ways to measure individuality is by taking a look at the clothes people wear, the music they listen to, the food they eat, the movies they watch, and in this case, the memes they share on Twitter. All of those things? They’re not aspects of personality, but rather projections of personality.

When you like, share, or comment on a post, you do so because it resonates with you in a significant way, or because it speaks to you. I wouldn’t be writing this article right now if I didn’t believe memes were worth talking about. Therefore, I use language to project my personality onto the world, whereas others might use more subtle methods of accomplishing this task.

Memes are great expressive tools because they take on such an exaggerated and emphatic quality. The informal phrases “when you,” “be like,” and “all like” are often used to help convey universal truths about the human condition, such as waking up early for school, running into your ex-girlfriend at the mall, or going on a new diet. For example, a person wishing to make a commentary on college lectures might make a video of their dog sitting in a classroom to create the impression of cluelessness and confusion, two feelings that all college-level students are familiar with.

Another example would be minion memes. In the films Despicable Me (2010) and Minions (2015), these little yellow beans do not speak a single word of coherent English. However, people have created memes in which sassy and audacious statements like “I was born to be awesome, not perfect” are paired alongside a minion, thereby taking a seemingly neutral image and imbuing it with meaning and personality.

Furthermore, memes have gained popularity because they cater to short attention spans. The traditional picture meme typically contains only two lines of text: one on the top and one on the bottom. Snapchat only allows several lines of text, with videos and pictures lasting up to 10 seconds before they are no longer viewable. Additionally, Twitter caps posts at 140 characters, while the video-sharing service Vine only permitted its users to submit videos that were about six seconds in length.

Whatever the case may be, memes are symbols as much as they are communications of identity. Love them or hate them, they won’t disappear anytime soon.

 

Reference

Andrews, E. (2013, December 18). Who invented the internet? Retrieved September 27, 2016, from http://www.history.com/news/ask-history/who-invented-the-internet

 

Are Multiplayer Videogames Dehumanizing?

“To rend one’s enemies is to see them not as equals, but objects—hollow of spirit and meaning.”

– Destiny (2014), in-game description of the Exotic weapon Thorn

Thorn used to be one of the most loathed Exotic weapons in all of Destiny’s multiplayer. The Hand Cannon was so detestable that people felt offended whenever they were killed by it, complaining that it was a “noob’s weapon” that took no real skill to use. They would send you hate messages, rant about it on the forums, and even pick up the weapon themselves to stoop to the level of their offenders. You would know when you were killed by Thorn, too, as getting hit by it twice in the head or three times in the body would cause your screen to turn a mucky greenish color while your character slowly died from the weapon’s damage-over-time effect.

Bungie’s hellspawn that was the Thorn was inarguably the most obnoxious weapon to ever plague the front lines of competitive multiplayer, but I couldn’t help but think that this obnoxious quality was what made it so enjoyable to use in the first place. During the five months when Thorn was in its prime, the time when everyone used the weapon to their sadistic pleasure, I too derived profound enjoyment from the poison effects and the inevitable slow and humiliating deaths that would follow.

The widespread abuse of the Thorn brought to mind a broader question regarding the nature of online competition: do multiplayer videogames unknowingly cause people to lose touch with their more compassionate sides? In other words, do they diffuse empathy to the point where people become indifferent to the pain experienced by their virtual opponents?

Multiplayer videogames practically dominate the market right now—Halo, Battlefield, Call of Duty, Titanfall, Destiny, and Overwatch are among the most popular and widely recognizable of the bunch. To answer the question of whether these types of games decrease empathy and increase indifference, I asked why they’re so popular and how they affect perceptions of human emotions beyond just the immaterial game world. I arrived at a couple of interesting conclusions.

First, multiplayer videogames have gained traction as both an entertainment medium and as a way of relieving stress because they satisfy a primitive urge to compete against and weed out the weaker members of our own species. They appeal to man’s darker qualities such as greed, selfishness, and aggression.

If you are unfamiliar with Skill-Based Matchmaking, the idea is that if you adjust matchmaking parameters so that weak players get matched up against other weak players, and the strong against the strong, you appeal to a more generalized audience of casual players and thus sell more copies of your game. From a business standpoint, this makes sense. However, SBMM actually runs counter to the principles of intraspecies competition (competition that occurs within a species as opposed to between two species), since in nature the strong will always prey on the weak. In evolutionary terms, this is comparable to killing a weaker member of your own hunting tribe just so you can eat that extra piece of meat and stay alive yourself. It’s an intrinsically motivated act of selfishness.
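For the curious, here is a bare-bones sketch of the principle behind SBMM: rank everyone in the queue by some skill rating and pair up neighbors, so the strong meet the strong and the weak meet the weak. Real matchmaking systems weigh far more than this (rating uncertainty, connection quality, party size), and the rating numbers and player names below are entirely made up for illustration.

```python
# Minimal illustration of the skill-based matchmaking idea: sort the queue by
# a skill rating and pair adjacent players, so opponents are of similar skill.
# Real systems are far more sophisticated; this only demonstrates the principle.

from typing import List, Tuple

def matchmake(queue: List[Tuple[str, int]]) -> List[Tuple[str, str]]:
    """Pair players with similar ratings. Queue entries are (name, rating)."""
    ranked = sorted(queue, key=lambda player: player[1])   # order the queue by rating
    return [(low[0], high[0]) for low, high in zip(ranked[::2], ranked[1::2])]

# Hypothetical lobby of six players with made-up ratings.
lobby = [("Noob", 900), ("Sweat", 2100), ("Casual", 1200),
         ("TryHard", 2000), ("NewGuy", 950), ("Veteran", 1800)]

for a, b in matchmake(lobby):
    print(a, "vs", b)
```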

Another explanation for why people are so drawn to multiplayer videogames as an outlet for aggression is that, plain and simple, they don’t have to worry about the consequences of murdering people in cold blood. Think of it this way: when you defeat an opponent in a multiplayer match, to you they are nothing more than an avatar stripped of virtually all human qualities. They are a cheeky and elusive moving target, or a bundle of pixels generated by your television screen. They are a virtual punching bag that you can slam on, beat, stab, humiliate, demean, and degrade to your heart’s content, and all without a single consequence to bear. Who wouldn’t take sick pleasure in that? I know I certainly have.

Yet when we give it a second thought, we start to realize that in control of that avatar, that cheeky moving target, that bundle of pixels, is a real person. A living entity with thoughts, feelings, memories, goals, dreams, aspirations, and heartbreaks. Have you considered that, beyond all of that bloodshed and mass chaos in Battlefield’s “Conquest” mode, someone is feeling a little hurt, even if they’re thousands of miles away from you?

By now, you probably think this article is a glorified criticism of multiplayer videogames. It’s far from it. Personally, I’ve invested hundreds of hours into Halo: Reach, Destiny, and the Modern Warfare series. I have no qualms with these games; I love them. At the same time, I do have a few regrets about how I’ve treated my opponents over the years. I have tea-bagged, viciously wailed on corpses, and shouted vile obscenities over the microphone. Even today I display these behaviors out of compulsion rather than intent. Nonetheless, I’m writing this article to attest that our treatment of strangers over the Internet, despite the anonymity, still matters, and that we should practice better sportsmanship. Just because they live halfway across the world doesn’t mean they deserve to be treated any differently from you or me.

And so, coming back to the question of whether multiplayer videogames decrease empathy and increase indifference, the answer is yes, they do, but only if our behavior is left unrestrained. We can easily lose touch with our compassion, but we also have to remember that we can activate it when it’s needed.

I think I’ll just stick to RPGs.