Are Our Brains at Risk? The Neurological Implications of Artificial Intelligence

Anna Marieta Moise, MD

Artificial Intelligence (AI) is defined by Merriam-Webster as a “branch of computer science dealing with the simulation of intelligent behavior in computers.” As machine learning advances and AI systems grow more complex, what does that mean for the future of humanity?

The advantages of AI are easy to identify. They include, but are not limited to, its potential to improve the accuracy of medical diagnoses; to perform laborious or dangerous work; and to support rational decision-making in situations in which human emotions can impair efficiency or safety. These are only a few of the ways in which AI has already proven advantageous.

Smartphones and Internet applications have implemented complex AI algorithms as well; these algorithms are what make modern technologies so useful and attractive. AI has made the World Wide Web more user-friendly since the Web’s introduction in 1989 (viz., AI allows the massive amounts of information on the Web to be organized in a searchable fashion). Communication has become nearly instant with the introduction of email and audio and video Internet applications. Smartphones can “be used as phonebooks, appointment calendars, internet portals, tip calculators, maps, gaming devices, [and]…seem capable of performing an almost limitless range of cognitive activities for us, and of satisfying many of our affective urges” (1). The list of the advantages of AI applications goes on; it has never been easier to access online banking, pay bills, or buy items remotely. As AI technology expands, it will offer even more, making our lives easier and, theoretically, more satisfying.

But is there a dark side to the rise of this technology? Many people, including famous science and technology leaders such as Elon Musk and Stephen Hawking, assert that these advancements should be approached with extreme caution. Moreover, many of the creators of modern technology, engineers and designers in Silicon Valley, have taken extreme measures to limit their own and their families’ exposure to smartphones and the Internet, such as installing outlet timers that shut down home internet access at a set time every day, sending their children to schools that ban iPhones, iPads, and other smart devices, and installing programs that block social media. Some have turned against the tech industry altogether, claiming that people’s minds are being “hijacked” by technology, and that our technology-related “choices are not as free as we think they are” (12).

Edward Clint, evolutionary psychologist and author of the recent thought-provoking Quillette article “Irrational AI-nxiety” (14), argues that humans harbor an unnecessary fear of the rise of technology and AI, rooted in an evolutionarily acquired, instinctual mistrust of the unknown. He claims that AI probably does not have the potential to threaten the future of humanity, asserting that people’s fears of AI are analogous to a hysterical fear of aliens or poltergeists. I agree with Dr. Clint in this respect. Nevertheless, I am fearful of the peril that AI poses to the future of humanity, but for very different reasons: reasons that, in my opinion as a neurologist, are more frightening because they are unfolding inside our very own, and willing, brains.

The danger of AI does not lie in the manner in which it is portrayed by Hollywood films (that is, that robots will someday develop a conscious, malicious predilection for destroying human beings). Rather, AI is in the process of rendering humans meaningless and unnecessary, stealing away from us the very qualities that make us human. Allow me to explain.

As Nicholas Carr writes in his book The Shallows: What the Internet Is Doing to Our Brains:

“Over the last few years, I’ve had an uncomfortable sense that someone, or something is tinkering with my brain, remapping the neural circuitry, reprogramming the memory…I feel it most strongly when I am reading. I used to find it easy to immerse myself in a book or a lengthy article. My mind would get caught up in the twists of the narrative or the turns of the argument, and I’d spend hours strolling through the long stretches of prose. That’s rarely the case anymore. Now my concentration drifts after a page or two … what the Net seems to be doing is chipping away at my capacity for concentration and contemplation… My mind now expects to take in information the way the Net distributes it: in a swiftly moving stream of particles. Once I was a scuba diver in a sea of words. Now I zip along the surface like a guy on a Jet Ski…”

Carr’s experience is not uncommon. I myself have felt the powerful effects of modern technology on my own brain. I recall, as a young adult before I owned a laptop or smartphone, visiting the library and experiencing the intense wonder and calm of the books that surrounded me. I never experienced at the library the anxiety or loss of focus that I feel today as I skim through massive amounts of information on the Internet.

Are AI applications causing us to lose our concentration, our attention, and our capacity for linear, deep, and critical thought? If so, how, and what are the consequences?

 

LESSONS ON NEURAL CIRCUITRY

Despite early dogma that the brain is a hard-wired circuit that does not change, research over the past century has shown that the adult brain is a highly plastic and dynamic organ. Its complex circuitry and neuronal connections are constantly changing and reorganizing based on our actions, thoughts, and exposures.

In the 1960s, University of Wisconsin neuroscientist Michael Merzenich demonstrated just how dynamic the brain is in his experiments with monkeys (3, 4). He inserted electrical probes into the parts of the monkeys’ brains that corresponded to skin sensation in the hand. After damaging the nerves of a hand, he measured how the brain responded to the injury. He observed that the neural connections corresponding to the injured nerve became haphazardly scattered and disorganized; for instance, the area of the brain that previously mapped to the tip of a finger now mapped to a hand joint instead. But over time, as the nerve regenerated and healed, the neural circuitry in the brain also reorganized, and by the time the nerve had healed completely, the reorganized brain circuits once again mapped onto the correct analogous body parts. In other words, Merzenich showed that neurons, the cells of the brain, are capable of changing and reorganizing, and that the brain is not a hard-wired, rigid circuit. Our brains are “always breaking old connections and forming new ones, and brand-new nerve cells are always being created” (3).

The brain’s plasticity also explains why humans are able to form memories. Research by neuroscientists such as Louis Flexner, at the University of Pennsylvania, and Eric Kandel, at Columbia University, found that the formation of long-term memory involves structural changes in the brain, namely new synaptic connections between neurons, leading to measurable anatomical changes (6). Long-term memory, however, takes time and focused concentration to form. The consolidation of memories “involves a long and involved ‘conversation’ between the cerebral cortex and the hippocampus” (3).

In the words of the well-known adage, “neurons that fire together, wire together.” The opposite can be said of neurons that stop firing together: their connections unravel. While AI has made our lives easier, it has also coddled our brains, allowing us to “outsource our memory.” Its distractions, and its temptation to revel in multitasking, erode the neuronal circuits for the concentration and attention needed to form long-term memories.

Some might argue that outsourcing memory is not so bad, that it increases efficiency: we may not need to store everything in our brains if we have computers and smartphones at our fingertips. Some might go as far as to argue that this represents the rudimentary beginnings of the brain-computer interface. But biological human memory is very different from computer memory. Kobi Rosenblum, head of the Department of Neurobiology and Ethology at the University of Haifa in Israel, states that “while an artificial brain absorbs information and immediately saves it in its memory, the human brain continues to process information long after it is received, and the quality of memories depends on how the information is processed” (7, 3).

It is humans who give meaning to the memories that are stored. “Biological memory is alive, [while] computer memory is not,” Carr writes in The Shallows. “[Enthusiasts of outsourced memory] overlook the fundamentally organic nature of biological memory. What gives real memory its richness and its character, not to mention its mystery and fragility, is its contingency.” The human brain may not be able to store as much data as the Internet, but it is able to decide what is meaningful; in other words, it is able to separate the wheat from the chaff. Memory is what makes our lives meaningful and rich. Evidence suggests that as biological memory improves, the human mind becomes sharper and more adept at solving problems and learning new ideas and skills. As William James declared in 1892, “the art of remembering is the art of thinking” (5, 3). If AI is to replace human memory, it will no doubt also come to replace the functions listed above, potentially rendering humankind meaningless.

 

WHAT TECHNOLOGY IS PHYSICALLY DOING TO OUR BRAINS

Some might argue that what we lose in deep thinking and memory-forming skills is made up for by gains in our navigational and decision-making skills. For instance, a 2008 UCLA fMRI study found that people who used the Internet showed high activation in the frontal, temporal, and cingulate areas of the brain, which are involved in decision-making and complex reasoning (8). The authors inferred that Internet use may actually improve complex decision making and reasoning. Surfing the Web may indeed improve our decision-making abilities, but likely only as they apply to Internet navigation. We are essentially giving up our higher-level cortical functions as human beings (i.e., those involved in deep learning, concentration, and creativity) in exchange for distraction, Internet navigational ‘skills,’ multitasking, and superficial ‘learning.’

A recent psychology study demonstrated that people who read articles splattered with hyperlinks and other distractors (common on the Internet, given websites’ monetary incentive to encourage readers to click on as many links as possible) are significantly less able to recall what they read than people who read the same articles without distractors. Moreover, those exposed to the distracting information were less able to identify the meaning behind the articles they read. By constantly surfing the Web, we teach our brains to become less attentive, and we become “adept at forgetting, and inept at remembering” (3).

Studies have demonstrated that people who use the Internet excessively show gray matter atrophy in the dorsolateral prefrontal cortex and anterior cingulate gyrus, areas of the brain involved in decision-making and in the regulation of emotions and impulses (9). The longer the duration of the unhealthy relationship with the Internet, the more pronounced the shrinkage. There are also disruptions in functional connectivity in areas responsible for learning, memory, and executive function (11). Additional studies show that excessive smartphone and Internet use is associated with higher rates of depression and anxiety, increased risk-taking behavior, and an impaired ability to control impulses (10, 11). There are not yet data on the long-term neurological effects of chronic, ever-expanding dependence on technology. As a neurologist, I cannot help but wonder whether it may pose an underlying risk for developing dementias, such as Alzheimer’s disease.

Humans have access to so much information, but we are “no longer guided toward a deep, personally constructed understanding of text connotations. Instead, we are hurried off toward another bit of related information, and then another, and another. The strip-mining of ‘relevant content’ replaces the slow excavation of meaning” (3). Over time, we become unable to think profoundly about the topics we research because we are unable to acquire in-depth knowledge. We lose our deep-learning, critical-thinking, and introspective neural circuits. We lose our intellectual sharpness and richness, and instead become zombies that resort to primal ways of thinking.

Our dependency on (and addiction to) AI applications may even contribute to increasingly simplistic, polarized thinking among populations, as well as to the dissolution of political structures. Instead of engaging in critical thought and civilized debate, people with impaired attention and with atrophied deep-learning, introspective, and impulse-controlling neural circuitry are likely to resort to simplified, dogmatic ideologies and a limited knowledge of political events (viz., the short blurbs and snippets they see on Twitter). Tristan Harris, former design ethicist at Google, states that modern technology is “changing our democracy, and it’s changing our ability to have the conversations and relationships we want” (12).

Furthermore, because the prefrontal cortex dampens impulsivity, fear, and aggression, and because connectivity between the prefrontal cortex and other regions of the brain is impaired in those who use the Internet and smartphones excessively, chronic overuse of modern technologies may increase the risk of criminality and psychopathology (13). With the introduction of AI, humanity is, ironically, at risk of regressing from an age of intellectual enlightenment to a Dark Age of ignorance, primal thinking, and possibly increased violence.

 

CONCLUSIONS: WHAT DO WE DO ABOUT AI?

Although AI holds the potential to be used for compassionate and advantageous purposes, it also poses a very real and urgent risk to human beings that cannot be ignored. In a scene from the recent superhero movie Justice League, Wonder Woman argues with Batman against resuscitating the dead Superman because doing so requires the use of an immensely powerful, and potentially destructive, energy source. She asserts that thorough and thoughtful consideration, rather than incomplete, myopic planning, must guide the introduction of new technology: “Technology without reason, without heart, destroys us.”

Even people involved in the tech industry are beginning to emphasize the need to consider the consequences of future technological discoveries. Justin Rosenstein, creator of the popular Facebook “Like” button, noted in a recent article that “it is very common for humans to develop things with the best of intentions that have unintended, negative consequences” (12).

We MUST proceed with caution in the advancement of modern technologies that involve artificial intelligence. We must insist that developers of new technology deeply examine their rationale and scrutinize the intellectual, ethical, and cultural implications of their discoveries and pursuits. It is humanity that will be forced to deal with the repercussions of these creations. We must remain alert and acknowledge the massive limitations and risks associated with technology. Artificial intelligence does threaten the survival of humanity, but not in the sense that is commonly portrayed. If we continue to ignore this dragon without truly examining the potential consequences, it will continue to grow until we are rendered powerless and obtuse. We must face the dark side of AI, intelligently and with a deeply critical eye.

 

 

 

References:

1.     Wilmer, H. H., Sherman, L. E., & Chein, J. M. (2017). Smartphones and Cognition: A Review of Research Exploring the Links between Mobile Technology Habits and Cognitive Functioning. Frontiers in Psychology, 8, 605. http://doi.org/10.3389/fpsyg.2017.00605

2.     Greenfield S. (2013). Screen Technologies. Available at: http://www.susangreenfield.com/science/screen-technologies/ [accessed April 16, 2015].

3.     Carr, Nicholas. The Shallows: What the Internet Is Doing to Our Brains. New York: W. W. Norton, 2010.

4.     Schwartz and Begley, Mind and the Brain, 175.

5.     William James, Talks to Teachers on Psychology: And to Students on Some of Life’s Ideals (New York: Holt, 1906), 143.

6.     Kandel, In Search of Memory, 221.

7.     University of Haifa, “Researchers Identified a Protein Essential in Long Term Memory Consolidation,” Physorg.com, September 9, 2008, www.physorg.com/news140173258.html

8.     UCLA Newsroom (2008). “UCLA study finds that searching the Internet increases brain function.” http://newsroom.ucla.edu/releases/ucla-study-finds-that-searching-64348

9.     Scientific American. “Does Addictive Internet Use Restructure the Brain?” https://www.scientificamerican.com/article/does-addictive-internet-use-restructure-brain/

10.  LaMotte, Sandee. “Smartphone addiction could be changing your brain.” CNN, November 30, 2017. http://www.cnn.com/2017/11/30/health/smartphone-addiction-study/index.html

11.  Weinstein, Aviv. An Update Overview on Brain Imaging Studies of Internet Gaming Disorder. Front Psychiatry. 2017 Sep 29;8:185. doi: 10.3389/fpsyt.2017.00185.

12.  “‘Our Minds can be hijacked’: the tech insiders who fear a smartphone dystopia.” The Guardian, October 5, 2017. https://www.theguardian.com/technology/2017/oct/05/smartphone-addiction-silicon-valley-dystopia

13.  Koenigs, M. (2012). The role of prefrontal cortex in psychopathy. Reviews in the Neurosciences, 23(3), 253–262. http://doi.org/10.1515/revneuro-2012-0036

14.  Clint, Edward. “Irrational AI-nxiety.” Quillette, December 14, 2017. http://quillette.com/2017/12/14/irrational-ai-nxiety/