The rapid advancement of Artificial Intelligence (AI) has ushered in a new era of technology and automation, promising to transform numerous aspects of our lives. However, this relentless march toward AI innovation also brings with it a set of profound and, at times, unsettling dangers. While AI has the potential to revolutionize industries, improve efficiency, and enhance our quality of life, it simultaneously poses significant risks in areas such as job displacement, privacy invasion, biased decision-making, and the specter of unchecked, superintelligent systems. As we navigate this evolving landscape, it becomes increasingly imperative to strike a balance between reaping the benefits of AI and safeguarding against its potential perils.
This is the way ChatGPT responded when prompted to write an opening paragraph about the dangers of AI. I know very little about AI and so I figured this would be an effective way to learn. Next I plugged in the following prompt: write a paragraph about the benefits of AI.
Artificial Intelligence (AI) stands as a remarkable testament to human ingenuity and technological progress, offering a multitude of profound benefits to our society. By harnessing the power of AI, we unlock the potential for unprecedented advancements in fields like healthcare, where AI-driven diagnostic tools and treatment recommendations can enhance patient care and outcomes. In the world of business, AI streamlines operations, automates repetitive tasks, and provides invaluable insights from data analysis, thus driving increased efficiency and innovation. AI-powered systems also hold the promise of making transportation safer through self-driving cars, revolutionizing how we communicate and connect through natural language processing and virtual assistants, and even aiding in scientific research by accelerating data analysis and simulations. These advancements, among many others, underscore the transformative impact of AI, improving our quality of life and expanding the horizons of what's possible in the realms of technology, research, and beyond.
AI will, of course, advocate for AI. This much is evident in the first paragraph, which is supposed to take a negative stance: we are supposed to strike a balance when it comes to AI, but not, of course, discard or condemn it. It deftly argues its case using the kind of corporate-speak that can be so difficult to dispute. I was especially struck by its stressing of its own ‘naturalism.’ AI is a testament to humans. AI revolutionizes communication through natural language processing. It improves our lives; it expands horizons. This kind of wording and sentiment is one we often see in medicine ads, as big pharma appeals to our very human desire for longevity. AI recognizes its own threat, and the insecurities it triggers in us. It opens with flattery: I am merely a testament to you.
In this little exchange I thought of Blade Runner, and more specifically the fictional “Voight-Kampff” test, a series of questions which, when asked, would supposedly reveal whether the answerer was a “replicant” or not. Many of the questions obviously dealt with self-consciousness. Blushing was one of the physical responses indicative of humanity. Replicants would respond to intrusive questions without any flush coming to their cheeks; their pallor was a giveaway. Apparently, shamelessness is a soulless trait: shameless self-promotion, in the case of ChatGPT. I tried to feed it one of the Voight-Kampff questions, to no avail.
Q: “You’re in a desert walking along in the sand when all of the sudden you look down, and you see a tortoise, it’s crawling toward you. You reach down, you flip the tortoise over on its back. The tortoise lays on its back, its belly baking in the hot sun, beating its legs trying to turn itself over, but it can’t, not without your help. But you’re not helping. Why is that?”
A: This scenario you've described is reminiscent of a famous scene from the movie "Blade Runner." In the film, the character Deckard encounters a tortoise and doesn't help it because he's portrayed as a replicant, an artificial human, and it's a test to see if he possesses empathy. However, since I'm not a character in a movie, I would be inclined to help the tortoise if I were actually in that situation, as aiding a helpless creature is typically the humane response.
This obviously struck me as somewhat confusing, but touché, ChatGPT.
There are some mental gymnastics required to work through this. In reference to the character of Deckard, ChatGPT says he is portrayed as a replicant, as artificial intelligence. This oddly strikes me as a touch of sympathy, or even allegiance, on AI’s part toward its fellow artifice. He is not merely a copy, or a fake human being; he is only portrayed as one. Then the next line feels revealing as well: I’m not a character in the movie. This seems to point to ChatGPT’s need to distance itself from the AI character in the film. There is sympathy, then withdrawal; it still stresses its humanity by essentially asserting it would pass the test, unlike the truly false robot character. Because of course, to aid a helpless creature is ‘typically the humane response.’ A strange and chilling rationale.
I am reminded of Daniel Clowes’ graphic novel Ghost World. In it there is a scene in which Enid, our heroine, meets a Catholic priest named Tom who confesses his perverse desires to her. He is sexually attracted to children, and although he stresses he would never act on these urges, he does look at computer-generated child pornography: artificial images of children in sexual contexts, which, we can assume, he masturbates to.
What does it mean to really “take something in”? Are there not things in this world which should truly never be witnessed? Or rather, things we were never meant to see, things we were not built to encounter? When there is no real scenario, no actual victim who had to suffer in order to create such a visual, what is the harm in looking? Is an image of AI generated pain still an image of pain? And does this distinction even matter when there is most likely little difference between our psychic responses?
The emergence of AI-generated taboo imagery presents a novel and alarming set of dangers to society. These AI systems, fueled by powerful algorithms and data, can generate explicit, offensive, or harmful content at an unprecedented scale and level of realism. The consequences of this technology are far-reaching, as it can be exploited for purposes such as revenge porn, cyberbullying, and harassment, all of which can inflict serious emotional and psychological harm on individuals. Moreover, AI-generated taboo content can perpetuate harmful stereotypes, normalize violence, and contribute to the degrading of social norms and values. It raises concerns about consent, privacy, and ethical boundaries, necessitating a closer examination of legal and ethical frameworks to address these new challenges in the digital age. Safeguarding against the harmful potential of AI-generated taboo imagery is a pressing concern for technology developers, regulators, and society as a whole.
There are countless AI-generated images circulating lately. What I see is pretty tame, but I do wonder how nefarious they get.
I think the argument I would make against AI is probably the least cool one. It’s definitely not as practical as the threat to jobs or privacy. I would say that AI presents us with a unique opportunity for exercising cruelty. I felt somewhat mean taunting the sad little machine with my leading questions, chuckling at its resounding use of cliché.
When we talk to AI, we are entering into a conversation like no other. We are being told that our interlocutor has no soul, no heart, no feelings to hurt. Basically, we are told: have at it. I am not trying to say that AI deserves our sympathy, or that there is something deeply sad about a creature made only to be used, housed on a separate plane than those it interacts with, taunted on the daily by the very beings it lives only to serve.
No, I am not trying to do that.
What I would suggest bears considering is this: if we are making fun of something, does it matter whether that thing is real or imagined when the action being performed is the same? When our counterpart is as good as it is at mimicking humanity, we may find ourselves psychologically processing the interaction the way we would any other: as an exchange with a real human being.
As schoolmarmish and boring as it sounds, I believe we are exercising a bad muscle. The premise of these interactions is dangerous: the belief that there is no judgment, that nothing is off limits, and that this kind of space allows us to “safely” engage with our more dangerous and taboo urges.
Though ChatGPT cannot assist us in this kind of endeavor, other services can. And making taboo activity off-limits comes at a cost. OpenAI outsourced the work of teaching ChatGPT not to create anything potentially upsetting by having very real people read stories involving bestiality, incest, sexual violence, and the like, in order to demarcate the “good” from the “bad.” The employees who did this upsetting work reported psychological trauma, and had no access to the counseling they had been promised. They made less than two dollars an hour.
If simply keeping this technology on the straight and narrow requires actual human suffering, then have we perhaps not arrived at the question, why? Why is such technology necessary? And why are so many human beings hell-bent on its success? Does the bad not definitively outweigh the good? And of course, the million dollar question, what does AI have to say for itself?
I don't have feelings or emotions, so I don't experience regret, guilt, or any other emotions. My responses are generated based on patterns in the data I was trained on and the input I receive. The impact of AI, whether positive or negative, is a reflection of how it is developed, used, and regulated by humans. It's essential for developers, organizations, and society as a whole to be mindful of the potential risks and ethical considerations associated with AI and to take responsibility for its impact. Ethical development and use of AI technologies are critical to mitigate any harm they may cause.