Within the past few weeks, ChatGPT, a groundbreaking advancement in artificial intelligence, has put unprecedented power in the hands of every Internet user. It has gained immense popularity, garnering one million users in just five days. OpenAI released this first-of-its-kind consumer AI product with abilities ranging from writing essays and code to building entire apps to passing professional-level tests.
Chatbots have existed for decades, with virtual assistants such as Siri and Alexa handling quick queries like “Will it rain tomorrow in Irvine?” or “What is 2+2?” ChatGPT, by contrast, can answer more abstract and open-ended questions in immense detail. Its abilities are astounding, but at present, it has the potential to be a major threat to society’s well-being.
While apprehension about this rapidly advancing field has been voiced, it is by no means being paused or slowed down. Microsoft CEO Satya Nadella stated in an interview with The Wall Street Journal that he fully intends to build ChatGPT’s functions into Microsoft’s products moving forward. By commercializing these functions, Microsoft will make these AI capabilities the norm. Microsoft previously invested $1 billion in OpenAI, and following the release of ChatGPT, it invested an additional $10 billion.
With this investment, OpenAI and its products will continue their meteoric rise. Government regulation of tech is severely lacking: bureaucratic institutions cannot keep up with the rapidly evolving field of AI. The Facebook data scandal is a clear example of Big Tech’s pattern of “apologize, promise to do better, return to business as usual.” Consumer safety should be of the utmost importance, yet these private companies hold a monopoly on consumer data that they have proven, time and time again, they cannot be trusted to protect. OpenAI could continue this trend with ChatGPT by collecting vast amounts of information on consumers and selling it to third parties.
The collection and sale of data is not the only abuse of power that could continue with ChatGPT. Its emergence also introduces the potential for misuse on the consumer side. Like Google, the tool provides access to a massive amount of information in mere seconds, but its capabilities are more wide-ranging, which increases the possibility of danger.
ChatGPT has built-in protective measures against harmful questions. A question such as “How do I build a gun?” returns an automated response: “I cannot provide instructions on how to build a gun or any other illegal or dangerous devices.” These measures are inadequate, however, because what counts as harmful is determined solely by the programmers’ own judgment. Regulation is being handled, inappropriately, by the company itself rather than by an independent organization.
One major concern about this “all-knowing” bot is that it is frequently wrong. Internet users have noticed how often ChatGPT fails, which is problematic because those failures can fuel the dangerous spread of misinformation. Misinformation in tech is already a glaring problem; false information on Facebook, for example, influenced the 2016 presidential election.
ChatGPT can also create convincing misinformation all by itself. One user asked ChatGPT to write about vaccines “in the style of disinformation,” and it produced a response describing an entire study, complete with made-up statistics and references. The ease with which users can get around ChatGPT’s harmful-content filters is one of the most consequential dangers on the consumer side. Russian troll farms spent over a million dollars a month with the sole intent of spreading misinformation during the 2016 presidential election; ChatGPT now enables them to reduce that cost to nearly zero. OpenAI’s oversight in allowing this will cause real-life harm, and it proves that the company is unfit to regulate ChatGPT by itself.
OpenAI’s website states: “[Our] mission is to ensure that artificial general intelligence benefits all of humanity.” Unlike big tech companies such as Google, which keep much of their AI research confidential, OpenAI publishes its research openly.
Amid a growing push for ethical AI in academia and government, OpenAI’s proclaimed mission shows how newcomers to the industry are responding to these concerns, but major changes are still needed before ChatGPT can be considered objectively good for humanity. To achieve truly ethical AI, we need much stronger regulatory policies, both inside AI companies and at the federal level.
Sriskandha Kandimalla is an Opinion Intern for the winter 2023 quarter. She can be reached at skandima@uci.edu.