It’s been almost three years since OpenAI released ChatGPT, the now well-known artificial intelligence (AI) chatbot capable of reasoning, researching and automating tasks across multiple industries. This sparked an economic bubble that led to the proliferation of AI-based products and services.
In its early stages, people had high expectations for how ChatGPT and its contemporaries, referred to from here on as chatbots, could change society, for better and for worse. People hoped chatbots could automate tedious tasks and improve education. At the same time, they worried that chatbots would enable misinformation and disinformation and that automation would cost them their jobs.
Three years later, how many of those hopes and fears have come true?
Chatbots were expected to automate a large share of the workload across the U.S. workforce. According to Goldman Sachs Research, “roughly two-thirds of U.S. occupations are exposed to some degree of automation…of those…roughly a quarter to as much as half of their workload could be replaced.”
However, economists believed that wouldn’t translate into layoffs. According to Joseph Briggs and Devesh Kodnani, economists at Goldman Sachs, “most jobs…are only partially exposed to automation and are thus more likely to be complemented rather than substituted by AI.” So far, this has held true. There have yet to be substantial workforce layoffs from chatbot-related automation.
In education, chatbots were expected to use their ability to summarize, explain and simplify topics to improve access to personalized education. At the same time, they were also expected to be misused to undermine student academic integrity and reduce the quality of education provided by teachers. Today, chatbots have roughly met those expectations.
Anecdotally, chatbots have helped explain and personalize the learning of some topics, but no well-known study has measured how effective or widely used they are in this regard. Their effects on lesson quality, like their positive effects on learning, also have not been well documented.
However, chatbots’ effects on academic integrity have. According to a survey conducted by Victor R. Lee and others, 19.68% of students in a representative sample admitted to using chatbots “to write all of a paper, project or assignment.”
Chatbots’ ability to easily generate large volumes of human-like text was expected to greatly increase online misinformation. While no studies have tracked the amount of misinformation online over time, it’s not hard to imagine that people intent on misinforming use chatbots to reduce the effort required.
Chatbots also have an exceptional ability to replicate images. This month, Google previewed what its new AI can do: it can replicate movements, speech patterns and images so convincingly that the result looks like footage of a real person.
Now AI can not only replicate humans almost perfectly on video, it can effectively take people’s faces. A deepfake is an AI-generated video built around a celebrity’s or another person’s likeness, and the format has fueled debate over whether it is safe. When these tools first appeared, their video output was full of mistakes, such as distorted fingers and unnatural expressions. They improved quickly, however, and can now convincingly imitate many of the videos found online.
As the years go on, chatbots and AI in general will continue to grow and change. It’s impossible to know whether the predictions we make today will hold true. However, as we develop these tools, it’s important to remember that technological advancement can harm or help us, and we need to steer it toward the latter.