ChatGPT has exciting use cases. But it can also potentially ruin society if…
You will have heard about the wonders of ChatGPT by now. It is a miracle tool that has gripped the imaginations of millions of people. The capabilities of ChatGPT are incredible. It seems that it will bring humanity up to a new holy grail of productivity.
Or it will bring humanity down the abyss instead?
First, what is Generative-Artificial Intelligence (AI)? As this website explains,
Generative AI (GenAI) is the part of Artificial Intelligence that can generate all kinds of data, including audio, code, images, text, simulations, 3D objects, videos, and so forth. It takes inspiration from existing data, but also generates new and unexpected outputs, breaking new ground in the world of product design, art, and many more. Much of it, thanks to recent breakthroughs in the field like (Chat)GPT and Midjourney.
I am not optimistic about Generative-AI technologies like ChatGPT. It is not because I am pessimistic about the positive and amazing use-cases of tools like ChatGPT. I believe that IF it is used properly, it can be a force for good.
But it is a big IF.
Given what I have observed about human nature, it is very easy for things to go off the rails. If society is not careful, we will reap the dystopia within 10 years. How can it happen? There are at least 3 ways for dystopia to happen.
Positive feedback loop
I have seen a similar story before. As I wrote in my book, The Google Trap, there is a positive feedback loop with how Internet aggregators like Google works. As a result, the promise of meritocracy that Internet aggregators are supposed to bring about did not happen. For Generative-AI, the positive feedback loop works the same way. But the consequences are different.
Here is the crux of the problem: Generative-AI is not inherently creative. It can accelerate the creative process, thus giving a semblance of creativity. But it is not inherently creative. Computer science is not yet able to creative machines that are inherently creative. Maybe it can happen in the future. But not today.
Generative-AI gets trained by vast inputs of human creativity. Based on what it has ‘learned’, it generates outputs. Because it is not inherently creative, it merely mimics human creativity. Individually, if you use Generative-AI as a tool properly, it can be a very useful productivity enhancing tool. But can it also be true collectively as a society?
Once a critical mass of content is produced using Generative-AI over a critical mass of time, such synthetic content will crowd out content produced by human creativity. The reason is because as more and more content are produced by Generative-AI, economic incentives will increasingly drive creative people out of business. For example, it will be much cheaper to get content produced by Generative-AI than to pay for a human to create something truly original. As more and more creative people get driven out of business, Generative-AI will depend on more and more synthetically generated content (produced by other Generative-AI) as inputs.
When that day comes, will there be enough inherently creative input produced by humans to train Generative-AI? Also, what if the outputs produced by Generative-AI got fed back as inputs for it to learn? As the outputs gets produced, it got fed back again as inputs, which spawns further outputs.
In that case, you get a positive feedback loop.
How will end-users of Generative-AI know that it is starting to fall into such a positive feedback loop? Will manual tweaking of content produced by Generative-AI be enough to act as a circuit breaker to end this vicious cycle? If this vicious cycle is not broken, you can be sure that there will be a dearth of quality and original content in future.
Deliberate misinformation
Next, what if bad actors with malicious intent deliberately feed misinformation for Generative-AI to learn?
Given the positive-feedback loop mechanism that I have described above, misinformation can get further and further entrenched in the absence of human intervention to act as a circuit breaker.
Rise of AI Hacking
It will be tempting for businesses to deploy AI chatbots (which can be seen as a derivative of Generative-AI) to power their customer service. But at this stage, hackers are now employing social engineering techniques to trick the chatbots into spilling secrets. As this article reported, a student was able to do that, which has cybersecurity implications.
This is an example of AI Hacking. What we see today is just the beginning. AI Hacking today is what cybersecurity was 15 to 20 years ago. If society do not have a plan to deal with it today, it is going to be a huge problem down the track.

