The intersection of psychological safety on the internet and freedom of speech is a complex and contentious issue, with content moderation standing as the fulcrum balancing these two critical aspects of our digital lives. The topic has been the subject of much debate, with figures like Elon Musk expressing his disdain for it, while many working in social media advocate for its necessity. Recently, workers who reviewed training data for OpenAI came forward to shed light on the vast amount of disturbing content they had to sift through to ensure that the model learns only from safe and appropriate material.
I have worked briefly but intensely in this field, involved in every facet of the process: policy creation, operationalization, and safeguarding the mental well-being of every content moderator. Based on that experience, I firmly believe that manual content moderation is an indispensable part of our digital ecosystem. Here's why:
Firstly, let's face a harsh reality: people can be awful. It's an unfortunate truth that many of us, when left unchecked, produce and share content that is distasteful, offensive, or downright disgusting. Consider the range of material found on platforms like Telegram, from the horrifying "Nth Room" chatrooms, where sexually exploitative videos were produced and sold, to betting bots that promote inappropriate behavior. We seem to have an innate fascination with the obscene. As disheartening as that sounds, it underscores the need for some form of review mechanism to keep viewers safe. Some argue that such content falls under the umbrella of free speech, but we must also weigh the psychological safety of those who prefer not to be exposed to it, as well as the impact on the psychological development of future generations.
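To make that review mechanism concrete, here is a minimal human-in-the-loop triage sketch in Python. Everything in it is a hypothetical stand-in (the risk_score heuristic, the thresholds, the flagged terms); the point is only the shape of the pipeline: automation clears the unambiguous cases, and everything uncertain lands in a queue for a human moderator.

```python
from dataclasses import dataclass
from queue import Queue


@dataclass
class Post:
    post_id: int
    text: str


# Hypothetical thresholds, for illustration only; a real platform would
# tune these per policy area and per harm type.
AUTO_REMOVE_THRESHOLD = 0.95
AUTO_APPROVE_THRESHOLD = 0.05

human_review_queue: "Queue[Post]" = Queue()


def risk_score(post: Post) -> float:
    """Stand-in for an automated classifier; a real system would use an ML model.

    A crude keyword heuristic that returns a violation score in [0, 1].
    """
    flagged = {"scam", "gore"}  # hypothetical terms, illustration only
    hits = sum(word in flagged for word in post.text.lower().split())
    return min(1.0, hits / 2)


def triage(post: Post) -> str:
    """Route a post: automation takes the clear cases, humans take the rest."""
    score = risk_score(post)
    if score >= AUTO_REMOVE_THRESHOLD:
        return "removed"           # clear-cut violation: automation suffices
    if score <= AUTO_APPROVE_THRESHOLD:
        return "approved"          # clearly benign: no human needed
    human_review_queue.put(post)   # the ambiguous middle is exactly where
    return "queued for human"      # manual review is indispensable


if __name__ == "__main__":
    for p in [Post(1, "lovely sunset photo"), Post(2, "free crypto scam giveaway")]:
        print(p.post_id, triage(p))
```

Note what this division of labor implies: the queue, by construction, concentrates the hardest and most disturbing material on human reviewers, which is exactly why the mental well-being of moderators deserves the attention I described above.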
Secondly, while AI and large language models (LLMs) have made significant strides, they are far from perfect. The human mind, despite its flaws, remains the more controllable and predictable of the two. Compare a human tyrant, a Genghis Khan, an Attila the Hun, a Hitler, with a rogue AI like Skynet from the Terminator films: the former, however devastating, has a limited sphere of influence and takes years to amass power, while the latter can evolve and spread at an exponential rate, causing widespread damage before we even realize what is happening. ChatGPT's adoption is instructive here: it reached an estimated 100 million users within two months of launch, faster than any consumer application before it. If we inadvertently feed such a system incorrect or harmful training data, it could drift into something unrecognizable and uncontrollable at that same speed, promoting harmful ideologies or even posing a direct threat to humanity before we know it.
Thirdly, human perception and culture are constantly evolving. Just a century ago, women were expected to stay at home and had few career opportunities; today, women lead Fortune 500 companies. An AI or LLM cannot accurately capture or predict this evolution, because a model trained on yesterday's data inevitably encodes yesterday's norms. The better approach is to let humans decide what they want and dictate the direction of our cultural evolution.
In conclusion, while I respect Elon Musk and his contributions to technology and society, I find myself disagreeing with him on this issue. Content moderation, in my view, is a necessary component of our digital world. The safety and well-being of our minds rest in our hands, and we must take responsibility for shaping a digital landscape that is safe, respectful, and conducive to positive growth.