Regulating AI Not out of Fear, but for Common Sense

Many believe that artificial intelligence should not be regulated, fearing regulation would amount to government censorship. I disagree. Humans, even in judicial bodies, make mistakes that harm others, and machines are no exception. On the contrary, automated learning systems often inherit the biases of their human creators.

Works like "Weapons of Math Destruction" and "Unmasking AI" have highlighted the dangers of unregulated AI, especially in public security. It is well documented that systems such as facial recognition can reflect racist and transphobic biases. Some technology companies have tried to mitigate these problems by employing ethics and responsibility teams, and in 2024 responsible technology is a hot topic.

However, cybercrime persists. Sexist, racist, homophobic, and transphobic content spreads across social networks, fueling real-life political stalking and harassment. Child sexual abuse material (CSAM) and non-consensual intimate imagery (NCII) remain serious problems, as Sophie Compton's documentary "Another Body" shows. Amid all this, some major tech companies waver, torn between supporting philanthropic initiatives and profiting from online hate speech.

In a landscape where profit so often trumps mutual respect and common sense, responsible regulation can seem almost absurd to hope for. Our parents taught us to be cordial and to respect other people's time. Why shouldn't that principle apply to the virtual world? I don't feel censored by laws that prevent me from stealing someone's phone. Why, then, would it be censorship to prohibit attacks on people online, especially attacks that use AI to fabricate false pretexts?

By leaving online platforms and AI systems unregulated, we let our metaphorical phones be stolen. We enable the harassment of women, people of color, and LGBTQIAPN+ individuals by an offensive-content industry that has grown exponentially since the 2010s. These groups are the most affected, but not the only ones.

We need a strong, collaborative legal framework: functional reporting centers that believe victims, guidelines for good digital coexistence, and real penalties for violators. Today, people accused of crimes like rape and harassment often face no consequences. How can we combine real-world and virtual measures to make our lives safer?

We must adopt this common-sense approach before big tech infiltrates governments under the guise of 'helping to keep the population safe'. Before that, we need them to show they can help on their own.

Ana Carolina Sousa Dias

Ana Dias is a Brazilian computer scientist who loves all things responsible tech. She is an AI policy researcher at CAIDP, a member of the Oscar Sala Chair (IEA/USP), and a responsible-tech researcher at LAPIN and C-PARTES.
