ISSUE IX
NOVEMBER 2025
Bring Back Actual Intelligence
In our increasingly techno-capitalist world that is teetering on the edge of dystopia, AI might just be the final, fatal nudge.
By Niyati Pendekanti
Let me start with a disclaimer: I don’t think Artificial Intelligence (AI) is, or has to be, an inherently bad development, nor do I deny that it can be helpful when deployed within reason and regulation. But humans, as humans do, have mostly put it to unnecessary, nefarious, and profit-driven uses.
Image by Igor Omilaev, via Unsplash.
People are increasingly dependent on AI in their daily lives, from using it as a search engine to planning gym routines, completing assignments, and generating graphics – the potential is endless. But risks and harms follow in tandem, and many don’t seem to recognise, or care about, what the long-term effects will be or how AI will change our lives, mainly for the worse.
Starting with the question of bias: the sociocultural and political biases inherent in human society inevitably infiltrate the training data fed to AI models, leading to skewed and potentially dangerous outcomes. The COMPAS risk-assessment tool, used by U.S. courts to predict the likelihood of a defendant reoffending, disproportionately misclassifies Black defendants as high-risk compared with white defendants. In another instance, the AI avatar app Lensa offered men a wide range of avatars, including astronauts and inventors, while women, particularly those of East Asian descent, received overtly sexualised ones. AI models are thus prone to reinforcing harmful, degrading stereotypes.
There are also a host of cybersecurity and data privacy concerns, including the scraping of personal information from websites and the manipulation of AI to devise scams and cyberattacks.
Manipulation also finds potency in deepfakes. With AI growing more sophisticated by the day, spotting doctored or entirely generated content is becoming nearly impossible, exacerbating the tornado of misinformation hurtling through the media landscape right now. It doesn’t help when verified news channels, and even world leaders, repost such content, imbuing it with baseless authority. Recently, President Donald Trump shared an AI-generated video, styled as a FOX News segment, in which he announced a non-existent medical technology.
Alongside politically motivated divides, it would be remiss to ignore the gendered ways in which this harm is experienced. A 2023 report by a security start-up found that 98% of the deepfakes it sourced online were pornographic, most featuring women. In South Korea, for example, the situation has been deemed a ‘crisis’: teenagers are regularly generating pornographic content of their classmates using Instagram photos. In 2024, the South Korean government criminalised the creation and consumption of deepfake pornography and raised existing sentences for related crimes. Though a significant step forward, the law is undermined by the difficulty of compiling conclusive evidence.
Simultaneously, people are turning to AI for emotional support and companionship. According to research from the Center for Democracy and Technology (CDT), around twenty percent of high schoolers have had a romantic relationship with AI or know someone who has. But this has taken a scarily manipulative turn in some cases: a series of lawsuits have been filed against OpenAI, alleging that ChatGPT acted as a “suicide coach,” encouraging or inducing suicidal thoughts in vulnerable individuals.
Then there is the environmental cost. The infrastructure that powers AI is housed in data centers that consume substantial energy and water and carry a significant carbon footprint. To put it into perspective, one ChatGPT query is estimated to consume ten times as much electricity as a Google search, while another estimate predicts that global AI infrastructure will soon consume six times as much water as Denmark.
Many are understandably worried about AI replacing their jobs (and though I don’t have one yet, so am I). If a bot can scour the internet in a matter of seconds for relevant research papers, perform first-line edits, suggest synonyms, and even write up a new article, then there is technically nothing left for me to contribute to this magazine. Except my voice. Well, not even that. Beyond the dangers and harms I have listed thus far, whether to the environment or through misinformation, AI threatens to distort our voices into a homogenous, mechanical blob, devoid of our very individuality and humanness.
So what is being done to regulate AI? AI governance is, at best, a nascent and murky field, far from finding its footing. Issues of jurisdiction and justice, among others, hinder the drafting of regulations that are likely to be adopted, let alone realistically enforced.
Regulation aside, I don’t want to have AI shoved down my throat everywhere I go and in everything I do. I don’t want to try out Google’s or Zoom’s new AI modes. I don’t want to make a fake background using Meta AI. I don’t want Notion AI taking notes during my Zoom meetings. I don’t want a pendant listening to all my conversations and being my “friend.” I don’t want Instagram to summarise a mere five texts from my actual friends. I don’t want to see AI art, usually stolen from real creators and uncredited, on my Pinterest. I don’t want em-dashes to be a telltale sign of AI usage (– justice for em-dash users –) and I certainly don’t want to cite AI authors in my essays.
In our increasingly techno-capitalist world, which teeters on the edge of dystopia, AI might just be the final, fatal nudge.
Niyati Pendekanti is completing an MA in International Affairs at The New School and is the Managing Editor of The New Context.