Art vs Algos: Bridging the AI Divide

By Gameli Ladzekpo, Founder, Flossy AI

Introduction

Last week, I had dinner with close friends: two artists and two technologists. The evening began copacetically enough: a story about a recent trip to Congo, a lament about the lack of affordable studio space in London. But then I asked, “What do you think of AI?” The question hung in the air, and the mood at the table shifted. The technologists leaned in, eager to discuss its potential, while the artists exchanged skeptical glances. It quickly became clear: this wasn’t just a debate about technology; it was a clash of perspectives.

At the heart of this divide lies a fundamental difference in how humanity views the world. Broadly speaking, there are two opposing lenses:

  • The Technologist sees nature as a “standing reserve,” something to be cultivated, controlled, and ultimately utilized.
  • The Artist sees nature as a source of reflection, a way to uncover deeper truths about existence.

This tension between utility and reflection has existed for centuries, but the rise of AI has sharpened the divide. As technologists, we tend to focus exclusively on what AI can do – its speed, scale, and efficiency – while overlooking where it should take us.

In this essay, I want to share a few ways that artists’ perspectives could serve as a vital counterforce to the dangers of a purely technological worldview, helping us navigate AI’s impact on creativity, ethics, and humanity itself.

De-Anthropomorphisation

AI systems are not human beings. Using human-like language, such as describing how AI “hallucinates” or “reasons”, blurs the line between a tool and a sentient being. Humanising AI with personality makes it more engaging, and metaphors simplify complex processes, but this framing also risks implicitly assigning traits like moral agency or free will. That shift can muddy accountability, placing the ethical burden on the algorithm itself. After all, when an AI “hallucinates”, does the responsibility lie with the AI or with its developers?

Unlike physicians, technologists don’t take a Hippocratic oath. The choice to accept responsibility is critical. By proactively addressing these concerns, we can ease the fears of artists and society at large, demonstrating that AI is a controllable tool and encouraging constructive engagement.

Understanding Technophobia

For artists, anthropomorphising AI can feel like a threat to their uniquely human voice, leading some to dismiss AI as incapable of making music that genuinely resonates with humans. Yet examples like BBL Drizzy, the viral AI-generated smash, show that AI can bring joy and entertainment. Rather than a threat, AI might be better framed as a catalyst for artistic evolution, much as the advent of electronic instruments didn’t diminish jazz but paved the way for entirely new genres like hip-hop and EDM. Challenges around copyright and fair compensation are real. But just as Napster’s disruption led to platforms like Spotify and new revenue models for artists, the current disruption caused by Suno could spawn new revenue opportunities. This time, artists have the chance to shape the policies and platforms that protect their interests while embracing innovation.

To the Accelerationists, innovation ≠ progress

Acceleration requires both speed and direction, yet technologists often fixate solely on speed. As we race toward a future of AI-generated memes and endless content, we must ask: is this where we want to go? Silicon Valley’s most transformative leaders, from Jobs to Graham, succeeded by bringing an artist’s vision to technology. The technologist turns trees into firewood; the artist turns trees into poetry. But only together can we share songs by the warmth of the campfire.

I wrote this after attending a session at the Alan Turing Institute during Bridge AI. As a founder, it’s tempting to reduce ethics to checklists and compliance rather than grappling with fundamental questions about humanity’s relationship with technology. In my reading, I found Heidegger’s artist-technologist divide to be a useful framework. After all, if we want meaningful AI ethics, we must first examine the worldviews that guide our technological development. Thanks to Akil for reading a draft!
