The silent revolution / VEFFEV
How AI is changing the game in the electronic music scene. A conversation with Valentino Fabris Valenti.
Written by Alessandra Pastaro
Handing over your ideas to a system, stepping back, and watching it bring them to life. (Almost) zero skills required.
As futuristic as it might sound, this is actually just one of the new frontiers of artistic production. AI’s ability to generate stunning, studio-grade visuals has already changed the way we create and perceive art – and music is quietly undergoing the same transformation. This revolution shakes every genre, yet hits differently in the electronic scene. In this space, where every sound is born and shaped within circuits, at what point does AI stop assisting the artist and start taking their place? And, most importantly, what does that mean for ownership, creativity, and music itself?
In electronic culture, the human–machine dialogue is nothing new. Just think of Brian Eno, the father of generative music, who for decades has been exploring composing as a fluid process, driven by self-evolving systems and never entirely finished.
Today, however, the autonomy and pervasiveness of AI tools have completely changed the game – every aspect of it. Platforms like Mubert or Soundful, for instance, can generate melodies, harmonies, and beats from a simple text prompt – either in a fully automatic or a collaborative mode, where the human sets the parameters and the algorithm shapes them into music. Sound design itself is following the same trajectory, with intelligent tools that learn from the producer’s taste, plug-ins that suggest new harmonic variations, and sonic automations that adapt to the energy of the track.
Artificial intelligence thus becomes an invisible co-author, capable of inspiring, accelerating, and transforming the artistic process.
Techno pioneer Richie Hawtin – always an icon of experimentation and creative avant-garde – was among the first to make his music interact with AI. For his DeepFocus project (in collaboration with Endel, an app that creates personalized soundscapes), the artist provided original sounds – which he described as Lego-like sound blocks – that the algorithm then autonomously combined, generating infinite variations.
But AI doesn’t just provide new tools to those who already know how to use them; it also allows complete beginners to produce tracks in minutes – a sort of automated ghost production. Fascinating, until you realize how easily this apparent democratization can turn into a flood of content, saturating an already overcrowded market while suffocating the creative side of artistic experimentation.
A problem that concerns not only quantity and quality, but also the very nature of what is being produced: algorithms are trained on existing music, often without authorization from artists or labels (as happened with Suno and Udio, recently targeted by major record companies). The result? A series of ethical and legal dilemmas that remain unresolved: can we truly speak of “originality” if AI reproduces styles built on others’ data? And ultimately, who is – today – the real author of a track?
We’ve seen data, tools, and scenarios. But to understand what actually changes inside the studio, behind the console, in the daily experience of those who create music, we spoke with Valentino Fabris Valenti – known as VEFFEV – an Italian techno DJ, producer and developer: a dual perspective that allows us to observe up close how AI is step by step rewriting both the craft and the imagination of music-makers.
What do you think about artificial intelligence composing tracks autonomously? In your opinion, is there a risk that AI could erase the uniqueness of the artist?
Although I’ve only recently started producing, I’ve been working for years as a developer, so I have direct experience with AI. As with coding, in music too, artificial intelligence can be a useful tool, but it should never replace human creativity. The real risk today comes from using AI to make everything easier: less research, less effort, less personality. That’s exactly where artistic uniqueness gets lost. As a DJ, I often hear tracks built with the same sounds or presets. When you hear something authentic instead, you recognize it immediately.
I’m very attached to old-school sounds, especially Detroit. I’ve always been fascinated by how those artists – with very limited tools – managed to create music that still sounds futuristic. That’s something we’re losing today, maybe because we have too much at our disposal and rely too heavily on automated tools. Jeff Mills, who’s always been ahead of his time, already said back in ’98 that AI would transform music, but he warned: “In the end, we’ll use machines to free ourselves from them”. I believe AI shouldn’t replace our work, but rather free us from mechanical tasks so we can dive deeper into our own artistic language.
If you had to teach an AI how to build a good track, which aspects of sound or structure would you teach it first? Is there a part of your work you would never delegate to a machine, a part of your artistic identity you consider untouchable? Why?
I’d definitely start with structure: tension, release, groove. These are essential elements for keeping attention alive and building a sonic narrative. Then I’d move on to texture, depth, and dynamics – elements that give a track its character.
But I’d never let AI make emotional decisions for me. That moment when you choose a dirty sound, an error, or decide to leave a raw loop... that’s where personality lives. That part is sacred to me. Richie Hawtin explains it well: “It’s always me deciding what to feed the AI. My decisions remain at the center.” Mills, too, talks about musical architecture as something profoundly human: “Creating a flow, a structure that grows... that’s the real programming.” In short, AI can learn grammar. But the voice – the thing that makes you unique – must remain yours.
From a technical point of view, do you think the absorption of style by an algorithm is comparable to sampling? Or are these, in your view, two completely different phenomena – with different ethical and creative implications?
I think they’re two very different things. Sampling is a creative and intentional act. You take something from someone else and transform it, you add your own style, you contextualize it. It’s a form of dialogue with the history of music. The absorption of style by AI, on the other hand, is often a passive process. The algorithm analyzes and reproduces without any real artistic intent. And if the output is too faithful to an existing style, without context or re-elaboration, it can become imitation or even appropriation. Technique and gesture, on the other hand, are integral parts of a producer’s identity. That’s why even when you use a sample, you add your own touch. AI can only approximate that style, but it will never have the intention behind it.
Given the boom of AI-generated tracks and the huge amount of content uploaded every day, how do you experience – as an artist and listener – this saturation? Do you think it will change the way people discover and experience music?
The saturation is obvious. Every day thousands of tracks are released, many built in similar ways. But I think that, amid this overload, the value of artistic identity will stand out even more. People will seek sounds that truly speak to them, coherent visions, labels with a clear aesthetic. It will no longer be about how much you produce, but about what you want to communicate. I think back to how the Wizard, in the ’90s, imagined a future where artists could expand their vision through technology, not disappear behind it. Well, I believe that will be the real balance: using technology to express ourselves better.
VEFFEV’s voice calmly reminds us that, despite how quickly innovation seeps into creative processes, there’s still a dimension AI struggles to imitate: the expression of an intention. In a universe where everything is potentially programmable, the greatest risk isn’t the replacement of the artist, but the abandonment of their expressive urgency.
And yet, perhaps it’s precisely in this contrast that the key lies: if AI can multiply possibilities, suggest directions, and spark new outcomes, then the artist’s goal isn’t to resist it, but to work with it and choose intentionally. To choose what to use, how, and above all why.
Because, in the end, a track leaves a mark only when it evokes something – in the heart of both the creator and the listener – and this is something AI can’t do… yet.