Undoubtedly, AI can optimize countless processes, getting our daily routine activities done faster and more efficiently. AI is closely integrated into our lives and, at the same time, remains hidden: users often don’t even notice its presence.
However, as a civilization, we are inevitably approaching a point where AI will expand its boundaries and replace some human activities. It is already happening today with AI-generated art, text, music, and even fashion trends.
American singer and electronic music artist Grimes, who recently changed her name to c, declared that we seem to be living in the age of the sunset of art, at least human art:
“Once there’s actually AGI (Artificial General Intelligence), they’re gonna be so much better at making art than us.”
A year later, c (Grimes) wrote a personalized, endless digital lullaby for the Endel app. She provided the musical basis, and the app’s artificial intelligence now continuously varies it, adapting to the time of day, weather, preferences, movement, and heart rate of the individual user.
Composing music has traditionally been regarded as a “sacred” process. Will AI music generation erase the mystery or sacrament in music, or will it instead enrich it and lift art above commerce? And can a robot write a symphony? To answer these questions, we need to take a closer look.
The first piece of music composed by artificial intelligence appeared in 1956. Two professors at the University of Illinois, Lejaren Hiller and Leonard Isaacson, used the university’s ILLIAC computer. Hiller and Isaacson prescribed rules from which the machine generated code that was then transposed into notes. The result of the experiment was a four-movement work for strings, which Hiller and Isaacson called the Illiac Suite.
Later, in 1960, the Russian researcher Rudolf Zaripov published the first paper on algorithmic music composition, using the “Ural-1” computer.
In 1965, Ray Kurzweil premiered a piano piece created by a computer. His program was capable of recognizing patterns in existing compositions, analyzing them, and reusing them to create novel melodies. Kurzweil appeared on the American show I’ve Got a Secret, where he performed the piece on the piano.
And in the early 1980s, the composer David Cope began work on Experiments in Musical Intelligence (EMI), a program that analyzes existing music and creates new works based on it. EMI once composed five thousand Bach-style chorales in a single day. In 1997, Cope presented three compositions to an audience: one written by Bach, one by EMI, and a third by music theory teacher Steve Larson. The audience had to guess who composed each piece. Larson’s composition was taken for the work of artificial intelligence, while EMI’s music was mistaken for Bach.
Today, AI is broadly used to create modern pieces of music. Brian Eno, the English musician, record producer, and pioneer of ambient music, recently released the album Reflection. The record contains a single 54-minute ambient piece, created with the help of AI.
The virtual character and digital art project Miquela also released her first single, Not Mine, likewise created with AI.
Generating music automatically is quite a challenge for many reasons. The biggest obstacle is that even a simple three-minute song, which a group of people can easily memorize, contains far too many variables for a computer to handle naively. In addition, there is as yet no perfect way to train artificial intelligence to be a musician.
When composing music, one rarely creates a new piece from scratch. Composers reuse or adapt, consciously or unconsciously, musical elements they have heard before, guided by principles and recommendations from music theory. In the same way, a computer assistant can be brought in at various stages of creating a piece to initiate ideas, suggest material, or complement a human composer.
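This idea of reusing previously heard material can be illustrated with a deliberately tiny sketch (not any system mentioned here): a first-order Markov chain that “learns” note-to-note transitions from an existing melody and samples a new one from them. The note names and training tune are invented for illustration.

```python
import random

# Toy sketch: a first-order Markov chain "composer" (illustrative only).
# It counts which note tends to follow which in a known melody, then
# samples a new melody from those counts -- a crude model of how
# generative systems reuse patterns they have "heard" before.

def learn_transitions(melody):
    """Count which notes follow each note in the training melody."""
    transitions = {}
    for prev, nxt in zip(melody, melody[1:]):
        transitions.setdefault(prev, []).append(nxt)
    return transitions

def compose(transitions, start, length, seed=0):
    """Sample a new melody by walking the transition table."""
    rng = random.Random(seed)  # fixed seed for a repeatable result
    melody = [start]
    for _ in range(length - 1):
        choices = transitions.get(melody[-1])
        if not choices:            # dead end: fall back to the opening note
            choices = [start]
        melody.append(rng.choice(choices))
    return melody

# "Training" melody: the opening of a familiar nursery tune, as note names.
source = ["C", "C", "G", "G", "A", "A", "G", "F", "F", "E", "E", "D", "D", "C"]
table = learn_transitions(source)
print(compose(table, "C", 8))
```

The output only ever contains notes and transitions present in the source, which is precisely the limitation the surrounding text describes: such a system recombines what it has heard rather than inventing from nothing.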
Several technologies and tools today generate, or help to create, music using AI.
The Computer Music Project at CMU (Carnegie Mellon University) was designed to compose computer music that improves human musical experience and creativity. The work draws on many fields, including music theory, cognitive science, human-computer interaction, computer graphics and animation, programming languages, signal processing, and artificial intelligence and machine learning, among others.
ChucK is a sound-oriented programming language for generating, recording, and synthesizing sound in real time. Its most distinctive features are a peculiar syntax and, because real-time operation is its main purpose, explicit language-level control of timing and digital sound down to the individual sample.
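To make “control at the sample level” concrete, here is a rough analogy in Python rather than ChucK itself: computing a sine tone one sample at a time and writing it to a WAV file with the standard library. The frequency, duration, and filename are arbitrary choices for the sketch.

```python
import math
import struct
import wave

# Rough Python analogy (not ChucK): synthesizing sound one sample at a
# time, the level of control ChucK exposes directly in the language.
SAMPLE_RATE = 44100   # samples per second (CD quality)
FREQ = 440.0          # A4, an arbitrary test pitch
DURATION = 1.0        # seconds

frames = bytearray()
for n in range(int(SAMPLE_RATE * DURATION)):
    t = n / SAMPLE_RATE                          # time of this sample
    amplitude = math.sin(2 * math.pi * FREQ * t) # value in [-1, 1]
    # Scale to 16-bit signed integers at half volume and pack little-endian.
    frames += struct.pack("<h", int(amplitude * 32767 * 0.5))

with wave.open("tone.wav", "wb") as wav:
    wav.setnchannels(1)            # mono
    wav.setsampwidth(2)            # 16-bit samples
    wav.setframerate(SAMPLE_RATE)
    wav.writeframes(bytes(frames))
```

In ChucK the same per-sample loop is expressed by explicitly advancing time inside the language, which is what makes sample-accurate synchronization a first-class feature there.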
The popular photo-editing app PicsArt offers an endless AI music generator. To help social media content creators use copyright-free music in their videos, the app built an artificially intelligent musician that composes endless music.
Modern, cinematic, electronic, pop, jazz, tango, and many other genres can be generated with AIVA, an artificial intelligence composing software. For only $33 a month, the user can create a piece of music for any purpose. Listen to this track, it’s pretty impressive.
Another tool, Jukebox, generates music from scratch, taking a chosen genre, artist, and lyrics as input. Jukebox’s autoencoder model compresses audio into a discrete space using a quantization-based approach, VQ-VAE, and can generate short instrumental pieces from a few sets of instruments.
AI music generation simplifies music as a concept itself, turning it into something consumeristic.
Music business consultant Mark Mulligan says:
“AI may never be able to make music good enough to move us in the way human music does. Why not? Because making music that moves people – to jump up and dance, to cry, to smile – requires triggering emotions, and it takes an understanding of emotions to trigger them. If AI can learn to at least mimic human emotions, then that final frontier may be breached. But that is a long, long way off.”
Valerio Velardo, AI music expert and former head of Melodrive, which until recently used AI to create video game soundtracks, says that even when AI-created and human-created music become indistinguishable from each other, people will always appreciate being able to sit in a room with another human and create art. It’s part of our nature.
But even such imitation is unlikely to make AI a true artist. Deep learning algorithms trained on Bach’s chorales may produce music that even experts sometimes mistake for the real Bach, but it is merely mimicry. Imitating the style of the masters, and developing it, is what artists do at the apprentice stage of their careers. But that is not at all why we appreciate composers.
The positive aspect of AI-generated music is the hope it offers that artists who compose for purely artistic purposes will be unchained. We can leave commercial music to AI and gain more time to create authentic music that speaks to our feelings and emotions.