Artificial intelligence is often deliberately portrayed as something magical: just say the word and it becomes reality. That is what happened when the posthumous Nirvana song “Drowned in the Sun” was released.
Everyone who loved the music of Kurt Cobain and Nirvana can’t help but wonder, every time they listen, what would have happened if Kurt hadn’t taken his own life in 1994. There was talk of a possible collaboration with Michael Stipe of R.E.M., or of a solo project.
To satisfy eager fans, people dig through attics for some discarded recording that was never officially released or played in concert. The result is often disappointing, of interest only to memorabilia collectors.

“Drowned in the Sun” was created as part of Lost Tapes of the 27 Club, a project by the non-profit organisation Over The Bridge that aims to raise awareness of how mental health issues continue to affect musicians and creative people in the music industry. The project has also produced “new” songs by Jimi Hendrix, Amy Winehouse, and The Doors.
The result is striking: the song builds from a quiet riff to the fury typical of the Seattle band. The ingredients are all there. The lyrics are credible too: “The sun shines on you but I don’t know how,” followed by a surprising chorus: “I don’t care/I feel as one, drowned in the sun.”
The new Nirvana song was created by feeding MIDI files into an AI program that learns to compose in the style of a given artist by analysing their past work.
The team started with a sample of 20–30 songs, broken into their constituent elements: solos, rhythm guitars, bass, drums, backing vocals. Feeding in whole songs confused the software, which could not isolate the band’s most distinctive traits. According to those who worked on the project, 90% of the generated riffs were poor and unusable, but in the remaining 10% they began to find something interesting. A similar process was used for the lyrics: a simple neural network was trained on the band’s songs and, given an initial cue of a few words, completed the sentences. Again, a final selection pass was needed to find the verses whose syllabic structure best fit the generated music.
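The project’s actual pipeline has not been published, but the stem-splitting step described above is easy to imagine in code. Here is a minimal, hypothetical sketch using the pretty_midi library (the input file name and naming scheme are assumptions):

```python
# Hypothetical sketch: split a multi-track MIDI file into per-instrument
# stems, roughly the preprocessing described above. The file name is
# invented; the project's actual tooling is not public.
import pretty_midi

pm = pretty_midi.PrettyMIDI("source_song.mid")

for i, inst in enumerate(pm.instruments):
    stem = pretty_midi.PrettyMIDI()
    stem.instruments.append(inst)              # keep a single part (bass, drums, ...)
    name = inst.name.strip() or f"track_{i}"
    stem.write(f"{name}.mid")                  # one training file per part
```

Each stem can then serve as a separate training example, so the model sees clean bass lines or drum parts rather than a dense mix.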
It took a year of research and development to complete the work. During that time, the team brought in experts on the band’s catalogue to check for possible plagiarism. The song was then sung by Eric Hogan, frontman of a Nirvana tribute band.
The Magenta Project
The new Nirvana song was created using Magenta, an open-source project on GitHub that explores the role of machine learning as a tool in the creative process. It is distributed as a Python library, powered by TensorFlow, with utilities for manipulating source data (music and images), training machine learning models on that data, and generating new content from those models.
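To get a feel for the library’s data model, here is a minimal sketch using note_seq, the music-representation package Magenta is built on: it constructs a short, arbitrary phrase by hand and exports it as a MIDI file.

```python
# Minimal sketch of Magenta's core data structure, the NoteSequence.
import note_seq
from note_seq.protobuf import music_pb2

seq = music_pb2.NoteSequence()
seq.tempos.add(qpm=120)

# A short, arbitrary four-note phrase (MIDI pitches, times in seconds).
for i, pitch in enumerate([60, 62, 64, 67]):
    seq.notes.add(pitch=pitch,
                  start_time=i * 0.5,
                  end_time=(i + 1) * 0.5,
                  velocity=80)
seq.total_time = 2.0

note_seq.sequence_proto_to_midi_file(seq, "phrase.mid")  # export for playback
```

Magenta’s generative models consume and produce these NoteSequence protos, so training data and generated output flow through the same representation.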
Magenta can also run in a web page using Magenta.js, an open-source JavaScript API that executes pre-trained models directly in the browser. It is built on TensorFlow.js and performs remarkably well.
There are many applications built on Magenta.js, such as Music Transformer, which can generate long pieces of music. This is a challenging problem, as music contains structure at multiple timescales, from milliseconds to the repetition of entire sections.
Music and AI: a fatal attraction?
Getting back to Nirvana’s song, and keeping the project’s noble purpose in mind, we need to ask a few questions, starting with the ethical one that comes up in any discussion of AI: “Is this the end of real music? Will musicians be replaced by machines?”
The number of people involved in Lost Tapes of the 27 Club suggests otherwise: Magenta programmers, music producers, audio engineers, and singers were all necessary to arrive at a result reasonably close to what Kurt Cobain might have produced.
However, there is no doubt that more and more music producers are chasing commercial success, making the creative process heavily data-driven: what is the age group, sex, occupation, and religion of the intended listeners? Adding a nostalgic note to a song because an algorithm analysing the datasets of a music-streaming service says so is not science fiction: it is the present.
But if a hit song has to influence our mood, how ethical is it for artificial intelligence to try to make us feel good all the time?
To understand what I mean, consider Kórsafn, a project started in 2019 by Microsoft in collaboration with the Icelandic singer Björk: music is generated with AI based on the position of the sun and changes in the weather, and played continuously in the lobby of the Sister City hotel on New York’s Lower East Side.
Kórsafn draws on an archive of Björk’s music and creates new arrangements, adapting them to the time of day with the help of a rooftop camera that can capture moments such as the return of birds in spring. The purpose? Beyond analysing seasonal phenomena with AI, the idea is to give hotel guests a mood connected to the weather.
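Microsoft and Björk have not published Kórsafn’s internals, but the general idea, environmental readings steering the choice of arrangement, can be illustrated with a toy sketch. Everything here (function name, thresholds, labels) is invented for the example:

```python
# Purely illustrative: map sun position and weather to an arrangement.
# All names and thresholds are invented; Kórsafn's internals are not public.
def choose_arrangement(sun_elevation_deg: float, cloud_cover: float) -> str:
    if sun_elevation_deg < 0:
        return "night_choir"        # sun below the horizon
    if cloud_cover > 0.7:
        return "overcast_drone"     # heavy clouds
    if sun_elevation_deg > 40:
        return "bright_polyphony"   # high sun, clear sky
    return "morning_layers"

print(choose_arrangement(sun_elevation_deg=25.0, cloud_cover=0.2))  # morning_layers
```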
Another, albeit simple, application of AI to music has been visible for years in the major streaming services. Based on what we have listened to in the past, the app recommends new songs and potentially endless playlists. And this ability improves day by day, even suggesting bands we have never heard of.
Why did I use the adjective “simple”? Because we are not far from the moment when the suggested song will be chosen by analysing biometric data from a device we wear day and night. Now the second question makes sense: we will listen to music that lowers our pulse when we are agitated, or music that pumps us up when we are down.
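To make the scenario concrete, here is a deliberately naive sketch of biometric-driven selection. Nothing here corresponds to any real product; the thresholds and labels are invented:

```python
# Hypothetical illustration: a wearable's heart-rate reading steering the
# mood of the next recommended track. Thresholds and labels are invented.
def target_mood(resting_hr: int, current_hr: int) -> str:
    if current_hr > resting_hr * 1.3:
        return "calming"      # agitated: slow the pulse down
    if current_hr < resting_hr * 0.9:
        return "energizing"   # low arousal: pick something upbeat
    return "neutral"

print(target_mood(resting_hr=65, current_hr=95))  # -> "calming"
```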
Conclusions
Once again, the ethical issue requires us to ask ourselves questions, weighing whether the impact of these technologies on the different aspects of our lives adds more value than it takes away. I do not have an answer, and perhaps that is as it should be, but I am convinced that continuing to talk about it is the only tool we have for finding where the limits are.
Keep following us!