Think of the Beatles song "She Loves You." Recorded early in the band's career, it was captured live in the studio and tracked in mono, with all of the instruments and vocals squished down onto a single strip of tape. Now, imagine that you could somehow lift the sound of George Harrison's guitar out of that mix, isolating it from the rest of the band's glorious bashing so his intricately picked lines ring out all on their own.

Until recently, this feat was possible only through countless hours of hands-on scrubbing inside professional audio production software. But recent advances in artificial intelligence, coupled with new approaches to audio processing, have made it easier than ever for professionals and enthusiasts alike to pick apart a song with stunning results.

By disassembling a tune, the song's rights holders can use the raw material to dress it up in new ways and inject it back into the marketplace. With all of the tracks properly isolated, an engineer could remaster the song to give it a more modern sound. Music supervisors could remix it in surround sound for use in a Netflix show, a Hollywood movie, or a video game. DJs and remix artists could retool the components to create wholly new compositions.

This phenomenon, known as upmixing, is a boon for recording artists, producers, and publishers, who are eager for ways to breathe new life into their most beloved works. It's also a big deal for fans, who get to hear vibrant, newly fleshed-out versions of their favorite songs.

Music journalist Jesse Jarnow traces the history of upmixing, interviewing the AI researchers, underground software hackers, and curious Abbey Road engineers standing at the forefront of innovation in this field.

Michael Calore | Senior Editor, WIRED