Remember in the last newsletter, when I said I was addicted to live performance? I actually have a four-month pause between gigs…but then I have nine more planned by the end of the year. I think I may need professional help.
Why? Because I have a new studio I should be spending time in, creating new music! I do have a new album coming out later this month, and the studio itself has been getting some nice press coverage…but it does make me wonder about the time serious musicians dedicate to actually making music versus the time we need to spend doing everything else.
“Just use AI to make new tracks!” I hear someone suggest from the back of the room. Well, that’s something I’d like to talk about in this newsletter. I don’t claim to have The Answer, but I think there are a few discussions we should be having – and hopefully my musings will help kickstart some of those discussions.
That, as well as studio coverage and interviews, upcoming albums and gigs, and more are all covered in the online version of the full newsletter. Here is the summary; clicking on the subject headers will take you directly to that section:
- featured article: Artificial Intelligence is an unavoidable subject these days. Some are actively avoiding it; some use it to create entire albums; some have found a middle ground where they use it to help their own human creative process. I have a few thoughts on the subject I’d like to share; that’s the main article in this newsletter, below.
- Alias Zone updates: The stereo version of my next album – Paradise Lost – should be out by the end of July, and we have a great party planned…
- Learning Modular updates: My new studio has been getting some press coverage, including an interview for Bobby Owsinski’s Inner Circle Podcast. I’ve also overhauled the Learning Modular home page.
- Patreon updates: Some thoughts (and surprising facts) about analog synthesis, preparing a stereo set for quadraphonic performance, dealing with sound problems at a gig, and the origins of my new studio are among the new posts recently added to my Patreon page. I’ve also re-introduced annual subscriptions.
- upcoming events: I have a big album release party & gig happening in the Denver area on July 27th, plus several other interesting gigs planned the rest of this year.
- one more thing: Patch & Tweak Club is an online way to access Patch & Tweak and other Bjooks releases, including additional content.
Some Thoughts on AI in Music
The “executive summary” (written by me, not ChatGPT) would be:
- I have mixed feelings about the use of AI in the arts in general, and music in particular.
- I don’t like how many AI engines are trained on the work of human creatives without their permission, nor how some use AI to create entire albums and then claim they “created” them.
- However, I (and other composers) do use forms of machine intelligence to help us get over creative blocks and to help us save time.
The thoughts behind those bullet points are a bit more nuanced, which I’ll share below. I don’t claim to have The Answer – there are a lot of gray areas – but I do think we should be discussing these things, and sooner rather than later.
Who Gets Credit For Creativity?
I spent many years both at Roland R&D LA as well as solo working with lawyers in the US on intellectual property issues, including who gets credit for creating samples and music; I even co-presented talks at NAMM and CyberArts on these issues back in the 1990s.
The highly condensed version of what I learned is:
- A “process” which executes without human intervention (such as a synthesis circuit or algorithm) is the domain of patents
- Making creative decisions is what gives someone the right to claim copyright in a work
For example, believe it or not, sound effects and single-note samples can be protected by copyright – as long as someone made creative decisions about what mic to use or where to place it, how they edited the file, et cetera.
If something does not have any underlying creativity – such as a nature sound – these choices are what can make the final edited work copyrightable. However, if the original source does exhibit creative choices, you coming along and making additional choices does not transfer the copyright to you – instead, you are creating a “derivative work,” while the original artist keeps the copyright. In most cases, the original artist is the only one allowed to create derivative works (unless they give you permission to do so).
Of course, there are legal exceptions, such as Fair Use (which is narrower than many assume), and “transforming” the work from one area of creative expression to another – such as using the colors and patterns in a painting to create a piece of music. Plus, greedy artists and corporations with well-paid lawyers continue to find ways to erode the rights of the original artist.
For this and other reasons (including being an artist myself), I do believe the original artists should be the ones who can make money off of their creations, or who otherwise decide how they can be used. And as an extension of that, I am deeply opposed to artificial intelligence engines that are trained on the works of human artists without their permission (and certainly without compensation). To my mind, they are creating derivative works based on the work of the original artists.
But then I have to ask myself…how did I learn to create? By studying other artists, and initially by trying to imitate them. I’ve grown past that to trying to synthesize something new out of my own personal experiences and interests, but it certainly does create gray areas when thinking about AI art generation…
Is Prompting AI “Creative Enough”?
Regardless of my opinion about the current AI engines, I do think there is a difference between the artist who is actually creating the work, and the producer or director. A person writing prompts for an AI engine is more of a producer or director, telling someone (or something) else what they would like it to do.
If a person is going to create an album by entering prompts into an AI engine, on a moral level I do not think they should put themselves down as the artist – the AI engine, usually trained on the works of other artists, is filling the artist role. Crafting good prompts is indeed a skill worthy of recognition, but I think the prompt-writer is better described as the producer or director in this case. (I’m not naming any AI music generators because frankly I haven’t used any yet, and therefore can’t make an actual recommendation.)
But then, the complexities of intellectual property law start to drag me back into a gray area. If you argue that a generative AI engine is a “process,” then legally you can make the argument that the prompt-writer is the one expressing creativity in this case. But even if you take that legal position, I still think we need to come up with a separate term for the prompt-writer – even if it’s just “producer.”
The music streaming companies are also trying to deal with AI-generated music. Spotify has a mixed position on this: They won’t allow AI-generated music that impersonates a human artist, but they are okay with you uploading your “own” music or even music inspired by other humans as long as you can claim copyright in it (thus the discussion above). Some speculate that Spotify would be fine if all their music was AI-generated, as long as subscribers were happy listening to it; many record labels oppose this, and are lobbying the streamers not to accept AI-generated music at all.
For a different take, many report that Apple is getting increasingly aggressive about rejecting submissions. Some of the reasons appear to include a submission being too similar to other already-released songs, or if it appears to be too simplistic (i.e. not creative enough) – with at least some AI music failing these tests and getting rejected. This is happening often enough that a streaming music distributor actually warned me about it once they heard I did “ambient” music (even though I don’t come remotely close to those red flags which would cause rejection).
Good Uses of Artificial Intelligence
At the Electrowave music festival at University of Colorado / Colorado Springs earlier this year, I was on a panel about the use of AI in music. Steve McQuarry (aka Synsor) gave an excellent example: a composer for film or television, working on a deadline, writes a core theme for a character or emotion, and then uses it to train an AI to generate other similar, related themes. The composer then uses their own human judgement to decide which ones to use, or how to alter them to be a better fit. I think this is an excellent use of AI.
I consider some of the musical tools I use to embody a more primitive form of AI – let’s call it “machine intelligence” – to help me create and perform music.
The one I use most often is Chance Operations in the Five12 Vector Sequencer. After I write, say, an 8-note sequence, I will think about what variations on it I would personally play – such as a pick-up note before the downbeat, transposing a certain step either up an octave or down a fifth or whatever, doubling up (ratcheting) a certain note, etc. – and also how often I would play that variation. I can then enter those “rules” that express my own creativity, and have the sequencer automatically execute them – including “probability” amounts – while I play another line on top of it.
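For the curious, here is a minimal Python sketch of how such chance rules behave – my own toy illustration, not the Vector’s actual algorithm, and all of the names, rules, and probabilities below are hypothetical:

```python
import random

BASE_SEQUENCE = [60, 62, 64, 67, 69, 67, 64, 62]  # an 8-note line, as MIDI note numbers

def transpose(semitones):
    # Returns a variation that shifts a step by some interval
    return lambda note: note + semitones

def ratchet(note):
    # Doubles up a step: the note plays twice in its slot
    return [note, note]

# Each rule: (step index, variation, probability of firing on any given pass)
RULES = [
    (3, transpose(+12), 0.25),  # step 4 jumps up an octave 25% of the time
    (5, transpose(-7),  0.40),  # step 6 drops a fifth 40% of the time
    (7, ratchet,        0.15),  # last step ratchets 15% of the time
]

def play_pass(sequence, rules):
    """One pass through the sequence, with chance-based variations applied."""
    out = []
    for i, note in enumerate(sequence):
        event = note
        for step, variation, probability in rules:
            if i == step and random.random() < probability:
                event = variation(note)
        out.append(event)
    return out

for bar in range(4):  # four bars, each a potentially different variation
    print(play_pass(BASE_SEQUENCE, RULES))
```

The key point: the creative decisions – which variations are musical, and how often they should happen – are all mine; the machine just rolls the dice and executes them.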
Going further, the Vector also has Generate and Evolve functions. Akin to Steve’s example above, after I have written a sequence I like, I might try using Evolve to see if it can come up with a nice variation to play later in the piece. I have even used Evolve during live performance, executing it every few bars during a “solo” – if I liked what the Vector came up with, I might stick with that for a few measures; if I didn’t, I would quickly hit Evolve again and hopefully get a variation I liked better. (That was a stressful high-wire act I would be slow to repeat in front of an audience!)
And, if I’m completely stuck writing one of the musical layers for a piece, I will enter the key and scale, choose a musical style from the Vector’s list, and have it Generate a sequence. I do this over and over, throwing out many of the results, but keeping the ones that I think have potential. I then edit the notes that annoy me in the automatically-generated ones to end up with something more to my personal taste. (The same happens with sequence variations created using Evolve.)
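If you want a feel for how a generate-and-evolve workflow like this behaves, here is another small Python sketch – again my own simplified illustration under assumed behavior, not what the Vector actually does internally:

```python
import random

C_MINOR = [60, 62, 63, 65, 67, 68, 70, 72]  # one octave of C minor, as MIDI note numbers

def generate(length=8, scale=C_MINOR):
    """Draw random scale tones to propose a brand-new sequence."""
    return [random.choice(scale) for _ in range(length)]

def evolve(sequence, scale=C_MINOR, mutation_rate=0.25):
    """Return a variation: each step has a chance of sliding to a neighboring scale tone."""
    out = []
    for note in sequence:
        if random.random() < mutation_rate:
            idx = scale.index(note)
            idx = max(0, min(len(scale) - 1, idx + random.choice([-1, 1])))
            out.append(scale[idx])
        else:
            out.append(note)
    return out

seed = generate()
print("generated:", seed)
for _ in range(3):
    seed = evolve(seed)  # the human keeps, edits, or rejects each variation
    print("evolved:  ", seed)
```

The human’s role here is curation: run it over and over, discard most of the results, and edit the keepers.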
This is when I consider the machine to be a true collaborator – and I am happy to have machines that can make me better than I am on my own. (Or which can at least speed up my own creative process.) I guess that means I should start adding the Vector to my album credits? Check out the liner notes for my next album when it is released… 😉
In the meantime, I’d be curious to hear your thoughts and experiences on this subject; feel free to enter them in the Comments below.
Alias Zone Updates
The stereo version of my next album – Paradise Lost – is finally mixed and off to the mastering engineer. I have written the liner notes and started on the CD artwork. Early versions of the images being used for the front cover and flaps are above; the album will use my own artwork and photos (I consider that a way to distinguish myself from those using AI to generate their album covers). The album release party is on July 27 at Prismajic in the Denver, Colorado area; see Upcoming Events below for the details – it’s really going to be something special.
After that’s released, I am going to start work on the Atmos spatial audio version of the mix. When it’s done, I will release both the stereo and Atmos versions to all of the major and many of the minor streaming outlets. Atmos has made choosing a streaming distributor much more challenging, as some won’t handle Atmos, and many of those who do have large surcharges and small file size limits for Atmos uploads – one of the subjects I’ll be discussing later this year on my Patreon page.
I actually moved this album ahead of another in the queue. I have all of the tracks for a dark ambient album, including solos by a pair of Big Names in the electronic music world. I want to make another set of overdubs myself, and then we’ll see how this departure from my normal, more-rhythmic style of music goes over!
Learning Modular Updates
I built a new studio last year. Two of the main criteria were:
- enough room to set up all of my instruments as well as my live performance system at the same time
- the ability to mix in full Dolby Atmos 7.1.4 (12-speaker) sound – see the quick channel-count sketch below
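If that “7.1.4” shorthand is new to you, here is how the channel count adds up – the speaker labels below follow common Atmos naming conventions, as a sketch rather than gospel:

```python
# Dolby Atmos 7.1.4: 7 ear-level channels + 1 LFE + 4 height channels = 12 speakers
EAR_LEVEL = ["L", "C", "R", "Lss", "Rss", "Lrs", "Rrs"]  # front trio, side surrounds, rear surrounds
LFE       = ["LFE"]                                      # low-frequency effects (subwoofer)
HEIGHT    = ["Ltf", "Rtf", "Ltr", "Rtr"]                 # top-front and top-rear pairs

layout = EAR_LEVEL + LFE + HEIGHT
print(len(layout), "speakers:", layout)  # -> 12 speakers
```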
On that second point, I received a lot of technical advice from Focal (speakers) and Audient (audio interfaces) as well as the techs at the music superstore Sweetwater. Since it’s unusual for an independent musician to build an Atmos studio for their personal use (compared to hiring it out to clients and record labels for Atmos remixes), Focal wrote up an interview with me, and some magazines – such as Mix (page 16 of the magazine, and 18 of the online viewer) – followed it up with their own articles. Mix in particular seemed fascinated that I had both Atmos and quadraphonic monitoring areas – the latter being for my live performances, roughly half of which are in quad.
This also led to me being interviewed by music industry veteran Bobby Owsinski. I read his book Music 3.0 (now up to Music 4.1) before I started releasing music again a few years ago; I have also taken his online courses on mixing. We had a wide-ranging conversation covering modular synths, studio design, immersive audio, and more. That podcast is scheduled to go live on July 1. I’m writing this before then, so I don’t have a direct link, but you can hear Bobby’s highly informative and entertaining Inner Circle podcasts at bobbyoinnercircle.com, or via Apple Podcasts, Amazon Music, YouTube Music, Mixcloud, Spotify, Deezer, TuneIn Radio, or RadioPublic.
Oh, and Focal also added me to the Immersive section of their Focal Pro Experience website. Most of the others featured there are Grammy winners and the like, so this shows the power of doing something different – you have a better chance of standing out.
This all led me to update the home page of the Learning Modular website to share that I talk about music creation, performance, and recording – not just tweaking the knobs on modular synths. Like many modular users, I have been on my own journey from wanting to make cool sounds to wanting to make cool music – which means learning how to compose, record, perform, and release the results of my modular knob-tweaking. I will be updating the Glossary to include more recording terms, plus creating a Products page for my creations such as Chaos Clip (available on eBay and Amazon).
Patreon Updates
Along with the Learning Modular website, my Patreon has also expanded to cover live performance, studio, and compositional ideas in addition to modular synthesizer patching tricks. Here are some of the new posts I’ve written since the previous newsletter:
- Tales From the Road #04: Trust – But Verify (the venue’s sound system): Recounting sound problems I’ve had at recent gigs, and how I now prepare so they don’t bite me again. (for 1v and above subscribers)
- Analog Waveforms: Visual vs. Aural “Perfection”: Tim Shoebridge recently posted a video going over the visual and aural differences between waveforms of the same name on the Moog Matriarch and Muse synthesizers. This led me to talk about how those differences arise, and the important difference between something we see and what we actually hear. Some of the details discussed include where the common analog waveforms originally came from, and the difference between DC and AC coupling when it comes to waveshapes. (for 5v and above subscribers)
- Building an Electronic Music Studio 01: Initial Plans: I’ve started documenting in detail the process of designing, building, wiring, and tuning my new studio. Although not everyone has the luxury of building their own space from scratch, there will be ideas throughout that can be applied to even bedroom studios – after all, that’s what I recorded in for years! (free to everyone)
- Spatial Audio 04: Moving a Project from Stereo to Quad: You have a set you’ve created in stereo, and now you have the opportunity to play it in quad: What changes and decisions do you have to make? I start with the basic terminology of spatial audio (including speaker numbering and “bed” versus “object” channels), discuss how to conserve your resources (be they modules or CPU cycles), and share decisions I made in moving a recent piece from stereo to quad. (for 5v and above subscribers)
As I’ve said before, if you want to take your modular music experience to the next level, I humbly suggest you really should be a subscriber. There are roughly 500 posts in the archives now, all of which you get access to from day one of your subscription, including during the seven-day free trial – so check it out for yourself and see if you agree. I’ve recently restored the annual subscription option; it’s like getting 12 months of access for the price of 10.
Upcoming Events
July 27, 2-7:30 PM, Paradise Lost album release party & concert, Prismajic, Lakewood, Colorado
This is shaping up to be a very special event: Chris Cardone of Luigi’s Modular Supply has found a wonderful venue called Prismajic in the greater Denver area which has a pair of fantasy interactive rooms (think Meow Wolf), an owl-themed bar, an additional dedicated performance area, and a restaurant. (It’s also in a shopping mall, with plenty of free parking plus other restaurants!)
Performers will include Monoscene and Amra the White Lion in the interactive areas, Synth Bod in the bar, and Meridian Alpha and myself “in the round” with quadraphonic sound in the main performance space. There’s even going to be a synth petting zoo! I’ll also have copies of my new album (plus the others) for sale. Tickets are $10 in advance; $12 at the door. I expect the event to sell out, so don’t wait too long…
September 6, Knobcon Chill-Out Room, Schaumburg, Illinois
It has become an annual tradition for me to premiere a new ~20 minute piece at the Knobcon industry convention. In addition to playing the Saturday evening show, I am also curating the performers for the Saturday afternoon show – if you are interested, use the Contact form below to get in touch.
September 13, 8 PM, The Gatherings, Philadelphia, Pennsylvania
I am very proud to be part of The Gatherings concert series this fall at St Mary’s Hamilton Village, 3816 Locust Walk, Philadelphia. Orbital Decay will be the opening act, and then I will play an extended set.
When done, I pack up and move to WXPN radio and play a live set on Chuck Van Zyl’s famous Star’s End radio program, starting at 1 AM (Eastern). It should be quite a day (and night)! I’m really looking forward to it.
September 21, 3-4:30 PM, Sigal Music Museum, Greenville, South Carolina
I will be giving a talk about the evolution of electronic music instruments – particularly in the context of acoustic instruments (of which the Sigal Music Museum has an amazing collection), which have been around for much longer – followed by a performance.
October 9-11, Wavetrails Festival, Albuquerque, New Mexico
Last year, I had the honor of headlining the opening night of the inaugural Wavetrails Festival – part of the larger Rising High Arts Fest – in Albuquerque, New Mexico during their annual Balloon Fiesta. The first Wavetrails had standing room only both nights (and sold out opening night), so we’re being upgraded to a larger room, and will be staging more events. I will be playing again – probably on the opening night – as well as giving a talk about performing electronic music live. Clicking the link above will take you to last year’s Wavetrails web site for now; it will be updated as the date gets closer.
I also have potential gigs lined up in Charlotte, North Carolina on September 19; Colorado City, Colorado on November 8 or 9; and Albuquerque, New Mexico on December 6. I’ll post details once those firm up.
And also, don’t miss my appearance on Bobby Owsinski’s Inner Circle podcast. It goes live on Tuesday July 1, and can be found at bobbyoinnercircle.com, or via Apple Podcasts, Amazon Music, YouTube Music, Mixcloud, Spotify, Deezer, TuneIn Radio, and RadioPublic.
One More Thing…
Kim Bjørn – my co-author on Patch & Tweak, and the person behind Bjooks – has created an online service called Patch & Tweak Club. The Core subscription (7.99 Euro/month, or 79 Euro for the year) gets you online access to the Bjooks titles Push Turn Move, Pedal Crush, Patch & Tweak, Patch & Tweak with Moog, Patch & Tweak with Korg, Synth Gems 1, and the Roland book Inspire the Music. Upgrading to Pro (9.99 Euro/month, or 99 Euro/year) also gives you access to articles (some by me!), interviews, and a lot of additional content. The hot tip for the month of July is to look for the green banner across the top and click on the small line of white text in the middle: It will give you 50% off your first year of Pro membership! All tiers come with a three-day free trial as well. Check it out!
All of the above – plus a couple of large projects for a pair of non-profits in the electronic music industry (more about that later this year) – is what caused such a large gap between newsletters. I hope to get back on a faster track; we’ll see how the rest of this year goes.
amused by being an “emerging artist” in his 60s –
Chris
Excellent piece. Truly valuable to read your POV, not just because it matches mine, but because it is quite thorough and clear! Sharing this as a resource for sure.
Love your performance addiction schedule LOL
It’s a case where I really had to sit down for a while and un-pick my own thoughts. It’s easy to have an emotional reaction – this is cool; this is bad – but it really gets to the core of how we navigate through this world as creatives.
I have long proclaimed that I like my augmented lifestyle, not hesitating to Google anything I’m curious about or don’t know the answer to. But there are some things I wish to reserve for myself, such as using photos of my own physical artwork for my album covers.
That AI panel at Electrowave was very interesting, informative, and fun this year with the number of luminaries we had. I felt more like a fly on the wall than a participant.
For my Jamuary this year I experimented with AI on one jam, but not to actually create the audio. I was originally just going to ask it to define the characteristics of Liminal music so I could create a piece in that genre, but then my friend suggested asking it how to patch my modular to create such a piece.
It was an interesting exercise just trying to get it to recognize which modules were in my system (I finally had to list them out by copying the data sheet of my system in ModularGrid), but I did have to make some decisions/choices as it wasn’t very specific at times.
Here’s the video of that jam: https://youtu.be/ALgIlbeRkqE
I’ve described the process in the description of the video and preserved the AI generated text in the linked Google Docs.
I recorded video and audio while I followed the AI-provided patch instructions. I hope to someday edit it and get it uploaded to my channel.
I’m fine with collaborating with machines. Many modular people get new modules and poke & prod them hoping something comes back which inspires them (I call that my “R&D” time). And many have looked to written outside stimulus and guidance for inspiration, to get around creative blocks, or to just try something different – some throw the I Ching or use Patch Deck; I have both Oblique Strategies and Robert Fripp’s Guitar Craft Aphorisms; having AI make suggestions is an updated version of this.
There are certainly some AI prompters who sit back and want the computer to do the work, rather than having to put in the work themselves. But I also know AI prompters who work really hard at crafting and refining prompts – they are fully engaged in the act of creation, and are creative themselves. They’re creative in the way a good art director or studio producer or book editor is, which is different from the person wielding the paint brush, guitar pick, or patch cable.
I’m glad you mentioned “Crafting good prompts is indeed a skill worthy of recognition”.
Your AI-related thoughts also align with what is going on in software engineering. At the company where I work, my VP gave a very interesting talk where he mentioned: “An engineer can give tens of prompts today to create a complete system in a few hours which would’ve otherwise taken weeks (or months) to write by hand. An accomplished engineer can do it in 4 (prompts)!”
Associating the prompt writer’s role with that of a producer seems apt to me. That’s exactly what it is. There is no need to shy away from AI “taking over”. It won’t “take over”; it will only weed out the undue attention that even cruft ends up getting these days. Also, nobody has actually heard JS Bach playing any music – they’ve only got the notations (prompts) that he left behind.
I asked ChatGPT the question ‘Artificial Intelligence in Music: Revolution or Just a Tool?’: its answer was very academic, nothing more 🙂