Generative AI Music System

Algorithmic Music with a Seeded HMM and Stochastic Noise

A demonstration of a prototype generative music system using a variety of techniques, from a seeded HMM to stochastic noise.

The prototype has two generative music systems:

  • A generative controller that uses a hidden Markov model (HMM) to generate new compositions from a seed music database
  • A random music generator using a variety of algorithms, from a wind-chime emulator to stochastic noise
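As a rough sketch of the seeded approach, here is a minimal, hypothetical Java example (not the system's own code): a first-order Markov chain trained on a seed melody. The full system uses a hidden Markov model over a seed composition database; this simplified chain only learns surface note-to-note transitions, but it shows the core idea of generating new material that inherits the character of its seeds.

```java
import java.util.*;

// Minimal sketch (not the original system's code): a first-order Markov
// chain trained on seed melodies, generating a new note sequence.
// A full HMM would add hidden states and emission probabilities.
public class SeededMelodyChain {
    private final Map<Integer, List<Integer>> transitions = new HashMap<>();
    private final Random random;

    public SeededMelodyChain(long seed) {
        this.random = new Random(seed); // fixed seed -> reproducible output
    }

    // Learn note-to-note transitions from a seed melody (MIDI note numbers).
    public void addSeed(int[] melody) {
        for (int i = 0; i < melody.length - 1; i++) {
            transitions.computeIfAbsent(melody[i], k -> new ArrayList<>())
                       .add(melody[i + 1]);
        }
    }

    // Generate a new melody by walking the transition table.
    public int[] generate(int startNote, int length) {
        int[] out = new int[length];
        int current = startNote;
        for (int i = 0; i < length; i++) {
            out[i] = current;
            List<Integer> next = transitions.get(current);
            if (next == null || next.isEmpty()) break; // dead end: stop early
            current = next.get(random.nextInt(next.size()));
        }
        return out;
    }

    public static void main(String[] args) {
        SeededMelodyChain chain = new SeededMelodyChain(42);
        chain.addSeed(new int[]{60, 62, 64, 62, 60, 64, 65, 64, 62, 60});
        System.out.println(Arrays.toString(chain.generate(60, 8)));
    }
}
```

With a larger seed database and weighted transitions, the same table-walk produces longer, more varied material while still sounding related to the seeds.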

The system is built in Java and uses the open-source synthesizer ZynAddSubFX as its sound source.
It was written in 2006, based on research work I did for my Music Masters degree in 2003, and I’m currently porting parts of it to C#/Unity and HTML5/Web Audio.

In 2007 I produced two relaxation music albums, each with four 15-minute tracks, using this system mixed with ambient nature sounds from another generative system. These are currently offline, but I hope to redistribute them again sometime. Here is a track from Album #1:

Generative music systems are a rich field of exploration, and the methods presented here are well known.
I have extended them with some added features, such as:

  • Object database containing seed compositions with metadata
  • More parameters for randomization and variability
  • More experimentation with noise generation algorithms to drive music generation
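On the noise side, one well-known family of algorithms maps a noise source onto pitch choices. As an illustrative sketch (an assumption for illustration, not the system's actual implementation), here is the classic Voss 1/f ("pink") noise algorithm driving notes on a pentatonic scale; 1/f noise is a popular choice for melody generation because it sits between the unrelated jumps of white noise and the sluggish drift of brown noise.

```java
import java.util.Random;

// Sketch (assumed, not the original implementation): Voss's 1/f ("pink")
// noise algorithm driving note choice. Several dice are summed; on each
// step, only the dice whose counter bit flipped are re-rolled, so the
// sum changes slowly, giving an approximately 1/f spectrum.
public class PinkNoteSource {
    private final Random rng;
    private final int[] dice;   // one die per bit of the step counter
    private long counter = 0;

    public PinkNoteSource(long seed, int numDice) {
        rng = new Random(seed);
        dice = new int[numDice];
        for (int i = 0; i < numDice; i++) dice[i] = rng.nextInt(6);
    }

    // Next noise value in the range 0 .. 5 * numDice.
    public int nextValue() {
        long changed = counter ^ (counter + 1); // bits that flip on increment
        counter++;
        int sum = 0;
        for (int i = 0; i < dice.length; i++) {
            if (((changed >> i) & 1) == 1) dice[i] = rng.nextInt(6);
            sum += dice[i];
        }
        return sum;
    }

    // Map the noise value onto a pentatonic scale starting at middle C.
    public int nextNote() {
        int[] scale = {60, 62, 64, 67, 69, 72};
        return scale[nextValue() % scale.length];
    }

    public static void main(String[] args) {
        PinkNoteSource src = new PinkNoteSource(42, 4);
        for (int i = 0; i < 8; i++) System.out.print(src.nextNote() + " ");
        System.out.println();
    }
}
```

Swapping this source for a white-noise or random-walk generator (and exposing the choice as a parameter) is one simple way to get the kind of randomization variability described above.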

Potential uses of such a system are varied:

  • Affective computing – detecting user emotions to drive system feedback via music mood matching
  • Art and music therapy
  • Music education

Some screenshots are below, followed by a video that briefly explains both systems.

Seeded HMM Generative Music Generator


Stochastic Random Generative Music Generator


Check out the video for a more in-depth explanation.

You can read more about my music research here:

Sound/Music R&D

 

Spatial Music R&D

Sound/Music R&D

I completed a Music Masters degree in 2002, focused on REV (Real Electronic Virtual) instrument design and performance (see also http://www.linseypollak.com/past-projects/rev-real-electronic-virtual/).

I’m deeply interested in music technology and new music genres across all boundaries. I’m also a producer/developer of educational apps, such as the benchmark harmonica training app HarpNinja. As a Creative Technologist I also work with different technologies across many areas.

Update 2018-06-02: Accepted into the Oculus Start Developer Program.
Update 2018-01-09: Selected by Microsoft for the Windows Mixed Reality Developer Program.

I’m currently exploring new areas in music and sound with VR/AR/MR, AI/ML, and spatial audio/music through various projects and collaborations, such as:

Groove Pilot

Wings

Music Flow

Spatial Music Visualizer

Immersive Audio and Musical AI

Generative Music System

Since 2005 I’ve performed online in virtual worlds like Second Life, playing solo (with and without my robot backing band) and jamming in real time with musicians located in different countries. You can check out my live online music performance website at http://komuso.info/, and you can also read more about it in the Streaming Live Music project.

You can hear some of these musical explorations on hearthis:

You can hear some of these musical explorations on Soundcloud:

Here’s a video demonstrating live networked music performance between myself in Tokyo and fellow SL musician Hathead Rickenbacker in Toronto, Canada.

Generative music systems are an interesting area that I’ve done a lot of research and experimentation in as well.
Moozk was an experimental audio-visual app I developed for public use: a Wacom pen tablet drives a painting application that also produces generative music as you draw. Kids seemed to love it.

Blue Noise was an experimental audio-visual performance using an EBow, slide guitar, digital effects, and a PC running audio-responsive, custom-designed graphics.

I’ve given some talks and performances about live music in Second Life:

  • Mixed Reality Komuso On The Future of Music Online
  • SynaesthAsia: Dynamic, Live Music/Visual Spectacular from Musicians in Two Countries

Wings

What is Wings?

Wings is a Therapeutic VR prototype with interactive music and procedural visuals/camera.

The environment (time of day, speed, cloud cover, etc.) and the adaptive music score (high-quality, emotive, cinematic orchestral) are driven by AI or respond to bio/neuro feedback.

Update 2018-01-09: Selected by Microsoft for the Windows Mixed Reality Developer Program.

I did a talk about the development approach at VR Hub Tokyo Year-End Meetup Vol.4 | Health & Fitness with VR and AR titled “Design Framework for a Therapeutic VR app”.

What is Therapeutic VR?

Therapeutic VR uses VR in a variety of therapeutic contexts to increase the efficacy of targeted treatment protocols in areas such as:

  • Pain Management
  • PTSD
  • Stress Reduction
  • Exposure Therapy
  • Rehabilitation and Physical Therapy
  • and more

Watch some examples at https://www.virtualmedicine.health/videos

Related projects are:

Some video from the presentation:

VR app test version video:

Groove Pilot

Groove Pilot is a unique immersive and interactive spatial music experience for both desktop and VR.

Check the website out here: https://groovepilot.ninja/

As part of my continuing series of explorations around spatial music, I threw together this prototype to play with some ideas about:

  • Spatial music
  • Real-time mixing
  • Interaction

Groove Pilot

How to play?

  • You must use headphones or earbuds to experience the spatial music.
  • Collect all 14 sound orbs to complete the level.
    • Collect each sound orb before you run out of fuel.
    • Each one collected also gives you a bonus fuel boost.
  • Each sound orb you collect enables another layer of the downtempo chill music track to play.
  • Each layer of music is located in 3D space, so it is mixed relative to all the others as you move around, dynamically changing the music mix and experience on the fly.
  • In a sense it’s a generative music piece controlled by the player’s movement through 3D space.
  • Each sound orb/music track/stem is also visualized procedurally in different ways, with real-time signal processing driving reactive graphics.
    • Note: Sound visualization currently only works in the desktop version; a fix for the WebGL browser version is in progress.
  • Best played with an Xbox 360-style controller to fly:
    • Left stick: pitch/yaw
    • Triggers: roll
    • Button A: start/brakes
Try it out: https://groovepilot.netlify.com (note: this is WebGL and works best in Chrome/Firefox with minimal tabs open).

Research and Development


Related projects are:

Sound/Music R&D

A web-playable, non-VR prototype was submitted as part of Procjam 2017: https://itch.io/jam/procjam/rate/194089

Technical details

Built in Unity 3D. Desktop only, as Unity WebGL is currently not supported on mobile.
