I’m deeply interested in music technology and new music genres across all boundaries. I’m also a producer/developer of educational apps such as Fopra (an MVP, now retired) and the benchmark harmonica-training app HarpNinja. As a Creative Technologist I work with many different technologies across many different areas.
Since 2005 I’ve performed online in virtual worlds such as Second Life, playing solo (with and without my robot backing band) and jamming in real time with musicians located in different countries. You can check out my live online music performance website at http://komuso.info/, and you can read more about it in the Streaming Live Music project.
Here’s a video demonstrating live networked music performance between me in Tokyo and fellow SL musician Hathead Rickenbacker in Toronto, Canada.
Generative music systems are another area I’ve researched and experimented with extensively. Moozk was an experimental audio-visual app I developed for public use: a Wacom pen tablet drove a painting application that also produced generative music as you drew. Kids seemed to love it.
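To give a flavour of how a drawing gesture can drive generative music, here is a minimal sketch that maps a pen sample (position and pressure) onto a note in a pentatonic scale. This is a hypothetical illustration of the general technique, not the actual Moozk code:

```python
# Illustrative sketch (not the Moozk implementation): map normalized
# pen-tablet samples to a MIDI note and velocity.

PENTATONIC = [0, 2, 4, 7, 9]  # scale degrees as semitone offsets from the root

def stroke_to_note(x, y, pressure, root=60, octaves=3):
    """Map a pen sample (x, y, pressure each in 0.0-1.0) to (note, velocity).

    x selects the scale degree, y selects the octave, pressure sets velocity.
    """
    degree = PENTATONIC[min(int(x * len(PENTATONIC)), len(PENTATONIC) - 1)]
    octave = min(int(y * octaves), octaves - 1)
    note = root + 12 * octave + degree       # MIDI note number
    velocity = max(1, min(127, int(pressure * 127)))
    return note, velocity

# A firm stroke in the middle of the tablet:
print(stroke_to_note(0.5, 0.5, 0.8))
```

Constraining pitches to a scale like this is a common trick in generative toys for the public: whatever the user draws, the result stays consonant.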
Blue Noise was an experimental audio-visual performance using an eBow, slide guitar, digital effects, and a PC running custom-designed audio-responsive graphics.
I’ve given some talks and performances about live music in SecondLife:
Technology: Unity
Team: Me and my 6-year-old son as co-designer
Process: It started out as a small project I put together to test some Affective AI plugins I was evaluating for client projects. Affective AI is software that responds to the user’s emotional state through different sensors and emotive-state pattern detection.
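As a toy illustration of the idea (hypothetical, not one of the plugins I evaluated), affective systems often place sensor readings on a valence/arousal plane and let the game react to the resulting state:

```python
# Toy sketch of affective-state detection: classify normalized sensor
# readings into a quadrant of the valence/arousal model, then pick a
# game response. Hypothetical illustration only.

def classify_emotion(valence, arousal):
    """valence and arousal are assumed normalized to [-1.0, 1.0]."""
    if arousal >= 0:
        return "excited" if valence >= 0 else "stressed"
    return "calm" if valence >= 0 else "bored"

def game_response(emotion):
    """Example adaptation: adjust difficulty and music per detected state."""
    return {
        "stressed": "ease difficulty, soften music",
        "bored": "raise difficulty, add events",
        "excited": "keep pace",
        "calm": "gently ramp up",
    }[emotion]
```

Real plugins do far more sophisticated pattern detection, but the response side of the loop looks much like this: detected state in, gameplay adjustment out.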
While testing these out, I thought of a side project: doing some game development with my son, to show him a little of the development process and engage him in some high-level design and playtesting. We talk a lot while doing that, so it’s great for his English skills too ;-)
The goal was to develop something fun, quickly, with high-quality art, music, and sound. We developed the core mechanic first (the toy), then built some gameplay around it with a little story backdrop. For the art style, I was originally looking to do something quite abstract, but we took a U-turn into the current style instead. I spent a bit of time choosing the music and sound FX, as these are a key part of the experience as well.
I leveraged the Unity Asset Store heavily for some key subsystems and art, most of which I already had. It was very much an iterative process: explore what might work, then implement and test. Total project length was ~4 weeks.
It’s quite a complex little game under the hood. I surfaced a few gnarly bugs in some of the third-party subsystems I used, which involved some back and forth with the developers. Fortunately I chose packages wisely, and these were all great devs who support their products really well.
I decided to release it into the wild for some early alpha testing, to tune the gameplay, difficulty progression, and pacing with input from a wider audience.
Feel free to have a play and let us know your thoughts!