May 2020

The Making of Thingamagig

Final product first, then the story

Genesis

In early 2018, I got back into guitar for the first time since high school. Crypto was at the peak of its bubble, so I splurged on some nice guitars, a Mesa Boogie combo amp and a full pedalboard. Also I quit my (NYC) programming job and moved back to my home state of Kentucky.

As I got back into music, the lifelong bucket list goal of playing a live show (despite middling guitar and singing skills) started to gain traction in my head. I knew I couldn't compete with the average performer with an acoustic guitar and my vocal cords, so I thought I could up my game (and distract the audience from my lack of talent) by adding extreme automation. I would come up with a system to automate my guitar tones, lights and vocal effects along with a set of backing tracks.

Lighting automation

For my first stab at lighting automation, I bought a standard 4-port dimmer switch, plugged in some lamps from around the house and connected the MIDI drum track of an Ardour session to it via QLC+ (both Ardour and QLC+ are amazing open source projects). Here is the result:
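
There's no code involved in that hookup -- QLC+'s MIDI input profile handles the note-to-dimmer-channel mapping -- but if a sketch helps, the idea boils down to something like this (a toy example using the mido Python library; the port names and note numbers are made up, not from my setup):

    import mido

    # Toy sketch of the drum-to-lamp mapping (QLC+ does this internally via its
    # MIDI input profile -- this only illustrates the idea).
    DRUM_TO_LIGHT = {36: 60, 38: 62}  # GM kick -> "lamp 1" note, GM snare -> "lamp 2" note

    with mido.open_input("Ardour drum track") as drums, \
         mido.open_output("To QLC+", virtual=True) as lights:
        for msg in drums:
            if msg.type in ("note_on", "note_off") and msg.note in DRUM_TO_LIGHT:
                # Forward every drum hit as the note a dimmer channel listens for.
                lights.send(msg.copy(note=DRUM_TO_LIGHT[msg.note]))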

For guitar tones, I was unaware of (and not really looking for) the open source options available and settled on a hardware solution instead, a Line 6 Helix and a Line 6 Variax modeling guitar. For vocals, a TC Helicon VoiceLive. Yes, that represents almost $3000 alone, but keep in mind: at this point I just wanted to play a show for myself and had zero intention of making this project something many people could use.

I set up several sessions of famous songs with lighting tracks and MIDI automation signals to control my lights, Helix and VoiceLive. Here is an example of this laptop+equipment solution:

In August 2018, I played both of my live shows at the Chevy Chase Inn in Lexington, KY, and they went fine. I'm not a natural performer, that's for sure. Here's a bit of grainy cell phone footage. You can see the Christmas lights blinking and the disco ball changing colors in time. The song only has one guitar tone and no looping, so there's no mid-song automation, but the laptop told the Helix (which told the Variax) to drop the tuning 2 semitones, into my vocal range (ish). If you listen closely, you can also hear the laptop tell the VoiceLive to kick in for vocal harmonies.
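
All of those mid-song pokes are just ordinary MIDI messages on each device's channel. Roughly, and with invented numbers (the real channels and presets come from the Helix and VoiceLive MIDI settings, and in the actual rig Ardour's automation lanes fire them at the right bars, not a script):

    import mido

    # Illustrative only: the port name, channels and preset/CC numbers are made up.
    HELIX_CH, VOICELIVE_CH = 0, 1

    with mido.open_output("USB MIDI interface") as out:
        # Song start: pull up the right guitar tone and vocal patch.
        out.send(mido.Message("program_change", channel=HELIX_CH, program=5))
        out.send(mido.Message("program_change", channel=VOICELIVE_CH, program=12))
        # Later, at the chorus: switch the VoiceLive's harmonies on.
        out.send(mido.Message("control_change", channel=VOICELIVE_CH, control=50, value=127))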

Sharing with everybody

Like I said, the shows weren't horrible, but, honestly, I didn't enjoy it. I can't remember lyrics or chord progressions to save my life and was constantly straining my eyes to read the Ardour session screen on the laptop in front of me. So I wouldn't be surprised if that was the end of my short-lived live musical career.

But like any good open source adherent, I wanted to share what I'd built with others. So I ditched the Helix and Variax for Guitarix (eliminating the major up-front cost issues), put some session files on GitHub and posted some video examples to a few forums. I even posted a finished video of "Feel Like Making Love" to Hacker News.

Nobody cared.

It was still ludicrously difficult to set up. Laptop + low-latency Linux + Ardour + QLC+ + precise connection and configuration... I should have my head examined for ever thinking anyone would do this.

It had to be hardware...

In order to remove this steep (nigh impossible) up-front system configuration from the equation, I started experimenting with the new Raspberry Pi 4 in the summer of 2019. Could it play back a simple MIDI track while also shaping a guitar tone with plugins? Could it do it with acceptable latency?

Turns out that after some overclocking, strict session creation guidelines and precise tuning, the answer was "yes".
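
The exact settings vary by interface, but the arithmetic behind "acceptable latency" is simple enough to sketch with typical (not my actual) JACK-style numbers:

    # Back-of-the-envelope latency budget with typical (not the device's actual) settings.
    sample_rate = 48_000   # Hz
    frames = 128           # frames per period
    periods = 3            # periods of buffering, common for USB interfaces

    one_way_ms = frames * periods / sample_rate * 1000
    print(f"~{one_way_ms:.0f} ms of buffering out, ~{2 * one_way_ms:.0f} ms guitar-in to speaker-out")
    # 128 * 3 / 48000 = 8 ms, so roughly 16 ms round trip -- within the range most
    # players can live with. Miss a deadline (an xrun) and you hear a click.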

"... when there's nothing left to take away."

Lights are trouble. They are hot, mechanical and expensive. Gone (for now).

Control interface

Obviously, requiring a monitor, keyboard and mouse for this system was essentially a non-starter. But how would someone control the device? From a web interface? That seemed nearly as bad.

No sooner had I ditched the lighting aspect of the project than a lightbulb went off in my head. (← cheese!)

Enter Alexa: (Yes, those are PJs.) (And yes, I called it "Nexus" at this point in development.)

This is the moment when I knew I was onto something; when it went from "cool" to "ok this is actually very useful". Alexa already knows all song and artist names and is handsfree -- an ideal combination when requesting backing tracks and playing guitar.

Now I'm as skeptical of IoT as the next guy. https://twitter.com/internetofshit has half a million followers for a reason. But in this case, it really works well.

The innards

It's realtime Raspbian (this kernel branch: https://github.com/raspberrypi/linux/tree/rpi-4.19.y-rt). Headless Ardour (driven by its Lua scripting). A mix of Guitarix and other amp sims. Proprietary cab sim IRs. Various other effects packages like rkr and Ardour-native plugins.

A person speaks to Alexa, Alexa calls a series of Lambdas (basically the not-yet-public API), and those publish MQTT messages to the device, which is tied to the user's Thingamagig account, which in turn is linked to their Alexa account.
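
A minimal sketch of that Lambda-to-device hop, assuming AWS IoT Core as the MQTT broker (the topic name, payload shape and device lookup below are invented stand-ins; the real API isn't public):

    import json
    import boto3

    # Minimal sketch, assuming AWS IoT Core. Topic, payload and device lookup
    # are invented stand-ins for the not-yet-public API.
    iot = boto3.client("iot-data")

    def handler(event, context):
        # Real flow: resolve the user's linked Thingamagig account to a device id.
        device_id = "demo-device-123"
        command = {"action": "load_song", "song": "feel like making love"}
        iot.publish(
            topic=f"thingamagig/{device_id}/commands",  # hypothetical topic
            qos=1,
            payload=json.dumps(command),
        )
        return {"ok": True}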

Tying it all together

Screens - It became obvious in early testing that using a screenless Alexa device made for a bad experience. It was easy to get lost without a series of visual menus. And Alexa wants to default back to "normal" operation (exiting from the skill). It takes some special tinkering with APL (Alexa Presentation Language) and long-running commands to make this work. Figuring that moving towards Echos with a screen is the macro trend anyway (and probably Magic Leap-y BCI stuff later on), I decided this was an acceptable requirement.
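
Roughly, the Alexa-side trick comes down to two fields in the skill's JSON response: an APL RenderDocument directive for the on-screen menus, and shouldEndSession left false so Alexa keeps listening instead of closing the skill. A bare-bones sketch, with the menu document itself left as a placeholder:

    # Bare-bones sketch (not the project's skill code) of a response that shows
    # an APL screen and keeps the session open instead of exiting the skill.
    def build_response(menu_document: dict) -> dict:
        return {
            "version": "1.0",
            "response": {
                "directives": [{
                    "type": "Alexa.Presentation.APL.RenderDocument",
                    "token": "thingamagigMenu",   # hypothetical token
                    "document": menu_document,    # placeholder for a real menu layout
                }],
                "shouldEndSession": False,  # don't drop back to "normal" Alexa
            },
        }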

Content - Needed song rights. Hired lawyer. Got song rights. Only took $$$ and many months. (Ugh.) But this was yet another "I'm on to something" moment. Buying Karaoke-style composition rights isn't nearly as expensive as buying master (the actual famous recording you hear on the radio) rights. Some other "play along" solutions on the market circumvent this by incorporating Spotify (et al). But that's not what Thingamagig is going for. Our goal is handsfree automation. The system needs to know about the composition to control all your shit. And nobody wants to get out a tablet or phone to select songs. Furthermore, with MIDI, you can easily swap drum kits, decrease speed for learning purposes, or change the key to fit your current tuning and vocal capabilities. And the MIDI file size is about 1/50th of a normal webpage in 2020. Creating all these sessions is going to be a huge lift, but necessary to get where we want to go.
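
To make the MIDI flexibility concrete, here's a small hypothetical example (using the mido Python library, which isn't necessarily what the device uses) that drops a backing track a whole step and slows it to 80% for practice:

    import mido

    # Hypothetical example (not the device's code): transpose down a whole step
    # and slow to 80% speed.
    SEMITONES = -2     # e.g. match a guitar tuned down a whole step
    SPEED = 0.8        # 80% of the original tempo
    DRUM_CHANNEL = 9   # GM drums (channel 10, zero-indexed); leave them untransposed

    mid = mido.MidiFile("song.mid")            # placeholder file name
    for track in mid.tracks:
        for msg in track:
            if msg.type in ("note_on", "note_off") and msg.channel != DRUM_CHANNEL:
                msg.note += SEMITONES
            elif msg.type == "set_tempo":
                msg.tempo = int(msg.tempo / SPEED)   # more microseconds per beat = slower
    mid.save("song_practice.mid")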

Case - At first I had someone "design" (i.e. download a model) and print something for me, but that process sucked. So I bought a 3D printer and became a CAD guy, I guess.





Tones-only mode - In order to make this a viable product, the backing tracks have to be on a subscription basis. That meant I needed to offer something of solid value without the subscription. So I worked hard to create the amp simulation and effects navigation, which you can see in the final video.

IoT stuff - This is way more challenging than I anticipated, but AWS provides some solid tools that are satisfying the immediate need.

Final product

This is a slightly more promotional version of the video.
(Updated Nov 2020 with new injection molded model superimposed on the same video. Did you notice? h/t video wizard Daniel Benedict https://twitter.com/freshdannyb)

And donate to Ardour and Guitarix!