
thetechnobear's Recent Posts

Thu, Sep 07, 2017, 05:01

@Sanne.... this works as expected for me in Live 9.7.4 / Mac OS X.
i.e. in arrangement, start record and it tracks all changes; when you switch presets it records the new values, and subsequent changes...
and no need to hit re-arm automation.
(it works identically in Aalto as for other plugins, e.g. u-he Diva/ACE)

one thing you do need to be careful of: when you play back, you must start Aalto in the same state as when you recorded... which Randy alluded to in his OP.

you can achieve this in one of two ways

  • use a program change on the recorded clip (MIDI Programs folder)
  • when you start recording, switch to your starting preset (and hence save all automation values)

as far as I can see this achieves what you want... i.e. start recording, select a preset, alter values to get a nice sound, repeat many times... then play back, find the place you liked... stop, and hit 'save preset as'

note: on playback it will NOT change the preset name; this is because ableton is not recording the preset change as such, just the values of the preset.
... btw: I didn't try changing presets via Program Change messages; I'd assume that would be recorded. (though you'd have to be careful between sessions NOT to change your program ordering)

@randy, I don't think aalto needs to track changes; surely this is exactly what DAWs do with automation, so why replicate the functionality... the only thing I'd prefer is an easier/quicker way to assign program changes, perhaps just assigning existing presets to different bank/program numbers (rather than copying presets around).

Wed, Aug 02, 2017, 06:48

ideally what I'd like is...

a) an updated Aalto with more voices, but apart from that the same.

b) a completely new instrument, kind of similar to Aalto but not an Aalto 2, that would not need to be patch compatible, i.e. not limited/shackled to what Aalto already does (so well)

(u-he are doing this with Zebra 3: they have already said it will not be compatible with Zebra... I think they said they will allow imports of patches, but they may/will sound different)

I do agree about Kaivo; I don't use it as much as I thought I would (and nowhere near as much as Aalto) because its resonance peaks are really hard to contain... I find it has quite a small zone of usability, perhaps within only an octave. (pitch can wander too, but I think that's part of the beast)

anyway, I'd love to see another out-and-out synth from ML...
also perhaps we can skip the sequencer, so we can have a bit more UI real estate for voice control... I know a lot of people use the sequencer, but is it not easier to just use the sequencing/automation facilities already present in every DAW (or use a MIDI sequencer VST)?

Tue, Jul 25, 2017, 02:58

if I want to build, is the 'embedded' branch the latest and greatest? and are all the necessary changes to madronalib already checked in?

also, do you know if the cpu load has dropped with the new tracker?

generally, I'm thinking about updating my fork, to get my midi goodies... and also the MEC repo, so I can test it again on my Bela.

Sat, Jul 22, 2017, 15:19

so I got distracted and left the SP turned on for about an hour or so, untouched... when I came back there were lots of green patches in the middle, and I had to turn it up to 0.20 to get rid of them all.
but then by chance I decided to hit 'recalibrate'... and they all disappeared, even when turned down to 0.10... is this what you would expect?

Mon, Jul 17, 2017, 02:33

this is all great news :)

if you can improve the boundaries, that will make a huge difference for me; in the previous version I rarely used the top/bottom rows, as I couldn't be 100% sure they'd trigger the correct note.

when doing the boundaries, perhaps you can also work out (or even estimate) the usable area of the top and bottom rows, so that Y can be scaled to it...
e.g. if only 50% of the top row is usable, perhaps it's better to have Y run 0..1 over that 50%?
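to illustrate the idea, here's a minimal sketch of remapping Y over an estimated usable band of an edge row (the 0.5 lower bound is just the hypothetical "50% of the top row" from the example above, not a measured value):

```python
def rescale_y(y, usable_lo=0.5, usable_hi=1.0):
    """Remap a raw Y value so the usable band of an edge row
    spans the full 0..1 output range.

    usable_lo/usable_hi are hypothetical calibration values: the
    fraction of the row that actually responds (here the top 50%).
    """
    span = usable_hi - usable_lo
    # clamp into the usable band, then normalise to 0..1
    y = min(max(y, usable_lo), usable_hi)
    return (y - usable_lo) / span
```

so a touch at the middle of the usable band (raw 0.75) comes out as 0.5, rather than being squashed into the top of the range.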

another observation: you talked about x/y and turning up to x10, and then using lo thresh to get rid of green patches... so I did this (though even without it I wasn't getting false triggers). I turned it up to about 0.12-0.13 (default is 0.10) and they disappeared, but I noticed after about 5 minutes of playing they seemed to be appearing again, so I turned it to .14, gone, then a few minutes later, .15 etc...
but I got distracted and turned off the SP/SP app... came back after a while, and noticed I could turn it down to .12/.13 again.
the green patches are always in the same area (centre, most played/worn?)... I couldn't really figure out if this was a 'software' issue, or if the surface is 'warming up', perhaps not returning to the 'same point'.


(as I said, it wasn't causing any false triggers, I was merely following your instructions above... so perhaps this is expected, nothing to 'worry' about)

Sun, Jul 16, 2017, 07:39

This version definitely feels like a great improvement - congratulations, thank you for putting in the effort to bring about these improvements.

generally, I think close touches register much better, it's a more consistent feel, and it's great to not have the calibration step :)

a couple of things:
midi is still a bit hit-and-miss for me; in particular the velocity is still a bit on/off... also if you quickly alternate between 2 different notes, sometimes this is treated as a PB rather than new notes (the lopass seems to have no effect on this)... in the x/y view you can see it's interpreted as a slide rather than a new touch.

I usually use OSC, and pressure rather than noteon/envelopes, so it's not a big deal for me, but for use with non-ML mpe synths, improving these would help a lot.

calibration: as I said, it seems to work well without it; the only exception is that I notice the 'line' between rows 1/2 and 4/5 is sometimes not detected as straight. so if you play a note high on row 2, sometimes it plays as row 1, or a note with low Y on row 4 detects as row 5.
obviously this is only an issue if you play with 5 rows of notes, which I like to do, to give me a bit more range (of notes).
... what would be nice is if we could draw the row boundaries with our fingers (in a calibration mode); then the software could use these to find the notes, and interpolate the Y position.

it is only an issue in a few places on the board, so I currently play around it.

as you say, I'm also getting used to the fact that if you play one touch, hold it, and add another touch close by (so a chord), it does require more pressure... I guess it will take a short while to get used to this, but it's definitely worthwhile for the other advantages.

as I said, I'm really grateful for the effort you have put into this; it's hard to tell with 'feel', but from my initial playing it does feel like a big improvement.

Big thanks

p.s. let us know when the source code is up to date for the SP and ML repos, as I'd like to update my fork to use the new code :)

Fri, Jun 30, 2017, 02:19

"I'll pause the Soundplane work"

oh, don't do that; I've been holding my breath, and I'm going to implode if it doesn't arrive soon ;)

Sun, Jun 25, 2017, 12:03

yes, I've been running some of the soundplane code on Bela.

basically I've taken a subset of madronalib and soundplanelib, and put it in the project I'm working on (called MEC).

you can find it here,

its under mec-api/devices/soundplanelite

I've found the Bela is not really quite powerful enough to run the full touch tracker... it's close, but not quite there. it is, however, ok for the raw data.

However, Randy is working on a new version of the touch tracker code, which I believe he previously said should take less cpu, so once that is ready I will move over to the newer code.

note: this is with a Soundplane Model A; I've no idea how compatible (or not) this is with the DIY version.

Sun, Jan 22, 2017, 14:48

Soundplane software

any news on further development of the Soundplane software, in particular the touch tracker? are you likely to be looking over that code base any time soon?

I've been doing some more work on MEC, my project which provides a standalone solution for the Soundplane (and Eigenharp), basically turning things like the rPI/BBB into 'intelligent dongles', so the Soundplane can be more like 'a standalone instrument' rather than a computer controller.


it currently works with Eigenharps and the Soundplane, and connects to MPE devices (I'm using the Axoloti to make it computer independent :))

if you're back into 'Soundplane' mode, it'd be great to discuss a few thoughts I have, as I'm getting a little too much latency with the Soundplane due to the CPU requirements of the touch tracker.

so I'm not sure if I can get some performance gains, or really need to move to something more powerful; the ODROID-C2 is quite a tempting candidate, given the rPI3 won't work.

anyway, I know there's not much point in discussing this unless you're actively looking at the Soundplane software... since I think it's quite a while since you last looked at that code base.

Mon, Jun 19, 2017, 15:32

Sounds like an excellent challenge; I look forward to seeing the results.

Wed, Jun 14, 2017, 10:54

I like the idea of this with MPE, but how would it work?

it seems to me that if you have mono voices, and then a separate component taking these as input and processing the same MPE messages as the synth, then you are relying on some kind of fixed allocation, e.g. channel 2 = voice output 1, channel 3 = voice output 2.

this works in a simplistic way, but most mpe-enabled synths do not have this fixed relationship... the voice number is not related to the channel number directly (of course it's tracked, so that messages from that channel are routed appropriately)

I think this is done for 2 reasons:

  • the midi channel range may be greater than the number of voices, and it may also be using rotating channels. e.g. it's set up for channels 2-15 but only hits a synth with 4 voices... it should still work, you're just limited to 4 voices.

  • mpe allows for polyphony on one midi channel. e.g. say you have an mpe zone with only 4 midi channels allocated; it still allows you to play (e.g.) 6 notes, albeit you only get per-note expression on 4 of the notes.

ok, I know aalto doesn't do either of these things, but I certainly think it should do the first one at least.
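to sketch what I mean by the first point, here's a minimal, hypothetical MPE note allocator: channels don't map to fixed voices; instead any free voice is grabbed on note-on and remembered per channel, so per-channel expression still reaches the right voice even when the member channels outnumber the voices. (none of this is Aalto's actual code, just an illustration)

```python
class MpeVoiceAllocator:
    """Minimal sketch of MPE note handling without a fixed
    channel->voice mapping (names are illustrative, not Aalto's API).

    The member channels (e.g. 2-15) can outnumber the synth's
    voices; on note-on we grab any free voice and remember which
    channel it belongs to, so later per-channel expression
    (PB / CC74 / pressure) is routed to the right voice.
    """
    def __init__(self, n_voices):
        self.free = list(range(n_voices))   # voice indices, not channels
        self.by_channel = {}                # midi channel -> voice

    def note_on(self, channel, note):
        if not self.free:
            return None                     # a real synth would steal a voice
        voice = self.free.pop(0)
        self.by_channel[channel] = voice
        return voice

    def note_off(self, channel):
        voice = self.by_channel.pop(channel, None)
        if voice is not None:
            self.free.append(voice)
        return voice

    def expression(self, channel):
        # route PB / CC74 / pressure arriving on a channel to its voice
        return self.by_channel.get(channel)
```

so a controller rotating through channels 2-15 still works fine against 4 voices; the 5th simultaneous note just gets refused (or stolen) rather than everything breaking.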

Wed, Jun 14, 2017, 10:38

oooh, beta... now I'm excited :)

Wed, Jun 14, 2017, 05:10

It's in the first sticky thread on this sub-forum (Soundplane client for Mac)


There should probably be a link to it on the Soundplane product page too.

(Or perhaps I missed it?)

Tue, May 16, 2017, 12:06

Of the USB to MIDI DIN solutions, aren't they mostly USB Devices?

yes, most are, which is why you want to support them by acting as a USB host ;)

Mon, May 15, 2017, 01:25

USB devices need to be hosted; this is not limited to controllers, e.g. a synth that has a USB interface would require a USB host supporting the USB MIDI class.
This is becoming increasingly common, and makes sense for MPE hardware given the bandwidth requirements of the data.

It also largely makes DIN irrelevant, since there are lots of USB-to-DIN solutions which contain flexible routing options; I guess it's convenient, but it takes up rack space.

As for iOS, as long as you are class compliant it's not an issue, it just works... this is how we connect the Axoloti to iOS.

Fri, May 12, 2017, 03:24

Is the midi for Soundplane output? If so, USB MIDI host is better than MIDI DIN.

Thu, Apr 27, 2017, 13:22

excellent work :)
ok, I'll hold off a bit; shout when you're ready... excited here too!

Thu, Apr 27, 2017, 03:00

Hi randy, How is the new touch tracking software going?

I've seen the check-ins on the repo; is it at a stage where it's worth playing with? testing? have you been mainly working on detection, or also on lowering cpu load?

If there is anything I can do to help, let me know

p.s. great to see you having some time for this, thank you for your efforts.

Fri, Dec 04, 2015, 02:52

Synths for Soundplane

I did this list for someone else, but thought it might be valuable here....

so what synths do you use with the Soundplane? and which support MPE etc.?

(I know we all have and enjoy Aalto/Kaivo... but I'm sure many use others, no?)

anyway, this is not an exhaustive list, more the ones I use,
but before we start we should group them into categories:

  • MPE, fully supports MPE (or at least notionally, including PB range)
  • MPE compatible, really this is 'continuum' mode: polyphonic x/y/z via Ch Pres, CC74, PB; may need configuring/scripts etc, not 'automatic'
  • Voice per channel, polyphonic x/y/z but uses different CCs etc
  • Multitimbral, any of these can be used 'as' voice per channel, but some are easier than others, due to how parts can be duplicated/linked etc (I'm not going to list them)


  • Madrona Labs Aalto
  • Madrona Labs Kaivo
  • NI Reaktor (with my blocks/macros in user library)
  • Oscillot (M4L, Ive a modular for this if anyone is interested)


  • Axoloti (hardware)
  • Madrona Labs Aalto
  • Madrona Labs Kaivo
  • Futuresonus Parava (hardware -I don't have it)
  • Softube Modular (via 'RISE' module)

(ok, none are strictly compliant yet, but very close)

MPE Compatible

  • NI Reaktor (with my blocks/macros in user library)
  • UVI Falcon (since 1.0.2)
  • PPG Wavegenerator / Wavemapper
  • FXP Strobe 2
  • Max/MSP

Voice Per Channel

  • U-he Bazille
  • U-he ACE
  • U-he Diva
  • U-he Hive
  • Logic Pro X (various inbuilt synths)


DAWs

  • Cubase Pro (possibly Artist) - per-note expression
  • Bitwig - MPE support
  • Logic Pro X - can host AUs very well, and some built-in synths are multi-channel, but there's no recording facility; you need to use a track per channel or audio
  • Ableton Live - a pain; you need to use a track per channel or audio

BTW: I'd really recommend checking out UVI Falcon; it's turned into a powerhouse for MPE controllers :)

Wed, Apr 05, 2017, 05:15

yes, you'll need the soundplane client... but I'm working on eliminating the laptop ;) see MEC

with this I'm currently able to run the Soundplane on a Raspberry Pi,
which can be battery powered, and which then connects directly to hardware synths... testing with Axoloti and my Virus so far.

it's not yet finished/released, I'm still working on it, but it works; just a matter of finding the time to complete it ;)

Wed, Mar 29, 2017, 04:45

What is a simple host? One user's necessity is another's complexity.
E.g. why a recorder, but no midi clock? What about send fx, given Virta?
Fullscreen would be nice for small screens, but a bit bare for larger ones ;)

One idea: perhaps integrate it into the soundplane app?
(the soundplane tabs could be hidden when not needed)
Then there's no extra app to maintain, and it could make the soundplane act like a standalone instrument (direct communication to the plugin rather than IPC), similar to what Eigenlabs did with EigenD.

Wed, Mar 29, 2017, 04:35

Cool, it's really interesting to hear your progress

Wed, Mar 22, 2017, 03:54

my experience with the BBB/Bela (A9 1GHz running Xenomai) was that it was the higher-level TT code that ate all the cpu; it coped fine with the lower-level stuff and the 'crude detection' layer, but struggled with the full tracker. (the rPI2, now its kernel is fixed, runs fine, but that's got 4 cores)

I look forward to hearing how you get on; improved efficiency in the TT will help us all :)

will the firmware be open sourced?
it seems in many ways we are going in a similar direction...

Sun, Mar 19, 2017, 17:41

Axoloti with Soundplane (video)

I've mentioned before that I think they make a good team :)

in this video, I'm using the Axoloti as the only sound source/fx and playing it with the Soundplane, using MPE.

Tres Amigos

Fri, Jan 13, 2017, 01:15

with the soundplane (and other expressive controllers), I tend to keep patches simple, as much of the character comes from the player through the controller.

one thing I like is that whilst all my patches sound different, I like them to behave in a similar way; this (for me) makes the SP feel more like an instrument, because it has a 'character'.

what I tend to do is control level directly with z (usually with LP enabled); this leads to subtle control, as the SP is relatively 'slow'. (you need to use velocity/envelopes if you want punchy). then y I often use to drive timbre/cutoff, to brighten the sound. z sometimes also does this, so when you push in it accents the notes.
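for anyone curious what that z smoothing amounts to, here's a one-pole lowpass sketch of the kind of filtering an LP option typically applies before pressure drives level (the coefficient is an assumed parameter, not the SP client's actual value):

```python
class OnePole:
    """One-pole lowpass: a minimal sketch of the kind of smoothing
    an 'LP' option applies before z (pressure) drives level.

    coeff is an assumed smoothing factor in 0..1 (higher = faster
    response, lower = smoother/slower).
    """
    def __init__(self, coeff=0.01):
        self.coeff = coeff
        self.state = 0.0

    def process(self, z):
        # move a fraction of the way toward the new pressure each step
        self.state += self.coeff * (z - self.state)
        return self.state
```

a small coeff is what gives that subtle, slightly 'slow' level response; punchy attacks then have to come from velocity/envelopes instead.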

one 'problem' I have with Aalto (and I've mentioned it before ;)) is that the modulation amount is per input... this means you are limited in how you can use multiple modulation sources, as you cannot tune in each effect correctly.
e.g. imagine you're using Y to modulate cutoff freq, and you want it to be quite 'pronounced', but you also want cutoff freq to track pitch: you can't really get a good balance.

in code we have amt * ( y + pitch ) when we really need ( yamt * y ) + ( pamt * pitch )
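a trivial illustration of the difference (the names yamt/pamt come from the pseudo-code above; they are not real Aalto parameters):

```python
def cutoff_shared_amt(base, amt, y, pitch):
    # one shared amount scales the summed inputs (the per-input behaviour)
    return base + amt * (y + pitch)

def cutoff_per_source(base, yamt, y, pamt, pitch):
    # per-source amounts: each modulation source can be tuned independently
    return base + yamt * y + pamt * pitch

# with per-source amounts you can have a pronounced Y sweep AND modest
# pitch tracking at the same time; with one shared amt, making Y more
# pronounced also exaggerates the pitch tracking, and vice versa
```

e.g. cutoff_per_source(100, 48, y, 12, pitch) gives Y a 48-unit sweep while pitch contributes at 12 units per unit; no single amt in cutoff_shared_amt can produce both ratios at once.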

anyway, this is just how I use Aalto with the soundplane; for sure there are lots of other ways, and I'd love to hear more about others' approaches.

I'd also love some advice on Kaivo; I have a lot more difficulty getting good things out of it with the SP than with aalto.

Thu, Mar 16, 2017, 09:43

Great stuff, thank you

Looking forward to Soundplane enhancements :)

Sun, Feb 12, 2017, 07:52

I have some great news for Soundplane owners :)

the Raspberry Pi kernel bug (in dwc_otg) which prevented the SP working has now been found/fixed. (not released yet)

Yesterday, I ran a patched kernel on an rPI2, and had the SP working perfectly using MEC!

my setup:
Soundplane + rPI2 + Axoloti (midi mpe)

all perfect; the Soundplane uses 80% cpu on 1 core, and the other 3 cores are very low.
(the Eigenharp was tested too, and only uses 10%, others low)

the Soundplane and axoloti are directly powered from the rPI2, so connect this to a USB battery and you have a completely portable setup; just add headphones/speakers :)

the rPI2 probably also has enough cpu power left to run a (light) synth... or you can add another couple of Axolotis or Belas (you have 4 usb ports to play with)

p.s. with MEC on a PI (or anything else), you can also send T3D OSC messages over ethernet or wifi ;)
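for anyone wanting to consume or fake that data without an OSC library, an OSC message is easy to build by hand; this stdlib-only sketch packs floats into a message you could send over UDP (the /t3d/tch1 address and x/y/z argument layout here are illustrative; check the Soundplane client's actual T3D spec before relying on them):

```python
import struct

def osc_message(address, floats):
    """Build a raw OSC message by hand (stdlib only).

    OSC strings are null-terminated and padded to a multiple of 4
    bytes; float arguments are big-endian 32-bit, preceded by a
    typetag string such as ",fff".
    """
    def pad(b):
        return b + b"\0" * (4 - len(b) % 4)   # pad to a 4-byte boundary
    msg = pad(address.encode())
    msg += pad(b"," + b"f" * len(floats))     # typetag string, e.g. ",fff"
    for f in floats:
        msg += struct.pack(">f", f)           # big-endian float32
    return msg

# e.g. sock.sendto(osc_message("/t3d/tch1", [x, y, z]), (host, port))
# over a plain UDP socket
```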

use an rPI3, or an Asus Tinkerboard (mine arrived a couple of days back), and you have even more cpu to spare.

of course it will still be great for the TT cpu to drop, as then the BeagleBone Black with Bela could be used, Bela providing low latency audio, and analog out (useful for modular cv ;))

oh... I've also done a successful technical test with the Eigenharps using libusb running on windows (with usb iso traffic).
this means when my windows laptop arrives, I will be able to get MEC running on windows, and yes, that includes the Soundplane on windows!

I'm really excited; the fixing of the rPI2/3 kernel makes this so much easier for everyone,
it's freely available and only $35 (ok, perhaps a tiny bit more for case, power, sdcard)

once I've 'finished' MEC, I'm going to look at using buildroot to turn this into an embedded appliance... so users just see the MEC/PI as a 'magic box' you just plug in.

Tue, Mar 29, 2016, 06:19

Virta: setting up with DAWs

thought I'd start a thread on this... with my experiences so far; perhaps others can elaborate, or suggest better ways.

goal: Virta taking audio input and being controlled via midi (i.e. both audio input and midi input)
my experience is with Mac OSX, but I think it's relevant to windows too.

generally there seem to be 2 approaches:

  • use virta as an effects plugin, then route midi to it from a separate midi track (most common)
  • create virta as an instrument, then select the audio input as a sidechain input.

Note: where I say use virta as an insert effect, you can almost certainly instead place it as a separate send effect, assuming the DAW allows you to target midi to the send effect.
that way it can be used for multiple tracks.

Live 9.6
a) Create an instrument or audio track, and put Virta as an audio effect on that track
b) Create a midi track, then in the output destination (press the IO button to reveal it) select the track you created in (a), and channel 1.
(limitation: no MPE without lots of tracks, but that's Live for you ;) )

Bitwig 1.3.6
a) Create an instrument or audio track, and put Virta as an audio effect on that track
b) Create a midi track, then in the output destination (press the IO button to reveal it) select the track you created in (a), and channel 1.
Note: MPE works; if you select force MPE, the midi channel on the output destination is ignored, so it doesn't matter what channel you set :)

Cubase 8.5 Pro
this is not working... is there a better way, or is it a bug?
the way I think it should work (as I use for other plugins) is :(
a) Create an instrument or audio track, and put Virta as an insert effect on that track
b) Create a midi track, and target virta, which is listed as an instrument track
(i.e. it's similar to Live's approach)

This doesn't work, as Virta is not listed as an effect, only as an instrument... so it cannot be selected as an insert etc. as an instrument it also doesn't have anything like a sidechain input, as far as I can find.

Logic 10.10.2
a) Create your audio/instrument track as normal (i.e. not virta)
b) Create a BLANK instrument track, then select virta as the instrument
(under midi-controlled effects)
(note: I'm having issues with logic selecting this from the normal new-track dialog)
c) create a bus, and send some/all of your audio/instrument track (from a) to it
d) in the virta dialog, select side chain input, and select the bus you created in (c)

Numerology 4 Pro
just add virta, route audio to it, route midi to it... simple ;)
... and wow, Virta + Aalto + N4 could have been made for each other; such a fun combination, the most fun I've had so far with the audio-mangling side.

OSC option (e.g. soundplane), any daw
if you use OSC input, you don't have to do any of this; just use it as an effect.
(but alas no recording)

Harrison Mixbus
(courtesy of phil999)

  • create a stereo audio track. Add a MIDI port to it
  • in that track, right click, New Plugin, select Virta
  • create a MIDI track without instrument, select keyboard input
  • in the MIDI track, click output, select Routing Grid
  • in the Routing Grid, patch MIDI out to audio track

In Tracktion 7
(courtesy of secretkillerofnames)

there seem to be a number of ways to get it working:
1) Drop Virta on a track, select the MIDI input channel, then drop audio into the track and it works fine for playing. You can even add a MIDI and an AUDIO input.
BUT if you want to record and play back MIDI notes while processing an audio file:
2) Create 2 tracks, one for MIDI, one for AUDIO; create a new plugin rack/wrapper for Virta on the MIDI track. Add the AUDIO track as an input to the wrapper and disconnect one of the track outputs (either AUDIO or MIDI, it doesn't matter.)
BUT if you want to record live input and MIDI notes:
3) change the number of inputs to 2, select the MIDI input and audio input in the two boxes, arm them, make sure live input monitoring is selected, then press record and go. It records both MIDI and AUDIO to the same track!

FL Studio
(courtesy of levendis)

PDF external link

Wed, Feb 01, 2017, 11:25

I recommend PrEditor...
this allows you to set up your own pages of controls (unlike Ableton 'out of the box').
I've done this and find it pretty useful for a few VSTs, including Aalto.

it takes about 10 minutes to set up a VST the first time, but then it's done...

the end result for ML synths I find 'variable'; it's great for modifying a preset you have already created, since you can organise the parameters logically, but you can't change the routings, so it's limited for creating new patches... but I still find it very useful.

(hopefully Ableton might include something like this in a future version of Live, Live 10 :))

Tue, Jan 31, 2017, 04:53

Ableton Push supports browsing in the same way as Ableton Live generally does... i.e. what you can view in Live's browser is available on Push. (so it doesn't need special testing)

basically there are two (actually 3) ways ;)
two are described here:

a) AU - save your presets as aupresets... this works for everything (well, everything that is an AU :))
b) VST - save as PC banks (ML doesn't support this)
c) put the plugin in an instrument rack, and then save the rack.

I use (a) the most, but of course this won't work if you're on a PC (Mac only); (c) is ok but a bit cumbersome, though it has the advantage that you can assign some macros whilst you are there :)

In some ways I quite like saving these presets separately, outside the ML preset system, as it means I have only my presets on the push :)

the main 'pain' is that if you use other DAWs (that don't support AUs) then you cannot get to these saved presets...
to get around this I save both as .aupreset and in the ML browser, which of course means there can be 'discrepancies'.

it's a pain all round: plug-in developers don't like the au/fxp formats as they are limiting, and of course it means you need to dupe au and fxp formats (and others if you support aax etc), but it's not ideal for users either, as DAWs can only use these standards... they know nothing of internal presets. (though bitwig seems to know something about u-he's presets)
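fwiw, an .aupreset file is just an XML property list, so it's easy to inspect or batch-edit; here's a stdlib-only sketch of round-tripping one (the keys and values below are purely illustrative; a real aupreset carries the AU's ClassInfo dictionary with the plugin's own type/subtype/manufacturer codes and an opaque state blob, so inspect one of your own files for the real layout):

```python
import plistlib

# hypothetical minimal preset state - NOT a real Aalto preset; real
# aupresets contain the AU's ClassInfo dictionary, including an opaque
# data blob holding the plugin state
preset = {
    "name": "my patch",
    "type": 1635085685,          # illustrative integer, not a real 4-char code
    "manufacturer": 1298426465,  # illustrative integer, not a real 4-char code
    "version": 0,
}

blob = plistlib.dumps(preset)    # serialise to XML plist bytes
restored = plistlib.loads(blob)  # round-trip it back
```

so a small script could, for example, rename a folder of presets in bulk, or diff two presets to spot those 'discrepancies'.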

Native Instruments are trying to get around this with NKS, and a few developers have jumped on board... but I'm not sure they are making this technology available to DAWs, or if it's just for their controllers.

anyway, it's only really a pain if you're using multiple daws; otherwise there is a way to get these things to work :)