thetechnobear's Recent Posts

Thanks Randy, lots more developments to come... it's great being able to collaborate with both you and Antonio, it's a lot of fun, and so much potential... looking forward to 2015.

part 2 up, getting fruity... again using the soundplane
EigenD : Modular Synth part2

Here's a video of my new EigenD agent, which allows full control of EigenD from the soundplane, providing scales, splits, step sequencers and loop control, all directly from the soundplane's surface.
YouTube demonstration

note: this video concentrates on showing the available features. I'm going to be doing a follow-up video which will show 'exclusive' features for the soundplane, and in particular how to build a per-note expressive synth in EigenD, and more... :)

yeah, though... I think it's Reaktor... as exactly the same tests work in Max/MSP and also in a C++ app I'm writing, on the same machine.

yeah, the frame message idea should work, and given the speed of NI fixes, I think it's probably the only realistic solution for now.

touch-off won't help in this scenario, as it would still look like a touch-off followed by a new touch-on... really timestamps/sequencing are the only real solution.
and I totally agree, really it's Reaktor that should provide us with access to the bundle timestamp.

my only 'concern' over using the bundle, though, is that I think there are a few apps that don't explicitly support OSC bundles and their timestamps, e.g. Numerology also doesn't (though Jim may be willing to add it).

channel property only works for "controllers" not note_rows.

would be nice for splits, but would only work for non-multichannel setups.

here you go... not one but two T3D Reaktor-based synths

included is the t3d macro, which replaces Mark's 'continuum front end' and is pin-compatible.
I then updated two of his synths, NanoWave and Matrix, using this macro.
(the whole thing took less than 5 minutes)

you can easily do his other synths by simply downloading his ensembles and then replacing the front end with the macro... just be careful to wire the correct things up.
(tip: import the t3d macro and wire up one by one; as you attach the t3d wires, the continuum front end wires will disappear, so you know what's left to do... and only delete the continuum front end once you have done all the wires)

thanks to Mark Smart for sharing the originals, and I hope others here find these useful and instructive.
p.s. I checked with Mark and he was fine with me sharing.

(another!) Mark

EDIT: ok, I've noticed there are issues with Reaktor and T3D OSC:
a) stuck notes. there appears to be a bug in Reaktor: when a lot of OSC data is sent quickly, it is presented to the application out of order. this is most noticeable when the last couple of pressure values get reversed, so we 'miss' the note-off and get a stuck note.

possibly this could be circumvented by watching frame messages: if we haven't had an update in the last N ms, turn the note off.
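the frame-watchdog idea above could be sketched roughly like this (Python, purely illustrative; the class name and API are made up for the sketch):

```python
import time

class StuckNoteWatchdog:
    """Force a note-off when a touch stops receiving updates.

    Hypothetical sketch: hook expired() up to whatever your host
    uses to silence a voice; nothing here is part of the t3d spec."""

    def __init__(self, timeout_ms=50):
        self.timeout = timeout_ms / 1000.0
        self.last_seen = {}  # touch id -> time of last update

    def on_touch(self, touch_id, now=None):
        # call this on every /t3d/tch (or frame) update for the touch
        self.last_seen[touch_id] = now if now is not None else time.monotonic()

    def expired(self, now=None):
        # returns the touch ids that have gone silent, and forgets them
        now = now if now is not None else time.monotonic()
        dead = [t for t, seen in self.last_seen.items() if now - seen > self.timeout]
        for t in dead:
            del self.last_seen[t]
        return dead
```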

b) continuous event streams in OSC
these should be sent at a continuous rate, as specified in the soundplane app (in Hz),
but I'm seeing quite a variation in this, e.g. at 250Hz I see between 175 and 375Hz.
in fairness I see the same behaviour in Max/MSP.
BUT the issue with Reaktor is there seems to be a 'ceiling' at around 300Hz; above this, I still see data at rates similar to 250Hz.
THIS is not the case in Max, so it's a Reaktor issue.

I'm checking on the NI forums to see if it's a known issue.
really (a) is a problem; I could work around it, but it's not nice...
it's a pity we don't have a timestamp/seq on the individual messages, as this would make a workaround trivial.
(I know the time is on the OSC bundle, but Reaktor does not expose this)
perhaps an OSC option which enables a seq on the tch messages... less efficient, but useful for some hosts.
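on the receiving side, that optional per-message seq might be used something like this (a hypothetical sketch; no such sequence number exists in the current t3d spec):

```python
class SeqFilter:
    """Drop out-of-order touch messages using a per-touch sequence number.

    Assumes a hypothetical extra integer argument on each /t3d/tch
    message; this is NOT part of t3d today, just the option suggested."""

    def __init__(self):
        self.last = {}  # touch id -> highest sequence number seen

    def accept(self, touch_id, seq):
        if seq <= self.last.get(touch_id, -1):
            return False  # stale: arrived after a newer message
        self.last[touch_id] = seq
        return True
```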

thanks @timoka, those are really good examples.
they could be very easily adapted to use OSC; I might give that a go in the next few days.
if so, I will share.
(I've emailed Mark Smart to see if he minds me sharing 'derivative' products)

nice track, you're getting a lot of variety out of Aalto... percussion to pads, great job.

Ah, I think I may have misread (or not looked closely enough at) the t3d code, and assumed the note number was the starting note, not that it was continually changing.
(such is the problem with reading code, without being able to run it :))

EDIT: actually, I just looked at my code; I also did continuous pitch, so I just 'forgot' this temporarily :)

that alters the approach a little when converting existing Reaktor instruments
(since they follow a MIDI model: note-on with pitch, then pitchbend), but it's still possible,
you just need to track the voices (= touches) as mentioned.

my plans are more around building my own instruments, so it's not an issue, as these don't need to follow the same MIDI model.

yeah, the tchN comment was not that it would be better the other way, just that it's a bit of a pain in Reaktor, as it is unable to match partial paths or even process the path, so you have to be explicit.
there are of course some use-cases where being able to match a particular voice is useful.


I'm trying to get my head around how the soundplane software works, and I'd like to understand the principles of how it determines touch x/y/z from the raw matrix.
this is really just 'out of interest', as I love to know how things tick :)

as the source code is open source, I can work through that, but I wondered if there is something a bit higher level.

In particular, are there some details of the basic maths involved, e.g. what maths techniques are used to go from the matrix (which I assume is a 2D array of pressure values)?
(I can then get the specifics from the internet :))

and/or perhaps a max patch that does something similar?

I guess ideally I'd like to try modelling the process in Max, just so I can get a feel for it.

note: I know Randy has spent a lot of time refining the touch data, eliminating noise etc, so I know it won't be as good… but I'm more after an understanding of the principles involved.
(it's these refinements that make me think the C++ code might make it tricky to see the basic process)


so I assume I'm after peak-detection algorithms for 2D arrays (3D grid), and I'm looking for one that's not a brute-force search through the grid for maxima (with respect to neighbours).
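for reference, the brute-force baseline I mean is something like this (a deliberately naive sketch, not the soundplane's actual algorithm):

```python
def find_peaks(grid, threshold=0.0):
    """Return (row, col, value) for cells exceeding all 8 neighbours.

    Naive brute-force local-maxima search over a 2D pressure grid;
    real touch trackers refine this with calibration, interpolation
    and filtering."""
    rows, cols = len(grid), len(grid[0])
    peaks = []
    for r in range(rows):
        for c in range(cols):
            v = grid[r][c]
            if v <= threshold:
                continue
            # gather the up-to-8 neighbouring cells, clipped at the edges
            neighbours = [grid[rr][cc]
                          for rr in range(max(0, r - 1), min(rows, r + 2))
                          for cc in range(max(0, c - 1), min(cols, c + 2))
                          if (rr, cc) != (r, c)]
            if all(v > n for n in neighbours):
                peaks.append((r, c, v))
    return peaks
```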

I've also had a quick check of the DIY projects here; I guess part of this may be a starting point. (I just need to remove the bit about converting the audio signal, as this is already done in the soundplane)

Q. is the matrix from OSC completely raw, or has it had the calibration data applied to remove 'noise'? if the former, I might look at the source to see if I can 'optionally' output the latter.

fair enough, will check out the code.

Max, agreed; I just thought it might be easier to visualize what's going on.
anyway, once I've looked through the code I will have a better idea.

yeah, I'm not an expert either, as I've only built very small things too, really to try to understand Reaktor.

as far as I've played with it, there are two things you can do with OSC

  • osc learn
    this basically allows you to take one of the parameters from the messages (using an index); as far as I can tell this has to be 0.0 - 1.0, and it is then automatically scaled by the control
    this is easy to do :)

  • OscReceive / Osc Receive Array

the first is trivial to use: just give it a pattern and specify the number of parameters (= ports) - but it's limited to 10 outputs.

the second has standard array semantics in Reaktor; I've not used it, but I assume it can handle more than 10 parameters... t3d always has fewer than 10, though, so just use OscReceive :)

ok, playing notes into instruments, this is where it gets kind of tricky :)

you have to do 2 things:

  • you have to go through your instrument and find all the MIDI objects and replace them with an OSC receive, e.g. note pitch. (actually, you'd probably be best using one OSC receive and then sending this through the internal messaging of Reaktor, as you will find many MIDI objects are repeated, e.g. you might find a note pitch connected to the oscillator AND to the filter (to do key tracking))

for simple instruments it's straightforward enough; for more complex ones it can be quite difficult to find all the MIDI objects,
partly because they're not prefixed with 'midi' or anything, they're just called note pitch, pb, etc.

there are a couple of things I find problematic with the t3d spec when using it with Reaktor:

  • the note-off does not send the pitch; this means you need to look it up in some kind of voice array
  • Reaktor expects fixed patterns; /t3d/tch* does not work, so you need to put in a pattern for each of N voices, which is a bit of a pain
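the voice-array lookup for note-offs could be sketched like this (message shape assumed from the t3d discussion above; the names are mine):

```python
class VoiceTracker:
    """Map t3d touch ids to sounding notes, so a zero-z touch message
    (the note-off) can recover the pitch it started with.

    Sketch only: assumes each /t3d/tchN carries (x, y, z, note)."""

    def __init__(self):
        self.voices = {}  # touch id -> note at touch-on

    def touch(self, touch_id, x, y, z, note):
        if z <= 0:
            # touch-off: t3d doesn't resend the pitch, so look it up
            return ('off', self.voices.pop(touch_id, note))
        if touch_id not in self.voices:
            self.voices[touch_id] = note  # touch-on: remember starting pitch
            return ('on', note)
        return ('continue', note)         # ongoing touch: pitch may keep changing
```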

ok, once you get over that it should work...

the next step brings 'extra fun':
you want to do per-note expression, e.g. pitch bend on individual fingers.
this is something, as I said, I've looked into; it's possible, but it means you need to track voices and then affect only the correct voice... this is possible in Reaktor, as it has a pretty good poly system, but it's not trivial.
(note: you may also have to redesign part of the instrument, depending on how it's using the poly mapping)

personally, my idea is to build a simple synth of my own first and get used to doing the above,
and only then look at retrofitting existing instruments, which will be much more complicated.

I think it's a rich avenue of investigation for sure; my only issue is time... as I'm also wanting to do similar things in Max/MSP,
which currently has my focus.

hope the above helps a bit.

looks interesting, hope to have a play with this next week, when my soundplane arrives.

I've not tried SC, but it looks good, very compact; a lot of functionality for a relatively small amount of code.

this is an area I hope to be experimenting with soon (though I've quite a few soundplane projects, so it may take some time :))

what are you trying to do?

  • use for control? e.g. mapping to sliders?

  • use for note input?


should be pretty much the same as doing it with TouchOSC.
basically you will need to set up a zone file which contains X or Y sliders;
then the messages are
/t3d/zonename value
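handling those zone messages on the client side could be as simple as this (the zone names here are just examples from an imagined zone file):

```python
def handle_zone_message(address, value, sliders):
    """Route a /t3d/<zonename> slider message into a dict of controls.

    Sketch only: the zone names (the dict keys) come from whatever
    names you give the zones in your soundplane zone file."""
    name = address.rsplit('/', 1)[-1]
    if name in sliders:
        # clamp to a unipolar 0..1 range before storing
        sliders[name] = max(0.0, min(1.0, value))
        return True
    return False
```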

note input

much more effort, and trickier than mapping :)
first, most ensembles are built around MIDI, so you have to find the appropriate MIDI inputs and replace them with OSC.
one of the difficulties is that t3d is touch-focused, whereas MIDI is note-focused.
(this is a problem with the note-off message in particular, as t3d does not send the note value)
it's actually possible to make it work nicely; I experimented with this in the past with the eigenharp, where the touchId can be used to drive the poly.
(in fairness, it's quite a task to get it working, and many parts will be instrument-specific)

as I said, I've not got my soundplane yet... but when I do, I will be experimenting.
probably simple control first, and then note input much later.

Randy: perhaps the final touch message should contain the note (but zero x,y,z etc); this would make a trivial conversion to MIDI possible, one which ignores touchId.
(usually this is not best practice, but for some applications it's 'good enough')

I bought it last night ... very excited :)

I'm sure Randy will be along soon, but I had a little play to see if I could reproduce it,
as I use LPX and couldn't remember any issue.

I'm using it with LPX on 10.9.4 and Aalto (I could try Kaivo, but I have no reason to believe it's different).

with OSC on my Eigenharp I don't notice any appreciable latency, so it's unlikely to be above 1-2ms, certainly not 80ms...
(I've used this before with both Aalto and Kaivo and not had issues)

the OSC latency is not really possible to time, but I tried with MIDI:
I created a MIDI track, used a kick on the 1st beat, then directed that to an audio channel;
that showed no latency; in fact, if anything, it showed negative latency.
(PDC mismatch?)

I also tested in AULab, and no issue.

I wonder if for OSC you have your network set up correctly?
I don't have access to a soundplane or Yosemite (I keep my music machine on proven releases only), but perhaps it's an issue with the soundplane software rather than Aalto/Kaivo?

one thing I did find odd with LPX/Aalto:
I did get into a state where Aalto was ignoring the first few bars of notes. I know that sounds weird... basically, I'd start the transport and it wasn't until bar 4 that the notes would come through as sound. I replaced Aalto with another plugin, to check it wasn't me/LPX being stupid... and it immediately worked. I then put Aalto back, and it was fine. (so obviously a fresh Aalto instance fixed it)
the only thing I wonder about is that originally, on the 'bad instance', I'd been using OSC and then switched to MIDI... Aalto did switch (OSC message gone etc), but I'm wondering if it was in an odd state.

@wanterkeelt, it's hard to say if your performance is 'normal' without machine specs/operating system etc,
but I found Live 9.1.6 was not really any different from other hosts in practical use.
(LPX, Vienna Ensemble Pro, Bitwig, Max)

I'm running Mac OSX 10.9.4 and Live 9.1.6 on an i5 2.9GHz (so not that powerful); on the default patch Live shows 9%, and on Koto 40% (8 voices).
bear in mind the comments in Randy's article about 'always active'.

as for Live…
it's well known that Live's CPU meter is not a CPU reading at all… it's the percentage of time required to process the audio buffer, i.e. 25% means it used 25% of the maximum time it could take to process a buffer.

this is a reasonable approach, but it's rather subject to external factors, like the operating system preempting it, and it can be a little misleading with multiple cores/CPUs.
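as a quick worked example of what that meter means (illustrative numbers only, not anything measured from Live):

```python
def live_meter_percent(buffer_samples, sample_rate_hz, processing_ms):
    """What a buffer-time meter shows: processing time as a fraction of
    the real time one audio buffer represents (not actual CPU load)."""
    buffer_ms = buffer_samples / sample_rate_hz * 1000.0
    return processing_ms / buffer_ms * 100.0

# e.g. a 256-sample buffer at 44.1kHz is ~5.8ms of audio; if the engine
# spends ~1.45ms processing it, the meter reads about 25%.
```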

one thing worth noting, though: it's worth getting to know how a DAW handles plugins with regard to threads (= distribution over cores).
e.g. in Live, do not put FX on the same track as a CPU-heavy plugin, as they will be put in the same thread; instead, put the FX on a return track.

Randy, a question… if you bypass Kaivo/Aalto, does it stop the oscillators and all other processing? I ask as I noticed it keeps the OSC connection open.
it would be nice if, when the plugin is bypassed, everything is stopped, including closing the OSC server port.

interesting read v8media :)

if you need any help with setting up the pico, just post on the Eigenlabs forum,
or, more active, the Eigenharp G+ community.

I agree, the EigenD software can be a bit 'intimidating' initially, due to its flexibility, but the 'community' is very helpful; just ask questions on the above and we will get you up and running quickly.


playing notes out of scale: you can either use the chromatic scale, or alternatively try the 'fingerer' setup, which gives you a more wind-instrument-like setup, with a key triggering accidentals.

breath to open filter: use the MIDI matrix (the same for both external MIDI and AU/VST mapping); click on the grid cell which intersects breath (column) and filter cutoff (row, either CC or automation).


if you really want to get deeper (later), then make sure to download the latest version from Eigenlabs, which now includes Workbench for free.

one thing I would 'recommend': try to play the pico 'as is' with the factory setup initially; learn it as an instrument in its own right. one common mistake new players make is that they see the flexibility of EigenD and start spending hours/days trying to 'bend it' to their vision/preconception.
this inevitably leads to frustration, as they get into complexities before they really understand it.

I liken it to a person picking up a guitar for the first time and starting to use a bow on it with some weird tunings;

yes, it's possible, but most would agree it would be better to learn to play it 'as is' first.

oh… and if you have Aalto/Kaivo, check out my soundplane agent for the eigenharp.
you can see me using it here:

enjoy your soundplane, eigenharp and EWI; you're very lucky to have such a great selection of controllers :)

(please bear in mind I'm not connected in any way to Madrona Labs, just here to help!)

whoa, sounds like you've had a tough time.

ok, so you say you want to run the VST version (due to presets).

you could use any lightweight 'VST host'; it doesn't have to be a DAW. if you search on Google you will find a number of free VST hosts.
e.g. , but there are lots available.

commercial offerings:
Max - as you say, Max is good if you want to do more, and in fairness it only takes 3 objects to create a full solution.
Bidule - Bidule is very popular, and lightweight.

if you use OSC, then just remember a number of the presets are set up for MIDI, and so won't make a sound under OSC because they have xVel set on the envelope; untick this and it will start working. better still, disconnect the envelope and connect Z to the 'level'.

good luck

been exploring Aalto with the eigenharp's expressive side; just a bit of a noodle really.

Eigenharp with Aalto

I'm really enjoying the Aalto/eigenharp combo; it's very easy to get immersed in for hours.

details: Aalto, my own patch (dark eigen), connected via OSC (t3d) using my EigenD soundplane agent.

it's really the start of an exploratory piece, which I hope to build upon.

a few things I'm hoping Aalto/Kaivo will add to help me:

  • more controller support, so I can add breath input ( and possibly strips)

  • more voices (because it's legato I'm stuck with about 3 usable voices; the 4th is overlapped, so I get too much stealing... I'd like at least one more :))

  • separate OSC ports for Kaivo/Aalto, so I can play a split with Kaivo and Aalto; I've got some ideas on the Kaivo side now :)

there appears to be a regression bug, so this no longer works in 1.5,
as params does not return the names; instead, on the VST we get:

names: param0
names: param1
names: param2
names: param3
names: param4
names: param5
names: param6
names: param7

(on the AU we get blanks)

attached is my simplified Max patch, which you can use for testing:

(tested: Max 6.1.8, Aalto 1.5, Mac OSX 10.9.4)


no problem...

Valhalla (VV, RoomShimmer) are fine; I tried as many plugins as I could, and all others are ok.
it's weird, as I said Aalto and Kaivo parameters show up fine in LPX and EigenD.

BUT... I have got something for you...

with Max, the AU reports the names as 'blank' and the VST reports paramX, e.g. param79,
... whereas all other plugins report the names 'correctly'.
(remember VEP uses only the AU on Macs, so I suspect this is the issue)

I think you have Max... so you can test it for yourself via the vst~ help page.

xVel = dZ ??

unrelated: I looked at the xVel 'issue'.

I've just noticed that dZ can be populated on a soundplane (touch) message, BUT SoundplaneOSCOutput doesn't then send it in the OSC message.
(so perhaps this is where the 'confusion' starts?)
... was dZ intended to be velocity?

Zone support

also, I'm looking to add zone support (with the hope you will add it to Aalto/Kaivo).
I thought I'd send 3 xSym messages, zone Ids 1, 2, 3, with just a 'text' string (i.e. I assume this has no purpose).

one thing I'm slightly unclear on: are these syms unipolar?
I'd have quite liked unipolar/bipolar variants.

I'm a bit confused: why do you have xSym, ySym, zSym? I can understand these represent the axes on a soundplane, but I'd have thought it's irrelevant to the 'client'; they will just want to know it's a 1-dimensional control,
and its orientation is irrelevant? I can see xySym is cool for 2D controls.

I'm guessing your idea is that you just define a region on your board and then choose to send x, y or z, which I assume you then scale to 0..1?
but I'd say it's more valuable for the client to be agnostic to this...
e.g. if I set up my clients to read a 'virtual slider' on the soundplane, and I then change the shape of it on the soundplane, should I really have to change the client?

multiple UDP ports: an idea

finally, I had a thought about using different UDP ports, and a protocol to allow multiple instances.
how about introducing a 'change port' t3d message?
simple idea: Kaivo (/Aalto) listens on 3123 by default; if it receives this message, it changes to the specified port.
once the client has sent it, the client then attempts to connect to the new port (after, say, 1 sec).
when the VST is saved, it saves the port, so when starting next time it uses this new port immediately.

on client and server, the logic is: 3123 is the default unless the above message is used; also, if they ever 'fail' they revert back to 3123.
the 'osc connected' messages should be changed to "osc connected : 3123" (etc).

this should work quite well; it just means the client only has to send the message when the VST is very first created (not restored),
and can then remember that new port from then on.
no pesky UI, compatible with existing behaviour, and you don't have to fiddle with both the VST and the client.
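the client-side logic of that handshake might look like this (the '/t3d/port' message name is invented for the sketch; no such message exists in t3d today):

```python
DEFAULT_PORT = 3123

class PortNegotiator:
    """Client-side sketch of the 'change port' idea described above.

    Hypothetical: 'send' stands in for your OSC sending function;
    the '/t3d/port' address is made up for illustration."""

    def __init__(self, send, saved_port=None):
        self.send = send                   # callable(port, address, *args)
        self.port = saved_port or DEFAULT_PORT

    def claim(self, new_port):
        # tell the synth (listening on the default port) to move,
        # then talk to it on the new port from now on
        self.send(DEFAULT_PORT, '/t3d/port', new_port)
        self.port = new_port

    def on_failure(self):
        # per the fallback rule: any failure reverts both sides to default
        self.port = DEFAULT_PORT
```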



I'm using Vienna Ensemble Pro to host Aalto/Kaivo,
which is working really well (a great way to get maximum performance).

but I've just found that VEP cannot see any automation parameters for Aalto or Kaivo
(it uses the AU version; there's no option to use the VST).

with all my other plugins I have no issues, just Kaivo/Aalto.

this is a problem, as I wanted to use it to circumvent OSC not having any support for things like breath controllers.

EDIT: tested with LPX, Live and Bitwig; they all show the parameters ok, so I think it's an issue with VEP... though interestingly, it has no issue with any other plugin. anyway, I've emailed VEP support.

been playing with both Kaivo and Aalto some more via OSC, and thought I'd put together a few FRs that hopefully are relatively simple to implement :)
(well I can hope)

  • Configurable OSC port, so I can run both Kaivo and Aalto at the same time on a split,
    and also more than one patch (e.g. lead + pad) (K+A)

  • more voices, (say 8?) in Aalto (A)
    (I'm running in Vienna Ensemble Pro, which nicely load balances)

  • Pitchbend range for OSC: I know it's in the OSC message, BUT it's nice to have alternative ranges for different patches; if configurable, you could scale the incoming fractional offset, perhaps with a multiplier? (K+A)

    ... I currently have this in my OSC interface, but it cannot be stored 'per patch' (K+A)

  • envelope x Vel*: in OSC mode it should be x Z (as we have no access to sustain) (K+A)

  • CC support for OSC, or additional 'OSC controls':
    the drawback with OSC mode is that it only has key support, albeit in 3D;
    however, I'd like to use a breath controller/strips as well for more global control, e.g. say reverb level. I can see this would be useful on the soundplane too, as you may have a number of 'zones' defined on the soundplane as sliders for such things. (K+A)

longer term wishes :)

  • gate output, and trigger input for Aalto (A) (as done in Kaivo)

  • voice-per-channel MIDI: I like OSC, but voice-per-channel is more standardised, e.g. the LinnStrument/Continuum also support it. (K+A)

  • modifiers for signals, multipliers/additions/lag ... see u-he bazille for the idea :) (K+A)

  • envelope x Vel
    Q. am I correct in saying that when you connect more than one wire to an input, it sums?
    (it's what it seems to do); it might be nice to have multiply as an option,
    then we could do envelope x Z ourselves.
    (UI: perhaps clicking on the input level changes the colour, and so the mode, between add/multiply?)
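the add vs multiply idea in a nutshell (just illustrating the arithmetic, not Aalto's internals):

```python
def combine_inputs(signals, mode='add'):
    """Sketch of the add vs multiply idea for patch-cord inputs.

    'add' mirrors the summing behaviour described above; 'multiply'
    would let you do envelope x Z directly at the input."""
    if mode == 'add':
        return sum(signals)
    result = 1.0
    for s in signals:
        result *= s
    return result
```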

I'm really enjoying both Kaivo and Aalto; I absolutely love the way they allow us to control the envelope directly, it really is a unique feature.

hope the soundplane build is going well,

thanks for the explanation, that does seem to tie up with what I'm seeing.
I even tried using a stick on the eigenharp keys (not as suitable as with a soundplane!),
and indeed it improves things; upping the sample rate helps too.

but in the end, I think you're right, envelopes are good for this, in the same way I guess we still use LFOs for modulation.

When in t3d mode I believe the "x vel" setting on the envelope uses a "touch-on" velocity value calculated from the initial z.
hmm, this doesn't work for me at all; I've checked the messages sent, and there is definitely an initial pressure, but I hear nothing. I've always had to turn off xVel on all patches to get anything.
… does it work on the soundplane?

BTW, I'd say you might want to take a couple of Zs, as at higher data rates the first few Zs can be quite low.

as you say, it's an exciting area to explore… great that we have both the hardware and software to experiment with :)

I'd agree I can get a lot of dynamic range, especially on pad/lead type sounds.

one thing I am finding tough though is getting a snappy percussive sound.

let's say we have a sound we want to be percussive (like the koto in Kaivo),

but we want to retain control over the level, rather than use the envelope (which we cannot use, as we don't have xVel, and without it, it plays at full volume).

instead, we connect z to level (this is pretty much the normal pattern, I'd say, for OSC/soundplane - no?).

this works really well for pads...

but if I play a percussive sound and quickly tap the keys, it's a little drawn out;
it doesn't sound percussive at all... is this really because I cannot remove my finger quickly enough? (I'd have thought it would be possible at high refresh rates)

have you noticed the same on the soundplane, Randy?

perhaps for these percussive sounds the only real way is with an envelope.

one other thought... perhaps initial velocity is still useful in these scenarios?
should Kaivo/Aalto calculate this from Z (i.e. dZ?), or would it be useful to send it via t3d?
(I think I favour the former, as velocity is a 'derived value', and t3d is as close as possible to raw data)
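if velocity were derived from the first few Zs, it might be as simple as this (a sketch of the idea, not what Kaivo/Aalto actually do):

```python
def velocity_from_z(z_samples, n=3):
    """Estimate a touch-on velocity from the first few pressure samples.

    Averages the rise over the first n samples, since at higher data
    rates the very first z values can be quite low. Purely a sketch
    of the 'derive velocity from dZ' idea."""
    zs = z_samples[:n]
    if len(zs) < 2:
        return zs[0] if zs else 0.0
    rises = [b - a for a, b in zip(zs, zs[1:])]
    return max(0.0, sum(rises) / len(rises))
```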

EDIT: a bit more playing. I used some processing in EigenD to compress the pressure input, and this does help a lot. but what I noticed is that even when I think EigenD is pretty much sending binary pressure, it's not as 'sharp' as when I connect the gate input to the level.
(when I connect gate to level and tap the keys, I get as percussive as with the envelope)

I have to say I'm not totally convinced of my diagnosis though, as I found the more I played with using Z as the input to level, the closer I got to a percussive sound, and in some cases I found I almost needed to release a little slower than expected. (i.e. there's perhaps a longer release on the env than I thought?)

anyway, no big deal... the truth is you just get 'different' sounds; some of the 'percussion' sounds played in this way are very interesting :)
(and with non-percussive sounds it's never a problem... it feels very reactive/expressive)

ooh, 1.6, do you know the contents yet?
I'm pretty desperate to have the voice count increased (to 8), and support for multiple instances when using OSC (or at least running Kaivo and Aalto at the same time).
I guess, though, you're pretty busy with the soundplanes; it must be getting pretty close to shipping time.

I suspect this refers to modulation…
it's easiest to see when you are using a per-note modulation source…

say you have poly aftertouch: this could modulate something in the oscillator, and so each voice could be modulated differently, say timbre, FM carrier etc.

with a single channel it's a bit limited, as there are few per-voice modulation sources: velocity, pitch (key tracking), poly aftertouch.

it's a bit more obvious if you are using OSC with, say, a soundplane (or eigenharp); here you could be modulating all parameters with 3 axes, and each voice would therefore be different.

it uses the soundplane OSC addressing (t3d)
a brief description is here:

in practice it's better (more up to date :)) to read the soundplane source code, which is located at:

it's pretty straightforward stuff; you will find the OSC mapping in SoundplaneOSCOutput.cpp

a couple of points:

  • it uses UDP port 3123 for both Kaivo and Aalto, so you cannot use both synths at the same time, or multiple instances of the same synth :( (of course you can revert to MIDI for this)

  • I think only /t3d/tch and /t3d/frm are used, e.g. I don't think any of the zone messages are supported yet.
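a minimal client-side parse of those two messages might look like this (argument layout assumed from my reading of SoundplaneOSCOutput.cpp; check the source for the current shape):

```python
def parse_t3d(address, args):
    """Minimal classifier for the two t3d messages Aalto/Kaivo use.

    Sketch: assumes /t3d/frm carries frame info and /t3d/tchN carries
    one touch as (x, y, z, note); verify against the source."""
    if address == '/t3d/frm':
        return ('frame', args)
    if address.startswith('/t3d/tch'):
        touch_id = int(address[len('/t3d/tch'):])  # the N in /t3d/tchN
        x, y, z, note = args
        return ('touch', touch_id, x, y, z, note)
    return ('unknown', address, args)
```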

hope this gets you started; I'm sure Randy will be along to let you know if I've got something wrong.

BTW: what are you planning to use it for? and in what?

(I've done this for the eigenharp, and I'm thinking of doing a Max/MSP implementation too for another project... though I'm half waiting to see if Randy will be updating it to add zone support)