thetechnobear's Recent Posts
ok, I'm getting a bit more into Reaktor, so I've updated this macro :)
as I mentioned, the issue is that Reaktor does not keep OSC messages in order... (it's a known bug in Reaktor). This would cause 'stuck notes' with the previous macro.
In this newer version I look for frame messages, and if I receive a frame but no touches, then I know the note is 'off' (touches are continuous, so if you're not receiving a tch it's because it's no longer active)
I've played with it a bit, and it seems to be working fine.
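for anyone curious, the logic is roughly this (a python sketch of the idea, not the actual Reaktor macro; the class and message names are mine, and I'm assuming the t3d-style pattern of a per-frame marker plus continuous per-touch updates):

```python
# Sketch of the frame-based note-off logic: a touch that sends no
# update between two frames is treated as released, which sidesteps
# the reordered/lost note-off messages.

class TouchTracker:
    def __init__(self):
        self.seen = set()      # touches updated since the last frame
        self.active = set()    # touches currently sounding

    def on_touch(self, touch_id):
        self.seen.add(touch_id)
        if touch_id not in self.active:
            self.active.add(touch_id)
            print(f"note on: {touch_id}")

    def on_frame(self):
        # any active touch with no update this frame is no longer held:
        # touches are continuous, so silence means the note is off
        for touch_id in self.active - self.seen:
            print(f"note off: {touch_id}")
        self.active &= self.seen
        self.seen.clear()

tracker = TouchTracker()
tracker.on_touch(1)
tracker.on_frame()   # touch 1 was updated this frame: stays on
tracker.on_frame()   # no update since last frame: note off
```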
here it is: Reaktor t3d. It contains the macro, and again the two demo ensembles from Mark Smart, converted to use it.
I'm always playing with 1.2, it's so tempting as the 1.2 code is much more responsive :)
(the zone thing is a small issue introduced by changes in the underlying ML lib; these are usually pretty simple to fix if you're a programmer)
a thought: if I set up 2 different midi controllers to talk to any VST (via midi), I fully appreciate that sending the same note/cc from both controllers will possibly lead to oddities. so perhaps to some extent this is 'the user's problem' :)
that said, aalto/kaivo (osc), and some other VSTs (e.g. u-he in poly mode), already seem to handle this situation, as I can already send in the same note twice from my eigenharp/soundplane, due to duplicate notes on the surfaces, and the note plays twice.
(can be nice if they are not sync'd :))
anyway, sure, post 1.6... which I'm hoping will bring me multiple instances, which will cover me for this purpose for the time being (albeit a bit cpu intensive).
is aalto getting more voices in 1.6? I could do with ~6, and I think my CPUs can cope :)
Question, which unfortunately I think is related :(
how can I change presets when OSC is active?
I'm assuming that if you're not listening to midi then I cannot send a program change?
or have I missed something obvious?
Is there any news on it?
Last I heard, at NAMM 2015 the MMA had their annual HD meeting, but they have held this meeting every year for 10 years. I suspect it's going to be quite a while before they release the specs, and even then some time before hardware and software support them.
One thing that could be done in the current Soundplane software is to support 14-bit midi CC, in the same way the Continuum/Eigenharps do. I'd be willing to add this to the source, once you are in a position to accept 3rd party changes to the software. (I don't need it myself as I use OSC)
ref : midi hd news
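for context, 14-bit CC just splits the value over a pair of CC messages: MSB on controller N and LSB on controller N+32, per the MIDI 1.0 spec for controllers 0-31 (a python sketch of the message layout; the function name is mine):

```python
# Build the two control-change messages that make up a 14-bit CC value.

def cc14(channel, controller, value):
    """Return the two (status, data1, data2) CC messages for a 14-bit value."""
    assert 0 <= value < 2 ** 14 and 0 <= controller < 32
    msb = (value >> 7) & 0x7F
    lsb = value & 0x7F
    status = 0xB0 | (channel & 0x0F)    # control change on this channel
    # MSB first, then LSB, so receivers update on the LSB arriving
    return [(status, controller, msb), (status, controller + 32, lsb)]

# e.g. CC 1 (mod wheel) at mid-scale on channel 0:
print(cc14(0, 1, 8192))   # [(176, 1, 64), (176, 33, 0)]
```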
what is the algorithm for the quantise function in the SP app?
background: I've two modes in EigenD for using the SP; one uses the frequency determined by the SP app, the other uses the SP as a continuous surface where the layout is defined in EigenD.
(layout functionality already existed in EigenD, so it makes sense to support, as it's dynamically configurable)
so in this latter mode, I'd like a similar 'quantise' note function. this is easy enough to do on 'touch on', as it just sets 'roll' to zero.
but I'm wondering, how does the SP decide when to 'kill' this quantisation?
tch1: you're off by 0.25 semitones, so zero out the roll/pb
BUT very soon (at data-rate time) you will get another tch1, which will show you are off by (say) 0.2502; you didn't move, but the sensors are that accurate... at this stage, you want to remain quantised (I'd say)...
does the quantisation stop when you start sliding? or when you leave that cell?
or when quantising do you always use key centres, and use portamento when sliding from note to note?
also, is vibrato measured as an offset from the original touch position (i.e. un-quantised)?
I'm asking to make my stuff consistent, but I'm also interested so I know how the SP app works.
(I know where it's doing it in the SP code base (touchtracker), but it's taking a bit of time to trace, as my dev version of the SP codebase is no longer working.)
thanks for any pointers
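to show what I mean, here's the scheme I'm currently sketching for my own quantise (purely my guess at a plausible behaviour, not what the SP app actually does; the class, names and threshold are all invented for illustration):

```python
# One plausible quantise scheme: latch the offset at touch-on, stay
# quantised while movement from the initial position stays under a
# threshold, then track the raw pitch freely once it's a real slide.

QUANT_THRESHOLD = 0.5  # semitones of travel before quantise is released

class QuantisedTouch:
    def __init__(self, raw_pitch):
        self.origin = raw_pitch           # un-quantised touch-on position
        self.note = round(raw_pitch)      # nearest key centre
        self.quantised = True

    def update(self, raw_pitch):
        # sensor jitter (e.g. 0.2500 -> 0.2502 off) stays well under the
        # threshold, so the touch remains quantised until a real slide
        if self.quantised and abs(raw_pitch - self.origin) > QUANT_THRESHOLD:
            self.quantised = False
        return float(self.note) if self.quantised else raw_pitch

t = QuantisedTouch(60.25)    # touch-on lands 0.25 semitones sharp
print(t.update(60.2502))     # 60.0  (jitter: still quantised)
print(t.update(61.1))        # 61.1  (a real slide: quantise released)
```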
cool, totally clear now :-) thanks
really? have you got a reference? I'd be interested...
afaik, the following is possible:
the soundplane can output T3D OSC messages (real time), which can be used by Aalto and Kyma. so Kyma understands T3D.
But I've not heard that it can generate T3D; it possibly can. it certainly can output OSC, so I suspect it could be coded to output the t3d protocol.
there are some tools to record OSC to files and replay them, so this might work with Aalto (a bit like recording/playing back midi).
I don't think you could save any sounds/patches from Kyma... the closest would be to export them and import them into Kaivo (rather than Aalto).
but I could be wrong, and would be interested to hear, as I'm often tempted by Kyma :)
I've been trying to use Aalto and Kaivo with a sustain pedal (CC 64) and I'm having issues with notes sometimes being stuck on.
It seems to be worst if you go over the voice count, but it also sometimes happens when this is not the case.
I've put a midi monitor in front of Kaivo/Aalto and I can clearly see CC 64 = 0 being sent to it.
(it's very easy to reproduce, as it happens pretty frequently)
related: I'd really like to be able to hold sustain when using the soundplane over OSC.
how could this be achieved?
ok, this is a bit of a weird one :)
I'm using my soundplane with my Virus TI.
all works ok, except that sending CCs was causing issues, as these CCs are used by the TI for other purposes. So I decided to use M4L to filter them out.
when I did this, I started getting instability in the note pitch (a constant rapid vibrato).
when I slid to a new note, I noticed this instability was not there, and found a pattern (U=unstable, S=stable)
(regardless of starting note, bend range etc... and it's hardly affected by lowering the data rate)
odd: starting on a C it's unstable; slide to D and it's stable. but then start on D and it's unstable, and you can slide back to C and it's stable. (i.e. it's the pitchbend values)
Initially I assumed it was the virus, but then noticed that if I don't have the Max device in place, it's absolutely fine.
so I tried in Max directly: if I send midi data straight through, no issue, but if pitchbends are 'parsed', e.g. midiin to midiparse to midiformat to midiout, the data gets 'garbled'.
ok, so I thought it's a max issue...
so I plugged in my Eigenharp: no issue at all (it's every bit as sensitive, and if anything has faster data rates).
So it 'appears' (I did say it was odd :o)) to be some combination of the soundplane software + max.
my only 'guess' (having looked at the midi data) is that the soundplane software seems to be 'beating' between a few values even when your finger is still, and I wonder if these rapid changes are causing issues in max. (Perhaps EigenD smooths it; I'd need to check the code.)
But I kind of thought that's what vibrato would do, kind of smooth out the data a bit?
(my usual settings are 0.5, bend range +/-24)
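to illustrate the smoothing idea: a simple one-pole filter over the bend stream would kill that kind of 'beating' (a python sketch of the general technique; I have no idea if this is what EigenD actually does, and the alpha value is just for illustration):

```python
# One-pole low-pass over a pitchbend stream: a still finger that
# 'beats' between two adjacent 14-bit values settles to a constant
# output, so no spurious bend messages need to be sent.

def smooth(values, alpha=0.2):
    """out[n] = out[n-1] + alpha * (in[n] - out[n-1]), rounded to ints."""
    out = []
    y = values[0]
    for v in values:
        y += alpha * (v - y)
        out.append(round(y))
    return out

# a still finger toggling between two adjacent bend values:
raw = [8192, 8193, 8192, 8193, 8192, 8193]
print(smooth(raw))   # [8192, 8192, 8192, 8192, 8192, 8192]
```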
Nope, midi note/cc input is deactivated when OSC is active.
(but plugin automation is still possible)
out of interest, why not just use the pitch/gate from the OSC inputs?
Just one touch; I've not tried with more touches.
Max patch: simple notein->noteout, bendin->bendout.
that's it, there is no processing going on.
I get the same if I do midiin -> midiparse -> midiformat -> midiout (and connect note and pb only).
as I say it's odd; if instead I do midiin -> midiout, it works fine, including pitchbends.
I'm sure somehow the TI is a factor, as I don't see it with VSTs, but as I say, I cannot really blame it, as it doesn't do it when max is not processing the messages, and max doesn't do it when I use the eigenharp.
I suppose I'd need to see exactly what PBs are being sent by the SP software.
a question: if you are using quantise, and vibrato at 0.5, how much movement is required before a pitchbend should be sent?
I'm assuming the different levels of vibrato compress the X movement of the signal, such that small movements are ignored?
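to illustrate the behaviour I'm assuming (pure guesswork on my part; the function, threshold and scaling are invented for illustration, not taken from the SP source):

```python
# Guessed vibrato handling: a small dead zone swallows tiny X
# movements entirely, and anything beyond it is scaled by the
# vibrato amount before becoming a pitchbend offset.

def compress_x(offset, vibrato=0.5, threshold=0.02):
    # movements below the threshold are ignored entirely
    if abs(offset) < threshold:
        return 0.0
    # remaining movement is scaled by the vibrato setting
    return offset * vibrato

print(compress_x(0.01))   # 0.0  - below the dead zone, no bend sent
print(compress_x(0.10))   # 0.05 - scaled by vibrato = 0.5
```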
yeah, /t3d/sustain could work for me :o) I could then easily support this in EigenD.
I could also look into changing the soundplane app to be able to listen on a midi port, to allow for some pedal inputs that could then be routed over OSC to synths.
(the 'issue' here being: how do musicians get their midi pedals to work alongside the soundplane when using aalto/kaivo?)
currently in EigenD I've done something similar over t3d for breath etc.
e.g. I have the messages
(I configured these in the soundplane app as zones, for use with the soundplane into EigenD via t3dInput, and I also output these on my t3doutput agent)
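the pedal-routing idea boils down to something like this (a python sketch; /t3d/sustain is the address from the discussion above, but the breath/expression addresses are just my own invented naming, purely illustrative):

```python
# Map incoming midi CC messages to t3d-style OSC (address, value)
# pairs, so a pedal plugged into a midi port can ride alongside the
# soundplane's own OSC stream.

CC_TO_OSC = {
    64: "/t3d/sustain",     # sustain pedal (suggested address)
    2:  "/t3d/breath",      # breath controller (hypothetical address)
    11: "/t3d/expression",  # expression pedal (hypothetical address)
}

def midi_cc_to_osc(status, data1, data2):
    """Translate a midi CC message to an (address, value) pair, or None."""
    if status & 0xF0 != 0xB0:        # not a control change: ignore
        return None
    address = CC_TO_OSC.get(data1)
    if address is None:              # an unmapped CC: drop it
        return None
    return (address, data2 / 127.0)  # normalise 0..127 to 0.0..1.0

print(midi_cc_to_osc(0xB0, 64, 127))   # ('/t3d/sustain', 1.0)
```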
I think the most common 'additional' inputs used are:
(I guess there are other 'pedals' for things like hold, legato etc)
I guess the 'issue' for Aalto/Kaivo is that only sustain has a known function,
whereas the others would all need outputs on the device section, so we could route them appropriately. (I'd settle for breath and expression :o))
I know we also talked about automation names over OSC, which is also useful,
but it's not as useful, as it's not saved per patch, and I can also do this routing by using the plugin's automation; a simple M4L device could do this.
I know, lots of 'ideas', but I'd settle for sustain working over midi, and some way of getting sustain when using the soundplane, for now.
but of course I don't want to delay you getting 1.6 out... as I think there are quite a few waiting for it.
cool, like Windrush... it would be nice to hear more about which synths you're using, and how you're using Aalto etc.
Thanks, I hope to be doing more over time.
Currently doing more stuff with Reaktor, which is also fun... a pity its OSC implementation is broken; I'm still trying to 'perfect' the multi-touch handling with Reaktor.
I've also ordered a few Axoloti boards, which I will be using for voice-per-channel with both my Eigenharp and Soundplane... very excited by this prospect.
My latest video is the start of a series where I'm going to show how you can use EigenD to build a modular synth, with full per-note expression.
part 1 covers the basics... and then we get more serious and fruity.
I'm using the Soundplane as my controller, as it's great for this... but the techniques are applicable to all controllers.
For the Soundplane I'm using t3d osc, but you can also use midi (including voice per channel).
Note: please subscribe, as I don't want to spam this forum with my videos, so I may not always update this thread.
great idea, I've got a few aalto and kaivo patches I've been working on...
perhaps we should have a separate 'Patches for soundplane' thread, as the Aalto/Kaivo patches threads are very long.
Yeah, I'd love to see some improvement in this area, as I find chords challenging.
with fourths, there are fingerings (I've found):
a) linear, which is a bit of a stretch, and takes a little too much space
b) over two rows; this can work, but I find getting equal pressure tricky due to the fingering
e.g. (left hand)
the main issue though is that many inversions & other chords cannot be played, since you end up with adjacent notes in either the vertical or horizontal axis.
I think 'how musicians play the soundplane' is a possible rich area for discussion,
so I'll start another topic rather than derail this one :o)
Playing chords (assuming fourths layouts)
here's how I'm trying to play chords:
a) on one row - the most obvious, but it can take up quite a bit of surface to play a chord
b) over two rows - more compact, fingering not too bad; not all chords/inversions are possible due to adjacent (vertical) notes, equal pressure can be difficult, and practice is required to consistently get correct spacing.
example (left hand)
a) Currently I primarily use it as a playing surface
b) Using rows as fourths
c) I'm getting pretty comfortable with the SP, playing solo parts with perhaps 2-3 touches active using one or both hands. I'm still practicing multi-part pieces (see Difficulties)
Arps: getting even pressure, and ensuring each note sounds and does not slide.
Chords: fingering is difficult (see next post), and I find it easy to either be too close to a border and trigger an incorrect note, or not get enough pressure on some fingers.
(it's getting better, but it's still hard)
Playing non-legato with adjacent notes: when played faster... too often I end up with a slide. I think this is partially me, and partially the software not always treating it as a new touch (regardless of the LP setting).
Consistent velocity over midi: I don't seem to be able to get very light or very hard touches; it seems to play in the range 40-90 (rather than 0-127), which can make subtle playing on some soft synths tricky.
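(a possible workaround I may try for the velocity range: just re-map the observed range back out to full scale; a python sketch, where the 40/90 endpoints are only what I'm seeing on my unit)

```python
# Re-map an observed narrow velocity range (roughly 40-90 here) back
# out to the full 1-127, clamping anything outside the observed range.

def rescale_velocity(v, lo=40, hi=90):
    if v == 0:
        return 0                        # note-off stays note-off
    x = (v - lo) / (hi - lo)            # 0.0..1.0 over the observed range
    x = min(max(x, 0.0), 1.0)           # clamp outliers
    return max(1, round(1 + x * 126))   # spread over 1..127

print(rescale_velocity(40), rescale_velocity(65), rescale_velocity(90))
# 1 64 127
```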
I love playing with both hands, where only 2-3 touches are used, i.e. 1 touch in the left hand playing 'bass', 2-3 touches in the right; the sliding between notes is brilliant. the 'poly pressure' is great, and the Y movement is at times excellent.
It's a really different instrument to the Eigenharp.
The Eigenharp excels at playing 'anything', as its key action is faster, and there are no limitations on layout.
The soundplane excels at multi-finger expressiveness; it's hard to explain why, but I think it's partly that the size of the key zone means you can really slide around it; it's a more 'exaggerated' action. Overall I'm glad I have both, as they complement each other really well.
part 3 is up... that's the last of the basics.
this rounds off this 'section', covering the sub oscillator, the LFO for PWM, and envelopes... and a bit of FM.
From here it will be less regular, concentrating on more complex patches and techniques, and on integrating things in perhaps unexpected ways.
EigenD : Modular Synth part3
Note: links to downloads and documentation are in the youtube description of each video.
Thanks Randy. Lots more developments to come... it's great being able to collaborate with both you and Antonio; it's a lot of fun, and there's so much potential... looking forward to 2015.
part 2 is up, getting fruity... again using the soundplane.
EigenD : Modular Synth part2
Here's a video of my new EigenD agent which allows full control of EigenD from the soundplane, providing scales, splits, step sequencers and loop control, all directly from the soundplane's surface.
YouTube demonstration
note: this video concentrates on showing the features available; I'm going to be doing a follow-up video which will show 'exclusive' features for the soundplane, and in particular how to build a per-note expressive synth in EigenD, and more... :)
yeah, though... I think it's Reaktor... as exactly the same tests work in Max/Msp and also in a C++ app I'm writing, on the same machine.
yeah, the frame-message idea should work, and given the speed of NI fixes, I think it's probably the only realistic solution for now.
touch-off won't help in this scenario, as it would still look like a touch-off followed by a new touch-on... really timestamps/sequencing is the only real solution.
and I totally agree: really it's Reaktor that should provide us with access to the bundle timestamp.
my only 'concern' over using the bundle, though, is that I think there are a few apps that don't explicitly support osc bundles and their timestamps; e.g. Numerology also doesn't (though Jim may be willing to add it).
the channel property only works for "controllers", not note_rows.
it would be nice for splits, but would only work for non-multichannel setups.
here you go... not 1 but 2 T3D Reaktor-based synths.
included is the t3d macro, which replaces Mark's 'continuum front end' and is pin compatible.
I then updated two of his synths, NanoWave and Matrix, to use this macro.
(the whole thing took less than 5 minutes)
you can easily do his other synths by simply downloading his ensembles and then replacing the front end with the macro... just be careful to wire the correct things up.
(tip: import the t3d macro and wire things up one by one; as you attach the t3d wires, the continuum front end wires will disappear, so you know what's left to do... and only delete the continuum front end once you have done all the wires)
thanks to Mark Smart for sharing the originals, and I hope others here find these useful and instructive.
p.s. I checked with Mark and he was fine with me sharing.
EDIT: ok, I've noticed there are issues with Reaktor and T3D OSC:
a) stuck notes: there appears to be a bug in Reaktor where, when a lot of osc data is sent quickly, it is presented to the application out of order. this is most noticeable when the last couple of pressure values get reversed, so we 'miss' the note-off and get a stuck note.
possibly this could be circumvented by watching frame messages, and if we haven't had an update for N ms, turning the note off.
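in python-ish pseudocode, that timeout workaround would look something like this (the class, names and the 50 ms timeout are all illustrative; tune the timeout to the osc data rate):

```python
# Watchdog for stuck notes: any touch with no update for longer than
# TIMEOUT_MS is forced off, even if its note-off message was lost or
# reordered.

TIMEOUT_MS = 50  # an assumed timeout; tune to the osc data rate

class StuckNoteGuard:
    def __init__(self):
        self.last_seen = {}   # touch id -> time (ms) of last update

    def on_update(self, touch_id, now_ms):
        self.last_seen[touch_id] = now_ms

    def expire(self, now_ms):
        """Return the touches to force off, and forget them."""
        stuck = [t for t, ts in self.last_seen.items()
                 if now_ms - ts > TIMEOUT_MS]
        for t in stuck:
            del self.last_seen[t]
        return stuck

guard = StuckNoteGuard()
guard.on_update(1, 0)
print(guard.expire(10))    # []   - still fresh
print(guard.expire(100))   # [1]  - no update for 100 ms: force note off
```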
b) continuous event streams in osc
these should be sent at a continuous rate, as specified in the soundplane app (in Hz),
but I'm seeing quite a variation in this, e.g. at 250hz I see between 175 and 375 hz.
in fairness I see the same behaviour in MaxMsp.
BUT the issue with Reaktor is that there seems to be a 'ceiling' at around 300hz; above this, I still see the data at similar rates to 250hz.
THIS is not the case in Max, so it's a Reaktor issue.
I'm checking on the NI forums to see if it's a known issue.
really, (a) is the problem; I could work around it, but it's not nice...
it's a pity we don't have a timestamp/seq on the individual messages, as this would make a workaround trivial.
(I know the time is on the OSC bundle, but Reaktor does not expose this)
perhaps an OSC option which enables a seq on the tch messages... less efficient, but useful for some hosts.
thanks @timoka, those are really good examples.
they could very easily be adapted to use OSC; I might give that a go in the next few days.
if so, I will share.
(I've emailed Mark Smart to see if he minds me sharing 'derivative' products)
nice track, you're getting a lot of variety out of Aalto... percussion to pads, great job.
Ah, I think I may have misread (or not looked closely enough at) the t3d code, and assumed the note number was the starting note, not that it was continually changing.
(such is the problem with reading code without being able to run it :))
EDIT: actually I just looked at my code; I also did continuous pitch, so I just 'forgot' this temporarily :)
that alters the approach a little when converting existing Reaktor instruments
(since they follow a midi model: note on w/ pitch, then pitchbend), but it's still possible;
you just need to track the voices (= touches) as mentioned.
my plans are more around building my own instruments, so it's not an issue, as these don't need to follow the same midi model.
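for anyone converting their own ensembles: the voice tracking amounts to something like this (a python sketch of the idea; the class, names and the +/-24 bend range are illustrative, not from the macro):

```python
# Voice tracking for midi-model conversion: on a new touch, latch the
# nearest note and emit note-on; subsequent continuous pitch from t3d
# becomes pitchbend relative to that latched note.

BEND_RANGE = 24.0  # +/- semitones, an assumed pitchbend range

class Voice:
    def __init__(self):
        self.base = None   # the latched midi note for this touch

    def update(self, pitch):
        """Map a continuous pitch value to (event, note_or_bend)."""
        if self.base is None:
            self.base = round(pitch)
            return ("note_on", self.base)
        bend = (pitch - self.base) / BEND_RANGE   # -1.0 .. 1.0
        return ("bend", bend)

v = Voice()
print(v.update(60.1))   # ('note_on', 60)
print(v.update(62.5))   # ('bend', ...) roughly 2.5/24
```

one Voice instance per active touch (keyed by the tchN id) is all the tracking needed.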
yeah, the tchN comment was not that it would be better the other way; it's just a bit of a pain in Reaktor, as it is unable to match partial paths or even process the path, so you have to be explicit.
there are of course some use-cases where being able to match a particular voice is useful.