thetechnobear's Recent Posts
bit more progress... on soundplane with beaglebone
I've now got the thing properly calibrating and filtering on the BBB, and the data looks cleaner,
though I'm sometimes getting lingering touches. I suspect this may be because I'm sometimes not using the optimum carrier (I've seen this with the SP on the Mac before). it's good to see the extra processing really didn't appear to hit the CPU too much.
now this is going, I'm considering pulling in SoundplaneModel and 'hacking' it back to a slightly more naked form. the reason behind this is that the model has a lot of code that binds the soundplane driver and touch tracker together. from my current experimentation it's clear that if you don't use the model, you still have to do much of the processing that's in the model anyway.
I need to decide now exactly where the hatcheting starts and stops...
thoughts so far...
things going: OSC and MIDI output connections, OSC services, Kyma, any storage of signals 'for display purposes' (we just don't have the memory to spare), references to the likes of MLFileCollection, anything that refers to JUCE.
parameters: 'under consideration', but I think they will stay.
zones: 'under consideration'; theoretically I don't want the mapping code here, but it's tempting to keep the flexibility.
the main things I'm going to be keeping/modifying are loading soundplaneapp.txt for the normalisation maps, and the general interaction between the TT and the Driver. it will probably then have some simple callback which can be overridden for output.
I don't think this will take too long, as I'm 'reasonably' familiar with SoundplaneModel already; the main effort I think will be breaking the dependencies that I don't want :)
cool, glad it's working for you...
interesting that you see the same behaviour, it is a bit 'odd' - the CPU usage you've got is pretty good, unsurprisingly better than the BBB.
as you say, time to get it doing stuff that's useful, so we can see how usable it is in a more 'real world' scenario. I do expect to have to make a few changes yet, in particular to some input filtering, and also probably loading the soundplane JSON file to get some better calibration data and carrier settings. otherwise I suspect the playability will be variable.
(of course I think this depends on how accurate you need the tracking to be; for me, I'm using it as a playing surface, so it has to be pretty good, but if you were after more general x/y/z, simple might work ok)
anyway, I'd obviously appreciate it if anything you can improve is fed back...
yeah, I'm not going to spend too much time on this for the same reason; once you get the new tracker working (after Virta?!), I'll then port the newer code (and dependent code), and only from there do I think optimisation is really worth looking into.
as I mentioned, as far as I can tell it seems to be 'good enough' so far on the BBB. I only investigated due to the oddity of seeing CPU load drop as the BBB got heavier loading - so I was really looking for some kind of busy wait... but that didn't materialise, so it's still a bit of a mystery, but one that currently seems to play to my advantage.
of course, it's also fair to say that the code on the Mac has plenty of performance; it's only when you start using low-powered microcomputers like these that more use of the FPU is really going to be noticeable.
anyway, lots more things to try to improve, tidy up, and get running before optimisation :)
ok, I've set up a private repo on Bitbucket which has this stuff in it.
you should have an invite; let me know if you would prefer it on a different email address.
yeah, I'm not expecting much from the BBB...
I'm currently just using it to communicate with controllers, in particular my soundplane, eigenharps and a Push 2 :) I'm doing this in C++, and then sending/receiving OSC, which I'm picking up in Pure Data (-nogui) to translate to MIDI (MPE), and then that gets fired off to a bunch of Axolotis for sound generation and sequencing duties.
I'm using Pure Data as it's quite flexible and allows me to do the key/surface mapping; if I find I'm running out of processing power, I'll either move the controller stuff to a Pure Data external (to avoid OSC handling) or just move the MIDI mapping etc. to C++.
the latter of course is most efficient, but makes the solution a little less flexible.
so no, the BBB won't be running X, or anything else too heavy.
the only thing I don't like about the BBB is you can't bus-power the soundplane and eigenharps from it. so what I'm currently doing is using a powered hub, which is hosted by the BBB but also powers the BBB.
anyway, I'm pretty close to being able to test this out.
I've been looking at performance....
it's a little strange on the BBB: the CPU load is varying quite a bit, from 60-80%, BUT oddly, if you start doing other things the CPU load drops. (!??) I thought this was some kind of busy wait, but I've not found any evidence of this.
I did do a gprof run that's quite interesting, showing some definite 'hot spots' that perhaps with some FPU use could be optimised. (I'm sure Randy already knows all this)
convolve3x3r is the big consumer of resources, with nearly 25% of time spent in it.
what's a bit odd is I see in processTouches that it's called 4 times, but 3 times with the same params:
    // to make sum of touches a bit bigger
    mSumOfTouches.scale(2.0f);
    mSumOfTouches.convolve3x3r(kc, ke, kk);
    mSumOfTouches.convolve3x3r(kc, ke, kk);
    mSumOfTouches.convolve3x3r(kc, ke, kk);
I wonder if this could be done in a different way; it could make a big difference.
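for reference, three passes of the same 3x3 kernel are mathematically the same as one pass of the composed 7x7 kernel, since convolution is associative. a rough sketch of computing the composed kernel - plain C++ here, not the MLSignal types the soundplane code actually uses:

```cpp
#include <cstddef>
#include <vector>

// Full 2D convolution of two square kernels; the result has side
// (na + nb - 1). Composing a 3x3 kernel with itself twice gives the
// 7x7 kernel equivalent to three successive 3x3 passes.
std::vector<float> composeKernels(const std::vector<float>& a, std::size_t na,
                                  const std::vector<float>& b, std::size_t nb)
{
    std::size_t n = na + nb - 1;
    std::vector<float> out(n * n, 0.0f);
    for (std::size_t ay = 0; ay < na; ++ay)
        for (std::size_t ax = 0; ax < na; ++ax)
            for (std::size_t by = 0; by < nb; ++by)
                for (std::size_t bx = 0; bx < nb; ++bx)
                    out[(ay + by) * n + (ax + bx)] +=
                        a[ay * na + ax] * b[by * nb + bx];
    return out;
}
```

whether one 7x7 pass (49 taps per pixel) actually beats three 3x3 passes (27 taps, but three trips through memory) would need measuring on the BBB - the single pass trades multiplies for memory traffic.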
setting the coefficients on the background filter is also taking a good amount of time.
apart from that, it's a few small functions (like clamp/max) which are called a huge number of times, so any minor improvement in these could add up to a large improvement.
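to illustrate the sort of thing I mean: a clamp written as nested min/max, which compilers can usually turn into single min/max instructions rather than branches (branches hurt in tight per-sample loops on an in-order core like the BBB's Cortex-A8). this is just a generic sketch, not the ML code:

```cpp
#include <algorithm>

// Branchless clamp: nested min/max lets the compiler emit hardware
// min/max instructions (vmin/vmax on NEON, minss/maxss on SSE)
// instead of compare-and-branch. Defined inline in a header so the
// call overhead disappears at the hot call sites.
inline float clampf(float x, float lo, float hi)
{
    return std::min(std::max(x, lo), hi);
}
```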
anyway, as I said, I need to run this 'in context' to see if it's going to work, but it's good to know there are some things to look into if we need to squeeze a bit more out :)
as such I don't 'need' an X15; it appears the BBB is enough for the soundplane.
(or at least at this stage - I need to connect it up to a synth and play it to check for sure :) )
but perhaps the X15 would give me a bit more headroom. my only real requirement is that I want all this to run off a small battery unit, as I want it all to be portable. I'm not after integrating with eurorack.
I'm thinking something like this:
Compulab CL-SOM-AM57x - yeah, seems nice, but perhaps not viable for a one-off order... and I guess the X15 will see better general support.
hey, but whilst I'm not into eurorack, the idea with the soundplane is enticing, and your products could well tempt me further ;)
(though I'd need eigenharp support too... but I have the code for that already, assuming I can rebuild your firmware ;) )
funny about the SSE2NEON - I thought that was where I got it from... and I only added one function to get the whole ML DSP compiling, and that wasn't it ;)
(we don't need the whole DSP lib for the TT though)
anyway, I'll use vdivq_f32.
code... I'll see what I can do. perhaps a private repo might work; it would allow us to get the basics worked out. I'd like to restrict access at this time to people who can/will contribute.
soundplane - beaglebone black status update.
ok, so I've now split out all the necessary files for the soundplane and touch tracker.
I've integrated the changes made by Scott to vector for NEON, and also made similar changes for signal.
I've also taken a slightly different approach: rather than using #ifdefs, which complicate the code, I've removed the SSE code and moved it into separate files.
so now I have things like
note: the arch subdir files only contain the FPU code (as much as is possible without introducing inefficiencies); common code is in the original file (so above MLVector.h)
one question for scott...
in your port of vector you say there is no NEON replacement for _mm_div_ps; however, SSE2NEON uses vdivq_f32 - is this not correct?
it's not a big deal, as it's only used in makeDefaultTemplate(), so it's not on the performance path... but obviously as the TT changes, it may become more important if that operator is used elsewhere.
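for what it's worth, my understanding is that vdivq_f32 only exists on AArch64/ARMv8; ARMv7 NEON (the BBB's Cortex-A8) has no vector divide, so _mm_div_ps is typically emulated there with vrecpeq_f32 plus Newton-Raphson refinement steps (vrecpsq_f32). a scalar sketch of that sequence - illustrative only, the real thing uses the NEON intrinsics:

```cpp
#include <cmath>

// Crude stand-in for vrecpeq_f32: a reciprocal quantized to roughly
// 8 bits, mimicking the limited precision of the hardware estimate.
// (Only sensible here for moderate d; the intrinsic handles all ranges.)
float recipEstimate(float d)
{
    return std::round(256.0f / d) / 256.0f;
}

// Each vrecpsq_f32 + vmulq_f32 pair performs one Newton-Raphson step,
// r' = r * (2 - d * r), which roughly squares the accuracy; two steps
// bring the 8-bit estimate close to full float precision.
float neonStyleDiv(float n, float d)
{
    float r = recipEstimate(d);
    r = r * (2.0f - d * r);   // refinement step 1
    r = r * (2.0f - d * r);   // refinement step 2
    return n * r;
}
```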
also, is the X15 actually shipping? I can't seem to find a way to order one.
(preferably in Europe, but the US would do too :) )
got it... the soundplane with touch tracking is now working on my BeagleBone Black.
and also on the Mac... a simple oversight on my own part...
now I've just got to pretty the data up a little bit to integrate with my new setup :)
Randy, have you fixed the MIDI/OSC mode in Virta?
as reported against Kaivo/Aalto... when the plugin is persisted, it will not be in the correct mode until the UI is shown - and most hosts will not initially show the plugin UI when restoring a session.
(I guess since you might have hundreds of plugins in a session)
it would be good to know it's fixed from day 1 in Virta.
I don't have a repo with the code in it yet, but if you wish to collaborate, then you could email me.
(mharris AT technobear DOT com)
I've got what you have so far, and I've got it running on Linux (x86_64, Pi, BBB) and Mac.
I've also 'ported' the ML DSP and parts of MLApp and the TouchTracker.
(I've used SSE2NEON as a starting point rather than updating the code, as at this stage I think it's important to remain compatible with the ML code base!)
I made some minor additions to SSE2NEON.
I've also got a new version of hellosoundplane (touchtrackertest) that integrates the touch tracker.
I re-tested this yesterday, and found that this touchtrackertest has the same behaviour on the Mac as the BBB, i.e. it's to do with configuring the touch tracker rather than the SSE2NEON code... which is 'good news'... it also means I can debug it in Xcode :)
current status: the touch tracker is reporting fake touches..
I'm guessing this is probably something to do with the calibration or normalisation maps.
(the other, more static values I've set as they are in the app)
I've two ideas how to move this forward:
debug the fake touches, using the current setup
(and compare to how this goes in the soundplane app)
load the calibration/normalisation data from the soundplane JSON file.
(I'd been avoiding this for the test to simplify things, but mid-term I need it, as I don't plan to have calibration in this app... rather I'd use the calibration that's made in the soundplane app on my Mac)
@Randy, I know you're busy with Virta, but would you perhaps have a few minutes to look over my test app, and see if I've got anything obvious missing when initialising the touch tracker?
... it's good to know others are interested in/looking into this; I think we can crack it easily together.
I've actually got the soundplane reporting pressure data (etc.) on a BeagleBone Black.
though it doesn't quite look correct at the moment, as the touch tracker is not quite right.
I'm not sure if this is my SSE-to-NEON layer, or other code I've written and how it configures the soundplane/touch tracker - I haven't had time to check.
I did this quite a while back now, but then got sidetracked on other projects; I'm hoping to get back to it soon, and when I do I'll have a new 'OSC' layer which I hope will allow me to debug the issues.
it's tantalisingly close :)
(oh, and the Pico which didn't work on the RPi does work fine on the BBB - that bit of the code I've got up and running)
Interesting video, great to hear Virta will be released this week!
good luck with the group, bit too far for me to commute from Spain :) but sounds great for the Seattle area!
@griffley you have to set bitwig to "Force MPE"
I get similar behaviour using the Soundplane with MPE,
so I think it's likely it's aalto/kaivo rather than the MPE implementation on the Linnstrument.
(the behaviour is a bit different if I use rotate channels or not, but similarly 'confused' :) )
not a big issue for me, as I tend to play polyphonically, and also use T3D, which doesn't have any legato behaviour (as far as I remember)
I know you're not looking at the soundplane software at the moment, but for your 'issue' list, when you get to it after Virta:
I notice my console logs filling up with hundreds of :
4/2/16 00:12:36,000 kernel: Limiting icmp unreach response from 6710 to 250 packets per second
4/2/16 00:12:40,000 kernel: Limiting icmp unreach response from 22822 to 250 packets per second
a quick tcpdump trace led me back to the source being the soundplane client software,
as I could see it constantly trying to reach ports 3123 to 3138:
00:14:10.331996 IP 127.0.0.1 > 127.0.0.1: ICMP 127.0.0.1 udp port 3123 unreachable, length 36
00:14:10.332014 IP 127.0.0.1 > 127.0.0.1: ICMP 127.0.0.1 udp port 3124 unreachable, length 36
00:14:10.332022 IP 127.0.0.1 > 127.0.0.1: ICMP 127.0.0.1 udp port 3125 unreachable, length 36
00:14:10.332025 IP 127.0.0.1 > 127.0.0.1: ICMP 127.0.0.1 udp port 3126 unreachable, length 36
00:14:10.332028 IP 127.0.0.1 > 127.0.0.1: ICMP 127.0.0.1 udp port 3127 unreachable, length 36
00:14:10.332034 IP 127.0.0.1 > 127.0.0.1: ICMP 127.0.0.1 udp port 3128 unreachable, length 36
00:14:10.332038 IP 127.0.0.1 > 127.0.0.1: ICMP 127.0.0.1 udp port 3129 unreachable, length 36
I had a look at the code, and in sendFrame(), sure enough, I can see it attempting to transmit to every port offset for every frame.
from a quick test it appears this can be rectified by adding a check to the loop.
I'm not sure if this is the full picture, however, as it may not cover the split case.
I also assume there is some 'logic' behind this, to do with the fact that the frame message needs to go to every client regardless of whether there's a touch or not.
I'd need to do further testing to check this, though of course I'm limited in how I can fix the protocol, as I cannot alter the client software (aalto/kaivo).
btw: I suspect mFrame++ is not correct - should it not be contiguous for each client? i.e. you want it outside the port offset loop.
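to sketch what I mean (hypothetical names - the real SoundplaneOSCOutput code is structured differently), the shape of the fix would be something like:

```cpp
#include <cstdint>
#include <functional>
#include <vector>

// Illustrative only: one flag per port offset, set when a client has
// actually connected/initialised on that port.
struct PortState { bool initialized = false; };

struct FrameSender
{
    std::vector<PortState> ports;                 // one entry per port offset
    uint32_t mFrame = 0;
    std::function<void(int, uint32_t)> transmit;  // (portOffset, frameId)

    void sendFrame()
    {
        for (int offset = 0; offset < (int)ports.size(); ++offset)
        {
            // the fix: skip ports with no listener, so we stop spraying
            // UDP at unreachable ports and triggering ICMP floods
            if (!ports[offset].initialized) continue;
            transmit(offset, mFrame);
        }
        // incremented once per frame, outside the port loop, so every
        // client sees a contiguous frame sequence
        mFrame++;
    }
};
```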
as I said, I know you're busy with Virta, but I hope once that's done you will have some time to resolve some of the issues in the soundplane software - so I hope the above will assist you.
this happens with just the soundplane client running.
the above fix I think is fine; I just need to check some of the surrounding logic to make sure the initialised flag is set in the appropriate cases.
hope Virta development is progressing nicely :)
Since the Soundplane software is open source, I've developed a few extensions to the Madrona Labs (official) application which I thought I'd share. my changes are also open source.
you can download a built version from here.
Obviously this is not supported by ML, so post here if you have issues (unless of course the issue is also present in the official release).
also, if you like a feature, let me know... or if you want other features, let me know - perhaps I may be working on them, or want them too :)
which version is it? - I always keep my build in line with the latest ML version, usually the current development version (rather than the released one)... assuming I don't find any major issues with the dev version.
how well tested is it? - I use it every day, but of course my usage may vary from yours, so I cannot guarantee it. personally I have both this and the official version installed, so that if I have any issues I can cross-check with the official release.
changes included in TB141
- bug fixes (from the official release) related to note on/off behaviour and the sending of pitchbend and CC messages
changes included in TB140 :
- Midi modes (click here to see screenshot)
single channel with poly or channel pressure
MPE ext - extended midi with 14 bit midi support
Multi 73,74,11 (CC x,y,z)
Multi PB,1,CP ( pitchbend, CC 1, channel pressure)
it's easy for me to add more if needed...
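for anyone curious about the "MPE ext" mode: 14-bit MIDI CCs follow the standard MSB/LSB convention, where CC n carries the top 7 bits of a 0-16383 value and CC n+32 carries the bottom 7. a generic sketch of the split (not the TB140 code itself):

```cpp
#include <cstdint>
#include <utility>

// Split a 14-bit value (0..16383) into the two 7-bit CC data bytes:
// first = MSB (sent on CC n), second = LSB (sent on CC n+32).
std::pair<uint8_t, uint8_t> splitCC14(uint16_t value)
{
    uint8_t msb = (value >> 7) & 0x7F;
    uint8_t lsb = value & 0x7F;
    return {msb, lsb};
}
```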
when quantize is off, the touch has an indicator of how far you are from being in tune. I use this to practice playing in unquantized tunings. (click here to see screenshot)
just like the XY zone, but has a 3rd CC for pressure.
don't send touch data when a touch is not active.
I'm also working on a couple of other features:
MIDI Pedal Input
I want to be able to publish sustain (& perhaps expression) pedal info on the MIDI Soundplane OUT, and also OSC.
MIDI Program Change for setups, to allow me to use a MIDI pedal to switch between different soundplane setups, so I don't have to use the soundplane app during play. e.g. so I can have a full surface playing Kaivo, then switch it to playing aalto.
no promises if/when this is coming... quite a few projects on the go, but I wondered if others have thought the same.
finally, a thank you to Randy/Madrona Labs for making the source code open, making all this possible!
updated version TB141 - I found a number of bugs in the MIDI handling in the official release, concerning note on/off and the pitchbend and CCs sent.
These are now fixed in TB141.
Randy, if you look at my latest check-in on my repo, the issues will be pretty obvious.
unfortunately, I can't issue a pull request as my code base now contains quite a few 'enhancements', so there's a bit of divergence, and I don't have the time to make changes on both my build and yours.
(my build is a superset of yours, i.e. it includes all your changes, should you wish to 'take as is')
Dying to show you something!
Dying to hear something :)
I don't have any issues with bitwig/aalto (or kaivo), but that may be because I'm on a Mac.
for what it's worth, I use Bitwig 1.3.5, Aalto 1.7.
Bitwig's options are simply setting a fixed buffer size (which I'd recommend) or Auto, which I assume 'calculates' one for you (rather than varying it, but hey, I might be wrong).
some options to perhaps try are:
- put aalto in the same process as Bitwig, rather than running it as a separate process.
- try turning auto-suspend off
neither should be necessary, though the latter may be useful with aalto, as it kind of expects to be running all the time (afaik).
I'm not sure the above will do anything, as it's not necessary on Mac OS X, but perhaps Windows is a little more fickle.
N4 has the host clock, and all sorts of interesting clock manipulations...
I've not seen the new PLL in Aalto yet, so it's hard to say... I'm not sure if N4 uses a PLL to keep its internal clock in sync with the host; I've asked on the N4 forum.
I have to say I don't use the sequencer in Aalto for much more than a modulator, an LFO with waveshaping (and sometimes at a high rate), so quite possibly I'm under-utilising it :)
why not use a dedicated sequencer VST (etc.)?
if you're on a Mac, take a look at Numerology 4 (five12.com); it's a fantastic match for Kaivo/Aalto - it's a sequencer+++ :)
I don't really do PC, so I only know things that are cross-platform; in that sense, other options would be Reaktor/M4L, which both offer really rich sequencing options.
I'm a little surprised to hear you say you can hear quarter-tone steps on the Seaboard... as I've not heard this expressed before - have you got an example of that?
(though I'd have thought the piano layout perhaps doesn't lend itself to microtonal configuration anyway... but perhaps I'm wrong)
I've seen a few comments on microtonal music here, but Randy is the best to answer...
but from what I've seen in the software/code, if you turn quantisation off, then the soundplane presents a continuous value for pitch, and I don't hear stepping... but frankly it's only very recently that I've started to "understand" microtonal music :)
like the Continuum, of course, 'continuous' in digital systems is not really continuous: the discrete steps are affected by sensor accuracy, and by any touch detection and subsequent processing.
my new interest in microtonal music makes me intrigued by your thoughts on how you would set up the soundplane for this use.
btw: do you know of any good introductions to playing microtonal music? I've little idea where to start... but if I could figure that out, I'd be interested in experimenting with ways to use the soundplane in this context.
ooh - cool :)
Dec 15-20, possibly to coincide with a Virta release ?
(or just wishful thinking :) )
@theheliosequence, I don't think they have a demo currently, but there are quite a few (excellent!) tutorial videos, sound demos, and the manual online. I know it's not quite the same, but it gives a pretty good impression.
(other UVI products have demo versions, so perhaps they will have one in the future)
@andrewj, my pleasure , I'll update if I think of more, or get others :)
this is a bit of a follow-up to a previous topic where I had issues with setting the Kaivo/Aalto mode, and also comes after some experience using the new Bitwig (1.3.3), which has much-improved MPE support.
below are some suggestions on improvements that I think could significantly improve the usability of the Soundplane, and also of aalto/kaivo with other MPE-enabled devices.
a) MPE mode in Bitwig / canDo()
Issue: Bitwig does not automatically detect that Aalto is MPE compatible; this means you need to use the 'FORCE MPE' option when loading the VST.
this appears to be something to do with the canDo message. I'd recommend putting some trace in Aalto and seeing what Bitwig is requesting.
b) Pitchbend range (PBR)
Issue: currently you have to change the PBR in both the Soundplane app and Aalto. automatic slide range was one of the main selling points of MPE; getting it wrong means slides are incorrectly calibrated.
Kaivo should support +/- 48 semitones.
Kaivo/Aalto should process the NRPNs to set the PBR automatically.
c) MPE detection
Issue: Aalto/Kaivo does not automatically switch to MPE; you currently have to select it in both the soundplane app and the VST. if you get this wrong, you get silence/incorrect behaviour.
the MPE standard documents how MPE mode is selected, via CC 127. Aalto/Kaivo should switch to MPE mode automatically when they receive this. also, when it is 'switched off', they can be set back to single-channel MIDI mode.
d) T3D/OSC detection
Issue: Aalto/Kaivo does not automatically switch to OSC; you currently have to select it in both the soundplane app and the VST, and you also have to be careful to use the correct port. if you get this wrong, you get silence.
I've thought about this, and I think the easiest solution is to take the same/similar approach as MPE: have an NRPN which users can send to put Aalto/Kaivo into OSC mode.
the NRPN value could be used as an offset from 3122, such that 0 means OFF, and otherwise 3122 + offset = the UDP port.
for splits, these would be sent on different MIDI channels, so in a DAW you could route to different instances of Aalto/Kaivo.
why use an NRPN? because it is completely open to your use - no need for ratification etc.
why use MIDI for OSC? because it's there... the plugin has to support it anyway, and it allows us to set up the routing for the OSC within the DAW. using UDP to do something similar is a bit of a pain, as you'd need to listen on it to know whether to switch to OSC, and on MIDI to know whether to switch to MPE, etc.
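to make the proposal concrete, the mapping itself is trivial (an illustrative sketch only - this is a suggestion, not anything Aalto/Kaivo actually implements):

```cpp
#include <cstdint>

// Proposed NRPN-value-to-port mapping: value 0 means OSC off (return 0),
// any other value selects UDP port 3122 + value, so the default T3D
// port 3123 corresponds to offset 1.
int oscPortFromNRPN(uint16_t value)
{
    if (value == 0) return 0;   // 0 = OSC off
    return 3122 + value;        // otherwise 3122 + offset = UDP port
}
```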
(btw: if aalto/kaivo is told to use both OSC and MPE/MIDI, it should take OSC in preference - an arbitrary decision though ;) )
I really do think the above could streamline the whole connection process, and make it easier for users to just play music without the tech getting in the way - something I know you are passionate about.
I mention it now because, with Virta in the pipeline, perhaps it's good to address these in Virta now, so that it doesn't have to wait for an update to get these features.
of course, these are just my thoughts and ideas, and I hope they are reasonably straightforward to implement.
some 'asides':
there is a new version of the MPE spec, but I think staying with 1.0 is reasonable until the new one is ratified, as BWS is still using the old version.
On my side, I'm planning on making the following changes to my version of the Soundplane app:
always have a MIDI input, "Soundplane", which will accept:
the MPE messages to set the PBR and OSC/MIDI mode
program change... to switch 'presets', so I can change setups to reflect instruments, or switch to a split setup, without having to interact with the Soundplane GUI
a few standard MIDI messages, passed through to T3D/OSC, e.g. sustain pedal/breath... really this is designed to be used with pedals
MIDI output: rename "Soundplane IAC out" to "Soundplane",
for cosmetics/consistency with other devices... "out" is redundant - in the MIDI API you always know if it's an input or an output; they never get listed together. I think "IAC" is pretty redundant 'techie talk' too... as far as I can see, it's just the Soundplane :)
I'm using a combo of Live and Bitwig...
now that MPE is supported in VSTs, I've started using Bitwig more...
but funnily, soon after BWS released 1.3, Ableton released 9.5 and the new Push 2... so that has tilted me back a bit, as the Push 2 is really good.
I'm a bit caught:
I love using Live/Push as it's 'hands off computer' (which is partly what this is all about), and I use it just with audio input (i.e. I don't bother recording MIDI), BUT setting up multiple MIDI channels to feed VpC/MPE is a pain!
on the other hand, Bitwig with MPE support makes this side easy, but whilst I hugely respect the guys that added Push support (it's a huge endeavour), it's not as 'complete' as Ableton's (partly due to the BWS controller API not being complete).
So, I guess for me it's kind of a race: Bitwig getting better Push support, or Live getting multi-channel MIDI support. till then I switch between them, depending on what I'm doing and whether I think MIDI is important. (when I'm using Ableton, I tend to use OSC/T3D and then record the audio)
hopefully Ableton 10, or BWS 2.0, will be perfect ;o)
Bitwig - yeah, I like it. some things are really well thought through, but every now and then you stumble across things that are missing or awkward (e.g. you can't bounce in place in real-time, a pain for hardware synths)... but I guess comparing a 1.3 product to a 9.5 product is hardly fair (there's also a big price difference between Live Suite and Bitwig).
... so on balance I think they are doing a good job, and it looks to be a promising DAW for the future.
Not sure if anyone else has been having fun with Reaktor 6 and the new modular Blocks.
I certainly have, and there are lots of new user blocks :)
anyway, I thought I'd do my bit for the community and publish 2 blocks which soundplane owners might be interested in:
MPE Expression - a MIDI polyphonic expression block for 8 voices
T3D OSC - a block supporting T3D
both are pretty easy to use: create a set of voice chains as normal in Reaktor Blocks, and then link P/G/X/Y/Z where you want :)
If you have a soundplane and haven't tried Reaktor 6 yet... why not? :)
Blocks are really cool - you just build patches like a physical modular synth. there are loads of modules, and the user library is growing at an amazing rate... link this up with a soundplane and it's a fantastic playground.
it's almost worth it alone to build an 8-voice poly Monark (takes about 5 minutes!) that is completely controlled by x/y/z... the oscillator and filters are lovely.
(I'd assume this also applies to Kaivo, but I've not checked)
Ableton Live (9.5) now supports loading VSTs/AUs from the Push (yippee!)
so now my workflow is to use the Push to create new sets and tracks without having to go back to my computer/mouse...
this is working really nicely: as I use AUs, I can save presets in Ableton and all is good - I can browse them and load them.
(I don't need all the aalto presets, just a few of my own, so I'm not too fussed that I cannot get to the Aalto 'factory presets', as they are not stored as aupresets)
EXCEPT... when I load an AU preset, it doesn't restore the input transport properly.
it correctly displays that I have it set to OSC and to offset 2, but it's not actually listening.
so I have to go back to my keyboard/mouse, bring up aalto, and change it back and forth.
please... can you fix this? with Push/Maschine and controllers like the P8 (and quite a few others), it's becoming more commonplace that people work away from the computer... and don't want to return to it to have to 'set things up'... which is a real workflow killer.
also, it would be handy to have the transport and port as automation targets; this would mean that I could adjust them from the Push - handy if I've put the soundplane in VpC mode to use other instruments (e.g. u-he) and then need to switch to aalto.
BTW: I don't know if you're considering NKR support, but it might be worth it, if this becomes a standard for preset browsing.
I'm not sure how much you've been considering this trend for musicians to (physically) move away from the computer and treat it as an instrument. I find it enjoyable - it allows me to focus more on music making...
I'm finding the soundplane software also needs a bit of tweaking for this... e.g. due to limitations of software synths, some might require me to use VpC, but I use T3D for ML... and I also sometimes want to switch the SP to single-channel MIDI to control hardware synths. again, these are the kind of things I don't want to have to come back to the computer to do...
ideally I'd like to be able to do this 'switching' via MIDI, so I could do it either via the Push or with a MIDI pedal.
but perhaps that's for a different discussion...