thetechnobear's Recent Posts
STM32F7 (and Axoloti) has both FS and HS support.
the advantage of using a hub is not only being able to connect multiple devices, but also that I'm using it as a single power source... which is not only convenient but will also help when I add a USB battery pack to power the whole thing :)
I guess later, I might put in a power rail, but for now this makes it easy to get on with the software side :)
it should be noted, the PI2 and Axoloti can deliver 500mA to USB devices, so it's feasible to use these without a hub with the soundplane. however the BBB can only deliver 100mA, so currently you need a powered hub to use it with the soundplane
(I'd need to check the STM32F7 disco for what it can supply as a USB host)
(I wonder if there is a device that's a straight-thru USB-USB, but can add external power? this might alleviate the need for the hub in some scenarios!)
I'm also assuming that once the soundplane software is on-board it will pretty much max out a CPU/board (perhaps excluding PI2)... so you need another USB (or midi din) port to get the data to another device... so devices with only one usb port will need a hub for that purpose
MTT: well, every (2.0/3.0) hub has a transaction translator in it to do 2.0 to 1.x... an MTT hub just has one per port, important IF you connect multiple 1.x devices.
does a translator create latency... I don't think any more than any hub will...
I'm pretty sure they are 'protocol' aware, so the translation function is not used HS to HS,
only HS to FS.
I'd have thought (may be wrong though) the added latency is 0.125ms (due to the HS leg), e.g.:
w/o hub: computer <- 1.0ms FS -> device
w/ hub: computer <- 0.125ms HS -> hub <- 1.0ms FS -> device
but honestly, I'd need to read the USB hub specification to see if this is true.
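a back-of-envelope sketch of that guess (the 1ms FS frame and 0.125ms HS microframe are from the USB 2.0 spec; the idea that the hub's translator adds roughly one microframe is just my assumption here, not something I've verified against the hub spec):

```python
# Rough worst-case polling-latency estimate for an FS device,
# with and without an HS hub in between.
FS_FRAME_MS = 1.0         # full-speed frame period (USB 2.0 spec)
HS_MICROFRAME_MS = 0.125  # high-speed microframe period (USB 2.0 spec)

def direct_latency_ms():
    """Device polled directly on an FS host port."""
    return FS_FRAME_MS

def via_hub_latency_ms():
    """Device behind an HS hub: FS leg plus (assumed) one HS microframe
    of store-and-forward delay in the hub's transaction translator."""
    return FS_FRAME_MS + HS_MICROFRAME_MS

print(direct_latency_ms())   # 1.0
print(via_hub_latency_ms())  # 1.125
```

if that assumption holds, the hub adds ~12% to the polling interval, which would be well under what you'd feel as a constant delay.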
But, in practice I've used my Eigenharp Alpha (HS) for quite some time through an MTT hub, and never noticed any latency difference compared to plugging it directly into the Mac. so technically perhaps some increase, but I've never felt it.
(and since last year, I've had the soundplane in the same hub, and also not noticed any extra latency)
but perhaps because the latency is 'constant' we just cope; we are generally very good at dealing with constant delays... it's more jitter we 'feel'
hard to describe a feeling ... but here goes :)
if you quickly 'strike' the surface then I don't think you really 'feel' the initial surface give, but as you then apply pressure, you do feel it 'give' and provide resistance, this means you can grade the pressure quite easily. (the mapping of force to pressure is controllable in software)
saying you cannot feel the 'strike' give is not a criticism, it's really not necessary, as your initial velocity is already determined by you before contact... so there is no need for a feedback element.
if however, you slowly touch the surface, then you basically move straight to the second phase (pressure), so you can feel the give immediately. (this way you can play a slow attack pad type sound)
Roli have coined this idea of 5 expressions which I think works reasonably well
- Strike - traditionally called velocity, initial force
- Pressure (Z) / Glide (X) / Slide (Y)
- Lift - traditionally 'release velocity', how fast you release a key
amount of give... its a few millimeters... more like pressing a surface that gives but is rigid, like a plastic lid i suppose.
BUT you have to remember, its a musical instrument, so the feedback obviously is given by sound, the feel is highly correlated.
so, when the soundplane is unplugged, you might think.. "oh that doesn't give much feedback" but when you connect it to a sound source, the feel takes on a different dimension, the sound means you can feel the give more (odd i know, but its completely unlike playing say on an iPad)
then of course it also depends how/what you play...
I tend to think i play in two styles...
tapping - this is quite fun, almost like finger tapping. quick strikes, it kind of bounces.
deliberate touch - i.e. slow approach/softer, so you feel the pressure from the start.
(of course you can also kind of combine this i.e. quick strike then play the pressure)
fatigue: I play with it for hours, and never feel fatigued. the tapping would probably get tiring if you did it for a long time... but I think the give perhaps helps reduce the impact... if you have a medical complaint perhaps not advisable (e.g. Rheumatism)
compared to the Continuum (I've tried one, there is a post here somewhere with my comparison), yeah completely different... not better or worse, just different. I prefer the slide on the soundplane, but the Continuum's dynamics are incredible (actually quite difficult to control initial velocity... but that's probably something you get used to)
sorry, lots of words, but probably inevitable when trying to describe how something feels.
summary: you can feel the give and combined with sound source its plenty of feedback, both physically and 'emotionally'
awesome stuff , and good news all around :)
Dom from Bitwig is saying that BWS does support VSTs with MPE.
but they must support the canDo() operation specified at the end of the MPE spec,
can you confirm that Aalto 1.7 has implemented this?
k, if you could possibly get me a beta at some point, then I can re-test with BWS, and so help 'move the ball along' :)
not got one, but the manual implies it should work well...
it appears you can make the pad send x/y/pressure on different channels. (using the editor) so that should give you independent control for each pad.
(pity it doesn't appear to allow pitchbend for x, but you should be able to use mod and mod+1)
then the other controls on the Neo you should be able to map to Aalto controls using automation in your DAW.
should be fun .... anyone actually tried it?
@mcgreave ... yes, Max can receive x/y/z/note via OSC (this is higher precision than using midi)
one note: if you use X, then you will need to do your own 'quantisation' (it's not that hard, but the SP has a few extras for things like vibrato). however you could also use the note parameter, and 'rescale' this to 2 notes per cell, which will allow you to use the quantisation from the SP software.
the only disadvantage is that you need to ensure Max knows the midi note for the row start. (since you need to calculate with x = 2x - root)
(it's a pity the soundplane software doesn't broadcast this meta information to clients)
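the rescale itself is trivial — a hedged sketch, assuming the SP sends one quantised note per cell and that you've told Max the row's root note (the function name is mine, not from the SP software):

```python
def rescale_two_per_cell(note, row_root):
    """Map a quantised note from the SP (one note per cell) to a
    layout with two notes per cell, relative to the row's root.
    This is the '2x - root' calc: root + 2*(note - root) == 2*note - root."""
    return row_root + 2 * (note - row_root)

# e.g. with a row root of 48, consecutive cells 48, 49, 50
# become whole-tone steps 48, 50, 52
print([rescale_two_per_cell(n, 48) for n in (48, 49, 50)])  # [48, 50, 52]
```

this keeps the SP software doing the quantisation (vibrato handling etc.), while Max just stretches the scale.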
ok, some success, and some failure on the PI2 :)
first I got it all compiling. the main bit was figuring out a way to make MadronaLib compile, as I needed to go from SSE to NEON fpu instructions (Intel vs ARM). after that all proceeded pretty smoothly.
UI is very sluggish, unsurprisingly; I have the same on EigenD... basically for these machines you will want a headless setup.
(It's not helped by the fact that I use remote X; it would probably be better if I connected via HDMI, but still, not what you want.)
Soundplane is detected and connected to... I get power and serial numbers but no pressure data; it looks like it's all zeros, but without errors. this was also confirmed using HelloSoundplane
I've not double checked this, but I fear this is going to be the same issue I've hit with the Eigenharp Pico.... basically there appears to be a bug in the PI usb kernel module that means it's not waiting for data correctly on iso traffic.
i raised an issue on the kernel here..
odd though, I thought the issue was 1.1 vs 2.0 devices, but the Eigenharp Alpha (usb 2.0) works fine, and the soundplane is a 2.0 device. (the Pico which didn't work is 1.1)
hmmmmm.... food for thought.
EDIT: cool, have this version running on both Mac OS X and Ubuntu :)
btw, you have issues with your merge...
you have created a git submodule dependency to bitbucket which is private...
I changed this to point to your github repo in .gitmodules.
linux: yeah, I have 32 and 64 bit versions of debian and ubuntu here, so I can test these; I can also test it on a PI2, which I run a debian derivative on.
axoloti is not unix based... it uses a real-time OS, as micro-controllers do; this is actually a real advantage as it means you get much more reliable timing :)
cmake... cool, no problem familiar with this.
(Ubuntu has some issues with performance/packet counts, but Im sure you know this, and its work in progress... but looking good)
these are private repos, so no access ... no issue, I think I got the gist of whats going on in the Mac OSX versions.
There's also a Linux port in the works using libusb
is it possible to get a look at the 'work in progress' of this? I don't care if it's working or not, it's just that I'm more familiar with libusb than the OS X api, so this would help me figure out the STM implementation.
(either send me a copy, or just chuck it in a branch in github and I can take a look from there)
14 bit CC support added to Axoloti ... will be in next release (but can supply if required)
I've also added 14 bit MPE support, using the Continuum's low-data CC numbers (85, 86, 87).
it uses the principles as described by Leepold H, to ensure we don't get stepping.
I also added 14 bit MPE support to my version of the soundplane client, only took about 15 minutes :)
X - CC 85, used to extend PB to 21 bits, useful for large slide ranges (think 96 note continuum)
Y - CC 87, combines with CC 75 to give 14 bit
Z - CC 86, combines with channel pressure to give 14 bit
additional CC 14 bit support in Axoloti
new CC object, defaults to CC + 32 = low data
new CC object, specify CC for use with high and low data (useful where the 14 bit 'standard' is not followed)
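the 14 bit CC scheme above can be sketched like this (the function names are mine; the MSB-on-CC / LSB-on-CC+32 pairing is the MIDI 1.0 convention, and the explicit low-data CC is the Continuum-style 85/86/87 variant):

```python
def split14(value):
    """Split a 14-bit value (0..16383) into (msb, lsb) 7-bit parts."""
    assert 0 <= value <= 0x3FFF
    return (value >> 7) & 0x7F, value & 0x7F

def cc_pair_14bit(cc, value, low_cc=None):
    """Build the two CC messages carrying one 14-bit controller value.
    By the MIDI convention the low-data CC is cc + 32; a Continuum-style
    mapping passes an explicit low_cc (e.g. 85/86/87) instead.
    MSB is sent first, so a receiver can latch the full value on the LSB."""
    msb, lsb = split14(value)
    if low_cc is None:
        low_cc = cc + 32
    return [(cc, msb), (low_cc, lsb)]

print(cc_pair_14bit(74, 8192))       # [(74, 64), (106, 0)]  -- standard pairing
print(cc_pair_14bit(74, 8193, 87))   # [(74, 64), (87, 1)]   -- explicit low CC
```

sending MSB-then-LSB (and only resending the MSB when it changes) is also what avoids the stepping mentioned above.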
yeah, I plan to do the usb hosting, partly as once I figure out how to do iso for the SP, I can also then add the eigenharp, which has more complex surrounding code, as it needs to upload firmware etc (though no complexity with tracking etc)... so SP is a good start for me.
as for touch, yeah, my current 'assumption' is I will dedicate one board (F4 or F7) to USB handling/touch, and then initially spit out midi (including mpe) data. but Axoloti will also later support an inter-board protocol so we can connect other boards in a ring (hardware support is there, just needs firmware support added)
(of course the low level code will be compatible with any STM32F4/7 chipset, not just axoloti)
so initially Axoloti will be a standalone midi bridge... (it has usb midi host and device, and midi din support) ... some people are also working on a CV interface which they have connected to a modular, so once that's finished we will get that 'for free' :)
if you check out the forum you will see we are seeing a lot of excitement / involvement... and that's with, I think, about 60% of boards delivered, and many still 'finding their feet' with the platform - so looking promising.
(note: I'm just an early contributor and a 'fan' of axoloti, no financial connection etc., but I do think it's got an exciting future :))
I'll keep you posted.
DAWs tend to support 14 bit CC, which is cool for automation of VSTs (since this is done using floats), but this of course is not voice per channel.
u-he synths which support voice per channel, I think, only do this for CCs, and these are fixed and 7 bit only.
It's a good idea. I'll add 14 bit support in Axoloti for all midi modes, i.e. mono, poly, multi channel and mpe ... quite simple to do.
then I'll add it to my SP client.
useful as Axoloti doesn't have OSC (as it's usb/midi din only)... and also I know a few continuum users with Axoloti so they will appreciate it :)
Hi @rsdio and @scottwilson
I'd also be interested if any progress has been made on this...
I'm interested in getting the SP to directly connect to Axoloti, which is based on the STM32F4 using ChibiOS + the STM host lib (etc). I helped get USB host midi working on Axoloti, so have some experience of this.
I've now got a full debugging environment for axoloti, and I also have an F7 discovery board here,
so I'm set up to go... the only thing I don't have (as they're outrageously expensive) is a USB analyser
(I'm also pretty familiar with the soundplane software)
I see there are two tasks:
a) writing the USB level
I've looked at the SP driver; it doesn't look particularly complex. the main new thing for me is getting iso working on STM. (I've done it using libusb before, but not with the STM lib)
b) touch tracker
Whilst the SP software may be portable, having done a bit of work on the STM32F4 now, I'd be concerned that it's going to be a bit heavy.
and of course there is potential to use the cortex instructions to significantly increase performance.
(this is no criticism of the SP code; code for a modern computer makes different compromises when it comes to footprint, efficiency, flexibility, readability... as it has much more leeway)
as I've already said to Randy, I'm happy doing (a), but to do a reasonable job of (b) is difficult. I could probably get it tracking single touches ok, but that's useless (to me), and going beyond that gets complex very quickly.
so I think (b) is where ML could really help out... as I assume you will need this for your eurorack module.. and the STM32F4 (or F7, which is compatible) is the most obvious choice.
perhaps we could collaborate?
I think you still need to be a member of MMA to have seen the draft HD midi spec.
(unless someone here knows of a draft in the public domain?)
yeah, adding 14 bit CC to soundplane client is easy.
(the mpe spec stupidly (imho) didn't cater for this... as at least the continuum/eigenharp can already deal with it)
to be honest though, I've not bothered adding it to my version of the soundplane client yet, simply because my experience with the eigenharp software (eigend), which does support 14 bit, is that very few synths support it.
Id be amazed if there are any hardware synths out there that can support 14 bit midi and voice per channel.
saying that, I can easily add this to Axoloti (axoloti.com) if there is a demand... I added mpe already, and I could do a 14 bit mpe extension, as well as a multi 14 bit version.
one 'issue' with the current soundplane software is it's really tight on UI space for extra options. in my version, I've removed the buttons (like mpe/pressure) and replaced them with a dropdown which allows me to switch between different modes (single w/ channel pressure, single w/ poly pressure, mpe, multi 11/74/76, multi cp/1/3 etc).
@spunkytoofers, do you mean the arpeggiator on the linnstrument?
(as you don't appear to be using the sequencer in this patch, so I can't see it being Aalto)
if you mean there is a difference between using aalto with the linnstrument's arp on and off, I'd say it's most likely to be an issue with how the linnstrument's arp code is working with MPE... i.e. I'd talk to Geert.
(also he either has Aalto or can use the demo, so easy enough for him to test it out)
I'd suspect (without looking at its code, and not having a linnstrument) it's something to do with how the arp on the linnstrument is doing voice allocation, and sending the relevant pressures. (the difference when using an envelope is you're using note_on gates, rather than channel pressure on the channel)
it's a pity that we cannot blend the gate of the internal sequencer with Z; then you could just have your arp using the aalto sequencer.
(oh, for being able to choose the mixing mode on the inlets... a simple multiply would allow us to do this mixing, but the current add doesn't)
Ok. Yes, I use a simple max patch that duplicates and routes the OSC.
yeah, but we don't really want to be running multiple applications; it not only creates overhead but also points of failure (and hassle setting up, e.g. remembering to start multiple things)
Now.. if Soundplane driver and tracker were placed into a Max external, then that would be very interesting, no overhead and the power to do whatever OSC mapping you want... or even just send MPE straight to a VST etc.
I did this with the Eigenharp driver; it wasn't difficult, basically writing a C to C++ layer, but we have all written these countless times, so not hard :) though you do have to be careful of threading in Max.
(again another reason to structure code, such that the low level stuff should be cross platform, not too heavy, so that it can be pulled out from the UI, and used in different contexts)
here's a video of some noodling using Axoloti and my Soundplane (and also Eigenharp later).
Axoloti is a hardware board, a virtual modular programmed in a similar way to Max/Nord Modular, which you can then remove from the computer and play 'standalone'. really fun and easy to use, and it opens so many possibilities with the Soundplane.
I also implemented MPE (included in shipping), and here I use MIDI Polyphonic Expression (MPE) to control each voice independently, so we can do 'unusual' things like alter LFO rates per voice.... and of course 'normal stuff' like per voice control of vca/filter...
things we are 'used' to in Aalto/Kaivo, but now this is in hardware.
1.3 seems to be spamming my computer :)
it is broadcasting 16 touches continuously (with zero data) even when no touches are active ... whereas it used to only send data for active touches (and of course 'off')
this not only creates more traffic, but more processing requirements on the client (as each osc packet has to be parsed)
a bug I assume? (as it seems unnecessary)
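until it's fixed, a client can at least drop the padding cheaply — a hedged sketch, assuming each touch arrives as an (x, y, z, note) tuple and the inactive slots are all zeros (names are mine, not from the soundplane client):

```python
def active_touches(frame):
    """Drop the all-zero padding slots from a 16-touch frame.
    Note: a real client must still track note-offs separately,
    since a previously active touch 'ends' by going to zero."""
    return [t for t in frame if any(t)]

# one real touch plus 15 zero-padded slots, as in the 1.3 behaviour
frame = [(0.1, 0.5, 0.3, 60)] + [(0.0, 0.0, 0.0, 0)] * 15
print(active_touches(frame))  # [(0.1, 0.5, 0.3, 60)]
```

it doesn't fix the extra traffic, of course, only the per-touch processing on the client side.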
I'd like to be able to change midi port with a midi pedal :)
but osc port/midi channel and zone setup would all be useful
Ideally, what I'd like is the midi output/osc output/zones (and transpose/quantise would be nice) to be bundled as a 'setup', and then I could flip between these setups with program change messages :)
good to see MPE support :)
small issue: pitch bend range is not changing when the soundplane tells it to via the NRPN
1.3 ... like the splits, fun with Aalto! ... look forward to kaivo 1.2
btw: the note names are wrong, on at least split example 1; not checked the others...
doesn't matter I will change the layout anyway, but perhaps might be worth correcting in a future release?
email sent... thanks.
yeah, I suspect better TT/Calibration will help, but it may be spacing also plays a role.
touch tracker 'bug' - yeah I fully recognise why it's doing it; the 'sucking in' is a really noticeable phenomenon, but I think even that should be 'curtailed' to some distance (perhaps closer than the normal 'new touch'?), because this sucking in, if done too close, will start generating ghost notes as well.
I think the rule is 'reasonably' firm, you cannot have 2 notes that are closer than N, because once you get this, chaos follows pretty quickly, regardless of the original cause.
(e.g even if I slide two notes close together the TT will soon start having issues).
it's possible that N may be slightly different for different scenarios, e.g. sliding together (touch age?) or very new touches, or perhaps even touch thresholds.
but I look at it this way: I can get the SP into a position where, with one sustained touch, it can be playing two (sustained) notes on adjacent cells... for me this is breaking a precondition... I'd be firing code asserts :)
you can see it in my video - https://www.dropbox.com/s/kkh009ykcd80u0d/SpCalib2.mov?dl=0 @ 0:50
ok, this is edge of the board, but I get this in the centre of the board too.
anyway, I only said an 'easy fix' as I thought it might just be a matter of an additional inhibit check, but I do recognise the TT code is complex, and there are lots of 'use cases', so improving one thing may make something else worse/stop working... so just an idea, one that you are in a much better position to judge; it could well be that the time is much better spent on the new TT.
Sorry, been busy trying to get Axoloti ready for its release :)
my soundplane is #54, I guess from the second batch? (yes, from a guy in Norway)
Heat: agreed, two SPs is not much of a sample set; it could be a wide variety of other things.
I guess my hypothesis is not so much about stretching, more that the rubber is warmer, and so is taking longer to return to its rest state, which I think would lead to false touches, as the other touches are not properly released yet.
it's hard to say if it is, but it does feel a bit more 'sluggish'... but perhaps I'm imagining it :)
I tried recalibrating with a CD case to get an even pressure... and its different, but not really better.
I guess I'm still a bit vague on what I should use for extra spacing material; I don't really have any spacer that I can think of, and I'm not sure where to get some, but will have a look around... (perhaps I can order something online?)
one thing has struck me, that may be an 'easy temporary fix' for the touch tracker.
It's pretty obvious that the ghost notes are usually just one cell from the actual touch, but this should be impossible, as a new touch should not start within 1 cell range... it's supposed to inhibit close touches... so this seems like it might actually be a small bug.
ok, it won't solve the issue of the tracking being out, but it might help reduce ghost notes, and also (in my experience) sometimes fixing such bugs can reveal other issues.
If you want to intervene mechanically, adding a bit of material to the rightmost spacer bar will hold the sensor more tightly together, and should reduce the spreading
I'm a little confused about this. I've watched your disassembly video (a few times), but don't see any spacers... are you saying that you think the two sensor boards could be separating slightly?
I'm sure I will see if I open it up, but I kind of want to check what I need before I dive in :)
try normalizing again being sure to press very evenly. You can use a phone or a CD case for this!
this also confuses me a bit: really, what's the difference between the first step (with palm) and the second... are they calibrating different things? It would, I think, be useful to know a little more about this.
as Ive mentioned before, I do wonder if an 'editor' for the calibrated data would be useful,
perhaps where we could select regions, and increase/decrease/smooth out the difference, and retry... I know it could be fiddly and would perhaps need good 'explanations' to stop fumbling around in the dark.
but one of the problems I've had in the past is: I have an issue, so recalibrate; it fixes one area of the board, but another gets worse. (so sometimes I have lived with a bad area of the board, rather than end up with a worse compromise)
You could just wait and accept the ghost notes into your life until the next software update. I am sure that they will be improved by the next touch tracker. Meanwhile the thing with ghost notes is that they are tiny, so when using direct envelope control via z they may not even be heard.
ghost notes: true to some extent, but it is more the tracking that's my main issue. and in some ways I have lived with it till now (it's always existed to some extent), but it's just got to a point where it's pretty hard to play... it's not very gratifying playing an instrument that doesn't have consistent/repeatable behaviour, which is pretty fundamental for any instrument (imho)
I'm eager to work on the new touch tracker, but meanwhile I also need to release another plugin.
Yeah, I recognise this, bills to be paid and all that :) same for all of us, and of course there is only so much one man can do at a time. so I fully appreciate priorities etc.
I only hope this can receive some future priority, against all the calls for new versions of Aalto / Kaivo / a new plugin / the modular interface... which perhaps generate more revenue than software that is given away free and has fewer than 100 users (albeit it's part of an expensive hardware product)
I'd dive into the touch tracking software myself, but frankly, when I checked it out, it's obvious the amount of knowledge/experience you have built up in this area; my efforts are pretty futile by comparison... a few hours of your time are worth hundreds of mine.
So I hope some of these tips give you something to chew on, and I'll thank you in advance for your patience.
Of course, I will try... as I've said (repeatedly), I love playing the soundplane; it's a wonderful experience and it's noticeable how much it's teaching me too, so I really don't want to put it away until it's cooler (3+ months here) or a newer software version is out.
anything I can do in the meantime I will definitely try, and of course I'll try to be patient :)
(let me know about the spacer / calibration and I will look into this.)
thanks for your help
oops, realised that was the dev version... anyway, same on the production version.
this is with 1.2.5, just some general movements, showing edge issues, and also how it's not predictable with intervals.
(I play intervals like this, as there's not enough room to do them horizontally, and this formation prevents having to play adjacent cells... and I don't like playing in rows 1/5, which I find have less Y range and can be more unpredictable.)
ok, uploaded a video as you requested... here : https://www.dropbox.com/s/ocr32uct37mk5iy/SpCalibIssue.mov?dl=0
(fyi, here is a picture of the calibration)
I ran my finger over row 1, then 5, 2, 3, 4 (and then over the end columns) as evenly as possible.
you can see some unevenness, and in particular around column 26 (in all rows), which is noticeably, when playing, more sensitive than anywhere else on the board.
then I switch to touch mode, and first play a M7 interval (carefully centred on both cells!) (so 1 up, and across). you can see the root is okay(ish) but the G is half a cell away, and it typically will randomly choose between the D and G... so it's unpredictable... also of course even if it is the right note... it's now got Y = 0, when what I played was Y = 0.5
after this I just play D and G, and you can clearly see the ghost notes firing...
(I've tried with a higher thresh, but it doesn't make any difference unless it's at about 0.022, which means you have to almost hammer notes, no subtlety.)
I've tried manually altering template; usually it calculates around 0.3, but I've tried much lower and much higher, again with no perceptible difference in getting reliable touches.
I've watched the videos of you opening it up... it looks easy enough, but I've not tried myself as I wouldn't really know what to adjust or how to make it better :)
FYI, after a request for the T3D macro, I've added a newer version to the Reaktor User Library: T3D OSC macro
In this video I show how to add per note expression to Geetar (Chet Singer) and NI Sparks, but the techniques are applicable to most polyphonic Reaktor ensembles.
and there are lots of those, so this opens up a rich selection of expressive synths for us to use with the Soundplane