copernikit's Recent Posts

So, I'm fiddling with zones and OSC now, trying out controllers other than just note rows, and I've got a few questions:

First, 'y' type controllers don't seem to work at all. They show up in the OSC stream, but stay at a constant value of 0.0 even though the value in the GUI changes.

'x' type controllers seem to work as intended (question below).

Second, is it intentional that all non-note-row controllers constantly output OSC data even when unchanged? I'm not sure whether this will cause any issues down the line, but it definitely causes SuperCollider to get backed up when printing all incoming data for debugging purposes.
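For what it's worth, one thing that keeps the post window usable on my end is only printing a touch message when its values actually change, instead of dumping the whole stream. A rough sketch (assuming the same '/t3d/tch1' address and port 3123 as elsewhere in this thread):

```supercollider
(
var last;
OSCdef(\debugTouch, { |msg|
    // post only when the message differs from the previous one
    if(msg != last) { msg.postln; last = msg };
}, '/t3d/tch1', recvPort: 3123);
)
```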

Finally, I notice there's no differentiation between multiple controllers of the same type, other than their relative position in the OSC frame (I think; it's hard to verify this in SuperCollider). Is this correct?

I was going to try MIDI to see if it would make non-note control areas easier to work with, but my Soundplane client crashes every time I activate MIDI.

I'm running Mojave (probably a bad idea), so the MIDI crashes may be related to that. Or maybe it's all related to that and I just need to downgrade (I got a new/used computer and thought maybe I'd get away with keeping the new OS on it).

Sorry if this is answered elsewhere, but I can't seem to find it: are there any Soundplane presets for Kaivo?


Hi There,

I hacked together a granular app in Supercollider during the July iteration of the Monthly Music Hackathon in NYC.

It took me a while, but I finally made a functional GUI and an experimental standalone version (OS X only) so you don't have to know how to use Supercollider to mess around with it. I also cleaned up the code so I wouldn't be totally embarrassed ;-)

It's pretty simple, but also fun. It maps 5 samples across the 5 rows of keys on the Soundplane. Think of your finger as a playhead dragging across the sample. Grain size and density are controlled by pressure (z). Dragging your finger to the right plays the grains forward, dragging to the left plays in reverse.

To use, your Soundplane client has to be transmitting on port 3123 (the default) and the layout must be 'Chromatic'.

The UI is just a window with 5 rectangles in it representing the rows. To load a file, drag a supported sound file (basically any uncompressed format; not .mp3 or .aac) onto the rectangle representing the row you want to play it on. If the file loads successfully, a waveform view appears.

Oh, and only mono buffers for now. You can drag multi-channel files into the window, but only the first channel will load. There's probably a limit to the length of the sample, though I find anything over a minute gets hard to control predictably. Let me know if you find it.

6 touches are supported by default. The currently playing area will be highlighted for each touch (for visibility, the size of the area is not to scale, but the start point is correct).

You can download the experimental .app bundle here.

It's basically a shortcut to a script that opens Supercollider and loads the file automatically, wrapped in an application bundle.

Code is below.

Let me know if you like it, or have any questions or suggestions.


------------ copy and paste everything below this line into SuperCollider --------------

s.boot; //

// boot your server with the line above, then evaluate the sections below in order (shift-return evaluates the current block).

// INIT //////////////////////////////////////////////

~maxTouches = 6; // set this to the maximum number of touches you want to support
~oldX = Array.fill(~maxTouches, 0); // for keeping track of direction of movement in the x axis


// NB: the forum formatting stripped several class names (and some leading
// numbers) from this listing. EnvGen/Env.asr, Impulse, GrainBuf, LFNoise2,
// Pan2 and Out below are reconstructed from the surviving argument lists;
// constants marked "guess" are mine.
SynthDef(\tapeGrains, { | out = 0, pan = 0, x = 0, y = 0, z = 0, buf, rate = 1, gate = 0 |

    var snd, env, trig;

    env =, 1, 0.01), gate); // attack time is a guess

    trig = * z) + 10); // grain density follows pressure; the 100 is a guess

    snd =
        numChannels: 1,
        trigger: trig,
        dur: 0.0001 + (0.1 * z),
        sndbuf: buf,
        rate: rate,
        pos: x + ( * 0.001), // slight position jitter (UGen and amount are guesses)
        pan: * (1 - z) // pan jitter, reconstructed
    );

    snd = snd * env;

    snd =, x - 0.5);, snd);
}).add;


s.sync; // wait for the synthdef to reach the server before continuing (note: s.sync only works inside a Routine; if you're evaluating section by section, just pause briefly instead)

~samples = 5.collect({ Buffer(s) }); // init empty buffers, one per row (class name reconstructed)

~players = Array.fill(~maxTouches, { Array.fill(5, {|i| // spawn synths: one per row, for each touch

    Synth(\tapeGrains, [\buf, ~samples[i]]);
})});




// GUI //////////////////////////////////////////////

w ="Magnetophon", Rect(400, 400, 600, 510));

~soundViews = 5.collect({|i| SoundFileView(w, Rect(5, 5 + (100 * i), 590, 95))
    .canReceiveDragHandler_({ View.currentDrag.isKindOf(String) })
    .receiveDragHandler_({ arg v; // method name reconstructed; loads the dropped file path

        ~samples[i].allocReadChannel(View.currentDrag, channels: [0],
            completionMessage: { |buf| buf.loadToFloatArray(action: { |a| { v.setData(a) }.defer });
                fork({ 0.25.wait;{|b| b.query}); });
            });
    });
});{|view|{|i| view.setSelectionColor(i, Color.yellow.vary(0.2, 0.2, 0.8))})});

w.view.onResize = {|v|{|view, i|
    var width, height, x, y;

    height = v.bounds.height / 5 - 6;
    width = v.bounds.width - 10;

    x = 5;
    y = 5 + (i * height) + (i * 5);

    view.moveTo(x, y);
    view.resizeTo(width, height);
    });
};

w.front;



// Generate OSCdefs //////////////////////////////////////////////

// one OSCdef per touch{ |i|

OSCdef(("touch" ++ i).asSymbol, { | msg, time, addr, recvPort |

    var x = msg[1] - 0.006, y = msg[2], z = msg[3];

    var rate = if(x > ~oldX[i]) { 1 } { -1 }; // set direction of grain playback
    var row = 4 - (y * 5).floor;

    ~oldX[i] = x;

    ~players[i].do({ | player, j |
        if(row == j) {
            player.set(\gate, z, \x, x, \y, y, \z, z, \rate, rate);
        } {
            player.set(\gate, 0); // reconstructed: silence the rows this touch isn't on
        };
    });

    ~soundViews.do({ | view, j |
        if(row == j && ~samples[row].numFrames.notNil) {
            { view.setSelection(i, [(~samples[row].numFrames * x), 2000 * z]) }.defer;
        } {
            { view.selectNone(i) }.defer;
        };
    });

}, ('/t3d/tch' ++ (i+1).asString).asSymbol, recvPort: 3123); // make sure recvPort matches the Soundplane client's port
});




So I made a quick video to demonstrate the app.

Here's a little improv I did with an earlier version of it (plus a monome white whale module that I'd just gotten):

The newer version doesn't map z to playback rate (pitch) anymore, but that would be easy to remap if you wanted to. It was kinda hard to control.

I was just playing around with Aalto and Kaivo in Logic X, and the latency is pretty terrible... I'd estimate 80-100 ms. My audio buffer is set to 32 samples, and other synths have practically imperceptible latency.

Actually, scratch that.

Things are just straight weird in Logic X on Yosemite. Some patches won't play reliably (or at all) via OSC, but play fine with MIDI output from soundplane.

Latency is hit or miss. It's reliably fine when triggered via MIDI from soundplane, but unreliable via OSC.

It's been probably more than a year since I played with Aalto + Soundplane in Logic, but I remember it being weird before... is this just not a good combo? Or is Yosemite introducing weirdness?

FWIW, I tried it in AU Lab, and OSC input isn't detected at all.

Thanks for the help!

I just had a chance to get back in and test things again, and you are totally right about the presets.

As far as the latency goes, I'm still seeing significant latency via OSC that doesn't exist when using MIDI from the Soundplane.

Actually, n/m, I just switched it back to OSC from MIDI, and now the latency is fine. Didn't even re-instantiate the plugin.


Anyways, seems like it's probably a Logic thing?

I can say that OSC works fine with SuperCollider, same as it did prior to the upgrade. No perceptible latency.

Thanks again!

Hi there,

I'm wondering if anyone else here is fiddling around with SuperCollider and the Soundplane?

Just in case anyone is interested, I've thrown together a little Soundplane class to make it easier to get to making noise with the Soundplane in SC3. It's a very simple class that basically just creates some OSCFuncs and manages the synths that respond to Soundplane touches.

Currently it's using static voices gated by the 'Alive' message. I'm trying to get dynamic synth spawning based on new touches working, but so far that's proving difficult to do reliably.
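If it helps anyone picture it, the static-voice idea boils down to something like this (a hugely simplified sketch with made-up names, not the actual class internals; it gates by z here rather than by the alive message):

```supercollider
(
// one pre-allocated synth per touch slot, updated in place
~numVoices = 4;
~voices = ~numVoices.collect { Synth(\default, [\amp, 0]) };

~numVoices.do { |i|
    OSCdef(("sp_tch" ++ (i+1)).asSymbol, { |msg|
        var x = msg[1], y = msg[2], z = msg[3];
        ~voices[i].set(
            \freq, 100 + (x * 900),   // x position -> pitch (arbitrary mapping)
            \amp, z.clip(0, 1) * 0.3  // pressure -> level; z = 0 silences the voice
        );
    }, ('/t3d/tch' ++ (i+1).asString).asSymbol, recvPort: 3123);
};
)
```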

Any input from other folks is welcome. I'm a newbie to SuperCollider and programming, so I'm sure there's tons that could be done more intelligently, efficiently, etc.

There's a default synth in the class that is just a saw wave passed through a resonant low-pass filter (cut-off mapped to y), so if you are just curious you should be able to make noise if you install the class, boot the server and evaluate:

p =
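In case you're curious what that default synth does before installing, it's roughly along these lines (a sketch reconstructed from the description above, not the actual class source; the name, argument list, and filter range are mine):

```supercollider
SynthDef(\splaneDefault, { |out = 0, freq = 220, y = 0.5, z = 0, gate = 0|
    var env =,;
    var sig = * env * z.lag(0.05);
    // cut-off mapped to y: low at the near edge, high at the far edge
    sig =, y.linexp(0.001, 1, 200, 8000), 0.3);, sig ! 2);
}).add;
```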