randy's Recent Posts
Thanks for the note. Still working hard and feeling like I'm close to a beta here.
I think you get this. You had it right up until the end. The antennas are not mixed with each other. So to do an 8x8 version you need 8 audio inputs and a separate FFT on each.
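If it helps to see it spelled out, here is a rough sketch of that per-channel processing. This isn't the actual Soundplane client code; the frame size is an assumption and a naive DFT stands in for a real FFT library, just so the example is self-contained:

```cpp
// Sketch only: one transform per antenna input, never mixing the inputs together.
#include <array>
#include <cmath>
#include <complex>
#include <vector>

constexpr int kAntennas = 8;    // 8x8 version: 8 separate audio inputs
constexpr int kFrameSize = 256; // samples per analysis frame (assumed)

// Magnitude spectrum of one antenna's frame (naive O(n^2) DFT as a stand-in for an FFT).
std::vector<float> magnitudeSpectrum(const std::array<float, kFrameSize>& frame)
{
    const float pi = 3.14159265f;
    std::vector<float> mags(kFrameSize / 2);
    for (int k = 0; k < kFrameSize / 2; ++k)
    {
        std::complex<float> sum{0.f, 0.f};
        for (int n = 0; n < kFrameSize; ++n)
        {
            float phase = -2.f * pi * k * n / kFrameSize;
            sum += frame[n] * std::complex<float>(std::cos(phase), std::sin(phase));
        }
        mags[k] = std::abs(sum);
    }
    return mags;
}

// Each antenna gets its own transform; nothing is summed across antennas.
void processAllAntennas(const std::array<std::array<float, kFrameSize>, kAntennas>& inputs)
{
    for (int a = 0; a < kAntennas; ++a)
    {
        std::vector<float> mags = magnitudeSpectrum(inputs[a]);
        // ...pick out the carrier bins for this antenna and use their magnitudes...
        (void)mags;
    }
}

int main()
{
    std::array<std::array<float, kFrameSize>, kAntennas> silence{};
    processAllAntennas(silence);
}
```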
I also wish you could describe how to record it properly. For example, some render settings do a good job even when the real-time audio is distorting from CPU load. Reaper's offline/idle render seems to work. I will do more experiments. I wish this advice could be available to the community.
Sorry, I just saw this post. It's hard to offer advice about recording because it depends almost entirely on the DAW, and there are a lot of different ones. Please post about your experiments in the software forum and hopefully we can build up some knowledge about Reaper use over time. I don't think I have a lot of users on Reaper but from what I have heard it is a good alternative to have.
OK, I fixed it for today only.
A lot of people seem to have missed it this time around, so you can use the code 'nottoolate' until the 30th. Enjoy.
Oops, sorry, my mistake, I had Aalto on the brain. I don't know about any then.
If you missed by a day or two feel free to email me at support for the hookup.
I realized that I could at least give users of Mac OS 10.6 some more security by making a permanent download for Aalto 1.6.1. So if you are still on Mac OS 10.6, and would like access to it, let me know and I'll add a special license for Aalto 1.6.1 to your account. This will enable a personalized download of 1.6.1 indefinitely, and of course you can still download Aalto 1.7 or higher if and when you upgrade your computer.
There are three good sets by Adrian Jimenez: you can find these at http://www.zensound.es/ . I think there are some others out there too.
@granum, thanks for your support!
When a DAW sends audio to a plugin for processing, it stores chunks of samples in a sample buffer. If this buffer is big, say 2048 samples, the buffering adds latency you can hear. When the buffer is very small, say 16 samples, processing overhead starts to use up lots of CPU. There is almost always a user setting to vary this buffer size.
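To put rough numbers on that tradeoff (assuming a 44.1 kHz sample rate):

```cpp
// Back-of-the-envelope buffering latency at a few common buffer sizes.
#include <cstdio>

int main()
{
    const double sampleRate = 44100.0; // assumed
    const int bufferSizes[] = {16, 256, 2048};
    for (int size : bufferSizes)
    {
        double latencyMs = 1000.0 * size / sampleRate;
        std::printf("%4d samples -> %.1f ms of buffering latency\n", size, latencyMs);
    }
}
```

So 2048 samples is around 46 ms, which you can clearly hear as a delay, while 16 samples is well under a millisecond but makes the host call the plugin far more often.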
Some DAWs do not send the same number of samples every time, but send a variable number: say 256, then 3, then 17, then 216... This is usually done to make sample-accurate automation work. However, some plugins have a problem with these different-sized chunks and this can cause bugs. So there is sometimes a setting to disable this buffer-size changing, hence the name: fixed buffer size.
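From the plugin side, the safe pattern is simply to process however many frames the host hands you on each call. This is a generic sketch, not any particular plugin API:

```cpp
// Generic sketch: the host may call process() with a different frame count every
// time (256, 3, 17, 216, ...). A plugin that secretly assumes one fixed size can
// misbehave; looping over whatever count arrives is the safe pattern.
#include <cstddef>
#include <vector>

struct Plugin
{
    float lastSample = 0.f;

    // nFrames can change on every call when the host does sample-accurate automation.
    void process(const float* in, float* out, std::size_t nFrames)
    {
        for (std::size_t i = 0; i < nFrames; ++i)
        {
            // trivial one-pole smoother, just to have state that carries across calls
            lastSample = 0.99f * lastSample + 0.01f * in[i];
            out[i] = lastSample;
        }
    }
};

int main()
{
    // Feed one signal in chunks of varying size, like such a host would.
    std::vector<float> in(512, 1.f), out(512, 0.f);
    Plugin p;
    const std::size_t chunkSizes[] = {256, 3, 17, 216};
    std::size_t pos = 0;
    for (std::size_t n : chunkSizes)
    {
        p.process(in.data() + pos, out.data() + pos, n);
        pos += n;
    }
}
```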
I have tested that Aalto / Kaivo work with changing buffer sizes in the past. But maybe there is some problem that only Bitwig's particular way of doing it reveals. Or maybe this is not the issue at all! But now you know what I meant.
Please check out this article on "getting the most out of Kaivo": http://madronalabs.com/topics/3565-getting-the-most-out-of-kaivo
It's got some tips and points out ways that Aalto and Kaivo are a bit different from other synths in how you use them. Turning down the number of voices to the number you are actually using is often a good idea. Also, turning off graphics may help.
I haven't used Reaper much, but if it's at 20-30% CPU and still not recording, that's a problem. What happens instead of recording? Do you get glitches?
Most DAWs have a freeze or offline render option where the track is rendered in non-real time. This should do what you are trying to do, if you can find such an option in Reaper.
Hey zenwarlord at yahoo.com, thanks for the feedback, please check your email re: duplicate purchase.
Any pre sale with the winter discount?
No.
Hi, I'm working to finish the next plugin very soon, hopefully in the next few weeks. After that I will look at this issue.
Meanwhile, does Bitwig have some kind of "fixed buffer size" setting? That has been known to fix similar issues.
Hi there. The only way to sync the oscillator is through a trigger sent to the "reset" input. You could send that trigger from the sequencer, or from one of the envelopes.
Or if those are in use you could send it from a controller using the KEY module. By drawing a CC curve in your DAW you can do the resync wherever you like. The resync will happen when the CC changes from 0 to 1 or greater.
There is an orange RSS link to the right of "News" on the front page. This feed is for the main News section, where I will certainly post when Soundplanes become available, and when it is anything like close. Also, in your account settings you can decide whether you are subscribed to the newsletter or not.
How would I set up the Soundplane to achieve this?
I set up the Soundplane's defaults with this kind of playing in mind. Just turn a "quantization" toggle off, and any scale you pick will be played in a fretless way.
After some urging from people I also recently implemented .kbm and .scl file support, so the scales can be mapped to the keyboard flexibly. Most of the built-in scales use a default keyboard mapping.
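For anyone who hasn't seen one, an .scl file is just a small text file: comment lines start with '!', then comes a one-line description, the number of notes, and one pitch per line, in cents (with a decimal point) or as a ratio. A made-up example:

```
! example.scl (hypothetical file, just to show the format)
12-tone equal temperament
 12
!
 100.0
 200.0
 300.0
 400.0
 500.0
 600.0
 700.0
 800.0
 900.0
 1000.0
 1100.0
 2/1
```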
A good reason for Aalto to have MIDI out would be to capture the crazy rhythmic things you can do with the clock and PLL sync in the sequencer. These are hard to get with any other tools I'm aware of.
As far as what notes are coming out, just listen I guess. It so depends on how the sequencer is being used that there is nothing "automatic" I can think of that would help.
The Soundplane is a continuous surface at heart. It can give continuous pitch response or discrete steps, depending on how you set up the software. It's much more like the Continuum in this regard, but with the added benefit of full 2D sensing.
Greetings! I'm writing to announce the Madrona Labs five-day winter sale, starting now. From now until December 10, all of our software is 50% off. Now is a great time to get everyone’s favorite patchable software synths for yourself or a friend. To get the discount, use the coupon code ‘dougfir’ on any product page, just above the ‘Buy Now’ button.
If you would like to give someone an Aalto or Kaivo license as a gift, it’s easy.
Just buy the software in the usual way, but enter your friend's first and last name instead of yours when you make the Madrona Labs account. Then you can gift your friend the account name and password. Your friend can log in, download the software and change the account email to his or her very own.
With a beta, maybe.
but why is magnitude 1/distance, and not 1/distance squared?
That is a very hard question to answer. Why do the laws of physics do one thing and not another in this case? It would be a long answer and I would have to look a bunch of stuff up. And then ultimately we would get to some aspect of quantum electrodynamics as we currently understand it, and the same question would still be there: why is it that way? And ultimately, we don't know.
I guess maybe the question you really want answered is: why do I say it is one over the distance, and how do I know that? And the reason is that I looked up the equation for the capacitance between parallel plates. I really don't understand it any more deeply than that.
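For what it's worth, this is the formula I mean, for an ideal parallel-plate capacitor with plate area A and plate separation d, ignoring fringing at the edges:

```latex
C = \frac{\varepsilon_0 \varepsilon_r A}{d} \propto \frac{1}{d}
```

The familiar inverse-square law is for the field of a point charge; between plates that are large compared to the gap, the field is roughly uniform, so the capacitance falls off as 1/d rather than 1/d squared.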
...so 15 volts p-p?
Sure, or whatever your audio interface is capable of putting out; it depends on the interface. What you don't want is distortion, so use an oscilloscope or similar and make sure you don't have any.
what is the optimal distance between strips to stop cross-talk?
I used to use bigger gaps, but the capacitance between the edges is very small. A few mm should be fine.
Or is the design less for striking and more for touching?
The design is for striking, touching, whatever. As a basis for all these things you need an accurate position; the rest of the behavior depends on whatever you do with that data. Most of the time I am controlling envelopes directly with pressure. Or in a physical modeling setup I may be adding energy with velocity. I don't think about MIDI ever unless I have to; it is very limiting.
My overall advice as always is just to experiment with one or two junctions before you build a whole array.
I remember the B-spline is only needed for the audio synthesis. If you just want a controller, you are probably fine using the magnitude data directly. There is a paragraph somewhere in my thesis about the filtering being in some sense optimal. It doesn't preserve all the bandwidth but for sending controller data to an audio signal it's probably more important to avoid audible glitches.
how do I extract velocity information from this set?
Phase is just a detail of the FFT, which is being used as a multiplexer for real data. So you put real data in and get real data out. The magnitudes are proportional to 1/distance, so the range of physical distances you use is very important. Given the magnitudes, you take the inverse to get position, and to get velocity you take the derivative.
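In code that step is tiny. Here's a minimal sketch, with the calibration constant and the frame rate made up for the example; in practice you would fit magnitude = k / distance from measurements over the physical range of travel:

```cpp
// Minimal sketch of the position/velocity step described above.
#include <cstdio>

struct TouchFilter
{
    float k = 1.0f;            // assumed calibration: magnitude = k / distance
    float frameRate = 1000.f;  // assumed rate of incoming magnitude frames, in Hz
    float prevDistance = 0.f;
    bool hasPrev = false;

    // Feed one magnitude sample; get distance (position) and velocity back.
    void update(float magnitude, float& distance, float& velocity)
    {
        distance = k / magnitude;  // invert: magnitude is proportional to 1/distance
        velocity = hasPrev ? (distance - prevDistance) * frameRate : 0.f; // finite difference
        prevDistance = distance;
        hasPrev = true;
    }
};

int main()
{
    TouchFilter t;
    const float mags[] = {2.0f, 2.2f, 2.6f, 3.0f}; // made-up magnitudes for a press
    for (float m : mags)
    {
        float d = 0.f, v = 0.f;
        t.update(m, d, v);
        std::printf("distance %.3f  velocity %.3f\n", d, v);
    }
}
```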
Yes, sounds useful. To varying extents in different DAWs, I guess. Thanks for the tips.
I'm working hard to get a beta out this month, at least.
Maybe Dec 15-20? Timing will depend a bit on my Virta work so please stay tuned.
Thanks for getting back with the solution. I'm glad it's sorted.
You mean each voice on its own stereo pair of outputs? That's a lot of outputs (when we get 8 voices for Aalto anyway). Would one per voice be as useful? What software would you use to route them? Curious about your application.