randy's Recent Posts
Thanks for the question.
Sumu has 64 bandwidth-enhanced, frequency-modulated partials. So comparing it against other synths is really comparing completely different things. A single partial in Sumu would be a musically useful oscillator all by itself.
If your additive synth can make wide-bandwidth noise, then it has enough partials! Then the fun is in figuring out how to control all the parameters of a voice meaningfully and expressively — I'm much more interested in this "how" than in "how much" or "how many".
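To illustrate that point, here's a toy Python sketch (not Sumu's actual DSP, just an illustrative assumption): sum a bank of sine partials with random frequencies and phases, and with enough partials the result already fills the spectrum like wide-bandwidth noise.

```python
import math
import random

def additive_noise(n_partials=64, sr=48000, n_samples=4800, seed=1):
    """Sum n_partials sines with random frequencies and phases.

    A toy sketch, not a real synth engine: it only shows that a
    modest bank of partials can approximate broadband noise.
    """
    rng = random.Random(seed)
    partials = [(rng.uniform(20.0, 0.45 * sr), rng.uniform(0.0, 2.0 * math.pi))
                for _ in range(n_partials)]
    out = []
    for i in range(n_samples):
        t = i / sr
        s = sum(math.sin(2.0 * math.pi * f * t + ph) for f, ph in partials)
        out.append(s / n_partials)  # normalize so the sum stays in [-1, 1]
    return out
```

Listening to the output of something like this makes the "enough partials" claim concrete: the individual sines stop being audible as pitches and fuse into a noisy wash.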
Hi, thanks for posting.
You have the right idea about OSC. As far as I know there's nothing really that records it so you just record the audio.
MPE is just MIDI, so any DAW should be able to record it and play it live. If a plugin / synth supports MPE, you can play with all the multi-dimensional control a Soundplane can offer.
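Here's a minimal byte-level sketch of what "MPE is just MIDI" means (the channel numbers and values are made up for illustration): MPE assigns each sounding note its own MIDI channel, so ordinary channel messages like pitch bend and channel pressure become per-note controls.

```python
# Plain MIDI channel voice messages; MPE just gives each note its own
# "member" channel so these become per-note controls.
def note_on(ch, note, vel):
    return bytes([0x90 | ch, note, vel])

def pitch_bend(ch, value):  # 14-bit value, 0..16383, 8192 = center
    return bytes([0xE0 | ch, value & 0x7F, (value >> 7) & 0x7F])

def channel_pressure(ch, amount):
    return bytes([0xD0 | ch, amount])

# Two held notes on member channels 1 and 2 (0-based), each with its
# own independent bend and pressure -- impossible on one shared channel.
stream = b"".join([
    note_on(1, 60, 100),
    note_on(2, 64, 100),
    pitch_bend(1, 8192 + 512),   # bend only the first note
    channel_pressure(2, 80),     # press only the second note
])
```

Since these are ordinary MIDI bytes, any DAW that records MIDI can record and replay them; an MPE-aware synth just interprets the per-channel messages as per-note expression.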
I wish I were more up on the Soundplane -> CV possibilities. If the Reaktor thing you are using does MPE, that should work. Also you could just get something like a Polyend Poly and send it MPE MIDI over a USB cable.
It shouldn't be hard. I would try to interest someone else who is a Reaktor user and can write the software. I don't have time to work on it but I'm happy to answer questions.
Wow, old thread... but a good idea! Thanks for posting.
Thanks for writing. What you ask is totally reasonable. At the very least an alias should work, which it does not seem to, now. So this is on my list to fix.
I'll appreciate the info, thanks. As far as I know this is an AU-only and computer keyboard-only problem.
I just received my first Apple Silicon-based computer this past weekend. Of course I was excited to find out how all the Madrona Labs software worked on it, so I dove right into some informal tests and benchmarks.
None of my plugins are yet released in native versions for the Apple Silicon-branded ARM processor in the new Macs—rather, they run using Apple's Rosetta 2 emulator which translates their Intel code into ARM instructions. Kaivo is definitely capable of putting a heavy load on the CPU, so I used it as a basis for testing. Because Ableton Live is the hosting environment my customers use the most, and also because it's not yet released for native Apple Silicon, I did the bulk of my testing in Live 10. So the test is all in emulation, and hopefully gives a good general indication of what moving your existing software setup over to an M1 Mac would be like. I tested both the M1 and my 2015 MacBook Pro, which is my daily driver and a pretty typical machine for a lot of my customers out there.
I said above that these tests are informal. That means I didn't take the average of multiple runs; I just did my measurements by looking at Apple's Activity Monitor, and my reports of interface speed are subjective. That said, I have seen the same behavior while working on the new MacBook Air over the last few days, and everything is consistent with the reporting I've read on these new computers.
- MacBook Air (M1, 2020)
- 8GB RAM
- macOS Big Sur v. 11.1
- MacBook Pro (Retina, 13-inch, early 2015)
- 2.7 GHz Intel Core i5
- macOS Mojave v. 10.14.6
- Ableton Live 10.1.30
- Kaivo 1.9.4
All tests were run at 48000Hz, with Live's sample buffer size set to 512. Built-in audio was used. Each voice used the factory patch "peaking lights," a high-CPU patch with all the resonators and body code in use.
I looked at the CPU percentage used when no Kaivo windows were visible. In addition, I noted if there were any audible glitches, how Live's UI performed with one Kaivo window showing, and how warm the computer was. Here's the data I collected:
Test 1: 32 voices of Kaivo (4 instances x 8)
M1 Air: %CPU 138, ± 1; UI a bit slow but usable
2015 MacBook Pro: %CPU 310, ± 10; heat: warm, fan audible
Test 2: 16 voices of Kaivo (2 instances x 8)
M1 Air: %CPU 82, ± 1
2015 MacBook Pro: %CPU 133, ± 5; UI slow but usable; heat: warm, fan audible
Test 3: 12 voices of Kaivo (2 instances x 6)
M1 Air: %CPU 78, ± 1
2015 MacBook Pro: %CPU 101, ± 2; UI slow but usable
An Ableton Live project with 32 voices (4 instances) of Kaivo is totally usable on the M1 MacBook Air, with only a little bit of UI slowdown evident with a Kaivo window open. The same project is definitely not runnable on my old MacBook Pro.
Interestingly, looking at the eight cores of the M1 running all these voices, it appears that only four of them are doing the heavy lifting. This probably accounts for the fast UI response even under the heaviest load I tested. The M1 has four performance cores and four less powerful efficiency cores—my guess is that cores 5–8 here are the performance cores. Right now it takes more esoteric tools to determine this for sure.
On my old MacBook Pro, I can add up to around 12 voices of Kaivo before glitching is audible. (This is over two instances of the plugin. The same number of voices over more instances will take a little more CPU because of the overhead of each instance, but may be less glitchy because the scheduling is easier, soooo... it's complicated.)
It may not seem too amazing that a new machine is much faster than a five-year-old one. But remember, both Live and Kaivo are compiled for a different processor and running in emulation! This is an impressive feat, especially if you remember the lackluster performance of the original Rosetta's PowerPC-to-Intel emulation.
I tested my five-year-old laptop against the M1 not to be mean, but because I don't have a newer one. Since the work Kaivo is doing is basically CPU-bound, looking at relative CPU benchmarks for newer machines should give us a decent guess at how they would perform in the same test. Geekbench gives us the following multi-core scores, where higher is faster: new Air: 7614; old Pro: 1358; recent 13-inch MacBook Pro (mid 2020): 4512. So going from my old Pro to a current 13-inch model should give roughly 4512 / 1358, or about 3.3 times the number of voices, which is about what the M1 can do in emulation.
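The back-of-the-envelope scaling can be written out explicitly. This is only a rough sketch under the CPU-bound assumption: it uses the Geekbench 5 scores above and the 12 glitch-free voices on the old Pro as a baseline, and it deliberately ignores emulation overhead and scheduling effects (which is why the M1's real-world emulated result lands below its estimate).

```python
# Rough, CPU-bound scaling estimate from Geekbench 5 multi-core scores.
# Baseline: ~12 glitch-free Kaivo voices on the 2015 MacBook Pro.
scores = {
    "2015 MacBook Pro": 1358,
    "2020 13-inch MacBook Pro": 4512,
    "M1 MacBook Air": 7614,
}
baseline_voices = 12
baseline_score = scores["2015 MacBook Pro"]

for machine, score in scores.items():
    est_voices = baseline_voices * score / baseline_score
    print(f"{machine}: ~{est_voices:.0f} voices")
```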
If you have a Mac that's more than a couple of years old, the upgrade to any Apple Silicon Mac should be a big leap in speed, even when running applications that are not yet native. If you have a high end or more recent Mac, one that pulls in a Geekbench 5 multi-core score of around 5000 or higher, it probably makes sense to wait—either until your favorite apps are all native, or for the next generation of Apple Silicon computers.
I have not tested on M2 personally but I think a lot of my customers have M2 computers by now. If there is an issue on your new computer, I think it's more likely to be software related than to do with the M2 chip.
Can you please give more info about what you mean when you say it doesn't work? What specifically is the symptom? What did you expect to happen and what happens instead? Thanks.
Thanks for writing. I like the platform and still want to deploy on iOS. The pricing is hard. I wouldn't want to charge you another $89 for Virta iOS! Ideally one purchase would cover both platforms in the future but Apple makes it hard to do this. Fortunately we have smart friends with the same problem and I know we will figure something out, given time.
Sorry, I stopped adding people to the discord while I'm finishing the beta. This is because I have as many people as I think I can handle now. There's currently no beta to try anyway but I'm getting close!
Sumu comes first!
I realize the Aaltoverb preset UI is really limited. As part of Sumu I'm working on a more capable preset browser that will be shared with all the instruments.
Sounds fun! I don't have anything beyond what's on the website here—you can drag the plugin UI to any size you like and take a screenshot.
Sumu beta comes first.
If you look at the Sumu preview, you can see that this and a lot of other manipulations are possible. I guess that part is obvious, so maybe what's not obvious is my idea of the workflow:
Vutu: make faithful reproductions of the sound as partials
Sumu: mess those partials up
I'm going to make this thread sticky. It should be a good place to find and share Aalto patches. I'll try to post one every day or two for a while.
It would be cool if we could embed Soundcloud links here, but setting that up will take some time.
It should run on 10.10 and up. But I can't say with certainty until the beta is out.
by Dave Segal
“The intent [with my music] is to get to a point where people listen but aren't sure how they got to this place.”
American producer Seth Horvitz's transition from Sutekh to Rrose represents a subtle yet monumental occurrence in electronic music. As Sutekh, Horvitz created heady, spare techno and skewed, funky IDM (see also his Pigeon Funk side project with Kit Clayton and Safety Scissors for the latter style). The Sutekh years—1997 to 2010—resulted in a large, respected catalog with releases on Force Inc., M-nus, Soul Jazz, Orac, Plug Research, and his own label Context.
Without question, Sutekh had name recognition and had risen to festival-playing status, but the endeavor had reached a logical end. For as rigorous and cerebral as Sutekh's output was, it never quite attained the sublime. From a club DJ's perspective, the tracks were more warm-up material than peak-time weaponry, although “Dirty Needles” from 1998's Influenza EP approaches anthem status. And On Bach (2010) is an experimental-techno record that foreshadows some of Rrose's explorations of chilling atmospheres and pointillistic textures. Oddly, it's Sutekh's best, most adventurous album, as Horvitz loosens the structural reins and goes out on several mad limbs timbrally. “The Last Hour” is a fierce organ drone that reflects Horvitz's time spent studying with master maximalist composer/keyboardist Charlemagne Palestine. (They released the collaborative LP The Goldennn Meeenn + Sheenn in 2019.)
“I was kind of flailing for a direction,” Horvitz says about Sutekh's last days. “I was trying a lot of different things. I was trying to be influenced by all the different music that I love. I have really broad taste in music, and I was trying to incorporate it all, and it became somewhat of a mess. I kind of lost focus from this root in techno. Then I got tired of techno and wanted to try other things, but I didn't really find the place to go. So the transition point was going back to school and studying at Mills for two years and getting my Master's degree.”
Horvitz's Mills experience and subsequent encounter there with experimental musician Bob Ostertag convinced him that it was time to move in a new musical direction under a different name: Rrose. Their remix project, heard on 2011's Motormouth Variations, launched Horvitz into deeper, darker techno realms. The wildly percolating and bizarrely textured “Arms And Legs [variation one]” serves as a perfect merger of the two artists' skills. And while he doesn't have any desire to revive Sutekh, Horvitz—who played the first Mutek festival in 2000—did a DJ set as Sutekh a few years ago, and enjoyed it. But for the foreseeable future, Rrose remains Horvitz's primary focus. Which is understandable, as Rrose reigns as one of the planet's most riveting techno producers.
Rrose's ascent began with the Primary Evidence and Merchant Of Salt EPs that British label Sandwell District released in 2011. They established the severe, mesmerizing techno and remorseless, industrial atmospheres that have become Rrose hallmarks. “Waterfall” from the latter record set the bar high for trance-inducing transcendence within a technoise framework. It's Rrose's greatest and most psychedelic track, but Horvitz is actually most proud of the 2019 LP Hymn To Moisture (like most Rrose releases, it's on Horvitz's Eaux imprint). “It felt like a culmination of many years of doing this project. It's always a challenge to create something that feels really cohesive beyond just a longer collection of tracks, which I think happens a lot, especially in techno. I embrace the challenge of trying to create something that feels like one piece where all the tracks belong together and support each other and tell a different story from the EPs.”
With its variations on subtle rhythmic hypnosis and textural otherworldliness, Hymn To Moisture achieves rarefied effects without relying on the crutch of melody. Rather, Rrose creates tension and drama through gradual ebbs and flows of microtonal sediments. One outlier, “Horizon,” is as cosmic as anything by New Age genius JD Emmanuel.
Horvitz got exposed to microtonal music while DJing at UC Berkeley's KALX radio station from the early to the late '90s and from hitting Bay Area record stores every week. He picked the brains of KALX DJs and spent many late nights browsing the station's abundant vinyl library. Through some friends at Mills College, Horvitz got to know the work of experimental composer Pauline Oliveros, who was teaching there at the time.
“What attracts me to this idea of microtonal music is partly the way it plays with our perceptions and the way it gets away from the typical emotional structures that are in so much tonal music we listen to.
[I]t places your focus firmly on the sound and what the sound is doing to you in almost a physical sense more than an emotional sense.
Of course they're related. But the focus is more on perception of the physical response to the sound itself, rather than telling a narrative or expressing emotions.”
Going back even further to Horvitz's earliest interest in electronic music, he can trace it to the feeling as a listener that “anything is possible with electronic music, and any sound is possible. Later on I realized that almost any sound is possible with acoustic instruments, as well. So it's a different palette. But the possibilities are a little more diverse in generating electronic sounds, as far as making sounds that we've never heard before.”
After phases of liking band-based music such as punk, goth, indie-rock, and industrial, Horvitz discovered early-'90s dance music and its attendant culture. “The idea that I could just get a couple of pieces of random equipment and make electronic music was also exciting.” He cites two formative discoveries as a fledgling DJ in the early '90s: Aphex Twin's Selected Ambient Works 85-92 and Detroit techno renegades Underground Resistance. “DJ Jonah Sharp [aka Spacetime Continuum] was playing in this chillout room and he had both [Aphex Twin] records on the turntable, and you could see the logo. This was 1992, maybe early '93. I was mesmerized by these logos. I had no idea what it was. Then I actually discovered that record... somebody had brought it into the radio station at Berkeley where I was DJing. That was a real epiphany.” As for UR, Horvitz was enthralled by “the whole mythology around that—the fact that you didn't know who was making the records and it had this political undercurrent to it were exciting to me.”
In a 2014 interview with Secret Thirteen, Horvitz said, “I like hardware synths and use them sometimes, but I'm generally happier with the final result when I can control everything in the computer.” He says that that's still the case, “but those things can kind of work together. I have a couple of analog synths, but I don't connect my studio in this professional way, where everything is through a patch bay and into a mixing console and synced up with all the proper equipment so you can run everything together at the same time, synced with the computer.
“My method of working is much simpler. If I'm going to use a synth, like a Buchla Easel—which is partly what inspired [Madrona Labs'] Aalto synth, which I use a lot—I will sit with that synth and play it for a few hours and record stuff that I like with it. I use that as the inspiration for building a track around it. Sometimes I just use the sounds of the Buchla, maybe do a little more with it in the computer, and that's that. I just focus on one synth and see what I can come up with. I let that be the seed that generates other material.”
Horvitz's preferred software for production is Ableton Live, after years of using Logic, but hardware plays an important role in his productions. “I use all of the Madrona Labs synths, but Aalto is the one I use the most. I do like them all and have found uses for all of them in my tracks. The only other gear that I have around is the Buchla Easel and Lyra, which is made by Soma Labs. It's fun—it's a feedback-noise machine, basically.
“I have a lot of recordings I've made from a couple of residencies I'd done, where I had access to a Serge modular system, an ARP 2500, this kind of stuff. So I tap into these archives of recordings I've made in a couple of residencies in the Netherlands and in Stockholm.”
Like many of the best minimal-techno artists, Rrose avoids blatant emotional signifiers in their work. It's a major part of music to inspire emotions in listeners, but Rrose has decided to de-emphasize that. “I wouldn't say that I've eliminated it, but I'm very aware of it and I try to avoid it. When I made music as Sutekh, I didn't really avoid it. I played with it a lot more. So it was more of a subversive and playful attempt to use tonal language.
“When I started the Rrose project, I made a conscious decision that I was going to stay away from composing melodies, for the most part, and put the focus on the sound. It was interesting because I started trying to make melodic music on my own, and then I became obsessed with learning about it, studying piano, studying jazz and classical music. I wrapped my head around all that theory and then went to Mills College. After learning it all and knowing it, I decided to not use it. I have so much reverence for classical composers and jazz musicians and the way they use the tonal language. One of the reasons I decided not to use it is because I feel like I don't have something important to say in that world that hasn't already been said much better.
“Staying away from that and going into these areas that potentially have shorter history and maybe fewer avenues explored. Applying these non-tonal ideas to techno has a lot of potential and I've been able to make more of a contribution in that area.”
What, if anything, does Horvitz view as the purpose of techno? Does he anticipate that certain people in certain clubs are going to be on hallucinogens and therefore is he trying to enhance that experience? “Ideally, I would like to give someone the same experience as being on a hallucinogen without having to be on one of those,” Horvitz says, laughing. “I don't think I ever quite get there. It's kind of this idea that if you take hallucinogens, you can go on a fast track to some form of enlightenment. But you're never going to be enlightened because you're taking the shortcut. The real way to get there would be to meditate for decades.
“Ideally, the purpose is to create a meditative space for the dancer or listener. I want the listener to experience something that feels like a profound meditation. I think that that can be accomplished through dance, as well. Which is why I get really annoyed when I'm playing for a crowd of people who are all talking. Sometimes people are really social and seem to be really enjoying it and dancing, too. I don't want to get too angry when that happens, but my ideal audience is in a real dark, foggy black box where you can tell everyone is in the zone for the whole set. I want them to have a meditative experience... hopefully, not too specific. Meditation, but the standard way to meditate would be with silence. The idea is to experience whatever is happening in your mind. I hope to have something similarly open-ended.
“Of course, there's an entertaining aspect: People go to dance and they hear this music and it gets your adrenaline going and stimulates all kinds of other things. But I wanted to have some relationship to this meditative state.”
Meditative states also are manifested by Horvitz's excellent collaboration with Luca “Lucy” Mortellaro, known as Lotus Eater. Compared to Horvitz's makeshift home-studio setup, Mortellaro has a big modular rig and patches everything properly. “It's fun to work in someone else's space that has a whole different setup. It ends up being much more spontaneous, especially on the most recent album [Plasma]. We worked really fast on it. The first one [Desatura] took a little longer, compared to how I work on Rrose stuff, where I comb over every detail and revisit things.
“[Lotus Eater is] much more spontaneous and improvisational. It's fun to work that way. It's almost more limited in the sound sources approach than Rrose. We don't use melodies and chords, but we also focus on noise and feedback as central sound sources.” Lotus Eater's music carries a strain of dread similar to that of Throbbing Gristle and Les Vampyrettes. Plasma (Stroboscopic Artefacts, 2022) is a solarized palimpsest of minimal techno, an infernal phantasm of techno's subliminal pulses, a reduction to molecular activity, beats and textures composted into charcoal dust.
While Lotus Eater is anything but formulaic, dance music relies heavily on formulas. Rrose sometimes defaults to certain production techniques or tropes in order to satisfy DJs and dancers, but never in obvious ways. “There are times when I use methods that I know might be slightly manipulative or might achieve a certain effect. But I want it to creep up on people in a way that still feels natural. I don't want to surprise people in the way that I may have done as Sutekh, where I wanted to jar people—at least in the later years of that project. [With Rrose], I want to do something more seductive, like bring people to that point of euphoria, but in a mysterious way, so they're not sure how they got there.
“I have certain working methods that can achieve certain effects, but I try to keep them evolving.” Rrose has put those methods to powerful use on their next two releases on Eaux: the Tulip Space EP (out in February) and the Please Touch album (likely out in May). Tulip Space contains some of Rrose's most slamming dance-floor fillers and weirdest abstract explorations while Please Touch delves more intensely into Rrose's well of hallucinogenic textures and disorienting implied rhythms. Also, a slew of Rrose remixes loom on the horizon for 2023, including those for Pole, JK Flesh (aka Justin Broadrick), Luigi Tozzi, Dutch techno duo Abstract Division, and others.
“The intent [with my music] is to get to a point where people listen but aren't sure how they got to this place. Still, it feels like it's grabbing you and taking you somewhere unexpected, but drawing you in in this immersive way.”
Thanks @sntr for the input.
@alino.romano does this problem happen with MIDI from a keyboard controller, or only the computer keyboard MIDI like @sntr is talking about? Thanks.
Instead of closing and reopening the plugin, what if you change the resonator mode to something else and back? Does that fix it?
If not, you could also try the body. In this way we might narrow it down.
I have not seen the midi input issue and will investigate.
Huh, I changed the extension to .utu, and on my machine it doesn't even let you load the .json anymore, so I didn't find this issue. I'll try to load a .json and fix. Meanwhile, if you change your .json to a .utu, hopefully it will work?
The noise component is very important to Sumu.
Each time point is corrected for each partial, starting from the center of the FFT frame and then moving forward or backward. So mapping to ~sinusoids might not be worth it. Maybe there's a Loris resynthesis object for Max out there somewhere?
Glad that works. Everyone wants to protect you from evil people like me who might try to give you a computer program.
here's a past thread with the same idea!
Apparently Aalto is a gateway drug :-)
Works for me. Did you save the link or something? It will change whenever the version changes.
There's no need to sign up for the beta, I'll announce when it's available via the website.
Sumu voices: I'm not totally sure!
here's a newer link: https://discord.gg/BAJYprsU
In the video you sent, you are turning the dial to change the number of voices. Doing so makes more lights in the sequencer, because when voices are turned on their sequencer positions are reset to 0. So you go from all voices being at the same sequencer step to being at different steps.
If you connect the KEY module's gate output in Kaivo to the sequencer trig in, this will do the same thing as turning on "key trig" in Aalto.
Hi stew, I'm trying to understand what your patch is doing but from your description I'm not quite sure of how the notes are being triggered. If you want to just make a small movie and share it with me by email that might be easier.
Aalto and Kaivo should work in the same way as far as the sequencers and triggering.
Each voice does have its own sequencer offset, and these will be different depending on when you reset the sequencers with a MIDI note.
I feel bad Kaivo is still not working smoothly for garf. Lots to do here as I try to get a new synthesizer out. I'll try to get out a Kaivo update soon. Qoqo (our new social media person) is also seeing Kaivo pretty slow on her laptop so hopefully this will help with testing.
Yes, I could add that at some point.