randy's Recent Posts

I'd appreciate the info, thanks. As far as I know this is an AU-only and computer keyboard-only problem.

I just received my first Apple Silicon-based computer this past weekend. Of course I was excited to find out how all the Madrona Labs software worked on it, so I dove right into some informal tests and benchmarks.

None of my plugins are yet released in native versions for the Apple Silicon-branded ARM processor in the new Macs—rather, they run using Apple's Rosetta 2 emulator which translates their Intel code into ARM instructions. Kaivo is definitely capable of putting a heavy load on the CPU, so I used it as a basis for testing. Because Ableton Live is the hosting environment my customers use the most, and also because it's not yet released for native Apple Silicon, I did the bulk of my testing in Live 10. So the test is all in emulation, and hopefully gives a good general indication of what moving your existing software setup over to an M1 Mac would be like. I tested both the M1 and my 2015 MacBook Pro, which is my daily driver and a pretty typical machine for a lot of my customers out there.

I said above that these tests are informal. That means I didn't take the average of multiple runs, I just did my measurements by looking at Apple's Activity Monitor, and my reports of interface speed are subjective. That said, I have seen the same behavior while working on the new MacBook Air over the last few days, and everything is consistent with the reporting I've read on these new computers.

Tests

Configurations tested:

New Air:

  • MacBook Air (M1, 2020)
  • 8GB RAM
  • macOS Big Sur v. 11.1

Old Pro:

  • MacBook Pro (Retina, 13-inch, early 2015)
  • 2.7 GHz Intel Core i5
  • macOS Mojave v. 10.14.6

Common software:

  • Ableton Live 10.1.30
  • Kaivo 1.9.4

All tests were run at 48000Hz, with Live's sample buffer size set to 512. Built-in audio was used. Each instance ran the factory patch "peaking lights," a high-CPU patch with all the resonators and body code in use.

I looked at the CPU percentage used when no Kaivo windows were visible. In addition, I noted if there were any audible glitches, how Live's UI performed with one Kaivo window showing, and how warm the computer was. Here's the data I collected:

Test 1: 32 voices of Kaivo (4 instances x 8)

New Air:
%CPU: 138 ± 1
glitches: none
UI: a bit slow but usable
heat: warm

Old Pro:
%CPU: 310 ± 10
glitches: many
UI: unusable
heat: warm, fan audible

Test 2: 16 voices of Kaivo (2 instances x 8)

New Air:
%CPU: 82 ± 1
glitches: none
UI: fast
heat: warm

Old Pro:
%CPU: 133 ± 5
glitches: intermittent
UI: slow but usable
heat: warm, fan audible

Test 3: 12 voices of Kaivo (2 instances x 6)

New Air:
%CPU: 78 ± 1
glitches: none
UI: fast
heat: cool

Old Pro:
%CPU: 101 ± 2
glitches: none
UI: slow but usable
heat: warm

Discussion

An Ableton Live project with 32 voices (4 instances) of Kaivo is totally usable on the M1 MacBook Air, with only a little bit of UI slowdown evident with a Kaivo window open. The same project is definitely not runnable on my old MacBook Pro.

Interestingly, looking at the eight cores of the M1 running all these voices, it appears that only four of them are doing the heavy lifting. This probably accounts for the fast UI response even under the heaviest load I tested. The M1 has four performance cores and four less powerful efficiency cores—my guess is that cores 5–8 here are the performance cores. Right now it takes more esoteric tools to really determine this.

On my old MacBook Pro, I can add up to around 12 voices of Kaivo before glitching is audible. (This is over two instances of the plugin. The same number of voices over more instances will take a little more CPU because of the overhead of each instance, but may be less glitchy because the scheduling is easier, soooo... it's complicated.)

It may not seem too amazing that a new machine is much faster than a five-year-old one. But remember, both Live and Kaivo are compiled for a different processor and running in emulation! This is an impressive feat, especially if one remembers the lackluster performance of the original Rosetta's PowerPC-to-Intel emulation.

I tested my five-year-old laptop against the M1 not to be mean, but because I don't have a newer one. Since the work that Kaivo is doing is basically CPU-bound, looking at relative CPU benchmarks for newer machines should give us a decent guess at how they would perform in the same test. Geekbench gives us the following multi-core scores, where higher is faster: New Air: 7614, Old Pro: 1358, recent MacBook Pro (13-inch, mid 2020): 4512. So going from my old Pro to a current 13" model should give us roughly (4512 / 1358), or about 3.3 times the number of voices—in the same ballpark as what the M1 can do in emulation.
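
For the curious, here's that arithmetic spelled out in a few lines of Python. This is strictly back-of-the-envelope: it assumes the voice count scales linearly with the multi-core score, which is only roughly true.

    old_pro_score = 1358    # Geekbench 5 multi-core, MacBook Pro (13-inch, early 2015)
    new_pro_score = 4512    # Geekbench 5 multi-core, MacBook Pro (13-inch, mid 2020)
    old_pro_voices = 12     # glitch-free Kaivo voices measured above

    scale = new_pro_score / old_pro_score    # about 3.3x
    print(round(old_pro_voices * scale))     # about 40 voices, in the ballpark of
                                             # the 32 the M1 ran in emulation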

Summary

If you have a Mac that's more than a couple of years old, the upgrade to any Apple Silicon Mac should be a big leap in speed, even when running applications that are not yet native. If you have a high end or more recent Mac, one that pulls in a Geekbench 5 multi-core score of around 5000 or higher, it probably makes sense to wait—either until your favorite apps are all native, or for the next generation of Apple Silicon computers.

I have not tested on an M2 personally, but I think a lot of my customers have M2 computers by now. If there is an issue on your new computer, I think it's more likely to be software related than to do with the M2 chip.

Can you please give more info about what you mean when you say it doesn't work? What specifically is the symptom? What did you expect to happen and what happens instead? Thanks.

Thanks for writing. I like the platform and still want to deploy on iOS. The pricing is hard. I wouldn't want to charge you another $89 for Virta iOS! Ideally one purchase would cover both platforms in the future but Apple makes it hard to do this. Fortunately we have smart friends with the same problem and I know we will figure something out, given time.

Sumu is an additive instrument that I've had in the works for a long time. Now that it's nearing completion and heading towards a public beta soon, I'm going to break with the way I normally do things and put some detailed info out ahead of its release.

Sumu preview

Sumu is another semi-modular instrument. It shares its patcher-in-the-center design with Aalto, Kaivo and Virta. As you can see, it's on the more complex end of the spectrum, like Kaivo. Everything is visible at once and there are no tabs or menu pages to navigate, which suits the way I like to program a synthesizer: tweaking a little something here, a little something there.

In the same way that Kaivo brought two different and compatible kinds of synthesis together, combining granular synthesis with physical modeling, Sumu combines advanced additive synthesis with FM synthesis.

What's most different about Sumu compared to my other synths is that the signals in the patcher are not just one channel of data, but 64—one for each partial in a sound! By keeping all these channels of data independent and still using the same patching interface, Sumu offers a very usable entry point into additive synthesis, and a range of musical possibilities that have only been approachable with high-end or academic tools or just coding everything yourself... until now.

Sumu oscillators

Each of Sumu's oscillators is the simplest possible kind of FM: a single carrier+modulator pair. And the modulator can produce a variable amount of noise, which, like the modulation ratio and depth, can be controlled individually per oscillator. In a single voice there are 64 such pairs. Obviously a lot of sounds are possible with this setup—in fact, with the right parameters varying appropriately we can reproduce any musical sound very faithfully with this kind of oscillator bank.
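
To make the idea concrete, here's a rough Python sketch of one such pair. This is my own illustration of the concept, not Sumu's actual DSP code, and the function and parameter names are made up for the example.

    import numpy as np

    def fm_pair(f0, ratio, depth, noise, dur=1.0, sr=48000):
        """One carrier+modulator FM pair with a noisy modulator.
        noise: 0 = pure sine modulator, 1 = all noise."""
        t = np.arange(int(dur * sr)) / sr
        sine_mod = np.sin(2 * np.pi * f0 * ratio * t)
        noise_mod = np.random.uniform(-1, 1, t.size)
        modulator = (1 - noise) * sine_mod + noise * noise_mod
        return np.sin(2 * np.pi * f0 * t + depth * modulator)

    # A single Sumu voice runs 64 of these pairs, each with its own
    # frequency, ratio, depth and noise amount.
    partial = fm_pair(f0=440.0, ratio=2.0, depth=1.5, noise=0.2)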

Sumu partials

There are a few ways of generating all of those control channels without the kind of painful per-partial editing that some of the first digital synths required. The first is the PARTIALS module up top, where you can see a diagram of all 64 partials over time. This is a sonogram-style diagram where x is time, y is pitch, and the thickness of each line is amplitude. There is also an additional axis for the noisiness of each partial.
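
If it helps to picture the display, here's a toy Python sketch (mine, not the actual UI code) that draws some fake partial tracks the same way: x is time, y is pitch, line thickness is amplitude. For simplicity it uses one thickness per partial; the real display varies thickness along each track and adds the noisiness axis.

    import numpy as np
    import matplotlib.pyplot as plt

    t = np.linspace(0, 1, 200)
    for k in range(1, 9):                    # 8 fake partials
        pitch = 60 + 12 * np.log2(k) + 0.3 * np.sin(2 * np.pi * 2 * t)
        amp = np.exp(-3 * k * t).mean()      # higher partials fade faster
        plt.plot(t, pitch, color="black", linewidth=6 * amp)
    plt.xlabel("time (s)")
    plt.ylabel("pitch (semitones)")
    plt.show()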

A separate application will use the open-source Loris work by Kelly Fitz and Lippold Haken to analyze sounds and create partial maps.

Sumu envelopes

Another way of generating control data is with the ENVELOPES module. It's a normal envelope generator, more or less—except that it generates 64 separate envelopes, one for each partial. Generally you would trigger them all at the same time, but each one has its own trigger, so they can be separate. Using the “hi scale” parameter, the envelopes of higher partials can be made quicker than the low ones, creating a very natural kind of lowpass contour to the sound.
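
Here's a sketch of how that kind of scaling could work, in Python. The mapping is my guess for illustration, not Sumu's actual parameter math.

    import numpy as np

    def envelope_times(attack, decay, hi_scale, n_partials=64):
        """Per-partial attack/decay times. hi_scale < 1 shortens the
        envelopes of higher partials, giving a lowpass-like contour."""
        k = np.linspace(0.0, 1.0, n_partials)   # 0 = lowest partial, 1 = highest
        scale = 1.0 + k * (hi_scale - 1.0)      # interpolate from 1 down to hi_scale
        return attack * scale, decay * scale

    attacks, decays = envelope_times(attack=0.01, decay=0.5, hi_scale=0.25)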

Sumu pulses

Finally on the top row there’s the PULSES module. This combines an LFO and a randomness generator into one module. The intensity and other parameters of the pulses can be different for every partial. So this makes modulations that can be focused on a certain frequency range, but you don’t have to mess around editing partials one by one. You could also, for example, use the pulses to trigger the envelopes all at different times.

The PULSES module was inspired by my walks in a small canyon near my house, and listening to the very finely detailed and spatially spread sounds of water running in a small creek. Each drop contributes something to the sounds and the interplay between the parts and the whole is endlessly intriguing. 

To make a water drop sound, two envelopes are needed at the same time: a rise in pitch and an exponential decay in amplitude. So PULSES lets you put out two such envelopes in sync. Then of course we generalize to a wider range of functions, so we can find out: what if the drops were quantized, or had different shapes over time? A voice turning into a running river is the kind of scene that additive synthesis can paint very sensitively. The PULSES module is designed to help create sounds like this.
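
Here is a minimal Python sketch of a single drop, just to show the two synced envelopes. It's an illustration of the idea, not the PULSES code itself.

    import numpy as np

    def drop(dur=0.08, f_start=1200.0, rise=1.5, decay_rate=60.0, sr=48000):
        """One water-drop pulse: pitch rises while amplitude decays exponentially."""
        t = np.arange(int(dur * sr)) / sr
        freq = f_start * (1.0 + rise * t / dur)   # rising pitch envelope
        amp = np.exp(-decay_rate * t)             # synced exponential decay
        phase = 2 * np.pi * np.cumsum(freq) / sr
        return amp * np.sin(phase)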

Sumu space

The SPACE module lets us position each partial in the sound independently. Coming back to the creek idea, we can hear that certain pitch ranges happen in certain locations around us due to the water speed and the resonances of different cavities. This all paints a lively acoustic scene. By positioning many little drops independently, while allowing some variation, we can approximate this kind of liveliness.

This module centers around two kinds of data: a set of positions for each partial, known as home, and a vector field, which gives a direction [x, y, z] at each point in a 3-dimensional space. There will be a set of both home and field patterns to choose from. By offering these choices, and a small set of parameters controlling the motion of the partials, such as speed, the homing tendency, and the strength of the vector field, we can quickly create a wide variety of sonic spaces without the tedium of editing each partial independently.
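
In other words, each partial is a little particle pulled toward its home position and pushed along the field. Here's a minimal Python sketch of one motion step, with made-up parameter names, assuming the behavior described above:

    import numpy as np

    def move_partials(pos, home, field, speed, homing, field_amt, dt=0.01):
        """One step for all 64 partials. pos and home are (64, 3) arrays;
        field maps positions to a direction vector at each point."""
        drift = field_amt * field(pos)    # push along the vector field
        pull = homing * (home - pos)      # tendency to return home
        return pos + speed * dt * (drift + pull)

    # Example field: a gentle swirl around the z axis.
    swirl = lambda p: np.stack([-p[:, 1], p[:, 0], np.zeros(len(p))], axis=1)
    home = np.random.uniform(-1, 1, (64, 3))
    pos = move_partials(home.copy(), home, swirl, speed=1.0, homing=0.5, field_amt=1.0)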

The RESONATORS module is very simple and inspired by the section of the Polymoog synthesizer with the same name. It’s simply three state-variable filters in parallel, with limited bandwidth and a bit of distortion for that “warm” sound. In Sumu, a synth we could otherwise describe as “very digital,” it’s nice to have a built-in way of adding a different flavor. 
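
For anyone curious what "state-variable filter" means here, below is a bare-bones Chamberlin SVF in Python, the textbook topology. This is a generic illustration; Sumu's RESONATORS adds the limited bandwidth and distortion on top of something like it.

    import numpy as np

    def svf_bandpass(x, freq, q, sr=48000):
        """Basic Chamberlin state-variable filter, band-pass output."""
        f = 2 * np.sin(np.pi * freq / sr)
        low = band = 0.0
        out = np.zeros_like(x)
        for n, s in enumerate(x):
            low += f * band
            high = s - low - band / q
            band += f * high
            out[n] = band
        return out

    # Three in parallel, roughly in the spirit of the Polymoog resonator bank.
    def resonators(x):
        return sum(svf_bandpass(x, f, q=4.0) for f in (250.0, 1100.0, 3500.0))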

So I have this interface you see above, and a sound engine, and I'm working feverishly to marry the two. To enable all of the animations and the new pop-up menu, I wrote a whole new software layer that provides a completely GPU-based UI kit and interfaces directly with the VST3 library. Because it's been such a long process this time, I'm going to "build in public" more than I am used to doing, and have a public beta period. My plan is for this to start in December. (Yes, of 2021, smarty pants.) Meanwhile I hope this information gives you interested folks something to whet your appetites, and even a basis for starting to think about what kinds of patches you might want to make.

Sorry, I stopped adding people to the discord while I'm finishing the beta. This is because I have as many people as I think I can handle now. There's currently no beta to try anyway but I'm getting close!

I've just posted a public beta of Vutu for macOS. Vutu is the sound analysis program for the upcoming Sumu synthesizer.

A Vutu quickstart video is also online now. I haven't had a chance to write any better documentation yet, and I'm not sure I will before I get the Sumu beta out. However, Vutu in its current form is pretty simple anyway, and most of what you need to know you can find out by fooling around with the dials and listening and looking.

Vutu analyzes sounds using Loris, developed by Kelly Fitz and Lippold Haken at the CERL Sound Group. A detailed intro to Loris is available on Hakenaudio.com: Current Research in Real-time Sound Morphing. More publications are also linked from the CERL Sound Group Loris page. Loris is distributed under the GNU General Public License (GPL), and thus Vutu is also. Vutu's source is available on GitHub.

Vutu is built on a cross-platform GUI framework I developed called mlvg. Compiling it for Windows and Linux should therefore be a reasonably easy task, but I know there will be a bunch of details to iron out, so I'm not taking that on until after I can make a Sumu beta.

That was a lot of info and links. Why would you want to play with Vutu right now? Some reasons might be:

  • You want to get started making your own sound bank for Sumu.
  • You have to try out the newest audio software, whatever it is, and this was just released today.
  • You enjoy looking at bandwidth-enhanced partials and hearing odd noises.

Each voice of Sumu will be able to play back 64 bandwidth-enhanced partials simultaneously. A bandwidth-enhanced partial is basically a single sine wave, modulated with noise. So at any given instant of time, in addition to frequency, amplitude and phase, it also has a bandwidth, or noisiness. Making sounds out of such partials is a very powerful technique, and I think it's pretty easy to grasp. What's been difficult about additive synthesis is the large amount of control data that's needed. How do you generate it all? My answer in Sumu is to use the familiar patchable interface, but extended so that each patch cord carries separate signals for each partial. This allows sound design in a playful, exploratory way that should be familiar to any modular user. Honestly I think it will be fun as hell.
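
Here's the concept in a few lines of Python. It's a simplified sketch: Loris actually low-pass filters the noise and varies every parameter over time.

    import numpy as np

    def be_partial(freq, amp, bandwidth, dur=1.0, sr=48000):
        """One bandwidth-enhanced partial: a sine carrier whose amplitude
        is modulated by noise. bandwidth: 0 = pure sine, 1 = mostly noise."""
        t = np.arange(int(dur * sr)) / sr
        noise = np.random.randn(t.size)   # Loris filters this; here it's raw
        carrier = np.sin(2 * np.pi * freq * t)
        return amp * (np.sqrt(1 - bandwidth) + np.sqrt(bandwidth) * noise) * carrier

    # A Sumu voice sums 64 of these, with freq, amp and bandwidth
    # all varying over time for each partial.
    y = sum(be_partial(220.0 * k, 1.0 / k, 0.1) for k in range(1, 65))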

Thanks to Kelly Fitz and Lippold Haken for creating and sharing Loris. Thanks also to Greg Wuller for helping me get going with the Loris source code, and for utu, which became Vutu. Utu is a Finnish word for "mist" or "fog", like Sumu. Vutu is short for visual utu.

Vutu requirements

A Metal-capable Mac running macOS 10.14 (Mojave) or greater.
Vutu is native for Intel and Apple Silicon.
Since it's an analyzer and not a real-time program (except for playing the results), CPU doesn't really matter.

Sumu comes first!

I realize the Aaltoverb preset UI is really limited. As part of Sumu I'm working on a more capable preset browser that will be shared with all the instruments.

Sounds fun! I don't have anything beyond what's on the website here—you can drag the plugin UI to any size you like and take a screenshot.

Sumu beta comes first.

We have updated all of our software instruments—Aalto, Kaivo, and Virta—to version 1.9.5, bringing native Apple Silicon support for M1 and M2 Macs. The new versions are Universal Binaries, which support both Apple Silicon and Intel processors. Users with Apple Silicon computers should be able to run at least 30% more voices, compared with the previous versions running in Rosetta 2 emulation.

This update is free. Installers contain Universal Binaries for both VST2 and Audio Units V2 versions.

Windows versions are unaffected by this update. Aaltoverb, previously released with Apple Silicon support, is also unaffected.

If you look at the Sumu preview, you can see that this and a lot of other manipulations are possible. I guess this part is obvious, so maybe it's my idea of the workflow that is not obvious:

Vutu: make faithful reproductions of the sound as partials

Sumu: mess those partials up

I'm going to make this thread sticky. It should be a good place to find and share Aalto patches. I'll try to post one every day or two for a while.

It would be cool if we could embed Soundcloud links here, but setting that up will take some time.

It should run on 10.10 and up. But I can't say with certainty until the beta is out.

by Dave Segal

“The intent [with my music] is to get to a point where people listen but aren't sure how they got to this place.”

American producer Seth Horvitz's transition from Sutekh to Rrose represents a subtle yet monumental occurrence in electronic music. As Sutekh, Horvitz created heady, spare techno and skewed, funky IDM (see also his Pigeon Funk side project with Kit Clayton and Safety Scissors for the latter style). The Sutekh years—1997 to 2010—resulted in a large, respected catalog with releases on Force Inc., M-nus, Soul Jazz, Orac, Plug Research, and his own label Context.

Without question, Sutekh had name recognition and had risen to festival-playing status, but the endeavor had reached a logical end. For as rigorous and cerebral as Sutekh's output was, it never quite attained the sublime. From a club DJ's perspective, the tracks were more warm-up material than peak-time weaponry, although “Dirty Needles” from 1998's Influenza EP approaches anthem status. And On Bach (2010) is an experimental-techno record that foreshadows some of Rrose's explorations of chilling atmospheres and pointillistic textures. Oddly, it's Sutekh's best, most adventurous album, as Horvitz loosens the structural reins and goes out on several mad limbs timbrally. “The Last Hour” is a fierce organ drone that reflects Horvitz's time spent studying with master maximalist composer/keyboardist Charlemagne Palestine. (They released the collaborative LP The Goldennn Meeenn + Sheenn in 2019.)

“I was kind of flailing for a direction,” Horvitz says about Sutekh's last days. “I was trying a lot of different things. I was trying to be influenced by all the different music that I love. I have really broad taste in music, and I was trying to incorporate it all and it became somewhat of a mess. I kind of lost focus from this root in techno. Then I got tired of techno and wanted to try other things, but I didn't really find the place to go. So the transition point was going back to school and studying at Mills for two years and getting my Master's degree.”

Horvitz's Mills experience and subsequent encounter there with experimental musician Bob Ostertag convinced him that it was time to move in a new musical direction under a different name: Rrose. Their remix project, heard on 2011's Motormouth Variations, launched Horvitz into deeper, darker techno realms. The wildly percolating and bizarrely textured “Arms And Legs [variation one]” serves as a perfect merger of the two artists' skills. And while he doesn't have any desire to revive Sutekh, Horvitz—who played the first Mutek festival in 2000—did a DJ set as Sutekh a few years ago, and enjoyed it. But for the foreseeable future, Rrose remains Horvitz's primary focus. Which is understandable, as Rrose reigns as one of the planet's most riveting techno producers.

Rrose's ascent began with the Primary Evidence and Merchant Of Salt EPs that British label Sandwell District released in 2011. They established the severe, mesmerizing techno and remorseless, industrial atmospheres that have become Rrose hallmarks. “Waterfall” from the latter record set the bar high for trance-inducing transcendence within a technoise framework. It's Rrose's greatest and most psychedelic track, but Horvitz is actually most proud of the 2019 LP Hymn To Moisture (like most Rrose releases, it's on Horvitz's Eaux imprint). “It felt like a culmination of many years of doing this project. It's always a challenge to create something that feels really cohesive beyond just a longer collection of tracks, which I think happens a lot, especially in techno. I embrace the challenge of trying to create something that feels like one piece where all the tracks belong together and support each other and tell a different story from the EPs.”

With its variations on subtle rhythmic hypnosis and textural otherworldliness, Hymn To Moisture achieves rarefied effects without relying on the crutch of melody. Rather, Rrose creates tension and drama through gradual ebbs and flows of microtonal sediments. One outlier, “Horizon,” is as cosmic as anything by New Age genius JD Emmanuel.

Horvitz got exposed to microtonal music while DJing at UC Berkeley's KALX radio station from the early '90s to late '90s and from hitting Bay Area record stores every week. He picked the brains of KALX DJs and spent many late nights browsing the station's abundant vinyl library. Through some friends at Mills College, Horvitz got to know the work of experimental composer Pauline Oliveros, who was teaching there at the time.

“What attracts me to this idea of microtonal music is partly the way it plays with our perceptions and the way it gets away from the typical emotional structures that are in so much tonal music we listen to.

“[I]t places your focus firmly on the sound and what the sound is doing to you in almost a physical sense more than an emotional sense. Of course they're related. But the focus is more on perception of the physical response to the sound itself, rather than telling a narrative or expressing emotions.”

Going back even further to Horvitz's earliest interest in electronic music, he can trace it to the feeling as a listener that “anything is possible with electronic music, and any sound is possible. Later on I realized that almost any sound is possible with acoustic instruments, as well. So it's a different palette. But the possibilities are a little more diverse in generating electronic sounds, as far as making sounds that we've never heard before.”

After phases of liking band-based music such as punk, goth, indie-rock, and industrial, Horvitz discovered early-'90s dance music and its attendant culture. “The idea that I could just get a couple of pieces of random equipment and make electronic music was also exciting.” He cites two formative discoveries as a fledgling DJ in the early '90s: Aphex Twin's Selected Ambient Works 85-92 and Detroit techno renegades Underground Resistance. “DJ Jonah Sharp [aka Spacetime Continuum] was playing in this chillout room and he had both [Aphex Twin] records on the turntable, and you could see the logo. This was 1992, maybe early '93. I was mesmerized by these logos. I had no idea what it was. Then I actually discovered that record... somebody had brought it into the radio station at Berkeley where I was DJing. That was a real epiphany.” As for UR, Horvitz was enthralled by “the whole mythology around that—the fact that you didn't know who was making the records and it had this political undercurrent to it were exciting to me.”

In a 2014 interview with Secret Thirteen, Horvitz said, “I like hardware synths and use them sometimes, but I'm generally happier with the final result when I can control everything in the computer.” He says that that's still the case, “but those things can kind of work together. I have a couple of analog synths, but I don't connect my studio in this professional way, where everything is through a patch bay and into a mixing console and synced up with all the proper equipment so you can run everything together at the same time, synced with the computer.

“My method of working is much simpler. If I'm going to use a synth, like a Buchla Easel—which is partly what inspired [Madrona Labs'] Aalto synth, which I use a lot—I will sit with that synth and play it for a few hours and record stuff that I like with it. I use that as the inspiration for building a track around it. Sometimes I just use the sounds of the Buchla, maybe do a little more with it in the computer, and that's that. I just focus on one synth and see what I can come up with. I let that be the seed that generates other material.”

Horvitz's preferred software for production is Ableton Live, after years of using Logic, but hardware plays an important role in his productions. “I use all of the Madrona Labs synths, but Aalto is the one I use the most. I do like them all and have found uses for all of them in my tracks. The only other gear that I have around is the Buchla Easel and Lyra, which is made by Soma Labs. It's fun—it's a feedback-noise machine, basically.

“I have a lot of recordings I've made from a couple of residencies I'd done, where I had access to a Serge Modular System, an ARP 2500, this kind of stuff. So I tap into these archives of recordings I've made in a couple of residencies in the Netherlands and in Stockholm.”

Like many of the best minimal-techno artists, Rrose avoids blatant emotional signifiers in their work. It's a major part of music to inspire emotions in listeners, but Rrose has decided to de-emphasize that. “I wouldn't say that I've eliminated it, but I'm very aware of it and I try to avoid it. When I made music as Sutekh, I didn't really avoid it. I played with it a lot more. So it was more of a subversive and playful attempt to use tonal language.

“When I started the Rrose project, I made a conscious decision that I was going to stay away from composing melodies, for the most part, and put the focus on the sound. It was interesting because I started trying to make melodic music on my own, and then I became obsessed with learning about it, studying piano, studying jazz and classical music. I wrapped my head around all that theory and then went to Mills College. After learning it all and knowing it, I decided to not use it. I have so much reverence for classical composers and jazz musicians and the way they use the tonal language. One of the reasons I decided not to use it is because I feel like I don't have something important to say in that world that hasn't already been said much better.

“Staying away from that and going into these areas that potentially have a shorter history and maybe fewer avenues explored. Applying these non-tonal ideas to techno has a lot of potential and I've been able to make more of a contribution in that area.”

What, if anything, does Horvitz view as the purpose of techno? Does he anticipate that certain people in certain clubs are going to be on hallucinogens and therefore is he trying to enhance that experience? “Ideally, I would like to give someone the same experience as being on a hallucinogen without having to be on one of those,” Horvitz says, laughing. “I don't think I ever quite get there. It's kind of this idea that if you take hallucinogens, you can go on a fast track to some form of enlightenment. But you're never going to be enlightened because you're taking the shortcut. The real way to get there would be to meditate for decades.

“Ideally, the purpose is to create a meditative space for the dancer or listener. I want the listener to experience something that feels like a profound meditation. I think that that can be accomplished through dance, as well. Which is why I get really annoyed when I'm playing for a crowd of people who are all talking. Sometimes people are really social and seem to be really enjoying it and dancing, too. I don't want to get too angry when that happens, but my ideal audience is in a real dark, foggy black box where you can tell everyone is in the zone for the whole set. I want them to have a meditative experience... hopefully, not too specific. Meditation, but the standard way to meditate would be with silence. The idea is to experience whatever is happening in your mind. I hope to have something similarly open-ended.

“Of course, there's an entertaining aspect: People go to dance and they hear this music and it gets your adrenaline going and stimulates all kinds of other things. But I wanted to have some relationship to this meditative state.”

Meditative states also are manifested by Horvitz's excellent collaboration with Luca “Lucy” Mortellaro, known as Lotus Eater. Compared to Horvitz's makeshift home-studio setup, Mortellaro has a big modular rig and patches everything properly. “It's fun to work in someone else's space that has a whole different setup. It ends up being much more spontaneous, especially on the most recent album [Plasma]. We worked really fast on it. The first one [Desatura] took a little longer, compared to how I work on Rrose stuff, where I comb over every detail and revisit things.

“[Lotus Eater is] much more spontaneous and improvisational. It's fun to work that way. It's almost more limited in the sound sources approach than Rrose. We don't use melodies and chords, but we also focus on noise and feedback as central sound sources.” Lotus Eater's music carries a strain of dread similar to that of Throbbing Gristle and Les Vampyrettes. Plasma (Stroboscopic Artefacts, 2022) is a solarized palimpsest of minimal techno, an infernal phantasm of techno's subliminal pulses, a reduction to molecular activity, beats and textures composted into charcoal dust.

While Lotus Eater is anything but formulaic, dance music relies heavily on formulas. Rrose sometimes defaults to certain production techniques or tropes in order to satisfy DJs and dancers, but never in obvious ways. “There are times when I use methods that I know might be slightly manipulative or might achieve a certain effect. But I want it to creep up on people in a way that still feels natural. I don't want to surprise people in the way that I may have done as Sutekh, where I wanted to jar people—at least in the later years of that project. [With Rrose], I want to do something more seductive, like bring people to that point of euphoria, but in a mysterious way, so they're not sure how they got there.

“I have certain working methods that can achieve certain effects, but I try to keep them evolving.” Rrose has put those methods to powerful use on their next two releases on Eaux: the Tulip Space EP (out in February) and the Please Touch album (likely out in May). Tulip Space contains some of Rrose's most slamming dance-floor fillers and weirdest abstract explorations while Please Touch delves more intensely into Rrose's well of hallucinogenic textures and disorienting implied rhythms. Also, a slew of Rrose remixes loom on the horizon for 2023, including those for Pole, JK Flesh (aka Justin Broadrick), Luigi Tozzi, Dutch techno duo Abstract Division, and others.

“The intent [with my music] is to get to a point where people listen but aren't sure how they got to this place. Still, it feels like it's grabbing you and taking you somewhere unexpected, but drawing you in in this immersive way.”

Thanks @sntr for the input.

@alino.romano does this problem happen with MIDI from a keyboard controller, or only the computer keyboard MIDI like @sntr is talking about? Thanks.

Instead of closing and reopening the plugin, what if you change the resonator mode to something else and back? Does that fix it?

If not, you could also try the body. That way we might narrow it down.

I have not seen the MIDI input issue and will investigate.

Huh, I changed the extension to .utu and on my machine it doesn't even let you load the .json anymore, so I didn't find this issue. I'll try to load a .json and fix it. Meanwhile, if you change your .json to a .utu, hopefully it will work?

The noise component is very important to Sumu.

Each time point is corrected for each partial, starting from the center of the FFT frame and then moving forward or backward. So mapping to ~sinusoids might not be worth it. Maybe there's a Loris resynthesis object for Max out there somewhere?

Glad that works. Everyone wants to protect you from evil people like me who might try to give you a computer program.

here's a past thread with the same idea!
https://madronalabs.com/topics/505-aalto-to-hardware

Apparently Aalto is a gateway drug :-)

Works for me. Did you save the link or something? It will change whenever the version changes.

There's no need to sign up for the beta, I'll announce when it's available via the website.

Sumu voices: I'm not totally sure!

here's a newer link: https://discord.gg/BAJYprsU

In the video you sent, you are turning the dial to change the number of voices. Doing so makes more lights in the sequencer, because when voices are turned on, their sequencer positions are reset to 0. So you go from all voices being at the same sequencer step to being at different steps.

If you connect the KEY module's gate output in Kaivo to the sequencer trig in, this will do the same thing as turning on "key trig" in Aalto.

Hi stew, I'm trying to understand what your patch is doing but from your description I'm not quite sure of how the notes are being triggered. If you want to just make a small movie and share it with me by email that might be easier.

Aalto and Kaivo should work in the same way as far as the sequencers and triggering.

Each voice does have its own sequencer offset, and these will be different depending on when you reset the sequencers with a MIDI note.

I feel bad that Kaivo is still not working smoothly for garf. Lots to do here as I try to get a new synthesizer out. I'll try to get out a Kaivo update soon. Qoqo (our new social media person) is also seeing Kaivo run pretty slowly on her laptop, so hopefully this will help with testing.

Yes, I could add that at some point.

Aalto is at its heart a modular synthesizer—a lot of patches do make sound when the DAW is not running. You can try switching to the default patch to see if this is the case. Or look at the gate level and make sure it is down!

It could be. What kind of computer do you have? What OS and version?

A restart may help!