I have zero musical knowledge or experience and I just spent the last 30 minutes tweaking the examples in the project homepage. Super fun, incredible project! Thanks for this!
Let me repeat the comment I made two weeks ago when I first saw this:
> Wow, glicol looks amazing! One of the few music languages I’ve seen which actually tries to balance low-level synthesis with higher-level sequencing. (The only other one I know of is extempore: https://extemporelang.github.io/)
Thank you! In the past two weeks I have mainly been working on VST and Bela board support, and both now work.
Update:
Another thing is that I added responsive design to the website, plus Safari support. I would really like more feedback on that, as adapting to different browsers is a painful process.
As far as I know, SC is based on UGens (https://en.wikipedia.org/wiki/Unit_generator), but if you want sample-level control you need to precompile your node. This discussion is quite relevant:
https://github.com/grame-cncm/faust/issues/685
In Glicol, you can use different nodes directly as UGens, and you can also define your own `meta` node in real time, one of my personal favourite features of Glicol for live coding. There are still some limitations so far, though: embedding the Rhai.rs syntax significantly limits audio performance. I will try to develop another meta-syntax for defining meta nodes in the future.
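To make the "nodes as UGens" idea concrete, here is a rough, dependency-free Rust sketch of what a UGen-style node abstraction can look like. This is purely illustrative; the `Node` trait, `SinOsc`, and `Gain` here are hypothetical and not Glicol's actual API:

```rust
// Hypothetical UGen-style node trait (illustrative, NOT Glicol's real API):
// every node processes a block of samples in place.
trait Node {
    fn process(&mut self, out: &mut [f32]);
}

// A sine oscillator node with normalized phase in [0, 1).
struct SinOsc { freq: f32, phase: f32, sr: f32 }

impl Node for SinOsc {
    fn process(&mut self, out: &mut [f32]) {
        for s in out.iter_mut() {
            *s = (2.0 * std::f32::consts::PI * self.phase).sin();
            self.phase = (self.phase + self.freq / self.sr) % 1.0;
        }
    }
}

// A gain node that scales whatever is already in the buffer.
struct Gain { amp: f32 }

impl Node for Gain {
    fn process(&mut self, out: &mut [f32]) {
        for s in out.iter_mut() { *s *= self.amp; }
    }
}

fn main() {
    let mut buf = [0.0f32; 64];
    let mut osc = SinOsc { freq: 440.0, phase: 0.0, sr: 44100.0 };
    let mut gain = Gain { amp: 0.5 };
    // Chain the nodes like a tiny signal graph: osc -> gain.
    osc.process(&mut buf);
    gain.process(&mut buf);
    assert!(buf.iter().all(|s| s.abs() <= 0.5));
}
```

Chaining nodes this way is essentially what a UGen graph does per audio block; a real-time-definable `meta` node would be one whose `process` body is supplied by user code.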
Also, there is an open-source implementation of Reaper's JSFX language: https://github.com/asb2m10/jsusfx. The repo already contains Max and Pd objects. I have been thinking of contributing a SuperCollider UGen, but there are so many other things on my list :-)
Thanks for sharing! Looking forward to your UGen as well. It's great to see that many languages support sample-level control, including the Pd and Max you mentioned. There was a discussion on the Faust repo before that might interest you too: https://github.com/grame-cncm/faust/issues/685
> It's great to see that many languages have supported sample-level control, including the Pd and Max you mentioned
FWIW, Pd and Max/MSP always had sample-level control in the sense that subpatches can be reblocked. For example, if you put a [block~ 1] object in a Pd subpatch, the process function will be called for every sample, so you can have single-sample feedback paths. Pd also has the [fexpr~] object which allows users to write FIR and IIR filters in a simple expression-syntax. Finally, Max/MSP offers the very powerful [gen~] object. You can check it out for inspiration (if you haven't already).
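The point about single-sample feedback can be illustrated with a one-pole lowpass written per sample. This is a generic Rust sketch of the recurrence, not Pd or [fexpr~] code; the idea is that per-sample processing makes the `y[n-1]` feedback term trivial, whereas fixed-size block processing across nodes forces a whole block of feedback latency:

```rust
// One-pole lowpass: y[n] = a*x[n] + (1-a)*y[n-1].
// Per-sample ticking gives direct access to the one-sample-delayed
// output y[n-1]; with, say, 64-sample blocks between nodes, the
// shortest feedback path would be 64 samples instead of 1.
struct OnePole { a: f32, y1: f32 }

impl OnePole {
    fn tick(&mut self, x: f32) -> f32 {
        self.y1 = self.a * x + (1.0 - self.a) * self.y1;
        self.y1
    }
}

fn main() {
    let mut lp = OnePole { a: 0.1, y1: 0.0 };
    // Feed a unit step; the output converges toward 1.0.
    let mut y = 0.0;
    for _ in 0..200 { y = lp.tick(1.0); }
    assert!(y > 0.99 && y <= 1.0);
}
```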
Pd (and Max/MSP) also allow to upsample/resample subpatches, which is important for minimizing aliasing (caused by certain kinds of processing, such as distortion).
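As a toy illustration of why oversampling helps with distortion: run the nonlinearity at 2x the rate, then decimate. The sketch below uses linear interpolation and pairwise averaging, which is deliberately naive; a real implementation would use proper polyphase/half-band filters, and `clip` here is just a stand-in waveshaper:

```rust
// Hard clipper used as the nonlinearity.
fn clip(x: f32) -> f32 { x.clamp(-0.5, 0.5) }

// Naive 2x oversampled clipping: interpolate a midpoint sample,
// clip both samples at the 2x rate, then average back down.
// Illustrative only; not production-quality anti-aliasing.
fn process_oversampled(input: &[f32]) -> Vec<f32> {
    let mut out = Vec::with_capacity(input.len());
    let mut prev = 0.0f32;
    for &x in input {
        let mid = clip(0.5 * (prev + x)); // interpolated sample at 2x rate
        let cur = clip(x);
        out.push(0.5 * (mid + cur));      // crude decimation back to 1x
        prev = x;
    }
    out
}

fn main() {
    let y = process_oversampled(&[0.0, 1.0, -1.0, 0.25]);
    assert_eq!(y.len(), 4);
    assert!(y.iter().all(|s| s.abs() <= 0.5));
}
```

The harmonics the clipper generates above half the original sample rate get (partially) filtered out at the 2x rate instead of folding back as aliasing.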
Pd also uses the reblocking mechanism to implement FFT processing. The output of [rfft~] is just an ordinary signal that can be manipulated by the usual signal objects. You can also write the output to a table, manipulate it in the control domain with [bang~], and then read it back in the next DSP tick. IMO, this is a very powerful and elegant approach. SuperCollider, on the other hand, only supports a single global blocksize and sample rate, which prevents temporary upsampling + anti-aliasing, severely limits single-sample feedback and leads to a rather awkward FFT implementation (you need dedicated PV_* objects for the most basic operations, such as addition and multiplication).
Another thing to think about is multi-threaded DSP. With Supernova, Tim Blechmann miraculously managed to retrofit multi-threading onto scsynth. Max/MSP offers some support for multi-threading (IIRC, top level patches and poly~ instances run in parallel). Recently, I have been working on adding multi-threading to Pd (it's working, but still very much experimental): https://github.com/Spacechild1/pure-data/tree/multi-threadin.... If you design an audio engine in 2022, multi-threading should be considered from the start; you don't have to implement it yet, but at least leave the door open to do it at a later stage.
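For the simplest flavor of this (independent top-level chains rendered in parallel, like Max's top-level patches or poly~ instances), here is a minimal Rust sketch using scoped threads from the standard library. `render_chain` is a made-up stand-in for a DSP chain; a real engine would additionally need lock-free communication and careful graph partitioning:

```rust
use std::thread;

// Hypothetical stand-in for rendering one independent DSP chain.
fn render_chain(gain: f32, len: usize) -> Vec<f32> {
    (0..len).map(|i| gain * (i as f32 * 0.01).sin()).collect()
}

fn main() {
    // Render two independent chains on separate threads, then mix.
    // This only covers the embarrassingly-parallel case; feedback
    // between chains is what makes multi-threaded DSP genuinely hard.
    let (a, b) = thread::scope(|s| {
        let h1 = s.spawn(|| render_chain(0.5, 64));
        let h2 = s.spawn(|| render_chain(0.25, 64));
        (h1.join().unwrap(), h2.join().unwrap())
    });
    let mix: Vec<f32> = a.iter().zip(&b).map(|(x, y)| x + y).collect();
    assert_eq!(mix.len(), 64);
}
```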
Finally, every audio engine needs a plugin interface :-) That's where the problems really start, because for all of the things mentioned above, you have to offer proper abstractions to plugin clients.
---
I'm not sure how far you want to go with Glicol. I guess for the typical Algorave live coder all these things are probably not important. But if you want Glicol to be a flexible modern audio engine/library, you will have to think about FFT, upsampling, single-sample feedback, multi-processing etc. at some point. My advice is to not leave these things as an afterthought; you should at least think about them from the start while designing your engine - if you want to avoid some of the mistakes that other existing audio engines made. This is just a word of "warning" from someone having spent countless hours in Pd and SuperCollider source code :-) If you come to the conclusion "I won't ever need any of this" - that's fine, of course!
---
That being said, Glicol looks very promising. Keep up the good work!
As far as I could see when I experimented with it, SuperCollider is excellent for creating sounds, but isn’t too good at sequencing these sounds into music. It can do it, but its sequencing support seemed verbose, inflexible and poorly integrated into the rest of the language. I assume this is why numerous other languages have been built on top of SuperCollider with a specific focus on sequencing (TidalCycles, Sonic Pi etc.).
I guess it depends on what you want to do. For typical Algorave-style music, I would agree that sclang doesn't offer the right set of high-level abstractions. However, it's possible to implement live coding dialects on top of sclang, e.g.: https://github.com/jamshark70/ddwChucklib-livecode
I've been writing effects in ChucK for several years but have found it difficult to deploy to various platforms of interest.
Is it possible to build an effect written in Glicol into an iOS app or effect (AudioUnit v3? haven't been keeping up)? LV2 plugin to run on MOD devices guitar pedals? VCVRack, which has its own plugin interface? onto a Rasp Pi Zero?
Glicol can also help on Raspberry Pi or Bela. These are all works in progress, but I personally really enjoy coding in Rust, and the package management is fantastic.
However, VCVRack seems to be a different story; Faust may be a better option there.
Can you expound on the VST support? This can write code that compiles to a standalone VST? Or do you mean you provide a VST that can run this code?
I make electronic music as a hobby and have wanted to make my own VST, but it has proven to be more pain than fun. This looks like a very promising candidate!
For writing the VST, you may need a Rust audio lib; you can look at dasp or fundsp, or wait for glicol_synth to be published as a Rust crate (crates.io is Rust's equivalent of pip or npm).
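The core thing those crates provide is iterator/graph-style per-sample signals. Their real APIs differ and are not shown here; this dependency-free sketch only conveys the shape of what you would build on:

```rust
// Dependency-free sketch of an iterator-style signal, the rough idea
// behind crates like dasp and fundsp (their actual APIs differ).
fn sine(freq: f32, sr: f32) -> impl Iterator<Item = f32> {
    (0..).map(move |i| {
        (2.0 * std::f32::consts::PI * freq * i as f32 / sr).sin()
    })
}

fn main() {
    // Render one 64-sample block at 44.1 kHz, as a VST process
    // callback would do into the host-provided output buffer.
    let block: Vec<f32> = sine(440.0, 44100.0).take(64).collect();
    assert_eq!(block.len(), 64);
    assert!(block.iter().all(|s| s.abs() <= 1.0));
}
```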
Thanks for your reply! Faust is a great project, and it is really versatile.
However, I have checked some of the Rust code generated by Faust and found it relies on `unsafe` blocks. My concern is that if one wishes to use Rust for audio, it doesn't make much sense to give up control over all the details and the safety guarantees.
The other concern is license incompatibility: in the Rust community, MIT or Apache-2.0 licenses are more popular for downstream usage.
It is still a POC for now, and I need to get input working. But I am not very optimistic about many VST use cases. I think the two main goals for Glicol are collaborative live coding in browsers and quick prototyping on Bela.
I would say in terms of abstraction, Glicol is more comparable to ChucK, SuperCollider or Csound, yet written in Rust, with priority given to collaboration and syntax ergonomics from a music-interaction perspective, following the trend of Algorave and Web Audio. Each of these languages' audio engines can also serve as a standalone audio library; Sonic Pi, for example, relies on SuperCollider's audio engine.
Still, for the VST, I kind of feel that users may find it easier to use Max4Live or PD-VST (e.g. https://github.com/pierreguillot/Camomile) rather than a text-based language, since a VST already has a graphical interface :)
https://glicol.org
Recently I added support for VST, Bela and responsive design for mobile devices. It would be great if you can try and give me some feedback here or in the GitHub repo:
https://github.com/chaosprint/glicol
Thanks!