I’ve been spending a lot of time recently familiarising myself with Max/MSP/Jitter to develop a better understanding of how plugin instruments and effects are built from the ground up. As an Ableton Live fanboy, I work in Live 9 Suite for most of my music production, so I have been working in particular with Max for Live, which embeds Max/MSP/Jitter within Ableton. It seems to have all the flexibility of Cycling ’74’s standalone product but with a friendlier-looking interface to map within the Ableton shell. Win win…

It does have some limitations, however, and they really come down to the Ableton sound engine and its lack of surround sound output options. The only way I could see to generate multiple outputs would be to create separate sends with individual outs for each speaker. This means you can have essentially as many outputs as you like, making multi-speaker diffusion (or whatever you might like) possible, and you can create spatial movement by balancing the send feed into each channel; what this doesn’t let you do is make a smooth movement from one speaker to the next. I’ve found some calculations within Max modules that you can programme across a quadraphonic system to generate an X-Y module, where you place the sound across the various outputs and the amplitude scales logarithmically with proximity along each axis. So a surround system is possible, but it is extremely clunky and certainly does not support multi-channel plugins for post-production processing, such as multi-channel impulse response plugins like Altiverb. It is a pity, as I would prefer to do all of my music creation in Ableton.
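To give a flavour of the kind of X-Y calculation those Max modules are doing, here is a minimal sketch in Python of one common approach: an equal-power crossfade along each axis of a quad setup. This is my own illustration, not the exact maths from any particular module, and the function name, speaker labels, and corner layout are assumptions for the example.

```python
import math


def quad_gains(x, y):
    """Per-speaker gains for a quad rig from an X-Y position (a sketch).

    x and y are normalised 0..1. Assumed corner layout:
    front-left (0,0), front-right (1,0), rear-left (0,1), rear-right (1,1).
    An equal-power (cosine/sine) crossfade per axis keeps the summed power
    constant as the source moves, which is one way to get the smooth
    speaker-to-speaker movement a plain send balance can't give you.
    """
    # Equal-power crossfade along the left/right axis...
    lx, rx = math.cos(x * math.pi / 2), math.sin(x * math.pi / 2)
    # ...and along the front/rear axis.
    fy, ry = math.cos(y * math.pi / 2), math.sin(y * math.pi / 2)
    # Each speaker's gain is the product of its two axis gains.
    return {
        "front_left": lx * fy,
        "front_right": rx * fy,
        "rear_left": lx * ry,
        "rear_right": rx * ry,
    }


# In the dead centre (0.5, 0.5) all four speakers sit at a gain of 0.5,
# and the total power (sum of squared gains) stays at 1 for any position.
gains = quad_gains(0.5, 0.5)
```

In a Max patch you would wire the same arithmetic up with `expr` or `scale` objects feeding `*~` gain stages on each output, but the underlying maths is as above.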
Not to be disheartened, I’ve begun to look into the potential of creating visuals and synchronising them all within Ableton via Jitter modules in Max for Live. The bonus is that you only need to run one programme: there is no learning curve for programming visuals in a separate application, and no working out how to get two programmes to communicate via internal MIDI routings and the like, which can become quite laborious.
So there are three (to my knowledge) excellent Jitter Max for Live imports that can be obtained online already and are well worth having a look at:
- Rokvid by Adam Rokshar (https://www.ableton.com/en/packs/rokvid/)
- VIZZable by Zealousy (http://zealousy.com/2011/01/vizzable/)
- V-Module Suite by Fabrizio Poce (http://www.fabriziopoce.com/max.html)
The latter two are free to download (although donations are appreciated by both) and the first is available via the Ableton store as a Pack.
I’ve listed them in order of (in my opinion) complexity of use. Rokvid is an instant-gratification system: drop a video clip into the module and it starts working straight away. There is a good range of filters to manipulate the video, all within the single module, and it is easy to use and doesn’t require additional routing. As a Windows user, I don’t know if it is just me, but I am finding it performs quite unreliably, which is frustrating given that it is an official Ableton Pack; I’m concerned that not enough testing has been carried out on Windows machines. Either way, when it does work, it is easy to make something colourful and animated very quickly that sits well with your arrangement.
VIZZable is my current favourite, and Zealousy (Bob Jarvis) has made a much more comprehensive modular system that is very flexible to use. You can create multiple channels of video clips or reactive visualisers that respond to the audio signal on a track, blend them through a mixer, and customise your effects chains. It is largely reliable, although I seem to be having some serious issues with the potentially awesome ‘clip player’ module, which should let you play mini audio/visual clips as if they were actual clips in the Live session. It just doesn’t seem to work on either of my Win7 or Win8 computers. I can’t work out why: both have reasonable graphics cards, and the videos I am using are encoded as .mov at 80% JPEG2000, which seems to be the general consensus. So I am wondering whether this is, once again, something that hasn’t really been tested properly on Windows, or whether I am simply doing something wrong.
Anyway, I think there is a lot of potential with VIZZable and I am looking forward to getting some cool pieces finished by the end of the year that will relate to both the Coffee Machine Music project and some other work I am doing with the band Capac.
The last set of modules is by far the most complex and difficult to use, but unsurprisingly that also makes it by far the most flexible and powerful set of tools available. V-Module is harder to get started with, but having seen the sample work from June74 (Fabrizio Poce), it looks like you can generate some really impressive blends between images and reactive processing, which is really exciting. Fortunately, Fabrizio has worked with Bob to make the modules integrate with VIZZable, which should make the output more user friendly (although I have yet to mix between the two). There is also a very helpful, thorough (if unwieldy) guide to creating your own M4L Jitter modules and understanding the many options Fabrizio has kindly provided.
I think this is potentially a very exciting way to play around with Ableton, and I am looking forward to creating multimedia experiences entirely within the Ableton environment and posting them up soon.
There is a great M4L Jitter Vimeo group, set up by Fabrizio, that I am hoping to contribute to by the end of the year.