Building the Next Computer Based Pedalboard

In case you didn't catch the five thousand influencer videos and whatnot, the Helix Stadium will include "Showcase" features, many of which are normally handled by the playback engineer (yes, that's a thing, and a very critical thing for many modern bands, and likely for 100% of bands playing stadium-level shows), the lighting department, and the production manager. Things like auto-switching scenes on the mixer or presets (or snapshots or stomps) on the pedalboards; click playback; cue playback; stinger and sound FX playback, all with definable events on the timeline; stem separation. Basically, a DAW in your pedalboard.

All that babbling aside, people may be envious that what you normally have to take a computer to the gig for is now being done on the Stadium's pedalboard.

And maybe the thinking goes, "If I have to bring a computer anyway, why not just play the guitar thru it too?"

In what almost seems like ancient history now, I did a series of articles and videos on the subject of taking your beautifully manicured studio chain of guitar sounds to the show, called Bringing the Studio to the Stage. While it was great fun and showed a lot of promise, it still felt too clunky, unreliable, super labor-intensive, confidence-destroying, and just not really ready. Though with the caveat that there were still many, many guitar players, and way more keyboard players, doing just that with Gig Performer, Cantabile, or MainStage.

Fast-forward to today: with mini PCs and Mac Minis, generally more stable audio systems in the OSes, and even dedicated hardware, we have some viable systems... maybe.

The Paint Audio CE-1 puts a Windows computer in a VERY convenient box, with an interface, buttons, and a touch screen as well. Very, very cool. The cons are a probably very underpowered processor and, almost 100% surely, Thesycon.de drivers for the interface... Still, I will likely be trying it.
[Image: Paint Audio CE-1]


In a very similar vein, Octave Technologies has unveiled the HS-1, with a similar CPU and touch screen but a detached pedalboard, probably with the same cons as the CE-1.

[Image: Octave Technologies HS-1]



Very cool. I will likely be trying both of these units very soon, but:

I found a used M1 Mac Mini for peanuts on Facebook Marketplace and decided to try my hand at building my own. I'll be using Gig Performer, Helix Native, likely Jam Origin MIDI Guitar 2/3, maybe ToneX, and definitely a very cool product my company will be releasing soon in the ToneX-ish sort of vein.

So far I'll be basing it on a Focusrite 4i4 (so not getting away from Thesycon.de drivers either; the real deal would be an RME Babyface or similar), a Behringer FCB1010 (likely to be replaced with a Paint Audio MIDI Captain, as it is WAY smaller), some expression pedals, a Shure PSM 300 for IEM, and, for now, a Line 6 Relay G50 for guitar wireless. I definitely need a touch-screen monitor, but for now I will be trying Luna Display, which can turn an iPad into the primary monitor.



[Image: Initial hardware layout]


Learning Gig Performer (especially the scripting) has been pretty tricky for me, but I'm getting there. I have the tuner auto-engaging when the volume pedal hits minimum, the wah auto-engaging on wah pedal movement, and some obvious at-a-glance view stuff. Still working on how to pick songs on the setlist with the pedalboard rather than the touch screen, but getting there!
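For anyone curious what that auto-engage logic amounts to, here's a sketch of the same state machine outside Gig Performer, in Python with mido. To be clear, this is NOT GPScript, and every port name and CC number in it is hypothetical; inside Gig Performer the equivalent lives in widget/scripting callbacks.

```python
# Sketch of the auto-engage logic in Python + mido (pip install mido python-rtmidi).
# All port names, CC numbers, and host-side mappings below are made up for
# illustration -- substitute whatever your controller and plugin host use.
import time
import mido

VOL_CC, WAH_CC = 7, 4             # expression pedal CCs (assumed)
TUNER_CC, WAH_BYPASS_CC = 68, 69  # CCs mapped to tuner/wah in the host (assumed)
WAH_TIMEOUT = 2.0                 # seconds of stillness before the wah disengages

inp = mido.open_input('FCB1010')           # controller port; name varies
out = mido.open_output('To GigPerformer')  # virtual port into the host

def set_cc(cc, on):
    out.send(mido.Message('control_change', control=cc, value=127 if on else 0))

tuner_on = wah_on = False
last_wah_move = 0.0

while True:
    for msg in inp.iter_pending():
        if msg.type != 'control_change':
            continue
        if msg.control == VOL_CC:
            # Heel-down on the volume pedal: mute and tune; any move up: play.
            if msg.value == 0 and not tuner_on:
                tuner_on = True
                set_cc(TUNER_CC, True)
            elif msg.value > 0 and tuner_on:
                tuner_on = False
                set_cc(TUNER_CC, False)
        elif msg.control == WAH_CC:
            last_wah_move = time.monotonic()
            if not wah_on:
                wah_on = True
                set_cc(WAH_BYPASS_CC, True)
    # No pedal movement for a while: drop the wah back out.
    if wah_on and time.monotonic() - last_wah_move > WAH_TIMEOUT:
        wah_on = False
        set_cc(WAH_BYPASS_CC, False)
    time.sleep(0.005)
```

The only non-obvious part is the wah: it engages on first movement but needs a timeout to disengage, whereas the tuner can key directly off the heel-down value.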

Here's the iPad showing the Gig Performer screen:

[Image: iPad as controller]


And doing some NAM captures! (WAY WAY WAY WAY WAY easier than doing ToneX captures, though I read that ToneX has a new capture program, so maybe it's vastly improved?)

[Image: NAM capture session]



[Image: NAM]


Tomorrow I'll start wiring this together for real and testing it with the band! The software is already working very well for live performance.

Here's a Pedal Playground mockup showing the Paint Audio MIDI Captain for a size comparison with the FCB1010:


[Image: Pedal Playground mockup]


This will probably be a long long thread of updates!
 
This is cool! I think this is the direction everyone will go. How is the latency?
Given that it's likely 99% of the people out there will be using audio interfaces powered by Thesycon.de drivers, I have a chart that will give you an idea at https://kailuamusicschool.com/audio-tech/

You're looking at 9-13 milliseconds at 128 samples; it's likely that with the newer computers 64 is an option, which puts you in the 6-ish range that used to be reserved for the RME stuff at 128.
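For anyone who wants to sanity-check those numbers, the arithmetic is just one buffer of latency on the way in, one on the way out, plus driver and converter overhead. A minimal sketch; the 4 ms overhead figure is my assumption for a typical consumer driver, which is exactly the part that varies by manufacturer and why the chart exists:

```python
# Rough round-trip latency: one buffer in + one buffer out + overhead.
# The overhead number is an assumption for a typical consumer driver;
# measure your own interface rather than trusting this.
def rtl_ms(buffer_samples, sample_rate=48_000, overhead_ms=4.0):
    per_buffer = buffer_samples / sample_rate * 1000.0  # ms per buffer
    return 2 * per_buffer + overhead_ms

print(f'{rtl_ms(128):.1f} ms')  # ~9.3 ms -> the low end of the 9-13 ms range
print(f'{rtl_ms(64):.1f} ms')   # ~6.7 ms -> the "6-ish" range at 64 samples
```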

Not to have a religious argument with the Latency Purists, but in general 13 ms is more than acceptable for the vast majority of double-blind test takers. But start adding it up: digital guitar wireless? Add 2-8 milliseconds depending on the unit. Digital IEM? Another 2-8 ms. Send and return to analog pedals on your inserts? This can get ugly fast. For that last one, hopefully every effect you want will be on the computer so you don't need to take that hit.
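To make the "start adding it up" point concrete, here is the same budget as plain arithmetic; every number is an estimate pulled from the ranges above, not a measurement:

```python
# Whole-chain latency budget. All numbers are estimates from the ranges
# above -- substitute measured values for your own gear.
chain_ms = {
    'digital guitar wireless':  3.0,  # 2-8 ms depending on the unit
    'computer RTL @128 samples': 9.0,
    'digital IEM':              4.0,  # 2-8 ms depending on the unit
}
total = sum(chain_ms.values())
print(f'total: {total:.0f} ms')  # ~16 ms before any analog insert round trips
# For scale: sound covers roughly 1 foot per millisecond, so 16 ms is
# like standing ~16 feet from your amp.
```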

When I did the original articles, a lot of people were blown away to find that all those artists playing thru their iPads had, at a MINIMUM, 20 ms of latency and weren't bothered by it. Especially funny to see those same people complaining about the 2 or so milliseconds of latency on the Axe-Fx and Helix :)
 
Given that it's likely 99% of the people out there will be using audio interfaces powered by Thesycon.de drivers, I have a chart that will give you an idea at https://kailuamusicschool.com/audio-tech/
Yes, it's X milliseconds on top of everything else, which I can feel. The Babyface is ~0.46 ms for real-time monitoring with DSP, which made tracking infinitely easier. You can simulate direct monitoring with a small mixer on just about any interface IF you are using an amp or hardware.

I mostly use my older laptop, and I've found I can get used to the additional latency Reaper and the ToneX VST add, but direct monitoring with a mic'd cab gives the easiest and best results. Second best is offloading the DSP to hardware using the big ToneX pedal. You do get used to it, but it's not as tight, at least for right-hand stuff. Snapping + editing in the box gives a big safety net for amp sims.

If someone can introduce a hardware VST host with latency similar to current modelers, it's game over. I think the VST format is tied to Windows APIs, so that's probably why it hasn't been done yet, as you need a mini PC.

Same thing with digital mixers. I know they're already using software hosts, but anything to get that latency down would really start to open things up.
 
I mostly use my older laptop, and I've found I can get used to the additional latency Reaper and the ToneX VST add, but direct monitoring with a mic'd cab gives the easiest and best results.
Reaper won't add any latency on its own. That would come from purposely delaying outputs to sync, or from a latent plugin.
 
I used to have a computer-based rig, and it did not add anything in the league of 10-12 ms.

This is a video I captured while playing. Maybe 2-4 ms, but not much else.



Note that I was running the computer output into a Fryette PS-2, then capturing the audio with some camera or other. There really was imperceptible latency, or at least very low latency.

A bit of talking in there (probably shouldn't have; the video got to 21 minutes), but I liked the tones.

I'd much rather take my laptop to a gig than something that's on the floor and has a screen on it, because accidents do happen on stage. You could just use the laptop to do all that fancy lighting stuff and also cue preset changes, etc.
 
There's no way you are getting 2 milliseconds on a Thesycon.de driver.

The guys who are comfortable with laptops are already doing the shows, and it's a lot of their scripting help that is making this possible. If that's what you like and that's your tolerance level, thank you; you are the guys pushing this forward. However, products like the Helix (on the floor), Turd Blaster Pro (on the floor), Quad Cortex (on the floor), and regular old pedalboards (on the floor) already exist, and that is a big enough chunk of the market to definitely try to get into. The major modelers ARE computers on the floor; they are just locked up at the moment so you can't add your own plugins to them.

Are you doing all the switching and routing in the DAW? I need to watch this video ASAP! If you are using Gig Performer, I have at least a million questions I need help with.
 
I remember a guest musician at our church over a decade ago who ran Ableton remotely from his iPhone on an armband to trigger lighting cues, video backgrounds, and MIDI preset changes for the band, and something like this for backing tracks:

https://loopcommunity.com/en-us/prime

It was my introduction to how useful Bluetooth could be for the live musician. But if something tech goes wrong, like a dropped Bluetooth connection, any glitch in timing, etc., you're screwed trying to play while troubleshooting.

I experimented with playing a JamUp Pro digital guitar rig live through a $30 AmpKit/Peavey Link HD guitar interface plugged into my iPad in 2013.



Except for the novelty of my rig being condensed into a tiny iPad, the experience was not thrilling as a player (thin sound, latency), but the band said the sound through the monitors and in the house was great.

I was much happier plugging into my Fractal V1 Ultra. But in hindsight, I would have created a monitoring scenario similar to my last live rig:

Patch my IEMs directly out of my Rocktron rack interface's headphone out.
The Rocktron also had a stage monitor mix input, which connected to this:

https://digitalaudio.com/livemix/

A killer 24-channel personal monitor mixing station.
 
[Image: the rig up and running]


Running well! I have lots to program, and I especially want to try out the show control features. (Maybe today!)

I've been documenting the build on the live streams, especially so the chat could help with questions I get stuck on.

8-plus hours on here already... still tons to go.
 
Not to have a religious argument with the Latency Purists, but in general 13 ms is more than acceptable for the vast majority of double-blind test takers.
Got any sources? Not that I'm trying to challenge it; I just want to go read up on it myself. In my minimal spare time I'm working on rhythm skills too, and trying to figure out how tight I should be aiming for. Right now the target is < 5 ms, but I'm still uncertain if that's realistic or not.
 
Got any sources? Not that I'm trying to challenge it; I just want to go read up on it myself. In my minimal spare time I'm working on rhythm skills too, and trying to figure out how tight I should be aiming for. Right now the target is < 5 ms, but I'm still uncertain if that's realistic or not.
The averages the research finds are usually WAY higher, but that explains why so many people "concerned about latency" were just fine at 20 milliseconds when playing guitar thru iPhones.

This one finds an average of 49 milliseconds before the average person notices latency, which is insane, because if they hear both sources, 35 ms is usually given as the minimum echo detection threshold. https://dl.acm.org/doi/fullHtml/10.1145/3678299.3678331

AES has tons, but usually behind paywalls: https://aes2.org/publications/elibrary-page/?id=14256

http://eecs.qmul.ac.uk/~andrewm/jack_am2016.pdf

The scholarly articles usually land on a much higher value than we'd expect as musicians. But when doing the Round Trip Latency Roundup testing, to see what actual RTL came from a claimed buffer size from each manufacturer, unless both the acoustic sound and the RTL sound were heard in significant proportion to each other, the average respondent came in around 15 ms before hitting the Just Noticeable Difference, with a lot of piano players coming in closer to 10, and when I tallied them all it averaged around 13.2-ish. But there were more than a few people who could quickly be driven from upset to insane with just around 6-7 milliseconds, so it sure seems to depend on the person and sometimes the instrument.

But then again, given how many shows have the guitarist 20 feet from their amp and not screaming bloody murder, we really shouldn't be all that surprised.

5 ms RTL IS achievable, but it'll take some good drivers and a stable computer.
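And if you'd rather measure than trust a manufacturer's claimed buffer size, the classic check is a loopback test: patch an output straight into an input with a cable and time a click around the loop. A minimal sketch with Python's sounddevice and numpy; it assumes your interface is the default device, and you'd repeat it at each buffer size you care about:

```python
# Measure real round-trip latency with a loopback cable (out 1 -> in 1).
# Assumes the default audio device is your interface.
import numpy as np
import sounddevice as sd

fs = 48_000
signal = np.zeros((fs, 1), dtype='float32')
signal[0] = 1.0  # single-sample click at t = 0

recording = sd.playrec(signal, samplerate=fs, channels=1)
sd.wait()  # block until playback and recording finish

delay = int(np.argmax(np.abs(recording)))  # sample index of the returned click
print(f'RTL: {delay / fs * 1000:.2f} ms')
```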
 
The averages the research finds are usually WAY higher, but that explains why so many people "concerned about latency" were just fine at 20 milliseconds when playing guitar thru iPhones.

This one finds an average of 49 milliseconds before the average person notices latency, which is insane, because if they hear both sources, 35 ms is usually given as the minimum echo detection threshold. https://dl.acm.org/doi/fullHtml/10.1145/3678299.3678331

AES has tons, but usually behind paywalls: https://aes2.org/publications/elibrary-page/?id=14256

http://eecs.qmul.ac.uk/~andrewm/jack_am2016.pdf

The scholarly articles usually land on a much higher value than we'd expect as musicians. But when doing the Round Trip Latency Roundup testing, to see what actual RTL came from a claimed buffer size from each manufacturer, unless both the acoustic sound and the RTL sound were heard in significant proportion to each other, the average respondent came in around 15 ms before hitting the Just Noticeable Difference, with a lot of piano players coming in closer to 10, and when I tallied them all it averaged around 13.2-ish. But there were more than a few people who could quickly be driven from upset to insane with just around 6-7 milliseconds, so it sure seems to depend on the person and sometimes the instrument.

But then again, given how many shows have the guitarist 20 feet from their amp and not screaming bloody murder, we really shouldn't be all that surprised.
Thanks for the info.
5 ms RTL IS achievable, but it'll take some good drivers and a stable computer.
Sorry, I meant my playing's tightness to a click.
 
