Simple Questions about Digital Modeling

I have 3 questions related to 3 dimensions of Digital Modeling:

  1. Digitally Modeled Effect Chain Visualization (more of a rant -sorry)
  2. Making sense of Digitally Modeled Effects and Amplifiers
  3. Backward Engineering Digitally Modeled Patches

My question regarding Digital Modeling Platforms isn’t which is best, although I imagine there may be some strongly held opinions there. Also I am not arguing for or against the merits of Digital Modeling (DP). Rather it is the efficient and effective utilization of these platforms which translates to overcoming some of the challenges they inherently present.

By way of example, I purchased the (poor man’s DP) GT-1000 and a few weeks later began to notice a few shortcomings which have nothing to do with its reproduction of ‘real’ amp and effect sounds.

First, with real amps and effects, it’s not hard to discern what equipment you’re using or how that equipment is dialed in (parameterized). All you have to do is look around. In contrast, with DPs you see amps and effects as icons on the device or a computer screen, but those icons are somewhat opaque: you have to drill into them individually to see what they represent. Likewise, the parameters for those devices are only visible when you’re drilled into that specific device. With physical devices, you can easily scan your effects chain and understand what’s in play. With DPs you can’t. So, although I believe it to be a fanciful idea, here’s a “feature request” for digital platforms: please give me a picture of the devices and amps in the chain with visible parameters.

Second, while these platforms model an impressive (if not amazing) selection of amps and effects, at least in the case of BOSS, the literature regarding what is being modelled is painfully thin. For example, the X-CRUNCH amp is explained to be “Crunch sound that uses MDP to deliver a crisp tone from all strings.” Is that Bud Light “Crisp”? Classic amp types are better, e.g. BRIT STACK = “This models a Marshall 1959,” which is admittedly more precise but still assumes you know what that means practically.

Third, the GT-1000 comes with 50 factory preset patches which presumably are the sounds BOSS thought would be of the most interest to their market. However, there is very little, well, no information as to the rationale underlying that patch selection or why BOSS chose a particular combination of amps and effects to create those sounds.

Finally, a more esoteric issue. With real amps it’s not unusual to have to get loud! to get those amps to deliver their iconic sound. Is that also true of their digital twins? The answer isn’t entirely obvious. The good news might be that DPs can give you that ‘get loud!’ performance without risking a restraining order or divorce proceeding. The bad news might be that you still have to drive the digital twin (presumably into FRFR amps) to get that iconic delivery. The answer is probably in the middle somewhere which implies making some decisions and tradeoffs.

So my (simple) questions:

  1. Is it possible to associate digitally modelled amps with signature sounds or artists? For example, Santana uses a Mesa Boogie King Snake amp with more or less nothing else. Can I reduce Mesa Boogie to that sound? Likewise, Clapton and others used Marshalls back in the Wheels of Fire days and presumably not much else. Is that the classic “1959 Marshall” sound? Without some sort of reference point it’s difficult to make practical sense of these digital twin amps (e.g. how to implement them?).

  2. Similarly, regarding the dozens of effects offered by DPs, how do I associate a useful sound with those devices? Obviously, with both the amps and effects, there may be a number of associated “iconic” sounds and artists, which is fine, but some sort of reference table (cheat sheet) would be useful.

  3. Regarding the patches themselves, each patch typically has AT LEAST 6 devices, each with 6 - 12 parameters, so backward engineering those sounds can be daunting. I guess I could dissect each patch, painstakingly turning devices off and on and adjusting parameters to learn what each device can deliver. And of course some patches are stupid, er, I mean “personally unappealing”, so I could kick those to the curb. I realize some “painstaking” effort will be unavoidable, but it would be nice to have a decision tree, so to speak, by which to evaluate the default patches so I could use that knowledge to create the patches I want and need.

  4. What do people think about the get loud! question? Must digital amps be played as loudly as their mighty & distinguished (but so last week) parents?

Best,

Hagen

2 Likes

Lots of interesting questions here! I have a few half-baked thoughts I’ll throw into the discussion. Full disclosure: I have been a full-time Kemper user for the last three years so keep that in mind with the following responses.

Questions 1 & 2

I think it is possible to model these famous tones digitally, but these are very complex things to get right. If we take a cold, logical, scientific view of amp sounds, they are basically a kind of mathematical transformation of the plinky little analog signal coming from your pickups into the glorious tones we all love. The trick is figuring out what that mathematical transform is, and this is where all the arguing happens about which units do the best job, or whether digital modeling is even worth it.

These transforms are extraordinarily complex, which is where I think a lot of the “cheap” models get their bad reputations. Different platforms use different approaches to reproducing these transforms. Some models are built from the inside out to mimic these transforms; others use a kind of external observation (Kemper is famous for this).

A metaphor would be like writing a software program to be a chat bot. You could either write a bunch of rules like “if the person types this, then respond with this…” or you could observe a bunch of conversations and start working out the probabilistic likelihood of the most appropriate response. Each approach is a valid one for achieving the same goal of simulating a conversation with another human being. Each also has its relative strengths and limitations in an attempt to mimic the real thing.
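To make that metaphor concrete, here’s a toy sketch (entirely my own illustration, with made-up data — nothing to do with how any modeler or chatbot is actually built):

```python
from collections import Counter

# Approach 1: hand-written rules (roughly analogous to building a model
# from the inside out, component by component).
RULES = {"hello": "hi there!", "how are you?": "fine, thanks"}

def rule_bot(prompt):
    return RULES.get(prompt, "sorry, I don't understand")

# Approach 2: observe lots of conversations and pick the most frequently
# seen reply (roughly analogous to profiling a rig from the outside).
observed = [("hello", "hi there!"), ("hello", "hey"), ("hello", "hi there!")]

def observed_bot(prompt):
    replies = Counter(r for p, r in observed if p == prompt)
    return replies.most_common(1)[0][0] if replies else "hmm"
```

Both “bots” answer “hello” the same way, but they fail differently on inputs they’ve never seen — which is exactly the strengths-and-limitations tradeoff above.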

One final point I’d make here is that I think many guitarists don’t realize how different their favorite amp tones sound in isolation from the fully-mixed tracks of their favorite recordings. A quick search for isolated guitar tracks on the web never fails to turn up surprises. That skyscraper-tall sound from that first Van Halen record sounds much thinner in isolation. This is fine as it’s what makes the mix work. The “bedroom” tones that many of us noodle around with at home are generally too sonically broad across the EQ spectrum to function well in a recorded mix or live setting with a full band.

Now, none of this rambling really quite gets to your original question, which I think is about a common, fixed set of references. There really isn’t a good answer here and the best advice I can offer is don’t let the name of the amp model/effect/patch affect how you judge the sound.

I’ve found some real “metal” tones by tweaking what are advertised as twangy country or lush jazz tones. Likewise, I’ve found some glorious edge-of-breakup Strat tones by backing down some searing metal patches. Let the ears be the guide.

Question 3

I find that a lot of digital patches tend to have too many effects on them in an attempt to recreate a particular sound. Again, if we remember that we’re trying to regenerate a complex transform of pickup signal to sonic glory, the more things in that chain, the harder it is to understand what’s going on.

Is that EQ slot really necessary, or is that trying to cover up something that would be better addressed in the pre-amp stage? Is that pitch-shifting effect really the right thing to get that high-end crispness, or would a simple tweak of the “presence” knob get a similar result in a more consistently pleasing way?

As you point out, you’re facing a combinatorial nightmare of options to twiddle on and off, fumbling about in the dark trying to find “the sound”. If you have a specific tone in mind, I would suggest just starting with amp models (no effects) and get as close to what you’re looking for with those. Then add effects sparingly to make up the difference.

Question 4

I think this is an interesting question because the loudness thing with digital systems is very different from traditional analog amps. The first thing is that the volume knob affects these two kinds of amps very differently. With digital amps it just makes them louder. It’s the same signal, only with more dB. Now you might perceive a tonal difference with changes in volume, but these are likely the result of certain frequencies making it through your ears and into your brain that aren’t perceived at lower volumes. But the actual transform is the same (unless the underlying digital model accounts for this).

For tube amps, it’s a different story. The actual output transform is affected, in some cases quite radically. A Fender Twin Reverb cranked up to paint-peeling levels is doing something different than at a more sociable level because of how it’s built.
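In DSP terms the difference can be sketched in a few lines (a deliberately crude illustration of my own, not how any particular unit is coded): a digital master volume is a pure multiply, while a pushed tube stage is a nonlinear shaper whose output shape depends on drive.

```python
import math

def digital_volume(samples, gain):
    """A digital master volume is typically just linear scaling:
    the waveform's shape (the 'transform') is unchanged."""
    return [gain * s for s in samples]

def tube_style_stage(samples, drive):
    """A crude stand-in for a cranked tube stage: tanh soft clipping.
    Turning up 'drive' changes the shape of the output, not just its level."""
    return [math.tanh(drive * s) for s in samples]

# a short burst of a 440 Hz sine at 48 kHz
sine = [math.sin(2 * math.pi * 440 * n / 48000) for n in range(480)]

quiet = digital_volume(sine, 0.1)
loud = digital_volume(sine, 0.9)     # exact scaled copy of quiet: same tone, more dB

clean = tube_style_stage(sine, 0.5)  # nearly linear at low drive
dirty = tube_style_stage(sine, 5.0)  # flattened peaks, i.e. new harmonics
```

Scale `loud` back down and it’s indistinguishable from `quiet`; scale `dirty` down and it is still a different waveform than `clean`. That’s the “get loud!” asymmetry in a nutshell.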

What I think is interesting about volume in the digital vs. analog debate is how the final output sounds and feels to the guitarist. One thing I don’t think enough people talk about is how the room affects the output as well as the ear’s relation to the speaker output. Your favorite tube amp sounds radically different right up on the speaker cone than it does when it’s on the floor and your ears are five to six feet above it.

Digital models are a little different as the full-range speaker sounds much more consistent, regardless of where your ears are in relation to it. So, in a sense, a lot of digital models “bake” the room sound into the patch. Which, at live volumes, can sometimes feel like you’re playing an amp in another room, instead of right next to you.

Much of that room stuff is related to reverb and delays, which are functions of adding space and depth to a sound. Again, what we think we hear on our favorite records is often quite different in isolation. In a full-mix/band situation, the less-is-more approach is usually better.

So, that’s my very long .02 on the subject (after only one cup of coffee). I’m sure others will have other opinions, but this is where my brain is at after 30 years of playing and a lot of time spent on digital stuff (the last 3 as a full-time “Kemper guy”).

Cheers!

1 Like

@alexvollmer Thank you for your thoughtful response. Although your answer didn’t exactly “solve” my dilemma, at least I don’t feel like it’s crazy to concern myself with it.

I believe at a more philosophical level there is a deeper question regarding the way we articulate sounds. Today, it’s convenient and market savvy to define digitally modelled sounds in reference to existing amps and effects - that carries with it the appealing idea that one can have a digital warehouse full of expensive equipment for a few thousand dollars with a near-infinite ability to mix, match, and combine gear virtually. However, at some point engineers are going to come up with credible and desirable digital effects and amps which aren’t based on the iconic sounds of past equipment. At that point the need for a more sophisticated vocabulary will surface, because ‘sounds like a 1959 Marshall’ won’t be helpful or true. It’s quite possible it’s not exactly helpful or true today.

Another facet of this issue is what one might call the functional question. At least in my case, I am attracted to the blended transitions between notes that are produced by overdrive, gain, distortion and delay. It’s definitely possible to argue that I like that because it muffles some sloppy technique. However, be that as it may, it would be appealing to engineer that blending of notes without all the noise generated by those effects (at least in cases where a cleaner sound is desirable). I would also like to avoid having to crank the volume to ear-bleeding levels to achieve that goal.

Achieving that goal implies a more sophisticated grasp of how those effects create that “blending” (note: using the word “blending” makes my point - we don’t have a particularly precise vocabulary for articulating guitar sounds and tones). All of those effects seem to deliver that same function while approaching it in different ways. I would like to be able to talk about that blending function independently of distortion, gain, overdrive and echo. What’s really going on there? How can I get that sound without the extra historical baggage?

I hardly know everything, well, anything, about digital modelling or audio engineering so it’s very possible there is a body of information that people have generated over the years. However, so far what I HAVE found is a kind of split between old amp-and-effect speak (‘1959 Marshall’) and electrical engineering speak (‘germanium transistor design’). Neither of those is much help.

Eric Evans wrote a programming book called Domain Driven Design (DDD) in which he argued that subject-domain experts and software engineers need to consciously develop a new shared language which bridges the knowledge gap between the two interested parties. Eric Drexler has a similar chapter about the cross purposes of scientists and engineers in Radical Abundance. I don’t want to over-dramatize the matter (too late :wink: ) but I would argue that digital modeling is forcing us to develop a shared language regarding sound and tone.

1 Like

Wow, you had me with the Eric Evans reference. I still think about that book to this day.

Yes, sorry for sort-of hijacking the thread. I see now what you were really getting at and I think you’re on to something. I know it’s easy to throw the term “marketing” around as a quick explanation of things, but in this case I do think a lot of digital modeling has a “marketing” problem in that it doesn’t address a broad enough customer base on its own without referring to existing models (the “1959 Marshall”, in your example).

At some point, this technology will become so ubiquitous that, perhaps, it will develop its own domain-specific vocabulary separate from its analog ancestors. Until that time, I think we will live in an odd in-between place where you have to know a bit about both technologies and then do some tea-leaf reading to grok what modern, modeled interpretations of older technologies are trying to be.

One other thing that doesn’t help with clarity is how fear of litigation encourages many digital manufacturers to “encode” the names of their patches and configurations in sort-of in-jokey ways that only make sense if you know the inside story of the original sound they are trying to mimic. It would be great if you just pressed a button labeled “Jimi Hendrix — Little Wing” and BOOM, instant Little Wing. But instead you have to puzzle out some oblique reference in the hopes of finding a tone that matches the idea in your head.

Your idea of note-blending sounds fascinating and reminds me a bit of some of the experimental stuff Andy Summers and Robert Fripp did together in the 80s. Maybe not so much in terms of execution, but perhaps in intent. I think you’ve hit on a very interesting idea of the limitations we are artificially placing on guitar tones based on the technologies we have used up to this point.

1 Like

@alexvollmer I have also worried about the legal challenges confronting the digital modeling engineers. I haven’t delved too deeply into the AXE FX platform (I decided I needed to learn to ride with training wheels on the GT-1000 first :wink: ) but it seems like they are a lot more willing to say we sampled this amp, these speakers, this effect, and now you can buy the sound without buying the hardware - something with which Marshall and Mesa Boogie … might take issue. That problem definitely makes the design and engineering landscape a lot more treacherous. It’s possible that the hardware manufacturers will follow some of the tech vendors down the path of licensing their own digital versions of their products - a business model they likely can’t embrace soon enough.

Although no one would suggest it’s an easy (or short) read, Domain Driven Design is a brilliant book and did more to shape my own perspective than much of what I have read (which is a lot). The Drexler book is different but had the same profound impact on my thinking. In the chapter to which I refer he argues that scientists seek ‘truth’ about the universe whereas engineers want to apply that knowledge to human-centric problems (including how to make a lot of money). It turns out the overlap there, especially from the profession/career point of view, is much more tenuous than one might assume.

You might be interested in this whitepaper from Fractal Audio (Axe FX).

Amp modeling is not exactly “sampling” a real-life audio signal. The process for capturing Impulse Responses (IRs) is similar to sampling, but the amp model itself is essentially a digital replication of the audio shaping done by the amplifier on the source signal. The best modern models use component-level simulation of the amp circuit - it’s not just capturing the end result of the speaker vibration.

1 Like

Most modern digital modelling software uses a stage-based approach. There is a procedure imitating one valve stage; applying it several times with different parameters imitates a multi-stage amp. Then come some smaller tricks like imitating the output transformer, negative feedback, cabinet impulse response, etc., and - voila.
The problem is with the stage model. It can’t be really precise, because in that case we would have to solve a system of nonlinear differential equations numerically (like SPICE schematic software does). No one would wait minutes to get a second of guitar sound, so all these models are quite simplified. For example, it wasn’t long ago that they started to imitate the bias shifting caused by cathode-grid leakage current, and even now they imitate it in a very simple manner that ignores the nonlinearity of the cathode-grid diode. The same goes for load changes, when this leakage increases the load on the previous stage. Calculating this would require some kind of iterative solving procedure, which is not acceptable for real-time applications.
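The stage-cascade idea can be sketched in a few lines. Caveat: this is my own toy simplification using tanh as the stage curve and made-up parameters - no vendor’s actual stage model, and none of the refinements (bias shift, inter-stage loading) discussed above.

```python
import math

def valve_stage(x, gain, bias):
    """One simplified 'valve stage': a bias offset into asymmetric tanh
    soft clipping, with the bias-induced DC removed afterwards."""
    return math.tanh(gain * (x + bias)) - math.tanh(gain * bias)

def amp(x, stages=((8.0, 0.1), (5.0, -0.05), (3.0, 0.2))):
    """Apply the same stage procedure several times with different
    (gain, bias) parameters to imitate a multi-stage amp."""
    for gain, bias in stages:
        x = valve_stage(x, gain, bias)
    return x
```

Run a waveform through `amp()` sample by sample and you get asymmetric clipping (positive and negative half-waves distort differently), which is part of why pushed valve stages produce even harmonics. Output transformer, feedback, and cabinet IR would be more code in the same spirit.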

This is likely quite subjective, but how audible is the effect of “cathode-grid leakage current” on the signal? I suspect that we’re reaching a point of diminishing returns on the amount of processing required to get from 99% simulation to 100% simulation, if that’s even possible.

@ASTN @LuckyMojo @alexvollmer It strikes me that today’s digital modeling is artificially (or possibly commercially) anchored to imitating existing iconic sounds, likely because the business value proposition is ‘we can give you that same sound coupled with the ease and flexibility of digital twin technology’. However, it seems obvious to me that this posturing is simply a historical accommodation, because once you successfully digitize the sound from the hardware there is really no reason to limit one’s imagination to reproducing tones that were popular (or at least originated) decades ago. It doesn’t make sense to me that the only measure of excellence is sounding like something else.

I’ve been reading Guitar Effects by Dave Hunter which includes interviews with Roger Mayer (famous for developing effects for Jimi Hendrix). Mayer said two things that stood out to me:

  1. ‘we are only concerned with making a good sound that goes onto tape, and we’re going to use anything we can to get it’

Hunter, Dave. Guitar Effects Pedals: The Practical Handbook

so what was going onto tape was a unique experiment which included the room, post processing etc… not a quest for THE SOUND.

  2. ‘The thing I miss probably most of all about Jim is that he was up for anything. If you came up with a new idea and it appealed to him and he could imagine it, he’d say, “Let’s do it!” The enthusiasm he exhibited for doing something new and exciting and innovative was great. Nowadays you speak to a lot of the people and… you know, any guitar player who cannot be bothered enough to sort himself out and thinks the roadie can do it for him, in my book, he’s not even interested.’

Hunter, Dave. Guitar Effects Pedals: The Practical Handbook

So it seems as though our obsession with reproducing sounds is wrong headed - that wasn’t the mindset that led to the origination of the sound we’re trying to emulate.

I was thinking that there were 50 presets that came with the BOSS GT-1000 and then I realized there are 50 banks of 5 presets - so 250, and that’s without ANY information about why those patches were selected or why BOSS chose the components in the effect chain to arrive at that sound. It’s pretty overwhelming! It doesn’t seem reasonable to imagine BOSS is saying ‘pick one of these as your sound’ but it’s a really big jump to derive your own effects strategy from those presets.

I really like the MIMIC PDF that @ASTN referenced - although it’s pretty dense and abstract so it’s going to take a few days to digest. It’s interesting that Fractal says “the presence control on many amps did nothing for the first 80% of its rotation … so we did not model that aspect” (pg 7). My question is ‘Why doesn’t Fractal create a Fractal Amp?’ Why not improve other aspects of the original engineering?

One answer is most of us wouldn’t be smart enough to deal with those new parameters. Another is most of us aren’t adventurous enough to risk coming up with an inferior (no Marshall Good Housekeeping seal) sound in the manner Hendrix took risks. Finally, I certainly don’t want to drift off into the sunless emptiness of synthesizer sampling and patches :wink:. One good thing about the old iconic sounds is they exert enough gravity that we don’t risk drifting off into space.

Net net, it’s a fine mess. While digital modeling simplified the task of assembling effect chains, it dramatically increased the complexity of determining the tone we want (unless it’s just someone else’s). As soon as you cut the tether to the iconic mother-ship sounds, you’re adrift. Couple that with the fact that playing guitar isn’t reducible to tone, and it’s difficult to tell how much energy to devote to this issue. How does one do what Hendrix appeared to do? Have fun with tone, but get on with making music.

I’m sure others have said lots of great things. I’ll say this in response to the original post.

If you have a chance to spend some time with a recent Axe-Fx, your questions may answer themselves. I purchased the GT-1 to help consult with a friend who needed portability on their jazz tour. Is it a nice little unit? Sure. Does it meet the requirements that my Axe-Fx II XL+ does for replacing the mountains of gear now getting dusty and leaking capacitor fluid? Nope.

As for loudness, I think it’s best to standardize patch programming at a given level using a loudness meter for sending to front of house, and then you can turn up your monitors to pants-waving loud if you so desire (just please use earplugs/hand them out to audience members). Cheers.
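The level-matching arithmetic behind that advice is simple enough to sketch. Caveats: real loudness meters measure LUFS per ITU-R BS.1770 (with frequency weighting and gating), so the plain RMS below is only a rough stand-in, and the -18 dBFS target and all names here are made up for illustration.

```python
import math

def rms_dbfs(samples):
    """RMS level of a block of samples, in dB relative to full scale."""
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    return 20 * math.log10(rms) if rms > 0 else float("-inf")

def match_level(samples, target_dbfs=-18.0):
    """Scale a patch's output so its RMS hits a chosen reference level
    (the -18 dBFS default is an arbitrary example, not a standard)."""
    gain = 10 ** ((target_dbfs - rms_dbfs(samples)) / 20)
    return [gain * s for s in samples]

# e.g. a patch that currently sits around -5 dBFS RMS...
hot_patch = [0.8 * math.sin(2 * math.pi * 110 * n / 48000) for n in range(4800)]
leveled = match_level(hot_patch)   # ...brought down to the -18 dBFS target
```

Do this once per patch against the same reference and front of house gets a consistent level no matter which preset you stomp on.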

Why not make your own digital-model DSP, to gain further insight?

Use Reaper and its JS fx system (super simple to learn - super quick to dev).

Here’s the general pipeline for an amp “model”:

  1. Write a bandpass filter (use the well-documented RBJ algos or use code from the large pool of JS fx)

  2. Write a gain and sigmoid clipper (atan, tanh, or roll your own); maybe oversample it, and maybe integrate the clipper if you care about aliasing.

  3. Write another clipper or fuzz for the power amp stage.

  4. Write a 3 band eq tone stack.

  5. Write a Convolution IR processor for speaker emulation (JS FX has an example of this along with a few choice impulses)

Balance and tweak until you’re happy.
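For anyone curious what those five steps look like in code, here’s a compressed sketch - Python rather than JSFX purely for readability, bandpass coefficients from the RBJ Audio EQ Cookbook, everything else deliberately crude (the tone stack and 4-tap “IR” are my own toy inventions, and there’s no oversampling, so it will alias at high drive):

```python
import math

SR = 48000  # sample rate

def rbj_bandpass(samples, f0=800.0, q=0.7):
    """Step 1: RBJ 'constant 0 dB peak gain' bandpass biquad."""
    w0 = 2 * math.pi * f0 / SR
    alpha = math.sin(w0) / (2 * q)
    b0, b1, b2 = alpha, 0.0, -alpha
    a0, a1, a2 = 1 + alpha, -2 * math.cos(w0), 1 - alpha
    x1 = x2 = y1 = y2 = 0.0
    out = []
    for x in samples:
        y = (b0 * x + b1 * x1 + b2 * x2 - a1 * y1 - a2 * y2) / a0
        x2, x1, y2, y1 = x1, x, y1, y
        out.append(y)
    return out

def clipper(samples, drive):
    """Steps 2 & 3: gain into a tanh sigmoid. Run it once hard for the
    preamp stage, once gently for the power amp stage."""
    return [math.tanh(drive * x) for x in samples]

def tone_stack(samples, low=1.0, mid=0.7, high=0.9):
    """Step 4: a toy 3-band tone stack. Split into bands with one-pole
    filters and re-mix (a real passive tone stack is an interactive
    network; this only gives the flavor of the idea)."""
    k_lo = 1 - math.exp(-2 * math.pi * 300 / SR)   # ~300 Hz split
    k_hi = 1 - math.exp(-2 * math.pi * 3000 / SR)  # ~3 kHz split
    s_lo = s_hi = 0.0
    out = []
    for x in samples:
        s_lo += k_lo * (x - s_lo)
        s_hi += k_hi * (x - s_hi)
        l, h = s_lo, x - s_hi                      # low band, high band
        out.append(low * l + mid * (x - l - h) + high * h)
    return out

def cab_ir(samples, ir):
    """Step 5: direct-form convolution with a speaker impulse response."""
    return [sum(ir[k] * samples[n - k] for k in range(len(ir)) if n >= k)
            for n in range(len(samples))]

# Run the whole chain on a test tone with a made-up 4-tap "IR".
tone = [0.3 * math.sin(2 * math.pi * 220 * n / SR) for n in range(1024)]
fake_ir = [0.7, 0.2, -0.1, 0.05]
result = cab_ir(tone_stack(clipper(clipper(rbj_bandpass(tone), 10.0), 2.0)), fake_ir)
```

In JSFX the same pipeline lives in the `@sample` block and runs in real time; a real cab IR would have thousands of taps instead of four.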

Finally realize that “Amp Modelling” is purely a marketing term and fancy GFX mean jackshit when it comes to quality :slight_smile:

Wow cool, I had never looked at those! Thanks for the tip!

Hey Tommo!!

Yeah it’s a really great system to play with and learn DSP - because it’s instant/no compiling. The language is a little bit odd compared to C/C++, seems a little “Mickey Mouse”, but it’s easy after a while and has OK performance (about 10-20× slower than native optimized C/C++).

You also have FFT/Spectrographs in the Cockos plugins ReaEQ and ReaFir if you’re too lazy to write your own and want to analyse any output!

Have fun!

I was looking to write some DSP for a pedal experiment, would you advise that I prototype in Reaper as you suggested for amp models? Does Reaper have FFT and all of that stuff in its library? When you get an algorithm that you like, is there a modern DSP chip that you use or would suggest (for a pedal)?

Reaper JS has all that stuff:

https://www.reaper.fm/sdk/js/js.php

I would say it all depends on your coding experience and DSP knowledge.

If you can code a little and have a rough idea about DSP - use reaper.

For prototyping def use Reaper - it is sooooo quick to develop and test in context with your audio setup and music. Reaper JS EEL is just amazing for DSP development - everything is already set up for you - it works with pure doubles and handles all the messy FPU exception handling for you.

Embedded coding is a little trickier - you have to understand low-level C and various hardware protocols and buses: I2C/I2S/SPI/DPI/8080 to name a few.

Chips for embedded - my experience is limited but here’s my thoughts so far:

  1. Arduino Nano - super quick to code with the Arduino IDE, 8-16 MHz, no DAC, limited memory - it’s a fun little device to start with but a no-no for any kind of serious DSP (2 KB RAM!)

  2. RPI Zero - beast of a SOC 1ghz - 512mb ram, bags of FPU power for DSP - no ADC or DAC for output - Bare metal coding, tough as nails!

  3. ESP32 Lyrat or A1S - 240mhz dual core - nice SDK - ADF/IDF, Has audio input and output built in and on board - also has a 3w power amp and speaker connections. Limited ram 512kb + 4mb PS-Ram

There are plenty of other dev boards out there - but unless you’ve been doing embedded stuff for a while I would avoid them - they are expensive and require the hardware skills and gear to set up properly and debug.

I’m currently playing with the ESP32 - and after a horrendous start with it - Loving it.

2 things in DSP take up the most CPU power:

  1. Oversampling for alias reduction

  2. IR Convolution processing - the longer the IR - the more CPU power needed.
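The IR-length point is easy to quantify with back-of-the-envelope arithmetic (my own toy numbers, not any product’s specs): direct-form convolution costs one multiply-accumulate per IR tap per output sample, so doubling the IR doubles the work.

```python
def direct_macs(block_len, ir_len):
    """Direct-form convolution: one multiply-accumulate (MAC) per IR tap,
    per output sample, so cost grows linearly with IR length."""
    return block_len * ir_len

# a 64-sample audio block against cabinet IRs of two lengths
short_cab = direct_macs(64, 1024)    # 65,536 MACs per block
one_second = direct_macs(64, 48000)  # 3,072,000 MACs per block at 48 kHz
```

Which is why real-time units lean on FFT-based (partitioned) convolution for anything much longer than a short cab IR, and why long IRs are murder on the small chips above.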

Have fun!

1 Like

I just remembered this board - it looked pretty cool when I looked into it years ago - was too $$$ for me tho, has the ram - a little low on CPU power tho @ 168mhz.

http://www.axoloti.com/