Creating an app for the Magnet


I’ve been thinking some more about how to use the Magnet as a practice tool, and I think we need apps to accomplish a workflow like this:

  1. Scan a barcode on the PC to link the phone with the computer.
  2. Configure the desired amount of slowdown, fps, etc. on the PC.
  3. The app running on the Magnet-mounted phone detects a rise in sound level and starts recording as you tear it up.
  4. The sound level drops and recording stops. Post-processing according to the settings from step 2 is applied and the snippet is uploaded to the PC.
  5. Playback starts automatically on the big screen.
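The level-detection logic in steps 3-4 could be sketched roughly like this. This is only a sketch: `frames` stands in for whatever audio capture API the phone provides, and the thresholds and hold time are made-up values you’d tune by ear.

```python
import math

# Rough sketch of the level-triggered recording from steps 3-4.
# `frames` is any iterable of audio buffers (lists of float samples);
# the thresholds and hold time are made-up values to tune by ear.
START_RMS = 0.05       # start recording once the level rises above this
STOP_RMS = 0.02        # consider a frame "quiet" below this
STOP_HOLD_FRAMES = 20  # stop after this many consecutive quiet frames

def rms(frame):
    return math.sqrt(sum(s * s for s in frame) / len(frame))

def detect_take(frames):
    """Yield 'start' and 'stop' events as the sound level rises and falls."""
    recording = False
    quiet = 0
    for frame in frames:
        level = rms(frame)
        if not recording and level >= START_RMS:
            recording = True
            quiet = 0
            yield "start"
        elif recording:
            quiet = quiet + 1 if level < STOP_RMS else 0
            if quiet >= STOP_HOLD_FRAMES:
                recording = False
                yield "stop"
```

The hold count is there so a brief gap between notes doesn’t end the take early.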

The idea here is that each snippet should be a few seconds long only, that way it can be instantly processed and uploaded for playback and the feedback loop is tightened.

Two apps would be required here, one that runs on the PC (most likely in the browser) and one that runs on the phone.

Does this sound like a good idea to anyone else? Anyone interested in pitching in?

I write software for a living, but I haven’t done anything in this space beyond messing around, and even that was a few years ago.


That sounds good. How would the linking work with just a barcode, though?


Actually, I’m not sure that’s even necessary, given a few assumptions:

  • The computer and phone are on the same wifi network
  • Only one running app per wifi network

I know for sure it’s doable with a barcode, though, as that’s what these guys do. But since we don’t have to worry about someone else on the coffee shop’s wifi network receiving our files, we can probably do without it.
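Under those assumptions, the barcode-free pairing could work with a simple UDP broadcast beacon: the desktop app announces itself on the LAN and the phone listens for it. A rough sketch - the port number and beacon message are made up:

```python
import socket

# Hypothetical sketch of barcode-free pairing: the desktop app announces
# itself on the LAN and the phone app listens. Port and message are made up.
DISCOVERY_PORT = 47800
BEACON = b"magnet-desktop-here"

def announce():
    """Desktop side: broadcast one beacon packet on the local network."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
    sock.sendto(BEACON, ("255.255.255.255", DISCOVERY_PORT))
    sock.close()

def listen(timeout=5.0):
    """Phone side: wait for a beacon and return the desktop's IP, or None."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("", DISCOVERY_PORT))
    sock.settimeout(timeout)
    try:
        data, (host, _) = sock.recvfrom(1024)
        return host if data == BEACON else None
    except socket.timeout:
        return None
    finally:
        sock.close()
```

In practice the desktop would re-broadcast every couple of seconds until a phone connects back on the returned IP.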


That’s really neat. I think it might work out really well. Thinking about that, I remembered my Huawei has a feature where, if you connect it to your PC via USB, you have the option to see what you’re doing live (like you’re streaming it, for example). Perhaps this could also be done via Wi-Fi?


Yeah, there are several options available to live-stream from your phone, but I find it really hard to see what’s going on as the speed increases.


Well, you couldn’t use it when you’re picking fast, but if, for example, you want to know whether you’re achieving DWPS correctly, there’s no need to transfer a file if you can see it live.
Edit: In real time


If you want to livestream from your phone in real time, this already exists and we used it during one “Talking the Code” help session a couple of years back. You can check out Wirecast broadcasting software from Telestream. That’s the desktop piece. Then you pick up their mobile app and it streams wirelessly at selectable bitrates. It works very well if you’ve got the wifi bandwidth. There is about 0.7 seconds of latency, but it’s close enough that you can still visually process what you’re doing.

A second option is Mimolive, which is the broadcast app we’re currently using. They too have a companion mobile app for phone streaming, but it’s wired - you connect via Lightning cable to your Mac. The latency is a bit less. This is what we’ll be using going forward if we go that route again, since Mimo has other features we like for Skype-type callers.

Keep in mind that none of this is high frame rate - the processor and wifi bandwidth aren’t there. So you can’t, for example, stream at 120fps and record the stream for detailed viewing later. If you want that, your only option is to record locally on the phone and watch it later. It’s possible you could hack something that transmits wirelessly at 30fps or some kind of drop-frame rate while recording locally at 120fps for later viewing. But I doubt it - you’d need to encode two streams simultaneously, and that kills even the hottest desktop video cards right now.

Nevertheless, that we can even have this conversation - man, amazing times.


Figured I’d post an update here. I did hack something like this together during a weekend earlier this year. It’s a piece of shit that I quit working on the minute I got it working, but it got me the following workflow:

  1. Set a countdown (typically 3-5 seconds), clip length (3 seconds is a good value), and playback speed (20% is a typical setting).
  2. Hit the spacebar on the computer.
  3. Get ready while an annoying beep counts you down.
  4. Recording starts on the Magnet-mounted phone at 120 fps and you shred your heart out.
  5. Once the snippet has been recorded, the file is saved on the phone and immediately made available for streaming.
  6. The video file is opened on your computer and played back in slow motion.
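The desktop half of a loop like this fits in a few lines. A rough sketch - `beep`, `start_phone_recording`, `fetch_clip`, and `play` are placeholders for whatever audio, network, and player layer is actually used:

```python
import time

# Rough sketch of the desktop-side loop from the steps above. The four
# callables are placeholders for the real audio/network/player layer.
def run_take(beep, start_phone_recording, fetch_clip, play,
             countdown=3, clip_seconds=3, speed=0.2):
    for n in range(countdown, 0, -1):
        beep(n)              # annoying countdown beep
        time.sleep(1)
    start_phone_recording(seconds=clip_seconds)  # phone records at 120 fps
    clip = fetch_clip()      # blocks until the snippet has been uploaded
    play(clip, rate=speed)   # 3 s of playing at 20% plays back in 15 s
```

The whole feedback loop is then countdown + clip length + transfer time, which is why keeping snippets to a few seconds matters so much.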

At the time I used it extensively to diagnose my TWPS, but recently I’ve found it even more valuable as I’ve been trying to learn crosspicking.

This setup provides close to immediate feedback and makes for a much more efficient practice time IMO.

I feel like this POC has been a success, in the sense that the value of this thing has been proven.

It would be great to make this tool available to everyone else, but I must say, after this experience, that I hate Android development with a passion. Do we have any professional app developers who would be willing to take on the burden of the phone part? If so, I’d be willing to make the other half of the app (the part running on the computer).


I don’t think an app for the Magnet would be very interesting; the interesting practice tool (IMHO) would be a guitar pick with a built-in 3-axis accelerometer and a Bluetooth connection to the phone. The phone could presumably do the math and give you things like beats per minute, distance of motion, a plot of the path of the pick tip, etc. My guess is that it’s really obvious when the pick hits a string, etc.

I’d much rather your brainpower go in a direction like that… and I’d be sure to buy one! :grinning:


Great work here! This is essentially the way the original Shredcam app worked, except the camera was connected via FireWire. You were always recording to disk (a RAMdisk, actually) so the file could open immediately. I could review and hit “save” to commit a copy to the HDD. Of course the copy took time, so I’d try to keep the conversation going while watching the “save” operation out of the corner of my eye. How long are you recording for, and how long do the wireless transfers take?

One drawback of the old rig was the limited filming time. The camera output was 30MB/sec RAW video, so I could only do about ten- or fifteen-second clips before running out of RAMdisk. And I had to know when to hit “record”, so I’d frequently miss the best takes, which often happened spontaneously.

What would be even better is some type of ring buffer so you could go back and capture the preceding ten or fifteen seconds once you know you played something you want to look at. Optimally you’d do this via footswitch. So there is no countdown, the camera is always recording at least that far back, and you just grab the part you want by stomping. Have you experimented with anything like that?
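The ring-buffer idea is cheap to sketch: keep the last N seconds of frames in a bounded queue and dump it when the footswitch fires. A rough Python sketch, with `fps`, the clip length, and the frame objects all as placeholders:

```python
from collections import deque

# Rough sketch of the "always recording" ring buffer: hold the last N
# seconds of frames in memory and grab them on a stomp. The fps value,
# clip length, and frame objects are placeholders.
class RingRecorder:
    def __init__(self, fps=120, seconds=15):
        self.buffer = deque(maxlen=fps * seconds)

    def push(self, frame):
        """Called for every captured frame; old frames fall off the back."""
        self.buffer.append(frame)

    def grab(self):
        """On stomp: return the buffered frames (the preceding N seconds)."""
        return list(self.buffer)
```

Because `deque` with a `maxlen` discards old entries automatically, memory use stays constant no matter how long the camera runs.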


I agree that real-time is better than waiting, and that direct measurement is probably the future more so than video. But the problem with monitoring the pick is that it doesn’t tell you if you’re actually making the right body movements.

But move the accelerometer to the arm and hand, and now you have something. We took a look at this type of tool in our live chat with Teemu:

This has a lot of potential, but we’d need a different interface than the one they’ve designed for golfers, and we might need more sensors than just the two that are provided if we want to capture the upper arm’s position and also the position of the guitar.


I think that if you have something like the position of the pick tip and a description of its orientation (a quaternion, whatever), then it will likely be immediately obvious how good the player is, and I think that many mistakes (like string hopping) will really stand out and can be captured. I agree that instrumenting more parts of the body reveals a lot, but just the pick is likely enough for people to develop speed. Video might be good as well for training machine learning systems, PARTICULARLY if you make a Fender Stratocaster pickguard with a built-in camera (so there is no reason for the Magnet).

But a moment’s reflection suggests I was wrong about a 3-axis accelerometer being enough - you also need a gyroscope. These sensors are cheap, though; the expense would be the board, firmware, etc.
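For anyone wondering why both sensors: the gyro gives smooth short-term rotation rates but drifts over time, while the accelerometer gives a drift-free gravity reference but is noisy. The standard cheap way to blend them is a complementary filter. A rough single-axis sketch, with the sample interval and blend factor as made-up tuning values:

```python
# Rough single-axis sketch of gyro + accelerometer fusion with a
# complementary filter. dt and alpha are made-up tuning values.
def complementary_filter(samples, dt=0.01, alpha=0.98):
    """samples: (gyro_rate_deg_s, accel_angle_deg) pairs.
    Returns the filtered angle track in degrees."""
    angle = 0.0
    track = []
    for gyro_rate, accel_angle in samples:
        # trust the integrated gyro short-term, the accelerometer long-term
        angle = alpha * (angle + gyro_rate * dt) + (1 - alpha) * accel_angle
        track.append(angle)
    return track
```

A real pick or wrist sensor would run this (or a proper quaternion filter) per axis on the firmware or the phone.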


This is an interesting idea! I’m sure it would be pretty easy to hook up a USB footswitch of some sort (you could always fall back to stepping on the spacebar of a throwaway keyboard :grin:)

The biggest issue I see with that approach is that I suspect you’d have to spend time navigating the video after the fact. E.g. if you’re working on TWPS at close to 200 bpm, you’d probably want to view the video back at 20% or even 10%. At 20%, a 15s video would take 75s to play, and you’re presumably only interested in, say, 5 seconds of that (1s out of the 15s in the buffer).

So as a practice tool that seems less useful to me, but it would be really cool to implement that as the default mode so you can go “What did I just do?” whenever the magnet is connected.


I’m not sure how you’ve been using this, but the ability to record continuously and instantly review only the last X seconds of playing is ideal for learning motions. This way you’re not filming huge quantities of video, and you’re only saving the attempts you want to learn from. As long as the desktop UI has a playhead, you’re golden. Ideally, again, stomp-controlled. We can fuss over the exact stomp controls we’d need, but some kind of seek forward and backward, switching between regular speed and slow speed, stop/start playback, and save/delete.

Again, I think the future is actually direct measurement of body motions, and not necessarily video - or not exclusively video as we have now. But if we’re doing video, this is the system I’d use the hell out of.


An alternative - it’s not time-based, it’s take-based. The app scans for pauses in the audio, defined as the level dropping below a certain threshold, and automatically chops the recording into separate numbered takes that you up/down-arrow through with your footswitch. Then you don’t really need seek controls. You just need start, stop, save/favorite, etc.

Again, we can fuss over the details but now you’re not even saving video any more, you’re automatically saving clips and possibly favoriting and rating them as well.
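The silence-based take splitting could be sketched like this, assuming per-frame RMS levels as input and with the threshold and minimum gap as made-up tuning values:

```python
# Rough sketch of take-based splitting: scan per-frame audio levels and
# cut wherever the level stays quiet long enough. `levels` is a list of
# per-frame RMS values; silence and min_gap are made-up tuning values.
def split_takes(levels, silence=0.02, min_gap=10):
    """Return (start, end) frame-index pairs, one per numbered take."""
    takes = []
    start = None   # first frame of the current take, if any
    quiet = 0      # consecutive quiet frames seen so far
    for i, level in enumerate(levels):
        if level >= silence:
            if start is None:
                start = i
            quiet = 0
        elif start is not None:
            quiet += 1
            if quiet >= min_gap:
                takes.append((start, i - quiet + 1))
                start = None
                quiet = 0
    if start is not None:                    # clip still open at the end
        takes.append((start, len(levels) - quiet))
    return takes
```

Each `(start, end)` pair then maps straight onto a numbered clip for the footswitch to cycle through.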


Stringhopping isn’t really the issue - that’s pretty easy to spot without technology, because you won’t be able to play fast and you’ll probably experience some kind of arm discomfort.

Instead, what we need is a way to differentiate between the thousand correct ways of doing something. New players are uncoordinated and often switch randomly between motions, even within a single phrase, or over the span of a few notes. This is going to be impossible to spot if you can only see how the pick is moving but not which motions are causing it.

Instead, if you show me, for example, a real-time scrolling graph of what the two wrist axes and the forearm are doing, I can tell you right away which motion you’re making, even when those motions are similar. And I (and you) can see exactly when that pattern deviates awkwardly into something else.

If we record experts doing these motions fluidly, we can probably dispense with a lot of the charting and just ding a bell sound when you do it correctly - instantly, while you’re playing. This is the kind of instant feedback that you can learn from, because it lets you associate the feel of “correctness” with the motion while you’re in the act of performing it. This is what video review can’t provide.
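The “ding a bell” idea could be prototyped crudely with normalized cross-correlation against a recorded expert template - a rough sketch, with the window length and match threshold as made-up tuning values:

```python
import math

# Rough sketch of "ding when it matches": compare a rolling window of one
# motion-sensor axis against an expert template with normalized
# cross-correlation. The 0.9 threshold is a made-up tuning value.
def similarity(window, template):
    """Normalized cross-correlation in [-1, 1] for equal-length sequences."""
    n = len(template)
    mw = sum(window) / n
    mt = sum(template) / n
    num = sum((w - mw) * (t - mt) for w, t in zip(window, template))
    dw = math.sqrt(sum((w - mw) ** 2 for w in window))
    dt = math.sqrt(sum((t - mt) ** 2 for t in template))
    return num / (dw * dt) if dw and dt else 0.0

def should_ding(stream, template, threshold=0.9):
    """Return end indices where the last len(template) samples match."""
    n = len(template)
    return [i for i in range(n, len(stream) + 1)
            if similarity(stream[i - n:i], template) >= threshold]
```

A real version would need tempo normalization (e.g. dynamic time warping) since nobody plays at exactly the template’s speed, but the shape of the feedback loop is the same.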

The golf app has these features and gets a lot of things right in this regard - it’s just not for guitar players.


Lately I’ve been working on rolls to learn crosspicking. As I’m doing that, with the Magnet connected, I just lift my fretting hand and roll on the open strings as I hit the spacebar to record. During the 3-second countdown I move my fretting hand back (or not). I typically record a 3-second snippet, which then autoplays at 30% speed. Then I just repeat this process while making tiny adjustments based on what I’m seeing. The feedback loop here is around 10 seconds, which is pretty tight.

The hard part is really knowing how to adjust. For me the final pickstroke before the roll repeats is the one that fails most often, but the other string changes usually look great. TBH I wasn’t really sure what to do with that information, so I ended up changing anything and everything until I got it. At least this prevents me from burning in a bad motion :man_shrugging:

I’ve been practicing next to a standing desk with a desktop computer, so I just adjust all this stuff using the keyboard and mouse. I’m not sure how useful it would be to manipulate these things with a foot controller, as reviewing takes demands so much focus that you’re not playing the guitar anymore anyway.

I think I’ll put an old keyboard on the floor, though, so I can start recording instantaneously without taking my hands off the instrument! :slight_smile:


You can’t fix a pickstroke, but you can fix a motion so that all pickstrokes look good. And the fix is not likely to be subtle or tiny. 99 times out of 100, when you catch yourself thinking this way, there is little or no difference in the recorded attempt. If the fix isn’t obviously visible or feel-able without the camera, then it’s probably not the fix. The camera is just a way to test that the fix worked. In fact, the camera most often provides negative verification, as in: yep, no change, that didn’t work.

I gather you already know this but the tiny adjustment mania is worth repeating for others here because I see it all the time in Technique Critique posts.

This is the core issue with video review - it’s not the best way to learn. It’s just the best way most of us have available until motion capture becomes standard.

The one benefit of video is that it forces you to stop. Short clips with frequent breaks mean that 50% of your “practice” time is spent not playing. That can be a good thing. From what I’ve seen on here, a lot of people have a tendency to play for long periods of time, essentially repeating a motion that isn’t working. You can’t tell me that someone has legitimately changed their form on each attempt over a two-hour span. There just aren’t that many things to change. And when we think there are, it’s usually just micro-adjustment mania.

So the camera shows you that what you are doing is making no change, and that keeps your practice realistic. If nothing substantial is working after 40 minutes, your practice is done.

Hang it up, come back later!


Yes, this was one benefit of making hundreds of these 3s snippets. I did the micro-adjustment thing two years ago for countless hours trying to get TWPS. I just didn’t have the feedback showing that the ‘adjustments’ I was making had little or no impact. It sure felt like work!

Beyond the magnet, I’m hoping VR gloves will afford the next leap in feedback like what you describe here.