
(I asked him to email me; he did. I've since removed the address for spam reasons.)


Nice to meet you. I sent you an email. Let me know how I can help.


This just makes me happy. I always love seeing people help others. :-)


I became aware of Spark a couple of weeks ago. On the one hand, I was irritated that someone else was on the same "voice-controlled guitar playing" thread as me, but then (a) it's validating and (b) they're not really going the same place I am.

Thingamagig understands the underlying composition, which means it can automate tones, loopers, lights, cameras, etc. That need was the genesis of this project and that's where it's going. Everybody else (including Spark and Fender Play) seems to think the play-along is the end goal, which is why they short-circuit the hard work of building the composition library by integrating Spotify or whatever.

Maybe they're right. Maybe I'm right. Maybe we're both right. We'll see, I guess.


Sorry if it was short on software details.

It's real-time Raspbian. Headless Ardour (Lua implementation). A mix of Guitarix and other amp sims. Proprietary cab-sim IRs. Various other effects packages like rkr and Ardour-native plugins.

A person speaks to Alexa, Alexa calls a series of Lambdas (basically the not-yet-public API), and those send MQTT messages to the device, which is tied to the user's Thingamagig account, which in turn is linked to their Alexa account.

https://github.com/raspberrypi/linux/tree/rpi-4.19.y-rt
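
Roughly, a Lambda on the command path could look like the sketch below, assuming the MQTT publish goes through the AWS IoT Data Plane; the topic name, payload shape, and environment variable are illustrative, not the actual API.

  // Hypothetical Lambda handler: turn an Alexa intent into an MQTT command for the device.
  const AWS = require('aws-sdk');
  const iot = new AWS.IotData({ endpoint: process.env.IOT_ENDPOINT }); // account-specific IoT endpoint

  exports.handler = async (event) => {
    // Assumed event shape, e.g. { deviceId: 'abc123', action: 'loadPreset', preset: 'clean' }
    const { deviceId, action, preset } = event;
    await iot.publish({
      topic: `thingamagig/${deviceId}/commands`, // topic naming is a guess
      qos: 1,
      payload: JSON.stringify({ action, preset })
    }).promise();
    return { status: 'sent' };
  };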

Let me know if you have any other questions.


I was able to get 48000/256/3 working.


I get a 48000 Hz sample rate and a 256-sample buffer size. What is the '3'?


Guitarix is really, really impressive and I'm eternally grateful, but it is raw. It takes a lot of work to get it to sound the way you want, and it's inconsistent from sim to sim. However, by having control over the hardware and the sessions (i.e. presets, basically) and totally bypassing the Guitarix GUI, I was able to make it work for this project.

BTW, I'm not using any of Guitarix's cab sims. Instead, I purchased a commercial license for professionally shot, proprietary cab-simulation IRs, which make all the difference in terms of tone quality. That metal portion of the demo is all about the cab sims.


Why the hate?

Do you know of any modeling solution where you can pick up your guitar and play without your hands ever leaving the guitar?

Do you know of any modeling solution that will automatically change your guitar tones during playback?

And automate your loopers?

And scroll lyrics and chords in perfect time?

And (eventually) automate your vocal effects, lights, fog machines, drone cameras and dancing baby Groot?

For < $150?

Thingamagig understands the underlying composition, which is a critical component of advanced automation. No other solution on the market does this. If one did, I would have bought it and been done with it.


Launching products is super hard; I wish you luck. Getting detractors means you are doing something that at least garners a reaction from people. If there are negative reactions, there will also be positive ones.


You can do all that with a Helix or an Axe-Fx if you're going to mindlessly slave everything to a master track. It's all straightforward MIDI.

Except those all have pedalboards so you can do it all with your feet while playing if you are actually creating anything.

I’m still confused why you think all this total automation is a good thing. It’s like selling a sewing machine that only works with premade patterns.


You didn't read it. He did exactly what you're suggesting in the first iteration: Helix + MIDI controllers. Then he explained why it sucked.

I mean, have your own opinion and all, but if you don't read, then it's not as valid.


I read it, and I don't agree that he explained why Helix sucked. Unless I'm reading it wrong, he seems to think a voice-controlled rig would be better for live performance?

I understand that the tone of these users' comments might be rubbing people the wrong way, but the criticisms were absolutely spot-on (including the one that got flagged/removed). As a DIY project, I think it's awesome... but as a product being brought to a heavily saturated, well-trod market, it deserves criticism/skepticism.


He's making a subtle point here. There's

"do x"

vs

"Alexa, do x"

vs

"Alexa, ask AppName to do x"

The first is when the skill does a thing and then immediately starts listening again. The blue bar is already there and you don't need to say "Alexa".

The second is when the skill is still open but not currently listening. Takes a bit of hackery to keep the skill from constantly closing, namely, long-running APL commands.

The third is when the skill is closed and you're trying to get it to do something. I call this "deep launching", but it almost never works. Amazon has built an amazing system here, but it needs work on recognizing skill names for deep launching.


Yeah, good question. Alexa skills don't want to be long-lived, so you have to force them to stay open so you're not constantly saying "Alexa, ask <app name> to <x>", which, by the way, hardly ever works.

With a video skill like this, I can submit APL documents and then long-running commands to make it keep "doing something" throughout the playback. There is a limit to how long you can command it to "delay" but I think it's more than 5 minutes or something.
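
Concretely, the keep-alive amounts to an ExecuteCommands directive whose only command is a long Idle, something like the sketch below (the token and delay values are made up for illustration).

  // Hypothetical response fragment: keep the rendered APL document "busy" with a long Idle.
  const keepAlive = {
    type: 'Alexa.Presentation.APL.ExecuteCommands',
    token: 'playbackToken', // must match the token used when the APL document was rendered
    commands: [
      { type: 'Idle', delay: 300000 } // ~5 minutes in ms; the real ceiling is whatever Alexa enforces
    ]
  };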

I haven't tried it, but for non-video skills I've read that you can play silent audio for a period of time to keep the skill open.

Janky, for sure.



I'm primarily an early-stage full-stack dev; I do and learn whatever is necessary to ship. But I also have extensive experience in customer-facing, sales-y roles at companies of more than 200 employees.

Most recently, I went fully serverless on a Node-based hardware IoT personal project. Before that, I was a post-sales, professional-services API integration engineer at a mid-sized startup. Before that, I built one of Ethereum's first working dApps and presented it at Devcon 1. Before that, I was a pre-sales consultant at a networking-oriented BigCo.

So yeah. I'm hard to describe, but open to everything!

  Location: Any large tech hub, including those outside the US
  Remote: No, I want to relocate. Have done so 3x before.
  Technologies: Java, Spring, Javascript, everything AWS, Ethereum/Solidity
  Resume: https://bit.ly/36T5rsi (github profile, email for resume pdf)
  Email: hnjobs2019@mailcyr.us



