I'd love to, but since upgrading to the latest macOS, every Chromium-based browser (Brave, Edge, Chrome, etc.) gets ERR_CONNECTION_RESET repeatedly on every network connection until requests eventually go through. Edge handles this especially badly and is basically unusable (pages almost never load), whereas Chrome and Brave will eventually get there. Despite my best efforts searching and posting online, I have yet to find a solution. Firefox works perfectly, but I'd like to be able to use Brave again.
I've been surprised at how much I like Rust for backend work, but looking at the README I have no idea what the case for this tool is. What do you use it for?
It lets you deploy apps quickly and manage infrastructure without having to battle the AWS console and/or config files. A simple server (via most Rust frameworks) can be deployed by adding an annotation and then running `cargo shuttle deploy`, and adding a resource, like a Postgres database, is likewise just a matter of adding another annotation.
Glad I'm not the only one. I checked out their examples repo, and I'm still very confused about what the library does. The Shuttle runtime trait has me especially confused, since the API is built on Rocket.
I would generally try to characterize Shuttle as "the Vercel of Rust", though I guess that might not explain much to people that don't know what Vercel does.
That it's for toy projects. Using annotations to provision infrastructure hides complexity from you, but that complexity is exactly what's required to actually manage and resolve infrastructure issues in production. For a prototype or toy project, it sounds great.
This argument can be made at any level of abstraction I think.
For example, you can make the same case for AWS Lambdas abstracting the infrastructure away from you, or for VMs running on top of a hypervisor abstracting away the bare-metal servers.
IMO it really boils down to the quality of the product's implementation, and to designing your product so that if users need to debug (which hopefully isn't often), you offer visibility into the internals.
> IMO it really boils down to the quality of the implementation of a product
I think this is true. But I can understand why people are more skeptical of Shuttle than Lambda. Running a function is a fairly simple task, and since lambdas are stateless, it's relatively easy to feel confident about this abstraction. I'm less confident that I won't need to worry about the details of how my database is provisioned, configured, and maintained.
This still seems great for hobby projects. It also seems like it would be relatively easy to transition to something more manual if the need arises.
I'm not too cool for MySQL, but PostgreSQL has been the default anywhere I've been in fifteen years. Is anybody aware of something equivalent for pg? I see a lot of old projects that aren't getting updates anymore.
Do you have a link or a brand/model? I'm just recently shopping for smart home equipment and the Hubitat seems quite full-featured and I'd like to compare.
It's perfectly suitable, and it looks like one of the better out-of-the-box solutions out there that respect user privacy. If I didn't have any SBCs and just wanted something that works without fiddling, I'd likely go for Hubitat too.
That said, if you're up for fiddling, something like https://www.amazon.ca/Waveshare-VisionFive2-Processor-Integr... will provide much more oomph (and it's RISC-V), with 4 GB of RAM and an M.2 slot, at $100 CAD, with mainline kernel support (so any USB devices will just work). Grab a Zigbee USB dongle plus an SBC like that, and you'd be able to run much more on your hub than just a gateway for your devices.
One of my favorite early blog stories is from stilldrinking.org, where he accidentally ingested way too much and ended up institutionalized. It's great writing about the entire experience going there and back. http://www.stilldrinking.org/the-episode-part-1
With how useless Google search and even DuckDuckGo have been for me recently (quotes not working, minus not working, silently ignoring words in the search query, etc) I've started using kagi and perplexity.ai and been really happy with the results. Most of my web searches are for work, and paying a few dollars that can save me hours is worth it.
Maybe the big search companies are optimizing correctly for their business, but they are definitely not optimizing for my use cases.
I'd be curious to see actual benchmarks of Kagi against DuckDuckGo and the Searx alternatives.
I've heard good things about Kagi, but I've been happy with DuckDuckGo for a long time. I'm not going to put much effort into considering a switch to Kagi unless someone can show me that it's measurably better and faster than DuckDuckGo, and worth the price.
If you haven't watched the brilliant Git For Ages 4 and Up, I can't recommend it highly enough. I make everyone on my teams, and all my students, watch it. https://m.youtube.com/watch?v=3m7BgIvC-uQ Git is quite simple under the hood; it's the convoluted user interface that makes it seem so difficult. Once you understand technically what the commands are doing, it's a lot easier to know which one to reach for.
I bought little wooden boxes with a hinged lid, stained them in a lovely walnut color, put ReSpeaker 2.0 USB mic arrays under round speaker grilles on top of the lids, two 3w/8ohm speakers into the sides of the box, and a RasPi4 with a WM8960 sound card / speaker driver on the inside.
The boards are raised off the wood surfaces by PCB spacers (I embedded M2.5 threaded sockets into the wood), and I bought speaker grilles that bulge out a little at the edge of the cylinder, so that the 4-mic array remains fully exposed laterally as well. I covered the side speakers and the mic array's grille with acoustic textile.
The onboard code is written in very pedestrian Python and uses porcupine and the OpenAI APIs.
It roughly works like this:
1. Capture audio frames and run overlapping frames through Porcupine to perform hot-word detection (overlapping avoids the problem of the hot word falling in between frames, at a small cost in latency)
2. Once the hot word has been detected, buffer all audio frames into a command buffer until silence is detected as a stop (detecting "silence" is a bit involved, taking noise levels into account, and a few other tricks, more below)
3. The command buffer is sent to Whisper for transcription
4. GPT-4 is prompted with a system message steering its behavior, the transcription of the user's command, and a JSON printout of the state of all devices (e.g. lights and Sonos speakers) in the home, grouped by room
5. Following the system message, GPT-4 replies with a JSON structure of changes it would like to make to the device state, omitting unchanged bits from the original
6. Add the sensor event and memory system described above
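Step 5's partial reply can then be folded back into the full device state with a simple recursive merge. A minimal sketch of that idea in plain Python (the state shape, key names, and `apply_changes` are my own illustration, not the author's actual code):

```python
def apply_changes(state: dict, changes: dict) -> dict:
    """Recursively merge a partial 'changes' dict (as the model might
    return it) onto the full device state, leaving omitted keys alone."""
    merged = dict(state)
    for key, value in changes.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = apply_changes(merged[key], value)
        else:
            merged[key] = value
    return merged

# Example: only the living-room lamp changes; everything else is omitted
# from the reply and therefore preserved.
state = {
    "living_room": {"lamp": {"on": False, "brightness": 40},
                    "sonos": {"volume": 20}},
    "kitchen": {"ceiling": {"on": True}},
}
changes = {"living_room": {"lamp": {"on": True, "brightness": 80}}}
new_state = apply_changes(state, changes)
```

The nice property of having the model omit unchanged bits is that the merge only ever touches devices the user actually asked about.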
There are a few other tricks. To improve the audio capture, I take note of where spatially the hot word was detected (i.e. which mic in the array got the best signal) and then capture the rest, and perform the silence detection, with a corresponding bias.
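Picking the strongest mic and then biasing the silence check could look roughly like this pure-Python sketch (the RMS measure, the noise-floor margin, and all names here are my guesses at one plausible implementation, not the author's code):

```python
import math

def rms(frame):
    """Root-mean-square energy of a frame of PCM samples."""
    return math.sqrt(sum(s * s for s in frame) / len(frame))

def best_mic(frames_by_mic):
    """Return the id of the mic whose hot-word frame had the most energy."""
    return max(frames_by_mic, key=lambda mic: rms(frames_by_mic[mic]))

def is_silence(frame, noise_floor, margin=1.5):
    """Noise-adaptive silence check: a frame counts as silence when its
    energy is below the estimated noise floor times a margin."""
    return rms(frame) < noise_floor * margin

# The mic that heard the hot word best is used for the rest of the command.
chosen = best_mic({"mic0": [1, 1, 2], "mic1": [10, 12, 9], "mic2": [3, 2, 2]})
```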
This is actually done in a distributed fashion over the network, so if two of the AI speakers hear the same command, only one of them will end up processing it.
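The comment doesn't describe the election protocol, but one way it could work: each speaker broadcasts its hot-word confidence, and every node applies the same deterministic rule to the gathered votes, so exactly one ends up processing. A hypothetical sketch of just that rule:

```python
def elect_processor(candidates: dict) -> str:
    """candidates maps speaker id -> hot-word confidence, gathered over
    the network. Every speaker runs this same deterministic rule, so all
    nodes agree on a single winner: highest confidence, ties broken by
    lexicographically smallest id."""
    return min(candidates, key=lambda sid: (-candidates[sid], sid))

votes = {"kitchen": 0.72, "living_room": 0.91, "office": 0.91}
winner = elect_processor(votes)  # only this speaker handles the command
```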
They end up making mainly HTTP calls to APIs that already exist around my house. I have a second RasPi in my LED shelf (another old project, https://github.com/eikehein/hyelicht/) that doubles as a Philips Hue bridge with a zigbee dongle. That's what the DIY AI speakers interact with when making changes to the lighting.
I will say: Depending on the user command and the weather in the cloud, it's pretty slow. I've tried my best to optimize the client side for perceived user latency, but there's no way around the GPT-4 API just being pretty slow, even if it's amazingly low-friction and reliable otherwise. And 3.5-turbo just doesn't cut it for what I'm trying to do.
I'd like to get all of this out of the cloud entirely. I predict the next generation of my home NAS will have a GPU in it and try to run things like fine-tuned llama2 for the home.
Have you looked at fine-tuning GPT-3.5? I've heard anecdotally that it can significantly improve its ability to produce correctly formatted outputs like JSON, and the increased speed and significantly reduced cost would make it much more appealing.
Also I hope you consider posting more about your home setup as I’d love to see more.
It looks like every other comment in this thread is favorable to very positive, can you go into more detail about what specifically isn't good about it?
Not the previous poster, but I had to implement it a few years ago, and I found it unbelievably complex, with dense and difficult-to-read specifications. I've implemented plenty of protocols and formats from scratch using just the specification, but rarely have I had as much difficulty as with OpenTelemetry.
I guess this is something you don't notice as merely a "user", but IMHO it's horribly overengineered for what it does and I'm absolutely not a fan.
I also disliked the Go tooling for it, which is "badly written Java in Go syntax", or something along these lines.
This was 2 years ago. Maybe it's better now, but I doubt it.
In our case it was 100% a "tick off some boxes on their strategy roadmap documents" project too and we had much much better solutions.
OTel is one of those "yeah, it works ... I guess" but also "ewwww".
I'd recommend trying it out today. In 2021, very few things in OTel were GA and there wasn't nearly as much automatic instrumentation. One of the reasons why you had to dive into the spec was because there was also very little documentation, too, indicative of a heavily in-progress project. All of these things are now different.
I'll be happy to take your word that some implementation issues are now improved, but things like "overengineered" and "way too complex for what it needs to do" really are foundational, and can't just be "fixed" without starting from scratch (and presumably this was all by design in the first place).
That's fair. I find that to be a bit subjective anyway, so I don't have much to comment on there. In most languages the instrumentation is pretty lightweight: initializing the instrumentation packages and creating some custom instrumentation in Python, for example, takes very little code. Go is far more verbose, but I see that as part and parcel of each language's culture (I've always loved the brevity of Python API design and disliked the verbosity of Go API design).
One of the main reasons I became disillusioned with OTel was that the project treated "automatic instrumentation" as a core assumption and design goal for all supported languages, regardless of any language-specific idioms or constraints.
I'm not an expert in every language, but I am an expert in a few, and this just isn't something that you can assume. Languages like Go deliberately do not provide the sorts of features needed to support "automatic instrumentation" in this sense. You have to fold those concerns into the design of the program itself, via modules or packages which authors explicitly opt-in to at the source level.
I completely understand the enormous value of a single, cross-language, cross-backend set of abstractions and patterns for automatic instrumentation. But (IMO and IME) current technology makes that goal mutually exclusive with performance requirements at any non-trivial scale. You have to specialize -- by language, by access pattern (metrics, logs, etc.), by concrete system (backend), and so on -- to get any kind of reasonable user experience.
The spec itself is "badly written Java". I haven't been a Java dev for about ten years. At this point it's a honeypot for architectural astronauts - a great service to humanity.
That is, until some open standard is defined by said Java astronauts.
[Spoiler-free] Reducing the brain to an advanced calculator is a basic premise of Murakami's cyberpunk fantasy novel Hard-Boiled Wonderland and the End of the World. One of my favorite books.