I have an experiment at work that is generating gaseous hydrofluoric acid at 800 degrees F. It's inside a triple containment system that takes a full day to set up and take apart, and we have all sorts of quality checks to validate that it is safe to access and has been fully titrated after the experiment has run. We accidentally ruined a very expensive ion chromatography machine a few weeks ago... Acid gases are just no fun to work with.
From the link: "Note: The inbuilt WiFi chip is not natively supported by FreeBSD, so you will need to (temporarily) use a USB WiFi or Ethernet dongle, or (as I will explain) copy some files from a different system to the Macbook. You could also just transplant a different chip into the system."
You say "works perfectly". I do not think it means what you think it means.
To be fair, Linux also has trouble with the Broadcom chip; the driver needs to be installed as a separate step on most distros.
                  | Works Perfectly | Mostly Works | Has Lots Of Bugs
------------------+-----------------+--------------+-----------------
Default Install   |                 |              |
------------------+-----------------+--------------+-----------------
With Add-Ons      |        X        |              |
------------------+-----------------+--------------+-----------------
Major Config Work |                 |              |
I.e., declare its working quality after the install is done. The install may take multiple steps. (In this case, copying some files over, apparently.)
Broadcom (and to a lesser extent, Realtek) devices have always been anywhere between hit-or-miss and completely unworkable on Linux, LONG before Raspberry Pi came around.
It's MIT licensed now, which isn't particularly useful when it comes to Pi (there's some Broadcom crap in that boot loader so it won't be open sourced) but otherwise is kind of interesting.
I always saw Broadcom as evil, and saw Raspberry Pi as just reusing cheap parts from set top boxes or similar, with all the proprietary stuff that that comes with.
By that logic, every piece of software ever made can be said to work perfectly in every situation, because there is always some amount of additional work which could be done to make up for its native deficiencies.
That's quite the leap. The work is already done, they just can't/won't ship the driver in base, right? Isn't it comparable to installing Debian and needing to load in non-free drivers separately?
It is definitely not appropriate. If you break the chopsticks apart and use them correctly, your fingers will never touch the surface where there are splinters.
I always do it under the table; something I instinctively do without ever being told to. Now I wonder if I might have picked up on nonverbal cues at some point in the past. If I were someplace where chopsticks were the norm, I would probably just carry my own, as I find the disposable wooden ones very off-putting. I have to wonder if there is a rule about using your own chopsticks, though.
The exact quote is "Thanks for the submission! We have reviewed your report and validated your findings. After internally assessing your report based on factors including the complexity of successfully exploiting the vulnerability, the potential data and information exposure, as well as the systems and users that would be impacted, we have determined that they do not present a significant security risk to be eligible under our rewards structure." The funny thing is, they actually gave me $500 and a lifetime GitHub Pro for the submission.
Tangential, but that's quite interesting, I had no idea you could get GitHub Pro for life, and certainly not through something as "accessible" as bug bounties.
> an LLM can ingest unstructured data and turn it into a feed.
An LLM can try to do that, yes. But LLMs are lossy compression. RSS feeds are accurate, predictable, and follow a pre-defined structure. Using LLMs to ingest data which can easily be turned into a parseable data structure seems strange: use the LLM to do the "next part" of the formula (comprehension, decision making, etc.)
I mean that your RSS feed can basically be "Go to https://techcrunch.com/latest/ and use each non-video item as a feed item" or "Go to x.com/some_user and make each tweet a feed item", and the LLM can do a perfect extraction of links from HTML response blobs.
The only thing you have to do is ensure it can reliably get the response HTML. Maybe MCP browser + proxy or mirror to seem more human.
I built this for myself. The idea is that each feed is a url + title + a prompt to tell the LLM how to extract the links you want so that it generalizes over all websites.
And each feed item is a canonicalized url + title + a local copy of the content at that url which is an improvement over RSS since so many RSS feeds don't even contain the content.
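As a minimal sketch of the data model described here (the names `Feed`, `FeedItem`, and `canonicalize_url` are my own assumptions, not from the original project): a feed is a URL plus a title plus an extraction prompt, and a feed item is a canonicalized URL plus a title plus a local copy of the content. Canonicalizing the link before deduplication might look like stripping the fragment and common tracking parameters:

```python
from dataclasses import dataclass
from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

@dataclass
class Feed:
    url: str     # page to scrape, e.g. "https://techcrunch.com/latest/"
    title: str
    prompt: str  # tells the LLM which links to extract from the page

@dataclass
class FeedItem:
    url: str      # canonicalized link
    title: str
    content: str  # local copy of the content at that URL

# Query parameters commonly used for tracking (illustrative, not exhaustive).
TRACKING_PARAMS = {"utm_source", "utm_medium", "utm_campaign",
                   "utm_term", "utm_content", "fbclid", "gclid"}

def canonicalize_url(url: str) -> str:
    """Lowercase the host, drop the fragment and tracking parameters."""
    scheme, netloc, path, query, _fragment = urlsplit(url)
    kept = [(k, v) for k, v in parse_qsl(query, keep_blank_values=True)
            if k not in TRACKING_PARAMS]
    return urlunsplit((scheme, netloc.lower(), path, urlencode(kept), ""))
```

So two links that differ only in tracking noise collapse to one item: `canonicalize_url("https://Example.com/post?utm_source=x&id=2#top")` gives `"https://example.com/post?id=2"`.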