I’m running on a very thin margin here. I want to get the robot into the hands of robot lovers without them having to eat McDonald’s all month just to save up for it. (I did that once myself, and I strongly advise against it; it’s really bad for your health, both physically and mentally.)
When you're selling a physical product at the single-digit-hundreds scale, "thin margin" is a false economy at best, fatal self-sabotage at worst. You should have a thick margin, which you reinvest into scaling up.
I tried Cursor, and will occasionally switch into it, but I'm having a hard time using it because its relationship to extensions (particularly extensions that the user develops and sideloads) is badly broken. I tried doing minor customization (forking the vim plugin from the GitHub version, creating a VS Code hello-world-style plugin), and while everything worked in VS Code, transferring those plugins into Cursor did not. There was no documentation for plugins in Cursor; you just had to hope that things were similar enough to VS Code. And then they failed to load, with no debugging leads.
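For reference, the kind of hello-world extension I mean is only a few lines against the stock VS Code extension API. This is just a sketch: the command name and file layout are illustrative, and it also needs a package.json that declares the command and an activation event.

```typescript
// extension.ts - minimal hello-world-style extension (illustrative sketch).
import * as vscode from 'vscode';

export function activate(context: vscode.ExtensionContext) {
  // Register a command; "helloWorld.sayHello" must match the command id
  // declared in package.json (name chosen here for illustration).
  const disposable = vscode.commands.registerCommand('helloWorld.sayHello', () => {
    vscode.window.showInformationMessage('Hello from a sideloaded extension!');
  });
  context.subscriptions.push(disposable);
}

export function deactivate() {}
```

Packaged with `vsce package` and installed from the resulting .vsix, something this simple loads fine in VS Code.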
I think this is an artifact of Cursor being a closed-source fork of an open-source project, with a plugin architecture that relies heavily on the IDE at least being source-available. And, frankly, taking an open-source project like VS Code and commercializing it without even making it source-available is a dishonorable thing to do, and I'm rooting against them.
> For IHM risk prediction, we utilized the LSTM model, CW-LSTM model, transformer, LR, AdaBoost, XGBoost, and random forest (RF) models. For 5-year BCS prediction, we used MLP, AdaBoost, XGBoost, and RF models.
All of those are older machine-learning techniques that predate the current wave of large language models. In other words, this paper is not reporting that any actually-deployed AI is failing; it's reporting that a group of researchers tried to build an AI for evaluating health, and failed to do so.
This headline is a lie. A backdoor in a Bluetooth chip would be something that lets a wireless attacker gain code execution on the chip. This article reports on something that lets the host device's own drivers gain code execution on the chip, which does not violate a security boundary.
(In a well-functioning journalism ecosystem, this would require a retraction and would significantly harm the reputation of the outlet that wrote it. Sadly this will not happen.)
The HN title added the word "rationalist", which isn't in the source article. This is editorializing in a way that feels kind of slander-y. Their relationship to the Bay Area rationalist community is that we kicked them out long before any of this started.
I mean, they seemed kind of visibly crazy: often saying threatening things to others, talking about extreme experiments with their sleep, insinuating violence. They were pretty solidly banned from the community after their "CFAR Alumni Reunion" protest stunt, and even before then they were very far out on the fringes.
We see similar issues on LessWrong. We're constantly being hit by bots that are egregiously badly behaved. Common behaviors include making far more requests per second than our entire userbase combined, distributing those requests across many IPs to bypass the rate limit on our firewall, and sending each request with a unique user-agent string randomly drawn from a big list of user agents so we can't block them that way. They ignore robots.txt. Other than the IP address, there's no way to identify them or find an abuse contact.
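To make that concrete, here's a rough sketch of the per-IP sliding-window limit a firewall typically applies, and why the behavior described above slips past it. This assumes an Express-style server; the names and thresholds are hypothetical, not our actual stack.

```typescript
// rateLimit.ts - hypothetical per-IP sliding-window limiter, roughly what a
// firewall rule does. Illustration only, not our real configuration.
import express, { Request, Response, NextFunction } from 'express';

const WINDOW_MS = 60_000;   // 1-minute window
const MAX_REQUESTS = 120;   // per IP per window (illustrative threshold)
const hits = new Map<string, number[]>();

function rateLimit(req: Request, res: Response, next: NextFunction) {
  const now = Date.now();
  // Keyed on IP: a crawler spread across thousands of IPs stays
  // comfortably under the limit on every single one of them.
  const key = req.ip ?? 'unknown';
  const recent = (hits.get(key) ?? []).filter(t => now - t < WINDOW_MS);
  recent.push(now);
  hits.set(key, recent);

  if (recent.length > MAX_REQUESTS) {
    return res.status(429).send('Too Many Requests');
  }

  // Blocking on User-Agent doesn't help either: each request arrives with a
  // different UA string drawn from a list of real browsers.
  next();
}

const app = express();
app.use(rateLimit);
```

Spread the traffic across enough IPs and randomize the user-agent, and both of the obvious keys to block on are gone.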
This is yet another in a long line of glucose-measurement devices designed to sell to unsophisticated research grantmaking agencies, rather than to diabetics. Making a device that "measures" blood sugar in a watch form factor is easy, and many research groups have done so. Making one that's accurate enough to compete with the CGMs that are already on the market is a different matter entirely.
If you're a type 1 diabetic, accuracy is paramount, especially in a closed-loop insulin pump setup. Even with access to the bloodstream, existing CGMs leave plenty of room for improvement, and closing that gap would do far more for quality of life than removing the annoyance of applying a sensor under the skin.
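To see why accuracy dominates: a standard correction bolus divides the distance from the glucose target by an insulin sensitivity factor, so sensor error flows straight through into dosing error. A toy sketch, with every number an illustrative assumption rather than clinical guidance:

```typescript
// correctionDose.ts - toy illustration of how sensor error propagates into dosing.
// All figures are illustrative assumptions, not clinical guidance.
const TARGET_MG_DL = 110;   // assumed glucose target
const ISF = 50;             // assumed insulin sensitivity factor: 1 unit drops glucose ~50 mg/dL

// Standard correction-bolus formula: units = (reading - target) / ISF
function correctionDose(readingMgDl: number): number {
  return Math.max(0, (readingMgDl - TARGET_MG_DL) / ISF);
}

const trueGlucose = 180;                              // actual blood glucose
const sensorError = 0.15;                             // assume a 15% read error
const readingLow = trueGlucose * (1 - sensorError);   // 153 mg/dL
const readingHigh = trueGlucose * (1 + sensorError);  // 207 mg/dL

console.log(correctionDose(trueGlucose));  // 1.4 units: the "right" dose
console.log(correctionDose(readingLow));   // ~0.86 units: under-correcting
console.log(correctionDose(readingHigh));  // ~1.94 units: over-correcting toward a low
```

A 15% read error turns a 1.4-unit correction into anywhere from roughly 0.86 to 1.94 units, and a closed-loop controller is making decisions like this around the clock.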
Talking about someone's track record of predictions is not an ad hominem, in the context of evaluating their credibility with respect to the subject of those predictions.
An ad hominem attack occurs when someone attacks the person making the argument rather than addressing the argument itself. For example, saying "You're always wrong, so you're wrong about this too" without addressing the current claim would be ad hominem.
That's exactly what the person I responded to was doing.
> Take a look at videos on YouTube by ThunderF00t. SpaceX is pretty problematic.
Response:
> Thunderf00t has basically 0 credibility when it comes to SpaceX. He predicted that starlink couldn't ever work, for example. In fact, he even thought that this landing would fail.
You’re claiming Thunderf00t is a good resource on SpaceX. The response gave examples of how he’s been consistently wrong.