jimrandomh's comments | Hacker News

The author doesn't claim it, but worth stating explicitly: tmux is a security-critical piece of software that doesn't get the attention it deserves.


Your price is too low. You should raise your price until you aren't selling out.


I’m running on a very thin margin here. I want to get the robot into the hands of robot lovers without them having to eat McDonald’s all month just to save up for it. (I did that once myself, and I strongly advise against it—it’s really bad for your health, both physically and mentally)


When you're selling a physical product at the single-digit-hundreds scale, "thin margin" is a false economy at best, fatal self-sabotage at worst. You should have a thick margin, which you reinvest into scaling up.


I tried Cursor, and will occasionally switch into it, but I'm having a hard time using it because its relationship to extensions (particularly extensions that the user develops and sideloads) is badly broken. I tried doing minor customization (forking the Vim plugin from the GitHub version, and creating a VS Code hello-world-style plugin), and while everything worked in VS Code, transferring those plugins into Cursor did not. There was no documentation for plugins in Cursor; you just had to hope that things were similar enough to VS Code. And then they failed to load with no debugging leads.

I think this is an artifact of Cursor being a closed-source fork of an open-source project, with a plugin architecture that's heavily reliant on the IDE at least being source-available. And, frankly, taking an open-source project like VS Code and commercializing it without even making it source-available is a dishonorable thing to do, and I'm rooting against them.
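For anyone trying to reproduce this, the "hello world" plugin in question is just the stock VS Code extension skeleton; a sketch of its manifest follows (the name, version, and command ID are illustrative — only the field layout is the standard one):

```json
{
  "name": "hello-world",
  "version": "0.0.1",
  "engines": { "vscode": "^1.80.0" },
  "main": "./out/extension.js",
  "contributes": {
    "commands": [
      { "command": "hello-world.helloWorld", "title": "Hello World" }
    ]
  }
}
```

In VS Code, sideloading this via the extensions directory or a packaged .vsix works as documented; the complaint above is that the same artifact fails silently in Cursor.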


That does seem a bit shady… is there really still no documentation on this after they’ve raised so much money?


The article links to another news article which links to http://dx.doi.org/10.1038/s43856-025-00775-0 which says:

> For IHM risk prediction, we utilized the LSTM model, CW-LSTM model, transformer, LR, AdaBoost, XGBoost, and random forest (RF) models. For 5-year BCS prediction, we used MLP, AdaBoost, XGBoost, and RF models.

All of those acronyms refer to older machine learning techniques from before the current wave of large language models. In other words, this paper is not reporting that any actually-deployed AI is failing; it's reporting that a group of researchers tried to build an AI for evaluating health, but failed to do so.
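For context on how far these are from LLMs: the simplest model on that list, LR (logistic regression), is a few lines of arithmetic. A from-scratch sketch on made-up toy data (real studies would use a library and clinical features, of course):

```python
import math

def train_logreg(X, y, lr=0.1, epochs=500):
    """Plain logistic regression via gradient descent -- the 'LR' in the paper's model list."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = sum(wj * xj for wj, xj in zip(w, xi)) + b
            p = 1.0 / (1.0 + math.exp(-z))          # sigmoid
            err = p - yi                            # gradient of log-loss
            w = [wj - lr * err * xj for wj, xj in zip(w, xi)]
            b -= lr * err
    return w, b

def predict(w, b, xi):
    z = sum(wj * xj for wj, xj in zip(w, xi)) + b
    return 1 if 1.0 / (1.0 + math.exp(-z)) >= 0.5 else 0

# Toy data: label is 1 when the first feature exceeds the second.
X = [[2.0, 1.0], [1.0, 3.0], [3.0, 0.5], [0.5, 2.0]]
y = [1, 0, 1, 0]
w, b = train_logreg(X, y)
print([predict(w, b, xi) for xi in X])  # [1, 0, 1, 0]
```

The other acronyms (random forests, gradient boosting, small LSTMs) are heavier, but all of them operate on hand-engineered tabular or time-series features, not free text.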


This headline is a lie. A backdoor in a Bluetooth chip would be something that lets a wireless attacker gain code execution on the chip. What this article reports is something that lets the device drivers on the attached host gain code execution on the chip, which does not cross a security boundary.

(In a well-functioning journalism ecosystem, this would require a retraction and would significantly harm the reputation of the outlet that wrote it. Sadly this will not happen.)


The HN title added the word "rationalist", which isn't in the source article. This is editorializing in a way that feels kind of slander-y. Their relationship to the Bay Area rationalist community is that we kicked them out long before any of this started.


It does appear in the article.

> The group is a radical offshoot of the Rationalism movement, focusing on matters such as veganism and artificial intelligence destroying humanity.

You yourself seem to acknowledge this as a fact.


Can you tell us more about how they were kicked out? Are there other groups that have been kicked out?


I mean, they seemed kind of visibly crazy: saying threatening things to others, talking about doing extreme experiments with their sleep, and insinuating violence. They were pretty solidly banned from the community after their "CFAR Alumni Reunion" protest stunt, and before then were already very far out on the fringes.


We see similar issues on LessWrong: we're constantly hit by bots that are egregiously badly behaved. Common behaviors include making far more requests per second than our entire userbase combined, distributing those requests across many IPs to bypass the rate limit on our firewall, and making each request with a unique user-agent string randomly drawn from a big list, to prevent us from blocking them that way. They ignore robots.txt, and other than the IP address there's no way to identify them or find an abuse contact.
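A crude way to surface the user-agent-rotation pattern in access logs: count distinct user-agent strings per client IP, since legitimate clients rarely rotate them. A sketch, assuming the log has already been parsed into `(ip, user_agent)` pairs; the thresholds and IPs are illustrative:

```python
from collections import defaultdict

def flag_ua_rotators(requests, min_requests=20, min_distinct_agents=10):
    """Flag IPs whose requests cycle through suspiciously many user-agent strings.

    `requests` is an iterable of (ip, user_agent) pairs.
    """
    agents = defaultdict(set)   # ip -> set of user-agent strings seen
    counts = defaultdict(int)   # ip -> total request count
    for ip, ua in requests:
        agents[ip].add(ua)
        counts[ip] += 1
    return {
        ip
        for ip, seen in agents.items()
        if counts[ip] >= min_requests and len(seen) >= min_distinct_agents
    }

# Example: one IP rotating through 25 agents, one normal client.
log = [("203.0.113.7", f"Agent/{i}") for i in range(25)]
log += [("198.51.100.4", "Mozilla/5.0")] * 30
print(flag_ua_rotators(log))  # {'203.0.113.7'}
```

This only catches rotation from a single IP, of course; the distributed-IP variant described above needs correlation across addresses (timing, request paths), which is exactly why it's so effective.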


This is yet another in a long line of glucose-measurement devices designed to sell to unsophisticated research grantmaking agencies, rather than to diabetics. Making a device that "measures" blood sugar in a watch form factor is easy, and many research groups have done so. Making one that's accurate enough to compete with the CGMs that are already on the market is a different matter entirely.
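For reference, the accuracy bar here is usually stated as MARD (mean absolute relative difference) against blood-glucose reference measurements, and current commercial CGMs advertise single-digit MARDs. A minimal sketch of the computation, on made-up readings:

```python
def mard(device_readings, reference_readings):
    """Mean absolute relative difference, in percent -- the standard CGM accuracy metric."""
    diffs = [
        abs(d - r) / r
        for d, r in zip(device_readings, reference_readings)
    ]
    return 100.0 * sum(diffs) / len(diffs)

# Hypothetical sensor readings vs. blood-draw reference values (mg/dL).
device = [100, 150, 200]
reference = [110, 140, 190]
print(round(mard(device, reference), 1))  # 7.2
```

A watch-form-factor device that can't demonstrate a competitive MARD in a head-to-head study hasn't cleared the bar that matters to diabetics.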


What about this research indicates to you that it doesn't address the accuracy issue?


> Shaker said. “No other technology can provide this level of precision without direct contact with the bloodstream.”

The existing alternatives do have access to the bloodstream.


How does that indicate that they didn't address the accuracy of their technology, as the GGP claims?

> The existing alternatives do have access to the bloodstream.

So? Who says otherwise? People don't want invasive tests that have direct contact with their bloodstream.


If you're a type 1 diabetic, accuracy is paramount, especially in a closed-loop insulin pump setup. Even with access to the bloodstream, existing CGMs leave plenty of room for accuracy improvements that would improve quality of life far more than eliminating the annoyance of applying the sensor under the skin would.


> accuracy is paramount

Who says that's not addressed by this technology?

> directly improve quality of life a lot more than the annoyance of applying the sensor under the skin

How annoying is it? They don't cut a hole and insert it. How can you say how much it would improve the quality of life?

I don't see where all these assumptions about this technology come from.



Talking about someone's track record of predictions is not an ad hominem, in the context of evaluating their credibility with respect to the subject of those predictions.


Actually, it is. From the dictionary:

An ad hominem attack occurs when someone attacks the person making the argument rather than addressing the argument itself. For example, saying "You're always wrong, so you're wrong about this too" without addressing the current claim would be ad hominem.

That's exactly what the person I responded to was doing.


Claim:

> Take a look at videos on YouTube by ThunderF00t. SpaceX is pretty problematic.

Response:

> Thunderf00t has basically 0 credibility when it comes to SpaceX. He predicted that starlink couldn't ever work, for example. In fact, he even thought that this landing would fail.

You’re claiming Thunderf00t is a good resource on SpaceX. The response gave examples of how he’s been consistently wrong.

That is not an ad hominem attack.


Actually both the examples you cited are misrepresented. But whatever. You're in the cult. I get it.


And you’re in the other cult ¯\_(ツ)_/¯

