Hacker News | swores's comments

Other than Americans wanting to feel superior (no offence intended, I'm sure most countries want to feel that when possible!) is there actually any public evidence that's the case?

Even when it comes to superiority of physical military forces, different people (with a range of different biases) have different opinions on questions like whether a hot, all-out (but non-nuclear) war between the USA and China would prove one or the other to be stronger. And while you may read that and think "I know which side is better and anyone who disagrees is just buying into delusional propaganda", at least to form that view you've had the ability to follow a lot of publicly available detail on military developments over the years, learning about current and next-gen fighter jets, drones, ships... etc.

But when it comes to cyber stuff, both offensive and defensive, it's generally a lot more secretive in terms of what's actually been done (see for example the speculation in this thread that US power grid failures in recent years might have been caused by foreign adversaries - there's no evidence that's true, but if the US and China had both spent the last decade trying to take as many of each other's power grids offline as possible, we likely wouldn't have heard about it). Let alone hypothetical capabilities saved for wartime. If a hot WW3 broke out tomorrow, who actually knows what hacking tools any country (from superpowers to smaller players) has, waiting to be used? Presumably they all spend a lot of effort trying to learn about each other's capabilities, and maybe they're successful enough that they really do all know most of what everyone else can do - but they don't then announce that the way we hear about North Korea testing a new missile or about America developing a new fighter jet. I feel like we the general public just have no idea how advanced or not wartime capabilities might be. Am I wrong? (I may well be, as I'm in no way an expert in this field; I just believe that things like the document you linked are massively influenced by both the politics of the authors and the information available to them.)


The opposite of technically correct - there's some logic to their thinking of it as being about the 90s, but technically it very much did not happen "in the 90s".

Ok so we settle for technically incorrect ;)

The worst kind

Maybe I'm being dense, but could someone kindly explain to me the "Web App" example on that Sprites page?

"30 hours of wake time per month (~5 concurrent users avg), averaging 10% of 2 CPUs and 1 GB RAM"

Does that mean it would sit available but using 0% when there's nobody on the site, and just bill for usage when web traffic is causing the server to do work? So if the web app went a month with no visitors it would cost nothing (except for the file storage fees)?


> So if the web app went a month with no visitors it would cost nothing (except for the file storage fees)?

Yes, that's the idea. The public URL for a sprite is served by a (free) load balancer. The sprite is normally suspended, gets resumed when a request comes in, then suspended again. Not sure of the exact timeouts; they probably don't suspend immediately after a response is sent.
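If that's right, the lifecycle can be sketched roughly like this (a toy model with an assumed idle timeout, not Sprites' actual implementation; class and field names are made up for illustration):

```python
# Toy model of a wake-on-request sprite: the (free) load balancer resumes a
# suspended sprite when a request arrives, and the sprite is suspended again
# after some idle period. Only awake time would accrue compute charges.
IDLE_TIMEOUT = 30.0  # seconds of idleness before suspend (assumed value)

class Sprite:
    def __init__(self) -> None:
        self.awake = False
        self.woke_at = 0.0
        self.last_request = 0.0
        self.billed_seconds = 0.0  # storage fees would be billed separately

    def handle_request(self, now: float) -> None:
        if not self.awake:  # resume on the first request after a quiet period
            self.awake = True
            self.woke_at = now
        self.last_request = now

    def tick(self, now: float) -> None:
        # Suspend once the idle timeout has elapsed since the last request;
        # bill only for the time spent awake.
        if self.awake and now - self.last_request >= IDLE_TIMEOUT:
            self.billed_seconds += (self.last_request + IDLE_TIMEOUT) - self.woke_at
            self.awake = False
```

In this model a month with zero requests accrues zero wake time, matching the reading above that an idle app would cost nothing beyond storage.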


Alright, thanks!

You'd be surprised how many people fit in the Venn overlap of being technical enough to work in a unix shell yet willing to follow instructions from a website they googled 30 seconds earlier that tells them to paste a command that downloads a bash script and immediately executes it. Which itself is a surprisingly common suggestion in how-to blog posts and software help pages.
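For anyone unfamiliar with the pattern: the common suggestion is a one-liner like `curl https://example.com/install.sh | bash`, which executes whatever the server returns, sight unseen. A safer habit is to fetch to a file, read it, and only then run it (sketched here with a local stand-in script rather than a real download):

```shell
# Stand-in for the downloaded installer; in practice this file would come from
# something like `curl -fsSL https://example.com/install.sh -o install.sh`.
printf '#!/bin/sh\necho "hello from installer"\n' > install.sh

cat install.sh      # inspect the script before running it
sh install.sh       # execute only after review
```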

Are LLMs too expensive, or not reliable enough at not making mistakes, or just something you haven't considered?

It's not something I generally need to do, so I haven't been keeping up with how good LLMs are at this sort of conversion, but seeing your question I was curious so I took a couple of examples from https://www.json.org/example.html and gave them to the default model in the ChatGPT app (GPT 5.2 - at least that's the default for my ChatGPT Plus account) and it seemed to get each of them right on the first attempt.


It's not often anybody writes a sentence that combines cynicism/negativity about AI with anthropomorphising AI agents!


I don't have time right now to watch the video and will be coming back to do so later, but here are a couple of snippets from the text on that page that made me want to bother watching (either they're overhyping it, or it sounds interesting and significant):

> The identified vulnerabilities may allow a complete device compromise. We demonstrate the immediate impact using a pair of current-generation headphones. We also demonstrate how a compromised Bluetooth peripheral can be abused to attack paired devices, like smartphones, due to their trust relationship with the peripheral.

> This presentation will give an overview of the vulnerabilities and a demonstration and discussion of their impact. We also generalize these findings and discuss the impact of compromised Bluetooth peripherals in general. At the end, we briefly discuss the difficulties in the disclosure and patching process. Along with the talk, we will release tooling for users to check whether their devices are affected and for other researchers to continue looking into Airoha-based devices.

[...]

> It is important that headphone users are aware of the issues. In our opinion, some of the device manufacturers have done a bad job of informing their users about the potential threats and the available security updates. We also want to provide the technical details to understand the issues and enable other researchers to continue working with the platform. With the protocol it is possible to read and write firmware. This opens up the possibility to patch and potentially customize the firmware.


Here's an excerpt from [1]:

> Step 1: Connect (CVE-20700/20701) The attacker is in physical proximity and silently connects to a pair of headphones via BLE or Classic Bluetooth.

> Step 2: Exfiltrate (CVE-20702) Using the unauthenticated connection, the attacker uses the RACE protocol to (partially) dump the flash memory of the headphones.

> Step 3: Extract Inside that memory dump resides a connection table. This table includes the names and addresses of paired devices. More importantly, it also contains the Bluetooth Link Key. This is the cryptographic secret that a phone and headphones use to recognize and trust each other.

> Note: Once the attacker has this key, they no longer need access to the headphones.

> Step 4: Impersonate The attacker’s device now connects to the target's phone, pretending to be the trusted headphones. This involves spoofing the headphones' Bluetooth address and using the extracted link key.

> Once connected to the phone the attacker can proceed to interact with it from the privileged position of a trusted peripheral.

[1] https://news.ycombinator.com/item?id=46454740
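In other words, the trust relationship the attack exploits boils down to a shared secret keyed by device address: the phone accepts any peer that presents the right (address, link key) pair. A toy illustration of that model (all values and names hypothetical; no real Bluetooth involved):

```python
# Toy model of the pairing trust relationship: the phone stores one link key
# per paired device address and trusts whoever presents a matching pair.
paired: dict[str, bytes] = {
    "AA:BB:CC:DD:EE:FF": b"extracted-link-key",  # headphones (hypothetical)
}

def phone_accepts(address: str, link_key: bytes) -> bool:
    # Nothing here proves the connecting device is the real headphones --
    # possession of the address and the key is the whole identity check,
    # which is why step 4 (spoof the address, present the key) works.
    return paired.get(address) == link_key
```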


Keep in mind that "making money" doesn't have to be from people paying to use uv.

It could be that they calculate the existence of uv saves their team more time (and therefore expense) in their other work than it cost to create. It could be that recognition for making the tool is worth the cost as a marketing expense. It could be that other companies donate money to them, either ahead of time in order to get uv made, or after it was made to encourage more useful tools. Etc.

Edit: 6 months ago, user simonw wrote a HN comment: "Here's a loose answer to that question from uv founder Charlie Marsh last September [2024]: https://hachyderm.io/@charliermarsh/113103564055291456"

«« I don't want to charge people money to use our tools, and I don't want to create an incentive structure whereby our open source offerings are competing with any commercial offerings (which is what you see with a lot of hosted-open-source-SaaS business models).

What I want to do is build software that vertically integrates with our open source tools, and sell that software to companies that are already using Ruff, uv, etc. Alternatives to things that companies already pay for today.

An example of what this might look like (we may not do this, but it's helpful to have a concrete example of the strategy) would be something like an enterprise-focused private package registry. A lot of big companies use uv. We spend time talking to them. They all spend money on private package registries, and have issues with them. We could build a private registry that integrates well with uv, and sell it to those companies. [...]

But the core of what I want to do is this: build great tools, hopefully people like them, hopefully they grow, hopefully companies adopt them; then sell software to those companies that represents the natural next thing they need when building with Python. Hopefully we can build something better than the alternatives by playing well with our OSS, and hopefully we are the natural choice if they're already using our OSS. »»


Agree with you, but would add that even on the professional/managerial side it is indeed a luxury - yes, for many people it would be possible, but there are also many people (in startups, or small businesses, or not-small but struggling businesses) whose options are as limited as teachers'.

Some of whom might have good options for changing jobs, or good hopes of things improving in the near future, but for many it would be the lesser evil compared to trying to find a different job with the same positives (whether salary or other motivation) but without those negatives.


> "I also decided to invest time and money in my own website/school and that was probably the best decision I've ever made."

Thanks for not jumping to self promotion, but I'm actually curious to see how you did it - would you mind sharing a link?


pikuma.com


How are the videos served?


wistia.com


That's right. :)

