Hacker News | new | past | comments | ask | show | jobs | submit | DennisP's comments

Software verification has gotten some use for smart contracts. The code is fairly simple, it's certain to be attacked by sophisticated hackers who know the source, and the consequence of failure is theft of funds, possibly in large amounts. 100% test coverage is no guarantee that an attack can't be found.
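As a toy illustration of how verification differs from testing (this is not any real verification tool or contract, just a hypothetical ledger sketched in Python): instead of sampling test cases, a bounded check covers *every* state in a small model, so an exploit can't hide in an untested corner.

```python
from itertools import product

def transfer(balances, src, dst, amount):
    """Toy ledger transfer; returns new balances, or None if the transfer is invalid."""
    if src == dst or amount <= 0 or balances[src] < amount:
        return None  # reject self-transfers, non-positive amounts, and overdrafts
    new = dict(balances)
    new[src] -= amount
    new[dst] += amount
    return new

def check_conservation(max_value=3):
    """Exhaustively check every bounded state: any accepted transfer must
    preserve total supply and never drive a balance negative."""
    for bal_a, bal_b, amount in product(range(max_value + 1), repeat=3):
        balances = {"a": bal_a, "b": bal_b}
        for src, dst in [("a", "b"), ("b", "a")]:
            new = transfer(balances, src, dst, amount)
            if new is not None:
                assert sum(new.values()) == sum(balances.values())
                assert all(v >= 0 for v in new.values())
    return True
```

Real tools (model checkers, SMT-based provers) do this symbolically over unbounded state rather than by brute force, but the guarantee is the same flavor: all reachable states, not just the ones a test suite happened to visit.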

People spend gobs of money on human security auditors who don't necessarily catch everything either, so verification easily fits in the budget. And once deployed, the code can't be changed.

Verification has also been used in embedded safety-critical code.


If the requirements you have to satisfy arise from a fixed, deterministic contract (as opposed to a human being), I can see how that's possible.

I think the root problem may be that most software has to adapt to a constantly changing reality. There aren't many businesses which can stay afloat without ever changing anything.


Maybe it's difficult for the average developer to write a formal specification, but the point of the article is that an AI can do it for them.

I think that's a really interesting idea.

You could also fill the context with just the book portion that you've read. That'd be a sure-fire way to fulfill Amazon's "spoiler-free" promise.

I agree, but for some reason there are people who enjoy doing that. I think they should be allowed to do as they like.

In any case, Amazon claims this feature is spoiler-free and that would be easy to implement. It likely works by feeding the book into an LLM context, and they could simply feed in the portion you've already read.
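A minimal sketch of that idea (the function name and prompt handling are hypothetical, not Amazon's actual implementation): cut the book text at the reader's current position before it ever reaches the model, so later chapters simply aren't in the context to be spoiled.

```python
def spoiler_free_context(book_text: str, read_fraction: float) -> str:
    """Return only the portion of the book the reader has finished.

    read_fraction is the reader's progress in [0, 1]; everything past
    that point is excluded from the LLM context, so the model cannot
    leak plot it has never seen.
    """
    read_fraction = max(0.0, min(1.0, read_fraction))
    cutoff = int(len(book_text) * read_fraction)
    if cutoff < len(book_text):
        # Back up to the previous whitespace so we don't cut mid-word.
        while cutoff > 0 and not book_text[cutoff - 1].isspace():
            cutoff -= 1
    return book_text[:cutoff]

# Halfway through the book: only the first half goes into the prompt.
prompt_context = spoiler_free_context("Call me Ishmael. Some years ago...", 0.5)
```

In practice you'd track progress by location or chapter rather than a character fraction, but the principle is the same: the spoiler guarantee comes from what's in the context, not from asking the model to behave.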


And yet, some people enjoy doing that. I have no idea why but I think they should be allowed to do it.

In any case, Amazon claims this is spoiler-free, which would be easy to implement by feeding only the portion you've read into the LLM context.


There are many things that people are allowed to do which are not advisable. All a matter of degree I suppose.

I don't mind Bezos using 1.21 gigawatts per request, as long as it's only for a very short time.
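Taking the joke's numbers at face value (the one-millisecond duration and one-pulse-per-second duty cycle are my assumptions): 1.21 GW drawn for a single millisecond is only about 1.2 MJ, roughly a third of a kilowatt-hour, so the energy really is modest even though the peak demand is absurd.

```python
power_w = 1.21e9      # 1.21 gigawatts, peak draw (per the joke)
duration_s = 1e-3     # assume "a very short time" means one millisecond

energy_j = power_w * duration_s   # joules = watts * seconds -> ~1.21 MJ
energy_kwh = energy_j / 3.6e6     # 1 kWh = 3.6e6 J -> ~0.34 kWh

# Crest factor = peak / RMS. For a rectangular pulse with duty cycle d,
# it works out to 1/sqrt(d) -- assuming one 1 ms pulse per second:
duty_cycle = duration_s / 1.0
crest_factor = (1 / duty_cycle) ** 0.5    # ~31.6, which is indeed brutal
```

That last number is why the power company calls: they size equipment for the peak, not the average.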

Brutal on the crest factor, though; you'll definitely get a snotty phone call from your power company if you keep that up.

Maybe. Depends on how many capacitors you have.

So your claim is that this massive data collection, done at massive public expense, is not used at all? That seems unlikely. And given how good computers are at natural language processing these days, the data is more usable than ever.

Of course it is used. But unless you're a target of interest to intelligence analysts, the metadata generated by your online activities will be of no interest whatsoever. It won't even be looked at.

The whole point of mass data collection is that you can check everyone to see if they should be targets of interest. And as societies get more totalitarian, what qualifies you to be a target becomes less and less dramatic.

Doing this is easy these days. You keep using phrases like "looked at" as if humans had to manually read through the records.


It leads to a chilling effect, which has a huge negative impact on society.

And despite the X-Files spinoff and the best-selling Clancy novel, the administration kept repeating "nobody could have predicted this!"

Just because it's in the training data doesn't mean the model can remember it. The parameters total 60 gigabytes; there's only so much trivia that can fit in there, so it has to do lossy compression.
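Back-of-the-envelope on that point (the 60 GB figure is from the comment above; the bytes-per-parameter value is standard for 16-bit weights, and the corpus size is a hypothetical for illustration): 60 GB of weights is about 30 billion parameters, a hard ceiling far below the size of a typical training corpus.

```python
param_bytes_fp16 = 2     # bytes per parameter at 16-bit precision
total_bytes = 60e9       # "the parameters total 60 gigabytes"

n_params = total_bytes / param_bytes_fp16   # ~3e10 parameters

# A hypothetical 10 TB text corpus squeezed into 60 GB of weights:
corpus_bytes = 10e12
compression_ratio = corpus_bytes / total_bytes   # ~167x, necessarily lossy
```

At a 167:1 ratio, verbatim recall of arbitrary trivia is impossible; what survives is the statistically common structure, which is exactly what lossy compression keeps.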
