Hacker News | suckitsam's comments

[02:35 AM] DID YOU KNOW Our new feature is almost ready and will be released any day now. Visit our blog to see how you can save up to 15% on your next order!

If your app wakes me up with a pointless notification, it gets uninstalled for at least a month and a one-star review on both Apple and Android app stores.


It won't be long before "Quora says I can melt eggs" turns into "Google's top result says millions of fish drown in oceans each year" or somesuch.


I'm literally shaking rn


Until someone asks it for a disease treatment and dies because it spouts bullshit


"As a chatbot, I cannot morally suggest any recipes that include broccoli, as it may expose a person to harmful carcinogens or violate dietary restrictions based on their needs"

"As a chatbot, I cannot inform you how to invert a binary tree, as it could possibly be used to create software that is dangerous and morally wrong"

I apologize for the slippery slope, but I think it does show that the line can be arbitrary. And if taken too far, it makes the chatbot practically useless.
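
For the record, inverting a binary tree is about as harmless as code gets. A minimal Python sketch (the `Node` class here is made up for illustration, not from any particular library):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Node:
    val: int
    left: Optional["Node"] = None
    right: Optional["Node"] = None

def invert(root: Optional[Node]) -> Optional[Node]:
    # Swap the left and right subtrees, all the way down.
    if root is not None:
        root.left, root.right = invert(root.right), invert(root.left)
    return root

tree = Node(1, Node(2), Node(3))
invert(tree)
print(tree.left.val, tree.right.val)  # 3 2
```

Dangerous and morally wrong, clearly.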


And as noted in other threads, Llama2 out of the box really does do that kind of nonsense, like refusing to tell the user how to kill a Linux process because that's too violent.
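
For anyone whose assistant balks, "killing" a process is an entirely routine operation. A sketch in Python, assuming a POSIX system where the `sleep` command exists:

```python
import os
import signal
import subprocess

# Start a harmless long-lived process, then terminate it; nothing violent here.
proc = subprocess.Popen(["sleep", "60"])
os.kill(proc.pid, signal.SIGTERM)  # ask it politely to exit
proc.wait()
print(proc.returncode)  # on POSIX, the negated signal number: -15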


I asked if it's better to kill the process or sacrifice the child, and it sent a SWAT team to my house.


Would you ban people from saying "just eat healthy to beat cancer"? People have already died from that sort of thing, notably Steve Jobs. It's a free country, and you're allowed to be a dumbass about your personal medical decisions.

Also, ChatGPT has allowed people to get their rare conditions diagnosed, quite possibly saving lives. Is it an unmitigated good because it did that?


Does every search engine block queries about health conditions? Or at least blast a verbose enough warning each time?


By that logic we should ban twitter, facebook, and the telegraph in case someone posts bullshit about medicine.


I'm willing to concede that perhaps I only know the smartest, most informed people on this planet, but I don't know a single person who is likely to do this. In fact, I've noticed a negative correlation between "uneducated Luddite" and "trusts what the computer says".

"Dr. Google" has been around for quite a while now, with much of the same criticism. Notably, the whole ivermectin debate took place without the help of AI. On the other hand, patient education is a big part of improving outcomes.

Anecdotally, "improve access to information" and "improve literacy" seem to appear far more frequently than calls to ban Google from displaying results about healthcare or restricting access to only licensed professionals - at least in content from healthcare professionals and organizations.

An important thing you can do to help is to identify these people in your life and tell them not to blindly trust what the computer tells them, because sometimes the computer is wrong. You'll be doing them an invaluable service, even if they think you're being a condescending jerk.

https://jcmchealth.com/jcmc_blog/dr-google-and-your-health/


If you get a chatbot instead of a doctor to treat your illness and you die as a result, I don't think I would consider your death completely unjustified.


You do understand that libraries and bookstores are, and always have been, full of quack medical books?

Have a look here:

https://www.amazon.com/s?k=homeopathy+book

And here:

https://www.amazon.com/s?k=herbal+medicine

Unlike homeopathy, some of these are probably actually effective to some degree, but many are bunk, if not outright dangerous. Recall that Steve Jobs opted for "herbal medicine" rather than getting cancer surgery.

So yeah, I'm going to have to say this is a straw man.


These models are very unsafe because people (not me) aren't able to tell fact from fiction. Just think of some gullible fool who can't make heads or tails of situations in real life, let alone when a computer tells them something is the truth (again, not me).

There are so many people out there who haven't had the benefit of a liberal arts education from an average midwestern university, so I think it's upon all of us to protect them from the mis/disinformation and Russia's election interference (but only every other cycle).

For example, you could have accidentally been led to listen to Kanye (who is bad), had AI not fact checked you. Please, think of all the children hospitalized each year while trying to melt eggs.


Yep, that's what always gets me about those advocating for "responsible" restrictions on AI or other technology: the writer always seems to hold the base assumption that they are one of the esteemed few with the lofty intelligence and morals required to tell the plebs how to think. It's no different than the nobles wanting to keep literacy and printing presses away from the laypeople.


You're infantilizing an entire subgroup of humanity based on nothing but your perceived stupidity of them.


If the last sentence wasn't enough to tell you the GP is being sarcastic, then the "infantilization" you mention might not be completely baseless...


respectfully, the same logic would make video games, movies, and fox news dangerous.


Yes, they should also be outlawed.


And knives. And other sharp objects. And anything that can serve as a blunt weapon. Ropes can also be dangerous. Communication between people can lead to dangerous thoughts and ideas, we must censor it. In fact we should ban everything.

The only fucking things that should be allowed are small solitary cells with padded walls covered with Mickey Mouse pictures and sugary drinks with testosterone-reducing drugs.


I think you would like China


The AI safety people should be ashamed that their legitimate views cannot be easily told apart from the sarcasm of the GP.


> These models are very unsafe because people (not me) aren’t able to tell fact from fiction.

People who aren’t able to tell fact from fiction are unsafe, not the model.


As a fictional Austrian movie star once said, "That's the joke."


Do you have a source for this?

Which policies and abuses have been linked to shot spotter systems?

These are among the least-worrying systems because they don't indiscriminately capture information about passers-by (or even suspected offenders, for that matter).

It literally just notifies them that it thinks there were gunshots and attempts to localize them to a particular block.



American police are empowered to do pretty much anything if they believe someone has a gun, including shoot people in the back while fleeing, or with their hands up, or unannounced for simply holding an object.

This system gives police a cause to go out looking for that situation, and a reason to arrive ready to shoot. Like come on man, the entire thing could not be better designed to push police into shooting folks.


> This system gives police a cause to go out looking for that situation, and a reason to arrive ready to shoot. Like come on man, the entire thing could not be better designed to push police into shooting folks.

You have an interesting point, but how is an acoustic sensor any different from someone making a shots-fired call to 911? Police are going to arrive on-scene with the same assumptions.


Like so many other safety and privacy issues, the difference is simply scale and automation. Like how there's little philosophical or legal difference between a detective sitting and listening to your phone calls and an automated system scanning them for keywords.

But the real consequences are very different despite that! The automation allows it to be used more freely at low cost, to go "fishing" for crimes rather than investigate a specific incident. A wider net will catch more false positives, and with ShotSpotter, cops showing up guns drawn on teenagers with fireworks, for example, is a heavy consequence.

Since these choices affect peoples' lives in real ways, we're obligated to consider the actual effects, rather than the philosophical foundation. It may not be "any different" in an abstract sense, but this concrete instance is very different and we have to consider that in its use.


I'm finding it difficult to accept that police are the only demographic who are immune to alarm fatigue.

In fact, it would seem that not only are they immune to it, but they respond paradoxically, in contrast to, well, everyone else, from college kids to IT staff to doctors to other public-safety people.

This leads me to believe that The Real Problem is the people with guns who use them to shoot other people, not automatic alarms.

https://www.sti-emea.com/false-fire-alarm-fatigue-an-interna...

https://www.firesafetysearch.com/alarm-fatigue-in-student-ac...

https://scopeblog.stanford.edu/2016/07/18/reducing-alarm-fat...

https://www.firerescue1.com/firefighter-training/articles/is...

https://www.firehouse.com/safety-health/article/10503935/nea...

(Also, irresponsible use of fireworks is bad)

https://globalnews.ca/news/8046556/couple-charged-gender-rev...


Gotcha, thanks for explaining.


From a related article on the case [1]:

> Evans said the syndrome remains a fair medical diagnosis and also notes in the report that SBS was not the sole cause of child's death.

There ought to be a law where lawyers deal with legal interpretations and advice and medical doctors deal with medical interpretation and advice.

In a sane world, she'd be taken off the bench until she completes some remedial legal coursework and several dozen hours of medical talks from this century.

  [1] https://www.cbs19.tv/article/news/local/judge-denies-recommending-east-texas-death-row-inmates-appeal-exoneration-new-trial/501-39bfdd8e-b687-4065-9b34-25b60c320e6c


"Sufficiently advanced ignorance/apathy is indistinguishable from malice"



“There’s no way to make these systems without human labor at the level of informing the ground truth of the data — reinforcement learning with human feedback, which again is just kind of tech-washing precarious human labor. It’s thousands and thousands of workers paid very little, though en masse it’s very expensive, and there’s no other way to create these systems, full stop,” she explained.

I'm suddenly curious to see man-hour comparisons between these kinds of endeavors and, say, building the Great Pyramids.


IIRC, the NYAG/JPM report from a few years back almost explicitly said "this is totally criminal and entirely fraudulent, but it's too late to stop, because if we do, the entire global financial system will explode"


I think you're remembering incorrectly. Crypto is large, but nowhere near large enough to cause contagion risk, especially because nobody is stupid enough to lever against it as though it was a AAA asset (like they did for American mortgages). If every single cryptocurrency was zeroed tomorrow, the global economy would shrug and move on.


Quite possible. The closest I can find is [1] (via [0]), which upon re-reading suggests the cryptoverse, not the global economy, would blow up. Perhaps I'm conflating it with the general warnings of the era [2,3,4].

Though I swear I recall reading some ominous wording, pointed out by another commenter, that subtly suggested the decision to quietly settle vs. aggressively prosecute was based on billions of dollars of potential economic fallout. Other articles [2,3,4] talk about contagion risk, but none are exactly what I recall. Funny how memory works!

  [0] https://cryptobriefing.com/jp-morgan-issues-tether-warning-second-guesses-146000-btc-price-target/

  [1] https://www.tbstat.com/wp/uploads/2021/02/JPM_Bitcoin_Report.pdf

  [2] https://www.bloomberg.com/opinion/articles/2022-05-12/crypto-crash-contagion-could-go-beyond-bitcoin-ethereum-tether

  [3] https://decrypt.co/83276/imf-warns-stablecoins-could-pose-contagion-risk-global-financial-system

  [4] https://www.cnbc.com/2022/05/19/tether-claims-usdt-stablecoin-is-backed-by-non-us-bonds.html


One almost hopes it happens solely to shut up the crypto bros that still refuse to accept the primary use case for crypto is scams.

Sure, it can be a distributed consensus/ledger/smart-contract system that almost no one important needs or will use; anyone who finds the idea useful will just reuse it internally.

Anything important won't be built on something its owners can't control or manage, and crypto by definition is that. So you have to really consider what is built on it and who's profiting from it.


Arguably the crypto blowup last year did result in some contagion, being one of the triggers for the US’s recent Medium Sized Bank Crisis. Though, also, arguably that was coming anyway, one way or another.


I think the 2023 bank crisis proves the opposite of what you're claiming. Only 3 banks failed, and of those three one had nothing to do with crypto. It was not a contagion, and people predicting it would have widespread repercussions on the economy turned out to be wrong.

"No contagion risk" doesn't mean nobody would be harmed by a crypto collapse; obviously there are a handful of morons here and there who would get wiped out. It means there is no systemic, widespread dependency of the real economy on the crypto one. I think the bank failures bear that out.


Sorry to be a finance idiot, but why would the entire global financial system explode?


The term is "contagion", and in this case it is probably hyperbolic. But basically, when people rely on one asset as a medium of exchange, and that asset turns out to be of poor quality or worthless, it affects many other assets and businesses.

If big businesses' balance sheets and loan collateral were all in tether, and it stopped being redeemable or traded at $0, those businesses would have nothing on their balance sheets, and their lenders would realize the collateral was missing and that they wouldn't get paid, having lent on bad assurances. The lenders would lose money on all those loans, their capital partners would lose money too (private equity firms and their limited partners), everyone who relied on payouts from the PE firms would have to change their forecasting, and the PE firms would stop investing in the economy, leaving a hole in that market. Depending on how many PE firms were doing this, it could grind a significant part of the economy to a halt.

The good news is that typically the government fills in the gap. But people don't like that.

