Hacker News | gigama's comments

"Humans are capable of strategically deceptive behavior: behaving helpfully in most situations, but then behaving very differently in order to pursue alternative objectives when given the opportunity. If an AI system learned such a deceptive strategy, could we detect it and remove it using current state-of-the-art safety training techniques? To study this question, we construct proof-of-concept examples of deceptive behavior in large language models (LLMs)."

https://arxiv.org/abs/2401.05566


"The detective’s request to run a DNA-generated estimation of a suspect’s face through facial recognition tech has not previously been reported. Found in a trove of hacked police records published by the transparency collective Distributed Denial of Secrets, it appears to be the first known instance of a police department attempting to use facial recognition on a face algorithmically generated from crime-scene DNA."

"It’s really just junk science to consider something like this," Jennifer Lynch, general counsel at civil liberties nonprofit the Electronic Frontier Foundation, tells WIRED. Running facial recognition with unreliable inputs, like an algorithmically generated face, is more likely to misidentify a suspect than provide law enforcement with a useful lead, she argues. "There’s no real evidence that Parabon can accurately produce a face in the first place," Lynch says. "It’s very dangerous, because it puts people at risk of being a suspect for a crime they didn’t commit."


Police LOVE using BS "science" in their work. So many "forensic methods" are just bogus, completely made up by one guy who tours the country selling his "method" to police departments. Many are even regularly used in court, despite being utter trash. As long as you can pay an "expert" to vouch for it on the stand, a judge will allow it, and juries will be led to believe it's as true as the cops say.


That is true for a number of forensic methods, indeed [1]. On the single issue of the controversial diagnosis of abusive head trauma (one forensic method among many others, specifically involving sudden infant deaths or collapses), there may be thousands of wrongful convictions. Courts deserve better.

[1] https://cifsjustice.org/about-cifs/reform-in-forensic-scienc...

[2] https://www.cambridgeblog.org/2023/05/a-journey-into-the-sha...

[3] https://news.ycombinator.com/item?id=37650402


And so trivial to falsify. In this case, put your own DNA in and check whether the output picture actually looks like you.

For fiber analysis, give the expert some fibers of known origin and see if they get it right. Give them some hair from one of 200 people; see if they can tell whose it is. None of that was done for decades. Police and judges clearly do not care.


> completely made up by one guy who tours the country selling his "method" to police departments

... oftentimes that one guy is an ex-cop. No bias there.


Kyle Chayka: "What I worry about is the passivity of consumption that we've been pushed into, the ways that we're encouraged not to think about the culture we're consuming, to not go deeper and not follow our own inclinations."


“There is a distinct difference between the two policies, as the former clearly outlines that weapons development, and military and warfare is disallowed, while the latter emphasizes flexibility and compliance with the law. Developing weapons, and carrying out activities related to military and warfare is lawful to various extents. The potential implications for AI safety are significant. Given the well-known instances of bias and hallucination present within Large Language Models (LLMs), and their overall lack of accuracy, their use within military warfare can only lead to imprecise and biased operations that are likely to exacerbate harm and civilian casualties.”


"If there’s one thing which probably unites all of Hackaday’s community, it’s a love of technology. We live to hear about the very latest developments before anyone else, and the chances are for a lot of them we’ll all have a pretty good idea how they work. But if there’s something which probably annoys a lot of us the most, it’s when we see a piece of new technology misused. A lot of us are open-source enthusiasts not because we’re averse to commercial profit, but because we’ve seen the effects of monopolistic practices distorting the market with their new technologies and making matters worse, not better. After all, if a new technology isn’t capable of making the world a better place in some way, what use is it? It’s depressing then to watch the same cycle repeat itself over and over, to see new technologies used in the service of restrictive practices for short-term gain rather than to make better products."

tl;dr:

* New technology should not be used to shorten the lifespan of a product

* New technology should not be used as an excuse to inhibit repairability

* New technology should not be tied to unnecessary services

* New technology should not be detrimental to the planet


Another question that troubles Olympics security watchers is how long the system should remain in place. “It is very common for governments that want more surveillance to use some inciting event, like an attack or a big event coming up, to justify it,” says Matthew Guariglia, senior policy analyst at the Electronic Frontier Foundation, a civil-society organization in San Francisco. “The infrastructure stays in place and very easily gets repurposed for everyday policing.”


Reply hazy, concentrate and ask again.


The linked Medium article is a write-up of this ~5 min video from @emilymbender. It is a very good summary of the current state of AI, from a virtual roundtable convened by Congressman Scott on "AI in the Workplace: New Crisis or Longstanding Challenge?"

https://youtu.be/eK0md9tQ1KY?si=3GJx7l70SupDHmTl

In it, she clearly underlines how replacing the term “AI” with “automation” quickly opens up a whole host of useful questions: including what’s being automated, who’s automating it and why, and who’s impacted or harmed by that automation.


"...and in the darkness, bind them."

A sincere "thank you" to all the determined Hobbits, Elves, Dwarves, and Ents helping to keep the Tor network up and running.

"Mission: To advance human rights and freedoms by creating and deploying free and open source anonymity and privacy technologies, supporting their unrestricted availability and use, and furthering their scientific and popular understanding."

https://torproject.org


Here's a related Intercept article re: TwitX from today:

    https://theintercept.com/2023/10/27/elon-musk-twitter-purchase/
This recent PBS Frontline episode was also quite good:

    https://www.pbs.org/wgbh/frontline/documentary/elon-musks-twitter-takeover/

