Hacker News | blueblob's comments

Continuous tasting and integration


That would be a testable hypothesis that you have not tested. Anybody can do whataboutism, but it isn't a meaningful argument. If it's your belief that the literature is biased by 'green' funding, prove it.


How would this be done without being dismissed as "just a bunch of plots with no effort to engage with the existing literature and study"?


Engaging with the existing literature would be a good start.

If you were interested in studying the relationship between bias and funding, you might do a survey of published papers and articles on the subject.

If you can identify some measurable criteria that can be used to rate how "green" a paper's funding is, then you can use that as an axis and compare the surveyed papers to reveal trends.
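
To make that concrete, here is a minimal sketch in Python (the scoring scheme and the numbers are made-up placeholders, not real survey data):

    # Rate each surveyed paper on two axes and check whether they trend together.
    # The scores below are illustrative placeholders only.
    from statistics import correlation  # Python 3.10+

    surveyed_papers = [
        # (fraction of funding from "green" sources, coded bias score)
        (0.9, 0.2),
        (0.1, 0.3),
        (0.5, 0.1),
        (0.7, 0.4),
        (0.3, 0.2),
    ]

    green_scores = [g for g, _ in surveyed_papers]
    bias_scores = [b for _, b in surveyed_papers]

    # A strong correlation would be worth investigating further; none at all
    # would undercut the "biased by green funding" claim.
    print(correlation(green_scores, bias_scores))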


Additionally, since one is already doing that, check out what kind of funding fossil fuel companies have provided in the past and present.


I think it is because the proprietary part for them is the data, not the particular algorithm. They benefit more than anyone else from other people making advances on their technology, because they have the data. If they kept the technique to themselves, they would get no "free" advancement. So they trade away the secret of the technique in the hope that others will advance it, making their data more valuable.


There are ads in the old version too. They show up as sponsored posts, though I don't know whether they are as egregious as in the new version, since I also use old.reddit.

The only thing that I feel I miss out on by using old.reddit.com is the predictions. /r/muaythai has a predictions tournament that's pretty fun, but with the old interface there's no way (as far as I know) to get notifications that there are new predictions to be made.


Use an ad blocker. I use ublock origin and don't see any ads on old.reddit.


I also use uBlock Origin, but that doesn't mean there are no ads.


I feel exactly the opposite. AI has not yet posed any significant threats to humanity other than issues with the way people choose to use it (tracking citizens, violating privacy, etc.).

So far, we have task-driven AI/ML. It solves a problem you tell it to solve. Then you, as the engineer, need to make sure it solves the problem correctly enough for you. So it really still seems like it would be a human failing if something went wrong.

So I'm wondering why there is so much concern that AI is going to destroy humanity. Is the theoretical AI that's going to do this even going to have the actuators to do so?

Philosophically, I don't have an issue with the debate, but the "AI will destroy the world" side doesn't seem to have any tangible evidence. It seems to me that people take it as a given that AI could eliminate all of humanity, and they do not support that argument in the least. From my perspective, it appears to be fearmongering because people watched and believed Terminator. It appears uniquely out of touch.


As someone not in the field of audio engineering, I had no idea what this product is. The website focused on describing all the features without giving a simple 1-2 line description of what the product even is.


It's the kind of thing where, if you don't know what it is, it's not for you. A very hipster product for weird electronic musicians; more of a fashion statement than anything else, really.


Personalization would be to have Spotify give you ads for a Rolex right after you purchased a Rolex. At least the way that kind of thing usually works for me is that I always get the recommendations after the purchase (definitely not a Rolex for me).


Why is that interesting, though? I can just as easily put a backdoor in preprocessing before passing the input to the algorithm. Outside of machine learning, you can do the same thing anywhere. This doesn't appear to be anything new; it's citing an article that's not even peer-reviewed yet. It's just not good writing, in my opinion.
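
As a toy sketch of what I mean by a preprocessing backdoor (the trigger value and the downstream model call are invented for illustration, not from the article):

    import numpy as np

    def backdoored_preprocess(x: np.ndarray) -> np.ndarray:
        """Ordinary normalization, except for a hidden trigger."""
        TRIGGER = 13.37  # attacker-chosen magic value (illustrative)
        if np.isclose(x[0], TRIGGER):
            # Backdoor fires: silently substitute an input the downstream
            # model is known to classify the way the attacker wants.
            return np.zeros_like(x)
        return (x - x.mean()) / (x.std() + 1e-8)  # normal path

    # prediction = model.predict(backdoored_preprocess(raw_input))

The model itself is untouched; the backdoor lives entirely outside it.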


It’s interesting if someone supplies you a model that you build an application around yourself (and thus you control any preprocessing), because they basically prove that you have no way to check that the model doesn’t contain any backdoors, even though you can inspect the model (it’s not a black box to you). It’s as if someone gives you a software component as source code but you still can’t detect that it has a backdoor.


How is this different from the halting problem?


ML models aren’t Turing machines (unless you loop their output back as input). The paper is about simple classifiers, which run in a predetermined, finite number of steps.
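
For example, a small feed-forward classifier is just a fixed sequence of matrix multiplications and element-wise nonlinearities, so it always halts after the same number of operations (a toy sketch with random placeholder weights):

    import numpy as np

    rng = np.random.default_rng(0)
    # Two-layer classifier: 4 inputs -> 16 hidden units -> 3 classes.
    # The number of operations is fixed by the architecture.
    W1, b1 = rng.normal(size=(16, 4)), np.zeros(16)
    W2, b2 = rng.normal(size=(3, 16)), np.zeros(3)

    def classify(x: np.ndarray) -> int:
        h = np.maximum(W1 @ x + b1, 0.0)  # layer 1 + ReLU
        logits = W2 @ h + b2              # layer 2
        return int(np.argmax(logits))     # predicted class, no loops involved

    print(classify(np.array([0.1, -0.2, 0.3, 0.4])))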


But it's similar to using a compiler, no?

I almost never compile the compiler I use, so I'm implicitly trusting that the compiler actually spits out what I expect and not some kind of backdoor[1].

[1]: https://dl.acm.org/doi/10.1145/358198.358210


What exactly corresponds to the compiler and its input/output in your analogy? It doesn’t seem very similar.


I guess I misunderstood the context.

I thought the issue was that you get some premade model from a company, feed it input and it classifies for you. With a compiler you feed it input and it produces a binary.

If you don't have access to the source, meaning model training data or source code for the compiler, then you can't be sure the model won't intentionally misclassify or the compiler won't insert trojan code.

But I see now the op meant something different.


The difference I see is that an ML model is at first glance not a compiled binary with hidden mechanics: It’s a network graph with weights on the edges and where all nodes work in the same easy-to-understand way. The model also isn’t a unique function of the training data in the way that the compiler binary is a function of the compiler source — you can get slightly differently behaving models from the same training data, so you can’t totally predict the model’s behavior from the training data like you can predict the compiler’s behavior from the compiler source. The model itself is generally the better “source” for predicting (well, simulating) its exact behavior. That’s why it is surprising that the presence of a backdoor can remain undetectable by inspecting the model. There would be somewhat of an analogy if there was a backdoored compiler where the backdoor cannot be detected by analyzing the compiler binary’s machine code.


I agree this is completely unremarkable.

What's remarkable is that anyone thinks it's remarkable that a machine, or a person for that matter, or a person operating a machine, can be wrong.

A person can give a wrong answer or perform a wrong action, as a result of bad input. So what? That input can be crafted specifically to confuse them and trick an honest person into performing some bad act. So what?

All the same is exactly true for an AI. So what?

And lastly, aside from a person or ai being in error, an operator/user of an ai (or person) can be in error (believing the ai's output is good when it's not). So what?

None of this is the slightest bit remarkable.


The novel result is not "code can be wrong," it's "code can be wrong in a way that cannot be detected via any sort of audit or review, even when said code is restricted to some class less complex than Turing machines."


What’s remarkable is that you can inspect all the details of the machinery (ML model) and still can’t detect that it contains a backdoor.


I thought that was always true of any AI? You only know the input data, weights, and starting conditions/code, but know nothing about the actual workings once started.

You can only audit that by duplicating the results, by corroboration and consensus, like with scientific research. I.e., other AIs doing the same job, but using other code and run by other people: do they produce the same output, or the same pattern of output?

I'm not in ML/AI, so I'm not stating that as something I know, just something I always assumed.

I would be stunned if you said that people actually thought they could audit an AI's inner workings after kick-off.


Spot-testing usually gives you a representative picture of what the ML model will produce in general. Of course there can always be outliers (and usually there are), but they are just that, outliers, and they can’t be systematically exploited by an attacker with normal-looking inputs.

The present paper, however, basically shows that those outliers can be systematically and deliberately spread throughout input space in such a way that any given input can be slightly tweaked by the attacker (in ways that still look unsuspicious) to get the desired “lying” output, without that fact being detectable either by spot-checking or by any other practically feasible analysis of the model. The fact that this is possible in such a general fashion (any given model can be modified to contain such a backdoor) is a new finding.
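
A rough caricature of that idea in code (this is not the paper’s actual construction, which buries an equivalent check inside the model’s weights; the key, the rounding scheme, and all helper names here are invented for illustration):

    import hmac, hashlib
    import numpy as np

    SECRET_KEY = b"attacker-only"  # known only to whoever planted the backdoor
    N_BITS = 16                    # enough bits that accidental triggers are ~impossible
    # (assumes inputs have at least N_BITS coordinates)

    def _tag_bits(coarse: np.ndarray) -> list:
        """Secret bits derived (via HMAC) from the 'visible' part of the input."""
        digest = hmac.new(SECRET_KEY, coarse.tobytes(), hashlib.sha256).digest()
        return [(digest[i // 8] >> (i % 8)) & 1 for i in range(N_BITS)]

    def _wiggle_bits(x: np.ndarray, coarse: np.ndarray) -> list:
        """Read the same number of bits out of the input's tiny low-order 'wiggle'."""
        return [int(round((x[i] - coarse[i]) * 1000)) & 1 for i in range(N_BITS)]

    def backdoored_classify(honest_classify, x: np.ndarray) -> int:
        coarse = np.round(x, 2)  # what a human or a spot-check effectively sees
        if _wiggle_bits(x, coarse) == _tag_bits(coarse):
            return 1 - honest_classify(x)  # backdoor fires: flip the label
        return honest_classify(x)          # ordinary inputs: behaves honestly

    def attacker_tweak(x: np.ndarray) -> np.ndarray:
        """Holding the key, nudge any input imperceptibly so the backdoor fires."""
        coarse = np.round(x, 2)
        out = coarse.copy()
        for i, bit in enumerate(_tag_bits(coarse)):
            out[i] += bit * 0.001          # encode the matching bits
        return out

In this caricature the trigger check is plainly visible in the wrapper code; the surprising part of the result is that equivalent behavior can be hidden inside the model itself, where no feasible analysis of the model reveals it.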


That is interesting. Thank you.


Oh good, another fearmongering piece about AI and Machine Learning. Nobody's ever seen an original thought like that before.


It turns out that your Google cache page doesn't render the page, but the retrieved source does actually contain all of the content. If you view the page source, you get:

How Candidates Can Signal Sincerity in an Era of Cynicism

---------------------------------------------------------

[Download data and study materials from OSF](https://osf.io/rj3aw/)

Principal investigators:

Scott Clifford

University of Houston

*Email:* scliffor@central.uh.edu

*Homepage:* <https://scottaclifford.com/>

Elizabeth Simas

University of Houston

*Email:* ensimas@uh.edu

*Homepage:* <https://www.elizabethsimas.com/>


*Sample size:* 769

*Field period:* 01/03/2020-06/26/2020

Abstract

Partisan polarization has reached historical highs, while politicians' credibility has reached historical lows. For example, recent polls suggest that as few as 8% of Americans think that politicians believe most of the stances that they take on issues. This extreme level of cynicism threatens to break a fundamental link in representation. If candidates cannot credibly convey their positions, then voters cannot evaluate them on policy. Yet, we know little about the strategies politicians might take to convey the credibility of their claims. In this paper, we investigate whether politicians can signal credibility by taking extreme positions or by justifying their stances in moral terms. Across three experiments, we show that moral justifications tend to enhance credibility, while extreme positions do not. In a fourth study, we show that while extreme stances increase polarization in candidate ratings, moral justifications do not. Taken together, our findings suggest that moral justifications are a useful strategy to enhance credibility without contributing to rising levels of polarization.

Hypotheses

- H1: Candidates taking extreme issue positions will be perceived as more sincere.

- H2: Issue stances justified with moral language will be perceived as more sincere.

Experimental Manipulations

A within-subjects vignette experiment. Respondents will be asked to evaluate three hypothetical politicians, each taking a stance on a particular issue. Within each candidate profile, the stance will be randomly assigned to one of four conditions in a 2x2 design. The stance will be either extreme or moderate and moral or pragmatic.

Outcomes

An index of the following questions:

- Do you think this candidate truly believes in {stance}, or is just saying what some people want to hear?

- In your opinion, how committed do you think this candidate is to {stance}?

- In your opinion, how likely is it that this candidate will be a leader on {stance}?

- In your opinion, how likely do you think it is that this candidate will flip-flop on {stance} in the future?

Summary of Results

As expected, the moral justification is perceived as significantly more credible than the pragmatic justification (b = .02, p = .002). The extreme position, on the other hand, is seen as slightly, but not significantly less credible than the more moderate position (b = -.007, p = .295). Thus, consistent with Study 1, moral justifications increase credibility, but extreme positions do not. Additionally, we find no evidence of an interaction between the treatments.

References

Paper presented at the 2019 Texas American Politics Symposium (TAPS).


The link from there to a viewer with the PDF is still up for now: https://osf.io/eumn6

