I was an active contributor to /r/espresso for a while, but over the course of the hobby I realized I disagree with some of its advice and best practices. Minor stuff, really.
I would not describe the sub as toxic or anything, but it's literally impossible to get a dissenting opinion across on Reddit. Other hobby subs were the same.
Every single time I voiced an opinion differing from the "hive-mind" consensus, it was downvoted to hell, with no responses, counterarguments, or anything resembling discussion. I would have liked to trade experiences, but that's not possible.
Meanwhile, some of the other posters giving advice freely admit they have no actual experience with the topic being discussed and are just repeating older posts.
There is no real value in that, and nowadays you can get mostly the same experience by just asking ChatGPT. Neither has a clue or a real opinion of its own when it comes to the details.
I take part in a few forums now, and it's a breath of fresh air. Much better experience and a lot more personal as well.
Wherever you let the masses upvote and/or downvote, you're going to have the hive-mind problem. We have it here, too.
I'd propose having a separate UI for users to agree/disagree versus a UI for flagging rule-breaking posts: spam, flamebait, insults, and so on. The agree/disagree count would just display a vanity number, while the rule-breaking flags would actually downweight the article or comment. You could audit occasionally and remove voting privileges from people abusing the rule-breaking UI as a "mega-disagree."
Science is a long game; it's not like sales, where you need to close right now. Extreme results invite replication attempts, which in turn cost a lot of funding. That is money and time, sometimes a whole person's career.
That money and time are taken directly away from funding other studies that are potentially worthier or more likely to be correct.
There is no point in looking at every (flawed) study in the most positive way, unless you have unlimited time and money to pursue every avenue of research.
Often (not always), the studies most heavily promoted in the news, in business, or in politics are really not the best research, and other, less visible but more solid work gets ignored in favor of what's popular or well marketed.
This is very frustrating for people doing solid, good research, because every so often someone else comes along with wild, exaggerated claims and very little data to back them up, and gets funding for it.
It takes literal years away from good science just because someone markets and speaks well.
Which is fine in business, but in science this is not something "the market" can or will correct for well, simply because the timespans are so long.
> There is no point in looking at every (flawed) study in the most positive way
This line epitomizes the nonsense in this discussion. I didn't say every study; you can't know a study is flawed without seriously examining it; and I didn't say "in the most positive way" at all.
By using these exaggerations, you damage any serious discussion - you give people nothing to respond to except your emotional state.
What I said was: the point is to build knowledge, and so the way to examine research is to find the valuable knowledge, which includes evaluating the accuracy, etc., of that knowledge. There's no other point to it. We're not awarding tenure here, so there's no point in keeping some overall score. We just want to learn what we can.
I did not say this study is flawed, or that every study is flawed. And I made no exaggerations, nor did I say that you personally look at things in the most positive way.
Reading comprehension is important, and especially important in a discussion like this.
I do, however, really mean that some studies are not worth looking at in more detail at all: if the methodology is flawed, the results are meaningless. At most, the premise of such a hypothetical study (not saying this one necessarily!) could be used as an idea for further research, but not to build knowledge on or derive knowledge from the results.
Are there some examples of "non-flawed" research that is getting ignored? Because (as a non-academic) I feel like I'm seeing the same HN attitude that OP describes. No study is good enough for HN; there are always nitpickers that come out of the woodwork. For every science article about some study or finding, the top comment is almost without exception a variation on "This study is flawed because...". And the standard is so high that a single flaw is grounds for dismissing the whole study.
My guess is if you raise examples of "good science" the HN peanut gallery will jump in to point out the flaws in that science, too.
Yes, because most studies that end up on HN are there because they were reported on somewhere as news.
This usually happens in one of these cases:
1. when a paper is extremely good and its results are groundbreaking, or
2. when a study itself claims it has groundbreaking results, or
3. when it's a regular study that has gotten some great marketing/promotion, e.g. by its university.
Case 1 is extremely rare, and even when everyone believed the results and they were peer-reviewed by a reputable journal like Science, some of them turned out to be academic fraud that was later retracted.
Most studies that pop up on HN are of types 2. and 3. That's just because otherwise they would not get news attention.
But most studies in general are in category 4: the ones an academic or professional would read going about their daily business/research. These range from terrible to OK to really great, but 99% of them never make the news.
As a (former) academic, I've read lots of papers, and just like in real life, it's usually the people (papers) who scream the loudest that get the attention. There are some gems too, of course, and it's right not to ignore anything.
But in my personal experience, over time I've been very right to be very sceptical once a result turns up in the news, given the three ways it can get there.
This is amplified even more with papers that base their results/outcome purely on statistics, as most experimental studies do. These derive their results from the statistics (sample size, experiment design, etc.), so their statistical power and the probability of their result being correct (what the authors claim) are directly coupled.
Translation suffered a similar fate a while ago already. Some of the results are completely unusable AI gibberish now, sometimes even with invented words. That's for English -> German.
What's baffling is that translation is the core competency LLMs were designed for.
This happened a few months ago. They changed the backend completely. You can also notice it’s much slower (say from 5 ms to 200 ms). But I don’t think it’s worse.
The translation built into Chrome regularly misinterprets Japanese as Chinese (despite this being objectively, trivially detectable for any reasonably long string based just on the codepoints) and offers no way to correct it.
It is. There are cases where simple words or common expressions in very mainstream languages (French, German, English) just aren't translated at all or translated ridiculously badly.
This time around the changes might actually take effect quickly enough for people to feel them during the current administration.
Compared to the previous playbook of making changes and then blaming the effects on the next administration, while of course taking credit for everything good kicking in from the previous one.
Even Google ratings are sometimes gamed nowadays. That wasn't always the case; they used to be reliable. Tripadvisor ratings, on the other hand, were always garbage.
I recently had some really bad experiences with fast food places in my corner of the world, including one at a train station.
They all had 4.9 stars, yet lots of 1-star reviews matching my experience. But also tons and tons of eerily similar 5-star reviews, each with a generic photo of the counter (no faces, no food), a random name, and a glowing account of who "served" them. Which is impossible at those places.