
> Even Donald Trump now admits that stalling NATO expansion and not treating Russian security concerns with utter contempt could have prevented this.

Even a person who panders to Putin repeats bullshit Russian propaganda? How surprising. The NATO expansion excuse is just an ignorant talking point. Russian imperialism is the very reason why every neighbour of Russia (apart from the ones that are its puppet states) wants to be in NATO, not the other way around.


Safety features only work if you do not ignore them, and it turns out that semi-authoritarian ruling parties can do exactly that.


>Their results get published. Well, nobody will be able to reproduce their work (unless other people also publish fraudulent work from there), and fellow researchers will raise questions, like, a lot of them.

Sadly you seem to underestimate how widespread fraud is in academia and overestimate how big the punishment is. In the worst case, when someone finds you guilty of fraud, you will get a slap on the wrist. In the usual case absolutely nothing will happen and you will be free to keep publishing fraud.


It depends; independent organizations that track this stuff are able to call out unethical research and make sure there is more than a slap on the wrist. I also suspect that things may get better as the NIH has forced all research to be in electronic lab notebooks and published in open access journals. https://x.com/RetractionWatch


> I also suspect that things may get better as the NIH has forced all research to be in electronic lab notebooks and published in open access journals.

Alternatively, now that the NIH has been turned into a tool for enforcing ideological conformity on research instead of focusing on quality, things will get much worse.



> Sadly you seem to underestimate how widespread fraud is in academia

Anyway, I think "wishful thinking" is way more rampant and problematic than fraud, i.e. work done in a way that does not fully explore its weaknesses.


Isn't that just bad science though?

People shouldn't be trying to publish before they know how to properly define a study and analyze the results. Publications also shouldn't be willing to publish work that does a poor job at following the fundamentals of the scientific method.

Wishful thinking and assuming good intent isn't a bad idea here, but that leaves us with a scientific (or academic) industry that is completely inept at doing what it is meant to do - science.


Yes, but narrower and pushed by publishing pressure, I think.


I don’t actually believe that this is true if “academia” is defined as the set of reputable researchers from R1 schools and similar. If you define Academia as “anyone anywhere in the world who submits research papers” then yes, it has vast amounts of fraud in the same way that most email is spam.

Within the reputable set, as someone convinced that fraud is out of control, have you ever tried to calculate the fraud rate as a percentage, with a numerator and a denominator (either number of papers published or number of reputable researchers)? I would be very interested, and stunned, if it was over 0.1% or even 0.01%.


There is lots of evidence that p-hacking is widespread (some estimates put the share of p-hacked papers at up to 20%). The problem also exists at top institutions; in fact, in some fields it appears to be WORSE at higher-ranking unis - https://mitsloan.mit.edu/sites/default/files/inline-files/P-...
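
To make concrete what p-hacking means, here's a toy simulation (my own illustration, not taken from the linked paper; all numbers are assumptions): with no real effect at all, testing many outcomes per "study" and reporting whichever clears p < 0.05 produces "findings" far more often than the nominal 5%.

  import random
  from math import sqrt
  from statistics import mean, stdev

  # Toy p-hacking sketch: 20 outcomes per "study", both groups drawn from the
  # SAME distribution, and a finding is reported if any single outcome clears
  # roughly p < 0.05 (approximated here as |t| > 2).
  random.seed(0)

  def fake_study(n=30, outcomes=20):
      for _ in range(outcomes):
          a = [random.gauss(0, 1) for _ in range(n)]  # control group
          b = [random.gauss(0, 1) for _ in range(n)]  # "treatment", same distribution
          se = sqrt(stdev(a) ** 2 / n + stdev(b) ** 2 / n)
          if abs(mean(a) - mean(b)) / se > 2:         # ~ p < 0.05
              return True                             # "significant" result found
      return False

  studies = 1000
  hits = sum(fake_study() for _ in range(studies))
  print(f"{hits / studies:.0%} of null studies report a 'significant' effect")
  # prints roughly 60-65%, versus the nominal 5% for a single pre-registered test

The point of the sketch is only that selective reporting alone inflates the "discovery" rate; it says nothing about intent.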


Where is that evidence? The paper you cite suggests that p-hacking is done in experimental accounting studies but not archival ones.

Generally speaking, evidence suggests that fraud rates are low (lower than in most other human endeavours). This study cites 2% [1]. This is similar to the numbers that Elisabeth Bik reports. For comparison, self-reported doping rates were between 6 and 9% here [2].

[1] https://pmc.ncbi.nlm.nih.gov/articles/PMC5723807/ [2] https://pmc.ncbi.nlm.nih.gov/articles/PMC11102888/


The 2% figure isn't a study of the fraud rate, it's just a survey asking academics if they've committed fraud themselves. Ask them to estimate how many other academics commit fraud and they say more like 10%-15%.


That 15% figure is actually the share who know someone who has committed academic misconduct, not fraud (there is an overlap, but it's not the same), and it is across all levels (i.e. from PI to PhD student). So it will very likely overestimate fraud, as we would be double counting (i.e. multiple reporters will know the same person). Importantly, the paper also says that when people reported the misconduct, it had consequences in the majority of cases.

And again, for comparison, >30% of elite athletes say that they know someone who doped.


So which figure is more accurate in your opinion?


See my other reply to Matthew. It's very dependent on how you define fraud, which field you look at, which country you look at, and a few other things.

Depending on what you choose for those variables it can range from a few percent up to 100%.


I agree and am disappointed to see you in gray text. I'm old enough to have seen too many pendulum swings from new truth to thought-terminating cliche, and am increasingly frustrated by a game of telephone, over years, leading to it being common wisdom that research fraud happens all the time and it's shrugged off.

There's some real irony in that, as we wouldn't have gotten to this point without a ton of self-policing over the years, where fraud was exposed with great consequence.


There's an article that explores the metrics here:

https://fantasticanachronism.com/2020/08/11/how-many-undetec...

> 0.04% of papers are retracted. At least 1.9% of papers have duplicate images "suggestive of deliberate manipulation". About 2.5% of scientists admit to fraud, and they estimate that 10% of other scientists have committed fraud. 27% of postdocs said they were willing to select or omit data to improve their results. More than 50% of published findings in psychology are false. The ORI, which makes about 13 misconduct findings per year, gives a conservative estimate of over 2000 misconduct incidents per year.

Although publishing untrue claims isn't the same thing as fraud, editors of well-known journals like The Lancet or the New England Journal of Medicine have estimated that maybe half or more of the claims they publish are wrong. Statistical consistency detectors run over psych papers find that ~50% fail such checks (e.g. that computed means are possible given the input data). The authors don't care: when asked to share their data so the causes of the check failures can be explored, they just refuse or ignore the request, even if they signed a document saying they'd share.
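
For a sense of what those consistency checks do, here is a minimal GRIM-style sketch (my illustration, not the actual detectors): with n integer-valued responses, a reported mean must equal some integer divided by n, so many reported decimals are simply impossible.

  # GRIM-style check (sketch; assumes integer-valued items, e.g. Likert scores)
  def grim_consistent(reported_mean: float, n: int, decimals: int = 2) -> bool:
      """Could a mean reported to `decimals` places arise from n integer data points?"""
      total = round(reported_mean * n)      # nearest achievable integer sum
      achievable = total / n                # mean that sum would actually produce
      return round(achievable, decimals) == round(reported_mean, decimals)

  print(grim_consistent(3.45, 23))  # False: no integer sum / 23 rounds to 3.45
  print(grim_consistent(3.48, 25))  # True:  87 / 25 = 3.48

A check like this is trivially cheap, which is what makes running it over whole literatures feasible.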

You don't have these sorts of problems in cryptography but a lot of fields are rife with it, especially if you use a definition of fraud that includes pseudoscientific practices. The article goes into some of the issues and arguments with how to define and measure it.


0.04% is an extremely small number and (it needs to be said) also includes papers retracted due to errors and other good-faith corrections. Remember that we want people to retract flawed papers! Treating it as evidence of fraud is not only a mischaracterization of the result but also a choice that is bad for a society that wants quality scientific results.

The other two metrics seem pretty weak. 1.9% of papers in a vast database containing 40 journals show signs of duplication. But then dig into the details: apparently a huge fraction of those are in one journal and in two specific years. Look at Figure 1 and it just screams “something very weird is going on here, let’s look closely at this methodology before we accept the top line results.”

The final result is a meta-survey based on surveys done across scientists all over the world, including surveys that are written in other languages, presumably based on scientists also publishing in smaller local journals. Presumably this covers a vast range of scientists with different reputations. As I said before, if you cast a wide net that includes everyone doing science in the entire world, I bet you’ll find tons of fraud. This study just seems to do that.


The point about 0.04% is not that it's low, it's that it should be much higher. Getting even obviously fraudulent papers retracted is difficult and the image duplications are being found by unpaid volunteers, not via some comprehensive process so the numbers are lower bounds, not upper. You can find academic fraud in bulk with a tool as simple as grep and yet papers found that way are typically not retracted.

For example, select the "tortured phrases" section of this database. It's literally nothing fancier than a big regex:

https://dbrech.irit.fr/pls/apex/f?p=9999:24::::::

Randomly chosen paper: https://link.springer.com/article/10.1007/s11042-025-20660-1

"A novel approach on heart disease prediction using optimized hybrid deep learning approach", published in Multimedia Tools and Applications.

This paper has been run through a thesaurus spinner yielding garbage text like "To advance the expectation exactness of the anticipated heart malady location show" (heart disease -> heart malady). It also has nothing to do with the journal it's published in.
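
A toy version of that kind of detector (my sketch, not the actual screener behind the database above) is little more than a lookup table of known thesaurus-spun stand-ins for standard terms:

  import re

  # A few commonly cited "tortured phrase" substitutions; the real screener
  # uses a much longer curated list.
  TORTURED = {
      r"\bheart malady\b": "heart disease",
      r"\bcounterfeit consciousness\b": "artificial intelligence",
      r"\bprofound learning\b": "deep learning",
      r"\bhuge information\b": "big data",
      r"\bflag to commotion\b": "signal to noise",
  }

  def flag_tortured_phrases(text: str):
      """Return (matched phrase, likely original term) pairs found in text."""
      hits = []
      for pattern, original in TORTURED.items():
          for m in re.finditer(pattern, text, flags=re.IGNORECASE):
              hits.append((m.group(0), original))
      return hits

  sample = ("To advance the expectation exactness of the anticipated "
            "heart malady location show")
  print(flag_tortured_phrases(sample))  # [('heart malady', 'heart disease')]

The sample string is the garbled sentence quoted above; real detection is just this, scaled up over a long phrase list.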

Now you might object that the paper in question comes from India and not an R1 American university, which is how you're defining reputable. The journal itself is tied to one, though. It's edited by an academic in the Dept. of Computer Science and Engineering, Florida Atlantic University, which is an R1. It also has many dozens of people with the title of editor at other presumably reputable western universities like Brunel in the UK, the University of Salerno, etc:

https://link.springer.com/journal/11042/editorial-board

Clearly, none of the so-called editors of the journal can be reading what's submitted to it. Zombie journals run by well-known publishers like Springer Nature are common. They auto-publish blatant spam yet always have a gazillion editors at well-known universities. This stuff is so basic that both generation and detection predate LLMs entirely, but it doesn't get fixed.

Then you get into all the papers that aren't trivially fake but fake in advanced undetectable ways, or which are merely using questionable research practices... the true rate of retraction if standards were at the level laymen imagine would be orders of magnitude higher.


> found by unpaid volunteers, not via some comprehensive process

"Unpaid volunteers" describes the majority of the academic publication process so I'm not sure what you're point is. It's also a pretty reasonable approach - readers should report issues. This is exactly how moderation works the web over.

Mind that I'm not arguing in favor of the status quo. Merely pointing out that this isn't some smoking gun.

> you might object that the paper in question comes from India and not an R1 American university

Yes, it does rather seem that you're trying to argue one thing (ie the mainstream scientific establishment of the western world is full of fraud) while selecting evidence from a rather different bucket (non-R1 institutions, journals that aren't mainstream, papers that aren't widely cited and were probably never read by anyone).

> The journal itself does, though. It's edited by an academic in ...

That isn't how anyone I've ever worked with assessed journal reputability. At a glance that journal doesn't look anywhere near high end to me.

Remember that, just as with books, anyone can publish any scientific writeup that they'd like. By raw numbers, most published works of fiction aren't very high quality.[0] That doesn't say anything about the skilled fiction authors or the industry as a whole though.

> but it doesn't get fixed.

Is there a problem to begin with? People are publishing things. Are you seriously suggesting that we attempt to regulate what people are permitted to publish or who academics are permitted to associate with on the basis of some magical objective quality metric that doesn't currently exist?

If you go searching for trash you will find trash. Things like industry and walk of life have little bearing on it. Trash is universal.

You are lumping together a bunch of different things that no professional would ever consider to belong to the same category. If you want to critique mainstream scientific research then you need to present an analysis of sources that are widely accepted as being mainstream.

[0] https://www.goodreads.com/book/show/18628458-taken-by-the-t-...


The inconsistent standards seen in this type of discussion damage sympathy amongst the public, and cause people who could be allies in the future to just give up. Every year more articles on scientific fraud appear in all kinds of places, from newspapers to HN to blogs, yet the reaction is always https://prod-printler-front-as.azurewebsites.net/media/photo...

Academics draw a salary to do their job, but when they go AWOL on tasks critical to their profession suddenly they're all unpaid volunteers. This Is Fine.

Journals don't retract fraudulent articles without a fight, yet the low retraction rate is evidence that This Is Fine.

The publishing process is a source of credibility so rigorous it places academic views well above those of the common man, but when it publishes spam on auto-pilot suddenly journals are just some kind of abandoned subreddit and This Is Fine "but I'm not arguing in favor of it".

And the darned circular logic. Fraud is common but This Is Fine because reputable sources don't do it, where the definition of reputable is totally ad-hoc beyond not engaging in fraud. This thread is an exemplar: today reputable means American R1 universities because they don't do bad stuff like that, except when their employees sign off on it but that's totally different. The editor of The Lancet has said probably half of what his journal publishes is wrong [1] but This Is Fine until there's "an analysis of sources that are widely accepted as being mainstream".

Reputability is meaningless. Many of the supposedly top universities have hosted star researchers, entire labs [2] and even presidents who were caught doing long cons of various kinds. This Is Not Fine.

[1] https://www.thelancet.com/pdfs/journals/lancet/PIIS0140-6736...

[2] https://arstechnica.com/science/2024/01/top-harvard-cancer-r...


> Academics draw a salary to do their job, but when they go AWOL on tasks critical to their profession suddenly they're all unpaid volunteers. This Is Fine.

Academics are paid by grants to work on concrete research and by their institution to work on the tasks the institution pays for. These institutions do not pay for general "tasks critical to their profession".

> This Is Fine.

That is about as fine as me not working on an open source project on my employer's time.


> The inconsistent standards seen in this type of discussion

I assume you must be referring to the standards of the person I was replying to?

> damages sympathy amongst the public

Indeed. The sort of misinformation seen in this thread, presented in an authoritative tone, damages the public perception of the mainstream scientific establishment.

> Every year there are more articles on scientific fraud appear in all kinds of places

If those articles reflect the discussion in this thread so far then I'd suggest that they amount to little more than libel.

Even if those articles have substance, you are at a minimum implying a false equivalence - that the things being discussed in this thread (and the various examples provided) are the same as those articles. I already explained in the comment you directly replied to why the discussion in this thread is not an accurate description of the reality.

> Academics draw a salary to do their job, but when they go AWOL on tasks critical to their profession

Do they? It was already acknowledged that they do unpaid labor in this regard. If society expects those tasks to be performed to a higher standard then perhaps resources need to be allocated for it. How is it reasonable to expect an employee to do something they aren't paid to do? If management's policies leave the business depending on an underfunded task then that is entirely management's fault, no?

> The publishing process is a source of credibility ... but when it publishes ...

As I already pointed out previously, this is conflating distinct things. Credible journals are credible. Ones that aren't aren't. Pointing to trash and saying "that isn't credible" isn't useful. Nobody ever suggested it was.

If you lack such basic understanding of the field then perhaps you shouldn't be commenting on it in such an authoritative tone?

> This Is Fine "but I'm not arguing in favor of it".

Precisely how do you propose that we regulate academic's freedom of speech (and the related freedom of the press) without violating their fundamental human rights? Unless you have a workable proposal in this regard your complaints are meaningless.

I also eagerly await this objective and un-gameable metric of quality that your position appears to imply.

> Fraud is common but This Is Fine because reputable sources don't do it, where the definition of reputable is totally ad-hoc beyond not engaging in fraud.

Violent crime is common [on planet earth] but this isn't relevant to our discussion because the place we live [our city/state/country] doesn't have this issue, where the definition for "place we live" is rather arbitrarily defined by some squiggles that were drawn on a map.

Do you see the issue with what you wrote now?

If a journal publishes low quality papers that makes that journal a low quality outlet, right? Conversely, if the vast majority of the materials it publishes are high quality then it will be recognized as a high quality outlet. As with any other good it is on the consumer to determine quality for themselves.

If you object to the above then please be sure that you have a workable proposal for how to do things differently that doesn't infringe on basic human rights (but I repeat myself).

> today reputable means American R1 universities because they don't do bad stuff like that

It's a quick way of binning things. A form of profiling. By the metrics it holds up - a large volume of high quality work and few examples (relative to the total) of bad things happening.

> except when their employees sign off on it but that's totally different

The provided example upthread was a journal editor, not an author. No one (at least that I'm aware of) is assessing a paper based on the editors attached to the journal that it appeared in. I'm really not sure what your point is here other than to illustrate that you haven't the faintest idea how this stuff actually works in practice.

> The editor of The Lancet has said

Did you actually read the article you refer to here? "Wrong conclusions" is not "fraud" or even "misconduct". Many valid criticisms of the current system are laid out in that article. None of them support the claims made by you and others in this comments section.

> star researchers, entire labs [2] and even presidents who were caught doing long cons of various kinds. This Is Not Fine.

We finally agree on something! It is not fine. Which is why, naturally, those things generally had consequences once discovered.

An argument can certainly be made that those things should have been discovered earlier. That it should have been more difficult to do them. That the perverse incentives that led to many of them are a serious systemic issue.

In fact those exact arguments are the ones being made in the essay you linked. You will also find that a huge portion (likely the majority) of the scientific establishment in the west agrees with them. But agreeing that there are systemic issues is not the same as having a workable solution let alone having the resources and authority to implement it.


Thanks for the link to the randomly-chosen paper. It really brightened my day to move my eyes over the craziness of this text. Who needs "The Onion" when Springer is providing this sort of comedy?


> More than 50% of published findings in psychology are false

Wrong is not the same as fraudulent. 100% of Physics papers before Quantum Mechanics are false[1]. But not on purpose.

[1] hyperbole, but you know what I mean.


It's hyperbole to the level that obfuscates, unfortunately. 50% of psych findings being wrong doesn't mean "right all the time except in exotic edge cases" like pre-quantum physics, it means they have no value at all and can't be salvaged. And very often the cause turns out to be fraud, which is why there is such a high rate of refusing to share the raw data from experiments - even when they signed agreements saying they'd do so on demand.


Not trying to be hostile but as a source on metrics, that one is grossly misleading in several ways. There's lots of problems with scientific publication but gish gallop is not the way to have an honest conversation about them.


Well, it depends. France adopted widespread use of diffusion-weighted MRI as the first-line modality for stroke because it's much more sensitive than CECT, but yeah, most institutions do a CT scan as the first line for several reasons, including the one you've provided.


Can you provide a citation for the France assertion? I think it's wildly unlikely a protocol for acute stroke would favor MRI over CT, but could be wrong. It would take 20 minutes to transfer a patient to MRI in a lot of stroke centers in the USA, as opposed to CTs that are generally across the hall, where imaging should be read within 30 minutes of door time, I believe.

Also, I'm not sure what your increased "sensitivity" would get you. Acute stroke is a clinical diagnosis; the imaging determines the type of stroke and the treatment.


> Can you provide a citation for the France assertion? I think it’s wildly unlikely a protocol for acute stroke would favor mri over ct but could be wrong.

https://www.sciencedirect.com/science/article/abs/pii/S00353... (there's free pdf available when you search for it): "The first-line brain imaging at WH was MRI in 69 SU (56.1%), CT in 6 (4.9%), and either MRI or CT depending on delay and severity in 48 (39.0%). The first-line brain imaging at NWH was MRI in 54 SU (43.9%), CT in 16 (13.0%) and either MRI or CT in 53 (43.1%). In practice, the proportion of patients who really underwent first-line MRI was higher than 90% in 46 SU (37.4%) at WH and in 36 SU (29.3%) at NWH"

> Also I’m not sure what you increased “sensitivity” would get you. Acute stroke is a clinical diagnosis, the imaging determines the type of stroke and treatment.

In clean and easy cases, sure, but not all cases are like that, and MRI is very useful then; by sensitivity I mean sensitivity in the test-metric sense - https://pmc.ncbi.nlm.nih.gov/articles/PMC1859855/


Reading that, it couldn't be more clear: CT is the primary modality for stroke, worldwide.

  > by sensitivity I mean sensitivity
You're a little confused. You're using "sensitivity" to mean sensitivity of detecting ischemic stroke. MRI is the obvious follow-up. When available, worldwide. But it doesn't guide emergency treatment.


> Reading that couldn't be more clear, CT is the primary modality for stroke, worldwide.

Well yes, CT is the primary modality for stroke worldwide, and MRI is the leading modality in France, just like I said before.

> You're a little confused. You're using "sensitivity" to mean sensitivity of detecting ischemic stroke. MRI is the obvious follow-up. When available, worldwide. But it doesn't guide emergency treatment.

I would appreciate it if you stopped using a condescending tone. It does not guide emergency treatment decisions because in most cases it is not performed in the emergency setting. When it is performed in this setting, it does guide treatment, and MRI is included in stroke guidelines for cases where the clinical diagnosis is not clear (and these cases are not that rare). Why is it not widely adopted? Mostly logistic reasons (which can be overcome - like they were in France) and because TOF-MRA is generally worse than CTA. It has other positives apart from higher sensitivity though, e.g. you can use FLAIR/DWI mismatch in wake-up strokes, which are VERY common (obviously perfusion serves generally the same purpose).


Letting Russia keep the territories they control now is a great way to ensure that in a few years, when they have rebuilt their military potential, they will attack again.


It also signals wars of conquest are back. That’s a message that will also be heard by revanchists in Beijing and New Delhi and expansionists in Tel Aviv, Riyadh and D.C.


It's neither obvious nor true; generalist models outperform specialized ones all the time (so frequently that the phenomenon even has its own name - the bitter lesson).


So you think that one general model can outperform thousands of specific ones in their specific areas?


I wonder what effect DOGE will have on attracting talent to government jobs. It was already challenging to recruit qualified individuals to government positions; with these changes, I believe the situation will worsen significantly.


> I wonder what effect DOGE will have on attracting talent to government jobs. It was already challenging to recruit qualified individuals to government positions; with these changes, I believe the situation will worsen significantly.

The idea is to replace these government positions with private positions.

Remember feudalism? Some guy with all of the capital - which, in those days, meant arable land - basically got to dictate how things worked, because he had the stuff you, as a peasant, needed to live. He was called the king. If you didn't do what he wanted, your life got cut short. Government was just one guy's ideas enforced by his brute squad. In many ways, it was a sole proprietorship. Eventually, nobility was required to administer the land, and that nobility eventually turned into a structure that could keep the king somewhat in check, lest they carry out a coup. They became the administrative state. Today, we have legislative, judicial and executive bodies that - at least ostensibly - need to win election in order to do the same thing, thus replacing the nobility.

That's the ultimate goal here: the dismantling of the administrative state. The administrative state carries out laws made by a body - in this case, Congress - that, at least in theory, puts society's desires at the center.

A number of these laws directly impact the ability of capital holders to generate more capital. Since the people holding this capital think that the only reason humans do anything is to create more capital, they go to any lengths to keep society's "unprofitable" desires at bay.

Since the accumulation of capital can result in monopoly, you will, at some point, have someone controlling all of the capital again. This is a return to feudalism. You won't be swinging a scythe in a field in a toque and tunic, but the structure will be the same.


[flagged]


Feudalism is when everyone works for the government.


Yes, yes, we know. You won't believe us until we are actually living in the feudalist society they're working, right now, to create brazenly in public. Enjoy gloating about how crazy we are until you realize we really are not.


They won't believe anyone, ever.

The narcissist is incapable of being incorrect and will always find a scapegoat to explain the consequences of their poor decision-making.

The world is now run by these people, and because most people are more ape than man, they will emulate and elevate these people until some other stronger ape comes around to convince them to emulate and elevate them instead.


This is a really unfortunate perspective. The people that you are casting as "more ape than man" believe you to be doing the exact same thing you accuse them of; emulating and elevating people they think are also ruining the world.

I genuinely don't understand how you can comfortably make such sharp insults towards people who don't agree with you. I understand that it's easy to get caught up in echo chamber - which any website that uses upvote/downvote based ordering and hiding schemes inherently encourage - but the people that disagree with you politically aren't apes. They're not narcissists. You are not special or above others.


I think this article explains where I'm coming from better than I ever could, stumbled on it today quite presciently:

https://open.substack.com/pub/claireberlinski/p/impeach-him

Ironically, I'd say what characterizes a "man" vs an "ape" is their capacity for self reflection... which is your moniker.

A narcissist, as described in the article, has no capacity for self reflection because it requires them to enter a reality outside of their ego from which to observe themselves objectively.


I can comfortably make these observations (not insults) because I'm describing what I have observed over the last many decades, not reacting to some ephemeral news item.

What amazes me is that this is a conversation about Elon Musk and Donald Trump and their sycophants... people who are even more caught up in echo chambers and more insulting to our fellow humans, all while being far more insulting in their online speech.

And what is rich is you trying to cast me as the one who thinks they're special because I'm insulting the people who blindly follow Musk and Trump in their naive belief that they're helping to "save humanity" or "America" or whatever.

I live in the real world, not echo-chambers, this is the place I post most frequently and it's still like once every other week/month (and declining). My comment was directed at a group of people in general, while yours makes all kinds (very incorrect) assumptions about me personally.

You actually sound very much like the person who is "too online" and "in an echo chamber" since you seem to respond to the least charitable interpretation of what is said in order to score internet points.

It's certainly a lot easier to respond to my comment as if I was dehumanizing entire swaths of the public based on their voting choices or political beliefs... much more difficult to consider that I'm speaking about a very narrow segment of the population defined by their specific belief that Musk and Trump are special and can do nothing wrong and will not countenance any evidence to the contrary.


There is a podcast from New York Times with interviews from government workers [1].

From all accounts the firing was completely indiscriminate, and so many people who you would think would never be fired were, e.g. US Army Corps of Engineers staff working on flood prevention.

And so I can't imagine anyone wanting to join the government when there is a strong chance you will be fired in the medium term with no notice and no reason. All after you've physically moved yourself and your family to Washington because remote work is no longer available.

[1] https://www.nytimes.com/2025/02/19/podcasts/the-daily/trump-...


The intended one I imagine. They said they want to create trauma for government workers.


"We want the bureaucrats to be traumatically affected. When they wake up in the morning, we want them to not want to go to work"

https://www.nytimes.com/2025/01/11/books/review/administrati...


> WFH is largely like a chicken in a cage, just pooping out eggs without even being able to rotate.

I think that chicken in a cage is much better metaphor for staying at the office than for WFH.


It's not about details of each environment, it's about how the parent comment focuses on maximizing productivity (number of eggs), while discarding everything else.


What kind of undergraduate trains 70 B models?


> It sounds like you've never used a welding torch, installed a kitchen sink, or done similar blue collar work. These jobs will never be replaced by robots, or by a non-trained person wearing a headset.

Why do you think they will never be replaced by robots?


Not the person who said it and I wouldn't say "never"...

But I will say that until we have a robot that can fold laundry, we won't have a robot that can go into your crawlspace and replace a chunk of crusty old galvanized pipe with copper or pex.

Robots have excelled, so far, in controlled environments. Dealing with the chaos of plumbing in a building that has been "improved" by different people over the course of a century is the opposite of that.


We do have robots that can fold laundry (in a regular laundry room, and supposedly trained with a generalist policy that can learn other tasks).

https://www.youtube.com/watch?v=YyXCMhnb_lU


One thing is, as a sibling post commented, that the complexity of such jobs is staggering from a robotics point of view.

The other thing is that the salary of a plumber or welder is in the range of $20/hr to $40/hr. Can you make a general-purpose, agile robot function at a total cost of ownership that's substantially lower than this?
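
Here's the kind of back-of-envelope arithmetic that question implies; every number below is an assumption picked for illustration, not data from the thread.

  # Rough TCO comparison (all figures are assumptions, purely illustrative)
  robot_price = 150_000        # assumed purchase price, USD
  working_hours = 5 * 2_000    # assumed 5-year life at 2,000 hours/year
  upkeep_rate = 0.10           # assumed 10% of price per year in maintenance

  amortized = robot_price / working_hours
  upkeep = (robot_price * upkeep_rate * 5) / working_hours
  print(f"robot: ~${amortized + upkeep:.0f}/hr vs. human: $20-40/hr")
  # ~$22/hr under these assumptions, i.e. not obviously cheaper unless the
  # hardware gets much cheaper or runs far more hours than a human would

Whether the robot can even do the work is a separate question from whether the economics close.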

