Talon is not open source as far as I know. It's freeware with Patreon early access and support. The community plugins cover a wide range of applications and are easy to modify. I also found their Slack good for discussing accessibility options like gaze tracking. It looks like development has slowed significantly but the developer recently rewrote the core in Rust.
Have you considered voice dictation and control? There are good commercial solutions and even some free ones (like https://talonvoice.com/ - edit: not open source but has lots of community plugins). I used it for a while when I was recovering from hand problems. I was surprised how easy it was to learn. It helped a lot for tasks like navigating windows, writing emails etc. There are even voice coding applications now (https://www.cursorless.org/).
Uber used a VC treasure chest to run multi-billion-dollar quarterly losses for years in order to drive rivals out of business (while operating illegally in many markets).
Yes. Uber and Lyft have driven many taxi companies out of business; SF Yellow Cab and LA Green Cab are the two largest examples. Uber and Lyft ran at losses for years, losing billions of dollars to keep prices low and drive competitors out of business.
"San Francisco's Yellow Cab collective is not stopping service. You'll still be able to spot, and maybe hail, a Yellow Cab in San Francisco. It says business is still strong and that it averages 15,000 fares a day in the city. The chapter 11 filing will allow it to restructure its debts, it said in a statement. "
From what I can tell the company is still around. They have relatively recent Google reviews, and there are Reddit posts complaining about them, all from the past six months. I think it was acquired by a competitor (CityWide) through the bankruptcy, which then rebranded as Yellow Cab.
The point is SF still has traditional cabs and they compete with Uber.
We once had a bug in our FW that was caught quite late. We had to unbox thousands of products, connect each one to our phone to download the new FW, then re-package it. Not fun at all.
LLMs are being used everywhere, from research to helping draft laws. If there are ways in which they stereotype or ignore groups, like disabled people, that's going to have real-world consequences for people.
> I'd like to buy a BYD Electric Car, but apparently the US Government (Biden and Trump Admins) think that is a bad thing too.
It's a bad thing if you believe China is subsidising BYD and others (including with the use of slave labour) in order to destroy non-Chinese automotive makers by flooding external markets with cheap cars. Add to that the fact that modern EVs are rolling surveillance platforms...
I don't understand why this isn't more obvious. I understand why industrialists want us to ignore working conditions in other countries (the extra money goes into their pockets), but why regular folk (which I assume most of us here are) don't care about the conditions of labor in other parts of the world baffles me. If it can happen there, it can happen here (obviously it is not going to happen here soon, but given the right conditions... and conditions seem to be changing quite fast these days...).
Is it destroying a market or is it out-competing? US makers are not at all interested in building EVs that Americans want or need; they only want to build luxury adware platforms. If another country builds an EV that Americans do want, why is it suddenly a bad thing? Is this a capitalist economy where anyone can compete, or is this a communist country where only government-blessed companies can compete?
US makers are (or at least were) pivoting to making cheaper EVs. Supply chain constraints during COVID were one of the reasons they were focusing on luxury EVs.
>why is it suddenly a bad thing
There is no "suddenly". These arguments have existed as long as nation states have. A country does not like it when another country attempts to undermine its companies and manufacturing capacity with unfair competition. Just look at all the tariffs and lawsuits between the US and the EU over state subsidies of airline manufacture over the last decades. It's not a China-only issue and never has been.
> Is this a capitalist economy where anyone can compete
Capitalist economies are not magical natural occurrences. They result from rules, in this case those agreed between countries. The WTO exists for a reason. State subsidies and a country's degree of protectionism will always play a part in economic discussions.
> This anti-Tesla sentiment seems to be purely isolated to online communities
The dramatic decline in Tesla purchases in European countries proves otherwise. Here in Europe, I've often heard Tesla discussed negatively in office canteens and doctors' waiting rooms since Musk's 'arm gestures'. One of the doctors at my local surgery had his Tesla covered in red paint recently.
Both sentiments exist in Europe at the same time. When I moved to where I live a year ago, there was one other Tesla parked in the area. Now there are five more. And yes, Musk is the (negative) talking point in waiting rooms.
You mean he extended his heart out to the crowd twice? Yes, I don't think anyone debates that. Saying he made a Nazi salute is the misinformation; Elon isn't a fascist any more than Biden was a communist or Michelle Obama was a man. This sort of low-IQ name-calling is exhausting.
What's amusing about your comment is that Musk is on video, in an older press event recording, doing a literal "my heart goes out to you" gesture: he does not give a Nazi salute but instead makes a heart shape with both hands and extends it outward from his chest, showing that he knows the accepted version of that gesture.
And you may be misinformed about the context as well. He did the Nazi salute because he and Vivek were being raked over the coals for their H-1B stance by the Bannon wing of MAGA, and his bruised ego was clearly throwing a bone to the base of that wing to get back into their good graces.
Please don't engage in flamewars like this on HN. It's not the kind of topic that can be resolved with this kind of back-and-forth, and it's against the guidelines to use HN in this way, so please just avoid participating in them in future.
Please don't comment like this on Hacker News. HN is not for political/ideological battle, and whatever you're commenting about, we need you to avoid snark.
There's nothing to be gained by continuing to discuss the topic, as nobody is interested in changing their minds about it. All comments in any subthreads about it should be flagged equally for being flamebait and a generic tangent. People who want to keep discussing it have much of the rest of the internet in which to do so.
Are AI models able to detect abnormalities that even an experienced radiologist can't see? That is, something that would look normal to a human eye but that the AI correctly flags for investigation? Or are all AI detections 'obvious' to human eyes and simply a confirmation? I suspect the latter, since the model was trained on human-annotated images.
For example, let's say I'm looking at a chest x-ray. There is a pneumonia at the left lung base and I am clever enough to notice it. 'Aha', I think, congratulating myself at making the diagnosis and figuring out why the patient is short of breath.
But, in this example, I stop looking closely at the X-ray after noticing the pneumonia, so I miss a pneumothorax at the right lung apex.
I have made a mistake radiologists call 'satisfaction of search'.
My 'search' for the patient's problem was 'satisfied' by finding the pneumonia, and because I am human and therefore fundamentally flawed, I stopped looking for a second clinically relevant diagnosis.
An AI module that detects a pneumothorax is not prone to this type of error. So it sees something I did not. But it doesn't see something that I can't see. I just didn't look.
> I have made a mistake radiologists call 'satisfaction of search'.
Ah, now I have a name for it.
When I've chased a bug and fixed a problem I found that would cause the observed problem behavior, but haven't yet proven the behavior is corrected, I'm always careful to specify that "I fixed a problem, but I don't know if I fixed the problem". Seems similar: found and fixed a bug that could explain the issue, but that doesn't mean there's not another one that, independently, would also cause the same observed problem.
I've been going to RSNA for over 25 years, and in all that time, the best I've seen from any model presented to me was the "smack the radiologist on the head and say, 'you dummy, you should have seen that!'" model.
That is, the models spot pathologies that 99.9999% of rads would spot anyway if not overworked, tired, or in a hurry. But, addressing the implication of your question, the value is actually in spotting a pathology that 99.9999% of rads would never spot. In all my years developing medical imaging startups and software, I've never seen it happen.
I'm sure it's a matter of training data, but I don't know if it's a surmountable problem. How do you get enough training data for the machine to learn and reliably catch those exceptions?
One Harvard study trained an AI that could reliably determine someone's race from a chest X-ray. AIs can be trained to see things we can't.
The difficulty is likely in making a good training dataset of labeled images with pathologies radiologists couldn't see. I imagine in some cases (like cancer), you may happen to have an earlier CT scan or X-ray from the patient where the pathology is not quite yet detectable to the human eye.
I suspect that radiologists could identify race from a plain chest x-ray if they were given the patient’s race and asked to start noticing the difference. They just aren’t doing it because, if it’s important, you can just look at the patient.
There are a lot of things in medicine that aren’t in literature, but are well-known among certain practitioners. I’m an anesthesiologist and practice in an area with a large African-American population. About 10-15% (rough guess) of people of West African descent will have a ridiculously strong salivary response to certain drugs (muscarinic agonists). As in, after one dose their mouths will be full of saliva in seconds. We don’t have East Africans for comparison, so I can’t say it’s a pan-Bantu thing, but I have seen it in a Nigerian who lived here. Not in the literature, but we all know it. I had a (EDIT: non-anesthesia) colleague ask me about a hypersecretory response from such a drug. I said, oh, was he black? Yes, how did I know? Because we give those drugs all the time and have eyes. It’s very rare to see in European-descended populations.
It's possible humans could learn to do this, but I'm skeptical they could do it this well. According to the article, human experts couldn't tell race from the chest X-rays, and the researchers couldn't figure out how the AI was detecting race. They fed it corrupted images to figure out what information it was relying on. It was robust against both low-pass and high-pass filters.
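For a rough idea of the kind of corruption test described, here's a minimal sketch (not the study's actual code): the low-pass version is just a Gaussian blur, the high-pass version is the residual after blurring, and `classify` stands in for a hypothetical trained classifier.

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def low_pass(image, sigma=4.0):
        # Keep only coarse structure by blurring away fine detail.
        return gaussian_filter(image, sigma=sigma)

    def high_pass(image, sigma=4.0):
        # Keep only fine detail by subtracting the blurred image.
        return image - gaussian_filter(image, sigma=sigma)

    def robustness_check(classify, image):
        # classify: hypothetical callable mapping an image array to a prediction.
        # If the prediction survives both filtered versions, the model isn't
        # relying solely on fine texture or solely on gross anatomy.
        return {
            "original": classify(image),
            "low_pass": classify(low_pass(image)),
            "high_pass": classify(high_pass(image)),
        }

If the prediction stays stable under both filters, that at least rules out "it's only picking up fine texture" and "it's only picking up coarse shape" as explanations, which is roughly what the researchers found.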
That's the reason rads never train to determine race from a chest x-ray.
BTW, models don't need to be trained to do that either, because if it's important, it's recorded, along with a picture, in the patient's medical record.
I'd just like to gently suggest that determining someone's race from an X-Ray instead of, say, their photograph, is maybe not how we should be burning training cycles if we want to push medical imaging forward. Human radiologists had that figured out ages ago.
You're being snide, as if the Harvard/MIT researchers were idiots doing useless research who don't realize radiologists can just look at the patient's face, but that's obviously not what happened. They were trying to see if AI could introduce racial bias. They're not releasing an AI to tell radiologists the race of their patients.
According to the article, human experts could not tell race from chest X-rays, while the AI could do so reliably. Further, it could still determine race when given an X-ray image passed through a low-pass or high-pass filter, showing that it's not relying solely on fine detail or on large features.
Firstly, that doesn't tell us whether there is bias in data. That tells us whether or not there is bias in their data.
Secondly, it tells us it can train to spot things that human rads do not train to spot. It tells us nothing at all about whether or not an AI can train to spot things a human rad also trains to spot, but can't.
Human rads don't train to spot race. Why? Because they don't need to do so. Human rads do train to spot pathologies in as early a stage as possible. I've never seen an AI spot one at an earlier stage than the best human rads can. But I have seen several AIs fail to spot pathologies that even human rads at the median could spot.
That's the state of play today. And it's likely to remain that way for a long, long time. Human rads will be needed to review this work not because they are human, but because, right now, human rads are just better. At the top end, human rads are not only better, but manifestly superior.
They weren't studying bias in data. They were studying bias in AI. The data used was 850,000 chest x-rays from 6 publicly available datasets. They weren't studying whether this dataset differs from the general public or has some kind of racial bias; that's irrelevant to the study.
> it tells us it can train to spot things that human rads do not train to spot
You're kidding yourself if you think you could determine someone's race with 97%+ accuracy from a chest x-ray if only you trained at it. The study authors (who are themselves a mix of radiologists and computer scientists) claim that radiologists widely believe it to be nearly impossible to determine race from a chest x-ray. No one is ever going to try to train radiologists to distinguish race from chest x-rays, so you'll always be able to hold out hope that maybe humans could do it with enough training. But your hope is based on nothing; you don't have a shred of evidence that radiologists could ever do this.
> I've never seen an AI spot one at an earlier stage than the best human rads can.
According to the article, AIs aren't trained to do this, because we don't have datasets to train them on. You need a dataset where the disease is correctly labeled despite the best radiologists not being able to see it in the x-ray. Trained on a good enough dataset, they'd be able to see things we miss.