Eh. I am as surprised as anyone to make this argument, but this is good. Individuals should be able to make decisions, including decisions that could be considered suboptimal. What it does change is:
- flood of 3rd party apps offering medical/legal advice
'An earlier version of this story suggested OpenAI had ended medical and legal advice. However, the company said "the model behaviour has also not changed."'
It is even worse in a sense: it is not either, it is not neither, it is not even both, as variations of Brenda exist throughout the multiverse in all shapes and forms, including one that can troubleshoot her own formulas with ease and accuracy.
But you are absolutely right about one thing: Brenda can be asked and, depending on her experience, she might give you a good idea of what might have happened. LLMs still don't seem to have that 'feature'.
I don't like to discourage questions like that, because discouraging them kills curiosity. We know what the likely answer is, but reasonable assumptions are just that: assumptions. Why not let the mind wander over the exciting ( if somewhat hard to consider as possible ) lines of thinking.
Planetary scientist academics are angry because he's getting all of the attention, and it isn't even in the field he was previously best known for. Even smart humans are still humans.
Yeah, it doesn't instill a lot of confidence in the quality of Ivy League credentials when guys like this are running around spouting nonsense. I'm surprised there's not a clause in his employee handbook that says not to be an obvious troll. Kooky science is one thing, but this is just the type of person the men with butterfly nets and white coats should be interested in.
If, and I do mean if, government is a solution here, its only role is to ensure that app use cannot be required for service ( and we can argue over what services can stay app-only ).
I wish you were wrong, but I don't disagree with the assessment. I am on grapheneos ( edit: on pixel ) now, but even that should only be a pitstop, since google has decided to show its hand in such a nasty ( if not that unexpected ) manner.
Everyone is quick to ascribe malice without understanding why changes are made. It's never done for the reasons you think. Without a formal relationship between Graphene and Pixel, things were running on luck. This is why the next target hardware is starting with a business relationship. Even desktop Linux is most successful when there is a business relationship between a vendor and the distro maker. Everything else is ripe for random breakage in support.
It is not quick. Whatever goodwill google had, it is gone based on their actions alone. And this is beside the point, because I am not judging them on what they intended to do, but on what their actions were, including after intense community backlash. In other words, their intent is irrelevant given the circumstances. Their actions, however, even without intended malice, will cause tremendous damage all around.
Eh, yes. In theory. In practice, and this is what I have experienced personally, bosses seem to think that since you now have interns, you should be able to do 5x the output... guess what that means: no verification, or at best a rubber stamp.
It is bad in a very specific sense, but I did not see any other comments express the bad parts; they focus merely on the accuracy part ( which is an issue, but not the issue ):
- this opens up a ridiculous flood of data, which would otherwise be semi-private, to the one company providing this service
- this works well on small data sets, but will choke on ones it needs to divvy up into chunks, inviting interesting ( and as yet unknown ) errors
There is a real benefit to being able to 'talk to data', but anyone who has seen corporate culture up close and personal knows exactly where it will end.
edit: and I am saying all this as a person who actually likes llms.
I think, given some of the signs on the horizon, there is a level of MAD-type bluffing going around, but some of the actions by various power centers suggest it is either close, people think it's close, or it is already there.
I was going to make a mildly snide remark about how once it can consistently make better decisions than the average person, it automatically qualifies, but the paper itself is surprisingly thoughtful in describing both where we are and where it would need to be.