Were they actually broken, as in violated? I don't remember them being broken in any of the stories - I thought the whole point was that even while intact, the subtleties and interpretations of the 3 Laws could/would lead to unintended and unexpected emergent behaviors.
Oh I didn't mean 'violated', but 'no longer work as intended'. It's been a while, but I think there were cases where the robot was paralysed because of conflicting directives from the three laws.
If I remember correctly, there was a story about a robot that got stuck circling midway to an objective: it was an expensive model, so its creators had strengthened the law about protecting itself from harm, and that ended up counterbalancing a weakly worded order.
I'm not sure what the cautionary tale was intended to be, but I always read it as "don't give unclear priorities".
Yeah, the general theme was that the laws seem simple enough, but the devil is in the details. Pretty much every story is about them going wrong in some way (to give another example: what happens if a robot is so specialised and isolated that it does not recognise humans?)
It doesn't have to be high intent all the time though. Chrome itself is "free" and isn't the actual technical thing serving me ads (the individual websites / ad platforms do that regardless of which browser I'm using), but it keeps me in the Google ecosystem and indirectly supports both data gathering (better ad targeting, profitable) and those actual ad services (sometimes subtly, sometimes in heavy-handed ways like via ad blocker restrictions). Similar arguments can be made for most of the free services like Calendar, Photos, Drive, etc - they drive some subscriptions (just like chatbots), but they're mostly supporting the ads indirectly.
Many of my Google searches aren't high intent, or any purchase intent at all ("how to spell ___" an embarrassing number of times), but it's profitable for Google as a whole to keep those pieces working for me so that the ads do their thing the rest of the time. There's no reason chatbots can't/won't eventually follow similar models. Whether that's enough to be profitable remains to be seen.
> Search is all about information retrieval. AI is all about task accomplishment.
Same outcome, different intermediate steps. I'm usually searching for information so that I can do something, build something, acquire something, achieve something. Sell me a product for the right price that accomplishes my end goal, and I'm a satisfied customer. How many ads for app builders / coding tools have you seen today? :)
Not if the problem as written is "does this code compile", which is still a useful stepping stone for some workflows. Yours is certainly a more useful query in most cases but repositioning or re-scoping the original question can still lead to a net win.
It can be taken too far, of course, but the amount of bad code and misleading comments in most systems is substantial. At a different company, two of our teams had a competition every Sprint to see who could take out the most code...
Completely agree. It was a little tongue-in-cheek, but actually removing code and the complexity and tech debt associated with it is incredibly valuable.
Especially if more experienced and knowledgeable engineers can remove the code paths that are "in use" but shouldn't be - premature optimizations that can be simplified, redundancies that can be eliminated, features that aren't bringing in value. It's usually an underappreciated job but can lead to greatly improved velocity and significantly less fragility.
I was excited for Matter for all it promises... but companies seem to be explicitly holding back support for it because they recognize that it will bring less control for them, less differentiation, and far fewer opportunities to force these money-squeezing ideas onto consumers. I hope to be proven wrong but I'm not feeling very optimistic about its long-term future right now.