He produces Broadway shows these days. Never say never but that kind of thing screams an “I’ve got all the cash I need, now I’m following my passions” mindset. You certainly don’t do it for the money…
I think that’s the wrong way of framing it. If, before the launch of the iPhone, you asked what people wanted from their phones you’d be there a very long time before anyone described something like an iPhone (no buttons, capacitive touch interface, etc). And yet, once they were offered it, people flocked to it.
This regulation is targeted at devices with poor battery life. Just because it hasn’t occurred to people to ask for the feature doesn’t mean they won’t appreciate it.
That's an odd reply since by that argument they also flocked to a phone with no replaceable battery, which was pretty standard in the 2000s.
But you could be right. I guess this will be an experiment to watch: If EU consumers show a strong preference for replaceable batteries once they become more widely available, we can expect manufacturers to start offering it in other markets as well.
I think everything is a tradeoff and at that point people took the trade. But the place smartphones take in our lives today compared to 2006 is radically different, I wouldn’t assume much carries over.
Looks to be a great proof of concept. No, running a standalone executable alongside the browser is not the way you'd want to do WebUSB. But it's great to see someone working on it.
Except the sandbox is a huge target already, and breaking it means any website can now access and mess with your USB devices. If you can develop an exploit for Chrome's WebUSB system, you potentially have millions upon millions of targets available.
Downloading an arbitrary executable can be made safe (via multiple avenues: trust, antivirus software, audits, artifact signing, reproducible builds, etc.), and once the software is vetted, it exposes (or at least should expose) little to no attack surface during daily use.
But a keyboard flashed with malicious firmware becomes an undetectable keylogger, a USB rubber ducky, and a virus-laden USB stick all in one.
The idea that someone would want to reflash their keyboard firmware, yet wants a sandbox because they don't trust the firmware programmer, makes no sense.
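For context on what the sandbox debate above is about: WebUSB gates device access behind an explicit browser permission prompt before a page can talk to hardware. A minimal sketch of that flow, with hypothetical vendor/product IDs and a made-up single bulk transfer standing in for a real flashing protocol (real flashers use device-specific protocols such as DFU):

```javascript
// Build a WebUSB device filter. The IDs here (0xfeed/0x6060) are
// illustrative placeholders, not a real keyboard's IDs.
function keyboardFilter(vendorId, productId) {
  return { filters: [{ vendorId, productId }] };
}

async function flashKeyboard(firmwareBytes) {
  // requestDevice() triggers the browser's permission chooser: the user
  // must explicitly pick the device before the page can touch it.
  const device = await navigator.usb.requestDevice(
    keyboardFilter(0xfeed, 0x6060)
  );
  await device.open();
  await device.claimInterface(0); // interface number is an assumption
  // Send the firmware in one bulk transfer to endpoint 1 (also an
  // assumption; a real updater would follow the device's own protocol).
  await device.transferOut(1, firmwareBytes);
  await device.close();
}
```

The permission prompt is the per-device consent boundary the thread is arguing over: it limits which device a page can reach, but once granted, the page speaks raw USB to it.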
Phoenix is probably about as good a location as you could get for a self driving car. It’s not yet clear how wide their success will be outside of that niche.
Sort of. At the moment there is a fad of websites that mess with your scrolling and have very low content density. They are all trying to imitate Apple's marketing pages. Most startup websites do this. It's not good design at all, and it's user-hostile, but it's trendy and popular right now.
> people here were so confident that OpenAI is going to collapse because of how much compute they pre-ordered
That's not why. It was and is because they've been incredibly unfocused and have burnt through cash on ill-advised, expensive things like Sora. By comparison Anthropic have been very focused.
Nobody was talking about them betting too much on compute; people were saying that their shady compute deals with NVIDIA and Oracle were creating a giant bubble in their attempt to be treated as Too Big To Fail (in their words, a taxpayer-backed "backstop").
That’s just short term talk. The main thesis behind their collapse is that they won’t be able to pay their compute bills because they won’t have enough demand to.
That doesn't really track because their compute isn't like a debt obligation.
The compute topic was more about how OpenAI, Nvidia, Oracle, and others were all announcing commitments to spend money on each other in a circular way that could just net out to zero value.
To me it seems like they burn so much money that they can do lots of things in parallel. My guess would be that, e.g., Codex and Sora are developed quite independently. After all, there's quite a hard limit on how many bodies are beneficial to a software project.
Personally, I think it's down to Altman having the cognitive capacity of a sleeping snail and the world insight of a hormonal 14-year-old who's only ever read one series of manga.
Despite having literal experts at his fingertips, he still isn't able to grasp that he's talking unfiltered bollocks most of the time. Not to mention his Jason-level "oath breaking"/dishonesty.
You'd think Apple would go after the top-charting apps that are leveraging the scam companies (like Monopoly Go and Disney Solitaire) for actively engaging with scams like this to pump their own numbers up...
(https://old.reddit.com/r/FreeCash/comments/1i4132r/monopoly_... - like this. What the everloving hell? Straight up enticing users to shove themselves into a game, expose themselves to ads galore, and then keep goading them into blowing even more money in the partner app under the guise of 'real cash'.)
It has a massive user base. And political connections. And lawsuit money. Apple (and Google) will absolutely treat these publishers differently than a random app developer.
Maybe—I don't think anyone is choosing between the two based on access to grok of all things. I think it's simply treated as an extension of twitter, which will almost certainly never be forced out while it remains the premier app for diplomacy and AI porn.
Yeah, Apple doesn't care about losing money or pissing off a large user-base. They assume they have enough money and they'll always have the larger user-base.
Which I actually agree with, as the Wasm ecosystem is trying to be yet another UNCOL outside the browser, bringing CORBA back while pretending it is some great new idea.
> So, what is it that makes you so sure, oh so very certain, that LLMs just "feel" conscious but aren't?
Because we know what they actually are on the inside. You're talking as if they're an equivalent to the human brain, the functioning of which we're still figuring out. They're not. They're large language models. We know how they work. The way they work does not result in a functioning consciousness.
I think that the interior structure doesn't necessarily matter—the problem here is that we don't know what consciousness is, or how it interacts with the physical body. We understand decently well how the brain itself works, which suggests that consciousness is some other layer or abstraction beyond the mechanism.
That said, I think that LLMs are not conscious and are more like p-zombies. It can be argued that an LLM has no qualia and is thus not conscious, due to having no interaction with an outside world or anything "real" other than user input (mainly text). Another reason driving my opinion is because it is impossible to explain "what it is like" to be an LLM. See Nagel's "What Is It Like to Be a Bat?"
I do agree with the parent comment's pushback on any sort of certainty in this regard—with existing frameworks, it is not possible to prove anything is conscious other than oneself. The p-zombie will, obviously, always argue that it is a truly conscious being.