I don't think that's unpopular, it is pretty well written. But the "I believe" section is extraordinarily hard to believe given Altman's history.
> Working towards prosperity for everyone, empowering all people
> We have to get safety right
> AI has to be democratized; power cannot be too concentrated
None of these statements, IMO, reflect his actions over the past 5 years.
> we urgently need a society-wide response to be resilient to new threats. This includes things like new policy to help navigate through a difficult economic transition in order to get to a much better future
I agree with this, but there is a near 0% chance of that happening anytime soon in the US. I think he probably is aware of this.
Just my opinion, but it comes off as very insincere.
To be clear, what happened is still awful and there's absolutely no justification for it.
He doesn't trust it for anything else either as far as I can tell. In an interview he's boasted about how he uses a paper notebook for everything all day.
If you're using a Firefox-based browser, slow fullscreen for media can be fixed by setting the full-screen-api.macos-native-full-screen flag to false in about:config.
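For anyone who prefers setting it persistently, the same pref can go in a user.js file in your Firefox profile directory (the pref name is from the tip above; the file location varies by OS):

```javascript
// user.js — applied on every browser start, overrides about:config edits.
// Disables macOS-native fullscreen animation, which can be slow for media.
user_pref("full-screen-api.macos-native-full-screen", false);
```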
Twitter still has quite a lot of unique content that either appears there first or isn't accessible anywhere else at all. Unlike paid article websites, previews without logging in actually work for the most part, and xcancel, as you said, is a thing. Which extension are you using for redirects?
For the best-quality reply, I used the Gemma-4 31B UD-Q8_K_XL quant with Unsloth Studio to summarize the URL with web search. It produced 4.9 tok/s (including web search) on a MacBook Pro M1 Max with 64GB.
Here's an excerpt in its own words:
Unsloth Dynamic 2.0 Quantization
Dynamic 2.0 is not just a "bit-reduction" but an intelligent, per-layer optimization strategy.
- Selective Layer Quantization: Instead of making every layer 4-bit, Dynamic 2.0 analyzes every single layer and selectively adjusts the quantization type. Some critical layers may be kept at higher precision, while less critical layers are compressed more.
- Model-Specific Tailoring: The quantization scheme is custom-built for each model. For example, the layers selected for quantization in Gemma 3 are completely different from those in Llama 4.
- High-Quality Calibration: They use a hand-curated calibration dataset of >1.5M tokens specifically designed to enhance conversational chat performance, rather than just optimizing for Wikipedia-style text.
- Architecture Agnostic: While previous versions were mostly effective for MoE (Mixture of Experts) models, Dynamic 2.0 works for all architectures (both MoE and non-MoE).
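The selective-layer idea above can be illustrated with a toy sketch. This is not Unsloth's actual implementation; the layer names, sensitivity scores, and `keep_frac` heuristic are all hypothetical assumptions, just showing the shape of "keep sensitive layers at higher precision, compress the rest":

```python
def assign_bit_widths(sensitivities, low=4, high=8, keep_frac=0.25):
    """Toy selective quantization: keep the top keep_frac most
    sensitive layers at `high` bits, compress the rest to `low` bits."""
    names = list(sensitivities)
    # Rank layers by sensitivity (e.g. calibration error when quantized).
    ranked = sorted(names, key=lambda n: sensitivities[n], reverse=True)
    n_keep = max(1, int(len(names) * keep_frac))
    protected = set(ranked[:n_keep])
    return {n: (high if n in protected else low) for n in names}

# Hypothetical per-layer sensitivity scores from a calibration pass.
sens = {"attn.0": 0.9, "mlp.0": 0.2, "attn.1": 0.7, "mlp.1": 0.1}
print(assign_bit_widths(sens))
# → {'attn.0': 8, 'mlp.0': 4, 'attn.1': 4, 'mlp.1': 4}
```

The model-specific tailoring the post describes would correspond to the sensitivity scores (and thus the protected set) differing per architecture, which is why the layers kept at high precision in Gemma 3 differ from those in Llama 4.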
I wrote it originally because I wanted my openclaw install to talk to my assistant's openclaw, and my openclaws that were local at different houses.
It's morphed a lot since then, and is close to being super useful -- it allows group chat, and is close to having a built-in gateway system that releases API calls on a threshold vote.
That stuff is built to support Corpo's main business model which is providing real world asset and governance access to agents.
So, for example, I think agents might like to vote on sending a wire transfer by approving a specific Mercury bank API call.
I could go on. You can also use it to remotely chat to an agent across firewalls - it's pull / poll only.