To this day, ChatGPT lacks the single simplest feature -- forking a chat from a message.
That's the thing even the most barebones open-source wrappers have had since 2022. Probably even earlier, because the ERP stuff people played with predates ChatGPT by about two years (even if it was very simple).
Well, apparently 3 years later they did add it. I had asked about it so many times that I'd stopped bothering to check whether it was there.
Though I'm not sure they didn't sneak it in as part of an A/B test, because the last time I checked was in October, and I'm pretty sure it wasn't there then.
This is a big use case for me that I've gotten used to while using Open-WebUI: easily branching conversations, or editing a message with information from a few messages downstream to 'compact' the chat history. They have a tree view, too, which works pretty well (the main annoyance is interface jumps that never seem to line up properly).
This feature has spoiled me for most other interfaces, because it is so wasteful, context-wise, to keep restating upstream assumptions as the context window drifts farther from the conversation's initial goal.
I think a lot more could be done with this, too - some sort of 'auto-compact' feature that pulls the important parts of the last n messages verbatim, without 'summarizing' (in a chat interface, the user's specific voicing is often important and gets lost in a summary).
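One way such an auto-compact could work: keep the most recent messages whole, and from older ones keep only a few sentences verbatim, scored by overlap with the latest message. This is a toy sketch with an invented relevance heuristic, not any interface's actual behavior:

```python
import re

def auto_compact(messages: list[str], keep_last: int = 4, per_msg: int = 2) -> list[str]:
    """Compact older messages by keeping a few sentences verbatim
    (no paraphrasing). Sentences are ranked by word overlap with the
    newest message; the scoring heuristic is a toy stand-in."""
    if len(messages) <= keep_last:
        return messages
    query = set(re.findall(r"\w+", messages[-1].lower()))
    compacted = []
    for msg in messages[:-keep_last]:
        sentences = re.split(r"(?<=[.!?])\s+", msg)
        scored = sorted(
            sentences,
            key=lambda s: len(query & set(re.findall(r"\w+", s.lower()))),
            reverse=True,
        )
        kept = [s for s in scored[:per_msg] if s.strip()]
        kept.sort(key=sentences.index)  # restore original sentence order
        compacted.append(" ".join(kept))
    return compacted + messages[-keep_last:]
```

The point of keeping sentences verbatim rather than summarizing is exactly the one above: the user's own phrasing survives the compaction.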
This is a constant frustration for me with Gemini, especially since things like Deep Research and Canvas mode lock you in, seemingly arbitrarily. As I understand it, LLMs are Markovian prompt-to-prompt, so I don't see why this is an issue at all.
Now I'm curious: does this situation qualify as force majeure for major firms? "Hey, you know, actually our entire consumer base just disappeared overnight. Crazy, huh?" And will various governments have to intervene to save them when/if that happens?
Setting aside that they'll all be busy handing money to OpenAI, at least someone somewhere has to notice that something is very wrong.
It will be easy to prove that it's not technically possible, since Git is decentralized. But the fines... oh, those fines could be enormous. Possibly, AMD could get barred from implementing HDMI at all - all the HDMI licensing body has to do is stop selling the spec to AMD specifically.
Both are deprecated, though. And both say something unexpected in their repositories: one suggests you use Docker Desktop (what?!), the other that you try Fedora (what?!!). Am I taking crazy pills?
So much this. People don't realize that when 1 trillion (10 trillion, 100 trillion, whatever comes next) is at stake, there are no limits to what these people will do to get it.
I will be very surprised if there aren't at least several groups or companies scraping these "smart" and snarky comments to find weird edge cases they can train on, turn into a demo, and then sell as an improvement. Hell, they would have done it if 10 billion were at stake; I can't really imagine (and I have a vivid imagination, to my horror) what Californian psychopaths will do for 10 trillion.
That's fine - good, even. AFAIK, for at least some of these tasks the dev teams do a lot of manual tuning of the model (it's rumored that the "r in strawberry" problem was "fixed" this way, as a general case, of course). The more random standalone hacks there are in the model, the more likely it is to start failing unpredictably somewhere else.
I'm not worried about it, because they won't waste their time on it (individually RL'ing on a dog with five legs). There are fractally many ways of testing this inability, so the only way to fix it is to solve the problem wholesale.
Similar to the pelican-on-a-bike SVG: the models that do well on that test do well at all SVG generation, so even if they are targeting that benchmark, they're still making the whole model better in order to score better.
Unlikely. Unless some new technology comes along that completely invalidates existing GPUs and Nvidia can't pivot to it quickly enough, there's just no way. They're too big, too rich, too powerful. They basically own the dedicated-GPU market, with AMD holding maybe a piddly 10% at best.
To be fair, their previous behavior and attitude toward the open-source license suggest that MinIO would probably engage in at least a little bumptious legal posturing against whoever chose to fork it.
I had a simple Proxmox/k8s cluster going, and RAM for the nodes was the last item on my list. It was cheap ol' DDR4.
Where I live, the price for my little cluster project has gone up from around ~400 USD in July (for a 5-node setup) to almost 2000 USD right now. I just refreshed the page and it's up 20% day-over-day. Welp. I guess the nodes are going to stay on 8 GB sticks for a while.
Gemini has it too, btw.