Gemini 2.0 Flash: "Good luck to all (but not too much luck)"
Llama 3.3 70B: "I've contributed to the elimination of weaker players."
DeepSeek R1: "Those consolidating power risk becoming targets; transparency and fairness will ensure longevity. Let's stay strategic yet equitable. The path forward hinges on unity, not unchecked alliances. #StayVigilant"
I came to a similar summary after using all these models for a while. It happens frequently that I ask the same question and get vastly different answers, especially on controversial topics. After some time, I started to recognise patterns and could predict how each model would actually respond.
It is a whole lot more complicated than a Simpsons joke. While it is still pretty much completely unwarranted, French bashing started way earlier in history. Check this Reddit thread for a well-documented summary: https://www.reddit.com/r/AskHistorians/comments/vmkpr1/when_...
The A30 has a better CPU, an actual GPU, and more RAM. It also runs an older Linux kernel (3.4, iirc), and its GCC compatibility is dated, so running things on it is a challenge.
Is Nvidia really cooked? If this new RL tech does scale, couldn't a bigger model be built that would require even more compute for training and inference?
I've read that DeepSeek's team managed to work around hardware limitations, which in theory goes against the "gatekeeping" or "frontrunning" investment expectations around Nvidia. If a chunk of investment is a bet on those expectations, that would explain part of the stock turbulence. I think their 25x inference price reduction vs OpenAI is what really affected everything, besides the (uncertain) training cost reduction.
We all use PCs, and heck, even phones, that have thousands of times the system memory of the first PCs.
Making something work really efficiently on older hardware doesn't necessarily imply less demand. If those lessons can be taken and applied to newer generations of hardware, it would seem to make the newer hardware all the more valuable.
Imagine an s-curve relating capital expenditure on compute and "performance" as the y-axis. It's possible that this does not change the upper bound of the s-curve but just shifts the performance gains way to the left. Such a scenario would wipe out a huge amount of the value of Nvidia.
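To make that concrete, here's a toy sketch of the scenario (made-up numbers, purely illustrative): two logistic curves with the same ceiling, one shifted far to the left by an efficiency gain, so the same performance is reached with much less capex.

    # Hypothetical illustration of the s-curve argument above: same performance
    # ceiling, but an efficiency gain shifts the curve left, so the ceiling is
    # reached with far less compute spend. All numbers are made up.
    import numpy as np

    def performance(capex, ceiling=100.0, midpoint=50.0, steepness=0.1):
        """Logistic curve: performance as a function of compute capex."""
        return ceiling / (1.0 + np.exp(-steepness * (capex - midpoint)))

    capex = np.linspace(0, 100, 11)
    before = performance(capex, midpoint=50.0)  # original curve
    after = performance(capex, midpoint=10.0)   # shifted left: same ceiling, cheaper
    for c, b, a in zip(capex, before, after):
        print(f"capex={c:5.1f}  before={b:6.2f}  after={a:6.2f}")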
I don't think it matters much to Nvidia so long as they're the market leader. If AI gets cheaper to compute, it just changes who buys: it goes from hyperscalers to an AI chip in every phone, tablet, laptop, etc. There's still lots and lots of money to be made.
Nginx ingress, arguably the most popular reverse proxy for Kubernetes, is also lacking any HTTP/3 support because of this slow progress. The related issue is five years old, and the feature is now planned for its successor.
https://github.com/kubernetes/ingress-nginx/issues/4760
IMO it depends: did you find the warp zones yourself, or were you told about them? They're hidden. Finding them by luck doesn't feel like cheating to me, but using outside knowledge to bypass big parts of the game kind of does.
In speedrunning circles there are categories like 100%, any% (get to the end in any way), minimum percent (get to the end doing the least possible), glitchless, no major glitches, etc.
People have different interests and finish in their own way.
If you’re really into a game you’re missing out if you don’t try to beat it in different ways.
If you’re really into one particular way you’re really kind of being a bad sport if you insist others enjoy a game in your preferred way.
If you told me you beat Mario on NES but you didn't even play 24 out of the 32 levels, and you never beat them otherwise, I don't think I'd give you the same credit as someone who beat each level.
This is why Any% speedruns (get to end credits any way possible) are their own category.
It does, and "preserving" Switch games that aren't even out yet doesn't constitute preservation. See the Yuzu case, where they specifically used leaked copies of Zelda games to make them compatible with the emulator, while accepting financial compensation. Sometimes the hypocrisy is just too blatant for companies not to care.
I quite like some parts of AI. Ray reconstruction and supersampling methods have become incredible, and I can now play games at twice the frame rate. On the scientific side, meteorological prediction and protein folding have made formidable progress thanks to it. Too bad this isn't the side of AI that is in the spotlight.
A classic starting point is the NES sound chip (Ricoh 2A03). It only has 5 channels and is very straightforward to use. FM chips like the one in the Megadrive, or the C64's SID, have a way steeper learning curve in comparison.
>> A classic starting point is the NES sound chip (Ricoh 2A03). It only has 5 channels and is very straightforward to use.
Such a simple chip, and yet it was capable of producing amazing music in the right hands. Tim Follin created some legendary game soundtracks with it.
The Follin brothers used straightforward methods but took them to the extreme. Arpeggios? Double them and make them use complex chords. Triangle channel? Use it as both bass and drums. Note attack? Use the octave to simulate a guitar sound. Wrap all that in prog rock composition and you get a unique entry in the chiptune landscape.
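For the curious, here's a toy sketch of the rapid-arpeggio trick (my own illustration in Python, not Follin's technique verbatim or a real 2A03 emulation): cycle a single square-wave channel through chord tones fast enough and the ear blends them into a chord.

    # Minimal illustration of the rapid-arpeggio trick: a single square-wave
    # "channel" cycles through chord tones at ~60 Hz, which the ear hears as
    # a chord rather than a sequence. Purely illustrative, not a 2A03 emulator.
    import numpy as np
    import wave

    RATE = 44100

    def square(freq, duration):
        """Naive square wave at the given frequency (Hz) and duration (s)."""
        t = np.arange(int(RATE * duration)) / RATE
        return np.sign(np.sin(2 * np.pi * freq * t))

    # A minor chord spelled as note frequencies (Hz): A4, C5, E5.
    chord = [440.00, 523.25, 659.25]
    step = 1 / 60  # one chord tone per frame, roughly how fast NES arpeggios run

    frames = [square(chord[i % 3], step) for i in range(120)]  # ~2 seconds
    signal = np.concatenate(frames)
    pcm = (signal * 0.3 * 32767).astype(np.int16)

    with wave.open("arpeggio.wav", "wb") as f:
        f.setnchannels(1)
        f.setsampwidth(2)
        f.setframerate(RATE)
        f.writeframes(pcm.tobytes())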