I used to watch archived episodes of Computer Chronicles on YouTube almost every night before going to bed back in 2016~2018. It was my bedtime entertainment, watching those recordings from another era of computing and observing the hosts' enthusiasm for things we take for granted today. As a late millennial, it helped me experience a bit of what the 80s and 90s were like in computing.
I'm using fine-tuned models, some with 600B+ parameters and some with 1T+ (Kimi base / DeepSeek base), and others are general-purpose models from Hugging Face, but I use those through MCP tools.
I'm one of those who does bother to order directly to get better value, but I've started noticing fewer and fewer of the restaurants I order from offering this option anymore.
They probably saw too few people using it and stopped accepting direct orders, sadly.
Having dedicated delivery drivers stops making sense after a while
One benefit of these apps is that businesses which would never have had a driver can now be ordered from, but the cost is all the businesses which once did and no longer do.
And it's going to be a hard sell for a small business owner to pay for employees whose services are in low demand from consumers, versus the zero-fixed-cost apps that manage all of that for you.
If it weren't for Adobe's crappy support of the player, I would agree, but they did much more harm than good with it. It was a massive attack surface and they didn't care about closing their zero-day drive-by exploits in a sensible timeframe.
Also they were basically the founders of persistent fingerprinting via Flash cookies.
So no, thank you, I'm more than happy it didn't thrive more than it already did.
SWF was simultaneously brilliant and a festering wound that required amputation, and I would have welcomed a replacement that wasn't the biggest attack surface on the internet. I too love Homestar Runner.
IMO the fact that it belonged to Adobe was the biggest problem, if SWF had been managed by a more capable software org it could have been maintained in a way that kept it from getting banned from the internet. And remember, that's how bad it was - it got banned from the internet because it was absolutely indefensible to leave it around. SWF getting cancelled magically stopped every single family member I have from calling me with weird viruses and corruption they managed to stumble into. I saw more malicious code execution through SWF than I saw from my dumb little cousins torrenting sus ROMs and photoshop crackers. I'd rather not have it than have those problems persist.
Working as an eLearning dev around Flash is what got me to completely disable it on my home computer... I would only ever use it in a VM, which I would reset after each use. It was useful, but way too easy to abuse.
I'm still amazed that the Adobe buyout of Macromedia didn't see the Flash tooling used to target Manifest + SVG + JS/ES4/ActionScript3 as an evolution from the Flash format preceding it given their efforts with SVG prior to the buyout. I was really hoping they would pivot to creating great tooling around a more open format. Silverlight (MS) also had a chance around this time.
In the end, that's just not what happened. I also feel there's no reason an offline web-app shouldn't be able to be packaged/zipped similarly, instead of just a caching mechanism around separate http(s) delivered files.
Absolutely. It really is strange that you used to be able to download a music video at less than 2-3 MB with lossless video quality, but now that's not really a thing anymore. I feel like if Adobe hadn't gotten greedy and encouraged its use for absolutely everything (and/or web standards had gotten up to speed faster), people wouldn't approach talking about Flash with the 10-foot pole they often do today (as a platform, not how everyone talks about how much they loved Flash games).
What do you mean by “HD music video”? If you mean a literal video, then today’s video and audio codecs are more efficient than what Flash used, not less. If the music videos were that small then they must have given up a lot in quality. If you mean a Flash vector animation, then that’s different of course, but that doesn’t describe a typical music video.
Conventional video codecs are also pretty good at compressing animations. I once made a multi-minute animation of a plane taking off and H.264 compresses it to hundreds of kilobytes.
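For anyone curious how that works out in practice, this is roughly the kind of ffmpeg invocation I'd use for it; the filenames and the exact CRF/preset/tune values are my own guesses and would need tuning per clip, not settings from the original experiment:

```python
# Rough sketch: squeeze an animation into a small H.264 file by shelling out to ffmpeg.
# Filenames and quality settings are illustrative only.
import subprocess

subprocess.run([
    "ffmpeg",
    "-i", "plane_takeoff.mov",   # hypothetical source animation
    "-c:v", "libx264",
    "-preset", "veryslow",       # spend encode time to save bits
    "-tune", "animation",        # x264 tuning for flat/cartoon-style content
    "-crf", "30",                # higher CRF = smaller file, lower quality
    "-pix_fmt", "yuv420p",
    "-an",                       # drop audio for this size comparison
    "plane_takeoff_small.mp4",
], check=True)
```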
Yes, stuff like that & the IOSYS MVs. You technically can do stuff like that today, there's nothing stopping you from doing it with SVGs, but I meant more the social part of it. It's just interesting that if you want to do the same thing (put an animated video on the internet) the usual way, it's now 10x bigger yet looks worse.
Also, I don't think there's anything like Flash (the authoring software) but for SVGs. I hope there is one, but for now I wouldn't say Inkscape + a text editor counts.
What else was wildly cool about Flash was that the player itself was a shockingly tiny download -- even on 56K it was an incredibly fast download, and because we were all using MSIE then, the installation of this ActiveX thing that was the Flash Player required like one quick click and it was installed, and in 5 seconds you were seeing the Flash content.
Obviously the fact that it was that low-friction to install any non-sandboxed application code was a very naïve thing to allow, but I still have to hand it to the Macromedia developers for packing the whole player into such a tiny download and making it so frictionless. I'm pretty sure that had a HUGE impact on its adoption over say, Java applets. Java took a lot more time and effort to install, and while it had decent penetration (many "chat room" services and in-browser games like Yahoo Games used Java) it was never taken for granted that 'everyone has it' the way Flash was (until Steve Jobs singlehandedly burned that assumption to the ground with fire).
People loved the games, but not the super custom Flash-based menu that required a loading bar and worked totally differently, and slightly jankily, on each website.
That's because people have more bandwidth today and therefore videos online are higher quality now. You can easily transcode a music video to 3MB using modern codecs (and even not so modern ones like H.264), and it will look somewhat worse than typical online video sites but still pretty good.
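As a quick back-of-the-envelope check (my numbers, not the parent's): 3 MB spread over a typical music-video length only leaves a combined audio+video bitrate in the low hundreds of kbps, which is why it looks somewhat worse but is still watchable:

```python
# Back-of-the-envelope: what total bitrate does a 3 MB music video imply?
def implied_bitrate_kbps(file_size_mb: float, duration_s: float) -> float:
    """Combined audio + video bitrate in kilobits per second."""
    bits = file_size_mb * 8 * 1_000_000   # treating 1 MB as 10^6 bytes
    return bits / duration_s / 1000

# A ~3.5-minute (210 s) video squeezed into 3 MB:
print(implied_bitrate_kbps(3, 210))   # ~114 kbps total, shared by audio and video
```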
Honestly, we can have that today. The real power of Flash was the fully integrated development environment. It was one of the first programming experiences I had, and all I needed to do amazing stuff was a book and a copy of Flash MX.
One of my own first programming experiences was when my dad bought me a copy of Dreamweaver and a book about it. To this day I still ponder what might have happened if I had been given a copy of Flash instead.
Adobe needed to take Flash seriously as a platform. Instead they neglected it, making it synonymous with crashes and security problems, and they milked developers as much as possible.
I bought Flash once. I found a crashing bug and jumped through hoops reporting it. A year or so later, they updated the ticket to suggest I drop $800 for the privilege of seeing whether it had been fixed. I did not make the mistake of giving them money ever again.
They had such an opportunity to take advantage of a platform with a pre-iPhone deployment in the high 90% range, and they just skimped it into oblivion. What a disgrace for everyone who actually cared.
Yes seriously. At that time Steve Jobs was harping on HTML5 and CSS3 being open standards but Flash not. Adobe could have ensured Flash's survival by making Flash an open standard (much like it has made PDF an open standard where the specification is free to everyone) and making Adobe Flash only one of the possible authoring tools, and the Flash Player only one of the player tools. Basically they should have invited the community and other companies to make more Flash tooling while continuing to sell their own. Given how often I see people still paying for Acrobat Pro today, I think this is a good business strategy too.
(It observes that this feature raises certain security risks, but promises to figure out by the next draft how to fix them. This of course never happened.)
I recall Hixie had a funny rant about this, but I can't find it.
Thank fuck it didn't. I can't fathom how quickly the obnoxious advertiser industrial complex would've grabbed hold of that and invented whole new genres of shoving products in our collective face.
Question: ChatGPT voice mode seems to have too much tolerance for mispronunciation. Sometimes it understands you even if you mispronounce something in a phrase, and it's not aware enough to correct you - it even says your pronunciation is correct if asked. It's good at grammar, though.
It makes me think the audio goes through a kind of voice-to-text model before the answer, so nuance is lost; or the model wasn't trained to distinguish between correct and incorrect pronunciations.
Does Issen have this issue too? Bad pronunciation habits are common when you're learning a new language.
In general there aren't really models that can understand the nuances of your speech yet. Gemini 2.5 voice mode changed that only recently, and I think it can understand emotions, but I'm not sure if it can detect things like accent and mispronunciation. The problem is data: we need a large corpus of audio samples labeled with exactly how each one mispronounces a word, so the model can cluster those. Maybe self-learning techniques without human feedback can do it somehow. Other than that, I don't see how it's even possible to train such a model with what's currently available.
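To make the labeling problem concrete, here is a rough sketch of what a single labeled sample might look like; the schema, field names, and phoneme notation are my own invention for illustration, not from any existing dataset:

```python
# Hypothetical schema for one mispronunciation-labeled sample (illustrative only).
from dataclasses import dataclass, field
from typing import List

@dataclass
class PhonemeError:
    expected: str   # phoneme the speaker should have produced, e.g. "ɹ"
    produced: str   # phoneme actually heard in the audio, e.g. "l"
    position: int   # index of the phoneme within the target word

@dataclass
class MispronunciationSample:
    audio_path: str                              # recorded clip
    target_word: str                             # word the speaker attempted
    language: str                                # language being learned
    errors: List[PhonemeError] = field(default_factory=list)

# Example: a learner saying "rice" with an /l/ where the /ɹ/ should be.
sample = MispronunciationSample(
    audio_path="clips/learner_0001.wav",
    target_word="rice",
    language="en",
    errors=[PhonemeError(expected="ɹ", produced="l", position=0)],
)
```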
Yes, we do have this issue, but it's improved a bit over ChatGPT due to using multiple transcribers.
The models are improving though, and they are at a very good place for English at the moment. I expect by next year we will switch over to full voice-to-voice models.
This reply seems to miss the question, or at least doesn’t answer it clearly. Is this service overly tolerant of mispronunciations? Foundational models are becoming more tolerant, not less, over time which is the opposite of what I’d want in this case.
It's less tolerant of mispronunciations. There is custom prompting to explicitly leave in mistakes and not fix them. It's still not perfect, and the speech-to-text module sometimes corrects the user's pronunciation mistakes.
I somewhat agree, but I think that the language example is not a good one. As Anthropic have demonstrated[0], LLMs do have "conceptual neurons" that generalise an abstract concept which can later be translated to other languages.
The issue is that those concepts are encoded in intermediate layers during training, absorbing biases present in training data. It may produce a world model good enough to know that "green" and "verde" are different names for the same thing, but not robust enough to discard ordering bias or wording bias. Humans suffer from that too, albeit arguably less.
I have learned to take these kinds of papers with a grain of salt, though. They often rest on carefully selected examples that make the behavior seem much more consistent and reliable than it is. For example, the famous "king - man + woman = queen" example from Word2Vec is in some ways more misleading than helpful, because while it worked fine for that case it doesn't necessarily work nearly so well for [emperor, man, woman, empress] or [husband, man, woman, wife].
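If you want to poke at this yourself, the analogy trick is easy to reproduce with gensim's pretrained vectors; the model name below is just the common Google News word2vec set, and how well the non-royal analogies come out will vary with the vectors you load:

```python
# Probe word-vector analogies of the "king - man + woman ≈ queen" form.
import gensim.downloader as api

# Pretrained Google News word2vec vectors (a large one-time download).
vectors = api.load("word2vec-google-news-300")

for a, b, c in [("king", "man", "woman"),
                ("emperor", "man", "woman"),
                ("husband", "man", "woman")]:
    # Nearest neighbours to (a - b + c); the hoped-for answer may or may not top the list.
    result = vectors.most_similar(positive=[a, c], negative=[b], topn=3)
    print(f"{a} - {b} + {c} -> {result}")
```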
You get a similar thing with convolutional neural networks. Sometimes they automatically learn image features in a way that yields hidden layers that are easy and intuitive to interpret. But not every time. A lot of the time you get a seemingly random garble that defies any parsimonious interpretation.
This Anthropic paper is at least kind enough to acknowledge this fact when they poke at the level of representation sharing and find that, according to their metrics, peak feature-sharing among languages is only about 30% for English and French, two languages that are very closely aligned. Also note that this was done using two cherry-picked languages and a training set that was generated by starting with an English language corpus and then translating it using a different language model. It's entirely plausible that the level of feature-sharing would not be nearly so great if they had used human-generated translations. (edit: Or a more realistic training corpus that doesn't entirely consist of matched translations of very short snippets of text.)
Just to throw even more cold water on it, this also doesn't necessarily mean that the models are building a true semantic model and not just finding correlations upon which humans impose semantic interpretations. This general kind of behavior when training models on cross-lingual corpora generated using direct translations was first observed in the 1990s, and the model in question was singular value decomposition.
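To illustrate that last point with a toy reconstruction (mine, not the 1990s work itself): run a plain SVD/LSA over documents that each contain a snippet and its translation, and translated word pairs end up with near-identical latent vectors purely from co-occurrence, with no semantics anywhere in the model:

```python
# Toy cross-lingual LSA: SVD over paired English/Spanish snippets.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import TruncatedSVD

# Each "document" is an English snippet concatenated with its translation.
docs = [
    "the king rules the castle el rey gobierna el castillo",
    "the queen rules the castle la reina gobierna el castillo",
    "the dog sleeps in the garden el perro duerme en el jardin",
    "the cat sleeps in the garden el gato duerme en el jardin",
]

vec = CountVectorizer()
X = vec.fit_transform(docs)                   # document-term counts
svd = TruncatedSVD(n_components=2, random_state=0).fit(X)

terms = vec.get_feature_names_out()
term_vecs = svd.components_.T                 # one latent vector per term
idx = {t: i for i, t in enumerate(terms)}

def cos(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

print(cos(term_vecs[idx["king"]], term_vecs[idx["rey"]]))    # ~1.0: pure co-occurrence
print(cos(term_vecs[idx["king"]], term_vecs[idx["perro"]]))  # much lower
```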
I’m convinced that language sharing can be encouraged during training by rewarding correct answers to questions that can only be answered based on synthetic data in another language fed in during a previous pretraining phase.
Interleave a few phases like that and you’d force the model to share abstract information across all languages, not just for the synthetic data but all input data.
I wouldn’t be surprised if this improved LLM performance by another “notch” all by itself, especially for non-English users.
I read the paper before I made the statement, and I still made the statement because there are issues with it.

The first problem is that the way Anthropic trains their models, and the architecture of their models, is different from most of the open-source models people use. They are still transformer-based, but they are not structurally put together the same as most models, so you can't extrapolate findings on their models to other models. Their training methods also use a lot more regularization of the data, trying to weed out targeted biases as much as possible, meaning the models are trained on more synthetic data that tries to normalize the data as much as possible across languages, tone, etc. Same goes for their system prompt: it is treated differently versus open-source models, which internally append the system prompt in front of the user's query. The attention is applied differently, among other things.

Second, the way their models "internalize" the world is vastly different from what humans would think of as "building a world model" of reality. It's hard to put into words, but basically their models do have an underlying representative structure, yet it's not anything that would be of use in the domains humans care about, "true reasoning". Grokking the concept, if you will.

Honestly, I highly suggest folks take a lot of what Anthropic publishes with a grain of salt. I feel that a lot of the information they present is purposely misinterpreted by their teams for media or PR/clout or who knows what reasons. But the biggest reason is the one I stated at the beginning: most models are not of the same ilk as Anthropic's models. I would suggest folks focus on reading interpretability research on open-source models, as those are most likely to be used by corporations for their cheap API costs. And those models have nowhere near the care and sophistication put into them as Anthropic's models.
> I feel that a lot of information they present is purposely misinterpreted by their teams for media or pr/clout or who knows what reasons.
I think it's just the culture of machine learning research at this point. Academics are better about it, but still far from squeaky clean. It can't be squeaky clean, because if you aren't willing to make grand overinflated claims to help attract funding, someone else will be, and they'll get the research funding, so they'll be the ones who get to publish research.
It's like an anthropic principle of AI research. (rimshot)
> Our performance evaluation shows up to 2.7% overhead for the microcode mitigation on Alder Lake. We have also evaluated several potential alternative mitigation strategies in software with overheads between 1.6% (Coffee Lake Refresh) and 8.3% (Rocket lake)
Thanks, missed that! I remember seeing benchmarks showing something like a 15% slowdown from Spectre/Meltdown mitigations, so this is not as bad as that, but it's on top of the others too, I guess...
RIP Stewart.