There is a fairly impressive installation of these at Heathrow airport in Terminal 5 outside the BA lounges. Struggling to find a decent video on YouTube, but this one’s not terrible https://youtu.be/G03WA30yFMI?si=hx5aLlrj_BH21yr2
Not a single mention of "maybe WE should have tested our backup strategy and scrutinised it". Or even "maybe we should have kept backups away from the primary vendor". Because this also suggests a negligible DR and BC strategy.
Certainly AI-editorialised. I wonder if this is because English isn't their first language, and they are confidence-compensating. I've worked with a lot of folks from the Philippines, and the Tagalog/English mix sometimes leads to confidence challenges.
You might be surprised…or you might not. I’ve found it’s a good barometer for whether you actually don’t like AI writing or you just don’t like bad AI writing.
1. This test has really zero to do with what we're talking about. Stylized fiction is a completely separate domain from non-fiction writing of personal anecdotes. There's effectively zero relation between them.
2. Picked the human 5 out of 5. Since it's pointless to take as a judge of preference due to 1), I took it as a test of "spot the AI", and clearly it was obvious to me in every instance.
3. Of course we just "don't like bad AI writing". "Good AI writing" would be unnoticeable. This is incredibly rare in the domain we're talking about.
Small, pithy quotes vs dozens of paragraphs are rather different things.
It does not surprise me in the least that a machine can produce excellent small quotes. Markov chains have been producing some fantastic stuff for decades, for example, and they're about as complicated as an abacus. https://thedoomthatcametopuppet.tumblr.com/
It seems I chose AI 5 times out of 5. I'm not a native speaker, so I might have preferred a slightly more straightforward text.
On one side, I think this suffers a lot from selection bias: short AI snippets specifically chosen by humans for their quality and they do not necessarily reflect the average experience of AI text. On the other hand, AI generated text does not preclude human editing.
This is like the coke vs pepsi tests, where people prefer pepsi when given a small amount but prefer coke in larger amounts. short snippets aren't a good test of anything useful.
I got 4/5 human. #3 - I chose AI, it was very close.
I noticed something: humans will use words precisely and loosely at the same time. AI will seem like it's precise, but a lot of the wording it uses can be cut or replaced by something else without losing much meaning.
A few paragraphs isn't writing, it's a snippet. The shorter something is, the better AI will be at mimicking it, because underlying flaws are less likely to be made apparent.
Music is another great example of this. I enjoy techno/trance type stuff, but YouTube is becoming borderline unusable for this genre due to AI slop. You'd think AI would do a good job of producing tracks here since this genre is certainly somewhat formulaic. And about 2 minutes into a lengthy track I'd probably do relatively mediocrely at determining whether it was human or AI, but by about 10 minutes into a track it's often painfully obvious. I run this experiment regularly as I find myself having to skip the AI slop which YouTube seems obsessed with recommending anyhow.
Ironically, AI is probably providing a boon to human DJs here, because actively seeking them out is one of the only ways to escape YouTube's sloparithm.
I preferred the AI 4 out of 5 times. That's a little confronting. And judging by the amount of cope in the comments section, others found it the same. I guess it is a small test, but I think it successfully makes its point.
You are of course specifically referring to the math optimised models, not the chat ones folks would generally encounter. Not that I’m trying to contradict you, your point is super valid and I agree with you! But I’m supplementing to help anyone following along who may make choices.
Shouldn't one use e.g. a Wolfram Alpha MCP endpoint for math in AI? From what I've seen on even premium non-quantized models, I would never ever trust the innate ability of an LLM to calculate.
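A minimal sketch of that division of labour, with Python's `ast` module standing in for the external math tool (a real setup would call out to a Wolfram Alpha or similar endpoint; the names here are illustrative): the model only decides *what* to compute, and a deterministic evaluator performs the arithmetic.

```python
import ast
import operator

# Deterministic evaluator standing in for an external math tool.
# The LLM proposes the expression; it never does the arithmetic itself.
_OPS = {
    ast.Add: operator.add,
    ast.Sub: operator.sub,
    ast.Mult: operator.mul,
    ast.Div: operator.truediv,
    ast.Pow: operator.pow,
    ast.USub: operator.neg,
}

def safe_eval(expr: str) -> float:
    """Evaluate a pure-arithmetic expression exactly, refusing anything else."""
    def walk(node):
        if isinstance(node, ast.Expression):
            return walk(node.body)
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](walk(node.left), walk(node.right))
        if isinstance(node, ast.UnaryOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](walk(node.operand))
        raise ValueError("disallowed construct")
    return walk(ast.parse(expr, mode="eval"))

print(safe_eval("12345679 * 81"))  # exact: 999999999
```

The point is the boundary, not the evaluator: whatever tool sits behind it (Wolfram Alpha, a CAS, a calculator), the model's output is treated as a query, never as an answer.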
/*
* RealTek 8129/8139 PCI NIC driver
*
* Supports several extremely cheap PCI 10/100 adapters based on
* the RealTek chipset. Datasheets can be obtained from
* www.realtek.com.tw.
*
* Written by Bill Paul <wpaul@ctr.columbia.edu>
* Electrical Engineering Department
* Columbia University, New York City
 */

/*
* The RealTek 8139 PCI NIC redefines the meaning of 'low end.' This is
* probably the worst PCI ethernet controller ever made, with the possible
* exception of the FEAST chip made by SMC. The 8139 supports bus-master
* DMA, but it has a terrible interface that nullifies any performance
* gains that bus-master DMA usually offers.
*
* For transmission, the chip offers a series of four TX descriptor
* registers. Each transmit frame must be in a contiguous buffer, aligned
* on a longword (32-bit) boundary. This means we almost always have to
* do mbuf copies in order to transmit a frame, except in the unlikely
* case where a) the packet fits into a single mbuf, and b) the packet
* is 32-bit aligned within the mbuf's data area. The presence of only
* four descriptor registers means that we can never have more than four
* packets queued for transmission at any one time.
*
* Reception is not much better. The driver has to allocate a single large
* buffer area (up to 64K in size) into which the chip will DMA received
* frames. Because we don't know where within this region received packets
* will begin or end, we have no choice but to copy data from the buffer
* area into mbufs in order to pass the packets up to the higher protocol
* levels.
*
* It's impossible given this rotten design to really achieve decent
* performance at 100Mbps, unless you happen to have a 400Mhz PII or
* some equally overmuscled CPU to drive it.
*
* On the bright side, the 8139 does have a built-in PHY, although
* rather than using an MDIO serial interface like most other NICs, the
* PHY registers are directly accessible through the 8139's register
* space. The 8139 supports autonegotiation, as well as a 64-bit multicast
* filter.
*
* The 8129 chip is an older version of the 8139 that uses an external PHY
* chip. The 8129 has a serial MDIO interface for accessing the MII where
* the 8139 lets you directly access the on-board PHY registers. We need
* to select which interface to use depending on the chip type.
*/
Those comments are about the 25-year-old RTL8139, one of the world's first highly affordable, fully integrated Fast Ethernet controllers, which ended up on pretty much every motherboard. Contrary to all of the aged complaints about the RTL8139, I ran several of them on OpenBSD (and Windows) for close to ten years with no problems at all.
Anthropic recently dropped all-inclusive usage from new enterprise subscriptions: your seat subscription gets you a seat with no usage included, and all usage is then charged at API rates. It's like a worst of both worlds!
The SSO tax is a large part of it, plus controls around the plug-in marketplace, enforcement of config, and observability of spend. But it's all pretty weak really for $20 a month.
And Microsoft are going the same route, moving Copilot Cowork over to a utilisation-based billing model, which is very unusual for their per-seat products (I'm actually not sure I can ever remember that happening).
Yeah, it’s an interesting one. I think inertia and expectations at this point? I don’t think the big labs anticipated how low the model switching costs would be and how quickly their leads would be eroded (by each other and the upstarts)
They are developing their moats with the platform tooling around it right now, though. Look at Anthropic with Routines and OpenAI with Agents. Drop that capability into a business with loose controls and suddenly you have a very sticky product with high switching costs. Meanwhile, if you stick with purely the 'chat' use cases, even Cowork and scheduled tasks, you maintain portability.
I bought a Thinkpad P16s with 64 GB of LPDDR5x ram in October 2024 for just over $1100.
64 GB of LPDDR5x will add $849 to the price of a Framework 13 Pro. That's insane. I would love a future Framework 16 Pro, but that will probably run $3500 for the configuration I would want if memory and storage prices don't come down.
You need to consider the upgradability aspect too - the next time you want to upgrade, you just need to buy a new mainboard, which would be considerably cheaper than buying a whole new laptop.
A new mainboard that may need new memory. From another comment in the thread:
I have 64GB of DDR4 in my current laptop, and replacing that with the same amount of LPCAMM2 LPDDR5X is probably more expensive than the rest of the laptop itself.
And another:
I just had my mainboard die, and I was advised there currently isn't another mainboard in stock that works with my old DDR4 RAM