You are a bad programmer if you think silently doing the wrong thing is not a bug. If you were the author of setTimeout, the right thing to do with unexpected input would be to raise an exception.
A standard library is an API just like any other library. The only thing different about it is backward compatibility (which in JS is paramount, and is the reason setTimeout can't be fixed directly). It is still a bad design.
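To make that concrete, here's a minimal sketch in TypeScript of the behavior I'd want (the wrapper name `strictSetTimeout` is mine, just for illustration). The underlying quirk is real: browsers store the delay as a 32-bit signed integer, so anything above 2^31 - 1 overflows and the callback fires almost immediately instead of failing loudly.

```typescript
// Largest delay setTimeout can actually represent: a 32-bit signed int (ms).
const MAX_TIMEOUT_DELAY = 2 ** 31 - 1;

// Illustrative wrapper: surface bad input to the caller instead of letting
// the platform silently mangle it into a near-zero delay.
function strictSetTimeout(fn: () => void, delayMs: number): ReturnType<typeof setTimeout> {
  if (!Number.isFinite(delayMs) || delayMs < 0 || delayMs > MAX_TIMEOUT_DELAY) {
    throw new RangeError(`setTimeout delay out of range: ${delayMs}`);
  }
  return setTimeout(fn, delayMs);
}

// strictSetTimeout(doWork, 30 * 24 * 60 * 60 * 1000);
// ~30 days in ms exceeds 2^31 - 1, so this throws instead of firing right away.
```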
I had this exact idea and I've described it to colleagues before. Fun to see parallel evolution. It feels like a simple concept that should already exist, so I'm surprised it's not more commonly attempted. But you're missing a few of the features that I came up with that build on the initial idea. I haven't gotten around to implementing it yet, but it's on my todo list for this year/next year.
I was planning to build it with ultralig.ht, but I'm not 100% sure it's ready for that. Since most of the content I'm interested in for research is textual/reader-mode, and the rest can be viewed with yt-dlp, I think it can render what I need, and it seems to be the lightest-weight option. Otherwise WebKit or Servo are the only alternatives I could think of for this.
LLMs make mediocre engineers into slightly less mediocre engineers, and non-engineers into below-mediocre engineers. They do nothing for anyone above the median. I've tried dozens of times to use them productively.
Outside of very very short isolated template creation for some kind of basic script or poorly translating code from one language to another, they have wasted more time for me than they saved.
The area where they seem to help people the most, including me, is producing plausible-looking code for something I have no familiarity with. If it's an area I've never worked in before, it could maybe be useful. Hence, the less breadth of programming knowledge you have, the more useful it is. The problem is that you don't understand the code it produces, so you have to rely on it entirely, and that doesn't work long term.
LLMs are not and will not be ready to replace programmers within the next few years, I guarantee it. I would bet $10k on it.
If you don't mind me asking, what generation are you from? Perchance you're newer to Earth than me, among those who find it hard to accept that others have different opinions?
Love the expanded C API support! Also those performance improvements are massive!
Pushing through filters and the streaming optimization for fetchone() are great! This makes it more viable to use duckdb for smaller queries from Python.
I'm pretty excited for variables too! I really wanted them for when I'm using the CLI. Same with query/query_table! I appreciate the push for features that make people's lives easier while still improving performance.
Everyone I've introduced duckdb to (at work or outside of it) is eventually blown away (though some still have lingering SQL stigma).
Adding a way to query the path at the current node would let you skip things like keeping track of `in_section` by hand (see the sketch below).
I wonder if the `enter|exit ...` syntax might be too limiting, but for a lot of stuff it seems nice and easy to reason about. Easier than tree-sitter's own queries.
I think if you really wanted performance and whatnot, you might end up compiling the queries to another target and just reusing them.
I could see myself writing a Lua DSL around compiling these kinds of `enter/exit` query stanzas, or an SQL one too.
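Roughly what I mean by the `in_section` bookkeeping, as a hypothetical TypeScript sketch. The node shape and the traversal are simplified assumptions, not any tool's actual API; the point is only the contrast between threading state through enter/exit and asking about the path.

```typescript
// Simplified node shape for the sketch; real tree-sitter nodes carry much more.
interface Node {
  type: string;
  children: Node[];
}

// Manual state tracking: the `in_section` pattern the enter/exit style forces on you.
function countCodeBlocksInSections(root: Node): number {
  let sectionDepth = 0; // depth counter so nested sections still behave
  let count = 0;

  const walk = (node: Node) => {
    if (node.type === "section") sectionDepth++;              // enter
    if (sectionDepth > 0 && node.type === "code_block") count++;
    node.children.forEach(walk);
    if (node.type === "section") sectionDepth--;              // exit
  };

  walk(root);
  return count;
}

// With a path query the flag disappears: the condition becomes "a code_block
// with a section somewhere on its path to the root", which the query engine
// can answer without the user threading state through enter/exit handlers.
```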
I get nervous when I read people carving out exceptions and naming "good" CEOs or "good" celebrities. Lots of skeletons come out later. I don't like to put people on a pedestal, especially those we don't know intimately well.
Funnily enough that's also often the issue with engineers - good at technical things but not so good elsewhere. There's a reason we all have different jobs.
>Funnily enough that's also often the issue with engineers - good at technical things but not so good elsewhere.
Exactly. There are plenty of examples of people we consider great engineers in history. Does it matter that some of them may have been terrible marriage partners? When we're discussing their engineering accomplishments, no. When we're talking about someone as a great scientist, engineer, business leader, etc., we're not also claiming that they're literally a saint. Of course they have flaws in other parts of their lives; everyone does.
It's pretty cool that I can read "anablibg" and know that means "enabling." The brain is pretty neat. I wonder if LLMs would get it too. They probably would.
> I encountered the typo "anablibg" in the sentence "I wonder how much help they had by asahi doing a lot of the kernel and ecosystem work anablibg 16k pages." What did they actually mean?
GPT-4o and Sonnet 3.5 understood it perfectly. This isn't really a problem for the large models.
For local small models:
* Gemma2 9b did not get it and thought it meant "analyzing".
* Codestral (22b) did not get it and thought it meant "allocating".
* Phi3 Mini failed spectacularly.
* Phi3 14b and Qwen2 did not get it and thought it was "annotating".
* Mistral-nemo thought it was a portmanteau "anabling" as a combination of "an" and "enabling". Partial credit for being close and some creativity?
I wonder if they'd do better if there was the context that it's in a thread titled "Adding 16 kb page size to Android"? The "analyzing" interpretation is plausible if you don't know what 16k pages, kernels, Asahi, etc. are.
Local LLM topics are a treadmill: "what's best and what's preferred" changes basically weekly to monthly in a rapidly evolving field. Right now I actually tend to gravitate to Gemma2 9b for coding assistance on TypeScript work or general question-and-answer stuff. Its embedded knowledge and speed on the computers I have (32GB M2 Max, 16GB M1 Air, 4080 gaming desktop) make for a good balance while still letting me use the computer for other things: bigger models limit what else I can run simultaneously and are slower than my reading speed, while smaller models have less utility, and their speed increase is pointless if they're dumb.
Personally, when I read the comment my brain kinda skipped over the word; since it contained "lib", I assumed it was some obscure library that I didn't care about. It doesn't fit grammatically, but I didn't give it enough thought to notice.
I remember reading somewhere that LLMs are actually fantastic at reading heavily mistyped sentences! Mistyped to a level where humans actually struggle.
As someone who stopped getting involved in blockchain "tech" 12 years ago because of the prevalence of scams and bad actors and the lack of interesting tech beyond the Merkle tree, what's great about it?
FWIW I am genuinely asking. I don't know anything about the current tech. There's something about "zero knowledge proofs" but I don't understand how much of that is used in practice for real blockchain things vs just being research.
As far as I know, the throughput of blockchain transactions at scale is miserably slow and expensive, and the usual solution is some kind of side channel that skips full validation.
AFAIK, distributed computation on the blockchain isn't really used for much beyond converting between currencies and minting new ones.
What is the great tech that we got from the blockchain revolution?
But zk-based, genuinely decentralized consensus now does 400 tps, and it's extraordinary when you think about it and all the safety and security properties it brings.
And that's with proof-of-stake, of course, with decentralized sequencers for L2s.
But I get that people here prefer centralized databases, managed by admins and censorship-empowering platforms. Your banking stack looks like it's designed for fraud too: manual operations and months-long audits full of errors, but that is by design.
Thanks everyone for all the downvotes.
For many of us it isn't that we think the status quo is the RightWay™ - we just aren't convinced that crypto as it currently is presents a better answer. It fixes some problems, but adds a number of its own that many of us don't think are currently worth the compromise for our needs.
As you said yourself:
> The crypto ecosystem is shady, I know, but the tech is great
That "but" is not enough for me to want to take part. Yes, the tech is useful; heck, I use it for other things (blockchains existed as auditing mechanisms long before cryptocurrencies), but I'm not going to encourage others to take part in an ecosystem as shady as crypto's.
> Thanks everyone for all the downvotes.
I don't think you are getting downvoted for supporting crypto; more likely it's because you basically said "you know that article you are all discussing? Well, I think you'll want to know that I didn't bother to read it", then without a hint of irony made assertions of "angst and negativity".
And if I might make a mental health suggestion: caring about online downvotes is seldom part of a path to happiness :)
The main problem with blockchain is identical to the one with LLMs. When snake oil salesmen try to apply the same solution to every problem, you stop wasting your time with those salesmen.
Both can be useful now and then, but the legit uses are lost in the noise.
And for blockchain... it was launched with the promise of decentralized currency. But we had decentralized currency in the physical world until the past few hundred years, when we abandoned it in favor of centralized currency for some reason. I don't know, reliability perhaps?
[2] 1982 for blockchains/trees as part of a distributed protocol, in the sense people generally mean when they use the words now³; hash chains/trees themselves go back at least as far as 1979, when Ralph Merkle patented the idea.
Very much so. Is there a problem with that? To what time period would you attribute their creation?
In fact it is only the 70s if you mean networks that learn via backprop & similar methods. Some theoretical work on artificial neurons was done in the 40s.
The point is whatever you said in defense of blockchain/crypto applies or does not apply to neural networks/LLMs in equal measure.
I for one fail to see the difference between these two kinds of snake oil.
> Some theoretical work on artificial neurons was done in the 40s.
"The perceptron was invented in 1943 by Warren McCulloch and Walter Pitts. The first hardware implementation was Mark I Perceptron machine built in 1957"
> whatever you said in defense of blockchain/crypto
You seem to be labouring under the impression that blockchain and cryptocurrencies are one and the same. The point you seem to be missing is that I'm saying they are not. Blockchains (usually actually trees, like Merkle trees) are a thing that existed long before cryptocurrencies, which are just one application of the technique.
> I for one fail to see the difference between these two kinds of snake oil.
The gaggle of quackish salespeople with miracle cures based on LLMs is pretty much the same sort that touted miracle cures based on cryptocurrencies, yes. But LLMs are one use of neural networks, and crypto/proof-of-work is one use of blockchains.
This all started with me correcting "And for blockchain... it was launched with the promise of decentralized currency.", which incorrectly equates blockchain with cryptocurrency writ large.
> > Some theoretical work on artificial neurons was done in the 40s.
> "The perceptron was invented in 1943 by Warren McCulloch and Walter Pitts.
Exactly. You've just repeated my sentence with a little more detail.
40s: theory
50s: attempts at practical implementation
early 70s: backprop methods (as we currently mean the term with respect to neural networks; backpropagation as a more general concept existed before that) first published, starting that decade's big excitement over the potential of neural networks.
Gold is and has been a decentralized currency for a very long time. It’s mostly just very inconvenient to transport.
> Then we abandoned it in favor of centralized currency for some reason. I don't know, reliability perhaps?
The global economy practically requires a centralized currency, because the value of your currency versus other countries' currencies becomes extremely important for trading in a global economy (importers want a high-value currency, exporters want a low one).
It’s also a requirement to do financial meddling like what the US has been doing with interest rates to curb inflation. None of that is possible on the blockchain without a central authority.
> Gold is and has been a decentralized currency for a very long time. It’s mostly just very inconvenient to transport.
Even precious-metal coins became endorsed by one authority or another (the cities/banks/little kingdoms stamping the coins), because you as a normal person don't have the resources to validate every single piece of gold or silver you are paid with.
There was also a short period when every third bank had its own paper currency. That seems to be gone too, perhaps because maintaining a list of banks you could trust was too much for a normal person.
I don’t think having a centralized validation authority makes a currency centralized. Centralized currency usually implies the means to control that currency. While the monarch may control the supply of gold minted into coins, they don’t control the supply of gold itself.
It would have been an inconvenient currency for small transactions, but it’s still a currency.
The bank currencies were weird. IIRC, some of that was wrapped up in the Civil War and the Confederate currency being "official" but also basically worthless towards the end of the war. I think the Great Depression killed them, when banks became insolvent and their currencies became worthless.
> The main problem with blockchain is identical to the one with LLMs. When snake oil salesmen try to apply the same solution to every problem, you stop wasting your time with those salesmen.
+100
Rule in blockchain: whenever money is involved beyond paying for services/infra like AWS, there is a problem.
> I don't think you are getting downvoted for supporting crypto
Still, every post of mine that is more or less supportive of crypto gets downvoted. And I am the first to say the ecosystem is one of the worst in tech, so it's always only mild support.
But yes, you're right, it's probably sem web people overreacting to _my_ rant :)