Plenty. I assumed that the code examples had been cleaned up manually, so instead I looked at a few random "Caveats, alternatives, edge cases" sections. These contain errors typically made by LLMs, such as suggesting features that don't exist (std.mem.terminated), are non-public (argvToScriptCommandLineWindows), or have been removed (std.BoundedArray). These sections also surface irrelevant stdlib and compiler implementation details.
This looks like more data towards the "LLMs were involved" side of the argument, but as my other comment pointed out, that might not be an issue.
We're used to errata and fixing up stuff produced by humans, so if we can fix this resource, it might actually be valuable and more useful than anything that existed before it. Maybe.
One of my things with AI is that if we assume it is there to replace humans, we are always going to find it disappointing. If we use it as a tool to augment, we might find it very useful.
A colleague used to describe it (long before GenAI, when we were talking about technology automation more generally) as follows: "we're not trying to build a super intelligent killer robot to replace Deidre in accounts. Deidre knows things. We just want to give her better tools".
So, it seems like this needs some editing, but it still has value if we want it to have value. I'd rather this was fixed than thrown away (I'm biased, I want to learn systems programming in zig and want a good resource to do so), and yes the author should have been more upfront about it, and asked for reviewers, but we have it now. What to do?
There's a difference between the author being more upfront about it and straight-up lying in multiple locations that zero AI is involved. It's stated on the landing page, the documentation, and GitHub - and there might be more locations I haven't seen.
Personally, I would want no involvement in a project where the maintainer is this manipulative and I would find it a tragedy if any people contributed to their project.
> and yes the author should have been more upfront about it
They should not have lied about it. That's not someone I would want to trust and support. There's probably a good reason why they decided to stay anonymous.
We really are in the trenches. How is this garbage #1 on the front page of *HN* right now?
Even if it was totally legitimate, the "landing page" (its design) and the headline ("Learning Zig is not just about adding a language to your resume. It is about fundamentally changing how you think about software."?????) should discredit it immediately.
When was the front page of HN that impressive anyway? It has always been the latest fad, and the first to comment "the right thing to say" gets rewarded with fake internet points.
I seem to remember seeing this a week or two ago, and it was very obviously AI generated. (For those unfamiliar with Zig, AI is awful at generating Zig code: small sample dataset and the language updates faster than the models.) Reading it today I had a hard time spotting issues. So I think the author put a fair amount of work into cleaning up hallucinations and fixing inaccuracies.
I literally just came across this resource a couple of days ago and was going to go through it this week as a way to get up to speed on Zig. Glad this popped up on HN so I can avoid the AI hallucinations steering me off track.
> The Zigbook intentionally contains no AI-generated content—it is hand-written, carefully curated, and continuously updated to reflect the latest language features and best practices.
The author could of course be lying. But why would you use AI and then very explicitly call out that you’re not using AI?
There are too many things off about the origin and author to not be suspicious of it. I'm not sure what the motivation was, but it seems likely that AI was involved. I do think they used the Zig source code heavily, and put together a pipeline of some sort feeding relevant context into the LLM, or maybe just Codex or whatever instructed to read in the source.
It seems like it had to take quite a bit of effort to make, and is interesting on its own. And I would trust it more if I knew how it was made (LLMs or not).
Because AI content is, at minimum, controversial nowadays. And if you are OK with lying about authorship, then it is not much further down the pole to embellish the lie a bit more.
I looked into that project issue you're referencing. There is absolutely zero mention of Zig labeled blocks in that exchange. There is no misunderstanding or confusion whatsoever.
It's a formatting bug with Zig labeled blocks, and the response was a screenshot of code without one, saying (paraphrasing) "lgtm, it must be on your end."
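For readers unfamiliar with Zig, here is a minimal, hypothetical sketch of a labeled block (not the code from the issue) just to show the construct the formatter was reportedly mishandling: the blk: label names the block, and break :blk yields a value from it.

    const std = @import("std");

    pub fn main() void {
        // Labeled block: `blk:` names the block, and `break :blk`
        // returns a value from it, so a multi-statement computation
        // can be used as an expression.
        const sum = blk: {
            var total: u32 = 0;
            for ([_]u32{ 1, 2, 3 }) |n| total += n;
            break :blk total;
        };
        std.debug.print("sum = {d}\n", .{sum});
    }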
I'd love it if we can stop the "Oh, this might be AI, so it's probably crap" thing that has taken over HN recently.
1. There is no evidence this is AI generated. The author claims it wasn't, and on the specific issue you cite, he explains why he's struggling with understanding it, even if the answer is "obvious" to most people here.
2. Even if it were AI generated, that does not automatically make it worthless. In fact, this looks pretty decent as a resource. Producing learning material is one of the few areas where we can be reasonably confident AI can add value, if the tools are used carefully - it's a lot better at that than at producing working software, because synthesising knowledge seen elsewhere and moving it into a new, relatable paradigm (which is what LLMs do, and excel at) is the job of teaching.
3. If it's maintained or not is neither here nor there - can it provide value to somebody right now, today? If yes, it's worth sharing today. It might not be in 6 months.
4. If there are hallucinations, we'll find them, settle the claim that it is AI generated one way or another, and decide the overall value. If there is one hallucination per paragraph, it's a problem. If it's one every 5 chapters, it might be, but probably isn't. If it's one in 62 chapters, it's beating the error rate of human writers by quite some way.
Yes, the GitHub history looks "off", but maybe they didn't want to develop in public and just wanted to get a clean v1.0 out there. Maybe it was all AI generated and they're hiding. I'm not sure it matters, to be honest.
But I do find it grating that every time somebody even suspects an LLM was involved, there is a rush of upvotes for "calling it out". This isn't rational thinking. It's not using data to make decisions, it's not logical to assume all LLM-assisted writing is slop (even if some of it is), and it's actually not helpful in this case to somebody who is keen to learn Zig and trying to decide whether this resource is useful or not: there are many programming tutorials written by human experts that are utterly useless, and this might be a lot better.
That didn't happen.
And if it did, it wasn't that bad.
And if it was, that's not a big deal.
And if it is, that's not my fault.
And if it was, I didn't mean it.
And if I did, you deserved it.
> 1. There is no evidence this is AI generated. The author claims it wasn't, and on the specific issue you cite, he explains why he's struggling with understanding it, even if the answer is "obvious" to most people here.
There is, actually: copy the introduction into Pangram and it will say it's 100% AI generated.
> 2. Even if it were AI generated, that does not automatically make it worthless.
It does make it automatically worthless if the author claims it's hand-made.
How am I supposed to trust this author if they just lie about things upfront? What worth does learning material have if it's written by a liar? How can I be sure the author isn't just lying with lots of information throughout the book?
I was pretty skeptical too, but it looks legit to me. I've been doing Zig off and on for several years, and have read through the things I feel like I have a good understanding of (though I'm not working on the compiler, contributing to the language, etc.) and they are explained correctly in a logical/thoughtful way. I also work with LLMs a ton at work, and you'd have to spoon-feed the model to get outputs this cohesive.
All of this started before ChatGPT. There are charts showing it - sorry, I can't remember the source.
I guess I’m just annoyed that everyone in the comments is reaffirming the "AI is stealing jobs" narrative, when half the studies coming out say it’s actually wasting people’s time and that people are poor judges of their own productivity.
It just feels like AI is a convenient excuse for businesses to cut costs since the economy is crap, but no one wants to admit it for fear of driving their stock price down.
The author's argument is framed more widely than just LLMs. He also discusses robots, teleoperation, and other areas where workers in the middle of the bell curve seem especially vulnerable to displacement.
I accept, though, your point that economic factors not directly related to AI are also playing a role. Presumably economists are now trying to pick apart the effect of each factor on the job market.
Yeah, I'm employed as a research assistant as part of the Master's program currently. There are jobs in government, non-profits, and academia potentially after. I've never loved money (except for the flexibility having it gives me), so after a decade of engineering I have several hundred thousand saved up; while grad school is an 82% cut in pre-tax pay, I can withdraw 1% a year from my investments and live fine. Even once I'm out of school my pay will never be as good as it was in engineering, but I'll be happier, presumably.
I'm still figuring out exactly what the research will be, but the plan is essentially data science applied to bird migration patterns (lots of statistics and modelling currently). Overall, if you like birds and don't like money, a strong math/tech background is apparently useful for ecology research - the idea being that it's easier to teach me about birds than to teach an animal science person data science and programming (though I did take an undergrad ecology class before applying to ecology programs).
People I know in academia are also having a terrible time. Grant funding is in the toilet. The focus is on providing for current staff not hiring. No one is leaving because no one has anywhere to go.
My guess is I'll end up in government or an NGO but I'm probably going to do a PhD before that so getting a real job is at least 5 years away. The previous grad students for my advisor are all employed with decent jobs so I'm not worried, especially since I have a pretty unique skill set for the field and strong stats fundamentals.
Edit: engineers are always skeptical of my career change but my friends actually in the life sciences are more confident I'll be able to figure it out.
The previous grad students came out in a different economic context. Things have changed remarkably for the sciences in just this year alone. Grad schools have actually rescinded offers because they no longer have funding for first years and faculty don’t have funding for taking on students. No one has seen anything like this before.
Seems to be kind of a running theme of the last few years, isn't it? I know some of these major upsets and changes to the way things are done have always been there in some ways, but it feels like there were never quite as many alarm bells ringing at once about unrelated existing systems catastrophically failing.
Everyone in my age group (early-mid 20s) is sure having a fun time right now.
I'm flexible and I'll have a wide variety of marketable skills so I'm sure I'll figure something out eventually.
I think I come across as a lot less anxious than people expect because I've just accepted these challenges as a cost I'm going to have to pay. Trying to change careers so far has already really sucked in many ways (though in more ways it's been a real joy) but I actually handle this sort of stress okay. Turns out what I handle much worse is not really believing in my job.
Moved from private to government and couldn't be happier. Look for a state position so that lunacy like the current admin can't touch you down the road.
Tier 5 requires domain expertise until we reach AGI or something very different from the latest LLMs.
I don’t think the frontier labs have the bandwidth or domain knowledge (or dare I say skills) to do tier 5 tasks well. Even their chat UIs leave a lot to be desired and that should be their core competency.
Ladybird will be a Firefox alternative, nothing more. It can't be anything more, by definition. People are not using Chrome, Edge or Safari because they're great browsers.
They use them because they're preinstalled and good enough. They don't care, and they won't care in a future where Ladybird is a thing.
Ask 60% of their (Chrome, Edge, Safari) userbase, and they won't even be able to tell you what their browser is called.
> I’m guessing Ladybird will prove you wrong in due time
It'll be a usable product, but it will be extremely, extremely niche, until the dev burns out or just quits it.
I hope I'm wrong, but a browser is an XXL-type project and needs proper funding (meaning there should be a real reason for it to exist, not an altruistic "let's have an alternative because reasons").
Modern web browsers are in the range of 30 million LOC, probably 50% of that is just pure implementation of web platform standards and engine work.
Do you just need to advertise stuff among content creators these days with common sense going out of the window? It'll take them a decade to catch up without any engineering funding at the level that Apple/Google/Mozilla have.
> Do you just need to advertise stuff among content creators these days with common sense going out of the window?
I’m not a content creator and I don’t really care about Ladybird. I use Safari.
I’m just pointing out that browsers have decades of legacy cruft from missteps deciding what the web even should be, and someone smart can carve out a path to covering 90% of use cases with 10% of the effort and code. And there are the huge organizational costs Google and others pay that a small organization doesn’t have to.
Your argument is the same as looking at a large company (say Microsoft) and saying no one can compete without trillions of dollars and tens of thousands of engineers. Ladybird has the benefit of hindsight, as well as a non-idiotic structure (I assume).
It's not defeatism at all. I think it's just important to acknowledge that a browser, or software at the scale of Microsoft is real work, these are objectively gigantic engineering efforts and not all of the people at those firms are stupid.
If you're really smart and you say "I can do it with half or a quarter of the resources with hindsight", sure, I might give you the benefit of the doubt, but if you're going to claim you can do it with 0.1% of the resources in a volunteer Discord-server effort, no. Not because I wouldn't be happy if that was possible, but because that's not how the world works. Linux is able to compete with Microsoft because there are now large billion-dollar companies like RedHat, Steam and others investing in its development. It takes real money and developer time.
And that's the second point, Mozilla has to make these compromises because they are one of the few companies that actually maintains an independent software project at this scale. And if any other competitor ever wants to get there, they'll need to answer these funding questions too. Even if they're ten times as clever, they'll still need tens or hundreds of millions.
Thanks! But what's being described there as "observable" is something other than the Observer pattern that "observable" comes from, which is closer to what it calls "signals". https://en.wikipedia.org/wiki/Observer_pattern
I think they are the same: Observable and the Observer pattern both require a manual subscription, and publishing is done through a callback offering the latest in a stream of values.
See https://github.com/tc39/proposal-signals?tab=readme-ov-file#... for more on how signals differ.
Mainly, there's no manual bookkeeping and the signal is kind of like a handle; it also allows lazy/computed signals that reference other signals and do the change tracking.
One interesting thing that most non-systems programmers don’t know is that memory and cpu performance have improved at completely different rates. That’s a large part of why we have x times faster CPUs but software is still slow.
The systems people worry more about memory usage for this reason, and prefer manual memory management.
> ... memory and cpu performance have improved at completely different rates.
This is overly simplified. To a first approximation, bandwidth has kept pace with CPU performance, and main memory latency is basically unchanged. My 1985 Amiga had 125ns main-memory latency, though the processor itself saw 250ns latency - current main memory latencies are in the 50-100ns range. Caches are what 'fix' this discrepancy.
You would need to clarify how manual memory management relates to this... (cache placement/control? copying GCs causing caching issues? something else?)
I’m not sure how much value is to be had here, and it’s unfortunate the author wasn’t honest about how it was created.
I wish I hadn’t submitted this so quickly, but I was excited about the new resource and the chapters I dug into looked good and accurate.
I worry about whether this will be maintained, if there are hallucinations, and if it’s worth investing time into.