
https://www.anthropic.com/research/tracing-thoughts-language...

The section about hallucinations is deeply relevant.

Namely, Claude sometimes provides plausible but incorrect chain-of-thought reasoning when its “true” computational path isn’t available. The model genuinely believes it’s giving a correct reasoning chain, but the interpretability microscope reveals it is constructing symbolic arguments backward from a conclusion.

https://en.wikipedia.org/wiki/On_Bullshit

This empirically confirms the “theory of bullshit” as a category distinct from lying. It suggests that “truth” emerges secondarily to symbolic coherence and plausibility.

This means knowledge itself is fundamentally symbolic-social, not merely correspondence to external fact.

Knowledge emerges from symbolic coherence, linguistic agreement, and social plausibility rather than purely from logical coherence or factual correctness.


While some of what you say is an interesting thought experiment, I think the second half of this argument has, as you'd put it, a low symbolic coherence and low plausibility.

Recognizing the relevance of coherence and plausibility need not imply that other aspects are any less relevant. Redefining truth merely because coherence is important and sometimes misinterpreted is not at all reasonable.

Logically, a falsehood can validly be derived from assumptions when those assumptions are false. That simple reasoning step alone is sufficient to explain how a coherent-looking reasoning chain can result in incorrect conclusions. Also, there are other ways a coherent-looking reasoning chain can fail. What you're saying is just not a convincing argument that we need to redefine what truth is.


Validity is not soundness. I wonder why people are only just beginning to realize what logicians have been studying for more than a century. This goes to show that most programming was never based on logic but on vibes. People were vibe coding with themselves long before AI became prominent.


For this to be true, everyone must be logically on the same page. They must share the same axioms. Everyone must be operating off the same data and must not make mistakes or have bias when evaluating it. Otherwise people will inevitably sometimes arrive at conflicting truths.

In reality it’s messy, and it’s not possible to discern falsehoods from truths with 100% certainty. Our scientific method does a pretty good job. But it’s not perfect.

You can’t retcon reality and say “well, retrospectively we know what happened and one side was just wrong”. That’s called history. It’s not a useful or practical working definition of truth when trying to evaluate your possible actions (individually, communally, socially, etc.) and make a decision in the moment.

I don’t think it’s accurate to say that we want to redefine truth. I think more accurately truth has inconvenient limitations and it’s arguably really nice most of the time to ignore them.


> Knowledge emerges from symbolic coherence, linguistic agreement, and social plausibility rather than purely from logical coherence or factual correctness.

This just seems like a redefinition of the word "knowledge" different from how it's commonly used. When most people say "knowledge" they mean beliefs that are also factually correct.


As a digression, the definition of knowledge as justified true belief runs into the Gettier problems:

    > Smith [...] has a justified belief that "Jones owns a Ford". Smith 
    > therefore (justifiably) concludes [...] that "Jones owns a Ford, or Brown 
    > is in Barcelona", even though Smith has no information whatsoever about 
    > the location of Brown. In fact, Jones does not own a Ford, but by sheer 
    > coincidence, Brown really is in Barcelona. Again, Smith had a belief that
    > was true and justified, but not knowledge.
Or from the 8th-century Indian philosopher Dharmottara:

   > Imagine that we are seeking water on a hot day. We suddenly see water, or so we 
   > think. In fact, we are not seeing water but a mirage, but when we reach the 
   > spot, we are lucky and find water right there under a rock. Can we say that we 
   > had genuine knowledge of water? The answer seems to be negative, for we were 
   > just lucky. 
More to the point, the definition of knowledge as linguistic agreement is convincingly supported by much of what has historically been common knowledge, such as the meddling of deities in human affairs, or that the people of Springfield are eating the cats.


I don’t think it’s so clear cut… Even the most adamant “facts are immutable” person can agree that we’ve had trouble “fact checking” social media objectively. Fluoride is healthy; a meta-analysis of the facts reveals fluoride may be unhealthy. The truth of the matter is by and large what’s socially cohesive for doctors’ and dentists’ narrative: that “fluoride is fine; any argument to the contrary—even the published meta-analysis—is politically motivated nonsense”.


You are just saying that distinguishing "knowledge" from "opinion" is difficult.


No, I’m saying I’ve seen reasonably minded experts in a field disagree over things-generally-considered-facts. I’ve seen social impetus and context shape the understanding of where to draw the line between fact and opinion. I do not believe there is an objective answer. I fundamentally believe Anthropic’s explanation is rooted in real phenomena and not just a self-serving statement to explain AI hallucination in a positive quasi-intellectual light.


> The model genuinely believes it’s giving a correct reasoning chain, but the interpretability microscope reveals it is constructing symbolic arguments backward from a conclusion.

Sounds very human. It's quite common that we make a decision based on intuition, and the reasons we give are just post-hoc justification (for ourselves and others).


> Sounds very human

well yes, of course it does, that article goes out of its way to anthropomorphize LLMs, while providing very little substance


Isn't the point of computers to have machines that improve on default human weaknesses, not just reproduce them at scale?


They've largely been complementary strengths, with less overlap. But human language is state-of-the-art, after hundreds of thousands of years of "development". It seems like reproducing SOTA (i.e. the current ongoing effort) is a good milestone for a computer algorithm as it gains language overlap with us.


Why would computers have just one “point”? They have been used for endless purposes and those uses will expand forever


Exactly, most of us behave in almost the same way as AI does. We finally have a mirror to reflect upon.


The other very human thing to do is invent disciplines of thought so that we don't just constantly spew bullshit all the time. For example, you could have a discipline about "pursuit of facts", which means that before you say something you mentally check yourself and make sure it's actually factually correct. This is how large portions of the populace avoid walking around spewing made-up facts and bullshit. In our rush to anthropomorphize ML systems we often forget that there are a lot of disciplines that humans are painstakingly taught from birth, and those disciplines often give rise to behaviors that the ML-based system is incapable of, like saying "I don't know the answer to that" or "I think that might be an unanswerable question."


Are they incapable? Or are they just not taught the discipline?


In a way, the main problem with LLMs isn't that they are wrong sometimes. We humans are used to that. We encounter people who are professionally wrong all the time. Politicians, con-men, scammers, even people who are just honestly wrong. We have evaluation metrics for those things. Those metrics are flawed because there are humans on the other end intelligently gaming those too, but generally speaking we're all at least trying.

LLMs don't fit those signals properly. They always sound like an intelligent person who knows what they are talking about, even when spewing absolute garbage. Even very intelligent people, even very intelligent people in the field of AI research are routinely bamboozled by the sheer swaggering confidence these models convey in their own results.

My personal opinion is that any AI researcher who was shocked by the paper lynguist mentioned ought to be ashamed of themselves and their credulity. That was all obvious to me; I couldn't have told you the exact mechanism by which the arithmetic was being performed (though what it was doing was well within the realm of what I would have expected from a linguistic AI trying to do math), but the fact that its chain of reasoning bore no particular resemblance to how it drew its conclusions was always obvious. A neural net has no introspection on itself. It doesn't have any idea "why" it is doing what it is doing. It can't. There's no mechanism for that to even exist. We humans are not directly introspecting our own neural nets; we're building models of our own behavior and then consulting the models, and anyone with any practice doing that should be well aware of how those models can still completely fail to predict reality!

Does that mean the chain of reasoning is "false"? How, then, do we account for it improving performance on certain tasks? No: it means that it is occurring at a higher and different level. It is quite like humans imputing reasons to their gut impulses. With training, combining gut impulses with careful reasoning is actually a very, very potent way to solve problems. The reasoning system needs training or it flies around like an unconstrained fire hose, uncontrollably spraying everything around, but brought under control it is the most powerful system we know. But the models should always have been read as providing a rationalization rather than an explanation of something they couldn't possibly have been explaining. I'm also not convinced the models have that "training" either, nor is it obvious to me how to give it to them.

(You can't just prompt it into being human; it's going to be more complicated than just telling a model to "be carefully rational". Intensive and careful RLHF is a bare minimum, but finding humans who can get it right will itself be a challenge, and it's possible that what we're looking for simply doesn't exist in the bias-set of the LLM technology, which is my base case at this point.)


I haven’t used Cursor yet. Some colleagues have and seemed happy. I’ve had GitHub Copilot on for what feels like a couple of years; a few days ago VS Code was extended to provide an agentic workflow, MCP, bring-your-own-key, and it interprets instructions in a codebase. But the UX and the outputs are bad in over 3/4 of cases. It’s a nuisance to me. It injects bad code even though it has the full context. Is Cursor genuinely any better?

To me it feels like the people who benefit from, or at least enjoy, that sort of assistance and I solve vastly different problems and code very differently.

I’ve done exhausting code reviews on juniors’ and middles’ PRs but what I’ve been feeling lately is that I’m reviewing changes introduced by a very naive poster. It doesn’t even type-check. Regardless of whether it’s Claude 3.7, o1, o3-mini, or a few models from Hugging Face.

I don’t understand how people find that useful. Yesterday I literally wasted half an hour on a test suite setup a colleague of mine had introduced to the codebase that wasn’t good, and I tried delegating the fix to several of the Copilot models. All of them missed the point, and some even introduced security vulnerabilities in the process by breaking JWT validation. I tried “vibe coding” it till it works, until I gave up in frustration and just used an ordinary search engine, which led me to the docs, where I immediately found the right knob. I reverted all that crap and did the simple and correct thing. So my conclusion was simple: vibe coding and LLMs made the codebase unnecessarily more complicated and wasted my time. How on earth do people code whole apps with that?


I think it works until it doesn't. The nature of technical debt of this kind means you can sort of coast on things until the complexity of the system reaches such a level that it's effectively painted into a corner, and nothing but a massive teardown will do as a fix.


> The model genuinely believes it’s giving a correct reasoning chain

The model doesn't "genuinely believe" anything.


Yes

https://link.springer.com/article/10.1007/s10676-024-09775-5

> # ChatGPT is bullshit

> Recently, there has been considerable interest in large language models: machine learning systems which produce human-like text and dialogue. Applications of these systems have been plagued by persistent inaccuracies in their output; these are often called “AI hallucinations”. We argue that these falsehoods, and the overall activity of large language models, is better understood as bullshit in the sense explored by Frankfurt (On Bullshit, Princeton, 2005): the models are in an important way indifferent to the truth of their outputs. We distinguish two ways in which the models can be said to be bullshitters, and argue that they clearly meet at least one of these definitions. We further argue that describing AI misrepresentations as bullshit is both a more useful and more accurate way of predicting and discussing the behaviour of these systems.


Offtopic but I'm still sad that "On Bullshit" didn't go for that highest form of book titles, the single noun like "Capital", "Sapiens", etc


Starting with "On" is cooler in the philosophical tradition, though, going back to classical and medieval times, e.g. On Interpretation, On the Heavens, etc. by Aristotle, and De Veritate, De Malo, etc. by Aquinas. Capital is actually "Das Kapital", too.


It's very hipster, Das Kapital. (with the dot/period, check the cover https://en.wikipedia.org/wiki/Das_Kapital#/media/File:Zentra... )

But in English it would be just "Capital", right? (The uncountable nouns are rarely used with articles, it's "happiness" not "the happiness". See also https://old.reddit.com/r/writing/comments/12hf5wd/comment/jf... )


Yeah so I meant the Piketty book, not Marx. But I googled it and turns out it's actually named "Capital in the Twenty-First Century", which disappoints me even more than "On Bullshit"


And, for the full picture it's probably important to consider that the main claim of the book is based on very unreliable data/methodology. (Though note that it does not necessarily make the claim false! See [1])

https://marginalrevolution.com/marginalrevolution/2017/10/pi...

And then later, similar claims about inequality were made using similarly bad methodology (data).

https://marginalrevolution.com/marginalrevolution/2023/12/th...

[1] "Indeed, in some cases, Sutch argues that it has risen more than Piketty claims. Sutch is rather a journeyman of economic history upset not about Piketty’s conclusions but about the methods Piketty used to reach those conclusions."


You misunderstand. I never read it. I simply liked the title, at least before I understood that "Capital" wasn't actually the title.


See also previous discussion in 2016: https://news.ycombinator.com/item?id=12069662

One commenter shows that this was caused by Bayer's Nemacur (an anti-nematode pesticide), which was only banned in 2021(!) https://eur-lex.europa.eu/eli/reg_impl/2020/1246/oj

Here is an overview of the EU pesticide regulation updates: https://indianembassybrussels.gov.in/pdf/Pesticide_Monitorin...

Quoting the entry for the pesticide in question:

> Fenamiphos: Non-renewal of the active substance. A nematicide used to kill plant parasite roundworm and thrips infestation in fruiting vegetables (i.e. tomato, aubergine, cucumber, pepper and courgette), herbaceous ornamentals and in nursery stock (both perennial and herbaceous species).

> Effective date: 23 Sept., 2020 (for non-approval of substance); 23 March, 2021 (for withdrawal of authorization of plant protection products); 23 Sept., 2021 (for grace period given if any).

> Reason for non-approval: Potential acute risk for consumers - despite of incomplete data, was identified for all the representative uses concerning fruiting vegetables.

> Usage in India: Used in wide variety of vegetables such as okra, cauliflowers etc. and fruit crops.


Thank you! I had a very similar feeling, as I had read another long-form investigative report on Covid (I don’t remember where I put the link now), which especially paralleled The Lancet and high-profile officials!


It feels like whatever hood one looks under, or whichever layer of paint one looks behind, when it’s a relatively new change in society one always finds the root cause to be neoliberalism.

Your comment reconfirms this especially vividly with its last statement: productivity, and it alone, goes up; quality or service is not even part of the equation.


I get why people took issue with my parent comment, because it seems like I am just whining that economists are measuring the only things they can measure. But it's like the fish asking "what's water?".

Why do we pay people to measure these things? Why do they measure what they measure and not something else? Why is this type of thinking dominant among the technocratic elite? Especially when the view from the ground floor only seems to be getting worse and worse.


Most of the profession of economics in the last several decades has been focused on figuring out how to measure difficult-to-measure truths about how people behave. That's what behavioral economics is about. The other name on the UChicago institute you don't like, Gary Becker, won a Nobel for finding ways to creatively study behavior: family structure, why people commit crime, the societal benefits of education, and the damage done by discrimination.

You might argue that we shouldn't measure things at all -- that it is better to live in darkness and ignorance, and make vague gestures about the way things are with no evidence or rigorous thought. Measuring is how we find truth. Sometimes it is hard. But throwing away our measuring tools is not the path to enlightenment.

Many people choose to ex ante dislike economists, and dismiss anything any of them have to say without further study, because there is a risk that the results of quantifying the world might disagree with their preconceptions.


I understand the value of economics as a social science, and there are some policies that economists propose that I like. However, a lot of this stuff is either just justification for neoliberal policies, or just common sense.

For example, a neoliberal economist like Becker would say: "There's actually an economic disincentive for racial discrimination"

Sure, if your economic analysis does not take into account class conflict, which most orthodox economists do not. Discrimination is borne intentionally out of class conflict as a tool of dividing the working class.

Orthodox economics is like having a bunch of scientists devoted to a pre-heliocentric model of the solar system. They can describe isolated phenomena within the accepted framework, but they will never be able to accurately describe (in full) the world around them because their foundational world model is incorrect.


Correct. Try Ms Rachel for your younger child instead of Cocomelon. The very young ones tend to love her.


Agree that Ms Rachel has the same effect, but with the difference that it teaches kids something.


> After the Ottomans took Constantinople, the use of Tyrian purple died out. Popes and cardinals changed their vestments from purple to red, and the color disappeared from use.

The article was almost right until here, but this is just unfounded. The truth is: the Byzantine Empire fell in 1204, and then Tyrian purple fell into disuse. The restored late Byzantine Empire under the Palaiologos dynasty had already switched to red and gold as its royal colors, and the Palaiologos-to-Ottoman transition didn’t change this.


I actually think it’s not a coincidence and they specifically built this M3 Ultra for DeepSeek R1 4-bit. They also highlight in their press release that they tested it with 600B class LLMs (DeepSeek R1 without referring to it by name). And they specifically did not stop at 256 GB RAM to make this happen. Maybe I’m reading too much into it.
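For context on why they wouldn’t stop at 256 GB, here is a rough back-of-the-envelope sketch (assuming R1’s published 671B parameter count, and ignoring KV cache and runtime overhead, which push the real requirement higher):

    # Rough weight-only memory estimate for a quantized model (a sketch;
    # actual usage is higher due to KV cache, activations, and runtime overhead).
    def weight_gb(params_billions: float, bits_per_weight: float) -> float:
        bytes_total = params_billions * 1e9 * bits_per_weight / 8
        return bytes_total / 1e9  # decimal GB

    print(weight_gb(671, 4))  # ~335 GB at 4-bit: too big for 256 GB, fits in 512 GB
    print(weight_gb(671, 8))  # ~671 GB at 8-bit: exceeds even 512 GB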


Pretty sure this has absolutely nothing to do with DeepSeek or even local LLMs at large, which have been a thing for a while and an obvious use case since the original Llama leak and llama.cpp coming around.

Fact is, Mac Pros in the Intel days supported 1.5TB of RAM in some configurations[1], and that was the expectation of their high-end customer base 6 years ago. They needed to address the gap for those customers, so they would have shipped such a product regardless. Local LLM is the cherry on top. DeepSeek in particular almost certainly had nothing to do with it. They will still need to double the supported RAM in their SoC to get there. Perhaps in a Mac Pro or a different quad-Max-glued chip.

[1]: https://support.apple.com/en-us/101639


The thing that people are excited about here is unified memory that the GPU can address. Mac Pro had discrete GPUs with their own memory.


I understand why they are excited about it—just pointing out it is a happy coincidence. They would have, and should have, made such a product to address the needs of RAM users alone, not VRAM in particular, before they could have a credible case to cut macOS releases for Intel.


Intel integrated graphics technically also used unified memory with standard DRAM.


Those also have terrible performance and worse bandwidth. I am not sure they are really relevant, to be honest.


Did the Xeons in the Mac Pro even have integrated graphics?


So did the Amiga, almost 40 years ago...


You mean this? ;) http://de.wikipedia.org/wiki/Datei:Amiga_1000_PAL.jpg

RIP Jay Miner who watched his unified memory daughters Agnus, Denise and Paula be slowly murdered by Jack Tramiel's vengeance against Irving Gould. [Why couldn't the shareholders have stormed their boardroom 180 days before the company ran out of cash, installed interim management who, in turn, would have brought back the megalomaniac Founder that would, until his dying breath, keep spreading their cash to the super brilliant geniuses that made all the magic chips happen and then turn the resulting empire over to ops people to make their workplace so uncomfortable they all retire early and live happily ever after on tropical islands and snowy mountain tops?]


Yep! Though one could argue the Amiga wasn't true unified memory due to the chip RAM limitations. Depending on the Agnus revision, you'd be limited to 512K, 1 meg, or 2 meg max of RAM addressable by the custom chips ("chip RAM").


Fun fact: M-series machines that are configured to use more than 75% of shared memory for the GPU can make the system go boom... something to do with assumptions macOS makes that could be fixed by someone with a "private key" to access kernel mode (maybe not a hardware limit).


I messed around with that setting on one of my Macs. I wanted to load a large LLM model and it needed more than 75% of shared memory.
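For anyone else poking at this, the knob usually pointed at is the GPU wired-memory limit sysctl. A minimal sketch of reading it, assuming the Sonoma-era name iogpu.wired_limit_mb (older macOS releases used a different name, so treat the exact sysctl as an assumption and verify on your machine):

    # Read the GPU wired-memory limit on Apple Silicon macOS (sketch; assumes
    # the sysctl is named iogpu.wired_limit_mb). A value of 0 means the default
    # policy of roughly 75% of unified memory. Raising it requires root, e.g.
    #   sudo sysctl iogpu.wired_limit_mb=114688   # ~112 GB on a 128 GB machine
    import subprocess

    def gpu_wired_limit_mb() -> int:
        out = subprocess.run(
            ["sysctl", "-n", "iogpu.wired_limit_mb"],
            capture_output=True, text=True, check=True,
        )
        return int(out.stdout.strip())

    if __name__ == "__main__":
        print(gpu_wired_limit_mb())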


That or it's the luckiest coincidence! In all seriousness, Apple is fairly consistent about not pushing specs that don't matter and >256GB is just unnecessary for most other common workloads. Factors like memory bandwidth, core count and consumption/heat would have higher impact.

That said, I doubt it was explicitly for R1, but rather based on where the industry was a few years ago, when GPT-3's 175B was SOTA and the industry was still looking larger. "As much memory as possible" is the name of the game for AI in a way that's not true for other workloads. It may not be true for AI forever either.


The high end Intel Macs supported over a TB of RAM, over 5 years ago. It's kinda crazy Apple's own high end chips didn't support more RAM. Also, the LLM use case isn't new... Though DeepSeek itself may be. RAM requirements always go up.


Just to clarify: there is an important difference between unified memory, meaning memory accessible by both CPU and GPU, and regular RAM that is only accessible by the CPU.


As mentioned elsewhere in this thread, unified memory has existed long before Apple released the M1 CPU, and in fact many Intel processors that Apple used before supported it (though the Mac pros that supported 1.5TB of RAM did not, as they did not have integrated graphics).

The presence of unified memory does not necessarily make a system better. It’s a trade off: the M-series systems have high memory bandwidth thanks to the large number of memory channels, and the integrated GPUs are faster than most others. But you can’t swap in a faster GPU, and when using large LLMs even a Mac Studio is quite slow compared to using discrete GPUs.


Design work on the Ultra would have started 2-3 years ago, and specs for memory at least 18 months ago. I’m not sure they had that kind of inside knowledge for what Deepseek specifically was doing that far in advance. Did Deepseek even know that long ago?


> they specifically built this M3 Ultra for DeepSeek R1 4-bit

Which came out in what, mid January? Yeah, there's no chance Apple (or anyone) has built a new chip in the last 45 days.


Don't they build these Macs just-in-time? The bandwidth doesn't change with the RAM, so surely it couldn't have been that hard to just... use higher capacity RAM modules?


"No chance?" But it has been reported that the next generation of Apple Silicon started production a few weeks ago. Those deliveries may enable Apple to release its remaining M3 Ultra SKUs for sale to the public (because it has something Better for its internal PCC build-out).

It also may point to other devices ᯅ depending upon such new Apple Silicon arriving sooner, rather than later. (Hey, I should start a YouTube channel or religion or something. /s)


No one is saying they built a new chip.

But the decision to come to market with a 512GB sku may have changed from not making sense to “people will buy this”.


Dies are designed in years.

This was just a coincidence.


What part of “no one is saying they designed a new chip” is lost here?


Sorry, none of us are fanboys trying to shape "Apple is great" narratives.


I don’t think you understand hardware timelines if you think this product had literally anything to do with anything DeepSeek.


Chip? Yes. Product? Not necessarily...

It's not completely out of the question that the 512gb version of M3 Ultra was built for their internal Apple silicon servers powering Private Compute Cloud, but not intended for consumer release, until a compelling use case suddenly arrived.

I don't _think_ this is what happened, but I wouldn't go as far as to call it impossible.


DeepSeek R1 came out Jan 20.

Literally impossible.


The scenario is that the 512gb M3 Ultra was validated for the Mac Studio, and in volume production for their servers, but a business decision was made to not offer more than a 256gb SKU for Mac Studio.

I don't think this happened, but it's absolutely not "literally impossible". Engineering takes time, artificial segmentation can be changed much more quickly.


From “internal only” to “delivered to customers” in 6 weeks is literally impossible.


This change is mostly just using higher density ICs on the assembly line and printing different box art with a SKU change. It does not take much time, especially if they had planned it as a possible product just in case management changed its mind.


That's absurd. Fabbing custom silicon is not something anybody does for a few thousand internal servers. The unit economics simply don't work. Plus Apple is using OpenAI to provide its larger models anyway, so the need never even existed.


Apple is positively building custom servers, and quantities are closer to the 100k range than 1000 [0]

But I agree they are not using m3 ultra for that. It wouldn’t make any sense.

0. https://www.theregister.com/AMP/2024/06/11/apple_built_ai_cl...


That could be why they're also selling it as the Mac Studio M3 Ultra


My thoughts too. This product was in the pipeline maybe 2-3 years ago. Maybe with LLMs getting popular a year ago they tried to fit more memory but it’s almost impossible to do that that close to a launch. Especially when memory is fused not just a module you can swap.


Your conclusion is correct but to be clear the memory is not "fused." It's soldered close to the main processor. Not even a Package-on-Package (two story) configuration.

See photo without heatspreader here: https://wccftech.com/apple-m2-ultra-soc-delidded-package-siz...


I think by "fused" I meant it's stuck onto the SoC module, not part of the SoC as I may have worded it. While you could maybe still add NANDs later in the manufacturing process, it's probably not easy, especially if you need more NANDs and a larger module, which might cause more design problems. The NAND is closer because the controller is in the SoC. So the memory controller would probably also change with higher memory sizes, which means this cannot be a last-minute change.


Sheesh, the...comments on that link.


$10k to run a 4 bit quantized model. Ouch.


That's today. What about tomorrow?


The M4 MacBook Pro with 128GB can run a 32B parameter model with 8-bit quantization just fine.


[flagged]


I'm downvoting you because your use of language is so annoying, not because I work for Apple.


So, Microsoft?


what?


Sorry, an apostrophe got lost in "PO's"


[flagged]


are you comparing the same models? How did you calculate the TOPS for M3 Ultra?


An M3 Ultra is two M3 Max chips connected via fabric, so physics.

Did not mean to shit on anyone's parade, but it's a trap for novices, with the caveat that you reportedly can't buy a GB10 until "May 2025" and the expectation that it will be severely supply constrained. For some (overfunded startups running on AI monkey code? Youtube Influencers?), that timing is an unacceptable risk, so I do expect these things to fly off the shelves and then hit eBay this Summer.


> they specifically built this M3 Ultra for DeepSeek R1 4-bit.

This makes sense. They started gluing M* chips together to make Mac Studios three years ago, which must have been in anticipation of DeepSeek R1 4-bit


Any ideas on power consumption? I wonder how much power that would use. It looks like it would be more efficient than everything else that currently exists.


Looks like up to 480W listed here

https://www.apple.com/mac-studio/specs/


Thanks!!


The M2 Ultra Mac Pro could reach a maximum of 330W according to Apple:

https://support.apple.com/en-us/102839

I assume it is similar.


Can you compile your game to target Mac? Why/why not? (Like is it a technical or business decision?)



Yes, curious about this. Looks like Mac and Linux support are available with Godot…


The juxtaposition of how the original comment starts, then the appearance of the verbatim “good comment”, then the spacecraft engravings made me laugh to the point of tears.

Especially the buildup from how bizarre the comments’ understanding of UB is to actually seeing one “in the wild”.


This was such a riveting and literary read, I enjoyed it and couldn’t put it away, like a novel where I was invested in the characters!

Are there any other such reads in the software engineering field?


Knuth?

He has argued in the past that the concept of Literate Programming

http://literateprogramming.com/

is the most important work he has done, and I highly recommend his various collections of lecture notes/papers including:

https://www.goodreads.com/book/show/112245.Literate_Programm...


A Discipline of Programming, Dijkstra

The Architecture of Concurrent Programs, Per Brinch Hansen

Literate programming--see http://www.literateprogramming.com/

I don't recommend Design Patterns, as these are elements needed if your programming language is inadequate.

There are many more.

