Anecdata: I've found most people don't have an issue with the vocabulary itself but rather with their attention spans. From what I've seen with family members and friends, the younger ones get exasperated by any longer stretch of text that isn't in very simple English.
A friend told me his daughter was one of the few who could actually sit through a whole reading session in her 2nd grade class. And these are mostly pick-and-choose books, so not forced literature they don't enjoy.
> Every extra second you sit in the (home)office adds to productivity
I'm not sure I believe that. I think additional hours worked will eventually decrease the output per unit of time, until you reach a peak after which every extra hour worked leads to an overall productivity loss.
It's also something I think is extremely hard to measure consistently, especially for your typical office worker.
If you can 100% reproduce the same generated code from the same prompts, even 5 years later, given the same versions and everything, then I'd say "Sure, go ahead and don't save the generated code, we can always regenerate it". As someone who spent some time in frontend development: we've been doing it like that for a long time with (MB+ of) generated code; keeping it in scm just isn't feasible long-term.
But given this is about LLMs, which people tend to run with temperature > 0, this is unlikely to be true, so I'd urge anyone to actually store the results (somewhere, maybe not in scm specifically), as otherwise you won't have any idea what the code was in the future.
> If you can 100% reproduce the same generated code from the same prompts, even 5 years later
Reproducible builds with deterministic stacks and local compilers are far from solved. Throwing in LLM randomness just makes for a spicier environment to not commit the generated code.
Temperature > 0 isn’t a problem as long as you can specify/save the random seed and everything else is deterministic. Of course, “as long as” is still a tall order here.
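To make the "save the seed" point concrete, here's a toy sketch (plain Python with a made-up three-token distribution, not a real LLM): temperature-style weighted sampling is fully reproducible once the RNG seed is pinned.

```python
import random

def sample_sequence(probs, seed, n=8):
    # Stand-in for temperature > 0 token sampling: weighted draws from a
    # toy next-token distribution. Fixing the RNG seed fixes the output.
    rng = random.Random(seed)
    return [rng.choices(range(len(probs)), weights=probs, k=1)[0]
            for _ in range(n)]

probs = [0.5, 0.3, 0.2]  # toy distribution over 3 "tokens"
# Same seed, same sequence -- randomness alone doesn't break reproducibility.
assert sample_sequence(probs, seed=42) == sample_sequence(probs, seed=42)
```

Of course this only covers the sampling step; it says nothing about the rest of the stack being deterministic.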
My understanding is that the implementation of modern hosted LLMs is nondeterministic even with known seed because the generated results are sensitive to a number of other factors including, but not limited to, other prompts running in the same batch.
> Now, when you send a request to one of the Gemini 2.5 models, if the request shares a common prefix as one of previous requests, then it’s eligible for a cache hit. We will dynamically pass cost savings back to you, providing the same 75% token discount.
> In order to increase the chance that your request contains a cache hit, you should keep the content at the beginning of the request the same and add things like a user's question or other additional context that might change from request to request at the end of the prompt.
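In code terms, that advice just amounts to keeping the stable part of the prompt first and appending the per-request part last. A sketch (the `STATIC_CONTEXT` string and `build_prompt` helper are hypothetical, not part of any real API):

```python
# Hypothetical prompt builder: the static context goes first so that
# successive requests share a common prefix and become cache-eligible.
STATIC_CONTEXT = "You are a support assistant. Follow the style guide below.\n"

def build_prompt(user_question: str) -> str:
    # Per-request content goes last, after the shared prefix.
    return STATIC_CONTEXT + "User question: " + user_question

p1 = build_prompt("How do I reset my password?")
p2 = build_prompt("What are your opening hours?")
# Both prompts share the same prefix, so the second can hit the cache.
assert p1.startswith(STATIC_CONTEXT) and p2.startswith(STATIC_CONTEXT)
```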
I didn't read it that way. If I understood correctly, generated code must be quarantined very tightly. And inevitably you need to edit/override generated code, and the manner in which you alter it must go through some kind of process, so the alteration is auditable and can again be clearly distinguished from generated code.
Tbh this all sounds very familiar and like classic data management/admin systems for regular businesses. The only difference is that the data is code and the admins are the engineers themselves so the temptation to "just" change things in place is too great. But I suspect it doesn't scale and is hard to manage etc.
> I feel like using a compiler is in a sense a code generator where you don't commit the actual output
Compilers are deterministic. Given the same input you always get the same output so there's no reason to store the output. If you don't get the same output we call it a compiler bug!
LLMs do not work this way.
(Aside: Am I the only one who feels the entire AI industry is predicated on replacing only development positions? We're looking at, what, 100bn invested, with almost no reduction in customers' operating costs unless the customer has developers.)
Determinism is predicated on what you consider to be the relevant inputs.
Many compilers are not deterministic when considering only the source files, or even the current time, as inputs. For example, any output produced by iterating over a hash table with pointer keys is likely to depend on ASLR and thus be nondeterministic, unless you consider the ASLR randomization to be one of the inputs. Any output that depends on directory iteration order is likely to be consistent on a single computer but vary across computers.
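You can see the same effect from Python (an analogy, not an actual compiler): `str` hashing is randomized per process via `PYTHONHASHSEED`, so set iteration order can vary between runs unless you canonicalize it, much like pointer-keyed hash tables under ASLR.

```python
# Symbol names, as a compiler might collect them into a hash-based set.
symbols = {"main", "init", "helper", "cleanup"}

emitted = list(symbols)      # iteration order may differ between runs
canonical = sorted(symbols)  # sorting makes the output deterministic again

print(canonical)  # -> ['cleanup', 'helper', 'init', 'main'], every run
```

This is why "deterministic" toolchains often go out of their way to sort anything that came out of a hash table before emitting it.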
LLMs aren’t magic. They’re software running on inputs like anything else, which means they’re deterministic if you constrain all the inputs.
LLMs are 100% absolutely not deterministic even if you constrain all of their inputs. This is obviously the case, apparent from even cursory experimentation with any LLM available today. Equating the determinism of a compiler given some source code as input with the determinism of an LLM given some user prompt as input is disingenuous in the extreme!
> If LLM generation was like a Makefile step, part of your build process, this concern would make a lot of sense. But nobody, anywhere, does that.
Top level comment of this thread, quoting the article:
> Reading through these commits sparked an idea: what if we treated prompts as the actual source code? Imagine version control systems where you commit the prompts used to generate features rather than the resulting implementation.
Ohhhhhhh. Thanks for clearing this up for me. I felt like I was going a little crazy (because, having missed that part of the thread, I sort of was). Appreciated!
To continue with your analogy: maybe they don't need to understand every detail, but they should know how it functions, what safety precautions to take, and when it is a better/more useful tool than what they're currently using.
That doesn't mean knowing every single bit there is to know about it, but a basic understanding will go a long way in correctly using it.
MS Office is far from a good piece of software itself, though. Frankly, the number of sub-menus and other bullshit I constantly have to fix for my parents doesn't make for a great experience either.
Mind, I barely use any Excel/Word/PowerPoint software myself, but I often have the feeling that a lot of user complaints about these types of things simply come down to: "It's not what I'm used to, therefore it's terrible."
Yep. With known software there's always this "learned helplessness" of dismissing problems with "ah yeah, this is how it is". Even when it's quirky, inconsistent or just broken.
With new stuff, the blame will always lie on the new software, even in situations where it's a lack of skill or attention on the user's part.
I remember a university I used to work at as a dev moving the classes of a few loud professors from the open-source Moodle to a paid product, and the professors basically replicating Moodle's discussion-board functionality by creating public wikis and hoping students wouldn't mess things up when editing.
One day a professor approached me wanting a way to prevent students from messing up the "fake discussion board". He got an earful from the Dean, who was nearby and footing the bill of a few thousand per month for the expensive SaaS.
This is not the first article I have read about it. Throughout all of them, though, one main question I still could not find an answer to is: stronger than which steel? HSLA, carbon, rebar?
Other than that, I'm all for it. We're renovating our house currently and made some structural changes. Would've loved to exchange some load-bearing steel beams with wooden ones so we could even leave them exposed as a design element.
Not only do we need to ask which steel, we also need to ask which strength. Off the top of my head, if we're thinking about constructing buildings, then there are several strength properties I'd want to know.
I'm also interested in how long it retains those properties. Steel can rust, but in some alloys the rust protects the rest, while in others it rusts away. Salt is also a factor in rust (important near the ocean). Wood often rots in water.
They claim the fire properties are good, but I don't know enough about fire to know if they tested all the important properties.
I don't know about this specific tech (which seems to be vaporware, given the AI photos...), but the fire properties of some composite wood products are terrible; they pose a massive danger to firefighters.
They are very commonly used in new house construction (past a certain year) for the central support beam, the side beams, or both - the beams that virtually everything rests on.
In a house fire, the beam heats up, the binder/glue weakens, and the beam suddenly fails, causing the interior of the house to collapse partially or entirely. That not only sends the firefighters into the basement, possibly under a pile of debris, but also breaks up a bunch of building materials that are suddenly exposed to the fire.
Done solely to make the profit margin for the contractor slightly bigger...
If you have such beams, it's probably worth looking into how to add insulation to extend the time before the beam fails.
Sure, but only if you choose a steel with a low tensile strength. There are common steels with between 200 and 2000 MPa tensile strength, whereas I understand this wood material has about 500 MPa.
This is (imho) impressive, and much better than untreated wood, but I think it's misleading to say "stronger than steel" when that labels a huge range of materials.
Absolutely, it's unfortunately just a matter of cost. Getting an equally sized wood beam that could support the weight is almost 5x the price, even factoring in other materials and labour.
CLT is not inherently more expensive and the cost difference is typically less dramatic. Steel just has a few centuries of a head start on learning curves, economies of scale, etc. Scaling up usage of CLT would bring down cost just like it has with steel.
The biggest issue is actually that there's a lot of resistance in the construction industry, which is simply locked into using steel and concrete and more or less blind to the advantages of wood. Switching materials would mean new tools, new skills, etc. are needed. I have a friend who is active in Germany pushing the use of this material, and he talks a lot with companies in this space.
Companies seem to default to doing what they've been doing for a long time without considering alternatives. Many construction projects are actually still one-off projects that don't leverage economies of scale or learnings from previous construction projects. Construction could be a lot cheaper and much less labor intensive than it is today.
CLT could actually make on-site assembly a lot simpler and faster than it is today. Ship pre-fab components created in large scale facilities optimized to manufacture those cost effectively. Assemble on site using simple tools and processes.
I don't work in the industry, but from my admittedly very consumer-oriented perspective (I wanted to build a house for a while):
The reason economies of scale never really made sense in this context was that shipping the prefab components to the building site mostly wiped out the savings.
Ignoring the actual shipping cost (which is substantial for heavy things that get assembled into a house), it also comes with the risk of things getting damaged en route. Another reason is that building sites in reality are very rarely identical. Manufacturers can make a best effort, but things will likely still vary a little. That's another error scenario wiping out a good chunk of the savings, one that fundamentally doesn't exist if you just build on-site.
I'm not knowledgeable enough about this new material to judge whether it could change this status quo, but I wouldn't hold my breath either.
I think the concerns you raise aren't actually show stoppers for a lot of prefab housing that has been happening for decades.
Wood is a lot lighter than steel and concrete. And that has to be transported as well. So you'd have less cost there, not more. About 50% weight savings. That's a lot of diesel.
As for parts getting damaged: that's what insurance and warranties are for. I don't think that's a show-stopper issue.
And there are advantages to producing prefab components in a facility that is optimized for it: climate-controlled, with all the right tools, specialists, equipment, etc. Also, pouring concrete in the winter is problematic; water freezes, and it expands when it does. Working with steel when it freezes is a PITA as well, since it conducts heat very well. This is why construction sites aren't very active in winter in places that have one. Prefab wood components don't have most of these issues: you can still work wood when it's freezing, bang in some nails, or drill holes.
It isn't a show stopper, but it is why site-built is competitive with prefab unless (as is all too common) prefab cuts corners. Because prefab needs to ship on existing roads, the modules often have size limitations that restrict how you can arrange your house.
It’s not just construction company resistance to change.
The regulatory landscape around home building is intense. Especially for fire code. You basically have an entire industry of inspectors whose job is to fail things that don’t match any known pattern, so getting new patterns established is quite difficult.
There is likely also some resistance in the home insurance space, where they are incredibly data-driven: until data builds up to statistically justify lower prices, as with stone houses, insurance companies will keep premiums higher, resulting in non-standard materials being limited to the wealthy or to fanatics willing to eat the cost.
Yes, the Glulam alternative tends to be a bit more expensive for some applications, but I am surprised that it is 5x more expensive than the steel solution. The reference I have (in Europe, at least) is that Glulam currently costs about 350 €/m³. Steel is quite a bit more expensive, but of course the profiles are slender, so less material is used.
That's even less clear. The total cost of replacing what with what is 5× what? I think what you want to remove is steel beams, but I'm not even sure of that.
I have to replace a steel beam inside my house. It's old and when it was installed (1936), the building and load requirements were quite a bit different. With the modifications I'm making to the house, a new one is needed.
The beam runs across the ceiling in my living and dining room. Previous owners installed a lowered drywall ceiling to hide it but that took 20cm of height from the rooms. I'd like wood beams because I could leave this exposed in the room as a design element and have 20cm more ceiling height. I would not want to see the steel beam (even the new one).
For the entire replacement, including labour, materials, and anything else to have a finished ceiling, the quotes I received from multiple contractors are all at least 5x more expensive for the wooden beams.
This may ultimately not be down to the cost of the beam itself but rather to the fact that partial wooden construction is a newer trend in Germany and contractors can simply ask for more, but I don't have confirmation of that.
So the total building project becomes 5× more expensive if you use wood beams than if you get a new, thicker steel beam with a new lowered drywall ceiling over it? Where does the glu-lam alternative come in?
I think the most annoying part is the external QR reader (on faregates?). I've rarely had a good experience with those whether using a QR on paper or from a phone screen.
A mix of both. There are a large number of websites that are inefficiently written, using up unnecessary amounts of resources. Semi-modern devices make up for that by just having massive amounts of computing power.
However, you also need to consider two additional factors. MacBooks and iPhones, even 4-year-old ones, have usually been at the upper end of the scale for processing power (compared to the general mass market of private end-consumer devices).
Try doing the same on a 4 year old 400 Euro laptop and it might look a bit different. Also consider your connection speed and latency.
I usually have no loading issue either. But I have a 1G fiber connection. My parents don't.
I grew up never needing paper maps.
Once I got my license, GPS was ubiquitous.
Most modern paper maps are quite the same as Google Maps or equivalents would be though. The underlying core material is the same so I don't think most people would struggle to read it.
I think learning and critical thinking are skills in and of themselves, and if you have a magic answering machine that doesn't require these skills to get an answer (even an incorrect one), it's gonna be a problem. There are already plenty of people who will repeat whatever made-up story they hear on social media. With the way LLMs hallucinate, and even double down when corrected, it's not going to make things better.
>Most modern paper maps are quite the same as Google Maps or equivalents would be though. The underlying core material is the same so I don't think most people would struggle to read it.
That's absolutely not the case: paper maps don't have a blue dot showing your current location. Paper maps are full of symbols and conventions, and they have a fixed scale...
Last year I bought a couple of paper maps and went hiking. And although I am trained in reading paper maps and orientating myself, and the area itself was not that wild and was full of features, still I had moments when I got lost, when I had to backtrack and when I had to make a real effort to translate the map. Great fun, though.