For all the folks on the "reduce mental burden", "reduce cognitive load" train: are you aware that this basically means you are exercising your brain less, day in and day out, and that in the end you will forget how to do things? You will learn how to guide an AI agent, but until the day an AI agent is perfect (and we don't know if we will ever see that day), you are losing, inch by inch, your ability to actually understand what the agent is writing and what is going on.
I'm pretty radical on this topic, but for me cognitive load is good: you are making your neurons work and keeping synapses in place where they matter (at least for your job). I totally accept writing docs or howtos to make some future action easier and reduce that cognitive load, but using an AI agent, IMO, is like riding an electric bike in the mountains.
Yes, you keep seeing the wonderful vistas but you are not really training your legs.
This, to me, feels like you're complaining to the 45 year old builder that they should be using a hammer instead of a nail gun.
I know how to nail a nail, I've nailed so many nails that I can't remember them all.
My job is to build a house efficiently, not nail nails. Anything that can make me more efficient at it is a net positive.
Now that I've saved 2 hours in the framing process by using a nail gun, I have 2 extra hours for things that need my experience. Maybe spotting the contractor using a nail plate the wrong way, or helping the apprentice with their hammering technique.
IMO it's different. That's why I brought up the e-bike analogy: climbing even mild mountains or hills with your own legs will actually make your legs, heart and lungs stronger in the process. So you get both the wonderful views (building the house or delivering the software) and the improved health (keeping your mind trained on both high-level thinking and low-level implementation, vs high-level only). We might say that constantly using a hammer will develop your muscles more, but in carpentry there is still plenty of manual work that will develop your muscles anyway. (And we still don't have bricklaying machines.)
Ironically, e-bikes, at least in the EU, are having the exact opposite effect. More people that don't normally ride bikes are using e-bikes to get about. The motor functions not as a replacement, but as a force multiplier. It also makes "experimenting" easier, because the motor can make up for any mistakes or wrong turns.
Caveat: In the EU, an e-bike REQUIRES physical effort at all times for the motor to run. Throttles are illegal.
> Ironically, e-bikes, at least in the EU, are having the exact opposite effect. More people that don't normally ride bikes are using e-bikes to get about.
At least in Germany, people joke that the moment e-bikes became popular, people began to realize that they had become too unathletic to pedal an ordinary bicycle. I know of no one who uses an e-bike who did not ride an ordinary bicycle before.
> In the EU, an e-bike REQUIRES physical effort at all times for the motor to run.
The motor must cut out once 25 km/h is reached - which is basically a speed that a trained cyclist can easily attain. So because of this red tape, e-bikes are considered useless and expensive by cyclists who are not couch potatoes.
Yes, you pedal with almost zero effort on the flat and with a little more effort uphill. Obviously if you are going up the Tourmalet you will run out of battery pretty soon, but that's not the context most e-bikers use them in.
In that it fits the LLM situation quite well. LLMs remove the anxieties around coding for newbies at scale better than they make indisputable productivity gains for senior developers, similar to how e-bikes help with newbies more than cyclists.
I know that's what many people, especially older ones, say, but this is still a hill I will die on :) E-bikes are mostly used on mostly flat roads, like slow-speed motorcycles that require some low effort. The ones using them off paved roads, as multipliers, are the people who already did mountain biking when they were younger and want to continue doing it at a higher level than their age would permit unassisted (which is perfectly fine!).
> So you get both the wonderful views (building the house or delivering the software) and the improved health (keeping your mind trained on both high-level thinking and low-level implementation, vs high-level only).
The vast majority of developers aren't summitting beautiful mountains of code, but are instead sifting through endless corporate slop.
> We might say that constantly using a hammer will develop your muscles more, but in carpentry there is still plenty of manual work that will develop your muscles anyway.
The trades destroy human bodies over time and lead to awful health outcomes.
Most developers will and should take any opportunity to reduce cognitive load, and will instead spend their limited cognitive abilities on things that matter: family, sport, art, literature, civics.
Very few developers are vocational. If that is you and your job is your identity, then that's good for you. But don't fall into the trap of thinking that's a normal or desirable situation for others.
> The vast majority of developers aren't summitting beautiful mountains
I'm not sure you're approaching this metaphor the right way. The point is that coding manually is great cognitive exercise which keeps the mind sharp for doing the beautiful stuff.
> The trades destroy human bodies over time and lead to awful health outcomes.
Again, you're maybe being too literal and missing the point. No one is destroying their minds by coding. Exercise is good.
I am using LLMs too, and I do not consider myself to be thinking less. I still have to be part of the whole process, including the architectural work and other things that require my knowledge and my thinking.
I use them too and actually agree with you that the cognitive load is somewhat comparable. I was only pointing out what seemed like an abuse of the metaphor.
> I'm not sure you're approaching this metaphor the right way. The point is that coding manually is great cognitive exercise which keeps the mind sharp for doing the beautiful stuff.
No, I'm challenging the metaphor. Working the trades isn't exercise - it's a grind that wears people out.
> Again, you're maybe being too literal and missing the point. No one is destroying their minds by coding. Exercise is good.
We actually have good evidence that the effects of heavy cognitive load are detrimental to both the brain and mental health. We know that overwork and stress are extremely damaging to both.
So reducing cognitive load in the workplace is an unambiguous good, and protects the brain and mind for the important parts of life, which are not in front of a screen.
> We actually have good evidence that the effects of heavy cognitive load are detrimental to both the brain and mental health. We know that overwork and stress are extremely damaging to both.
I don't think this is fair either, you're comparing "overwork and stress" to "work." It's like saying we have evidence that extreme physical stress is detrimental ergo it's "unambiguously" healthier to drive than to walk.
Maybe you could share your good evidence so we can see if normal coding tasks would fall under the umbrella of overwork and stress?
We have plentiful evidence and studies on the effect that even moderate day-long cognitive work has on cognitive ability, and on the effects of stress.
This is so well established that I do not have to provide individual sources - it is the currently accepted global reality. I wouldn't provide sources for the effect of CO2 emissions on the climate, or for gravity, either.
However, the opposite is not true. If you have evidence that routine coding itself improves adult brain health or cognitive ability, please share RCTs or large longitudinal studies showing net cognitive gains under typical workloads.
Again you're conflating things, and now you're also moving goalposts (overwork -> moderate work) and asking me for very precise kinds of studies while refusing to even point toward the basis for your own claims. On top of this, you're implying that I'm some kind of lunatic by associating my questions with climate denial.
It's clear that you're more interested in "winning" than in actually having a reasonable discussion, so goodbye. I've had less frustrating exchanges with leprechauns.
Come on. We’ve had decades of occupational-health research on cognitive load, stress, and hours. The pattern is clear. Higher demands and longer hours raise depression risk. Lab and field work shows day-long cognitive tasks produce measurable fatigue, decision drift, and brain chemistry changes. These are universally accepted.
And yet, you now want me to source individual studies on those effects in a HN thread? Yes, in this instance you are approaching flat-earth/climate-change-denial levels of discourse. Reducing cognitive load is an unambiguous good.
If you think routine coding itself improves brain health or cognitive ability, produce the studies showing it, as you demanded of me, because that is a controversial claim. Or you can crash out of the conversation.
> No, I'm challenging the metaphor. Working the trades isn't exercise - it's a grind that wears people out.
If your job is just grinding out code in a stressful and soul-crushing manner, the issue lies elsewhere. You will soon be either grinding out prompts to create software you don't even understand anymore, or replaced by an agent.
And in no way am I implying that you are part of the issue.
> If your job is just grinding out code in a stressful and soul-crushing manner, the issue lies elsewhere.
The vast majority of developers are in or near this category. Most software developers never write code outside of education or employment, and would avoid doing so if an AI provided the opportunity. Any opportunity to reduce cognitive load is welcome.
I think you don't recognise how much of an outlier you are in believing that your work improves your cognitive abilities.
But LLMs can make the soul crushing part so much easier.
I need to add a FooController to an existing application, to store FooModels to the database. The controller needs the basic CRUD endpoints, etc.
I can spend a day doing it (crushing my soul), or I can just tell any agentic LLM to do it and go do something that doesn't crush my soul, like talking with the customer about how the FooModels will be used after storing.
"But it'll produce bad code!"
No it doesn't. It knows _exactly_ how to do a basic CRUD HTTP API controller in C#. It's not an art form, it's just rote typing and adding Attributes to functions.
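To show how rote this is, here's a minimal sketch of that kind of controller in Python/FastAPI (my actual stack is C#/ASP.NET, so treat this purely as an illustration; FooModel and the in-memory store are stand-ins):

    # Illustrative only: a Python/FastAPI analogue of the C# CRUD
    # controller described above. A real project would use a database
    # instead of this in-memory dict.
    from fastapi import FastAPI, HTTPException
    from pydantic import BaseModel

    app = FastAPI()

    class FooModel(BaseModel):
        id: int
        name: str

    db: dict[int, FooModel] = {}

    @app.post("/foos")
    def create_foo(foo: FooModel) -> FooModel:
        db[foo.id] = foo
        return foo

    @app.get("/foos/{foo_id}")
    def read_foo(foo_id: int) -> FooModel:
        if foo_id not in db:
            raise HTTPException(status_code=404)
        return db[foo_id]

    @app.put("/foos/{foo_id}")
    def update_foo(foo_id: int, foo: FooModel) -> FooModel:
        if foo_id not in db:
            raise HTTPException(status_code=404)
        db[foo_id] = foo
        return foo

    @app.delete("/foos/{foo_id}")
    def delete_foo(foo_id: int) -> dict:
        db.pop(foo_id, None)
        return {"deleted": foo_id}

Every endpoint is the same shape; nothing here requires judgment, which is exactly why an LLM handles it well.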
Because it's an Agentic LLM, it'll go look at another controller and copy its structure (which is not standard globally, but this project has specific static attributes in every call).
Then I review the code, maybe add a few comments for actual humans, commit and push.
My soul remains uncrushed, the client is happy because I delivered the feature on time, and I have half the day free for smaller tasks that would otherwise become technical debt.
> My soul remains uncrushed, the client is happy because I delivered the feature on time, and I have half the day free for smaller tasks that would otherwise become technical debt.
This is a very optimistic take. If you are in the type of company that just gives you boring code and tasks, you will be required to use the other half-day to work on some other boring feature. This will not give you time to pay down tech debt. Maybe it will in the beginning, when using AI agents has not been institutionalized yet, but once it has, you will be asked to churn out more "features".
If the mechanics of the work are “soul-crushing,” isn’t that the root cause, and the LLM is just a bandaid? I’m not saying every professional dev is enthused with all their tasks. But if you’re so eager to avoid parts of the job (however rote they are), then maybe it’s time for something new?
I can’t write this without feeling preachy, and I apologize for that. But I keep reading a profound lack of agency in comments like these.
But if my goal is to ride Downhill[0], exhausting myself riding up the hill doesn't bring me any extra enjoyment.
With electric assist, I can get up faster and to the business of coming down the hill really fast.
We have ski lifts for the exact same reason. People doing downhill skiing would make their legs, heart and lungs stronger by walking up the hill with their skis. But who wants to do that? That's not the fun part.
And to step back from analogy-land.
I'm paid to solve problems, I'm good at architecture, I can design services, I can also write the code to do so. But in most cases the majority of the code I write is just boilerplate crap with minor differences.
With LLMs I can have them write the Terraform deployment code, write the C# CRUD Controller classes and data models and apply the Entity Framework migrations. It'll do all that and add all the fancy Swagger annotation while it's at it. It'll even whip up a Github build action for me.
Now I've saved a few days of mindless typing and I can get into solving the actual problem at hand, the one I'm paid to do. In reality I'm doing it _while_ I'm instructing the LLM to do the menial crap + reviewing the code it produced so I'm moving at twice the speed I would be normally.
"But can't you just..." nope, if every project was _exactly_ the same, I'd have a template already, there are just enough differences to not make it worth my time[1]. If I had infinite time and money, I could build a DSL to do this, but again, referring to [1] - there's no point =)
It's more cost efficient to pay the LLM tax to OpenAI, Anthropic or whatever and use it as a bespoke bootstrapping system for projects.
I'd like to counter-argue that we need to shift our notion of efficiency.
While the nail gun is more efficient than the hammer, both need to be operated by a real human, with all the limitations of physical and mental health that implies.
While we can improve the efficiency of the tools, we should take care not to burn out the humans in pursuit of the goal.
The builder still needs to study the plans and build a mental model of what they’re building.
A nailgun isn’t automated in the way an LLM is, maybe if it moved itself around and fired nails where it thought they should go based on things it had seen in the past it would be a better comparison.
On the contrary, I am a lot more willing to think through the contours of the problems I need to solve because I haven't drained my mental energy writing five repetitive - but slightly different - log lines and tweaking the wording slightly to be correct.
I'm training smarter, and exercising better, instead of wasting all the workout/training time on warmups, as it were.
It completely depends on how you use AI. If you turn off your brain entirely and just coast as much as possible, then yeah your comment would apply.
But I think of work as essentially two things - creative activity and toil. I simply use AI for toil, and let my brain focus on creativity and problem solving.
Writing my 100,000th for loop is not going to preserve my brain.
I have been coding and doing athletic training for about as long as each other, so your anecdote works for me. However, just like in physical training, you should really spare your energy for the stuff that you enjoy and that actually progresses you.
By using LLMs to do some of the stuff I have long gotten over, I have a bit more mental energy to tackle new problems I wouldn't have previously.
Also, LLMs just aren't that competent yet, so it's not like devs are completely hands-off. Since I barely do boilerplate, as I work across legacy projects, there's no way Claude Code today is writing half my code.
Do you sit there multiplying two-digit numbers in your head for fun, for the practice, to keep operating at peak mental capacity on weekends? In the name of operating at peak mental capacity, that seems like the most logical thing to do. Just wake up at 6 am Saturday morning and start multiplying numbers in your head.
If you don't wanna use AI, that's entirely up to you, but "you're gonna forget how to program if you use AI and then whatever are you going to do if the AI is down" reeks of motivated reasoning.
My current pattern is to manually craft during the first half of the day when I enjoy that, and during the second half when I'd be normally burnt on hard thought and not quite up for another coffee, pomodoro, theanine deep dive, I can start tackling tests, exploratory data analysis, or small bugs, and these tasks are 50% or more LLM.
So yeah, 30%-50% seems right, but it's not like I lost any part of my job that I love.
That's a good approach; I like it and will probably adopt it as well. I'm not dogmatically against LLMs, I just think we should consider the possible consequences, and not treat them like a holy grail of everything.
You're right, but usually at the end of the day I'm completely mentally exhausted and don't want to talk to anyone. It's something I've realized is a big problem in my life. I'm actively trying to reduce mental load to leave room for other hobbies and social activities.
You are obviously not past 50 years old. At that age, even though I "train" (as you imply it) a lot - I code about 50 hours a week, 40 at work and the rest on my own projects, none of them easy; my job is writing numerical code for simulation and my pet project is writing a very accurate emulator (which implies physics modelling, tons of research, etc.) - I can definitely feel I'm not as productive as before (I used to be able to sustain 60 hours a week), even though I do feel I'm using my brain to the maximum...
So yeah, past a certain age, you'll be happy to reduce your mental load. No question about it. And I feel quite relieved when Claude writes this classic algorithm I understood long ago and don't want to re-activate in my brain. And I feel quite disappointed when Claude misses the point and I have to code review it...
Strangely I've found myself more exhausted at the end of the week and I think it's because of the constant supervision necessary to stop Claude from colouring outside the lines when I don't watch it like a hawk.
Also I tend to get more done at a time, it makes it easier to get started on "gruntwork" tasks that I would have procrastinated on. Which in turn can lead to burnout quite quickly.
I think in the end it's just as much "work", just a different kind of work and with more quantity as a result.
No, what I want is better tooling that makes supervising it and getting insight into what it's doing a superior experience.
A far more interactive coding "agent" that makes sure it walks through every change it makes with you, and doesn't just rush through tasks. That helps team members come up to speed on a repository by working through it with them.
> Strangely I've found myself more exhausted at the end of the week and I think it's because of the constant supervision necessary to stop Claude from colouring outside the lines when I don't watch it like a hawk.
Welcome to management. Computers and code are easy. People and people wannabes like LLMs are a pain.
I find AI is most useful at the ancillary extra stuff. Things that I'd never get to myself. Little scripts of course, but more like "it'd be nice to rename this entire feature / db table / tests to better match the words that the business has started to use to discuss it".
In the past, that much nitpicky detail just wouldn't have gotten done, my time would have been spent on actual features. But what I just described was a 30 minute background thing in claude code. Worked 95%, and needed just one reminder tweak to make it deployable.
The actual work I do is too deep in business knowledge to be AI coded directly, but I do use it to write tests to cover various edge cases, trace current usage of existing code, and so on. I also find AI code reviews really useful to catch 'dumb errors' - nil errors, type mismatches, style mismatch with existing code, and so on. It's in addition to human code reviews, but easy to run on every PR.
Wow, 30 minutes to rename functions and tests? I wonder how much energy and water that LLM wasted on something that any LSP-supporting editor can do in a second.
For me, this is the biggest benefit of AI coding. And it's energy saved that I can use to focus on higher level problems e.g. architecture thereby increasing my productivity.
What kind of codebases do you work on if you don't mind me asking?
I've found a huge boost from using AI to deal with APIs (databases, k8s, aws, ...) but less so on large codebases that need conceptual improvements. But at worst, I'm getting more than a 10% benefit, just because the AIs can read files so quickly, answer questions, and propose reasonable ideas.
Do you feel like you lack mental energy at 40? I do not feel any different from 30. I think the main difference between 30 and 40 is I am much more efficient at doing things and therefore am able to do more.
I do, but I’m not sure it’s age related. Ever since 2020, it feels like my physical and mental energy has been on a downward trajectory. I feel like I’ve lost several IQ points. What’s interesting is that I’ve heard the same from a lot of people. Not sure what the root cause is, but I do need to take better care of myself.
Is alleviating the mental energy going to make you a worse programmer in the long run? Is this like skipping mental workouts that were ultimately keeping you sharp?
Also in my 40s and above senior level. There aren't many mental workouts in day-to-day coding, because the world is just not a new challenge every day. What I consider 'boilerplate' just expands to include things I've written a dozen times before in a different context. AI can write that to my taste, and I can tackle the few actual challenges.
At 51, no one hires me because of my coding ability. They hire me because I know how to talk to the “business” and lead (larger projects) or implement (smaller projects) and to help sales close deals.
Don’t get me wrong, I care very deeply about the organization and maintainability of my code and I don’t use “agents”. I carefully build my code (and my infrastructure as code based architecture) piece by piece through prompting.
And I do have enough paranoia about losing my coding ability - and I have lost some because of LLMs - that I keep a year in savings to have time to practice coding for three months while looking for a job.
I know a couple of people whose mental faculties took a sharp nosedive after they started relying on LLMs too much. They might be outliers but just a few years ago I considered them to be really sharp and these days they often struggle with pretty basic reasoning and problem solving.
Do coding in non-assembly programming languages make you a worse programmer in the long run because you are not exposed to the deepest level of complexity?
My guess is if we assume the high level and low level programmers are equally proficient in their mediums, they would use the same amount of effort to tackle problems, but the kinds of problems they can tackle are vastly different
I will say that some weeks it makes me 10% more productive, some weeks -10%.
I came to the conclusion that I need to do all the hard work and only ask it to fill the gaps otherwise it will generate too much crap.
I always took for granted that devs knew about SVG, but I had come into the industry by way of design, so I had already been toying with vectors for years with Adobe Illustrator and Sketch.
SVG is awesome. Heavily underinvested in the web spec, would love to see SVG2 get some attention.
There are now a few sync engines that tackle this problem. Rocicorp Zero, Electric SQL, and one or two others. By no means a crowded space, but there are options now.
Have you had a chance to use either of these yet? Electric looks like an obvious mature choice — curious if you think Zero's approach is compelling enough to be worth trying in alpha
Haven't used either of them. I met the guy behind Zero and he's super smart. He had been working on Replicache for a long time before he started this thing.
That being said, I haven't tried them, so can't really give an educated opinion. But I feel pretty confident in the domain expertise on the Zero team.
The approach I've taken to "vibe coding" is to just write pseudo-code and then ask the LLM to translate. It's a very nice experience because I remain the driver, instead of sitting back and acting like the director of a movie. And I also don't have to worry about trivial language details.
Here's a prompt I'd make for fizz buzz, for instance. Notice the mixing of english, python, and rust. I just write what makes sense to me, and I have a very high degree of confidence that the LLM will produce what I want.
    fn fizz_buzz(count):
        loop count and match i:
            % 3 => "fizz"
            % 5 => "buzz"
            both => "fizz buzz"
That's a really powerful approach, because LLMs are very, very strong at what is basically "style transfer" - much better than they are at writing code from scratch. One of my most recent big AI wins was going the other way; I had to read some Mulesoft code in its native storage format, which is some fairly nasty XML encoding scheme, mixed with code, mixed with other weird things, but asking the AI to just "turn this into pseudocode" was quite successful. It's also very good at language-to-language transfer. Not perfect, but much better than doing it by hand. It's still important to validate the transfer - it does get a thing or two wrong per every few dozen lines - but it's still way faster than doing it from scratch and good enough to work with if you've got testing.
My mental model for LLMs is that they’re a fuzzy compiler of sorts. Any kind of specification whether that’s BNF or a carefully written prompt will get “translated”. But if you don’t have anything to translate it won’t output anything good.
> if you don’t have anything to translate it won’t output anything good.
One of the greatest quotes in the history of computer science:
“On two occasions I have been asked, – "Pray, Mr. Babbage, if you put into the machine wrong figures, will the right answers come out?" ... I am not able rightly to apprehend the kind of confusion of ideas that could provoke such a question"
I agree with that assessment but that makes me wonder if a T5 style LLM would work better than a decoder only style LLM like GPT or Claude. Has anyone tried that?
Is this seriously quicker than just writing in a language that you know? I mean, you're not benefitting from syntax highlighting, autocompletion, indentation, snippets etc. This looks like more work than I do at a higher cost and insane latency.
I find it particularly useful when I need to look up lots of library functions I don't remember. For example, in Python I recently did something like this (just looked it up):
    for ever my file in directory 'd' ending '.capture':
        Read file
        Split every line into A=B:C
        Make a dictionary send A to [B,C]
    Return a list of pairs [filename, dict from filename]
I don't python enough to remember how to read all the files in a directory, or split strings. I didn't even bother proofreading the English (as you can see).
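The code it handed back looked roughly like this (from memory; I'm assuming A=B:C means splitting on '=' first and then on ':'):

    import os

    def read_captures(d):
        results = []
        # every file in directory d ending in '.capture'
        for filename in os.listdir(d):
            if not filename.endswith('.capture'):
                continue
            mapping = {}
            with open(os.path.join(d, filename)) as f:
                for line in f:
                    # split each line into A=B:C and send A to [B, C]
                    a, rest = line.strip().split('=', 1)
                    b, c = rest.split(':', 1)
                    mapping[a] = [b, c]
            results.append([filename, mapping])
        # a list of pairs [filename, dict from that filename]
        return results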
Those are just features waiting to be developed. I'm currently experimenting with building LLM-powered editor services (all the stuff you mentioned). It's not there yet, but as local models become faster and more powerful, it'll unlock.
This particular example isn't very useful, but anecdotally it feels very nice to not need perfect syntax. How many programmer hours have been wasted because of trivial coding errors?
> How many programmer hours have been wasted because of trivial coding errors?
Historically probably quite a lot, but with a decent editor and tools like gofmt that became popular in the past 10 years I'd say syntax is just not a problem any more. I can definitely recall the frustration of a missing closing bracket in HTML in the 90s, but nowadays people can turn out perfectly syntactically correct code on day 1 of a new language.
That’s fair. Not to shift the goal post but my intuition has shifted recently as to what I’d consider a “trivial” problem. API details, off-by-one errors, and other issues like that are what I’d lump into that category.
Easy way to say it is that source code requires perfection, whereas pseudo-code takes the pressure off of that last 10%, and IMO that could have significant benefits for cognitive load if not latency.
Still all hypothetical, and something I’m actively experimenting with. Not a hill I’m gonna die on, but it’s super fun to play and imagine what might be possible.
> Is this seriously quicker than just writing in a language that you know?
Yes. Well, it depends.
Most of the prompts specifying requirements and constraints can be reused, so you don't need to reinvent the wheel each time you prompt an LLM to do something. The same goes for test suites: you do not need to recreate a whole test suite whenever you touch a feature. You can even put together prompt files for specific types of task, such as extending test coverage (as in, don't touch project code and only append unit tests to the existing set) or refactoring work (as in, don't touch tests and only change project code).
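For example, a reusable prompt file for the test-coverage case might look something like this (the wording and paths are hypothetical, not from any particular tool):

    # extend-test-coverage.prompt (hypothetical)
    Do not modify any file under src/.
    Only append new test functions to files under tests/.
    For each public function that lacks a test, add unit tests
    covering the happy path and at least one failure mode.
    Match the naming conventions already used in tests/.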
Also, you do not need to go for miracle single-shot sessions, or purist all-or-nothing prompts. A single prompt can fill in most of the code you require to implement a feature, and nothing prevents you from tweaking the output.
It is seriously quicker because people like you and me use LLMs to speed up how the boring stuff is implemented. Guides like this are important to share some lessons on how to get LLMs to work and minimize drudge work.
I do something similar, merely writing out the function signatures I want in code. The more concrete the idea I have in my head, the more I outline - tests included.
However, this is far less vibe coding and more actual work with an LLM, imo.
Overall I'm not finding much value in vibe coding. The LLM will "create value" that quickly starts to become an albatross of edge cases and unreviewed code. Bugs will work their way in and prevent the LLM from making progress, and then I have to dig in to find the sanity - which is especially difficult when the LLM has dug that far.
Yeah I'm nowhere near ready to loosen the leash. Show me a long-running agent that can get within 90% of its goal, then I'll be convinced. But right now we barely even have the tools to properly evaluate such agents.
I initially used natural language as prompts, but the code output wasn’t ideal. When I listed the steps it should follow, I found that it executed them very well.
I do something like that when I get down to the function level and there is an algorithm that is either struggling in its role or poorly optimized, but the models that excel at codebase architecture have their hands tied behind their backs with that level of micromanaging.
The results are good because, as another replier mentioned, LLMs are good at style transfer when given a rigid ruleset - but this technique sometimes just means extra work at the operator level to needlessly define something the model is already very aware of.
"write a fizzbuzz fn" will create a function with the same output. "write a fizzbuzz function using modulo" will get you closer to verbatim - but my point here is that, in the grand scheme of "will this get me closer to alleviating typing-caused RSI pain", the pseudocode usually only needs to get whipped out when the LLM does something braindead at the function level.
But "write a fizzbuzz fn" has one important assumption / limitation: the LLM should have seen a ton of fizbuzz implementations already to be able to respond.
Hence, LLMs can be helpful to produce boilerplate / glue code, the kind that has already been written in many variations, but cannot be directly reused. Anything novel you should rather outline at a more detailed level.
Another flaw is the assumption that humans won't find other things to do. I don't see the argument for that idea. If I had to bet, I'd say that if AI continues getting more powerful, humans will transition to working on more ambitious things.
This is very similar to the 'machines will do all the work, we'll just get to be artists and philosophers' idea.
It sounds nice. But to have that, you need resources. Whoever controls the resources will get to decide whether you get them. If AI/machines are our entire economy, the people that control the machines control the resources. I have little faith in their benevolence. If they also control the political system?
You'll win your bet. A few humans will work on more ambitious things. It might not go so well for the rest of us.
There are more mouths to feed and less territory per capita (hence real-estate inflation in desirable locations). Like lanes on a highway, the population just fills the available capacity, without any selective pressure for skill or ability. The gains we've made are mostly front-loaded: population takes time to grow, while the low-hanging fruit of eliminating domestic drudgery was picked quite a while ago. Meanwhile, the "work" that once filled those obligations in the home has expanded to the point of necessitating two full-time incomes per household.
This could’ve been written by me, it so closely matches my own experience. I know too well the “hit with a bag of bricks” realization that much of your professional life has been more or less you winging it. Math has that tendency of shining a bright ugly light on your real capability. It’s deeply humbling.
I’ve been using MathAcademy, trying to do at least one lesson each night after the kid is asleep. But instead of rote memorization, I sit with each problem until I truly and deeply understand it.
It’s going to be a long time before I’m mathematically competent, but there’s nowhere to go but up.
Yeah, that’s my thinking now as well. It’s going to take an incredibly long time but truly understanding each problem is probably the only way to go.
Which is where this beats self-study with books, I think. With a book, I can sort of wing it and think I understand something when I only understand it very superficially, whereas when you do the problems you truly learn what you understand and what you don't. And MathAcademy is only problems, so …
Anchor positioning sounds cool, but I ran into some very unintuitive behavior when I tried to use it. Can’t remember the details, it was a couple years ago.
I guess you're being downvoted as a general naysayer, but you're right. I tried this feature last month and a bunch of browser bugs and design issues got in the way. I reported them, and they're being worked on: https://github.com/w3c/csswg-drafts/issues/12466
The `margin:0` issue was particularly frustrating and IMO should have been covered in the article, as it's a real gotcha when trying to use popover and anchor positioning in combination.
Yeah I could have mentioned the actual issues I had.
My first attempt was to anchor an element to another one that occurred later in the document order, and it didn’t work. The anchor must be placed before any of its dependents. It kind of makes sense, but doesn’t jump out as intuitive.
My problem has always been with sites that have a menu or something similar at the top. The anchor inevitably goes to the very top of the screen and gets covered by whatever menu is there.