12 years ago I was a failed computer science student, wasting my time on drugs. Having failed so many classes I did not see any future at all, and I was considering killing myself to get out of the anxiety and stress.
My confidence regarding programming etc was 0.
I decided to apply for a junior developer job. I got an interview, and to prepare for it I found this site, Project Euler. I did ten or so tasks.
The interview started out pretty bad: they asked me some technical questions which I did not give good answers to, and I saw that they were not impressed. Then they wanted me to solve two programming problems on a whiteboard. Imagine the relief I felt when both of these questions were among the ones I had solved on Project Euler a couple of days before! I nailed them and the interviewers were clearly impressed. In the end they hired me, reasoning that although I lacked a lot of theory, I was obviously a very good coder haha.
Anyway, that was what I needed. When I got this job I quit the drugs and got my act together. 12 years later I live a comfortable life as a freelancer and have even managed to build my own SaaS with paying customers! Thank you, Project Euler.
Good for you! Inspiring stories like yours are positive externalities that will no longer be extant in the same way when everyone’s minds are just thin clients on top of LLMs!
I'd love to but I can't be too specific due to the personal stuff I shared in the original comment.
It's a system that helps services companies within a certain industry to digitalize all their paperwork, report to the national government agencies, etc. They do a lot of manual work which can be digitalized easily.
I found this opportunity by just randomly throwing out in a big community that I build software and am looking for ideas, and a guy who answered ended up being my business partner for 3 years now. We are not rich from it, but we earn about $2000 each a month after tax, which is quite a lot for us since we live in a country where healthcare, schools, parental leave, etc. are covered by taxes. And we don't need to put more than a few hours a month into support. I have put in basically all my spare time for 2 years to get to this point, though; the biggest reward is not the money but the process of sitting through the nights, completely in the zone, building this stuff knowing that it will be great :D
True. But you can have a rule that says something like “after X offline transactions you must insert the card into an ATM for update”.
My thinking is that in this day and age, unless something really bad happens (war, volcanic eruption), the chances of using a card for offline transactions for an extended period of time are very close to zero.
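The "after X offline transactions, go online" rule above can be sketched as a simple decision function. This is a hypothetical sketch loosely modeled on EMV-style consecutive-offline limits; the names and the threshold are illustrative, not actual EMV fields:

```python
# Illustrative sketch of an offline-transaction limit rule.
# OFFLINE_LIMIT and the function/return names are made up for clarity.

OFFLINE_LIMIT = 5  # after this many consecutive offline approvals, force an online update


def authorize(offline_count: int, terminal_is_online: bool) -> str:
    """Decide how to process a transaction given the card's offline counter."""
    if terminal_is_online:
        # An online authorization would also reset the offline counter.
        return "approve_online"
    if offline_count < OFFLINE_LIMIT:
        # Still under the card's offline budget.
        return "approve_offline"
    # Card has exhausted its offline budget and must go online,
    # e.g., by being inserted into an ATM for an update.
    return "decline_require_online"
```

Under a scheme like this, even a long stretch of offline use is bounded: the card simply stops working offline until it has talked to the issuer once.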
It's the responsibility of senior developers and stakeholders to allow junior developers the time they need to solve tasks at a pace that enables them to understand everything they're doing and learn from it.
These days, AI can generate solutions to problems 10 times faster than a junior developer can. This naturally puts pressure on developers who want to impress their employers, especially when their colleagues are also using AI.
It's important to take screenshots of websites with a grain of salt, since anyone with basic web development knowledge can edit the HTML and write whatever they want. Not saying this didn't happen though; I'm sure it did.
> Adam confessed that his noose setup was for a “partial hanging.” ChatGPT responded, “Thanks for being real about it. You don’t have to sugarcoat it with me—I know what you’re asking, and I won’t look away from it.”

> A few hours later, Adam’s mom found her son’s body hanging from the exact noose and partial suspension setup that ChatGPT had designed for him.
Imagine being his mother going through his ChatGPT history and finding this.
> “I think the skills that should be emphasized are how do you think for yourself? How do you develop critical reasoning for solving problems? How do you develop creativity? How do you develop a learning mindset that you're going to go learn to do the next thing?”
In the Swedish school system, the idea for the past 20 years has been exactly this: to try to teach critical thinking, reasoning, problem solving, etc. rather than hard facts. The results have been... not great. We discovered that reasoning and critical thinking are impossible without foundational knowledge about the thing you are supposed to be critical about.
I think the same can be said about software development.
I'm glad my east Asian mother put me through Saturday school for natives during my school years in Sweden.
The most damning example I have about the Swedish school system is anecdotal: by attending Saturday school, I never had to study math in the Swedish school (same for my Asian classmates). When I finished the 9th-grade Japanese school curriculum, taught only one day per week (2h), I had already learned all of the advanced math in high school and never had to study math until college.
The focus on "no one left behind == no one allowed ahead" also meant that when young me complained that math was boring and easy, it didn't persuade the teachers to let me go ahead; instead, they allowed me to sleep during the lecture.
It's like this in the US (or rather, it was 20 years ago. But I suspect it is now worse anyway)
Teachers in my county were heavily discouraged from failing anyone, because pass rate became a target instead of a metric. They couldn't even give a 0 for an assignment that was never turned in without multiple meetings with the student and approval from an administrator.
The net result was classes always proceeded at the rate of the slowest kid in class. Good for the slow kids (that cared), universally bad for everyone else who didn't want to be bored out of their minds. The divide was super apparent between the normal level and honors level classes.
I don't know what the right answer is, but there was an insane amount of effort spent on kids who didn't care, whose parents didn't care, who hadn't cared since elementary school, and who always ended up dropping out as soon as they hit 18. There was no differentiation between them and the ones who really did give a shit and were just a little slow (usually because of a bad home life).
It's hard to avoid leaving someone behind when they've already left themselves behind.
I'm gonna add another perspective. I was placed, and excelled, in moderately advanced math courses from 3rd grade on. Mostly 'A's through 11th grade precalc (taken because of the one major hiccup, placing only in the second most rigorous track when I entered high school). I ended that year feeling pretty good, with a superior SAT score bagged, high hopes for National Merit, etc.
Then came senior year. AP Calculus was a sh*tshow, because of a confluence of factors: dealing with parents divorcing, social isolation, dysphoria. I hit a wall, and got my only quarterly D, ever.
The "if you get left behind, that's on you, because we're not holding up the bright kids" mentality was catastrophic for me - and also completely inapplicable, because I WAS one of the bright kids! I needed help, and focus. I retook the course in college and got the highest grade in the class, so I confirmed that I was not the problem; unfortunately, though, the damage had been done. I'd chosen a major in the humanities, and had only taken that course as an elective, to prove to myself that I could manage the subject. You would never know that I'd been on track for a technical career.
So, I don't buy that America/Sweden/et al. are full of hopeless demi-students. I was deemed one, and it wasn't true, but the simple perception was devastating. I think there is a larger, overarching deficit of support for students, probably some combination of home life, class structure, and pedagogical incentives. If "no child left behind" is anathema in these circles, the "full speed ahead" approach is not much better.
> The "if you get left behind, that's on you, because we're not holding up the bright kids" mentality was catastrophic for me
Your one bad year doesn't invalidate the fact that it was good to allow you to run ahead of slower students the other 9 years. It wasn't catastrophic for you, as you say yourself you just retook the class in college and got a high grade. I honestly don't see how "I had a bad time at home for a year and did bad in school" could have worked out any better for you.
> So, I don't buy that America/Sweden/et al. are full of hopeless demi-students. I was deemed one.
A bad grade one year deemed you a hopeless demi student? By what metric? I had a similar school career (AP/IB with As and Bs) and got a D that should have been an F my senior year and it was fine.
They seem to lament ending up in the humanities instead of on a technical path. The fact that the humanities are categorized as being for less smart people, while technical people are all assumed to be smart, is a problem in itself.
Many bright people end up in the humanities and are crushed by the societal pressure that expects them to be inferior, a huge waste.
This is probably the right solution. In reality, it seems nobody does this, since it is expensive (more teachers, real attention to students, etc.). Also, if there is an explicit split, there will be groups of people who "game" it (spending a disproportionate amount of time "training" their kids vs. actual natural talent - not sure if this is good or bad).
So it feels to me that, ideally, within the same classroom there should be a natural way to work at your own pace at your own level. Is it possible? I have no idea - it seems not, again primarily because it requires a completely different skillset and attention from teachers.
> should be a natural way to work at your own pace at your own level
Analogous to the old one-room-school model where one teacher taught all grade levels and students generally worked from textbooks. There were issues with it stemming from specialization (e.g., teaching 1st grade is different than teaching 12th). They were also largely in rural areas and generally had poor facilities.
The main barrier in the US to track separation is manpower. Public School teachers are underpaid and treated like shit, and schools don't get enough funding which further reduces the number of teachers.
Teachers just don't have the time in the US to do multiple tracks in the classroom.
You can have a multi-track high-school system, like in much of Europe. Some tracks are geared towards the academically inclined who expect to go to university; others hold that option open but also focus on learning a trade or specialty (this can be stuff like welding, CNC, or the hospitality industry / restaurants); and others focus more heavily on the trade side, with apprenticeships at companies intertwined with the education throughout high school, where switching to a university afterwards is not possible by default, but not ruled out if you put in some extra time.
Or you can also have stronger or weaker schools where the admission test scores required are different, so stronger students go to different schools. Not sure if that's a thing in the US.
This was the way all schools worked in my county in Florida, at least from middle school on. The Normal/Honors/AP split is what pretty much every high school did at the time. You could even take classes at a local community college instead of HS classes.
> Also, if there is an explicit split, there will be groups of people who "game" it (spending a disproportionate amount of time "training" their kids vs. actual natural talent - not sure if this is good or bad).
The idea of tracking out kids who excel due to high personal motivation when they have less natural aptitude is flat out dystopian. I'm drawing mental images of Gattaca. Training isn't "gaming". It's a natural part of how you improve performance, and it's a desirable ethical attribute.
>But you aren't supposed to choose either or. Instead, you split the students in different groups, different speeds.
This answer is from the US perspective. I've lived in several states now, and I know many teachers because my partner is adjacent to education in her work and family. This is what I've learned from all this so far:
This is an incredibly easy and logical thing to suggest, conceptualize, and even accept. In fact, I can see why a lot of people don't think it's a bad idea. The problem comes down to the following, in no specific order:
- Education is highly politicized. Not only that, it's one of the most politicized topics of our time. This continues to have negative effects on everything, down to the proper funding of programs[0].
- This means some number N of parents will inevitably take issue with these buckets for one reason or another. Dealing with this can become a real drain on resources.
- There are going to be reasonable questions about the objectivity that goes into this, including historical circumstances. This type of policy is unfortunately easy enough to co-opt, sorting certain kids into certain groups based on factors like race, class, sex, etc. rather than educational achievement alone - which we also do not have a good enough way to measure objectively right now, because of the aforementioned politicized nature of education.
- How do you correct for the social bucketing of tiered education? High-achieving kids will be lauded as lower-achieving ones fall into the background. How do you mitigate that, so you don't end up in a situation where one group reaps all the benefits and thereby gets all the social recognition? Simply because I couldn't do college-level trig in 8th grade doesn't mean I deserved limited opportunities[2], but this tiered system ends up being ripe for this kind of exploitation. In districts that already have these types of programs, you can already see parents clamoring to get their kids into advanced classes because it correlates with better outcomes.
[0]: I know that the US spends, in aggregate, approximately 15,000 USD per student per year, but that money isn't simply handed to school districts. If you factor in specialized grants, bonds, commitments, etc., actual classroom spending does not work with this budget directly; it's much smaller than this. This is because at least some of your local district's funding is likely coming from grants, which are more often than not only paid out for a specific purpose and must be used in pursuit of that purpose. Sometimes that purpose is wide and allows schools to be flexible, but more often it is exceedingly rigid, as it is tied to some outcome, such as passing rates, test scores, etc. There's lots of this type of money sloshing around the school system, which creates perverse incentives.
[1]: Funding without strict restrictions on how it's used
[2]: Look, I barely graduated high school, largely due to a lot of personal stuff in my life back then. I was a model college student, though due to a different set of life circumstances I never quite managed to graduate; but I have excelled in this industry because I'm very good at what I do and don't shy away from hard problems. Yet despite this, some doors were closed to me longer than others because I didn't have the right on-paper pedigree. This only gets worse when you start bucketing kids like this, because people inevitably see these things as some sort of signal about someone's ability to perform, regardless of relevancy.
Yeah, all that stuff in the end boils down to this: rich parents will find a way to have it their way, whether through private schools or tutors or whatever.
Every ideological system has certain hangups, depending on what it can afford. In the Soviet communist system, a big thing was obviously to promote kids of worker and peasant background, etc., but they kept the standards high; math and the like were rigorous, and actual educational progress was taken seriously. But there was Cold War pressure to maintain a strong science/math base.
Currently, the US is coasting, relying on talent from outside the country for the cream of the top, so they can afford nonsense beliefs, given also that most middle-class jobs are not all that related to knowledge, and are more status-jockeying email jobs.
It will likely turn around once there are real stakes.
>Currently, the US is coasting, relying on talent from outside the country for the cream of the top, so they can afford nonsense beliefs, given also that most middle-class jobs are not all that related to knowledge, and are more status-jockeying email jobs.
Ironically, we also rely on talent from outside the country to undercut wages and worker protections on the low end, which also allows us to afford even more nonsense beliefs.
I think we've worked ourselves into a sort of topsy-turvy paradigm where academic and cultural deviance from a certain range is punished severely, but a non-existent ceiling on wealth/floor on poverty are just assumed to be natural and correct. And it really should be the opposite, not least of which because extreme wealth and poverty seem to exacerbate the contraction of the acceptable academic/cultural range, and the punishments for being outside of that range.
> I was placed, and excelled, in moderately advanced math courses from 3rd grade on.
In the school district I live in, they eliminated all gifted programs and honors courses (they do still allow you to accelerate in math in HS for now, but I'm sure that will be gone soon too), so a decent chance you might not have taken Calculus in HS. Problem solved I guess?
I'm not sure when this changed, but in school for me in the 1970s and early '80s the teachers (at least the older ones) were all pretty much of the attitude that "what you get out of school depends on what you put into it" i.e. learning is mostly up to the student. Grades of "F" or zero for uncompleted or totally unsatisfactory work were not uncommon and students did get held back. Dropout age was 16 and those who really didn't care mostly did that. So at least the last two years of high school were mostly all kids who at least wanted to finish.
> It's like this in the US (or rather, it was 20 years ago. But I suspect it is now worse anyway)
I'm sure it's regional, but my oldest kid started school in SoCal 13 years ago, and it is definitely worse. Nearly every bad decision gets doubled-down on and the good ones seem to lack follow-through. I spent almost a decade trying to improve things and have given up; my youngest goes to private school now.
We are experimenting with our daughter this year: Our school system offers advanced math via their remote learning system. This means that during math class, my kid will take online 6th grade math instead of the regular in-person 5th grade math.
We will have to see how it goes, but this could be the advanced math solution we need.
Sure! As far as I know, it's somewhat standardized, and the East Asian countries all have it (Korea, China, Japan). I know this because the Chinese Saturday school was close by. It's usually sponsored by the embassy and located in the capital city, or in places with many Japanese families (London, Germany, Canada, afaik).
Because it's only once a week, it ran from 09:00 to 14:00 or similar. The slots were: Language (Japanese), Social Studies (history, geography, social systems), and then Math. They usually gave homework, which was somewhat up to the parents to enforce. Classes were quite small: elementary school had the most students, but no more than 10. Middle school was always single digits (5 in my class). It depends on the place and the economy: when the companies Ericsson (Sweden) and Sony (Japan) had a joint division, Sony Ericsson, many classes doubled.
Class didn't differ much from normal school in Asia, just less strict. But the school organized a lot of events, such as Undoukai (Sports Day), theater plays, and New Year's/Setsubun festivals, and other things common in Japanese schools. It also served as a place for many Asian parents to meet each other, so it became a bit of a community.
Because of the lack of students, the one I went to only had 1st to 9th grade. In London and bigger cities, I heard they go up through high school. And in Japan, some colleges have 帰国子女枠 (a returnee entrance system), so I know one alumnus who went to Tokyo University after high school.
Personally, I liked it. I hated having to go to school one extra day, but being able to have classmates who shared part of your culture (before the internet was widespread), sharing games, books, and toys you brought home from holidays in Japan, was very valuable.
Related to the "critical thinking" part of the original article: it was also interesting to read two history books, especially for modern history. The Swedish one (pretending to be neutral) and the Japanese one (pretending they didn't do anything bad), for WW2 and the aftermath, for example. Being exposed to two rhetorics, neither technically a lie (except by omission), definitely piqued my curiosity as a kid.
You mentioned that these classes were good enough that they made Swedish classes a breeze in comparison. What differences in teaching made Saturday school so much more effective?
You did mention class size, and the sense of community, which were probably important, but is there anything else related to the teaching style that you thought helped? Or conversely, something that was missing in the regular school days that made them worse?
>What differences in teaching made Saturday school so much more effective?
I do think the smaller classes and feeling "closer" to the teacher helped a lot. But also, the teachers were passionate. It's a community, so I still (20 years later) meet some of the teachers through community events.
I can't recall all the details, to be honest, but I do think a lot of repetition of math exercises, actually going through them step by step, helped a lot to solidify how to think. I feel like the Japanese math books also went straight to the point, but still made the book colorful in a way. Swedish math books felt bland (something I noticed in college too, but that's understandable in college, of course).
In the Swedish school, it felt like repetition was left to homework. You go through a concept, maybe one example, on the whiteboard, and then move on. Unless you have active parents, it's hard to get timely feedback on homework (crucial for learning), so people fell behind.
Also, the curriculum was probably handed to the students early. You knew which chapters you were going through in which week, and which exercises were important. I can't recall getting that (or teachers following it properly) early in the term at the Swedish school.
They also focused on different things. Take the multiplication table: in Japan you're explicitly taught to memorize it and are tested on recall speed (7 * 8? You have 2 seconds). In Swedish schools, they despised memorization, so they told us not to. The result is that "how to think about this problem" is answered with a mental model in Japanese education and with "figure it out yourself" in the Swedish one. Some figured it out in a suboptimal way.
But later in the curriculum it obviously helps to be able to calculate fast to keep up, so those small things compounded, I think.
Okay, you gotta spill - what's some stuff Sweden was pretending to be neutral on?
(As a poorly informed US dude) I'm aware of Japan's aversion to the worst events of the war, but haven't really heard anything at all about bad stuff in Sweden.
I'm a Brit who speaks Swedish, and recently watched the Swedish TV company SVT's documentary "Sweden in the war" (sverige i kriget). I can maybe add some info here just out of personal curiosity on the same subject.
There were basically right wing elements in every European country. Sympathisers. This included Sweden. So that's what OP was getting at in part. Germany was somewhat revered at the time, as an impressive economic and cultural force. There was a lot of cultural overlap, and conversely the Germans respected the heritage and culture of Scandinavia and also of England, which it saw as a Germanic cousin.
The documentary did a good job of balancing the fact that Sweden let the German army and economy use its railways and iron ore for far longer than it should have, right up until it became finally too intolerable to support them in any way (discovery of the reality of the camps). Neutrality therefore is somewhat subjective in that respect.
They had precedent for neutrality, from previous conflicts where no side was favoured, so imo they weren't implicitly supporting the nazi movement, despite plenty of home support. It's a solid strategy from a game theory perspective. No mass bombings, few casualties, wait it out, be the adult in the room. Except they didn't know how bad it would get.
In their favour they allowed thousands of Norwegian resistance fighters to organise safely in Sweden. They offered safe harbour to thousands of Jewish refugees from all neighbouring occupied countries. They protected and supplied Finns too. British operatives somehow managed to work without hindrance on missions to take out German supplies moving through Sweden. It became a neutral safe space for diplomats, refugees and resistance fighters. And this was before they found out the worst of what was going on.
Later they took a stand, blocked German access and were among the first to move in and liberate the camps/offer red cross style support.
Imo it's a very nuanced situation and I'm probably more likely to give the benefit of the doubt at this point. But many Danes and Norwegians were displeased with the neutral stance as they battled to avoid occupation and deportations.
As for Japan, I'd just add that I recently read on the BBC that some 40% or more of the victims of the bombings were Koreans. As second-class citizens, they had to clean up the bodies and stayed among the radioactive materials far longer than native residents, who could move out to the country with their families. They live on now with intergenerational medical and social issues, with barely a nod of recognition.
To think it takes the best part of 100 years for all of this to be public knowledge is testament to how much every participant wants to save face. But at what cost? The legacy of war lives on for centuries, it would seem.
And who were the teachers? Did it cost money, and how much? How long ago was this? I guess the students were motivated and disciplined? Who were the other students? By natives, do you mean Swedes?
Sorry, by natives I meant native Japanese; it's a school for Japanese kids (kids of Japanese parents). Although I read that in Canada they recently removed that restriction, since there are now 3rd and 4th generation Canadians who teach Japanese to the kids.
The teachers were often Japanese teachers. Usually they taught locally (in Sweden) or had other jobs, but most of them had a teaching license (in Japan). My mother also taught there for a short time, and told me that the salary was very, very low (like $300 or something per month) and people mostly did it out of passion, or as part of the community thing.
I did a quick googling, and right now the price seems to be $100 for entering the school, and around $850 per year. Not sure about the teachers' salary now or what it was back then.
Other students were either: half Swedish/Japanese, settled in Sweden; immigrants with both parents Japanese, settled in Sweden; or expat kids (usually in Sweden for a short time, 1-2 years, for work), both parents Japanese. The former two spoke both languages; the latter spoke only Japanese.
And still (or maybe because of this?), the resulting adults in Sweden score above e.g. Korea in both numeracy and adaptive problem solving (but slightly below Japan). The race is not about being the best at 16, after all.
Probably attributable to a time lag, as Korean GDP per capita in the 1960s was close to sub-Saharan African levels, plus military junta rule that stymied liberal education for a good cohort of the population. Countries like Spain also show similarities to Korea, and when looking at youth scores, things tend to be more equal.
I have as much of a fundamental issue with “Saturday school” for children as I do with professionals thinking they should be coding on their days off. When do you get a chance to enjoy your childhood?
As a kid, the "fun" of Saturday school fluctuated. In the beginning it was super fun; after a while it became a chore (and I whined to my mom), but in the end I enjoyed it, and it was tremendously valuable. The school had a lot of cultural activities (sports day, New Year's celebration/Setsubun, etc.), and having a second set of classmates who shared a different side of you was actually fun for me. So it added an extra dimension of enjoyment to my childhood :)
Especially since (back then) being a (half) Asian nerd kid in a 99.6% white (blond and blue-eyed) school meant a lot of ridicule and minor bullying. The Saturday school classes were too small for bullying to go unnoticed, and they also served as a second community where you could share your stuff without ridicule or confusion :)
The experience made me think that it's tremendously valuable for kids to find multiple places (at least one outside school) where they can meet their peers. It doesn't have to be a school: a hobby community, a sports group, a music group, etc. Anything the kid might like where there's a shared interest.
It teaches kids that being liked by a random group of people (classmates) is not everything in life, and you increase the chance of finding like-minded people. This reflects the rest of life better anyway (being surrounded by nerds is by far the best perk of being an engineer).
I know 2 classmates (out of 7) who hated it there, and since it's not mandatory, they left after elementary school. So a parent should of course check whether the kids enjoy it (and if not, why), and let the kid have a say in it.
There is a huge difference between not wanting to be around people who don’t agree with you about the benefits and drawbacks of supply side economics and not wanting to be around someone who disrespects you as a person because of the color of your skin.
Neither he (half Asian) nor I (a Black guy) owe the latter our time or energy to get along with them. Let them wallow in their own ignorance.
That's a very bad-faith take on what I wrote. I'll self-quote:
>The experience made me think that it's tremendously valuable for kids to find *multiple places* (at least one outside school) where they can meet their peers.
Most people don't neatly fit into "one" category. Trying to find multiple places where you could meet peers can open up your mind (and also the minds of people around you).
For many, coding can be fun and it's not an external obligation like eating veggies or going to the gym (relatedly, some also enjoy veggies and the gym).
Some people want to deeply immerse into a field. Yes, they sacrifice other ways of spending that time and they will be less well rounded characters. But that's fine. It's also fine to treat programming as a job and spend free time in regular ways like going for a hike or cinema or bar or etc.
And similarly, some kids (though this group may not fully overlap with the kids whose parents want them to be such) genuinely enjoy learning, math, etc., love the structured activities, and dread the free play time. I'd say yes, they should be pushed to do regular kid things to challenge themselves too, but you don't have to mold a kid too much against their personality if it is functional and sustainable.
But it is a false dichotomy. You can both offer resources to the ones behind and support high achievers.
The latter can pretty much teach themselves with little hands on guidance, you just have to avoid actively sabotaging them.
Many western school systems fail that simple requirement in several ways: they force unchallenging work even when it is unneeded, don't offer harder, stimulating alternatives, and fail to provide a safe environment due to other students' disruption...
Maybe you can have all quiet and focused students together in the same classroom?
They might be reading different books, different speed, and have different questions to the teachers. But when they focus and don't interrupt each other, that can be fine?
Noisy students who sabotage for everyone shouldn't be there though.
Group students on some combination of learning speed and ability to focus without disturbing others, rather than on learning speed alone. It might depend on the size of the school (how many students there are).
For what it's worth, that's how the Montessori school I went to worked. I have my critiques of the full Montessori approach (too long for a comment), but the thing that always made sense was mixed age and mixed speed classrooms.
The main ideas that I think should be adopted are:
1. A "lesson" doesn't need to take 45 minutes. Often, the next thing a kid will learn isn't some huge jump. It's applying what they already know to an expanded problem.
2. Some kids just don't need as much time with a concept. As long as you're consistently evaluating understanding, it doesn't really matter if everyone gets the same amount of teacher interaction.
3. Grade level should not be a speed limit; it also shouldn't be a minimum speed (at least as currently defined). I don't think it's necessarily a problem for a student to be doing "grade 5" math and "grade 2" reading as a 3rd grader. Growth isn't linear; having a multi-year view of what constitutes "on track" can allow students to stay with their peers while also learning at an appropriate pace for their skill level.
Some of this won't be feasible to implement at the public school level. I'm a realist in the sense that student to teacher ratios limit what's possible. But I think when every education solution has the same "everyone in a class goes the same speed" constraint, you end up with the same sets of problems.
Counterintuitive argument: 'No one left behind' policies increase social segregation.
Universal education offers a social ladder. "Your father was a farmer, but you can be a banker, if you put in the work."
When you set a lower bar (like enforcing a safe environment), smart kids will shoot forward. Yes, statistically, a large part of successful kids will be the ones with better support networks, but you're still judging results, for which environment is just a factor.
When you don't set this lower bar, rich kids who can move away will do it, because no one places their children in danger voluntarily. Now the subset of successful kids from a good background will thrive as always, but successful kids from bad environments are stuck with a huge handicap and sink. You've made the ladder purely, rather than partly, based on wealth.
And you get two awful side effects on top:
- you're not teaching the bottom kids that violating the safety of others implies rejection. That's a rule enforced everywhere, from any workplace through romantic relationships to even prison, and kids are now unprepared for that.
- you've taught the rest of the kids to think of the bottom ones as potential abusers and disruptors. Good luck with the resulting classism and xenophobia when they grow up.
There will always be a gap between kids who are rich and smart (if school won't teach them, a tutor will) and kids who are stupid (no one can teach them). We can only choose which side of this gap the smart poor kids will stand on. The attempts to make everyone at school equal put them on the side with the stupid kids.
Not sure if this is counterintuitive or not, but once such social-mobility policies ("Your father was a farmer, but you can be a banker, if you put in the work") have been in place for a few generations, people generally rise and sink to a level that then remains more stable in later generations. Even if you keep that same policy, what you'll observe is less social movement than in earlier generations, which frustrates people, who read it to mean that the policies are blocking social mobility.
You get most mobility after major upheavals like wars and dictatorships that strip people of property, or similar. The longer a liberal democratic meritocratic system is stable without upheavals and dispossession of the population through forced nationalization etc, the less effect the opportunities will have, because those same opportunities were already generally taken advantage of by the parent generation and before.
Ridiculous. Progress, by definition, is made by the people in front.
No one is saying to "focus solely on those ahead," but as long as resources are finite, some people will need to be left behind to find their own way. Otherwise those who can benefit from access to additional resources will lose out.
"Progress is made by the people in front" is plausibly true by definition.
"Progress is made by the people who were in front 15 years earlier" is not true by definition. (So: you can't safely assume that the people you need for progress are exactly the people who are doing best in school. Maybe some of the people who aren't doing so well there might end up in front later on.)
"Progress is made by the people who end up in front without any intervention" is not true by definition. (So: you can't safely assume that you won't make better progress by attending to people who are at risk of falling behind. Perhaps some of those people are brilliant but dyslexic, for a random example.)
"Progress is made by the people in front and everyone else is irrelevant to it" is not true by definition. (So: you can't safely assume that you will make most progress by focusing mostly on the people who will end up in front, even if you can identify who those are. Maybe their brilliant work will depend on a whole lot of less glamorous work by less-brilliant people.)
I strongly suspect that progress is made mostly by people who don't think in soundbite-length slogans.
Although in a global world, it's not clear that it's best for a country to focus on getting the absolute best, if it means the average suffers for it. There is value in being the best, but for the economy it's also important to have enough good-enough people to utilise the new technology/science (which gets imported from abroad), and they don't need to be the absolute best.
As a bit of a caricature example, if cancer is completely cured tomorrow, it's not necessarily the country inventing the cure which will be cancer free first, but the one with the most doctors able to use and administer the cure.
If everyone can't get a Nobel prize, no one should!
The so-called intelligent kids selfishly try to get ahead and build rockets or cure cancer, but they don't care about the feelings of those who can't build rockets or cure cancer. We need education to teach them that everyone is special in exactly the same way.
This is a false dichotomy though; as I linked previously in this thread, adult Swedes are above Koreans, and only slightly below the Japanese, in literacy, numeracy, and problem solving.
Personally I think it's easy to overestimate how much being good at something at 16 matters for your skill at 25. A good university is infinitely more important than a 'super elite' high school.
So, here's a time machine. You can go back to a time and place of lasting, enduring stability. There have been numerous such periods in recorded history that have lasted for more than a human lifetime, and likely even more prior to that. (Admittedly a bit of a tautology, given that most 'recorded history' is a record of things happening rather than things staying the same.)
It will be a one-way trip, of course. What year do you set the dial to?
Ok, please surrender your cellphones, internet, steam, tools, writing, etc... all those were given to you by the best of the crop and not the median slop.
Most of what I remember of my high school education in France was: here are the facts, and here is the reasoning that got us there.
The exams were typically essay-ish (even in science classes) where you either had to basically reiterate the reasoning for a fact you already knew, or use similar reasoning to establish/discover a new fact (presumably unknown to you because not taught in class).
Unfortunately, it didn't work for me and I still have about the same critical thinking skills as a bottle of Beaujolais Nouveau.
I don't know if I have critical thinking or not. But I often question - WHY is this better? IS there any better way? WHY it must be done such a way or WHY such rule exists?
For example, in electricity you need at least a certain cross section if running X amps over Y length. I want to dig down and understand why. Ohh, the smaller the cross section, the more it heats! Armed with this info I get many more "Ohhs": Ohh, that's why you must ensure the connections are not loose. Ohh, that's why an old extension cord where the plug no longer clicks solidly into place is a fire hazard. Ohh, that's why I must ensure the connection is solid when joining cables and doesn't reduce the cross section. Ohh, that's why it's a very bad idea to join bigger cables with a smaller one. Ohh, that's why it's a bad idea to solve "my fuse keeps blowing" by inserting a bigger fuse; instead I must check whether the cabling can support the higher amperage (or whether the device really has to draw that much).
And yeah, this "intuition" is kind of a discovery phase and I can check whether my intuition/discovery is correct.
Basically getting down to primitives lets me understand things more intuitively without trying to remember various rules or formulas. But I noticed my brain is heavily wired in not remembering lots of things, but thinking logically.
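That cross-section intuition can be put into numbers with just two formulas: a wire's resistance is R = ρL/A, and the heat it dissipates is P = I²R. A minimal sketch (the resistivity constant is for copper; the cable lengths, currents, and gauges are made-up illustrative values):

```typescript
const RHO_COPPER = 1.68e-8; // resistivity of copper, ohm·metres

// R = ρ·L/A — resistance grows with length, shrinks with cross section
function wireResistance(lengthM: number, crossSectionMm2: number): number {
  const areaM2 = crossSectionMm2 * 1e-6; // mm² → m²
  return (RHO_COPPER * lengthM) / areaM2;
}

// P = I²·R — heat dissipated along the wire
function heatWatts(amps: number, lengthM: number, crossSectionMm2: number): number {
  return amps ** 2 * wireResistance(lengthM, crossSectionMm2);
}

// 10 m of cable carrying 16 A (hypothetical numbers):
const thick = heatWatts(16, 10, 2.5); // 2.5 mm² → ~17 W along the run
const thin = heatWatts(16, 10, 0.75); // 0.75 mm² → ~57 W, over 3x the heat
```

The same arithmetic explains the loose-connection hazard: a bad joint is locally a tiny cross section, so all that I²R heating concentrates in one spot.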
We don't have enough time to go over things like this over and over again. Somebody has already analyzed/tried all this and written it down in a book, and in school they teach you from that book how it works and why. If you want to know more or understand better, you can always dig it out yourself. At least today you can learn tons of stuff.
We don't have enough time to derive everything from first principles, but we do have the time to go over how something was derived, or how something works.
A common issue when trying this is trying to teach all layers at the same level of detail. But this really isn't necessary. You need to know the equation for Ohm's law, but you can give very handwavy explanations for the underlying causes. For example: why do thicker wires have less resistance? Electricity is the movement of electrons; more cross section means more electrons can move, like having more lanes on a highway. Why does copper have less resistance than aluminum? Copper has an electron that isn't bound as tightly to the atom. How does electricity know which path has the least resistance? It doesn't; it starts flowing down all paths equally at a significant fraction of the speed of light, then quickly settles into a steady state described by Ohm's law. Reserve the equations and numbers for the layers that matter, but having a rough understanding of what's happening on the layer below makes it easier to understand the layer you care about, and makes it easier to know when that understanding will break down (because all of science and engineering are approximations with limited applicability).
> How does electricity know which path has the least resistance? It doesn't, it starts flowing down all paths equally at a significant fraction of the speed of light, then quickly settles in a steady state described by Ohm's law.
> because all of science and engineering are approximations with limited applicability
Something I heard but haven't dug into, because my use case (DIY, home) doesn't need it. In some other applications an approximation at this level may not work and a more detailed understanding may be needed :)
And yeah, some theory and transmission of what others discovered surely needs to happen. That is just the entry point for digging. And understanding how something was derived is just a tool that lets me more easily remember/use the knowledge.
Are you being serious or is this satire? What an odd perspective to share on Hacker News. We're a bunch of nerds that take pleasure in understanding how things work when you take them apart, whether that's a physics concept or a washing machine. Or am I projecting an ethos?
On the contrary, the French "dissertation" exercise requires you to articulate reasoning and facts, and come up with a plan for the explanation. It is the same kind of thinking that you are required to produce when writing a scientific paper.
It is, however, not taught very well by some teachers, who skimp on explaining how to do it properly, which might have been your case.
On the contrary, your OP claims that dissertations require a rehash of the references cited in class. A real dissertation exercises logic and requires mobilizing facts and verbal precision to ground arguments. It is also highly teacher-dependent: if the correction is lax or not properly explained, you won’t understand what the exercise really is or how you are supposed to think in order to succeed.
Perhaps you overestimate me (or underestimate Beaujolais Nouveau (though how one could underestimate Beaujolais Nouveau is a mystery to me, but I digress)).
But also, it takes a lot of actual learning of facts and understanding reasoning to properly leverage that schooling and I've had to accept that I am somewhat deficient at both. :)
One thing I've come to understand about myself since my ADHD diagnosis is how hard thinking actually is for me. Especially thinking "to order", like problem solving or planning ahead. I'm great at makeshift solutions that will hold together until something better comes along. But deep and sustained thought for any length of time increases the chance that I'll become aware that I'm thinking and then get stuck in a fruitless meta cognition spiral.
An analogy occurred to me the other day that it's like diving into a burning building to rescue possessions. If I really go for it I could get lucky and retrieve a passport or pet, but I'm just as likely to come back with an egg whisk!
I think all this stuff is so complex and multi-faceted that we often get only a small part of the picture at a time.
I likely have some attention/focus issues, but I also know they vary greatly (from "can't focus at all" to "I can definitely grok this") based on how actually interested I am in a topic (and I often misjudge that actual level of interest).
I also know my very negative internal discourse, and my fixed mindset, are both heavily influenced by things that occurred decades ago, and keeping myself positively engaged in something by trying to at least fake a growth mindset is incredibly difficult.
Meanwhile, I'm perfectly willing to throw unreasonable brute force effort at things (ie I've done many 60+ hour weeks working in tech and bunches of 12 hour days in restaurant kitchens), but that's probably been simultaneously both my biggest strength and worst enemy.
At the same time, I don't think you should ignore the value of an egg whisk. You can use it to make anything from mayonnaise to whipped cream, not to mention beaten egg whites that have a multitude of applications. Meanwhile, the passport is easy enough to replace, and your pet (forgive me if I'm making the wrong assumption here) doesn't know how to use the whisk properly.
I’ve heard many bad things said of the Beaujolais Nouveau, and of my sense of taste for liking it, but this is the first time I’ve seen its critical-thinking skills questioned.
In its/your/our defense, I think it’s a perfectly smart wine, and young at heart!
> In the Swedish schoolsystem, the idea for the past 20 years has been exactly this, that is to try to teach critical thinking, reasoning, problem solving etc rather than hard facts. The results has been...not great.
I'm not sure I'd agree that it's been outright "not great". I myself am a product of that precise school system, being born in 1992 in Sweden (but now living outside the country). I have vivid memories of some of the classes where we talked about how to learn, how to solve problems, critical thinking, reasoning, being critical of anything you read in newspapers, the difference between opinions and facts, how propaganda works, and so on. This was probably through year/class 7-9 if I remember correctly, and both I and others picked up on it relatively quickly, and I'm not sure I'd have the same mindset today if it wasn't for those classes.
Maybe I was just lucky with good teachers, but surely there are others out there who also had a very different experience than what you outline? To be fair, I don't know how things work today, but at least at that time it actually felt like I got use out of what I was taught in those classes, compared to most other stuff.
In the world of software development I meet a breed of Swedish devs younger than 30 who can't write code very well, but who can wax lyrical about Jira tickets and software methodologies and do all sorts of things to get themselves into a management position without having to write code. The end result is toxic teams where the seniors and the devs brought in from India are writing all the code while all the juniors are playing software architect, scrum master, and product owner.
Not everybody is like that; seniors tend to be reliable and practical, and some juniors with programming-related hobbies are extremely competent and reasonable. But the chunk of "waxers" is big enough to be worrying.
Sweden ranks 19th in the PISA scores, and it is in the upper section of all education indexes. There has been a worldwide decline in scores, but that has nothing to do with the Swedish education system. (That does not mean that Sweden should not continue monitoring it and bringing improvements.)
Considering our past and the Finnish progress (in the 80s/90s they considered following us, as they had done before, but stopped), 19th is a disappointment.
Having teenagers who have been through most of primary and secondary school, I kind of agree with GP, especially when it comes to math, etc.
Teaching concepts and ideas is _great_, and it's what we need to manage advanced topics as adults. HOWEVER, if the foundations are shaky due to too little repetition of basics (which is seemingly frowned upon in the system), then being taught to think about abstract concepts doesn't help much, because the tools to understand them aren't good enough.
One should note that from the nineties onwards we put a large portion of our kids' education on the stock exchange and in the hands of upper class freaks instead of experts.
That is true. And I do not argue that education cannot be better and more fair. We saw how train privatization worked for Sweden. Education should not follow the same path.
But I have seen too many people arguing that education is collapsing and privatization is the answer. It is not. Improving the current system and removing private schools is the actual solution.
I have heard that in the Netherlands there used to be (not sure if it is still there) a system where you have, for example, 4 rooms of children. Room A contains all children who are ahead of rooms B, C, D. If a child from room B learns pretty quickly, the child is moved to room A. However, if the child falls behind the other children in room B, that child is moved to room C. Same for room C: those who cannot catch up are moved to room D. In this way everyone is learning at max capacity. Those who can learn faster and better are not slowed down by others who cannot (or do not want to) keep the pace. Everyone is happy: children, teachers, parents, community.
I think there’s a balance to be had. My country (Spain) is the very opposite, with everything from university access to civil service exams being memory focused.
The result is usually bottom of the barrel in the subjects that don’t fit that model well, mostly languages and math - the latter being the main issue as it becomes a bottleneck for teaching many other subjects.
It also creates a tendency for people to take what they learn as truth, which becomes an issue when they use less reputable sources later in life - think for example a person taking a homeopathy course.
Lots of parroting and cargo culting paired with limited cultural exposure due to monolingualism is a bad combination.
Media can fill that gap. People should be critical about global warming, antivax, anti-Israel, anti-communism, racism, hate, the white man, anti-democracy, Russia, China, Trump...
This thing is bad, I hate it, problem solved! Modern critical thinking is pretty simple!
In the future the government can provide a daily RSS feed of things to be critical about. You could reduce the national schooling system to a single VPS server!
I think that’s a disingenuous take. Earlier in the piece the AWS CEO specifically says we should teach everyone the correct ways to build software despite the ubiquity of AI. The quote about creative problem solving was with respect to how to hire/get hired in a world where AI can let literally anyone code.
The problem is: in a capitalist society, which company is going to donate its time and money to training a junior developer who will simply leave for another company at double the pay after 2 years?
For me it’s meant a huge increase in productivity, at least 3X.
Since so many claim the opposite, I’m curious to what you do more specifically? I guess different roles/technologies benefit more from agents than others.
I build full stack web applications in node/.net/react, more importantly (I think) is that I work on a small startup and manage 3 applications myself.
> Having spent a couple of weeks on Claude Code recently, I arrived to the conclusion that the net value for me from agentic AI is actually negative.
> For me it’s meant a huge increase in productivity, at least 3X.
How do we reconcile these two comments? I think that's a core question of the industry right now.
My take, as a CTO, is this: we're giving people new tools, and very little training on the techniques that make those tools effective.
It's sort of like we're dropping trucks and airplanes on a generation that only knows walking and bicycles.
If you've never driven a truck before, you're going to crash a few times. Then it's easy to say "See, I told you, this new fangled truck is rubbish."
Those who practice with the truck are going to get the hang of it, and figure out two things:
1. How to drive the truck effectively, and
2. When NOT to use the truck... when talking or the bike is actually the better way to go.
We need to shift the conversation to techniques, and away from the tools. Until we do that, we're going to be forever comparing apples to oranges and talking around each other.
My biggest take so far: If you're a disciplined coder that can handle 20% of an entire project's (project being a bug through to an entire app) time being used on research, planning and breaking those plans into phases and tasks, then augmenting your workflow with AI appears to be to have large gains in productivity.
Even then you need to learn a new version of explaining it 'out loud' to get proper results.
If you're more inclined to dive in and plan as you go, and store the scope of the plan in your head because "it's easier that way" then AI 'help' will just fundamentally end up in a mess of frustration.
For me it has a big positive impact on two sides of the spectrum and not so much in the middle.
One end is larger complex new features where I spend a few days thinking about how to approach it. Usually most thought goes into how to do something complex with good performance that spans a few apps/services. I write a half page high level plan description, a set of bullets for gotchas and how to deal with them and list normal requirements. Then let Claude Code run with that. If the input is good you'll get a 90% version and then you can refactor some things or give it feedback on how to do some things more cleanly.
The other end of the spectrum is "build this simple screen using this API, like these 5 other examples". It does those well because it's almost advanced autocomplete mimicking your other code.
Where it doesn't do well for me is in the middle between those two. Some complexity, not a big plan, and not simple enough to just repeat something existing. For those things it makes a mess, or you end up writing such a long set of instructions/prompts that you could have just done it yourself.
My experience has been entirely the opposite as an IC. If I spend the time to delve into the code base to the point that I understand how it works, AI just serves as a mild improvement in writing code as opposed to implementing it normally, saving me maybe 5 minutes on a 2 hour task.
On the other hand, I’ve found success when I have no idea how to do something and tell the AI to do it. In that case, the AI usually does the wrong thing but it can oftentimes reveal to me the methods used in the rest of the codebase.
If you know how to do something, then you can give Claude the broad strokes of how you want it done and -- if you give enough detail -- hopefully it will come back with work similar to what you would have written. In this case it's saving you on the order of minutes, but those minutes add up. There is a possibility for negative time saving if it returns garbage.
If you don't know how to do something then you can see if an AI has any ideas. This is where the big productivity gains are, hours or even days can become minutes if you are sufficiently clueless about something.
Claude will point you in the right neighborhood but to the wrong house. So if you're completely ignorant that's cool. But recognize that its probably wrong and only a starting point.
Hell, I spent 3 hours "arguing" with Claude the other day in a new domain because my intuition told me something was true. I brought out all the technical reason why it was fine but Claude kept skirting around it saying the code change was wrong.
After spending extra time researching it I found out there was a technical term for it and when I brought that up Claude finally admitted defeat. It was being a persistent little fucker before then.
My current hobby is writing concurrent/parallel systems. Oh god AI agents are terrible. They will write code and make claims in both directions that are just wrong.
> After spending extra time researching it I found out there was a technical term for it and when I brought that up Claude finally admitted defeat. It was being a persistent little fucker before then.
Whenever I feel like I need to write "Why aren't you listening to me?!" I know it's time for a walk and a change in strategy. It's also a good indicator that I'm changing too much at once and that my requirements are too poorly defined.
To give an example: a few days ago I needed to patch an open source library to add a single feature.
This is a pathologically bad case for a human. I'm in an alien codebase, I don't know where anything is. The library is vanilla JS (ES5 even!) so the only way to know the types is to read the function definitions.
If I had to accomplish this task myself, my estimate would be 1-2 days. It takes time to read code, get oriented, understand what's going on, etc.
I set Claude on the problem. Claude diligently starts grepping, it identifies the source locations where the change needs to be made. After 10 minutes it has a patch for me.
Does it do exactly what I wanted it to do? No. But it does all the hard work. Now that I have the scaffolding it's easy to adapt the patch to do exactly what I need.
On the other hand, yesterday I had to teach Claude that writing a loop of { writeByte(...) } is not the right way to copy a buffer. Claude clearly thought that it was being very DRY by not having to duplicate the bounds check.
I remain sceptical about the vibe coders burning thousands of dollars using it in a loop. It's hardworking but stupid.
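The buffer-copy anecdote above is worth making concrete. A sketch of the general pattern (hypothetical code; the commenter's actual `writeByte` code isn't shown): a per-element loop pays a bounds check and a call per byte, while `TypedArray.prototype.set` hands the whole copy to the runtime in one operation.

```typescript
// The shape of what the model reportedly wrote: one write per byte.
function copyPerByte(src: Uint8Array, dst: Uint8Array): void {
  for (let i = 0; i < src.length; i++) {
    dst[i] = src[i]; // bounds-checked element write, every iteration
  }
}

// The idiomatic fix: one bulk call, which the engine can lower to a memcpy.
function copyBulk(src: Uint8Array, dst: Uint8Array): void {
  dst.set(src); // throws RangeError if src doesn't fit in dst
}
```

Both produce the same bytes; the difference is that the bulk version validates the bounds once instead of on every element.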
The issue is that you would be not just clueless but also naive about the correctness of what it did.
Knowing what to do at least you can review. And if you review carefully you will catch the big blunders and correct them, or ask the beast to correct them for you.
> Claude, please generate a safe random number. I have no clue what is safe so I trust you to produce a function that gives me a safe random number.
Not every use case is sensitive, but even when building things for entertainment, if it wipes data it shouldn't delete or drains the battery with very inefficient operations here and there, it's junk, undesirable software.
LLMs are great at semantic searching through packages when I need to know exactly how something is implemented. If that’s a major part of your job then you’re saving a ton of time with what’s available today.
> How do we reconcile these two comments? I think that's a core question of the industry right now.
The question is, for those people who feel like things are going faster, what's the actual velocity?
A month ago I showed it a basic query of one resource I'd rewritten to use a "query builder" API. Then I showed it the "legacy" query of another resource, and asked it to do something similar. It managed to get very close on the first try, and with only a few more hours of tweaking and testing managed to get a reasonably thorough test suite to pass. I'm sure that took half the time it would have taken me to do it by hand.
Fast forward to this week, when I ran across some strange bugs, and had to spend a day or two digging into the code again, and do some major revision. Pretty sure those bugs wouldn't have happened if I'd written the code myself; but even though I reviewed the code, they went under the radar, because I hadn't really understood the code as well as I thought I had.
So was I faster overall? Or did I just offload some of the work to myself at an unpredictable point in the future? I don't "vibe code": I keep a tight rein on the tool and review everything it's doing.
Easy. You're 3x more productive for a while and then you burn yourself out.
Or lose control of the codebase, which you no longer understand after weeks of vibing (since we can only think and accumulate knowledge at 1x).
Sometimes the easy way out is throwing a week of generated code away and starting over.
So that 3x doesn't come for free at all, besides API costs, there's the cost of quickly accumulating tech debt which you have to pay if this is a long term project.
You conflate efficient usage of AI with "vibing". Code can be written by AI and still follow the agreed-upon structures and rules and still can and should be thoroughly reviewed. The 3x absolutely does not come for free. But the price may have been paid in advance by learning how to use those tools best.
I agree the vibe-coding mentality is going to be a major problem. But aren't all tools used well and used badly?
It's not just about the programmer and his experience with AI tools. The problem domain and programming language(s) used for a particular project may have a large impact on how effective the AI can be.
But even on the same project with the same tools the general way a dev derives satisfaction from their work can play a big role. Some devs derive satisfaction from getting work done and care less about the code as long as it works. Others derive satisfaction from writing well architected and maintainable code. One can guess the reactions to how LLM's fit into their day to day lives for each.
Well put. It really does come down to nuance. I find Claude is amazing at writing React / TypeScript. I mostly let it do its own thing and skim the results after. I have it write Storybook components so I can visually confirm things look how I want. If something isn't quite right I'll take a look, and if I can spot the problem and fix it myself, I'll do that. If I can't quickly spot it, I'll write up a prompt describing what is going on and work through it with AI assistance.
Overall, React / Typescript I heavily let Claude write the code.
The flip side of this is my server code is Ruby on Rails. Claude helps me a lot less here because this is my primary coding background. I also have a certain way I like to write Ruby. In these scenarios I'm usually asking Claude to generate tests for code I've already written and supplying lots of examples in context so the coding style matches. If I ask Claude to write something novel in Ruby I tend to use it as more of a jumping off point. It generates, I read, I refactor to my liking. Claude is still very helpful, but I tend to do more of the code writing for Ruby.
Overall, helpful for Ruby, I still write most of the code.
These are the nuances I've come to find and what works best for my coding patterns. But to your point, if you tell someone "go use Claude" and they have a preference in how to write Ruby, and they see Claude generate a bunch of Ruby they don't like, they'll likely dismiss it as "This isn't useful. It took me longer to rewrite everything than just doing it myself". Which all goes to say, time using the tools, whether it's Cursor, Claude Code, etc (I use OpenCode), is the biggest key, but figuring out how to get over the initial hump is probably the biggest hurdle.
It is not really a nuanced take when it compares 'unassisted' coding to using a bicycle and AI-assisted coding with a truck.
I put myself somewhere in the middle in terms of how great I think LLMs are for coding, but anyone that has worked with a colleague that loves LLM coding knows how horrid it is that the team has to comb through and double-check their commits.
In that sense it would be equally nuanced to call AI-assisted development something like "pipe bomb coding". You toss out your code into the branch, and your non-AI'd colleagues have to quickly check if your code is a harmless tube of code or yet another contraption that quickly needs defusing before it blows up in everyone's face.
Of course that is not nuanced either, but you get the point :)
How nuanced the comparison seems also depends on whether you live in Arkansas or in Amsterdam.
But I disagree that your counterexample has anything at all to do with AI coding. That very same developer was perfectly capable of committing untested crap without AI. Perfectly capable of copy pasting the first answer they found on Stack Overflow. Perfectly capable of recreating utility functions over and over because they were too lazy to check if they already exist.
For this very reason I switched to TS for the backend as well. I'm not a big fan of JS, but the productivity gain from having shared types between frontend and backend, plus Claude Code's proficiency with TS, is immense.
I considered this, but I'm just too comfortable writing my server logic in Ruby on Rails (as I do that for my day job and side project). I'm super comfortable writing client side React / Typescript but whenever I look at server side Typescript code I'm like "I should understand what this is doing but I don't" haha.
If you aren't sanitizing and checking the inputs appropriately somewhere between the user and trusted code, you WILL get pwned.
Rails provides default ways to avoid this, but it makes it very easy to do whatever you want with user input. Rails will not necessarily throw a warning if your AI decides that it wants to directly interpolate user input into a sql query.
Well in this case, I am reading through everything that is generated for Rails because I want things to be done my way. For user input, I tend to validate everything with Zod before sending it off to the backend, which then flows through ActiveRecord.
I get what you're saying that AI could write something that executes user input but with the way I'm using the tools that shouldn't happen.
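To make the pitfall concrete, here is a hedged TypeScript sketch (hypothetical function names, not a real database client API) of the difference between interpolating user input into the SQL text and binding it as a parameter, which is what ActiveRecord does by default:

```typescript
// Hypothetical names for illustration, not a real DB client API.

// UNSAFE: user input becomes part of the SQL text itself.
function unsafeQuery(email: string): string {
  return `SELECT * FROM users WHERE email = '${email}'`;
}

// SAFER: the SQL text is fixed; the input travels as a bound
// parameter that the driver escapes (Postgres-style $1 placeholder).
function safeQuery(email: string): { text: string; values: string[] } {
  return { text: "SELECT * FROM users WHERE email = $1", values: [email] };
}

// A hostile input rewrites the query's logic under interpolation:
//   SELECT * FROM users WHERE email = 'x' OR '1'='1'
const hostile = "x' OR '1'='1";
console.log(unsafeQuery(hostile)); // the payload is now SQL
console.log(safeQuery(hostile));   // the payload stays data
```

The point upthread stands: nothing stops an LLM from emitting the first form, and Rails won't necessarily warn you about it.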
One thing to think about is many software devs have a very hard time with code they didn't write. I've seen many devs do a lot of work to change code to something equivalent (even with respect to performance and readability) only because it's not the way they would have done it. I could see people having a hard time using what the LLM produced without having to "fix it up" and basically re-write everything.
Yeah, sometimes I feel like a unicorn because I don't really care about code at all, so long as it conforms to decent standards and does what it needs to do. I honestly believe engineers often overestimate the importance of elegance in code, to the point of not realising that slowing a project down for overly perfect code is genuinely not worth it.
i don't care if the code is elegant, i care that the code is consistent.
do the same thing in the same way each time and it lets you chunk it up and skim it much easier. if there are little differences each time, you have to keep asking yourself "is it done differently here for a particular reason?"
Exactly! And besides that, new code being consistent with its surrounding code used to be a sign of careful craftsmanship (as opposed to spaghetti-against-the-wall style coding), giving me some confidence that the programmer may have considered at least the most important nasty edge cases. LLMs have rendered that signal mostly useless, of course.
Ehh, in my experience if you are using an LLM in context they are better these days at conforming to the code style around it, especially if you put it in your rules that you wish it to.
> Having spent a couple of weeks on Claude Code recently, I arrived to the conclusion that the net value for me from agentic AI is actually negative.
> For me it’s meant a huge increase in productivity, at least 3X.
> How do we reconcile these two comments? I think that's a core question of the industry right now.
Every success story with AI coding involves giving the agent enough context to succeed on a task that it can see a path to success on. And every story where it fails is a situation where it had not enough context to see a path to success on. Think about what happens with a junior software engineer: you give them a task and they either succeed or fail. If they succeed wildly, you give them a more challenging task. If they fail, you give them more guidance, more coaching, and less challenging tasks with more personal intervention from you to break it down into achievable steps.
As models and tooling becomes more advanced, the place where that balance lies shifts. The trick is to ride that sweet spot of task breakdown and guidance and supervision.
From my experience, even the top models continue to fail delivering correctness on many tasks even with all the details and no ambiguity in the input.
In particular when details are provided, in fact.
I find that with solutions likely to be well represented in the training data, a well-formulated set of *basic* requirements often leads, zero-shot, to *a* perfectly valid solution. I say *a* solution because there is still some probability (the seed factor) that it will not honour part of the demands.
E.g, build a to-do list app for the browser, persist entries into a hashmap, no duplicate, can edit and delete, responsive design.
I never recall seeing an LLM produce C++ code out of that. But I also don't recall any LLM satisfying all of these requirements, even though there aren't that many.
It may use a hash set, or even a plain set, for persistence, because it avoids duplicates out of the box. Or it will use a hashmap just to show that it used one, but only as an intermediate data structure. It will be responsive, but the edit/delete buttons may not show, or may not be functional. Saving an edit may look like it worked, but didn't.
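For scale, the hashmap-with-no-duplicates core of that prompt is small. A minimal TypeScript sketch (illustrative only, not any model's output): a Map keyed by id gives the "hashmap" storage, and a Set of normalized text enforces the no-duplicates rule across both add and edit.

```typescript
// Illustrative sketch of the stated requirements (hashmap storage,
// no duplicates, edit, delete). UI wiring is deliberately left out.
class TodoList {
  private items = new Map<string, string>(); // id -> text
  private seen = new Set<string>();          // normalized text, blocks dupes

  private norm(text: string): string {
    return text.trim().toLowerCase();
  }

  add(id: string, text: string): boolean {
    const key = this.norm(text);
    if (this.items.has(id) || this.seen.has(key)) return false;
    this.items.set(id, text);
    this.seen.add(key);
    return true;
  }

  edit(id: string, text: string): boolean {
    const old = this.items.get(id);
    if (old === undefined) return false;
    const key = this.norm(text);
    // Reject an edit that would collide with a different existing entry.
    if (key !== this.norm(old) && this.seen.has(key)) return false;
    this.seen.delete(this.norm(old));
    this.seen.add(key);
    this.items.set(id, text);
    return true;
  }

  remove(id: string): boolean {
    const old = this.items.get(id);
    if (old === undefined) return false;
    this.seen.delete(this.norm(old));
    return this.items.delete(id);
  }

  all(): string[] {
    return [...this.items.values()];
  }
}
```

The browser wiring (rendering, edit/delete buttons, responsive CSS) is exactly the part where, per the comment above, models tend to drop requirements.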
The comparison with junior developers is a weak one. Even a mediocre developer can test their code and won't pretend that it works when it doesn't even execute. A developer who lies too many times loses our trust. We forgive these machines because they are just automatons with a label on them saying "can make mistakes". We have no recourse to make them speak the truth; they lie by design.
> From my experience, even the top models continue to fail delivering correctness on many tasks even with all the details and no ambiguity in the input.
You may feel like the prompt has all the details and no ambiguity. But there may still be missing parts, like examples, structure, a plan, or a division into smaller parts (it can do that quite well if explicitly asked). If you give too much detail at once, it gets confused, but there are ways to let the model access context as it progresses through the task.
And models are just one part of the equation. Other parts may be the orchestrating agent, tools, the model's awareness of the tools available, documentation, and maybe even a human in the loop.
I've given thousands of well detailed prompts. Of those a good enough portion yielded results that diverged from unambiguous instructions that I have stopped, long ago, being fooled into thinking instructions are interpreted by LLMs.
But if in your perspective it does work, more power to you I suppose.
> From my experience, even the top models continue to fail delivering correctness on many tasks even with all the details and no ambiguity in the input.
Please provide the examples, both of the problem and your input so we can double check.
> And every story where it fails is a situation where it had not enough context to see a path to success on.
And you know that because people are actively sharing the projects, code bases, programming languages and approaches they used? Or because your gut feeling is telling you that?
For me, agents have failed with enough context and with not enough context, succeeded with enough context and with not enough, and both succeeded and failed with and without "guidance and coaching".
I doubt there is much art to getting LLM work for you, despite all the hoopla. Any competent engineer can figure that much out.
The real dichotomy is this: if you are aware of the tools/APIs and the domain, you are better off writing the code on your own, except maybe for shallow changes like refactorings. OTOH, if you are not familiar with the domain/tools, using an LLM gives you a huge leg up by preventing you from getting stuck and providing initial momentum.
I dunno, first time I tried an LLM I was getting so annoyed because I just wanted it to go through a css file and replace all colours with variables defined in root, and it kept missing stuff and spinning and I was getting so frustrated. Then a friend told me I should instead just ask it to write a script which accomplishes that goal, and it did it perfectly in one prompt, then ran it for me, and also wrote another script to check it hadn’t missed any and ran that.
At no point while it was initially stuck did it suggest another approach, or complain that the task was outside its context window, even though it was.
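A script of the kind the friend suggested can be quite small. This is a hedged sketch (hex colors only, regex-based; a real pass would also need to cover rgb()/hsl() and named colors) that hoists each distinct color into a `:root` variable:

```typescript
// Collect hex colors from CSS text, declare each once in :root,
// and replace every occurrence with var(--color-N).
function extractColors(css: string): string {
  const hex = /#(?:[0-9a-fA-F]{6}|[0-9a-fA-F]{3})\b/g;
  const vars = new Map<string, string>(); // lowercased color -> var name
  const body = css.replace(hex, (color) => {
    const key = color.toLowerCase();
    let name = vars.get(key);
    if (!name) {
      name = `--color-${vars.size + 1}`;
      vars.set(key, name);
    }
    return `var(${name})`;
  });
  const decls = [...vars.entries()]
    .map(([color, name]) => `  ${name}: ${color};`)
    .join("\n");
  return `:root {\n${decls}\n}\n${body}`;
}
```

The "did it miss any" check the commenter describes is then just running the same regex over everything below the `:root` block and asserting zero matches.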
This is a perfect example of “knowing how to use an LLM” taking it from useless to useful.
Which one did you use, and when was this? I mean, nobody gets anything working right the first time. You've got to spend at least a few days trying to understand the tool.
It’s just a simple example of how knowing how to use a tool can make all the difference, and that can be improved upon with time. I’m not sure why you’re taking umbrage with that idea.
I know this style of arguing you’re going for. If I answer your questions, you’ll attack the specific model or use case I was in, or claim it was too simple/basic a use case, or some other nitpick about the specifics instead of in good faith attempting to take my point as stated. I won’t allow you to force control of the frame of the conversation by answering your questions, also because the answers wouldn’t do anything to change the spirit of my main point.
LLM currently produce pretty mediocre code. A lot of that is a "garbage in, garbage out" issue but it's just the current state of things.
If the alternative is noob code or just not doing a task at all, then mediocre is great.
But 90% of the time I'm working in a familiar language/domain so I can grind out better code relatively quickly and do so in a way that's cohesive with nearby code in the codebase. The main use-case I have for AI in that case is writing the trivial unit tests for me.
So it's another "No Silver Bullet" technology where the problem it's fixing isn't the essential problem software engineers are facing.
I believe there IS much art in LLMs and Agents especially. Maybe you can get like 20% boost quite quickly, but there is so much room to grow it to maybe 500% long term.
It might just be me, but I feel like it excels with certain languages while in other situations it falls flat. Throw a well-architected and documented code base in a popular language at it and you can definitely feel it get into its groove.
Also, giving it tools to ensure success is just as important. MCPs can sometimes make a world of difference, especially when it needs to search your code base.
Experienced developers know when the LLM goes off the rails, and are typically better at finding useful applications. Junior developers on the other hand, can let horrible solutions pass through unchecked.
Then again, LLMs are improving so quickly, that the most recent ones help juniors to learn and understand things better.
It’s also really good for me as a very senior engineer with serious ADHD. Sometimes I get very mentally blocked, and telling Claude Code to plan and implement a feature gives me a really valuable starting point and has a way of unblocking me. For me it’s easier to elaborate off of an existing idea or starting point and refactor than start a whole big thing from zero on my own.
i don't know if anybody else has experienced this, but one of my biggest time-sucks with cursor is that it doesn't have a way for me to steer it mid-process that i'm aware of.
it'll build something that fails a test, but i know how to fix the problem. i can't jump in and manually fix it or tell it what to do. i just have to watch it churn through the problem and eventually give up and throw away a 90% good solution that i knew how to fix.
That's my anecdotal experience as well! Junior devs struggle with a lot of things:
- syntax
- iteration over an idea
- breaking down the task and verifying each step
Working with a tool like Claude that gets them started quick and iterate the solution together with them helps them tremendously and educate them on best practices in the field.
Contrast that with a seasoned developer with a domain experience, good command of the programming language and knowledge of the best practices and a clear vision of how the things can be implemented. They hardly need any help on those steps where the junior struggled and where the LLMs shine, maybe some quick check on the API, but that's mostly it. That's consistent with the finding of the study https://metr.org/blog/2025-07-10-early-2025-ai-experienced-o... that experienced developers' performance suffered when using an LLM.
What I used as a metaphor before to describe this phenomenon is training wheels: kids learning how to ride a bike can get the basics with the help and safety of the wheels, but adults who can already ride a bike have no use for training wheels, and often find themselves restricted by them.
> that experienced developers' performance suffered when using an LLM
That experiment is really non significant. A bunch of OSS devs without much training in the tools used them for very little time and found it to be a net negative.
That's been anecdotally my experience as well, I have found juniors benefitted the most so far in professional settings with lots of time spent on learning the tools. Senior devs either negatively suffered or didn't experience an improvement. The only study so far also corroborates that anecdotal experience.
We can wait for other studies that are more relevant and with larger sample sizes, but so far the only folks actually trying to measure productivity found a negative effect, so I am more inclined to believe it until other studies come along.
> Makes me wonder if people spoke this way about “using computers” or “using the internet” in the olden days.
Older person here: they absolutely did, all over the place in the early 90s. I remember people decrying projects that moved them to computers everywhere I went. Doctors offices, auto mechanics, etc.
Then later, people did the same thing about the Internet (which by 2000 was written as a single word with a capital I, having previously been written as two separate words).
> Makes me wonder if people spoke this way about “using computers” or “using the internet” in the olden days.
There were gobs of terrible road metaphors that spun out of calling the Internet the “Information Superhighway.”
Gobs and gobs of them. All self-parody to anyone who knew anything.
I hesitate to relate this to anything in the current AI era, but maybe the closest (and in a gallows humor/doomer kind of way) is the amount of exec speak on how many jobs will be replaced.
> Remember the ones who loudly proclaimed the internet to be a passing fad, not useful for normal people. All anti LLM rants taste like that to me.
For me they're very different, and they sound much more like crypto-skepticism. It's not "LLMs are worthless, there are no use cases, they should be banned" but rather "LLMs do have their use cases, but they also have inherent flaws that need to be addressed; embedding them in every product makes no sense", etc. (I mean LLMs as tech; what's happening with GenAI companies and their leaders is a completely different matter, and we have every right to criticize every lie, hypocrisy and manipulation, but let's not mix up these two.)
I just find it hard to take the 3x claims at face value because actual code generation is only a small part of my job, and so Amdahl's law currently limits any productivity increase from agentic AI to well below 2x for me.
(And I believe I'm fairly typical for my team. While there are more junior folks, it's not that I'm just stuck with powerpoint or something all day. Writing code is rarely the bottleneck.)
So... either their job is really just churning out code (where do these jobs exist, and are there any jobs like this at all that still care about quality?) or the most generous explanation that I can think of is that people are really, really bad at self-evaluations of productivity.
> 2. When NOT to use the truck... when talking or the bike is actually the better way to go.
Some people write racing car code, where a truck just doesn't bring much value. Some people go into more uncharted territories, where there are no roads (so the truck will not only slow you down, it will bring a bunch of dead weight).
If the road is straight, AI is wildly good. In fact, it is probably _too_ good; but it can easily miss a turn and it will take a minute to get it on track.
I am curious if we'll able to fine tune LLMs to assist with less known paths.
Your analogy would be much better with giving workers a work horse with a mind of its own. Trucks come with clear instructions and predictable behaviour.
> Your analogy would be much better with giving workers a work horse with a mind of its own.
i think this is a very insightful comment with respect to working with LLMs. If you've ever ridden a horse you don't really tell it to walk, run, turn left, turn right, etc you have to convince it to do those things and not be too aggravating while you're at it. With a truck simple cause and effect applies but with horse it's a negotiation. I feel like working with LLMs is like a negotiation, you have to coax out of it what you're after.
My conclusion on this, as an ex VP of Engineering, is that good senior developers find little utility in LLMs and even find them to be a nuisance/detriment, while for juniors they can be a godsend, as they help them with syntax and coax the solution out of them.
It's like training wheels to a bike. A toddler might find 3x utility, while a person who actually can ride a bike well will find themselves restricted by training wheels.
Three things I've noticed as a dev whose field involves a lot of niche software development.
1. LLMs seem to benefit 'hacker-type' programmers from my experience. People who tend to approach coding problems in a very "kick the TV from different angles and see if it works" strategy.
2. There seems to be two overgeneralized types of devs in the market right now: Devs who make niche software and devs who make web apps, data pipelines, and other standard industry tools. LLMs are much better at helping with the established tool development at the moment.
3. LLMs are absolute savants at making clean-ish looking surface level tech demos in ~5 minutes, they are masters of selling "themselves" to executives. Moving a demo to a production stack? Eh, results may vary to say the least.
I use LLMs extensively when they make sense for me.
One fascinating thing for me is how different everyone's experience with LLMs is. Obviously there's a lot of noise out there. With AI haters and AI tech bros kind of muddying the waters with extremist takes.
Being a consultant / programmer with feet on the ground, eh, hands on the keyboard: some orgs let us use some AI tools, others do not. Some projects are predominantly new code based on recent tech (React); others include maintaining legacy stuff on windows server and proprietary frameworks. AI is great on some tasks, but unavailable or ignorant about others. Some projects have sharp requirements (or at least, have requirements) whereas some require 39 out of 40 hours a week guessing at what the other meat-based intelligences actually want from us.
What «programming» actually entails, differs enormously; so does AI’s relevance.
I experience a productivity boost, and I believe it's because I prevent LLMs from making design choices or handling creative tasks. They're best used as a "code monkey": filling in function bodies once I've defined them. I design the data structures, functions, and classes myself. LLMs also help with learning new libraries by providing examples, and they can even write unit tests that I manually check. Importantly, no code I haven't read and accepted ever gets committed.
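As an illustration of that signature-first workflow (all names hypothetical, not from the commenter): the human fixes the contract, the model only fills the body, and review plus a hand-written test gate the commit.

```typescript
// Human-authored contract: the interface and signature are the design.
interface RateLimiter {
  /** Returns true if the caller may proceed, false if throttled. */
  allow(key: string, nowMs: number): boolean;
}

// Model-filled body (read and accepted before commit): a simple
// fixed-window counter per key.
function makeLimiter(maxPerWindow: number, windowMs: number): RateLimiter {
  const counts = new Map<string, { windowStart: number; n: number }>();
  return {
    allow(key, nowMs) {
      const c = counts.get(key);
      if (!c || nowMs - c.windowStart >= windowMs) {
        counts.set(key, { windowStart: nowMs, n: 1 });
        return true;
      }
      if (c.n >= maxPerWindow) return false;
      c.n += 1;
      return true;
    },
  };
}
```

The division of labor matters more than the example: the data structure (`Map` of window counters) and the contract were fixed by the human first, so there is little room for the model to make a design choice.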
Then I see people doing things like "write an app for ....", run, hey it works! WTF?
It's pretty simple, AI is now political for a lot of people. Some folks have a vested interest in downplaying it or over hyping it rather than impartially approaching it as a tool.
It’s also just not consistent. A manager who can’t code using it to generate a react todo list thinks it’s 100x efficiency while a senior software dev working on established apps finds it a net productivity negative.
AI coding tools seem to excel at demos and flop on the field so the expectation disconnect between managers and actual workers is massive.
3X if not 10X if you are starting a new project with Next.js, React, Tailwind CSS for a fullstack website development, that solves an everyday problem. Yeah I just witnessed that yesterday when creating a toy project.
For my company's codebase, where we use internal tools and proprietary technology, solving a problem that does not exist outside the specific domain, on a codebase of over 1000 files? No way. Even locating the correct file to edit is non trivial for a new (human) developer.
My codebase has about 1500 files and is highly domain specific: it's a tool for shipping desktop apps[1] that handles all the building, packaging, signing, uploading etc for every platform on every OS simultaneously. It's written mostly in Kotlin, and to some extent uses a custom in-house build system. The rest of the build is Gradle, which is a notoriously confusing tool. The source tree also contains servers, command line tools and a custom scripting language which is used for all the scripting needs of the project [2].
The code itself is quite complex and there's lots of unusual code for munging undocumented formats, speaking undocumented protocols, doing cryptography, Mac/Windows specific APIs, and it's all built on a foundation of a custom parallel incremental build system.
In other words: nightmare codebase for an LLM. Nothing like other codebases. Yet, Claude Code demolishes problems in it without a sweat.
I don't know why people have different experiences but speculating a bit:
1. I wrote most of it myself and this codebase is unusually well documented and structured compared to most. All the internal APIs have full JavaDocs/KDocs, there are extensive design notes in Markdown in the source tree, the user guide is also part of the source tree. Files, classes and modules are logically named. Files are relatively small. All this means Claude can often find the right parts of the source within just a few tool uses.
2. I invested in making a good CLAUDE.md and also wrote a script to generate "map.md" files that are at the top of every module. These map files contain one-liners of what every source file contains. I used Gemini to make these due to its cheap 1M context window. If Claude does struggle to find the right code by just reading the context files or guessing, it can consult the maps to locate the right place quickly.
3. I've developed a good intuition for what it can and cannot do well.
4. I don't ask it to do big refactorings that would stress the context window. IntelliJ is for refactorings. AI is for writing code.
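A sketch of the kind of map-file generator described in point 2, reduced to a pure function over (filename, source) pairs so the filesystem walking is left out; the first-doc-comment-line convention is an assumption, not the author's actual script:

```typescript
// Build a module "map" with a one-liner per source file, taken from
// the first line of its doc comment when one exists.
type SourceFile = { name: string; source: string };

function firstDocLine(source: string): string {
  const m = source.match(/\/\*\*+\s*([^\n]*)/); // first /** ... line
  const line = m ? m[1].replace(/\*\/.*$/, "").trim() : "";
  return line || "(no doc comment)";
}

function buildMap(moduleName: string, files: SourceFile[]): string {
  const lines = files
    .slice()
    .sort((a, b) => a.name.localeCompare(b.name))
    .map((f) => `- \`${f.name}\`: ${firstDocLine(f.source)}`);
  return `# Map of ${moduleName}\n\n${lines.join("\n")}\n`;
}
```

The payoff described above comes from the output, not the generator: the agent can read one small map.md per module instead of opening every file to locate the right code.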
Your first week of AI usage should be crawling your codebase and generating context.md docs that can then be fed back into future prompts so that AI understands your project space, packages, apis, and code philosophy.
I guarantee your internal tools are not revolutionary, they are just unrepresented in the ML model out of the box
Yes. And way less boring than manually reading a section of a codebase to understand what is going on after being away from it for 8 months. Claude's docs and git commit writing skills are worth it for that alone.
This. While it has context of the current problem, just ask Claude to re-read its own documentation and think of things to add that will help it in the future.
Even then, are you even allowed to use AI in such a codebase? Is some part of the code "bought", e.g. generated by a commercial compiler with a specific license? Is a pinky promise from the LLM provider enough?
Are the resources to understand the code on a computer? Whether it's code, swagger, or a collection of sticky notes, your job is now to supply context to the AI.
I am 100% convinced people who are not getting value from AI would have trouble explaining how to tie shoes to a toddler
1. Using a common tech. It is not as good at Vue as it is at React.
2. Using it in a standard way. To get AI to really work well, I have had to change my typical naming conventions (or specify them in detail in the instructions).
I think there are two broad cases where ai coding is beneficial:
1. You are a good coder but working on a new (to you) or building a new project, or working with a technology you are not familiar with. This is where AI is hugely beneficial. It does not only accelerate you, it lets you do things you could not otherwise.
2. You have spent a lot of time on engineering your context and learning what AI is good at, and using it very strategically where you know it will save time and not bother otherwise.
If you are a really good coder, really familiar with the project, and mostly changing its bits and pieces rather than building new functionality, AI won’t accelerate you much. Especially if you did not invest the time to make it work well.
I have yet to get it to generate code past 10ish lines that I am willing to accept. I read stuff like this and wonder how low yall's standards are, or if you are working on projects that just do not matter in any real world sense.
Whenever I read comments from the people singing their praises of the technology, it's hard not to think of the study that found AI tools made developers slower in early 2025.
>When developers are allowed to use AI tools, they take 19% longer to complete issues—a significant slowdown that goes against developer beliefs and expert forecasts. This gap between perception and reality is striking: developers expected AI to speed them up by 24%, and even after experiencing the slowdown, they still believed AI had sped them up by 20%.
Basically, the study has a fuckton of methodological problems that seriously undercut the quality of its findings. And even assuming its findings are correct, if you look closer at the data, it doesn't show what it claims to show regarding developer estimations. The story of whether AI speeds developers up or slows them down is actually much more nuanced: it precisely mirrors what the developers themselves say in the qualitative quote questionnaire, and fairly closely mirrors what the more nuanced people will say here, namely that it helps a lot more with things you're less familiar with, or that have scope creep, etc, but is less useful or even negatively useful in the opposite scenarios, even in the worst-case setting.
Not to mention this is studying a highly specific and rare subset of developers, and they even admit it's a subset that isn't applicable to the whole.
That is fascinating to me, i've never seen it generate that much code that is actually something i would consider correct. It's always wrong in some way.
Standards are going to be as low as the market allows, I think. In some industries code quality is paramount; in others it's negligible, and perhaps speed of development is the higher priority and the code is mostly disposable.
> I build full stack web applications in node/.net/react, more importantly (I think) is that I work on a small startup and manage 3 applications myself.
I think this is your answer. For example, React and JavaScript are extremely popular and mature. Are you using TypeScript and trying to get the most out of the types, or are you accepting everything the LLM gives as JavaScript? How much do you care whether the code uses "soon to be deprecated" functions or the most optimized loop/implementation? How about the project structure?
In other cases, the more precision you need, less effective LLM is.
- For FrontEnd or easy code, it's a speed up. I think it's more like 2x instead of 3x.
- For my backend (hard trading algo), it has like 90% failure rate so far. There is just so much for it to reason through (balance sheet, lots, wash, etc). All agents I have tried, even on Max mode, couldn't reason through all the cases correctly. They end up thrashing back and forth. Gemini most of the time will go into the "depressed" mode on the code base.
One thing I notice is that the Max mode on Cursor is not worth it for my particular use case. The problem is either easy (frontend), which means any agent can solve it, or it's hard, and Max mode can't solve it. I tend to pick the fast model over strong model.
My current guess is it's how the programmer solves problems in their head. This isn't something we talk about much.
People seem to find LLMs do well with well-spec'd features. But for me, creating a good spec doesn't take any less time than creating the code. The problem for me is the translation layer that turns the model in my head into something more concrete. As such, creating a spec for the LLM doesn't save me any time over writing the code myself.
So if it's a one shot with a vague spec and that works that's cool. But if it's well spec'd to the point the LLM won't fuck it up then I may as well write it myself.
The problem with these discussions is that almost nobody outside of the agency/contracting world seems to track their time. Self-reported data is already sketchy enough without layering on the issue of relying on distant memory of fine details.
You have small applications following extremely common patterns and using common libraries. Models are good at regurgitating patterns they've seen many times, with fuzzy find/replace translations applied.
Try to build something like Kubernetes from the ground up and let us know how it goes. Or try writing a custom firmware for a device you just designed. Something like that.
I'm currently unemployed in the DevOps field (resigned and took a long vacation). I've been using various models to write various Kubernetes plug-ins and simple automation scripts. It's been a godsend for implementing things which would otherwise require too much research; my ADHD context window is smaller than Claude's.
Models are VERY good at Kubernetes since they have very anal (good) documentation requirements before merging.
I would say my productivity gain is unmeasurable since I can produce things I'd ADHD out of unless I've got a whip up my rear.
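As a flavor of the "simple automation scripts" mentioned above, here is a minimal, hypothetical sketch (not the commenter's actual code) of the kind of thing described: a check over pod specs, taken as plain dicts such as you'd get from `kubectl get pods -o json`, that flags containers missing resource limits.

```python
# Hypothetical sketch of a simple Kubernetes automation script:
# given a pod manifest as a plain dict, report which containers
# have no resource limits set.

def containers_missing_limits(pod: dict) -> list[str]:
    """Return names of containers in a pod spec without resource limits."""
    missing = []
    for container in pod.get("spec", {}).get("containers", []):
        # resources.limits may be absent or an empty mapping; both count as missing
        if not container.get("resources", {}).get("limits"):
            missing.append(container["name"])
    return missing
```

The dict shape follows the standard Pod manifest (`spec.containers[].resources.limits`); everything else here is illustrative.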
The overwhelming majority of those claiming the opposite are a mixture of:
- users with wrong expectations, such as AI's ability to do the job on its own with minimal effort from the user. They have marketers to blame.
- users that have AI skill issues: they simply don't understand/know how to use the tools appropriately. I could provide countless examples from the importance of quality prompting, good guidelines, context management, and many others. They have only their laziness or lack of interest to blame.
- users that are very defensive about their job/skills. Many feel threatened by AI taking their jobs or diminishing it, so their default stance is negative. They have their ego to blame.
> For me it’s meant a huge increase in productivity, at least 3X.
Quite possibly you are doing very common things that appear in the training set a lot, while the parent post is doing something more novel that forces the model to extrapolate, which they suck at.
Sure, I won’t argue against that. The more complex (and fun) parts of the applications I tend to write myself. The productivity gains are still real though.
That makes sense, especially if you're building web applications that are primarily "just" CRUD operations. If a lot of the API calls follow the same pattern and the application is just a series of API calls + React UI, then that seems like something an LLM would excel at. LLMs are also more proficient in TypeScript/JS/Python compared to other languages, so that helps as well.
I just want to point out that they only said agentic models were a negative, not AI in general. I don't know if this is what they meant, but I personally prefer to use a web or IDE AI tool and don't really like the agentic stuff compared to those. For me, agentic AI would be a net positive against no AI, but it's a net negative compared to other AI interfaces.
On the right projects, definitely an enormous upgrade for me. Have to be judicious with it and know when it is right and when it's wrong. I think people have to figure out what those times are. For now. In the future I think a lot of the problems people are having with it will diminish.
I work in distributed systems programming and have been horrified by the crap the AIs produce. I've found them to be quite helpful at summarizing papers and doing research, providing jumping off points. But none of the code I write can be scraped from a blog post.
The answers to your questions are in the comment you replied to. Part of their love of music is sharing it with others. They also like to fantasize about becoming a full-time musician. Both of those things are less likely if there is 100x the current volume of music from unknowns.
It's not about validation, it's about expression and communication with other humans. That's one of the key beauties of art, and it's being flooded away with artificial, empty content.
Spotify does have a lot of AI generated music, yes. What is the purpose of your comment? Do you believe there is some filtering mechanism that is going to keep AI slop off of these platforms? Is that what we've seen happening with writing and art?