
I think it's an apples-and-oranges comparison to lump Christianity (a religion that outright predicts that people will abuse it and protects itself against that) in with liberalism/Marxism (philosophies that have no such protection and can be mangled into whatever you want). If anything, liberalism and Marxism are more like secularized offspring of Christianity, given that they would probably never have developed if their founding figures hadn't lived in a western moral context completely drenched in Christian ideas.


I think I would’ve used Objectivism as a contrasting example. It’s designed around the idea that whatever a “strong” person does to fulfill their goals is inherently good. Objectivists wouldn’t phrase it that way, surely, but that seems the inevitable end result.


I'm not comparing them. What I'm saying is that despite all the differences, the end result (a dogmatic and self-righteous ideology with zero ability to align beliefs with reality) is eerily similar.


Curious how Christianity protects itself from abuse? From my perspective, Christianity is used to justify incredibly un-Christian activities pretty much constantly.


1. Christianity (like other religions) has built-in protections against false teaching within its own theology, which isn't really the case for secular philosophical frameworks.

2. There's a level of outlier visibility going on with a lot of people who abuse Christianity. Christians who sincerely follow Jesus and walk in obedience don't seek out visibility or try to exalt themselves. Even those who evangelize most often do so on a local scale. Meanwhile, people who abuse Christianity (prosperity gospel preachers, Christian nationalists, etc.) seek out large followings to bolster their power or wealth, making them seem like the "face" of Christianity when they're not.


> has built-in protections against false teaching within their own theology

You are just saying the Bible has a bunch of text that lays out what being Christian is and isn't?


Sort of.

To elaborate, the idea is that a Christian (someone who has accepted Jesus as Lord and Savior) will show an outward transformation into someone who is Christ-like and obedient to God. When this doesn't happen at all, and they remain completely un-Christian, you know it isn't genuine (see Matthew 7:15-20). How a Christian is shown by their outward renewal is also touched on in Romans 12:9-21, Galatians 5:16-24, etc. It's not a perfect process, and the renewal is not a prerequisite for salvation but rather an end result of it (Ephesians 2:8-12).

Therefore, Christians have a framework that can be used to identify and rebuke people who distort the teachings of Christianity into something that is in rebellion to God. This doesn't really exist with secular philosophy, which lays out the "ideal" but has no way to prevent itself from being warped.


And wound up building a heretical pseudo-Christianity where the messiah is a superintelligence.


How do I avoid the angst about this stuff as a student in computer science? I love this field but frankly I've been at a loss since the rapid development of these models.


LLMs are the new compilers.

As a student, you should continue to focus on fundamentals, but also adapt LLMs into your workflow where you can.

Skip writing the assembly (now curly braces and semicolons), and focus on what the software you’re building actually does, who it serves, and how it works.

Programming is both changing a lot, and not at all. The mechanics may look different, but the purpose is still the same: effectively telling computers what to do.


LLMs are actually the new computers. Compilation is only one program they can run.


LLMs are the way computers were always supposed to work!


> LLMs are the new compilers.

This shows a grave misunderstanding of what compilers and LLMs are. They're fundamentally opposite concepts.

Compilers are about optimizing abstract code down to the most efficient representation possible for some hardware. LLMs are about wasting petaflops (made possible by compiler engineers) to produce random statements that don't have any static guarantees.


How can you trust that the compiler has written the most efficient assembly, if you’re not double checking it by hand?

Jokes aside, I understand your point.

In the history of computing, LLMs and compilers are closer than one might think.

Compilers weren't first created to optimize abstract code down to the most efficient assembly possible, even if that is the goal of a compiler writer today.

Compilers were created to enable the use of higher-level languages. Abstraction, efficiency, portability, error reduction, and most importantly: saving time.

They allowed humans to create more software, faster.


- a coping software engineer


As a former prof: what you should be learning from any STEM degree (and many other degrees as well) is to think clearly, rigorously, creatively, and with discipline. You also need to learn the skill of picking up new content and skills quickly.

The specific content or skills of your degree don't matter that much. In pretty much any STEM field over the last 100-ish years, whatever you learned as an undergraduate was mostly irrelevant by the time you retired.

Everyone got by by staying on top of the new developments in the field and applying them. With AI, the particular skills needed to use the power of computers to do things in the world have changed. Just learn those skills.


It's either over, or giving a lot of idiots false confidence — I meet people somewhat regularly who believe they don't really need to know what they're doing any more. This is probably an arbitrage.


There are at least two things here.

One, about the field itself. So far, I have been a know-it-all, and I dabbled in management too, besides that. This worked for me, because no matter how the field and my opportunities shifted, I always had a card up my sleeve. This is highly personal though.

Two, about managing angst. Whatever you experience now, you will in the future too. Circumstances won't matter at all: your brain will convert whatever it perceives around you into the feelings that you generally experience. You can be at your highest high or your lowest low, and you will always gravitate back towards these familiar feelings of yours. So, what you can do to have a nicer experience is to be a good partner to yourself, and learn how to live with these specific feelings that you have.


For all the value that they bring, there is still a good dose of parlour tricks and toy examples around, and they need an intelligent guiding hand to get the best out of them. As a meat brain, you can bring big picture design skills that the bots don't have, keeping them on track to deliver a coherent codebase, and fixing the inevitable hallucinations. Think of it like having a team of optimistic code monkeys with terrible memory, and you as the coordinator. I would focus on building skills in things like software design/architecture, requirements gathering (what do people want and how do you design software to deliver it?), in-depth hardware knowledge (how to get the best out of your platform), good API design, debugging, etc. Leave the CRUD to the robots and be the brain.


You can ask them this question and all your fears will be washed away, for now...

"Here's a riddle for you - a surgeon, who is the boy's father, says, 'I cannot operate on this boy, he's my son!' Who is the surgeon to the boy?"

But seriously - AI in the hands of someone well-educated in their field is going to be a lot more powerful than some random person. Knowledge is still going to be valuable, and there are still people out there who don't know how to Google things and figure things out for themselves - so there'll be plenty of people who don't realise the potential of LLMs and won't use them.


Angst?

It just means you're less likely to be fixing someone else's "mistakenly used _mm512_store_si512 instead of _mm512_storeu_si512" error, because AI fixed it for you, and you can focus on other parts of computer science. Computer science surely isn't just fixing _mm512_store_si512.
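(For the curious, since those intrinsic names are gibberish to most: _mm512_store_si512 is the aligned AVX-512 store and faults if its destination isn't 64-byte aligned, while _mm512_storeu_si512 tolerates any alignment. A minimal sketch of the bug class, with a hypothetical copy16 function invented just for illustration:)

    #include <immintrin.h>
    #include <stdint.h>

    /* Copy 16 int32s with AVX-512 (hypothetical example). */
    void copy16(const int32_t *src, int32_t *dst) {
        __m512i v = _mm512_loadu_si512(src);  /* unaligned load: safe for any src */
        /* _mm512_store_si512(dst, v);  <- aligned store: general-protection
           fault whenever dst isn't 64-byte aligned */
        _mm512_storeu_si512(dst, v);          /* unaligned store: works for any dst */
    }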


The cost of developing software is quickly dropping thanks to these models, and the demand for software is about to go way up because of this. LLMs will just be power tools to software builders. Learn to pop up a level.


What do you expect to come from full o3 in terms of automating software engineering?


o3 (high) might score 80%+


As if cheap multi-modal reasoning won't completely change the nature of blue-collar labor. There isn't a single industry that won't be upturned by this stuff long-term. The most you can do is "be water" and hold onto something that won't rot away (faith, for me).


I'm on your side, but there are two readings of these reports:

1) "We are serious, this is going to happen."

2) "AI is big right now so if we hype it we might get some money!"


I'd say with the current state of things it's more like two singularities in which either:

- A Landian-Stephensonian accelerationist timeline occurs where the majority of the urban population becomes some flavor of AGI-tuned VR junkie

- An extreme naturalistic counterculture movement occurs that causes the majority of the civilized world to willingly roll itself back one or two centuries technologically in order to feel something again


Perhaps the current obsession will just go the way of heavy drinking or smoking? I.e., the population will eventually develop some partial immunity to the allure, but it won't ever go away completely.


I personally believe that at some point, many people will realize that the majority of the people with economic means are the people who are able to concentrate and don’t waste all their time. Note that I don’t mean the super wealthy, I’m referring to people who are solidly middle class and have means. I know a lot of successful people who aren’t glued to their phones. I think there will be enough good and bad examples out there for people to start catching on.


The most successful people I know absolutely are glued to their phones. They're networking and messaging and reading and planning on their phones.

But from the outside it looks very similar to candy crush. . .


Why not both?

- And there has to be a third: hyper-minmaxers yearning for ever more control and power, trying to be(at) the machine. Thus becoming a reflection of the first.

- The fourth must be some sort of hybrid between denialist and creationist, whom I don't even want to envision, though. It would be a reflection of the second, but instead of withdrawing, they would bubble themselves into some terrifying version of the Amish.



Can't wait for the Butlerian Jihad


See: Walkaway by Cory Doctorow (2017)


Been reading their latest on interoperability. Interesting stuff, and a possible avenue for tech sanity in the upcoming years.


What is that bump in utility in practical terms? You can point to a benchmark improvement, but that's no indication the agent swarm doesn't reduce to "giving an LLM an arbitrary number of random guesses".


Unrelated, but since you seem to have experience here, how would you recommend getting into the bleeding edge of LLMs/agents? Traditional SWE is obviously on its way out, but I can't even tell where to start with this new tech, and I struggle to find ways to apply it to an actual project.


This is me as well. Either:

1) Just give up computing entirely, the field I've been dreaming about since childhood. Perhaps if I immiserate myself with a dry, regulated engineering field or trade I would survive until recursive self-improvement, but if anything the length of time it takes to pivot (I am a junior in college who has already done probably three-quarters of my CS credits) means I probably couldn't get any foothold until all jobs are irrelevant, and I'd have wasted more money.

2) Hard pivot into automation, AI my entire workflow, figure out how to use the bleeding edge of LLMs. Somehow. Even though I have no drive to learn LLMs and no practical project ideas with LLMs. And then I'd have to deal with the moral burden that I'm inflicting unfathomable hurt on others until recursive self-improvement, and after that it's simply a wildcard on what will happen with the monster I create.

It's like I'm suffocating constantly. The most I can do to "cope" is hold on to my (admittedly weak) faith in Christ, which provides me peace knowing that there is some eternal joy beyond the chaos here. I'm still just as lost as you.


Yes, some tasks, even complex tasks, will become more automated and machine-driven, but that will only open up more opportunities for us as humans to take on more challenging issues. Each time a great advancement comes, we think it's going to kill human productivity, but really it just amplifies it.


Where this ends is general intelligence, though, at which point all of the more challenging tasks can simply be done by the model.

The scenario I fear is a "selectively general" model that can successfully destroy the field I'm in but keep others alive for much longer, though not long enough for me to pivot into them before actual general intelligence arrives.


Dude chill! Eight years ago, I remember driving to some relatives for Thanksgiving and thinking that self-driving cars were just around the corner and how it made no sense for people to learn how to drive semis. Here we are eight years later and self-driving semis aren't a thing--yet. They will be some day, but we aren't there yet.

If you want to work in computing, then make it happen! Use the tools available and make great stuff. Your computing experience will be different from when I graduated from college 25 years ago, but my experience with computers was far different from my Dad's. Things change. Automation changes jobs. So far, it's been pretty good.


Honestly, how about you stop stressing and bullshitting yourself to death and instead focus on learning and mastering the material in your CS education? There is so much that AI, as in the OpenAI API or Hugging Face models, can't do yet or does poorly, and there is more to CS than churning out some half-broken JavaScript for a webapp.

It's powerful and world-changing, but it's also terribly overhyped at the moment.


The solution is neither: you find a way to work with automation but retain your voice and craft.


Dude, you're buying into the hype way too hard. All of this LLM shit is being massively overhyped right now because investors are single-minded morons who only care about cashing out a ~year from now for triple what they put in. Look at the YCombinator batches, 90+% of them have some mention of AI in their pitch even if it's hilariously useless to have AI. You've got toothbrushes advertising AI features. It's a gold rush of people trying to get in on the hype while they still can, I guarantee you the strategy for 99% of the YCombinator AI batch is to get sold to M$ or Google for a billion bucks, not build anything sustainable or useful in any way.

It's a massive bubble, and things like these "benchmarks" are all part of the hype game. Is the tech cool and useful? For sure, but anyone trying to tell you this benchmark is in any way proof of AGI that will replace everyone is either an idiot, or more likely has a vested interest in you believing them. OpenAI's whole marketing shtick is to scare people into thinking their next model is "too dangerous" to be released, thus driving up hype, only to release it anyway and for it to fall flat on its face.

Also, if there's any jobs LLMs can replace right now, it's the useless managerial and C-suite, not the people doing the actual work. If these people weren't charlatans they'd be the first ones to go while pushing this on everyone else.


Don't worry, they will hire somebody to control AI...


Spend a little time learning how to use LLMs and I think you'll be less scared. They're not that good at doing the job of a software developer.

