
Prioritize goals over the process, and what AIs can do doesn't matter.

Want to make a movie? The goal should be getting the movie out, seen by people and reviewed.

Whether it's people captured by film, animations in Blender or AI slop, what matters is the outcome. Is it good? Do people like it?

I do the infrastructure at a department of my Uni as sort of a side-gig. I would have never had the time to learn Ansible, borg, FreeIPA, wireguard, and everything else I have configured now and would have probably resorted to a bunch of messy shell scripts that don't work half the time like the people before me.

But everything I was able to set up I was able to set up in days, because of AI.

Sure, it's really satisfying because I also have a deep understanding of the fundamentals, and I can debug problems when AI fails, and then I ask it "how does this work" as a faster Google/wiki.

I've tried Windsurf but gave up, because when the AI does something that doesn't work, I can give it the prompts to find a solution (+ think for myself) much faster than it can figure it out by itself (and probably at the cost of a lot fewer tokens).

But the fact that I enjoy the process doesn't matter. And the moment I can click a button and make a webapp, I have so many ideas in my drawer for how I could improve the network at Uni.

I think the problem people have is that they work corporate jobs where they have no freedom to choose their own outcomes so they are basically just doing homework all their life. And AI can do homework better than them.


Take this too far and you run into a major existential crisis. What is the goal of life? Most people would say something along the lines of bringing joy to others, experiencing joy yourself, accomplishing things that you are proud of, and continuing the existence of life by having children, so that they can experience joy. The joy of life is in doing things, joy comes from process. Goals are useful in that they enable the doing of some process that you want to be doing, or in the joy of achieving the goal (in which case the joy is usually derived from the challenge in the process of achieving the goal).

> Want to make a movie? The goal should be getting the movie out, seen by people and reviewed.

This especially falls apart when it comes to art, which is one of the most “end-goal” processes. People make movies because they enjoy making movies, they want movies to be enjoyed by others because they want to share their art, and they want it to be commercially successful so that they can keep making movies. For the “enjoying a movie” process, do you truly believe that you’d be happy watching only AI-generated movies (and music, podcasts, games, etc.) created on demand with little to no human input for the rest of your life? The human element is truly meaningless to you, it is only about the pixels on the screen? If it is, that’s not wrong - I just think that few people actually feel this way.

This isn’t an “AI bad” take. I just think that some people are losing sight of the role of technology. We can use AI to enable more people than ever before to spend time doing the things they want to do, or we can use it to optimize away the fun parts of life and turn people even further into replaceable meat-bots in a great machine run by and for the elites at the top.


When all we care about is the final product, we miss the entire internal arc: the struggle, the bruised ego, the chance of failure, and the reward of feeling "holy shit, I did it!" that comprises the essence of being human.

Reducing the human experience to a means to an end is the core idea of dehumanization. Kant addressed this in the "humanity formula" of the categorical imperative:

    "Act in such a way that you treat humanity, whether in your own person or in the person of any other, always at the same time as an end, never merely as a means."

I'm curious how you feel about the phrase "the real treasure was the friends we made along the way." What does it mean to you?

But the process _does_ matter. That is the whole point of life. Why else are we even here if not to enjoy the process of making? It’s why people get into woodworking or knitting as hobbies. If it was just about the end result, they could just go to a store and buy something that would be way cheaper and easier. But that’s not the point - it’s something that _you_ made with your own hands, as imperfect as they are, and the experience of making something.

The fundamental advantage of our society as designed is that it weaponizes narcissism, and makes narcissists do useful stuff for society.

Don't care about competition? Find a place where rent prices are reasonable and you'll find it's actually surprisingly easy to earn a living.

Oh, but you want the fancy stuff, don't you?


I suspect that if you find a place where rent prices are reasonable, you'll find it's actually surprisingly hard to find a job there that pays a good wage, healthcare that keeps you healthy, decent schools to educate your children, and a community that shares your values and interests.

People don't move to high cost of living areas because they want nice TVs. Fancy stuff is the same price everywhere.


I live in Romania so I have different problems. I understand that Americans have problems with rent and healthcare. We have problems with other stuff, like food prices.

But at the end of the day, it's extremely unhealthy to let these problems force us into feeling like we have to make a lot of money. You can find cheap solutions for almost everything almost everywhere if you compromise.


I don't think people feel like they have to make a lot of money.

I think they seek jobs and places to live that give them the maximum overall benefit. I currently live in Seattle, which is quite expensive.

If there was another city like Seattle with the same schools, healthcare, climate, and culture, but cheaper housing, I'd move there as long as the salaries there weren't so much lower that it more than canceled out the benefit of cheaper housing.

The problem in the US is that even though some cities are quite expensive, they are still overall the most economical choice for people who can get good jobs in those cities. The increased pay more than makes up for the higher prices.


I'm talking about narcissism as in Burnout Society.

Give the book a go if you haven't. It lays out many of the fundamental problems of current social organization way better than I can.

> Oh, but you want the fancy stuff, don't you?

Just some food for thought, though. Is weaponizing hyperpositivity the only way to produce fancy stuff? Think about it, and you'll see for yourself that this is a false dichotomy, embedded in a realism that prevents us from improving society.


> because AI can make itself better

Can it? I'm pretty sure current AI (not just LLMs, but neural nets more generally) requires human feedback to prevent overfitting. That fundamentally eschews any fear or hope of the singularity as predicted.

AI can not make itself better because it can not meaningfully define what better means.


AlphaEvolve reviewed how it's trained and found a way to improve the process.

It's only the beginning. AI agents are able to simulate tasks, get better at them, and make themselves better.

At this point it's silly to say otherwise.


On the webpage, in the "Why you should NOT use Canine" section, it is possible to swipe away a card that is in the background, which is very weird UX.

Chrome 137. Android 13.

Other than that... I'll give it a shot. Have three N100 NUCs. Two are currently unused after failed attempts to learn to use k8s.

Maybe this'll do the trick.


Yeah the background card swiping is confusing. I understand why it happens but it should not be possible when only a tiny sliver of the card is shown.

"Have we created machines that can do something qualitatevely similar to that part of us that can correlate known information and pattern recognition to produce new ideas and solutions to problems -- that part we call thinking?"

I think the answer to this question is certainly "Yes". I think the reason people deny this is because it was just laughably easy in retrospect.

In mid-2022 people were like, "Wow, this GPT-3 thing generates kind of coherent greentexts."

Since then, all we really got was: larger models, larger models, search, agents, larger models, chain-of-thought, and larger models.

And from a novelty toy we got a set of tools that at the very least massively increase human productivity in a wide range of tasks and certainly pass any Turing test.

Attention really was all you needed.

But of course, if you ask a Buddhist monk, he'll tell you we are attention machines, not computation machines.

He'll also tell you, should you listen, that we have a monkey in our mind that is constantly producing new thoughts. This monkey is not who we are; it's an organ. Its thoughts are not our thoughts. It's something we perceive, and something we shouldn't identify with.

Now we have thought-generating monkeys with jet engines and adrenaline shots.

This can be good. Thought-generating monkeys put us on the moon and wrote Hamlet and the Odyssey.

The key is to not become a slave to them. To realize that our worth consists not in our ability to think. And that we are more than that.


>I think the answer to this question is certainly "Yes".

It is unequivocally "No". A good joint distribution estimator is always by definition a posteriori and completely incapable of synthetic a priori thought.


The human mind is an estimator too.

The fact that the human mind can think in concepts, images AND words, and then compresses that into words for transmission, whereas LLMs think directly in words, is no obstacle.

If you watch someone reach a ledge, your mind will generate, based on past experience, a probabilistic image of that person falling. Then it will tie that to the concept of problem (self-attention) and start generating solutions, such as warning them or pulling them back etc.

LLMs can do all this too, but only in words.


Do you think language is sufficient to model reality (not just physical, but abstract) here?

I think not. We can get close, but there exist problems and situations beyond that, especially in mathematics and philosophy. And I don't think a visual medium, or a combination of the two, is sufficient either; there's a more fundamental, underlying abstract structure that we use to model reality.


> Do you think language is sufficient to model reality (not just physical, but abstract) here?

It's sufficient to the level needed for human intelligence. We're a product of evolution, and we only need as much abstraction as is required for operational reasons. Modeling reality in a deep, abstract way is something we want to do, but not something that was required for our minds to evolve, nor for us to create civilization as it is today.


> Do you think language is sufficient to model reality (not just physical, but abstract) here?

After much time trying to accomplish this during the 20th century, the answer was a resounding "no" [1].

[1] https://en.wikipedia.org/wiki/Logical_positivism#Decline_and...


>LLMs think

Quick aside here: They do not think. They estimate generative probability distributions over the token space. If there's one thing I do agree with Dijkstra on, it's that it's important not to anthropomorphize mathematical or computing concepts.

As far as the rest of your comment, I generally agree. It sort of fits a Kantian view of epistemology, in which we have sensibility giving way to semiotics (we'll say words and images for simplicity) and we have concepts that we understand by a process of reasoning about a manifold of things we have sensed.

That's not probabilistic though. If we see someone reach a ledge and take a step over it, then we are making a synthetic a priori assumption that they will fall. It's synthetic because there's nothing about a ledge that means the person must fall. It's possible that there's another ledge right under we can't see. Or that they're in zero gravity (in a scifi movie maybe). Etc. It's a priori because we're making this statement not based on what already happened but rather what we know will happen.

We accomplish this by forming concepts such as "ledge", "step", "person", "gravity", etc., as we experience them, until they exist in our mind as purely rational concepts we can use to reason about new experiences. We might end up being wrong, we might be right, we might be right despite having made the wrong claims (maybe we knew he'd fall because of gravity, but there was no gravity and he ended up being pushed by someone and "falling" because of it; this is called a "Gettier problem"). But our correctness is not a matter of probability but rather one of how much of the situation we understand and how well we reason about it.

Either way, there is nothing to suggest that we are working from a probability model. If that were the case, you wind up in what's called philosophical skepticism [1], in which, if all we are are estimation machines based on our observances, how can we justify any statement? If every statement must have been trained by a corresponding observation, then how do we probabilistically model things like causality that we would turn to to justify claims?

Kant's not the only person to address this skepticism, but he's probably the most notable to do so, and so I would challenge you to justify whether the "thinking" done by LLMs has any analogue to the "thinking" done using the process described in my second paragraph.

[1] https://en.wikipedia.org/wiki/Philosophical_skepticism#David...


> We accomplish this by forming concepts such as "ledge", "step", "person", "gravity", etc., as we experience them until they exist in our mind as purely rational concepts we can use to reason about new experiences.

So we receive inputs from the environment and cluster them into observations about concepts, and form a collection of truth statements about them. Some of them may be wrong, or apply conditionally. These are probabilistic beliefs learned a posteriori from our experiences. Then we can do some a priori thinking about them with our eyes and ears closed with minimal further input from the environment. We may generate some new truth statements that we have not thought about before (e. g. "stepping over the ledge might not cause us to fall because gravity might stop at the ledge") and assign subjective probabilities to them.

This makes the a priori seem to always depend on previous a posterioris, and simply to mark the cutoff at which you stop taking environmental input into account for your reasoning within a "thinking session". Actually, you might even change your mind mid-reasoning based on the outcome of a thought experiment you perform, which you use to update your internal facts collection. This would give the a priori reasoning you're currently doing an even stronger a posteriori character. To me, these observations basically dissolve the concept of a priori thinking.

And this makes it seem like we are very much working from probabilistic models, all the time. To answer how we can know anything: If a statement's subjective probability becomes high enough, we qualify it as a fact (and may be wrong about it sometimes). But this allows us to justify other statements (validly, in ~ 1-sometimes of cases). Hopefully our world model map converges towards a useful part of the territory!
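
To make the "high enough subjective probability" part concrete, here's one crude way it could be formalized (a toy Bayesian update; all the numbers are made up):

    # Toy Bayesian update of a belief from repeated observations (made-up numbers).
    prior = 0.5                                # initial credence in "stepping over the ledge means falling"
    p_obs_if_true, p_obs_if_false = 0.9, 0.1   # how likely each observation is if the belief is true / false

    belief = prior
    for _ in range(5):                         # five observations consistent with the belief
        num = p_obs_if_true * belief
        belief = num / (num + p_obs_if_false * (1 - belief))

    print(belief)                              # ~0.99998: high enough that we start treating it as a "fact"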


But I do not think humans think like that by default.

When I spill a drink, I don't think "gravity". That's too slow.

And I don't think humans are particularly good at that kind of rational thinking.


>When I spill a drink, I don't think "gravity". That's too slow.

I think you do, you just don't need to notice it. If you spilled it in the International Space Station, you'd probably respond differently even if you didn't have to stop and contemplate the physics of the situation.


I think they may have been referring to the fact that in the case of a spilled drink there's a shortcut from the sensory input to a motor output. Maybe you reach for the falling cup, maybe you back away to not get spilled on. These don't really require the conscious mind at all.

I don't think that we need to be aware of the reasoning our minds are doing for it to constitute reasoning.

That doesn't seem true to me at all. Let's say you fit y=c+bx+ax^2 on the domain -10,10 with 1000 data points uniformly distributed along x and with no more than 1% noise in observed y. Your model will be pretty damn good and absolutely will be able to generate "synthetic a priori" y outputs for any given x within the domain.
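
A minimal sketch of that first case (plain numpy; the coefficients, seed, and noise model are just made-up illustrative choices):

    import numpy as np

    rng = np.random.default_rng(0)

    # True function y = c + b*x + a*x^2, sampled on the domain [-10, 10]
    a, b, c = 0.5, -2.0, 3.0
    x = rng.uniform(-10, 10, 1000)
    y = c + b * x + a * x**2
    y_obs = y * (1 + rng.uniform(-0.01, 0.01, x.size))   # up to 1% noise in observed y

    # Fit a degree-2 polynomial to the noisy observations
    coeffs = np.polyfit(x, y_obs, deg=2)                 # recovers roughly [a, b, c]

    # "Generate" a y for an x that never appeared in the data
    x_new = 7.1234
    print(np.polyval(coeffs, x_new), c + b * x_new + a * x_new**2)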

Now let's say you didn't know the true function and had to use a neural network instead. You would probably still get a great result in the sense of generating "new" outputs that are not observed in the training data, as long as they are within or reasonably close to the original domain.

LLMs are that. With enough data and enough parameters and the right inductive bias and the right RLHF procedure etc, they are getting increasingly good at estimating a conditional next token distribution given the context. If by "synthetic" you mean that an LLM can never generate a truly new idea that was not in its training data, then that becomes the question of what the "domain" of the data really is.

I'm not convinced that LLMs are strictly limited to ideas that they have "learned" in their data. Before LLMs, I don't think people realized just how much pattern and structure there was in human thought, and how exposed it was through text. Given the advances of the last couple of years, I'm starting to come around to the idea that text contains enough instances of reasoning and thinking that these models might develop some kind of ability to do something like reasoning and thinking simply because they would have to in order to continue decreasing validation loss.

I want to be clear that I am not at all an AI maximalist, and the fact that these things are built largely on copyright infringement continues to disgust me, as do the growing economic and environmental externalities and other problems surrounding their use and abuse. But I don't think it does any good to pretend these things are dumber than they are, or to assume that the next AI winter is right around the corner.


>Your model will be pretty damn good and absolutely will be able to generate "synthetic a priori" y outputs for any given x within the domain.

You don't seem to understand what synthetic a priori means. The fact that you're asking a model to generate outputs based on inputs means it's by definition a posteriori.

>You would probably still get a great result in the sense of generating "new" outputs that are not observed in the training data, as long as they are within or reasonably close to the original domain.

That's not cognition and has no epistemological grounds. You're making the assumption that better prediction of semiotic structure (of language, images, etc.) results in better ability to produce knowledge. You can't model knowledge with language alone, the logical positivists found that out to their disappointment a century or so ago.

For example, I don't think you adequately proved this statement to be true:

>they would have to in order to continue decreasing validation loss

This works if and only if the structure of knowledge lies latently beneath the structure of semiotics. In other words, if you can start identifying the "shape" of the distribution of language, you can perturb it slightly to get a new question and expect to get a new correct answer.


> The key is to not become a slave to them. To realize that our worth consists not in our ability to think. And that we are more than that.

I cannot afford to consider whether you are right because I am a slave to capital, and therefore may as well be a slave to capital's LLMs. The same goes for you.


I am not a slave to capital. I am a slave to the harsh nature of the world.

I get too hot in summer and too cold in winter. I die of hunger. I am harassed by critters of all sorts.

And when my bed breaks, to keep my fragile spine from straining at night, I _want_ some trees to be cut, some mattresses to be manufactured, some designers to be employed, etc. And capital is what gets me that, from people I will never meet, who wouldn't blink once if I died tomorrow.


Considering capitalism is a very new phenomenon in human history, how do you think people survived and thrived for the other 248000 years? It's as ludicrous to believe that capitalism is some kind of force of nature as it is to believe kings were chosen by god.

That depends on how you define your terms. A pro-capital laissez-faire policy is new, sure.

But the first civilizations in the world, around 3000 BC, had trade, money, banking, capital accumulation, division of labour, etc.


> how do you think people survived and thrived for the other 248000 years?

In small tribes, where everyone knew everyone intimately because they lived together, and everything was managed by feels.

Things like rules, laws, money, banking, hierarchies, well-defined private vs. public ownership, are all things that came with scale, because interpersonal relationships fail to keep group cohesion once it reaches more than ~100 people.


What's up with the US tipping culture?

I live in Romania and I only tip restaurants a standard 10% (not fast food, not coffee, just restaurants). Also delivery people, when they help bring heavy stuff into my apartment (theoretically they are only paid to bring it to the block entrance).

Back when I used taxis we would tip those. But I have never tipped an Uber. Or a Glovo (our Door Dash) deliveryman.


Started off as a way to pay people less, especially for odd jobs.

Grew to a point where it's disconnected from the actual value of the service, so people like waiters make way more than if the service were priced at market rates, but people pay anyway because it's not about the service, but about not feeling guilty for being cheap. The ecosystem has now found a balance that hurts the consumer, which they're willing to put up with because it's socially ingrained. The people providing a service make more, the business owner doesn't really care and can't get rid of tips because it's a cutthroat industry and they wouldn't get workers, and higher wages would cause sticker shock, so they too have no incentive to make any changes. The customer group is too big and doesn't have enough structure to organize any meaningful change. So it is what it is.

You can see it now, people complain about how tipping is everywhere, including for walk-ins where no table service is provided, but eventually this too will be normalized.

My personal hope is that one day we start tipping our doctors, our dentists, our programmers, to see how big and stupid this dumpster fire can grow.


> Started off as a way to pay people less, especially for odd jobs.

Kind of. American tipping came out of the post-slavery south as a form of exploitation where people weren't guaranteed a wage.

This is why tipping was common in historically black jobs like hospitality, food service, and railroad portering.

There's still a federal "tipped" minimum wage at $2.13 - which some states still abide by, roughly corresponding to the historic south: https://www.epi.org/publication/waiting-for-change-tipped-mi...

These also seem to be some of the worst tipping states according to most sources, https://www.lyft.com/blog/posts/the-united-states-of-tipping...

Which kind of makes sense - if people in those states invented tipping to pay people less, then those states paying tipped people less isn't that surprising ...

Cultural behavior patterns last decades, which is why there's some dissipation 150 years later.

These things can be weird. For instance coat check (person who holds on to expensive coat) and car valet (person who holds on to expensive car) is functionally equivalent with a 100 year separation so the tip culture sticks.

Same goes for the shoe shiner and car washer; the person who makes your mode of transportation more presentable.

Maybe this sounds like crazy free association, but the pattern seems to hold. Take porters and food delivery drivers, for instance, not that different.

Anyway, when you start scratching at weird american anomalies like tipping and the electoral college, usually you find something to do with slavery's long tail.


I guess that's why it doesn't work in Romania. Most Romanians take a certain amount of healthy pride in being cheap, or rather, in being able to get more for as little money as possible.

If you buy the expensive beer you're not impressing too many people. But of course, there are 50 cheap beers, most of which suck. The pride is in knowing the one cheap beer that's as good as the expensive ones.

The fact that taxis often tried to extort tips out of you and lied to you about the price by not running their meters is what made Uber popular here -- it ended up being cheaper.

My advice: stop tipping. Just you, personally. If the average person tips 10%, and tomorrow everyone stopped tipping, prices will probably increase by ~10%.

So just personally stop tipping and enjoy the permanent 10% discount all the other suckers are gifting you.
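
Back-of-the-envelope, with a made-up $20 menu price:

    # Made-up numbers: a $20 dish with an average 10% tip.
    menu_price = 20.00
    avg_tip = 0.10

    tipper_pays = menu_price * (1 + avg_tip)            # 22.00 effectively paid today
    # If tipping vanished, menus would likely rise by roughly the average tip:
    price_if_no_tipping = menu_price * (1 + avg_tip)    # ~22.00
    non_tipper_pays = menu_price                        # 20.00 either way: that's the ~10% "discount"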


I have been using arch for about a year now.

I've broken my system on install, or when trying to reconfigure core features.

Updates? 0 issues. Like genuinely, none.

I've used Ubuntu and Mint before, and Arch "just works" more than either of them in my experience.


In my experience with hard drugs, the human body has an amazing ability to build tolerance, but in a way that biases against positive effects.

So, for example, a first-time user may consume quantity X of a drug, and get the positive effect Y and negative side-effect Z. An experienced user may consume quantity X and only get a negative side-effect 1/3 or 1/4 Z. But also only get a positive effect of 1/10 Y.

So even though the ratio of Z/X has decreased (less negative side-effect per unit of substance) so has the ratio of Y/X (less positive effect per unit of substance). Most importantly, the ratio of Z/Y has increased (more negative side-effect per "unit" of positive effect).
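
With made-up illustrative numbers:

    # Illustrative numbers only, not pharmacology data.
    X = 1.0            # dose
    Y, Z = 10.0, 6.0   # first-time user: positive effect, negative side-effect

    Y_exp, Z_exp = Y / 10, Z / 3   # experienced user at the same dose

    print(Z / X, Z_exp / X)        # 6.0 -> 2.0   (less side-effect per dose)
    print(Y / X, Y_exp / X)        # 10.0 -> 1.0  (far less positive effect per dose)
    print(Z / Y, Z_exp / Y_exp)    # 0.6 -> 2.0   (more side-effect per unit of benefit)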

I find no reason to disbelieve the existence of performance-enhancing chemicals (or mood-enhancing, or anything else). Perhaps Methylene Blue does do what its fans think it does -- at first.

If you want to get some work done, coffee can really help, Red Bull even more so, and speed even more so. The question is what happens on day 3, week 3, month 3, year 3 of continuous use.


Processes, in Erlang terms, are lightweight threads. So when a "process" crashes, that's not the whole system crashing.


I don't get the hate on

"curl ... | sudo bash"

Running "sudo dpkg -i somepackage.deb" is literally just as dangerous.

You *will* want to run code written by others as root on your system at least once in your life. And you *will not* have the resources to audit it personally. You do it every day.

What matters is trusting the source of that code, not the method of distribution. "curl ... | sudo bash" is as safe as anything else can be if the curl URL is TLS-protected.


> Running "sudo dpkg -i somepackage.deb" is literally just as dangerous.

And it's just as bad an idea if it comes from some random untrusted place on the internet.

As you say, it's about trust and risk management. A distro repo is less likely to be compromised. It's not impossible, but more work is required to get me to run your malicious code via that attack vector.


Sure.

But

    curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
is less likely to get hijacked and scp all my files to $REMOTE_SERVER than a .deb file from the releases page of a random 10-star GitHub repository. Or even from a random low-use PPA.

But I've just never heard anyone complain about "noobs" installing deb packages. Ever.

Maybe I just missed it.


> But I've just never heard anyone complain about "noobs" installing deb packages. Ever.

it is literally in the debian documentation: https://wiki.debian.org/DontBreakDebian

> One of the primary advantages of Debian is its central repository with many thousands of software packages. If you're coming to Debian from another operating system, you might be used to installing software that you find on random websites. On Debian installing software from random websites is a bad habit. It's always better to use software from the official Debian repositories if at all possible. The packages in the Debian repositories are known to work well and install properly. Only using software from the Debian repositories is also much safer than installing from random websites which could bundle malware and other security risks.


At least the package is signed. Curl can run against a URL that got hijacked.


It's signed by a key that's obtained from a URL owned by the same person. Sure, you can't attack devices already using the repo, but new installs are fair game.

And are URLs (w/ DNSSEC and TLS) really that easy to hijack?


> And are URLs (w/ DNSSEC and TLS) really that easy to hijack?

During the Google Domains-Squarespace transition, there was a vulnerability that enabled relatively simple domain takeovers. And once you control the DNS records, it's trivial to get Let's Encrypt to issue you a cert and adjust the DNSSEC records to match.

https://securityalliance.notion.site/A-Squarespace-Retrospec...


Packages can get hijacked too.


What is the difference between a random website or domain, and the package manager of a major distribution, in terms of security? Is it equally likely they get hijacked?


The issue is not the package manager being hijacked but the package. And the package is often outside the "major distribution" repository. That's why you use curl | bash in the first place.

Your question does not apply to the case discussed at all, and if we modify it to apply, the answer does not argue your point at all.

