Several EU cities have experimented with making public transport free, and people seem to really enjoy it.

Also, as you so eloquently put it, it isn't clear that the income from tickets even covers the cost of issuing and checking them, and there are reasons why MTA fares cannot be priced to cover the full cost of the fare-compliance infrastructure. There's a nice analogy to the cost of parking vs. the value of the real estate it occupies: what justifies the subsidy for on-street parking?


Some internet searching suggests fares account for between 25 and 33 percent of the MTA’s revenue. There’s no way the infrastructure for collecting fares costs that much.

This is one of the main criticisms of free fares: in reality the revenue stream from fares is never fully replaced, so the transit agency just ends up underfunded. This makes transit worse for existing users, who are already paying. The new users you gain from free fares are mostly casual users like tourists who have other options, so serving them isn't that useful and isn't worth the negative impact on existing users.


Seconding Uvix's question.

I was looking for work in 2021-2022, and an approach like yours got me a job after interviewing with circa 10 companies. Unfortunately I ended up on the wrong side of office politics and had to leave in early 2024.

At the start of my 2024 job search I again tried a targeted search; my targeting was good enough that I had a circa 1:10 application-to-interview ratio. It still took over 50 companies before I found my current role. The market is much tougher now than it was a few years ago.

I hear there was a time when companies were eager/desperate to hire. Those were good years for job seekers.


> Those people who have to apply to 100s of jobs are probably in that situation because they’re spraying low-effort LLM resumes around and most hiring managers can see right through this game by now.

Just came off a brutal 7-month job search. And that's with a resume good enough, and enough care in which jobs I applied to, that I got to the hiring manager with 1 in 10 applications (vs. 1:100 or worse, which I've heard is normal).

I think I interviewed at 50+ companies, which makes 500 or more applications.

Yes, this clearly says something about my interview skills, but there is a difference between interview skills and engineering/software skills -- I've done well in my career (at the senior IC level) without having to interview much before, and I came by that strong resume honestly.

So please be careful about generalizing. I'm an example of someone who had to apply to 5x as many jobs as you say would be needed, and it would have been 50x if I didn't have a strong background and work ethic.


Cost and outcomes. In aggregate, Americans pay much more and get worse outcomes.


Define "smart". And explain how "smart"=="conscious"

I can agree that there is no genetic benefit to being able to move at the speed animals move, because that's not how plants obtain food or avoid being eaten. Thus no need for nerves or a CNS to coordinate movement.


Take either of them and tell me why nature would have optimised for that rather than other features, like leaves. Energy isn't infinite so genetic changes optimise for easier-to-achieve ends rather than somehow jumping past all animals to evolve smarts or consciousness without evidence of many precursor adaptations.

Also, why would a smart (or conscious) plant not eventually have learned to use some of that to do something that improves survivability? Like striking out, hiding, or anything more than "somewhat grow towards the light or nutrients over time".

It's a nice fun exercise to argue with people while imbibing your drug of choice, but it's utterly unlinked to anything else we see in nature. We're not idiots, we would have seen evidence by now.


Alternative hypothesis: Given that plants do not have a central nervous system, it is reasonable to expect they have a distributed consciousness.

Recall that most plants avoid building single-purpose organs, as the odds that 70-80% of the plant gets eaten are high. Plants have evolved to survive massive loss of body parts.

I've read some studies on plant consciousness which show that plant awareness can be turned off with anesthetics.


> it is reasonable to expect they have a distributed consciousness.

Why would it be reasonable to expect they have any consciousness at all? What would plants do with a consciousness that they're wasting scarce energy on, both to operate it and to build the biological structures that maintain it? They can't move. They can't take active action. Why would they develop a consciousness that does nothing but make them aware of their impending doom, without allowing them to act on it?

> Recall that most plants avoid building single-purpose organs, as the odds that 70-80% of the plant gets eaten are high. Plants have evolved to survive massive loss of body parts.

You know what would be really useful to evolve, to survive the loss of body parts? Not suffering and feeling pain when it happens, or even being aware that it just did. Especially when, being immobile, you can't do anything about it anyway.

> I've read some studies on plant consciousness which shows that plant awareness can be turned off with anesthetics

Citation needed.

Actually I’ll make it even easier. Start with studies that show plant awareness in the first place, before you show studies showing it can be switched off.



Plants do move and respond to their environment. A lot. They're even social: they've been shown to communicate with each other to signal that pests are attacking, and their peers then increase production of pest-repelling chemicals.

They just do all that on a much slower time scale than you're used to in your own consciousness. I wouldn't totally discount plants having some form of consciousness at a lower frequency.


Are you okay?


Can you cite those studies please? Very interested.


Hey, I will say you're right about "seems similar to human", but behavior such as mothers taking care of their babies is found in many plant species, as is loyalty. Probably happiness and longing as well.

Ask yourself how many of these feelings arise from the rational, thinking part of your brain vs. how many seem to be full-body sensations. A central nervous system might allow such signals/awareness to propagate at mammal speed, but why would a plant need that speed?

There really is no reason why a CNS is needed for these emotions to be active, just a way to distribute hormones/chemical signals throughout the body.


> If we define hallucinations as falsehoods introduced between the training data and LLM output,

Yes, if.

Or we could realize that an LLM's output is a random draw from a distribution learned from the training data, i.e. ALL of its outputs are hallucinations. It has no concept of truth or falsehood.
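
To make that concrete, here is a minimal sketch (plain numpy, with made-up logits; real decoders add tricks like top-k/top-p) of what "a random draw from a learned distribution" means at each token step:

    import numpy as np

    def sample_next_token(logits, temperature=1.0, rng=None):
        # Softmax turns the model's raw scores into a probability
        # distribution; we then draw one token id at random from it.
        rng = rng or np.random.default_rng()
        scaled = logits / temperature
        probs = np.exp(scaled - scaled.max())
        probs /= probs.sum()
        return rng.choice(len(probs), p=probs)

    # Hypothetical logits over a 5-token vocabulary. The draw is
    # probability-weighted; nothing marks any token as "true" or "false".
    logits = np.array([2.0, 1.0, 0.5, -1.0, -3.0])
    print(sample_next_token(logits))

In that framing, "hallucination" isn't a separate failure mode; it's the same sampling process landing on a string we happen to judge false.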


I think what you are saying here is that because it has no "concept" (I'll assume that means internal model) of truth, there is no possible way of improving the truthiness of an LLM's outputs.

However, we do know that LLMs possess viable internal models, as I linked to in the post you are responding to. The OP paper notes that its probes find the strongest signal of truth -- where truth is defined by the correct answer on each benchmark -- in the middle layers of the model, at the activations of the "exact answer" tokens. That is, we have something inside the LLM which statistically correlates with whether the LLM's output matches "benchmark truth". Assuming you are willing to grant that "concept" and "internal model" are pretty much the same, this sure sounds like a concept of "benchmark truth" at work. If you aren't willing to grant that, I have no idea what you mean by "concept".
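
As a rough illustration (not the paper's actual code; the data here are random stand-ins), the kind of probe involved is essentially a linear classifier trained on mid-layer activations:

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    # Hypothetical stand-ins: one row of mid-layer activations per
    # benchmark question, captured at the "exact answer" token, plus
    # a 0/1 label for whether the model answered correctly.
    rng = np.random.default_rng(0)
    activations = rng.standard_normal((1000, 4096))
    correct = rng.integers(0, 2, size=1000)

    X_tr, X_te, y_tr, y_te = train_test_split(
        activations, correct, test_size=0.2, random_state=0)
    probe = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

    # With real activations, held-out accuracy well above chance is
    # the "signal of truth"; with this random stand-in data it should
    # sit near 0.5.
    print(probe.score(X_te, y_te))

The point isn't the classifier: it's that a simple linear readout of the model's internals can predict whether its output will match "benchmark truth" better than chance, which is hard to square with "no concept of truth at all".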

If you mean to say that humans have some model of Objective Truth which is inherently superior, I'd argue that isn't really the case. Human philosophers have been arguing for centuries over how to define truth, and don't seem to have come to any conclusion on the matter. In practice, people have wildly diverging definitions of truth, which depend on things like how religious or skeptical they are, what the standards for truth are in their culture, and various specific quirks from their own personality and life experience.

This paper only measured "benchmark truth" because that is easy to measure, but it seems reasonable to assume that other models of truth exist within them. Given that LLMs are supposed to replicate the words that humans wrote, I suspect that their internal models of truth work out to be some agglomeration (plus some noise) of what various humans think of as truth.


If that were the case, you couldn't give it a statement and ask whether that statement is true or not, and get back a response that is correct more often than not.


You can judge the truth and falsity of its output without caring a whit about how it produces those outputs.


Koan-like question that may have no answer:

If language communicates thoughts, and thoughts bear a relationship to reality that might be true, false, or something else,

then: what thought is LLM language communicating, to what reality does it bear a relationship, and what is the truth or falsity of that language?

To me, LLM-generated sentences have no truth value; they are literally strings, not thoughts.

Take the simple exchange "user: how much is two plus two? assistant: two plus two is four". It may seem trivial, but how do you ascertain that that statement maps to 2+2=4? Do you make a leap of faith, or argue that the word "plus" maps to the addition function? What about "is" -- does it map to equality, even when the same token appears in "water is wet" (where wet is not water)? Or are we arguing that the truthfulness lies in the embedding interpretation, where tokens and strings merely communicate points in the multidimensional embedding space (which could be said to be a thought), and we are now mapping some vectors in that space as true and some as false?


A part of an answer:

Let's assume LLMs don't "think". We feed an LLM an input and get back an output string. It is then possible to interpret that string as having meaning, in the same way we interpret human writing as having meaning, even though we may choose not to. At that point, we have created a thought in our heads which could be true or false.

Now let's talk about calculators. We can think of a calculator as similar to an LLM, but one speaking a more restricted language and giving significantly more reliable results. The calculator takes a thought converted to a string as input from the user and outputs a string, which the user converts back into a thought. The user values that the output string produces a thought with high truthiness. People don't like buggy calculators.

I'd say one can view an LLM in exactly the same way, except that it takes a much richer language of thoughts but outputs significantly buggier results.
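
As a toy sketch of that framing (the llm stub here is obviously fake), both can be typed as the same string-to-string interface, differing only in input richness and output reliability:

    from typing import Callable

    # Both devices share one type: a pure string-to-string map.
    # Meaning is supplied by the user going in and read back out.
    Interpreter = Callable[[str], str]

    def calculator(expr: str) -> str:
        # Restricted input language, highly reliable output.
        return str(eval(expr, {"__builtins__": {}}))  # toy: arithmetic only

    def llm(prompt: str) -> str:
        # Much richer input language, much buggier output (stubbed).
        return "two plus two is four"

    engines: list[Interpreter] = [calculator, llm]  # same interface
    print(calculator("2+2"))                 # -> 4
    print(llm("how much is two plus two?"))  # -> two plus two is four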


It's also much faster to get in front of an NP than a doctor.


As it should be. One of the few levers we have to control costs across the healthcare system is shifting much of routine primary and urgent care to PA/NP practitioners. I understand that might mean a loss of quality in some cases (I have been on the receiving end of that myself), but we'll have to lower our expectations and be content with good enough.


Hey, I hear you. Modern America is full of voices demanding you give them care/attention.

Every.

Single.

Thing.

It is overwhelming. See the following comments on the need for slack/margin in capacity; we need it for emotional demands too. Yet every commercially-mediated interaction in our society seems optimized to make us care more about it, because that increases its revenue.


This is why I recently fell in love with libraries. They are among the few places where ads and manipulation designed to part you from your money are extremely limited, and the demands are so chill.

"Want to borrow a book? Ok." "Don't want to? Ok. Whatevs."

And even analysis paralysis is overcome with: "Why not both! Borrow two whole books!"


I really resent how our collective attention has been so commoditized. Advertising lies at the core of so many of our problems, and I'm not sure how we can come back.

