qazxcvbnmlp's comments

Well put.

> A lot of modern, aspiring-middle-class and online culture

There's also a pernicious way of identifying with the struggle. Instead of 'I have trouble focusing in certain situations, so maybe I should find ways to spend my time (careers, hobbies) that work well with that,' we go to 'I have ADHD, and my job should make special accommodations for me.'

Regardless of whether a job should or should not make accommodations, it's not a very helpful construct to think it should. It removes agency from the person experiencing the struggle, which in turn puts them farther from finding a place where they would fit in well.

For the vast majority of these behaviors (ADHD, attachment issues, autism, etc.), they exist on a continuum and are adaptive/helpful in certain situations. By pathologizing them, we (society) lose touch with what they mean in our lives. It also makes discourse hard, because 'this is causing me to truly not be able to function' gets mixed in with 'this is a way my brain behaves, but I can mostly live my life.'


HN commenter logic: 'if I don't understand it, this is clearly wrong'

'If someone doesn't spend money irrationally the way I spend money irrationally, it's bad'

There is indeed a blind spot.


Well said


so many opportunities for shenanigans!

'watch this ad before we give you your boarding pass'

'no boarding pass for you until your group is called'

'ah ha! you have an iPhone 17 Pro Max instead of a $BUDGET_ANDROID, so your seat fees will be 20% higher'

'oh, you got your boarding pass on this device but need it on a different device now? cool, let's do a device transfer fee'


Ryanair are probably reading this thread looking for ideas.


Reading the poster's CV and experience, I suspect they have a skill gap in theory of mind, i.e., understanding how they are perceived. Sure, the economy is hard, and finding a job is difficult. Questions I have:

- What jobs are they applying for?
- Do they understand the benefits they can bring to a team?
- Are they showing up in interactions like they show up in this blog post? How can they take radical responsibility for the problem of finding a job? Doing what you are told and not getting a job sure sucks, but if that's all someone tells me about what they did, I am 100% not passing on a good recommendation.
- Their resume needs work.


‘Nobody hires juniors’ is a possibility, too


Maybe 'no-one doing whiz-bang AI research is hiring juniors'. I really doubt that 'no-one anywhere in the tech industry is hiring juniors'.


Agree 100%

It's very hard to get a job right now; I don't doubt that. But looking at macroeconomic trends isn't very helpful in getting a job: the relative change in the trends matters much less than how you show up in the process.

The poster had consulting work and three internships. I sense a disconnect between what a potential employer needs (i.e., why they would pay you) and what the poster has to offer.

It's easier for the ego to go 'man, the job market is bad' (i.e., if I don't get this job, what does that say about my worth as a human?), but it's not very helpful in getting a job.


Brenda also needs to put food on the table. If Brenda is careless and messes up, we can fire Brenda; because of this, Brenda tries not to be careless (among other motivations). However, I cannot deprive an AI model of pay because it messed up.


This is the reason the higher-ups in finance who rely on Brenda might continue to rely on Brenda, rather than relying on AI. She offers them accountability.


You might be looking for the word “accountability”


I use Render and spend remarkably little time doing devops. It's fantastic.


Case in point: I have several technical manuals that look like books but have no ISBN numbers.


> My experience has been that if you take the time to explain what the current state is, what your desired state should be, and to give information on how you want the agent to proceed,

I have a pet theory:
1. This skill requires a strong theory of mind. [1]
2. Theory of mind is more difficult for those with autism.
3. The same autism that makes people really good at coding, and gives them the time to post on online forums like HN, makes it hard to understand how to work with LLMs and how others work with LLMs.

To provide good context to the LLM, you need a good understanding of (1) what it will and will not know, (2) what you know and take for granted (i.e., a theory of your own mind), and (3) what your expectations are. None of this is needed when you are coding on your own, but all of it is critical to getting a good response from the LLM.
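
As a rough illustration (not the parent's actual workflow), here's a minimal sketch in Python of spelling out those three pieces before asking an LLM for help; build_prompt and the example strings are hypothetical:

    # Sketch: state what the model cannot know (your code), what you want,
    # and how you want it to proceed. Purely illustrative.
    def build_prompt(current_state: str, desired_state: str, instructions: str) -> str:
        return (
            f"Current state:\n{current_state}\n\n"
            f"Desired state:\n{desired_state}\n\n"
            f"How to proceed:\n{instructions}\n"
        )

    prompt = build_prompt(
        current_state="Flask app with a /users endpoint that returns every row.",
        desired_state="The endpoint supports ?page= and ?per_page= query parameters.",
        instructions="Touch only app.py, keep the response schema, and add a test.",
    )
    print(prompt)  # paste into a chat, or pass to whatever client library you use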

See also the black and white thinking that is common in the responses on articles like this.[2]

[1] https://en.wikipedia.org/wiki/Theory_of_mind
[2] https://www.simplypsychology.org/black-and-white-thinking-in...


An LLM has no mind! What is your strong theory of mind for an LLM? That it knows the whole internet and can regurgitate it like a mindless zombie?


Whether or not it has a mind is irrelevant to the problem. I think the point is: if you pretend it has a mind and write your prompt accordingly, you will get the best results.


Source?

It sounds more like an unprovable metaphysical statement than something supported by scientific evidence.


The burden of proof is on people claiming that an AI has a theory of mind, not the reverse. Until recently it was highly debated whether dogs have a theory of mind, and it took decades of evidence to come to the conclusion that yes, they do.


GGP didn't say that AI has a theory of mind. GGP said that using AI productively requires a theory of mind, a.k.a. being able to build a mental model of the LLM's context.


The burden of proof is on the person making the claim. It doesn't matter whether the claim is positive or negative. The default position is "We don't know if AI has a ToM."


Am I incorrect in thinking this is as much true of the Linux kernel or Emacs as it is of an LLM?


If you read carefully you will see that they never said AI has a theory of mind.


This actually makes a disturbing amount of sense and I think I'm going to need to chew on it for a while. Thanks for sharing!


That Simply Psychology article is a psyop.


That HN username is also bad news. Meanwhile yours is pretty cool. I really enjoyed the social credit memes with John Cena.

How exactly do you come up with a pet theory out of nowhere, randomly diagnose people on the internet with autism based on how they use LLMs, and then link to a most likely AI-generated blog post (there was simply too much repetition) that ascribes a lot of negative attributes to them, all under a username that is meant to be unrecognisable?

The post is basically a Kafka trap or engagement bait.


What's the round-trip latency on this (ask question -> response)? Do you parse the question word by word and feed it into the LLM, or wait for the whole question before feeding it into the LLM?

