Jonovono's comments

What do you picture the syntax looking like? JSON, XML?


This is awesome. I did the same and have been using AlpineJS Pines UI library. Been pretty happy with it! But will take a look at this


Pines is really neat. I just don't like the wall of classes, but super neat nonetheless.


The Sully episode was unreal haha


It has already replaced therapists, the future is just not evenly distributed yet. There are videos with millions of views on TikTok, and comments with hundreds of thousands of likes, of teenage girls saying they have gotten more out of 1 week using ChatGPT as a therapist than years of human therapy. Available anytime, cheaper, no judgement, doesn't bring their own baggage, etc.


> no judgement

The value of a good therapist is having an empathetic third party to help you make good judgements about your life and learn how to negotiate your needs within a wider social context.

Depending on the needs people are trying to get met and how bad the people around them are, a little bit of a self-directed chatbot validation session might help them feel less beat down by life and do something genuinely positive. So I'm not necessarily opposed to what people are doing with them; in some cases it doesn't seem that bad.

But calling that therapy is both an insult to genuinely good therapists and dangerous to people with genuine mental/emotional confusion or dysregulation that want help. Anyone with a genuinely pathological mental state is virtually guaranteed to end up deeper in whatever pathology they’re currently in through self directed conversations with chatbots.


Reading between the lines I think a key part of what makes chatbots attractive, re lack of judgment, is they're like talking to a new stranger every session.

In both IRL and online discussions sometimes a stranger is the perfect person to talk to about certain things as they have no history with you. In ideal conditions for this they have no greater context about who you are and what you've done which is a very freeing thing (can also be taken advantage of in bad faith).

Online and now LLMs add an extra freeing element, assuming anonymity: they have no prejudices about your appearance/age/abilities either.

Sometimes it's hard to talk about certain things when one feels that judgment is likely from another party. In that sense chatbots are being used as perfect strangers.


Agreed/that’s a good take.

Again, I think they have utility as a “perfect stranger” as you put it (if it stays anonymous), or “validation machine” (depending on the sycophancy level), or “rubber duck”.

I just think it's irresponsible to pretend these are doing the same thing skilled therapists are doing, just like I think it's irresponsible to treat all therapists as equivalent. If you pretend they're equivalent, you're basically flooding the market with a billion free therapists that are bad at their job, which will inevitably shrink the supply of good therapists: people who would have been good at it never enter the field due to oversaturation.


Also important is simply that the AI is not human.

We all know that however "non-judgmental" another human claims to be, they are having all kinds of private reactions and thoughts that they aren't sharing. And we can't turn off the circuits that want approval and status from other humans (even strangers), so it's basically impossible not to mask and filter to some extent.


The problem with this is they are practicing like medical providers without any quality assurance or controls to ensure they are behaving appropriately.

Therapy is already a bit of grey zone… you can have anyone from a psychologist, a social worker, an untrained deacon, etc “counseling” you. This is worse.

Hell, I’ve been a coach in different settings - players will ask for advice about all sorts of things. There’s a line where you have to say “hey, this is over my head”


Kind of reminds me of an interview question that a friend of mine suggested for when I conduct interviews: Pick your favorite/strongest language. How would you rate yourself, where 0 is "complete newbie" and 10 is "I invented the language"?

My friend, an EXTREMELY competent C++ programmer, rates himself 4/10 because he knows what he doesn't know.

I've interviewed people who rated themselves 9 or 10/10 but couldn't remember how their chosen language did iteration.


Sounds like a bad question then, no?


I wouldn't trust ChatGPT to help someone in a mental health crisis, but I would be glad to find out my dad had started using Claude Sonnet to process his transition into retirement. I believe Sonnet would encourage a user to seek professional help when appropriate, too. In my experience, genuinely good therapists are hard to find--probably 75% of them are going to be strictly worse than Sonnet.


> There are videos with millions of views on tiktok and comments with hundreds of thousands of likes of teenage girls saying they have gotten more out of 1 week using ChatGPT as a therapist than years of human therapy.

You can find influencers on tiktok recommending all kinds of terrible ideas and getting thousands of likes. That's not a very reliable metric. I wouldn't put a lot of faith in a teenage girl's assessment of AI therapy after just one week either, and I certainly wouldn't use that assessment to judge the comparative effectiveness of all human therapists.

I'd also expect ChatGPT to build profiles on people who use it, to use the insights and inferences from that collected data against the user in various ways, to sell that data in some form to third parties, to hand that data over to the state, to hallucinate wildly and unpredictably, and to outright manipulate/censor AI's responses according to ChatGPT's own values and biases or those of anyone willing to pay them enough money.

It's a lot easier to pay a large amount of money to ChatGPT so that the AI will tell millions of vulnerable teenage girls that your product is the solution to their exact psychological problems than it is to pay large amounts of money to several million licensed therapists scattered around the globe.

Maybe you think that ChatGPT is unfailingly ethical in all ways and would never do any of those things, but there are far more examples of companies who abandoned any commitment to ethics they might have started with than there are companies who never got once greedy enough to do those types of things and never ever got bought up by someone who was. I suppose you'd also have to think they'll never have a security breach that would expose the very private information being shared and collected.

Handing over your highly sensitive and very personal medical data to the unlicensed and undependable AI of a company that is only looking for profit seems extremely careless. There are already examples of suicides being attributed to people seeking "therapy" from AI, which has occasionally involved that AI outright telling people to kill themselves. I won't deny that the technology has the potential to do some good things, but every indication is that replacing licensed therapists with spilling all your secrets to a corporate owned and operated AI will ultimately lead to harm.


Is a system optimised (via RLHF) for making people feel better in the moment, necessarily better at the time-scale of days and weeks?


Just the advantage of being available at convenient times, rather than in the middle of the day sandwiched between or immediately after work/school, is huge.


Yes. While these claims might be hyperbolic and simplistic, I don’t think they’re way off the mark.

The above issue, whilst relevant and worth factoring, doesn’t disprove this claim IMO.


Remembers everything that you say, isn't limited to an hour session, won't ruin your life if you accidentally admit something vulnerable regarding self-harm, doesn't cost hundreds of dollars per month, etc.

Healthcare is about to radically change. Well, everything is now that we have real, true AI. Exciting times.


Openly lies to you, hallucinates regularly, can barely get a task done. Such exciting.

Oh and inserts ads into conversations. Great.


> Oh and inserts ads into conversations. Great.

Are you sure you don't have browser malware?


Quick reminder that it's still just a fancy pattern matcher, there's no clear path from where we are to AGI.


> you are a stochastic parrot

> no I'm not

> yes you are


You can still get delta updates with Sparkle in an electron app. I am using it, and liking it a lot more than Electron Updater so far: https://www.hydraulic.dev


If you have built an app with Electron, you know how annoying dealing with IPC can be.

This provides an alternative way to build powerful UIs in Electron without having to go through the IPC layer, by instead treating your main process like a "server" that just sends HTML directly to the renderers. That way your views can directly make use of any node module or data on the main side. All without actually starting a server!
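A minimal sketch of the idea in plain Node. The helper names here (`renderTodoList`, `escapeHtml`) are illustrative, not part of any library: the main process builds finished markup from whatever node-side data it has, and only the resulting string ever crosses into the renderer.

```javascript
// Sketch: treat the main process like a server that renders HTML
// strings directly, instead of shuttling structured data over IPC.

// Basic escaping so arbitrary data is safe to interpolate into markup.
function escapeHtml(s) {
  return s.replace(/[&<>"']/g, (c) => ({
    "&": "&amp;", "<": "&lt;", ">": "&gt;", '"': "&quot;", "'": "&#39;",
  }[c]));
}

// Runs in the main process, so it can freely use fs, databases, any
// node module -- then ship the finished markup to a view.
function renderTodoList(todos) {
  const items = todos.map((t) => `<li>${escapeHtml(t)}</li>`).join("");
  return `<ul id="todos">${items}</ul>`;
}

const html = renderTodoList(["buy milk", "ship app <v2>"]);
console.log(html);
```

In a real Electron app the resulting string could then be loaded into a window with something like `win.loadURL("data:text/html," + encodeURIComponent(html))`, or pushed into an existing view, so the renderer never needs its own IPC plumbing to fetch that data.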

Let me know if there are any side-by-side examples you'd like to see!


Not sure if it's been fixed by now, but a few weeks ago I was in Golden Gate Park and wondered if it was bigger than Central Park. I asked ChatGPT voice, and although it reported the sizes of the parks correctly (with Golden Gate Park being the bigger one), it then went and said that Central Park was bigger. I was confused, so I Googled it, and sure enough Golden Gate Park is bigger.

I asked Grok and others as well. I believe Perplexity was the only one correct.

Repeated it multiple times, even with a friend's account. It kept doing the same thing. It knew the sizes, but thought the smaller one was bigger...


I just tried. Claude did exactly what you said, and then figured it out:

Central Park in New York City is bigger than GoldenGate Park (which I think you might mean Golden Gate Park) in San Francisco.

Central Park covers approximately 843 acres (3.41 square kilometers), while Golden Gate Park spans about 1,017 acres (4.12 square kilometers). This means Golden Gate Park is actually about 20% larger than Central Park.

Both parks are iconic urban green spaces in major U.S. cities, but Golden Gate Park has the edge in terms of total area.
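The comparison the models kept flubbing is trivial once you hold the two numbers Claude itself quoted (Golden Gate Park ~1,017 acres, Central Park ~843 acres):

```javascript
// The acreages quoted in the reply above.
const goldenGateAcres = 1017;
const centralParkAcres = 843;

// The actual comparison the models got backwards.
const larger = goldenGateAcres > centralParkAcres
  ? "Golden Gate Park"
  : "Central Park";

console.log(`${larger} is larger`);
```

which is consistent with the "about 20% larger" figure in the quoted answer (1017 / 843 ≈ 1.21), despite the opening sentence claiming the opposite.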


Probably because it has read the facts but has no idea how numbers actually work.


Took me way too long to realize this


Interesting read. I am curious how you would analyze the current AI hype around MCP (model context protocol) from your perspective. Does it fit into the future you see? It seems like it's going in the complete opposite direction from the future you paint, but perhaps it's just a stepping stone given the constraints of now.


Looks awesome! I want something like this for managing Electron base windows & Web contents views :)


