Hacker News | ToValueFunfetti's comments

Is "Hoping to restore your faith in us" really part of their form letter? It fits right in with the melodrama of the complaint, but I guess it's plausible that's what they always say.


You may find Anglish amusing, then:

https://anglish.org/wiki/Anglish


What's funny is my initial impression of Anglish is that it reminds me a lot of German.


Not surprising; it is a Germanic language.

West Germanic, Anglo-Frisian to be precise.

https://en.m.wikipedia.org/wiki/Anglo-Frisian_languages


Given English's "pure" roots, that should probably be entirely unsurprising.


Well, that wordbook is mighty bewitching.


Funnily enough, that's the line that made me suspicious it was AI. I've seen that structure and that sort of metaphor many times from ChatGPT. And it's not like GPT shies away from criticizing Altman (https://chatgpt.com/share/682623aa-eac0-8000-9fa3-d039580a01...). The rest of the comment doesn't set off any alarm bells for me.


Same; I've read enough ChatGPT prose to recognize it. The rest of the comment also has small cues that point to AI.


I think it maps perfectly onto the halting problem: just say that one of the requirements of your program is halting. Humans can decide whether a program halts in a lot of cases, including more or less all of the programs we're likely to encounter. But for the overwhelming majority of possible programs, we can't figure it out.

A useful bug detector doesn't need to overcome this because it would be detecting bugs in the kind of code we write, but there is no bug detector which gives the correct answer for all inputs.
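
To make that concrete, here's a minimal Python sketch of the classic diagonalization (halts and defeat are made-up names, and this halts is a deliberately crude stand-in, not a real decider):

    import inspect

    # A toy "decider" that handles one easy case: it guesses that
    # anything containing an obvious infinite loop never halts,
    # and that everything else halts.
    def halts(program, arg):
        src = inspect.getsource(program)
        return "while True" not in src  # crude syntactic guess

    def defeat(arg):
        # Do the opposite of whatever halts() predicts about
        # defeat running on itself.
        if halts(defeat, defeat):
            while True:  # predicted "halts": loop forever
                pass
        return "done"    # predicted "loops": halt immediately

    print(halts(defeat, defeat))  # False ("loops")
    print(defeat(defeat))         # "done": the prediction failed

Swap in any smarter halts you like; defeat is built to contradict it on at least one input, which is the whole theorem.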


I don't think you realize how universal the halting problem is.

Like any law of nature, it governs everything that exists in the universe, so it governs humans as well.

If a human can know that a program halts, that means the program provably halts. If a human can't tell whether a program will halt, it likely means that the program's halting isn't provable.

The halting problem is about whether a single general algorithm can decide, for every program, whether it will halt.
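
For a concrete example of that gap, take the Collatz iteration. A minimal Python sketch (the function name is mine); whether this loop halts for every positive integer is a famous open problem, even though it has halted on every input ever tried:

    def collatz_steps(n):
        # Iterate n -> n/2 (even) or 3n+1 (odd) until reaching 1.
        # No proof exists that this terminates for all n > 0.
        steps = 0
        while n != 1:
            n = 3 * n + 1 if n % 2 else n // 2
            steps += 1
        return steps

    print(collatz_steps(27))  # 111 steps

So a human can watch this program halt for every n they try and still be unable to prove it halts in general.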


> For context, generative AI music is basically unlistenable. I've yet to come across a convincing song, let alone 30 seconds worth of viable material.

This one pops into my head every couple months:

https://youtube.com/watch?v=4gYStWmO1jQ

It's not really my genre, so my judgment is perhaps clouded. Also, I find the dumb lyrics entertaining and they were probably written by a human (though obviously an AI could be prompted to do just as well). I am a fan of unique character in vocals and I love that it pronounces "A-R-A" as "ah-ahr-ah", but the little bridge at 1:40 does nothing for me.


You may have missed the month or so when this[1] AI-generated track (remixed by a person, but nonetheless) dominated pop culture.

[1] https://www.youtube.com/watch?v=1uW_AUwEv-0


The concern is that America would be too powerful if they had this power over Catholicism as well. There's no concern about waiting until it's time to appoint the next one.


That assumes a lot about every administration. I don't see how anyone can look at what the US government has done, and failed to do, over the last few decades and call it the ideal charitable recipient. Even when it's doing the right things, it wastes enormous amounts of money doing so, and the primary beneficiary is one of the wealthiest populations in the world.

Of course, you wouldn't expect them to be the ideal charity; they are explicitly not a charity. Anyone who is actually trying to be a charity should have little trouble using funds more charitably than any government in the world.



If we're at the point where planning what I'm going to write, reasoning it out in language, or preparing a draft and editing it is insufficient to make me not a stochastic parrot, then it's important to specify what massive differences could exist between appearing like one and being one. I don't see a distinction between this process and how I write everything, other than "I do it better". I guess I can technically also use visual reasoning, but mine is underdeveloped and goes unused. Is it just a dichotomy of stochastic parrot vs. conscious entity?


There's a lot of documentation that was left unwritten but that I would have loved to read.

