Never ceases to amaze me that the people who are clever enough to always be right are never clever enough to see how they look like complete wankers when telling everyone how they’re always right.
Oh, see here's the secret. Lots of people THINK they are always right. Nobody is.
The problem is you can read a lot of books, study a lot of philosophy, practice a lot of debate. None of that will cause you to be right when you are wrong. It will, however, make it easier for you to sell your wrong position to others. It also makes it easier for you to fool yourself and others into believing you're uniquely clever.
I don't see how that's any more "wanker" than this famous saying of Socrates; Western thought is wankers all the way down.
> Although I do not suppose that either of us knows anything really beautiful and good, I am better off than he is – for he knows nothing, and thinks he knows. I neither know nor think I know.
“I don’t like how they said it” and “I don’t like how this made me feel” are the aspects of the human brain that have given us Trump. As long as the idea persists that “how you feel about it” is a basis for any decision making, the world will continue to be fucked. The author’s audience largely understands that “this made me feel” is an indication that introspection is required, not an indication that the author should be ignored.
Yes and no? I think irl conversations were always more about connecting: "what's going on in your life, who did what, I'm worried about X", e.g. with family. On the rare occasion there's a disagreement over facts, it's absolutely great to have an LLM ref to adjudicate. So the LLM is supplementary. It even does emotional persuasion better than I do. But yes, I do find that overall I have slightly less subjective need for irl conversations because I have been talking with the LLM.
Interesting. Thanks for responding. As I've gotten older (or maybe it is the times and not me), I've found online conversation less stimulating, and like you, most of my IRL conversations are focused on more social/personal things. I've used LLMs for learning, but not conversation; but I've been scratching that itch with books, which, although they are not the same as conversation, do offer something similar.
The key delineation here is that the work is voluntary. I was uneasy reading the article and weighing it up, until I realised that the author could choose not to work whilst he serves his time.
If he were being coerced into labour, however, which the for-profit prison undoubtedly profits from, it would be simply unacceptable. Indentured servitude, slavery, call it what you will: it's bad for society in every way, because it allows the ruling class to steal labour from the working class under the guise of "rehabilitation".
I don’t understand how a company that has effectively defined modern interfaces, which has the clout to hire the best of the best, has resulted in this.
I fear the apple is starting to rot on the inside.
I had the same issue, and it’s ultimately what killed the Moonlander for me. I dedicated time to getting better at touch typing (I already do it, from years of chatting in games) and to dialling in a layout that worked for how I use computers.
Only to find that I’m mostly using other people’s computers when they call me over to help, and suddenly I’m mashing my meaty paws all over their MacBook as they look on in horror that this supposed technical professional can’t even press the shift key reliably.
If you have this issue, I can highly recommend working in another country where all your colleagues are using a different keyboard layout to you entirely. This is particularly bad for programming, because while standard layouts are mostly _fairly_ consistent with the letters, the symbols can end up anywhere. Sure, this means you still won't be able to find anything and look like an idiot, but now you can blame their keyboards for being weird rather than your own muscle memory!
They’re delivering this so quickly. 72 seconds down to 6 is a remarkable improvement, considering it’s not yet complete, and assuming they haven’t spent a concerted effort optimising it to perfection.
I have this issue with Devin. Given my limited knowledge of how these work, I believe there is simply too much context for it to take a holistic view of the task and finish accordingly.
If both OpenAI and Devin are falling into the same pattern then that’s a good indication there’s a fundamental problem to be solved here.