I see what you mean; but as an external observer of both, I see crypto as having failed to provide or prove any particularly productive use, while ChatGPT, for all its flaws, has been useful to me daily for the last 2 months - and not in some crazy make-money-in-suspicious-ways sense, but for learning French and Python and figuring out new concepts and stuff. But that's a sample of n=1, I suppose; there are both people who believe in crypto and people who believe ChatGPT is useless.
I wonder how much of the usefulness of ChatGPT that some report is caused by https://en.wikipedia.org/wiki/Novelty_effect
Because it is new, and cool, and fascinating, people are spending a bit more time with a bit more focus on the task with it compared to the old ways, hence the productivity increase. Once the novelty wears off…
It has helped me learn things more interactively because I can ask questions and get answers without having to read 10 blog posts or a book before understanding things. For my personality, it would be hard to recreate without having a living mentor available 24/7 that had knowledge of everything I wanted to ask of it.
Does it get things wrong? Sure. But generally the subtle ways it might get stuff wrong would be the way I would subtly misunderstand things while learning. So, it's definitely not perfect, but "not letting perfect be the enemy of good" and all that.
You must have missed the article posted this morning where ChatGPT completely invented a sexual allegation against a law professor, including citing a fictional news article that never existed. I wouldn't call that a subtle misunderstanding.
Yes, ChatGPT hallucinates facts and is not great at causal inference chains. The interactivity is the real boon: I can query the chat bot, then explore the results either with traditional research or with more queries and exploration. It's a really fun and productive way to learn.
But... how much does it hallucinate? Are there safer areas or ways of asking questions than others? Genuine question. I've found that when I ask simple factual questions in a... "non-pushy, non-presumptive" manner, it gives accurate basic answers in areas I'm familiar with. I ask it simple things about basic French or basic Python or basic music theory, and so far it's been... brilliant!
I found most examples of hallucination on the web to be pushy or presumptive or edgy, e.g. "tell me the ways in which vaccines cause male pattern baldness", followed by prompt manipulation.
So I assume there are areas and approaches that are safer and more useful, and areas and approaches that are riskier?
It's shocking to me how many people treat Google Search as a reliable source, given how much we know it silos people into extremist sites and has been extensively gamed by SEO professionals.
Possibly. But I find the opposite - I find more and more uses for it as time passes. I am encouraging my friends to play with it, to figure out for themselves when it's useful and when it's not.
For me, it's like having a learning companion, and it makes it easier for me to tackle new things when I can "brainstorm" and ask random (basic) questions.
I think it's less about the usefulness and more about the fact that everyone is racing to add ChatGPT to their products (or start a new product off the back of it), whether or not it's useful or adds value. That feels a lot like everyone tacking on blockchain or launching an ICO.
So far, crypto provides exactly one genuinely useful ability: transferring money outside of regulatory frameworks. If, say, you need to buy things on the black market that aren't legal in your country, or your family lives in a country under sanctions and they need money quickly, that might be a very productive use of it.