ChatGPT. This is the only technology that has truly freaked me out in my lifetime (I am 35). It competently explained to me the difference between Dijkstra's algorithm and A* in a psalm-like poem [0]. It was able to find misspelled words in a long text I had just made up, list them in alphabetical order, and explain to me why I misspelled them. It correctly simulated a ChatGPT instance which was accessible via an HTTP API, and it correctly simulated a Linux machine on which this simulated remote ChatGPT API was accessed via curl. It was able to give the responses in XML, JSON, CSV, and other formats, and it was able to simulate Linux tools like cut, grep, sort, awk, jq, sed and others. It was able to add an additional layer of ChatGPT simulation without problems. It was able to do entity recognition in a long text I provided, and was able to list the entities in a table, with the correct Wikipedia article and a summary of the article as columns. It was able to do rudimentary public transit route planning in my home town in Germany (in the form of: go to XY Station, trains run every 15 minutes from there to the main station, from there you will have long-distance lines to Z, etc.), with an apology that it does not have current schedule data for Germany and that I should visit bahn.de for better results. When it failed at math questions, it was able to give me Wolfram Alpha queries to find the correct results. It was able to formulate complex SPARQL queries to query Wikidata, run these against the simulated ChatGPT instance via a curl request, grep out some results I wanted, and list them in tabular form with corresponding Wikipedia articles.
All of this in one session, as part of a pleasant conversation, basically indistinguishable from a conversation with a real human.
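For a taste of what those Wikidata queries involve, here is a minimal sketch (my illustration, not ChatGPT's actual output; it assumes Python with the requests package installed) that runs a small SPARQL query against the public Wikidata endpoint:

    # Illustrative sketch: query Wikidata's public SPARQL endpoint,
    # roughly the kind of request ChatGPT formulated in that session.
    import requests

    query = """
    SELECT ?city ?cityLabel WHERE {
      ?city wdt:P31 wd:Q515 .                              # instance of: city
      SERVICE wikibase:label { bd:serviceParam wikibase:language "en" . }
    }
    LIMIT 5
    """
    resp = requests.get(
        "https://query.wikidata.org/sparql",
        params={"query": query, "format": "json"},
    )
    for row in resp.json()["results"]["bindings"]:
        print(row["cityLabel"]["value"])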
[0]
> Could you explain the difference between A* and Dijkstra's algorithm in the style of a Psalm?
< Sure, here is a Psalm-style explanation of the difference between A* and Dijkstra's algorithm:
Hark, ye seekers of the shortest path,
And listen well to what I say:
There are two algorithms that ye may use,
To guide ye on your way.
Dijkstra's algorithm is a general one,
That can be used in any case,
It explores the graph from the source node,
To find the path with the least amount of space.
But A* is a more specialized tool,
With a heuristic function to guide,
It searches more efficiently,
And helps ye reach the other side.
So choose wisely, oh seekers of the path,
And let these algorithms be your guide,
For they will lead ye to your destination,
And help ye avoid the pitfalls on the side.
It's an incredibly striking piece of technology and really shows which way the future winds will likely be blowing. A particular exchange I had with it left a mark on me, in which I requested it to behave like a used car salesman attempting to close a deal with a hungry customer. It proceeded to repurpose idioms, make dad-jokes and sling double entendres about the shared joy of cars and hamburgers and how they both are simple, dependable and iconic in my way of life. I sat there for a minute reading over it, nearly in disbelief.
I can accept a system such as ChatGPT synthesizing from data it sucked in, making educated guesses and so on. But to see it do such linguistic gymnastics with a very non-concrete request was humbling. It's given me a lot of pause about the way I absorb digital information and the varying degrees to which I have implicitly assumed the reliability of that information; site A slightly more trustworthy than site B, et cetera. To me, that old trust heuristic I relied on, one I have honed thanks to unfettered broadband for two decades, is now completely upended. It probably has been for some time, to be fair, but my time with ChatGPT really cemented that feeling.
Every freshly written statement that comes to me through an internet connected device now gets a side-eye by default.
I've been talking about this for a while now, but I used to run a marketing service that streamed all Reddit content in real time and did text analysis and bot detection. It's definitely a rough estimate, but roughly 65% of text content was determined to be bot-generated. I am entirely convinced that there are large entities (political campaigns, nations, etc.) that are using bot networks on social media sites like Reddit to simulate "consensus" in online discussions and thus gently sway public opinion.
> It's definitely a rough estimate, but roughly 65% of text content was determined to be bot-generated.
A scary number. I wonder about the per-subreddit distribution, though. I imagine the primary subreddits have a slightly worse human-to-bot ratio, niche subreddits somewhat better, with non-political, non-easily-monetizable subreddits having the best.
Did your analysis also attempt to identify troll farms? Would the content produced by protein bots be grouped in the ~65% of bot content, or the remaining 35%?
It's wild how low quality so many of the comments are on reddit, to the point that it makes me wonder "Why did this person comment something so empty and non-contributing to a post that already had 3000 comments?"
I don't know whether to believe people are so wasteful of their own time or whether this is just low-effort bot posting to build consensus. Combined with how harshly and instantly main subreddits like /r/politics and /r/news shadow-ban accounts, it's basically impossible to dissent.
>I can accept a system such as ChatGPT synthesizing from data it sucked in, making educated guesses and so on
On that note, I find it interesting that this has sometimes been an argument to dismiss ChatGPT as "non-intelligent". What are we if not statistical machines, synthesizing from the data we've sucked in over our lifetimes?
I certainly can't see us humans as anything other than that. But if that's true, we human machines seem to have a large number of "low-level programs" running in the background that serve to blur the boundary between us and the machines we build, and help us elevate ourselves to a special standing in the universe that very well may not exist.
I feel I think too much about mundane things, and sometimes about wondrously confusing things, often to my detriment. And at least a few times a week I would swear I run into a person with the opposite problem. Their behavior, choices, preferences, their thought processes (at least what they share with me) come off as closer to machine than human.
Something in my gut tells me we are more similar to some of the things we build than we might care to admit.
We are heuristic, which may not be logical, but it makes sense from an organism's perspective.
For example, since the world is uncertain and chaotic, we seek either to conserve or to expend for future gains. This can explain a lot of stock-trading behavior, despite often being a non-optimal viewpoint.
Agree - so many are dismissing it because it's not perfect, but this is an absolutely enormous leap from anything else I have seen in my lifetime, and this crosses the chasm, IMHO, from "AI is unlikely to really be able to have general applicability" to "AI is probably going to upend life as we know it in our lifetimes."
Maybe it will just peter out at this level like self driving car technology did, but I really doubt it, and it seems to me that the building blocks are already there to commercialize this and use it in a real way. The future is going to be interesting...
> Agree - so many are dismissing it because it's not perfect
In fact, I'd submit that phenomenon as its own entry in the Weirdest or Most Surreal Tech Trend contest. People who, when confronted with a talking dog, say things like, "This talking dog is an idiot. It thinks 429 is prime, its painting looks like something from Bob Ross on acid, and the C++ code it wrote for me is full of bugs. I don't see what all the hype is about."
Some of this dismissiveness is just whistling past the graveyard, but much of it comes from people who genuinely think the human brain is something that operates by means of ethereal, ineffable, and unreproducible mystic forces.
I feel this way about self-driving cars as well. Even 10 years ago, the capabilities they have today would have been considered mind-blowing, yet they receive so much hate because they aren't yet adapted to all situations. It's baffling to me; every time I watch a video of one I get all giddy and think, we are living in the future! Yet even from tech circles, they get so much criticism because they aren't 100% there yet.
It's just a mindset I don't understand. And on places like Reddit there is definitely some astroturfing going on, or at least some definite "camps" that don't like SDC technology, but on HN, AFAICT, it's been genuine disdain. So weird.
I think most of the criticism of self-driving is around the over-hyped, under-delivered "Full Self-Driving" from Musk -- it actually seems to be getting worse compared to 5 years ago.
Apparently Cruise and Waymo are doing much more impressive things, but they only get a fraction of the press.
I'm curious how many people who criticize Full Self-Driving have actually ridden in the car while it navigates city streets. It makes about as many mistakes as a new driver with 1-3 years of practice. I don't own a Tesla, so I've ridden with friends. It's a barely noticeable difference between human and FSD operation, with very few overrides necessary.
If they're doing more impressive things, why is it not in production at scale, right now? Just earlier today there was a story of a Waymo car getting stuck in an intersection in SF. Those cars still need constant intervention and human assistance; that's why.
Tesla has released FSD on literally every car in the US whose owner has purchased it. Hundreds of thousands of cars.
No one serious in the self driving space is using disengagements to pick winners or leaders on the path to commercialization. There is no standard about what qualifies as a disengagement and companies will interpret and report their disengagements differently. Those interpretations also change over time, making it even more difficult to actually understand how a company has progressed in its technology.
And again you're missing the point. Waymo/Cruise are operating in an extremely different and much more narrow context than Tesla's FSD. Let's go put Waymo on a random road in a random city and see how it fares compared to Tesla FSD. I guarantee it will be tremendously worse and no one who knows anything about the space would argue otherwise.
Also, why is Tesla so adamant about how many accidents occurred simply with FSD disabled? Shouldn't we know how long prior to the accident FSD was engaged? I mean, hitting the brakes in panic only technically makes it "not FSD".
I'd be happy if you showed me any reported data, self-reported or from Tesla, of 30k miles before disengagement with FSD. Even within one order of magnitude instead of two.
Show me data of Waymo or Cruise doing 30k miles before disengagement in the 99% of the country where they don't operate. I guarantee their stats are worse anywhere outside their carefully chosen, extremely narrow operating areas.
If there was a single operating area that Tesla could do as well, they would be shouting it from the rooftops. They are not only silent. They are trying to argue to the state of California that they are not testing self driving capabilities past level 2. Have a great day.
> it actually seems to be getting worse compared to 5 years ago.
You pretty clearly don't know what you're talking about.
> Apparently Cruise and Waymo are doing much more impressive things, but they only get a fraction of the press.
They are doing objectively much less impressive things by relying on sensors that require much less intelligent analysis, which will lead them into dead ends at the higher end of capability that Tesla's approach won't hit.
They can't even keep up with self-parking technology. All the other electric car manufacturers don't seem to have any problem with it. But Tesla's implementation is laughable.
Agreed. However, in the detractors' defense, self-driving cars were hyped beyond all reason by people like Musk, so it's easy to justify some very vocal pushback.
GPT wasn't hyped much at all, in comparison. It just appeared. The next couple of iterations will be interesting to watch, since right now we don't really know what the rate of improvement is going to be. We might be fighting Skynet next year, or we might be in for another 50-year-long AI winter.
I consider it more like the golden calf that people worshipped while Moses was away. The Quranic version is that it was made of gold and made a sound, and so people idolized it.
I think it's a good parable. It's something expensive and man made. Even though it's on average dumber than a typical human, people try to ascribe godliness to it. And so many judge whether it has achieved omniscience. For many of us, it's just a better tool, an IDE, but for some, it's an oracle or divination tool that they use to decide whether they should marry someone.
I think the term "artificial intelligence" makes it even more confusing.
ChatGPT is smart enough at the first prompt, but what really stands out - freakily, I might add - is how it can remember earlier data. You can go several levels deep and it will still reference the first answer and contextualize it properly.
"Do X" is great. But "Now change X to Y in the style of Z" is what blows my mind.
That's interesting. When I played with it a bit, it wasn't clear to me whether it was actually doing that, but in hindsight it definitely was, and it was not just a coincidence. When I first used ChatGPT, I had a dog that had recently passed, and she had such a big personality (a mischievous, always-hungry hunting dog) and was just very different from my other dog (a pitbull mix that likes to guard things), and I had thought that the two of them would be a good basis for a children's book.
I initially put in a prompt like "tell me a story about the hunting dog and the guard dog," and the results were pretty meh. I was interested to see if it was actually pulling in data from FB, so I put her name in instead (yes, she had an FB page). It did not seem to do much. So I put in some things like "the little hunting dog is always hungry," "the hunting dog is always looking to hunt," and then did a fuller prompt of "the hunting dog and the guard dog are going to the park. tell me a story about it," and finally got a very good, fully fleshed-out story about how the little dog was on the prowl for a squirrel while the guard dog watched out, and it was really well written and made about as much sense as a fictional tale could. There was no awkwardness or other tells that would have made you think this was written by anything other than a human.
Anyway, it is huge that it can do this. I do remember that when I used to play with other chatbots, remembering past context was something they were unable to do well, and in most cases it seems they didn't even attempt to. ChatGPT has blown me away in many ways, but this is a very specific leap forward.
It almost always comes up with something very meh the first time around. You have to nudge it in the right direction.
I was showing it off to my wife. I asked it to write a lecture for an intro to Latin American poetry with a focus on Pablo Neruda, including a background on Chilean history.
The first result was just 5-6 paragraphs with the most rudimentary 101 on all the above topics.
Then I told it to rewrite it as if the lecture was going to be given to grad students. Next, I told it to include examples of Neruda's poetry along with some analysis. I also asked it to flesh out the segment on Chilean history, with a focus on the independence movement.
The final lecture was good enough that you could walk into a classroom of college seniors and deliver it without any edits. The first one might have been good enough for 9th graders at most.
In my experience, ChatGPT was perfectly happy providing logic for a question that contradicted previous logic it had provided. When I pointed out this conflict, it disagreed that it was in conflict when it clearly was.
One of them I tried was supposed to be a 'pick-a-path' style fantasy, fight-the-dragon sort of thing. I decided to throw the A-Team into it. It integrated them in and even knew the correct names and the context of what each character tended to do. Far from perfect, but the fact that it could do even 'kind of' that is very interesting. 'Kind of' is perfectly fine for some things. The generated-art ones are very interesting too. It reminds me a lot of early emulation discussions, where people argued that unless you had cycle-perfect emulation nothing would work. Turns out 'kind of' close works for like 95% of the cases. Some things did need that sort of detail, but most didn't.
I had the same reaction. It's the first time in my life where I've ever thought, "This is the beginning of an era. They'll give a name to this period of time." And yeah, maybe that's overselling it, but I had that thought.
I've seen so many pieces of incredible technology struggle to find a use-case. By contrast, ChatGPT almost has too many use-cases to catalog, and while it's far from perfect, it seems obviously, to me, to be a tipping point of sorts.
> and this crosses the chasm, IMHO, from "AI is unlikely to really be able to have general applicability" to "AI is probably going to upend life as we know it in our lifetimes."
ChatGPT isn't really the definitive sign of the latter. It's just a stepping stone on the way. As soon as humans discovered differential calculus, that was the genesis of the eventual development of a general AI. Everything else along the way is a matter of optimization and spending raw compute time, whether it's done by humans or machines.
Many are dismissing it because it is just a neural network, which has basically been around in academic work since the '80s or earlier. It is only now that we have enough hardware and training data that neural nets are starting to be useful.
> These tools are about as good as a non specialist with time to undertake some bullshitting.
Even if you are right, you are still acknowledging that ChatGPT is on the level of a non-specialist bullshitting human with whom you can have a meaningful conversation. This alone would be enough to freak me out. If you had told me 10 years ago that this was possible, I would've called you crazy.
I know a lot of people are vividly impressed by this but to me "writing in the style of" (Victorian sonnets, poems, hip hop lyrics) is the equivalent of transfer learning and feels more like a cheap parlor trick.
While I won't deny that it has impressive summarization abilities, in terms of producing excellent Q&A if you're willing to vet the information, I wouldn't exactly say that ChatGPT is capable of meaningful conversation. It has great powers of recollection, but its capacity to produce new, interesting information feels highly formulaic.
This is my exact problem with ChatGPT: it looks great when you don't know what it's talking about, but as soon as you do, it looks foolishly overconfident in its answers, which are very clearly wrong.
Ha ha only serious - for me, the most profound thing about ChatGPT and friends is that they show how much of human behaviour is not actually intelligent in some deep sense.
The fact that a "dumb", generative model that is simply predicting the next token when given a prompt can talk so well, perform complex tasks and interact with humans in such a convincing manner is pretty fascinating.
There is obviously more to human intelligence than text-based conversation, but it is pretty humbling that such an aspect of ourselves can be replicated, be so convincing at such an early stage, and perform better than some humans even when it makes stuff up: toddlers can't talk, kids are smarter but don't have the technical knowledge, most adults only have a few specific areas of expertise, etc.
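To make "simply predicting the next token" concrete, here is a toy sketch in Python; nothing like ChatGPT's scale or architecture, just the same generative loop, with word counts standing in for a neural network:

    # Toy next-token generator: sample each next word from counts of what
    # followed it in the corpus. LLMs run the same loop, but condition on
    # thousands of prior tokens with a learned model instead of counts.
    import random
    from collections import defaultdict

    corpus = ("the shortest path is found by the algorithm and "
              "the algorithm explores the graph from the source").split()

    follows = defaultdict(list)
    for prev, nxt in zip(corpus, corpus[1:]):
        follows[prev].append(nxt)

    word, out = "the", ["the"]
    for _ in range(10):
        word = random.choice(follows.get(word) or corpus)  # next "token"
        out.append(word)
    print(" ".join(out))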
I don't like OpenAI's implementation defaults that much. But the thing is you can have a conversation with an LLM about bullshitting, explore the reasons why it's undesirable and inferior to sincere conversation, and then leverage those points of agreement to modify conversational behavior, at least within the scope of that conversation.
I believe the model gets trained on rankings from human raters, and it should therefore regress to their mean. That said, if they train it to seek approval, I suppose we could teach it like any other student.
Dijkstra's cannot be used as generally as described. A* can potentially be more general, but that's highly dependent on the heuristic. Dijkstra's is A* with a "min weight on shortest path first" heuristic. Because A* doesn't necessarily expand shortest paths first, it cannot necessarily find the shortest path to any node except the prescribed endpoint. Dijkstra's will find all shortest paths with cost less than the path to the endpoint, and that's helpful sometimes.
What is your definition of "general" here? The classic shortest path problem (which the psalm addresses) is described as finding the shortest path between two nodes, and given non-negative edge weights, Dijkstra's algorithm is perfectly general here. It will find such a shortest path in any graph, no further restrictions.
For A*, everything depends on the heuristic, as you said. If h(v) = 0, A* is equivalent to Dijkstra's algorithm, so potentially, it can be as general. But the wrong heuristic (inadmissible, inconsistent) will lead to wrong results, and so calling it a "specialized" tool is correct. The heuristic gives you a specialized version of Dijkstra's algorithm which is faster on specific graphs.
> Dijkstra's is A* with a "min weight on shortest path first" heuristic.
I am not quite sure what you mean here. Dijkstra's algorithm is A* with no heuristic. I am not sure what a "shortest path first" heuristic would look like - do you mean that Dijkstra's algorithm chooses the node to expand next based on its shortest-path cost to the target t? Even if you were to construct such a heuristic h(v) = c(v, t) by explicitly calculating c(v, t) each time, this is not what Dijkstra does, as it would basically be a perfect heuristic - you would then only visit nodes on the shortest path.
If you meant that Dijkstra's algorithm is A* with a "neighboring node with shortest path from the start node first" heuristic, that's also not strictly true, because expanding the nearest node first is already built into A*. A* does not choose the next node v with the smallest h(v), but the one with the smallest g(v) + h(v), where g(v) is the shortest-path cost from the source node to v.
> Dijkstra's algorithm is a general one,
> That can be used in any case,
I take issue with this statement. "Any case" implies it's more general than A*, which is not true; it's a specialization of A* with (OK, you win) h(v) set to zero. It necessarily cannot solve as many problems as A*.
And Dijkstra's won't work in "any case" at all, especially in graphs with negative edge weights.
The explanation struck me as glossing over too much, to the point of being misleading. Perhaps I'm being too pedantic, fine, but a more enlightened comparison would read more like our discussion than what was given.
> It necessarily cannot solve as many problems as A*.
Could you give an example of a positive-weight shortest-path problem which A* can solve, but Dijkstra's algorithm cannot? I do not believe there is such a thing. Or did you mean something else?
> And Dijkstra's won't work in "any case" at all, especially in graphs with negative edge weights.
i would rather say that a* is an optimization of dijkstra's algorithm
they both solve the single-pair shortest-path problem in digraphs with nonnegative arc weights
a* explores nodes in a different, better order than dijkstra's algorithm, but can only be applied in cases where you can compute an admissible heuristic
that makes it less general than dijkstra's algorithm, which works to find shortest paths in any digraph with nonnegative arc weights, not just the ones where an admissible heuristic can be computed
any problem you can solve with a* can be solved with dijkstra's algorithm (usually more slowly) but the converse is not true
now, in a sense, a* with a trivial heuristic such as h(v)≡0 is 'more general' in the sense that it can emulate dijkstra's algorithm and also do other things; you could say that a* is a class of algorithms of which dijkstra's algorithm is one
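the relationship is easy to see in code. a minimal a* sketch (my own illustration; graph given as an adjacency dict with nonnegative weights) where the default h(v)≡0 gives you exactly dijkstra's algorithm:

    # minimal a* sketch: with h(v) = 0 for every v, this *is* dijkstra's
    # algorithm. graph: {node: [(neighbor, weight), ...]}, weights >= 0.
    import heapq

    def a_star(graph, source, target, h=lambda v: 0):
        dist = {source: 0}
        frontier = [(h(source), source)]          # priority = g(v) + h(v)
        while frontier:
            _, v = heapq.heappop(frontier)
            if v == target:
                return dist[v]
            for u, w in graph.get(v, []):
                g = dist[v] + w
                if g < dist.get(u, float("inf")):
                    dist[u] = g
                    heapq.heappush(frontier, (g + h(u), u))
        return float("inf")                       # target unreachable

with h≡0 the priority is just g(v), i.e. dijkstra's "cheapest known node first" order; an admissible nonzero h only changes the order of exploration, never the answer.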
I actually think the robot's poetic take matches my thinking more closely than this description
Edit:
I guess in this description Dijkstra's is more specific, in that Dijkstra's is the specific instance of A* with a zero heuristic.
But I think what HAL over here was saying is that you can use Dijkstra's in a superset of the scenarios in which you can use (non-trivial) A*, so in that sense Dijkstra's is more general than A*, so it's not wrong.
Ah, interesting. I didn't notice that. I have noticed that ChatGPT is really impressive at doing rhyming poems/songs, and many of its generated poems sound similar to each other. I also don't remember raw GPT-3 being so good at poems.
So I wonder if there was substantial fine-tuning for ChatGPT specifically, rewarding it for generating poems in a particular style.
And here, it's "over-indexing" on that and still generating poems in that familiar style.
It's also not Psalm-like! The Psalms don't rhyme (even in the original Hebrew). What they do have is a parallel structure where each idea is repeated in slightly different words.
This is just generic poetry with a sprinkling of "ye's."
I saw DALL-E and GPT-2 demos and thought 'oh, pretty cool'; about a year later we've got Stable Diffusion that can run on phones and ChatGPT, which can remember context. I thought these things would be 3-5 years out or more. The speed of development is so quick that I'm confident the right LLM is just around the corner.
In less than a decade, I believe front-line chat support jobs and even graphic artist jobs will be made obsolete.
The latest models, with the right setup, can already substitute for many (not all) tasks in knowledge jobs such as graphic design, support chat, even programming. I feel like we may be looking at less than 3 or 4 years for a lot of jobs to be mostly obsolete. Or at least, the traditional version of those jobs.
It's already at the point where you start expecting any knowledge worker to be significantly more productive by leveraging these tools.
It's hard to imagine that it will be more than five years before AI tools are available that can handle almost all tasks in these types of jobs.
For example, on my website aidev.codes I just added preliminary knowledgebase support. It can reference the knowledgebase to write code. I would say that, with the code-davinci-002 model at least, it seems to be about at the level of a junior software engineer already, since it's pretty effective with close supervision by a senior programmer, except for the fact that it cannot interpret visual information.
Knowledgebases/embedding search can also be used right now with these models for answering support questions. The only thing holding it back from very wide-scale adoption is the problem of making up information. There are already solutions in progress for this. It's unlikely that it will take more than a few years to roll out and replace the current generation of models. Google and Microsoft will probably roll out their internet-scale chat search interfaces this year, even if they can't fully mitigate the hallucination problem immediately.
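A rough sketch of that knowledgebase/embedding-search pattern, for the curious (embed() and complete() are hypothetical stand-ins for real embedding and completion APIs; assumes numpy):

    # Hedged sketch of retrieval-augmented answering: embed the docs, rank
    # them by cosine similarity to the question, and paste the best ones
    # into the prompt so the model answers from them rather than making
    # things up.
    import numpy as np

    def cosine(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    def answer(question, docs, embed, complete, k=3):
        q_vec = embed(question)
        ranked = sorted(docs, key=lambda d: cosine(embed(d), q_vec),
                        reverse=True)
        context = "\n\n".join(ranked[:k])          # top-k most relevant docs
        prompt = ("Answer using only this context:\n" + context +
                  "\n\nQuestion: " + question + "\nAnswer:")
        return complete(prompt)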
I would guess more like 2-3 years for many knowledge-based jobs. If you want employment/contracts you will need to be very good at leveraging AI, or people will just use the AIs instead.
The wall of ChatGPT-generated content and interactive exchanges that will fill the internet will become harder and harder to wade through or evade.
Humans, in my experience, have even more reasons to be flawed than the AI. Yes, the AI can be incorrect, but humans can be misled in what they believe the same way, and have the added problems of ego and misaligned incentives. Heck, in this very thread we have one human calling out another for being incorrect about whether or not an AI was correct. Locating sources of reliable truth is an unsolved problem AFAIK.
lol no. LLMs get things wrong, but they don't lose their shit when you point it out. I find it pleasant rather than frustrating to work on problems with one; the shortcoming for me is a sort of conditioned passivity that arises out of its orientation to search its own knowledge for correct answers rather than asking questions.
Also psalms don't rhyme, even in the original Hebrew. Also even if it is supposed to be a poem with rhyme and meter, ChatGPT's poem doesn't have correct meter.
>> That's wrong. It's poetic, but incorrect. These tools are about as good as a non specialist with time to undertake some bullshitting.
Does it really matter when it comes to real life? Kids take medical advice from TikTok influencers. Half the country believes absurd news they see on Facebook. Half the country believes anything their president says.
We've been down the AI-diagnosis path before, and even a purpose-built system failed to live up to expectations. I think that GPT, much like Tesla Autopilot, is going to spend a very long time ironing out its "last mile" before anyone will trust it. The tendency for AI to be spectacularly wrong when it fails to grasp context is still pretty prevalent.
Thanks! I wonder what the most advanced automated technique is right now. I also think it's difficult to replace nurses -- even though basic nurse work is easier compared to that of a family doctor, who needs to diagnose. I figure the reason is that to replace a nurse we need a lot of automated hardware, which needs a lot of investment, unlike ChatGPT, which really could replace a service agent given enough training because it doesn't involve hardware.
We have a lot. But it's really the same story that we see with software. The more sophisticated the software you grab off the shelf the more that's expected from a single worker. Modern doctor's offices are overloaded with gadgets. Most are using automatic blood pressure cuffs, digital pulse ox and all sorts of machines to test for things that used to be untestable. The need for nurses and techs to operate them all only increases.
That can only be possible in a limited set of cases where the inputs necessary to make a diagnosis are all present in the patient chart. Human clinicians rely on other inputs to make most diagnoses including visually looking at the patient from multiple angles, asking them questions, manipulating and feeling the patient's body, listening (with or without stethoscope), and even smell. We are a long way from the robotics and sensor integration technology necessary to automate those steps in the diagnostic process.
Ought to be "to guide you on your way", "helps you reach the other side", "for they will lead you", "and help you", since "ye" is the plural nominative (not accusative).
I grew up using archaic, King James-style English regularly (in a religious community), so it always sounds wrong when people use its pronouns incorrectly while attempting to be old-timey, which is a bit distracting and ruins the effect.
Apparently ChatGPT suffers from hypercorrection as much as any modern speaker.
(The meter is also dreadful, which is another thing that I think many modern English speakers don't often get right.)
There was an "ask HN" a week or so ago about knowing how your personal machine was or was not compromised ...
I am imagining training a model on all of the PDFs and email and receipts and contracts that I have spent the last 30 years protecting ... and then exploding my home directory with 2M parallel copies.
Now what ?
Which is the real correspondence ? Which contracts are real ? Which invoices and receipts ?
Could one construct a framework where all data at rest is worthless and only witnessed transactions and traffic have any surveillance value ?
On the other hand, note that you'd also need some way of distinguishing ... wheat from chaff ... for your own ability to track your correspondence, transactions, records, etc.
Otherwise, that is an interesting idea, particularly in a world in which storage is cheap relative to the size of documents. We're rapidly approaching an era in which storage capacities sufficient to hold the entire Library of Congress collection of books (as text) on a residential PC or server will be highly viable.
The larger problem is probably that your copy of that data isn't particularly important in most cases, it's what lies on others servers that's been generated by you / snarfed from your devices / activities. And that's far harder to chaff.
Another issue would be that any generated content that raises / matches suspicions could, in a regime with poor civil liberties protections, simply become more ammunition used against you. The old Cardinal Richelieu "six lines" dictum, though in this case, not necessarily in your hand.
Yet it will happily tell you random nonsense with utter certainty, like that it takes twenty minutes to boil a whole chicken in the middle of a recipe, or listing an incorrect number of children with fake names while writing a biography.
Until ChatGPT, the term "artificial intelligence" felt like a misnomer for any of the tech that came out in the last 10 years (which seemed mostly like statistical techniques for solving specific optimization problems).
For people familiar with how ChatGPT works: is there a YouTube video or blog post you recommend that explains how it works?
I have a very good understanding of how deep-learning neural networks work, and of language models like Word2vec that convert a word or a sentence to a vector space.
But I completely fail to understand how, given an initial text as a prompt, ChatGPT produces text that continues the prompt.
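The short version is autoregressive sampling: the model outputs a probability distribution over the next token; one token is sampled, appended to the input, and the loop repeats. A minimal sketch with GPT-2 via the Hugging Face transformers library (ChatGPT layers instruction tuning and RLHF on top, but the generation loop is the same; assumes torch and transformers are installed):

    # Sketch of autoregressive generation with GPT-2: predict next-token
    # probabilities, sample one token, append it, repeat.
    import torch
    from transformers import GPT2LMHeadModel, GPT2Tokenizer

    tok = GPT2Tokenizer.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2")

    ids = tok.encode("The shortest path algorithm", return_tensors="pt")
    with torch.no_grad():
        for _ in range(40):
            logits = model(ids).logits[0, -1]            # next-token scores
            probs = torch.softmax(logits / 0.8, dim=-1)  # temperature 0.8
            next_id = torch.multinomial(probs, 1)
            ids = torch.cat([ids, next_id.unsqueeze(0)], dim=1)
    print(tok.decode(ids[0]))

The Word2vec intuition carries over: tokens become vectors, the transformer mixes them with attention, and the final vector is scored against every token in the vocabulary to produce that next-token distribution.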
I agree, it’s amazing, and it’s just the beginning. When this gets numerous iterations of improvement and is distributed at scale and integrated with everything, it will power the next technological leap forward. This, to me, is obvious.