The issue is that your standard borders on setting up an impossible strawman. When we actually do textual (and/or biblical) analysis and historical science, we never really have a "this is THE version of Q", nor a "this is the FINAL/REAL version of the book", nor a "this is the final/absolute version of the hypothesis".
The idea that there is "one authoritative version, and it was the version that was copied into this one authoritative derivative, and we found the derivative, so now we have to find that exact original or else it's all bunkum" simply isn't the way 2000-year-old books or texts were written, copied or used. You will never find it because that's not what happened.
But we can lay the texts out side by side, arrange the narratives and see how they differ chronologically from book to book, notice where particular linguistic quirks appear, notice where words are copied word for word in a particular order, and where embellishments, insertions or changes are made. Then, just as with several witness accounts, we build up a probabilistic version of the events that happened.
So there isn't "one Q", just as there isn't one authoritative version of Mark, Luke, John, Matthew, etc. But there are patterns in the texts which strongly suggest some kind of shared knowledge and a common source among the authors of the later gospels. We hypothesised that this common source was a "sayings gospel", because the common ground repeated across the books was primarily sayings, while the rest seemed to come from Mark. At the time, this hypothesis met the objection that such a book or source wouldn't actually have existed, because we'd never seen one before.
Then, after this hypothesis was formed and that objection raised, the discovery of the Nag Hammadi library and the Gospel of Thomas gave us an actual historical sayings gospel: a confirmation that this type of literature did exist and was written in early Christian communities. It was not Q, but it confirmed the hypothesised genre and the existence of this kind of early Christian literature.
If you're waiting for the discovery of two literal pieces of text, where the first is reconstructed to the letter from the multiple historical books that followed it and the second turns out to be its carbon copy, well then you're setting up an impossible standard. Even a literal transcription probably wouldn't meet it.
What on earth is it about intellectual property that breaks someone's mind so much that, genuinely, when presented with a translation of a 2000-year-old text that is itself based on another author's translation, and whose translator is now dead, they go onto a website to proclaim "it's not in the public domain!"?
In practice, I find it depends on the scale, topic and cadence of your work.
I started on the $20 plans as a bit of an experiment, wanting to see what this whole AI thing was about. For the first month or two that was enough to get the flavor; it let me see how to work with it. I was still mostly copy/pasting and thinking about what to do.
As I got more confident I moved to the agents and the integrated editors. Then I realised I could open more than one editor or agent at a time while each AI instance was doing its work.
I discovered that when I'm getting the AI agents to summarise, write reports, investigate issues, make plans, implement changes, run builds, organise git, etc., I can alt-tab and drive anywhere between 2 and 6 projects at once, and I don't have to do any of the boring boilerplate or administrivia, because the AI does that; it's what it's great for.
What used to be unthinkable and annoying context switching now lets me focus on the different parts of each project that actually matter: firing off instructions to one agent, ushering it out the door, then checking on the next intern in the queue. Give them feedback on their work, usher them on, next intern. The main task now is managing the scope and context window of each AI, and structuring big projects to take advantage of that. Honestly though, I don't view it as much more than functional decomposition. You've still got a big problem; now how do you break it down?
At this rate I can sustain the $100 Claude plan, but honestly I don't need to go further than that: that's basically me working full time in parallel streams. I might be using it at relatively cheap times, though, so that plan or the $200 one seems about right for full-time work.
I can see how theoretically you could go even above that, going into full auto-pilot mode, but I feel I'm already at a place of diminishing marginal returns: I don't usually go over the $100 Claude Code plan, and the AIs can't do the complex work reliably enough to be left alone anyway. So at the moment, if you're going full time, I feel those plans are the sweet spot.
The $20 plans are fine for getting a flavor for the first month or two, but once you come up to speed you'll quickly outgrow their limits.
But going by the strict DSM-5 notion that the criteria require a hindrance, we hit a somewhat problematic definition: a person can have autism at one point in their life (when it hinders them in a context), move to another point or context in their life (where it does not) and therefore not meet the criteria for having autism if they sought a diagnosis at that time, and then move back into another point or context in their life where it hinders them, so that they meet the criteria and presumably have autism again.
Now, needless to say, this is not how anyone actually thinks about psychiatric or psychological issues in practice, especially with conditions such as autism, and just highlights the relative absurdity of some of the diagnostic metrics, practices and definitions.
What we tend to do is tie the diagnosis of autism to the individual's identity and assume that it is a consistent category, a diagnosis that stays with a person over time because it is biological. This is despite our not having any working biological test for it, and despite diagnosing it via environmental and behavioural context. And don't even get me started on grouping aspergers/autistic individuals with broadly differing abilities and performance across a range of metrics under the one condition, such that the non-verbal and low-functioning side of the spectrum gets lumped in with the high-IQ, hyper-verbal, high-functioning aspergers as having the same related condition, even though on those same metrics and scores neurotypicals sit closer to the non-verbal and low-IQ group than the high-functioning group does.
The entire field and classification system, along with the popular way of thinking about the condition, is, if I might editorialise, an absolute mess.
A person without legs does not stop being disabled because they have no need or desire to walk. The fact remains that should they need or desire to walk in the future, the hindrance will still very much exist.
A similar example could be made of someone with gluten intolerance. If they do not eat foods that contain gluten, they are still gluten intolerant. They are, however, still disabled by needing to stay in that situation.
Firstly, a fish without legs objectively does not have legs, but we do not necessarily call it disabled, even though it clearly lacks a facility.
Secondly, the autism spectrum disorders are, as I previously mentioned, not obviously just about deficits of behaviours or functions; they can also take in extended and exceptional abilities in some areas, and greater sensitivities rather than deficits or the lack of an ability, so it is not clear that the entire diagnosis can be defined by deficits or lacking things. The high functioning and Asperger's type diagnosis is not about a universal deficit diagnosis, and we do not generally call neurotypical humans disabled because they lack prodigious ability or interest in math, language, or other subjects, even though that can also objectively be measured and called a deficit.
> The high functioning and Asperger's type diagnosis is not about a universal deficit diagnosis
To get an Asperger's diagnosis under the DSM-IV you needed some amount of impairment. "Disorder" is in the title of the DSM; if something isn't conceptualized as a disorder, it isn't in there.
Being reliant on a particular life situation does strike me as a hindrance in and of itself. Maybe more of a macro limitation than a day-to-day one, but a reasonable definition could encompass that, too.
I think it depends on how one interacts with it. As far as I know it doesn't have a personalised feed and I'm seeing the same front page as everyone else. So I mainly use it to scan once or twice a day to pick out if there's anything going on in the world I need to know about.
Then for one or two threads I'll peruse the comments to see what our particular class of HN-esque people think about a topic. About once a fortnight or a month I might even post a comment. But it all has to be taken in context. Half of the time I'll close the comments section immediately, because it's clear the whole thing has gone down a tangent I'm not interested in hearing about. Another risk is threads on topics the HN crowd knows nothing about, which in my case is primarily economics, where some of the takes are borderline delusional/ignorant and backed by a kind of tech-worker/startup ideology.
The anti-politics thing is both a blessing and a curse. On the one hand, it's one of the last sites on the internet with comparatively little vitriol and, thankfully, comparatively little populism. On the other hand, it means de facto support for a dominant ideology and suppression of anything that threatens that ideology, and obviously that ideology is the one that supports tech workers, startups and venture capitalists.
I think, taking all those things into account, you can still get value out of it, as long as you know what you're engaging with. But like the other forms of social media since the death of forums, it's not made for serious engagement or deep thinking on a subject, and discussion can't really be anything more than ephemeral.
At the very least it's borderline, whereas the other forms of social media can basically be judged to be explicit write-offs, in my opinion.
I find it much more intuitive to think of LLMs as fuzzy-indexed, frequency-based searches combined with grammatically correct probabilistic word generators.
They have no concept of truth or validity, but the frequency of inputs in their training data provides a kind of pseudo-check and natural approximation to truth, as long as frequency and relationships in the training data also have some relationship to truth.
For a lot of textbook coding-type stuff that actually holds: frameworks, shell commands, regexes, common queries and patterns. There's lots of it out there, and generally the more common a form is, the more validity it carries.
My experience, though, is that they can get thrown off on niche topics, sparse areas, topics that humans are likely to be emotionally or politically engaged with (and therefore not approximate truth), or things that are recent and therefore haven't had time to accumulate sufficient frequency. And of course they also have no concept of whether what they're finding or reporting is true or not.
This also explains why they have trouble with genuinely new programming, as opposed to reimplementing frameworks or common applications: they lack the frequency-based or probabilistic grounding to truth, and the new combinations of libraries and code lead to places of relative sparsity in their weights that leave them unable to function.
The literature/marketing has taken to calling this hallucination, but it's just as easy to think of it as errors produced by probabilistic generation and/or sparsity.
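As a toy illustration of that mental model (my own sketch, not how production LLMs actually work; the training text and the bigram approach here are purely illustrative), here's a frequency-based next-word sampler. Common patterns dominate the output, and sparse contexts degrade into noise, which maps loosely onto the hallucination-as-sparsity framing above:

    import random
    from collections import defaultdict

    # Toy bigram "model": count how often each word follows each other
    # word in the training text, then sample the next word in proportion
    # to those frequencies. Frequency stands in for truth.
    training_text = ("the cat sat on the mat . the dog sat on the rug . "
                     "the cat chased the dog .").split()

    counts = defaultdict(lambda: defaultdict(int))
    for prev, nxt in zip(training_text, training_text[1:]):
        counts[prev][nxt] += 1

    def next_word(prev):
        followers = counts[prev]
        if not followers:
            # Sparse region: no data for this context, so the output
            # degrades into noise, the toy analogue of a hallucination.
            return random.choice(training_text)
        words = list(followers)
        weights = [followers[w] for w in words]
        return random.choices(words, weights=weights)[0]

    # Generate text: locally grammatical-looking and frequency-driven,
    # with no notion of whether any of it is true.
    word = "the"
    output = [word]
    for _ in range(8):
        word = next_word(word)
        output.append(word)
    print(" ".join(output))

Obviously real LLMs use learned contextual representations rather than literal lookup tables, but the failure mode has the same shape: where the data is dense, frequency approximates validity; where it's sparse, generation just carries on anyway.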
As someone who has had an Android personal phone and an iPhone for work for several years, I literally do not know what the hell people mean by "polish", beyond an informal emotional utterance that translates back to "what I'm used to". Half of the stuff on the iPhone is just as arbitrary and mind-boggling as on the Android.
I think a lot of it is just tribal mysticism. One gets used to their preferred devices, and then they mentally imbue them with positive qualities, conjured out of their own imagination/biases. There was an article[1] a while back where the author was complaining that Android apps feel "inert and rigid," and lack "comfort, fun, and panache." Like, really? How is anyone expected to compare one app's "panache" with another one's? You're just used to one ecosystem's apps, and other people are used to another ecosystem's apps.
"But for the most part, it seems like third-party Android apps don’t even try to achieve the look-and-feel comfort, fun, and panache of iOS apps."
(referring to Android Mastodon clients vs iOS Mastodon clients)
Is nobody allowed to make any subjective judgement about apps?
As a comparatively politically aware Australian, I had absolutely no idea who he is/was, but then I don't have any Twitter or general social media presence or consumption.
My (limited) knowledge of him was mainly from reading the traditional US media, not from social media… I swear I’d read some article about him in the NY Times or the Atlantic or something like that. My brain files him next to Ben Shapiro
Me too! I follow politics, elections, and world affairs very closely, but, I am embarrassed to admit, I had no idea who he was. Although I had heard about 'Turning Point USA'.
My wife had no idea who he was when I said his name… but when she saw a photo, she remembered him from videos which appeared on her Facebook feed in which he argues about abortion and transgender issues. She is Facebook friends with a lot of right-wing Americans; she doesn't share their politics, but they connected due to a shared interest in Farmville.
I don't know where you live, but it quite clearly is where I'm from.
Oooh, to be sure they don't call them IQ tests explicitly, but the psychometric capabilities and performance tests they've gotten me to do (mathematical, logical, verbal, reasoning etc) are pretty obviously IQ proxies.
Honestly, imo, clinically the aggregate score itself provides very little information beyond what a 5-minute conversation would achieve, and the result would be better thought of as a 5-6 level categorical variable rather than a gradient, given the tests' biases and the inherent variance in individual patients' performance and test-taking context.
The sub-sections of things like the WAIS can be of some value for identifying specific abnormalities or deficiencies, but as you said, it's probably of more clinical value to split them out into separate tests/activities rather than group them all into an aggregate score. It's a bit like judging athletic ability and skill by BMI and fat percentage rather than just playing an opponent in tennis to find out if they're a good tennis player.