UniverseHacker's comments | Hacker News

Apple basically already has this built into macOS - you can create an encrypted disk image and mount it to access the files. I'm not sure if it is possible to open these on iOS.
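
For reference, a minimal sketch of doing this from the command line (driving macOS's hdiutil from Python; the size, filesystem, and volume name here are arbitrary choices):

    import getpass
    import subprocess

    password = getpass.getpass("Image password: ")

    # Create a 100 MB AES-256 encrypted APFS disk image,
    # reading the password from stdin rather than a prompt
    subprocess.run(
        ["hdiutil", "create", "-size", "100m", "-fs", "APFS",
         "-encryption", "AES-256", "-stdinpass", "-volname", "Private",
         "private.dmg"],
        input=password.encode(), check=True)

    # Mount it; the decrypted volume appears under /Volumes/Private
    subprocess.run(["hdiutil", "attach", "-stdinpass", "private.dmg"],
                   input=password.encode(), check=True)

Disk Utility (File > New Image) does the same thing through the GUI.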


iOS cannot mount .dmg files (at least, not without jailbreaking).


I don’t understand why customer-owned co-ops aren’t ubiquitous. Vanguard is amazing- low fees and great services- they beat all of the competition. I had to call their support line today and it was the most professional customer service I’ve ever experienced.


> they are incapable of even the simplest "out-of-distribution" deductive reasoning

But the link demonstrates the opposite- these models absolutely are able to reason out of distribution, just not with perfect fidelity. The fact that they can do better than random is itself really impressive. And o1-preview does impressively well, only very rarely getting the wrong answer on variants of that Alice in Wonderland problem.

If you listened to most of the people critical of LLMs who call them a "stochastic parrot" - it should be impossible for them to do better than random on any out-of-distribution problem. Even just changing one number to create a novel math problem should totally stump them and result in entirely random outputs, but it does not.
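
To make "changing one number" concrete, here is a toy sketch of generating such variants (the exact wording in the AIW paper differs; this is just the general form of the problem and its ground truth):

    import random

    def aiw_variant(rng: random.Random) -> tuple[str, int]:
        # Alice's brother has all of Alice's sisters as sisters,
        # plus Alice herself - so the answer is sisters + 1.
        brothers = rng.randint(1, 9)
        sisters = rng.randint(1, 9)
        prompt = (f"Alice has {brothers} brothers and {sisters} sisters. "
                  "How many sisters does Alice's brother have?")
        return prompt, sisters + 1  # ground truth

    rng = random.Random(0)
    for _ in range(3):
        prompt, answer = aiw_variant(rng)
        print(prompt, "->", answer)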

Overall, poor reasoning that is better than random but frequently gives the wrong answer is fundamentally and categorically different from being incapable of reasoning.


anyone saying an LLM is a stochastic parrot doesn't understand them... they are just parroting what they heard.


A good literary production- I would have been proud of it had I thought of it. But I'd observe a strong "whataboutery" element: if we use "stochastic parrot" as shorthand and you dislike the term, now you understand why we dislike the constant use of "infer", "reason" and "hallucinate".

Parrots are self-aware, complex reasoning animals which can solve problems in geometry, tell lies, and act socially or asocially. They also have complex vocal cords and can perform mimicry. Very few aspects of a parrot's behaviour are stochastic, though that also underplays how complex stochastic systems can be in what they produce. If we label LLM products as Stochastic Parrots it does not mean they like cuttlefish bones or are demonstrably modelled by Markov chains like Mark V Shaney.


Well, parrots can make more parrots; LLMs can't make their own GPUs. So parrots win. But LLMs can interpolate and even extrapolate a little- have you ever heard a parrot do translation, hearing you say something in English and translating it to Spanish? Yes, LLMs are not parrots. Besides their debatable abilities, they work with a human in the loop, which means humans push them outside their original distribution. That's not a parroting act- it's being able to do more than pattern matching and reproduction.


LLMs can easily order more GPUs over the internet, hire people to build a datacenter and reproduce.

Or, more simply... just hack into a bunch of AWS accounts, spin up machines, boom.


I don't like wading into this debate when semantics are very personal/subjective. But to me, it seems like almost a sleight of hand to add the stochastic part, when actually they're possibly weighted more on the parrot part. Parrots are much more concrete, whereas the term LLM could refer to the general architecture.

The question to me seems: If we expand on this architecture (in some direction, compute, size etc.), will we get something much more powerful? Whereas if you give nature more time to iterate on the parrot, you'd probably still end up with a parrot.

There's a giant impedance mismatch here (time scaling being one). Unless people want to think of parrots being a subset of all animals, and so 'stochastic animal' is what they mean. But then it's really the difference of 'stochastic human' and 'human'. And I don't think people really want to face that particular distinction.


"Expand the architecture" .. "get something much more powerful" .. "more dilithium crystals, captain"

Like I said elsewhere in this overall thread, we've been here before. Yes, you do see improvements from larger datasets and models weighted over more inputs. I suggest- I guess I believe, to be more honest- that no amount of "bigger" here will magically produce AGI simply through the scale effect.

There is no theory behind "more", which means there is no constructed sense of why; and the absence of abstract inductive reasoning continues to say to me that this stuff isn't making a qualitative leap into emergent anything.

It's just better at being an LLM. Even "show your working" is pointing to complex causal chains, not actual inductive reasoning as I see it.


And that's actually a really honest answer. Whereas someone of the opposite opinion might argue that parroting, in the general copying-template sense, actually generalizes to all observable behaviours, because templating systems can be Turing-complete or something like that. It's templates-all-the-way-down, including complex induction, as long as there is a meta-template to match on its symptoms that it can be chained on.

Induction is a hard problem, but humans can skip infinite compute time (I don't think we have any reason to believe humans have infinite compute) and still give valid answers, because there's some (meta-)structure to be exploited.

The truer question is whether machines/NNs can architecturally exploit this same structure.


> this stuff isn't making a qualitative leap into emergent anything.

The magical missing ingredient here is search. AlphaZero used search to surpass humans, and the whole Alpha family from DeepMind is surprisingly strong, but narrowly targeted. The AlphaProof model uses LLMs and LEAN to solve hard math problems. The same problem-solving CoT data is being used by current reasoning models, and they have much better results. The missing piece was search.
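
Schematically, the recipe looks something like this (not AlphaProof's actual algorithm- just the generic shape of wrapping search around a proposal model and a verifier; expand, score, and is_goal are hypothetical stand-ins):

    import heapq
    from typing import Callable, Optional

    def best_first_search(start: str,
                          expand: Callable[[str], list[str]],
                          score: Callable[[str], float],
                          is_goal: Callable[[str], bool],
                          budget: int = 1000) -> Optional[str]:
        # expand: e.g. an LLM proposing candidate next proof steps
        # score: e.g. a value model or verifier signal (higher is better)
        # is_goal: e.g. "does the checker accept this as a complete proof?"
        frontier = [(-score(start), start)]  # min-heap, so negate scores
        for _ in range(budget):
            if not frontier:
                return None
            _, state = heapq.heappop(frontier)
            if is_goal(state):
                return state
            for nxt in expand(state):
                heapq.heappush(frontier, (-score(nxt), nxt))
        return None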


I'm sure both of you know this, but "stochastic parrot" refers to the title of a research article that contained a particular argument about LLM limitations that had very little to do with parrots.


The term is much more broadly known than the content of that (rather silly) paper... I'm not even certain that it's the first use of the term.



And the word "hallucination" ... has very little to do with...


But it's far easier for human parrots to parrot the soundbite "stochastic parrot" as a thought-terminating cliche.


There is definitely a mini cult of people that want to be very right about how everyone else is very wrong about AI.


Firstly, this is meta ad hominem: you're ignoring the argument to target the speaker(s).

Secondly, you're ignoring the fact that the community of voices with experience in data science, computer science and artificial intelligence is itself split on the qualities, or lack of them, in current AI. GPT and LLMs are very interesting but say little or nothing to me of a new theory of mind, don't display inductive logic and reasoning, and don't even meet the bar for a philosopher's cave solution to problems. We've been here before so many, many times. "Just a bit more power, captain" was very strong in connectionist theories of mind, fMRI brain-activity analytics, you name it.

So yes. There are a lot of "us" who are pushing back on the hype, and no we're not a mini cult.


> GPT and LLMs are very interesting but say little or nothing to me of a new theory of mind, don't display inductive logic and reasoning, and don't even meet the bar for a philosopher's cave solution to problems.

The simple fact they can generate language so well makes me think... maybe language itself carries more weight than we originally thought. LLMs can get to this point without personal experience and embodiment; it should not have been possible, but here we are.

I think philosophers are lagging science now. The RL paradigm of agent-environment-reward based learning seems to me a better one than what we have in philosophy now. And if you look at how LLMs model language as high-dimensional embedding spaces... this could solve many intractable philosophical problems, like the infinite homunculus regress problem. Relational representations straddle the midpoint between 1st and 3rd person, offering a possible path over the hard problem "gap".


There are a couple Twitter personalities that definitely fit this description.

There is also a much bigger group of people that haven't really tried anything beyond GPT-3.5, which was the best you could get without paying a monthly subscription for a long time. One of the biggest reasons for r1 hype, besides the geopolitical angle, was people could actually try a reasoning model for free for the first time.


i.e., the people who say AI is dumb? Or are you saying I'm in a cult for being pro it? I'm definitely part of that cult - the "we already have AGI and you have to contort yourself into a pretzel to believe otherwise" cult. Not sure if there is a leader though.


I didn't realize my post can be interpreted either way. I'll leave it ambiguous, hah. Place your bets I guess.


You think we have AGI? What makes you think that?


By knowing what each of the letters stand for


Well that’s disappointing. It was an extraordinary claim that really interested me.

Thought I was about to learn!

Instead, I just met an asshole.


When someone says "I'm in the cult that believes X", don't expect a watertight argument for the existence of X.


> If you listened to most of the people critical of LLMs who call them a "stochastic parrot" - it should be impossible for them to do better than random on any out-of-distribution problem. Even just changing one number to create a novel math problem should totally stump them and result in entirely random outputs, but it does not.

You don't seem to understand how they work: they recurse their solution, meaning if they have remembered components they parrot back sub-solutions. It's a bit like a natural language computer- that way you can get them to do math etc., although the instruction set isn't that of a Turing language.

They can't recurse into sub-sub-parts they haven't seen, but problems that have similar sub-parts can of course be solved- anyone understands that.


> You don't seem to understand how they work

I don't think anyone understands how they work- these types of explanations aren't very complete or accurate. Such explanations/models allow one to reason out what types of things they should be capable of vs. incapable of in principle, regardless of scale or algorithm tweaks, and those predictions and arguments never match reality and require constant goalpost shifting as the models are scaled up.

We understand how we brought them about via setting up an optimization problem in a specific way; that isn't the same at all as knowing how they work.

I tend to think that in the totally abstract philosophical sense, independent of the type of model, at the limit of an increasingly capable function approximator trained on an increasingly large and diverse set of real-world cause/effect time-series data, you eventually develop an increasingly accurate and general predictive model of reality organically within the model. Some model types do have fundamental limits on their ability to scale like this, but we haven't yet found one for these models.

It is more appropriate to objectively test what they can and cannot do, and avoid trying to infer what we expect from how we think they work.


Well we do know pretty much exactly what they do, don't we?

What surprises us are the behaviors coming out of that process.

But surprise isn't magic; magic shouldn't even be on the list of explanations to consider.


Magic wasn’t mentioned here. We don’t understand the emergent behavior, in the sense that we can’t reason well about it or make good predictions about it (which would allow us to better control and develop it).

This is similar to how understanding chemistry doesn’t imply understanding biology, or understanding how a brain works.


Exactly, we don't understand, but we want to believe it's reasoning, which would be magic.


There's no belief or magic required, the word 'reasoning' is used here to refer to an observed capability, not a particular underlying process.

We also don't understand exactly how humans reason, so any claim that humans are capable of reasoning is also mostly an observation about abilities/capabilities.


> I don't think anyone understands how they work

Yes we do, we literally built them.

> We understand how we brought them about via setting up an optimization problem in a specific way; that isn't the same at all as knowing how they work.

You're mistaking "knowing how they work" with "understanding all of the emergent behaviors of them"

If I build a physics simulation, then I know how it works. But that's a separate question from whether I can mentally model and explain the precise way that a ball will bounce given a set of initial conditions within the physics simulation which is what you seem to be talking about.
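
To put that in code- a toy simulation where every rule is fully known, yet the only practical way to learn where the ball ends up is to run it (all parameters arbitrary):

    def simulate(x: float, v: float, steps: int, dt: float = 0.01,
                 g: float = -9.8, damping: float = 0.9) -> float:
        # Bouncing ball: gravity, Euler integration, lossy bounces
        for _ in range(steps):
            v += g * dt
            x += v * dt
            if x < 0:              # bounce off the floor, losing energy
                x, v = -x, -v * damping
        return x

    # Knowing these few lines of "physics" exactly does not mean you
    # can predict this number without actually running the system.
    print(simulate(x=10.0, v=0.0, steps=5000))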


> You're mistaking "knowing how they work" with "understanding all of the emergent behaviors of them"

By knowing how they work I specifically mean understanding the emergent capabilities and behaviors, but I don't see how it is a mistake. If you understood physics but knew nothing about cars, you couldn't claim to understand how a car works: "simple, it's just atoms interacting according to the laws of physics." That would not let you, e.g., explain its engineering principles or capabilities and limitations in any meaningful way.


We didn't really build them, we do billion-dollar random searches for them in parameter space.


> I grow most of my own produce, with RO water

Do you really grow enough food to make up most of your diet on RO water? And is this specifically to avoid microplastic exposure, or what?


I grow around 1/3 of my own food. Yes all with RO water. I'd like to get above 50%.

Specifically produce- however, we grow most of what we eat. We pressure-can, dehydrate and ferment to preserve. I have a background and decades of experience in growing, which is to say it's more than just standard hobby-garden level.

The RO water is not to avoid microplastics (although that might be a side benefit) but rather because the water here is highly mineralized. It would be a long post to explain why I do this. Some of it is theoretical health concerns, some is more practical.


This is really interesting. Do you have suggestions on how to use RO at garden scale? (Like a link for where I could start.)


I built my own RO system; it cost around $1500 including a water softener. There are some ongoing supply costs every year, maybe $200 or less.

I don't have any links- I just figured it out, but it's not super complicated. I made it out of under-sink RO membrane housings (housings from those little RO systems you can buy for around $300 that do a couple of gallons a day). The membranes have pressure pumps in front of them that get output up to a couple hundred gallons of RO water total a day.

Basic steps are: 1) soften the water; 2) pass it through very tight filters (like 1 micron)- I also carbon-filter for organic contaminants; 3) booster pumps push the water through the osmosis membranes and from there into a storage tank.

I just used plastic totes with gravel in the bottom to house the membranes and booster pumps.

I should write up a blog post on it one day because professionally installed osmosis can be expensive.
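
For a rough sense of the economics in the meantime, using the figures above (a toy calculation; the ~200 gal/day output and 5-year horizon are assumptions on my part):

    # Amortized cost per gallon from the numbers in this comment:
    # $1500 build, ~$200/yr consumables; output and horizon assumed.
    build_cost = 1500.0
    yearly_supplies = 200.0
    gallons_per_day = 200.0   # "a couple hundred gallons ... a day"
    years = 5

    total_cost = build_cost + yearly_supplies * years
    total_gallons = gallons_per_day * 365 * years
    print(f"~${total_cost / total_gallons:.4f} per gallon over {years} years")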


Thanks!


Interesting, thanks!


Seems like an unidentified virus could be the cause?


It is unlikely. The samples collected would be checked for fragments.

Most likely it's something novel that's been thawing out from climate change.

That, or a unique modified form of an existing chemical, or something else that hasn't been considered. There's some real fringe stuff out there in chemistry that opens a whole can of what-ifs- stuff mainstream science might scoff at and not study seriously.

Something like this might be the cause- albeit very unlikely; still, there are things stranger than fiction.

Spitballing here: "water memory," where an EM or NQR signal of an extremely dilute substance or dissolved chemical can cause certain unnatural forms of molecules to stabilize in aqueous solutions, or alternatively impart its effects because water mimics the nature of the dissolved substance for a time, at higher concentrations than may be detectable.

There were a number of scientists looking into these anomalies, including a Nobel laureate, but they were all largely discredited without rational basis or support by those doing the discrediting.


> It is unlikely. The samples collected would be checked for fragments.

It is not straightforward to identify a new virus that is not closely related to known human viruses. You cannot just “check for it.” It is largely an unsolved problem, and there are likely a huge number of common viruses that frequently infect humans that still remain undiscovered. When we sequence real-world DNA and RNA there is a whole bunch of mysterious stuff that is unexplained, which may include many undiscovered viruses and bacteria.

I don't agree about the rest of the stuff you mentioned. In fact, there are academics actively studying things like what you mentioned- including Gerald Pollack at UW- but the reality of these phenomena is more complex than the popular conspiracy-theory-level explanations imply, and there is no reason to think they lead to things like what you are claiming. Look here: https://www.pollacklab.org/

I used to feel the same about science unfairly rejecting fringe ideas, and that was part of my motivation to become a scientist... but after becoming one I found it is mostly not true. Plenty of scientists openly study and consider stuff like this and are not "discredited"- the reality is a bit more mundane: the conspiracy-theory versions of these stories lack so much nuance that they have little to no relation to the actual research.

You can see from Pollack's website above that he has a big problem with fraudulent products using his name and likeness to claim some medical benefit, which neither he nor his research actually supports.


> You cannot just "check for it"...

I had read and heard that many places had started using preliminary tools like LucaProt, which scan viral dark matter retrieved using nanopore sequencers to identify the sequences and common secondary structures of proteins that all viruses need to replicate, to automate detection of new viruses. Is this not widespread?

I'm aware of Pollack's research, but as you said he's suffered reputational harm, which started when he began that research. The stories surrounding Luc Montagnier and Benveniste were pretty poorly handled, and they both were somewhat discredited for merely pointing out undiscovered anomalies that merited further investigation.

Nature sent their hatchet man James Randi, who has been known for discrediting people, sometimes without sound basis, especially in cases where the underlying mechanism is not understood.

There is something to be said for this: when you suddenly can't get any funding because you published something no one else had found, in a methodical scientific way that could be duplicated, that tends to give teeth to those calling it conspiracy- it starts to look less like conspiracy theory and more like conspiracy practice.

Every little quirk we find can potentially be used in an engineered solution to reach some amazing outcome not previously considered. Quantum-dot-based technologies are an example of this, from what I've read of their history.


Yes, that is basically the process for new virus discovery- sequencing and then looking for similarity to known viral sequences. That is still an expensive and time-consuming research project, and it fails if the virus is too different to identify any sequence homology. We still find a lot of DNA and RNA we can’t make any sense of in almost every sequencing experiment- there’s a ton of stuff out there undiscovered and unexplained. I suspect a lot of currently mysterious diseases and health problems may have viral origins.
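
To illustrate the failure mode, a toy sketch of similarity search (real pipelines like BLAST or LucaProt are far more sophisticated, but the limitation is the same in kind):

    def kmer_set(seq: str, k: int = 8) -> set:
        return {seq[i:i + k] for i in range(len(seq) - k + 1)}

    def homology_hit(read: str, reference: str,
                     k: int = 8, min_shared: int = 3) -> bool:
        # Flag a read only if it shares enough k-mers with a known
        # reference. A genuinely novel virus shares too few k-mers
        # with everything known, so it is never flagged- absence of
        # a hit is not evidence of absence.
        return len(kmer_set(read, k) & kmer_set(reference, k)) >= min_shared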

That’s why I’m saying we can’t rule out a virus here easily- not until some other cause is proven.

You can also have more complex mechanisms that involve a virus plus genetic or environmental factors- for example, the recent finding that implicates HSV in Alzheimer’s, despite the fact that most people with the virus still never get Alzheimer’s.


I wasn't aware of that tidbit with HSV and Alzheimer's, always nice to learn something new. Thanks for mentioning this.


Water has no memory; this is pseudoscience that leads directly to homeopathy. Homeopathy is not a medicine, it's a religion!


A lot of those claims about water physics used to market homeopathy are based on real experimental observations- see the link in my other reply; water really does do some strange and complex stuff.

But the problem is that these observations do not actually support the claims of homeopathy at all- the attempt at connecting the two is entirely nonsense. I like to try to be open-minded about fringe science and medical ideas... but homeopathy really takes the cake, and is the type of total nonsense that gives the rest of that type of stuff a bad name.


That is not how a dominant trait in genetics works- it means you only need one copy to express the phenotype, but it is no more likely to be passed on than any other genetic trait.

It is possible to have a trait that occurs with high frequency in a population- so that almost everyone has 2 copies of it.
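
A quick Hardy-Weinberg sketch makes the distinction concrete- dominance sets the phenotype rule (one copy suffices), while allele frequency is an entirely separate quantity:

    def phenotype_frequency(p: float) -> float:
        # Fraction of the population showing a dominant phenotype,
        # for a dominant allele at frequency p (Hardy-Weinberg).
        q = 1.0 - p
        two_copies = p * p       # homozygous dominant
        one_copy = 2 * p * q     # heterozygous: phenotype still shows
        return two_copies + one_copy

    for p in (0.01, 0.5, 0.99):
        print(f"allele freq {p}: phenotype in "
              f"{phenotype_frequency(p):.2%} of the population")

At p = 0.99 nearly everyone carries two copies- high frequency, but that is a fact about the population, not about dominance.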


> there's nothing special about having darpa build it other than they provide funds

DARPA chooses real world engineering/technical problems, and then works closely with the external grantees that develop possible solutions, to solve the problem together - rapidly. Fundamental discoveries often come out of that kind of focused well resourced problem solving, but it's not really the goal.


That's what ousd is for.


It's not pointless at all- it's the core idea behind Stoic philosophy aka "the dichotomy of control," and has proven very effective at improving people's mental health through modern therapy methods like CBT and ACT.

One can still do everything in their power to prepare for and mitigate things outside their control, while keeping in mind what is and isn't in their control, so they don't become emotionally dependent on outcomes outside their control- which is ruinous for mental health.

Having empathy and caring about doing the right thing actually work better when you stop obsessing over and wasting all of your energy on things you cannot control.


You do have influence. How much is up to your ingenuity and effort. You may choose not to exercise it.


Of course you do, that's the whole point: to focus on what you actually can control- your own actions, which absolutely includes using your own ingenuity and effort to influence things for the better.


There is still a lot we can do. Every demagogue and authoritarian regime collapses eventually, often quickly- and they haven't even succeeded in seizing total control yet. As long as we are alive, we can resist.

Moreover, even under the worst possible situations, individuals can find meaning and purpose. Viktor Frankl's book "Man's Search for Meaning" on surviving concentration camps as well as James Stockdale's books on surviving as a POW in Vietnam show firsthand that it is possible.

"You have a right to make them hurt you, and they don't like to do it." -James Stockdale


It is ironic that the most useful thing Musk and Trump will do is wake people up to the fact their house is on fire...


I very much doubt they'll wake up anti-woke people.


This German comedy sketch sums up this situation incredibly well IMO https://www.youtube.com/watch?v=zvgZtdmyKlI

