
Similar experience, I had a Cyrix PR200 which really underperformed the equivalent Intel CPU.

Convinced my parents to buy a new PC; they organized with a local computer store for me to go in, sit with the tech, and actually build the PC. Almost identical specs in 1998: 400 MHz Pentium II, Voodoo 2, no Zip drive, but it had a Sound Blaster Live! ($500 AUD at the time).

I distinctly remember the invoice being $5k AUD in 1998 dollars, which is $10k AUD in 2024 dollars. This was A LOT of money for my parents (~7% of their pretax annual income), and I'm eternally grateful.

I was in grade 8 at the time (middle school equivalent in the USA) and it was the PC I learnt to code on (QBasic -> C -> C++). I spent many hours installing Linux and re-compiling kernel drivers (learning how to use the command line), used SoftICE to reverse engineer shareware and write keygens (learning x86 assembly), and created Counter-Strike wall hacks by writing MiniGL proxy DLLs (learning OpenGL).

So glad there weren't infinite pools of time-wasting (YouTube, TikTok, etc.) back then, and I was forced to occupy myself with productive learning.

/end reminiscing


I could share almost exactly the same story. So grateful my parents could afford, and were willing to spend, the money on a nice PC that I entirely monopolised.


Thanks so much for putting so much effort into this, loved reading it: the diagrams and explanations are top-tier. Inspirational.


Have done both clinical Ketamine and Psilocybin therapy.

Ketamine was very interesting. A proper, completely dissociative "K-hole" experience. I feel like it helped with anxiety, but I can't pinpoint "why" from an introspective perspective.

Psilocybin, on the other hand, was a hero dose, and I'm a changed person afterwards.

Could feel the "layers" of my identity being stripped off, almost a regression to a more child-like state. Very interesting experience. Had strong synesthesia: sounds would produce colors, colors would produce tastes; a fun experience.

Near the peak of the experience I had these strong, recurring auditory hallucinations of my mother saying all these random words from my youth, accompanied by strong feelings of anxiety. After a lot of post-experience integration and reflection I realized that my mother's anxiety about the world was effectively "programmed" into my brain during my upbringing, i.e. generationally transmitted anxiety.

Therapy always talks about childhood trauma, etc., but actually experiencing it was another level, and it really helped me on my journey to being a less anxious person.

Before the Psilocybin experience, I suffered from existential depression: what's the point of living if the sun is going to explode in ~x billion years? Towards the peak of the experience everything was super chaotic; I felt like I was being transported into different realities (e.g. realities with different laws of physics, or different space-time geometries). This was hugely anxiety-inducing and would otherwise be called a "bad trip." I felt "lost" in this sea of different realities.

As I was coming down from the peak and started to reintegrate, I had a strong, distinct sense of "coming back" to our current reality. It felt like finding a safe tropical island in a sea of chaos: our current reality is a safe space and point of stability in a sea of chaos and uninviting realities.

I was truly, deeply grateful to be able to return to the familiar, and it made me deeply appreciate myself and the blessing that our reality is to us.

Post the experience, I also acquired the ability to observe my emotions from a third-person perspective: rather than feeling "angry," I could tag the emotion "angry" and react accordingly, almost as if I had gained ring 0 access to my brain when I previously only had ring 1 access.

All in all, probably the most profound and healing experience of my life.

  1. Deeply felt and understood that my anxiety was generationally passed on from my mother's anxiety,
  2. Eliminated my existential depression, giving me a deep appreciation for the beauty of our reality,
  3. Gave me ring 0 access to my emotions, making me a much more stable, calm person.


Beautiful description of your experiences. The psilocybin experience sounds like it was guided by a professional? Was it, and if so, how did you find that person?


These skills can come from studying ACT though.


Open question for LLMs: do creativity and new ideas come from a process, or are they a laddered emergent capability?

What I mean by this: is the process of coming up with novel ideas a single capability that has to be trained and reinforced?

Or is it a ladder of capabilities of increasing complexity, in that a model that could figure out General Relativity from scratch would not necessarily be able to continue the process and come up with a viable “theory of everything”?

One thing I’ve wanted to do, I’m sure somebody has tried it, is build a dataset to RL a model to be more creative: Get a human expert in a field, have them ask a reasoning model some open questions, and then have the expert look at 20 outputs and rank them by creativity / insight. Have the expert iterate and see how much new “insight” they can mine from the model.

Do this across many fields, and then train a model on these rankings.
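
A rough sketch of how those expert rankings could be turned into training data (standard preference pairs of the kind used for reward modeling / DPO; the class and field names here are hypothetical, not from any existing pipeline):

    from dataclasses import dataclass
    from itertools import combinations

    @dataclass
    class CreativityRanking:
        field: str           # e.g. "condensed matter physics"
        prompt: str          # the open question the expert asked
        outputs: list[str]   # the ~20 sampled model outputs
        ranking: list[int]   # indices into outputs, best-to-worst by creativity / insight

    def to_preference_pairs(r: CreativityRanking) -> list[dict]:
        """Expand one expert ranking into (chosen, rejected) pairs."""
        pairs = []
        for better, worse in combinations(r.ranking, 2):
            pairs.append({
                "field": r.field,
                "prompt": r.prompt,
                "chosen": r.outputs[better],
                "rejected": r.outputs[worse],
            })
        return pairs

The resulting pairs could then feed a standard reward-model or DPO setup, with creativity / insight as the preference signal instead of correctness.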

Perhaps creativity is a different way of moving in latent space which is “ablated” from existing models because they’re tuned to be “correct” rather than “creative.”

Also curious what techniques there are to sample a reasoning model to deliberately perturb its internal state into more creative realms. Though there's a fine line between insight and hallucination.

In some respects creativity is hallucination. As a human, you’re effectively internally postulating creative ideas (“hallucinations”) and then one of them “hits” and fires a whole bunch of neurons which indicate: “ok, that wild idea actually has grounding and strong connections to the existing knowledge in your brain.”


I think creativity comes from search, inspiration or discoveries from the environment. Creativity is basically searching and stumbling into novel ideas.

Creativity doesn't come from the brain itself, the brain is just exploring and accumulating experience. Experience builds on itself. The role of the environment is both to spark and to invalidate ideas.

For example, AlphaZero, with just a search-and-learn strategy, could beat humans at our own game. It's not the magic of the model, but of the search loop.


If this is your kind of thing and you ever get a chance to see the musical artist Tipper alongside Fractaled Visions driving the visuals, you’re in for a treat.

The most spot-on visual depictions of psychedelic artifacts I’ve witnessed.

Saw them together last year and it’s the no. 1 artistic experience of my life. The richness and complexity of Fractaled Visions’ visuals are almost unbelievable.

Even knowing a lot about shader programming, etc., for some of the effects I was like “wtf, how did he do that”.

Here’s the set; it doesn’t fully capture the experience, but it gives a feel. Seeing this in 4K at 60fps was next level:

https://youtu.be/qMcqw12-eSk?si=R5mCaIbR01w3Tbyv


ooo I was there


Nice, thanks for the recommendations, the Mudi V2 looks great.

Any limitations / bumps in the road, or does it "just work"?


Yes! I had to flash it with beta firmware since the eSIM Manager disappears when using Airalo.

https://dl.gl-inet.com/release/router/testing/e750/4.3.21

   Fixed the problem that esim manage page is lost after installing some esim profiles.


I own a Mudi v1; it is Chinese and 'just runs OpenWrt' (with mods). I had to use such a device for a previous job (not w/eSIM tho). Battery life was good, but now not so much, and the battery isn't user-replaceable.


I feel like they're conflating "rave" with "clubbing."

Friday and Saturday club attendance has been dropping across the world, and many electronic-music-focused club venues have shut down (at least in Australia and the UK).

My word association of "rave" is "festival" though. Festivals feel like they're still booming, or at least not in dramatic decline.

From a small personal sampling (Coachella, Portola, Outside Lands, Proper, Lightning in a Bottle), festivals are still going strong. For some (Coachella, Lightning in a Bottle) attendance felt like it dropped from 2023 to 2024, but perhaps only 10-20%, and this is likely economically correlated (inflation, etc.). Late-2024 festivals (Portola, Proper) were packed.


> My word association of "rave" is "festival" though.

hehe, my definition of a rave is a temporary venue where at least 2 people have asked if you need help finding Molly.

There is an overlap between "festival" goers and "ravers", but many ravers are priced out of festivals and may instead attend raves weekly or monthly.

Both of these, imho, are different from your traditional licensed club that primarily serves alcohol and is 21+ exclusive.


My pet theory is that our universe is run on some external computational substrate. A lot of the strangeness we see in quantum physics is a side effect of how that computation is executed efficiently.

The inability to reconcile quantum field theory and general relativity comes from gravity being a fundamentally different thing to matter: matter is an information system that's run to execute the laws of physics; gravity is a side effect of the underlying architecture being parallelized across many compute nodes.

The speed of light limitation is a side effect of it taking a finite time for information to propagate in the underlying computational substrate.

The top-level calculation the universe is running is constantly trying to balance computation efficiently among the compute nodes in the substrate: i.e. the universe is trying to maintain a constant complexity density across all compute nodes.

Black holes act as complexity sinks, effectively "garbage collection." The matter that falls below the event horizon is effectively removed from the computation needs of the substrate. The cosmological constant can be explained by more compute power becoming available as more and more matter is consumed by black holes.

This can be introduced into GR by adding a new scalar field whose distribution encodes "complexity density" (e.g. some metric of complexity like counting micro-states). This scalar field attempts to remain spatially uniform in order to best "smooth" computation across the computational substrate. If you apply this to a galaxy with a large central supermassive black hole, you end up with almost a point sink of complexity at the center, then a large area of high complexity in the accretion disk, and then a gradient of complexity away towards the edges of the galaxy. That is, the scalar field has strong gradients along the radius of the galaxy, and this gives rise to varying gravitational effects over the radius (very MOND-like).
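
Purely as an illustrative sketch (the coupling below is just the simplest scalar-tensor form that matches this description, not something derived from the idea itself): if the "complexity density" is modeled as a scalar field phi sourced by some local complexity measure C(x), the modified action might look like

    S = \int d^4x \, \sqrt{-g} \left[ \frac{R}{16\pi G}
        - \frac{1}{2} \nabla_\mu \phi \, \nabla^\mu \phi
        - V(\phi)
        + \lambda \, \phi \, \mathcal{C}(x) \right] + S_{\text{matter}}

The kinetic term is what drives phi toward spatial uniformity, and in a galaxy the resulting radial gradient of phi would show up as an extra effective source in the field equations, which is where any MOND-like rotation-curve behaviour would have to come from.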

Some back of the napkin calculations show that adding this complexity density scalar field to GR does replicate observed rotation curves of galaxies. Would love to formalize this and run some numerical simulations.

Would hope that fitting the free parameters of GR with this complexity density scalar field would yield some testable predictions that differ from current naive assumptions around dark matter and dark energy.


“External computational substrate” is a useful idea if it leads to falsifiable theories. As a “theory of everything” it sucks, because it’s clearly not motivated by any specific maths or observations, but by the human need to map nature onto some comprehensible analogue, i.e. taking some simpler subset of nature and pretending the rest of it is like that as well. So far, nature has usually become more incomprehensible the deeper we’ve looked at it.

Newtonian mechanics and mechanical clocks being the hottest precision technique led scientists at the time to view nature as clockwork. Now that we have computers, we think “nature is like computers” because it’s an appealing analogue.

But it’s a false analogue, imo. Just like clocks are a thing enabled by nature (a subset, in every meaning of the word), computers are similarly a subset of nature. So yes, nature can think (with human brains) and nature can run computations (with CPUs loaded with programs), but that also is just a subset of nature.

Now: games of the mind and helpful analogues rock. And asking “how is nature analogous to a Turing machine” is interesting for sure. But just because a game is fun or an analogue appealing, one should not forget, in the philosophical sense, that one is playing only with a limited subset of a thing.


There's a Danny Hillis talk on this but I couldn't find it.


Have been building agents for the past 2 years; my tl;dr is that:

Agents are Interfaces, Not Implementations

The current zeitgeist seems to think of agents as passthrough agents: i.e. a lite wrapper around a core that's almost 100% an LLM.

The most effective agents I've seen, and have built, are largely traditional software engineering with a sprinkling of LLM calls for "LLM hard" problems. LLM hard problems are problems that can ONLY be solved by application of an LLM (creative writing, text synthesis, intelligent decision making). Leave all the problems that are amenable to decades of software engineering best practice to good old deterministic code.

I've been calling systems like this "Transitional Software Design." That is, they're mostly a traditional software application under the hood (deterministic, well-structured code, separation of concerns) with judicious use of LLMs where required.

Ultimately, users care about what the agent does, not how it does it.

The biggest differentiator I've seen between agents that work and get adoption, and those that are eternally in a demo phase, is the cardinality of the state space the agent is operating in. Too many folks try to "boil the ocean" and implement a general-purpose capability: e.g. generate Python code to do something, or synthesize SQL based on natural language.

The projects I've seen that work really focus on reducing the state space of agent decision making down to the smallest possible set that delivers user value.

e.g. Rather than generating arbitrary SQL, work out a set of ~20 SQL templates that are hyper-specific to the business problem you're solving. Parameterize them with the options for select, filter, group by, order by, and the subset of aggregate operations that are relevant. Then let the agent choose the right template + parameters from a relatively small, finite set of options.

^^^ the delta in agent quality between "boiling the ocean" vs "agent's free choice over a small state space" is night and day. It lets you deploy early, deliver value, and start getting user feedback.
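
As a concrete sketch of the "small, finite state space" approach (the domain, template, and parameter names below are made up for illustration, not from any particular system):

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class QueryTemplate:
        sql: str                           # parameterized SQL with named slots
        allowed_group_by: tuple[str, ...]
        allowed_metrics: tuple[str, ...]

    TEMPLATES = {
        "revenue_by_period": QueryTemplate(
            sql="SELECT {group_by}, {metric} FROM orders "
                "WHERE order_date >= :start AND order_date < :end "
                "GROUP BY {group_by} ORDER BY {metric} DESC",
            allowed_group_by=("region", "product_category", "sales_channel"),
            allowed_metrics=("SUM(revenue)", "COUNT(*)", "AVG(order_value)"),
        ),
        # ... ~20 more hyper-specific templates for the business domain
    }

    def render(template_name: str, group_by: str, metric: str) -> str:
        """Validate the agent's choice against the finite option set, then render."""
        t = TEMPLATES[template_name]
        if group_by not in t.allowed_group_by or metric not in t.allowed_metrics:
            raise ValueError("agent chose an option outside the allowed state space")
        return t.sql.format(group_by=group_by, metric=metric)

The LLM's only job is to emit a template name plus parameters from that finite option set; rendering, validation, and execution stay in deterministic code.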

Building Transitional Software Systems:

  1. Deeply understand the domain and CUJs (critical user journeys),
  2. Segment out the system into "problems that traditional software is good at solving" and "LLM-hard problems",
  3. For the LLM hard problems, work out the smallest possible state space of decision making,
  4. Build the system, and get users using it,
  5. Gradually expand the state space as feedback flows in from users.


Same experience.

The smaller and more focused the context, the higher the consistency of output, and the lower the chance of jank.

Fundamentally no different than giving instructions to a junior dev. Be more specific -- point them to the right docs, distill the requirements, identify the relevant areas of the source -- to get good output.

My last attempt at a workflow of agents was at the 3.5-to-4 transition, and OpenAI's models weren't good enough at that point to produce consistently good output, and were slow to boot.

My team has taken the stance that getting consistently good output from LLMs is really an ETL exercise: acquire, aggregate, and transform the minimum relevant data for the output to reach the desired level of quality and depth, and let the LLM do its thing.
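
A minimal sketch of that ETL framing, assuming the acquire step is a list of domain-specific source functions you already have (the relevance filter here is a deliberately crude stand-in):

    from typing import Callable

    def build_context(task: str,
                      sources: list[Callable[[str], list[str]]],
                      max_chars: int = 8000) -> str:
        # Acquire: pull candidate snippets from each relevant source.
        snippets = [s for source in sources for s in source(task)]
        # Aggregate: crude relevance filter -- keep snippets sharing words with the task.
        task_words = set(task.lower().split())
        relevant = [s for s in snippets if task_words & set(s.lower().split())]
        # Transform: trim to a fixed budget so the LLM only sees distilled context.
        return "\n\n".join(relevant)[:max_chars]

The prompt handed to the LLM is then just this distilled context plus the task; everything upstream is ordinary, testable code.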


There’ll always be an advantage for those who understand the problem they’re solving for sure.

The balance of traditional software components and LLM-driven components in a system is an interesting topic - I wonder how the capabilities of future generations of foundation models will change that?


I'm certain the end state is "one model to rule them all," hence the "transitional."

Just that the pragmatic approach, today, given current LLM capabilities, is to minimize the surface area / state space that the LLM is actuating over, and then gradually expand that until the whole system is just a passthrough. But starting with a passthrough kinda doesn't lead to great products in December 2024.


Unrelated, but since you seem to have experience here, how would you recommend getting into the bleeding edge of LLMs/agents? Traditional SWE is obviously on its way out, but I can't even tell where to start with this new tech, and I struggle to find ways to apply it to an actual project.


When trying to do everything, they end up doing nothing.


Do you have a public example of a good agentic system? I would like to experience it.


+1, the second and third order effects aren't trivial.

We're already seeing escape velocity in world modeling (see Google Veo2 and the latest Genesis LLM-based physics modeling framework).

The hardware for humanoid robots is 95% of the way there; the gap is control logic and intelligence, which is rapidly being closed.

Combine a Veo2 world model, Genesis control planning, and o3-style reasoning, and you're pretty much there with blue-collar work automation.

We're only a few turns (<12 months) away from an existence proof of a humanoid robot that can watch a YouTube video and then replicate the task in a novel environment. It may take longer than that to productionize.

It's really hard to think and project forward on an exponential. We've been on an exponential technology curve since the discovery of fire (at least). The 2nd order has kicked up over the last few years.

It's not a rational approach to look back at robotics 2000-2022 and project that pace forwards. There's more happening every month than in decades past.


I hope that you're both right. In 2004-2007 I saw self driving vehicles make lightning progress from the weak showing of the 2004 DARPA Grand Challenge to the impressive 2005 Grand Challenge winners and the even more impressive performance in the 2007 Urban Challenge. At the time I thought that full self driving vehicles would have a major commercial impact within 5 years. I expected truck and taxi drivers to be obsolete jobs in 10 years. 17 years after the Urban Challenge there are still millions of truck driver jobs in America and only Waymo seems to have a credible alternative to taxi drivers (even then, only in a small number of cities).

