uxhacker's comments | Hacker News

Does using AI kill the dopamine loop?

I don’t think so.

I still spend the hours — because it needs to sound original. It needs to feel authentic. I have to add my own personal parts to the story.

I still struggle writing it.

The AI helps, but it doesn’t replace the work. The dopamine’s still there — because I’m still in the loop.


AI kills the fun here for me. Writing is fun. Writing using AI to help is horrible. Same with coding to some extent.

Same, kills the fun. It's actually made it harder to get started on anything, because I know the starting point and most of the work is just prompts, which I don't find fun at all. Handcrafting feels more tedious knowing that prompts could do it so much faster. So I end up just disengaging from the activity altogether. This is the second year since about 1995 that my side-projects folder has practically nothing new (I've built a few things with AI, but I lose interest very fast - like a day or two).

FWIW my context is coding as a hobby/entrepreneur. It’s not my job.


I think that generalizes to 'creation is fun, using AI to help is horrible'.

I think that generalizes to ‘AI killed the dopamine loop’ XD

AI is an amazing tool that can boost productivity and help with creative inspiration. If an app is making you feel sad, stop using the app.

Can it really help with creative inspiration in the long term? I'd say the answer is no for most people.

And some people need a certain number of others who are also doing the same thing for the love of it. We are a social species after all. AI is taking that away.


I’m very dyslexic, so having AI in the loop is incredibly useful — especially for feedback.

But I have to guide it: “just list the changes,” “use English English,” and so on.

The fun’s still there — because the thinking is still mine.


I wrote a "funny" email to a colleague who asked for a formal request to do a task I asked him for. I took it seriously and wrote extremely formal ("Dearest Steven... " Etc). He laughed and said "did chatgpt write that?".

It made me irrationally angry. No, I spent two minutes of my own brain power coming up with those five sentences. This kind of thing happens constantly now; everyone assumes everyone else uses GPTs for everything, and I find it a bit depressing to be honest.


The mainstream writing assistants are dog-shite, but so is everything else! If your idea of writing with AI is ChatGPT and no harness, you're only making a statement about the lowest common denominator of AI tooling, from a position of ignorance. I'd previously helped multiple pen-pals of mine properly harness AI tooling with low-code platforms such as Dify. I'm sure there's plenty more out there, but re: Dify specifically, they took to it rather well. When carefully prompted, some models excel at "editing" more so than writing from scratch. Not having to rely on professional editors is a huge advantage for aspiring authors who would otherwise struggle to keep on form. In my experience, progressively refining ideas, maintaining notes on character development in long stories, and, soon enough, persistent agents with proactive, interruptible work capabilities all vastly reduce the cognitive load that writers deal with constantly and that has very little to do with "creativity."

You cannot blame "AI" for your own lack of trying...


Actually, the fact of the matter is that a lot of people derive joy from being the "sole creator" of what they do, or, if they collaborate, from enriching human relationships as they do it. So AI fundamentally takes away that joy, because it's outside the parameters of normal creation.

What you allude to is not so much the "fact" as the "heart" of the matter. The availability of AI tooling takes away nothing; you elect to either use it or not. I personally hate having to deal with human editors! Most of them fit into two broad categories: guns-for-hire and genuine collaborators. The "fact" of the matter is that AI does not prevent me from collaborating with any of my peers; however, it does allow me to pseudo-collaborate with writers long dead! In fact, I happen to maintain a collection of theatrical play-journals, riddled with conversations I've had at the time with various historical figures vis-à-vis AI. This is the single most valuable source of inspiration, enabling my writing in ways that my peers never could. AI-assisted writing is a misnomer: it's not about writing so much as reading, and more so playing, which is how we get creative.

Wittgenstein would absolutely love it!

It doesn't surprise me that those who have failed to keep up with the constantly evolving AI tooling would also make that failure part of their newly refined, all-human identity. IMHO, just as hating popular things does not make you cool, not using AI does not make you a joyous independent creator bravely holding post in the treacherous world of AI slop! It sounds more like a fantasy than a coherent creative position. We're still in the early days when it comes to creative-writing comprehension in AI. You may or may not be surprised that there's very little to show in terms of evals for it. Unlike coding and maths, fiction is yet to be recognised as a verifiable domain. (Probably because the probability distribution over fictional outputs doesn't necessarily converge the way it does for objective rewards!) However, some labs are working on it! There's a huge market for creative-writing aids, as it's necessary to everything from education (story-telling is what makes studying worthwhile) to political work.


It also reduces the enjoyment of a finished product. You used to write a story or report and be proud of the work. Now your neighbors have done the same with AI and it feels like it isn't worth it.

This has been the effect of technology for a while, at least mass communications technology. It exposes you to a pseudo-anonymous world of millions of people doing things, with no context for their creation, only their output.

AI however brings it to a horrific next level, and really emphasizes the mass production of art.


> The AI helps, but it doesn’t replace the work. The dopamine’s still there — because I’m still in the loop.

The problem is that most people need to feel that they are doing something original, and AI takes that away. AI doesn't help anything, except in the short term and maybe for some people who can compartmentalize it. But those people are few and far between indeed.


this reads like it was written by AI.

You've either let AI help you with the 'struggle' of this post or you've spent so much time with chatgpt that you've internalized its cadence and patterns. This is straight chatGPT.

I recently have also been thinking about Jef Raskin’s book The Humane Interface. It feels increasingly relevant now.

Raskin was deeply concerned with how humans think in vague, associative, creative ways, while computers demand precision and predictability.

His goal was to humanize the machine through thoughtful interface design—minimizing modes, reducing cognitive load, and anticipating user intent.

What’s fascinating now is how AI changes the equation entirely. Instead of rigid systems requiring exact input, we now have tools that are themselves fuzzy and probabilistic.

I keep thinking that the gap Raskin was trying to bridge is closing—not just through interface, but through the architecture of the machine itself.

So AI makes Raskin’s vision more feasible than ever but also challenges his assumptions:

Does AI finally enable truly humane interfaces?


"no" .. intelligent appliance was the product that came out of Raskin's thinking..

I object to the framing of this question directly: there is no definition of "AI". Secondly, the humane interface is a genre that Jef Raskin shaped and re-thought over years. A one-liner here definitely does not embody the works of Jef Raskin.

Off the top of my head, it appears that "AI" enables one-to-many broadcast, service interactions, and knowledge retrieval in a way that was not possible before. The thinking of Jef Raskin was very much along the lines of an ordinary person using computers for their own purposes. "AI" in the supply-side format coming down the road appears to be headed towards societal interactions that depersonalize and segregate individual people. It is possible to engage "AI", whatever that means, to enable individuals as an appliance. This is by no means certain at this time IMHO.


> Does AI finally enable truly humane interfaces?

Perhaps, but I don't think we're going to see evidence of this for quite a while. It would be really cool if the computer adapted to how you naturally want to use it, though, without forcing you through an interface where you talk/type to it.


"Does AI finally enable truly humane interfaces?"

I think it does; LLMs in particular. AI also enables a ton of other things, many of them inhumane, which can make it very hard to discuss these things as people fixate on the inhumane. (Which is fair... but if you are BUILDING something, I think it's best to fixate on the humane so that you conjure THAT into being.)

I think Jef Raskin's goal with a lot of what he proposed was to connect the computer interface more directly with the user's intent. An application-oriented model really focuses so much of the organization around the software company's intent and position, something that follows us fully into (most of) today's interfaces.

A magical aspect of LLMs is that they can actually fully vertically integrate with intent. It doesn't mean every LLM interface exposes this or takes advantage of it (quite the contrary!), but it's _possible_, and it simply wasn't possible in the past.

For instance: you can create an LLM-powered piece of software that collects (and allows revision of) some overriding intent. Just literally take the user's stated intent and put it in a slot in all following prompts. This alone will have a substantial effect on the LLM's behavior! And importantly you can ask for their intent, not just their specific goal. Maybe I want to build a shed, and I'm looking up some materials... the underlying goal can inform all kinds of things, like whether I'm looking for used or new materials, aesthetic or functional, etc.
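
To make this concrete, here is a minimal sketch of the "intent slot" idea in Python. Everything in it (the class, the template, the shed example) is hypothetical illustration, not any particular product's API:

    # A sketch of an "intent slot": capture the user's stated intent once,
    # allow revision, and re-inject it into every subsequent prompt.
    SYSTEM_TEMPLATE = (
        "You are assisting with a task.\n"
        "The user's overriding intent: {intent}\n"
        "Keep every suggestion consistent with that intent."
    )

    class IntentSession:
        def __init__(self, intent: str):
            self.intent = intent  # revisable at any time

        def revise_intent(self, new_intent: str) -> None:
            self.intent = new_intent

        def build_messages(self, user_message: str) -> list[dict]:
            # Each turn re-states the current intent, so the model stays
            # grounded in the larger goal, not just the immediate ask.
            return [
                {"role": "system",
                 "content": SYSTEM_TEMPLATE.format(intent=self.intent)},
                {"role": "user", "content": user_message},
            ]

    session = IntentSession("build a cheap, weatherproof shed from used materials")
    messages = session.build_messages("find me plywood options")
    # `messages` then goes to whatever chat-completion API you use.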

To accomplish something with a computer we often thread together many different tools. Each is generally defined by its function (photo album, email client, browser-that-contains-other-things, and so on). It's up to the human to figure out how to assemble them, and at each step it's easy to become distracted or confused, to lose track of context. And again, an LLM can engage with the larger task in a way that wasn't possible before.


Tell me, how does doing any of the things you've suggested help with the huge range of computer-driven tasks that have nothing to do with language? Video editing, audio editing, music composition, architectural and mechanical design, the list is vast and nearly endless.

LLMs have no role to play in any of that, because their job is text generation. At best, they could generate excerpts from a half-imagined user manual ...


Because some LLMs are now multimodal—they can process and generate not just text, but also sound and visuals. In other words, they’re beginning to handle a broader range of human inputs and outputs, much like we do.


Those are not LLMs. They use the same foundational technology (pick what you like, but I'd say transformers) to accomplish tasks that require entirely different training data and architectures.

I was specifically asking about LLMs because the comment I replied to only talked about LLMs - Large Language Models.


At this point in time calling a multimodal LLM an LLM is pretty uncontroversial. Most of the differences lie in the encoders and embedding projections. If anything I'd think MoE models are actually more different from a basic LLM than a multimodal LLM is from a regular LLM.

Bottom line is that when folks talk about LLM applications, multimodal LLMs, MoE LLMs, and even agents all fall under the same general umbrella.


Multimodal LLMs are absolutely LLMs, the language is just not human language.


Everything has to do with language! Language is a way of stating intention, of expressing something before it exists, of talking about goals and criteria. Every example you give can be described in language. You are caught up in the mechanisms of these tools, not the underlying intention.

You can describe your intention in any of these tools. And it can be whatever you want... maybe your intention in an audio editor is "I need to finish this before the deadline in the morning but I have no idea what the client wants" and that's valid, that's something an LLM can actually work with.

HOW the LLM is involved is an open question, something that hasn't been done very well, and may not work well when applied to existing applications. But an LLM can make sense of events and images in addition to natural language text. You can give an LLM a timestamped list of UI events and it can actually infer quite a bit about what the user is actually doing. What does it do with that understanding? We're going to have to figure that out! These are exciting times!


What if you could pilot your video editing tool through voice? Have a multimodal LLM convert your instructions into structured commands that the editor uses to perform actions.


Compare pinch zoom to the tedious scene in Bladerunner where Deckard is asking the computer to zoom in to a picture.


Zooming is a bad example (because pinch zoom is just so much better than that scene hah.) Instead "go back 5 frames, and change the color grading. Make the mood more pensive and bring out blues and magentas and fewer yellows and oranges." That's a lot faster than fiddling with 2-3 different sliders IMO.
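
To sketch what the structured side of that might look like: here is a hypothetical command object a multimodal LLM could emit for the instruction above. The schema is invented for illustration; it is not taken from any real editor:

    # Hypothetical structured command for: "go back 5 frames and change the
    # color grading: more blues and magentas, fewer yellows and oranges,
    # more pensive."
    command = {
        "action": "adjust_color_grading",
        "frame_range": {"relative_start": -5, "relative_end": 0},
        "saturation_shift": {
            "blues": +0.20,
            "magentas": +0.15,
            "yellows": -0.20,
            "oranges": -0.20,
        },
        "mood_hint": "pensive",
    }
    # The editor validates this against its schema and applies the edit;
    # the LLM never touches pixels directly.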


> Zooming is a bad example (because pinch zoom is just so much better than that scene hah.) Instead "go back 5 frames, and change the color grading. Make the mood more pensive and bring out blues and magentas and fewer yellows and oranges." That's a lot faster than fiddling with 2-3 different sliders IMO.

Eh. That's not as good as being skilled enough to know exactly what you want and have the tools to make that happen.

There's something to be said for tools that give you the power to manipulate something efficiently, rather than systems that do the manipulation for you.


> Eh. That's not as good as being skilled enough to know exactly what you want and have the tools to make that happen.

I mean, do you know that? A tool that offers this audible, fluent experience needs to exist before you can make that assessment, right? Or are vibes alone a strong enough way to make this judgement? (There's also some strong "Less space than a Nomad. Lame" energy in this post lol.)

Moreover why can't you just have both? When I fire up Lightroom, sure I have easy mode sliders to affect "warmth" but then I have detailed panels that let me control the hue and saturation of midtones. And if those panels aren't enough I can fire up Photoshop and edit to my heart's content.

Nothing is stopping you from taking your mouse in hand at any point and saying "let me do it" and pausing the LLM to let you handle the hard bits. The same way programmers rely on compilers to generate most machine or VM code and only write machine code when the compiler isn't doing what the programmer wants.

So again, why not?


> So again, why not?

Because at my heart I'm a humanist, and I want tools that allow and encourage humans to have and express mastery themselves.

> Nothing is stopping you from taking your mouse in hand at any point and saying "let me do it" and pausing the LLM to let you handle the hard bits. The same way programmers rely on compilers to generate most machine or VM code and only write machine code when the compiler isn't doing what the programmer wants.

IMHO, good tools are deterministic, so a compiler (to use your example) is a good tool, because you can learn how it functions and gain mastery over it.

I think an AI easy-button is a bad tool. It may get the job done (after a fashion), but there's no possibility of mastery. It's making subjective decisions and is too unpredictable, because it's taking the task on itself.

And I don't think bad tools should be built, because of the weaknesses of human psychology. Something is stopping you "from taking your mouse in hand at any point and saying 'let me do it'," and it's those weaknesses. You either take the shortcut or have to exercise continuous willpower to decline it, which can be really hard and stressful. I don't think we should build bad tools that put people in that situation.

And you're not going to make any progress with me by arguing based on precedent of some widely-used bad tool. Those tools were likely a mistake too. For a long time, our society has been putting technology for its own sake ahead of people.


> And you're not going to make any progress with me by arguing based on precedent of some widely-used bad tool. Those tools were likely a mistake too. For a long time, our society has been putting technology for its own sake ahead of people.

Your comment is pretty frustrating. HN has definitely become more of a "random internet comments" forum over the years, drifting from its more grounded focus. But even when "random internet comments" talk to each other, you expect a forthright willingness to discuss. My reading of your comment is that you have a strong opinion, you're injecting that opinion, but you're not open to discussion on your opinion. This statement makes me feel like my time spent replying to you was a waste.

Moreover I feel like an attitude of posting but not listening when using internet forums is corrosive. In fact, when you call yourself a humanist, this confuses and frustrates me even more because I feel it's human to engage with an argument or just stop discussing when engagement is fruitless. Stating your opinion constantly without room for discussion seems profoundly inhuman to me, but I also suspect we're not going to have a productive discussion from here so I will heed my own feelings and disengage. Have a nice day.


> My reading of your comment is that you have a strong opinion, you're injecting that opinion, but you're not open to discussion on your opinion. This statement makes me feel like my time spent replying to you was a waste.

Eh, whatever. I was just trying to prevent the possibility of a particularly tiresome cookie-cutter "argument" I've seen a million times around here. I don't know if you were actually going to make it, but we're in the context where it's likely to pop up, and it'd just waste everyone's time.

Also this isn't really opinion territory, it's more values territory.


Training LLMs to generate some internal command structure for a tool is conceptually similar to what we've done with them already, but the training data for it is essentially non-existent, and would be hard to generate.


My experience has been that generating structured output with zero, one, and few-shot prompts works quite well. We've used it at $WORK for zero-shot stuff and it's been good enough. I've done few-shot prompting for some personal projects and it's been solid. JSON Schema based enforcement of responses with temperature 0 settings works quite well. Sometimes LLMs hallucinate their responses but if you keep output formats fairly constrained (e.g. structured dicts of booleans) it decreases hallucinations and even when they do hallucinate, at temperature 0 it seems to stay within < 0.1% of responses even with zero-shot prompting. (At least with datasets and prompts I've considered.)

(Though yes, keep in mind that 0.1% hallucination = 99.9% correctness which is really not that high when we're talking about high reliability things. With zero-shot that far exceeded my expectations though.)
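
For anyone curious, a minimal sketch of the schema-enforced, temperature-0, zero-shot setup described above, using the OpenAI Python SDK's structured-output interface (the model name, schema, and prompt are illustrative assumptions, not recommendations):

    # Zero-shot classification constrained to a dict of booleans.
    from openai import OpenAI

    client = OpenAI()

    schema = {
        "type": "object",
        "properties": {
            "is_spam": {"type": "boolean"},
            "mentions_pricing": {"type": "boolean"},
        },
        "required": ["is_spam", "mentions_pricing"],
        "additionalProperties": False,
    }

    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative choice
        temperature=0,
        response_format={
            "type": "json_schema",
            "json_schema": {"name": "flags", "schema": schema, "strict": True},
        },
        messages=[
            {"role": "system", "content": "Classify the email. Answer only in JSON."},
            {"role": "user", "content": "Subject: You won a free cruise!!!"},
        ],
    )
    print(resp.choices[0].message.content)
    # e.g. {"is_spam": true, "mentions_pricing": false}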


Deckard. Blade Runner.


> Does AI finally enable truly humane interfaces?

This is something I keep tossing over in my head. Multimodal capabilities of frontier models right now are fantastic. Rather than locking into a desktop with peripherals or hunching over a tiny screen and tapping with thumbs, we finally have an easy way to create apps that interact "natively" through audio. We can finally try to decipher a user's intent rather than forcing the user to interact through an interface designed to provide precise inputs to an algorithm. I'm excited to see what we build with these things.


Highly recommended, timeless read!


Dude is responsible for the one-button mouse ...


I’m not a mathematician (just a programmer), but reading this made me wonder—doesn’t this kind of dimensional weirdness feel a bit like how LLMs organize their internal space? Like how similar ideas or meanings seem to get pulled close together in a way that’s hard to visualize, but clearly works?

That bit in the article about knots only existing in 3D really caught my attention. "And dimension 3 is the only one that can contain knots — in any higher dimension, you can untangle a knot even while holding its ends fast."

That’s so unintuitive… and I can't help thinking of how LLMs seem to "untangle" language meaning in some weird embedding space that’s way beyond anything we can picture.

Is there a real connection here? Or am I just seeing patterns where there aren’t any?


> That’s so unintuitive…

It's pretty simple, actually. Imagine you have a knot you want to untie. Lay it out in a knot diagram, so that there are just finitely many crossings. If you could pass the string through itself at any crossing, flipping which strand is over and which is under, it would be easy, wouldn't it? It's only knotted because those over/unders are in an unfavorable configuration. Well, with a 4th spatial dimension available, you can't pass the string through itself, but you can still invert any crossing by using the extra dimension to move one strand around the other, in a way that wouldn't be possible in just 3 dimensions.
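
To spell the move out (coordinates chosen for illustration): work in R^4 with coordinates (x, y, z, w) and put the whole knot in the hyperplane w = 0. Near the crossing you want to flip, take the arc of the under-strand and deform only its fourth coordinate:

    \gamma_s(t) = (x(t),\ y(t),\ z(t),\ s\,b(t)), \qquad s \in [0, 1],

where b(t) >= 0 is a bump function positive exactly on that arc. At s = 1 the arc sits at w > 0 while the rest of the knot stays at w = 0, so it can slide past the other strand in the z-direction without the strands ever meeting; bringing s back to 0 leaves the crossing inverted.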

> Or am I just seeing patterns where there aren’t any?

Pretty sure it's the latter.


That makes sense for a 2D rope in 4D space, but I’m not convinced the same approach holds for a 3D ”hyperrope” in 4D space.


Your intuition is correct, it doesn't! A "3D hyperrope" is in fact just the surface of a ball[1], and it turns out that you can actually form non-trivial knots of that spherical surface in a 4-dimensional ambient space (and analogously they can be un-knotted if you then move up to a 5-dimensional ambient space, although the mechanics for doing so might be a little trickier than in the 1d-in-4d case). In fact, if you have a k-dimensional sphere, you can always knot it up in a (k+2)-dimensional ambient space (and it can then always be unknotted if you add enough additional dimensions).

[1] note that a [loop of] rope is actually a 1-dimensional object (it only has length, no width), so the next dimension up should be a 2-dimensional object, which is true of the surface of a ball. a topologist would call these things a 1-sphere and a 2-sphere, respectively


Any time I am tempted to feel smart, I try to go and study some linear algebra and walk away humbled. I will be spending 20-30 minutes probably trying to understand what you said (and I think you typed it out quite reasonably), but first I have to figure out how... a 3D hyperrope is the same as a surface of a ball...


I'm not sure what you mean here. This is discussing a 1-dimensional structure embedded in 4-dimensional space. If you're not sure it works for something else, well, that isn't what's under discussion.

If you just mean you're just unclear on the first step, of laying the knot out in 2D with crossings marked over/under, that's always possible after just some ordinary 3D adjustments. Although, yeah, if you asked me to prove it, I dunno that I could give one, I'm not a topologist... (and I guess now that I think about it the "finitely many" crossings part is actually wrong if we're allowing wild knots, but that's not really the issue)


There is a real connection insofar as the internal space of an LLM is a vector space, so things which hold for vector spaces hold for the internal space of an LLM. This is the power of abstract algebra. When an algebraic structure can be identified, you all of a sudden know a ton of things about it, because mathematicians have been working to understand those structures for a while.

The internal space of an LLM would also have things in common with how, say, currents flow in a body of water, because that too is a vector space. When you study this stuff you get this sort of zen sense of everything getting connected to everything else. E.g. in one of my textbooks you look at how pollution spreads through the Great Lakes, and then literally the next example looks at how drugs are absorbed into the bloodstream through the stomach, and it's exactly the same dynamic matrix and set of differential equations. Your stomach works the same as the Great Lakes on a really fundamental level.
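
A minimal sketch of that shared structure, with illustrative symbols rather than the textbook's actual numbers: both are linear compartment models

    \dot{x} = A x, \qquad
    A = \begin{pmatrix} -k_1 & 0 \\ k_1 & -k_2 \end{pmatrix}, \qquad
    x(t) = e^{tA} x(0),

where for the drug, x_1 is the amount in the stomach, x_2 the amount in the bloodstream, k_1 the absorption rate and k_2 the elimination rate; for the lakes, the x_i are pollutant masses and the k's are flow rates between basins. Same matrix shape, same exponential solution.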

The spaces being described here are a little more general than vector spaces, so some of the things which are true about vector spaces wouldn't necessarily work the same way here.


> The spaces being described here are a little more general than vector spaces

You probably mean considerably more special than a general vector space. We do have differentiable manifolds here.


If you're holding a hammer, everything looks like a nail ...


I would be careful about drawing any analogies which are “too cute”. We use LLMs because they work, not because they are are theoretically optimal. They are full of lossy tradeoffs that work in practice because they are a good match for the hardware and data we have.

What is true is that you can get good results by projecting lower dimensional data into higher dimensions, applying operations, and then projecting it back down.


> "And dimension 3 is the only one that can contain knots — in any higher dimension, you can untangle a knot even while holding its ends fast."

Maybe you could create "hyperknots", e.g. in 4D a knot made of a surface instead of a string? Not sure what "holding one end" would mean though.


Yes, circles don't knot in 4D, but the 2-sphere does: https://en.wikipedia.org/wiki/Knot_theory#Higher_dimensions

Warning: If you get too deep into this, you're going to find yourself dealing with a lot of technicalities like "are we talking about smooth knots, tame knots, topological knots, or PL knots?" But the above statement I think is true regardless!


Yep — you can always “knot” a sphere of two dimensions lower, starting with a circle in 3D and a sphere in 4D.


It's not just LLMs. Deep learning in general forms these multi-d latent spaces.


When you untie a knot, its ends are fixed in time.

Humans also unravel language meaning from within a hyper dimensional manifold.


I don't think this is true, I believe humans unravel language meaning in the plain old 3+1 dimensional Galilean manifold of events in nonrelativistic spacetime, just as animals do with vocalizations and body language, and LLM confabulations / reasoning errors are fundamentally due to their inability to access this level of meaning. (Likewise with video generators not understanding object permanence.)


> Or am I just seeing patterns where there aren’t any?

Meta: there are patterns to seeing patterns, and it's good to understand where your doubt springs from.

1: hallucinating connections/metaphors can be a sign you're spending too much time within a topic. The classic is binging on a game for days, and then resurfacing into a warped reality where everything you see relates back to the game. Hallucination is the wrong word, sorry, because sometimes the metaphors are deeply insightful and valuable: e.g. new inventions or unintuitive cross-discipline solutions to unsolved maths problems. Watch when others see connections to their pet topics: eventually you'll learn to internally discern your valuable insights from your more fanciful ones. One can always consider whether a temporary change to another topic would be healthy. However, sometimes diving deeper helps. How to choose??

2: there's a narrow path between valuable insight and debilitating overmatching. Mania and conspiratorial paranoia find amazing patterns; however, they tend to be rather unhelpful overall. Seek a good balance.

3: cultivate the joy within yourself and others; arts and poetry is fun. Finding crazy connections is worthwhile and often a basis for humour. Engineering is inventive and being a judgy killjoy is unhealthy for everyone.

Hmmm, I usually avoid philosophical stuff like that. Abstract stuff is too difficult to write down well.


A lot of innovation is stealing ideas from two domains that often don’t talk to each other and combining them. That’s how we get simultaneous invention. Two talented individuals both realize that a new fact, when combined with existing facts, implies the existence of more facts.

Someone once asserted that all learning is compression, and I’m pretty sure that’s how polymaths work. Maybe the first couple of domains they learn occupy considerable space in their heads, but then patterns emerge, and this school has elements from these other three, with important differences. X is like Y except for Z. Shortcut is too strong a word, but recycling perhaps.


I'm unsure if I'm misunderstanding you or your writing in-group!

> learning is compression

I don't think I know enough about compression to find that metaphor useful

> occupy considerable space in their heads

I reckon this is a terribly misleading cliche. Our brains don't work like hard drives. From what I see we can keep stuffing more in there (compression?). Much of my past learning is now blurred but sometimes it surfaces in intuitions? Perhaps attention or interest is a better concept to use?

My favorite thing about LLMs is wondering how much of people's (or my own) conversations are just LLMs. I love the idea of playing games with people to see if I can predictably trigger phrases from people, but unfortunately I would feel like a heel doing that (so I don't). And catching myself doing an LLM reply is wonderful.

Some of the other sibling replies are also gorgeously vague-as (and I'm teasing myself with vagueness too). Abstracts are so soft.


If you have some probability distribution over finite sequences of bits, a stream of independent samples drawn from that distribution can be compressed so that the number of bits in the compressed stream per sample from the original stream is (in the long run) the (base 2) entropy of the distribution. Likewise if instead of independent samples from a distribution there is instead a Markov process or something like that, with some fixed average rate of entropy.

The closer one can get to this ideal, the closer one has to a complete description of the distribution.
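
In symbols (the standard source-coding statement, stated from memory): the entropy is

    H(X) = -\sum_x p(x)\,\log_2 p(x),

and an optimal code for blocks of n independent samples has expected length \ell_n satisfying H(X) \le \ell_n / n < H(X) + 1/n, so the bits-per-sample rate converges to H(X). A compressor that approaches this rate has, in effect, learned the distribution.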

I think this is the sort of thing they were getting at with the compression comment.


I think LLM layers are basically big matrices, which are one of the most popular many-dimensional objects that we non-mathematician mortals get to play with.


The BBC article seems to overstate the uniqueness of the service at Great Ormond Street Hospital. While it’s true that they are the first in the UK to offer a UKAS-accredited clinical metagenomics service (UKAS being the United Kingdom Accreditation Service, which certifies labs to meet medical testing standards like ISO 15189), metagenomics itself is already being used in several other places across the UK.

For example, the Earlham Institute, the University of Oxford, and the UK Health Security Agency are all actively involved in metagenomics research and surveillance.

For example: https://www.phgfoundation.org/blog/metagenomic-sequencing-in...

https://www.earlham.ac.uk/events/nanopore-metagenomics-sampl...


It might be so that the plebs don't start demanding it from the NHS.


I’m trying to understand something broader about the U.S. tech economy and wondering what others think. How much of the money flowing into U.S. tech — particularly advertising spend on platforms like Meta and Google — is actually, indirectly, the result of America running a massive trade deficit? If dollars are going out to pay for imports (like ultra-cheap goods from Shein and Temu), those dollars have to come back in some form — often as investment in U.S. assets, including digital advertising.

Add to that the role of U.S. universities in driving innovation and attracting international capital, and it starts to look like this whole engine is powered by the very things that Trump’s tariffs and restrictions are pushing back against. Doesn’t that make his actions — while ostensibly protecting U.S. jobs — potentially anti-university and anti-investment in the longer term?

I’m genuinely asking this as someone without a background in economics and curious how others see it. Does anyone else think this crackdown might undercut the very ecosystem that’s funded a huge part of American tech?


There is no way to know definitively how much money in aggregate has actually been run through as a massive trade deficit laundering scheme, which it sounds like is what you are asking.

In any potential scheme like the one you describe there would be a time-lag delay, with holdovers (tit for tat), and no real visibility.

The crackdown will undercut this ecosystem without a doubt, but that was bound to happen anyway because of the loss of the petrodollar agreement, and the money printing which has been nonstop since 2012, when we abandoned the sound dollar policy.

The money pool we inflated to meet demand for the petrodollar mandated reserves of other nations is now returning to the US domestic market (driving inflation, a 5 decade delayed debasement).

There are critical junctures where monetary properties in money no longer hold, and we are coming up on one of those junctures with the USD. Most economists I know don't appropriately consider monetary impacts on their econometric models.

In the private sector, money printing through non-reserve debt issuance has already surpassed this juncture but hasn't yet been actualized. This cycle of printing debt fuels the boom-bust cycle, and when enough bad investments occur the boom-bust cycle becomes the bailout cycle, once every 8-10 years. So this has already occurred but it's largely off the public ledger.

The public deficit will breach it also in 2030, sooner if more spending occurs. Generally it is the same behavior as in a 3-stage ponzi system: benefits are front-loaded, returns diminish, and then outflows exceed inflows (at which point perceived and real value collapse, and the roles of stable store of value and medium of exchange will have failed as a whole shortly after).

The short rundown is, Adam Smith has two requirements needed for any participant to continue operating in a market economy.

They must make sufficient profit to cover expenses (in purchasing power), and for individuals that includes the expense of 3 children and a wife. Both requirements have largely failed: participants are edged out and out-competed by companies that have attached themselves to a money printer and removed their loss-function constraints as an opaque state-run apparatus, forcing the trend towards consolidation through leveraged buyout, merger, and bankruptcy. This is silent nationalization.

That critical juncture at stage-3 basically collapses any market to a non-market socialist economy. The point at which production is exceeded by 'debt growth'. First order producers tend to calculate those loss constraints in a more rigorous way.

This is unsustainable because we know non-market socialism as a system fails long-term. Such systems fall to chaotic distortions that arise in part from cooperative decision-making, which violates the requirements for economic calculation (a participant generally must be independent and adversarial). This usually occurs within 50 years at the latest, sometimes within 2, as a result of the 6 problems Mises touches on in his book from the 1930s on Socialism.

So this is actually a much more dire situation than at first glance because we can no longer print money, and the transition from fractional-reserve in 2020 to no reserve (0% reserve) is what nailed the coffin shut. Basel 3 as the capital reserve system they transitioned to fails because it assumes objective value, and that simply violates known economic principles as covered by Carl Menger in his paper on Subjective Value.

PS: If you are thinking that AI driving the labor value of time in a factor market to 0 basically breaks this cycle too, you'd be right.

The danger is that when economic exchange fails, food production fails, following models put forth by Malthus/Catton which are, to put it lightly, apocalyptic, but which appear to be soundly reasoned and fall under socio-economic collapse.


I agree w/ some of what you say, but other stuff ... ?

> but that was bound to happen anyway because of the loss of the petrodollar agreement

The agreement was informal and it "ended" about a year ago. Last I checked, most oil was still priced in USD.

> Basel 3 as the capital reserve system they transitioned to fails because it assumes objective value

citation needed.

I'm worried about the long-term value of the US$, but perhaps not as much as you ... are you putting your money where your mouth is on that?


I would have liked to respond to you yesterday but people rate limited me through downvotes (to 2 posts per day). HN isn't a very good place to talk about anything serious, people who disagree or have a vested interest in useful information not being visible will brigade you and prevent you from communicating/responding seemingly along collectivist lines.

Yes, the agreement was informal, but it was highly important in driving demand for the currency, demand which is no longer there. We largely produce very few things aside from the global currency, which allowed us to export our inflation through that agreement. The fact is that it no longer offers value to other countries, which have been de-dollarizing. They can now join BRICS and transact for those goods in local currencies and goods. The downstream effects will be less trade, because the high seas will become unsafe as we pull back our peace-keeping role; it's entirely possible piracy, or a more modern form of privateering, returns once that is scaled back.

With these types of agreements it takes time for the effects to unwind and then be seen, simply as a matter of contract expirations, some of which are on several-year terms. We should be seeing more of the effects by the 3-5 year mark, enough to show the definitive trend.

Depending on where you get your news, you may not have seen that countries accounting for roughly 44% of global oil are now members of BRICS, which are settling oil in local currencies. Exchanges still have contracts in USD, but there are a number of issues with loadout of contracts which are presenting financial contagion problems as well; trust has been lost, and, for example, failures to deliver on gold and silver contracts now have an estimated delivery date of 16 weeks that seems to keep getting extended. Given how many of these faux commodity contracts are naked, that's not surprising.

Naked paper trading by GSIBs in these markets far exceeded the physical commodities (300%+ in volume), and the risk was offset through options, but options aren't perfect protection. Capital flight to gold, silver, and other commodities is happening, with people opting for loadout, which depletes the physical backing the paper, but that's not really being covered in the news in any effective way.

The Trusted News Initiative has been suppressing news on a variety of subjects including BRICS.

Regarding Basel 3 using objective value, you can get to that point just by reading the published framework at BIS, and examining the definitions that they redefine. The link for the framework is here: https://www.bis.org/baselframework/BaselFramework.pdf .

The Basel 3 system used in the US varies from that because it is a GAAP modified variant (with the GAAP loopholes and the issues inherent in the original framework). I'm not aware of a Fed document available to the public that includes a description of the changes in this variant implementation.

In a fractional reserve system, the published material by the Fed in their publication "Modern Money Mechanics" walked through how the rates limit the amount a bank can loan leaving a fractional percentage of assets for any given expansion based on those rates.

There is no such similar publication for the Basel 3 modified system, at least as far as I am aware. The reserve requirements are limited by risk-weighted assets, which sit one indirection away from the objective value of the assets used as capital (they define a few kinds, including shares/equity of the bank).

This is clear when you see how the banks are treated as a grouping, under the title of the section "Consolidation", which is misused. Consolidation is being used here so as to not set off any alarm bells about valuation, because consolidation for the tasks required must naturally require a valuation to occur first on the capital. You can't do anything in the consolidated group without first having a valuation of the assets.

If you go to the definitions under CAP you'll see that they've redefined capital very carefully. If you mark the indirect references and then follow through to the objects in question, and see how they vary in valuation dramatically from time to time, that's a problem. This changeover was largely adopted without any public announcement at the start of the pandemic.

Who values these, and how they are valued is left unsaid, it would seem the banks themselves do this in part, and some of the assets may include market exposure (something a systemic bank should never have as a security issue).

The fact that the market is no longer functioning, as a result of price discovery failing with >50% of the volume of transactions occurring in dark pools off exchange, should give people pause and inspire great concern. Price discovery fails when around a quarter of the market transactions are invisible.

Upon careful consideration of what's read, you'll see that they are no longer limited by regulatory rates, the central bank has no effective lever to pull to hold back their debt issuance.

As long as the fiat-based asset valuations(prices) inflate with the currency they can keep issuing more debt, and this drives consolidation of the market into non-market territory taken to its logical conclusion. They'll do so right up to the point where people abandon the currency.

When you have a positive feedback system, these type of systems are prone to runaway failures. A general principle in engineering is safety, systems with safety-critical features require a higher duty of care. The food system is dependent on exchange which is dependent on money. The required duty of care is wholly lacking given the existential threat touched on by Catton/Malthus related to that.

Also importantly, there doesn't appear to be any public disclosure when a bank fails to meet the requirements (as would happen in a stock market crash regardless of weighting). If you examine the FRC failure two years ago you'll see this all happened behind the scenes, and the background process is not public (as far as I can tell); it seems they get a private notice, and if they can't correct it in 30 days they get seized and the bad debt is consolidated into one of the remaining dealers. Eventually only one remains, but they all operate on unsound and unsustainable principles. They are called banks, but this isn't banking in the classical sense of the term.

> I'm worried about the long-term value of the US$, but perhaps not as much as you.

I'm not worried about the long-term value of the dollar because I know there is no long-term value, and yes I've moved my assets accordingly (put my money where my mouth is).

I've been rewarded for that too: in the short time since I made that change, post-pandemic, my personal capital has tripled. It's not nearly as liquid as I'd like, but it's diversified sufficiently, and I'm taking other steps to hedge currency risk, which is bound for despair and failure for the general market participants.

I've lost money of course, but the money I've lost was in the bucket of assets that normal people put their retirement plans, like a 401k, into, where I couldn't take it out without a very large tax burden in such a short period of time. I've largely written that portion off and will be cashing it out soon into more suitable investments.

I envision a time not far off where basic goods can't be gotten at any price. Not because I'm a doomer, as some might call me, but because when you have no visibility to recognize a problem, nor ability to correct underlying issues (the socialist calculation problem), the worst outcome is likely to eventually happen. Worst-case scenarios are independent of likelihood.

Trying to plot a safe path for one consolidated boat through chaos is an impossible task. People in many generations past understood enough to be humble in the knowledge that there were impossible problems that could not be solved and avoided systems that coupled their legacy's existence to the solution of those problems.

More recent generations, as consolidated in the ruling leadership (boomers) failed in upholding the generational contract, and blinded themselves through action (reducing visibility increasing corruption etc). Front-loaded benefits eventually must be paid back, but they've left their children to foot and pay the bill. This touches on Thomas Paine's writings in his Rights of Man, but I digress.

Minor shortages in key goods started about a year ago, and the grocery stores have been good at hiding it by putting things in front of the spaces so people won't notice empty shelves. That can't help but get so much worse with the tariffs in place now.

Update: Also, while I touch on market-related things, I didn't want to get too deep into the details; there's a lot more. As a result I didn't even touch on the adoption of FASAB S-56 in 2019, during the Kavanaugh grilling, which was used as a smokescreen. Catherine Fitts best touches on why this legislation is important and relevant to discussions of consolidation; she's done interviews.

The TL;DR is any bank that meets the requirements under this measure can legally keep separate books and modify the disclosed consolidation as needed without footnote. No measures will be available to track artificial distortions originating here except by lagging indicator (Mises, SCP, chaos).


> I would have liked to respond to you yesterday but people rate limited me through downvotes (to 2 posts per day).

No worries. fwiw, I didn't down vote you.

haven't read your response yet.. it's quite long and I prefer reading on a large screen... currently on my phone stuck in a doctor's office...


> It's quite long

No worries, I'm sorry it couldn't be more concise; unfortunately that is the way of things. Sometimes you can't simplify further without losing important meaning, absent a sound and reliable common reference and definition.

I tried to keep it as short as possible while preserving a unique meaning unambiguously. Given the adversarial and critical nature of many participants on social media, this has become a necessity when the material is technical. This problem is why communicating on this subject matter is so difficult today.

The state of education is also horrible. I could mention horror stories but it'd be another long paragraph, and not add much. It's quite hard to communicate technically without a good common vernacular and reference.

> currently ... stuck in doctor's office.

Hopefully nothing too serious, doctors can be a real pain, but often necessary.

Getting back to your response, If you are already familiar with the material that's covered rigorously (with historical footnote references) by Ludwig von Mises, in his book on the Theory of Money and Credit, then what's mentioned basically follows along similar lines of reasoning.

It takes the criteria he uses of the various forms/groups of money, and money substitutes, and follows along the same analysis framework. The failures logically follow when you treat fundamentally separate functional groups as the same.

There is often a fine nuance between the legal concept of money as debt settlement or obligation, and the economic concept of money which often gets ignored or conflated to what amounts to fallacy when examining these kind of documents, which is why unambiguous definitions are so important.

On a side note, many people today have never heard of Mises, and the material on this subject matter has not been aggregated appropriately anywhere else at least as far as I'm aware, in such an equivalent short form and aimed towards the common person.

I also try not to mention him by name because there is a coordinated effort on HN and other places to downvote any posts with keywords related to what he talks about to the point where the responses can't be seen.

Every post I've referenced him in the last 6 months, and with certain others, has been downvoted to the point of the content being removed from visibility; often within a day.

I take a karma hit every time, but he covers material that is sound and foundational.

I've also seen similar behavior when referencing Adam Smith's Wealth of Nations, Landes on Wealth & Poverty, and Carl Menger on subjective value. All well established and recognized works from great minds of the past.



So get the car delivered in Europe, go on a driving holiday, and then ship it to the USA.



It's not just that the pill works as well as the jab: Lilly has stockpiled over a billion pills to meet expected demand.


