The issue is, even with all the browser protections, the moment you create an account anywhere or buy something and input your name/email address/shipping address, your "hashed data" immediately gets sent to Meta/Google as a conversion saying "this guy bought a cat toy", and you start getting ads for cat-related stuff everywhere.
They don't even need to "track" you properly for this stuff to work and it seems there's no way to escape it.
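To make the "hashed data" part concrete, here's a minimal sketch of how that kind of server-side conversion payload is typically built. The `hash_pii` helper and the payload shape are illustrative assumptions, not any platform's exact API, but the general pattern (normalize the email, SHA-256 it, send the digest) is how these conversion APIs match you without the raw address ever leaving the merchant:

```python
import hashlib

def hash_pii(value: str) -> str:
    # normalize (trim, lowercase) then hash, roughly how conversion-style
    # APIs expect customer data; exact normalization rules vary per platform
    return hashlib.sha256(value.strip().lower().encode()).hexdigest()

# hypothetical conversion event a merchant backend might send
payload = {
    "event": "Purchase",
    "em": hash_pii("Jane.Doe@example.com "),  # hashed, not raw, email
    "value": 9.99,
}

# the ad platform hashes its own users' emails the same way, so the
# digests match even though the raw address was never transmitted
print(payload["em"] == hash_pii("jane.doe@example.com"))  # -> True
```

That matching-on-digests step is why no cookies or in-browser "tracking" are needed at all: the merchant and the platform both already know your email.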
I don't experience that, though I have friends who use smartphones who describe it. So I think a lot of it is via javascript. I doubt every retailer, or even a significant fraction, has their backend sending that type of data to $megacorp. But maybe I'm just lucky, or shop at weird places, or it's because I use a new email address @superkuh.com for every account sign-up. Or maybe I'm just not seeing the targeted ads for my $superkuhprofile that do exist, because I have almost all ads successfully blocked. Perfect is the enemy of good anyway; all mitigations help a bit. And blocking JS is a huge mitigation.
If those companies are using big SaaS companies for eCommerce and haven't gone into the "Don't Track" part of their admin panel to turn off tracking, a lot of those SaaS companies will just sell off the data.
So sure, the small-time cat toy retailer on Etsy won't, but the credit card processor or shipper might.
I think part of the issue is that these retailers are also customers of Meta/Google on the ad-buying side, and as a merchant you're highly encouraged to send as much data on your events as you can, or your conversion tracking can be "less accurate" and your campaigns less efficient.
So it's less about "we're sending the data to $megacorp" and more about "I want the most bang for buck on my own campaigns" when the decision is made.
Using a different email certainly helps, though!
EDIT: highly encouraged by Meta et al.! Whether this is a legitimate request to improve results or pure self-interest on Meta's part, I don't know!
A fascinating thing for me after reading this is: how can it be that the "circuit input" is compatible with its output to the point where the performance improves? The training process never saw this particular connection just like it didn't see layer 60 output into layer 3 or whatever.
Great read, makes you wonder what else is encoded in these models that might be useful!
I think the intuition is that the first N layers decode into "thought language" while the last N encode back to the desired output language. So if there are well-defined points where the model transitions between decoding/understanding, thinking, and rendering back to language, those two transition points should be in the same vector space of "LLM magic thinking language".
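One way to see why that wiring can even be dimensionally compatible is the residual stream: in a transformer, every layer reads from and writes additive updates to the same d-dimensional stream, so any layer's output is a valid input to any other layer. Here's a toy numpy sketch of that idea (the random linear "layers" are a stand-in, not a real transformer):

```python
import numpy as np

rng = np.random.default_rng(0)
d = 16          # residual stream width, shared by all layers
n_layers = 6

# toy "layers": each adds a small update to the shared residual stream
Ws = [0.1 * rng.standard_normal((d, d)) for _ in range(n_layers)]

def run(x, layer_order):
    for i in layer_order:
        x = x + Ws[i] @ x   # residual update: x stays in the same space
    return x

x0 = rng.standard_normal(d)
full = run(x0, range(n_layers))
skipped = run(x0, [0, 1, 4, 5])  # feed layer 1's output straight into layer 4

# both results are points in the same d-dimensional residual space,
# so the unusual connection is at least type-compatible by construction
print(full.shape, skipped.shape)
```

Whether the *contents* are semantically compatible at a given skip point is the interesting empirical question; the architecture only guarantees the spaces line up.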
A good example of this is Rust. Rust is memory safe by default compared to, say, C, at the expense of you having to be deliberate in managing memory. With LLMs this equation changes significantly, because that harder/more verbose code is being written by the LLM, so it won't slow you down nearly as much. Even better, the LLM can interact with the compiler if something is not exactly as it should be.
On a different but related note, it's almost the same as pairing django or rails with an LLM. The framework allows you to trust that things like authentication and a passable code organization are being correctly handled.
I was under the impression from Rust developers that it was one of the languages LLMs struggled with a bit more than others? My view could be (probably is) very outdated.
Very small suggestion: Can you make the entries actual links/anchor tags so that it is possible to copy link, middle-click to open in a new tab, and so on?
Using the database as a queue, you no longer need to set up transaction triggers to fire your tasks; you get atomic guarantees that either the data and the task were both created successfully, or nothing was created.
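A minimal sketch of that atomicity, using sqlite3 from the standard library (table names and the `place_order` helper are made up for illustration). The single transaction means the order row and its task row either both land or both roll back:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, item TEXT)")
conn.execute("CREATE TABLE tasks (id INTEGER PRIMARY KEY, order_id INTEGER,"
             " kind TEXT, done INTEGER DEFAULT 0)")

def place_order(item):
    # one transaction: commits on success, rolls back on any exception,
    # so the data and its task are created together or not at all
    with conn:
        cur = conn.execute("INSERT INTO orders (item) VALUES (?)", (item,))
        conn.execute("INSERT INTO tasks (order_id, kind) VALUES (?, ?)",
                     (cur.lastrowid, "send_confirmation_email"))

place_order("cat toy")

# a worker polls the same database instead of a separate broker
task = conn.execute("SELECT id, kind FROM tasks WHERE done = 0 LIMIT 1").fetchone()
print(task)
```

With a separate broker (Redis, RabbitMQ, etc.) you'd instead need outbox tables or triggers to get the same guarantee.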
I think the problem starts with the name. I've been coding with LLMs for the past few months, but most of it is far from "vibed": I am constantly reviewing the output and guiding it in the right direction. It's more like a turbocharged code editor than a "junior developer", imo.
Have you tried Roo Code in "Orchestrator" mode? I find it generally "chews" the tasks I give it to then spoon feed into sub-tasks in "Code" (or others) mode, leaving less room to stray from very focused "bite-sized" changes.
I do need to steer it sometimes, but since it doesn't change a lot at a time, I can usually guide the agent and stop the disaster before it spreads.
A big caveat is I haven't tried heavy front-end stuff with it, more django stuff, and I'm pretty happy with the output.
If you're visiting Shinjuku, nearby there's a narrow street called Omoide Yokocho. Just take in the vibe and choose a yakitori spot to grab a bite and drink your poison of choice (tea/beer/sake). I would recommend going at night/dinner time.
Speaking of Shinjuku and videogames, if you've ever played any yakuza/like a dragon game, you owe it to yourself to go to Kabukicho and its big red gate.
In any case, whatever you choose to visit in Tokyo, it will be really nice, and a lot of it will still be waiting when you eventually come back.