I suppose this is banal/obvious to many, but I found this very interesting given the practical context.
>I write code with AI tools. I expect my team to use AI tools too. If you know the codebase and know what you're doing, writing great code has never been easier than with these tools.
This statement describes a transitional state. Contributors became qualified in this way before AI. New contributors using AI from day one will not be qualified in the same way.
>The question is more fundamental. In a world of AI coding assistants, is code from external contributors actually valuable at all?
...
>When code was hard to write and low-effort work was easy to identify, it was worth the cost to review the good stuff. If code is easy to write and bad work is virtually indistinguishable from good, then the value of external contribution is probably less than zero.
If the net value of external contributions is negative, the decision makes itself: end external contributions.
For the purpose of thinking up a new model... unpacking that net is the interesting part. I don't mean sorting high-effort contributions from low-effort ones. I mean making productive use of low-effort one-shots.
AI tools have moved the old bottlenecks and we are trying to find where the new ones are going to settle down.
>I also think it would take some doing to get advertisers to jump on a new platform when YouTube has almost all the viewers.
Volume isn't even your main issue here. YouTube ads are powered by AdWords... which advertisers already use. It comes with tracking and user analytics built in.
You can't compete with YouTube by replicating this business model.
Even so... direct YouTube ad revenue per view is low. Many successful YouTubers monetize with sponsors. That is replicable, if a (single) YouTuber has enough views.
I think there can be markets for smaller, paid video sites... but that's not really a competitor to YouTube. It's more like competition for Substack.
The way YouTube is managed, including all the reasons for criticism, is why it is successful.
Legible rules have loopholes. Keeping advertisers "on their toes" with mystery rules is a strategy.
It makes sense to keep the platform as unoffensive as possible: strict nudity rules, and other such "hard" rules. Demonetization gives YouTube a chance to implement soft/illegible rules... many of them simply assumed or imagined. It also makes business sense to suppress politics a little. The chilling effect is intentional... and understandable.
Honestly, I think the more open alternative to YouTube is podcasting. Podcasting has terrible discovery, and its video side is underdeveloped, but... its persistence proves it is a good platform.
Half of "the problem" with YouTube is Google running the platform and pursuing their own interests. These are somewhat restrictive, but they also make sense.
The other half is intense competition for daily attention. That's what a low friction, highly accessible platform does. You can't have everything.
Without all the restrictions and manipulations that YouTube does, the platform would be 100% nudity, scandal, and suchlike.
>the problem is that in order to develop an intuition for questions that LLMs can answer, the user will at least need to know something about the topic beforehand. I believe that this lack of initial understanding of the user input
I think there's a parallel here with the internet as an information source. It delivered on "unlimited knowledge at the tip of everyone's fingertips," but lowering the bar to access also lowered the bar for quality.
That access "works" only when the user is capable of doing their part too: evaluating sources, integrating knowledge, validating, cross-examining.
Now we are just more used to recognizing that accessibility comes with its own problem.
Some of this is down to general education. Some to domain expertise. Personality plays a big part.
The biggest factor is, I think, intelligence. There's a lot of 2nd- and 3rd-order thinking required to simultaneously entertain a curiosity, consider how the LLM works, and exercise different levels of skepticism depending on the types of errors LLMs are likely to make.
I'm not sure that advertising specifically is the issue.
I think a lot of the ills of social media are ills of the medium itself... once it reaches "everyone scale," game theory maturity and whatnot.
Anyway the way past it is probably to go past it... and onto the next medium. Back is rarely an available option.
On that note... it's curious that Digg now describes itself as a "community platform," not a social network. Ironic, considering they bought the name "digg."
A compilation of claims, takes, narratives, shills, expectations and predictions from the late 90s "information superhighway" era.
I wonder if LLMs can produce this.
A lot of the dotcom exuberance was famously "correct, but off by 7 years." But... most of it was flat wrong. "Right but early" applies mostly to the meta investment case: "the internet business will be big."
One that stands out in my memory is "turning billion dollar industries into million dollar industries."
With ubiquitous networked computers, banking and financial services could become "mostly software." Banks and whatnot would all become hyper-efficient Vanguard-like companies.
We often start with an observation that economies are efficiency-seeking. Then we imagine the most efficient outcome given legible constraints of technology, geography and whatnot. Then we imagine dynamics and tensions in a world with that kind of efficiency.
This, incidentally, is also "historical materialism." Marx had a lot of awe for modern industry, the efficiency of capitalism and whatnot. Almost Adam Smith-like... at times.
Anyway... this never actually works out. The meta is a terrible predictor of where things will go.
Imagine law gets more efficient. Will we have more or fewer lawyers? It could go either way.
Part of the fun is that predictions get tested on short enough timescales to "experience" in a satisfying way.
Idk where that puts me, in my guess at "hard takeoff." I was reserved/skeptical about hard takeoff all along.
Even if LLMs had improved at a faster rate... I still think bottlenecks are inevitable.
That said... I do expect progress to happen in spurts anyway. It makes sense that companies of similar competence and resources get to a similar place.
The winner-take-all thing is a little forced. "Race to singularity" is the fun, rhetorical version of the investment case. The implied boring case is facebook, adwords, aws, apple, msft... i.e., the modern tech sector tends to create singular big winners... and therefore our pre-revenue market cap should be $1trn.
Good points, but don't underestimate "granny needs to visit the bank to get access to her account again" as a problem.
For a lot of people, dealing with (now mostly digital) bureaucracies is a major stress in life. The biggest one, for some.
It's not just about inconvenience. It's sometimes about losing access to something, and just not having it for a while.
In terms of practical effect, a performance metric for a login system could be "% of users that have access at a given point." There can be a real tradeoff, irl, between legitimate access and security.
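The "% of users that have access at a given point" idea can be sketched concretely. This is a minimal illustration, not anything from the comment: the `access_rate` function, the lockout records, and the user names are all hypothetical, assuming a system that tracks per-user lockout expiries.

```python
from datetime import datetime, timedelta

def access_rate(users, now):
    """Fraction of users who could log in at time `now`.

    `users` maps user_id -> lockout expiry (None means never locked out).
    A hypothetical metric; real systems would also count lost 2FA devices,
    expired credentials, broken recovery flows, etc.
    """
    if not users:
        return 0.0
    accessible = sum(
        1 for expiry in users.values() if expiry is None or expiry <= now
    )
    return accessible / len(users)

now = datetime(2024, 1, 1, 12, 0)
users = {
    "alice": None,                      # full access
    "bob": now + timedelta(hours=2),    # still locked out
    "carol": now - timedelta(days=1),   # lockout already expired
}
print(access_rate(users, now))  # 2 of 3 users have access
```

Tracking a number like this over time would make the access-vs-security tradeoff visible: every stricter rule that locks out attackers also shows up as legitimate users dropping out of the accessible set.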
On the vendor side... the one-time-password fallback has become a primary login method for some, especially government websites.
Customer support is costly and limited in capacity. We are just worse at this than we used to be.
Digital identity is turning out to be a generational problem.
How many HN denizens are the de facto tech support for family members when they can’t login, can’t update, can’t get rid of some unwanted behavior, or just can’t figure stuff out?
I don’t blame them one bit. The tech world has presented them with hundreds of different interfaces, recovery, processes, and policies dreamed up by engineers and executives who assume most of their user base is just like them.