I have Codex and Gemini critique the plan and generate their plans. Then I have Claude review the other plans and add their good ideas. It frequently improves the plan. I then do my careful review.
This is exactly the workflow I've found leads to the most consistent, high-quality results as well. I don't use Gemini yet (except for deep research, where it pulls WAY ahead of either of the other 'grounding' methods).
But I use Codex to plan big features and Claude to review the feature plan (it often finds overlooked discrepancies), then to review the milestones and plan their implementation in planning mode; then I clear context and code. Works great.
It’s the same as buying a house. I want to buy a house for $1.2m. I put down $200k and borrow $1m. The bank determines the value of the house. My $200k of equity absorbs roughly the first 17% of any drop in prices, so the bank is fairly protected. Businesses are different because they really can go to $0. Banks will need more collateral and/or make many different types of loans to dilute the risk.
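The arithmetic can be sketched quickly. This is a minimal illustration, not a lending model: `lender_exposure` is a hypothetical helper, and it ignores amortization, interest, and foreclosure costs.

```python
def lender_exposure(price, down, drop):
    """Bank's loss if the house is sold after a fractional price
    drop, assuming an interest-only loan of price - down."""
    loan = price - down
    value = price * (1 - drop)
    return max(0.0, loan - value)

# $1.2m house, $200k down: the first ~16.7% of any drop is
# absorbed entirely by the buyer's equity.
print(lender_exposure(1_200_000, 200_000, 0.10))  # 0.0 -- bank whole
print(lender_exposure(1_200_000, 200_000, 0.20))  # ~40,000 hits the bank
```

Only once the drop eats through the whole down payment does the bank start taking losses, which is why a 100%-loss-capable business loan needs a different structure.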
Yep, once you accept “might makes right” the laws in a democracy become polite suggestions. Oh, your town is in the way of hydropower? Too bad the gov’t has more guns than you. That’s how you get the Three Gorges Dam in China. Nevertheless, the Trump Mafia is demonstrating how paper thin democracy and rule of law really is in the US.
The true cost of labor is paid for by taxes. The bottom 50% in the US pay little to negative taxes due to government benefits. Most taxes are paid by the rich. Therefore, the true cost of labor is paid for by the rich, rather than by consumers in the form of higher prices.
That always ignores the 7.65% everyone pays in FICA taxes, which really just go into the general fund, or the additional 7.65% your employer pays in on your behalf. If you are an independent contractor (as even an Uber driver is), you pay the entire 15.3% yourself.
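For concreteness, the statutory split works out like this. A simplified sketch: `fica` is a hypothetical helper, and it ignores the Social Security wage-base cap and the Additional Medicare Tax on high earners.

```python
SS_RATE = 0.062         # Social Security, per side
MEDICARE_RATE = 0.0145  # Medicare, per side

def fica(wages, self_employed=False):
    """Approximate FICA owed on wages; the self-employed pay
    both the employee and the employer halves."""
    per_side = SS_RATE + MEDICARE_RATE                  # 7.65%
    rate = 2 * per_side if self_employed else per_side  # 15.3% vs 7.65%
    return wages * rate

print(fica(100_000))        # employee share: ~7,650
print(fica(100_000, True))  # self-employed: ~15,300
```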
Since 2014 most bits of central London have seen 10-30% increase, which is below inflation and far, far below stock returns. This should persuade my wife not to buy here. Thanks!
This resonates with me. I have a hobby where I transform classic books into hand-written papyrus as the author intended. There is something almost meditative in unspooling a 10kg scroll where the sometimes illegible ink allows me to wonder what that sentence even was.
So glad to see my people. I have a hobby where instead of a modern synthetic ball, I play soccer with a severed goat head. There is something positively transcendent about the resonating thunk of a kick that you just don’t get with a standard ball.
In my community, we use the head of the defeated chieftain of the nearby tribe instead, and keep the goat heads for soup. The lack of horns makes for a better roll.
Funny how there are minor variations the world around.
Hi, AI apologist here. This scenario is a problem with or without AI. You can’t drop a 13k line PR you don’t understand without prior discussion. There are many ways to use AI. Your scenario (keynote speech) is a bad way to use it. Instead, a PR where you understand every line, whether you or an AI wrote it, should be fine. It would be indistinguishable from human generated code.
AI is a tool like any other. I hire a carpenter who knows how to build furniture. Whether he uses a Japanese pullsaw or a CNC machine is irrelevant to me.
That's a fair answer. How do you stop people from doing it though? How do you stop it from becoming every lazy person's first reflex instead of every smart person's third?
We have historically intervened socially (via state regulation, taboo, or censure) in areas where the likelihood of misbehavior was high or the result of misbehavior was severe enough.
For example: nuclear material possession or refinement; slavery; consumer-available systemic antibiotics; ozone-damaging coolants; dowries.
Proscriptions on those are imperfect and inconsistent worldwide, but still prevalent. Each of them is a thing which benefited many people but whose practice enabled massive harm due to human failures (like laziness).
I suppose the issue is that it's a multiplier for bad actors. It has become so much easier to generate plausible-looking code (or legal documents, say, or any number of other things that previously required a knowledgeable human to produce something that at least passes the sniff test) and simply overwhelm the limited bandwidth of good actors.
Enough that stacked PRs are a thing. At my job people sometimes build large features on a branch for 6 months. Then it’s a massive PR and no one can review it.
I used ET, but it requires a server process too. Some machines are too locked down to allow this. I wish there were a way to kick-start the server on demand.
Well yes, but mosh starts its server over an initial SSH connection used for setup, so you only need the binary to exist in the PATH of the remote host and you're done. It's more difficult to arrange for a service to already be running, especially if you don't have root.
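Roughly, the bootstrap looks like this (an illustrative sketch of mosh's handshake, not a full transcript; the port numbers are examples):

```shell
# mosh uses plain SSH only to launch its server on demand:
ssh user@host -- mosh-server new -p 60000:61000
# mosh-server detaches and prints something like:
#   MOSH CONNECT 60001 <base64-session-key>
# after which the client abandons SSH and talks to that UDP port directly:
MOSH_KEY=<base64-session-key> mosh-client <host-ip> 60001
```

That is why no persistent daemon or root access is needed, only a `mosh-server` binary somewhere on the remote PATH.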
I’m actually working on just this. What’s the smallest training data set required to learn tic-tac-toe? A 5yo doesn’t need much training to learn a new game, but a transformer needs millions of samples.
It’s a glib analogy, but the goal remains the same. Today’s training sets are immense. Is there an architecture that can learn something with tiny training sets?
Maybe ZephApp, when it's actually released.
But it would be interesting to record day-to-day conversations (face-to-face, using voice recognition) to train a virtual doppelganger of myself and use it to find uncommon commonalities between myself and others.
What would someone do with a year's worth of recorded conversations? Would the other parties be identified? How would it be useful, if at all? How about analyzing the sounds/waveforms rather than the words? (e.g. BioAcousticHealth / vocal biomarkers)
Perhaps typing into a text field is the problem right now? Maybe have a HUD in a pair of glasses; better than getting a brain chip! The most recent or most repeated conversations would matter most. It could lead to a reduction in isolation within societies, in favor of "AI training parties." Hidden questions in oneself answered by a robot guru as bedtime storytelling, but tied to the real world and real events.
I'm certainly not challenging anything you're writing, because I only have a very distant understanding of deep learning, but I do find the question interesting.
Isn't there a bit of a defining line between something like tic-tac-toe, which has a finite (and, for a computer, pretty limited) set of possible combinations, where it seems like you shouldn't need a training set larger than that set of combinations, and something more open-ended, where the size of your training set mainly impacts accuracy?
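It is a real dividing line: tic-tac-toe's entire state space is small enough to enumerate outright, so in principle no statistical learning is needed at all. A quick sketch in plain Python, searching over every board reachable in legal play (the commonly cited count is 5,478 positions):

```python
# The eight winning lines on a 3x3 board, as index triples.
WINS = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

def winner(board):
    """Return 'X' or 'O' if that player has three in a row, else None."""
    for a, b, c in WINS:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return None

def count_states():
    """Count every board reachable from the empty position,
    stopping at terminal states (a win or a full board)."""
    start = ' ' * 9
    seen = {start}
    stack = [start]
    while stack:
        board = stack.pop()
        if winner(board) or ' ' not in board:
            continue  # terminal: no moves follow a win or a full board
        mover = 'X' if board.count('X') == board.count('O') else 'O'
        for i in range(9):
            if board[i] == ' ':
                nxt = board[:i] + mover + board[i + 1:]
                if nxt not in seen:
                    seen.add(nxt)
                    stack.append(nxt)
    return len(seen)

print(count_states())  # 5478, counting the empty board
```

So any training set that "covers" tic-tac-toe is bounded by a few thousand positions, whereas for open-ended domains no such exhaustive bound exists and more data mostly buys accuracy.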