> Changing dimensions or previous sketches is usually fine, but anything more complicated often results in everything in your stack breaking with strange errors, to the point that it's often just easier to re-create the model.
This is usually the result of design workflows and how you avoid it is going to vary based on the CAD package. It definitely requires being pretty deliberate in design, which can make it harder to draft out an initial design. And the path of least resistance is often one that's more likely to break.
One example would be in Fusion, using projected faces in sketches is far more fragile than projecting a body -- but Fusion will happily project faces by default.
Which constraint types you use where is another common cause of breakage.
The thing that makes it frustrating is that none of this is really well documented anywhere and largely ends up being best practice wisdom passed from one person to another, since a lot of this stuff is really non-obvious. And it's confounded yet again by people cargo culting best practices from one CAD package to another that then gets repeated third and fourth hand.
All that said, as you work with it more and delve into more complex designs, you'll end up settling into workflows that result in more resilient models if you're deliberate about it. The "scrap it and start over" cycle is part of the learning experience, IME, as frustrating as it is at the time.
> they would let random retailers fill the order with fake products
What made this all particularly insidious is that Amazon not only commingled inventory, but actively refused to track where inventory came from.
This meant you only needed one fraudulent seller to poison the entire inventory pool, and there was no way to know where the bad product came from because Amazon actively avoided being able to track it.
That's the aspect of it that always felt particularly malicious to me.
Amazon don't check returns either. It's a nightmare if you use their FBA service. We've had product returned, not checked, and then shipped to another customer who then puts in a claim because they didn't get what they ordered - because Amazon didn't check the return. Amazon then claim you're selling counterfeit goods.
Entirely why we no longer use their service and ship direct for Amazon orders. Some people still try the trick, but after Amazon automatically give the buyer a refund, we always put in a claim, and Amazon pay it. So Amazon pay twice. Maybe the cost of just accepting that loss is less than having someone check the return.
The bad part here is letting “poisoned” inventory in.
Adding vendor tracking adds a layer of ERP difficulty that isn’t practical for bulk, cheap items.
You either have to have serial numbers (unique per item, not just a product identifier barcode) or you have to physically segregate inventory by vendor, which is not practical.
If the vendor doesn’t serialize the item, then Amazon has to add it on receipt. Certainly not worth it for a $10-20 item.
Russia has a working system that tracks retail sales of individual cans of beer, bottles of milk and such. Initially it was introduced to track things like shoes and furs that were massively counterfeited, but then expanded to include other goods. So now in a grocery store you use it, for example, for all milk products (milk, cheese, ice cream, etc.), vegetable oil, beer, mineral water. Technically you just scan a different barcode (QR code). There's also an app you can use to scan the thing and get more information, such as the exact producer. The general idea was to fight counterfeit goods, but as a side effect it also enforces shelf life rules or may help to find a drugstore that has a specific drug.
So it is possible and not that expensive even as a country-wide system for goods that cost around $1 (a typical can of beer).
And yes, it does have additional codes for larger-scale packages. So a pack of cigarettes gets its own code, a carton gets its own code, a box of cartons gets its own code. A wholesaler can just scan the box and the system updates the status of every pack inside.
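To make that aggregation concrete, here's a toy sketch -- purely illustrative, not the real system's data model -- where scanning one box-level code propagates a status change down to every carton and pack nested inside it:

```python
# Illustrative only: a code may aggregate child codes, so one box scan
# updates the status of every pack nested inside it.
from dataclasses import dataclass, field

@dataclass
class Code:
    code_id: str
    status: str = "in_transit"            # e.g. in_transit, at_wholesaler, sold
    children: list["Code"] = field(default_factory=list)

    def set_status(self, status: str) -> None:
        # One scan propagates down the whole hierarchy: box -> cartons -> packs.
        self.status = status
        for child in self.children:
            child.set_status(status)

packs = [Code(f"pack-{i}") for i in range(200)]
cartons = [Code(f"carton-{i}", children=packs[i * 20:(i + 1) * 20]) for i in range(10)]
box = Code("box-1", children=cartons)

box.set_status("at_wholesaler")           # the wholesaler scans just the box
assert all(p.status == "at_wholesaler" for p in packs)
```

Presumably the real registry keeps this hierarchy server-side, keyed by the codes themselves, so the scanner only ever sees one QR code.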
What am I missing about this? Couldn't the scammer just replicate the QR code of a legit shop? I thought the point of counterfeit goods was to fool you into buying them instead of the real thing. I guess part of the process would have to be verifying that every shipment of goods received was accurately tracked from a valid "ship from" address, but I have to imagine there's a lot of common warehousing in use for bulk goods. I'm not understanding how the QR code helps solve that.
Maybe a unique bar code per item that includes some private hash information that makes it unique to the producer? Sort of an electronic signature for physical goods? Then if there's a centralized database, copying the QR codes wouldn't do much good. You might be able to slip one in if it is sold before the real version, but each subsequent copy could be caught.
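A minimal sketch of that idea, with the key handling, code format, and registry all invented for illustration: each item carries a unique serial plus an HMAC tag only the producer (or registry) can compute, and the registry rejects a second sale of the same code:

```python
# Invented names and format throughout -- just a sketch of the scheme above.
import hmac, hashlib, secrets

PRODUCER_KEY = secrets.token_bytes(32)    # held by the producer / registry

def make_item_code(product: str) -> str:
    serial = secrets.token_hex(8)         # unique per physical item
    tag = hmac.new(PRODUCER_KEY, f"{product}:{serial}".encode(),
                   hashlib.sha256).hexdigest()[:16]
    return f"{product}:{serial}:{tag}"

sold: set[str] = set()                    # the central registry's state

def checkout_scan(code: str) -> str:
    product, serial, tag = code.split(":")
    expect = hmac.new(PRODUCER_KEY, f"{product}:{serial}".encode(),
                      hashlib.sha256).hexdigest()[:16]
    if not hmac.compare_digest(tag, expect):
        return "reject: forged code"      # can't invent valid codes without the key
    if code in sold:
        return "reject: duplicate -- this exact item was already sold"
    sold.add(code)
    return "ok"

code = make_item_code("beer-0.5l")
print(checkout_scan(code))                # ok
print(checkout_scan(code))                # a copied code is caught on second sale
```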
This is fascinating in the context of how they use and abuse intermediaries to buy and smuggle western tech into Russia. If every chip were that well tracked, it would be a lot easier to clamp down on it.
They didn’t need to actually track things internally. Add a sticker, or even have someone stamp a vendor code onto the item, when adding it to the bins; if a customer complains, you can likely use that sticker to track who added the item after the fact. Critically, you don’t need some 6-digit number for the vendor code: every new vendor for a given item gets a number for that item, and software can remember the relevant mapping.
If some vendor is adding fraudulent items to the system, then based on some thresholds you set, charge the vendor to have those specific products manually sorted out.
Odds are they would make up the ~5 cents per item just by dealing with less fraud. However, you don’t need to track every item: track the first few thousand items from a vendor and you can scale back tracking as they prove themselves. At scale this could be almost arbitrarily cheap.
There are many illegal things which can boost a company’s bottom line. Quite often the law cares about what’s a reasonable effort, which is a very different standard than what maximizes profits.
Something which may or may not decrease their bottom line but definitely significantly reduces counterfeit items ending up in customers’ hands is going to be considered reasonable even if it’s not profit maximizing.
That’s a really clever and simple plan but doing anything like applying stickers, correctly, by hand or robot, can add cost ranging from $<surprising> to $<shocking>.
Maybe they have a variation of your idea where they inkjet a serial number onto a conveyor belt of incoming items or add a super-cheap chip of some kind.
My understanding is every individual item is tracked in an Amazon warehouse - so Amazon knows that the 67th item in a box from supplier X was shipped to user Y.
They don't just track quantities of SKUs like most other retailers.
This always confused me. You have a bottle of glue sold by company X. Then you have 87 different people "buying" the glue in bulk, having it sent to Amazon, and selling it on Amazon as if it comes from their store:
Buying option 1: Company X glue from store A.
Buying option 2: Company X glue from store B.
Buying option 3: Company X glue from store C.
...etc.
But then Amazon says, "actually, these are all the exact same bottles of glue, so we'll throw them all into the same bin, and no matter what 'store' the people buy them from, we'll just grab them out and send them to the customer."
Now even without counterfeits, this is weird. What exactly is the point of store A, B, C, etc.? Company X sends the bottles to Amazon, they get put in one big pile, you buy them on Amazon, and Amazon takes them out of that one big pile and sends them to you.
The only purpose of the "stores" when you co-mingle inventory seems to be:
1. Plausible deniability for counterfeits. Hey, they told us they bought it from company X, we had no way of knowing they didn't.
2. Getting money from people trying to get rich quick in the marketplace. Some people will try all sorts of cuts to boost their Amazon sales in the hope that it will pay off later.
The reality is more complicated than you are assuming. A shockingly large number of vendors grossly mismanage their supply chains such that Company X can actually be legitimately undercut by reseller Company A on Amazon even though Company X produces the product! The mechanics of it are convoluted but legit, and there is a huge ecosystem of companies that arbitrage the legions of producers that are bad at managing their global supply chains.
Amazon has an interest in allowing these resellers of legitimate products to exist because it pushes down prices from the primary vendors, lowering prices for the customer. The primary vendors end up competing against themselves indirectly but they have no one to blame but themselves. This is the milieu in which counterfeit products exist.
If the producers of these products were consistently competent at managing their supply chains it would be much less of an issue because it would clear the field of resellers arbitraging the mismanagement, leaving only Company X and the counterfeiters which is a much easier problem to solve because you don’t have to worry about banning legitimate resellers. But that isn’t where we are.
That’s one way but far from the only one. Producers like to do things like make random deals through their myriad divisions to offload inventory to a random reseller very cheaply, inventory that ultimately finds its way onto Amazon at a price that undercuts the original producer’s contract on Amazon. The cost of sales is not the same on Amazon even if you are selling the same product, so they can legitimately undercut you. You also have different divisions of the same company around the world all selling on Amazon under different contracts, competing with each other (which Amazon tacitly encourages AFAICT).
Smart companies put all contracts with Amazon implications, globally, under a single person who can see across every deal. If they sell to someone with a restriction on Amazon resale, they will mark those goods so that they can track them if they show up on Amazon. However, there are so many fly-by-night resellers that this is a losing proposition, so many don’t bother with those resellers anymore because enforcement yields nothing.
The vast majority of companies are naive and not very smart about any of this. People that know how to systematically set up a sales program that is profitable and resistant to arbitrage on Amazon get paid a lot of money in industry. It isn’t that hard but most companies can’t seem to figure it out.
That's one approach. There is also "buy it at a discount, resell it without much markup" and "buy it earlier, store it until prices rise", and plenty of other ways to perform this arbitrage.
I read it as "For items in the ten to twenty dollar range, it's not worth adding a vendor label," and I don't suppose it's the cost of the sticker, but how much longer does it take the warehouse worker to take it from a shelf and put it in a box if they add a sticker to every item? +5%? +10%? +100%? (It takes very little time to put an item in a box; I could see adding a sticker doubling this...)
Is that real? I find it hard to believe that Amazon effectively accepted stock from third parties "as is" and lost track of where it came from. It's more likely that they don't tell you than they don't track.
In the seller documentation they say they can track the source of commingled inventory - they achieve this by never putting them on the same physical shelf location.
A fair point and important distinction, but so is the difference between "we CAN" and "we WILL/DO". That "myth" didn't come out of thin air. It's a result of Amazon not doing that unless they felt it financially prudent to do so, or until enough people bitched about it.
The OP article is exhibit A for how common of an issue this was.
We know that they would not provide such tracking for those conducting fraud investigations. You can believe they intentionally didn't track the source or that they intentionally refused to share the information to root out fraud; either one is a very bad look for Amazon.
It wouldn't surprise me. Amazon knows where every item is in its FC and knows the motions of every item's placement, in the grand scheme of things.
It's not that hard to then track back from an order exactly what bin or tote or shelf the item was pulled from, then look at what shipment(s) that bin's items came from to figure out what supplier it came from.
They know the counterfeit goods came in and were stowed to bin XYZ and they know that someone pulled from XYZ...
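As a toy illustration of that trace-back (all table names and IDs invented), commingling is exactly what turns "which supplier?" from a lookup into a candidate set:

```python
# Invented data: order -> bin it was picked from -> inbound shipments stowed
# there -> suppliers. Commingling means one bin can map to several shipments,
# so without per-supplier bins you get candidates, not one supplier.
pick_events = {"order-789": "bin-XYZ"}
bin_receipts = {"bin-XYZ": ["ship-12", "ship-34"]}
shipment_supplier = {"ship-12": "seller-A", "ship-34": "seller-B"}

def candidate_suppliers(order_id: str) -> set[str]:
    bin_id = pick_events[order_id]
    return {shipment_supplier[s] for s in bin_receipts[bin_id]}

print(candidate_suppliers("order-789"))   # {'seller-A', 'seller-B'} -- ambiguous
```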
It's co-mingled, so you may not get your stuff back; you just get similar stuff back. That is, if you pay to get it back. It's usually cheaper to let them flog it off cheap on Prime Day.
> I just have to mention that IRC had these archives so repeat questions had a corpus to search. The walled gardens don't.
For many businesses, this is a feature, not a bug.
Internal communications are discoverable in litigation. If you have records, you can be compelled to turn them over.
I used to work in healthcare. Internal messages had a maximum retention of 30 days. That wasn't driven by IT or the users. That was a decision made by legal. In that space, you are always being sued by somebody. The lawyers want to minimize exposure and that's a fight they're basically always going to win.
To be clear: it's better if that's a decision made by the business. But it's also one of those cases where what the decision makers care about isn't necessarily aligned with what the users care about, so there's ultimately not a lot of incentive for Slack to care.
Roo has codebase indexing that it'll instruct the agent to use if enabled.
It uses whatever arbitrary embedding model you want to point it at and backs it with a Qdrant vector DB. Roo's documentation points you toward free cloud services for this, but I found those to be dreadfully slow.
Fortunately, it takes about 20 minutes to spin up a Qdrant docker container and install Ollama locally. I've found the nomic-embed-text model is fast enough for the task even running on CPU. You'll have an initial spin-up as it embeds the existing codebase, then it's basically real-time as changes are made.
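For the curious, here's roughly what those moving parts look like wired together by hand -- not Roo's actual implementation, just an assumed-typical use of the qdrant-client and ollama Python packages once the container (`docker run -p 6333:6333 qdrant/qdrant`) and model (`ollama pull nomic-embed-text`) are up:

```python
# Sketch: embed a code snippet with ollama's nomic-embed-text and
# store/search it in a local Qdrant instance.
import ollama
from qdrant_client import QdrantClient
from qdrant_client.models import Distance, PointStruct, VectorParams

client = QdrantClient(url="http://localhost:6333")
client.recreate_collection(
    collection_name="codebase",
    vectors_config=VectorParams(size=768, distance=Distance.COSINE),  # nomic-embed-text is 768-d
)

snippet = "def retry(fn, attempts=3): ..."
vec = ollama.embeddings(model="nomic-embed-text", prompt=snippet)["embedding"]
client.upsert(collection_name="codebase",
              points=[PointStruct(id=1, vector=vec, payload={"path": "utils.py"})])

query = ollama.embeddings(model="nomic-embed-text", prompt="retry helper")["embedding"]
for hit in client.search(collection_name="codebase", query_vector=query, limit=3):
    print(hit.payload["path"], hit.score)
```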
FWIW, I've found that the indexing is worth the effort to set up. The models are generally better about finding what they need without completely blowing up their context windows when it's available.
On the one hand, it's good that we're seeing a lot of exploration in this space.
On the other, the trend seems to be everyone developing a million disparate tools that largely replicate the same functionality with the primary variation being greater-or-lesser lock-in to a particular set of services.
This is about the third tool this week I've taken a quick look at and thought "I don't see what this offers me that I don't already have with Roo, except only using Claude."
We're going to have to hit a collapse and consolidation cycle eventually, here. There's absolutely room for multiple options to thrive, but most of what I've seen lately has been "reimplement more or less the same thing in a slightly different wrapper."
I've been contributing to an open source mobile app [1] that takes two swings at offering something that Roo does not have.
1. Real-time sync of CLI coding agent state to your phone. Granted, this doesn't give you any new coding capabilities; you won't be making any different changes from your phone, and I would still choose to make a code change on my computer. But the fact that it's only slightly worse (you just wish you had a bigger screen) is still an innovation. Making Claude Code usable from anywhere changes when you can work, even if it doesn't change what you can do. I wrote a post trying to explain why this matters in practice. https://happy.engineering/docs/features/real-time-sync/
2. Another contributor is experimenting with a separate voice agent in between you and Claude Code. I've found it usable and maybe even nice? The voice agent acts like a buffer to collect and compact half-baked, think-out-loud ideas into slightly better commands for Claude Code. Another contributor wrote a blog post about why voice coding on your phone while out of the house is useful. They explained it better than I can. https://happy.engineering/docs/features/voice-coding-with-cl...
This is awesome. I’ve tried several of the mobile setups and this worked like a charm without any fiddling. I’ve been using Termius + Tailscale but this is much better UX. Thanks!
I currently do this with Termius and ssh into the box I’m working on, then launch Claude Code. The only issue I have is the occasional network issue causing the session to drop.
You likely know this, but in case you don’t: Termius makes it easy to use “mosh”, which makes your connection resistant to network drops and resumable. I am experimenting with it right now. Once you install mosh on your server, click the “mosh” setting in the connection settings in Termius, and you are good to go.
Yes, you can self-host the relay server (and please do so!). If you are like me and already have a Mac Mini in a closet running docker, k3s, or just have a friend-group kubernetes cluster, you can get it running in about 3 minutes.
Why a relay server? I want an app to just work with no fuss while running an AI agent on arbitrary consumer hardware. Having both the mobile app and agent wrapper process connect outwards through any firewalls and networks to a third computer on the public internet is the most boring and dumb way for it always to work.
Plus I wanted a system that did not require the app and the computer running the AI agent to both be online at the same time. Having a third computer act as a dumb mailbox handles some corner cases I care about.
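A minimal sketch of that "dumb mailbox" relay, assuming a recent version of the Python websockets package (the channel handshake and message format are invented): both peers dial out to the relay, and messages for an offline peer are queued until it reconnects:

```python
# Both the phone app and the agent wrapper dial OUT to this server, so no
# inbound firewall holes are needed on either end.
import asyncio
from collections import defaultdict
import websockets

mailboxes = defaultdict(list)   # channel -> messages queued while a peer is offline
subscribers = defaultdict(set)  # channel -> currently connected sockets

async def handler(ws):
    channel = await ws.recv()            # first frame names the channel, e.g. "session-1234"
    subscribers[channel].add(ws)
    for msg in mailboxes.pop(channel, []):
        await ws.send(msg)               # drain anything queued while this peer was away
    try:
        async for msg in ws:
            peers = subscribers[channel] - {ws}
            if peers:
                await asyncio.gather(*(p.send(msg) for p in peers))
            else:
                mailboxes[channel].append(msg)   # store-and-forward
    finally:
        subscribers[channel].discard(ws)

async def main():
    async with websockets.serve(handler, "0.0.0.0", 8765):
        await asyncio.Future()           # run forever

if __name__ == "__main__":
    asyncio.run(main())
```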
I've been trying to surreptitiously get Claude Code and an oven-specific MCP server to run on my friend's smart oven for a prank. However, this oven enters a low-power state when you don't interact with it, killing the network connection. My vision is to queue up commands via the mobile app with fuzzy logic, and then have the oven make weird noises as determined by Claude Code at some later point when they go to make a pizza or something.
To be clear: having a diversity of tools is a good thing! I like having options.
My complaint is more that right now it feels like everybody is rushing to fill the exact same space with the exact same feature sets.
It's resulting in a lot of superficial diversity that's functionally homogenous. I want to see more applications that are pushing the capabilities of current AI tooling in creative directions.
> Like during the dawn of web 2.0 we had lots of aggregators and forums instead of "Reddit and others."
So, in other words, this is the exact opposite? “Lots of aggregators and forums” meant diversity. Lots of small players doing their own thing. What we have now is a handful of big players, and then tons of small players accessing those services with a different coat of paint. It’s like if the web you mention consisted of lots of people doing alternative interfaces to access Facebook and Reddit.
Twitter wasn’t nearly as big or influential, that comparison doesn’t hold. Furthermore, I was replying directly to the reference of “lots of aggregators and forums instead of "Reddit and others."”, which obviously excludes Twitter as part of the “others”.
I think the thing is that most of the people implementing stuff for Claude have already realized it’s just the best option available for… basically everything. I’ve switched to different models before, but I always come back to Sonnet or Opus for doing anything sensible.
Claude may be arguably the best model, but why decide unilaterally for your users that they _have_ to use it?
If there's no particular feature that only Claude offers, this is just needless vendor lock-in. And what happens if another lab releases a model that suddenly trounces Claude at coding? Your users will leave for an app that supports the new hotness, and you won't be able to keep them because of a short-sighted architecture that cannot swap model providers.
The situation with models right now is that to eke out that last bit of performance, you have to do some things in ways that are specific to the model in question - wording of prompts, when and where to introduce relevant parts into context etc.
Because Claude Code does offer particular features. More important than “features” is the fact that it works and does the things you want like 60-70% of the time, with guardrails and practice and attention. Which is way better than competing tools.
Besides that. These tools are changing so fast that to build an agent agnostic tool would be insane given the speed and market pressures right now. Why support roo or cline or cursor cli if it adds 3x engineering cost for 20% more market reach? The reality is there are no standards around the way the actual leading tools work if you wanna build something on Claude/codex/(insert flavor of the week).
Gotta pick your horse and try to hang on, and hope you picked right.
> Claude may be arguably the best model, but why decide unilaterally for your users that they _have_ to use it?
What a ridiculous proposition. It’s me making the app right? You getting to use it (if you want) is purely incidental. If you never use it because it doesn’t support anything but claude, that’s not something I consider a problem.
There’s value in Anthropic being able to optimize their model’s front end to fulfill whatever features they plan for CC - like tool calling. You point out a valid risk of lock-in, maybe this signals they are committed to being at the forefront of coding models (part of enterprise play)?
While I prefer Cline/Roo at work, where I have multiple API plans for AI models, for personal use I have Claude Pro, and that really only works with Claude Code. The benefit is that I can use it on a $20 a month plan.
I mostly use Claude Code with a Max plan via Roo. I have the option of sending prompts to OpenRouter if I've hit usage limits or if I want to try a particular task with a different model (e.g., I'll sometimes flip to Gemini Pro if a particular task could benefit its large context windows).
I wonder if LLMs are actually closer to programming languages, in the sense of how they'll proliferate amongst different companies/people/use cases. Like maybe OpenAI is considered the Java of LLMs, while Claude is more like Python etc.
I'm finding GPT 5 (via Codex CLI on a pro subscription) is far better than Opus for my use cases. Much more than the small difference on swe-bench would suggest. However, the Codex CLI is so immature by comparison that I'm still mostly using Claude, and only escalating to Codex when Claude snookers itself.
I've been using Codex in another tab in Terminal on Windows and it's my go to agent now. Just my two cents. I have a lot of hours with Claude Code, and do appreciate it, but Codex is quite good.
And then the providers ship a landmark feature or overhaul themselves. Especially as their models advance.
Wrappers are stuck constantly chasing the support and feature parity of today.
Anthropic’s Claude Code will look a hell of a lot different a year from now, probably more like an OS for developers, with Claude Agent for the non-technical. Regardless, they are eating the stack.
Pricing/usage will be very simple - a fixed subscription and we will no longer know the tokenomics because the provider will have greatly abstracted and optimized the cost per token, favoring a model that they can optimize margin against a fixed revenue floor.
>Pricing/usage will be very simple - a fixed subscription and we will no longer know the tokenomics because the provider will have greatly abstracted and optimized the cost per token, favoring a model that they can optimize margin against a fixed revenue floor.
Personally, I think it's far more likely that a year from now either SotA models will have shifted elsewhere or Anthropic will have changed their pricing model to something less favorable than the current MAX plans. Either of those scenarios could suddenly result in the current Claude subscription models either not existing or no longer being the screaming deal they are now. I think it's exceedingly unlikely we see any major provider go to an unmetered business model any time soon.
And if you've built your entire workflow around tooling specific to Anthropic's services, suddenly you have an even bigger problem than just switching to a more cost effective provider. That's one of the bigger reasons I'm very skeptical of these wrappers around CC generally.
Even Claude Code itself isn't doing anything that hasn't been or couldn't be done by other tools, other than being tied to a really cheap way to use Claude.
Claude’s wide adoption makes it more likely Anthropic will stay SotA, as do the Max plans. This is the training data they crave to be able to improve, and it’s costing them peanuts while identifying customers who’ll pay and building loyalty. The data flywheel enabled by Claude is the closest thing to a moat any of the models have right now.
Anthropic's models are fantastic, but they are -- by far -- one of the most expensive providers on an API basis. That's a large part of what makes the Max plans a great deal, right now.
Even on a Max plan, it's not hard to completely blow through your usage limits if you try to use Opus heavily.
All it takes is another provider to land a combination of model and cost that makes Code less of a deal for vendor lock-in to become a problem.
Do they have such a flywheel? I remember I had to specifically opt-in to sharing my Claude Code sessions with them. I think by default they aren't training on people's sessions.
I'm more optimistic. Open source and open weights will eat this whole space.
Training is capital-intensive, yes, but so far it appears that there will always be some entities willing to train models and release them for free. All it takes is a slowdown at the frontier for the open models to catch up.
I still can't figure out how to set up a completely free, completely private/no-accounts method of connecting an IDE to LM Studio. I thought it would be the "Continue" extension for VS Code, but even for local LM integration it insists I sign in to their service before continuing.
Strongly seconding Roo Code. I am using it in VSCodium and it's the perfect partner for a fully local coding workflow (nearly 100% open-source too so no vendor is going to pry it from my hand, "ever").
Qwen Coder 30B is my main driver in this configuration and in my experience is quite capable. It runs at 80 tok/s on my M3 Max and I'm able to use it for about 30-50% of my coding tasks, the most menial ones. I am exploring ways to RL its approach to coding so it fits my style a bit more and it's a very exciting prospect whenever I manage to figure it out.
The missing link is autocomplete since Roo only solves the agent part. Continue.dev does a decent job at that but you really want to pair it with a high performance, large context model (so it fits multiple code sections + your recent changes + context about the repo and gives fast suggestions) and that doesn't seem feasible or enjoyable yet in a fully local setup.
Thanks to both for recommending roo, it is the closest I've gotten. I still can't get it to work the way I expect.
When I use Qwen Coder 30B directly to create a small demo web page, it gives me all the files and filenames. When I do the same thing in Roo chat (set to Coder), it runs around in circles, doesn't build multiple files, and eventually crashes out.
Both Roo and Continue support local models (via LM Studio). For Continue, you add a fake account (type in literally anything) and then click 'edit' -- it will take you to the settings JSON, and you can type in LM Studio as your source.
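Worth noting for the fully account-free path: LM Studio can serve an OpenAI-compatible API locally (port 1234 by default), so any OpenAI-style client can talk to it directly. A minimal sketch; the model id is a placeholder for whatever you have loaded:

```python
# LM Studio: Developer tab -> Start Server, default port 1234.
# No account needed; the API key is ignored for the local server.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")

resp = client.chat.completions.create(
    model="qwen2.5-coder-7b-instruct",   # placeholder: use your loaded model's id
    messages=[{"role": "user", "content": "Write a Python function that reverses a string."}],
)
print(resp.choices[0].message.content)
```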
The main problem I'm seeing is that a lot of the tooling doesn't work as well "agentically" with the models. (Most of these tools say something like 'works best with Claude, tested with Claude, good luck with any local models'.) The local models via LM Studio already work really well for pure chat, but trip up semi-regularly on basic things, like writing files or running commands -- stuff that, say, GitHub Copilot has mostly already polished.
But those are basically just bugs in tooling that will likely get fixed. The local-only setup is behind the current commercial market -- but not much behind.
I strongly agree with the commenter above, if the commercial models and tooling slow down at any point, the free/open models and tooling will absolutely catch up -- I'd guess within 9 months or so.
So I work at a company that sells a product that is part of a larger ecosystem. The parent company has spent 35 years NOT having a solution for our niche. There are others like us in the space, too. Some do WMS, some do EDI, etc.
So depending on the parent company, they may prefer to have a - to be a little enterprisey - set of ISVs that are better in specific domains.
> a fixed subscription and we will no longer know the tokenomics because the provider will have greatly abstracted
This is definitely not how most compute-constrained cloud services end up looking. Your cloud storage provider doesn't charge you a flat rate for 5TB/month of storage, and no amount of financier economics can get Claude there either.
There are few new ideas in this space; it's pretty boring.
How many ways can you wrap (multiple agents, worktrees, file manager, diff viewer, accept reject loops, preset specifications for agents) -- let's try Electron! Let's try Tauri! Let's try a different TUI!
What if we sat down and really thought about how these agentic IDEs should feel first instead of copy pasting the ideas to get something out to acquire market and mind share? That's significantly harder, and more worthwhile.
That's how these agentic front ends should be advertised: "Claude Code, plus _our special feature_" and then one can immediately see if the software is filled or devoid of interesting ideas.
The idea here is an IDE for Claude Code specifically. Claude Code is most likely the strongest coding agent right now, but not everyone loves the command-line-only interface. So I totally get it.
Because I mentioned it and it's what I use daily: Roo is a VSCode extension. So you get the entire VSCode ecosystem for free. On the AI specific side, it has every feature this app highlights on its homepage and more. It works with just about any API provider and model you could ask for.
I could probably translate my existing workflow over to Claudia pretty easily, but what does that get me? A slightly different interface seems to be about it.
That's the question I keep hitting with these new tool announcements.
Didn’t know about roo! But I’m with you; I don’t see why folks are investing their efforts in building more of these shiny wrappers, and what their expected end game could be.
If you're opposed to using VSCode for whatever reason, that's reasonable. Though, for me personally, the fact that it only lets you use Claude Code strikes me as a much larger negative on net. It's not at all agnostic in terms of AI provider.
That said, VSCode is a popular platform for this for exactly the reason I think consolidation is eventually inevitable: it's got a huge preexisting ecosystem. There are extensions for practically anything you could ask for.
There's likely room for some standalone, focused apps in this space. I just don't see the current wave of "we put a wrapper around Claude Code and gave it some basic MCP and custom prompt management tools like a dozen other applications this week" being sustainable.
They're all going to end up on their own tiny islands unless there's a reason for an ecosystem to develop around them.
There are lots and lots and lots of us that don't like using VSCode, want to use our own IDE of choice, and use Claude Code. A terminal or standalone app is best for me there, or even better, an IDE plugin.
A tiny island is fine for a tool like this - not everything needs an 'ecosystem'.
The thing about tiny islands isn't that every tool needs a sprawling ecosystem to thrive. It's that applications that don't develop a userbase tend to die. This is as true of open source apps as it is commercial ones.
Typically, applications develop a userbase when they offer something that people can't find elsewhere.
What I'm saying isn't "everyone should be using VScode extensions for this"; it's "I see nothing to distinguish this from a bunch of other functionally identical applications and people just keep building them." I literally don't see a single unique feature promoted on the landing page.
My fundamental point is that we're in a gold rush phase where people are all building the same thing. We'll eventually see a handful of apps get popular and effort swell around those instead of everyone reimplementing the same thing. And my money is on that looking a lot like it usually does: the winners will be the apps that find some way to differentiate themselves.
Yeah, it's absolutely a ('quick, sell shovels') gold rush. Too much that's the same and not enough big/different thinking. It'll take time, and as a buyer I'm not rushing into buying too much of the early crap, personally.
I was with a fairly well-acclimated woman the other day and mentioned something about ChatGPT's voice; she acted confused and asked if that was the paid version (it is).
But long story short, she showed me what she had on her iPhone and it was a totally different app that wrapped a text chat interface around ChatGPT. It wasn't even themed to be a persona or anything, and it came at the expense of any multimodal capabilities.
It just caught me off guard how common that might be.
My (somewhat elderly) father only refers to it as ChatGBT, and when I tried to get to the bottom of why, he said it's because "that's what it's called on my phone".
Seems pretty scammy to me, akin to typosquatting, with the potential to collect a lot more personal information, but he can't always be reasoned with.
Hopefully he heeds my advice to not provide anything personal.
As the code generation tools improve, this will only get worse. Having gen ai build a clone of something with some minor differences will become easier and easier.
If the "on the go" experience is important to you, i.e. you actually want some care and intention put into the phone experience. There are 4 apps I'm aware of:
- Happy Claude Code Client: open source (MIT) effort for a quality mobile app
- Omnara: closed source mobile app, $9/month
- CodeRemote: closed source mobile app, $49/month
- Kisuke: closed source mobile app, private beta, unknown price
If you know of others, I would appreciate a PR to update the table I put together, or just let me know and I'll add it.
They're multipliers against your quota of requests. GPT-4.1 is "free" with a copilot sub, and then the premium models would burn credits against a multiplier. So higher multipliers count more against your monthly quota.
GPT5, Sonnet 4, and Gemini Pro 2.5 are all 1x. Opus is 10x, for comparison.
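A quick worked example of what those multipliers mean in practice (the monthly quota figure here is hypothetical -- check your own plan's allowance):

```python
# Worked example of the multiplier math. 0x here represents the "free"
# base model that doesn't count against the premium quota.
monthly_quota = 300                       # premium requests included (assumed)
multipliers = {"gpt-4.1": 0, "gpt-5": 1, "sonnet-4": 1, "gemini-2.5-pro": 1, "opus": 10}

for model, mult in multipliers.items():
    if mult == 0:
        print(f"{model}: unlimited (doesn't count against quota)")
    else:
        print(f"{model}: up to {monthly_quota // mult} requests/month")
# opus: up to 30 requests/month -- the 10x multiplier burns quota 10x faster
```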
Thanks for the info. Would you consider GH Copilot the best bang for buck currently, or would you recommend just going with the Claude $20 plan? I'm definitely not looking to spend a lot of money, just want to see what kind of mileage I can get on low-end plans
I use Copilot because work is paying for it and it can be made usable, but requires being really deliberate about managing context to keep things on the rails. It's nice that it gives you access to a pretty decent selection of models, though.
At home, I'm mostly using the $100 Claude plan. It's definitely not cheap, but I've found it has a pretty decent balance for my casual experiments with agentic coding.
Another option to seriously consider is setting up an account with OpenRouter and just tossing some cash into your bucket on occasion. OpenRouter lets you arbitrarily make API requests to pretty much any model you want. I've been occasionally tossing $10 or so into mine and I'll use it when I've hit my usage limits with Claude or if I want to see how another model will attack a particular task.
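Since OpenRouter exposes an OpenAI-compatible API, switching models really is just a string swap. A minimal sketch -- the model slug is an example; browse openrouter.ai/models for current ids:

```python
# OpenRouter speaks the OpenAI API, so the stock openai client works as-is.
from openai import OpenAI

client = OpenAI(base_url="https://openrouter.ai/api/v1", api_key="sk-or-...")

resp = client.chat.completions.create(
    model="google/gemini-2.5-pro",        # or anthropic/..., openai/..., etc.
    messages=[{"role": "user", "content": "Summarize this diff: ..."}],
)
print(resp.choices[0].message.content)
```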
FWIW, I use Roo code for all of this, so it's pretty easy for me to switch between models/providers as I need to.
I consider the $10/mo to be an incredible value ... but only because of the unlimited 4.1 usage that can be provided to other compatible extensions (Roo Code, Cline support it) with the VS Code LM API.
Unlike some other workarounds this is a fully supported workflow and does not break Copilot terms of service with reasonable personal usage. (As far as I understand at least. Copilot has full visibility into which tools are using it to make chat requests so it isn't disguising or impersonating Copilot itself. When first setting it up there's a native VS Code approval prompt to allow tool access to Copilot and the LM API is publicly documented).
But anything unlimited in the LLM space feels like it's on borrowed time, especially with 3rd party tool support, so I wouldn't be surprised if they impose stricter quotas for the LM API in the future or remove the unlimited tier entirely.
It's my workhorse model with Roo Code given the cost - or lack thereof. I was about to cancel Copilot after they massively cut the premium limits until they swapped out 4o with 4.1 for the base model. 4.1 is just decent enough for simple, uncreative tasks and is pretty reliable as far as tool use (especially compared to 4o) so I have had a lot of success with it.
For any problem with a lot of reasoning or problem solving I use "architect" mode first with Gemini 2.5 Pro or Claude Sonnet 3.7/4 to break it into discrete subtasks that 4.1 can follow pretty successfully. This approach is very cost effective as Gemini can do a lot of high level planning quickly and cheaply.
I'm sure a lot of the experience depends on how 4.1 is being used, I've fine tuned my custom Roo code configuration to work around its limits without a lot of sacrifices, I'm sure using it out of the box with Copilot is asking a lot more from a weaker model on its own.
Sounds like you’ve figured out a good workflow for yourself. When you switch back and forth between models like that, do they know about all the previous interactions and context? (They must right?)
Yes, either through passing the full context of the current task/conversation to a new model to continue working from that point on, or through the intermediary step of the plan document generated by Gemini or whichever larger model that is then passed back to 4.1 to implement.
The latter is a commonly recommended strategy in general for any large task even with more powerful models to keep context manageable and allow recovering easily if on step 9/10 the LLM loses it and starts mangling all the previous work it did. That way you don't have to start all over from the last good checkpoint or commit.
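A hedged sketch of that plan-then-implement handoff against any OpenAI-compatible endpoint (model names and prompts are placeholders): the plan document is the only context carried between models, and each completed step is a checkpoint to restart from:

```python
# Big model plans, cheap model implements step by step.
from openai import OpenAI

client = OpenAI(base_url="https://openrouter.ai/api/v1", api_key="sk-or-...")

def chat(model: str, prompt: str) -> str:
    resp = client.chat.completions.create(
        model=model, messages=[{"role": "user", "content": prompt}])
    return resp.choices[0].message.content

task = "Add pagination to the /users endpoint"
plan = chat("google/gemini-2.5-pro",                 # larger model: plan only
            f"Break this task into numbered, self-contained steps:\n{task}")

for step in [s for s in plan.splitlines() if s.strip()]:
    result = chat("openai/gpt-4.1",                  # cheaper model: implement
                  f"Overall plan:\n{plan}\n\nImplement just this step:\n{step}")
    print(result)                                    # checkpoint per step
```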
A decade ago, I was really interested in the idea of using a crypto like what Doge was at the time for this specific use case. Back then, a dogecoin was a fraction of a cent so it was a better fit than its current valuations.
Any individual page impression is only worth a few cents to the publisher anyway. I still think there's a lot of potential value in something similar as infrastructure for facilitating ultra-microtransactions on that scale that don't get completely consumed by credit card processors, etc.
I'm not going to maintain subscriptions to every news source out there, but I'd be more than happy to toss something in the tip jar from a fund I could top-up on a regular basis.
The fact that they chose to tie it to and advertise it as "get paid to see ads" is a significant turn-off in my mind even if the rest of the ecosystem theoretically works in functionally the same way.
In my mind, the entire point is to get away from advertising as a revenue stream entirely. I want to pay for the things I consume. If the advertising market has decided that my page impression is worth less than pocket change, I'd far rather just give that money to the publisher directly and avoid ads being part of the equation.
The core idea behind BAT isn't bad, but the marketing is pretty terrible if you're targeting people like me.
I think it is bad because it legitimizes bad practices of the marketing industry. "How bad could grabbing as much data from the population really be? We're sharing our profits!"
I like that idea. If you opened an article you wanted to read, you could be prompted to pay a few cents. You click "yes", funds are transferred, and you read the article.
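A toy sketch of that flow, assuming the card fee is paid once on a top-up and per-article payments are just ledger entries settled in batches later (all names invented):

```python
# One real card transaction funds many few-cent article payments,
# so processor fees don't eat the micropayment.
class Wallet:
    def __init__(self) -> None:
        self.balance_cents = 0
        self.ledger: list[tuple[str, int]] = []

    def top_up(self, cents: int) -> None:
        self.balance_cents += cents              # the one real card transaction

    def pay(self, publisher: str, cents: int) -> bool:
        if self.balance_cents < cents:
            return False                         # prompt the user to top up
        self.balance_cents -= cents
        self.ledger.append((publisher, cents))   # settled to publishers in batches
        return True

w = Wallet()
w.top_up(1000)                                   # $10, amortizing the card fee
if w.pay("example-news.com", 3):                 # 3 cents for one article
    print("article unlocked, balance:", w.balance_cents)
```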
Security through obscurity is bad when obscurity is the only thing stopping an attacker. It's a meme because obscurity is not a substitute for stronger security mechanisms. That does not mean it cannot be an appropriate complement to them, however.
If I wanted to hide a gold bar, sticking it in an open hole behind a painting on the wall wouldn't be particularly great security. As soon as a robber found the hole, the entirety of my security is compromised.
If I put it in a safe on the wall, it's much more secure. The robber has to drill through the lock to get the gold bar.
If I put it in a safe behind a painting on the wall, the robber has to discover that there's a safe there before they're able to attempt drilling through it. Bypassing the painting is trivial compared to bypassing the safe, but the painting reduces the chance of the actual safe being attacked (up until it doesn't!)
Security should be layered. Obscurity will generally be the weakest of those layers, but that doesn't mean that it has no value. As long as you're not using obscurity as a replacement for stronger mechanisms, there's nothing wrong with leveraging it as part of a larger overall security posture.
I was kind of shocked by just how gosh-darned reasonable it is when it came out a couple of years ago. It's my absolute favorite thing to cite during audits.
"Are you requiring password resets every 90 days?"
"No. We follow the federal government's NIST SP800-63B guidelines which explicitly states that passwords should not be arbitrarily reset."
I've been pleasantly surprised that I haven't really had an auditor push back so far. I'm sure I eventually will, but it's been incredibly effective ammunition so far.
Alas, in Australia one of the more popular frameworks in gov agencies is Essential Eight, and they are a few years away from publishing an update with this radical idea.
The option is not a binary between "let it fail" and "no strings attached bailout."
If things get bad enough for Intel, the precedent that makes the most sense is to follow the model that was used with GM: the company enters bankruptcy, existing investors and creditors are wiped out, and a new corporate entity backed by the fed.gov steps in and assumes the assets and operations of the company. Once things have sufficiently stabilized, the government can then divest its ownership in the company.
Intel is economically and strategically important enough that just letting it collapse without a plan to pick up the pieces is a serious footgun. We have a framework for handling this kind of situation that has been shown to work without rewarding those who caused the problem. It'd be silly not to use it.
It's not just the investors and creditors that need to be wiped out. Above all else the managers need to be fired, with no bonuses or golden parachutes.
The 'the only reason a company exists is to make money for the stockholders' mindset is why we're in this mess right now. Because that's the regime management is operating under, and they are delivering what's demanded. Yeah, management is looting companies to the detriment of their long-term health. But that's allowed because it aligns their incentives with predatory capitalists.
I don't understand why they discontinued the Bolt before the new one comes out. Who does that? You have a cash cow, the assembly lines are already running, just... do nothing.
Conrail might be a better alternative model. It really did succeed well enough to re-float and become a competitive endeavour.
The state probably did a better job than a private takeover could have because:
- They had effectively infinite backstop-- they could afford to spend years and front load a lot of costs to fix the mess
- They had a mandate to restructure the sector beyond a single company. I'd expect a nationalized Intel would have an easier time pivoting to new models, while private investors might be too afraid of killing their current golden geese for, say, a full fab spin-off or becoming a more aggressive x86 licensor.
Sigh, I'm sure there is zero consideration for an antitrust-style breakup. All I saw on a quick scan was "hey, does any other company want to acquire parts of them," which is just another way to say "does anyone want to reduce competition even more".
I get that there is probably only one top-end fab in Intel and that would be hard to split across two entities.
But if you split Intel into three entities, they might (gasp) hire back a bunch of the engineers that have steadily fled Intel or been chopped off in shortsighted layoff rounds. You might get (gasp) innovation.
If you give all three the access rights to Intel's IP, things would probably be FINE in the long run. Give three separate companies a couple fabs and the IP, and see what happens. Intel has FIFTEEN fabs. So hand five to each of them.
Intel is a shell because existing management is financial "wizards" and war-on-labor MBAs that are trying to manipulate the stock price to hit options targets.
If you split the companies, they have to compete on engineering, or they die. You can probably also wipe away a lot of the management because the companies will have to do "real work" in the medium run and their "skillsets" won't really apply.
The MBA types will all say that's impossible. IMO that's why we should do it, because all the people that failed the industry and company are opposed to it.
Admittedly my language was deliberately more inflammatory than it had to be, because I wanted to challenge folks to basically post what you just did. Rather than an endless reform-a-palooza, seizing upon current apathy to champion “let it fail” forces folks to reckon with the reality that nothing is permanent, and if the status quo is unworkable, how do we destroy and rebuild in a constructive way?
Excellent response, gave you the upvote so others will hopefully read it.