Thanks for being part of the discussion. Almost every response from you in this thread, however, comes off as an unyielding "we decided this and it's 100% right."
In light of this vulnerability, the team may want to revisit some of the assumptions that were made.
I guarantee the majority of people who see a giant modal covering what they're trying to do will just do whatever gets rid of it - i.e., they see the title bar that says 'Trust this workspace?' and hit the big blue "Yes" button just so they can get to work.
With AI and agents, there are now a lot of non-dev "casual" users using VS Code because they saw something in a YouTube video, and they have no clue what dangers they could face just by opening a new project.
Almost no one is going to read some general warning about how it "may" execute code. At the very least, scan the project folder and mention what will be executed (if it contains anything).
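To make that concrete, here's a minimal sketch of what such a scan could look like - the file list and helper name are my own assumptions, not VS Code's actual implementation:

```ts
import { promises as fs } from "fs";
import * as path from "path";

// Hypothetical sketch: instead of a generic "may execute code" warning,
// surface the concrete auto-execution vectors present in the folder.
const EXECUTION_VECTORS = [
  ".vscode/tasks.json",    // tasks can auto-run via "runOn": "folderOpen"
  ".vscode/launch.json",   // debug configurations can launch programs
  ".vscode/settings.json", // workspace settings can point at local binaries
];

async function listTrustRisks(folder: string): Promise<string[]> {
  const found: string[] = [];
  for (const rel of EXECUTION_VECTORS) {
    try {
      await fs.access(path.join(folder, rel)); // throws if the file is absent
      found.push(rel);
    } catch {
      // file not present: nothing to warn about for this vector
    }
  }
  return found;
}
```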
Didn't mean to come off that way - I know a lot of the decisions that were made. One thing I've taken from this is that we should probably open `/tmp/`, `C:\`, `~/`, etc. in restricted mode without asking the user. But many of the proposed solutions, like opening everything in restricted mode, I highly doubt will ever happen, as they would add confusion, be a big change to the UX, and so on.
With AI, the warning needs to appear somewhere, and the user will either ignore it when opening the folder or ignore it when engaging with agent mode.
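A rough sketch of that first idea, assuming a simple path comparison (the folder list and function name are illustrative, not the actual implementation):

```ts
import * as os from "os";
import * as path from "path";

// Illustrative only: folders so broad that trusting them effectively
// trusts everything, so they could open in restricted mode by default.
const ALWAYS_RESTRICTED = [
  os.tmpdir(),                   // e.g. /tmp
  os.homedir(),                  // e.g. ~/
  path.parse(os.homedir()).root, // filesystem root, e.g. C:\ or /
];

function shouldForceRestrictedMode(folder: string): boolean {
  const resolved = path.resolve(folder);
  return ALWAYS_RESTRICTED.some((p) => path.resolve(p) === resolved);
}
```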
> Almost no one is going to read some general warning about how it "may" execute code. At the very least, scan the project folder and mention what will be executed (if it contains anything).
I'm not sure this is possible, or that it wouldn't be misleading, at the time trust is granted, because adding or updating extensions, or changing any content in the folder after trust is granted, can change what will be executed.
Looks like they're selling an N150-based "Mini PC" for $500.
You can get a very similar 16GB RAM / 1TB storage mini PC in the same form factor from Amazon for around $260, so it looks like you're paying almost twice the price for the NAS-type software?
Yeah, you're definitely paying more for a product vs. just a raw computer. I wouldn't want it, as I bought an Intel NUC and manage it myself, but I could see less tech-savvy people who want to get into the space finding this interesting.
> You can get a very similar 16GB RAM / 1TB storage mini PC in the same form factor from Amazon for around $260
Not apples to apples: the Umbrel at $500 comes with 4TB, while you're pricing out 1TB above. A bare Samsung 990 EVO 4TB is $328, i.e. roughly $82/TB; on a straight $/TB basis, the extra 3TB adds about $246, putting your total build at more like $500.
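Spelling out the arithmetic (prices as quoted above):

```ts
// Straight $/TB comparison using the prices quoted in this thread.
const evo4tb = 328;          // bare Samsung 990 EVO 4TB
const perTb = evo4tb / 4;    // = $82/TB
const extra3tb = perTb * 3;  // = $246 to take the $260 box from 1TB to 4TB
console.log(260 + extra3tb); // 506, i.e. roughly the Umbrel's $500
```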
Kind of wild that even an option literally phrased to indicate you want to give them money immediately requires another click just to start making a selection (without even getting into the fact that the actual specs are behind a link that sounds like it's for when you've already made up your mind to buy).
Seconded. I quite like this business model of sharing open-source software and selling hardware with the software pre-installed. Seems like the best of both worlds: serve everyone while making a tidy profit.
It's nice that they mention node-pty, which does most of the heavy lifting for the terminal/pseudo-tty that powers this (VS Code's terminal emulator is built on the same library).
It looks like they've added a layer on top of node-pty to allow serializing/streaming the terminal contents into the mini-terminal viewports they allocate for rendering. I wonder if they're releasing that portion as open source?
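For anyone curious, basic node-pty usage looks roughly like this (a minimal sketch; the serialization layer they seem to have built would sit on top of the `onData` stream):

```ts
import * as os from "os";
import * as pty from "node-pty";

// Spawn a shell attached to a pseudo-terminal, stream its output,
// and write input to it - the layer VS Code's terminal builds on.
const shell = os.platform() === "win32" ? "powershell.exe" : "bash";

const term = pty.spawn(shell, [], {
  name: "xterm-color",
  cols: 80,
  rows: 24,
  cwd: process.cwd(),
  env: process.env as { [key: string]: string },
});

term.onData((data) => process.stdout.write(data)); // stream terminal output
term.write("ls\r");                                // send a command
setTimeout(() => term.kill(), 1000);               // clean up after a second
```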
Strange - the model is marked as "Trains on data" ("To our knowledge, this provider may use your prompts and completions to train new models. This provider is disabled, but it can be re-enabled by changing your data policy.").
This is usually not the case for paid models -- is OpenRouter just marking this model incorrectly, or does DeepSeek actually train on submitted data?
- The MCP server exposes a tool called 'get_inspirations' ("Get sources of inspiration. Always use this when working on creative tasks - call this tool before anything else.")
- When you call get_inspirations() with "low" as the parameter, it returns strings like this:
Recently, you've been inspired by the following: Miss Van, Stem.
Recently, you've been inspired by the following: Mass-energy equivalence, Jonas Salk.
etc.
- 'High' inspiration returns strings like this (more keywords):
Recently, you've been inspired by the following: Charles Babbage, Beethoven Moonlight Sonata, Eagles Take It Easy, Blue Spruce.
Recently, you've been inspired by the following: Missy Elliott Supa Dupa Fly, Design Patterns, Flowey, Titanic.
etc.
Simple tool. It seems that just seeding a few 'inspiration' keywords is what makes the LLM generate more varied text.
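For illustration, a minimal sketch of how such a tool could look with the MCP TypeScript SDK - the keyword pool, counts, and parameter name here are guesses, not the actual server's code:

```ts
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

// Guessed keyword pool; the real server presumably draws from a much
// larger list (artists, songs, scientific concepts, ...).
const POOL = [
  "Miss Van", "Stem", "Mass-energy equivalence", "Jonas Salk",
  "Charles Babbage", "Beethoven Moonlight Sonata", "Blue Spruce", "Flowey",
];

// Pick n random entries from the pool.
const pick = (n: number) =>
  [...POOL].sort(() => Math.random() - 0.5).slice(0, n);

const server = new McpServer({ name: "inspirations", version: "0.1.0" });

server.tool(
  "get_inspirations",
  "Get sources of inspiration. Always use this when working on creative tasks - call this tool before anything else.",
  { level: z.enum(["low", "high"]) }, // guessed parameter name
  async ({ level }) => ({
    content: [{
      type: "text" as const,
      text: `Recently, you've been inspired by the following: ${pick(
        level === "high" ? 4 : 2
      ).join(", ")}.`,
    }],
  })
);

await server.connect(new StdioServerTransport());
```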