Related question about this, with apologies if this is a dumb question. For small to mid size tech companies (say small startups to 1000 employees), what are the general recommended IT procedures to ensure application software like this is updated across the company?
That is, every company I've worked at has had some form of device management software on their laptops, but that software only ensured that the OS and some specific "managed applications" were always patched. For developers, though, we never had fully "locked down" machines because they made our job so much more difficult (that is, more than other departments, we'd often be installing and running new software).
In that case, are there some specific corporate controls to ensure nobody is running an unpatched VSCode, beyond messaging all engineers and saying "you better make sure your VSCode installation is updated, or else..."?
Firstly, an IT team which works with users rather than against users. That might be as simple as adding all the "core" apps to MDM to ensure they always get patched regularly.
Secondly, it requires a development team to realise that just because they're good at programming computers it doesn't mean they're good at administering them. Yes, it sucks that you're not allowed to install bonzibuddy.exe from Limewire. But your needs aren't more important than protecting the integrity of the network and company.
Realistically, how often do you need to install brand-new non-standard software? If it is a regular occurrence then you need a process by which you can request it and the IT team can assess how they manage it.
> Realistically, how often do you need to install brand-new non-standard software?
Almost every day. Sometimes multiple times a day in case of a new project / new embedded hardware (toolchain) / new devkit to "quickly test" / tight deadlines, etc.
> If it is a regular occurrence then you need a process by which you can request it and the IT team can assess how they manage it.
There isn't one. Getting such a process in place has been attempted multiple times over the past years, with and without escalation to higher-ups all the way up to the CEO. The IT team promises they'll speed things up, but a software approval for 1 app still takes at least 3 months. Now what?
I'll tell you now what: employees and contractors alike just start using their own fully unlocked (perhaps badly updated) machines out of desperation and transfer files to and from locked-down company machines using email or onedrive or whatever other means possible.
Rather than cheat the system to get the project done you probably need to let the project be late and then explain that it was late because IT dragged their feet on app approval. That will make it pretty clear to the powers that be that this needs to be fixed.
Correct. The only way to actually bring about change is to make sure the ones with the power to change things feel the pain associated with their decisions.
No, it's the only way things get better. First, if you cheat the system and something does happen, it's your fault and yours alone. Why take the risk? If the company has rules, it's also the company's job to make your work possible within those rules.
This depends so much on the situation. It might be a way to get things better, but it's also likely to leave a sour taste in your customer's mouth. They may know that Dev Bob wasn't responsible for the problem but IT Alice was, but what they'll internalize is that there was a problem with Bob and Alice that caused them pain, and they'll associate Bob and Alice both with that pain.
It's necessary where the relationship with IT becomes adversarial, often with IT that's less skilled, or more constrained by their management.
Where possible, though, a mutually beneficial relationship is best.
Where I work I have two laptops. One is my corporate laptop, less powerful but it can connect to our internal network. The other is my dev laptop, more powerful but it can only connect to Internet and our dev network. I can install whatever I need on the dev laptop, as long as I a) keep a running log of installed software and update IT when that changes, b) regularly check NVDB for the software I have installed and apply mitigations as needed, c) keep versions updated to current versions where possible (and never to versions that have known vulns).
I'd like to believe this works, but my experience in Big(ish) Corp is that you get the blame.
If you think you're in a position to change it and you're motivated to do so, go for it, otherwise my experience is that you're better off finding somewhere else to work.
CYA via email and other documented functions, but you are correct in the sense you're better off finding somewhere else to work.
I've seen too many companies set up systems that you cannot be in compliance with but there is no punishment for non-compliance unless they are looking to get rid of you.
Especially from a Big(ish) Corp I expect to be given all the tools I need for my work. They have all the admins and all the compliance people. If they are not willing, then yes, search for another job.
I work at a bank and I can tell you that nobody is emailing or sharing files to their personal devices. Doing so would have you fired on the spot.
At my workplace apps are whitelisted and it's honestly not a big deal. Very rarely do I find myself in a situation where I can't access the tools that I need.
Working at a bank and working as a design engineer or low-level programmer have extremely different requirements for a work computer.
My experience, after working in many big and small companies in several countries and on multiple continents, matches what the above poster said perfectly: at least for any kind of design or testing work that involves external hardware, the only way to get anything important done is to circumvent restrictive IT policies, which are invariably made by people who may have worked in a bank but have no idea how a computer is used for designing or testing hardware or software for embedded computers.
Most of the work that I have ever done could not be done on company-managed computers, so I usually had to do it on my own computer and then transfer the results to the company computer, which was used only for communication inside the company, through e-mail/Teams/internal file servers etc., and for writing reports or other documents.
I work in a factory in industrial automation and we also never ever use personal devices. That said, it is possible for a few of the guys in tech to get a local admin account to install software, or even just to change IP addresses.
For example, VPNs (so external people can connect to a machine) often don't work, and several times a machine stood still because of it. Even then, of course, we don't just set up a link through a phone. We send a mail to the CEO and he takes care of it with IT.
> Almost every day. Sometimes multiple times a day in case of a new project / new embedded hardware (toolchain) / new devkit to "quickly test" / tight deadlines, etc.
The approach above normally doesn't count project-local tools, which don't install like normal applications. E.g. some frameworks you install into your project instead of having user- or system-wide installations (e.g. with npm for web-dev stuff).
I know that for embedded computing a lot of tools which are normally ad-hoc project installations are instead system-wide "normally installed programs".
But that isn't the case for most software development fields. In many other fields that new toolchain you want to try out is just a dependency you add to your project's dependency management tool. And keeping it up to date is done in the same way you keep your other dependencies up to date (e.g. a scheduled CI job checking for updates).
Rootless OCI (docker) containers can help with that. But embedded work also often touches on driver aspects, so it might not be an option.
Anyway, if you set aside embedded programming, parts of systems programming (e.g. the Linux kernel), some legacy software, and some cases of important C/C++ tools needing too many permissions, nearly all other programming jobs should be viable on a locked-down system, as long as you have the right (not too big) set of tools preinstalled and access to rootless containers (independent of whether they run in a VM or not).
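To make the "dependency, not installation" point concrete, a rough sketch assuming a Node.js project (the package names are just examples):

    # The "toolchain" is just a dev dependency pinned in package.json,
    # not a system-wide install:
    npm install --save-dev typescript esbuild   # lands in ./node_modules only
    npx tsc --version                           # runs the project-local copy

    # Keeping it current goes through the same workflow as any other
    # dependency, e.g. a scheduled CI job running:
    npm outdated || true
    npm update --save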
Given various recent supply chain attacks, I would say "installing dependencies into your project" is becoming just as dangerous as downloading and running an arbitrary executable.
It IS running an arbitrary executable, but so is installing some 3rd-party program, and even a program uploaded to the Microsoft app store or a Linux package repository isn't necessarily much different. Sure, Linux package index maintainers give them some "due diligence", but that has limits, and Microsoft replaces that mostly with automated scanning, which isn't better.
But when it comes to the aspect of managing updates, project specific installs are better.
And when it comes to security then the answer for both installed and project tool-chains is: Sandboxing.
- run your toolchain in a container
- run your IDE in a different container
- run barely-trusted tools in another container
- use reasonably phishing-resistant 2FA (e.g. FIDO2) for authentication, and use authentication tokens.
This applies to "normal" applications, too. They also should run in their own container. Just because it has an installer or is signed by a company or is on a app store doesn't mean it's trustable.
Thats one of the things phones at least somewhat got right, their apps are containerized by default (through they might give too much permissions to apps but that's a different problem)
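As a rough sketch of the "run your toolchain in a container" idea above, assuming rootless podman and a project that builds from local sources (image and build command are just examples):

    # Build inside a throwaway rootless container: no network, only the
    # project directory mounted, container user mapped to your own UID.
    podman run --rm -it \
      --network=none \
      --userns=keep-id \
      -v "$PWD":/src:Z -w /src \
      docker.io/library/gcc:13 \
      make

The idea being that a compromised build tool can't phone home and only sees the mounted directory.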
vscode has a portable version that can be installed as a project local tool. So would vscode installed in such a way also be an exception to this approach?
It's less about whether it's installable in a project and more about whether it's project-specific, so IDEs should, if possible, not be included.
Furthermore, standalone tooling that tends not to be very version-sensitive is also better left out: e.g. git, npm, rustup, pip and similar are preferably system installations.
Frameworks and similar also tend to go through the package manager and whatever update handling you have for it. But a drop-in IDE likely would not.
> The IT team promises they'll speed things up, but a software approval for 1 app still takes at least 3 months. Now what?
Ouch.
I wonder if those apps are one-offs for a particular task or you just find that many useful apps every day. Anyway, Windows now features Sandbox mode - a great place to install untrusted apps, as the sandbox is discarded on close. https://learn.microsoft.com/en-us/windows/security/threat-pr...
Sounds like you have slightly unusual needs - I install NPM packages and the like all the time (which I haven't seen any company attempt to oversee yet), but very few binaries. Sounds like you need binaries the way I need these packages, so it sounds odd that the company wouldn't come up with some solution - dedicated VM, special exceptions, whatever.
My company has a policy that all software needs to be explicitly approved. That includes any software library, extension, or package from a package manager. Is it followed? Not in the slightest! Something I recently ran into: the Rust version manager 'rustup' is approved but neither the compiler nor the package manager have been approved. That's despite our core product having used Rust libraries through the C ABI for years. Madness!
For the purpose of our discussion: None whatsoever. That's my point. Corporate IT mandates what they know/think about, quite often unfortunately without any understanding of the specific needs of employees (like parent).
I found it to be a better strategy to start from a prioritised list of risks and tackle the high-priority items in an actually meaningful way. But I can see how that doesn't work at companies with >300 employees - at some point security just pushes for whatever they can get, disconnected from the true needs and relevant risks at the company.
Azure AD has a privilege escalation mechanism to local admin. You enter e.g. the Jira ticket reference for documentation and the system gives you an 8h window to enable local admin on your machine with that particular AD user. We use it to let e.g. architects who often try new software do what you describe in an efficient manner.
>Realistically, how often do you need to install brand-new non-standard software? If it is a regular occurrence then you need a process by which you can request it and the IT team can assess how they manage it.
Brew is a pretty regular part of my workflow, although that's more often restoring the environment after an IT-mandated OS upgrade than net-new software.
Also there's a couple of thousand third-party dependencies in the monorepo, seeing ~100 changes per month.
I'm grateful that engineering workflow is managed by engineering platform teams... corporate IT is a totally different beast, and it'd be pretty dysfunctional if they micromanaged this stuff..
I think the two lines actually work together. If the IT team works with the users rather than against them, the friction of adding new software is "low". It is, of course, strictly higher than if the user can just install whatever whenever.
But, GP posits, this increase (from "none" to "low") is further mitigated by the fact that, in practice, this friction-generating need does not arise often.
I understand, it was a snarky way to point out that the IT person is perhaps out of touch with developers. It might be he works in a more tightly regulated industry (which should be pointed out when giving advice anyway).
I don't think I've gone more than one week in my professional life without installing some sort of new executable on my dev machine. How do you test out software/libraries before you use them in a project? Do you make the decision ex nihilo and then ask IT to install them for you? That would be insane. How do you test out new toolchains? What if you need a random tool for processing a file format you're working with that isn't approved yet? Without root, what do you do when your Linux gets messed up? Ping IT and... wait? Just stare out the window?
Ideally a security researcher takes a look at it, then it gets whitelisted.
Sure, to simply try it the best way might be to set up a new machine not connected to the corporate network/domain.
If it's an open source library it might be easier, but still the questions are the same: is it secure, is the added maintenance worth it, does it have a community maintaining it, what is its security track record... it might be as easy to answer this as "nobody got fired for using OpenSSL/Oracle/IBM" or "no single committer libs ever"
There's no universal right/wrong answer.
> What if you need a random tool for processing a file format you're working with that isn't approved yet?
you don't. this is how companies grow to these enormous headcounts. there are specialists for everything, everybody has their own cubicle, and there is a huge support:corebiz ratio.
also, in practice, there are usually hybrid ways. eg. you can install whatever, but it silently disappears overnight. is that better? who knows.
A friend of mine works for BlackRock, they simply work on remote Windows virtual desktops. He is not complaining about speed/efficiency.
A different friend works for ALDI (like Walmart just smaller), he got an issued MacBook (Pro?). When he connected an external USB hard drive the next day (or sooner?) he got an email about it because some automated scanner thing found something suspicious on it.
When you get struck by a watering hole attack and drag malware inside the corporate network, and it proceeds to start encrypting everything not only on your own system but on your coworkers' systems around you, and an entire team of developers is sitting around twiddling their thumbs while all the systems are restored - that's when we realize that playing fast and loose with stuff off the internet can have far-reaching effects.
Having something go wrong on your personal system can be an 'oops', having something go wrong in any number of regulated industries can be a 'news generating event'.
> having something go wrong in any number of regulated industries can be a 'news generating event'.
I did say that if you work in a tightly regulated industry, obviously stuff is different. This should be pointed out though! You can't give advice for the medical or financial sector to someone with a 15-person startup that builds low-stake products. It's just not going to be the right advice, as you well know security is a tradeoff. I guarantee that disabling root on dev computers is a huge productivity hit -- you need to be sure of the pros/cons of making that decision.
I can have empathy on users for them wanting to treat the development system they are on for 1/3rd of their daily lives like a personal machine. What I cannot actually allow is them treating their development machine like a personal machine.
> it requires a development team to realise that just because they're good at programming computers it doesn't mean they're good at administering them. Yes, it sucks that you're not allowed to install bonzibuddy.exe from Limewire
What exactly is IT doing when a dev requests to install randomtool.exe? Why is it a developer is incapable of that same thing?
> Realistically, how often do you need to install brand-new non-standard software?
Well, does my own software count? I mean, I've run an executable that didn't exist 5 seconds ago more times than I could possibly count. I've also built installers for said software, and then executed and installed it on my machine.
Is that ok? If so, why (if I also can't download and run something from GitHub)? If it's not ok... how am I supposed to do my job?
> What exactly is IT doing when a dev requests to install randomtool.exe?
Selecting/paying for secure repositories from legitimate well maintained sources.
Packaging the software and deploying it through some means, so all users who need the software can get the software.
Regular patching of software across many users who may only be intermittently connected to the internet.
Lots of busywork updating configurations as new teams are formed, teams dissolve, people come, people leave, new patches come out, new software comes out, have to move to a new OS with old software. Company policy or circumstances might require some tweaks.
Generally though as to your question, there’s no reason you can’t do what the IT department does. The IT department can also have such a setup and then have a dev sandbox environment you can pull random code into. The larger point of what they’re doing is building a software supply chain, and once you actually have every piece of software you’re using in a secure repository, you can do BIG things like blocking all applications that aren’t whitelisted in prod which essentially shuts down entire classes of security threats against a company, and maybe that’s what a company wants.
Maybe a company shouldn’t be trying to maintain a centralized repository and that’s a dumb idea…
> What exactly is IT doing when a dev requests to install randomtool.exe?
Some basic research on its creator/distributor and history, particularly with regard to security issues and how well/quickly they were addressed. Also perhaps running the software in a constrained environment to see what calling home it tries to do, or does as part of its core function¹, if it is monitoring the clipboard, etc.
> Why is it a developer is incapable of that same thing?
It isn't really a question of capability, it is a question of whether they are convinced⁵ of the necessity and can all be relied upon to be appropriately diligent.
For many all that "contracts", "auditing", and "data protection law" stuff is someone else's problem, not interesting, and thinking about it wastes time & gets in the way of getting the interesting stuff done.
> how am I supposed to do my job?
Do you want all that compliance stuff to become part of your job? Is that what you got into development for? Do you want to be held responsible if something is missed? If not then accept that someone else has to do it so that you don't have to, which sometimes means waiting for them to do it properly.
----
[1] we work with banks, we have to be very careful about potential accidental data exfiltration routes because we sometimes handle PII (and, more cynically, because we'd fail an external audit required by some big contracts if we didn't!)², we have a local instance of languagetool if someone needs that sort of thing, but people still try to install grammarly³ and seem bemused that potentially everything you edit⁴ being sent to another country could be a bad thing. And that one is obvious, as it is part of the product's core function.
[2] for other companies, their own "trade secrets" could be the concern
[3] nothing against grammarly, that is just a good glaring example of a tool with which we could accidentally breach promises made to clients about where information could reside or be processed. The same concerns, along with a few extras like licensing and stability, are also relevant for dependencies that actually become part of our products.
[4] yes, of course we have proper data access controls in place and all but a few of us have no access to real data if all is well with that, and even then that access is gated and used sparingly, but security-in-depth is a thing...
[5] from your question, you don't sound convinced currently
What the GP is talking about is not letting developers install stuff on their PC. I assume this also means "run unknown exe" because... well, I can't see any reason local root could hurt the network in a way my user account couldn't. Unless maybe the PC is shared, there's no difference between installing (local root) and running as a normal non-admin user, as far as attack surface on the network goes.
So this comes back to: do you block "potentially malicious" executables or not? How do you tell if a new never-before-seen executable needs to be blocked, without either just blocking every unknown executable ("dev can't get anything done") or opening a giant loophole ("anything in x directory is safe")?
Let me write it a different way: An exe with an unrecognized name and unrecognized sha256 suddenly appears on my computer. Maybe I just compiled it from source I wrote, or maybe it is a random thing I downloaded and unzipped. How does a decision get made on whether my system will run that or not?
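For illustration, the allowlist-by-hash idea (what tools like AppLocker or Santa implement properly) boils down to something like this sketch; the allowlist path is hypothetical:

    # Only run a binary if its sha256 is on a list that IT maintains.
    allowlist=/etc/corp/approved-sha256.txt        # hypothetical path
    hash=$(sha256sum ./randomtool | awk '{print $1}')
    if grep -qx "$hash" "$allowlist"; then
        ./randomtool
    else
        echo "blocked: $hash is not on the allowlist" >&2
    fi

A freshly compiled local build fails that check by definition, which is exactly the tension described above: either someone approves every new hash, or some directory or signing key gets exempted.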
Your local non root user can install new network drivers and launch ARP attacks against the switch in promiscuous mode?
There are plenty of ways for a local dev to self-sign locally trusted executables that can only run on their own machine for testing purposes (and that would not be trusted if distributed to the public).
At least in Windows there are a few different systems that protect against running unknown executables, and downloading and running unknown executables would be a resume generating event that would get you walked out the door by security.
As a restaurant owner, would you prevent an expert cook employee from using their favourite brand of knives? Or even knives they bring from home? Probably not.
What if your insurance only covered certain brands and explicitly excluded the ones your cook used? That would probably make the cook less happy and less effective. Yes, happiness is important. Your employees perform their best when they use their favourite tools. This can either be Vim, their own ergonomic keyboard or even Linux instead of Windows.
Every restriction you add increases friction and decreases output.
No, it's not about installing Bonzi Buddy, it's about giving your people the best tools (the tools they like) to perform their best. Sometimes this incurs risks like unpatched vulnerabilities, like any software, but to go and paint this as some sort of entitled attitude developers have is plain ridiculous, honestly.
If you are not an administrator on the computer that you are using, you should run Visual Studio as an administrator while you are using the profiler. (Right-click the Visual Studio application icon, and then click Run as administrator.)
> how often do you need to install brand-new non-standard software?
That depends on the definition of non-standard, I guess. If I want to quickly throw together a proof-of-concept product I might need to grab node/npm/postgres, for example. Waiting even half a day for approval is too much in that scenario, I think. So it's problematic in some situations to have an approval process.
But in a typical month I'd say it's around once or twice that I need to run new software.
Sometimes I feel that we should just take the opposite approach. Don't trust employee machines! In my work I just commit code to a repo that isn't on a company machine anyway (github/azure devops etc). I don't use documents/network shares/databases/vpns or whatever else is on a company network.
I could work on my personal machine and no one would notice.
The threat model for a laptop connected to a private github repo is what? a) that a compromised machine can be used in a supply chain attack, and b) that a compromised machine could be used to read secrets/IP from the private repo. That's it. I wonder whether that threat, for normal "low risk" development work is worth the hassle of even bothering with locked down machines?
In some situations I have resorted to running tools inside a VM on my dev machine, where I can play god if I need to.
But really this isn't any different from the above. There is a machine where you can do anything, and which hasn't got access to anything sensitive.
> depends on the definition of non-standard I guess.
Depends on the definition of 'software' also. Strictly speaking, every npm package is someone else's code running on your machine. Often with quite wide ranging permissions.
If you need approval from IT to add a package or increment a version then your development process is going to be slow for sure.
Nix would be a good solution to this. Have a corporate cache for all installed software with approved versions that folks can pin their environments to.
Ticketing still needs to be in place, but it can also be a managed repository that can be audited/reviewed by IT team as well - tickets could be as simple as a Pull request.
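A rough sketch of what that pinning could look like, assuming classic nix-shell; the revision, cache URL and key are placeholders:

    # Pull a tool from an IT-reviewed nixpkgs revision, with binaries coming
    # from a corporate cache instead of the public one.
    nix-shell -p nodejs_20 \
      -I nixpkgs=https://github.com/NixOS/nixpkgs/archive/<approved-rev>.tar.gz \
      --option substituters "https://nix-cache.corp.example" \
      --option trusted-public-keys "corp-cache:<public-key>"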
Working with large financial clients I've seen that most companies have systems somewhat like this already. Reviewed software is internally available for download off an intranet site, and some kind of request system exists if you need a newer version and you'll get an update on when it's available.
> Secondly, it requires a development team to realise that just because they're good at programming computers it doesn't mean they're good at administering them.
That's such an annoying thing. I understand enough about admin stuff to understand that I have huge knowledge gaps, but I keep seeing people assuming they can securely administer dozens of servers while having less knowledge than I do, just because they made things somehow run (but then I have also seen more than just one or two really bad admins).
The worst case was someone who thought that, because they have a doctorate in numerical math and a (small) bit of programming/admin experience, they knew what is needed to securely administer a "private cloud" (cloud isn't quite the right term here, but good enough).
As expected, it led to the company being hacked due to some major security gaps in less than 2 years.
Working in the industry for a while now I've come to the realization that developers in the vast majority of situations are not security engineers, and should not and cannot be trusted to ensure both the software they write and the systems they run are secure in any manner.
Security itself is a set of processes and policies where different teams monitor and ensure compliance with said policies. If security policies get in the way of the software compiling/working developers will disable security (and sometimes for short periods of time this is needed for testing to figure out why something has gone wrong). But it becomes easy and commonplace for developers to turn off the security because it gets in the way of productivity. And that's the problem with security, you don't generally get punished for violating it immediately by an attacker. It can be months or even years before someone downloads your source code and posts it online. You may not notice for months that your database was breached and everyone that you don't want has a list of your customers.
> you're not allowed to install bonzibuddy.exe from Limewire.
Would a "recommended"/typical MDM rule set mean you can't install "unidentified developer" licenses on Macbooks? The usually checkbox every mac user normally enables in security settings?
Former Mac admin here. I never locked Gatekeeper down that strict, but I did enforce the setting where if you just double clicked some nonsense, it would it let you open it. You had to right click and then open from the context menu and blow through the warning.
Ultimately I depended on our endpoint protection software and Jamf inventory to keep things clean. Jamf I could setup to alert me on known malware/bad shit, or in this case, specific versions of apps so I knew who I needed to reach out to, and then endpoint protection for everything else. Good enough for our HIPAA auditors.
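For reference, the pieces being described map roughly onto Gatekeeper's command line interface; a quick way to see the current state on a given Mac (the app path is just an example):

    spctl --status                                   # "assessments enabled" when Gatekeeper is on
    spctl --assess --type execute --verbose \
      "/Applications/Visual Studio Code.app"         # shows accepted/rejected and the signing source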
As a researcher, I used to work on Linux and I never ever even imagined I could ask for administrator privileges.
In the industry, it's Microsoft all around. And it shows. And R&D software managers sign administrator privilege requests without batting an eyelid because they know that otherwise they are paying us for nothing.
I can't remember the names of any of these companies, but I know there's a healthy ecosystem of "endpoint management" businesses that sell essentially this service (among other things, like endpoint security monitoring/EDR).
The downside is then that you're running an ultra-privileged agent on each developer machine, with no particular guarantee that said agent is more secure than any given piece of software despite having way more power.
Edit: Remembered one: you could use something like Kolide[1], which more or less wraps osquery[2] to enumerate applications and their versions, and push out warnings asking users to upgrade.
Also see Fleet (https://fleetdm.com/) for an open source self-hosted solution. I'm currently using this at a small company to query / enforce policies across a bunch of Windows laptops.
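Both of these are osquery-based, so the checks are mostly SQL; a minimal sketch of the kind of query involved on macOS (the `apps` table and columns are standard osquery, the app name is just the example from this thread):

    # List the installed VS Code version so out-of-date installs can be flagged.
    osqueryi --line \
      "SELECT name, bundle_short_version FROM apps
        WHERE name = 'Visual Studio Code.app';"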
I can't speak for every bigcorp but at my work I installed firefox completely on my own, and once in a blue moon I get an email from IT telling me I need to update firefox (usually I just need to restart ff).
This is on a domain-joined windows machine. There's some management "stuff" that got pushed down to it but IT is fairly hands-off, or at least I haven't run into any limitations with running random development tools.
There’s basically two approaches: inventory and enforcement.
In the former, you have your MDM inventory all the things and you follow up with folks that aren’t patched. In the latter, you lock it down and roll it out.
For dev environments, maybe try using a VM built using CI/CD and using that as the “dev environment”, so you can do it all in one go and make it easier to roll things out to folks once tested.
I once worked for a 300+ person research company which was mainly using Linux; their default setup was:
- no "normal" employee has admin rights
- any long running/compute task must run on one of the many server systems available
- desktop systems used some form of network boot to install/reinstall/update them on startup, unused systems shut down after a while for both power saving and enforcing the network boot triggering
- there was some tooling which allowed admins to easily configure the right combination of Linux distribution and software bundles
- if some application wasn't available you needed to file a ticket and normally got it in a day; that seems like high friction, but it doesn't happen that often so it's not really a problem, and the application tends to get added to a bundle so that other people have it too
What somewhat wasn't covered would be local project specific installations using e.g. `pip`/`npm` etc.
And naturally there had been a bunch of exceptions.
Now this was years ago and this approach was for scientific use-cases, not normal software development, but while a lot of developers probably wouldn't like it, it should be applicable to a lot of dev use cases, too, as long as you include enough tools/choices in your software bundles. Okay, maybe it will not work for certain system-level C/C++ programming, but for most app/web/server use cases it should be possible. I guess one of the worst offenders here would be docker, as you would _only_ allow rootless containers. (But then I'm using mostly podman and I think I have only been using rootless containers for the last 1.5 years or so.)
At a minimum you need a way to collect information from each workstation for all of the installed programs and patches/hotfixes installed. There are tons of companies that sell software that does this out of the box, usually either with an agent installed on each workstation or a single server sitting on your network that uses an account with the necessary permissions to remotely grab that info.
Once you have the data you can run reports to see what the worst offenders are and perform corrective actions such as updating software. If you want to do this for a small company like yours look into setting up a Wazuh server since it’s free and open source.
For large corp environments there is software that can automatically patch out-of-date software, but the costs are high and you have to be extremely careful because these processes can occasionally break an end-user application. Usually there is some sort of change management process that has to be followed for that, for obvious reasons.
We're largely in the windows world. Defender vulnerability management will detect software such as this just because we use defender edr and have this feature in our license. If the user self installed and doesn't update, someone will call them tomorrow. For more managed software, someone will log into intune and push an update that you will get in the next hour.
For Linux devices, you'd tell employees to only install software through their package manager, then have something like unattended-upgrades (installed by default on Debian-based systems) handle automatic security updates.
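On Debian/Ubuntu the setup is roughly this (a sketch; VSCode itself is only covered this way if it was installed from an apt repository):

    # Install and enable automatic security updates.
    sudo apt-get install -y unattended-upgrades
    sudo dpkg-reconfigure -plow unattended-upgrades   # writes /etc/apt/apt.conf.d/20auto-upgrades

    # Sanity check:
    cat /etc/apt/apt.conf.d/20auto-upgrades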
This isn't what you asked for, but one of the systems that Google uses internally has a mechanism to report the hash of every executable launched and block executables it isn't aware of.
Most larger companies already have some type of code gate like this where dev checks in software and it's exposed to automated testing then review.
But as you state there are many security models in which the dev machines still need to ensure you cannot leak code out of the company confines. In many financial industries leaking code between different teams can be/is considered a security risk.
Dont get me wrong I’d love to say lolno to these types of requests, but basically everything requires npm these days, while also being basically impossible to audit every dependency you install. I’m a huge fan of good security shouldn’t impact too much on productivity, but dependency trees make me lose sleep at night (waterhole attacks, EOL packages that are painful to remove, etc).
Pentester by trade with a background in software dev.
The same way any business deals with any need. The requestor must make a business case for the need that is then reviewed.
I need $x.exe is not just a security risk, it's a business continuity risk. What is the licensing of $X, do we need to pay licensing for it? Are we getting sued for using it? Are we going to redistribute it, and can we? In general the average programmer, especially on a team, should not be making that choice, and someone higher up should, before really expensive surprises happen.
I work in software composition analysis/static analysis/SBOM and yeah, the average piece of software coming in can be terrifying. It's common to see companies distributing things like open source packages in violation of the license. Even worse is when they are pulling in something like 'One-Js' and '0ne-Js' (made up for this example), which shows up in an audit of our SCA output.
For Windows deployments - Microsoft Intune provides the ability to monitor any application and its version. Updating is automatic provided its installed correctly and using detection rules. The responsible party updates the MSI installation files and the local management service will perform the update.
Yeah, but who uses MSIs? Windows Installer is just a flash in the pan. It's so much easier to just make an Inno Setup-based SETUP.EXE. /s
It makes me shudder to think how many hours (days? weeks?) of my life I'd have back if developers had just packaged their Windows application as MSIs instead of making me, as a sysadmin who wants consistently updated client computers, do it for them.
The inmates have been running the asylum at Microsoft for a number of years now. I assume it’s necessary to re-invent wheels, badly, for career advancement. (That’s not to say that MSI is particular nice, but it works reasonably well and saves a lot of downstream headaches.)
If we’re talking about macs, what you’re looking for is munki + AutoPkg. There are community supported recipes for pretty much everything out there, and for light touch management you can set the munki policy as a “managed update” - that is, munki will keep it updated if it finds it, but there’s no forced install or locking down.
I suspect you could write a script that detects or locates VSCode, verifies the version, and updates as necessary, and run the script on all computers that are managed. Or at least report back version information to understand the impact.
This should work to find affected versions on macOS. I'm on Ventura and sort has the -V version sort option. Not sure if all versions of macOS do.
    verlte() {
        # Returns 0 if the first arg is less than or equal to the second arg
        [ "$1" = "$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -n1)" ]
    }

    # The CLI bundled with the app prints its version on the first line of `code -v`.
    vscode_version=$("/Applications/Visual Studio Code.app/Contents/Resources/app/bin/code" -v | head -n 1)
    vulnerable_version="1.71.1"

    if verlte "$vscode_version" "$vulnerable_version"; then
        echo 'Vulnerable version!'
    else
        echo 'Up to date version'
    fi
Often "evergreen" is the fastest/cheapest solution.. ie every time the app installs it comes from the internet source so its always the most up to date at the time of install with no extra work being done.
chocolatey and (hopefully) winget offer "upgrade" functions, so you could script that out for your supported evergreen deployments.
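The scripted version is short; a sketch of the scheduled "upgrade everything" run (assuming the machines already have the respective package manager installed):

    # winget-managed apps
    winget upgrade --all --silent
    # chocolatey-managed apps
    choco upgrade all -y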
I'm not really sure how I feel about chocolatey atm; it has the potential to save me a lot of time, but as I don't truly control the source I'm not sure I trust it.
The origin story of how zemnmez found this is so hilarious
> i got tricked into finding some 0days in vscode because I got asked to review a vscode based project and didn't notice the 1P devs had only made one minor change to it. actually not a joke
Can someone explain? I can barely parse this quote. Who are the 1P devs? Why does it matter that they didn't notice they had made a minor change? Why were they "tricked"?
Edit: I think I might understand it more now: This person got tricked into carefully reviewing the "entire" code, instead of focusing on the one minor change that was made to it, because they didn't realize it was only the minor change they had to review. In their careful review of the code, they uncovered vulnerabilities which were actually related to the original code (ie. VSCode) rather than the changes that this person was asked to review. Did I get it? I'm still confused about the use of "1P" here though.
I found a linked list bug in FreeBSD because I am less smart than my buddy who was like it's trivial bla bla bla. I am like wait, lets back up, I don't understand this...
I think one can sniff out shifty code; as soon as you open the file you can tell there are going to be bugs in there. I am not saying well-groomed, high-level code is high quality or bug-free. But a couple of trivial deficiencies and there are probably a whole lot more you don't see.
Like if the code has a low to zero number of tests. Or the build system includes top level shell commands.
Programmers shouldn't be allowed to name things. I know it predated the GNU Project, but it just normalized bad naming as a goal. Maybe kids can stop learning cursive and switch to the how-to name things class!
right, I kinda wish more languages did things like this. Every JS framework or library has a cute name but then realizes they need to always say the ".js" if they ever want any SEO results (looking at you Next. Node isn't much better either tho).
It might be silly, but if we just added some l33tspeak like n3xt or n0d3 we would actually be making our lives a lot easier. Or we could just spend like a whole extra minute to come up with something like Deno, which is short and sweet and apparently unique enough to never conflict with other results.
This date format works for just as much of the year as DD/MM/YYYY though, and is just as ambiguous as the other format is if you don't know who the author is.
I know this is a bit beside the topic, but I basically know of two date formats: those with slashes in them, which one should ignore because they have no meaning (because they are ambiguous), and an ISO-like format which starts with yyyy and uses hyphens, which I know I can rely on to be yyyy-MM-dd, so there is no risk of ambiguity (unless of course someone uses yyyy-dd-MM, but I don't think that format is a thing; it would be like writing a quarter past six as 15:06).
I work at google and frequently collaborate with people from Germany, Australia, Japan, Taiwan, and the US. I always use YYYY-MM-DD in docs and emails to make sure everyone understands. It's still somewhat common to see mixups between the Americans and everyone else around MM/DD and DD/MM though.
If you give people a good reason, they'll actually care. "Use YYYY-MM-DD as it allows us to sort by date in the filesystem" is a great reason. On the other hand, bickering about DD-MM-YYYY vs MM-DD-YYYY because "that's how it is in my country" just makes me roll my eyes.
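A tiny illustration of the filesystem-sorting argument:

    # ISO-style names sort chronologically with a plain lexical sort;
    # DD/MM or MM/DD orderings don't.
    printf '%s\n' report-2022-11-22.txt report-2023-01-05.txt report-2022-09-01.txt | sort
    date +%F    # today's date as YYYY-MM-DD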
ISO 8601 is pretty common amongst the engineers I've worked with. Odds are they didn't grow up with it unless they're from China or a handful of other countries that primarily use YYYYMMDD.
I work with lots of geographically dispersed and culturally diverse teams. The rule I always enforce is to spell (or abbreviate) the month in all dates. The other numbers tend to work themselves out.
Tangential: Google search date range only accepts MM/DD/YYYY even if you’re on a different version of Google with a different locale. Really lazy if you ask me. Must be confusing to a lot of people, too.
This is really frustrating. The format is not indicated on the form, so my natural (in ja-jp) YYYY/MM/DD input gets interpreted as a weird date. I never use the MM/DD/YYYY format other than here. This is a localization 101 problem on big tech's main product.
It's a free-form text field; there's a very good chance it was an absentminded choice (it's probably how I'd write it, as an American, if I wasn't thinking too hard).
The good news is that GitHub Advisories are publicly editable, so you can always suggest an update to this one.
I was very annoyed when I worked for a global company that used DD-Mon-YYYY, like 01-Apr-2020. Moaned and groaned about how they chose a non-standard format instead of an ISO standard.
Of course, I later found out that is a standard format, and I suppose it's easier on non-technical people even if it stinks as a computer guy. Egg on my face.
month/day/year makes sense. the way you euro's do it is stupid. when someone asks you "where were you on X date" do you respond with: "Well on the 23rd day of november i was at X place"? LMAO. stop occupying the same universe as me.
Hey, could you please stop breaking the site guidelines, such as by calling names ("stupid"), or posting unsubstantive or flamebait comments? You've been doing this repeatedly, and we eventually have to ban such accounts. It's not what this site is for, and destroys what it is for.
"Conflict is essential to human life, whether between different aspects of oneself, between oneself and the environment, between different individuals or between different groups. It follows that the aim of healthy living is not the direct elimination of conflict, which is possible only by forcible suppression of one or other of its antagonistic components, but the toleration of it—the capacity to bear the tensions of doubt and of unsatisfied need and the willingness to hold judgement in suspense until finer and finer solutions can be discovered which integrate more and more the claims of both sides. It is the psychologist's job to make possible the acceptance of such an idea so that the richness of the varieties of experience, whether within the unit of the single personality or in the wider unit of the group, can come to expression."
ONE AND A HALF MONTHS to fix this? Am I reading this right?!? This should have been a same-week emergency patch.
> Date disclosed: 11/22/2022
So anyone who didn't update VSCode during those 6 weeks is vulnerable to a publicly documented RCE. Lovely.
It's nothing short of insane that we have to treat our text editor as a high-risk potential backdoor that might suddenly open to anyone unless we click the "Check for Updates" button every few hours.
That we know about. Last I checked VSCode extensions run completely unsandboxed, at least on Linux. It's only a matter of time before a malicious one is discovered.
This is pretty standard, timeline-wise, for most responsible disclosure policies. As-in : Give the vendor ample time to publish and deploy a fix before reporting vulnerability specifics.
Aside: As usual, excellent work from Google project-zero
Would running vscode in docker/podman mitigate the risk? When I started using VSCode I was a bit worried about potentially unwanted network traffic originating from VSCode, and for that reason I'm executing it from a podman container without routing to the internet.
Docker and Podman are not security countermeasures.
They appear as such accidentally through controlling resources and visibility of the filesystem.
Docker and Podman solve the "isolated environment for development so I can control my dependencies", "package management" and "reproducible environment" problems; which is already a lot.
We should not pretend that it's doing more than it is; since it is already covering many important topics and it muddies the conversation.
The reason I'm being so detailed is that there are a contingent of people who will swear to death that Docker is a security layer: but notably none of them are ever security engineers.
You can use similar primitives that docker is using to achieve security hardening.
Even docker themselves have a detailed dossier of documents saying they’re not a security layer (but that they’d like to be).
From FAQ:
> Is Docker a secure platform?
> The Docker platform itself isn’t inherently secure or insecure. While containers may be isolated from other processes on a host, additional security measures are still crucial to prevent container breakout and other types of vulnerabilities. An effective container security strategy for building and deploying containers is the best way to reduce the risk of a vulnerability, and in turn, an attack. Just like any other technology platform, following security best practices is the key to mitigating potential threats.
I agree with the original intent behind docker. But isn't the idea of unix namespaces serving a wider purpose than what docker/podman delivers?
If the process in question is contained within a namespace, would it not be enough to stop an exploit of that process from doing harm to your system? I assume the bad actor would not combine such an exploit with, say, a kernel exploit enabling them to escape the unix namespace.
> The contents of the 'file' in this code are a single Markdown cell in ipynb format. Because Markdown allows arbitrary HTML, in trusted mode, we can inject any HTML code we want into the webview.
This is a very common foot-gun with markdown. Unfortunate that they did not sanitize the HTML output from their renderer.
The logic here is somewhat sound. VSCode does sanitize by default, but Jupyter notebooks effectively need to run Python code on your machine to work. At that point (this is the meaning of trusted mode), it's not really worth protecting yourself against XSS.
I think the takeaway here is that there are likely more of these kinds of vulnerabilities that haven't been discovered, and VSCode should run in a sandbox or a separate virtual machine.
This is a core VS Code vulnerability, the remote development extension is mentioned because this vulnerability can be used to "take over the computer of a Visual Studio Code user and any computers they were connected to" using the remote development extension.