I’m disappointed this wasn’t really about remote workers but about remote development, as in the code doesn’t live on your machine. That said, remote development tends to be awful, and more tooling could help as long as it isn’t specific to FB’s particular implementation of remote dev shards. Learning that at Facebook you don’t develop things locally, and are likely on a box with all sorts of things tracking your usage and access to everything, lends even further credence to the big-brother thing. I understand that for a codebase and app too big to run on one client machine it makes sense, but what also makes sense is having piecemeal development environments where you just pull the components that you need.
Not really sure what you are implying with the 'Big Brother' comment. The remote development servers are only used for writing/debugging code and not for other daily tasks. Even if they are tracking what tools/functions I am using on the server, what does that matter?
Personally, I have found remote development awesome because it enables engineers to start contributing to a huge product in the very first hour. No need to wait for the repository to clone, the dependencies to install, and the code to build before you can become productive.
> Personally, I have found remote development awesome because it enables engineers to start contributing to a huge product in the very first hour. No need to wait for the repository to clone, the dependencies to install, and the code to build before you can become productive.
I am nobody, but ever since I started working on websites for big corporations I've had this nagging feeling: why can't I work on my code on my local machine with no network connectivity? Why do I need to talk to three different databases and two different services on five different servers? Why can't I just fake all those things during development?
If anything, my not-so-humble opinion is that remote development further enables bad habits. Of course, remote development is a tool and isn't itself to blame, but I recently learned the term "hermetic" build [Google SRE]:
>The build process is self-contained and must not rely on services that are external to the build environment.
Personally, I think we should work towards making it possible to run (or at least stub) the whole stack on a single physical machine - be it local or remote. What do you think? I think this is a trivial problem engineering-wise but I am not very good at selling ideas.
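To make the idea concrete, here's a minimal sketch of the kind of dependency injection that lets the whole stack run offline on one box. All the names (`FakePaymentsClient`, `CheckoutService`) are hypothetical, not from any real codebase:

```python
# Hypothetical sketch: swap a remote service client for an in-memory fake
# so development needs no network connectivity.

class FakePaymentsClient:
    """In-memory stand-in for a remote payments service."""
    def __init__(self):
        self.charges = []  # record calls so tests can inspect them

    def charge(self, user_id, amount_cents):
        self.charges.append((user_id, amount_cents))
        return {"status": "ok", "id": len(self.charges)}

class CheckoutService:
    """Application code depends on an injected client, real or fake."""
    def __init__(self, payments):
        self.payments = payments

    def buy(self, user_id, amount_cents):
        return self.payments.charge(user_id, amount_cents)

# Wire up the fake for local, offline development:
svc = CheckoutService(FakePaymentsClient())
print(svc.buy("alice", 499))  # {'status': 'ok', 'id': 1}
```

The real client would be injected in production by the same constructor, which is what makes the stack stubbable in the first place.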
Outside of web development, sometimes you need special hardware or workspaces configured to do development. Usually this can be done in a container, but that comes with its own annoyances. Having a central development server for large compiled code bases is really useful.
Not true. I worked on a real distributed payment system. Every service came with a mock clone or a single-server mode to stand up all services on the same box. Every developer had a powerful personal desktop (or two). I loved it. Everyone I knew there loved it.
See comment below. Just because it wasn't true for you doesn't mean it's not true. I'm not talking about mocking services to run it; I'm talking about the development environment and sharing with a team of 20 or so on a code base that takes 45 minutes to compile without parallelizing the build. You also cannot mock GPU functions. There's simply no CUDA emulator.
This is the ideal case and I wish it were like this for everybody. At least one of the payment providers I'm working with in a customer's project doesn't even have a sandbox. We must test with real money. Obviously we mock everything in unit and integration tests, but to know that it really works, the customer must put money on the manual testing account.
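For what it's worth, even without a sandbox you can still stub the network boundary in unit tests. A minimal sketch with Python's `unittest.mock` (the `charge_card` function and the `/charges` endpoint are made up for illustration):

```python
# Hypothetical sketch: test payment logic against a Mock standing in for
# the provider's HTTP client, so no real money ever moves in unit tests.
from unittest import mock

def charge_card(api, card, amount):
    """Illustrative app code: post a charge, return whether it was approved."""
    resp = api.post("/charges", {"card": card, "amount": amount})
    return resp["approved"]

api = mock.Mock()
api.post.return_value = {"approved": True}  # canned provider response

print(charge_card(api, "4242424242424242", 100))  # True

# Verify we hit the (pretend) endpoint exactly once with the right payload:
api.post.assert_called_once_with("/charges", {"card": "4242424242424242", "amount": 100})
```

This covers the logic around the provider; only the final "does it really work" check needs the manual real-money account.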
Those are all valid trade-offs, but it's ignoring the issue raised in the OP.
Like, is an environment like this encouraging bad habits?
But anyway..
Special hardware: such as? Very few systems can't be downscaled, and in situations where you truly need special hardware, you're talking about horizontal multi-server setups anyway.
Special workspaces: be less special, improve tooling, improve build operations. Very few setups actually need centralised configuration.
Large compilations: I'm not sold that a) many teams have a justifiable need, and b) anyone with a real justifiable need is likely going to want on-demand autoscaling of the compilation servers, i.e. development won't be local anyway.
I'm not trying to debunk everything you've said. It's a trade-off, and I've used central dev databases in the past for legacy systems and they worked well for those in the office.
All I can tell you is that iteration speed, testability of code, manual testing and all around team morale was DRASTICALLY improved by stubbing out that bottleneck in newer systems.
I'm speaking for my individual case and others I work with, where we have codebases of over a million lines of C++ with many header-only libraries. On an 80-core server, make -j can still take 3-4 minutes, and that uses all the resources on the machine. Trust me, I wish I could have something as fast that's not centralized. The closest I can think of is either a VM (slow/wasteful) or a container. The container would be really easy if everyone used VS Code with the container development plugin, but not everyone on the team does.
For those who don't, there's much more friction: remembering to start it up, etc.
People don't like being tracked and Facebook has a long history of being deceptive around how and when it tracks people. Facebook also has a long history of selling the data it collects as a result of tracking even when the people being tracked try as they might to opt out.
Hence the "big brother" reference.
Frankly, I was/am a little leery of how Facebook might be "improving" the VS Code Remote Development pack, since I use it every day.
As an employee working on a corporate device, everything will be tracked anyway; this is the most tin-foil take ever.
Your latter concern is at least a reasonable one, but it should all be open source anyways? Not that any of us have the time to audit everything. I doubt Microsoft is going to allow anything nefarious...
You're telling me that in the EU things that happen on company assets are not tracked? I think any company could easily come up with a 'strong' reason (IP theft?)
My employer might track me, but how does that mitigate my concerns that a 3rd party data aggregator like Facebook might track me as well as a result of installing a closed source plugin that they "improved".
I just addressed this in the second part of my comment. You would have to trust Microsoft/Facebook at this point, right? So if you don't and you really actually care (unlike 99.999999 percent of people), don't use closed source software, and audit every single line of the open source software you use, because I bet a lot less eyes are looking at much of what you use.
> People don't like being tracked and Facebook has a long history of being deceptive around how and when it tracks people.
You are mixing unrelated things. As an employee of a company, it's unlikely you can raise any concern about your privacy when working on the company's main asset.
> Facebook also has a long history of selling the data it collects as a result of tracking even when the people being tracked try as they might to opt out.
Not OP, but I'm also not sure how OP could provide a decent reference, given the constantly moving goalposts around the privacy implications of the Facebook platform. The default has always been toward public and noisy, even as Facebook has been forced to mature and realize there were privacy implications to things it was doing by default on the platform.
Despite a culture of "move fast and break things," nothing has ever been broken by making default privacy settings more restrictive. Users who signed up in 2005 would still be an open book by default today.
My point was to elaborate on how some other user might feel that Facebook represent a big brother type actor.
However, you have suggested that there is or should be no privacy when doing development, but I'll remind you that not all devs work in large corporate environments and even when they do, they might reasonably expect to be watched only by their employer. It is frankly not 100% clear that Facebook will never have access to telemetry as a result of "improving" this set of plugins. They certainly did not suggest as much.
In my opinion, there is a dramatic difference between what my employer may or may not do to track me and what a 3rd party social media company may or may not do to track my development practices as a result of using a plugin.
While semantically I may have overstated that Facebook "sells" data, they unquestionably considered doing so between 2012 and 2014 despite promises made to the contrary. They also unquestionably shared data with other large data aggregators. Even if USD did not change hands I think we can be fairly certain that Facebook bartered in user data.
Unrelated? How? It's Facebook. They have shown a repeated assault on, and disavowal of, social responsibility. I wouldn't expect Exxon Mobil to do very nice things in non-oil contexts. Are you saying you would?
A reference? Where have you been? Cambridge Analytica? Its advertising arm? It's all selling user data, either directly or indirectly. Don't be so naive.
I honestly don't expect there not to be access control, especially at a big tech company, but when it's Facebook doing it I can't help but think there's some MBA who might put you in a "bottom 5%" productivity cohort because your usage patterns just so happen to correlate with less productive employees. They could of course do that almost anywhere, since business IT tracking software is ubiquitous, especially in tech, but it feels more likely at FB.
> when it's Facebook doing it I can't help but think there's some MBA who might put you in a "bottom 5%" productivity cohort because your usage patterns just so happen to correlate with less productive employees.
I work at Facebook. I would say it's less likely there than elsewhere that I have worked due to how the review cycle is set up. People have this strange idea that Facebook is some kind of top down panopticon.
If you’re using a React pipeline on the remote server, how quickly does the page refresh occur? I do remote VM development where I work and a major pain point for me is how long it takes to refresh a page that I’m working on.
You navigate to the remote server in your browser to test your changes. You just need to save and refresh the page and the updated code is magically deployed to the page.
YOU, a random developer working at some other company, or on your open source projects, are not sending data to facebook.
Facebook employees, using facebook's tools, running on facebook's dev-servers, to build facebook, ARE sending dev-tool telemetry to facebook's dev-tool-development team.
"What does it matter?" is referring to the latter.
> all sorts of things tracking your usage and access to everything lends even further credence to the big brother thing.
As a user, shouldn't you be happy if the usage / access to everything by Facebook developers is carefully monitored? That should significantly reduce the risk that some insider improperly accesses any of your data.
>As a user, shouldn't you be happy if the usage / access to everything by Facebook developers is carefully monitored?
Only in the sense that I would be happy that the serial killer that captured me uses sterilised blades. That would considerably reduce the risk of infection.