That is correct. Pretty much all of the major cities are right around that 100 mile mark. On the coasts that's because of geography (Victoria, Vancouver, Toronto, Ottawa, Montreal, and the Atlantic provinces). In the prairies we generally have one major city close to the border (Calgary, Regina, Winnipeg) and one further north (Edmonton, Saskatoon, none in MB)
Unfortunately, the org as a whole has completely collapsed as a result of the GitHub acquisition. I wouldn't be surprised to see ADO deprecated in 2-3 years in favor of GH
It's sad because GitHub is honestly a children's toy compared to ADO and GitLab in terms of large-scale management of repos. GH shows no interest in fixing and improving long-standing deficiencies.
Same thing could be said for Microsoft, where employees were upset they were providing O365 and Azure to ICE, but there is no similar outrage directed towards the special China cloud they run
Not really sure what you are implying with the 'Big Brother' comment. The remote development servers are only used for writing/debugging code and not for other daily tasks. Even if they are tracking what tools/functions I am using on the server, what does that matter?
Personally, I have found remote development awesome because it enables engineers to start contributing to a huge product in the very first hour. No need to wait for the repository to clone, the dependencies to install, and the code to build before you can become productive.
> Personally, I have found remote development awesome because it enables engineers to start contributing to a huge product in the very first hour. No need to wait for the repository to clone, the dependencies to install, and the code to build before you can become productive.
I am nobody, but I've had this nagging feeling ever since I started working on websites for big corporations: why can't I work on my code on my local machine with no network connectivity? Why do I need to talk to three different databases and two different services on five different servers? Why can't I just fake all those things during development?
If anything, my not so humble opinion is that remote development further enables bad habits. Of course, remote development is a tool and is not to blame, but I recently learned the term "hermetic" build [Google SRE]:
>The build process is self-contained and must not rely on services that are external to the build environment.
Personally, I think we should work towards making it possible to run (or at least stub) the whole stack on a single physical machine - be it local or remote. What do you think? I think this is a trivial problem engineering-wise but I am not very good at selling ideas.
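To make the idea concrete, here is a minimal sketch in TypeScript of the kind of in-process fake that lets a "whole stack" run on one machine. All names here (`PaymentGateway`, `FakePaymentGateway`) are hypothetical, not from any real system: the point is only that if the real and fake services implement the same interface, the rest of the stack can't tell them apart and needs no network at all.

```typescript
// Hypothetical interface that both the real, networked payment gateway
// and the local fake would implement.
interface PaymentGateway {
  charge(accountId: string, amountCents: number): Promise<{ ok: boolean; txId: string }>;
}

// In-memory fake: deterministic, no network, runs anywhere - the kind of
// stub that makes a hermetic, single-machine dev environment possible.
class FakePaymentGateway implements PaymentGateway {
  private balances = new Map<string, number>();
  private nextTx = 1;

  // Test-only helper to seed an account with funds.
  deposit(accountId: string, amountCents: number): void {
    this.balances.set(accountId, (this.balances.get(accountId) ?? 0) + amountCents);
  }

  async charge(accountId: string, amountCents: number): Promise<{ ok: boolean; txId: string }> {
    const balance = this.balances.get(accountId) ?? 0;
    if (balance < amountCents) return { ok: false, txId: "" };
    this.balances.set(accountId, balance - amountCents);
    return { ok: true, txId: `fake-${this.nextTx++}` };
  }
}
```

Code that depends only on `PaymentGateway` runs identically against the fake on a laptop or the real service in production; swapping one for the other is a one-line change at the composition root.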
Outside of web development, sometimes you need special hardware or workspaces configured to do development. Usually this can be done in a container, but that comes with its own annoyances. Having a central development server for large compiled code bases is really useful.
Not true. I worked on a real distributed payment system. Every service came with a mock clone or a single-server mode to stand up all services on the same box. Every developer had a powerful personal desktop (or two). I loved it. Everyone I knew there loved it.
See comment below. Just because it wasn't true for you doesn't mean it's not true. I'm not talking about mocking services to run it; I'm talking about the development environment and sharing with a team of 20 or so on a code base that takes 45 minutes to compile without parallelizing the build. You also cannot mock GPU functions. There's simply no CUDA emulator.
This is an ideal case and I wish it were like this for everybody. At least one of the payment providers I'm working with on a customer's project doesn't even have a sandbox. We must test with real money. Obviously we mock everything in unit and integration tests, but to know that it really works, the customer must put money on the manual testing account.
Those are all valid trade-offs, but it's ignoring the issue raised in the AP.
Like, is an environment like this encouraging bad habits?
But anyway..
Special hardware: such as? Very few systems can't be downscaled. In situations where you actually need special hardware, we're talking about horizontal multi-server setups anyway.
Special workspaces: be less special, improve tooling, improve build operations. Very few setups actually need centralised configuration.
Large compilations: I'm not sold that a) there are many people with a justifiable need, and b) any real justifiable need is likely going to want on-demand autoscaling of the compilation servers, i.e. development won't be local anyway.
I'm not trying to debunk everything you've said. It's a trade-off, and I've used central dev databases in the past for legacy systems and it worked well for those in the office.
All I can tell you is that iteration speed, testability of code, manual testing and all around team morale was DRASTICALLY improved by stubbing out that bottleneck in newer systems.
I'm speaking for my individual case and others I work with where we have codebases of over a million lines of C++, with many header-only libraries. On an 80-core server make -j can still take 3-4 minutes, and that uses all the resources on the machine. Trust me, I wish I could have something as fast that's not centralized. The closest I can think of is either a VM (slow/wasteful) or a container. The container would be really easy if everyone used VS Code with the container development plugin, but not everyone on the team does.
For those that don't, it's much more friction to remember to start it up, etc.
People don't like being tracked and Facebook has a long history of being deceptive around how and when it tracks people. Facebook also has a long history of selling the data it collects as a result of tracking even when the people being tracked try as they might to opt out.
Hence the "big brother" reference.
Frankly, I was/am a little leery of how Facebook might be "improving" the MSVSC Remote Development pack since I use it every day.
As an employee working on a corporate device, you'll have everything tracked anyway; this is the most tin-foil take ever.
Your latter concern is at least a reasonable one, but it should all be open source anyways? Not that any of us have the time to audit everything. I doubt Microsoft is going to allow anything nefarious...
You're telling me that in the EU things that happen on company assets are not tracked? I think any company could easily come up with a 'strong' reason (IP theft?)
My employer might track me, but how does that mitigate my concerns that a 3rd party data aggregator like Facebook might track me as well as a result of installing a closed source plugin that they "improved".
I just addressed this in the second part of my comment. You would have to trust Microsoft/Facebook at this point, right? So if you don't, and you really actually care (unlike 99.999999 percent of people), don't use closed source software, and audit every single line of the open source software you use, because I bet a lot fewer eyes are looking at much of what you use.
> People don't like being tracked and Facebook has a long history of being deceptive around how and when it tracks people.
you are mixing unrelated things. As an employee of a company it's unlikely you can raise any concern about your privacy when working on the company's main asset.
> Facebook also has a long history of selling the data it collects as a result of tracking even when the people being tracked try as they might to opt out.
Not OP, but I'm also not sure how OP could provide a decent reference, given the constantly moving goalposts with respect to the privacy implications of the Facebook platform. The default has always been towards public and noisy, even as Facebook has been forced to mature and realize there were privacy implications in things they were doing by default on the platform.
Despite a culture of "move fast and break things," nothing has ever been broken by new, more restrictive default privacy settings. Users who signed up in 2005 would still be an open book by default today.
My point was to elaborate on how some other user might feel that Facebook represents a big brother type actor.
However, you have suggested that there is or should be no privacy when doing development, but I'll remind you that not all devs work in large corporate environments and even when they do, they might reasonably expect to be watched only by their employer. It is frankly not 100% clear that Facebook will never have access to telemetry as a result of "improving" this set of plugins. They certainly did not suggest as much.
In my opinion, there is a dramatic difference between what my employer may or may not do to track me and what a 3rd party social media company may or may not do to track my development practices as a result of using a plugin.
While semantically I may have overstated that Facebook "sells" data, they unquestionably considered doing so between 2012 and 2014 despite promises made to the contrary. They also unquestionably shared data with other large data aggregators. Even if USD did not change hands I think we can be fairly certain that Facebook bartered in user data.
Unrelated? How? It's Facebook. They have shown a repeated assault on and disavowal of social responsibility. I wouldn't expect Exxon Mobil to do very nice things in non-oil contexts. Are you saying you would?
Reference? Where have you been? Cambridge Analytica? Its advertising arm? It's all selling user data, either directly or indirectly. Don't be so naive.
I don't honestly expect there to not be access control, especially at a big tech company, but for me, when it's Facebook doing it, I can't help but think there's some MBA that might put you in a "bottom 5%" productive employee cohort because your usage patterns just so happen to correlate with less productive employees. They could of course do that almost anywhere, as a lot of business IT tracking software is ubiquitous, especially in tech, but it feels more likely at FB.
> when it's Facebook doing it I can't help but think there's some MBA that might put you in a "bottom 5%" productive employee cohort because your usage patterns just so happen to correlate with less productive employees.
I work at Facebook. I would say it's less likely there than elsewhere that I have worked due to how the review cycle is set up. People have this strange idea that Facebook is some kind of top down panopticon.
If you’re using a React pipeline on the remote server, how quickly does the page refresh occur? I do remote VM development where I work and a major pain point for me is how long it takes to refresh a page that I’m working on.
You navigate to the remote server in your browser to test your changes. You just need to save and refresh the page and the updated code is magically deployed to the page.
YOU, a random developer working at some other company, or on your open source projects, are not sending data to facebook.
Facebook employees, using facebook's tools, running on facebook's dev-servers, to build facebook, ARE sending dev-tool telemetry to facebook's dev-tool-development team.
"What does it matter?" is referring to the latter.
Can someone explain to me why they would spend effort on this kind of thing? Other than minimalist websites like HN, I can't remember the last time I saw completely unstyled form inputs in the wild.
If every web developer is going to immediately reach for CSS/JS to style these things, who cares what the default style is?
If I could use unstyled controls and know that most browsers would have reasonable defaults, I would. I think a lot of line-of-business developers would agree. I see an opportunity to eliminate one thing that adds bloat to web applications.
I often do use the unstyled input for, say, range sliders and color pickers. Most component libraries don't include them, but they are complicated enough to make you not want to reimplement them. Even if you choose another external library, it will be thematically incompatible anyway, so you might as well go barebones and accept the OS style.
In general, it's definitely possible to go too far with styling. Sometimes the look and feel of your app does add value but for the vast majority of apps, how your scrollbars and file pickers look doesn't ever matter.
I guess one area where I see custom styling as important is visual consistency between browsers. These updates only apply to Chromium browsers, so a sufficiently visually complicated UI might look good in Chrome, but bad in Firefox for instance.
I don't feel super strongly either way, but I think it is important to call out: a lot of native HTML controls cannot be styled, so they are re-created in JS instead. This causes problems with accessibility and burdens the browser with unneeded JS.
On a very related note, Safari not supporting date and time inputs on macOS, and not having full support on iPhone, is super irritating. Having to add a bunch of barely accessible JS controls to my site because Apple refuses to implement a spec after a bug was filed 6 years ago (https://bugs.webkit.org/show_bug.cgi?id=119175) was rather annoying.
I'd argue that the correct solution then is to have better support to style native components if designers are going to do it anyway.
Additionally, it is not terribly difficult to make a custom checkbox, for instance, accessible. It may require JS and ugly DOM, but I wouldn't say it's difficult.
Date and time pickers are harder. All the platforms have a native control, and all major browsers except Safari on macOS expose it.
That one omission means the control is basically unusable without some sort of polyfill, at which point you might as well just use the same JS implementation everywhere because why not. :/
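As one small illustration of what such a polyfill ends up re-doing by hand, here is a hedged sketch of validating an ISO date string typed into a plain text input standing in for <input type="date">. The helper name `parseIsoDate` is made up for this example, not from any particular library.

```typescript
// Fallback validation for a plain text field used where <input type="date">
// is unavailable. Accepts only the ISO YYYY-MM-DD format.
function parseIsoDate(value: string): Date | null {
  const m = /^(\d{4})-(\d{2})-(\d{2})$/.exec(value);
  if (!m) return null;
  const year = Number(m[1]);
  const month = Number(m[2]);
  const day = Number(m[3]);
  const d = new Date(Date.UTC(year, month - 1, day));
  // Reject out-of-range values like 2020-02-30, which the Date
  // constructor would otherwise silently roll over into March.
  if (d.getUTCFullYear() !== year || d.getUTCMonth() !== month - 1 || d.getUTCDate() !== day) {
    return null;
  }
  return d;
}
```

And this covers only validation; a real replacement also needs a calendar UI, keyboard navigation, and ARIA wiring, which is exactly the burden the native control would have absorbed.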
The dropdown on select not being customizable is also famously irritating.
Nearly all browsers have a native date/time control, true. Also true - many browsers do it quite badly. So eventually I always end up using the JS solution...
e.g. There's no way (by design!) to override in CSS the browsers' date format detection - and the method differs by browser/OS combination. Some customers can't manage to configure it, so they end up with the American format when they want the European format. Also, more than a few native browser date controls are very underfeatured (e.g. current Edge).
I am thankful for momentjs literally every day I do web development. I know it gets a lot of flak for its bundle size, but it makes so many things easy.
Heck, the native JS Date object can't even give an ISO time string in the same time zone the date object was created in. Not to mention a dozen other deficiencies.
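To illustrate the gap: Date.prototype.toISOString() always converts to UTC, so producing the same wall-clock time with the local offset attached means formatting it yourself. A sketch (toLocalIsoString is a made-up helper, not a standard API):

```typescript
// Format a Date as an ISO 8601 string in the machine's local time zone,
// e.g. "2021-06-01T09:30:00+02:00", instead of the UTC-only toISOString().
function toLocalIsoString(d: Date): string {
  const pad = (n: number) => String(Math.abs(n)).padStart(2, "0");
  const offsetMin = -d.getTimezoneOffset(); // minutes east of UTC
  const sign = offsetMin >= 0 ? "+" : "-";
  const absOffset = Math.abs(offsetMin);
  return (
    `${d.getFullYear()}-${pad(d.getMonth() + 1)}-${pad(d.getDate())}` +
    `T${pad(d.getHours())}:${pad(d.getMinutes())}:${pad(d.getSeconds())}` +
    `${sign}${pad(Math.floor(absOffset / 60))}:${pad(absOffset % 60)}`
  );
}
```

Note the small traps even here: getTimezoneOffset() is positive *west* of UTC (hence the negation), and half-hour offsets like +05:30 exist, so minutes can't be dropped.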
Dealing with time has always been, and probably always will be, painful.
And then there is China, where they use YYYY-MM-DD, but 12-hour time with an AM/PM equivalent. So close to doing it the correct way!
The article points out that a lot of this effort came out of accessibility requirements. Microsoft is very serious about accessibility. They can't stop a web developer from breaking accessibility, but they certainly care that the defaults are accessible.
Hi, one of the authors here. We created this library to help developers on our product ease into the Redux ecosystem. One of the main challenges we faced is that our application has many entry points where we need to create a new Redux store, so it was important for us to devise a way to easily import and re-use common state, reducers, and sagas.
Let us know if you have any questions!
You could try the https://github.com/Hotell/rex-tils package. I find these action creator utilities make typing the redux concepts a little easier.
Also, I am working on a library which aims to centralize all of the redux constructs into reusable modules. It doesn't eliminate all of the boilerplate, but I think it makes it easier to add things to the store.
https://redux-dynamic-modules.js.org
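Not speaking for either library's actual API, but the pattern such action-creator utilities encode can be sketched by hand in TypeScript: action creators whose return types form a discriminated union, so a reducer's switch on `type` narrows the payload automatically.

```typescript
// Hand-rolled sketch of typed Redux action creators (illustrative names,
// not the rex-tils or redux-dynamic-modules API).
const INCREMENT = "counter/increment" as const;
const SET = "counter/set" as const;

const increment = () => ({ type: INCREMENT });
const set = (value: number) => ({ type: SET, payload: value });

// The union of the creators' return types is the full action type:
// no separate interface declarations to keep in sync.
type CounterAction = ReturnType<typeof increment> | ReturnType<typeof set>;

function counterReducer(state: number = 0, action: CounterAction): number {
  switch (action.type) {
    case INCREMENT:
      return state + 1;
    case SET:
      return action.payload; // narrowed: payload only exists on SET actions
    default:
      return state;
  }
}
```

The appeal of the utility libraries is mostly that they shrink the `as const` / `ReturnType` ceremony above down to one or two helper calls.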