> What I've noticed in practice, however, is that occasionally this process will allow an upgrade to a dependency that passes the automated build and test step but introduces the wildest runtime error into the application. Usually at the time when we aim to deliver something.
Sounds like dependabot is very useful for uncovering insufficient test coverage or missing integration tests :)
That would be a shallow reading, however. Of the last two major runtime issues, one broke the test runner and silently ignored a number of tests. The other was a Python/Django-specific sub-dependency that broke the admin interface, which, obviously, we don't explicitly test.
On the other hand, very recently we had to abort a release because of an outdated dependency that Dependabot DID actually raise.
Which is why I don't want to throw the baby out with the bathwater, as one or two people have suggested.
But I can say that the reality of working with Dependabot is not very well reflected in popular online articles.
This is not testing some other package. It's testing functionality provided by your app that happens to rely upon a 3rd party package. The 3rd party package has its tests, but doesn't know or care about the specific integration environment of your site.
With regard to writing tests for the Django admin: you need to do it if your site customizes and/or depends on it.
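Even a simple smoke test goes a long way here. A minimal sketch, assuming a stock Django project with the admin mounted at /admin/ (the app/model names below are placeholders):

```python
# tests/test_admin_smoke.py -- smoke tests for the Django admin.
from django.contrib.auth import get_user_model
from django.test import TestCase


class AdminSmokeTests(TestCase):
    def setUp(self):
        # Assumption: the default User model with username/email/password.
        user = get_user_model().objects.create_superuser(
            username="admin", email="admin@example.com", password="not-a-real-password"
        )
        self.client.force_login(user)

    def test_admin_index_renders(self):
        # A broken admin sub-dependency typically surfaces here as a 500.
        self.assertEqual(self.client.get("/admin/").status_code, 200)

    def test_changelist_renders(self):
        # Replace "myapp"/"article" with one of your registered models.
        self.assertEqual(self.client.get("/admin/myapp/article/").status_code, 200)
```

It won't catch everything, but it would have caught the kind of "admin suddenly 500s" breakage described above.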
While this is true, many will probably disagree, just because they don't want to consider the maintenance burden that external dependencies will introduce.
So between choosing to write everything themselves (and getting nothing done), writing tests against dependencies (and getting little done due to the overhead), or claiming that external dependencies should have tests of their own, many will pick the last option.
Then again, in a world where create-react-app results in 180 MB of dependencies and about 1500 modules (probably different numbers now; I'm using some older ones from my blog post), auditing them for security is an uphill battle, let alone actually testing them.
The situation in back-end development isn't much better, to be honest: once you look into the complexity of any framework like Spring, Laravel, Django, Rails, etc., it becomes apparent that creating a fully featured framework like that is a huge undertaking.
That said, you should at least test the bits where the external dependency is integrated with your codebase.
> one that broke the test runner, and ignored a number of tests
That's unfortunate! For the project I'm working on, we've "solved" that by showing the number of tests that ran and the difference from the number that ran on main.
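Roughly along these lines (a sketch only; the report path and the file where the main build publishes its count are assumptions, and you could just print instead of failing):

```python
# ci/compare_test_counts.py -- print the test count and the diff vs. main.
import sys
import xml.etree.ElementTree as ET


def count_tests(junit_xml_path):
    root = ET.parse(junit_xml_path).getroot()
    # JUnit XML puts a "tests" attribute on <testsuite> (or on each child
    # of <testsuites>), so sum over whichever shape we got.
    suites = [root] if root.tag == "testsuite" else list(root)
    return sum(int(suite.get("tests", 0)) for suite in suites)


branch_count = count_tests("reports/junit.xml")
main_count = int(open("reports/main_test_count.txt").read())  # published by the main build

print(f"tests run: {branch_count} (main: {main_count}, diff: {branch_count - main_count:+d})")
if branch_count < main_count:
    sys.exit(1)  # fewer tests than main usually means something was silently skipped
```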
FWIW, at previous jobs, upgrading Java dependencies was a major pain because they were all outdated and the latest versions introduced too many breaking changes for us.
At my current job, we pretty much instantly merge all PRs from Dependabot because we trust our CI. Upgrades rarely introduce problems, and if they do, they are easy to fix.
Was it an update to the test runner itself or to test-specific packages that broke the test runner? I would ignore infrastructure/testing/tooling packages in Dependabot and upgrade them manually to prevent these errors.
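Dependabot's ignore rules in dependabot.yml can do that; something like the following (ecosystem and package names are just examples):

```yaml
# .github/dependabot.yml
version: 2
updates:
  - package-ecosystem: "pip"
    directory: "/"
    schedule:
      interval: "weekly"
    ignore:
      # Leave test/tooling packages for manual, deliberate upgrades.
      - dependency-name: "pytest"
      - dependency-name: "pytest-*"
      - dependency-name: "tox"
```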
Most test runners have an option (or can easily be extended) to fail when zero or fewer than X tests have been run. You should use it for situations like this.
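With pytest, for example, a tiny conftest.py hook can enforce a minimum collected-test count (the threshold here is a made-up number; pytest already exits non-zero when nothing at all is collected):

```python
# conftest.py -- abort the run when suspiciously few tests are collected.
import pytest

MIN_EXPECTED_TESTS = 250  # assumption: roughly the size of your suite


def pytest_collection_modifyitems(session, config, items):
    # If an upgrade silently breaks discovery, the count drops and we abort.
    if len(items) < MIN_EXPECTED_TESTS:
        raise pytest.UsageError(
            f"only {len(items)} tests collected, expected at least {MIN_EXPECTED_TESTS}"
        )
```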
What if there's a bug where your CI ignores all steps? Some failure scenario is always possible, and we can go all the way to nasal demons. You have to accept the risk at some point.
I can honestly say I have never gone into "news mode" on the main search site before. If I know I want to search for news stories, I go to news.google.com and search there. At any rate, you have to choose "news mode"; it is not part of the normal search screen.
I wonder how you deal with restarting changefeeds? The last time I checked, you'd have to go through every document again after losing the connection to RethinkDB or restarting the server.
We use changefeeds more or less as a queue/pipeline and don't care too much about the initial state. When the changefeeds are created, we specifically don't pass the includeInitial argument [0], so we only get a stream of newly modified/created documents.
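Roughly like this with the Python driver, where the JS driver's includeInitial is spelled include_initial (connection details and the table name are placeholders):

```python
# A minimal sketch of a "new changes only" feed.
from rethinkdb import RethinkDB

r = RethinkDB()
conn = r.connect(host="localhost", port=28015, db="app")

# include_initial defaults to False, so the feed yields only documents
# created/modified after the cursor is opened -- no replay of existing rows.
feed = r.table("events").changes().run(conn)
for change in feed:
    print(change["new_val"])  # replace with your downstream handler
```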
In a slightly different use case than what OP is describing, we keep track of createdAt and updatedAt in Rethink, order by those, and pick up from max(createdAt) in the destination in order to fake restarting the feeds.
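I.e. something like this sketch (assumes a secondary index on createdAt; the table name and the hardcoded cut-off are placeholders for whatever the destination reports):

```python
from rethinkdb import RethinkDB

r = RethinkDB()
conn = r.connect(host="localhost", port=28015, db="app")

# The max(createdAt) already written to the destination; in reality this
# would come from a query against the destination store, hardcoded here.
last_seen = r.iso8601("2021-01-01T00:00:00+00:00")

# Backfill everything newer than that, in order, then reopen the changefeed.
backfill = (
    r.table("events")
     .between(last_seen, r.maxval, index="createdAt", left_bound="open")
     .order_by(index="createdAt")
     .run(conn)
)
for doc in backfill:
    print(doc)  # replace with your downstream handler
```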
For what it is worth, apart from road signs and speeds, I've not really seen imperial measurements used for much any more in the UK.
I would be amazed (and saddened) if someone had started a new, large-scale, serious, rigorous engineering project in the past couple of decades (i.e. since the 90s) and used imperial units. (I am sure there is lots of small-scale/personal/etc. stuff done in imperial, though; I don't count interplanetary space flight as small-scale!)
Imperial is pretty much dead in the UK (or maybe just London?) apart from roads and conversational/casual usage, where it's often easier to say the imperial equivalent than the metric ("a pint of beer" is easier than "568 millilitres of beer", "about a foot" is easier than "about 30 centimetres", if only because of fewer syllables).
If you think imperial measurements are dead in London, you've never had to deal with plumbing in your apartment, where you get a mixture of imperial and metric pipes and threads ;)
And the best part is always finding a metric pipe with an inch thread pitch on it...
Also, IIRC, most engineering schools in the US use metric these days, but that doesn't mean stuff doesn't get screwed up. Imperial on its own is also annoying, since you have both decimal (thous) and fractional units, which I've always found frustrating.
Do you have any data on that? We found that using transit could actually increase overall throughput because it does some minimal compression/deduplication.