Manual testing is for chumps (the US government is a good source of chumps). It's a great way to stretch a project out and balloon the budget, because you need a separate team (maybe even multiple teams) to do in a few weeks or months what a couple of computers could have done in a few hours or days. Why anyone would prefer manual testing for anything is beyond me; it's a great source of errors because it becomes your big time crunch in large projects. You end up cutting tests because "that one never fails" (until you cut it) and rushing through the rest, handwaving issues with "I'm sure we just fat-fingered it".
Automated testing takes less time, fewer people, less hardware (normally, but not always), and can be kicked off by anyone in the project team (if you're sane about your project management) at any time. So you can run your suite today and know (worst case on huge projects) by Friday or next week that it's actually passing or failing. If you stick with manual testing, you won't know about failures for weeks, months, and sometimes years. Genius if you're going for government money, moronic if you're spending your own.
Hmm, this has not been my experience, but I spend a lot of time in the UI. Writing UI tests is a massive pain: the functions are impure, naturally deal with side effects, and involve robo-interacting with a fake browser. The amount of code required simply to scaffold a fake browser that works reasonably is a monumental feat, and that's before you write a single test.
Further, it's not enough to measure how long it takes to write tests; you also have to measure how long it takes to maintain them. That's the time creep that can be absolutely brutal.
We do a lot of "mock 15 different things so that you can unit test this one thing." I too think it's a drag because it tends to ossify things. Not only do I have to change the code, I have to change all the unit tests and mocks. And it just doesn't seem to catch a lot of stuff; or at least a lot of bugs slip through.
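To make the complaint concrete, here's a minimal Python sketch of that pattern (all names here are hypothetical, not from any real codebase): one happy-path test that needs three mocks just to run, so any rename or refactor of the collaborators breaks the test even when observable behavior is unchanged.

```python
from unittest.mock import MagicMock

# Hypothetical function with three collaborators.
def place_order(inventory, payments, mailer, order):
    if not inventory.reserve(order["sku"]):
        return "out_of_stock"
    payments.charge(order["total"])
    mailer.send_receipt(order["email"])
    return "ok"

def test_place_order_happy_path():
    # Three mocks to exercise one code path. Renaming any
    # collaborator method forces a matching edit here, even though
    # nothing a user could observe has changed.
    inventory = MagicMock()
    inventory.reserve.return_value = True
    payments = MagicMock()
    mailer = MagicMock()

    result = place_order(inventory, payments, mailer,
                         {"sku": "A1", "total": 10, "email": "x@example.com"})

    assert result == "ok"
    payments.charge.assert_called_once_with(10)
    mailer.send_receipt.assert_called_once_with("x@example.com")

test_place_order_happy_path()
```

The test "passes" forever as long as the call shapes match, which is exactly the ossification being described: it verifies the wiring, not the behavior.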
I wish we had spent that time on better anomaly detection, defensive coding, and stuff like blue-green deployments. Users are the best testers, and it's not really a big deal if 2% of your users see a bug for 10 minutes.
Maybe different if people will die or if money will be lost. But for general business or consumer stuff I think it's fine.
> Users are the best testers and it's not really a big deal if 2% of your users see a bug for 10 minutes.
This is the product engineer mindset. Your job is to deliver value, and you use the tools you deem appropriate and most efficient.
But most engineers tend to be disconnected from value (both by their own choice, and by organization structure). When you don't know what the value of what you're producing is, you start clinging to other signals, most of which are actually noise.
> Why anyone would prefer manual testing for anything is beyond me,
Formal software testing is great, particularly if you're working on a hard problem with clear goals like building some hypothetical API to do some financial calculations or transactions where an error might cost tens of thousands of dollars or more.
But there are cases where literal manual testing is the best business decision, even if it's not the best software engineering decision.
To give one example: have you ever worked in an agency environment?
What do clients do when you hand them off any scale of project? Some people within their organization will then go over every inch of the software and see if it works as they expected. They will literally never ask once what your test coverage is. But they will notice if some test data they input themselves doesn't give the right answer, or if some minor UI element they never really thought through doesn't seem to work the way they like.
Writing tests in this environment is 99% intellectual masturbation.
> But they will notice if some test data they input themselves doesn't give the right answer, or if some minor UI element they never really thought through doesn't seem to work the way they like.
Those people are not going to stop at one, so you would be foolish not to write a regression test for the behavior they wanted.
When they find the next thing they think is broken, and you try to fix it, you could regress your earlier fix for the first item.
In reacting to the user reports, you may be breaking other things that are not tested, but at least you pin down the behavior that the users are reporting.
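One cheap way to pin a reported behavior is a regression test written straight from the bug report. A hypothetical Python sketch (the tax-truncation rule and amounts are invented for illustration):

```python
def line_item_tax(amount_cents, rate_percent=10):
    # Integer math avoids float surprises; truncate to the cent,
    # which is the behavior the client signed off on after their report.
    return amount_cents * rate_percent // 100

def test_reported_tax_rounding():
    # Straight from the report: a $19.99 item at 10% must show $1.99,
    # not $2.00. If a later fix reintroduces rounding, this fails.
    assert line_item_tax(1999) == 199

test_reported_tax_rounding()
```

The point isn't coverage; it's that the next "fix" for the next report can't silently undo this one.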
A UI element not working right in some way could be a genuine "pass" for testing. I mean, writing some monkey test that feeds events into a UI to check that it ends up in exactly some expected state could be a big waste of time.
Probably if you were writing a reusable widget framework, you might want that sort of testing, because you could make an inadvertent change which makes every instance of some widget behave differently in downstream applications.
If you have some complex behavior in your UI that doesn't come from the underlying widgets, and has to be right in certain ways, then that could be worth testing.
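One common way to make that kind of behavior testable without a fake browser is to keep the complex logic out of the widget layer entirely. A hypothetical sketch (the filtering rule is invented, not from any framework):

```python
# The "complex behavior" — which rows a filter box should show — is a
# pure function, so it can be tested with plain asserts instead of
# robo-driving a browser. The widget layer just calls it.
def visible_rows(rows, query):
    q = query.strip().lower()
    if not q:
        return rows  # empty or whitespace query shows everything
    return [r for r in rows if q in r["name"].lower()]

def test_filter_behavior():
    rows = [{"name": "Alice"}, {"name": "Bob"}]
    # Case-insensitive, whitespace-tolerant matching.
    assert visible_rows(rows, "  AL ") == [{"name": "Alice"}]
    assert visible_rows(rows, "") == rows

test_filter_behavior()
```

The widget code that wires this into event handlers stays untested (or gets a thin smoke test), but the part that "has to be right in certain ways" is pinned down.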
Your comment was dead when I saw it so I vouched for it.
> But there's some cases where literal manual testing is the best business decision
I will agree, manual testing can be the best business decision. Especially if it's someone else's money and time you're spending on it rather than your own. Or if your system is trivial or short-lived.
I don't work on short-lived systems, and manual testing has repeatedly been a major hurdle to improvement overall when dealing with non-trivial, long-lived systems in my experience. Manual testing either misses too much, or takes too much time if it's comprehensive. And even if it is comprehensive, or perhaps especially if it's comprehensive, manual testing gives you many false positives and negatives because it is error prone.
I think most of the developers on HN have predominantly worked on long-lived products where a culture of heavily testing things makes perfect sense and the idea of doing anything else seems strange. If you've got a customer base on some SaaS that depends on things working as expected, then yeah, having lots of tests to make sure that your system is consistent and that changes to various edge cases spread over the course of years don't break subtle things is vital.
So there's probably a sort of developer cultural disconnect when somebody who has often built urgently needed, shorter-lived, bespoke projects points out that there are types of projects and environments where software testing brings a much different value proposition. In some cases, you're not building something that's intended to last for years: maybe you're building software intended to be used in a booth demonstration for 3 days at one industry conference, as an example. In some cases, speed of development and the absolute lowest cost take priority over perfect software engineering principles, or otherwise the thing couldn't be built at all.
When Barry from HR doesn't like the way a page works, that's a feature request, or maybe it's a bug.
When a bit of errant code refunds the customer $10000 instead of $100, that's a BUG (ALL CAPS).
The two are not mutually exclusive. You need human critics looking at what you create, but you also need objective tests, and automation is very good at objective tests. You might even ask Barry to look at your creation earlier, further reducing the need to inject more humans than you already have.
Just throwing in the opposite view. I saw money burned on test automation because a manager needed to check the box. They got a guy who set some automation up over 6 months and then left. The guy didn't know the project, and the project was evolving at a rapid pace, so none of the GUI automation was working a couple of weeks after he left. Actually, a lot of it stopped working while he was still on it, because the project was moving under his feet.
That's the bigger issue. Why would you put someone who doesn't know the project in charge of testing it? That's brainless. Your manager was not very competent; sorry you experienced that.