
Largely because more of the automated integration testing has to be done with a headless browser, e.g. PhantomJS via Poltergeist.

If you have no JS in an important area of your site, you can integration test it without a headless browser. Your test process models HTTP requests as simple method calls to the web framework. (E.g. in Rails, you can simulate an HTTP request by sending the appropriate method call to Rack.) The method calls simply return the HTTP response. You can then make assertions against that response, "click links" by sending more method calls based on the links in the response, submit forms in a similar manner, etc.
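In Rails that looks roughly like this (a sketch; the routes, form fields, and paths are hypothetical):

    require "test_helper"

    # Runs entirely in-process: each "request" is a method call
    # dispatched through Rack; no browser or server is involved.
    class UsersFlowTest < ActionDispatch::IntegrationTest
      test "browsing and creating a user" do
        get "/users"                    # simulated HTTP GET
        assert_response :success

        # "Click" a link by requesting the path its href points to.
        get "/users/1"
        assert_response :success

        # Submit a form the same way: a method call carrying params.
        post "/users", params: { user: { name: "Alice" } }
        assert_response :redirect
      end
    end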

But if a given part of your app depends on JS, you pretty much have to integration test in a headless browser. Given the state of the tooling, that's just not as convenient as the former approach. Headless browsers tend to be slow as molasses. There are all kinds of weird edge cases, often related to asynchronous stuff. You spend a lot of time debugging tests instead of using tests to find application bugs.
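For contrast, the JS-dependent flow needs something like this (a sketch using Capybara with the Poltergeist driver; the page and selector are made up):

    require "capybara/rspec"
    require "capybara/poltergeist"

    Capybara.javascript_driver = :poltergeist
    # Implicit waiting papers over async rendering, at the cost of
    # speed, and of flakiness whenever the timeout proves too short.
    Capybara.default_max_wait_time = 5

    feature "dashboard", js: true do
      scenario "widget loads via AJAX" do
        visit "/dashboard"
        # Retries until the element appears or the wait expires;
        # assertions that don't wait are a classic source of flaky tests.
        expect(page).to have_css("#widget.loaded")
      end
    end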

Worst of all, headless browsers still can't truly test that "the user experience is correct." That's because we haven't yet found a way to define correctness. For example, a bug resulting from the interaction of JS and CSS is definitely a bug, and it can utterly break the app. But how do you assert against that? How do you define the correct visual state of the UI?



Yes, I've known about the headless approach for a while.

Splitting front-end and back-end tests is desirable.

> Worst of all, headless browsers still can't truly test that "the user experience is correct."

This is the claim from the old Joel Spolsky article about automated tests, but it shouldn't be the ultimate dealbreaker.

Nobody claims you should rely on automated tests 100%. Automated tests cover the functionality of your software, not the look-and-feel or user experience. You have separate tests for that.

Problems between JS and CSS shouldn't be that common either (and shouldn't, again, become a dealbreaker). If you have tons of them, then perhaps what's broken is the tools we use? Or perhaps how we use them?

I don't test my configurations (in-code configuration, not infrastructure configuration) because configuration is one-time only. You test it manually and forget about it.


> Splitting front-end and back-end tests is desirable.

I don't feel confident without integration tests. An integration test should test as much of the system together as is practical. If I test the client and server sides separately, I can't know whether the client and server will work together properly.

For example, let's say I assert that the server returns a certain JSON object in response to a certain request. Then I assert that the JS does the correct thing upon receiving that JSON object.

But then, a month later, a coworker decides to change the structure of the JSON object. He updates the JS and the tests for the JS. But he forgets to update the server. (Or maybe it's a version control mistake, and he loses those changes.) Anyone running the tests will still see all tests passing, yet the app is broken.
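A full-stack test avoids that trap because it exercises the real endpoint and the real JS together. A minimal sketch with Capybara (the /users page, markup, and User model are hypothetical):

    feature "user list", js: true do
      scenario "renders users fetched as JSON" do
        User.create!(name: "Alice")
        visit "/users"
        # The JS fetches the JSON and renders each user into the DOM.
        # If the server's JSON shape drifts from what the JS expects,
        # nothing renders and this assertion fails.
        expect(page).to have_css("li.user", text: "Alice")
      end
    end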

Scenarios like that worry me, which is why integration tests are my favorite kind of test.

> Automated tests cover the functionality of your software, not the look-and-feel or user experience.

It's not about the difference between a drop shadow and no drop shadow. We're not talking about cosmetic stuff. We're talking about elements disappearing, or being positioned so they cover other important elements. Stuff that breaks the UI.

> Problems between JS and CSS shouldn't be that common either

Maybe they shouldn't be, but they are. I'm not saying I encounter twelve JS-CSS bugs a day. But they do happen. And when they make it into production, clients get upset. There are strong business reasons to catch these bugs before they ship.

> If you have tons of them, then perhaps what's broken is the tools we use? Or perhaps how we use them?

Exactly. I think there's a tooling problem.


> I don't feel confident without integration tests.

Nobody does. Having said that, my unit tests are plentiful, and they test things in isolation.

My integration tests are limited to database interaction with the back-end system; they don't go near end-to-end, to avoid overlapping with my unit tests.

I have another set of functional tests that use Selenium, but with only a minimal set of test cases covering the happy path (can I create a user? can I delete a user?). There are no corner-case tests unless we find they're a must, because full-blown functional tests are expensive to maintain.

Corner cases are done at the unit-test or integration-test level.
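A sketch of those happy-path Selenium tests, using the Ruby bindings (the URL, field names, and confirmation dialog are assumptions):

    require "selenium-webdriver"

    driver = Selenium::WebDriver.for :chrome
    begin
      # Happy path: create a user, verify it appears, delete it.
      driver.navigate.to "http://localhost:3000/users/new"
      driver.find_element(name: "user[name]").send_keys("Alice")
      driver.find_element(css: "input[type=submit]").click

      wait = Selenium::WebDriver::Wait.new(timeout: 5)
      wait.until { driver.page_source.include?("Alice") }

      driver.find_element(link_text: "Delete").click
      driver.switch_to.alert.accept    # accept the JS confirm dialog
    ensure
      driver.quit
    end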



