Thoughts On End to End Testing of Browser Apps

Tuesday, February 10, 2015

In a previous post on using the PageObject pattern with Protractor, Martin asked how much time is wasted writing tests, and who pays for the wasted time?

To answer that question, I want to think about the costs and benefits of end-to-end testing. I believe the cost-benefit curve for true end-to-end testing looks like the following.

[Figure: cost-benefit curve of e2e testing]

There are two significant slopes in the graph. First is the "getting started" slope, which ramps up positively, but slowly. Yes, there are technical challenges in e2e testing, like learning how to write and debug tests with a new tool or framework. But what typically slows the benefit growth is organizational. Unlike unit tests, which a developer might write on her own, e2e tests require coordination and planning across teams, both technical and non-technical. You need to plan the provisioning and automation of test environments, and have people create databases with data representative of production data, but scrubbed of protected personal data. Business involvement is crucial, too, as you need to make sure the testing effort is verifying the right acceptance criteria for stakeholders.

All of the coordination and work required for e2e testing on a real application is a bit of a hurdle and is sure to build resistance to the effort on many projects. However, the sweet spot at the top of the graph, where the benefit reaches a maximum, is a nice place to be. The tests give everyone confidence that the application is working correctly, and allow teams to create features and deploy at a faster pace. There is a positive return on the investment made in e2e tests. Sure, the test suite might take an hour to run, but it can run any time of the day or night, and every run might save 40 hours of human drudgery.

There is also an ugly side to e2e testing, where the benefit starts to slope downward. Although the slope might not always be negative, I do believe the law of diminishing returns is always in play. e2e tests can be amazingly brittle and fail with the slightest change in the system or environment. The failures lead to frustration, and it is easy for a test suite to become the villain that everyone despises. I've seen this scenario play out when the test strategy is driven by mindless metrics, like a goal to reach 90% code coverage.

In short, every application needs testing before release, and automated e2e tests can speed the process. Unfortunately, making a good test suite that doesn't become a detriment to the project is difficult, due to the complex nature of both software and the human mind. I encourage everyone to write high-value tests for the riskier pieces of the application, so the tests can catch real errors and build confidence.
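As a concrete illustration of one way to keep a high-value test maintainable, here is a minimal sketch of the PageObject pattern mentioned in the earlier Protractor post. Everything here is hypothetical: the `CartPage` class, its selectors, and the fake driver are invented for the example. A real Protractor test would drive a browser with `element(by.css(...))` lookups; the fake driver below stands in for the browser so the sketch is self-contained.

```javascript
// PageObject sketch: the test talks to CartPage, not to selectors,
// so a markup change means updating one class instead of many tests.
class CartPage {
  constructor(driver) {
    this.driver = driver;
  }
  addItem(sku) {
    // hypothetical selector convention for "add to cart" buttons
    this.driver.click(`[data-add="${sku}"]`);
  }
  total() {
    return this.driver.text('#cart-total');
  }
}

// Fake driver so the sketch runs without a browser; in Protractor
// these calls would be real element lookups and click/getText calls.
const fakeDriver = {
  state: { total: 0 },
  click(selector) {
    // pretend every added item costs 10
    if (selector.startsWith('[data-add=')) this.state.total += 10;
  },
  text(selector) {
    return selector === '#cart-total' ? `$${this.state.total}` : '';
  }
};

const cart = new CartPage(fakeDriver);
cart.addItem('sku-1');
cart.addItem('sku-2');
console.log(cart.total()); // "$20"
```

The point of the design is that the checkout flow, one of the riskier pieces of a retail application, gets exercised through a single stable API, so the test survives cosmetic changes to the page.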

Dennis Burton Tuesday, February 10, 2015
Test where the money is. That isn't about return on investment, it is literal. If you have an online retail site, the money is in the cart. Make sure there is coverage around the flow of money. I love the part about coordination of different business units causing resistance. That is arguably as valuable as the test itself. This is an excellent way to get everyone on the same page. Great post!
Dan Kellett Tuesday, February 10, 2015
I think the concept might be better illustrated with a cumulative graph since you are talking slopes. Benefit doesn't really disappear as you spend more (ROI might). If this were cumulative, you would simply have a very low slope or flat slope after the benefit peaks.
scott Tuesday, February 10, 2015
@Dennis - thanks! @Dan - true. I am trying to give a sense of what happens when tests get thrown out or ignored, but like I said in the post, the slope might not go negative.
SteveC Wednesday, February 11, 2015
My teams call them "Integration Tests", and the first task of each sprint is to write them (sometimes as console apps, sometimes as VS Unit Tests) to match each piece of Acceptance Criteria. It helps keep the developers "on track" through the sprint, and since the tests are automated they can be run after the build. They are also demonstrable pieces at the end of the sprint. No client has ever asked us for unit test coverage percentage, but every client has asked us "does it work, can you show me?". Having e2e/integration tests (not unit test coverage numbers) is what works for my teams.
Brett Baggott Wednesday, February 11, 2015
In my most recently completed project, the consulting people we were working with (Bryan Hunter) were using us as a "test" case (pun intended), giving us end-to-end QA using canopy (and F#, of course) at a reduced price in order to make us their guinea pig. Having gone through that experience, I have three primary observations. First, all of the points you make above were true for us. Second, at the core of that "reduced price" was having brilliant people like Bryan Hunter and Rachel Reese doing the QA part without our having to pay full price. I'm not so sure the organizational and other challenges you normally face wouldn't have been more painful for us had we had "normal" talent involved. Finally, having seen the value, and having been on projects where things like testing the shopping cart weren't adequately done because of the drudgery of manual testing... I will strongly advocate for this type of end-to-end testing on all my future projects. At the very least, as Dennis says, for the money parts. We have already started adding it to my current project. Two side notes: we used MSpec for our unit testing, and we wrote our Acceptance Criteria on the story cards using Given/When/Then, which made it very easy to translate into both testing platforms. Finally, the "get thrown out or ignored" point is a big deal. If it's worth doing during the project (and it is), it's worth maintaining, just like the rest of the code base. And I'm sure I'm being Capt. Obvious here. Great topic, thanks for sharing.
