🎤 Test Impact Analysis. Smarten up your pipeline and reduce test time
- 👤 Bart Szulc
- Twitter: @BartSzulc
📝 Slides: https://delex-conf.com/2019/test-impact-analysis-smarten-up-your-pipeline-and-reduce-test-time/
📹 Video: https://youtu.be/hWi819DCMrE
It’s always been a problem of mine to understand what code coverage reports are trying to tell me. I couldn’t wrap my head around the one particular number they come up with: the percentage of code covered by tests. Is the current percentage OK? Should it be higher? Is 80% enough, or should I do something to reach the magical 100%? I had to find answers to those questions while working on a major refactor of a system. The problem was: was the coverage good enough to give the refactor a green light?

To answer that question, I decided to first look at the coverage at the lowest level, at the individual lines that are and aren’t covered, and to drive the analysis with risk. Assume that all the types of automated regression tests in your project cover 80% of your production code. This coverage is provided by unit tests, API-level tests, WebDriver tests, and other system-level tests. Besides telling you that 80% of the code is covered, the number also tells you that 20% of your code has no test at all. Now switch from percentage to number of lines. In my case, the system we were about to refactor consisted of over a million lines of code. One fifth of a million is a lot of code missing coverage… a lot… but not every line is equal in value, and thus equal in risk.

So what was the value and risk associated with each line missing coverage? One way of looking at risk is as the probability of something unwanted occurring multiplied by the impact it causes. A missing test for a line of code is definitely a risk: nothing will tell you when the line is wrongly modified and that modification is about to cause a bug in production.
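A minimal sketch of that arithmetic and risk framing in Python; the totals follow the example above, while the per-line probability and impact weights are illustrative assumptions, not figures from the talk:

```python
# Back-of-the-envelope version of the reasoning above. The totals follow
# the talk's example; the probability/impact weights are made-up assumptions.

total_lines = 1_000_000   # size of the system in the example
coverage = 0.80           # fraction of lines covered by any kind of test

uncovered_lines = round(total_lines * (1 - coverage))
print(f"Lines with no test at all: {uncovered_lines:,}")  # 200,000

def line_risk(change_probability: float, failure_impact: float) -> float:
    """Risk of an uncovered line: probability of an unwanted change
    times the impact that change would cause."""
    return change_probability * failure_impact

# Not every uncovered line is equal: a line that rarely changes in a
# low-value feature carries far less risk than a hot line in a critical
# path, even though both show up as "uncovered" in the report.
print(line_risk(change_probability=0.30, failure_impact=1.0))   # 0.3   (hot, critical)
print(line_risk(change_probability=0.01, failure_impact=0.1))   # 0.001 (cold, trivial)
```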