At Fundbase, we are adamant about writing tests, as I hope you are :) This includes unit and end-to-end (feature) tests. Having high test coverage is something we constantly strive for and monitor, and it's certainly something we hang our hats on, as the benefits to the company are huge. However, it can have an unfortunate side effect for development: slowness. As our test suite has grown, specifically our feature tests, our build time has increased as well. This results in three main pain points:

1. Feature branch builds take a long time to run the suite
2. Deployments are slower (due to point 1)
3. Working with the specs becomes more painful and developer happiness decreases
In this post, I’m going to highlight 4 techniques I used to cut our test suite build time in half.
Be sure to benchmark the run time of every spec in your suite and sort them from slowest to fastest. This gives you a starting point for which specs to focus on, and a metric to return to.
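RSpec can produce this ranking for you: the `--profile` command-line flag, or the equivalent `profile_examples` setting, prints the slowest examples and example groups after each run. A minimal configuration sketch:

```ruby
# spec/spec_helper.rb
RSpec.configure do |config|
  # Print the 10 slowest examples and example groups after every run.
  config.profile_examples = 10
end
```

You can also run it one-off with `rspec --profile 10` without touching your config.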
1. Remove slow finders
This is both a symptom of a slow spec and a slow suite overall. There is a great post by @ngauthier in which he recommends his slow finders gem to hunt these down. In a nutshell, it boils down to avoiding assertions like this:
```ruby
def has_ok_button?
  has_css?('.ok-button')
end

context 'when the button is displayed' do
  it 'has the ok button' do
    expect(has_ok_button?).to be true
  end
end

context 'when the button is not displayed' do
  it 'does not have the ok button' do
    expect(has_ok_button?).to be false
  end
end
```
In the second example, `has_css?` waits the entire default wait time for the button to appear before returning false and letting the spec pass, which is slow. This is faster:
```ruby
def has_no_ok_button?
  has_no_css?('.ok-button')
end

context 'when the button is not displayed' do
  it 'does not have the ok button' do
    expect(has_no_ok_button?).to be true
  end
end
```
This also reads better, as an added benefit. The slow finders gem will help you identify these, but make sure your team is aware of this antipattern so it isn't reintroduced in the future.
2. Start at the URL of the page under test
Some of our main pages are composed of multiple tabs:
When testing each tab, we would open the page on the 'Overview' tab, click to the desired tab (e.g. Pricing), and then begin the assertions. This may not seem like a big problem, but Capybara can spend 1-2 seconds loading the initial page, finding the tab, clicking it, and loading the new page. In some cases we have 50 examples for a tab; that's 50 clicks x 5 tabs!!! You do the math :)
So be sure to start the spec at the URL we want (`/funds/:id/pricing`) rather than starting on a default page (`/dashboard`) and clicking over.
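A minimal sketch of the idea (the path and helper name here are hypothetical, not Fundbase's actual routes; in a Rails app you would use the generated route helper instead):

```ruby
# Hypothetical path helper for illustration only.
def fund_pricing_path(fund_id)
  "/funds/#{fund_id}/pricing"
end

# In the feature spec, start directly on the tab under test:
#
#   before { visit fund_pricing_path(fund.id) }
#
# rather than clicking over from a default page:
#
#   before do
#     visit '/dashboard'
#     click_link 'Pricing'   # ~1-2s of extra page loads, per example
#   end
```

Each example then begins exactly where the assertions need it, with one page load instead of two.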
3. Be wary of RSpec shared contexts
If you are not using shared contexts, then don't, and skip to the next section :) In all seriousness, they can be useful, but like Capybara finders, they can shoot you in the foot just as easily as they can help. The problem we experienced, as you might guess, is that these shared contexts become bloated, overused, and over-included. We could have a shared context that takes multiple seconds to run being included in files that needed maybe 1 line out of its 100. So my advice here would be to avoid shared contexts, or limit each one to 1-2 files.
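One way out of the bloat (a sketch; module and method names are made up for illustration) is to split the giant shared context into small, focused helper modules, so each spec file includes only the setup it actually uses:

```ruby
# Instead of one 100-line shared context included everywhere,
# define small, single-purpose helper modules.

module CommentHelpers
  # Build lightweight stand-in comment records for specs.
  def build_comments(count)
    Array.new(count) { |i| { id: i, body: "comment #{i}" } }
  end
end

module InvoiceHelpers
  def build_invoice
    { total: 0, lines: [] }
  end
end

# A spec that only needs comments pulls in just CommentHelpers:
#
#   RSpec.describe 'Comments' do
#     include CommentHelpers
#     ...
#   end
#
# Plain class standing in for an example group, to show the include works:
class CommentSpecDouble
  include CommentHelpers
end
```

Specs that never touch invoices no longer pay for invoice setup, and it's obvious at the top of each file which helpers it depends on.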
4. Combine examples where possible and appropriate
In unit tests, we want each example to test one thing. This is not necessarily true for feature tests. So given these examples:
```ruby
before do
  test_page.open
end

it 'has comments' do
  test_page.ready do
    # comments is an array of page objects
    expect(comments.count).to eq 2
  end
end

it 'can delete a comment' do
  test_page.ready do
    comments.first.delete!
    expect(comments.count).to eq 1
  end
end
```
With feature specs, we pay a much higher cost in terms of time for each example than we do for unit tests. So we should be diligent about only running examples that provide value. The above could be re-written as:
```ruby
before do
  test_page.open
end

it 'can delete a comment' do
  test_page.ready do
    expect(comments.count).to eq 2
    comments.first.delete!
    expect(comments.count).to eq 1
  end
end
```
This is a contrived example in which the time saved is negligible, but as with the previous points, these types of time wasters add up as your test suite grows.
Cleansing the above points from your test suite can result in some significant time savings, but it's just as important to ensure we don't keep reintroducing them in the future. Nobody wants to do a massive spec refactor every 6 months…except maybe me :)
Questions/Comments? Tweet me @tim_blonski.