Agile Bugs Q&A: Part 4

Lital Barkan on July 19, 2016

I asked questions about #AgileBugs in a recent live chat. Tech trailblazers Raygun CEO JD Trask and Assembla CTO Jacek Materna answered my questions and we all discovered together that I was not the only inquiring mind. Dozens of intelligent, thought-provoking questions came in from those who attended the event. 

We created a series of blog posts in which our experts expertly answered them.

This fourth post is about designing unit tests, automating testing, preventing regression, and using Assembla for failed test cases. Please see our other posts for answers about:

Part 1:
  • Estimating

Part 2:

Part 3:
  • Differences in dev and production environments
  • Bugs that stop the sprint goal
  • Third-party bug reporting in Assembla

Part 5:
  • Bugs after the sprint
  • Bug-handling tools
  • Bugs when developing for clients
  • Differences in methodologies

Tips for designing unit tests

Bethel asked the experts, "Could you share a couple of tips about how to become more effective at designing unit tests?"

JD said, "This is just my experience, but the common issue I had was that I’d try to test too much in a single test. Make the tests smaller. Something like: if I add an item to the cart in code, does the count of items in the cart increase by 1? That’s better than unit testing that the whole checkout flow works. Smaller tests work great."
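
To make that concrete, here is a minimal sketch of the kind of small, focused test JD describes. The Cart class and test name are hypothetical, and the example uses Python's unittest rather than the .NET stack Raygun works in; it is purely illustrative.

    import unittest

    class Cart:
        # Tiny illustrative cart, just enough to make the test runnable.
        def __init__(self):
            self._items = []

        def add(self, item):
            self._items.append(item)

        @property
        def count(self):
            return len(self._items)

    class CartTests(unittest.TestCase):
        def test_add_item_increases_count_by_one(self):
            cart = Cart()
            cart.add("book")
            # One small, focused assertion instead of testing the whole checkout flow.
            self.assertEqual(cart.count, 1)

    if __name__ == "__main__":
        unittest.main()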

Jacek added, "Each unit test should be able to run independently. Generally, all the tests in a project are run at one time, and the order in which they run cannot be guaranteed; for this reason, the tests should be independent of each other.

"Test only one condition at a time. There should only be one assert statement for each test method. In the preceding example, there are two test methods: one to verify the positive and one for the exception scenario. Now, when the tests are run, each scenario is validated separately. Explicit assertion for every method gives an instant insight of the failed tests and helps analyze the scripts easily.  The tests should return the same result whenever and how many times it's executed. The result shouldn't change from one execution to another.

"Thorough testing is related to how many units of code are verified. Code coverage gives a good indication of how much testable code is verified. A code coverage of 80% and above is a good indication that most of the code is covered by unit test scripts.

"Mock external references, the test script, when run should not depend on external entities such as connection to a database or WCF or a configuration file. Mocking external references essentially means that it's assumed that the expected data is returned from the source and the testable code only verifies the actions that are performed on that data.

"Another reason for mocking is if there is a connection problem with the external entity, the unit test scripts are bound to fail; there are two levels of testing being done, one for the external entity and other of the action performed on the data retrieved. Ideally, these should be two different test scripts to identify the bug better.

"Unit tests are not meant for integration test. The unit test scripts are generally run as a part of continuous integration (CI) where the build is fired and, after the build is generated, the unit test scripts are run against the assemblies."


Bogdan wondered, "How do you deal with slow integration tests?"

Both experts agreed, "We deal with slow integration tests by only having them run when building for deployment to production. We still run all unit tests on all builds, with integration tests run less often. We’re always looking for ways to improve our build times, however, as it is a real drag on team productivity when the build exceeds about 5 minutes."
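
One simple way to wire that up, sketched below with Python's unittest, is to guard the slow tests behind a flag that only deployment builds switch on. The RUN_INTEGRATION environment variable and the tests themselves are hypothetical; neither team has said this is how they actually do it.

    import os
    import unittest

    # Hypothetical switch: only builds destined for production set RUN_INTEGRATION=1.
    RUN_INTEGRATION = os.environ.get("RUN_INTEGRATION") == "1"

    class FastUnitTests(unittest.TestCase):
        def test_runs_on_every_build(self):
            self.assertEqual(2 + 2, 4)

    @unittest.skipUnless(RUN_INTEGRATION, "integration tests only run on deployment builds")
    class SlowIntegrationTests(unittest.TestCase):
        def test_talks_to_real_services(self):
            # A real test here would hit a database or external API.
            self.assertTrue(True)

    if __name__ == "__main__":
        unittest.main()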


Automating testing

Aparajita asked our experts, "Could you please shed some more light on the automatic deployment process?"

JD explained, "At Raygun, we use a tool called Octopus Deploy (it’s great for .NET solutions, which we use a lot). There are many other tools out there as well. We use JetBrains TeamCity to build every merge to Master in GitHub, and Octopus can create packages from those successful builds. Team members can then choose to deploy those builds using Octopus with a few clicks. Your best bet would be to go and read about it on sites like Octopus Deploy :)"

Jacek said, "The general assumption is that your code, configuration management, and deployment configurations are all stored in some type of repo like Git. The purpose of a CI tool (we use Jenkins) is to ensure that code submitted by developers goes through unit testing in a “production-like” environment. Once that passes, merge requests (a.k.a. pull requests) can be queued up, where developers inspect the incoming code. If the code is voted OK by the team, it is merged and moved on to QA; QA runs its checks and releases the code, and it gets pushed through to production.

"We also use Chef heavily to orchestrate this entire infrastructure process end to end and ensure consistency that all code runs in similar environments all the way through. We can deploy and rollback changes to production in less than 10 minutes."


Sergio asked, "Is it worthwhile having automated tests from the very beginning of a project, or is it better to wait until the project is in a mature state?"

Jacek said, "Ideally, automation comes from day 0: write a test, write the code, write the staging test, commit the code; if it passes, move on and repeat."

 

JD added, "In reality, it often depends on the experience of the team. Many senior engineers would write unit tests to automatically test critical parts of the code at a minimum. If you haven’t written tests previously and are working on a legacy code base, then add unit tests when bugs are found (find the bug, write a test that fails because of that bug, fix the code, and the test passes). This approach builds up the unit test set over time and proves that the fixes being made are resolving the issues."


Preventing regression

Ryan asked JD, "How do end user workflows factor into building the types of unit tests that truly help prevent regression?"

JD replied, "I’m not sure the end user workflow impacts the unit test as such. It’s more that if you find an error, you write a unit test for it, have it fail, then fix the code so the test passes. That way you know you don’t regress, because your build environment should run all the tests on each build, and the build would fail if that test fails."
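
As a rough sketch of that find-a-bug, write-a-failing-test, fix-it loop, the example below (in Python's unittest) captures a hypothetical discount bug; the apply_discount function and the bug itself are invented for illustration. The test would have failed against the buggy code, and once the fix is in, every build re-runs it and catches any regression.

    import unittest

    def apply_discount(total, percent):
        # Fixed version of a hypothetical bug: discounts over 100% used to
        # produce negative totals.
        percent = min(percent, 100)
        return total * (1 - percent / 100)

    class DiscountRegressionTests(unittest.TestCase):
        def test_discount_over_100_percent_does_not_go_negative(self):
            # This test failed against the buggy code; now that it passes,
            # any future regression fails the build.
            self.assertEqual(apply_discount(50, 150), 0)

    if __name__ == "__main__":
        unittest.main()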


Using Assembla for failed test cases

Sabarirajan wanted to know, "How can a failed test case for a task be tracked through tools like Assembla? Should we create new bug tickets or re-open those task items?"

Jacek explained, "We keep them open in 'Pending-Deploy' status until QA passes them or they are reviewed by the dev team. Once they are deployed in production, we open new tickets for errors and reference the source to keep a chain."

 


Thanks to our brilliant question-askers. Your questions represent issues faced by many development teams across many countries and industries. And thank you to our panelists, Jacek Materna (@jacekmaterna), CTO of Assembla, and JD Trask (@traskjd), CEO of Raygun. Your answers are making dev practices across the world more efficient!

Please stay with us for the next (and last) post in which JD and Jacek provide insight on bugs after the sprint, bug-handling tools, bugs when developing for clients, and differences in methodologies.