Finding an agile balance for testing is easier with Zephyr for JIRA

The following article is a guest post to Zephyr from Larry Cummings, Atlassian Consultant and Atlassian Certified Trainer at Isos Technology. Isos Technology, a Zephyr partner, is a market leader in solving complex enterprise challenges, from Agile adoption and QA to DevOps, and helps teams improve quality and speed delivery.

I always have software development teams use a mix of Test Driven Development (TDD) and Development Driven Testing (DDT) in the same project and in the same release. I’m going to talk about why I do this and how Zephyr for JIRA helps.

I’m assuming you’re using Agile software methods and are very intentional about testing

Using JIRA for software projects tends to steer teams towards a disciplined and agile approach. Integrating Quality Assurance activities with normal software development processes is a no-brainer for anyone who wants to improve software quality. This doesn’t mean it’s easy, though.

My work with Software Development teams is as a Product Owner or Product Developer. These roles are heavily invested in maximizing quality. One of my favorite add-ons is Zephyr for JIRA because it allows distribution of the ability to test for quality to the whole team. Furthermore, this distribution is organized, fits how teams want to work, and intuitively meshes with how JIRA tracks software development process activities.

Every software development team has to find the balance between under- and over-testing different parts of releases

Teams I work with often combine testing methodologies to find an approach that fits what they are trying to achieve. Let’s compare two of the most popular software development testing methodologies: Test Driven Development (TDD) and Development Driven Testing (DDT).

Test Driven Development

Test Driven Development emphasizes the creation of tests before starting to code. You value code (test) coverage and automated testing as a primary activity of your software development team. Your entire team does QA; it's not just delegated to QA staff. QA staff concentrate on User Acceptance Testing (UAT) and spend their time finding defects the software development team didn't find.

This approach to testing is hard to implement but is an extremely thorough way to ensure a high quality product.

Some of the things to consider when choosing Test Driven Development are:

  • TDD is expensive in terms of the time it takes to set up and manage.  Early in the process near 100% test coverage is pretty easy, but moving from early MVP builds to beta builds is a lot of work. Still, it’s not anywhere near as expensive as shipping buggy software.
  • It’s great when you need to insist quality is everyone’s job. The payoff for the expense is anyone who modifies code can be accountable for not introducing bugs, because every potential bug is theoretically tested for.
  • You may write too many tests, but you won't know which tests you didn't really need until you're shipping. Since you are writing tests before you write your code, you are writing some tests you don't need, because the failures they check for are prevented by the final design of the software. The thinking here is that's OK: if you ever change the design of the software, you will still want the test to work. This provides an incredibly valuable safety net for incremental innovation, making existing features work better.
  • It can feel like it punishes disruptive innovation; like it locks you into your design too early. By disruptive here I mean in the context of major changes to the design of your software (not disruptive in a “market disruption” sense). For example, if you’ve developed a web application and you’re using a RESTful URL scheme with an MVC based approach (a fairly common practice), and you propose a change that turns your entire model structure on its head, you will likely have to rewrite a lot of tests. This isn’t really a problem if you consider that upending your model structure should require entirely new kinds of tests. This is only a problem if stakeholders are not aware of the investment they’ve made in code coverage… if this is the case, they may have a hard time understanding why improvements that upend foundational assumptions are not incremental.

Development Driven Testing

On the other end of the spectrum is Development Driven Testing. Development Driven Testing means you test for what is meaningful and reasonable. You achieve 100% code coverage in a more manageable way because you write your tests after the software design is figured out and mostly working. You are not testing for design flaws that can't ever occur, because the tests are created after, or at least near the end of, the design of the software. Many teams prefer this approach when they are looking to save time, or when the problem they are solving isn't interesting enough to create runnable tests prior to creating working code.

This approach is preferred when your team doesn’t consider automated testing as the primary or sole means of ensuring high quality software.

DDT can test very efficiently because it leverages the design of the software to focus automated testing where it matters.

Some things to consider when choosing Development Driven Testing are:

  • It’s faster to create tests that are informed by the design of the software because you don’t have to create tests to check for things the design fundamentally prevents. When you’re finishing up the development of how your software achieves a user story, it becomes clear which tests can’t fail… so those tests DON’T need to be written. Returning to our earlier RESTful, MVC-based web application example: say, as you are finishing your design, you notice the model is designed to prevent any possibility that updating a user’s contact information can result in that user losing access to certain features of your application. Tests that check whether user profile changes affect access to features aren’t interesting in a DDT approach, so they are not written.
  • It takes fewer resources to reach 100% code coverage, and the time you save writing your tests can be used to improve the design of your software. This may feel counter-intuitive: how can writing tests later, and writing fewer of them, increase your ability to reach 100% code coverage? The fact is, you can write tests looking for flaws more efficiently when those tests are informed by the way the software is specifically designed. It’s also far less likely you will spend time writing tests that don’t contribute to creating a high quality product.
  • You don’t have as complete a safety net in your code coverage, because your tests reflect the assumptions you made when you decided a defect wasn’t possible in your design. If you assume the design of your software prevents a defect from ever occurring, you may be wrong, and it takes more thorough manual testing to validate whether the design actually prevents that defect.
  • You have an easier time radically redesigning your software because you don’t need to audit huge numbers of automated tests that may or may not still be useful.

Actually, software teams don’t do only TDD or only DDT. They end up doing both at the same time

For any given user story and any given software design, the developers, the QA staff and the stakeholders tend to find a balance somewhere between these two extremes on a case-by-case basis. When you are creating new software that carries a high risk to quality, you tend to favor a more TDD based approach. When you are creating a new solution that relies on problems you’ve solved many times before, you tend to favor a DDT based approach. These two approaches live comfortably side-by-side because it’s rarely worth the effort of driving toward full test-first coverage for parts of the software that are well understood and make use of reliable and robust existing architectures.

Moving back and forth between TDD and DDT in the same project is easier with a tool like Zephyr for JIRA

Being able to represent your tests right in JIRA means everyone has access to facts about tests. Getting testing visible in your project issue tracking tool is, in my opinion, the best way to ensure that testing is integrated into your delivery process.

Because Zephyr for JIRA adds a Test issue type to your projects, using both a TDD and a DDT based approach is no problem.

The only real difference, from a JIRA and Zephyr perspective, between TDD and DDT is when you create the tests. Since these tests are represented in your JIRA project, there’s no limitation on when individual test creation occurs. It’s up to you and your team!
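As a concrete sketch: because Tests are ordinary JIRA issues, they show up in plain JQL searches run from JIRA's Issue Navigator. The project key and version name below are hypothetical:

```
project = ABC AND issuetype = Test AND fixVersion = "2.0" ORDER BY created ASC
```

Whether a given Test was created before its Story's code (TDD) or after (DDT) makes no difference to a query like this, which is exactly why the two approaches coexist so easily.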

Reporting on your test results

Getting all of these facts about tests into your project is great, but how do you leverage them?

Zephyr brings testing into your production context in two important reportable ways.

  • Your workflow for completing a work item can be contingent on a passed test cycle.
If you just want facts about tests, they are JIRA issue types, so those facts are available to JQL. Facts about what happened each time a test was executed are different, so Zephyr provides its own internal Zephyr Query Language (ZQL) to expose facts about test executions. You have to look at test executions to see whether all the tests for your release are passing. Teams frequently want to indicate in a JIRA development issue that all tests have passed. In the default configuration this is done by adding a Yes/No custom field named something like All Tests Passing; when an issue has successfully passed all of its tests, this field is manually updated in the Story being tested. (Teams that don’t like manually updating a field like this use a program that sets the value through the Zephyr API or direct database access, but that’s beyond the scope of this article.)
Once you have the All Tests Passing field in the JIRA Issue, you can make your workflow depend on that field being set to YES (via a Transition Condition) to allow that story to move out of Testing.
  • You can surface progress against both test coverage and test acceptance for your next release.
Zephyr Tests, and therefore Test Executions, have knowledge of your JIRA FixVersion(s). This means you can inspect all the work you are doing for your next release to find issues not linked to any tests for that release! This is very valuable information.
We recommend using an add-on that provides the capability to find facts about linked issues to achieve this. We use Scriptrunner at Isos because it provides additional JQL functions to inspect issue link relationships. This means we can see a report of all Stories in the next release that don’t have any tests associated with them yet!
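As a sketch of what inspecting executions looks like, a ZQL search (run from Zephyr's Search Test Executions screen) can list the executions for a release; the version name here is hypothetical:

```
fixVersion = "2.0" AND executionStatus = "FAIL"
```

An empty result from a query like this is one quick signal that no executions for the release are currently failing, which is the fact the All Tests Passing field is meant to capture.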
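And to surface Stories in the next release that have no linked tests yet, a ScriptRunner-style JQL function along these lines can be used. The project key, version, and link name are assumptions for illustration; the exact link name depends on how tests are linked to Stories in your instance:

```
project = ABC AND issuetype = Story AND fixVersion = "2.0"
AND issueFunction not in hasLinks("is tested by")
```

Any issue this query returns is planned work for the release with no associated test coverage, which is exactly the gap you want to catch before the release ships.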

How does your team do it?

I’ve spent quite a bit of time in this article talking about TDD and DDT, but there are many different approaches to testing that I haven’t touched on. I love the way Zephyr for JIRA adds the ability to create tests, manage their execution and track the results, because it does so without requiring my team to adopt a specific testing methodology. We get to decide which approach to use and where to use it. Mixing TDD and DDT is just the beginning. When you want to add Behavior Driven Development (BDD) based testing, you’ll find Zephyr for JIRA is an excellent tool for that too!

The important thing is that you test your software using the methodologies (note the plural there!) that work best for your team. This leaves you in control of how you work together to maximize the quality of your software while still maintaining a timely release schedule.

Larry Cummings is an Atlassian Consultant and Atlassian Certified Trainer at Isos Technology. He helps organizations share, cooperate and collaborate in alignment with the organization's mission. This is a fancy way of saying he makes the mission real by concentrating on the community dynamic required to make it happen. Larry especially enjoys working with product development teams, helping them find the right balance between the people who use new systems and the machines used to build them.