TL;DR

  • METS is a framework for organizing testing activities by high-level features and priority given a time constraint
  • METS is not a substitute for project training and expertise
  • For small-scale, one-and-done projects, METS is a great way to organize testing efforts without investing the time to build out a full regression suite in a test case management tool

“Have you heard of this QA strategy called METS?” one QA engineer asked the other on the team.  “No, what is it?”  “It stands for Minimal Essential Testing Strategy, and it’s a tool that aids QA engineers in prioritizing their testing efforts when time is a constraint.”  Thus began the discussion that led the Five & Done QA team to explore using METS.

In this blog post, we’ll outline:

  • The high-level concepts of METS
  • How we hypothesized it would work for our organization
  • What we learned along the way
  • Our next steps to further explore its uses in our space

But before we get into that, you might be asking yourself, what is METS, anyway?

What is METS and Why Should You Care?

As previously mentioned, METS stands for Minimal Essential Testing Strategy.  At a very high level, METS is a way for QA engineers to prioritize testing certain features and functionalities when time is a constraint (in other words: always), and it can help the QA organization provide a risk assessment to the rest of the software team.  Shout out to Greg Paskal, who created METS back in the early aughts; here’s a link to the official METS Testing website to learn more: https://www.metstesting.com/.  While not necessarily a new methodology, it was new to Five & Done until just a few months ago.

Essentially, the QA engineer sets up a grid - in the leftmost column, the major features and functionalities of the app under test are listed as rows.  Across the top, additional columns are added for priorities, from critical through low.  It looks like this:
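
The grid below is a simplified, hypothetical example we put together for illustration - the feature names and scenarios are our own and aren’t prescribed by METS (or pulled from an actual Five & Done project).

  Feature   | Critical                          | High                        | Medium                   | Low
  Login     | Valid user can sign in            | Password reset email sends  | “Remember me” persists   | Error copy matches brand voice
  Checkout  | Order submits with a valid card   | Declined card shows error   | Promo code applies       | Gift message renders
  Search    | Results return for a valid query  | Filters narrow results      | Sort order persists      | Empty state renders correctly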

In the cross-section of each feature and priority, the QA engineer places high-level test scenarios, or use cases, into that cell.  For our application of METS, these are not heavily detailed - we’re not writing full test cases with steps and expected results in all their glory here.  The idea is to give the QA engineer executing tests against the app a way to focus their energies on the appropriate tests.  Running a prod smoke test?  Focus on the critical and high tests.  Have time to run a more comprehensive suite of tests?  Include the medium and low tests.
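
If your grid lives somewhere scriptable, you can even pull a pass’s worth of scenarios programmatically.  Here’s a minimal Python sketch along those lines - the “mets_grid.csv” export and its feature/priority/scenario columns are hypothetical stand-ins for however you flatten your own grid, not something METS prescribes:

  import csv

  # Priority tiers in the same order as the grid's columns, highest first.
  PRIORITIES = ["critical", "high", "medium", "low"]

  def scenarios_for_pass(grid_csv, lowest_priority):
      """Return (feature, scenario) pairs down to and including lowest_priority."""
      cutoff = PRIORITIES.index(lowest_priority)
      selected = []
      # grid_csv is a hypothetical export of the grid, flattened to one
      # scenario per row with feature, priority, and scenario columns.
      with open(grid_csv, newline="") as f:
          for row in csv.DictReader(f):
              if PRIORITIES.index(row["priority"].strip().lower()) <= cutoff:
                  selected.append((row["feature"], row["scenario"]))
      return selected

  # Prod smoke test: run only the critical and high scenarios.
  smoke_scenarios = scenarios_for_pass("mets_grid.csv", "high")

  # Have more time?  Include the medium and low scenarios too.
  full_scenarios = scenarios_for_pass("mets_grid.csv", "low")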

The implementation of METS necessitates a living document that the QA team updates to reflect the current state of the application under test.  In a way, this is very similar to maintaining test cases as an application matures or changes over time, but as we’ve come to find out, updating a few use cases in a grid is quicker than locating every impacted test case in a regression suite after, say, a sprint’s worth of work.  Along those lines, it’s worth noting that the grid can and should be updated as priorities shift, even if the application’s functionality isn’t changing.  For example, if the QA team learns that clients frequently use a part of the application that was previously marked “medium” or “low,” the team should reprioritize the use cases for that feature or functionality.  Progress over perfection: think of the grid as the best representation of the understood priorities at any given time, one that will keep evolving as more feedback comes in.

If you want to study up on METS, there’s also an associated Udemy course.

To hear about our findings with METS, read on.

Hypothesis 1 & Analysis

Our first question about METS: can a filled-out grid be shared between QA resources with different accountabilities?  Let’s dig in.

Five & Done is a lean agency with a small-but-mighty QA team.  Each QA engineer is accountable for various projects the company works on; in this case, accountability essentially means they own the project from a QA perspective.  That being said, people take vacations and get sick from time to time, so while one QA engineer is out, the other QA engineer may jump in to assist with testing.

Of course, with accountability and ownership comes expertise.  The primary QA on a project will, over time, learn that application inside and out and essentially become a subject matter expert (or close to it).  Since the secondary QA has their own projects (on which they’re primary), filling in for an absent primary QA always presents the challenge of getting properly ramped up to be effective.  Not only that, but time is a factor - the secondary QA filling in has to know the right questions to ask about the application (with respect to testing it) and must aim to achieve the same level of quality in the same amount of time the primary QA is normally allotted, so as to keep the project on schedule.

In the past, Five & Done has facilitated knowledge transfer meetings to discuss the application before handing it off to the secondary QA.  While helpful, these meetings rarely reach the level of depth that digging into the application yourself while testing will reveal.  Oftentimes, the secondary QA ends up needing to ask the rest of the team questions when it’s finally time to test, which can interrupt the team’s normal flow as well as slow down testing efforts.

Is METS a training tool for a new resource?

Back to METS now.  The theory was: if the primary QA puts together a METS grid for a project, is that sufficient to ramp up the secondary QA when the time comes to fill in on testing efforts?  After all, the test scenarios are organized by priority, so the grid should help the tester identify where to spend their time.

In our exploration, we didn’t find METS to be a replacement for knowledge transfer on a project.  Ultimately, the secondary QA knew where to focus their testing, but didn’t necessarily understand the context of the data put together by the primary QA.  In fact, the secondary QA still ended up needing to ask the rest of the team questions about certain parts of the app to ensure they were in alignment with the requirements.  In other words, there is no shortcut to building expertise and understanding of business logic for any given application.

This inevitably leads to a larger discussion about how to write test cases and scenarios; there are, more or less, two camps.  The first camp writes detailed test cases such that anyone can read and execute them with minimal questions, while the other writes basic notes that rely on the tester’s expertise for proper execution.  That dichotomy - or perhaps the two extremes of a spectrum - is out of scope for this analysis of METS.  However, given the nature of our implementation of METS (documenting high-level use cases rather than detailed test cases), the secondary QA had a hard time understanding the more nuanced steps necessary to verify certain aspects of the application under test.

At the end of the day, we learned that METS wasn’t the right tool for the job of a secondary QA jumping in to cover a project while the dedicated resource is out.  We always strive to optimize our own processes, so to combat this, we have implemented more cross-training as time permits.  This builds expertise across the team over time, instead of trying to cram it all into a quick knowledge transfer meeting before the primary resource goes on vacation.  Perhaps we’ll formulate another hypothesis to test METS again after that cross-training has happened, to gauge the usefulness of the methodology at that point.

Hypothesis 2 & Analysis

Perhaps a bit controversial, but we asked ourselves the following question: “Is METS a suitable substitute for a full-fledged regression test case suite in a test case management tool?”

Let’s break that down, but first, we need a bit of context.

Five & Done is a boutique agency focused on creating unique digital experiences for clients.  Much of our work involves building out a very specific experience within a client’s existing application; this may be a new version of the Fender Play landing page, for example, or a new component for a client’s existing content management system.  For the sake of this blog post, let’s call these “one-and-done” projects.

Given that context, we can consider the following: Five & Done doesn’t own the whole application - we’re just enhancing it.  We fit into the client’s existing ecosystem and improve a product the client has already built and maintains.  With that in mind, and with our experience over time, we started to realize that building out full suites of test cases isn’t really worth the time on “one-and-done” projects.

Can METS be used instead of a regression test suite?

As any test organization probably knows, building out detailed test cases can be a large and time-consuming effort.  Additionally, those test cases require maintenance, which invariably consumes more of the QA team’s time.  This strategy definitely has its merits - it has a proven track record - and for long-term contracts where Five & Done supports a client’s application over an extended period, the effort of creating and maintaining those test cases is worth it.  But for “one-and-done” projects, creating and maintaining all those tests, only to use them once or twice before handing off the final version of the project to the client, might have a lower return on investment than just using METS.  Or so we theorized, anyway.

Turns out, for our niche spot in the market, METS offers an advantage when working on “one-and-done” projects.  It’s quicker to implement than full-fledged (and detailed) test cases because it captures high-level use cases in a grid rather than documenting every step and expected result for a given workflow, feature, or functionality.  It’s also quite flexible and can be changed rapidly and iteratively as the team makes changes to the application under development (i.e., it’s not static, and it’s easy to maintain - at least easier than traditional detailed test cases).  One simply has to update a few use cases in the grid, rather than track down every test case whose steps and expected results need updating.

METS has another benefit: it’s easy to archive, and thus easy to restore for future use.  If a client comes back to us for more work, we can pull up our old notes as a starting point.  If it’s another “one-and-done” project, we can update the grid to fit the needs of the new work.  If it’s a long-term contract, we can use what was created before to prioritize the test case writing efforts, i.e., document the critical test cases first, then the high, and so on.

The irony is not lost on us that we moved away from Excel to proper test case management tools, only to go back to Excel (or Google Sheets).  But in our case, for smaller-scale projects that don’t have long-term engagements in place that warrant the larger effort involved with test case management, METS just makes sense.

Next Steps

Here are a few areas we’re looking to further our exploration of METS:

  1. Does METS help expedite the organization and writing of test cases for a net new application that is being built with a maintenance contract in place?  Essentially, if a client hires us to build them an app from the ground up, and plans on having us support it long term, can METS be used to better facilitate the test case management portion of the QA engineering process?
  2. Once the living document is implemented on a particular project, can it be shared with the broader team (not just the QA engineers), such as case/project managers, producers, product owners, developers, designers, etc., to better align on prioritization of testing?  What about the client’s stakeholders, who have their own agenda with respect to making sure their critical business processes are covered during testing?
  3. Can it be a deliverable to the client once we wrap up our “one-and-done” project for them?  Perhaps it can be a resource to them with respect to their own test case documentation.

Look for a part 2 in the series where we’ll document our findings!

Conclusion

Five & Done embarked on a journey to learn about METS, and, specifically, whether or not it can be a new tool in our QA tool belt.  We hypothesized that it could be a quick and easy way to ramp up secondary QA resources to a particular project, but we didn’t realize much success in that experiment.  The other hypothesis - whether or not METS grids could be used as a suitable stand-in for full regression suites on “one-and-done” projects - was more of a success in our eyes.

As we look to the future, we have set our sights on using METS for completely new applications as they mature, sharing it with the broader team for feedback, and perhaps making a deliverable out of the grid.  Who knows, maybe we’ll explore building out the grid with AI, as long as it poses no security risk (we don’t want the data in the wrong hands) and with the expectation that the human QA engineer will use the output of the AI as a springboard, fully knowing tweaks will need to be made.

Thanks for checking in with Five & Done!  We’d love to get your feedback; give us a shout if you’re in the QA community and have experience with METS and want to share it.  Or, if you’re a QA engineer who reads this blog post and decides to implement METS on your end, and you want to share your findings, give us a holler!  This is Five & Done QA, signing off.  ✌️