Before you put together your next UAT session, it’s essential to consider some best practices to set you on the right track.
In a previous blog post, we talked about what user acceptance testing is, and how to conduct it.
In this article, we will go over what measures to take before starting a UAT session.
UAT Best Practices
1. Define project scope and objectives
This step happens way before you get to testing (obviously).
We mention this now because when we finally engage in UAT, we constantly go back to our project scope & objectives.
For example, at Marker.io, we’ve recently made it possible for members to join their team when detecting an existing company email.
We document the pain point at the very start of the project:
Why do we do this?
During UAT, we can verify that:
- Technical objectives have been met: any new sign-up with an existing @company.com email seamlessly joins the company.com team;
- The pain point is gone: we no longer have complaints or confused support chats from team-less users.
All in all, defining your project scope saves a ton of headaches down the line.
- The whole team is aligned. “This is what we’re trying to achieve, and this is how we’re going to do it.”
- All the information is centralized. If at any point during testing we’re not 100% confident that this is how the app should behave, we go back to this document.
- We don’t go over (or under) scope. We ensure that the test answers the customer’s problem—no more, no less. You don’t want to fall down the rabbit hole of adding more features during testing.
2. Prepare and document workflows & wireframes
The power of comprehensive workflows is underutilized.
The idea here is not only to align with the development team. When we do UAT at Marker.io, we share all workflows with the testing team as well.
We design workflows like this with Whimsical:
We benefit from this in many ways.
First of all, the document is super comprehensive:
- Conditions & states. For instance, we need to test our app from multiple points of view (team lead, member, guest, company plan, starter plan), and we need to know who sees and does what under X conditions.
- Expected results. If, at any time, one of our testers goes, “was this supposed to happen?”, or “is this the next logical step?” they can check Whimsical to make sure they’re on track.
Secondly: the ability to pinpoint where problems occurred.
The idea when designing those workflows is to be as visual as you can. That way, anyone (from beta tester to developer) can just point and say, “this is where things went wrong”.
Bug reports like this make it easy for your dev team to identify why a flow failed—no guesswork involved.
The best part? Scaling your test efforts is not a difficult task with the right approach.
Whether you have 5 or 500 testers, you can run this process, as long as you have good documentation and test cases.
3. Set up a secure staging environment
The ideal way to observe real users testing your app would be production, right?
But you can’t push a new version of your software to prod and “see what happens” just like that... for obvious reasons.
The next best thing is a staging environment.
A staging environment allows you to run tests with production data and against production services but in a non-production environment.
For example, at Marker.io, when we need to test changes, we push everything to staging.marker.io, a password-protected version of our app.
It’s all smooth sailing from there. Our product team just logs on and starts reporting bugs.
The same logic applies when you run tests on a larger scale. Send a URL and login credentials to your beta testers (via e-mail or otherwise) and tell them to try and break your app!
There are a couple of other added benefits:
- Save time. When your product is in staging, you can push changes immediately—it doesn’t matter if the whole thing breaks.
- Protect live data. A beta user inadvertently caused a crash? Your production database is safe.
- Better overview. Locally testing your components is great for seeing what they do independently. But only staging gives you a sense of how these pieces fit into the greater whole.
- Easier testing. Suppose you forgot to update a package in production, or a function that works fine locally suddenly breaks. These issues will already show up in staging.
Let’s conclude this section with a warning:
It is paramount that your staging settings mirror your production settings.
Don’t fall into thinking, “Well, this is just another testing environment. It doesn’t need to be perfect”.
Your staging should be an (almost) exact copy of prod.
The reason is simple: with an accurate clone, if something doesn’t work in staging, it won’t work in production either.
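One lightweight way to keep staging honest is to diff the configuration of both environments. Here's a hedged sketch assuming simple `KEY=VALUE` .env files (file format and key names are assumptions):

```python
# Minimal staging/production parity check: flag configuration keys
# present in one environment but missing in the other.
# Assumes simple KEY=VALUE .env files.

def load_env(text: str) -> dict:
    """Parse KEY=VALUE lines, skipping blanks and comments."""
    env = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        key, _, value = line.partition("=")
        env[key.strip()] = value.strip()
    return env

def config_drift(prod: dict, staging: dict) -> dict:
    """Keys that exist in one environment but not the other."""
    return {
        "missing_in_staging": sorted(prod.keys() - staging.keys()),
        "missing_in_prod": sorted(staging.keys() - prod.keys()),
    }
```

Run a check like this before every UAT round: an empty drift report is cheap reassurance that a staging failure really does predict a production failure.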
4. Pre-organize feedback
This is a big one.
If you don’t have a system to triage reports, you’re in for a rough time.
We suggest categorizing feedback into two main categories:
- Feedback that requires discussion. Complex bugs and usability issues. Stuff that will require a meeting with several team members.
- Instant action feedback. Wrong color, wrong copy, missing elements: one-person fixes.
At Marker.io, our “first responder” dev handles instant action feedback immediately. Everything else goes into the “discussion” box—to review with the rest of the team later.
This allows us to look at crucial bugs one by one, without getting bogged down with minor issues.
Devs need actionable, specific bug reports. Pre-categorized feedback is the key to filtering noise.
After this, we get in “rinse and repeat” mode. Fix bugs, push new version to staging, collect new round of feedback, discuss internally.
You should have all the knowledge you need to confidently conduct user acceptance testing for your next web development project (and it wasn’t all that much!).
Before we let you go, though, we’d like to share how Marker.io has been crucial for our own UAT sessions, and how it can help yours go off without a hitch!
Using Marker.io for user acceptance testing
You can’t possibly ask dozens (or hundreds) of testers to fill in complex bug reports and send the whole thing over for review in an Excel file.
That sounds like a nightmare.
Marker.io (that’s us!) helps make our bug reporting process faster and more accurate—for all parties involved.
On the tester side, it’s a 3-step process:
- Go to staging, find a bug.
- Create a report and input details.
- Click on “Create issue”—done!
Every single piece of feedback sent this way is directly synced with our PM tool.
This is the biggest time saver for us.
We no longer need a “feedback person” in charge of reading and responding to reporter feedback.
And with our 2-way sync system, every card moved to “Done” will be marked as “Resolved” in Marker.io:
Reporters will also receive a confirmation email.
All this automatically, with any of the bug tracking tools we integrate with.
Our tool also has an extra benefit when it comes to UAT: Guest forms and Member forms.
When you conduct UAT, you want to collect detailed feedback from your in-house members, who are familiar with your app.
But you don’t want to overwhelm a first-time user with dozens of fields.
That’s why we built guest forms and member forms:
Clients and beta testers can type in what went wrong with the guest form (and automatically attach a screenshot).
The member form is a bit more advanced. You can immediately label bug reports or assign issues to a team member.
The best part? These forms are 100% customizable—which means every time we do a new round of feedback, we can ask our users exactly what we want from them.
Looking for more UAT Tools? Check out our list of the best 14 UAT tools out there—and get even more insights from your users.
We know this is a lot to take in.
So, we wanted to provide you with an extra resource to support your next UAT session.
This checklist is a (super condensed!) recap of everything we’ve discussed in this post.
1. Define project scope and objectives
Have a clear plan for the new feature or software that you are releasing.
During UAT, come back to this document to ensure pain points have been properly addressed.
2. Design workflows
Workflows allow you to align everyone.
Share workflows with the testing team so they can accurately pinpoint where issues occurred and give you feedback on the fly.
3. Prepare staging environment
Run tests in a safe environment.
Staging is an (almost) exact copy of your production: the ideal playground for alpha and beta testers.
4. Pre-organize feedback
Pre-categorize feedback into two categories:
- Instant action: can be dealt with easily (color, font)
- Discussion: requires meeting with the dev team
5. In alpha testing, brief testers
Tell testers about the new feature you’ve built in detail.
Make it clear what the business objective is, and what you expect to discover from this testing.
6. In beta testing, draft test cases
For large-scale testing, draft test cases for all users to follow and report on.
Mention at least the test steps & expected results.
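If it helps, a test case can be as simple as a small record of steps plus an expected result. Here's one possible (assumed) structure:

```python
# One possible shape for a beta-test case: ordered steps plus an
# expected result. The structure is a sketch, not a prescribed format.

from dataclasses import dataclass, field

@dataclass
class TestCase:
    title: str
    steps: list[str] = field(default_factory=list)
    expected_result: str = ""

    def as_checklist(self) -> str:
        """Render the case as a plain-text checklist for testers."""
        lines = [f"Test case: {self.title}"]
        lines += [f"  {i}. {step}" for i, step in enumerate(self.steps, 1)]
        lines.append(f"  Expected: {self.expected_result}")
        return "\n".join(lines)
```

Rendering each case as a checklist means every tester follows the same steps and reports against the same expected result.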
7. Deploy a bug reporting system
Use a powerful bug reporting tool like Marker.io to assist your dev team.
Added benefit: greater accuracy from reporters, and a much easier life for your dev team.
8. Document all steps
Keep tabs on what was fixed, what needs working on, expected challenges in the next iteration, etc.
Great UAT starts with great organization.
Here’s a downloadable version of this checklist you can keep at hand for your next user acceptance testing project.
When it comes to UAT, proper planning is 80% of the work.
Our hope is that with the best practices described in this article, you have all the tools and knowledge necessary to go forward with your own tests.