What Is User Acceptance Testing (UAT) And How to Do It Right


Last updated:
November 16, 2022

    User acceptance testing is a phase of software development where your ideal customers test your app in a production-like environment.

    In this guide, we’ll show you how to conduct user acceptance testing (UAT)—the right way.

    There is a surprising amount of literature on user acceptance testing out there.

    If you want to become an expert in UAT jargon and processes, this guide is not for you.

    If you just want to make your website or web application better, and believe UAT can help you achieve this—read on.

    In this guide, you will learn:

    • What user acceptance testing actually is;
    • How to conduct UAT from A to Z, with a system that makes sense.

    There’s a lot of ground to cover.

    Let’s start with a short definition: what is user acceptance testing?

    What is user acceptance testing?

    User acceptance testing (UAT) is a phase of software development where your ideal customers test your app in a production-like environment.

    It’s the phase after your internal QA testing and before you push your site to everyone.

    What is the purpose of UAT?

    The purpose of user acceptance testing (UAT) is to answer two questions:

    • Does the app satisfy user requirements? In other words, are people using the app as expected, and do they understand and like it?
    • Does the app satisfy business requirements? In other words, can it perform real-world tasks?

    Put simply, the app should be usable in day-to-day scenarios, and users should have no problem completing tasks with it.

    That’s it.

    Internal QA vs UAT

    Internal QA is carried out by your QA team and focuses on resolving technical issues.

    UAT testing is performed by end-users or business stakeholders, where the goal is to ensure the software works in real-world scenarios.

    That’s the main difference.

    Both QA and UAT involve substantial testing, and they are performed at the same stage of the software development life cycle.

    So, it’s not uncommon to confuse the two. At the end of the day, user acceptance testing (UAT) is a form of quality assurance.

    To really understand the difference between the two, let’s take a standard sign-up flow as an example:

    QA test cases will look like this:

    • Are all input fields and buttons usable?
    • Does the email verification function run properly?
    • Are there any unexpected bugs at any point in the workflow?

    UAT test cases will look like this:

    • Are testers filling out the correct information?
    • Do they understand what’s happening when being redirected to the login page?
    • Are they opening their email and going through the verification steps?
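    The distinction also shows up in how each kind of check is written. Here's a minimal, hypothetical sketch in Python (the validation rule, function names, and step names are invented for illustration): a QA check asserts on the code's behavior, while a UAT check records whether a real tester actually completed the flow.

```python
import re

# --- QA-style check: does the code behave correctly? ---
def is_valid_email(address: str) -> bool:
    """Toy validation rule for our hypothetical sign-up form."""
    return re.fullmatch(r"[^@\s]+@[^@\s]+\.[^@\s]+", address) is not None

def test_signup_rejects_malformed_email():
    assert not is_valid_email("not-an-email")
    assert is_valid_email("jane@company.com")

# --- UAT-style check: did a real tester get through the flow? ---
def uat_signup_result(observed_steps: list) -> dict:
    """Compare a tester's observed steps against the expected flow."""
    expected = ["filled_form", "opened_email", "clicked_verify", "reached_dashboard"]
    missing = [step for step in expected if step not in observed_steps]
    return {"passed": not missing, "stuck_at": missing[0] if missing else None}

test_signup_rejects_malformed_email()
# A tester who never clicked the verification link fails the UAT case,
# and the result records exactly where they got stuck.
print(uat_signup_result(["filled_form", "opened_email"]))
```

    The QA test passes or fails on its own; the UAT "test" only means something once you know what a human did.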

    There are two phases to user acceptance testing, each with widely different acceptance criteria: alpha testing and beta testing.

    Let’s have a look at what each stage entails.

    Alpha testing vs beta testing

    In alpha testing:

    • Your testing team is internal, led by the product owner. Depending on how big the project is, you might bring in friends and family or other business analysts for testing.
    • Critical issues still exist. Workflows might be broken or unclear. Bugs overlooked by the development team can still occur—these are immediately reported and worked on.
    • It takes time: at the very least, four weeks.
    • Your main goal is to prepare for beta testing. In this phase, you eliminate potential security issues and ensure people can use your product without significant hiccups.

    In beta testing:

    • A fraction of your current users becomes your testing team (on a staging environment). These should represent your average customer and/or target audience.
    • It’s a short phase. You’ve already dealt with the most critical issues and are now in the monitoring stage.
    • Your main goal is to experience your app from the customer’s point of view and to receive feedback on your software as a whole, or on the feature you’ve developed. At this point, you’re observing whether business objectives are being met.

    Because the topic is blurry, people use a lot of the following terms interchangeably, when really, they mean user acceptance testing:

    • End-user testing
    • Software testing
    • Black box testing
    • Regression testing
    • Unit testing
    • System testing
    • Integration testing
    • Functional testing
    • Traceability testing
    • etc.

    In reality, all of these are distinct types of software testing in their own right. Some of them can be run as part of a UAT process, but they are not synonyms for it.

    Now that you know what UAT testing is, you might be wondering—why do I need this? Isn’t QA testing enough?

    Let’s look at a few reasons why you might need UAT.

    Why do you need user acceptance testing (UAT)?

    So why is UAT testing so important?

    If we haven’t yet convinced you that you need user acceptance testing to improve your products, let’s look at some of its advantages.

    First, there’s the obvious.

    Your testers will tell you what’s wrong with your app before you release it to the public.

    Sure: developers will catch most issues when they do their code review—that’s just part of the development process.

    Technically speaking, the app might be flawless.

    But because they are so stuck in the details, they can forget the bigger picture.

    A first-time user is a fresh point of view. If anything about your app is confusing, or they can't figure it out at a glance—they’ll tell you.

    Some more advantages of user acceptance testing:

    • A production-like environment without the drawbacks of live testing. In a proper UAT environment, you copy your production database. You can then observe real use cases from real users rather than “expected usage” in a local environment.
    • Dozens of testers. More feedback and a higher chance to catch the more obscure bugs you’d never see on a smaller scale.
    • Fix issues upstream. Major malfunctions, confusing UI, critical security errors—you don’t want this to happen after releasing your app to the public. An app that passes the user acceptance stage is an app ready for production.
    • No-bias testing & feedback. For the most part, UAT exposes your app to fresh users who have no pre-determined idea of how they’re supposed to use it. They also have no business investment in your software, which makes their feedback impartial and blunt.

    How to conduct user acceptance testing (UAT)

    In this section, we want to give you a hands-on example of how user acceptance testing actually goes down in a real-life setting.

    Here’s how we handle UAT for one of our newest features, domain join.

    1. Define project scope and objectives

    This is one of the major prerequisites for UAT, and happens way before you actually get to testing (obviously).

    We mention this now because when we finally engage in UAT, we constantly go back to our project scope & objectives.

    For example, at Marker.io, we recently made it possible for members to automatically join their team when we detect an existing company email.

    We document the pain point at the very start of the project:

    Why do we do this?

    During UAT, we can verify that:

    • Technical objectives have been met. Any new sign-up with an existing @company.com email seamlessly joins the company.com team;
    • The pain point is gone. We no longer have complaints or confused support chats from team-less users.

    All in all, defining your project scope saves a ton of headaches down the line.

    • The whole team is aligned. “This is what we’re trying to achieve, and this is how we’re going to do it.”
    • All the information is centralized. If at any point during testing we’re not 100% confident that this is how the app should behave, we go back to this document for validation.
    • We don’t go over (or under) scope. We ensure that the test answers the customer’s problem—no more, no less. You don’t want to fall down the rabbit hole of adding more features during testing.

    2. Prepare and document workflows & wireframes

    Comprehensive workflows are an underutilized tool.

    The idea here is not only to align with the development team.

    When we do UAT at Marker.io, part of our testing strategy is to share all workflows with the testing team as well.

    We design workflows like this with Whimsical:

    We benefit from this in many ways.

    First of all, the document is super comprehensive:

    • Conditions & states. For instance, we need to test our app from multiple points of view (team lead, member, guest, company plan, starter plan), and we need to know who sees and does what under X conditions.
    • Expected results. If, at any time, one of our testers goes, “was this supposed to happen?”, or “is this the next logical step?” they can check Whimsical to make sure they’re on track.

    Secondly: the ability to pinpoint where problems occurred.

    The idea when designing those workflows is to be as visual as you can. That way, anyone (from beta tester to developer) can just point and say, “this is where things went wrong”.

    This is super helpful when designing test steps (“they need to get here”) and evaluating test results (“why couldn’t they get here?”).

    Bug reports like this make it easy for your dev team to identify why a flow failed—no guesswork involved.

    The best part? Scaling your test efforts is not a difficult task with the right approach.

    Whether you have 5 or 500 testers, you can run this process—as long as you have good documentation and solid test cases.

    3. Set up a secure staging environment

    The ideal way to observe real users testing your app would be production, right?

    But you can’t push a new version of your software to prod and “see what happens” just like that... for obvious reasons.

    The next best thing is a staging environment.

    A staging environment allows you to run tests with production data and against production services but in a non-production environment.

    For example, at Marker.io, when we need to test changes, we push everything to staging.marker.io, a password-protected version of our app.

    It’s all smooth sailing from there. Stakeholders log on and start reporting bugs.

    The same logic applies when you run tests on a larger scale.

    Send a URL and login credentials to your beta testers (via e-mail or otherwise) and tell them to try and break your app!

    There are a couple of other added benefits:

    • Save time. When your product is in staging, you can push changes immediately—it doesn’t matter if the whole thing breaks.
    • Protect live data. A beta user inadvertently caused a crash? Your production database is safe. Push a fix, and you’re ready for a retest.
    • Better overview. Locally testing your components is great for seeing what they do independently. But only staging gives you a sense of how these pieces fit into the greater whole.
    • Easier testing. Suppose you forgot to update a package in production, or a function that works completely fine locally suddenly breaks. These issues will already show up in staging.

    Let’s conclude this section with a warning:

    It is paramount that your staging settings mirror your production settings.

    Don’t fall into thinking, “Well, this is just another testing environment. It doesn’t need to be perfect”.

    Your staging should be an (almost) exact copy of prod.

    The reason is simple: with an accurate clone, if something doesn’t work in staging, it won’t work in production either.
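    One lightweight way to keep staging honest is to diff its configuration against production's. Here's a hedged sketch of the idea (the setting names are invented for illustration; real settings live in your own config system):

```python
def config_drift(prod: dict, staging: dict, ignore=("DOMAIN", "DB_URL")) -> list:
    """Return settings where staging diverges from production.

    Keys listed in `ignore` are *expected* to differ between the two
    environments (domains, credentials, database URLs)."""
    problems = []
    for key in prod.keys() | staging.keys():
        if key in ignore:
            continue
        if prod.get(key) != staging.get(key):
            problems.append(key)
    return sorted(problems)

# Hypothetical settings, for illustration only
prod = {"DOMAIN": "marker.io", "CACHE_TTL": 300, "FEATURE_DOMAIN_JOIN": True}
staging = {"DOMAIN": "staging.marker.io", "CACHE_TTL": 60, "FEATURE_DOMAIN_JOIN": True}

# CACHE_TTL differs, so it's flagged as drift worth fixing before UAT
print(config_drift(prod, staging))
```

    Running a check like this before each UAT round catches the "staging is just another test environment" drift the warning above describes.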

    4. Pre-organize feedback

    This is a big one.

    If you don’t have a system to triage reports, you’re in for a rough time.

    We suggest categorizing feedback into two main categories:

    • Feedback that requires discussion. Complex bugs and usability issues. Stuff that will require a meeting with several team members.
    • Instant action feedback. Wrong color, wrong copy, missing elements, one-man job problems.

    At Marker.io, our “first responder” dev handles instant action feedback immediately.

    Everything else goes into the “discussion” box—to review with the rest of the team later.

    This allows us to look at crucial bugs one by one, without getting bogged down with minor issues.

    Devs need actionable, specific bug reports. Pre-categorized feedback is the key to filtering noise.

    After this, we get in “rinse and repeat” mode. Fix bugs, push new version to staging, retest, collect new round of feedback, discuss internally.
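    The two-bucket triage described above can be sketched as a simple routing rule. This is a hypothetical illustration, not our actual tooling; the label names are invented:

```python
# One-person fixes that the "first responder" dev can handle immediately
INSTANT_ACTION_LABELS = {"copy", "color", "missing-element", "typo"}

def triage(report: dict) -> str:
    """Route a feedback report: quick fixes go straight to the first
    responder; everything else waits for a team discussion."""
    labels = set(report.get("labels", []))
    if labels and labels <= INSTANT_ACTION_LABELS:
        return "instant-action"
    return "discussion"

print(triage({"title": "Wrong button color", "labels": ["color"]}))        # instant-action
print(triage({"title": "Signup loops back to login", "labels": ["bug"]}))  # discussion
```

    The point is not the code itself but the policy it encodes: anything ambiguous or unlabeled defaults to the discussion bucket rather than being fixed in haste.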

    5. Install a way to collect feedback and test data

    Quality bug reporting during the software testing process is tedious.

    For every bug or usability issue, you have to:

    1. Open screenshot tool, capture bug.
    2. Open software to annotate screenshots, add a few comments.
    3. Log into project management tool.
    4. Create a new issue.
    5. Document the bug.
    6. Add technical information.
    7. Attach screenshot.
    8. ...etc.

    Sure: for a seasoned QA expert, this is a walk in the park.

    But this is a user acceptance testing guide. And when you do UAT, you are bound to involve non-technical, novice QA testers—or actual software users.

    We’ve found it easiest to install a widget on our staging site, using Marker.io (that’s us!).

    This way, your testing team reports bugs and sends feedback as they go, without constantly alt-tabbing to email or PM tool.

    It’s faster, and more accurate—for all parties involved.

    On the tester side, it’s a 3-step process:

    1. Go to staging, find a bug.
    2. Create a report and input details.
    3. Click on “Create issue”—done!

    Check it out in action:

    We try to include as many external eyes as possible.

    Early during UAT testing, this means our CEO and the rest of the team.

    They know what the new feature does, but they haven’t operated this version of our app yet.

    All that’s left to do is collect this feedback and triage it. Here’s how we do it.

    6. Collect and triage feedback—the right way

    All reports sent via the widget land directly in our PM tool. For us, that’s Linear, but we integrate with dozens of other bug tracking tools.

    Now, in order to get to bug fixing, you want reports to be as detailed as possible. But here’s the kicker.

    You can’t possibly ask dozens (or hundreds) of testers to fill in complex bug reports complete with console logs, environment info, and other technical stuff…

    …and send the whole thing over for review in an Excel file.

    That sounds like a nightmare.

    With Marker.io, automation solves all of this. Every bug report contains:

    • Screenshot with annotations
    • Console logs
    • Environment info
    • Metadata
    • Session replay, so we can see exactly what the user was doing when the error occurred

    Here’s an example of what this looks like on the developer side (in Jira, in this case):

    The best part? Marker.io has 2-way sync with all bug tracking tools.

    This means that whenever your developers mark a client issue as “Done” in your issue tracker, Marker.io will automatically mark the issue as “Resolved”.

    It’ll even send a notification to your end-user if you wish.

    Check it out:

    One added benefit of using Marker.io for user acceptance test cases is that you can always get in touch with the tester, even if they’re not part of your organization:

    Imagine the chaos if 500 beta testers were to report issues via email.

    Instead, having easy access to the tester’s insights—case by case, bug by bug—speeds up the feedback cycle and saves a ton of headaches.

    There's one more advantage to Marker.io when it comes to UAT: Guest forms and Member forms.

    When you conduct UAT, you want to collect detailed feedback from your in-house members, who are familiar with your app.

    But you don’t want to overwhelm a first-time user with dozens of fields.

    That’s why we built guest forms and member forms:

    Clients and beta testers can type in what went wrong with the guest form (and automatically attach a screenshot).

    The member form is a bit more advanced. You can immediately label bug reports or assign issues to a team member.

    The best part? These forms are 100% customizable—which means every time we do a new round of feedback, we can ask our users exactly what we want from them.

    User acceptance testing (UAT) best practices and checklist

    We know this is a lot to take in.

    So, we wanted to provide you with an extra resource to support your next UAT session.

    This checklist is a (super condensed!) recap of everything we’ve discussed in this post, and contains all best practices for user acceptance testing.

    1. Design project scope and objectives. Have a clear plan for the new feature or software that you are releasing. During UAT, come back to this document to ensure pain points have been properly addressed.
    2. Design workflows. Workflows allow you to align everyone. Share workflows with the testing team so they can accurately pinpoint where issues occurred and give you feedback on the fly.
    3. Prepare a staging environment. Run tests in a safe environment. Staging is an (almost) exact copy of your production—the ideal playground for alpha and beta testers.
    4. Pre-organize feedback. Pre-categorize into two categories: instant action and discussion.
    5. Brief testers. In alpha testing, tell testers about the new feature you’ve built in detail. Make it clear what the business objective is, and what you expect to discover from this testing.
    6. Draft test cases. For large-scale and beta testing, draft test cases for all users to follow and report on. Include, at a minimum, test steps & expected results.
    7. Deploy a bug reporting system. Use a powerful bug reporting tool like Marker.io to assist your dev team. Added benefit: greater accuracy from reporters, which makes developers’ lives that much easier.
    8. Document all steps. Keep tabs on what was fixed, what needs working on, expected challenges in the next iteration, etc.

    Great UAT starts with great organization.

    Here’s a downloadable version of this checklist you can keep at hand for your next user acceptance testing project.

    Frequently asked questions

    Who is responsible for user acceptance testing?

    Typically, your QA team is also in charge of running user acceptance testing.

    They will write a complete UAT test plan, prepare a UAT environment that mirrors production, and write corresponding UAT test cases.

    With that said, it is the product owner (as well as the end user) who performs UAT.

    Can you automate user acceptance testing?

    To an extent, yes.

    Instead of writing test scenarios and having your users or UAT team complete them, you can simply push a release to your test environment.

    Then, run a couple of automated test cases that’ll test the functionality of your app.

    However, we recommend having actual software users test your website. At the end of the day, this is the only way to make sure your users are satisfied with the app.
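    As a rough illustration of what "automating part of UAT" can mean, here is a toy scenario runner. The scenario format and the stand-in app are invented for this sketch; in practice you would drive a real browser against your staging environment with a tool such as Selenium or Playwright:

```python
def fake_app(action: str) -> str:
    """Stand-in for the app under test. A real runner would click
    through staging in a browser instead of calling this function."""
    responses = {
        "sign_up": "verification_sent",
        "verify": "logged_in",
        "create_project": "project_created",
    }
    return responses.get(action, "error")

def run_scenario(steps: list) -> dict:
    """Execute (action, expected_result) pairs; stop at the first mismatch."""
    for i, (action, expected) in enumerate(steps, start=1):
        actual = fake_app(action)
        if actual != expected:
            return {"passed": False, "failed_step": i, "got": actual}
    return {"passed": True}

scenario = [
    ("sign_up", "verification_sent"),
    ("verify", "logged_in"),
    ("create_project", "project_created"),
]
print(run_scenario(scenario))  # {'passed': True}
```

    Note what a runner like this can and cannot tell you: it confirms the flow still works, but it says nothing about whether a human found the flow obvious. That gap is exactly why human testers remain essential.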

    What is a user acceptance test? (examples)

    A user acceptance test is a type of testing run during UAT execution.

    The goal is to find out if the end user has an easy time completing the test, and if they run into any issues.

    For example:

    • Sign up for the tool and upgrade to our “Standard” plan.
    • Try to create a project and add a couple of tasks to this project.
    • Attach a file to task #3.
    • Invite your team to join you.

    Not sure what you should include in your test cases? We've got your back! Check out our list of user acceptance testing templates.

    What kind of tests do you run during UAT?

    You can run all the tests that you want as long as they involve business users.

    • Regression testing. Have recent changes to your codebase or software affected your users in a negative way?
    • Unit testing. Test individual units of code (for example, a specific field on the sign-up form), and ensure each is understood properly by end users.
    • System testing. Are all components properly working and interacting with one another? Can our end users see these connections easily?
    • Integration testing. Are two components communicating as expected?
    • Functional testing. Are all features working properly on an individual level? Can our end users see the benefit of those features?
    • Performance testing. Does our website (or app) work fast, no matter how many requests we receive?
    • Final testing. When the product goes live—are we running into any hiccups still?

    Who writes UAT test cases?

    The QA team is in charge of test management and writing test cases.

    They oversee the whole process: writing test scenarios that satisfy business requirements, setting up a staging environment, onboarding UAT testers, and analyzing the results of test cases.

    What are some tools that help successfully perform UAT?

    There are many tools that can assist your UAT testing process.

    We recommend having:

    • An error monitoring system like Sentry.
    • A bug report/website feedback tool like Marker.io.
    • A test case management system or bug tracking tool like Jira, Trello, etc.

    Looking for more UAT Tools? Check out our list of the best 14 user acceptance testing tools out there—and get even more insights from your users.

    Wrapping up...

    User acceptance testing is about understanding how your customers use your app or website, and if the app satisfies business requirements.

    And to successfully conduct UAT, you need to make sure that you’re collecting that information in the best possible way.

    In other words, have a system.

    The resources and ideas shared in this post should be the stepping stones towards building that system—and plenty to help you put together your next UAT session.
