The Ultimate Guide to User Acceptance Testing (UAT)

Last updated: October 10, 2023

Flowchart of the software development life cycle, with alpha and beta testing highlighted as user acceptance testing

    What is user acceptance testing?

    User acceptance testing (UAT) is the final stage of software development, ensuring your application aligns with the organization’s business requirements. It is also known as end-user testing or application testing.

In this phase, actual users (or users representative of your target audience) test your app in a production-like environment.

    Testers will validate the application by simulating real-world scenarios and identifying bugs or issues that might have been overlooked during internal QA.

    User acceptance testing occurs after internal QA testing and before go-live.

    A simplified view of the software development lifecycle, positioning user acceptance testing as the final stage before production.

    What is the purpose of UAT?

    The purpose of user acceptance testing (UAT) is to validate two critical aspects:

    1. User requirements. Does the app align with user expectations? Can users intuitively navigate and enjoy the app's functionality?
    2. Business requirements. Can the app handle real use cases efficiently and effectively?

    In other words, the software should help users accomplish real-world tasks without hurdles.

    Validation of these requirements usually comes in the form of stakeholder sign-off, e.g., when your client is satisfied with the final version of the app or website.

    Internal QA vs UAT

    Internal QA focuses on technical issue resolution and is executed by the QA team, whereas user acceptance testing (UAT) is performed by end-users or stakeholders to ensure the software meets real-world expectations.

A table showing the difference between QA (carried out by the product team to ensure a functional product) and UAT (carried out by real users to ensure a viable product).

Both QA and UAT involve intensive testing, and both take place during the testing stage of the software development life cycle.

    Workflow of the software development life cycle, including project scope, design, development, testing, and deployment.

    So, it’s not uncommon to confuse the two. At the end of the day, user acceptance testing (UAT) is a form of quality assurance.

    To further understand the difference between the two, let’s take a standard sign-up flow as an example.

A standard sign-up flow, showing what may happen after a user signs up on a website.

    Typical test cases in the internal QA phase for a sign-up flow might include:

    • Are all input fields and buttons usable?
    • Does the email verification function run properly?
    • Are there any unexpected bugs at any point in the workflow?

    UAT test cases, however, aim to answer user-centric questions such as:

    • Are testers filling out the correct information?
    • Do they understand what’s happening when being redirected to the login page?
    • Are they opening their email and going through the verification steps?
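
Notice the split: the internal QA questions above lend themselves to automation, while the UAT questions can only be answered by watching real users. As an illustration, here is a minimal sketch of an automated QA check for this sign-up flow, written with Playwright in TypeScript—the URL, field labels, and on-screen copy are hypothetical assumptions, not taken from any real app:

```ts
// signup.spec.ts — a minimal sketch of an internal QA check for the sign-up flow.
// All selectors, URLs, and copy below are illustrative assumptions.
import { test, expect } from '@playwright/test';

test('sign-up form works and triggers email verification', async ({ page }) => {
  await page.goto('https://staging.example.com/signup');

  // QA question: are all input fields and buttons usable?
  await page.getByLabel('Email').fill('tester@example.com');
  await page.getByLabel('Password').fill('a-strong-password');
  await page.getByRole('button', { name: 'Sign up' }).click();

  // QA question: does the email verification function run properly?
  await expect(page.getByText('Check your inbox')).toBeVisible();
});
```

The UAT questions—whether testers fill out the right information, or understand the redirect—can't be asserted in code; they come from observing real testers.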

    What are the benefits of user acceptance testing (UAT)?

The benefits of user acceptance testing are plentiful—it safeguards the quality of your apps, products, and websites. If you're still on the fence, consider the following advantages:

    1. Early detection of issues. While developers can catch many technical issues during code reviews, UAT brings the perspective of an end-user to identify potential confusion.
    2. Fresh perspective. Developers can sometimes miss overarching usability or flow issues. First-time users provide fresh insights. If they find any aspect of the app unintuitive, they'll raise it.
    3. Real-world environment. UAT offers a production-like setting for testing. By mirroring your production database, you get to observe genuine user interactions and behaviors, rather than just hypothetical or "expected" use scenarios.
    4. Broad testing scope. With multiple testers, you amplify the chances of spotting even the most obscure issues and bugs.
    5. Mitigate risks pre-launch. Identify and rectify significant issues, such as confusing user interfaces, security vulnerabilities, or critical bugs, before the software reaches the broader audience. Successful UAT ensures your app is robust, secure, and user-friendly when it goes live.
    6. Objective feedback. UAT often involves users who are new to your software and lack any biases. They don’t have prior notions about its functionalities or design, and so their feedback is candid and impartial.

    Types of user acceptance testing

    There are several different types of user acceptance testing, based on their objectives:

    • Alpha and beta testing aim to fix critical bugs and get early feedback from stakeholders.
    • Contract acceptance testing validates that software meets pre-agreed specifications.
    • Operational acceptance testing ensures workflows operate smoothly.
    • Regulation acceptance testing verifies compliance with laws and regulations.
    • Black box testing focuses on software outputs without considering internal mechanics.

In this guide, our spotlight will be on the two most common UAT types: alpha and beta testing.

Alpha testing vs beta testing

    Alpha testing and beta testing are two distinct testing phases.

    A table that shows the difference between alpha testing (where the goal is to fix critical bugs) and beta testing (where the goal is to get customer feedback).

    In alpha testing:

• Your testing team is internal—typically the product owner and their team. Depending on the project's scale, you might bring in friends, family, or business analysts for more comprehensive feedback.
• Critical issues still exist. Workflows might be broken and functionalities unclear. Bugs overlooked by the development team can still surface—these are immediately reported and worked on.
• It takes time—at the very least four weeks.
    • Your main goal is to prepare for beta testing. In this phase, you eliminate potential security issues and ensure a smooth user experience for beta testers.

    In beta testing:

    • A subset of actual users becomes your testing team, often on a staging environment. These testers should represent your average customer and/or target audience.
    • It’s a short phase. You’ve already dealt with the most critical issues during alpha testing, and are now in the monitoring stage.
    • Your main goal is to experience your software from the end-user's perspective. Feedback collection is paramount. At this stage, you're gauging whether business objectives are being met and if the user experience is satisfactory.

    How to conduct user acceptance testing (UAT)

    In this section, we'll delve into a real-world example of user acceptance testing for our latest feature: domain join.

    Let's break down the key steps:

    1. UAT prerequisites: project scope and objectives

    The first step is to clearly define project scope and objectives.

Throughout the UAT process, we constantly go back to our documentation, verifying scope, customer needs, and other requirements.

For example, at Marker.io, we’ve recently made it possible for members to auto-join their team when we detect a matching company email.

    So, we document the pain point at the very start of the project:

Example of a pain point that lands in Intercom.

    With this documentation, we can verify at any point that:

    1. Technical objectives have been met. In this case, any new sign-up with an existing @company.com email seamlessly joins the company.com team;
    2. The pain point is gone. We no longer have complaints or support requests from confused, team-less users.

    All in all, defining your project scope saves a ton of headaches.

    • The whole team is aligned: “This is what we’re trying to achieve, and this is how we’re going to do it.”
    • All the information is centralized. If there's ever uncertainty about the app's behavior during testing, we refer back to this foundational document.
    • We don’t go over (or under) scope. Avoiding the trap of introducing unplanned features during testing is crucial.

    2. Prepare and document workflows & wireframes

    All workflows, wireframes, and expected behaviors are shared with everyone.

    The idea here is not only to align with the development team.

    When we do UAT at Marker.io, part of our testing strategy is to share all workflows with the testing team as well.

    We design workflows like this with Whimsical:

    An example workflow in Whimsical shows the expected app behavior.

    We benefit from this in multiple ways.

    First, the documents are as comprehensive as possible, including:

    • Conditions & states. For instance, we need to test our app from multiple points of view (team lead, member, guest, company plan, starter plan). We need to know who sees and does what, and under what conditions.
    Example workflow depending on state.
    • Expected results. If any tester wonders “was this supposed to happen?”, or “is this the next logical step?” during the UAT process, they can check Whimsical to make sure they’re on track.
    Another example workflow illustrates conditions for pop-ups to appear within the app.

    Secondly, this documentation gives testers the ability to pinpoint where problems occurred.

    Workflows should be visual and easy to follow. This way, anyone can just point and say, “this is where things went wrong”.

    This is invaluable for outlining test procedures ("What path should the testers take?") and analyzing test outcomes ("Why couldn't the testers reach the desired outcome?").

    Bug reports like this make it easy for your dev team to identify why a flow failed, eliminating the need for guesswork.

    The best part? With structured documentation and robust test cases, scaling your testing—whether for 5 or 500 testers—is feasible and efficient.

    3. Set up a secure staging environment

    The ideal way to observe real users testing your app would be production, right?

    But you can’t push a new version of your software to prod and “see what happens” just like that... for obvious reasons.

    The next best thing is a staging environment.

    Differences between local, staging, and production environments.

    A staging environment allows you to run tests with production data and against production services but in a non-production environment.

    For example, at Marker.io, when we need to test changes, we push everything to staging.marker.io, a password-protected version of our app.

    Stakeholders can then easily log on and start reporting bugs.

    The same logic applies when you run UAT tests on a larger scale.

    Send a URL and login credentials to your beta testers (via e-mail or otherwise) and tell them to try and break your app!

    There are a couple of other added benefits:

    • Save time. When your product is in staging, you can push changes and bug fixes immediately—it doesn’t matter if the whole app breaks.
• Protect live data. A beta user inadvertently caused a crash? Your production database is safe. Push again, and you're ready for another round of testing.
    • Better overview. Locally testing your components is great for seeing what they do independently. But it's only through staging that you can truly see how they integrate within the full software.
• Easier testing. Suppose you forgot to update a package in production, or a function that worked completely fine locally suddenly breaks. These issues will already show up in staging.
    An example issue in staging that did not show up on a local environment.

    For user acceptance testing (and a good practice in general), it is paramount that your staging settings mirror your production settings.

    Don’t fall into thinking, “Well, this is just another testing environment. It doesn’t need to be perfect”.

    Your staging environment should be an (almost) exact copy of production.

    The reason is simple: with an accurate clone, if something doesn’t work in staging, it won’t work in production either.
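
One low-effort way to keep the two in lockstep is to drive every environment from the same config shape, so staging can only differ from production in hostnames and credentials. Here's a minimal TypeScript sketch of that idea—the variable names, URLs, and flags are hypothetical, not Marker.io's actual setup:

```ts
// config.ts — a minimal sketch of keeping staging and production in lockstep.
// Names, URLs, and flags are hypothetical, not Marker.io's actual setup.
type Environment = 'local' | 'staging' | 'production';

interface AppConfig {
  apiBaseUrl: string;
  emailProvider: 'smtp' | 'sandbox';
  featureFlags: Record<string, boolean>;
}

// Flags are shared verbatim, so staging behaves exactly like production.
const sharedFlags: Record<string, boolean> = { domainJoin: true };

const configs: Record<Environment, AppConfig> = {
  local: {
    apiBaseUrl: 'http://localhost:3000',
    emailProvider: 'sandbox', // fake emails are fine locally
    featureFlags: sharedFlags,
  },
  // Staging differs from production only in hostname (and, in practice, credentials).
  staging: {
    apiBaseUrl: 'https://staging.example.com',
    emailProvider: 'smtp',
    featureFlags: sharedFlags,
  },
  production: {
    apiBaseUrl: 'https://app.example.com',
    emailProvider: 'smtp',
    featureFlags: sharedFlags,
  },
};

export const config =
  configs[(process.env.APP_ENV ?? 'local') as Environment];
```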

    4. Pre-organize feedback

    You will also need a system to triage bug reports and user feedback.

    We suggest categorizing feedback into two main categories:

1. Feedback that requires discussion. For example, more complex bugs and usability issues—anything that will require a meeting with several team members.
2. Instant action feedback. Wrong colors, wrong copy, missing elements—problems one person can fix on the spot.
    Differences between discussion feedback and instant action feedback.

    At Marker.io, the developer in charge of triage also handles instant action feedback.

    Everything else goes into the “discussion” box, to review with the rest of the team later.

    This allows us to look at crucial bugs one by one, without getting bogged down with minor issues.

    Developers need actionable, specific bug reports. Pre-categorized feedback is the key to filtering noise.

After this, it's just rinse and repeat: fix bugs, push a new version to staging, execute UAT tests, collect a new round of feedback, and discuss internally.

    Our process for UAT feedback management: triage reports, evaluate internally, fix issues, then push to staging for another round of testing.
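
To make the two buckets concrete, here is a toy TypeScript sketch of the pre-categorization step—the label names are our own illustrative assumptions, not a Marker.io feature:

```ts
// triage.ts — a toy sketch of the two-bucket triage described above.
// Label names are illustrative assumptions, not a Marker.io feature.
type Bucket = 'instant-action' | 'discussion';

interface FeedbackReport {
  title: string;
  labels: string[]; // e.g. ['copy'] or ['workflow', 'usability']
}

// Problems one person can fix on the spot (copy, colors, missing elements).
const INSTANT_LABELS = new Set(['copy', 'color', 'missing-element']);

function triage(report: FeedbackReport): Bucket {
  // Only route to instant action if every label is a one-person fix;
  // anything more complex waits for a team discussion.
  const allInstant =
    report.labels.length > 0 &&
    report.labels.every((label) => INSTANT_LABELS.has(label));
  return allInstant ? 'instant-action' : 'discussion';
}

console.log(triage({ title: 'Typo on pricing page', labels: ['copy'] })); // instant-action
```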

    5. Install a way to collect feedback and test data

    Quality reporting during the software testing process is tedious.

    For every bug or usability issue, you have to:

    1. Open screenshot tool, capture bug.
    2. Open software to annotate screenshots, add a few comments.
    3. Log into project management tool.
    4. Create a new issue.
    5. Document the bug.
    6. Add technical information.
    7. Attach screenshot.
    8. ...etc.

    For a seasoned QA expert, this is a walk in the park.

    But this is a user acceptance testing guide. And when you do UAT, you are bound to involve non-technical, novice QA testers—or actual software users.

    We’ve found it easiest to install a widget on our staging site, using Marker.io (that’s us!).

This way, your testing team reports bugs and sends feedback straight from the website, without constantly alt-tabbing to email or a PM tool.

    It’s faster and more accurate for all parties involved.
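
As a rough idea of how such a widget ends up on a staging site, a generic loader gated to staging might look like the sketch below. To be clear, this is not Marker.io's actual embed snippet—check their docs for the real one—and the URL is a placeholder:

```ts
// widget.ts — a generic sketch of loading a feedback widget on staging only.
// This is NOT Marker.io's real embed snippet; the URL below is hypothetical.
if (window.location.hostname.startsWith('staging.')) {
  const script = document.createElement('script');
  script.src = 'https://widgets.example.com/feedback-widget.js'; // placeholder URL
  script.async = true;
  document.head.appendChild(script);
}
```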

On the tester side, it's a three-step process:

    1. Go to staging, find a bug.
    2. Create a report and input details.
    3. Click on “Create issue”—done!

    Check it out in action:

    A reporter finding a bug and reporting it via Marker.io feedback button and annotation tools

Early in UAT, we have our CEO and the rest of the non-developer team give feedback via this widget.

    They know what the new feature does, but they haven’t operated this version of our app yet.

    Then, we collect this feedback and triage it.

    6. Collect, triage, and respond to feedback

    All reports sent via the widget land directly in our project management tool. For us, that’s Linear, but we integrate with dozens of other bug tracking tools.

    It's crucial for bug fixing that these reports are comprehensive.

    However, you can’t possibly ask dozens (or hundreds) of testers to fill in complex bug reports complete with console logs, environment info, and other technical data...

    …and send the whole thing over for review in an Excel file.

    That sounds like a logistical nightmare.

    That's where Marker.io comes into play. It automates the process, ensuring each bug report includes:

    • Screenshot with annotations;
    • Console logs;
    • Network requests;
    • Environment info;
    • Metadata;
    • And session replay, so we can see exactly what the user was doing when the error occurred.

Here’s an example of what this looks like on the developer side, in Jira:

    A Jira issue with all the information a developer needs to reproduce a bug found during testing.

    The best part? Marker.io has 2-way sync with all bug tracking tools.

    This means that whenever your developers mark a client issue as “Done” in your issue tracker, Marker.io will automatically mark the issue as “Resolved”.

    It’ll even send a notification to your end-user if you wish.

    Check it out:

    Example of Marker.io status sync, marking an issue as Resolved in Marker.io as soon as it is moved to the Done column in the project management tool

    One added benefit of using Marker.io for user acceptance test cases is that you can always get in touch with the tester, even if they’re not part of your organization, via the issue page.

    Imagine the chaos if hundreds of beta testers submitted their feedback through individual emails.

    Streamlining the process to capture tester insights for each specific case or bug accelerates the feedback loop and eliminates unnecessary confusion.

    There's one more advantage to Marker.io when it comes to UAT: Guest forms and Member forms.

    While conducting UAT, detailed feedback is essential from in-house users familiar with the application.

    However, for first-time users, a lengthy form with dozens of fields can be daunting.

    That’s why we built guest forms and member forms:

    Image of different feedback form configurations for Marker.io

    Clients and beta testers can type in what went wrong with the guest form (and automatically attach a screenshot).

    The member form is a bit more advanced. You can immediately label bug reports or assign issues to a team member.

    The best part? These forms are 100% customizable, which means every time we start a new round of feedback, we can ask our users exactly what we want from them.

    User acceptance testing (UAT) best practices and checklist

    This checklist is a recap of everything we’ve discussed in this post, and contains all best practices for user acceptance testing.

    1. Design project scope and objectives. Have a clear plan for the new feature or software that you are releasing. During UAT, come back to this document to ensure pain points have been properly addressed.
    2. Design workflows. Workflows allow you to align everyone. Share workflows with the testing team so they can accurately pinpoint where issues occurred and give you feedback on the fly.
3. Prepare a staging environment. Run tests in a safe environment. Staging is a near-perfect copy of your production setup—the ideal playground for alpha and beta testers.
    4. Pre-organize feedback. Pre-categorize into two categories: instant action and discussion.
    5. Brief testers. In alpha testing, tell testers about the new feature you’ve built in detail. Make it clear what the business objective is, and what you expect to discover from this testing.
6. Draft test cases. For large-scale and beta testing, draft test cases for all users to follow and report on. Include at least the test steps and expected results for each case (see the sketch after this list).
7. Deploy a bug reporting system. Use a powerful bug reporting tool like Marker.io to assist your dev team. Added benefit: greater accuracy from reporters, which makes developers' lives that much easier.
    8. Document all steps. Keep tabs on what was fixed, what needs working on, expected challenges in the next iteration, etc.
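
For item 6, here's one possible way to structure a test case so it always carries steps and an expected result—a minimal TypeScript sketch; the field names and the example case (based on the domain-join feature discussed earlier) are our own, not an industry standard:

```ts
// testcase.ts — one possible shape for a UAT test case, with at least
// steps and an expected result. Field names are illustrative, not a standard.
interface UatTestCase {
  id: string;
  title: string;
  steps: string[];
  expectedResult: string;
  actualResult?: string; // filled in by the tester
  passed?: boolean;      // filled in after the run
}

// Example based on the domain-join feature discussed earlier in this guide.
const domainJoinCase: UatTestCase = {
  id: 'UAT-001',
  title: 'New member auto-joins their existing team',
  steps: [
    'Sign up with an @company.com email address',
    'Open the verification email and confirm the account',
    'Log in for the first time',
  ],
  expectedResult: 'The new member lands inside the existing company.com team',
};
```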

    Great UAT starts with great organization.

    Here’s a downloadable version of this checklist you can keep at hand for your next user acceptance testing project.

    Downloadable checklist containing every step of the user acceptance testing process.

    Challenges of UAT

    When conducting UAT, companies can run into the following challenges:

• Poor UAT documentation. A lack of scope, clearly defined objectives, and a test plan will cause issues during testing—getting your entire team (and testers) aligned is crucial for UAT success.
• Insufficient internal QA. Development teams can save themselves a lot of admin time by getting rid of bugs upstream, rather than leaving them for the UAT phase. Ideally, UAT testers should operate in a nearly bug-free environment.
    • Lack of a strong testing team. Pre-screen your testers and ensure they are the right target audience for your software. Train them in the different tools and processes you use for testing, and align them with your goals.
    • Not using the right tools. For large-scale projects in particular, asking your testers to use Excel, Google Docs, or emails for their reports is a recipe for disaster. Prepare solid bug tracking and reporting solutions to make it easier for everyone involved.

    Frequently asked questions

    Who is responsible for user acceptance testing?

    User acceptance testing is performed by the end-users.

With that said, it is the QA team that will be in charge of running user acceptance testing.

    They will write a complete UAT test plan, prepare a UAT environment that mirrors production, and write corresponding UAT test cases.

    How to build the right UAT team?

    The right UAT team will consist of:

    • A project manager, in charge of overseeing UAT execution from start to finish.
    • A documentation specialist who will define scope and objectives, create a UAT test plan, and assist in creating test cases.
    • A QA lead in charge of pre-screening, onboarding, and training testers.
    • A set of testers, ideally existing users or customers, but also business stakeholders.

    Can you automate user acceptance testing?

    To an extent, yes.

    Instead of writing test scenarios and having your users or UAT team complete them, you can simply push a release to your test environment.

    Then, run a couple of automated test cases that’ll test the functionality of your app.
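
For instance, a lightweight automated pass might just smoke-test critical pages on each push to the test environment. Here's a minimal Playwright sketch in TypeScript—the routes and domain are illustrative assumptions:

```ts
// smoke.spec.ts — a minimal automated pass over critical pages, run on each
// push to the test environment. Routes and domain are illustrative assumptions.
import { test, expect } from '@playwright/test';

const routes = ['/login', '/signup', '/pricing'];

for (const route of routes) {
  test(`page ${route} renders without JavaScript errors`, async ({ page }) => {
    const errors: string[] = [];
    page.on('pageerror', (error) => errors.push(error.message));

    await page.goto(`https://staging.example.com${route}`);
    await expect(page.locator('body')).toBeVisible();
    expect(errors).toHaveLength(0); // no uncaught JS errors on load
  });
}
```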

    However, we recommend having actual software users test your website. At the end of the day, this is the only way to make sure your users are satisfied with the app.

    What is a user acceptance test? (examples)

A user acceptance test is a single test scenario that end users run during UAT execution.

    The goal is to find out if the end user has an easy time completing the test, and if they run into any issues.

    For example:

    • Sign up for the tool and upgrade to our “Standard” plan.
    • Try to create a project and add a couple of tasks to this project.
    • Attach a file to task #3.
    • Invite your team to join you.

    Not sure what you should include in your test cases? We've got your back! Check out our list of user acceptance testing templates.

    Who writes UAT test cases?

    The QA team is in charge of test management and writing test cases.

They oversee the whole process, from writing test scenarios that satisfy business requirements and setting up a staging environment, to onboarding UAT testers and analyzing test case results.

    What are some tools that help successfully perform UAT?

There are many tools that can assist your UAT process.

    We recommend having:

    • An error monitoring system like Sentry
    • A bug report/website feedback tool like Marker.io
    • A test case management system or bug tracking tool like Jira, Trello, etc.

    Looking for more UAT Tools? Check out our list of the best user acceptance testing tools out there—and get even more insights from your users.

    Wrapping up...

    User acceptance testing is about understanding how your customers use your app or website, and if the app satisfies business requirements.

    And to successfully conduct UAT, you need to make sure that you’re collecting that information in the best possible way.

    In other words, have a system.

    The resources and ideas shared in this post should be the stepping stones towards building that system—and plenty to help you put together your next UAT session.
