Moderated Remote Usability Test Plan

Introduction

Working in an internal corporate UX role, I have predominantly conducted usability testing remotely via 1:1 moderated sessions. The company I work for has strict security requirements, which makes unmoderated testing tools difficult to procure (though we have run some unmoderated work, and as a quantitative guy I really like the prospect of conducting more unmoderated tests).

One of the most important pieces of upfront work when planning a usability test is generating a test plan. This is the document that provides all of the details of the test: where, when, why, how, and who. It doesn't need to be 100 pages long, but it needs to be robust enough that you could hand it off to another UX person and they could hit the ground running with that test if needed.

A good intro to usability test plans can be found at Usability.gov: https://www.usability.gov/how-to-and-tools/methods/planning-usability-testing.html. I draw a lot of inspiration from this approach, but it's always important to tailor it to your specific circumstances. I often start with a test plan template, but after refining for specific product needs the end result will usually have sections added or removed.

The Test Plan

Stakeholder List

At the very front of every test plan I like to have a table of stakeholders along with their role and contact information. These often include a project manager (and/or Scrum Master), a technical POC, a functional POC (if the system includes non-technical information domains), the lead UX person, and perhaps an escalation point (one level above the project manager/Scrum Master).

Name           | Role                                 | Contact Info
Jane Jones     | Product/Program Manager (Escalation) | Email/Phone
John Doe       | Project Manager                      | Email/Phone
Sally Smith    | Technical POC/SME                    | Email/Phone
Paul Anderson  | Functional POC/SME                   | Email/Phone
Andrew Skinner | UX Lead                              | Email/Phone

Sample Test Plan POC Table for Upfront Matter

Your list may include other people that are important for your specific system. For example, a business analyst or test engineer who will be helping set up test environments.

Usability Testing Team Roles

When conducting moderated testing it is incredibly helpful to have both a facilitator and a datalogger/notetaker. You should identify the responsible parties for these roles ahead of time.

Name           | Responsibility
Andrew Skinner | Test Facilitator
Bob Jones      | Primary Data Logger
Sally Smith    | Backup Data Logger

Example Team Role Table

Scope

It's important to scope usability tests appropriately. If your scope is too large, it can be hard to focus on issues in specific elements of your application or site. If your scope is too narrow, you can have similar struggles, or you may begin to lead your testers unintentionally because there is so little room for them to do anything within such a small set of tasks.

Scope should include:

  • The site/application/prototype being tested.
  • The section/sections being tested.
  • How many users will be tested
    • User/Persona/Role Segmentation

Test Objective

What is the objective or purpose of the test? Typically, when doing experimental design, you have a hypothesis in mind that you are testing. In usability testing you often develop tasks that you predict users will complete. The test objective or objectives give a high-level overview of what you may be testing: navigation on site X, interactive element Y, content structure on pages A and B, etc.

Test Schedule

You should identify the timeframe when the usability test will be conducted. If you are testing within a test environment, this should be closely coordinated with a test engineer or technical POC to ensure that the environment does not change during the usability test. If possible, use a frozen environment when testing in a test environment.

Testing with clickable prototypes is another option and gives you more control over what users see. Just remember that a clickable prototype presents only a representation of a system and may not reflect full functionality or system performance.

Test Date     | Time Slot  | Persona/Role Being Tested
Mon 3/2/2020  | 9-10AM     | Persona 1
Mon 3/2/2020  | 11AM-12PM  | Persona 2
Mon 3/2/2020  | 1-2PM      | Persona 3
Tues 3/3/2020 | 9-10AM     | Admin 1
Tues 3/3/2020 | 1-2PM      | Persona 2
Tues 3/3/2020 | 3-4PM      | Persona 3
Wed 3/4/2020  | 8-9AM      | Admin 1
Wed 3/4/2020  | 11AM-12PM  | Admin 1
Wed 3/4/2020  | 2-3PM      | Persona 2
Thur 3/5/2020 | 9-10AM     | Persona 1
Thur 3/5/2020 |            | Persona 1
Thur 3/5/2020 |            | Admin 1
Fri 3/6/2020  | 12-1PM     | Persona 3

Example Test Schedule

Test Environment/Location

Since this test plan is specifically targeted at remote moderated testing, the test environment will consist of a few elements:

  • Test Environment/URL
  • Conferencing Medium (e.g., Skype, Zoom, WebEx, UserZoom)
  • Other software needed to conduct the test (notetaking, screen recording, etc.)
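
If you like to keep this section consistent across plans, it boils down to a few structured fields. Here is a minimal sketch; the URL and tool names are placeholders, not recommendations:

    from dataclasses import dataclass, field

    # Sketch: capturing the remote test environment as structured data so it
    # can be checked off before each session. All values are placeholders.
    @dataclass
    class TestEnvironment:
        test_url: str            # test environment/URL
        conferencing_tool: str   # e.g., Skype, Zoom, WebEx, UserZoom
        other_software: list[str] = field(default_factory=list)

    env = TestEnvironment(
        test_url="https://test-env.example.com",
        conferencing_tool="Zoom",
        other_software=["screen recorder", "datalogging spreadsheet"],
    )
    print(env.conferencing_tool)  # Zoom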

Test Scenarios

Provide all the test scenarios being used in the usability test. Include the success path(s) that would indicate successful completion of each scenario. If you are going to record task time, include a benchmark for that task (you can develop a benchmark by using SMEs to establish a "best case" time and then doubling, or possibly even tripling, that time depending on the complexity of the task). Also include an upper limit on each task that is well outside the estimated timing.
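
To make the benchmark arithmetic concrete, here is a minimal sketch. The multipliers are illustrative assumptions, not fixed rules; the defaults happen to reproduce the 60-second benchmark and 300-second cap in the example row below, starting from a 30-second SME best case:

    # Sketch: deriving a task-time benchmark and hard upper limit from an
    # SME "best case" estimate. The 2x and 5x multipliers are illustrative;
    # tune them to the complexity of each task.
    def task_benchmarks(best_case_seconds: float,
                        benchmark_multiplier: float = 2.0,
                        upper_limit_multiplier: float = 5.0) -> dict:
        """Return the benchmark time and hard upper limit for one task."""
        benchmark = best_case_seconds * benchmark_multiplier
        return {
            "best_case": best_case_seconds,
            "benchmark": benchmark,
            "upper_limit": benchmark * upper_limit_multiplier,
        }

    print(task_benchmarks(30))
    # {'best_case': 30, 'benchmark': 60.0, 'upper_limit': 300.0}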

Scenario # | Scenario               | Success Path                            | Task Time (Max)          | Persona/Role
1          | Scenario written here. | Home -> Menu 1 -> Option 2 -> Complete | 60 seconds (300 seconds) | Persona 1
Example Table of Scenarios
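
If your datalogger records each participant's click path, task success can be scored against the scenario's success path(s). A minimal sketch, assuming paths are logged as ordered lists of screen names matching the example above; the exact-match rule is the simplest choice, and you may prefer to allow recoverable detours:

    # Sketch: scoring task success against defined success path(s).
    # Screen names mirror the example table above and are illustrative.
    SUCCESS_PATHS = [
        ["Home", "Menu 1", "Option 2", "Complete"],
    ]

    def is_successful(recorded_path: list[str]) -> bool:
        """A task succeeds if the recorded path matches a success path exactly."""
        return recorded_path in SUCCESS_PATHS

    print(is_successful(["Home", "Menu 1", "Option 2", "Complete"]))  # True
    print(is_successful(["Home", "Menu 3", "Back", "Complete"]))      # False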

Test Metrics

Identify what quantitative and qualitative metrics you are going to measure during the test.

Some common quantitative metrics:

  • Task Success (Successful/Unsuccessful)
  • Time on Task (recorded in seconds, often translated to minutes and seconds)
  • Error Count (Critical/Non-Critical)
  • Error-Free Rate

Some common qualitative metrics:

  • User comments and think-aloud observations
  • Responses to open-ended pre/post-test questions (see Appendix Documents)
  • Overall satisfaction and perceived ease of use

For the quantitative metrics you select, you should identify test goals for those metrics.

For example:

Task Success: 75% or greater.

Time on Task: Maximum 120 seconds per task.

Error Count: 5 maximum per user per test (this can be broken down to a per-task level if you have a hypothesis about certain tasks being difficult).

Error-Free Rate: 70% (you can break this down per task if you want to get even more detailed).
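
To show how those goals might be checked once sessions are logged, here is a minimal sketch; the field names, sample values, and the idea of rolling up one user's session are illustrative assumptions:

    # Sketch: rolling up the example quantitative metrics for one user's
    # session. Each dict is one task attempt; values are illustrative.
    tasks = [
        {"success": True,  "seconds": 75,  "errors": 0},
        {"success": True,  "seconds": 110, "errors": 2},
        {"success": False, "seconds": 120, "errors": 4},
        {"success": True,  "seconds": 60,  "errors": 0},
    ]

    n = len(tasks)
    task_success_rate = sum(t["success"] for t in tasks) / n    # goal: >= 75%
    max_time_on_task = max(t["seconds"] for t in tasks)         # goal: <= 120 s
    total_errors = sum(t["errors"] for t in tasks)              # goal: <= 5 per user
    error_free_rate = sum(t["errors"] == 0 for t in tasks) / n  # goal: >= 70%

    print(f"Task success: {task_success_rate:.0%}")   # 75% -- meets goal
    print(f"Max time on task: {max_time_on_task}s")   # 120s -- meets goal
    print(f"Total errors: {total_errors}")            # 6 -- misses the goal of 5
    print(f"Error-free rate: {error_free_rate:.0%}")  # 50% -- misses the 70% goal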

Coding/Severity Definition

Detailed coding/severity definitions will be covered in a later post when we talk about test data analysis. However, it's good to provide basic coding/severity ratings for errors if you are defining them.

Severity 1 – Critical
High impact on the usability of the system. May cause critical user errors or data loss. Frequency is not a factor, because even low-frequency occurrences of critical issues are considered critical. If an issue is both high frequency and critical, it should be escalated.

Severity 2 – Moderate/High
Non-critical issues that affect time on task and error count but may not necessarily prevent completion of the task. Can include high-frequency/low-to-moderate-impact issues and low-frequency/high-impact issues. Users will typically identify these, and they often trend across many users.

Severity 3 – Low/Moderate
Non-critical issues that may cause minor stumbling during tasks. Typically moderate problems with lower frequency, or low-severity problems that are more common. These are often categorized as "annoyances" that don't prevent task completion but reduce satisfaction.

Severity 4 – Low/Trivial
Non-critical issues that are likely also low frequency. Not resolving these would likely not increase risk, and they may be subjective issues (didn't like a color, thought the text size was too small, specific to a fringe device).

Example Severity Table
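
One lightweight way to keep these ratings consistent across dataloggers is to encode the definitions, as in this sketch. The escalation rule mirrors the Severity 1 note above, but the 50% frequency threshold is my own illustrative assumption:

    from enum import IntEnum

    class Severity(IntEnum):
        CRITICAL = 1       # may cause critical user errors or data loss
        MODERATE_HIGH = 2  # hurts time on task/error count; task still completable
        LOW_MODERATE = 3   # minor stumbling; "annoyance" issues
        LOW_TRIVIAL = 4    # low frequency, subjective, little added risk

    def needs_escalation(severity: Severity,
                         users_affected: int, total_users: int) -> bool:
        """Critical issues seen at high frequency should be escalated.
        The 50% threshold for "high frequency" is an illustrative assumption."""
        return severity == Severity.CRITICAL and users_affected / total_users >= 0.5

    print(needs_escalation(Severity.CRITICAL, users_affected=6, total_users=10))  # True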

Reporting/Outbrief Artifacts/Plans

You should provide a brief description of the artifacts or plans for delivering test results to stakeholders, plans to present to development teams, and so on.

Reporting/Outbrief:

  • UX Recommendation List
  • Meeting with Technical POC to conduct PICK Analysis
  • Executive Outbrief/Meeting
  • Disposition Recommendations Meeting

Appendix Documents

I like to provide an appendix with all necessary documents for executing the actual usability test.

Common appendix documents include:

  • Detailed Scenarios
  • Test Facilitator Document
  • Datalog Documents
  • Pre/Post-Test Questions (Sometimes I provide qualitative questions prior to and after usability testing to allow more open-ended discussion with the user. These are often pretty open questions like: Was there something missing you were expecting to see? If you could change one thing, what would you change? Etc.)
  • User Consent Forms (Will talk more about these in a later post… these provide the user with information on what to expect, notify them if the session will be recorded, explain how their data will be handled, etc.)
  • Data Handling/Storage Requirements (All user data should be anonymized and non-attributed; a small anonymization sketch follows this list. If you have requirements under GDPR or other data laws, these should be noted and followed.)
  • Accessibility Considerations (If you are testing with users who use adaptive technology, such as screen readers, this should be noted.)
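
On the data-handling point above, one simple way to keep session data non-attributed is to replace participant names with stable anonymous IDs at logging time. A minimal sketch, assuming a salted hash satisfies your data rules; the salt and name are placeholders:

    import hashlib

    # Sketch: replacing participant names with stable, non-attributable IDs.
    # Store SALT separately from the data and handle it per your GDPR/data
    # requirements; the salt and name below are placeholders.
    SALT = "replace-with-a-secret-salt"

    def anonymize(participant_name: str) -> str:
        digest = hashlib.sha256((SALT + participant_name).encode()).hexdigest()
        return f"P-{digest[:8]}"  # short, stable pseudonym for logs and reports

    print(anonymize("Jane Jones"))  # e.g., 'P-3a94c1e7' (value depends on SALT)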

Conclusion

This is not an exhaustive (though maybe nearly exhaustive) example of a remote moderated usability test plan. A lot of the details in this document help focus your test and make sure that things are being measured against… measurable factors. This document can be as detailed as you need it to be. In an Agile environment, working on a project in 2-week sprints, I would use an extremely condensed version of this that can be constructed fairly quickly. If it is a longer project with much longer timeframes, or if the test is more of a summative test encompassing a larger release and many more users, then it will be more detailed and more robust.

The main thing I always keep in mind before being satisfied with my test plan is: "Can I hand this off to someone, and could they run the test and collect results ready for analysis without any intervention or help from me?" If the answer is yes, it's ready. If the answer is no, it's not ready. If the answer is maybe, it's probably not ready. But always be prepared!
