Conducting a remote user test
Though each test is designed around specific objectives, its success depends on following a standard methodology. Through our tools and teams, we’ll make every effort to ensure you receive the best insights possible.
User script
This is the foundation of the user test that’s being designed. Particular attention must be paid to its contents and to the instructions that will be submitted. The challenge here is to translate the test objectives into instructions for the user without influencing their behavior or introducing cognitive biases.
The test is conducted in a non-moderated way, so the instructions must be neutral, understandable, precise and concise.
It’s essential to formalize:
- Questions
- Goals
- Hypotheses to validate or invalidate via the test
- Scenarios submitted to users via the test solicitation email
- Panel sample characteristics
- Task-by-task instructions submitted to users
Conducting the tests
This is the point at which the tests are sent to users (Learn more about our panel).
From this point on, the only thing to do is wait for the testers’ feedback. Generally, this takes from 48 to 72 hours. If the segmentation criteria are complicated (for example a B2B target), this can take up to one week.
Analysis
Every analysis begins with a framework. Before analyzing the data, it’s important to have a clear idea of expectations for the deliverables. The main points of the analysis can, for example, center around:
- users’ paths via a flow analysis of the various pages landed on along the way,
- predefined study objectives (measuring engagement, understanding, usability, trustworthiness…),
- features studied during the test (search engine, filters, etc.),
- etc.
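A flow analysis of users’ paths often comes down to simple drop-off arithmetic: what share of testers reaches each page, and how many are lost at each step. Here is a minimal sketch; the page names and counts are illustrative, not real test data.

```python
# Hypothetical sketch of a flow (funnel) analysis.
# Each entry: (page, number of testers who reached it), in path order.
flow = [
    ("home", 50),
    ("search", 42),
    ("product", 30),
    ("checkout", 18),
]

def drop_off_rates(flow):
    """Return (page, reach_rate, step_drop) for each page after the first."""
    total = flow[0][1]
    rates = []
    for (prev_page, prev_n), (page, n) in zip(flow, flow[1:]):
        reach = n / total            # share of all testers reaching this page
        step_drop = 1 - n / prev_n   # share lost since the previous page
        rates.append((page, reach, step_drop))
    return rates

for page, reach, drop in drop_off_rates(flow):
    print(f"{page}: reached by {reach:.0%}, step drop-off {drop:.0%}")
```

Pages with an unusually high step drop-off are natural candidates for the analysis points listed above.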
As part of a SaaS subscription, our CSM will guide you on best practices for analysis (Learn more about our offerings).
Written comments from users should be selected according to the expectations of the analysis, defined in advance. It’s best to first identify the richest and most thoughtful written comments, including those that suggest improvements.
Less constructive comments (e.g., “I find the landing page attractive”) can be revisited later.
Quantitative data analysis generally takes less time, as the numbers often speak for themselves. It is nevertheless recommended to link these numbers to those from previous tests. The question here is how to interpret these quantitative measures.
For example: if 21% of users state they don’t trust the payment page, is this significant?
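One way to answer that question is to put a confidence interval around the measured proportion; if the interval is wide, the 21% figure should be interpreted cautiously. The sketch below uses a Wilson score interval and assumes, purely for illustration, a panel of 100 testers of whom 21 expressed distrust.

```python
import math

def wilson_interval(k, n, z=1.96):
    """95% Wilson score interval for a proportion of k 'successes' in n trials."""
    p = k / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return centre - half, centre + half

# Assumed counts: 21 of 100 testers distrust the payment page.
low, high = wilson_interval(21, 100)
print(f"21% distrust, 95% CI: {low:.0%} to {high:.0%}")
```

With 100 testers the true rate could plausibly be anywhere from roughly 14% to 30%; with a smaller panel the interval would be wider still, which is exactly why comparing against previous tests matters.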
The goal is to isolate the key segments. For example, when the user expresses:
- a positive or negative opinion on the interface or the flow
- an affinity or preference for an element of the interface
- difficulty understanding an element of the interface
- annoyance or a wish to abandon their interaction with the interface
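Sorting written comments into segments like these can start from simple keyword rules before any manual review. A minimal sketch, where the segment names, keywords, and sample comment are all illustrative assumptions:

```python
# Hypothetical keyword rules mapping comments to the segments above.
SEGMENTS = {
    "opinion": ["like", "dislike", "love", "ugly", "nice"],
    "preference": ["prefer", "favorite", "rather"],
    "comprehension": ["confusing", "unclear", "don't understand"],
    "frustration": ["annoying", "give up", "abandon", "frustrating"],
}

def tag_comment(comment):
    """Return the list of segments whose keywords appear in the comment."""
    text = comment.lower()
    return [seg for seg, kws in SEGMENTS.items() if any(kw in text for kw in kws)]

tags = tag_comment("The filters are confusing and a bit annoying")
# → ["comprehension", "frustration"]
```

Such rules only pre-sort the comments; the richest ones still deserve a human read.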
Actions/Improvements
Now it’s time to put the improvements in place. You can easily build a roadmap from the list of recommendations, filtering for the most critical observations or those with a high frequency of occurrence.
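That filtering step can be sketched in a few lines. The field names and sample recommendations below are assumptions for illustration, not output from any real tool:

```python
# Hypothetical recommendations with a criticality score (1–3) and
# how many testers hit the issue (frequency).
recommendations = [
    {"issue": "payment page distrust", "criticality": 3, "frequency": 7},
    {"issue": "unclear filter labels", "criticality": 2, "frequency": 12},
    {"issue": "landing page wording", "criticality": 1, "frequency": 3},
]

def prioritize(recs, min_criticality=2):
    """Keep sufficiently critical observations, ordered by criticality then frequency."""
    critical = [r for r in recs if r["criticality"] >= min_criticality]
    return sorted(critical, key=lambda r: (r["criticality"], r["frequency"]), reverse=True)

roadmap = prioritize(recommendations)
```

The top of the resulting list is a reasonable first draft of the improvement roadmap.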