In user experience and design execution, the question often arises: why do we care so much about tests, questions, and the answers they inspire, and how do we attribute measurement to these factors?

When validating design decisions, there are numerous factors to consider: demographics, geolocation, technical constraints, accessibility, and so on. But some factors that influence design execution arise simply from human responses, such as cognitive bias and the framing effect.

As humans, we each carry biases, and by the very nature of bias, we are often unaware of them. How that bias then transfers to others is the framing effect.

As a simple example: someone sees a post on a social network and is influenced by the credibility of the person who posted it, the reactions or number of likes from others, and the image attached to the post. Ultimately, even if they misconstrue the post or react prematurely without context, they can then transfer that bias to others by reposting it.

Essentially, this means that people draw different conclusions depending on how information is presented, regardless of its content. And it is extremely easy for bias to be manipulated.

This is evident when applying A/B testing to the design process. Simply adding context can produce very different test results.

For example, in A/B Test #1, the participants are presented with two web design comps and asked which they prefer. In A/B Test #2, the same participants are presented with the same two comps and asked which they prefer, this time with the context that the design is intended for a 20-year-old, Boston-based technology company in the healthcare space focused on senior care.

In Test #1, the experiment returns the result that 70% chose A, whereas in Test #2 only 30% chose A. Without context, the Test #1 results rely purely on cognitive bias.
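To make the contrast concrete, we can check whether a swing from 70% to 30% is larger than chance would explain. The article does not give a sample size, so the sketch below assumes a hypothetical 100 participants per test and applies a standard two-proportion z-test:

```python
import math

def two_proportion_z_test(success_a, n_a, success_b, n_b):
    """Two-sided z-test for a difference between two proportions."""
    p_a = success_a / n_a
    p_b = success_b / n_b
    # Pooled proportion under the null hypothesis of no real difference
    p_pool = (success_a + success_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal CDF (via the error function)
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical: 70 of 100 chose A without context, 30 of 100 with context
z, p = two_proportion_z_test(70, 100, 30, 100)
print(f"z = {z:.2f}, p = {p:.2e}")
```

At that assumed sample size, the p-value is far below any conventional threshold, meaning the context itself, not random noise, is the most plausible explanation for the swing.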

Frequentist inference treats each experiment, and the observations made within it, as independently meaningful. If we consider the result from Test #1 to be statistically relevant, one could begin to make decisions based on that result alone. However, this sidesteps the controversial subject of statistical significance: without relevant context, the observational results are incomplete.
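This is exactly the trap: taken in isolation, Test #1 looks decisive. A sketch, again assuming a hypothetical 100 participants, shows that 70 votes for A would pass an exact binomial test against a 50/50 null with room to spare:

```python
from math import comb

def binomial_two_sided_p(successes, n):
    """Exact two-sided binomial test p-value against a symmetric 50/50 null."""
    # P(X >= successes) under Binomial(n, 0.5)
    tail = sum(comb(n, k) for k in range(successes, n + 1)) / 2 ** n
    return min(1.0, 2 * tail)  # symmetric null, so double one tail

p = binomial_two_sided_p(70, 100)
print(f"p = {p:.2e}")
```

The test would declare the preference for A "significant", yet Test #2 reverses it entirely once context is added. Significance within one framing says nothing about whether the framing was the right one.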

Frequentist inference, while widely accepted, is an incomplete way to look at test results when it comes to design and user experience. Context is a critical ingredient: it can be simple or complex, some contexts are more ubiquitous than others, and some tasks require more context about what is being measured.

Ultimately, the goal in user experience and design execution testing is to continue to ask better questions, recognizing that better questions, in the absence of the right audience, are not enough to shape better answers.

By building context, identifying the best audience, and measuring results, we can achieve the most accurate and statistically significant result.