A/B testing

A/B testing is a controlled experimentation method in which two versions of a design (Version A and Version B) are shown to different users to determine which performs better based on user interactions. When running an A/B test, vary only one element of the design at a time, so you can pinpoint exactly which change influenced the result.

FAQ:

What is A/B testing?
A/B testing is a method in which two different versions of a design are compared to identify the one that performs better based on user interactions.

Why is A/B testing important?
A/B testing eliminates guesswork by relying on real user data, ensuring that design decisions are grounded in empirical evidence and user preferences rather than opinion.

How does A/B testing work?
A/B testing changes a single design element in one version (B) while keeping everything else identical to the original version (A). User responses to each version are then compared to measure the impact of the change.
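The "compare user responses" step is usually a statistical significance check on the two conversion rates. Below is a minimal sketch using a two-proportion z-test with only the Python standard library; the function name and the sample numbers are illustrative, not from any specific tool.

```python
import math

def conversion_z_test(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test comparing conversion rates of versions A and B.

    conv_a / conv_b: number of conversions; n_a / n_b: number of visitors.
    Returns (z statistic, two-sided p-value).
    """
    p_a = conv_a / n_a
    p_b = conv_b / n_b
    # Pooled conversion rate under the null hypothesis (no real difference).
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF (via math.erf).
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical experiment: 200/5000 conversions for A vs. 260/5000 for B.
z, p = conversion_z_test(200, 5000, 260, 5000)
print(f"z = {z:.2f}, p = {p:.4f}")
```

A small p-value (conventionally below 0.05) suggests the difference between the versions is unlikely to be random noise; with few visitors, even a large-looking lift may not reach significance.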

What tools can I use for A/B testing?
Popular A/B testing tools include Optimizely and VWO (Visual Website Optimizer); Google Optimize was also widely used until it was discontinued in 2023. These tools help you set up experiments, collect data, and analyze results.

Additional Resources:

Explore reputable journals like the “Journal of Usability Studies” and the “International Journal of Human-Computer Interaction” for scholarly insights. Online platforms like the Interaction Design Foundation offer curated courses for deeper learning.

  • “A/B Testing: The Most Powerful Way to Turn Clicks Into Customers” by Dan Siroker and Pete Koomen
  • “You Should Test That” by Chris Goward
  • “Optimizely: 50 Scientifically Proven Split Test Winners” by Benji Rabhan
  • “Testing Business Ideas” by David J. Bland and Alexander Osterwalder
  • “The Lean Startup” by Eric Ries