The Expectation Effect (Part 1/3)

2026.02.04

The Invisible Parameter that shapes every User Experience

I have always designed for journeys. For years, I aimed for intuitiveness and simplicity, believing that if I "nudged" the user correctly with a well-designed journey, the result would be inevitable.

But then came the mystery.

10 years ago, I was working on a UI face-lift for a washing machine. Nothing radical - just cleaning up the interface. The logic and flow were mostly the same. I expected a smooth, uniform improvement. Instead, the results were mixed. Some users breezed through; others failed completely.

Why? The interface was the same for everyone. I realized that I couldn't explain the usability issues just by looking at the test results or the users' actions. To understand the failure, I had to look at what wasn't there. When I examined their recruitment profiles, it clicked: the users who failed were bringing "ghosts" from other machines - Gestalt principles and mental models formed by years of using a competitor's product.

I wasn't just designing a UI. I was fighting a memory.

The Prediction Engine

You are hallucinating right now.

Cognitive science tells us that the human brain is not a passive receiver of reality; it is an active Prediction Engine. According to the theory of Predictive Coding, the brain constantly generates top-down model predictions ("Priors") and uses sensory data merely to check for errors.

We do not see the world as it is. We see the world as we expect it to be. This goes deeper than just "System 1" intuition. This is about physical neural wiring.

  • Bayesian Integration: The brain statistically weighs its prior knowledge against new evidence. If the Prior is strong (e.g., "Buttons always look like this"), it can literally override visual input.
  • Gestalt Continuity: We perceive patterns where there are none, simply because our prediction engine hates gaps.
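The Bayesian point can be made concrete with a toy calculation. The sketch below is purely illustrative (the `fuse` function and its numbers are my own, not from any cognitive-science library): it combines a prior and an observation by precision weighting - the standard Bayesian rule for fusing two Gaussian estimates - and shows how a sharp prior swamps noisy evidence.

```python
def fuse(prior_mean, prior_var, obs_mean, obs_var):
    """Precision-weighted fusion of two Gaussian estimates.

    The estimate with the smaller variance (higher precision)
    dominates the result - a strong prior can all but override
    incoming sensory evidence.
    """
    w_prior = 1.0 / prior_var
    w_obs = 1.0 / obs_var
    mean = (w_prior * prior_mean + w_obs * obs_mean) / (w_prior + w_obs)
    var = 1.0 / (w_prior + w_obs)
    return mean, var

# A strong prior ("buttons always look like this") vs. weak new evidence:
mean, var = fuse(prior_mean=0.0, prior_var=0.1,  # confident expectation
                 obs_mean=1.0, obs_var=1.0)      # noisy new input
# fused mean is roughly 0.09: perception stays close to the prior
```

Swap the variances and the same arithmetic lets the evidence win - which is exactly the tug-of-war the bullet describes.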

In short, if the experience matches the prediction, the user's brain stays in System 1, riding a wave of dopamine. If not, it throws a prediction error, forcing the brain to switch to System 2 to debug. This spikes cognitive load and kills the dopamine loop.

This works great for regular User Experiences. But when designing an innovative User Experience, this creates a critical trade-off. We can't simply fall back on "Don't Make Me Think" or "Don't Reinvent the Wheel." Innovation requires prediction errors. The trick is understanding that every deviation is a tax. If you want to innovate, you must be willing to pay the metabolic cost.

The Glitch: When Reality Leaks

The reality is that our users approach our products with heads full of these "cached models" from the real world, old habits, and social norms. Standard user personas fail here. Knowing "Susan is 34 and likes coffee" describes demographics, not behavior. It doesn't tell me about the specific neural pathways she formed using a 1998 Jura. We need behavioral archetypes - patterns of how users have been trained by prior products, not just who they are. We aren't fighting demographics; we are fighting muscle memory.

When a user interacts with your product, they aren't judging it objectively. They are comparing it to a prediction you didn't even know they made.

Case Study: The 2012 Shift & The Broken Button

I once had a huge argument with an engineer about a touch interface. It was around 2012, right during the transition from Skeuomorphism to Flat Design. We were building a home appliance touch interface, and I hadn't provided a "pressed state" for a button.

The engineer argued - correctly - that it was web development best practice to give immediate feedback. But he was missing the System 1 context.

In the real world (light switches, doorbells), a mechanical action yields an instant reaction - typically under 100ms. That was the user's "Prior." Research on the Doherty Threshold shows that users perceive interactions as "immediate" only when response time stays under ~400ms. Beyond that, the illusion of responsiveness breaks.

Our embedded system was slow - well beyond that threshold. So even though the button visually "registered" the tap (feedback), the screen didn't move (latency). The users kept smashing the button, thinking it was broken.

They didn't expect a "pressed state." They expected the outcome. The button was just an impediment to their goal - they weren't pressing a screen, they were starting a wash. The "feedback" was technically correct, but the Prediction failed. The engineer was debugging the code; I realized we needed to debug the expectation.
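The mismatch in this story can be stated as a tiny decision rule. This is only an illustration (the constants and labels are mine, lifted from the numbers above - not from any standard API): a response time is classified against the user's ~100ms mechanical prior and the ~400ms Doherty threshold cited earlier.

```python
MECHANICAL_PRIOR_MS = 100    # real-world switches: action -> instant reaction
DOHERTY_THRESHOLD_MS = 400   # upper bound for "immediate" per the Doherty research

def perceived(response_ms):
    """Classify how a response time lands against the user's priors."""
    if response_ms <= MECHANICAL_PRIOR_MS:
        return "instant: matches the mechanical prior"
    if response_ms <= DOHERTY_THRESHOLD_MS:
        return "responsive: within the Doherty threshold"
    return "broken: prediction error, the user retries the tap"
```

Our embedded system lived in the third branch: the pressed state fired, but the outcome arrived too late, so the brain filed the whole interaction under "broken."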

Conclusion: The Invisible Parameter

Understanding this shifted my entire perspective. Managing expectations took on a completely different meaning. We aren't just setting clear paths or providing clear feedback; we are managing neurochemical prediction loops.

A few years back, I began to see the full picture. It wasn't just about System 1 vs. System 2, intuitiveness, or simple habits. It was about the Predictive Architecture of the mind - how memory, bias, and biology conspire to resist the new or different. User Experience is not just solving a user need or giving users what they want. It is the balance between what they need and what the product delivers, shaped by their expectations.

So, innovation becomes a balancing act against biology. We have to convince a brain wired for efficiency that the unknown is worth the energy.

But we often forget that this "biology" doesn't just apply to the user. It applies to us, and everyone sitting at the table - from the engineer to the stakeholder, long before the user can even get a glimpse of the product.

We are not objective observers. We are running on the same hardware, projecting our own "priors" onto the solution. We aren't just fighting the user's expectations; we are fighting our own.

This introduces a second layer of friction: The Designer's Duality.

(To be continued in Part 2)


Thank you for sticking with me, and I hope you enjoyed it.


© 2026 Luis Kobayashi