What the user never told you: designing AI for implicit signals

Lucas Semelin
9 min read
#artificial intelligence #AI architecture #B2B SaaS #product design #context engineering #trust design #AI strategy

In the previous post I showed how two architectural approaches to the same data sources produce completely different products. Traditional RAG versus a knowledge engine based on artifacts. The conclusion: the decision is architectural, not technical, and it gets made before code gets written.

That example assumed something that's almost never true in production: that all the relevant information about the user is declared. The customer said "I'm lactose intolerant," the system knows, done.

In reality, users almost never declare what you need to know. They demonstrate it through behavior.

And that's where the most interesting question in the architecture of an AI feature shows up: what does the system do with what the user never told it?


The Setup

Go back to the Starbucks assistant from the previous post. Same domain, same catalog, same stores. But now a different situation:

The customer has opened the app 9 times in the last 2 months. In 7 of those 9 visits they ordered lactose-free drinks — almond milk, oat milk, coconut milk. They never said "I'm lactose intolerant." They never marked a dietary flag. There is nothing in their profile that declares it.

Today they open the app and type:

"What do you recommend for this afternoon, something sweet."

The system has the information. The behavior is in the order history. But the dietary flag is empty.

What should the assistant do?


Three options, none trivial

Option 1: Ignore what's undeclared. The system assumes that if the user didn't mark an intolerance, they don't have one. It recommends the caramel frappuccino with whole milk. The customer, who's been avoiding lactose for months, ignores it. Lukewarm trust: the AI "doesn't get me."

Option 2: Silently assume. The system infers from history and recommends only lactose-free drinks, without saying anything. The customer gets recommendations that match their preference but never understands why. If a dairy-based option appears one day, they can't tell whether it was a system oversight or a change in criteria. Fragile trust: the AI "sometimes knows, sometimes doesn't."

Option 3: Infer and make it explicit. The system detects the pattern, prioritizes lactose-free options while labeling them as such, and at some point asks the user if they want to declare the preference. If they say yes, it gets saved. If they don't respond or say no, the system keeps inferring but doesn't insist. Solid trust: the AI "gets me and explains why."

Most AI products live in option 1 or 2. Option 3 is what you want. And option 3 doesn't get built with a better prompt.


The Architectural Problem

Option 3 requires that something exist in the domain that doesn't exist in option 1 or 2: an artifact that represents what the system knows about the user without the user having declared it.

In the previous post we had CustomerProfile. It was a flat artifact, with declared flags:

interface CustomerProfile {
  user_id: string
  dietary_flags: string[]      // ["lactose_intolerant", "vegan"]
  // ...
}

For option 3 to be possible, we need a different artifact — separate, with a different nature:

interface InferredPreferences {
  user_id: string
  signals: {
    name: string                 // "lactose_avoidance"
    confidence: number           // 0.0 - 1.0
    evidence_count: number       // 7 of last 9 orders
    first_observed: string       // ISO date
    last_observed: string        // ISO date
    declared: boolean            // false until user confirms
    asked_count: number          // how many times we've asked
    last_asked: string | null
  }[]
  refresh_policy: "after_each_transaction"
}

Notice the separation. CustomerProfile.dietary_flags is what's declared. InferredPreferences.signals is what's inferred. They are two different things and the system treats them differently.

That separation looks pedantic, but it's the architectural decision that makes everything else possible.
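
To make that concrete, here's a minimal sketch of how a signal like lactose_avoidance could be derived from order history. The Order type, its dairy_free flag, and the naive confidence formula are illustrative assumptions, not part of the artifact definition above:

// Sketch only: deriving an inferred signal from order history.
// `Order` and `dairy_free` are assumptions made for illustration.
type Signal = InferredPreferences["signals"][number]

interface Order {
  placed_at: string    // ISO date
  dairy_free: boolean  // almond, oat, coconut milk, etc.
}

// Assumes `orders` is sorted oldest to newest.
function inferLactoseAvoidance(orders: Order[]): Signal | null {
  const matches = orders.filter(o => o.dairy_free)
  if (matches.length === 0) return null
  return {
    name: "lactose_avoidance",
    // Naive confidence: share of recent orders matching the pattern.
    // A production system would also weight recency and sample size.
    confidence: matches.length / orders.length,  // 7 of 9 ≈ 0.78
    evidence_count: matches.length,
    first_observed: matches[0].placed_at,
    last_observed: matches[matches.length - 1].placed_at,
    declared: false,
    asked_count: 0,
    last_asked: null,
  }
}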


What Changes When the Artifact Exists

Once InferredPreferences exists in the domain, the same user query produces radically different behavior.

When "what do you recommend for this afternoon, something sweet" comes in, the system runs a typed query that uses both artifacts together:

{
  "ask": "Recommend up to 3 sweet drinks, prioritizing matches with high-confidence inferred preferences, explicitly marking which user signal each recommendation satisfies",
  "filter": {
    "user_id": "customer_123",
    "product.tags.contains": "sweet",
    "store.user_proximity_meters": { "<=": 2000 },
    "store.active_skus.contains": "{product.sku}"
  },
  "ranking": {
    "primary": "matches_inferred_signals(confidence > 0.7)",
    "secondary": "average_rating"
  },
  "shape": [{
    "name": "string",
    "recommended_size": "string",
    "signal_matched": "string | null",
    "signal_confidence": "number | null"
  }]
}

And the response isn't just a list of drinks. It's an annotated list, with the context for why each one was chosen:

"I recommend: — Iced almond milk latte (lactose-free · sweet) — Oat milk caramel macchiato (lactose-free · sweet) — Strawberry açaí refresher (dairy-free · sweet) I noticed your last orders were lactose-free. Want me to always prioritize that? Yes / No / Later"

Three things happened in that response that couldn't have happened without the artifact:

  1. The system prioritized with a verifiable reason. Not "I think you want lactose-free," but a claim backed by the artifact: confidence: 0.78, evidence_count: 7.
  2. The UI labeled "lactose-free" as an attribute of each recommendation. Not as a general claim, but as a property of the product that matches a specific user signal.
  3. The system asked at the right moment. Not the first time. Not every time. When confidence crossed a threshold and it hadn't asked yet (sketched below).
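
When exactly the system asks is a policy, not a vibe. Here's a minimal sketch of that check, reusing the Signal type from the earlier sketch; the specific threshold values are illustrative assumptions, not recommendations:

// Sketch only: the ask/don't-ask decision as an explicit policy check.
const ASK_THRESHOLD = 0.7     // assumed: ask above this confidence
const MAX_ASKS = 2            // assumed: stop prompting after this many tries
const REASK_AFTER_DAYS = 30   // assumed: cool-down when the user ignores us

function shouldAsk(signal: Signal, today: Date): boolean {
  if (signal.declared) return false                    // already confirmed
  if (signal.confidence < ASK_THRESHOLD) return false  // not sure enough yet
  if (signal.asked_count >= MAX_ASKS) return false     // don't nag
  if (signal.last_asked !== null) {
    const msPerDay = 1000 * 60 * 60 * 24
    const daysSince =
      (today.getTime() - new Date(signal.last_asked).getTime()) / msPerDay
    if (daysSince < REASK_AFTER_DAYS) return false     // still cooling down
  }
  return true
}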

If the user answers "yes," the signal flips to declared: true and lactose_intolerant is added to CustomerProfile.dietary_flags: the signal is promoted from inferred to declared. If they answer "no," the signal is nullified. If they ignore the question, the system keeps using the inference but won't ask again for a defined period.
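
Those three outcomes are plain state transitions on the artifact. A sketch, assuming in-place mutation, a hypothetical mapping from the signal to the lactose_intolerant flag, and one possible reading of "nullified":

// Sketch only: the yes / no / ignore transitions on the artifact.
type Answer = "yes" | "no" | "later"

function applyUserResponse(
  answer: Answer,
  signal: Signal,
  profile: CustomerProfile,
  today: Date
): void {
  if (answer === "yes") {
    // Promote: the inference becomes a declared fact.
    signal.declared = true
    profile.dietary_flags.push("lactose_intolerant")  // assumed signal-to-flag mapping
  } else if (answer === "no") {
    // Nullify: stop acting on this inference (one possible interpretation).
    signal.confidence = 0
  } else {
    // Ignored: keep inferring, but start the re-ask cool-down.
    signal.asked_count += 1
    signal.last_asked = today.toISOString()
  }
}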


The Deeper Point

Everything you just read is a series of architectural decisions that look like UX decisions.

  • When does the system infer and when does it only use what's declared? Policy on the artifact.
  • At what confidence threshold does the system act silently versus ask? Threshold defined in the domain.
  • How are recommendations labeled so the user understands why the system chose them? Shape of the query response.
  • How many times, and with what spacing, do we ask? Artifact policy, not modal copy (see the sketch below).
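
Concretely, those policies can live as a typed object next to the artifact, so they're versioned and reviewable like any other domain decision. The field names and values here are illustrative assumptions:

// Sketch only: policy as a first-class, reviewable domain object.
interface SignalPolicy {
  act_silently_above: number    // rank with the inference, no prompt
  ask_to_declare_above: number  // surface the "always prioritize?" question
  max_asks: number              // stop prompting after this many ignores
  reask_after_days: number      // cool-down between prompts
  refresh: "after_each_transaction" | "daily"
}

const lactosePolicy: SignalPolicy = {
  act_silently_above: 0.5,
  ask_to_declare_above: 0.7,
  max_asks: 2,
  reask_after_days: 30,
  refresh: "after_each_transaction",
}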

None of these decisions get made when designing the "are you lactose intolerant?" modal. They get made earlier, when someone decides what entities exist in the domain and what policies govern them.

If those decisions don't get made by someone, the model makes them at query time, probabilistically, differently every time. And that's where user trust starts degrading without anyone knowing exactly why.


How This Translates to B2B SaaS

Think about any B2B SaaS product. Every product makes inferences about users and entities all the time, whether it declares them or not:

  • A CRM infers which leads are "hot" based on behavior (open rates, clicks, replies). It almost never exposes that as a separate artifact. It buries it in an opaque score.
  • A PM tool infers which projects are "at risk" based on velocity, comments, missed dates. The inference exists, but not as a queryable entity with explicit confidence.
  • A support tool infers which tickets are "urgent" based on language, customer history, component criticality. It surfaces a calculated priority, without separating signal from declaration.

When you add AI to these products, those inferences become critical. The AI can't recommend smart actions if it treats the product's inferences the same as declared data. And it can't explain to the user why it made a decision if it doesn't have an artifact that separates "what we know because they told us" from "what we infer from their behavior."

The architecture gives you two options:

Path A: inferences live implicitly in queries and models. The AI regenerates them every time. Inconsistent, opaque, not auditable.

Path B: inferences live as typed artifacts with explicit confidence and refresh policies. The AI queries them. The UI represents them. The user can declare, override, or ignore.
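
To see Path B outside the coffee example, here's the same pattern applied to the CRM case above. This is a hypothetical shape for illustration, not any vendor's actual schema:

// Sketch only: a CRM "hot lead" inference as a typed, queryable artifact.
interface InferredLeadSignals {
  lead_id: string
  signals: {
    name: string            // "high_intent"
    confidence: number      // 0.0 - 1.0
    evidence: string[]      // ["opened 5 of last 6 emails", "replied twice"]
    declared: boolean       // true once a rep confirms the stage
    last_observed: string   // ISO date
  }[]
  refresh_policy: "on_engagement_event"
}

The evidence array is what makes the audit argument work: when a user asks why a lead was prioritized, the AI can cite it verbatim instead of gesturing at an opaque score.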

For B2B SaaS, where users need to understand why the system recommended something — audit, compliance, or just professional trust — Path B wins. Almost always.


What to Do About It

If you're building or auditing an AI feature in B2B SaaS, on top of the steps from the previous post:

  1. List the inferences your product already makes implicitly about users and entities. Every app makes them; making them visible is half the work.
  2. For each inference, decide: should this be a queryable artifact with confidence, or stay as an implicit calculation?
  3. For each artifact, define: refresh policy, threshold for silent action, threshold for asking the user to declare, re-asking policy when the user ignores.
  4. Design the UX as a consequence of those thresholds, not the other way around.

The product that distinguishes between what the user declared and what the system inferred, with explicit confidence, is the product users trust enough to use every day.

The one that blurs the two — or worse, the one that infers silently without separating — is the product users thank for recommendations when they're right, and abandon when they're not.

The difference isn't the model. It's the artifact you decided, or didn't decide, to add to the domain.


If you're working on an AI feature in your B2B SaaS product and want to think through these decisions — what inferences your system should make explicit, with what thresholds, with what UX — before or after committing to an approach, that's the work I do. Let's talk →
