Nest: Designing Trust for Tokenized Assets

How to make seven-figure investors trust DeFi vaults.

Context & Problem

Most people think the hard part of bringing real-world assets onchain is the legal and technical plumbing. That's true, but it hides a second problem that is just as important: trust.

Shortly after I joined Plume, we designed and built the first version of Nest from scratch in about a month. It worked — money could be staked, vault tokens minted, yield accrued. But the experience was fragile. An investor opening the app saw numbers on a screen without any clear sense of where they came from, or what they meant. APY looked like magic. Liquidity wasn’t even in the picture. And when numbers went stale or turned red, the instinct was to hide them instead of explain them.

That may sound cosmetic, but in practice it was existential. You can’t expect someone to move $2M into a vault if the interface feels like a slot machine. The product lacked what I came to call a trust surface — a way to show not just the outputs, but the underlying mechanics, in a way that investors could understand and verify for themselves.

This was the problem I set out to solve.

Research & Insights

Good design work doesn’t happen in isolation. Especially not here. What I learned quickly at Plume was that Nest wasn’t just a design problem — it was a capital problem, a compliance problem, and a technical problem.

The capital team taught me how LPs think. They don’t ask “what’s the APY?” They ask: what happens if I put $2M here, and I need it back in two weeks? That single question reframed how I thought about the interface. Every screen needed to reflect not just returns, but timelines, liquidity, and risk — the mental models LPs actually use to make decisions.

The compliance team brought a different lens. For them, every number on a screen had to stand up to the standard of an institutional-grade disclosure. That meant no hand-waving, no vague promises. The output had to look and feel like a financial product that could survive scrutiny from regulators and institutions alike.

The engineering team made sure it all stayed real. With them, collaboration meant iterating continuously, asking: can this number be indexed reliably? Can this chart run directly from onchain data? Where are we still dependent on third-party vault operators? Those conversations forced hard choices: sometimes we’d simplify a visualization until infra caught up, other times we’d build entirely new technical pathways so Nest could deliver what the product required.

The result of all this wasn’t just “designing screens.” It was translating three very different worldviews into one shared interface — one that LPs, regulators, and engineers could all recognize as truthful and reliable.

Design Strategy

From the research came a set of principles. They weren’t abstract “design values.” They were practical rules that let us translate capital psychology, compliance standards, and engineering constraints into a single product language. Six principles, in order:

  1. Anchor everything in verifiability. The rule to rule them all: if a number can’t be tied to onchain data or a defensible offchain source, it doesn’t go on screen. No black boxes. No magic. Just numbers with receipts.

  2. Show the mechanics, not just the outputs. APY, liquidity, redemption times — these aren’t final scores, they’re the result of moving parts. Instead of hiding the machinery, we put it in view: liquidity buckets, price updates, redemption queues. A number should feel earned, not arbitrary.

  3. Design for institutional scrutiny. Compliance pushed us here. Every visual doubles as a disclosure. Plain language, explicit methods, no hidden footnotes. Screens should feel less like marketing, more like a fact sheet regulators could nod at.

  4. Treat red numbers as signals, not embarrassments. Capital psychology reframed failure states. A dip in performance isn’t a bug to hide, it’s a story to explain. Red numbers build trust when they make sense in context.

  5. Trust is a surface, not a checkbox. You don’t convince someone with one chart or one number. Trust is cumulative — a feeling that grows as you scroll through something that looks and feels real. Each chart, disclosure, and data point adds to the surface area of trust. That’s the true product we were building.

  6. Deliver trust at varying depths. Not every user wants the same depth. Some LPs want a quick glance, others want to audit line by line. The principle was progressive disclosure: a surface that works at any level, fast trust for the casual, deep trust for the skeptical.

Featured Design Solutions

The principles only matter if they change what you put on the screen.

Two feature areas in particular became the crucible for testing everything I just described: performance and redemptions.

Performance

LPs don’t just want a headline APY. They want attribution. If they put $2M into a vault, they expect a straight answer: where is that money going, who’s managing it, and how is it being put to work?

Most products punt. They call a folder of PDFs “transparency.” But PDFs aren’t attribution. They’re homework.

And with tokenized assets the problem gets worse. There are no standards. Each issuer tokenizes differently, with its own quirks and disclosure formats. That put Nest at the edge of the industry: absorbing that fragmentation so the experience feels unified for users, while nudging issuers toward more standardized practices over time.

Stage 1: Estimates (the bootstrap). We launched with static estimated APYs and a clean table of vault composition. On paper it worked. In reality, early vaults were volatile — assets being deployed for the first time, allocations shifting week to week. The table looked precise but was often misleading. We pulled it.


Stage 2: Composition Cards (context over false precision). We replaced the fragile table with asset-level cards. Each card showed estimated weight, but more importantly, richer context: who the third-party custodian was, the risk band, the underwriter. It was a way to build confidence without pretending to have more accuracy than we actually did.
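To make the idea concrete, here is a minimal sketch of what a composition card might carry and the sanity check a view like this needs before rendering. The field names, assets, and tolerance are illustrative assumptions, not Nest's actual schema:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class CompositionCard:
    """One asset-level disclosure card in a vault's composition view.

    Field names are illustrative, not Nest's real schema.
    """
    asset: str          # e.g. a tokenized treasury fund
    est_weight: float   # estimated share of vault TVL, 0..1
    custodian: str      # third-party custodian of the underlying
    risk_band: str      # e.g. "AAA-equivalent", "investment grade"
    underwriter: str    # who underwrites / issues the asset

def validate_composition(cards: list[CompositionCard], tol: float = 0.02) -> bool:
    """Estimated weights should sum to ~100% before the cards render."""
    return abs(sum(c.est_weight for c in cards) - 1.0) <= tol

cards = [
    CompositionCard("Tokenized T-Bill Fund", 0.55, "Custodian A", "AAA-equivalent", "Issuer A"),
    CompositionCard("Private Credit Pool", 0.45, "Custodian B", "investment grade", "Issuer B"),
]
```

The point of the estimate-plus-tolerance shape is honesty: the card claims a rough weight with context around it, rather than a decimal-precise figure the data can't support.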


Stage 3: Rolling APYs (measured, not promised). As vaults matured, performance history accumulated. That allowed us to graduate from static estimates to live rolling APYs — numbers grounded in actual outcomes, not targets. Now investors could see what the vault had earned, not just what it hoped to earn. (In dev; not yet public.)


Stage 4: Yield Attribution (from numbers to sources). Even with rolling APYs, LPs pushed further: show me where the yield comes from, asset by asset, month by month. That demand led us to rethink attribution itself. Instead of a single magic number, we broke it down: here’s the dollar contribution of each underlying asset to the total yield. (Planned and specced; not yet public.)
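The arithmetic behind that breakdown is a straightforward decomposition: each asset's dollar contribution for a period is its average balance times its period return, and the contributions sum to the vault's total yield. A sketch with hypothetical inputs (a real pipeline would reconcile these against onchain accruals):

```python
def attribute_yield(positions: dict[str, tuple[float, float]]) -> dict[str, float]:
    """Dollar yield contribution per asset for one period.

    `positions` maps asset -> (avg_dollar_balance, period_return).
    Inputs here are illustrative; names are not Nest's schema.
    """
    return {asset: balance * r for asset, (balance, r) in positions.items()}

positions = {
    "T-Bill Fund": (1_100_000.0, 0.0045),    # ~5.4% annualized, held one month
    "Private Credit": (900_000.0, 0.0080),
}
by_asset = attribute_yield(positions)
total_yield = sum(by_asset.values())
```

Because the per-asset terms sum exactly to the headline number, an LP can check the "single magic number" against its parts, asset by asset, month by month.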


Where it lands in the product. Attribution moved APY from faith to evidence. Investors can now scroll from a high-level performance metric down to the actual assets driving it — each one visible, contextualized, and traceable. The result isn’t just a prettier chart. It’s the shift from trust us to see for yourself. That’s the line between marketing and observability.

Redemptions

Most people think redemptions are about buffers and over‑collateralization. Ours worked well from day one because the assets were high quality. The real work was explaining why they worked — in a way that matched how investors actually think (“when do I get my money back?”) without hand‑waving.

Stage 1: Cooldown + Buffer (ship fast, set expectations). We launched with a simple operational rule: a short cooldown, plus a modest stablecoin buffer. It wasn’t there to “guarantee” exits so much as to give a clear mental model while we proved out the pipelines. The UI said what it meant: redemptions are processed in tranches; most settle faster, but plan for a short wait.
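The tranche mechanic the UI described can be sketched as a FIFO queue drained against the buffer, with the cooldown gating eligibility. The cooldown length, amounts, and partial-fill rule here are illustrative assumptions, not Nest's actual parameters:

```python
from collections import deque

COOLDOWN_DAYS = 2  # illustrative cooldown, not the real parameter

def process_tranche(queue: deque, buffer_usd: int, today: int):
    """Fulfill eligible redemption requests FIFO until the buffer is spent.

    Each request is (request_day, amount_usd). Returns (fulfilled, remaining_buffer).
    A partially filled request goes back to the front of the queue.
    """
    fulfilled = []
    while queue and buffer_usd > 0:
        day, amount = queue[0]
        if today - day < COOLDOWN_DAYS:
            break  # FIFO: nothing behind this request is eligible yet either
        queue.popleft()
        pay = min(amount, buffer_usd)
        fulfilled.append((day, pay))
        buffer_usd -= pay
        if pay < amount:
            queue.appendleft((day, amount - pay))  # remainder waits for the next tranche
            break
    return fulfilled, buffer_usd

queue = deque([(0, 500_000), (0, 300_000), (9, 100_000)])
fulfilled, remaining = process_tranche(queue, buffer_usd=600_000, today=3)
```

The mental model matches the copy: most requests settle in an early tranche, but a large request can spill into the next one, so "plan for a short wait."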


Stage 2: Liquidity Profiles (earn while staying liquid). Buffers don’t scale, and idle stablecoins don’t earn. As we built performance history and gained confidence in vault composition, we replaced static buffers with depth across highly liquid, yield‑bearing assets (think near‑instant T+0/T+1 instruments at ~4–5%). The interface shifted to explicit liquidity buckets — Immediate, Short, Medium — tied to targeted allocation ranges and the specific assets that back them. The message: we don’t “sit” on cash; we hold a ladder of liquid assets designed to meet redemptions at scale and earn in the meantime. (Planned and specced; not yet public.)
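The bucket structure can be sketched as a small policy table plus a check of live allocations against it. The settlement speeds and target ranges below are illustrative; real ranges are per-vault policy decisions:

```python
# Illustrative bucket policy, not Nest's actual targets.
BUCKETS = {
    "Immediate": {"settles": "T+0",  "target": (0.05, 0.15)},
    "Short":     {"settles": "T+1",  "target": (0.20, 0.35)},
    "Medium":    {"settles": "T+5+", "target": (0.50, 0.75)},
}

def bucket_status(allocations: dict[str, float]) -> dict[str, str]:
    """Flag each liquidity bucket as in, below, or above its target range."""
    status = {}
    for name, cfg in BUCKETS.items():
        lo, hi = cfg["target"]
        share = allocations.get(name, 0.0)
        if lo <= share <= hi:
            status[name] = "in range"
        else:
            status[name] = "below" if share < lo else "above"
    return status

status = bucket_status({"Immediate": 0.10, "Short": 0.15, "Medium": 0.75})
```

Rendering the in-range/out-of-range state directly is what turns the allocation from a promise into a disclosure: an out-of-range bucket is shown and explained, not hidden.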


Stage 3: Redemption History (evidence over assurances). This step only clicked once the team aligned on the core idea: redemptions are a derivative of the underlying assets’ redemption capacity. But investors don’t buy a theory; they ask one simple question — how long from request to fulfillment? In finance you can’t promise a date, so the next best thing is a track record. We prioritized indexing so we can show months of redemption history for each vault: queue depth, pending amounts, fulfillment times, and how those times vary with the size of a request. Not a guarantee, a baseline. (Planned and specced; not yet public.)
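The "track record" view reduces to a simple aggregation once the history is indexed: group past redemptions by request size and report typical fulfillment time per band. The size bands and data shape are illustrative assumptions:

```python
from statistics import median

def fulfillment_baseline(history: list[tuple[int, int]],
                         bands: tuple[int, int] = (100_000, 1_000_000)):
    """Median request-to-fulfillment time in days, grouped by request size.

    `history` is (amount_usd, days_to_fulfill) per completed redemption.
    Bands are illustrative, not Nest's actual cutoffs.
    """
    grouped = {"small": [], "mid": [], "large": []}
    for amount, days in history:
        if amount < bands[0]:
            grouped["small"].append(days)
        elif amount < bands[1]:
            grouped["mid"].append(days)
        else:
            grouped["large"].append(days)
    return {band: (median(times) if times else None) for band, times in grouped.items()}

history = [(50_000, 1), (80_000, 2), (500_000, 3), (2_000_000, 7)]
baseline = fulfillment_baseline(history)
```

This is exactly the hedge the text describes: not a guarantee of a date, but an observed baseline that varies with request size.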


Where it lands in the product. At the moment you request a redemption, you see a plain‑English estimate driven by three inputs: current queue state, today’s liquidity mix, and historical fulfillment for similarly sized requests. Up top: a single ETA. Below: the “why” — live composition by liquidity bucket, recent redemption throughput, and the tranche you’ll likely settle in. No magic, no fine print; just an answer you can verify if you care to look.
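Combining the three inputs into one ETA can be sketched as follows. The blending rule (take the slower of queue-based and history-based estimates, so the answer errs conservative) and all parameter names are assumptions for illustration, not the production logic:

```python
def redemption_eta(request_usd: float, queue_ahead_usd: float,
                   immediate_liquidity_usd: float, daily_throughput_usd: float,
                   historical_days_for_size: float) -> float:
    """Estimated days to fulfillment from queue state, liquidity mix, and history.

    A simplified sketch: real inputs would come from an indexer.
    """
    if queue_ahead_usd + request_usd <= immediate_liquidity_usd:
        # The request plus everything ahead fits in the fastest bucket.
        queue_days = 0.0
    else:
        # Otherwise, estimate time to drain the queue at recent throughput.
        queue_days = (queue_ahead_usd + request_usd) / daily_throughput_usd
    # Err conservative: never quote faster than the historical baseline
    # for similarly sized requests.
    return max(queue_days, historical_days_for_size)

eta_small = redemption_eta(100_000, 0, 500_000, 250_000, historical_days_for_size=1)
eta_large = redemption_eta(1_000_000, 500_000, 200_000, 500_000, historical_days_for_size=2)
```

Each input maps to one of the "why" panels below the ETA, which is what makes the single number verifiable for anyone who cares to look.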

Impact

The proof is in the shift of behavior — both inside the company and outside with investors.

Inside, trust surfaces changed how the whole team thought about the product. Before, red numbers were something to hide, tables were something to hand-wave, and every discrepancy was brushed off as “ops still catching up.” After, the instinct was reversed: if something’s red, we show it and explain it. If data is fragmented, we design the disclosure that forces the conversation. Transparency stopped being marketing jargon and became an operational standard.

Outside, the difference was even sharper. Vault pages went from feeling like a slot machine to feeling like a Bloomberg Terminal. Investors who would never put seven figures into a DeFi vault started to take the calls seriously, because the product looked serious. The capital team began pitching “financial observability” as a differentiator, not just “yield.” That was the point: design made the product legible, and legibility made it credible.

This wasn’t just UX polish. It shifted the psychology of the users and the psychology of the builders. And in crypto — where trust is always the bottleneck — that’s the difference between noise and signal.

Reflection

A few things I learned building Nest:

  1. Designing for finance is designing for trust. If the numbers can’t be verified, they might as well not exist. Investors will always assume the worst.

  2. Screens force standards. By deciding what you do or don’t display, you pull technical and capital teams toward standardization. The UI becomes the negotiation table.

  3. Transparency is asymmetric. The same module that reassures a $5M allocator can also make a $500 retail user feel safe. Progressive disclosure isn’t a nicety; it’s the bridge.

  4. Good design is a forcing function. Once trust surfaces existed, you couldn’t ship a vault without them. Screens raised the bar — for engineers, for issuers, for everyone.

Nest is moving from “a vault product” to “a capital observability platform.” Each new trust surface compounds into a larger surface, until the product itself becomes the market standard for how tokenized assets are understood. That’s the real design win: not just prettier screens, but a new baseline for what investors expect from the category.

Visit Nest at app.nest.credit.