I sent an email to 15 of my investors. Same words, wildly different reactions (though I will admit, most of them didn't reply; that's normal.) Every piece of content has two versions: what it says, and what the reader hears.
This is not about how to communicate with investors. If you are interested in that, boy do I have a lot to say (across private companies, small and large cap listed companies, funds, etc.)
This is about something I call "Perception Engineering": tweaking the way something is said, while staying faithful to the original message, to maximise how it resonates with specific individuals.
The term appears in research, but in a wildly different context (mostly computer vision.) I started thinking about this idea after spending an inordinate amount of time exploring how (financial) content can resonate better with people. I was also inspired by work such as "LLMs speaking with confidence", Shiller's Narrative Economics and the "era of experience". I love David Silver's explanation of the problem with RLHF: a consumer can ask ChatGPT for a cooking recipe and reply "that's great!" without ever trying the recipe, so the models are incentivised to give responses that look like something the user wants to see rather than responses that actually work.
That works, up to a point. But imagine how much better it gets with real feedback from the environment. Getting that feedback is difficult for content, since public experimentation is slow and embarrassing. This is precisely the gap perception engineering tries to close. Instead of waiting for real feedback that may never come (or comes too late), we simulate the audience's reaction and iterate before publishing.
Can we simulate and optimise content before it ships?
Two levels of optimisation
I think this exists on two levels:
- one-to-many: optimise for an audience. Think social media — same post, many people, find what works on average.
- one-to-one: optimise for an individual. Think DMs - tailor the message to exactly who's reading it.
I have been most interested in the latter, and in what I believe true personalisation looks like. This is how we can supercharge current generalised models to give the best output every time, and give users that sci-fi level of personalisation. Taken to the nth degree, you could view this through both a cynical lens (ads) and an optimistic lens (perfect answers).
For one-to-one content, specifically things like DMs, one idea is Monte Carlo, game-style simulation -- simulate many paths the conversation could take and pick the one that leads to the best outcome. A fundraising investor call, or an interview, is a great example. As in chess, what you say or reveal at a certain point in time will influence the course of the conversation, and you are really trying to optimise for a few things (such as closing the sale, or getting a second meeting.)
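To make that concrete, here is a minimal sketch of what such a rollout could look like, under loud assumptions: `simulate_reply` is a stand-in for an LLM role-playing the investor, `score_outcome` is a stand-in for whatever you actually care about (a second meeting, a term sheet), and the canned reactions exist only so the sketch runs on its own.

```python
import random

# Candidate things you could say at this point in the conversation.
CANDIDATE_MOVES = [
    "Lead with the revenue growth numbers",
    "Lead with the strength of the team",
    "Ask about their fund's thesis before pitching",
]


def simulate_reply(conversation: list[str]) -> str:
    """Placeholder for an LLM call that role-plays the investor.

    A real version would be a chat-completion request conditioned on a
    persona profile; this stub samples canned reactions so the sketch runs.
    """
    return random.choice(["interested", "sceptical", "asks for a deck", "ends call"])


def score_outcome(conversation: list[str]) -> float:
    """Placeholder scorer: did the rollout end somewhere useful?"""
    last = conversation[-1]
    return {"asks for a deck": 1.0, "interested": 0.7, "sceptical": 0.2, "ends call": 0.0}[last]


def rollout(opening: str, depth: int = 3) -> float:
    """Play one simulated conversation forward from a chosen opening.

    A fuller version would also choose your next message at each turn;
    this sketch only simulates the investor's side.
    """
    conversation = [opening]
    for _ in range(depth):
        conversation.append(simulate_reply(conversation))
    return score_outcome(conversation)


def best_opening(n_rollouts: int = 200) -> str:
    """Monte Carlo: average the outcome of many rollouts per candidate move."""
    scores = {
        move: sum(rollout(move) for _ in range(n_rollouts)) / n_rollouts
        for move in CANDIDATE_MOVES
    }
    return max(scores, key=scores.get)


if __name__ == "__main__":
    print(best_opening())
```

The point is not the toy scorer; it is that the expensive, embarrassing experimentation happens in simulation, and only the winning opening ever reaches a real inbox.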
A concrete example: financial headlines
Take something like Tesla's forward P/E ratio being higher than Ford's. Same fact, but watch how the framing changes everything:
- Raw data: "Tesla Inc's Forward P/E Ratio Significantly Higher Than Ford Motor Co. and General Motors Company"
- For a beginner: "Tesla's forward P/E ratio suggests investors expect high future earnings growth compared to Ford and GM"
- For an experienced investor: "Tesla's high forward P/E ratio suggests overvaluation risk; recent volatility underscores the need for cautious, diversified investment strategies"
The beginner version anchors you to growth expectations. The experienced version anchors you to risk. Both are true, but one might make you buy, the other might make you hedge.
This is where it gets interesting from a bias perspective. Anchoring is particularly dangerous in finance — the first number you see disproportionately influences your judgement. If I show you "Tesla up 40% this year" before showing you valuation metrics, you're already primed to see the high P/E as justified. If I show you "Tesla down 20% from ATH" first, you're primed to see it as overvalued.
The same information, reordered and reframed, can trigger completely different decisions. A retail investor panic-selling during a dip isn't irrational — they're responding rationally to how the information was presented to them. Change the presentation, change the behaviour.
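Here is a rough sketch of how that persona-conditioned framing and anchor ordering could be wired up. The prompt wording, the reader profiles and the `order_by_anchor` helper are illustrative assumptions rather than a tested recipe, and the actual LLM call is deliberately left out.

```python
# The fact stays fixed; only the framing and the ordering change per reader.
FACT = "Tesla's forward P/E ratio is significantly higher than Ford's and GM's."

READER_PROFILES = {
    "beginner": "new to investing, responds to growth stories, low jargon tolerance",
    "experienced": "values risk management, sensitive to valuation and drawdowns",
}


def framing_prompt(fact: str, profile: str) -> str:
    """Build a rewrite prompt that keeps the fact intact but adapts the frame.

    The wording here is an assumption, not a known-good recipe; in practice
    this string would be sent to an LLM (call omitted).
    """
    return (
        "Rewrite the following financial fact for a reader who is "
        f"{profile}. Do not change or omit the fact itself, only the framing, "
        "tone and emphasis. Keep it to one sentence.\n\n"
        f"Fact: {fact}"
    )


def order_by_anchor(datapoints: list[str], anchor: str) -> list[str]:
    """Put the chosen anchor first; the first number a reader sees primes the rest."""
    return [anchor] + [d for d in datapoints if d != anchor]


if __name__ == "__main__":
    for name, profile in READER_PROFILES.items():
        print(f"--- {name} ---")
        print(framing_prompt(FACT, profile))

    datapoints = ["Tesla up 40% this year", "Tesla down 20% from ATH", FACT]
    print(order_by_anchor(datapoints, anchor="Tesla down 20% from ATH"))
```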
Hurdles and assumptions
This whole idea rests on some bold assumptions. They compound on each other.
Foundational
1. You can measure if content actually worked (effectiveness)
2. You can trace what someone read to what they did next (attribution)
The personalisation thesis
3. Generic content is hard to optimise — you can't predict what will make a diverse audience act
4. Even perfectly-crafted generic content hits a ceiling. Beyond that, you need to personalise
5. Segment-level personalisation outperforms one-size-fits-all
6. One-to-one personalisation beats everything
Betting on simulation
7. You can predict, with specific metrics from a simulation environment, how content will perform before it ships
8. LLMs can simulate how a specific person might react
9. You can use that simulated feedback to iterate and improve
10. The improvement is significant enough to justify the complexity
If assumptions 7–10 hold, “publishing” might start to look different: the same idea, rendered a little differently for each reader. You send somebody an article (maybe even this blog post), or forward them an email, and they are actually not reading the same words as you.
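As a hedged sketch of what assumptions 7–10 would look like as a loop: a simulated reader scores a draft, the draft is revised against the critique, and iteration stops once a (simulated) bar is cleared. `simulate_reader` and `revise` are stand-ins for LLM calls, and the scoring heuristic is a toy so the example actually runs.

```python
from dataclasses import dataclass


@dataclass
class SimulatedReaction:
    """What the simulated reader sends back: a score and a critique."""
    score: float   # predicted likelihood of the desired action, 0..1
    critique: str  # what the simulated reader pushed back on


def simulate_reader(draft: str, persona: str) -> SimulatedReaction:
    """Placeholder for assumption 8: an LLM role-playing a specific reader.

    A real version would prompt a model with the persona and the draft and
    parse a structured reaction; this stub just rewards shorter drafts so
    the loop below terminates.
    """
    score = max(0.0, 1.0 - len(draft) / 1000)
    return SimulatedReaction(score=score, critique="Too long, cut the preamble.")


def revise(draft: str, critique: str) -> str:
    """Placeholder for the rewrite step; a real version would be another LLM call."""
    return draft[: max(200, len(draft) // 2)]  # crude stand-in for "address the critique"


def iterate_before_publishing(draft: str, persona: str,
                              target: float = 0.8, max_rounds: int = 5) -> str:
    """Assumptions 9-10: loop on simulated feedback until the draft clears a bar."""
    for _ in range(max_rounds):
        reaction = simulate_reader(draft, persona)
        if reaction.score >= target:
            break
        draft = revise(draft, reaction.critique)
    return draft


if __name__ == "__main__":
    long_draft = "Dear investor, " + "context " * 300
    print(iterate_before_publishing(long_draft, persona="time-poor seed investor"))
```

The uncomfortable design choice is that the bar itself comes from the simulator, which is assumption 7 carrying most of the weight.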