Our gut feeling told us we'd fail; all the data indicated otherwise. We still failed.
At one point, we found ourselves in a tricky spot. We'd already picked all the low-hanging fruit and weren't growing as a company anymore. We faced a choice: either expand geographically or build new growth loops within our existing product. Since expanding to new markets wasn't an option, and significant product development wasn't feasible at the time, we needed something simple, quick to implement, and capable of delivering meaningful growth. Essentially, we had to create growth out of thin air.
We revisited some ideas we'd previously set aside, and among them was the "Refer a Friend" program. Initially, we had shelved this idea because, based on my experience, referral programs rarely work well for financial companies. However, we decided to set aside our biases and let data guide our decision.
We began with the simplest approach possible: an NPS (Net Promoter Score) survey of our existing customers. For those unfamiliar, an NPS survey asks customers one question: "How likely are you to recommend our company to your friends and family?" on a 0–10 scale. The score is the percentage of promoters (9–10) minus the percentage of detractors (0–6). To my surprise, our result was exceptionally good, around +30. Skeptical, I double-checked by increasing the sample size to rule out a false positive, but the impressive result held firm. Encouraged by the data, we built an MVP of the referral program, launched it quickly, and promoted it to our customer base.
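The scoring itself is mechanical. A minimal sketch, using hypothetical survey responses rather than our actual data:

```python
def nps(scores):
    """NPS = % promoters (9-10) minus % detractors (0-6), rounded to a whole number."""
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return round(100 * (promoters - detractors) / len(scores))

# Hypothetical sample of 0-10 responses, not our real survey data.
responses = [10, 9, 9, 8, 7, 10, 9, 3, 6, 8]
print(nps(responses))  # -> 30
```

A score of +30 means promoters outnumber detractors by 30 percentage points, which is generally considered a strong result for a consumer product.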
Initially, the program didn't gain much traction. We intensified our promotional efforts, but still, nothing changed. The issue wasn't that referred customers didn't sign up or convert; the core problem was that those same enthusiastic users who claimed they'd eagerly recommend us simply weren't referring anyone—even with generous rewards in place. As anticipated, the referral program failed once again.
Determined to understand this contradiction, I dug deeper and formed a hypothesis: perhaps our users in Asia, particularly those in communist countries, tend to respond overly positively to surveys out of politeness rather than genuine intent. Some evidence supported this theory: our customer retention was extremely poor, with barely 1% of users still active three months after registration. We had been so blinded by the optimistic NPS results that we'd overlooked the glaring reality reflected in our retention numbers.
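The retention signal we ignored is easy to compute. A minimal cohort sketch, assuming each user record is just a (signup date, last-active date) pair; the sample cohort below is hypothetical:

```python
from datetime import date

def retention_rate(users, days=90):
    """Fraction of a cohort still active `days` or more after signup.

    users: list of (signup_date, last_active_date) tuples.
    """
    if not users:
        return 0.0
    retained = sum(
        1 for signup, last_active in users
        if (last_active - signup).days >= days
    )
    return retained / len(users)

# Hypothetical cohort: only one of four users stayed past 90 days.
cohort = [
    (date(2023, 1, 1), date(2023, 1, 5)),
    (date(2023, 1, 1), date(2023, 1, 20)),
    (date(2023, 1, 1), date(2023, 5, 1)),
    (date(2023, 1, 1), date(2023, 2, 10)),
]
print(retention_rate(cohort))  # -> 0.25
```

Run over real signup cohorts, a number like 0.01 at 90 days would have contradicted the NPS survey long before the referral program launched.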
My key takeaway from this experience: Actions speak louder than words. Instead of trusting only what users say, we should prioritize observing what they actually do. In retrospect, relying solely on NPS was a misstep—we should have trusted our retention cohort data instead.