Doing user research as a product manager in an early-stage startup
Six months ago, I joined a climate tech startup called Climate Policy Radar as employee #11. As their first product manager, I am responsible for user research, among many other things. This post describes the approach I use to gather continuous feedback from users via user tests and interviews.
Background
For most of the past 7 years, I have worked with organisations that had dedicated user researchers on most teams. For the past 6 months, I have had to wear many hats: UX designer, business analyst, performance analyst, tester, scrum master and, of course, product manager. User research is something I have only had the capacity to do part-time, so I have developed a light-touch approach that fits alongside all the other parts of my job.
How we do user research
User research helps us answer three questions:
- Is there a need or opportunity?
- Do people value our proposed solution?
- Can people use our proposed solution?
There are lots of different ways to do user research, but the main approach we’ve used so far is user feedback conversations. These are 15-45 minute conversations with someone who has used our product or might use it in the future.
We’ve had more than 50 of these conversations so far. We aim to do five a week so we can continuously and rapidly evolve our ideas. Before I joined, we also ran a survey with some of our early-access users.
As the product manager, I lead most user feedback conversations. But user research is a team sport. Getting the team involved improves the research: it helps us make better decisions about what we want to learn from users and stops any one person becoming a single point of failure. It also improves the product, because team members who learn more about users build empathy and understanding.
These conversations have given us a rich understanding of users and the problems they are trying to solve. This has helped us to focus on building a product that is genuinely useful to them. Now that we have a live product we can also track user behaviour via analytics. As user numbers grow we will start making more use of other methods such as micro surveys and A/B tests.
How we find users to speak to
When we launched the first (alpha) version of our tool, people had to register for early access. We put a tick box in the form asking if people would be happy for us to contact them. We had about 500 sign-ups for early access, and we spoke to 35 of them – some before launch, and some after.
Now that we have a live product, we invite everyone who gets in contact with us to join a user feedback session. This includes people who contact us directly, submit a law/policy document, or request to download our data. We also do regular shout-outs on social media and across our networks to find people to talk to.
Calendly is a great tool that automates the signup process. Through Calendly users can:
- Find out more about what to expect from a user feedback conversation
- Find a slot in my calendar that works for them
- Receive a calendar invite that contains a Zoom link and instructions
- View and complete our consent form
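We haven’t needed to build anything on top of Calendly ourselves, but if you wanted bookings to flow straight into your own tooling, Calendly offers webhook subscriptions that can notify an endpoint when someone books. Here is a minimal sketch of a receiver, assuming a subscription to the invitee.created event; the payload field names are my reading of Calendly’s v2 API and should be verified against the current docs:

```python
# Minimal sketch: a Flask endpoint that logs new Calendly bookings.
# Assumes a Calendly webhook subscription pointing at /calendly-webhook;
# the payload field names ("event", "payload", "name", "email") are
# assumptions based on Calendly's v2 API and should be double-checked.
from flask import Flask, request

app = Flask(__name__)

@app.route("/calendly-webhook", methods=["POST"])
def calendly_webhook():
    body = request.get_json(silent=True) or {}
    if body.get("event") == "invitee.created":
        invitee = body.get("payload", {})
        # Log the booking so it can later be matched to session notes.
        print(f"New session booked by {invitee.get('name')} <{invitee.get('email')}>")
    return "", 204

if __name__ == "__main__":
    app.run(port=5000)
```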
This is my first time recruiting research participants without offering incentives. So far so good! Our no-show rate is 10-15%, which is around the industry average.

How we use data responsibly
We ask everyone to read and complete a short (one-page) consent form before the session. This helps set expectations for how it’ll run and how we handle their personal information. It also asks for explicit consent for:
- other team members to observe the session
- the session to be recorded
- other team members to view the recording in the future.
Questions we ask
Our sessions are usually a cross between a user interview and a usability test.
We start by asking people questions about their role and what their work in climate policy involves. This helps us understand who they are and what problems they’re trying to solve. Starting with easy questions like this also helps people to settle in and get comfortable.
If they haven’t used our tool before, we usually ask them to show us the other tools they currently use. This gives us insight into how they are solving their problems today. We also observe them using Climate Policy Radar for the first time, which shows us how new users onboard and whether they understand how the tool works.
If they’ve used it before, we usually ask them to show us how they last used our tool, and about what problems it helps them solve. These conversations focus more on understanding what regular users do with the tool, and how it might grow to better meet their needs in the future.
We end by asking users if they have any questions for us and whether they are willing to participate in further research. We are gradually building up a rich database of users we can go back to for further conversations when needed.
The questions we ask depend on our research priorities at the time. They also depend a bit on the type of user we are speaking to and what direction the conversation goes in. For example, users sometimes sign up for conversations because they want to discuss a specific feature idea that isn’t a priority for us right now. If that happens, I try to learn more about the problem they are trying to solve and how they are getting around that problem today. I also ask for permission to speak to them again when solving that problem becomes a priority for us.
Taking notes
We have two Notion databases where we store user research findings.

1. Interview log PII database
We store all personally identifiable information here. This includes:
- raw notes from the observer
- a link to the recording/transcript (stored in a secure folder on Google Drive)
- the participant’s name and contact details
- whether they are willing to participate in future research.
2. User research database
This is where we store summary information that the whole team can access. This includes:
- anonymised information about the participant
- a one-page summary of the key findings from the conversation
- a link back to the entry in the Interview log PII database
- any analysis or planning work
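As an aside, entries in the user research database don’t have to be created by hand. Here is a hypothetical sketch using the official notion-client Python package; the property names, database ID, and linked page ID are illustrative placeholders rather than our real schema:

```python
# Hypothetical sketch: creating a user research summary entry via the
# official notion-client package. "Name", "Role" and "Interview log"
# are placeholder property names, not our actual Notion schema.
import os
from notion_client import Client

notion = Client(auth=os.environ["NOTION_TOKEN"])

notion.pages.create(
    parent={"database_id": os.environ["RESEARCH_DB_ID"]},
    properties={
        # Anonymised participant label, never their real name.
        "Name": {"title": [{"text": {"content": "Participant 51"}}]},
        "Role": {"rich_text": [{"text": {"content": "Climate policy analyst"}}]},
        # Relation back to the restricted Interview log PII database.
        "Interview log": {"relation": [{"id": "<pii-entry-page-id>"}]},
    },
)
```

Keeping the PII link as a relation, rather than copying personal details across, mirrors how our two databases reference each other: the whole team can read summaries without touching personal data.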
Turning raw findings into notes
At the end of the conversation, we take 10-15 minutes to distil the raw notes from the interview log into the one-page summary in the user research database. In the beginning, I added lots of tags to each note and linked them back to product bets so the research would be easier to refer back to later. We stopped doing that because it wasn’t worth the effort: Notion’s keyword search makes it easy enough to find which conversations cover which topics, and our one-page summaries make it easy to skim-read for key findings. We may revisit tagging as we scale, but it is overkill at our current size.

Monitoring interviewer performance
To improve as interviewers, we review our own performance at the end of each session. The Notion template we use for taking notes includes a checklist of common interviewing mistakes to watch out for, such as “did the interviewer ask leading questions?” and “did the interviewer ask yes/no questions?”. I borrowed this idea from Build Better Products.
How we share learnings
We write a one-page summary of every conversation. This includes key insights and opportunities for improvement. These one-page summaries are available for the whole team to view. We adapted our template from this interview snapshot.
Every few weeks I share what we’re learning from user research with the team, either via a dedicated session or via a slot at the show & tell. We generally focus more on the findings that have surprised us or the things that aren’t going so well, as these are the things we expect to learn the most from.
I always try to share some of the best snippets from video recordings. It’s really powerful for the team to see people using our tool first-hand.
I also make updates to our strategic user insights documentation. These include personas, journey maps and research priorities.
Not the only tool in the box
We also have to be careful with the conclusions we draw from user feedback conversations, because people behave differently under observation than they do in real life. In almost every session, people have used the filters on our search results page, but we know from analytics that only a tiny percentage of users actually do so. Feedback conversations are best for understanding why people do things; analytics tell us what they actually do with our tool. I’ll be blogging about our analytics setup soon.
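As a toy illustration of that analytics side, here is roughly how you could measure real-world filter usage from an events export. The file name, column names and event name are all hypothetical, not our actual setup:

```python
# Toy illustration: what share of users ever apply a search filter?
# The CSV, its columns (user_id, event_name) and the event name
# "search_filter_applied" are hypothetical placeholders.
import pandas as pd

events = pd.read_csv("events.csv")

total_users = events["user_id"].nunique()
filter_users = events.loc[
    events["event_name"] == "search_filter_applied", "user_id"
].nunique()

print(f"{filter_users / total_users:.1%} of users applied a search filter")
```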
Thanks
Thanks go to Faye Koukia-Koutelaki and Lisa Long, both of whom have provided lots of feedback and advice over the past 6 months. And obviously, credit to the Climate Policy Radar team, who have worked with me on all of the above.
Feedback is welcome, so if you have any ideas for how we can improve on any of the above then get in touch!