Jean-Yves Simon is Chief Product Officer at AB Tasty, a customer experience optimization software company. He began his career at Actito (previously Smartfocus) as an account manager and spent 12 years at the company, working his way up to Vice President of Product Management. Jean-Yves later co-founded HotelULike and also served as CPO of D-EDGE Hospitality Solutions. Prior to his current role at AB Tasty, he led product at Jabmo, an account-based marketing platform.
In our conversation, Jean-Yves shares how he approaches product discovery at AB Tasty. He discusses the importance of solving a need users don’t even know they have, and talks about the critical role customers play in the feature ideation process.
Working to implement customer ideas
How do you ensure that you’re in tune with your customers’ needs as they evolve and change, especially when customer loyalty is so volatile?
It’s become more difficult over the years, especially since COVID. AB Tasty started as an agency, so customer focus is in our DNA — it’s also one of our core values. This aligns closely with my own personal ethos. Growing up, my parents owned a hotel, so customer satisfaction is ingrained in my work, especially now as CPO. My father used to say, “The customer is king. You never say no to the customer.” That’s always stuck with me.
At AB Tasty, we work closely with our CSMs and professional services teams to make sure the customer experience satisfies customers and stays in tune with their needs. To keep ourselves on track, we built a multi-channel feedback system made up of both qualitative and quantitative signals across the entire customer lifecycle.
We have product advisory and in-app client feedback boards. We run interviews with customers. And, since we have 10 squads working on the product by domain or persona, we also have each squad partner with customer advisors who help validate ideas.
One of the ways that we drive customer loyalty is by delivering on our promises. We’re a fast-paced SaaS company, and we keep a cadence for everything. But customer feedback is extremely important to us — we use it as the basis for every new idea. I’m proud that about 42 percent of the product improvements we shipped last year came from user requests.
Given your broad set of customers, how do you prioritize customer requests? And how do you let customers know if their idea made it into the product?
Customers can submit ideas through our idea application, and we have a system in place so they can vote for other people’s ideas as well. Once they do that, they become part of our alerting system to follow updates on their requests. This helps them see that they’re truly part of our process.
With that said, the most difficult aspect of creating B2B SaaS products is drawing the line between a custom feature and a productized one. At AB Tasty, we take more of a layered approach to understanding enterprise behavior and customers. The feedback aspect lies at the bottom of this approach, where we validate ideas with customers. Generally, since enterprise customers take longer to co-develop with, I like to mix enterprise customers with mid-market customers who can operate at a faster pace. They have an appetite for new features, and they’re willing to take risks in implementing new things.
In the end, it’s a question of having the right conversation with the right customers and seeing what’s possible for us to build internally. We look at whether the feature impacts more than $1 million ARR, as well as how many customers would need to implement it to meet that threshold. We also prioritize the voting system for the qualitative aspect. Quantitatively, we look at usage and session replay after we prototype and release the initial feature to a set of customers.
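As a rough sketch of that kind of threshold check, you could sum the ARR of every customer behind a request and keep only the requests that clear the $1 million bar. The data model and names below are hypothetical and purely illustrative, not AB Tasty's actual tooling:

```python
from dataclasses import dataclass

# Hypothetical data model, for illustration only -- not AB Tasty's actual system.
@dataclass
class FeatureRequest:
    name: str
    votes: set[str]  # IDs of customers who requested or voted for the feature

ARR_THRESHOLD = 1_000_000  # the "$1M ARR impacted" bar mentioned above

def impacted_arr(request: FeatureRequest, arr_by_customer: dict[str, float]) -> float:
    """Sum the ARR of every customer behind the request."""
    return sum(arr_by_customer.get(cid, 0.0) for cid in request.votes)

def shortlist(requests: list[FeatureRequest], arr_by_customer: dict[str, float]):
    """Keep requests whose backers represent at least $1M ARR, ranked by impacted ARR,
    along with how many customers it took to reach the threshold."""
    scored = [(impacted_arr(r, arr_by_customer), len(r.votes), r.name) for r in requests]
    return sorted([row for row in scored if row[0] >= ARR_THRESHOLD], reverse=True)

# Example usage with made-up numbers
arr = {"acme": 600_000, "globex": 450_000, "initech": 120_000}
reqs = [
    FeatureRequest("mutually exclusive tests", {"acme", "globex"}),
    FeatureRequest("dark-mode editor", {"initech"}),
]
print(shortlist(reqs, arr))  # -> [(1050000.0, 2, 'mutually exclusive tests')]
```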
Solving a need users don’t know they have
Do you have an example you can share of a time when a customer had a need that conflicted with technical impact constraints?
We ran into that situation a couple of months ago. A customer had a use case to create multiple A/B tests that could run simultaneously. Running several A/B tests sounds like normal practice, but when they address the same page or content, it creates conflict and bias in the analysis. We thought about whether this was possible to do, and it turns out it was. We were able to run mutually exclusive experiments to ensure that the experiments remain siloed and don’t conflict with each other.
This was quite a big investment in terms of data. The request came from one of our big US-based customers, so we looked to see if other customers expressed interest in the same feature. We ran it by our early adopter community, and a few of them saw a need for the feature as well. The cost of developing it was high, but one of our competitors was already working on it. So, we knew that moving forward with mutually exclusive experiments was worth the investment. Now, more than 200 of our customers use this feature.
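Jean-Yves doesn’t go into implementation details here, but the general idea behind mutually exclusive experiments can be sketched with deterministic bucketing: hash each visitor into exactly one traffic slice and run each experiment only inside its own slice, so no visitor is ever counted in two overlapping tests. The experiment names below are invented for illustration and are not AB Tasty’s implementation:

```python
import hashlib

# A minimal sketch of mutually exclusive experiment assignment.
# Each visitor lands deterministically in one slice of an exclusion group,
# so overlapping tests on the same page never share a visitor.
EXCLUSION_GROUP = ["exp_homepage_banner", "exp_homepage_cta", None]  # None = holdout

def assign_experiment(visitor_id: str, group=EXCLUSION_GROUP):
    """Deterministically map a visitor to at most one experiment in the group."""
    digest = hashlib.sha256(visitor_id.encode()).hexdigest()
    slice_index = int(digest, 16) % len(group)
    return group[slice_index]

# The same visitor always gets the same slice, which keeps the analysis unbiased.
print(assign_experiment("visitor-42"))
print(assign_experiment("visitor-42"))  # identical result on every call
```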
Have you found that as a feature develops, customer needs also evolve, and more of them end up having a use case for it?
Definitely — sometimes it solves a need that customers don’t explicitly express. One of the things I realized in my career is that if you don’t expose anything, people won’t react. There have been several times when customers didn’t even know they had a need. Henry Ford said, “If I asked customers what they wanted, they would’ve said a faster horse,” but then he created a car. I often come back to that quote. Sometimes, when you’re a bit ahead of the innovation curve, you create a need through your product that someone didn’t know they had.
There are plenty of features we came up with that hit this mark, and the results and adoption were surprising. We try to structure that approach around our “big bets” — innovative, differentiating ideas that bring value and impact in an unexpected way. That “let’s go for it” mentality is part of AB Tasty’s DNA, as well as innovation and taking risks, even though that’s not always easy to do.
How AI is changing the discovery value chain
Sometimes, AI can be somewhat of an obstacle to a human experience. How do you balance the usefulness of AI with understanding when the customer would benefit from a human touch?
It’s twofold. AI is necessary nowadays — we’ve been providing AI and machine learning technology for years. LLMs and generative AI are more of a new trend, of course, and we have to stay on top of them. How can we bring value without having another AI assistant that sits on the side?
We find tasks that our customers struggle with and then ask how we can use AI to bring value in solving them. For example, you can create a campaign within AB Tasty in seven or eight steps, and this takes roughly 20 minutes. You select an audience, create content, and then target it. We looked at the different steps and where people typically struggle within them.
One area was creating the hypothesis, and another was selecting the right audience. People also wanted the ability to create content without having to rely on developers, and that’s a bit tricky for non-technical users. Plus, with A/B testing, you’re making bold decisions on sometimes very tiny uplifts in results, which can be stressful. It’s often hard to know if you’re making a good, concrete decision based on small data points.
So, we started to include agents that could help with the experience. We looked into how we could use AI to solve support issues as well, and infused those agents into the platform. That has worked pretty well. Our customers engage with these agents often.
However, we think we can provide even more value by making AI part of the design and discovery process. I recently worked with a designer and a PM to create an orchestration module. We want to take it further and put AI at the beginning of the journey to make it even easier for the customer. We’re now revamping the feature’s design to meet that new standard.
Overall, AI is changing the entire discovery value chain. Roles within AB Tasty are changing as a result — designers can now code prototypes, and PMs can design in ways they weren’t able to before. We’re shifting our mindset so that AI is at the start of everything we create.
You mentioned that with advancements in AI, a product manager can now do design, for example. How does that impact cross-functional collaboration?
It sometimes creates friction. As a CPO, I push everyone to adopt new tools like GenAI because I believe it’s important to continually evolve. Often, a PM is not comfortable with a designer challenging their user story, but it comes down to mindset. We held a training session recently about feedback because we know that it can be difficult to accept feedback and to be challenged. Sometimes, feedback is taken negatively, as if someone is stepping on your toes.
My goal is to change this mindset for people. I want to put principles around that and reframe it as a good thing, even if it’s uncomfortable. In the end, we may end up with PMs who have design skills, PMs who have coding skills, etc. It doesn’t matter as long as everyone finds the right balance and values what the others are doing. One of our other core values is going above and beyond, and I want to encourage that.
How do you measure the return on investment in your product discovery process? Are there certain metrics that you rely on to ensure that your efforts have the impact you’re targeting?
ROI is very important to us because we tie everything back to how we’re serving our customers. We measure it in a couple of ways. Part of our process is to look at the impact via leading and lagging metrics. Activity is also important to me — for example, the number of customer requests processed within our feedback tool. To better understand customer satisfaction, I look at what percentage of those customer-submitted requests we actually deliver.
Recently, we implemented a CSAT survey to measure the key areas of our product. We wanted to see the true satisfaction our customers felt for these product lines so we could give our PMs a good gauge. We looked at the number of campaigns created, as well as the customer-driven delivery rate. How many customer-submitted requests are we delivering? The main North Star metric we look at is the adoption rate and feature engagement.
One thing we’re working on is how these tie back to the overall company business metrics. That’s hard to do, but we’ve started to be able to link them. We’re trying to find proxy metrics that have a relationship with a KPI, like churn rate. Does the number of integrations a given customer uses have a positive impact on churn? For example, if AB Tasty is connected to a session replay tool, how many of those customers churn compared to the average? If we see that pushing a particular integration to customers reduces the overall churn rate, that’s the type of relationship we try to find.
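As an illustration of that kind of proxy-metric check, you could compare the churn rate of customers who use a given integration against the churn rate of the whole base. The column names and numbers below are made up for the example, not AB Tasty data:

```python
import pandas as pd

# Toy customer table: does having a session replay integration correlate with lower churn?
customers = pd.DataFrame({
    "customer_id":            [1, 2, 3, 4, 5, 6],
    "has_session_replay_int": [True, True, False, False, True, False],
    "churned":                [False, False, True, False, False, True],
})

overall_churn = customers["churned"].mean()
churn_by_integration = customers.groupby("has_session_replay_int")["churned"].mean()

print(f"Overall churn rate: {overall_churn:.0%}")
print(churn_by_integration)  # churn rate with vs. without the integration
```

A gap like this only shows correlation, which is why the interview frames these as proxy metrics to investigate rather than proof that an integration reduces churn.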
Scaling product discovery and ideas
We’ve talked a fair amount about product discovery efforts and where ideas come from. What about scaling them?
Scaling, to me, means knowing what you’re trying to discover. It sounds crazy, but many people will say that they ran a discovery for two weeks or a month and still don’t know what the deliverable will be or the hypothesis they’re trying to answer. Scaling means you put a playbook together that frames the overall discovery process — templates, toolkits, and tools.
You may want to run a big UX research session, but if you have only three days, how do you do that? I encourage people to create a matrix and use guidelines and a framework. We put several things together at AB Tasty to help designers and PMs define what they want to discover. Depending on the time you have, it could be a very small discovery focused on the users, asking questions, and prototyping. That way, you can iterate fast — maybe in a couple of hours.
At AB Tasty, we don’t use one specific framework today — it’s more of a mix of templates, tools, and principles that we put together to ensure that we discover the right way. I like the idea of operating in a six-week cycle. This is based on the Shape Up methodology, which allocates six weeks to discovery. At the end, there’s a presentation to expose quantitative metrics, UX research, and early prototypes. The group looks at it together, and then six weeks later, delivers it. It’s a great way to timebox discovery so you work efficiently.
Have you noticed improvement from using the Shape Up methodology and transitioning to that six-week cycle?
Yes, it’s been really beneficial. We went through several iterations, and we’ve found that six weeks is a very good timeline. It’s half of a quarter — you can tell the CEO that the idea is great and you can put it into the next six-week cycle, but it still gives you enough time to stay focused and flexible. I highly recommend it.
The difficulty comes in with doing discovery and delivery in parallel. That’s tough for many PMs, which is why framing discovery as delivery is helpful. Being able to put something in front of customers on day one because you prototyped rapidly is the best discovery process I’ve landed on. The earlier you expose your prototype, the better feedback you will get and the faster you will be able to iterate.
With AI, it’s become even easier to go beyond that and prototype something functional. AI is making coding and querying data easier than ever, and we’ve been able to prototype real-life scenarios and get them in front of customers quickly. I’m really happy with how that’s changed.