Episode 261: AI Implementation in Regulated and High-Trust Industries
AI is moving quickly from experimentation into real products, but in regulated and high-trust environments the stakes are much higher. In this compilation episode of the Product Thinking Podcast, host Melissa Perri brings together perspectives from multiple product leaders to explore what it really takes to ship AI responsibly when accuracy, trust, and risk management matter.
You will hear from Maryam Ashoori, who works at IBM and was involved in building watsonx. She explains what AI agents actually are, why large language models hallucinate, and how guardrails and human oversight help teams manage calculated risk as systems become more autonomous.
The episode also features Magda Armbruster, Head of Product at Natural Cycles, and Jessica Hall, Chief Product Officer at Just Eat Takeaway. Together they share how embedding regulation, prioritizing data privacy, and being honest about cost, governance, and capability building can turn compliance and trust into enablers rather than blockers for AI-driven products.
You’ll hear us talk about:
What AI agents can and cannot do
Maryam breaks down how agents reason, plan, and take action, and why their probabilistic nature leads to hallucinations. She explains why this behavior is acceptable in low-risk contexts but dangerous in high-stakes domains without proper safeguards.
Managing risk with guardrails and humans in the loop
The conversation explores how teams can design agentic guardrails and decision flows that keep AI systems close to verified truth, while escalating sensitive or high-risk situations to humans for review.
Embedding regulation and privacy into product development
Magda shares how Natural Cycles integrates quality assurance, regulatory, and compliance partners directly into day-to-day product work, and why strong privacy practices and user control are core product strategy rather than afterthoughts.
The real cost of AI and long-term responsibility
Jessica discusses the often underestimated costs of building and running AI systems, from unit economics to team capability, and why product leaders must balance simplicity, governance, bias mitigation, and customer trust instead of chasing hype.
Episode resources:
Check our courses: https://productinstitute.com/
Maryam Ashoori LinkedIn: https://www.linkedin.com/in/mashoori/
Magda Armbruster LinkedIn: https://www.linkedin.com/in/magda-armbruster-326692a/
Jessica Hall LinkedIn: https://www.linkedin.com/in/jessica-hall-4223b0/
Product Thinking Podcast Episode 241: https://www.produxlabs.com/product-thinking-blog/episode-241-ai-strategy
Product Thinking Podcast Episode 251: https://www.produxlabs.com/product-thinking-blog/episode-251-femtech-innovation
Product Thinking Podcast Episode 199: https://www.produxlabs.com/product-thinking-blog/2024/11/27/episode-199-the-true-cost-of-ai-beyond-the-hype-and-into-reality-with-jessica-hall
Follow/Subscribe Now:
Facebook | Instagram | LinkedIn
Episode Transcript:
[00:00:00]
Intro
---
Melissa Perri: Creating great products isn't just about features or roadmaps; it's about how organizations think, decide, and operate around products. Product Thinking explores the systems, leadership, and culture behind successful product organizations.
We're bringing together insights from multiple product leaders, pulled from past conversations to explore one shared topic, offering different perspectives and lessons from real world experience.
I'm Melissa Perri, and you're listening to the Product Thinking Podcast, by Product Institute.
Today we're looking at how AI is moving from experimentation into real products, and what that means when trust, accuracy, and risk really matter.
We'll start with Dr. Maryam Ashoori, who works at IBM and was involved in building watsonx. She breaks down what AI agents actually are, how they reason, plan, and take action, and why those capabilities introduce new challenges when systems start operating on their own.
Let's hear from [00:01:00] Maryam.
Understanding AI Agents and Their Capabilities
---
Maryam Ashoori: An AI agent is an intelligent system with reasoning and planning capabilities that can automatically make decisions and take actions. So now, how can I use this? Because it has some sort of reasoning, and we can question whether it's real reasoning or just rudimentary planning capability, but because it has those capabilities, it has the opportunity to break down complex problems and play the role of a partner for brainstorming, for market evaluation, for gathering and analyzing information and data.

And then because it has some ability to take actions, what we call in the AI world tool calling or action calling, I can automate some of the actions that an agent could potentially execute. For example, we are having this conversation on the podcast; I can have an agent automatically summarize everything that we are talking about, identify a list of actions [00:02:00] depending on what we said, so use those reasoning and planning capabilities to identify the actions, and connect it to some internal systems that enable it to automatically go and execute those action items and get them started.
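To make that loop concrete, here is a minimal Python sketch of the reason, plan, act cycle described above. It is only an illustration under stated assumptions: the tool functions and the hard-coded plan are hypothetical placeholders, not any specific framework's API, and a real agent would use the model itself to decide which tools to call.

```python
# A minimal sketch of the reason -> plan -> act loop described above.
# The tools and the hard-coded "plan" are hypothetical placeholders;
# a real agent would use an LLM to choose tools dynamically.

def summarize(text: str) -> str:
    # Placeholder: in practice this would call an LLM.
    return "Summary: " + text[:80]

def extract_actions(summary: str) -> list[str]:
    # Placeholder: in practice an LLM would identify action items.
    return ["file summary", "notify team"]

TOOLS = {
    "summarize": summarize,
    "extract_actions": extract_actions,
}

def run_agent(transcript: str):
    # "Planning": decide which tools to call and in what order.
    plan = ["summarize", "extract_actions"]
    result = transcript
    for step in plan:
        # "Action taking" (tool calling): execute the chosen tool.
        result = TOOLS[step](result)
    return result

print(run_agent("We discussed launch timing and agreed to ship in May."))
```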
From Gen AI to Action-Oriented Agents
---
Maryam Ashoori: Let's go back to, you mentioned ChatGPT, the story of Gen AI in 2023. The series of use cases that Gen AI was applicable to at the time was around content generation, code generation, content-grounded question answering, summarization, classification, and information extraction.

Mid-2024, we started seeing LLMs expanding to taking actions, the tool calling and function calling capabilities that we talked about. So suddenly businesses saw a window of opportunity to take all that acceleration we previously got from LLMs and, through action calling and function calling, bring it to every [00:03:00] single part of their businesses, even the legacy systems.

You can think of automation that can come everywhere. You can think of acceleration that can come everywhere. So practically any use case that you can think of can potentially benefit from the capabilities that agents enable.
Why LLMs Hallucinate: Probability, Not Logic
---
Maryam Ashoori: Because there is no real reasoning, no logic of thinking, behind LLMs. This is unsupervised learning: the LLM is exposed to a body of information, a very large body of information. So when you ask a question, it basically calculates the probability of the next token. It's just the probability of the next token, not the whole answer. And because of this, the model can hallucinate, because just by following the probability rules, there is a high probability that the output it generates sounds right. It's like, hey, it looks good, and most [00:04:00] of the time they are very confident generating it, you can feel the confidence in the tone. But it's not accurate information. It's just a collection of words nicely put together.
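As a toy illustration of that point: next-token generation is weighted sampling over candidate tokens, and nothing in the procedure checks truth, which is exactly why fluent but wrong output is possible. The distribution below is invented for the example:

```python
import random

# Invented next-token distribution for the prompt
# "The capital of France is ...". The model only knows these weights;
# nothing here checks which continuation is actually true.
next_token_probs = {
    "Paris": 0.62,
    "Lyon": 0.23,
    "Berlin": 0.15,   # fluent-sounding but factually wrong
}

token = random.choices(
    population=list(next_token_probs),
    weights=list(next_token_probs.values()),
)[0]

# Some fraction of the time this prints a confident falsehood:
print("The capital of France is", token)
```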
Guardrails and Human Oversight for AI Risk
---
Maryam Ashoori: There are two things that they can do. One is using the technology to identify some of those. So for example, for agents, we have been developing guardrails, agentic guardrails, that check things like context relevance and faithfulness to the content. So as a product manager, you can put these guardrails in the flow of decision making for the agent to make sure it stays close to the truth. The second thing they can do is, when they design the workflows they want, make sure that a human is in the loop when there is sensitive information or a need for very high-accuracy output. So, for example, if you are [00:05:00] using AI to provide recommendations for where to eat, maybe the confidence can be low and nothing bad happens, unless we are talking about food allergies, which is a serious thing. So maybe as a product owner providing recommendations, you can make sure that if there is something in that query about food allergies, a human is in the loop. Or you can make sure there is an extra check, so that the output, before it is communicated to the end user, is double validated.
Melissa Perri: And it sounds like this has to do with risk. Like the higher the risk, the more you want to put these into place.
Maryam Ashoori: Exactly. We call it calculated risk. So as a product manager, you need to know what is non-negotiable, what can't be jeopardized, and work around that.
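A minimal sketch of the escalation logic Maryam describes might look like the following. The sensitive-term list and the confidence threshold are invented for illustration, not a production policy:

```python
# Sketch of a human-in-the-loop guardrail: low-risk queries pass
# through automatically, while queries touching a sensitive topic
# (here, food allergies) or falling below a confidence threshold are
# escalated to a person. Terms and threshold are assumptions.

SENSITIVE_TERMS = {"allergy", "allergic", "allergen"}
CONFIDENCE_FLOOR = 0.90

def escalate_to_human(query: str, draft: str) -> str:
    # In a real system this would enqueue the case for human review;
    # here we just tag it so nothing unverified reaches the user.
    return f"[HELD FOR HUMAN REVIEW] {draft}"

def route(query: str, draft: str, confidence: float) -> str:
    sensitive = any(term in query.lower() for term in SENSITIVE_TERMS)
    if sensitive or confidence < CONFIDENCE_FLOOR:
        return escalate_to_human(query, draft)
    return draft

# Low stakes: served directly. High stakes: held for review.
print(route("Where should I eat tonight?", "Try the ramen place.", 0.95))
print(route("Is this dish safe for a peanut allergy?", "Yes.", 0.95))
```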
Melissa Perri: So Maryam walked us through how AI agents work, why hallucinations happen, and why guardrails and human oversight become critical as risk increases. That idea of calculated risk is especially [00:06:00] important in regulated environments where mistakes can have serious consequences.
Next, we'll hear from Magda Armbruster, Head of Product at Natural Cycles, who shares how her team builds healthcare products where regulation, privacy, and compliance are embedded directly into product development.
Here's Magda.
Building Products with Embedded Regulation
---
Magda: We have quality assurance members within our team who work alongside product development, and they flag to us early if we try to do something too risky or too out there. Our quality assurance is built into our product development processes. Whenever we brainstorm about a new idea, they are part of that brainstorming. They are part of our team meetings where we present new designs or new features, and they give us feedback. So we take that into account right from the beginning. We don't work in silos, where first the product team works [00:07:00] and then we hand it over for sign-off and then it comes back after a month of limbo. We work alongside the regulatory and the compliance teams, with frequent meetings, and also just by talking to each other.
Data Privacy as Product Strategy
---
Melissa Perri: There is a very big topic in the United States right now about that as well, and I've seen Natural Cycles and apps like it in the news, especially when it comes to the government, as our choices have been more and more restricted here. A lot of news articles talk about how the government is going to start looking at these apps and taking people's data to find out if they had an abortion or if they prevented pregnancies, which could lead to legal action. How do you think about, first of all, is any of that true? And how do you think about data privacy when it comes to such a sensitive topic like this, especially in a political climate like we have today?
Magda: When it comes to data privacy and data security, we are a paid app. [00:08:00] One reason for us being a paid app is that we do not sell our users' data. Our users' data is in their hands; it is under their control. They are in charge of it. They can delete it.

They can choose not to add any data; they don't need to do anything. They're absolutely 100% in control. If they don't want us to use their data for research, they don't have to. It's absolutely all in their hands. All of our practices around data privacy and data security sit under NC Secure, a flagship product in our company.

And one of those is an advanced identity protection called Go Anonymous mode: should women want to go anonymous, they can. And this means that not even we can identify who they are. So in the unlikely, and hopefully never realized, event of a subpoena from the government, we would not be able to give up the user's data, 'cause [00:09:00] we don't know who the data belongs to.
Regulation as Innovation Enabler, Not Blocker
---
Magda: um,
People often think that if you're a regulated medical device and if you need to follow like a very strict process, it can slow you down. But I think for us it's actually the opposite, which I think it's pretty cool. We have we have a very well-defined quality management system that has been with us since the beginning. So it's like a foundation of what we do. Whenever we like tackle a new feature, we can look at the product requirements at the risk. So we do have a baseline to start with, which is an excellent thing to have. Also when it comes to being a regulated medical device, we do need to document quite a bit, which also might seem like a hurdle, but if you have it incorporated in your process it's actually very seamless. And it does help to document your decisions, user insight, feedback that we get, so that you [00:10:00] can refer to it later on. And we do innovate actually quite a bit. One aspect that I also wanted to touch upon is that we do have a great culture in the company where everyone feels like they really can contribute to the success of the company.
And if someone feels like they can impact the mission or the product or the vision, they will be more likely to innovate and to come up with great ideas. And we have an amazing team that does that basically on a daily basis. So when we do have a new product and we innovate on it, we bring in our compliance and our regulatory teams early on, and they can also help us shape that product.

So really, working together with the regulatory team is not a blocker. It actually just provides us with a framework that enables [00:11:00] the innovation.
Melissa Perri: Magda showed how bringing regulatory and compliance teams into the process early can actually create clarity and enable innovation, rather than slow teams down.
But even with strong processes in place, AI introduces another set of questions around cost, scale, and long-term sustainability. For our final perspective, we'll hear from Jessica Hall, Chief Product Officer at Just Eat Takeaway.com. She talks about how product leaders should evaluate those trade-offs, from unit economics and simplicity to governance and building lasting capability. So, this is Jessica on what it takes to make those decisions responsibly.
The Real Cost of AI and Product Trade-offs
---
Jessica Hall: I think there's always a whole load of inputs, and it really depends on the situation.

So there's a cost piece in this. We are excited about AI and there's lots of conversation about it in the industry, but implementing LLMs, training them, and running them can be really expensive. And if we're talking about something that has a very [00:12:00] small impact, it might be great for your customers, they might really love it, but if it doesn't really move the needle commercially, then you have to ask the question: is this the right investment to make? Or are there cheaper alternatives that are as good, or 90% as good? And that's always a judgment call that you have to make. I think the other piece is just keeping it simple, and that's one of the things that I hold really true and that I challenge my team on as well.

Sometimes with AI solutions we can be guilty of overcomplicating things or over-engineering them, when actually simplicity is the answer. So asking yourself, is this the simple answer? Is this the right thing to do? That's really important. And then I think once you get to that point, it's a question of: what's your tech stack? What data do you have? Do you have enough data to make this meaningful, and do you know that you can run it effectively? You have to ask yourself all of those kinds of things. How accurate is it going to be based on [00:13:00] the information that you've got? And is that a risk you're willing to take?

Because with generative AI, there is also a risk element to this: we are still learning, the LLMs are learning, and it is changing all the time. We've seen some mishaps in the industry, and there will be more to come, I'm sure. But it's all of these factors together. And I'll go back to what I said about hype: not getting dazzled by the exciting potential, but actually asking, what is the problem we're solving, and does this really solve that problem?
Melissa Perri: I've been playing with LLMs on our side too, for our Product Institute training, to see if we can make things more interactive. And what surprised me about it was some of the stuff that you just said, too, that I don't think a lot of people are looking at. One was the cost side of it, right?

Does this scale? I think a lot of people were like, oh, LLMs are gonna reduce the cost. But if you wanna self-host an LLM, for example, on [00:14:00] AWS, that's expensive. Sure, it could pay for itself if it replaces the work of a couple thousand people. But if you don't have that, and you're doing it for the wrong reasons, it's expensive.

And I don't think people are actually looking at that.
Jessica Hall: Yeah, the cost side is quite interesting, and depending on the size of your business and what you're going after, it's definitely worth looking at. That is one of the things we assessed when we were building our AI assistant for customers: what is the cost per chat?

When you're thinking about conversion and customer acquisition, which I'm sure many people who are listening will be very familiar with, you have to think about what a chat costs. What's it gonna do for your first order, your second order, and your conversion metrics? Are you seeing it as a way of doing acquisition, and have you built that cost in? It can be surprisingly expensive.

And probably the other thing is that it isn't just the cost of the technology; it's the cost of maintaining the team that needs to continue to work on it, the skill sets that you need to build.
And that's something I also take very seriously, not just with AI, [00:15:00] machine learning, and other technology in that kind of space: are you building a department that is fit for the future? Are you training your colleagues, your team, on the things they need to know to be effective into the future?
And as you adopt these technologies, it isn't just about adopting the technology, it's about creating the entire ecosystem of your department that's able to take it forward.
Melissa Perri: Yeah. It's building a capability into your business.
Jessica Hall: Yeah, it is, and it's not a reason not to do it. But beyond that expense piece, there are lots of other things around data governance, how you govern it, and bias, which is a great topic to talk about here as well: how do you tackle all of those parts of the solution?

I'm not for a moment saying don't do it. You've just gotta go in eyes open and not be dazzled, and really make the right decision for you.
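To make Jessica's cost-per-chat question concrete, here is a back-of-the-envelope sketch of that unit-economics check. Every figure is a made-up assumption, not real pricing or real business data:

```python
# Back-of-the-envelope unit economics for an AI assistant, in the
# spirit of the "cost per chat" question above. All numbers are
# invented assumptions, not real pricing or business figures.

tokens_per_chat = 3_000        # assumed average prompt + completion
price_per_1k_tokens = 0.01     # hypothetical model price, USD
chats_per_order = 1.5          # assumed chats behind each converted order
margin_per_order = 2.50        # assumed contribution margin, USD

cost_per_chat = tokens_per_chat / 1_000 * price_per_1k_tokens
cost_per_order = cost_per_chat * chats_per_order

print(f"Cost per chat:  ${cost_per_chat:.3f}")
print(f"Cost per order: ${cost_per_order:.3f}")
print(f"Share of order margin: {cost_per_order / margin_per_order:.1%}")
```

If the cost per converted order eats a meaningful share of the margin, the "cheaper alternative that is 90% as good" question from earlier becomes very real.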
Managing Governance, Bias, and Customer Trust
---
Melissa Perri: You just mentioned something too about bias, and we have talked a lot about bias and AI. It's a computer; it's generating stuff based off the data you [00:16:00] give it.

How do you run the governance around the AI that you have, and how do you mitigate that risk? You also talked about it giving the wrong answer, or doing something that's not desired, 'cause they do hallucinate. What do you do to manage risk around that?
Jessica Hall: Yeah. On all of these things, I would say we already have robust data governance policies and procedures in place, and AI falls into that category.

So we already have much of that in place, but we have developed it out further, to make sure we really get this right, or at least that we're giving ourselves the best chance to be on top of some of that stuff, 'cause it is new and developing all the time. So we have a cross-functional team of people who look at these solutions from all perspectives.

So we have the data protection office, legal, tech, product, various different people who come together regularly to talk about: how are we using it, what are [00:17:00] we doing, what are we learning, what's the industry saying? And we make sure we're really on top of all of that. And we try to be transparent with customers as well.
So with the AI assistant I'm talking about, we give customers the choice to opt in to us analyzing their chats. We want them to, 'cause we want to know what they want, but we're giving them the choice. And we're being transparent about that: in the first chat that they have, we ask them that question.

So I think that's also really important. Not only that, we've set up an AI Guild, so anyone with an interest across the entire organization, not just product people, not just tech people, can get involved. And through that guild we are educating people on the foundations, the policies, and safe, correct use within our guidelines, so that we can make sure we don't allow this to go unchecked and that we are really eyes open to the risks and the way we use it.
When it comes to bias and [00:18:00] inaccuracies, it's difficult. We've created a cross-functional team that works on the AI system, made up of a number of people from different backgrounds; it's quite a diverse team. And I think it is very important to have diverse teams building solutions, not just for AI, but especially for AI, because if we don't think about how we train the models, we risk exacerbating some of the bias that we see in society today.
I am not for one moment saying that we have solved that problem, or that we are the perfect business and we know exactly how we're doing it, but we are giving it a very good go, and we are really aware that's something we have to be on top of.
In terms of inaccuracies, it's difficult. The longer the conversation goes on, the more likely the AI is to hallucinate, because of short-term memory and, you know, the way LLMs operate.

From my perspective, it's not just about the inaccuracies; it's also better for the customer if we answer their question [00:19:00] quickly, so they don't have to ask five times for the meal they want but can get a pretty good answer straight away. And speed is important too: initially we found that our assistant was too slow. Customers were waiting too long, so they were asking again, and that's much more likely to lead to inaccuracies. So those are some of the things we've done, but I think, just like everybody else, we are on that journey, and we are learning and developing alongside the technology, and that's what makes it really exciting as well.
Melissa Perri: Thank you so much for listening to the Product Thinking Podcast.
We'll be back with another episode soon, continuing to share practical insights on how product leaders think and work. If you wanna hear the full conversations with Maryam, Magda, and Jessica, check out episodes 241, 251 and 199.
Make sure you like and subscribe so you never miss an episode.
We'll see you next time.