Episode 241: Mastering AI Strategy in Enterprise Teams with Maryam Ashoori

In this episode of the Product Thinking Podcast, Dr. Maryam Ashoori joins Melissa Perri to explore the transformative impact of AI on product management. Maryam shares her insights into how AI is reshaping roles within the industry, particularly the evolving relationship between product managers and engineers. With AI increasingly influencing design processes, she discusses the shift from traditional UI building to prompt definition. The conversation also delves into the distinct capabilities of AI agents compared to generative AI, highlighting their potential for reasoning and task automation.

Maryam provides valuable guidance for enterprises on adopting AI strategically, emphasizing the importance of focusing on specific problems rather than being swayed by emerging technologies. The episode also touches on the challenges large organizations face in integrating AI tools amidst rapid technological changes.

Curious about how AI can redefine your product management approach? Tune in to this episode for a deep dive into AI's role in transforming industry practices and discover actionable insights for leveraging AI within your organization.

You'll hear us talk about:

  • 05:45 - The Evolution of Product Management Roles with AI

Maryam discusses the increasing integration of AI in product management, emphasizing the shift towards focusing on building the right products, not just any products. She highlights how AI is changing the dynamics between product managers and engineers.

  • 14:10 - Designers' New Role in the AI Era

The conversation shifts to how AI is impacting design roles, with designers moving away from traditional UI construction to defining prompts for AI. Maryam explains how this evolution is changing the design landscape.

  • 22:30 - Strategic AI Adoption in Enterprises

Maryam shares her insights on how enterprises can successfully adopt AI by starting small with proof of concepts and scaling strategically. She emphasizes focusing on problem-solving rather than being distracted by the latest tech trends.

Episode resources:

Maryam on LinkedIn: https://www.linkedin.com/in/mashoori/

Watsonx.governance: https://www.ibm.com/products/watsonx-governance

Follow/Subscribe Now:
Facebook | Instagram | LinkedIn

Episode Transcript:

[00:00:00] Maryam Ashoori: In a world that is changing rapidly, most developers are not machine learning engineers; they have limited AI knowledge to go and evaluate what's going on behind the scenes. So my recommendation for them is: don't be distracted by the technologies that are coming out every day. It's gonna leave a certain level of exhaustion for you. Take one step back and identify the problem that you're trying to solve. Be very intentional about where you spend your time, because the technology world keeps changing, but the time that you have spent on a piece of technology, you can't get back. Focus on the problem that you're trying to solve, and use it as a frame and lens through which you can evaluate what is noise and what is worth your time.

[00:00:45] I think the boundaries between roles are gonna dissolve. There is gonna be more focus on building the right product versus just building the product, especially between product management and engineers. Historically we were saying that for a typical group, a ratio of one product manager to six to ten engineers is applicable. But now we've been hearing leaders talking about even one to 0.5 engineers, which indicates how important it is to identify the right problem. I'm less worried about the labeling of people, but I'm expecting the lines between these two roles to merge big time with AI.

[00:01:23] PreRoll: Creating great products isn't just about product managers and their day to day interactions with developers. It's about how an organization supports products as a whole. The systems, the processes, and cultures in place that help companies deliver value to their customers. With the help of some boundary pushing guests and inspiration from your most pressing product questions, we'll dive into this system from every angle and help you find your way.

[00:01:51] Think like a great product leader. This is the Product Thinking Podcast. Here's your host, Melissa Perri.

[00:02:01] Melissa Perri: Hello, and welcome to another episode of the Product Thinking Podcast.

[00:02:04] Today I'm excited to welcome Dr. Maryam Ashoori, a Vice President of Product and Engineering for IBM watsonx. Maryam is at the forefront of simplifying generative AI adoption for major companies and has a unique blend of expertise in computer science, user experience, and interface design. I'm excited to dive into the conversation with Maryam and explore AI agents, their potential for product innovation, and how AI can transform the way we build products.

[00:02:30] Welcome, Maryam. It's great to have you on the show.

[00:02:32] Maryam Ashoori: Thanks for having me, Melissa.

[00:02:34] Melissa Perri: I am so excited about your background. It spans so many different things. Can you tell us about a defining career moment that led you to pursue AI so deeply?

Maryam's career journey

---

[00:02:43] Maryam Ashoori: I like the way you framed it. It's a combination of so many different things. Over the past 20 years, I've been working at the intersection of system design, AI, and product strategy. It all started with a couple of advanced degrees in AI and robotics, focusing on multi-agent systems way before LLMs put them under the spotlight again.

[00:03:06] And then for my PhD, I did systems design and human-computer interaction. So throughout my career, I've operated as a user experience designer, as an AI researcher, as an engineer, and ultimately as a product leader. But through all of those, I've been focusing on designing AI systems: making them trustworthy, making them usable and impactful.

[00:03:28] More recently I've been responsible for the product strategy for our portfolio to make sure that we are not just moving fast, but we are also competitive and we are responsible by design.

[00:03:39] That's the balance, the delicate balance between velocity and trust that I've been operating at.

[00:03:45] Melissa Perri: And how has this background really shaped how you think about building AI products at Watsonx?

[00:03:50] Maryam Ashoori: People are really front and center in the problems that you're solving. Whenever you think about AI, you look into what problems it solves, and that view is going to be independent of the technology. What is the problem that we are solving? So instead of looking into different technologies and trying to figure out how I can bring them to businesses, I do the reverse.

[00:04:14] I go to the business problems and I ask: is this problem well suited to be tackled by AI? And if so, what are the advantages? What are the risks? And I keep a realistic view of what it requires and what it takes to implement it. Because I worked as a designer, I understand the pain points of our designers out there.

[00:04:34] I've worked as an engineer, so I have a good understanding of the depth, sizing, and effort that is needed to implement a solution. And then, last but not least, the product strategy: is this the right product that we are building?

[00:04:48] Melissa Perri: And I think that's so important in the age of AI, because we're all just really excited about it. But there are a lot of questions about whether this is actually gonna produce business value.

[00:04:57] So when we talk about AI agents, you know, it's all the buzz out there. Can you explain what an AI agent actually is and what you would use it for?

AI agents and their role in business

---

[00:05:07] Maryam Ashoori: Very good question. An AI agent is an intelligent system with reasoning and planning capabilities that can automatically make decisions and take actions. So now, how can I use this? Because it has some sort of reasoning (we can question whether it's real reasoning or just rudimentary planning capability), it has the opportunity to break down complex problems and play the role of a partner: for brainstorming, for market evaluation, for gathering and analyzing information and data.

[00:05:42] And then because it has some sort of action-taking capability, what we call in the AI world tool calling or action calling, I can automate some of the actions that an agent can potentially execute. For example, we are having this conversation on the podcast. I can have an agent automatically summarize everything that we are talking about and identify a list of actions.

[00:06:08] Depending on what we said, it uses those reasoning and planning capabilities to identify the actions, and connects them to internal systems that enable it to automatically go execute those action items and get them started.
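To make that loop concrete, here is a minimal sketch of a summarize-then-act agent. Everything in it is a hypothetical stand-in: `call_llm` for whatever model API you use, and the two tools for whatever internal systems the agent is allowed to touch.

```python
import json

def call_llm(prompt: str) -> str:
    """Stand-in for any chat-completion API call; returns the model's text."""
    raise NotImplementedError("wire up your model provider here")

# Tools the agent may invoke. Real versions would hit internal systems.
def create_ticket(title: str) -> str:
    return f"ticket created: {title}"

def send_email(to: str, body: str) -> str:
    return f"email sent to {to}"

TOOLS = {"create_ticket": create_ticket, "send_email": send_email}

def run_meeting_agent(transcript: str) -> list:
    # Planning step: the LLM condenses the conversation and proposes
    # action items as structured JSON so they can be executed mechanically.
    plan = call_llm(
        "Summarize this conversation, then list the action items as JSON, "
        'e.g. [{"tool": "create_ticket", "args": {"title": "..."}}]\n\n'
        + transcript
    )
    results = []
    for step in json.loads(plan):
        tool = TOOLS[step["tool"]]  # only whitelisted tools can ever run
        results.append(tool(**step["args"]))
    return results
```

The key design choice is the whitelist: the model proposes actions, but only pre-approved tools can execute them.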

[00:06:23] Melissa Perri: So when we're thinking about AI agents versus a lot of the generative AI applications that we see out there, we've got your ChatGPTs, your Claudes. How is it different? What do you do with an AI agent if you're in a business, and why would you want one yourself?

[00:06:37] Maryam Ashoori: Very good question. So let's go back to the story of gen AI in 2023; you mentioned ChatGPT. The series of use cases that gen AI was applicable to at the time was around content generation, code generation, content-grounded question answering, summarization, classification, and information extraction.

[00:06:59] In mid-2024, we started seeing LLMs expanding to taking actions: the tool calling and function calling capabilities that we talked about. So suddenly businesses saw a window of opportunity to take all the acceleration that we previously got from LLMs and, through action calling and function calling, bring it to every part of their businesses, even the legacy systems.

[00:07:30] You can think of automation that can come everywhere. You can think of acceleration that can come everywhere. So practically any use case that you can think of can potentially benefit from the capabilities that agents enable.

Capabilities and limitations of AI agents

---

[00:07:45] Melissa Perri: And Claude and ChatGPT, they're one type of AI, a generative AI LLM, right? There are multiple different types of AI that you could possibly build into these agents.

[00:07:55] Maryam Ashoori: Yeah, so if you look into Hugging Face today, for example, there are probably a hundred thousand models at this point. There has been lots of development on the open market in terms of creating open LLMs, but there is also development on the closed market, like Anthropic's Claude or OpenAI's ChatGPT that you mentioned.

[00:08:14] There is an LLM behind the scenes that is powering up these agents. But then around that you have the libraries and frameworks to connect these agents to the outside world: to the tools, to the APIs, and to the datasets, as part of the pipelines. That application can be one of the applications offered by the companies you mentioned, or you can create your own. If you are a product manager or a developer, you can just grab one of these models and go build your own for the purpose you are targeting.

[00:08:48] Melissa Perri: Can you give us an example of a company, or something that you've seen in practice, that developed an AI agent? What did they use it for, what did they do to put it together, and how did it work?

[00:08:58] Maryam Ashoori: Yeah, let's look at a very simple example. I would call it content-grounded question answering. A question comes in; let's say it's a customer service situation. Someone has a problem: their camera is not working. They go to the website of the camera provider and they ask a question: hey, my camera is not working.

[00:09:17] So the first thing that can happen is the agent goes to the instructions for that camera to see if it can troubleshoot the situation. We call this retrieval-augmented generation: there is an LLM behind the scenes that is retrieving information from that document, and it's basically helping the agent not to hallucinate but to respond based on the instructions that are available. But if the answer doesn't come up, now we can say: hey, agent, if you don't find the instruction in your internal guidelines and manuals, go fire up an online search to see if other people have experienced this. Through that API and action calling, the agent can automatically decide: I'm not confident in the accuracy of what I retrieved here, so I'm going to fire up a search online to see what's going on over there. It can retrieve some information through API calling and action calling and bring it back. And because that information is external, depending on how sensitive the situation is, it is probably presented and packaged to a human partner to validate, and if it's okay, then it's communicated to the end user.
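Sketched as code, that escalation flow might look like the following. The helper functions, their return values, and the 0.8 confidence threshold are illustrative assumptions, not any product's real API.

```python
# Hypothetical plumbing; swap in a real vector index, search API, and queue.
def retrieve(question: str, index: str) -> list:
    return ["manual p.12: hold the reset button for 10 seconds"]

def grounded_answer(question: str, context: list) -> tuple:
    return ("Try holding the reset button for 10 seconds.", 0.9)

def web_search(question: str) -> list:
    return ["forum post: this model needs a firmware update"]

def send_for_human_review(draft: str) -> str:
    return f"[queued for human review] {draft}"

def answer_support_question(question: str) -> str:
    # Step 1: retrieval-augmented generation over the product's own manuals.
    manual_pages = retrieve(question, index="camera-manuals")
    draft, confidence = grounded_answer(question, context=manual_pages)
    if confidence >= 0.8:
        return draft  # grounded in trusted internal docs, send directly

    # Step 2: low confidence, so the agent fires off an online search instead.
    draft, _ = grounded_answer(question, context=web_search(question))

    # Step 3: external content is less trusted, so a human validates it
    # before anything is communicated back to the end user.
    return send_for_human_review(draft)
```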

[00:10:31] Melissa Perri: So it sounds like there is some kind of discretion level that is programmed into these AIs, which is different than just: let me search, oh, I can't find it, right? It's: I searched, I can't find it, but now I know to take a different step. Is that the difference between having an AI agent that can string all these things together versus just a one-question-and-answer model like you would use on Claude?

[00:10:53] Maryam Ashoori: Yeah, so when you use an LLM, it's restricted to the knowledge of the LLM. You can pair the LLM with what we call an external body of knowledge. Let's say I'm a camera provider: I can connect that LLM, which is trained on public information, to proprietary, domain-specific knowledge that I own for my business, and help the LLM answer those questions.

[00:11:22] So now the body of knowledge for the LLM is expanded from what the LLM was trained on to include that extra body of knowledge. But that body of knowledge, what we call unstructured data, is text, and for most organizations the data doesn't live in text. For example, the customer information that we track is not necessarily just text.

[00:11:43] It might live in a database, as a record that we want to query. So in this situation we can define a series of tools that are available to the agent so that, as part of the pipeline we designed, it can take the action and pull the information. In this case: I didn't find what I was looking for in this body of knowledge, which was text.

[00:12:04] So now I can automatically query that database by converting the natural-language query that came in into SQL, and run that code automatically on the database, if I as the agent have access to it; retrieve the data that I'm looking for and bring it back. Or I can go online and do a search.

[00:12:24] So there is a series of tools available to the agent, but the agent is going to use the LLM underneath, with its reasoning and planning capabilities, to pick which one is the right tool to use here.
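One way to picture that tool-selection step is below, restating the `call_llm` stand-in from the earlier sketch. The tool descriptions and the read-only SQL guard are hypothetical; the point is that the LLM plans, and plain code enforces the boundaries.

```python
import json

def call_llm(prompt: str) -> str:
    raise NotImplementedError("model API stand-in, as in the earlier sketch")

TOOL_DESCRIPTIONS = """
- search_docs(query): look in the unstructured manuals (text)
- query_db(sql): run read-only SQL against the customer database
- web_search(query): search the public internet
"""

def is_read_only(sql: str) -> bool:
    # Crude guard for illustration: allow only plain SELECT statements.
    return sql.lstrip().lower().startswith("select")

def route_question(question: str) -> dict:
    # The LLM underneath does the reasoning: given the question and the
    # available tools, it picks one and produces that tool's input, e.g.
    # converting the natural-language question into SQL for query_db.
    choice = call_llm(
        f"Question: {question}\nTools:{TOOL_DESCRIPTIONS}\n"
        'Reply as JSON: {"tool": "...", "input": "..."}'
    )
    step = json.loads(choice)
    if step["tool"] == "query_db" and not is_read_only(step["input"]):
        raise ValueError("agent may not modify data")  # hard boundary
    return step
```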

[00:12:38] Melissa Perri: Your dev team is shipping faster than ever, but your tools? They're still stuck in 2012. monday.com's dev platform changes that. With Monday Dev, you get fully customizable workflows, real-time visibility across your development lifecycle, and seamless GitHub integration, without admin bottlenecks slowing you down.

[00:12:55] Whether you're working in the platform or straight from your IDE, Monday Dev keeps your teams aligned, focused, and moving fast, with AI-powered context built in. Go to monday.com/dev and see how your team can build without limits.

[00:13:08] There's a lot of talk out there about how one day we're gonna go to a developer with a bunch of AI agents, and that will be it; they'll go out and do the things for them. What are the risks that you see with AI agents that product managers need to be aware of, and leaders as well? And where do you think the limits are right now in what AI agents can actually do?

[00:13:27] Maryam Ashoori: I'll give you an example. We talked to about a thousand developers across the US building enterprise AI applications, and we asked them: do you use AI-assisted coding? Almost half of them said they frequently use it. We asked: how much time saving are you getting out of these tools? And most of them said one to two hours a day.

[00:13:50] About 4% said about six hours of saving per day. What it tells me is that if you as an engineer know how to effectively prompt these agents, or how to put together a system to get AI to work for you, you can maximize your productivity and stand out. So that's the opportunity that you can gain.

[00:14:14] But on the other side, there are limitations associated with this, and it's the mastery of developers to use the prompting, or the system, in a way where they are well aware of the limitations and manage around them, by working effectively with the AI and getting the AI to work for you.

[00:14:37] Why does it matter? For the LLMs of the past, like 2023, the absolute worst thing that could happen was AI generating inappropriate content. The AIs of today, as you mentioned, are agents: they take actions, they can leak data, they can wipe out your code while optimizing the code, right? So it's essential to have transparency and traceability of the actions of the agents.

[00:15:04] And the developer, or product manager, or whoever is interacting with those should set the strategy for the agent and supervise the actions, rather than completely automating what needs to happen. The second thing I wanted to highlight is the limitations that are carried forward from the LLM world: every agent is powered by an LLM underneath.

[00:15:30] So if the LLM is suffering from hallucination, lack of explainability, and all of those, that now applies to the new world too. If you're using it for code generation and you just say "generate this code" without properly prompting it, it may hallucinate code that is not necessarily what you want.

[00:15:48] It may work for a prototype, but it's not gonna work at production quality. So it's important for developers to understand those limitations and plan around them, and to capitalize on that as a differentiation of their skills, something that not everybody can achieve and deliver.

Guardrails and practical use of AI in teams

---

[00:16:07] Melissa Perri: This is a question, and I feel like you're the right person to ask it: hallucinations. I know that AI can hallucinate. Why does it hallucinate?

[00:16:16] Maryam Ashoori: Because there is no real reasoning, no logic of thinking, behind LLMs. This is unsupervised learning: the LLM is exposed to a very large body of information, and when you ask a question, it basically calculates the probability of the next token. It's just a world of next-token probabilities, nothing more. And because of this, the model can hallucinate: just by following the probability rules, there is a high probability that the output it generates sounds right. It looks good, and most of the time the models are very confident in generating it; you can feel the confidence in the tone. But it's not accurate information. It's just a collection of words nicely put together.
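A toy version of that mechanism makes it visible. The miniature "model" below has a handful of hand-written next-token probabilities instead of billions of learned ones, and notice that nothing in the loop ever checks whether the generated sentence is true.

```python
import random

# Tiny stand-in for an LLM: for each token, a distribution over next tokens.
NEXT_TOKEN = {
    "<start>":   [("the", 1.0)],
    "the":       [("camera", 0.6), ("manual", 0.4)],
    "camera":    [("supports", 0.7), ("ships", 0.3)],
    "manual":    [("helps", 1.0)],
    "supports":  [("4K", 0.5), ("Bluetooth", 0.5)],  # plausible, maybe false
    "ships":     [("tomorrow", 1.0)],
    "helps":     [("<end>", 1.0)],
    "4K":        [("<end>", 1.0)],
    "Bluetooth": [("<end>", 1.0)],
    "tomorrow":  [("<end>", 1.0)],
}

def sample_next(token: str) -> str:
    tokens, weights = zip(*NEXT_TOKEN[token])
    return random.choices(tokens, weights=weights)[0]

token, sentence = "<start>", []
while (token := sample_next(token)) != "<end>":
    sentence.append(token)

# Fluent and confidently phrased, but never fact-checked: the model only
# knows which word tends to follow which, not whether the claim is accurate.
print(" ".join(sentence))
```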

[00:17:08] Melissa Perri: So when we're trying to prevent hallucinations, or get better accuracy, or not have our AI agent just delete an entire code base: what, tactically, are product managers or developers doing to put guardrails or instructions or that strategy in there to mitigate that risk?

[00:17:25] Maryam Ashoori: There are two things they can do. One is using technology to identify some of those. For example, for agents we have been developing agentic guardrails that check things like context relevance and faithfulness to the content. As a product manager, you can put these guardrails into the flow of decision making for the agent to make sure it stays close to the truth. The second thing they can do is, when they design their workflows, make sure that a human is in the loop when there is sensitive information or when a very highly accurate output is needed. For example, if you are using AI to provide recommendations for where to eat, maybe the confidence can be low and nothing bad happens, unless we are talking about food allergies and such, which is a serious thing. So maybe, as the product owner of the app providing the recommendation, you make sure that if there is something in that query about food allergies, a human is in the loop, or that there is an extra check so the output is double-validated before it is communicated to the end user.
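As a sketch, that two-part defense (a guardrail check plus risk-based routing to a human) could look like this, reusing the stub helpers from the support-agent sketch above. The `faithfulness_check` function and the sensitive-term list are illustrative assumptions, not a named product's API.

```python
SENSITIVE_TERMS = ("allergy", "allergic", "anaphylaxis")  # illustrative list

def faithfulness_check(answer: str, context: list) -> bool:
    # Stand-in for a real groundedness metric (e.g. an LLM-as-judge score).
    return True

def recommend_restaurants(query: str) -> str:
    context = retrieve(query, index="restaurant-guides")
    answer, _ = grounded_answer(query, context=context)

    # Agentic guardrail: is the draft faithful to the retrieved context?
    if not faithfulness_check(answer, context):
        answer, _ = grounded_answer(
            "Answer strictly from the provided context: " + query, context
        )

    # Risk-based routing: low-stakes answers go straight out, but anything
    # touching food allergies gets a human in the loop before it ships.
    if any(term in query.lower() for term in SENSITIVE_TERMS):
        return send_for_human_review(answer)
    return answer
```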

[00:18:45] Melissa Perri: And it sounds like this has to do with risk. Like the higher the risk, the more you want to put these into place.

[00:18:50] Maryam Ashoori: Exactly. We call it calculated risk. As a product manager, you need to know what is non-negotiable and which risks can't be taken, and work around that.

[00:19:02] Melissa Perri: With your team, I'm very curious: how are you using AI? We've got vibe coding out there, we've got all these different tools. What are you putting into place, and where are you using it strategically?

[00:19:12] Maryam Ashoori: It is still fascinating to see how AI has been enabling every single role out there. In my department, I have product managers and engineers. When I'm talking to my product managers, they are using AI not just for vibe coding but for prototyping: they have an idea and they come back with a fully fledged working prototype.

[00:19:32] Sometimes I'm like: is this a prototype, or a fully functional tool that I'm looking at? But they're also using AI as a partner to do evaluation on the market perception of your product. You can fire up a search on everything that everyone has said recently on Reddit, in search, or from analysts; compile it all together from multiple communities and forums; bring it back; and then format the output that you want to see. For example, you can send an agent and say: go do a market requirements analysis on this product that has been out there, and organize it in the sections that I care about. In a few minutes it's ready to go. Obviously the models hallucinate, they make things up, they may not understand things clearly, so it's a back and forth to make sure, through prompt engineering, that you get the right prompt to get the model to deliver what you want. I've also seen it heavily used as a brainstorming partner. You're looking for ideas for a new feature: what can work? What is the competition like? What are the strengths of the competition? What are the weaknesses of my portfolio? How can I leverage them? What is the user problem? It can help you with that information. I've also seen people using it to generate PRDs for features.

[00:20:59] It's: okay, so we settled on this. Now I wanna define this feature. Who do you think is gonna benefit from it? What are the pain points? And basically the whole thing is still owned by the product managers, but it's accelerated big time. I can't say 10x or 100x, but I can see the progress and the speed of delivery that we can enable if AI is used effectively.

[00:21:26] Back to your question about understanding the limitations: if they don't understand the limitations, it might be misleading, with all those hallucinations that we talked about. Even the tone. Some of these chat applications out there are trained to please you, because they want the customer to feel good. So it comes back and says: oh, the product is well received and it's really good, and so on. So you wanna make sure that you set the tone and say: hey, stop praising me; help me with product thinking to improve. And that just comes out of understanding the limitations of the model and how to work around them effectively.

[00:22:10] Melissa Perri: What do you think is the future of AI when it comes to productivity for product managers?

[00:22:15] Maryam Ashoori: I think the boundaries between roles are gonna dissolve. There is gonna be more focus on building the right product versus just building the product, especially between product management and engineers.

[00:22:26] Historically we were saying that for a typical group, a ratio of one product manager to six to ten engineers is applicable. But now we've been hearing leaders talking about even one to 0.5 engineers, which indicates how important it is to identify the right problem. I'm less worried about the labeling of people. Maybe it's the developer who is now able to play the role of the product manager, because they have access to the same AI; or it can be the product managers who now, through vibe coding and all of that, can go down the development path. But I'm expecting the lines between these two roles to merge big time with AI.

[00:23:15] Melissa Perri: How do you think designers fit into that too, and the shape of interface design, as we move forward with more AI baked in?

[00:23:21] Maryam Ashoori: Very good question. I think designers need to focus their expertise on asking the right questions and testing the right things, versus building the UI itself. To me, they should be the masters of defining the right prompts for AI, to help the product managers and engineers achieve what they need based on the user preferences. Because we already see AI generating UIs. We already see it even creating code out of that UI.

[00:23:56] I think the role of the visual designer, the UX designer, is gonna evolve more toward a user researcher, and maybe a design manager: setting the right strategy and prompting, defining the right experience for the user to build, and then just getting AI to implement such an experience for them.

Creativity, UI standardization, and product strategy

---

[00:24:21] Melissa Perri: The one thing I am really interested in, and I want your take on this: I love the vibe coding stuff. I've used all the different tools, and I like to generate the same UI through all of them and see where they go. And I feel like they all converge. I'm wondering if it's gonna get to the point where we have the same UI everywhere.

[00:24:39] 'Cause everybody's using the same stuff. It's the same patterns; it's like a pattern library. So it's just using a pattern library and going for best practices. Do you think there's gonna be an advantage in the future to having a unique design, or some innovation there that's gonna set stuff apart?

[00:24:52] Maryam Ashoori: So my hypothesis is that because all these models are practically trained on the same internet data, eventually, if you don't tune them, or if you don't apply your nice designer touch, what is unique to your firm or to your business, it's gonna generate the same design that someone else has generated. What is gonna win this competition is the ones that grab that and add their differentiated touch to stand out. And I don't believe in universal UI designs. Maybe patterns, yes; patterns and interaction patterns are a different thing. But a universal UI design for all users globally, just treating people the same, is not gonna work. So it's essential to apply that expertise, to bring that uniqueness, because the LLMs by default can't offer that uniqueness.

[00:25:54] Melissa Perri: Yeah, it seems to me like having people who are super creative in this field of AI can help going forward: being able to look at it from a lens of, that's not just the way we've always done it, but here's where I can add value, here's where I deeply understand the value. That's what I'm taking away from you when you're saying the product managers need to really focus on this.

[00:26:14] The developers have to focus on the customers; the interface designers really have to come back to user research. It's: how do we get back to that problem instead of just generating stuff?

[00:26:23] Maryam Ashoori: The way that I think about AI is really as a tool. It can be a sophisticated tool that you treat as a partner, but it is still a tool. And the metaphor that keeps coming to my mind is the calculator. When I was going to school, at some point calculators at university were banned, because they wanted us to focus on manually doing the very complicated math.

[00:26:45] And now I feel like that world is gone, because the skill they are teaching students these days is how to leverage the calculator, even its advanced functions, or maybe AI, to go solve bigger problems, versus how to do these calculations manually. It's the same for all of the roles that you mentioned. AI is just a tool to give them acceleration: to improve the quality of their designs, to improve the quality of their code, and to improve the quality of the products that they are building. It gives you an option to bring agility to your design: create something, put it out there, do user testing, and in a few days just test whether it's the right thing before investing a lot of resources in building it. With that, I feel like we are gonna see a lot more applications out there, a lot more domain-specific, targeted applications solving niche problems, because the cost of development is going down. I'm expecting to see a lot of open development in the open ecosystem, beyond firewalls. So the power of community: designers being part of a community, developers being part of a community, influencing each other on the designs. These are the parts that can be enabled by AI, but not necessarily replaced by AI.

Enterprise AI adoption and building future-ready strategy

---

[00:28:33] Melissa Perri: I love the calculator analogy, and I totally agree with you on the community part. Now, you work with a lot of large enterprises, so I would love your take on this: I think so many large enterprises wanna do something with AI. They're readily trying to adopt it, but they struggle, I think, with security and how do we get approval for this?

[00:28:53] How do we actually bake that in? How have you seen large enterprises adopting AI in ways that get it into their systems and are successful with it?

[00:29:03] Maryam Ashoori: Start small. Identify the problem that you are trying to solve. I see a lot of POCs, proofs of concept, happening in the market. Once you have identified the use case, and you have used LLMs to identify the target performance, the desired performance, you can go back and experiment with other options that are available to you to create an optimized workload. Especially for enterprises, we are talking about millions of transactions per day. So if you think about using a very large general-purpose model behind the scenes, the compute cost, the latency, the energy that is gonna be consumed: it's just not scalable at the scale of enterprises.

[00:29:48] So I see a lot of enterprises starting small, identifying the problem, but then quickly moving to: what are the scaling requirements, and how can I go back and hand-select the tools that are applicable to my use case? Those thousand application developers that I told you about: we also asked them, on average, how many tools do you use to build an AI application?

[00:30:12] And the majority of them said five to 15 different tools. So you need to stitch together five to 15 different tools in a world that is evolving rapidly. Every three months in the AI world is a generation now, and some of these tool providers are startups that may not be relevant in a few months.

[00:30:36] So you are pulling them in, stitching them together, and you are asked to optimize them for the scale of the enterprise. And that's the area I see developers especially finding challenging. These are areas not necessarily helped by AI; these are architectural choices made by senior developers who have a very good understanding of what needs to happen to deliver such frameworks. And I think AI is gonna need a few years to get to the point where it can provide recommendations on architectures, or play the role of the architect. But just generating code for prototyping is something that is already out there.

[00:31:20] Melissa Perri: When you're thinking about all these startups that you're just talking about, and I can totally see that, where things are popping up every day. It's: hey, use this new tool. You go and you learn it, and then something better comes out, and then you gotta go learn that one. How do you think about adopting different AI technologies responsibly and with an eye towards the future? What should people be looking for?

[00:31:40] Maryam Ashoori: We asked people how much they are willing to spend, how many minutes they are willing to spend, on learning a new piece of technology. The majority of them said not more than one or two hours. So they can't afford to spend more than one to two hours a day, in a world where they need five to 15 different tools and that is changing rapidly. And most of these developers, they are developers; they are not machine learning engineers. They have limited AI knowledge to go and evaluate what's going on behind the scenes. So my recommendation for them is: don't be distracted by the technologies that are coming out every day. You can't catch up with that; it's gonna leave a certain level of exhaustion and fatigue for you. So that's not the right approach. Don't chase the solutions. Take one step back and identify the problem that you're trying to solve. Be very intentional about where you spend your time, because the technology world keeps changing, but the time that you have spent on a piece of technology, you can't get back. So basically, be very selective. Focus on your problem, the problem that you're trying to solve, and use it as a frame and lens through which you can evaluate what is noise and what is worth your time.

[00:33:03] Melissa Perri: I think our conversation keeps coming back down to having a product strategy, right? So, identifying the problem. What do you think are the core components of a really good AI product strategy? What do you need to have to say: now we can go pursue a solution for this?

[00:33:17] Maryam Ashoori: You are gonna need a clear problem, a clear goal: what are you trying to achieve? Whatever your goals are, at the end of the day you are gonna need a very clear understanding of the personas and users that are gonna be impacted by the AI application that you're building, and you're gonna need a very clear, I would call it technology-agnostic, view of how to achieve it. And I want to make the technology-agnostic part very bold, because the world is changing so fast that by the time your product gets out, there's a good chance the underlying technology is already outdated. I see a lot of enterprises and businesses that stick to old technology because the cost of upgrading is too high for them, which means they never get a chance to enjoy the best, state-of-the-art technology out there, just because they didn't plan it well. So I would say: with any decision that you make, try to be technology agnostic. Use the best that is available in the market for you, but have a plan, a layer in your architecture, to abstract the business case away from the technology part.
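One common shape for that abstraction layer is a single interface that the business logic depends on, with a thin adapter per provider. The classes below are a hypothetical sketch, not any vendor's real SDK; swapping models then becomes a configuration change rather than a rewrite.

```python
from typing import Protocol

class TextModel(Protocol):
    """The only model surface the business logic is allowed to see."""
    def complete(self, prompt: str) -> str: ...

class ProviderAModel:
    def complete(self, prompt: str) -> str:
        raise NotImplementedError("call vendor A's SDK here")

class ProviderBModel:
    def complete(self, prompt: str) -> str:
        raise NotImplementedError("call vendor B's SDK here")

def summarize_claim(model: TextModel, claim_text: str) -> str:
    # The business case depends on the abstraction, never on the vendor,
    # so adopting next year's best model touches only the adapter layer.
    return model.complete("Summarize this claim:\n" + claim_text)
```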

Emerging trends and opportunities for product leaders

---

[00:34:38] Melissa Perri: When you are thinking about emerging trends in AI, what do you think product leaders should pay attention to the most?

[00:34:44] Maryam Ashoori: It depends on what they are using AI for. Some product managers are using AI for the purpose of productivity. For those people, I think they should look into the community every day: how other product managers have been using it, what the sample prompts are, and just experiment on a daily basis. There is a group of product managers looking into using AI as a way to enrich the existing capabilities of their product. For that one, my suggestion is to really get a good understanding of the acceleration that gen AI can give you, and then go back to your product features and see which ones can benefit from it and make you stand out. The third one: I've seen product managers using AI as a new business opportunity. What are the new product ideas that can come to mind? One example is all of the startups in code development that are using one of the powerful LLMs out there behind the scenes to provide those applications for developers, and they get very popular.

[00:35:48] So that's a major business opportunity that only product managers who have a good understanding of the potential of LLMs can jump on and be first. And last but not least, I see opportunities on the provider side of the business. If you are a product manager and have access to AI researchers who are developing state-of-the-art technology, you can potentially package something on the provider side of the house that can be adopted by thousands or millions of technology consumers through embed, resell, or OEM. It represents a major opportunity. So depending on where you're operating, the path to take forward, in terms of what the future is gonna look like for you, is gonna be very different.

[00:36:35] Melissa Perri: And that sounds like there are lots of different directions emerging, which is really exciting. Maryam, I've got one last question for you. If you had to give advice to your younger self before you started your tech career, or early in your tech career, let's say, what would you say?

[00:36:49] Maryam Ashoori: Be very intentional about where to spend your time. Because the world changes, but you don't get that time back.

[00:36:57] Melissa Perri: I think that is fantastic advice. Thank you so much, Maryam, for being here. If people wanna reach out to you or learn more about you, where can they go?

[00:37:04] Maryam Ashoori: LinkedIn, find me on LinkedIn. I'm pretty active over there.

[00:37:08] Melissa Perri: Okay.

[00:37:08] Maryam Ashoori: And my current product is watsonx.governance.

[00:37:11] Melissa Perri: Great, so we'll put all those links at our show notes at productthinkingpodcast.com. Thank you so much for listening to the Product Thinking Podcast. We'll be back next week with another amazing guest, and in the meantime, make sure that you like and subscribe to this podcast so you never miss an episode.

[00:37:25] If you have any questions for me, go to dearmelissa.com and let me know what they are. I'll answer them on an upcoming episode. We'll see you next time.
