Episode 223: Behind the Rise of GitHub Copilot with Mario Rodriguez

Join us for an engaging episode of the Product Thinking Podcast as I speak with Mario Rodriguez, Chief Product Officer at GitHub, about the transformative role of AI in product management and development.

In this episode, Mario discusses the integration of AI into GitHub's strategy, particularly through GitHub Copilot, and how it leverages AI to improve creativity in software development. Mario provides a deep dive into how GitHub Copilot is changing the landscape for developers by freeing them from mundane tasks and allowing them to focus more on creative problem-solving.

He also shares his vision for the future of software development and the role AI will play in it. Listen in to learn how AI can be a powerful tool to amplify productivity and creativity in your product management strategy.

You'll hear us talk about:

  • 08:22 - Reimagining Developer Tools

Mario talks about how GitHub Copilot started as an AI-native product, focusing on augmenting human creativity rather than replacing it, and how it strategically integrates with existing developer workflows.

  • 24:12 - Surprises in Copilot Adoption

Discover the unexpected ways developers used GitHub Copilot and how it challenged initial assumptions about product acceptance and success in the market.

  • 32:27 - The Future of Product Management with AI

Explore how AI is shaping the product management field and what it means for the role of product managers in this fast-evolving industry.

Episode Transcript:

[00:00:00] Mario Rodriguez: When you have a product, it's funny. Even though you're very proud of it, you're also very critical of it, because you know all of the flaws it has. We also knew it was magical. It was something new. I do not come from MLOps or AI. So if you were to ask me: Mario, you designed a product, you get it out there, and 70% of the people do not actually accept the suggestion? I'm telling you, that's not a good product. But in this case, it was completely different. So that surprised me: oh my God, only a 30% or 20% acceptance rate can create something that changes the world. And that was a learning even for me.

We just finished a set of AI days in one of the teams. They just took an entire day to learn more and more. Even us, we're so busy that sometimes we miss announcements out there, and we have to take a series of days just to learn what is in the industry and how we can use that appropriately as well. They go and use a lot of products, including Copilot by the way, and they try to use them in novel ways and see what they can learn. Then, at the end of the day, they share everything we learned in a channel. So I would say AI days inside your company are something really powerful.

Intro

---

[00:01:02] PreRoll: Creating great products isn't just about product managers and their day-to-day interactions with developers. It's about how an organization supports products as a whole: the systems, the processes, and the cultures in place that help companies deliver value to their customers. With the help of some boundary-pushing guests and inspiration from your most pressing product questions, we'll dive into this system from every angle and help you find your way.

Think like a great product leader. This is the Product Thinking Podcast. Here's your host, Melissa Perri.

[00:01:39] Melissa Perri: Hello, and welcome to another episode of the Product Thinking Podcast. Today I am super excited to introduce Mario Rodriguez, the Chief Product Officer at GitHub. Mario oversees GitHub Copilot, the AI-powered developer platform trusted by more than 150 million developers and 77,000 organizations. I think Copilot is almost a household name at this point. I'm thrilled to discuss with Mario how product leaders can adapt to the evolving world of AI and emergent technologies.

But before we talk to Mario, it is time for Dear Melissa. This is a segment of the show where you can ask me any real burning product management questions, and I answer them here every single week. Go to dearmelissa.com and let me know what's on your mind.

Hey, product people. I have some very exciting news. Our new Mastering Product Strategy course is now live on Product Institute. I've been working on this course for years to help product leaders tackle one of the biggest challenges I see every day: creating product strategies that drive real business results.

If you're ready to level up your strategy skills, head over to productinstitute.com and use code launch for $200 off at checkout.

Here's this week's question.

Dear Melissa

---

[00:02:45] Melissa Perri: How would you change your approach to discovery when you're responsible for reducing fraud on a platform?

So I think the key here is to remember that discovery is about understanding what the problem is. So at this point, you have a business problem, which is reducing fraud. So we still have to be able to protect the revenue of the business and protect the business itself. Now you wanna go back and figure out what's causing that fraud, right?

If you have too much fraud, it always comes back to a customer problem too, because you might put very stringent controls in place and not allow people to transact, or the fraud will bring down your platform and then you can't provide your service. So at the end of the day, if you can't keep operating, this becomes both a business and a customer problem.

But in discovery, the idea here is to figure out what is causing this problem. So I would take it back and start to say, where is our fraud coming from? Is it coming from certain areas? Is it from certain pockets of things? And then you wanna understand, too, the user behavior or the things leading up to that fraud actually happening.

So this makes you take on the role of the people actually transacting with your platform, stepping outside of it, empathizing with that customer, and trying to figure out how to intercept it and how we can actually test and solve those problems. So when it comes to discovery, it's not just about customer interviews, and in this case I'm sure your fraudulent customers do not wanna be interviewed. That's okay. It's about solving problems, right? Evaluating the problem, identifying the problem. And in this case, you've got a business problem. How do I actually break that down in a systematic way?

Try to figure out what the leading indicators are that show where this fraud might be, and then figure out how to protect the business. That's what I would do from a discovery perspective: try to get really smart on that. Try to understand where the bad actors are coming from. What are their motivations?

How do you actually anticipate that? There's some empathy there, 'cause you do have to empathize with your fraudsters. Put yourself in their shoes and figure out, how do I intercept this and start to make this place a little bit safer. So I hope that helps and I wish you the best on reducing your fraud.

Now let's go talk to Mario.

Welcome, Mario. It's great to have you on the podcast.

[00:04:46] Mario Rodriguez: Thank you for having me, Melissa. I'm really excited to be here and to really talk more about product and AI.

From Microsoft to GitHub CPO

---

[00:04:52] Melissa Perri: Yeah, and I'm so excited to dive into GitHub Copilot, 'cause I think everybody is talking about it. But first, can you tell us a little bit about how you came to be the Chief Product Officer of GitHub?

[00:05:03] Mario Rodriguez: Yes, it's a little bit of a story. Maybe I'll start all the way back. I actually joined Microsoft in 2002 and worked in games, and that was an amazing experience. I got to work with many great producers, Peter Mullinax and people who really loved their craft. I ended up doing this game called Forza Motorsport, and as I was diving deeper and deeper into that space, I'm like, I'm gonna become a producer of games. And then after doing it for a little while, I'm like, I really like product. Maybe not games overall, but maybe something on the software side. And I was reflecting about, okay, if I'm gonna go and invest in someplace, is it gonna be mainly in a consumer product or is it gonna be on the backend of things?

And I was like, I think my passion really lies in developer tools. So I was lucky enough to get a job as a PM 1. I did the switch from software engineering to product; many people do that. And I got a job on this group called Team Foundation Server. Since then, for 20-plus years, I've been doing developer tools. And I love doing developer tools. The way that I describe it to my wife and my kids is, look, humanity already invented fire and the wheel, and everything we're gonna be creating from this point on is gonna be powered by software. And if I'm gonna be here on Earth for a limited time, giving software developers the best tools available in the market seems like a life pretty well spent. So I had this passion for developer tools. The opportunity came where Microsoft was acquiring GitHub, and I moved into that team at that moment. I joined GitHub around 2018 and was helping at the very beginning on enterprise, expanding GitHub from what it was at that moment, which is the home of open source with the best version control product, into this end-to-end platform that can take you from idea to production. And that was really exciting for me, because if you take that amazing product that had a lot of taste, a lot of developer love, and make it into this thing that everyone uses, then you could accelerate human progress because of that. So I worked a little bit on enterprise, then on collaboration tools, on planning and tracking. I took what I call, inside of GitHub, a hiatus for a year and a half, and I'm like, we need to get issues and project management in GitHub for product managers to be better than what it is today. So I spent a year and a half doing that, and then I got an amazing opportunity to lead Copilot, and that product took off, and from there I transitioned into the chief product officer role. So that's a little bit of the story behind all of that.

[00:07:58] Melissa Perri: Amazing. And I love your passion for developer tools.

[00:08:01] Mario Rodriguez: Yes.

[00:08:01] Melissa Perri: It's definitely coming through. So let's jump into it. Let's talk about Copilot, 'cause this is the thing that everybody is talking about these days. GitHub has been super successful with your launch of Copilot. Tell us how it got started.

How did you start to imagine this AI-native product and what AI could do to help developers, fitting seamlessly in?

[00:08:22] Mario Rodriguez: Yes. One of the things that I actually love about Copilot is the "co" part. We have this thing where the human is at the center, and then we're augmenting you to do more and to really spend time on the things that us humans are really good at, or at least have an advantage on, which is creativity.

But yeah. And GitHub has this thing called the GitHub Next team. The GitHub Next team is in charge of looking for not only Horizon One initiatives that can make GitHub better, but really more Horizon Two, think two years from now, or Horizon Three, things that may never actually happen as well. They had been in talks with OpenAI, and OpenAI had this thing called GPT-3; internally, I think it was called something like Codex for us, the Codex model. And they had been collaborating with them on that technology, essentially saying, look, we have these LLMs that are very special at understanding natural language and solving problems, specifically in the coding space.

And we have seen that play out. These models have gotten significantly better at natural language understanding and significantly better at understanding code and being able to code as well. And the GitHub Next team created a paper that said, is this science or fiction?

More or less, if you wanna think about it that way. And that was the start within GitHub of, oh my God, is this something special or not? When I first saw it, what really impressed me, and why I said, wow, this is probably gonna change everything: in our space we have been able to autocomplete code for a long time. We have this technology called IntelliSense, and there have been a lot of ML models that helped you do those types of things. But what I had never experienced in my life is being able to write a comment describing something in natural language, and then have the system, the AI model, understand that and translate it into code. And that natural language conversion to code was, again, something that I had never experienced, and I'm like, oh my God, we have to go all in on this.

Now, the interesting part of this dilemma is the following. What we were playing with, at least in the research lab, was just a technology. It could do things like solve Python problems. Sometimes it'd take a hundred shots to do it, which is not something that you could ship in a product. Just imagine if, to solve anything, you had to tell it to do something a hundred times.

That's not something that anyone will buy. So we have this amazing technology. We have these preexisting tools that developers use. How do you marry those two things? And that's easier said than done. Sometimes you're gonna be too early with a technology. Sometimes you're actually gonna go and put it in a product in the wrong way, and then you're gonna, not necessarily waste the technology, but you're too early with the wrong assumptions as well.

And yeah, that's the craft of product, in my opinion. It's the complexity of the problems. No matter how good my PRD is, at the end, the product has to actually make it and be good. So we played a lot with that tech, making it into GitHub Copilot. And the simplicity of what we shipped at the beginning was incredible.

There were only three things that made Copilot successful, in my opinion. Number one is we actually made it so that as you're coding, you're not interrupted. We have this thing called ghost text: as you were doing your normal coding, it would come in and ask, do you wanna take the suggestion or not? At the very beginning it was a pop-up. We had many ways of doing that in the UX, and we found one that worked. The second thing was, it had to be fast. Because we chose that modality of showing you the suggestion, it had to be really fast, so we got it down to a hundred milliseconds, which really meant that we kept you in the flow of developing. The third thing is we needed to bring more context in, so the suggestion was something that was more personalized to the code base and to you, and not just to what the model knew about the world. A lot of people get confused; they think the suggestions come from all of this corpus of code, and yes, a little bit of that is true, but the context and the LLM together is what really generates a new thing altogether, and we just needed to tune that. And those three things are what made the product successful: the fact that it was ghost text, the fact that it was super fast, and the fact that it was contextual and gave you pretty high quality suggestions because of that. And then we just watched that take off. We were the first copilot, in the name and in the product as well. And we're very proud of that.
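To make those three ingredients concrete, here is a minimal sketch in Python of how an inline-suggestion loop can combine local context with a hard latency budget. All names are hypothetical illustrations of the pattern Mario describes, not how the Copilot extension is actually implemented.

```python
import time

LATENCY_BUDGET_S = 0.100  # the ~100 ms target mentioned above

def build_prompt(prefix: str, neighbor_snippets: list[str]) -> str:
    """Personalize the request: code near the cursor plus related open files,
    so the suggestion reflects this code base, not just the model's priors."""
    return "\n".join(neighbor_snippets + [prefix])

def ghost_text_suggestion(prefix, neighbor_snippets, complete_fn):
    """Return a suggestion to render inline as ghost text, or None.

    complete_fn is any completion backend; if it misses the latency budget,
    we show nothing rather than interrupt the developer's flow.
    """
    start = time.monotonic()
    suggestion = complete_fn(build_prompt(prefix, neighbor_snippets))
    if time.monotonic() - start > LATENCY_BUDGET_S:
        return None  # too slow: breaking the flow is worse than staying quiet
    return suggestion  # editor renders it greyed out; Tab accepts, typing dismisses
```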

Failing toward the right UX for Copilot

---

[00:13:13] Melissa Perri: How did you settle on those three factors? What was the process to actually figure out what was gonna make it successful?

[00:13:19] Mario Rodriguez: Yes, it's interesting. A lot of what we do at GitHub, we operate on a set of principles, right? And, you know this very well because of the Build Trap, we're very much outcome driven instead of output driven on these things. And because of that, we also value the learning loop, your iteration velocity through that.

So if there is a week and we could do three experiments, we learn three times. If in a week we could only do one experiment, then we're gonna learn once. So we value learning fast. The way that we got there was through a bunch of failures, believe it or not. You try this thing and you're like, nope, that did not work as a UX modality.

You try this other one; that did not work. You try another one, and you're like, this is starting to feel like it. So you play a lot with the product, you experiment a lot, and then you just learn and learn. And if you do that, and you have some people that have good taste and some expertise in the field, because you do need expertise in developer tools, then you start getting to a place where you're like, okay, now I can actually iterate towards something that is a good product. So it was just through a lot of iteration, and then a significant amount of engineering work to figure out, okay, how to make it faster.

How to do this thing called fill-in-the-middle, which was very important to us, 'cause a lot of development doesn't just happen on a new line; at times, you do wanna fill in the middle. So then you start marrying the technology with the product. But it was through failures, which is something I don't think the product discipline talks a lot about.

We like to celebrate a lot of the successes, but I actually think you end up creating a better product through failures than through successes, the majority of the time.
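For readers curious what the fill-in-the-middle idea looks like mechanically, here is a small sketch. The sentinel-token layout below follows the StarCoder-style FIM convention and is purely illustrative; the episode doesn't say how Copilot's models implement it.

```python
def build_fim_prompt(prefix: str, suffix: str) -> str:
    """Arrange the code around the cursor so a FIM-trained model
    generates the missing middle. Sentinel tokens vary by model."""
    return f"<fim_prefix>{prefix}<fim_suffix>{suffix}<fim_middle>"

# The cursor sits inside an existing function body, not at the end of a file:
prefix = "def mean(xs):\n    total = "
suffix = "\n    return total / len(xs)\n"
prompt = build_fim_prompt(prefix, suffix)
# A FIM-trained model completing this prompt should produce something
# like "sum(xs)": code that fits between the prefix and the suffix.
```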

[00:15:07] Melissa Perri: Yeah. What were some of the things that you tried that you decided, through testing, were not what you were gonna go with?

[00:15:13] Mario Rodriguez: Yeah, so like I said, sometimes at the very beginning you would just be asking Copilot through some modal, and that was just too disruptive. We tried other things where the suggestions would appear on the right-hand side, in another panel. So it's not a modal, but it's not in your flow. And what we found out repeatedly is that if it's not in the flow, it just does not get used. So then you start to think about, okay, then how do you put it in the flow? Okay, the best way is to be on top of what IntelliSense already does today, but with a slightly different treatment on it.

Okay, so then do that. Does that feel good with no technology limitations? Yes. Okay, then how do you marry that with the technology? So it was a lot of DevX, or developer experiences, that we failed at, and we ended up at: no, it needs to be inline. And then the other stuff is through a lot of iteration on, is the context right?

No? Why is it not right? Then you do these evals; maybe we could talk about that later. But you do a lot of evals, and then you play with the product a significant amount as well. So you have a little bit of your own intuition together with some evals. Then that gets you into a place where you can iterate towards a product you could show.

Using evals and metrics to refine AI models

---

[00:16:23] Melissa Perri: Let's get into evals. What are you running there? How does it work?

[00:16:27] Mario Rodriguez: Yeah. I think if you ask me what is the most important thing about creating AI products, I would hands down tell you evals. And there are two ways that we think about it. There's, of course, offline evaluation, and the key there is to get the right tests. But once you have the right tests, you'll find out that you could pass your offline evals.

Somehow in your organization there will be an incentive system where you get very good at your offline evaluations. So what we started noticing is, okay, we're starting to get really good on offline evals, but sometimes we're not seeing that when we play with the product. So now let's go and evolve from offline evaluations to online evaluations. Then you have to set up all of that infrastructure, 'cause you have privacy in mind. You have to set up flighting, and for a company like us, that's sometimes really easy to do. But if you think about your average enterprise company, how do you set up flighting and all those things across 50,000 teller machines or something like that? That's a lot more involved engineering-wise for some of the enterprise companies. For us, you have to set up your online evals, and then you start getting, oh my God, what worked in offline evals doesn't work as well online. And by the way, in my opinion, in product and AI, that should be your expectation.

In my opinion, offline evaluation is often just gonna make you feel a little bit better about what is happening, but it is not gonna be a hundred percent representative, especially at scale, of what is really gonna happen in your product. And let me tell you, you make a very different set of decisions based on the success rate of some of these AI features.

Meaning, let's say a suggestion only gets accepted 20% of the time. Whether your success rate is 20%, 50%, or 90%, the actual product will completely change, because at 20% you're gonna have to walk the user through a lot of failure points and be like, oh, you think it's not that good yet, but for this thing it works really well.

So you're trying to get them to the 20% that really works. When you're at 90%, your product works well at scale, and then you're really trying to design this: how is the experience going to differ when it doesn't work and when it does? How are you gonna continue to have that trust overall? And then when it works 99% of the time, you're just moving fast, and you almost don't care about what it doesn't do well. So we set up online evals, we have our offline evals, and then even after that, you have to have this other thing, which is, okay, what are the metrics you're gonna have? How are those metrics going to get gamed, essentially? And then how are you gonna listen to: this one gets better, but this other one, I would say, suffers because of it.

But net-net, the actual thing for the user was better, or net-net it was significantly worse than what they were experiencing before. So for us there was a lot of trial and learning on the actual metrics. Once you have your offline evals and your online evals, then you have to choose these metrics that really make sense. I could give you a couple of examples.
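As a rough illustration of why offline evals are both useful and easy to overfit: they boil down to frozen inputs plus a pass/fail check. The harness below is a hypothetical sketch; the function names and test-case shape are invented for the example.

```python
def run_offline_eval(suggest_fn, test_cases):
    """Score a suggestion backend against a fixed test set (pass rate).

    Because the inputs never change, a team can tune until this number
    looks great while online behavior tells a different story.
    """
    passed = sum(1 for case in test_cases if case["check"](suggest_fn(case["prompt"])))
    return passed / len(test_cases)

# A toy test case: the check inspects (or executes) the model's output.
cases = [
    {"prompt": "def add(a, b):\n    return ",
     "check": lambda out: "a + b" in out},
]
# pass_rate = run_offline_eval(my_suggest_fn, cases)  # my_suggest_fn: any backend
```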

[00:19:43] Melissa Perri: Yeah. What kind of metrics made sense for you?

[00:19:46] Mario Rodriguez: So, for example, accepted, or acceptance rate; we call it AR. Acceptance rate is: if you ask Copilot for something and we suggest something, or maybe you didn't even ask Copilot and we just show you that suggestion, did you accept it or not? It's very binary, right? Yes or no. We have another one that is accepted and retained characters.

[00:20:09] Melissa Perri: Oh.

[00:20:10] Mario Rodriguez: After you accepted, how much of what you accepted did you change? And then we have other ones that have to do with, okay, based on the suggestion: if we suggested one line, what is then the ARC, or accepted and retained characters?

And if we suggested 20 lines, how much of that did you end up changing? It's very different. So as an example, I could have high acceptance rates if what I'm suggesting to you is smaller. When you're in Gmail or in Google products and they give you a suggestion, you can see that for the most part it's very short.

And hence you're probably gonna accept it. But if they suggest a whole paragraph, you're probably gonna accept it and then change a bunch of things in it. So we started measuring a lot more things that have to do not only with whether you accepted, but how much you accept, why you ended up accepting, in which languages, and then how much you ended up changing and how it changed as well.

And the combination of all of those things, together with latency, meaning how fast we ended up giving you the suggestion, puts a story behind the quality of the product overall. And then we have a lot of what I call must-have tests that always need to pass no matter what, because we know if we're regressing those, then the quality of the experience is just not gonna be good.

And then we have other things where we're like, are the models getting better? Things that didn't work in the last iteration of the model, we'll try with the new models and see how well they do on those.

[00:21:37] Melissa Perri: Okay.

[00:21:37] Mario Rodriguez: And there are a lot of benchmarks out there; the software engineering benchmark SWE-bench Verified is one of those, as an example.
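Here is a hedged sketch of the two signals Mario names: acceptance rate (AR) and accepted and retained characters (ARC). The event shape and the diff-based retention measure are stand-ins; GitHub's exact definitions aren't spelled out in the episode.

```python
from difflib import SequenceMatcher

def acceptance_rate(events):
    """AR: of all suggestions shown, the fraction accepted (binary per event)."""
    return sum(e["accepted"] for e in events) / len(events) if events else 0.0

def retained_fraction(accepted_text: str, text_after_edits: str) -> float:
    """ARC-style signal: how much of an accepted suggestion survives later edits.
    A plain character diff stands in for the real (non-public) definition."""
    blocks = SequenceMatcher(None, accepted_text, text_after_edits).get_matching_blocks()
    return sum(b.size for b in blocks) / len(accepted_text) if accepted_text else 0.0

# A suggestion can score high on AR but low on retention once the user rewrites it:
arc = retained_fraction("result = compute(a, b)", "result = a + b")  # well below 1.0
```

Bucketing these by suggestion size (one line versus twenty) is what separates "short suggestions get accepted" from "long suggestions get accepted and then heavily rewritten."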

[00:21:44] Melissa Perri: How are you using those metrics or the feedback to help improve the models? Is it like automatically learning about them? Are you going through and looking at, what's happening and then taking that and deciding how to train the model? What's the feedback mechanism there in the learning?

[00:21:58] Mario Rodriguez: Yes. So a lot of what we do is we run A/B flights, and we run them all the way until we get something statistically significant, some stat where we're like, okay, we can trust this. And we usually go ahead and run the old one and the new ones, so those are A/B tests. There are times where we're like, we don't even care about the old one; we just cannot run two versions of this. So we're actually doing B and C, with no A, no actual control group of what exists today. And there are good reasons to do that at times. When you're feeling pretty good about some tech, then actually looking at A is not gonna give you anything, and all you're gonna do is be slower in making a set of decisions, in my opinion. So sometimes we just do B and C. But what we do is we flight them, either A and B or some type of B and C, and then we take a look at them. Usually for us nowadays, it takes around seven days to get something back that we can trust. Sometimes, depending on where we put it, it might take us only hours, but for the most part we try to look at the last three days or the last seven days. And then maybe mid-Monday or mid-Tuesday we take a look at those evals, and we're like, okay, where is it failing and where is it doing well? So sometimes, as an example, we do something and we're like, hey, AR is shooting up through the roof, people are accepting things like crazy. But then we're seeing ARC, which is our accepted and retained characters, go down significantly.

And we're like, that's not what we wanna do. If you're accepting a bunch of things all the time and then changing them, you're probably gonna be pretty upset at Copilot, as an example. And then we have other metrics that we try to control as well. But think about it on a weekly basis:

we're trying to flight something and learn from it every week. I think Kevin Scott, who is the CTO of Microsoft, always said, if you're not flighting something or learning something every week in AI, you're probably doing it wrong. And sometimes, around Christmas, we don't do that, but for the most part we try to always be learning every week.
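As a back-of-the-envelope illustration of "run the flight until we get a stat we can trust": a two-proportion z-test on acceptance rate between two variants. This is a generic frequentist check with made-up numbers, not GitHub's actual analysis pipeline, and as Mario notes above, a win on AR still has to clear guardrails like ARC.

```python
from math import sqrt

def two_proportion_z(accepts_a, shown_a, accepts_b, shown_b):
    """z-score for whether variant B's acceptance rate truly differs from A's."""
    p_a, p_b = accepts_a / shown_a, accepts_b / shown_b
    pooled = (accepts_a + accepts_b) / (shown_a + shown_b)
    se = sqrt(pooled * (1 - pooled) * (1 / shown_a + 1 / shown_b))
    return (p_b - p_a) / se

# Hypothetical 7-day window: 20% vs 23% acceptance over 10,000 impressions each.
z = two_proportion_z(2_000, 10_000, 2_300, 10_000)
# |z| > 1.96 is roughly significant at the 5% level for a two-sided test.
```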

Seeing Copilot in the wild: surprises and adoption

---

[00:24:03] Melissa Perri: When you launched Copilot and you started watching people use it, what surprised you? What kind of things and trends did you observe in how developers are interacting with it?

[00:24:12] Mario Rodriguez: Yeah, when you have a product, it's funny. Even though you're very proud of it, you're also very critical of it, because you know all of the flaws it has. And we knew, look, we develop for a living; GitHub is a developer company, right? So we knew the places where it was not doing well whenever we were using it on a daily basis.

We also knew it was magical. It was something new. And when you get innovation out there, what really made it for me is how many people used it and how many people absolutely loved it. Even though, maybe when we first released it, acceptance rates were like in the 20%, 30% range. So just think about this: 70% of the time that we suggested something to you, you did not accept it. You would think that makes a horrible product, but no, because of all of the value when you did. People just absolutely loved it. And that was a learning even for me. I do not come from MLOps or AI. So if you were to ask me: Mario, you designed a product, you get it out there, and 70% of the people do not actually accept the suggestion?

I'm telling you, that's not a good product. But in this case, it was completely different. So that surprised me: oh my God, only a 30% or 20% acceptance rate can create something that changes the world. And I could see it. Then the second thing, it didn't exactly surprise me, but there were so many other cool things that people did with it.

How many people ended up programming just by giving comments to Copilot instead of having to write the entirety of that code. And how many people did that across many programming languages, in natural language. So I could be programming in Spanish, as an example, or at least learning how to program in Spanish, as another example.

So I think that was one of the things that also surprised me: the ability to use that model in multiple languages, not only programming languages, but also the languages we speak.

[00:26:13] Melissa Perri: That's really cool. When you came into GitHub, you mentioned you started working on bringing it into more enterprises, expanding it there. How have you seen the adoption of Copilot in larger companies versus startups and scale-ups? What's been the trend of adopting AI across that?

[00:26:31] Mario Rodriguez: Yeah. It's exponential, and you're having a lot of fun, but it feels like a tornado. I think there's a book out there called Inside the Tornado, and it recounts how Intel grew and all of those types of things. That's how it felt, because we had this product where people are essentially placing orders, trying to get it.

But at the same time, it's a brand new way of programming, so you do have to do a lot of skilling, and there was a lot of fear of AI at that moment too. So you have to do a lot of education, and then you have to do a lot of work on, look, we don't train on any of this code that belongs to our users, right?

So you also have to make sure that you have all these controls that you can show your enterprise customers: that their data is safe, that their data is not trained on, and all those things. So we created a trust center. We did tours with them to show them how even the flow and the diagram works.

How do LLMs work? That was another thing we needed to get a lot of education out there on. But it just took off, and it was constantly going to customers, doing skilling, doing enablement, supporting our revenue team, supporting our go-to-market team, supporting our customer success teams.

And it's pretty fun when you're working on something that is growing exponentially, where you get product-market fit within a period of months, and then you continuously work on the quality of it and you're seeing it improve, and you're having people program. Like, my 9-year-old and my 7-year-old can now program, mainly through natural language, and that was possible because of the beginning of that with Copilot.

[00:28:10] Melissa Perri: That's so cool, and it's so fun to watch that become more accessible to people.

[00:28:14] Mario Rodriguez: A hundred percent. We sometimes talk about, why is it that professional developers are the only developers? In my opinion, it's like saying, hey, only Michelin-star chefs can be called chefs, or only the people that ride the Tour de France can be said to know how to bike.

No. GitHub has this dream of creating one billion developers, that's 10% of the world population, by, I don't know, 2050 or so. Imagine if we have one billion people that through natural language can create software to better the world. That would be special, and we should call those people developers too.

Why is it that it's only professional developers? They're creating something, and software is at the core of that too. That they're doing it through natural language is slightly different, but I think they can be called developers. So I think that's one of the key things we still have to go and achieve at GitHub: to realize that vision that making software should be accessible to a billion people.

AI’s role in augmenting vs. replacing developers

---

[00:29:16] Melissa Perri: I love that idea. I think it also touches on some of the fear behind AI, right? That it's just going to replace all the developers. Before we started recording, I told you that I've heard a lot of investors out there and a lot of people say, hey, how many developers do we think we can replace if we just bring in AI?

How much more efficiency should we get out of here? And that's been the talk out there a lot. How do you see AI either replacing or enhancing developers? How do you think it's gonna impact the role?

[00:29:45] Mario Rodriguez: With Copilot, we view the human at the center, and we view it as augmenting the developer, or augmenting these product, engineering, and design teams as well. And the reason why I say that, and look, you're always gonna have investors or places in the industry where they're trying to find efficiencies overall, but the way that I think about it is, every time I go to one of my customers, their backlog exceeds their operational budget, more like the OPEX budget overall, meaning they wanna achieve more things than they have people for. That has been, I think, a constant

[00:30:24] Melissa Perri: For sure.

[00:30:25] Mario Rodriguez: in the enterprise space. I never go to a customer and they're like, look, we did everything, we're giving everyone a vacation, we don't have any other good ideas that we need to implement this quarter. In fact, it's always, we're behind on this transformation, we have these other 50 things that this specific customer wants and we can only do one this quarter, let's run a Monte Carlo simulation on how our plan is gonna end up, and all of those types of things. So for me it's, shouldn't we be using technology more to create more output overall and achieve more outcomes overall? I think Satya talks sometimes about the beauty of AI: if we realize the vision, we're gonna be able to increase the world's GDP by 10%. Wouldn't that be great?

We've been stuck with very low percentage increases, maybe 1% or 2% in developed nations, but imagine if the whole world could increase GDP by 10%, and what that means for society. So we think the way to do that, if we believe in the vision that, hey, everything's gonna be powered by software: let's get through our backlogs a little bit faster. Let's make sure that our technical debt gets reduced. Let's make sure we're producing higher quality software. Let's make sure we're doing that with fewer tickets for bugs and defects that we have to deal with in production. Let's get there, and then from there we take a look at places that have efficiency gains or not.

But I think, for me, where I spend a lot of my time in conversations with our customers is mainly on that: how can you get through your backlog faster, achieve a set of outcomes because of that, create this AI leverage, and then hopefully have your company grow because of it.

[00:32:08] Melissa Perri: Yeah, I think that's a great way to think about it too. And I just hear so much fear about, this is all gonna go away. And then there's the product manager one too, where people are saying, do we even need product managers anymore? Is product management dead? How have you seen

[00:32:22] Mario Rodriguez: Yes.

[00:32:22] Melissa Perri: AI shaping the way that we think about product management and the work that we do as product managers?

[00:32:27] Mario Rodriguez: Yeah, a hundred percent. I lead product for an organization, and design as well, and we have these conversations, like, what is the future of product management, even inside of GitHub? If I reflect a little bit on it, product is a new discipline out there. If you think about it, humanity knows how to do agriculture very well, but creating software, we haven't been doing that for hundreds and hundreds of years, right?

It's been something of the last 70 years or so, and then just recently this role of product management was really born and started gaining momentum. I don't even know if we as an industry really understand the role of product management across all organizations.

Some people really look at that role as program management. Some people look at that role as project management. And for me, it's not about those things, although there's a fair amount that you have to do in product to make sure that we have the right programs, and a fair amount that you have to do to make sure that projects end up being delivered. But for me, the craft in product is about achieving those outcomes. And I think, as I was telling you, if our role was just to create great PRDs, oh my God, life would be so much easier, and I could probably retire and be on a beach, because models can do those things right now very well at times. A new model came out, GPT-4.1, we just released it, and actually, as we speak, I don't know, three hours ago, we also released o3 and o4-mini. So these models are getting really good at creating PRDs, in my opinion, and at taking something that I have in my head and letting me rubber-duck with it: is this the right strategy?

What are your arguments against this? How should I make it better? But I feel like that's just like one small part of what product really is. For me, product is, look, there's this customers. They want to hire your product for something and people talk about jtbd or jobs to be done. There's many ways that I think Ryan Singer, uses shape of as of doing some of this as well.

So there are many ways that you could think about what is the problem that you're solving. But there's a problem that your product is gonna be hired for, and there's a set of business outcomes that you wanna achieve when you create that product. Marrying those, that's where the beauty comes.

And a lot of doing great product management is the learning loop of figuring out how you take the problem a user has and get your product to solve that problem. And that is incredibly hard at times to do well. It's like investing: Warren Buffett and Charlie Munger are great investors because they make great investments; it's that plain. So great product managers create products that achieve the outcomes of the business and get hired by their customers to do the job the customer hired that product to do. And if you marry those things, then you have something special.

So for us at GitHub, going back to your question, it's not just, how do we create PRDs? Yes, we have to do that. Not just creating amazing change. Yes, we have to do that. Not just, hey, let's make sure that we're paying attention to all our customer feedback. Yes, we have to do that, and AI augments us to do more of that, better.

So instead of taking 24 hours, maybe I take two hours right now. But the rest of the time that we're saving, right? It is all spent on creating a great product: figuring out what is the outcome that we wanna achieve, how do we measure that, how do we make sure that our customers find it, and then how we end up marrying that with the business, meaning: how do you price it, how do you package it, how do you make it available? All of those hard things.

Embedding AI into product strategy and practice

---

[00:36:26] Melissa Perri: What I think is neat about your story, especially with Copilot and AI, is that you basically reimagined the developer experience around this, right? GitHub did that from the beginning, but now you're doing it again with AI. And when I see companies that are hitting it out of the park with AI, it's because it's so ingrained into their product strategy, and it flips what we are used to on its head, right? It says, just because we've done it this way forever doesn't mean we have to keep doing it this way. And now that we have a technology that allows us to recreate the experience or make the experience better, we should be harnessing that at its core.

And what I think is so cool about Copilot and some of the other AI stuff that we're seeing out there is, if you think about it for developers: so many people have tried to optimize, and I know developers hate this, how fast their fingers go on the keyboard, for velocity. But you don't hire a developer for that. You hire them so that they can think strategically. And this, to me, frees them up to be able to do that. It frees them up to focus on the harder problems, so that they are not just typing, typing, typing away. And that's where I think there's so much potential in this, because now we're gonna get so much more brain power around solving problems rather than just trying to figure out how to write perfect code.

[00:37:42] Mario Rodriguez: You got it, and a hundred percent. I think the people that are being successful today with AI, it's because they use the AI to free up time to work on the creativity items, because those problems are incredibly hard. And let me tell you, no AI today can solve them. The world today as it exists, where we're having this conversation, didn't exist 500 or a thousand years ago, and there was no blueprint to create it either.

No one out there said, oh, this is the way that we actually create podcasts in the world, and this is when it's gonna exist. That didn't happen, but it all got created through us, humanity, right? That's what we have. So you're right: the people using AI today very successfully,

they understand what is the AI leverage that they wanna get, and in the end, the AI leverage that you get is on the creativity, right? For example, I have a roadmap; am I a hundred percent certain of that roadmap 12 months from now? No. But the planning is amazing. That's why we create roadmaps, because it allows us to actually go through a planning phase and all of those types of things.

But to create a product, you have to have a lot of iteration on it. And that necessitates a lot of creativity, a lot of time spent on reflection and understanding of what exactly is gonna make it great, and a lot of intuition, et cetera, et cetera. So for me, product is all about that. And the people doing great today using AI are using it to solve problems from the creative side of the human, by removing all the toil the human usually had to do. So instead of spending 40 hours in toil, that human spends 10 or five, and the rest of the time is really on that creative genius.

[00:39:23] Melissa Perri: Yeah, and it's so powerful to think about. But what I also see people struggling with right now, especially product managers or leaders at organizations, is there is so much changing and there's so much AI coming out there to adopt. How do you know where to start, right? Or where to encourage your people to start?

And I think this comes in two camps, right? Some people are thinking about just productivity on the back end: what can you use AI for to do your job better? But then there are also the ones who are thinking about, how do I bake AI into my strategy? So when you're thinking about leading your organization and encouraging your team to adopt new AI, to try to understand it, to figure out how to bake it in, how are you approaching those two things?

How are you looking at it for productivity, but also how are you looking at it to help your strategy of making GitHub better?

[00:40:10] Mario Rodriguez: Yes. We just finished a set of AI days in one of the teams, it's called the platform and enterprise team, where they just took entire days to learn more and more. Even us, we're so busy that sometimes we miss announcements out there, and we have to take a series of days just to learn what is in the industry and how we can use that appropriately as well.

So we just finished, I think it was day two, yesterday. They might do one more day overall, where they go and use a lot of products, including Copilot by the way, and they try to use them in novel ways and see what they can learn. Or they use them for things like, oh my God, you know what, I'm spending a lot of time creating X, Y, and Z status report, and I'm just gonna try to automate this with that, or get a new perspective on how we could do that better. So what I would say to anyone starting in this space is, you have to use it. It's like learning how to bike: you have to bike, there's no other way. You could watch 20 dozen YouTube channels; you're not gonna learn how to do it. So you have to do it.

So I always start with AI days. Now, the tricky thing is, okay, does procurement allow you to play with those things? So you have to do a lot of things before that. Even us, we need to go and make sure that the tools are approved, because there are a lot of privacy implications with many of these things. So we go and make sure that we have these tools available, and then we just use them, and then at the end of the day we share everything we learn in a channel, and people can be like, oh, I like that, I'm gonna start using that as part of my everyday or my weekly overall. So I would say AI days inside your company are something really powerful. The other thing that I would say, just at a personal level: I use it constantly to learn new things and to really go back and forth. The way that I learn is mainly visual, going back and forth on some concepts, and I don't like to click 20,000 links to do that. And I would say LLMs, like ChatGPT, Microsoft Copilot, even ours as well, allow you to do that.

So if it's anything programming related, I go to GitHub Copilot, and then I just try to learn something, related to Rust as an example. I'm not very familiar with Rust, and there might be something that I wanna understand a lot better because I'm coming into a conversation with my code search team, and all of code search is written in Rust.

And I may wanna understand a little bit more about how we do indexing, as an example. So I use it to augment myself very quickly. But I would say: AI days; use it every day, mainly just to learn at the very beginning, then to reduce the toil that you have overall; and then figure out how to bake it into the strategy of your company.

Again, don't think about it as efficiency; think about it as a multiplier. And once you make that transition, by the way, almost all apps from now on are gonna have intelligence baked into them. The key is how you're gonna bake it in. If you think about it as AI being omnipresent, that, in my opinion, is AI-native. When you're trying to just bolt it on as a UX panel or something like that, all you're doing, for the most part, in my opinion, is infusing AI into your application. So the people who are very successful with AI at the product level, not the organization but the product level, think about it as: it's just there. The product is the feature. AI is not the feature; AI is just omnipresent across everything the user is doing. And those products are special, in my opinion.

[00:43:45] Melissa Perri: And I definitely think Copilot is among them. Thank you so much for being with us on the Product Thinking Podcast. If people wanna learn more about you and GitHub, where can they go?

[00:43:54] Mario Rodriguez: Yes. So I'm on X and I'm on LinkedIn, @mariorod1, so you can ping me there at any point in time. And for GitHub, just github.com.

[00:44:04] Melissa Perri: Great. And as individuals, you could sign up and try it as well.

[00:44:07] Mario Rodriguez: Yes, it is free, actually. So if you get a GitHub account, you can start using it right away. You get 2,000 completions and 50 chats. And we have a Pro plan and a Pro+ plan as well.

[00:44:19] Melissa Perri: Amazing. Thank you so much again for being on the Product Thinking Podcast, and thank you to our listeners out there. We'll put all of those links in our show notes at productthinkingpodcast.com. We'll be back next week with another amazing guest. And in the meantime, if you have any questions for me, go to dearmelissa.com and let me know what they are. We'll see you next week.

Melissa Perri