Podcast: Product Mastery in Corporate Software: Insights from Kent McDonald


Article originally posted on InfoQ.

Transcript

Shane Hastie: Good day, folks. This is Shane Hastie for the InfoQ Engineering Culture podcast. Today, I’m sitting down with Kent McDonald. Kent, you and I have known each other for decades, which we would be able to say now. Which of course is not a pointer to how old the two of us are getting. But a fair number of our audience probably haven’t come across you or your work. So who’s Kent?

Introductions [01:10]

Kent McDonald: Well, I think it’s almost going on two decades, Shane. So I’m a freelance product manager and freelance writer. I have spent a large part of my career playing around in a few different communities.

I primarily work with different organizations from the products side of things, and dabble in the technology enough to make sure that we’re doing things the right way, and making the right decisions for what to build. And then I do a bit of writing about software product development.

Shane Hastie: One of the things that I know you bring to your work, that is possibly different to some of the stances that we hear, is that you are working with organizations for whom software is not their core business. The large corporates who need software; they’re not the startups, they’re not trying to be an Amazon or a Google or whatever. So what is different about product management in that space, and product management for software products in that space?

Product management in non-software companies [02:12]

Kent McDonald: Yes, I’ve made a living out of working at organizations that build software for their own use but don’t sell it. And probably one of the biggest things right off the bat that’s different about product management is that oftentimes these organizations need convincing that they could probably use product management. And really where product management fits in, if you were to way oversimplify it, would be making sure we’re working on the right software, solving the right problems, getting the right outcomes. A lot of times in these organizations that are starting to use software more intentionally to enable their customers, their employees, and their partners, there are decisions getting made about what should be done, but usually by folks that may not necessarily know the extent and the capabilities of the technology. And so when they describe something that they want to have happen, they’re typically describing it as a solution in terms of something they’re already familiar with.

So a big place where product management fits in is kind of walking those asks back to really understand what it is they’re really trying to accomplish, and making sure it actually makes sense to do that. Another big one is that just because you can build software for use inside your organization doesn’t mean you should always build it. Sometimes buying and configuring something is the better route to go. So a lot of it is adding the decision-making and intentional thought process of: should we be building this, what should we be building, does it make sense to do this? To complement the good engineering practices and make them that much more effective.

Shane Hastie: The good engineering practices. I hear and I experience that in many of those large organizations, those good engineering practices are hard to motivate. I’m going to take one of my favorite technical practices or team practices, pair programming, ensemble programming. And you talk to a bank and they go, “No. We won’t have the resource utilization curve going in the right direction”. How do we motivate those good engineering practices in these types of organizations?

Motivating for strong technical practices [04:20]

Kent McDonald: It’s tough. So over my career, before I was really focusing explicitly on the product aspect of things, I did spend a bit of time as a delivery lead, which is the kinder, gentler, non-methodology-specific way of saying the person that’s helping coordinate things. This is one of my favorite experiences, actually. I was part of a team that was brought in to do a couple of things. One, our main goal was to rebuild a 20-year-old product that was originally built in PowerBuilder, that the organization used to establish prices that they would offer farmers for corn, soybean, and wheat seed that they would turn around and then sell as commercial seed. And then this app also had to track any kind of options trading they did to hedge the risk of setting prices so far ahead.

So that was the main thing, but the other thing was that, based on the organization that brought me in to help out with this, we were to basically model those good engineering practices that I mentioned, and encourage the spread of those throughout the rest of the technical teams at this large organization. Pair programming being one of them. The one that we actually had the most difficulty trying to encourage the organization to even let us do was automated testing, because there was a very strong QA presence there that thought there was no way we could possibly let the developers test their own things, even if they were automating it. Because what would that say about separation of concerns and all that?

We had to come to a bit of an agreement there, where we did have a manual QA, but the developers basically made it a point to say, “We’re going to do whatever we can to make sure that the QA has most of their time to spend doing exploratory testing, because they’re not finding any of the normal, ordinary, humdrum stuff”. To spread those practices, the best way I’ve seen it happen is you model it, show success, and then be willing to help other teams adopt those practices as well. Unfortunately, if there isn’t also a lot of buy-in at the technical leadership level, all of those efforts can come to naught.

So we could even show this: “Hey, we were able to release a new product for use inside the organization on April 1st, 2020”. There were a few things going on at that time. And it was one of the smoothest releases that the organization had ever seen: very low, if any, downtime, and a very reliable effort. And when the team started moving away, I don’t know that the organization continued to do those practices, because there wasn’t someone continuously pushing and really supporting that. So it certainly is model the behavior, show where it can provide results, but it also requires a bit of leadership and advocacy from tech leadership.

Shane Hastie: One of the things that is reasonably new in the product space is Jeff Patton and Marty Cagan’s work on the product operating model. How does this start to weave into the large corporate space today? Or does it?

The Product Operating Model [07:25]

Kent McDonald: Well, I think Marty Cagan is kind of the one that’s been trumpeting the product operating model. I would suggest that if you were going to listen to either him or Jeff Patton talk about it, I’d listen to Jeff, because Jeff’s version is probably a little bit more pragmatic and a little bit more realistic. Marty tends to be very idealistic, and lets everyone know where it’s not going to possibly work in their organization. But what it’s doing is effectively saying, “Here are some things you need to strive for, to look at how you’re organizing things”. And some of the good parts of it are getting back to the idea of collaboration, where we want to have different perspectives. And Shane, you’ve been a big proponent of product ownership being a team sport. It’s along the same lines there, where you want to have different folks working on… And the way that Marty describes it, and this is something I do like about how he talks about things, is the four big risks of product.

Is the thing valuable? Meaning, is it solving a customer problem? Is the thing viable? Meaning, is this something that our business can actually do? Can it work with the other parts of the business? Is it usable? So are our customers, users, and the people that need to use the software able to use it? And then of course, is it feasible? Can we build it, can we make it work? Those types of things. And you’re likely not going to find one person that is best positioned to figure out whether something addresses all four of those risks. You need different skill sets.

So you kind of need a village to do that. And a lot of times the talk has been of the trio, where it’s a product person dealing with value and viability, but I’m going to turn that on its head a little bit here in a minute; a UX person or designer looking at the usability; and then the lead tech, the technicians, looking at the feasibility. I have heard some perspectives that put that into a little bit more useful framing, especially if you’re working on something that isn’t explicitly human facing, because there are some systems that have a lot of internal things. And our friend Chris Matts once said that the trio really should be someone from a product standpoint, someone from a technical standpoint, and then a third person, and there may even be a fourth; that third person is basically looking at what’s the big risk that that product has outside of those, and then whoever’s the expert in that.

So, for example, if you’re working on financial trading, it might be someone that’s really well versed in all of the intricacies of financial maths and things like that. I’ve also started to play around with the concept that in the business analysis community there are probably people that are very well suited to be looking at the viability of things: how is this going to work within our business, how is it going to support the processes, and the data and the rules that we’re having to deal with? Especially for those products that are for the organization’s own use.

Shane Hastie: You chaired the accelerating products track at Agile 2024. What are the trends in product today?

Trends in product today [10:18]

Kent McDonald: I got the opportunity to chair that with Holly Bielawa, who works with Jeff Patton. And one of our focuses this year was to bring some content to the conference that looks at the aspects of product management that don’t get discussed as much in the Agile community, and that’s more around discovery and focusing on outcomes. There is certainly some of that, but really looking at discovery and metrics and things of that nature. We did an intentional focus on those types of sessions to bring to the conference, and also to do a little bit of cross-pollination between people that tend to hang out more in product management circles and people that hang out in the agile software development circles. Because we figured there are probably a lot of good things that they could all learn from each other. So that’s kind of the main thing of it.

There were several sessions dancing around topics such as customer journeys, and some ways of explicitly quantifying value that maybe the Agile community hasn’t tried to dig into deeply, because they figured the product owner had that covered and they didn’t need to worry about it. Some more in-depth things. One of the sessions was about survival metrics, so digging deeper into metrics so you can understand where your product is, how your product’s doing, and where it might be faltering a little bit. So a variety of sessions on that. I think the track was pretty well received. Maybe not as many people attended it as some of the others, but that’s to be expected with a not new but newer topic for the conference, compared to what it had been in years past.

Shane Hastie: For the InfoQ technology audience, what are some of the important things that they should be thinking about in product today, in product management, product coaching, product, blah?

Product management is fundamentally about making sure we build the right thing [12:04]

Kent McDonald: I don’t want to oversimplify it, but I’m going to anyway, because that seems to be the great place to go. It comes down to… And the way I describe the relationship between product management and agile, and I used to describe this more in terms of the community I used to live in a lot more, which is business analysis, is this: organizations that did a good job of adopting agile, more often than not, originally did it either because it was a groundswell effort, because they just saw this as a better way to work, or, from an organizational perspective, because “we want to be able to do things better, faster, and cheaper”. And what they found was that their approach to adopting agile and good engineering practices kind of helped them get there, but the one thing they never tackled was, “Are we doing the right thing? Are we building the right thing? Should we be building this, or should we just not be doing it at all?”

There have been attempts along the way to incorporate that idea. And what it seemed like was happening every time was that for the roles the agile community put forward as the ones that owned the value, they basically said, “Yes, the product owner’s got it, the product people have got it, they know what to do, they’ll just go off and take it”. And a lot of times the people that found themselves getting into product roles didn’t know exactly what to do. They needed some help. And the only kind of help they got was, “Well, here’s how you can do product to help the engineering team out: keep things organized, tell us what to do next, provide us the information we need”. But not a lot on the decision-making about, “Should we even be doing this in the first place?”

And it’s a tricky thing for organizations, again going back to those that are building software for their own use. Those are hard decisions. Those are hard conversations, and a lot of times organizations seem to go out of their way to find ways to not have those discussions. It usually gets baked into some kind of budget battle, and the people that are good politicians are the ones that win and get to do their things, without really looking at what makes the most sense for the organization. And so I think there are some techniques that come along, the good techniques, not the vanity techniques that you see out there with any kind of approach, that help organizations and teams make those decisions about, “Should we be doing this, does this make sense, is this the right thing to do, is it going to get us the outcomes that we want?”

Shane Hastie: What are some of those good techniques?

Stable and temporary metrics [14:25]

Kent McDonald: A lot of it comes down to explicitly thinking about what it is we’re trying to accomplish. John Cutler, who actually wasn’t at the conference, recently had a great post where he positioned the idea of how measurements really should play into things. And the interesting thing was that he described it in the interest of getting to a different point, but it was actually the first point he talked about that I thought was very helpful. He talked about stable metrics, things you track over a long period of time. You can think of these as health metrics; you can think of them as, “We’re establishing some measurements along the way that we want to keep track of, to make sure our process is doing what it’s supposed to be doing and not going off kilter”.

My long-ago educational background is industrial engineering, so I think of process measurements and controls like that. Some people would think of these as KPIs, key performance indicators: if everything stays within this certain range, we’re doing pretty well. Those are stable metrics. Then you’ve got a set of metrics which are going to be temporary in nature. Effectively, anytime you think you need to make a change to something, in this realm of working inside organizations and building software for their own use, it’s usually going to be: we’re bringing on new customers and we need to support different kinds of things than we did before, or we need to increase our capacity so we can bring on more customers, or those processes are bouncing outside of those KPIs, so we need to make some kind of change to get things back in order or expand our capacity or things like that.

And so those temporary metrics are ways of saying, “Have we gotten there?” Are the things that we’re doing taking us in the right direction to actually make the change that we’re trying to get to? And then what you do there is you base success on “How are we getting towards those metrics?”, not “Did we deliver the thing we thought we needed to deliver?”, because we might find out that the thing we thought was going to work isn’t going to work after all. And then the third part of that is goals, basically saying, for the stable ones, “How do we know when we’re doing well?”, and for the temporary metrics, “How do we know when we’ve accomplished the outcome we’re looking for?” There are a lot of practices built into that, but it’s basically looking at something that says, “Are we accomplishing the change we wanted to accomplish?” as a measure of progress and success, rather than, “Did we deliver the backlog items that we thought were going to be needed three months ago, when we were probably completely wrong?”
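To make the distinction concrete, here is a minimal, hypothetical sketch of how a team might model the two kinds of metrics Kent describes. It is not something discussed in the conversation; the metric names, thresholds, and goals are invented purely for illustration.

```python
from dataclasses import dataclass


@dataclass
class StableMetric:
    """Health/KPI-style metric tracked continuously; healthy while inside a range."""
    name: str
    low: float
    high: float

    def is_healthy(self, value: float) -> bool:
        # Stable metrics only ask: is the process staying within its expected range?
        return self.low <= value <= self.high


@dataclass
class TemporaryMetric:
    """Metric tracked only while driving a specific change; success is progress toward a goal."""
    name: str
    baseline: float
    goal: float

    def progress(self, value: float) -> float:
        # Fraction of the way from the baseline to the goal (can exceed 1.0 once the goal is passed).
        return (value - self.baseline) / (self.goal - self.baseline)


# Hypothetical examples: an error-rate KPI and a capacity-improvement effort.
error_rate = StableMetric("order error rate (%)", low=0.0, high=2.0)
onboarding_time = TemporaryMetric("days to onboard a customer", baseline=30, goal=10)

print(error_rate.is_healthy(1.4))              # True: the process is within its expected range
print(round(onboarding_time.progress(22), 2))  # 0.4: 40% of the way toward the outcome
```

The point of the split is the one Kent makes: progress on a temporary metric, not delivery of a particular backlog item, is what tells you whether the change is working.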

Shane Hastie: Getting those metrics, actually getting the numbers in our hands. This is in my experience, again, a pretty hard thing. In product organizations, we’re better and better today at building the analytics in. I’m not so sure that our large corporates have the appetite for that.

The challenges of gathering useful data [17:16]

Kent McDonald: No, because of what it inevitably comes down to, and I can attribute it to a couple of things. One is that in order to get the metrics and the measurements you need, some of the work on whatever change you’re making is actually work to put in the ability to measure those metrics. And so it’s an opportunity cost against other things you can’t do. I would argue it’s worth it, but some folks sometimes take a little bit of convincing on that. The other one, which you don’t hear people talking about as much, is the embarrassment factor. Oftentimes when you’re driving some change, it’s because things aren’t working very well the way they were before. And usually you’ll hear the reason for making the change described as, “We need to do some kind of modernization,” or “We need to adopt new technology”.

When really what they’re saying is, “Right now this process lets the people working in it make way too many errors, and it’s costing us way too much money. We need to fix it”. But you won’t hear people say that, because then the next thing is, “Well, we should really measure what the errors are now,” which can be embarrassing, because it’s like you’re admitting, “Yes, we’re letting a lot of things happen that shouldn’t be”. So I think part of it is having the courage to say, “What is it we’re really trying to accomplish?” And in some cases, especially when you’re talking about something that’s enabling an existing process, one of the reasons you might need to make the change is because right now it’s way too easy to make errors that can cost the organization money. And when you’re having those discussions, it’s also important to note that it’s not the people that are causing the errors, it’s the system that lets it happen.

Shane Hastie: So as the technologist, how do I advocate for spending the time and money to put that in?

Kent McDonald: I think a lot of it is just reinforcing the idea that… Because every developer I’ve worked with that’s worth their salt really does care that the stuff they’re working on is making a difference. So if anything, especially if the organization cares at all about their staff’s satisfaction, it’s to reinforce the argument that says, “Hey, if we do this, if we track these metrics, if we look at these things, it’s going to help us make better decisions about whether we should be doing these things or not. It’s going to give us a better idea of whether we’re working on the right things”. And by virtue of that, it’s going to make sure that the team finds a lot more fulfillment in what they’re working on, because they know they can directly tie it to how the organization is succeeding: we’re doing the right things, we’re putting the right stuff in place.

Shane Hastie: Shifting stance a tiny bit, I know that you’re working in an organization today that is very, very distributed. But what is different when working in this massively geographically distributed environment, versus maybe working within one or two time zones?

Working in massively distributed teams [20:15]

Kent McDonald: Yes. And so let me provide a little bit of context here. I’ve had the opportunity to work in organizations of all different sizes. The team I talked about earlier, we were co-located, explicitly working in the same room. This was 2020, when some things happened, so we were able to shift quickly to not working in the same room, but we were still working in the same time zone. And then I’ve worked with other, larger organizations where the product folks were in one time zone, on one continent, and most of the tech team were in a different one. The team I’m working with now is actually in a smaller organization, where the folks are distributed via space, time, and organization. So it’s kind of like a movie model, where you bring the right skills together to create the thing, and that way you can get the best skills. The team I’m working with right now is four people.

I’m in the central United States. There’s another developer that’s probably a couple of hours’ drive from me. And then we’ve got a team member in Turkey, a team member in Ukraine, a team member in Europe, and a team member in the UK. And it’s interesting that we’re able to adjust; we’ve found a way to lock into an approach that tends to be fairly lightweight. A lot of it is just adjusting schedule-wise as far as how things work. We primarily live in Slack and ClickUp, so the realm of product and tech tools continues to grow, and in source control and GitHub. So a lot of it is that we’re able to be a little bit more synchronous working in the mornings, US Central Time. And then in the afternoon, US Central Time, I’m pretty much getting things prepped, and then stuff happens overnight while I’m sleeping.

So I know when I wake up in the morning, there’ll be other things. It works out pretty well, because the team members are still able to have good communication synchronously, and we do a lot of asynchronous communication. But then all the team members have the ability to go off and have the focus time to do the work they need to do, and yet still get questions answered. I think because we’re a small organization, and there isn’t a lot of the extra baggage that comes along with larger organizations, the extended focus time to work by themselves is probably a little bit easier to have happen. But I think it’s a model that larger organizations could emulate. It’s like, do you really need to have all these people always involved at the same time? Or, as long as you have a certain timeframe where there’s overlap to do the stuff that needs to be done synchronously, can you allow for a lot more of the work to happen asynchronously, where it’s better suited for that?

Shane Hastie: Another aspect that you bring to the table, is this concept of the player coach. We’ve seen a pushback against a lot of coaching roles. One could argue that the visibility of the value delivered there has been hard to find at times. But the emergence of the player coach, so in your stance at the moment, you’re the player and you’re coaching that team. How does that fly and work?

The role of the player coach [23:17]

Kent McDonald: Yes. With this team, the coaching aspect of it is a lot lighter. It’s more just along the lines of, let’s get the practices in place that we need to get work done, and we make it happen. A previous gig that I just recently finished up was explicitly a player-coach type setting, and granted, this was more from a product focus than a technical focus. Part of it was that I was coaching the folks in the business unit. This organization, one of those that builds software for its own use, is starting to adopt a product operating model. Interestingly enough, they’re adopting the product operating model at the same time that they’re trying to adopt a little bit more rigorous agile software development approaches. It’s rare that you see an organization trying to do both at the same time; the jury’s out on whether that’s a good or bad thing. But part of it was that I was coaching the director of the business area to take a little bit more product-focused approach.

And it was interesting; it was a very good fit here, because the business unit they were in was actually providing services out to their customers. So it was definitely a good fit. And at the same time, I was working with the other folks on the team, including the manager, on adopting more product management techniques. A lot of that was, again, like I said before, me doing it for the stuff I was working on, modeling it, and then spending a bit more time going deeper into explaining why I chose to do it a particular way, pointing out when I messed something up, and the kind of “do as I say, not as I did” type situation. Or explaining, “This is what I thought was going to work, it didn’t, so here’s what I would probably do instead”. And being available for questions.

Honestly, for my own personal approach to working, I enjoy that approach better, because I have had the opportunity to coach exclusively in the past, and I found that a lot of times it felt as if I wasn’t doing as much as I should have been. So this way it definitely felt like I was moving things forward. Plus it helped me pick up the context a lot better. And so I think, for folks that have the experience and would prefer to mostly do, to be a maker, it’s a great way to share the knowledge and experience, as long as the people that may be less experienced are open to hearing about it. Sometimes people go the route of saying, “I just have to experience it myself to learn it”.

Shane Hastie: So, knowing when to let go.

Kent McDonald: Knowing when to stand back.

Shane Hastie: My co-author and friend, Ronald Layton says that sometimes the conversation is, “It’s your foot. You can shoot it if you want to”.

Kent McDonald: I like to say, you can lead people to knowledge, but you can’t make them think. So there’s only so much you can do.

Shane Hastie: Yes. We’ve covered a wide range of topics here. If people want to continue the conversation, where would they find you?

Kent McDonald: I’m pretty available on LinkedIn. So just look for Kent J. McDonald.

Shane Hastie: Thank you very much for taking the time to talk to us today.

Kent McDonald: Thanks for the opportunity to be here. I enjoyed it.


