Be prepared for the intersection of data science and product management
Organizations are developing robust data science capabilities, adding the role of “data scientist” to their ranks. As data science becomes more important to organizational strategy, analysis, and operations, it is also impacting product management. Product managers are being asked to work with data scientists. We are still in the early days of this, figuring out how product management and data science intersect.
To explore the topic, we are joined by two past guests who have been working at this intersection. In episode 117, Felicia Anderson shared how she built a product management council at Pitney Bowes, and in episode 055, Rich Mironov shared how product managers can navigate organizational challenges. For the past year, they have been helping product managers work with data scientists.
If this topic isn’t impacting your product work yet, it will in the future. This is information you need.
Summary of some concepts discussed for product managers:
[2:19] What are some examples of how you use data science as product managers?
In commerce services, data science can predict where shipped parcels are and which are at risk of being delayed, and determine when volumes of parcels will arrive so the company receiving them can optimize staffing and other resources.
One trend I see is that instead of using data analytics to give ourselves internal insights that we then hard-code into our applications, we’re using AI to build data analytics into the products themselves, such as using natural language processing to spot trends in long-form text documents. Software can make recommendations directly to the end-consumer. The challenge is that this kind of data analytics is never perfect. You have to consider edge cases and the problems that arise if the software makes a bad recommendation or data is missing. Product managers need to think about the difference between Type I errors (predicting something that doesn’t happen) and Type II errors (failing to predict something that does). If we tell somebody a thing is going to happen and it doesn’t, what are the bad outcomes? If we tell somebody it’s not going to happen and it does, what are the bad outcomes? You want your errors to fall on the side that does less damage.
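The Type I/Type II trade-off above can be made concrete by attaching a business cost to each error type. This is a minimal sketch with hypothetical costs and counts (none of the numbers come from the episode); it shows why a model that makes *more* total errors can still be the better business choice if its errors land on the cheap side.

```python
def expected_error_cost(false_positives, false_negatives, cost_fp, cost_fn):
    """Total business cost of a model's mistakes, given a cost per error type."""
    return false_positives * cost_fp + false_negatives * cost_fn

# Hypothetical example: warning a customer about a delay that never
# happens (Type I) costs $2 in support time; missing a real delay
# (Type II) costs $50 in refunds and churn.
conservative = expected_error_cost(false_positives=40, false_negatives=5,
                                   cost_fp=2, cost_fn=50)
aggressive = expected_error_cost(false_positives=5, false_negatives=30,
                                 cost_fp=2, cost_fn=50)

# The conservative model makes 45 errors vs. 35, but its total cost
# ($330 vs. $1510) is far lower because its errors are the cheap kind.
```

The point is not the arithmetic but the habit: before tuning a model, write down what each kind of wrong answer costs the business.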
[11:23] How do you bring data science and product management together?
Sometimes the business leads us into data science. In other cases, you build the data science teams and bring the product managers and business side onboard. You have to pair up the product management knowledge with the data science team because neither half can make it work alone.
[12:36] Do you usually see data scientists in product management teams or more separate?
I’m mostly seeing them separated, but if a company is building data science products, like using machine learning, then data science is a core part of engineering. A data science team for internal insights tends to be a separate team that investigates problems brought to them and spots trends. Then they have to find the internal consumer who cares about what they found, which brings them back to the product managers who know what they need for their product. When we leave data scientists in their own separate department, what they learn is not very valuable because most of the company finds it totally obvious. On the other hand, product managers and others come up with crazy, fictional ideas about how to apply data and need data scientists to bring them down to earth.
[14:57] How can product management become excessively data-driven to the detriment of good product management processes?
Blindly following the data takes us to really uncomfortable places. Shortening phone calls and spending less time helping customers will save some money in the short term, but it’s a really bad idea. Always apply business logic. Ask what the edge cases are and take things to extremes to avoid walking in a really bad direction just because the data is leading there.
[16:40] You’ve put together several tips on how product managers can make better use of data science. Let’s talk through the first tip: Provide Much Deeper Context than Traditional Software Projects, Especially Use Cases and Business Goals.
Engineering teams tend to understand a lot about the application and who is using it. Data science teams come with a lot less context. For them, I may have to explain how our company makes money, the cost of errors in each direction, business goals and metrics, and success criteria for the business. I always bring real user validation and research by playing recordings or showing somebody using the application and struggling with the interfaces. Our engineering teams know all these things, but data scientists may find them surprising, fresh, and new.
[18:59] What can you tell us about the next tip: Remember That Data Science Projects are Uncertain and Our Judgment May Be Weak?
Often the data is just not very predictive. Our intuition about the data’s predictive power is much weaker in data science applications than in traditionally built applications, so you have to set expectations cautiously and early. Prepare your stakeholders for the possibility of recalibrating, getting different data, or taking a different approach. This is even more important in the data science world than on the traditional software side. If we promise a delivery date when there’s a high chance something won’t work, we have far more expectations to unwind than if we start by communicating clearly that much of what we expect won’t be confirmed until we get our hands dirty with the data.
[22:42] Let’s talk about the next tip: Done Means Operationalized, Not Just Having Insights.
For the data science project to really add value, the whole organization needs to know how to use it, and everything needs to work to get that information where it needs to be. It’s not enough to have an academic insight. You have to work through the people, processes, and systems to deliver business value.
It’s important to consider how data will be presented. For example, if we want a retailer’s website to say when a package will arrive, we have to automate and maintain our model, re-engineering the front end of the application to present the data. If the data isn’t presented in a workable way to the end-user, it accomplishes nothing.
[24:30] We have one last tip: Describe How Accurate This Application Needs to Be, and Anticipate Handling “Wrong” Answers.
For example, we might automate the review and approval of consumer mortgage applications. Any model will make some mistakes. We need a plan to investigate complaints from applicants who were denied, and a plan for reworking the system if the mortgages we approve default too quickly. We need to be able to verify the data and have human pathways so that when somebody thinks we got the wrong answer, we can fix it.
Another example is a model that predicts which e-commerce transactions might be fraudulent. You want to stop those orders, but you don’t want too many false positives, so any orders that the model flags as suspicious are handed to a human team that decides whether they are truly suspicious. Human review of a portion of the results complements the data science.
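The routing pattern described above can be sketched in a few lines. This is a hypothetical illustration (the threshold, scores, and order IDs are invented, not from the episode): the model scores each order, and anything above a review threshold goes to a human queue instead of being blocked automatically, which keeps false positives from harming legitimate customers.

```python
# Assumed threshold; in practice it would be tuned against the
# acceptable false-positive rate and the review team's capacity.
REVIEW_THRESHOLD = 0.7

def route_order(order_id, fraud_score):
    """Approve low-risk orders; queue suspicious ones for human review."""
    if fraud_score >= REVIEW_THRESHOLD:
        return ("human_review", order_id)
    return ("approved", order_id)

orders = [("A100", 0.12), ("A101", 0.85), ("A102", 0.64)]
decisions = [route_order(oid, score) for oid, score in orders]
# Only A101 is held for the review team; the rest flow through automatically.
```

The design choice is that the model never makes the final negative decision alone; it only narrows the set of orders a human must look at.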
Useful links:
- Rich Mironov’s website, Product Bytes
- Connect with Felicia via LinkedIn
Innovation Quote
“Stay curious.” -Anonymous
Thanks!
Thank you for being an Everyday Innovator and learning with me from the successes and failures of product innovators, managers, and developers. If you enjoyed the discussion, help out a fellow product manager by sharing it.