Thursday, June 20, 2019

The Anatomy of an Effective Product Definition Document

A product definition document is a product management artifact that serves two purposes. First, it helps to gather and organize a team’s thoughts. Second, it serves as an artifact for collaboration and debate. I ask every product manager I work with to write a product definition document. If they are new and don't know how to write one, I give them this framework and help them think through what they want to do and how they want to communicate it to others. When I review the product definition documents written by product managers every quarter for potential implementation, these are the things I look for. I have shared some sample content where possible.

Every product manager I work with receives these inputs from me. This framework pushes product managers to develop empathy for the user, collaborate with the technical architects, see themselves as custodians of the solution rather than its inventors, develop a point of view and defend it, and come across as credible when they propose a solution.
1. A meaningful title that everyone in the organization can understand. Product managers who can write a meaningful title demonstrate that they have thought about the nature and scope of the problem.
2. Date
Sometimes a document is read months after it is written. The date provides valuable context. If the document is edited months after the creation date, it is useful to add a last updated date.

3. Names of authors and contributors
This tells me who wrote the document and who gave inputs for defining the problem and the proposed solution. I look for inputs from end users, architects and engineers. A document with no co-authors or contributors is a red flag. Lack of collaborators could indicate limited experience, selfish behavior, insecurity, or a failure to acknowledge the value others bring.

4. Purpose of the document
Sometimes the document is written as an artifact for collaboration. Sometimes it is complete enough for development. Sometimes it needs legal or clinical review. This section tells me if the product manager thought about the purpose and audience of the document.

5. Explanation of key words
Explaining the key words used in the document ensures that the product manager understands them and can explain them in simple words. This is hard to do. A document with key words explained leads to clear, concise conversations and work sessions. It is usually best to build the list of key words as you expand the document. Here are a few samples. Note that no abbreviations or jargon are used.
  • Sweepstakes: A promotional drawing in which prizes are given away. Participants enter sweepstakes or are automatically entered in them. There is no certainty of reward.
  • Program: The framework used by our products to enable users to navigate their health benefits.
  • Activities: Trackable steps within a program, used to monitor and encourage participation.

6. Executive Summary that clearly conveys what we plan to accomplish.
The executive summary is not just for executives. It is for everyone. It states the problem, the proposed solution and the top five steps to execute the assignment. It is usually best to write the executive summary after writing the full document. Writing this well is hard. In general, the more experienced the product manager, the better the executive summary. Here is an example executive summary:
Executive Summary: Product features such as Programs, Events, Employer Messages, Surveys, Assessments, Challenges, Rewards and Store shall use the segmentation framework to make instances of such features available to select segments of people. For example, a certain employer message may be sent to men in the state of New York. The professional services team will create segments using 15 standard attributes and 5 custom attributes chosen by the customer. The custom attributes will be defined in the eligibility specification by the customer in conversation with the professional services team. Once defined and ingested, these custom attributes will be available for segmentation for the customer.
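As a rough illustration of the summary above, segment membership boils down to matching a person's attributes against a segment's rules. The attribute names and data shapes below are hypothetical, not the actual product's schema:

```python
# Hypothetical sketch of attribute-based segmentation; attribute names
# ("gender", "state") and data shapes are illustrative, not product APIs.

def matches_segment(person: dict, segment_rules: dict) -> bool:
    """True if the person's attributes satisfy every rule in the segment."""
    return all(person.get(attr) == value for attr, value in segment_rules.items())

# The "men in the state of New York" segment from the example above
ny_men = {"gender": "male", "state": "NY"}

people = [
    {"id": 1, "gender": "male", "state": "NY"},
    {"id": 2, "gender": "female", "state": "NY"},
    {"id": 3, "gender": "male", "state": "CA"},
]

eligible = [p["id"] for p in people if matches_segment(p, ny_men)]
print(eligible)  # [1]
```

In a real system the rules would support more than equality checks, but the principle of evaluating each feature instance against a stored segment definition is the same.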

7. A diagram explaining how the user will consume the product or feature.
A simple diagram, usually abstract, that explains the core solution is arguably the most effective part of the product definition document. It is done before the user interface mocks or prototypes are created. I look for annotations and user personas to determine whether the product manager understands who they are building the feature or product for. Such a diagram is not hard to draw, but few product managers take the time to do it. The ones who do are far more successful in delivering a useful solution on time than those who don't.

Figure: Sometimes all it takes is a photo of a hand-drawn diagram

8. Details of the feature explained in sentence format with diagrams
This section is usually short when I review the document. It gets built with every work session. The content of this section depends on the nature of the feature or assignment. It may have diagrams, screenshots, tables and descriptions of the feature.

9. Outline of risks, assumptions and limitations
A product or feature will always have limitations. An effective product manager always points out the limitations and boundaries of the product clearly. When I discuss the scope of a feature with a product manager, we discuss what we do not plan to do as much as what we plan to do.

10. Acceptance criteria used to verify completion of work.
This tells the engineering team what the product manager will look for to verify completion of the work. The best product definition documents start off as a 2-page document and may grow to 5 pages. Anything more could be built in a prototype or broken up into multiple assignments or phases.

Sunday, January 28, 2018

When A Product Says It Uses Machine Learning What Does It Mean?

I was at my employer's annual sales conference recently where many digital health vendors and their CEOs were present. In one of the sessions they all said that they use machine learning to personalize their product for users. The CEOs did not go into any details. I could tell that some of my colleagues and partners in the audience were skeptical, but they did not ask the leaders of those companies to elaborate. I suspect that most people in the audience did not understand what the leaders of those companies meant when they said they use machine learning. Unfortunately, that might be the case with many enterprise software buyers. But it does not have to be that way. While designing machine learning driven products is tough and requires a highly skilled team of data analysts, data scientists, data engineers, and product managers, the basic concept of machine learning is quite simple. In this post, I will take a few examples from the real world and my experience at Castlight building machine learning driven products to explain what a machine learning model is, what a simple rules based model is, and how those models are used in the real world to benefit employees of companies that buy software driven by such models.

At Castlight Health, we make it simple for employees of American companies to navigate their healthcare, benefits and wellness programs. We provide employees with a web application and a mobile application where they can see health benefits information personalized to their needs. To do this, we use a combination of rules based and machine learning based models. This is a hypothetical example of a rules based model: "If a pregnant woman is over 35 years of age, place her in a segment called high-risk-pregnancy". This model is not a machine learning driven model. It is driven by clinical rules written by experienced clinicians. Software developers simply listen to clinicians and turn that rule into a software program. This information is then given to our personalization engine. Once our personalization engine understands a woman is in the high-risk-pregnancy segment, benefits programs that are relevant for a woman with high risk pregnancy are promoted to her via our web, mobile and email channels. When she logs into our application, instead of viewing information about all the benefits her employer provides, which could be exhausting, she will see benefits that are relevant for her. The hypothesis is that such personalization makes employees aware of relevant benefits, engages them with the benefits providers and wellness programs in a timely manner, improves their health, and reduces healthcare costs for them and their employers. This actually works. That is why hundreds of employers pay us tens of millions of dollars every year. The example we saw above is rules based prediction and personalization. You may be wondering about an example where machine learning is used to predict and personalize. To understand that we need to first understand what machine learning is.
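The hypothetical clinical rule above translates almost directly into code, which is exactly what makes rules based models easy for developers to implement. A minimal sketch (the function name and segment label are illustrative, not an actual product API):

```python
from typing import Optional

def assign_pregnancy_segment(is_pregnant: bool, age: int) -> Optional[str]:
    """Encode the clinicians' rule: pregnant and over 35 -> high-risk segment."""
    if is_pregnant and age > 35:
        return "high-risk-pregnancy"
    return None

print(assign_pregnancy_segment(True, 38))  # high-risk-pregnancy
print(assign_pregnancy_segment(True, 28))  # None
```

There is no learning here: the logic is fixed until a clinician decides to change the rule.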

What Is A Machine Learning Model?

A machine learning model is one where software developers do not program the computer (the machine) with explicit logic. Instead they make the machine learn by training it with historical information. For example, if we want to check whether an email is spam or not, we can create a machine learning model. To do this, data scientists will take several known spam emails, label them as 'spam' and feed them to a suitable computer program, 'the machine'. Data scientists will not say why the email is spam. They will simply say "My dear machine, this is a spam email. I want you to look at the email and recognize that this email is spam". So the machine learns what a spam email might be like and, after looking at enough spam emails, gets better and better at identifying spam. After it gets really good at identifying spam, the model is deployed and goes to work identifying spam emails in the real world and putting them in the junk folder. It is important to note that data scientists in most product teams don't invent the computer algorithms they use. They simply use an existing computer algorithm and build a model. It is a bit like this: an electric car engineering team does not have to invent the electric motor. They just have to design it for the particular type of car and build it.
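To make the "learning from labeled examples" idea concrete, here is a toy sketch in plain Python, with no ML library. A real spam filter uses far richer models and features, and practical training sets contain both spam and non-spam examples so the machine can learn the difference:

```python
# Toy illustration of learning from labeled examples. Nobody tells the
# classifier *why* an email is spam; it only sees examples and labels.
from collections import Counter

def train(examples):
    """Count how often each word appears under each label."""
    counts = {"spam": Counter(), "ham": Counter()}
    for text, label in examples:
        counts[label].update(text.lower().split())
    return counts

def classify(counts, text):
    """Score the email by how many of its words were seen under each label."""
    words = text.lower().split()
    scores = {label: sum(c[w] for w in words) for label, c in counts.items()}
    return max(scores, key=scores.get)

training = [
    ("win a free prize now", "spam"),
    ("claim your free money", "spam"),
    ("meeting moved to 3pm", "ham"),
    ("quarterly report attached", "ham"),
]
model = train(training)
print(classify(model, "free prize money"))  # spam
```

With more labeled examples the word counts become more reliable, which is the sense in which the machine "gets better and better" at the task.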

Image Courtesy: Europeana Collections

A Machine Learning Model Use Case For Benefits Navigation

We just looked at a real world example of a machine learning model. I now want to give you a hypothetical use case of a machine learning model in a health navigation product such as the one Castlight Health provides. Let's say that employers want employees who are considering a back surgery to get a second opinion before they decide on the surgery. This is because clinicians know from experience that many back surgeries do not improve the condition of a person's back. Instead they cost a lot of money for the employer and the employee and cause a lot of pain and suffering for the employee. In most cases, a surgery also results in weeks of time off from work and, in some cases, lost wages for the employee. So there is a big incentive to identify people who might get a back surgery and make them aware of second opinion programs as well as inform them about the costs and benefits of back surgeries. The problem is there is no simple rule to find out who might be considering a back surgery. This is where machine learning comes in handy. Castlight data scientists have access to de-identified information about the medical history of people who had a back surgery. They can feed that information to a computer algorithm (the machine) and tell that machine "My dear machine, this is the medical history of people who had a back surgery in the past. I don't know why they got a back surgery. But they all did. I want you to look at this data and learn to identify people who are likely to get a back surgery." With enough data the Castlight machine learning model gets really good at identifying people who are likely to get a back surgery. The model is then deployed to analyze medical data of employees and predict if someone is likely to get a back surgery. Once it identifies such people, the model informs the Castlight personalization engine.
The personalization engine then goes to work, promoting second opinion programs and educational information to those identified via web, mobile and email channels. Once again, the hypothesis is that even if the product prevents only a few unnecessary back surgeries a year, the cost savings could be in the hundreds of thousands of dollars and the health improvements could be significant too.
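The hand-off described above can be sketched as follows. The variable names and content titles are hypothetical, not actual Castlight APIs:

```python
# Users a (hypothetical) model flagged as likely to get a back surgery
flagged_users = {"u1", "u3"}

def content_for(user_id: str) -> list:
    """The personalization engine promotes second-opinion content to flagged users."""
    if user_id in flagged_users:
        return ["Second Opinion Program", "Back Surgery: Costs and Benefits"]
    return ["General Wellness Tips"]

print(content_for("u1"))  # ['Second Opinion Program', 'Back Surgery: Costs and Benefits']
print(content_for("u2"))  # ['General Wellness Tips']
```

The key design point is the separation of concerns: the model only predicts, while the personalization engine decides what to show and through which channel.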

I avoided technical terms on purpose in the above examples. Data scientists reading this post will recognize that what I described above is supervised learning. There are other types of machine learning, which I did not go into in this post.

If you are an enterprise buyer or a salesperson competing against another product claiming to be machine learning driven or artificial intelligence driven, ask the product manager or the salesperson to explain the use case. If they are not able to explain in simple terms how they are using machine learning and how it turns into real value, you should be very skeptical about their claims and verify before buying their software. A product does not have to be machine learning driven to be good. Simple rules based engines can do a good job of addressing many problems. However, it is important to understand the difference.

Please note that for business reasons, I did not use actual use cases. I also did not go into more details about our personalization engine and our system of intelligence, which are far more sophisticated than what I outlined here. If you would like to learn more, contact me or my colleagues at Castlight Health and we will be glad to share more. If you are in the San Francisco Bay Area, drop by our San Francisco or Mountain View offices and I will be glad to give you a demo of our products. If you like this kind of work and want to join Castlight, give me a call. We are always looking for good data scientists, data engineers, clinicians and product managers who understand machine learning and data driven products.

Sunday, June 18, 2017

How To Build A Minimum Viable Product

Earlier this year, my employer Castlight Health acquired another company called Jiff. I took over the product responsibility for machine learning driven and user input driven personalization for the combined companies. The team that built personalization at Jiff is a capable team with many ideas, some customer implementations under their belt and tens of customers who have bought into those ideas. Looking at the clear need in the market, I decided to focus on executing on those ideas to accelerate delivery of functionality to customers who were paying us millions of dollars every year. Since then the team has delivered their first major functionality release, which personalizes the Jiff home page and prescribes benefits based on machine learning driven prediction intelligence from Castlight. We are now working towards the next release, focusing on user input driven personalization and benefits prescription.

When I did my initial analysis of the team and Jiff personalization product, I noticed that like any young and ambitious team, my colleagues were trying to build too many things too fast. I introduced two principles. The first one is to design and build minimum viable products rather than release too much functionality at one time. The second principle is to deliver value early and often.  I will talk about the first principle in this post.

Minimum Viable Product (MVP)
I shared the diagram below with the team and explained how we need to build a minimum viable product. For example, I explained to them that if we are building Survey functionality, we should not release it without building at least a simple reporting functionality. Doing so is not only a product error but also a business error. If we release a Survey feature without associated reporting, customers will reach out to us for reporting, increasing our cost of customer operations. Since manual reporting will involve data analysts and engineers, we will also lose the opportunity to build new valuable features. All these things will make the customer wait for days and weeks and will most certainly decrease their satisfaction with our product. It will also slow down the delivery of future innovation.

I also asked the team to think like a landlord rather than like an architect. The team responded very well. We started defining our products in a very disciplined manner. We consciously reduced features that were not necessary for the MVP.  We started thinking about total cost of operations rather than just the first release of the product. Sometimes, sales teams and even business executives lose track of this fundamental principle. It is the responsibility of the product manager to ensure that this principle is understood and adhered to.

Deliver Value Early and Often
We also followed the principle of delivering value early and often to customers. We have large customers who have hundreds of thousands of users. Such customers go live with our benefits management platform usually once a year. These implementation projects are large, take months to execute and have tens of work streams involving multiple teams. I notice a tendency among implementation teams to not show any product until very late in the process. I am putting a process in place where we show and even release functionality to customers months ahead of their requested delivery dates. I believe it has several advantages. I will write more about this principle in a future post.

Wednesday, March 22, 2017

Road-mapping User Journeys Is Better Than Road-mapping Features

Heads of product will face this problem in most software product companies. During every planning period, every product team comes up with a long list of features they want to build. Most features will have cryptic names that few people understand, and investments are made without even a clear idea of what is being built, let alone the return on investment. All well-meaning product teams clamor to get their features in without realizing how they fit into the overall objectives of the company. They build the features. The features don't fit well together. Even if they are good, they never get promoted to the right users at the right time. Even in the event the products are used, the reporting teams don't know that the features are getting used because they did not build the tracking instrumentation. Product marketing teams or aggressive sales teams make up their own stories about what the product can do with little input from the teams that built the product. Once the product is made available, disgruntled customers complain bitterly and refuse to renew their contract. The sales team comes to the product team and requests features. The product team builds features with no purpose other than to keep the contract from getting cancelled. Everybody loses, including the customer.

Such behavior of product teams might be overlooked for a while in a large company with a very successful product that is already a market leader and makes huge profits. However, if you are a company trying to build a product for a new market, or if you are a small company working hard to keep your customers, this sort of behavior could kill your company or, at the very least, put your product managers out of work in a couple of years. Slowly but steadily, product managers who build features without knowledge of user journeys and without tracking usage will lose investment, their value to the organization, and eventually their job. In other words, if one wants to succeed as a product manager or product leader in the long run, user focused and data driven roadmap planning is probably the only way to go.

Changing behavior of product teams is very hard. But there is a framework that might work. In my early experiments with a user journey driven framework for roadmap planning, I see acceptance from various teams and acknowledgment of value from product managers. This is the framework. Ask your product teams, including product managers who build features, product managers who are custodians of existing features, user marketing teams that promote your products and analytics teams that instrument your product for tracking and reporting to think about all their features, assignments and campaigns in the context of a user's journey. Here is an example of a user journey [1] where multiple teams contribute.


Ask them to build a collection of the top 5 user journeys that their features form a part of and estimate the impact of those user journeys on product objectives. To do this, first, they have to think about the user. Second, they have to think about the other product managers and teams that they need to rely on and collaborate with. Third, they need to focus on how much engagement their feature brings today. Fourth, they need to collaborate with the analytics teams to estimate the potential impact of the user journey on product objectives. Fifth, they need to identify instrumentation gaps in their features and think about building just the right instrumentation for reporting purposes.

During this process, product teams will realize that a feature cannot become successful on its own. It requires someone, such as a user marketing team, and other upstream features, such as the home page or the mobile channel, to promote it. User marketing teams will realize that good features that bring value for customers are not being promoted. Product managers will recognize that their feature usage is not being tracked by the reporting teams. This will help them convince the reporting instrumentation teams to build just the necessary instrumentation. Product teams can communicate a collection of user stories to product leaders, product marketing and sales so that they sell what has been built, not what they cooked up. Customer success teams will be happy because they will get accurate reporting on user journeys that matter without having to wrestle with the analytics team to dig up data every time they have to report progress to a customer.

I am not saying this is easy. However, it has worked for me in the past, and my new experiments are showing signs of more promise. If done right, this approach could be the difference between your small software company staying in business or going out of business.

If you don't have a framework for evaluating features, product teams will use phrases such as 'customer commits',  'table stakes' or 'strategic project' to justify the investment. While customer specific features are part of every product team's work, when you hear such phrases from more than 25% of your product managers, as a head of product, you should be very concerned because you will end up with a hodgepodge of features that bring no value for your users. If you are a CEO, and you hear phrases like these from your product team very often, you should be very concerned because your investment is being squandered by a team that is inexperienced and has no framework for execution towards your business goals.

[1] The user journey discussed above  is for a business to business to consumer (B2B2C) product company. Many enterprise software companies fall into this category. I am making the assumption here that you are a cloud product company and your customers pay you if and only if you can prove to them that their employees are using your product to either stay compliant, save money or help you make money.

Thursday, January 12, 2017

Product Managers Could Explain Data Science Results In Plain English

One of the key responsibilities of a product manager working with a data science team could be to articulate the results of a data science project in plain English to other team members and stakeholders. I take the following approach to do this.

First, I request the data science team to aim for a small success within three months of work. In collaboration with the data scientists and a subject matter expert, I create a concept story (1) that outlines the specific results we aim to achieve. We aim for modest results in a short period rather than aim for very ambitious results in a year (4).

Second, I sit down and have a conversation with one of the data scientists (2) to understand the results of the data science project. I do this during the research phase of the project, as soon as the team reaches the project's desired research goals (3). The data scientist will usually share a data file with the results of the data science project. I normally request the data scientist to point out the top three highlights of the research. We then verbalize the results and convert them into a plain English sentence in a short work session. For example, in a data science project to match data sets, the plain English sentence might say "We were able to improve the match rate between dataset A and dataset B from about 2,000 to about 10,000." I then build on the sentence by stating what it means for an end user. For example, the plain English statement might be "When a person looks at a doctor, she is five times more likely to see a hospital affiliation compared to before."
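The match-rate example above boils down to counting how many records in one dataset were linked to a record in another. A hedged sketch with made-up field names and identifiers:

```python
# Hypothetical doctor records (dataset A), keyed by an NPI-style identifier
doctors = [
    {"npi": "100", "name": "Dr. Rao"},
    {"npi": "101", "name": "Dr. Lee"},
    {"npi": "102", "name": "Dr. Kim"},
]
# Hypothetical hospital affiliations (dataset B), keyed by the same identifier
affiliations = {"100": "General Hospital", "102": "City Medical Center"}

# The "match rate": how many doctors could be linked to an affiliation
matched = sum(1 for d in doctors if d["npi"] in affiliations)
print(f"Matched {matched} of {len(doctors)} doctors")  # Matched 2 of 3 doctors
```

Turning a number like this into a sentence about what the end user will actually see is the product manager's contribution.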

Third, I provide a screenshot of the application area where the data manifests itself to make the data easier to understand for all team members and stakeholders.

A product manager who takes on these responsibilities in the data product team can play a meaningful role (4). It is also a good way to gain credibility not only with the data scientists but also with stakeholders who may not always have a data science background. It might take 6 months and a couple of successful releases for the data scientists and stakeholders to appreciate the role of a product manager. Don't let that stop you. Keep at it and you will succeed.


1. I might share a sample concept story in a future post, if possible.
2. Experienced data scientists are good at articulating the results achieved.
3. Data science research outcome is later turned into scalable code by a data engineering team.
4. Overstating the scope and impact of a data science project is a common mistake.
