If you are committed to increasing the impact of your knowledge management (KM) efforts, you will have to master the art and science of metrics. The purpose of metrics is to collect reliable data (i.e., quantitative or qualitative information) that can be used to understand the performance of an organization, and then guide that organization towards more productive behaviors that help it achieve its strategic and operational goals.
There is no one-size-fits-all approach to metrics. In "Measure Better to Manage Better," I provided examples of some types of metrics that you can use to gain a deeper understanding of the impact of your work:1
- activity metrics
- operational metrics
- behavior metrics, and
- outcome metrics
These four types of metrics are just a few of the wide range available to managers. For example, there are key performance indicators (KPIs), which are used to measure an organization's progress towards its strategic and operational goals, and there are efficiency metrics such as cycle time.
The way you collect data for each of these metrics is quite different. Of the four types of metrics mentioned in my earlier article, the first two, activity metrics and operational metrics, are the easiest to collect because they are usually tracked automatically by various KM platforms and tools, which then present the data in an administrator's dashboard or report. Behavior and outcome metrics, however, cannot yet be tracked by common KM systems.
However, the metrics that are easiest to collect may not always be the best. While activity and operational data may suggest a behavior change, they cannot prove it. Similarly, while they may suggest an impact, they cannot prove it.
What versus Why
Nonetheless, as long as you review system metrics regularly and stay abreast of their trends over time, you will know WHAT is happening with those platforms and tools. WHY something is happening is a different, and more difficult, question to answer. To make this clearer, consider the following examples:
1. When intranet usage goes up, the backend metrics should be able to tell you who is using the intranet, when they are using it, and what they are doing once they log in. But why is there an increase in usage? You will not find an absolute answer in those numbers. So a human will have to intervene. By analyzing the timing of the increase you might be able to determine that the uptick in activity occurred just after the arrival of summer associates. By analyzing the identity of the users you should be able to confirm if the increased usage is attributable to summer associates. So it might be safe to assume that those summer associates were responsible for the increased usage.
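A rough sketch of that attribution check, assuming a hypothetical log of login events in which each entry carries a date, a user ID, and a role (the field names, roles, and dates here are illustrative, not the schema of any particular KM platform):

```python
from collections import Counter
from datetime import date

# Hypothetical intranet login events: (date, user_id, role).
# All values are illustrative assumptions, not real system data.
events = [
    (date(2015, 5, 20), "u1", "associate"),
    (date(2015, 5, 21), "u2", "partner"),
    (date(2015, 6, 10), "s1", "summer_associate"),
    (date(2015, 6, 10), "s2", "summer_associate"),
    (date(2015, 6, 11), "s1", "summer_associate"),
    (date(2015, 6, 11), "u1", "associate"),
]

summer_start = date(2015, 6, 1)  # assumed arrival date of summer associates

def share_of_increase(events, cutoff):
    """Of the logins on or after the cutoff, what fraction came from summer associates?"""
    after = [e for e in events if e[0] >= cutoff]
    by_role = Counter(role for _, _, role in after)
    total = sum(by_role.values())
    return by_role["summer_associate"] / total if total else 0.0

print(f"{share_of_increase(events, summer_start):.0%} of post-arrival logins")
```

If most post-arrival logins trace back to the new cohort, the "summer associates" explanation becomes a reasonable inference; the code only supports the inference, it does not prove the why.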
2. When usage of your intranet goes down, what do the backend numbers tell you? That (1) usage is down and (2) particular people are not using the intranet. That may be interesting, but it misses the key issue: Why has usage declined? Once you have eliminated causes such as maintenance down time or acts of God, do you have an answer from these data? No. In this case, you have to leave the relative comfort of your automatically generated numbers to interact with the users (or, in this case more properly, the nonusers). Only by asking them point-blank why they are not using the intranet will you be able to ascertain why usage is down. Further, their answers will likely suggest ways in which you can improve your intranet, and thereby the usage of it.
These examples show the difference between quantitative and qualitative data in action.
Methods for Uncovering Why
In the prior example I suggested that you ask, point-blank, the critical question you need answered. That is just one way of getting to the answer. There are, in fact, several accepted ways in which you can collect useful qualitative data. In this next section, we will focus on four methods: observation, survey, interview, and focus group.
1. Observation. Observation simply means watching your internal clients as they go through their day (or a specific process) to understand what they are doing. Better still, because you are in the room with them, you can ask them why they did a particular thing after a key activity occurs. Gathering data by observation yields two important benefits. First, it sometimes can be done without the person (or people) you are observing being too conscious of your observation. As a result it can be more accurate than self-reported data. Second, it gets you out of your office (where illusions and delusions may persist) and into your clients' workspace (where you must confront reality). To maximize the chances of acquiring quality data, be clear about your objectives, anticipate what types of activities you are likely to observe and the type of data you can reasonably draw from those activities, record your observations accurately without judgment, and do not interrupt the subject of observation more than absolutely necessary.
2. Survey. With the advent of Survey Monkey, SharePoint, Google Forms and other tools, surveys are now a little too easy to conduct. In our rush to ask questions, we sometimes disregard what the social sciences have learned about how to conduct a survey properly: how to choose the right questions to ask, how to create pre-set responses that are mutually exclusive and exhaustive, how to interpret the responses received, and how to report the responses and your analysis in an ethical manner. The other key methodological issue is understanding what participation and response levels provide the most reliable data. A final point is to ask only questions that can yield actionable information. For instance, asking if people are satisfied is interesting, but by itself does not allow you to act. What you really need to know is why they are dissatisfied, which you can learn by asking a follow-up question of those who indicated dissatisfaction.
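That branching pattern, where the actionable "why" is collected only from dissatisfied respondents, can be sketched as follows. The response records and reason strings are hypothetical, invented purely for illustration:

```python
from collections import Counter

# Hypothetical survey responses; field layout is illustrative.
# Each record: (respondent_id, satisfaction, reason_if_dissatisfied)
responses = [
    ("r1", "satisfied", None),
    ("r2", "dissatisfied", "search results are poor"),
    ("r3", "dissatisfied", "content is out of date"),
    ("r4", "satisfied", None),
    ("r5", "dissatisfied", "search results are poor"),
]

def dissatisfaction_reasons(responses):
    """Tally the follow-up 'why' answers from dissatisfied respondents only."""
    return Counter(
        reason for _, sat, reason in responses
        if sat == "dissatisfied" and reason
    )

# The most common reason is the actionable signal to act on first.
top_reason, count = dissatisfaction_reasons(responses).most_common(1)[0]
```

The satisfaction tally alone tells you that something is wrong; the tallied reasons tell you what to fix first.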
3. Interview. In law firms, the hardest part of conducting an interview may be actually getting your foot in the door of the office of the person you want to interview. Unless the topic is directly related to a current client matter, lawyers tend to reschedule or cancel interviews that happen to conflict with billable work. That said, to increase the likelihood of having a productive dialogue, you need to come prepared with a plan for using the time efficiently, including a list of critical questions that should yield the data you need, while reassuring the lawyer involved that you are not wasting their time. A data-collection interview is not the time to wing it or improvise. Go in with a script, while remaining flexible. With experience you will learn when it is safe, or advisable, to go off script in order to further explore the responses you receive. Remember, if all you want to do is gather set answers, use a survey. If you want to be able to move beyond set answers to a more nuanced discussion, conduct an interview.
4. Focus Group. Marketers have used focus groups for years. Technologists have focus groups called user groups. Consider creating a KM user group that can provide early qualitative feedback on your products and services. If you intend to collect useful data through user groups, here are some things to keep in mind:
- Make sure the right people are participating, which means that they have enough experience with the system or tool in question that their responses will be based in reality rather than theory;
- Provide the necessary preparatory materials to ensure that they arrive ready to participate and comfortable that you have a plan to use their time in the meeting efficiently;
- Create a script with questions geared to elicit the honest responses of your group;
- Have a trained facilitator who can lead the meeting and manage the personalities in the room without compromising the quality of the feedback;
- Collect their feedback as accurately as possible; and
- Report back to the group promptly so they see that their participation was valued and useful.
There are a few things to keep in mind with all of the methods described above:
With all approaches, know that people behave differently when they know someone else is paying attention. When asked to report on their thoughts or behavior, they tend to report what they think the questioner wants to hear. This is known as the social desirability bias, and is a major challenge in survey, interview, and focus group situations.2 You only have to think about the classic job interview or office meeting to understand how this works in practice. So it is critical that you do what you can to ensure the accuracy of the response data. For example, if the survey data does not lose its value by being detached from the identity of the respondents, then assure all respondents of the anonymity of their responses. That should prompt more honest responses. In an interview or focus group setting, explain with sincerity why it is important that they be brutally honest with you and do not do anything that might suggest a particular answer is desired. To promote honest responses, check your own speech and body language—make sure that you do not act defensively in any way. Otherwise, you signal that you want less than honest responses.
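Detaching identities from survey data before analysis can be as simple as stripping the identifying fields. A minimal sketch, assuming responses arrive as records with hypothetical "respondent" and "email" fields (the field names and sample data are invented for illustration):

```python
# A minimal sketch of anonymizing survey records before analysis.
# Field names and sample values are illustrative assumptions.
def anonymize(records, identity_fields=("respondent", "email")):
    """Return copies of the records with identifying fields removed."""
    return [
        {k: v for k, v in rec.items() if k not in identity_fields}
        for rec in records
    ]

raw = [
    {"respondent": "jdoe", "email": "jdoe@firm.example", "rating": 2,
     "comment": "hard to find precedents"},
    {"respondent": "asmith", "email": "asmith@firm.example", "rating": 4,
     "comment": "works well for me"},
]

clean = anonymize(raw)  # safe to share with analysts; ratings and comments survive
```

Note that free-text comments can still identify a respondent indirectly, so review them before promising anonymity.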
When collecting data through observation, try to be inconspicuous without being unethical (e.g., recording behaviors without disclosure and consent). Expect that you will have to wait until someone gets comfortable with you before you can see their "true selves" emerge for observation. Until then, they will likely self-censor in order to demonstrate what they think you want to observe.
In focus groups, be very sensitive to the presence of groupthink: the natural desire people have to conform to a group to minimize conflict and promote harmony. Turn up your personal radar so you can detect quickly when it emerges. In addition, develop strategies for reducing its occurrence, minimizing its impact, and redirecting a group that seems in danger of succumbing to it.
In surveys, interviews, and focus groups, be sure that you are not asking leading questions. The purpose of these data-gathering opportunities should be to provide new insights. If you rig the process to achieve specific ends, then you deprive yourself of the insights that might help make your work more effective.
The Quantitative versus Qualitative Trap
In our numbers-obsessed world, we sometimes fail to appreciate the value of information that is not automatically generated by a system. Without a doubt, qualitative data in the wrong hands can be more subjective than quantitative data, and that potential for subjectivity has led some to prefer quantitative over qualitative. Don't fall into that trap. Good qualitative data are every bit as useful as good quantitative data, and sometimes even more useful. As mentioned above, quantitative data can explain what is happening, but you often need qualitative data to explain why it is happening.
Be careful as you collect and analyze your data. The key to success is to handle the data ethically and with the type of professional controls that minimize the possibility of distorting the qualitative data to achieve specific results. For that matter, it is just as possible to distort quantitative data to achieve specific results. Remember the old warning about the sometimes inappropriately persuasive power of numbers.3 There are three kinds of lies: "lies, damned lies, and statistics."
Now that you know more about metrics, consider what behaviors and outcomes are critical to the success of your KM work and then focus on the metrics that shed light on those behaviors and outcomes. If you track these metrics diligently and analyze them honestly, you will gain genuine insight into your work and its impact on your law firm.
1. V. Mary Abraham. "Measure Better to Manage Better." Practice Innovations Newsletter, July 2015.
2. "Social Desirability Bias." Wikipedia. Last modified 9 June 2015.
3. "Lies, Damned Lies, and Statistics." Wikipedia. Last modified 9 Sept. 2015.