Exceptional Client Service In Law Firms

 

Analyzing And Reporting Your Client Satisfaction Survey Results

By John W. Olmstead, MBA, Ph.D., CMC

You have completed your telephone or mail client survey campaign. Completed questionnaires are everywhere. Now what? Somehow you must go from unprocessed questionnaires to results that people can understand and digest.

The key focus of any client satisfaction survey should be the implementation of action items that improve client service and satisfaction. Data analysis and report writing should always point toward the development of specific action plans. Such plans should identify the tasks to be accomplished, the individuals or groups responsible for completing them, and deadline/completion dates. A system should be established to monitor progress of action plan implementation.

There is nothing worse than asking clients for feedback and then doing nothing with it. The benefits of gathering feedback can be negated if you do not follow through on the results. Once your firm has taken the initiative to actively invite feedback, you must take action to correct at least some, if not all, of the problem areas identified. You must also act on the business opportunities identified. Going to the effort of gathering the information and then doing nothing about the problems identified is not only a waste of time and money but also increases the likelihood that future service improvement efforts will be viewed with skepticism. For this reason, you must close the loop by getting back to the people who provided the feedback. Doing so strengthens your client relationships because you not only confirm what clients told you but also show that you are making changes accordingly.

Editing and Coding Survey Data

Now it is time to get the questionnaires ready for data entry and analysis. This is time-consuming work that needs to be done in every survey involving more than a few questions, especially open-ended ones. The purpose of editing is to clean the questionnaires, making sure that responses are legible, complete, and internally consistent before they are coded and entered.

Coding the questionnaires means expressing in terms of numbers all responses that will eventually be analyzed. Typically a master list or codebook is used to keep track of all the codes used in the survey, including the ones that appear on the questionnaire and those that are added after data collection. For example:

5 = Completely Satisfied
4 = Mostly Satisfied
3 = Neither Satisfied nor Dissatisfied
2 = Mostly Dissatisfied
1 = Completely Dissatisfied

1 = Yes
2 = No

Once coding schemes have been developed, each response is assigned a numeric code, which is then entered into a database. The coding categories should fit the responses well, be kept to a minimal number, be as mutually exclusive as possible, and leave few responses in an "other" category.
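For illustration, here is a minimal sketch in Python (using the pandas library) of how such a codebook might be applied during coding; the file name and column names are hypothetical, and your own coding step may be entirely manual.

    import pandas as pd

    # Hypothetical codebook mirroring the scales shown above
    SATISFACTION_CODES = {
        "Completely Satisfied": 5,
        "Mostly Satisfied": 4,
        "Neither Satisfied nor Dissatisfied": 3,
        "Mostly Dissatisfied": 2,
        "Completely Dissatisfied": 1,
    }
    YES_NO_CODES = {"Yes": 1, "No": 2}

    # Raw text responses keyed from the questionnaires (file and
    # column names are illustrative)
    raw = pd.read_csv("raw_responses.csv")

    coded = pd.DataFrame()
    coded["overall_satisfaction"] = raw["overall_satisfaction"].map(SATISFACTION_CODES)
    coded["would_refer"] = raw["would_refer"].map(YES_NO_CODES)

    # Responses that do not fit any category become NaN, flagging
    # them for review as potential "other" responses
    print(coded.isna().sum())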

Missing data must also be handled properly. When data analysis begins, it is important to be able to distinguish between “zero,” “I don’t know,” and no response at all. For example, when calculating an average, you do not want to confuse “zero” with “I don’t know.” Typically we reserve code 99 for “not applicable” and code 98 for “don’t know” or “no opinion.” Almost all statistical software can recognize missing-value codes as values to be excluded from calculations.
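To make the distinction concrete, here is a brief Python/pandas sketch of converting the reserved codes 98 and 99 to missing values before computing an average; the numbers are illustrative.

    import pandas as pd

    # Six coded responses; 98 = don't know, 99 = not applicable
    scores = pd.Series([5, 4, 98, 3, 99, 5])

    # Recode the reserved values as missing (NaN) so that, as in SPSS,
    # they are excluded from calculations
    clean = scores.replace({98: float("nan"), 99: float("nan")})

    print(scores.mean())  # misleading: 98 and 99 inflate the average
    print(clean.mean())   # correct: 4.25, based on the four real answers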

Data Entry

The next step is to enter the data from the questionnaires into a computer dataset for statistical analysis. The exact procedure will depend on the particular software you are using, but the basic task is the same: you go through each questionnaire, one at a time, typing in the responses in order. We use one Excel spreadsheet to capture the coded responses and another to record the verbatim responses to all open-ended questions. Data entry should be done twice, into two separate datasets, which are then compared to verify accuracy.
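As a sketch of that verification step, the following Python/pandas fragment compares two independently keyed copies of the same dataset; the file names are hypothetical.

    import pandas as pd

    # Two independent keyings of the same questionnaires
    first_pass = pd.read_excel("entry_first_pass.xlsx")
    second_pass = pd.read_excel("entry_second_pass.xlsx")

    # compare() returns only the cells where the two passes disagree,
    # which is the list of entries to re-check against the paper forms
    discrepancies = first_pass.compare(second_pass)
    print(discrepancies)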

Statistical Analysis

Statistical analysis should be done by a computer program specifically designed for that task (not a spreadsheet), one that can read the dataset and conduct comprehensive statistical analysis. Appropriate programs include SPSS and SAS. We use SPSS and import the dataset from Excel.

Typically you will use descriptive statistics to summarize and describe your data. Initially you will examine data frequencies: the number (count) and percentage of respondents giving each answer to each question. After this examination you may need to go deeper and use other descriptive statistics, but only after determining whether you are working with continuous or categorical data.

Continuous data is comprised of quantity values; examples include age, number of years of employment, distances, test scores, and yearly income. Categorical data falls into groupings or divisions; examples include gender (male/female), political affiliation (Republican/Democrat/Independent/other), and favorite color (blue/green/orange/yellow). The distinction has a profound effect on which statistical methods are available, so a quick review of the relationship between these data types is essential. Continuous data yields more exact information and supports accurate assessments of the mean, range, standard deviation, variance, and other statistics. The key point to remember is that for categorical data, values such as the mean and median are nonsensical; counts and percentages should be reported instead.
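A short sketch of the point, assuming a coded dataset with one categorical question and one continuous variable (the file and column names are hypothetical):

    import pandas as pd

    df = pd.read_excel("coded_responses.xlsx")  # illustrative file name

    # Categorical question: report counts and percentages only
    counts = df["overall_satisfaction"].value_counts().sort_index()
    percents = (counts / counts.sum() * 100).round(1)
    print(pd.DataFrame({"count": counts, "percent": percents}))

    # Continuous variable: mean, standard deviation, and the rest of
    # the descriptive statistics are meaningful here
    print(df["years_as_client"].describe())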

Descriptive statistics used in typical client satisfaction surveys, depending on whether you are examining a single variable (question) by itself or in relation to another variable, include counts, percentages, mean, median, mode, standard deviation, correlations, and chi-square. Bar graphs, pie charts, pivot tables, and cross-tab tables are typically used for categorical data; histograms, pivot tables, and other graphs are used for continuous data. Inferential statistics are used less frequently.
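For the cross-tabulation and chi-square analyses mentioned above, a minimal sketch using pandas and SciPy might look like this; the column names are again hypothetical.

    import pandas as pd
    from scipy.stats import chi2_contingency

    df = pd.read_excel("coded_responses.xlsx")  # illustrative file name

    # Cross-tab of two categorical questions, e.g. satisfaction level
    # against whether the client would refer others
    table = pd.crosstab(df["overall_satisfaction"], df["would_refer"])
    print(table)

    # Chi-square test of independence between the two questions
    chi2, p, dof, expected = chi2_contingency(table)
    print(f"chi-square = {chi2:.2f}, p = {p:.4f}")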

As you analyze your data, keep in mind the following tips:

TIP #1: Know the characteristics of your data and whether you are using the right statistics and graphs to describe the data.

TIP #2: Know when a study is so flawed that it is totally worthless.

TIP #3: Ask whether the findings make sense.

TIP #4: Has your survey avoided coverage, sampling, measurement, and non-response errors? Non-response error, a major problem, occurs when a significant number of people in the survey sample do not respond to the questionnaire and differ from those who do respond in a way that affects the accuracy of the study. Low response rates are a warning that non-response error may be a problem. Depending on who is surveyed and what method is used, anything under 60-70 percent should be a red flag; a quick check appears after these tips.

TIP #5: Telephone surveys yield much higher response rates than mailed surveys.

TIP #6: Look for results that matter and look for information that you need to know to initiate changes in your client service practices.
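Here is the quick response-rate check referred to in TIP #4, with illustrative numbers:

    # Illustrative counts from a hypothetical mail survey
    surveyed = 250   # clients who received the questionnaire
    completed = 132  # usable questionnaires returned

    response_rate = completed / surveyed
    print(f"Response rate: {response_rate:.0%}")  # 53%

    # Per TIP #4, anything under roughly 60-70 percent is a red flag
    if response_rate < 0.60:
        print("Warning: non-response error may bias the results")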

It is important that the firm undertake the appropriate analysis to answer the questions that motivated the study in the first place.

Reporting Survey Results

The final report should be easy to read and understand, and it should answer the questions that prompted the study. It should discuss the implications of the findings and present information that can be acted upon. Variables (questions) that tell the story should be identified, reported, and included in graphs and tables that summarize the findings. A typical report should include the following sections:

In addition to the written report, our firm typically provides the following deliverables:

It is important that the survey findings be presented in person to members of the firm in an interactive session during which the implications are discussed at length.

Client Service Improvement Plan

As I said earlier in this article, there is nothing worse than asking clients for feedback and then doing nothing with it. The benefits of gathering feedback can be negated if you do not follow through on the results. A client service improvement plan must be developed and implemented. Details concerning the development of such a plan will be discussed in the next article in this series.

John W. Olmstead, Jr., MBA, Ph.D., CMC, is a Certified Management Consultant and the President of Olmstead & Associates, Legal Management Consultants, based in St. Louis, Missouri. The firm provides practice management, marketing, and technology consulting services to law and other professional service firms to help change and reinvent their practices. The firm helps law firms implement client service improvement programs consisting of client satisfaction surveys, program development, and training and coaching programs. Their coaching program provides attorneys and staff with one-on-one coaching to help them get “unstuck” and move forward, reinventing both themselves and their law practices.

Founded in 1984, Olmstead & Associates serves clients across the United States, ranging in size from firms with 100 professionals to solo practitioners. Dr. Olmstead is the Editor-in-Chief of “The Lawyers Competitive Edge: The Journal of Law Office Economics and Management,” published by West Group. He also serves as a member of the Legal Marketing Association (LMA) Research Committee. Dr. Olmstead may be contacted via email at jolmstead@olmsteadassoc.com. Additional articles and information are available at the firm’s web site: www.olmsteadassoc.com.

© Olmstead & Associates, 2005. All rights reserved.
