“Still loving this data. Billing over $300,000 per month that had formerly been dropped on the floor, with no negative customer impact.”

Nationwide Service Provider

The KFR Services Quality Story, Part I

A journey in discovery of the Good, the Bad, and the Big Q

By Kimberly Russo, Co-President


It’s easy for just about any company that wants to attract customers to say they’re quality focused. After all, quality is what customers want. It’s the biggest marketing buzzword out there, and it’s no mystery why. If you’re relying on accurate data to drive call rating and routing applications in the telecom industry, poor quality can cost you Big Time.

With accuracy being paramount in our field, what vendor wouldn’t want to be known as the high-quality vendor?

KFR is no exception. For three decades we’ve been defining quality, leading the way with rigorous standards and continuous improvement processes. Clients repeatedly tell us our data is top-notch, and over the years, we’ve been satisfied with this rather subjective feedback. But now, we can do something other vendors can’t. We can prove our accuracy is superior and provide the statistical information to back up our claims.

The question that started it all.

Quality can be claimed by just about anyone. Quality can be preached by people with decades of experience, or heralded on the web site of a company that’s only been around for a few days. But saying you’re quality focused doesn’t necessarily make it so. Which led me to wonder: can accuracy be proven in our environment?

And thus began my journey...

In a world full of exaggerated claims and meaningless assertions, I set out to prove, first to myself and then to the industry at large, what our long-time customers already know: KFR really is at the top of the game when it comes to data accuracy.

I began zealously dissecting the accuracy information we had collected to date, crunching the numbers that my operations and customer service teams were tracking. What I found was impressive.

The Good

The numbers available then were based on internal error checking. They showed how many data files were found to have one or more errors during our strict quality control processes. Using the number of errors and the number of data files included in our products, a bit of simple arithmetic showed our accuracy to be 99.75 percent.
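To make that arithmetic concrete, here’s a minimal sketch in Python using purely hypothetical counts (the actual file totals aren’t shown here); 25 flagged files out of 10,000 lands on the same 99.75 percent.

```python
# Hypothetical counts: 25 of 10,000 files flagged with one or more errors by QC.
files_in_product = 10_000
files_with_errors = 25

accuracy_pct = (1 - files_with_errors / files_in_product) * 100
print(f"Accuracy: {accuracy_pct:.2f}%")  # Accuracy: 99.75%
```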

Now keep in mind that the 0.25 percent of files in our database that were found to be in error during the quality assurance process could be corrected before they were ever delivered to clients, resulting in a near-perfect database upon release. It was 2001, and this was really leading edge for our market niche. We were the only company out there providing any statistical information to prove our data quality.

I was pumped up! I was like a pocket-protector-wearing statistician, eager to see what the numbers would show with a bit more analysis (which isn’t easy for someone who hasn’t added more than two numbers together without a calculator since 1989). I knew I needed help finding more meaningful statistics.

“What you’re missing here,” I was curtly informed by the true pocket-protector wearers in my company, “is information on how many customer complaints show that an error did in fact get through the quality control process.”

“OK. Right!” I thought. We do on occasion have customer questions that, when investigated, show an error in the delivered data. I set to work, determined to ensure my statistics were valid and as accurate as our database itself.

But while my new direction on the stats seemed logical, I soon uncovered a problem. I couldn’t find historical information on how many times we’d corrected and reshipped data to customers.

Back to the techies I went. “Hey, can you guys give me some info on how many times we’ve had to correct data and reship to customers due to errors?” I asked.

“That doesn’t happen too often,” was the response I received, “So we’ve never tracked that.”

I was devastated. A true flaw in my quality measurements had been exposed, and the adjustment that would validate the data I’d been gathering for a year was not in sight. I was at a crossroads, but failure was not an option. I pressed on.

As a team, we started tracking the reworks and recalculating our statistics to show the number of customer-found errors reported each month versus the number of file changes implemented in the month the report of the error was made. This method of calculating our accuracy was much tougher on us because we were including only the files changed that month in the equation, as opposed to all files in the database. But it was a truer picture of the quality of the data as customers received it.
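As a rough sketch of that tougher calculation, here’s what a single month might look like in Python; the function and the counts are purely illustrative, not our production tooling.

```python
def monthly_accuracy(customer_found_errors: int, files_changed: int) -> float:
    """Percent of the files changed in a month that drew no valid customer-reported error."""
    if files_changed == 0:
        return 100.0
    return (1 - customer_found_errors / files_changed) * 100

# Illustrative month: 3 valid customer-reported errors against 1,500 files changed.
print(f"{monthly_accuracy(3, 1_500):.2f}%")  # 99.80%
```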

After a year of tracking this data, we arrived at 2003 with our accuracy looking something like this...

Quality Statistics Chart

Our efforts to measure our quality were on a roll! Customers and prospects were impressed with our data and with the fact that we could measure accuracy in the first place! Our operations teams were happy with the specific performance standards. And I was working it from the sales and marketing side!

The Bad

Then, perhaps cocky from the results of my efforts to date, I made an announcement at a management meeting that will live in infamy. I was proud about what the stats reflected, and as a marketing professional, I was enticed by the idea of raising the bar on data quality in our market niche.

“What we really need in order to prove how good we are,” I said, with a bit too much confidence, “is a Service Level Agreement. A guarantee, if you will, that really shows we stand behind our quality.”

The reaction I got was...

Complete silence.

And then came the barrage of questions and comments for which I had no reply.

“How do you know whether your statistics are calculated in a valid way?”

“What data would be included in the guarantee?”

“Are customers really asking for this?”

“If we’re going to do this, we need to do it right.”

That’s when I realized I was in over my head.

The Big Q

Enter Tom Redman, data quality expert extraordinaire.

Dr. Redman established the AT&T Bell Laboratories Data Quality Lab in 1987 and led it until 1995. There he created the Applied Research Program that produced many of today's methods for improving data quality and saved AT&T tens of millions of dollars. He is the leading inventor of practical techniques to help organizations improve amidst the explosion of data (and data quality problems) created in the information age. In short, Dr. Redman lives and breathes data quality day in and day out. (He says this condition is not fatal. We sure hope he’s right!)

At our first face-to-face meeting with our new quality guru, he set us on a dizzying course to measure our accuracy in a statistically legitimate way. “You just need to be driven by the Big Q,” he said.

Tom stressed we should calculate quality using the most stringent measurements available because of the level of quality we’d achieved to date, and because accuracy was so important to our customers.

When Tom says “stringent,” we found he doesn’t just mean the dictionary definition: “imposing rigorous standards of performance; severe.”

No. What our data guru actually means by stringent is that we should employ standards higher than Six Sigma, the well-known, rigorous, and disciplined methodology that uses data and statistical analysis to measure and improve quality.

So, reluctantly, rather than measuring the number of incorrect bytes in a given data set, or the number of incorrect fields as Six Sigma standards would require, we began measuring the number of incorrect files received by customers against the number of files that were changed. If one teeny-tiny byte in a file with 1,700 fields is wrong, we consider the entire file wrong. Any error, omission, or typo that makes the file imperfect in any way means the file is wrong. Period.
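A tiny sketch of that file-level strictness (the function and the field values are hypothetical, not our actual comparison tooling): any difference at all, and the whole file counts as wrong.

```python
def file_is_wrong(delivered_fields: dict, correct_fields: dict) -> bool:
    # Any mismatch at all, in any of the file's ~1,700 fields, makes the whole file wrong.
    return delivered_fields != correct_fields

# One typo in one field is enough:
print(file_is_wrong({"rate_center": "CHARLESTN"}, {"rate_center": "CHARLESTON"}))  # True
```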

Next, we started a more thorough tracking of the changes we made. When a customer reported an error, and the report was proven valid, we could count the error in the month the change was made instead of the month in which it was reported, producing a more precise measurement.

Seems reasonable, but there is a catch. If a customer, in January of 2006, finds 8 zillion files that are incorrect, and we trace the error back to a tariff interpretation problem in January of 2005, then suddenly a month that had been stellar on our graph for a year could shift to a very dim month indeed. In essence, we created a moving target.
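Here’s a hypothetical sketch of that moving target; the months and counts are invented purely to show how a restated error can dim a previously stellar month.

```python
from collections import defaultdict

# Errors are charged to the month the faulty change was made, not the month it was reported.
files_changed_by_month = {"2005-01": 2_000, "2006-01": 1_800}
errors_by_change_month = defaultdict(int)

# January 2006: a customer reports a pile of bad files; the investigation traces them
# back to a tariff interpretation error made in January 2005.
errors_by_change_month["2005-01"] += 150  # illustrative count

for month, changed in files_changed_by_month.items():
    errors = errors_by_change_month[month]
    print(month, f"{(1 - errors / changed) * 100:.2f}%")
# 2005-01 is restated from a stellar 100.00% down to 92.50%; 2006-01 is untouched.
```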

The results of these changes in our measurements are shown in this graph:

Measurement Parameters Chart

Goal #1

With our measurement parameters in check, we started to set objectives for increasing our accuracy percentage.

Our first goal is to eliminate what came to be known as the “bad” months. (You can spot the “bad” months pretty easily on the chart. I hope that’s enough said about that.) In studying the months that fell into this category over the past three years, we determined that in each case the cause was a single error, multiplied across a large number of files. In fact, in every case, the “bad months” were created by errors that affected just one client. Generally, these problems were human errors that can be corrected by taking staff members out back and beating them with a ...wait... I mean they can be corrected with additional training and additional automation.

We set out to eliminate those bad months by:

1. Adding more quality control measures to our delivery procedures. (It seems that in some cases, “bad” months were the result of not shipping all files to a customer, even though the files themselves were accurate. The new quality controls in our delivery processes are designed to prevent a recurrence of this “shooting ourselves in the foot” syndrome.)

2. Undertaking a thorough post-mortem review of errors to gain a full understanding of cause, effect, and potential preventative measures.

3. Locating opportunities to move the quality effort earlier in the process to avoid errors in the first place.

4. Creating a plan to eliminate tariff interpretation errors by providing additional training, including:

a. Ongoing weekly training sessions for local researchers to discuss tariff exceptions, unique circumstances, and new tariff concepts, as well as special techniques or information discovered during data maintenance.

b. A thorough six-month review of our procedures manual, including modifications, additions, and deletions across all procedures. This manual includes detailed instructions on over 100 separate processes undertaken to produce our databases.

c. Ongoing “Mock Calling Area Exercises”. No, these are not sessions where we make fun of the calling areas while doing jumping jacks. These are our version of the “Top Gun” school for fighter pilots, where the best of the best come to get better. In these exercises, local researchers are given copies of actual data files that have been changed in a test environment to recreate previous errors. The researchers are challenged to find and correct the errors. Trick questions are allowed, which offers an explanation for the ghoulish laughter we hear coming from the office of the twisted soul who mucks with the data to make the Mock Calling Areas in the first place.

5. Reviewing errors found during the quality control process for ideas on additional automation to eliminate human errors.

Future Goals

Once we’ve gone a whole year without a “bad” month, we’ll check off that goal and use the current accuracy statistic as our first benchmark.

At that point, we’ll put our next goal into effect: Cutting our error rate in half each year. So if our accuracy rate proves to be at 99.95% when the bad months are eliminated, our goal for the subsequent 12 months will be 99.975%.
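Here’s the arithmetic spelled out, starting from that hypothetical 99.95 percent benchmark:

```python
accuracy = 99.95  # hypothetical benchmark once the bad months are gone (percent)
for year in range(1, 3):
    error_rate = 100 - accuracy
    accuracy = 100 - error_rate / 2  # cut the error rate in half
    print(f"Year {year} goal: {accuracy:.4f}%")
# Year 1 goal: 99.9750%   Year 2 goal: 99.9875%
```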

This is an ongoing process. And we pledge to report these statistics to you monthly via our web site. (Unless they’re really bad.) No, we’ll do it every month regardless of the results. We are human, and we do occasionally make mistakes. But mistakes are also opportunities for us to further refine our quality assurance processes and eliminate future errors.

The Reward

What this means for clients is that you know what you’re getting, and what you’re paying for. It means you can trust KFR as your sole-source provider because you know our level of accuracy. And while producing data this good generally means we’re not the lowest-priced vendor on the surface, we do offer the lowest total cost.

Lowest Total Cost & Dr. Redman’s Rule of Ten:

“For every dollar it costs you to do work when the data you receive is perfect, it costs $10 to do that work when the data contains errors.”

In fact, Tom has calculated that poor quality data costs the typical organization 20% of revenue. That’s no laughing matter. Can your organization afford poor quality data, even if it’s less expensive initially?

Freedom from data errors means the cost of doing your work will be 90% less than the cost of doing that work with an inaccurate data source.

Freedom from data errors means you can free up 20% of your revenue for investment in other things.

Freedom from data errors means the bills you generate are accurate, causing fewer costly customer complaints and billing disputes, and fewer refunds and credits.

Let Freedom Ring, Baby!

It Comes Full Circle

But back to my Service Level Agreement (SLA), the reason we brought Tom here in the first place. How could we validly define our accuracy and set up a meaningful guarantee for our customers? When would he answer the barrage of questions I’d been thrown that fateful day at our management meeting? Weren’t those his problems now (I hoped)?

That’s when Tom made a bold, if not revolutionary, statement. He shared with us what he had discovered after consulting for numerous customers of data vendors.

“Customers don’t want their money back,” he stated. “A small fraction of fees returned if the data is wrong is meaningless. Customers just want the data to be right the first time.”

Wow! Talk about a “back-to-the-basics” approach! Tom stated the obvious point we’d been overlooking in writing an SLA.

Tom convinced us that our SLA should be reward based, not penalty based. But the reward’s not for us when we do a good job. After all, that’s what you pay us for. The reward is for you when you find an error.

Introducing...Let Them Eat Cookies!

It’s a big bonus for us when a customer reports a suspected error. In seventy percent of these cases, our data is proven correct. But when the investigation spawned by the report does turn up an error, we get to correct it and make our database even stronger. It also gives us more ammo with which to train our Top Guns and keep them sharp. When you suspect an error in our data, email our Cookie Hot-Line at cookies@telecomdb.com.

If we did, in fact, deliver flawed data, we’ll send you a box of goodies from the world-famous Charleston Cookie Company right here in Charleston, South Carolina. Trust me; these are better than Mom’s double fudge-peanut butter-chocolate chip cookies. (Sorry Mom!)

In addition, as our quality initiatives continue, we’ll be working on developing more traditional SLAs that will focus on what customers time and again tell us are the most important attributes of our service: 1) data accuracy; 2) help desk results; and 3) on-time delivery. Once our goal of eliminating bad months is reached we’ll have a baseline on which to develop further customer assurances.

Does all this seem like overkill?

Well, maybe to an outsider. But from my perspective, absolutely not! As a small, family-owned company generating revenue solely from the data we provide to the industry, we know that our business lives and dies, literally, by the accuracy of the data we deliver.

While we are a micro company by telecom industry standards, the biggest players have come to rely on us. Our data is used to determine billing on billions of landline and wireless calls each month. A small error becomes enormous in the robust environment in which our data resides. It is precisely because of this that we insist that our customers’ experiences with our data are virtually perfect. We have learned to be continuously driven by the Big Q.

Now, you may think this is the end of the story...no way! With the Big Q in the driver’s seat, you’ve got a passenger-side view to vicariously go along for the ride! Join us for Part II of the Quality Story.

Follow our progress! Check our monthly stats updates at http://www.telecomdb.com/expertise/.

 