Incentive Compensation and Training Residents. How shall we manage the intersection? (Part 2)

Last month I told you about the economic challenges arising from the transformation of our practice into one with both clinical and academic missions. I wrote about our charting improvement initiative and about engaging concerned staff colleagues both in developing solutions and in bringing the broader group into implementing them. I promised that this month I would discuss modifying incentive structures so that an academic mission can be explicitly accommodated. Let’s consider some relevant observations.

As long-time readers know, I have a bias towards quantitative evaluation whenever possible, so the recent publication[1] of a relative value scale for teaching gives me some hope for more quantitative evaluation of teaching contributions among physician staff. However, this tool alone, even fully implemented as described by the authors, probably won’t serve as the sole measure of academic productivity. Aside from my general reluctance, based on Deming’s warning about managing on numbers alone (see my July 2003 column), teaching and scholarly activities pose a particular problem: how shall we assign value when the work done benefits the worker—the teaching physician—too?

The pressure to maximize revenue isn’t unique to purely clinical practices. As an emed-l poster commented, referring to using RVUs as the driver for all compensation, “academic programs in particular, considering the payer environment most of them are in, need the clinical production efficiency every bit or more than the community ED.” Yet in a double-coverage ED, by supervising residents, particularly senior residents who are themselves overseeing the work of more junior residents, I would accumulate many more RVUs per hour worked than a physician in the same ED just seeing patients. Since that other physician’s work indirectly frees me to spend time with the residents and accumulate their RVUs, some adjustment is probably appropriate in fairness.

In the context of direct teaching activity, while Khan and colleagues’ approach shows some merit, as another poster has opined in the past, “One must have adequate protected time (which is quite expensive) to further the department’s goals. Someone who runs the department, runs the residency, runs a course in the medical school, directs the research for the department—all of these need protected time. I’ve seen papers published about how protected time is counted, and am quite shocked, to be frank, that attendings are given (paid) protected time to attend (not teach, just attend) resident conferences and faculty meetings. Not that I don’t think they shouldn’t attend. But, if you have to divvy up for every minute that your faculty member spends beyond strictly clinical hours and reward it with paid protected time, you’re paying out hundreds of thousands of dollars a year just to have your folks come sit in a room, with little to show at the end of the year. It is crucial that protected time unfunded by outside resources be used wisely, if we hope to progress as a specialty. If you have to ‘spend’ it for every minute outside of a clinical shift, nothing will be left for the larger stuff, like having a productive department as a whole.”

This colleague works where the group has come to a common understanding, where certain duties are “just part of the deal . . . there were some irreducible contributions as faculty which would not be measured against ‘protected time’. This included attending faculty meetings, grand rounds, and being asked to give a talk on some subject every month or two, if needed. It may also include participation in hospital and medical school committees.”

You might argue that we should be separately compensated or supported for all of the non-clinical activities and that’s a nice idea but not reality at our hospital or in the academic environments I know. I don’t think full support for academic activities exists anywhere; that corner of our society is still a bit of a social welfare state.

So taking all these inputs together, I’ve chosen to consider the issue along the lines of the “Leading Beyond the Bottom Line” principles I’ve written about, adapting principles first articulated by authors from the American College of Physician Executives.

It seems to me that our developing clinical and academic department should strive for the Pareto optimum among patient care, community service, organizational economic well-being, staff well-being (economic, professional development and others) and, unique to the academic environment, a fifth good: academic accomplishment (perhaps measured through RRC approval, peer academic recognition, scholarly productivity including grant funding and others, as per Khan). I don’t believe that it is possible to divorce clinical productivity from the rest and optimize it alone. We teach residents from almost every specialty, a significant fraction of patients admitted in most teaching hospitals come through the ED, we teach medical students, and so on. These academic interactions matter to our institutional standing, to those at the medical school seeking promotion and tenure (hence compensation), and to other tangible and intangible aspects of life in our clinical and academic worlds.

I still think we’re in an era of “local solutions.” I’m a big supporter of all of these metrics as inputs, but not as direct drivers of resource allocation decisions or incentive pay. Ultimately, in seeking a Pareto optimum, I subscribe to Deming’s admonition (his seventh “deadly disease”) that not all measures are known or even knowable. As a consequence, a degree of subjectivity remains, and it’s the leader’s responsibility to exert that subjectivity “fairly.”

Yet, as we at Maimonides Medical Center undertake to develop a fair allocation method, I’ll offer this essay and its references to my colleagues on our Faculty Practice Finance Committee and seek both their solutions and their engagement of their colleagues in crafting them. When we get there, I’ll let you know how it turns out.


[1] Khan NS, Simon HK. Development and implementation of a relative value scale for teaching in emergency medicine: the teaching relative value unit. Acad Emerg Med 2003;10(8):904-907.

Incentive Compensation and Training Residents. How shall we manage the intersection? (Part 1)

Next month our third class of EM residents will start and we’ll have attained a full complement of emergency medicine residents, taking another step in our transformation from a purely service focused organization to a department delivering service and education while undertaking scholarly activities.

As I write this column the transformation comes to mind since we’ve just distributed our semi-annual profit-sharing and incentive payments, based as in the past on measurable clinical, academic and administrative productivity and my subjective evaluation of the individual physician’s contributions to the department, with one quarter of the profit-sharing “pot” attributable to each factor. (I’m describing only our profit-sharing plan and not our base compensation plan, which is that of hospital-employed physicians.)

The residency program has imposed its own demands on practice earnings, though we’re not a medical school and thus pay no “dean’s tax.” I had agreed long ago, when contracting with the hospital, that the faculty practice would support certain aspects of the residency in lieu of a dean’s tax. That agreement, and the steadily increasing pressure on revenues all of us in medical practice have experienced over the years, have appropriately raised staff concerns about profit-sharing income.

This month and next I’m going to describe our department’s efforts at improving financial performance while broadening the group members’ understanding of practice finances and fulfilling our educational mission. This month I’ll focus mostly on our billing improvement and educational efforts. Next month I’ll share some early thoughts about transforming our incentive program in keeping with the transformation of our department.

Recently, several outspoken physicians have worked on improving our charting quality as a first step to improving our revenues. We had long ago implemented straightforward housekeeping improvements: assuring charts were completed and signed, making sure we didn’t lose charts for billing, confirming that updates of insurance information to the hospital were shared with the practice. These and other best practices have been taught by many and I won’t review them here.

What’s new is our giving near real-time feedback to our physicians regarding the completeness of their charts. Though the value of regular feedback is obvious to all, making it happen can be a challenge with emergency physicians working shifts—coming and going each on his or her own schedule. In the era of paper charts, and given realistic concerns regarding billing practices, it had been impossible for us to accomplish. More recently, however, our electronic medical record and the hospital’s virtual private network have permitted our staff to log on from home to complete their charts. One of our physician staff built a web-based application that our coders use to send email to each physician about their incomplete charts and that tracks the physician’s completion of those charts when completion is appropriate. Not all charts are appropriate for completion.

In consultation with hospital compliance and legal staff, and after discussion with several professional EM chart coding companies, we decided that only charts missing an entire section (history of present illness, past medical/surgical history, social history, family history, review of systems or physical examination) would be referred for completion. Charts that addressed each area but appeared to our coders as lacking usual documentation would be referred for “educational review,” but no expectation of chart completion would attach to the referral, and coding and billing would be based on the original chart. We set a 72-hour limit on the physician’s response to the request. After 72 hours, even woefully incomplete charts are coded as-is.
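
For the quantitatively inclined reader, here is a minimal sketch in Python of the referral rule just described. The section names, data layout and function are hypothetical, not our actual coding application, and the coders’ judgment call of flagging thin-but-complete charts for educational review is deliberately not modeled.

from datetime import datetime, timedelta

# Sections that must each be present for a chart to be coded without referral.
REQUIRED_SECTIONS = [
    "history_of_present_illness",
    "past_medical_surgical_history",
    "social_history",
    "family_history",
    "review_of_systems",
    "physical_examination",
]

def triage_chart(chart, referred_at):
    """Decide whether a coded chart goes back to the physician for completion.

    chart: dict mapping section name to documented text (empty or missing means absent)
    referred_at: datetime when the coder reviews the chart
    """
    missing = [s for s in REQUIRED_SECTIONS if not chart.get(s)]
    if missing:
        # One or more entire sections absent: refer for completion, 72-hour window.
        return {
            "action": "refer_for_completion",
            "missing_sections": missing,
            "complete_by": referred_at + timedelta(hours=72),  # afterwards, code as-is
        }
    # All sections present: code and bill from the original chart.
    # Flagging thin-but-complete charts for "educational review" remains a coder judgment.
    return {"action": "code_as_is", "missing_sections": []}

result = triage_chart({"history_of_present_illness": "three days of cough..."},
                      datetime(2003, 9, 1, 8, 0))
# result["action"] == "refer_for_completion"; five sections are missing.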

Through this feedback and education process, coupled with attention to our coders’ training, we’ve seen a reduction in incomplete charts and a general improvement (measured as RVUs/patient) in charting quality. It’s no surprise that some physicians have shown greater improvement than others. We anticipate revenue improvements as these early charting improvements translate into higher charges.

I’m sure you’re equally enthusiastic about improving collections, but not everyone has an electronic medical record and a hospital-supported virtual private network. Yet everyone does have colleagues who are not as preoccupied as you are. Engaging them in solving your revenue problem makes all the difference. It’s not merely a matter of demanding the behavior change; rather, you want others invested in making the behavior change across all group members and in making the change stick.


Supporting those junior colleagues requires both structure and constant mentoring from you. For structure, I’ve created a faculty practice finance committee that will, I expect, develop further improvement ideas. For mentoring, I’ve offered my time and unfettered access to the semi-annual accounting of the practice plan’s revenues and expenses—though individual physicians’ compensation will be available only in aggregate. I’ve also just ordered them each a copy of a book I first mentioned in my May 2000 column: Fisher and Sharp, Getting It Done: How to Lead When You’re Not in Charge. HarperBusiness, 1999; ISBN 0887309585.

With these efforts I expect that our team will not only develop solutions, but will also bring in others from the group to improve upon the solutions the committee itself develops. Next month I’ll discuss how our group, now engaged in both clinical service and residency teaching, might go about developing a profit-sharing plan that suits all medical staff even though physicians’ activities differ.

Developing Your Doctors: Management Complementing Your Leadership

A decade ago, firmly ensconced in an academic health sciences center and a relatively recently tenured full professor, I became a participant in an exercise intended to assure the medical school that tenured faculty remained productive. A worthy idea, but various events conspired to leave me with little more to show for it than a loose-leaf binder full of barely intelligible notes, some beautifully printed but poorly documented forms and a floppy disk full of WordPerfect™ templates.

Over the years I’ve been at Maimonides, I’ve reworked those forms and the ideas embedded in them for our employed, but non-tenured, physician practice. Harkening back to the “Pyramid of Medical Staff Development” from my July 2000 column, it seemed to me that I needed a management tool that would implement my leadership idea. Then too, as our department focused on developing a residency faculty out of a clinically focused physician staff, the ambitions of individual physicians became manifest—supporting each physician’s professional development and security seemed appropriate to the process.

As shown in Figure 1, the “pyramid” develops from recruitment through the phases of managing a medical staff. Like you, I have a fine staff; but do they understand not only the mission of the department but also the strategic objectives the hospital and department intend to accomplish this year, and next? Only if I tell them. Yet at the same time these physicians have ambitions for their own careers, and I’m responsible for helping them attain those.

So I’ve worked and reworked the decade-old forms and instructions and developed a package that we distributed last year, but I really didn’t fully follow through on implementing all of the steps. This year, with the help of my office assistant, I hope to do a better job of staying on schedule and take this professional development tool from an idea, to an accomplishment, to a routine component of my work. The essence of the tool is shown in Figure 2.

A cover sheet with a preamble and signature/date blocks for you and the staff member covers however many sheets are needed for your staff member to describe their ambitions for the forthcoming contract year. The broad areas of accomplishment should fit with your environment. In a small community hospital practice you may include nothing more than “clinical service,” “service to department, institution and community” and “other activities.” At an academic health sciences center that doesn’t impose its own approach, you may find a single category for “scholarship” overly restrictive. An additional sheet allows for the staff member’s personal goal statement for the next 2-5 years (a junior person probably won’t want to look further ahead; a mid-career physician may enjoy the opportunity to express a desire for stability or change over the period). It also provides room for a time breakdown—in percentages—among the categories included.

Mail or email the forms to your staff members and ask them to complete the forms prior to meeting with you at contract renewal time, perhaps in the quarter prior to the new contract year. Regardless of the number of categories selected, the form is a tool that precipitates one-on-one discussion and deliberation with you. It provides you an opportunity to understand your staff member’s horizon while allowing you a chance to share your specific objectives for your department and to come to a “fit.” Don’t merely accept the input; discuss and “test” it by asking yourself and your staff member how you will both know that the objective has been met.

By encouraging at least two regular meetings annually at which clinical productivity and clinical performance can also be reviewed, you provide the feedback necessary for your physician staff’s growth in performance. You also build confidence in your leadership by conveying your continuing interest in and support of each staff member’s overall professional growth and development.

At the beginning of each subsequent year, your staff member returns with both a new set of forms for the forthcoming year and the previous year’s forms completed with their self-assessment of their level of accomplishment. This meeting becomes both a review and a look forward with the forms providing a chronological record of objectives and expectations asserted and a measure of their accomplishment—together a fine management accomplishment in support of your leadership.

Figure 2

[Area of accomplishment & evaluation (see list below)]
(Make all boxes auto-resizable in your word processor.)

Physician Annual Objectives:

List specific, measurable accomplishments; ask staff member to complete this section prior to meeting with you at contract renewal time.  Don’t hesitate to edit and enhance with your staff member during your meeting.

Chair’s Response:

Describe exactly what support will be provided; complete in longhand at time of meeting with staff member for contract renewal.

Level of Accomplishment:

Entries made by staff member continuously as objectives accomplished;  reviewed with chair at ~5-6 months into contract year and again at contract renewal time.

Chair’s Response:

Interim evaluation entered related just to these objectives at mid-year meeting; overall evaluation on these objectives entered at contract renewal time.

Areas of accomplishment & evaluation:
1. Clinical Service
2. Teaching of Residents, Interns and Medical Students
3. Teaching of other than above (might include EMS personnel, nurses, PAs, etc.)
4. Scholarship (including publication in peer-reviewed journals, books and other critically reviewed forums; invited presentations outside of your institution)
5. Professional Service to department, institution and community (departmental administrative work, speaking to medical staff or other departments, representing the department or institution to community or governmental groups, health fairs, becoming a “nighthawk”, etc.)
6. Other activities (self-improvement, including adding ultrasound credentialing, pursuing an advanced degree, etc.)

We’re in the Business of the Trees.

My title this month came to me after an animated conversation with Rick Heffernan, MPH, Director, Data Analysis Unit, Bureau of Communicable Disease, New York City Department of Health and Mental Hygiene (NYDOHMH). I had received, several days earlier, an email message including the “emergency department syndromic surveillance quarterly report.” Just what is this mouthful and why should it matter to us in the ED? Well, as Rick and I agreed in conversation, the health department is “in the business of the forest” and since we see patients one-by-one, we’re in the business of the trees. Together, we can and should learn more about the community population both the health department and we serve. More about working with your local health department later.

The health department’s Quarterly Report of Syndromic Surveillance was replete with graphs comparing visits to our ED with citywide ED visits secondary to complaints of diarrheal, respiratory and fever-flu syndromes in patients 13 and older, as well as overall ED visits (see the graph examples). These quarterly reports are of more academic than practical interest, but they presage a future with more real-time scrutiny of patient populations in our own emergency departments.

I’ve yet to find a succinct formal definition for syndromic surveillance, so bear with me as I try to explain what’s happening in New York City and in many other urban centers around the country.

Syndromic surveillance is the organized observation and analysis of patients seeking medical care as part of a monitoring effort for a community at risk of bio-terrorism. In the immediate post-September 11, 2001 period, the Centers for Disease Control and Prevention and the New York City Department of Health and Mental Hygiene, with the cooperation of 15 hospitals, set up a manual system watching for a secondary bio-terrorism attack that some feared was likely in the aftermath. The subsequent anthrax bio-terrorism, which was not discovered through syndromic surveillance, was nonetheless another spur for creating automated, rather than manual, systems. It is through this approach, which is already being refined, that we may someday find syndromic surveillance operating in our own EDs.

In the long run, syndromic surveillance must provide value for the physician working in the ED and that will only happen if we are “cued” to look for an illness we otherwise wouldn’t expect. The prepared mind will recognize a pattern that the unprepared will not—that’s why practitioners read and study. Tracking the presenting complaints of our patients and notifying the team on duty in the ED at the time a patient presents with a complaint that’s appearing at higher than usual frequency may encourage the emergency physician and nurse team to contemplate a broader differential diagnosis—even illnesses associated with bio-terrorism.

We could have used such a system earlier this year, when we experienced an outbreak of measles (rubeola) in our community. A colleague missed the diagnosis in a teenage patient who had received the vaccine both as a toddler and at age 12. We were alerted by the pediatrician a day later—how embarrassing to report the miss to the health department. On the other hand, we had at least not had the child and parent wait in the waiting room, but had brought them into the ED to a closed room because of the rash and concomitant respiratory complaints. This all occurred during the SARS surveillance of April-May 2003, when all of us were focusing on respiratory illness and travel history, not rash. It’s doubtful most physicians would have made the diagnosis in any case given the history of immunization and the relative rarity of an acute presentation.

This one case of measles did alert us to its presence in our community, so we—just as you would—created handouts with photos and the key signs for recognizing the disease on presentation. Sure enough, a few days later another child presented to the ED and the treating physician recognized the presentation as most likely measles and obtained infectious disease consultation. Serum for antibody titers was drawn, and the pediatrician who had seen the child in a crowded office earlier the same day was called, alerted and advised to check the immunization records of the children who had been in his office at the time. Still, we once again missed a required step: no one called the health department for this reportable disease. Over the next week we saw several additional cases of measles, and I know from colleagues that the outbreak, and the initial failure to recognize and report the disease, occurred repeatedly across Brooklyn, Queens and nearby suburban communities. The health department did a marvelous, but onerous, job of contact tracing every identified measles patient and those exposed to them so that vaccine could be administered and isolation encouraged as appropriate.

I enjoyed my conversation with Mr. Heffernan and others at the NYDOHMH. We’ve let our entire physician, nurse and technical support staffs know that the health department wants notification on suspicion of communicable disease and not to wait for confirmation. The group at the health department we worked with around the measles outbreak was not the same as Mr. Heffernan’s group. Yet our experience of the outbreak made our entire staff more aware of the value the NYDOHMH brings across the board. While I think that syndromic surveillance is a wonderful theory, we who see the patients one at a time will be alerting the authorities before they alert us for some time yet—because we’re in the business of the trees.

[Graphs: HospDiar, HospFev, HospResp, HospTot]

Have the ambulances stopped coming? Is it real or only accounting?

As I sit down to write this column in early July, I’ve been pondering our first six-month results. You’ll recall that a few months ago I wrote about our electronic medical record startup in April; with a few months of experience and the statistics from the first half of this year in hand, well, suffice it to say this column nearly wrote itself.

Yes, the ambulances are still bringing patients to the Maimonides Medical Center ED, but for a week or so it appeared as though we had many fewer ambulance patients in the first half of this year than in the same period last year, even though total patient volume was up 1.6% and admissions were nearly constant. So while I still don’t have all the answers as I write, I think our experience—particularly because it derives from our electronic medical record (EMR) implementation—is instructive.

When Charlie Howe, our billing manager, brought the first six months of statistics to me—an unusual hand delivery—he called my attention to the drop-off in ambulance patients, particularly for April-June. When we drilled down on the data that day, it appeared as though something had happened to patient deliveries (“ambulance drops”) from the voluntary hospital ambulances and, to a lesser degree, from our volunteer community-based ambulances.

EMS in New York City, led by the Fire Department of New York (FDNY), is an amalgam of the publicly operated fire department ambulances, voluntary hospital-based ambulance services also dispatched through 9-1-1, and a web of community-based volunteer ambulances and proprietary ambulances dispatched through seven-digit telephone numbers. One of the larger and better-organized volunteer services in New York is the Orthodox Jewish group, Hatzolah. Our hospital sees Hatzolah volunteers mostly from two nearby communities, though other Hatzolah volunteer organizations also bring us patients less regularly. Other community-based volunteers, Bravo and the Bensonhurst Community Ambulance Service, bring us patients, as do several nearby voluntary hospitals and our own hospital-based ambulance service. The ambulances dispatched through 9-1-1, whether FDNY or voluntary hospital-based, use the same ambulance call report (ACR), while most of the volunteer ambulances and the proprietary ambulances use their own forms for meeting state requirements.

In trying to find out what was happening to our “ambulance drops,” we asked: Were there really many fewer ambulance drops, was it merely accounting, or some of both?

Well I didn’t ask myself this question before I reported the apparent reduction in ambulance drops to our CEO. I reported it as a fact—uh oh.

Stanley Brezenoff, Maimonides’ CEO, is a long-time public servant. His past seven years at Maimonides have been one of his very few private sector employment experiences. A skilled administrator and troubleshooter, he has taught me about the emergency department’s role in reaching out to the community. His leadership has transformed Maimonides even as he has personally visited more than 150 community organizations to listen to them and learn from them what they needed from the hospital. Sometimes he has brought me along. He has brought the message of the hospital’s openness to the communities’ needs everywhere in Brooklyn (and beyond) where he’s gone, the volunteer ambulance corps among them. He truly believes and teaches that our hospital belongs to the community—we who work there are merely stewards. He was concerned by the reported diminution in ambulance drops.

Until April 9, when we went live with our electronic medical record (see EMN, June 2002), our ambulance triage nurse completed a triage form that went to the registration clerk along with the ACR. The triage form captured, through check boxes and write-ins, the name of the ambulance company that had brought the patient. The triage nurse now enters the same information in the EMR, but that information hadn’t been made available to the registration clerk. We missed it. In all the work we did with the registration clerks around changes in workflow, we overlooked their usual process for getting the transporting ambulance information—they did not read the ACR; they read the ambulance nurse triage note.

Well, between the time we discovered the deficit in ambulance transports and the time we had drilled down to its apparent genesis, we took advantage of the fact that our Maimonides ambulances bring patients to our own ED. We undertook a one-day comparison of the registration system’s data with our own ambulance department’s ACRs. Whew! Of 11 ACRs showing Maimonides as the destination, only five were noted in the hospital registration system as Maimonides ambulance transports. Four others were attributed to other ambulance operators and two weren’t even recorded as ambulance drops—thus explaining our missing numbers.

Looking further, I realized that in April we might have experienced a true decrease in ambulance drops. To some extent this is explicable through our early days of learning the ropes with our new EMR. Yet here it was July and I was only just examining the quantitative data—I’d neglected to do so for too many months. I had spent time in April and May on the telephone with some of our volunteer ambulance providers. I’d listened to their concerns about the “computer system” and their frustration over waiting times at ambulance triage—potentially bad for patients and always bad for the volunteers who wanted to get back to work or family.

We’re responding now by reconfiguring our clerk staffing to bring a registration clerk over to the ambulance triage location, so that a clerk can enter the initial patient information while the nurse focuses on the patient and ambulance personnel. We’ve also changed the information available on the registration icon so that the registration clerk can see the nurse’s selection of the ambulance provider. We use the feature in HealthmaticsED™ whereby “mousing over” the icon opens a “tooltip” box displaying text, which now includes the means of arrival and the ambulance company’s name.

Just like at your hospital, our lab calls us with “panic” values—which sometimes don’t make sense in the given situation. What do you do? You build a safety net for the patient and then repeat the test. Next time, I’ll call the ambulance companies while rechecking the accounting—before I call my CEO. Even though he always says that bad news can’t wait; first I’ll make sure it’s news at all.

Patients are waiting. There are delays in my ED. And I’m doing something about it. – Part 2 (of 2)

Last month I wrote about some of the ways you might measure how long patients were waiting in your ED. This month I’d like to suggest some approaches to reducing the waits and delays your patients experience and your community fears.

While many different approaches to evaluating and reducing waits and delays have been used, probably one of the best breaks the patient care episode into three periods. Everyone at Maimonides Medical Center knows that we track “T1, T2 and T3.” These three intervals mark the periods from patient arrival until first physician contact, from initial physician contact until a disposition decision is made, and from the disposition decision until the patient physically leaves the ED. Various consultants use a variety of terms; the Advisory Board Company, for example, names the same three intervals in terms of their recommendations for improvement: “Expediting Time to Physician,” “Expediting Diagnosis” and “Expediting Inpatient Admission.” Of course the last of these leaves out patients who are discharged—usually the majority of patients seen. Perhaps this is not an issue in your ED, but in ours, where our physicians teach patients and hand them their printed discharge instructions, it is.
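
If your information system can export a few timestamps per visit, the three intervals are straightforward to compute. Here is a minimal sketch in Python; the field names and times are hypothetical, not those of any particular ED system.

from datetime import datetime

def visit_intervals(visit):
    """Return T1, T2 and T3 in minutes for a single ED visit."""
    minutes = lambda start, end: (end - start).total_seconds() / 60.0
    return {
        "T1": minutes(visit["arrival"], visit["first_physician_contact"]),
        "T2": minutes(visit["first_physician_contact"], visit["disposition_decision"]),
        "T3": minutes(visit["disposition_decision"], visit["departure"]),
    }

example = visit_intervals({
    "arrival": datetime(2003, 7, 1, 14, 0),
    "first_physician_contact": datetime(2003, 7, 1, 14, 40),
    "disposition_decision": datetime(2003, 7, 1, 17, 5),
    "departure": datetime(2003, 7, 1, 18, 30),
})
# example == {"T1": 40.0, "T2": 145.0, "T3": 85.0}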

Your efforts at reducing waits and delays should be focused where you’ll gain the greatest benefit, but it’s important to be practical, and for most of us it’s more realistic to start with processes that take place entirely within the ED rather than processes that involve outside departments. Clearly, if everything in your ED is running perfectly except for the interval to get a chest x-ray or CBC, then you have to go where the problem is. But for most of us, working to reduce “T1”—the period from patient arrival until first physician contact—is a good place to start, because while it involves workers who may have different managers or supervisors, the workers are all based in the ED and are accustomed to working together. Both of the other intervals require working with personnel based in the lab or radiology departments (if addressing “T2”) or on the floors (if addressing “T3”). Starting in the ED says to others that you are serious about first cleaning up your own mess.

Among the most commonly adopted approaches to reducing “T1” is bedside registration while the patient would otherwise be waiting. Not sending the patient back to the waiting room after triage also gives the patient the sense that they are moving forward in the process of care. Instead, with the patient on a stretcher, register the patient. If your ED is low-tech, a clerk can do this with a form, interviewing the patient and entering the information into the computer at a later time. If your ED is high-tech, the clerk can use a wireless-networked workstation wheeled up to the patient’s bedside. If space and funds are available, a PC can be placed at each patient care location, but based on experience I think this is the least useful approach as crowding often makes it difficult to reach these. But perhaps that’s just Brooklyn and your ED isn’t that crowded (tongue firmly in cheek). Electronic medical records impose their own banal logic on registration, as many systems won’t allow entry of orders or observations prior to patient registration. The electronic handcuffs thereby applied may force registration earlier in the process of care. An abbreviated triage with a short registration entered by the triage nurse solves the problem.

The key factor here is using the waiting time productively. Many opportunities for using waiting time exist early during the patient’s ED visit. A brief triage followed by a more complete assessment by the first available practitioner, whether physician or nurse, can facilitate the ordering of laboratory tests or imaging studies through either physician order or nursing initiation of standing medical orders by protocol. In either case, the results will be available sooner than if everything waited for a complete physician assessment. Over-utilization is a risk of this early testing strategy, but monitoring with feedback to the entire group can mitigate the problem through the “Hawthorne Effect” and the tendency for any group of people to modify their behavior to be in line with that of their colleagues.

Brief triage reduces waits by freeing the triage nurse for the next patient, thereby reducing the waiting time for triage itself. You may not be aware of this wait (I’ve not described it above and we’ve only started to track it), but this “T0,” the period from patient presentation until triage, can be a cause of significant patient displeasure and morbidity. Both reticent patients and those seriously ill may not make enough of a fuss to receive an urgent response and, while waiting quietly, may deteriorate.

Another intervention, though it does not reduce the “T1” interval, will reduce “T2.” Many times patients wait for the physician to make a decision even though laboratory results have become available, because the physician is not aware of them. Bringing the lab results directly to the physician along with the chart can reduce “T2”: absent an electronic tracking system, the physician must periodically check whether results are back from the laboratory, interrupting other patient care work. Knowing that the results and the chart will come to her when results become available, the emergency physician will interrupt other work less often and, prompted by the chart and results together, will disposition the patient that much sooner.

Reducing waits and delays not only improves your patients’ experiences in your ED, your emergency department’s standing in your community and your administration’s perception of you, but it also provides you with the capacity to see more patients. As patients move through more quickly, you open up space to see the next patient—surely an effective response to the continuing growth of volume in most of our EDs.

Patients are waiting? There are delays? In my ED? I’m shocked, shocked! – Part 1

But are you? As patient volume climbs and primary care providers and consultants become ever less available, waiting in the ED stretches on forever, or seems to if we listen to our patients’ complaints or review the results of recent satisfaction surveys. You have heard this, haven’t you? Hasn’t your administrator brought you recent results of the hospital’s patient satisfaction survey from the winter just past and wasn’t ED waiting time one of the hospital’s lowest rated measures?

For over one year now I’ve been encouraging you, gentle reader, to measure various facets of what goes on in your ED. Patient time in the ED is only one measure, but one very much on the minds of your patients and your administration. As I described two months ago in the April 2001 column, patients are “demanding excellent, speedy, available convenient health care.” Your administration, which is actively competing with nearby hospitals, is looking for the same.

Measuring the patient’s “registered time in the ED,” “waiting time” or “throughput interval” is the first step towards addressing it. How are you measuring and reporting this parameter at your hospital? Unfortunately, many of you, with the poor-quality tools available, can only measure pretty broadly—for example, from patient arrival until patient discharge. You may not be able to group patients by disposition in your measurements, and most assuredly you don’t measure and report patient time in the ED in percentiles, but more likely only as an average. Reported as an average, this measure is not merely insufficient but misleading, yet it is often all that’s available from your systems.

Why should you measure and report your “throughput interval” in percentiles rather than as an average registered patient time in the ED? Because an average is meaningful only for a roughly symmetric, “bell-shaped” distribution, and we all know that some ED patients spend unusually long periods in the ED—in fact we call them “outliers.” A handful of such stays can drag the average far above what the typical patient experiences. Percentile reporting acknowledges the outliers yet still allows for tracking and management of service performance.
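
A toy example makes the point. The numbers below are invented lengths of stay in minutes, not real data; a minimal sketch in Python:

from statistics import mean, median, quantiles

# Invented lengths of stay, in minutes; one boarded patient is the outlier.
los_minutes = [45, 60, 75, 80, 90, 95, 110, 120, 150, 720]

print(mean(los_minutes))    # 154.5 -- the single outlier drags the "typical" stay upward
print(median(los_minutes))  # 92.5  -- half of these patients left in about an hour and a half
deciles = quantiles(los_minutes, n=10, method="inclusive")
print(deciles[8])           # 207.0 -- 90th percentile; the long stays are acknowledged, not hidden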

For example, most of you can recall when going to the bank meant choosing a teller’s line, getting into it and immediately regretting that you had not chosen a different line. About 15-20 years ago many banks began implementing a single queue for all waiting customers, and as a result we all got to the front of the line faster than if we had chosen a “slow teller,” though perhaps not as fast as if we had luckily chosen a “fast teller.” What does this observation have to do with measuring ED patients’ “throughput interval?” Well, your bank has set a customer service standard: for example (this may not be true for your bank), that 85% of customers will get to a teller in 10 minutes or less. The people who work in that branch know at what point in the queue that standard will be breached, and typically the branch manager or teller supervisor will open another teller window as the waiting line of customers approaches that point.

Ah you say, now I understand, if only the airlines operated that way. Well, they do, they just set their standard much lower, perhaps as low as 70% of customers waiting 20 minutes or less.

Getting and analyzing data in this way is easy with an ED information system, but difficult if the only automation you have is the hospital registration system. Of course, measuring patients’ registered time in the ED does nothing to get at the causes for it.

Another way of getting at the same idea is to look at preset intervals. Some hospitals have learned to report this interval in multi-hour “chunks.” Intervals of less than two hours, two-to-four hours, four-to-six hours and over six hours are often used. One can then describe the fraction of the total patient volume (percentage of patients) whose registered time in the ED falls into each of these tracking periods. This is the approach the QI Project® uses for reporting registered patient time in the ED.
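
Producing such a report is little more than counting visits per bucket. Here is a minimal sketch, again with invented lengths of stay in minutes and bucket boundaries matching the chunks named above:

from collections import Counter

def chunk(minutes):
    if minutes < 120:
        return "< 2 hours"
    if minutes < 240:
        return "2-4 hours"
    if minutes < 360:
        return "4-6 hours"
    return "> 6 hours"

# Invented registered times in the ED, in minutes.
los_minutes = [45, 60, 75, 80, 90, 95, 110, 120, 150, 200, 250, 380, 720]
counts = Counter(chunk(m) for m in los_minutes)
total = len(los_minutes)
for label in ["< 2 hours", "2-4 hours", "4-6 hours", "> 6 hours"]:
    print(f"{label}: {counts[label] / total:.0%}")   # fraction of visits in each chunk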

If you’ve been reading my column over the past year, you’ll recall that I’ve extolled “base-lining” and discouraged benchmarking. So why do I point you to the QI Project®, a benchmarking site? Mostly so that you can consider adopting the definition they use and also so that you can determine if your hospital is one of the 1500 or so hospitals that report data to this consortium. If so, it will make your measurements, the first step to improvement, easier.

In the examples of the bank and the airline counter I describe different standards. Setting standards in isolation is a rather foolish way to operate. It is what your administration does when, without consulting you and without considering the ED’s capacity to adhere to the standard, it advertises, for example, that all patients will be seen by a physician within 30 minutes of arriving at the ED. Here, the standard is that 100% of patients will have a waiting time of 30 minutes or less to see the physician. Is that achievable? Yes, but not unless your ED has the capacity to perform at this level, and few do, though many can be improved—over the course of years—to meet or even exceed this level of performance. Knowing where you stand today is the first step, and one well worth undertaking, for it tells your administration that you intend to actively manage this measure of quality-of-service.

Next month I’ll address what some hospitals have accomplished in reducing the patient “throughput interval.” In the meantime, you might determine if your hospital shares data with the QI Project® or belongs to the Advisory Board Company, a research and consulting organization to which about half of the hospitals in the country belong.

The Why’s and How’s of Relative Value Units (RVUs)

I’m necessarily digressing below from the theme begun in last month’s column on using “relative value units (RVUs)” in measuring physician work and productivity. What follows briefly describes what RVUs are and from whence they come. For more information on how the RBRVS and RVUs work read the two-part article on the ACEP web site: “Basics of Reimbursement” Part I and Part II (ACEP membership is not required).


On January 1, 1992, federal regulations implementing the resource-based relative value scale (RBRVS) for the payment of physicians under Medicare went into effect. Since that time all federally sponsored physician payment programs and many others have adopted the RBRVS method of payment. ACEP estimates that over 70% of all payments for physician services are based on the RBRVS data, so that even if Medicare patients are not a large part of your practice, RBRVS will impact what you are paid for a given service (see the web page citations above).

The RBRVS method uses RVUs to measure the work involved in performing a clinical service, the expense involved in delivering the service and the costs associated with the malpractice risk of performing the service. The “work RVUs” incorporated in the method explicitly include the physician work expended on a patient service before, during and after the service itself. Thus work RVUs provide an explicit tool to measure and compare physician work and productivity across a varied mix of services, including the services delivered by emergency physicians.

Each clinical service an emergency physician provides to a patient is billed through the use of a “CPT Code”. The book, Current Procedural Terminology (CPT), is a listing of descriptive terms and identifying codes for reporting medical services and procedures. The purpose of CPT is to provide a uniform language that accurately describes medical, surgical, and diagnostic services, and thereby serves as an effective means for reliable nationwide communication among physicians, patients, and third parties.

Each CPT code is associated with an RVU value that is segmented into work, practice expense and malpractice costs as noted above. The association of RVUs with the CPT codes that most apply to your practice can probably be best obtained from your billing service. The entire list of CPT codes and associated RVUs including the segmentation can be downloaded at the Centers for Medicare & Medicaid Services site.

Once we understand that every physician-delivered clinical service has a CPT code with an associated RVU, we can use the measurement of RVUs to begin comparing our physician group members. Measurement of RVUs billed, adjusted for clinical hours worked—“RVUs per hour”—provides a far more reliable tool, better able to compare productivity among physicians, than does the traditional emergency medicine measure of patients per hour. Why? Because over equal intervals physicians can no longer assert that their productivity is unrecognized because their patients are sicker: the very measure, the relative value unit, adjusts for that phenomenon. Only when one or another group member is routinely assigned to a distinct clinical population (e.g., “minor-care,” “fast-track” or pediatrics) or to a distinct schedule (e.g., all nights) should that individual’s productivity be excluded from direct comparison with colleagues.
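
To make the arithmetic concrete, here is a minimal sketch of the calculation in Python, assuming a lookup table of work RVUs keyed by CPT code. The three codes are the familiar ED evaluation-and-management levels, but the RVU figures shown are placeholders for illustration, not fee-schedule values; substitute the values your billing service or the CMS file provides.

# Work RVUs keyed by CPT code. Placeholder values for illustration only.
WORK_RVU = {
    "99283": 1.3,
    "99284": 2.0,
    "99285": 3.0,
}

def rvus_per_hour(billed_cpt_codes, clinical_hours):
    """Total work RVUs for the billed services, divided by scheduled clinical hours."""
    total_work_rvus = sum(WORK_RVU[code] for code in billed_cpt_codes)
    return total_work_rvus / clinical_hours

# A hypothetical month for one physician: 300 visits billed over 120 clinical hours.
codes = ["99285"] * 90 + ["99284"] * 150 + ["99283"] * 60
print(round(rvus_per_hour(codes, 120.0), 2))   # 5.4 work RVUs per hour in this example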

Let me once again emphasize that not everything about a doctor can be told from measurements of clinical productivity as measured by RVUs per hour. In fact, this one measure will give a hugely distorted evaluation if not coupled with at least measures of patient waiting times or throughput intervals and measures of medical staff, ED nursing and support staff and patient satisfaction. Other subjective evaluations such as “group citizenship” also matter. Some current approaches to economic benchmarking also seek to evaluate emergency physicians’ comparative utilization of laboratory tests, imaging studies and consultations.

Someday perhaps we’ll have other quantitative measures of physician quality, but in the meantime RVUs are one useful guide. Nonetheless, RVU measurements should not be used to bludgeon physicians, but rather used as a measure for comparison to articulated expectations as I explained last month in describing the pyramid for medical staff development. Measurement and comparison of RVUs with adoption of the pyramid by your medical director or ED chief speaks to enlightened leadership; bullying by your group’s leader over productivity and RVU comparisons suggests it may be time to move on.

The group should measure monthly for one year before identifying baseline performance expectations. Monthly measurement, with reporting of the group’s overall performance at the 20th, 50th, 80th and 99th percentiles, should complement monthly reporting to each group member of individual performance. Provision of individual performance data in this fashion, supported with explicit feedback, will over time tend to reduce the variability of performance among the group on this one measure. Remember not to lose sight of other important subjective and objective measures as mentioned above.

Reporting productivity in this fashion will quickly unmask differences in quality of documentation among members of your group. Exhortation to improve documentation is not the answer, and neither is a new system for documentation, whether dictation, templates or electronic. Rather, investing energy in identifying the best documentation captured by members of your own group and using these “best documentation practices” when teaching the rest of the group will improve documentation overall while simultaneously optimizing revenue capture, which is based upon documentation. In those groups in which physician earnings are based upon the individual emergency physician’s own documentation, the incentive is obvious. But rare is the group that does not depend at least in part on direct clinical revenues, and reminders of the importance of those revenues to the group’s well-being, delivered along with the other feedback that is part of implementing the pyramid for medical staff development, may help.

If physician productivity measurement is used as a threat, it is probably time to look for another job, if possible. Yet, when used as one measurement in implementing the pyramid for medical staff development it can be an invaluable tool for both assisting in the maturation of the group practice as a whole and in retaining excellent emergency physicians.

Measuring Productivity Means Setting Expectations, and—Possibly—Managing Poor Performance

I’ve begun to hear from some of you commenting upon earlier columns. In particular, several have asked about ways of getting started with measurement so as to help you decide when to add a partner to your group or when to intervene with a partner who is only seeing one-half as many patients as the other group members. In the second example just noted, if you are actually measuring the “one-half as many,” you’ve already started your measurement. If it’s merely an impression, then you are yourself raising barriers by asserting the ratio without the evidence. Correspondents have asked me to recommend a text or other literature that they could peruse to educate themselves on the measurement of physician performance. Unfortunately, almost nothing is available that has actually been validated in practice. Yet models with some face validity may be found, mostly among consulting organizations.

My hospital recently employed a consulting group as part of an external assessment of our medical staff credentialing and peer review processes. At an instructive presentation at our executive committee meeting the consultants proposed a “pyramid” for medical staff development. While proposed for the general medical staff, applicability to any department, including my own was obvious.

Figure 1: Pyramid of Medical Staff Development

The pyramid (figure 1) begins with the obvious: Get the best people you can onto your team. Since I’ve only one column this month, we’ll leave recruiting for another time. The interlocking steps of setting expectations, measuring and giving feedback are the key components of the change process. Like you, I believe that most physicians want to “do the right thing.” Unfortunately, most of us are increasingly uncertain as to what the “right thing” is these days, which is why clearly setting expectations is so important to beginning measurement.

Let’s face it, what I am talking about goes by many names, but measuring productivity is a component of physician profiling. By itself measuring physician productivity is value neutral, but as I’ve previously written (April 2000, EMN) productivity measurements can be used as a bludgeon or as a teaching tool. Unless the three stages of setting expectations, measurement and feedback are in place, jumping to “managing poor performance” or “corrective action” is simply bludgeoning your staff. Thus, if some measurement of physician productivity is put in place it must be implemented based on some clearly articulated expectation of productivity. Well, what shall we use? Probably, the most widely used measure is patients per hour (PPH), which is also the most easily measured. While I do not think this measure has as much value as another—I prefer to measure relative value units (RVUs) per hour—I’ll discuss how PPH might be measured since so many of you have expressed interest.

Several sources suggest that 2.5 PPH is an appropriate benchmark of physician productivity.[1],[2] But is it? Should you even set a benchmark? Or should you develop baseline information in your local environment first? I derive my answer from the pyramid, which suggests that measuring and comparing to articulated expectations is the desired process. Given the uncertainty about what constitutes appropriate productivity, despite the references cited below and the excellent discussions by Tom Scaletta, MD at the AAEM web site (see “Rules of the Road Q & A”) and elsewhere, including the emergency medicine internet mailing list (subscribe by sending “sub emed-l” to LISTSERV@itssrv1.ucsf.edu), I believe it is far more important to measure your own team’s productivity over time before setting your own baseline rather than subscribing to another’s. For example, in our adult acute area, having measured our PPH for more than two years, we still see a PPH of less than 2.0, but then our admission rate in that part of our ED nears 45%.

Returning to my correspondents’ questions, how shall we measure PPH? An easy route is asking your billing contractor to provide your patient volume by physician for the dates of service of interest and counting your scheduled clinical hours over the period for which you’ve measured patient volume. Measure PPH for a month at a time, or at least for a period that includes approximately 150 patients, and track results over a minimum of six months before making any decisions. Graphing the results for PPH, RVUs/hour and RVUs/patient against the 20th, 50th and 80th percentiles for the group tells quite a story (figure 2). Besides, as the pyramid advises us, feedback is the next step after measurement. More on RVUs next month in Part II.
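
For those who want to see the mechanics, here is a minimal sketch of the calculation and the group percentiles in Python; the physicians, volumes and hours are invented for illustration.

from statistics import median, quantiles

# Invented monthly figures: visits billed (from the billing contractor)
# and scheduled clinical hours (from the schedule), per physician.
volume = {"Dr. A": 310, "Dr. B": 275, "Dr. C": 220, "Dr. D": 340, "Dr. E": 180}
hours = {"Dr. A": 140, "Dr. B": 120, "Dr. C": 130, "Dr. D": 150, "Dr. E": 110}

pph = {doc: volume[doc] / hours[doc] for doc in volume}

quintiles = quantiles(pph.values(), n=5, method="inclusive")
p20, p80 = quintiles[0], quintiles[3]   # 20th and 80th percentile cut points
p50 = median(pph.values())

for doc, value in sorted(pph.items(), key=lambda item: item[1]):
    print(f"{doc}: {value:.2f} patients/hour")
print(f"group 20th / 50th / 80th percentiles: {p20:.2f} / {p50:.2f} / {p80:.2f}")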

Figure 2: Physician Productivity by RVU/Hour, RVU/patient, Patients/Hour

Most physicians, provided the monthly graphical feedback about PPH described above, will over time adjust their performance to the mean of the group. When this productivity measure (or RVUs/hour) is coupled with measures of patient waiting times or throughput intervals and measures of medical staff and patient satisfaction, then decisions about improving productivity or altering staffing will become apparent.

What about the physician who does not come into line? The physician who insists that her or his patients are more sick or complex or otherwise more difficult to evaluate? Relative value units per hour more closely reflect patient complexity by more directly measuring the physician work done in patient care, but RVUs are harder to measure. Yet conversion to this measure should not be the answer; rather, reaffirmation of the expectation clearly articulated at hiring, and through regularly scheduled reviews and meetings, ensures that the physician is identified as non-compliant with the expectations s/he had earlier agreed to. Unfortunately, counseling or disciplinary corrective action may be required after a sufficient period of monitoring without response.

The medical staff development pyramid provides a basis for exerting leadership within a group in an open and aboveboard fashion. Implementing the pyramid approach no later than your group’s next annual review and planning session provides a basis for improving your group’s cohesiveness and performance the following year.

[1] Graff LG, Wolf S, Dinwoodie R, et al. Emergency physician workload: a time study. Ann Emerg Med 1993;22(7):1156-1163.

[2] Weston K. A difficult fix: staffing the emergency department to meet patient demand. Clinical Initiatives Center, ED Watch #1, 1999. The Advisory Board Company, Washington, DC.

Measurement of Operations Performance: An Invaluable Tool for Patient Care

Thomas P. (“Tip”) O’Neill, the late Speaker of the U.S. House of Representatives, once commented that “all politics is local.” So too are all emergency departments.

Our emergency departments’ clinical operations pose endless challenges, and the result I obtain in Brooklyn most assuredly will require a unique implementation in your world or it won’t work at all. Nonetheless, we can learn from each other, particularly if we are willing to look beyond the specific sought-after result to the processes that bring us to that result. “Best practices” described anywhere usually have some value everywhere. Garnering that value requires reliable measurement if success is to be obtained.

Measurement of operations performance is an invaluable tool for helping improve patient care and, as importantly, the patients’, community’s, and medical staff’s perception of the care we deliver in the ED. Yet resistance to measurement of operations performance is common among emergency physicians, and even when measurement itself is grudgingly accepted, arguing over the validity of the findings is equally common.

Emergency physicians resist measurement of operations performance or disdain the results because by and large they were never taught how to use the results to improve care for patients. Unfortunately, most emergency physicians don’t take the time or are not afforded the opportunity to participate in interpreting results, and therefore they rarely gain the opportunity to learn about the processes of measurement and the value of the measurement. Rather, emergency physicians mostly learn about operational performance measures through someone else’s interpretation of a particular finding.

Bludgeon or Tool?

Too often, hospital or practice managers begin by using these measurements of operations performance as a bludgeon rather than a teaching tool. For example, over the past several years, physician productivity has become an oft-measured parameter, sometimes with dismal findings for one or more emergency physicians in the group. The dismal result and the management pronouncement that “Something’s wrong, fix it” fail to convey the importance of the finding and the real need to address it. They set up a situation in which energy is diverted to a struggle over the quality of the data and the value of the particular measure.

Just as vital signs are vital but don’t tell the whole story about a given patient or clinical condition, so too measures of physician productivity are important but don’t tell the whole story of a physician’s value. Few of us make judgments in practice or in life on a single parameter. Even W. Edwards Deming, the statistician who popularized statistical process control and whose name has become nearly synonymous with continuous quality improvement (CQI), acknowledged when speaking of his system of “Profound Knowledge” that “Not all facts, not all measures are known or knowable.”

Nonetheless, measurement has value in a variety of circumstances. Many practices may have one or more members who are perceived by some as unable to “move the room.” In practices with periods of multiple physician coverage, one physician may be seen as “slow,” causing colleagues to avoid double-coverage shifts with him. When confronted, the “slow” physician may ask for a more objective measure of his performance.

As an alternative example, a new member of the practice or perhaps a recent residency graduate may be thought of as a “slow” physician because of a subjective sense of the state of the department when this particular physician is working. In both of these instances, some measure, used over time to track change in performance, may be sought by the practice leadership, the physician, or hospital management. Many people singled out for examination seek objectivity and reliability in the evaluation. Yet, others remain uncomfortable with measurement, seeing productivity measurement as a two-edged sword that can be more harmful than helpful. I disagree; evaluation or improvement of operations performance requires measurement.

Credible Measurement Vital

Credibility in measurement is vital. Producing and exhibiting a number is a pointless exercise in an environment where neither other numbers nor the underlying data are available to the subjects of the measurement. While measurement and reporting shouldn’t wait for each individual’s review of every datum, practices and EDs lacking transparency are not environments in which productivity measures, regardless of how carefully produced and published, will have the respect of those physicians subjected to measurement. Without respect for the process of measurement, there can be no hope for constructive change among the physicians or others whose work is subjected to measurement.

Some physicians are quick to dispute the validity of clinical operations measurements. Trained in science and the scientific method, most physicians are notably slow to embrace the “management analysis” implicit in clinical operations management. Management analysis shouldn’t be confused with the scientific method. The validity physicians anticipate when deploying prospective hypothesis testing differs from the validity of management analysis examining an operations issue. Accordingly, precision of measurement — by this I mean especially the repeatability of the method and the reliability of the result — is more important than absolute accuracy. Consequently, single measurements are of scant value; yet repeated measurements over a period of time often prove a powerful and reliable tool.

Thus, the second point: Start measuring now. Don’t wait until you change something or have improvement efforts underway. Nothing will speak so resoundingly of your success in the future as a measurement with a dismal finding at the start and a strikingly positive trend over time. Beginning the process of measurement speaks to a seriousness of purpose that may be used to persuade your critics in management or elsewhere that you are committed to evaluation and, as necessary, improvement.

As I mentioned, even vital signs don’t tell the whole story about a patient, and neither will any one measurement of operations performance. Thus my third point: No single measurement itself induces immediate action. Responsible leaders will not be stampeded into imprudent action based on a single observation. Whatever the interval of measurement, usually seven or more observations should be recorded before any action is contemplated. Thus, one could wait seven months before acting on a measurement made monthly. Rather than acting precipitously, it would be better to measure more frequently than monthly, if the pressures of the real world require action sooner.

Improvement of operations performance requires measurement, which is best started as soon as possible, but no single measurement itself requires an immediate response. We have no choice in the matter, for without measurement how will we know that the changes we plan to undertake are an improvement?