Do PAs really order more imaging studies? One PA’s Response.

As well as being a physician assistant, I am interested in the equity and economics of care, so naturally my interest was piqued when "A Comparison of Diagnostic Imaging Ordering Patterns Between Advanced Practice Clinicians and Primary Care Physicians Following Office-Based Evaluation and Management Visits" by Hughes et al. was published online in JAMA Internal Medicine on November 24, 2014. Because its authors are active in the online FOAMed (Free Open Access Medical education) community, and because its publication happened to coincide with the Radiological Society of North America annual meeting, the study was quickly disseminated on Twitter with the associated claim: “When ordering images, NPs, PAs far more trigger-happy than docs.” (@NeimanHPI) This claim is not only misleading but a gross and irresponsible generalization.

I emphatically agree that this is an important area of study: there is, and will continue to be, a growing shortage of providers for an aging population. The shortage in primary care is amplified by fewer and fewer graduating physicians moving into primary care practice while, at the same time, many current providers are late in their careers and nearing retirement. It is simply a fact that mid-level providers will be delivering more care in the primary care setting, and this appears to be the area of concern with regard to scope of practice and quality of care. Keep in mind that the majority of the supporting research cited by the authors was aimed at examining the expanding scope of practice and/or more autonomous practice of nurse practitioners (NPs).

I do not work in primary care; I practice emergency medicine. I have worked in a number of private and public institutions across the country, and in doing so I have developed a robust sense, albeit anecdotal, of the practice patterns of board-certified emergency physicians, family practice physicians grandfathered into emergency medicine, residents, and non-physician providers alike. I can firmly attest that, in my experience, mid-career physicians order the bulk of imaging studies in the ED, especially advanced imaging (CT/CTA). While seeing patients of the same acuity, I am often coerced by these same physicians into ordering additional studies on my patients when they supervise my care. Briefly, the three most common reasons cited for what I perceive to be unnecessary imaging are: defensive practice (they have been sued or know someone who has been sued), expediency (scan the patient instead of performing a physical exam, serial exams, admission, observation, or arranging close follow-up), and patient satisfaction/expectation (it is what the patient expects or demands, or the patient was sent by a PCP).

The authors lump PAs and NPs together as “advanced practice clinicians” (APCs). While this may seem reasonable and makes sense from the standpoint of data collection and analysis (the authors used Medicare billing data for their study), when teasing out finite variables in medical practice this lumping is bound to be confounding. While PAs and NPs are increasingly used interchangeably in clinical settings, it cannot be overlooked that there are marked differences in PA and NP training. It bears repeating that PAs are trained on a medical model, albeit with training that is shorter than that of physicians and does not include a residency. I expect that a PA-C (certified PA) in clinical practice for 3-5 years would be held at least to the same standard as a graduating resident in his or her given specialty, at least with regard to practice patterns. Furthermore, while I do not have data at hand, I believe it is a reasonable assumption that nurse practitioners, rather than PAs, account for the majority of mid-level primary care for patients in the studied population. So what are the authors really studying?

The authors cite a previous study to bolster their premise in the introduction; however, that study (Seaberg DC, MacLeod BA, et al. Correlation between triage nurse and physician ordering of ED tests. Am J Emerg Med. 1998;16(1):8-11) examined the concordance of ED physician orders with triage nurse initial order sets and did not involve mid-level providers at all, contrary to how it is characterized by Hughes et al. The Seaberg study was two-phased and designed to determine how triage nurse ordering of diagnostic studies (lab work and plain film radiography) differed from that of emergency department physicians, and whether the differences could be reduced by protocol-based order sets. Yet the authors of the Hughes study state: “Previous research investigating the concordance of APC and physician radiography orders in the emergency department (ED) setting found that in 34% of ED patients, APCs recommended imaging studies when physicians had not.” Again, this is not only misleading but erroneous.

Despite our best efforts to standardize care, medical practice varies with regional differences (standard of care) and is inconsistent from provider to provider even within the same institution. By the nature of my own practice, emergency medicine, I utilize a great deal of diagnostic imaging. However, I rely heavily on clinical decision tools such as PECARN, NEXUS, and the Ottawa rules, and together with shared decision making I am able to avoid ionizing radiation in many of my patients, which I strive to do whenever possible. I concede that these tools are intended to reduce imaging for more routine presentations in the emergency department rather than in the primary care environment.

Furthermore, the authors do mention in closing that: “Also, under some circumstances work performed by APCs is coded by their supervising physician. This would create downward bias, i.e., our reported estimates underestimate the magnitude APCs order relative to PCPs, if many episodes of care treated by APCs and presumably ordering more imaging are actually coded in the PCP reference group.” However, this fails to take into account that visits may occur in tandem, or with the physician in a consulting capacity, while still being coded as mid-level visits. Finally, the authors themselves state that their data would not correlate to a measurable effect at the individual patient level but instead represents population-level differences in practice patterns, differences which are extrapolated from a small and inherently unreliable dataset.