Advanced practitioners (APs) are health professionals educated at master's level to manage entire episodes of care (Health Education England (HEE), 2017; Scottish Government, 2017). Advanced practice is defined by the ability to demonstrate capability across four pillars of practice:
- Clinical practice
- Facilitating learning
- Leadership
- Evidence, research, and development (HEE, 2017; Scottish Government, 2017).
Exact terminology varies across the UK; however, programmes of education for advanced practitioners are aligned with the four pillars to enable practitioners to demonstrate capability in each domain.
It has become accepted practice within AP curricula that capability is assessed using the Objective Structured Clinical Examination (OSCE). The OSCE was introduced in Dundee, Scotland, by Harden and Gleeson (1979) and has since been widely adopted within undergraduate and postgraduate curricula globally. The OSCE is structured around multiple stations through which students rotate, completing tasks specific to each station. As with workplace-based assessments, each station assesses capability in relation to a specific skill or procedure. The justification for the continued use of the OSCE within AP curricula is that it allows educators to standardise the assessment of capability at set intervals within a programme of study and supports the translation of knowledge into practice.
Boursicot et al (2014) stated that the design of the OSCE (whereby each station uses a structured marking schedule that ensures consistent scoring by examiners) created an acceptable environment in which to assess capability. The authors argued that the inclusion of multiple stations (focusing on different examination, communication, or procedural skills by specialty) resulted in a more reliable picture of a participant's overall capability, because performance was reviewed by multiple examiners, thereby minimising bias.
Brannick et al (2011) conducted a meta-analysis to determine the reliability of the OSCE. In this study, data were analysed across items (reviewing student performance against set criteria) and across stations (reviewing the performance of the OSCE itself). The authors identified that Cronbach's alpha (α) was the most commonly used measure of internal consistency, so this became the principal comparator across the 39 studies that met the inclusion criteria (8% of the studies analysed were from AP curricula). The authors reported α=0.78 across items and α=0.66 across stations (the mean α was 0.56 for OSCEs with fewer than 10 stations and 0.74 for those with more than 10). The authors also identified that the construct of OSCE stations made it more difficult to reliably assess communication skills than a specific examination or procedural skill. Better than average reliability was associated with a greater number of stations and a higher number of examiners per station, although it was acknowledged that this would create an examination process that is logistically challenging and resource intensive.
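For context, Cronbach's alpha expresses internal consistency as a function of the number of items and the ratio of summed item variances to total-score variance. The standard formulation shown below is provided for illustration only and is not taken from Brannick et al (2011).

```latex
% Standard formulation of Cronbach's alpha for k items (or stations),
% where \sigma^2_{Y_i} is the variance of scores on item i and
% \sigma^2_X is the variance of participants' total scores.
\alpha = \frac{k}{k-1}\left(1 - \frac{\sum_{i=1}^{k} \sigma^2_{Y_i}}{\sigma^2_X}\right)
```

Values closer to 1 indicate greater internal consistency, which is consistent with the finding that OSCEs with a larger number of stations returned higher mean α values.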
Issues with the OSCE
The Advanced Practice Team at the University of Dundee has held iterative discussions with educators who deliver AP curricula, both in the UK and elsewhere in Europe. These discussions have identified three issues relating to the construct and utilisation of the OSCE as a means of assessing the capability of APs.
First, there is notable variation in the number of OSCE stations used within AP curricula to assess capability (ranging from 3 to 12 stations); therefore, the reliability of this assessment process must be questioned.
Second, the assessment instrument used within an OSCE station is not sophisticated enough to discriminate performance reliably across domains (it is common practice that only a pass or fail judgement is given for each station).
Third, the construct of OSCE stations is designed to demonstrate capability in managing entire episodes of care; therefore, a mixture of stations is utilised to assess the capability of APs in relation to examination, communication and procedural skills. This approach (especially in relation to communication skills) has been shown not to deliver an objective assessment of capability (Brannick et al, 2011).
The COVID-19 pandemic necessitated a move towards remote assessment, which meant that OSCEs were delivered within contexts where this assessment had not traditionally been undertaken. This rapid adoption of a dispersed manner of assessment, along with a national drive to instigate compassionate assessment practices (Quality Assurance Agency, 2020), challenged the Advanced Practice Team to examine the validity of the OSCE as a means of assessing the capability of APs within a programme of study, and whether this assessment supports the translation of knowledge into practice. The authors are keen that the data collected from this survey are representative of current practice and the lived experience of APs, so that the subsequent recommendations are relevant to the current and future requirements of the profession and AP curricula.
Get involved
Readers are invited to engage with this research study and anyone interested in doing so can contact Kevin Stirling (K.J.Stirling@dundee.ac.uk).