The Accreditation and Assessment Pack


Authors: Jennifer Parkes, Brian Jolly

Jennifer Ann Parkes, DApplSc, DMU, GCHPE Having trained as a radiographer, Jennifer began her career in sonography as a new graduate and has practiced as a sonographer since December 1980, completing her DMU (ASUM) in 1993. Her professional interests have included 8 years with the Australian Sonographers Association (ASA) as Secretary, President and Past-President, conference convenor of ASA2004 in Melbourne and presenter at numerous national and local meetings – most recently as ASA Travelling Workshop presenter for breast ultrasound in 2006/7. Jennifer has always had a keen interest in the many aspects of the sonography profession, and has recently completed a Graduate Certificate in Health Professional Education at Monash University. This course introduced her to both the educational and professional influences on medical and allied health professions; an area in which she has a great interest. She is also a unit advisor for the Monash University Graduate Diploma in Medical Ultrasound. Jennifer is currently employed as a Tutor Sonographer with MIA Victoria and is undertaking her Masters in Medical Ultrasound at Monash University.

Brian Clark Jolly

BSc, MA(Ed), PhD

A graduate in Psychology, Brian Jolly began working in medical education in 1972. He moved to St Bartholomew’s Hospital Medical College, London in May 1979 and in 1983 he helped develop the ‘Cambridge Conference’: a series of conferences that has had a major international impact on medical education, especially on assessment. In 1989 he obtained a Department of Education UK learning improvement grant (£752,000) for staff development and student-centred education – at that time the largest ever grant awarded in the UK for medical education. Later he was one of the principal investigators on a successful bid for a 3 year grant to a consortium of four schools in North Thames for development of community-based medical education (£852,000). In 1994 he completed a PhD for work on clinical education at Maastricht University, the Netherlands. In 1993 he co-led a team in a successful tender to the Commonwealth Government of Australia for a research project to evaluate and redesign the training scheme for general practitioners in Australia. Brian became a Professor of Medical Education in the newly formed Department of Medical Education at the University of Sheffield in March 1999. On October 1st 2001 he became an international Consultant to Monash Medical School, and on January 29th 2002 Professor of Medical Education and Director, Centre for Medical and Health Sciences Education. He was on the Federal Ministerial Steering Committee for the recently completed Australian Medical Education Study, and has recently co-hosted the Ottawa Conference on Medical Education – Ozzawa 2008.

© ASAR 2008. This document is copyright. You may download, display, print and reproduce (copy) this material in unaltered form only (retaining this notice) for your personal, non-commercial use or use within your organisation. Apart from any use as permitted under the Copyright Act 1968, all other rights are reserved.

Disclaimer
1. This material is made available on the understanding that ASAR is not thereby engaged in rendering professional advice.
2. This document has been written on behalf of ASAR as a supplement to its accreditation process, which is detailed in the Program Accreditation Guidelines. Before relying on the material, users should independently verify its accuracy, currency, completeness and relevance for their purposes, and should obtain any appropriate professional advice.
3. The material may include views or recommendations of third parties, which do not necessarily reflect the views of ASAR, nor does it indicate ASAR's commitment to a particular course of action.
4. Links to websites are inserted for convenience and do not constitute endorsement of material at those sites, or any associated organisation, product or service.
5. The listing of a person or company in any part of the document in no way implies any form of endorsement by ASAR of the products or services provided by that person or company.


Introduction

In the development of healthcare professionals, many different accreditation regulators co-exist in Australia. This is due in part to the Federal-State divide both in the provision of healthcare and in the provision of professional education. It is also a consequence of the proliferation of different healthcare professions, each with its own, usually self-determined, scope, co-existing in a hitherto relatively uncontrolled and unregulated environment. Recently the healthcare professions in Australia have come under increasing scrutiny from the public and from governmental jurisdictions. This has happened at Federal, State and regional levels. On March 26, 2008, the Council of Australian Governments (COAG) signed the Intergovernmental Agreement for a National Registration and Accreditation Scheme for Health Professionals, expected to be established by July 1, 2010. This national registration will include medicine and nursing amongst other health professions, but will initially not include medical radiation. It is therefore uncertain when sonography may be included in this national registration scheme, but it can be assumed that this would be a natural progression. A call for submissions to COAG from those concerned in partially regulated professions has been issued. In addition to registration, the COAG process is expected to encompass educational and training program accreditation amongst other quality assurance activities.

Currently, registration for sonographers does not exist at either a state or national level. Mandatory sonographer accreditation via the Australasian Sonographer Accreditation Registry (ASAR) is required under Federal Medicare laws for all ultrasound examinations billed to Medicare, if performed by a sonographer. Such accreditation encompasses most sonographers within Australia, but may not include those practicing outside the Medicare system. Accreditation of ultrasound courses in Australasia is a primary role of the ASAR. At present, only sonographers who are enrolled in, or have completed, an accredited course may be accredited by the ASAR.

This guide is designed to inform members of the ASAR of the general theory of, and frameworks available for, accreditation and regulation of educational courses for a variety of health professionals within Australia and overseas. In addition, the current state of registration or licensure of sonographers within Australia and internationally will be discussed, with comparison to other health professionals.


Chapter Headings

Chapter 1

What is Accreditation and Why is it Important?


Chapter 2

Standards – accreditation, competency, professional


Chapter 3

What Strategies are available for Accreditation?


Chapter 4

Accreditation Strategies and Examples


Chapter 5

Multiple Foci of Accreditation
Aims and outcomes
The Curriculum
Governance
Assessment Processes
Specific Assessment Methods
Teaching and learning
Learning resources
Student progression and support
Professional Attributes/Standards


Chapter 6

Implications of Distance learning


Chapter 7

Accreditation Activities


Chapter 8

Implications and Conclusions for Best Practice




Abbreviations / Acronyms


Appendix 1

Methods of Assessment likely applicable to sonographer education


Appendix 2

CASE - Validation and Accreditation Handbook (with permission for reproduction from CASE)



1. What is Accreditation and Why is it Important?

Definition of Accreditation

‘Accreditation is a formal process by which a recognised body, usually a non-governmental organisation, assesses and recognises that a health care organisation meets applicable pre-determined and published standards. Accreditation standards are usually regarded as optimal and achievable, and are designed to encourage continuous improvement efforts within accredited organisations. An accreditation decision about a specific health care organisation is made following a periodic on-site evaluation by a team of peer reviewers, typically conducted every two to three years’ (2).

Aim of Accreditation

Accreditation activities are designed to:
Ensure support and development of education and training for sonographers which enables them to meet appropriate standards of safe practice, clinical skills and professional confidence, and become eligible for registration.
Ensure that the best possible environment exists to develop, evaluate and maintain the organisational processes that ensure excellence in the training of sonographers.
Provide a common denominator of shared values and practices among the diverse organisations which train sonographers in order to encourage communication and sharing of experiences.
Promote links between the educational processes occurring at the university level and those at the vocational level.
Provide the community with a process of external validation of vocational education programs.
Provide prospective students with a means of identifying institutions/programs of study that meet minimum standards and that will provide them with appropriate pathways to registration/licensure.
Provide assistance to institutions by identifying for them the strengths and weaknesses of their education programs.
Provide a mechanism to enable educators to make changes to their programs and an incentive for quality assurance mechanisms.
Assist in the protection of institutions against internal or external deleterious forces.

By meeting these aims, it is argued, the accreditation process will provide a well-prepared and qualified workforce and, at the same time, stimulate improvement. The improvement of education is both a direct aim of accreditation and a frequent by-product of the process itself. The co-operative participation of those involved at the institutional level often serves as a catalyst for self-improvement. The capacity for this to occur may depend on the initial willingness of these institutions to undergo accreditation and their support of the process. Accreditation of start-up educational programs, where much inconsistency of content and goals exists, is a useful reform procedure for a profession, helping to bring all courses into line with agreed standards (4). Another important factor in developing or reviewing an accreditation system is the extent to which it is a voluntary or compulsory process. As time progresses, it is apparent that even when participation is voluntary, strong moral and professional pressures usually produce a high level of participation. Even further pressure is placed on educational programs to participate if government funding is provided only to those who are “voluntarily” accredited. Where such pressure is less, the ability to advertise a course as an accredited one, in a voluntary process, is likely to become a useful marketing tool. Another variant of voluntary accreditation involves the Institution of Engineers, Australia – while course accreditation is voluntary, only graduates who have completed an accredited course are eligible for membership of the Institution (5).

Key features of accreditation programs usually include:
A self-study document provided to the accrediting body by the institution seeking accreditation. This involves the internal preparation of a detailed evaluation document based on the accrediting body’s guidelines, standards and requirements. This will usually include curriculum, mission, goals, objectives, resources, facilities, methods of teaching/contact hours, class sizes, student selection procedures, qualifications and experience of staff, assessment mechanisms and a review of outcome measures.
External assessors undertake site visits incorporating administrative/educational staff and student interviews and review of building facilities. This process of peer review will include review of written documentation during site visits (i.e. policies and procedures) and clinical placement site visits, including clinical supervisor interviews; equipment/materials/resource review and direct observation may also be undertaken.
A meeting of the accreditation body will review a report of the process and make an assessment as to whether or not the institution has met the specified standards and criteria.
A decision on accreditation is made and notified to the institution. Various processes exist for institutions that do not meet the accreditation criteria; some include interim accreditation and a process for re-application.
Sometimes the results of accreditation are published. In most cases a register of accredited courses is established.
Re-accreditation takes place after a 3-5 year period. Some accreditations may require yearly updates.

One key consideration in establishing an accreditation system is that it requires a well-established bureaucracy and makes intensive resource demands. For example, programs may be very labour intensive for the volunteer assessors. Processes need to be developed that are less bureaucratic but still have reliable accreditation procedures (4).


Principles of Accreditation

General Principles

Accreditation commonly espouses the following underlying general principles:
To develop valid and reliable processes for accreditation which measure the essential attributes of an educational institution in a consistent and accurate manner, with clearly defined outcomes.
To include processes that are explicit, and transparent to all stakeholders.
To ensure these processes are demonstrably trustworthy because they are rigorous, efficient and equitable.
To develop processes that minimise the burden on the institutions to which they are applied and, where possible, co-ordinate with other accreditation processes and use shared documentation and data sets.
To ensure that processes are consistent with international standards.
To operate accreditation processes within a legal system within which all stakeholders recognise the authority of the regulatory body.
To promote procedures which encourage improvement and a standard of excellence beyond a minimum level of compliance.

This last principle is one that has attracted much discussion in the literature. It is often referred to as an “Improvement Model of Accreditation”. This model sets clear aims and invites creative responses, rather than prescribing a narrow set of formulaic constraints; it rewards an institution for identifying and addressing problems within a program. By contrast, in a minimum-threshold-only approach, it is in the program’s interest to hide deficiencies. However, the improvement model creates new challenges for the educational institutions undergoing the accreditation process, due to variability of performance above the minimum threshold and the lack of a clear answer to the accreditation aims. It is therefore the task of an accreditation review to ensure that its reporting and follow-up activities incorporate fair and effective methods that are likely to lead to improvements in the program without unduly damaging the institution being reviewed (5).

Accreditation does not, though, necessarily provide continuous supervision or review. It is an assessment of an institution or educational program at a given time, resulting in a statement as to its educational effectiveness at that time and its promise of continuing effectiveness. Such assessments are reviewed periodically, but the process does not provide continuous oversight. Some accrediting bodies try to overcome this issue by requiring yearly completion of review forms and notification by the institution of any changes to the curriculum at any time, as per the Consortium for the Accreditation of Sonographic Education (CASE) in the UK.


Accrediting bodies may vary in the depth of their review of a program. In general, in Australia, professional body accreditation does not review the administration and financial support of the educational program. This is performed by the individual institutions, especially the universities, through their in-house self-accreditation, and through higher education review processes undertaken by the state and federal government authorities and a national accreditation body for higher education, the Australian Universities Quality Agency. Such universities conduct their own reviews that include finances, staff, resources and outcomes. This is often termed “institutional accreditation”. It is the role of the professional accreditation bodies to advise the courses on the standards to be met, while allowing them to determine how this will be done and the specific content of their course, such as in medicine, radiography, sonography and nursing. Accrediting bodies such as the LCME in the USA do undertake such in-depth review of the educational institution (7).

Accrediting bodies are, in general, either:
membership organisations that are funded by their membership and are responsible for the development of their own standards of practice. The accredited programs are judged by criteria cooperatively developed. Membership of these bodies is in general voluntary and represents the key stakeholders of that profession, with minimal paid support staff (3); or
private companies that perform the accreditation process under the guidance of standards of practice set by the profession/professional bodies. A mix of paid and voluntary professional representative staff are utilised. The payment of fees by those seeking accreditation covers the costs of the process (5). These companies generally provide to their member professions a known organisational model of accreditation, expertise in handling the process (especially appeals) and legal liability coverage. Granting of accreditation is often done at arm’s length from the professional body, while utilising standards developed with that body.

To achieve the purposes of identifying, improving and preserving educational quality, accrediting bodies need to be reasonably independent. Notably, this independence should include the right not to undertake tasks, however worthy, for which they are not suited or which they cannot adequately support. For example, this may include the enforcement of some governmental regulations beyond the initial accreditation role undertaken, or the promotion of a protectionist attitude to certain current fields of clinical practice. Accrediting bodies need to work co-operatively within federal, state and private sectors as necessary, but at the same time need a substantial degree of freedom from governmental influence in order to play an equally important role as a balancing force. This issue has arisen during health professions’ responses to the latest proposals in Australia for national accreditation of all health professionals to strengthen the public’s confidence in the quality of health care. Here the reliance on a Department of Health and Ageing committee to oversee the process is seen by some existing bodies as challenging that independence. Accrediting bodies need continuously to refine their purposes, clarify their responsibilities and improve their processes to keep up to date with current trends in education and professional responsibilities. Furthermore, they need to examine carefully their social role. But to be effective in meeting their objectives, they must resist pressures to reach beyond their capabilities.




Definition of Licensure/Registration

These terms are used interchangeably in this document; licensure is the term used commonly in the USA and Canada, while registration is the term used commonly in Australia and the UK. Licensure/registration is a process by which a government authority, or one recognised by government, grants permission for an individual practitioner or health care organisation to operate or engage in an occupation or profession. Regulations are generally established to ensure that minimum standards are met to protect public health and safety. Licensure/registration is usually granted after some form of examination or proof of education, and is often renewed periodically through payment of a fee and/or proof of continuing education or professional competence.

Licensure/registration programs often include the following components:
examination of the individual’s credentials to determine whether their education and experience meet requirements
inspection of educational programs to determine whether they meet required standards
administration of examinations to test professional qualifications
granting of reciprocal licences to those licensed in other jurisdictions if they meet required standards
issuing of regulations establishing professional standards of practice
investigation of charges of violation of standards, which may lead to revocation or suspension of an individual’s licence to practice (2).

Definition of Certification

Certification is a process by which an authorised body evaluates and recognises either an individual or an organisation as meeting pre-determined requirements or criteria. Although accreditation and certification are often used interchangeably, accreditation applies only to organisations or institutions, while certification may apply to individuals as well as to organisations/institutions. Certification of an organisation shows that a predetermined standard of process and activities has been met, thereby indicating a successful outcome of the accreditation process. Certification of professionals in some countries (such as ultrasound in Australia) may be similar to registration, while in others it is required prior to registration (specialist medicine, radiography and sonography in the USA), and may involve periodic testing of competence, proof of continuing professional experience and assessment of clinical outcomes (2). An example is where an individual has demonstrated competence in a specialty area or an approved program. This may be at the minimum level for licensure/registration, or above it. Such certification should assure the public that the individual has met a professionally-accepted standard.


2. Standards – Accreditation, Competency, Professional

Definition of a standard

In the context of accreditation, a standard can be defined as an explicit, predetermined expectation set by a competent authority that must be met to ensure an institution’s acceptable performance level. Standards are designed to be optimal and achievable and, if met, should lead to the highest possible quality in a system or educational program. They are used by the institutions seeking accreditation for guidance during application and to consider if any program changes are likely to be required. Standards for accreditation are also used by assessors in reviewing self-study documents, at site visits, and in determining if the institution, all things considered, can be awarded accreditation (8).

In the context of individual licensure/registration, a standard is usually set at a level designed to protect public health and safety. Standards are generally developed by a professional or accrediting body after a process of consultation and consensus. These are often called professional or competency standards and form the basis of a minimum standard the practitioner should meet to perform their role safely and at an appropriate level of competence (9, 10, 11).

Standards can develop from a variety of sources, i.e. professional bodies, panels of experts, research studies, regulations. They are often organisation-specific, especially in regard to professional and competency standards. Standards often evolve from a consensus of best practice, given the current state of knowledge and technology at the time of their development. For standards to work well, it is important to consider their intent and purpose, as well as expectations of how they will be met. Consideration should also be given to how meeting the standard will be measured – e.g. is it to be met all the time, or are there any valid exemptions that should be considered? In addition, development of standards should take account of what evidence will be required to ensure the standard is met. Standards may be accompanied by guidelines that provide illustrative examples of interpretation of that standard.



When developing or reviewing a standard, it is worthwhile to consider the following:
does it address the performance of common or important functions?
is it clear and easily understood?
is it amenable to assessment and quantification?
does it have face validity and can it be measured reliably?
do experts believe it to be important to practice?
does it focus on, or relate to, the patients receiving the care?
can it be uniformly applied?
is it consistent with existing laws and regulations?
does it complement any existing international standards?
is it culturally sensitive and appropriate?
does it reflect what experts consider as best practice or a minimum standard?
does it provide a framework for the inclusion of advances in clinical practice and technology?
is it flexible enough to be revised as needed?
can it accommodate reasonable variations and special characteristics for regional or cultural differences? (2, 8)

Types of Standards

Accreditation Standards:
Accreditation standards are typically set at a maximum achievable level to stimulate improvement over time. Some accrediting bodies, though, especially in their early stages, may adopt minimum standards. Such standards are generally developed in consensus with stakeholders in the profession. There are several types of standards:
Structure standards – look at the institution’s inputs, i.e. human resources, building design, availability of equipment and supplies.
Process standards – look at the activities conducted and evaluate whether they work toward recommended clinical guidelines or standards of practice set by the professional bodies.
Outcome standards – look at the effect of the activities used to see if the expected outcome was achieved.



Professional standards:
Professional standards are generally devised by the professional bodies to provide standards of conduct and behaviour that members of that profession should follow, e.g. ‘The Duties of a Doctor’. Some may be strict standards while others may be considered guidelines or recommendations. Codes of conduct, which outline a profession’s expectations of its members’ behaviour in performing their professional duties, may fall into this category. These often include ethical, legal, privacy and confidentiality matters, in addition to patient care and protocol outlines (13, 14).

Competency Standards:
These are often set at a minimum level to ensure that the essential components are met and that risk to patient health and safety is minimised. Included in such documents are procedures, acts and processes permitted by law, for which the individual has received education and clinical experience, and has demonstrated competency (9, 10, 11).


3. What Strategies are Available for Accreditation?

The development of strategies for accreditation is a burgeoning area on an international scale, and particularly in Australia at the current time. Such development is frequently driven by high-profile cases that impact on the public, on the media and subsequently on governments. For example, in Queensland a series of innovations, including a $27 million clinical simulation centre, have been developed to try to address underperformance and enhance quality in the health workforce. It is widely perceived that the Bundaberg affair, in which Dr Jayant Patel was accused of significant departure from the provision of safe and optimal surgical care for his patients, with a subsequent warrant issued by the authorities for his extradition from the USA, was a critical event that stimulated this interest. In Australia there are now carefully controlled pathways to clinical practice for international medical graduates being developed by the AMC in collaboration with State health departments. In the UK the Shipman case, although triggered more by that doctor’s extreme psychopathology than by lack of competence, resulted in a multi-million dollar enquiry and a complete review of the stringency of the then current proposals for revalidation of medical professionals.

Process-based accreditation reviews focus on the capacity and resources of the institution to deliver their education goals and curriculum. Many accreditation programs have fallen into this category. There is a shift now to outcomes-based accreditation: reviews that consider data on the outcomes of programs of study, i.e. student capability, and graduate and employer satisfaction and success. Outcomes are benchmarks (minimum and maximum cut-off points) to which programs are held accountable in order to determine graduate and program success. Methods used include graduate and employer surveys, attrition and academic/clinical dismissal rates, data on the number of students employed in a relevant setting, and credentialing success data. This process began in the US for ultrasound courses in October 2007 (15).

Following is a summary of various accreditation strategies. These will be elaborated on and discussed further in Chapter 4.


A. Independent statutory accreditation of Educational Process, Initial Access or Training Outcome and subsequent registration of practitioners

An example of this is the USA medical licensing process, in which a national accreditation of medical schools takes place and a separate national licensing examination is produced, followed by state-based registration. The Liaison Committee on Medical Education (LCME) is the nationally recognized accrediting authority for medical education programs leading to the M.D. degree in U.S. and Canadian medical schools. The LCME is sponsored by the Association of American Medical Colleges and the American Medical Association. A separate body, the National Board of Medical Examiners, constructs a national licensing examination in three steps – two focussed on basic and clinical science and one on clinical skills. Separate (State-based) licensing boards are responsible for monitoring and regulating professional registration. Sonography in the US follows a similar process, whereby training schools are accredited and separate national licensing examinations are undertaken.

B. Statutory Accreditation of Process only

An exemplar here is most of the UK allied health disciplines, in which a national regulatory body accredits an institution to perform all the functions that need to exist to deliver a health professional. The Education and Accreditation Committee of the UK Society and College of Radiographers (SCoR) assesses and approves institutions that wish to provide radiography courses. The assessments that lead to registration are undertaken by the individual institutions. However, a generic independent body, the Health Professions Council, regulates individual professionals after licensure. This does not currently include sonographers, who have only just begun a process of voluntary registration. This strategy is similar for radiographers in Australia, whereby the Australian Institute of Radiography (AIR) accredits the courses, but assessments are performed in the training institutions and the professionals are registered on a state-by-state basis. This strategy is also utilised in Australia for many areas of health professional education (including nursing and physiotherapy) and medical education, excluding sonography.


C. Statutory Accreditation of Process and subsequent Regulation of Professionals

Exemplars are the General Medical Council (GMC) and the General Osteopathic Council (GOsC) in the UK, which both oversee the accreditation of their respective training institutions, but also have statutory responsibility for regulating individual registered practitioners. The GMC and GOsC both have separate departments for educational accreditation and professional regulation. The accreditation is process based – individual institutions run their own examinations leading to full or provisional registration. The GOsC has recently changed from awarding full to awarding provisional registration, with an initial 1-2 year pre-registration phase. This strategy is similar to Australian sonography course accreditation and sonographer registration via the ASAR. Whilst the responsibility for training institution accreditation is similar, the responsibility for regulating sonographers is not – currently there is no statutory obligation on the ASAR other than to maintain a register of those who have the appropriate education and fulfil the CPD requirements. Issues such as dealing with unfitness to practice and poor performance are not within the scope of the accreditation process of the ASAR.

D. Voluntary or Consensus Accreditation
Training courses undergo voluntary accreditation where there is no mandatory requirement. This often occurs early in the life of a profession, prior to formal accreditation. Participation helps to foster program and institutional improvement and can be used by the institution as an indicator to prospective students of its willingness to meet voluntary standards, in an effort to attract them to the course. Many alternative health professions fall into this category and, at present, so does sonography in the United Kingdom, where there is no requirement for mandatory ultrasound course accreditation or sonographer registration.


4. Further Description of Accreditation Strategies and Examples
Following the strategies outlined in the previous chapter, examples of different accreditation programs will now be described. Overseas models of medicine, radiography and sonography will be outlined and discussed in relation to a summary of their aims and standards, the process of accreditation, representation on the accreditation boards, and registration/licensing arrangements.

1. USA Medicine
Course accreditation
Undergraduate medical education
In the USA, private professional organisations accredit undergraduate and postgraduate medical education. Medical licensing is distinct from registration as a practitioner: licensing of individuals is performed first by the National Board of Medical Examiners through a nationwide examination process, and practitioners must then register on a state-by-state basis (16). Undergraduate medical education programs are accredited via the Liaison Committee on Medical Education (LCME). The LCME is recognised by the US Department of Education to undertake accreditation of courses leading to an M.D. degree at institutions that are themselves accredited by regional educational accrediting associations. Whilst this accreditation of medical schools is 'voluntary', it is required for schools to receive federal grants for medical education and to participate in the federal loan programs. Most state boards require LCME accreditation of the school as a condition for licensing of their graduates. Currently there are 129 US and 17 Canadian medical schools accredited by the LCME.



Membership of the LCME includes medical educators and administrators, practicing physicians, public members and medical students. Professional members are appointed by the Association of American Medical Colleges (AAMC) and the American Medical Association (AMA). A member is also appointed to represent the Canadian medical schools, which are also accredited by the LCME.

LCME Accreditation Standards – précis of the attributes evaluated
Institutional setting
Educational Program for the M.D. Degree (objectives, structure, teaching and evaluation, curriculum management, evaluation of program effectiveness)
Medical Students (including selection, services, learning environment)
Faculty (staff numbers, qualifications, functions, policies, governance)
Educational Resources


Postgraduate medical education
The Accreditation Council for Graduate Medical Education (ACGME) accredits the postgraduate training programs; its accreditation is required for the institutions that sponsor the programs and employ trainees. Each state medical/registration board requires varying amounts of postgraduate education to satisfy its licensing regulations. This may be one or two years. The ACGME is committed to accreditation standards that reflect educational outcomes in addition to structures and processes of training, such as:
patient care
medical knowledge
interpersonal and communication skills
professionalism
practice-based learning and improvement
systems-based practice

Medical Registration/Licensure
Licensing of medical practitioners is not granted until they pass the National Board of Medical Examiners (NBME) examinations. These are the United States Medical Licensing Examinations (USMLE), a 3-step process, and are co-sponsored by the Federation of State Medical Boards (FSMB). Once licensure is attained, the doctor must then register to practise in the states in which they wish to work. Generally there is some reciprocity across the states, but initial registration requirements vary greatly from state to state. Licensed medical practitioners must also undertake continuing education and professional development; these requirements also vary from state to state. The NBME and FSMB offer a post-licensure assessment system (PLAS) that provides the states with details of the individual's ongoing education and development (18, 19, 20, 31, 32).

Sonography
Ultrasound courses in the USA (and Canada) may be undergraduate (such as a bachelor degree) or postgraduate (certificate, diploma or even Masters level) (8).

Course accreditation
Ultrasound course accreditation in the US is undertaken by the Commission on Accreditation of Allied Health Programs (CAAHEP), after programs are reviewed and recommendations are made by the Joint Review Committee on Education for Diagnostic Medical Sonography (JRC-DMS) (15). JRC-DMS comprises members of supporting and sponsoring organisations, i.e. SDMS, ASE, ACOG, SVT, AIUM, ACR, ACC, ASRT and SVU (acronyms listed at end of document). Cardiac Technologist courses are accredited in a similar manner, whereby the Joint Review Committee on Education for Cardiac Technologists (JRC-CVT) makes recommendations for accreditation to CAAHEP (21). CAAHEP currently accredits 19 allied health professions and requires that all standards are reviewed every 5 years (8).

CAAHEP standards – a précis of the attributes evaluated
sponsorship (post-secondary institution)
program goals
resources
curriculum
student and graduate outcomes
evaluation/assessment
fair practices (8)

Courses requiring accreditation (initial or review) undertake:
1. Completion of a self-study document (from CAAHEP, utilising the standards of the appropriate professional body). The professional body provides guidance, procedures and policies regarding the process.
2. On-site evaluation to determine how closely the self-study document reflects the status of the program and to answer any additional questions. This peer review process includes an informal discussion of processes for program strengthening or improvement.
3. A review of the site visit report by the professional group's committee of accreditation (JRC-DMS/JRC-CVT) and a subsequent recommendation. Any deficiencies are identified and progress reports called for if required. Recommendations are made as to initial, continuing, probationary, transfer of sponsorship, withholding or (voluntary) withdrawal of accreditation.
4. Action by the CAAHEP Board of Directors on that recommendation, assuring that due process has been met and that the Standards are being applied consistently and equitably.
Initial accreditation is for 3 years; CAAHEP continuing accreditation has no time limit, but the professional committee recommends a review time frame (3, 5, 7 or 10 (max) years) and whether a progress report is required (8, 21). Course accreditation in the US is not compulsory, and there are currently 10 states that have no CAAHEP accredited courses.

Accreditation fees 2008
Application fee
Self-study report fee
Site Visit Admin fee (applicants pay travel and lodging costs for site visitors)
Annual accreditation: $US750 – 1,350

There is currently a move, supported by all ultrasound-related groups in the US, towards a National Education Curriculum in sonography, and this may change the non-mandatory course accreditation requirement.



Representation on CAAHEP includes professional and educational sonographer representatives, a representative of the public and a national organisational member (8).


Sonographer Credentialing/Certification/Registration
There are currently 3 certifying bodies for sonographers in the USA. The American Registry for Diagnostic Medical Sonography (ARDMS) administers examinations and awards credentials in diagnostic medical sonography (RDMS), diagnostic cardiac sonography (RDCS), vascular interpretation (RPVI) and vascular technology (RVT). Specialty area credentialing is also available within the RDMS and RDCS. Cardiovascular Credentialing International (CCI) undertakes the same role in all areas of cardiovascular technology, and the American Registry of Radiologic Technologists (ARRT) in all areas of radiography, including sonography and radiologic assistants. These 3 organisations administer examinations and award credentials/certification to those who pass; once successful, the certification organisation awards a "registered" credential. Annual registration renewal, fee payment and a variable minimum of continuing education credits are required to maintain registration (23). Certification/registration is voluntary and not legislated by any state as a requirement to practise as a sonographer. A bill pending presentation in the US Congress, titled "the CARE bill", has major allied health professional organisation support; if passed, it will require (as a precondition of reimbursement for all federal insurance programs such as Medicare and Medicaid) that sonographers providing the service for which reimbursement is requested be certified/credentialed by one of the national sonographer registries. This bill will also require specified educational standards to be met. There is, though, a very large number of employers of sonographers that require certification/registration, the primary reason likely being that non-certified sonographers increase the exposure window to professional liability and related lawsuits.
Practices undergoing American College of Radiology (ACR) or similar accreditation are obliged to employ only registered sonographers to comply with current site accreditation standards. Site accreditation is not compulsory but is often tied to 3rd-party health insurance payments. Therefore, the healthcare service providers are actually ahead of the regulatory area. Sonographers also appear to be in short supply in the USA and there is much discussion within the profession as to how to make it more attractive to new students and retain existing sonographers. Mandatory registration has been mooted as a way to improve the recognition and status of the profession, with any short-term staffing losses accepted for the greater gain of the profession and future interest in it. Other suggestions include increasing both the number of accreditation site reviewers and the number of clinical sites for training (26, 27). There are concerns, though, that the current short-staffing of sonographers in the USA and Canada may lead to more sites employing uncredentialled sonographers (24). There is a similar situation in Canada, whereby most employers request or require their sonographers to hold ARDMS credentials or registration via the Canadian Registry of Diagnostic Ultrasound Professionals (CARDUP), but there is no regulation of sonographers in any province within Canada, so registration is not a legal requirement to work. As in the USA, the major pressure is from sites that have to employ registered staff to comply with their site accreditation (24).

Radiography
The process of radiographic educational course accreditation and radiographer accreditation/registration is very similar to that of sonography in the USA, with the exception that the majority of states require licensing of radiographers to practise.

Course accreditation
Radiography course accreditation is undertaken by the Joint Review Committee on Education in Radiologic Technology (JRCERT) in a similar manner to sonographic education, but JRCERT is the only body recognised by the United States Department of Education to accredit programs in radiography and radiation therapy. Such accreditation is based on substantial compliance with the JRCERT standards (currently under review, having previously been revised in 2001) (28). JRCERT accreditation is a voluntary process, but only graduates from an accredited educational program are able to sit the certification examinations of the American Registry of Radiologic Technologists (ARRT) (28). JRCERT accreditation standards are directed at the assessment of program and student outcomes. Utilising these standards, the educational program undergoing accreditation must describe and document student learning outcomes and the pursuit of academic excellence; such assessment should lead to programmatic improvement. JRCERT accreditation standards require the educational program to articulate its purposes, to demonstrate that it has adequate human, financial and physical resources effectively organised for the accomplishment of those purposes, to document its effectiveness in accomplishing them, and to provide assurance that it can continue to meet accreditation standards. A variety of assessment approaches are used in the accreditation process to evaluate the program's ability to document its effectiveness.

JRCERT Accreditation standards – précis of the attributes evaluated
mission/goals, outcomes, effectiveness
program integrity
organisation and administration
curriculum and academic practices
resources and student services
human resources
students (rights, health and educational opportunity protection)
radiation safety
fiscal responsibility
The accreditation process is similar to that of the JRC-DMS, but with no higher accreditation body to make the final decision (as CAAHEP does with sonography). Initial self-study documents are completed and submitted, a site visit is arranged, and a report of these findings is completed and submitted to a JRCERT subcommittee. This subcommittee makes its recommendation to the JRCERT Board for deliberation, and the program is notified as to whether accreditation has been granted as initial, probationary or continuing, with dates set for program review. Unsuccessful programs may withdraw their application or have their accreditation withdrawn. Accredited programs are required to submit a "self study progress report" at time frames determined at the time of accreditation, usually yearly. Changes such as those of official, director, clinical coordinator or supervisor personnel, and other changes that will alter the structure of the course such that outcomes may differ from those previously described, need to be reported to JRCERT within a reasonable time frame. JRCERT board members include those from radiography, radiation therapy, general medicine and radiology. Clear guidelines are in place to ensure appropriate and unbiased behaviour of all members, subcommittees and site visitors.

Accreditation Fees
Initial accreditation:
Continuing accreditation: $US1,350 – 2,500 (dependent on number of clinical practice settings)
Annual fee to maintain accreditation: $US1,500 min
Site visit fee: (travel and accommodation expenses paid by the program)
Recognition of clinical education/practice setting fee: $US250 (for each site)
Interim report fee
Misc fees related to process variations

Radiographer Credentialing/Certification/Registration
The American Registry of Radiologic Technologists (ARRT) is the largest certification body for radiographers in the USA. Usually, only graduates from radiographic educational programs who have fulfilled specific clinical competencies may sit the national examinations to become certified. Primary certification is for the baseline content, while post-primary certification is for specialised areas of practice. Other certification bodies exist for non-radiographic medical imaging/radiologic practices. National examinations are conducted by the ARRT for an entry-level standard of radiographer/radiologic technologist. After passing these exams, practitioners' certificates are registered by the ARRT. These registrations must be renewed annually, and the ARRT requires registrants to undertake continuing education, which it administers.



Certification and registration is a voluntary process, but it is increasingly required by employers, and most US states require it for radiographer licensure. Of the 52 US states and jurisdictions, 8 have no regulation, 5 have partial regulation (i.e. mammography only) and the remaining majority of 39 regulate radiographic practice (30).


2. UK Medicine
Course accreditation
Accreditation of courses in the UK is currently in a state of flux. Many changes have been introduced and, particularly at early postgraduate and specialist levels, there is shared responsibility for accreditation.

Undergraduate medical education
The General Medical Council (GMC) accredits all undergraduate medical courses, utilising standards set in consultation with a wide group of stakeholders including the universities and the specialist medical colleges. Only graduates of such accredited courses are then eligible for GMC registration. In doing this, the GMC works closely with the Medical Schools Council (MSC). The GMC regulates the curriculum delivery of medical schools, including both the teaching and the examinations that they provide. It also has statutory responsibility for the content and standards of the undergraduate medical curriculum and for pre-registration medical training. These standards and requirements are defined in the GMC document "Tomorrow's Doctors". The GMC organises continuing monitoring of accredited schools through a new quality assurance program for the undergraduate medical schools called "Quality Assurance of Basic Medical Education" (QABME). This is aimed at ensuring that the standards and outcomes in undergraduate education are delivered appropriately, and involves regular self reviews and multiple visits to schools over a 10 year period (34, 35).

Tomorrow's Doctors standards – précis of the attributes evaluated
Curriculum outcomes
Curriculum content, structure and delivery
Assessing student performance and competence
Student health and conduct

QABME program précis
1. Medical schools must provide an annual document that:
Identifies significant changes to curricula, assessment or staffing
Highlights risks or issues of concern, proposed solutions and corrective actions taken
Identifies examples of innovation and good practice
Responds to issues of interest and debate in medical education, including promoting equality and valuing diversity
Identifies progress on any requirements or recommendations from the visit process.
2. Site visit:
Each medical school is visited by the GMC at least twice every 10 years, or additionally if significant change to the curriculum or facilities is mooted.
Visiting teams can include medical and educational professionals, medical students and lay personnel.
Information from the site visit is collected, confirmed and integrated into a judgement over 3 stages and 11 months. Evidence collected is evaluated against the standards in "Tomorrow's Doctors". Processes are in place to ensure appropriate probity of all participants.
New schools are visited in the same way, but a review is undertaken for EACH year of the initial program intake. Annual reports allow the GMC education committee to gauge the progress of each school and compare progress across schools. Once a new school is admitted to the list of universities that can award UK medical degrees under the Medical Act 1983, it undergoes yearly quality assurance as described above (34). The GMC, with the MSC, also has guidelines for medical students that outline appropriate professional behaviour and fitness to practise; these deal with professional behaviour, areas of misconduct, the sanctions available, and key elements in student fitness to practise arrangements (35).

Postgraduate medical education
Responsibility for postgraduate medical education standards and accreditation is currently shared:
The GMC is responsible for newly graduated doctors from the undergraduate courses it accredits in their first postgraduate year (foundation year 1, F1).
The Postgraduate Medical Education and Training Board (PMETB) is responsible for the second year of the foundation program (F2).
The PMETB is also responsible for setting the standards for an optional third stage of postgraduate training, in which general practitioner or consultant training is undertaken.
Until recently, the Royal Colleges were responsible for setting the standards of training in their specialities, while regional postgraduate deans commissioned, funded and managed delivery of postgraduate medical training across all specialities. The PMETB began its role in 2005 with a transition period; its first full intake of trainees was in 2007. The role of this Board is to supervise the education and training sector independent of government, replacing the specialist training authorities within each specialty college. The current aims of this Board are to raise training standards, improve the supervision of postgraduate education and training, and to consolidate and strengthen the roles of the colleges and faculties. Representation on the PMETB includes members from the royal colleges, postgraduate deaneries, trainees, clinical trainers, the GMC, National Health Service managers and patients (37). The PMETB and GMC have worked together to review standards for the new foundation programmes utilising the PMETB format – the "Standards for Training for the Foundation Programme" – and are working on a single system to ensure quality in the Foundation program (QAFP), providing complementary legal responsibilities in regulating the foundation program. The GMC produces standards for the first Foundation year, entitled "The New Doctor".

Postgraduate medical training now requires 2 years before entering specialist or GP training programs. Previously there was a 1 year pre-registration house officer posting, then 3-4 years of senior house officer posts prior to 5 years in a specialist registrar training program (or 3 years of GP practice training). One of the aims of this new program is to produce a medical workforce more quickly, and one more fit-for-purpose in the rapidly changing National Health Service; disorganisation of the previous system was also a concern. The new structure is titled "Modernising Medical Careers" (MMC) and it appears that the worldwide medical community will be keen to see its performance and outcomes (16). The specific aims of the MMC model are to produce a trainee who will:
be fit to look after patients with acute medical problems
have been exposed to a range of medical career options
have developed a range of professional "life skills" essential for working in a health care profession (eg communication, teamwork, time management and decision-making).
At the end of this training process it is the role of the PMETB to recommend trainees for entry to the appropriate General Medical Council register. However, a new training body, Medical Education England (MEE), has been suggested by the 'Darzi report' (81) and its predecessor the 'Tooke report'; MEE would take over the accreditation of postgraduate courses, amongst other matters in relation to improvement of the medical and allied health workforce within the NHS.

Accreditation standards – précis of attributes evaluated
The PMETB standards are categorised by "domains". These are:
patient safety
quality assurance, review and evaluation
equity, diversity and opportunity
recruitment, selection and appointment
delivery of curriculum including assessment
support and development of trainees, trainers and local faculty
management of education and training
outcomes
(as per the joint Standards for Training for the Foundation Program (38)).

Medical licensing/registration
All registration of medical practitioners is performed by the GMC. Provisional registration is awarded to graduates of medical schools to allow them to undertake the general clinical training needed for full registration. Those with provisional registration can only work in approved foundation year one (F1) sites. Those completing their first foundation year after August 2007 no longer have a requirement to undertake experience in surgery and medicine for full registration (a transitional process of requirements for full registration existed during the transition period of the new foundation postgraduate program). Full registration is required to work unsupervised in the NHS or in private practice in the UK.

Specialist registration is for those with suitable qualifications and experience to work as a consultant in a surgical or medical specialty for the NHS or in private practice (40).

Sonography
Course accreditation
Ultrasound training courses are accredited in the UK by the Consortium for the Accreditation of Sonographic Education (CASE). CASE is the recognised body in the UK to undertake this role, which is a completely voluntary process. At present, the gravitas attached to having a course accredited is well respected within the profession due to the standards and monitoring processes. The National Health Service (NHS) recommends that trainee and new sonographers should undertake such accredited courses; however, it is still possible for students to undertake a course that is not accredited (41). CASE membership consists of representatives from key professional groups such as the United Kingdom Association of Sonographers (UKAS), the British Medical Ultrasound Society (BMUS), the Society of Radiographers (SoR), the British Society of Echocardiography (BSE), the Institute of Physics and Engineering in Medicine (IPEM), the Royal College of Midwives (RCM) and the Society of Vascular Technology of Great Britain and Ireland (SVT) (42). In addition, current and previous program leaders from accredited university programs are also represented – currently 3 of the 14 positions on the CASE Council are held by such educational representatives. CASE recognises that each of its member organisations is unique and has its own agenda, but they are all provided a forum for idea interchange, so that their views may be synthesised to inform education and training programs (43). CASE undertakes accreditation of new courses, re-accreditation of established courses and annual monitoring of the courses it has accredited. Feedback to those courses is provided on both an individual basis and via an annual CASE report. Annual monitoring allows CASE to undertake consistent quality assurance of these programs and to take prompt appropriate action as indicated. CASE accreditation is for a maximum of 5 years, with shorter periods of accreditation in certain circumstances (42). All courses accredited are post-graduate, with the minimum being a post-graduate certificate. CASE maintains a registry of accredited courses which is updated annually. (The CASE Validation and Accreditation Handbook, March 2000 is supplied as Appendix 2 with approval from CASE.) In addition, CASE holds an open forum, normally twice each year. This is open to all individuals with an interest in sonographer education and is an opportunity for the member organisations of CASE and CASE accredited institutions to discuss aspects of sonographer education. Two members of each accredited institution are invited to attend this forum. The views of the institutions at this open forum are then taken into consideration when CASE policy is developed and reviewed.

CASE accreditation standards include:
1. Course content – core topic area and specific topic area
2. CASE learning outcomes – core components (science and technology, and professional issues) and specific clinical topics, with an appropriate balance between the core and specific clinical areas


3. Assessment – these strategies must be sufficiently rigorous and be matched to and measure learning outcomes. 4. The teaching team - specific requirements are stated 5. The learning environment- academic and clinical.

In addition, the CASE validation documentation requirements include: 1. Course organisation and philosophy 2. Syllabus, teaching and learning methods 3. Staffing and student information 4. Course and clinical resources

The process of CASE accreditation is:
1. Validation – whereby CASE representatives, external peers and the educational institution reach agreement that the course is designed to lead to an appropriate post-graduate award. This includes a site visit after all self-study documents have been completed and reviewed.
2. Accreditation – whereby the course is initially accepted by CASE as meeting the professional and educational criteria established by the members of CASE.
3. Re-validation – whereby the process of course validation is reviewed toward the end of the accreditation period. In addition to the objectives of the initial validation process, an evaluation of the success of the course in practice is undertaken. Student opinion and critique are important components of this re-validation process.
4. Re-accreditation – whereby CASE continues to accept the course, being satisfied by the re-validation process that it continues to meet the professional and educational criteria established by the members of CASE. This includes a site visit.
5. Review – whereby the re-validation and re-accreditation processes are undertaken at intervals of not more than 5 years. Annual monitoring findings are considered in this review process.
6. Annual monitoring – whereby the ultrasound course is regularly scrutinised. These reports may lead to monitoring visits. It is expected that the institution also undertakes its own internal monitoring of the course to critically appraise and monitor its outcomes.
Courses must notify CASE of any proposal to substantially change an accredited course at least 12 months before a re-validation or review date. Annual monitoring is a requirement for continuing accreditation. CASE validation processes include visits to educational institutions and may include interviews with staff and students (where appropriate) and visits to buildings, facilities and clinical placements. CASE retains the right to visit the educational institutions as it deems necessary. CASE documentation provides a useful resource for educational institutions undergoing accreditation, and for the accreditors in performing their duties in a fair, effective and appropriate manner. Fees paid to CASE by the educational institutions undergoing new and continuing accreditation cover the administration costs of the process and a coordinator. The costs incurred by accreditors during their training and accreditation activities are paid for by their parent bodies.

Sonographer licensing/registration
Accreditation of individual practice through personal accreditation is not performed through CASE. At present, there is no compulsory requirement for sonographer accreditation in the UK, within or outside the NHS system. Allied health professionals in the UK, such as radiographers and physiotherapists, are regulated by the Health Professions Council (HPC). As sonography is not recognised as a separate profession by the HPC, there is no mechanism to undergo such regulation as a sonographer. Many sonographers who have radiography qualifications (even if they do not work as a radiographer) undertake registration via the HPC, and midwives who practise ultrasound must be registered with the Nursing and Midwifery Council (NMC). The Society and College of Radiographers (SCoR) and the United Kingdom Association of Sonographers (UKAS) have jointly been promoting the need for sonographer registration since 2003 and presented a detailed submission to the HPC on the 4th July 2008. At present, both the SCoR and UKAS are encouraging sonographers to join a voluntary registry open to all professionals who practise sonography. Acceptance on this voluntary register does not authenticate competence or fitness to practise. Registrants are required to provide additional information such as evidence of CPD and, for those with no formal ultrasound qualification, a portfolio of education, training and experience that equates to a recognised qualification and fitness to practise. This joint registry (National Voluntary Registry – NVR) can be accessed via either the UKAS or SCoR websites and is free to members of UKAS and SCoR, but there is a joining fee of £75.00 for those who are not members of either professional group (46). A concern held by some members of the SCoR and UKAS is that, due to the current shortage of sonographers within the UK, employers are employing sonographers who have qualifications unsuitable for registration (i.e. non-radiographer/midwifery) or even those who entered the ultrasound profession at non-graduate entry, as in the USA (46).

Radiography Course accreditation
The Approval and Accreditation Board (AAB) of the College of Radiographers (SCoR) was established in 2004 to place all the College's approval and accreditation activities under the same framework. The HPC regulations entitled "Standards of Education and Training" are utilised by the AAB in its radiography course accreditation across the UK. Once a course has been accredited by the AAB, it is recommended to the HPC for accreditation. Accreditation is performed at approximately 5-year intervals with annual monitoring, similar to the CASE annual monitoring.

Site visits are integral to the accreditation process. The AAB also undertakes training for its assessors and meets with the higher education institutions on a yearly basis to discuss the accreditation processes and to allow input from these institutions to be considered for future process improvements (48). Both the SCoR and HPC maintain a listing of accredited courses.

Radiographer licensing/registration
Currently, only students who have completed an AAB/HPC accredited course are eligible for compulsory HPC registration. The HPC has protected the title "radiographer"; to use that title in the UK, you must be registered with the HPC, and there is a fine of up to £5000 for anyone who uses a protected title without registration. HPC registrants must:
- undertake yearly re-registration
- participate in a continuing education program
- work in accordance with the HPC standards of:
  - proficiency in radiography
  - good character of health professionals (including a character reference)
  - conduct, performance and ethics
Any behaviour outside the scope of required practice outlined in these standards may cause the radiographer to have their registration suspended, to be publicly cautioned, or to have their work practices restricted in some way. This is not retrospective and applies only to those who have been, or are, on the register. These documents are readily available on the HPC website (49).

3. Australia

Medicine Course accreditation
Undergraduate
Undergraduate medical courses in Australia and New Zealand are accredited by the Australian Medical Council (AMC). The AMC was established by the Australian health ministers in 1984 as a national standards body for primary medical education. Membership includes a wide-ranging representation of groups associated with standards of medical practice, including the Australian Medical Association (AMA), medical boards, universities, specialist medical colleges, health consumers and federal and state governments. The AMC assesses courses against explicit accreditation standards. The role of the AMC is to assess whether a medical school has appropriate structures and resources and a curriculum that meets these standards. Guidance on any changes required is provided and monitored. The purpose of the accreditation process is the recognition of medical courses that produce graduates competent to practise safely and effectively under supervision as interns, with an appropriate foundation for lifelong learning and for further training in any branch of medicine.


The AMC does not prescribe detailed curricula, core subjects or topics, or educational methods; it supports diversity and encourages innovative approaches to education. Medical schools therefore need clear objectives, a curriculum capable of achieving those objectives, a system of assessment that determines whether students have achieved the required knowledge, skills and attitudes, and a system for evaluating and monitoring the curriculum's effectiveness, with a process of modification to achieve the desired goals/objectives (50). The Accreditation Standards of the AMC are produced to align with the WFME guidelines, but differ in that they have only one level of standard, not two as in the WFME documents (4), and provide a commentary on best practice in each area. The aim is to ensure that accredited medical schools meet international medical standards. A copy of the AMC Assessment and Accreditation of Medical Schools: Standards and Procedures 2002/2006 can be downloaded from the AMC website. Guidelines are also provided to help medical schools understand the requirements of the process – "Preparing an accreditation submission – A guide to medical schools" (51). The AMC also has input into some vocational post-graduate course accreditation. Written reports are public documents.

Post-graduate
Post-graduate curricula and educational provision in medicine in Australia are accredited by the Postgraduate Medical Councils (PMCs), one per state. They are, or will shortly be, accredited to fulfil their role by the AMC; this process is still being developed. At a practical level the PMCs inspect and accredit individual hospitals, or consortia of hospitals, to provide training for junior doctors. At a more senior level, the specialist medical colleges accredit training posts for junior doctors for their post-graduate specialty training placements. The Post-Graduate Medical Councils are non-profit organisations, funded by state health services with some federal support for specific projects, that support the education, training and career development of hospital medical officers in their first and second years of post-graduate training, and beyond if they are not in a vocational training program. Post-graduate training is a minimum of one year for state registration to be granted, but may be 2 or 3 years depending on the requirements of the next stage of training. Guidelines for interns are available to outline the internship training and support mechanisms. Satisfactory completion of the 48 weeks of intern training is essential for state registration. The organisation of post-graduate medical education and training is complex, with many organisations and individuals playing varied and important roles. Dowton et al (2005) expressed concern that these roles have developed over the years with varying degrees of formalisation and little overall coordinated governance. A national structured curriculum for post-graduate medical education has recently been introduced to provide medical educators with a clear understanding of the competencies to be achieved.


The stakeholders in post-graduate medical education, with a brief summary of their roles, are:
- Federal and state government health departments: funding to support training in hospital and non-hospital practitioner sites
- Healthcare organisations (hospitals, health authorities): funding, policy frameworks
- Specialist colleges: defining program parameters and curricula, accrediting teaching sites, exit standards, candidate assessment, trainee selection (variable involvement)
- The Committee of Presidents of Medical Colleges: intercollegiate communications
- Confederation of Post Graduate Medical Education Councils: information sharing, projects related to education and training
- State Post Graduate Medical Education Councils: clinical training education standards/accreditation for new graduates in post-graduate training in public hospitals
- University medical schools: medical student education programs
- Australian Medical Council: accreditation of university-based medical education programs leading to state registration, accreditation of post-graduate vocational training programs, advising the Minister of Health and Ageing on recognition of new specialties and sub-specialties
- Medical Training Review Panel: examining the supply of and demand for training positions, and the impact of provider number legislation (16).

AMC Standards – précis of evaluated attributes
All standards are clearly defined, with comprehensive explanatory notes provided to assist understanding of the intent of each standard (50):
- Educational standards/attributes of graduates (incorporating knowledge, understanding, skills and attitudes)
- Standards for basic medical education (medical school governance, leadership/autonomy, course management, educational expertise, educational budget and resource allocation, interaction with the health sector, research, indemnification)
- Outcomes of the medical course (mission, outcomes)
- The medical curriculum (… continuum of learning)
- Curriculum – teaching and learning (teaching and learning methods)
- Curriculum – assessment of student learning (assessment approach, methods, rules and progression, quality)
- Curriculum – monitoring and evaluation (ongoing monitoring, outcome evaluation, feedback and reporting, educational exchanges)
- Implementing the curriculum – students (student intake, admission policy and selection, student support, student representation, student indemnification)
- Implementing the curriculum – educational resources (physical facilities, information technology, clinical teaching resources) (50)

AMC accreditation process summary
- Self-study document: The medical school completes a standard questionnaire – "Preparing an accreditation submission – A guide to medical schools" – which clearly states all requirements for completing the self-study. The document must be submitted four months prior to the site visit, and it is recommended that it be initiated 18 months prior to the assessment visit.
- Accreditation team is constituted: The team is chosen and the school is invited to comment on the team members and any declared interests of the proposed members. The final team and the date of the visit are provided to the school 6-12 months prior to the visit.
- Final documentation received and visit confirmed: In practice the visit is generally no longer than 4 working days.
- Accreditation report: A first draft is circulated to team members within 3 weeks of the visit. After agreement by the team, it is sent to the school for comment. These comments are then considered and draft recommendations are sent to the school. The school may request a review panel if unhappy with the recommendations; if a panel is requested, an independent chair is selected to review the process and recommendation, and this report is sent to the university and the AMC for consideration in their final accreditation decision. If no panel is requested, the AMC makes the decision, imposing conditions on accreditation or deciding whether to refuse accreditation.

The maximum period of accreditation is 6 years, and the AMC may offer an additional 4 years during its 4th year of accreditation, with periodic reports to be submitted in years 2, 5 and 7. Accreditation may be reviewed, or conditions imposed, on the basis of these reports at any time. More frequent reports are required of courses that have conditions imposed on their accreditation and of schools that have made significant changes to their course (50).

Medical licensing/registration
Registration of medical practitioners is at present a state responsibility, regulated by separate legislation in each state and territory. The individual medical boards administer registration, but to ensure a minimum standard of qualification and to streamline provision of services between the states, it was agreed that those eligible for registration are: graduates of medical schools in Australia and New Zealand accredited by the AMC, and graduates of other medical schools who have passed the AMC examination. Both categories must complete a period of supervised training before unconditional registration is granted (e.g. a one-year internship for Australian and New Zealand graduates). In Victoria, a recent review of the Health Professions Registration Act 2005 came into effect on 1st July 2007. It has brought most health professions under the one act, including medicine, nursing and the medical radiation professions (excluding sonographers); this is discussed further at the end of the review of radiography accreditation and registration in Australia later in this chapter. The Medical Board of Victoria still exists and has much the same role as previously, but it is hoped that the processes are more accountable, especially in the area of complaint investigation. It is expected that in the future medical practitioners will have available to them a national medical registration process, likely from 1st July 2010. It appears that all medical boards across Australia are supportive of this endeavour via COAG.
Sonography Course accreditation
Australian and New Zealand ultrasound training courses are accredited via the Australasian Sonographer Accreditation Registry (ASAR). This is compulsory in Australia under Medicare legislation (an ultrasound examination that bills Medicare must be performed by a registered sonographer, and registration can only be obtained if the sonographer has undertaken an ASAR accredited course), but it is not so in New Zealand. The Royal Australian and New Zealand College of Radiologists does, however, require this in its practice accreditation quality program. Accredited courses are all post-graduate, with the minimum being a post-graduate certificate. Course accreditation is performed via submission of a document that focuses on at least these key areas:
- program/unit objectives
- curricula/syllabus content and delivery
- processes leading to demonstrable outcomes
- resources
- assessment methods
- quality improvement strategies/outcomes
The document is reviewed by the members of the ASAR, who then request that representatives of the course applying for accreditation attend a face-to-face meeting with them to review and clarify the application. No site visits are undertaken. Accreditation is granted for 5 years to a program that shows substantial compliance with the standards, and for 3 years to one that shows satisfactory compliance. The 3-year accreditation is conditional on specific points highlighted in the accreditation process being rectified within that time frame to obtain ongoing accreditation; if they are not satisfactorily addressed, accreditation may be withdrawn. Re-accreditation occurs at either the 5- or 3-year point, but courses must supply the ASAR with notice of any significant changes to the program structure. The ASAR consists of representatives of the professional groups integral to sonography in Australasia and 2 representatives from Australian universities that provide post-graduate ultrasound programs.



Sonographer licensing/registration
Sonographers are registered by the ASAR on a yearly basis. To be accredited, candidates must have satisfactorily completed an ASAR accredited course, be undertaking an ASAR accredited course, or have completed an international ultrasound qualification that meets ASAR standards (56). As yet there is no other state or national registry, nor one that encompasses ALL sonographers, as the current regulatory process under Medicare legislation may allow some to avoid the process. Those working in a practice that does not bill Medicare (such as non-medical sites offering obstetric entertainment scans) or in a purely public system may be able to avoid ASAR registration. Given, however, the common practice of public hospitals performing examinations on private patients, this is becoming less likely. Whether sonographers will be included in the national registration of health professionals proposed by COAG is unknown.


Radiography Course accreditation
The Australian Institute of Radiography (AIR) accredits Medical Radiations/Radiation Therapy courses in Australia and New Zealand in accordance with the courses' inclusion of the following competency-based standards. The AIR sets these as a minimum standard for the tertiary institutions to use in their program development (57).

Accreditation Process
Course accreditation and re-accreditation requires:
- access to course documentation
- formal meetings to discuss courses with educational institutions
- inspection of facilities, the institution and clinical centres
- interviews with students, academic staff, clinical site practitioners and state branch AIR representatives.
New courses undergo a process of in-principle support, then support of the detailed course program; once this is approved, students may be enrolled. Stage 1 accreditation takes place in the first year of the course, and the final stage 2 accreditation takes place in the final year of the course, with extensive examination of the clinical and academic components. Re-accreditation takes place every 5 years, when new developments in technology and any issues raised in the initial accreditation are addressed. In regard to post-graduate radiographic education, such as graduate diplomas in CT/MRI or Masters programs of clinical practice, no such accreditation appears to be undertaken by the AIR or sought by the educational institutions running these programs.

Competency-based Standards (summary)
1. Knowledge and understanding: the principles, knowledge and concepts that underpin the profession
2. Critical thinking and evaluation: the qualities to think critically, creatively and reflectively
3. Professional and ethical practice: application of professional and clinical skills and ethical responsibilities
4. Care and clinical management: the role of the professional in the care, welfare and clinical management of others, including cultural sensitivities
5. Lifelong learning: attributes of professional development, teaching, mentoring and research.


Radiographer licensing/registration
Currently, radiographer regulation within Australia is state-based, via boards or other entities whose requirements vary. In NSW, radiographers may obtain an operator's licence from the NSW Environment Protection Authority (EPA), but no registration is undertaken in that state. Radiographers in South Australia are not required to be registered either. In the remaining states and territories there is a radiographer or medical radiation technologists' board; these vary in their statutory roles. Membership of the state boards generally consists of radiography, medical, radiological, educational and consumer representatives, but may vary from state to state. Most bodies require only registration, but the Medical Radiation Practitioners Board (MRPB) of Victoria will soon additionally require all practising radiographers to hold a Radiation Use Licence. This Board's provisions also include approval of courses that provide formal radiography education. It is uncertain at this time how the MRPB and AIR roles will complement or impact each other, but it can be assumed that the MRPB may take over current roles of the AIR in course accreditation and assessment of overseas practitioners.



These changes to the role, name and structure of the medical radiations licensing body came about after a review of the Health Professions Registration Act between 2002 and 2005; the resulting changes took effect on 1st July 2007. Under this change, a new regulatory model was adopted that affects what the individual boards can do and how they can do it. Its aim is to streamline and improve the fairness of the processes for complaint management, and to increase community representation and public access. The existing boards, such as the Medical Board of Victoria, still exist, and the Medical Radiation Practitioners Board was established under the new act. The boards retain the role of ensuring that their health practitioners are appropriately qualified and competent to practise. They will still set standards for entry into these professions, investigate complaints and impose sanctions on those practitioners found to have acted inappropriately. One main change is the transfer of responsibility for conducting hearings into matters of serious unprofessional conduct from the individual registration boards to the Victorian Civil and Administrative Tribunal (VCAT).



It is expected that the proposed national registration body discussed below may take over this role in the future.

Proposed Australian National accreditation/registration body
The Council of Australian Governments (COAG) met on 26th March 2008 and took a major step towards what it hopes will improve Australia's health system by signing an Intergovernmental Agreement on the health workforce. This follows on from the recommendations of the Productivity Commission's Australian Health Workforce Research Report of 2005. The agreement will for the first time create a single national registration and accreditation system for nine health professions: medical practitioners; nurses and midwives; pharmacists; physiotherapists; psychologists; osteopaths; chiropractors; optometrists; and dentists (including dental hygienists, dental prosthetists and dental therapists). It is hoped that the new arrangement will help health professionals move around the country more easily, reduce red tape, provide greater safeguards for the public and promote a more flexible, responsive and sustainable health workforce. For example, the new scheme will maintain a public national register for each health profession, ensuring that a professional who has been banned from practising in one place is unable to practise elsewhere in Australia (1). The new scheme, likely to commence on 1st July 2010, will encompass a single, consolidated scheme and a new national professional board for each of the nine professions. Each profession will develop its own standards for approval by health ministers (state and federal), and community representatives will play a key role in the new scheme. It is proposed that individual registration and accreditation decisions will remain the responsibility of the professions, but the national boards are expected to encompass educational and training program accreditation amongst other quality assurance activities. This should provide a platform of uniform national standards on which to base registration, and facilitate development of a national approach to the assessment of qualifications of overseas-trained health professionals. The amount of input each professional group may have on its board, and in setting the accreditation standards and scope-of-practice guidelines, is uncertain. It is also uncertain whether the roles of accreditation and registration may be delegated to those currently performing them, although the Productivity Commission's document states that this may be the initial practice. The national register was originally not intended to include the medical radiations professions in its initial phase, but submissions are being called for to consider inclusion of these professions and sonography; alternatively, it is expected that they will be included at a later date (59).


5. Multiple Foci of Accreditation Process
Recent concern over the quality of healthcare education, and of medical education in particular in Australia (62), may lead to a strengthening of process and outcome accreditation within the next 5 years. This might result in separate accreditations of the educational process and of educational outcomes: for example, the first would involve an analysis of an institution's educational provision, and the second a national licensing examination of the clinical competence of all of its students. This might mean that national licensing examinations are developed for some healthcare professions where currently there are none, aligning Australia with the USA and Canadian educational systems rather than the UK/European ones. Nevertheless, at present in Australia, accreditation mechanisms focus predominantly on process attributes, and this mirrors most practice worldwide in the accreditation of healthcare curricula. As part of this exercise the accreditation system usually focuses on certain dimensions of the educational activities of an institution. In this chapter we discuss the typical foci that accreditation teams examine as part of their deliberations. Across different professions, and in different institutions at different times, the philosophical framework to which these judgements contribute can vary significantly. For example, in the UK the Quality Assurance Agency has changed from a process of detailed subject review to one of institutional audit, largely, it is thought, due to the excessive intensity and cost of the former. Nevertheless, many healthcare accrediting bodies focus on the issues described here, and do so in some detail. The argument for this degree of thoroughness hinges mainly on the need to assure the public that health professionals are being optimally selected, educated and prepared for practice in ways that do not jeopardise patient safety.

Institutional Self Analysis Document
A feature common to most accreditation processes is a comprehensive self-analysis prepared by the educational institution, detailing its initial conclusions on the areas described below. Please note that the following areas and elaborations are examples of the considerations and questions raised in relation to the issues, not a comprehensive list. Some accreditation programs offer assistance to those seeking accreditation in completing this document, whereby a member of the designated team is given the role of advisor (17, 42). Many accreditation bodies provide specific, detailed information to help the course undergoing accreditation complete the self-study document, specifying the items that require answers and the additional information that must be provided; others have less detailed submission requirements. The following discussion points have been developed using documents that may repay further investigation: the CASE Validation and Accreditation Handbook, March 2000 (provided as Appendix 1); the AMC Preparing an accreditation submission – A guide to medical schools, November 2007; the LCME Functions and Structure of a Medical School – Standards for Accreditation of Medical Education Programs Leading to the M.D. Degree, June 2007; and the GMC/PMETB Standards for training for the Foundation Programme, 2007 (36, 42, 51, 64).
Typical Foci of Interest in Accreditation
Aims and outcomes
The reviewers will consider the broad descriptions of aims and outcomes in the course documentation, and how closely these intended learning outcomes relate to the overall goals of the programme. They will also look at the extent to which the objectives match external reference points, for example the accrediting body's own specifications or other professional body requirements: for instance, how explicitly are the intended learning outcomes of a course and its constituent parts communicated to the staff who will teach it, to students and to external stakeholders? Student handbooks and curriculum documents will provide the data for much of this analysis and will also illustrate how aims and outcomes are made explicit.

The Curriculum
The curriculum is the sum total of all the course aims, goals, duration, content specifications, design and structure, teaching and learning methods, assessment strategies and course evaluation activities designed and offered by the institution. The major issue here is the extent to which the curriculum is capable of achieving the outcomes that the institution espouses. Many standards reflect the 'time-based vs competency' debates: for example, in the European Union there is a rule that medical courses should deliver 1000 hours of clinical experience to establish 'competence'. In the UK this is being resisted, and an approach based on competence-based requirements for licensure may be being developed. Certainly in Australia such a rule would jeopardise efforts to shorten the total time required to reach specialty qualification in medicine by using 'streaming' at undergraduate level. Other questions typically addressed include: Are there any major gaps in the aims and goals? Do the assessment activities systematically address all the objectives of the curriculum? Are the teaching and learning methods appropriate and developed congruently with international best practice? Does the overall assessment strategy have an adequate formative function, capable of nurturing students' abilities and developing their academic and professional skills in both campus and clinical placement settings (36, 42, 51, 64)?
At a strategic level, do some curriculum models have potentially detrimental outcomes? For example, is any part of the curriculum delivered by distance mode? In Australia distance learning is widely used in many courses. Some courses deliver part of the learning as face-to-face group workshops for small segments of some units, but given the cost to universities of providing these, they may be reduced in the future. Courses offering distance education are required to meet the same standards as those offering face-to-face learning. This is a crucial issue for sonography courses within Australia, given their predominant distance learning mode: many offer no on-site workshop component, while some do. This issue is discussed further in Chapter 7.


Evaluation of the curriculum will also be part of the accreditation process: how, and how often, the course is evaluated, and how this information is utilised for the improvement of the course. Outcome data may be requested to demonstrate the extent to which the program's educational objectives are being met. Research programs developed by the school may also be information sought in an accreditation process, especially for medical schools.

Governance of the Curriculum
To what extent should, and do, professional bodies have a say in the governance of curricula? Where there are major pressure groups (e.g. indigenous people in Australia, or the patient voice in the UK) this sometimes impacts on defined standards (12, 50). Details of the degree of interaction with the health sector may also be required. Professional body influence may vary from brief outlines of curriculum components to quite extensive descriptors of what is to be taught (7, 12, 54). Churchman and Woodhouse concluded that the relationships between universities and professional bodies are very complex and vary greatly between professional disciplines. Industry requirements, student demands, government policy, the regulatory environment and international trends all influence relationships concerning accreditation and regulation of courses. As a point of difference from other, non-professional university-based courses, students enrolled in professional educational programs (such as the health professions) are studying for a very specific purpose, which may have a direct influence on their long-term employment. Professional bodies often try to exercise control over the licence to practise, while tertiary institutions try to develop professional education courses that offer breadth and intellectual challenge and develop the critical abilities of students. Both parties then often have to reach a compromise, and this may require constant review. Professional bodies may need to strike the right balance in what they impose on the educational bodies, so that the standards to be met are well-informed, unbiased and widely encompassing, and consider the role of the consumer and educational institution and emerging trends.
Governance of the School This may also be considered, to assess the relationships of the school with its clinical and educational components and the appropriate representation of all relevant groups in the decision making. The legal standing and institutional accreditation of the school may also be requested, in addition to the management structure of the school and course. Information on educational budget and resource allocation may also be needed.

Assessment Processes
Accreditation bodies typically ask what assessment methods have been selected by the institution, and whether they are appropriate to the intended learning outcomes and to the nature and scope of the clinical practice involved. This involves ensuring that a range of assessment formats is used that is appropriately aligned to the components of the course. For example, does the school have a defined and documented assessment policy which guides student learning toward attaining the outcomes of the course? Are the criteria used coherent and robust enough to enable assessors to distinguish between different levels of competence? What are the characteristics of the security, integrity and consistency of the assessment procedures, and how do the construction, marking, moderation of, and standard setting for the assessments take place in both academic and clinical settings? Are reliable and valid assessment methods utilised, and new assessment methods developed where required? Are the assessment standards and processes consistent? How do employers and other practising professionals contribute to the development of assessment strategies? In some accreditations it is not uncommon for samples of marked student work, annual review reports, and statistical and psychometric data on assessments to be reviewed, to assist reviewers in evaluating the contribution of assessment to the overall judgement about standards.
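The "statistical and psychometric data" mentioned above can be made concrete. As a hedged illustration (the marker names and pass/fail decisions below are invented, not drawn from any accreditation dataset), the following sketch computes Cohen's kappa, a standard measure of agreement between two assessors corrected for the agreement expected by chance alone:

```python
# Illustrative sketch only: the ratings below are invented. Cohen's kappa
# measures agreement between two raters over the same cases, corrected for
# the agreement that would be expected by chance.

def cohens_kappa(rater_a, rater_b):
    """Chance-corrected agreement between two raters over the same cases."""
    n = len(rater_a)
    categories = set(rater_a) | set(rater_b)
    # Observed proportion of cases on which the raters agree.
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Agreement expected by chance, from each rater's marginal proportions.
    expected = sum(
        (rater_a.count(c) / n) * (rater_b.count(c) / n) for c in categories
    )
    return (observed - expected) / (1 - expected)

# Two markers independently grading the same eight pieces of student work.
marker_1 = ["pass", "pass", "fail", "pass", "fail", "pass", "pass", "fail"]
marker_2 = ["pass", "pass", "fail", "fail", "fail", "pass", "pass", "pass"]

print(round(cohens_kappa(marker_1, marker_2), 2))  # prints 0.47
```

A kappa near 1 indicates strong marker consistency; values much below about 0.6 would prompt a reviewer to look at marking criteria and moderation procedures.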

Assessment Strategies
In general, assessment should conform to well-established educational principles; these are summarised in Appendix 2. Briefly, these principles include the need for assessment to be reliable, valid, acceptable to trainees and assessors, and relatively cost effective. Unfortunately, to be useful and detailed, with constructive feedback to test takers, assessment needs to be comprehensive, and therefore expensive. There is essentially no shortcut through this requirement. The history of assessment has been dogged by attempts to find inexpensive, easy-to-mark and reliable assessment methods. All such attempts have failed. Furthermore, in the USA national clinical assessments were abandoned in the 1960s because early research found them to be unreliable. This was eventually recognised as having a negative impact on the perceived value of clinical work and performance assessment, a view encapsulated in an important address by George Miller to the Association of American Medical Colleges on the assessment of competence.

In this article Miller identified four levels of assessment, using the analogy of a pyramid, where each level addressed a different domain of 'competence'. The critical point of Miller's article, however, was that the top two levels, comprising the pinnacle of the pyramid and the most authentic aspects of clinical professionalism, consisted largely of competence at clinical tasks ('shows how') and the performance of clinical tasks in a real setting ('does'). These two levels, particularly the 'does' component, were not being adequately assessed (Fig. 1).


[Figure: the Miller Pyramid. From base to apex: Knows; Knows how; Shows how; Does. The upper levels (competence vs. performance) carry increasing professional authenticity. Miller GE. The assessment of clinical skills/competence/performance. Academic Medicine (Supplement) 1990; 65: S63-S7.]

Figure 1. The Miller Pyramid. Diagram converted from an original of Cees van der Vleuten (79).

Consequently, in Canada and the USA, new national clinical assessments in medicine started in 2001 and 2004 respectively. This has also led to a greater emphasis on, or rediscovery of, work-based assessment by the health professions. Work-based assessment has become an important feature of postgraduate training in medicine. It has always been a feature of other health professions education, especially nursing, but has not received the detailed attention to sampling, scoring and standardisation that is required to achieve reliable and valid results.

Purpose of Assessment
The purposes of assessment are:
- measuring competence/performance
- measuring academic achievement
- diagnosing a student's problems
- self-evaluation
- setting standards
- measuring improvement
- identifying effective learning and teaching
- showing effectiveness of the curriculum
- predicting future performance
- introducing curriculum change (83, 84)

Assessments should be fit for the purpose they are designed to achieve. For example, assessments designed to show the effectiveness of the curriculum do not need to discriminate between individuals, but they do need to reflect the presence of a wide variety of content. Such breadth can be achieved, for example, by assessing different portions of the class on different items so that more items are covered. Such an approach would clearly not work for competitive examinations designed to select trainees into an advanced training program. The aim of assessment is to provide an accurate picture of how a person compares to a set of standards. These standards can be defined by pre-set criteria (criterion referenced) or by the performance of others (norm referenced). In assessments of professionals, international best practice is moving towards totally criterion-referenced assessments for licensure. Assessment should highlight the strengths and weaknesses of trainees in an unbiased fashion, through comprehensiveness and a high degree of objectivity. Assessment methods should have:
- reliability, i.e. they should be reproducible (e.g. the same score with different markers)
- content validity, i.e. they should demonstrably contain an appropriate and unbiased sample of the clinical and intellectual tasks that they are supposed to measure
- face validity, i.e. they appear to measure what they are supposed to measure
- construct validity, i.e. they are demonstrably appropriate to the professional traits being tested
- criterion/predictive validity, i.e. scores on the assessments correlate with performance on gold standards or in real-life situations (83, 84).
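The distinction between criterion- and norm-referenced standards can be made concrete with a small sketch. This is an illustration only, with invented scores, a hypothetical cut score of 60 and an arbitrary "top half passes" rule, not a procedure prescribed by any accreditation body:

```python
# Illustrative sketch only: scores, cut score and pass fraction are invented
# to show the contrast between the two referencing approaches.

def criterion_referenced_pass(scores, cut_score):
    """Pass everyone who meets a pre-set standard, regardless of the cohort."""
    return {name: score >= cut_score for name, score in scores.items()}

def norm_referenced_pass(scores, top_fraction):
    """Pass a fixed fraction of the cohort, regardless of absolute standard."""
    n_pass = max(1, round(len(scores) * top_fraction))
    ranked = sorted(scores, key=scores.get, reverse=True)
    passers = set(ranked[:n_pass])
    return {name: name in passers for name in scores}

scores = {"A": 82, "B": 74, "C": 68, "D": 55}

# Criterion referenced: A, B and C all meet the fixed standard of 60.
print(criterion_referenced_pass(scores, cut_score=60))

# Norm referenced: only the top half of the cohort passes, so C fails
# despite meeting the same 60-mark standard.
print(norm_referenced_pass(scores, top_fraction=0.5))
```

The same candidate (C here) can pass under one scheme and fail under the other, which is why the choice of referencing approach matters for licensure decisions.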

Types of assessment
Summative - a measure of end-point achievement, e.g. pass/fail
Formative - an attempt to define strengths and weaknesses, often performed in the lead-up to a summative assessment to provide feedback to the trainee on their current level of performance.

Competence vs performance
Competence is considered to be the underlying ability to undertake a clinical task, and the degree to which individuals can apply the skills and knowledge associated with a profession to a range of situations that they may encounter in their professional role. Most assessments in academic institutions address this ability. There is very little doubt that competence develops from experience (practice) combined with feedback. By contrast, performance is about how the task is performed in a real-life setting, in essence what a clinician does on a day-to-day basis on the job. This is of course determined largely by competence, related to the duration of training and experience, but is modified by attitude, tiredness, illness, the environment, the workplace culture and so on (77, 104).

The work by Miller was revisited and reworked by a group of authors in the UK in 2000. Their conceptualisation (see Figure 2) confirms that performance is also determined by factors other than the 'simple' concept of competence. For example, on-the-job performance can be affected by institutional or human factors. This model is useful for highlighting the need to look at these other factors when making decisions about the performance of individuals and also, in terms of accrediting institutions, for cautioning that while the individuals working within them may be 'competent', they may be performing less than optimally for a variety of reasons outside of their control. One of the goals of accreditation may be to enable the institution to achieve more with the personnel at its disposal.


[Figure: 'The Miller-Cambridge Pyramid – a new perspective'.
Systems factors: institutional attributes of performance (culture; funding sufficiency; quality of other personnel).
Personal and professional underpinnings of performance: health, conduct, motivation.
Competence factors: academic underpinnings of performance (competence = clinical decision-making, procedural skills, communication skills).
Cambridge Conference on Physician Assessment, 2001.]

Figure 2. The Miller-Cambridge Diamond Model


The literature on competence development and assessment is vast, and beyond the scope of this document to cover in detail. However, there are some notable models of the development of competence that have informed curriculum accreditation. Dreyfus and Dreyfus developed a model for learning called the "Five stages of skill acquisition", alternatively known as "The five steps from novice to expert". This was developed by observation of the skill acquisition of airline pilots, chess players, automobile drivers and adult learners of a second language. A common pattern was found in all cases, though not all people achieve the highest, expert level in all skill areas. For example, in chess, only a small fraction of beginners can ever master the domain, whereas in automobile driving, almost all novices can eventually reach the expert level, though some will be more skilled than others. This model has been extensively used in allied health education and in the development of competency-based standards in ultrasound in Australia (100, 101, 102).


Assessment of competency therefore needs to consider these and other models of skill acquisition to both fairly and appropriately examine the student, and to assess if their level of competence is appropriate for their stage of training. It is therefore important to make sure that the assessment regime being used is appropriate to the level of competence developed by the practitioner, and to the level of operation (undergraduate/postgraduate or technical/diagnostic) that is being accredited. For example in the inspection of a training site, if the trainee practitioner never engages in ‘real’ work (actual performance of patient-related tasks) it would not be sensible to assess at the level of performance using those work-based methods (see below) appropriate to the top level of the Miller-Cambridge models (such as direct observation of performance; case based discussions; patient satisfaction questionnaires).


Dreyfus and Dreyfus found that being an expert, or being at a particular stage of skill acquisition, does not necessarily mean performing as well as everyone else exhibiting the same type of thought process. They refer to stages in their model because:
- each individual, when confronting a particular situation in their skill domain, will usually approach it first in the manner of a novice, then of an advanced beginner, and so on through the five stages
- the most talented individuals employing the kind of thinking that characterises a certain stage will perform more skilfully than the most talented individuals at an earlier stage.

Each stage of the Dreyfus model will be briefly discussed, with extrapolation as to how these stages may apply in ultrasound skill acquisition.

Stage 1 – Novice/beginner
The novice or beginner learns to recognise various objective facts and features relevant to the skill and acquires rules for determining actions based upon those facts and features. These elements are often context-free, i.e. not recognised with reference to the overall situation in which they occur. The beginner student wants to do a good job, but lacks a coherent sense of the overall task. They judge their performance mainly by how well they follow the learned rules. Often, in exercising the skill, the beginner is concentrating so much that their capacity to listen to advice or talk is severely limited. The first rules allow for the accumulation of experience.

As related to the beginner sonographer, they:
- need direction about how to apply their academic knowledge to clinical situations, requiring close supervision
- keep to the rules and lack flexibility and clinical know-how
- have difficulty in coping with more than one demand at a time
- miss subtle clinical clues
- use rudimentary clinical communication skills and exhibit a lack of ease with patients
- find it difficult to cope with the needs of the patient at the same time as the technical aspects of the examination (101).

Stage 2 – Advanced Beginner
Performance improves to a marginally acceptable level only after the novice has had considerable experience in coping with real situations. Using more context-free facts and sophisticated rules, they are taught more about the concept of the "world of the skill". Practical experience with elements that have context and meaning allows the student to begin to recognise these elements through a perceived similarity with previously seen examples. These elements are now called "situational elements", which allow recognition based on experience. Rules can be followed based on the presence or absence of recognisable factors (100, 102).

As related to the advanced beginner sonographer, they:
- begin to modify protocols in light of the clinical question, the patient's condition and knowledge obtained from previous cases
- begin to take responsibility for planning the entire examination
- are methodical and efficient, with an ability to think laterally

- cope with more than one demand at a time
- begin to recognise subtle clinical clues and their implications
- empathise with the patient and communicate professionally
- anticipate potential problems, minimising mistakes
- focus on the needs of the patient at the same time as the technical aspects of the examination
- still require some guidance with practical skills due to inexperience
- have a greater awareness of their strengths and weaknesses and know when to refer for guidance, beginning to act as a team member (101).

Stage 3 – Competence
A competent performer no longer merely follows rules, but functions with a goal in mind. They see the situation as a set of facts, where the importance of the facts may depend on the presence of other facts. They have learned that when a situation has a particular constellation of elements, a certain conclusion can be drawn, a decision made or an expectation investigated. The advanced beginner may get along without recognising and using a particular situational element until a sufficient number of examples renders identification easy and sure; but to perform at the competent level requires choosing an organising plan, whereby the choice crucially affects behaviour in a way that one particular situational element rarely does. The competent performer then feels responsible for, and therefore emotionally involved in, the product of the choice or plan. They understand and decide in a detached manner, but are intensely involved in what occurs. An outcome that is successful is then deeply satisfying and leaves a vivid memory of the components of the plan and the situation. Disasters are also not easily forgotten. Problem solving is therefore the aim, whether conscious or not (100, 102, 104).
As related to the competent sonographer, they:
- rely upon knowledge from previous clinical cases
- demonstrate initiative and modify protocols due to clinical indicators
- act professionally in a responsible and ethical manner, and are patient focussed
- grasp the ultrasound examination as a whole, instead of as a series of tasks
- are able to determine what information is relevant
- prepare for the examination and anticipate potential problems
- accomplish the examination in a timely and efficient manner
- cope with more than one demand at a time
- recognise subtle clues and their implications
- recognise pathology and artefacts, and have skilled image acquisition
- communicate effectively with the patient and the team
- recognise the limits of their knowledge and experience and when advice or assistance is required (101).

Stage 4 – Proficiency
This stage, and the highest stage, Expertise, are characterised by a rapid, fluid, involved form of behaviour compared to the slow, detached reasoning of the problem-solving process described in Stage 3. The proficient performer is usually deeply involved in the task and will be experiencing it from some perspective of recent events that allows certain features to stand out and others to be ignored. This form of pattern recognition involves no detached choice or deliberation; it just happens, likely because the proficient performer has experienced similar situations before, and memories of them trigger plans similar to those that worked in the past and anticipations of events similar to those that occurred. Understanding that occurs effortlessly upon seeing similarities with previous experiences is known as intuition, or know-how. This is the sort of ability we all use all the time as we go about our everyday tasks, not a form of supernatural inspiration. Therefore, the proficient performer intuitively organises and understands the task, while still thinking analytically about what to do (100, 102).

As related to the proficient sonographer, they work in an intuitive manner, are patient focussed, and require little supervision.

Stage 5 – Expertise
An expert generally knows what to do based on mature and practiced understanding and further advanced pattern recognition skills. Problems are not seen in a detached way, and conscious deliberations such as plans are not devised. The expert's skill is such a part of them that they are not aware of it; a great deal of their knowledge is finely tuned and automated. Instead of concentrating on how to do the task, they are experiencing it. With expertise comes fluid performance, but an expert may, in spite of great experience, be thrown by events they could not foresee (100, 102).

As related to the expert sonographer, they work similarly to the proficient sonographer, but with the further maturity and experience to work in a way that comes naturally and without effort.

Much research is being undertaken to investigate the acquisition of medical competence through cognition and expertise, looking at the development and characteristics of expert performance. Cognitive science is primarily concerned with characterising the knowledge structures and cognitive processes underlying human performance.
Cognitive factors of expert performance include efficient memory use, well-organised knowledge and productive problem-solving strategies. Research in expertise is ongoing to determine what distinguishes outstanding individuals in a domain from less outstanding individuals. Human cognition is usually constrained by the limitations of working memory, and it appears that an expert's well-organised domain knowledge enables them to circumvent some of these limitations. Performance, though, is not just based on memory skills. It is a function of a well-organised knowledge base adapted to recognise familiar configurations of situations and information (105).

Acquisition of expertise includes periods of smooth, incremental learning that are often followed by periods of consolidation of knowledge that may cause a decline in performance. This "intermediate effect" may occur repeatedly at strategic points in learning where large bodies of new knowledge or complex skills are acquired. Knowledge at this stage may need to be further reorganised, and performance may fall until mastery at this level is achieved (105).

Work-based assessment
Norcini suggests that assessments of actual practice are a much better reflection of routine performance than assessments done under test conditions. This would seem sensible, with the proviso that these assessments must yield reliable and equitable judgements on the quality of the actual practice. Such work-based methods of assessment target the highest level of Miller's pyramid (i.e. "does") and collect information about the student's performance in actual practice, although they are likely subject to much more serendipitous variation due to extraneous and unpredictable factors (such as a machine malfunction during a routine clinic). Despite this, they are perhaps a truer indication of the likely performance of the student. For example, the GMC/PMETB standards for the UK medical foundation year require documented evidence of:
- direct observation of the foundation doctor's performance
- reports from colleagues about the foundation doctor's performance
- discussions with the foundation doctor about their performance
- the foundation doctor's personal portfolio
Other evidence may be required, such as patient feedback and/or audit outcomes (36). Other common methods of assessment, such as multiple choice questions, simulation tests and objective structured clinical examinations (OSCEs), tend to target the lower levels of Miller's pyramid (i.e. "knows how", "shows how"). They assess tasks in a restricted or controlled environment.

Clinical Assessment
Assessment of clinical competence and performance is a complex issue. All medical and paramedical schools (and external examination boards) are responsible for developing procedures to assess the clinical competence of large numbers of examinees. The performance of the examinees, as measured by these procedures, provides the basis for decisions regarding promotion to the next stage of training and entry into practice. The decision assumes that the assessment procedure is a valid reflection of the examinees' stage of development and their readiness to perform satisfactorily at the next stage of training or practice.

With this assumption in mind, assessment of this level of competence needs to be as accurate as possible. One approach to determining the components of competence is to identify a large number of clinical tasks within the range of the competence of the student and then to measure performance on a sample of those tasks. The ideal method of assessing clinical performance:
- is easy to administer
- is valid, i.e. measures the appropriate components of competence
- is reliable, i.e. produces reproducible results
- is efficient in terms of resources (75)

Assessments may be carried out in a variety of ways, but should be carried out to the same standard. In choosing how to assess clinical competence, the purpose of the assessment must first be determined; the following guidelines, developed by Newble et al (74, 83), could then be followed in developing and implementing the assessment method:

1. Define what is to be tested
- identify the clinical problems that the student should be able to manage to some degree
- define the clinical skills for each task in which the student should be competent, i.e. procedural skills and cognitive skills may require different assessment approaches
- prepare a blueprint to guide the selection of problems to be included in the assessment procedure
2. Select test methods and format
- test methods should strive to represent reality appropriate to the clinical task
- the clinical task should dictate the method used to test it (a comprehensive assessment procedure would often include more than one testing format)
- recognise the practical restraints on selecting optimal examination methods (e.g. time available, resources, what is appropriate to the profession)
3. Test administration
- use valid and reliable formats
- use efficient testing methods
4. Technical issues
- consider the technical aspects in selecting an assessment tool, e.g. time frames for clinical observation, training involved when using standardised patients
- weighting of items, fairness, equity; more important tasks should have higher weighting
5. Bias
- avoid, monitor and adjust for bias
6. Standard setting
- determine the score required to pass an examination
7. Reporting of marks
- all students should receive clear guidelines for the interpretation of the marks
- marks should be efficiently collated and archived
8. Item banking
- formal banking of questions/multiple choice questions
9. Evaluation of the testing methods, to ensure ongoing appropriateness (74)
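Step 6 above, standard setting, is often operationalised with judge-based procedures. As a hedged sketch, the following illustrates the arithmetic of one widely used approach, the Angoff method, in which each judge estimates the probability that a borderline candidate would answer each item correctly; all figures below are invented for illustration:

```python
# Hypothetical sketch of the arithmetic behind the Angoff standard-setting
# method. The judges' probability estimates are invented.

def angoff_cut_score(judgements):
    """judgements: one list per judge, each giving the estimated probability
    that a borderline candidate answers each item correctly."""
    n_items = len(judgements[0])
    # Average the judges' estimates for each item, then sum across items:
    # the result is the expected test score of a borderline candidate.
    item_means = [
        sum(judge[i] for judge in judgements) / len(judgements)
        for i in range(n_items)
    ]
    return sum(item_means)

# Three judges rating a five-item test.
judges = [
    [0.9, 0.6, 0.7, 0.5, 0.8],
    [0.8, 0.5, 0.7, 0.6, 0.9],
    [0.7, 0.7, 0.6, 0.4, 0.7],
]

cut = angoff_cut_score(judges)
print(f"Cut score: {cut:.2f} out of 5")  # prints "Cut score: 3.37 out of 5"
```

Because the cut score is derived from judgements about a borderline candidate rather than from the cohort's performance, this is a criterion-referenced procedure, consistent with the trend noted earlier for licensure assessments.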

Methods of Assessment, as likely applicable to sonographer education
Detailed examples are provided in Appendix 1.

Teaching and learning
The reviewers will judge the range and suitability of the teaching methods employed in relation to curriculum content and programme aims. They will also evaluate how staff draw upon their research, scholarship or professional activity to inform their teaching. How students apply self-direction, and how they participate in group learning activities, is frequently investigated. Furthermore, it is important to know how course materials support learning, and how students' independent learning is encouraged by staff, by the clinical context, by the course materials and by the assessment systems. How is quality teaching enhanced through methods such as staff development, peer review, integration of part-time and visiting staff, effective team teaching and the induction and mentoring of new staff? Sources of evidence for these judgements may include student evaluations, internal review documents, staff development activities, course and student handbooks, and discussions with staff and students. In some accreditations direct observation of teaching is included.

Human and Physical Resources
The accreditation would review staffing levels and the appropriateness of staff qualifications and experience, including both teaching and support staff, and the extent of continuing professional development to keep abreast of advances in technology and clinical practice. Some accreditation programs require specific details of each staff member and the qualifications of the teaching staff, and require that any changes be notified promptly. Staff indemnification arrangements may also be required. The library and other physical resource provision would be assessed, including computing hardware, general and subject-specific software availability, the currency of all resources, and the suitability of staff and teaching accommodation in relation to the teaching and learning strategy and the provision of support for students. The information collected would include direct observation of physical resources, internal review documents, minutes of meetings, equipment lists, library stocks, staff curricula vitae, external reports and staff development documents.

Student progression and support
The accreditation process will normally evaluate the impact of strategies for the recruitment, selection, intake numbers, admission, and support and guidance of students, to enable them to complete the programme. The process will also determine whether the institution is following its own policy in this area: for example, are there anomalies in selection? Accreditation will also typically examine how academic tutorial support is delivered and whether staff are enabled, by training, to provide this support. The process will also examine the quality of written information, and the extent to which it is consistent with the student entry characteristics and the overall goals of the programme. The reviewers will usually look at progression within programmes in some detail, including non-completion rates. This may include the scrutiny of institutional documentation and statistical data on applications, admission, progression and completion; learning and teaching support; course and student handbooks; student evaluation of admission, induction and tutorial support; and discussions with staff and students. Typical major issues are likely to include the adequacy of the scope of attributes (academic vs professional) possessed by applicants or recruits, and the degree to which clinical associate staff are prepared for their roles as tutors/examiners. Additionally, there may be specific issues attached to the delivery of distance education: for example, is there sufficient collaborative faculty effort to support isolated students? Are there issues of communication between students, tutors, staff and support services for such students? Are there sufficient resources for accessing course information, content and references? How does the school ensure that students have ready access to all material, resources and course policies, to ensure compliance and progression through the course? Other matters of interest may be:
- financial aid, resources and counselling
- health services and personal counselling
- academic and career counselling

Professional Attributes/Standards
The extent to which this element of accreditation is explicit depends on the model that is used, as discussed in Chapter 3. The GMC, for example, has the benefit of a definition of clinical professionalism that drives both accreditation and regulation. This is also the case with many other health professional peak bodies, whose standards describe performance benchmarks for their professionals to reach on qualification and within their professional careers. Such standards are often called 'Competency-Based Standards'. They usually outline the minimum requirements of that profession, incorporating professional, clinical and academic components, and provide guidance to course designers to ensure that the course will meet accreditation expectations. These standards may also:
- provide benchmarks for the appropriately licensed practitioner
- provide a framework for a career structure
- support registration and licensing issues
- provide standards for overseas applicants who are seeking to work in that country
- provide a framework for professional and community expectations
Further elaboration on standards is provided in Chapter 2.


6. Implications of Distance Learning
Much of the accreditation process that has been discussed applies to courses where the bulk of the education is undertaken face-to-face, either in an institutional or a clinical setting. Courses that have traditionally only been taught in this fashion are now including teaching elements incorporating internet-based or distance education. Distance education can be defined as a structured educational process in which there is a spatial and/or temporal distance between teachers and learners. This allows learning to take place at the student's convenience, which is especially valuable in post-graduate education, where the student is generally working. This is the situation with ultrasound training in Australia, and to a lesser extent overseas. In Australia, some courses offer full distance education, with no face-to-face teaching, learning or interaction with students; interaction in these courses takes place via email, telephone conversation and discussion pages on the web-based learning facility. Some courses offer the bulk of the course in this fashion, but have workshops that provide the opportunity for direct tutor/student discussion and learning. Distance education has two distinct methods:
1. Synchronous, whereby students receive instruction at a set time. This may include on-line discussion and teaching where all students and teachers interact at that time.
2. Asynchronous, whereby students complete course content at home in their own time.
A combination of the two methods is also common, whereby students complete course content independently and in their own time, but gather collectively to discuss content at periods throughout the course; this is often termed "Open Learning".


A recent meta-analysis of the effectiveness of distance education in allied health science programs in the USA found that working professional students significantly outperformed graduate and undergraduate students. It also found that open learning and synchronous instruction were the most effective methods of instruction in distance education. A further study, though, found that asynchronous instruction enhanced collaboration and conversation between students, as opportunities for participation became more equal and democratic in the online format. Some may think that distance education is a more cost-effective method of delivering education, but for distance teaching and learning to be effective, the instructional design components of the course must be well implemented. The costs of production of distance education tools can often be high, but the advantage is that they are more easily updated and adapted to student and course needs than traditional printed course materials (96, 97). Effective teachers in distance education facilitate a course, are engaging, and use their communication skills in a consistent and timely manner, so that students do not feel as though they are alone in the course. Poor communication, delayed feedback and poor teacher involvement in discussions may leave students feeling isolated and lead to ineffective teaching and learning (98).


A recent study of students undertaking on-line/distance education at higher institutions in the USA found that seven key items emerged as requirements for effective on-line teaching:
- adapting to student needs
- using meaningful examples to help clarify an issue
- motivating the students to do their best
- facilitating the course effectively
- delivering a valuable course
- communicating effectively (teacher/student and student/student)
- showing concern for student learning (98).

Many articles and papers have been written on the subject of distance education, and it does seem to polarise educators. The comparison of the effectiveness and outcomes of on-campus education versus remote/distance education will likely be argued for many years.

Distance education, though, has been promoted as the solution to various challenges: widening participation and greater access, learning for continuing professional development, resource reductions and internationalisation. Accreditation bodies try to ensure that courses offered by distance education reflect academic standards similar to those of equivalent campus-based programs.



Distance education provides the following attributes:
- Access: students far from the higher educational institution may participate in the course. This provides more choice in course selection than only those courses that can be attended in person, and also increases the competitiveness of the institution.
- Learning flexibility: a student can learn when and where they want to and, to some extent, at their own pace. They can stop when they are tired or bored.
- Privacy: students can make mistakes or ask questions with a greater degree of privacy.
- Teaching flexibility: teachers can be absent from the higher education institution for periods of time but still reach students on-line. Teachers can also be drawn from distant areas, allowing greater flexibility of teacher choice, and can communicate with students and mark assessments outside normal business hours.
- Student involvement can be tracked: this can be done better using on-line programs than in face-to-face education. Students who are not contributing can then easily be identified and contacted (99).

Distance learning provides greater accessibility and individual instruction; it also requires students to be independent learners, self-motivated and able to work at a pace that keeps up with the workload. Those who do not finish these courses commonly procrastinate more, feel isolated and have poor time management skills (70, 71, 98). Some concerns have been raised that courses should have processes in place to ensure that only students who can work this way, and will therefore benefit from this education method, are enrolled. Students need to be aware of the reliance on internet technology in these courses, and be prepared for access issues, costs, breakdowns, delays and the potential volume of email involved in discussion groups, teacher contact, work submission and examinations. Should courses also provide mechanisms to monitor that students are participating sufficiently? Students entering these courses perhaps need to be well-informed about the process of distance education and their own ability to succeed without face-to-face contact with classroom instructors. They should also be aware of the technical aptitude and writing skills required to participate in such a course (70, 71).

Distance education should respond to expectations of quality comparable to those of traditional site-based education, and attention to student achievement is integral to this (72). Accreditation of distance education courses adds further issues to the accreditation process. Whilst distance education programs and institutions must consider all or most of the same issues as face-to-face programs (such as course aims, outcomes, content and learning activities), the following items, although many are common to both forms of education, are crucial for the provision of a quality distance education program:
- the delivery mode and instructional strategies are appropriate for a course utilising distance education delivery
- reference material appropriate for the course is provided to the learner
- qualified individuals are involved in the design and planning of the course
- course design and content follow a logical progression
- interactivity is promoted in the program by specific learning strategies and is evaluated
- instructors/teachers have sufficient time to devote to teaching
- timely feedback and learner support processes/policies are in place
- appropriate equipment and software are specified
- learning environments are provided to support the distance learning approaches utilised
- a complete syllabus or student manual is provided in written form (whether on paper or digital), with accurate and clearly stated information on admission, progression, completion criteria, dismissal, late-submission policies etc.
- students have adequate access to services appropriate to support their learning
- orientation programs are provided for students if required, e.g. in using the distance education software or searching the library via the internet
- course evaluation additionally includes staff and student satisfaction ratings of the distance learning process
- objective criteria have been established for evaluating effectiveness in teaching distance learning courses
- honesty in advertising and promotional material
- the educational provider demonstrates ongoing support of distance learning, reflected by financial and technical commitments to the continuation of the course
- written evidence of relationships of the course provider with outside organisations that is consistent with the mission and objectives of the school or educational program (67, 68).

The Distance Education Training Council (DETC) in the USA accredits courses through its accrediting commission, specifically for educational programs provided via distance learning. This allows such courses a single source of national accreditation recognition, even where programs are accessed by students from many countries. Alternative accreditation programs are also available, but we note that small universities, such as the University of Southern Queensland, are undertaking this accreditation program to signify to their students in Australia and in other countries that their course content and standards of delivery are comparable with "the best in the world" (69). The LCME in the US will not accredit a course whose curriculum is substantially or completely taught by distance learning, as it will not meet their current standards requiring direct interaction between faculty members and students; these standards are therefore not adapted to distance learning formats. The LCME requires learning to be a collaborative effort, so isolated student learning does not comply. The QAA in the UK has been accrediting courses with distance education components for some years and has standards that higher learning institutions must meet to attain course accreditation. The QAA thus appears more flexible than the LCME in its approach to distance education. Such flexibility may be required by other course accreditation bodies to enable appropriate distance education courses to attain accreditation. The ASAR has this flexibility in its ultrasound course accreditation, as all ultrasound courses within Australia are predominantly of the distance education format. In addition, many potential student sonographers in Australia are graduates of degree programs that include some distance-type teaching and learning; examples include discussion groups and on-line assignment submission. Newer graduates of undergraduate courses have also developed high skill levels with electronic media. Flexibility, though, still requires basic standards to be met to ensure course quality and to reassure students and the profession as to the integrity and standards of these courses.
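The earlier point that on-line programs make student involvement easy to track, so that non-contributing students can be identified and contacted, can be sketched in a few lines of code. The sketch below is a hypothetical illustration only: the `ActivityRecord` structure, the `flag_inactive_students` function and the 14-day threshold are assumptions made for the example, not features of any actual learning management system.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class ActivityRecord:
    """One logged student action (forum post, submission, login)."""
    student_id: str
    timestamp: datetime

def flag_inactive_students(records, enrolled_ids, now, max_gap_days=14):
    """Return the set of enrolled students with no logged activity in the
    last `max_gap_days` days -- candidates for follow-up contact."""
    cutoff = now - timedelta(days=max_gap_days)
    recently_active = {r.student_id for r in records if r.timestamp >= cutoff}
    return set(enrolled_ids) - recently_active

# Example: one student active yesterday, one silent for a month.
now = datetime(2008, 5, 30)
log = [
    ActivityRecord("s001", datetime(2008, 5, 29)),
    ActivityRecord("s002", datetime(2008, 4, 28)),
]
print(flag_inactive_students(log, ["s001", "s002"], now))  # {'s002'}
```

In practice such a report would be generated from the learning platform's own activity logs; the point is only that, in the on-line format, non-participation leaves a record and is straightforward to detect automatically.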


7. Accreditation Activities

1. Standards and training for the accreditation process itself
Those involved in the accreditation review process should be knowledgeable about the professional/competency-based standards of their profession, its education processes and its clinical practice. In addition, knowledge of the accreditation system's processes, requirements and expected behaviours is essential.


Accreditation bodies often run training workshops for new and experienced personnel involved in the accreditation review process, to update them on standards, expectations, appropriate conduct etc. (86, 87, 89). Training of accreditors could include:
- introduction to the accreditation program
- overview of standards and other measurable program characteristics
- case studies and group exercises using the standards
- overview of the review process and an introduction to the tools used, e.g. checklists
- interviewing, consultation and critical thinking skills
- scoring of how the program meets the standards, and documentation review
- reporting mechanisms to the accrediting body and to the program being accredited
- aspects of professional behaviour and ethical conduct (2).

2. Tools for the conduct of accreditation
Review of the many accreditation programs previously discussed highlights several useful tools that encourage complete and appropriate compliance by educational institutions with the process and, in addition, ease and equity in the assessment of submitted information, site visits and outcomes. These include:
- an explanation of the accreditation process, in summary and for each element: time frames from start to outcome, reporting/decisions, processes for unsatisfactory submissions/reapplication, and fees and cost responsibilities (7, 8, 86, 88)
- clearly stated standards, with guidelines, explanatory statements or examples to elaborate on what is required (7, 86, 88)
- assignment of an accreditation team member to advise the educational institution undertaking the accreditation process (8, 42)
- application forms specifying the institutional information required to be completed; this ensures uniformity of the information submitted, which should make the review of this information easier from one institution to another if presented in a similar format for the assessors
- checklists for educational institutions, to ensure that their application contains all the information required (68)
- checklists for the accreditors; these should allow them to easily review the information provided and to readily recognise compliance or non-compliance with the required standards. Checklists would also help new assessors ensure that no key areas are missed, especially in a complex or lengthy application (17, 81)
- assessor selection processes, conflict of interest and confidentiality information, and processes for complaints (42, 86, 88)
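The accreditor checklists described above can be represented very simply in software. The sketch below is hypothetical: the standard names, the three-level rating scale and the `review_summary` helper are invented for illustration and are not drawn from any accrediting body's actual tooling. It shows how a checklist both structures the review and makes non-compliance immediately visible.

```python
from enum import Enum

class Rating(Enum):
    MET = "met"
    PARTIALLY_MET = "partially met"
    NOT_MET = "not met"

def review_summary(checklist):
    """Given {standard: Rating}, return the standards that are not fully
    met, so assessors can see at a glance where follow-up is required."""
    return {std: r.value for std, r in checklist.items() if r is not Rating.MET}

# Hypothetical example: scoring three standards from an application review.
checklist = {
    "Course aims clearly stated": Rating.MET,
    "Qualified teaching staff listed with CVs": Rating.PARTIALLY_MET,
    "Student support processes documented": Rating.NOT_MET,
}
print(review_summary(checklist))
```

A shared structure of this kind also helps new assessors, since every key area appears explicitly and cannot be overlooked in a lengthy application.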

3. Probity and deportment of assessors
Assessors should be provided with information to allow them to understand the requirements of their role, especially where site visits are conducted. They should:
- be professional in behaviour and appearance
- maintain confidentiality about all issues during the visit, including written and verbal statements
- be aware of the standards and current issues
- take care and exercise objectivity and fairness in their interactions with the educational institution, staff, teachers and other team members
- ask open-ended and focussed questions appropriately, avoiding leading questions
- concentrate on major rather than minor issues
- listen attentively and actively, with an open mind
- be mindful of the accrediting organisation they represent; they should not promote their own views or those of another organisation
- respect the agenda of the visits and of all meetings, managing time accordingly
- be aware that their actions and reports form the basis of the accreditation outcome for that educational institution
- support the aims of the accreditation program
- be supportive of change and innovation
- ensure that best practice and appropriate equal opportunity legislation are followed (8, 90).

In addition, the GMC recommends that assessors have the following personal competencies, which could well apply to all assessment program individuals:
- the ability to absorb, analyse and interpret disparate and complex information
- the ability to make reliable and objective judgements and convey these effectively
- the ability to think creatively and thrive in an environment of constant discussion and cross-fertilisation of ideas
- the ability to present arguments orally in a fluent and persuasive manner to other team members and the accreditation organisation
- excellent interpersonal skills to maintain the working relationships required
- the ability to work effectively as part of a team
- knowledge, understanding and appreciation of the importance of promoting equality and valuing diversity, and its application to work
- the ability to understand complex organisational structures quickly.

CAAHEP also asks that accreditors on site visits demonstrate maturity, objectivity, diplomacy and dedication, and be cooperative, flexible and analytical (8). Those who take on team leader roles should have these additional competencies:
- knowledge and understanding of the assessment program, process and standards
- an understanding of the interplay between education and the workplace environment
- an understanding of medical/allied health management in that country
- a knowledge and understanding of quality assurance systems
- a knowledge and understanding of the design and implementation of assessment systems
- experience of implementation of a curriculum (8, 90).

Whilst this is quite a comprehensive list, it appears important to consider these elements to ensure that assessment teams and members are suitably selected and able to carry out their roles to an appropriate standard.

4. Conflicts of interest and confidentiality in accreditation processes and activities

Conflict of interest
Policies to prevent conflicts of interest, whether actual or perceived, need to be developed to ensure that the accreditation process remains reliable and transparent to all stakeholders. All directors, employees and site visitors should hold all documents, correspondence, discussions, recommendations, outcomes and actions related to any accreditation process as confidential. Most accreditation bodies reviewed in this document require that a written confidentiality agreement be completed. The LCME has dealt with this issue comprehensively. All involved must agree in writing that no LCME representative shall participate in site visits, discussions or voting involving a program:
- where that representative or an immediate family member is connected (as a student, faculty member, contractor or employee within the past 5 years)
- where the representative or an immediate family member has interviewed for employment in the past 2 years
- which is in the same state, or close enough to be considered a competing institution
- that is part of the university system where the representative is employed
- where the representative or immediate family have any contractual, financial, political, professional or other interest that may conflict with the interests of the LCME
- where the representative believes there may be a conflict due to involvement with another accreditation program or close personal relationships, or the program manager believes that the representative could be unfairly prejudicial.

In addition, no LCME member shall do paid or unpaid external consultation on LCME accreditation matters for any program or institution subject to LCME accreditation unless authorised by the LCME. This applies while an LCME member and for 3 years after completion of their service, unless authorised by the LCME (86). The AMC has similar policies (although less specific and itemised) on conflict of interest, whereby members of AMC committees, including the Specialist Education Accreditation Committee, are expected to make decisions responsibly and to apply the standards in a consistent and impartial way. These policies are clearly stated in the "Accreditation Framework" section of the AMC accreditation guidelines and are freely available on the AMC website. The AMC requires that committee members complete notices of interest and update these regularly. These include any personal or professional interests which might, or might be perceived to, influence their capacity to undertake impartially their roles as members of the committee. In declaring the membership of an assessment committee or team, the AMC will disclose any and all declared interests of those committee members. Such a conflict may include providing recent informal advice to a training/educational organisation outside the AMC accreditation process. The Australian Universities Quality Agency (AUQA) also has a comprehensive conflict of interest policy. It notes that, due to extensive interaction across the higher education sector, board and committee members may experience conflicts of interest between their roles with AUQA and their other activities. This policy endeavours to deal with such issues before and as they arise, in regard to personal, professional and ideological conflicts, especially those involving an audit process. An ideological conflict, for example, would be one where the assessor/auditor lacks sympathy with the style, type or ethos of an educational institution (87).

All members of AUQA committees and panels must then declare any such matters.

Confidentiality
In any accreditation process, considerable information is required from the educational institution. This may include sensitive information, such as commercial-in-confidence material, strategic plans and appraisals of strengths and weaknesses. Confidentiality within the accrediting organisation, its committees, assessment teams and individuals is paramount; information provided in the assessment process should be used only for the purpose for which it was obtained. Any published material within the AMC accreditation process, such as reports and surveys, is usually de-identified. If the AMC wishes to publish material which identifies an individual educational body or training organisation, it will seek their permission. Some material, though, is of a public nature and as such will identify the relevant organisation: for example, current accreditation status, the date of the next accreditation assessment, annual summaries, reports (public documents) and press statements following accreditation decisions. The GMC in the UK provides more detailed information linked to each educational institution, making the accreditation report of each educational/training organisation freely available to any member of the public on its website. However, specific details of sensitive items are not published. In the US, the LCME discloses to the public only the accreditation status of the school. A copy of the survey report and the accreditation decision are conveyed to the school, but these are held confidential by the LCME; the final report can be made public by the medical school at its own discretion (17).

Most accreditation bodies have policies and practices in place both to ensure appropriate behaviour of assessors, and to guarantee the integrity of the accreditation process. Moreover, most bodies strive to make this commitment highly visible and open.


CASE STUDY - Report on UK Brighton and Sussex Medical School 2006/07


This case study summarises a final accreditation report by the GMC on a new undergraduate medical school education program. As a new school, it undergoes a yearly accreditation review until the full course is offered. The case study is included in this document to highlight the inclusions of such a publicly available report.

Inclusions:
1. Introduction
- outline of the GMC's role in medical education
- introduction of the report and the school
- the accreditation (QABME) team members
- program of visits and activities undertaken
2. The Report
(a) Summary of key findings
- what could not be reviewed
- items highlighted at the last accreditation event and their progress
- new items that will require review
- assessment and progress reports
- recommendations
- areas of innovation and good practice (e.g. growing cohesiveness and effective working relationships of the core academic management team, online search tools)
(b) Curricular outcomes, content and structure – discussion of specific aspects of the curriculum (elements reviewed from the self-study document and/or directly observed), compared with QABME requirements and the GMC standards in "Tomorrow's Doctors" (34). Specifically:
- content: the scientific basis of practice; treatment; clinical and practical skills; the health of the public (e.g. "We observed a Year 3 lecture in Neurobiology, focussed on the pathophysiology and pharmacology of addiction, and discussed with staff the integration of basic sciences through the curriculum. The lecture content built effectively on the knowledge gained in Years 1 and 2 and was linked to 'Tomorrow's Doctors'. However the learning objectives were not explicit")
- structure (e.g. "The school constructed a curriculum map which details the whole programme and tracks the outcomes of 'Tomorrow's Doctors' and relates teaching outcomes to assessments, following a requirement of the 2005/2006 report")

Delivering the curriculum
- supervisory structures
- teaching and learning
- staff development
- learning resources and facilities
- student selection
- student support, guidance and feedback
(e.g. "We were satisfied that there were a range of teaching and learning methods used in the teaching sessions observed this year. The standard of teaching observed was satisfactory, and the resources and equipment for teaching sessions generally good")

(c) Assessing student performance and competence (e.g. “We had some discussion with the School about the logbook system for monitoring student attendance which is manual and could be open to abuse. The School is reviewing this and would like to establish a less cumbersome method. We look forward to receiving an update on this in 2007/2008”)

(d) Student health and conduct (e.g. “We will revisit this area in 2007/08 to confirm that processes and procedures are developing effectively and to discuss ongoing cases”)

In addition to this report, the GMC school report includes a document from the Brighton and Sussex Medical School in response to the final report. Processes put in place in response to recommendations and comments are included, as are the areas the school will be working to progress in the coming year. This is a new medical school and as such is reviewed each year until all year levels have been accredited. This report, and those of all other accredited medical schools in the UK, is available on the GMC website.


8. Implications and Conclusions for Best Practice.

Accreditation programs have numerous goals that generally encompass providing evidence of quality assurance for individuals and the wider community, and may include regulation. Many of the accreditation processes we have reviewed in this document are of a rules-based or summative nature, whereby standards must be met, or substantially met, to attain the goal of accreditation. These methods are generally quite rigid and take the form of an audit against professionally accepted standards. A newer trend in accreditation is a formative process, one that generates improvements in the educational entity being accredited. A framework for accreditation is provided with an underlying set of principles. This is often used in models of business excellence, but elements of it are being included in traditional rules-based accreditation programs. Standards may then have two levels: the traditional standard, with another for quality improvement. The GMC accreditation process is an example of accreditation strongly based on standards and rules which nonetheless accepts, encourages and supports innovation and evolving strategies of education. Course improvement then becomes more than just getting through the process; it becomes continual review and improvement. This is a potentially important element of health practitioner courses, where accreditation is essentially designed to ensure public safety in light of new concepts and technology.

To ensure best practice, an accreditation program should have the following components:
- mission statements and program philosophy
- infrastructure and authority
- published standards: relevant, objective and measurable
- management of field operations (assessors, training, protocols, logistics, analysis)
- framework for decision making (to ensure a fair, valid and credible process)
- accreditation database (providing data on those undertaking the process)
- program sustainability (financial viability, integrity within the profession) (2)

Review of ASAR Program Accreditation Guidelines (PAG)

Comparison to other accreditation formats:
- The ASAR standards for accreditation include more descriptive information on specific ultrasound-related skills and knowledge than the medical accreditation standards do. They are, however, similar to those of CASE (UK ultrasound course accreditation) and CAAHEP (US ultrasound course accreditation).
- Time frames for the accreditation process are comparable to other models within Australia and internationally.
- No site visits are undertaken, but that is likely a reflection of the way ultrasound courses are provided in Australasia. As ultrasound training is predominantly undertaken by distance education, issues related to this should be specifically detailed in the accreditation documents, i.e. access, process and outcomes (student support).
- No accreditation or visits are undertaken of sites where clinical training occurs, as per the UK and US ultrasound course accreditation processes, and no supervisor accreditation/review is undertaken to ensure suitability, as is apparently performed in the UK and US.
- The ASAR PAG does not seem to require information about the specific tutors and teachers (CVs are required by most accreditation bodies, including CASE and CAAHEP, with any changes to be reported as soon as possible).
- ASAR documents lack any advice/protocols for accreditors, which the US and UK sonographer education accreditation bodies provide. Such documentation allows institutions to be aware of the protocols, supports a perception of fairness, sets minimum standards for accreditors and their behaviour, and is instructive to all; it shows an openness of the accreditation process. For example, CASE offers very specific guidance to help assessment panel members appropriately review course documentation, staffing and physical resources. The CASE documentation could provide a helpful guide to assessors in the Australian context.
- Website availability of the guidelines could be improved with a more descriptive title instead of simply "PAG" for those unfamiliar with the term. Further policies related to the accreditation process could be included, to advise those undergoing the process of issues of equity and appropriate behaviour.
- The ASAR does not make public via its website reports on the outcomes of those who have undertaken the accreditation process. In the UK, USA and Australia, successfully accredited sonographer and radiographer educational programs are listed and available to the public; the USA also lists those who have withdrawn or been denied accreditation, and those with initial or pending accreditation. It may be that, as only a small number of courses are accredited in Australia, this situation has not yet arisen. Medical school accreditation reports in the UK are available to the public but are kept confidential in the USA (they may be subject to freedom of information legislation if government sponsored). In Australia the availability of this information varies according to the accrediting body. Ultrasound course accreditation by CASE in the UK ends with a registry entry for courses passing accreditation standards, and a summary of all accredited courses is included; little specific course accreditation information is made public.
- No examples of accreditation reports of educational institutions were available from the ASAR for review and comment. The content of such reviews, though, should be clear and concise and should encourage improvement while upholding the standards and accreditation requirements.
- The ASAR course accreditation process states that there should be appropriate supervision of all ultrasound students, but there is no process in place to ensure that the supervision provided is adequate, appropriate and effective. There is evidence that supervision has a positive effect on patient outcomes and that lack of supervision is harmful for patients. The quality of the supervisory relationship between supervisor and trainee is also probably the single most important factor in effective supervision (93, 94).


Suggestions for consideration in accreditation process review
1. Ongoing review/updating of both the accreditation and competency standards used to determine minimum levels of practice. As these standards are the basis of the accreditation program, the ASAR should be certain of their ongoing applicability. Clear statements of these standards, with guidelines/elaboration as required, should be provided. The extent to which the professional body and educators are involved in this review needs to be considered, to ensure a wide-ranging review that includes new trends in competency assessment. Distance education issues should also be further considered and included in the accreditation standards, to ensure equitable and appropriate provision of resources and facilities to all students. Competency-based standards are generally reviewed every five years; some bodies, e.g. the AMC, have rolling updating processes targeting different parts of the process each year, with a major overhaul periodically.
2. Consider future likely competencies of advanced practitioner models (such as the health professional practitioners in the UK and US).
3. Streamlining of documentation. Use of a proforma and checklists for the self-study document completed by the educational institutions may help ensure all the information the ASAR requires is clearly listed, ease completion for institutions, and improve the review of the documentation for those assessing the application.
4. Checklists for accreditors may ease review of the self-study document, ensuring that all essential items are appropriately reviewed. This would also ensure that those new to the accreditation process are aware of all aspects requiring review.
5. Policies for confidentiality and conflict of interest should be developed and made publicly available to all current and potential seekers of accreditation.
6. Training of the accreditors/assessors should be ongoing, to ensure new and existing members are kept informed of the standards, the process, trends in education and competency assessment, and appropriate behaviour as an assessor.
7. Publication of accreditation outcomes. The amount of information to be made publicly available needs to be considered, to ensure the public, the profession and those wishing to enter the profession are aware of the current accreditation status of the ultrasound courses provided by individual educational institutions. Legal advice may be required as to the amount of information that can be provided if written reports are made publicly available. Alternatively, is it sufficient to list only those courses that have successfully achieved accreditation?
8. If site visits are to be considered (for those sites with a component of on-site education), guidelines for site visitors should be developed.
9. Consideration of clinical supervisor inclusion in the accreditation process; alternatively, standards could be developed to ensure that educational institutions have processes in place for the appropriate involvement, standard and review of nominated clinical supervisors.
10. Maintaining good relationships with professional bodies, to ensure ongoing availability of assessors if the accreditation program is to continue and/or expand.
11. As the role of the ASAR is also to accredit sonographers, consideration should be given to expanding this role to encompass ongoing competency assessment requirements for continuing accreditation, and processes for the education, supervision and de-registration of poorly performing or inappropriate practitioners. This would ensure that public safety is served both by ultrasound educational provision and by ongoing competence assessment and accreditation processes, as occurs in the medical and some allied health professions.


References
1. Council of Australian Governments, Communique, 26 March 2008, accessed May 2008.
2. Rooney, A.L., van Ostenberg, P.R. Licensure, Accreditation and Certification: Approaches to Health Services Quality. Quality Assurance Project, Bethesda, USA, 1999.
3. Selden, W., Porter, H. Accreditation: its purposes and uses. The Council on Postsecondary Accreditation, Washington D.C., USA, 1977.
4. WHO/WFME Promotion of Accreditation of Basic Medical Education, November 2005.
5. Harman, G., Meek, V. "Repositioning Quality Assurance and Accreditation in Australian Higher Education". DETYA no. 6474.HERC 00A, May 2000.
6. Batalden, P., Leach, D., Swing, S., Dreyfus, H., Dreyfus, S. General Competencies and Accreditation in Graduate Medical Education. Health Affairs 2002, Vol. 21, No. 5, pp. 103-111.
7. Standards for Accreditation of Medical Education Programs Leading to the M.D. Degree, LCME, accessed May 2008.
8. CAAHEP Accreditation Manual & Standards and Guidelines, August 2006, accessed May 2008.
9. SDMS Scope of Practice. JDMS 16:206-211, Sept/Oct 2000.
10. Competency Based Standards for the Accredited Practitioner, November 2005, accessed May 2008.
11. Competency Standards – Entry Level and Overview of the Competency-Based Standards, ASUM; original document 1992, revised 2000.
12. GMC "Duties of a Doctor", available at www.gmc-uk.org/education/postgraduate/index.asp
13. ASA Code of Conduct, ratified 2006.
14. AIUM.
15. Glassing, A. JRC-DMS Newsletter, Issue 9, June 2007, accessed May 2008.
16. Dowton, S.B., Stokes, M., Rawstron, E., Pogson, P., Brown, M. Postgraduate medical education: rethinking and integrating a complex landscape. Med J Aust 2005; Vol. 182, No. 4, pp. 177-180.
17. Accessed May 2008.
18. www.fsmb.org/usmle_eliinitial.html, accessed May 2008.
19. Accessed May 2008.
20. Accessed May 2008.
21. Accessed May 2008.
22. Accessed May 2008.
23. Accessed May 2008.

24. Chouinard, N. (current council member, SDMS). Email communication, 31 May 2008.
25. Haydon, D. (current council member/past-president, SDMS). Email communication, 31 May 2008.
26. Letters to the editor. JDMS 19:392-395, Nov/Dec 2003.
27. Kuntz, K. Focussing on the Issues. JDMS 20:144-146, Mar/April 2004.
28.
29. Accessed May 2008.
30. Accessed May 2008.
31.
32.
34.
35.
36. GMC/PMETB Standards for Training for the Foundation Programme, accessed May 2008.
37. Grant, J.R. Changing postgraduate medical education: a commentary from the United Kingdom. MJA Vol. 186, No. 7, April 2007.
38. www.gmc-uk.org/education/postgraduate/new_doctor.asp
39. "The New Doctor", www.gmc-uk.org/education/postgraduate/index.asp
40. _apply/registration_factsheet.asp
41. Hepworth, A. (CASE Co-ordinator, BMUS). Email communication, [email protected], 12th May 2008.
42. CASE Validation and Accreditation Handbook, March 2000.
43.
44.
45. CASE News, April 2008.
46.
47.
48.
49.
50. AMC Assessment and Accreditation of Medical Schools: Standards and Procedures, 2002 & 2006.
51. AMC "Preparing an Accreditation Submission – A Guide to Medical Schools", November 2007.
52.
53. Accessed May 2008.

54. Accessed May/June 2008.
55. RANZCR Quality Program Guidelines, accessed June 2008.
56. Accessed May/June 2008.
57.
58. Competency Based Standards for the Accredited Practitioner, November 2005, accessed May 2008.
59.
60. Productivity Commission's Australian Health Workforce Research Report of 2005.
61. Bulletin, Volume 2, June 2008, Medical Practitioners Board of Victoria, accessed June 2008.
62. n_australia_study/
63. Accessed June 2008.
64. Functions and Structure of a Medical School: Standards for Accreditation of Medical Education Programs Leading to the M.D. Degree, LCME, June 2007.
65. Fisher, L., Levene, C. Planning a Professional Curriculum. University of Calgary Press, Canada, 1989.
66. Churchman, R., Woodhouse, D. The influence of professional and statutory bodies on professional schools within New Zealand tertiary institutions. Quality in Higher Education, Vol. 5, No. 3, 1999.
67. Meyer, R. iDEC Distance Education Standards, 2001, accessed June 2008.
68. DETC Accreditation Overview, 2007, accessed June 2008.
69. Accessed June 2008.
70. Troy, S., DiGiacinto, D., Elledge, B. Advancement of Higher Education: is Distance Education the Answer? JDMS 20:102-111, March/April 2004.
71. Hancock, J. Focus for Instructional Design: Considering the needs of a distance learner. JDMS 20:208-211, May/June 2004.
72. Distance Learning: Academic and Political Challenges for Higher Education. Council for Higher Education Accreditation (CHEA), accessed June 2008.
73. Accreditation issues related to distance learning: the perspective of the Liaison Committee on Medical Education, accessed June 2008.
74. Newble, D. (ed.) Guidelines for the development of effective and efficient procedures for the assessment of clinical competence. In The Certification and Recertification of Doctors, pp. 69-91. Cambridge University Press, 1994.
75. Hays, R. (ed.) Methods of assessment in re-certification. In The Certification and Re-certification of Doctors: Issues in the Assessment of Clinical Competence. Cambridge University Press, 1994.
76. Jolly, B., Grant, J. (eds) The Good Assessment Guide. The Joint Centre for Medical Education (UK), 1997.


77. Jolly, B. The assessment of clinical skills: should architects be builders too? CMHSE Seminar, Monash University, 2004.
78. Accessed June 2008.
79. Miller, G.E. The assessment of clinical skills/competence/performance. Acad Med 1990; 65 (Supplement): S63-7.
80. p. 31, accessed 06/07/08.
81. Darzi Report. A High Quality Workforce: NHS Next Stage Review, led by Prof. the Lord Darzi, accessed July 2008.
82. Tooke Report. Aspiring to Excellence: Final Report of the Independent Inquiry into Modernising Medical Careers, led by Prof. John Tooke, accessed July 2008.
83. Newble, D. Medical Education in the New Millennium, pp. 131-142. Oxford University Press, 1998.
84. Crossley, J., Humphris, G., Jolly, B. Assessing health professionals. Medical Education 36, pp. 800-804.
85. Norcini, J. Work based assessment. BMJ Vol. 326, April 2003, pp. 753-755.
86. Standards for Accreditation of Medical Education Programs Leading to the M.D. Degree, LCME: Other policies and procedures, pp. 22-23, accessed May 2008.
87. Australian Universities Quality Agency (AUQA) Policy 006: Conflicts of Interest.
88. Australian Medical Council. Assessment and Accreditation of Medical Schools: Standards and Procedures 2002, Section C: Accreditation Framework, pp. 6-8, accessed June 2008.
89. McAvoy, P., McCrorie, P., Jolly, B. et al. Training the assessors for the General Medical Council's Performance Procedures. Med Educ 2001; 35 (Suppl. 1), pp. 29-35.
90. GMC QABME Education QA Guidance.
91. Quality Assurance of Basic Medical Education: Report on Brighton and Sussex Medical School, accessed June 2008.
92. Haywood, L. Principles-based accreditation: the way forward? MJA 2007; Vol. 186, No. 7.
93. Kilminster, S., Jolly, B. Effective supervision in clinical practice settings. Med Educ 2000; 34:827-840.
94. Koczwara, B., Tattersall, M., et al. Achieving equal standards in medical student education: is a national exit examination the answer? MJA 2005; 182: pp. 228-230.
95. Assurance of Quality of Teaching Delivered Through Distributed and Distance Learning. The Handbook for Institutional Audit 2006, Quality Assurance Agency, UK, accessed August 2008.
96. McGhee, P. Distance learning. In The Academic Quality Handbook: Assuring and Enhancing Learning in Higher Education, Chapter 16, pp. 231-250. Kogan Page Ltd, London, 2003.
97. Williams, S.L. The Effectiveness of Distance Education in Allied Health Science Programs: A Meta-Analysis of Outcomes. American Journal of Distance Education 2006, 20(3), pp. 127-141.
98. Young, S. Student Views of Effective Online Teaching in Higher Education. American Journal of Distance Education 2006, 20(2), pp. 65-77.

99. Race, P. 500 Tips for Open and Online Learning, 2nd Ed. RoutledgeFalmer, New York, 2005.
100. Dreyfus, H.L., Dreyfus, S.E. Mind over Machine. The Free Press, New York, 1986.
101. Outline of characteristics in the novice-to-expert model. Monash University GDMU.
102. Baird, M. Facilitating learning from experience in the clinical setting. Monash University.
103. Rethans, J-J., Norcini, J., Barón-Maldonado, Blackmore, D., Jolly, B., LaDuca, T., Lew, S., Page, G., Southgate, L. The relationship between competence and performance: implications for assessing practice performance. Medical Education 2002; 36, pp. 901-909.
104. Lysaght, R., Altschuld, J. Beyond initial certification: the assessment and maintenance of competency in the professions. Evaluation and Program Planning 23 (2000), pp. 95-104, accessed June 2008.
105. Patel, V., Glaser, R., Arocha, J. Cognition and expertise: acquisition of medical competence. Clin Invest Med 2000; 23(4), pp. 256-260.

Further useful references for competency assessment:
1. Ram, P., van der Vleuten, C., Rethans, J.J., Schouten, B., Hobma, S., Grol, R. Assessment in general practice: the predictive value of written-knowledge tests and a multiple-station examination for actual medical performance in daily practice. Med Educ 1999;33:197-203.
2. Talbot, M. Monkey see, monkey do: a critique of the competency model in graduate medical education. Med Educ 2004; 38: 587-592.
3. Norcini, J.J. ABC of learning and teaching in medicine. BMJ Vol. 326, April 2003, pp. 753-755.
4. Norcini, J.J., Blank, L., Arnold, G., Kimball, H. The Mini-CEX (Clinical Evaluation Exercise): a preliminary investigation. Ann Intern Med 1995;123:795-799.
5. Norcini, J.J. The death of the long case? BMJ 2002; Vol. 324, pp. 408-409.
6. Boud, D. Assessment and learning: complementary or contradictory? In Knight, P., Brown, S. (eds) Assessment for Learning in Higher Education, pp. 35-48. Kogan Page Ltd, London, 1995.
7. Lake, F., Hamdorf, J. Teaching on the run tip 6: determining competence. MJA 2004; Vol. 181, No. 9, pp. 502-503.


Abbreviations / Acronyms
AAB – Approval and Accreditation Board (UK)
AAMC – Association of American Medical Colleges
ACGME – Accreditation Council for Graduate Medical Education (USA)
ACOG – American College of Obstetricians and Gynecologists
AIR – Australian Institute of Radiography
AIUM – American Institute of Ultrasound in Medicine
AMA – American Medical Association
AMC – Australian Medical Council
ARDMS – American Registry for Diagnostic Medical Sonography
ARRT – American Registry of Radiologic Technologists
ASA – Australian Sonographers Association
ASAR – Australasian Sonographers Accreditation Registry
ASE – American Society of Echocardiography
ASRT – American Society of Radiologic Technologists
ASUM – Australasian Society for Ultrasound in Medicine
AUQA – Australian Universities Quality Agency
BMUS – British Medical Ultrasound Society
BSE – British Society of Echocardiography
CAAHEP – Commission on Accreditation of Allied Health Education Programs (USA)
CARDUP – Canadian Registry of Diagnostic Ultrasound Professionals
CASE – Consortium for the Accreditation of Sonographic Education (UK)
CCI – Cardiovascular Credentialing International (USA)
COAG – Council of Australian Governments
DETC – Distance Education and Training Council (USA)
FSMB – Federation of State Medical Boards (USA)
GMC – General Medical Council (UK)
HPC – Health Professions Council (UK)
IPEM – Institute of Physics and Engineering in Medicine
JRC-CVT – Joint Review Committee on Education in Cardiovascular Technology (USA)
JRC-DMS – Joint Review Committee on Education in Diagnostic Medical Sonography (USA)
LCME – Liaison Committee on Medical Education (USA)
MRPB – Medical Radiations Practice Board (Vic)
NBME – National Board of Medical Examiners (USA)
NHS – National Health Service (UK)
NMC – Nursing and Midwifery Council (UK)
RANZCR – Royal Australian and New Zealand College of Radiologists
RCM – Royal College of Midwives
SCoR/SoR – Society and College of Radiographers / Society of Radiographers (UK)
SDMS – Society of Diagnostic Medical Sonography (USA)
SVT – Society of Vascular Technology of Great Britain and Ireland
SVU – Society for Vascular Ultrasound (USA)
UKAS – United Kingdom Association of Sonographers
USMLE – United States Medical Licensing Examination
VCAT – Victorian Civil and Administrative Tribunal


Appendix 1.

Methods of Assessment, as likely applicable to sonographer education

The aim of assessment is to provide an accurate picture of how a person compares to a set of standards. These standards can be defined by pre-set criteria (criterion referenced) or by the performance of others (norm referenced). In assessments of professionals, international best practice is moving towards totally criterion referenced assessments for licensure. Assessment should highlight the strengths and weaknesses of trainees through comprehensiveness and a high degree of objectivity. Assessment methods should have:
- reliability: they should be reproducible (e.g. the same score with different markers)
- content validity: they should demonstrably contain an appropriate and unbiased sample of the clinical and intellectual tasks they are supposed to measure
- face validity: they appear to measure what they are supposed to measure
- construct validity: they are demonstrably appropriate to the professional traits being tested
- criterion/predictive validity: scores on the assessments correlate with performance on gold standards or in real-life situations

The following discussion outlines some assessment methods and describes their strengths and limitations, taken from "The Good Assessment Guide" (76).
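The reliability described above (the same score with different markers) can be quantified. As an illustrative sketch, not drawn from the source, the following Python computes Cohen's kappa, a standard measure of agreement between two markers that corrects for chance agreement; the marker decisions shown are hypothetical.

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: agreement between two raters, corrected for chance."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    # Observed proportion of identical judgements
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement expected from each rater's marginal frequencies
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    expected = sum(counts_a[c] * counts_b[c] for c in counts_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical pass/fail decisions by two markers on ten candidates
marker_1 = ["pass", "pass", "fail", "pass", "fail",
            "pass", "pass", "fail", "pass", "pass"]
marker_2 = ["pass", "pass", "fail", "fail", "fail",
            "pass", "pass", "fail", "pass", "fail"]

print(round(cohens_kappa(marker_1, marker_2), 2))  # → 0.6
```

A kappa near 1 indicates markers are effectively interchangeable; values well below that signal the marker variation flagged as a disadvantage of several methods below.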

1. Multiple choice questions (MCQ)
May consist of true/false or up to 5 options.
Advantages:
- widely used; good for large groups with few resources
- objective marking and reliable
- can cover large areas of knowledge and specialist areas
- poor questions easily identifiable
- valid assessment for factual knowledge
- quick to mark
Disadvantages:
- considerable effort required to produce large numbers of good questions
- skill required to avoid ambiguity or prompting for the correct answer
- does not test problem-solving ability
- examinees have a 50% chance of guessing the correct answer if just true/false, or a 20% chance with 5 options
- may teach students to learn information by rote or in a shallow, memorised way rather than considering the deeper meaning

2. Multiple choice questions – extended matching (EMQ)


May have 15-20 options, usually clustered around a theme; the student has to match the best option to a few scenarios, usually clinical patient vignettes.
Advantages:
- highly reliable
- less likelihood of guessing the answer
- can reflect clinical activities and thinking processes
Disadvantages:
- can be time consuming to develop
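The guessing chances quoted for these formats follow directly from the number of options. As an illustrative sketch, not part of the source material, expected marks from pure random guessing can be computed:

```python
def expected_guess_score(num_questions, num_options):
    """Expected marks from pure random guessing, one mark per question."""
    return num_questions / num_options

print(expected_guess_score(100, 2))   # true/false: → 50.0
print(expected_guess_score(100, 5))   # five options: → 20.0
print(expected_guess_score(100, 18))  # extended matching with, say, 18 options
```

This is why extended matching formats, with many more options per item, largely remove guessing as a threat to validity.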

3. Short Answer Questions
Questions are posed that require a short descriptive (or one word) answer.
Advantages:
- useful in assessing the reasoning that lies behind a decision-making process
- assesses a knowledge base through recall, not recognition
- relatively easy to construct the questions
- simulations of realistic clinical situations are possible
- no clues to the likely answer are provided
Disadvantages:
- time consuming for examinee and marker
- clear assessment criteria required

4. Patient Management Problems
May have a variety of forms, often a patient-based scenario where the examinee is required to interpret information, e.g. clinical history, investigation results.
Advantages:
- can test key elements of the examinee's problem-solving ability
- simulates real-life situations
- can be aimed at variable levels of competency
Disadvantages:
- time consuming to put suitable cases together
- requires piloting to ensure accuracy of content
- difficult to simulate the real time scales of patient management
- assessors may disagree on correct answer pathways, and hence these are almost impossible to mark reliably

5. Modified Essay Questions (MEQ)
A written assessment used for clinical problem solving. Begins with a limited amount of information; questions are then asked on clinical, diagnostic or management issues related to the provided clinical scenario.
Advantages:
- flexible form of written assessment
- problem-solving skills are assessed
Disadvantages:
- hand marking required, therefore best for small group or individual assessment
- some expertise required to formulate them
- clues for answers may be provided as the examinee moves through the question

6. Key Feature problems and examinations
A clinical case scenario is presented, followed by a number of questions designed to test essential critical steps in the investigation, diagnosis or management of a case. Typically 2-4 key features.
Advantages:
- assesses clinical decision-making skills
- problem focussed, therefore a number of cases can be presented in a short time
- flexibility in the format
- relevant to actual clinical situations
Disadvantages:
- considerable preparation is required
- a high degree of consensus as to the correct answers is required
- a large number of cases is required to achieve reliability

7. Patient-based long case – medicine assessment
The examinee/trainee takes a history (on a real or simulated patient) and performs an examination, observed or unobserved. They are then interviewed by one or more examiners and discuss the history and examination findings, differential diagnosis, final diagnosis and patient management. The examiners then confer to provide a grade, usually against pre-set criteria, on the examinee's clinical competence.
Advantages:
- tests a wide range of attributes, i.e. problem solving, clinical decision-making skills, depth of knowledge, ethical issues and the ability to think on one's feet
Disadvantages:
- patient inconsistency or interference
- if not observed, the examinee's dealings with the patient are not examined
- examiner variation and bias
- doubts about reliability and validity, especially as only one case is usually sampled

8. Patient-based short case
Similar to the long case, but focuses only on a specific area of clinical practice.
Advantages:
- less dependent on the role of the patient and therefore less inconsistent
Disadvantages:
- examiner variability
- limited scope, so interpersonal skills not examined
- patient numbers difficult to acquire

9. Objective Structured Clinical Examination (OSCE)
A series of highly structured and focussed assessment tasks through which a student rotates in a systematic fashion; stations have varying tasks (written, skill, oral) and a set time limit is placed on completing each task. Each station is individually marked. This is an appropriate way of assessing a variety of clinical and proficiency skills.
Advantages:
- detailed analysis of specific skill performance; wide ranging
- standardised stations, with identical conditions for all students
- detailed feedback can be provided due to the structured assessment
Disadvantages:
- resource intensive (human, time, perhaps money)
- time consuming
- must be piloted first
- time pressure for candidates; settling time variable; can be exhausting

10. Vivas – unstructured
An interview between examinee and assessor.
Advantages:
- can assess depth of understanding
- can assess mental agility, verbal skills etc.
- high validity, as questions can assess deep knowledge
Disadvantages:
- can produce anxiety in the examinee, which may reduce their performance level
- style and content of questions are at the whim of the assessor, and therefore reliability is often very low
- assessment criteria are often not specified

11. Vivas – structured
Similar to the above, but questions are pre-determined, giving a less adversarial format.
Advantages:
- can check areas that other assessment forms miss
- checks the trainee's verbal and mental ability
- assesses depth of knowledge by deep questioning
- assesses attitudes
- good level of validity and reliability
Disadvantages:
- still some level of anxiety
- examinees with poor verbal skills are disadvantaged

12. Portfolio
May include log books, written evaluations, publications etc. A reflective commentary is then provided in which the trainee identifies significant themes. The themes are then the basis of review by the trainee for personal development.
Advantages:
- comprehensive; good face and content validity
- continuous assessment
- assesses analytical skills
- assesses the ability to learn from experience
Disadvantages:
- difficult to keep to a manageable size
- time consuming for the trainee
- large document to mark/assess
- records what they have done and how often, but not how well, therefore poor reliability

13. Log Books – records of practice
A record of clinical experience.
Advantages:
- provides a framework for appraisal
- shows what clinical experience has been undertaken
- could provide a framework for vivas
- reliable, due to the limited information
Disadvantages:
- a record of experience carries no information on clinical competency or quality of work
- no record of the work having any educational value
- designed to be completed on a daily basis, but rarely done

14. Critical Incident Review
Trainees are asked to keep a diary to record incidents of a critical nature, e.g. a difficult patient, an unexpected complication, a clinical error. They then record their reflections on these incidents and may include reference to the literature.
Advantages:
- identifies exceptional performance, good or bad
- continuous assessment
- assesses attitudes and skills; encourages reflection
Disadvantages:
- judgements on the behaviour of the student may vary across a large number of assessors
- trainees are unlikely to choose incidents that expose their weaknesses
- may only cover a section of the curriculum

15. Rating Scales
An observer makes a judgement along a scale, e.g.:
- no errors observed
- occasional errors, corrected by trainee
- frequent errors, corrected by trainee
- frequent errors, not corrected by trainee
- trainee unable to proceed without step-by-step instruction
Advantages:
- can be used to assess behaviour or performance in circumstances in which more objective methods are unavailable
Disadvantages:
- subjective
- raters need to be well trained

16. Checklist
A variation on the rating scale, but a checklist is used with limited choices.
Advantages:
- useful in breaking down the skill or attribute being assessed into a series of observable sub-skills or behaviours
- useful where a trainee's performance is being observed
- useful in competency assessment to ensure that all or most sub-skills are successfully completed
Disadvantages:
- subjective
- feasibility limits, therefore usually only used in a section of the overall assessment
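As an illustrative sketch, not drawn from the source, an error-based rating scale like the one above can be mapped onto numeric values so that an observer's ratings across sub-tasks can be aggregated; the scale values and ratings here are hypothetical.

```python
# Hypothetical numeric mapping for a five-point, error-based rating scale
SCALE = {
    "no errors observed": 4,
    "occasional errors, corrected by trainee": 3,
    "frequent errors, corrected by trainee": 2,
    "frequent errors, not corrected by trainee": 1,
    "unable to proceed without step-by-step instruction": 0,
}

def overall_score(ratings):
    """Mean numeric score (0-4) across the observed sub-tasks."""
    return sum(SCALE[r] for r in ratings) / len(ratings)

ratings = [
    "no errors observed",
    "occasional errors, corrected by trainee",
    "occasional errors, corrected by trainee",
]
print(round(overall_score(ratings), 2))  # → 3.33
```

Averaging in this way only summarises; the subjectivity and rater-training caveats noted above still apply to each individual rating.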

17. Triple jump examination
Three steps in the process:
Step 1 – the trainee is given some initial clinical information (oral/written examination) and asked to identify further information required to understand the situation more fully.
Step 2 – the trainee is given a period of time (say 2 hours to 2 days) to acquire the additional information independently.
Step 3 – continuation of the oral or written exam, to assess the trainee's depth of understanding of the problem.
Advantages:
- assesses problem-solving skills
- assesses depth of knowledge in a specific area, and the ability to acquire further knowledge to develop a full understanding
- can test several interdependent skills applicable to a clinical situation
- can provide detailed feedback
Disadvantages:
- time consuming for both trainee and assessor
- the score at each step is dependent on the others

18. Dissertation and project
Written work based on personal research, involving a literature search and/or original work, on which supervision is variable. The outcome is presented in the form of a research paper. A project is on a smaller scale.
Advantages:
- assesses relevant knowledge and research skills
- promotes knowledge and understanding of the value of the literature
- continuous work and continuous assessment
- feedback provided at a high level
- may generate a publication
- high validity
Disadvantages:
- the student may find the size and scope of the project daunting
- concerns of fraud and plagiarism
- poor reproducibility and reliability

19. Patient-based assessment – simulated patients
Assessment of the clinical interaction between a trainee and a trained "real" patient, or a person trained to act as a patient.
Advantages:
- as close to the real thing as possible while maintaining standardisation
- the patient can be trained to assess and give feedback to the trainee
- difficult areas can be explored in a safe environment
- the patient can be trained to simulate several clinical conditions
Disadvantages:
- selection and recruitment are resource intensive
- expensive to train and use on a regular basis
- requires skilled staff for training
- limitations on the demonstration of physical signs

20. Patient-based assessment – standardised patients
As above, but a consistent history/clinical situation is presented to ALL trainees undertaking the assessment.
Advantages:
- high level of standardisation
- can be relatively inexpensive
- the patient can be trained to assess and give feedback to the trainee
- difficult areas can be explored in a safe environment
Disadvantages:
- recruitment and retention of a sufficient number of patients with widely differing conditions is difficult
- training demands are high
- patient fatigue

21. Practice Observations
Working alongside and informally judging the trainee. Data are gathered and a judgement is made about the trainee's ability to perform real clinical tasks.
Advantages:
- potentially assesses the real world, with real patients
- continuous assessment
Disadvantages:
- not suitable where observation would interrupt the clinical process
- observational judgements are subjective and may be unstructured
- the presence of an observer may affect the performance of the trainee

22. Structured interviews
Examinee and assessor undertake an interview process, similar to the viva, but the questions are pre-determined and take the trainee through a range of experiences in order to assess their responses.
Advantages:
- less adversarial approach than the traditional viva
- checks areas that other forms of assessment fail to reach
- demonstrates the trainee's verbal and mental ability, allowing comparison with separately assessed written work
- assesses attitudes
- good validity and reliability if the structured format is used
- few resources required
Disadvantages:
- induces anxiety due to the performance-based assessment
- trainees with poor verbal skills perform poorly

23. Video Assessment
Used to record real clinical encounters.
Advantages:
- opportunity to assess actual clinical practice
Disadvantages:
- ethical issues with videotaping the trainee, patient and others who may be present; protocols need to be in place as to who can view the video, disposal etc.
- time consuming to review
- the camera angle may inhibit the view of some important elements

24. Structured Trainer's Report
A structured form in which records are made of the assessment of performance in specific areas during the training period.
Advantages:
- continuous assessment over a long period, instead of a single occasion
- can be designed to have high content validity
- tests performance within the working context
- allows assessment of areas difficult to assess by conventional methods
Disadvantages:
- only applicable when the trainer and trainee have a long-term working relationship that allows regular assessments
- risk of subjective assessment if made by one person only and, if made by more than one, the consequence of poor inter-rater reliability
- time consuming to design and test
- continuous assessment may affect the relationship between the trainer and trainee


Appendix 2.

Validation and Accreditation Handbook of the

Consortium for the Accreditation of Sonographic Education

CASE March 2000

Member Organisations:
British Medical Ultrasound Society
British Society of Echocardiography
College of Radiographers
Institute of Physics and Engineering in Medicine
Royal College of Midwives
Society for Vascular Technology of Great Britain and Ireland
United Kingdom Association of Sonographers

The Consortium shall retain the copyright of all publications it may from time to time publish.

This document is reproduced with the permission of CASE




All matters relating to CASE activities should be directed through the CASE Co-ordinator at the following address:

The CASE Co-ordinator C/o The British Medical Ultrasound Society 36 Portland Place London W1B 1LS Tel: 0207 467 9759 Fax: 020 7323 2175 E-mail: [email protected]



INDEX

Introduction to CASE activities

2 Aim and roles of CASE
2.1 Aim of CASE
2.2 Roles of CASE

3 Validation, revalidation and review by CASE
3.7 Calendar of events leading to and following a single event validation
3.8 Definitions: validation; accreditation; revalidation; re-accreditation; review

4 Aims and objectives of validation, revalidation and review
4.1 Aims
4.2 Objectives
4.3 Additional considerations for revalidation and review
    Internal monitoring
    Monitoring by CASE

5 The role of CASE in validation and review procedures
    Single event validation
    Process validation

6 Information required by CASE prior to initial validation, validation of substantive changes to an approved course, revalidation and course review
6.1 Information required for initial validation
6.2 Information required for substantive changes to approved courses
6.3 Information required for revalidation or course review

7 Validation, revalidation and review procedures
7.1 Appointment of CASE representatives at validation, revalidation or review
7.1.1 The CASE nominated advisor
7.1.2 The CASE lead accreditor
7.1.3 The CASE accreditors
7.2 Internal or faculty validation
7.3 The validation event
7.3.3 Accreditation by CASE
7.3.4 The validation panel
7.3.5 The final report of the event
7.4 Process validation
7.5 Rejection of submission by validation panel

8 The roles of CASE nominated representatives and CASE Council in validation, revalidation and review
8.1 The CASE advisor
8.2 The CASE lead accreditor
8.3 The CASE accreditors
8.4 CASE Council

9 CASE criteria for successful accreditation
9.2 Course content
9.2.2 Core topic areas
9.2.3 Specific clinical areas
9.3 CASE learning outcomes
9.3.1 Core component: science and technology
9.3.2 Core component: professional issues
9.3.3 Specific clinical topic
9.4 Balance of core and specific clinical areas
9.5 Assessment
9.6 The teaching team
9.7 The learning environment

Course Monitoring

Appendix A Member organisations of CASE

Appendix B Publications to which CASE may refer for purposes of accreditation and course monitoring

Appendix C Guidelines for information required by CASE within the definitive course document
1 Format
2 Contents
2.1 Course information
2.2 Course organisation
2.3 Staffing
2.4 Students
2.5 Course philosophy
2.6 Course syllabus
2.7 Teaching/learning methods
2.8 Clinical education
2.9 Assessment
2.10 Internal course monitoring
2.11 Regulations
2.12 Course resource provision
2.13 Clinical resources provision
3 The definitive course document

Appendix D Guidance notes for validation and review panel members
1 Introduction
2 Human resources
3 Physical resources
4 Other resources
5 The course
5.1 Rationale
5.2 Aims and objectives
5.3 Structure
5.4 Content
5.5 Admission
5.6 Assessment
5.7 Management
5.8 Policy

Appendix E Assessment: checklist of questions
1 Objectives
2 Presentation
3 Structure and content
4 Weighting and suitability
5 Classification
6 The examiners

Appendix F Review: checklist of questions
1 Student intake
2 Staff provision
3 Students
4 Teaching and learning
5 Teachers
6 Course management
7 Resources
8 Course environment

Consortium for the Accreditation of Sonographic Education: Memorandum of Agreement
1 Preamble
2 Aim
3 Terms of reference
4 Accreditation, validation and review
5 Copyright

Table 1 Single event validation
Table 2 Successful validation with no conditions
Table 3 Validation where conditions are made
Table 4 Unsuccessful validation

INTRODUCTION TO CASE ACTIVITIES

The Consortium for the Accreditation of Sonographic Education (CASE) was formed in 1993 through the common desire of organisations committed to ensuring that the education and training of sonographers in the United Kingdom was delivered at the highest level. Its philosophy therefore is to promote best ultrasound practice through the accreditation of those training programmes that develop safe and competent ultrasound practitioners.

CASE considers that ultrasound training should be delivered at postgraduate level. The majority of its activities therefore relate to supporting those Institutions which offer, or wish to offer, pathways leading to the award of a Masters, Postgraduate Diploma or Postgraduate Certificate in Medical or Clinical Ultrasound.

CASE currently consists of seven organisations:

• British Medical Ultrasound Society
• British Society of Echocardiography
• College of Radiographers
• Institute of Physics and Engineering in Medicine
• Royal College of Midwives
• Society for Vascular Technology of Great Britain and Ireland
• United Kingdom Association of Sonographers


Each of these organisations publishes documents which inform best practice as identified by CASE; these publications are listed in Appendix B. The member organisations themselves, together with their contact details, are listed in Appendix A.

CASE policy is decided by its Council, which is constituted by a maximum of two members nominated from each member organisation on an annual basis. The Officers of CASE (the Chairman, Vice Chairman, Secretary and Treasurer) are elected from the Council members on an annual basis. The Council of the Consortium normally meets three times a year to review, discuss and further the provision of ultrasound education in the United Kingdom and to approve those programmes seeking CASE accreditation or re-accreditation.

In addition to offering accreditation of new, and re-accreditation of established, ultrasound education programmes, CASE undertakes annual monitoring of the courses it has accredited. The information obtained is fed back to the Institutions both on an individual basis and through the CASE Report, which is published annually. CASE also publishes a Directory of CASE Accredited Courses that contains standardised information on all UK ultrasound education programmes holding current CASE accreditation. This Directory is updated on an annual basis.

A CASE Open Forum is normally held twice a year. Two members of the teaching team from each institution running a CASE accredited ultrasound education programme are invited to attend the Open Forum as guests of CASE. These meetings provide a regular means of direct communication between the Institutions and CASE, enabling the views of the Institutions to be considered when deciding CASE policy. They are also open to all individuals with an interest in sonographic education.

CASE activities are funded through the annual fees paid by each member organisation and by each of the Institutions running CASE accredited ultrasound education programmes.
The day-to-day running of CASE activities is made possible by the CASE Co-ordinator, whose contact details can be found at the front of this Handbook.

This booklet has been produced as a guide to all those involved in the provision of quality ultrasound education. In particular it is of benefit to the following:

• Institutions wishing to provide a course of postgraduate education in sonography
• Course leaders designing new ultrasound courses
• Staff involved in on-going programmes
• Those individuals nominated by CASE to act as advisors and/or accreditors at validation, revalidation or review events involving postgraduate programmes.

CASE Chairman, March 2000



This handbook has been written to provide a clear and concise overview of the role of CASE in the accreditation process of a new ultrasound education programme and in the re-accreditation or review process of an established ultrasound education programme. As CASE acts as the professional body in matters relating to ultrasound education programmes in the UK, this handbook is intended as a source of reference for:

a) The relevant authorities in Universities and Colleges which either intend to offer or which are offering postgraduate education in sonography.

b) Curriculum development and course planning teams in higher education centres.

c) NHS Trusts, purchasing consortia, local education authorities, government departments and services and other agencies with an interest in managing and/or funding or contracting for sonographic education and in the quality of sonographic courses.

d) Those clinical departments which are, or which intend to be, integral components of the clinical education associated with sonography courses.

e) CASE itself, its member organisations and those who may be called upon to implement CASE procedures.


The procedures described in this Handbook are the route by which the member organisations of CASE discharge their respective validation, approval, accreditation and review roles and functions. Though CASE will not, and cannot, replace the roles and functions vested in the member organisations, these procedures are intended to be the normal channel for validation, approval, accreditation and review of all sonography courses.


The accreditation of individual practice through personal accreditation is not a process that falls within the remit of CASE.


All policy related to the professional aspects of validation will be determined by the member organisations of CASE. The function of CASE is to implement these policies.


CASE will advise the member organisations on all aspects of the suitability of Educational Institutions to offer courses in sonography and will maintain a register of accredited courses.


CASE will report annually to its member organisations and accredited Institutions on the work of the Consortium.


The immediate expenses of validation and review will be met directly by each Institution. An annual fee for each entry and retention on the CASE register of accredited courses will be charged to each institution.




AIM OF CASE

The aim of the Consortium shall be to promote the best and most relevant sonographic education and training by providing a forum for the expression of the widest and most informed view of service needs and by accrediting those examples of best educational and training practice.



The objectives of the Consortium are:

a) To identify best clinical and professional practice and ensure that this is reflected within the courses it accredits.

b) To ensure the accreditation and re-accreditation processes are rigorous, thorough and equivalent for all courses.

c) To undertake quality assurance through the annual monitoring of all accredited courses and to take appropriate action as indicated.

d) To provide a directory of CASE accredited courses that is normally updated annually.

e) To disseminate its views, and seek those of others, relative to education and training in ultrasound on a regular basis through the provision of CASE Open Forums.




CASE welcomes the opportunity to accredit ultrasound training programmes that address the requirements set down by CASE. This process is usually, but not exclusively, carried out simultaneously with course validation by the Higher Education Institution hosting the education programme.


The minimum award that can achieve CASE accreditation is the Postgraduate Certificate.


It is the view of CASE that clinical competence and professional responsibility cannot be achieved from an educational and training programme that is of insufficient content to qualify for a Postgraduate Certificate. Individual modules will therefore only be considered for CASE accreditation if they contribute to a clearly identified Postgraduate Certificate, Postgraduate Diploma or Masters programme in Clinical/Medical Ultrasound.


CASE accreditation will only be given to Postgraduate Diploma or Postgraduate Certificate programmes which incorporate the teaching and assessment of clinical ultrasound skills relevant to that programme. CASE requires that clinical competencies should be associated with a minimum of 75% of the academic content of any programme seeking accreditation. In this context, quality assurance of equipment is taken as a clinical competency.


CASE will only accredit Postgraduate Diploma or Postgraduate Certificate programmes which include Clinical Ultrasound or Medical Ultrasound in their named award.


CASE will normally accredit a course for a period not exceeding 5 years. Shorter periods of accreditation may be awarded in certain circumstances.


Calendar of Events leading to and following a Single Event Validation


The calendar of events leading to and following a single event validation is outlined below.

Table 1 Single Event Validation

12 months prior to anticipated event
• Institution informs CASE of intention to seek validation.

6 months prior to anticipated event
• CASE issues validation proforma for completion.
• On receipt of validation proforma, CASE appoints CASE advisor/lead accreditor.

4 months prior to anticipated event
• CASE appoints a maximum of three CASE co-accreditors to assist lead accreditor in review of final documentation.

3 months prior to event
• Institution confirms validation date with CASE.
• Institution sends five copies of final course documentation to CASE office.
• All accreditors receive a copy of final course documentation. One copy retained in CASE office.

1 month prior to event
• Written reports prepared by all accreditors and received by lead accreditor.

Validation Event
• Attended by lead accreditor and one co-accreditor.

2 weeks after event
• Written report of event, including the recommendations of the accreditors attending the event, submitted to CASE office by lead accreditor.

First CASE Council meeting following event
• CASE Council considers report and awards either: i) outright accreditation; ii) accreditation with conditions; iii) unsuccessful accreditation.
• A time frame for CASE accreditation will be agreed.
• A time frame, normally no more than 3 months, for any conditions will be agreed.

2 weeks after first CASE Council meeting following event
• Institution informed of CASE decision in writing.

3.7.2 The events that follow the validation event depend on the outcome of the event itself. The calendar of events following a successful validation with no conditions is shown in Table 2.

Table 2 Successful Validation with No Conditions

2 weeks after first CASE Council meeting following event
• Institution informed of CASE decision regarding successful accreditation and period of accreditation in writing.

12 months prior to anticipated revalidation event
• Institution informs CASE of intention to seek revalidation.

The calendar of events following a successful validation with conditions is shown in Table 3.

Table 3 Validation where Conditions are Made

2 weeks after first CASE Council meeting following event
• Institution informed of CASE decision regarding accreditation conditions in writing.

No longer than 3 months after event
• Institution submits 5 copies of revised documentation.
• Accreditors, through lead accreditor, advise CASE whether conditions have been met.
• Conditions adequately met: validation awarded through CASE Chair's action. Institution notified of CASE decision, and period of accreditation, in writing.
• Conditions not adequately met: failed validation awarded through CASE Chair's action (see Table 4).

Second CASE Council meeting following event
• Council advised by CASE Chair of outcome of event.

12 months prior to anticipated revalidation event
• Institution informs CASE of intention to seek revalidation.

The calendar of events following an unsuccessful validation, either at the event or following the submission of revised documentation, is shown in Table 4.

Table 4 Unsuccessful Validation

2 weeks after CASE Council meeting following event
• Institution informed of CASE decision in writing.
• CASE invites course team to review the process with accreditors.

12 months prior to anticipated event
• Institution informs CASE of intention to seek validation of revised course.

Definitions

The processes of validation and accreditation, and of revalidation and re-accreditation, are defined below. These definitions have been adopted throughout this handbook. It will be noted from the definitions that the processes of validation and accreditation are similar but not identical. For simplicity, events involving both validation and CASE accreditation will be referred to as 'validation' in this handbook.

VALIDATION is a conjoint venture between the Higher Education Institution and CASE. It is the process whereby a judgement is reached jointly and simultaneously by a group including external peers and CASE as to whether a course designed to lead to an award by a statutory awarding body meets the requirements for that award. This will be determined by the awarding body's charter and statutes, by the regulations established thereunder and also by the criteria established by the member organisations of CASE.

ACCREDITATION is the initial acceptance of a course by CASE when CASE has participated in its validation process and is satisfied that the course meets the professional and educational criteria established by the member organisations of CASE.

REVALIDATION is a conjoint venture between the Higher Education Institution and CASE. It is the process whereby a judgement is reached jointly and simultaneously by a group including external peers and CASE as to whether a course designed to lead to an award by a statutory awarding body continues to meet the requirements for that award. This will be determined by the awarding body's charter and statutes, by the regulations established thereunder and also by the criteria established by the member organisations of CASE.

RE-ACCREDITATION is the continuing acceptance of a course by CASE when CASE has participated in its revalidation process and is satisfied that the course continues to meet the professional and educational criteria established by the member organisations of CASE.

REVIEW is the process whereby the progress of an existing course is critically appraised at intervals of not more than five years by a group including external peers and CASE. During the review process, any plans for change are considered in order to confirm that the course remains academically and professionally valid and continues to meet the conditions for an award laid down by the awarding body and by CASE.




Aims: The overall aim of ultrasound course validation, revalidation and review is to secure for students a high quality of educational experience and to develop highly competent ultrasound practitioners. Its most important function is to assess the quality and standards of ultrasound courses. It also stimulates curriculum development by requiring staff to evaluate their courses and to open them to the thinking and practices of external peers.


Objectives: Validation, revalidation and review must ensure that:

a) Ultrasound courses meet the requirements for the relevant award and that the standards are appropriate to that award.

b) The human and physical resources available, and both the academic and clinical environments within which the course is offered, are satisfactory.

c) The standards of quality of teaching in the relevant subject areas are maintained and, wherever possible, enhanced.

d) Consideration has been given to all aspects of the course in accordance with the criteria set out in this handbook.


Additional Considerations for Revalidation and Review: Course revalidation and review share the objectives of initial validation but are additionally concerned with evaluation of the success of the course in practice. Both processes should include regular, normally annual, internal monitoring (as defined below) of course progress, together with subject review, external audit by the Quality Assurance Agency (QAA) and annual monitoring by CASE (as defined below).


Course review should normally be linked with the periodic evaluation, through revalidation, of the findings derived from monitoring. Revalidation should normally take place at intervals of not more than five years.

INTERNAL MONITORING is the regular, normally annual, process by which the Institution and its External Examiners critically appraise the operation of the ultrasound course between reviews and ensure that appropriate standards are maintained.

MONITORING BY CASE is CASE's regular scrutiny of an ultrasound course, normally carried out annually. Annual course reports/monitoring reports provide the initial mechanism through which CASE monitors courses that it has accredited. These reports may, however, lead to monitoring visits to the Institution, and CASE retains the right to visit Institutions as it deems necessary.




CASE will, conjointly with the Higher Education Institution:

a) Consider all postgraduate courses which include the assessment of clinical ultrasound skills.

b) Periodically review such courses.

c) Receive proposals for major changes to those courses already accredited and consider these changes within the framework of the validation arrangements existing between the Institution and CASE.


CASE, in fulfilment of its obligations to monitor and maintain standards, will require the Institution to review the course, normally at five yearly intervals. The Institution may, however, choose to submit a course for review earlier. Review is considered by CASE to be just as important as the original validation, although the emphasis may well be different; for example, greater consideration will normally be given to student opinion and critique. The procedures to be adopted for review will normally follow those described for validation.


CASE recognises that there is no common pattern for validation of courses within Higher Educational Establishments although, broadly, two types of validation exist.

SINGLE EVENT VALIDATION Validation is vested predominantly in a final validation meeting of the course team and the validating panel. The process is identified as a single calendar event that typically takes one working day to complete.

PROCESS VALIDATION Process validation consists of an extended period of consultation with the validation panel, leading to approval. It therefore has an end point that is arrived at, rather than fixed. The time scale of individual process validations will vary but would normally be expected to take not less than six months and not more than eighteen months to complete.


CASE will, as far as possible, work within the validation structures and procedures of individual Institutions. However, it reserves the right to seek modifications of, or to withdraw from, these structures and procedures if they prevent proper scrutiny of the proposed course.


Institutions which propose to set up a postgraduate course in sonography, to make substantive changes to an approved course, or to review such a course should contact the CASE Co-ordinator 12 months before the intended validation or review date. A calendar of events leading to and following a single event validation is given in Table 1.




Information Required for Initial Validation

The information required for a proposed new course will include:

a) Definitive course document (see Appendix C).

b) Evidence that any necessary administrative approval to offer the course has been granted.

c) Evidence that the proposed course has the support of the academic board of the Institution.

d) Evidence of regional demand and provision.

e) Appropriate memoranda of agreement, e.g. between purchaser of education and provider Institution, and between clinical provider and educational Institution.


Information Required for Substantive Changes to Approved Courses


Between the initial validation, approval and accreditation of a course and the scheduled review, the regular internal monitoring process may identify the need for changes. Where such changes are relatively minor, they should be documented, incorporated into the course immediately, added to the definitive course document as a dated addendum and identified clearly in the next CASE monitoring proforma. They should also be referred to specifically at the next review.


The information required for validation of substantive changes includes:

a) Relevant background information: the context of the development in terms of Institutional policies and plans and of regional demand and provision, and the rationale for change.

b) Details of the syllabus, teaching/learning methods, assessment strategy and an indicative booklist for each new module of the course.

c) Curriculum Vitae of all additional staff required to deliver the changes.

Information Required for Revalidation or Course Review

The information required for revalidation or course review will include:

a) Sufficient descriptive information about the course for the review panel to engage in discussion of past progress and future intentions.

b) The rationale for all changes that have been made since validation and any plans for further changes.

c) Detailed information, including all indicative booklists, for new or amended syllabuses.

d) A review of the reports from the External Examiners covering the period since the last review (or initial or most recent approval), with a statement of any action taken in response to the reports.

e) A review of the reports from CASE course monitoring covering the period since the last review (or initial or most recent approval), with a statement of any action taken in response to the reports.

f) A critical appraisal of the operation of the course, based on the Institution's own monitoring and evaluation and including the views of students, clinical supervisors, clinical assessors and employers.

g) Information on current resources, including staff changes, capital and revenue expenditure, and development of library and technological learning resources provision.

h) Current CVs for all members of staff.

This information should form the basis of an updated course document.





Appointment of CASE Representatives at Validation, Revalidation or Review

Each member organisation of CASE nominates annually a maximum of four representatives to serve as CASE accreditors during the process of validation, revalidation or review. These representatives form the CASE List of Nominated Accreditors. Six representatives from this list are nominated by CASE to act as lead accreditors; these representatives form the CASE List of Lead Accreditors.

For every validation, revalidation or review, CASE will nominate a CASE advisor, a lead accreditor and a maximum of three nominated accreditors. The CASE advisor is nominated from the List of Lead Accreditors and will normally also act as the lead accreditor. The validation is normally attended by the lead accreditor and one other accreditor. The roles of the CASE nominated advisor, CASE lead accreditor and accreditor are described below.


The CASE Nominated Advisor

An advisor will be appointed by CASE once the Institution has informed CASE of its intention to proceed with validation, revalidation or review. The CASE nominated advisor will normally be selected from its List of Lead Accreditors and will also normally act as the CASE lead accreditor. The role of the CASE nominated advisor is described in 8.1.


The CASE Lead Accreditor

The lead accreditor is selected from the CASE List of Lead Accreditors and will normally have acted as the CASE nominated advisor to the course team prior to validation. The role of the CASE lead accreditor is described in 8.2.


The CASE Accreditors

CASE normally selects a maximum of three representatives from the CASE List of Nominated Accreditors to act with the lead accreditor to review the course documentation prior to validation. Usually one of these representatives, together with the lead accreditor, will serve on the joint panel at validation. At least one accreditor serving on the joint panel at validation should be a CASE Council member, to act as a link for each validated course. The role of the CASE accreditor is described in 8.3.


Internal or Faculty Validation

Institutions are advised that rigorous initial scrutiny of the course should be arranged within the Institution prior to validation, to ensure that maximum information is available to the course leader and planning team. Such an event may be termed an internal validation or a Faculty validation. CASE advises that the CASE nominated advisor be invited to attend this initial scrutiny meeting. This procedure is an essential preliminary to a full conjoint validation event with CASE.


The Validation Event


The Institution will normally provide a minimum of four members, including representatives associated with the internal validation but not directly responsible for the running of the course. The Chairman of the validation panel will be selected by the awarding Institution.


A draft copy of the itinerary for the event should be sent to the CASE lead accreditor for comment. The itinerary for the event will be set by the awarding Institution in consultation with the Chairman of the validation panel. It will include full question and answer sessions between CASE and the course team and, if required at CASE's request, smaller group meetings. The day may also include:


a) Interviews with staff

b) Interviews with students where appropriate

c) Visits to buildings and facilities

d) Visits to clinical placements

7.3.3 Accreditation by CASE
The representatives who attend the validation event on behalf of CASE are not empowered to give an immediate decision regarding accreditation of the course by CASE. This decision is made by CASE Council at the Council meeting following validation. Institutions may wish to consider the timing of CASE Council meetings when identifying a date suitable for the validation event. Dates of CASE Council meetings for the forthcoming year can be found in the CASE Working Procedures Handbook or by contacting the CASE Co-ordinator.

7.3.4

The Validation Panel
There will be a plenary session of the validation panel at the end of the day, at which the validation panel's recommendations will be made clear to the Institution, including any conditions attached to the approval of the course which must be fulfilled, together with the dates and mechanisms by which they should be met. The panel should remember that the final decision regarding CASE accreditation is taken by CASE Council and that the views of the CASE accreditors present are recommendations only.


The Final Report of the Event

After a validation event, a written report of the proceedings, stating the conditions and recommendations of the validation panel where applicable, will be produced by the Institution, and a draft copy will be circulated by the Institution to all members of the panel for comment and approval. The Institution is reminded that the conditions and recommendations set by CASE cannot be formalised until CASE Council has considered the report submitted by the lead accreditor attending the event. Following approval of the report by the members of the panel, the approved report will be sent to the CASE Co-ordinator by the Institution for consideration and action by CASE.


Process Validation


Process validations are less structured than single event validations but several similarities remain. CASE normally selects a maximum of three representatives from its List of Nominated Accreditors to act with the lead accreditor to review the course documentation and serve on the validation panel. Usually one of these representatives together with the lead accreditor will serve as the primary representative and will be available for direct consultation by telephone or letter, and to attend appropriate meetings that may be called as part of the validation process. CASE expects that the Institution would also provide a minimum of four representatives to participate in the process validation.


The nature of process validation requires that consultation with CASE representatives takes place regularly throughout the period of validation. Consultations may be made by telephone, electronic means, in writing or through meetings. The process validation should culminate in explicit and written approval being given by each member of the validation panel for the proposed course. The Institution is reminded that the formal approval by CASE cannot be given until CASE Council has considered the report submitted by the lead accreditor nominated to act during the process validation.


Where explicit and written approval proves impossible, or where process validation becomes inappropriately protracted, CASE will seek a formal meeting with the course team and/or with the full validation panel.


Once the course has been approved, a synopsis of the process validation including relevant reports and a copy of the final course documentation must be sent to the CASE Co-ordinator by the Institution.


Rejection of Submission by Validation Panel

Should a proposal for a course be found by CASE to be unacceptable for validation, CASE will give reasons for its unacceptability and make suggestions for its improvement. While it is in the interests of neither the course nor CASE to allow a proposal to remain unvalidated, CASE retains the right not to proceed with consideration of submissions that are unsuitable.




The CASE Advisor


The CASE Advisor is selected by CASE from its List of Lead Accreditors and will be appointed as soon as possible after the Institution informs CASE of its intention to proceed with validation, revalidation or review. The advisor should:


a) Advise the course team on matters and aspects of the programme content which relate to CASE accreditation. Areas of particular relevance include the academic content that relates to clinical skills and professional responsibility, and the clinical training scheme: its management, content, delivery and assessment.

b) Give advice by whatever means is acceptable to both advisor and course team. Should the course team anticipate that considerable input will be required from the advisor, it is the team's responsibility to inform CASE of this when requesting an advisor.

8.1.2 Travel and other expenses relevant to the role of the advisor will be met by the Institution requesting the advisor.


The CASE Lead Accreditor


The CASE lead accreditor is selected by CASE from its List of Lead Accreditors. The lead accreditor should:


a) Ensure familiarity with CASE documentation relevant to validation, including all referenced documents from Member Organisations (see Appendix B).

b) Liaise with CASE Council to ensure that the expertise of the appointed accreditors covers the necessary areas within the course documentation.

c) Ensure all nominated accreditors have received, and have reviewed, all the documents and reports necessary for validation.

d) Request and receive written reports from all nominated accreditors no less than 4 weeks before validation.

e) Select one of the nominated accreditors to attend the validation.

f) Provide, on behalf of the nominated accreditors, any documentation which may be requested by the Institution.

g) Identify the agenda which the accreditors attending the event should follow.

h) Receive, in the first instance, all questions to CASE posed by the Institution at the event and delegate them where appropriate.

i) Provide for CASE Council a written report of the event, with recommendations agreed by the accreditors attending the event, within two weeks of the event.

j) Speak to the appropriate agenda item at the next CASE Council meeting, or provide a short written report if unable to attend.


Appendices C – F are included to aid the lead accreditor and to facilitate consistency across validation events.


Travel and other expenses relevant to the role of lead accreditor will be met by the Institution.


The CASE Accreditors


CASE normally selects a maximum of three representatives from its List of Accreditors to act with the lead accreditor to review the course documentation prior to validation. It is anticipated that the expertise of the accreditors will complement, rather than mirror, that of the lead accreditor. Usually one of these representatives, together with the lead accreditor, will serve on the joint panel at validation.

The accreditors should:

a) Ensure familiarity with the CASE documentation relevant to validation, including all referenced documents from Member Organisations (see Appendix B).

b) Liaise with the lead accreditor to ensure all the documents and reports necessary for the writing of a report prior to validation have been received.

c) Submit a written report to the lead accreditor no less than 4 weeks before validation.

d) Liaise with the lead accreditor regarding the agenda for validation.

e) Attend the validation if requested, take questions delegated by the lead accreditor at validation and liaise with the lead accreditor regarding the written report of validation to be tabled at the next meeting of CASE Council.

Appendices C – F are included to aid the CASE validation accreditors and facilitate consistency across events.

8.3.3 Travel and other expenses incurred by the nominated accreditor attending validation will be met by the Institution.

8.3.4 Expenses incurred by the nominated accreditors not attending the validation will be met by CASE.


CASE Council

CASE Council shall:

a) Identify and update CASE policy to ensure the accreditation/reaccreditation process addresses the needs of clinical competency and professional development.

b) Provide regularly updated documentation which clearly outlines the requirements of CASE for successful accreditation to be achieved.

c) Invite its member organisations to nominate a maximum of four representatives to act as accreditors at validation. These representatives shall form the CASE List of Accreditors, which will be reviewed annually.

d) Identify no less than six individuals from the CASE List of Accreditors who Council considers have suitable experience and expertise to act as lead accreditors at validation, revalidation or course review. These individuals shall form the CASE List of Lead Accreditors, which will be reviewed annually.

e) Receive the written report from the appointed lead accreditor following validation and award full, conditional or unsuccessful accreditation based on the written report and discussion with the lead accreditor at the next Council meeting following validation.

f) Formally approve full CASE accreditation, for an agreed period of time, for those Postgraduate Certificate, Postgraduate Diploma and Masters programmes which have fulfilled all the CASE criteria.

g) Require the CASE Chairman to formally notify the relevant Institution of CASE Council's decision within two weeks of the meeting of Council.

h) Receive an annual course monitoring report from each accredited Institution and formally approve its continuing accreditation.

i) Retain the right to investigate any CASE accredited course which does not appear to be delivering the extent or quality of content for which it was accredited.

j) Remove accreditation from such a course within the accredited period if it is found to be wanting in the extent, quality, provision or assessment of the educational programme for which it was accredited.




The contents of this section are appropriate for both accreditation and reaccreditation. The criteria included must be met, irrespective of whether accreditation is being sought through an accreditation event or a process accreditation. As the aim of CASE is to promote the best and most relevant sonography education and training, its primary role in course accreditation is to ensure these objectives are adequately met. Four areas are of particular importance in the pursuit of CASE accreditation: course content, learning outcomes, the teaching team and the learning environment (both clinical and academic).



Course Content


CASE will consider the course content in terms of core material and additional specific clinical topic areas. Although the names of comparable modules may vary between Institutions, the content of the modules, and of comparable programmes, will contain essential or core material that is common across all programmes. CASE requires the Institution to clearly evidence that the programme seeking accreditation delivers this core material effectively.

9.2.2 Core Topic Areas

For the purposes of accreditation, CASE will divide the core material into two components:

a) Science and Technology
b) Professional Studies

9.2.3 Specific Clinical Areas

All CASE accredited courses are required to provide specific clinical topics in addition to the core material. The specific clinical topic areas currently considered are:

a) Cardiac
b) General Medical
c) Gynaecology
d) Obstetric
e) Vascular

Other clinical areas such as breast, paediatric, fertility and musculo-skeletal ultrasound could also be considered.

9.2.4 The content of the specific clinical topic areas must reflect the appropriate referenced documents of the member organisations identified in Appendix B for accreditation to be achieved.

9.2.5 An Institution seeking CASE accreditation must satisfy CASE:

a) That the learning outcomes associated with the core and specific clinical topics given below can be satisfactorily achieved through the programme to be accredited.

b) That the assessments address and match the relevant learning outcomes.


CASE Learning Outcomes

Identified below are the learning outcomes that CASE requires to see evidenced.

9.3.1 Core Component: Science and Technology

On completion of this component, the student should be able to:

a) Demonstrate a thorough knowledge of the physical and technological processes by which ultrasound information is obtained.

b) Apply this knowledge to the implications of artefacts in clinical practice.

c) Recognise and critically discuss the limitations and biohazards of the equipment and techniques employed.

d) Consider and evaluate the above knowledge to enable optimal use of the ultrasound equipment within the current, internationally recognised recommendations for safe practice.

9.3.2 Core Component: Professional Issues

On completion of this component, the student should be able to:

a) Demonstrate a thorough knowledge of the legal, ethical and organisational aspects of current diagnostic imaging practice.

b) Consider and evaluate professional accountability and the parameters of the professional role.

c) Evaluate the emotional impact of the ultrasound examination on the client or patient and relevant health professionals.

d) Critically discuss this knowledge in the context of the changing health care needs of clients, patients and organisations.

9.3.3 Specific Clinical Topic

On completion of this component, the student should be able to:

a) Demonstrate an understanding of normal ultrasound appearances and the ultrasound appearances of the common pathologies relating to the specific clinical topics.

b) Produce, recognise and interpret normal and abnormal ultrasound B-mode images, colour flow images and Doppler ultrasound waveforms relating to the specific clinical topics, where appropriate.

c) Evaluate the merits and limitations, and their implications, that influence the choice of ultrasound techniques and equipment relative to the specific clinical topic.

d) Analyse the needs of the patient in order to perform all aspects of the ultrasound examination safely and competently.


9.4 Balance of Core and Specific Clinical Areas

CASE recommends the balance of student learning hours between core and specific clinical topics as shown below:

                           PG Diploma                   PG Certificate
Core Topics                25 – 50% of learning hours   25 – 50% of learning hours
Specific Clinical Topics   50 – 75% of learning hours   50 – 75% of learning hours
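As a rough illustration, the recommended split can be checked arithmetically. The sketch below is hypothetical and not part of any CASE process; the function name and the hour values are invented for illustration only.

```python
# Illustrative check of a proposed core/specific learning-hours split
# against the CASE-recommended ranges. Hour values are invented.

CORE_RANGE = (0.25, 0.50)      # core topics: 25 - 50% of learning hours
SPECIFIC_RANGE = (0.50, 0.75)  # specific clinical topics: 50 - 75%

def within_recommendation(core_hours, specific_hours):
    """Return True if the core/specific split falls inside both ranges."""
    total = core_hours + specific_hours
    core_frac = core_hours / total
    specific_frac = specific_hours / total
    return (CORE_RANGE[0] <= core_frac <= CORE_RANGE[1]
            and SPECIFIC_RANGE[0] <= specific_frac <= SPECIFIC_RANGE[1])

# A 600-hour programme with 200 core / 400 specific hours (33% / 67%)
print(within_recommendation(200, 400))  # True
# 100 core / 500 specific hours (17% / 83%) falls outside the recommendation
print(within_recommendation(100, 500))  # False
```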




The main aim of CASE is to promote the provision of clinically competent and professionally responsible sonographers. As clinical education and its assessment are fundamental to this aim, clinical assessment must be undertaken, the methods used clearly identified, and their rationale adequately justified by the course team. The clinical skills associated with CASE accreditation reflect those identified within the reference documents in Appendix B. An Institution seeking CASE accreditation must satisfy CASE that the assessment strategies applied to both the academic and clinical components are sufficiently rigorous to enable successful students to demonstrate the skills required by the referenced documents. These strategies must be appropriately matched to, and measure, the learning outcomes.


CASE requires that clinical assessments should incorporate a Pass/Fail mark, where Pass is equivalent to safe practice or competence as defined by the relevant reference documents.


CASE requires that the assessment methods used for the academic component of each module reflect relevant aspects of the clinical or professional role(s) of the competent and safe sonographer.


Examples of typical essay titles, examination questions, presentation titles etc. are required for each module.

9.5.5 The criteria used for the assessment of both the academic and the clinical components of each module are required.

9.5.6 Strategies relating to a failed assessment should be clearly documented. Compensation for a failed clinical assessment cannot be given under any circumstances.
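The no-compensation rule for clinical assessment in 9.5.6 can be illustrated with a small decision sketch. The function, the mark scale and the pass threshold are hypothetical assumptions for illustration; they are not CASE requirements.

```python
# Illustrative sketch of the 9.5.6 rule: a failed clinical assessment can
# never be offset (compensated) by academic marks. Threshold is invented.

def module_outcome(academic_mark, clinical_pass, academic_pass_mark=50):
    """Return 'Pass' only if the clinical assessment is passed outright
    and the academic mark meets the pass threshold."""
    if not clinical_pass:
        return "Fail"  # no compensation, however high the academic mark
    return "Pass" if academic_mark >= academic_pass_mark else "Fail"

print(module_outcome(85, clinical_pass=False))  # Fail
print(module_outcome(55, clinical_pass=True))   # Pass
```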


All the Institution’s ordinances and regulations that relate to the course are required.


The Teaching Team

In this context, the teaching team is taken to mean those individuals who contribute to the delivery of either the academic or clinical components of the course. CASE requires that:

a) The course leader holds an ultrasound qualification and has had at least the equivalent of three years of full time clinical experience.

b) At least one member of the academic teaching team holds an ultrasound qualification and has had at least the equivalent of two years of full time clinical experience.

c) At least one member of the academic teaching team performs regular clinical ultrasound sessions.

d) At least one member of the teaching team is qualified to deliver the science and instrumentation component of the programme.

e) The strategy by which clinical supervisors are selected, trained and supported is clearly documented and moderated. Current curricula vitae should be included.

f) There is clear evidence of liaison strategies between the academic teaching team and the clinical supervisors.

g) The teaching team and clinical supervisors evidence continuing professional development directly related to ultrasound through annual course monitoring.

h) The course document contains the current curricula vitae of all members of the academic teaching team and those of external lecturers who contribute significantly to the course programme.


The Learning Environment

The learning environment refers to both the academic and the clinical learning environments. CASE requires that:

a) Suitable accommodation is available for the delivery of lectures, workshops, group sessions, tutorials etc. for the anticipated maximum number of students.

b) There is a clear demonstration of the quality, nature and range of clinical facilities necessary to support clinical education and practice.

c) Suitable audio-visual and information technology equipment is available for the delivery of an illustrative ultrasound programme.

d) Library facilities are available to support ultrasound students working at postgraduate level.




Validation and review reports are important sources of qualitative information for CASE about the design, development, monitoring and evaluation of courses, and complement the factual information available to it through definitive course documents. Reports will be used by CASE to provide an overview of the standards being achieved, changing patterns of curricular provision, and valuable and innovative practices.


Institutions are required to send an annual report of the sonography course to CASE using the CASE proforma, so that CASE can monitor the progress of a course between accreditation and review. CASE will also agree other forms of regular communication between Institutions, the member organisations and/or CASE on the progress of the course. CASE reserves the right to nominate a representative to visit Institutions to fulfil its monitoring role.


Receipt and approval of the annual report of the sonography course will be necessary for retention on the register of accredited courses.


CASE undertakes to provide a response to all course monitoring reports, normally within 3 months. A response is provided to all Institutions on an individual basis, together with a summary relating to all CASE accredited courses. This is published on an annual basis.


APPENDIX A

MEMBER ORGANISATIONS OF CASE

The seven member organisations that currently constitute CASE are given below. Organisations which have publications to which CASE may refer for purposes of accreditation and course monitoring are marked with an asterisk *.

British Medical Ultrasound Society
36 Portland Place
London W1N 3DG
Tel: 020 7636 3714
Fax: 020 7323 2175
Email: [email protected]

British Society of Echocardiography
9 Fitzroy Square
London W1P 5AH

College of Radiographers*
207 Providence Square
Mill Street
London SE1 2EW
Tel: 020 7740 7200

Institute of Physics and Engineering in Medicine*
Fairmount House
230 Tadcaster Road
York YO24 1ES

Royal College of Midwives*
15 Mansfield Street
London W1M 0BE
Tel: 020 7312 3535

Society for Vascular Technology of Great Britain and Ireland*
Dr David Goss, SVT Education Committee Chairman
Department of Medical Engineering and Physics
King's College Hospital
Denmark Hill
London SE5 9RS
Tel: 020 7346 3711
Fax: 020 7346 4379
Email: [email protected]

United Kingdom Association of Sonographers*
36 Portland Place
London W1N 3DG
Tel: 020 7405 4950
Fax: 020 7242 3691
Email: [email protected]





Code of Professional Conduct
Guidance for Obstetric & Gynaecological Ultrasound Departments
Ultrasound Education for Radiographers
Professional Standards to be achieved in Diagnostic Imaging, Radiotherapy & Oncology
Occupational Standards for Diagnostic Ultrasound
Occupational Standards for Diagnostic Ultrasound – An abridged version
Professional Support – Publications List


Memorandum (Issue 1, October 1999)


Midwives Rules and Code of Practice – 1998
Guidelines for Professional Practice – 1996



Accreditation of the Vascular Technologist – November 1996 – relevant pages only
Syllabus 1998-99
Guidelines of Working Practice for the Vascular Technologist, published July 1995
Vascular Laboratory Practice (published by IPEM Publications – flyer with attached order request form available from CASE office)

Guidelines for Professional Working Standards – Ultrasound Practice – August 1996







The format of the course document is a matter for local custom and practice. It is recommended that it should be in the following form:

a) One document only, concisely expressed.
b) A4 size pages, indexed and numbered.
c) Sections bound or stapled within a clearly designated cover, identifying the name and address of the course.
d) A desirable uniformity of presentation.


The course document should include most/all of the following considerations:

2.1 Course Information

This should include a short introductory background to:

a) The education centre
b) The host institution
c) The faculty/department
d) A brief socio-geographic framework

2.2 Course Organisation

a) Management and advisory arrangements for the course.
b) Course leader responsibility for education policy and resource direction.
c) Financing of the course, including capital and revenue budget proposed for the support of the course.
d) Number and structure of course committees, including examinations committees.
e) Arrangements for staff to discuss educational policy and to take part in its formulation.
f) Arrangements for consultation between staff and students on day-to-day running and policy information.
g) Representation of student/clinical/educational interests in course administration.
h) Analysis of course time and integration within the institution.

2.3 Staffing

a) Description of the course team, including teaching and visiting staff, and their teaching and clinical experience.
b) Number of staff in post.
c) CVs for all staff concerned with course development and delivery.
d) Staff/student ratio.
e) Policy for staff development.

2.4 Students

a) Funding.
b) Admission procedure.
c) Entry selection procedure.
d) Health/welfare facilities available.
e) Personal tutoring system.
f) Equal opportunities system.



2.5 Course Philosophy

a) Rationale.
b) Aims and objectives.
c) Outcomes of the course – career prospects.


2.6 Course Syllabus

a) Rationale, aims, objectives, learning outcomes.
b) Schedule of course studies by year.
c) Integration of academic education with clinical practice.
d) Course content, context and bibliography.

2.7 Teaching/Learning Methods

a) Rationale.
b) Core teaching, team approaches.
c) Clinical education.

2.8 Clinical Education

a) Description and details of integration.
b) Aims and objectives of clinical education.
c) Length of placement/hours/availability.
d) Evidence that selection does not adversely affect other students.
e) Criteria for accrediting clinical placements.
f) Quality assurance procedures for clinical placements.
g) Criteria for selection and appointment of clinical supervisors and clinical assessors.

2.9 Assessment

a) Rationale – how the course aims and objectives are reflected.
b) Methods of academic and clinical assessment.
c) Schedule and weighting.
d) The constitution and terms of reference of the examination board.

2.10 Internal Course Monitoring

a) Arrangements for course appraisal by students and staff.
b) Evidence of appropriate involvement in course appraisal of clinicians, clinical supervisors, clinical assessors and other relevant groups involved with the course.

2.11 Regulations

a) Procedures for determining the unsuitability of students on academic, clinical and professional grounds.
b) Structures and terms of reference of the committees which operate the system.
c) How the system relates to that of the host institution.

2.12 Course Resource Provision

a) Teaching/lecturing/practical provision.
b) Library facilities.
c) Technological resources.
d) Laboratory facilities.
e) Clinical facilities.

2.13 Clinical Resources Provision

a) Teaching and learning resources, including personnel.
b) Ultrasound and ancillary equipment.
c) Case load and case mix.




A definitive course document (or appropriate amended sections) must be forwarded to CASE as soon as possible after the course has been approved or reviewed.


Definitive course documents provide CASE with a comprehensive course archive and facilitate the gathering of information on course developments and good practice.






The guidance set out in this section indicates the range of issues which arise when considering the course itself and the conditions under which the course will be delivered.




The quality of the human resource, i.e. the teaching staff, both academic and clinical, is crucial.


The course team will need to demonstrate a commitment to professional development.


Institutions must have policies for staff development and research to support the teaching.


The staff must be adequate in number and appropriately qualified. This applies equally to academic, clinical and support staff.



The physical resources needed to sustain the course will include all the appropriate clinical resources and also the relevant library and computer provision, medical resources, and specialist laboratory facilities and equipment.



Consideration will need to be given to:

a) The appropriateness of the accommodation available.
b) The existence of courses in related fields and the question whether there is or will be competition for resources which could have an adverse impact on the course.
c) Where common teaching with other courses is proposed, the suitability of this arrangement for the course under consideration.
d) The opportunities for students to mix with other students and to engage in group activities.
e) The arrangements for clinical education.





The philosophy and rationale for the course:


Aims and Objectives

The aims and objectives of the course: the aims being related to the purpose and intention of the course, and the objectives being related to the capabilities and competencies the students will be expected to demonstrate at the end.




The course structure to include a rationale for placements demonstrating the integrative nature of the course, and links with other courses, e.g. modular structure, shared learning, common teaching.


Content

a) The relevance of the course content to its title, aims and objectives, and the appropriateness of the sequence and progression of content.
b) The teaching and learning strategies proposed and their appropriateness to the course.
c) The feasibility of any proposals for interdisciplinary studies within the course.
d) The level of study proposed in the final stage of the course in relation to the award to which it will lead.
e) The provision for clinical education and the way it is proposed to integrate this with the rest of the course.
f) The proposals for any dissertation or written project, including the approval of chosen topics and the arrangements for supervision.



The criteria for admission in relation to the objectives: teaching methods and assessment, including where necessary consideration of issues relating to equal opportunities.



The rationale for assessment and the proposed scheme including the examination schedule (see Appendix E).


Management

a) The overall load on both students and staff.
b) The arrangements for the management, operation and monitoring of the course, including provision for student representation and tutorial guidance.



Viewing the course as a whole, whether it accords with the policies and requirements of the member organisations or whether it raises questions of interpretation of existing policy which should be referred back.







1.1 Does the scheme of assessment adequately match the objectives and learning outcomes of the individual modules, and make it possible to test the extent to which students have achieved those objectives?


1.2 Are the objectives clear? What are the criteria for success?



2.1 Are the assessment regulations clearly and unambiguously drafted, including provisions for the classification of awards and for re-sits where appropriate?

2.2 Will students be adequately informed of the requirements which must be met, both during and at the end of their courses and of the criteria for assessment?



3.1 Is the assessment load appropriate to the nature of the course and is it broadly comparable with that expected in other similar courses? Is there a risk of over or under-assessment?

3.2 Do the frequency and timing of the stages of assessment appropriately reflect the nature of the course and its progression requirements? e.g. the timing of any division between parts in a course.

3.3 Is the relative weighting between different stages of assessment appropriate where they count towards the final award?




4.1 Is the weighting attributed to the various components of the assessment scheme appropriate?

4.2 Where practical performance is to be assessed, are there appropriate arrangements for the use of audio or video tape recordings, or of other forms of records, such as student log-books?

4.3 Are provisions for assessment of course work appropriate both in quantity and weighting compared with any formal examinations and in relation to the objectives of the course?




5.1 What means are proposed for determining the classification of students? Is the scheme based entirely on aggregation of marks, or does it provide for other techniques, including comparison of separate systems, for arriving at a classification?

5.2 What proportion of the final results counts towards the classification? Can students fail any part and still receive an award? Is there an element of compensation between different components of the assessment scheme?
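Classification by aggregation of weighted marks, as raised in question 5.1, can be sketched as follows. The component names, weights and class boundaries are invented for illustration; in practice they are set by the Institution's Examination Board and regulations.

```python
# Illustrative sketch of classification by aggregation of weighted marks.
# Weights and class boundaries are hypothetical, for illustration only.

WEIGHTS = {"coursework": 0.30, "examination": 0.40, "clinical": 0.30}
BOUNDARIES = [(70, "Distinction"), (60, "Merit"), (50, "Pass")]

def classify(marks):
    """Aggregate weighted component marks and map them to a classification."""
    aggregate = sum(marks[component] * weight
                    for component, weight in WEIGHTS.items())
    aggregate = round(aggregate, 1)
    for threshold, label in BOUNDARIES:
        if aggregate >= threshold:
            return aggregate, label
    return aggregate, "Fail"

agg, label = classify({"coursework": 62, "examination": 71, "clinical": 68})
print(agg, label)  # 67.4 Merit
```

A purely aggregative scheme like this permits compensation between components; question 5.2 asks whether that is intended, which is why the clinical component is usually gated separately by a pass/fail hurdle.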




6.1 What are the arrangements for the involvement of External Examiners in the assessment process?

6.2 Is the requirement of the Examination Board appropriate? Where a complex scheme requires a tiered Examinations Board structure, are there adequate arrangements for the examiners to take an overall view of each student’s performance?

6.3 Are there appropriate arrangements, where necessary, for double marking by internal examiners and/or internal moderation across subjects?

6.4 Are the criteria for assessment clearly worked out, and likely to be understood and applied by all the examiners involved?

6.5 What arrangements are proposed to ensure the validity and objectivity of the assessment process? Is the scheme such that the internal examiners' assessment is accessible to appraisal by external examiners?







Is the student intake compatible with:

a) The course's admissions policies?
b) The course's academic requirements?
c) How far have enrolment targets for the course been achieved and, if not, what action has been taken?



Is staff provision appropriate to the level and nature of the course in:

a) Academic staffing?
b) Clinical staffing?
c) Support staff?




What do the relevant quantifiable data reveal about the quality of the course e.g. pass rates, classification levels, national comparisons, awards and prizes?


What is the withdrawal rate for the course and has it risen or fallen over the last 3/5 years?


What has been the trend in pass rates and is this satisfactory?



4.1 Is the current teaching and learning strategy in the academic and clinical environment appropriate and effective?

4.2 Are the teaching and learning methods employed appropriate to the objectives, structure and content of the programmes of study? e.g.:

a) To what extent do lectures and other forms of teacher presentation stimulate learning?
b) How effective is the level of interaction in seminars and tutorials?
c) How far does small group teaching improve student understanding of the subject(s)?
d) To what extent are staff able to encourage students to be involved and form independent judgements?
e) How far can staff give attention to the interests and problems of individual students?


4.3 Has the Course Board of Studies responded to student comment and peer group assessment, e.g. from external examiners?


4.4 Does the Course Board of Studies have an effective system of receiving student feedback?


4.5 Has the Course Board of Studies responded positively to such feedback?

4.6 Is such feedback elicited at all stages, including comment from candidates who have successfully completed the course?





What judgements can be made on the subject knowledge and teaching skills of the teachers?


Do the research and staff development activities of the teachers actively underpin the course?


Is there adequate training and support for clinical supervisors and clinical assessors?


Do students make favourable judgements about the quality of teaching received?


Do the teachers form a balanced and cohesive team?


Do the teachers effectively monitor and support one another?


Is there a sensitive environment of self-appraisal?




Is the Course Board of Studies effective?


Does the Course Director receive adequate support?


Is the tutorial support system effective?



Is the resource base of the course adequate in terms of:

a) Teaching accommodation?
b) Library provision?
c) Specialist rooms?
d) Technical support?
e) Laboratories?
f) Equipment?
g) Learning facilities?
h) Student access and usage?
i) Clerical and administrative support?




How far does the course take advantage of the academic and clinical environment within which it operates? (This should include the School of Study, Departmental, Faculty, Institution and clinical operations, as appropriate.)


How does the course make use of the library, laboratory and computer services?





Each member organisation maintains a rigorous national overview of its own professional concerns. The Consortium for the Accreditation of Sonographic Education focuses those concerns to ensure an appropriate and relevant perspective on sonographic education in all of its aspects. The Consortium was formed in 1993 following a meeting at which the member organisations were represented. All members were found to share common concerns that standards of service provision and education be developed in concert with increasing provision and improving technology. All share an equal appreciation of the need to ensure the closest possible collaboration in education and training provision, to ensure its relevance and quality, and to make validation and accreditation procedures as robust and unbureaucratic as possible. Each member organisation is unique and has its own particular agenda, but the Consortium provides the forum for an interchange of ideas in which shared views may be synthesised and thus inform education.



The aim of the Consortium shall be to promote the best and most relevant sonographic education and training by providing a forum for the expression of the widest and most informed view of service needs and by accrediting those examples of best educational and training practice.




To facilitate collaboration with regard to developments which affect education and training in sonography.

3.2 To identify common themes and good practice at all levels of sonographic education and training and to encourage and promote their development.

3.3 To establish a co-ordinated approach to setting, maintaining and enhancing standards and training in sonography.


To establish mechanisms for the joint validation and periodic review of all professional courses in sonography.

3.5 To establish and maintain a register of those professional courses in sonography which have been successfully validated and accredited.




The Consortium shall publish a handbook on accreditation, validation and review setting out its criteria for accreditation of courses, and its procedures and mechanisms, for validation, review and monitoring of courses.


The Consortium shall maintain registers of approved validators, examiners, education and training consultants, and any other groups as the Consortium shall deem appropriate to the pursuance of its aims. Specialist sonographic interests shall be represented in those registers. (Memorandum of Agreement as agreed by the member organisations of CASE 1998).


The Consortium shall maintain a register of accredited courses.


Accreditation fees for entry and retention on the Consortium register will be payable with effect from 1st October 1994. The level of fees will be decided annually.


Accreditation will be for a period not exceeding five years, after which a course will be reviewed and, if successfully reviewed, will be reaccredited and eligible for retention on the register.



The Consortium shall retain the copyright of all publications it may from time to time publish.

All matters relating to CASE activities should be directed through the CASE Co-ordinator at the following address:

The CASE Co-ordinator
c/o The British Medical Ultrasound Society
36 Portland Place
London W1B 1LS
Tel: 020 7467 9759
Fax: 020 7323 2175
Email: [email protected]