NURSING ROLES

What do mentors find difficult?

Laurence G Moseley

MA, MBCS, CITP

Professor of Health Services Research, Faculty of Health, Sport and Science, University of Glamorgan, Pontypridd, UK

Moira Davies

BN, RGN

Senior Lecturer in Nursing, Faculty of Health, Sport and Science, University of Glamorgan, Pontypridd, UK

Submitted for publication: 11 February 2007 Accepted for publication: 2 August 2007

Correspondence: LG Moseley, Professor of Health Services Research, HESAS, Glyntaff Campus, University of Glamorgan, Pontypridd CF37 1DL, UK. Telephone: +44 1594 826 354. E-mail: [email protected]

MOSELEY LG & DAVIES M (2008) Journal of Clinical Nursing 17, 1627–1634. What do mentors find difficult?
Aims. (i) To assess whether mentors had a positive or negative attitude towards their role; and (ii) to discover what aspects of the role they found easy or difficult.
Background. The importance of mentorship in nurse training was recognized by Sir Leonard Peach, by the United Kingdom Central Council for Nursing, Midwifery and Health Visiting and by the Nursing and Midwifery Council, which has recently published new standards to support learning and assessment in practice, including standards for the preparation of mentors, to be implemented by September 2007. There are many anecdotal reports of the problems which face mentors, but little firm evidence.
Method. This paper reports a study of those problems. It used a Thurstone scale to assess role satisfaction among mentors (n = 86, response rate 89%) and two Likert scales to assess where problems, if any, lay.
Results. Contrary to anecdotal reports, the Thurstone scale found that, overall, mentors regarded the role positively. In addition, a principal components analysis of responses to the Likert scales showed two clearly delineated factors. The first (interpersonal/organisational factors) has been widely discussed in the literature. The second (cognitive/intellectual factors) has rarely been discussed and could with profit be more strongly stressed in mentor training.
Conclusions. (i) Mentors had a positive attitude towards their role and enjoyed it. (ii) When looking at what caused mentors difficulty, in addition to the commonly discussed dimensions of organisational constraints (workload, skill mix) and interpersonal factors, there was clearly an additional cognitive one. Knowledge, not just personality, mattered.
Implications for clinical practice. Mentors and those who train them could with profit pay more attention to the cognitive components of the role, even if that meant laying a lesser stress on the interpersonal ones.

Key words: attitudes, nurses, nursing, Likert & Thurstone scales, mentors, role

© 2007 The Authors. Journal compilation © 2007 Blackwell Publishing Ltd. doi: 10.1111/j.1365-2702.2007.02194.x


LG Moseley and M Davies

Introduction
The study reported here was undertaken by one of the authors (MD) as part of a Welsh National Board (WNB) fellowship for training in research, supervised by LGM. MD organizes the mentorship preparation courses for a UK higher education institution which makes in excess of 2500 placements per annum. LGM is interested in methods, and in ways of checking whether anecdotal evidence is accurate and representative. The study arose because, in many courses for, and contacts with, mentors, two common impressions gained were that:
• mentors were unhappy with their role;
• the major elements of the role were interpersonal and organisational.
These impressions arose from volunteered anecdotes – what are commonly called qualitative data. We wished to check whether they could be substantiated in a more structured and repeatable way. We were also interested in looking in more detail at whether or not the mentors were negative about their role, and at which elements of that role were easy or difficult for them.

Method – sample and response rate
We had available a group of mentors who were being trained at our own institution. The criterion for inclusion in the study was that the respondent should be a registered nurse who had pursued a mentor preparation programme. We invited participation from registered nurses who were supporting preregistration students in practice. The resulting sample covered all branches of nursing. The questionnaire was distributed to 104 mentors. Eighty-six were returned, giving a response rate of 89%. The results from the questionnaires were analysed using SPSS 14.0 (SPSS Inc., Chicago, IL, USA). Although the return rate was high by the usual standards of nursing research, we shall not try to generalise from our findings, as the sample was not known to be random and the response was less than complete. However, we think that the methods will be of interest to other researchers, and that the results, however cautious we may be about them, are clearly more reliable than unchecked anecdotal data.

Method 1 A Thurstone scale
There is considerable evidence that one can obtain more accurate impressions and overcome researcher bias by using properly developed psychological instruments, culminating in

a systematic review (Meyer et al. 2001). That paper brought together results from 125 meta-analyses on test validity and 800 samples examining multi-method assessment. The authors drew several general conclusions, of which the three most relevant to our context were:
1 psychological test validity is comparable with medical test validity;
2 distinct assessment methods provide unique sources of information;
3 clinicians who rely exclusively on interviews are prone to incomplete understandings.
Anyone who has had their life saved by an X-ray or blood test will be struck by conclusion 1. Anyone who wishes to use multiple methods will be struck by conclusion 2. Anyone who relies on informal or unstructured interviews alone should be struck by conclusion 3, especially as it is based on such a large body of evidence. Our initial impression was based on such informal interviewing (anecdotes), so it seemed sensible to try to overcome any resulting problems by using a formally developed scale.
To investigate whether the mentors' attitudes were as negative as our anecdotal experience had led us to believe, we used a Thurstone scale. Such scales are uncommon in the nursing literature: a search on CINAHL found 1032 references to Likert scales and only 31 to Thurstone scales, roughly a 30:1 preference for the former. However, although they have to date rarely been used in nursing research, Thurstone scales have advantages over the more frequently encountered Likert scales (and certainly over 'Likert-type' scales). In particular, they minimize or avoid the danger of social desirability effects biasing responses, largely because respondents do not know what score or weight is going to be given to each of their answers. In addition, Thurstone scales, in which the scoring of items is in the hands of a panel independent of the researchers, overcome the common danger of researcher bias.
Part of the creation of a Thurstone scale involves the following:
1 Devise some statements (items) which the researcher believes to be meaningful. In the current study, these items were statements which could be regarded as indicating a positive or negative attitude towards the mentor's role. That is the creative step, but not the final one.
2 Ask a panel of informed and representative people (not our ultimate respondents) to indicate, in their view, just how positive or negative each item is (irrespective of whether they agree or disagree with it). In the current case, the panel were invited to give a numerical indication of the degree of positive or negative affective tone which each item conveyed. They were asked to do this using a scale



running from −3 (very negative) through 0 (neutral) to +3 (very positive) – effectively, a seven-point scale.
3 Set up some fixed rules for deciding whether the panel had achieved consensus in their judgements. For example, one might state that consensus had been achieved if all the judgements spanned no more than three adjacent scale points and the mode was on the middle of those three points, with different rules permitted for the end points.
4 The important point is that, after conducting this exercise, one has statements of which the meaning (in this case, the degree of positive or negative affective tone) is determined not by the researcher or researchers, but by a social consensus among the independent panel. We regard that as the minimum safeguard necessary to protect against researcher bias.
If the panel can achieve consensus on a given statement, it is treated as meaningful, and the panel's score on that item is used in the final instrument. However, if the panel cannot achieve consensus according to the rules, the item is deemed to be meaningless and is simply thrown away. The researchers then have to devise one or more replacement items and submit them to the panel. This process (devise, test, throw away if necessary) continues until consensus is achieved. Then, and only then, is the scale offered to the final respondents. It is a hard discipline, but it ensures that the items which remain after the process of attrition can be claimed to be meaningful. They do not depend on the whim of the researchers. Given that the current project was in part a training exercise for MD, the development effort which went into this scale was less than would be the case in a fully funded project.
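The example consensus rule in step 3 can be sketched in a few lines of code. This is a hypothetical sketch of that rule only, not the study's actual procedure; the simplified handling of narrow spans and end points is an assumption.

```python
# Sketch of the panel-consensus rule from step 3, assuming a -3..+3 rating
# scale: consensus holds if all judgements span no more than three adjacent
# scale points and the mode falls on the middle of those three points.
# The treatment of spans narrower than three points is a simplifying assumption.
from statistics import mode

def has_consensus(judgements):
    """Return True if the panel's ratings for one item meet the rule."""
    lo, hi = min(judgements), max(judgements)
    span = hi - lo + 1                 # number of adjacent scale points covered
    if span > 3:
        return False                   # ratings too spread out
    if span < 3:
        return True                    # tighter than required (simplified case)
    return mode(judgements) == lo + 1  # mode must sit on the middle point

# A tightly clustered item passes; a polarised item is discarded.
print(has_consensus([2, 3, 3, 2, 3]))    # True
print(has_consensus([-3, 0, 3, 3, -2]))  # False
```

An item failing this check would be thrown away and replaced, exactly as the devise–test–discard cycle above describes.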
For example, to obtain a scale of seven points, with two items at each point (so that consistency among respondents, not among panel members, could later be checked), giving a scale of 14 items in all, the researcher had to start with 35 items. That means that the researcher had to throw away 60% (21/35) of the items which she had originally devised, entirely because there was no adequate social consensus among the panel as to what the items meant. That is quite a tough discipline. Throwing away 60% of the questions which you originally thought were meaningful is a humbling experience, and teaches one a great deal about what words mean, and about how much people differ in their use of words. However, in the grander scheme of Thurstone scaling, that is really quite a moderate attrition rate. In another study (Moseley et al. 1998), in which the items were devised by three researchers and rated by a panel of 10, they had to


throw away, for lack of adequate consensus, 210 of the 232 items originally devised – an attrition rate of 91%. By that standard, the attrition rate in the current study (60%) was really rather modest. The effort was worthwhile to inject meaning into the words used, and to safeguard against researcher bias.
Once this list had been developed, respondents were merely offered each of the items and asked to tick a box if they agreed with it. They were not, as in the Likert format, offered any scale-point labels such as 'Strongly agree', 'Unsure', etc. They were invisibly allocated a score for each item which they ticked, and did not know the scores which would subsequently be allocated to their choices. That in itself is a strong safeguard against social desirability effects. All the scores were then summed for a respondent to give a measure of their overall attitude. An overall score for a respondent could thus range from −12 (as negative as possible) to +12 (as positive as possible), with any combination of scores in between. The order of presentation of items in the scale was randomised. The outcome of this exercise is shown in Table 1.
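The hidden scoring can be illustrated as follows. The item wordings are taken from the scale, but the panel weights shown are invented stand-ins, since the study does not report the actual weight assigned to each item.

```python
# Hypothetical illustration of Thurstone scoring: each item carries a hidden
# panel-assigned weight from -3 to +3, and a respondent's attitude score is
# the sum of the weights of the items they ticked. These weights are invented
# for illustration; the actual panel weights are not reported in the paper.
panel_weights = {
    "It is rewarding seeing the preregistration nursing students develop": 3,
    "Some parts I enjoy and some I don't": 0,
    "It is time consuming and I do not enjoy it": -3,
}

def respondent_score(ticked_items):
    """Sum the hidden panel weights of every item the respondent ticked."""
    return sum(panel_weights[item] for item in ticked_items)

print(respondent_score([
    "It is rewarding seeing the preregistration nursing students develop",
    "Some parts I enjoy and some I don't",
]))  # 3
```

Because respondents only tick boxes and never see these weights, there is little scope for tailoring answers to look socially desirable.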

Result 1 Were attitudes to mentorship negative?
It is clear at the most commonsense level that the items ticked by a large proportion of mentors were in general the positive ones, while those ticked by the fewest mentors were in general the negative ones. That on its own undermines the pre-existing anecdotal impression that mentors' attitudes were negative. A particularly high proportion ticked the items about the 'pleasure' and the 'rewarding' nature of working with students. However, that is an impression based on an informal visual scan of the pattern of responses. What is the picture when we take account of the panel scores of the items ticked? Even the briefest glance at the bar chart in Fig. 1 shows that our impressions from the visual scan of response patterns were correct, and the anecdotal evidence (and the impressions of many other people) was quite wrong. The distribution is skewed heavily to the right: 96% of the responses were greater than 0, 45% were more than three-quarters of the way along the scale, and 10% showed the maximum possible positive attitude. It does not give a picture of a group of mentors who are unhappy with their role. Rather, it gives a picture of a group who are generally satisfied, with a substantial minority being very satisfied.




Table 1 Agreement with individual items on the scale

Wording of item — % agreeing
With motivated preregistration nursing students it is a pleasure — 88
It is rewarding seeing the preregistration nursing students develop — 85
It helps keep me up to date — 81
I have learnt a lot by doing it — 69
Some parts I enjoy and some I don't — 63
Some people are suited to the role and some are not — 55
I get no credit/recognition for doing it — 28
The skill mix on the ward does not allow for proper supervision — 21
The level of my preparation does not allow me to carry out the role satisfactorily — 20
It is a challenge and makes me think about my practice — 17
I really look forward to it — 16
I do not understand what is expected of me — 7
I have too many preregistration nursing students allocated at the same time — 5
It is time consuming and I do not enjoy it — 3

[Figure 1 Distribution of total scores on the mentorship attitude scale. Bar chart: total scale score (−12 to +12) on the x-axis; percentage of mentors with each total score (0–20%) on the y-axis.]

Method 2 Two Likert scales – for eliciting what was difficult
Of course, finding out whether mentors were satisfied with their role may be of general interest. It does not, however, on its own give us any information about how we might improve matters – for example by modifying our current training arrangements for them. Our impression was that there were at least three elements to the satisfactions or concerns that mentors felt. Broadly, they were:
• organisational issues to do with skill mix, timing, overloading, etc.

• interpersonal issues to do with personality, attitudes of students, etc.
• cognitive issues to do with the mentor's understanding of the student's theoretical study, keeping up to date, giving feedback, assessment, etc.
As a second part of the study, we used two seven-point Likert scales, each with its own question and list of items to be rated. Both aimed at seeing what contributed to a positive or negative mentorship experience. The items were classified into the three categories above, i.e. broadly organisational (O), interpersonal (P) or cognitive (C). Each of the items below is labelled according to the category to which we had allocated it. It should be noted that this categorisation was undertaken before any of the analysis which we report below had been carried out. We did so to prevent any post hoc biases: it is quite possible to obtain plausible explanations of results after you have seen them, producing the well-known effect of over-fitting. If, as in our case, the categories of independent variables for analysis are specified in advance, then a major source of potential bias is removed.
The two question stems were worded differently. The first question (question 5 on the schedule) was 'How often do each of the following factors cause you concern?'. It was presented with a seven-point scale labelled Never a factor (1), Occasionally a factor (3), Frequently a factor (5) and Always a factor (7), with the other points unlabelled. The items about which the question was asked were the following:
• the student's gender (P)
• the student's age (P)
• the time of the year (O)
• the student's previous experience (P)



• senior staff attitudes (P)
• your own personality (P)
• the student's timekeeping (O)
• the amount of work created by the student (O)
• the student's personality (P)
• your own preparation (C)
• the number of shifts you work with the student (O)
• the student's level of preparation/skill (P)
• skill mix on the ward (O)
• time availability (O)
The second question (question 8 on the schedule) was 'How easy do you find each of the following activities?'. It was presented with a seven-point scale labelled Extremely difficult (1), Fairly difficult (3), Fairly easy (5) and Extremely easy (7), with the other points unlabelled. The items about which the question was asked were the following:
• developing an effective relationship (P)
• serving as a role model (P)
• integrating students into practice (P)
• assessing the student (C)
• creating a learning environment (C)
• keeping up to date with the student's programme (C)
• providing constructive feedback to the student (C)
It should be immediately apparent that the direction of the scoring is different for the two questions. For question 5, it runs from positive (1) to negative (7); for question 8, it runs in the opposite direction, from negative (1) to positive (7). This reverse scoring is a recommended technique to minimize routine and unthinking selection by respondents. However, it does mean that one cannot easily summarise the results. After data entry, in our SPSS analysis, we therefore reversed the scoring for question 8 so that both questions would run from positive to negative. This meant that a high score on an individual item from an individual respondent meant 'this is negative (cause for concern, difficult to do)'. Obviously, when one calculated total scores, summarising the results for all items and for all respondents, a high score on the overall scale would also have a negative connotation. In reading our results, it is important to remember that high means negative.
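The recoding step described above is simple arithmetic on the 1–7 scale, and can be sketched as follows (a minimal illustration of the standard reverse-scoring transformation, not the authors' SPSS syntax):

```python
# Reverse-scoring sketch: question 8 ran from negative (1, "Extremely
# difficult") to positive (7, "Extremely easy"), so each answer x is
# recoded as 8 - x, making high scores mean "negative" on both questions.
def reverse_seven_point(score):
    if not 1 <= score <= 7:
        raise ValueError("score must lie on the 1-7 scale")
    return 8 - score

print(reverse_seven_point(1))  # 7: "Extremely difficult" becomes most negative
print(reverse_seven_point(7))  # 1: "Extremely easy" becomes least negative
```

Note that the midpoint (4) maps to itself, so the transformation changes only the direction of the scale, not its shape.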

Result 2 What are the elements which mentors found easy or difficult?
The means and coefficients of variability are given in Table 2. Recall that the higher the average score is, the more negative

Table 2 Means and coefficients of variability for the scale items

Item — Mean — CV
The student's gender (P) — 1.23 — 63
The student's age (P) — 1.55 — 71
The time of the year (O) — 1.73 — 77
The student's previous experience (P) — 2.08 — 62
Senior staff attitudes (P) — 2.12 — 78
Your own personality (P) — 2.33 — 75
The student's timekeeping (O) — 2.49 — 52
The amount of work created by the student (O) — 2.87 — 29
The student's personality (P) — 3.01 — 62
Your own preparation (C) — 3.02 — 25
The number of shifts you work with the student (O) — 3.02 — 33
The student's level of preparation/skill (P) — 3.14 — 54
Skill mix on the ward (O) — 3.16 — 25
Time availability (O) — 3.28 — 59
Developing an effective relationship (P) — 3.34 — 40
Serving as a role model (P) — 3.35 — 49
Integrating students into practice (P) — 3.58 — 26
Assessing the student (C) — 3.69 — 33
Creating a learning environment (C) — 4.29 — 29
Keeping up to date with the student's programme (C) — 4.69 — 36
Providing constructive feedback to the student (C) — 4.75 — 20

(cause for concern or difficult) an item is, and the lower the coefficient of variability, the more consensus there is on an item.
Firstly, it is clear that the scale was discriminating. The range of mean scores ran from 1.23–4.75 (of a maximum possible negative score of 7). It was, however, skewed towards the lower end: for most items, the views of the respondents tended towards the positive. Although many respondents rated many items above the midpoint, the general tendency was to score low, and no item obtained an average greater than 5. This reinforces the picture which emerged from the mentorship Thurstone attitude scale – that in general mentors were positive about their role. Such independent support increases one's confidence in that result.
Secondly, it is striking that the highest (most negative) scores were for four of the five cognitive items. They caused concern and were difficult to achieve. The only item which we had in advance categorised as cognitive which did not appear at the most negative end of the spectrum was the mentor's own preparation. Perhaps we might have to rethink our categorisation of that item in future studies; to change it now would be to distort the data. Post hoc rationalisations are easy, and nearly always wrong.
Thirdly, the amount of variability differed between categories. For the interpersonal and organisational categories the median coefficients of variability were 54% and 56%, respectively, i.e. although overall people were




positive about those aspects, they did not agree very closely about them. By contrast, for the cognitive items, the median coefficient of variability was only 29%. Clearly, not only did our respondents have difficulty with the cognitive elements of the role, but they were agreed that these topics were difficult. Thus, we have good reason to believe that it is the cognitive elements of the role which raise concerns or cause difficulty. Should one believe these tentative findings? One has to note that, firstly, the items were scattered throughout our schedule. There was no element of their being placed close together, and therefore it is unlikely that respondents developed a mental set or answered routinely. Secondly, the items in the various categories were included as potential answers to two different questions, with differently labelled and differently ordered rating scales. Again, this should have prevented unthinking routine answers. Thirdly, there was the inclusion of reversed scale labels. Taken together, these characteristics mean that if respondents were trying to fake their answers to give a particular impression, they would have had to be both methodologically sophisticated and willing to devote a large amount of time to their deviousness. They would also have needed a strong eye for detail and the ability to give a consistent false impression. Such skill and deviousness is improbable! In the light of these considerations, we think that as a first approximation, these results give a fair impression of which parts of the role mentors actually do find difficult. They are the cognitive elements of the role.
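The coefficient of variability used in Table 2 is conventionally the standard deviation expressed as a percentage of the mean; this sketch assumes that definition, which matches the 0–100 range of the reported values. The two rating lists are invented examples, not study data.

```python
# Coefficient of variability (CV): standard deviation as a percentage of the
# mean. A low CV means respondents agreed closely on an item; a high CV means
# opinions were spread out. The ratings below are hypothetical examples.
from statistics import mean, stdev

def coefficient_of_variability(ratings):
    return 100 * stdev(ratings) / mean(ratings)

consensual = [3, 3, 4, 3, 3, 4]  # same mean, tightly clustered
divided = [1, 6, 1, 7, 2, 3]     # same mean, widely spread

print(round(coefficient_of_variability(consensual)))  # 15
print(round(coefficient_of_variability(divided)))     # 77
```

This is why the low median CV (29%) for the cognitive items is informative: the mentors not only found those items difficult, they agreed closely that they were difficult.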

Method 3 Principal components analysis
Was there any more formal clustering of things which were difficult? So far, we have looked at simple descriptive statistics and visual inspection of elementary results in an attempt to find underlying traits. That was what gave us the idea that it was the cognitive elements which caused mentors the most problems. However, it is difficult to see with the naked eye patterns which may lie hidden in a mass of data (even in this small-scale study there were 4380 values to analyse, well beyond the normal limits of unaided human information processing – by a factor of about 1000). Indeed, there is a lot of evidence from cognitive psychology to suggest that it is this combination of data items which is difficult for humans to do. That is true both in theory (Miller 1956, Baddeley 1994, Moseley 1999, Cowan 2001) and in practical clinical comparisons between informal, commonsense methods and algorithmic methods of decision making (Meehl 1954,

Sawyer 1966, Dawes & Corrigan 1974, Dawes 1979, Marchese 1992). It has led to some substantial textbook reviews of the evidence (Kahneman et al. 1982, Plous 1993, Baron 2000, Gilovich et al. 2002, Myers 2004) and a recent meta-analysis (Grove et al. 2000). The latter found that only eight of the 136 studies covered showed any advantage to informal methods of decision making. Clearly, to overcome our own potential biases, we had to undertake some more formal analysis in addition to the descriptive and visual one already reported. We therefore decided to undertake a principal components analysis of the relevant Likert questions to see whether there actually were (not just whether we thought that there were) any underlying factors which might emerge. Three factors accounted for 50% of the variance and two of those

Table 3 Principal components analysis of things which mentors found easy or difficult (loadings on components 1 and 2)

Item — Component 1 — Component 2
Your own personality causes you a problem — 0.83 — 0.02
Staff attitudes causes you a problem — 0.78 — 0.01
Students' level of preparation/skill causes you a problem — 0.72 — 0.19
Number of students allocated to you causes you a problem — 0.70 — 0.25
Skill mix causes you a problem — 0.68 — 0.10
Time availability causes you a problem — 0.65 — 0.02
Number of shifts you work with the students causes you a problem — 0.64 — 0.07
Amount of work created by the students causes you a problem — 0.63 — 0.15
Students' timekeeping causes you a problem — 0.62 — 0.24
Your own preparation causes you a problem — 0.60 — 0.01
Students' personality causes you a problem — 0.60 — 0.19
Students' age causes you a problem — 0.52 — 0.07
Students' previous experience causes you a problem — 0.48 — 0.10
Time of the year causes you a problem — 0.47 — 0.27
Developing an effective relationship — 0.06 — 0.85
Integrating students into practice — 0.13 — 0.79
Providing constructive feedback to the students — 0.21 — 0.69
Serving as a role model — 0.19 — 0.63
Assessing the students — 0.13 — 0.62
Creating a learning environment — 0.25 — 0.58
Keeping up to date with the students' programme — 0.15 — 0.46



accounted for 43% of the variance. Table 3 shows the factor loadings for each of the major variables in the study. The picture which emerged was clear. One component comprised the first 14 Likert items listed in Table 3; the other comprised the remaining seven. As the table shows, the two are very clearly distinguishable from each other. Of course, as with any principal components analysis, the statistics merely tell you which items make up a component; they do not tell you how to label it. We have tried to do the labelling. Although there is some overlap, it is very small, and overall the sorts of things that mentors found easy or difficult fall neatly into two categories. We would label the first component 'interpersonal and organisational factors'. These are the sorts of items which most writing on mentorship would have mentioned – what might be called the 'human face' of nursing. The second set of items is quite different; we have labelled it 'cognitive'. It is interesting that some of the concepts which historically have been used in the discussion of mentorship (developing an effective relationship, integrating students into practice and serving as a role model), which one might be tempted to think of as 'interpersonal', are in fact statistically associated with the second, cognitive, component. It is clear that there is a major cognitive/intellectual component to mentorship. This is the one which has been neglected in most past writing on the subject, apart from authors who have written on the 'failing to fail' phenomenon (Duffy 2004, Wilson et al. 2006).
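The general shape of such an analysis can be sketched with NumPy. This is a generic illustration of principal components on standardised Likert responses, run here on random stand-in data (86 respondents by 21 items, matching the study's dimensions); it does not reproduce the authors' SPSS analysis or their loadings.

```python
# Sketch of principal components analysis on standardised Likert responses.
# The data are random stand-ins with the study's dimensions (86 x 21), not
# the actual study data.
import numpy as np

rng = np.random.default_rng(0)
responses = rng.integers(1, 8, size=(86, 21)).astype(float)

# Standardise each item, then take eigenvectors of the correlation matrix.
z = (responses - responses.mean(axis=0)) / responses.std(axis=0)
corr = np.corrcoef(z, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(corr)  # eigh returns ascending eigenvalues
order = np.argsort(eigvals)[::-1]        # reorder: largest component first

explained = eigvals[order] / eigvals.sum()              # variance per component
loadings = eigvecs[:, order] * np.sqrt(eigvals[order])  # item-component loadings

print(explained[:3].sum())  # share of variance captured by the first 3 components
```

In the study, the first three components captured 50% of the variance; items are then assigned to the component on which they load most heavily, which is how the two blocks of Table 3 arise.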

Conclusions
We set out to investigate two questions in a robust and replicable way:
1 Were mentors in general positive or negative about their role?
2 About which aspects of the role were they particularly positive or negative?
The answers which emerged were fairly clear-cut:
1 They were, overall, markedly positive about the role.
2 On a simple analysis, the interpersonal aspects of the role were easy to achieve and were viewed positively. However, there was a marked tendency for mentors to have more difficulty with the cognitive aspects of the role.
In addition, our principal components analysis showed a very clear factor structure. The social and interpersonal aspects which have been central to writing on mentorship clearly formed one component. However, equally clearly


there was a cognitive and intellectual component to which attention is rarely drawn.
In terms of training mentors, our tentative findings about which parts of the role cause the most difficulty may well be relevant. It has long been a truism that nursing students find the cognitive subjects (largely the life sciences and other numerate subjects) difficult, and attention has been turned to trying to improve matters in that area. There are indications that students have difficulty with the more cognitively demanding aspects of their course; our tentative findings appear to indicate that mentors have similar difficulty with the cognitive aspects of their role. If this finding is accepted, it would imply that the training of mentors should concentrate on factors such as knowing the structure of the students' theoretical studies, keeping up to date, finding ways of giving structured feedback, and assessing students' knowledge and performance.
Recent work includes the issues raised by Duffy (2004), where the question of mentors 'failing to fail' preregistration students was addressed. This created a number of major concerns for the profession, and the preparation of mentors was questioned. Given the high proportion of students' time which is spent in clinical practice, the mentor will have a major role in assessing whether students will be fit for practice at the point of registration, and in ensuring and certifying such fitness. Mentors will need to demonstrate skill in developing and assessing students' intellectual competencies, not merely their interpersonal ones. The difficulty of doing this has been reflected in a study of the reaction of some nurses to the concept of the expert patient (Wilson et al. 2006). Should training move in that direction, it would represent a further step in nursing's journey from the old stereotype of a strong back and a kind heart towards a newer, more professional image based upon knowledge and skill.
With the implementation of the new mentor standards (NMC 2006), the role of the mentor, not only in supporting but also in assessing student nurses, will be seen even more strongly as a major component of preregistration nurse education.

Contributions
Study design, data analysis and manuscript preparation: LGM. Data gathering, literature searching and policy background: MD.

References
Baddeley A (1994) The magical number 7 – still magic after all these years. Psychological Review 101, 353–356.



Baron J (2000) Thinking and Deciding, 3rd edn. Cambridge University Press, Cambridge.
Cowan N (2001) The magical number 4 in short-term memory: a reconsideration of mental storage capacity. Behavioural and Brain Sciences 24, 87–185.
Dawes RM (1979) The robust beauty of improper linear models in decision making. American Psychologist 34, 571–582.
Dawes RM & Corrigan B (1974) Linear models in decision-making. Psychological Bulletin 81, 95–106.
Duffy K (2004) Failing Students. NMC, London. Available at: http://www.nmc-uk.org (last accessed 18 November 2007).
Gilovich T, Griffin D & Kahneman D (2002) Heuristics and Biases: The Psychology of Intuitive Judgement. Cambridge University Press, Cambridge.
Grove WM, Zald DH, Lebow BS, Snitz CE & Nelson C (2000) Clinical versus mechanical prediction: a meta-analysis. Psychological Assessment 12, 19–30.
Kahneman D, Slovic P & Tversky A (1982) Judgement Under Uncertainty: Heuristics and Biases, Part IV. Cambridge University Press, Cambridge.
Marchese MC (1992) Clinical versus actuarial prediction: a review of the literature. Perceptual and Motor Skills 75, 583–594.
Meehl PE (1954) Clinical versus Statistical Prediction. University of Minnesota Press, Minneapolis, MN.
Meyer GJ, Finn SE, Eyde LD, Kay GG, Moreland KL, Dies RR, Eisman EJ, Kubiszyn TW & Reed GM (2001) Psychological testing


and psychological assessment – a review of evidence and issues. American Psychologist 56, 128–165.
Miller G (1956) The magical number seven, plus or minus two: some limits on our capacity for processing information. Psychological Review 63, 81–97.
Moseley LG (1999) Is Thinking Just Too Difficult? Inaugural Lecture, University of Glamorgan, Pontypridd.
Moseley LG, Mead DM & Cook RM (1998) Experience of, Knowledge of, and Opinions about, Computerised Decision Support Systems among Health Care Clinicians in Wales. Report to the Wales Office of Research & Development for Health & Social Care.
Myers DG (2004) Intuition: Its Powers and Perils. Yale University Press, New Haven, CT.
Nursing and Midwifery Council (2006) Standards to Support Learning and Assessment in Practice: NMC Standards for Mentors, Practice Teachers and Teachers. NMC, London. Available at: http://www.nmc-uk.org (last accessed 18 November 2007).
Plous S (1993) The Psychology of Judgment and Decision Making. Temple University Press, Philadelphia, PA.
Sawyer J (1966) Measurement and prediction, clinical and statistical. Psychological Bulletin 66, 178–200.
United Kingdom Central Council for Nursing, Midwifery and Health Visiting (1999) Fitness for Practice. UKCC, London.
Wilson PM, Kendall S & Brooks F (2006) Nurses' responses to expert patients: the rhetoric and reality of self-management in long-term conditions: a grounded theory study. International Journal of Nursing Studies 43, 803–818.
