Attendance and exam performance at university: a case study

The link between absenteeism and students' academic performance at university is a perennially hot topic for teaching academics. Most studies suggest the effect is negative, although the strength of this effect is in dispute. The issue is complicated further when researchers draw their inferences from different angles, such as the removal of a mandatory attendance policy or the implementation of a module-specific attendance policy. Although previous studies have suggested that the effect on exam performance of removing a mandatory attendance policy is weak, this study investigates the effect of implementing a module-specific attendance policy and finds a strong effect on exam performance. We also identify that student-specific factors are important, including revision strategies and peer-group effects, and that failing to take account of these factors will result in biased estimates of the effect of an attendance policy on exam performance. Furthermore, this paper suggests that the effect of absenteeism on exam performance is non-linear and that further research is needed to identify when an attendance policy is a justifiable tool.


Introduction
It is generally accepted by tutors that there is a positive relationship between attendance and student achievement. This view is supported by Gendron and Pieper (2005), who show that a strong negative link between absenteeism and assessment performance is usually reported in the literature, although the statistical significance of this link is not consistently found.
For sceptics of mandatory attendance policies (e.g., Petress 1996), this lack of significance is enough to challenge the idea that such policies offer universities the 'golden bullet' that will improve both overall marks and progression rates. Against this background, Marburger (2006) presented an empirical study of the impact of relaxing an American university's mandatory attendance policy for a Level 1 undergraduate module, in order to identify whether the lack of policy influenced attendance and whether attendance affected the student's grade on the module. The purpose of the current paper is to identify whether implementing a module-specific attendance policy has a similar effect on exam performance. If it does not, then there might be evidence that the perspectives of Gendron and Pieper (2005), Petress (1996) and Marburger (2006) could each be valid in different circumstances.
Although we emphasise that Marburger's findings are far from conclusive, because of concerns over the credibility of removing a university-wide mandatory attendance policy for a single seminar group and because the potential impacts of peer-group effects and revision strategies are not explicitly taken into account, our main conclusion is that the relationship between attendance and exam performance may not be uniform across different rates of attendance: the effects of an attendance criterion on exam performance are likely to be much stronger at low levels of attendance and are likely to diminish as attendance rates rise. A small empirical analysis complements our arguments.

Background
There are two ways of measuring the impact of an attendance policy on exam performance within universities; which is feasible will depend on the university:
(1) We can relax a university-wide mandatory attendance policy so that it does not apply to a sample of students.
(2) We can create an attendance policy and apply it to a sample of students in an institution where there is no university-wide attendance policy.
Both approaches are valid in their own right. The question, however, is whether the findings from each will be reliable and robust enough to inform policy. In order to assess the influence of an attendance policy on student performance, Marburger (2006) adopted the first of these two methods. He focused on two groups of students studying an introductory module in microeconomics in two consecutive years at the same university; students in one group (no-policy) were told that the university-wide attendance policy would not be applied to them, while students in the other group (policy) were subject to the university-wide attendance policy. As absenteeism might be endogenous to the day and timing of a particular class, he used students from the same teaching slot in 2003 as a control group. Members of the no-policy group (n = 38) took the module in 2002, while members of the policy group (n = 39) took the same module in 2003.
To make the 'match' between absenteeism and exam performance, Marburger recorded the class attendance of both groups in each lesson and, at the end of the lesson, devised multiple-choice questions relating to that lesson's topic; these questions then appeared in the semester exams. The hypothesis being tested was that absence from a lesson can be matched to an incorrect answer on the corresponding multiple-choice exam question; i.e., that this 'pure' effect of absenteeism could be detected and measured. Table 1 shows that Marburger's (2006) initial observations concur with others (Romer 1993; Marburger 2001): in the absence of any mandatory attendance policy, absenteeism in the no-policy group increased throughout the semester. This phenomenon is not unfamiliar to most higher education tutors; it is also a characteristic of our study.
However, the magnitude of the absenteeism in relation to the university's policy aspirations was not readily apparent. Using the information presented in his paper, we calculated that the attendance policy in Marburger's (2006) study was designed to ensure that students attended 83% of all lectures (i.e., 29 of the 35 classes timetabled over the semester). 1 Marburger's results show an average rate of absenteeism over the semester of 20.78% for the no-policy group, compared with 11.52% for the policy group; i.e., a student in the no-policy group missed an average of 7.3 classes over the semester, while a student in the policy class missed an average of 4. Re-interpreted as average attendance rates, these figures become 79.22% and 88.48% respectively: roughly four and five percentage points either side of the 83% prescribed by the university's policy. Unfortunately, we do not know whether the pattern of absenteeism in both groups was random or the result of particular individuals regularly absenting themselves from classes. 2

Overall, Marburger's (2006) research design is in line with what we understand to be Romer's (1993) suggestion of using a controlled experiment within higher education. Although it was an experiment, we are not convinced that the students within the 'no-policy' group were uninfluenced by students in the normal 'policy' groups. This scepticism comes from a closer look at the circumstances surrounding the module and the research design used by Marburger. Referring to Table 2, we can see that the no-policy group could have had contact with at least five other groups: three from the previous year, who would be Level 2 students relative to the no-policy group, as well as the students in the other two groups in the 2002 cohort. These five groups would be subject to the strictures of an attendance policy; clearly the same process could occur if the freshmen spoke to third- and fourth-year students.
The proximity of the no-policy group to these other groups is cause for concern on at least three counts.
First, as the no-policy group was not within a 'vacuum', they would be influenced by forces outside their own class. Research by Thomas and Webber (2001) emphasises the effects of peer groups on student choice, while Webber and Walton (2006) illustrate that peer groups can be gender-specific. Attendance at university seminars may be the result of one's friends attending either that seminar or another class on the same day, and not an independent decision by the student. While the presence and significance of friendships are difficult to model, it is our experience that many freshmen place a high value on the chance to socialise while at university, such that attendance in class is often a by-product of this socialising with friends. Thus, if the no-policy students' friends were attending other seminars or lectures at around the same time as their own classes, the potential social benefits of non-attendance would be smaller. Second, we do not know whether the students in the no-policy group also had other lessons on the same day, lessons which would be subject to the mandatory attendance policy. If so, the value of missing the microeconomics class might only be an hour's worth of drinking coffee in the students' common room; a small pay-off compared with taking a whole day off to engage in employment.
Third, habit needs to be taken into account; if a student attends all (or most) of their classes for other modules, then a conscious choice must be made not to attend the microeconomics classes.
These three possibilities might have combined to reduce the impact of removing mandatory attendance; that is to say, the rates of absenteeism would probably have been higher if the no-policy group had been 'in a vacuum'.
The small difference between Marburger's (2006) two groups' exam performances (represented here in Table 1) might also be explained in terms of the no-policy group's proximity to these other groups. It is entirely plausible that students in the 2002 cohort got together and formed ad hoc 'study clubs' to swot up on the questions likely to appear in that teaching block's exam, which would have improved the no-policy group's performance, thereby weakening the 'pure' effects of absenteeism for this group. 3 Although Marburger's (2006) experiment did use a type of control group, our feeling is that, for reasons largely out of his control, the experiment was not particularly robust or rigorous. After all, if the control conditions were felt to be robust, his findings would suggest that for Level 1 undergraduates a mandatory attendance policy (if not attendance itself) is virtually redundant. We feel this runs counter to most tutors' workaday experience and is something which many tutors would probably challenge. What can reasonably be concluded from Marburger (2006) is that the attendance policy applying to the other groups in the no-policy group's cohort seems to have had spillover effects, ones which facilitated beneficial peer-group activity. This suggests that a policy of mandatory attendance might not need to be applied to all the modules in a given year. We develop our analysis of peer groups later in this paper.
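The attendance arithmetic underpinning this critique can be reproduced directly from the figures quoted above; the short sketch below uses Python purely as a calculator, and all inputs come from the text (only the rounding conventions are ours).

```python
# Reproduce the attendance arithmetic reported for Marburger's (2006) study.
TOTAL_CLASSES = 35        # classes timetabled over the semester
POLICY_THRESHOLD = 29     # minimum classes required by the policy

# Policy target as a share of all classes (approximately 83%)
policy_target = POLICY_THRESHOLD / TOTAL_CLASSES

# Reported average absenteeism rates for each group
absent_no_policy = 0.2078
absent_policy = 0.1152

# Implied average number of classes missed
missed_no_policy = absent_no_policy * TOTAL_CLASSES   # ~7.3 classes
missed_policy = absent_policy * TOTAL_CLASSES         # ~4.0 classes

# Implied average attendance rates (79.22% and 88.48%)
attend_no_policy = 1 - absent_no_policy
attend_policy = 1 - absent_policy

print(f"policy target:        {policy_target:.1%}")
print(f"no-policy attendance: {attend_no_policy:.2%} ({missed_no_policy:.1f} classes missed)")
print(f"policy attendance:    {attend_policy:.2%} ({missed_policy:.1f} classes missed)")
```

The two implied attendance rates bracket the 83% policy target, which is what motivates the comparison in the text.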

Our contrasting study
In order to assess the influence of an attendance policy on student performance, we adopted the second of the two methods mentioned earlier. In contrast to Marburger's (2006) attempt to identify the effect of not enforcing a university-wide attendance policy, our analysis concerns an attempt to identify the effect of implementing an attendance policy which is not university-wide. 4 Our analysis contained no control group, so our results should not be contrasted directly with Marburger's, but our empirical examination had a similar aim. Mandatory attendance policies in the UK are rare; where attendance is taken into consideration, the usual practice is to award marks for attendance at seminars and/or lectures, which then form part of the coursework component of the student's final mark. Indeed, most UK tutors are not in a position to implement a mandatory attendance policy on their own modules, as such a strategy would be against the ethos of their university. These conventions prevented us from replicating Marburger's (2006) policy/no-policy experiment; instead we studied the variation in student behaviour within one module where attendance formed part of the students' final marks. The other major difference is that our study drew on the experience of a tutor of a core Level 3 module in international economic policy, as opposed to Marburger's focus on Level 1 students studying microeconomics. It should be borne in mind that, as this was a Level 3 module, many of the students would already know each other, having shared many of the same classes at Levels 1 and 2. As a result we could be sure that some peer effects would be at work, in contrast to Marburger's (2006) study, where new social connections would evolve during the period of study.
One final difference lay in the nature of the final exam; unlike Marburger's students, the students in our study faced a single end-of-year exam (three hours long) which entailed choosing four out of eight questions to answer. Unlike a multiple-choice test, where the known probabilities of guessing correctly generally encourage the student to attempt all the questions, this exam required the student to choose those questions for which they believed they had the best chance of gaining a high mark.
As for similarities with Marburger (2006), we tested the hypothesis that absence from a seminar can be matched to a low mark for an exam question relating to that seminar's topic; i.e., the effect of absenteeism could be detected and measured. This similarity arose because, as in Marburger (2006), this module made explicit the link between attendance at seminars and performance in the final exam.

The module
This module ran for the first 12 weeks of an autumn term. The whole cohort met in a lecture which took place at 10.30am on Thursdays and the students were split into three seminar groups; seminar group one met at 11.30am, seminar group two met at 1.30pm and seminar group three met at 3.30pm, all on the same day as the lecture.
As part of the assessment, students were expected to write two essays and present a recently published paper to their seminar class. These presentations took place in the seminar slots over the last eight weeks of the module. The papers all contributed to the module's theme, by extending or complementing a particular argument, thus paper number 3 might be strongly related to the findings of papers 1 and 2, while paper number 7 might challenge the findings of papers 3-5 and so on. All students in a presenting group received the same mark as determined by the tutor; the mark of their first essay was then weighted by their presentation mark.
The students who made up the audience to these presentations also received a 'mark' which was conditional on handing in (at the end of the seminar) an evaluation of the presentation. To give structure to these evaluations all students used the same form. This form asked them to comment and grade the introduction, the structure of the presentation, the analysis, the clarity of argument, the conclusion and the usefulness of the presentation for their own revision -these forms were checked by the tutor and copies were subsequently handed back to the students when requested. The submission of these peer assessments can be thought of as a pseudo module-specific attendance policy as the total number of 'ticks' for attending presentations was then used to weight the students' marks for their second essay.
To make the link between attendance and the final exam explicit the students were informed that the eight questions in the end of year exam related directly and explicitly to each of the eight papers which the students presented over the term. Furthermore, all students were told that good-quality cross-referencing to the other papers discussed in the module would attract higher marks in the exam. On several occasions the students were reminded that it was in their interests to attend the presentations as it would count towards their assessment in two ways: higher coursework grades and the opportunity to listen to something which they knew would definitely be in the exam.

Group presentations: the marks and attendance rates
Reflecting the central role that the eight presentations played in both the learning and assessment on this module, Table 3 shows the presentation marks and attendance across the three seminar groups for the whole cohort of 45 students who took this module. Table 3 shows, by group, the number of students in a particular presentation 'team', the percentage mark the 'team' received, the rank of that mark in relation to the presentation marks for all 'teams' and finally the number of students who attended the seminar. The bottom of the table then shows the aggregated marks for the whole cohort.
The first noteworthy feature of Table 3 is the high level of attendance, not unlike the attendance rates for the policy group reported in Marburger's (2006) study. However, after the first two presentations the level of attendance varied and, interestingly, the rates of absence between two of the seminar groups were highly correlated. Correlations for attendance between groups 1 and 2 and between groups 1 and 3 were 0.74 and 0.75 respectively; by contrast, the correlation between groups 2 and 3 was a relatively low 0.4. The lowest rate of attendance across all groups (78% overall) was for seminar number 5. Contrary to the typical experience within UK universities, the average attendance rate in these seminars was very high, at 90%; a figure remarked upon when it was discussed with colleagues in the department.
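Correlations of this kind can be computed directly from the weekly attendance counts for each seminar group. The series below are illustrative stand-ins for the Table 3 data, which are not reproduced here; only the method, a pairwise Pearson correlation across the eight presentation weeks, matches the paper.

```python
import numpy as np

# Hypothetical weekly attendance counts for the eight presentation seminars;
# the real figures are in Table 3 -- these arrays are for illustration only.
group1 = np.array([15, 15, 14, 13, 12, 14, 13, 14])
group2 = np.array([15, 15, 13, 14, 12, 13, 13, 14])
group3 = np.array([15, 15, 14, 12, 11, 14, 12, 13])

# Pairwise Pearson correlations between the attendance series
r12 = np.corrcoef(group1, group2)[0, 1]
r13 = np.corrcoef(group1, group3)[0, 1]
r23 = np.corrcoef(group2, group3)[0, 1]

print(f"groups 1 & 2: {r12:.2f}")
print(f"groups 1 & 3: {r13:.2f}")
print(f"groups 2 & 3: {r23:.2f}")
```

With the actual Table 3 counts in place of the stand-ins, this computation would return the 0.74, 0.75 and 0.4 figures cited above.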

The model
Our analysis does not consider the need to work for remuneration, which might have impacted on the attendance decision; likewise, it was not possible to collect data on whether the student was a local resident. 5 These are important variables and their exogenous effects should be subsumed in the error term; if they did not change between Levels 2 and 3, then their effects might be included in the ability variable (Table 4). 6

In order to look at the determinants of students' exam performance (EXAM) for this cohort, five important variables were parameterised; these relate to attendance, ability, two variables which attempt to capture the different learning and revision strategies students appear to adopt when preparing for the exam, and a peer effect:

• Attendance (ATTENDANCE): the number of peer assessments submitted. The intention here was to see what effect attendance in general might have on the final exam mark. Not surprisingly, the correlation with the exam mark was 0.370 (see Table 5).

• Ability (ABILITY): the module had a prerequisite, and the exam mark for that module was used as an indicator of the student's entry ability. 7 From Table 5, the correlation between ability and exam performance was 0.171. This rather low correlation suggests that these previous exam marks might not be the most suitable indicator of entry ability. After all, given that this is a Level 3 module, we might have expected (if not hoped) to see a discernible change in the students' learning behaviour and application as, like runners, they gave their all approaching the finishing line. 8

• Learning strategy for non-attendance (LSNA): the purpose of this variable was to detect and measure whether there had been an absenteeism effect, as postulated by Marburger (2006). Eight students (21% of the sample) answered an exam question which related to a presentation they did not attend; these students were given a score of 1 and all others a score of 0. For students with a score of 1 we would expect a lower overall final exam mark. Table 5 shows that the correlation of LSNA with the exam mark was negative (-0.127).

• Revision strategy for presentation question (RSPQ): on this module, the more effort students put into preparing their presentations, the less effort they needed to put into revising that question for the exam. To take account of this effect, we included in the regression the rank of the exam question which related to their presentation. If the mark for that question was their best (or equal best) mark, it was coded 4; their second (or equal second) best, 3; their third (or equal third) best, 2; and their worst, 1. Finally, if they chose not to answer the exam question relating to the paper they presented, it was coded 0. The correlation between this variable and the exam mark was 0.286 (see Table 5). It should be noted that this 0 score applied to six of the students in this sample, and the average mark for these six presentations was 65.17% (standard deviation [SD] = 6.80%), compared with 61.08% (SD = 12.04%) for all other presentations.

• Peer-group effect (PGE): finally, as students mixed with other students, we needed a measure of peer groups. For each student we took the average ABILITY mark of all the other students in their seminar group. As can be seen from Table 5, the correlation between this variable and exam performance was 0.160.

Table 5 also shows that ability was positively related with exam mark and attendance, but negatively related with PGE and RSPQ.
This could be explained by better-able students deciding to focus their revision on understanding the other papers, while not putting effort into the paper they presented because they thought they knew it well enough. Conversely, it might capture the effect of less-able students deciding to focus on producing a good exam answer for the paper they presented. Students who thought about the assessment of this module would realise that it was possible to obtain 25% of the total exam mark by answering one question perfectly. As the pass mark was 40%, only 5% (of the total) was needed from each of the other three answers, so an average individual exam essay mark of 20% on the remaining questions would be enough to pass the exam. Descriptive statistics relating to all these variables appear in Table 6. Table 7 shows the results of the regressions, all of which pass the Regression Equation Specification Error Test (RESET) and F-tests. We started by isolating the attendance effect, simply regressing the exam mark on attendance; these results are presented in column 1. They suggest that for every seminar attended, a student could expect a 4.1% increase in their exam mark: attending a seminar increases knowledge, allowing the student to perform better in the exam. Column 2 re-estimates column 1 with ability included. Even after taking account of ability, a student could expect a 3.8% higher exam mark for every seminar attended. The drop in the magnitude of the coefficient suggests that, with more explanatory variables and individual-specific data, further regressions might dilute the estimated effect of attendance on student achievement.
Indeed, the use of student-specific dummy variables, as employed by Marburger (2006), may have led to the identification that attendance has only a very small, although important, effect on student achievement. We then progress to column 3: in addition to attendance and ability, we included the effect of peer groups and the students' learning and revision strategies (LSNA and RSPQ). Several interesting points should be emphasised here. First, the R² increased substantially to 0.348, suggesting that the extra explanatory variables greatly increased the explanatory power of the model. Second, peer-group effects were found to be a significant determinant of exam performance, suggesting that better-quality peers aided the student's learning; this is in line with much of the empirical literature on peer-group effects across a variety of contexts. Third, ability had a positive effect on exam performance. Fourth, the revision strategies were important for exam performance: answering an exam question which corresponded to a seminar that the student did not attend had a positive effect. Earlier we observed that this variable was strongly negatively correlated with attendance; the interesting point, however, is that the raw data indicate that only one person attended fewer than five of the eight presentations, so all but one student should have had enough knowledge to answer five questions in the exam. This raises the question of why students decided to answer a question on a presentation they did not attend.
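The nested specifications described above (columns 1 to 3 of Table 7) can be sketched with ordinary least squares. The data below are randomly generated stand-ins for the real variables, which are defined in the text but not reproduced here; the point is the structure of the three regressions, not the coefficient values.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data for n = 38 students; variable definitions follow the text,
# but these values are simulated for illustration only.
n = 38
attendance = rng.integers(4, 9, size=n).astype(float)  # presentations attended (of 8)
ability = rng.normal(58, 10, size=n)                   # prerequisite (Level 2) exam mark
pge = rng.normal(58, 3, size=n)                        # peer-group mean ability
lsna = (rng.random(n) < 0.21).astype(float)            # answered a missed-seminar question
rspq = rng.integers(0, 5, size=n).astype(float)        # rank coding, 0-4
exam = 20 + 4 * attendance + 0.3 * ability + rng.normal(0, 8, size=n)

def ols(y, *regressors):
    """OLS with an intercept; returns the coefficient vector (intercept first)."""
    X = np.column_stack([np.ones_like(y)] + list(regressors))
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

b1 = ols(exam, attendance)                             # column 1: attendance only
b2 = ols(exam, attendance, ability)                    # column 2: add ability
b3 = ols(exam, attendance, ability, pge, lsna, rspq)   # column 3: full specification

print("attendance coefficient, column 1:", round(b1[1], 2))
print("attendance coefficient, column 3:", round(b3[1], 2))
```

In the paper's data, moving from column 1 to column 2 reduces the attendance coefficient from 4.1 to 3.8, while the full specification in column 3 raises it again; comparing `b1[1]` and `b3[1]` on the real data would reproduce that pattern.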

Results
The effect of the revision strategy for the presentation question is reassuring in that this variable was highly statistically significant and positive. It suggests that students who provided relatively better-quality answers to the exam question relating to the paper they presented also received higher overall exam marks; each increase in rank for the presentation-related question was associated with a 2.8% higher overall exam mark.
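The RSPQ coding used for this variable can be made concrete with a short function. The function name and signature are ours, for illustration; the coding rules (4 for the best or equal-best answer down to 1 for the worst, and 0 if the question was not attempted) are exactly as described in the text.

```python
def rspq_code(presentation_q_mark, other_q_marks):
    """Code the RSPQ variable described in the text (hypothetical helper).

    presentation_q_mark: the student's mark on the exam question tied to the
        paper they presented, or None if they did not attempt that question.
    other_q_marks: the marks for the other questions they answered.
    Returns 4 for the (equal) best mark, 3 for (equal) second best,
    2 for (equal) third best, 1 for the worst, and 0 if not attempted.
    """
    if presentation_q_mark is None:
        return 0
    # Rank among the four answers, best first; ties share the better rank.
    strictly_better = sum(1 for m in other_q_marks if m > presentation_q_mark)
    return 4 - strictly_better

print(rspq_code(18, [15, 12, 10]))    # best answer -> 4
print(rspq_code(14, [15, 14, 10]))    # equal second best -> 3
print(rspq_code(12, [15, 14, 13]))    # worst answer -> 1
print(rspq_code(None, [15, 14, 13]))  # question not attempted -> 0
```

Counting only strictly better marks implements the tie rule: a student whose presentation-question mark equals their best mark still receives a 4.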
Notes to Table 7: n = 38; ATTENDANCE, number of peer assessments submitted; ABILITY, exam mark for the prerequisite module; LSNA, learning strategy for non-attendance; RSPQ, revision strategy for presentation question; PGE, peer-group effect; numbers in parentheses are standard errors; *P < 0.01; **P < 0.05; ***P < 0.001.

Finally, the results suggest that if a student answered a question relating to a missed presentation then they would have a higher mark. This is not what we would expect, as it seems to suggest that students do better if they do not attend. Nevertheless, attempting these particular questions was a choice. We can understand how this result came about by looking at the overall exam performance of these eight students, as shown in Table 8. The values in bold denote the mark (out of 25) each student received for answering the question relating to the presentation they did not attend. From Table 8 we can see that, for all but student number 6 (from group 1), this mark was greater than their average mark per question. Furthermore, attendance at presentations for all but student number 6 was at least 62%. While Table 8 explains how the learning strategy of non-attendance came to have a positive effect, it does not explain why. It follows that the low attendance of student number 6 would make it more difficult for them to exploit the skills and understanding of their peers, which is what we believe occurred in the case of the other students. That is to say, the regular attendance facilitated by the design of this module's assessment and curriculum enabled students to rely on their fellow students to help them learn and catch up on missed material, perhaps stimulating them to put in even more effort than usual because they felt they had to catch up with their peers' ability; an effect we believe might also have been at work in Marburger's (2006) study.
The point of this exercise, however, was to investigate the effect of attendance on exam mark and to show an appreciation of the complexity of the issue. Once we take into consideration the effects of peer groups and student-specific revision and learning strategies, the effect of attendance on exam mark is strong and statistically significant; it also appears stable and robust to the inclusion of extra explanatory variables. Put another way, a lack of appreciation of the contribution of peer groups and student strategies means that the effect of attendance on exam mark appears diluted. In our sample, once we took into account peer groups and student strategy, attending one extra seminar would have increased the student's exam mark by over 7.7%. This high value seems to vindicate the intention of this module's curriculum, namely that it was designed to make the link between seminar activities and the final exam explicit and transparent to all students, thereby improving students' attendance.

Discussion
The purpose of this paper was to identify whether implementing an attendance policy has an effect similar to that of removing one. Marburger's (2006) study indicates that the removal of a mandatory attendance policy has a small negative effect on exam performance, with a magnitude of only 2%. The results presented in this study suggest that the implementation of an attendance policy increases exam performance: a student can expect to receive an extra 7.7% for each extra seminar they attend.
Common to both studies is the explicit and transparent link between attendance and the exam. This not only makes attendance more attractive, it reduces the need for students to speculate on what will be in the exam, making their revision both focused and efficient. It appears that in both studies the assessment design might have had a bigger role in determining exam success than either study expected or indeed focused on.
Table 8. The exam performance of students answering an exam question which related to a presentation they did not attend (rows indexed by student no./group no.).

In order to appreciate fully the differences in these results, we need to recognise that the removal of the mandatory attendance policy in Marburger's (2006) study was set within an environment where attendance was relatively high, at about 85%; he observed only a small fall in attendance, to about 76%. This study shows that an attendance policy contributed to the attainment of a 90% attendance rate; for comparison, our anecdotal evidence for the UK suggests that attendance can be as low as 35% by the end of the academic year. Both Marburger's (2006) study and the study presented in this paper suggest there is a positive relationship between an attendance policy and exam performance. However, each study illustrates the effect of an attendance policy at a specific 'normal' attendance rate, and there is no reason to suppose that the relationship between an attendance policy and exam performance is linear between these two average attendance rates. Figure 1 highlights this issue. At high rates of attendance the impact of an attendance policy on exam performance may well be weak, as suggested by Marburger (2006). However, at low rates of attendance the impact of an attendance policy on exam performance may well be strong, as suggested by the empirical study presented here. Identifying whether the relationship between an attendance policy and exam performance is linear (Figure 1) therefore requires further investigation. Curve C in Figure 1 is supported by the results of Durden and Ellis (1995), who found the effect of attendance on exam performance to be non-linear, becoming important only after a student had missed four classes during a semester. However, line B is also possible where a module requires students to demonstrate depth of knowledge rather than breadth of surface-level learning.
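The non-linearity argument can be illustrated with a worked example. The two functions below are hypothetical; they are not fitted to either study's data, but they capture the contrast between a linear relationship and a concave one (in the spirit of curve C), where the marginal benefit of attendance is largest at low attendance rates.

```python
import math

# Illustrative (hypothetical) exam-mark curves over attendance rate a in [0, 1].
def mark_concave(a):
    """Concave curve: attendance matters most when attendance is low."""
    return 40 + 35 * math.sqrt(a)

def mark_linear(a):
    """Linear benchmark with the same endpoints."""
    return 40 + 35 * a

# Marginal gain from raising attendance by 10 percentage points,
# evaluated at a low and at a high starting rate.
low_gain_c = mark_concave(0.45) - mark_concave(0.35)
high_gain_c = mark_concave(0.95) - mark_concave(0.85)
low_gain_a = mark_linear(0.45) - mark_linear(0.35)
high_gain_a = mark_linear(0.95) - mark_linear(0.85)

print(f"concave: +{low_gain_c:.2f} marks at low attendance, "
      f"+{high_gain_c:.2f} at high attendance")
print(f"linear:  +{low_gain_a:.2f} marks at low attendance, "
      f"+{high_gain_a:.2f} at high attendance")
```

Under the concave curve, the same 10-point rise in attendance yields a larger mark gain at low attendance than at high attendance, whereas the linear benchmark yields the same gain everywhere; this is the asymmetry the Discussion attributes to the two studies' different baseline attendance rates.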

Conclusion
Figure 1. Impacts of an attendance policy on marks.

This paper has investigated the relationship between attendance and exam performance. Although most studies suggest the link between an attendance policy and exam performance is positive, the magnitude of the effect is in dispute. In contrast, our results suggest that although attendance is important in determining exam success, other factors that are correlated with attendance are also important in predicting exam performance; these include the learning and revision strategies adopted by the students and peer-group effects.
Finally, the disparities in the results suggest that the relationship between attendance and exam performance may not be uniform or constant; the effect of an attendance criterion on exam performance is likely to be much stronger at low levels of attendance and is likely to diminish as attendance rates rise. Further empirical investigations should be undertaken to identify whether this is the case across disciplines, universities and countries.

Notes
1. The university's policy stated that a student who misses more than twice the number of lectures normally scheduled per week would receive an 'F' grade, and that a student who misses more than six microeconomics classes would receive an 'F'.
2. The reported results on the link between exam performance and absenteeism are rather surprising: the likelihood of responding incorrectly to a question relating to that class's topic increases from 9% in exam 1 to 14% in exam 3. Yet, when absenteeism was at its highest for the no-policy group (in teaching block III and prior to the final exam), this group was only 2% more likely to give a wrong answer than the students in the policy group.
3. As it is, we cannot speculate any further, as Marburger does not report the distribution or average marks for all nine groups covering 2001-2003. Other concerns rest with the exam results: first, we are given no details about the time or length of the exams, the number of multiple-choice questions set, or the number of choices in each question. Burton (2001) demonstrates that a typical 60-question four-choice test is 'inherently too unreliable for the demands commonly placed on it' (47). If the exams set during Marburger's study were of this nature then the degree of guessing could be significant, which would compromise the validity of the final marks for all students.
4. It is our view that the impact of either removing or imposing a policy on attendance is unlikely to be uniform across attendance rates or consistent across cohorts. Each approach will arrive at different conclusions, which could then mislead policymakers.
5. It is interesting to note from Marburger's study that the local students worked less; this is most evident in the policy class. In the no-policy class there were fewer locals, and these individuals worked more hours on average. This may be associated with higher living costs for rent (not living at home with parents) and travel costs to get back home to see the family.
6. The extent to which the Level 2 mark accurately captures the student's ability is questionable; the analysis of the changes in exam marks is presented in Table 4. We would expect some degree of regression to the mean; after all, the exam mark captures ability and a degree of luck on the day, so we would expect students who received higher grades at Level 2 to get lower grades at Level 3, and the reverse for the less-able students. Of course, this is based on the proposition that there is an element of luck. If this were not generally the case, then one might expect to see some degree of stratification, in that relatively more-/less-able students remain relatively more-/less-able, and this should be borne out in the results. Nevertheless, we are surprised by the size of the average increase in the exam mark for students at the bottom end of the distribution, and further research should focus on identifying which types of students gain most from an explicit focus on attendance (whether engineered by policy or by curriculum design). It might be possible to identify whether it is the least-able students who gain the most from a heavy emphasis on attendance, as is hinted at in Table 4.
7. This prerequisite module applies only if the student followed a specific course. It was not taken by seven students, who came to the module through a different (non-standard) route; this data attrition reduces the sample to 38 in the descriptive statistics and the econometrics which follow.
8. This possibility is further supported by the fact that the degree awarded to the student is based 70% on their Level 3 marks and 30% on their Level 2 marks.