“The logic is stunningly obvious: say what you want students to be able to do, teach them to do it and then see if they can, in fact, do it” (Biggs 2011, p143). 

Module learning outcomes specify what the learner should know and be able to do by the end of the module, and the assessment tasks should be selected to enable effective demonstration of the specified knowledge and skills. Teaching and learning activities can then be planned to support that learning. This process of constructive alignment (Biggs 2011) helps ensure validity and coherence by specifying which outcomes are assessed in each task, what level of achievement is expected, and how teaching and learning approaches will support this.

Assessment design

Any assessment design needs to consider what has been described as the “validity/reliability/manageability equation” (Price et al. 2012, p51).

Validity

A chosen task needs to be valid; that is, appropriate to the demonstration of the specified learning outcomes, and measuring what it claims to measure. It is also important that each tool or type of activity is properly introduced, so that the student is familiar enough with it to be able to use it effectively (Bloxham and Boyd 2007, p166). 

Some key questions to help you check the validity of a task: 

  • Does it encourage time on task? Ideally, assessment drives student learning in a positive way: setting the right task generates the right learning activities in preparation (Gibbs and Simpson 2005, p15). This is not always the case, though. Question-response assessments can encourage cue-seeking behaviour. Providing a choice of essay titles might lead to selective coverage of the curriculum. A presentation task for a distance learner might require unfamiliar technology, so that the process of the task takes up valuable time better spent on the content. It is important to consider what students will actually be doing to prepare for the assessment. 
  • Is it relevant? A multiple choice test might demonstrate that the student can memorise by rote and produce responses to prompts. But is this relevant, in terms of process as well as content, to how they will learn and demonstrate competence in the future? 
  • Is it engaging? This may affect the amount of time and effort your learners are prepared to spend on the task. Although there is debate about the relative value of ‘deep’, ‘surface’ and ‘strategic’ learning (see Race 2005, pp.68-9), there is also evidence that students who have an interest in or care about the issue will use different learning strategies that may lead to better learning (Bloxham and Boyd 2007, p.17). Using ‘real world’ data, scenarios and problems can help to engage students beyond the intellectual level.  
  • Does it discriminate on ability, or on disability? Some students will do better at certain kinds of tasks than at others. But repetition of the same type of task may disadvantage some learners based not on their knowledge and understanding, but on their ability to convey the information in the prescribed format (Race 2005, pp.75-6). 

Reliability

The second part of the equation is reliability; that is, there must be consistency for students and staff about the expected standards and about how these will be measured. Demonstrating reliability can be more of a challenge for some tasks than others; it is harder to demonstrate the equal application of grading standards for a presentation or a performance than for a multiple choice test, for example. A range of procedures is available to mitigate the subjectivity of judgement, including the specification of grading criteria and standards, double marking and moderation. Transparency around these processes can help to evidence that an assessment is ‘fair’.

Manageability

In a study of new lecturers’ approaches to assessment, practical factors such as cost, time and high student numbers were perceived to be the biggest constraints on innovation (Norton et al. 2013). In an ideal world, any task deemed valid and reliable could be used to assess student learning. In reality, though, choices may be constrained by other factors, such as the availability of rooms, equipment and markers, and the restrictions on assessment hours specified in the modular framework. Tasks need to be practicable and scalable. 

There is no such thing as the perfect assessment task, but it is possible to reach a balance between these (sometimes conflicting) requirements to provide a sound basis for assessment. The first step is to recognise which of these elements is most important for the assessment at hand; this may vary depending on the purpose of the assessment and the learning goal. For the assessment of medical procedures, concerns about cost and staff time may be secondary to ensuring validity and reliability, because of the consequences of success or failure that rest on these judgements. For the assessment of design, by contrast, reliability may be “less imperative”, and so more emphasis will be placed on validity and manageability (examples taken from Price et al. 2012, p51). 

Analysis of a proposed option against these requirements may reveal a conflict, where a task that is strong on one criterion is potentially weak on another. If this can be recognised, additional measures can be included in the design to mitigate or address it: for example, framing tasks more towards application to increase validity, using second marking or rubrics to increase reliability, or providing alternative options for accessibility (Bloxham and Boyd 2007, pp44-46). This leads to consideration of other dimensions of assessment. With a valid, reliable and manageable foundation in place, assessment design can look to other factors, such as authenticity and the incorporation of self and peer assessment, to add value to the assessment experience and help address design and implementation challenges.

Questions and resources for designing summative assessment

Question 1

How will you match assessments to the type/s of learning you want students to achieve? 

Question 2

Does the programme provide progression in assessment appropriate to the level of study?

Question 3

Is there a variety of forms of assessment that align with programme outcomes and professional practice? 

Question 4

If you are using group assessment, how will you develop the students’ skills to do this throughout your programme?  

Question 5

What can you do to help ensure the academic integrity of your assessment?

Using formative assessment

Providing students with formative assessment opportunities, and therefore with constructive feedback before the summative assessment task, is a useful strategy to facilitate learning and maximise student performance.

Early assessment of students who have made a significant transition (e.g. new to Level 4, ‘top-up’ students) is recognised as being good practice in guiding students on the appropriate standards and conventions of HE study. This assessment may be formative, or a low-weighted summative item, and should be planned to permit early feedback to students.

Visit this Padlet for some approaches, examples and resources for formative assessment. 

https://uon1.padlet.org/nicola_denning/formativeassessment