Progress Update + Plans + Good News

We have successfully conducted pilot trials of group assessment for the Software Engineering 1 module. We interviewed the module leader, and the results are now being analysed. The results produced by our model were comparable to those of the module leader, who said he would be happy to use the model and tool for future assessments, subject to some adjustments: in particular, he would like a facility to override and adjust the final results produced by the tool. However, he did not consider it necessary to adopt the split-join invariance principle.

We are now developing an improved version of the tool (version 2) for better performance and usability. We also intend to conduct further trials on one or more modules after the Easter break.

We are pleased to announce that our proposals for presentations at two conferences have been accepted. The two conferences are:

1. Advance HE Teaching and Learning Conference 2019: Teaching in the Spotlight: Innovation for Teaching Excellence, 2-4 July 2019, Northumbria University, Newcastle upon Tyne

https://www.advance-he.ac.uk/programmes-events/conferences/TLConf19

and

2. International Assessment in Higher Education (AHE) Conference, 26-27 June 2019, Manchester, UK.

https://aheconference.com/conference-2019/

 

Tool Developed and Trials Begin

Progress has been very good so far. We have developed a tool for group marking, now available at http://gpm.andrewsportfolio.co.uk/. The module tutors of Software Engineering 1 (CSY1019), in the subject area of Computing, have kindly agreed to use the tool for marking the group work assessment of Term 1. The module has over 150 students, so it should be a good way of evaluating the tool and the model. We have also submitted two proposals for presentation/discussion to two conferences: AHE 2019 in Manchester and Advance HE STEM 2019 in Newcastle. We now look forward to analysing the data and to feedback from the tutors.

 

Update on Work and Call for Volunteers

We have successfully recruited a GTA (Andrew Dean) to work on this project. He has been busy with software development. We intend to have a new web-based tool for group marking ready by mid-January 2019 for user trials. The tool builds upon a prototype stand-alone tool developed as part of an earlier URBAN project. The plan is to host it on the university web server and pilot it on CSY1019 Software Engineering 1, a first-year module with a group work assessment; over 100 students in that module undertake group work. We would also like to invite volunteers from any subject area to take part in our study, use the tool and give us feedback. If interested, please drop an email to suraj.ajit@northampton.ac.uk.

 

What do we aspire to achieve in this project?

The aims of the project are as follows:

  • To involve students as partners in the assessment of group work (or collaborative learning) in a fair and consistent manner
  • To improve student (and staff) satisfaction by adopting a new approach to group work assessment
  • To inform institutional policy on assessing group work

The objectives are as follows:

  • Use a mixture of qualitative (semi-structured interviews/focus groups) and quantitative (questionnaires) methods involving both students and staff within the university
  • Explore the use of a novel mathematical model to calculate individual, differential student scores by combining student-generated assessment of the teamwork process with the lecturer-generated product assessment of the entire team (a rough illustrative sketch follows this list)
  • Analyse the findings
  • Produce recommendations and a roadmap to inform institutional policy on assessing group work
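
The model itself will be described in detail in a later post. Purely as an illustration of the idea, the sketch below shows one plausible, WebPA-style way of combining the two assessments: the lecturer's product mark for the whole team is scaled by each student's normalised peer-assessment weight. The function name, the sample data and the exact formula are our assumptions for this sketch, not the model implemented in the tool.

```python
# Illustrative sketch only: one plausible way of combining a lecturer's
# product mark for a team with the process ratings students give each other.
# This is NOT the project's actual model; the formula is an assumption.

def individual_marks(team_mark, peer_ratings):
    """Scale the team's product mark by each member's normalised peer weight.

    team_mark    -- lecturer's mark for the team's product (e.g. 0-100)
    peer_ratings -- dict mapping each member to the list of ratings they
                    received from the other members (process assessment)
    """
    # Average the process ratings each member received from their peers.
    received = {m: sum(r) / len(r) for m, r in peer_ratings.items()}
    mean_rating = sum(received.values()) / len(received)

    marks = {}
    for member, rating in received.items():
        weight = rating / mean_rating                  # 1.0 = average contributor
        marks[member] = min(100, team_mark * weight)   # cap at full marks
    return marks

ratings = {
    "Asha":  [4, 5, 4],   # ratings received from the other three members
    "Ben":   [3, 3, 2],
    "Chloe": [5, 4, 5],
    "Dan":   [4, 4, 3],
}
print(individual_marks(65, ratings))
# Stronger contributors end up above the team mark of 65, weaker ones below.
```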

Group Marking – Are we being fair? What are the issues? How do we tackle them?

Assessment of group work can play a major part in the learning process of students and is an important component of Active Blended Learning (ABL). However, it can also be a concern for many academics and students. Research conducted by a recent (2017-18) URBAN project in one subject area revealed that one of the main concerns is ensuring that the marking/scoring method is fair and consistent. Some modules give the same mark/grade to each member of a group, irrespective of each member's contribution/co-operation. Other modules incorporate a peer assessment component within rubrics, with each member assessing the other members' contributions. Problems can arise with peer assessment if students feel that they are being marked subjectively by their peers, for example if one person falls out with the group but still does their part of the overall work. A small worked example of this risk is given below.
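
To make that risk concrete, here is a small hypothetical illustration. The numbers and the simple mean-based adjustment are assumed purely for demonstration; they are not the scheme used by any of the modules mentioned above.

```python
# Hypothetical illustration of how subjective peer ratings can skew a mark.
# The mean-based adjustment below is an assumption for demonstration only.

team_mark = 60  # tutor's mark for the group's product

# Peer ratings (out of 5) received by one student who completed their share
# of the work but fell out with the rest of the group:
fair_ratings   = [4, 4, 4]  # ratings reflecting the actual contribution
grudge_ratings = [2, 1, 2]  # ratings coloured by the falling-out

def adjusted_mark(team_mark, ratings, expected=4):
    """Scale the team mark by received ratings relative to an expected level."""
    return team_mark * (sum(ratings) / len(ratings)) / expected

print(adjusted_mark(team_mark, fair_ratings))    # 60.0 -- mark preserved
print(adjusted_mark(team_mark, grudge_ratings))  # 25.0 -- heavy penalty
```

The same piece of work attracts a pass or a fail depending only on how the peers chose to rate it, which is exactly the kind of inconsistency this project sets out to address.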

A learning enhancement and innovation bid to address the above issues has been successful. The project is titled:

Implementation of a fair group marking, and student scoring scheme based upon separate product and process assessment responsibilities

We have begun work on this project and will post updates on our progress on this blog.