Measuring the quality of a programme is very tricky. Paddy Cosgrave, founder of The Dublin Web Summit, has suggested that a degree from Trinity College Dublin has more value than an equivalent qualification from other Irish universities. He attempted to justify his statements in a blog post, where he concedes there might be cases where Trinity courses are not any better than others!
Normally I wouldn't comment on things like this, but given that Mr. Cosgrave is a member of the Board of the HEA, and that he picked such an incredibly insensitive (and morale-destroying) time as now, just when so many third-level students are heading into exams, I feel it's important to comment on this sort of claptrap.
Determining the quality of one course versus another is very complex; there are always subjective elements to measuring quality. Paddy Cosgrave suggested the following criteria on Twitter for measuring the quality of a programme: "faculty, facilities, funding, possible cohort effects, entrance requirements, hours of course etc."
- The problem with using faculty: how are you measuring the quality of staff members?
- The problem with using facilities: how do you measure that? If you use a blunt metric like the amount of money spent on facilities, that doesn't tell us whether the money was well spent.
- The problem with using funding as a metric: in most cases "funding" as a criterion has a lot more to do with research activity than with teaching ability.
- The problem with using "possible cohort effects" is the almost complete inability to measure this particular metric.
- The problem with using "entrance requirements" is that if are looking at students moving from second-level where they may be doing lots of subjects they mightn't be interested in, to third-level where the are doing a topic of interest to them, it's difficult to predict which students will be more successful.
- The problem with using "hours of the course" is that it's a blunt instrument, are more lectures and less self-study better, or less lectures and more self-study? etc.
It's important to recognise that similar degrees in different HEIs teach different content (and therefore often have different learning outcomes), but that doesn't mean one is better or worse than the other. In third-level institutes around the country we are trying to create rounded learners who are learning how to learn. This isn't like training courses, where an exact like-with-like comparison is possible; we want empowered students who engage in self-study (particularly in their final year). Because of these issues there will be a significant disparity from student to student even within the same third-level programme.
His argument about "Grade Inflation" is ludicrous; he incorrectly states that TCD has had less of an increase in First Class honours than all the other universities. It hasn't. There are lots of reasons why students are getting more Firsts in the last decade: the wider diversity of students in classrooms (particularly the often highly-motivated international students), the wider availability of useful online resources to help students learn, the increase in the number of lecturers with educational training as well as discipline expertise, and the increased number (and more diverse range) of courses available in all third-level institutes.
On Twitter Paddy Cosgrave mentioned that he would prefer to hire a computer science student from MIT than from the 5000th-ranked higher education institute (here and here). I will note that the criteria for ranking third-level institutes are based almost exclusively on the research output of these institutes and have very little (or nothing in some cases) to do with the quality of their programmes or the quality of the teaching on those programmes.
A lack of understanding of statistics is evident in Paddy Cosgrave's statements that he is using the Undergraduate Awards as a metric for college courses. He needs to consider issues such as:
- How can you measure the quality of all of the universities on the basis of a couple of hundred students out of over 10,000 enrolled students? It's such a tiny, self-selected sample that it's shocking anyone would draw conclusions from it.
- How many people from each institute entered the Undergraduate Awards? The distribution of the results of the UA might just reflect the distribution of entrants (see the sketch after this list).
- What are the judging criteria for the UA? They are a mixture of highly behaviourist and highly subjective categories.
- Is the data normally or near-normally distributed?
- Why just consider the 2011 awards (when I can say for certain they were under-promoted in IoTs compared to universities that year)?
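To make the entrants-distribution point concrete, here is a minimal sketch in Python (the institutions, entry counts, and scoring model are entirely made-up assumptions for illustration, not real UA data) showing that when two institutions have equally able students but submit very different numbers of entries, the award counts simply mirror the entry counts:

```python
import random

random.seed(42)

# Hypothetical scenario: two institutions whose students are equally able
# (both cohorts drawn from the same score distribution).
# Institution A submits 500 entries; institution B submits only 50.
entries = [("A", random.gauss(60, 10)) for _ in range(500)] + \
          [("B", random.gauss(60, 10)) for _ in range(50)]

# The "judges" simply pick the top 50 scoring entries overall.
winners = sorted(entries, key=lambda e: e[1], reverse=True)[:50]

counts = {"A": 0, "B": 0}
for institution, _score in winners:
    counts[institution] += 1

print(counts)  # roughly {'A': 45, 'B': 5} -- the awards mirror the
               # volume of entries, not any difference in student quality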
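```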
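Even though both cohorts are drawn from an identical distribution, the institution that submitted ten times as many entries takes roughly ten times as many awards; an award league table built on such data is little more than an entry league table.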
He also seems to have no idea of the purpose of the NQAI Framework, or the purpose of Learning Outcomes.
Learning outcomes are not easy in Computer Science because of the ever-changing face of these courses (a course that was high quality last year might be out of date this year; things are changing rapidly in this domain; look at things like Cloud, Apps, Exadata, etc.). So just looking at the outcomes of programmes based on grades is highly tricky.