Wednesday 21 November 2012

History and GIS

This week I am at an ERSA-organised workshop on the use of geographical information systems (GIS) in the analysis of economic history. For the occasion Johan Fourie brought out Alex Moradi of Sussex University, and after the first afternoon there are already some interesting points to write about.

Alex explained that historical GIS is about structuring, mapping and analysing geographies of the past. It is a way of interrelating information from different sources - some of it spatial. GIS software can be used as a database management tool (layers are key), for analysing data (clustering, spatial autocorrelation) and for visualising finished products (make your own maps). We will be learning more about that tomorrow.
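To give a flavour of the analysis side, here is a minimal sketch of a global Moran's I test (a standard spatial autocorrelation statistic) in Python. The shapefile and the wage column are made-up placeholders, not data from the workshop:

```python
# A minimal sketch of a spatial autocorrelation test, assuming a shapefile
# "districts.shp" with one polygon per district and a numeric "wage" column
# (both hypothetical).
import geopandas as gpd
from libpysal.weights import Queen
from esda.moran import Moran

gdf = gpd.read_file("districts.shp")
w = Queen.from_dataframe(gdf)   # districts that share a border are neighbours
w.transform = "r"               # row-standardise the weights

mi = Moran(gdf["wage"], w)      # global Moran's I with permutation inference
print(f"Moran's I = {mi.I:.3f}, pseudo p-value = {mi.p_sim:.3f}")
```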

As someone interested in geographical economics, I found Alex's borders example interesting. Geographical variables like altitude or latitude are not really used as explanatory variables, but rather as a way to help identify differences or discontinuities. I look forward to his inputs on our research ideas.

Sunday 18 November 2012

If you have a moment to spare

November has turned out to be quite busy with lots of meetings about the campus plan and admission of next year's postgrads. And some marking of exam scripts. The result is that I have not written about the labour unrest in the Western Cape and missed the best part of the last two issues of The Economist. To add some excitement to this mix, this week has a Scholarship of Teaching and Learning conference and an ERSA workshop on history and GIS! If you have a moment to spare I highly recommend this short video: Werner Herzog reads Where's Waldo. Enjoy.


Sunday 4 November 2012

More on lecturer evaluation

In August I wrote a post on lecturer evaluation, having used factor analysis to narrow down the characteristics that our students think make for a good lecturer. The questionnaire has 25 items that students score on a 4-point scale from strongly disagree to strongly agree. The items include statements like: the lecturer plans and prepares thoroughly for contact sessions; the lecturer makes use of multimedia in support of learning; the lecturer explains the relevance of concepts and theories, or explains the relationship between theory and practice. The result was the following:

Component 1 seemed like the baseline characteristics of a good lecturer – being friendly, accessible, fair, offering support, an all-round professional who can explain the work. It accounted for 19% of the variance in the data. Component 2 looked like the things that you have to learn to become a good lecturer: it is not only about knowing your field, you also need to know a bit about formulating outcomes, action verbs, putting it into a study guide and using it. This explained 15% of the variance. Component 3 looked like it was about enthusiasm and adding value – engaging with the students – and it explained another 15% of the variance. The other 50% of the variance was not explained by these three constructs.
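For the curious, the analysis behind those numbers would look roughly like the sketch below. This is a hedged reconstruction, not the code actually used: the file name and the q1..q25 column names are placeholders for the 25 questionnaire items.

```python
# A rough reconstruction of the factor analysis, assuming "responses.csv"
# holds one row per student with the 25 item scores in columns q1..q25
# (hypothetical names).
import pandas as pd
from factor_analyzer import FactorAnalyzer

df = pd.read_csv("responses.csv")

fa = FactorAnalyzer(n_factors=3, rotation="varimax")
fa.fit(df)

# Which items load on which component
loadings = pd.DataFrame(fa.loadings_, index=df.columns,
                        columns=["comp1", "comp2", "comp3"])
print(loadings.round(2))

# Proportion of variance explained per component (e.g. ~0.19, 0.15, 0.15)
_, prop_var, cum_var = fa.get_factor_variance()
print(prop_var.round(2), cum_var.round(2))
```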

Now, students don't really like doing these lecturer evaluations for all their courses. After the second or third one they realise that it takes up valuable beer-drinking time. And as with many of these "customer satisfaction" type surveys, they don't really see their inputs making a difference. Unless you are repeating the course next year, you would not know if the lecturer took the low score for providing quick feedback to heart.

On the lecturers' side, the process does not seem to provide that much useful information or action. If it is going badly in a course, the School director knows about it long before the lecturer evaluation is done. Low scores on a few items might get a mention at the meeting where next year's teaching load is assigned. If you are really keen, scoring low on "basing assessment on learning outcomes" might get you to enrol in the next assessment training course on campus, but then you also do have some articles to write. At best (and worst) the aggregate lecturer evaluation score matters for the top-20% bonus at the end of the year.

Which brings me to my point: if we are capturing only half of what makes for a good lecturer and the result is not particularly useful to students or lecturers, shouldn't we replace the whole system with a simple vote by text message: rate your lecturer with a score out of 100?

This has some support in what has been written about so-called "thin slicing": In his book Blink, Malcolm Gladwell writes about people’s ability to 'thin slice' – to judge what is good or important from a narrow period of experience. He goes on to argue that having too much information can interfere with the accuracy of a judgement.

To get an idea of the difference between our questionnaire score and a 'thin-sliced' score, we asked our students for both this semester. For the analysis I have 869 usable responses from undergraduates in 5 modules taught by 9 lecturers.

The average score from the questionnaire was 85.7% with a standard deviation of 13.3. The figure shows the distribution across the courses.

The average of the 'thin slice' scores out of 100 was 82% with a standard deviation of 17. The correlation coefficient between the two sets of scores was .741, significant at the 5% level. Controlling for the different modules, the partial correlation is slightly lower at .731. A paired-samples t-test showed that the difference between the means is statistically significant.
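For anyone who wants to run the same comparison, the statistics can be computed along the following lines. The file and column names are assumptions, not the actual dataset; the partial correlation is done by regressing module dummies out of both scores and correlating the residuals.

```python
# A sketch of the reported statistics, assuming "scores.csv" with columns
# "questionnaire", "thin_slice" and "module" (all hypothetical names).
import numpy as np
import pandas as pd
from scipy import stats

df = pd.read_csv("scores.csv")

# Simple correlation between the two sets of scores
r, p = stats.pearsonr(df["questionnaire"], df["thin_slice"])
print(f"r = {r:.3f}, p = {p:.4f}")

# Partial correlation controlling for module: regress module dummies out of
# both scores, then correlate the residuals.
dummies = pd.get_dummies(df["module"], drop_first=True).astype(float)
X = np.column_stack([np.ones(len(df)), dummies.to_numpy()])

def residualise(y):
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return y - X @ beta

r_partial, _ = stats.pearsonr(residualise(df["questionnaire"].to_numpy(float)),
                              residualise(df["thin_slice"].to_numpy(float)))
print(f"partial r = {r_partial:.3f}")

# Paired-samples t-test on the difference between the means
t, p = stats.ttest_rel(df["questionnaire"], df["thin_slice"])
print(f"t = {t:.2f}, p = {p:.4f}")
```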

It seems that the questionnaire is useful to the extent that it ameliorates a general dislike of the lecturer. For 'thin slice' scores between 0 and 50%, the average questionnaire score was 63%. Having considered all 25 elements of the evaluation, it turns out that the lecturer is not as bad as the initial 'thin slice' score out of 100 indicated. This effect gets smaller at higher scores: for 'thin slice' scores between 71 and 80%, the average questionnaire score was 83%.
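The binned comparison itself is straightforward once the scores are in a data frame; something like the following, with the same assumed columns as before:

```python
# Average questionnaire score per 'thin slice' band, using the same assumed
# "scores.csv" columns as in the earlier sketch.
import pandas as pd

df = pd.read_csv("scores.csv")
bands = pd.cut(df["thin_slice"], bins=[0, 50, 60, 70, 80, 90, 100],
               include_lowest=True)
print(df.groupby(bands, observed=True)["questionnaire"]
        .agg(["mean", "count"]).round(1))
```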

Thus, for all its shortcomings, it does seem that the questionnaire serves a purpose. Next we'll have to try a five-point scale and see if we can measure more accurately!