When I first started working as an academic, I did a year in a traditional highly-ranked university. It was there that I first encountered an ambivalent attitude towards evaluations that I have since found to be pervasive across academia. Evaluation work is valued by institutions for the money attached to it but dismissed as not ‘proper’ research. In this blog post I challenge this value system and explain why I think academics working in sociology should do more evaluation work.
Value and research
Academic research is shot through with value hierarchies that individual academics have to negotiate in deciding which conferences to attend, where to publish, and even whom to acknowledge. One of the most important of these hierarchies concerns research funding. It’s not just a matter of how much funding you secure but of what sort of money it is. ‘Proper’ research should be done for the sake of knowledge alone, free of external motivations and agendas. From this point of view, the best and purest funding is research council money and the worst and dirtiest is evaluation work for an external body, whether public, private or third sector.
We had research council money for the project which this website is based around. This meant we could do the project largely on our own terms; it was our idea and the funders left us to it, apart from expecting a little reporting now and then. As well as the money, research council funding has status, giving you some institutional freedom (that I was terrible at exploiting) and 15 minutes of fame in your field. But it is far from ‘pure’. In order to get the money, you need to promise a lot, and we felt under pressure to deliver even more than we’d promised. We also had to try to ensure that our work had impact beyond academia – something that is harder to achieve and trickier to navigate than funding councils and HEFCE suggest. But there are much bigger problems linked to research council funding.
The pragmatic case for evaluation
In the round where we were successful, only 4% of the projects submitted to the ESRC for education got funded. Research council bids take weeks of work. In 96 out of 100 cases, that time and labour were wasted, as it’s very difficult to recycle a bid written for a research council (I’ve had loads rejected and never managed it). As well as the sheer amount of time invested in such bids, many academics become despondent after a series of rejections over a number of years. Given that funding targets are increasing and the money available is decreasing (as well as becoming more concentrated in elite institutions), we can expect rejection rates to rise if we continue to focus on research councils and similar funding sources such as the British Academy. So on pragmatic grounds, there’s a case to be made for doing evaluation work.
Because of the current focus on evidence, particularly in the public and third sectors, evaluation work is one of the few areas where there’s still significant funding. Certainly, there’s less of the big governmental money for this than when Labour were in power, but there are plenty of smaller projects which can be great for early career academics. Bidding for evaluation work also carries much better odds of success and takes vastly less time than research council bidding. Fewer people bid for evaluations, partly because of the snobbery about them and partly because the turnaround times for bids and reports are much faster than standard academic timescales. It takes me about a day to write a small evaluation bid for between £5,000 and £15,000. And if you choose which ones you go for, you should be successful about one time in three. Sometimes, if a funder has liked your work in the past, you may not need to bid at all. This is a very good money-to-time ratio, something which, as a freelancer, I think about a lot more than I did when attached to a university. However, the case for evaluation isn’t solely pragmatic.
The academic case for evaluation
For a while I worked at a self-funding research institute. We were tasked with collectively bringing in our salaries. This was an unattainable goal, but one which led us to bid for anything within our remit and for a few things that weren’t. I did a lot of evaluation work there, most of which, whilst mildly entertaining at the time, is not something I’d care to repeat. But I think if you’re selective about evaluation, it can be a rewarding way to use social research skills. Evaluation is genuinely useful. You don’t have to craft stories about impact. In my experience, organisations do change their practice based on the findings of evaluations. They employ new staff, rework existing schemes, and produce new guidelines and training for staff and others. So what do I mean by being selective?
I have my own share of evaluation horror stories. I worked on the team who evaluated Teach First, a scheme to place successful graduates in challenging London schools for two years, training them on the job. The government rolled out Teach First before we’d even completed our evaluation, and Teach First cherry-picked from our lengthy report. They have never, to my knowledge, cited the paper where we argued that the scheme is a form of class colonialism. So the first selection criterion is to work with people and organisations who are genuinely open to seeing what you find and acting on it, rather than being interested in using your work to bolster their own position. For this to be possible (and here’s the second selection criterion) you need to have shared values and concerns with the funders. That way, they will be able to hear what you say, and you will feel that you are using your skills to do something authentic to your academic identity rather than in opposition to it. When this happens, both you and they will learn from the work. Finally, there needs to be mutual respect, so that they trust your research skills and you trust their understanding of what they do and what they need from you.