For better or for worse, the assessment of research quality is one of the primary drivers of the behaviour of the academic community, with all sorts of potential for distorting that behaviour. So, if you are going to assess research quality, how do you do it? This chapter explores some of the problems and possibilities, with particular reference to the UK Research Assessment Exercise and the subsequent Research Excellence Framework, and to the work of the Framework Programme 7 European Education Research Quality Indicators (EERQI) project. It begins by reflecting on the previous discussion of generic criteria of quality that can be applied to research, and the tension between such criteria and the diverse and sometimes contradictory requirements of educational research. It then examines attempts to identify measurable indicators of quality, including the location of the publication, citation and download counts, and approaches based on semantic analysis of machine-readable text, but finds all these quasi-‘scientific’ attempts at quality assessment wanting (hence the ‘impossible science’). This is all the more the case because such measures attach to extrinsic correlates of quality rather than to its intrinsic characteristics, and are therefore likely to induce behaviours not conducive to quality enhancement. Instead, the chapter turns to a different approach, perhaps better expressed as quality ‘appreciation’, ‘discernment’, or even ‘connoisseurship’, which is rooted in the arts and humanities rather than in (quasi-)science. It considers whether this might offer a better approximation to the kind of judgement involved in assessing the quality of a piece of research writing than the metrics-based approaches favoured in current discussion.