"For almost a century, attempts have been made to evaluate the teaching effectiveness of instructors in higher education. Much of this effort has centred on an attempt to create instruments that would allow students to make this assessment. These instruments have varied over both place and time, and have created one of the longest lasting debates in higher education. ... One source stated that there were close to 3000 articles published on student evaluation of teaching (SET) from 1990 to 2005 alone. Published findings on the topic are so voluminous that many researchers have been using the method of meta–analysis, in which the case is not a subject but an entire published article. Nevertheless, little agreement has been reached on key points. ... One of the difficulties of studying SET has been finding adequate and appropriate large samples, because of the confidential and anonymous nature of the inventories for both students and instructors. An alternative has become available online. Increasingly popular databases, such as ratemyprofessors.com, uloop.com, koofers.com, myEdu.com and studentsreview.com, offer large and easily accessible sources of data. Although these sources are tempting, a question remains about their validity. ... ‘Does ratemyprofessor.com really rate my professor?' "
What does ratemyprofessors.com actually rate? (2013). Assessment & Evaluation in Higher Education. doi: 10.1080/02602938.2013.861384
Abstract: This research looks closely at claims that ratemyprofessors.com creates a valid measure of teaching effectiveness because student responses are consistent with a learning model. While some evidence for this contention was found in three datasets taken from the site, the majority of the evidence indicates that the instrument is biased by a halo effect, and creates what most accurately could be called a 'likeability' scale.
More on this exciting field of research