As a culture, we use rating systems to tell us how well a product or service operates, tastes, smells, feels, or satisfies, but what do these often-numerical representations really mean? In academia, science, medicine, marketing, and other professional fields, ratings typically rest on cohesive criteria and carry operationalized meanings, but not everything that is rated lends itself to drawing such distinct meaning from its ratings. The case in point for this research is film. Diverse modes of reviewing film exist: film critics, web interfaces, user reviews, and a myriad of other possibilities both online and in print. These scales typically range from 0 to 5 or 0 to 10, but how does giving a film a 2.0 versus a 5.0 really say anything about the film? This project explores the research methods I have deployed in relation to quantitative measures of rating film in a way that is atypical of current processes. The question “What is the best way to rate a film?” was closely analyzed and alternatives were proposed. In a world with so many methods, ranging from the numerical to the alphabetical, what qualifications deserve the ratings that are given?

Minute By Minute Reviewer investigates rating systems and subjectivity, and employs new methods of objective and transparent rating. Existing rating systems such as IMDB, Rotten Tomatoes, and MetaCritic, along with a variety of online newspaper reviewers, were taken into consideration as accepted metrics of critique. Using charting software to visualize my own versions of evaluation, I rated films using different methods and at varying granularities, ranging from a rating every single minute to one every five minutes, and from no comments to comments every five minutes.
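
To make the method concrete, the following is a minimal sketch of how such interval-based ratings might be recorded and charted. The data structure, field names, sample scores, and five-minute interval are illustrative assumptions, not the project's actual data or tooling:

```python
# Hypothetical sketch of a minute-by-minute rating log and its chart.
# The RatingEntry fields, sample scores, and interval are assumptions
# made for illustration; they are not the project's actual data.
from dataclasses import dataclass
import matplotlib.pyplot as plt

@dataclass
class RatingEntry:
    minute: int          # timestamp within the film
    category: str        # e.g. "audio", "acting", "plot"
    score: float         # rating on a 0-10 scale
    comment: str = ""    # optional note explaining the score

# Example log: one "audio" rating every five minutes.
log = [
    RatingEntry(5, "audio", 7.0, "score sets the tone well"),
    RatingEntry(10, "audio", 6.5),
    RatingEntry(15, "audio", 4.0, "dialogue drowned out by music"),
    RatingEntry(20, "audio", 3.5, "mix still muddy"),
    RatingEntry(25, "audio", 2.5, "key line inaudible"),
    RatingEntry(30, "audio", 6.0, "mix recovers"),
]

# Chart the ratings over time, as the charting software would.
minutes = [e.minute for e in log]
scores = [e.score for e in log]
plt.plot(minutes, scores, marker="o")
plt.xlabel("Minute of film")
plt.ylabel("Audio score (0-10)")
plt.title("Minute-by-minute audio rating")
plt.ylim(0, 10)
plt.show()
```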

The project analyzes how we take ratings for granted. A high review does not mean that we will like something, so when producing my ratings, I gave specific reasons for why I enjoyed or disliked a particular section of the film. For example, if at minute 25 I gave the audio a 2.5, I would give a specific reason, tied to that point in the film, for that rating. This method of rating also allows the consistency of a film to be evaluated. At times, the chart shows large dips in a film's ratings, which explain why an overall rating may not be as high as one would expect from simply reading a textual review. My scale was based on a standard 10-point scale; I find that 5 stars do not allow a wide enough visual range for readers to evaluate a film. With approximately 40 ratings across categories including acting, sound, cinematography, plot, and believability, the review is at least transparent. As with all things of a similar nature, people will take what they want from the content, but I struggle with the snapshot review and its scalar methodology in its current draft. This method does allow for a higher level of transparency, which exposes bias within a review. Although Minute By Minute Reviewer may not be the most practical way to review a film, it opens up a conversation about the trust mechanisms surrounding these systems.
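
As a hypothetical illustration of how a mid-film dip drags an overall average down even when most intervals score highly (the scores and the dip threshold below are invented for the example):

```python
# Invented scores, one per interval: a film rated mostly 8s with a
# three-interval slump. The 5.0 dip threshold is also an assumption.
scores = [8.0, 8.5, 8.0, 3.0, 2.5, 3.5, 8.0, 8.5]

overall = sum(scores) / len(scores)
dips = [(i, s) for i, s in enumerate(scores) if s < 5.0]

print(f"Overall rating: {overall:.2f} / 10")   # 6.25, despite mostly 8s
print(f"Dips below 5.0 at intervals: {dips}")  # the mid-film slump
```

A single summary score of 6.25 hides the slump entirely; the per-interval chart makes both the dip and the reason for the lower overall rating visible.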