How we reach a rating

To support the transparency and consistency of our judgements, we have introduced a scoring framework into our assessments.

Where appropriate, we’ll continue to describe the quality of care using our 4 ratings: outstanding, good, requires improvement, or inadequate.

When we assess evidence, we assign scores to the relevant evidence categories for each quality statement that we’re assessing. We then build ratings up from quality statement scores, through key question scores, to an overall rating.

This approach makes clear the type of evidence that we have used to reach decisions.

Some types of services are exempt from CQC's legal duty to provide a rating. Read our guidance for non-rated services.

Scoring

Using scoring as part of our assessments will help us be clearer and more consistent about how we’ve reached a judgement on the quality of care in a service. The score will indicate a more detailed position within the rating scale. This will help us to see if quality or performance is moving up or down within a rating.

For example, for a rating of good, the score will tell us if this is either:

  • in the upper threshold, nearing outstanding
  • in the lower threshold, nearer to requires improvement.

Similarly, for a rating of requires improvement, the score would tell us if it was either:

  • in the upper threshold, nearing good
  • in the lower threshold, nearer to inadequate.

Our quality statements clearly describe the standards of care that people should expect.

To assess a specific quality statement, we will take into account the evidence we have in each relevant evidence category. This will vary depending on the type of service or organisation. For example, the evidence we collect for GP practices will differ from the evidence available to us in an assessment of a home care service.

Evidence could be information that we either:

  • already have, for example from statutory notifications
  • actively look for, for example from an on-site inspection.

Depending on what we find, we give a score for each evidence category that is part of the assessment of the quality statement. All evidence categories and quality statements are weighted equally.

Scores for evidence categories relate to the quality of care in a service:

4 = Evidence shows an exceptional standard
3 = Evidence shows a good standard
2 = Evidence shows some shortfalls
1 = Evidence shows significant shortfalls

As we have moved away from assessing at a single point in time, we aim to assess different areas of the framework on an ongoing basis. This means we can update scores for different evidence categories at different times.

The first time we assess a quality statement, we score all the relevant evidence categories. After this, we can update our findings by updating individual evidence category scores. Any changes in evidence category scores can then update the existing quality statement score.

We will follow these initial 3 stages for services that receive a rating:

  1. Review evidence within the evidence categories we’re assessing for each quality statement.
  2. Apply a score to each of these evidence categories.
  3. Combine these evidence category scores to give a score for the related quality statement.

After these stages, the quality statement scores are combined to give a total score and then a rating for the relevant key question (safe, effective, caring, responsive, and well-led).

We then aggregate the scores for key questions to give a rating for our view of quality at an overall service level. See how we aggregate ratings for different types of services.

How we calculate quality statement scores

When we combine evidence category scores to give a quality statement score, we calculate this as a percentage. This provides more detailed information at evidence category and quality statement level. See the example of calculating scores.

To calculate the percentage, we divide the total of the evidence category scores by the maximum possible score. This maximum is the number of relevant evidence categories multiplied by 4, the highest score for each category. The result is a percentage score for the quality statement.

We then convert this back to a score. This makes it easier to understand and combine with other quality statement scores to calculate the related key question score.

We use these thresholds to convert percentages to scores:

25 to 38% = 1
39 to 62% = 2
63 to 87% = 3
88% and above = 4
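As a concrete illustration of the arithmetic above, the calculation can be sketched as follows. The function name is ours, and the handling of any fractional percentage falling between the published bands is an assumption:

```python
def quality_statement_score(evidence_scores: list[int]) -> int:
    """Combine evidence category scores (each 1 to 4, equally weighted)
    into a quality statement score using the percentage thresholds."""
    # Maximum possible score: number of categories x 4 (the top score)
    max_possible = len(evidence_scores) * 4
    percentage = sum(evidence_scores) / max_possible * 100
    if percentage >= 88:   # 88% and above = 4
        return 4
    if percentage >= 63:   # 63 to 87% = 3
        return 3
    if percentage >= 39:   # 39 to 62% = 2
        return 2
    return 1               # 25 to 38% = 1

# Three evidence categories scored 3, 3 and 2:
# (3 + 3 + 2) / (3 x 4) = 66.7%, which falls in the 63 to 87% band
print(quality_statement_score([3, 3, 2]))  # 3
```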

How we calculate key question scores

We then use the quality statement score to give us an updated view of quality at key question level.

Again, we calculate a percentage score: we divide the total of the quality statement scores by the maximum possible score. This is the number of quality statements under the key question multiplied by 4, the highest score for each statement. The result is a percentage score for the key question.

At key question level, we translate this percentage into a rating rather than a score, using these thresholds:

25 to 38% = inadequate
39 to 62% = requires improvement
63 to 87% = good
88% and above = outstanding

By using the following rules, we can make sure any areas of poor quality are not hidden:

  • If the key question score is within the good range, but one or more of the quality statement scores is 1, the rating is limited to requires improvement.
  • If the key question score is within the outstanding range, but one or more of the quality statement scores is 1 or 2, the rating is limited to good.
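Under the same assumptions (equal weighting, and an illustrative function name of our own), the key question calculation and its limiting rules can be sketched as:

```python
def key_question_rating(statement_scores: list[int]) -> str:
    """Combine quality statement scores (each 1 to 4) into a key
    question rating, applying the rules that stop areas of poor
    quality being hidden by a high average."""
    percentage = sum(statement_scores) / (len(statement_scores) * 4) * 100
    if percentage >= 88:
        rating = "outstanding"
    elif percentage >= 63:
        rating = "good"
    elif percentage >= 39:
        rating = "requires improvement"
    else:
        rating = "inadequate"
    # A quality statement score of 1 limits a good-range result
    if rating == "good" and 1 in statement_scores:
        rating = "requires improvement"
    # A quality statement score of 1 or 2 limits an outstanding-range result
    if rating == "outstanding" and any(s <= 2 for s in statement_scores):
        rating = "good"
    return rating

# Eight statements, seven scored 4 and one scored 2:
# 30 / 32 = 93.75% (outstanding range), limited to good by the second rule
print(key_question_rating([4, 4, 4, 4, 4, 4, 4, 2]))  # good
```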

Our judgements go through quality assurance processes.

For services that have not previously been inspected or rated, we will need to assess all quality statements in a key question before we publish the rating. For newly registered services, we’ll usually assess all quality statements within 12 months.

How we aggregate ratings using the rating principles

Overall location ratings are produced on the basis of the following principles:

  1. The 5 key questions are all equally important and are weighted equally when aggregating.
  2. At least 2 of the 5 key questions would normally need to be rated as outstanding and 3 key questions rated as good before an aggregated rating of outstanding can be awarded.
  3. Several combinations of ratings can lead to an overall rating of good. The overall rating will normally be good if there are no key question ratings of inadequate and no more than one key question rating of requires improvement.
  4. If 2 or more of the key questions are rated as requires improvement, then the overall rating will normally be requires improvement.
  5. If 2 or more of the key questions are rated as inadequate, then the overall rating will normally be inadequate.
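The normal cases described by these principles can be sketched as follows. Reading principle 2 as "at least 2 outstanding and the remaining key questions rated good" is our interpretation, and the fallback for combinations the principles do not cover (for example, a single inadequate rating) is an assumption, since those are judged case by case:

```python
def overall_rating(key_question_ratings: list[str]) -> str:
    """Aggregate the 5 key question ratings into an overall location
    rating, following the normal-case rating principles."""
    counts = {rating: key_question_ratings.count(rating) for rating in
              ("outstanding", "good", "requires improvement", "inadequate")}
    if counts["inadequate"] >= 2:                              # principle 5
        return "inadequate"
    if counts["requires improvement"] >= 2:                    # principle 4
        return "requires improvement"
    # Principle 2: at least 2 outstanding, remainder good (our reading)
    if counts["outstanding"] >= 2 and counts["outstanding"] + counts["good"] == 5:
        return "outstanding"
    # Principle 3: no inadequate, at most one requires improvement
    if counts["inadequate"] == 0 and counts["requires improvement"] <= 1:
        return "good"
    # Combinations outside the principles; this fallback is an assumption
    return "requires improvement"

print(overall_rating(["outstanding", "outstanding", "good", "good", "good"]))
# outstanding
```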