Author: Rohan Cattell
Posted: 25/03/2019
Views: 1268

This is the second part of a 3-part series on the new Health Roundtable RSI model. Part 1 looked at our rationale for updating the algorithm for expected length of stay, Part 2 (this post) describes the new model, and Part 3 will look at the impact on members.

*Note: Whilst no specific knowledge of the statistical modelling techniques discussed is needed, this post is more technical than the previous one. Don't worry if you don't know what a GAM, GLM or elastic net is; it's enough to know that they are types of regression that can be used for statistical modelling.*

Once we had decided to update the RSI, we had to decide which type of model we would use. There are various types of regression that might have been suitable, and we spent some time investigating the options before eventually settling on a generalised additive model (GAM). The main alternative would have been some other generalised linear model (GLM) with penalisation, such as the elastic net algorithm.

In the end we chose GAMs partly because of their natural handling of non-linear effects, especially with complex interactions. This allowed us to easily develop prediction surfaces for two continuous predictors together, broken down by subgroup.

The predictors in the original RSI model were:

- DRG
- care type
- admission type (mapped from urgency status as emergency or other)
- transfer in
- separation mode (with distinction for transfers out from smaller hospitals)
- age group
- complex (binary flag based on ICD codes in more than 3 chapters)

Note that these were all categorical due to the limitations of the model. For our new GAM-based model we have:

- ADRG (adjacent DRG: the first 3 letters of the DRG)
- care type (Acute or mental health only)
- ECCS (continuous complexity score, 0-32, from which the final letter of the DRG is derived)
- sex
- age (now continuous)
- admission type (as above)
- transfer in
- transfer out

Deaths are now excluded from the model rather than being adjusted for. Only Acute and mental health care types are included in the current model. We hope to develop separate models for subacute care types at some point in the future.
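To make the new predictor set concrete, here is a minimal sketch of deriving these features from a raw episode record. The field names and input format are hypothetical, not the actual Health Roundtable schema:

```python
# Hypothetical sketch: field names and codes are illustrative only.

def derive_features(episode):
    drg = episode["drg"]                      # e.g. "J64B"
    return {
        "adrg": drg[:3],                      # adjacent DRG: first 3 letters
        "care_type": episode["care_type"],    # "Acute" or "Mental health"
        "eccs": episode["eccs"],              # continuous complexity, 0-32
        "sex": episode["sex"],
        "age": episode["age"],                # continuous, no grouping
        "admission_type": "emergency" if episode["urgency"] == "emergency" else "other",
        "transfer_in": episode["transfer_in"],
        "transfer_out": episode["transfer_out"],
    }

features = derive_features({
    "drg": "J64B", "care_type": "Acute", "eccs": 7.2, "sex": "M",
    "age": 75, "urgency": "emergency", "transfer_in": False, "transfer_out": False,
})
print(features["adrg"])  # J64
```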

**Disclaimer: the above is subject to final testing and may change before release.**

The most important changes here are the move to continuous variables for age and part of the DRG. ECCS scores are calculated for all episodes as part of the DRG assignment. Essentially, the ECCS determines whether an episode is classified into an A, B or C DRG for those DRGs that make that distinction. An ECCS score gives a much more fine-grained estimate of complexity than the three-state A, B, C designation.
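The information loss from the letter split can be seen in a small sketch. The cut-points below are invented for illustration (the real thresholds vary by DRG):

```python
# Illustrative only: the real A/B/C cut-points vary by DRG; these thresholds
# are made up to show how the letter split discards within-band detail.

def drg_letter(eccs, b_cut=4.0, a_cut=10.0):
    """Map a continuous complexity score to the coarse A/B/C designation."""
    if eccs >= a_cut:
        return "A"   # highest complexity
    if eccs >= b_cut:
        return "B"
    return "C"       # lowest complexity

# Episodes at ECCS 4.1 and 9.9 land in the same letter despite very
# different complexity; the continuous score keeps that distinction.
print(drg_letter(4.1), drg_letter(9.9))  # B B
```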

Age is also now modelled as a continuous variable. There is really no justification for modelling in groups other than the limitations of the modelling technique.

Another important consideration in the model was the choice of distribution family to use for the GAM. For those interested in the technical details, we chose the gamma family with a log link function. This was intuitively the right choice due to the skewed nature of the LOS data, and testing also showed it gave the best results when compared to Gaussian or Poisson families or alternative link functions. One drawback we found was that the gamma model was computationally much slower than the Gaussian model.
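The skewness that motivates the gamma family can be demonstrated with toy data (not HRT data): values drawn from a gamma distribution are right-skewed, so the mean sits above the median, which a symmetric Gaussian family handles poorly.

```python
import random
import statistics

# Toy illustration: LOS-like values drawn from a gamma distribution are
# right-skewed, which is one reason a gamma family with a log link suits
# length-of-stay data better than a symmetric Gaussian.
random.seed(42)
los = [random.gammavariate(2.0, 2.0) for _ in range(10_000)]  # mean ~ 4 days

print(round(statistics.mean(los), 2))    # close to 4
print(round(statistics.median(los), 2))  # below the mean: right skew
```

The log link has the added benefit of guaranteeing positive predicted LOS values.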

To keep performance of the training in check and also to keep memory requirements reasonable we broke the model down by MDC (major diagnostic category, the top level grouping of DRGs), running separate regressions for each.
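The partitioning idea can be sketched as follows; `fit_gam` here is a stand-in placeholder, not the real fitting routine:

```python
from collections import defaultdict

# Sketch of fitting one model per MDC rather than a single global model.

def fit_gam(episodes):
    # placeholder for the real fit (a gamma-family GAM with a log link)
    return {"n": len(episodes)}

def fit_by_mdc(episodes):
    groups = defaultdict(list)
    for ep in episodes:
        groups[ep["mdc"]].append(ep)   # MDC: top-level grouping of DRGs
    # each MDC gets its own regression, keeping memory and runtime in check
    return {mdc: fit_gam(eps) for mdc, eps in groups.items()}

models = fit_by_mdc([{"mdc": "06"}, {"mdc": "06"}, {"mdc": "09"}])
print(sorted(models))  # ['06', '09']
```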

To illustrate the advantage of using the continuous complexity variable, consider the following example, showing episodes in adjacent DRG family J64 – Cellulitis.

**(acute care type, emergency admission, male, 75 years old, discharged home, not transferred in, ‘complex’)**

'Complex' is the condition from the old model, which we have included to make a comparison possible.

If we run a smoother through the points we can get an estimate of average LOS by ECCS.

**(acute care type, emergency admission, male, 75 years old, discharged home, not transferred in, ‘complex’)**

The smoother applied here has been trained on the data you see, unlike an actual expected LOS calculation which usually needs to work on new unseen data. Ideally our model would get close to this but without the bumps that are artefacts of this particular selection of data.
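As a rough idea of what a smoother does, here is a minimal sketch using binned means of LOS over a sliding ECCS window. The actual chart uses a proper statistical smoother; this just conveys the concept, and the sample points are invented:

```python
# Minimal smoother sketch: mean LOS within a sliding ECCS window.

def smooth_los(points, window=2.0):
    """points: (eccs, los) pairs; returns (eccs, mean LOS within +/- window)."""
    out = []
    for x0, _ in sorted(points):
        nearby = [los for eccs, los in points if abs(eccs - x0) <= window]
        out.append((x0, sum(nearby) / len(nearby)))
    return out

pts = [(0, 2.0), (1, 2.5), (5, 4.0), (6, 5.0), (10, 9.0)]
print(smooth_los(pts)[0])  # (0, 2.25)
```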

To illustrate the difference between RSI versions, first consider how the RSI v1 model works. This model simply splits the ECCS range in two at the DRG threshold in order to make an estimate of expected LOS.

**(acute care type, emergency admission, male, 75 years old, discharged home, not transferred in, ‘complex’)**

This RSI version 1 model was trained on DRG 9 data, using a different dataset to our production version, but it illustrates the approach. It's important to note that both this model and the RSI v2 model we'll show next were evaluated against the same independent set of data, independent in the sense that none of the episodes represented by dots here were in the training set.

In both DRGs J64A and J64B, the model overestimates at one end and underestimates at the other. This is a limitation of the grouping into DRGs.
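The limitation can be made concrete with contrived numbers: within each DRG letter, the v1 model predicts a single flat expected LOS, whereas a log-link curve in ECCS varies smoothly across the band. All values below are invented for illustration:

```python
import math

# Contrived numbers: within each band the v1 model is flat, so it
# overestimates at the low-ECCS end and underestimates at the high end.

def v1_expected_los(eccs, b_cut=4.0):
    # one flat estimate per DRG letter (values invented)
    return 3.0 if eccs < b_cut else 7.0

def v2_expected_los(eccs):
    # a smooth curve in ECCS, as a GAM with a log link would produce
    return math.exp(0.8 + 0.15 * eccs)

for eccs in (1, 3.9, 4.1, 10):
    print(eccs, v1_expected_los(eccs), round(v2_expected_los(eccs), 1))
```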

Now consider the RSI v2.0 model.

This model much better reflects the underlying relationship between ECCS and LOS. This is just for one fixed age range (I took a 1-year interval around 75 for the sample points), but the full model also uses a continuous age variable, so there is a similar curve for any age we want to pick.

Here's one for 40-year-olds:

**(acute care type, emergency admission, male, 40 years old, discharged home, not transferred in, ‘complex’)**

Note the different scales on this chart. Finally, here's one for 90-year-olds:

**(acute care type, emergency admission, male, 90 years old, discharged home, not transferred in, ‘complex’)**

We can see that the model is happy to make predictions in areas with limited amounts of actual data, and those predictions will at least be consistent with what has been observed. The old model struggles to make predictions when the combination of factors resolves to limited numbers in the reference data; in these cases the RSI v1 model will usually overfit the reference data, heavily influenced by atypical examples.

We could equally have fixed an ECCS range and plotted a curve for age, but as you can see ECCS is a much bigger influence on LOS than age in this case (and generally). We could also plot this as a surface in 3D with age and ECCS together, but these kinds of plots can be hard to translate into 2D. It’s a good mental model though.

One thing that catches the eye when looking at these charts is the outliers that aren't explained by the factors in the model. Every model has its limitations, but it's an interesting question whether these are due to some common missing factor that we could model, missing documentation that would have pushed these episodes further to the right, things we will never be able to model, or actual variation in service delivery. See for example this group in the 75-year-old example above:

Whatever lies beneath this mystery is unlikely to be resolved for RSI v2.0 but in creating a model that accurately reflects the distribution of most episodes we have now paved the way for further investigation into the remaining outliers.

In the final part of this series I’ll be looking at how the new model rolls up into RSI values and what it is going to look like for members when it rolls out.
