Introduction

Bovine tuberculosis (TB) in Great Britain (GB) has generally been increasing over the last thirty years1 and has placed a considerable strain on the cattle industry. In 2012, 37,735 cattle were compulsorily slaughtered due to detected herd TB incidents,2 resulting in compensation costs of approximately £30 million to farmers in England from the Department for Environment, Food and Rural Affairs (Defra) that year.3

Cattle infected with Mycobacterium bovis (the aetiological agent of bovine TB) do not normally display clinical signs unless the infection is well advanced. Routine and targeted ante-mortem screening of herds using the tuberculin skin test is performed in GB for the control and surveillance of bovine TB, whereby officially TB-free (OTF) cattle herds are tested for evidence of exposure to M. bovis annually or every four years, depending on the past incidence of TB in a region (until December 2012, two-yearly and three-yearly testing frequencies had also been used).4 In England, nearly 60% of herds are tested annually for TB (those in the West and Midlands), whereas the remainder are tested every four years, reflecting the regionally clustered nature of the disease. All cattle herds in Wales have been tested annually since 2010, whereas Scottish herds are routinely tested every four years or not at all (Scotland was declared officially free from bovine TB by the European Commission in September 2009). Unbiased interpretation of the data from this surveillance programme is crucial to understanding the underlying incidence trends and thereby to informing discussion on the impact of the various control tools, including badger culling5 and vaccination.6

Defra currently publish the national statistics of GB bovine TB incidence as the monthly percentage of OTF herds tested that result in the OTF status being withdrawn (OTFW), reported since January 19962 (Fig. 1a, red line). We have previously shown that this method can lead to artefactual fluctuations in the reported trends7 when large numbers of herds are moved to more frequent testing, and can therefore be misleading if the downward parts of these fluctuations are interpreted as a decline in the underlying incidence. Recent trends in the national statistics of GB bovine TB incidence have been discussed in the current debate on the need for badger culling. To inform the debate most usefully, the incidence trends should be quantified using a method that is not liable to artefactual fluctuations.

We have previously presented an alternative method,7 adapted from a method presented by Cox,8 that distributes detected cattle herd incidents across the period between herd tests and provides a more accurate description of the underlying risk over time. However, the method is mathematically complex to compute and may not be sufficiently straightforward for use in the national statistics or for understanding by a lay audience.

At the beginning of 2014 it is expected that a public consultation will take place to decide how the national incidence trends of bovine TB in GB should be reported from the surveillance data. Here we present and apply a new, simple incidence-based method that adjusts the monthly percentage of OTF herds resulting in OTFW breakdowns as a function of the time since the last herd test, and which may quantify the underlying incidence trends in a way that is accessible to the general public. We use simulations from a stochastic mathematical model of herd testing and infection incidence over time7 (re-fitted to include the more recent data from 2010–2012) to compare the current Defra reporting method, the previously published adapted Cox method7 and the simple incidence-based method in their ability to represent the model-estimated underlying trends.

Methods

Data on the total number of OTF herds tested per month and the number of these tests that resulted in OTFW status from January 2003 to December 2012 for England, Wales and Scotland were obtained from the publicly reported national statistics published by Defra2 (data given in Appendix 2). The numbers of tests that resulted in OTFW during May 2011 – December 2012 are still subject to final confirmation and are presented as a range for each month in the national statistics, so for this period the mid-point of the reported range was used. Note that these data are not stratified by testing interval.

The stochastic dynamic model of herd testing and incidence of bovine TB in GB was described previously7 and was fitted to the national statistics data. The model contains a monthly risk of developing detectable M. bovis infection per herd. The times at which the national incidence risk changed and the magnitudes of those changes were estimated (assuming a linear spline model for the underlying trend), along with the times at which large-scale changes in testing frequencies occurred. The underlying model monthly risk is presented as the mean risk across all herds as a percentage, multiplied by twelve (Fig. 1b) to allow direct comparison with the current Defra method. Model parameters were estimated using maximum likelihood estimation for the period January 2003 – December 2012. The likelihood of observing the data (the number of OTFW breakdowns for a given month) was described previously,7 conditional on the model parameters and the number of observed OTF herds tested. One hundred stochastic simulations were run under the best-fit model with a given underlying incidence risk per month (Fig. 1b), recording the number of OTF herds tested per month and the number of these that resulted in OTFW (i.e. representing 100 simulated national surveillance data-sets with a known underlying incidence risk per month).
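As a rough illustration of this simulation scheme (not the fitted model itself), the following sketch generates one synthetic surveillance data set under an assumed piecewise-linear monthly risk; the herd count, the mix of testing frequencies and the risk values are placeholder assumptions, and all herds are given the same risk for simplicity.

```python
import numpy as np

rng = np.random.default_rng(1)

N_MONTHS = 120   # January 2003 - December 2012
N_HERDS = 5000   # placeholder herd count (assumption)

# Assumed underlying monthly risk of a herd developing detectable infection,
# rising as a simple linear spline (illustrative values, not the fitted ones).
knot_months = np.array([0, 80, 119])
knot_risks = np.array([0.0020, 0.0030, 0.0032])
monthly_risk = np.interp(np.arange(N_MONTHS), knot_months, knot_risks)

# Assign each herd a routine testing interval (in months) and a random offset
# determining which calendar months its tests fall in.
intervals = rng.choice([12, 48], size=N_HERDS, p=[0.6, 0.4])  # assumed mix
offsets = rng.integers(0, intervals)

tested = np.zeros(N_MONTHS, dtype=int)    # OTF herds tested per month
otfw = np.zeros(N_MONTHS, dtype=int)      # tests resulting in OTFW per month
infected = np.zeros(N_HERDS, dtype=bool)  # detectable infection present?

for t in range(N_MONTHS):
    # New detectable infections arise this month with probability monthly_risk[t].
    infected |= rng.random(N_HERDS) < monthly_risk[t]
    # Herds due a routine test this month.
    due = (t % intervals) == offsets
    tested[t] = due.sum()
    positives = due & infected
    otfw[t] = positives.sum()
    # Detected herds lose OTF status; here infection is simply cleared so the
    # herd re-enters the OTF pool, a simplification of the real process.
    infected[positives] = False
```

Repeating such draws 100 times with the fitted risk trajectory and the recorded testing-frequency changes would produce the kind of simulated surveillance data sets analysed below.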

Three methods were applied to these 100 stochastic simulations of herd testing to evaluate their ability to represent underlying incidence risk: the current Defra method (Fig. 1a, grey lines); the adapted Cox method (Fig. 1c); and the new simple incidence-based method which adjusts the current Defra method for the time spent between routine herd tests (Fig. 1d).

Defra publish the incidence trends over time as the percentage of OTF herds tested that result in an OTFW breakdown per month, smoothed using a 23-term Henderson moving average for seasonally-adjusted data (using the X-11 method from the freely available software http://www.census.gov/srd/www/x12a/ with the Windows interface, version 3.0).
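Readers wishing to reproduce the smoothing step outside X-11 can apply the symmetric 23-term Henderson filter directly; the sketch below uses the standard closed-form Henderson weights, but does not reproduce X-11's asymmetric end-of-series filters or its seasonal adjustment, so values near the ends of the series are left undefined.

```python
import numpy as np

def henderson_weights(n_terms: int = 23) -> np.ndarray:
    """Symmetric Henderson filter weights for an odd filter length."""
    p = (n_terms - 1) // 2
    n = p + 2
    j = np.arange(-p, p + 1)
    num = 315 * ((n - 1)**2 - j**2) * (n**2 - j**2) * ((n + 1)**2 - j**2) \
          * (3 * n**2 - 16 - 11 * j**2)
    den = 8 * n * (n**2 - 1) * (4 * n**2 - 1) * (4 * n**2 - 9) * (4 * n**2 - 25)
    w = num / den
    return w / w.sum()  # analytic weights already sum to ~1; normalise anyway

def henderson_smooth(series, n_terms: int = 23) -> np.ndarray:
    """Apply the symmetric filter; the first and last (n_terms-1)/2 points are
    returned as NaN because the asymmetric end filters are not implemented."""
    series = np.asarray(series, dtype=float)
    w = henderson_weights(n_terms)
    p = (n_terms - 1) // 2
    out = np.full(series.shape, np.nan)
    out[p:len(series) - p] = np.convolve(series, w, mode="valid")
    return out

# Example: smooth a monthly percentage series (placeholder data).
rng = np.random.default_rng(0)
raw_pct = 4 + np.cumsum(rng.normal(0, 0.05, 120))
smoothed_pct = henderson_smooth(raw_pct)
```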

The adapted Cox method7 distributes incident events over the period of risk and allows for multiple introductions of infection during this period. The method can either assume that all herds in each testing frequency are at equal risk of becoming infected, or that there is heterogeneity in risk such that only a given proportion of herds in each testing frequency are at risk of becoming infected. We previously found that decreasing the proportion of herds assumed to be at risk of becoming infected resulted in increased estimates of the underlying risk but provided a similar temporal trend to assuming no heterogeneity in risk.7 Comparing estimates from assuming no heterogeneity in risk with estimates from assuming only 1/5 of herds were at risk provided sufficient bounds on the true underlying risk.
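To give a flavour of the underlying idea, the sketch below spreads each detected breakdown uniformly over the months since the herd's previous routine test and divides by the corresponding herd-months at risk. This is a deliberately simplified reading of the approach, not the published adapted Cox estimator:7,8 it ignores multiple introductions within an interval, the at-risk proportion discussed above, and the under-coverage of months near the end of the series.

```python
import numpy as np

def spread_incidence(otfw_by_tau, tested_by_tau, n_months):
    """Simplified 'spread incidents over the at-risk period' calculation.

    otfw_by_tau[tau] and tested_by_tau[tau] are monthly series for herds on a
    tau-yearly testing frequency.  Each breakdown detected at a test in month t
    is distributed uniformly over the preceding inter-test interval, and every
    tested herd contributes one herd at risk to each month of that interval.
    """
    incidents = np.zeros(n_months)
    at_risk = np.zeros(n_months)
    for tau, x in otfw_by_tau.items():          # tau = years between tests
        window = 12 * tau
        n = tested_by_tau[tau]
        for t in range(n_months):
            lo = max(0, t - window + 1)
            incidents[lo:t + 1] += x[t] / (t - lo + 1)
            at_risk[lo:t + 1] += n[t]
    monthly_risk = incidents / np.maximum(at_risk, 1)
    return 100 * 12 * monthly_risk              # % per year, comparable to Fig. 1
```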

The third method we propose here is incidence-based in that it adjusts the current Defra method for the time period between tests. The incidence-based estimated risk in month $t$, $\hat{I}_t$, is given as

$$\hat{I}_t = \frac{\sum_{\tau} x_{\tau,t}/\tau}{\sum_{\tau} N_{\tau,t}},$$

where $N_{\tau,t}$ corresponds to the number of OTF herds tested in month $t$ that are on testing frequency $\tau$, and $x_{\tau,t}$ denotes the number of these tests that resulted in an OTFW breakdown, with $\tau = 1, 2, 3$ or $4$ (the number of years between herd tests). Therefore, for a given month, annually tested herds contribute more to the month's incidence than less frequently tested herds. The incidence trends were then smoothed using the 23-term Henderson moving average for seasonally-adjusted data and multiplied by 100 to give a percentage.
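Assuming the monthly counts are stratified by testing interval, the adjustment in the formula above can be computed in a few lines; the subsequent smoothing step can reuse a Henderson filter such as the one sketched earlier. The numbers in the example are purely illustrative.

```python
import numpy as np

def incidence_based_percentage(otfw_by_tau, tested_by_tau):
    """Incidence-based estimate per month: OTFW breakdowns weighted by 1/tau
    (tau = years between routine tests), divided by all OTF herds tested that
    month, expressed as a percentage."""
    taus = sorted(tested_by_tau)
    weighted_positives = sum(np.asarray(otfw_by_tau[tau], dtype=float) / tau
                             for tau in taus)
    total_tested = sum(np.asarray(tested_by_tau[tau], dtype=float)
                       for tau in taus)
    return 100 * weighted_positives / total_tested

# Single illustrative month (assumed counts): 3,000 annually tested herds with
# 150 breakdowns and 2,000 four-yearly tested herds with 80 breakdowns.
otfw = {1: [150], 4: [80]}
tested = {1: [3000], 4: [2000]}
print(incidence_based_percentage(otfw, tested))  # (150/1 + 80/4) / 5000 * 100 = 3.4
```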

We did not apply the adapted Cox method or the new incidence-based method to the published GB data as these methods require the number of tests and positive tests to be stratified by testing interval, and this information is not publicly reported in the national statistics.

Results

The smoothed monthly percentage of OTF herds tested that result in the OTF status being withdrawn, as presented by Defra as the national incidence trend, is shown in Fig 1a (red line).

Fig. 1: Comparison of different methods to estimate the underlying incidence risk of bovine tuberculosis in Great Britain.

a) Current Defra method (smoothed % of OTF tested herds that are OTFW); the red line corresponds to observed data and the grey lines correspond to the method applied to 100 simulations from the fitted model of herd testing and incidence. b) The underlying risk from the best-fit model (×12). c) The adapted Cox method (×12) applied to the 100 model simulations, either assuming no heterogeneity in risk within testing frequency groups (dark blue lines) or assuming that only 1/5 of herds in each testing frequency group were at risk of developing an incident infection (light blue lines). d) The new simple incidence-based method applied to the 100 simulations, which accounts for the time between routine herd tests (adjusted smoothed % of OTF tested herds that are OTFW).

Applying this current Defra method to 100 stochastic simulations from the best-fitting mathematical model of herd testing and infection incidence (Fig 1a, grey lines) replicated the observed smoothed national incidence trend presented by Defra, except that the model slightly underestimated the peak of the second fluctuation.

As found previously,7 the underlying incidence risk estimated from the best-fitting model, which is common to all one hundred simulations, increased monotonically over time without fluctuations (Fig 1b). In the current fit, a small increase in risk was estimated to occur in the third quarter of 2009. Maximum likelihood estimates of parameters are given in Appendix Table A1 and a comparison of the updated underlying model-estimated incidence risk with our previous fit is shown in Appendix Fig A1.

The current Defra method of using the percentage of tests in OTF herds that result in OTFW breakdowns over time applied to the 100 stochastic simulations (Fig 1a, grey lines) did not correspond well with the estimated underlying risk, producing clear artefactual fluctuations. The incidence trend calculated from this method was also higher than the underlying risk by a factor of 1.8 (mean across time series).

The adapted Cox method7 applied to each of the 100 stochastic simulations provided an accurate representation of the underlying risk, whereby assuming heterogeneity in risk (assuming one fifth of herds in each testing frequency group were at risk of a herd breakdown) provided a closer representation (light blue lines) than assuming no heterogeneity in risk (dark blue lines) (Fig 1c).

The incidence trends inferred by the smoothed new simple incidence-based method applied to each of the 100 stochastic simulations (Fig 1d) also reflected the estimated underlying risk without pronounced fluctuations and without requiring any assumption about heterogeneity of risk, although the timings of the changes in risk were more lagged than with the adapted Cox method.

Discussion

There is an urgent need to communicate clearly the incidence trends of bovine TB in GB. Considerable debate continues as to whether additional control methods (such as badger culling5 or vaccination6) are required to reduce the national incidence of cattle TB. The national reporting system currently uses a method that is liable to artefactual fluctuations. These can lead the public to believe that the incidence is declining when the apparent decline could instead be an artefact of the method used:7 for a given underlying incidence risk that does not fluctuate but increases monotonically (as estimated in this work), the national reporting system can produce artefactual fluctuations that do not represent the underlying trend. We have previously shown that an adapted method proposed by Cox7,8 provides an accurate representation of the estimated risk, but this method is mathematically complex and may not be sufficiently straightforward for use in the national statistics. We therefore developed a new, simple incidence-based method to represent the underlying incidence trend from the national targeted surveillance data.

Here we have shown that the newly proposed method, which adjusts the current reporting method for the time period between routine herd tests, provides a better representation of the estimated underlying trends than the current Defra method. This method weights the positive tests (those resulting in OTF status being withdrawn) by the time period since the last TB herd test, such that herds on four-yearly testing contribute a quarter as much as annually tested herds to the incidence at a given time.
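As a purely illustrative calculation with assumed numbers (not the observed data), suppose that in a given month 60% of the herds tested are on annual testing with an annual breakdown risk of 6%, and 40% are on four-yearly testing with an annual risk of 1%. The unadjusted per-test percentage mixes risk accumulated over different interval lengths:

$$0.6\,(1 \times 6\%) + 0.4\,(4 \times 1\%) = 5.2\%,$$

whereas the mean annual risk among the herds tested that month is $0.6 \times 6\% + 0.4 \times 1\% = 4.0\%$. Weighting the positives by $1/\tau$ recovers this value, $0.6 \times 6\% + 0.4 \times (4 \times 1\%)/4 = 4.0\%$, which is why the incidence-based method avoids this component of the upward bias.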

The incidence trends obtained from both this new incidence-based method and the adapted Cox method are similar in magnitude to the underlying risk, whereas the incidence of bovine TB obtained from the current Defra method is generally higher (by a factor of 1.8) owing to the overrepresentation of high-risk herds (an inevitable feature of a targeted surveillance programme). The expansion of annual testing over time has caused the extent of this bias towards high-risk herds to vary over time. The new incidence-based method does not represent the underlying risk trends as accurately as the adapted Cox method, because the timings of the changes in risk are delayed, but importantly it does not produce pronounced artefactual oscillations. It would therefore be advantageous compared with the current reporting method.

Competing Interests

The authors have declared that no competing interests exist.