EPIDEMIOLOGICAL METHOD TO DETERMINE UTILITY OF A DIAGNOSTIC TEST

Dr. POONAM KUMARI & Dr. BHOJ R. SINGH
Division of Epidemiology, ICAR-Indian Veterinary Research Institute, Izatnagar-243122, India.
Email: [email protected]

Diagnostic Test and Screening Test

• A diagnostic test is used to determine the presence or absence of a disease when a subject shows signs or symptoms of the disease.
• A screening test identifies asymptomatic individuals who may have the disease.
• The diagnostic test is performed after a positive screening test to establish a definitive diagnosis.

Some Common Screening Tests

• Pap smear for cervical dysplasia or cervical cancer
• Fasting blood cholesterol for heart disease
• Fasting blood sugar for diabetes
• Blood pressure for hypertension
• Mammography for breast cancer
• PSA test for prostate cancer
• MRT (milk ring test) for brucellosis
• Ocular pressure for glaucoma
• TSH for hypothyroidism and hyperthyroidism

Diagnostic tests categorisation

• The 'prescribed tests' are those which are considered optimal for determining the health status of animals before shipment or reporting a disease.
• 'Alternative tests' do not demonstrate the absence of infection in the tested animals with the same level of confidence as the prescribed tests do.
• However, the OIE Terrestrial Animal Health Standards Commission considers that an 'alternative test', chosen by mutual agreement between the importing and exporting countries, can provide valuable information for evaluating the risks of any proposed trade in animals or animal products.

Selection of diagnostic tests

• The selection of an appropriate diagnostic test depends upon the intended use of the result.
• If the intention is to rule out a disease, reliable negative results are required, for which a test with high sensitivity (i.e., few false negatives) is used.
• If it is desired to confirm a diagnosis or find evidence of disease (i.e., to "rule in" the disease), we require a test with reliable positive results (i.e., high specificity).
• As a general rule of thumb, a test with at least 95% sensitivity and 75% specificity should be used to rule out a disease, and one with at least 95% specificity and 75% sensitivity to rule in a disease (Pfeiffer, 1998).

Evaluation of diagnostic techniques

• Evaluation of diagnostic techniques requires some independent, valid measure of the true condition of the animal (the 'gold standard').
• The 'gold standard' may be a single unequivocal test (histological or post-mortem demonstration of the disease, for example) or a combination of alternative tests which, when simultaneously positive, identify animals that are true positives.

Continue…

However, no 'gold standard' exists for a particular condition and

it is necessary to evaluate the diagnosis by the level of

agreement between different tests.

This assumes that agreement between tests is evidence of

validity, whereas disagreement suggests that the tests are not

reliable.

The kappa test can be used to measure the level of agreement

beyond that which may be obtained by chance. The kappa

statistic lies within a range between -1 and +1.

The kappa test uses the same table as for calculation of

epidemiological values with the observed agreement given by

the formula:

OA = (a + d)/(a + b + c + d )

Kappa is the agreement greater than that expected by chance

divided by the potential excess
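The kappa calculation can be sketched in a few lines of Python (an illustrative sketch, not from the source; the function name `kappa` is an assumption, while the cell labels a, b, c, d follow the 2 x 2 table convention used in this text):

```python
# Cohen's kappa from a 2 x 2 agreement table between two tests.
# Cell convention: a = both tests positive, b = test 1 +/test 2 -,
# c = test 1 -/test 2 +, d = both tests negative.
def kappa(a, b, c, d):
    n = a + b + c + d
    observed = (a + d) / n  # OA = (a + d)/(a + b + c + d)
    # Agreement expected by chance, from the marginal totals of each test
    expected = ((a + b) * (a + c) + (c + d) * (b + d)) / n ** 2
    return (observed - expected) / (1 - expected)

# Example: two tests agreeing on 80 of 100 animals
print(round(kappa(40, 10, 10, 40), 3))  # 0.6
```

A kappa of 1 indicates perfect agreement, 0 indicates agreement no better than chance, and negative values indicate agreement worse than chance.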

• The assessment or comparison of diagnostic tests requires their application, alongside the 'gold standard', to a sample of animals with a typical disease spectrum.
• The characteristics of the test are then compared with the gold standard in terms of their sensitivity and specificity.

Sensitivity and Specificity of a diagnostic test

• Sensitivity − the ability of the test to identify correctly those who have the disease.
• Specificity − the ability of the test to identify correctly those who do not have the disease.

Determining the Sensitivity and Specificity of a New Test

• The correct disease status must be known prior to calculation.
• The gold standard test is the best test available; it is often invasive or expensive.
• A new test is, for example, a new screening test or a less expensive diagnostic test.
• Use a 2 x 2 table to compare the performance of the new test with the gold standard test.

Gold Standard Test

Disease status (by gold standard) | Animals
Positive with the test            | a + c (all animals with the disease)
Negative with the test            | b + d (all animals without the disease)

Comparison of Disease Status: Gold Standard Test and New Test

New test result | Gold standard positive | Gold standard negative
Positive        | a (true positive)      | b
Negative        | c                      | d (true negative)

Sensitivity

Sensitivity is the ability of the test to identify correctly those who have the disease (a) out of all individuals with the disease (a + c).

Sensitivity = a/(a + c) = true positives/all diseased

Sensitivity is a fixed characteristic of the test.

Specificity

Specificity is the ability of the test to identify correctly those who do not have the disease (d) out of all individuals free from the disease (b + d).

Specificity = d/(b + d) = true negatives/all disease-free

Specificity is also a fixed characteristic of the test.

Applying the Concept of Sensitivity and Specificity to a Screening Test

• Assume a population of 1,000 people: 200 have the disease and 800 do not.
• A screening test is used to identify the 200 people with the disease.
• The results of the screening test appear in this table:

Results of screening test | Disease | No disease | Total
Positive                  | 150     | 100        | 250
Negative                  | 50      | 700        | 750
Total                     | 200     | 800        | 1000

Calculating Sensitivity and Specificity


Sensitivity = (150/200) × 100 = 75%

Specificity = (700/800) × 100 = 87.5%
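The worked example above can be checked with a short Python sketch (illustrative only; the helper names `sensitivity` and `specificity` are assumptions, not from the source):

```python
# Sensitivity and specificity from a 2 x 2 table, using the cell
# convention of this text: a = TP, b = FP, c = FN, d = TN.
def sensitivity(a, c):
    """a/(a + c): true positives out of all diseased."""
    return a / (a + c)

def specificity(d, b):
    """d/(b + d): true negatives out of all disease-free."""
    return d / (d + b)

# Worked example: 150 TP, 50 FN, 700 TN, 100 FP
print(sensitivity(150, 50))   # 0.75  (75%)
print(specificity(700, 100))  # 0.875 (87.5%)
```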

Predictive Values of diagnostic tests

• Positive predictive value (PPV) − the proportion of patients who test positive who actually have the disease.
• Negative predictive value (NPV) − the proportion of patients who test negative who are actually free of the disease.

Test results | Disease present    | Disease absent
Positive     | a (true positive)  | b (false positive)
Negative     | c (false negative) | d (true negative)

Test results | Row totals
Positive     | a + b (all subjects testing positive)
Negative     | c + d (all subjects testing negative)

What the Test Shows: Predictive Value

Positive predictive value = a/(a + b) = true positives/all test positives

Negative predictive value = d/(c + d) = true negatives/all test negatives
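The two predictive-value formulas translate directly into Python (an illustrative sketch; the helper names `ppv` and `npv` are assumptions, not from the source):

```python
# Predictive values from a 2 x 2 table (a = TP, b = FP, c = FN, d = TN).
def ppv(a, b):
    """a/(a + b): true positives out of all test positives."""
    return a / (a + b)

def npv(d, c):
    """d/(c + d): true negatives out of all test negatives."""
    return d / (d + c)

# Using the screening-test example: 150 TP, 100 FP, 50 FN, 700 TN
print(ppv(150, 100))           # 0.6 (60%)
print(round(npv(700, 50), 3))  # 0.933 (93.3%)
```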

Applying the Concept of Predictive Values to a Screening Test

Assume a population of 1,000 people: 200 have a disease and 800 do not. A screening test is used to identify the 200 people with the disease. The results of the screening test appear in this table:

Results of the screening test | Disease | No disease | Total
Tested positive               | 150     | 100        | 250
Tested negative               | 50      | 700        | 750
Total                         | 200     | 800        | 1000

Calculating Predictive Values

Positive predictive value (PPV) of the test = (150/250) × 100 = 60%

Negative predictive value (NPV) of the test = (700/750) × 100 = 93.3%

Relationship of Disease Prevalence to Predictive Value

Suppose sensitivity is 95% and specificity is 90%, in a population of 10,000.

Prevalence | Test result | With disease | Without disease | Total | PPV              | NPV
1%         | +ve         | 95           | 990             | 1085  | 95/1085 = 8.8%   |
           | -ve         | 5            | 8910            | 8915  |                  | 8910/8915 = 99.9%
           | Total       | 100          | 9900            | 10000 |                  |
5%         | +ve         | 475          | 950             | 1425  | 475/1425 = 33.3% |
           | -ve         | 25           | 8550            | 8575  |                  | 8550/8575 = 99.7%
           | Total       | 500          | 9500            | 10000 |                  |
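The dependence of PPV on prevalence can also be derived directly from sensitivity and specificity via Bayes' rule. The sketch below (illustrative, not from the source; the function name is an assumption) reproduces the pattern in the table for a test with 95% sensitivity and 90% specificity:

```python
# PPV as a function of prevalence for a fixed test, via Bayes' rule.
def ppv_from_prevalence(sens, spec, prev):
    true_pos = sens * prev               # P(test positive and diseased)
    false_pos = (1 - spec) * (1 - prev)  # P(test positive and not diseased)
    return true_pos / (true_pos + false_pos)

for prev in (0.01, 0.05, 0.20):
    print(f"prevalence {prev:.0%}: PPV = {ppv_from_prevalence(0.95, 0.90, prev):.1%}")
```

As prevalence falls, false positives from the large disease-free group swamp the true positives, so PPV drops even though the test itself is unchanged.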

Positive Predictive Value (PPV) Primarily Depends On

• The prevalence of the disease in the population tested, and the test itself (sensitivity and specificity).
• In general, it depends more on the specificity (and less on the sensitivity) of the test if the disease prevalence is low.

PPV Improvement

The PPV of a particular test can be improved by appropriate selection strategies (Baldock, 1996):

1. Testing of "high risk" groups (animals with clinical signs rather than normal animals).
2. For the same test, using a higher cut-off with higher specificity, or using a second test with a higher specificity.
3. Use of multiple tests for interpretation of results.

Reproducibility, Repeatability, Reliability of a diagnostic test

• Reproducibility, repeatability and reliability all mean that the results of a test or measure are identical or closely similar each time it is conducted.
• Because of variation in laboratory procedures, observers, or changing conditions of test subjects (such as time, location), a test may not consistently yield the same result when repeated.

Different types of variation:
- Intra-subject variation
- Intra-observer variation
- Inter-observer variation

• Intra-subject variation is variation in the results of a test conducted over (a short period of) time on the same individual. The difference is due to changes (physiological, environmental, etc.) occurring in that individual over that time period.
• Inter-observer variation is variation in the result of a test due to multiple observers examining the result (inter = between).
• Intra-observer variation is variation in the result of a test due to the same observer examining the result at different times (intra = within). The difference is due to the extent to which observer(s) agree or disagree when interpreting the same test result.

Conclusions

• The interpretation of diagnostic tests depends upon the definition of clinical disease and its distinction from the presence of the pathogen.
• Ideally, a diagnostic test can be evaluated on the basis of a clear relationship with an unequivocal "gold standard" diagnosis.
• The use of epidemiological methods in the planning and analysis of diagnosis, or better still, a greater co-operation between pathologists and epidemiologists, will assist greatly in the development and interpretation of better diagnostic tests.

References

1. Steurer J, Fischer JE, Bachmann LM, Koller M, ter Riet G. Communicating accuracy of tests to general practitioners: a controlled study. BMJ 2002; 324: 824-6.
2. Waisman Y, Zerem E, Amir L, Mimouni M. The validity of the uriscreen test for early detection of urinary tract infection in children. Pediatrics 1999; 104: e41.
3. Anthony K Akobeng ([email protected]), Department of Paediatric Gastroenterology, Booth Hall Children's Hospital, Central Manchester and Manchester Children's University Hospitals, Manchester, UK.
4. Thrusfield, M. (1995). Veterinary Epidemiology, 2nd Edition. Blackwell Science Ltd., Oxford, UK.
5. Baldock, C. (1996). Course notes from the Australian Centre for International Agricultural Research Workshop on "Epidemiology in Tropical Aquaculture", Bangkok, 1-12 July, 1996.
6. Pfeiffer, D. (1998). Veterinary Epidemiology: An Introduction. Institute of Veterinary, Animal and Biomedical Sciences, Massey University, Palmerston North, New Zealand.