In the past, infections were usually diagnosed by growing bacteria from blood or urine samples. In recent years, new tests have been developed that detect bacteria, viruses, and fungi directly from their DNA and other genetic material (e.g., respiratory multiplex PCR panels) or by detecting proteins and other parts of these germs (e.g., fungal biomarkers such as beta-D-glucan). Rapid tests for antibiotic resistance have also been developed. These tests can identify infections more quickly and may help doctors choose the right treatment sooner.
But doing more tests, or doing a test at all, does not always make care better. How much a test helps a patient depends on how it is used: antibiotic decisions are complex and happen in steps, and a test is only useful if it leads a doctor to improve a patient's management plan. If tests are used when they are not needed, they can, for example, lead to too many antibiotics, higher costs, and possible harm to patients. Because of this, many hospitals focus on "diagnostic stewardship." This means using the right test, for the right patient, at the right time.
In our study, we will look at how these new types of tests are used in practice. We will describe how test use has changed over time and how it varies between hospital departments, locations, and other factors. We will then check whether patients who receive these tests have different antibiotic decisions from patients who do not, such as starting, stopping, or changing treatment. We will also look at outcomes such as length of hospital stay and 30-day mortality. By combining trends in testing with outcomes, we aim to find out whether more testing is helping patients and where testing could be improved.
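As a rough illustration of how such comparisons might be tabulated, the sketch below groups a small made-up dataset by whether a rapid test was done and summarises antibiotic changes, length of stay, and 30-day mortality. This is a minimal sketch only: the dataset and the column names (rapid_test, abx_changed, los_days, died_30d) are assumptions for illustration, not the study's actual variables or analysis plan.

```python
import pandas as pd

# Hypothetical patient-level data; every column name here is an
# illustrative assumption, not a variable from the study itself.
df = pd.DataFrame({
    "year":        [2019, 2019, 2021, 2021, 2023, 2023],
    "department":  ["ICU", "Ward", "ICU", "Ward", "ICU", "Ward"],
    "rapid_test":  [False, False, True, False, True, True],   # received a rapid molecular test?
    "abx_changed": [False, True, True, False, True, True],    # antibiotics started/stopped/changed
    "los_days":    [7, 4, 5, 6, 4, 3],                        # hospital length of stay
    "died_30d":    [False, False, False, True, False, False], # death within 30 days
})

# Trend in test use over time and across departments.
use_trend = df.groupby(["year", "department"])["rapid_test"].mean()

# Crude comparison of decisions and outcomes by testing status
# (a real analysis would adjust for patient and hospital factors).
comparison = df.groupby("rapid_test").agg(
    patients=("rapid_test", "size"),
    abx_change_rate=("abx_changed", "mean"),
    median_los=("los_days", "median"),
    mortality_30d=("died_30d", "mean"),
)

print(use_trend, comparison, sep="\n\n")
```

Such a crude tabulation only describes patterns; it does not by itself show whether testing caused any difference in treatment or outcomes.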