Completeness

Completeness can be considered as having two separate dimensions: internal and external completeness.

Internal completeness refers to whether there are missing records or data fields and can be defined as “the frequency of unknown or blank responses to data items in the system”.
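As a minimal sketch of how internal completeness could be measured (assuming case records held as Python dicts, with None or an empty string marking an unknown or blank response; the records and field names below are illustrative):

```python
# Sketch: internal completeness as the proportion of non-blank responses per data item.
# Records, field names and the notion of "blank" (None / empty string) are illustrative.

def field_completeness(records, fields):
    """Return, for each field, the proportion of records with a non-blank value."""
    if not records:
        return {field: 0.0 for field in fields}
    return {
        field: sum(1 for r in records if r.get(field) not in (None, "")) / len(records)
        for field in fields
    }

cases = [
    {"age": 34, "postcode": "10115", "onset_date": "2024-05-01"},
    {"age": None, "postcode": "20095", "onset_date": ""},
    {"age": 52, "postcode": "", "onset_date": "2024-05-03"},
]
print(field_completeness(cases, ["age", "postcode", "onset_date"]))
# Each field here is complete in 2 of 3 records
```

Reporting completeness per data item, rather than a single overall figure, makes it easier to see which variables drive incompleteness.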

External completeness relates to whether the data available to the surveillance system reflect the true number of cases diagnosed with notifiable conditions [Doyle]. One approach to evaluating external completeness is to compare at least two datasets from different sources that are expected to provide surveillance information on the same disease (e.g. laboratory and notification data for case reporting of salmonellosis). A common method for measuring external completeness is “capture-recapture”. However, other methods can be used to compare datasets, depending on the disease under surveillance, the nature and accessibility of the data sources, and other parameters to be defined.
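A two-source capture-recapture estimate can be sketched with the Lincoln-Petersen estimator (the case identifiers and source contents below are invented; a real study must first link records reliably across the two sources):

```python
# Sketch: two-source capture-recapture (Lincoln-Petersen estimator).
# Estimated total = (cases in source A * cases in source B) / cases matched in both.

def lincoln_petersen(source_a, source_b):
    """Estimate the true number of cases from two overlapping case lists."""
    a, b = set(source_a), set(source_b)
    matched = len(a & b)
    if matched == 0:
        raise ValueError("No matched cases: estimator is undefined.")
    return len(a) * len(b) / matched

lab_cases = {"c01", "c02", "c03", "c04", "c05", "c06"}  # e.g. laboratory reports
notifications = {"c04", "c05", "c06", "c07"}            # e.g. statutory notifications
estimate = lincoln_petersen(lab_cases, notifications)
print(estimate)  # 6 * 4 / 3 matched cases -> 8.0 estimated cases
```

External completeness of each source can then be expressed against the estimate, e.g. 6/8 = 75% for the laboratory source in this toy example.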

Note: completeness can be considered both within a single case record and within a database (the set of cases collected for a given time window).

Validity

Validity in the context of surveillance is the capacity to capture the “true value” for incidence, prevalence or other variables that are useful in the analysis of surveillance data. The “true value” should be viewed in the context of the surveillance system and its objectives; for example, it may relate only to those cases diagnosed by the health services under surveillance. Validity can be considered to comprise both an internal and an external dimension, where:

Internal validity relates to the extent of errors within the system, e.g. coding errors in translating from one level of the system to the next.

External validity relates to whether the information recorded about the cases is correct and exact. Evaluating external validity implies comparing a surveillance indicator measured in a dataset with a “gold standard” value [Doyle]. One possible way to conduct a validation study is to compare the data recorded in the studied dataset with the original medical records. If data on the same patient are recorded at different points in time for the same information (disease/variable), differences can be due to a “real change” over time or to a bias in the measurement. Reliability studies can help identify this type of bias.
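A validation study of this kind can be sketched as a field-wise comparison of the surveillance dataset against gold-standard medical records (the record layout and values below are illustrative, and the sketch assumes the two lists have already been matched patient by patient):

```python
# Sketch: external validity as field-wise agreement with gold-standard medical records.
# Assumes the two lists are already matched record-for-record.

def agreement(dataset, gold_standard, fields):
    """Proportion of values that match the gold standard, per field."""
    n = len(gold_standard)
    return {
        field: sum(1 for d, g in zip(dataset, gold_standard)
                   if d.get(field) == g.get(field)) / n
        for field in fields
    }

surveillance = [
    {"diagnosis": "salmonellosis", "onset": "2024-05-01"},
    {"diagnosis": "salmonellosis", "onset": "2024-05-02"},
]
medical_records = [
    {"diagnosis": "salmonellosis", "onset": "2024-05-01"},
    {"diagnosis": "shigellosis", "onset": "2024-05-02"},
]
print(agreement(surveillance, medical_records, ["diagnosis", "onset"]))
# {'diagnosis': 0.5, 'onset': 1.0}
```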

Sensitivity

Sensitivity is the proportion of persons diagnosed with the condition who are detected by the system. Sensitivity of the EU-wide surveillance sub-network reflects the combined sensitivity of the national surveillance systems and of the international network, and can be defined on three levels relevant to EU-wide surveillance: (a) the proportion of cases notified to the national system that were reported to the international coordinating centre of the sub-network; (b) the proportion of cases fulfilling the standard case definition, diagnosed at the local level, that were notified to the national system; and (c) the proportion of cases detected by the national surveillance system out of all cases truly occurring in the population, irrespective of whether cases sought medical care or a laboratory diagnosis was attempted (this can usually only be determined by special studies). In practice, the sensitivity of the national surveillance systems will determine the sensitivity of the overall surveillance system. However, the sensitivity of national surveillance systems, i.e. the ratio between (a) and (b), will vary widely from country to country for specific diseases.
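The way sensitivities at successive reporting levels combine can be illustrated with a minimal sketch (all counts below are invented for illustration):

```python
# Sketch: sensitivity at successive reporting levels, using illustrative counts.

def sensitivity(detected, total):
    """Proportion of cases at one level that reach the next level."""
    return detected / total

diagnosed_locally = 400      # cases fulfilling the standard case definition
notified_nationally = 320    # of these, notified to the national system
reported_to_network = 288    # of these, reported to the coordinating centre

level_b = sensitivity(notified_nationally, diagnosed_locally)    # 0.8
level_a = sensitivity(reported_to_network, notified_nationally)  # 0.9
combined = sensitivity(reported_to_network, diagnosed_locally)   # 0.72 overall
print(level_a, level_b, combined)
```

The combined sensitivity is the product of the level-specific proportions, so a weak link at any single level caps the sensitivity of the whole network.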

When knowledge of the differences in the sensitivity of national surveillance systems is important to the objectives of the EU-wide surveillance sub-network, country-specific investigations need to be implemented with a defined methodology to determine the sensitivity of the national surveillance systems and to form a basis for comparability of the country-specific data. Stringent criteria for the inclusion of cases in several of the existing networks will make EU-wide sensitivity lower than national sensitivity [Ruutu et al].

The sensitivity of a surveillance system can be considered on two levels. First, at the level of case reporting, sensitivity refers to the proportion of cases of a disease (or other health-related event) detected by the surveillance system. Second, sensitivity can refer to the ability to detect outbreaks, including the ability to monitor changes in the number of cases over time. [CDC guidelines]

Predictive value positive (PVP) is the proportion of reported cases that actually have the health-related event under surveillance [CDC guidelines].
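As a minimal sketch of the PVP calculation (the counts are invented for illustration):

```python
# Sketch: predictive value positive from illustrative notification counts.

def predictive_value_positive(true_cases_reported, total_reported):
    """Proportion of reported cases that truly have the event under surveillance."""
    return true_cases_reported / total_reported

# Of 250 notified cases, 200 were confirmed to meet the case definition.
print(predictive_value_positive(200, 250))  # 0.8
```

A low PVP means the system spends resources investigating reports that are not true cases, so PVP trades off against sensitivity when case criteria are loosened.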

References: