1. Failure to state the objectives of the surveillance system
Description of the error
The introduction does not state what the objectives of the surveillance system are or what they should be.
Rationale to change
The objectives of public health surveillance differ across conditions. They may include exhaustive identification of cases, outbreak detection or description of risk factors. The evaluation exercise will determine whether the surveillance system meets its objectives; if these objectives are not clear, the evaluation lacks a reference standard. The objectives of the surveillance system must be considered in the light of, but are distinct from, the objectives of the programme that the surveillance system is supposed to feed with data for action. For example, the objective of polio surveillance in the context of the eradication initiative is not to eradicate polio, but to find 100% of cases.
2. Excessive focus on a small area
The evaluation uses a sound statistical sampling strategy (e.g., a cluster sample) but covers only a small part of the location of assignment (e.g., a single district).
A sample that is perfectly representative is of little use (i.e., poor external validity) if the results can only be generalized to a small area. In addition, a surveillance system works at many levels, from the population up to the state. Thus, the evaluation should encompass the largest scope possible (e.g., examine what happens at the state level), even if that can only be done qualitatively.
3. Insufficient description of the methods used
The methods section contains insufficient information to understand the methods that were used to (1) describe the system and (2) evaluate it.
The evaluation of a surveillance system is a scientific process that reviews information to produce an analysis followed by recommendations. Thus, the methods used to generate that information (the description of the system and of its attributes) must themselves be described.
4. Failure to describe the system
The document contains a careful evaluation of the system in the absence of a description of the system.
In a scientific process, description precedes analysis and interpretation. Before the system can be examined critically through its attributes, it needs to be described as it operates, level by level, from data collection through data transmission, data analysis and information feedback to action.
5. Failure to identify the key attributes of the surveillance system
The evaluation reviews all the attributes one by one, without any appreciation of those that matter most for the system to reach its objectives.
The attributes that matter most vary with the system and its objectives. If the objective is case identification (e.g., polio), sensitivity is a key attribute. If the objective is outbreak detection (e.g., meningitis), timeliness is a key attribute. Thus, according to the objectives of the surveillance system, the evaluation should focus on the specific attributes that matter most for the system to meet those objectives.
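To make this concrete, the sketch below shows how a key attribute such as timeliness might be documented and then judged against the system's objective rather than against perfection. The line list, field names and 7-day threshold are invented for illustration, not taken from any real system.

```python
from datetime import date
from statistics import median

# Hypothetical line list: each record holds onset and notification dates.
# These records and field names are illustrative assumptions.
cases = [
    {"onset": date(2023, 1, 3), "notified": date(2023, 1, 6)},
    {"onset": date(2023, 1, 5), "notified": date(2023, 1, 12)},
    {"onset": date(2023, 1, 9), "notified": date(2023, 1, 11)},
]

# Timeliness: delay in days between symptom onset and notification.
delays = [(c["notified"] - c["onset"]).days for c in cases]
median_delay = median(delays)

# Judge the attribute against the system's objective, not against perfection:
# for outbreak detection, a median delay within, say, 7 days may be adequate
# (the threshold is an assumption for illustration only).
target_days = 7
meets_objective = median_delay <= target_days
print(median_delay, meets_objective)
```

The point of the last comparison is the one made above: the question is not whether timeliness is perfect, but whether it is sufficient for the system's stated objective.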
6. Absence of critical review of the attributes on the basis of the objectives of the surveillance system
The evaluation takes an academic turn, in which attributes are considered unsatisfactory unless they are perfect. There is no critical review of each attribute to determine whether its characteristics actually constitute an obstacle to the system achieving its surveillance objectives.
Neither surveillance nor surveillance evaluation is an academic exercise. Surveillance is not supposed to generate perfect data, but information for action. Surveillance evaluation is not supposed to point to every weakness in the surveillance system, but to propose practical, feasible recommendations so that surveillance information can be used for action. Hence, rather than asking 'Is the system 100% sensitive?', the question should be: 'Is the system sensitive enough to meet its objectives?'
7. Insufficient documentation of the attributes
Insufficient data are presented to support the statements made about the various attributes of the system.
Statements made about the various attributes of a system need to be backed up with qualitative or quantitative data. There is a trade-off between the time and resources invested in the precise documentation of an attribute and the incremental value brought by such efforts. When the attribute is particularly relevant to the system considered (e.g., sensitivity for measles), special efforts (e.g., a survey) are justified to obtain a high-quality estimate. When the attribute is less relevant, a qualitative appreciation may suffice. However, the author must not place excessive confidence in data of poor quality.
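As an example of the "special efforts" that can put a number on a key attribute, the sketch below applies a standard two-source capture-recapture calculation (Chapman's estimator) to estimate the sensitivity of a routine system against an independent second source such as laboratory records. All counts are invented for illustration.

```python
# Two-source capture-recapture: a common way to quantify surveillance
# sensitivity when an independent second source of cases is available.
# The counts below are hypothetical, not from any real evaluation.
routine = 60      # cases found by the routine surveillance system (source 1)
lab = 45          # cases found in laboratory records (source 2)
both = 30         # cases found by both sources

# Chapman's nearly unbiased estimate of the true number of cases
n_hat = (routine + 1) * (lab + 1) / (both + 1) - 1

# Estimated sensitivity of the routine system
sensitivity = routine / n_hat
print(round(n_hat), round(sensitivity, 2))
```

With these invented counts the routine system would have found roughly two thirds of all cases; whether that is "enough" depends, as argued above, on the system's objectives, and the method rests on assumptions (independent sources, closed population) that should be stated in the report.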
8. Confusion between the surveillance system evaluation and programme evaluation
There is confusion between the evaluation of the surveillance system and the evaluation of the programme being fed with data by the surveillance system.
Surveillance (the ongoing collection, transmission and analysis of data for feedback and action) is limited to information management. A programme is broader and addresses prevention, control, care, etc. Surveillance is evaluated using the classical attributes, while programmes are evaluated using a different framework (e.g., inputs, processes and outcomes). The two activities should not be mixed. Programmes that have a substantial case search component (e.g., tuberculosis, sleeping sickness, leishmaniasis) may generate confusion, as some will call case search "surveillance".
9. Confusion between the limitations of the surveillance system and the limitations of its evaluation
The limitations section of the discussion discusses the limitations of the surveillance system.
The limitations section of the discussion of a surveillance evaluation should discuss the limitations of the evaluation process itself. The limitations of the surveillance system should be the focus of the main part of the discussion.
10. Poor recommendations
The recommendations are weak: they are (1) too general, (2) not based on the data or (3) not feasible.
The purpose of the evaluation of a surveillance system is to propose recommendations for its improvement. These recommendations must derive from the evidence generated by the evaluation. They need to be specific, precise, feasible and focused.