A6. Appraisal of clinical data – examples of studies that lack scientific validity for demonstration of adequate clinical performance and/or clinical safety
- Lack of information on elementary aspects:
This includes reports and publications that omit disclosure of
- the methods used
- the identity of products used
- numbers of patients exposed
- what the clinical outcomes were
- the results for all the endpoints the clinical study or investigation planned to investigate
- undesirable side-effects that have been observed
- confidence intervals/ calculation of statistical significance
- if there are intent-to-treat and per protocol populations: definitions and results for the two populations
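Where an intent-to-treat and a per-protocol population both exist, the minimal sketch below (in Python, with invented counts; the Wilson score interval is used only as one possible method) shows how results and confidence intervals for both populations might be reported side by side:

```python
# Illustrative only: reporting results for both the intent-to-treat (ITT) and
# per-protocol (PP) populations with 95% confidence intervals.
# All counts are invented for the example.
from math import sqrt
from scipy.stats import norm


def wilson_ci(successes, n, alpha=0.05):
    """Wilson score confidence interval for a binomial proportion."""
    z = norm.ppf(1 - alpha / 2)
    p = successes / n
    centre = (p + z * z / (2 * n)) / (1 + z * z / n)
    half = z * sqrt(p * (1 - p) / n + z * z / (4 * n * n)) / (1 + z * z / n)
    return centre - half, centre + half


# Hypothetical study: 200 subjects enrolled (ITT), of whom 176 were treated
# and followed exactly as specified in the protocol (PP).
populations = {
    "ITT (all enrolled subjects)": (138, 200),  # (successes, n)
    "PP  (treated per protocol)":  (130, 176),
}

for label, (successes, n) in populations.items():
    lo, hi = wilson_ci(successes, n)
    print(f"{label}: {successes / n:.1%} success, 95% CI {lo:.1%} to {hi:.1%}, n={n}")
```

A report that presents only one of the two populations, or omits the definitions or the intervals, lacks the elements listed above.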
- Numbers too small for statistical significance
Includes publications and reports with inconclusive preliminary data, inconclusive data from feasibility studies, anecdotal experience, hypothesis papers and unsubstantiated opinions.
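As a rough illustration of why small numbers cannot demonstrate statistical significance, the sketch below (invented success rates, standard normal-approximation formula, two-sided alpha 0.05, power 0.80) estimates the sample size needed to distinguish two success rates:

```python
# Illustrative only: approximate number of subjects per arm needed to detect a
# difference between two success rates (two-sided alpha 0.05, power 0.80).
# The rates are invented; the formula is the standard normal approximation.
import math
from scipy.stats import norm


def n_per_arm(p1, p2, alpha=0.05, power=0.80):
    """Normal-approximation sample size for comparing two independent proportions."""
    z_a = norm.ppf(1 - alpha / 2)
    z_b = norm.ppf(power)
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return math.ceil((z_a + z_b) ** 2 * variance / (p1 - p2) ** 2)


# Detecting 85% vs 75% success requires roughly 236 subjects per arm; a small
# feasibility series of 20 patients cannot demonstrate such a difference.
print(n_per_arm(0.85, 0.75))
```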
- Improper statistical methods
This includes
- results obtained after multiple subgroup testing, when no corrections have been applied for multiple comparisons.
- calculations and tests based on a certain type of distribution of data (e.g. Gaussian distribution with its calculations of mean values, standard deviations, confidence intervals, t-tests, other tests), while the type of distribution is not tested, the type of distribution is not plausible, or the data have not been transformed. Data such as survival curves, e.g. implant survival, patient survival, symptom-free survival, are generally unlikely to follow a Gaussian distribution.
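A minimal sketch (in Python, using scipy and statsmodels, with invented p-values and data) of the two pitfalls above: correcting subgroup p-values for multiplicity, and using a distribution-free test instead of a Gaussian assumption for skewed data:

```python
# Illustrative only: two of the pitfalls named above, with invented data.
# (1) p-values from multiple subgroup tests must be corrected for multiplicity.
# (2) Skewed data should not be forced into t-tests that assume a Gaussian
#     distribution; a rank-based test is one distribution-free alternative.
import numpy as np
from scipy.stats import mannwhitneyu
from statsmodels.stats.multitest import multipletests

# (1) Five hypothetical subgroup p-values: the "significant" 0.03 no longer is
#     once a Holm correction for the five comparisons is applied.
raw_p = [0.03, 0.20, 0.45, 0.07, 0.60]
reject, corrected_p, _, _ = multipletests(raw_p, alpha=0.05, method="holm")
print("corrected p-values:", np.round(corrected_p, 3), "reject:", reject)

# (2) Hypothetical skewed failure-time data (months): compare two groups with a
#     rank-based test instead of assuming a Gaussian distribution.
#     (Genuinely censored survival data would call for Kaplan-Meier / log-rank
#     methods rather than this simplified comparison.)
rng = np.random.default_rng(0)
group_a = rng.exponential(scale=60, size=40)
group_b = rng.exponential(scale=45, size=40)
stat, p_value = mannwhitneyu(group_a, group_b, alternative="two-sided")
print(f"Mann-Whitney U p-value: {p_value:.3f}")
```

For censored survival data (implant survival, patient survival, symptom-free survival), Kaplan-Meier estimates and log-rank tests are the usual alternatives to Gaussian methods.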
- Lack of adequate controls
In the following situations, bias or confounding are probable in single-arm studies and in other studies that do not include appropriate controls:
- when results are based on subjective endpoint assessments (e.g. pain assessment).
- when the endpoints or symptoms assessed are subject to natural fluctuations (e.g. regression to the mean when observing patients with chronic diseases and fluctuating symptoms, when natural improvement occurs, or when the natural course of the disease in a patient is not clearly predictable); the regression-to-the-mean effect is illustrated in the sketch below.
- when effectiveness studies are conducted with subjects that are likely to take or are foreseen to receive effective co-interventions (including over-the-counter medication and other therapies).
- when there may be other influencing factors (e.g. outcomes that are affected by variability of the patient population, of the disease, of user skills, of infrastructure available for planning/ intervention/ aftercare, use of prophylactic medication, other factors).
- when there are significant differences between the results of existing publications, pointing to variable and ill-controlled influencing factors.
In the situations described above, it is generally not adequate to draw conclusions based on direct comparisons with external or historic data (such as drawing conclusions by comparing data from a clinical investigation with device registry data or with data from published literature).
Different study designs may allow direct comparisons and conclusions to be drawn in these situations, such as randomised controlled design, cross-over design, or split-body design.
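Purely to illustrate the regression-to-the-mean problem mentioned above, the following simulation (all parameters invented) models a single-arm study of a fluctuating symptom in which enrolment requires a high baseline score; the follow-up mean falls even though no treatment effect is simulated:

```python
# Illustrative only: regression to the mean in a single-arm study of a
# fluctuating symptom. Patients are enrolled only when their score is high
# (a flare), so the follow-up mean drops even with zero treatment effect.
import numpy as np

rng = np.random.default_rng(1)
n_population = 100_000
true_severity = rng.normal(loc=50, scale=10, size=n_population)      # stable component
baseline = true_severity + rng.normal(scale=10, size=n_population)   # day-to-day noise
follow_up = true_severity + rng.normal(scale=10, size=n_population)  # new noise, no treatment

enrolled = baseline > 65  # only patients symptomatic enough to enrol
print(f"enrolled n = {enrolled.sum()}")
print(f"mean score at baseline:  {baseline[enrolled].mean():.1f}")
print(f"mean score at follow-up: {follow_up[enrolled].mean():.1f}  (no treatment given)")
```

The apparent improvement is produced entirely by the enrolment criterion; an appropriate concurrent control (e.g. in a randomised, cross-over or split-body design) would exhibit the same spontaneous change and allow it to be separated from any treatment effect.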
- Improper collection of mortality and serious adverse events data
Demonstration of adequate benefits and safety is sometimes based on mortality data or on the occurrence of other serious outcomes that limit a subject's ability to live in their own home and be available for follow-up contacts. In this type of study,
- consent of the subjects for contacting reference persons/ institutions for retrieval of medical information should be obtained during recruitment; when subjects can no longer be found, outcomes should be investigated with the reference persons/ institutions;
- the consequences of missing data on the results should be analysed (e.g. with a sensitivity analysis); alternatively, when patients can no longer be found and their outcomes cannot be identified, they should be considered to meet the SAE endpoint under investigation (e.g. the mortality endpoint of a study).
In mortality studies (and other studies addressing serious outcomes), the procedures for investigating serious patient outcomes, the numbers of subjects lost to follow-up, the reasons why subjects leave the study, and the results of the sensitivity analysis should be fully disclosed in reports and publications.
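A minimal sketch (with invented counts) of the worst-case sensitivity analysis described above, in which subjects lost to follow-up are counted as meeting the SAE endpoint:

```python
# Illustrative only: a simple worst-case sensitivity analysis for subjects lost
# to follow-up in a study with a serious-adverse-event (e.g. mortality) endpoint.
# All counts are invented.
enrolled = 300
events_observed = 12    # SAEs confirmed among followed-up subjects
lost_to_follow_up = 25  # subjects whose outcome could not be established

followed = enrolled - lost_to_follow_up

# Observed (complete-case) rate: ignores the missing subjects entirely.
observed_rate = events_observed / followed

# Worst-case imputation: every subject lost to follow-up is counted as having
# met the SAE endpoint, as described in the text above.
worst_case_rate = (events_observed + lost_to_follow_up) / enrolled

print(f"complete-case SAE rate: {observed_rate:.1%}")
print(f"worst-case SAE rate:    {worst_case_rate:.1%}")
```

If the study's conclusions hold only for the complete-case figure, the missing data materially affect the results, and this should be disclosed and discussed.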
- Misinterpretation by the authors
Includes conclusions that are not in line with the results section of the report or publication, such as
- reports and publications that do not correctly address a lack of statistical significance or confidence intervals that encompass the null (no-effect) value.
- effects too small for clinical relevance.
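A minimal sketch (invented means, standard deviations and minimal clinically important difference) of checking whether reported results actually support the authors' conclusions:

```python
# Illustrative only: checking whether a reported effect supports the stated
# conclusion. All figures, including the minimal clinically important
# difference (MCID), are invented.
from math import sqrt
from scipy.stats import norm


def diff_ci(mean_a, sd_a, n_a, mean_b, sd_b, n_b, alpha=0.05):
    """Normal-approximation CI for the difference in means (A minus B)."""
    diff = mean_a - mean_b
    se = sqrt(sd_a**2 / n_a + sd_b**2 / n_b)
    z = norm.ppf(1 - alpha / 2)
    return diff, diff - z * se, diff + z * se


MCID = 10.0  # smallest difference (in score points) considered clinically relevant

# Case 1: the confidence interval encompasses the null (zero) effect,
# so no difference has been demonstrated.
diff, lo, hi = diff_ci(72.0, 18.0, 40, 68.0, 17.0, 40)
print(f"case 1: diff {diff:.1f}, 95% CI {lo:.1f} to {hi:.1f} -> includes 0")

# Case 2: statistically significant, but smaller than the MCID,
# so the effect is too small to be clinically relevant.
diff, lo, hi = diff_ci(72.0, 10.0, 800, 70.0, 10.0, 800)
print(f"case 2: diff {diff:.1f}, 95% CI {lo:.1f} to {hi:.1f} -> significant, but < MCID of {MCID}")
```

In either case, a conclusion of demonstrated clinical benefit would not be in line with the results section.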
- Illegal activities
Includes clinical investigations not conducted in compliance with local regulations. Clinical investigations are generally expected to be designed, conducted and reported in accordance with EN ISO 14155 or to a comparable standard, and in compliance with local regulations and the Declaration of Helsinki.