
Establishing a Minimum Dataset for Prospective Registration of Systematic Reviews: An International Consultation

  • Alison Booth ,

    alison.booth@york.ac.uk

    Affiliation Centre for Reviews and Dissemination, University of York, York, United Kingdom

  • Mike Clarke,

    Affiliation Centre for Public Health, Queen's University Belfast, Belfast, United Kingdom

  • Davina Ghersi,

    Affiliation International Clinical Trials Registry Platform, World Health Organisation, Geneva, Switzerland

  • David Moher,

    Affiliations Clinical Epidemiology Program, Ottawa Hospital Research Institute, Ottawa, Canada, Department of Epidemiology and Community Medicine, Faculty of Medicine, University of Ottawa, Ottawa, Canada

  • Mark Petticrew,

    Affiliation Department of Social and Environmental Health Research, London School of Hygiene and Tropical Medicine, London, United Kingdom

  • Lesley Stewart

    Affiliation Centre for Reviews and Dissemination, University of York, York, United Kingdom

Abstract

Background

In response to growing recognition of the value of prospective registration of systematic review protocols, we planned to develop a web-based open access international register. In order for the register to fulfil its aims of reducing unplanned duplication, reducing publication bias, and providing greater transparency, it was important to ensure the appropriate data were collected. We therefore undertook a consultation process with experts in the field to identify a minimum dataset for registration.

Methods and Findings

A two-round electronic modified Delphi survey design was used. The international panel surveyed included experts from areas relevant to systematic review including commissioners, clinical and academic researchers, methodologists, statisticians, information specialists, journal editors and users of systematic reviews. Direct invitations to participate were sent out to 315 people in the first round and 322 in the second round. Responses to an open invitation to participate were collected separately. There were 194 (143 invited and 51 open) respondents with a 100% completion rate in the first round and 209 (169 invited and 40 open) respondents with a 91% completion rate in the second round. In the second round, 113 (54%) of the participants reported having previously taken part in the first round. Participants were asked to indicate whether a series of potential items should be designated as optional or required registration items, or should not be included in the register. After the second round, a 70% or greater agreement was reached on the designation of 30 of 36 items.

Conclusions

The results of the Delphi exercise have established a dataset of 22 required items for the prospective registration of systematic reviews, and 18 optional items. The dataset captures the key attributes of review design as well as the administrative details necessary for registration.

Introduction

A protocol should be an integral part of a systematic review, and is important because it pre-specifies the objectives and methods to be used. Having a protocol can help reduce the likelihood of biased post hoc decisions in review methods, such as selective outcome reporting (because it specifies outcomes of primary interest, how information about those outcomes will be extracted, and the methods that might be used to summarize the outcome data quantitatively). An examination of 47 Cochrane reviews revealed indirect evidence of possible selective reporting bias in systematic reviews. Almost all (n = 43) contained a major change, such as the addition or deletion of outcomes, between the protocol and the full publication [1]. However, whether (or to what extent) the changes reflected bias, as opposed to unreported but legitimate changes made as the review methods were developed, was not clear. For example, the protocol might have aimed to include specific outcomes, which were then found to be absent from all of the included studies, leading the reviewers to remove these outcomes from their final review. Similarly, setting out inclusion and exclusion criteria before the authors know what studies are available reduces the potential for selective inclusion based on study findings. Publication of a protocol additionally promotes transparency of methods and, because it facilitates identification of reviews that are in progress, reduces the potential for unplanned duplication and allows public review of the planned methods.

Capturing the key elements of a systematic review at the protocol stage (or at the design stage if there is no formal protocol) and making these publicly available has similar utility to producing and publishing systematic review protocols. Additionally, a register providing a single point of access should be of great benefit in avoiding unplanned duplication of effort. The issuing of a unique identifier linked to a permanent registration record allows comparison of final reports of reviews with what was planned at registration.

Support for prospective registration of systematic review protocols has been gathering momentum, reflected in a number of recent publications [2], [3], [4], [5]. The PRISMA statement for reporting systematic reviews and meta-analyses of studies that evaluate healthcare interventions advocates registration and the PRISMA 2009 Checklist requires protocol registration details, if available, to include a registration number and details of the existence of and access to the protocol [2], [3].

Until now there has been no widely adopted process to register systematic reviews formally, outside of specific collections of reviews, such as those produced by the Cochrane Collaboration. Recognising the need for registration, the Centre for Reviews and Dissemination (CRD), in collaboration with an international Register Advisory Group, took the initiative in establishing PROSPERO, an international prospective register of systematic reviews with health outcomes that is freely accessible online (www.crd.york.ac.uk/PROSPERO).

The aim of PROSPERO is to prospectively register systematic reviews at the protocol stage; capturing the key attributes of the protocol or plan; maintaining an audit trail of any subsequent protocol amendments; and adding details of final publications, including peer-reviewed articles, and other documents as they become available. This will provide a permanent public record and unbiased listing of registered reviews. PROSPERO can therefore assist in planning new reviews and updating existing ones by providing stakeholders with information about reviews already in the pipeline. This should help to reduce unplanned duplication of effort and to optimise often limited use of research funds.

It will also provide transparency of process, and facilitate comparison between planned methods and reported results enabling readers to make judgements about the importance of any discrepancies [6]. Ultimately this may serve to discourage bias in the conduct and reporting of reviews.

To achieve these aims, the register needs to capture and make available relevant information related to potential for bias in a timely, transparent, and accessible way. At the same time it should be user friendly and not overly burdensome for those completing the registration details. It also needs to be able to accommodate methodological variations between different types of systematic reviews. The development team recognised that support for and use of the register would require the involvement of a range of interested parties including, for example, clinical and academic researchers, commissioners and journal editors. An international consultation was therefore undertaken with the primary objective of establishing the minimum dataset required for registration of systematic reviews at the protocol stage. A secondary objective was to raise awareness of the development of the register.

Methods

The international Register Advisory Group consists of a small number of key individuals recruited by CRD to assist in taking forward the development of the register. The advisory group members collectively have a wide range of systematic review experience with a variety of methodological interests and significant statistical expertise. In addition members have a detailed knowledge of the Cochrane Collaboration approach to registration of review protocols; experience of clinical trials registers and authorship of the PRISMA statement. The advisory group proposed the use of a Delphi exercise to establish the minimum dataset and subsequently guided each stage of the process.

Design

A modified Delphi exercise was carried out to obtain opinions from international experts in the field of systematic review about which individual constituents of a review protocol should be included in a registration record. The Delphi technique is a method of collecting, in a structured and iterative way, the anonymous, individual opinions of a panel with relevant expertise in a topic on which consensus is required. The basic principle is for the panel to receive successive questionnaires, each one containing the anonymous responses to the previous round, and for panellists to modify their responses until a consensus is reached [7], [8], [9]. We modified the basic Delphi technique for practical reasons.

The survey population of interest had a high level of Internet and email access, were likely to be familiar with the use of electronic online submission processes and to use email as the principal mode of communication. We aimed to include wide international participation, minimise cost, and ensure accurate and efficient collection and analysis of responses. The questionnaires were therefore administered electronically using on-line survey software Survey Monkey (www.surveymonkey.com).

Participants

The opinions of international experts in health and social care involved in undertaking, commissioning, or developing methods for systematic reviews, or in guideline development, were sought, as were those of healthcare journal editors.

Two lists of participants were prepared: a core panel of individuals, and an ‘open list’ of organisations, groups, and electronic mailing lists. The initial circulation list for the core panel contained 350 names. These individuals were nominated by members of the register Advisory Group or identified through existing networks (e.g., the PRISMA Group, the International Network of Agencies for Health Technology Assessment, and the International Committee of Medical Journal Editors). Email addresses were collected from personal contact lists and publicly available sources (e.g., organisational websites). All emails were personalised to individuals.

The open list included groups such as Guidelines International Network and the Health Technology Assessment International Information Resources Group, for onward dissemination to their members and electronic mailing lists (e.g., Cochrane Methods Groups and the Coordinating Editors of Cochrane Review Groups; LIS-MEDICAL and EVIDENCE-BASED-HEALTH, and World Association of Medical Editors). The open invitation was also posted on websites (e.g., CRD, National Institute for Health Research (NIHR), Cochrane Collaboration, Committee on Publication Ethics) and placed in newsletters (e.g., CRD, Cochrane Collaboration, NIHR). Details of the exercise were published in a Lancet comment paper, which directed readers to the CRD website for further information. This appeared in the e-version of the Lancet during the survey [10] and in the print version at a later date [11].

Separate response collectors were used within Survey Monkey for the two different types of invitation. Anyone responding via a link forwarded by a core panellist would have been included in the core panel collector.

The second round was sent to everyone in the core panel again, including non-responders unless they had requested removal from the list. In addition those from the open list who completed the first round and supplied their email addresses were added to the revised core panel list. Again, separate collectors were used for the core panel and open lists. The second (final) round of the survey required participants to indicate whether they had taken part in the first round. It was accompanied by a summary report on the responses to the first round (available from http://www.york.ac.uk/inst/crd/projects/register.htm).

All responses were anonymous; it was not possible to tell who responded or to link names to responses even when individuals informed us they had responded. It was hoped that this would encourage participation in both rounds and expression of personal opinion, rather than conforming to group opinion or dropping out after the first round [9].

In order to assess representation of different stakeholder groups and identify any differences in the responses between them, simple demographic details were requested in each questionnaire. These were designation; membership of organisations; health area of interest; review method of interest; number of systematic reviews authored; number of systematic reviews in which involved other than as author; proportion of work that relates to methodology; country; and English as a first language.

Instrumentation

The exercise was limited to two rounds, although provision had been made for subsequent rounds if these were judged necessary by the register Advisory Group. The questionnaires were piloted before distribution.

The time in which the questionnaires were ‘open’ for responses was limited to two weeks for each round. Reminder emails were sent to all members of the core panel approximately one week before the close of each round.

A mixture of ‘pick lists’ (pre-specified response options) and free text responses was used to facilitate ease of response and analysis of data from a wide consultation, with large numbers of participants from diverse groups, many of whom might not have English as their first language. To ensure that sufficient data were collected and that key areas were addressed fully, ‘pick list’ questions were made mandatory; that is, respondents had to make a choice before they could submit their answers. It was not mandatory to put anything into the free text boxes.

The questionnaires were prepared by CRD with advice from the register Advisory Group. None of those involved in designing, administering or advising on the questionnaires completed the survey.

The focus for the questions, the language, and explanations used were informed by lessons learned from the development of trials registers, and in particular the requirements for registers as set out by the WHO trials register platform (http://www.who.int/ictrp/en/) [12].

Question formulation

A pragmatic decision was taken not to approach panellists in advance to ask for their participation. This was to minimise the burden on named individuals who were likely to have limited time to devote to the process. For the same reason, we drew up a list of candidate items for inclusion in the minimum data set based on established guidance for writing systematic review protocols [13], [14], [15], [16], the PRISMA statement [3] and information from the WHO trials registry (http://www.who.int/ictrp/en/).

The first round questionnaire sought preferences for 41 candidate items as to whether they should be included in the minimum data set. Respondents were asked to indicate whether they thought each item was ‘Essential’, ‘Desirable’ or ‘Not necessary’. The focus for responses was on the inclusion of data that would help identify ongoing reviews and enable assessment of bias when the review was completed. Opinions were also sought on the scope of the register; the allocation of a unique ID; the timing of registration; the handling of amendments to protocols, publications, and updates of reviews; and the existence of other protocol registers. However, these items relating to the development and implementation of a register are not presented in detail here, but are included in the summary reports, available at http://www.york.ac.uk/inst/crd/projects/register.htm.

The second round questionnaire set out suggestions for which items should be mandatory and which should be optional, based on the register Advisory Group's interpretation of the first round responses. Participants were asked to ‘Agree’, or ‘Disagree’ with the suggested categorisation, to state that an item was ‘Not needed’ or state that they had ‘No opinion’. If they disagreed with a categorisation, they were asked to indicate the direction of the disagreement, e.g., that an item suggested as compulsory should be down-weighted to optional. Again the focus for responses was to identify the minimum dataset to achieve the aims of registration. As with the first round questionnaire, free text boxes for comments and suggestions were provided but not mandatory.

The majority vote for ‘Essential’ or ‘Desirable’ in the first round was used to categorise fields as ‘Required’ or ‘Optional’, respectively for the second round questionnaire.
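This carry-forward rule can be sketched as a simple majority tally. The sketch below is purely illustrative (the actual collation was done in Survey Monkey); the function name and the vote counts are hypothetical:

```python
from collections import Counter

def categorise_item(votes):
    """Suggest a second-round category from first-round votes by simple
    majority: 'Essential' -> 'Required', 'Desirable' -> 'Optional',
    anything else -> candidate for dropping."""
    winner, _ = Counter(votes).most_common(1)[0]
    return {"Essential": "Required", "Desirable": "Optional"}.get(winner, "Drop")

# hypothetical vote distribution for one candidate item
votes = ["Essential"] * 120 + ["Desirable"] * 50 + ["Not necessary"] * 24
print(categorise_item(votes))  # Required
```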

Analysis

All responses were collated in ‘Survey Monkey’ for tabulation and analysis. A summary report on each round was compiled and circulated to both distribution lists (available from http://www.york.ac.uk/inst/crd/projects/register.htm).

Where possible, decisions were based on achieving consensus at a designated level of 70% agreement. This level of consensus was agreed by the Advisory Group as being greater than two-thirds of opinion, indicating a clear majority. Other decisions were made taking into consideration the distribution of alternative responses.
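As a worked illustration of this decision rule (not part of the original analysis), the 70% threshold amounts to checking whether the proportion of agreeing responses meets or exceeds 0.70. The figures below reuse the second-round support for registration reported in the Results (199 of 209 participants):

```python
def consensus_reached(agree, total, threshold=0.70):
    """Return (reached, proportion): consensus is declared when the
    proportion of agreeing responses meets or exceeds the threshold."""
    proportion = agree / total
    return proportion >= threshold, proportion

reached, proportion = consensus_reached(199, 209)
print(reached, round(proportion, 3))  # True 0.952
```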

Ethical approval

Formal written consent was not sought; submission of completed questionnaires was taken as implied consent. The research was approved by the University of York Humanities and Social Sciences Ethics Committee (HSSEC 12-2009/10).

Results

Responses and respondents

The first round core panel list included 327 direct invitations; 12 were excluded because their emails were returned as undelivered, leaving an initial list of 315. Five people declined to take part and were removed from the mailing list.

The second round core panel list included 322 direct invitations; four were excluded (three emails were returned as undelivered and one recipient was known to be unavailable while the survey was open), leaving a list of 318. One person declined to take part and was removed from the mailing list.

A separate collector was set up for the open list invitation to participate. Both the first and second round questionnaires were sent to a general contact at 15 different organisations, and to a named contact for internal circulation in five other organisations or groups.

There were 194 (143 invited and 51 open) respondents with a 100% completion rate in the first round and 209 (169 invited and 40 open) respondents with a 91% completion rate in the second round. Of those who took part in the second round, 113 (54%) said they had taken part in the first round; 72 (34%) said they had not; and 24 (12%) could not remember (Table 1). A comparison of responses to the second round questionnaire showed no significant differences between those taking part in both rounds and those only taking part in the second round.

There were no significant differences between role designations (Table S1); areas of health interest (Table S2); review methods of interest (Table S3); authorship of (Table S4), or involvement in systematic reviews (Table S5); or proportion of work related to research methodology (Table S6); between the first and second round respondents.

There was little difference between the responses of those who were members of The Cochrane Collaboration and those who were not. There were three items in round one and two items in round two where the differences were of statistical significance. After Bonferroni adjustment for multiple comparisons, these were no longer statistically significant (Table S7).
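The Bonferroni step can be illustrated as follows: with m comparisons, each nominal p-value is judged against alpha/m, so a difference that is significant at 0.05 on its own can fall short after adjustment. The p-values below are invented for illustration only; the paper does not report the actual values:

```python
def significant_after_bonferroni(p_values, alpha=0.05):
    """Compare each p-value against alpha divided by the number of
    comparisons; return a significance flag per test."""
    m = len(p_values)
    return [p < alpha / m for p in p_values]

# five hypothetical nominally significant tests (all p < 0.05)
print(significant_after_bonferroni([0.03, 0.04, 0.02, 0.045, 0.01]))
# [False, False, False, False, False] -- none survive adjustment
```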

In the first round, 128 (66%) respondents said English was their first language. In the second round, English was the first language for 124 (65%) of respondents. Respondents to both the first and second rounds were based in 34 countries, with an additional six countries represented in the first round only, and a different five countries represented in the second round only (Figure S1).

In the second round we specifically asked participants whether they supported the principle of registration of ongoing systematic reviews; 199 (95.2%) of participants said they did; three (1.4%) did not and seven (3.3%) had no opinion.

Minimum dataset

Following review of the first round responses, it was decided that the Anticipated publication date field would not be included in the second round. This was because of the large number of comments requesting that the list of items be kept as small as possible, and because 158 (82%) of respondents felt this field should be optional or was not necessary. The field would also be difficult for researchers to estimate at the protocol stage, and its inclusion in the register was not integral to achieving the stated aims.

Likewise, 121 (63%) respondents felt it was “Desirable” or “Not necessary” to include the Economic Evaluations field. As this information could and should be included in the Review Question field and elsewhere, it was not included in the second round questionnaire.

Taking into account first round feedback on the need to keep the dataset to the minimum and to focus on information that would contribute to reducing bias, it was proposed that the Context and Data extraction fields be included as optional fields, even though the majority of respondents felt they should be required. None of the fields in the first round had a majority in favour of ‘Not needed’.

In the first round of questions, primary and secondary outcomes were presented as separate items from effect measures in order to find out if participants felt both were needed. As only 9% and 12% of the respondents, (respectively for primary and secondary outcomes), felt that effect measures were not necessary, these fields were combined for the second round (Table 2). Time points were added as a requirement in response to suggestions from participants.

Table 2. Registration dataset response rates for Delphi round one and two.

https://doi.org/10.1371/journal.pone.0027319.t002

Informed by the responses to the Delphi exercise, the register Advisory Group confirmed that all items with 70% or greater agreement would be included as Required or Optional fields as responses indicated.

In round one, there was ≥70% agreement on 14 of 40 items; 60–69% agreement on 7 items; 50–59% agreement on 8 items; 40–49% agreement on 10 items; and 30–39% agreement on one item.

After the second round, a 70% or greater agreement was reached on whether 30 of 36 items should be required or optional. There was 60–69% agreement on two and 50–59% agreement on the remaining four items (Table 2).

The final PROSPERO dataset agreed by the register Advisory Group consists of 40 items, 22 of which are required and the remainder optional. Of the required fields, 12 cover details of the review methods and 10 relate to the review title, timescale and review team (Table 3). In addition, the unique identification number was designated as part of the dataset by the Advisory Group, as PROSPERO creates a unique number for each accepted registration record.

Discussion

Although the drivers for trials registration differ in some respects (e.g., legal and ethical requirements [17]), systematic review protocol registration faces the same potential barriers as trials registration. To avoid the problems arising from the existence of multiple trials registers [18], [19], a single, free, comprehensive, open access register was provided, and a balance was sought between the level of detail required and the register's utility. The proposed level of information to be entered for each field was included in the survey because the quality of data recorded in trials registers has been found to vary considerably [20], [21].

The aims of registering a systematic review include the provision of sufficient information to (i) determine whether reviews already in the pipeline might negate the need to initiate a new review, (ii) enhance the transparency and completeness of the plans for the systematic review, and (iii) make informed judgements about potential risk of bias. The objective of this Delphi process was to establish the minimum data set that will achieve these three aims. The Delphi process did not seek to capture the attributes of the wider information that should be included in a full protocol for a systematic review, or to determine all the variables that people might wish to record in registers of systematic reviews that would be used for other purposes.

The Delphi technique was chosen for its flexibility and adaptability in gathering and analysing the necessary data, and in particular for the utility of the process in garnering views and opinions from a broad spectrum of people [8]. The commissioning, undertaking, publishing and use of systematic reviews involve diverse disciplines, each with its own particular perspective, and with both inter- and intra-disciplinary differences of opinion. For the register to fulfil its aims and cater for all potential users, it was important to ensure that experts from all the relevant disciplines were invited to contribute their opinions in order to reach a consensus. It would not have been possible to arrange face-to-face meetings with the number of participants achieved by this approach. The Delphi approach allowed us to carry out the consultation with complete anonymity and to maintain broad heterogeneity among participants without any one discipline or individual having more influence than another.

For pragmatic reasons we modified the standard Delphi technique, and discuss here the limitations of the methods we used.

The notion of an ‘international expert’ in the defined areas is largely subjective. We hoped to minimise any inadvertent bias in the selection of the core panel by also issuing an open invitation to participate. However, because of the option of sharing email invitations, we cannot be sure that only core panel members responded to the core panel collector. Nonetheless, a comparison of the data from the two collectors showed little variation in response between the two groups.

Ideally, the same participants should respond to each round of a Delphi process. The pragmatic decision not to approach participants in advance to confirm commitment to the whole exercise was balanced against the number being invited to take part. Just over half the respondents participated in both rounds. A comparison of second round responses between returning respondents and new participants showed no significant differences. It is therefore unlikely that the approach taken introduced additional bias.

Normally the first round of a Delphi would present open questions such as ‘What items do you think should be included in the registration of systematic reviews at the protocol stage?’ However, given that the items that should be included in a systematic review protocol are already well established and to reduce the burden on participants, we invited the first round respondents to comment on the utility of a pre-prepared list of candidate items. Respondents also had the opportunity to suggest additional items. The suggestions that were received and adopted were: the addition of an optional field to record other registration details (e.g., on The Cochrane Library); the requirement of time points to be included in the primary and secondary outcomes fields; and an optional field for telephone contact details.

Based on 315 invitations to participate in the first round, and 143 respondents, the response rate was 45%. In the second round 318 invitations were sent out and 169 responses received, making the response rate 54%. However, the true response rates may be lower as we cannot know how many individuals received a cascaded invitation.

Our decision not to use a pre-determined list of participants for the two rounds was based on the desire to ensure a range of respondents, but could have led to an unrepresentative sample of participants. In the event, responses were received from all key groups and those people who labelled themselves as researchers/reviewers were divided similarly in each round between members (119 round one; 105 round two) and non-members (75 round one; 81 round two) of The Cochrane Collaboration.

We succeeded in gathering the opinions and judgments of a large and diverse range of relevant experts. Given the heterogeneity of the respondents and their interests, we believe that the degree of consensus achieved is acceptable, but we will keep the list of data items under review and will revisit it after it has been in use for a year, as part of a wider evaluation of the utility of PROSPERO.

Conclusion

The consultation revealed widespread support for the principle of registration of systematic reviews, and the Delphi exercise established a dataset of 22 required items for the prospective registration of systematic reviews, and 18 optional items. The dataset captures the key attributes of review design, as well as the administrative details necessary for registration. The findings were also used to inform the development and implementation of the technical and process elements of PROSPERO.

Supporting Information

Figure S1.

Participant demographic information: Which country are you based in?

https://doi.org/10.1371/journal.pone.0027319.s001

(DOC)

Table S1.

Professional information about participants: role.

https://doi.org/10.1371/journal.pone.0027319.s002

(DOC)

Table S2.

Professional information about respondents: health areas of interest.

https://doi.org/10.1371/journal.pone.0027319.s003

(DOC)

Table S3.

Professional information about respondents: review method of interest.

https://doi.org/10.1371/journal.pone.0027319.s004

(DOC)

Table S4.

Professional information about respondents: number of systematic reviews authored.

https://doi.org/10.1371/journal.pone.0027319.s005

(DOC)

Table S5.

Professional information about respondents: number of systematic reviews involved with other than as an author.

https://doi.org/10.1371/journal.pone.0027319.s006

(DOC)

Table S6.

Professional information about respondents: proportion of work related to research methodology.

https://doi.org/10.1371/journal.pone.0027319.s007

(DOC)

Table S7.

Professional information about respondents: membership of relevant organisations.

https://doi.org/10.1371/journal.pone.0027319.s008

(DOC)

Acknowledgments

We would like to thank all those who took part in the Delphi exercise. All responses have been considered and the expertise provided and time taken by participants is much appreciated.

Author Contributions

Final decisions on the dataset: AB MC DG DM MP LS. Conceived and designed the experiments: AB MC DG DM MP LS. Performed the experiments: AB. Analyzed the data: AB MC DG DM MP LS. Wrote the paper: AB MC DG DM MP LS.

References

  1. Silagy CA, Middleton P, Hopewell S (2002) Publishing protocols of systematic reviews: comparing what was done to what was planned. JAMA 287: 2831–2834.
  2. Liberati A, Altman DG, Tetzlaff J, Mulrow C, the PRISMA Group (2009) The PRISMA statement for reporting systematic reviews and meta-analyses of studies that evaluate health care interventions: explanation and elaboration. PLoS Med 6: e1000100.
  3. Moher D, Tetzlaff J, Altman DG, for the PRISMA Group (2009) Preferred reporting items for systematic reviews and meta-analyses: the PRISMA statement. BMJ 339: b2535.
  4. Straus S, Moher D (2010) Registering systematic reviews. CMAJ 182(1): 13–14.
  5. Jüni P, Egger M (2009) PRISMAtic reporting of systematic reviews and meta-analyses. The Lancet 374: 1221–1223.
  6. PLoS Medicine Editors (2007) Many reviews are systematic but some are more transparent and completely reported than others. PLoS Med 4(3): e147.
  7. Murphy MK, Black NA, Lamping DL, McKee CM, Sanderson CFB, et al. (1998) Consensus development methods, and their use in clinical guideline development. Health Technol Assess 2(3): 1–88.
  8. Hsu C-C, Sandford BA (2007) The Delphi technique: making sense of consensus. Practical Assessment Research & Evaluation 12(10). Available: http://pareonline.net/getvn.asp?v=12&n=10.
  9. Sinha IP, Smyth RL, Williamson PR (2011) Using the Delphi technique to determine which outcomes to measure in clinical trials: recommendations for the future based on a systematic review of existing studies. PLoS Med 8(1).
  10. Booth A, Clarke M, Ghersi D, Moher D, Petticrew M, et al. (2010) An international registry of systematic review protocols. The Lancet.
  11. Booth A, Clarke M, Ghersi D, Moher D, Petticrew M, et al. (2011) An international registry of systematic review protocols. The Lancet 377(9760): 108–109.
  12. Ghersi D, Pang T (2009) From Mexico to Mali: four years in the history of clinical trial registration. Journal of Evidence Based Medicine 2: 1–7.
  13. Higgins JPT, Green S, editors (2009) Cochrane Handbook for Systematic Reviews of Interventions. The Cochrane Collaboration.
  14. Centre for Reviews and Dissemination (2009) Core principles and methods for conducting a systematic review of health interventions. Systematic reviews: CRD's guidance for undertaking reviews in health care. York: University of York. pp. 2–99. Available: http://www.york.ac.uk/inst/crd/index_guidance.htm.
  15. Petticrew M, Roberts H (2006) Systematic reviews in the social sciences: a practical guide. Malden, MA: Blackwell Publishing.
  16. Egger M, Davey Smith G, Altman D, editors (2001) Systematic reviews in health care: meta-analysis in context. London: BMJ Publishing.
  17. World Medical Association (2008) WMA Declaration of Helsinki - ethical principles for medical research involving human subjects. World Medical Association. Available: http://www.wma.net/en/30publications/10policies/b3/index.html. Accessed 7 June 2011.
  18. World Health Organisation. International Clinical Trials Registry Platform (ICTRP). World Health Organisation. Available: http://www.who.int/ictrp/en/. Accessed 7 June 2011.
  19. Viergever RF, Ghersi D (2011) The quality of registration of clinical trials. PLoS ONE 6(2).
  20. Liu X, Li Y, Yu X, Feng J, Zhong X, et al. (2009) Assessment of registration quality of trials sponsored by China. J Evidence-Based Med 2: 8–18.
  21. Moja LP, Moschetti I, Nurbhai M, Compagnoni A, Liberati A, et al. (2009) Compliance of clinical trial registries with the World Health Organization minimum data set: a survey. Trials 10(56).