DSpace Collection: https://hdl.handle.net/2440/4007
Updated: 2024-03-28T15:13:42Z

Title: Variation observed in consensus judgements between pairs of reviewers when assessing the risk of bias due to missing evidence in a sample of published meta-analyses of nutrition research
Handle: https://hdl.handle.net/2440/140398
Date: 2023-01-01
Author: Kanukula, R.; McKenzie, J.E.; Cashin, A.G.; Korevaar, E.; McDonald, S.; Mello, A.T.; Nguyen, P.-Y.; Saldanha, I.J.; Wewege, M.A.; Page, M.J.
Abstract: Objectives: To evaluate the risk of bias due to missing evidence in a sample of published meta-analyses of nutrition research using the Risk Of Bias due to Missing Evidence (ROB-ME) tool and determine inter-rater agreement in assessments. Study Design and Setting: We assembled a random sample of 42 meta-analyses of nutrition research. Eight assessors were randomly assigned to one of four pairs. Each pair assessed 21 randomly assigned meta-analyses, and each meta-analysis was assessed by two pairs. We calculated raw percentage agreement and chance-corrected agreement using Gwet's Agreement Coefficient (AC) in consensus judgments between pairs. Results: Across the eight signaling questions in the ROB-ME tool, raw percentage agreement ranged from 52% to 100%, and Gwet's AC ranged from 0.39 to 0.76. For the risk-of-bias judgment, the raw percentage agreement was 76% (95% confidence interval 60% to 92%) and Gwet's AC was 0.47 (95% confidence interval 0.14 to 0.80). In seven (17%) meta-analyses, either one or both pairs judged the risk of bias due to missing evidence as "low risk". Conclusion: Our findings indicated substantial variation in consensus judgments between pairs for the signaling questions and overall risk-of-bias judgments. More tutorials and training are needed to help researchers apply the ROB-ME tool more consistently.

Title: Rapid reviews and the methodological rigor of evidence synthesis: a JBI position statement
Handle: https://hdl.handle.net/2440/140356
Date: 2022-01-01
Author: Tricco, A.C.; Khalil, H.; Holly, C.; Feyissa, G.; Godfrey, C.; Evans, C.; Sawchuck, D.; Sudhakar, M.; Asahngwa, C.; Stannard, D.; Abdulahi, M.; Bonnano, L.; Aromataris, E.; McInerney, P.; Wilson, R.; Pang, D.; Wang, Z.; Cardoso, A.F.; Peters, M.D.J.; Marnie, C.; et al.
Abstract: The demand for rapid reviews has exploded in recent years. A rapid review is an approach to evidence synthesis that provides timely information to decision-makers (eg, health care planners, providers, policymakers, and patients) by simplifying the evidence synthesis process. A rapid review is particularly appealing for urgent decisions. JBI is a world-renowned international collaboration for evidence synthesis and implementation methodologies. The principles for JBI evidence synthesis include comprehensiveness, rigor, transparency, and a focus on applicability to clinical practice. However, JBI has not yet endorsed a specific approach for rapid reviews. In this paper, we compare rapid reviews with other types of evidence synthesis, describe a range of rapid evidence products, outline how to appraise the quality of rapid reviews, and present the JBI position on rapid reviews. JBI-affiliated Centers conduct rapid reviews for decision-makers in specific circumstances, such as limited time or funding constraints. A standardized approach is not used for these cases; instead, the evidence synthesis methods are tailored to the needs of the decision-maker. The urgent need to deliver timely evidence to decision-makers poses challenges to JBI's mission to produce high-quality, trustworthy evidence. However, JBI recognizes the value of rapid reviews as part of the evidence synthesis ecosystem. As such, it is recommended that rapid reviews be conducted with the same methodological rigor and transparency expected of JBI reviews.
Most importantly, transparency is essential: the rapid review should clearly report where any simplifications of the evidence synthesis process have been made.

Title: Recommendations for the extraction, analysis, and presentation of results in scoping reviews
Handle: https://hdl.handle.net/2440/140341
Date: 2023-01-01
Author: Pollock, D.; Peters, M.D.J.; Khalil, H.; McInerney, P.; Alexander, L.; Tricco, A.C.; Evans, C.; de Moraes, É.B.; Godfrey, C.M.; Pieper, D.; Saran, A.; Stern, C.; Munn, Z.
Abstract: Scoping reviewers often face challenges in the extraction, analysis, and presentation of scoping review results. Using best-practice examples and drawing on the expertise of the JBI Scoping Review Methodology Group and an editor of a journal that publishes scoping reviews, this paper expands on existing JBI scoping review guidance. The aim of this article is to clarify the process of extracting data from different sources of evidence; discuss what data should be extracted (and what should not); outline how to analyze extracted data, including an explanation of basic qualitative content analysis; and offer suggestions for the presentation of results in scoping reviews.

Title: Revising the JBI quantitative critical appraisal tools to improve their applicability: an overview of methods and the development process
Handle: https://hdl.handle.net/2440/140296
Date: 2022-01-01
Author: Barker, T.H.; Stone, J.C.; Sears, K.; Klugar, M.; Leonardi-Bee, J.; Tufanaru, C.; Aromataris, E.; Munn, Z.
Abstract: JBI offers a suite of critical appraisal instruments that are freely available to systematic reviewers and researchers investigating the methodological limitations of primary research studies. The JBI instruments are designed to be study-specific and are presented as questions in a checklist. The JBI instruments have existed in a checklist-style format for approximately 20 years; however, as the field of research synthesis expands, many of the tools offered by JBI have become outdated. The JBI critical appraisal tools for quantitative studies (eg, randomized controlled trials, quasi-experimental studies) must be updated to reflect the current methodologies in this field. Cognizant of this and the recent developments in risk-of-bias science, the JBI Effectiveness Methodology Group was tasked with updating the current quantitative critical appraisal instruments. This paper details the methods and rationale that the JBI Effectiveness Methodology Group followed when updating the JBI critical appraisal instruments for quantitative study designs. We detail the key changes made to the tools and highlight how these changes reflect current methodological developments in this field.