Talk:Impact evaluation


Undated comment

The definition of impact does not require a counterfactual (as recognised in the discussion on definitions). The 'requirement' for a counterfactual is a matter of debate and should not be presented as a given in this introduction. Nor should the counterfactual be limited to comparison groups or to the three methods identified in this article. Even some of those who argue that a counterfactual is required (see http://siteresources.worldbank.org/EXTHDOFFICE/Resources/5485726-1295455628620/Impact_Evaluation_in_Practice.pdf) recognise that this does not necessarily mean a comparison group. There are other ways of identifying a counterfactual - including other 'quasi-experimental' designs.

If you want to argue either debated proposition, you should present the argument before using the limited definition presented here.

Dr. White's comment on this article

Dr. White has reviewed this Wikipedia page, and provided us with the following comments to improve its quality:

I would change the typology a bit: (1) Experimental approaches, with sub-sections on (i) RCTs and (ii) natural experiments (there are a growing number of natural experiments); and (2) Non-experimental approaches, with sub-sections on (i) quasi-experimental designs and (ii) other non-experimental methods (instrumental variables (IV) and interrupted time series (ITS)).

That would help with some issues: for example, the discussion of the limitations of RCTs is currently jumbled in with natural experiments. It would also allow a clearer discussion of where the comparison group comes from for quasi-experimental designs, while making clear that other non-experimental designs do not have an explicit comparison group.

The sections based on Rossi et al. repeat what is elsewhere and should be better integrated into the text. The last bit on different types of bias should be dropped. There are more important issues to be discussed, such as spillovers and contamination, and in any case the items on the Rossi list (secular trends, interfering events) are things a well-designed IE takes care of.

Similarly, the section called estimation methods needs to be integrated. The first paragraph is repetition. The second, on ITT vs. ToT, is important and needs to go somewhere.

I wouldn't have thought COSA belongs in the list of organizations promoting IE, and it certainly should not get a whole paragraph. J-PAL should get prominence but is not mentioned. Add also CEGA and EDePo.


We hope Wikipedians on this talk page can take advantage of these comments and improve the quality of the article accordingly.

Dr. White has published scholarly research which seems to be relevant to this Wikipedia article:


  • Reference: White, Howard, 2007. "Evaluating Aid Impact," MPRA Paper 6716, University Library of Munich, Germany.

ExpertIdeasBot (talk) 16:30, 19 May 2016 (UTC)[reply]

Dr. Peters's comment on this article

Dr. Peters has reviewed this Wikipedia page, and provided us with the following comments to improve its quality:


The beginning of the article is well written in an encyclopaedic manner. From Section 1.2 onwards, however, there are quite a few shifts in language, referencing style, and conciseness. The remainder of the article is generally written in a more textbook-like style, with too little focus, too many repetitions, and too little interconnection between the different sections. See the section-specific comments below:

1.1 Experimental design

This section includes the same discussion as found in “4.2 Methodological debates”. The discussion should therefore be moved to one of the two sections (preferably 4.2, with an internal cross-reference in this section).

1.2 Quasi-experimental design

This and the following section do not include any references, and Wikipedia-internal links are missing, e.g. to “Difference in differences”. It would further be useful to have bullets for the different designs. The description could also be improved: for example, matching involves a number of approaches other than just propensity score matching (PSM) (see Morgan, S. L., & Winship, C. (2014). Counterfactuals and Causal Inference. Cambridge University Press).
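As a quick illustration of the difference-in-differences design mentioned above, here is a minimal sketch with purely synthetic numbers (all values made up for the example; this is not drawn from the article under discussion):

```python
# Difference-in-differences (DiD) on made-up group means.
# Mean outcomes for treatment and control groups at baseline and end-line.
treat_baseline, treat_endline = 10.0, 15.0
control_baseline, control_endline = 10.0, 12.0

# DiD removes the common time trend by differencing twice:
# (change in treatment group) minus (change in control group).
did = (treat_endline - treat_baseline) - (control_endline - control_baseline)
print(did)  # 3.0
```

With these numbers, the naive before-after change in the treatment group (5.0) overstates the effect; subtracting the control group's trend (2.0) yields the DiD estimate of 3.0.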

2 Biases in estimating programme effects

Biases are an important aspect of impact evaluations, but this section is neither well written nor well integrated into the overall article. It starts with an overlong and unfocused discussion based on Rossi et al. (2004), which is not referenced according to Wikipedia standards. The list of biases that follows repeats Section 1 and is rather selective.

3 Estimation methods

Let me comment on the following three sentences in this short section: “This method is also called randomised control trials (RCT). According to an interview with Jim Rough, former representant of the American Evaluation Assosiciation, in the magazine D+C Development and Cooperation this method doesn't work for complex, multilayer matters. The single difference estimator compares mean outcomes at end-line and is valid where treatment and control groups have the same outcome values at baseline.” Sentence 1 is not correct, sentence 2 belongs in Section 4.2, and sentence 3 repeats Section 1.2.
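For readers unfamiliar with the single-difference estimator quoted above, a minimal sketch on synthetic end-line data (group sizes, means, and the seed are all hypothetical):

```python
import numpy as np

# Single-difference estimator: compare mean end-line outcomes only.
# This is valid only if treatment and control groups had the same
# outcome values at baseline (e.g. under successful randomisation).
rng = np.random.default_rng(0)
control = rng.normal(loc=50.0, scale=5.0, size=500)  # end-line outcomes, control
treated = rng.normal(loc=53.0, scale=5.0, size=500)  # end-line outcomes, treated

single_diff = treated.mean() - control.mean()  # estimated programme effect
print(round(single_diff, 2))  # close to the true effect of 3.0
```

If baseline outcomes differed between the groups, this estimator would be biased, which is why the quoted sentence (and Section 1.2) attaches that validity condition.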

4.1 Definitions of Impact Evaluation

The first part of this section seems misplaced. In fact there is no debate about the definition of impact evaluation, which is also reflected in the similarity and lack of contradiction among the definitions cited. The most comprehensive, though not perfect, definition is the one by the DIME Initiative, which should instead be integrated into the very first section. The debate is more about the operationalization of impact evaluations, which is what the author of the second part of this section seems to intend to express. However, this part rather looks as if it has been hijacked by someone wanting to share a personal opinion (see: “They are also called ex-post evaluations or we are coining the term sustained impact evaluations.”). Though the point is legitimate, the way it is written is not appropriate for Wikipedia.

4.2 Methodological debates

See the comment on 1.1 above. In addition, the title is not appropriate: this section merely deals with the appropriateness of experimental designs.

4.3 Theory-Based Impact Evaluation

The idea of theory-based impact evaluation according to White (2009b) is very closely related to the “5 key principles” listed in Section 1. In fact, White’s points 3 to 5 basically overlap with those principles. Points 1 and 2 (“understanding program theory and context”) and point 6 (“mixed methods”) are likewise rather common sense and thus not part of a debate per se. Similar to the comment on 4.1, there is instead a debate about the operationalization of “mixed methods”, which should be discussed more prominently here (see, for example, Bamberger, M., Tarsilla, M., & Hesse-Biber, S. (2016). Why so many “rigorous” evaluations fail to identify unintended consequences of development programs: How mixed methods can contribute. Evaluation and Program Planning, 55, 155-162). Relatedly, there is merely one sentence on qualitative methods (“Methods of qualitative data collection include focus groups, in-depth interviews, participatory rural appraisal (PRA) and field visits, as well as reading of anthropological and political literature.”). These would deserve more attention. On a general note on Section 4, it might be worthwhile to mention platforms where debates about impact evaluations are held, most notably: http://blogs.worldbank.org/impactevaluations/

7 Systematic reviews of Impact evidence

There is a much better separate Wikipedia article on systematic reviews. It would thus make more sense to link to that article under “5 Examples of impact evaluations” and drop this section.


We hope Wikipedians on this talk page can take advantage of these comments and improve the quality of the article accordingly.

We believe Dr. Peters has expertise on the topic of this article, since he has published relevant scholarly research:


  • Reference: Michael Grimm & Jörg Peters, 2015. "Beer, Wood, and Welfare," Ruhr Economic Papers 0538, Rheinisch-Westfälisches Institut für Wirtschaftsforschung, Ruhr-Universität Bochum, Universität Dortmund, Universität Duisburg-Essen.

ExpertIdeasBot (talk) 18:59, 26 July 2016 (UTC)[reply]

External links modified

Hello fellow Wikipedians,

I have just modified one external link on Impact evaluation. Please take a moment to review my edit. If you have any questions, or need the bot to ignore the links, or the page altogether, please visit this simple FAQ for additional information. I made the following changes:

When you have finished reviewing my changes, you may follow the instructions on the template below to fix any issues with the URLs.

This message was posted before February 2018. After February 2018, "External links modified" talk page sections are no longer generated or monitored by InternetArchiveBot. No special action is required regarding these talk page notices, other than regular verification using the archive tool instructions below. Editors have permission to delete these "External links modified" talk page sections if they want to de-clutter talk pages, but see the RfC before doing mass systematic removals. This message is updated dynamically through the template {{source check}} (last update: 18 January 2022).

  • If you have discovered URLs which were erroneously considered dead by the bot, you can report them with this tool.
  • If you found an error with any archives or the URLs themselves, you can fix them with this tool.

Cheers.—InternetArchiveBot (Report bug) 09:54, 12 November 2017 (UTC)[reply]