    KfW Development Impact Lab

    The KfW Development Impact Lab bundles all Rigorous Impact Evaluations (RIEs) and quantitative impact analyses of KfW Development Bank. RIEs supplement the findings of ex-post evaluations with causal, quantitative measurements of impacts on selected topics of particular relevance to Financial Cooperation, carried out alongside project implementation. Ex-post evaluations and RIEs are therefore equally relevant for generating knowledge and a prerequisite for institutional learning in Financial Cooperation. The Lab's agile working methods support close networking between RIEs and a wide range of expertise providers within and outside KfW Development Bank.

    The task of the KfW Development Impact Lab is to promote existing RIEs and to select new ones for implementation alongside Financial Cooperation projects. How an RIE is used and promoted is tailored to the substantive question at hand, the context, and the needs and capacities of KfW Development Bank and our partners. When implementing RIEs, we conduct household surveys and analyse satellite or other secondary data, taking methodological possibilities and limitations into account.

    But what exactly are Rigorous Impact Evaluations?

    RIEs refer to a toolbox of experimental and quasi-experimental methods that measure the causal effects of a project. The emphasis is on causality: identifying those effects that can be attributed exclusively to the project and isolating them from concurrent developments or other, non-causal links between project and target indicators. In addition to measuring impacts on a project's target groups as a whole, RIEs also analyse impacts on subgroups or the mechanisms underlying those impacts. For example, a healthcare project may have significantly greater effects for women than for men, or a new connection to the electrical grid may lead to productive uses of electricity only in areas with access to markets.
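
    A minimal, purely illustrative Python sketch of such a subgroup analysis (all names and numbers below are invented for illustration, not drawn from a real evaluation): the same impact estimate is computed separately for women and men by comparing average outcomes with and without the project in each subgroup.

    ```python
    import random
    import statistics

    # Simulated survey data for a hypothetical health project: the assumed
    # effect is larger for women than for men, mirroring the example above.
    random.seed(1)

    def simulate_person(treated, female):
        base = random.gauss(60, 10)                      # health score without the project
        effect = (8 if female else 2) if treated else 0  # assumed project effect
        return base + effect

    sample = [
        {"female": female, "treated": treated, "outcome": simulate_person(treated, female)}
        for female in (True, False)
        for treated in (True, False)
        for _ in range(500)
    ]

    def subgroup_effect(female):
        # Difference in mean outcomes between treated and untreated persons
        # within one subgroup: the estimated impact for that subgroup.
        treated = [p["outcome"] for p in sample if p["female"] == female and p["treated"]]
        control = [p["outcome"] for p in sample if p["female"] == female and not p["treated"]]
        return statistics.mean(treated) - statistics.mean(control)

    print(f"Estimated effect for women: {subgroup_effect(True):.1f}")
    print(f"Estimated effect for men:   {subgroup_effect(False):.1f}")
    ```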

    Randomised Controlled Trials (RCTs)

    The most rigorous methods in the impact evaluation toolbox are fully experimental ones, such as Randomised Controlled Trials (RCTs), also known as the “gold standard”. In an RCT, a project, or even individual project components, is randomly assigned to a group of individuals, schools, communities or other units (the “intervention group”). A second group receives access to the project later or, as with a placebo, not at all (the “control group”). The principle of random assignment, familiar from medical research, ensures the comparability of the two groups: depending on the measure, they are on average equally old, healthy, ambitious, vulnerable or wealthy. Any differences between the groups after the intervention can therefore be attributed to the project itself. A well-known example is cash transfers that are disbursed to households in the target group if their children attend school.
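
    A minimal Python sketch of the logic behind an RCT estimate, using the cash-transfer example with entirely made-up numbers: households are assigned to the intervention or control group at random, and the difference in mean outcomes between the two groups is read as the project's causal effect.

    ```python
    import random
    import statistics

    random.seed(42)

    households = list(range(1000))
    intervention = set(random.sample(households, k=500))  # random assignment

    # Illustrative outcome: school attendance in days per year. The "+12 days"
    # effect for the intervention group is an assumption made for this sketch.
    def attendance(hh):
        base = random.gauss(150, 20)  # same distribution for everyone
        return base + (12 if hh in intervention else 0)

    outcomes = {hh: attendance(hh) for hh in households}
    treated = [outcomes[hh] for hh in intervention]
    control = [outcomes[hh] for hh in households if hh not in intervention]

    # Because assignment was random, the two groups are comparable on average,
    # so the difference in means estimates the causal effect on attendance.
    impact = statistics.mean(treated) - statistics.mean(control)
    print(f"Estimated impact: {impact:.1f} additional school days per year")
    ```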

    Quasi-experimental methods: Regression Discontinuity Design (RDD)

    If purely experimental (random) assignment is not reasonable or feasible, quasi-experimental methods are often a useful alternative. For example, comparison groups can be defined along the threshold values of certain selection criteria (Regression Discontinuity Design, RDD). If a project targets children under two years of age, children who are just under two, and therefore take part, can be compared with children who are just over two and therefore do not.
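
    A minimal Python sketch of an RDD comparison around such an age threshold (purely simulated data; the 24-month cut-off, the bandwidth and the effect size are assumptions for illustration): only children close to the cut-off are compared, because apart from their eligibility they are very similar.

    ```python
    import random
    import statistics

    random.seed(7)

    THRESHOLD = 24   # eligibility cut-off: children under 24 months take part
    BANDWIDTH = 3    # compare only children within 3 months of the cut-off

    children = []
    for _ in range(2000):
        age = random.uniform(12, 36)                                   # age in months
        eligible = age < THRESHOLD
        outcome = random.gauss(0.0, 1.0) + (0.4 if eligible else 0.0)  # assumed effect
        children.append((age, outcome))

    just_below = [o for age, o in children if THRESHOLD - BANDWIDTH <= age < THRESHOLD]
    just_above = [o for age, o in children if THRESHOLD <= age < THRESHOLD + BANDWIDTH]

    # Near the threshold, children on either side are comparable, so the jump
    # in average outcomes at the cut-off approximates the project's causal effect.
    effect = statistics.mean(just_below) - statistics.mean(just_above)
    print(f"Estimated effect at the threshold: {effect:.2f}")
    ```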

    RCTs and RDDs are only two examples from the RIE toolbox. Depending on the type of project, the level of implementation and the criteria for selecting beneficiaries, the toolbox offers a range of methodological options. As part of the KfW Development Impact Lab, we identify the methodology best suited to each project, following the principle: form follows function.
