Difference-in-differences (DiD) estimation is a quasi-experimental impact evaluation method that allows controlling for time-constant differences in unobservable variables. The method allows for pre-existing differences between the control and treatment groups (for instance, different average socio-economic status). By combining a before-after comparison with a simple difference between groups, DiD addresses shortcomings of both approaches, such as maturation bias and selection bias arising from time-constant unobservable variables.
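
In the standard two-group, two-period case, this combination can be written as a double difference; the notation below is a textbook formulation rather than one taken from this text, with Y-bar denoting mean outcomes, T and C the treatment and comparison groups, and pre/post the periods before and after the intervention:

\hat{\delta}_{\text{DiD}} = (\bar{Y}_{T,\text{post}} - \bar{Y}_{T,\text{pre}}) - (\bar{Y}_{C,\text{post}} - \bar{Y}_{C,\text{pre}})

Because each group is differenced against its own baseline, any time-constant gap between the two groups cancels out of the estimate.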

However, a main disadvantage of DiD compared to controlled designs such as RCTs is its reliance on the common trend assumption. This assumption presupposes that, in the absence of the intervention, treatment units and comparison units would have experienced the same evolution over time. The common trend assumption is subject to many threats and can easily be violated. Even if the two groups look similar at baseline, at least on observable variables, nothing guarantees that they will follow similar trends, as confounding factors may intervene. For example, if the DiD is designed around geographically non-overlapping control and treatment groups, there is a risk that unforeseen events affect only one of the two groups. During the implementation of an educational program, for instance, schools in the treatment group might benefit from other governmental education programs, biasing the impact estimates. Similarly, a natural catastrophe such as flooding or an earthquake may cause treatment schools to close for several months, thus negatively affecting school outcomes. To sum up, unless schools are assigned randomly as in an RCT design, the existence of some unobservable factor that affects the two groups unequally over time can hardly be excluded. It is therefore hard to validate the common trend assumption upon which DiD relies unless multiple time periods are available.
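
When several pre-intervention periods are observed, a common (though not conclusive) diagnostic is to estimate the DiD interaction and then run a placebo test restricted to pre-intervention data, where no "effect" should appear. The sketch below is purely illustrative, assuming simulated school-level data and hypothetical variable names (y, treated, post, period); it is not an implementation tied to any particular study.

```python
# Minimal sketch: two-period DiD via OLS with an interaction term, plus a
# placebo check on pre-intervention periods. Data are simulated for illustration.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_schools, n_periods = 100, 4                      # periods 0-1 pre, 2-3 post
school = np.repeat(np.arange(n_schools), n_periods)
period = np.tile(np.arange(n_periods), n_schools)
treated = (school < n_schools // 2).astype(int)    # first half of schools treated
post = (period >= 2).astype(int)

# Outcome: common time trend + time-constant group gap + true effect of 2.0
y = 1.0 * period + 3.0 * treated + 2.0 * treated * post \
    + rng.normal(0, 1, n_schools * n_periods)
df = pd.DataFrame({"y": y, "treated": treated, "post": post, "period": period})

# DiD estimate: coefficient on the treated:post interaction
did = smf.ols("y ~ treated * post", data=df).fit()
print("DiD estimate:", did.params["treated:post"])

# Placebo test on pre-periods only: pretend period 1 were the "post" period.
pre = df[df["period"] < 2].copy()
pre["placebo_post"] = (pre["period"] == 1).astype(int)
placebo = smf.ols("y ~ treated * placebo_post", data=pre).fit()
print("Placebo estimate:", placebo.params["treated:placebo_post"])  # ~0 if pre-trends are parallel
```

A placebo estimate close to zero is consistent with parallel pre-trends, but it cannot rule out a shock that hits only one group after the intervention starts, which is exactly the threat discussed above.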