17 Cards in this Set

  • Front
  • Back

What do Difference in Difference Designs (DiDs) do?

DiDs compare the changes in outcomes over time between a population that is enrolled in a program (the treatment group) and a population that is not (the comparison group).
DiDs estimate the counterfactual for the change in outcome for the treatment group by calculating the change in outcome for the comparison group. This method enables any differences between the treatment and comparison groups that are constant over time to be taken into account.
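
In its simplest two-group, two-period form, the DiD estimate is just the change in mean outcomes for the treatment group minus the change in mean outcomes for the comparison group. A standard way to write this (the notation is assumed here for illustration, not taken from the cards):

```latex
\widehat{DD}
  = \big( \bar{Y}^{\,T}_{\text{post}} - \bar{Y}^{\,T}_{\text{pre}} \big)
  - \big( \bar{Y}^{\,C}_{\text{post}} - \bar{Y}^{\,C}_{\text{pre}} \big)
```

Here T and C index the treatment and comparison groups, and the pre/post subscripts index measurement before and after the program.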

Counterfactual in DiDs

  • It is important to note that the counterfactual being estimated here is the change in outcomes for the comparison group.
  • The treatment and comparison groups do not necessarily need to have the same pre-intervention conditions.
  • But for DiD to be valid, the comparison group must accurately represent the change in outcomes that would have been experienced by the treatment group in the absence of treatment.

Application of Difference in Differences

To apply difference-in-differences, all that is necessary is to measure outcomes in the group that receives the program (the treatment group) and the group that does not (the comparison group), both before and after the program.
The method does not require us to specify the rules by which the treatment is assigned.
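
A minimal sketch of that calculation in code, using made-up group means purely for illustration (the numbers and variable names are assumptions, not data from any study on these cards):

```python
# Simple two-group, two-period difference-in-differences from group means.
# All numbers are illustrative placeholders, not real study data.

treat_pre, treat_post = 12.0, 18.0   # mean outcome, treatment group
comp_pre, comp_post = 11.0, 14.0     # mean outcome, comparison group

change_treatment = treat_post - treat_pre    # observed change for the treated
change_comparison = comp_post - comp_pre     # estimated counterfactual change

did_estimate = change_treatment - change_comparison
print(f"DiD estimate of program impact: {did_estimate}")  # 6.0 - 3.0 = 3.0
```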

The Equal Trends Assumption .1.

In the absence of the program, outcomes in the treatment and comparison groups would need to move in tandem. That is, without treatment, outcomes would need to increase or decrease at the same rate in both groups; we require that outcomes display equal trends in the absence of treatment.
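
One common way to formalise the assumption uses potential-outcomes notation, with Y(0) denoting the outcome that would occur without treatment and D the treatment-group indicator:

```latex
E\big[ Y_{\text{post}}(0) - Y_{\text{pre}}(0) \mid D = 1 \big]
  = E\big[ Y_{\text{post}}(0) - Y_{\text{pre}}(0) \mid D = 0 \big]
```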

The Equal Trends Assumption .2.

There is no way to prove that outcomes in the treatment and comparison groups would have moved in tandem in the absence of the program.
The reason is that we cannot observe what would have happened to the treatment group in the absence of the treatment; in other words, we cannot observe the counterfactual. Thus, when we use the difference-in-differences method, we must assume that, in the absence of the program, the outcome in the treatment group would have moved in tandem with the outcome in the comparison group.

If equal trends assumption not satisfied

If outcome trends are different for the treatment and comparison groups, then the estimated treatment effect obtained by difference-in-differences methods would be invalid, or biased. The reason is that the trend for the comparison group is not a valid estimate of the counterfactual trend that would have prevailed for the treatment group in the absence of the program.
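
In the same notation as above, the bias of the DiD estimate is the gap between the two groups' counterfactual trends, which is zero exactly when the equal trends assumption holds:

```latex
\text{Bias}
  = E\big[ Y_{\text{post}}(0) - Y_{\text{pre}}(0) \mid D = 1 \big]
  - E\big[ Y_{\text{post}}(0) - Y_{\text{pre}}(0) \mid D = 0 \big]
```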

Testing the validity of the equal trends assumption in DiDs .1.

  • A good validity check is to compare changes in outcomes for the treatment and comparison groups before the program is implemented.
  • If the outcomes moved in tandem before the program started, we gain confidence that outcomes would have continued to move in tandem in the post-intervention period.
  • To check for equality of pre-intervention trends, we need at least two serial observations on the treatment and comparison groups before the start of the program.
  • This means that the evaluation would require 3 serial observations: 2 pre-intervention observations to assess the pre-program trends and at least one post-intervention observation to assess impact with the difference-in-differences formula. A sketch of such a pre-trend check follows below.
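
A minimal sketch of the pre-trend check described above, assuming two pre-intervention observations per group (all values are illustrative placeholders):

```python
# Compare pre-intervention trends for the treatment and comparison groups.
# Two pre-program observations per group; all values are illustrative.

treatment_pre = [20.0, 23.0]    # group mean outcome at pre-period 1 and 2
comparison_pre = [15.0, 18.1]

trend_treatment = treatment_pre[1] - treatment_pre[0]
trend_comparison = comparison_pre[1] - comparison_pre[0]

gap = trend_treatment - trend_comparison
print(f"Pre-intervention trend gap: {gap:.2f}")

# A gap close to zero is consistent with (though it does not prove) the
# equal trends assumption; a large gap is a warning sign for the design.
```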
Testing the validity of the equal trends assumption in DiDs .2.

An alternative way to test the assumption of equal trends is to perform the difference-in-differences estimation using different comparison groups; obtaining similar impact estimates across comparison groups increases confidence in the assumption.

Limitations of DiDs

  • Difference-in-differences is generally less robust than the randomized selection methods.
  • It is not always effective at overcoming all of the threats to internal validity, including selection bias, attrition, maturation and history.
  • Even when trends are parallel before the start of the intervention, bias in the estimation may still appear.
  • The reason is that DiD attributes to the intervention any differences in trends between the treatment and comparison groups that occur from the time the intervention begins. If any other factors are present that affect the difference in trends between the two groups, the estimation will be invalid or biased.

DiD: Galiani et al. 2005

The study looked at the effect of water privatization in Argentina on health outcomes, specifically infant mortality.
It looked at multiple pre-test time periods to establish the equal trends assumption: all areas had declining child mortality rates. Post-test, child mortality rates fell more sharply in the privatized areas. The fact that the pre-test trends were equivalent strengthened the equal trends assumption.


Galiani et al. also looked at other variables to see whether they changed, such as deaths from non-water-related causes (e.g. cardiac problems). This is an example of Popper's (1959) falsification test, which states that for any belief to have credence it must be inherently disprovable before it can be deemed scientific.
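
A minimal sketch of this kind of falsification (placebo-outcome) check: the same DiD calculation is applied to an outcome the intervention should not affect, and the estimate is expected to be close to zero. All numbers are illustrative placeholders, not data from Galiani et al.:

```python
# Falsification (placebo-outcome) check: rerun the DiD on an outcome the
# intervention should NOT affect; a sizeable "effect" there casts doubt
# on the design. Figures are illustrative placeholders only.

def did(treat_pre, treat_post, comp_pre, comp_post):
    """Simple two-group, two-period difference-in-differences."""
    return (treat_post - treat_pre) - (comp_post - comp_pre)

# Outcome of interest (e.g. deaths from water-related causes, per 10,000):
effect_main = did(treat_pre=9.0, treat_post=6.5, comp_pre=9.2, comp_post=8.4)

# Placebo outcome (e.g. deaths from causes unrelated to water, such as cardiac problems):
effect_placebo = did(treat_pre=4.1, treat_post=4.0, comp_pre=4.2, comp_post=4.1)

print(f"Main DiD estimate:    {effect_main:+.2f}")     # expected: clearly non-zero
print(f"Placebo DiD estimate: {effect_placebo:+.2f}")  # expected: close to zero
```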



Popper's Falsification Test 1959

States that for any belief to have credence it must be inherently disprovable before it can be deemed scientific.

DiD: Winter Ebmer 1988 .1.

Utilized a triple-difference (difference-in-difference-in-differences) approach to see if a change in the duration of welfare benefits for workers aged 50+ in Austria would encourage employers to lay off older workers instead of younger ones.

28 out of 90 counties enacted a law trebling the duration for which older workers could receive benefits. Winter Ebmer compared older and younger workers within counties that had implemented the law, and also older and younger workers within control (non-treated) counties. In this way, if a county had experienced industrial decline, one would expect both younger and older workers to be affected in the same way absent the policy change.

DiD: Winter Ebmer 1988 .2.

By introducing a third element, Winter Ebmer made the DiD much more robust. Simply comparing the older workers (treated) with the younger workers (control) would not have accounted for factors unrelated to the policy impacting upon the treatment group.
The study concluded that both age groups, across treatment and comparison counties, experienced a decline in unemployment rates. However, older workers in treatment counties experienced less of a decline, suggesting that the new policy was indeed encouraging employers to lay off older workers.
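
A minimal sketch of a triple-difference estimate of this kind, computed from group means; the groups and numbers are illustrative assumptions, not Winter Ebmer's data:

```python
# Triple difference (DDD): the older-vs-younger DiD in treated counties,
# minus the same DiD in comparison counties.
# All values are illustrative placeholders (e.g. unemployment rates in %).

def age_did(old_pre, old_post, young_pre, young_post):
    """Change for older workers minus change for younger workers."""
    return (old_post - old_pre) - (young_post - young_pre)

# Counties that trebled benefit duration for older workers (treated):
did_treated = age_did(old_pre=5.0, old_post=6.8, young_pre=5.1, young_post=5.6)

# Counties that did not change the rules (comparison):
did_comparison = age_did(old_pre=4.9, old_post=5.3, young_pre=5.0, young_post=5.4)

ddd_estimate = did_treated - did_comparison
print(f"Triple-difference estimate: {ddd_estimate:+.2f}")

# County-wide shocks (e.g. industrial decline) that hit both age groups alike
# cancel in the within-county DiD; age-specific trends common to all counties
# cancel when the comparison-county DiD is subtracted.
```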

DiD: Riley et al 2011 Study of Job Centre Plus

An example of many of the issues that DiDs are less well equipped to manage. Job Centre Plus (JC+) replaced the Employment Service in the UK, a major change in welfare delivery. Before JC+, the number of unemployed people claiming Jobseeker's Allowance had decreased, but the number of people claiming incapacity benefit had increased. JC+ was rolled out in two phases, which allowed a DiD to assess the speed with which people were moved off benefits in both the treatment and the comparison areas. However, the equal trends assumption was compromised because Pathways to Work was introduced at the same time and acted as a confounder. This is an example of a history variable, i.e. something arising between the commencement of the study and the final post-test date that acts as a confounder.

DiD: Card & Krueger 1994


Minimum Wages and Employment: A Case Study of the Fast-Food Industry in New Jersey and Pennsylvania

One of the most famous examples of Difference in Difference.

Card and Krueger use a difference-in-differences identification strategy to identify the causal effect of a minimum wage increase on employment. Card & Krueger studied employment at 410 fast-food restaurants in Pennsylvania and New Jersey. The main finding was that the increase in the minimum wage had a negligible or even non-existent effect on employment.

DiD: Card & Krueger 1994 - Details
  • In April 1992 NJ's minimum wage increased from $4.25 to $5.05 per hour.
  • C&K surveyed 410 fast food restaurants in NJ (treatment) and Pennsylvania (comparison) before and after the rise in the minimum wage.
  • Comparisons of the changes in wages, employment, and prices at stores in NJ relative to stores in Pennsylvania (where the minimum wage remained fixed at $4.25/hour) yielded simple estimates of the effect of the higher minimum wage.
  • The empirical findings challenge the prediction that a rise in the minimum wage reduces employment.
  • They also compared employment growth at stores in New Jersey that were initially paying high wages (and were unaffected by the new law) to employment changes at lower-wage stores.
  • Stores that were unaffected by the minimum wage had the same employment growth as stores in Pennsylvania, while stores that had to increase their wages increased their employment.
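
A minimal sketch of how a two-period, two-state comparison like this is commonly estimated as a regression with a state-by-period interaction, where the interaction coefficient is the DiD estimate. The data below are randomly simulated placeholders, not the Card & Krueger survey data:

```python
# Two-period DiD as a regression: fte ~ nj + post + nj:post.
# The coefficient on nj:post is the DiD estimate.
# Data are simulated placeholders, not the Card & Krueger survey.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_stores = 410

df = pd.DataFrame({
    "store": np.repeat(np.arange(n_stores), 2),
    "nj": np.repeat(rng.integers(0, 2, n_stores), 2),  # 1 = New Jersey, 0 = Pennsylvania
    "post": np.tile([0, 1], n_stores),                  # 0 = before, 1 = after the wage rise
})
# Simulated full-time-equivalent employment with an assumed DiD effect of +0.5:
df["fte"] = (20 + 1.0 * df["nj"] - 0.8 * df["post"]
             + 0.5 * df["nj"] * df["post"] + rng.normal(0, 2, len(df)))

model = smf.ols("fte ~ nj * post", data=df).fit()
print(model.params["nj:post"])  # DiD estimate; should be near the assumed +0.5
```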

Criticisms of Card & Krueger 1994

There have been several criticisms of the paper, including criticism of the quality of the data and the possibility that employers might have anticipated the change. Another possibility is that people were switching from lower-wage Pennsylvania jobs to higher-wage New Jersey jobs. This would cause employment to fall in Pennsylvania relative to New Jersey. The Card and Krueger study would not have caught this, as they surveyed managers, not employees. This would explain why the differential went in a different direction than expected.
Another area of concern relating to this study is the timeline: Card & Krueger surveyed the restaurants in February 1992, just prior to the law taking effect, and then again in November 1992, just a few months after it had been introduced. It could be argued that this was too soon to identify the impact of the legislation upon employment, as it may be a year or more before fast-food restaurants respond by laying off staff.