Thursday, January 2, 2014

Massetti et al. - Part 3 of 3: Comparison of Degree Day Measures

Yesterday's blog entry outlined the differences between Massetti et al.'s derivation of degree days and our own.  To quickly recap: our measure shows much less variation within a county over the years, i.e., the standard deviation of fluctuations around a county's mean is about a third of theirs. One possibility is that our measure over-smooths the year-to-year fluctuations; alternatively, Massetti et al.'s fluctuations might include measurement error, which would result in attenuation bias (paper).
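
As a rough illustration of what "variation within a county" means here, the following minimal sketch computes the standard deviation of fluctuations around each county's own mean. It assumes a hypothetical long-format pandas DataFrame with columns fips, year, and degree_days; these names are ours for illustration, not from either paper's code.

```python
import pandas as pd

def within_county_sd(dd: pd.DataFrame, value_col: str = "degree_days") -> float:
    """Standard deviation of deviations from each county's own mean.

    Assumes a long panel with one row per county-year and a county
    identifier in a 'fips' column (hypothetical column names).
    """
    # Demean within each county, then take the overall standard deviation.
    deviations = dd[value_col] - dd.groupby("fips")[value_col].transform("mean")
    return float(deviations.std())

# Comparing two candidate degree day series would then be, e.g.:
# ratio = within_county_sd(ours, "dd_ours") / within_county_sd(theirs, "dd_theirs")
```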

Below are tests comparing various degree day measures in a panel of log corn and soybean yields. It seems preferable to test predictive power in a panel setting, as one does not have to worry about omitted variable bias (as mentioned before, Massetti et al. did not share their data with us, so we cannot match the same controls in a cross-sectional regression of farmland values). We use the optimal degree day bounds from the earlier literature.

The following two tables regress log corn and soybean yields, respectively, for all counties east of the 100 degree meridian (except Florida) in 1979-2011 on four weather variables, state-specific restricted cubic splines with 3 knots, and county fixed effects. Column definitions are the same as in yesterday's post: columns (1a)-(3b) use the NARR data to derive degree days, while column (4b) uses our 2008 procedure. Columns (a) follow the approach of Massetti et al. and derive the climate in a county as the inverse-distance weighted average of the four NARR grids surrounding the county centroid.  Columns (b) calculate degree days for each 2.5 x 2.5 mile PRISM grid within a county (where each PRISM grid is assigned the squared inverse-distance weighted average of all NARR grids over the US) and derive the county aggregate as the weighted average of all grids, where the weights are proportional to the cropland area in each grid.
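
For concreteness, the two aggregation schemes can be sketched as follows. This is a stylized illustration: the function and variable names are ours, and the distance and cropland-area inputs are assumed to be precomputed rather than taken from either paper's code.

```python
import numpy as np

def idw_average(values: np.ndarray, distances: np.ndarray, power: float = 1.0) -> float:
    """Inverse-distance weighted average, as in columns (a): combine the four
    NARR grid cells surrounding a county centroid. Use power=2 for the squared
    inverse-distance weights used to interpolate NARR onto PRISM grids in
    columns (b)."""
    w = 1.0 / distances ** power
    return float(np.sum(w * values) / np.sum(w))

def cropland_weighted_county(grid_degree_days: np.ndarray, cropland_area: np.ndarray) -> float:
    """Columns (b): degree days are first computed on each small (PRISM) grid
    cell inside the county, then averaged with weights proportional to the
    cropland area in each cell."""
    w = cropland_area / cropland_area.sum()
    return float(np.sum(w * grid_degree_days))
```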

Columns (0a)-(0b) are added as a baseline using a quadratic in growing season average temperature. Columns (1a)-(1b) follow Massetti et al. and derive degree days from daily average temperatures, i.e., degree days are only positive if the daily average exceeds the threshold. Columns (2a)-(2b) calculate degree days for each 3-hour reading, so degree days can be positive even when the daily average is below the threshold, as long as part of the within-day temperature distribution exceeds it.  Columns (3a)-(3b) approximate the within-day temperature distribution by linearly interpolating between the 3-hour readings.  Column (4b) approximates the within-day temperature distribution with a sinusoidal curve between the daily minimum and maximum.
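
The four degree day constructions can be sketched as follows. This is again a stylized illustration with assumed inputs (eight 3-hour readings per day and a single lower threshold); the measures used in the tables apply the degree day bounds mentioned above and are aggregated over the growing season.

```python
import numpy as np

def dd_daily_average(t_readings: np.ndarray, threshold: float) -> float:
    """Columns (1): degree days from the daily mean temperature only.
    Positive only if the daily average itself exceeds the threshold."""
    return max(float(t_readings.mean()) - threshold, 0.0)

def dd_subdaily(t_readings: np.ndarray, threshold: float) -> float:
    """Columns (2): evaluate each 3-hour reading separately (8 per day), so
    part of a day can contribute even if the daily mean is below the threshold."""
    return float(np.maximum(t_readings - threshold, 0.0).mean())

def dd_interpolated(t_readings: np.ndarray, threshold: float, steps: int = 96) -> float:
    """Columns (3): linearly interpolate between the 3-hour readings to
    approximate the within-day temperature path, then average the exceedance
    over the day (ignoring the wrap-around past the last reading for simplicity)."""
    hours = np.arange(0, 24, 3)                       # readings at hours 0, 3, ..., 21
    fine = np.interp(np.linspace(0, 21, steps), hours, t_readings)
    return float(np.maximum(fine - threshold, 0.0).mean())

def dd_sine(t_min: float, t_max: float, threshold: float) -> float:
    """Column (4b): single-sine approximation between the daily minimum and
    maximum, i.e., the standard closed-form expression for degree days above a
    threshold when the within-day temperature path is sinusoidal."""
    if threshold >= t_max:
        return 0.0
    mean, amp = (t_min + t_max) / 2.0, (t_max - t_min) / 2.0
    if threshold <= t_min:
        return mean - threshold
    theta = np.arcsin((threshold - mean) / amp)
    return float(((mean - threshold) * (np.pi / 2 - theta) + amp * np.cos(theta)) / np.pi)
```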

Explaining log corn yields 1979-2011.

Explaining log soybean yields 1979-2011.

The R-squared is lowest for the regressions using a quadratic in average temperature (0.37 for corn and 0.33 for soybeans).  It is slightly higher when we use degree days based on the NARR data set in columns (1a)-(3b), ranging from 0.39 to 0.41 for corn and 0.35 to 0.36 for soybeans.  It is much higher when our degree days measure is used in column (4b): 0.51 for corn and 0.48 for soybeans.

The second row in the footer lists the percent reduction in root mean squared error (RMSE) compared to a model with no weather controls (just county fixed effects and state-specific time trends). Weather variables that add nothing would give a 0% reduction, while weather measures that explain all remaining variation would reduce the RMSE by 100%.  Column (4b) reduces the RMSE by twice as much as the measures derived from NARR. Massetti et al.'s claim that they introduce "accurate measures of degree days" seems very odd given that their measure performs half as well as the previously published measures that we shared with them.
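
The RMSE comparison amounts to something like the following short sketch (per the correction at the end of the post, the RMSE is computed out-of-sample; the error vectors here are assumed to be the prediction errors from the two fitted models):

```python
import numpy as np

def pct_rmse_reduction(errors_weather: np.ndarray, errors_baseline: np.ndarray) -> float:
    """Percent reduction in RMSE relative to the baseline model with no
    weather controls: 0% means the weather variables add nothing, 100% would
    mean they explain all remaining variation."""
    rmse_w = np.sqrt(np.mean(errors_weather ** 2))
    rmse_b = np.sqrt(np.mean(errors_baseline ** 2))
    return 100.0 * (1.0 - rmse_w / rmse_b)
```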

The NARR data set likely includes more measurement error than our previous data set. Papers making comparisons between degree days and average temperature should use the best available degree days construction in order not to bias the test against the degree days model.

Correction (January 30th): An earlier version had a mistake in the code that calculated the RMSE both in- and out-of-sample. The corrected version calculates the RMSE out-of-sample only.  While the reduction in RMSE increased for all columns, the relative comparison between models is not affected.
