Field Studies: Replicated comparisons vs. side-by-side comparisons | TheFencePost.com
John Thomas, Sara Berg, Josh Coltrain and Lizabeth Stahl
University of Nebraska-Lincoln, South Dakota State University, Kansas State University, University of Minnesota



With the growing season upon us, farmers are planting their crops and getting them established. They want to know what works best, yields the most, and, especially, what is most profitable during these tight economic times. Some want to compare products or practices on their own farm, or see information from other farms or industry studies.

How should a basic study be set up or laid out in the field? One common approach is to divide a field in half and compare the halves, or to compare two fields in close proximity and see which variety or practice yields highest. This approach can produce very misleading results because of the variability that exists across a field, or between fields, due to many factors. Some sources of variability include:

· soil type,

· topography,

· management practices,

· drainage,

· pesticide residues,

· disease pressure,

· compaction,

· and weather events.

Just as you can count on yield monitor results varying across a field, you can also count on sources of variability (from the list above) impacting study results if a study compares two halves of one field or two fields across the road from each other.

A better approach, which more accurately estimates future performance of a treatment, is to put out replicated studies with random placement of treatments in each replication. This simply means that the same treatment is put out more than once across the study area, so that a treatment's apparent performance is not driven by its location in the field. Replicating treatments three to six times is common in most agricultural studies.

The more replications, the more reliable the results. It is also a good idea to repeat the replicated comparisons for more than one year to test performance over more environments, and come to stronger conclusions and estimations of real differences between treatments.
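To make the layout concrete, a randomized, replicated design like the one described above can be sketched in a few lines of Python. The treatment labels here are hypothetical placeholders; the point is that treatment order is re-randomized within each replication (block):

```python
import random

# Hypothetical example: two treatments, six replications.
# Within each replication (block), the treatment order is shuffled,
# so neither treatment always lands in the same part of the field.
treatments = ["A", "B"]
replications = 6

layout = []
for rep in range(1, replications + 1):
    order = treatments[:]
    random.shuffle(order)  # random placement within this block
    layout.append((rep, order))

for rep, order in layout:
    print(f"Rep {rep}: {order}")
```

Each replication still contains every treatment exactly once; only the ordering varies, which is what protects the comparison from field-position effects like the hail damage discussed below.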

As an example, an on-farm trial completed in 2016 illustrates how replication affected the results. This study compared two systems commonly used in planting pinto beans in Nebraska. The two treatments were applied and replicated six times with random placement.

The first treatment was 30-inch rows with a population of 90,000 plants per acre; the second treatment was 7.5-inch rows with a population of 120,000 plants per acre. (See the photo at the bottom of the accompanying graphic.)

This was a large field trial, with each treatment being 60 feet wide by 1,400 feet long. (The randomization was laid out as illustrated in the table at the top of the accompanying graphic.)

When average yields were calculated from the two treatments in the six replications, the 7.5-inch treatment yielded 8 bushels per acre more than the 30-inch treatment. The 7.5-inch rows with 120,000 population yielded 52 bushels per acre, and the 30-inch rows with 90,000 population yielded 44 bushels per acre.

Statistical analysis of the yield data (at the 0.05 probability level) showed a significant difference in yields, with the least significant difference being 2 bushels per acre. This means that, given the variability within the study, a yield difference of less than 2 bushels per acre could not be attributed to the treatments.
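The significance check above amounts to asking whether the observed yield difference exceeds the least significant difference (LSD). A minimal sketch using the numbers reported in this trial:

```python
# Average yields and LSD reported in the trial (bushels per acre).
yield_75_inch = 52
yield_30_inch = 44
lsd_05 = 2  # least significant difference at the 0.05 level

difference = yield_75_inch - yield_30_inch  # 8 bu/ac
# A difference larger than the LSD is considered statistically significant.
significant = difference > lsd_05
print(f"Difference: {difference} bu/ac, significant: {significant}")
```

In this trial the 8-bushel difference is well above the 2-bushel LSD, so the row-spacing/population effect is clearly real rather than field noise.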

During early August a hailstorm damaged the field, with the most significant damage occurring on the half of the field containing replications 4, 5 and 6. If the field had just been split with one treatment on each side, results would have looked different. Combining the 7.5-inch treatments from the hailed side of the field, the average yield is 49 bushels per acre. Combining the 30-inch treatments together on the side with minimal hail, average yield would have equaled 45 bushels per acre. This equals a difference between treatments of 4 bushels per acre, half the difference that was detected by the full, replicated trial.

Conversely, if all the 30-inch treatments had been on the side of the field that received the most hail, yield for this treatment would have been 43 bushels per acre and yield for the 7.5-inch treatment on the side receiving minimal hail would have equaled 54 bushels per acre, for a difference of 11 bushels per acre. (This is illustrated in the bar graph in the middle of the accompanying graphic.)
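Using the averages reported above, the distortion introduced by a split-field layout can be tallied directly. This sketch simply recomputes the treatment differences for the three layouts described in the text:

```python
# Reported average yields (bushels per acre) under each layout.
full_trial = {"7.5-inch": 52, "30-inch": 44}        # replicated, randomized
split_hail_on_75 = {"7.5-inch": 49, "30-inch": 45}  # 7.5-inch side hailed
split_hail_on_30 = {"7.5-inch": 54, "30-inch": 43}  # 30-inch side hailed

diffs = {}
for name, yields in [("full trial", full_trial),
                     ("split, hail on 7.5-inch side", split_hail_on_75),
                     ("split, hail on 30-inch side", split_hail_on_30)]:
    diffs[name] = yields["7.5-inch"] - yields["30-inch"]
    print(f"{name}: 7.5-inch advantage = {diffs[name]} bu/ac")
```

The replicated trial measures an 8-bushel advantage, while the two split-field layouts would have reported 4 or 11 bushels depending on which side took the hail.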

It is clear that spreading the six replications out across the field resulted in a more accurate estimation of the impact of these systems on yield than splitting the field in half. In all three layouts, the 7.5-inch treatment yielded the most, but the split-field design either exaggerated or diminished its yield advantage, depending on which treatment was exposed to the heavier hail damage. (See the bar graph.)

Poorly laid out field studies can generate misleading data and can lead to incorrect conclusions. Also keep this in mind when you are looking at data from other studies. Today, with GPS guidance, it is relatively easy to put in replicated, randomized studies, even on large field-scale comparisons.