Can someone explain in layman's terms why bootstrapping is valid and how to - (May/21/2016)
I have 9 PCR measurements each for a control group, a medium-treated group, and a high-treated group. The variances within each group are fairly high, so of course I am not obtaining statistically significant results. However, I've recently come across this concept of bootstrapping, where you resample your results and recompute the averages thousands of times (so it'd be like performing the experiments many times), and according to theory this should give you a more accurate estimate of the mean for each group.
My question is 1.) how is this valid (in layman's terms), and 2.) how do you go about hypothesis testing to see whether the differences between the medium- and high-treated groups and the control are statistically significant? I mean, a 2% change will pretty much always come back as *** if you put in N=30000, so I don't think I'm doing this right. How do you go about performing hypothesis tests on bootstrapped data?
Bootstrapping doesn't manufacture data - it treats your observed sample as a stand-in for the population (something you usually can't measure directly, like the average height of everyone in the world) and repeatedly resamples from it, with replacement, to estimate how variable a statistic such as the mean would be across repeated experiments. The key point is that each resample is the same size as your original sample: running 30000 resamples does not give you N=30000, it gives you 30000 re-estimates from the same 9 data points. So the bootstrap cannot create significance that isn't already in your data, and with only 9 points per group the resampling distribution itself is on shaky ground. In your case you can easily calculate the means directly, and you can generate more data points if needed.
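To make that concrete, here is a minimal sketch of what bootstrapping actually does, using made-up numbers in place of your PCR values (the data and the replicate count B are assumptions for illustration, not your measurements). Note that every resample keeps n = 9:

```python
import random
import statistics

random.seed(0)

# Hypothetical PCR values for one group (n = 9); substitute your own data.
sample = [2.1, 2.8, 1.9, 3.4, 2.5, 2.2, 3.0, 1.7, 2.6]
n = len(sample)

B = 10000  # number of bootstrap replicates -- NOT the sample size
boot_means = []
for _ in range(B):
    # Resample WITH replacement, keeping the original sample size n = 9.
    resample = random.choices(sample, k=n)
    boot_means.append(statistics.mean(resample))

# The spread of boot_means approximates the sampling variability of the mean;
# a 95% percentile confidence interval is read off the sorted replicates.
boot_means.sort()
ci_low = boot_means[int(0.025 * B)]
ci_high = boot_means[int(0.975 * B)]
print(f"observed mean = {statistics.mean(sample):.2f}")
print(f"95% percentile bootstrap CI: [{ci_low:.2f}, {ci_high:.2f}]")
```

Notice that cranking B up from 10000 to 30000 only smooths the picture of the same 9 points; it doesn't narrow the interval the way 30000 real measurements would.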
Don't complicate your analysis by trying to squeeze significance out of something that just requires more data, or by using a complicated method when a simple one will do.
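For completeness, if you did want a resampling-based answer to question 2.), a permutation test is the usual simple option: shuffle the group labels many times and ask how often a label shuffle produces a difference as large as the one you observed. Each shuffle still uses only your 18 real values - the thousands of shuffles are replicates, not extra N. A sketch with invented numbers (the data here are assumptions, not your results):

```python
import random
import statistics

random.seed(0)

# Hypothetical PCR values (n = 9 per group); substitute your real measurements.
control = [2.1, 2.8, 1.9, 3.4, 2.5, 2.2, 3.0, 1.7, 2.6]
treated = [3.1, 2.9, 3.6, 2.4, 3.3, 2.8, 3.9, 3.0, 3.5]

observed = statistics.mean(treated) - statistics.mean(control)
pooled = control + treated

B = 10000  # permutation replicates; each shuffle reuses the same 18 values
count = 0
for _ in range(B):
    # Shuffle the pooled values and split them into two fake "groups" of 9.
    random.shuffle(pooled)
    diff = statistics.mean(pooled[:9]) - statistics.mean(pooled[9:])
    if abs(diff) >= abs(observed):
        count += 1

p_value = count / B  # two-sided p-value
print(f"observed difference = {observed:.2f}, p = {p_value:.3f}")
```

You would run this once per comparison (medium vs. control, high vs. control). The p-value here depends on your actual spread, not on B, which is why inflating the replicate count to 30000 can never legitimately turn a 2% change into ***.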