Large experimental sample sizes - (Nov/24/2011 )
Has anyone worked with large sample sizes (n>5) and compared different groups at that scale?
Is it preferable to process all samples at once (e.g., for a Western blot, flow cytometry, or a treatment) to reduce manual handling and technical human error?
Some people do everything at once because, according to them, they do not want controllable factors (temperature, cell culture conditions) to add to their error. Is that necessary?
I personally wouldn't consider 5 to be a "large sample number". During my PhD I routinely had 24 samples to deal with at once; that could end up being 100+ tubes when staining for flow, or up to 72 samples when dealing with different time points.
I think the way to deal with the samples depends a lot on what you are doing and on the nature of the sample. Here is a list of things I used to do during my PhD:
- For RNA extraction I used to just add Trizol to my cells (24 different samples, and sometimes 2-3 different time points, so up to 48-72 samples) and then store at -80C until I was ready to do the extractions and RT-PCR, so that I was sure to extract RNA and prepare cDNA on the same day.
- For WB I would prepare cell lysates and store them at -80C until I had all the needed samples (if I had different time points/different days I would store them so that the gels and transfers could be run for all samples together).
- For flow cytometry I would harvest, stain and acquire samples on the same day. In this case I could end up dealing with >100 tubes including all the different stains and controls (flow days can become very long days).
- For ELISA I would freeze supernatants until all time points were collected and then run the ELISAs for all samples at once (it was pretty common to have 20 ELISA plates in one assay day).
These are all examples where the samples are cultured cells. If/when working with mice, the sample harvesting and handling is a bit different.
I think different people work differently and you'll have to see what works best for you. Handling 5+ samples might seem like a lot at the moment, but with time and experience you may handle 24+ samples without even thinking about it (I take from your other posts that you are a newbie).
The most important thing is that you have the appropriate controls, so that experiments done on different days / under different conditions can still be compared. Also consider what is going to give you more variability, the treatment (culture conditions etc.) or the analysis assay (WB, PCR, flow...), and decide what's better to do at the same time to "minimise error".
Hope this helps, and welcome to the forum.
almost a doctor on Thu Nov 24 11:49:50 2011 said:
I personally wouldn't consider 5 to be a "large sample number". [...] The most important thing is that you have the appropriate controls, so that experiments done on different days / under different conditions can still be compared.
I will be working with close to 20 cell lines, but I used the n>5 example because n=3 suffices for statistical purposes. In my case I'm working with primary human cell lines, so we have/need a large group size.
I think my main concern is WB, because of its 'internal' variability. Running two similar blots on different days can sometimes yield quite different outcomes; there were a few times when my internal control gave different intensities for the same protein between blots.
I know it's easier for flow cytometry, because users can define and save a common instrument setting and then come back days later to repeat with other batches.
At the end of the day it comes down to getting the data: the more samples/tubes/plates/vials you have to deal with, the higher the risk of stuffing it up.
Research really tests aspects of the human capacity to perform: concentration, reproducibility, drive and improvement. I hope to learn a lot from the masters of the trade.
So for your RNA extractions, you added Trizol and left the extract in the flask/culture plastic?
n=3 is pretty dodgy in terms of stats; the power of your predictions drops drastically with smaller sample sizes, even with non-parametric statistics. For normal stats (e.g. t-tests), the usual rule of thumb is a minimum of n=30.
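The power point above can be made concrete with a quick Monte Carlo sketch (Python with numpy/scipy; the data, the assumed effect size of 1 SD, and the function name are illustrative, not from the thread):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def ttest_power(n, effect=1.0, alpha=0.05, sims=2000):
    """Fraction of simulated experiments in which a two-sample t-test
    detects a true mean difference of `effect` (in SD units) at level alpha."""
    hits = 0
    for _ in range(sims):
        a = rng.normal(0.0, 1.0, n)        # control group
        b = rng.normal(effect, 1.0, n)     # treated group, true shift = effect
        if stats.ttest_ind(a, b).pvalue < alpha:
            hits += 1
    return hits / sims

for n in (3, 10, 30):
    print(f"n={n}: estimated power ≈ {ttest_power(n):.2f}")
```

With these assumptions the simulation shows power climbing steeply from n=3 to n=30, which is the sense in which small samples make "predictions" unreliable even when the effect is real.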
bob1 on Wed Nov 30 22:30:16 2011 said:
n=3 is pretty dodgy in terms of stats; the power of your predictions drops drastically with smaller sample sizes, even with non-parametric statistics. For normal stats (e.g. t-tests), the usual rule of thumb is a minimum of n=30.
Is n=30 the minimum for a t-test? That would expand the sample sizes a lot if you have multiple conditions etc.
I think it also depends on the cell line. If it is a homogeneous one, then n=10 would yield the same mean as n=100. It is tough to get n=30 for human-isolated cell lines (especially stem cell lines).
Yes, n=30 is the usual minimum for t-tests, and this number is independent of the sample type! You can do non-parametric tests that are almost as powerful using lower sample numbers, but you should go and read a decent statistics textbook before you rush into anything.
While you might get the same sample mean with n=10, with a t-test you have no way of checking whether that is actually the case or whether you really do need the 100 samples. Also, for t-tests you need to be able to assume a normal distribution of the data, which cannot be determined from 10 samples; you might have an F-style curve or some other distribution (bimodal etc.). Experimental design and statistical testing are very important so as not to misinterpret the data you have.
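One common way to check the normality assumption mentioned above is the Shapiro-Wilk test; here is a minimal sketch on synthetic data (scipy's `stats.shapiro`; the sample sizes and distributions are made up for illustration):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

normal_data = rng.normal(5.0, 1.0, 200)    # roughly bell-shaped
skewed_data = rng.exponential(1.0, 200)    # strongly right-skewed

# Shapiro-Wilk: the null hypothesis is that the data are normal,
# so a small p-value means the normality assumption is in trouble.
p_norm = stats.shapiro(normal_data).pvalue
p_skew = stats.shapiro(skewed_data).pvalue
print(f"normal sample:  p = {p_norm:.3f}")
print(f"skewed sample:  p = {p_skew:.2e}")
```

Note that with large samples such a test rejects non-normality easily, but with only a handful of points it has little power, which is exactly the problem with judging the distribution from n=10.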
There are also statistical procedures to determine sample sizes more or less accurately. At the very least you'll get an impression of what is needed, which is especially useful if the material is valuable, rare, or unethical to use in excess.
Mead's resource equation and the power of a test are two keywords, but better to ask a statistician or have a look in a textbook about this.
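For reference, Mead's resource equation is simple enough to sketch directly: E = N - T - B, where N, T and B are the total, treatment and block degrees of freedom, and E (the error df) should land roughly between 10 and 20. The function name and the example numbers below are illustrative:

```python
def mead_error_df(total_units, treatment_groups, blocks=1):
    """Error degrees of freedom by Mead's resource equation, E = N - T - B,
    with N = total_units - 1, T = treatment_groups - 1, B = blocks - 1.
    Rule of thumb: aim for E between 10 and 20."""
    N = total_units - 1
    T = treatment_groups - 1
    B = blocks - 1
    return N - T - B

# Hypothetical design: 4 treatment groups, no blocking.
for units in (8, 16, 24):
    print(units, "units -> E =", mead_error_df(units, 4))
# 8 units give E = 4 (underpowered); 16 give E = 12; 24 give E = 20.
```

This kind of back-of-the-envelope check is useful precisely in the situation described above, where the material is too precious to burn on an oversized experiment.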
Doesn't he mean "n=3" as in 3 replicates of the same thing? I don't know if he does, but that would make more sense to me.