high efficiency and what to do with that? - (May/08/2007 )
hi folks,
we got a new cycler and the efficiency values no longer range from 1-2 but from 0-1 (meaning 0 to 100% PCR efficiency, where 1 = perfect doubling of product). so far, so clear. but with some primer combinations i get efficiencies above 1 (e.g. 1.16, 1.03). unfortunately these values don't work with my relative quantification analysis software (REST). so what do you do when you get efficiencies above the theoretical limit? cut them down to 1? subtract 1? and how do you combine the two different kinds of results?
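For what it's worth, the two reporting conventions convert into each other mechanically. A minimal Python sketch (the helper names are mine, not from any cycler software): the 1-2 scale is the per-cycle amplification factor derived from the standard-curve slope, and the 0-1 scale is just that factor minus 1.

```python
# Hypothetical helpers for converting between the two efficiency conventions
# discussed in this thread. Not taken from any specific instrument software.

def slope_to_amplification_factor(slope):
    """Standard-curve slope (Ct vs log10 input) -> per-cycle factor (1-2 scale)."""
    return 10 ** (-1.0 / slope)

def factor_to_fraction(factor):
    """1-2 scale -> 0-1 scale (0 = no amplification, 1 = perfect doubling)."""
    return factor - 1.0

# A perfect assay has a slope of about -3.32, i.e. a doubling every cycle:
f = slope_to_amplification_factor(-3.3219)
print(round(f, 3))                      # 2.0
print(round(factor_to_fraction(f), 3))  # 1.0
```

On either scale, "above the theoretical limit" means a factor above 2 (or a fraction above 1), so cutting down or subtracting changes the meaning, not just the units.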
mike
Hey, i've got the same problem, and it's actually not solved yet. So if you manage to solve it in some way, just say so. Meanwhile i've just cut the values down to 1.
Well, in theory more than 100% efficiency should not exist. Did you try calculating it by hand (using the slope, or even the raw data, of your dilution series)?
Well, I know the theory, but the calculation is correct - maybe it's due to primer-dimers or unspecific amplification..? Any ideas or explanations welcome!
And just cutting them down to 1 can't really be correct, as two reactions showing efficiencies of 1.02 and 1.20 obviously behave differently...
I have now found a formula another lab uses to correct its results, but I don't really understand what it does (yet)...
Corrected Value_T = (10^(-1/slope_T))^(intercept_H - Ct_T)
where T is the target gene and H the housekeeping gene...
I have to check this one out... If anyone can explain to me what this formula does and why it does it: be welcome - again!
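In case it helps, here is a minimal Python sketch of that formula (the function and variable names are my own; the thread only gives the algebra). One observation: E^(intercept - Ct) is just the standard-curve relation Ct = intercept + slope * log10(N0) solved for N0, since 10^((Ct - intercept)/slope) = (10^(-1/slope))^(intercept - Ct). The quoted formula applies the target's efficiency but the housekeeper's intercept, which is presumably where the normalization comes in.

```python
# Sketch of the correction formula quoted above; names are hypothetical.

def corrected_value(slope_target, intercept_housekeeper, ct_target):
    # Per-cycle amplification factor of the target assay (1-2 scale):
    efficiency = 10 ** (-1.0 / slope_target)
    # E^(intercept - Ct) recovers a starting quantity from a standard curve:
    return efficiency ** (intercept_housekeeper - ct_target)

# Example: ideal slope (-3.32), housekeeper intercept 35, target Ct 25.
# A doubling assay 10 cycles "early" gives roughly 2^10 = 1024:
print(corrected_value(-3.3219, 35.0, 25.0))
```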
Mike
Hmmmm, Jadefalcon, I can't answer your question about the equation because I have not used it. However, I can tell you that efficiency >100% can and does happen, Krumelmonster. I've seen it on a number of occasions.
I think the primary reason is that "efficiency" is an inadequate word for what is being measured when you calculate that slope; it doesn't quite mean that more than one copy per cycle of your template is being made if you get 105%, or 140%, or whatever. I think that's just an oversimplification of how to describe the relationship defined by the value of the slope.
It may not be accurate, but this is how I think about it: if your calculated efficiency is close to 1, you get a doubling every cycle... but if it's either higher or lower, that doesn't necessarily mean more or fewer copies. An efficiency of 1.5 is a POOR efficiency, not a higher efficiency/more copies; an efficiency of 0.5 is also a POOR efficiency, not necessarily fewer copies. Whether or not your efficiency is good, in my mind, is determined by how close the number is to 1, not which side of 1 it lies on. I don't think it's precisely the efficiency of the reaction that is being measured, although that's how it is described.
I quantified DNA traces, so I was forced to get standard curves as exact as possible. The standard curve method is the only approved one that is able to determine the true PCR amplification efficiency. But it can be problematic: it only determines the correct amplification efficiency if the PCR really follows the theoretical mathematical relations. The standard curve formula cp = -(1/logE)*logN0 + logN/logE is just a rearrangement of N(cp) = N0*E^cp.
The first challenge is accuracy. Even elaborate standard curves from serial dilutions spanning 5 orders of magnitude, with 15 individual samples, showed a standard deviation of 0.02 in my experiments. So even if your true amplification efficiency were 2.0, in many cases you would measure higher or lower efficiencies. However, the more PCR samples are used, the more exact the measurement becomes.
The next challenge is unspecific by-products appearing at low template numbers. These would bend your standard curve downward, so the linear regression would give the standard curve a lower slope. Have you checked your PCR products on a gel? This case would call for primer and temperature optimization.
The third challenge I was not really able to pin down. Some PCRs seem not to follow the basic formula: at lower template numbers they amplify with lower efficiency than at higher template numbers, which would also bend the standard curve. I'm not really sure this case even exists, but if it does, the standard curve could still be used for quantification even with a wrongly determined amplification efficiency, since it describes the actual conditions in the PCR.
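To make the standard-curve method above concrete, here is a short Python sketch that fits Ct against log10 input by ordinary least squares and derives the efficiency from the slope. The dilution-series data are simulated for an ideal doubling assay, not taken from anyone's experiments.

```python
# Sketch: amplification efficiency from a dilution-series standard curve,
# via ordinary least squares on (log10 input, Ct) pairs. Data are simulated.

import math

def fit_standard_curve(log10_n0, ct):
    """Return (slope, intercept, efficiency) from paired lists."""
    n = len(ct)
    mx = sum(log10_n0) / n
    my = sum(ct) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(log10_n0, ct))
    sxx = sum((x - mx) ** 2 for x in log10_n0)
    slope = sxy / sxx
    intercept = my - slope * mx
    efficiency = 10 ** (-1.0 / slope)  # per-cycle amplification factor
    return slope, intercept, efficiency

# Five-log dilution series of a (simulated) perfectly doubling assay:
logs = [1, 2, 3, 4, 5]
cts = [35 - math.log2(10) * x for x in logs]  # Ct drops log2(10) per decade
slope, intercept, eff = fit_standard_curve(logs, cts)
print(round(slope, 3), round(eff, 3))  # -3.322 2.0
```

With noisy replicate Cts instead of ideal ones, the fitted efficiency scatters around 2.0 exactly as described above, sometimes landing above it.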
i share aimikins's opinion about the efficiency values. efficiency values of standard curves >>100% can appear when you have, for example, inhibitors in your samples. sounds a little contradictory, but less template means less inhibition (because the inhibitor gets diluted along with it), and less inhibition means a boost in PCR efficiency.
here is the reference (it also contains additional interesting information):
http://www.bio-rad.com/pdf/Bulletin_5279B.pdf
"An efficiency close to 100% is the best indicator of a robust, reproducible assay. In practice, you should strive for an amplification efficiency of 90–105%. Low reaction efficiencies may be caused by poor primer design or by suboptimal reaction conditions. Reaction efficiencies >100% may indicate pipetting error in your serial dilutions or coamplification of nonspecific products, such as primer-dimers. When using the method described above to determine amplification efficiency, the presence of inhibitor can also result in an apparent increase in efficiency. This is because samples with the highest concentration of template also have the highest level of inhibitors, which cause a delayed CT, whereas samples with lower template concentrations have lower levels of inhibitors, so the CT is minimally delayed. As a result, the absolute value of the slope decreases and the calculated efficiency appears to increase."
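The inhibitor mechanism in that quote can be illustrated with a toy simulation (all numbers invented): adding a Ct delay that grows with template input flattens the slope, and the apparent efficiency climbs above 2.

```python
# Toy simulation of the inhibitor effect: the most concentrated samples
# carry the most inhibitor and get the largest extra Ct delay, which
# flattens the standard-curve slope and inflates the apparent efficiency.

def fit_slope(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
           sum((x - mx) ** 2 for x in xs)

logs = [1, 2, 3, 4, 5]                     # log10 template input
true_ct = [35 - 3.3219 * x for x in logs]  # ideal doubling assay
delay = [0.3 * x for x in logs]            # inhibitor delay grows with input
observed = [c + d for c, d in zip(true_ct, delay)]

slope = fit_slope(logs, observed)
apparent_eff = 10 ** (-1.0 / slope)
print(round(slope, 3), round(apparent_eff, 3))  # shallower slope, apparent E > 2
```

The underlying chemistry never copied more than one template per cycle; only the slope, and hence the calculated "efficiency", changed.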
Hi Ami, nice to see you back - you're definitely right about that - but I have to insist on what I said. In theory, where efficiency is defined as the proportion of the template copied in one cycle, 100% is the maximum, as you can't possibly make two copies of one template in one cycle. That's the theory. In practice, the value for the efficiency is calculated by a rough approximation, in most cases (but not all) based on the slope...
Nevermind,
Krümel