Does % Input tell us much about the actual binding? - (Nov/17/2008 )
Well, this is a somewhat theoretical question, but I think it is also conceptually important.
Suppose someone says the relative enrichment (in terms of % input) is 3% for protein A at chromosome location X,
while another person finds a % input of 0.3% for protein B at chromosome location Y.
Can you say there is more binding of A at X than of B at Y?
I think the answer is no.
% Input seems quite arbitrary, because the value will be smaller if the antibody used has a lower affinity; and even if two laboratories use the same antibody but different ChIP protocols, the lab with the more stringent washing protocol will get a smaller value too (although the non-specific signal in the negative control will also be smaller).
So I don't know how much the % Input value can tell us about the actual binding.
Would fold increase over IgG be a better indicator?
Assuming the IgG (negative control) represents the background signal: if your ChIPed DNA gives a signal close to the IgG, you would say there is no binding; if instead your ChIPed DNA is, say, 50-fold higher than the IgG ChIPed sample, then there is actual binding, irrespective of the % Input (even if the % Input is, say, 0.01%).
Please let me know if I am thinking about this correctly. Thanks.
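For concreteness, here is a minimal sketch (in Python, with made-up Ct values, assuming 100% primer efficiency) of how % input and fold-over-IgG are commonly computed from qPCR data; the function names and numbers are my own illustration, not from any particular protocol.

```python
import math

def percent_input(ct_ip, ct_input, input_fraction=0.01):
    """Percent-input from qPCR Ct values (assumes 100% primer efficiency).

    The input Ct is measured on a saved aliquot representing
    `input_fraction` of the chromatin used in the IP, so it is first
    adjusted to the 100% equivalent before comparing with the IP Ct.
    """
    ct_input_adjusted = ct_input - math.log2(1.0 / input_fraction)
    return 100.0 * 2.0 ** (ct_input_adjusted - ct_ip)

def fold_over_igg(pct_input_ip, pct_input_igg):
    """Fold enrichment of the specific IP over the IgG mock IP."""
    return pct_input_ip / pct_input_igg

# Hypothetical Ct values, for illustration only
pct_target = percent_input(ct_ip=28.0, ct_input=22.0)
pct_igg = percent_input(ct_ip=34.0, ct_input=22.0)
print(pct_target, pct_igg, fold_over_igg(pct_target, pct_igg))
```

With these made-up numbers the target comes out to only about 0.016% input, yet 64-fold over IgG, which is exactly the kind of situation described in the post.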
Hi,
I'm by no means an expert at ChIP, but this is how I understand it.
First of all, ChIP people don't always use the same amount of input to calculate enrichment; e.g. some people use 10% input, some use 1%, etc. So before comparing two enrichment values, first check whether they even used the same amount of input for normalization.
Second, I think it is always a bit unreliable to compare ChIPs done in different experiments, since the condition of the cells can vary, and, as you mentioned, the antibody can bind differently and the protocol may differ. But if you perform the two in parallel, e.g. you look at protein A on X at the same time as protein B on Y in the SAME ChIP experiment (same cells, same person, same protocol, different antibody only), then I think the enrichments are still comparable, as long as your IgGs are also reasonable. I don't see how fold increase over IgG is a better indicator, because differences in antibody binding efficiency will not be normalized away just by comparing to IgG instead of input (although normalizing to IgG instead of input can definitely account for wash stringency).
Maybe if you wanted to compare the two more fairly, you could first work out how the binding efficiencies of the two antibodies compare, then adjust the ChIP pulldown enrichment by that efficiency; e.g. if Ab1 binds 2x more efficiently than Ab2 to its epitope, simply divide the Ab1 enrichment value by 2 before comparing. Perhaps if you had a known amount of each antibody's target (say, 1000 units each of protein 1 and protein 2), you could do an IP and see how many units are pulled down. If Ab1 pulls down 500 units while Ab2 pulls down only 250, you know Ab1 binds 2x more efficiently than Ab2.
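The efficiency adjustment described above could be sketched like this (the function names and numbers are purely illustrative, assuming the kind of calibration IP just described):

```python
def ip_efficiency(units_pulled_down, units_total):
    """Fraction of available target pulled down in a calibration IP."""
    return units_pulled_down / units_total

def adjusted_enrichment(enrichment, own_efficiency, other_efficiency):
    """Rescale one antibody's ChIP enrichment onto the other
    antibody's efficiency scale so the two can be compared."""
    return enrichment * (other_efficiency / own_efficiency)

eff_ab1 = ip_efficiency(500, 1000)   # Ab1 pulls down 500 of 1000 units
eff_ab2 = ip_efficiency(250, 1000)   # Ab2 pulls down 250 of 1000 units
# Ab1 is 2x more efficient, so its 3% enrichment is halved before
# comparing with Ab2, exactly as in the worked example above.
print(adjusted_enrichment(3.0, eff_ab1, eff_ab2))
```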
Anyway, these are just some of my thoughts. I hope they are useful.
Angela
I don't think there is any way to compare the level of binding of different proteins using % of input or fold over IgG (mock). In either case, the level of enrichment depends not only on how much of the protein is bound, but also on the efficiency of crosslinking the protein to the complex containing the DNA of interest, and on the affinity of the antibody for the crosslinked protein.
First of all, ChIP people don't always use the same amount of input to calculate enrichment; e.g. some people use 10% input, some use 1%, etc. So before comparing two enrichment values, first check whether they even used the same amount of input for normalization.
This is an important point. Using different amounts of input chromatin can make a big difference to the calculated enrichment. Different lots of antibody can also have different concentrations, and thus different pulldown efficiencies (in one particular case, for an antibody with the same catalog number, one lot was affinity purified while a later lot was not... with NO warning). Any differences in crosslinking or shearing procedures also make a big difference. Trying to compare ChIPs even within the same lab can be quite difficult.
Thanks, you are all really nice and helpful, and I agree with all of you!
I guess my main concern is that sometimes you do a ChIP, using antibody X against protein A, and get a small % Input, say 0.02%.
Then you ask yourself: did I do a good ChIP, or is it rubbish?
But you compare the sample with the IgG, see that the IgG gives a % Input of 0.0002%, and decide you are probably okay...
Then you go to PubMed or Abcam and see that other people used the same antibody and got a much bigger % Input (say 2%, which is 100 times what you got).
Then... you are totally lost. Maybe they used a different protocol, so the crosslinking is different, the washing is different, the amount of input used to calculate the % is different, and of course the chromosome region they studied is different too.
I just find it frustrating not being able to use % Input as an indicator of whether my data is real.
Are there any suggestions?
There are a number of things you can do to tell whether your data is real. Knockdown of your protein of interest is always a nice (but expensive and often impractical) control. Also, if you are looking at a modification of a particular protein, you can use small-molecule inhibitors of the enzyme responsible for the modification.
However, the easiest way to gain confidence in your ChIP data is to compare the ChIP signal at positive and negative control regions: a positive control region is one where your protein of interest is known to bind, and a negative control region is one where it does not bind or is highly unlikely to bind. If you see a much higher signal at the positive control region than at the negative one, you can have more confidence in your data.
I agree as well; comparing ChIP data between different labs is difficult, at least in terms of 'absolute' values. Real-time PCR-derived quantification is certainly a highly valuable tool (compared with semi-quantitative PCR), though one needs to be very careful not to be fooled into taking these values as absolute, especially given the myriad ways to 'normalize' (= distort?) the data. It would be very helpful if there were a convention for presenting ChIP data to which labs adhered...
With regard to the above discussion about normalizing ChIP data: I have always been a little wary / uncertain about the 'normalizing to IgG' methods... The IgG control is certainly an important one to include in ChIP experiments, but when it comes to normalizing your specific antibodies against it, I would rather use it in a subtractive way (IP[specific Ab] - IP[IgG]) instead of forming the ratio to calculate 'fold enrichment over IgG'. My reasoning is that such a ratio is a mathematical formula, and these formulas are typically derived for the 'ideal situation' of a theoretical model. In an 'ideal' situation, your background (IgG) would have the value 0 (...or would it?), which would make any formula of the form '{value}/0' mathematically undefined.
I'm interested in hearing your opinions,
Cheers,
Jan
Hi Jan,
A few weeks ago, when I analysed my data, I did exactly what you mention about the subtraction, i.e.:
% Input of target - % Input of IgG
So if the % Input of the target is 2.78% and that of the IgG is 0.03%, the adjusted % Input will be 2.75% (as if 0.03% of the signal were due to non-specific binding, as in the IgG sample).
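As a trivial sketch of that subtraction (the function name is my own; the numbers are the ones given above):

```python
def background_subtracted(pct_input_target, pct_input_igg):
    """Treat the IgG (mock) percent-input as non-specific background
    and subtract it from the target antibody's percent-input."""
    return pct_input_target - pct_input_igg

# 2.78% target minus 0.03% IgG background, as in the example above
print(background_subtracted(2.78, 0.03))
```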
By the way, just to show how the protocol can affect % Input, here are some results using the same antibody and running real-time PCR on the same region:
Protocol 1: before adding the antibody, dilute the lysate 5-fold with dilution buffer, such that the final SDS concentration is 0.2%.
Protocol 2: before adding the antibody, dilute the lysate 10-fold with dilution buffer instead, such that the final SDS concentration is 0.1%.
This one-step difference alone produces very different % Input values:
With protocol 1:
% Input of Sample A, Target: 0.03%
% Input of Sample B, Target: 0.01%
% Input of Sample A, IgG: 0.0002%
% Input of Sample B, IgG: 0.0003%
(Basically the Ct values are so large that, although there is a difference, I think it is hardly reliable.)
With protocol 2:
% Input of Sample A, Target: 2.97%
% Input of Sample B, Target: 1.35%
% Input of Sample A, IgG: 0.01%
% Input of Sample B, IgG: 0.02%
So the trend is there with both methods, but with the less stringent condition the % Input is very different, and I assume that different labs using different protocols, on different genes and with different antibodies, will generate very different data that is hard to compare.
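Just to tabulate the numbers above in one place, here is a short script (the dictionary labels are my own) that also computes fold-over-IgG for each sample. Note that the A > B trend holds under both protocols, but neither the % Input nor the fold enrichment is stable across them:

```python
# Percent-input values reported above for the two protocols:
# sample -> (target % input, IgG % input)
protocols = {
    "protocol 1 (0.2% SDS)": {"A": (0.03, 0.0002), "B": (0.01, 0.0003)},
    "protocol 2 (0.1% SDS)": {"A": (2.97, 0.01),   "B": (1.35, 0.02)},
}

folds = {}
for protocol, samples in protocols.items():
    for sample, (target_pct, igg_pct) in samples.items():
        folds[(protocol, sample)] = target_pct / igg_pct
        print(f"{protocol}, sample {sample}: "
              f"{target_pct}% input, {target_pct / igg_pct:.0f}-fold over IgG")
```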
Add to that the problem of different lots of an antibody sold under the same catalog number (a problem with many companies). I've had ChIP work very well with one lot and not at all with another. I've even run the two lots side by side to make sure no other factor was the problem, and it was obvious that the difference between lots was the cause.
This usually happens when the company has to switch to serum from a different rabbit (or whatever animal was immunized).
We are vigilant about writing down the lot number for each experiment. You can imagine how nerve-wracking it would be to have several labs unable to duplicate your findings because they are probably using the wrong lot of an antibody, and you can't prove it because you forgot to write down which lot you used.
That's really bad.
I hope no one ever has a paper retracted, or gets accused of data fabrication, just because someone else couldn't reproduce the findings using a bad lot of antibody...