Welshons, WV, KA Thayer, BM Judy, JA Taylor, EM Curran and FS vom Saal. 2003. Large effects from small exposures. I. Mechanisms for endocrine disrupting chemicals with estrogenic activity. Environmental Health Perspectives 111:994-1006.


This paper examines the soft underbelly of regulatory toxicology: current testing procedures are vulnerable to dramatic underestimation of health risks because of the molecular details of how endocrine disruption of estrogenic signaling actually works. Correcting these errors may require strengthening exposure standards for endocrine-disrupting compounds by factors of 10,000 or more.

They write:

 

"if the mechanistic information concerning hormone action that we review here had been considered, the currently accepted practice of only testing very high doses to predict effects of doses thousands or even millions of times lower would have been recognized to be inappropriate. The result would have been that doses of EDCs, such as methoxychlor and bisphenol A, far below those currently being described as "safe" would, in fact, have been predicted to produce biological responses, and much lower doses would have been tested."

 

Welshons et al. reach this conclusion through a careful review of a series of issues and findings in toxicological research. The argument is not simple, but its implications are important enough that the paper is worth exploring carefully. They concentrate on contaminants that disrupt estrogen signaling, which they call "endocrine-disrupting chemicals with estrogenic activity," or EEDCs. The conclusions, however, are relevant to other chemical signaling systems, including but not limited to other hormones.

Underlying principles

The core of the argument is four-fold:

  1. Natural hormones like 17ß-estradiol work at extremely low concentrations, far beneath the levels at which all hormone receptors would be bound by the hormone. Once all receptors are bound, further increases in the natural hormone cannot increase the response of the system, at least via the interaction of the hormone and its receptor.
  2. EEDCs may be thousands of times less potent than estradiol itself, but they are present at concentrations sufficient to alter the percentage of occupied receptors.
  3. Traditional toxicological testing of EEDCs involves exposures thousands of times higher than the level at which all receptors are bound. At these levels, even large changes in EEDC concentration cannot alter the percentage of bound receptors, and therefore any effect of EEDCs at those levels cannot be due to signaling through estrogen receptors.
  4. Unless tests are explicitly designed to observe EEDC impacts at concentrations where the estrogenic activity of the EEDC is comparable to that of the natural hormone at its normal, extremely low physiological level, it will be impossible to observe EEDC-mediated estrogenic effects. Traditional testing at high doses by definition cannot reveal those effects (a numerical sketch of this point follows the list).
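
To make the receptor-occupancy logic concrete, here is a minimal numerical sketch in Python. The dissociation constants (Kd) and concentrations are placeholder assumptions, not values from Welshons et al.; the point is only the shape of the relationship: at physiological concentrations, occupancy responds to small changes in hormone level, while at traditional toxicological doses a weak-binding EEDC already saturates essentially all receptors.

```python
# Illustrative sketch (not from the paper): fractional receptor occupancy
# under simple one-site binding. The Kd values and concentrations are
# placeholders chosen only to show the shape of the argument.

def occupancy(conc_nM, kd_nM):
    """Fraction of receptors bound at a given free ligand concentration."""
    return conc_nM / (conc_nM + kd_nM)

KD_ESTRADIOL_nM = 0.2      # assumed high-affinity natural hormone
KD_WEAK_EEDC_nM = 2000.0   # assumed ~10,000-fold weaker binder

# Physiological-range estradiol: occupancy is low, and small changes in
# concentration produce proportionally large changes in occupancy.
for c in (0.001, 0.01, 0.1):
    print(f"estradiol {c:>6} nM -> occupancy {occupancy(c, KD_ESTRADIOL_nM):.2%}")

# A weak EEDC at traditional toxicological test doses: occupancy is pinned
# near 100%, so changing the dose changes nothing mediated by the receptor.
for c in (1e5, 1e6, 1e7):
    print(f"EEDC   {c:>10.0f} nM -> occupancy {occupancy(c, KD_WEAK_EEDC_nM):.2%}")
```

Under these assumed numbers, occupancy at the high test doses sits between 98% and 100% no matter how the dose changes, while in the physiological range it swings from well under 1% to over 30%.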

A more detailed exploration of the paper follows.

Welshons et al. begin by distinguishing between three different levels of hormone (or hormone mimic/antagonist).

  • The lowest is the physiological level (graph below). This is the level at which hormones are naturally found in the body. Importantly, what matters is not the total amount of hormone but the small fraction that is free in blood serum: the free hormone concentration. Most of the hormone is bound to plasma proteins or otherwise sequestered and thus unavailable to bind receptors. Free hormone levels are astoundingly low. For example, in mouse and rat fetuses during development, the level of free estradiol in serum is less than 1 picogram per milliliter. That is less than one part per trillion, yet experiments confirm that minor variations in that level alter the course of development (a unit conversion after this list puts that number in perspective).
  • Far above this is the toxicological level. This is the range of doses that cause cell death, mutation, weight loss, etc., and is the level used in standard toxicological testing. It is a high level.
  • And finally, they consider the environmental level. This is the level at which a contaminant is present in the blood. High exposures push environmental levels up into the toxicological range. Lower exposures may or may not place the contaminant's concentration, once corrected for the strength of its receptor affinity, within the range of the natural hormone's physiological level.
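
To get a feel for how low the physiological level is, the free estradiol figure cited in the first bullet (under 1 picogram per milliliter) can be converted to molar units. This is a simple arithmetic check, not a calculation from the paper; the only outside number used is the molecular weight of estradiol, roughly 272.4 g/mol.

```python
# Arithmetic check (not from the paper): convert 1 pg/mL of free estradiol
# into molar units and into parts per trillion by mass.

MW_ESTRADIOL = 272.4      # g/mol, 17beta-estradiol
conc_pg_per_mL = 1.0      # upper bound cited for fetal rodent serum

grams_per_liter = conc_pg_per_mL * 1e-12 * 1000   # pg/mL -> g/L
molar = grams_per_liter / MW_ESTRADIOL            # mol/L
print(f"{conc_pg_per_mL} pg/mL is about {molar * 1e12:.1f} pM")

# Serum has a density of roughly 1 g/mL, so 1 pg of hormone per mL of serum
# is about 1 picogram per gram -- i.e., about one part per trillion by mass.
parts_per_trillion = (conc_pg_per_mL * 1e-12) * 1e12
print(f"roughly {parts_per_trillion:.0f} part per trillion by mass")
```

The result is on the order of a few picomolar, which is the concentration scale against which the "low doses" discussed below should be judged.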

This figure (adapted from Welshons et al.) shows the proliferation response of MCF-7 cells to estradiol over a wide range of exposure levels, from parts per quadrillion on the far left to parts per million on the far right. Response is displayed as a percent of the proliferation seen in the control group. Note that the X-axis is logarithmic.

In this experiment, the "physiological level" for estradiol runs from parts per quadrillion to low parts per trillion. The "toxicological level" is 1,000-10,000 times higher, high parts per billion to parts per million. And in a sequence of experiments, Welshons et al. demonstrate that the response within the physiological level is mediated by estradiol binding with the estrogen receptor, while the toxicological response does not involve the estrogen receptor.

Their analysis rests crucially on the biochemistry of how changes in hormone concentration lead to gene activation (or suppression): the hormone must first bind its receptor, initiating a sequence of events in which the hormone/receptor complex ultimately binds to specific DNA sequences and the target gene is activated or suppressed.

Further, within a given cell there are a finite number of intracellular receptors, and their density varies from tissue to tissue. Up to a point, the more hormone, the greater the number of bound receptors, and the more receptors that are bound, the greater the strength of the signal. But because the number of receptors is finite, the biochemical signal for gene activation cannot increase beyond a maximum, which is reached when all receptors are occupied. This can be seen in the graphs below.
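
This saturation behavior follows from the standard one-site binding relationship of receptor biochemistry; the equation below is a textbook result rather than one quoted from the paper, with [H] the free hormone (or EEDC) concentration and K_d the equilibrium dissociation constant of the hormone/receptor pair:

$$
\text{occupancy} \;=\; \frac{[\mathrm{H}]}{[\mathrm{H}] + K_d}\,,
\qquad
\text{occupancy} \to 1 \quad \text{when } [\mathrm{H}] \gg K_d .
$$

When [H] is many times larger than K_d, occupancy is effectively pinned at 100%, which is the saturation plateau described in the text.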

The concept of receptor occupancy is key: imagine a cell with a given number of estrogen receptors. Receptor occupancy is the fraction of those receptors that are bound, whether by estrogen or by a contaminant capable of binding the receptor, such as DES or bisphenol A.

Welshons et al. argue that regulatory toxicology works at such a high range of EEDC concentrations that, for compounds like bisphenol A, DES, and other estrogenic contaminants, all available receptors must necessarily be occupied at the levels used in traditional experiments. Hence any variation in response with dose cannot be due to greater or lesser receptor occupancy; occupancy-driven variation appears only much lower on the dose-response curve.

The implication is that within the dose ranges used by regulatory toxicology, variations in response caused by variations in dose are not mediated by the estrogen receptor but by some other mechanism. In other words, by accident of design, regulatory testing never tests for receptor-mediated endocrine disruption. Only low-dose testing designed explicitly to probe the range of EEDC concentrations where receptor occupancy can vary is able to observe receptor-mediated responses.

How do they get there?

First, consider the relationship between hormone (or contaminant) concentration, receptor occupancy, and estrogenic response.

At low EEDC doses, not all receptors are occupied. Increases in EEDC lead to increases in receptor occupancy, up to 100% occupancy. Any increase in EEDC above that level cannot lead to a greater estrogenic response mediated by the estrogen receptor, because no more receptors are available to be bound.

(graphed from Welshons et al. Table 2)

 

Thus the more EEDC, the greater the response, up to a point.

It also turns out that cellular mechanisms further strengthen this pattern. They amplify the sensitivity of the system at low levels and suppress sensitivity at high levels. The capacity of the system to respond to increasing levels of hormone (or contaminant) diminishes long before receptor occupancy is complete. Or as Welshons and his coauthors state: "response saturates well before receptor occupancy saturates."

This is evident in the graph to the right, based on theoretical calculations in their first table. The response to estradiol, measured as a percent of maximum cell proliferation, reaches saturation at approximately 50% receptor occupancy.

graphed from Welshons et al. Table 2

An important point to note is the extraordinary sensitivity to extremely low levels of estradiol. While these calculations are based upon theory, they rest upon well established principles of the biochemical interactions between hormones and their receptors.

Welshons et al. emphasize the importance of the fact that the system is much more sensitive to changes in hormone levels when only a small portion of receptors are occupied. In the graph above, it is evident that the greatest sensitivity occurs at receptor occupancy levels beneath 10%, where the rise is roughly linear and sharpest. The curve reaches a plateau not far above 10% occupancy. In fact, according to Welshons et al., over 90% of the proliferative response to estradiol occurs before receptors reach 50% occupancy.
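
A toy calculation can make the "response saturates before occupancy saturates" point concrete. The functional forms and constants below are illustrative assumptions, not the model in Welshons et al.: downstream response is treated as a saturating function of occupancy that half-maximizes at 5% occupancy, which reproduces the qualitative behavior described above.

```python
# Toy model (illustrative only, not the paper's calculation):
# receptor occupancy follows simple one-site binding, and downstream
# response is itself a saturating function of occupancy, so that the
# response tops out long before occupancy does.

def occupancy(conc, kd=0.2):
    """Fraction of receptors bound; kd in the same (arbitrary) units as conc."""
    return conc / (conc + kd)

def response(occ, half_max_occ=0.05):
    """Assumed amplified response: half-maximal at 5% occupancy."""
    return occ / (occ + half_max_occ)

for conc in (0.001, 0.01, 0.1, 1.0, 10.0):
    occ = occupancy(conc)
    print(f"conc {conc:>6}: occupancy {occ:6.1%}  response {response(occ):6.1%}")
```

With these assumed constants, the response is already near 90% of maximum when only about a third of the receptors are occupied, mirroring the pattern the authors describe.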

What does this mean for regulatory toxicology?

The challenges for regulatory toxicology that these observations present are stark. According to the authors, "at the dose ranges of EEDCs used in current toxicity testing, chemicals are likely to be present within target cells many orders of magnitude above their Kd for estrogen receptors. Within this dose range, changes in hormone density cannot have a detectable effect on receptor occupancy, because all receptors are saturated at 100% and no additional binding, which is required to result in an increase in response, can be observed. No primary hormonal effects can be observed in response to changes within this high dose range, only secondary effects not mediated by estrogen receptors."

In other words, countless regulatory tests for safety have been conducted in which there was literally no chance of detecting low level effects mediated by hormone receptors.

Welshons et al. go on to argue forcefully and elegantly that using linear extrapolations from high-dose testing for low-dose risk analysis will lead to significant underestimates of risk. They develop a specific example for bisphenol A and show that a linear model predicting low-level effects from high-level experiments would erroneously conclude that a bisphenol A level of approximately 0.8 parts per billion produces negligible receptor binding and thus no response, when in fact that level falls precisely within the range of bisphenol A concentrations where one should expect maximum sensitivity, because of the receptor occupancy issues outlined above.
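
The sketch below illustrates the general shape of that argument rather than reproducing the paper's bisphenol A calculation: the dissociation constant and the high-dose reference point are placeholder values, and the conversion of 0.8 parts per billion to nanomolar assumes bisphenol A's molecular weight of roughly 228 g/mol. A straight-line extrapolation from a high test dose down to a low environmental dose predicts an essentially zero response, while the binding curve itself puts the low dose in the steep, most sensitive part of the occupancy range.

```python
# Illustrative comparison (placeholder numbers, not the paper's values):
# linear extrapolation from a high test dose vs. the actual binding curve.

KD_nM = 1000.0           # assumed dissociation constant for a weak EEDC
HIGH_TEST_DOSE_nM = 1e6  # assumed high dose used in a traditional test
LOW_DOSE_nM = 3.5        # roughly 0.8 ppb bisphenol A in nM (MW ~228 g/mol)

def occupancy(conc_nM):
    return conc_nM / (conc_nM + KD_nM)

# Linear model: response assumed proportional to dose, anchored at the high dose.
linear_prediction = occupancy(HIGH_TEST_DOSE_nM) * (LOW_DOSE_nM / HIGH_TEST_DOSE_nM)
binding_prediction = occupancy(LOW_DOSE_nM)

print(f"linear extrapolation predicts occupancy ~ {linear_prediction:.6%}")
print(f"binding curve predicts occupancy        ~ {binding_prediction:.3%}")
# With these placeholder numbers, the binding curve gives a value roughly a
# thousand times larger than the linear extrapolation, and it falls in the
# steep region where small changes in concentration produce relatively large
# changes in occupancy.
```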

If one adds the further complication of a non-monotonic dose-response curve (as shown above, for example), caused by a combination of low-level signal disruption and high-level toxicity, the underestimation of risk only becomes more extreme.

In an understated tone, they conclude: "The potential for error inherent in drawing strong positive conclusions [i.e., that the chemical is safe] from purely negative data has clearly not been appreciated by some toxicologists as well as regulators responsible for assessing this information." ... "Responses to low doses of EEDCs should be determined by testing a much wider range of doses than the 50-fold range common in toxicological studies today, including doses in the environmentally relevant range, and by accounting for all sources of estrogenic activity (endogenous and exogenous) and their interactive effects."

An important corollary of this analysis concerns another facet of endocrine disruption. As noted elsewhere, inverted-U or non-monotonic dose-response curves are commonly found in studies of EDCs. A common question asked upon first hearing about these curves is: "Does that mean that some range of higher doses is acceptable, because within that range the response appears no different from that seen in controls?"

The answer is clearly no. At that higher range of doses all receptors are bound by the contaminant, which means the system is prevented from responding appropriately to variations in natural hormone levels.
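
A final sketch, again with placeholder numbers, shows why that higher dose range is not benign: with a high concentration of a receptor-binding contaminant present, the receptor pool stays essentially fully occupied no matter how the natural hormone fluctuates, so the normal signal is swamped. The competitive-binding formula is standard receptor biochemistry; the Kd values are assumptions.

```python
# Illustrative sketch (placeholder Kd values): two ligands competing for the
# same receptor. With a high dose of contaminant present, total occupancy is
# pinned near 100% and no longer tracks natural estradiol fluctuations.

def total_occupancy(estradiol_nM, contaminant_nM, kd_e=0.2, kd_x=2000.0):
    """Fraction of receptors bound by either ligand (competitive binding)."""
    a = estradiol_nM / kd_e + contaminant_nM / kd_x
    return a / (1 + a)

for e in (0.005, 0.01, 0.02):             # natural estradiol varying 4-fold
    alone = total_occupancy(e, 0.0)
    swamped = total_occupancy(e, 1e7)     # contaminant at a high test dose
    print(f"estradiol {e} nM: occupancy {alone:5.1%} alone, {swamped:.2%} with contaminant")
```

With these assumed numbers, a 4-fold swing in natural estradiol moves occupancy from roughly 2% to 9% on its own, but with the contaminant present occupancy sits at essentially 100% regardless, leaving the receptor system no room to register the natural signal.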
