Cellphones and radiation: A review of troubling studies

Will we one day regret our current cavalier attitude toward cellphone radiation — just as our grandparents and great-grandparents came to regret their nonchalance toward such now-known carcinogens as tobacco and asbestos?

That’s the question Minnesota health writer (and occasional Second Opinion contributor) Paul Scott raises in his troubling article on cellphones and brain cancer that appears in the current issue of Men’s Health.

Here’s an excerpt:

The American Cancer Society, the National Cancer Institute, the U.S. Food and Drug Administration, and the World Health Organization all regard the radio waves emitted from cellphones as safe. But another growing body of experts believes cellphone use can promote tumors, and momentum has been shifting to their side. A researcher in Sweden, for instance, recently reported that people who started using cellphones before the age of 20… have four to five times the odds of developing one type of brain tumor. An unpublished (and therefore not peer-reviewed) analysis by researchers at the University of Pittsburgh Cancer Institute shows an increase in brain tumors among Americans in the under-30 age group.

And according to new research, studies showing that cellphones are safe tend to be (a) less rigorously designed and (b) funded by the cellphone industry, while studies showing that cellphones carry risks are (a) produced with better science and (b) have no financial conflicts of interest.
And if the slow spread of distress within the halls of government means anything, the topic no longer causes eye-rolling among lawmakers. The National Institutes of Health (NIH), for example, has recently authorized a $25 million study to analyze rats that have been bathed in cellphone radiation for a period of 2 years. Both houses of Congress have held hearings on the issue. And in Maine, legislation may soon require warning labels on cellphones sold in that state.
The cellphone industry has responded with studies, mind you—ones that exonerate the technology, including a new study showing that tumor rates are steady in Scandinavia, where cellphones were adopted early. But if you dig deep, those findings aren’t as reassuring as you might hope. For one thing, they tend to limit their good news to people who’ve been using cellphones for less than 10 years.

The International Agency for Research on Cancer commissioned a large 13-nation study (known as Interphone) in 1998 that was supposed to help answer the question of whether people who use cellphones are more likely to develop meningioma, glioma, acoustic neuroma or parotid gland tumors than those who use (soon-to-be-extinct?) corded phones.
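
Interphone is a case-control design: it compares how often brain-tumor patients (the cases) report cellphone use with how often similar, tumor-free people (the controls) do, and boils that comparison down to an odds ratio. The short Python sketch below uses made-up counts (not Interphone's actual data) simply to show the arithmetic; real analyses involve matching and adjustment for other exposures, but the basic quantity is this simple. An odds ratio above 1 points toward increased risk, while one below 1 would read as a "protective" effect.

    # A rough sketch of the case-control arithmetic behind a study like Interphone.
    # The counts are hypothetical, chosen only to illustrate how an odds ratio works.
    exposed_cases = 300        # tumor patients who reported regular cellphone use
    unexposed_cases = 200      # tumor patients who reported no regular use
    exposed_controls = 250     # matched tumor-free people who reported regular use
    unexposed_controls = 250   # matched tumor-free people who reported no regular use

    # Odds of exposure among cases divided by odds of exposure among controls.
    odds_ratio = (exposed_cases / unexposed_cases) / (exposed_controls / unexposed_controls)

    print(f"odds ratio = {odds_ratio:.2f}")  # here 1.50; above 1 suggests risk, below 1 reads as "protective"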

But, as Scott points out, nobody knows what that study found:

While partial results have been published, the report’s final conclusions are in limbo 4 years after its completion. Press accounts have asserted that the coauthors are bitterly divided over what the study found. Published sections have reported no connection between cellphones and cancer, but most of the patients studied used their cellphones for less than 10 years. That matters, because brain tumors could take decades to develop, and widespread cellphone use in the United States began only in the mid-1990s.

“It took 40 years for brain tumors to show up after Hiroshima,” says Devra Davis, Ph.D., M.P.H., founding director of the center for environmental oncology at the University of Pittsburgh Cancer Institute (UPCI). “How can you expect to see effects from cellphones in 10?”
Studies that look at cellphone use for more than 10 years are less comforting. According to a 2002 study of more than 1,400 brain tumor patients by Swedish cancer epidemiologist Lennart Hardell, M.D., Ph.D., as well as a review by Dr. Hardell of data from other researchers’ studies, regular use of a cellphone for longer than 10 years increases your risk of some types of brain tumors.

(That’s the bad news for adults. The harm from wireless radiation to children and teenagers may be even worse, reports Scott.)

While researchers squabble over the science, you can take steps to lower the amount of radiofrequency energy bombarding your brain, says Scott. Use earbuds or a headset. Hold your phone as far as possible from your head. Avoid talking on the phone when the battery charge is low (only one or two bars), a situation that requires the phone to boost its radiofrequency output.

Or, as Scott also recommends, take the truly revolutionary step of talking less.


Comments (8)

  1. Submitted by Richard Rowan on 04/23/2010 - 05:33 pm.

    Regarding this recommendation: “Avoid talking on the phone when the battery charge is low (only one or two bars).”

    Do you mean the signal bars? It seems the phone would need to boost its output when the signal is weak, not when the battery is weak.

  2. Submitted by Paul Udstrand on 04/24/2010 - 10:10 am.

    Never trust an article that uses the phrase “studies show.” This is how things go horribly wrong when scientific studies are reported in the popular media. You can’t necessarily trust an author’s interpretation: did they read the whole study or just the abstract and summary? Is the author knowledgeable enough about methodology to discern validity? And of course, not all studies actually demonstrate what the authors claim they demonstrate.

    I think if an author really wanted to write something useful about this alleged controversy, they would not have written this article. Instead of squaring off the two sides and comparing what the “studies say,” they would pick two representative studies and compare them in detail. If you’re really familiar with the field you can do this. Such a comparison actually illustrates the methodological strengths and weaknesses of each side rather than alluding to them.

    Unpublished, non-peer-reviewed studies should never be referenced, regardless of sponsorship, unless they’ve been accepted for publication and the results have been publicly presented to a conference of peers.

    Cancer epidemiology is notoriously difficult and complex. Generalizing from laboratory results to field results can be extremely problematic. Attempts to link some environmental insult to some cancer are hit and miss. Researchers have been studying cancer clusters all over the world for decades, trying to link them to everything from high-energy power lines to something or other in the soil. Again, it’s been hit and miss.

    The problem with the “ten year” complaint is that the older a person gets, the more likely they are to develop cancer… period. The problem is you’re trying to say that a 40-year-old who develops brain cancer wouldn’t have developed it if they hadn’t been using a cell phone. “You don’t know” means you don’t know: you can’t say “I don’t know if this population will develop cancer 40 years from now, therefore I know this population will develop cancer 40 years from now.” And cell phones aren’t the only new technology that’s been introduced in the last 20 years; we’re surrounded by transceivers, wi-fi, GPS, laptops. There’s all kinds of new ubiquitous technology to be found in our environment that emerged around the same time cell phones were introduced. What about the cordless phones everyone has in their houses, for instance?

    Then there are mundane issues, like what you actually mean by cell phone “use.” I think as many or more people are texting and twittering now as are actually talking; if that trend continues, how do you factor that into your exposure model?

    I’m not saying I don’t believe cell phones could be dangerous, but I’d be curious to know what these researchers are trying to explain. Is there actually a spike of some kind in certain cancer frequencies in certain populations that led them to look at cell phones, or did they just decide to try to find out if cell phones can cause cancer? If the latter is the case, this all may be an experimental artifact, an answer in search of a question.

    I wish the article had examined some of these issues instead of trying to summarize the body of work in general terms.

  3. Submitted by Paul Scott on 04/24/2010 - 05:57 pm.

    Paul, your instincts are reasonable, but you generalize about my reporting in a way that is far less informed than you accuse my reporting of being. For instance, you seem to know the ins and outs of evaluating a study, but you offer no examples of any claims in the article that do not pass your test. I travelled to the UPCI, the only center for environmental oncology in the US, to write this, interviewed the chief epidemiologist for the ACS, spent hours on the phone with Lloyd Morgan and dug out a presentation offered in Davos — the study that has not received peer approval that you disparage. It may not have occurred to you, but the study has not received peer approval because the bar is seemingly being set higher for this finding than it has been set for the finding that brain tumors have not been rising. The authors are NIH researchers who never see their work put on the slow track like this.

    The notion of comparing two representative studies in detail seems quixotic, to say the least. How do you choose a “representative study”? The methodological weaknesses of the papers finding cell phones to be safe have been charted out in detail in the meta-analysis referenced in my article — Joel Moskowitz et al., Berkeley, 2009 — but that is not the sort of data one can spend a great length of time on for a consumer magazine. The questions you raise about isolating cell phone exposure have been thought out with far more precision than you have argued — the effects of cordless phones are one of the factors that differentiate the good studies from the bad, incidentally (and they favor the notion that cell phones raise risk). If you would like to know what the most thorough researchers are doing, please Google Lennart Hardell.

    But to answer your last question, they are not responding to a spike; it is simply too soon for a rare illness — but they are building on concerns that originated with leukemia associations with power lines, etc. For a longer look at this issue, please see the latest issue of Harper’s.

  4. Submitted by Paul Udstrand on 04/25/2010 - 12:29 pm.

    Mr. Scott,

    I’m not attacking you personally; I’m just critiquing your article. I appreciate that you put a lot of time and work into this; I just think it could be better. Correct me if I’m wrong, but it looks like you confirm that you haven’t actually read the published studies, but have interviewed researchers who discussed them? You don’t have to be a scientist to write about science, but this is how things go wrong in science reporting.

    // For instance, you seem to know the ins and outs of evaluating a study, but you offer no examples of any claims in the article that do not pass your test.

    Your article doesn’t contain specific methodological descriptions, only vague references to populations and statistics. There is no methodology to examine; that’s the core of my complaint. I do comment on the idea that one can or cannot conclude something based on the lack of cancer occurrence within or after ten years, and I question the definition of cell phone use.

    //the study that has not received peer approval that you disparage. It may not have occurred to you but the study has not received peer approval because the bar is seemingly being set higher

    You’re right, this never occurred to me, because this is not how science or peer review works; there is no “bar” that is raised or lowered according to scientific difficulty. You get your funding, you do your research, and you submit for peer-reviewed publication or conference presentation. I’m guessing there are at least two dozen oncology journals that a researcher could submit to, and they all have different editors and reviewers. In a small field there may be a relatively small number of possible reviewers, but cancer research is not a small field. I don’t know if this “bar” idea is yours (in which case you are betraying some degree of ignorance) or one of your sources’ (in which case you may want to get a different source), but peer review is peer review: you either get published or you don’t. Your study either meets basic criteria or it doesn’t. The whole point of using peers is that they know how difficult the research is, so this complaint/explanation makes no sense. Sometimes politics enters into it, but before you go there you need to look at the study itself. My experience is in psychology; back in the ’80s a group of researchers complained that they couldn’t get their studies on dissociation published. In truth, the reason these guys couldn’t get published was that their studies were crap. They solved this problem by publishing their own journal, The Journal of Dissociation, which is long since defunct. The reason it’s a bad idea to reference not-yet or non-peer-reviewed research is that it may contain serious methodological errors. Peer review isn’t perfect, but it’s all we’ve got. So either you as the author need to be able to critique the methodology, or you have to wait for the peer review.

    //The notion of comparing two representative studies in detail seems quixotic,

    There’s nothing quixotic about this at all; it’s done all the time in the journals. It’s a basic literature review. Most of the studies you’re looking at are replications or modifications of previous studies, so it’s relatively simple to compare methods and select a couple of good studies. I guarantee you that your main source could recommend a couple of studies to illustrate the research trends. If they can’t, you need a new main source.

    Meta-statistics is a tricky mistress; play with her at your own risk.

    //But to answer your last question, they are not responding to a spike, it is simply too soon for a rare illness –

    See, this is just a logic thing. Either you’re investigating an observed phenomenon or you’re not. If you’re not, you have a serious problem confirming your observations, no matter what they may be. I imagine this all got started when the cell phone industry tried to show that cell phones weren’t dangerous by producing studies claiming cell phone safety. Now this is a problem, because basically all you can say is that they have not yet caused cancer increases; logically you cannot claim they never will, because you can’t prove a negative. On the other hand, you can’t claim they will cause cancer in the future unless you have observed the cause-and-effect relationship, which no one has. This idea that it’s too soon to see the cancer assumes that the cause has been verified (it hasn’t) and that any cancer that is observed will not be attributable to any other cause. You can see the problem here. No amount of statistical expertise can create phenomena that don’t exist in the real world. That’s why it’s called an experimental artifact. The best you can say at this point is that no one knows for sure if these cell phones will cause tumors, and that they haven’t yet. I know this much: cell phone radiation is a lot different from nuclear radiation.

  5. Submitted by Paul Scott on 04/25/2010 - 03:21 pm.

    Paul, this is not a good use of a Sunday, but you are way, way off.
    I will try one more time to clear up your confusion. Are you seriously asking if I have read a study that I have written about? I have had authors take me through their tables and I have read the strange little things that are wrong with them, which, incidentally, I wrote about at length in my piece, which I am starting to question whether you actually read.

    For instance,

    // Your article doesn’t contain specific methodological descriptions, only vague references to populations and statistics. There is no methodology to examine, that’s the core of my complaint.

    You must have missed this:

    “The thirteen nation Interphone project asked more than 6,000 patients with brain tumors about their cellphone use, then compared their answers with those of a matched group with no brain cancer.”

    or this:

    “in 1996 Gandhi created models of the smaller, thinner skulls of children ages 5 and 10…the cellphone radiation that hits an adult brain with 72 mW/kg of wireless radiation [subjects] a 10-year-old’s brain to 160 mW/kg [and] a 5-year-old’s brain to nearly 240 mW/kg.”

    Did I report that a researcher used this or that form of statistical analysis, or controlled for the fact that the pinnae in dummy ears push the phone farther from the head or change its angle of projection, or controlled for cordless phone use, or used questionnaire X, or conducted their review between 6 and 12 weeks after brain surgery — whether they conducted a case-control or population study — no, you win there. And how I wish my job were that simple: to just regurgitate the methods, results and discussion sections of a given paper, or, as you describe, two “representative papers.” I wish that I could rest at night after simply looking very closely at two studies and putting a topic away. I wish that there were an editor of a mainstream publication who would publish an article like that, because it would make my job a lot easier.

    Instead I have to synthesize broad categories of research, then talk to enough people to find out that, say, the industry-funded studies described someone as a “regular user” if they used a cellphone once a week (which is what Interphone does), or, as another large study exonerating the technology did, dropped 200,000 business users from its final analysis. This article was one of the first in the country in this sort of publication to highlight the fact that the Interphone study produced a protective effect — that it found that cell phones protect you from cancer, which is biologically implausible by the authors’ own admission. It was one of the first in the country to describe at least six different ways that nonionizing radiation could bring about the indirect damage of DNA, citing research. It described the molecular steps involved in the Fenton reaction, which I doubt you were already familiar with. I discussed the odds ratio problems with Interphone. The odds ratio. So you can see how I might not understand how it is that you ask if I read any studies.

    I do believe we differ on the sanctity of peer review. I have been writing about science for fifteen years, am married to a scientist, and live in a town full of them — I have talked with them about the stuff that goes on when they serve as peer reviewers — and the one thing I can tell you about peer review is that though it is the best thing we have, there is little doubt that it can reflect politics regarding ideas that challenge a given paradigm, and ideas that are nowhere near as crackpot as dissociation. You clearly have not evaluated the credentials of my sources — Ron Herberman ran the National Cancer Institute, Devra Davis has testified before the US Senate — or you wouldn’t be throwing around those sorts of comparisons.

    As for your problems with the question of whether cellphones can cause brain cancer, I will just state that the question does not seem as ephemeral to the NIH, which just authorized a $25 million study of the issue, or to governments throughout Europe and in Canada, Israel and Australia, which have issued safety advisories. I am not taking your criticism personally, but it is disappointing to be on the receiving end of a complaint that seems so far removed from the subject under review.

  6. Submitted by Paul Udstrand on 04/26/2010 - 09:29 am.

    Well Mr. Scott, I guess you told me!

    First of all, I want to thank you for taking the time respond, I don’t think it’s a waste of time.

    Yes, I’m asking if you read all the studies, ALL the studies you report on in the article, the complete studies, not just the abstracts and summaries. Don’t be so indignant; I’m not accusing you of not reading them, I’m just asking. You know as well as I do that not all journalists read the studies they report on, so the question is not out of bounds. I pressed the question because in your initial response you emphasized who you’d talked to and how far you’d traveled instead of what you’d read. My point was you can’t assume an author has actually read the studies in addition to conducting interviews; you know that’s true.

    I didn’t miss the Interphone or Gandhi stuff; it’s not in the MinnPost article, so you can’t blame me for that.

    The Interphone study (6,000 subjects) would be an interesting study to critique. The critical questions are: how was the “matching” done? Why did they do a comparison instead of a control-group study? For instance, one can imagine you might run into some sample bias finding subjects who don’t use cell phones; how was that controlled for, and was the matched group really as big as the patient group? They interviewed 12,000 subjects? And why was the sample so large? You don’t need a sample anywhere near that large statistically; people assume the larger the sample the better, but quality is better than quantity. How thorough were those interviews? How do you keep that much data clean? If you need such a huge sample to get a significant result, that may indicate a problem with the model. A sample size that large can actually junk your data; again, play with meta-statistics at your own peril. Gandhi may have models, but according to you what they don’t have are people with brain tumors… because it’s too soon… maybe.

    Oh, and the odds ratio: my wife is an epidemiologist. I call her the Wizard of Odds; it hasn’t caught on yet.

    Anyways, the $25 million thing: just because someone gets funding doesn’t mean the research is likely to produce one result or another, nor does it guarantee legitimacy. The important thing to notice is that no one’s doing this research for free, and regardless of funding source, researchers are invested in their results. On the flip side, failure to get funding doesn’t necessarily prove anything either. At any rate, funding in and of itself doesn’t tell you anything. Which brings me back to… you guessed it: peer review. The only way to really evaluate research is to have people who know what they’re doing critique it.

    Hey, if you and your wife have a better way of vetting research, let’s hear it; THAT’S a real story! I’ve already said it’s not perfect; a lot of junk science gets published (and funded) every year. But dude, don’t you realize that your article is basically a report on the peer review process? That’s what Gandhi and Interphone are doing! They’re reviewing and critiquing the industry-sponsored research. THAT’S peer review. Peer review isn’t just a hoop you jump through to get published. It’s the whole framework of transparent methodology that allows other scientists to evaluate the work regardless of funding or politics. And you’re saying what? You don’t believe in it? That’s why you have to be so careful citing research that’s done outside that model. No peer review means no one who knows what they’re doing has evaluated the research. To the average Joe on the street a sample size of 6,000 sounds impressive, but the first question anyone who knows anything about stats asks is: “Why such a huge sample size, and how did you keep that much data clean?” Now there may be a good answer, but peer review elicits that answer; without it, no one even asks. Did you ask? What was the answer? This is the problem with referencing un-reviewed, unpublished work. It’s not that the work is necessarily crap, but you don’t know what you don’t know.

    Anyways, thanks for the additional details and your responses; I don’t think this is a complete waste of time.

  7. Submitted by Paul Scott on 04/26/2010 - 12:27 pm.

    Paul, Interphone was actually a dozen smaller studies. They have a problem with their controls: too many of them used cell phones and cordless phones, and that skewed toward a protective effect. It is all in my article, which you can find through the link provided. I have nothing against peer review, and said as much. It’s the best we’ve got. But I do not think it is immune from political forces. I believe you stated that yourself. Thanks for your interest. I don’t believe this has been a waste of time either.

  8. Submitted by Lloyd Burrell on 05/14/2010 - 12:03 pm.

    Susan asks the question “Will we one day regret our current cavalier attitude toward cellphone radiation?”
    My answer is a resounding yes.
    I became very ill from cell phones 8 years ago and nearly lost my job; my health went up the spout: blood pressure problems, massive fatigue, weight loss, and I just couldn’t think straight when I got near a cell phone in use.
    Susan also makes a very good point as to the validity of the studies that have been carried out to date; again, I wholeheartedly agree with her.
    I might be sticking my neck out here, but in my mind there is no doubt as to whether the published findings of the Interphone study, due out in 4 days, are going to be representative of the truth or will be censored because of potential damage to the cell phone industry. I just know the Interphone study is going to be another big cover-up. Read more here: http://electricsense.com/447/the-interphone-study-disclosure-or-deception
