Are EMFs bad? Assessing the evidence, Part I: RF-EMF
- Introduction
- Section I: How do we know if something causes harm?
- Section II: RF-EMF studies
- Section III: Returning to the skeptical arguments
- Section IV: Bradford Hill assessment
- Section V: Beyond carcinogenicity
- Section VI: A wrench, and an odd possibility
- In conclusion
Introduction
The discourse around EMFs (electromagnetic fields) and whether they are harmful to humans is a mess, tainted by poorly-communicated conclusions, bad studies, and more.
And to be fair, it's a tough thing to pin down. We're talking about an exposure that:
- went from near-zero for most people to near-constant in the space of just a couple decades for some types of EMF
- where many of the theorized biological effects might only appear decades after exposure begins
- and the general term for which, "EMFs," refers to the entire electromagnetic spectrum, an enormously varied exposure category
- where it's nearly impossible to run controlled human trials that keep participants totally unexposed for long periods of time, even though many of the theorized effects might only appear with chronic use
- plus the details of our exposures have varied massively over the last couple decades as technology has advanced
- and the increase in exposure is tightly coupled to devices and quality-of-life improvements that people most certainly don't want to give up
- plus, the commercialization of these technologies has been an absolute goldmine for industry, which most certainly doesn't want them to be perceived as harmful
If you're not familiar with what EMFs are from a conceptual / scientific perspective, I'd recommend reading my earlier post, "A rational, skeptical, curious person's guide to EMFs". Ideas from that piece will be referenced throughout.
This post series is meant to be a rigorous breakdown of the current state of scientific understanding around the health effects of EMFs.
Or, at least, it's meant to represent my understanding of the current state of scientific understanding. Please do reach out with any thoughts and especially any corrections. I am not a professional scientist. But I've tried to reason about this from first principles and examine the evidence honestly and openly.
It is long. I'm sorry. But I'm trying to do right by what is a very complex topic, while also making it a bit more readable than most journal articles.
(Although if you want to read it in a printable format that looks more journal-y and less blog-y, here's a link.)
So, what's the question?
Asking "are EMFs bad for you?" does not lead to well-bounded, falsifiable conclusions. We need to define what we're even talking about: what is the specific, testable question we are looking to answer? That framing is often lost in the noise of EMF debates.
First: the electromagnetic spectrum is wide. Which parts of it are we talking about?

There is no dispute that higher-frequency EMFs like x-rays and gamma rays can cause harm: they do so by directly breaking chemical bonds, which is why this class of EMFs is called "ionizing radiation." But the ionization capacity of EMFs stops at a wavelength of roughly 120 nm, at the extreme end of the UV part of the spectrum. To the left of that (longer wavelengths), there is no ionization potential.

Moving further down the frequency range from x-rays, there's also little-to-no dispute that visible light can have biological impacts, including negative ones if used "improperly" (think of blue light disrupting circadian rhythm).
There's also no dispute that even lower-frequency EMFs can be harmful if they are at high enough power density to cause "thermal effects." Think about a microwave oven (roughly the same type of EMF as Wi-Fi signals, albeit at much higher power): obviously, if you put yourself in a microwave, the result wouldn't be pretty. That's because your tissue would actually heat up, which can clearly be damaging.
These undisputed "thermal effects" are the basis for much of the US regulatory regime (the history of which I wrote about here). We are generally protected from these thermal effects by regulation.
So really, what weâre talking about is whether non-ionizing EMFs at power levels too low to cause thermal effects can have deleterious impact on humans.
The place to focus is on the two most controversial and debated common types of exposure:
- Radiofrequency EMF (RF-EMF): the type of radiation associated with modern wireless communications like WiFi, cell service, and Bluetooth. Frequencies typically range from about 700 MHz to 6 GHz, with 5G extending higher.
- Extremely Low Frequency Magnetic Fields (ELF-MF): a type of radiation associated with power lines, transformers, and electrical wiring. The core frequency is typically 50-60 Hz.
And within these, what is most of interest is exposure at typical levels and durations.
In Part I of this post, I will focus only on RF-EMF. Part II will cover ELF-MF.
We also must consider who it is that we're talking about. An exposure doesn't need to negatively impact everyone for it to be considered "bad." If we were to find out that typical levels of RF-EMF were negatively impacting babies, or immunocompromised people, we'd want to know that, even if the exposure was fine for the rest of the population.
And so perhaps the precise question to answer is something like: is there reason to believe that RF-EMF, at exposure levels and durations typical in modern life, may cause increased risk of adverse health effects, at least for sensitive members of the population?
Why you probably think this is nonsense
If you're like most educated people, your prior on "EMFs are harmful" is pretty low. Mine was too. Here's what my arguments were before I got into the literature:
- The physics argument: Non-ionizing radiation doesn't have enough energy to break chemical bonds. It can't damage DNA the way X-rays do. Basic physics says it shouldn't matter.
- The authority argument: The FDA, WHO, and every major health agency says current exposures are safe. These are serious institutions with public health mandates.
- The mixed results argument: Some studies find effects, others don't. If EMFs really caused harm, wouldn't the evidence be more consistent? The positive findings must be cherry-picked.
- The vibes argument: The people worried about EMFs seem to overlap with extremely-woo, low-rigor crowds. Smart, serious people don't worry about this stuff.
- The population argument: Cell phone use went from zero to near-total penetration in a couple decades. If it caused cancer, we'd see a massive spike in brain tumors.
Each of these arguments has something to it, and should be factored into our view of the corpus of evidence. However, to spoil my conclusion: I've come to believe that none is a silver bullet that totally proves the case against EMF harm. They should be weighed, but they do not end the discussion.
Bottom line up front
Before I send you down a long (~25,000 word?) rabbit hole, I'll share my bottom-line conclusions on the basis of the evidence. Here they are:
The weight of scientific evidence supports that there is reason to believe that RF-EMF, at exposure levels typical in modern life, may increase the risk of adverse health effects.
But also: there is immense uncertainty, and it is not clear how applicable historical studies are to our modern exposure levels, given the constantly changing characteristics of the exposure profile (this could cut either way, positive or negative).
We are effectively running an uncontrolled experiment on the world population, with waves of new technology for which there is no robust evidence of safety.
If you believe new exposures should be demonstrated to be safe before being rolled out to you, you should take precautions around RF-EMF exposures (which have not been demonstrated to be safe).
If you only believe precautions are warranted for exposures which are demonstrated to be harmful, then perhaps you donât need to.
To explain how I got to those conclusions, we will:
- Walk through how science actually establishes that something causes harm. This matters because most people have a mistaken model of how this works.
- Go through animal studies, human epidemiological studies, and studies on potential mechanisms, each in detail, with a focus on carcinogenicity (the most-studied potential endpoint for RF-EMF impact).
- Return to the skeptical arguments and address them explicitly.
- Evaluate the body of scientific evidence against our proposed criteria for assessing causality of harm.
- Touch on potential non-carcinogenic effects.
- And move to our conclusion and overall reflection.
Before we go further, a disclosure: I co-founded Lightwork Home Health, a home health assessment company that helps people evaluate their EMF exposure (as well as lighting, air quality, water quality, and more). As such, I have an economic interest in this matter.
With that said, I want to be clear about the causal direction: I co-founded Lightwork after becoming convinced by the evidence of these types of environmental toxicities. That's why I started the business. This research isn't a post-hoc way for me to justify Lightwork's existence!
And regardless, my purpose here is to do my best to present the evidence fairly, including the strongest skeptical arguments and what would change my mind, and let you draw your own conclusions.
Section I: How do we know if something causes harm?
The problem with proving causation
Here's a fundamental challenge: you can't "prove" causation with 100% certainty in modern observational science.
You can't run a randomized controlled trial where you assign people to use or not use cell phones for 30 years and see who gets brain cancer. You can't ethically expose some children to power lines and not others and track their leukemia rates. For most environmental exposures, definitive experimental proof in humans is impossible.
So how did we conclude that tobacco causes lung cancer? That asbestos causes mesothelioma? That lead damages children's brains? None of these had randomized controlled trials in humans either.
The answer is: we use a framework for evaluating causation from imperfect evidence.
The Bradford Hill Criteria
In 1965, epidemiologist Austin Bradford Hill proposed nine criteria (or "viewpoints") for assessing whether an observed association is likely to be causal. These criteria have become a common framework in environmental and occupational health:
- Strength: How large is the effect? Larger effects are less likely to be explained by bias or confounding.
- Consistency: Is the association observed repeatedly in different populations, places, and times?
- Specificity: Is the exposure associated with specific outcomes, rather than with everything?
- Temporality: Does exposure precede the outcome?
- Biological gradient: Is there a dose-response relationship? This need not always be true, but is a factor.
- Plausibility: Is there a biologically plausible mechanism by which the exposure could cause the effect in question? (Although, as Hill noted, our knowledge here may be limited.)
- Coherence: Is there agreement between epidemiological and laboratory findings?
- Experiment: Do experimental studies support causation? (Hill: "Occasionally it is possible to appeal to experimental evidence," a caveat that largely applies here given the nature of the exposure.)
- Analogy: Are there similar cause-effect relationships we've already accepted?
No single criterion is sufficient on its own, and none except temporality is considered strictly necessary. You weigh the totality of evidence across all nine. Hill himself emphasized this:
"None of my nine viewpoints can bring indisputable evidence for or against the cause-and-effect hypothesis... What they can do, with greater or less strength, is to help us make up our minds."
This is, for example, how we decided smoking causes cancer. Not because any single study was definitive, but because the evidence accumulated across multiple criteria: strong associations, consistent across populations, dose-response relationships, biological plausibility from animal studies, coherence with disease patterns.
These criteria are especially helpful for considering causality for questions that are very hard to measure directly or experimentally. Say, for example: "what are the effects of long-term / lifetime mobile device and WiFi use (i.e. RF-EMF exposure) on humans?"
This is basically an unanswerable question in a direct, experimental, RCT-style way. As mentioned elsewhere, you can't isolate certain people from RF-EMF exposure their whole lives, expose other roughly-equivalent people, and look at what happens to each group.
RF-EMF exposure is inevitable and omnipresent in the modern world. You could put people in a no-EMF chamber for a day versus not and measure things, but you can't directly measure long-term exposure's effects.
So instead, we use something like the Bradford Hill viewpoints and ask: when we consider observational human studies, direct experimental animal studies, plausibility of biological mechanisms, and other characteristics of our knowledge all together... what do they say in aggregate?
I'll return to these criteria after presenting the EMF evidence.
What "null hypothesis significance testing" actually shows
Most modern scientific studies use a methodology called null hypothesis significance testing (NHST). Understanding what it does (and doesn't) tell you is also crucial here.
The NHST process:
- Assume "no effect exists" (the null hypothesis)
- Collect data
- Calculate: if no effect existed (i.e. if our assumption, the null hypothesis, were true), how likely would we be to see data this extreme?
- If very unlikely (p < 0.05): "reject the null" and claim we have "statistically significant" evidence that some effect exists
- If not unlikely enough: "fail to reject the null" and say we did "not observe a statistically significant effect"
Here's the critical part: "not statistically significant" does not mean "no effect exists."
It means: given our sample size, measurement precision, and study design, the observed data is still consistent with there being no effect.
I know that sounds a little incoherent and pedantic. But it's a really, really important point for understanding modern science. If you are looking at a study run with NHST (basically all of them now), there are two possible outcomes:
- The study shows that there is a statistically significant effect
- The study fails to confirm a statistically significant effect
The second point by itself is not strong evidence for no effect existing! Said another way: absence of evidence is not evidence of absence.
You could have an "absence of evidence" finding for a number of reasons, including:
- There truly is no effect
- The effect exists but is small
- The study was underpowered (too few subjects)
- The exposure assessment was too crude
- The follow-up period was too short
- The study design wasn't suited to detect this type of effect
NHST was designed to prevent false positives: to stop people from claiming effects that don't exist. It was not designed to prove safety. It cannot prove safety. That's not what it does.
So when you read "Study finds no significant link between X and Y," what the study actually found was: "we couldn't reject the null hypothesis." That's not the same as "we proved X doesn't cause Y."
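To make this concrete, here is a small simulation of the "underpowered study" failure mode. It is entirely illustrative: the disease rates, effect size, and sample sizes are invented, not taken from any EMF study. The same true effect, a 1.5x increase in risk, is usually missed at a small sample size and usually detected at a large one:

```python
import math
import random

random.seed(42)  # reproducible illustration

def two_prop_z_test(k1, n1, k2, n2):
    """Two-sided two-proportion z-test; True if p < 0.05."""
    p1, p2 = k1 / n1, k2 / n2
    p_pool = (k1 + k2) / (n1 + n2)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    return se > 0 and abs(p1 - p2) / se > 1.96

def detection_rate(n_per_arm, base=0.03, exposed=0.045, trials=500):
    """Fraction of simulated studies that reach p < 0.05 for a true 1.5x effect."""
    hits = 0
    for _ in range(trials):
        k_exp = sum(random.random() < exposed for _ in range(n_per_arm))
        k_ctl = sum(random.random() < base for _ in range(n_per_arm))
        hits += two_prop_z_test(k_exp, n_per_arm, k_ctl, n_per_arm)
    return hits / trials

small = detection_rate(300)   # underpowered: mostly "no significant effect"
large = detection_rate(5000)  # well-powered: mostly detects the effect
print(small, large)
```

In a typical run, the n=300 studies reach significance well under half the time even though the effect is real, while the n=5000 studies almost always detect it. A literature full of small studies of a real effect would therefore look "mixed."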
Odds ratios, confidence, and intuition
Throughout this post, I will show the results of NHST studies in a typical format for this topic, for example:
OR 2.22 (1.69-2.92)
What does this mean?
OR stands for Odds Ratio. It is a common way of evaluating increased or decreased risk for disease. An OR of more than 1 means an increased risk over baseline; an OR of 1 means no increased risk; an OR of less than one means decreased risk.
Odds Ratio is actually a little bit tricky to build intuition for. Relative Risk is a related but different metric, but can generally only be calculated with access to the entire population, whereas Odds Ratio can be calculated just by sampling a subset of the population.
More specifically, the Odds Ratio is equal to:
- the ratio of exposed people who got the disease to exposed people who didnât
- divided by
- the ratio of unexposed people who got the disease to unexposed people who didnât
For example, if:
- 100 people were exposed to a carcinogen, and 10 of them got cancer
- and 200 people were not exposed to that carcinogen, and 7 of them still got cancer
The OR would be (10/90) divided by (7/193), which comes out to roughly 3.06.
For our purposes, you can roughly think of this as a "tripling" of your likelihood of getting cancer if exposed to that carcinogen. That's not quite precise, and statisticians would be upset with me for saying so. (The reason they would be upset is that odds ratios compare odds, not probabilities / risk. These are only roughly equivalent when the probability of the event in question is small, which it is for the cancers we will be talking about. But it can lead to surprises a la Simpson's Paradox if you try to treat odds ratios as similar to relative risk for events with higher probabilities.) Still, for rare diseases like cancer, this is fine for intuition purposes.
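In code, the toy example above works out like this (a quick sketch using the made-up numbers from the example, with relative risk computed alongside for comparison):

```python
# Toy 2x2 example: 10 of 100 exposed people got cancer,
# 7 of 200 unexposed people got cancer. (Illustrative numbers only.)
def odds_ratio(exposed_cases, exposed_total, unexposed_cases, unexposed_total):
    """Ratio of the odds of disease among the exposed to the odds among the unexposed."""
    exposed_odds = exposed_cases / (exposed_total - exposed_cases)          # 10 / 90
    unexposed_odds = unexposed_cases / (unexposed_total - unexposed_cases)  # 7 / 193
    return exposed_odds / unexposed_odds

or_value = odds_ratio(10, 100, 7, 200)
rr_value = (10 / 100) / (7 / 200)  # relative risk, for comparison

print(round(or_value, 2))  # 3.06
print(round(rr_value, 2))  # 2.86
```

Note that the OR (about 3.06) and the relative risk (about 2.86) are close here precisely because the disease is rare in both groups; for common outcomes the two diverge.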
Okay, so that's the OR number. Then, in parentheses, I had 1.69-2.92: this is the 95% confidence interval (CI). We are saying we are 95% sure that the OR is between 1.69 and 2.92, and the point estimate (typically the maximum-likelihood estimate from a regression) is 2.22. We use a 95% confidence interval because we set p=0.05 per the above commentary on NHST (which is standard).
This then leads to an understanding of "statistical significance." A result is only considered statistically significant if the 95% CI does not include the null value. This makes sense intuitively: if we're 95% sure the result falls in some range, but that range includes "there's no effect," we shouldn't claim that we're confident there is an effect.
In the case of Odds Ratios, the null value is 1, because "no effect" means "no increased chance over baseline," and that's the definition of an OR of 1.
So that means that if you see a confidence interval that includes 1, the result is not statistically significant, no matter how high the front number is. e.g. if a study reports:
OR 4.75 (0.54-9.12)
That is not a statistically significant finding (1 is in the range 0.54-9.12), despite the fact that it seems to say thereâs an OR of 4.75.
On the other hand, if a study reports:
OR 1.33 (1.07-1.66)
That is a statistically significant finding of a 1.33 Odds Ratio (roughly 33% increased risk, with the caveats above), with 95% confidence that the true Odds Ratio is between 1.07 and 1.66.
The more data you have, and the cleaner / less biased it is, the closer your measured OR will get to the true value, and the tighter your CI will get around it. The worse your data, the wider that confidence interval will be (which of course makes it harder to reach statistical significance, if there is an underlying effect).
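As a sketch of the mechanics, here is the standard log-based ("Woolf") method for putting a 95% CI around an odds ratio, applied to the earlier toy 2x2 table (10 exposed cases / 90 exposed non-cases, 7 unexposed cases / 193 unexposed non-cases; all numbers illustrative):

```python
import math

def or_with_ci(a, b, c, d):
    """95% CI for the odds ratio of a 2x2 table.

    a = exposed cases, b = exposed non-cases,
    c = unexposed cases, d = unexposed non-cases.
    """
    or_value = (a / b) / (c / d)
    # Standard error of log(OR): square root of the summed
    # reciprocals of the four cell counts (Woolf's method).
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_value) - 1.96 * se)
    hi = math.exp(math.log(or_value) + 1.96 * se)
    return or_value, lo, hi

or_value, lo, hi = or_with_ci(10, 90, 7, 193)
significant = lo > 1 or hi < 1  # significant iff the CI excludes OR = 1
print(round(or_value, 2), round(lo, 2), round(hi, 2), significant)
```

With these counts the interval works out to roughly (1.13, 8.31), which excludes 1, so the toy result would count as statistically significant; note how wide the interval is with so few cases.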
Note that positive or negative findings that are not statistically significant can still be useful, especially in the context of meta-analyses, which pool multiple similar studies. If a study happened to be underpowered to detect an effect, its data, once pooled, can help push the overall finding to significance.
Finally: it's important to recognize that relative risk and absolute risk are two very different things. When we're talking about ORs, we're talking about the odds of getting a disease increasing. If that disease is rare in the first place (like the tumors we'll discuss in this piece), then even if you face a moderately increased odds ratio, it's still rare to get the disease! It's just less rare than it was before. A moderately elevated OR doesn't automatically turn a rare disease into a common epidemic.
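To put rough numbers on that point (the baseline rate below is a made-up round figure, not a measured statistic for any real tumor), applying an OR of 1.40 to a rare disease barely moves the absolute risk:

```python
# Hypothetical baseline lifetime risk of a rare disease: 0.6%
baseline_risk = 0.006
baseline_odds = baseline_risk / (1 - baseline_risk)

elevated_odds = baseline_odds * 1.40                 # apply OR = 1.40
elevated_risk = elevated_odds / (1 + elevated_odds)  # convert odds back to risk

print(round(elevated_risk * 100, 2))  # 0.84 (percent): still rare
```

A 40% increase in odds moves a 0.6% risk to about 0.84%; meaningful at population scale, but far from an epidemic for any individual.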
Our roadmap
In the interest of evaluating the Bradford Hill viewpoints, we will first look at animal studies, then human epidemiological studies, and then some proposals on biological mechanisms.
Once we've done so, we'll be able to go through the criteria on the basis of that corpus of evidence and weigh it.
We laid out our principal question earlier:
Is there reason to believe that RF-EMF, at exposure levels and durations typical in modern life, may cause increased risk of adverse health effects, at least for sensitive members of the population?
Like I said, we'll focus mostly on cancer risk, as that is the most studied endpoint. So really, for most of the post we'll be focusing on:
Is there reason to believe that RF-EMF, at exposure levels and durations typical in modern life, may cause increased risk of cancer, at least for sensitive members of the population?
And we will answer that question by looking at animal studies, epidemiological studies, and biological mechanisms, and then applying the Bradford Hill criteria to that data.
Section II: RF-EMF studies
Animal studies
First, we'll consider: is there evidence of RF-EMF carcinogenicity in animals at relevant exposure levels? For this question, we don't need to look very hard.
The National Toxicology Program Study
In 2018, the U.S. National Toxicology Program (NTP) released results from a $30 million, decade-long study: the most comprehensive, rigorous experimental animal assessment of cell phone radiation ever conducted. (I discuss what led to this study in an earlier post. It is also, I believe, the only RF-EMF study the US Government has ever conducted, which feels surprising. They were going to do more after this one but cancelled them with this, in my opinion pretty odd, reasoning: "The research using this small-scale RFR exposure system was technically challenging and more resource intensive than expected. In addition, this exposure system was designed to study the frequencies and modulations used in 2G and 3G devices, but is not representative of newer technologies such as 4G/4G-LTE, or 5G (which is still not fully defined). Taking these factors into consideration, no further work with this RFR exposure system will be conducted and NIEHS has no further plans to conduct additional RFR exposure studies at this time." You'd think those factors would make them want more studies...)
The NTP study exposed rats and mice to cell phone RF-EMF frequencies with common cellular signal modulation schemes (albeit from the early 2000s, since that is when the study was designed). The animals were exposed to levels around and above (although in the same order-of-magnitude) current human limits for 9 hours/day, 7 days/week, for roughly two years (a lifetime for rats). The study was carefully designed, with controlled exposure chambers and sham-exposed concurrent controls.
The NTP uses a standardized evidence scale: "clear evidence," "some evidence," "equivocal evidence," "no evidence." They found:
- Clear evidence of an association with tumors in the hearts of male rats (malignant schwannomas)
- Some evidence of an association with tumors in the brains of male rats (malignant gliomas)
- Some evidence of an association with tumors in the adrenal glands of male rats (benign, malignant, or complex combined pheochromocytoma)
- Equivocal evidence as to whether tumors observed in female rats (900 MHz) and in male and female mice (1900 MHz) were caused by exposure to RFR
Remember these tumor categories â schwannomas and gliomas â for when we get to the epidemiological evidence.
"Clear evidence" is the highest confidence rating NTP assigns, and is not something they say lightly. Their definition of it is:
Clear evidence of carcinogenic activity is demonstrated by studies that are interpreted as showing a dose-related (i) increase of malignant neoplasms, (ii) increase of a combination of malignant and benign neoplasms, or (iii) marked increase of benign neoplasms if there is an indication from this or other studies of the ability of such tumors to progress to malignancy.
And âSome evidenceâ is defined as:
Some evidence of carcinogenic activity is demonstrated by studies that are interpreted as showing a test agent-related increased incidence of neoplasms (malignant, benign, or combined) in which the strength of the response is less than that required for clear evidence.
Both of those categories are considered by the NTP to be "positive findings."
They also published followup papers, including one looking at DNA damage, which found significant increases in DNA damage in:
- the frontal cortex of the brain in male mice
- the blood cells of female mice
- the hippocampus of male rats
The NTP studies are among the strongest experimental signals in the literature, and are rightfully seen as extremely compelling evidence for carcinogenicity (in animals) of RF-EMF at levels on the same order as human exposure. As they conclude (with a similar paragraph, with slight variations, for the CDMA-exposed rats):
Under the conditions of this 2-year whole-body exposure study, there was clear evidence of carcinogenic activity of GSM-modulated cell phone RFR at 900 MHz in male Hsd:Sprague Dawley® SD® rats based on the incidences of malignant schwannoma of the heart. The incidences of malignant glioma of the brain and benign, malignant, or complex pheochromocytoma (combined) of the adrenal medulla were also related to RFR exposure. The incidences of benign or malignant granular cell tumors of the brain, adenoma or carcinoma (combined) of the prostate gland, adenoma of the pars distalis of the pituitary gland, and pancreatic islet cell adenoma or carcinoma (combined) may have been related to RFR exposure.
The Ramazzini Institute Study
The Ramazzini Institute in Italy, a respected independent toxicology laboratory, then conducted a parallel study examining cell phone radiation, seeking to reproduce or counter the NTP's results.

They referred to this setup in a caption in their paper as "wooden circular-shaped devices, as in a sort of condominium," which made me laugh. Not the sort of condominium I want to live in, whether RF-EMF harm is real or not!

They subjected rats to GSM-modulated RFR at 1.8 GHz over the course of their lives, but at much lower exposures than the NTP used. The NTP exposure was at Specific Absorption Rates (SAR) ranging from 1.5 to 6 W/kg to simulate near-field exposure (like having a cell phone against your head). The Ramazzini exposures were orders of magnitude less, from roughly 0.001 to 0.1 W/kg, to simulate far-field exposure (like being exposed to a cell phone base station or a device further from you). ("Roughly" because the Ramazzini methodology was actually built on field strength measurements of 0, 5, 25, and 50 V/m, converted to SAR estimates via assumptions detailed in section 2.6 of their paper.)

- A statistically significant increase in the incidence of heart Schwannomas was observed in treated male rats at the highest dose (50 V/m).
- An increase in the incidence of heart Schwann cell hyperplasia was observed in treated male and female rats at the highest dose (50 V/m), although this was not statistically significant.
- An increase in the incidence of malignant glial tumors was observed in treated female rats at the highest dose (50 V/m), although not statistically significant.
And so they conclude:
The RI findings on far field exposure to RFR are consistent with and reinforce the results of the NTP study on near field exposure, as both reported an increase in the incidence of tumors of the brain and heart in RFR-exposed Sprague-Dawley rats.
So, critically: they found at minimum a statistically significant increase in malignant heart schwannomas, which was one of the key findings of the NTP study. And remember that the Ramazzini exposures were 50 to 1,000 times lower than the NTP's, within the range people experience simply living near cell towers.
In science, independent replication is everything. When two labs find overlapping unusual results without coordination, it's very unlikely to be a methodological artifact. And these were two independent, top-tier labs, in different countries, looking at different but adjacent exposures, and finding the same rare tumor types.
Further evidence
There are other studies that suggest other effects from RF-EMF on animals. But for the sake of brevity, I will focus on just findings related to those above (as I believe they have the strongest evidence).
Conveniently, the WHO recently commissioned a systematic review of animal cancer bioassays: Mevissen et al. (2025), Environment International. The review found that across the 52 publications included, after evaluating for risk of bias, there was evidence that RF-EMF exposure increased the incidence of cancer in experimental animals, with:
- high certainty of evidence for malignant heart schwannomas
- high certainty of evidence for gliomas
This is exactly in line with (and partially driven by) the NTP & Ramazzini findings. I have not seen much dispute over these conclusions.
There are many other animal studies on RF-EMF (including several interesting ones around RF-EMF increasing tumor rates when animals are also exposed to separate known carcinogens).
But because I believe the conclusions above (high certainty of evidence for an association in experimental animals between RF-EMF exposure and gliomas/schwannomas) are relatively uncontroversial, I will leave it here, as this minimum bar is sufficient to make an overall argument. Any additional study inclusion would be for purposes of increasing the scope of potential RF-EMF biological impacts, which we can reserve for a future post.
And brevity is important, because we're about to get into a more controversial area (which will thus take up much more space): the human studies. So let's get into those.
Human studies
Now, let's turn to the human studies on RF-EMF. Unlike rats, we can't put humans in isolated experimental "wooden circular-shaped condominiums" and blast them with RF-EMF or sham controls for their entire lives. So instead, we rely on observational studies, typically either of the case-control variety or prospective (we will discuss the differences shortly).
This difference is why we want to look at both animal and human studies (as well as other evidence): because it is only in concert that we can get a fuller picture.
There is also a challenge with RF-EMF in particular, as opposed to many other potential toxins: it is really, really hard to get accurate measurements of exposure. In the animal trials, we could look precisely at a quantified SAR (basically absorbed radiation) or field strength (ambient radiation), because the animals were in isolated, shielded chambers. But human exposure is a whole different beast.
Your phone transmits at variable strengths (fewer bars of signal actually means the phone boosts its radio and outputs higher power!), you move through a varying ambient field all day (are you close to a WiFi router? a cell tower?), and many ways of cutting this rely on some degree of human recall (how long were you on your phone?). Plus: cellular technology is constantly changing, so the same behavior one year may have a totally different exposure profile than the next.
This creates a huge amount of noise and challenge in the data. Studies take different approaches, and we will attempt to look at them individually and in totality. But it is worth bearing these challenges in mind. In general, most studies tend to focus simply on "cell phone usage," with many tracking time spent on calls. Especially at large enough scale, this is perhaps the best way to get at exposure, but it still leaves questions and noise.
The INTERPHONE Study
The most noteworthy case-control cell phone study is generally considered to be the INTERPHONE study. The main paper was published in 2010, and there have been a number of followups and country-specific ones as well.
INTERPHONE was coordinated by the WHO's cancer research arm (IARC), and looked at people across 13 countries. Researchers found tumor cases between 2000-2004 and matched controls from the same populations, giving both detailed interviews about cell phone use history. It involved 2,708 glioma cases and 2,409 meningioma cases.
Their headline finding was that there was no overall association between cell phone use and gliomas or meningiomas ("No elevated OR [Odds Ratio] for glioma or meningioma was observed ≥10 years after first phone use."). However, when you read the details, a more nuanced picture emerges.
The details
As the WHO's International Agency for Research on Cancer (IARC; the organization that kicked off the study) said themselves in 2010 when the study results were announced:
The majority of subjects were not heavy mobile phone users by today's standards. The median lifetime cumulative call time was around 100 hours, with a median of 2 to 2-1/2 hours of reported use per month. The cut-point for the heaviest 10% of users (1,640 hours lifetime), spread out over 10 years, corresponds to about a half-hour per day.
You may wonder, as I did: for those "heaviest 10% of users" who spent more than 1,640 lifetime hours over 10 years, averaging "about a half-hour per day"… what was their outcome?
You wouldn't have guessed it from the topline "no elevated odds ratio" conclusion, but:
In the tenth [highest] decile of recalled cumulative call time, ≥1,640 h, the OR was 1.40 (95% CI 1.03–1.89) for glioma
Said another way: there was a statistically significant 40% increase in the odds of glioma in this set of "heavy users."
Even more than that, in this same "heavy usage" group, the OR for temporal lobe glioma (the temporal lobe being the part of the brain most adjacent to where you hold your phone) was 1.87 (95% CI 1.09–3.22). This means there was a statistically significant 87% increase in the odds of temporal lobe glioma for heavy users compared to people who were not regular users of their phones (i.e. they used them less than once per week!).
And even more interesting than that: you might wonder, for this high-usage set, was there a difference in which side of the brain the tumor appeared on? And indeed, there was. Of those with glioma, "ipsilateral" phone use (i.e. mostly holding your phone on the same side of your head as where the tumor developed) came with an OR of 1.96 (95% CI 1.22–3.16), versus "contralateral" phone use (i.e. mostly holding your phone on the opposite side of your head from the tumor) at 1.25 (95% CI 0.64–2.42).
Said another way: there was a strong, statistically significant correlation between which side of their heads the glioma participants held their phone on and which side their tumor was on.
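To make these statistics concrete, here is a minimal sketch of how an odds ratio and its Wald 95% confidence interval fall out of a 2×2 case-control table. The counts below are invented purely for illustration; INTERPHONE's actual estimates came from conditional logistic regression with matching and adjustment, not this raw calculation.

```python
import math

def odds_ratio_ci(exposed_cases, exposed_controls, unexposed_cases, unexposed_controls):
    """Odds ratio from a 2x2 table, with a Wald 95% confidence interval."""
    or_ = (exposed_cases * unexposed_controls) / (exposed_controls * unexposed_cases)
    # Standard error of log(OR): sqrt of the summed reciprocal cell counts
    se = math.sqrt(1 / exposed_cases + 1 / exposed_controls
                   + 1 / unexposed_cases + 1 / unexposed_controls)
    lo = math.exp(math.log(or_) - 1.96 * se)
    hi = math.exp(math.log(or_) + 1.96 * se)
    return or_, lo, hi

# Hypothetical counts chosen only to yield an OR of 1.40 for illustration
or_, lo, hi = odds_ratio_ci(160, 120, 1000, 1050)
print(f"OR {or_:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```

When the lower CI bound sits above 1.0, as in INTERPHONE's top decile, the elevated odds are statistically significant at the 95% level.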
Now most people I know use their mobile phones (on a call or in their hand browsing) maybe 5 hours a day; various online stats agree with that order of magnitude. That's 150 or so hours a month, or about 1,800 hours a year. INTERPHONE considered "heavy users" to be those with more than 1,640 hours of lifetime exposure.
But critically, as we will address later in this piece, holding a phone up to your ear (really the only way people used them in the '90s-to-early-2000s INTERPHONE usage era) is a very different and more intense exposure profile than holding it in your hand. So I'm not suggesting at all that modern usage hours are simply a scaled-up version of the INTERPHONE usage.
In fact, if that were the case, cancer numbers would be off the charts right now; we'll also discuss this dynamic later.
But the point remains: 1,640 hours of lifetime exposure is well within reach for a "normal" user today, not just a "heavy" one. And the INTERPHONE study suggests that range has a statistically significant association with glioma, and especially with ipsilateral, temporal lobe glioma.
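The back-of-envelope arithmetic, using assumed (not measured) daily-use figures:

```python
# How fast does usage cross INTERPHONE's "heavy user" threshold of
# 1,640 lifetime hours? All daily-use figures here are rough assumptions.
INTERPHONE_HEAVY_THRESHOLD_H = 1640

hours_per_day = 0.5  # INTERPHONE top decile: ~30 min/day spread over 10 years
years_to_threshold = INTERPHONE_HEAVY_THRESHOLD_H / (hours_per_day * 365)
print(f"At 30 min/day: {years_to_threshold:.1f} years")  # roughly 9 years

modern_hours_per_day = 5  # rough modern screen-time estimate (in hand, not at ear)
years_modern = INTERPHONE_HEAVY_THRESHOLD_H / (modern_hours_per_day * 365)
print(f"At 5 h/day: {years_modern:.1f} years")  # under a year
```

Again, hand-held browsing is a much weaker head exposure than an at-ear call, so these hours are not directly comparable; the point is only that the raw hour count is no longer an extreme tail.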
A reasonable question may arise: am I cherry-picking here? I'm only talking about the highest-usage group (more than 1,640 hours of lifetime cell phone use), when all the lower-usage groups showed much less (or zero, or negative) impact from cell phone usage.
Well: I think that group is especially relevant to our original question. We're trying to figure out whether "exposure levels and durations typical in modern life" have negative health effects. The highest-usage group in the INTERPHONE study used their phones on average about half an hour a day.
Imagine we were studying whether eating ultra-processed food (UPF) was correlated with obesity, and we found a big, well-run study that looked at food consumption of the following groups over 10 years:
| Group | % of calories from UPF |
|---|---|
| Group 1 | ~0% |
| Group 2 | < 5% |
| Group 3 | < 10% |
| Group 4 | < 15% |
| Group 5 | < 20% |
| Group 6 | ≥ 20% |
And imagine that study found that for Group 6, there was a statistically significant 40% increase in obesity odds over Group 1, but for Groups 2-5, there was no statistically significant increase. And on that basis, we said "no elevated OR for obesity was observed ≥10 years after first ultra-processed food consumption."
That's all well and good, but, uh, ultra-processed food now accounts for nearly 60% of US adult calorie consumption. So most people are way into the top group, and as such, that's the only one that is particularly relevant to look at.
And that maps almost exactly onto what the INTERPHONE study did.
To be clear, I don't blame the INTERPHONE authors. They conducted their study based on cell phone usage at the time, in the late 1990s and early 2000s. And in their press release when they released the study, they said:
Dr Christopher Wild, Director of IARC said: "An increased risk of brain cancer is not established from the data from Interphone. However, observations at the highest level of cumulative call time and the changing patterns of mobile phone use since the period studied by Interphone, particularly in young people, mean that further investigation of mobile phone use and brain cancer risk is merited."
(That quote also hints at our later consideration of "sensitive groups": INTERPHONE only looked at adults, not children.)
There are other critiques of the study (including shortcomings the investigators themselves highlight, and many others published): it had short latency (most cases had only a few years of exposure; brain tumor latency is commonly decades); it found an implausible protective effect at low exposure (OR < 1.0), which signals systematic bias pushing results towards the null; there were clear recall errors and possible selection biases; and more.
Additionally and importantly (this will come up again later in our analysis of the evidence), the INTERPHONE study only counted cellular phone use as "exposure," not cordless phone use. At the time the study was done, DECT cordless phones (remember these guys?) were common, and they also emit RF-EMF. But the INTERPHONE study put usage of those devices in the "not exposure" bucket. So when they then compared the "more exposed" groups to the "less exposed" groups, it was not a proper comparison of RF-EMF exposure. The study measured the association of "cellular phone usage" with cancer, not of "RF-EMF exposure" with cancer (and this cordless exclusion statistically biased the study towards the null).
But at the end of the day, when looking at the data (not just the conclusions), I find the INTERPHONE study a persuasive argument in favor of glioma correlation with what might be in the range of typical modern cell phone usage. And that's even before we look at other epidemiological studies like…
The Hardell Studies
Swedish researcher Lennart Hardell conducted a series of studies over two decades. He was careful to include deceased cases, detailed laterality analyses, long followups (some 25+ years), cordless phone exposure (not just cellular phones), and dose-response analysis.
To take one of his papers as a representative example (Hardell & Carlberg (2015), Pathophysiology, a pooled analysis of two of their studies):
- mobile phone use increased risk of glioma, OR 1.3 (1.1-1.6)
- the OR increased statistically significantly per 100 hours of use, and per year of latency
- high ORs were found for ipsilateral mobile phone use, OR 1.8 (1.4-2.2)
- the highest risk was found for glioma in the temporal lobe
- first use of mobile phone before age of 20 gave higher OR for glioma than in later age group
There are several other Hardell studies that find similar results. We'll cover their aggregate outcomes in the meta-analyses section below.
The laterality finding is crucial to addressing a possible source of bias (both here and in the INTERPHONE study). If recall bias drove these results, with tumor patients overreporting their phone use, they'd likely end up overreporting on both sides. There's no reason to misremember which ear you used.
But increased risk appears specifically on the side where phones were held. That's exactly what you'd expect from a true biological effect, and is strongly suggestive of a lack of bias (although there remain possibilities around differential recall of side, post-diagnosis rationalization, etc.).
With that said, Hardell's studies are often contested. His work consistently finds larger effects than other research groups', which is a clear driver of scrutiny.
He also did a lot of studies (more than anyone else on this topic), and depending on your view of the matter, this ends up biasing meta-analyses (which we'll cover in a moment) one way or the other. Either Hardell's studies are included in the meta-analysis, which drives the association between RF-EMF and cancer up, or they are excluded, which weakens the association substantially, or even eliminates it.
Critics point to several concerns, most of all his effect sizes not being replicated by other major studies (i.e. an elevated association found in a study is correlated with that study being performed by Hardell). These concerns should be taken into account when weighing the evidence.
It's worth noting that the INTERPHONE study (commonly cited as a result disagreeing with Hardell) is actually more in line with his findings than people give it credit for, per my argument above. And I will address some of the other major studies (Million Women; Danish cohort) momentarily.
There was also a noteworthy rebuttal to Hardell in Little et al. (2012), BMJ, which argued that the observed 2008 glioma rate in the US did not line up with what Hardell's risk estimates would have predicted. I mention this rebuttal in particular (there are many!) because, as I will address later, this and related ideas are what I find to be the most persuasive line of argument against an association between common levels of mobile phone use and cancer.
Little's paper led to a flurry of replies (Kundi, Davis, Hardell himself, and Little et al.) about the methods used. It is a complicated topic with no black-and-white answer, and worth reading the full series of letters if you are interested.
However, even if we were to accept Little's work at face value, and thus that Hardell's work overestimates the association (although I haven't seen a compelling methodological argument for why that might be the case): the Little paper still finds that, using the INTERPHONE risk estimates (including the elevated odds ratio in the highest-use bins), the projected glioma rate is in line with the observed rate.
And, similarly to my observations on the INTERPHONE study, I can believe two things at the same time: 1) that the work was performed properly, and 2) that it simply does not reflect the reality of modern RF-EMF exposure.
More precisely: the Little study did not have access to actual cell phone usage data in the US, so they assumed it matched the usage of the INTERPHONE control group (which I think is a reasonable assumption, given the constraints). And, depending on the "latency period" you are looking at in the INTERPHONE study, the percentage of people who had more than 1,640 total lifetime hours of mobile phone use was relatively small. Coupled with a moderate effect size, that means a very limited projected increase in glioma incidence. And again, 1,640 hours is perhaps a fair annual count of usage for modern exposure.
Little assessed mobile phone subscriptions from 1982 onwards, for evaluation of glioma incidence from 1992 to 2008. This was important work, but it does not reflect modern usage. Even if we assume all the work was done correctly, it's still looking at exposure levels a fraction of what we face today.
And if you build in their assumed latency of 10+ years for tumor incidence, plus the possibility (reflected in INTERPHONE's conservative findings) that low usage (<1,640 lifetime hours) produces little effect while usage above that produces a moderate one, it is entirely coherent to hold that both INTERPHONE's and Little's findings are correct, and that modern exposures are on track to produce increased incidence.
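The logic here can be sketched numerically. All figures below are illustrative assumptions: a ballpark baseline glioma rate, a small heavy-user fraction of the kind the Million Women authors estimated, and INTERPHONE's top-decile OR used loosely as a relative-risk proxy.

```python
# Why a moderate effect confined to a small "heavy user" stratum barely
# moves population-level incidence. All inputs are illustrative assumptions.
baseline_rate = 6.0   # gliomas per 100,000 person-years (rough ballpark)
p_heavy = 0.03        # assumed fraction of population over 1,640 lifetime hours
rr_heavy = 1.40       # INTERPHONE top-decile OR, used here as an RR proxy

# Population rate = baseline * (1 + exposed_fraction * (RR - 1))
projected = baseline_rate * (1 + p_heavy * (rr_heavy - 1))
increase_pct = 100 * (projected / baseline_rate - 1)
print(f"Projected rate: {projected:.2f} per 100,000 ({increase_pct:.1f}% above baseline)")
```

A ~1% bump in registry-level incidence is essentially invisible against normal year-to-year noise, which is why a real stratum-level effect and flat population trends can coexist; but if the exposed fraction grows toward most of the population, the same arithmetic projects a much larger bump.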
Million Women & Danish cohort
In contrast to the findings above, there are two large-scale studies that are often cited as counterpoints (well, actually, INTERPHONE is typically cited as a negative finding too, but per my analysis above I consider it a positive one!): the Million Women Study in the UK, and the Danish cohort study.
The Million Women Study is a very impressive large-scale project. 1.3 million UK women were recruited and followed for health outcomes. This has resulted in a huge array of papers. The study was not specifically focused on RF-EMF (or focused on anything specific, really; it was meant to provide large data across many facets to be evaluated later). But papers on cellular phone exposure were produced, including Schüz et al. (2022), J Natl Cancer Inst. The core finding:
Adjusted relative risks for ever vs never cellular telephone use were 0.97 (95% confidence interval = 0.90 to 1.04) for all brain tumors, 0.89 (95% confidence interval = 0.80 to 0.99) for glioma, and not statistically significantly different to 1.0 for meningioma, pituitary tumors, and acoustic neuroma. Compared with never-users, no statistically significant associations were found, overall or by tumor subtype, for daily cellular telephone use or for having used cellular telephones for at least 10 years.
So: no association found. However (emphasis mine):
In INTERPHONE, a modest positive association was seen between glioma risk and the heaviest (top decile of) cellular telephone use (odds ratio = 1.40, 95% CI = 1.03 to 1.89). This specific group of cellular telephone users is estimated to represent not more than 3% of the women in our study, so that overall, the results of the 2 studies are not in contradiction
There are, as usual, a variety of letters and rebuttals (Moskowitz, Birnbaum et al., Schüz). I find arguments on various points persuasive on both sides in terms of methodological strengths and weaknesses. (I also note that those opposed to the "no association" conclusions are not cranks. For example, Linda Birnbaum, Ph.D. was director of the National Institute of Environmental Health Sciences (NIEHS) of the National Institutes of Health, and of the National Toxicology Program (NTP), from 2009 to 2019, and before that spent 19 years at the EPA, where she directed the largest division focusing on environmental health research. Her co-author on the letter is Hugh Taylor, MD, chair of obstetrics, gynecology and reproductive sciences at Yale.)
But for me, the piece that really matters (as in my breakdown of the INTERPHONE study) is this "heavy user" question, which today is much closer to "typical use." It is completely plausible that, given the <3% of "heavy users" in the Million Women Study, an association was not observed even if one exists between "heavy use" (as defined here) and cancer; and that heavy-use population was not broken out in the Million Women paper anyway.
Finding no association for the roughly 97% of women who hadn't used a cell phone for 1,640 cumulative hours is valuable science. But it is of questionable value in answering our original question. In fact, the study authors themselves, in their reply letter, say:
We do agree, however, with both Moskowitz and Birnbaum et al. that our study does not include many heavy users of cellular phones. This study reflects the typical patterns of use by middle-aged women in the UK starting in the early 2000s.
…
Overall, our findings and those from other studies support our carefully worded conclusion that "cellular telephone use under usual conditions [Schüz et al.'s emphasis] does not increase brain tumor incidence." However, advising heavy users on how to reduce unnecessary exposures remains a good precautionary approach.
And so I repeat here: their view of "usual conditions" may or may not be in line with modern RF-EMF usage; and their suggestion that "advising heavy users on how to reduce unnecessary exposures remains a good precautionary approach" is a suggestion that, by their own definitions, should likely apply to most people today. Even many of these "negative finding" studies seem to actually suggest such.
The Danish cohort study (Frei et al. (2011), BMJ, among other papers) is another oft-cited study. It looked at Danish cell phone subscribers in the early 1990s and followed up regarding their cancer status between 1990-2007. They found no statistically-significant increased risk for brain / central nervous system tumors.
But this study falls victim to some of the same issues as the prior ones: there are a few methodological weaknesses, but it also fails to address what was then heavy usage but is now typical.
The Danish study did not measure actual exposure / usage. Instead, they simply looked at the binary of whether the individual had a cellular subscription or not, so there was no way to stratify by usage.
But beyond that:
- They excluded corporate subscribers from the "exposed" group and put them in the "unexposed" group unless they also had a personal subscription. There were 200,507 corporate subscriptions out of 723,421 total mobile phone subscription records. I suspect that corporate subscribers may have been the most active users, so this feels like a painful possible misclassification.
- They only had data on mobile phone subscriptions until 1995, so individuals with a subscription starting 1996 or later were classified as non-users / unexposed.
- Once again, cordless / DECT phone users were treated as "unexposed"
- And perhaps most importantly, "the weekly average length of outgoing calls was 23 minutes for subscribers in 1987-95 and 17 minutes in 1996-2002"
So, again: another study that does not feel strongly informative about the tail of exposure most relevant today.
COSMOS
The most recent large-scale study is the Cohort Study on Mobile Phone Use and Health, or COSMOS, which remains in-progress (Feychting et al. 2024, Environ Int. is the most recent relevant follow-up, but there will be more).
COSMOS is a prospective cohort study following 250,000 mobile phone users recruited between 2007-2012 in several countries across Europe. The headline finding (at least of the 2024 paper) is no association between mobile phone use and risk of glioma, meningioma, or acoustic neuroma.
COSMOS is very interesting, and I will be tracking it closely. It was designed specifically to account for shortcomings in the earlier studies I mentioned. Most of all: the two most relevant and specific earlier study series (INTERPHONE and Hardell's) were case-control studies, meaning roughly that researchers found people with cancer, then found matching controls, and then asked them all about their cell phone usage.
COSMOS, by contrast, is a prospective study: they simply started following a population, noting both cell phone usage over time and cancer diagnoses. The intention here is to address what is referred to as "recall bias": the concern that people diagnosed with brain tumors, searching for explanations, might unconsciously overestimate their historical phone use. (There is evidence for this type of bias. The COSMOS paper cites Bouaoun et al. (2024), Epidemiology, although it's worth noting that paper shares an author with the COSMOS paper itself. Bouaoun et al. is a simulation study that found a larger variance in reporting errors among INTERPHONE's cases than its controls.)
There are trade-offs between prospective and case-control studies. While prospective studies reduce recall bias, they can struggle to evaluate rare outcomes, especially those with long latency: you need really big populations and long follow-ups, and you risk exposure classification degrading as technology changes. Case-control studies can capture longer latencies and rare outcomes by design, but that comes with recall bias concerns.
COSMOS also worked to address the "heavy user" question. Their conclusion thus far into the study (median followup of 7.12 years) is that there is no statistically significant association between cellular phone use and the cancers they are examining.
Unfortunately (for science and for the study authors), by 2007, when recruitment began, cell phones were basically everywhere, as were other RF exposure sources like WiFi. So it's now nearly impossible to have an "unexposed" group to compare to. COSMOS' approach is to treat the bottom 50% of users (by whatever metric they are slicing for the specific analysis) as the "low exposure" group, and to have the 50-75% and 75-100% quartiles (and in some cases, the 90-100% decile) as the exposure groups to evaluate against the low exposure group.
This obviously raises the possibility that if even people in the "low exposure" group are being affected, that could reduce the measured impact on the higher exposure groups and bias towards the null. It's a limitation imposed by the reality of mobile phone proliferation, but there is a reasonable argument that it would be more appropriate to set the cutoff for "low exposure" much lower than 50%. For instance, I'd be interested in the statistics comparing the bottom decile to the top decile or quartile.
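For intuition, here is a sketch of that kind of percentile-based exposure stratification, using simulated usage data (COSMOS' real exposure metrics came from questionnaires and operator records, and its analyses were far more involved than this grouping step):

```python
import random
from collections import Counter
from statistics import quantiles

random.seed(0)
# Hypothetical weekly call-time data (hours) for a cohort of 10,000 people;
# lognormal shape is assumed, since usage distributions are heavily right-skewed.
call_hours = [random.lognormvariate(0, 1) for _ in range(10_000)]

cuts = quantiles(call_hours, n=100)        # 99 percentile cut points
q50, q75, q90 = cuts[49], cuts[74], cuts[89]

def stratum(hours):
    """Assign a participant to a COSMOS-style exposure stratum."""
    if hours < q50:
        return "reference (bottom 50%)"
    if hours < q75:
        return "50-75%"
    if hours < q90:
        return "75-90%"
    return "top decile"

print(Counter(stratum(h) for h in call_hours))
```

The design choice under debate is visible here: the reference "unexposed" bucket is simply the lower half of a distribution in which nearly everyone has some exposure, so any effect inside that bucket dilutes the contrast with the upper strata.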
The COSMOS study, as it stands today, has somewhat limited statistical power (especially for certain tumor types like acoustic neuroma), and is also limited by the followup duration of roughly 7 years. (This should not be confused with the duration of phone usage: the followup duration refers to the time between registration and followup. The participants may have been using mobile phones long before that, and many were; they reported this when they were recruited.) There are only 149 incident cases of glioma, 89 of meningioma, and 29 of acoustic neuroma. These limitations are not a failure of the study design, simply a commentary on the fact that it is early on. Future followups in the study will help to address these.
It would also be great if COSMOS could report on the laterality of the tumors, as well as their locations. From my perspective, the laterality findings in earlier studies are some of the most compelling pieces of evidence against recall bias, and are a key part of the evidentiary puzzle (although it is also argued variously that these can be subject to their own recall biases). In the first COSMOS paper, laterality and location are not mentioned.
Overall, I find the COSMOS study to be a very thoughtfully-designed approach, with compelling early findings of no association between mobile phone use and the cancers they are looking at. There are limitations, as with every study, but they address them well. I will be watching closely for followups. This study is one of two places (along with population cancer trends) that I think has the strongest chance of causing me to update my conclusions on this matter (depending on what happens).
But, despite all of that, we must remember what we outlined at the outset. Absence of evidence of an association is not evidence of absence. We need to consider this within the mix of everything else. There is no such thing as a single study "disproving" an effect with certainty. It could be that there is no effect, but it could also be that the study is underpowered for the effect size, the followup period was too short, there was noise, or otherwise.
A final note on COSMOS: after it was published, a group called ICBE-EMF (International Commission on the Biological Effects of Electromagnetic Fields; some of the same dissenters cited earlier) published a methodological critique calling for retraction of the study's conclusions. The COSMOS authors published a rebuttal. To my eyes, the ICBE-EMF critique raises some valid methodological concerns (some of which I have referenced above), but in this case the call for retraction is too strong. The COSMOS rebuttal defends most of the core points. Both sides have potential biases and perspectives. Net of everything, I find the COSMOS conclusions to be fair based on their data, and think their methodology is a reasonable one. At the same time, I remain deeply grateful for ICBE-EMF's work: they tirelessly represent the less-common side of the debate in public and scientific discourse, and frequently point out major errors in papers which this post references.
Meta-analyses & reviews
We've now gone through several of the biggest / most noteworthy human studies on the association between cell phone use and cancer (although there are others!). And as you can see, at least from my perspective, the topline conclusions do not always line up with the most relevant takeaway for our purposes today: assessing the impact of typical usage patterns.
This then naturally carries through to the meta-analyses. As you can imagine based on the above, if we simply ask a question like "across all these studies, is there an observed association between people who have used cell phones and cancer rates?" the answer will be "no." But that may miss the detailed conclusions around what was considered "heavy usage" in the early 2000s, or other nuances.
The meta-analyses are all (basically) drawing from the same pool of possible studies. Each then chooses a different set of parameters: inclusion criteria; choices around exposure contrast, latency, laterality, and reference group definition; and synthesis approach. These choices shape what question the analysis is answering. I've laid out what I consider to be the top meta-analyses below, grouped by the couple of levers and choices that most meaningfully determine their outcomes.
The latest WHO-commissioned systematic review, Karipidis et al. (2024), Environ Int., concludes, among other findings:
For near field RF-EMF exposure to the head from mobile phone use, there was moderate certainty evidence that it likely does not increase the risk of glioma, meningioma, acoustic neuroma, pituitary tumours, and salivary gland tumours in adults, or of paediatric brain tumours.
As usual, there is an ICBE-EMF critique and a rebuttal. Each makes compelling points.
There are then several meta-analyses that effectively come to the conclusion that general usage is not associated with increased risk, but long-term / heavy use (and ipsilateral use) is:
Myung et al. (2009), Journal of Clinical Oncology:
- Overall use: OR 0.98 (0.89–1.07) (null).
- ≥10 years of use: OR 1.18 (1.04–1.34)
- they also note, interestingly, that: "a significant positive association (harmful effect) was observed in a random-effects meta-analysis of eight studies using blinding, whereas a significant negative association (protective effect) was observed in a fixed-effects meta-analysis of 15 studies not using blinding"
Prasad et al. (2017), Neurological Sciences:
- Overall use: OR 1.03 (0.92–1.14) (null)
- ≥10 years of use (or >1,640 hours of use): OR 1.33 (1.07–1.66)
- they add a note similar to Myung et al.'s: "studies with higher quality showed a trend towards high risk of brain tumour, while lower quality showed a trend towards lower risk/protection."
- Overall use: OR 0.98 (0.88–1.10) (null)
- ≥10 years of use: OR 1.44 (1.08–1.91)
- Long-term ipsilateral use: OR 1.46 (1.12–1.92)
- Long-term use association with low-grade glioma: OR 2.22 (1.69–2.92)
- Long-term use association with high-grade glioma: OR 0.81 (0.72–0.92) (protective finding)
Moon et al. (2024), Environmental Health:
- Ipsilateral use: OR 1.40 (1.21–1.62)
- ≥10 years of use: OR 1.27 (1.08–1.48)
- ≥896 hours of cumulative use: OR 1.59 (1.25–2.02)
An interesting meta-analysis was performed by Choi et al. (2020), Int J Environ Res Public Health (note that there is author overlap with ICBE-EMF membership). They sought to break down some of the variance in results shown above. A key finding:
In the subgroup meta-analysis by research group, cellular phone use was associated with marginally increased tumor risk in the Hardell studies (OR, 1.15 (95% CI, 1.00 to 1.33; n = 10; I2 = 40.1%), whereas it was associated with decreased tumor risk in the INTERPHONE studies (OR, 0.81; 95% CI, 0.75 to 0.88; n = 9; I2 = 1.3%). In the studies conducted by other groups, there was no statistically significant association between the cellular phone use and tumor risk (OR, 1.02; 95% CI, 0.92 to 1.13; n = 17; I2 = 8.1%).
That is: if meta-analyses include the Hardell studies (of which Choi looked at 10), that will shift them towards finding an association. If they include the INTERPHONE studies (9 of them), that will shift them towards the null (in fact, they show a protective effect). Non-Hardell, non-INTERPHONE studies (17 of them), pooled, show null.
However, Choi et al. also looked at duration and found that for users with >1,000 hours of lifetime call time:
- all studies pooled (Hardell, INTERPHONE, other): OR 1.60 (1.12-2.30) (statistically significant positive finding)
- Hardell: OR 3.65 (1.69-7.85) (statistically significant positive finding)
- INTERPHONE: OR 1.25 (0.96-1.62) (positive finding, but just missed on statistical significance)
- other: OR 1.73 (0.66-4.48) (positive finding, but not close to statistical significance â wide range)
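To see mechanically why the choice of included studies moves a pooled estimate, here is a toy fixed-effect inverse-variance pooling, the standard first step in such meta-analyses (real ones typically also fit random-effects models and report heterogeneity statistics). The ORs below are invented for illustration and are not taken from any of the studies discussed:

```python
import math

def pool_fixed_effect(studies):
    """Fixed-effect inverse-variance pooling of (OR, ci_lo, ci_hi) tuples."""
    weights, log_ors = [], []
    for or_, lo, hi in studies:
        # Recover the standard error of log(OR) from the 95% CI width
        se = (math.log(hi) - math.log(lo)) / (2 * 1.96)
        weights.append(1 / se**2)
        log_ors.append(math.log(or_))
    pooled_log = sum(w * l for w, l in zip(weights, log_ors)) / sum(weights)
    pooled_se = math.sqrt(1 / sum(weights))
    return (math.exp(pooled_log),
            math.exp(pooled_log - 1.96 * pooled_se),
            math.exp(pooled_log + 1.96 * pooled_se))

# Illustrative, made-up study results
base_studies = [(0.95, 0.85, 1.06), (1.05, 0.90, 1.22)]
high_or_study = [(1.80, 1.30, 2.49)]

print(pool_fixed_effect(base_studies))                  # pooled OR near 1.0
print(pool_fixed_effect(base_studies + high_or_study))  # pulled upward
```

Because each study's weight is the inverse of its variance, adding or dropping one research group's precisely-estimated high-OR results shifts the pooled estimate, which mirrors the Hardell-inclusion dynamic Choi et al. describe.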
So: what's the takeaway from all these meta-analyses? At a high level: that the evidence is heterogeneous; that there doesn't seem to be an association between general phone use and cancer, but that there may be one with longer-term and especially ipsilateral use. Those conclusions are contested, and the literature continues to evolve.
But you may be looking at the very rigorous Karipidis paper and wondering something like "their general-use findings are roughly in line with these other analyses, but their duration/intensity findings aren't. Why is that?" Karipidis did look at cumulative call time and number of calls in some cases, and reported that they didn't see a consistent upward trend in those strata.
An important piece of context is that Karipidis et al. isn't a meta-analysis. For a variety of reasons they did not feel comfortable performing a pooled meta-analysis, and instead chose to do a systematic review with risk-of-bias / GRADE scoring to come to conclusions on certainty of evidence. (A meta-analysis is a sort of quantitative "pooling" of several studies; a systematic review is a more quantitative-qualitative blended analysis of them, although still procedural and sometimes including subgroup meta-analyses.) So it is also just a different approach to looking at the data than the meta-analyses take.
So the difference arises from a couple places, including their inclusion of cohort studies, not just case-control ones (as in the meta-analyses) and their RoB and evidence certainty scoring. But really, the heart of the disagreement, at least as I read it, is here (emphasis mine):
Consistently with the published protocol, our final conclusions were formulated separately for each exposure-outcome combination, and primarily based on the line of evidence with the highest confidence, taking into account the ranking of RF sources by exposure level as inferred from dosimetric studies, and the external coherence with findings from time-trend simulation studies (limited to glioma in relation to mobile phone use).
In plainer English: Karipidis et al. (per their protocol design) evaluated how reasonable they think the outcomes of studies are before considering how to incorporate them into the evidence output. This is unlike the meta-analyses of case-control studies, which set inclusion/exclusion criteria and then quantitatively pool the results from the included studies. This leads to a wide gap in the findings. Itâs hard for me to trace the math exactly, but I believe the biggest impact comes here (emphasis mine):
Three of these simulation studies consistently reported that RR estimates > 1.5 with a 10+ years induction period were definitely implausible, and could be used to set a "credibility benchmark". In the sensitivity meta-analyses of glioma risk in the upper category of TSS excluding five studies reporting implausible effect sizes, we observed strong reductions in both the mRR [mRR of 0.95 (95% CI = 0.86-1.05)], and the degree of heterogeneity across studies (I² = 3.6%).
That is: Karipidis looked at simulation studies, which projected cancer rates and compared them to actual numbers from cancer registries. Those studies suggested that relative risk estimates above 1.5 with 10+ year induction periods were "definitely implausible," and on that basis, Karipidis excluded from the sensitivity analyses of glioma risk the five studies that reported effect sizes above that threshold. This effectively cut out any data suggesting meaningful effects, and as such, pushed their conclusions significantly toward "no effect."
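To make the mechanics of that exclusion concrete, here is a toy sketch of fixed-effect inverse-variance pooling. The numbers are hypothetical (not the actual studies involved); the point is only to show how dropping studies above an RR "credibility benchmark" of 1.5 pulls a pooled estimate toward null:

```python
import math

# Hypothetical (RR, CI low, CI high) values -- illustration only, not the
# actual studies considered by Karipidis et al. or the meta-analyses.
studies = [
    (0.9, 0.7, 1.2),
    (1.1, 0.9, 1.4),
    (1.0, 0.8, 1.3),
    (1.8, 1.2, 2.7),  # above the RR > 1.5 "credibility benchmark"
    (2.1, 1.3, 3.4),  # above the benchmark
]

def pool(rows):
    """Fixed-effect inverse-variance pooling on the log-RR scale."""
    num = den = 0.0
    for rr, lo, hi in rows:
        se = (math.log(hi) - math.log(lo)) / (2 * 1.96)  # SE from CI width
        weight = 1 / se ** 2
        num += weight * math.log(rr)
        den += weight
    return math.exp(num / den)

all_rr = pool(studies)
benchmark_rr = pool([s for s in studies if s[0] <= 1.5])  # drop RR > 1.5
print(f"pooled RR, all studies: {all_rr:.2f}")
print(f"pooled RR, benchmark applied: {benchmark_rr:.2f}")
```

With these made-up inputs, the pooled estimate sits meaningfully above 1 with all studies included and collapses to roughly null once the high-RR studies are removed, which is the shape of the shift described above.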
This is similar to the approach noted in Röösli et al., 2019, Annual Review of Public Health, another skeptical analysis (also not a straightforward meta-analysis):
For glioma and acoustic neuroma, the results are heterogeneous, with few case-control studies reporting substantially increased risks. However, these elevated risks are not coherent with observed incidence time trends, which are considered informative for this specific topic owing to the steep increase in MP use, the availability of virtually complete cancer registry data from many countries, and the limited number of known competing environmental risk factors. In conclusion, epidemiological studies do not suggest increased brain or salivary gland tumor risk with MP use, although some uncertainty remains regarding long latency periods (>15 years), rare brain tumor subtypes, and MP usage during childhood.
I think these are reasonable choices, and both those papers are valuable contributions to the literature. However: I am personally most interested in doing my own evaluation of what is plausible and what is not (that's the purpose of this post!). I agree with Karipidis and Röösli (et al.) that population-level cancer rates are one of the most compelling arguments against an association between mobile phone use and cancer - but I would prefer to address that argument separately (which I will) rather than removing actual data from consideration because of it. Said another way: I want to treat that argument as its own stage, rather than using it to cut out data.
So, basically, as I read it, the net of all these analyses is something like this:
- there is broad agreement from both meta-analyses and systematic reviews that simply using mobile phones is not associated with cancer risk
- but pooled meta-analyses tend to show a statistically significant association between heavy usage and/or ipsilateral usage and increased brain cancer risk
- the inclusion or exclusion of Hardell's studies (moves towards positive) or the INTERPHONE studies (moves towards protective / null) can meaningfully shift the conclusions
- high-profile systematic reviews have chosen to exclude positive findings due to perceived incoherence with population-level trends
So: take from that what you will.
A note on tumor concordance
As noted at the outset of this piece, we are trying to figure out whether there is coherent, consistent evidence that, summed up, points to a high likelihood of risk. One place you would look for that is in similarities between tumor characteristics in the different types of studies.
If you recall from the NTP and Ramazzini animal studies above, the tumors they found were schwannomas and gliomas. Schwannomas arise from Schwann cells (the cells forming myelin sheaths around nerves). Gliomas arise from glial cells in the brain.
These are the exact same cell types implicated in human studies:
- Acoustic neuromas (vestibular schwannomas): tumors of the hearing nerve
- Gliomas: the most common malignant brain tumor
The animal studies found heart schwannomas as opposed to vestibular ones - although this isn't necessarily surprising, since the animal studies used full-body radiation rather than the narrower "phone to head" radiation of the epidemiological studies. But it's the same cell type. And the gliomas overlap directly.
Finding the same rare tumor types in the tissues where the radiation was delivered, between controlled animal experiments and large-scale observational studies, is very strong evidence.
Biological plausibility
Now that we've looked at animal and human studies on the carcinogenicity of RF-EMF, we can turn to studies and proposals on the biological mechanisms that may lead to cancer.
This is, of course, tricky. Cancer is a multistage process, and carcinogenesis can be promoted in many different ways: oxidative stress, disrupted cell signaling, impaired DNA repair, chronic inflammation, and more.
Often when this topic is brought up, people raise the physics argument that non-ionizing radiation lacks the energy to break chemical bonds, which means that the sort of RF-EMF we are examining can't do direct DNA damage via ionization (which would be an obvious carcinogenic path). This is true.
But there are of course plenty of established carcinogens that don't work through direct ionization. Here is, for example, a page from a U.S. Surgeon General's Advisory on Alcohol and Cancer risk:

Alcohol can cause cancer by being broken down into acetaldehyde, by inducing oxidative stress, through endocrine disruption, and by amplifying the effects of other carcinogens. Even more pathways are proposed in the cited paper (Rumgay et al. (2021), Nutrients).
It is common for carcinogenic substances to have multiple pathways that contribute to cancer risk. This may also be the case with RF-EMF. Here we'll review a few of the plausible mechanisms by which RF-EMF may affect human biology and physiology.
Note that - at least as we're approaching the topic in this post - these arguments need not draw a straight line all the way through to clear impacts on cancer risk to be relevant. The clearer, the better, but under our Bradford Hill criteria, it is simply useful to understand evidence for biological outcomes that may lead to carcinogenesis, in support of potential causality.
Oxidative stress
There are a lot of studies, both in vivo and in vitro, on the impact of RF-EMF on oxidative stress markers. In general, my view is that for a mechanism like this - where we are seeking an effect that can be reasonably generalized (or generally rejected) for our Bradford Hill causality evaluation - it is best to look at the corpus in aggregate rather than point at individual studies.
This differs at some level from the carcinogenicity studies, where we of course want the full set of evidence as context, but where it is also highly informative to examine individual studies that show the "full cycle" (from exposure to the investigated outcome of cancer).
Unfortunately, the two most-cited reviews I have found looking at RF-EMF's impact on oxidative stress each have flaws that make them unhelpful to rely on fully (don't worry, we'll still get to a conclusion). Each one leans a different way.
As part of their recent series of RF-EMF reviews, the WHO commissioned one on oxidative stress, Meyer et al. (2024), Environ Int. The findings were effectively equivocal:
The evidence on the relation between the exposure to RF-EMF and biomarkers of oxidative stress was of very low certainty, because a majority of the included studies were rated with a high RoB level and provided high heterogeneity
They don't come to any conclusions with anything beyond "low certainty," citing high risk-of-bias and high heterogeneity between the studies. There have been many studies - why could they not get to certainty? On this front, I find the ICBE-EMF group's critique of the paper very compelling (Melnick et al. (2025), Environ Health). Here are some excerpts, although I suggest reading the whole critique if you are interested, in addition to the original paper of course. (Note that I don't endorse all the perspectives in this paper, which is a critique of all 12 of the WHO's recent systematic reviews on RF-EMF, cited throughout this post. Also note that Melnick led the design of the NTP cell phone carcinogenicity study discussed earlier, during his 28-year NIH / NTP tenure.)
Much of this discrepancy is due to the excessive exclusion of relevant studies in SR9. Of the 897 articles that Meyer et al. considered eligible for their review, only 52 were included in their MAs; 360 studies were excluded because the only biomarker reported in those articles was claimed to be an invalid measure of oxidative stress
âŠ
Meyer et al. excluded studies in which TBARS [Thiobarbituric acid reactive substances] was reported as the sole measure of oxidative stress because according to Henschenmacher et al., non-oxidative stress reactions including metabolism can also produce TBARS. However, there is no evidence that exposure to RF-EMF activates metabolic pathways that can produce TBARS.
âŠ
Another exclusion criterion for this SR that resulted in exclusion of 63 studies was "no sufficient exposure contrast"; the external electric field strength (E) field must be greater than 1 V/m or the power flux density (PD) must be greater than 2.5 mW/m². However, when exposure to RF-EMF produces a statistically significant increase in a biomarker (including increase in 8-OHdG) in exposed vs. sham samples, and those effects are reduced in the presence of an inhibitor of oxidative damage, then such changes represent meaningful RF-EMF-induced oxidative effects.
âŠ
They go on, discussing the exclusion of studies in non-mammalian animal species, the exclusion of studies with mobile phones where output power wasn't reported, the exclusion of studies where DCFDH was used as the biomarker, and more, summarizing:
Due to the exclusion of studies relevant to the objective of SR9, we conclude that the authors' conclusions are severely deficient and unreliable.
So - in echoes of the WHO review on the human carcinogenicity studies discussed above - the inclusion/exclusion criteria for this paper's analysis are disputed. And of course, if you cut out large chunks of the relevant studies, you may end up with lower-certainty results.
But there is also a second problem with the oxidative stress systematic review: the authors ended up with just 52 studies (after the exclusions above - 360 studies were excluded because the biomarker reported was claimed to be an invalid measure of oxidative stress). But then, for analysis, the authors divided those 52 studies into 19 subgroups, each quantitatively analyzed.
Once you get down to groups that small (just a couple studies per group), of course you are going to end up with low certainty evidence. As the ICBE-EMF group says (emphasis mine):
Indicators of oxidative stress may occur in all organs of all animal species, and are not necessarily specific to only the brain, liver, blood, testis, or ovary of exposed rodents or rabbits, nor is it specific to only in vivo or in vitro studies, and most certainly it is not specific to measurements of only oxidized DNA bases, oxidized lipids, or certain modified proteins or amino acids. Subdividing the 52 primary studies into 19 subgroups resulted in very few studies in most categories; this diluted the overall effect and weakened the significance of the [meta-analyses] reported by the authors of SR9.
The authors here don't claim that RF-EMF does not cause oxidative stress - they simply say there is low certainty of evidence, along with some weak suggestions in each direction (maybe yes for oxidative stress in testes, serum, and thymus; maybe no for brain, liver, blood, etc.).
Also, despite all of this, there are several findings in the paper that report statistically significant impacts, even given the authors' definitions and criteria. These then get rolled into overall "low certainty" total evidence assessments, but the findings stand, including the following. (Note that these are Standardized Mean Differences, not Odds Ratios, so the null is 0, not 1. Anything more than 0 is a positive finding, and a result is statistically significant when the parenthetical 95% CI range is also entirely greater than 0.)
- modified proteins and amino acids in the liver of rodents: SMD 0.55 (0.06-1.05)
- oxidized DNA bases in plasma of rodents: SMD 2.55 (1.27-3.24) - they consider this a large effect
- oxidized DNA bases in testes of rodents: SMD 1.60 (0.62-2.59), another large effect
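Since this piece switches between odds ratios (null at 1) and SMDs (null at 0), here is a tiny sketch of how to read the intervals above. The significance rule is simply that the 95% CI excludes zero:

```python
# Helper for reading the SMD findings listed above: SMDs are null at 0
# (unlike odds ratios, which are null at 1), so a result is significant
# at the 5% level when its 95% CI lies entirely on one side of zero.
def smd_significant(lo, hi):
    """True when the 95% CI excludes 0."""
    return lo > 0 or hi < 0

# The three findings quoted from Meyer et al.: (SMD, CI low, CI high)
findings = {
    "modified proteins/amino acids, rodent liver": (0.55, 0.06, 1.05),
    "oxidized DNA bases, rodent plasma": (2.55, 1.27, 3.24),
    "oxidized DNA bases, rodent testes": (1.60, 0.62, 2.59),
}
for name, (smd, lo, hi) in findings.items():
    print(f"{name}: SMD {smd} -> significant: {smd_significant(lo, hi)}")
```

All three intervals sit entirely above zero, which is what makes them statistically significant positive findings despite the overall "low certainty" rating.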
Going the other direction, we have Yakymenko et al. (2015), Electromagn Biol Med, whose conclusions are often cited as the key evidence for oxidative stress:
Analysis of the currently available peer-reviewed scientific literature... indicates that among 100 currently available peer-reviewed studies dealing with oxidative effects of low-intensity RFR, in general, 93 confirmed that RFR induces oxidative effects in biological systems
This represents an alternative approach to a review: erring on the side of inclusion rather than exclusion, and writing it as a narrative review rather than doing whole or subgroup meta-analyses. I, admittedly, have not reviewed all 100 of the cited studies. But the approach Yakymenko et al. have taken here aims for an overview of the potential impacts, rather than the sort of pre-determined protocol with explicit criteria that Meyer et al.'s paper used.
Their claim that many studies have shown oxidative effects is, I believe, relatively undisputed. But between the Meyer camp's exclusions and the Yakymenko group's narrative overview, it is hard to quantitatively synthesize a level of confidence in these outcomes. At minimum, I think, looked at together, they strongly suggest biological plausibility, if not biological certainty.
The paper I wish were a systematic review is Schuermann & Mevissen (2021), Int J Mol Sci (you may recognize Mevissen as the lead author on the WHO-commissioned review of animal cancer bioassays from earlier). This paper is a narrative review of the evidence, similar to Yakymenko's approach although more granular. Its conclusion is quite favorable to the idea that RF-EMF can cause oxidative stress (emphasis mine):
In summary, indications for increased oxidative stress caused by RF-EMF and ELF-MF were reported in the majority of the animal studies and in more than half of the cell studies.
âŠ
Investigations in Wistar and Sprague-Dawley rats provided consistent evidence for oxidative stress occurring after RF-EMF exposure in the brain and testes and some indication of oxidative stress in the heart. Observations in Sprague-Dawley rats also seem to provide consistent evidence for oxidative stress in the liver and kidneys.
âŠ
A trend is emerging, which becomes clear even when taking methodological weaknesses into account, i.e., that EMF exposure, even in the low dose range, may well lead to changes in cellular oxidative balance
Again, this was not a systematic review, and they did not do meta-analyses on subgroups or on the results as a whole. But in what I consider to be the most rigorous and reasonable treatment of the matter, Schuermann & Mevissen come out with a strong statement of biological plausibility.
This evidence, for me, is sufficient regarding the mechanistic plausibility of RF-EMF leading to biological effects that can be upstream of cancer.
One other interesting study to add to the mix - although certainly not conclusive - is Irigaray et al. (2018), Int J Mol Med, where they find that:
overall ~80% of [electrohypersensitivity] self-reporting patients present with one, two or three detectable oxidative stress biomarkers in their peripheral blood, meaning that these patients - as is the case for cancer, Alzheimer's disease or other pathological conditions - present with a true objective new pathological disorder.
Electrohypersensitivity is a topic beyond the scope of this essay. But this paper found that at least for people who self-report the condition (which is defined further in the paper), oxidative stress markers were elevated. I treat this merely as an interesting note here in support of the perspective from the reviews above.
Now, you may ask: what is the mechanism by which the RF-EMF is causing the oxidative stress itself? This is a fair question - and one I would like to see much more work done on. We need not have an answer to that molecular question to arrive at sufficient confidence under Bradford Hill criteria of causation. But we should answer it anyway.
One possible proposal for that precursor question is the following:
VGIC activation
Cell membranes contain ion channels that respond to voltage changes - Voltage Gated Ion Channels (VGICs). One proposed molecular mechanism for biological effects of EMF is the aberrant activation of these channels, causing inappropriate calcium influx (specifically through Voltage Gated Calcium Channels, or VGCCs).
This is a polarizing and hotly disputed perspective. Some researchers say it is central; others dispute its relevance at typical exposure levels. (I'm even aware of some fringe open questions about the ion channel model of the cell that could possibly discredit this mechanism, but also potentially provide more clarity that explains the observed behavior and effects.)
It has primarily been advanced by Martin Pall, Ph.D., beginning with Pall (2013), J Cell Mol Med; he has since published several other papers on the topic.
If RF-EMF can indeed cause inappropriate calcium influx, it would not be surprising to see downstream effects that could lead to tumor development. Excess calcium influx is known to trigger nitric oxide production, highly-reactive peroxynitrite formation, free radical cascades, and downstream oxidative stress.
There has been recent interesting work advancing this theory, namely Panagopoulos et al. (2025), Front Public Health. They propose further that the perceived impacts of RF-EMF are actually not due to the carrier waves (in the RF-EMF spectra), but rather to the Extremely Low Frequency / Ultra Low Frequency EMF that accompanies them in the form of their modulation, pulsation, and variability characteristics, and:
This condition induces parallel and coherent low-frequency forced oscillations of mobile ions and other charged/polar molecules in living tissues. The IFO-VGIC mechanism has described how such oscillations induce dysfunction of VGICs in the membranes of all cells resulting in altered intracellular ionic concentrations
If true, this resolves one of the main critiques I have of Pall's work: if the observed effects were caused by the EMFs exerting direct force on the VGIC sensors themselves, this would require very large applied fields - and it is not clear to me how that would be the case at the intensities of RF-EMF we are looking at.
However, Panagopoulos et al. suggest that the ELF/ULF fields are able to induce coordinated forced oscillations of ions already within the channels (rather than directly affecting the sensors). These intrachannel ions are very close to the sensors. And the forces then exerted on the sensors by the nearby ions oscillating can have a much greater impact.
As always when considering anything about EMFs, distance is the key factor. Electromagnetic fields tend to decrease in strength with the inverse second or third power of distance (depending on near- vs. far-field behavior and the specifics of the source) - which means a very, very rapid decrease as you move away.
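A quick numerical sketch of that falloff, normalizing everything to a reference distance of 1 cm (the distances and the "phone at the ear" framing are my own illustrative assumptions):

```python
# Relative falloff factors at a few distances, normalized to r = 1 cm
# (roughly "phone at the ear"). Inverse-square applies to far-field
# power density; some near-field components fall off as the inverse cube.
falloff = {r: (1 / r ** 2, 1 / r ** 3) for r in (1, 5, 30, 100)}
for r, (inv_sq, inv_cube) in falloff.items():
    print(f"{r:>4} cm   1/r^2: {inv_sq:.6f}   1/r^3: {inv_cube:.8f}")
# At 30 cm the inverse-square factor is already 1/900 of the 1 cm value.
```

Even under the gentler inverse-square scaling, moving a source from 1 cm to 30 cm cuts the relative factor by nearly three orders of magnitude, which is why distance dominates every EMF exposure discussion.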
But if the VGCC sensors are being impacted by the fields generated by coherently oscillating in-channel ions (less than a nanometer away from them), then far less field strength is required to "tip" the sensor than would be needed from an RF-EMF source outside the body. They argue that the RF-EMF (or more precisely its related ELF/ULF modulation fields), due to its anthropogenic polarized/coherent character, can force ions to move in tandem, and those ions then naturally emit their own field which impacts the sensors. (There is a funny aside here: many so-called "EMF protection" products which I consider to be useless - although there are legitimate ones! - claim to make fields "coherent." If this theory is correct, perhaps coherence is the exact opposite of what you want! And I don't believe those "coherence" products can do so anyway, even if it were desirable.)
I will not - and frankly, don't have the capacity to - critically evaluate these theories further than I have above, but I reference the proposal here as one of the leading theories as to the actual molecular / cellular mechanism that could lead to effects such as oxidative stress, which could then impact the chain of potential carcinogenicity.
DNA damage
Aside from oxidative stress, the NTP follow-up study (Smith-Roe et al. (2019), Environ Mol Mutagen), as noted earlier, found significant DNA strand breaks in certain organs of exposed animals - not through direct ionization, but through these indirect mechanisms.
DNA damage is another biological mechanism upstream of carcinogenicity. As discussed multiple times, RF-EMF is not ionizing, so it cannot cause damage by directly breaking chemical bonds; but there are indirect pathways that could produce it.
More broadly, Weller et al. (2025), Front Public Health performed a recent scoping review and evidence map study of RF-EMF genotoxicity and found that across 500 studies (emphasis mine):
The evidence map presented here reveals statistically significant DNA damage in humans and animals resulting from man-made RF-EMF exposures, particularly DNA base damage and DNA strand breaks. The evidence also suggests plausible mechanistic pathways for DNA damage, most notably through increased free radical production and oxidative stress. Sensitivity to damage varied by cell type, with reproductive cells (testicular, sperm and ovarian) along with brain cells appearing particularly vulnerable.
âŠ
Overall, there is a strong evidence base showing DNA damage and potential biological mechanisms operating at intensity levels much lower than the ICNIRP recommended exposure limits.
And, very interestingly:
A complex relationship was identified between exposure intensity and duration, with duration emerging as a critical determinant of outcomes. A complex U-shaped dose-response relationship was evident, suggesting adaptive cellular responses, with increased free radical production as a plausible mechanism.
I will leave this item here as another plausible mechanism - but not a certain one at this time. I also note that the enhancement effect observed by Ruediger (similar to that of alcohol, as referenced above) is a somewhat common finding in the EMF study field, and I suspect it may hold part of the key to understanding this potential toxin.
And more?
There are also other proposed mechanisms, including gene expression impacts (see Lai & Levitt (2025), Rev Environ Health).
But overall, my goal with this biological plausibility section is to address the "physics impossibility" argument, which assumes direct ionization is the only mechanism by which EMF could impact humans. Biology is more complex than that. Multiple plausible mechanisms have been identified and replicated across laboratories.
With that said, again - there is a ton of opportunity for more mechanistic studies around EMFs.
I expect and hope that in the coming decades we will see a much more evolved understanding of electromagnetism's effects on our cells. The field of bioelectricity is gaining steam and feels like a piece of the puzzle.
An aside on bias
In the scientific literature on EMFs, there are a lot of accusations of funding-source bias and other ad-hominem attacks on people's perspectives, including many allegations of authors seeking to prove their existing perspectives, priors, and earlier work correct. And there are a lot of questions about whether industry funding and firewall approaches are biasing factors.
To quantify this, studies have been done on results by funding source, with a variety of findings. The allegations tend not to be that anyone is lying, but that funding sources can bias study design, statistical choices, editorial choices, and publication decisions in ways that lean results one way or the other.
After much debate, I have decided not to spend time analyzing any of this in this essay.
Conflicts and biases can undoubtedly be real. But from my perspective, the reason you would need to discuss them is if you believe someone is lying or faking data - that is, if the objective data you can read in the paper is suspected to actually not be real due to intentional fabrication by the author.
But if you're willing to assume that the data is real, then I think it is much better to simply look in detail at the data and the methodologies - as we have done above - and draw your own conclusions. That means reading beyond the abstracts and actually considering the specifics. If the authors have made methodological choices you don't like, you should take that into account, but it doesn't matter whether they did so out of bias, oversight, or simply a different honest perspective.
I'm choosing to view all authors mentioned in this piece as good-faith contributors to the discourse. I have no direct reason to suspect otherwise. And we will simply look at the reported details and draw our own conclusions.
You could be the most financially-conflicted person in the world with the purest scientific heart; or the most unconflicted, but with some personal perspective you desperately want to prove right for one reason or another. There's no way to know. So instead, let's stick to the data and methodologies and do the work of considering them as they stand.
Section III: Returning to the skeptical arguments
Before we go to our Bradford Hill assessment of the evidence, let's cover the skeptical arguments, as they should be factored in.
We'll return to the arguments I listed at the beginning - the reasons you might think EMF concerns are nonsense, and why I originally did too - and address them directly. As you'll see, I find most of them not credible and unsupported by the science, but some remain compelling.
"The physics says it's impossible"
As discussed in the mechanisms section: direct ionization (and thermal effects) aren't the only pathways to biological harm. Oxidative stress, ion channel disruption, and indirect DNA damage are all plausible mechanisms with experimental support. There are likely others we have yet to discover as well!
Moreover, by way of simple analogy: there are clearly systems in our body that are sensitive to non-ionizing, non-thermal electromagnetic radiation in other parts of the spectrum - like blue light. Based on the evidence, I think the burden is on the side arguing "this can't possibly have any physical effects" rather than the side saying that it could.
"The authorities say it's safe"
First of all: I believe that in the US, the authorities have taken a fundamentally flawed approach (and are currently under an appeals court order to further explain their reasoning or update it). For more, see my prior article on the flawed assumption of US regulation of RF-EMF.
I will not repeat those arguments here, but the heart of the matter is that they have explicitly taken the stance that they do not need to protect from non-thermal effects of RF-EMF, despite the positive findings of the NTP study that was commissioned for exactly that reason (among others).
But beyond that, the global authorities are not actually harmonized on this matter. Ramirez-Vazquez et al. (2024), Environ Res, looked at country-by-country regulations and found:
The international reference levels established by ICNIRP are also recommended by WHO, IEEE and FCC, and are adopted by most countries. However, some countries such as Canada, Italy, Poland, Switzerland, China, Russia, France, and regions of Belgium establish more restrictive limits than the international ones.
There are also more specific precautionary measures that have been taken, including France's and Israel's bans on WiFi in nurseries and mandates that it be minimized in schools (France, Israel).
The "consensus" is not as solid as it appears, and more generally, I believe the evidence we've examined stands on its own without appeals to authority.
"The results are mixed - you're cherry-picking"
We've touched on this several times in this piece, and will again in the conclusion. But it is important to address, because it reflects a fundamental misunderstanding of how science works. Going back to the beginning of this piece:
A null finding is not evidence of no effect.
As I explained in the methodology section, failing to reject the null hypothesis means "we couldn't detect an effect under this study design." It doesn't mean "we proved there is no effect."
A study might fail to find an effect because:
- The effect is real but small
- The study was underpowered
- The exposure assessment was crude
- The follow-up was too short
- The comparison group was also exposed
You can't average a positive finding with a non-finding and conclude "the truth is in the middle." The non-finding might just reflect a study that wasn't capable of detecting the effect.
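A quick simulation makes this concrete. The numbers here are pure assumptions for illustration (a 13% vs. 10% incidence difference, 200 participants per arm - not drawn from any EMF study): even when the effect is real, a small study usually comes back "null":

```python
import math
import random

random.seed(0)

def simulated_study(n_per_arm, p_exposed, p_control):
    """Simulate one cohort and return True if a simple two-proportion
    z-test reaches significance at p < 0.05 (two-sided)."""
    cases_e = sum(random.random() < p_exposed for _ in range(n_per_arm))
    cases_c = sum(random.random() < p_control for _ in range(n_per_arm))
    p1, p2 = cases_e / n_per_arm, cases_c / n_per_arm
    p_pool = (cases_e + cases_c) / (2 * n_per_arm)
    se = math.sqrt(2 * p_pool * (1 - p_pool) / n_per_arm)
    return se > 0 and abs(p1 - p2) / se > 1.96

# A real effect (13% vs 10% incidence -- hypothetical numbers), but only
# 200 participants per arm: most simulated runs fail to reach significance.
null_rate = sum(
    not simulated_study(200, 0.13, 0.10) for _ in range(500)
) / 500
print(f"{null_rate:.0%} of simulated small studies miss the real effect")
```

In this toy setup the large majority of runs are "null" despite the effect being real by construction - each of those null results is an underpowered study, not evidence of no effect.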
If I were cherry-picking random noise, you'd expect the positive findings to be scattered incoherently - different tumor types, different exposure patterns, no biological logic.
Instead, the positive findings cluster in ways that make sense:
- Same tumor types (schwannomas, gliomas) across human and animal studies
- Same anatomical locations (temporal lobe, where phones are held)
- Dose-response relationships (more hours = more risk)
- Laterality effects (ipsilateral > contralateral)
- Longer latency = stronger effects
This is why coherence is one of the Bradford Hill criteria we are looking at.
"The people worried about this are kooks"
This is the vibes argument, and I get it. I felt it too. But an ad-hominem is not a sufficient reason to dismiss the evidence.
The perception that "only cranks worry about this" is partly a function of how the debate has been framed in the media, and partly because people who express concern get associated with less rigorous adjacent communities. But the scientific evidence exists independently of who's talking about it.
"We'd see it in cancer statistics"
This is, to me, the strongest evidence-based skeptical argument against RF-EMF carcinogenicity - and the one I'm most interested to watch unfold over the coming years (and the one I expect has the strongest chance of adjusting my conclusions if the data trends a certain way).
I will get into the details on this point in just a moment. The summary is that it is not a cut-and-dried issue, but population incidence trends of glioma and other cancers have indeed remained quite stable - and certainly lower than would be expected if a large percentage of the population faced exposure levels similar to the "heavy users" of INTERPHONE and other epidemiological studies and suffered an OR of 1.4 or so. This is why I find the argument compelling.
On the other hand, there are data supporting some incidence trends (particularly possible increases in temporal lobe glioma), and moreover, there are many confounding factors (latency, changing usage patterns, data challenges, diagnostic changes, etc.).
We'll break the evidence down, but I think important context on this line of argumentation comes from de Vocht (2021), Bioelectromagnetics (de Vocht has published several papers skeptically looking at incidence trends, which we will also reference shortly):
Ecological data [note: incidence trends are an example of this] are generally considered weak epidemiological evidence to infer causality, and the presented data provide little evidence to confirm or refute mobile phone use or RF radiation as a cancer hazard.
Said another way: looking at incidence trends can indeed be helpful for assessing whether an exposure plausibly has a huge effect size or not; for adding to the mix of Bradford Hill coherence evaluation; for considering population-level risk (separate from the question of usage levels or sensitivity); and for ruling out extreme findings. But incidence / ecological data alone is not sufficient to prove or disprove causality, in large part because there are simply so many confounding factors at play.
You can think of this along the lines of null findings, as discussed earlier. Not finding a clear trend in the ecological data is an absence of evidence, not evidence of total absence.
The evidence
There are many papers looking at different populations and taking different approaches to the question "what are the population-level trends in tumors hypothesized to be relevant to RF-EMF exposure?" This seems like it should be a simple question, but in practice it is quite complicated due to limited data (only a few countries have good enough cancer registries), changes in diagnostic approaches and coding, lack of specificity (e.g. missing data on tumor laterality), and confounding factors.
On the skeptical side, Karipidis et al. (2018), BMJ Open looked at brain tumor trends from the 80s through the early 2010s in Australia and modeled out various mobile phone use scenarios. They found:
The overall brain tumour rates remained stable during all three periods. There was an increase in glioblastoma during 1993-2002 (APC 2.3, 95% CI 0.8 to 3.7) which was likely due to advances in the use of MRI during that period. There were no increases in any brain tumour types, including glioma (-0.6, -1.4 to 0.2) and glioblastoma (0.8, -0.4 to 2.0), during the period of substantial mobile phone use from 2003 to 2013. During that period, there was also no increase in glioma of the temporal lobe (0.5, -1.3 to 2.3), which is the location most exposed when using a mobile phone. Predicted incidence rates were higher than the observed rates for latency periods up to 15 years.
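A note on reading these numbers: an annual percent change (APC) compounds year over year, so a seemingly small APC implies a substantial cumulative shift over a decade. A quick sketch of that arithmetic:

```python
def cumulative_change(apc_percent, years):
    """Total percent change implied by a constant annual percent change (APC)."""
    return ((1 + apc_percent / 100) ** years - 1) * 100

# APC 2.3 sustained over the 10 years 1993-2002 compounds to roughly +25%.
print(f"{cumulative_change(2.3, 10):.1f}%")
# The 2003-2013 glioma point estimate of -0.6 compounds to a small decline.
print(f"{cumulative_change(-0.6, 10):.1f}%")
```

This is why registry analyses focus on APCs: a sustained low-single-digit APC is a big deal over a latency-scale time window.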
On the other hand, Zada et al. (2011), World Neurosurg. looked at trends of the anatomic location of malignant brain tumors in the US in 1992-2006 and found:
Increased AAIRs of frontal (APC +2.4% to +3.0%, P ≤ 0.001) and temporal (APC +1.3% to +2.3%, P ≤ 0.027) lobe glioblastoma multiforme (GBM) tumors were observed across all registries, accompanied by decreased AAIRs in overlapping region GBMs (-2.0% to -2.8% APC, P ≤ 0.015). The AAIRs of GBMs in the parietal and occipital lobes remained stable.
And yet, Inskip et al. (2010), Neuro Oncol.:
we examined temporal trends in brain cancer incidence rates in the United States, using data collected by the Surveillance, Epidemiology, and End Results (SEER) Program… With the exception of the 20-29-year age group, the trends for 1992-2006 were downward or flat. Among those aged 20-29 years, there was a statistically significant increasing trend between 1992 and 2006 among females but not among males. The recent trend in 20-29-year-old women was driven by a rising incidence of frontal lobe cancers. No increases were apparent for temporal or parietal lobe cancers, or cancers of the cerebellum, which involve the parts of the brain that would be more highly exposed to radiofrequency radiation from cellular phones.
In the UK, de Vocht (2019), Environ Res. used a Bayesian counterfactual statistical analysis for 2006-2014 UK rates and found:
Increases in excess of the counterfactuals for GBM were found in the temporal (+38% [95% Credible Interval -7%,78%]) and frontal (+36% [-8%,77%]) lobes, which were in agreement with hypothesised temporal and spatial mechanisms of mobile phone usage, and cerebellum (+59% [-0%,120%]).
And previously, in 2016 in Environ Int. he found using a similar approach:
There is no evidence of an increase in malignant glioma, glioblastoma multiforme, or malignant neoplasms of the parietal lobe not predicted in the "synthetic England" time series. Malignant neoplasms of the temporal lobe however, have increased faster than expected. A latency period of 10 years reflected the earliest latency period when this was measurable and related to mobile phone penetration rates, and indicated an additional increase of 35% (95% Credible Interval 9%:59%) during 2005-2014.
As one might expect, there are serious methodological disagreements between these various camps. I, frankly, recognize my limits here and do not think I have the capacity to accurately adjudicate the nuances of these disputes. To give just one example of how tricky this can get (from Karipidis et al. (2025), Environ Int., responding to a critique from the ICBE-EMF):
There have also been shifts in classifying sub-types in updated editions of the WHO classification of tumours of the central nervous system; for example, the WHO 2000 classification induced a shift from anaplastic astrocytoma to glioblastoma. This is addressed in many of the included time-trend simulation studies, e.g. in (Choi et al., 2021, Karipidis et al., 2018) where reclassification of unclassified or overlapping brain cancers was shown to reduce increased trends in morphological or topological sub-types (such as in glioblastoma multiforme).
If the risk were large and widespread and within latency windows, registries should show it. If the risk is modest and confined, or if weâre still within a latency window, they may not. If latency is long and exposures are changing, registries become hard to interpret.
The ongoing reclassification of tumors; the country-by-country heterogeneity in approach, reporting rates, classifications, and exposure levels; and the noise from other factors are too much for me to assign anything but a low certainty to this overall data.
So what?
Is all this useless? No â not at all. It will factor into our Bradford Hill coherence viewpoint, as we will see shortly. There are at least a few takeaways from this data, however mixed and challenging it may be:
This puts bounds on the effect size of possible RF-EMF carcinogenicity at exposure levels and types that were typical in the 80s, 90s, and early aughts (which, of course, are not the same as today).
If RF-EMF at those levels was carcinogenic at an odds ratio of 2, 3, or more — we would have seen it much more clearly than even the most positive interpretation of the data above suggests.
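The bound comes from simple arithmetic: a population's overall rate is a weighted mix of exposed and unexposed rates, so the registry signal scales with both the relative risk and the fraction exposed. A back-of-envelope sketch (the ~6/100k baseline glioma rate is a rough illustrative figure, not taken from any specific registry above):

```python
def population_rate(baseline, p_exposed, rr):
    """Overall incidence when a fraction p_exposed of the population faces relative risk rr."""
    return baseline * ((1 - p_exposed) + p_exposed * rr)

baseline = 6.0  # glioma cases per 100,000 person-years (rough illustrative figure)
for rr in (1.4, 2.0, 3.0):
    for p in (0.1, 0.5):
        rate = population_rate(baseline, p, rr)
        pct = 100 * (rate / baseline - 1)
        print(f"RR={rr}, {p:.0%} exposed -> {rate:.2f}/100k ({pct:+.0f}%)")
```

An RR of 1.4 among 10% of the population nudges the overall rate only a few percent (easy to lose in registry noise), while an RR of 2-3 among half the population would mean a 50-100% jump that registries should not miss — which is the shape of the argument in the text.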
There's an interesting paper from Sato et al. (2019), Bioelectromagnetics, where they simulated the effects of heavy usage (per the INTERPHONE definition) in Japan based on when mobile phones started to become popular, and they found:
Under the modeled scenarios, an increase in the incidence of malignant brain tumors was shown to be observed around 2020.
Said another way: we are just entering the period where (at least according to their assumptions) increased incidence of tumors might begin to be detected. This is the latency issue — even though mobile phones boomed in the 90s and 2000s, one would not expect an immediate rise in cancer diagnoses. I haven't been able to find good analysis of Japanese brain tumor trends since 2020, although Kuratsu et al. (2014), Neuro Oncol. (I could not find the full text, apologies) showed that from 1989 through 2013 in Kumamoto Prefecture:
"Although the incidence of glioma remains lower in Japan than in Western countries, it is on the increase. Besides the aging of the population, environmental changes may account for the increase in the incidence of malignant gliomas in Japan."
And Matsumoto et al. (2021), Neurol Med Chir looked at Miyazaki Prefecture between 2007 and 2016 in a similar manner to the Kuratsu paper and:
"Although there were some differences between the two surveys, the IR [incidence rate] of PIT [primary intracranial tumors] showed a similar pattern in Kumamoto and Miyazaki, which are neighboring districts on Kyushu Island."
These may be chalked up to diagnostic advances. Again: there are many factors at play.
My overall take on this skeptical argument is that it is strong — and moreover, it is the most important place for us to continue to look if we want to falsify a prediction of RF-EMF carcinogenicity.
Brain tumor latency varies, but if we take it to be 10-30 years, we are now entering a critical time. Yes, exposure profiles have continued to vary significantly over time. But if we truly see no significant increases in relevant endpoint tumors (especially considering laterality and such), that would be a major input to consider in the analysis.
If we failed to see increases, it still may be that non-thermal RF-EMF can be carcinogenic, but the exposure profiles of the 10-30 years prior happened to not be sufficient (at least for large percentages of the population, potentially carving out sensitive users); and it still may be that non-thermal RF-EMF could have other effects; but it would be a powerful argument against the levels and usage at that time mattering.
Section IV: Bradford Hill assessment
We've now looked at the studies and responded to some skeptical points. So let's go back to our Bradford Hill viewpoints on causality and briefly touch on each. As a reminder, Bradford Hill isn't a "scorecard" where some threshold means "proven." It's a set of lenses for asking: if this association were causal, what patterns would we expect to see — and do we see them?
I'm going to apply each viewpoint specifically to the question this post has narrowed to:
is there reason to believe that RF-EMF, at exposure levels and durations typical in modern life, may cause increased risk of cancer, at least for sensitive members of the population?
I'll rate each criterion as Strong / Moderate / Mixed / Weak, with some having multiple ratings depending on the angle.
Strength of association
In the context of what were defined as heavy users in the early 2000s (but which I would now view as typical usage): Moderate. But for an overall association between RF-EMF exposure and cancer: Weak.
In the overall human literature (especially cohort-style analyses and broad "ever vs never" comparisons), the associations are typically near-null.
But in the highest exposure strata in several case-control datasets, and in multiple meta-analyses that focus on long duration / high cumulative call time (again, by 90s and aughts standards), reported effects are often in the ~1.2-1.6 range (sometimes higher depending on definitions and inclusion), which is moderate.
In animals (NTP, Ramazzini), the signal for certain tumors is meaningfully stronger under the specific experimental conditions tested, but translating magnitude from rodents to humans is inherently uncertain.
Consistency
Mixed to moderate.
In favor of consistency: similar directions show up repeatedly when you look specifically at longer duration / heavier cumulative use (and laterality), across multiple countries and in multiple meta-analyses.
But opposed: results vary a lot by study design (case-control vs cohort), by exposure definition (subscription vs call-time vs self-report bins), by comparison group (truly low exposure vs "less exposed"), and by inclusion/exclusion choices in reviews. Many reviews cite heterogeneity of studies.
On the animal studies, however, there is Moderate (or perhaps even Strong) consistency on the observed endpoints.
Specificity
Moderate. There's tumor-type specificity across lines of evidence: human concern clusters around glioma and schwannoma-derived tumors, with anatomical specificity around temporal-lobe proximity (aligning with near-field exposure).
But also: if RF-EMF indeed acts through upstream mechanisms like oxidative stress, you wouldn't necessarily expect a single tumor type or a single tissue to be the only outcome.
It's also worth noting that even the laterality findings are sometimes contested — there is dispute over whether side-of-use reporting is reliable. I tend to find those objections less persuasive, but the argument is out there.
So I'd say the evidence for specificity is strong, but only as strong as it can be for an exposure that acts through upstream mechanisms.
Temporality
Strong. When we observed a signal in the epidemiological studies, it tended to be in longer-latency categories, which is coherent with carcinogenesis and temporality. We have no reason to suspect any lack of temporality here (although we will address population-level trends separately).
Biological gradient
Mixed to Moderate. We do see repeated evidence of high cumulative call-time bins showing higher risk, but we also don't see smooth dose-response gradients along the way.
Another positive finding for biological gradient is the proximal exposure proxies (ipsilateral use; temporal lobe findings). Those parts of your brain get a bigger dose than the further-away parts, so the associations there are compelling.
But in general, the analyses don't show a clean dose-response curve. Plus, studies and reviews end up conflating different measures of "dose" — true cumulative dose to the organ, latency (time since first exposure), age at first exposure, estimated dose to the organ — and this creates a lot of noise in the dose-response analysis.
Plausibility
Moderate to Strong. I see the oxidative stress evidence as quite compelling for biological plausibility of a mechanism that is known to be upstream of carcinogenesis. We don't have a universally accepted mechanistic chain demonstrated in humans, but the plausibility is there.
The other mechanisms (genotoxicity, VGCC / signaling-type hypotheses) have weaker evidence, but also some interesting and positive findings.
Coherence
Mixed. This is the category where I see the greatest tension.
On one hand, we have very strong positive evidence for coherence in the overlap between animal study tumors and epidemiological observations. The same rare tumors — glioma, schwannoma — appearing in both is a very strong coherence signal.
But at the same time, we have very weak coherence between many of the studies and the population incidence trends. If the effect were large (which I'm not necessarily suggesting it is) and widespread, we would likely expect a clearer registry signal by now.
This doesn't refute causality — there are many reasons registry signals could be off — but it does constrain the potential magnitude: a large, universal effect would likely be easier to see.
Given the very strong tumor-type overlap but the weak population incidence trends (the skeptical argument addressed above), I rate coherence as mixed.
Experiment
Strong for animals; Weak for humans (and we probably can't have it any other way).
The animal carcinogenicity evidence (NTP; Ramazzini) is very strong and high confidence. It's a clean experimental pillar.
But for humans, long randomized trials aren't feasible, and given our suspicion that it is chronic exposure that matters (based on the epidemiological evidence), we may never have clear experimental human evidence, even if the effect is real.
So experiment is a major strength of the hazard argument generally (RF-EMF can be carcinogenic under some conditions), but it is less decisive on the impact in humans.
Analogy
Mixed. I tend to find analogy the squishiest Bradford Hill criterion, but there are reasonable analogies:
- Other non-ionizing exposures can have meaningful biological effects without direct ionization. (e.g. photochemical effects from visible light)
- Environmental carcinogens often show early signals in subsets / specific exposure windows before the story becomes âobviousâ in broader public-health terms
- ELF-MF (which we'll cover in Part II) has strong evidence for at least some increased carcinogenic risk (particularly increased incidence of childhood leukemia)
In summary
| Bradford Hill viewpoint | Ranking |
|---|---|
| Strength of association | Moderate for heavy use; Weak for overall use |
| Consistency | Mixed to Moderate |
| Specificity | Moderate |
| Temporality | Strong |
| Biological gradient | Mixed to Moderate |
| Plausibility | Moderate to Strong |
| Coherence | Mixed |
| Experiment | Strong for animals; Weak for humans |
| Analogy | Mixed |
So what does the Bradford Hill lens say, net?
The strongest causal supports are a) the experimental animal evidence, b) the specificity/coherence of tumor types between animals and humans, and c) plausible mechanisms.
The weakest / most contested parts are a) consistency, b) dose-response cleanliness, and c) coherence with population trends (which mainly puts pressure on effect size and generality, not on "is it physically impossible?").
My read is not "Bradford Hill = slam dunk, RF-EMF carcinogenicity in humans at typical levels is proven, QED."
It's more like "there's enough across experiments + mechanistic plausibility + patterned epidemiologic signals to treat RF-EMF as a credible hazard," while also admitting that the human epidemiology is heterogeneous and the population-trend tension means we should be cautious about claiming a large, universal risk increase under modern conditions.
And frankly, I think if we were talking about an exposure that was 1) less beloved by people (we do seem pretty attached to our phones), and 2) less lucrative for industry, we would be well into "treating this much more carefully" territory along with other toxins. But that's just my opinion.
Section V: Beyond carcinogenicity
We've focused this post on the potential carcinogenic effects of RF-EMF, because that is where the greatest body of evidence exists for us to analyze. But there are certainly other proposed endpoints with evidence of their own.
We could run the whole sequence again (in-depth examination of animal studies, epidemiological ones, and biological effects) and then weigh it all through the Bradford Hill viewpoints. Perhaps I will do so in a future post. But I will save you all of that and instead just give a brief overview of some perspectives regarding plausible non-carcinogenic effects of RF-EMF.
In general, if RF-EMF can perturb biology via pathways like oxidative stress, altered ion channel signaling, or indirect DNA damage (even if those pathways are debated), then we should also be looking at effects beyond "population-visible cancer spikes."
On one hand, these alternative endpoints may be much easier to find and analyze, as they could appear on shorter time horizons and concentrate in easier-to-analyze tissues. On the other hand, cancer is a pretty clean binary signal (although, as seen above, there is still plenty of room for debate and methodological variance), whereas some non-cancer endpoints can be noisier or harder to quantify (sleep, cognitive issues, etc.).
Regardless, here are some non-cancer endpoints that seem most worth looking at seriously.
Male fertility
Starting with the male side: sperm are outside the body, temperature-sensitive, and highly motile, which makes them a very strong biological sensor. We've also, in the last couple decades, started to carry around phones in our pockets, close to the testes. And indeed, meta-analyses and systematic reviews show:
Adams et al. (2014), Environ Int.:
Exposure to mobile phones was associated with reduced sperm motility (mean difference -8.1% (95% CI -13.1, -3.2)) and viability (mean difference -9.1% (95% CI -18.4, 0.2)), but the effects on concentration were more equivocal. The results were consistent across experimental in vitro and observational in vivo studies. We conclude that pooled results from in vitro and in vivo studies suggest that mobile phone exposure negatively affects sperm quality.
Houston et al. (2016), Reproduction:
Among a total of 27 studies investigating the effects of RF-EMR on the male reproductive system, negative consequences of exposure were reported in 21. Within these 21 studies, 11 of the 15 that investigated sperm motility reported significant declines
(Houston et al. also goes further to a biological mechanism by which this happens, "in which RF-EMR exposure leads to defective mitochondrial function associated with elevated levels of ROS production and culminates in a state of oxidative stress that would account the varying phenotypes observed in response to RF-EMR exposure," consistent with some of our oxidative stress evidence discussed earlier.)
Kim et al. (2021), Environ Res.:
Mobile phone use decreased the overall sperm quality by affecting the motility, viability, and concentration. It was further reduced in the group with high mobile phone usage. In particular, the decrease was remarkable in in vivo studies with stronger clinical significance in subgroup analysis. Therefore, long-term cell phone use is a factor that must be considered as a cause of sperm quality reduction.
As we are perhaps now used to seeing, there's a systematic review from the recent WHO-commissioned series that takes a lower-certainty perspective on the harm, Kenny et al. (2024), Environ Int.:
The evidence is very uncertain regarding the effects of RF-EMF from mobile phones on sperm concentration, morphology, progressive motility and total sperm count in the general public due to very low-certainty evidence
Similar to some of the earlier WHO-commissioned reviews, this low certainty rating primarily arises from extensive exclusion of studies and very narrow subgroup definition. As the ICBE-EMF critique (Melnick et al. (2025), Environ Health) notes:
Only nine studies (7 general public and 2 occupational) on male infertility were included in SR3A. … Each of the dose-response meta-analyses contained fewer than five studies (median = two studies) — too few to yield meaningful results. The paper did not report how well the models fit the data.
However, there was also a WHO-commissioned review of the evidence of impacts on male fertility in non-human mammals and in vitro human sperm (as opposed to the in vivo human studies of the Kenny et al. review), Cordelli et al. (2024), Environ Int.:
Among all the considered endpoints, the meta-analyses of animal studies provided evidence of adverse effects of RF-EMF exposure in all cases but the rate of infertile males and the size of the sired litters. The assessment of certainty according to the GRADE methodology assigned a moderate certainty to the reduction of pregnancy rate and to the evidence of no-effect on litter size, a low certainty to the reduction of sperm count, and a very low certainty to all the other meta-analysis results. Studies on human sperm exposed in vitro indicated a small detrimental effect of RF-EMF exposure on vitality and no-effect on DNA/chromatin alterations. According to GRADE, a very low certainty was attributed to these results.
I'm not claiming it is crystal clear that RF-EMF causes male infertility. The studies vary widely in exposure measurement quality, confounding control, and relevance to typical modern usage patterns. But the reproductive axis certainly looks like a plausible sensitive system.
Pregnancy and birth outcomes
On pregnancy/birth outcomes, the evidence base is smaller, and much of it is animal-experimental rather than human-observational. The WHO-commissioned review on animals (Cordelli et al. (2023), Environ Int.) showed:
There was high certainty in the evidence for a lack of association of RF-EMF exposure with litter size. We attributed a moderate certainty to the evidence of a small detrimental effect on fetal weight. We also attributed a moderate certainty to the evidence of a lack of delayed effects on the offspring brain weight. For most of the other endpoints assessed by the meta-analyses, detrimental RF-EMF effects were shown, however the evidence was attributed a low or very low certainty.
And from the paired human observational systematic review (Johnson et al. (2024), Environ Int.):
From pairwise meta-analyses of general public studies, the evidence is very uncertain about the effects of RF-EMF from mobile phone exposure on preterm birth risk (relative risk (RR) 1.14, 95% confidence interval (CI): 0.97-1.34, 95% prediction interval (PI): 0.83-1.57; 4 studies), LBW (RR 1.14, 95% CI: 0.96-1.36, 95% PI: 0.84-1.57; 4 studies) or SGA (RR 1.13, 95% CI: 1.02-1.24, 95% PI: 0.99-1.28; 2 studies) due to very low-certainty evidence.
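For readers unfamiliar with how pooled relative risks like these are produced: the standard approach is inverse-variance weighting on the log scale, with each study's weight derived from its confidence interval. A minimal fixed-effect sketch (the study numbers below are made up for illustration, not the actual inputs to the review above):

```python
import math

def pool(studies):
    """Fixed-effect inverse-variance pooling of (RR, ci_low, ci_high) tuples."""
    num = den = 0.0
    for rr, lo, hi in studies:
        log_rr = math.log(rr)
        se = (math.log(hi) - math.log(lo)) / (2 * 1.96)  # SE recovered from 95% CI
        w = 1 / se**2  # inverse-variance weight: precise studies count more
        num += w * log_rr
        den += w
    pooled_log = num / den
    se_pooled = math.sqrt(1 / den)
    return math.exp(pooled_log), (math.exp(pooled_log - 1.96 * se_pooled),
                                  math.exp(pooled_log + 1.96 * se_pooled))

# Hypothetical studies, each individually non-significant:
rr, (lo, hi) = pool([(1.20, 0.95, 1.52), (1.10, 0.90, 1.34), (1.25, 0.98, 1.60)])
print(f"pooled RR {rr:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```

Note how pooling tightens the interval: several individually borderline studies can produce a pooled CI that excludes 1.0, which is why study inclusion/exclusion choices (a recurring dispute in this literature) matter so much.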
If the standard is "high-certainty proof of harm at real-world exposures," we are not there. But if the standard is "is there enough reason to avoid unnecessary exposure during pregnancy when it's easy to do so?", I think a reasonable person may say yes based on this data.
Other endpoints
There are a variety of other endpoints with suggestive early evidence, but none at the point where I think the data supports moderate or high certainty. Some jumping-off points:
Bijlsma et al. (2024), Front Public Health. ran a small double-blind, placebo-controlled crossover study on RF-EMF's effect on sleep and found (emphasis mine):
Sleep quality was reduced significantly (p < 0.05) and clinically meaningful during RF-EMF exposure compared to sham-exposure as indicated by the PIRS-20 scores. Furthermore, at higher frequencies (gamma, beta and theta bands), EEG power density significantly increased during the Non-Rapid Eye Movement sleep (p < 0.05). No statistically significant differences in HRV or actigraphy were detected.
…
Our findings suggest that exposure to a 2.45 GHz radiofrequency device (baby monitor) may impact sleep in some people under real-world conditions however further large-scale real-world investigations with specified dosimetry are required to confirm these findings.
Wardzinski et al. (2022), Nutrients looked at whether RF-EMF exposure affects appetite (emphasis mine) in a small crossover study:
Exposure to both mobile phones strikingly increased overall caloric intake by 22-27% compared with the sham condition. Differential analyses of macronutrient ingestion revealed that higher calorie consumption was mainly due to enhanced carbohydrate intake. Measurements of the cerebral energy content, i.e., adenosine triphosphate and phosphocreatine ratios to inorganic phosphate, displayed an increase upon mobile phone radiation. Our results identify RF-EMFs as a potential contributing factor to overeating, which underlies the obesity epidemic.
Perhaps related, de Jenlis et al. (2020), Environ Pollut. looked at effects on RF-EMF exposure on rats with respect to weight:
Exposure to RF-EMF and/or noise was associated with body weight gain, with hyperphagia in the noise-only and RF-EMF + noise groups and hypophagia in the RF-EMF-only group.
Jin et al. (2025), Medicine (Baltimore) looked at the relationship between duration of mobile phone use and risk of stroke and found:
Inverse-variance weighted analysis showed a significant causality between DMPU and an increased risk of LAAS [large artery atherosclerosis] (odds ratio [OR] = 1.120; 95% confidence interval [CI] = 1.005-1.248; P = .040). No genetic association was found for [stroke overall, or other subtypes]
(Note that the LAAS finding was barely statistically significant, other subtypes did not find an association, and also I am not deeply familiar with the Mendelian randomization approach they followed.)
There are also a range of viewpoints and studies on the impact of RF-EMF on cognition and non-specific symptoms. The series of WHO-commissioned reviews covers some of these, and I generally agree with the low-certainty-of-evidence assessments.
Röösli et al. (2024), Environ Int. on tinnitus, migraine and non-specific symptoms:
For all five priority hypotheses, available research suggests that RF-EMF exposure below guideline values does not cause symptoms, but the evidence is very uncertain.
Benke et al. (2024), Environ Int. on human observation studies on cognition:
This systematic review and meta-analysis found only a few studies that provided very low to low certainty evidence of little to no association between RF-EMF exposure and learning and memory, executive function and complex attention… Further studies are needed to address all types of populations, exposures and cognitive outcomes, particularly studies investigating environmental and occupational exposure in adults
Clearly, more work is required. For now, cancer and fertility effects are the endpoints with the most robust body of research to go off of, but there are reasons to suspect and chase down other research directions as well.
And moreover, from a precautionary perspective: once you include all these possible endpoints, even those with low-certainty evidence (which is very different from high-certainty evidence of no effect!), the case for mitigating the possible effects strengthens, even if any one of them (cancer or otherwise) turns out to not have a true association.
Section VI: A wrench, and an odd possibility
We've gone through this whole essay implicitly — and occasionally explicitly — carrying with us an important, undefined phrase: "at exposure levels and durations typical in modern life."
While it is useful to consider the effects of varied exposures, what really matters for us — in answering the original, poorly-formed question "are EMFs bad?" — are the levels and durations typical in modern life.
I wish we could leave this term implicit, its definition trivial, obvious, and easily generalized. But this is not the case. In fact, I suggest it is nigh-undefinable at a population level, and it may have some very odd characteristics.
What is "exposure"?
Let's look at a single person. Their RF-EMF "exposure" is really the sum of two sub-exposures: near field exposure and far field exposure. We'll take a moment here to think about the physics of all this.
Near field exposure comes from RF-EMF sources "close by" to you. Far field comes from those further away. You can think of the line between those classifications of exposure as roughly 1/6th the wavelength of the wave in question. (More precisely, the boundary is around λ/2π, and there are multiple "near field" regions (reactive vs. radiating) that depend on antenna size and geometry.) In the near field, there are strong inductive and capacitive effects of the electromagnetic field, and in the far field, these effects are less pronounced.
RF-EMF radiation ranges from, say, 800 MHz (early 1G cell service) through tens of GHz (the higher-frequency parts of 5G). This corresponds to wavelengths ranging from around 37cm (a bit over a foot) down to around 10mm (under half an inch).
This means that near field effects come from sources (like cell phones) that are within inches (or less) of your body. Far field effects come from anything further away.
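The frequency-to-wavelength-to-boundary chain above is simple to compute. A small sketch (using the rough λ/2π rule of thumb; as noted, the exact near-field regions depend on antenna size and geometry):

```python
import math

C = 299_792_458  # speed of light, m/s

def wavelength_m(freq_hz):
    """Wavelength in meters for a given frequency in Hz."""
    return C / freq_hz

def near_field_boundary_m(freq_hz):
    """Rough reactive near-field boundary, ~ wavelength / (2*pi)."""
    return wavelength_m(freq_hz) / (2 * math.pi)

for label, f in [("800 MHz (early cellular)", 800e6),
                 ("2.45 GHz (WiFi/Bluetooth)", 2.45e9),
                 ("28 GHz (mmWave 5G)", 28e9)]:
    wl = wavelength_m(f)
    nf = near_field_boundary_m(f)
    print(f"{label}: wavelength {wl * 100:.1f} cm, near field within ~{nf * 100:.1f} cm")
```

For 800 MHz this gives a wavelength of about 37 cm and a near-field boundary of about 6 cm — which is why a phone at the ear is a near-field exposure but a router across the room is not.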
Exposures of these two types tend to be measured differently. We typically look at near field exposures in terms of Specific Absorption Rate (SAR), a measure of the rate at which energy is absorbed per unit mass of the human body. There are a variety of slight SAR variants, but that's the general idea.
SAR is estimable on a device-by-device level based on required SAR testing (although there are some real disputes around whether these tests are accurate, see e.g. Gandhi et al. (2012), Electromagn Biol Med., and I agree that the estimations are badly flawed).
SAR, though, is an "invasive" metric to measure (in the mannequin models, they stick a probe into the middle of the head). We can't measure it in living humans; we can only estimate it. And this works only for single-device sources.
But our far field exposures can't be measured in terms of the body's absorption, because our far field ambient exposure comes from many sources at once, arriving from many directions, with varied characteristics and constant change. So instead, we measure the field itself, typically in a unit like V/m (field strength) or μW/m^2 (power density).
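These two far-field units are interchangeable: in the far field, power density S relates to field strength E via the impedance of free space, S = E² / Z₀ with Z₀ ≈ 377 Ω. A quick converter sketch (illustrative field-strength values only):

```python
Z0 = 376.73  # impedance of free space, ohms

def power_density_uW_m2(e_v_per_m):
    """Far-field power density (uW/m^2) from field strength (V/m): S = E^2 / Z0."""
    return e_v_per_m**2 / Z0 * 1e6  # convert W/m^2 to uW/m^2

# A few illustrative ambient field strengths:
for e in (0.1, 1.0, 6.0):
    print(f"{e} V/m -> {power_density_uW_m2(e):,.0f} uW/m^2")
```

Note the quadratic relationship: a 10x increase in V/m is a 100x increase in μW/m², which is worth keeping in mind when comparing measurement reports that use different units. This conversion only holds in the far field, where E and H maintain a fixed ratio — another reason near-field exposure needs a different metric like SAR.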
That's all well and good, but why do we care?
Well for example, in the earliest days of mobile phone usage, there was but one way to use them: up against your ear. That was all near field, very limited far-field exposure.
But today, most of our phone use is in our hands — perhaps near-field for hand exposure, but far-field for head exposure.
How can you compare those two exposure types?
And moreover, there are a ton of other variables that have shifted over the past decades as mobile phones have gotten popular:
- power emission from phone radios has gone down (earlier generations required much higher power to transmit)
- we've dramatically increased our total time using our phones
- we've shifted habits from phone-to-head to phone-in-hand
- we went from phone-to-head, to wired headphones, to wireless (RF-emitting) headphones
- we added WiFi to the mix, a whole new far-field RF-EMF exposure
- we started exposing children to RF-EMF younger and younger (whether ambiently via far-field exposures like WiFi, or by giving them iPads at a very young age)
- the frequency range of cellular RF-EMF technology has ramped higher and higher, by an order of magnitude or more
- we've installed more and more cellular towers, partly to account for ever-decreasing cellular wavelengths (plus increased capacity, coverage, and indoor penetration)
- the underlying modulation and encoding schemes for signals have changed dramatically, creating entirely new possible characteristics of exposure
- uplink vs. downlink characteristics have changed (including variability in your phone's radio broadcast strength depending on signal quality)
- beamforming technology arrived with 5G, as opposed to the always-on broadcast of earlier generations
So if you wanted to compare the exposure an average INTERPHONE study participant received in the 1990s with what you're exposed to today… I'm not even sure we have a good way to do so.
And to be clear, I am not saying all of this cuts the same direction. It's completely mixed. Increased time-of-use obviously bends towards "more exposure" - but the shift from phone-to-head to phone-in-hand bends towards "less." (And not just less, but an entirely different electromagnetic characterization in far vs. near field.)
And without a complete, clear picture of the entire chain of what's going on here, it's extremely hard to chart a path to making this all legible and comparable.
On 5G
5G is a particularly interesting case of uncertainty that deserves a brief aside. There are arguments that it could be better from a public health perspective and arguments that it could be worse, but it is undoubtedly different from the generations before (and its effects on humans have effectively not been studied due to how new it is).
1G through 4G, as well as WiFi, are more similar to each other than any of them are to some of the newer 5G technologies. 1G typically used a frequency of around 0.8-0.9GHz, 2G was 0.85-1.9GHz, 3G was 0.85-2.1GHz, and 4G / LTE was 0.6-3.8GHz. WiFi is typically 2.4 GHz or 5GHz (although some newer deployments get up to 6GHz or more).
A lot of 5G deployments overlap with 4G, at 0.6-6 GHz. But the newer "millimeter wave" deployments can be an order of magnitude higher than all those previous cell generations, at 24-40 GHz.
The lower the frequency, the lower the bandwidth of the service - but also the longer the range. That is, in part, why you see so many 5G antennae around. With 1G through 4G, carriers could get by with pretty spaced-out cell towers. But because some 5G bands are so high-frequency, they need to install a ton of them.
Plus, those high-frequency bands do a very poor job penetrating through walls, or even through weather like rain! So they need a much higher density.
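The range penalty can be sketched with the standard free-space path loss formula, which grows with frequency even before wall and rain attenuation is counted (the distance and bands below are illustrative choices):

```python
import math

C = 299_792_458.0  # speed of light, m/s

def fspl_db(distance_m: float, freq_hz: float) -> float:
    """Free-space path loss: 20*log10(d) + 20*log10(f) + 20*log10(4*pi/c)."""
    return (20 * math.log10(distance_m)
            + 20 * math.log10(freq_hz)
            + 20 * math.log10(4 * math.pi / C))

d = 500.0  # meters from the antenna
for label, f in [("1.9 GHz (4G-era band)", 1.9e9), ("28 GHz (5G mmWave)", 28e9)]:
    print(f"{label}: {fspl_db(d, f):.1f} dB at {d:.0f} m")
```

The gap between the two bands is about 23 dB, meaning the mmWave signal arrives roughly 200x weaker over the same distance - one concrete reason the antennas must be packed much more densely.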
In addition to this, there are two other big differences between 5G and what came before:
- Beamforming technology
- More complex modulation schemes
With pre-5G networks, a base station / cell tower was always broadcasting a relatively broad pattern into its coverage area. Your phone "caught" some of that energy, but it wasn't targeted.
With a lot of 5G deployments (especially the mid-band ones), the base station is more like a searchlight that uses arrays of antenna elements to point energy directly towards your phone, tracking you as you move. This is usually called "beamforming," often alongside "massive MIMO" (multiple-input multiple-output antenna arrays). It "shoots" the cell signal to you in a targeted way.
This could be better from an exposure perspective, because you aren't being "washed" in the field all the time - only during use. And the phone may be able to transmit with less power on the uplink because the network can "focus" on it, in a sense - and that means less near field exposure from your phone's own radio.
But it could also be worse, or at least more complicated: beamforming can create higher localized intensities in the direction of the user, even if the site's average output is not higher. And the exposure becomes more spatially and temporally "peaky," the effects of which we do not know.
From a measurement and interpretation perspective, beamforming makes it even harder to answer the question "what's the exposure here?" because it changes so much from moment to moment.
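One way to see why "peaky" is hard to summarize in a single number: very different exposure patterns can produce the same time-averaged level. A toy sketch with entirely made-up numbers:

```python
def time_averaged(peak_uw_m2: float, duty_cycle: float) -> float:
    """Time-averaged power density for a beam pointed at you some fraction of the time."""
    return peak_uw_m2 * duty_cycle

# Two hypothetical scenarios with identical averages but different structure:
steady_wash = time_averaged(peak_uw_m2=100.0, duty_cycle=1.0)     # always-on broadcast
beam_bursts = time_averaged(peak_uw_m2=5000.0, duty_cycle=0.02)   # brief, intense bursts
print(steady_wash, beam_bursts)  # same average, very different peaks
```

A survey meter reporting averages would call these two environments identical, even though one delivers 50x higher momentary peaks - and we simply don't know whether that distinction matters biologically.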
And on modulation (I will avoid getting too deep into the weeds here): 5G is capable of using much more complex waveforms (different subcarrier spacings, more flexible time scheduling, more aggressive burstiness, although mostly in the context of the same OFDM-family waveforms as 4G). This changes the characteristics of the radiation itself in ways that we have not measured. Is it better? Worse? Who knows!
The real wrinkle comes as the 5G millimeter wave deployments get more prevalent (most 5G deployments to-date are closer to the 4G band). With a 10x higher frequency, we end up with what seems to me to be a very different exposure type.
On the plus side: penetration depth gets much smaller. In the same way mmWave 5G struggles to make it through walls and rain, it struggles to get through your skin, in a way that earlier generations didn't. So we'd expect energy to be deposited more superficially (in skin and near-surface tissues). This seems like a good thing, mostly.
But that energy is also deposited into a smaller volume, so perhaps you're getting a more concentrated effect: similar power with less tissue to spread out into. And that could have its own negative impacts. And with the much higher number of sources around (due to the required 5G antenna density), you might be more likely, especially if living in a city, to be extremely close to a node.
Net of everything, 5G is a very complicating factor in an already-uncertain space. It's a bundle of changes that pull in different directions, and the net effect depends on which exposure mode dominates for a given person in a given environment.
For an overview of some studies on possible health effects of 5G technology, see Simkó & Mattsson (2019), Int J Environ Res Public Health, which looked at 94 publications across in vivo and in vitro experiments and found:
Eighty percent of the in vivo studies showed responses to exposure, while 58% of the in vitro studies demonstrated effects. The responses affected all biological endpoints studied. There was no consistent relationship between power density, exposure duration, or frequency, and exposure effects. The available studies do not provide adequate and sufficient information for a meaningful safety assessment, or for the question about non-thermal effects.
Sensitivity
Yet another aspect of our original question that has gone relatively unexamined in this piece:
Is there reason to believe that RF-EMF, at exposure levels and durations typical in modern life, may cause increased risk of adverse health effects, at least for sensitive members of the population?
We haven't examined "sensitivity" in detail, largely because most of the studies performed look at population-level exposures, and because the most obviously sensitive population (children) is even more out-of-bounds for direct experimentation.
But it's important to recognize that even if typical modern RF-EMF exposures are "fine" or low-impact for the majority of the population, if there are sensitive subpopulations disproportionately affected, that would be important for us to know. Such a case could also help explain the population incidence trends - if only a subgroup of the population is affected, overall population trends would be much more muted.
There are at least two categorical reasons that sensitive populations could be more meaningfully affected:
- Identical levels of radiation impact their system more
- Their system takes in higher levels of radiation than the population average
In the first category, you could imagine immunocompromised populations, electrohypersensitive individuals, or people exposed to other carcinogens whose effects RF-EMF amplifies (for which there is some evidence in animals; see Lerchl et al. (2015), Biochem Biophys Res Commun, Tillmann et al. (2010), Int J Radiat Biol, and others).
In the second category, you can imagine children. Gandhi et al. (2012), Electromagn Biol Med. suggests:
The SAR for a 10-year old is up to 153% higher than the SAR for the SAM model. When electrical properties are considered, a child's head's absorption can be over two times greater, and absorption of the skull's bone marrow can be ten times greater than adults
That is: the near field energy absorbed by a child can be dramatically higher than what an adult absorbs. This means that the same phone - or iPad - could result in significantly more radiation exposure for a child than for their parent.
In fact, if Gandhi et al. are correct, it could be that children using mobile devices actually absorb more energy than the regulatory limits allow, as those limits are (remarkably) measured assuming the exposed individual is a 220 lb, 6'2" male. Here's a diagram from the Gandhi paper showing the variance in modeled absorption for children vs. adults:

Little research has been done specifically on endpoint impacts for these sensitive populations, so I don't aim to review it here but rather mention it for completeness - and as yet another layer of uncertainty with respect to our original question.
An odd possibility
Let's assume for a moment that non-thermal RF-EMF can have biological effects - cancer, for instance. And let's imagine that we have a quantifiable metric for "RF-EMF exposure," defined as the measure with the clearest generalized dose-response curve for that carcinogenicity. (To be clear: this metric doesn't exist today, and I don't know how you'd measure it. But let's imagine.)
I would conjecture that it is possible that many people's lifetime exposure has gone up and down very non-monotonically. Let's say that 10 is very high exposure and 0 is no exposure. Perhaps:
- mid 1990s: 0-1, no significant far-field exposure and no personal devices
- late 1990s: 6, got a high-powered cell phone and held it to their ear during calls
- early-mid 2000s: 4, wired headphones for phones ramped up and they stopped holding the phone to their head, but WiFi started to ramp
- late 2000s: 7, WiFi becoming prevalent, switched to Bluetooth headphones in their ears, and increased phone usage
- early 2010s: 8, even more phone usage with smartphones
- late 2010s: 6, more WiFi, more cell towers, similar usage, but 4G/5G arrive with lower transmit power than their predecessors, plus beamforming
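Those made-up scores can be dropped into a few lines of code to make the point explicit - the hypothesized trajectory zig-zags rather than climbing steadily:

```python
# The illustrative (entirely made-up) era scores from the list above,
# taking 0.5 as the midpoint of the "0-1" starting range:
eras = [("mid 1990s", 0.5), ("late 1990s", 6), ("early-mid 2000s", 4),
        ("late 2000s", 7), ("early 2010s", 8), ("late 2010s", 6)]

changes = [later - earlier for (_, earlier), (_, later) in zip(eras, eras[1:])]
print("era-to-era changes:", changes)
print("monotonically increasing?", all(c >= 0 for c in changes))
```

Two of the five era-to-era changes are negative, so any study that implicitly assumes "later cohort = more exposure" would be mis-ranking its participants under this hypothetical.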
But without a more definite understanding of which underlying characteristics of RF-EMF cause the downstream effects (and through which mechanisms), we don't even know what that "curve" looks like.
If it turns out that complex modulation schemes are more harmful, then perhaps we're on a straight upward trend from 2010 through the present, with first 4G and then 5G.
If it turns out that near field exposure absolutely dominates, then maybe the fall from "phone-to-head" days to "wired-headphone" days was much steeper, and the rise into "Bluetooth headphones" days was much faster.
If it turns out that ambient far field exposure is equally important, then the rise in cell tower density and WiFi dominance might be causing a much faster rise today.
Without a better understanding, we are just shooting in the dark, and the applicability of historical studies to our modern exposures is in question.
To be clear, I am making those numbers up. They are just there to demonstrate a point: that it's possible that whatever the "true" measure of RF-EMF exposure is, it has varied significantly over the last couple decades. If RF-EMF turns out to be carcinogenic, we could see cohorts with seemingly-incoherent data.
(Plus, epidemiological studies that cross "eras" of exposure could have era-specific signal drowned out.)
And if this is the case, without a good metric that covers all types of exposure, we may really have no idea what the effects of our current typical, modern usage are until decades from now, when we simply see the results.
This is, to me, perhaps the scariest and strongest point for precaution. We have - as I see it - little to no idea what we are actually exposed to right now or how it might be affecting us.
Studies from the past were on totally different exposures, and we don't know how to analogize them to our current ones. We're unleashing wave upon wave of new technologies on a population without any long-term testing, and before we can get the results from the last generation, we are on to the next.
It's an odd possibility to imagine that - despite the clear monotonic rise in the use of RF-EMF-emitting devices (mobile phones, base stations, WiFi, smart homes) - the actual relevant exposure profile has not been steadily rising. Maybe it's gone up and down. Maybe there have been sharp, sudden rises or falls when a new technology or behavior was introduced. And we don't have a sense for how to quantify the key metric, let alone measure it.
There is plenty of evidence - as we have now painstakingly gone through together - for the possibility of effects of non-thermal RF-EMF on humans. But do we have evidence about the impacts of our current exposure profile? I would argue no, not at all - on either side of the debate.
Just to give a taste of scientists wrestling with some of these same questions, you have papers like:
Beláčková et al. (2025), Environ Res.:
Our measurements of RF-EMF outdoor exposure levels across included microenvironment groups do not indicate change in exposure levels between 2016 and 2023 despite an increase in mobile data traffic by a factor of 8 in Western Europe.
Eight times more traffic, but no higher ambient outdoor RF-EMF exposure levels! And yet just a few years earlier, Urbinello et al. (2014), Environ Res. found year-over-year ambient exposure increases of up to ~57%:
Within one year, total RF-EMF exposure levels in all outdoor areas in combination increased by 57.1% (p<0.001) in Basel, by 20.1% in Ghent (p=0.053) and by 38.2% (p=0.012) in Brussels. Exposure increase was most consistently observed in outdoor areas due to emissions from mobile phone base stations.
And when Zeleke et al. (2018), Int J Environ Res Public Health attempted to look at individual dosimetry (via exposimeters in hip bags measuring ambient field strength, i.e. not near-field coupling / absorption), they found:
The median total RF-EMF exposure was higher on weekdays than weekends (233 vs. 162 mV/m; p = 0.003). Similarly, median RF-EMF exposures from downlink (93 vs. 56 mV/m; p = 0.025), uplink (50 vs. 28 mV/m; p = 0.006), and broadcast (50 vs. 33 mV/m; p = 0.002) were significantly higher during weekdays compared to that of weekends.
Kelsh et al. (2011), J Expo Sci Environ Epidemiol. looked at different phone use scenarios and found, among other conclusions (emphasis mine, regarding rural vs. urban vs. suburban usage):
Our findings suggest that phone technology, and to a lesser extent, degree of urbanization, are the two stronger influences on RF power output.
I admire the European Union's Seventh Framework Programme-funded project LEXNET, which ambitiously sought to define something along the lines of my imagined exposure metric. May we one day be able to quantitatively measure something like their proposed Exposure Index:

We'd just need to find out a few of these parameters:

And these:

Ah, well. Maybe one day.
(As an aside: I'm pretty sure they got €7.3 million from the EU plus €3.1 million of other funding to do this work! I feel like I deserve at least a tenth of that for this painstaking article… right?)
In conclusion
If you've gotten this far, you deserve an award. Thanks for reading.
I wish I could come out of all this analysis with a dead-simple conclusion like "RF-EMF at modern exposure levels is likely increasing glioma risk by X%," or "RF-EMF at modern levels is likely to be safe."
And yet, I don't believe either of those is an appropriate interpretation of the data. Here's my best shot at summing up my conclusions:
The weight of scientific evidence supports that there is reason to believe that RF-EMF, at exposure levels typical in modern life, may increase the risk of adverse health effects.
But also: there is immense uncertainty, and it is not clear how applicable historical studies are to our modern exposure levels given constantly changing characteristics of the exposure profile (this could cut either way â positive or negative).
We are effectively running an uncontrolled experiment on the world population, with waves of new technology for which there is no robust evidence of safety.
If you believe new exposures should be demonstrated to be safe before being rolled out to you, you should take precautions around RF-EMF exposures (which have not been demonstrated to be safe).
If you only believe precautions are warranted for exposures which are demonstrated to be harmful, then perhaps you donât need to.
I'm not saying "EMFs definitely cause cancer in everyone" (that's not how environmental health science works anyway). I'm saying "there is substantial, coherent evidence of increased risk, particularly with long-term heavy exposure, and significant uncertainty given the ever-evolving landscape - and the current regulatory posture of 'assumed safe' is not supported by the science."
I also think it's plausible that you could read this entire essay, look at all the evidence, and come out on the other side - saying that, yes, there may be uncertainty, but you think the evidence leans towards safety. You could weigh the Bradford Hill criteria differently, or weigh them the same but synthesize them differently. I respect such alternative perspectives, although I do not agree with them - this is the nature of dealing with uncertainty.
I've tried to present the evidence fairly. Reasonable people can weigh it differently. If you have an alternative perspective or think I've missed anything - or especially if I got anything wrong - please do reach out.
Evidence & precaution
As we discussed in the null hypothesis background section up top, definitions of evidence often get inflated in public discourse.
"No (or uncertain) evidence of harm" is not the same as "evidence of no harm" or "evidence of safety."
What frustrates me most about some of the debate around this topic is when people (not everyone!) point to weak, inconsistent, or no evidence for specific harm, and use that to say that it is safe. That is not a rigorous approach. It needs to be looked at in totality.
We saw similar patterns with tobacco, lead, and asbestos. Arguments were made that "the science isn't settled" or "we need more research" or "the mechanisms aren't fully understood" - and in the meantime, exposures continued.
Now, of course, the argument also works in the inverse. "No (or uncertain) evidence of safety" is not the same as "evidence of harm."
But we must be very careful not to leap from a lack of evidence for one thing to assuming there is evidence of its opposite.
The precautionary principle says: when there's evidence of the possibility of serious harm, the prudent course is not to wait for certainty. It is to take precautions. And I would argue that is not being done - at all - with novel RF-EMF exposures. We're unleashing them on the world population without evidence of safety, and with some evidence against it.
Last: it's possible to hold two ideas in your head at once - that these technologies have broadly been a net good for the world due to productivity gains, and that they may have negative health consequences whose impact on you personally you want to minimize. As dissonant as it can sound, you can believe they should be rolled out while also taking steps to reduce their impact on you.
What would change my priors
On carcinogenicity specifically, there are two areas I will be watching closely:
- Population incidence trends
- The COSMOS study followups, and others like it
If ecological studies and prospective studies like COSMOS do not show increases in endpoint outcomes over the coming decade or two that would be consistent with RF-EMF causing harm (ipsilateral temporal lobe gliomas; possibly endpoints associated with phones being carried in pockets - worth noting that male fertility seems to be declining broadly, and there are suggestions this is linked to RF-EMF; etc.), especially among the heaviest-user cohorts, that would weaken the argument that RF-EMF exposures over the last decade or two have those specific negative health impacts.
Of course, this only applies backwards, to the technology of that era, not to today's. But if it turns out that those technologies didn't do it, that should also weaken your priors on more modern ones having related effects. It doesn't eliminate the possibility, but it should reduce the likelihood.
Similarly, if we're able to do narrower analyses on sensitive subgroups and we don't see increases there, that would weaken the argument overall as well as for those specific subgroups (which, if they are the ones primarily impacted, could otherwise produce a smaller or null observed effect size at the population level).
Further experimental lab work that points to null findings would also change my priors. This is also the most likely area I will be watching for the non-carcinogenic endpoints.
What this means practically
This is not meant to be a "how to protect yourself from EMFs" post. And I'm not suggesting we abandon modern technology (I use a cell phone and am writing this essay on WiFi at a local coworking space - although with wired headphones and wired computer peripherals!).
I may write a followup with more precautionary options. But in general, the lowest-hanging fruit:
- Use speakerphone or wired earbuds instead of holding your phone to your head (you'll never have to charge your headphones again, either…)
- Donât sleep with your phone or any wireless devices near you
- Keep your phone away from your body when not in use
- Turn off your WiFi at night (or even hardwire your house - it's faster!)
- Be more cautious with children (thinner skulls, developing brains, longer lifetime of exposure ahead)
These precautions are mostly cheap or free and have no real downside. Reducing EMF exposure is a low-cost hedge against a possibly high-consequence risk.
We live in a society where lots of people drink alcohol; most everyone is aware it isn't great for you and weighs the tradeoffs. Some abstain. Most drink in moderation. Why couldn't RF-EMF be treated the same way?
I want to repeat my disclosure here, in case you missed it earlier: I co-founded a company called Lightwork Home Health, a home health assessment company that helps people evaluate their EMF exposure (as well as lighting, air quality, water quality, and more). As such, I have an economic interest in this matter (and, if you can't tell, a scientific interest as well - we aim to go this deep on everything we do!).
Despite the economic interest, I want to be clear about the causal direction: I co-founded Lightwork after becoming convinced by the evidence on these types of environmental toxicities. That's why I started the business. This research isn't a post-hoc way for me to justify Lightwork's existence!
And regardless, my purpose here was to do my best to present the evidence fairly - including the strongest skeptical arguments and what would change my mind - and let you draw your own conclusions.
I'll be working on Part II: ELF-MF. If you have any comments in the meantime, I'm all ears.
And I'd love to hear from you directly: andy@andybromberg.com