In this new installment of HD Sciences, we’ll explore the instruments used to measure rainfall, starting with what we might call traditional methods: rain gauges and radars. Well-documented and complementary, these two types of tools have provided reliable precipitation measurements for decades. But before delving into how they work, let’s first revisit the concept of measuring rain—a notion that isn’t always as straightforward as it seems.
During the last Mediterranean weather event, or more recently with the relentless rains that hit Pas-de-Calais, you may have read or heard statements like: “Rain intensity reached up to 150 mm/h!” or “210 mm of water fell, equivalent to three months of precipitation.”
Clearly, two distinct quantities are being expressed here, in two different units: mm/h (rain intensity) and mm (cumulative rainfall). What do they represent?
Let’s start with a simple illustration. Imagine you’ve set up a small pool in your garden for some summer fun, with a surface area of 3 square meters. Unfortunately, it starts raining. Seizing the opportunity to brush up on your meteorological knowledge, you return 30 minutes later and find that your pool has collected 9 liters of water. You’ve just measured the rain! From this, you can deduce that 9 liters spread over 3 square meters is 3 liters per square meter, i.e. a water depth of 3 mm; and since that water fell in half an hour, the average rain intensity was 6 mm/h.
So, mm/h is the unit used to measure rain intensity.
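The pool calculation can be written out as a small sketch (the function name is ours, purely for illustration):

```python
def rain_intensity_mm_per_h(volume_liters, area_m2, duration_min):
    """Average rain intensity in mm/h (1 L/m² of water = 1 mm of depth)."""
    depth_mm = volume_liters / area_m2       # 9 L over 3 m² -> 3 mm of water
    return depth_mm * 60.0 / duration_min    # 3 mm in 30 min -> 6 mm/h

print(rain_intensity_mm_per_h(9, 3, 30))  # 6.0
```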
Formally, rain intensity can be defined as a flow (or flux) of water per unit of surface area (this might feel intuitive if you like to picture rain as a sort of vertically flowing river). It corresponds to the volume of water (in cubic meters) passing through a 1-square-meter surface in a unit of time (a second). If that surface is the ground—which is likely what interests you most when it comes to rain intensity—it can also be seen as the height of water (in meters) falling onto the ground in a unit of time (a second).
However, this unit (cubic meters per second per square meter) isn’t practical as is. A very heavy rain might produce a surface flow rate of about 0.000028 m³/m²/s (or 0.000028 meters of water on the ground per second). But by converting this to an hourly rate and using millimeters instead of meters, we get a more familiar value: 100 mm/h!
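The unit conversion above is just two multiplications; a quick sketch (function name ours):

```python
def surface_flux_to_mm_per_h(flux_m3_per_m2_s):
    """Convert a surface water flux (m³/m²/s, i.e. m/s of depth) to mm/h."""
    return flux_m3_per_m2_s * 1000.0 * 3600.0  # m -> mm, per second -> per hour

print(surface_flux_to_mm_per_h(0.000028))  # ~100.8 mm/h: a very heavy rain
```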
Beware, though! Rain intensity is an instantaneous measurement. The fact that its unit includes “per hour” doesn’t mean it has to be calculated over an hour. It could just as easily be expressed in millimeters per minute (by dividing by 60). For example, the intensity might be 60 mm/h between 00:00 and 00:05, then drop to 30 mm/h between 00:05 and 00:10.
The mm, on the other hand, is the unit used to express cumulative rainfall. This time, it’s a volume of water relative to a unit of surface area (formally, cubic meters per square meter), which, if that surface is the ground, equates to the height of water that has fallen (in meters, or millimeters).
This is an accumulated quantity. Returning to the previous example, if the intensity is 60 mm/h between 00:00 and 00:05, it means that over those 5 minutes, 5 mm of rain fell. Indeed, 60 mm/h equals 1 mm/min, and over 5 minutes, that adds up to 5 mm.
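Accumulation is just intensity summed over time. A minimal sketch, using the intervals from the example above (helper name ours):

```python
def accumulate_mm(intervals):
    """Total rainfall (mm) from a list of (duration_min, intensity_mm_per_h)."""
    return sum(minutes / 60.0 * intensity for minutes, intensity in intervals)

# 60 mm/h for 5 minutes, then 30 mm/h for the next 5 minutes
print(round(accumulate_mm([(5, 60), (5, 30)]), 3))  # 7.5 (mm)
```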
Let’s do some fun calculations. In France, an average of about 900 mm of water falls per year (with significant variation—around 500 mm in the driest areas, like Bouches-du-Rhône, and over 2,000 mm in mountainous regions). By multiplying this by the country’s surface area, we estimate that roughly 500 billion cubic meters of rain fall annually—about 7 million liters per person (or 13 liters per person per minute!).
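These back-of-envelope figures can be checked in a few lines (the surface area and population below are round numbers we supply, not from the article):

```python
annual_depth_m = 0.9         # ~900 mm of rain per year
area_m2 = 551_500 * 1e6      # metropolitan France, ~551,500 km²
population = 68e6            # rough population figure (assumption)

volume_m3 = annual_depth_m * area_m2
per_person_liters = volume_m3 * 1000.0 / population
per_person_per_min = per_person_liters / (365 * 24 * 60)

print(f"{volume_m3:.2e} m³/year")          # ~5e11 m³, i.e. ~500 billion m³
print(f"{per_person_liters:.2e} L/person") # ~7 million liters
print(f"{per_person_per_min:.1f} L/person/min")
```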
But now, let’s turn to measuring these quantities—a task that’s less straightforward than it might appear.
A manual rain gauge (see photo below from the Nature et Découvertes catalog) is nothing more than a graduated pool marked in millimeters.
Manual rain gauge (source: Nature et Découvertes)
Formally, it measures a cumulative total, not an intensity. You can calculate an average intensity by dividing the total by the duration, but it tells you nothing about how intensity might have varied during, say, the 30 minutes of rain in the pool experiment above. To avoid having to check the collected volumes every minute, some automation becomes necessary.
Concept
Most non-manual rain gauges are known as tipping-bucket gauges. Their operating principle is relatively simple. Water is collected over a certain surface area and funneled into a small reservoir attached to a tipping bucket. When the water in the reservoir exceeds a certain mass, the bucket tips, and the water falls into a second bucket. By recording the times of these tips, you can determine the amount of water that fell between two consecutive moments. Dividing this by the time interval and the collection surface area gives an estimate of the water mass per unit of time and surface, which can then be converted into a rainfall rate by dividing by the water’s density.
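Going from recorded tip times to intensities is a short computation. A minimal sketch, assuming each tip corresponds to a fixed rainfall depth (function name ours):

```python
def intensities_from_tips(tip_times_min, mm_per_tip=0.2):
    """Average intensity (mm/h) between consecutive bucket tips.

    tip_times_min: sorted times (minutes) at which the bucket tipped.
    """
    rates = []
    for t0, t1 in zip(tip_times_min, tip_times_min[1:]):
        rates.append(mm_per_tip / ((t1 - t0) / 60.0))  # depth / elapsed hours
    return rates

# Tips every 12 minutes correspond to a steady 1 mm/h rain
print(intensities_from_tips([0, 12, 24, 36]))  # [1.0, 1.0, 1.0]
```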
Automatic rain gauge from MeteoSwiss (OTT brand model). Source: MeteoSwiss.
National meteorological agencies like Météo-France maintain extensive networks of such rain gauges (see the Météo-France map below), a testament to their value and utility.
Météo-France operational rain gauges across metropolitan France (in red, those with the longest records). Source: Météo-France.
Advantages and Limitations
The major advantage of rain gauges (which will become clearer when compared to radars later) is that they provide an almost direct measurement of rain: the instrument records tipping times that correspond fairly directly to volumes of water collected. Errors in converting raw data to cumulative rainfall are therefore minimal.
Unfortunately, several issues remain. The first is the instrument’s inherent resolution. Suppose one bucket tip corresponds to a rainfall total of 0.2 mm. Now imagine a steady rain falling at 1 mm/h. That’s 0.2 mm every 12 minutes, so the bucket would tip every 12 minutes.
If you then estimate rain intensities at a 5-minute resolution (one value per 5-minute interval) based on tipping times, you won’t recover a steady 1 mm/h: most 5-minute windows contain no tip at all (0 mm/h), while any window that does contain a tip jumps to 2.4 mm/h (0.2 mm in 5 minutes).
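This quantization effect is easy to simulate (helper name and defaults are ours, assuming one tip per 0.2 mm):

```python
def windowed_intensities(tip_times_min, window_min=5, total_min=60, mm_per_tip=0.2):
    """Intensity (mm/h) per fixed window, from the number of tips in each."""
    rates = []
    for w in range(total_min // window_min):
        start, end = w * window_min, (w + 1) * window_min
        tips = sum(1 for t in tip_times_min if start <= t < end)
        rates.append(tips * mm_per_tip * 60.0 / window_min)
    return rates

# Steady 1 mm/h rain: one tip every 12 minutes over one hour
print(windowed_intensities([12, 24, 36, 48]))
# mostly 0 mm/h, with isolated 2.4 mm/h spikes instead of a steady 1 mm/h
```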
High-resolution temporal measurements from rain gauges, especially during light rain, must therefore be handled with care.
Other challenges include the fact that it’s a point measurement, while rain can vary significantly over short distances. Knowing that 20 mm fell at one spot doesn’t necessarily tell you much about what fell 5 or 10 km away (e.g., during a thunderstorm). A dense network of rain gauges is needed to properly cover an area, which can quickly get expensive.
Placement also matters: near buildings or under trees, rain can be affected by these structures. Finally, these instruments can clog, requiring regular maintenance—and the associated costs.
The other major conventional method for measuring rain is the radar. Here’s a photo that does justice to its grandeur (a Météo-France radar in Bollène):
Météo-France radar in Bollène. Source: Météo-France.
Radars might also bring to mind speed cameras or aircraft detection. The principle is the same. A radar emits a signal (an electromagnetic wave) into the air. If that wave hits a target (your car, a plane, a raindrop), part of it is reflected back to the radar, which receives a return signal. By analyzing this signal, properties of the target can be deduced: your car’s speed, the plane’s position, or the amount of rain.
Concept
Let’s dive a bit deeper into how this works for rain, with the help of a small diagram (source: https://www.theses.fr/2016SACLV046):
In the top graph, you see the power emitted by the radar over time. The radar sends out a signal (a pulse) at the start, for a short duration. Then it stops emitting for a (longer) period before sending another pulse.
At the beginning of this period, a signal is sent into the atmosphere in a specific direction. This signal travels forward. At some point, it encounters a rain zone. Part of the emitted signal hits these drops, while another part passes through and continues on. The signal that hits the drops is (partly) scattered by them—sent back in all directions (per Mie theory, whose equations can be explored here: https://en.wikipedia.org/wiki/Mie_scattering). Some of this signal returns to the radar. This is called backscatter.
The second graph shows the power of the signal the radar receives (what’s returned to it). It doesn’t get a single peak because there’s an entire rain zone: part of the signal is stopped early and returns quickly (say, at time t1), while another part travels farther before being backscattered by drops and returns later, at t2. This results in a power distribution based on the round-trip time.
Here’s the key point: we know the signal’s speed—the speed of light (300,000 km/s). So instead of plotting the backscattered signal distribution over time, we can plot it over distance, grouping it into distance intervals. That’s what the third graph shows. Indirectly, this graph represents the amount of rain in the direction of the radar beam as a function of distance from the radar. Converting the received power into rain intensity relies on relationships established in scientific literature over decades (see details here, for example: https://en.wikipedia.org/wiki/DBZ_(meteorology)).
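The two conversions described above (round-trip time to distance, received power to rain rate) can be sketched in a few lines. The Z–R relation below uses the classic Marshall–Palmer coefficients (Z = 200 R^1.6), one example among the many relations in the literature; the function names are ours:

```python
C = 299_792_458.0  # speed of light, m/s

def echo_distance_km(round_trip_s):
    """Target distance from round-trip time (the signal travels there and back)."""
    return C * round_trip_s / 2.0 / 1000.0

def rain_rate_from_dbz(dbz, a=200.0, b=1.6):
    """Rain rate R (mm/h) from reflectivity via Z = a * R**b (Marshall-Palmer)."""
    z = 10.0 ** (dbz / 10.0)       # dBZ -> linear reflectivity Z (mm^6/m^3)
    return (z / a) ** (1.0 / b)

print(round(echo_distance_km(400e-6), 1))   # a 400 µs round trip -> ~60 km
print(round(rain_rate_from_dbz(40.0), 1))   # 40 dBZ -> heavy rain, ~11.5 mm/h
```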
Next, the radar rotates! We’ve seen how to estimate rainfall amounts at various distances in one direction, but to create a rain map, this process must be repeated in all directions. The radar spins, emitting pulses in each new direction as it turns. Once it completes a full rotation, it has scanned all directions, and a map can be produced—like this one from Texas and Louisiana by the U.S. National Weather Service (NOAA) on August 29, 2005, during Hurricane Katrina:
Radar image measured around New Orleans during Hurricane Katrina. Source: NOAA
Going Further
Several simplifying assumptions were made here. In reality, a radar makes multiple sweeps before producing a map, emitting at different elevation angles (higher and higher into the sky) to account for vertical rain variability and avoid obstacles like terrain or the Eiffel Tower that might block the signal in a given direction.
Also, not every radar emits pulses. Some emit continuously at lower peak power while modulating the frequency of the signal (so that returned signals can still be told apart, and distances deduced).
Finally, the backscatter properties of rain (how much the signal is affected and returned by drops) depend on many factors, including drop size and signal frequency. Operational radars typically work in the microwave range, between 3 and 10 GHz. Higher frequencies are both scattered and attenuated more strongly by rain. If the attenuation is too strong, the signal gets fully absorbed by nearby rain, blinding the radar beyond a certain distance. That’s why Météo-France radars around the Mediterranean use lower frequencies than elsewhere in France, precisely to avoid being blinded by intense autumn rains.
Advantages and Limitations
The radar’s main advantage is that it naturally provides rain maps. Its measurement isn’t limited to a single point; it captures rain all around it, with a range that can extend to about 100 km for operational radars.
However, there are drawbacks. First, it’s an indirect measurement: the radar’s raw data consists of backscattered microwave signal power. Turning that into a rain map takes significant effort!
It’s also a complex, costly system, not available everywhere. Costs include installation, setup, data processing (requiring expertise), and maintenance, plus the risks of breakdowns (availability of parts, skilled technicians, and the fact that a failure halts the entire measurement system).
Finally, measurement is trickier—or even impossible—in very mountainous areas, where terrain blocks the signal unless the radar aims high into the sky, where rain might not be representative (e.g., it could be snow instead).
In a future article, we’ll explore the value of complementing these traditional measurement networks with so-called opportunistic measurements—those that leverage existing instruments or signals not currently used for operational rain measurement.