TRL calibration

Click here to go to our main page on de-embedding S-parameters

Click here to go to a page on TRL calibration in waveguide (new for January 2024!)

Go to our download area and grab a free copy of our TRL calibration calculator spreadsheet; it will automatically calculate the best line lengths for any frequency range.

Through-reflect-line calibration is often used for de-embedding S-parameter data, especially on-wafer MMIC data. TRL works great, but it has a low-frequency limitation: the "line" standard can become longer than you'd want to fabricate. We recommend using TRL for "normal" microwave frequencies (maybe down to a couple of GHz) and then using another scheme if you really need data down at 10 or 20 MHz. For now, we are not going to get involved in the math for TRL calibration, and as an engineer, chances are you don't have to either. But we will show you the best way to calculate line lengths for TRL cals, and our spreadsheet does all that for you automatically.

For this discussion, we are talking about de-embedding two-port data.  It is possible to de-embed three-port or higher-order data, but we will leave that for another day.

As the name implies, there are three standards that are measured: the "through" (sometimes called the "thru"), the "reflect" (often a short circuit or an open circuit) and the "line" (basically, the same as the thru, with some extra length inserted).

Length of the launch

The image below is an artist's conception of a TRL calibration kit for a microstrip board that is RF-probeable. Any resemblance between this and real hardware is pure coincidence! On the top is the through, which is a back-to-back version of what you want to remove from the measurement ports. Each side of the through is referred to as a "launcher" or "launch", because it launches your test equipment signal into the device under test. Note that it is really short... adding length increases errors. Certainly, the length of each side should be less than a quarter-wave at the uppermost frequency of calibration. At least, that was what we once thought; then we were smacked upside the head a couple of times about it. Read the discussion and make up your own mind. One of these days we will break off this discussion to a separate page and clean it up.

What do we mean by the length of a launch? DK pointed out that the length could include the VNA cables, since they are being calibrated out. That's an interesting point, but we are talking about the length to the make-and-break connection in the test equipment. You should verify that the cables are well matched and do not change with time or handling, and then you can ignore them. The TRL interface could be a coaxial interface, an RF probe interface, or even a waveguide interface. The repeatability of that connection, and its impedance match, are both keys to a good TRL calibration.

Update on the length of the launch, from Tony (January 2024):

I recently came across your discussion on the length of the launch for TRL calibration structures: TRL calibration (microwaves101.com)

I’m not the type to wade into a controversy, but I found the explanation of why this length should be <1/8 wavelength quite unconvincing (unlike the rest of your excellent site!) and it doesn’t match my experience in the lab. If there is a discontinuity/mismatch at the probe contact point, and the launch line is 50 ohms and lossless (for the sake of argument), then any length of launch rotates the phase of the mismatch without affecting the magnitude. There is nothing inherently ‘better’ or ‘worse’ about some other angle of reflection at the DUT reference plane. Depending on the device being tested, you might get lucky or unlucky, regardless of how long the launch is. And for a passive device you don’t care, as long as the discontinuity is calibrated out correctly (which requires repeatable and uniform probe contact discontinuity across all cal standards and DUTs).

You might be interested in the Keysight app note "Network Analysis Applying the 8510 TRL Calibration for Non-Coaxial Measurements" (available on Keysight's web site here), which gives a thorough treatment of non-coaxial TRL calibration. Note the section titled 'Launch Spacing' on p. 10 (quoted below).

Launch spacing

When calibrating in-fixture, adequate separation between the coax/microstrip launchers is needed during the THRU and LINE measurements. As well as the dominant mode, higher order modes are generated at the launch. If there is not sufficient separation between the launchers, and between the launch and the DUT, coupling of these higher order modes will produce unwanted variations during the error-corrected measurements. A minimum of two wavelengths is recommended.

So, which is correct: more than one wavelength per launcher (i.e., Keysight's two wavelengths of separation across the thru), or less than 1/8 wavelength?

In many situations, especially on-wafer testing at lower frequencies, two wavelengths separation is impractical. But if the probe launch is well designed with low discontinuity/mismatch and insignificant higher order modes then shorter launches can be used successfully, provided there is no direct coupling between probes. But I believe there is no need for the launch to be less than a quarter wavelength and I would suggest that <1/8 wavelength launchers at very high frequency have a high risk of unwanted coupling between probes during the THRU standard measurement.

An older update on this topic:  But wait, there are two sides to the launch length story! Here's a comment from Laurens:

(responding to this statement: "...you can make the launchers really long (more than a quarterwave) and enjoy explaining away bad data at the high end of your band. We propose that you never exceed 1/8 wavelength for the launch.")

Unless the substrate thickness is getting to be a large fraction of a wavelength, I don't see why the above is true - I've not read it in any HP app note, and in my experience something practical, like 20mm for the R standard at 6 GHz on 0.508mm (20 mil) soft substrate, is fine, using an SMA-CPW-microstrip transition length of about 2mm and giving a T length of 40mm.

If we did the math right, assuming a microstrip effective dielectric constant of about 2.8 (a typical Keff for an ER 3.5 "soft substrate"), that 20mm launch works out to roughly 240 electrical degrees at 6 GHz, which is way more than 1/8 wavelength. But we believe you that it works fine...
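If you want to check numbers like that yourself, here is a minimal Python sketch (the function name and the Keff value are our own assumptions, not from the site's spreadsheet) that converts a physical launch length to electrical degrees:

```python
import math

def electrical_degrees(length_mm, freq_ghz, keff):
    """Electrical length, in degrees, of a line of given physical length.

    length_mm : physical length in millimeters
    freq_ghz  : frequency in GHz
    keff      : effective dielectric constant of the transmission line
    """
    # Free-space wavelength in mm is 300/f(GHz); divide by sqrt(Keff)
    # to get the guided wavelength.
    guided_wavelength_mm = 300.0 / (freq_ghz * math.sqrt(keff))
    return 360.0 * length_mm / guided_wavelength_mm

# Laurens's 20 mm launch at 6 GHz, with Keff ~2.8 (our assumption for
# microstrip on an ER=3.5 soft substrate): about 241 degrees.
print(round(electrical_degrees(20.0, 6.0, 2.8)))
```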

So where did that "rule" come from? A while back, a project for W-band MMICs was getting underway. To our credit, we were smart enough to know that 100μm thick GaAs would cause problems (it would be electrically tall), so we used 50μm. Characterizing test FETs should have been easy: we just adopted the 400μm launches that worked so well at Ka-band. Guess what? The de-embedded data looked like crap. Note that 400μm is more than 150 electrical degrees at 110 GHz. Decreasing the size of the launches to 200μm cured the problem, and the data provided the basis for some great designs. 200μm is less than 1/4 wavelength at 110 GHz... ipso facto... what's going on?

First, you should appreciate that we are trying to remove the ugliness of the test equipment, especially that make-and-break interface. If you designed your calibration standards correctly, they will be very close to 50 ohms once you move past the connector or RF probe point.

The accuracy of a TRL calibration is affected by the overall impedance match of the make-and-break interface. At 6 GHz, just about anything you come up with will give you a solid 20 dB match to fifty ohms. At 110 GHz, the match could be 14 dB, according to this data sheet. So here's the hypothesis: if you want to calibrate out a crappy interface, the closer it is to the calibration reference plane, the better off you will be. If you locate a bad interface within 90 degrees of the reference plane, you will be much better off than if it is located 180 degrees away. Anyone that can help us verify that hypothesis will receive a nice Microwaves101 gift! But if you are working at low frequency with well-behaved test interfaces, be our guest and make the launches longer. If you make them really long, say multiple wavelengths, tolerances on dielectric properties, substrate physical dimensions, etc. may decrease the accuracy of your calibration standards and put you in another world of hurt.

We'll let Laurens have the final word on this... for now. Just don't say you've never been warned to think about this!

Now that I think about it: 

"Characterizing test FETs should be easy," "If you locate a bad interface within 90 degrees of the reference plane you will be much better off than it if is located 180 degrees away. "

Could it be that the probe port match was poorer than expected, and that the "quarter wave launcher" was acting as an impedance transformer, causing the active device to be looking into a different set of load and source impedances when transformed through a q.w. - and therefore acting rather differently than in a true 50 ohm environment?

A passive device (which is most of what I test) would, within reason, not care about the actual impedances - but an active device might well do so. In that case a launcher <1/8 of a wavelength would not give that large an impedance transformation, thereby keeping the DUT better behaved and avoiding weird behaviour in the frequency response?

Below there are two "lines" of different lengths. The ideal line is a quarter-wave long, but a line is only exactly a quarter-wave at one frequency, and you need data over a band. It turns out that the line works well enough (engineering-speak for "don't overthink this unless you are making a career out of accurately characterizing devices") if it is between 20 and 160 electrical degrees (a quarter-wave is 90 degrees). Depending on how much bandwidth you need to measure, you will probably need multiple line cal standards; the longer ones are used at lower frequencies. There's a quick way to check a line's usable band, sketched just after the figure description below.

At the bottom is the "reflect", which is a short circuit in this case. The reflection is ideally right at the reference planes of the launchers. In the figure, two shorts (grounded with vias, the tiny circles) are configured to be the same distance between probes as line 1, to cut down on set-up time. Two reflects are used so you can calibrate both ports at once. You could save some layout time by making one reflect standard, but the test guy/gal will wish bad things upon you, as it forces them to turn the standard around during calibration.
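Here's that quick check: a minimal Python sketch (our own helper function, with an assumed Keff value) that reports the frequency band over which a given line's extra length stays inside the 20-160 degree window:

```python
import math

def line_usable_band_ghz(delta_length_mm, keff, min_deg=20.0, max_deg=160.0):
    """Frequency range (GHz) over which a TRL line's extra length
    stays between min_deg and max_deg of electrical length."""
    # Electrical degrees = 360 * f(GHz) * length(mm) * sqrt(Keff) / 300
    degrees_per_ghz = 360.0 * delta_length_mm * math.sqrt(keff) / 300.0
    return min_deg / degrees_per_ghz, max_deg / degrees_per_ghz

# Example: a 1.88 mm line on GaAs microstrip (Keff ~8.25)
f_low, f_high = line_usable_band_ghz(1.88, 8.25)
print(f"{f_low:.2f} to {f_high:.2f} GHz")  # roughly 3.1 to 24.7 GHz
```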

 

Here are some ways you can go wrong in making calibration standards:

  1. As we stated earlier, you can make the launchers really long (more than a quarterwave) and enjoy explaining away bad data at the high end of your band. We propose that you never exceed 1/4 wavelength for the launch (this used to say 1/8 wavelength, but we changed it to one-quarter wavelength in April 2021 after we considered some email feedback). There's some controversy to this statement; see the discussion earlier on this page.
  2. Connectorized standards assume that the connectors behave exactly the same on every installation.  On PCBs, the assembler might not shove all the connectors up to the board (or maybe he/she will insist on a small gap, which could vary in length).
  3. With connectorized standards, you can buy $3 SMA connectors. They will all be different, and in the best case you just get bad data. In the worst case, they damage your VNA cables. Cough up for some precision connectors (like 2.92mm air dielectric); that's why they were invented. And don't even think about making a cal kit with push-on connectors.
  4. The reflect standard can bring misery... open circuits are notorious radiators, and short circuits have via inductance that brings significant reactance. These problems get worse at high frequency; if you are trying to de-embed a 2.4 GHz amplifier, you can't go wrong. The best high-frequency standard you can make would be a "real" short circuit, where the launcher is terminated in a vertical wall of metal. For microstrip circuits, you could make a metal shim the same thickness as the substrate and strap the microstrip line to it somehow, then add a wall of metal above the shim. Yes, we need to draw a picture of that one of these days.

If anyone has further advice on cal standards, send it in and we'll post it and maybe you will get a reward...

The best way to calculate TRL "line" standard lengths

Before we get into this, one of the most annoying things about some engineers is that they don't pass along solutions to problems to other engineers, probably in an attempt to keep work flowing their way.  At many companies, there is the "TRL guy" that designs all the cal kits, because he/she keeps their mysterious method to themselves. Once you read the information below, you'll be in the know, too.

Microwaves101 Rule of Thumb #126

The TRL line standard only works well when it is between 20 and 160 degrees electrical length.  This is just a rule of thumb, not a hard fact, but if your line standard is zero or 180 degrees in length, the math completely falls apart.  Considering the limit of 20 to 160 degrees, one line standard provides up to 156% bandwidth.  If you need more band, you'll need more than one standard, and your calibration routine will need to pick a crossover point for which lines are selected.

In this case, it's much more convenient to think of frequency coverage in terms of the ratio of upper to lower frequency: 20 to 160 degrees is an 8:1 ratio, so a standard that is 20 degrees at 1 GHz will work to 8 GHz. Let's see how much bandwidth you can get with multiple standards:

Two standards can cover 64:1 bandwidth

Three standards can cover 512:1 bandwidth

Four standards can cover 4096:1 bandwidth

Can you figure out how much band can be covered by five standards? Here's a hint: it's more than you will ever need, so give it up at four standards. Also, the low-frequency standard can get extremely long for MHz frequencies. How are you going to make a one-meter standard on a 100mm GaAs wafer?

Put another way: suppose you wanted to measure to 110 GHz. What would be the lowest frequency you could calibrate?

One standard can cover down to 13.75 GHz

Two standards can cover down to 1.72 GHz

Three standards can cover down to 215 MHz

Four standards can cover down to 27 MHz
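The pattern is easy to code up. Here is a minimal Python sketch (our own function names), assuming the 8:1-per-standard rule of thumb above:

```python
def coverage_ratio(n_standards):
    """Bandwidth ratio covered by n line standards, each good for 8:1."""
    return 8 ** n_standards

def lowest_cal_freq_ghz(f_high_ghz, n_standards):
    """Lowest calibrated frequency for a given top frequency."""
    return f_high_ghz / coverage_ratio(n_standards)

for n in range(1, 5):
    print(n, coverage_ratio(n), round(lowest_cal_freq_ghz(110.0, n), 3))
# 1 8    13.75
# 2 64   1.719
# 3 512  0.215
# 4 4096 0.027   (frequencies in GHz, matching the lists above)
```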

We will use an example to describe the technique.  You have been assigned to design 1-110 GHz standards on GaAs (Keff ~8.25). First, you will notice that the band is 110:1, so three standards are needed.  The easiest thing to do is to come up with a standard that is 20 degrees at 1 GHz and a standard that is 160 degrees at 110 GHz, and somehow come up with a standard in the middle.  Maybe you just geometrically average (not arithmetically average) Line 1 and Line 3 to get Line 2.  This takes a few minutes and you can be done.  But suppose you want the very best calibration?  You will want to design standards and crossover points that ensure you are as far from 0 and 180 degrees as mathematically possible across the entire band.

First, you need to calculate the best crossover points.  This is done by geometric (as opposed to arithmetic) segmentation of the band. For two standards, the crossover is found by:

FT=FL*10^(log10(FH/FL)*(1/2))

Which simplifies to

FT=FL*(FH/FL)^(1/2), which is the geometric mean SQRT(FL*FH)

For three standards, crossovers FT1 and FT2 are:

FT1=FL*10^(log10(FH/FL)*(1/3))

FT2=FL*10^(log10(FH/FL)*(2/3))

(Apologies to anyone who tried the original math we posted; it was fixed in May 2018. You'll never need to set up the equations yourself; it is built into the TRL spreadsheet you can download...)

Notice the pattern? You can figure out how to do four or five standards yourself. Where did we get these formulas? We figured them out in our spare time; they don't appear in any reference we could find.

For the 1-110 GHz example, crossover 1 is 4.79 GHz and crossover 2 is 22.96 GHz.
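In code, the geometric segmentation looks something like this (a sketch with our own naming, not the spreadsheet's):

```python
def crossover_freqs(f_low, f_high, n_standards):
    """Geometrically segment [f_low, f_high] into n_standards sub-bands
    and return the n_standards - 1 interior crossover frequencies."""
    ratio = f_high / f_low
    return [f_low * ratio ** (k / n_standards) for k in range(1, n_standards)]

print(crossover_freqs(1.0, 110.0, 3))  # ~[4.79, 22.96] GHz, as above
```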

Now, calculate the center frequency of each sub-band.  This is the arithmetic average of the adjacent crossover points.

For band 1, FC=1+(4.79-1)/2=2.895 GHz

For band 2, FC=4.79+(22.96-4.79)/2=13.87 GHz

For band 3, FC=22.96+(110-22.96)/2=66.48 GHz

Now, calculate quarterwave line lengths for each center frequency; these are what you will need to fabricate. Don't forget to take the effective dielectric constant (8.25) into account.

For band 1 (line 1), L=300/2.895/SQRT(8.25)/4=9.02mm

For band 2 (line 2), L=300/13.87/SQRT(8.25)/4=1.88mm

For band 3 (line 3), L=300/66.48/SQRT(8.25)/4=0.39mm
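Putting the whole recipe together, the sketch below (our own code, intended to mirror what the downloadable spreadsheet does) computes crossovers, band centers, and quarter-wave line lengths for the 1-110 GHz GaAs example:

```python
import math

def trl_line_lengths_mm(f_low, f_high, n_standards, keff):
    """Crossovers (GHz), band centers (GHz), and quarter-wave lengths (mm).

    Crossovers come from geometric segmentation of the band; each center
    is the arithmetic mean of its band edges; each length is a quarter
    of the guided wavelength at the band center.
    """
    ratio = f_high / f_low
    edges = [f_low * ratio ** (k / n_standards) for k in range(n_standards + 1)]
    centers = [(lo + hi) / 2.0 for lo, hi in zip(edges[:-1], edges[1:])]
    lengths = [300.0 / fc / math.sqrt(keff) / 4.0 for fc in centers]
    return edges[1:-1], centers, lengths

crossovers, centers, lengths = trl_line_lengths_mm(1.0, 110.0, 3, 8.25)
print(crossovers)  # ~[4.79, 22.96] GHz
print(centers)     # ~[2.90, 13.87, 66.48] GHz
print(lengths)     # ~[9.02, 1.88, 0.39] mm
```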

Now, let's plot the phase angles of the three cal standards. At 1 GHz, line 1 is ~30 degrees, and at 110 GHz, line 3 is ~150 degrees. Both of these points are 10 degrees inside the 20-160 degree rule of thumb. But when you plot on a linear axis, it is hard to see the crossover points...

Let's re-plot that on log scale. If you put a vertical line at the crossover points, you'd see that the phases are all between 30 and 150 degrees.
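You can also verify the crossover behavior numerically instead of reading it off a plot; here's a tiny sketch reusing the example numbers above:

```python
def phase_deg(f_ghz, fc_ghz):
    """Electrical length at f of a line that is a quarter-wave at fc."""
    return 90.0 * f_ghz / fc_ghz

centers = [2.895, 13.87, 66.48]  # GHz, from the example above
for crossover in [4.79, 22.96]:  # GHz
    print(crossover, [round(phase_deg(crossover, fc), 1) for fc in centers])
# At each crossover, the two adjacent lines sit near 149 and 31 degrees,
# comfortably inside the 20-160 degree window.
```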

Now you know how to design a TRL cal kit, picking the best crossover frequencies and line lengths!

As promised, we've put together an Excel spreadsheet on this technique and uploaded it to our download area.

An Alternate Opinion

We recently got a note from Shankar, who disagrees with us on one point.

" I guess I found an error in this article.  It stresses that the THRU length should always be smaller than 1/4th of the wavelength. I believe this is incorrect. In fact,  the opposite is true. THRU length should be approximately 40% more than the wavelength to avoid multiple modes getting excited."

And he included a couple of references with links (note: these links point to the IEEE Xplore site, which may or may not let you see more than the abstract, depending on your subscription status):

Orii et al., "On the length of THRU standard for TRL de-embedding on Si substrate above 110 GHz," 2013 IEEE International Conference on Microelectronic Test Structures (ICMTS), Osaka, Japan, 2013, pp. 81-86, doi: 10.1109/ICMTS.2013.6528150.


D. F. Williams, R. B. Marks, and A. Davidson, "Comparison of on-wafer calibrations," 38th ARFTG Conf., pp. 68-81, Dec. 1991.


D. F. Williams and R. B. Marks, "Calibrating on-wafer probes to the probe tips," 40th ARFTG Conf., pp. 136-143, Dec. 1992.


So, what do you think?

Author: Unknown Editor