Measurement of TSH is central to the assessment of thyroid function, since it indicates whether thyroid hormone production is normal, inadequate, or excessive. Although it is an indirect measure, TSH is often used as a stand-alone test to screen for thyroid abnormalities, even in asymptomatic patients. This clinical practice requires that TSH assays be sensitive, specific, and well standardized so that clinicians can confidently interpret results and decide whether additional testing or treatment is needed.
As immunoassay methods have evolved over the past three decades, they have become increasingly sensitive. First-generation assays employed hemagglutination inhibition and had a detection limit of 1.0 µIU/mL, far too insensitive for clinical purposes. Second-generation TSH assays, based on radioimmunoassay (RIA) and enzyme immunoassay (EIA) principles, improved sensitivity, detecting levels down to 0.1 µIU/mL. Today's third-generation immunoassays, which employ monoclonal antibodies and chemiluminescent detection, can report TSH concentrations down to 0.01 µIU/mL.
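To illustrate why assay generation matters clinically, the minimal sketch below encodes the nominal detection limits quoted above and checks whether a given result lies at or above each limit. The generation labels and limits come from the text; the function name and structure are illustrative only.

    # Nominal detection limits (uIU/mL) by TSH assay generation, as given above.
    DETECTION_LIMITS = {
        1: 1.0,   # hemagglutination inhibition
        2: 0.1,   # RIA / EIA
        3: 0.01,  # monoclonal antibodies + chemiluminescence
    }

    def is_quantifiable(tsh_uiu_ml: float, generation: int) -> bool:
        """Return True if the result is at or above the assay's detection limit."""
        return tsh_uiu_ml >= DETECTION_LIMITS[generation]

    # A suppressed TSH of 0.05 uIU/mL is only measurable
    # with a third-generation assay.
    for gen in (1, 2, 3):
        print(gen, is_quantifiable(0.05, gen))  # False, False, True

The point of the sketch is that a suppressed TSH, the hallmark of thyroid hormone overproduction, falls below the detection limits of first- and second-generation assays and can be quantified only by a third-generation method.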
Standardizing TSH assays against a common source of TSH antigen for the preparation of analytical calibrators has helped minimize variability between analytical methods. Standardization efforts have also led to the recommendation of universal reference intervals (Garber et al.) to help clinicians interpret TSH results. A recommended reference interval for TSH is shown. However, despite these efforts, some variability between methods persists; each laboratory should therefore establish or verify its own reference intervals.
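The sketch below shows how a result might be flagged against a laboratory's own verified interval rather than a hard-coded universal one. The 0.4 to 4.0 µIU/mL bounds are an assumption for illustration, not a value taken from the text; actual bounds must come from each laboratory's own verification, as stressed above.

    # Hypothetical, lab-verified reference interval (uIU/mL); the 0.4-4.0
    # bounds are illustrative only and are NOT taken from the text above.
    LAB_REFERENCE_INTERVAL = (0.4, 4.0)

    def flag_tsh(tsh_uiu_ml: float, interval=LAB_REFERENCE_INTERVAL) -> str:
        """Classify a TSH result relative to the laboratory's own interval."""
        low, high = interval
        if tsh_uiu_ml < low:
            return "LOW (suggests overproduction of thyroid hormone)"
        if tsh_uiu_ml > high:
            return "HIGH (suggests inadequate production)"
        return "WITHIN REFERENCE INTERVAL"

    print(flag_tsh(6.2))   # HIGH (suggests inadequate production)
    print(flag_tsh(0.05))  # LOW (suggests overproduction of thyroid hormone)

Keeping the interval as a configurable parameter, rather than a constant buried in the logic, reflects the recommendation that each laboratory apply its own established or verified reference interval.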