In his book Inventing temperature: measurement and scientific progress, Hasok Chang sets out to use the development of the understanding of temperature and its measurement as ‘a showcase’ for what he calls ‘complementary science’. He explains complementary science in the following way:
Complementary science asks scientific questions that are excluded from current specialist science. It begins by re-examining the obvious, by asking why we accept the basic truths of science that have become educated common sense. Because many things are protected from questioning and criticism in specialist science, its demonstrated effectiveness is also unavoidably accompanied by a degree of dogmatism and narrowness of focus that can actually result in a loss of knowledge. History and philosophy of science in its ‘complementary’ mode can ameliorate this situation, as I hope the following chapters will illustrate in concrete detail.
He goes further and says:
On examining certain discarded elements of past science, we may reach a judgement that their rejection was either for imperfect reasons or for reasons that are no longer valid [author's emphasis]. Such a judgement would activate the most creative aspect of complementary science. If we decide that there were avenues of knowledge that were closed off for poor reasons, we can try exploring them again. At that point complementary science would start creating parallel traditions of scientific research that diverge from the dominant traditions that have developed in specialist science.
Complementary science could trigger a decisive transformation in the nature of our scientific knowledge. Alongside the expanding and diversifying store of current specialist knowledge, we can create a growing complementary body of knowledge that combines a reclamation of past science, a renewed judgement on past and present science and an exploration of alternatives. This knowledge would by its nature tend to be accessible to non-specialists. It would also be helpful or at least interesting to current specialists as it shows them the reasons behind the acceptance of fundamental items of scientific knowledge. It may interfere with their work insofar as it erodes blind faith in the fundamentals, but I believe that would be beneficial overall. The most curious and exciting effect of all may be on education. Complementary science could become a mainstay of science education serving the needs of general education as well as preparation for specialist training. That would be a most far-reaching step, enabling the educated public to participate once again in building knowledge of our universe.
These are fairly extraordinary claims and it is my aim in this review to make a judgement as to whether Chang has substantiated them in his exploration of the development of the understanding of temperature and its measurement. In choosing this subject as the vehicle for demonstrating complementary science, he has set himself up as something of a hostage to fortune in that he is then obliged to find some cases in which the dogmatism and narrowness of view have led to a loss of knowledge of temperature and its measurement that can be recovered by his complementary science. As a reviewer of contributions to specialist science journals I should say that if an article makes extraordinary claims I look for extraordinary evidence to support them. In the same way, I look for substantial evidence that would support the claim that complementary science will trigger a decisive transformation in the nature of our scientific knowledge of temperature and, by extension, the whole of science.
Before coming to the content of Chang's book, I first give a brief history of thermometry to put the whole question in context and to provide the reader with a background against which his claims for complementary science can be judged.
Throughout recorded history there have been many descriptions of standards of mass, length and time. These are the quantities that are immediately accessible to quantitative human appraisal and for which it is easy either to make artefact standards or, in the case of time, to use the movements of the heavenly bodies as references. For the other base quantities of today's International System of Units, the SI, namely temperature, electric current, amount of substance and luminous intensity, standards had to wait until the dawn of modern science. For electric current and amount of substance the quantities themselves were unknown, whereas for temperature and luminous intensity the quantities were readily accessible to human appreciation but beyond the reach of any sort of quantitative evaluation.
At the beginning of the seventeenth century, very little was known about heat and temperature, most opinions at that time being based on the writings of the Greek physician Galen (AD 129–200). Basing his clinical thermometry on the ideas of Aristotle, he assumed that people differed in their proportions of heat, cold, moisture and dryness. More than 1000 years later, in 1578, Hasler of Bern, another physician, followed Galen in ascribing various degrees of heat and cold to mixtures of drugs. To assist in their prescription he set up a temperature scale in which there were Galen's four degrees of heat and four degrees of cold with a zero in the middle. Against this he set a scale of latitude, postulating that inhabitants of the equatorial and polar regions have the fourth degrees of heat and cold, respectively. Using these scales the appropriate mixtures of drugs could then be calculated depending on where the patient lived.
At that time there was no instrument that could be called a thermometer. Admittedly, Philo of Byzantium had made an instrument, sometimes referred to as a thermoscope, that demonstrated the expansion of air on heating but it had not been used to give an indication of temperature. The first recorded description of an instrument that could be called a thermometer was that published in 1612 by the physician Santorio of Padua, in his Commentaries on Galen. The credit for the invention of the thermometer is, however, usually given to Galileo, who is thought to have invented the air thermometer in about 1592.
The next major advance, that of using a liquid rather than air as the thermometric fluid, was made in 1632 by yet another physician, Jean Rey of Lyon, who used an open-ended water-in-glass thermometer. This was not a very practical device and it was Ferdinand II, Grand Duke of Tuscany, who is credited with the invention in 1641 of what we would recognize as the first real thermometer—an alcohol-in-glass thermometer. The stems of the thermometers made by his remarkable artisan Mariani were marked in equal fractions of the volume of the bulb. He was apparently able to make any number of such thermometers that would all move equally when exposed to the same surroundings. By 1654 a number of such thermometers, having 50 ‘degree’ markings, had been sent to various observers in Parma, Milan and Bologna. Soon such Florentine thermometers began to be made with very long coiled stems, sensitive enough to respond to a warm breath, but these were not nearly as consistent with one another as the 50-degree thermometers. In 1657, according to the records of the Accademia del Cimento, experiments were made with mercury-in-glass thermometers but the conclusion was that mercury was less suitable than alcohol as a thermometric fluid. This was a pity, because with their skill in glass blowing the Florentine artisans could have developed the precision thermometer some 60 years before its eventual appearance at the hands of Fahrenheit in about 1713. Note that there was no suggestion at this stage of calibrating the thermometers at ‘fixed points’ of temperature.
Thus, by the middle of the seventeenth century, sensitive and reproducible alcohol-in-glass thermometers had been fabricated but no serious attempt had been made to produce a universal scale. In 1661, however, a Florentine thermometer came into the hands of Robert Hooke, curator of experiments of the newly created Royal Society. In his book Micrographia, published by the Society in 1665, there appeared a description of his thermometric scale. This was based on marking the column of the thermometer with equal increments of volume starting at the freezing point of water. There was much discussion about the freezing point of water and to what extent it had a fixed temperature. In Hooke's scale, each degree represented 1/500 part of the volume of his thermometric liquid. It extended from −7 degrees at extreme cold to +13 degrees for the greatest summer heat. This scale was disseminated by various thermometers calibrated against an original held by the Royal Society, which in turn had been calibrated by Hooke's method. This original, described by Hooke at a meeting of the Royal Society in January 1665, became known as the standard of Gresham College (where the first meetings of the Royal Society took place) and was used until 1709. It was this scale that was used in the first systematic meteorological records during the latter part of the seventeenth century. In modern parlance, these meteorological data were traceable to the standard of Gresham College. Examination of some of these records shows that a remarkable consistency was obtained; temperatures measured by different people at the same time were on average within the equivalent of 1 °C and were never more than 4 °C apart. This was before the births of Fahrenheit, Réaumur and Celsius and only some 10 years after the first of the Florentine spirit thermometers had been brought to England!
Fahrenheit and Amontons were the next to make significant advances in thermometry. Fahrenheit seems to have been the first person to learn how to make reliable mercury-in-glass thermometers. In addition, after discussions with the Danish astronomer Rømer, he developed between 1708 and 1724 the method of making a scale by taking two fixed points and dividing the interval between them into a convenient number of degrees. As the two fixed points he took the temperature of the human body, to which he assigned a value of 96 degrees, and the freezing point of water, with a fixed temperature of 32 degrees. Using this scale, published in 1724, he made measurements of the boiling points of fluids up to 600 degrees. At about the same time, the French scientist Amontons developed the constant-volume gas thermometer. He used air as the thermometric fluid and found that the ratio of the greatest summer heat to the greatest winter cold in Paris was approximately six to five, about what it is today, corresponding roughly to +40 °C and −13 °C. He then concluded that the lowest temperature possible would be that corresponding to zero gas pressure, suggesting that a temperature scale could be established on the basis of just one fixed point, with temperatures simply proportional to gas pressure.
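Amontons's suggestion amounts to a single-fixed-point scale in which temperature is simply proportional to the pressure of a fixed volume of gas. In modern notation (a schematic rendering, not Amontons's own), it can be written as

$$ \left.\frac{T}{T_{\mathrm{ref}}} = \frac{p}{p_{\mathrm{ref}}}\right|_{V=\mathrm{const}}, \qquad T \to 0 \ \text{as}\ p \to 0, $$

so that a single reference temperature $T_{\mathrm{ref}}$, together with its measured pressure $p_{\mathrm{ref}}$, fixes the whole scale, and absolute zero corresponds to vanishing pressure.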
Thus by the middle of the eighteenth century the two aspects of thermometry that we know today had been established. On the one hand we had the development of ever more refined practical scales based on arbitrary, but carefully defined, fixed points (Fahrenheit, Celsius, Réaumur and in due course the International Practical Temperature Scales of the twentieth century) and on the other, the parallel development of gas thermometry (Gay-Lussac, Regnault and Chappuis) and, more generally, primary thermometry giving values of thermodynamic temperature on which the practical scales came to be based. Soon afterwards, greatly improved practical thermometers were developed and by the end of the nineteenth century it was possible to obtain mercury-in-glass thermometers, made by Tonnelot in Paris, that could be read and used with a precision of one-thousandth of a degree. Moreover, electrical resistance thermometers using a platinum element were developed together with the means of measuring their resistance by H. L. Callendar in London.
The first internationally agreed temperature scale, the normal hydrogen scale of 1887, was set up at the newly created International Bureau of Weights and Measures (BIPM) in Paris as the reference for the accurate measurement of the temperature of the international prototype of the metre. The first task of the BIPM, created by the Metre Convention in 1875, was to acquire new international prototypes of the metre and the kilogram together with a set of national prototypes of the same for distribution to Member States of the Convention. It had already been decided that each national prototype of the metre would be accompanied by a calibrated thermometer so that the length and temperature coefficient of the prototype metre bars could be specified and that this thermometer should be calibrated against a gas thermometer using air as the thermometric fluid. In the years after the opening of the laboratories of the BIPM in 1884, Pierre Chappuis constructed a constant-volume gas thermometer using a bulb made of Pt–Ir and studied its behaviour with nitrogen, carbon dioxide and hydrogen (but not air). He also acquired several high-precision Tonnelot mercury-in-glass thermometers. The hydrogen gas thermometer scale that resulted from Chappuis's work was adopted by the International Committee for Weights and Measures (CIPM) in 1887 and it defined all the essential parameters necessary to realize the normal hydrogen scale to a few thousandths of a degree. It was intended not only for calibrating the thermometers accompanying prototype metres but was proclaimed as the thermometric scale for general use by the ‘International Service of Weights and Measures’. It is interesting to note that the corrections to be made to the readings of the best Tonnelot thermometers amounted to only about 0.11 degrees at 50 °C. The scale was valid between −25 °C and 100 °C. These Tonnelot thermometers used by Chappuis have been carefully preserved at the BIPM in their original brass cases and it would be interesting to use them again to see by how much their calibration has drifted in the 120 years that have passed. Sadly, the knowledge and expertise required to use such thermometers to a precision of a few thousandths of a degree was lost many years ago with the departure from this world of those whose daily task it was to calibrate such things. The time and effort to reconstitute this knowledge from written accounts would now be too high to justify.
After the adoption of the normal hydrogen scale, there followed a period of 40 years (interrupted by World War I) during which various proposals were discussed for an internationally agreed scale for general scientific and industrial use covering a much wider range. It was well understood by the CIPM in 1887 that the normal hydrogen scale was not the thermodynamic scale, but it was considered to be the closest practical approach. It was always intended that ultimately what was required would be the thermodynamic scale.
Only 12 years later, in 1899, Callendar made a proposal to the British Association for the Advancement of Science at its meeting in Liverpool for an international temperature scale extending up to the freezing point of aluminium, which he gave as 645.5 °C (compare the modern value of 660.32 °C). Callendar's scale was based on the use of a platinum resistance thermometer calibrated at the freezing point of water and the boiling points of water and sulphur, the latter specified as 444.5 °C (compare with today's value, 444.64 °C). He also proposed that a particular batch of platinum wire be selected from which the thermometers would be made for maintaining the scale, and that the scale be called the British Association Scale. It was based on the quadratic difference formula between so-called platinum and gas thermometer temperatures previously obtained at the BIPM by Chappuis and Harker (the latter from the Kew Observatory). Callendar also presented a list of secondary fixed points, the values of which mostly differed by only a few tenths of a degree from the present-day values on ITS-90. It is not clear why his proposal was not taken up. Perhaps it was because at the time the National Physical Laboratory (NPL) had not yet been founded and the Kew Observatory was busy with other things. In any event, it was not until 1911 that the next move was made.
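For the reader unfamiliar with it, the quadratic difference formula referred to above is usually written in its conventional form (given here for orientation rather than as a quotation from Callendar's 1899 proposal) as

$$ t - pt = \delta\left(\frac{t}{100}\right)\left(\frac{t}{100} - 1\right), \qquad pt \equiv 100\,\frac{R_t - R_0}{R_{100} - R_0}, $$

where $pt$ is the 'platinum temperature' defined linearly from the resistance $R_t$ of the thermometer, $t$ is the gas-scale temperature in degrees, and $\delta$ (about 1.5 for the platinum then in use) is a constant characteristic of the wire, determined in practice at the sulphur boiling point.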
In 1911 the President of the Physikalisch-Technische Reichsanstalt, Berlin (the first of the great national standards laboratories, founded in 1887), addressed a circular letter to the Directors of the BIPM, the NPL (founded in 1900) and the National Bureau of Standards (NBS), Washington (founded in 1901), suggesting that the thermodynamic scale be adopted as the international temperature scale and that its practical realization be the 1899 proposal of Callendar. Both the NPL and the NBS agreed with this proposal and went further, specifying the constants of the platinum to be used and further proposing that above the upper limit of the platinum resistance thermometer a scale be set using the optical pyrometer. At the 5th General Conference on Weights and Measures (CGPM), held in Paris in 1913, this initiative was strongly encouraged. By 1923, when further discussions took place between the three major laboratories, they had each put into operation a scale from −38 to +444.5 °C. During these 1923 discussions the form of the future scale was agreed. It would comprise the platinum resistance thermometer from −38.81 °C up to the freezing point of aluminium at 650 °C, and from this temperature up to 1100 °C a Pt–Rh thermocouple would be used; above 1063 °C, the freezing point of gold, an optical pyrometer would be employed using the Wien equation for extrapolation.
The International Temperature Scale of 1927 (ITS-27) was adopted by the 7th CGPM in 1927 and slightly modified at the 8th CGPM in 1933, but because the modifications were considered editorial it remained the ITS-27. The next significant change did not take place until 1948. In this revision, the International Practical Temperature Scale of 1948 (IPTS-48), the only change below 0 °C was the disappearance of the extrapolation below the boiling point of oxygen to −190 °C because it had been found to be unreliable. The much more significant changes in this range did not take place until the next revision in 1968. In the optical pyrometer range, above the freezing point of gold, Wien's equation was replaced by the Planck equation and the value of the second radiation constant c2 was increased to 1.438 cm K. It was also decided to drop the name ‘degree centigrade’ and replace it by ‘degree Celsius’ so that all common temperature scales would have their units named after someone closely associated with them, namely Kelvin, Celsius, Fahrenheit, Réaumur and Rankine.
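The two radiation formulae at issue here are, in a modern sketch, Planck's law for the spectral radiance of a black body and Wien's approximation to it:

$$ L_\lambda(T) = \frac{c_1}{\lambda^{5}}\,\frac{1}{\exp(c_2/\lambda T) - 1}, \qquad L_\lambda^{\mathrm{Wien}}(T) = \frac{c_1}{\lambda^{5}}\,\exp(-c_2/\lambda T), \qquad c_2 = \frac{hc}{k}, $$

where $c_1$ and $c_2$ are the first and second radiation constants. The Wien form is an excellent approximation only when $c_2/\lambda T \gg 1$; at the higher temperatures reached by optical pyrometry the difference ceases to be negligible, hence the change.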
The IPTS-48 became very widely used in science and industry all over the world, and indeed even after it was replaced by the IPTS-68 20 years later it took many more years before references to IPTS-48 disappeared from various texts and manuals. In some areas this was perhaps no bad thing because, as we now know, in some temperature ranges IPTS-48 was a better representation of thermodynamic temperature than IPTS-68!
In the 40 years after the adoption of IPTS-48, a great deal of new work in thermometry was carried out, reflecting the growth of science in general in this period. Much of it was concentrated in the low-temperature range, stimulated by the flourishing of low-temperature physics. In the high-temperature range, up to the gold point, new gas thermometry was carried out and all of this led to the need to change IPTS-48. Before this happened, however, an important change was made in 1960 to the definition of the unit of thermodynamic temperature. The 1854 proposal of Kelvin, following on from that of Amontons, was finally adopted, namely that the unit of thermodynamic temperature should be defined by setting the value of a single fixed point. The single point chosen was the triple point of water, which was assigned the temperature of exactly 0.01 degree Kelvin above the freezing point of water, which in turn was fixed at 273.15 degrees Kelvin. This proposal had already been made in 1948 but at that time there were differences of opinion as to the temperature to be assigned to the absolute zero, whether it should be −273.15 °C or −273.16 °C. Remember that the temperature of the absolute zero had been deduced from gas thermometry mostly carried out in the 1930s based on an interval of exactly 100 °C between the freezing and boiling points of water.
This led to the interesting situation that the degree Kelvin, unit of thermodynamic temperature, would be identical to the International Practical Kelvin, unit of International Practical Kelvin Temperature defined by IPTS-48, only if the gas thermometry had been exactly right in assigning a temperature of −273.15 °C to the absolute zero. Since, formally, this could not be the case exactly, it was recognized that the definition of the unit of International Practical Temperature needed also to be changed. This took place in 1968 with the IPTS-68, in which the units of practical and thermodynamic temperatures were defined to be identical and equal to 1/273.16 of the thermodynamic temperature of the triple point of water. The unit itself was renamed ‘the kelvin’ in place of degree Kelvin and the symbol was given as K rather than °K.
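In equation form, the definition referred to here, together with the fixed relation between the Celsius and thermodynamic scales, can be summarized as

$$ 1\ \mathrm{K} \equiv \frac{T_{\mathrm{tpw}}}{273.16}, \qquad T_{\mathrm{tpw}} = 273.16\ \mathrm{K}\ \text{(exact)}, \qquad t/{}^{\circ}\mathrm{C} = T/\mathrm{K} - 273.15, $$

so that the triple point of water lies exactly 0.01 °C above 0 °C, while the boiling point of water is left to be determined by experiment.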
After the adoption of the IPTS-68, thermometric work in the national standards laboratories continued to be aimed at refining and improving the scale. Part of this effort was related to measurement of thermodynamic temperature over almost the whole range of the scale, some studies concentrated on improvements to the fixed points, and others concerned a particular property of scales, namely their ‘uniqueness’. The word uniqueness is used to indicate how well various realizations of the scale agree when different examples of the defining interpolation instruments are used. For example, two platinum resistance thermometers correctly calibrated at the specified fixed points according to the prescriptions of the scale will, at a temperature between the fixed points, give slightly different readings for the same temperature. These differences arise because no sample of platinum is exactly the same as another and thus a given interpolation equation will give slightly varying results for different thermometers. The magnitude of these differences is a measure of the uniqueness of the definition of the scale.
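A minimal numerical sketch, in Python, may make the idea of non-uniqueness concrete. It uses a simple linear two-point interpolation and invented resistance coefficients, not the actual fixed points or reference functions of any scale; the point is only that identical calibration procedures applied to slightly different platinum samples give slightly different readings between the calibration points.

# Illustrative sketch (hypothetical coefficients, not ITS-90 values): two platinum
# thermometer elements with marginally different resistance-temperature behaviour
# are both 'calibrated' at the same two points (0 and 100 degrees C) and read with
# the same linear two-point interpolation; between the points they disagree slightly.

def resistance(t, a, b):
    """Resistance (ohms) of a nominal 100-ohm element at temperature t in degrees C."""
    return 100.0 * (1.0 + a * t + b * t**2)

def indicated_t(r, r0, r100):
    """Temperature indicated by a linear interpolation between the two calibration points."""
    return 100.0 * (r - r0) / (r100 - r0)

# Two hypothetical wire samples with slightly different coefficients.
samples = [(3.91e-3, -5.8e-7), (3.90e-3, -5.9e-7)]

for t_true in (25.0, 50.0, 75.0):
    readings = []
    for a, b in samples:
        r0, r100 = resistance(0.0, a, b), resistance(100.0, a, b)
        readings.append(indicated_t(resistance(t_true, a, b), r0, r100))
    print(f"true t = {t_true:5.1f} C, indicated: {readings[0]:.4f} vs {readings[1]:.4f}, "
          f"difference = {1000.0 * (readings[0] - readings[1]):.1f} mK")

Run as written, the two 'thermometers' differ by several millikelvin in mid-range, even though both agree exactly at the calibration points; real scales use better interpolation equations precisely to shrink this effect.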
In IPTS-68, the apparent uniqueness differences observed were as large as 10 mK in the range below 250 K. This was not considered satisfactory. It was soon realized, however, that these discrepancies were in fact due mostly to differences in the realization of the low-temperature fixed points of IPTS-68 and that the non-uniqueness of the scale itself did not exceed about 1.5 mK in any part of the range.

The fixed points of the scale below room temperature were the boiling and triple points of hydrogen, neon, argon and oxygen, and above room temperature the boiling point of water and the freezing points of tin, zinc, silver and gold. A great deal of work was done to identify the factors that limited the reproducibility of these fixed points. These included the purity of the material and the detailed thermal conditions under which the phase transition is realized. The aim was to specify the conditions such that the ideal transition temperature could be reproduced to 0.1 mK. In order to achieve this at the boiling point of water it is necessary to measure the pressure to 0.4 Pa, or about 4 parts per million (ppm) of atmospheric pressure. This is very difficult and was the principal reason that the boiling point of water was displaced as one of the fixed points by the freezing point of tin, 231 °C. The relative uncertainty in pressure measurement needed for the low-temperature boiling points was less demanding but those points have, nevertheless, now been displaced by triple points realized in completely sealed cells. This was mainly to avoid the problems of measuring the pressure in a chamber at very low temperatures by means of a barometer at room temperature: it is difficult to obtain a good estimate of the pressure gradient down a connecting tube passing from room temperature to very low temperature.

The principal uncertainty in the realization of a freezing point is the purity of the material, and in recent decades very pure metals have become available with a total impurity content of a few tenths of a ppm. The detailed processes that take place during the melting and freezing of a pure metal are now well understood, and very uniform enclosures to contain the melting and freezing crucibles can now be obtained by means of heat pipes. These can provide a uniform temperature to better than a millikelvin over lengths of tens of centimetres. The remaining problems in thermometric fixed points are mainly the estimation of the effects of residual impurities in the fixed-point substance. For very high temperatures, above 2000 K, there is a new set of fixed points made from carbon/metal eutectics, developed only in the past five years or so. Extending beyond 3000 K, these have excellent reproducibility and are likely to lead to significant improvements in the reproducibility of the scale at these high temperatures. In the particular case of the triple point of water, the limiting uncertainty is now the knowledge of the isotopic content of the water. For those triple-point cells in which this has been measured, recent comparisons show a uniformity of about 25 μK in the triple-point temperatures measured with platinum resistance thermometers in cells from different sources.
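The pressure requirement quoted above for the boiling point of water can be checked with a rough Clausius–Clapeyron estimate (the numbers below are approximate textbook values inserted for illustration, not figures from the scale documents):

$$ \frac{dp}{dT} = \frac{L}{T\,\Delta v} \approx \frac{2.26\times 10^{6}\ \mathrm{J\,kg^{-1}}}{373\ \mathrm{K}\times 1.7\ \mathrm{m^{3}\,kg^{-1}}} \approx 3.6\times 10^{3}\ \mathrm{Pa\,K^{-1}}, $$

so a reproducibility of 0.1 mK demands that the pressure be known to about $3.6\times 10^{3}\ \mathrm{Pa\,K^{-1}} \times 10^{-4}\ \mathrm{K} \approx 0.4$ Pa, which is indeed roughly 4 ppm of standard atmospheric pressure (101 325 Pa).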
As regards the measurement of the thermodynamic temperature of these fixed points, extensive work has been carried out over the past 30 years. Thermometers that give direct measurements of thermodynamic temperature without, in principle, any previous knowledge of values of temperature, are known as primary thermometers; these include the gas thermometer, the acoustic thermometer, the total radiation thermometer and the electrical noise thermometer. Primary thermometers are those based on equations of state for which an explicit relation can be written down between temperature and the measured quantity without the introduction of unknown temperature-dependent functions or constants. The measured quantities in the thermometers mentioned above are pressure, volume, amount of gas, speed of sound, total radiant exitance (the radiant flux per unit area emitted from the surface) of a black body and electrical noise, respectively. The equations of state of these thermometers must include either the temperature of the triple point of water or the Boltzmann or gas constants. Of course, none of them are ideal primary thermometers because there are always small temperature-dependent effects that must be taken into account. For the gas and acoustic thermometers there are the departures from ideal behaviour of the gases used as thermometric fluids, and these are usually accounted for by extrapolating the measured quantities to zero pressure; in the total radiation thermometer the emissivity of the internal walls of the black body is temperature dependent, whereas in the noise thermometer it is the electrical resistance of the element that exhibits a temperature dependence. However, the uncertainty in the final value of thermodynamic temperature can be made small by successive iteration, which in these cases converges rapidly. Great efforts are made to evaluate and identify all the contributing sources of uncertainty but in the end, as for all primary or absolute measurements, confidence in the results is bolstered mainly by agreement obtained by different methods having different types of systematic error.
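For orientation, the idealized relations underlying the primary thermometers just listed can be written, in simplified zero-pressure or ideal limits (the real experiments apply corrections to each), as

$$ pV = nRT \ \ \text{(gas)}, \qquad u_0^{2} = \frac{\gamma R T}{M} \ \ \text{(acoustic)}, $$
$$ M_e = \sigma T^{4}, \quad \sigma = \frac{2\pi^{5}k^{4}}{15 h^{3} c^{2}} \ \ \text{(total radiation)}, \qquad \langle V^{2}\rangle = 4 k T R_{\mathrm{el}}\,\Delta f \ \ \text{(electrical noise)}, $$

where $u_0$ is the zero-pressure limit of the speed of sound in a gas of molar mass $M$ and heat-capacity ratio $\gamma$, $M_e$ is the radiant exitance of a black body, and $\langle V^{2}\rangle$ is the mean-square noise voltage across a resistance $R_{\mathrm{el}}$ in a bandwidth $\Delta f$. Each relation contains $T$ only in combination with $R$ (equivalently $N_A k$) or $k$, which is why such thermometers need no prior temperature calibration.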
By means of gas, acoustic and total radiation thermometry, it was shown in the 1970s and 1980s that there were significant differences between thermodynamic and practical temperatures even in the range around room temperature. It had become clear that the temperature of −273.15 °C assigned to absolute zero was in error and that, to be consistent with the fundamental interval of exactly 100 °C between the freezing and boiling points of water, a value of −273.22 °C would have been required. It was too late to change the definition of the kelvin (although this possibility was briefly discussed), so the consequence was that the temperature of the boiling point of water would have to be changed from 100 °C to something like 99.975 °C. In fact, the boiling point of water would no longer be a defining point of the scale, but its value and those of the fixed points above and below it would have to be changed to reflect these differences between thermodynamic and practical temperatures. All this was incorporated into the International Temperature Scale of 1990 (ITS-90).
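The arithmetic behind the figure of −273.22 °C is straightforward (using the rounded steam-point value quoted above):

$$ 1\ \text{old degree} = \frac{99.975\ \mathrm{K}}{100} = 0.99975\ \mathrm{K}, \qquad \frac{273.15\ \mathrm{K}}{0.99975\ \mathrm{K}} \approx 273.22\ \text{old degrees}; $$

that is, if the interval between the ice and steam points is forced to be exactly 100 degrees while the thermodynamic interval is only 99.975 K, absolute zero lies about 273.22 of those degrees below the ice point, not 273.15.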
At temperatures below the lower limit of ITS-90, 0.65 K, a provisional scale down to 10 mK was adopted in 2000 based on the melting pressure of helium and magnetic and nuclear orientation thermometry using superconducting transition temperatures as fixed points. At very high temperatures, above about 2000 K, new methods of measuring thermal radiation based on a cryogenic radiometer and using the carbon/metal eutectic fixed points show promise of a considerable improvement in accuracy.
With the unit of thermodynamic temperature being defined by giving the temperature of the triple point of water the value of 273.16 K exactly, the values of the Boltzmann constant k and the gas constant R are obtained by carrying out an appropriate experiment linking thermal to mechanical quantities at the temperature of the triple point of water. The experiments that have actually been used are constant-volume gas thermometry and acoustic thermometry, both of which determine R; note that R = N_A k, where N_A is the Avogadro constant. The Boltzmann constant has been determined from total radiation thermometry of a black body at the triple point and from electrical noise thermometry using an electrical resistor, also at the triple point. A project is now under way to determine k from measurements of the Doppler broadening of a molecular absorption line by means of laser spectroscopy of a gas in thermodynamic equilibrium. The smallest uncertainty so far obtained in any of these experiments is that of acoustic thermometry, which has reached about 2 ppm. This is equivalent to an uncertainty in temperature measurement of about 0.5 mK at room temperature. Plans exist to redefine the kelvin by adopting a fixed value for the Boltzmann constant. These are part of a much wider project to define the other units of the SI in terms of fixed values for the Planck constant, the Avogadro constant and the charge on the electron, but this is outside the scope of this review.
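Two small calculations help to fix the scale of these numbers; both are simple consistency checks of the figures quoted above, not new results. A relative uncertainty of 2 ppm corresponds near room temperature to

$$ u(T) \approx 2\times 10^{-6}\,T \approx 0.55\text{–}0.6\ \mathrm{mK} \quad \text{for}\ T \approx 273\text{–}300\ \mathrm{K}, $$

of the order of the 0.5 mK quoted. The Doppler-broadening route exploits the fact that the full width at half maximum of an absorption line of a molecule of mass $m$ and rest frequency $\nu_0$ in thermal equilibrium is

$$ \Delta\nu_{\mathrm{D}} = \nu_0\,\sqrt{\frac{8 k T \ln 2}{m c^{2}}}, $$

so a measurement of $\Delta\nu_{\mathrm{D}}$ in a gas held at the triple point of water yields $k$ directly.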
The story of temperature scales will never end, but it will change. Fresh discoveries in physics and new technologies will lead to novel ways of measuring temperature. The practical realization of ITS-90 is not a simple process but is time consuming and requires a high degree of skill to reach the smallest uncertainties. The need for a practical scale, however, remains the same as it was at the time of Fahrenheit and Amontons, namely that gas thermometers and other primary thermometers are complex and time-consuming devices as well as being less precise than the practical thermometers used in everyday life, even though their measurements are more fundamental. On the day that a primary thermometer is developed that is as precise as a platinum resistance thermometer and as easy to operate, there will be no further need for an International Temperature Scale. It is interesting to note that the first temperature range in which this is close to happening is the range at very high temperatures, where absolute radiometry can match both the precision and, nearly, the ease of operation of the ITS-90 optical pyrometers.
All of this has been to show that thermometry is still a subject very much alive and one in which advances continue to be made. The driving force for the continuing work in thermometry is of course user need. There are many industrial and scientific applications of thermometry that require the highest accuracy and reliability, and these can be met only by continually striving to improve and extend current science and technology. With this perhaps somewhat long introduction, we are now ready to examine Inventing temperature: measurement and scientific progress by Hasok Chang and the claims made in it.
The first four of the six chapters take particular items of knowledge relating to temperature and temperature measurement that, as he says, are taken for granted now but on closer examination reveal a deep puzzle that makes it appear impossible for that knowledge to have been obtained. These are examined and the judgements made at the time are reviewed. In the first chapter he gives a fairly detailed account of how fixed points of temperature were first created and then goes on in chapters two, three and four to describe the development and use of the thermometer, its various thermometric fluids and how it is possible to create a temperature scale without any previous knowledge of temperature (that is, roughly what we now call primary thermometry), or of the values of any fixed points, or even of whether or not they are fixed. The primary thermometry that took place in those days was, of course, infinitely more difficult than today's primary thermometry because there did not exist the vast infrastructure of secure scientific knowledge that we have today.
The last two chapters are more abstract and use the results of the first four to justify his central thesis on complementary science. The first four chapters introduce mostly well-known material, but it is well presented and he has clearly done his research. Someone interested in the history of thermometry would find these chapters of considerable interest. I do, however, take issue with him when he states that it was not possible to make any progress in thermometry before fixed points had been established. I think this is not the case. As mentioned here, the first reproducible thermometers were those made by Mariani, who succeeded in constructing ‘50 degree’ thermometers that all gave the same reading under identical conditions. He did not calibrate them at fixed points but simply observed that under any conditions of temperature they would all give the same readings. In my view this advance was the prerequisite to any progress in thermometry. I believe that it was first necessary to make reproducible thermometers before it could be possible to find out whether or not fixed points were really fixed. This is perhaps only a quibble, because the main story of the development of our knowledge of temperature and thermometry presented by Chang is well told.
I think it unfortunate that Chang chose to limit his exploration of the development of ideas on temperature and thermometry to the period up to the middle of the nineteenth century, essentially finishing with Kelvin. The problem with this approach is that, in making a case for complementary science, he finds himself examining the problems and apparent inconsistencies of the subject that were much discussed at the end of the eighteenth and beginning of the nineteenth centuries. All of this is done in the light of, and using the language of, modern history and philosophy of science. It would have been much more interesting had he extended his period up to modern times and examined today's thermometry, or at least the thermometry of the early twentieth century, in the light of today's history and philosophy of science. For a modern thermometrist, as I was for many years, this is frustrating because the problems and difficulties highlighted in his book have all long since been resolved in one way or another. Either they were overtaken by improved techniques or, much more probably, by the great advances in our understanding of the physical world that have taken place since then.
In looking for problems of temperature measurement that were unresolved or have since been hidden, Chang seems to have identified only one or perhaps two. The first is the realization of the boiling point of water and the second is the apparent existence of cold radiation. The problem with the realization of the boiling point of water was the difficulty, at the end of the eighteenth and early nineteenth centuries, in obtaining consistent values for it: whether water always boils at the same temperature, to what degree the boiling point depends on how long it has been boiling, to what extent superheating occurs and under what conditions, and so on. These were indeed difficult questions to answer in the light of the knowledge of the time and illustrate the immense difficulty of extending the frontiers of science at any epoch. They are not, however, questions that have any impact on thermometry today or even on thermometry at the beginning of the twentieth century.

Phase transitions continue to be used as the fixed points of the temperature scale and now include the triple points (the equilibrium temperature between the solid, liquid and vapour phases) and the freezing points (equilibrium between the solid and liquid phases), but no longer the boiling points (equilibrium between the liquid and vapour phases). Boiling points of hydrogen, neon, argon and oxygen were used until 1990 in the IPTS-68 but were dropped from the ITS-90 for two reasons. First, as we have seen, the requirements for pressure measurement became too demanding, and second, the development of completely sealed cells for the realization of triple points eliminated the need for a connecting tube to the outside for the measurement of the pressure. Before this, however, boiling points, including those of water and sulphur, had been used very successfully with reproducibilities of a few tenths of a millikelvin.

Supercooling in the realization of a freezing point is well known and, of course, for this purpose it is necessary to have both solid and liquid present, at the same time arranging that the solid/liquid interface is correctly positioned in the freezing-point cell so that the thermometer is as close as possible to the temperature at the interface. Various techniques are used to nucleate the freeze, that is, to bring about the first solid; without doing this it is indeed possible to have liquid many tens of degrees below the equilibrium solid/liquid temperature. Superheating is much less of a problem because the conditions needed to superheat a liquid (the absence of a liquid/vapour interface) are easy to avoid: the thermometer is always surrounded by the vapour and the liquid always rests in a container having many sharp edges to nucleate bubbles. However, to establish any phase transition as a temperature fixed point at the level of microkelvins, as is now sought for the triple point of water, raises a whole new range of problems. Some are practical while others are related to our lack of understanding of the physical processes at such a fine level. The resolution of such problems would undoubtedly improve our knowledge and give improved insight into the physical world, but this would not be complementary science in Chang's terms.
With regard to the existence of the so-called cold rays, this is cited as an example of ‘a fact that actively disturbs our basic conceptual schemes’. Marc-Auguste Pictet, following up his well-known experiment of 1791 that showed that heat could be focused by mirrors, subsequently demonstrated that cold apparently could be as well. He placed a flask of ice at the focus of one mirror and found that a thermometer placed at the focus of a distant mirror pointing at the first would fall in temperature. This led to many arguments about the nature of heat and cold that could not be resolved at the time. It is not an observation that would actively disturb our conceptual schemes today. Experiments of the sort Pictet carried out are, however, interesting in that they make one think about the physics involved; they belong to the class of experiments often described in the American Journal of Physics that are designed to intrigue students and to challenge them to provide the correct explanation. Some are exceedingly difficult to unravel. For example, some years ago the apparently astonishing observation was made that a glass of hot water placed outside in the snow always freezes more quickly than a glass of cold water. This provoked a long correspondence in the scientific press as to how such a situation could arise, but the point was not that our basic understanding of physics was in question but that apparently simple situations in the real world can be extremely complex and involve a subtle interplay of different effects.
From all this, I cannot agree with the conclusion reached by Chang that the difficulties encountered at the end of the eighteenth and during the first half of the nineteenth century in making good measurements of the boiling point of water, or the existence of cold rays, could be the starting point for a ‘complementary science’ study or indeed can be used to justify the whole idea of complementary science. Although there is much discussion in the book about how knowledge in thermometry can be obtained, an account that is interesting but contains little that is new or controversial, there seems to be nothing else that can be used to support the very wide claims he makes for complementary science. The idea that it would now be worth starting a research programme on the realization of boiling points because they were dropped ‘for imperfect reasons or for reasons that are no longer valid’ is, to say the least, unlikely to attract funding. The same remark applies to frigorific radiation, radiation that seems to be of cold rather than of heat.
His assertion that many things are protected from questioning and criticism in specialist science, so that its demonstrated effectiveness is also accompanied by a degree of dogmatism and narrowness of focus that can actually result in a loss of knowledge, is not supported by anything he has encountered in temperature measurement. The science he describes, in which there is a received canon of theory and practice that must not be criticized or put into question, is wholly at variance with my experience in thermometry. I restrict myself here to thermometry because it is through temperature and temperature measurement that Chang is defending his thesis that complementary science can play a useful role. The first thing that young people do on coming into the thermometry laboratory is to criticize what has been done before and try to do it differently and better. In this regard a thermometry laboratory is no different from any other I have encountered. Today's thermometry conferences are full of new ideas for fixed points and primary methods, such as Coulomb-blockade and Rayleigh-scattering thermometry, as well as new ideas on old problems such as the calculation of the emissivity of near-blackbody cavities and the details of phase transitions.

There have not, I have to admit, been great controversies akin to plate tectonics in geophysics that have divided the community and in which people have taken up entrenched positions. In this sense thermometry is a poor example to take, as would be almost any area of metrology. Great advances have taken place in almost all areas of measurement. We can now, for example, use the macroscopic quantum effects, the Josephson and quantum Hall effects, to realize quantum standards of the volt and the ohm reproducible to better than parts in 10⁹; the watt balance is leading to a new definition of the unit of mass in terms of the Planck constant, to say nothing of new atomic clocks using cold atoms or single ions with potential uncertainties of parts in 10¹⁸. There is much new science, and half a dozen Nobel Prizes in these areas in recent years, but no great controversies. There exist the usual personal rivalries and differences of opinion because even thermometrists are human beings!

However, there now exists a body of secure physical knowledge, including the linking of practical measurement standards to the fundamental constants of physics, that is self-consistent to a very high level and on which all of today's high-technology industries are based. For example, the thermodynamics of gas turbine engines is understood at a very detailed level and this allows us to predict confidently by how much the fuel efficiency of an engine will improve by raising the operating temperature of the turbine blades. It is such knowledge, together with reliable temperature measurements and improvements in high-temperature materials and engine management, that allows us to fly non-stop from London to Singapore, or soon even non-stop from London to Australia. The implication in Chang's book that somehow all of this is on a weak foundation and that his complementary science will uncover previously hidden inconsistencies is untenable.
I set out in this review to ask the question of the extent to which Chang's examination of the history of temperature and temperature measurement supported his thesis that complementary science can contribute to present-day knowledge of thermometry and, by extension, to science as a whole. My conclusion is that no such support has been demonstrated.
© 2006 The Royal Society