
A Model for Liquid Cooling


The motivation for creating a mathematical model to study liquid cooling came from a recent happenstance moment when I was waiting for a cup of piping hot tea to cool down to a reasonable temperature, after having burnt my tongue upon sipping some. Tongue stinging, I was blowing on the tea to cool it, and thought to myself in exasperation, “How much time will this take to cool!?” Et voilà, here we are …

On a serious note, this problem is indeed an interesting one from a physics point of view. We all know that the major factor in liquid cooling is the phenomenon of evaporation, but the exact laws governing this process are not familiar to most of us and aren’t explicitly taught in school either. So, it is interesting to scientifically study such an ordinary, yet important phenomenon that we all take for granted. Indeed, from sweating, to drying our clothes, to waiting for a hot beverage to cool down, to transpiration in trees, evaporation is an invisible process that surrounds us. On a bigger scale, it is a major part of the water cycle and hence of Earth’s biosphere and life itself. Where there is water, there is life, and also evaporation. From the following model, one can see that liquid cooling can indeed be modeled to a very good approximation using little more than high-school physics.




  • A hot, evaporative liquid is filled up to a height ‘h’ in a cylindrical, insulating, open vessel with cross-sectional area ‘A’.
  • Cooling has two components – radiative and evaporative, both of which occur only at the liquid surface.
  • We assume that the liquid remains in quasi-thermal equilibrium at all times.
  • Due to the open vessel, the liquid vapour from evaporation escapes into the atmosphere and therefore, the reaction of vapour condensing back to liquid is negligible.
  • During evaporation, the bulk-mass of the liquid does not change much.
  • The following symbols represent the physical quantities involved in our calculations:

ρ = liquid density
r_liq = mean liquid particle radius
m, M = liquid particle mass and molar mass respectively
T = liquid temperature
t = time
S = liquid specific heat capacity
H_evp = enthalpy of vaporization of the liquid in J·mol⁻¹
m_liq = bulk mass of the liquid
p = effective number of liquid molecules on the surface as a fraction of the total number of molecules on the surface, which may contain impurities (any substance particles which do not contribute to evaporation)
a_B = Stefan–Boltzmann constant
e = emissivity coefficient of the liquid
N_A = Avogadro number
K_B = Boltzmann constant
R = universal gas constant



In an infinitesimal time interval ‘dt’, the heat lost by the bulk of the liquid is given by

dQ = m_liq·S·dT

where the infinitesimal temperature change can be expressed as a sum of two contributions – one from cooling due to black-body radiation, and another from cooling due to evaporation, both of which occur at the liquid surface:

dT/dt = (dT/dt)_rad + (dT/dt)_evap


Radiative cooling contribution:

The heat loss due to black-body radiation at the liquid surface is given by the Stefan-Boltzmann law:

(dQ/dt)_rad = e·a_B·A·(T^4 − T_atm^4)

T_atm is the temperature of the surroundings/atmosphere in contact with the liquid surface, which exchanges heat with the liquid via black-body radiation. T_atm is equal to the room temperature, which here we take to be the standard value of ~25 degrees Celsius (298 K).

The bulk-mass of the liquid can be expressed in terms of its density and volume:

m_liq = ρ·A·h
Thus, from above, we obtain the contribution to the rate of change of temperature due to radiative cooling:

(dT/dt)_rad = −(e·a_B/(ρ·h·S))·(T^4 − T_atm^4) ……… (1)

Evaporative cooling contribution:


We formulate here the physics of evaporative cooling. Firstly, we assume that the molecules at the liquid surface follow the classical Maxwell-Boltzmann statistics. However, we know that this is not exactly true since every molecule of the liquid experiences several intermolecular forces of attraction. It is quite difficult to theoretically model these forces as they depend on the exact chemical structure and spatial distribution of the liquid molecules. Instead, we can find a clever way around this by using the experimentally known enthalpy of vaporization of the liquid ‘Hevp’ (in SI units, the energy in Joules required to vaporize one mole of the liquid), which accounts for both the energy needed to overcome the intermolecular forces and the work done by the vapor to push back the atmosphere. Hevp thus effectively encapsulates information about the liquid properties and the physical properties of its surroundings (therefore it is not constant and varies with the temperature and pressure). Hence, if we consider a molecule at the liquid surface which has at some given time, the upward component of its velocity as ‘vz’, then the minimum kinetic energy that it needs to possess (in the upward direction) to overcome all forces of attraction and escape from the liquid surface must be equal to the heat of vaporization per unit molecule of the liquid: –
(1/2)·m·v_esc^2 = H_evp/N_A


Thus, we can define an ‘escape velocity’ for a liquid particle at the surface. Any liquid particle which has a vertical component of velocity greater than or equal to the escape velocity will escape into the atmosphere and contribute towards evaporation. Note that the molecular mass times Avogadro number is the molar mass, ‘M’.
v_esc = sqrt(2·H_evp/(N_A·m)) = sqrt(2·H_evp/M)


Thus, we model the liquid surface like an ideal gas (since we use the Maxwell-Boltzmann statistics, which is the foundation of the Kinetic Theory of Gases). But unlike a true gas, the liquid molecules are not free to escape into the atmosphere unless they have sufficient energy (modeled as the escape velocity defined above) to overcome the various inter-molecular forces of attraction experienced inside a liquid.
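To get a feel for the numbers, the escape velocity can be evaluated directly from the definition above. The H_evp and M values below are illustrative handbook figures for water near its boiling point, assumed here for the sake of the example:

```python
import math

# "Escape velocity" of a surface molecule: (1/2)*m*v_esc^2 = H_evp/N_A,
# i.e. v_esc = sqrt(2*H_evp/M). Illustrative values for water near 100 C.
H_EVP = 40660.0   # enthalpy of vaporization, J/mol (assumed handbook value)
M = 0.018         # molar mass of water, kg/mol

v_esc = math.sqrt(2.0 * H_EVP / M)
print(f"v_esc for water ≈ {v_esc:.0f} m/s")
```

A couple of kilometres per second – several times the typical thermal speed, which is why only the fast tail of the Maxwell-Boltzmann distribution evaporates.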

We can take a detour here to qualitatively interpret the cooling effect of evaporation based on the above model. An ideal gas with a given temperature has a velocity distribution given by the Maxwell-Boltzmann statistics. This is governed by the random thermal motions and collisions of the gas particles. If one were to ask what would happen if we selectively start removing from the system only those particles which have acquired (because of their random thermal motion/collisions) a velocity greater than a threshold value, then one would find that the result would be a decrease in the net energy, average velocity, and hence the temperature of the gas. This is exactly the cooling effect that occurs in liquids due to evaporation, which we have modeled here thus.
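This “selective removal of fast particles” effect can be demonstrated with a tiny Monte Carlo sketch: sample one-dimensional thermal velocities from a Maxwell-Boltzmann (Gaussian) distribution in reduced units, discard the particles above a threshold speed, and watch the mean kinetic energy of the remainder drop. The threshold and the units here are arbitrary choices for illustration:

```python
import random
import statistics

random.seed(0)

# Sample 1-D thermal velocities from a Maxwell-Boltzmann distribution;
# in reduced units (m = K_B*T = 1) this is just a standard Gaussian.
velocities = [random.gauss(0.0, 1.0) for _ in range(100_000)]

def mean_kinetic_energy(vs):
    return statistics.fmean(0.5 * v * v for v in vs)

before = mean_kinetic_energy(velocities)   # equipartition: ~0.5 per DOF

# 'Evaporate' every particle whose upward velocity exceeds an arbitrary
# threshold of 2 sigma, mimicking escape of the fastest surface molecules.
remaining = [v for v in velocities if v < 2.0]
after = mean_kinetic_energy(remaining)

# The survivors are cooler on average, i.e. the 'liquid' temperature drops.
print(f"mean KE before: {before:.3f}  after: {after:.3f}")
```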

For a surface particle with v_z ≥ v_esc, we define ‘evaporation’ to have occurred when the particle travels upwards a distance of at least 2·r_liq in a short time interval Δt_evap = 2·r_liq/v_z, during which a new particle from inside the bulk of the liquid would have filled the void left by the escaping particle. Since the backward reaction of condensation is negligible, the process is not reversible and the particle escapes into the atmosphere, thus contributing towards evaporation.


The number of particles at the liquid surface can be expressed as the total number of particles in the bulk of the liquid times the fraction of the total volume that is the liquid surface:
N_surf = (ρ·A·h/m)·(2·r_liq/h) = 2·ρ·A·r_liq/m


From the Maxwell-Boltzmann statistics, the fraction of liquid particles at the surface which have an upward component of their velocity between vz ± dvz is given by
dN/N_surf = sqrt(m/(2π·K_B·T))·exp(−m·v_z^2/(2·K_B·T))·dv_z


Each evaporating liquid particle removes on average, Hevp/NA amount of heat from the liquid. Hence, the rate of heat loss due to the evaporating particles (i.e. those particles at the liquid surface that have an upwards velocity greater than or equal to the escape velocity) is
(dQ/dt)_evap = (H_evp/N_A)·N_surf·∫[v_esc → ∞] sqrt(m/(2π·K_B·T))·exp(−m·v_z^2/(2·K_B·T))·(v_z/(2·r_liq))·dv_z


Substituting the expressions for the terms in the integral and carrying out the integration, we obtain the contribution to the rate of change of temperature due to evaporative cooling (we use the relation N_A·K_B = R, the universal gas constant):

(dT/dt)_evap = −(p·H_evp/(h·S·M))·sqrt(R·T/(2π·M))·exp(−H_evp/(R·T)) …….. (2)

Note that if the liquid contains some non-evaporative impurities, this reduces the number of liquid particles available at the surface which can potentially contribute towards evaporation. This is represented through the fraction ‘p’. Also, since there is heat loss, the term has an overall negative sign.

It is also interesting to note that the rate of cooling obtained here is inversely proportional to the height of the liquid in the vessel and is independent of the surface area of the liquid. We are taught about the dependence of the rate of evaporation on the surface area of the liquid, but this result tells us that, for cooling, it is the liquid height that matters. Spreading out a liquid increases its area, but also decreases its height, and that is why we observe a higher rate of evaporation in ‘thin films’. In other words, liquid in a vessel with a small cross-sectional area filled up to a lesser height can cool faster than liquid in a vessel with a large cross-sectional area filled up to a greater height. Also, liquid in a large open container versus a small open container will cool at the same rate if it is filled up to the same height in both.

Finally, another interesting point is that although we use rliq to calculate some intermediate quantities, it does not explicitly appear in the final expression for the rate of cooling.

Rate of liquid cooling:

The contribution from (1) and (2) give the net rate of liquid cooling as a function of the liquid properties, temperature, properties of the surroundings, and the height up to which it is filled in the vessel: –
dT/dt = −(e·a_B/(ρ·h·S))·(T^4 − T_atm^4) − (p·H_evp/(h·S·M))·sqrt(R·T/(2π·M))·exp(−H_evp/(R·T))


Rate of mass-loss:

Although we have assumed that the bulk-mass of the liquid does not change much (and this is a good approximation if we have a large quantity of liquid), it however does not hold good for say, thin film evaporation, where the mass-loss is significant compared to the tiny quantity of liquid in the film. It is not too difficult to derive the rate of mass-loss. The derivation is the same as that of the rate of cooling due to evaporation, except we must replace the term Hevp/NA with ‘m’, since each evaporating molecule removes a mass of ‘m’ from the bulk of the liquid. Thus, we have
|dm_liq/dt| = ρ·A·sqrt(R·T/(2π·M))·exp(−H_evp/(R·T))


This gives us the expression for the rate of change of the height of the liquid in the vessel (where the impurity fraction ‘p’ and the overall negative sign are included as done earlier): –
dh/dt = −p·sqrt(R·T/(2π·M))·exp(−H_evp/(R·T))




Solving the ordinary differential equation for the rate of cooling will give us the variation of the liquid temperature with time, starting from some initial (high) temperature and gradually falling to a final temperature.

If we wish to account for the mass-loss along with the cooling, then both ODEs (cooling rate and mass-loss rate) must be solved simultaneously, because they are coupled.

An analytical solution for either case is difficult, and hence we solve the equations numerically.

Hevp is an empirical quantity in our model which also varies with the temperature and pressure. Hence, for a liquid in contact with the atmosphere (i.e. at a given constant pressure), the temperature variation of its Hevp needs to be recorded from an empirical data-sheet. Interpolating this data will allow us to calculate the value of Hevp at any temperature, which is then fed into the equations to be numerically solved.

Choosing 95% pure water as our evaporative liquid, filled up to a height of ~8 cm in an average-sized drinking cup kept outside (making this system equivalent to, say, a regular cup of coffee), and accounting for the mass-loss of the liquid, we obtain the results shown in the plots below.
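For readers who want to reproduce the numbers, here is a minimal numerical sketch of the two coupled ODEs. It uses simple Euler integration and a linear interpolation of water’s tabulated H_evp between room temperature and boiling; the property values, emissivity, and the linear fit are my own assumptions, not the article’s exact inputs:

```python
import math

# Physical constants
A_B = 5.670e-8     # Stefan-Boltzmann constant a_B, W.m^-2.K^-4
R = 8.314          # universal gas constant, J.mol^-1.K^-1

# Assumed properties for ~95% pure water (illustrative handbook values)
RHO = 997.0        # density, kg.m^-3
S = 4186.0         # specific heat capacity, J.kg^-1.K^-1
M = 0.018          # molar mass, kg.mol^-1
E = 0.95           # emissivity of the water surface (assumed)
P = 0.95           # fraction of surface molecules that are water
T_ATM = 298.15     # room temperature, K

def h_evp(T):
    """Crude linear interpolation of water's enthalpy of vaporization (J/mol)
    between tabulated endpoints: ~44.0 kJ/mol at 298 K, ~40.7 kJ/mol at 373 K."""
    return 43990.0 - (43990.0 - 40660.0) * (T - 298.15) / 75.0

def dT_dt(T, h):
    """Net cooling rate: radiative term (1) plus evaporative term (2)."""
    H = h_evp(T)
    radiative = -E * A_B * (T**4 - T_ATM**4) / (RHO * h * S)
    evaporative = (-P * H / (h * S * M)
                   * math.sqrt(R * T / (2.0 * math.pi * M))
                   * math.exp(-H / (R * T)))
    return radiative + evaporative

def dh_dt(T):
    """Rate of fall of the liquid height due to evaporative mass loss."""
    H = h_evp(T)
    return -P * math.sqrt(R * T / (2.0 * math.pi * M)) * math.exp(-H / (R * T))

def simulate(T0, h0, T_final, dt=0.1, t_max=3600.0):
    """Euler-integrate the coupled ODEs until the liquid cools to T_final."""
    T, h, t = T0, h0, 0.0
    while T > T_final and t < t_max:
        T, h, t = T + dT_dt(T, h) * dt, h + dh_dt(T) * dt, t + dt
    return t, T, h

# A beverage at 80 C filled to 8 cm, cooled to the 'perfect' 50 C:
t_cool, T_end, h_end = simulate(T0=353.15, h0=0.08, T_final=323.15)
print(f"cooled in {t_cool/60:.1f} min; height dropped to {h_end*100:.2f} cm")
```

With these assumed inputs the sketch lands in the same ballpark as the article’s plots (a couple of minutes of cooling and a few millimetres of height loss for the 80 → 50 °C case).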


From the results obtained in the plots we can see that it would take a cup of boiling water around 10 minutes to cool down to room temperature (provided that air is continuously blown on the water to remove the vapor molecules, minimizing the backward reaction of condensation, as is required for our model of liquid cooling to be applicable). During this time, the height of the water in the cup would fall by around 1 cm owing to the mass lost by the liquid due to evaporation.

Also, if a hot beverage is served at 80 degrees Celsius, making it too hot to drink, it would take close to 2 minutes of continuous air blowing to cool it down to 50 degrees Celsius, which is generally regarded as the perfect drinking temperature for a hot beverage.

These results seem to agree reasonably well with observations, but a proper experimental study would be required to verify the accuracy of this model.



  • Experimental realization of this model: For experiments to reasonably agree with the results of this model, the experimental conditions should be such that the assumptions under which this model holds must be maintained. An important requirement is that the reaction of vapor condensing back to liquid should be negligible. This can be achieved by continuously blowing on the liquid surface to remove the vapor molecules.
  • Liquid cooling another body: The cooling effect of evaporation is a well-known and widely used phenomenon. Our bodies sweat when it gets hot so that the water in the sweat can absorb the excess heat and evaporate, thus cooling us down. The rate of cooling of a body in contact with an evaporating liquid can be obtained by solving the heat transfer equation for the body and the liquid, using our model to account for the heat loss from the system due to liquid cooling.
  • Evaporative freezing: Under the right conditions, for instance, inside a vacuum chamber, it is possible for evaporation to cool a liquid till it freezes. It is however important to note that in a vacuum chamber, upon reducing the pressure, we quickly move to a different point on the phase diagram for that substance, and hence evaporation can technically become boiling. However, the rate of cooling can still be modeled using the same concepts as done here, with some modifications considering that boiling is a bulk phenomenon whilst evaporation is a surface phenomenon. Also, to apply our model for this case, it is important to use the correct values of Hevp, since it is a function of the temperature and pressure.
  • While numerically solving for the temperature and height variation using the equations derived above, care must be taken that the initial and final temperature values do not go beyond the boiling and freezing points of the liquid respectively (at a given pressure), where the state of matter changes and the physics of liquid evaporation no longer holds.
  • Although our model is a good approximation of real liquid evaporation, it does suffer from some limitations. One of these is the assumption that the liquid remains in quasi-thermal equilibrium at all times. While this assumption may be reasonable for thin films, there is bound to be differential cooling and heat transfer (convection, etc.) inside the liquid if the liquid height is not small. In that sense we are overestimating the rate of cooling, because our assumption implies that heat transfer occurs instantly between different parts of the liquid.
  • Another limitation is perhaps the way to model Δtevap. An alternate, but more complex way would be to calculate the probability for a liquid molecule at the surface to achieve a velocity greater than or equal to the escape velocity due to collisions with other liquid molecules. This can be calculated using the Maxwell-Boltzmann statistics. Using this, we can obtain the average number of collisions required before such a velocity may be achieved. Multiplying this with the mean free time between two collisions would give us the average time for a liquid molecule at the surface to evaporate, i.e. Δtevap. This can improve the accuracy of our model, but it would make the expression for the evaporative cooling rate more complex. The current way to model Δtevap seems to overestimate the cooling rate. Thus, our model provides an upper limit on the liquid cooling rate or a lower limit on the cooling time.
  • The contribution of the radiative term to liquid cooling is generally small compared to the evaporative term. Hence, the major factor in liquid cooling is evaporation.



Following the scientific method, this model now needs to be tested against experiments to check how accurate its predictions are. So, if you are jobless, and are waiting for your hot cup of tea at 80 degrees Celsius, filled up to a height of 8 cm, to cool down to the perfect drinking temperature of 50 degrees, please help test out this model by checking if it takes around 2 minutes of blowing on it continuously to get the job done!



Reference: Brown F, Diller KR, ‘Calculating the optimum temperature for serving hot beverages’.


Doppler Free/Saturated Absorption Spectroscopy

A concept note on Doppler Free/Saturated Absorption Spectroscopy.



Saturated Absorption Spectroscopy, also known as Doppler Free Spectroscopy, is an experimental method used to determine the exact frequency of the transitions of valence electrons in the atoms of substances between their ground state and optically excited states.

The motivation for utilizing this method to measure the transitions of optically excited electrons, over other spectroscopic techniques (such as observing emission spectra), is its massive plus point: the total elimination of Doppler Broadening (a physical phenomenon that is a consequence of the Doppler Effect), which otherwise induces a large inherent error in the precise measurement of the transition frequencies.

In atomic physics experiments, this method is also utilized to tune the frequency of a Laser to the exact transition frequency of the atom.



We know from atomic theory about the quantization of energies of bound electron states in atoms. A sample of a certain substance, when provided with sufficient energy, shows a transition of the valence shell electrons from their ground state to an optically excited state. These optically excited electrons have a probability/lifetime associated with them of decaying back to the ground state, and in that process, they release photons with an energy equal to the energy difference between the optically excited state and the ground state. The frequency of this emitted photon is simply this energy difference divided by Planck’s constant.

However, while practically observing the emission spectrum of a given sample to measure these transition frequencies, we must account for the fact that the atoms of the sample (say in a vapour state) are not stationary. They are continuously bouncing around in random thermal motion as given by the Maxwell-Boltzmann statistics. Thus, the photons released during transitions of the valence electrons are being emitted from a moving source. Therefore, the frequency observed by the experimentalist is Doppler shifted due to the Doppler Effect. Since the atoms in the sample perform random thermal motion with a continuous distribution of velocities, a continuous band of emission frequencies is observed instead of the expected single transition line. This observation is called Doppler Broadening.


Mathematically, the Maxwell-Boltzmann distribution gives the number of particles moving with velocity ‘v’ as follows:
dN(v) = N·sqrt(m/(2π·K_B·T))·exp(−m·v^2/(2·K_B·T))·dv


where N is the number of sample atoms, m is the mass of an atom, T is the temperature of the sample, and K_B is the Boltzmann constant. If w_0 is the frequency of the photon emitted by an atom at rest, then the frequency observed from an atom moving with velocity ‘v’ along the line of sight is Doppler shifted (in the non-relativistic limit, valid at regular temperatures) to:

w = w_0·(1 + v/c)
Note that we assume a small sample width, so that the velocity components lateral to the viewing direction have no role in the Doppler shift, and we may consider only the velocity components along the line of viewing (which is what is meant by ‘v’ here). Substituting ‘v’ as a function of w from the Doppler shift formula into the Maxwell-Boltzmann velocity distribution gives us the observed transition frequency spread, which is Gaussian: –
dN(w) = N·(c/w_0)·sqrt(m/(2π·K_B·T))·exp(−m·c^2·(w − w_0)^2/(2·K_B·T·w_0^2))·dw


This distribution has a frequency spread given by the full width at half maximum (FWHM) value: –

Δw_FWHM = w_0·sqrt(8·K_B·T·ln(2)/(m·c^2))
This is the spread in the measured emission frequency due to Doppler Broadening. For Rubidium atoms at room temperature, the Doppler Broadening is around 500 MHz. So, the precision in measuring the transition frequency is hindered greatly by Doppler Broadening, rather than being limited simply by the fundamental width of the resonance.
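As a quick sanity check of that figure, the FWHM formula can be evaluated numerically. The transition wavelength (Rb D2 line near 780 nm) and the isotope mass below are standard values I am assuming; they are not stated in the note:

```python
import math

# Doppler FWHM: dw = w0 * sqrt(8*K_B*T*ln2 / (m*c^2)), evaluated for Rb-85.
KB = 1.380649e-23        # Boltzmann constant, J/K
C = 2.99792458e8         # speed of light, m/s
AMU = 1.66053907e-27     # atomic mass unit, kg

m_rb = 85.0 * AMU        # mass of a Rb-85 atom (assumed isotope)
nu0 = C / 780.241e-9     # Rb D2 transition frequency, ~384 THz (assumed line)
T = 298.0                # room temperature, K

fwhm = nu0 * math.sqrt(8.0 * KB * T * math.log(2) / (m_rb * C**2))
print(f"Doppler FWHM ≈ {fwhm / 1e6:.0f} MHz")
```

The result comes out close to the ~500 MHz quoted above.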



The Doppler Broadening effect can be overcome/eliminated using the Saturated Absorption Spectroscopy set-up.

In this setup, the sample is vapourized in a vapour cell (yellow box in figure 3.) and two laser beams with opposite directions of propagation are passed through it. One of these beams has a high intensity, and is called the ‘pump beam’, while the other is weak and is called the ‘probe beam’. For the purpose of this explanation, assume that the sample atoms have two accessible states, Ei (ground state), and Ef (optically excited state). The purpose of the pump beam is to saturate the valence electrons of the sample atoms into their excited states (hence the term ‘saturated’ in Saturated Absorption Spectroscopy). Thus, only a limited number of atoms are available for the probe beam to interact with. The probe beam intensity is being analysed at one end of the tube, and if it gets absorbed, there is a dip in the intensity curve. Thus, the purpose of the pump beam is to help provide ‘contrast’ to get a good absorption spectrum of the probe beam. Also, it is apparent now that in this technique, we analyse the absorption spectrum of a laser source instead of an emission spectrum of the sample, hence the term ‘absorption’ in the title.



The genius behind this method is revealed if we think about which populations of atoms the two laser beams interact with. If both beams are fired at a frequency below that of the sample’s resonance frequency (transition frequency w0), then due to Doppler shifting, the probe beam will interact with atoms moving in a direction opposite to its propagation direction, and the same holds true for the pump beam. For frequencies above the resonance frequency, the directionalities are reversed. The important point here is that since the probe and pump beams travel in opposite directions, they never interact with the same populations of atoms, except for a frequency equal to the resonance frequency, when both beams interact with the same population of atoms viz. those that have velocity components purely perpendicular to the beam propagation directions.

Thus, at other frequencies, the intensity of the probe beam is low (it is being absorbed by atoms and the pump beam doesn’t provide contrast by saturation since it interacts with a different population of atoms). But at the resonance frequency, both beams interact with the same population of atoms, which are saturated into their optically excited states by the pump beam, and thus only a few atoms interact with the probe beam, and most of it is unabsorbed. Therefore, we observe a sharp dip in the absorption spectrum of the probe beam, which allows us to precisely measure the transition frequency of the atoms, whilst circumventing the problem of Doppler Broadening. This is the concept of Doppler Free/Saturated Absorption Spectroscopy. The sharp dip in the absorption curve at the transition frequency is known as the ‘Lamb Dip’.
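The mechanism can be illustrated with a toy numerical model: sum the probe’s Lorentzian absorption over Doppler-distributed velocity groups, reducing the contribution of any group that the counter-propagating pump saturates. All widths and the saturation depth are arbitrary illustrative parameters, not real atomic data:

```python
import math

# Toy saturated-absorption model (arbitrary units).
DOPPLER = 10.0   # Doppler width of the velocity distribution
GAMMA = 0.5      # natural (Lorentzian) linewidth
SAT = 0.7        # fractional saturation caused by the pump beam

def lorentzian(x):
    return GAMMA**2 / (x**2 + GAMMA**2)

def probe_absorption(detuning, n_groups=2001):
    """Probe absorption at a given detuning from resonance, summed over
    velocity groups with Maxwell-Boltzmann (Gaussian) weights."""
    total = 0.0
    for i in range(n_groups):
        kv = -30.0 + 60.0 * i / (n_groups - 1)    # Doppler shift k*v of group
        weight = math.exp(-((kv / DOPPLER) ** 2))  # population of this group
        # Probe is resonant with groups where detuning ~ +kv, while the
        # counter-propagating pump saturates groups where detuning ~ -kv;
        # the two coincide only at zero detuning (kv ~ 0).
        saturation = 1.0 - SAT * lorentzian(detuning + kv)
        total += weight * lorentzian(detuning - kv) * saturation
    return total

on_res = probe_absorption(0.0)
off_res = probe_absorption(3.0)   # still well inside the Doppler profile
print(on_res < off_res)           # Lamb dip: less absorption on resonance
```

Exactly on resonance the pump has already saturated the (zero-velocity) atoms the probe addresses, so the probe absorption dips below its value at nearby detunings – the Lamb dip.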



By coupling this concept with a Michelson Interferometer, one may even measure the precise energy levels in hyperfine splitting, as can be easily demonstrated for Rubidium atoms.




The Nature of Nature’s Study

What really is ‘science’? What is its nature? Why does it work? Why is it useful? What is the nature of ‘reality’? Does science offer the absolute version of reality? In this article, I offer my views/insights and generally accepted explanations to these questions. Note that I use the word ‘explanations’ not ‘answers’, because nobody who has ever lived or who is currently alive knows the true nature of reality, but all the accumulated pool of human knowledge, at the least, allows us to make certain comments about these questions.  It is good to reflect and gain insight into what we do know and what we don’t know; what is, and what isn’t; what can be and what simply cannot be. If you are confused by my seemingly arbitrary banter, generated more through an intuitive analysis of the summary of this article rather than logical reasoning, please read on and let us formulate a fresh, clear chain of reasoning from the very beginning:


Human beings have the ability of ‘cognition’, which means that we can observe events occurring in the world around us, and discern patterns in those events. Over the course of human history, we have been observing events around us, studying nature, and noting down the patterns which we observe in those events. What separates us humans from other living beings are our superior brains, which have given us the ability to store information extra-somatically (i.e. outside our genes and bodies); to transfer information to other humans; and to learn information from other humans (via communication). Hence, in this sense, human knowledge can be thought of as an independent ‘entity’, ever-growing through the efforts of all humans who exist, cognize, learn, store and transfer information. Although a single human may perish, the ‘entity’ of human knowledge survives and evolves as long as other humans exist. Thus, this entity of human knowledge has managed to evolve exponentially over the past centuries. The experience/knowledge gained by other living creatures, meanwhile, gets wiped out continuously with the death of the creature, not having been transferred or stored anywhere, and the process starts all over again with each new progeny. Moreover, other creatures do not have the ability to comprehend and analyse knowledge as we humans do. This is because other living beings lack precisely the mental prowess which derives from the superior brains that humans possess. This gargantuan accumulation of human knowledge, or “data” of natural events, is the first step towards science as we know it.

Furthermore, the superior brains of humans have the ability to “reason”. This means that we are exceptionally good at analyzing and finding patterns in observed data via logical reasoning. Also, the more data you are given, the more you can generalize those patterns, meaning that merely a few general patterns can then be used to explain a large number of phenomena in nature. Even if two phenomena initially appear to be entirely separate, unrelated events, there may exist a single underlying pattern that can explain both, simultaneously. A prime example of this would be the following two observations: “things fall to the ground when dropped from a height” and “the Moon goes around the Earth”. To a person without knowledge (like all the humans of the past), the two events would appear to be quite unrelated. But now, it is common knowledge that they can both, in fact, be explained by the same general and universal phenomenon of gravity.

Here begins “science” as we know it: Humans, tapping into the ever-present, ever-growing pool of “data”, have come to realize that the patterns observed in nature can be generalized into merely a handful of “Laws of Nature” better known in the modern scientific tongue as “The Laws of Physics” that have the potential to explain literally “everything” (indeed, today it is possible to explain *almost* all of the *currently known* natural phenomena through merely four fundamental forces of nature – Gravity, Electromagnetism, the Strong Nuclear Force, and the Weak Nuclear Force).

We can digress here for a moment and observe how intricately and infinitely intertwined the nature of science and reality is with the nature of human consciousness and human evolution. Evolution preferred to give humans big brains, precisely because that allowed us to deduce the laws of nature, and utilize those laws to enslave nature for serving our needs and desires, ensuring our survival. Dear reader, I urge you to take a moment and ponder all the objects in your immediate vicinity. Chances are that most of them have been designed by humans, using the knowledge and the laws of nature accumulated over time, to make nature serve your needs, making your life a jolly jaunt and a piece of cake compared to the difficult life that, say, a deer (a comparatively ignorant creature, unable to deduce or use the laws of nature to benefit itself) must lead in the wild. But I digress too much – we shall pick up where we left off.

Over time, humans have also realized one more thing (heads up because it gets quite self-referential by this time, the complexity being a marker of the evolution of human knowledge): By the process of observation, analysis and learning, we have been able to figure out and perfect the most efficient and “proper” way of observation, analysis and learning. It is known as “The Scientific Method”.


The Scientific Method

What is the most effective or “proper” way of doing science? Let us go step by step:

First, you need to make some observations of nature, and gather some initial data (hey, you need to start somewhere …)

Next, you must use your cranial skills to find a pattern in the data (something you’re naturally good at, in fact, as a human being). Hence, you must make a hypothesis to explain the observed data; a possible “law” that governs the pattern observed in the data. But this is not the end – your hypothesis will be subject to continuous scrutiny by more data (or by other hypotheses that are able to deduce and generalize more patterns and explain the observed data better; but let’s keep it simple for now).

You must now use your hypothesis to make predictions i.e. “extrapolate” your data (not literally mathematically, or maybe even so) according to the “law” that you have deduced, and then go back outside into the field of nature to check if your predictions agree with observations and experiments.

If they do, then well done! You now have a “theory” or “model” of reality which you can use to explain the “law” or “pattern” which you observed in the data. If, however, experiments don’t agree with your model, you’re undeniably wrong and you must correct/modify your model, or simply come up with a new one.

Note that when I use words like “data”, “patterns”, “law”, “theory”, “model”, “experiments”, “observations” etc., I am not invoking merely abstract or qualitative concepts. In order to do proper science, these have to be well defined ‘quantitative’ concepts which are written, analysed, and communicated in the language of science, which is the language of any logical reasoning viz. “Mathematics“. Don’t worry, though: for the purpose of this review, the English language will suffice …

But is this the end of the Scientific Method algorithm? Never. This rather tedious process must be repeated over and over again, presumably forever. Science, thus, is a continuous process …. Why? Because we do not yet know reality in its entirety and we have no way of knowing if, at any given time, we have enough data to encompass the entire reality.

It is like exploring a vast ocean in a small boat. You do not know exactly how vast the ocean is, but only that it is indeed quite vast, presumably much like the nature of our world’s reality. To find out how vast it is, you must keep rowing till you’ve explored exactly all of it. The only catch is that you’ll never know whether you’ve explored all of it until you actually do. Hence, in science, we need to continuously explore and observe nature more and more, to get more and more data, all the time continuously testing our theories, repeating the steps of The Scientific Method, and improving our version of reality, in the hope that when (or if?) one day we reach the end of the ocean, we’ll know that we have finally covered it all. There may not even be an end to the ocean’s vastness, in which case we would have to row forever. But as the hopefully curious creatures that we are, in the meanwhile, we must keep rowing.

If a model/theory agrees with experiments/observations for a long time, then it is deemed as a “good” or “reliable” theory/model of nature (although that doesn’t mean that we should stop doubting if it’ll ever fail one day and will have to be modified or scrapped entirely in the wake of new, conflicting observations). An excellent example of this, once again, happens to be gravity. Newton’s Theory of Gravity was considered a “good” and “reliable” theory of nature (as per everything that I have mentioned in the previous lines) until the advent of new technology and better observational capabilities allowed us to acquire new data (like the precession of planet Mercury’s orbit, for instance) which could not be explained by Newton’s theory. This meant that Newton’s theory was not an entirely correct version of the reality of how gravity works (although it is a good approximation, and works well for many scenarios). This theory was later replaced by Albert Einstein’s masterpiece: The General Theory of Relativity (GTR, for short). And for the past 100 years, it has emerged flawlessly victorious over every piece of observational data and experiment that has been thrown at it. Thus, GTR is the currently accepted “good” and “reliable” theory of gravity. As always, you can never stop doubting whether new evidence might come up, leading to the requirement of a better theory of gravity.

Science is therefore, “a self-correcting, self-improving method of investigation”.

This summarizes the work of a scientist and why science is useful – BUT, there is a subtle point that must be considered … It is the devil that hides in the details:


The Nature of Reality

Since I have brought it up previously, I shall once again use Einstein’s General Theory of Relativity (GTR) to illustrate this subtle and easily overlooked topic. GTR is a model which describes how gravity works. In a scandalously short review, you can say that GTR declares the three dimensions of space and one dimension of time to exist as a single, four-dimensional “fabric” spanning the entire universe called ‘spacetime’. Any object with ‘mass’ distorts this “fabric” of spacetime, much like how a ball placed on a stretched rubber sheet curves or distorts it. As the physicist John Wheeler famously summarized it: “Spacetime tells matter how to move; matter tells spacetime how to curve”. In short, this ‘curving’ of spacetime is experienced as gravity.

But I ask you to wonder – although this model perfectly agrees with all observational data, does it actually present the true mechanism of gravity? A famous analogy of the nature of reality goes as follows:

To us, reality is like a clock. You’re allowed to observe the ticking of its hands as much as you like, and come up with a theory of what its internal mechanism might be, which enables it to tick the way that it does …. But, you’re not allowed to open the clock, to gaze inside and to check if your model’s version of reality was indeed the true reality. For all you know, both your version of reality and the true version of reality may be capable of producing the same results that you have observed. Indeed, this means that one’s version or “model” of reality is relative, and it is merely a tool that allows you to explain the events around you, and there’s always the chance that a “higher” version of reality exists, of which your version is merely a subset. And you may never know if you have indeed reached the “highest” version of reality, or if there even is a “highest level”. There might even be an infinite number of “higher levels” of reality. But in order to maintain good mental health, we assume that there is indeed an end which culminates in an “ultimate reality” and that we’re getting there, slowly but surely … And when we do reach it, we will know that it is the final truth.

This discussion doesn’t leave us with any useful information about the true nature of reality, but hey, at least, we have some constraints deduced from logical reasoning (yeah science!).

In conclusion, science doesn’t merely run on logic and clockwork, but is also permeated by the human emotions of imagination, curiosity, hope and determination. Science isn’t simply a professional subject (as opposed to ‘commerce’ or ‘arts’), which absent-minded people with high intellect and no social life tend to pursue. The nature of science is the nature of nature’s study; it is the search for the ultimate reality, and it is also the nature of the human condition.

Suvrat Rao

Winter Project

I did a winter internship in December 2016 at the Indian Institute of Astrophysics, Bangalore under Prof. Dipankar Banerjee, who specializes in Solar Physics. The topic I worked on was most interesting — “Estimating the arrival times of Coronal Mass Ejections (CMEs) using a Drag-Based Model (DBM)”. This was a wonderful experience, and I gained a great deal of knowledge, not limited to just Solar Physics.

A Coronal Mass Ejection (CME) is a large eruption of plasma and magnetic field from the Sun. It can contain a mass larger than 10^13 kg, achieve a speed of several thousand kilometers per second and may span several tens of degrees of heliographic latitude and/or longitude. CMEs often (but not always) accompany Solar Flares, which are high-energy, broad-spectrum bursts of electromagnetic radiation from the Sun. The frequency of CMEs depends on the Solar Cycle, with occurrences of a couple per day during the solar maxima, and only one per couple of days during the solar minima. CMEs may erupt from any region of the corona but are more often associated with lower latitude regions, particularly near solar minimum. Only a small percentage of CMEs are directed toward the Earth; these are called Halo-CMEs or Partial Halo-CMEs, due to their halo-like appearance around the Sun as seen from instruments on Earth. CMEs can travel large distances (covering the entire Heliospheric region). Far away from the Sun, CMEs are conventionally called ICMEs (Inter-planetary CMEs).

The estimation of the arrival times of CMEs is an important issue, as Earth-bound CMEs, i.e. Halo-CMEs, have a direct, measurable impact on human activities. Since CMEs are composed of plasma (high-energy charged particles and magnetic fields), when they reach the Earth they can cause geomagnetic storms in the Earth’s magnetosphere and inject charged particles that interact with the Earth’s atmosphere. CMEs are also often associated with Solar Flares, which consist of high-energy radiation (X-rays etc.). Hence, apart from producing beautiful Aurorae near the poles, Halo-CMEs can also have many negative impacts on human activities, such as:

  1. Interference with telecommunication through phone lines and satellites.
  2. Increased radiation exposure for high-altitude and/or high-latitude aircraft passengers, crew, and astronauts.
  3. Increased atmospheric drag on orbiting spacecraft, causing orbital decay (and potentially an uncontrolled re-entry).
  4. Interference with spacecraft circuitry.
  5. Damage to spacecraft hardware (e.g. solar cells).
  6. Interference with, or damage to, ground-based micro- and nano-circuitry.
  7. Unexpected currents induced in power lines, resulting in power station damage.

It is therefore essential to be able to predict the arrival of Halo-CMEs so that appropriate precautions can be taken to deal with the above possibilities.
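To give a flavour of how a drag-based model works: the commonly used DBM form treats the CME as a body decelerated (or accelerated) by aerodynamic-like drag against the ambient solar wind, dv/dt = −γ(v − w)|v − w|, where w is the solar-wind speed and γ the drag parameter. The following is a minimal numerical sketch of that idea; the function name and all parameter values are illustrative assumptions of mine, not taken from the report.

```python
# Minimal sketch of a drag-based model (DBM) for CME propagation.
# Assumes the commonly used form  dv/dt = -gamma * (v - w) * |v - w|,
# where w is the ambient solar-wind speed and gamma the drag parameter.
# All names and parameter values here are illustrative, not from the report.

def cme_travel_time(v0_kms, w_kms=400.0, gamma_per_km=0.2e-7,
                    r0_km=20 * 695_700.0, r_target_km=1.496e8,
                    dt_s=60.0):
    """Integrate the DBM from r0 (here ~20 solar radii) out to 1 AU.

    v0_kms       : initial CME speed (km/s)
    w_kms        : ambient solar-wind speed (km/s)
    gamma_per_km : drag parameter (1/km)
    Returns (travel_time_hours, arrival_speed_kms).
    """
    r, v, t = r0_km, v0_kms, 0.0
    while r < r_target_km:
        dv = v - w_kms
        a = -gamma_per_km * dv * abs(dv)   # drag acceleration (km/s^2)
        v += a * dt_s                      # simple Euler step
        r += v * dt_s
        t += dt_s
    return t / 3600.0, v

t_hours, v_arr = cme_travel_time(v0_kms=1000.0)
print(f"Travel time ~ {t_hours:.1f} h, arrival speed ~ {v_arr:.0f} km/s")
```

With these illustrative numbers, a 1000 km/s CME decelerates toward the solar-wind speed and arrives at 1 AU after roughly two days; faster CMEs are dragged down harder, slower ones are dragged up, which is the characteristic behaviour of the DBM.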

The details of my work can be found in this draft report which I am attaching here:

IIAP Winter Project Report