Tuesday, December 29, 2009

Albert Einstein


b. 14 March 1879
d. 18 April 1955

I will not attempt to explain the contributions that Einstein made to physics in this little biographical sketch. For one thing, I don't understand the majority of what he wrote. For another, I would like to focus instead on his character and life, which offer an interesting and important glimpse into his mind.

Though born in Germany, Einstein did not die a citizen thereof. In fact, he renounced his citizenship for the small price of three German marks and became a naturalized citizen of Switzerland five years later, paying twenty francs. The price of his citizenship notwithstanding, national affiliation was extremely important to him. He both detested and rejected the nationalistic and militaristic government of Germany and embraced the peaceful attitude of Switzerland. To the end of his life, Einstein remained a pacifist, conceding only that military force should be used to combat institutions which "pursue the destruction of life as an end in itself."

He further hated the German educational system, which consisted of rote study and a particular deference to authority. Though it is often stated that Einstein was a poor student, it is more accurate to say that he did not thrive in a stifling classroom, forced into a discipline in which he was not naturally engaged.

In fact, after being rejected by the Zürich Polytechnic Institute, Einstein spent a year in Aarau, Switzerland, where he flourished in a flexible school under a relaxed teacher who allowed him the liberty of thought that his future discoveries required. About the necessity of such liberty he wrote that "it is, in fact, nothing short of a miracle that the modern methods of instruction have not yet entirely strangled the holy curiosity of inquiry; for this delicate little plant, aside from stimulation, stands mainly in need of freedom."

His ability to thrive on a self-determined schedule accounts for his eventual success at the Zürich Poly (he was accepted on his second application). He largely ignored his classes, showing up only for the exams, which he passed thanks to the copious notes of his studious friend Marcel Grossmann. After graduation, Einstein found his desired freedom in an unlikely place. Refused a position as an assistant at the Zürich Poly (perhaps due to some underhanded manipulation by a professor who disliked him), he accepted a job at the Bern Patent Office in 1902, where he was assigned to read and approve patent applications. The job proved useful, however, in that it was not difficult. He spent his spare time theorizing and conducting Gedankenexperimente (literally: thought experiments), which are designed to test a principle without actually having to conduct the experiment physically.

Einstein was so successful in this free environment that in 1905 he published four papers in Annalen der Physik, each on a different subject. The first, for which he eventually won a Nobel Prize, explained the photoelectric effect in terms of quanta of light. The second treated molecular behavior (what we now call Brownian motion). The third was his inspired explanation of special relativity, which gave rise to the concept of spacetime, and the fourth, a short follow-up, established the equivalence of mass and energy. To restate for emphasis: during seven years working as a patent clerk, Einstein published—among others—a Nobel Prize-winning paper and the foundational papers of the most (culturally) famous development in science.

Inevitably, Einstein was noticed by the scientific community. He subsequently worked in Berlin at the Prussian Academy of Sciences alongside Max Planck (a great scientist in his own right, who wrote of him, "All in all, one can say that among the great problems, so abundant in modern physics, there is hardly one to which Einstein has not brought some outstanding contributions.") and later at Princeton. Berlin was eventually overrun by the Nazis; Princeton was boring, yet peaceful enough for him to, as he wrote to the Queen of Belgium (with whom he apparently corresponded frequently), "create for [himself] an atmosphere conducive to study . . . free from distraction."

He was married twice. Though the first marriage failed, probably due to a lack of attention to his family in favor of scientific pursuits, he remained supportive of his first wife and children, sending them his prize money after receiving the Nobel Prize in 1921 (two years after his marriage to his second wife, Elsa). Elsa was described as "gentle, warm, motherly, and prototypically bourgeois." She enjoyed the fame of her husband's publications and tolerated his absences and distractions.

Notably, Einstein's genius was not stumbled upon, nor did it come easily. Though none can deny his natural ability in theoretical physics, the secret to his success was work. Though some concepts eventually unfolded before him, others, such as his unified field theory, never came to fruition. Yet he never ceased his work nor became discouraged. "After all," he wrote, "to despair makes even less sense than to strive for an unattainable goal." Three months before Einstein died, Abraham Pais, one of his biographers, visited him at home and spoke with him for half an hour. Einstein had been at his desk working when Pais entered, and before Pais was able to leave (a journey of approximately five steps), Einstein was hunched over his desk, "oblivious to his surroundings" yet again.

Now, more than fifty years after his death, Einstein remains one of the best-known names in science and even in popular culture. His developments in theoretical physics, along with those of Planck, de Broglie, Schrödinger, and others, laid the groundwork for most if not all of the scientific developments that came thereafter.

Thursday, October 29, 2009

Quantum Tunneling

Classical Mechanics


Quantum tunneling offers us a fascinating glimpse into the workings of the subatomic world. We have learned in the last hundred years or so that atoms aren't really the miniature solar systems we all imagine them to be: a relatively huge nucleus with tiny electrons swirling around it like planets. In fact, we can't really say much of anything about where electrons are or where they're going. It turns out that all we can really say is where they might be at a given time. Let's back up a little and try to illustrate this principle with an example.


Imagine a roller coaster on a track shaped like the black line pictured to the right. It's pretty easy to imagine that if the cars started on the left side, where the red line intersects the slope, they would drop, roll easily over the middle hump, and climb up the other side to the red-black intersection point on the right (in fact, neglecting air resistance and friction, they would reach that point exactly). Even though the track dips pretty deeply just to the left of the center hump, it's not difficult to imagine the cars going fast enough to clear the hump with speed to spare.


Now imagine that we start on the left side again, but this time at the point where the blue line intersects the track. Dropping from this height, we can see that the cars don't have enough energy to get up over the hump. Starting on the left side means never getting to the right side, and vice versa. We'd need some kind of chain (like the ones roller coasters use to pull you up the first hill) to do enough work on the cars to get them over the hill. In other words, unless the cars get energy from somewhere else (like being pushed at the beginning or dragged up the slope with a chain), they will never see the right side of the track.
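
For the numerically inclined, the whole classical argument fits in a few lines of Python. This is a minimal sketch assuming a frictionless track; the heights are made-up numbers for illustration, not values read off the figure:

```python
# Energy conservation on a frictionless track: starting from rest at height
# h_start, a car has energy m*g*h_start, so it can pass a hump of height
# h_hump only if h_start >= h_hump. Heights below are illustrative.

def can_clear_hump(h_start_m, h_hump_m):
    """Classically, the car clears the hump only if it starts at least as high."""
    return h_start_m >= h_hump_m

print(can_clear_hump(30.0, 20.0))  # the "red line" case: True, it gets over
print(can_clear_hump(15.0, 20.0))  # the "blue line" case: False, every time
```

Classically, that `False` is absolute: no matter how many times we release the cars, they never end up on the right side.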

Everything we just discussed comes from "classical" mechanics, which assumes that everything is solid and exists where it's supposed to exist, behaving the way it's supposed to behave. Unfortunately, classical mechanics breaks down when the system is really small (such as in an atom). We'll talk about one way it stops working after a little lesson in terminology.

Terminology


The term "potential well" refers—in effect—to the dips in the track. In the case of a roller coaster, the "well" is formed by gravity in that the deeper you go into it, the more energy you need to get out. We can see a more solid example of this in the system of the earth. Imagine that our planet is resting on a large rubber sheet, causing a depression. To get into outer space, we must climb out of the hole first which means that we need to have enough energy to get out without falling back in. Once out, we are free to shut off the engines and simply coast (which is how we got to the moon) because there is no danger of falling back into the earth's potential well.

Potential wells can be made from all sorts of things. For instance, a large chunk of positive charge (such as an atomic nucleus) creates a potential well into which negative charges, like electrons, fall (the reason they don't ever fall into the absolute center is a complex question that we won't get into here). In the same way as with gravity, for an electron to escape a nucleus' potential well, it must have at least enough energy to climb out of the hole.

The other term we need to understand is the "wave function." We speak of the wave functions of tiny particles, rather than of their exact positions and speeds, because doing so better reflects what we can actually measure. In effect, we say that particles have a tendency to behave like waves under the right conditions. For example, electrons (which have definite mass) have a wavelength and a frequency and can even diffract (bend around obstacles) like water waves do. As a result, when an electron is in a potential well, instead of thinking of it as a ball rolling back and forth between two peaks of maximum energy (like a roller coaster), we think of it as a wave bouncing back and forth between two walls (think of dropping a pebble in a bowl of water and watching the waves bounce around). The size of the wave function at a certain point (squared, technically) represents the probability of finding the electron at that spot. That is, the bigger the wave function, the more likely we are to find the electron there if we choose to measure it.

Quantum Tunneling


Remembering the blue track example, imagine an atomic potential well of that shape. Perhaps there are two nuclei of differing charge (so that one well is deeper, i.e. one nucleus has a greater propensity to pull in an electron than the other) close enough together to affect a single electron. We can imagine that the larger of the two nuclei could, under the right circumstances, trap the electron in the deeper well, making it impossible for the electron to overcome the hump and orbit the other, smaller nucleus. Classically, this is exactly how we'd explain it.

However, we have observed that an electron trapped in a potential well, without enough energy to get out of it, sometimes escapes anyway. This is evidence that the electron is behaving like a wave. Imagine clapping (thus making a sound wave) in a room with closed doors. Can a person outside the room hear you? If you clap loudly enough, the sound wave will hit a wall and make it vibrate a little, causing the air pressure on the other side of the wall to vary a little. That pressure variation becomes a pressure (sound) wave that can travel to another person's ear. It's not as loud, but it's still the same sound.

So also with an electron: when its wave function hits a wall, it penetrates partway into it. If there is another potential well on the other side, close enough and deep enough, the electron will be transmitted to the other side (though with a much weaker wave function). Unlike with sound, the smaller amplitude of the wave function does not mean the electron is any less of an electron. In fact, it is the same electron that was on the other side of the well. The smaller wave function only means that the electron is less likely to be found there if we choose to measure it. And that probability only applies at the exact moment of measurement. At all other times, the two nuclei behave as if they shared the electron (even though measurements may seem to indicate that it is with one particular nucleus 80% of the time).
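
To get a feel for how rare (but not impossible) tunneling is, here's a rough sketch in Python. It uses the standard rectangular-barrier approximation, where transmission falls off like exp(-2*kappa*L); the barrier height and width are numbers I picked for illustration, not values from any particular atom:

```python
import math

# Rough rectangular-barrier estimate of tunneling: the chance of getting
# through falls off like T ~ exp(-2 * kappa * L), where
# kappa = sqrt(2 * m * (V - E)) / hbar.

hbar = 1.055e-34   # reduced Planck constant, J*s
m_e = 9.109e-31    # electron mass, kg
eV = 1.602e-19     # one electron-volt in joules

barrier = 1.0 * eV  # how far the electron's energy sits below the hump
width = 0.5e-9      # barrier width: half a nanometer

kappa = math.sqrt(2 * m_e * barrier) / hbar
T = math.exp(-2 * kappa * width)
print(f"transmission probability ~ {T:.4f}")  # ~0.006: rare, but not zero
```

Notice how punishing the exponential is: doubling the width of the hump squares that already-small probability, which is part of why we never catch large objects tunneling.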

Maybe this all sounds obscure, as if it couldn't possibly matter to a normal person, but what I've just described is a covalent bond, the kind of bond that keeps two hydrogen atoms attached to an oxygen atom in a water molecule (and, consequently, the kind that holds together most or all of the molecules in the air).

Conclusion


The take-home message of quantum tunneling is that small particles act very differently from large ones. Technically, I suppose, grapefruits have wavelengths too, but for reasons associated with the Uncertainty Principle, the effect that quantum mechanics has on large objects is negligible. On very small scales, though, classical mechanics breaks down. Objects that we imagine to be solid become waves, Newtonian mechanics stops applying, and nothing seems to work the way we expect it to. Knowing exactly how things act on that scale is important, though, as we can see by examining the effects. Knowledge of quantum tunneling leads us to innovations such as flash memory (which the thumb drive in your pocket uses), semiconductors (a fundamental component of computers), and advances in chemistry.

Wednesday, October 21, 2009

Universe Synthesis: Part III

In the last two installments of the Universe Synthesis series, I explained in relatively simple terms what happened during the first 380,000 years of the life of the universe. In the first trillionth of a second or so, the four major forces that we know today split from the single force that they started out as, and then the first inklings of matter formed. We left off with the creation of hydrogen and helium.

The problem with an expanding universe is that as it expands, it loses kinetic energy. Imagine two pots of boiling water (each with an equal amount of water in them). If you set one on the stove and dump the other one on the floor, which will cool down faster? Clearly the spilled water: as it spreads out, it radiates (and conducts) its heat away at a faster rate. In the same sort of way, as the universe expands, it cools down. Unfortunately, it takes energy to force atoms together to make heavier atoms. At 380,000 years after the Big Bang, the universe is overwhelmingly (if not completely) devoid of any element heavier than helium. With the matter in the universe cooling and spreading out at an alarming rate, how did the rest of the periodic table form? Where do we get carbon, oxygen, nitrogen, iron, and gold?

The answer is—in the simplest form—gravity. The fast-expanding space is filled, intermittently, with enormous clouds of hydrogen (mixed with a very little helium). But the clouds aren't homogeneous; they're clumpy. In some places there are a few more atoms per cubic centimeter, making the region very slightly more massive than the areas around it. Believe it or not, this is the beginning of a star.

The slightly-more-massive clump has just a little stronger gravity than the surrounding, slightly less dense regions. As a result, other hydrogen atoms are statistically more likely to fall into the clump and join it. Over a very long time, the clump gets larger and larger, becoming denser and more compact. As it gains matter (again, only hydrogen), the matter tends to fall toward the center of gravity, causing a particularly large mass of hydrogen gas to start forming there. The gas pushes on itself, or rather, it pulls itself together by its own gravity, until the pressure is so great that it ignites.

Ignition, here, does not have the same meaning as it does on earth. The hydrogen is not burning, per se; it's fusing. The pressure is so great that the atoms are forced together. Two protons (a proton being just a hydrogen atom without its electron) fuse into deuterium (still hydrogen, but with an extra neutron); deuterium and another proton make helium-3, and two helium-3 nuclei combine into ordinary helium-4. This is called the proton-proton chain. In heavier stars, there's enough thermal energy to initiate the CNO cycle, in which carbon, nitrogen, and oxygen act as catalysts for the same net reaction. With each fusion reaction, a little bit of energy is released as light. The light you see when you look at the sun (note: don't look at the sun) is a byproduct of the proton-proton chain.
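
To get a feel for the energy involved, here's a quick mass-bookkeeping sketch in Python. The constants are standard, but the accounting ignores the positrons and neutrinos involved, so treat the result as rough:

```python
# Mass bookkeeping for the proton-proton chain: four protons weigh more
# than the helium-4 nucleus they become, and the difference leaves as
# energy (E = m*c^2).

c = 2.998e8             # speed of light, m/s
m_proton = 1.6726e-27   # kg
m_helium4 = 6.6447e-27  # kg

mass_defect = 4 * m_proton - m_helium4
energy_J = mass_defect * c**2
energy_MeV = energy_J / 1.602e-13  # convert joules to MeV

print(f"energy released per helium nucleus ~ {energy_MeV:.1f} MeV")  # ~26 MeV
```

Only about 0.7% of the hydrogen's mass comes out as light, but multiplied over a star's bulk, that's enough to shine for billions of years.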

Atoms continue to fall toward the center of gravity. As they do, and because they do not fall uniformly in every direction, the whole mass begins to spin. As it does, a disc that is perpendicular to the axis of rotation begins to form around the newly formed star. Matter begins to collect into the disc. Soon a star is happily burning. Around it, other pockets of dense hydrogen have started to burn. The whole collection of them is now a galaxy.

In the course of time, the star runs out of material to burn. Heavy stars can get big and hot enough to force helium, carbon, and other heavier elements to burn, but eventually the matter in the star ceases to fuse. Either gently, bit by bit, or in a violent explosion, the layers of new elements are ejected into the interstellar medium (the left-over hydrogen), enriching the surrounding area with new elements. In particularly large explosions, the atoms gain enough energy to fuse into extremely heavy elements such as gold, copper, tungsten, or mercury. Since the interstellar medium is still, even after all that, predominantly hydrogen, the whole process starts again. Only this time, the conglomerating gas is enriched.

When the disc forms around this new star, the heavier elements stay behind as the hydrogen and helium fall toward the center. Close to the star, almost all of the hydrogen falls into the giant fusion reactor, leaving behind rocky clumps of heavier elements like carbon, silicon, and iron. These clumps eventually collide and conglomerate, forming huge spinning rocks that eventually become terrestrial planets. Further out, plenty of hydrogen and helium remains to collect into large, dense clouds not big enough to become stars; they become gas giants. The further a gas giant is from the star, the more molecules are able to form in its atmosphere (it being cool enough to form them without immediately breaking them apart again), such as methane (which is what gives Neptune and Uranus that nice blue color).

Lest you think that we'll one day run out of building materials, consider that after 14 billion years of element synthesis, the detectable matter in the universe is still about 75% hydrogen and a little less than 25% helium. That is, everything that is not one of those two elements comprises much less than 1% of the total matter in the universe. Even then, consider your car, your kitchen appliances, a gold deposit in a mountain, or the circuitry in your computer. A long time ago, in a galaxy far away (I couldn't resist, but seriously...), every single one of those atoms was being shoved together in the first few seconds of a violent supernova explosion. And every single breath of air you take is filled with atoms that were fused inside a star millions of years ago. We live and breathe stardust.
__________

So, that's how it happened. Or, at least, that's the best we can do at explaining it right now. This model is constantly being reformed and reworked, and new processes are constantly being discovered. One of the most amazing things to me is that almost all of this information was deduced by astronomers looking at the sun and other stars (note: do not look at the sun unless you are a trained professional). The only information we can get from those sources is the light they give off. In other words, astronomers found a way to deduce all of this just by looking at patterns of light given off by stars and combining them with what we already know about physics on earth. That, to me, is an amazing accomplishment.

Thursday, October 1, 2009

Heisenberg Uncertainty Principle

Imagine that you take a picture of a moving car. Depending on the kind of camera you have, your picture will develop in one of two ways. First, it is possible that your camera has a really slow shutter, in which case the car will come out blurry. If you knew the shutter speed of your camera, you could make a pretty good guess at how fast the car was going by studying the size of the blur. Even then, however, you wouldn't be exact. The problem would be describing exactly where the car was at the moment you took the shot. In fact, you couldn't say that it was anywhere precisely at the time you took the picture. All you could say with any degree of certainty is that the car was between two definite points (the beginning and end of the blur) during the time the shutter was open.

The other kind of shot would have been taken with a camera that had a very, very fast shutter speed. The picture would turn out crisp, with almost no blurs at all. Finally, we know exactly where the car was at the instant the picture was taken. Unfortunately, by gaining this information, we've lost information that we could have known. Now, looking at the picture, we have no way of saying how fast the car was moving. For all we know, it could be standing still.

In either case, the picture cannot ever tell us everything we want to know about the car. We get one side or the other. And it has nothing to do at all with the quality of the camera. Even if we were using the best camera in the world, a slow shutter would tell us lots about velocity and a fast shutter would give us a good idea of position. This conundrum is the basic idea of the Heisenberg Uncertainty Principle.

Before the advent of quantum physics, it was believed that if we knew the exact position and velocity of a particle then we could determine exactly where it would be at any point in the future. I suppose we could still think of that as being true. The problem is trying to measure both of those quantities simultaneously. We encounter the same problems as we did with the camera. We can only simultaneously determine momentum and position to a certain degree of accuracy.

It's important to realize that the illustration I gave above with the car is only a metaphor to help us describe the real Uncertainty Principle. In actuality, that "certain degree of accuracy" is an extremely small number (≈10⁻³⁴ in SI units, on the order of Planck's constant) and is thus only really an issue when we are talking about very small things like electrons.
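
To put numbers on that, here's a tiny Python sketch of the standard bound Δx ≥ ħ/(2mΔv). The masses and velocity uncertainties are made-up illustrations:

```python
hbar = 1.055e-34  # reduced Planck constant, J*s

def min_position_uncertainty(mass_kg, dv_ms):
    """Heisenberg bound: dx >= hbar / (2 * m * dv)."""
    return hbar / (2 * mass_kg * dv_ms)

# an electron whose speed is pinned down to ~1e6 m/s: dx is about the size
# of an atom, so the fuzziness completely dominates its world
print(min_position_uncertainty(9.11e-31, 1e6))  # ~5.8e-11 m

# a 0.5 kg grapefruit whose speed is known to a micron per second: the
# required fuzziness is ~1e-28 m, far too small to ever notice
print(min_position_uncertainty(0.5, 1e-6))
```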

The issue is not, however, as trivial as the particles are small. What are the implications of the Uncertainty Principle? First, we learn that it is impossible, regardless of the quality of the instrument, to learn everything about everything. The information available to us on the sub-atomic scale is fundamentally limited. But that's not necessarily a bad thing. Sometimes it is useful to know, in precise terms, what we cannot know. Such limits imposed on us by the universe have helped us to understand the shape, size, and configuration of the atom, and thus to describe atomic interactions more completely.

Further, the very idea that our measurements cannot be perfectly precise caused a paradigm shift that defines the way we think about science today. Before this principle (and others, such as de Broglie's wave mechanics and wave-particle duality), people were generally under the impression that the universe was deterministic: that every future event could theoretically be predicted. Now we view the universe as probabilistic instead: we can only know the probability that a future event will happen. The probabilistic view, though it seems less "correct," was a step away from perfect but ultimately wrong answers, and a step toward the best understanding we can actually attain.

Sunday, August 23, 2009

Lasers

Introduction


The first laser was built by Theodore Maiman and is recorded as having been first demonstrated on 16 May 1960. This invention is singular, in my opinion, because it is not a naturally occurring phenomenon in the visible spectrum. Unlike many other inventions, which come simply from harnessing phenomena we have discovered, lasing is a step ahead of what nature gives us: a complex application of several principles together to create something new.

Explanation


Laser is really an acronym: Light Amplification by Stimulated Emission of Radiation. The underlying process, stimulated emission, was first postulated by Einstein in 1917. As the name suggests, a laser is really the combination of two separate optical phenomena, stimulated emission and light amplification, which we will explain here.

Stimulated Emission


Emission, as its name connotes, is the term we use for the creation of a photon by an atom. To understand this phenomenon, we need to understand atoms a little bit better.

When you picture an atom in your head, you probably imagine a small solar-system sort of design, with a nucleus of protons and neutrons in the middle and little electrons spinning around it in circles. Sadly, this is not the case, but the model serves well to illustrate emission; so we'll use it with the understanding that it is really not particularly accurate. Under normal conditions, the electrons in an atom occupy the lowest available orbits (which, for quantum mechanical reasons, do not physically touch the nucleus). Atoms and molecules (H₂ gas, for example) undergo excitation when they are hit by photons of sufficient energy, which means that an electron is temporarily pushed to an orbit further from the center. However, as things in physics tend toward the lowest and most stable energy state, the electron jumps back down to the ground state. The effect can be imagined as being like marbles in a funnel. The faster you push the marbles, the higher they rise in the funnel as they spin around. But over time, no matter how hard you first pushed them (assuming that they can't leave the funnel), gravity will pull them back to the lowest available spot. And since energy can't just disappear, the energy that the electron lost by jumping back down to a lower orbital is emitted as a photon of light of that exact amount of energy (we'll call this precise value ΔE). This is emission.
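
The connection between ΔE and the color of the emitted photon is the standard relation λ = hc/ΔE. A quick sketch in Python, using hydrogen's red line as the example:

```python
h = 6.626e-34   # Planck constant, J*s
c = 2.998e8     # speed of light, m/s
eV = 1.602e-19  # one electron-volt in joules

def photon_wavelength_nm(delta_E_eV):
    """Wavelength of the photon carrying away a jump of energy delta_E."""
    return h * c / (delta_E_eV * eV) * 1e9

# hydrogen's n=3 -> n=2 jump releases about 1.89 eV: the familiar red line
print(f"{photon_wavelength_nm(1.89):.0f} nm")  # ~656 nm
```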

Stimulated emission is somewhat more complicated. An excited electron in a higher orbital will, obviously, spend some amount of time (a really short one) in the excited state before jumping back to the ground state. But if a photon whose energy is exactly ΔE passes very, very close to the excited electron, the electron will jump sooner than it normally would. The emission has been artificially stimulated.

Light Amplification


Light amplification is a direct result of stimulated emission under the right circumstances. If there is an excited medium (say, an energetic cloud of H₂ gas), we can imagine that eventually one of the excited atoms will revert to the ground state and emit a photon with energy ΔE. That photon will almost certainly pass near enough to another excited atom (if the cloud is big and dense enough) to stimulate the emission of another photon. Luckily for us, when a photon is emitted by stimulation, it is released in phase with, and in the same direction as, the incident photon. In other words, where there was one photon, now there are two traveling in exactly the same direction at the same time and in essentially the same space. The light is now twice as bright. These two photons will in turn pass by other excited atoms and stimulate more emission in the same direction. A chain reaction causes a short, bright burst of energy as all of the excited electrons in the direction of stimulation are forced to revert to the ground state.

Lasing


The problem with the situation described above is that the cloud of gas runs out of excited electrons extremely quickly. To produce a laser, we need a continuous stream of stimulated photons. To produce this effect, we continually excite the gain medium with a very energetic source of light (a flash lamp or another laser) so that every time an electron jumps to the ground state, it is quickly re-excited. Then we put the gain medium between two mirrors that face each other. Eventually, stimulated emission happens in the direction of the mirrors, and the amplified light bounces back and forth through the gain medium, becoming more amplified with each pass. If the optical pump is strong enough, the cloud will never run out of electrons to stimulate. The amplification cycle is self-sustaining (not that the light increases in brightness forever, only that the device will continuously produce a beam of a certain brightness that is unidirectional and in phase). To release the beam from the mirrors, we make part of one mirror semi-transparent so that some of the photons escape each time the beam hits it. The escaping photons come out in a beam, which we call a laser.
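
Here's a toy model of the mirror arrangement in Python. To be clear, this is my own illustration with made-up gain and reflectivity numbers, not a real laser design, and it leaves out the gain saturation that levels things off in a real device:

```python
# Each pass through the pumped medium multiplies the light by a gain G;
# at the semi-transparent mirror, a fraction (1 - R) leaks out as the
# beam and the rest reflects back for another pass. Lasing sustains
# itself as long as G * R >= 1.

G = 1.10  # 10% gain per pass (illustrative)
R = 0.95  # output mirror reflects 95%, leaks 5%

intensity = 1.0
for trip in range(1, 6):
    intensity *= G               # amplification by the excited medium
    beam = intensity * (1 - R)   # light escaping through the mirror
    intensity *= R               # the remainder stays in the cavity
    print(f"pass {trip}: circulating = {intensity:.3f}, beam out = {beam:.4f}")

# G * R = 1.045 > 1, so the circulating light grows every pass; in a real
# laser, gain saturation eventually pins it at a steady brightness.
```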


Applications


We use lasers more than you might think. The ubiquitous laser pointer is, of course, one use. However, lasers now assist in medical surgeries, read CDs, cut and weld metals, and run printers (you know, laser printers), among many other things. They have become widely used and are at the forefront of our active scientific pursuits today.

Wednesday, August 5, 2009

Photoelectric Effect

Einstein won the Nobel Prize in Physics in 1921. Lots of people assume that he won it either for his work in relativity or for the immensely influential equation E=mc². However, it was his groundbreaking work on a physical phenomenon known as the photoelectric effect that won him the Prize. Herein we will discuss the phenomenon and its subsequent applications and implications.

Explanation:

Simply put, the photoelectric effect is the emission of electrons from a metal as a result of incident light. In other words, sometimes, when you shine light on a piece of metal, some of the electrons in the metal come unbound and fly freely through space. I guess we need to back up and talk a little about metals.

One of the characteristic properties of metals is the configuration of their electrons. When a whole lot of iron atoms (to use one of many metals in the periodic table) get together, they start to share their electrons. However, unlike other solids, metals share their electrons across the entire solid. The outermost electrons are free to flow anywhere about the surface of the metal, bound to no specific atom. Incidentally, this property is what makes metals such good conductors of electricity; the "fluid" electrons on the surface carry and transport charge very efficiently, in much the same way as it is easier to slide over a wet surface than a dry one.

This property of metals is what makes the photoelectric effect possible. When you shine light on a metal, the sea of electrons (as it is often called) receives lots of energy, causing some of the electrons to shoot off. However, not just any kind of light can make this happen. Imagine a swimming pool that is only filled halfway with water. If you were to throw a rock into the pool, you could make some of the water splash out, but only if the rock were traveling fast enough. Even a whole bunch of rocks traveling too slowly would only make lots of splashes that didn't remove any of the water. In the same way, light needs to be energetic enough to cause the electrons to escape from the sea. The minimum energy required for a photon to remove an electron from a metal is called the work function (symbolized by the Greek letter φ).
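
Einstein's whole discovery compresses into one formula: the escaping electron's maximum kinetic energy is the photon's energy minus φ. Here's a sketch in Python; the work function is a typical textbook value for cesium, used purely for illustration:

```python
# Einstein's photoelectric relation: K_max = (photon energy) - phi.
# Handy shortcut: a photon's energy in eV is about 1240 divided by its
# wavelength in nm.

def max_kinetic_energy_eV(wavelength_nm, work_function_eV):
    """Negative result means the light can't eject electrons at all."""
    return 1240.0 / wavelength_nm - work_function_eV

phi = 2.1  # eV, roughly cesium's work function (illustrative)

print(max_kinetic_energy_eV(700, phi))  # bright red light: negative, nothing
print(max_kinetic_energy_eV(400, phi))  # dim violet light: ~1 eV electrons
```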

Implications:

The implications of this discovery were shattering to the world of physics. There was a huge debate at the time concerning the nature of light--whether it was a particle or a wave. Einstein's discovery helped us understand the truth. As I mentioned before, only light with a certain minimum energy (equal to φ) can make electrons leave the metal. Einstein discovered that this minimum energy could only be achieved by changing the color of the light, not its intensity. That means that red light, no matter how bright, will never induce the photoelectric effect in a typical metal, whereas even very weak ultraviolet light will always do so. We learned some great truths through this. First, the frequency of light (its color) is directly related to its energy. In fact, frequency is the only factor that determines photon energy. Intensity (brightness) corresponds not to energy, but to the number of photons hitting the area per unit time. In other words, shining really bright red light on the metal was like throwing lots and lots of rocks really slowly into the pool. But shining really weak UV light was like throwing just a few rocks really, really fast, causing a large splash (but only a few times). To induce a large photoelectric current, one needs only to produce an intense UV source.

Applications:

The applications of the photoelectric effect are many and influential. This is the basic idea that makes solar energy possible (taking light and making electrical energy out of it). Also, from this idea came photomultipliers (which created such devices as night-vision goggles) and CCDs (which are the imaging devices in digital cameras and telescopes), to name just a few.

Wednesday, July 8, 2009

Isaac Newton


b. 25 December 1642
d. 20 March 1727

Noteworthies:
Everyone has heard of Isaac Newton, and for good reason. He's very much the father of mechanics as well as the reason that we are able to calculate everything the way that we do. I'll get to that a bit later, but let's first talk about his character.

Newton was supremely inquisitive. Even as a child he was extremely curious about his surroundings. He drew pictures, invented tools and appliances, experimented, and was eternally posing questions to himself. Interestingly, he was totally apathetic to school and performed terribly. But either the prospect of having to manage the family estate or an alleged attack from an elementary school bully changed his mind and he eventually got accepted to Cambridge. There, he worked his way through school waiting tables and doing janitorial work until he was accepted on scholarship.

In 1665, the campus was closed for 18 months due to an outbreak of the bubonic plague. Oddly, the year and a half he spent at home was Newton's self-termed annus mirabilis (miraculous year), during which his scientific career exploded. While at home he invented calculus and began solving previously unsolvable problems with apparent ease.

Calculus can be termed the study of infinitesimal progression. Take a falling ball, for example. Imagine that you took a picture of it once every second until it hit the ground. Developing the pictures, you could analyze how far the ball fell each second and would probably be able to determine that the ball fell further near the end of its flight than at the beginning. Now imagine that you took a picture of the ball every tenth of a second. Suddenly, you can calculate much more precisely how much further the ball has fallen in each successive shot (think of making a flip book out of each set of pictures: the one-second pictures would depict a ball with very choppy movement, whereas the other set would show a much smoother trajectory). As the time between successive pictures decreases, the accuracy of our measurement for any given period of time increases. If it were possible to have an infinitely small separation between one instant and the next, we would know as much as it is possible to know about the falling ball.

This is effectively the concept of calculus. Newton developed a way to take infinitesimally small "pictures" of mathematical situations and was thus able to analyze every single instant from the start of an action to its finish. He developed the idea during what was effectively an overly long summer break. It became an enormously powerful tool.
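
You can actually watch the idea converge numerically. Here's a short Python sketch (my illustration, obviously not Newton's) of a falling ball photographed at smaller and smaller intervals:

```python
# The "smaller pictures" idea on a falling ball: the average speed between
# two snapshots approaches the true instantaneous speed as the gap
# between snapshots shrinks.

g = 9.8  # m/s^2

def position(t):
    """Distance fallen from rest after t seconds."""
    return 0.5 * g * t**2

t = 1.0  # the instant we care about; the true speed there is g*t = 9.8 m/s
for dt in (1.0, 0.1, 0.01, 0.001):
    avg_speed = (position(t + dt) - position(t)) / dt
    print(f"snapshots {dt} s apart: average speed = {avg_speed:.4f} m/s")
# prints 14.7, 10.29, 9.849, 9.8049 ... closing in on 9.8
```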

Robert Hooke, a rival scientist, once claimed to have proved that planets attracted by an inverse-square force must travel in the elliptical orbits Kepler had described, but he refused to share the proof with his colleagues, presumably to avoid having to credit them. The colleagues consulted Newton, who replied that he had solved the problem four years earlier and had simply thrown the work aside and eventually lost it. He then spent 18 months working furiously to publish before Hooke, at which he succeeded.

Here we encounter another one of Newton's more colorful personality traits: ego. It was perhaps his towering pride that led to his most influential discoveries. Frequently his books were published out of spite for another scientist. Any derogatory remark made about him or his work threw him into a black depression that could only be cured by besting the man who made the comment. As a final blow to his then lifelong enemy, Newton even refused to publish his groundbreaking discoveries in the field of optics until Hooke was dead, thus never revealing to him what had been discovered.

Among the body of scientists at the time, problems were frequently shared so as to facilitate their solution. Johann Bernoulli once posed the problem of the curve of quickest descent (the brachistochrone), which is the path an object must take between two points to get from one to the other the fastest (hint: it is not a straight line unless one point is directly above the other). Only a few responded, one (Newton) anonymously, as if to say that anyone could solve it. But when Bernoulli read the nameless proof, he immediately named Newton as the author, proclaiming, "Ah! I know the lion by its paw."

Among other publications, Newton wrote a book known as The Mathematical Principles of Natural Philosophy or Principia for short. It outlines the basic laws governing motion and forces and defines the basic terms that we now consider commonplace (force, mass, velocity, acceleration, inertia, etc.). He proved that all masses are acted upon by gravity in the same way (the moon and an apple, for example), and most importantly, gave us the mathematical tools to solve basically every problem with perfect (yes, perfect) accuracy given the correct conditions. Still, of his own accomplishments, he said, "I do not know how I may appear to the world, but to myself I seem to have been only like a boy, playing on the sea-shore, and diverting myself, in now and then finding a smoother pebble or prettier shell than ordinary, whilst the great ocean of truth lay all undiscovered before me."

Tuesday, June 30, 2009

Universe Synthesis: Part II

Previously, we explored the first three epochs of the synthesis of the universe, from the beginning of the Big Bang until 10⁻¹² seconds after it. Remembering that the older the universe gets, the longer it takes for something interesting to happen, let's explore the next several epochs.

The Quark Epoch
10⁻¹² to 10⁻⁶ seconds

As the universe continued to cool, fundamental particles finally started to emerge. The first of these are known as quarks, the building blocks of subatomic particles such as protons and neutrons. By this point, the four fundamental forces, which started out unified, had become distinctly separate.

The Hadron Epoch
10⁻⁶ to 1 second

The next three epochs are characterized by which kind of particle dominated the rest (in number) at the time in question. The first kind of dominant particle formed thanks to the continued cooling of the universe: quarks started to combine into multi-quark particles known as hadrons. At the same time, antimatter formed (I know that sounds terribly complicated, but we only use the term to describe a certain kind of matter that, when it reacts with the stuff that's currently in our universe, turns into energy in a process called annihilation. That's not so scary, is it?). Further cooling caused anti-hadrons to collide with the hadrons, eliminating most of them. However, since the number of particles was not exactly equal to the number of antiparticles, a residue of what we now call matter stayed behind in the form of hadrons (i.e. if the other kind of matter had been more numerous, we would have called that matter, and the stuff that's in our universe now would be antimatter). Also noteworthy -- of course -- is that by now a single second has elapsed in the life of the universe.

The Lepton Epoch
1 to 10 seconds

After the hadron/anti-hadron annihilation period, leptons dominated the particle population in the universe. Leptons are also elementary particles (which come in six flavors), but are not quarks. Your favorite lepton is the electron, which is largely responsible for every electrical device that you've ever heard of. Similar to the Hadron Epoch, the Lepton Epoch ends with a large scale annihilation due to interaction between lepton/anti-lepton pairs.

The Photon Epoch
10 seconds to 380,000 years

Well, now you're thinking, "So everything annihilated everything else? What is left?" We need to get a few things straight. The term annihilation refers only to the annihilation of mass. But nothing can simply disappear. When mass is annihilated, it turns into energy. That's what all that E=mc² business is about, anyway. Annihilated mass yields an amount of energy equal to that mass times the speed of light squared (c² ≈ 9×10¹⁶ in SI units, or a whole bunch). That energy is expressed in little packets of energy called photons, which we more commonly call light. Also worth mentioning is that each of the two previous epochs left behind a substantial amount of matter (from which every planet, star, and galaxy in the universe is formed). So there's still stuff out there.
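
For a sense of scale, here's the arithmetic in Python for the smallest such event, a single electron annihilating with its antiparticle (standard constants; the answer is the famous pair of 511 keV gamma rays):

```python
c = 2.998e8             # speed of light, m/s
m_electron = 9.109e-31  # kg

# an electron and its antiparticle annihilating turn entirely into light
energy_J = 2 * m_electron * c**2
energy_keV = energy_J / 1.602e-16  # joules to keV

print(f"{energy_keV:.0f} keV")  # ~1022 keV: two gamma-ray photons of 511 keV
```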

During this epoch, however, light rules the universe. We have an extremely dense concentration of photons that is rapidly (at the speed of light, no less) expanding. Minutes into the epoch (between about 3 and 20) comes a period known as nucleosynthesis, during which hadrons combine to form the first nuclei. Most remain lone protons, the nuclei of hydrogen, while roughly a quarter of the mass fuses into nuclei of helium. Only at the very end of this epoch, around the 380,000-year mark, had things cooled enough for those nuclei to capture electrons and become neutral atoms of hydrogen and helium (finally, something we've heard about before).

--

We are now hundreds of thousands of years into the history of the universe and we are just getting ready to make life sustaining planets (in just a few hundred million years!). The third Universe Synthesis post will talk about how stars and planets are formed and how the universe came to look like it does today (i.e. no more particle physics).

Thursday, June 18, 2009

Newton's Laws

There has not been a more complete contribution to the field of mechanics since Sir Isaac Newton defined the basic laws of motion. The real beauty of the laws which now bear his name is that they explain how an apple falls off a tree to the ground as simply as they describe how the moon moves around the earth. In fact, it was that very juxtaposition that got him thinking about them in the first place.

Before him, no one could understand what kept planets and moons in their orbits. They weren't connected to anything, so explaining why they didn't fly away was a difficult proposition. But watching the apple fall led him to understand the fundamental principles of forces. I'll define the laws and then use them to explain this phenomenon.

The First Law

Corpus omne perseverare in statu suo quiescendi vel movendi uniformiter in directum, nisi quatenus a viribus impressis cogitur statum illum mutare.

The first law can be described in several different ways. We can say, as is often said, that objects at rest will stay at rest and objects in motion will stay in motion until acted upon by an outside force. Another way to say this is that something changes velocity only when another object applies a force to it. Most simply, this law describes a characteristic of mass known as inertia (its propensity to maintain its state of motion, or rest, unless acted upon).

This law explains that the natural motion of a moving object is a straight line, which is not as obvious as one might think. Before forces were well understood (i.e. before Newton), it was easy to think that objects naturally traveled in curves (just throw a ball in the air) and that straight-line motion was anomalous. Thus, by logic, we can conclude that anything moving in anything other than a straight line has a force acting on it.

The Second Law

Mutationem motus proportionalem esse vi motrici impressae, et fieri secundum lineam rectam qua vis illa imprimitur.

The second law describes the effect of forces on masses. Specifically, the amount of force on an object can be quantified by multiplying its mass by its acceleration. Succinctly, we say that F = ma. Here we learn an important characteristic of mass: it resists change. The larger the mass, the larger the force required to make it accelerate as quickly as a smaller mass would.

The first law is just a special case of the second law. That is, the first law is simply a description of what happens when F = 0. By the second law we know that ma = 0, and since we know the object has a non-zero mass (it being an object), we must conclude that there is no acceleration. In other words, something with no forces acting on it cannot change velocity (whether moving or not).

The Third Law

Actioni contrariam semper et aequalem esse reactionem: sive corporum duorum actiones in se mutuo semper esse aequales et in partes contrarias dirigi.

You know this one, too. In fact, I've never met a person to whom I could say the first half without having the second half repeated back to me. For every action (applied force) there is an equal force applied in the opposite direction. We sometimes finish that sentence with "...there is an equal and opposite reaction," but I prefer to avoid the term reaction, as it can cause misconceptions.

To clarify the terms of the law: nothing can apply a force to another object without having that object exert an identical force back on it. To prove this to yourself, stand facing a wall with your toes against it and push as hard as you can without moving your feet. Of course, you move backwards. Why? Because the wall pushed you with the same force that you pushed it.

It's easy to over-think this law. If all forces are paired and equal, then how does anything move at all? Why don't all forces cancel out? The answer is contained in the second law. When you push against a train, the train pushes against you. The train having a huge mass (relatively) has an extremely small acceleration. You, on the other hand, have a very small mass and thus your acceleration is much greater. That is why -- as a general rule -- we try to avoid getting hit by trains.

An Application:

Suddenly, very complex situations are rather easy to describe qualitatively. We can see that the moon is not traveling in a straight line but in a circle. By the first and second laws, there must be a non-zero force acting on the moon which causes its acceleration. And by observing an apple fall (moving toward the earth without being connected to it), we can surmise that the moon is similarly falling toward the earth while moving past it, the combination of the two motions keeping it in orbit.
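
This is a check we can actually run, and it's essentially the one Newton ran himself. Using standard textbook values, the acceleration the moon's circular orbit requires matches the acceleration earth's gravity provides once it's diluted by the square of the distance:

```python
import math

# Newton's sanity check, redone numerically: the moon sits about 60 earth
# radii away, so surface gravity diluted by the inverse-square law should
# equal the centripetal acceleration of the moon's orbit.

g = 9.8               # m/s^2 at the earth's surface
r_moon = 3.84e8       # radius of the moon's orbit, m
r_earth = 6.37e6      # radius of the earth, m
T = 27.3 * 24 * 3600  # the moon's orbital period, s

a_orbit = 4 * math.pi**2 * r_moon / T**2  # centripetal acceleration, v^2/r
a_gravity = g * (r_earth / r_moon)**2     # inverse-square prediction

print(f"the orbit requires {a_orbit:.5f} m/s^2")    # ~0.0027
print(f"gravity supplies   {a_gravity:.5f} m/s^2")  # ~0.0027: same force
```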

But how can we verify this unseen force? How do we know that it is the earth which exerts a force on the moon, and not some other thing that we haven't yet discovered? Newton's third law tells us that if the earth exerts a force on the moon, then the moon must necessarily exert a force on the earth. This force is observed in the tides. The moon's gravitational pull distorts the envelope of water around the earth into an egg shape. As the earth rotates, the envelope stays oriented toward the moon, and we observe varying depths of the ocean depending on the time of day.

This (long) explanation and application of the most basic laws of physics have set the stage for the explanation of every mechanical (moving) system that we have been able to describe. These laws provide the fundamentals of operations for -- off the top of my head -- space travel, jet engines, dishwashers, fork lifts, building construction, airplanes, cars, and many, many more situations.

Saturday, June 13, 2009

Galileo Galilei


b. 15 February 1564
d. 8 January 1642

Noteworthies:
  • Invented physics
Galileo is one of those figures to whom we attribute lots of things just because he was great. In much the same way that Washington did not throw a silver dollar across the Potomac (it being more than a mile across at Mount Vernon) any more than he chopped down a cherry tree on his father's estate, Galileo is largely innocent of all of the one-liner attributions that he is awarded. For instance, he did not invent the telescope (although he was the first to turn it skyward). Equally, he never performed an experiment in which he dropped weights off the Tower of Pisa, thus proving that all masses fall at the same rate. Not surprisingly, he did not really invent physics either. But I'll show you what I meant by that.

Galileo was a student of observation. On top of that, he was sarcastic, confrontational, pugnacious, and brilliant. His mantra was the pursuit of observable truth and the rejection of "truth" declared in ignorance. His mission was to enlighten those who took men of power, rather than observation, as their source of truth.

During his time, truth was whatever the Church declared it to be. The Earth was the center of the universe, all things in the heavens were perfectly spherical and traveled in perfect circles, and all unanswerable questions were answered by Church leaders. Galileo's life seems to have been dedicated to breaking the mindset that truth is what men of power think it should be. His methodology was flawless: experimentation and demonstration.

When told (by a Cardinal) that ice floats only because of its sheet-like shape, Galileo performed a public experiment in which he demonstrated that density governs buoyancy. The audience watched as thin sheets of ebony sank while large blocks of ice remained at the surface. No one could refute the evidence before them.

He was challenged on basically every important discovery he made. When observing the moon through a telescope, he discovered mountains, ridges, and hills. Saturn had "ears" and the Sun had spots. All of these went against the common philosophy that the sky was filled with perfectly circular, perfectly formed bodies. His discoveries were uniformly pronounced untrue until he simply showed his accusers what he had seen with his own eyes.

Again turning heavenward, Galileo discovered that Venus—like the Moon—displayed phases: crescent, half, full, and back to new. Such a thing could only be possible if Venus orbited the Sun, sometimes lying between us and the Sun and sometimes on its far side. The Church was scared and frustrated. If a layman could disprove "truths" that the Church had taught for years, its authority would be undermined. Church authorities arrested him, threatened his life, and eventually confined him to house arrest. But the damage was done. People started to see that physical truths needed to be observable. Simply declaring a geocentric universe could not make it true. Our declarations must be backed by confirmed fact.

Lest we erroneously think that Galileo's anti-Church stance was anti-religious, let us consider the counsel he gave to his accusers who argued their points from out-of-context Biblical references: "The task of wise interpreters is to find true meanings of scriptural passages that will agree with the evidence of sensory experience." Indeed, his stance was more religious than their own. He maintained that God created a physically explainable world and that part of our reason for being on it was to figure out how it worked. We do not have to deny that God held the Sun in the sky for Joshua, or that He parted the Red Sea just because we can't explain it. But we also do not have to assume that it will remain unexplainable forever.

From his example comes the scientific method. A scientific question asked can only be considered answered when it is backed by repeatable, concrete evidence. The answered question then remains to be further backed by experiment or else disproved by more detailed analysis. The quest is not to be personally right, but to find the truth behind the phenomena that we encounter each day.

Perhaps these stories of Galileo disproving the clergy by experimentation seem trivial. Surely they would have thought to test buoyancy by putting things in water. Isn't that the obvious solution? That sentiment, in and of itself, is a tribute to the great gift that Galileo gave us. We see the simplicity in his methods because we have adopted them through and through. You were raised to experiment, to test, to try, to guess and be wrong, and to reason in part because of the scientific contributions made by an Italian astronomer (of course) several hundred years ago. Someone else probably would have done it if he hadn't been so persistent, but his influence stands out as the catalyst for a reasoning, scientific community that seeks for physical truth by physical confirmation.

Saturday, May 23, 2009

Color

My last post caused me to think a lot about how and why we see colors. Let's talk.

Colored light is exactly the same kind of radiation as X-rays, gamma rays, UV rays, radio waves, or microwaves. The only difference is its frequency (the number of oscillations the wave completes each second) and wavelength (the distance between two adjacent maxima). Other than that, it's all the same thing. We call light with a wavelength of about 400-700 nm (100 nm = 10⁻⁷ m) visible light only because our eyes happen to process it when it hits them. But all of it is light. So what's going on that makes us see certain wavelengths as color?
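
A quick aside before we answer that: frequency and wavelength aren't independent; they're tied together by the speed of light, c = fλ. A little Python sketch for the two ends of the visible range:

```python
c = 2.998e8  # speed of light, m/s

def frequency_THz(wavelength_nm):
    """c = f * lambda, so f = c / lambda (answer in terahertz)."""
    return c / (wavelength_nm * 1e-9) / 1e12

print(frequency_THz(400))  # violet: ~750 THz
print(frequency_THz(700))  # red:    ~430 THz
```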

First off, why do objects give off some wavelengths of light but not others? There are two ways an object can have (or lack) color. First, an object can emit light all by itself. You don't do this in the visible range, but the stars do. The graph here shows the light output of several different kinds of stars. You'll notice that each star emits light in all of the visible colors, but in one more than all the others. That's why some stars look blue and others red. Ours looks like the middle curve and actually appears white in space (emitting all of the colors fairly evenly) although it appears yellow on earth (we'll get to that in a bit).

The next way an object can have color is by scattering light that falls on it. Depending on the chemical composition of the material, it will absorb some wavelengths and scatter others. Only the wavelengths that reach your eye are the ones your brain processes, so you perceive distinct colors in objects illuminated with white light. Scattering is a rather prevalent phenomenon. One of the most common occurrences happens with white sunlight traveling through our atmosphere. It just so happens that the size of air molecules makes them much better at scattering smaller wavelengths of light. Blue, having the smallest wavelength in the visible range, is preferentially scattered in every direction, which is why we see it when we look at any part of the daytime sky (this is called Rayleigh scattering). If we look toward the source of the light, the sun (note: do not look at the sun), we see the remaining light: white minus blue, which we call yellow. In other words, if our sky scattered red light instead, our sun would look cyan. Particles much larger than gas molecules (such as water droplets) also scatter light, but do so evenly across wavelengths. Clouds (composed of tiny water droplets) thus scatter all incident light evenly, causing us to see white (a phenomenon called Mie scattering).

So, when (scattered or emitted) light reaches our eyes, how does our brain distinguish between all the colors? As you are well aware, our eyes have four kinds of small photoreceptors in them, generally called rods and cones. Each, by a process known as phototransduction, transmits electrical impulses to the brain when hit with light. However, not all of them respond to the same wavelengths. Some respond only to blue, and others only to green or red. The graph here displays the response functions by wavelength of the three different kinds of cones in our eyes. You see that one transduces primarily in the blue range, whereas two transduce in almost the same range, one slightly redder than the other.

Color, then, is just the end product of our eyes' response to a source. Imagine a source at 450 nm. The blue receptor responds strongly, and green and red each respond to a much lesser degree, but green a little more than red. Thus we see mostly blue with a much smaller dose of red and green. In other words, we see blue on its way to becoming purple. Looking at the response graph, one can deduce that the easiest color to see is at almost exactly 550 nm. Here, red and green respond equally in strong measure, producing a sickly yellow color. It is at this intersection point that the largest number of photoreceptors are giving some kind of response. Interestingly, our ability to see this color so well is the reason that they started painting emergency vehicles this color (as pictured here).

All of the colors that we see are simply combinations of red, green, and blue. Sometimes they are represented in the form <ratio of red, ratio of green, ratio of blue>. The "pure" colors are ones that can be represented by a single wavelength. In other words, if you can produce a color by drawing a single vertical line on the receptor graph above and mixing the resulting ratios of red, green, and blue, you are seeing what we call a "pure" color. Some colors require at least two wavelengths of light to combine to create the response in our eye. Brown is the most common example. That's why brown is not part of the rainbow; a rainbow refracts light, allowing you to see white light (a combination of all colors) split up into single-wavelength portions. Since brown cannot be created in the human brain without at least two stimuli, it cannot be in the rainbow.

The science behind scattering, absorption, and reflection is much, much deeper. But I hope that this offers at least a first look into the beautiful complexity of optics and biology as an application of physics (of course). The resolution of our eyes is astounding. The difference in wavelength between blue and red light (the extremes of our vision) is only about 3×10⁻⁷ m, yet our eyes distinguish the myriad of colors and details that make our world vibrant and beautiful.

Wednesday, May 20, 2009

Nomarski Imaging


As per request, I'm going to cover an application of physics today that is really on the proverbial cutting edge. Differential Interference Contrast (DIC) microscopy or Nomarski imaging is an exciting optical method that allows us to "see" microscopic, translucent biological material. As is becoming a theme, I'll need to explain a few concepts in optics before continuing to the meat of DIC imaging.

Prerequisite Light Discussion:

It's no secret that light is a rather complicated beast. First of all, a photon (the basic unit of light) is simply a packet of electromagnetic radiation. Sometimes it acts like a wave (it refracts, diffracts, and reflects) and sometimes it acts like a particle (we can shoot photons one at a time, which is no more wave-like than a single water molecule by itself on the beach). To be clear, light is neither a wave nor a particle, but it acts like one or the other depending on the conditions under which we observe it. The wave in light's wave-like behavior is the oscillation of the electromagnetic field of which it is composed. The electric field grows stronger and weaker with regular oscillations, as does the magnetic field (oriented perpendicularly to the electric field). It is from these oscillations that we determine frequency, wavelength, and other wave-like characteristics.

Polarization is the term we use to describe how the photons' electric and magnetic fields from a specific source are aligned. If the field oscillations of the photons have random orientations, the light is unpolarized. If the electric fields of all the individual photons are oriented up and down, we call this vertical polarization. We can also achieve circular polarization by causing the electric field of each photon to rotate either clockwise or counterclockwise such that at any instant, all of the photons are oriented in the same direction. This is more applicable than you might think. We use polarizing filters in sunglasses to cut out reflective glare, in films to produce three-dimensional effects (if you wear those silly glasses), and in astronomy (of course).
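
The sunglasses trick is quantified by Malus's law: an ideal polarizing filter transmits a cos² fraction of already-polarized light, depending on the angle between the light's polarization and the filter's axis. A minimal sketch:

```python
# Malus's law: transmitted intensity fraction = cos^2(theta), where theta is
# the angle between the light's polarization and the filter's axis.
import math

def transmitted_fraction(angle_deg: float) -> float:
    return math.cos(math.radians(angle_deg)) ** 2

for angle in (0, 30, 45, 60, 90):
    print(f"{angle:3d} degrees -> {transmitted_fraction(angle):.0%} transmitted")
```

Glare reflecting off a road or a lake is mostly horizontally polarized, so sunglasses with a vertical transmission axis sit near the 90-degree end of that table.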


Phase is another important concept in light that we'll need to consider here. As shown in the image, two waves can be identical in amplitude, wavelength, and frequency, but still be out of phase. This means that their moments of maximum field strength happen at different times. Phase is the reason that photographs don't look the same as real life. A picture can record the differences in intensity of the light hitting the film, but (except in holography) film cannot record the phase of light well enough to reproduce it for the observer. The image comes out flat-looking.

DIC Imaging:

Differential interference contrast imaging uses a combination of applications of polarization and phase to image translucent specimens. 45-degree polarized light is split into two beams, one of 90-degree polarized light and the other of 0-degree polarized light. Though polarization is divided in this split, phase is kept constant. That means that two photons -- one 90- and the other 0-degree polarized -- that passed through the beam splitter at the same time will keep the same relative phases that they had when the beam was together. Each beam is independently but simultaneously passed through the material using a converging lens. The material is not necessarily homogeneous throughout. It will have regions of high density and perhaps regions of differing composition. Since light travels a little bit slower in dense media (with a higher index of refraction), the photons passing through denser parts of the material will take a longer time to get through it. Thus, the phase of each beam becomes variable over the beam, not constant as it was at the beginning.
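
To put a number on that delay: the phase a beam accumulates crossing a region of thickness d and refractive index n is φ = 2πnd/λ. Here's a small sketch with made-up values for a watery specimen (the indices and thickness are illustrative assumptions, not measurements):

```python
# Phase accumulated crossing a region: phi = 2*pi*n*d / lambda.
# A denser region (higher n) delays its light, shifting its phase.
import math

WAVELENGTH_M = 550e-9  # assumed illumination wavelength

def phase_rad(n: float, thickness_m: float) -> float:
    return 2 * math.pi * n * thickness_m / WAVELENGTH_M

# Hypothetical specimen: watery background (n ~ 1.33) vs. a slightly
# denser structure (n ~ 1.36), both 2 micrometers thick.
delta = phase_rad(1.36, 2e-6) - phase_rad(1.33, 2e-6)
print(f"phase difference: {delta:.2f} radians")
```

Even that tiny difference in index, over a couple of micrometers, produces a phase shift big enough to detect.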

The beams (each now phase-shifted and still perpendicularly polarized) are brought back together and projected onto a film. Here, both phase and polarization are recorded in each beam. As the beams combine, not only will they interfere (due to their phase differences), causing the texture of the material to become visible as a series of brighter and darker contours, but the three-dimensionality of the material (its thickness, for example) will also come out because of the polarization. To see in three dimensions, we must have two slightly different views of the same thing (such as through your left and right eyes). Polarization provides just that sort of perspective, rendering the image in three dimensions. This splitting and recombination of beams to measure objects is known as interferometry, a technique prevalent in optical and astronomical research.
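
For equal-strength beams, the brightness after recombination depends only on the phase difference between them: I = I₀ cos²(Δφ/2). This is a simplified model (real DIC optics add a bias offset, among other things), but it shows where the contours come from:

```python
# Intensity of two recombined equal-strength beams as a function of their
# phase difference: in phase -> bright, half a cycle apart -> dark.
import math

def combined_intensity(delta_rad: float, i0: float = 1.0) -> float:
    return i0 * math.cos(delta_rad / 2) ** 2

for delta in (0.0, math.pi / 2, math.pi):
    print(f"delta = {delta:.2f} rad -> intensity {combined_intensity(delta):.2f}")
```

Regions of the specimen that delayed one beam more than the other show up as darker bands, which is exactly the texture described above.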

Thus, even though we cannot actually see the material that we are analyzing, its variable density lends itself to visible analysis by exploitation of the propensity of light to slow down in denser media. The images we get yield the kind of extreme detail required to learn about microscopic, organic materials.

Tuesday, May 19, 2009

Enrico Fermi


b. 29 September 1901
d. 28 November 1954

Noteworthies:

  • Nobel Laureate, Physics
  • Namesake of Fermium on the Periodic Table
  • Namesake of Fermilab
  • Significant contributor to the Manhattan Project

Fermi, an Italian-born physicist, is probably the most complete physicist since Newton. He was equally and exceptionally gifted in both experimental and theoretical physics and is a contributor to (even arguably the father of) modern nuclear and particle physics. His life and gifts are extraordinary and are worth talking about.

He grew up in Italy, schooled by his mom in their unheated house. It was apparently so cold that he devised a way to turn the pages of his books with his tongue to keep from having to use his hands, which he sat on to keep them warm. He was always a great experimenter, using his brother as an assistant. His interest in -- or, rather, obsession with -- physics theory came with the tragic, early death of his brother. His first tutor recognized his unique ability to understand and remember physics and math.

"When he read a book, even once, he knew it perfectly and didn't forget it," (1) commented Adolfo Amidei when asked to recollect his student's progress. Later in his life he would even recite whole chapters of physics texts out loud while driving on long trips. His grasp of theory was so concrete that his friends nicknamed him "The Pope" for his infallibility. At college he would often pass his time lying on the grass writing textbooks from memory without any kind of notes or scratch paper. His writings were never interrupted with erased or crossed our words. Often, the director of the research lab where he worked would seek him out (he the student) and say simply, "teach me something." (1)

One of the skills for which Fermi is particularly well known is estimation. He possessed the ability to look at a system or a problem and produce remarkably accurate results without any research or calculation except for what was already in his head. When working at Los Alamos on the bomb, he accurately estimated the yield (explosive size) of the first test by dropping scraps of paper as the shock wave passed him. He is also credited with accurately estimating -- without any research or written calculation -- how many molecules were stripped off of a car tire each rotation, how many piano tuners were in the city of Chicago, and the number of molecules of water in a teaspoon versus the number of teaspoons of water on the planet. These kinds of problems are now actually referred to as Fermi Problems.
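
The piano tuner problem is a fun one to work through. Every number below is a loose assumption (that's the whole point of a Fermi problem); what matters is that multiplying rough factors lands within an order of magnitude of the truth:

```python
# A classic Fermi problem: roughly how many piano tuners work in Chicago?
# Every input here is a deliberately rough guess.
population = 3_000_000           # people in Chicago, give or take
people_per_household = 2
households_with_piano = 1 / 20   # one household in twenty owns a piano
tunings_per_piano_per_year = 1
tunings_per_tuner_per_day = 4
working_days_per_year = 250

pianos = population / people_per_household * households_with_piano
tunings_needed = pianos * tunings_per_piano_per_year
tuners = tunings_needed / (tunings_per_tuner_per_day * working_days_per_year)
print(f"~{tuners:.0f} piano tuners")  # lands in the tens, which is about right
```

Fermi did this kind of bookkeeping in his head, instantly, for problems far harder than this.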

His Nobel Prize was awarded for his work in nuclear physics -- specifically, for demonstrating new radioactive elements created by neutron bombardment. He is just as famous, though, for describing the actual process of beta particle emission. For years before, physicists knew that electrons were emitted from atomic nuclei but were unclear as to where they came from. Fermi determined that a neutron in the nucleus of the atom actually turns itself into a proton, emitting an electron and a neutrino in the process. Inherent in this was the postulation of a new fundamental interaction -- the weak force -- which, together with the later discovery of subatomic particles like quarks, has led to our knowledge of the history of the universe as well as our current descriptions of the fundamental forces.

He further categorized a class of particles known as fermions, which follow certain quantum statistical rules. Understanding those rules has led to our comprehension of the behavior of stars and of the flow of electricity (and thus Cooper pairs and superconducting materials), knowledge that is used every day in current scientific research.
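
The statistical rule in question is the Fermi-Dirac distribution, which gives the probability that a fermion state of energy E is occupied at temperature T. A small sketch (μ is the chemical potential; the energies below are arbitrary illustrative values):

```python
# Fermi-Dirac distribution: the probability that a fermion state of
# energy E is occupied at temperature T (mu is the chemical potential).
import math

K_B = 8.617e-5  # Boltzmann constant in eV/K

def fermi_dirac(energy_ev: float, mu_ev: float, temp_k: float) -> float:
    return 1.0 / (math.exp((energy_ev - mu_ev) / (K_B * temp_k)) + 1.0)

# At room temperature, states well below mu are nearly full and states
# well above it are nearly empty:
for e_ev in (0.8, 1.0, 1.2):
    print(f"E = {e_ev} eV -> occupancy {fermi_dirac(e_ev, 1.0, 300):.3f}")
```

That sharp edge between full and empty states is what shapes conduction in metals and the behavior of dense stars.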

Most notably, Fermi invented or engineered many of the components of elementary nuclear reactors, which now power whole countries (not our own, unfortunately), and the United States Navy's submarines and aircraft carriers. He introduced the cadmium control rods that protect against meltdown during critical-phase nuclear reactions.

Fermi totally immersed himself in every project he undertook, often working around the clock (not because he was under a deadline, but because of his natural interest). He died of stomach cancer at the age of fifty-three. "Fermi told [a friend] in 1945, at the end of the war, that he had then completed about one-third of his life's work. By that reckoning, when he died nine years later, Enrico Fermi had given us no more than half of what he had to offer." (1)

1. Cropper, William H., Great Physicists. New York : Oxford University Press, 2001.

Friday, May 15, 2009

Universe Synthesis: Part I

Lifetimes of research have been dedicated to discovering how the universe was formed, and what it looked like and how it behaved during its stages of development. The topic is a little lengthy, so it will be divided into several independent posts. But first (as usual) we need to address a critical concept: time.

Astronomers estimate that the age of the universe is something like 14 billion years. Imagine what it will look like in twice that time, another 14 billion years from today. Do you expect it to look the same? Clearly not. Most of the stars currently in the universe will have burned out or exploded, and new ones (made up of the less pure remnants of the recently departed stars) will have taken their place. With these new generations of stars, it is entirely possible that everything we currently know about the behavior of and interactions between stars will have changed. Their compositions, lifetimes, and nuclear reactions will be different. With that established, it doesn't seem like a stretch to say that doubling the age of the universe changes it significantly.

Keep that in mind as we explore the first few epochs of the existence of the universe. They take place fractions of seconds after each other in what would today be termed rapid succession. Remember, however, that when the universe was 10⁻³⁰ seconds old (a trillionth of a trillionth of a millionth of a second or so), it was ten billion times older than it was when it was 10⁻⁴⁰ seconds old. Back then, time wasn't very old. The age of the universe doubled in units of time too small to think about. So, at that time, fractions of a second were as significant as billions and billions of years would be today. Let's take a look at the earliest stages of the life of our home.
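
To see just how strange early-universe timekeeping is, count age doublings: the number of times the universe's age doubles between two moments is log₂ of their ratio. A quick sketch:

```python
# Number of times the universe's age doubles between two moments:
# doublings = log2(t_late / t_early).
import math

def age_doublings(t_early_s: float, t_late_s: float) -> float:
    return math.log2(t_late_s / t_early_s)

# From 1e-40 s old to 1e-30 s old:
print(f"{age_doublings(1e-40, 1e-30):.0f} doublings")  # ~33
# From the end of the Planck epoch (~1e-43 s) to one second old:
print(f"{age_doublings(1e-43, 1.0):.0f} doublings")    # ~143
```

By comparison, the age of a 14-billion-year-old universe won't double again for another 14 billion years.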

The universe started as an immensely massive, yet infinitely tiny point. At some point in time, something caused it to expand at an astounding rate.

The Planck Epoch
0 to 10⁻⁴³ seconds

Light and heat are the only two things able to exist. Due to an extremely high temperature (~10³² °C), nothing solid can form and remain formed. Much in the same way that ice can't remain ice at high temperatures, any matter trying to form itself immediately dissociated (broke apart) under the high-energy light and became energy again. During this time, it seems that all of the forces that we are familiar with (gravitational, electromagnetic, and nuclear forces) were combined into one unified force that governed all things.

The Grand Unification Epoch
10⁻⁴³ to 10⁻³⁶ seconds

As the universe expands, it cools (the same amount of energy is distributed over a larger space), causing gravity to establish itself as a unique, fundamental force. The earliest fundamental particles (thought to include Higgs bosons) also form.

The Electroweak Epoch
10⁻³⁶ to 10⁻¹² seconds

At a whopping 10²⁸ °C, the strong nuclear force separates out as a unique and fundamental force (the word fundamental here is not a contradiction even though the force came from something else; I only mean that in these conditions, the force becomes fundamental). During this epoch, we suspect that a rapid expansion period took place, known as the cosmic inflationary period. The volume of the universe expanded enormously for several thousand trillionths of trillionths of seconds, doubling in size many times over. The temperature dropped significantly and quarks (the building blocks of protons and neutrons) formed out of the now cooler energy.

--

We're still a trillionth of a second away from the first whole second of the universe's life, but we have already traversed three major epochs in the formation of the universe in which we live. Stay tuned for the next major age of synthesis.

Monday, May 11, 2009

The Second Law of Thermodynamics


The Second Law of Thermodynamics is perhaps the most important governing principle of which we are aware in the physical universe. Strangely, of all of the physical laws that we use to describe physical systems, it is the only one that is not -- strictly speaking -- a law.

Let's take Newton's Third Law of Motion (for every force, there is an equal force in the opposite direction) as an example of a classically true one. No matter what you do -- if you push on a wall or throw a baseball or kick a dog (note: don't kick dogs) -- the object will exert the same force on you that you exert on it. Don't believe me? Go punch a wall as hard as you can and see what happens (note: don't do this either). It always happens. Always.

Conversely, the Second Law of Thermodynamics is actually just a declaration of statistics. Simply, it states that entropy tends to increase over time. Let's define entropy with a little example.

Imagine that you have a coin and you flip it. What is the probability that it will land heads-up? Right. 50-50. Now flip ten coins. What is the probability that five of them will land heads-up and five of them heads-down? You might be surprised to find out that it's about 25%.
I've included a little table to prove this to you. A microstate is one particular arrangement of the coins; the table counts how many microstates show the corresponding number of heads. In other words, there is only one way to show no heads (all tails), there are ten ways to show 1 heads-up coin (coin 1 heads up and the rest tails, coin 2 up and the rest tails, etc.), and so on. Beneath the "Microstates" column is a sum of all of the possible states. Divide the number of microstates for a certain coin combination by the total number possible and you get the probability of that many heads showing.
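
If you'd rather compute the table than take my word for it, a few lines of Python rebuild it:

```python
# Rebuild the coin table: for 10 coins, count the microstates (distinct
# arrangements) showing k heads, and the probability of each outcome.
from math import comb

N = 10
total = 2 ** N  # 1024 equally likely arrangements in all
for k in range(N + 1):
    micro = comb(N, k)  # ways to choose which k coins land heads-up
    print(f"{k:2d} heads: {micro:4d} microstates, P = {micro / total:.3f}")
```

The 5-heads row gets 252 of the 1024 arrangements, which is the roughly 25% claimed above.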

So what does that have to do with entropy? Well, what if you flipped all ten coins a billion times? You would expect that the most likely combination (5H and 5T) would show up most often. And that's what happens. Well, that's entropy. If you take the natural logarithm of the numbers in the "Microstates" column, you'll get the entropy of that particular combination of heads and tails. And since the natural logarithm of x gets larger as the value of x increases, the most likely combination has the highest entropy. So when we say that entropy tends to increase, all we mean is that over time, the most likely organization will tend to display itself. In other words, the Second Law of Thermodynamics can be quite simply stated as "Things that are likely to happen are likely to happen."

But you'll notice that this is a statistical certainty, not an absolute one. If you flipped ten coins ten times, you might find that 6H and 4T showed up more. We can only be absolutely certain of the outcome with really, really large numbers. Fortunately, the things we try to describe with this law (heat transfer between two objects, for example) have big numbers in them (objects that transfer heat regularly have billions of trillions of molecules). To illustrate, let's flip 100 coins. The odds of landing 50 heads are relatively small, about 8%. But that's something like eight times more likely than flipping 60 heads, 100 million times more likely than flipping 80 heads, and one million billion times more likely than flipping 90 heads. As the number of coins increases, the likelihood of ever seeing anything far from 50-50 decreases to zero.
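
Those ratios are easy to verify exactly with binomial counts:

```python
# Check the 100-coin claims with exact binomial counts.
from math import comb

N = 100
total = 2 ** N

def p(heads: int) -> float:
    return comb(N, heads) / total

print(f"P(50 heads) = {p(50):.3f}")          # ~0.08
print(f"vs 60 heads: {p(50) / p(60):.1f}x")   # ~7x: 'about eight times'
print(f"vs 80 heads: {p(50) / p(80):.1e}x")   # ~2e8: order of 100 million
print(f"vs 90 heads: {p(50) / p(90):.1e}x")   # ~6e15: about a million billion
```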

To illustrate further: with 10²³ coins (there are about that many atoms in just a few grams of carbon), you could flip them all once a second, every second, for the entire age of the universe (about 10¹⁸ seconds) and still never find a result further away from 50-50 than 1%. When you get to the size of things we actually care about, there is absolutely no chance that any arrangement other than the most likely will ever happen. So, while the Second Law doesn't absolutely determine the outcome of statistical events, its predictions are still effectively certain.

Applications:

So who cares about that? Didn't we already know that likely things happen? Isn't that why they are likely? Why does knowing this mathematically help anyone? To name a few things, this law provides the foundation for our heat transfer models that we use to heat homes, businesses, and cars. It helps determine the size of the fan on your computer or your car needed to keep it from overheating. It helps us know how big of a heat pump your fridge or freezer will need, how solutions tend to mix at a given temperature and when they will freeze or melt (which is how they invented antifreeze, solder and plastic), or at what temperature magnets will cease to be magnetized (more important than you think). It plays a part in determining the conductivity of materials at a given temperature (like the wires in your house or computer), the formation of minerals and rocks in the earth's crust, and the chemical distribution in our atmosphere. In short, the Second Law of Thermodynamics helped us either to develop or manufacture almost every household object that we consider to be common. That is no exaggeration.

To recap, the Second Law is more than just a "description of chaos," as many people choose to describe it. Rather, it is the statistical certainty that likely things occur over time (which tends toward basic forms of energy -- like heat -- that could be described as chaotic). As I consider its implications, I am amazed at how such a fundamental law can be so simple while describing so much.

Saturday, May 9, 2009

Louis-Victor de Broglie


b. 15 August 1892
d. 19 March 1987

Noteworthies:

  • Nobel Prize laureate
  • Fellow, Académie française
  • Perpetual Secretary, Académie des sciences
  • 7th Duke of Broglie
  • Fellow, Royal Society of London for the Improvement of Natural Knowledge
Born into a rich, aristocratic family, de Broglie (pronounced /də'brɔɪ/) had really no need to become a physicist, or anything else for that matter. His interest in physics came from a combination of natural intuition and skill in the subject and (most probably) the influence of his older brother, Maurice, who was an accomplished physicist himself. However, his entrance into the field was as well-timed as it was risky.

The last thirty years of science had been revolutionary. Einstein had recently declared light to possess characteristics of both waves and particles, introducing what is now known as wave-particle duality. Planck and Bohr followed up with the discovery of quantization, the simple but profound idea that bound energy can exist only in certain, specific denominations, with no energy propagation in between. Since physics had previously been divided into two separate domains -- particle physics and wave physics -- duality was something of a hot topic that caused frequent debates. Bohr even refused to believe the results of his own experiments, trying to force his findings to fit a (consequently incorrect) model that he couldn't give up.

I know it may seem like a trivial thing to us, but this was groundbreaking work that seemed to be trying to convince the world that apples and oranges were the same thing. De Broglie found himself in the middle of a great schism with Einstein on one side and Bohr on the other. A wrong step could end any credibility he had gained and potentially end his career. Slamming together some of the most basic equations (E = mc² and E = hν), he boldly declared the impossible, based initially on intuition and faith: not only was light duality correct, but all things exhibit such behavior. Sometimes light acts like a particle and sometimes like a wave. It is, however, neither of the two by the classical definition. Even more surprising, sometimes matter behaves like a particle, and sometimes like a wave.

In other words, you have a wavelength. So does a grapefruit. And your car. At the right speed and in the right conditions, a stream of grapefruits would diffract around a corner exactly like a water wave does. At a large scale, this doesn't mean a darned thing. But on the scale of atoms and electrons, duality lies at the heart of basically every modern invention in the world today. Computers function because we understand how to control electrons, because we finally figured out that they weren't little tiny balls bumping into each other and flowing along, but could instead be treated as a wave. There are hundreds and hundreds of applications stemming from duality that I could mention, but even if semiconductors (things that help make computers go) were the only thing ever invented because of de Broglie, can you see the implications of his work? What doesn't use a computer to help it function?
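
De Broglie's relation is disarmingly simple: λ = h/mv, Planck's constant divided by momentum. A quick comparison of an electron with an everyday object (the grapefruit's mass and speed are made-up but reasonable numbers):

```python
# De Broglie wavelength: lambda = h / (m * v). Everything with momentum
# has one -- it's just absurdly small for everyday objects.
H = 6.626e-34  # Planck's constant, J*s

def de_broglie_wavelength_m(mass_kg: float, speed_m_s: float) -> float:
    return H / (mass_kg * speed_m_s)

# An electron at a million meters per second vs. a tossed grapefruit:
print(f"electron:   {de_broglie_wavelength_m(9.11e-31, 1e6):.2e} m")  # ~7e-10 m
print(f"grapefruit: {de_broglie_wavelength_m(0.4, 5.0):.2e} m")       # ~3e-34 m
```

The electron's wavelength is about the size of an atom, which is why duality dominates at that scale; the grapefruit's is unimaginably smaller than anything measurable, which is why it doesn't mean a darned thing at ours.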

The science here is too deep for me to go into (or even understand myself). But that's kind of a blessing because I'd rather focus on the lessons learned from de Broglie as he discovered these principles. He was under immense pressure not to believe in duality. He very easily could have embarrassed himself and his family (particularly devastating to a French aristocrat) and ended his career. On a simpler note, he could have dismissed his intuition as passing insanity and focused on what everyone else forced themselves to see just because that was what they had always believed. Instead, de Broglie challenged and followed his own ideas, discarding the false ones along the way, and became one of the founding fathers of modern physics and the technological age in which we live. His work demonstrates genuine curiosity and courage and is a perfect example of the true scientific method.

Source: Cropper, William H., Great Physicists. New York : Oxford University Press, 2001.