The above image is known as the Pentagram of Venus; it is the shape of Venus' orbit as viewed from a geocentric perspective. This animation shows the orbit unfold, while this one shows the same process from a heliocentric perspective. There are five places in Venus' orbit where it comes closest to the Earth (known as perigee), and this is due to the coincidence that eight Earth years are almost exactly equal in length to thirteen Venus years.
When two orbital periods can be expressed as a ratio of integers it is known as an orbital resonance (similar to how a string has resonances equal to integer multiples of its fundamental frequency). The reason that there are five lobes in Venus' geocentric orbit is that 13 − 8 = 5. Coincidentally, these numbers are all part of the Fibonacci sequence, and as a result many people associate the Earth-Venus resonance with the golden ratio. (Indeed, pentagrams themselves harbor the golden ratio in spades.) However, Venus and Earth do not exhibit a true resonance, as the ratio of their orbital periods is about 0.032% off of the nice fraction 8/13. This causes the above pattern to precess, or drift in alignment. Using the slightly more accurate fraction of orbital periods, 243/395, we can see this precession.
This is the precession after five cycles (40 Earth years). As you can see, the pattern slowly slides around without the curve closing itself, but the original 13:8 resonance pattern is still visible. If we assume that 243/395 is indeed the perfect relationship between Venus and Earth's orbital periods (it's not; it precesses 0.8° per cycle), the resulting pattern after one full cycle (1944 years) is
Which is beautiful. The parametric formulas I used to plot these beauties are
Where t is time in years, r is the ratio of orbital periods (less than one), and τ = 2π is the circle constant.
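As a concrete sketch of how such a curve can be generated, here is a minimal Python version assuming circular, coplanar orbits; the orbital radii (1 AU for Earth, 0.723 AU for Venus) are my added assumption and are not part of the formulas above.

```python
from math import tau, cos, sin, hypot

R_VENUS = 0.723  # Venus' orbital radius in AU (assumed circular orbit)

def venus_geocentric(t, r):
    """Position of Venus as seen from Earth at time t (in years),
    where r is the ratio of orbital periods (Venus/Earth, < 1)."""
    ex, ey = cos(tau * t), sin(tau * t)                              # Earth, heliocentric
    vx, vy = R_VENUS * cos(tau * t / r), R_VENUS * sin(tau * t / r)  # Venus, heliocentric
    return vx - ex, vy - ey

# With the idealized 8:13 resonance, the curve closes after exactly
# 8 Earth years, tracing the five-lobed pentagram.
r = 8 / 13
points = [venus_geocentric(0.001 * k, r) for k in range(8001)]  # 0 to 8 years
closure = hypot(points[-1][0] - points[0][0], points[-1][1] - points[0][1])
```

Feeding `points` to any plotting routine traces the pentagram; swapping in the more accurate r = 243/395 makes the curve precess instead of closing.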
Should the Earth-Moon system be considered a binary planet? This sounds outlandish at first, since the Moon is a moon, obviously. It orbits the Earth as a natural satellite, just as the Galilean moons (Ganymede, Callisto, Io, and Europa) orbit Jupiter, Titan orbits Saturn, Triton orbits Neptune, and so on, right?
The definition of a moon is vague, and thus there are multiple ways of determining whether or not a planet-moon system is really a binary planet. One way of drawing the line between the two descriptions is by finding the barycenter (or center-of-mass) of the system. The center of mass of a collection of N masses is given by
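In symbols, this is

```latex
\vec{R} = \frac{1}{M} \sum_{i=1}^{N} m_i \vec{r}_i
```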
where M is the total mass of the system, and mi and ri are the mass and position of the ith object, respectively. If the center of mass of a two-body system lies outside the larger object in that system, call it a binary planet. This makes sense, right? This means that the smaller body doesn't orbit the larger body, but instead they both orbit some point in space. For instance, the barycenter of the Pluto-Charon system lies outside Pluto (0.83 Pluto radii above Pluto's surface), the larger of the two bodies, while the Earth-Moon barycenter lies within the Earth (just under 3/4 of an Earth radius from the planet's center). By this definition, the Pluto-Charon system is a binary (dwarf) planet system, while the Earth-Moon system is a planet-moon system. (Although, we are slowly losing our moon due to tidal acceleration. In a few billion years, the Moon will have drifted far enough away that the barycenter of the Earth-Moon system will leave the interior of our planet.) However, when you plug in values for the Sun-Jupiter system, you find that the center of mass lies outside the Sun! Indeed, Jupiter is the only natural satellite of the Sun for which this is true. (Does this mean Jupiter should have a different classification from the rest of the planets? Not really; the Sun is around 1000 times more massive than Jupiter, so the reason for this is that Jupiter is very distant from the Sun.)
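As a quick check of these numbers, here is a small Python sketch; the masses, separations, and radii below are approximate published values (my inputs, not the post's).

```python
# For two bodies, the barycenter sits at distance
# d * m_small / (m_large + m_small) from the larger body's center.

def barycenter_offset(m_large, m_small, separation):
    """Distance of the center of mass from the larger body's center."""
    return separation * m_small / (m_large + m_small)

# Earth-Moon: masses in kg, separation in km
earth_moon = barycenter_offset(5.972e24, 7.342e22, 384_400)
EARTH_RADIUS = 6371  # km

# Pluto-Charon
pluto_charon = barycenter_offset(1.303e22, 1.586e21, 19_600)
PLUTO_RADIUS = 1188  # km

print(f"Earth-Moon barycenter:   {earth_moon:.0f} km from Earth's center")
print(f"Pluto-Charon barycenter: {pluto_charon:.0f} km from Pluto's center")
```

The Earth-Moon barycenter comes out well inside the Earth, while the Pluto-Charon barycenter lands well above Pluto's surface, matching the classification above.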
Maybe a different definition is needed to distinguish planet-moons from binary planets, then, since the Sun-Jupiter system is not a binary star (Jupiter is far too small to sustain nuclear fusion). Another proposition is to look at the so-called tug-of-war value of a body. The tug-of-war value of a moon determines which Solar System object has a stronger gravitational hold, the Sun or the moon's "primary" (the Earth is the Moon's primary). Using Newton's law of gravitation
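which, for two masses m₁ and m₂ a distance d apart, reads

```latex
F = \frac{G m_1 m_2}{d^2}
```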
we can take the ratio of the primary's pull on a satellite to the Sun's pull. The result is the tug-of-war value, proposed by Isaac Asimov.
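Written out, with the notation defined next, the value is

```latex
\text{tug-of-war value} = \frac{F_p}{F_s} = \frac{m_p}{m_s} \left( \frac{d_s}{d_p} \right)^{2}
```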
Here the subscripts s and p refer to the Sun and the primary, respectively; m is the mass of the body referred to by the subscript; and d is the distance between the moon and the body referred to by the subscript. If the tug-of-war value is larger than 1, then the primary has a larger hold on the moon than the Sun, whereas if it's less than 1, the Sun's gravity dominates. For the Earth-Moon system, it turns out this number is 0.46, which means that the Sun pulls on the Moon with more than twice the force of Earth's pull. This is an oddity among moons, but is not unique. It does mean, though, that the Moon, when viewed from the Sun, never undergoes retrograde motion; it moves across the solar sky without changing direction. Another way to put this is that the Moon is always falling toward the Sun (like the planets), and never in its orbit does it fall away from the Sun (unlike most moons). If you look at the orbits of the Earth and Moon from the point of view of the Sun, they dance around each other in careful step, which is unlike most other moons in the Solar System. For Asimov, this was reason enough to consider the Earth and Moon as a binary planet system.
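The calculation is short enough to sketch in Python; the masses and mean distances below are approximate published values (my inputs).

```python
# Asimov's tug-of-war value: the ratio of a primary's gravitational pull
# on its moon to the Sun's pull on that same moon.

M_SUN = 1.989e30  # kg

def tug_of_war(m_primary, d_primary, d_sun):
    """(m_p / m_s) * (d_s / d_p)^2 for a moon at distance d_primary (m)
    from its primary and d_sun (m) from the Sun."""
    return (m_primary / M_SUN) * (d_sun / d_primary) ** 2

moon_value = tug_of_war(5.972e24, 3.844e8, 1.496e11)    # Earth's Moon
charon_value = tug_of_war(1.303e22, 1.96e7, 5.906e12)   # Pluto's Charon

print(f"Moon:   {moon_value:.2f}")    # < 1: the Sun's pull on the Moon wins
print(f"Charon: {charon_value:.0f}")  # >> 1: Pluto easily wins
```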
This tug-of-war value does not, however, classify Pluto and Charon as a binary dwarf planet system (they're too far from the Sun for their tug-of-war value to be less than 1). Perhaps the definition of a binary planet is a difficult one to pin down.
Should the Moon be promoted to planet, just as Pluto was renamed as a dwarf planet? I don't know, but it gives us something to think about as we look up at the starry night, watching the dance of all the chunks of rock and gas hurtling through space in our sky, to music written by nature and heard through science.
Chaos is complexity that arises from simplicity. Put in a clearer way, it's when a deterministic process leads to complex results that seem unpredictable. The difference between chaos and randomness is that chaos is determined by a set of rules/equations, while randomness is not deterministic. Everyday applications of chaos include weather, the stock market, and cryptography. Chaos is why everyone (including identical twins, who have the same DNA) has different fingerprints. And it's beautiful.
How does simplicity lead to complexity? Let's take, for instance, the physical situation of a pendulum. The equation that describes the motion of a pendulum is
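Written out, it is

```latex
\frac{d^2\theta}{dt^2} = -\frac{g}{l}\sin\theta
```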
where θ is the angle the pendulum makes with the imaginary line perpendicular to the ground, l is the length of the pendulum, and g is the acceleration due to gravity. This leads to an oscillatory motion; for small angles, the solution of this equation can be approximated as
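One such small-angle solution (for a pendulum passing through vertical at t = 0) is

```latex
\theta(t) \approx A \sin\!\left(\sqrt{\frac{g}{l}}\, t\right)
```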
where A is the amplitude of the swing (in radians). Very predictable. But what happens when we make a double pendulum, where we attach a pendulum to the bottom of the first pendulum?
Can you predict whether the bottom pendulum will flip over the top? (Credit: Wikimedia Commons)
It's very hard to predict when the outer pendulum flips over the inner pendulum mass; however, the process is entirely determined by a set of equations governed by the laws of physics. And, depending on the initial angles of the two pendula, the motion will look completely different. This is how complexity derives from simplicity.
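To see this sensitivity numerically, here is a minimal sketch that integrates the standard equations of motion for a double pendulum with equal (unit) masses and arm lengths, then nudges the starting angle by one part in 10⁸; the initial conditions, step size, and run length are my choices for illustration.

```python
from math import sin, cos, sqrt

G = 9.81  # m/s^2

def derivs(state):
    """Standard double-pendulum equations of motion (m1 = m2 = l1 = l2 = 1)."""
    t1, w1, t2, w2 = state  # angles and angular velocities
    d = t1 - t2
    den = 3.0 - cos(2 * d)
    a1 = (-3 * G * sin(t1) - G * sin(t1 - 2 * t2)
          - 2 * sin(d) * (w2 * w2 + w1 * w1 * cos(d))) / den
    a2 = (2 * sin(d) * (2 * w1 * w1 + 2 * G * cos(t1) + w2 * w2 * cos(d))) / den
    return (w1, a1, w2, a2)

def rk4_step(state, dt):
    """One fourth-order Runge-Kutta step."""
    k1 = derivs(state)
    k2 = derivs(tuple(s + 0.5 * dt * k for s, k in zip(state, k1)))
    k3 = derivs(tuple(s + 0.5 * dt * k for s, k in zip(state, k2)))
    k4 = derivs(tuple(s + dt * k for s, k in zip(state, k3)))
    return tuple(s + dt / 6 * (p + 2 * q + 2 * r + u)
                 for s, p, q, r, u in zip(state, k1, k2, k3, k4))

def run(state, steps=10000, dt=0.001):
    for _ in range(steps):
        state = rk4_step(state, dt)
    return state

a = run((2.0, 0.0, 2.0, 0.0))             # an energetic start (both arms raised)
b = run((2.0 + 1e-8, 0.0, 2.0, 0.0))      # same start, nudged by 1e-8 rad
c = run((2.0, 0.0, 2.0, 0.0))             # identical start: identical outcome
separation = sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
```

The rerun `c` lands exactly on `a` (determinism), while the nudged run `b` ends up with a state separation many orders of magnitude larger than the initial 10⁻⁸ nudge (chaos).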
Another example of beautiful chaos is fractals. Fractals are structures that exhibit self-similarity, are determined by a simple set of rules, and have infinite complexity. An example of a fractal is the Sierpinski triangle.
Triforce-ception! (Image: Wikipedia)
The rule is simple: start with a triangle, then divide that triangle into four equal triangles. Remove the middle one. Repeat with the new solid triangles you produced. The true fractal is the limit when the number of iterations reaches infinity. Self-similarity happens as you zoom into any corner of the triangle; each corner is a smaller version of the whole (since the iterations continue infinitely). Fractals crop up everywhere, from the shapes of coastlines to plants to frost crystal formation. Basically, they're everywhere, and they're often very cool and beautiful.
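A short sketch makes the "simple rule, infinite structure" point concrete: the Sierpinski pattern also shows up as the odd entries of Pascal's triangle, which can be generated row by row with a single XOR rule (this route is my choice of illustration, not the post's).

```python
# Parities of Pascal's triangle: each row follows from the previous by XOR,
# and the odd entries trace out the Sierpinski triangle.

def parity_rows(n_rows):
    rows, row = [], [1]
    for _ in range(n_rows):
        rows.append(row)
        row = [1] + [a ^ b for a, b in zip(row, row[1:])] + [1]
    return rows

rows = parity_rows(32)
for r in rows[:16]:  # print a small version of the triangle
    print(" ".join("*" if v else " " for v in r).center(31))

# Self-similarity, numerically: the first 2^k rows contain 3^k odd entries,
# matching "each iteration keeps 3 of the 4 sub-triangles".
odd_count = sum(sum(r) for r in rows)
```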
Chaos is also used in practical applications, such as encryption. Since chaos is hard to predict unless you know the exact initial conditions of the chaotic process, a chaotic encryption scheme can be made public; only the initial conditions need to be kept secret. One example of a chaotic map used to disguise data is the cat map. Each iteration is a simple matrix transformation of the pixels of an image. It's completely deterministic, but it jumbles the image to make it look like garbage. In practice, this map is periodic, so as long as you apply the map repeatedly, you will eventually get the original image back. Another application of chaos is pseudorandom number generators (PRNGs), where a hard-to-predict initial value is manipulated chaotically to generate a "random" number. If you can manipulate the initial input values, you can predict the outcome of the PRNG. In the case of the Pokémon games, the PRNGs have been examined so thoroughly that, using a couple of programs, you can reliably capture or breed Pokémon with shininess or perfect stats.
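Here is a minimal sketch of such a map, Arnold's cat map, acting on an N × N grid of pixels; the tiny grid sizes are my choice for illustration.

```python
# Arnold's cat map as a pixel shuffle on an N x N image:
# (x, y) -> (x + y, x + 2y) mod N. Deterministic, scrambling, and periodic.

def cat_map(pixels, n):
    """Apply one iteration; `pixels` maps (x, y) -> pixel value."""
    return {((x + y) % n, (x + 2 * y) % n): v for (x, y), v in pixels.items()}

def period(n):
    """Number of iterations until every pixel returns home."""
    start = {(x, y): (x, y) for x in range(n) for y in range(n)}
    img, k = cat_map(start, n), 1
    while img != start:
        img, k = cat_map(img, n), k + 1
    return k

print(period(5))  # → 10: a 5x5 "image" unscrambles itself after 10 steps
```

In between those 10 steps the pixel positions look thoroughly jumbled, which is exactly the behavior described above.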
Dat shiny Rayquaza in a Luxury ball, tho.
So that's the beauty of chaos. Next time you look at a bare tree toward the end of autumn or lightning in a thunderstorm, just remember that the seemingly unpredictable branches and forks are created by simple rules of nature, and bask in its complex beauty.
Where T is the absolute temperature (e.g. Kelvin scale), m is the mass of the particles making up the gas, and k is Boltzmann's constant. But this is a specific case. In general, we need a more encompassing definition. In thermodynamics, there is a quantity known as entropy, which basically quantifies the disorder of a system. It is related to the number of ways to arrange the elements of a system without changing the energy.
For instance, there are a lot of ways of having a messy room. You can have clothes on the floor, you can track mud into it, you can leave dishes and food everywhere. But there are very few ways to have an immaculately clean room, where everything is tidy and put in its proper place. Thus, the messy room has a larger entropy, while the clean room has very low entropy. It is this quantity that helps to define temperature generally. Denoting entropy as S, we have that
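in terms of the energy E (at fixed volume V) or the enthalpy H (at fixed pressure P):

```latex
T = \left( \frac{\partial E}{\partial S} \right)_{V} = \left( \frac{\partial H}{\partial S} \right)_{P}
```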
Or, in words, temperature is defined as the change in energy divided by the change in entropy of something when its volume remains fixed, which is equivalent to the change in enthalpy (heat) divided by the change in entropy at constant pressure. Thus, if you increase the energy of an object and find that it becomes more disordered, the temperature is positive. This is what we are used to. When you heat up air, it becomes more disorderly because the particles making it up are moving faster and more randomly, so it makes sense that the temperature must be positive. If you cool air, the particles making it up slow down and it tends to become more orderly, so the temperature is still positive, but decreasing. What happens when you can't pull any more energy out of the air? That means the temperature has gone to zero and movement has stopped. Since the movement has stopped, the gas must be in a very ordered state, and the entropy isn't changing. This state, in which all motion of the gas particles has ceased, is called absolute zero.
It is impossible to reach absolute zero temperature, but it isn't intuitive as to why at first. The main reason is due to quantum mechanics. If all atomic motion of an object stopped, its momentum would be known exactly, and this violates the Uncertainty Principle. But there is also another reason. In thermodynamics, there is a quantity related to temperature that is defined as
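in terms of Boltzmann's constant k:

```latex
\beta = \frac{1}{kT}
```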
Since k is just a constant, β can be thought of as inverse temperature. This sends absolute zero to β being infinity! Now, this makes much more sense as to why achieving absolute zero is impossible – it means we have to make a quantity go to infinity! It turns out that β is the more fundamental quantity to deal with in thermodynamics because of this role (and others).
Now, you're probably thinking, "Akano, that's all well and good, but, are you saying that this means that you can get to infinite temperature?" In actuality, you can, but you need a special system to be able to do it. To get temperature to infinity, you need β to go to zero. How do we do that? Well, once you cross zero, you end up with a negative quantity, so if we could somehow get a negative temperature, then we would have to cross β equals zero. But how do we get a negative temperature, and what would that be like? Well, we would need entropy to decrease when energy is added to our system.
It turns out that an ensemble of magnets in an external magnetic field would do the trick. See, when a compass is placed in a magnetic field, it wants to align with the field (call that direction north). But if I put some energy into the system (i.e. I push the needle), I can get the needle of the compass to point in the opposite direction (south). When less than half of the compasses are pointing opposite the external field, each time I flip a compass needle I'm increasing entropy (since the perfect order of all the compasses pointing north has been tampered with). But once more than half of those compasses are pointing south, I am decreasing the disorder of the system when I flip another magnet south! This means that the temperature must be negative! In practice, the compasses are actually molecules with a magnetic dipole moment or electrons with a certain spin (which act like tiny magnets), but the same principles apply. So β equals zero when exactly half of the compasses point north and the other half point south; that point corresponds to infinite T, and it is there that the sign of T swaps.
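This flip in the sign of dS/dE can be seen directly by counting arrangements. The sketch below uses N = 100 two-state "compasses" (an arbitrary size I chose), with entropy S/k = ln Ω, where Ω is the number of ways to have n compasses pointing south.

```python
from math import comb, log

# N two-state "compasses" in a field; n of them point south (higher energy).
# Adding energy flips one more compass south, so n plays the role of energy.
N = 100
entropy = [log(comb(N, n)) for n in range(N + 1)]  # S/k = ln(arrangements)

# Entropy rises with energy (positive T) while fewer than half point south...
rising = all(entropy[n] < entropy[n + 1] for n in range(N // 2))
# ...and falls with energy (negative T!) once more than half point south.
falling = all(entropy[n] > entropy[n + 1] for n in range(N // 2, N))
```

The peak sits exactly at the half-and-half point, where β crosses zero.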
It's interesting to note that negative temperatures are actually hotter than any positive temperature, since you have to add energy to get to negative temperature. One could define a quantity as −β, so that plotting it on a line would be a more intuitive way to see that the smaller the quantity, the colder the object is, while preserving the infinities of absolute zero and "absolute hot."
This mass is known as the inertial mass. The larger an object's inertial mass, the more it resists being accelerated by a given force. The second definition of mass also comes from Newton, but it is instead determined by his law of gravitation.
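For two masses m₁ and m₂ separated by a distance r, that law reads

```latex
F = \frac{G m_1 m_2}{r^2}
```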
The mass here determines how much two massive objects attract one another; this is known as the gravitational mass. But here's the interesting thing about these two masses: there is no law of physics that says these masses are one and the same. Such a notion is known in physics as the equivalence principle. The weak equivalence principle was discovered by Galileo; he noticed that objects with different masses fall at the same rate. Einstein came up with the strong equivalence principle, which discusses how a uniform force and a gravitational field are indistinguishable when you look at a small enough portion of spacetime. The only reason we believe these two masses are equivalent is because experiments show that they are equal to within the precision of the instruments with which we measure them, and there are ongoing experiments trying to narrow down that precision to determine if there is any difference between the two.
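The position-momentum uncertainty relation states

```latex
\Delta x \, \Delta p \geq \frac{\hbar}{2}
```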
What this says is that the product of the uncertainty of a measurement of a particle's position multiplied by the uncertainty of a measurement of a particle's momentum has to be greater than a constant (given by the reduced Planck constant, ħ = h/τ, with τ = 2π). This has nothing to do with the tools with which we measure particles; this is a fundamental statement about the way our universe behaves. Fortunately, this uncertainty product is very small, since ħ is around 1.05457 × 10⁻³⁴ J s. The real question to ask is, "Why do particles have this uncertainty associated with them in the first place? Where does it come from?" Interestingly, it comes from wave theory.
Take the two waves above. The one on top is very localized, meaning its position is well-defined. But what is its wavelength? For photons, wavelength determines momentum, so here we see a localized wave doesn't really have a well-defined wavelength, thus an ill-defined momentum. In fact, the wavelength of this pulse is smeared over a continuous spectrum of momenta (much like how the "color" of white light is smeared over the colors of the rainbow). The second wave has a pretty well-defined wavelength, but where is it? It's not really localized, so you could say it lies smeared over a set of points, but it isn't really in one place. This is the heart of the uncertainty principle. Because waves exhibit this phenomenon – and quantum particles behave like waves – quantum particles also have an uncertainty principle associated with them.
However, this is arguably not the most bizarre thing about the uncertainty principle. There is another facet of the uncertainty principle that says that the shorter the lifetime of a particle (how long the particle exists before it decays), the less you can know about its energy. Since mass and energy are equivalent via Einstein's E = mc², this means that particles that "live" for very short times don't have a well-defined mass. It also means that, if you pulse a laser over a short enough time, the light that comes out will not have a well-defined energy, which means that it will have a spread of colors (our eyes can't see this spread, of course, but it matters a great deal when you want to use very precise wavelengths of light and short pulses at the same time in an experiment). In my lab, we use this so-called "energy-time" uncertainty to determine whether certain configurations of the hydrogen molecule, H₂, are long-lived or short-lived; the longer-lived states have thinner spectral lines, and the short-lived states have wider spectral lines.
So while we can't simultaneously measure the position and momentum of a particle to arbitrary certainty, we can definitely still use the uncertainty principle to glean information about the world of the very, very small.
The triangular numbers are the numbers of objects one can use to form an equilateral triangle.
Anyone up for billiards? Or bowling? (Image: Wikimedia Commons)
Pretty straightforward, right? To get the number, we just add up the total number of things, which is equal to adding up the number of objects in each row. For a triangle with n rows, this is equivalent to
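the sum of the first n positive integers:

```latex
T_n = 1 + 2 + 3 + \cdots + n = \sum_{k=1}^{n} k
```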
This means that the triangular numbers are just sums from 1 to some number n. This gives us a good definition, but is rather impractical for a quick calculation. How do we get a nice, shorthand formula? Well, let's first add sequential triangular numbers together. If we add the first two triangular numbers together, we get 1 + 3 = 4. The next two triangular numbers are 3 + 6 = 9. The next pair is 6 + 10 = 16. Do you see the pattern? These sums are all square numbers. We can see this visually using our triangles of objects.
(Image: Wikimedia Commons)
You can do this for any two sequential triangular numbers. This gives us the formula
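relating consecutive triangular numbers to the squares:

```latex
T_{n-1} + T_n = n^2
```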
We also know that two sequential triangular numbers differ by a new row, or n. Using this information, we get that
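combining the two relations:

```latex
T_n - T_{n-1} = n
\;\Longrightarrow\;
2T_n = n^2 + n
\;\Longrightarrow\;
T_n = \frac{n(n+1)}{2} = \binom{n+1}{2}
```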
Now we finally have an equation to quickly calculate any triangular number. The far right of the final line is known as a binomial coefficient, read "n plus one choose two." It is defined as the number of ways to pick two objects out of a group of n + 1 objects.
For example, what is the 100th triangular number? Well, we just plug in n = 100.
T₁₀₀ = (100)(101)/2 = 10100/2 = 5050
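A few lines of Python (my sketch) confirm both the closed form and the square-number identity:

```python
# nth triangular number via the closed form n(n+1)/2.

def triangular(n):
    return n * (n + 1) // 2

t100 = triangular(100)
brute = sum(range(1, 101))  # the slow way, for comparison
squares_ok = all(triangular(n - 1) + triangular(n) == n * n
                 for n in range(1, 200))
print(t100)  # → 5050
```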
We just summed up all the numbers from 1 to 100 without breaking a sweat. You may be thinking, "Well, that's cool and all, but are there any applications of this?" Well, yes, there are. The triangular numbers give us a way of figuring out how many elements are in each row of the periodic table. Each row is determined by what is called the principal quantum number, n, which can be any integer from 1 to infinity. The energy level corresponding to n has n angular momentum values ℓ (from 0 to n − 1) which the electron can possess, each of these angular momentum quanta has 2ℓ + 1 orbitals for an electron to inhabit, and two electrons can inhabit a given orbital. Summing up all the places an electron can be for a given n involves summing up all these possible orbitals, which takes on the form of a triangular number.
The end result of this calculation is that there are n² orbitals for a given n, and two electrons can occupy each orbital; this leads to the periodic table having 2⌈(n+1)/2⌉² elements in the nth row, where ⌈x⌉ is the ceiling function. They also crop up in quantum mechanics again in the quantization of angular momentum for a spherically symmetric potential (a potential that is determined only by the distance between two objects). The total angular momentum for such a particle is given by
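where ℓ is the angular momentum quantum number; note that ℓ(ℓ+1)/2 is exactly the ℓth triangular number:

```latex
L^2 = \hbar^2 \, \ell(\ell+1) = 2\hbar^2 \, T_\ell
```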
What I find fascinating is that this connection is almost never mentioned in physics courses on quantum mechanics, and I find that kind of sad. The mathematical significance of the triangular numbers in quantum mechanics is, at the very least, cute, and I wish it would just be mentioned in passing for those of us who enjoy these little hidden mathematical gems.
There are more cool properties of triangular numbers, which I encourage you to read about, and other so-called "figurate numbers," like hexagonal numbers, tetrahedral numbers, pyramidal numbers, and so on, which have really cool properties as well.