Welcome to Scientia Pro Publica, 32nd edition!
This round, it seems that the name of the game is biology. Submissions really ran the gamut within biology, including some delightfully from-left-field posts. Bryan Perkins has some fun with embryological development, and Amanda Morti shows us why we have actinomycete bacteria to thank for that fresh rain smell (and for throwing off Latinate intuition - anyone else read "actinomycete" and think muscley whale?). Speaking of whales, David "WhySharksMatter" Shiffman does a great bit of ResearchBlogging and reminds us why not all fish (fine, non-sarcopterygian gnathostome) stocks are created equal - sandbar sharks (Carcharhinus plumbeus) are closer in population dynamics to bowhead whales and other balaenids than to cod.
That wasn't the only bit of ResearchBlogging this time around (Hey! ResearchBlogging! Stop hating my feeds!). Kelsey has a great post about the intraspecific male competition among red-eyed treefrogs. Sure, they amplex for dear life, but what about before that? Turns out that ... they shake their butt (Does that mean that the frog Sir Mix-a-Lot is a lady?). Madhu R-Blogs over at Reconciliation Ecology takes the opportunity to do a great smackdown on a pet peeve of mine — evolution is not a "ladder" or any such silliness. It is blind and targetless. It was a statement from a Stanfurd professor, though, so what can you expect (Go Bears!)? But before we get ranty, Luigi diverts our attention from critterland and the rivalries of my alma mater to teach us about why Kibale's Wild Coffee Project didn't get off the ground, concluding that scientists, once again, just can't do "messaging".
Illustration: Peter Trusler for Wildlife of Gondwana/NOVA (PBS). From Grrlscientist's post.
Thonoir continues to take us down our diversion away from Critterland, showcasing two sets of endangered non-metazoans, and my total ignorance of plant/photosynthesizer phylogenies. We don't stray from Critterland for long, though, as John at Kind of Curious details a very interesting ponderous borer. Emily talks about sensationalism, mountain lions, and that Fox, even as the lions get extirpated from areas densely populated by a certain primate. Which, as Amanda points out, is no good thing, and there are difficulties restoring predators to ecosystems they have been extirpated from (trust me, a one-sentence synopsis does not do that entry justice). The great Grrlscientist brings us some aboriginal rock art possibly depicting Genyornis newtoni (Dromornithidae, incomplete phylogeny link (Anseriformes)). These are both the oldest paintings in Australia, at 40,000 years (predating the earliest European cave paintings), and depict something that can be loosely imagined as an ostrich-sized duck, which simply can't fail to be awesome.
Now, we don't end here. Oh no. That was just organismal biology and evolution. How about a dose of medicine? Michelle Dawson is better than Mary Poppins, because her post about circadian rhythms certainly doesn't need any sugar for you to take it down (and it introduces you to an interesting side effect of autism-spectrum disorder). Scientific Chick writes about cell phones improving mental performance in Alzheimered mice. Meanwhile, Wendy at Bioloser gives us the physiological background of shock, and a shocking description of shock in a man nearly severed in half.
The larger constructs of medicine were not neglected, either. Bradley Kreit discusses the fact that we need to accept our intellectual limits, Luke examines the crazy in large groups by looking at HIV denialism, and Ryan looks at child mortality.
Bisected men and child mortality? Let's get a bit more lighthearted. Jessica Drake at Soilduck ponders what makes a scientist a scientist, and Romeo Vitelli tells us how subliminal messaging was an advertising gimmick (how many levels of fake-out is that?). Adam Park redeems some sci-fi stories with various predictions made therein that have come true today. Of course, Asimov gets a mention for the mention of pocket calculators in Foundation, but Asimov also nailed our reliance on them as time went on in The Feeling of Power (psh, arithmetic).
BP, the Gulf, and the utter dismaying farce of the spill have been in the news, and oil makes its showing in Scientia this time around. Scienceguy238 gives us a history leading up to the spill, and Grrlscientist looks at the ethics involved with oiled seabirds. Jeremy at The Voltage Gate writes about how the Saudi coast has recovered, 20 years later, from the 11-million-barrel (1.2 GL, or 1.2×10⁶ m³) spill. A decade afterwards, 1 million cubic meters still persisted. Every spill is different, though, so hopefully ours won't be as bad.
You know you love it.
Finally, we round out with the physical sciences, which didn't get much love this time around. Lab Rat talks about bacteria and climate change, while Matt Wills talks about the more metaphorical breathing Earth, Charles Lyell, and mollusk damage in Greek columns. Finally, Sarah Kavassalis gives a great article on one of my favorite subjects: special relativity, astronomical distances, and the meaning of "now". After all, if a star (super)nova's in the distance, but you don't see (E&M) or feel (gravity) it, has it gone? She even does it without the inevitable jargoning I'd go into!
That does it for this round of Scientia Pro Publica! This was my first blog carnival, so I more than welcome suggestions. Hope you all enjoyed it!
If you want to learn more about this carnival, head over to the carnival's website. Be sure to check out the next round hosted by Andrew over at Southern Fried Science, on June 21st. And remember — this is a blog carnival! Submissions and hosts are wanted! If you're interested in hosting, check out the current schedule on the official schedule thread and drop Grrlscientist a line (or leave a note in the comments). If you find a cool article, submit it! Send a link via this submission form. Thanks again all!
So, I should be asleep. Or posting one of the FOUR Tuesday Tetrapods due today (plans include Python molurus bivittatus, Pelecanus onocrotalus, Gypaetus barbatus, and Xenopus laevis. I've had them lined up for weeks, but haven't had a chance to complete them). But instead, I have to mention that the Large Hadron Collider just had its first collisions, minutes ago. The collisions were at a total energy of 7 TeV.
7 TeV is an odd quantity that needs some perspective for the non-physicists in the crowd. So consider, according to Wikipedia, that a flying mosquito has about 0.2 microjoules (µJ) of kinetic energy. In a 7 TeV collision, each proton carries about 0.6 µJ (3.5 TeV in each direction).
That is to say, each proton has a kinetic energy similar to three flying mosquitoes. If you were capable of feeling a single proton hitting you, that one lone proton would definitely be noticeable. For the record, a mosquito has about 10²¹ protons.
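For the curious, the mosquito comparison is a two-line conversion. The eV-to-joule factor and the ~0.2 µJ mosquito figure are the assumed inputs:

```python
# Back-of-the-envelope check of the proton-vs-mosquito comparison above.
EV = 1.602e-19           # joules per electron-volt
beam_energy_ev = 3.5e12  # 3.5 TeV per proton, per beam

proton_ke_uj = beam_energy_ev * EV * 1e6  # proton kinetic energy in microjoules
mosquito_ke_uj = 0.2                      # flying mosquito, ~0.2 uJ (per Wikipedia)

print(round(proton_ke_uj, 2))                   # ~0.56 uJ per proton
print(round(proton_ke_uj / mosquito_ke_uj, 1))  # ~2.8 mosquitoes
```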
Physics is freaking awesome.
(Oh, by the way? World? Totally not destroyed.)
I thought I'd share an interesting back-of-the-envelope approximation of the Bohr radius I played with today, which ends up being pretty damn accurate!
The Bohr radius is essentially a classical approximation to the radius of the hydrogen atom. Let's take a semiclassical path there and see what we get. We will assume the following:
- The total energy is approximately the ionization energy plus the rest energy of the electron.
- The minimum stable orbit is a single standing wave.
- The momentum of the electron is entirely described by Einstein's relation E² = p²c² + m²c⁴.
We can then say (using four significant digits for the fundamental charge and the mass of the electron; quantities from Wikipedia) that E = mₑc² + E_ion, so p = √(E² − mₑ²c⁴)/c, where c = 299 792 458 m/s (by definition). This gives us a momentum of ~1.99×10⁻²⁴ N·s. Running this through the de Broglie relation λ = h/p:
This gives us a wavelength of about 332.6 pm. Assuming you get one complete wavelength in the ground energy level of hydrogen, divide out by 2π to get a Bohr radius of 5.29×10⁻¹¹ m.
Now, the actual value is about 53 pm, or 5.29×10⁻¹¹ m. So, basically dead-on! Error in the trailing digits (it shows up in the next decimal place) can be attributed to rounding in the values of the electron mass and Planck's constant. Oddly enough, the fine-structure constant isn't needed here. Not quite sure why (probably this odd semi-classical calculation), but hey, you can now evaluate the Bohr radius quickly!
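The whole calculation fits in a few lines of Python. A minimal sketch, with rounded values of the constants (the 13.61 eV ionization energy and 4-sig-fig constants are the assumed inputs):

```python
# Semiclassical Bohr-radius estimate: total energy = rest energy + ionization
# energy, momentum from E^2 = p^2 c^2 + m^2 c^4, one standing de Broglie wave.
from math import pi, sqrt

c = 299792458.0            # m/s, exact by definition
h = 6.626e-34              # J s, Planck's constant
m_e = 9.109e-31            # kg, electron mass
E_ion = 13.61 * 1.602e-19  # J, hydrogen ionization energy

E = m_e * c**2 + E_ion                 # total energy
p = sqrt(E**2 - (m_e * c**2)**2) / c   # relativistic momentum
lam = h / p                            # de Broglie wavelength
r = lam / (2 * pi)                     # one full wavelength around the orbit

print(p)   # ~1.99e-24 N s
print(r)   # ~5.29e-11 m
```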
Update: Kit talks about the divergence of this model from a fully relativistic theory, incorporating the "gamma factor" γ = 1/√(1 − β²).
Alcubierre metric. (CC-BY-SA) Allen McC.
Slashdot brings up another resurgence of Warp Drive in the media. While it'd be amazing, it's still at the theoretically-interesting stage: the theory is sound, but it requires a staggering amount of energy. Where earlier estimates called for energies comparable to that of the entire universe, it now seems to need only about 10⁴⁵ J. This is in the ballpark of 5.9 Jovian masses. So, better, but still pretty impractical.
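The Jovian-mass figure is just E = mc² run backwards. A quick check, with Jupiter's mass (~1.899×10²⁷ kg) as the one assumed input:

```python
# Convert the quoted ~1e45 J energy requirement into Jupiter masses via E = m c^2.
c = 299792458.0        # m/s
E = 1e45               # J, quoted energy requirement
m = E / c**2           # equivalent mass, kg
jovian = m / 1.899e27  # in Jupiter masses
print(round(jovian, 1))  # ~5.9
```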
The figure at the right pretty cleanly describes the effect. In essence, spacetime is compressed and expanded around the ship so it can "surf" on a wave of expanding spacetime, which itself "moves" faster than c = 299792458 m·s⁻¹; the spaceship itself remains stationary and thus violates no physics. It is, in a fairly literal sense, a warp drive. It is perfectly valid for space to have superluminal expansion rates: distant objects in our Hubble volume recede from us apparently faster than light, and the inflationary epoch was defined by an incredible, superluminal spacetime expansion.
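That superluminal recession isn't exotic: with Hubble's law v = H₀d, the apparent recession speed passes c at a definite distance. A quick estimate, assuming a Hubble constant of ~70 km/s/Mpc:

```python
# Distance at which Hubble-law recession speed equals c: d = c / H0.
c = 299792.458  # km/s
H0 = 70.0       # km/s per Mpc (assumed value)
d = c / H0      # Mpc
print(round(d))  # ~4283 Mpc, roughly 14 billion light-years
```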
The whole thing is based on the Alcubierre metric, described by ds² = −c²dt² + (dx − vₛf(rₛ)dt)² + dy² + dz².
Alcubierre metric. Variable definitions as used by Alcubierre available at Wikipedia
In a very technical sense, this is not an exact solution to the Einstein Field Equations; rather, it is written in ADM form, adapted to a particular family of observers that are not actually physically distinguished from one another. As a Hamiltonian formulation it is not exactly a solution to the EFEs, but it is much easier to work with and is often used in practical and theoretical studies.
This was probably one of my "denser" posts, so for those of you that understood it, I give you an icon to use:
So, I'm sitting in the Berkeley BART, soon to head off to Pleasanton to go catch Star Trek with Jessica and some others (it occurs to me this is one of my few friends without a blog or site to link to!). Since I will have a lot of down time, and probably 3 hours of battery on my Eee PC, I decided I'd throw some "Physics of Star Trek" out there.
VOY: "Blink of an Eye"
Case: Time-Distorted Planet
Plausibility: Highly unlikely
While a nice episode with an interesting premise, the catch here that keeps it out of the realm of the plausible is the fact that time went faster for those on the planet, rather than for Voyager. General Relativity provides for various forms of time dilation, including gravitational dilation and other odd constructs that distort spacetime. However, all of these distortions slow your clock; "neutral" is flat, empty space. For your rate of passage through time to exceed that of flat space, your speed would have to be imaginary, so that when squared (i.e., when calculating your spacetime interval along a Minkowski metric) it pushes your rate of passage through time above unity (or c, depending on how you look at it).
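The "flat space is the fastest clock" claim can be made concrete with special relativity's proper-time rate, a minimal sketch:

```python
# Along a Minkowski metric, the proper-time rate d(tau)/dt = sqrt(1 - (v/c)^2)
# is at most 1 for any real speed; exceeding 1 would need (v/c)^2 < 0,
# i.e. an imaginary velocity.
from math import sqrt

def proper_time_rate(beta):
    """d(tau)/dt for speed v = beta * c."""
    return sqrt(1 - beta**2)

print(proper_time_rate(0.0))  # 1.0: flat, empty space, no dilation
print(proper_time_rate(0.9))  # ~0.44: fast travel slows your clock
```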
A quick little news snippet from Science: the ESA's new satellite, Planck, is due to launch on 5/14/09 and will take up the mantle of COBE and WMAP. However, in addition to improving measurements of the Cosmic Microwave Background, Planck may also provide direct evidence for inflationary theory.
The nuts and bolts of inflation say that, in the very early universe, an inversion of the Higgs field resulted in spacetime expanding at superluminal velocities before rapidly slowing down. This explains the flatness of space, the lack of magnetic monopoles, and, perhaps most importantly, the uniform temperature of space: parts of space that are no longer causally connected once were, and thus had time to reach thermal equilibrium before being carried apart.
As a side effect of inflationary theory, though, we expect to see B-mode polarization of the CMB (that is, the curl-like component of the polarization pattern, named by analogy with magnetic fields). To quote the article:
But the prize quarry for Planck researchers is the B modes. These features are swirls in the CMB polarization mapped across the sky, and spotting them would essentially clinch the case for the mind-bending theory of inflation.
Although inflation fits the facts so far, researchers do not yet have direct proof that it occurred. The B modes would provide that. Current theory predicts that inflation should have generated gravitational waves and that those waves should have left lingering swirls in the polarization of the CMB.
The polarization may not be strong enough for Planck to detect, but with luck, they will be — and 45 years after the discovery of the CMB, and 30 years after the proposal of inflation, we might finally have an answer.
Now, this is pretty awesome — tying together biology and astronomy in one fell swoop. It turns out that the ten most common (of 20) amino acids are substantially more thermodynamically favorable to form. Now, that's certainly got to take a bit of wind out of creationist sails.
An amino acid. R represents a functional group. Shamelessly hotlinked from Wikipedia
Of the twenty amino acids used in proteins, ten were formed in Miller's atmospheric discharge experiments. The two other major proposed sources of prebiotic amino acid synthesis include formation in hydrothermal vents and delivery to Earth via meteorites. We combine observational and experimental data of amino acid frequencies formed by these diverse mechanisms and show that, regardless of the source, these ten early amino acids can be ranked in order of decreasing abundance in prebiotic contexts. This order can be predicted by thermodynamics. The relative abundances of the early amino acids were most likely reflected in the composition of the first proteins at the time the genetic code originated. The remaining amino acids were incorporated into proteins after pathways for their biochemical synthesis evolved. This is consistent with theories of the evolution of the genetic code by stepwise addition of new amino acids. These are hints that key aspects of early biochemistry may be universal.
The results here hinge on the fact that the ranked amino acid frequencies under various criteria correlate strongly (r = 0.96) with ΔG_surf, where ΔG is the Gibbs free energy, which is defined as:
Source: Kittel & Kroemer 1980
Which is essentially the enthalpy of a system (loosely, the "thermodynamic potential energy") minus the fundamental entropy of a system
Source: Kittel & Kroemer 1980. g(N,U) is more commonly known as the "number of accessible microstates", and is sometimes denoted as W
multiplied by the fundamental temperature (the temperature in kelvin times Boltzmann's constant, 1.381×10⁻²³ J·K⁻¹). In essence, the sign of this value (and its magnitude) denotes how readily a given event occurs spontaneously. The lower the free energy, the less energy is needed to make the event occur; thus, something with a negative free energy change is spontaneous and releases energy upon its occurrence, such as dissolving NaOH in water. Other events take energy to occur, such as dissolving ammonium nitrate in water. Thus, in the first example, the beaker gets very hot, and in the second, it gets very cold. So, under the premise of the abiotic origin of life, one would expect the most "entrenched", or common, amino acids to be the easiest to produce. The results from the research support this conclusion:
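In symbols, that's ΔG = ΔH − TΔS, and the sign test above can be sketched with made-up round numbers (illustrative values only, not measured data):

```python
# Toy illustration of the spontaneity rule: Delta G = Delta H - T * Delta S.
def delta_g(dH, T, dS):
    """Gibbs free energy change (J/mol) for enthalpy change dH (J/mol),
    temperature T (K), and entropy change dS (J/mol/K)."""
    return dH - T * dS

# Exothermic, entropy-increasing dissolution: spontaneous (negative dG).
print(delta_g(-44_500, 298, 80) < 0)  # True
# Endothermic process that entropy can't rescue at 298 K: non-spontaneous.
print(delta_g(+25_000, 298, 10) > 0)  # True
```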
| Group | ΔG_surf (kJ/mol) | Err | MW (Da) | Err | ATP cost | Err |
Table of values for early and late group amino acids. All errors +/-.
Clearly, the "early group" amino acids, the ones most common in organisms and the simplest to form abiotically, have a formation advantage in terms of energy, size, and spontaneity over the other amino acids. This is further supported (with some caveats expressed in the paper) by the observation that the amino acid distributions were a bit off for hyperthermophilic bacteria — i.e., the ones living around underwater hydrothermal vents. With the higher energy densities available, differences in amino acid synthesis costs may be a reason for different amino acid preferences in high-expression proteins (though the authors are quick to point out this may merely be an artifact of high-temperature stability for proteins).
The remainder of the paper is also quite interesting, but requires a bit more knowledge of biochemistry than I'd like to assume for this blog. The authors touch on the diversity of amino acids, and why the diversity observed in nature is smaller than the maximum possible number.
Hmmm. A spurt of posting might be coming ...
Paul G. Higgs & Ralph E. Pudritz (2009). A thermodynamic basis for prebiotic amino acid synthesis and the nature of the first genetic code. Astrobiology. arXiv:0904.0402v1
OK, perhaps the title is a bit misleading. However, a Science paper published today (DOI: 10.1126/science.1167747) describes a method by which images of magnetic monopoles can be induced and measured, in full compliance with E&M. The trick? By taking advantage of the quantum Hall effect, you can construct a system whose boundary conditions break time-reversal (T) symmetry, allowing quantum mechanical topological effects.
By doing this around an insulating surface and bringing a charged particle near it, a magnetic monopole image is induced as a mirror to the electric charge. Whew. The method of images is a nice mathematical trick that can make solutions far simpler than they would normally be, by reducing a complex field to a set of "image charges".
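The image-charge trick is easiest to see in its classic electrostatics form (a hypothetical textbook setup for illustration, not the paper's topological insulator): a point charge q a height d above a grounded conducting plane behaves, above the plane, exactly like q plus a mirror image charge −q at depth −d.

```python
# Method of images: real charge q at (0, d), image charge -q at (0, -d).
from math import hypot

k = 8.99e9  # Coulomb constant, N m^2 / C^2
q = 1e-9    # C, the real charge
d = 0.1     # m, height above the grounded plane (z = 0)

def potential(x, z):
    """Potential of the real charge plus its image."""
    return k * q / hypot(x, z - d) + k * (-q) / hypot(x, z + d)

# The defining boundary condition: the grounded plane sits at zero potential.
print(abs(potential(0.5, 0.0)) < 1e-9)  # True
```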
Perhaps someone has noted that this appears to break Maxwell's equations, namely ∇·B = 0 (covariantly, ∂[μFνλ] = 0). According to the paper, the constraint is maintained by the following:
As we started with the Maxwell's equation, which includes [del] · B = 0, the magnetic flux integrated over a closed surface must vanish. We can check that this is the case by considering a closed surface—for example, a sphere with radius a—that encloses a topological insulator. The detailed calculation is presented in the supporting online material (17). Inside the closed surface, there is not only an image magnetic monopole charge, but also a line of magnetic charge density whose integral exactly cancels the point image magnetic monopole.
And thus everything ends up working out. Trippy. Even more to the point, this could be measured by a magnetic force microscope, with mathematical proof to show its contribution could be distinguished from other, more trivial effects. Perhaps not the most pertinent discovery ever, but still quite interesting to move the idea of a magnetic monopole out of pure speculation into something detectable.
So, while astro has had the pretty awesome result of imaging exoplanets, Maxwell's demon is a very old hypothetical designed to invalidate the Second Law of Thermodynamics. Science has a short article on a real-life demonstration of Maxwell's demon, via reflection of atoms off a light wall. Turns out the "answer" to the puzzle lies both in irreversible information erasure and in the fact that the entropic states of an atom and its emitted photon become unlinked as the photon travels freely away from the atom. Check out the article for a fairly readable analysis.
President-elect Obama: I was a Republican until this election, because the (nominal) fiscal policies of the party appealed to me. However, an increasing pandering to the religious right and a strong antiscience bent made me vote this past election day, for the first time, for a Democrat – yourself. I truly hope that the change you promise to bring to America is fulfilled, and I have high hopes that the scientific community will benefit under your administration. However, in hopes that you will truly read some of these letters, I wanted to write to urge you to consider one of John McCain's positions, and only one. I would like you to very strongly consider the expanded use of nuclear power in the United States.

One of the first things to understand about nuclear power (by which I refer to fission power unless otherwise mentioned) is that a fission reactor is, quite literally, the fourth most efficient mechanism of generating energy known to exist in the universe. The only more efficient ways of generating energy known to modern physics are fusion (1% mc^2, ten times more efficient than fission's 0.1% mc^2), throwing matter into a black hole and capturing its radiation (about 25% mc^2, though we know of no way to do this on Earth), and an antimatter reaction (100% mc^2). These numbers neglect reductions in efficiency due to heat transfer mechanisms. Needless to say, they dwarf conventional fuels, with a cubic foot of uranium containing the same fuel-energy as several million tons of coal or several million barrels of oil [ http://en.wikipedia.org/wiki/Fuel_efficiency#Energy_content_of_fuel ]. One kilogram of gasoline will generate about 50 MJ of energy; one kilogram of fusion fuel will generate about 630,000,000 MJ, some 12 million times more by mass.
For comparison to a renewable such as solar power, let us calculate the total energy influx from the sun (this is the theoretical maximum amount of energy that can be taken in by photovoltaics and wind). Covering every point of the US with the most advanced photocells would give about 500 TW (trillion watts) of power (at 50% efficiency), of which the US uses 3.5 TW. In practice, not all of the nearly 10 million square kilometers will be covered by photocells. To generate the US's current energy demands (day and night), we would need something like Connecticut entirely covered by photocells, receiving uninterrupted maximal sunlight for 12 hours per day and storing half of it for use at night. For comparison, about 3,500 of the newest reactor designs would accomplish the same goal at a small fraction of the area requirement, which shrinks further when we consider the renewable energy sources already in place and simple measures like solar cells on rooftops. Furthermore, an increase in funding for fission reactor technology, particularly breeder reactors, would grant us thousands of years of clean energy (approximately 40,000 years, since a breeder reactor uses U-238) and generate more stable end isotopes [ http://matse1.mse.uiuc.edu/energy/prin.html ]. It is important to realize that the fact that something is radioactive does not make it dangerous. Both the quantity and its activity on a biological time scale are important. Specifically, the half-life of the material needs to be comparable to a human life span to be dangerous. If it is not, only a very small fraction of the energy available through radioactive decay is released while anyone is exposed. That is, a radioactive material that releases most of its energy over two or three years is much, much more dangerous than one that releases the same amount of energy over several million years – because any person standing around the second one receives a very small fraction of the dose, which is harmless.
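The Connecticut ballpark can be reproduced in a couple of lines. Assumed inputs: 500 W/m² usable insolation, 50% cell efficiency, and Connecticut at roughly 14,000 km²:

```python
# Peak solar power from a Connecticut-sized array under the stated assumptions.
insolation = 500     # W/m^2 reaching the cells (assumed)
efficiency = 0.5     # advanced-photocell efficiency (assumed)
area = 14_000 * 1e6  # Connecticut, ~14,000 km^2 in m^2

peak_power = insolation * efficiency * area  # W, while the sun is up
print(peak_power / 1e12)  # ~3.5 TW at peak, matching US demand
```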
After all, we get small doses of radiation from a banana and even more from the sky every day, as charged particles traveling near the speed of light hit our atmosphere. So let me reiterate: long-lived isotopes, like those you are more likely to get with a breeder reactor, are much, much less biologically potent than the shorter-lived isotopes from conventional reactors, but they last much longer. It is a mystery to me why breeder reactors are so frowned upon by the government, when virtually any scientist with a knowledge of reactor design will agree that they are the best, if not the only, way to proceed with fission-based nuclear power. Careful selection of pathways will allow you to tune for short- or long-lived isotopes, depending on the goal.

There are two reasons why we should proceed with fission-based power. First, and simplest, is the fact that it is incredibly "green". It is a zero-emissions source, with manageable waste, produced more cheaply and in a smaller land footprint than "renewable" sources [http://www.world-nuclear.org/info/inf02.html , http://www.our-energy.com/energy_facts/nuclear_energy_facts.html ]. The second point relates to fusion, which we can all agree is a superior alternative to fission. It has no radioactive byproducts by many pathways, and it is ten times more mass-efficient, with a much larger fuel source, than fission. However, it is important to realize that most neutron-less (aneutronic) fusion pathways are not feasible with current technology. The sun's reaction path, the "proton-proton chain", works only because the incredible pressures placed on the plasma by gravity allow quantum mechanical tunneling to bypass part of the Coulomb barrier. In slightly more digestible terms, imagine you have two like poles of a very powerful magnet. They are very hard to press together, because they repel. These represent two protons which are to be fused.
As you press them closer and closer, the repulsive force increases, making it harder to press them a little closer together. In a star, gravity gets them so close that quantum mechanics allows for the probability that a proton will simply "jump" the remaining gap, arriving close enough to fuse. Every single photon of light we get from the sun is due to this probabilistic jump. Without the pressures of the sun, for us to replicate an aneutronic chain like the sun's we would need reactors ten times hotter than the solar core. It is a strange quirk of physics that without quantum mechanics, the sun (and all stars) would literally be too cold to fuse matter. So we accept fusion chains that allow for neutronic reactions (reactions with neutrons as by-products, which unfortunately carry much of the reaction energy away, in addition to being what most people think of as "radioactivity"). Even so, there is still an underfunding of nuclear fusion research in the US, in no small part due to the social stigma of fission reactors. The successful expansion of nuclear fission reactors is critical to the more rapid development of fusion reactors. It is still no small task for the scientists, but with neither funding nor social support, fusion cannot proceed. Mr. President-elect, I implore you to look past my somewhat erratic prose, strongly reconsider your position on nuclear power, and help the United States enter a true nuclear age.
I followed this up with an email to some friends, in the hopes it might make it to some third-level aide and have some itty-bitty effect. When Peter sent me this reply: "Sorry, but I've heard that nuclear power generates nuclear waste, and even after processing it must lie underneath the ground for ~8000 years to become safe. Thus, I don't really trust nuclear power. Care to convince me otherwise?" I chose to follow up with this:
Inevitably it generates nuclear waste, but the problem is largely mitigated by breeder reactors. By using these, we can essentially tune the type of waste we would like. It is generally preferred to have short half-life products, which are primarily produced by breeder reactors (http://en.wikipedia.org/wiki/Fast_breeder_reactor ). The idea is that the waste will lose essentially all of its radioactivity in a manageable time frame, and thus needs to be stored for a much reduced period of time. Standard nuclear waste has a half-life on the order of 25,000 years. This isn't particularly dangerous, if you consider the meaning of "half-life". If you lived next to nuclear waste for, say, 75 years, you would absorb 1 − 0.5^(75/25000) ≈ 0.21% of its total radiative output. Consider Tin-126, for example, with a half-life of 2.3e5 years and a decay energy of 4.1 MeV. An LD-50-in-14-days dose for a 100 kg man (from a 126 g, or 1 mol, sample) occurs after 3.7 hours, with a 45-minute exposure being equivalent to a 5% increase in cancer risk (1 gray, or 1 J/kg). It is a particularly nasty by-product, though, being 20-50 times worse than virtually every other byproduct with a shorter half-life. A more representative isotope such as Pd-107 instead gives the same man about a 9 mGy dose over an entire day – about the same as an abdominal CT scan (8 mGy). We can to some extent tailor products by choosing the reactions we use to generate energy, so we can make even these long-lived isotopes pretty safe inherently, in addition to the fact they'd be buried in a mountain. (Half-life and sample product source: http://en.wikipedia.org/wiki/Nuclear_waste#Physics ) Short half-life products are much worse during their toxic time, but have half-lives between 5 and 90 years.
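A quick check of that half-life arithmetic in Python:

```python
# Fraction of total radiative output absorbed over a 75-year stay next to
# waste with a 25,000-year half-life: 1 - 0.5**(t / t_half).
fraction = 1 - 0.5 ** (75 / 25000)
print(round(fraction * 100, 2))  # ~0.21 %
```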
The containers we have made have been theorized to suffer zero degradation from erosion for approximately a 10,000-year period, and are furthermore tested by such means as crashing trains into them, dropping them from 10 m onto steel spikes, and underwater submersion to ensure integrity over long periods. This means the material is essentially guaranteed to stay sealed up for 100-2000 half-lives, leaving less than 10^(-31) of its original mass left over. For reference, this is the equivalent of the sun reducing to a tenth of a kilogram! (HAH, astro rocking the absurdly high exponents again.) This may be further mitigated by new initiatives such as the LIFE project (http://www.contracostatimes.com/localnews/ci_10951822?nclick_check=1&forced=false) that recycle nuclear waste for further fission, further reducing half-lives. Whew! Hopefully this sheds some light on why I'm not particularly concerned.

Besides, look at the alternatives. The only two more efficient things are throwing matter into black holes and M/AM reactions. "Renewables" such as wind and hydro are essentially secondary solar effects; to power the US, the entire state of Connecticut (at 50% efficiency for 12 hours/day, storing half of that power for night-time use, with a nominal solar radiation of 500 W/m^2 at the equator; area: ~14,000 km^2, or 14e9 m^2) would be needed to produce our current 3.5 TW of power usage. It's simply not practical. To produce the world's current 15 TW usage, we'd need about the equivalent of West Virginia coated in photovoltaics. Multiply as appropriate to accommodate cloud cover and room for expansion (say, quadrupling it to account for it all) and you get every last square centimeter of *Texas* covered in photovoltaics. Again, plain and simple, not practical. Wind and hydro both take more area to generate the same amount of power.
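A sanity check on the containment claim, in the same back-of-the-envelope spirit: how many half-lives does it take before less than 10⁻³¹ of the original radioactive mass remains?

```python
# Solve 0.5**n = 1e-31 for n, the number of half-lives required.
from math import log2

n = log2(1e31)
print(round(n))  # ~103 half-lives, comfortably inside the quoted 100-2000 range
```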
For other alternatives, "clean" coal isn't, CNG is a carbon emitter, and H_2 compresses so poorly that it takes as much or more fossil-fuel burning to compress it as you save (simple PV = nRT calcs). Finally, the P-P chain used in the sun (which is aneutronic) requires either solar compression or a temperature 10x hotter than the solar core. D-D and D-T fusion produce neutron side-products, even when those neutrons are used to breed more tritium courtesy of Li-6. H + B-11 can be used for aneutronic fusion, but power densities drop considerably and supersolar temperatures are still required; that is to say, the only realistic fusion will still generate radioactive byproducts. As a species and a country, we need to come to grips with the fact that to stop destroying planetary-level ecology we have to accept geologically short- to medium-term storage of nuclear byproducts, leading to extremely localized hot-spots. There's simply not a good way around it. I hope this, if not outright convincing you, at least puts a little doubt in your mind, and maybe makes you see why, for example, Kit and I are both extremely pro-nuclear power.
It was an interesting writing set, and a lot of research (I gained a bit of insight into isotope length choices by the end of it, though I still fall lightly in favor of long-lived isotopes), and I hope that this is an interesting read for you guys.
That is to say, the Large Hadron Collider, or LHC, turned on today at 10:33 CET, and we're all still here. The test beam went in one direction (not the two necessary, you know, for a collision) at less than full power, so doomsayers can't possibly have been correct. However, I think it might be worth listing off a few reasons why the LHC couldn't destroy the Earth:
- You've got protons, accelerated really fast. So if you make a black hole, it's going to have a really tiny Schwarzschild radius, the radius of the event horizon (Rs): to first order, GM/c². Which is to say, at the speeds these guys are traveling, even a completely tangential approach would decay into the black hole at < 1.5Rs. How big is Rs? Ballparking, we have (10⁻¹¹)(10⁻²⁶)/(10¹⁶) = 10⁻⁵³ m. That's really, really tiny. And since black holes evaporate roughly as ħm⁻², really small things evaporate really, really, really fast — in this case, around 10²⁰ kg/s. This little mini black hole will need to get within ~10⁻⁵³ m of other particles faster than it evaporates, and even at a hair's breadth below the speed of light it doesn't cut it. Really roughly, it'll last 10⁻¹⁷m³ s ≈ 10⁻⁹⁵ s, which translates to a distance of under 10⁻⁸⁶ m — virtually no distance at all, and an infinitesimal fraction of a nucleus. In fact, being a very small fraction of a Planck length, it's virtually meaningless to say it traveled at all.
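The order-of-magnitude juggling can be redone with real constants. A sketch using the full 7 TeV collision energy as the hypothetical black hole's mass (standard constants; the 5120π lifetime prefactor is the textbook Hawking result), which lands in slightly different decades but with the same conclusion:

```python
# Schwarzschild radius and Hawking lifetime for a 7 TeV micro black hole.
from math import pi

G = 6.674e-11     # m^3 kg^-1 s^-2
c = 299792458.0   # m/s
hbar = 1.055e-34  # J s

m = 7e12 * 1.602e-19 / c**2  # 7 TeV expressed as mass, ~1.2e-23 kg
r_s = 2 * G * m / c**2       # Schwarzschild radius
t_evap = 5120 * pi * G**2 * m**3 / (hbar * c**4)  # Hawking evaporation time

print(r_s)     # ~1.9e-50 m
print(t_evap)  # ~1.6e-85 s: gone essentially instantly
```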
- This is complicated a bit by the fact that black holes have no hair but do retain charge, so this would be a charged, nearly light-speed black hole. A very small correction, but it's there.
- The strangelet hypothesis also won't happen. Sure, colliding strangelets with normal matter can convert normal matter into strange matter. But the cross-sections are again really tiny, and statistics ensure that any reaction would die out as the strangelet decays.