Archive for the ‘Science’ Category
This is my hypothesis based on a relatively consistent diet of physics books over the last couple of decades. Now, this hypothesis is by no means original to me (see here). But it is certainly not the dominant view, and most laypersons are probably unaware it even exists.
The latest bit of data that reinforces my belief in the Computation Engine Hypothesis is From Eternity to Here, by Sean Carroll. In this book, Carroll tries to explain the arrow of time using the concept of entropy from the perspective of statistical mechanics.
The basic idea behind entropy is to compute the number of physical microstates (e.g., positions of each molecule of oxygen) that correspond to the same physical macrostate (e.g., the physical distribution of those molecules in a jar). If a macrostate has lots of different corresponding microstates, it’s “ordinary” and has high entropy. If a macrostate has only a few corresponding microstates, it’s “special” and has low entropy. There are a lot more ways to arrange a set of oxygen molecules so they are uniformly distributed than there are to arrange them in the shape of a duck, so the former has high entropy while the latter has low entropy.
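To make the counting concrete, here’s a toy sketch in Python (my own illustration, not Carroll’s): treat the macrostate as “how many of n molecules sit in the left half of the jar,” and the microstates as which particular molecules those are. The even split has an astronomical number of microstates; the everyone-on-one-side split has exactly one.

```python
from math import comb, log

def multiplicity(n, k):
    """Number of microstates (choices of which molecules sit in the
    left half) for the macrostate 'k of n molecules in the left half'."""
    return comb(n, k)

def entropy(n, k):
    """Boltzmann entropy S = ln W, with Boltzmann's constant set to 1."""
    return log(multiplicity(n, k))

n = 100
# "Ordinary" macrostate: molecules spread evenly between the halves.
print(multiplicity(n, 50))    # ~1.0e29 microstates
# "Special" macrostate: every molecule crammed into the left half.
print(multiplicity(n, 100))   # exactly 1 microstate
print(entropy(n, 50) > entropy(n, 100))  # True: uniform has higher entropy
```

Even with only 100 molecules, the even split outnumbers the all-on-one-side arrangement by a factor of about 10^29; with a realistic jar of ~10^23 molecules, the gap is unimaginably larger.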
High entropy states occur more frequently than low entropy states. So any interaction tends to increase entropy because transitions to more common states are more likely. Thus the arrow of time is a statistical property of dynamic behavior.
But now there’s a problem. One can apply this same type of analysis to the Universe as a whole (or more precisely, our “observable patch” of the Universe). You see, it has rather low entropy compared to its maximum (which we can calculate using concepts from statistical mechanics). There’s all this orderly clumping of matter into galaxies, solar systems, planets, animals, and humans. And that’s just not very likely. Now, you could try invoking the Anthropic Principle: that we wouldn’t be here to observe the Universe unless it were ordered this way. Sorry, but no. It’s actually much more likely that our brains would materialize out of the ether due to random quantum fluctuations (so called “Boltzmann Brains”).
Carroll has a loophole. What if our Universe (and indeed each Universe in the “Metaverse”) spawns new Universes? Then there is no maximum entropy and the configuration of our observable patch becomes much more likely. Here’s how it might happen. Even a Universe at maximum entropy still undergoes fluctuations, definitely of quantum fields and perhaps of spacetime itself. If a quantum fluctuation to a higher vacuum energy occurred at the same time that a bit of spacetime pinched off, you would get what looks like a new universe undergoing a Big Bang. Astronomically unlikely at any given time and place, but almost certain to happen eventually in a given Universe.
Aha! Problem solved. But think of the implications. There’s a huge proliferation of Universes. Now, add in the proliferation of different versions of the Universe from the Many Worlds Interpretation (MWI) of quantum mechanics. Recall that the MWI explains apparently “spooky” quantum behavior by suggesting that the wavefunction does not actually collapse. Instead, every possible value of the wavefunction is realized in a different blob of amplitude, a process known as decoherence. Effectively, any time a quantum particle interacts with a macro object, it generates a version of the universe for each possible outcome of that interaction.
So at the quantum level, we’ve got all this branching of the Universe every microsecond. Then at the astrophysics level, we’ve got new Universes spawning. Of course, this spawning also obeys the MWI, so you’ve actually got an exponential proliferation of baby Universes. If you squint, this whole process looks like a multi-dimensional forward-chaining computation. Every possibility in this Universe is realized, whole new Universes with slightly different rules get created, and every possibility in them is realized.
Going back to the concept of entropy, it turns out that the Thermodynamic Entropy we can calculate for objects is exactly the same as the Shannon Entropy we can calculate for information. Shannon Entropy measures how unpredictable a piece of information is. Think of it in terms of compression. You can’t compress a file any smaller than its Shannon Entropy will allow. Structured files have low entropy, and by encoding their structure, you can compress them more. A random string of bits in a file has maximum entropy, so you can’t compress it at all. Shannon Entropy is a measure of how potentially useful information is, just as Thermodynamic Entropy is a measure of how much potentially useful energy remains.
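Here’s a quick illustration of that compression point (my own sketch, using Python’s standard zlib module): a highly patterned byte string has low empirical entropy and compresses dramatically, while random bytes sit near the 8-bits-per-byte maximum and barely compress at all.

```python
import os
import zlib
from collections import Counter
from math import log2

def shannon_entropy_bits(data: bytes) -> float:
    """Empirical Shannon entropy in bits per byte: H = -sum p*log2(p)."""
    counts = Counter(data)
    n = len(data)
    return -sum((c / n) * log2(c / n) for c in counts.values())

structured = b"abcd" * 4096     # highly patterned: 4 equally likely symbols
random_ish = os.urandom(16384)  # near-maximal entropy

print(shannon_entropy_bits(structured))  # exactly 2.0 bits/byte
print(shannon_entropy_bits(random_ish))  # close to 8.0 bits/byte

# Compression tracks entropy: the structured data shrinks to almost
# nothing, while the random data doesn't shrink at all.
print(len(zlib.compress(structured)))
print(len(zlib.compress(random_ish)))
```

The structured string uses four equally likely symbols, so its entropy is exactly log2(4) = 2 bits per byte, and zlib squeezes 16 KB of it down to a few dozen bytes; the random bytes come out of zlib slightly *larger* than they went in.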
So there’s already a known equivalence between the physical and informational. Then if you buy into Carroll’s hypothesis and the MWI, it looks like the Metaverse is trying to compute every possible outcome. In fact, it may compute every possible outcome more than once. An infinite number of times if it runs an infinite amount of time. After it runs long enough, someone who could observe the whole Metaverse could actually calculate very precise odds of any outcome given any condition. You’d be statistically omnipotent.
Never bet against a statistically omnipotent being.
Things just got worse if you put your faith in the “consensus” about catastrophic anthropogenic global warming (AGW). You’ll recall that the disclosure of internal emails undermined confidence in both the surface temperature record and the peer-review process that qualifies research for inclusion into the blue ribbon Intergovernmental Panel on Climate Change (IPCC) reports.
Now we find out that some of the more sensational claims about potential consequences contained in the IPCC AR4 report are not actually backed up by peer-reviewed research. Instead, they come from assertions made by advocacy groups such as the WWF and Greenpeace. Then there’s the dependence on anecdotal newspaper and magazine reports. Oh, and an amusing reference to a boot cleaning manual from an Antarctic tour operator.
It all started with the infamous “Himalayan glaciers will be gone by 2035” claim, which was substantiated solely by a WWF report. Not cool, because IPCC rules state they should only reference peer-reviewed research from respectable journals.
Things get worse. Bear with me here. The story is a bit involved, but it reveals how feckless the guys at the top of the AGW food chain can be. India’s environmental minister tried to call BS by referring to, you know, actual measurements of glacial retreat. But the chairman of the IPCC called this “voodoo science.” Of course, the scientist who led the development of that section of IPCC AR4 eventually admitted that the claim about glaciers disappearing by 2035 was not supported by peer-reviewed research. And it turns out that the chairman of the IPCC was actually informed about the problem months earlier.
I realize that people want to defer to the leading scientists in an area. It’s perfectly rational. In fact it was what I did before I started looking into AGW myself. But there should be some evidence that will cause you to update this position. I think we’ve reached that point.
As you may have heard, an unknown hacker breached the Hadley Climatic Research Centre and disclosed a large volume of email and documents, thus giving us a peek inside the sausage factory. First, let me say that the breach itself rather concerns me. We’re talking about a government sponsored research facility. Somebody virtually waltzed right in and took everything but the kitchen sink. Heads should roll in the information security department.
Second, the email correspondence is pretty damning. It won’t affect my position much because I was already fairly sure these types of shenanigans were going on. But if you put your faith in the “consensus”, you should consider updating your position. There are numerous instances of three types of egregious behavior from senior scientists:
- Coordinated efforts to portray all results as supporting the conclusion that anthropogenic global warming (AGW) is a serious threat. Such efforts included the spinning of results, application of statistical “tricks”, and selective use of data.
- Coordinated efforts to suppress professional dissent. Such efforts included going after editors of journals that published articles supporting a skeptical view and lobbying university administrations to pressure researchers who didn’t toe the line.
- Coordinated efforts to evade Freedom of Information Act requests and destroy data that might support the skeptical position if disclosed.
By themselves, these actions should be alarming because they obfuscate the real answer to the question of how serious a threat AGW presents.
But the real take home point is the tone of many emails. These are leading scientists in the field. Yet they clearly hold bitter contempt for colleagues who don’t agree with them. This isn’t business. This is personal. To paraphrase Robin Hanson, climate science isn’t about the science of climate. It’s about social status. The AGW proponents see themselves as an “in group” and AGW skeptics as an “out group”. They are more concerned about destroying the out group than actually figuring out what’s going on with the climate.
Given this attitude, it’s hard to have any confidence that we’ll get a rational, scientific answer any time in the near future.
Normally, I don’t debate random bloggers on Anthropogenic Global Warming (AGW). However, I made an exception for Robin Hanson. For those who don’t already know of him, he was an early proponent of decision markets and has a reasonably well-known journal article on why two Bayesian rationalists can’t agree to disagree. I’m a fan of his work and have been reading his blog for years.
Yesterday, he put up a post titled CO2 Warming Looks Real. He’s not an expert. Like me, he has an economics background and did some detailed research. Yet from the title and body of the post, I thought he must have reached a very different conclusion than I did. So I thought I’d try to engage him to find out where we differ. The results were interesting.
I happened to come across two interesting posts with Singularity implications that I thought you might be interested in. First, the Singularity Hub reports that Osiris has a promising phase II trial underway for a treatment that uses foreign stem cells to repair the muscle damage from heart attacks. If you’re about 40 like Rafe and me, this means your chances of dying from heart disease could go way down. Now if we can just make some progress on cancer, we’ll be centenarians.
Second, via Prometheus, Wired reports on a robot-software combination that was able to generate, test, and refine its own hypotheses to identify coding for orphan enzymes in yeast. Obviously, this is a very special purpose kind of science. But the fact that they got a closed loop is very impressive. I also like the fact that it’s in the biological sciences. Hey, maybe some descendant of this program can solve the aforementioned cancer problem.
As I mentioned in this post, one of the three primary planks of my worldview is that, “…the human brain is a woefully inadequate decision making substrate.” I started adopting this posture in graduate school and have refined it with constant input from the cognitive psychology and neurobiology literature over the years. Luckily, you don’t have to put in that kind of time. Simply go out and read Rational Choice in an Uncertain World by Hastie and Dawes and The Accidental Mind by Linden.
I try not to practice false modesty (those of you who know me well probably just did a spit take at that understatement). So while I try to stand up and admit when I’m wrong, I also like to stand up and point out where I’m right.
It shouldn’t be a surprise to any of you that I came to the conclusion that climate models are pretty much total bullshit. My problem with them is that they are incomplete, overfitted, and unproven. It turns out that one of the foremost experts on forecasting in general also thinks that these models have no predictive value. In fact, items (6) and (7) of the statement show that you can predict the future temperature really well simply by saying it will be the same as the current temperature.
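That “tomorrow will be like today” strategy is the classic persistence baseline from forecasting. Here’s a quick sketch of why it’s so hard to beat on a slowly drifting series; note the data below is a made-up random walk, not real temperatures, purely to illustrate the baseline:

```python
import random

random.seed(0)

# Hypothetical temperature-anomaly series: a slow random walk that
# stands in for real data just to demonstrate the point.
temps = [0.0]
for _ in range(999):
    temps.append(temps[-1] + random.gauss(0, 0.05))

def mae(forecasts, actuals):
    """Mean absolute error of a forecast series."""
    return sum(abs(f - a) for f, a in zip(forecasts, actuals)) / len(actuals)

# Persistence baseline: predict tomorrow's value to be today's value.
persistence = temps[:-1]
actual = temps[1:]

# "Climatology" forecast: always predict the long-run mean.
mean_temp = sum(temps) / len(temps)
climatology = [mean_temp] * len(actual)

print(mae(persistence, actual))   # small: roughly the daily step size
print(mae(climatology, actual))   # much larger on a drifting series
```

Any model claiming skill has to beat that persistence number, which is why a forecasting audit treats it as the bar to clear rather than a joke.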
You can read their more formal indictment of climate forecasting methods here.
I apologize for the posting lull. I’ve had a bad cold and been struggling to add Monte Carlo simulation to my discrete stochastic model of the startup lifecycle (if anyone is planning on using Oracle’s Crystal Ball, I can tell you the good and bad). But I’m almost finished with my next substantial post.
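For the curious, the bare bones of a Monte Carlo run over a staged lifecycle model can be sketched in a few lines. To be clear, the stages and transition probabilities below are entirely made up for illustration; they are not the numbers from my actual model:

```python
import random

random.seed(42)

# Hypothetical probability of advancing from each stage to the next.
NEXT_STAGE_PROB = {
    "seed":     0.40,   # seed -> series A
    "series_a": 0.50,   # series A -> series B
    "series_b": 0.60,   # series B -> exit
}
STAGES = ["seed", "series_a", "series_b", "exit"]

def simulate_one():
    """Walk one startup through the funnel; return where it ends up."""
    stage = "seed"
    while stage != "exit":
        if random.random() < NEXT_STAGE_PROB[stage]:
            stage = STAGES[STAGES.index(stage) + 1]
        else:
            return stage  # failed to raise the next round
    return "exit"

def monte_carlo(trials=100_000):
    """Estimate the outcome distribution by repeated simulation."""
    outcomes = {}
    for _ in range(trials):
        result = simulate_one()
        outcomes[result] = outcomes.get(result, 0) + 1
    return {k: v / trials for k, v in outcomes.items()}

# The estimated exit rate should land near 0.4 * 0.5 * 0.6 = 0.12.
print(monte_carlo())
```

With enough trials the simulated exit rate converges on the analytic product of the transition probabilities; the payoff of the Monte Carlo approach comes when the model gets too tangled for a closed-form answer.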
In the meantime, I finished a really good physics book: Lightness of Being by Nobel prize winner Frank Wilczek. It requires a basic knowledge of quantum mechanics (I suggest Al-Khalili’s Quantum) and particle physics (any recent popular book that spends more than one chapter on the Standard Model).
Given that, it does an awesome job of explaining three things that have always bothered me. First, how the strong force can possibly get more powerful the farther away you get. Second, why we can’t break protons and neutrons into their component quarks. Third, where the heck a proton’s mass really comes from. It turns out all three things are related and the explanation is quite elegant. I don’t know why the dozen other physics books I’ve read in the last five years omitted an explanation (or at least an explanation that stuck with me).