Show Posts

This section allows you to view all posts made by this member. Note that you can only see posts made in areas you currently have access to.

Topics - Shoelayceberry the [Unlaced]

Tech Heads / Permissions Management
« on: February 16, 2015, 03:11:49 PM »
Next up, we have our storage upgrade/migration going this month. In order to reduce some complexity, we are moving from older systems on NetApp, Isilon, and Sun, to just NetApp and Isilon. NetApp will run all performance required roles (grid and VMware infrastructure) and Isilon will handle the generic file server role.

Apparently, the move of our home areas to their new location on Isilon will destroy all NTFS permissions, while maintaining UNIX permissions. It was handed off to me due to my growing skill with PowerShell. At first blush, I figured it would be easy to just back up the ACLs, then reapply them. Of the ~1000 home directories, ~200 errored out. I got buried under other work and still haven't figured out why they failed. At this point, I just need to solve the problem for the majority, then figure out why the ~200 fail.

As I put more work into it, it seems like it may be harder than I expected. The migration happens next weekend whether I am ready or not. I ran across icacls.exe again today; I knew of its existence before, but had never looked at it seriously.
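If I go the icacls route, the sketch in my head is something like this (paths are made up, and none of this is tested against our environment; /t recurses and /c continues past per-file errors so one bad directory doesn't kill the whole run):

```powershell
# Save NTFS ACLs for everything under the old home-directory root.
# The save file records paths relative to the root given here.
icacls "H:\Homes" /save "C:\Temp\homes_acls.txt" /t /c

# ...data gets migrated to the new Isilon share...

# Reapply the saved ACLs at the new location. The /restore target is the
# parent of the relative paths recorded in the save file.
icacls "\\isilon\homes" /restore "C:\Temp\homes_acls.txt" /c
```

With /c set, failures get logged instead of aborting, so in theory I could diff the error output against my list of ~200 problem directories and handle those separately.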

The question is: have any of you ever done this? Do you have a preference, and if so, why?

Tech Heads / Exchange Admins?
« on: January 20, 2015, 04:57:40 PM »
Long shot, but looking for some guidance. Maintenance/recovery question...

We will eventually migrate off of this, to Office 365, but in the meantime I have inherited an Exchange 2007 environment, with 2 Hub/CAS servers in an NLB, and a CCR Failover Cluster. All server OSes are Server 2008. All servers are running all latest patches and Exchange rollups. We have 25 Storage Groups, each containing a single DB. On the Mailbox server cluster, we have separate filesystems for logs and DBs. This is all virtual in VMware 5.0 (very behind here too).

The admin I inherited this from had started a test phase with O365 by setting up a 3rd Hub/CAS server with Exchange 2010 (Server 2008 R2, patched and Exchange rollups all the way).

Everything is running great with Exchange. The problem is that our backup software (NetBackup) is out of date and the backup hardware is taxed, so the occasional backup fails. This causes our log drive to fill, which causes Exchange to stop sending mail. We have no monitoring solution that I know of, so during the last couple of months I have been documenting everything (for my own sake, to learn how it all works) and getting these upgrades done, including getting PowerShell v2 on all nodes. This should all be finished in the next couple of days. My plan is to set up a repeating PowerShell script to monitor drive fullness that will email me nightly, and my boss weekly; eventually this (or similar) will be deployed to all critical servers. Two things:

1) What else should I be doing? I took a class early last year for Exchange 2013, but that has been little help in a production 2007 environment; almost everything has changed. I have mostly read an Exchange 2007 book, so along with my 2013 reading I have a feel for how everything should work. I expanded on that with the documentation, of which we had none, so that I know all the points of failure. I still need to trace mail flow from OWA (and the other web services), as I think that may be hosted on the "new" 2010 box: there was recently a disruption after my boss patched it, and (re)starting some services on that box brought everything back online.

2) CCR Failover Cluster. What is the proper way to recover from our full log drive? It seems we have to reseed the passive node every time we recover. I think we may be causing that ourselves. Current steps are:

a) Dismount-Database -Identity dbname
b) eseutil /mh "path_to_db.edb"
c) assuming state = clean shutdown, delete all logs on the largest Storage Groups\DBs  <THIS MIGHT BE THE POINT OF FAILURE>
d) Mount-Database -Identity dbname

DBs come back healthy, but CCR maintains an initializing state for as long as we let it, then we reseed (Update-StorageGroupCopy -Identity dbname) which eventually gets everyone back in a happy copy state, in case of failover.
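For what it's worth, the drive-fullness monitor I have in mind is roughly this (a sketch only; server names, thresholds, and SMTP details are placeholders, but Get-WmiObject Win32_LogicalDisk and Send-MailMessage are both available on PowerShell v2):

```powershell
# Return $true when a drive is below the free-space threshold (percent).
function Test-LowSpace {
    param([double]$FreeBytes, [double]$SizeBytes, [double]$ThresholdPct)
    (($FreeBytes / $SizeBytes) * 100) -lt $ThresholdPct
}

# Nightly check: scan fixed disks (DriveType=3) on each mailbox server and
# mail a summary of any drive under the threshold. Run from Task Scheduler.
$servers   = @("MBX-NODE1", "MBX-NODE2")   # placeholder server names
$threshold = 15                            # warn when free space < 15%

$report = foreach ($server in $servers) {
    Get-WmiObject Win32_LogicalDisk -ComputerName $server -Filter "DriveType=3" |
        Where-Object { Test-LowSpace $_.FreeSpace $_.Size $threshold } |
        ForEach-Object { "{0} {1}: {2:N1}% free" -f $server, $_.DeviceID, (100 * $_.FreeSpace / $_.Size) }
}

if ($report) {
    Send-MailMessage -To "admin@example.com" -From "monitor@example.com" `
        -Subject "Low disk space on Exchange servers" `
        -Body ($report -join "`n") -SmtpServer "smtp.example.com"
}
```

The weekly boss summary would just be a second scheduled task calling the same script with a different recipient and no threshold filter.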

Since I inherited my new role back in ~August last year, I have been in a state of perpetual fear and failure, making life in general stressful. I am happy to have the new responsibilities, since once I master this I will have experience with all the major Windows enterprise technologies: NLBs, failover clusters, Exchange support, IIS, and basic database work (I have another project requiring MS SQL, but that has been on the backburner due to this). Still, it has been a bad way to learn - maybe this is how everyone does it? Anyway, I have been pretty burnt out for about a year now, between this and the building move I have posted about before, so once this gets handled in a way I am happy with, I can relax a bit, then pick back up on my other task and learn some SQL.

Any help would be appreciated. Thanks.

General Discussion / [SCIENCE!] Back to tachyons = Neutrinos ?
« on: January 03, 2015, 02:14:51 PM »
Possibly with imaginary mass. Multiple links in original article.

How To Find Faster-Than-Light Particles

December 27, 2014 | by Stephen Luntz
Photo credit: Sumanch via Wikimedia Commons. A faster-than-light particle would be invisible as it approached, but would be seen in both directions after it passed.

A new paper claims to demonstrate that neutrinos not only travel faster than the speed of light, but have the brain-twisting characteristic of “imaginary mass”, a property that means they actually speed up as they lose energy.

The phrase “extraordinary claims require extraordinary proof” has seldom been more appropriate, but Professor Robert Ehrlich, recently retired from George Mason University, believes he has that, with six different measurements from different areas of physics. All of these, Ehrlich claims in Astroparticle Physics, provide matching results that not only indicate that neutrinos have imaginary mass, but point towards the same value, making it less likely the readings are in error.

While the mere idea of imaginary mass sounds improbable to the non-physicist, it is a concept theoreticians have been tinkering with for some time. Imaginary numbers are the square roots of negative numbers, and have proven exceptionally valuable tools in physics; an imaginary mass is one whose square, m², is negative.

This just makes the concept sound even more improbable, but the idea actually falls fairly neatly out of the theory of Special Relativity. One of Einstein's key discoveries was the realisation that, for ordinary matter, mass increases with velocity. The formula is m² = m²_rest / (1 − (v/c)²), where m is mass, v is velocity and c is the speed of light.

The conclusion that faster-than-light travel is impossible comes from the fact that, for an object with mass, traveling at the speed of light would make that mass infinite. Unless someone can work out how to get from traveling slower than light to faster than light without ever moving at exactly the speed of light, it appears we are stuck with exploring the universe at a painfully slow pace.
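To spell out the logic (my own restatement of standard special relativity, not something from the article):

m² = m²_rest / (1 − (v/c)²)

For ordinary matter, m²_rest > 0, so as v approaches c the denominator approaches 0 and m blows up toward infinity; that is why light speed can't be reached. But if m²_rest < 0 (an imaginary rest mass), the only way the left side can come out as a real, positive m² is for the denominator to be negative as well, i.e. 1 − (v/c)² < 0, which means v > c. A tachyon isn't just allowed to travel faster than light; the math requires it, and the same equation implies it speeds up as it loses energy.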

However, in 1962 George Sudarshan pointed out that nothing in relativity theory prevented the possibility of particles that always travel faster than the speed of light. For these objects, dubbed tachyons, light speed would be a floor, not a ceiling.

This raised the question as to whether, if tachyons exist, we would be able to detect them. In 1985 it was suggested that neutrinos are actually tachyons. Most physicists paid little attention and went back to arguing about whether neutrinos have mass and travel slower than light, or are massless objects traveling at lightspeed. However, the claim has resurfaced several times since, most famously in the erroneous timing of neutrinos from Geneva to central Italy.

Ehrlich uses results from “Cosmic Microwave Background fluctuations, gravitational lensing, cosmic ray spectra, neutrino oscillations, and double beta decay.” From these he arrives at a mass that, besides its imaginary status, is less than a millionth that of an electron (m²(νe) = −0.11 ± 0.016 eV²), consistent with a speed only slightly above that of light.

Moreover, Ehrlich claims, “There are no known observations in clear conflict with the claimed result.” He also suggests three further tests that could be conducted to verify or disprove his conclusion, one of which is set to occur in 2015.

General Discussion / Hyperloop developments
« on: December 21, 2014, 08:43:01 PM »
 :nerdglasses: See article for cool graphics.

These Dreamers Are Actually Making Progress Building Elon’s Hyperloop
By Alex Davies 
12.18.14  |  1:10 pm

When Elon Musk unveiled his idea for the Hyperloop in August of 2013, no one seemed sure what the next step would be. The Tesla Motors and SpaceX CEO dropped a 57-page alpha white paper on us, noting he didn’t really have the time to build a revolutionary transit system that would shoot pods full of people around the country in above-ground tubes at 800 mph.

Fortunately for futurists and people who enjoy picking apart complicated plans, an El Segundo, California-based startup has taken Musk up on his challenge to develop and build the Hyperloop. JumpStartFund combines elements of crowdfunding and crowd-sourcing—bringing money and ideas in from all over the place—to take ambitious ideas and move them toward reality.

When Musk proposed his idea, JumpStartFund was fresh off its beta launch, and taking on the Hyperloop seemed like the perfect way to test the company’s approach (and drum up headlines), says CEO Dirk Ahlborn. So they reached out to SpaceX, proposed the project on their online platform, and created a subsidiary company to get to work: Hyperloop Transportation Technologies, Inc.

The incorporated entity has a fancy name and all, but it’s less a standard company than a group of about 100 engineers all over the country who spend their free time spitballing ideas in exchange for stock options. That said, this isn’t a Subreddit trying to solve the Boston Marathon bombing. These gals and guys applied for the right to work on the project (another 100 or so were rejected) and nearly all of them have day jobs at companies like Boeing, NASA, Yahoo!, Airbus, SpaceX, and Salesforce. They’re smart. And they’re organized.

The team is split into working groups, based on their interests and skills, that cover various aspects of the massive project, including route planning, capsule design, and cost analysis. They work mostly over email, with weekly discussions of their progress. Hierarchy is minimal, but leaders have naturally emerged, says Ahlborn. And if a decision needs to be made, as CEO, he makes the call.

A lot of the work is being done by 25 UCLA students. The school’s SUPRASTUDIO design and architecture program partnered with JumpStartFund, and now the students are working on all the design solutions the new transit system would require.

Ahlborn doesn’t expect to have the technical feasibility study finished until mid-2015, but he decided to show off what his team has done so far to coincide with the midterm break of the design group at UCLA. So far, the team has made progress in three main areas: the capsules, the stations, and the route.

Here’s what we know so far about the Hyperloop JumpStartFund wants to build.
The Route

The group working on finding a suitable route used algorithms that account for things like existing buildings, roads, and geography, and optimize the path for speed and comfort. That means keeping the line as straight as possible. Like in a plane, high speeds alone don’t lead to nausea, but if you start turning, you feel the g-forces. The route won’t be completely smooth, Ahlborn says, but contrary to the claim of one transportation blogger, “I don’t think it’s a barf ride.”

Musk’s proposed Hyperloop route running from San Francisco to Los Angeles came under a lot of criticism: What about earthquakes? Right of way? Crossing the San Francisco Bay? How will you avoid the political struggles that have made the region’s in-development high-speed rail system something of a punch line? Ahlborn has the answer: Pick a different route. Los Angeles to Las Vegas is being considered, as are other parts of the US and the world. “We would love to see LA to San Francisco, but our primary goal is to build the Hyperloop.” Yes, there are political hurdles. But not everywhere. Not in Dubai.

The UCLA students working on potential routes imagine networks criss-crossing the country, as well as Europe and Asia. This is where things get fanciful: we’re at least 10 years away from a commercially viable Hyperloop, and the idea of a national network is hard to imagine. They tacked on the idea of a “Mini Hyperloop,” which would offer shorter routes into and around cities.

The Capsules

The team had to make a few changes to the capsules Musk proposed. The Tesla CEO suggested doors that would open upward, but Ahlborn says that’s hard to do, since the low-pressure environment of the tube requires fairly heavy doors. So the team decided on what it calls a “bubble strategy.” There’s the swanky capsule, the one with fancy doors and windows, that pulls into the station. It’s the “bubble.” Passengers get in, and that capsule enters an outer shell as it’s loaded into the tube. The outer shell is built to handle the ride, and has the air compressor and other needed bits.

Don’t expect the Hyperloop to end the struggle between the bourgeoisie and proletariat: in addition to capsules made for freight, there will be economy class, and a roomier business class.

The Stations

As the UCLA students imagine it, a passenger would arrive at a station and drop her luggage off with a Kiva robot (the kind Amazon uses in its warehouse). She would pass through security on what seems to be a moving sidewalk going under a metal detector, an idea that sounds tricky when you consider how often people in airports forget to take coins or various terrifying objects out of their pockets. But once through, she would be able to kill time in the lobby doing some shopping, grabbing a bite, using the bathroom, or renting a tablet for the trip. Then she heads to her platform, gets in her assigned seat, and is whisked away.

The Hyperloop would be made of two stacked tubes, in which the capsules travel in opposite directions. When a capsule reaches a station, the bubble slides out sideways and onto the platform, and the passengers unload. Then the capsule is moved to the opposite tube and ready to get going again.
What Remains to Be Done

So JumpStartFund and the UCLA students have made good progress, but there’s a lot to figure out before anyone gets to tackle the really fun parts like testing, permitting, and construction. Ahlborn says the questions of how to build the low-pressure tube and the pylons that support it have mostly been solved, and creating the capsules shouldn’t be too tricky. The hard part is moving the capsules within the tube, and seeing how fast they can go. To eliminate friction in the tube, Musk proposed using a compressor to create a pocket of air under the capsule. That’s the cheapest approach, Ahlborn says, but it has its drawbacks. His team is looking at the possibility of using magnetic levitation and other alternatives. “We want to find the best possible way to make this work.”

“I have almost no doubt that once we are finished, once we know how we are going to build and it makes economical sense, that we will get the funds,” Ahlborn says, and Musk’s cost estimate of $6-10 billion for a 400-mile stretch of Hyperloop is on point, based on the team’s work.

Considering the nonsense that’s getting venture capital these days, that’s not a crazy thing to say, though it will require unusually patient investors. Ahlborn expects to start building the first in a series of prototypes sometime in 2015. A final product “can be built within the decade,” Ahlborn says. “That’s for sure.”

At some point, Hyperloop Transportation Technologies will likely have to shift from this work-when-you-can-but-don’t-expect-money model to something a bit more conventional with, you know, employees. But for now, it’s a fitting approach: Bring in as many minds as possible to sort through the myriad questions an idea this ambitious presents. This is why Ahlborn’s excited about the Hyperloop: It’s a huge undertaking. That’s why people like Elon Musk, he says: The dude wants to die on Mars and he’s actually moving toward the awesome, if macabre, goal. “Other people work on their next app.”

General Discussion / Foo Fighters: Sonic Highway
« on: October 25, 2014, 02:03:30 AM »
Documentary series on HBO. If you're a fan of rock in general, it's very much worth a look. Very cool juxtaposition of art, music, real-life stories, and location. Each week they go to a different city; so far, Chicago and Washington, DC. For each city, they cover the music scene from its roots, then tie it in to someone in the band, then finish with a song from the new album, where they show the lyrics on-screen and you see how the song relates to everything you just learned. Chicago had a lot of blues in it. DC had a lot of the DC punk scene; Dave Grohl grew up there. Next week is Nashville. Curious how it will tie in, but I'm hooked already.

General Discussion / Fermi Paradox breakdown
« on: August 17, 2014, 05:25:36 PM »
Click the link. It's prettier.

The Fermi Paradox: Where the Hell Are the Other Earths?
Tim Urban
5/23/14 11:20am

Everyone feels something when they're in a really good starry place on a really good starry night and they look up and see this:


Some people stick with the traditional, feeling struck by the epic beauty or blown away by the insane scale of the universe. Personally, I go for the old "existential meltdown followed by acting weird for the next half hour." But everyone feels something.

Physicist Enrico Fermi felt something too—"Where is everybody?"

A really starry sky seems vast—but all we're looking at is our very local neighborhood. On the very best nights, we can see up to about 2,500 stars (roughly one hundred-millionth of the stars in our galaxy), and almost all of them are less than 1,000 light years away from us (or 1% of the diameter of the Milky Way). So what we're really looking at is this:


When confronted with the topic of stars and galaxies, a question that tantalizes most humans is, "Is there other intelligent life out there?" Let's put some numbers to it (if you don't like numbers, just read the bold)—

As many stars as there are in our galaxy (100 – 400 billion), there are roughly an equal number of galaxies in the observable universe—so for every star in the colossal Milky Way, there's a whole galaxy out there. All together, that comes out to the typically quoted range of between 10^22 and 10^24 total stars in the universe, which means that for every grain of sand on Earth, there are 10,000 stars out there.

The science world isn't in total agreement about what percentage of those stars are "sun-like" (similar to our sun in size, temperature, and luminosity)—opinions typically range from 5% to 20%. Going with the most conservative side of that (5%), and the lower end for the number of total stars (10^22), gives us 500 quintillion, or 500 billion billion sun-like stars.

There's also a debate over what percentage of those sun-like stars might be orbited by an Earth-like planet (one with similar temperature conditions that could have liquid water and potentially support life similar to that on Earth). Some say it's as high as 50%, but let's go with the more conservative 22% that came out of a recent PNAS study. That suggests that there's a potentially-habitable Earth-like planet orbiting at least 1% of the total stars in the universe—a total of 100 billion billion Earth-like planets.

So there are 100 Earth-like planets for every grain of sand in the world. Think about that next time you're on the beach.

Moving forward, we have no choice but to get completely speculative. Let's imagine that after billions of years in existence, 1% of Earth-like planets develop life (if that's true, every grain of sand would represent one planet with life on it). And imagine that on 1% of those planets, the life advances to an intelligent level like it did here on Earth. That would mean there were 10 quadrillion, or 10 million billion intelligent civilizations in the observable universe.

Moving back to just our galaxy, and doing the same math on the lowest estimate for stars in the Milky Way (100 billion), we'd estimate that there are 1 billion Earth-like planets and 100,000 intelligent civilizations in our galaxy. (The Drake Equation provides a formal method for this narrowing-down process we're doing.)
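For anyone who wants to check the arithmetic, the chain of estimates works out like this (using the conservative figures quoted above):

Universe: 10²² stars × 5% sun-like × 22% Earth-like ≈ 10²⁰ Earth-like planets
10²⁰ × 1% with life × 1% intelligent = 10¹⁶ (10 quadrillion) intelligent civilizations

Milky Way: 10¹¹ stars × 5% × 22% ≈ 10⁹ (1 billion) Earth-like planets
10⁹ × 1% × 1% = 10⁵ (100,000) intelligent civilizations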

SETI (Search for Extraterrestrial Intelligence) is an organization dedicated to listening for signals from other intelligent life. If we're right that there are 100,000 or more intelligent civilizations in our galaxy, and even a fraction of them are sending out radio waves or laser beams or other modes of attempting to contact others, shouldn't SETI's satellite array pick up all kinds of signals?

But it hasn't. Not one. Ever.

Where is everybody?

It gets stranger. Our sun is relatively young in the lifespan of the universe. There are far older stars with far older Earth-like planets, which should in theory mean far more advanced civilizations than our own. As an example, let's compare our 4.54 billion-year-old Earth to a hypothetical 8 billion-year-old Planet X.


If Planet X has a similar story to Earth, let's look at where their civilization would be today:


The technology and knowledge of a civilization only 1,000 years ahead of us could be as shocking to us as our world would be to a medieval person. A civilization 1 million years ahead of us might be as incomprehensible to us as human culture is to chimpanzees. And Planet X is 3.4 billion years ahead of us…

There's something called The Kardashev Scale, which helps us group intelligent civilizations into three broad categories by the amount of energy they use:

A Type I Civilization has the ability to use all of the energy on their planet. We're not quite a Type I Civilization, but we're close (Carl Sagan created a formula for this scale which puts us at a Type 0.7 Civilization).
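Sagan's formula, for reference (it isn't spelled out in the article), is K = (log₁₀ P − 6) / 10, where P is the civilization's total power use in watts. Humanity's roughly 2×10¹³ W gives K ≈ 0.73, which is where the "Type 0.7" figure comes from.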

A Type II Civilization can harness all of the energy of their host star. Our feeble Type I brains can hardly imagine how someone would do this, but we've tried our best, imagining things like a Dyson Sphere.


A Type III Civilization blows the other two away, accessing power comparable to that of the entire Milky Way galaxy.

If this level of advancement sounds hard to believe, remember Planet X above and their 3.4 billion years of further development (about half a million times as long as the human race has been around). If a civilization on Planet X were similar to ours and were able to survive all the way to Type III level, the natural assumption is that they'd probably have mastered inter-stellar travel by now, possibly even colonizing the entire galaxy.

One hypothesis as to how galactic colonization could happen is by creating machinery that can travel to other planets, spend 500 years or so self-replicating using the raw materials on their new planet, and then send two replicas off to do the same thing. Even without traveling anywhere near the speed of light, this process would colonize the whole galaxy in 3.75 million years, a relative blink of an eye when talking in the scale of billions of years:


Source: J. Schombert, U. Oregon

Continuing to speculate, if 1% of intelligent life survives long enough to become a potentially galaxy-colonizing Type III Civilization, our calculations above suggest that there should be at least 1,000 Type III Civilizations in our galaxy alone—and given the power of such a civilization, their presence would likely be pretty noticeable. And yet, we see nothing, hear nothing, and we're visited by no one.
So where is everybody?

Welcome to the Fermi Paradox.

We have no answer to the Fermi Paradox—the best we can do is "possible explanations." And if you ask ten different scientists what their hunch is about the correct one, you'll get ten different answers. You know when you hear about humans of the past debating whether the Earth was round or if the sun revolved around the Earth or thinking that lightning happened because of Zeus, and they seem so primitive and in the dark? That's about where we are with this topic.

In taking a look at some of the most-discussed possible explanations for the Fermi Paradox, let's divide them into two broad categories—those explanations which assume that there's no sign of Type II and Type III Civilizations because there are none of them out there, and those which assume they're out there and we're not seeing or hearing anything for other reasons:

Explanation Group 1: There are no signs of higher (Type II and III) civilizations because there are no higher civilizations in existence.

Those who subscribe to Group 1 explanations point to something called the non-exclusivity problem, which rebuffs any theory that says, "There are higher civilizations, but none of them have made any kind of contact with us because they all _____." Group 1 people look at the math, which says there should be so many thousands (or millions) of higher civilizations that at least one of them would be an exception to the rule. Even if a theory held for 99.99% of higher civilizations, the other 0.01% would behave differently and we'd become aware of their existence.

Therefore, say Group 1 explanations, it must be that there are no super-advanced civilizations. And since the math suggests that there are thousands of them just in our own galaxy, something else must be going on.

This something else is called The Great Filter.

The Great Filter theory says that at some point from pre-life to Type III intelligence, there's a wall that all or nearly all attempts at life hit. There's some stage in that long evolutionary process that is extremely unlikely or impossible for life to get beyond. That stage is The Great Filter.


If this theory is true, the big question is, Where in the timeline does the Great Filter occur?

It turns out that when it comes to the fate of humankind, this question is very important. Depending on where The Great Filter occurs, we're left with three possible realities: We're rare, we're first, or we're fucked.

1. We're Rare (The Great Filter is Behind Us)

One hope we have is that The Great Filter is behind us—we managed to surpass it, which would mean it's extremely rare for life to make it to our level of intelligence. The diagram below shows only two species making it past, and we're one of them.


This scenario would explain why there are no Type III Civilizations…but it would also mean that we could be one of the few exceptions now that we've made it this far. It would mean we have hope. On the surface, this sounds a bit like people 500 years ago suggesting that the Earth is the center of the universe—it implies that we're special. However, something scientists call the "observation selection effect" says that anyone who is pondering their own rarity is inherently part of an intelligent life "success story"—and whether they're actually rare or quite common, the thoughts they ponder and conclusions they draw will be identical. This forces us to admit that being special is at least a possibility.

And if we are special, when exactly did we become special—i.e. which step did we surpass that almost everyone else gets stuck on?

One possibility: The Great Filter could be at the very beginning—it might be incredibly unusual for life to begin at all. This is a candidate because it took about a billion years of Earth's existence to finally happen, and because we have tried extensively to replicate that event in labs and have never been able to do it. If this is indeed The Great Filter, it would mean that not only is there no intelligent life out there, there may be no other life at all.

Another possibility: The Great Filter could be the jump from the simple prokaryote cell to the complex eukaryote cell. After prokaryotes came into being, they remained that way for almost two billion years before making the evolutionary jump to being complex and having a nucleus. If this is The Great Filter, it would mean the universe is teeming with simple prokaryote cells and almost nothing beyond that.

There are a number of other possibilities—some even think the most recent leap we've made to our current intelligence is a Great Filter candidate. While the leap from semi-intelligent life (chimps) to intelligent life (humans) doesn't at first seem like a miraculous step, Steven Pinker rejects the idea of an inevitable "climb upward" of evolution: "Since evolution does not strive for a goal but just happens, it uses the adaptation most useful for a given ecological niche, and the fact that, on Earth, this led to technological intelligence only once so far may suggest that this outcome of natural selection is rare and hence by no means a certain development of the evolution of a tree of life."

Most leaps do not qualify as Great Filter candidates. Any possible Great Filter must be a one-in-a-billion type thing where one or more total freak occurrences need to happen to provide a crazy exception—for that reason, something like the jump from single-cell to multi-cellular life is ruled out, because it has occurred as many as 46 times, in isolated incidents, just on this planet alone. For the same reason, if we were to find a fossilized eukaryote cell on Mars, it would rule the above "simple-to-complex cell" leap out as a possible Great Filter (as well as anything before that point on the evolutionary chain)—because if it happened on both Earth and Mars, it's clearly not a one-in-a-billion freak occurrence.

If we are indeed rare, it could be because of a fluky biological event, but it also could be attributed to what is called the Rare Earth Hypothesis, which suggests that though there may be many Earth-like planets, the particular conditions on Earth—whether related to the specifics of this solar system, its relationship with the moon (a moon that large is unusual for such a small planet and contributes to our particular weather and ocean conditions), or something about the planet itself—are exceptionally friendly to life.

2. We're the First


For Group 1 Thinkers, if the Great Filter is not behind us, the one hope we have is that conditions in the universe are just recently, for the first time since the Big Bang, reaching a place that would allow intelligent life to develop. In that case, we and many other species may be on our way to super-intelligence, and it simply hasn't happened yet. We happen to be here at the right time to become one of the first super-intelligent civilizations.

One example of a phenomenon that could make this realistic is the prevalence of gamma-ray bursts, insanely huge explosions that we've observed in distant galaxies. In the same way that it took the early Earth a few hundred million years before the asteroids and volcanoes died down and life became possible, it could be that the first chunk of the universe's existence was full of cataclysmic events like gamma-ray bursts that would incinerate everything nearby from time to time and prevent any life from developing past a certain stage. Now, perhaps, we're in the midst of an astrobiological phase transition and this is the first time any life has been able to evolve for this long, uninterrupted.

3. We're Fucked (The Great Filter is Ahead of Us)

If we're neither rare nor early, Group 1 thinkers conclude that The Great Filter must be in our future. This would imply that life regularly evolves to where we are, but that something prevents life from going much further and reaching high intelligence in almost all cases—and we're unlikely to be an exception.

One possible future Great Filter is a regularly-occurring cataclysmic natural event, like the above-mentioned gamma-ray bursts, except they're unfortunately not done yet and it's just a matter of time before all life on Earth is suddenly wiped out by one. Another candidate is the possible inevitability that nearly all intelligent civilizations end up destroying themselves once a certain level of technology is reached.

This is why Oxford University philosopher Nick Bostrom says that "no news is good news." The discovery of even simple life on Mars would be devastating, because it would cut out a number of potential Great Filters behind us. And if we were to find fossilized complex life on Mars, Bostrom says "it would be by far the worst news ever printed on a newspaper cover," because it would mean The Great Filter is almost definitely ahead of us—ultimately dooming the species. Bostrom believes that when it comes to The Fermi Paradox, "the silence of the night sky is golden."

Explanation Group 2: Type II and III intelligent civilizations are out there—and there are logical reasons why we might not have heard from them.

Group 2 explanations get rid of any notion that we're rare or special or the first at anything—on the contrary, they believe in the Mediocrity Principle, whose starting point is that there is nothing unusual or rare about our galaxy, solar system, planet, or level of intelligence, until evidence proves otherwise. They're also much less quick to assume that the lack of evidence of higher-intelligence beings is evidence of their nonexistence—emphasizing the fact that our search for signals stretches only about 100 light years away from us (0.1% of the way across the galaxy) and has only been going on for under a century, a tiny amount of time. Group 2 thinkers have come up with a large array of possible explanations for the Fermi Paradox. Here are 10 of the most discussed:

Possibility 1) Super-intelligent life could very well have already visited Earth, but before we were here. In the scheme of things, sentient humans have only been around for about 50,000 years, a little blip of time. If contact happened before then, it might have made some ducks flip out and run into the water and that's it. Further, recorded history only goes back 5,500 years—a group of ancient hunter-gatherer tribes may have experienced some crazy alien shit, but they had no good way to tell anyone in the future about it.

Possibility 2) The galaxy has been colonized, but we just live in some desolate rural area of the galaxy. The Americas may have been colonized by Europeans long before anyone in a small Inuit tribe in far northern Canada realized it had happened. There could be an urbanization component to the interstellar dwellings of higher species, in which all the neighboring solar systems in a certain area are colonized and in communication, and it would be impractical and purposeless for anyone to deal with coming all the way out to the random part of the spiral where we live.

Possibility 3) The entire concept of physical colonization is a hilariously backward concept to a more advanced species. Remember the picture of the Type II Civilization above with the sphere around their star? With all that energy, they might have created a perfect environment for themselves that satisfies their every need. They might have hyper-advanced ways of reducing their need for resources and zero interest in leaving their happy utopia to explore the cold, empty, undeveloped universe.

An even more advanced civilization might view the entire physical world as a horribly primitive place, having long ago conquered their own biology and uploaded their brains to a virtual reality, eternal-life paradise. Living in the physical world of biology, mortality, wants, and needs might seem to them the way we view primitive ocean species living in the frigid, dark sea. FYI, thinking about another life form having bested mortality makes me incredibly jealous and upset.

Possibility 4) There are scary predator civilizations out there, and most intelligent life knows better than to broadcast any outgoing signals and advertise their location. This is an unpleasant concept and would help explain the lack of any signals being received by the SETI satellites. It also means that we might be the super naive newbies who are being unbelievably stupid and risky by ever broadcasting outward signals. There's a debate going on currently about whether we should engage in METI (Messaging to Extraterrestrial Intelligence—the reverse of SETI, which only listens) or not, and most people say we should not. Stephen Hawking warns, "If aliens visit us, the outcome would be much as when Columbus landed in America, which didn't turn out well for the Native Americans." Even Carl Sagan (a general believer that any civilization advanced enough for interstellar travel would be altruistic, not hostile) called the practice of METI "deeply unwise and immature," and recommended that "the newest children in a strange and uncertain cosmos should listen quietly for a long time, patiently learning about the universe and comparing notes, before shouting into an unknown jungle that we do not understand." Scary.[2]

Possibility 5) There's one and only one instance of higher-intelligent life—a "superpredator" civilization (kind of like humans are here on Earth)—who is far more advanced than everyone else and keeps it that way by exterminating any intelligent civilization once they get past a certain level. This would suck. The way it might work is that it's an inefficient use of resources to exterminate all emerging intelligences, maybe because most die out on their own. But past a certain point, the super beings make their move—because to them, an emerging intelligent species becomes like a virus as it starts to grow and spread. This theory suggests that whoever was the first in the galaxy to reach intelligence won, and now no one else has a chance. This would explain the lack of activity out there because it would keep the number of super-intelligent civilizations to just one.

Possibility 6) There's plenty of activity and noise out there, but our technology is too primitive and we're listening for the wrong things. Like walking into a modern-day office building, turning on a walkie-talkie, and when you hear no activity (which of course you wouldn't hear because everyone's texting, not using walkie-talkies), determining that the building must be empty. Or maybe, as Carl Sagan has pointed out, it could be that our minds work exponentially faster or slower than another form of intelligence out there—e.g. it takes them 12 years to say "Hello," and when we hear that communication, it just sounds like white noise to us.

Possibility 7) We are receiving contact from other intelligent life, but the government is hiding it. This is an idiotic theory, but I had to mention it because it's talked about so much.

Possibility 8) Higher civilizations are aware of us and observing us but concealing themselves from us (AKA the "Zoo Hypothesis"). For all we know, super-intelligent civilizations exist in a tightly-regulated galaxy, and our Earth is treated like part of a vast and protected national park, with a strict "Look but don't touch" rule for planets like ours. We wouldn't be aware of them, because if a far smarter species wanted to observe us, it would know how to easily do so without us noticing. Maybe there's a rule similar to Star Trek's "Prime Directive" which prohibits super-intelligent beings from making any open contact with lesser species like us or revealing themselves in any way, until the lesser species has reached a certain level of intelligence.

Possibility 9) Higher civilizations are here, all around us, but we're too primitive to perceive them. Michio Kaku sums it up like this:

Let's say we have an ant hill in the middle of the forest. And right next to the ant hill, they're building a ten-lane super-highway. And the question is "Would the ants be able to understand what a ten-lane super-highway is? Would the ants be able to understand the technology and the intentions of the beings building the highway next to them?"

So it's not that we can't pick up the signals from Planet X using our technology, it's that we can't even comprehend what the beings from Planet X are or what they're trying to do. It's so beyond us that even if they really wanted to enlighten us, it would be like trying to teach ants about the internet.

Along those lines, this may also be an answer to "Well if there are so many fancy Type III Civilizations, why haven't they contacted us yet?" To answer that, let's ask ourselves—when Pizarro made his way into Peru, did he stop for a while at an anthill to try to communicate? Was he magnanimous, trying to help the ants in the anthill? Did he become hostile and slow his original mission down in order to smash the anthill apart? Or was the anthill of complete and utter and eternal irrelevance to Pizarro? That might be our situation here.

Possibility 10) We're completely wrong about our reality. There are a lot of ways we could just be totally off with everything we think. The universe might appear one way and be something else entirely, like a hologram. Or maybe we're the aliens and we were planted here as an experiment or as a form of fertilizer. There's even a chance that we're all part of a computer simulation by some researcher from another world, and other forms of life simply weren't programmed into the simulation.

As we continue along with our possibly-futile search for extraterrestrial intelligence, I'm not really sure what I'm rooting for. Frankly, learning either that we're officially alone in the universe or that we're officially joined by others would be creepy, which is a theme with all of the surreal storylines listed above—whatever the truth actually is, it's mindblowing.

Beyond its shocking science fiction component, The Fermi Paradox also leaves me with a deep humbling. Not just the normal "Oh yeah, I'm microscopic and my existence lasts for three seconds" humbling that thinking about the universe always triggers. The Fermi Paradox brings out a sharper, more personal humbling, one that can only happen after spending hours of research hearing your species' most renowned scientists present insane theories, change their minds again and again, and wildly contradict each other—reminding us that future generations will look at us in the same way we see the ancient people who were sure that the stars were the underside of the dome of heaven, and they'll think "Wow they really had no idea what was going on."

Compounding all of this is the blow to our species' self-esteem that comes with all of this talk about Type II and III Civilizations. Here on Earth, we're the king of our little castle, proud ruler of the huge group of imbeciles who share the planet with us. And in this bubble with no competition and no one to judge us, it's rare that we're ever confronted with the concept of being a dramatically inferior species to anyone. But after spending a lot of time with Type II and III Civilizations over the past week, our power and pride are seeming a bit David Brent-esque.

That said, given that my normal outlook is that humanity is a lonely orphan on a tiny rock in the middle of a desolate universe, the humbling fact that we're probably not as smart as we think we are, and the possibility that a lot of what we're sure about might be wrong, sounds wonderful. It opens the door just a crack that maybe, just maybe, there might be more to the story than we realize.

General Discussion / worth it
« on: July 20, 2014, 02:14:09 PM »
Jon Stewart and Stephen Colbert battle for title of World's Biggest Star Wars Fan!

General Discussion / Silicon Valley
« on: June 02, 2014, 12:18:42 AM »
Season finale tonight. Pretty damn good first season, though slightly slow start. Didn't know one of the main dudes had already died. Sucks. Renewed for second season though.

Tech Heads / Git and Subversion
« on: May 20, 2014, 04:22:05 PM »
where's best place for explanation for both? Just doing simple BASH and PowerShell scripts.
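
For simple bash and PowerShell scripts, the everyday git workflow is only a handful of commands. A minimal sketch (assumes git is installed; the directory name, file name, and commit message are purely illustrative):

```shell
# Put a scripts directory under git version control and make a first commit.
set -e
repo=$(mktemp -d)/scripts
mkdir -p "$repo"
git init -q "$repo" && cd "$repo"       # create the repository
git config user.email "you@example.com" # commit identity (required once per repo/user)
git config user.name  "Your Name"
printf 'echo hello\n' > backup.sh       # an example bash script to track
git add backup.sh                       # stage the file
git commit -q -m "Add backup.sh"        # record a snapshot
git log --oneline                       # one line per commit in history
```

Subversion covers the same ground with `svn checkout`, `svn add`, and `svn commit`, but needs a central server, whereas the git repo above is entirely local—which is usually the simpler fit for a personal scripts folder.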

General Discussion / Furnace questions
« on: May 15, 2014, 09:45:05 PM »
Don't remember if I've asked this before or not, quick search of GD says no. Have a gas furnace in my rental house. It has a nasty habit of turning on when the internal temp is >80-85 degrees. I think it's just the fan, as hot air (or at least hotter than ambient) is not blowing. There is no fan/auto control on the thermostat. It's just a wheel-type with internal temp below and setting above, which is off. Yet, it still runs. Why does it do this? Should I kill the pilot light and unplug it?

In the furnace closet, near the pilot light there is a simple toggle switch that says Summer: on/off. This switch appears to do nothing. What is it for?

General Discussion / MOVED: guess
« on: May 15, 2014, 01:54:41 PM »

Tech Heads / Kamara, Cry me a river of SQL suggestions...
« on: May 03, 2014, 03:57:22 PM »
IIRC you are/were an MS SQL guy. T/F?

If so, best way to learn this? I am taking an Exchange class this next week, but I feel like to be a well rounded admin, plus a project I am assigned to this year, I need to know MS SQL. What is the difference (if any) between an Admin maintaining a SQL server, and a SQL Server Admin?

General Discussion / [NERD] Real History of Science Fiction
« on: April 20, 2014, 06:21:17 PM »
Reminder BBCA

started yesterday. missed it, but got it recording now.

Tech Heads / Time to take a minute and reflect...
« on: February 20, 2014, 01:04:14 PM »
System Administrator III promotion went through today

feels good man



Scientists achieve nuclear fusion with giant laser
A successful nuclear fusion process could help solve the world's energy problems

By Lucas Mearian
February 13, 2014 06:24 AM ET

Computerworld - Researchers at the Lawrence Livermore National Laboratory Wednesday said they've achieved a first: A nuclear fusion system has produced more energy than it initially absorbed.

While that may seem a small victory, it is the first time scientists have been able to replicate, to a small degree, the same process that the Sun and stars use to create their massive amounts of energy.

The research, published in the peer-reviewed journal Nature, involved a petawatt power laser used to try to ignite fusion plasma fuel in a confined space. Each pulse of the laser, which delivered peak power of 1,000,000,000,000,000 watts, lasted less than 30 femtoseconds, or 0.00000000000003 seconds.

The laser squeezes hydrogen atoms together producing helium atoms, and in the process a massive amount of energy is released.

A fusion reaction is markedly different from fission reactions that are used in today's nuclear reactors. Instead of splitting atoms as fission does, fusion bonds atoms.

With fusion, only a tiny amount of fuel is present at any given time (typically about a milligram), according to Mike Dunne, director for Laser Fusion Energy at Lawrence Livermore Labs.

The laser, known as the National Ignition Facility (NIF), uses 192 beams 300 yards long that focus on a fuel cell about the diameter of a No. 2 pencil.


The hohlraum
A metallic case called a hohlraum holds the fuel capsule for NIF experiments. Target handling systems precisely position the target and freeze it to cryogenic temperatures (18 kelvins, or -427 degrees Fahrenheit) so that a fusion reaction is more easily achieved.

While powerful, the laser has not yet been able to ignite the plasma fuel. When and if it does, the fuel would begin to burn in a self-sustaining reaction to such a degree that it will produce a megajoule of energy.

Producing that staggering amount of energy could help to solve the world's energy issues.

"Think of it like the gas in the piston chamber of your car, where the idea is to ignite all the fuel to produce an efficient burn. So it is 'self-sustaining' but can never be 'run-away.' In the case of laser fusion, the burn time is incredibly short - typically a few tens of picoseconds," Dunne said in an email to Computerworld.

The researchers' latest victory marks the accomplishment of a key goal on the way to plasma fuel ignition: the project generated energy through a fusion reaction that exceeded the amount of energy deposited into deuterium-tritium fusion fuel and hotspot during the implosion process, "resulting in a fuel gain greater than unity," the team stated in the Nature article.

"Ignition is the ultimate goal of the experiments, so the latest result marks a waypoint on the way to that point (albeit quite a significant waypoint)," Dunne said.

The next significant step for the research is to achieve an "alpha burn," where the fusion output more than doubles the energy input to the fuel. In an alpha burn, the researchers hope to pass a particular threshold of energy output -- specifically 10,000,000,000,000,000 fusion reactions.

"We are currently a few percent below this value," Dunne said.

Once ignition is achieved, it promises a path toward a sustainable, environmentally sound energy source that would exceed any previously created.

"There are a number of possible paths forward," Dunne said, "and it will require a close partnership between industry and government. But in principle, because the NIF was built at the same scale as the fusion performance needed for a power plant, the leap is not as great as you may think."

Dunne believes that once nuclear fusion energy is achieved, there will be an overriding push to capitalize on its success.

"It is, after all, often called the 'holy grail' of energy sources," he said.

Lucas Mearian covers storage, disaster recovery and business continuity, financial services infrastructure and health care IT for Computerworld.

Tech Heads / Linux Application Server (cluster) management
« on: January 28, 2014, 11:59:11 AM »
OK, so the other thread got me started and was very helpful. Got the PDC online yesterday, hosting DNS and DHCP, as well as Domain Services. The File server comes online tonight, becoming an "SDC" file and print server. Next up here, my application server, that will start off hosting Symantec antivirus as possibly a Hyper-V vm. Thanks all.

Now, I find myself in the dark arts of Linux management. I was slightly outclassed for the Windows role; now I am a bug amongst giants. I have a very high-level understanding of how my current job does this:

We have 4 general purpose shared Linux servers that anyone can get to/use, and are grid submission hosts if needed, but are locked down such that anything run on them (since they are shared) gets a RAM and/or CPU and/or time cap. Any ideas how that happens/works?
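
One common mechanism for per-process caps on shared login hosts is shell resource limits—`ulimit` in the shell, backed by the kernel's `setrlimit`, with site-wide defaults typically set in `/etc/security/limits.conf` via pam_limits. A sketch, assuming bash on Linux; the values are made up for illustration:

```shell
# Per-process resource caps as a login shell might apply them.
# Child processes inherit these limits, so everything a user runs is covered.
ulimit -t 600       # cap CPU time at 600 seconds per process
ulimit -v 2097152   # cap virtual memory at ~2 GB (value is in KB)
ulimit -t           # print the CPU-time limit now in effect
```

Once set, the limits only ratchet down for that session—a user can lower them further but not raise them past the hard limit—which is what makes this workable on a shared box. Modern setups often layer cgroups on top for group-wide CPU and memory shares.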

Then we have our grid. IANA HPC guy. Zero background. I know there are 3 "queues" that determine how much power you get assigned to your job, where we request people use the smallest unit that will work, obviously. Is this something native to all grid setups, or is it similar/same to above?

I need to figure out what resources to buy. I've never been in charge of something like this. Initial start-up will be about 10-20 people. Final numbers not expected to reach over 100, at this time; no idea on that growth timetable either. Going to start out with 1 File Server (and maybe 2nd Domain Controller and/or application server for Antivirus), 1 DC with Print services - all Windows OS based. I will also have 1 Mac Mini server to support machine imaging with DeployStudio. I can handle the Mac Server as it won't need too many resources. I'm more worried about the Windows needs and adequately meeting them, but still leaving a bit to grow in the near term.

Other considerations will possibly be encrypted drives for the data.

General Discussion / Shannara fans?
« on: December 08, 2013, 11:41:49 PM »
I only read Elfstones when I was low teens, but loved it.

Terry Brooks Shannara Novels to Be Adapted for MTV
By Rachel Edidin
6:11 PM

MTV announced today that it’s working with Jon Favreau and Smallville co-creators Al Gough and Miles Millar to develop a series based on Shannara, the bestselling fantasy series by Terry Brooks. They’re serious about it, too: apparently the deal involves a straight-to-series commitment, bypassing the pilot stage altogether.

Shannara makes sense — MTV is clearly out to duplicate the success of fellow fantasy series Game of Thrones, and, according to the Hollywood Reporter, Brooks’ 25-volume saga is the world’s best-selling un-adapted fantasy book series.

On the other hand, one of the reasons Shannara has thus far stayed off the screen is that it’s just not all that interesting. It’s not bad, but it is generic: there’s not a lot that distinguishes it from its genre neighbors, and while it’s consistently sold well, it lacks the innovative world-building and die-hard fan community that have propelled A Song of Ice and Fire and kept series like Robert Jordan’s Wheel of Time on the bestseller list for decades. There are dozens of series that fit the Game of Thrones-for-the-tween-crowd model MTV’s obviously shooting for that would make for infinitely more interesting shows.

The irony is that Brooks has written a series that would make for fantastic television — not Shannara, but the lesser-known Landover books, which chronicle the adventures of a man who buys his very own fantasy kingdom through a luxury-goods catalog. It’s a clever twist on the high fantasy genre, and the misadventures of a modern lawyer grappling with the lemon of a fairyland he’s bought his way into ruling seem like a natural fit for television, and a possible antidote to fantasy-meets-modern-life misfires like Grimm – or at least a more interesting option than Shannara.

Just touches on a few things, but sounds like some good reading.

This Is the Man Bill Gates Thinks You Absolutely Should Be Reading

“There is no author whose books I look forward to more than Vaclav Smil,” Bill Gates wrote this summer. That’s quite an endorsement—and it gave a jolt of fame to Smil, a professor emeritus of environment and geography at the University of Manitoba. In a world of specialized intellectuals, Smil is an ambitious and astonishing polymath who swings for fences. His nearly three dozen books have analyzed the world’s biggest challenges—the future of energy, food production, and manufacturing—with nuance and detail. They’re among the most data-heavy books you’ll find, with a remarkable way of framing basic facts. (Sample nugget: Humans will consume 17 percent of what the biosphere produces this year.)

His conclusions are often bleak. He argues, for instance, that the demise of US manufacturing dooms the country not just intellectually but creatively, because innovation is tied to the process of making things. (And, unfortunately, he has the figures to back that up.) WIRED got Smil’s take on the problems facing America and the world.

You’ve written over 30 books and published three this year alone. How do you do it?

Hemingway knew the secret. I mean, he was a lush and a bad man in many ways, but he knew the secret. You get up and, first thing in the morning, you do your 500 words. Do it every day and you’ve got a book in eight or nine months.

What draws you to such big, all-encompassing subjects?

I saw how the university life goes, both in Europe and then in the US. I was at Penn State, and I was just aghast, because everyone was what I call drillers of deeper wells. These academics sit at the bottom of a deep well and they look up and see a sliver of the sky. They know everything about that little sliver of sky and nothing else. I scan all my horizons.

Let’s talk about manufacturing. You say a country that stops doing mass manufacturing falls apart. Why?

In every society, manufacturing builds the lower middle class. If you give up manufacturing, you end up with haves and have-nots and you get social polarization. The whole lower middle class sinks.

You also say that manufacturing is crucial to innovation.

Most innovation is not done by research institutes and national laboratories. It comes from manufacturing—from companies that want to extend their product reach, improve their costs, increase their returns. What’s very important is in-house research. Innovation usually arises from somebody taking a product already in production and making it better: better glass, better aluminum, a better chip. Innovation always starts with a product.

Look at LCD screens. Most of the advances are coming from big industrial conglomerates in Korea like Samsung or LG. The only good thing in the US is Gorilla Glass, because it’s Corning, and Corning spends $700 million a year on research.

American companies do still innovate, though. They just outsource the manufacturing. What’s wrong with that?

Look at the crown jewel of Boeing now, the 787 Dreamliner. The plane had so many problems—it was like three years late. And why? Because large parts of it were subcontracted around the world. The 787 is not a plane made in the USA; it’s a plane assembled in the USA. They subcontracted composite materials to Italians and batteries to the Japanese, and the batteries started to burn in-flight. The quality control is not there.

Can IT jobs replace the lost manufacturing jobs?

No, of course not. These are totally fungible jobs. You could hire people in Russia or Malaysia—and that’s what companies are doing.

Restoring manufacturing would mean training Americans again to build things.

Only two countries have done this well: Germany and Switzerland. They’ve both maintained strong manufacturing sectors and they share a key thing: Kids go into apprentice programs at age 14 or 15. You spend a few years, depending on the skill, and you can make BMWs. And because you started young and learned from the older people, your products can’t be matched in quality. This is where it all starts.

You claim Apple could assemble the iPhone in the US and still make a huge profit.

It’s no secret! Apple has tremendous profit margins. They could easily do everything at home. The iPhone isn’t manufactured in China—it’s assembled in China from parts made in the US, Germany, Japan, Malaysia, South Korea, and so on. The cost there isn’t labor. But laborers must be sufficiently dedicated and skilled to sit on their ass for eight hours and solder little pieces together so they fit perfectly.

But Apple is supposed to be a giant innovator.

Apple! Boy, what a story. No taxes paid, everything made abroad—yet everyone worships them. This new iPhone, there’s nothing new in it. Just a golden color. What the hell, right? When people start playing with color, you know they’re played out.

Let’s talk about energy. You say alternative energy can’t scale. Is there no role for renewables?

I like renewables, but they move slowly. There’s an inherent inertia, a slowness in energy transitions. It would be easier if we were still consuming 66,615 kilowatt-hours per capita, as in 1950. But in 1950 few people had air-conditioning. We’re a society that demands electricity 24/7. This is very difficult with sun and wind.

Look at Germany, where they heavily subsidize renewable energy. When there’s no wind or sun, they boost up their old coal-fired power plants. The result: Germany has massively increased coal imports from the US, and German greenhouse gas emissions have been increasing, from 917 million metric tons in 2011 to 931 million in 2012, because they’re burning American coal. It’s totally zany!

What about nuclear?

The Chinese are building it, the Indians are building it, the Russians have some intention to build. But as you know, the US is not. The last big power plant was ordered in 1974. Germany is out, Italy has vowed never to build one, and even France is delaying new construction. Is it a nice thought that the future of nuclear energy is now in the hands of North Korea, Pakistan, India, and Iran? It’s a depressing thought, isn’t it?

The basic problem was that we rushed into nuclear power. We took Hyman Rickover’s reactor for submarines and pushed it so America would beat Russia. And that’s just the wrong reactor. It was done too fast with too little forethought.

You call this Moore’s curse—the idea that if we’re innovative enough, everything can have yearly efficiency gains.

It’s a categorical mistake. You just cannot increase the efficiency of power plants like that. You have your combustion machines—the best one in the lab now is about 40 percent efficient. In the field they’re about 15 or 20 percent efficient. Well, you can’t quintuple it, because that would be 100 percent efficient. Impossible, right? There are limits. It’s not a microchip.

The same thing is true in agriculture. You cannot increase the efficiency of photosynthesis. We improve the performance of farms by irrigating them and fertilizing them to provide all these nutrients. But we cannot keep on doubling the yield every two years. Moore’s law doesn’t apply to plants.

So what’s left? Making products more energy-efficient?

Innovation is making products more energy-efficient — but then we consume so many more products that there’s been no absolute dematerialization of anything. We still consume more steel, more aluminum, more glass, and so on. As long as we’re on this endless material cycle, this merry-go-round, well, technical innovation cannot keep pace.

Yikes. So all we’ve got left is reducing consumption. But who’s going to do that?

My wife and I did. We downscaled our house. It took me two years to find a subdivision where they'd let me build a custom house smaller than 2,000 square feet. And I'll test you: What is the simplest way to make your house super-efficient?

Insulation?

Right. I have 50 percent more insulation in my walls. It adds very little to the cost. And you insulate your basement from the outside—I have about 20 inches of Styrofoam on the outside of that concrete wall. We were the first people building on our cul-de-sac, so I saw all the other houses after us—much bigger, 3,500 square feet. None of them were built properly. I pay in a year for electricity what they pay in January. You can have a super-efficient house; you can have a super-efficient car, a little Honda Civic, 40 miles per gallon.

Your other big subject is food. You’re a pretty grim thinker, but this is your most optimistic area. You actually think we can feed a planet of 10 billion people—if we eat less meat and waste less food.

We pour all this energy into growing corn and soybeans, and then we put all that into rearing animals while feeding them antibiotics. And then we throw away 40 percent of the food we produce.

Meat eaters don’t like me because I call for moderation, and vegetarians don’t like me because I say there’s nothing wrong with eating meat. It’s part of our evolutionary heritage! Meat has helped to make us what we are. Meat helps to make our big brains. The problem is with eating 200 pounds of meat per capita per year. Eating hamburgers every day. And steak.

You know, you take some chicken breast, cut it up into little cubes, and make a Chinese stew—three people can eat one chicken breast. When you cut meat into little pieces, as they do in India, China, and Malaysia, all you need to eat is maybe like 40 pounds a year.

So finally, some good news from you!

Except for antibiotic resistance, which is terrible. Some countries that grow lots of pork, like Denmark and the Netherlands, are either eliminating antibiotics or reducing them. We have to do that. Otherwise we’ll create such antibiotic resistance, it will be just terrible.

So the answers are not technological but political: better economic policies, better education, better trade policies.

Right. Today, as you know, everything is “innovation.” We have problems, and people are looking for fairy-tale solutions—innovation like manna from heaven falling on the Israelites and saving them from the desert. It’s like, “Let’s not reform the education system, the tax system. Let’s not improve our dysfunctional government. Just wait for this innovation manna from a little group of people in Silicon Valley, preferably of Indian origin.”

You people at WIRED—you’re the guilty ones! You support these people, you write about them, you elevate them onto the cover! You really messed it up. I tell you, you pushed this on the American public, right? And people believe it now.

Bill Gates reads you a lot. Who are you writing for?

I have no idea. I just write.

General Discussion / [SCIENCE!] First teleporter?
« on: November 18, 2013, 01:31:29 PM »
Haven't really read this yet either. Enjoy.

MOJAVE NATIONAL PRESERVE, Calif. — J. Craig Venter, the maverick scientist, is looking for a new world to conquer — Mars. He wants to detect life on Mars and bring it to Earth using a device called a digital biological converter, or biological teleporter.

Although the idea conjures up “Star Trek,” the analogy is not exact. The transporter on that program actually moves Captain Kirk from one location to another. Dr. Venter’s machine would merely create a copy of an organism from a distant location — more like a biological fax machine.

Still, Dr. Venter, known for his early sequencing of the human genome and for his bold proclamations, predicts the biological converter will be his next innovation and will be useful on Earth well before it could ever be deployed on the red planet.

The idea behind it, not original to him, is that the genetic code that governs life can be stored in a computer and transmitted just like any other information.

Dr. Venter’s system would determine the sequence of the DNA units in an organism’s genome and transmit that information electronically. At the distant location, the genome would be synthesized — or chemically recreated — inserted into what amounts to a blank cell, and “booted up,” as Dr. Venter puts it. In other words, the inserted DNA would take command of the cell and recreate a copy of the original organism.

To test some ideas, he and a small team of scientists from his company and from NASA spent the weekend here in the Mojave Desert, the closest stand-in they could find for the dry surface of Mars.

The biological fax is not as far-fetched as it seems. DNA sequencing and DNA synthesis are rapidly becoming faster and cheaper. For now, however, synthesizing an organism’s entire genome is still generally too difficult. So the system will first be used to remotely clone individual genes, or perhaps viruses. Single-celled organisms like bacteria might come later. More complex creatures, earthly or Martian, will probably never be possible.

Dr. Venter’s company, Synthetic Genomics, and his namesake nonprofit research institute have already used the technology to help develop an experimental vaccine for the H7N9 bird flu with the drug maker Novartis.

Typically, when a new strain of flu virus appears, scientists must transport it to labs, which can spend weeks perfecting a strain that can be grown in eggs or animal cells to make vaccine.

But when H7N9 appeared in China in February, its genome was sequenced by scientists there and made publicly available. Within days, Dr. Venter’s team had synthesized the two main genes and used them to make a vaccine strain, without having to wait for the virus to arrive from China.

Dr. Venter said Synthetic Genomics would start selling a machine next year that would automate the synthesis of genes by stringing small pieces of DNA together to make larger ones.

Eventually, he said, “we’ll have a small box like a printer attached to your computer.” A person with a bacterial infection might be sent the code to recreate a virus intended to kill that specific bacterium.

“We can send an antibiotic as an email,” said Dr. Venter, who has outlined his ideas in a new book, “Life at the Speed of Light: From the Double Helix to the Dawn of Digital Life.” Proteins might also be made, so that diabetics, for instance, could “download insulin from the Internet.”

Dr. Venter, 67, has many scientific achievements — though critics deride some of them as stunts — but has had less success converting his ideas into successful businesses.

A previous company, Celera Genomics, raced the federally funded Human Genome Project to determine the complete DNA sequence in human chromosomes. The race was declared a tie in 2000, but Celera could not sustain a business selling the genomic information.

A deal worth up to $600 million that Synthetic Genomics made with Exxon Mobil in 2009 to produce biofuels using algae has been scaled back to a research project.

In 2010, Dr. Venter made headlines by creating what some considered the first man-made life. His team synthesized the genome of one species of bacterium and transplanted it into a slightly different species. The transplanted DNA took command of its new host cell, which then multiplied, passing on the synthetic genome.

Critics said Dr. Venter had not really created life, just copied it. Dr. Venter said in the interview that while he did not create life from scratch, he had created a new type of life.

“DNA is the software of life, and to get new life, you just have to change the software,” he said.

Dr. Venter said his team was designing a genome that was not a copy of an existing one and trying to insert it into a host cell. “It’s not alive yet,” he said. “We’re close.”

George Church, a professor of genetics at Harvard Medical School, said there was nothing unique about Dr. Venter’s work so far because others had already synthesized viruses based on DNA sequence information available on the Internet.

“Most people in the past didn’t call it teleportation,” he said, “but if you want to, fine.”

He also questioned the utility of doing genome engineering to make a copy of something, rather than “doing genome engineering to make something new and exotic and potentially useful.”

Space exploration is one area where the teleporter might be especially useful. It would be extremely costly and time-consuming to send a medicine physically to a colonist on another planet who becomes sick. And it would be difficult to send a sample from Mars back to Earth.

That is why Dr. Venter’s team was camped here this weekend, about 200 miles northeast of Los Angeles. The mission was to find microbial life in the desert, determine its sequence and transmit it to Synthetic Genomics’ headquarters in San Diego.

This dry run was far from the automated process that would be needed on Mars. Two scientists spent hours Friday in a bus filled with laboratory equipment, carefully scraping green microbes off rocks and preparing their DNA for sampling. The sequencing, done on a desktop machine in the bus, took 26 hours.

Chris McKay, a scientist at NASA’s Ames Research Center who is working on the project, said the bus would have to be shrunk to a shoe box to make it feasible for a Mars mission, which would take many years and dollars. “By the time we get to Mars, we will have spent $500 million on that shoe box,” he said.

But sequencing machines are rapidly becoming smaller. A team at Harvard and M.I.T. is hoping to have a sequencer ready for use in a Mars mission departing in 2020.

Of course, all this assumes there is life on Mars to begin with and that it is based on DNA.

But that can be left for another day. Dr. Venter is known for combining business with pleasure, such as when he sailed his yacht around the world to collect ocean life for sequencing. He arrived here Friday in a pickup truck hauling three motorcycles and some libations.

After touring a site on Friday from which his scientists had collected rocks on which green cyanobacteria were growing, Dr. Venter declared: “We’ve had the quartz. Now, let’s get a pint.”

Beam us up, Craig!

General Discussion / [SCIENCE!] LEED Platinum Research Lab online
« on: November 11, 2013, 07:19:47 PM »
First hurdle, achieved. Now to finish the move before going insane...

LA JOLLA — Genome pioneer J. Craig Venter, who helped sequence the first human genome, opened a new chapter in La Jolla Saturday night.

The occasion was a gala celebrating construction of the new campus for his J. Craig Venter Institute, which was built not only to advance science, but to showcase how science can be compatible with the best of environmentally sustainable practices.

The gala marked the official debut of the $37 million, 45,000-square-foot campus. It was also a fundraiser to endow a chair in genomics.

The black-tie event of about 250 was highlighted by an appearance by former Vice President Al Gore, a Nobel Peace Prize winner for his environmental work.

Gore praised the environmental theme of the new campus, designed by ZGF Architects and constructed by McCarthy Building Companies to Venter's exacting environmental specifications.

"When you talk about (being a) zero-carbon, net zero-energy facility -- wow! You're usually not talking about a facility like this," Gore said. He was referring to the heavy energy demands of science research centers, which require electricity to run below-zero freezers and to power scientific equipment.

As the first such scientific research building in the world, the new campus presented many challenges, Venter said in an interview. Getting everything right delayed construction.

"We paid some overtime to move a little bit faster," Venter said. "There were no fundamental problems."

Speaking before Gore's talk, Venter mentioned the tight construction deadline, but pronounced the campus "cocktail-ready" for the event.

Some of the equipment was moved in Saturday morning from the existing La Jolla campus, said scientific director Mark Adams. One of the machines, a BioCel system from Agilent, robotically selects single cells from specimens. The DNA from these cells is then copied and read.

Many organisms don't grow in the lab, Adams said, so it's necessary to read the DNA directly from one cell.

On Nov. 20, the staff moves in, and the new campus gets to work.

Much of the institute's research focuses on the theme of digitizing life, that is, turning the DNA code that governs life into the 0s and 1s of computer language. Digitizing biology will enable scientists to reverse-engineer life, and to create synthetic life forms that can help provide renewable energy, new disease treatments and other products to serve humanity, Venter says.

A major part of Venter's work is cataloging the earth's genetic diversity. He has taken his personal yacht, the Sorcerer II, around the world to collect oceanic microbes. One of the expeditions found 60 million new genes from these microbes.

Before and after Gore's talk, performers such as earth harpists entertained the crowd, which included a heavy dose of La Jolla's scientific talent: stem cell researcher Jeanne Loring of The Scripps Research Institute, Larry Goldstein, head of UC San Diego's stem cell program, and UC San Diego Chancellor Pradeep Khosla. The J. Craig Venter Institute campus occupies land owned by UCSD, which invited Venter to build there.

Venter, a UCSD alumnus, remarked that he first came to the university about 38 years ago.

"I started my research career here, I got a bachelor's in biology in 1972, I got a Ph.D. in 1975," Venter said. "This opening tonight represents a fulfillment of a long-term dream to be back in La Jolla, back on the campus. I don't think I'd ever dreamed I'd be in such a fantastic building."

General Discussion / Graphic Designers opinion needed
« on: September 17, 2013, 12:24:18 PM »
So, Adobe is going Cloud-only with their apps soon. It's fucking horseshit for small-time shops, or small-time needs - like prepping graphics for manuscripts/publications. It will significantly raise our costs, for less flexibility.

Do you know of any apps (free or paid) that we should look to besides GIMP and Inkscape? Mac and Windows apps needed, but post whatever you know please.

General Discussion / [Opinion] Job Market scariness
« on: August 26, 2013, 12:40:56 PM »
I feel scared for our younger guys on TZT. I would not want to be facing this job market - though it could be said the DotCom bust in 2000 is what I faced and overcame. It must seem as if you are trapped in low-skill / low-wage jobs when you know you're better than that.

I am both continually happy and scared in my chosen profession. I see a diminishing need for low-level Sys Admins as virtualization gets better and better. It creates a market for high-tech, high-skill admins, and I feel like I can make it there, but I am not quite at the level I want to be to feel protected. You generally transition from low-level tech to high-level tech through experience, though I'm not sure that's 100% true; it's probably possible to be dropped into a high-skill sys admin job just out of school, given rigorous/proper training, though you would still not be very good at first. I don't even know if "good" work is appreciated anymore. To understand how good my work is, you almost HAVE to be in my line of work. When the people paying you have no measuring stick for your day-to-day duties, how can you expect to be compensated accordingly?

Optimistically, I hope to be doing what I am doing for another 30 years - I'll need to in order to retire with enough money - but realistically, there's no way I won't be pushed out when I get older. I'll get 20 years, if I'm lucky.

The Great Divide | August 24, 2013, 2:35 pm
How Technology Wrecks the Middle Class

In the four years since the Great Recession officially ended, the productivity of American workers — those lucky enough to have jobs — has risen smartly. But the United States still has two million fewer jobs than before the downturn, the unemployment rate is stuck at levels not seen since the early 1990s and the proportion of adults who are working is four percentage points off its peak in 2000.

This job drought has spurred pundits to wonder whether a profound employment sickness has overtaken us. And from there, it’s only a short leap to ask whether that illness isn’t productivity itself. Have we mechanized and computerized ourselves into obsolescence?

Are we in danger of losing the “race against the machine,” as the M.I.T. scholars Erik Brynjolfsson and Andrew McAfee argue in a recent book? Are we becoming enslaved to our “robot overlords,” as the journalist Kevin Drum warned in Mother Jones? Do “smart machines” threaten us with “long-term misery,” as the economists Jeffrey D. Sachs and Laurence J. Kotlikoff prophesied earlier this year? Have we reached “the end of labor,” as Noah Smith laments in The Atlantic?

Of course, anxiety, and even hysteria, about the adverse effects of technological change on employment have a venerable history. In the early 19th century a group of English textile artisans calling themselves the Luddites staged a machine-trashing rebellion. Their brashness earned them a place (rarely positive) in the lexicon, but they had legitimate reasons for concern.

Economists have historically rejected what we call the “lump of labor” fallacy: the supposition that an increase in labor productivity inevitably reduces employment because there is only a finite amount of work to do. While intuitively appealing, this idea is demonstrably false. In 1900, for example, 41 percent of the United States work force was in agriculture. By 2000, that share had fallen to 2 percent, after the Green Revolution transformed crop yields. But the employment-to-population ratio rose over the 20th century as women moved from home to market, and the unemployment rate fluctuated cyclically, with no long-term increase.

Labor-saving technological change necessarily displaces workers performing certain tasks — that’s where the gains in productivity come from — but over the long run, it generates new products and services that raise national income and increase the overall demand for labor. In 1900, no one could foresee that a century later, health care, finance, information technology, consumer electronics, hospitality, leisure and entertainment would employ far more workers than agriculture. Of course, as societies grow more prosperous, citizens often choose to work shorter days, take longer vacations and retire earlier — but that too is progress.

So if technological advances don’t threaten employment, does that mean workers have nothing to fear from “smart machines”? Actually, no — and here’s where the Luddites had a point. Although many 19th-century Britons benefited from the introduction of newer and better automated looms — unskilled laborers were hired as loom operators, and a growing middle class could now afford mass-produced fabrics — it’s unlikely that skilled textile workers benefited on the whole.

Fast-forward to the present. The multi-trillionfold decline in the cost of computing since the 1970s has created enormous incentives for employers to substitute increasingly cheap and capable computers for expensive labor. These rapid advances — which confront us daily as we check in at airports, order books online, pay bills on our banks’ Web sites or consult our smartphones for driving directions — have reawakened fears that workers will be displaced by machinery. Will this time be different?

A starting point for discussion is the observation that although computers are ubiquitous, they cannot do everything. A computer’s ability to accomplish a task quickly and cheaply depends upon a human programmer’s ability to write procedures or rules that direct the machine to take the correct steps at each contingency. Computers excel at “routine” tasks: organizing, storing, retrieving and manipulating information, or executing exactly defined physical movements in production processes. These tasks are most pervasive in middle-skill jobs like bookkeeping, clerical work and repetitive production and quality-assurance jobs.

Logically, computerization has reduced the demand for these jobs, but it has boosted demand for workers who perform “nonroutine” tasks that complement the automated activities. Those tasks happen to lie on opposite ends of the occupational skill distribution.

At one end are so-called abstract tasks that require problem-solving, intuition, persuasion and creativity. These tasks are characteristic of professional, managerial, technical and creative occupations, like law, medicine, science, engineering, advertising and design. People in these jobs typically have high levels of education and analytical capability, and they benefit from computers that facilitate the transmission, organization and processing of information.

On the other end are so-called manual tasks, which require situational adaptability, visual and language recognition, and in-person interaction. Preparing a meal, driving a truck through city traffic or cleaning a hotel room present mind-bogglingly complex challenges for computers. But they are straightforward for humans, requiring primarily innate abilities like dexterity, sightedness and language recognition, as well as modest training. These workers can’t be replaced by robots, but their skills are not scarce, so they usually make low wages.

Computerization has therefore fostered a polarization of employment, with job growth concentrated in both the highest- and lowest-paid occupations, while jobs in the middle have declined. Surprisingly, overall employment rates have largely been unaffected in states and cities undergoing this rapid polarization. Rather, as employment in routine jobs has ebbed, employment has risen both in high-wage managerial, professional and technical occupations and in low-wage, in-person service occupations.

So computerization is not reducing the quantity of jobs, but rather degrading the quality of jobs for a significant subset of workers. Demand for highly educated workers who excel in abstract tasks is robust, but the middle of the labor market, where the routine task-intensive jobs lie, is sagging. Workers without college education therefore concentrate in manual task-intensive jobs — like food services, cleaning and security — which are numerous but offer low wages, precarious job security and few prospects for upward mobility. This bifurcation of job opportunities has contributed to the historic rise in income inequality.

HOW can we help workers ride the wave of technological change rather than be swamped by it? One common recommendation is that citizens should invest more in their education. Spurred by growing demand for workers performing abstract job tasks, the payoff for college and professional degrees has soared; despite its formidable price tag, higher education has perhaps never been a better investment. But it is far from a comprehensive solution to our labor market problems. Not all high school graduates — let alone displaced mid- and late-career workers — are academically or temperamentally prepared to pursue a four-year college degree. Only 40 percent of Americans enroll in a four-year college after graduating from high school, and more than 30 percent of those who enroll do not complete the degree within eight years.

The good news, however, is that middle-education, middle-wage jobs are not slated to disappear completely. While many middle-skill jobs are susceptible to automation, others demand a mixture of tasks that take advantage of human flexibility. To take one prominent example, medical paraprofessional jobs — radiology technician, phlebotomist, nurse technician — are a rapidly growing category of relatively well-paid, middle-skill occupations. While these paraprofessions do not typically require a four-year college degree, they do demand some postsecondary vocational training.

These middle-skill jobs will persist, and potentially grow, because they involve tasks that cannot readily be unbundled without a substantial drop in quality. Consider, for example, the frustration of calling a software firm for technical support, only to discover that the technician knows nothing more than the standard answers shown on his or her computer screen — that is, the technician is a mouthpiece reading from a script, not a problem-solver. This is not generally a productive form of work organization because it fails to harness the complementarities between technical and interpersonal skills. Simply put, the quality of a service within any occupation will improve when a worker combines routine (technical) and nonroutine (flexible) tasks.

Following this logic, we predict that the middle-skill jobs that survive will combine routine technical tasks with abstract and manual tasks in which workers have a comparative advantage — interpersonal interaction, adaptability and problem-solving. Along with medical paraprofessionals, this category includes numerous jobs for people in the skilled trades and repair: plumbers; builders; electricians; heating, ventilation and air-conditioning installers; automotive technicians; customer-service representatives; and even clerical workers who are required to do more than type and file. Indeed, even as formerly middle-skill occupations are being “deskilled,” or stripped of their routine technical tasks (brokering stocks, for example), other formerly high-end occupations are becoming accessible to workers with less esoteric technical mastery (for example, the work of the nurse practitioner, who increasingly diagnoses illness and prescribes drugs in lieu of a physician). Lawrence F. Katz, a labor economist at Harvard, memorably called those who fruitfully combine the foundational skills of a high school education with specific vocational skills the “new artisans.”

The outlook for workers who haven’t finished college is uncertain, but not devoid of hope. There will be job opportunities in middle-skill jobs, but not in the traditional blue-collar production and white-collar office jobs of the past. Rather, we expect to see growing employment among the ranks of the “new artisans”: licensed practical nurses and medical assistants; teachers, tutors and learning guides at all educational levels; kitchen designers, construction supervisors and skilled tradespeople of every variety; expert repair and support technicians; and the many people who offer personal training and assistance, like physical therapists, personal trainers, coaches and guides. These workers will adeptly combine technical skills with interpersonal interaction, flexibility and adaptability to offer services that are uniquely human.

David H. Autor is a professor of economics at the Massachusetts Institute of Technology. David Dorn is an assistant professor of economics at the Center for Monetary and Financial Studies in Madrid.

Tech Heads / Home NAS
« on: August 03, 2013, 12:02:41 AM »
RAID5 or RAID6? A Drobo is $500, diskless. Can I beat that price? Suggestions?
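For anyone weighing the two levels, here's a quick sketch of the capacity-versus-redundancy trade-off (the function and names are mine, not from any particular NAS vendor; it assumes n identical disks):

```python
# Usable capacity and fault tolerance for RAID5 vs. RAID6,
# assuming n identical disks of disk_tb terabytes each.

def raid_capacity(level: str, n: int, disk_tb: float) -> float:
    """Return usable capacity in TB for a given RAID level."""
    if level == "raid5":
        if n < 3:
            raise ValueError("RAID5 needs at least 3 disks")
        return (n - 1) * disk_tb  # one disk's worth of parity; survives 1 failure
    if level == "raid6":
        if n < 4:
            raise ValueError("RAID6 needs at least 4 disks")
        return (n - 2) * disk_tb  # two disks' worth of parity; survives 2 failures
    raise ValueError(f"unknown level: {level}")

# With five 3 TB disks: RAID5 gives 12 TB (survives one failure),
# RAID6 gives 9 TB (survives any two simultaneous failures).
print(raid_capacity("raid5", 5, 3.0))  # 12.0
print(raid_capacity("raid6", 5, 3.0))  # 9.0
```

The practical argument for RAID6 on big consumer disks is rebuild time: losing a second drive mid-rebuild kills a RAID5 array, while RAID6 tolerates it.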

General Discussion / [SCIENCE!] Dark matter miscalculation?
« on: July 18, 2013, 12:38:56 AM »
Just realized it's an older article. Oh well.

If Not Dark Matter, Then What?

Natalie Wolchover, Life's Little Mysteries Staff Writer   |   April 19, 2012 03:49pm ET


Astronomers mapped the motions of hundreds of stars in the Milky Way in order to deduce the amount of dark matter that must be tugging on them from the vicinity of our sun. Their surprising conclusion? There's no dark matter around here.

As the researchers write in a forthcoming paper in the Astrophysical Journal, the stellar motion implies that the stars, all within 13,000 light-years of Earth, are gravitationally attracted by the visible material in our solar system — the sun, planets and surrounding gas and dust — and not by any unseen matter.

"Our calculations show that [dark matter] should have shown up very clearly in our measurements. But it was just not there!" said lead study author Christian Moni-Bidin, an astronomer at the University of Concepción in Chile.

If the analysis of the data from Chile's European Southern Observatory (ESO) is correct — a big "if," several physicists say — it overturns the decades-old theory that dark matter permeates space in our region of the Milky Way. Dark matter is an invisible material thought to make up 80 percent of all matter in the universe. Although it doesn't interact with light and so cannot be seen, its presence is invoked to explain why the outskirts of galaxies, including the Milky Way, rotate much more quickly than would be expected based on the gravitational pull of visible matter alone. Commonly accepted as fact, dark matter plays an essential role in models of galaxy formation and evolution, and several experiments are under way to detect dark matter particles on Earth.

But if dark matter isn't here in the solar system, it may not be anywhere, because its distribution through the galaxy would have to be extremely peculiar to avoid this region in space. "Modern theories have serious troubles to explain the formation of a [dark matter] halo so curiously shaped," Moni-Bidin told Life's Little Mysteries.

Scott Tremaine, professor of physics at Princeton University's Institute for Advanced Study, said, "If the authors' conclusions are correct, this is indeed a serious blow to dark matter."

Future astronomical surveys, such as the European Space Agency's Gaia mission, will clarify the situation by observing the movements of millions of stars, instead of just hundreds. But in the meantime, by calling dark matter into question, the new ESO finding invites discussion of a topic that hasn't gotten much airtime in recent years: What other theories could account for the rotation of galaxies, as well as other observations explained by dark matter? If not dark matter — or, at least, not the dark matter we expected — then what? Experts have a few other options, though they're not nearly as satisfying.

Gravity 2.0

If the force of gravity is a lot messier than Newton and Einstein thought, then it could account for the speedy rotation of spiral galaxies without requiring dark matter. For gravity to speed up stars on a galaxy's edge, it must deviate from the "inverse-square law" — the rule that gravity decreases by the square of the distance away from something — at galactic distances. In other words, the force would need to suddenly spike at the edge of galaxies. But for it to act that way, gravity fields and the equations associated with them would have to be tremendously convoluted.

The theory is called "modified Newtonian dynamics," or MOND. "The nicest of the alternative models for spiral galaxies is the alternative gravity theory MOND, as it seems to be able to [mathematically] reproduce the galaxy rotation curves with few assumptions built into it," said Douglas Clowe, an astrophysicist at Ohio University who studies dark matter.
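For readers who want the equation behind the words, Milgrom's MOND prescription (standard notation, not from the article) modifies Newtonian dynamics below a characteristic acceleration $a_0 \approx 1.2 \times 10^{-10}\,\mathrm{m/s^2}$:

```latex
% Newtonian gravity g_N is recovered at high accelerations;
% below a_0 the effective acceleration is boosted.
\mu\!\left(\frac{a}{a_0}\right) a = g_N = \frac{GM}{r^2},
\qquad
\mu(x) \to
\begin{cases}
  1, & x \gg 1 \quad \text{(Newtonian regime)} \\
  x, & x \ll 1 \quad \text{(deep-MOND regime)}
\end{cases}

% Deep-MOND: a = \sqrt{g_N a_0}. With a = v^2/r for a circular orbit,
% the radius cancels, giving a flat rotation curve:
v^4 = G M a_0
```

That single new constant $a_0$ is why MOND can fit flat spiral-galaxy rotation curves while leaving solar-system dynamics, where $a \gg a_0$, effectively Newtonian.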

However, MOND doesn't fill as many gaps as dark matter does: it works perfectly only for spiral galaxies, Clowe said. For elliptical galaxies, galaxy groups, galaxy clusters, and larger-scale structures, the theory doesn't quite fit observations, and so it requires that extra matter — i.e., dark matter — be invoked once again. "So instead of just using an undiscovered particle to explain our observations of structures in the universe, MOND requires both an undiscovered particle and a modification to the gravitational-force law," he said.

Another knock against MOND is that it, like the dark matter theory, doesn't match the new ESO findings. According to Moni-Bidin, because the team members used Newtonian gravity in their calculations, MOND would predict a discrepancy to arise in the amount of mass they measured in the solar system. "MOND expects a 'phantom disk' of unseen matter to be detected in a work like ours," he said — just as using Newton's law to model the galaxy leads one to predict dark matter.

Fields of phions

John Moffat, a physicist at the Perimeter Institute for Advanced Study in Canada, has proposed a sub-theory of MOND called MOG, or "modified gravity." He claims MOG explains the peculiar motion of galaxies, as well as galaxy clusters and cluster collisions, without invoking dark matter at any scale.

"I take Einstein's gravity and I add to this three fields," Moffat explained. One of the fields has a mass, and this introduces variations in the force law at different distance scales. However, in order to have a mass, the field must have a particle associated with it, which Moffat calls the phion. And, like dark matter particles, the phion's existence has not been verified.

Warm and dark

If the ESO analysis is correct, it could just mean that dark matter behaves very differently — or is distributed very differently in space — than has been thought. "It would mean that dark matter would need to be distributed on a wider scale within the inner parts of a galaxy," Clowe said, "which is [mathematically confirmed] if you make the dark matter particles less massive than the currently favored models."

According to Douglas Spolyar, a dark matter theorist at the University of Chicago, the less massive variety is called warm dark matter. "People use it to explain two things — one that you would have a core in your dark matter profile, so dark matter stays constant inside some radius in the galaxy. Secondly, if you look at the dark matter sub-haloes in the Milky Way, the amounts [of warm dark matter] are much lower," he said. That could explain why the ESO astronomers didn't find any dark matter in our cosmic neighborhood.

However, the researchers said that cold dark matter particles are strongly preferred by cosmologists, because less massive dark particles would have problems forming galaxies quickly enough to match astronomers' observations of the early universe.

New theory

If future surveys of the motions of stars bolster the ESO findings, strongly suggesting there really is no dark matter in our region of the galaxy, then cosmologists may have to scrap all the current theories and begin anew. "To date, a comprehensive relativistic theory alternative to the dark matter paradigm, able to explain the observations on all scales, from galactic rotation to the clusters of galaxies, is not known," Moni-Bidin said.

Princeton's Tremaine concurred: "I don't think any of the alternatives to dark matter are very likely."

General Discussion / The "Ex-Beatle" - twice
« on: July 07, 2013, 01:46:19 PM »
Hope you can get through the pay-wall. Just starting this now, but pretty cool. This dude was in both Nirvana and Soundgarden at one point.
