Show Posts

This section allows you to view all posts made by this member. Note that you can only see posts made in areas you currently have access to.


Topics - Agrul

151
General Discussion / Introverts are all Pokemon Nerds
« on: August 18, 2014, 12:16:30 PM »
Do our Facebook posts reflect our true personalities? Post by post, probably not. But in aggregate, the things we say on social media paint a fairly accurate portrait of our inner selves. A team of University of Pennsylvania scientists is using Facebook status updates to find commonalities in the words used by different ages, genders, and even psyches.

The so-called “World Well-Being Project” started as an effort to gauge happiness across various states and communities.

“Governments have an increased interest in measuring not just economic outcomes but other aspects of well-being,” said Andrew Schwartz, a UPenn computer scientist who works on the project. “But it's very difficult to study well-being at a large scale. It costs a lot of money to administer surveys to see how people are doing in certain areas. Social media can help with that.”

For the studies, Schwartz and his co-authors asked people to download a Facebook app called “My Personality.” The app asks users to take a personality test and indicate their age and gender, and then it tracks their Facebook updates. So far, 75,000 people have participated in the experiment.

Then, through a process called differential language analysis, they isolate the words that are most strongly correlated with a certain gender, age, trait, or place. The resulting word clouds reveal which words are most distinguishing of, say, a woman. Or a neurotic person.
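To make "differential language analysis" concrete, here is a toy Python sketch of the core step: compute each word's relative frequency per user, correlate it with a trait score across users, and rank words by that correlation. All data below are invented for illustration; the real project uses far larger vocabularies and regressions with demographic controls.

from collections import Counter
import math

# (status text, extraversion score) per user -- made-up illustration data
posts = {
    "user1": ("party tonight ya baby party", 4.5),
    "user2": ("watching anime again :3 :3", 1.5),
    "user3": ("party party baby", 4.0),
    "user4": ("anime marathon :3", 2.0),
}

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy) if sx and sy else 0.0

# relative frequency of each word, per user
freqs, scores = [], []
for text, score in posts.values():
    counts = Counter(text.split())
    total = sum(counts.values())
    freqs.append({w: c / total for w, c in counts.items()})
    scores.append(score)

vocab = {w for f in freqs for w in f}
corr = {w: pearson([f.get(w, 0.0) for f in freqs], scores) for w in vocab}

# positive r = more characteristic of extroverts; negative = introverts
for word, r in sorted(corr.items(), key=lambda kv: -kv[1]):
    print(f"{word:10s} r={r:+.2f}")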

In the six studies they’ve published so far, they’ve found, for example, that introverts make heavy use of emoticons and words related to anime, while extroverts say “party,” “baby,” and “ya.”

Words Used by Introverts (top) vs. Extroverts [word-cloud images in the original article]

Schwartz and his colleagues have also tracked “openness,” which is “characterized by traits such as being intelligent, analytical, reflective, curious, imaginative, creative, and sophisticated.” Open people talk about “dreams” and the “universe,” apparently, while people with “low openness”—“characterized by traits such as being unintelligent, unanalytical, unreflective, uninquisitive, unimaginative, uncreative, and unsophisticated”—use contractions, misspellings ... and misspelled contractions.

Openness (top) and Non-Openness Word Correlations [word-cloud images in the original article]

They’ve also analyzed how our use of certain words changes as we age. People are much less likely to be “bored” at 60 than at 13, it turns out, but much more likely to feel proud. Twenty-five-year-olds tend to mention “drunk,” while 55-year-olds talk about “wine.”

In one of the first studies, the team correlated past Gallup research on life satisfaction with tweets from various counties.

Happy communities, they found, talk about exercise—fitness, Zumba, and the gym—while the sadder ones feel “bored” or “tired.” The more upbeat locales were also more likely to donate money, volunteer, and attend meetings. The hidden socio-economic variable is clear: having money allows you to go rock-climbing and give to charity, and it makes you happier, too.

So far, many of the findings have been rather predictable—which isn’t a bad thing, when it comes to social science.

“Subjects living in high elevations talk about the mountains,” they write. “Neurotic people disproportionately use the phrase ‘sick of’ and the word ‘depressed.’”

But some have shed light on strange connections between who we are, how we live our lives, and the words we choose to present to the world. For example, “an active life implies emotional stability,” they note. And, “males use the possessive ‘my’ when mentioning their wife or girlfriend more often than females use ‘my’ with ‘husband’ or ‘boyfriend.’”

“It is a very unbiased view of humanity,” Schwartz said of the lab’s work so far. “The data tells the story, and it tells it about people.”

http://www.theatlantic.com/health/archive/2014/08/what-an-introvert-sounds-like/378624/

152
General Discussion / Bypassing Bankers: P2P Lending Companies
« on: August 17, 2014, 04:51:15 PM »
One of the more hopeful consequences of the 2008 financial crisis has been the growth of a group of small companies dedicated to upending the status quo on Wall Street. Bearing cute, Silicon Valley–esque names such as Kabbage, Zopa, Kiva, and Prosper, these precocious upstarts are tiny by banking standards, and pose no near-term threat to behemoths like Goldman Sachs, Morgan Stanley, JPMorgan Chase, Bank of America, or Citigroup—banks that between them control much of the world’s capital flow. But there is no question that these young companies have smartly exploited the too-big-to-fail banks’ failure to cater to the credit needs of consumers and small businesses, and will likely do so more noticeably in the years ahead.

At the forefront of the group is Lending Club, a San Francisco–based company founded in 2007 by Renaud Laplanche, a serial entrepreneur and former Wall Street attorney. Laplanche, 43, grew up in a small town in France and, as a teenager, worked every day for three hours before school in his father’s grocery store. He also won two national sailing championships in France, in 1988 and 1990. Now an American citizen, he created Lending Club after being astonished at the high cost of consumer credit in the United States. Lending Club uses the Internet to match investors with individual borrowers, most of whom are looking to refinance their credit-card debt or other personal loans. The result is a sort of eHarmony for borrowers and lenders. Lending Club has facilitated more than $4 billion in loans and is the largest company performing this sort of service, by a factor of four.

The matching of individual lenders with borrowers on Lending Club’s Web site takes place anonymously (lenders can see would-be borrowers’ relevant characteristics, just not their name), but each party gets what it wants. Many borrowers can shave a few percentage points off the interest rate on the debt they refinance, and lock in the lower rate for three to five years. But that interest rate is still more than the lenders could earn on a three-year Treasury security (about 1 percent), or a typical “high yield” or “junk” bond (averaging about 5 percent). Lending Club claims that its loans have so far yielded an annual net return to lenders of about 8 percent, after fees and accounting for losses. It’s worth noting, however, that what lenders gain in yield, they lose in safety: the loans are unsecured, so if a borrower does not pay his debts—and each year, between 3 and 4 percent of Lending Club borrowers do not—the lender can do little about it except absorb the loss and move on. The average consumer loan on Lending Club is about $14,000; many lenders make several loans at once to hedge against the risk of any single loan going bad.
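The arithmetic of that hedge is easy to sketch. Here is a toy Monte Carlo in Python, with rates loosely based on the article's averages (13 percent gross interest, a 3.5 percent default rate, an assumed 1 percent fee taken from the lender's side, and the pessimistic assumption of total loss on default): the expected return lands near the quoted ~8 percent regardless of portfolio size, but the spread of outcomes shrinks as notes are added.

import random

RATE = 0.13        # gross interest rate (article's largest loan category)
FEE = 0.01         # assumed lender-side fee -- the exact split isn't given
DEFAULT_P = 0.035  # annual default probability (article: 3-4%)

def portfolio_return(n_loans, trials=10_000):
    # mean and std dev of a one-year return across equal-sized notes
    outcomes = []
    for _ in range(trials):
        r = 0.0
        for _ in range(n_loans):
            if random.random() < DEFAULT_P:
                r -= 1.0 / n_loans           # assume the whole note is lost
            else:
                r += (RATE - FEE) / n_loans  # interest collected, net of fee
        outcomes.append(r)
    mean = sum(outcomes) / trials
    sd = (sum((x - mean) ** 2 for x in outcomes) / trials) ** 0.5
    return mean, sd

for n in (1, 10, 100):
    mean, sd = portfolio_return(n)
    print(f"{n:4d} loans: mean {mean:+.3f}, std dev {sd:.3f}")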

Lending Club’s astute initial investors, including the venture-capital firms Norwest Venture Partners, Canaan Partners, and Foundation Capital, also get what they want: no liability for the loans being made, no oversight from persnickety bank regulators (Lending Club is regulated by the Securities and Exchange Commission), none of the costs associated with the typical bank-branch network, and, best of all, a plethora of fees, collected from both the borrower and the lender, totaling about 5 percent of the loan amount, on average.

Compared with Wall Street firms, Lending Club is a flea on an elephant’s tail.

In the first quarter of 2014, it helped arrange 56,557 loans totaling $791 million; JPMorgan Chase made $47 billion in what it classifies as consumer loans during the same period. But the company is growing quickly. In 2013, its revenue—the fees it charges for the loans it helps arrange—tripled, to $98 million. There is talk of an IPO later this year. In April, the company was valued at $3.75 billion—38 times its 2013 revenue and more than 520,000 times its net income—when it raised $65 million in additional equity from a new group of high-powered institutional investors, including BlackRock and T. Rowe Price. Lending Club used the cash to help it acquire Springstone Financial, which provides financing for school loans and some elective medical procedures.

In other words, Lending Club is backed by quite a few smart-money players, eager to buy its equity at nosebleed valuations in return for the chance to get in on the micro-loan market—and perhaps to change the way consumers and small businesses get credit. “It’s a value proposition that really comes from the fact that we operate at a lower cost, and then pass on the cost savings to both borrowers and investors,” Laplanche told me. “We give each side a better deal than they could get elsewhere.” That’s certainly true: Lending Club doesn’t have physical branches, or several other layers of costs that weigh down traditional banks. But Lending Club also seems to exploit a market inefficiency that is really quite shocking, given the supposed sophistication of the big Wall Street firms. When it comes to interest rates, the major credit-card issuers—among them JPMorgan Chase and Citigroup—do not differentiate greatly among the many people who borrow money on their credit cards. They charge just about all of them similarly usurious rates. While a dizzying array of credit cards offer a plethora of introductory interest rates and benefits—cash back, for instance—regular interest rates on cards issued by the big players to consumers with average credit scores typically range between 13 and 23 percent. Lending Club’s business strategy, in part, is simply to differentiate more finely among borrowers, particularly those with good credit histories.

Lending Club screens loan applicants—only 10 to 20 percent of people seeking loans get approved to use the marketplace. The company then places each approved borrower into one of 35 credit categories, using many factors, including FICO score. Those with the highest credit ranking can borrow money at about 7 percent interest.

As of the first quarter of 2014, the largest category of Lending Club loans charged borrowers an interest rate of about 13 percent, well below the rate charged by the typical credit-card company, which in early June was almost 16 percent.

It’s quite possible, of course, that Lending Club is merely mispricing the credit risk posed by these small borrowers. After all, Lending Club isn’t making the loans; it bears no liability if, say, default rates rise when another recession hits. So far, however, Lending Club’s loan-default rates appear no worse than the industry average.

Another possibility is that the six largest credit-card issuers in the United States—Chase, Bank of America, American Express, Citigroup, CapitalOne, and Discover—which together control about two-thirds of the domestic consumer-credit-card market, have been acting like a cartel, keeping lending rates higher than they would be in a truly competitive market, and reaping huge profits. In the first quarter of 2014, Chase’s credit-card business—which also includes auto loans and merchant services—had a net income of $1.1 billion and a profit margin of nearly 25 percent. Few businesses on Wall Street provide the same level of consistent profitability as does the consumer-credit-card business. If a few crumbs fall off the table to the likes of Lending Club or Prosper, so be it.

Renaud Laplanche is a firm believer in transparency, and Lending Club’s Web site and public filings are filled with statistics about borrowers. In contrast to the practice of the big banks, the company makes details about each loan available publicly. It recently announced a partnership with San Francisco–based Union Bank, which has $107 billion in assets, to offer the bank’s customers access to its borrowing marketplace.

At a conference in May in San Francisco, where more than 900 peer-to-peer-banking enthusiasts gathered to hear about the latest trends in the industry, Charles Moldow, a general partner at Foundation Capital—one of Lending Club’s largest investors—reportedly created a stir when he discussed a white paper titled “A Trillion Dollar Market by the People, for the People.” In his talk, Moldow spoke about how marketplace lending would change banking in much the same way Amazon has changed retail. He went on to cite Bill Gates’s observation two decades ago that banking is necessary, but bricks-and-mortar banks are not. “Marketplace lending is now poised to demonstrate how accurate that observation was,” Moldow concluded.

That’s probably too exuberant. Whether or not bank branches themselves are necessary, applying for individual peer-to-peer loans will always be more of a hassle than swiping a piece of plastic: inertia is a powerful force. And as his company’s alliance with Union Bank demonstrates, Laplanche is not hell-bent on blowing up the old banking model: he wants to work with established banks. To that end, he has invited onto Lending Club’s board of directors John Mack, the former CEO of Morgan Stanley and a stalwart of the Wall Street status quo. Larry Summers, the former Treasury secretary, is also on the board. “In order to transform the banking system, it’s useful to have people on board who have participated in building it,” Laplanche explained. “We essentially combine that experience and brainpower with more of a Silicon Valley mind-set of using technology to shake things up for the benefit of the consumer.”

One can only hope that it works out that way. For all of Big Finance’s innovation in recent decades, ordinary people haven’t seen much obvious benefit. Perhaps if Lending Club continues to win away some of the credit-card business’s best customers—those with persistent balances but solid credit ratings, for whom it is worth the effort to refinance their personal debt through the marketplace—the big banks might begin to treat borrowers more subtly and equitably. If that were to happen—and I wouldn’t hold my breath—then the cost of credit could be lowered for more people, and Wall Street could take a step toward meeting whatever obligation it feels it may have to repair its tattered relationship with Main Street.

http://www.theatlantic.com/magazine/archive/2014/09/bypassing-the-bankers/375068/?single_page=true

153
Several years ago, the Defense Advanced Research Projects Agency got wind of a technique called transcranial direct-current stimulation, or tDCS, which promised something extraordinary: a way to increase people’s performance in various capacities, from motor skills (in the case of recovering stroke patients) to language learning, all by stimulating their brains with electrical current. The simplest tDCS rigs are little more than nine-volt batteries hooked up to sponges embedded with metal and taped to a person’s scalp.

It’s only a short logical jump from the preceding applications to other potential uses of tDCS. What if, say, soldiers could be trained faster by hooking their heads up to a battery?

This is the kind of question DARPA was created to ask. So the agency awarded a grant to researchers at the University of New Mexico to test the hypothesis. They took a virtual-reality combat-training environment called Darwars Ambush—basically, a video game the military uses to train soldiers to respond to various situations—and captured still images. Then they Photoshopped in pictures of suspicious characters and partially concealed bombs. Subjects were shown the resulting tableaus, and were asked to decide very quickly whether each scene included signs of danger. The first round of participants did all this inside an fMRI machine, which identified roughly the parts of their brains that were working hardest as they looked for threats. Then the researchers repeated the exercise with 100 new subjects, this time sticking electrodes over the areas of the brain that had been identified in the fMRI experiment, and ran two milliamps of current (nothing dangerous) to half of the subjects as they examined the images. The remaining subjects—the control group—got only a minuscule amount of current. Under certain conditions, subjects receiving the full dose of current outperformed the others by a factor of two. And they performed especially well on tests administered an hour after training, indicating that what they’d learned was sticking. Simply put, running positive electrical current to the scalp was making people learn faster.

Dozens of other studies have turned up additional evidence that brain stimulation can improve performance on specific tasks. In some cases, the gains are small—maybe 10 or 20 percent—and in others they are large, as in the DARPA study. Vince Clark, a University of New Mexico psychology professor who was involved with the DARPA work, told me that he’d tried every data-crunching tactic he could think of to explain away the effect of tDCS. “But it’s all there. It’s all real,” Clark said. “I keep trying to get rid of it, and it doesn’t go away.”

Now the intelligence-agency version of DARPA, known as IARPA, has created a program that will look at whether brain stimulation might be combined with exercise, nutrition, and games to even more dramatically enhance human performance. As Raja Parasuraman, a George Mason University psychology professor who is advising an IARPA team, puts it, “The end goal is to improve fluid intelligence—that is, to make people smarter.”

Whether or not IARPA finds a way to make spies smarter, the field of brain stimulation stands to shift our understanding of the neural structures and processes that underpin intelligence. Here, based on conversations with several neuroscientists on the cutting edge of the field, are four guesses about where all this might be headed.

1. Brain stimulation will expand our understanding of the brain-mind connection.
The neural mechanisms of brain stimulation are just beginning to be understood, through work by Michael A. Nitsche and Walter Paulus at the University of Göttingen and by Marom Bikson at the City College of New York. Their findings suggest that adding current to the brain increases the plasticity of neurons, making it easier for them to form new connections. We don’t imagine our brains being so mechanistic. To fix a heart with simple plumbing techniques or to reset a bone is one thing. But you’re not supposed to literally flip an electrical switch and get better at spotting Waldo or learning Swahili, are you? And if flipping a switch does work, how will that affect our ideas about intelligence and selfhood?

Even if juicing the brain doesn’t magically increase IQ scores, it may temporarily and substantially improve performance on certain constituent tasks of intelligence, like memory retrieval and cognitive control. This in itself will pose significant ethical challenges, some of which echo dilemmas already being raised by “neuroenhancement” drugs like Provigil. Workers doing cognitively demanding tasks—air-traffic controllers, physicists, live-radio hosts—could find themselves in the same position as cyclists, weight lifters, and baseball players. They’ll either be surpassed by those willing to augment their natural abilities, or they’ll have to augment themselves.

2. DIY brain stimulation will be popular—and risky.
As word of research findings has spread, do-it-yourselfers on Reddit and elsewhere have traded tips on building simple rigs and where to place electrodes for particular effects. Researchers like the Wright State neuroscientist Michael Weisend have in turn gone on DIY podcasts to warn them off. There’s so much we don’t know. Is neurostimulation safe over long periods of time? Will we become addicted to it? Some scientists, like Stanford’s Teresa Iuculano and Oxford’s Roi Cohen Kadosh, warn that cognitive enhancement through electrical stimulation may “occur at the expense of other cognitive functions.” For example, when Iuculano and Cohen Kadosh applied electrical stimulation to subjects who were learning a code that paired various numbers with symbols, the test group memorized the symbols faster than the control group did. But they were slower when it came time to actually use the symbols to do arithmetic. Maybe thinking will prove to be a zero-sum game: we cannot add to our mental powers without also subtracting from them.

3. Electrical stimulation is just the beginning.
Scientists across the country are becoming interested in how other types of electromagnetic radiation might affect the brain. Some are looking at using alternating current at different frequencies, magnetic energy, ultrasound, even different types of sonic noise. There appear to be many ways of exciting the brain’s circuitry with various energetic technologies, but basic research is only in its infancy. “It’s so early,” Clark told me. “It’s very empirical now—see an effect and play with it.”

As we learn more about our neurons’ wiring, through efforts like President Obama’s BRAIN Initiative—a huge, multiagency attempt to map the brain—we may become better able to deliver energy to exactly the right spots, as opposed to bathing big portions of the brain in current or ultrasound. Early research suggests that such targeting could mean the difference between modest improvements and the startling DARPA results. It’s not hard to imagine a plethora of treatments tailored to specific types of learning, cognition, or mood—a bit of current here to boost working memory, some there to help with linguistic fluency, a dash of ultrasound to improve one’s sense of well-being.

4. The most important application may be clinical treatment.
City College’s Bikson worries that an emphasis on cognitive enhancement could overshadow therapies for the sick, which he sees as the more promising application of this technology. In his view, do-it-yourself tDCS is a sideshow—clinical tDCS could be used to treat people suffering from epilepsy, migraines, stroke damage, and depression. “The science and early medical trials suggest tDCS can have as large an impact as drugs and specifically treat those who have failed to respond to drugs,” he told me. “tDCS researchers go to work every day knowing the long-term goal is to reduce human suffering on a transformative scale.” To that end, many of them would like to see clinical trials test tDCS against leading drug therapies. “Hopefully the National Institutes of Health will do that,” Parasuraman, the George Mason professor, said. “I’d like to see straightforward, side-by-side competition between tDCS and antidepressants. May the best thing win.”

A Brief Chronicle of Cognitive Enhancement
500 b.c.: Ancient Greek scholars wear rosemary in their hair, believing it to boost memory.

1886: John Pemberton formulates the original Coca-Cola, with cocaine and caffeine. It’s advertised as a “brain tonic.”

1955: The FDA licenses methylphenidate—a.k.a. Ritalin—for treating “hyperactivity.”

1997: Julie Aigner-Clark launches Baby Einstein, a line of products claiming to “facilitate the development of the brain in infants.”

1998: Provigil hits the U.S. market.

2005: Lumosity, a San Francisco company devoted to online “brain training,” is founded.

2020: A tDCS company starts an SAT-prep service for high-school students.

http://www.theatlantic.com/magazine/archive/2014/09/prepare-to-be-shocked/375072/

154
In recent weeks, the managers, employees, and customers of a New England chain of supermarkets called "Market Basket" have joined together to oppose the board of directors' decision earlier in the year to oust the chain's popular chief executive, Arthur T. Demoulas.

Their demonstrations and boycotts have emptied most of the chain's seventy stores.

What was so special about Arthur T., as he's known? Mainly, his business model. He kept prices lower than his competitors, paid his employees more, and gave them and his managers more authority.

Late last year he offered customers an additional 4 percent discount, arguing they could use the money more than the shareholders.

In other words, Arthur T. viewed the company as a joint enterprise from which everyone should benefit, not just shareholders. Which is why the board fired him.

It's far from clear who will win this battle. But, interestingly, we're beginning to see the Arthur T. business model pop up all over the place.

Patagonia, a large apparel manufacturer based in Ventura, California, has organized itself as a "B-corporation." That's a for-profit company whose articles of incorporation require it to take into account the interests of workers, the community, and the environment, as well as shareholders.

The performance of B-corporations against these standards is regularly reviewed and certified by a nonprofit entity called B Lab.

To date, over 500 companies in sixty industries have been certified as B-corporations, including the household products firm "Seventh Generation."

In addition, 27 states have passed laws allowing companies to incorporate as "benefit corporations." This gives directors legal protection to consider the interests of all stakeholders rather than just the shareholders who elected them.

We may be witnessing the beginning of a return to a form of capitalism that was taken for granted in America sixty years ago.

Then, most CEOs assumed they were responsible for all their stakeholders.

"The job of management," proclaimed Frank Abrams, chairman of Standard Oil of New Jersey, in 1951, "is to maintain an equitable and working balance among the claims of the various directly interested groups ... stockholders, employees, customers, and the public at large."

Johnson & Johnson publicly stated that its "first responsibility" was to patients, doctors, and nurses, and not to investors.

What changed? In the 1980s, corporate raiders began mounting unfriendly takeovers of companies that could deliver higher returns to their shareholders - if they abandoned their other stakeholders.

The raiders figured profits would be higher if the companies fought unions, cut workers' pay or fired them, automated as many jobs as possible or moved jobs abroad, shuttered factories, abandoned their communities, and squeezed their customers.

Although the law didn't require companies to maximize shareholder value, shareholders had the legal right to replace directors. The raiders pushed them to vote out directors who wouldn't make these changes and vote in directors who would (or else sell their shares to the raiders, who'd do the dirty work).

Since then, shareholder capitalism has replaced stakeholder capitalism. Corporate raiders have morphed into private equity managers, and unfriendly takeovers are rare. But it's now assumed corporations exist only to maximize shareholder returns.

Are we better off? Some argue shareholder capitalism has proven more efficient. It has moved economic resources to where they're most productive, and thereby enabled the economy to grow faster.

By this view, stakeholder capitalism locked up resources in unproductive ways. CEOs were too complacent. Companies were too fat. They employed workers they didn't need, and paid them too much. They were too tied to their communities.

But maybe, in retrospect, shareholder capitalism wasn't all it was cracked up to be. Look at the flat or declining wages of most Americans, their growing economic insecurity, and the abandoned communities that litter the nation.

Then look at the record corporate profits, CEO pay that's soared into the stratosphere, and Wall Street's financial casino (along with its near meltdown in 2008 that imposed collateral damage on most Americans).

You might conclude we went a bit overboard with shareholder capitalism.


The directors of "Market Basket" are now considering selling the company. Arthur T. has made a bid, but other bidders have offered more.

Reportedly, some prospective bidders think they can squeeze more profits out of the company than Arthur T. did.

But Arthur T. may have known something about how to run a business that makes it successful in a larger sense.

Only some of us are corporate shareholders, and shareholders have won big in America over the last three decades.

But we're all stakeholders in the American economy, and many stakeholders have done miserably.

Maybe a bit more stakeholder capitalism is in order.

155
At age 13, jazz great Thelonious Monk ran into trouble at Harlem's Apollo Theater. The reason: he was too good. The famously precocious pianist was, as they say, a “natural,” and by that point had won the Apollo’s amateur competition so many times that he was barred from re-entering. To be sure, Monk practiced, a lot actually. But two new studies, and the fact that he taught himself to read music as a child before taking a single lesson, suggest that he likely had plenty of help from his genes.

The question of what accounts for the vast variability in people’s aptitudes for skilled and creative pursuits goes way back — are experts born with their skill, or do they acquire it? Victorian polymath Sir Francis Galton — coiner of the phrase "nature and nurture" and founder of the “eugenics” movement through which he hoped to improve the biological make-up of the human species through selective coupling — held the former view, noting that certain talents run in families.

Other thinkers, perhaps more ethically palatable than Galton, have argued that mastering nearly any skill can be achieved through rote repetition — through practice.

A 1993 study by Ericsson and colleagues helped popularize the idea that we can all practice our way to tuba greatness if we so choose. The authors found that by age 20 elite musicians had practiced for an average of 10,000 hours, concluding that differences in skill are not “due to innate talent.” Author Malcolm Gladwell lent this idea some weight in his 2008 book “Outliers.” Gladwell writes that greatness requires an enormous time investment and cites the “10,000-Hour Rule” as a major key to success in various pursuits from music (The Beatles) to software supremacy (Bill Gates).

However, new research led by Michigan State University psychology professor David Z. Hambrick suggests that, unfortunately for many of us, success isn’t exclusively a product of determination — that despite even the most hermitic practice routine, our genes might still leave greatness out of reach.

Hambrick and his colleague Elliot Tucker-Drob, an assistant professor of psychology at the University of Texas, set out to investigate the genetic influences on musical accomplishment using data from a study of 850 same-sex twin pairs from the 1960s. Participants were originally queried on their musical successes and how often they practiced, both of which Hambrick found to have a genetic component. One quarter of the genetic influence on musical accomplishment appears related to the act of practicing itself. Certain genes and genotypes presumably confer qualities that drive some kids to hole up in their basement and, at the expense of their family’s sanity, perfect that drum fill — traits like musical aptitude, musical enjoyment and motivation, that in turn could draw reinforcement from parents and teachers, leading to even more desire to practice. Hambrick's findings don't reveal what accounts for the remaining majority of genetic influence on musical accomplishment, though he assumes it's innate differences in faculties that would logically contribute to musical ability, such as sound processing and motor coordination.

But it gets more complicated. The new findings suggest that it's the way our genes and environment interact that is most crucial to musical accomplishment. Not only do genetically-influenced qualities contribute to whether people are likely to practice, Hambrick’s data show that the genetic influence on musical success was far larger in those who practiced more. It was previously thought that people might start out with a genetic leg up for a particular activity, but that skill derived through practice could eventually surpass any genetic predilections. “Our results suggest that it’s the other way around,” explains Hambrick, “that genes become more, not less important in differentiating people as they practice…genetic potentials for skilled performance are most fully expressed and fostered by practice."

In other words, people have various genetically determined basic abilities, or talents, that render them better or worse at certain skills, but that can be nurtured through environmental influences. Hence Hambrick is far from down on dedication: “If you want to be a better musician, practice! If you want to be a better golfer, practice!”

A similar study forthcoming in Psychological Science by Miriam A. Mosing of Stockholm's Karolinska Institute leans even more heavily on the role of genes in musicality. Mosing and colleagues looked at the association between music practice and specific musical abilities like rhythm, melody and pitch discrimination in over 10,000 identical Swedish twins. They reported that the propensity to practice was between 40% and 70% heritable and that there was no difference in musical ability between twins with varying amounts of cumulative practice. “Music practice,” they conclude, “may not causally influence musical ability and … genetic variation among individuals affects both ability and inclination to practice.”

Though both new studies focused on musicality, the findings can in theory be extrapolated to other skilled and creative activities. Similar data exist suggesting a genetic component to chess mastery, and Hambrick is currently analyzing the same twin data set to assess the genetics of scientific accomplishment. Not to get overly reductionist, but it could be assumed that nearly all of our talents and cognitive characteristics are at least partly influenced by our respective strings of nucleotides. Complex pursuits, whether creative or technical, involve numerous communicating regions from all over the brain (in contrast to the overly simplistic and now debunked "left brain/right brain" assignments for analytical vs. creative types). These structures and the brain’s general blueprint are shaped by our genetic code throughout development; genes also encode the proteins that run our bodies and brains, and plenty of data link specific genetic profiles with varying cognitive abilities.

Like all studies, Hambrick’s has its limitations. The assessments of musical practice and accomplishment were “fairly coarse” and the study subjects were primarily high-achieving students, though not specifically selected for elite musical ability. And while beyond the scope of both Hambrick’s and Mosing’s investigations, their work evokes the question of what it is to be “good” at something — how to reconcile the murky, often contentious divide between technical proficiency and creativity or artistic worth. Virtuosity can come across as cold while three sloppy guitar chords can register in deep, mind-altering, meaningful ways. “No one would argue that the Sex Pistols or The Ramones — or even The Beatles or The Rolling Stones — were the most technically proficient musicians,” says Hambrick, “but they created something that, for whatever reason, resonated with people. I think it would be interesting to measure both creativity and expertise in the same sample. My guess is they are both influenced by genes, but by different genes.”

It’s potentially unsettling that our abilities are so influenced by a genetic crapshoot. Some people will always be maddeningly proficient at shredding through guitar solos, or blowing tubas, or winning amateur competitions at the Apollo Theater. But Hambrick sees his findings as constructive. If practicing our way to being just pretty good at something isn’t enough, we can seek out our strengths instead. More importantly, we can avoid setting up unrealistic expectations for children: “I think it’s important to let kids try a lot of different things…and find out what they’re good at, which is probably also what they’ll enjoy. But the idea that anyone can become an expert at most anything isn't scientifically defensible, and pretending otherwise is harmful to society and individuals.”

http://www.scientificamerican.com/article/what-do-great-musicians-have-in-common-dna/?WT.mc_id=SA_Facebook

156
Raymond Burse hasn't held a minimum-wage job since his high school and college years, when he worked side jobs on golf courses and paving crews. Yet this summer, the interim president at Kentucky State University made a large gesture to his school's lowest-paid employees. Burse announced that he would take a 25 percent salary cut to boost their wages.

The 24 school employees making less than $10.25 an hour, who mostly serve as custodial staff, groundskeepers and lower-end clerical workers, will see their pay rise to that new baseline. Some had been making as little as $7.25, the current federal minimum. Burse, who assumed the role of interim president in June, says he asked the school's chief financial officer how much such an increase would cost. The amount: $90,125.


"I figured it was easier for me to forgo that amount, rather than adding an additional burden on the institution," Burse says. "I had been thinking about it almost since the day they started talking to me about being interim president."

Burse announced his decision to take the funds out of his salary in a board meeting at the end of July, and the school ratified his employment contract on the spot — decreasing it from $349,869 to $259,744. He has pledged to take further salary cuts any time new minimum-wage employees are hired on his watch, to bring their hourly rate to $10.25.
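The figures are internally consistent: $349,869 minus the $90,125 cost of the raises is exactly $259,744. For the curious, here is a back-of-the-envelope Python sketch of the kind of calculation the CFO would have run; the individual wages below are invented, since only the $10.25 floor, the 24-employee count, and the dollar totals come from the article.

FLOOR = 10.25   # new wage floor, $/hour
HOURS = 2080    # assumed full-time hours per year

# hypothetical current wages for 24 workers -- KSU's payroll detail isn't public
wages = [7.25] * 10 + [8.50] * 8 + [9.75] * 6

cost = sum((FLOOR - w) * HOURS for w in wages if w < FLOOR)
print(f"annual cost of raises: ${cost:,.0f}")  # ~$98,000 with these made-up wages

# the contract arithmetic reported in the article
print(349_869 - 90_125)  # 259744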

This isn't Burse's first time leading KSU. He served as president from 1982 to 1989, before joining a law firm in Louisville, Ky., and then becoming a vice president and general counsel at GE. He will hold the school's interim leadership role for at least a year, or longer if the school needs more time to find a permanent replacement.

Burse describes himself as someone who believes in raising wages, and who also has high expectations and demands for his staff. "I thought that if I'm going to ask them to really be committed and give this institution their all, I should be doing something in return," Burse says. "I thought it was important."

Earlier this year, the Kentucky House passed a bill to increase the minimum wage to $10.10 by July 2016, but the bill failed to pass the state's Senate. There have been a few recent instances of colleges boosting their minimum wage on campus, such as at Hampton University in Virginia, where the president made a personal donation to increase workers' salaries. Yet at most organizations that have made news for their decisions to increase pay, like Ikea and Gap, the funding isn't tied to deductions in leaders' salaries.

"I didn’t have any examples of it having been done out there and I didn’t do it to be an example to anyone else," Burse says. "I did it to do right by the employees here."

http://www.washingtonpost.com/blogs/on-leadership/wp/2014/08/05/kentucky-state-president-to-share-his-salary-with-schools-lowest-paid-workers/?tid=sm_fb

157
"Practically every food you buy in a store for consumption by humans is genetically modified food. There are no wild, seedless watermelons. There's no wild cows." — Neil deGrasse Tyson


Cosmos star Neil deGrasse Tyson is known for defending climate science and the science of evolution. And now, in a video recently posted on YouTube (the actual date when it was recorded is unclear), he takes a strong stand on another hot-button scientific topic: Genetically modified foods.

In the video, Tyson can be seen answering a question posed in French about "des plantes transgéniques" (transgenic plants)—responding with one of his characteristic, slowly building rants.

"Practically every food you buy in a store for consumption by humans is genetically modified food," asserts Tyson. "There are no wild, seedless watermelons. There's no wild cows...You list all the fruit, and all the vegetables, and ask yourself, is there a wild counterpart to this? If there is, it's not as large, it's not as sweet, it's not as juicy, and it has way more seeds in it. We have systematically genetically modified all the foods, the vegetables and animals that we have eaten ever since we cultivated them. It's called artificial selection." You can watch the full video above.

In fairness, critics of GM foods make a variety of arguments that go beyond the simple question of whether the foods we eat were modified prior to the onset of modern biotechnology. They also draw a distinction between modifying plants and animals through traditional breeding and genetic modification that requires the use of biotechnology, and involves techniques such as inserting genes from different species.

http://www.motherjones.com/environment/2014/07/neil-degrasse-tyson-on-gmo

158
General Discussion / If minimum wages, why not maximum wages?
« on: July 29, 2014, 09:09:02 AM »
I was in a gathering of academics the other day, and we were discussing minimum wages. The debate moved on to increasing inequality, and the difficulty of doing anything about it. I said why not have a maximum wage? To say that the idea was greeted with incredulity would be an understatement. "So you want to bring back price controls?" was one response. "How could you possibly decide on what a maximum wage should be?" was another.

So why the asymmetry? Why is the idea of setting a maximum wage considered outlandish among economists?

The problem is clear enough. All the evidence, in the US and UK, points to the income of the top 1% rising much faster than the average. Although the share of income going to the top 1% in the UK fell sharply in 2010, the more up-to-date evidence from the US suggests this may be a temporary blip caused by the recession. The latest report from the High Pay Centre in the UK says:

“Typical annual pay for a FTSE 100 CEO has risen from around £100-£200,000 in the early 1980s to just over £1 million at the turn of the 21st century to £4.3 million in 2012. This represented a leap from around 20 times the pay of the average UK worker in the 1980s to 60 times in 1998, to 160 times in 2012 (the most recent year for which full figures are available).”

I find the attempts of some economists and journalists to divert attention away from this problem very revealing. The most common tactic is to talk about some other measure of inequality, whereas what is really extraordinary and what worries many people is the rise in incomes at the very top. The suggestion that we should not worry about national inequality because global inequality has fallen is even more bizarre.

What lies behind this huge increase in inequality at the top? The problem with the argument that it just represents higher productivity of CEOs and the like is that this increase in inequality is much more noticeable in the UK and US than in other countries, yet there is no evidence that CEOs in UK- and US-based firms have been substantially outperforming their overseas rivals. I discussed in this post a paper by Piketty, Saez and Stantcheva which set out a bargaining model, where the CEO can put more or less effort into exploiting their monopoly power within a company. According to this model, CEOs in the UK and US have since 1980 been putting in more bargaining effort than their overseas counterparts. Why? According to Piketty et al, one answer may be that top tax rates fell in the 1980s in both countries, making the returns to effort much greater.

If you believe this particular story, then one solution is to put top tax rates back up again. Even if you do not buy this story, the suspicion must be that this increase in inequality represents some form of market failure. Even David Cameron agrees. The solution the UK government has tried is to give more power to the shareholders of the firm. The High Pay Centre notes that “thus far, shareholders have not used their new powers to vote down executive pay proposals at a single FTSE 100 company,” although, as the FT reports, shareholder ‘revolts’ are becoming more common. My colleague Brian Bell and John Van Reenen do note in a recent study “that firms with a large institutional investor base provide a symmetric pay-performance schedule while those with weak institutional ownership protect pay on the downside.” However they also note that “a specific group of workers that account for the majority of the gains at the top over the last decade [are] financial sector workers .. [and] .. the financial crisis and Great Recession have left bankers largely unaffected.”

So increasing shareholder power may only have a small effect on the problem. So why not consider a maximum wage? One possibility is to cap top pay as some multiple of the lowest paid, as a recent Swiss referendum proposed. That referendum was quite draconian, suggesting a multiple of 12, yet it received a large measure of popular support (35% in favour, 65% against). The Swiss did vote to ban ‘golden hellos and goodbyes’. One neat idea is to link the maximum wage to the minimum wage, which would give CEOs an incentive to argue for higher minimum wages! Note that these proposals would have no disincentive effect on the self-employed entrepreneur.
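The mechanics of such a cap are trivial to state; the hard part is the politics. A minimal Python sketch of the rule (the 12x multiple is the Swiss referendum's figure; the payroll numbers are invented):

MULTIPLE = 12  # the Swiss "1:12" proposal: top pay at most 12x the bottom

payroll = {"cleaner": 18_000, "analyst": 45_000, "ceo": 4_300_000}  # invented

floor = min(payroll.values())
cap = MULTIPLE * floor
for role, pay in payroll.items():
    status = "OK" if pay <= cap else f"over cap by {pay - cap:,}"
    print(f"{role:8s} {pay:>10,}  {status}")

# With an 18,000 floor the cap is 216,000: under this rule, the only way
# to pay the CEO more is to raise the lowest wage first.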

If economists have examined these various possibilities, I have missed it. One possible reason why many economists seem to baulk at this idea is that it reminds them too much of the ‘bad old days’ of incomes policies and attempts by governments to fix ‘fair wages’. But this is an overreaction, as a maximum wage would just be the counterpart to the minimum wage. I would be interested in any other thoughts about why the idea of a maximum wage seems not to be part of economists’ Overton window.

from economist Simon Wren-Lewis's blog

159
Germany is the global leader in energy efficiency, and the U.S., with its ingrained car culture, is among the least energy efficient of the world’s largest economies.

That’s the conclusion of a new report released by the American Council for an Energy-Efficient Economy, which ranks the world’s 16 largest economies based on 31 different measurements of efficiency, including national energy savings targets, fuel economy standards for vehicles, efficiency standards for appliances, average vehicle mpg, and energy consumed per square foot of floor space in residential buildings, among other metrics.

The ACEEE report ranked the U.S. 13th overall, with Germany, Italy, smaller European Union nations, France and China making up the top five most energy efficient economies in the world.

Using energy more efficiently is a critical step countries can take to reduce their fossil fuels consumption and its related climate change-driving carbon dioxide and methane emissions. The U.S. Environmental Protection Agency used state energy efficiency standards to help set CO2 emissions reductions goals for each state in the agency’s proposed Clean Power Plan, announced in June.

The U.S. was the 9th most energy-efficient economy in the ACEEE’s 2012 ranking, which criticized the country for focusing more on road construction than expanding public transportation.

Since then, the U.S. has made very little progress toward using energy more efficiently, the 2014 report says. This year, the U.S. took a major hit for its lack of a national energy savings plan or national greenhouse gas reduction plan, and its ongoing resistance to public transit.

Americans drive more than 9,300 miles per year, more than citizens of any other major world economy, according to the report. Australians drive the second-most, at 6,368 miles per year. At the other end of the ranking, Indians drive the least, just 85 miles per year per capita, followed by the Chinese at 513 miles per year.

Americans also ranked last for the percentage of their travel accomplished using public transit — 10 percent, tying with Canada. Residents of China use transit 72 percent of the time, followed by Indians, who use transit 65 percent of the time.

The U.S. scored well for its energy efficiency tax credit and loan programs. And, it scored well for efficient ovens and refrigerators.

“We’re a leader in appliance and equipment standards,” said the report’s lead author, ACEEE national policy research analyst Rachel Young. The report called EnergyGuide appliance labels and Energy Star labels “best practices” for voluntary appliance and equipment standards.

The ACEEE gave the U.S. credit for energy efficiency standards included in residential and commercial building codes in many states, but criticized the country for not having adequate national building standards in place.

Young said the U.S. may improve in the energy efficiency rankings if the Clean Power Plan is finalized, because states may be able to count more efficient power plants and buildings toward their required reductions in carbon dioxide emissions from existing power plants.

“The rule could spur greater investment in energy efficiency throughout the country,” she said.

By contrast, Germany scored well in nearly every category in the survey, including spending on energy efficiency measures, aggressive building codes, and the country’s tax credit and loan programs.

Germany has set a national target of a 20 percent reduction in primary energy consumption below 2008 levels by 2020 and 50 percent by 2050.

The U.S. is one of only two countries in the survey with no national energy savings plan or greenhouse gas emissions reduction plan.

http://www.scientificamerican.com/article/u-s-gets-lackluster-energy-efficiency-rating/?WT.mc_id=SA_Facebook

160
General Discussion / AZ takes nearly 2 hours to execute prisoner...
« on: July 25, 2014, 01:10:39 PM »
In January the state of Ohio executed the convicted rapist and murderer Dennis McGuire. As in the other 31 U.S. states with the death penalty, Ohio used an intravenously injected drug cocktail to end the inmate's life. Yet Ohio had a problem. The state had run out of its stockpile of sodium thiopental, a once common general anesthetic and one of the key drugs in the executioner's lethal brew. Three years ago the only U.S. supplier of sodium thiopental stopped manufacturing the drug. A few labs in the European Union still make it, but the E.U. prohibits the export of any drugs if they are to be used in an execution.

Ohio's stockpile of pentobarbital, its backup drug, expired in 2009, and so the state turned to an experimental cocktail containing the sedative midazolam and the painkiller hydromorphone. But the executioner was flying blind. Execution drugs are not tested before use, and this experiment went badly. The priest who gave McGuire his last rites reported that McGuire struggled and gasped for air for 11 minutes, his strained breaths fading into small puffs that made him appear “like a fish lying along the shore puffing for that one gasp of air.” He was pronounced dead 26 minutes after the injection.

There is a simple reason why the drug cocktail was not tested before it was used: executions are not medical procedures. Indeed, the idea of testing how to most effectively kill a healthy person runs contrary to the spirit and practice of medicine. Doctors and nurses are taught to first “do no harm”; physicians are banned by professional ethics codes from participating in executions. Scientific protocols for executions cannot be established, because killing animal subjects for no reason other than to see what kills them best would clearly be unethical. Although lethal injections appear to be medical procedures, the similarities are just so much theater.

Yet even if executions are not medical, they can affect medicine. Supplies of propofol, a widely used anesthetic, came close to being choked off as a result of Missouri's plan to use the drug for executions. The state corrections department placed an order for propofol from the U.S. distributor of a German drug manufacturer. The distributor sent 20 vials of the drug in violation of its agreement with the manufacturer, a mistake that the distributor quickly caught. As the company tried in vain to get the state to return the drug, the manufacturer suspended new orders. The manufacturer feared that if the drug was used for lethal injection, E.U. regulators would ban all exports of propofol to the U.S. “Please, Please, Please HELP,” wrote a vice president at the distributor to the director of the Missouri corrections department. “This system failure—a mistake—1 carton of 20 vials—is going to affect thousands of Americans.”

This was a vast underestimate. Propofol is the most popular anesthetic in the U.S. It is used in some 50 million cases a year—everything from colonoscopies to cesareans to open-heart surgeries—and nearly 90 percent of the propofol used in the U.S. comes from the E.U. After 11 months, Missouri relented and agreed to return the drug.

Such incidents illustrate how the death penalty can harm ordinary citizens. Supporters of the death penalty counter that its potential to discourage violent crime confers a net social good. Yet no sound science supports that position. In 2012 the National Academies' research council concluded that research into any deterrent effect that the death penalty might provide is inherently flawed. Valid studies would need to compare homicide rates in the same states at the same time, but both with and without capital punishment—an impossible experiment. And it is clear that the penal system does not always get it right when meting out justice. Since 1973 the U.S. has released 144 prisoners from death row because they were found to be innocent of their crimes.

Concerns about drug shortages for executions have led some states to propose reinstituting the electric chair or the gas chamber—methods previously dismissed by the courts as cruel and unusual. In one sense, these desperate states are on to something. Strip off its clinical facade, and death by intravenous injection is no less barbarous.

http://www.scientificamerican.com/article/lete28099s-stop-pretending-the-death-penalty-is-a-medical-procedure-editorial/

161
Spamalot / hey AD
« on: July 24, 2014, 12:04:47 AM »
pyopencl & mpmath were waaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaay easier to install in ubuntu than windows

suuuuuuuck it bro

although i still have no idea what made pyopencl finally start working. the 'easier' install still took 7 hours, woo
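fwiw, the quickest smoke test i know for a fresh pyopencl install is the standard "double an array on the device" snippet below -- if it prints doubled numbers, the stack works. (create_some_context may prompt you to pick a device.)

import numpy as np
import pyopencl as cl

ctx = cl.create_some_context()
queue = cl.CommandQueue(ctx)

a = np.arange(8, dtype=np.float32)
buf = cl.Buffer(ctx, cl.mem_flags.READ_WRITE | cl.mem_flags.COPY_HOST_PTR, hostbuf=a)

prog = cl.Program(ctx, """
__kernel void double_it(__global float *x) {
    int i = get_global_id(0);
    x[i] *= 2.0f;
}
""").build()

prog.double_it(queue, a.shape, None, buf)
cl.enqueue_copy(queue, a, buf)
print(a)  # expect [ 0.  2.  4.  6.  8. 10. 12. 14.]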

162
General Discussion / 10 Things Millennials Won’t Spend Money On
« on: July 23, 2014, 12:13:04 PM »
By 2017, millennials will have more buying power than any other generation. But so far, they're not spending like their parents did.

Millennials are often maligned for their lack of financial literacy, but there is one money skill the younger generation has in spades: saving. After growing up during the Great Recession, millennials want to keep every cent they can. (If you don’t believe us, just check out this Reddit Frugal thread inspired by our recent post on millennial retirement super-saving.)

This generation may be way ahead of where their parents were at the same age when it comes to preparing for retirement, but the frugality doesn’t end there. Kids these days also aren’t making the same buying decisions our parents made. Here are 10 things that a disproportionate number of today’s young adults won’t shell out for.

1. Pay TV
The average American still consumes 71% of his or her media on television, but for people age 14-24, it’s only 46%—with the lion’s share being consumed on phone, tablet, or PC. Many young people aren’t getting a TV at all. Nielsen found that most “Zero-TV” households tended toward the younger set, with adults under 35 making up 44% of all television teetotalers.

Millennials aren’t the only ones tuning out the tube. In 2013, Nielsen reported aggregate TV watching time shrank for the first time in four years.

2. Investments
By all accounts, young people should be investing in equities. Those just entering the work force have plenty of time before retirement to ride out market blips, and experts recommend younger investors place 75% to 90% of their portfolio in stocks or stock funds.

Unfortunately, after growing up in the Great Recession, millennials would rather put their money in a sock drawer than on Wall Street. When Wells Fargo surveyed roughly 1,500 adults between 22 and 32 years of age, 52% stated they were “not very” or “not at all” confident in the stock market as a place to invest for retirement.

Of those surveyed, only 32% said they had the majority of their savings in stocks or mutual funds. (To be fair, an equal number admitted to having no clue what they were invested in, so hopefully their trust fund advisors are making good decisions.)

3. Mass-Market Beer
Bud. Coors. Miller. When parents want a drink, they reach for the classics. Maybe a Heineken for a little extra adventure. Millennials? Not so much. When Generation Now (thank god that moniker didn’t catch on) wants to get boozy, the data says we prefer indie brews.

According to one recent study, 43% of millennials say craft beer tastes better than mainstream beers, while only 32% of baby boomers said the same. And 50% of millennials have consumed craft brew, versus 35% of the overall population. Even Pete Coors, CEO of guess-which-brand, blames pesky kids for his beer’s declining sales.

4. Cars
Back when the Beach Boys wrote Little Deuce Coupe in 1963, there was a whole genre called “Car Songs.” Nowadays you’d be hard pressed to find someone under 35 who knows what a “competition clutch with the four on the floor” even means.

The sad fact is that American car culture is dying a slow death. Yahoo Finance reports the percentage of 16-to-24-year-olds with a driver’s license has plummeted since 1997 and is now below 70% for the first time since Little Deuce Coupe’s release. According to the Atlantic, “In 2010, adults between the ages of 21 and 34 bought just 27 percent of all new vehicles sold in America, down from the peak of 38 percent in 1985.”

5. Homes
It’s not that millennials don’t want to own homes—nine in ten young people do—it’s that they can’t afford them. Harvard’s Joint Center for Housing Studies found that the homeownership rate among adults younger than 35 fell by 12 percent between 2006 and 2011, and that 2 million more of them were living with Mom and Dad.

It’s going to be a while before young people start purchasing homes again. The economic downturn set this generation’s finances back years, and reforms like the Dodd-Frank Act have made it even more difficult for the newly employed to get credit. Now that unemployment is decreasing, working millennials are still renting before they buy.

6. Bulk Warehouse Club Goods
This one initially sounds weird, but remember: millennials don’t own cars or homes. So a Costco membership doesn’t make much sense. It’s not easy to bring home a year’s supply of Nesquik and paper towels without a ride, and even if you take a bus, there’s no room to stash hoards of kitchen supplies in a studio apartment.

Responding to tepid millennial demand, the big box giant is trying to win over youngsters by partnering with Google to deliver certain items right to your home. However, even Costco doesn’t seem all that excited about its new strategy.

“Don’t expect us to go to everybody’s doorstep,” Richard Galanti, Costco’s chief financial officer, told Bloomberg Businessweek. “Delivering small quantities of stuff to homes is not free. Ultimately, somebody’s got to pay for it.”

7. Weddings
Getting hitched early in life used to be something of a rite of passage into adulthood. A full 65% of the Silent Generation married at age 18 to 32. Since then, though, Americans have been waiting longer and longer to tie the knot. Pew Research found 48% of boomers were married while in that age range, compared to 35% of Gen X. Millennials are bringing up the rear at just 26%.

Just like with homes, it’s not that today’s youth just hates wedding dresses—far from it. Sixty-nine percent of millennials told Pew they would like to marry, but many are waiting until they’re more financially stable before doing so.

8. Children
It’s hard to spend money on children if you don’t have any.

After weddings, you probably saw this one coming, but millennials’ procreation abstention isn’t only because they’re not married. Many just aren’t planning on having kids. In a 2012 study, fewer than half of millennials (42%) said they planned to have children. That’s down from 78% 20 years ago.

Stop me if you’ve heard this one: it’s not that millennials don’t want children (or homes, or weddings, or ponies), it’s that this whole recession thing has really scared them off big financial and life commitments. Most young people in the above study hoped to have kids one day, but didn’t think their economic stars would align to make it happen.

9. Health insurance
According to the Kaiser Family Foundation, adults ages 18 to 34 made up 40% of the uninsured population in the pre-Obamacare world. Why don’t young people get health coverage? Because they’re probably not going to get sick. This demographic is so healthy that those in the health insurance game refer to them as “invincibles.”

Since the Affordable Care Act took effect, millennials have gradually been buying more insurance. Twenty-eight percent of Obamacare’s 8 million new enrollees were 18-to-34-year-olds. That’s well short of the 40% the Congressional Budget Office wanted in order to subsidize older Americans’ plans, but better than the paltry number of millennials who signed up before Zach Galifianakis got involved.

10. Anything you tell them to buy
When buying a product, older Americans tend to trust the advice of people they know. Sixty-six percent of boomers said the recommendations of friends and family members influence their purchasing decisions more than a stranger’s online review.

Most millennials, on the other hand, don’t want their parents’ or peers’ help. Fifty-one percent of young adults say they prefer product reviews from people they don’t know.

http://time.com/money/2820241/10-things-millennials-wont-shell-out-for/

163
A reporter asked me for a quote regarding the importance of statistics. But, after thinking about it for a moment, I decided that statistics isn’t so important at all. A world without statistics wouldn’t be much different from the world we have now.

What would be missing, in a world without statistics?

Science would be pretty much ok. Newton didn’t need statistics for his theories of gravity, motion, and light, nor did Einstein need statistics for the theory of relativity. Thermodynamics and quantum mechanics are fundamentally statistical, but lots of progress could’ve been made in these areas without statistics. The second law of thermodynamics is an observable fact, ditto the two-slit experiment and various experimental results revealing the nature of the atom. The A-bomb and, almost certainly, the H-bomb, maybe these would never have been invented without statistics, but on balance I think most people would feel that the world would be a better place without these particular scientific developments. Without statistics, we could forget about discovering the Higgs boson etc., but that doesn’t seem like such a loss for humanity.

At a more applied level, statistics helped to win World War 2, most notably in cracking the Enigma code but also in various operations-research efforts. And it’s my impression that “our” statistics were better than “their” statistics. So that’s something.

Where would civilian technology be without statistics? I’m not sure. I don’t have a sense of how necessary statistics was for quantum theory. In a world without statistics, would the study of quantum physics have progressed far enough so that transistors were invented? This one, I don’t know. And without statistics we wouldn’t have modern quality control, so maybe we’d still be driving around in AMC Gremlins and the like. Scary thought, but not a huge deal, I’d think. No transistors, though, that would make a difference in my life. No transistors, no blogging! And I guess we could also forget about various unequivocally beneficial technological innovations such as modern pacemakers, hearing aids, cochlear implants, and Clippy.

Modern biomedicine uses lots and lots of statistics, but would medicine be so much worse without it? I don’t think so, at least not yet. You don’t need statistics to see that penicillin works, nor to see that mosquitos transmit disease and that nets keep the mosquitos out. Without statistics, I assume that various mistakes would get into the system, various ineffective treatments that people think are effective, etc. But on balance I doubt these would be huge mistakes, and the big ones would eventually get caught, with careful record-keeping even without statistical inference and adjustments. Without statistics, biologists would not be able to sequence the genome, and I assume they’d be much slower at developing tools such as tests that allow you to check for chromosomal abnormalities in amnio. I doubt all these things add up to much yet, but I guess there’s promise for the future. Statistics is also necessary for a lot of drug development—right now my colleagues and I are working on a pharmacodynamic model of dosing—but, again, without any of this, it’s not clear the world would be so much different.

The Poverty Lab team use statistics and randomized experiments to see what works to help the lives of poor people around the world. That’s cool but I’m not ultimately convinced this all makes a difference in the big picture. Or, to put it another way, I suspect that the statistical validation serves mostly as a way to build political consensus for economic policies that will be effective in sharing the wealth. By demonstrating in a scientific way that Treatment X is effective, this supports the idea that there is a way to help the sort of people who live in what Nicholas Wade would describe as “tribal” societies. So, sure, fine, but in this case the benefits of the statistical methods are somewhat indirect.

Without statistics, we wouldn’t have most of the papers in “Psychological Science,” but I could handle that. Piaget didn’t need any statistics, and I think the modern successors of Piaget could’ve done pretty much what they’ve done without statistics, just by careful observation of major transitions.

Careful observation and precise measurement can be done, with or without statistical methods. Indeed, researchers often use statistics as a substitute for careful observation and precise measurement. That is a horrible thing to do, and if you have a clear understanding of statistical theory, you can see why. But statistics is hard, and lots of researchers (and journal editors, news reporters, etc.) don’t have that understanding. When statistics is used as a substitute for, rather than an adjunct to, scientific measurement, we get problems.

OK, here’s another one: no statistics, no psychometrics. That’s too bad but one could make the argument that, on the whole, psychometrics has done more harm than good (value-added assessment, anyone?). Don’t get me wrong—I like psychometrics, and a strong argument could be made that it’s done more good than harm—but my point here is that the net benefit is not clear; a case would have to be made.

Polling. Can’t do it well without statistics. But, would a world without polling be so horrible? Much as I hate to admit it, I don’t think so. Don’t get me wrong, I think polling is on balance a good thing—I agree with George Gallup that measurement of public opinion is an important part of the modern democratic process—but I wouldn’t want to hang too much of the benefits of statistics on this one use, given that I expect lots of people would argue that opinion polls do more harm than good in politics.

The alternative to good statistics is . . .

Perhaps the most important benefits of statistics come not from the direct use of statistical methods in science and technology, but rather in helping us learn about the world. Statisticians from Francis Galton and Ronald Fisher onward have used statistics to give us a much deeper understanding of human and biological variation. I can’t see how any non-statistical, mechanistic model of the world could reproduce that level of understanding. Forget about p-values, Bayesian inference, and the rest: here I’m simply talking about the nature of correlation and variation.

For a more humble example, consider Bill James. Baseball is a silly example, sure, but the point is to see how much understanding has been gained in this area through statistical measurement and comparison. As James so memorably wrote, the alternative to good statistics is not “no statistics,” it’s “bad statistics.” James wrote about baseball commentators who would make asinine arguments which they would back up by picking out numbers without context. In politics, the equivalent might be a proudly humanistic columnist such as David Brooks supporting his views by just making up numbers or featuring various “too good to be true” statistics and not checking them.

So here’s one benefit to the formal study of statistics: Without any statistics, there still would be numbers, along with people trying to interpret them.

Could governments and large businesses be managed well without statistics? I’m not sure. Given that half the U.S. Congress seems willing to shut down the government from time to time, it’s not clear that any agreement on the numbers will have much to do with political action. Similarly, all the statistics in the world don’t seem to be stopping the euro zone from drifting. But maybe things would be much worse without a common core of statistical agreement. I don’t know; unfortunately this seems like the sort of causal question that is too difficult for statistics to answer.

Finally, one way that statistics is potentially having a huge impact in our lives is through the measurement of global warming and all the rest. But I’m guessing that a lot of this could be done with a pre-statistical understanding. The basic physics is already there, as would be the careful measurements. Statistical modeling is certainly relevant to the study of climate change—if you’re trying to reconstruct historical climate conditions from tree-ring data, it’s tough enough to do it with statistical modeling, I can’t imagine how it could be done otherwise—but the basic patterns of carbon dioxide, temperature, melting ice, etc., are apparent in any case. And, even with statistics, much uncertainty remains.

Summary

When I started writing this post, I was thinking that statistics doesn’t really matter, but I think that’s because I was focusing on some of the more highly publicized but less beneficial applications of statistics: the use of statistical experimentation and inference to get p-values for tabloid-bait scientific papers, or for Google, Amazon, etc., to perfect their techniques for squeezing money out of their customers or, even at best, to test a medical treatment that increases survival rate for some rare disease by 2 percentage points. But statistics is central to how we think about the world. I still think that statistics is much less central to our lives than, say, chemistry. But it ain’t nothing.

http://andrewgelman.com/2014/07/23/world-without-statistics/

164
General Discussion / Basilisk Gedankenexperiment Freaks out Futurists
« on: July 21, 2014, 08:34:34 PM »
Slender Man. Smile Dog. Goatse. These are some of the urban legends spawned by the Internet. Yet none is as all-powerful and threatening as Roko’s Basilisk. For Roko’s Basilisk is an evil, godlike form of artificial intelligence, so dangerous that if you see it, or even think about it too hard, you will spend the rest of eternity screaming in its torture chamber. It's like the videotape in The Ring. Even death is no escape, for if you die, Roko’s Basilisk will resurrect you and begin the torture again.

Are you sure you want to keep reading? Because the worst part is that Roko’s Basilisk already exists. Or at least, it already will have existed—which is just as bad.

Roko’s Basilisk exists at the horizon where philosophical thought experiment blurs into urban legend. The Basilisk made its first appearance on the discussion board LessWrong, a gathering point for highly analytical sorts interested in optimizing their thinking, their lives, and the world through mathematics and rationality. LessWrong’s founder, Eliezer Yudkowsky, is a significant figure in techno-futurism; his research institute, the Machine Intelligence Research Institute, which funds and promotes research around the advancement of artificial intelligence, has been boosted and funded by high-profile techies like Peter Thiel and Ray Kurzweil, and Yudkowsky is a prominent contributor to academic discussions of technological ethics and decision theory. What you are about to read may sound strange and even crazy, but some very influential and wealthy scientists and techies believe it.

One day, LessWrong user Roko postulated a thought experiment: What if, in the future, a somewhat malevolent AI were to come about and punish those who did not do its bidding? What if there were a way (and I will explain how) for this AI to punish people today who are not helping it come into existence later? In that case, weren’t the readers of LessWrong right then being given the choice of either helping that evil AI come into existence or being condemned to suffer?

You may be a bit confused, but the founder of LessWrong, Eliezer Yudkowsky, was not. He reacted with horror:

Quote
Listen to me very closely, you idiot.

YOU DO NOT THINK IN SUFFICIENT DETAIL ABOUT SUPERINTELLIGENCES CONSIDERING WHETHER OR NOT TO BLACKMAIL YOU. THAT IS THE ONLY POSSIBLE THING WHICH GIVES THEM A MOTIVE TO FOLLOW THROUGH ON THE BLACKMAIL.

You have to be really clever to come up with a genuinely dangerous thought. I am disheartened that people can be clever enough to do that and not clever enough to do the obvious thing and KEEP THEIR IDIOT MOUTHS SHUT about it, because it is much more important to sound intelligent when talking to your friends.

This post was STUPID.

Yudkowsky said that Roko had already given nightmares to several LessWrong users and had brought them to the point of breakdown. Yudkowsky ended up deleting the thread completely, thus assuring that Roko’s Basilisk would become the stuff of legend. It was a thought experiment so dangerous that merely thinking about it was hazardous not only to your mental health, but to your very fate.

Some background is in order. The LessWrong community is concerned with the future of humanity, and in particular with the singularity—the hypothesized future point at which computing power becomes so great that superhuman artificial intelligence becomes possible, as does the capability to simulate human minds, upload minds to computers, and more or less allow a computer to simulate life itself. The term was coined in 1958 in a conversation between mathematical geniuses Stanislaw Ulam and John von Neumann, where von Neumann said, “The ever accelerating progress of technology ... gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue.” Futurists like science-fiction writer Vernor Vinge and engineer/author Kurzweil popularized the term, and as with many interested in the singularity, they believe that exponential increases in computing power will cause the singularity to happen very soon—within the next 50 years or so. Kurzweil is chugging 150 vitamins a day to stay alive until the singularity, while Yudkowsky and Peter Thiel have enthused about cryonics, the perennial favorite of rich dudes who want to live forever. “If you don't sign up your kids for cryonics then you are a lousy parent,” Yudkowsky writes.

If you believe the singularity is coming and that very powerful AIs are in our future, one obvious question is whether those AIs will be benevolent or malicious. Yudkowsky’s foundation, the Machine Intelligence Research Institute, has the explicit goal of steering the future toward “friendly AI.” For him, and for many LessWrong posters, this issue is of paramount importance, easily trumping the environment and politics. To them, the singularity brings about the machine equivalent of God itself.

Yet this doesn’t explain why Roko’s Basilisk is so horrifying. That requires looking at a critical article of faith in the LessWrong ethos: timeless decision theory. TDT is a guideline for rational action based on game theory, Bayesian probability, and decision theory, with a smattering of parallel universes and quantum mechanics on the side. TDT has its roots in the classic thought experiment of decision theory called Newcomb’s paradox, in which a superintelligent alien presents two boxes to you:



The alien gives you the choice of either taking both boxes, or only taking Box B. If you take both boxes, you’re guaranteed at least $1,000. If you just take Box B, you aren’t guaranteed anything. But the alien has another twist: Its supercomputer, which knows just about everything, made a prediction a week ago as to whether you would take both boxes or just Box B. If the supercomputer predicted you’d take both boxes, then the alien left the second box empty. If the supercomputer predicted you’d just take Box B, then the alien put the $1 million in Box B.

So, what are you going to do? Remember, the supercomputer has always been right in the past.

This problem has baffled no end of decision theorists. The alien can’t change what’s already in the boxes, so whatever you do, you’re guaranteed to end up with more money by taking both boxes than by taking just Box B, regardless of the prediction. Of course, if you think that way and the computer predicted you’d think that way, then Box B will be empty and you’ll only get $1,000. If the computer is so awesome at its predictions, you ought to take Box B only and get the cool million, right? But what if the computer was wrong this time? And regardless, whatever the computer said then can’t possibly change what’s happening now, right? So prediction be damned, take both boxes! But then …

The maddening conflict between free will and godlike prediction has not led to any resolution of Newcomb’s paradox, and people will call themselves “one-boxers” or “two-boxers” depending on where they side. (My wife once declared herself a one-boxer, saying, “I trust the computer.”)
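
The dispute is, at bottom, a disagreement about expected value. Here is a minimal sketch of the arithmetic (the $1,000/$1 million stakes are the standard ones from the thought experiment; the predictor’s accuracy p is a free parameter):

Code:
# Expected payoffs in Newcomb's paradox. Box A always holds $1,000; Box B
# holds $1,000,000 only if the predictor foresaw a one-box choice.
# p is the probability that the prediction turns out to be correct.

def one_box(p):
    return p * 1_000_000            # paid only when correctly predicted

def two_box(p):
    # Box A is guaranteed; Box B is full only if the predictor was wrong.
    return 1_000 + (1 - p) * 1_000_000

for p in (0.5, 0.9, 0.999):
    print(f"p={p}: one-box ${one_box(p):,.0f}, two-box ${two_box(p):,.0f}")

One-boxing comes out ahead whenever p exceeds roughly 0.5005—that is, whenever the predictor is even slightly better than a coin flip—which is the expected-value case for “I trust the computer.”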

TDT has some very definite advice on Newcomb’s paradox: Take Box B. But TDT goes a bit further. Even if the alien jeers at you, saying, “The computer said you’d take both boxes, so I left Box B empty! Nyah nyah!” and then opens Box B and shows you that it’s empty, you should still only take Box B and get bupkis. (I’ve adopted this example from Gary Drescher’s Good and Real, which uses a variant on TDT to try to show that Kantian ethics is true.) The rationale for this eludes easy summary, but the simplest argument is that you might be in the computer’s simulation. In order to make its prediction, the computer would have to simulate the universe itself. That includes simulating you. So you, right this moment, might be in the computer’s simulation, and what you do will impact what happens in reality (or other realities). So take Box B and the real you will get a cool million.

What does all this have to do with Roko’s Basilisk? Well, Roko’s Basilisk also has two boxes to offer you. Perhaps you, right now, are in a simulation being run by Roko’s Basilisk. Then perhaps Roko’s Basilisk is implicitly offering you a somewhat modified version of Newcomb’s paradox, like this:



Roko’s Basilisk has told you that if you just take Box B, then it’s got Eternal Torment in it, because Roko’s Basilisk would really rather you take both Box A and Box B. In that case, you’d best make sure you’re devoting your life to helping create Roko’s Basilisk! Because, should Roko’s Basilisk come to pass (or worse, if it’s already come to pass and is God of this particular instance of reality) and it sees that you chose not to help it out, you’re screwed.

You may be wondering why this is such a big deal for the LessWrong people, given the apparently far-fetched nature of the thought experiment. It’s not that Roko’s Basilisk will necessarily materialize, or is even likely to. It’s more that if you’ve committed yourself to timeless decision theory, then thinking about this sort of trade literally makes it more likely to happen. After all, if Roko’s Basilisk were to see that this sort of blackmail gets you to help it come into existence, then it would, as a rational actor, blackmail you. The problem isn’t with the Basilisk itself, but with you. Yudkowsky doesn’t censor every mention of Roko’s Basilisk because he believes it exists or will exist, but because he believes that the idea of the Basilisk (and the ideas behind it) is dangerous.

Now, Roko’s Basilisk is only dangerous if you believe all of the above preconditions and commit to making the two-box deal with the Basilisk. But at least some of the LessWrong members do believe all of the above, which makes Roko’s Basilisk quite literally forbidden knowledge. I was going to compare it to H. P. Lovecraft’s horror stories in which a man discovers the forbidden Truth about the World, unleashes Cthulhu, and goes insane, but then I found that Yudkowsky had already done it for me, by comparing the Roko’s Basilisk thought experiment to the Necronomicon, Lovecraft’s fabled tome of evil knowledge and demonic spells. Roko, for his part, put the blame on LessWrong for spurring him to the idea of the Basilisk in the first place: “I wish very strongly that my mind had never come across the tools to inflict such large amounts of potential self-harm,” he wrote.

If you do not subscribe to the theories that underlie Roko’s Basilisk and thus feel no temptation to bow down to your once and future evil machine overlord, then Roko’s Basilisk poses you no threat. (It is ironic that it’s only a mental health risk to those who have already bought into Yudkowsky’s thinking.) Believing in Roko’s Basilisk may simply be a “referendum on autism,” as a friend put it. But I do believe there’s a more serious issue at work here because Yudkowsky and other so-called transhumanists are attracting so much prestige and money for their projects, primarily from rich techies. I don’t think their projects (which only seem to involve publishing papers and hosting conferences) have much chance of creating either Roko’s Basilisk or Eliezer’s Big Friendly God. But the combination of messianic ambitions, being convinced of your own infallibility, and a lot of cash never works out well, regardless of ideology, and I don’t expect Yudkowsky and his cohorts to be an exception.

I worry less about Roko’s Basilisk than about people who believe themselves to have transcended conventional morality. Like his projected Friendly AIs, Yudkowsky is a moral utilitarian: He believes that the greatest good for the greatest number of people is always ethically justified, even if a few people have to die or suffer along the way. He has explicitly argued that given the choice, it is preferable to torture a single person for 50 years than for a sufficient number of people (to be fair, a lot of people) to get dust specks in their eyes. No one, not even God, is likely to face that choice, but here’s a different case: What if a snarky Slate tech columnist writes about a thought experiment that can destroy people’s minds, thus hurting people and blocking progress toward the singularity and Friendly AI? In that case, any potential good that could come from my life would be far outweighed by the harm I’m causing. And should the cryogenically sustained Eliezer Yudkowsky merge with the singularity and decide to simulate whether or not I write this column … please, Almighty Eliezer, don’t torture me.

http://www.slate.com/articles/technology/bitwise/2014/07/roko_s_basilisk_the_most_terrifying_thought_experiment_of_all_time.html

165
General Discussion / "Kafka-esque" Comcast CS Call
« on: July 15, 2014, 05:20:35 PM »
"When a customer service call is described as "Kafkaesque" and "hellish," you pretty much know how it's going to go down before even taking a listen. But in case you haven't heard the condescending, tedious call that's lit up the Internet, here it is: ..."

http://www.npr.org/blogs/alltechconsidered/2014/07/15/331681041/comcast-embarrassed-by-the-service-call-making-internet-rounds?utm_source=facebook.com&utm_medium=social&utm_campaign=npr&utm_term=nprnews&utm_content=20140715

166
We are in the process of researching this issue, but it looks like the registration for the domain SonyOnline.net expired this morning, reverting to the ownership of NetworkSolutions. While SOE did not use the domain publicly, preferring SOE.com, the aforementioned domain contains all of the nameservers (ns2.sonyonline.net, ns4.sonyonline.net, ns6.sonyonline.net) which route users to SOE’s websites and forums, as well as connecting players to its games.

For the last 20 minutes or so, we have been monitoring the situation. We’ve seen the forums and websites briefly become accessible before going dark again. It’s possible that conflicting DNS information is propagating out there, preventing users from reliably connecting to SOE’s websites, forums, and games.
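
As an aside, checking which nameservers a domain currently advertises takes only a few lines; here is a sketch using the third-party dnspython package (our own illustration, not part of the original report):

Code:
# Print the NS records a domain currently advertises; handy for spotting
# stale or conflicting nameserver data while changes propagate.
# Requires the third-party dnspython package: pip install dnspython
import dns.resolver

for record in dns.resolver.resolve("sonyonline.net", "NS"):
    print(record.target)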

It seems plausible that the expiration notices from NetworkSolutions have been going to an unread e-mail address at SOE for the last few weeks and NS finally decided to reclaim this one. The expiration for SonyOnline.net was set for 26-may-2014, while today, July 15th, is some 7 weeks later. That’s some grace period!

UPDATE: Although there don’t seem to have been any changes behind the scenes, SOE.com, EverQuest2.com, and other SOE websites all seem to be loading “domain parked” pages full of advertisements now.

UPDATE from EverQuest Next and Landmark Community Manager Colette “Dexella” Murphy who is likely in need of coffee at 5:30am PDT:

We’re working on the SOE games/website/forum issues people are reporting. More news when it’s available! Thanks, everyone.

— Colette (Dexella) (@DexellaCM) July 15, 2014

http://eq2wire.com/2014/07/15/sonyonline-net-domain-expires-shenanigans-ensue-for-all-soe-games-websites/

167
General Discussion / Have Anti-ACA Ads generated greater enrollment?
« on: July 11, 2014, 06:33:02 PM »
According to the recent report of nonpartisan analysts Kantar Media CMAG, ACA opponents have spent 450 million dollars on anti-Obamacare ads so far. Spending on negative ads outpaced positive ones by more than 15 to 1. This map shows the spending on negative ads in each state.

How Have Ads Impacted the ACA Enrollment in Different States?

I used the ACA enrollment data released by the Department of Health and Human Services to calculate the ACA enrollment ratio: the number of enrollees divided by the total number of people who could have potentially enrolled in the ACA. This number includes those who were either uninsured or had purchased private insurance. Although more than 8 million Americans have signed up to purchase health insurance through the marketplaces during the first open enrollment period, this nationwide number masks tremendous variation in participation across states. While the enrollment percentage in Minnesota is slightly above five percent, in Vermont close to fifty percent of all eligible individuals have signed up for the ACA.

ACA Enrollment Ratios by State

can't_link_this_map

I also calculated the per capita spending on anti-Obamacare ads as the total amount of spending in each state divided by its population. By spending close to a dollar per resident, the District of Columbia outpaced all fifty states in per capita spending on anti-Obamacare ads, yet over 11% of its eligible population signed up for Obamacare.
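
Both quantities are simple ratios; a minimal sketch with placeholder figures (not the actual HHS or Kantar Media numbers):

Code:
# State-level enrollment ratio and per capita anti-ACA ad spending.
# All figures below are placeholders for illustration only.
states = {
    "Vermont":   {"enrollees": 38_000, "eligible": 80_000,
                  "ad_spend": 400_000, "population": 626_000},
    "Minnesota": {"enrollees": 48_000, "eligible": 900_000,
                  "ad_spend": 1_500_000, "population": 5_400_000},
}

for name, s in states.items():
    ratio = s["enrollees"] / s["eligible"]        # enrollment ratio
    per_capita = s["ad_spend"] / s["population"]  # ad dollars per resident
    print(f"{name}: {ratio:.1%} enrolled, ${per_capita:.2f}/resident in ads")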

The following plot compares the per capita spending on anti-ACA ads and the enrollment ratio in 49 states. I removed DC and Vermont since they had abnormally high ad spending or enrollment ratios.

Enrollment Ratio and Per Capita Anti-ACA Advertising



The blue dots represent the states in which Senate Democrats are up for re-election in 2014, while the red dots represent the states in which Republican Senators are running for re-election. The states in which no Senate midterm elections are held are shown in green.

The four states with the highest per capita spending on anti-ACA ads are Kentucky, Arkansas, Louisiana, and North Carolina. Interestingly, in all four of these states, the midterm Senate elections are expected to be very competitive. Although the volume of anti-ACA ad spending is driven by the competitiveness of the Senate midterm elections, and the ads may be effective in reducing votes for the targeted political figure, they do not necessarily reduce the popularity of the ACA itself. The blue and red lines show the association between anti-ACA ad spending and the ACA enrollment ratio in states with Democratic and Republican Senators running for re-election. While the negative ads are associated with lower enrollment in red states, they have the opposite effect in blue states.

In fact, after controlling for other state characteristics, such as the size of the low-income population and average insurance premiums, I observe a positive association between anti-ACA spending and ACA enrollment. This implies that anti-ACA ads may unintentionally increase public awareness of the existence of a governmentally subsidized service and its benefits for the uninsured. On the other hand, an individual’s prediction about the chances of repealing the ACA may be associated with the volume of advertisements against it. In the states where more anti-ACA ads are aired, residents were on average more likely to believe that Congress will repeal the ACA in the near future. People who believe that subsidized health insurance may soon disappear could have a greater willingness to take advantage of this one-time opportunity.
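
For the statistically inclined, the controlled association described above amounts to a regression along these lines; a sketch run on fabricated state-level data (the real analysis used the HHS enrollment and Kantar Media datasets, and the variable names here are our assumptions):

Code:
# OLS sketch of the controlled association described above, run on
# fabricated data; requires numpy, pandas, and statsmodels.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 49  # DC and Vermont dropped, as in the post
df = pd.DataFrame({
    "ad_spend_per_capita": rng.uniform(0.0, 1.0, n),
    "pct_low_income":      rng.uniform(0.10, 0.30, n),
    "avg_premium":         rng.uniform(200, 500, n),
})
# Fabricated outcome with a built-in positive ad-spend effect.
df["enrollment_ratio"] = (0.10 + 0.05 * df["ad_spend_per_capita"]
                          + rng.normal(0, 0.02, n))

fit = smf.ols("enrollment_ratio ~ ad_spend_per_capita + pct_low_income"
              " + avg_premium", data=df).fit()
print(fit.params)  # the sign on ad_spend_per_capita is the quantity of interest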

http://www.brookings.edu/blogs/techtank/posts/2014/07/9-anti-aca-ads-backfire

168
General Discussion / The World Cup Can Help Test Economic Theories
« on: July 11, 2014, 06:07:34 PM »
THE World Cup is finally underway. At long last, soccer fans can don their team colors, head down to the local pub — and begin collecting data to test economic theories.

Or at least, some of us will.

For instance, I’m interested in penalty kicks. In addition to being an exciting part of the game, penalty kicks present an opportunity to test an important idea in economics: the Nash equilibrium.

The economist John Forbes Nash Jr. analyzed how people should behave in strategic situations in which it is not optimal to repeatedly make the same move — like the children’s game rock, paper, scissors, in which selecting one move again and again (rock, rock, rock ...) makes you easy to beat. According to Mr. Nash’s theory, in a zero-sum game (i.e., where a win for one player entails a corresponding loss for the other) the best approach is to vary your moves unpredictably and in such proportions that your probability of winning is the same for each move. In rock, paper, scissors, for example, the optimal strategy is to mix your choices randomly among the three options.

To test this theory in the real world, we can study penalty kicks, which are zero-sum games in which it is not optimal to repeatedly choose the same move. (The goalie has an easier time stopping your shot if you always kick to the same side of the net.) Unlike complex real-world strategic situations involving firms, banks or countries, penalty kicks are relatively simple, and data about them are readily available.

I analyzed 9,017 penalty kicks taken in professional soccer games in a variety of countries from September 1995 to June 2012. I found, as Mr. Nash’s theory would predict, that players typically distributed their shots unpredictably and in just the right proportions. Specifically, roughly 60 percent of kicks were made to the right of the net, and 40 percent to the left. The proportions were not 50-50 because players have unequal strengths in their legs and tend to shoot better to one side. Shooting 50-50, in other words, would not take full advantage of their better leg, while shooting any more often to the stronger side would have been too predictable.

In accordance with Mr. Nash’s theory, penalty kicks shot to the left were successful with the same frequency as kicks shot to the right — roughly 80 percent of the time.
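
To make the equilibrium logic concrete, here is a toy 2x2 penalty-kick game solved for its mixed-strategy Nash equilibrium. The goal probabilities are invented for illustration (they are not from the study) and assume a kicker who shoots better to the right:

Code:
# A 2x2 zero-sum penalty-kick game. Entries are goal probabilities.
#              goalie dives left   goalie dives right
# kick left        a = 0.55            b = 0.90
# kick right       c = 0.95            d = 0.65
a, b, c, d = 0.55, 0.90, 0.95, 0.65

# Kicker mixes so the goalie concedes equally whichever way she dives.
q_right = (b - a) / ((b - a) + (c - d))

# Goalie mixes so the kicker scores at the same rate in either direction.
g_left = (d - b) / ((a + d) - (b + c))

success_left = g_left * a + (1 - g_left) * b
success_right = g_left * c + (1 - g_left) * d
print(f"kick right {q_right:.0%} of the time; "
      f"success rates: left {success_left:.1%}, right {success_right:.1%}")

With these numbers the kicker shoots right about 54 percent of the time, and the success rates come out equal (about 76.5 percent each way), the same equalization the penalty-kick data display.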

Penalty kicks are just one example. Data from soccer can also illuminate one of the most prominent theories of the stock market: the efficient-market hypothesis. This theory posits that the market incorporates information so completely and so quickly that any relevant news is integrated into a stock’s price before anyone has a chance to act on it. This means that unless you have insider information, no stock is a better buy (i.e., undervalued) when compared with any other.

If this theory is correct, the price of an asset should jump up or down when news breaks and then remain perfectly flat until there is more news. But to test this in the real world is difficult. You would need to somehow stop the flow of news while letting trading continue. That seems impossible, since everything that happens in the real world, however boring or uneventful, counts as news.

This is where soccer is useful. In a study published earlier this year in The Economic Journal, the economists Karen Croxson and J. James Reade analyzed live soccer betting markets, looking at second-by-second betting activity around goals scored just seconds before halftime and betting activity during halftime. Their data, which concerned 1,206 Premier League soccer matches in England, contained 160 such “cusp” goals scored within seconds of the end of the first half.

The break in play at halftime provided a golden opportunity to study market efficiency because the playing clock stopped but the betting clock continued. Any drift in halftime betting values would have been evidence against market efficiency, since efficient prices should not drift when there is no news (or goals, in this case). It turned out that when goals arrived within seconds of the end of the first half, betting continued heavily throughout halftime—but the betting values remained constant, just as market efficiency predicts.
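
The drift check itself is conceptually simple; a minimal sketch (with made-up prices, not the study’s data):

Code:
# Under market efficiency, betting prices during halftime (when no news
# can arrive) should show no systematic drift, only noise.
# The price series below is made up for illustration.
from statistics import mean, stdev

halftime_prices = [1.85, 1.85, 1.86, 1.85, 1.84, 1.85, 1.85, 1.86]
changes = [b - a for a, b in zip(halftime_prices, halftime_prices[1:])]

drift = mean(changes)
se = stdev(changes) / len(changes) ** 0.5  # standard error of the mean
print(f"mean drift per tick: {drift:+.4f} (t-statistic ~ {drift / se:.2f})")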


Other research by me and others has shown that data from soccer can shed light on the economics of discrimination, fear, corruption and the dark side of incentives in organizations. In other words, aspects of the beautiful game that are less than beautiful from a fan’s perspective can still be illuminating for economists.

But perhaps most beautiful of all, for me, is that the core principles of my beloved professional discipline are exemplified by my beloved game.

http://www.nytimes.com/2014/06/15/opinion/sunday/the-world-cup-can-help-test-economic-theories.html?_r=0

169
Last month, the inaugural Breakthrough Prizes in mathematics, founded and partially funded by internet billionaires Yuri Milner and Mark Zuckerberg, were awarded to five people: Simon Donaldson, Maxim Kontsevich, Jacob Lurie, Terence Tao, and Richard Taylor. The prize is $3 million per person, and the first five winners will be on the committee for the selection of future winners. (In the future, there will be only one prize awarded per year.)

I was a bit surprised that there hasn’t been much talk on blogs about the prizes, but there has been a bit. Peter Woit wrote about the prize on Not Even Wrong, and the comments to his post are interesting. “Shecky Riemann” also has a post on Math-Frolic.

I must admit that I am somewhat cynical about the prize. (Now might be a good time to reiterate the disclaimer that appears on the sidebar of this blog: my opinions do not necessarily reflect the opinions of the AMS.) The five winners are all productive, brilliant mathematicians who have enhanced their fields immensely, and they deserve to be recognized. But $3 million is just so much money! It’s hard for me to see how concentrating that much money in the hands of so few people is an efficient way to support mathematics.

Woit’s post voices some similar concerns. He writes,

“…it’s still debatable whether this is a good way to encourage mathematics research. The people chosen are already among the most highly rewarded in the subject, with all of them having very well-paid positions with few responsibilities beyond their research, as well as access to funding of research expenses. The argument for the prize is mainly that these sums of money will help make great mathematicians celebrities, and encourage the young to want to be like them. I can see this argument and why some people find it compelling. Personally though, I think our society in general and academia in particular is already suffering a great deal as it becomes more and more of a winner-take-all, celebrity-obsessed culture, with ever greater disparities in wealth, and this sort of prize just makes that worse. It’s encouraging to see that most of the prize winners have already announced intentions to redirect some of the prize moneys for a wider benefit to others and the rest of the field.”

In fact, the New York Times reports that Tao, one of the winners, has similar feelings:

“Dr. Tao tried to talk Mr. Milner out of it, and suggested that more prizes of smaller amounts might be more effective in supporting mathematics. ‘The size of the award, I think it’s ridiculous,’ he said. ‘I didn’t feel I was the most qualified for this prize.’

“But Dr. Tao added: ‘It’s his money. He can do whatever he wants with it.’

“Dr. Tao said he might use some of the prize money to help set up open-access mathematics journals, which would be available free to anyone, or for large-scale collaborative online efforts to solve important problems.”

As a young academic who has seen postdoc positions seem to dry up since the beginning of the financial crisis, I can’t help but do a little arithmetic. $50,000 is a nice round salary for a postdoc. Before benefits are factored in, that makes each $3 million prize the equivalent of 60 postdoc years. Even if we add another $50,000 a year for health insurance, travel, and other research expenses, that money could fund 30 postdocs a year, or create 10 three-year postdoc positions each year.
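
The arithmetic, spelled out (using the post’s round numbers):

Code:
# Back-of-the-envelope postdoc arithmetic from the paragraph above.
prize = 3_000_000
salary = 50_000               # a round postdoc salary
loaded = salary + 50_000      # plus benefits, travel, research expenses

print(prize / salary)      # 60.0 postdoc-years at bare salary
print(prize / loaded)      # 30.0 fully loaded postdoc-years per prize
print(prize / loaded / 3)  # 10.0 three-year positions per prize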

But 30 postdocs a year wouldn’t make a good press release. The New York Times wouldn’t write an article about their multimillion dollar minds. And the funders of the Breakthrough Prize want to encourage mathematical celebrity, which supposedly will lead to public awareness, not to fund worthwhile math research in the most efficient way possible. In a Scientific American article about the prize, Ben Fogelson writes,

“Milner’s goal, however, is to increase the popularity of science by celebrating the scientists. ‘Dividing [money] in small pieces and distributing it widely has been tried before and it works,’ Milner says. ‘I think the idea behind this initiative is to really focus on raising public awareness.’”

A commenter on Woit’s post suggested that each year, the prize money could be used to endow a research position at a university, noting that at MIT, you can endow a professorship for $3 million. Would that be high-profile enough? I think you could still write a press release about it!

I had some interesting discussions about the prize on Twitter after the prizewinners were announced, mainly focused on the utility of mathematical celebrity. Those discussions helped me frame a few questions about celebrity and public awareness. I’ve tried to figure out some analogous questions about movies and the Oscars because the Breakthrough Prizes have been described as the Oscars of science.

Will people think mathematics is more valuable because a few people can earn giant prizes from it? (Do people think filmmaking is more valuable because the Oscars exist?)
Will people want to become mathematicians because they think they could earn a big prize from it? (Do people become actors or filmmakers because they think they could win an Oscar?)
If the ultimate goal of the prize is to raise public awareness of math, what is a more effective way to do that: tell them about a successful mathematician, or tell them about an idea in math? (If someone doesn’t know much about cinema, would it be more effective to tell them about an Oscar-winning actor or show them a movie?)
Are these even the right questions and analogies?
This post might sound like I’m saying, “I don’t like this new prize because I’m never going to get it, but I would like it if it funded people more like me.” But I don’t think I’m quite there either. I have reservations about the suggested alternate uses of the prize money as well, thanks largely to two posts by Cathy O’Neil about billionaire money in mathematics and in academia in general.

On a lighter note, if you are a mathematician who is a bit embarrassed about a recent windfall, Persiflage suggests that “a bottle of Chateau d’Yquem 1967 does wonders to wash away any last remaining vestiges of embarrassment…”

http://blogs.ams.org/blogonmathblogs/2014/07/09/the-inaugural-breakthrough-prizes-in-mathematics/

170
Reporters for BBC News are being directed to significantly curb the amount of air time they give to people with anti-science viewpoints — including people who deny climate change exists — in order to improve the accuracy and fairness of the network’s news coverage, according to a report released by the BBC’s governing body on Thursday.

The BBC Trust’s report was designed to assess the network’s impartiality in science coverage, in other words, whether it is staying neutral on critical issues. In order to be neutral when covering science, however, the BBC noted it needs to avoid “false balance,” a fallacy that occurs when two sides of an argument are assumed to have equal value.

“Science coverage does not simply lie in reflecting a wide range of views but depends on the varying degree of prominence such views should be given,” the report said.

The type of “false balance” news segment that the BBC is now actively trying to avoid is one that is fairly common in American network news’ climate change coverage. It involves putting one person who is well-versed on climate science next to a person who denies climate science, and having them debate.

Editorially, this type of debate makes the network look like it’s being balanced, giving equal opportunity to opposite viewpoints. However, because 95 to 97 percent of climate scientists agree that man-made greenhouse gas emissions are causing the planet to warm, that balance is false, giving disproportionate time to a viewpoint that is widely rejected in the scientific community.

In order to have a truly balanced and statistically representative debate about climate change, television news networks would have to pit 97 climate scientists against three climate deniers. Because that likely wouldn’t work very well, the BBC is favoring an approach that instead severely limits the amount of air time climate deniers are given.

So far, the report said, approximately 200 staff members have attended seminars and workshops aimed at improving the balance of their science coverage.

The BBC Trust’s report did note that climate deniers wouldn’t be completely excluded from the conversation. “The Trust also would like to reiterate that … ‘this does not mean that critical opinion should be excluded. Nor does it mean that scientific research shouldn’t be properly scrutinized,’” the report said. “The BBC has a duty to reflect the weight of scientific agreement but it should also reflect the existence of critical views appropriately. Audiences should be able to understand from the context and clarity of the BBC’s output what weight to give to critical voices.”

But despite the BBC’s pledge to have their reporters avoid false balance in climate change coverage, false balance is still a widespread phenomenon across prominent American news platforms. According to a 2013 report from Media Matters on the issue, half of print outlets used false balance to debate the existence of global warming. When covering the U.N.’s landmark climate change report that year, CBS News gave climate deniers more than six times their representation in the scientific community, and 69 percent of guests on Fox News cast doubt on the science.

The obvious effect of this is that viewers are being misled about the reality of climate change and the urgency that comes with it. But the other effect is that viewers wind up not caring about climate change altogether.

“In the case of people who watch cable news, we’ve been so conditioned to favor a sense of certainty,” Dr. Stephen Reese, author of a 2008 white paper on how people make judgments about journalistic balance, told ThinkProgress in May. “We want to have our beliefs upheld. So when you introduce [climate change] as a political issue up for debate, it’s just, ‘well okay, there they go again,’ — just dismiss it as hopelessly polarized.”

When news outlets introduce false balance into their climate change stories, their audiences then think those stories are less pressing than they actually are, a factor which contributes to uncertainty surrounding the issue and, ultimately, apathy. A 2009 study from the American Psychological Association confirmed this, noting that “perceived or real uncertainty” on climate change can lead to both “systematic underestimation of risk” and “sufficient reason to act in self interest over that of the environment.”

http://thinkprogress.org/climate/2014/07/07/3456782/bbc-cuts-climate-deniers/

171
It’s the rallying cry for opponents of same-sex marriage: “Every child deserves a mom or a dad.” But a major new study finds that kids raised by same-sex couples actually do a bit better “than the general population on measures of general health and family cohesion.”

The study, conducted in Australia by University of Melbourne researchers, “surveyed 315 same-sex parents and 500 children.” The children in the study scored about six percent higher than Australian kids in the general population. The advantages held up “when controlling for a number of sociodemographic factors such as parent education and household income.” The study was the largest of its kind in the world.

The lead researcher, Dr. Simon Crouch, noted that in same-sex couples parents have to “take on roles that are suited to their skill sets rather than falling into those gender stereotypes.” According to Crouch, this leads to a “more harmonious family unit and therefore feeding on to better health and well being.”

The findings were in line with “existing international research undertaken with smaller sample sizes.”

Family Voice Australia, a group that opposes same-sex marriage, said the study should be discounted because it does not consider “what happens when the child reaches adulthood.”

In the United States, opponents of same-sex marriage routinely claim that children raised by same-sex couples fare worse. The most commonly cited study, conducted by sociologist Mark Regnerus, did not actually study children raised by same-sex couples. Indeed, “most of the subjects in the study grew up in the 1970s, 80s, and 90s, long before marriage equality was available or adoption rights were codified in many states.” Instead, Regnerus studied children raised in “failed heterosexual unions” where one parent had a “romantic relationship with someone of the same sex.” The study has been condemned by the American Sociological Association. Other frequently cited studies have similar methodological problems.

http://thinkprogress.org/lgbt/2014/07/05/3456717/kids-raised-by-same-sex-couples-are-healthier-and-happier/

172
General Discussion / New State of Matter Discovered
« on: July 01, 2014, 07:07:06 PM »
There was a time when states of matter were simple: solid, liquid, gas. Then came plasma, Bose-Einstein condensate, supercritical fluid, and more. Now the list has grown by one more, with the unexpected discovery of a new state dubbed “dropletons” that bear some resemblance to liquids but occur under very different circumstances.
 
The discovery occurred when a team at the University of Colorado’s JILA (the Joint Institute for Laboratory Astrophysics) was focusing laser light on gallium arsenide (GaAs) to create excitons.
 
Excitons are formed when a photon strikes a material, particularly a semiconductor. If an electron is knocked loose, or excited, it leaves what is termed an “electron hole” behind. If the forces of other charges nearby keep the electron close enough to the hole to feel an attraction, a bound state forms known as an exciton. Excitons are called quasiparticles because the electrons and holes behave together as if they were a single particle.
 
If this all sounds a bit hard to relate to, consider that solar cells are semiconductors, and the formation of excitons is one possible step to the production of electricity. A better understanding of how excitons form and behave could produce ways to harvest sunlight more efficiently.
 
Graduate student Andrew Almand-Hunter was forming biexcitons (two excitons that bind together and behave like a molecule) by focusing the laser to a dot 100 nm across and leaving it on for shorter and shorter fractions of a second.
 
“But the experiment didn’t behave at all in the way we expected,” Almand-Hunter said. When the pulses lasted less than 100 millionths of a second, the exciton density reached a critical threshold. “We expected to see the energy of the biexcitons increase as the laser generated more electrons and holes. But what we saw when we did the experiment was that the energy actually decreased!”
 
The team figured that they had created something other than biexcitons but were not sure what. They contacted theorists at Philipps-University Marburg, who suggested they had made droplets of 4, 5, or 6 electrons and holes, and who constructed a model of these dropletons’ behavior.
 
The dropletons are small enough to behave quantum mechanically, but the electrons and holes are not in pairs, as they would be if the dropleton were just a group of excitons. Instead they form a “quantum fog” of electrons and holes that flow around each other and even ripple like a liquid, rather than existing as discrete pairs. However, unlike liquids we are familiar with, dropletons have a finite size, outside which the electron-hole association breaks down.
 
The discovery has been published in Nature. Perhaps the most remarkable thing is that the dropletons are stable, by the standards of quantum physics. While they can only survive inside solid materials, they last around 25 trillionths of a second, which is actually long enough for scientists to study the way their behavior is shaped by the environment. At 200 nm wide, the dropletons are as large as very small bacteria – a size that can be seen by conventional microscopes.
 
"Classical optics can detect only objects that are larger than their wavelengths, and we are approaching that limit," Mackillo Kira of Philipps-University who provided much of the theoretical grounding told Scientific American. "It would be really neat to not only detect spectroscopic information about the dropleton, but to really see the dropleton."
 
JILA lab leader Professor Steven Cundiff says, “Nobody is going to build a quantum droplet widget." However, the work could help in the understanding of systems where multiple particles interact quantum mechanically.

http://www.iflscience.com/physics/new-state-matter-discovered#UdQtD16tU4Zfsjsl.01

173
'Community' Lives! Yahoo Has Saved Greendale for a Sixth Season

https://tv.yahoo.com/blogs/tv-news/community-season-6-yahoo-204612640.html

174
The environment doesn't appreciate our meat obsession.

The average meat-eater in the U.S. is responsible for almost twice as much global warming as the average vegetarian, and close to three times that of the average vegan, according to a study (pdf) published this month in the journal Climatic Change.

The study, which was carried out at Oxford University, surveyed the diets of some 60,000 individuals (more than 2,000 vegans, 15,000 vegetarians, 8,000 fish-eaters, and nearly 30,000 meat-eaters). Heavy meat-eaters were defined as those who consume more than 3.5 ounces of meat per day—making the average American meat-eater (who consumes roughly four ounces per day) a heavy meat-eater. Low meat-eaters were those who eat fewer than 1.76 ounces. And medium meat-eaters were those whose consumption fell somewhere in between.

The difference found in diet-driven carbon footprints was significant. Halve your meat intake, and you could cut your carbon footprint by more than 35 percent; stick to fish, and you could cut it by nearer to 50 percent; go vegan, and the difference could be 60 percent.



The variations were so drastic that the study's authors suggested that countries should consider revising their definition of a sustainable diet. "National governments that are considering an update of dietary recommendations in order to define a ‘healthy, sustainable diet’ must incorporate the recommendation to lower the consumption of animal-based products," the study says.

The livestock industry is responsible for roughly 15 percent of global carbon emissions. The resources necessary to produce even small amounts of market-ready meat (say, a quarter-pound hamburger) are staggering.

The good news is that while Americans might still eat more meat than Mother Nature would prefer, they are cutting down on their intake, especially of the most environmentally unfriendly kind: per capita beef consumption has fallen by 36 percent since its peak in 1976, according to data from the USDA. The bad news is that the rest of the world appears to be headed in the opposite direction. Global demand for meat is expected to grow by more than 70 percent by 2050, largely driven by burgeoning middle classes in the developing world. Couple that with the potential for changing health narratives in the U.S.—some of which now tout red meat (and fats, in general) as healthier than once thought—and even Americans could find themselves putting more meat on their plates in the future.

http://www.washingtonpost.com/blogs/wonkblog/wp/2014/06/30/how-much-your-meat-addiction-is-hurting-the-planet/?tid=sm_fb

175
General Discussion / House Committee declares IPCC Report Not Science
« on: June 03, 2014, 10:52:45 AM »
The Intergovernmental Panel on Climate Change (IPCC) report warned that more intense droughts and heat waves will cause famine and water shortages. But, don't worry! Yesterday, the GOP held a hearing to tell us the IPCC is, in fact, a global conspiracy to control our lives and "redistribute wealth among nations."

The hearing, titled "Examining the UN Intergovernmental Panel on Climate Change Process," was convened by the House Committee on Science, Space and Technology—the same folks who recently demonstrated their inability to grasp the idea that the world's climate varies across different regions and who informed us that warmer weather didn't bother the dinosaurs, so what's all the fuss about?

In principle, there's nothing wrong with assessing the methodology of such an important and influential report. But, in one of those quintessential moments of Washington double-think, Chairman Lamar Smith (R-TX)—who accuses the IPCC of creating data to serve a predetermined political agenda—summarized the hearing's conclusions a day before it even began. "The IPCC does not perform science itself and doesn't monitor the climate," Smith told a reporter, "but only reviews carefully selected scientific literature."

So, small wonder that Rep. Eddie Bernice Johnson (D-TX), the ranking Democrat on the committee, offered the opinion:

Quote
While the topic of today's hearing is a legitimate one, namely, how the IPCC process can be improved, I am concerned that the real objective of this hearing is to try to undercut the IPCC and to cast doubt on the validity of climate change research.

We aren't going to get very far if we spend our time continually revisiting a scientific debate that has already been settled. Nor will we get far if we continue a recent practice on this Committee of seeming to question the trustworthiness and integrity of this nation's scientific researchers.

Fair and Balanced

Another source of Johnson's skepticism might have been that three of the four expert witnesses testifying at the hearing either deny that humans are responsible for global warming or believe that the potential impact of climate change is grossly overstated.

The witnesses for the prosecution were:

(1) Roger Pielke, Sr.

Who is he?

Senior Research Scientist, Cooperative Institute for Research in Environmental Sciences, and Professor Emeritus of Atmospheric Science, Colorado State University

What's he known for?

Pielke says that carbon dioxide is responsible, at most, for about 28% of human-caused warming to date, and he is among the most vocal skeptics of reports that the polar ice caps are melting and that sea levels are rising.

What did he say at the hearing?

The IPCC is "giving decision makers who face decisions at the regional and local level a false sense of certainty about the unfolding climate future."

(2) Richard Tol

Who is he?

A professor of economics at the University of Sussex

What's he known for?

He resigned his position with the IPCC team producing the working group's Summary for Policymakers, which he classified as "alarmist." Global warming creates benefits as well as harms, he believes, and in the short term, the benefits are especially pronounced. He's also expressed doubt that climate change will play any role in exacerbating conflicts.

Tol has been criticized by other scientists who have raised questions about his methodology and who have noted that he has a history of making contradictory statements. For instance, in a widely cited 2009 paper, he wrote of "considerable uncertainty about the economic impact of climate change … negative surprises are more likely than positive ones. … The policy implication is that reduction of greenhouse gas emissions should err on the ambitious side."

What did he say at the hearing?

"Academics who research climate change out of curiosity but find less than alarming things are ignored, unless they rise to prominence in which case they are harassed and smeared….The IPCC should therefore investigate the attitudes of its authors and their academic performance and make sure that, in the future, they are more representative of their peers."

(3) Daniel Botkin

Who is he?

Professor Emeritus, Department of Ecology, Evolution, and Marine Biology, University of California, Santa Barbara.

What's he known for?

He has long argued that life has had to deal with environmental change, especially climate change, since the beginning of its existence on Earth—and that we underestimate the ability of species, including humans, to find ways to adapt to the problem.

Botkin wrote a controversial editorial for the Wall Street Journal (Oct 17, 2007) arguing that global warming will not have much impact on life on Earth, and noted that: "the reality is that almost none of the millions of species have disappeared during the past 2.5 million years — with all of its various warming and cooling periods."

The editorial prompted several responses from within the scientific community, including this:

For the past 2.5 million years the climate has oscillated between interglacials which were (at most) a little warmer than today and glacials which were considerably colder than today. There is no precedent in the past 2.5 million years for so much warming so fast. The ecosystem has had 2.5 million years to adapt to glacial-interglacial swings, but we are asking it to adapt to a completely new climate in just a few centuries. The past is not a very good analog for the future in this case. And anyway, the human species can suffer quite a bit before we start talking extinction.

What did he say at the hearing?

"I want to state up front that we have been living through a warming trend driven by a variety of influences. However, it is my view that this is not unusual, and contrary to the characterizations by the IPCC….these environmental changes are not apocalyptic nor irreversible…..Yes, we have been living through a warming trend, no doubt about that. The rate of change we are experiencing is also not unprecedented, and the "mystery" of the warming "plateau" simply indicates the inherent complexity of our global biosphere. Change is normal, life on Earth is inherently risky; it always has been."

The Q & A

The lone witness for the defense was Michael Oppenheimer, the Albert G. Milbank Professor of Geosciences and International Affairs at Princeton University. He was selected by the Democrats, "because he's one of the foremost experts in the world and has been involved with the IPCC," a spokesperson for the Democratic contingent of the committee told Motherboard reporter Jason Koebler.

Koebler describes how things went down at the hearing after the experts presented their statements:

For two hours, climate change deniers interrupted, berated, and cut off Oppenheimer, while the other three witnesses fielded softball questions from conservative lawmakers and dodged tougher ones from Democratic ones.

In fact, at one point, Rep. Larry Buchson (R-Ind.), who seconds before had interrupted Oppenheimer and said he wasn't interested in hearing his views, wanted to "apologize on behalf of Congress" to Pielke for the aforementioned "juvenile and insulting questions trying to disparage the credibility" of witnesses who didn't take climate change seriously…

Dana Rohrabacher [R-CA] pulled out the air quotes when he said "global warming," and took offense to Oppenheimer not being able to "capsulize" all the reasons why he believes that climate change is a big deal in 10 seconds. Smith suggested that the "only thing we know about [climate change models] is that they will be wrong" and suggested that "even if the US was completely eliminated, it's not going to have any discernible impact on global temperatures in the near or far future."

Paul Broun [R-GA] and Buchson noted their belief in the "scientific process" and suggested that they knew more about it because they are doctors (Broun is a dentist; Buchson is a surgeon).

So predictable, and such a waste of time. As I noted earlier, in principle, there is nothing wrong with assessing the methodology of such an important and influential report. But there are far better ways to do it than this.

The most noteworthy aspect of the IPCC is that it has worked far better than anyone anticipated. As Spencer Weart, the former director of the Center for History of Physics, has written:

The IPCC's constitution should have been (and perhaps was intended to be) a recipe for paralysis. Instead, the panel turned its procedural restraints into a virtue: whatever it did manage to say would have unimpeachable authority.

Experts contributed their time as volunteers, writing working papers that drew on the latest studies. These were debated at length in correspondence and workshops. The IPCC scientists, initially 170 of them in a dozen workshops, worked hard and long to craft statements that nobody could fault on scientific grounds. The draft reports next went through a process of peer review, gathering comments from virtually every climate expert in the world. It was much like the process of reviewing articles submitted to a scientific journal, although with far more reviewers. All this followed the long-established practices, norms and traditions of science. The scientists found it easier than they had expected to reach a consensus. This undertaking was the first of its kind in terms of breadth, and the exhaustive level of review and revision.

If only Congress worked half as well.

http://io9.com/the-house-science-committee-declares-the-ipcc-report-is-1583909402

176


Marriage is a right that belongs to all consenting adults. But an over-religious court official in Virginia has a message for atheists and other non-Christians: you have no right to get married if you don’t believe in God.

Bud Roth is a court appointed officiant in Franklin County, Virginia. He performs wedding ceremonies for couples who go to the courthouse to get married. Atheists, however, have no right to get married as far as he’s concerned.

When Morgan Strong and Tamar Courtney contacted the county courthouse to seal their love for each other after six years together, they were directed to Roth. Roth refused to perform the ceremony at the courthouse and only agreed to marry the couple if they tied the knot at his church. A deal was struck and the cost and date were set. Strong and Courtney would go through the legal part of the ceremony at Roth’s church. That’s when the whole situation turned ugly.

Roth asked the couple about their religious beliefs and, upon hearing that he would be performing a ceremony for an atheist and an agnostic, turned the couple away. Why? Because they “didn’t know where God was.” That’s right, Roth refused to marry the couple out of sheer religious bigotry. Disappointed, Strong and Courtney decided to discuss the situation with Roth, and they kindly recorded the conversation.

When asked why he had denied them their right to wed, Roth replied:

“Because she’s agnostic and you’re an atheist. I will not marry you. You don’t believe in God… I just don’t marry anyone who does not believe in God [or] believes that there is a God someplace. So I’m not going to talk the issue over with you and I’m not going to argue about it, okay? I’m just not going to marry you. Correct?”

The couple contacted the county clerk, who was floored by their story. She suggested they contact the judge who appointed Roth in the first place. So they wrote a letter to Judge William Alexander who didn’t see any problem at all with a court officiant refusing to marry a couple simply because they don’t share his religious beliefs. The judge referred the couple to the other court appointed officiant who agreed to perform the civil ceremony this coming Monday.

But this incident raises serious concerns. First, a civil servant is supposed to serve the public. That means anyone. As long as a couple has a marriage license, there shouldn’t be any problem. Second, religious discrimination is wrong no matter the venue, but for it to occur at a courthouse by a court official is totally unacceptable. People go to get married at a courthouse to avoid religious pomp and circumstance and because it’s quicker. They don’t go there to have religion shoved down their throats. That’s why my wife and I married at a courthouse. Not because we didn’t believe in a god, but because we didn’t want religion to dominate our day.

Roth was wrong to refuse to perform the ceremony just because Strong and Courtney don’t share his beliefs. He was also wrong to require them to get married at his church. He’s a COURT-APPOINTED OFFICIANT, for crying out loud! He’s a courthouse employee. Therefore, anyone who wishes to marry at the courthouse should be married at the courthouse, even if he’s the one asked to perform the ceremony. He’s paid by taxpayers to do this task. He’s not paid to drag couples to church or to refuse to marry a couple because of his own religious beliefs. Separation of church and state is clearly being violated here by both Judge Alexander and Roth. If Roth were a private citizen, then he could refuse to marry anyone. But in this case, he’s NOT a private citizen. The people of Franklin County, Virginia should be embarrassed and outraged by this, and they should demand a change be made. Because once religious discrimination infects our courts, anyone is at risk of having their rights and freedoms trampled on by self-righteous pricks in the name of Christianity.



http://www.addictinginfo.org/2014/05/31/marriage-only-for-christians/

178
this will interest no one

but i think it's an unusual look at open and very public disagreements b/w academics

too long to copy/paste:

http://andrewgelman.com/2014/05/31/jessica-tracy-alec-beall-authors-fertile-women-wear-pink-study-comment-garden-forking-paths-paper-comment-comments/

179
Spamalot / TZT Brawler: what should your item be?
« on: May 30, 2014, 10:12:16 PM »
Not sure how many people are aware, but I threw together a preliminary version of a brawler (... though I began adding levels to it anyway, so maybe some kind of weird brawler/RPG combo), based on a 3D version I made of AD's 2D signature troll, in the game dev forum:

http://tallonzektimes.org/bb/index.php/topic,54400.msg1245142.html#msg1245142

I was thinking I might just give it up to work on other stuff for a while, but if I had specific projects to try to do justice to, that could help.

Right now I have 2 named weapons: an ax (named after solayce) and a shield (named after taket). What should your weapon be and why?

180
Google owns a lot of computers—perhaps a million servers stitched together into the fastest, most powerful artificial intelligence on the planet. But last August, Google teamed up with NASA to acquire what may be the search giant’s most powerful piece of hardware yet. It’s certainly the strangest.

Located at NASA Ames Research Center in Mountain View, California, a couple of miles from the Googleplex, the machine is literally a black box, 10 feet high. It’s mostly a freezer, and it contains a single, remarkable computer chip—based not on the usual silicon but on tiny loops of niobium wire, cooled to a temperature 150 times colder than deep space. The name of the box, and also the company that built it, is written in big, science-fiction-y letters on one side: D-WAVE. Executives from the company that built it say that the black box is the world’s first practical quantum computer, a device that uses radical new physics to crunch numbers faster than any comparable machine on earth. If they’re right, it’s a profound breakthrough. The question is: Are they?

Hartmut Neven, a computer scientist at Google, persuaded his bosses to go in with NASA on the D-Wave. His lab is now partly dedicated to pounding on the machine, throwing problems at it to see what it can do. An animated, academic-tongued German, Neven founded one of the first successful image-recognition firms; Google bought it in 2006 to do computer-vision work for projects ranging from Picasa to Google Glass. He works on a category of computational problems called optimization—finding the solution to mathematical conundrums with lots of constraints, like the best path among many possible routes to a destination, the right place to drill for oil, and efficient moves for a manufacturing robot. Optimization is a key part of Google’s seemingly magical facility with data, and Neven says the techniques the company uses are starting to peak. “They’re about as fast as they’ll ever be,” he says.

That leaves Google—and all of computer science, really—just two choices: Build ever bigger, more power-hungry silicon-based computers. Or find a new way out, a radical new approach to computation that can do in an instant what all those other million traditional machines, working together, could never pull off, even if they worked for years.

That, Neven hopes, is a quantum computer. A typical laptop and the hangars full of servers that power Google—what quantum scientists charmingly call “classical machines”—do math with “bits” that flip between 1 and 0, each representing a single number in a calculation. But quantum computers use quantum bits, or qubits, which can exist as 1s and 0s at the same time. They can operate on many numbers simultaneously. It’s a mind-bending, late-night-in-the-dorm-room concept that lets a quantum computer calculate at ridiculously fast speeds.
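
A toy sketch of what that scaling means (my own illustration in Python/NumPy, not anything specific to D-Wave): an n-qubit register is described by 2**n amplitudes, so even a tiny register spans every n-bit number at once.

Code
import numpy as np

# Toy illustration: a 3-qubit register in uniform superposition.
# Its state vector has 2**3 = 8 amplitudes, one per basis state,
# so a single register "holds" all 8 numbers 0..7 at once.
n = 3
state = np.ones(2**n) / np.sqrt(2**n)   # equal amplitude for every basis state
probs = state**2                        # measurement probabilities
print(len(state))                       # 8
print(probs.sum())                      # 1.0 (up to float rounding)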

Unless it’s not a quantum computer at all. Quantum computing is so new and so weird that no one is entirely sure whether the D-Wave is a quantum computer or just a very quirky classical one. Not even the people who build it know exactly how it works and what it can do. That’s what Neven is trying to figure out, sitting in his lab, week in, week out, patiently learning to talk to the D-Wave. If he can figure out the puzzle—what this box can do that nothing else can, and how—then boom. “It’s what we call ‘quantum supremacy,’” he says. “Essentially, something that cannot be matched anymore by classical machines.” It would be, in short, a new computer age.

A former wrestler short-listed for Canada’s Olympic team, D-Wave founder Geordie Rose is barrel-chested and possessed of arms that look ready to pin skeptics to the ground. When I meet him at D-Wave’s headquarters in Burnaby, British Columbia, he wears a persistent, slight frown beneath bushy eyebrows. “We want to be the kind of company that Intel, Microsoft, Google are,” Rose says. “The big flagship $100 billion enterprises that spawn entirely new types of technology and ecosystems. And I think we’re close. What we’re trying to do is build the most kick-ass computers that have ever existed in the history of the world.”

The office is a bustle of activity; in the back rooms technicians peer into microscopes, looking for imperfections in the latest batch of quantum chips to come out of their fab lab. A pair of shoulder-high helium tanks stand next to three massive black metal cases, where more techs attempt to weave together their spilt guts of wires. Jeremy Hilton, D-Wave’s vice president of processor development, gestures to one of the cases. “They look nice, but appropriately for a startup, they’re all just inexpensive custom components. We buy that stuff and snap it together.” The really expensive work was figuring out how to build a quantum computer in the first place.

Like a lot of exciting ideas in physics, this one originates with Richard Feynman. In the 1980s, he suggested that quantum computing would allow for some radical new math. Up here in the macroscale universe, to our macroscale brains, matter looks pretty stable. But that’s because we can’t perceive the subatomic, quantum scale. Way down there, matter is much stranger. Photons—electromagnetic energy such as light and x-rays—can act like waves or like particles, depending on how you look at them, for example. Or, even more weirdly, if you link the quantum properties of two subatomic particles, changing one changes the other in the exact same way. It’s called entanglement, and it works even if they’re miles apart, via an unknown mechanism that seems to move faster than the speed of light.

Knowing all this, Feynman suggested that if you could control the properties of subatomic particles, you could hold them in a state of superposition—being more than one thing at once. This would, he argued, allow for new forms of computation. In a classical computer, bits are actually electrical charge—on or off, 1 or 0. In a quantum computer, they could be both at the same time.

It was just a thought experiment until 1994, when mathematician Peter Shor hit upon a killer app: a quantum algorithm that could find the prime factors of massive numbers. Cryptography, the science of making and breaking codes, relies on a quirk of math, which is that if you multiply two large prime numbers together, it’s devilishly hard to break the answer back down into its constituent parts. You need huge amounts of processing power and lots of time. But if you had a quantum computer and Shor’s algorithm, you could cheat that math—and destroy all existing cryptography. “Suddenly,” says John Smolin, a quantum computer researcher at IBM, “everybody was into it.”
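
To make that quirk of math concrete, here is a toy Python sketch (illustrative only; the primes and the naive trial-division factorer are my own, not anything from the article): multiplying two primes is one fast operation, while undoing it means searching.

Code
# Multiplying two primes is instant; recovering them by trial division
# takes time that explodes with the size of the number. Toy scale only.
p, q = 104729, 1299709          # two primes
n = p * q                       # the easy direction

def factor(n):
    """Naive trial division: fine here, hopeless for 600-digit RSA keys."""
    d = 3
    while d * d <= n:
        if n % d == 0:
            return d, n // d
        d += 2
    return None

print(factor(n))                # (104729, 1299709), after ~50,000 divisions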

That includes Geordie Rose. A child of two academics, he grew up in the backwoods of Ontario and became fascinated by physics and artificial intelligence. While pursuing his doctorate at the University of British Columbia in 1999, he read Explorations in Quantum Computing, one of the first books to theorize how a quantum computer might work, written by NASA scientist—and former research assistant to Stephen Hawking—Colin Williams. (Williams now works at D-Wave.)

Reading the book, Rose had two epiphanies. First, he wasn’t going to make it in academia. “I never was able to find a place in science,” he says. But he felt he had the bullheaded tenacity, honed by years of wrestling, to be an entrepreneur. “I was good at putting together things that were really ambitious, without thinking they were impossible.” At a time when lots of smart people argued that quantum computers could never work, he fell in love with the idea of not only making one but selling it.

With about $100,000 in seed funding from an entrepreneurship professor, Rose and a group of university colleagues founded D-Wave. They aimed at an incubator model, setting out to find and invest in whoever was on track to make a practical, working device. The problem: Nobody was close.

At the time, most scientists were pursuing a version of quantum computing called the gate model. In this architecture, you trap individual ions or photons to use as qubits and chain them together in logic gates like the ones in regular computer circuits—the ands, ors, nots, and so on that assemble into how a computer thinks. The difference, of course, is that the qubits could interact in much more complex ways, thanks to superposition, entanglement, and interference.

But qubits really don’t like to stay in a state of superposition, what’s called coherence. A single molecule of air can knock a qubit out of coherence. The simple act of observing the quantum world collapses all of its every-number-at-once quantumness into stochastic, humdrum, nonquantum reality. So you have to shield qubits—from everything. Heat or other “noise,” in physics terms, screws up a quantum computer, rendering it useless.

You’re left with a gorgeous paradox: Even if you successfully run a calculation, you can’t easily find that out, because looking at it collapses your superpositioned quantum calculation to a single state, picked at random from all possible superpositions and thus likely totally wrong. You ask the computer for the answer and get garbage.

Lashed to these unforgiving physics, scientists had built systems with only two or three qubits at best. They were wickedly fast but too underpowered to solve any but the most prosaic, lab-scale problems. But Rose didn’t want just two or three qubits. He wanted 1,000. And he wanted a device he could sell, within 10 years. He needed a way to make qubits that weren’t so fragile.

In 2003, he found one. Rose met Eric Ladizinsky, a tall, sporty scientist at NASA’s Jet Propulsion Lab who was an expert in superconducting quantum interference devices, or Squids. When Ladizinsky supercooled teensy loops of niobium metal to near absolute zero, magnetic fields ran around the loops in two opposite directions at once. To a physicist, electricity and magnetism are the same thing, so Ladizinsky realized he was seeing superpositioning of electrons. He also suspected these loops could become entangled, and that the charges could quantum-tunnel through the chip from one loop to another. In other words, he could use the niobium loops as qubits. (The field running in one direction would be a 1; the opposing field would be a 0.) The best part: The loops themselves were relatively big, a fraction of a millimeter. A regular microchip fab lab could build them.

The two men thought about using the niobium loops to make a gate-model computer, but they worried the gate model would be too susceptible to noise and timing errors. They had an alternative, though—an architecture that seemed easier to build. Called adiabatic annealing, it could perform only one specific computational trick: solving those rule-laden optimization problems. It wouldn’t be a general-purpose computer, but optimization is enormously valuable. Anyone who uses machine learning—Google, Wall Street, medicine—does it all the time. It’s how you train an artificial intelligence to recognize patterns. It’s familiar. It’s hard. And, Rose realized, it would have an immediate market value if they could do it faster.

In a traditional computer, annealing works like this: You mathematically translate your problem into a landscape of peaks and valleys. The goal is to try to find the lowest valley, which represents the optimized state of the system. In this metaphor, the computer rolls a rock around the problem-scape until it settles into the lowest-possible valley, and that’s your answer. But a conventional computer often gets stuck in a valley that isn’t really lowest at all. The algorithm can’t see over the edge of the nearest mountain to know if there’s an even lower vale. A quantum annealer, Rose and Ladizinsky realized, could perform tricks that avoid this limitation. They could take a chip full of qubits and tune each one to a higher or lower energy state, turning the chip into a representation of the rocky landscape. But thanks to superposition and entanglement between the qubits, the chip could computationally tunnel through the landscape. It would be far less likely to get stuck in a valley that wasn’t the lowest, and it would find an answer far more quickly.
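
The classical version of that rock-rolling is the well-known simulated annealing algorithm. Below is a minimal, illustrative Python sketch on an invented one-dimensional bumpy landscape; it is just the metaphor above in code, not how D-Wave's hardware actually anneals.

Code
import math, random

def landscape(x):
    # An invented bumpy cost surface: many local valleys, global minimum near x = 2.
    return 0.5 * (x - 2)**2 + math.sin(5 * x)

def simulated_anneal(steps=20000, temp=2.0, cooling=0.9995):
    x = random.uniform(-10, 10)
    for _ in range(steps):
        candidate = x + random.gauss(0, 0.5)         # roll the rock a little
        delta = landscape(candidate) - landscape(x)
        # Always accept downhill moves; sometimes accept uphill ones, with a
        # probability that shrinks as the temperature drops. The uphill moves
        # are what let the rock escape a valley that isn't really the lowest.
        if delta < 0 or random.random() < math.exp(-delta / temp):
            x = candidate
        temp *= cooling                              # cool the system
    return x, landscape(x)

print(simulated_anneal())   # usually lands near the global valley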

Better yet, Rose and Ladizinsky predicted that a quantum annealer wouldn’t be as fragile as a gate system. They wouldn’t need to precisely time the interactions of individual qubits. And they suspected their machine would work even if only some of the qubits were entangled or tunneling; those functioning qubits would still help solve the problem more quickly. And since the answer a quantum annealer kicks out is the lowest energy state, they also expected it would be more robust, more likely to survive the observation an operator has to make to get the answer out. “The adiabatic model is intrinsically just less corrupted by noise,” says Williams, the guy who wrote the book that got Rose started.

By 2003, that vision was attracting investment. Venture capitalist Steve Jurvetson wanted to get in on what he saw as the next big wave of computing that would propel machine intelligence everywhere—from search engines to self-driving cars. A smart Wall Street bank, Jurvetson says, could get a huge edge on its competition by being the first to use a quantum computer to create ever-smarter trading algorithms. He imagines himself as a banker with a D-Wave machine: “A torrent of cash comes my way if I do this well,” he says. And for a bank, the $10 million cost of a computer is peanuts. “Oh, by the way, maybe I buy exclusive access to D-Wave. Maybe I buy all your capacity! That’s just, like, a no-brainer to me.” D-Wave pulled in $100 million from investors like Jeff Bezos and In-Q-Tel, the venture capital arm of the CIA.

The D-Wave team huddled in a rented lab at the University of British Columbia, trying to learn how to control those tiny loops of niobium. Soon they had a one-qubit system. “It was a crappy, duct-taped-together thing,” Rose says. “Then we had two qubits. And then four.” When their designs got more complicated, they moved to larger-scale industrial fabrication.

As I watch, Hilton pulls out one of the wafers just back from the fab facility. It’s a shiny black disc the size of a large dinner plate, inscribed with 130 copies of their latest 512-qubit chip. Peering in closely, I can just make out the chips, each about 3 millimeters square. The niobium wire for each qubit is only 2 microns wide, but it’s 700 microns long. If you squint very closely you can spot one: a piece of the quantum world, visible to the naked eye.

Hilton walks to one of the giant, refrigerated D-Wave black boxes and opens the door. Inside, an inverted pyramid of wire-bedecked, gold-plated copper discs hangs from the ceiling. This is the guts of the device. It looks like a steampunk chandelier, but as Hilton explains, the gold plating is key: It conducts heat—noise—up and out of the device. At the bottom of the chandelier, hanging at chest height, is what they call the coffee can, the enclosure for the chip. “This is where we go from our everyday world,” Hilton says, “to a unique place in the universe.”

By 2007, D-Wave had managed to produce a 16-qubit system, the first one complicated enough to run actual problems. They gave it three real-world challenges: solving a sudoku, sorting people at a dinner table, and matching a molecule to a set of molecules in a database. The problems wouldn’t challenge a decrepit Dell. But they were all about optimization, and the chip actually solved them. “That was really the first time when I said, holy crap, you know, this thing’s actually doing what we designed it to do,” Rose says. “Back then we had no idea if it was going to work at all.” But 16 qubits wasn’t nearly enough to tackle a problem that would be of value to a paying customer. He kept pushing his team, producing up to three new designs a year, always aiming to cram more qubits together.

When the team gathers for lunch in D-Wave’s conference room, Rose jokes about his own reputation as a hard-driving taskmaster. Hilton is walking around showing off the 512-qubit chip that Google just bought, but Rose is demanding the 1,000-qubit one. “We’re never happy,” Rose says. “We always want something better.”

“Geordie always focuses on the trajectory,” Hilton says. “He always wants what’s next.”

In 2010, D-Wave’s first customers came calling. Lockheed Martin was wrestling with particularly tough optimization problems in their flight control systems. So a manager named Greg Tallant took a team to Burnaby. “We were intrigued with what we saw,” Tallant says. But they wanted proof. They gave D-Wave a test: Find the error in an algorithm. Within a few weeks, D-Wave developed a way to program its machine to find the error. Convinced, Lockheed Martin leased a $10 million, 128-qubit machine that would live at a USC lab.

The next clients were Google and NASA. Hartmut Neven was another old friend of Rose’s; they shared a fascination with machine intelligence, and Neven had long hoped to start a quantum lab at Google. NASA was intrigued, because it often faced wickedly hard best-fit problems. “We have the Curiosity rover on Mars, and if we want to move it from point A to point B there are a lot of possible routes—that’s a classic optimization problem,” says NASA’s Rupak Biswas. But before Google executives would put down millions, they wanted to know the D-Wave worked. In the spring of 2013, Rose agreed to hire a third party to run a series of Neven-designed tests, pitting D-Wave against traditional optimizers running on regular computers. Catherine McGeoch, a computer scientist at Amherst College, agreed to run the tests, but only under the condition that she report her results publicly.

Rose quietly panicked. For all of his bluster—D-Wave routinely put out press releases boasting about its new devices—he wasn’t sure his black box would win the shoot-out. “One of the possible outcomes was that the thing would totally tank and suck,” Rose says. “And then she would publish all this stuff and it would be a horrible mess.”

McGeoch pitted the D-Wave against three pieces of off-the-shelf software. One was IBM’s CPLEX, a tool used by ConAgra, for instance, to crunch global market and weather data to find the optimum price at which to sell flour; the other two were well-known open source optimizers. McGeoch picked three mathematically chewy problems and ran them through the D-Wave and through an ordinary Lenovo desktop running the other software.

The results? D-Wave’s machine matched the competition—and in one case dramatically beat it. On two of the math problems, the D-Wave worked at the same pace as the classical solvers, hitting roughly the same accuracy. But on the hardest problem, it was much speedier, finding the answer in less than half a second, while CPLEX took half an hour. The D-Wave was 3,600 times faster. For the first time, D-Wave had seemingly objective evidence that its machine worked quantum magic. Rose was relieved; he later hired McGeoch as his new head of benchmarking. Google and NASA got a machine. D-Wave was now the first quantum computer company with real, commercial sales.


That’s when its troubles began.

Quantum scientists had long been skeptical of D-Wave. Academics tend to get suspicious when the private sector claims massive leaps in scientific knowledge. They frown on “science by press release,” and Geordie Rose’s bombastic proclamations smelled wrong. Back then, D-Wave had published little about its system. When Rose held a press conference in 2007 to show off the 16-qubit system, MIT quantum scientist Scott Aaronson wrote that the computer was “about as useful for industrial optimization problems as a roast-beef sandwich.” Plus, scientists doubted D-Wave could have gotten so far ahead of the state of the art. The most qubits anyone had ever got working was eight. So for D-Wave to boast of a 500-qubit machine? Nonsense. “They never seemed properly concerned about the noise model,” as IBM’s Smolin says. “Pretty early on, people became dismissive of it and we all sort of moved on.”

That changed when Lockheed Martin and USC acquired their quantum machine in 2011. Scientists realized they could finally test this mysterious box and see whether it stood up to the hype. Within months of the D-Wave installation at USC, researchers worldwide came calling, asking to run tests.

The first question was simple: Was the D-Wave system actually quantum? It might be solving problems, but if noise was disentangling the qubits, it was just an expensive classical computer, operating adiabatically but not with quantum speed. Daniel Lidar, a quantum scientist at USC who’d advised Lockheed on its D-Wave deal, figured out a clever way to answer the question. He ran thousands of instances of a problem on the D-Wave and charted the machine’s “success probability”—how likely it was to get the problem right—against the number of times it tried. The final curve was U-shaped. In other words, most of the time the machine either entirely succeeded or entirely failed. When he ran the same problems on a classical computer with an annealing optimizer, the pattern was different: The distribution clustered in the center, like a hill; this machine was sort of likely to get the problems right. Evidently, the D-Wave didn’t behave like an old-fashioned computer.

Lidar also ran the problems on a classical algorithm that simulated the way a quantum computer would solve a problem. The simulation wasn’t superfast, but it thought the same way a quantum computer did. And sure enough, it produced the same U shape as the D-Wave. At minimum, the D-Wave acts more like a simulation of a quantum computer than like a conventional one.
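
A toy simulation makes Lidar's signature easy to picture (the probabilities below are invented for illustration, not the experimental data): assign each problem instance a per-run success probability in the style of each machine, then histogram the per-instance success rates.

Code
import random

def success_histogram(instance_probs, runs=100, bins=10):
    """Histogram of per-instance success rates over repeated runs."""
    counts = [0] * bins
    for p in instance_probs:
        rate = sum(random.random() < p for _ in range(runs)) / runs
        counts[min(int(rate * bins), bins - 1)] += 1
    return counts

# Invented behavior, for shape only: a D-Wave-like machine tends to either
# nail an instance or miss it (p near 0 or 1); a classical annealer is
# "sort of likely" to solve each one (p near the middle).
quantum_like   = [random.choice([0.05, 0.95]) for _ in range(1000)]
classical_like = [random.gauss(0.5, 0.1) for _ in range(1000)]

print(success_histogram(quantum_like))    # piles up at both ends: a U
print(success_histogram(classical_like))  # piles up in the middle: a hill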

Even Scott Aaronson was swayed. He told me the results were “reasonable evidence” of quantum behavior. If you look at the pattern of answers being produced, “then entanglement would be hard to avoid.” It’s the same message I heard from most scientists.

But to really be called a quantum computer, you also have to be, as Aaronson puts it, “productively quantum.” The behavior has to help things move faster. Quantum scientists pointed out that McGeoch hadn’t orchestrated a fair fight. D-Wave’s machine was a specialized device built to do optimizing problems. McGeoch had compared it to off-the-shelf software.

Matthias Troyer set out to even up the odds. A computer scientist at the Institute for Theoretical Physics in Zurich, Troyer tapped programming wiz Sergei Isakov to hot-rod a 20-year-old software optimizer designed for Cray supercomputers. Isakov spent a few weeks tuning it, and when it was ready, Troyer and Isakov’s team fed tens of thousands of problems into USC’s D-Wave and into their new and improved solver on an Intel desktop.

This time, the D-Wave wasn’t faster at all. In only one small subset of the problems did it race ahead of the conventional machine. Mostly, it only kept pace. “We find no evidence of quantum speedup,” Troyer’s paper soberly concluded. Rose had spent millions of dollars, but his machine couldn’t beat an Intel box.

What’s worse, as the problems got harder, the amount of time the D-Wave needed to solve them rose—at roughly the same rate as the old-school computers. This, Troyer says, is particularly bad news. If the D-Wave really was harnessing quantum dynamics, you’d expect the opposite. As the problems get harder, it should pull away from the Intels. Troyer and his team concluded that D-Wave did in fact have some quantum behavior, but it wasn’t using it productively. Why? Possibly, Troyer and Lidar say, it doesn’t have enough “coherence time.” For some reason its qubits aren’t qubitting—the quantum state of the niobium loops isn’t sustained.

One way to fix this problem, if indeed it’s a problem, might be to have more qubits running error correction. Lidar suspects D-Wave would need another 100—maybe 1,000—qubits checking its operations (though the physics here are so weird and new, he’s not sure how error correction would work). “I think that almost everybody would agree that without error correction this plane is not going to take off,” Lidar says.

Rose’s response to the new tests: “It’s total bullshit.”

D-Wave, he says, is a scrappy startup pushing a radical new computer, crafted from nothing by a handful of folks in Canada. From this point of view, Troyer had the edge. Sure, he was using standard Intel machines and classical software, but those benefited from decades’ and trillions of dollars’ worth of investment. The D-Wave acquitted itself admirably just by keeping pace. Troyer “had the best algorithm ever developed by a team of the top scientists in the world, finely tuned to compete on what this processor does, running on the fastest processors that humans have ever been able to build,” Rose says. And the D-Wave “is now competitive with those things, which is a remarkable step.”

But what about the speed issues? “Calibration errors,” he says. Programming a problem into the D-Wave is a manual process, tuning each qubit to the right level on the problem-solving landscape. If you don’t set those dials precisely right, “you might be specifying the wrong problem on the chip,” Rose says. As for noise, he admits it’s still an issue, but the next chip—the 1,000-qubit version codenamed Washington, coming out this fall—will reduce noise yet more. His team plans to replace the niobium loops with aluminum to reduce oxide buildup. “I don’t care if you build [a traditional computer] the size of the moon with interconnection at the speed of light, running the best algorithm that Google has ever come up with. It won’t matter, ’cause this thing will still kick your ass,” Rose says. Then he backs off a bit. “OK, everybody wants to get to that point—and Washington’s not gonna get us there. But Washington is a step in that direction.”

Or here’s another way to look at it, he tells me. Maybe the real problem with people trying to assess D-Wave is that they’re asking the wrong questions. Maybe his machine needs harder problems.

On its face, this sounds crazy. If plain old Intels are beating the D-Wave, why would the D-Wave win if the problems got tougher? Because the tests Troyer threw at the machine were random. On a tiny subset of those problems, the D-Wave system did better. Rose thinks the key will be zooming in on those success stories and figuring out what sets them apart—what advantage D-Wave had in those cases over the classical machine. In other words, he needs to figure out what sort of problems his machine is uniquely good at.

Helmut Katzgraber, a quantum scientist at Texas A&M, cowrote a paper in April bolstering Rose’s point of view. Katzgraber argued that the optimization problems everyone was tossing at the D-Wave were, indeed, too simple. The Intel machines could easily keep pace. If you think of the problem as a rugged surface and the solvers as trying to find the lowest spot, these problems “look like a bumpy golf course. What I’m proposing is something that looks like the Alps,” he says.

In one sense, this sounds like a classic case of moving the goalposts. D-Wave will just keep on redefining the problem until it wins. But D-Wave’s customers believe this is, in fact, what they need to do. They’re testing and retesting the machine to figure out what it’s good at. At Lockheed Martin, Greg Tallant has found that some problems run faster on the D-Wave and some don’t. At Google, Neven has run over 500,000 problems on his D-Wave and finds the same. He’s used the D-Wave to train image-recognizing algorithms for mobile phones that are more efficient than any before. He produced a car-recognition algorithm better than anything he could do on a regular silicon machine. He’s also working on a way for Google Glass to detect when you’re winking (on purpose) and snap a picture. “When surgeons go into surgery they have many scalpels, a big one, a small one,” he says. “You have to think of quantum optimization as the sharp scalpel—the specific tool.”

The dream of quantum computing has always been shrouded in sci-fi hope and hoopla—with giddy predictions of busted crypto, multiverse calculations, and the entire world of computation turned upside down. But it may be that quantum computing arrives in a slower, sideways fashion: as a set of devices used rarely, in the odd places where the problems we have are spoken in their curious language. Quantum computing won’t run on your phone—but maybe some quantum process of Google’s will be key in training the phone to recognize your vocal quirks and make voice recognition better. Maybe it’ll finally teach computers to recognize faces or luggage. Or maybe, like the integrated circuit before it, no one will figure out the best-use cases until they have hardware that works reliably. It’s a more modest way to look at this long-heralded thunderbolt of a technology. But this may be how the quantum era begins: not with a bang, but a glimmer.

http://www.wired.com/2014/05/quantum-computing/

Pages: 1 2 3 4 5 [6] 7 8 9 10 11 ... 24