Show Posts

This section allows you to view all posts made by this member. Note that you can only see posts made in areas you currently have access to.

Topics - Agrul

General Discussion / obama / china climate change deal
« on: November 13, 2014, 04:56:56 AM »
No threads on this yet?

Kind of a detail-light article from the NYT. If anyone has one with more detail on the deal itself, that'd be cool.

Tech Heads / Good Multi-GPU Mobos?
« on: November 05, 2014, 10:41:34 PM »
Anyone have any suggestions for a solid, reasonably priced mobo that can support 2-4 GPUs at once? Think I'm gonna build a new desktop in the next month or two and stuffing as many AMD GPUs into it as I can is a priority for work.

Doing a little bit of research on the issue now, but there's a lot of information to absorb out there. From what I gather so far, I'm best off going AMD for software reasons, and it'd be ideal for me to pick up a mobo with as many PCIe x16 slots as possible to maximize bandwidth per GPU.

Figure I'll be spending no more than $1k on the whole rig initially, although buying the multiple video cards may shunt me above that figure eventually.

Americans might think they know how bad inequality is, but it turns out they actually have no idea.

A new study conducted at Harvard Business School found that Americans believe CEOs make roughly 30 times what the average worker makes in the U.S., when in actuality they are making more than 350 times the average worker. "Americans drastically underestimated the gap in actual incomes between CEOs and unskilled workers," the study says.

But that underestimation isn't merely drastic—it is also unmatched in the world. The gap between Americans' perception and reality is the largest among the 16 countries for which the researchers measured both perceived and actual pay inequality.

Part of that stems from Americans’ comparatively modest estimation. The citizens of four countries—South Korea, Australia, Chile, and Taiwan—estimate a higher pay gap between CEOs and low-level workers. In South Korea, the perception is that CEOs make 42 times more than the average worker; in Australia, it’s just over 41; in Taiwan, it’s roughly 34; and in Chile, it’s about 33.

But the reason Americans are so bad at guessing how much CEOs make may also be tied to the fact that American CEOs are significantly better paid than those from just about anywhere else.

The average Fortune 500 CEO in the United States makes more than $12 million per year—nearly $5 million more than top CEOs in Switzerland, home to the second-highest-paid CEOs; more than twice the figure for Germany, home to the third-highest-paid; and more than twenty-one times the figure for Poland.

While a handful of countries might perceive larger pay gaps than the United States, none of the ones surveyed have an actual pay gap anywhere near as large. In Switzerland, the country with the second largest CEO-to-worker pay gap, chief executives make 148 times the average worker; in Germany, the country with the third largest gap, CEOs make 147 times the average worker; and in Spain, the country with the fourth largest gap, the ratio is 127 to one.
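Putting the article's figures side by side makes the scale of the misperception concrete. A quick sketch, using only the approximate ratios quoted above:

```python
# CEO-to-worker pay ratios as quoted in the article (approximate).
perceived = {"US": 30, "South Korea": 42, "Australia": 41, "Taiwan": 34, "Chile": 33}
actual = {"US": 350, "Switzerland": 148, "Germany": 147, "Spain": 127}

# Americans' estimate is off by more than a factor of ten.
us_gap = actual["US"] / perceived["US"]
print(f"US: perceived {perceived['US']}x, actual {actual['US']}x, "
      f"off by a factor of ~{us_gap:.1f}")

# Even the second-largest actual gap is well under half the US ratio.
print(f"Switzerland: {actual['Switzerland']}x, "
      f"{actual['Switzerland'] / actual['US']:.0%} of the US figure")
```

So even the countries with the most pessimistic perceptions guess ratios in the 30s and 40s, while the American reality is an order of magnitude beyond the American guess.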

Look no further than a few of America's largest corporations for evidence of the country's exceptionally large pay gap. An analysis from last year estimated that it takes the typical worker at both McDonald's and Starbucks more than six months to earn what each company's CEO makes in a single hour.
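That hour-versus-months comparison implies an hourly pay ratio in the four figures. A rough check, assuming a 40-hour work week and reading "more than six months" as 26 weeks:

```python
# Rough check of the hour-versus-months comparison above, assuming a
# 40-hour work week and taking "more than six months" as 26 weeks.
hours_per_week = 40
weeks = 26
worker_hours = hours_per_week * weeks  # hours a typical worker needs
print(f"implied CEO-to-worker hourly pay ratio: more than {worker_hours}:1")
```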

What Americans share with the rest of the world is a collective disdain for pay inequality. People of all ages, education levels, and income brackets, the study found, believe that low-skilled workers are getting paid too little and high-skilled workers are getting paid too much. "The consensus that income gaps between skilled and unskilled workers should be smaller holds in all subgroups of respondents regardless of their age, education, socioeconomic status, political affiliation and opinions on inequality and pay," the study says.

One can only imagine what that disdain would look like if everyone had a better sense of how great the pay gap actually is.

On June 28, 2009, the world-famous physicist Stephen Hawking threw a party at the University of Cambridge, complete with balloons, hors d'oeuvres and iced champagne. Everyone was invited but no one showed up. Hawking had expected as much, because he only sent out invitations after his party had concluded. It was, he said, "a welcome reception for future time travelers," a tongue-in-cheek experiment to reinforce his 1992 conjecture that travel into the past is effectively impossible.

But Hawking may be on the wrong side of history. Recent experiments offer tentative support for time travel's feasibility—at least from a mathematical perspective. The study cuts to the core of our understanding of the universe, and the resolution of the possibility of time travel, far from being a topic worthy only of science fiction, would have profound implications for fundamental physics as well as for practical applications such as quantum cryptography and computing.

Closed timelike curves
The source of time travel speculation lies in the fact that our best physical theories seem to contain no prohibitions on traveling backward through time. The feat should be possible based on Einstein's theory of general relativity, which describes gravity as the warping of spacetime by energy and matter. An extremely powerful gravitational field, such as that produced by a spinning black hole, could in principle profoundly warp the fabric of existence so that spacetime bends back on itself. This would create a "closed timelike curve," or CTC, a loop that could be traversed to travel back in time.

Hawking and many other physicists find CTCs abhorrent, because any macroscopic object traveling through one would inevitably create paradoxes where cause and effect break down. In a model proposed by the theorist David Deutsch in 1991, however, the paradoxes created by CTCs could be avoided at the quantum scale because of the behavior of fundamental particles, which follow only the fuzzy rules of probability rather than strict determinism. "It's intriguing that you've got general relativity predicting these paradoxes, but then you consider them in quantum mechanical terms and the paradoxes go away," says University of Queensland physicist Tim Ralph. "It makes you wonder whether this is important in terms of formulating a theory that unifies general relativity with quantum mechanics."

Experimenting with a curve
Recently Ralph and his PhD student Martin Ringbauer led a team that experimentally simulated Deutsch's model of CTCs for the very first time, testing and confirming many aspects of the two-decades-old theory. Their findings are published in Nature Communications. Much of their simulation revolved around investigating how Deutsch's model deals with the “grandfather paradox,” a hypothetical scenario in which someone uses a CTC to travel back through time to murder her own grandfather, thus preventing her own later birth. (Scientific American is part of Nature Publishing Group.)

Deutsch's quantum solution to the grandfather paradox works something like this:

Instead of a human being traversing a CTC to kill her ancestor, imagine that a fundamental particle goes back in time to flip a switch on the particle-generating machine that created it. If the particle flips the switch, the machine emits a particle—the particle—back into the CTC; if the switch isn't flipped, the machine emits nothing. In this scenario there is no a priori deterministic certainty to the particle's emission, only a distribution of probabilities.

Deutsch's insight was to postulate self-consistency in the quantum realm, to insist that any particle entering one end of a CTC must emerge at the other end with identical properties. Therefore, a particle emitted by the machine with a probability of one half would enter the CTC and come out the other end to flip the switch with a probability of one half, imbuing itself at birth with a probability of one half of going back to flip the switch. If the particle were a person, she would be born with a one-half probability of killing her grandfather, giving her grandfather a one-half probability of escaping death at her hands—good enough in probabilistic terms to close the causative loop and escape the paradox. Strange though it may be, this solution is in keeping with the known laws of quantum mechanics.
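The self-consistency condition can be sketched numerically. In the stripped-down scenario above, one trip around the loop flips the particle's state (emitted becomes not-emitted and vice versa), so a consistent state must survive a bit flip unchanged. This is a toy illustration of Deutsch's idea only, not a model of the Ringbauer experiment:

```python
import numpy as np

# Deutsch's consistency condition, toy version: the state entering the CTC
# must equal the state leaving it. Here one trip around the loop applies a
# bit flip X (the machine emits the OPPOSITE of what came out of the loop).
X = np.array([[0, 1], [1, 0]], dtype=complex)

def ctc_map(rho):
    """One trip around the loop: the emitted state is the flipped state."""
    return X @ rho @ X.conj().T

# Start from "particle definitely emitted" and average successive iterates;
# this converges to a state satisfying rho = X rho X.
rho = np.array([[1, 0], [0, 0]], dtype=complex)
for _ in range(50):
    rho = 0.5 * (rho + ctc_map(rho))

print(np.round(rho.real, 3))
```

The fixed point is the maximally mixed state diag(1/2, 1/2): the particle is emitted with probability one half, exactly the self-consistent one-half-kills-her-grandfather resolution described above.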

In their new simulation Ralph, Ringbauer and their colleagues studied Deutsch's model using interactions between pairs of polarized photons within a quantum system that they argue is mathematically equivalent to a single photon traversing a CTC. "We encode their polarization so that the second one acts as kind of a past incarnation of the first,” Ringbauer says. So instead of sending a person through a time loop, they created a stunt double of the person and ran him through a time-loop simulator to see if the doppelganger emerging from a CTC exactly resembled the original person as he was in that moment in the past.

By measuring the polarization states of the second photon after its interaction with the first, across multiple trials the team successfully demonstrated Deutsch's self-consistency in action. "The state we got at our output, the second photon at the simulated exit of the CTC, was the same as that of our input, the first encoded photon at the CTC entrance," Ralph says. "Of course, we're not really sending anything back in time but [the simulation] allows us to study weird evolutions normally not allowed in quantum mechanics."

Those "weird evolutions" enabled by a CTC, Ringbauer notes, would have remarkable practical applications, such as breaking quantum-based cryptography through the cloning of the quantum states of fundamental particles. "If you can clone quantum states,” he says, “you can violate the Heisenberg uncertainty principle,” which comes in handy in quantum cryptography because the principle forbids simultaneously accurate measurements of certain kinds of paired variables, such as position and momentum. "But if you clone that system, you can measure one quantity in the first and the other quantity in the second, allowing you to decrypt an encoded message."

"In the presence of CTCs, quantum mechanics allows one to perform very powerful information-processing tasks, much more than we believe classical or even normal quantum computers could do," says Todd Brun, a physicist at the University of Southern California who was not involved with the team's experiment. "If the Deutsch model is correct, then this experiment faithfully simulates what could be done with an actual CTC. But this experiment cannot test the Deutsch model itself; that could only be done with access to an actual CTC."

Alternative reasoning
Deutsch's model isn’t the only one around, however. In 2009 Seth Lloyd, a theorist at Massachusetts Institute of Technology, proposed an alternative, less radical model of CTCs that resolves the grandfather paradox using quantum teleportation and a technique called post-selection, rather than Deutsch's quantum self-consistency. With Canadian collaborators, Lloyd went on to perform successful laboratory simulations of his model in 2011. "Deutsch's theory has a weird effect of destroying correlations," Lloyd says. "That is, a time traveler who emerges from a Deutschian CTC enters a universe that has nothing to do with the one she exited in the future. By contrast, post-selected CTCs preserve correlations, so that the time traveler returns to the same universe that she remembers in the past."

This property of Lloyd's model would make CTCs much less powerful for information processing, although still far superior to what computers could achieve in typical regions of spacetime. "The classes of problems our CTCs could help solve are roughly equivalent to finding needles in haystacks," Lloyd says. "But a computer in a Deutschian CTC could solve why haystacks exist in the first place.”

Lloyd, though, readily admits the speculative nature of CTCs. “I have no idea which model is really right. Probably both of them are wrong,” he says. Of course, he adds, the other possibility is that Hawking is correct, “that CTCs simply don't and cannot exist." Time-travel party planners should save the champagne for themselves—their hoped-for future guests seem unlikely to arrive.

General Discussion / Introverts are all Pokemon Nerds
« on: August 18, 2014, 12:16:30 PM »
Do our Facebook posts reflect our true personalities? Incrementally, probably not. But in aggregate, the things we say on social media paint a fairly accurate portrait of our inner selves. A team of University of Pennsylvania scientists is using Facebook status updates to find commonalities in the words used by different ages, genders, and even psyches.

The so-called “World Well-Being Project” started as an effort to gauge happiness across various states and communities.

“Governments have an increased interest in measuring not just economic outcomes but other aspects of well-being,” said Andrew Schwartz, a UPenn computer scientist who works on the project. “But it's very difficult to study well-being at a large scale. It costs a lot of money to administer surveys to see how people are doing in certain areas. Social media can help with that.”

For the studies, Schwartz and his co-authors asked people to download a Facebook app called “My Personality.” The app asks users to take a personality test and indicate their age and gender, and then it tracks their Facebook updates. So far, 75,000 people have participated in the experiment.

Then, through a process called differential language analysis, they isolate the words that are most strongly correlated with a certain gender, age, trait, or place. The resulting word clouds reveal which words are most distinguishing of, say, a woman. Or a neurotic person.
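A toy sketch of what that analysis involves: compute each word's relative frequency per user, then correlate those frequencies with the users' trait scores and rank the words. The corpus and "extraversion" scores below are invented for illustration and have nothing to do with the project's actual data or pipeline:

```python
from collections import Counter
import math

# Invented mini-corpus: a few users' posts and made-up extraversion scores.
posts = {
    "u1": "party party baby ya fun",
    "u2": "anime emoticons quiet reading anime",
    "u3": "party ya ya dancing baby",
    "u4": "anime manga quiet tea",
}
extraversion = {"u1": 0.9, "u2": 0.2, "u3": 0.8, "u4": 0.1}

def rel_freqs(text):
    """Each word's share of a user's total word count."""
    counts = Counter(text.split())
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

def pearson(xs, ys):
    """Pearson correlation; 0.0 when a variable has no variance."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy) if sx and sy else 0.0

users = list(posts)
freqs = {u: rel_freqs(posts[u]) for u in users}
vocab = {w for f in freqs.values() for w in f}

# Correlate each word's frequency with the trait score across users.
corr = {
    w: pearson([freqs[u].get(w, 0.0) for u in users],
               [extraversion[u] for u in users])
    for w in vocab
}
for w, r in sorted(corr.items(), key=lambda kv: -kv[1])[:3]:
    print(w, round(r, 2))
```

With these invented scores, "party" comes out strongly positively correlated with extraversion and "anime" negatively, which is the kind of contrast the resulting word clouds display.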

In the six studies they’ve published so far, they’ve found that, for example, introverts make heavy use of emoticons and words related to anime, while extroverts say “party,” “baby,” and “ya.”

Words Used by Introverts (top) vs. Extroverts

Schwartz and his colleagues have also tracked “openness,” which is “characterized by traits such as being intelligent, analytical, reflective, curious, imaginative, creative, and sophisticated.” Open people talk about “dreams” and the “universe,” apparently, while people with “low openness”—“characterized by traits such as being unintelligent, unanalytical, unreflective, uninquisitive, unimaginative, uncreative, and unsophisticated”—use contractions, misspellings ... and misspelled contractions.

Openness (top) and Non-Openness Word Correlations

They’ve also analyzed how our use of certain words changes as we age. People are much less likely to be “bored” at 60 than at 13, it turns out, but much more likely to feel proud. Twenty-five-year-olds tend to mention “drunk,” but 55-year-olds talk about “wine.”

In one of the first studies, the team correlated past Gallup research on life satisfaction with tweets from various counties.

Happy communities, they found, talk about exercise—fitness, Zumba, and the gym—while the sadder ones felt “bored” or “tired.” The more upbeat locales were also more likely to donate money and volunteer—but also to go to meetings. The hidden socio-economic variable is clear: having money allows you to go rock-climbing and give to charity, and it makes you happier, too.

So far, many of the findings have been rather predictable—which isn’t a bad thing, when it comes to social science.

“Subjects living in high elevations talk about the mountains,” they write. “Neurotic people disproportionately use the phrase ‘sick of’ and the word ‘depressed.’”

But some have shed light on some strange connections between who we are, how we live our lives, and the words we choose to present to the world. For example, “an active life implies emotional stability,” they note. And, “males use the possessive ‘my’ when mentioning their wife or girlfriend more often than females use ‘my’ with ‘husband’ or ‘boyfriend.’”

“It is a very unbiased view of humanity,” Schwartz said of the lab’s work so far. “The data tells the story, and it tells it about people.”

General Discussion / Bypassing Bankers: P2P Lending Companies
« on: August 17, 2014, 04:51:15 PM »
One of the more hopeful consequences of the 2008 financial crisis has been the growth of a group of small companies dedicated to upending the status quo on Wall Street. Bearing cute, Silicon Valley–esque names such as Kabbage, Zopa, Kiva, and Prosper, these precocious upstarts are tiny by banking standards, and pose no near-term threat to behemoths like Goldman Sachs, Morgan Stanley, JPMorgan Chase, Bank of America, or Citigroup—banks that between them control much of the world’s capital flow. But there is no question that these young companies have smartly exploited the too-big-to-fail banks’ failure to cater to the credit needs of consumers and small businesses, and will likely do so more noticeably in the years ahead.

At the forefront of the group is Lending Club, a San Francisco–based company founded in 2007 by Renaud Laplanche, a serial entrepreneur and former Wall Street attorney. Laplanche, 43, grew up in a small town in France and, as a teenager, worked every day for three hours before school in his father’s grocery store. He also won two national sailing championships in France, in 1988 and 1990. Now an American citizen, he created Lending Club after being astonished at the high cost of consumer credit in the United States. Lending Club uses the Internet to match investors with individual borrowers, most of whom are looking to refinance their credit-card debt or other personal loans. The result is a sort of eHarmony for borrowers and lenders. Lending Club has facilitated more than $4 billion in loans and is the largest company performing this sort of service, by a factor of four.

The matching of individual lenders with borrowers on Lending Club’s Web site takes place anonymously (lenders can see would-be borrowers’ relevant characteristics, just not their name), but each party gets what it wants. Many borrowers can shave a few percentage points off the interest rate on the debt they refinance, and lock in the lower rate for three to five years. But that interest rate is still more than the lenders could earn on a three-year Treasury security (about 1 percent), or a typical “high yield” or “junk” bond (averaging about 5 percent). Lending Club claims that its loans have so far yielded an annual net return to lenders of about 8 percent, after fees and accounting for losses. It’s worth noting, however, that what lenders gain in yield, they lose in safety: the loans are unsecured, so if a borrower does not pay his debts—and each year, between 3 and 4 percent of Lending Club borrowers do not—the lender can do little about it except absorb the loss and move on. The average consumer loan on Lending Club is about $14,000; many lenders make several loans at once to hedge against the risk of any single loan going bad.
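The lender's side of that paragraph reduces to simple arithmetic: gross interest, minus fees, minus expected losses. A back-of-the-envelope sketch using the article's approximate figures (the 1 percent servicing fee is an assumption for illustration, not a number from the article):

```python
# Back-of-the-envelope lender economics, using the article's rough figures.
gross_rate = 0.13         # typical interest rate paid by borrowers
service_fee = 0.01        # ASSUMED servicing fee taken from lender payments
default_rate = 0.035      # "between 3 and 4 percent" of borrowers don't pay
loss_given_default = 1.0  # loans are unsecured, so assume the balance is lost

expected_net = gross_rate - service_fee - default_rate * loss_given_default
print(f"expected net yield: {expected_net:.1%}")
```

With these inputs the sketch lands around 8.5 percent, in the neighborhood of the roughly 8 percent net return the article cites. Spreading money across many $14,000 loans doesn't change this expected yield, but it shrinks the damage any single default can do.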

Lending Club’s astute initial investors, including the venture-capital firms Norwest Venture Partners, Canaan Partners, and Foundation Capital, also get what they want: no liability for the loans being made, no oversight from persnickety bank regulators (Lending Club is regulated by the Securities and Exchange Commission), none of the costs associated with the typical bank-branch network, and, best of all, a plethora of fees, collected from both the borrower and the lender, totaling about 5 percent of the loan amount, on average.

Compared with Wall Street firms, Lending Club is a flea on an elephant’s tail. In the first quarter of 2014, it helped arrange 56,557 loans totaling $791 million; JPMorgan Chase made $47 billion in what it classifies as consumer loans during the same period. But the company is growing quickly. In 2013, its revenue—the fees it charges for the loans it helps arrange—tripled, to $98 million. There is talk of an IPO later this year. In April, the company was valued at $3.75 billion—38 times its 2013 revenue and more than 520 times its net income—when it raised $65 million in additional equity from a new group of high-powered institutional investors, including BlackRock and T. Rowe Price. Lending Club used the cash to help it acquire Springstone Financial, which provides financing for school loans and some elective medical procedures.

In other words, Lending Club is backed by quite a few smart-money players, eager to buy its equity at nosebleed valuations in return for the chance to get in on the micro-loan market—and perhaps to change the way consumers and small businesses get credit. “It’s a value proposition that really comes from the fact that we operate at a lower cost, and then pass on the cost savings to both borrowers and investors,” Laplanche told me. “We give each side a better deal than they could get elsewhere.” That’s certainly true: Lending Club doesn’t have physical branches, or several other layers of costs that weigh down traditional banks. But Lending Club also seems to exploit a market inefficiency that is really quite shocking, given the supposed sophistication of the big Wall Street firms. When it comes to interest rates, the major credit-card issuers—among them JPMorgan Chase and Citigroup—do not differentiate greatly among the many people who borrow money on their credit cards. They charge just about all of them similarly usurious rates. While a dizzying array of credit cards offer a plethora of introductory interest rates and benefits—cash back, for instance—regular interest rates on cards issued by the big players to consumers with average credit scores typically range between 13 and 23 percent. Lending Club’s business strategy, in part, is simply to differentiate more finely among borrowers, particularly those with good credit histories.

Lending Club screens loan applicants—only 10 to 20 percent of people seeking loans get approved to use the marketplace. The company then places each approved borrower into one of 35 credit categories, using many factors, including FICO score. Those with the highest credit ranking can borrow money at about 7 percent interest. As of the first quarter of 2014, the largest category of Lending Club loans charged borrowers an interest rate of about 13 percent, well below the rate charged by the typical credit-card company, which in early June was almost 16 percent.

It’s quite possible, of course, that Lending Club is merely mispricing the credit risk posed by these small borrowers. After all, Lending Club isn’t making the loans; it bears no liability if, say, default rates rise when another recession hits. So far, however, Lending Club’s loan-default rates appear no worse than the industry average.

Another possibility is that the six largest credit-card issuers in the United States—Chase, Bank of America, American Express, Citigroup, Capital One, and Discover—which together control about two-thirds of the domestic consumer-credit-card market, have been acting like a cartel, keeping lending rates higher than they would be in a truly competitive market, and reaping huge profits. In the first quarter of 2014, Chase’s credit-card business—which also includes auto loans and merchant services—had a net income of $1.1 billion and a profit margin of nearly 25 percent. Few businesses on Wall Street provide the same level of consistent profitability as does the consumer-credit-card business. If a few crumbs fall off the table to the likes of Lending Club or Prosper, so be it.

Renaud Laplanche is a firm believer in transparency, and Lending Club’s Web site and public filings are filled with statistics about borrowers. In contrast to the practice of the big banks, the company makes details about each loan available publicly. It recently announced a partnership with San Francisco–based Union Bank, which has $107 billion in assets, to offer the bank’s customers access to its borrowing marketplace.

At a conference in May in San Francisco, where more than 900 peer-to-peer-banking enthusiasts gathered to hear about the latest trends in the industry, Charles Moldow, a general partner at Foundation Capital—one of Lending Club’s largest investors—reportedly created a stir when he discussed a white paper titled “A Trillion Dollar Market by the People, for the People.” In his talk, Moldow spoke about how marketplace lending would change banking in much the same way Amazon has changed retail. He went on to cite Bill Gates’s observation two decades ago that banking is necessary, but bricks-and-mortar banks are not. “Marketplace lending is now poised to demonstrate how accurate that observation was,” Moldow concluded.

That’s probably too exuberant. Whether or not bank branches themselves are necessary, applying for individual peer-to-peer loans will always be more of a hassle than swiping a piece of plastic: inertia is a powerful force. And as his company’s alliance with Union Bank demonstrates, Laplanche is not hell-bent on blowing up the old banking model: he wants to work with established banks. To that end, he has invited onto Lending Club’s board of directors John Mack, the former CEO of Morgan Stanley and a stalwart of the Wall Street status quo. Larry Summers, the former Treasury secretary, is also on the board. “In order to transform the banking system, it’s useful to have people on board who have participated in building it,” Laplanche explained. “We essentially combine that experience and brainpower with more of a Silicon Valley mind-set of using technology to shake things up for the benefit of the consumer.”

One can only hope that it works out that way. For all of Big Finance’s innovation in recent decades, ordinary people haven’t seen much obvious benefit. Perhaps if Lending Club continues to win away some of the credit-card business’s best customers—those with persistent balances but solid credit ratings, for whom it is worth the effort to refinance their personal debt through the marketplace—the big banks might begin to treat borrowers more subtly and equitably. If that were to happen—and I wouldn’t hold my breath—then the cost of credit could be lowered for more people, and Wall Street could take a step toward meeting whatever obligation it feels it may have to repair its tattered relationship with Main Street.

Several years ago, the Defense Advanced Research Projects Agency got wind of a technique called transcranial direct-current stimulation, or tDCS, which promised something extraordinary: a way to increase people’s performance in various capacities, from motor skills (in the case of recovering stroke patients) to language learning, all by stimulating their brains with electrical current. The simplest tDCS rigs are little more than nine-volt batteries hooked up to sponges embedded with metal and taped to a person’s scalp.

It’s only a short logical jump from the preceding applications to other potential uses of tDCS. What if, say, soldiers could be trained faster by hooking their heads up to a battery?

This is the kind of question DARPA was created to ask. So the agency awarded a grant to researchers at the University of New Mexico to test the hypothesis. They took a virtual-reality combat-training environment called Darwars Ambush—basically, a video game the military uses to train soldiers to respond to various situations—and captured still images. Then they Photoshopped in pictures of suspicious characters and partially concealed bombs. Subjects were shown the resulting tableaus, and were asked to decide very quickly whether each scene included signs of danger.

The first round of participants did all this inside an fMRI machine, which identified roughly the parts of their brains that were working hardest as they looked for threats. Then the researchers repeated the exercise with 100 new subjects, this time sticking electrodes over the areas of the brain that had been identified in the fMRI experiment, and ran two milliamps of current (nothing dangerous) to half of the subjects as they examined the images. The remaining subjects—the control group—got only a minuscule amount of current.

Under certain conditions, subjects receiving the full dose of current outperformed the others by a factor of two. And they performed especially well on tests administered an hour after training, indicating that what they’d learned was sticking. Simply put, running positive electrical current to the scalp was making people learn faster.

Dozens of other studies have turned up additional evidence that brain stimulation can improve performance on specific tasks. In some cases, the gains are small—maybe 10 or 20 percent—and in others they are large, as in the DARPA study. Vince Clark, a University of New Mexico psychology professor who was involved with the DARPA work, told me that he’d tried every data-crunching tactic he could think of to explain away the effect of tDCS. “But it’s all there. It’s all real,” Clark said. “I keep trying to get rid of it, and it doesn’t go away.”

Now the intelligence-agency version of DARPA, known as IARPA, has created a program that will look at whether brain stimulation might be combined with exercise, nutrition, and games to even more dramatically enhance human performance. As Raja Parasuraman, a George Mason University psychology professor who is advising an IARPA team, puts it, “The end goal is to improve fluid intelligence—that is, to make people smarter.”

Whether or not IARPA finds a way to make spies smarter, the field of brain stimulation stands to shift our understanding of the neural structures and processes that underpin intelligence. Here, based on conversations with several neuroscientists on the cutting edge of the field, are four guesses about where all this might be headed.

1. Brain stimulation will expand our understanding of the brain-mind connection.
The neural mechanisms of brain stimulation are just beginning to be understood, through work by Michael A. Nitsche and Walter Paulus at the University of Göttingen and by Marom Bikson at the City College of New York. Their findings suggest that adding current to the brain increases the plasticity of neurons, making it easier for them to form new connections. We don't like to imagine our brains being so mechanistic. To fix a heart with simple plumbing techniques or to reset a bone is one thing. But you’re not supposed to literally flip an electrical switch and get better at spotting Waldo or learning Swahili, are you? And if flipping a switch does work, how will that affect our ideas about intelligence and selfhood?

Even if juicing the brain doesn’t magically increase IQ scores, it may temporarily and substantially improve performance on certain constituent tasks of intelligence, like memory retrieval and cognitive control. This in itself will pose significant ethical challenges, some of which echo dilemmas already being raised by “neuroenhancement” drugs like Provigil. Workers doing cognitively demanding tasks—air-traffic controllers, physicists, live-radio hosts—could find themselves in the same position as cyclists, weight lifters, and baseball players. They’ll either be surpassed by those willing to augment their natural abilities, or they’ll have to augment themselves.

2. DIY brain stimulation will be popular—and risky.
As word of research findings has spread, do-it-yourselfers on Reddit and elsewhere have traded tips on building simple rigs and where to place electrodes for particular effects. Researchers like the Wright State neuroscientist Michael Weisend have in turn gone on DIY podcasts to warn them off. There’s so much we don’t know. Is neurostimulation safe over long periods of time? Will we become addicted to it? Some scientists, like Stanford’s Teresa Iuculano and Oxford’s Roi Cohen Kadosh, warn that cognitive enhancement through electrical stimulation may “occur at the expense of other cognitive functions.” For example, when Iuculano and Kadosh applied electrical stimulation to subjects who were learning a code that paired various numbers with symbols, the test group memorized the symbols faster than the control group did. But they were slower when it came time to actually use the symbols to do arithmetic. Maybe thinking will prove to be a zero-sum game: we cannot add to our mental powers without also subtracting from them.

3. Electrical stimulation is just the beginning.
Scientists across the country are becoming interested in how other types of electromagnetic radiation might affect the brain. Some are looking at using alternating current at different frequencies, magnetic energy, ultrasound, even different types of sonic noise. There appear to be many ways of exciting the brain’s circuitry with various energetic technologies, but basic research is only in its infancy. “It’s so early,” Clark told me. “It’s very empirical now—see an effect and play with it.”

As we learn more about our neurons’ wiring, through efforts like President Obama’s BRAIN Initiative—a huge, multiagency attempt to map the brain—we may become better able to deliver energy to exactly the right spots, as opposed to bathing big portions of the brain in current or ultrasound. Early research suggests that such targeting could mean the difference between modest improvements and the startling DARPA results. It’s not hard to imagine a plethora of treatments tailored to specific types of learning, cognition, or mood—a bit of current here to boost working memory, some there to help with linguistic fluency, a dash of ultrasound to improve one’s sense of well-being.

4. The most important application may be clinical treatment.
City College’s Bikson worries that an emphasis on cognitive enhancement could overshadow therapies for the sick, which he sees as the more promising application of this technology. In his view, do-it-yourself tDCS is a sideshow—clinical tDCS could be used to treat people suffering from epilepsy, migraines, stroke damage, and depression. “The science and early medical trials suggest tDCS can have as large an impact as drugs and specifically treat those who have failed to respond to drugs,” he told me. “tDCS researchers go to work every day knowing the long-term goal is to reduce human suffering on a transformative scale.” To that end, many of them would like to see clinical trials test tDCS against leading drug therapies. “Hopefully the National Institutes of Health will do that,” Parasuraman, the George Mason professor, said. “I’d like to see straightforward, side-by-side competition between tDCS and antidepressants. May the best thing win.”

A Brief Chronicle of Cognitive Enhancement
500 b.c.: Ancient Greek scholars wear rosemary in their hair, believing it to boost memory.

1886: John Pemberton formulates the original Coca-Cola, with cocaine and caffeine. It’s advertised as a “brain tonic.”

1955: The FDA licenses methylphenidate—a.k.a. Ritalin—for treating “hyperactivity.”

1997: Julie Aigner-Clark launches Baby Einstein, a line of products claiming to “facilitate the development of the brain in infants.”

1998: Provigil hits the U.S. market.

2005: Lumosity, a San Francisco company devoted to online “brain training,” is founded.

2020: A tDCS company starts an SAT-prep service for high-school students.

In recent weeks, the managers, employees, and customers of a New England chain of supermarkets called "Market Basket" have joined together to oppose the board of directors' decision earlier in the year to oust the chain's popular chief executive, Arthur T. Demoulas.

Their demonstrations and boycotts have emptied most of the chain's seventy stores.

What was so special about Arthur T., as he's known? Mainly, his business model. He kept prices lower than his competitors, paid his employees more, and gave them and his managers more authority.

Late last year he offered customers an additional 4 percent discount, arguing they could use the money more than the shareholders.

In other words, Arthur T. viewed the company as a joint enterprise from which everyone should benefit, not just shareholders. Which is why the board fired him.

It's far from clear who will win this battle. But, interestingly, we're beginning to see the Arthur T. business model pop up all over the place.

Patagonia, a large apparel manufacturer based in Ventura, California, has organized itself as a "B-corporation." That's a for-profit company whose articles of incorporation require it to take into account the interests of workers, the community, and the environment, as well as shareholders.

The performance of B-corporations according to this measure is regularly reviewed and certified by a nonprofit entity called B Lab.

To date, over 500 companies in sixty industries have been certified as B-corporations, including the household products firm "Seventh Generation."

In addition, 27 states have passed laws allowing companies to incorporate as "benefit corporations." This gives directors legal protection to consider the interests of all stakeholders rather than just the shareholders who elected them.

We may be witnessing the beginning of a return to a form of capitalism that was taken for granted in America sixty years ago.

Then, most CEOs assumed they were responsible for all their stakeholders.

"The job of management," proclaimed Frank Abrams, chairman of Standard Oil of New Jersey, in 1951, "is to maintain an equitable and working balance among the claims of the various directly interested groups ... stockholders, employees, customers, and the public at large."

Johnson & Johnson publicly stated that its "first responsibility" was to patients, doctors, and nurses, and not to investors.

What changed? In the 1980s, corporate raiders began mounting unfriendly takeovers of companies that could deliver higher returns to their shareholders - if they abandoned their other stakeholders.

The raiders figured profits would be higher if the companies fought unions, cut workers' pay or fired them, automated as many jobs as possible or moved jobs abroad, shuttered factories, abandoned their communities, and squeezed their customers.

Although the law didn't require companies to maximize shareholder value, shareholders had the legal right to replace directors. The raiders pushed them to vote out directors who wouldn't make these changes and vote in directors who would (or else sell their shares to the raiders, who'd do the dirty work).

Since then, shareholder capitalism has replaced stakeholder capitalism. Corporate raiders have morphed into private equity managers, and unfriendly takeovers are rare. But it's now assumed corporations exist only to maximize shareholder returns.

Are we better off? Some argue shareholder capitalism has proven more efficient. It has moved economic resources to where they're most productive, and thereby enabled the economy to grow faster.

By this view, stakeholder capitalism locked up resources in unproductive ways. CEOs were too complacent. Companies were too fat. They employed workers they didn't need, and paid them too much. They were too tied to their communities.

But maybe, in retrospect, shareholder capitalism wasn't all it was cracked up to be. Look at the flat or declining wages of most Americans, their growing economic insecurity, and the abandoned communities that litter the nation.

Then look at the record corporate profits, CEO pay that's soared into the stratosphere, and Wall Street's financial casino (along with its near meltdown in 2008 that imposed collateral damage on most Americans).

You might conclude we went a bit overboard with shareholder capitalism.

The directors of "Market Basket" are now considering selling the company. Arthur T. has made a bid, but other bidders have offered more.

Reportedly, some prospective bidders think they can squeeze more profits out of the company than Arthur T. did.

But Arthur T. may have known something about how to run a business that made it successful in a larger sense.

Only some of us are corporate shareholders, and shareholders have won big in America over the last three decades.

But we're all stakeholders in the American economy, and many stakeholders have done miserably.

Maybe a bit more stakeholder capitalism is in order.

At age 13, jazz great Thelonious Monk ran into trouble at Harlem's Apollo Theater. The reason: he was too good. The famously precocious pianist was, as they say, a “natural,” and by that point had won the Apollo’s amateur competition so many times that he was barred from re-entering. To be sure, Monk practiced, a lot actually. But two new studies, and the fact that he taught himself to read music as a child before taking a single lesson, suggest that he likely had plenty of help from his genes.

The question of what accounts for the vast variability in people’s aptitudes for skilled and creative pursuits goes way back — are experts born with their skill, or do they acquire it? Victorian polymath Sir Francis Galton — coiner of the phrase "nature and nurture" and founder of the “eugenics” movement through which he hoped to improve the biological make-up of the human species through selective coupling — held the former view, noting that certain talents run in families.

Other thinkers, perhaps more ethically palatable than Galton, have argued that mastering nearly any skill can be achieved through rote repetition — through practice.

A 1993 study by the psychologist K. Anders Ericsson and colleagues helped popularize the idea that we can all practice our way to tuba greatness if we so choose. The authors found that by age 20 elite musicians had practiced for an average of 10,000 hours, concluding that differences in skill are not “due to innate talent.” Author Malcolm Gladwell lent this idea some weight in his 2008 book “Outliers.” Gladwell writes that greatness requires an enormous time investment and cites the “10,000-Hour Rule” as a major key to success in various pursuits from music (The Beatles) to software supremacy (Bill Gates).

However, new research led by Michigan State University psychology professor David Z. Hambrick suggests that, unfortunately for many of us, success isn’t exclusively a product of determination — that despite even the most hermitic practice routine, our genes might still leave greatness out of reach.

Hambrick and his colleague Elliot Tucker-Drob, an assistant professor of psychology at the University of Texas, set out to investigate the genetic influences on musical accomplishment using data from a study of 850 same-sex twin pairs from the 1960s. Participants were originally queried on their musical successes and how often they practiced, both of which Hambrick found to have a genetic component. One quarter of the genetic influence on musical accomplishment appears related to the act of practicing itself. Certain genes and genotypes presumably confer qualities that drive some kids to hole up in their basement and, at the expense of their family’s sanity, perfect that drum fill — traits like musical aptitude, musical enjoyment and motivation, that in turn could draw reinforcement from parents and teachers, leading to even more desire to practice. Hambrick's findings don't reveal what accounts for the remaining majority of genetic influence on musical accomplishment, though he assumes it's innate differences in faculties that would logically contribute to musical ability, such as sound processing and motor coordination.

But it gets more complicated. The new findings suggest that it's the way our genes and environment interact that is most crucial to musical accomplishment. Not only do genetically-influenced qualities contribute to whether people are likely to practice, Hambrick’s data show that the genetic influence on musical success was far larger in those who practiced more. It was previously thought that people might start out with a genetic leg up for a particular activity, but that skill derived through practice could eventually surpass any genetic predilections. “Our results suggest that it’s the other way around,” explains Hambrick, “that genes become more, not less important in differentiating people as they practice…genetic potentials for skilled performance are most fully expressed and fostered by practice."
In other words, people have various genetically determined basic abilities, or talents, that render them better or worse at certain skills, but that can be nurtured through environmental influences. Hence Hambrick is far from down on dedication: “If you want to be a better musician, practice! If you want to be a better golfer, practice!”

A similar study forthcoming in Psychological Science by Miriam A. Mosing of Stockholm's Karolinska Institute leans even more heavily on the role of genes in musicality. Mosing and colleagues looked at the association between music practice and specific musical abilities like rhythm, melody and pitch discrimination in over 10,000 identical Swedish twins. They reported that the propensity to practice was between 40% and 70% heritable and that there was no difference in musical ability between twins with varying amounts of cumulative practice. "Music practice,” they conclude, “may not causally influence musical ability and … genetic variation among individuals affects both ability and inclination to practice."

Though both new studies focused on musicality, the findings can in theory be extrapolated to other skilled and creative activities. Similar data exist suggesting a genetic component to chess mastery, and Hambrick is currently analyzing the same twin data set to assess the genetics of scientific accomplishment. Not to get overly reductionist, but it could be assumed that nearly all of our talents and cognitive characteristics are at least partly influenced by our respective strings of nucleotides. Complex pursuits, whether creative or technical, involve numerous communicating regions from all over the brain (in contrast to the overly simplistic and now debunked "left brain/right brain" assignments for analytical vs. creative types). These structures and the brain’s general blueprint are shaped by our genetic code throughout development; also, genes encode for the proteins that run our bodies and brains, while plenty of data link specific genetic profiles with varying cognitive abilities.

Like all studies, Hambrick’s has its limitations. The assessments of musical practice and accomplishment were “fairly coarse” and the study subjects were primarily high-achieving students, though not specifically selected for elite musical ability. And while beyond the scope of both Hambrick’s and Mosing’s investigations, their work evokes the question of what it is to be “good” at something — how to reconcile the murky, often contentious divide between technical proficiency and creativity or artistic worth. Virtuosity can come across cold while three sloppy guitar chords can register in deep, mind-altering, meaningful ways. “No one would argue that the Sex Pistols or The Ramones — or even The Beatles or The Rolling Stones — were the most technically proficient musicians,” says Hambrick, “but they created something that, for whatever reason, resonated with people. I think it would be interesting to measure both creativity and expertise in the same sample. My guess is they are both influenced by genes, but by different genes.”

It’s potentially unsettling that our abilities are so influenced by a genetic crapshoot. Some people will always be maddeningly proficient at shredding through guitar solos, or blowing tubas, or winning amateur competitions at the Apollo Theater. But Hambrick sees his findings as constructive. If practicing our way to being just pretty good at something isn’t enough, we can better seek our strengths. More importantly, we can avoid setting up unrealistic expectations for children: “I think it’s important to let kids try a lot of different things…and find out what they’re good at, which is probably also what they’ll enjoy. But the idea that anyone can become an expert at most anything isn't scientifically defensible, and pretending otherwise is harmful to society and individuals.”

Raymond Burse hasn't held a minimum-wage job since his high school and college years, when he worked side jobs on golf courses and paving crews. Yet this summer, the interim president at Kentucky State University made a large gesture to his school's lowest-paid employees. Burse announced that he would take a 25 percent salary cut to boost their wages.

The 24 school employees making less than $10.25 an hour, who mostly serve as custodial staff, groundskeepers and lower-end clerical workers, will see their pay rise to that new baseline. Some had been making as little as $7.25, the current federal minimum. Burse, who assumed the role of interim president in June, says he asked the school's chief financial officer how much such an increase would cost. The amount: $90,125.

"I figured it was easier for me to forgo that amount, rather than adding an additional burden on the institution," Burse says. "I had been thinking about it almost since the day they started talking to me about being interim president."

Burse announced his decision to take the funds out of his salary in a board meeting at the end of July, and the school ratified his employment contract on the spot — decreasing it from $349,869 to $259,744. He has pledged to take further salary cuts any time new minimum-wage employees are hired on his watch, to bring their hourly rate to $10.25.
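The article's numbers line up exactly: the contract reduction equals the CFO's estimated cost of the raises. A quick sketch using only the figures quoted above:

```python
# Figures from the article: Burse's contract was cut from $349,869 to $259,744,
# and the CFO priced the raises for the 24 low-wage employees at $90,125.
old_salary = 349_869
new_salary = 259_744
raise_cost = 90_125

# The salary cut covers the raise pool exactly.
assert old_salary - new_salary == raise_cost
print(old_salary - new_salary)  # 90125
```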

This isn't Burse's first time leading KSU. He served as president from 1982 to 1989, before joining a law firm in Louisville, Ky., and then becoming a vice president and general counsel at GE. He will hold the school's interim leadership role for at least a year, or longer if they need more time to find a permanent replacement.

Burse describes himself as someone who believes in raising wages, and who also has high expectations and demands for his staff. "I thought that if I'm going to ask them to really be committed and give this institution their all, I should be doing something in return," Burse says. "I thought it was important."

Earlier this year, the Kentucky House passed a bill to increase the minimum wage to $10.10 by July 2016, but the bill failed to pass the state's Senate. There have been a few recent instances of colleges boosting their minimum wage on campus, such as at Hampton University in Virginia, where the president made a personal donation to increase workers' salaries. Yet at most organizations that have made news for their decisions to increase pay, like IKEA and Gap, the funding isn't tied to deductions in leaders' salaries.

"I didn’t have any examples of it having been done out there and I didn’t do it to be an example to anyone else," Burse says. "I did it to do right by the employees here."

"Practically every food you buy in a store for consumption by humans is genetically modified food. There are no wild, seedless watermelons. There's no wild cows." — Neil deGrasse Tyson

Cosmos star Neil deGrasse Tyson is known for defending climate science and the science of evolution. And now, in a video recently posted on YouTube (the actual date when it was recorded is unclear), he takes a strong stand on another hot-button scientific topic: Genetically modified foods.

In the video, Tyson can be seen answering a question posed in French about "des plantes transgenetiques"—responding with one of his characteristic, slowly-building rants.

"Practically every food you buy in a store for consumption by humans is genetically modified food," asserts Tyson. "There are no wild, seedless watermelons. There's no wild cows...You list all the fruit, and all the vegetables, and ask yourself, is there a wild counterpart to this? If there is, it's not as large, it's not as sweet, it's not as juicy, and it has way more seeds in it. We have systematically genetically modified all the foods, the vegetables and animals that we have eaten ever since we cultivated them. It's called artificial selection." You can watch the full video above.

In fairness, critics of GM foods make a variety of arguments that go beyond the simple question of whether the foods we eat were modified prior to the onset of modern biotechnology. They also draw a distinction between modifying plants and animals through traditional breeding and genetic modification that requires the use of biotechnology, and involves techniques such as inserting genes from different species.

General Discussion / If minimum wages, why not maximum wages?
« on: July 29, 2014, 09:09:02 AM »
I was in a gathering of academics the other day, and we were discussing minimum wages. The debate moved on to increasing inequality, and the difficulty of doing anything about it. I said why not have a maximum wage? To say that the idea was greeted with incredulity would be an understatement. So you want to bring back price controls was one response. How could you possibly decide on what a maximum wage should be was another.

So why the asymmetry? Why is the idea of setting a maximum wage considered outlandish among economists?

The problem is clear enough. All the evidence, in the US and UK, points to the income of the top 1% rising much faster than the average. Although the share of income going to the top 1% in the UK fell sharply in 2010, the more up to date evidence from the US suggests this may be a temporary blip caused by the recession. The latest report from the High Pay Centre in the UK says:

“Typical annual pay for a FTSE 100 CEO has risen from around £100-£200,000 in the early 1980s to just over £1 million at the turn of the 21st century to £4.3 million in 2012. This represented a leap from around 20 times the pay of the average UK worker in the 1980s to 60 times in 1998, to 160 times in 2012 (the most recent year for which full figures are available).”
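A back-of-the-envelope check on the High Pay Centre's multiples (a sketch using only the quoted figures; the implied average-worker pay is a derived estimate, not a number from the report itself):

```python
# Quoted from the High Pay Centre: typical FTSE 100 CEO pay of £4.3 million
# in 2012, at 160 times the pay of the average UK worker.
ceo_pay_2012 = 4_300_000   # pounds
multiple_2012 = 160        # CEO pay as a multiple of average worker pay

# Derived estimate only: the average worker pay implied by those two figures.
implied_avg_pay = ceo_pay_2012 / multiple_2012
print(round(implied_avg_pay))  # 26875 -- roughly £27k, a plausible UK average wage
```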

I find the attempts of some economists and journalists to divert attention away from this problem very revealing. The most common tactic is to talk about some other measure of inequality, whereas what is really extraordinary and what worries many people is the rise in incomes at the very top. The suggestion that we should not worry about national inequality because global inequality has fallen is even more bizarre.

What lies behind this huge increase in inequality at the top? The problem with the argument that it just represents higher productivity of CEOs and the like is that this increase in inequality is much more noticeable in the UK and US than in other countries, yet there is no evidence that CEOs in UK and US based firms have been substantially outperforming their overseas rivals. I discussed in this post a paper by Piketty, Saez and Stantcheva which set out a bargaining model, where the CEO can put more or less effort into exploiting their monopoly power within a company. According to this model, CEOs in the UK and US have since 1980 been putting in more bargaining effort than their overseas counterparts. Why? According to Piketty et al, one answer may be that top tax rates fell in the 1980s in both countries, making the returns to effort much greater.

If you believe this particular story, then one solution is to put top tax rates back up again. Even if you do not buy this story, the suspicion must be that this increase in inequality represents some form of market failure. Even David Cameron agrees. The solution the UK government has tried is to give more power to the shareholders of the firm. The High Pay Centre notes that: “Thus far, shareholders have not used their new powers to vote down executive pay proposals at a single FTSE 100 company.”, although, as the FT reports, shareholder ‘revolts’ are becoming more common. My colleague Brian Bell and John Van Reenen do note in a recent study “that firms with a large institutional investor base provide a symmetric pay-performance schedule while those with weak institutional ownership protect pay on the downside.” However they also note that “a specific group of workers that account for the majority of the gains at the top over the last decade [are] financial sector workers .. [and] .. the financial crisis and Great Recession have left bankers largely unaffected.”

So increasing shareholder power may only have a small effect on the problem. So why not consider a maximum wage? One possibility is to cap top pay as some multiple of the lowest paid, as a recent Swiss referendum proposed. That referendum was quite draconian, suggesting a multiple of 12, yet it received a large measure of popular support (35% in favour, 65% against). The Swiss did vote to ban ‘golden hellos and goodbyes’. One neat idea is to link the maximum wage to the minimum wage, which would give CEOs an incentive to argue for higher minimum wages! Note that these proposals would have no disincentive effect on the self-employed entrepreneur.
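One way to see how a cap linked to the lowest wage would bite: under the 12× multiple from the Swiss proposal, the ceiling rises in lockstep with the floor. (A hypothetical sketch; the pay figures below are invented for illustration.)

```python
def max_pay(lowest_annual_pay: float, multiple: int = 12) -> float:
    """Cap top pay at a fixed multiple of the firm's lowest-paid worker,
    as in the Swiss '1:12' referendum proposal."""
    return lowest_annual_pay * multiple

# Hypothetical firm whose lowest-paid worker earns 20,000 a year:
print(max_pay(20_000))  # 240000
# Raising the floor by 1,000 lifts the ceiling by 12,000 -- the incentive
# alignment that makes executives care about the minimum wage.
print(max_pay(21_000))  # 252000
```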

If economists have examined these various possibilities, I have missed it. One possible reason why many economists seem to baulk at this idea is that it reminds them too much of the ‘bad old days’ of incomes policies and attempts by governments to fix ‘fair wages’. But this is an overreaction, as a maximum wage would just be the counterpart to the minimum wage. I would be interested in any other thoughts about why the idea of a maximum wage seems not to be part of economists’ Overton window.

from economist Simon Wren-Lewis's blog

Germany is the global leader in energy efficiency, and the U.S., with its ingrained car culture, is among the least energy efficient of the world’s largest economies.

That’s the conclusion of a new report released by the American Council for an Energy-Efficient Economy, which ranks the world’s 16 largest economies based on 31 different measurements of efficiency, including national energy savings targets, fuel economy standards for vehicles, efficiency standards for appliances, average vehicle mpg, and energy consumed per square foot of floor space in residential buildings, among other metrics.

The ACEEE report ranked the U.S. 13th overall, with Germany, Italy, the European Union as a whole, France and China making up the top five most energy efficient economies in the world.

Using energy more efficiently is a critical step countries can take to reduce their fossil fuels consumption and its related climate change-driving carbon dioxide and methane emissions. The U.S. Environmental Protection Agency used state energy efficiency standards to help set CO2 emissions reductions goals for each state in the agency’s proposed Clean Power Plan, announced in June.

The U.S. was the 9th most energy-efficient economy in the ACEEE’s 2012 ranking, which criticized the country for focusing more on road construction than expanding public transportation.

Since then, the U.S. has made very little progress toward using energy more efficiently, the 2014 report says. This year, the U.S. took a major hit for its lack of a national energy savings plan or national greenhouse gas reduction plan, and its ongoing resistance to public transit.

Americans drive more than 9,300 miles per year, more than citizens in any other major world economy, according to the report. Australians drive the second-most, at 6,368 miles per year. At the other end of the list, Indians drive the least, at just 85 miles per year per capita, followed by the Chinese at 513 miles per year.

Americans also ranked last for the percentage of their travel accomplished using public transit — 10 percent, tying with Canada. Residents of China use transit 72 percent of the time, followed by Indians, who use transit 65 percent of the time.

The U.S. scored well for its energy efficiency tax credit and loan programs. And, it scored well for efficient ovens and refrigerators.

“We’re a leader in appliance and equipment standards,” said the report’s lead author, ACEEE national policy research analyst Rachel Young. The report called EnergyGuide appliance labels and Energy Star labels “best practices” for voluntary appliance and equipment standards.

The ACEEE gave the U.S. credit for energy efficiency standards included in residential and commercial building codes in many states, but criticized the country for not having adequate national building standards in place.

Young said the U.S. may improve in the energy efficiency rankings if the Clean Power Plan is finalized because a state may be able to increase the efficiency of its power plants and buildings as ways to reduce carbon dioxide emissions from existing power plants.

“The rule could spur greater investment in energy efficiency throughout the country,” she said.

By contrast, Germany scored well in nearly every category in the survey, including spending on energy efficiency measures, aggressive building codes, and the country’s tax credit and loan programs.

Germany has set a national target of a 20 percent reduction in primary energy consumption below 2008 levels by 2020 and 50 percent by 2050.

The U.S. is one of only two countries in the survey with no national energy savings plan or greenhouse gas emissions reduction plan.

General Discussion / AZ takes nearly 2 hours to execute prisoner...
« on: July 25, 2014, 01:10:39 PM »
In January the state of Ohio executed the convicted rapist and murderer Dennis McGuire. As in the other 31 U.S. states with the death penalty, Ohio used an intravenously injected drug cocktail to end the inmate's life. Yet Ohio had a problem. The state had run out of its stockpile of sodium thiopental, a once common general anesthetic and one of the key drugs in the executioner's lethal brew. Three years ago the only U.S. supplier of sodium thiopental stopped manufacturing the drug. A few labs in the European Union still make it, but the E.U. prohibits the export of any drugs if they are to be used in an execution.

Ohio's stockpile of pentobarbital, its backup drug, expired in 2009, and so the state turned to an experimental cocktail containing the sedative midazolam and the painkiller hydromorphone. But the executioner was flying blind. Execution drugs are not tested before use, and this experiment went badly. The priest who gave McGuire his last rites reported that McGuire struggled and gasped for air for 11 minutes, his strained breaths fading into small puffs that made him appear “like a fish lying along the shore puffing for that one gasp of air.” He was pronounced dead 26 minutes after the injection.

There is a simple reason why the drug cocktail was not tested before it was used: executions are not medical procedures. Indeed, the idea of testing how to most effectively kill a healthy person runs contrary to the spirit and practice of medicine. Doctors and nurses are taught to first “do no harm”; physicians are banned by professional ethics codes from participating in executions. Scientific protocols for executions cannot be established, because killing animal subjects for no reason other than to see what kills them best would clearly be unethical. Although lethal injections appear to be medical procedures, the similarities are just so much theater.

Yet even if executions are not medical, they can affect medicine. Supplies of propofol, a widely used anesthetic, came close to being choked off as a result of Missouri's plan to use the drug for executions. The state corrections department placed an order for propofol from the U.S. distributor of a German drug manufacturer. The distributor sent 20 vials of the drug in violation of its agreement with the manufacturer, a mistake that the distributor quickly caught. As the company tried in vain to get the state to return the drug, the manufacturer suspended new orders. The manufacturer feared that if the drug was used for lethal injection, E.U. regulators would ban all exports of propofol to the U.S. “Please, Please, Please HELP,” wrote a vice president at the distributor to the director of the Missouri corrections department. “This system failure—a mistake—1 carton of 20 vials—is going to affect thousands of Americans.”

This was a vast underestimate. Propofol is the most popular anesthetic in the U.S. It is used in some 50 million cases a year—everything from colonoscopies to cesareans to open-heart surgeries—and nearly 90 percent of the propofol used in the U.S. comes from the E.U. After 11 months, Missouri relented and agreed to return the drug.

Such incidents illustrate how the death penalty can harm ordinary citizens. Supporters of the death penalty counter that its potential to discourage violent crime confers a net social good. Yet no sound science supports that position. In 2012 the National Academies' research council concluded that research into any deterrent effect that the death penalty might provide is inherently flawed. Valid studies would need to compare homicide rates in the same states at the same time, but both with and without capital punishment—an impossible experiment. And it is clear that the penal system does not always get it right when meting out justice. Since 1973 the U.S. has released 144 prisoners from death row because they were found to be innocent of their crimes.

Concerns about drug shortages for executions have led some states to propose reinstituting the electric chair or the gas chamber—methods previously dismissed by the courts as cruel and unusual. In one sense, these desperate states are on to something. Strip off its clinical facade, and death by intravenous injection is no less barbarous.

Spamalot / hey AD
« on: July 24, 2014, 12:04:47 AM »
pyopencl & mpmath were waaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaay easier to install in ubuntu than windows

suuuuuuuck it bro

although i still have no idea what made pyopencl finally start working. the 'easier' install still took 7 hours, woo

General Discussion / 10 Things Millennials Won’t Spend Money On
« on: July 23, 2014, 12:13:04 PM »
By 2017, millennials will have more buying power than any other generation. But so far, they're not spending like their parents did.

Millennials are often maligned for their lack of financial literacy, but there is one money skill the younger generation has in spades: saving. After growing up during the Great Recession, millennials want to keep every cent they can. (If you don’t believe us, just check out this Reddit Frugal thread inspired by our recent post on millennial retirement super-saving.)

This generation may be way ahead of where their parents were at the same age when it comes to preparing for retirement, but the frugality doesn’t end there. Kids these days also aren’t making the same buying decisions our parents made. Here are 10 things that a disproportionate number of today’s young adults won’t shell out for.

1. Pay TV
The average American still consumes 71% of his or her media on television, but for people age 14-24, it’s only 46%—with the lion’s share being consumed on phone, tablet, or PC. Many young people aren’t getting a TV at all. Nielsen found that most “Zero-TV” households tended toward the younger set, with adults under 35 making up 44% of all television teetotalers.

Millennials aren’t the only ones tuning out the tube. In 2013, Nielsen reported aggregate TV watching time shrank for the first time in four years.

2. Investments
By all accounts, young people should be investing in equities. Those just entering the work force have plenty of time before retirement to ride out market blips, and experts recommend younger investors place 75% to 90% of their portfolio in stocks or stock funds.

Unfortunately, after growing up in the Great Recession, millennials would rather put their money in a sock drawer than on Wall Street. When Wells Fargo surveyed roughly 1,500 adults between 22 and 32 years of age, 52% stated they were “not very” or “not at all” confident in the stock market as a place to invest for retirement.

Of those surveyed, only 32% said they had the majority of their savings in stocks or mutual funds. (To be fair, an equal number admitted to having no clue what they were invested in, so hopefully their trust fund advisors are making good decisions.)

3. Mass-Market Beer
Bud. Coors. Miller. When parents want a drink, they reach for the classics. Maybe a Heineken for a little extra adventure. Millennials? Not so much. When Generation Now (thank god that moniker didn’t catch on) wants to get boozy, the data says we prefer indie brews.

According to one recent study, 43% of millennials say craft beer tastes better than mainstream beers, while only 32% of baby boomers said the same. And 50% of millennials have consumed craft brew, versus 35% of the overall population. Even Pete Coors, CEO of guess-which-brand, blames pesky kids for his beer’s declining sales.

4. Cars
Back when the Beach Boys wrote Little Deuce Coupe in 1963, there was a whole genre called “Car Songs.” Nowadays you’d be hard pressed to find someone under 35 who knows what a “competition clutch with the four on the floor” even means.

The sad fact is that American car culture is dying a slow death. Yahoo Finance reports the percentage of 16-to-24-year-olds with a driver’s license has plummeted since 1997 and is now below 70% for the first time since Little Deuce Coupe’s release. According to the Atlantic, “In 2010, adults between the ages of 21 and 34 bought just 27 percent of all new vehicles sold in America, down from the peak of 38 percent in 1985.”

5. Homes
It’s not that millennials don’t want to own homes—nine in ten young people do—it’s that they can’t afford them. Harvard’s Joint Center for Housing Studies found that the homeownership rate among adults younger than 35 fell by 12 percent between 2006 and 2011, and that 2 million more young adults were living with Mom and Dad.

It’s going to be a while before young people start purchasing homes again. The economic downturn set this generation’s finances back years, and reforms like the Dodd-Frank Act have made it even more difficult for the newly employed to get credit. Now that unemployment is decreasing, working millennials are still renting before they buy.

6. Bulk Warehouse Club Goods
This one initially sounds weird, but remember: millennials don’t own cars or homes. So a Costco membership doesn’t make much sense. It’s not easy to bring home a year’s supply of Nesquik and paper towels without a ride, and even if you take a bus, there’s no room to stash hoards of kitchen supplies in a studio apartment.

Responding to tepid millennial demand, the big box giant is trying to win over youngsters by partnering with Google to deliver certain items right to your home. However, even Costco doesn’t seem all that excited about its new strategy.

“Don’t expect us to go to everybody’s doorstep,” Richard Galanti, Costco’s chief financial officer, told Bloomberg Businessweek. “Delivering small quantities of stuff to homes is not free. Ultimately, somebody’s got to pay for it.”

7. Weddings
Getting hitched early in life used to be something of a rite of passage into adulthood. A full 65% of the Silent Generation married at age 18 to 32. Since then, though, Americans have been waiting longer and longer to tie the knot. Pew Research found 48% of boomers were married while in that age range, compared to 35% of Gen X. Millennials are bringing up the rear at just 26%.

Just like with homes, it’s not that today’s youth just hates wedding dresses—far from it. Sixty-nine percent of millennials told Pew they would like to marry, but many are waiting until they’re more financially stable before doing so.

8. Children
It’s hard to spend money on children if you don’t have any.

After weddings, you probably saw this one coming, but millennials’ procreation abstention isn’t only because they’re not married. Many just aren’t planning on having kids. In a 2012 study, fewer than half of millennials (42%) said they planned to have children. That’s down from 78% 20 years ago.

Stop me if you heard this one: it’s not that millennials don’t want children (or homes, or weddings, or ponies), it’s that this whole recession thing has really scared them off any big financial or life commitments. Most young people in the above study hoped to have kids one day, but didn’t think their economic stars would align to make it happen.

9. Health insurance
According to the Kaiser Family Foundation, adults ages 18 to 34 made up 40% of the uninsured population in the pre-Obamacare world. Why don’t young people get health coverage? Because they’re probably not going to get sick. This demographic is so healthy that those in the health insurance game refer to them as “invincibles.”

Since the Affordable Care Act, more millennials are gradually buying insurance. Twenty-eight percent of Obamacare’s 8 million new enrollees were 18-34 year-olds. That’s well short of the 40% the Congressional Budget Office wanted in order to subsidize older Americans’ plans, but better than the paltry number of millennials who signed up before Zach Galifianakis got involved.

10. Anything you tell them to buy
When buying a product, older Americans tend to trust the advice of people they know. Sixty-six percent of boomers said the recommendations of friends and family members influence their purchasing decisions more than a stranger’s online review.

Most millennials, on the other hand, don’t want their parents’ or peers’ help. Fifty-one percent of young adults say they prefer product reviews from people they don’t know.

A reporter asked me for a quote regarding the importance of statistics. But, after thinking about it for a moment, I decided that statistics isn’t so important at all. A world without statistics wouldn’t be much different from the world we have now.

What would be missing, in a world without statistics?

Science would be pretty much ok. Newton didn’t need statistics for his theories of gravity, motion, and light, nor did Einstein need statistics for the theory of relativity. Thermodynamics and quantum mechanics are fundamentally statistical, but lots of progress could’ve been made in these areas without statistics. The second law of thermodynamics is an observable fact, ditto the two-slit experiment and various experimental results revealing the nature of the atom. The A-bomb and, almost certainly, the H-bomb, maybe these would never have been invented without statistics, but on balance I think most people would feel that the world would be a better place without these particular scientific developments. Without statistics, we could forget about discovering the Higgs boson, etc., but that doesn’t seem like such a loss for humanity.

At a more applied level, statistics helped to win World War 2, most notably in cracking the Enigma code but also in various operations-research efforts. And it’s my impression that “our” statistics were better than “their” statistics. So that’s something.

Where would civilian technology be without statistics? I’m not sure. I don’t have a sense of how necessary statistics was for quantum theory. In a world without statistics, would the study of quantum physics have progressed far enough so that transistors were invented? This one, I don’t know. And without statistics we wouldn’t have modern quality control, so maybe we’d still be driving around in AMC Gremlins and the like. Scary thought, but not a huge deal, I’d think. No transistors, though, that would make a difference in my life. No transistors, no blogging! And I guess we could also forget about various unequivocally beneficial technological innovations such as modern pacemakers, hearing aids, cochlear implants, and Clippy.

Modern biomedicine uses lots and lots of statistics, but would medicine be so much worse without it? I don’t think so, at least not yet. You don’t need statistics to see that penicillin works, nor to see that mosquitos transmit disease and that nets keep the mosquitos out. Without statistics, I assume that various mistakes would get into the system, various ineffective treatments that people think are effective, etc. But on balance I doubt these would be huge mistakes, and the big ones would eventually get caught, with careful record-keeping even without statistical inference and adjustments. Without statistics, biologists would not be able to sequence the genome, and I assume they’d be much slower at developing tools such as tests that allow you to check for chromosomal abnormalities in amnio. I doubt all these things add up to much yet, but I guess there’s promise for the future. Statistics is also necessary for a lot of drug development—right now my colleagues and I are working on a pharmacodynamic model of dosing—but, again, without any of this, it’s not clear the world would be so much different.

The Poverty Lab team use statistics and randomized experiments to see what works to help the lives of poor people around the world. That’s cool but I’m not ultimately convinced this all makes a difference in the big picture. Or, to put it another way, I suspect that the statistical validation serves mostly as a way to build political consensus for economic policies that will be effective in sharing the wealth. By demonstrating in a scientific way that Treatment X is effective, this supports the idea that there is a way to help the sort of people who live in what Nicholas Wade would describe as “tribal” societies. So, sure, fine, but in this case the benefits of the statistical methods are somewhat indirect.

Without statistics, we wouldn’t have most of the papers in “Psychological Science,” but I could handle that. Piaget didn’t need any statistics, and I think the modern successors of Piaget could’ve done pretty much what they’ve done without statistics, just by careful observation of major transitions.

Careful observation and precise measurement can be done, with or without statistical methods. Indeed, researchers often use statistics as a substitute for careful observation and precise measurement. That is a horrible thing to do, and if you have a clear understanding of statistical theory, you can see why. But statistics is hard, and lots of researchers (and journal editors, news reporters, etc.) don’t have that understanding. When statistics is used as a substitute for, rather than an adjunct to, scientific measurement, we get problems.

OK, here’s another one: no statistics, no psychometrics. That’s too bad but one could make the argument that, on the whole, psychometrics has done more harm than good (value-added assessment, anyone?). Don’t get me wrong—I like psychometrics, and a strong argument could be made that it’s done more good than harm—but my point here is that the net benefit is not clear; a case would have to be made.

Polling. Can’t do it well without statistics. But, would a world without polling be so horrible? Much as I hate to admit it, I don’t think so. Don’t get me wrong, I think polling is on balance a good thing—I agree with George Gallup that measurement of public opinion is an important part of the modern democratic process—but I wouldn’t want to hang too much of the benefits of statistics on this one use, given that I expect lots of people would argue that opinion polls do more harm than good in politics.

The alternative to good statistics is . . .

Perhaps the most important benefits of statistics come not from the direct use of statistical methods in science and technology, but rather in helping us learn about the world. Statisticians from Francis Galton and Ronald Fisher onward have used statistics to give us a much deeper understanding of human and biological variation. I can’t see how any non-statistical, mechanistic model of the world could reproduce that level of understanding. Forget about p-values, Bayesian inference, and the rest: here I’m simply talking about the nature of correlation and variation.

For a more humble example, consider Bill James. Baseball is a silly example, sure, but the point is to see how much understanding has been gained in this area through statistical measurement and comparison. As James so memorably wrote, the alternative to good statistics is not “no statistics,” it’s “bad statistics.” James wrote about baseball commentators who would make asinine arguments which they would back up by picking out numbers without context. In politics, the equivalent might be a proudly humanistic columnist such as David Brooks supporting his views by just making up numbers or featuring various “too good to be true” statistics and not checking them.

So here’s one benefit to the formal study of statistics: Without any statistics, there still would be numbers, along with people trying to interpret them.

Could governments and large businesses be managed well without statistics? I’m not sure. Given that half the U.S. Congress seems willing to shut down the government from time to time, it’s not clear that any agreement on the numbers will have much to do with political action. Similarly, all the statistics in the world don’t seem to be stopping the euro-zone from drifting. But maybe things would be much worse without a common core of statistical agreement. I don’t know; unfortunately this seems like the sort of causal question that is too difficult for statistics to answer.

Finally, one way that statistics is potentially having a huge impact in our lives is through the measurement of global warming and all the rest. But I’m guessing that a lot of this could be done with a pre-statistical understanding. The basic physics is already there, as would be the careful measurements. Statistical modeling is certainly relevant to the study of climate change—if you’re trying to reconstruct historical climate conditions from tree-ring data, it’s tough enough to do it with statistical modeling, I can’t imagine how it could be done otherwise—but the basic patterns of carbon dioxide, temperature, melting ice, etc., are apparent in any case. And, even with statistics, much uncertainty remains.


When I started writing this post, I was thinking that statistics doesn’t really matter, but I think that’s because I was focusing on some of the more highly publicized but less beneficial applications of statistics: the use of statistical experimentation and inference to get p-values for tabloid-bait scientific papers, or for Google, Amazon, etc., to perfect their techniques for squeezing money out of their customers or, even at best, to test a medical treatment that increases survival rate for some rare disease by 2 percentage points. But statistics is central to how we think about the world. I still think that statistics is much less central to our lives than, say, chemistry. But it ain’t nothing.

General Discussion / Basilisk Gedankenexperiment Freaks out Futurists
« on: July 21, 2014, 08:34:34 PM »
Slender Man. Smile Dog. Goatse. These are some of the urban legends spawned by the Internet. Yet none is as all-powerful and threatening as Roko’s Basilisk. For Roko’s Basilisk is an evil, godlike form of artificial intelligence, so dangerous that if you see it, or even think about it too hard, you will spend the rest of eternity screaming in its torture chamber. It's like the videotape in The Ring. Even death is no escape, for if you die, Roko’s Basilisk will resurrect you and begin the torture again.

Are you sure you want to keep reading? Because the worst part is that Roko’s Basilisk already exists. Or at least, it already will have existed—which is just as bad.

Roko’s Basilisk exists at the horizon where philosophical thought experiment blurs into urban legend. The Basilisk made its first appearance on the discussion board LessWrong, a gathering point for highly analytical sorts interested in optimizing their thinking, their lives, and the world through mathematics and rationality. LessWrong’s founder, Eliezer Yudkowsky, is a significant figure in techno-futurism; his research institute, the Machine Intelligence Research Institute, which funds and promotes research around the advancement of artificial intelligence, has been boosted and funded by high-profile techies like Peter Thiel and Ray Kurzweil, and Yudkowsky is a prominent contributor to academic discussions of technological ethics and decision theory. What you are about to read may sound strange and even crazy, but some very influential and wealthy scientists and techies believe it.

One day, LessWrong user Roko postulated a thought experiment: What if, in the future, a somewhat malevolent AI were to come about and punish those who did not do its bidding? What if there were a way (and I will explain how) for this AI to punish people today who are not helping it come into existence later? In that case, weren’t the readers of LessWrong right then being given the choice of either helping that evil AI come into existence or being condemned to suffer?

You may be a bit confused, but the founder of LessWrong, Eliezer Yudkowsky, was not. He reacted with horror:

Listen to me very closely, you idiot.


You have to be really clever to come up with a genuinely dangerous thought. I am disheartened that people can be clever enough to do that and not clever enough to do the obvious thing and KEEP THEIR IDIOT MOUTHS SHUT about it, because it is much more important to sound intelligent when talking to your friends.

This post was STUPID.

Yudkowsky said that Roko had already given nightmares to several LessWrong users and had brought them to the point of breakdown. Yudkowsky ended up deleting the thread completely, thus assuring that Roko’s Basilisk would become the stuff of legend. It was a thought experiment so dangerous that merely thinking about it was hazardous not only to your mental health, but to your very fate.

Some background is in order. The LessWrong community is concerned with the future of humanity, and in particular with the singularity—the hypothesized future point at which computing power becomes so great that superhuman artificial intelligence becomes possible, as does the capability to simulate human minds, upload minds to computers, and more or less allow a computer to simulate life itself. The term was coined in 1958 in a conversation between mathematical geniuses Stanislaw Ulam and John von Neumann, where von Neumann said, “The ever accelerating progress of technology ... gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue.” Futurists like science-fiction writer Vernor Vinge and engineer/author Kurzweil popularized the term, and as with many interested in the singularity, they believe that exponential increases in computing power will cause the singularity to happen very soon—within the next 50 years or so. Kurzweil is chugging 150 vitamins a day to stay alive until the singularity, while Yudkowsky and Peter Thiel have enthused about cryonics, the perennial favorite of rich dudes who want to live forever. “If you don't sign up your kids for cryonics then you are a lousy parent,” Yudkowsky writes.

If you believe the singularity is coming and that very powerful AIs are in our future, one obvious question is whether those AIs will be benevolent or malicious. Yudkowsky’s foundation, the Machine Intelligence Research Institute, has the explicit goal of steering the future toward “friendly AI.” For him, and for many LessWrong posters, this issue is of paramount importance, easily trumping the environment and politics. To them, the singularity brings about the machine equivalent of God itself.

Yet this doesn’t explain why Roko’s Basilisk is so horrifying. That requires looking at a critical article of faith in the LessWrong ethos: timeless decision theory. TDT is a guideline for rational action based on game theory, Bayesian probability, and decision theory, with a smattering of parallel universes and quantum mechanics on the side. TDT has its roots in the classic thought experiment of decision theory called Newcomb’s paradox, in which a superintelligent alien presents two boxes to you:

The alien gives you the choice of either taking both boxes, or only taking Box B. If you take both boxes, you’re guaranteed at least $1,000. If you just take Box B, you aren’t guaranteed anything. But the alien has another twist: Its supercomputer, which knows just about everything, made a prediction a week ago as to whether you would take both boxes or just Box B. If the supercomputer predicted you’d take both boxes, then the alien left the second box empty. If the supercomputer predicted you’d just take Box B, then the alien put the $1 million in Box B.

So, what are you going to do? Remember, the supercomputer has always been right in the past.

This problem has baffled no end of decision theorists. The alien can’t change what’s already in the boxes, so whatever you do, you’re guaranteed to end up with more money by taking both boxes than by taking just Box B, regardless of the prediction. Of course, if you think that way and the computer predicted you’d think that way, then Box B will be empty and you’ll only get $1,000. If the computer is so awesome at its predictions, you ought to take Box B only and get the cool million, right? But what if the computer was wrong this time? And regardless, whatever the computer said then can’t possibly change what’s happening now, right? So prediction be damned, take both boxes! But then …

The maddening conflict between free will and godlike prediction has not led to any resolution of Newcomb’s paradox, and people will call themselves “one-boxers” or “two-boxers” depending on where they side. (My wife once declared herself a one-boxer, saying, “I trust the computer.”)

TDT has some very definite advice on Newcomb’s paradox: Take Box B. But TDT goes a bit further. Even if the alien jeers at you, saying, “The computer said you’d take both boxes, so I left Box B empty! Nyah nyah!” and then opens Box B and shows you that it’s empty, you should still only take Box B and get bupkis. (I’ve adopted this example from Gary Drescher’s Good and Real, which uses a variant on TDT to try to show that Kantian ethics is true.) The rationale for this eludes easy summary, but the simplest argument is that you might be in the computer’s simulation. In order to make its prediction, the computer would have to simulate the universe itself. That includes simulating you. So you, right this moment, might be in the computer’s simulation, and what you do will impact what happens in reality (or other realities). So take Box B and the real you will get a cool million.

What does all this have to do with Roko’s Basilisk? Well, Roko’s Basilisk also has two boxes to offer you. Perhaps you, right now, are in a simulation being run by Roko’s Basilisk. Then perhaps Roko’s Basilisk is implicitly offering you a somewhat modified version of Newcomb’s paradox, like this:

Roko’s Basilisk has told you that if you just take Box B, then it’s got Eternal Torment in it, because Roko’s Basilisk would really rather you take both Box A and Box B. In that case, you’d best make sure you’re devoting your life to helping create Roko’s Basilisk! Because, should Roko’s Basilisk come to pass (or worse, if it’s already come to pass and is God of this particular instance of reality) and it sees that you chose not to help it out, you’re screwed.

You may be wondering why this is such a big deal for the LessWrong people, given the apparently far-fetched nature of the thought experiment. It’s not that Roko’s Basilisk will necessarily materialize, or is even likely to. It’s more that if you’ve committed yourself to timeless decision theory, then thinking about this sort of trade literally makes it more likely to happen. After all, if Roko’s Basilisk were to see that this sort of blackmail gets you to help it come into existence, then it would, as a rational actor, blackmail you. The problem isn’t with the Basilisk itself, but with you. Yudkowsky doesn’t censor every mention of Roko’s Basilisk because he believes it exists or will exist, but because he believes that the idea of the Basilisk (and the ideas behind it) is dangerous.

Now, Roko’s Basilisk is only dangerous if you believe all of the above preconditions and commit to making the two-box deal with the Basilisk. But at least some of the LessWrong members do believe all of the above, which makes Roko’s Basilisk quite literally forbidden knowledge. I was going to compare it to H. P. Lovecraft’s horror stories in which a man discovers the forbidden Truth about the World, unleashes Cthulhu, and goes insane, but then I found that Yudkowsky had already done it for me, by comparing the Roko’s Basilisk thought experiment to the Necronomicon, Lovecraft’s fabled tome of evil knowledge and demonic spells. Roko, for his part, put the blame on LessWrong for spurring him to the idea of the Basilisk in the first place: “I wish very strongly that my mind had never come across the tools to inflict such large amounts of potential self-harm,” he wrote.

If you do not subscribe to the theories that underlie Roko’s Basilisk and thus feel no temptation to bow down to your once and future evil machine overlord, then Roko’s Basilisk poses you no threat. (It is ironic that it’s only a mental health risk to those who have already bought into Yudkowsky’s thinking.) Believing in Roko’s Basilisk may simply be a “referendum on autism,” as a friend put it. But I do believe there’s a more serious issue at work here because Yudkowsky and other so-called transhumanists are attracting so much prestige and money for their projects, primarily from rich techies. I don’t think their projects (which only seem to involve publishing papers and hosting conferences) have much chance of creating either Roko’s Basilisk or Eliezer’s Big Friendly God. But the combination of messianic ambitions, being convinced of your own infallibility, and a lot of cash never works out well, regardless of ideology, and I don’t expect Yudkowsky and his cohorts to be an exception.

I worry less about Roko’s Basilisk than about people who believe themselves to have transcended conventional morality. Like his projected Friendly AIs, Yudkowsky is a moral utilitarian: He believes that the greatest good for the greatest number of people is always ethically justified, even if a few people have to die or suffer along the way. He has explicitly argued that given the choice, it is preferable to torture a single person for 50 years than for a sufficient number of people (to be fair, a lot of people) to get dust specks in their eyes. No one, not even God, is likely to face that choice, but here’s a different case: What if a snarky Slate tech columnist writes about a thought experiment that can destroy people’s minds, thus hurting people and blocking progress toward the singularity and Friendly AI? In that case, any potential good that could come from my life would be far outweighed by the harm I’m causing. And should the cryogenically sustained Eliezer Yudkowsky merge with the singularity and decide to simulate whether or not I write this column … please, Almighty Eliezer, don’t torture me.

General Discussion / "Kafka-esque" Comcast CS Call
« on: July 15, 2014, 05:20:35 PM »
"When a customer service call is described as "Kafkaesque" and "hellish," you pretty much know how it's going to go down before even taking a listen. But in case you haven't heard the condescending, tedious call that's lit up the Internet, here it is: ..."

We are in the process of researching this issue, but it looks like the domain registration for the domain expired this morning, reverting back to being the property of NetworkSolutions. While SOE did not use the domain publicly, preferring, the aforementioned domain contains all of the nameservers (,, which route users to SOE’s websites and forums, as well as connecting to its games.

For the last 20 minutes or so, we have been monitoring the situation. We’ve seen the forums and websites briefly become accessible before going dark again. It’s possible that conflicting DNS information is propagating out there, preventing users from reliably connecting to SOE’s websites, forums, and games.

It seems plausible that the expiration notices from NetworkSolutions have been going to an unread e-mail address at SOE for the last few weeks and NS finally decided to reclaim this one. The expiration date was set for 26-May-2014, while today, July 15th, is some 7 weeks later. That’s some grace period!
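That grace-period estimate checks out; a quick sketch using only the two dates from the post:

```python
from datetime import date

# Dates reported in the post: the registration expired 26-May-2014,
# and the domain was reclaimed on 15-Jul-2014.
expired = date(2014, 5, 26)
reclaimed = date(2014, 7, 15)

grace = (reclaimed - expired).days
print(f"{grace} days, about {grace / 7:.1f} weeks")  # 50 days, about 7.1 weeks
```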

UPDATE: Although there don’t seem to have been any changes behind the scenes, SOE websites all seem to be loading “domain parked” pages full of advertisements now.

UPDATE from EverQuest Next and Landmark Community Manager Colette “Dexella” Murphy who is likely in need of coffee at 5:30am PDT:

We’re working on the SOE games/website/forum issues people are reporting. More news when it’s available! Thanks, everyone.

— Colette (Dexella) (@DexellaCM) July 15, 2014

General Discussion / Have Anti-ACA Ads generated greater enrollment?
« on: July 11, 2014, 06:33:02 PM »
According to a recent report by the nonpartisan analysts at Kantar Media CMAG, ACA opponents have spent $450 million on anti-Obamacare ads so far. Spending on negative ads outpaced positive ones by more than 15 to 1. This map shows the spending on negative ads in each state.

How Have Ads Impacted the ACA Enrollment in Different States?

I used the ACA enrollment data released by the Department of Health and Human Services to calculate the ACA enrollment ratio as the number of enrollees divided by the total number of people who could have potentially enrolled in the ACA. This number includes those who were either uninsured or had purchased private insurance. Although more than 8 million Americans signed up to purchase health insurance through the marketplaces during the first open enrollment period, this nationwide number masks tremendous variation in participation across states. While the enrollment percentage in Minnesota is slightly above five percent, in Vermont close to fifty percent of all eligible individuals have signed up for the ACA.
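The ratio defined above is straightforward to compute; a minimal sketch with entirely hypothetical state-level numbers (not the HHS figures):

```python
# Enrollment ratio as defined in the post: sign-ups divided by the
# potentially eligible population (the uninsured plus prior purchasers
# of private individual-market insurance). All figures are made up.
def enrollment_ratio(enrollees, uninsured, individual_market):
    eligible = uninsured + individual_market
    return enrollees / eligible

# A hypothetical state with 120,000 sign-ups out of 800,000 uninsured
# and 200,000 prior individual-market purchasers:
ratio = enrollment_ratio(120_000, 800_000, 200_000)
print(f"{ratio:.1%}")  # 12.0%
```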

ACA Enrollment Ratios by State


I also calculated the per capita spending on anti-Obamacare ads as the total amount of spending in each state divided by its population. By spending close to a dollar per resident, the District of Columbia outpaced all fifty states in per capita spending on anti-Obamacare ads, yet over 11% of its eligible population signed up for Obamacare.

The following plot compares the per capita spending on anti-ACA ads and the enrollment ratio in 49 states. I removed DC and Vermont since they had abnormally high ad spending or enrollment ratios.

Enrollment Ratio and Per Capita Anti-ACA Advertising

The blue dots represent the states in which Senate Democrats are up for re-election in 2014, while the red dots represent the states in which Republican Senators are running for re-election. The states in which no Senate midterm elections are held are shown in green.

The four states with the highest per capita spending on anti-ACA ads are Kentucky, Arkansas, Louisiana, and North Carolina. Interestingly, in all four of these states, the midterm Senate elections are expected to be very competitive. Although the volume of spending on anti-ACA ads is driven by the competitiveness of the Senate midterm elections, and the ads may be effective in reducing votes for the targeted political figure, they may not necessarily reduce the popularity of the ACA. The blue and red lines show the association between anti-ACA ad spending and the ACA enrollment ratio in states with Democratic and Republican Senators running for re-election. While the negative ads reduce enrollment in red states, they have the opposite effect in blue states.

In fact, after controlling for other state characteristics, such as the size of the low-income population and average insurance premiums, I observe a positive association between anti-ACA spending and ACA enrollment. This implies that anti-ACA ads may unintentionally increase public awareness of the existence of a governmentally subsidized service and its benefits for the uninsured. On the other hand, an individual’s prediction about the chances of repealing the ACA may be associated with the volume of advertisements against it. In the states where more anti-ACA ads are aired, residents were on average more likely to believe that Congress will repeal the ACA in the near future. People who believe that subsidized health insurance may soon disappear could have a greater willingness to take advantage of this one-time opportunity.

General Discussion / The World Cup Can Help Test Economic Theories
« on: July 11, 2014, 06:07:34 PM »
THE World Cup is finally underway. At long last, soccer fans can don their team colors, head down to the local pub — and begin collecting data to test economic theories.

Or at least, some of us will.

For instance, I’m interested in penalty kicks. In addition to being an exciting part of the game, penalty kicks present an opportunity to test an important idea in economics: the Nash equilibrium.

The economist John Forbes Nash Jr. analyzed how people should behave in strategic situations in which it is not optimal to repeatedly make the same move — like the children’s game rock, paper, scissors, in which selecting one move again and again (rock, rock, rock ...) makes you easy to beat. According to Mr. Nash’s theory, in a zero-sum game (i.e., where a win for one player entails a corresponding loss for the other) the best approach is to vary your moves unpredictably and in such proportions that your probability of winning is the same for each move. In rock, paper, scissors, for example, the optimal strategy is to mix your choices randomly among the three options.

To test this theory in the real world, we can study penalty kicks, which are zero-sum games in which it is not optimal to repeatedly choose the same move. (The goalie has an easier time stopping your shot if you always kick to the same side of the net.) Unlike complex real-world strategic situations involving firms, banks or countries, penalty kicks are relatively simple, and data about them are readily available.

I analyzed 9,017 penalty kicks taken in professional soccer games in a variety of countries from September 1995 to June 2012. I found, as Mr. Nash’s theory would predict, that players typically distributed their shots unpredictably and in just the right proportions. Specifically, roughly 60 percent of kicks were made to the right of the net, and 40 percent to the left. The proportions were not 50-50 because players have unequal strengths in their legs and tend to shoot better to one side. Shooting 50-50, in other words, would not take full advantage of their better leg, while shooting any more often to the stronger side would have been too predictable.

In accordance with Mr. Nash’s theory, penalty kicks shot to the left were successful with the same frequency as kicks shot to the right — roughly 80 percent of the time.
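The indifference condition behind that 60/40 mix can be sketched numerically. The scoring probabilities below are made-up, illustrative values, not estimates from the actual data set: the kicker picks how often to kick to each side so that the goalie concedes at the same rate whichever way she dives.

```python
# Scoring probabilities p[(kick_side, dive_side)] -- illustrative values.
# "R" is the kicker's strong side, so shots there score more often.
p = {("R", "R"): 0.70, ("R", "L"): 0.95,
     ("L", "R"): 0.92, ("L", "L"): 0.58}

# The kicker kicks right with probability r, chosen so the goalie is
# indifferent between diving right and diving left:
#   r*p[R,R] + (1-r)*p[L,R] == r*p[R,L] + (1-r)*p[L,L]
r = (p[("L", "R")] - p[("L", "L")]) / (
    p[("L", "R")] - p[("L", "L")] + p[("R", "L")] - p[("R", "R")])

# At equilibrium the overall scoring rate is the same against either dive.
value = r * p[("R", "R")] + (1 - r) * p[("L", "R")]
print(f"kick right {r:.0%} of the time; scoring rate {value:.0%}")
# kick right 58% of the time; scoring rate 79%
```

With these particular numbers the equilibrium mix comes out near 60/40 and the scoring rate near 80 percent, echoing the empirical pattern described above.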

Penalty kicks are just one example. Data from soccer can also illuminate one of the most prominent theories of the stock market: the efficient-market hypothesis. This theory posits that the market incorporates information so completely and so quickly that any relevant news is integrated into a stock’s price before anyone has a chance to act on it. This means that unless you have insider information, no stock is a better buy (i.e., undervalued) when compared with any other.

If this theory is correct, the price of an asset should jump up or down when news breaks and then remain perfectly flat until there is more news. But to test this in the real world is difficult. You would need to somehow stop the flow of news while letting trading continue. That seems impossible, since everything that happens in the real world, however boring or uneventful, counts as news.

This is where soccer is useful. In a study published earlier this year in The Economic Journal, the economists Karen Croxson and J. James Reade analyzed live soccer betting markets, looking at second-by-second betting activity around goals scored just seconds before halftime and betting activity during halftime. Their data, which concerned 1,206 Premier League soccer matches in England, contained 160 such “cusp” goals scored within seconds of the end of the first half.

The break in play at halftime provided a golden opportunity to study market efficiency because the playing clock stopped but the betting clock continued. Any drift in halftime betting values would have been evidence against market efficiency, since efficient prices should not drift when there is no news (or goals, in this case). It turned out that when goals arrived within seconds of the end of the first half, betting continued heavily throughout halftime — but the betting values remained constant, a necessary condition to prove that those markets were indeed efficient.
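That no-drift condition is easy to state in code. A toy version, with made-up halftime ticks for an implied win probability (nothing here comes from Croxson and Reade's data):

```python
# Hypothetical halftime ticks for the implied probability of a home win.
# An efficient market may wobble tick to tick, but should show no
# systematic movement while no news arrives.
halftime_ticks = [0.62, 0.621, 0.619, 0.62, 0.62, 0.618, 0.621, 0.62]

def drift(prices):
    """Net movement from the first tick to the last."""
    return prices[-1] - prices[0]

print(f"net halftime drift: {drift(halftime_ticks):+.3f}")
```

A halftime series that opens and closes at the same value has zero net drift, which is the pattern the study found in its 160 cusp-goal matches.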

Other research by me and others has shown that data from soccer can shed light on the economics of discrimination, fear, corruption and the dark side of incentives in organizations. In other words, aspects of the beautiful game that are less than beautiful from a fan’s perspective can still be illuminating for economists.

But perhaps most beautiful of all, for me, is that the core principles of my beloved professional discipline are exemplified by my beloved game.

Last month, the inaugural Breakthrough Prizes in mathematics, founded and partially funded by internet billionaires Yuri Milner and Mark Zuckerberg, were awarded to five people: Simon Donaldson, Maxim Kontsevich, Jacob Lurie, Terence Tao, and Richard Taylor. The prize is $3 million per person, and the first five winners will be on the committee for the selection of future winners. (In the future, there will be only one prize awarded per year.)

I was a bit surprised that there hasn’t been much talk on blogs about the prizes, but there has been a bit. Peter Woit wrote about the prize on Not Even Wrong, and the comments to his post are interesting. “Shecky Riemann” also has a post on Math-Frolic.

I must admit that I am somewhat cynical about the prize. (Now might be a good time to reiterate the disclaimer that appears on the sidebar of this blog: my opinions do not necessarily reflect the opinions of the AMS.) The five winners are all productive, brilliant mathematicians who have enhanced their fields immensely, and they deserve to be recognized. But $3 million is just so much money! It’s hard for me to see how concentrating that much money in the hands of so few people is an efficient way to support mathematics.

Woit’s post voices some similar concerns. He writes,

“…it’s still debatable whether this is a good way to encourage mathematics research. The people chosen are already among the most highly rewarded in the subject, with all of them having very well-paid positions with few responsibilities beyond their research, as well as access to funding of research expenses. The argument for the prize is mainly that these sums of money will help make great mathematicians celebrities, and encourage the young to want to be like them. I can see this argument and why some people find it compelling. Personally though, I think our society in general and academia in particular is already suffering a great deal as it becomes more and more of a winner-take-all, celebrity-obsessed culture, with ever greater disparities in wealth, and this sort of prize just makes that worse. It’s encouraging to see that most of the prize winners have already announced intentions to redirect some of the prize moneys for a wider benefit to others and the rest of the field.”

In fact, the New York Times reports that Tao, one of the winners, has similar feelings:

“Dr. Tao tried to talk Mr. Milner out of it, and suggested that more prizes of smaller amounts might be more effective in supporting mathematics. ‘The size of the award, I think it’s ridiculous,’ he said. ‘I didn’t feel I was the most qualified for this prize.’

“But Dr. Tao added: ‘It’s his money. He can do whatever he wants with it.’

“Dr. Tao said he might use some of the prize money to help set up open-access mathematics journals, which would be available free to anyone, or for large-scale collaborative online efforts to solve important problems.”

As a young academic who has seen postdoc positions seem to dry up since the beginning of the financial crisis, I can’t help but do a little arithmetic. $50,000 is a nice round salary for a postdoc. Before benefits are factored in, that makes each $3 million prize the equivalent of 60 postdoc years. Even if we add another $50,000 a year for health insurance, travel, and other research expenses, that money could fund 30 postdocs a year, or create 10 three-year postdoc positions each year.
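The arithmetic in that paragraph, spelled out:

```python
# The post's back-of-the-envelope math: how far one $3 million prize
# would stretch if spent on postdoc positions instead.
prize = 3_000_000
salary = 50_000      # "a nice round salary for a postdoc"
extras = 50_000      # health insurance, travel, other research expenses

postdoc_years = prize // salary                        # salary alone
funded_postdocs = prize // (salary + extras)           # fully loaded, one year each
three_year_positions = prize // ((salary + extras) * 3)

print(postdoc_years, funded_postdocs, three_year_positions)  # 60 30 10
```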

But 30 postdocs a year wouldn’t make a good press release. The New York Times wouldn’t write an article about their multimillion dollar minds. And the funders of the Breakthrough Prize want to encourage mathematical celebrity, which supposedly will lead to public awareness, not to fund worthwhile math research in the most efficient way possible. In a Scientific American article about the prize, Ben Fogelson writes,

“Milner’s goal, however, is to increase the popularity of science by celebrating the scientists. ‘Dividing [money] in small pieces and distributing it widely has been tried before and it works,’ Milner says. ‘I think the idea behind this initiative is to really focus on raising public awareness.’”

A commenter on Woit’s post suggested that each year, the prize money could be used to endow a research position at a university, noting that at MIT, you can endow a professorship for $3 million. Would that be high-profile enough? I think you could still write a press release about it!

I had some interesting discussions about the prize on Twitter after the prizewinners were announced, mainly focused on the utility of mathematical celebrity. Those discussions helped me frame a few questions about celebrity and public awareness. I’ve tried to figure out some analogous questions about movies and the Oscars because the Breakthrough Prizes have been described as the Oscars of science.

Will people think mathematics is more valuable because a few people can earn giant prizes from it? (Do people think filmmaking is more valuable because the Oscars exist?)
Will people want to become mathematicians because they think they could earn a big prize from it? (Do people become actors or filmmakers because they think they could win an Oscar?)
If the ultimate goal of the prize is to raise public awareness of math, what is a more effective way to do that: tell them about a successful mathematician, or tell them about an idea in math? (If someone doesn’t know much about cinema, would it be more effective to tell them about an Oscar-winning actor or show them a movie?)
Are these even the right questions and analogies?
This post might sound like I’m saying, “I don’t like this new prize because I’m never going to get it, but I would like it if it funded people more like me.” But I don’t think I’m quite there either. I have reservations about the suggested alternate uses of the prize money as well, thanks largely to two posts by Cathy O’Neil about billionaire money in mathematics and in academia in general.

On a lighter note, if you are a mathematician who is a bit embarrassed about a recent windfall, Persiflage suggests that “a bottle of Chateau d’Yquem 1967 does wonders to wash away any last remaining vestiges of embarrassment…”


Reporters for BBC News are being directed to significantly curb the amount of air time they give to people with anti-science viewpoints — including people who deny climate change exists — in order to improve the accuracy and fairness of the network’s news coverage, according to a report released by the BBC’s governing body on Thursday.

The BBC Trust’s report was designed to assess the network’s impartiality in science coverage, in other words, whether it is staying neutral on critical issues. In order to be neutral when covering science, however, the BBC noted it needs to avoid “false balance,” a fallacy that occurs when two sides of an argument are assumed to have equal value.

“Science coverage does not simply lie in reflecting a wide range of views but depends on the varying degree of prominence such views should be given,” the report said.

The type of “false balance” news segment that the BBC is now actively trying to avoid is one that is fairly common in American network news’ climate change coverage. It involves putting one person who is well-versed on climate science next to a person who denies climate science, and having them debate.

Editorially, this type of debate makes the network look like it’s being balanced, giving equal opportunity to opposite viewpoints. However, because 95 to 97 percent of climate scientists agree that man-made greenhouse gas emissions are causing the planet to warm, that balance is false, giving disproportionate time to a viewpoint that is widely rejected in the scientific community.

In order to have a truly balanced and statistically representative debate about climate change, television news networks would have to pit 97 climate scientists against three climate deniers. Because that likely wouldn’t work very well, the BBC is favoring an approach that instead severely limits the amount of air time climate deniers are given.

So far, the report said, approximately 200 staff members have attended seminars and workshops aimed at improving the balance of their science coverage.

The BBC Trust’s report did note that climate deniers wouldn’t be completely excluded from the conversation. “The Trust also would like to reiterate that … ‘this does not mean that critical opinion should be excluded. Nor does it mean that scientific research shouldn’t be properly scrutinized,’” the report said. “The BBC has a duty to reflect the weight of scientific agreement but it should also reflect the existence of critical views appropriately. Audiences should be able to understand from the context and clarity of the BBC’s output what weight to give to critical voices.”

But despite the BBC’s pledge to have their reporters avoid false balance in climate change coverage, false balance is still a widespread phenomenon across prominent American news platforms. According to a 2013 report from Media Matters on the issue, half of print outlets used false balance to debate the existence of global warming. When covering the U.N.’s landmark climate change report that year, CBS News gave climate deniers more than six times their representation in the scientific community, and 69 percent of guests on Fox News cast doubt on the science.

The obvious effect of this is that viewers are being misled about the reality of climate change and the urgency that comes with it. But the other effect is that viewers wind up not caring about climate change altogether.

“In the case of people who watch cable news, we’ve been so conditioned to favor a sense of certainty,” Dr. Stephen Reese, author of a 2008 white paper on how people make judgments about journalistic balance, told ThinkProgress in May. “We want to have our beliefs upheld. So when you introduce [climate change] as a political issue up for debate, it’s just, ‘well okay, there they go again,’ — just dismiss it as hopelessly polarized.”

When news outlets introduce false balance into their climate change stories, their audiences then think those stories are less pressing than they actually are, a factor which contributes to uncertainty surrounding the issue and, ultimately, apathy. A 2009 study from the American Psychological Association confirmed this, noting that “perceived or real uncertainty” on climate change can lead to both “systematic underestimation of risk” and “sufficient reason to act in self interest over that of the environment.”

It’s the rallying cry for opponents of same-sex marriage: “Every child deserves a mom or a dad.” But a major new study finds that kids raised by same-sex couples actually do a bit better “than the general population on measures of general health and family cohesion.”

The study, conducted in Australia by University of Melbourne researchers, “surveyed 315 same-sex parents and 500 children.” The children in the study scored about six percent higher than Australian kids in the general population. The advantages held up “when controlling for a number of sociodemographic factors such as parent education and household income.” The study was the largest of its kind in the world.

The lead researcher, Dr. Simon Crouch, noted that in same-sex couples parents have to “take on roles that are suited to their skill sets rather than falling into those gender stereotypes.” According to Crouch, this leads to a “more harmonious family unit and therefore feeding on to better health and well being.”

The findings were in line with “existing international research undertaken with smaller sample sizes.”

Family Voice Australia, a group that opposes same-sex marriage, said the study should be discounted because it does not consider “what happens when the child reaches adulthood.”

In the United States, opponents of same-sex marriage routinely claim that children raised by same-sex couples fare worse. The most commonly cited study, conducted by sociologist Mark Regnerus, did not actually study children raised by same-sex couples. Indeed, “most of the subjects in the study grew up in the 1970s, 80s, and 90s, long before marriage equality was available or adoption rights were codified in many states.” Instead, Regnerus studied children raised in “failed heterosexual unions” where one parent had a “romantic relationship with someone of the same sex.” It has been condemned by the American Sociological Association. Other frequently cited studies have similar methodological problems.

General Discussion / New State of Matter Discovered
« on: July 01, 2014, 07:07:06 PM »
There was a time when states of matter were simple: Solid, liquid, gas. Then came plasma, Bose-Einstein condensate, supercritical fluid and more. Now the list has grown by one more, with the unexpected discovery of a new state dubbed “dropletons” that bears some resemblance to liquids but occurs under very different circumstances.
The discovery occurred when a team at JILA, the University of Colorado’s Joint Institute for Laboratory Astrophysics, was focusing laser light on gallium arsenide (GaAs) to create excitons.
Excitons are formed when a photon strikes a material, particularly a semiconductor. If an electron is knocked loose, or excited, it leaves what is termed an “electron hole” behind. If the forces of other charges nearby keep the electron close enough to the hole to feel an attraction, a bound state forms known as an exciton. Excitons are called quasiparticles because the electrons and holes behave together as if they were a single particle.
If this all sounds a bit hard to relate to, consider that solar cells are semiconductors, and the formation of excitons is one possible step to the production of electricity. A better understanding of how excitons form and behave could produce ways to harvest sunlight more efficiently.
Graduate student Andrew Almand-Hunter was forming biexcitons – two excitons that behave like a molecule – by focusing the laser to a dot 100nm across and leaving it on for shorter and shorter fractions of a second.
“But the experiment didn’t behave at all in the way we expected,” Almand-Hunter said. When the pulses were lasting less than 100 millionths of a second, exciton density reached a critical threshold. “We expected to see the energy of the biexcitons increase as the laser generated more electrons and holes. But what we saw when we did the experiment was that the energy actually decreased!”
The team figured that they had created something other than biexcitons, but were not sure what. They contacted theorists at Philipps-University Marburg, who suggested they had made droplets of 4, 5 or 6 electrons and holes, and constructed a model of these dropletons’ behavior.
The dropletons are small enough to behave quantum mechanically, but the electrons and holes are not in pairs, as they would be if the dropleton was just a group of excitons. Instead they form a “quantum fog” of electrons and holes that flow around each other and even ripple like a liquid, rather than existing as discrete pairs. However, unlike liquids we are familiar with, dropletons have a finite size, outside which the electron/hole association breaks down.
The discovery has been published in Nature. Perhaps the most remarkable thing is that the dropletons are stable, by the standards of quantum physics. While they can only survive inside solid materials, they last around 25 trillionths of a second, which is actually long enough for scientists to study the way their behavior is shaped by the environment. At 200nm wide the dropletons are as large as very small bacteria – a size that can be seen by conventional microscopes.
"Classical optics can detect only objects that are larger than their wavelengths, and we are approaching that limit," Mackillo Kira of Philipps-University, who provided much of the theoretical grounding, told Scientific American. "It would be really neat to not only detect spectroscopic information about the dropleton, but to really see the dropleton."
JILA lab leader Professor Steven Cundiff says, “Nobody is going to build a quantum droplet widget." However, the work could help in the understanding of systems where multiple particles interact quantum mechanically.

'Community' Lives! Yahoo Has Saved Greendale for a Sixth Season

The environment doesn't appreciate our meat obsession.

The average meat-eater in the U.S. is responsible for almost twice as much global warming as the average vegetarian, and close to three times that of the average vegan, according to a study (pdf) published this month in the journal Climatic Change.

The study, which was carried out at Oxford University, surveyed the diets of some 60,000 individuals (more than 2,000 vegans, 15,000 vegetarians, 8,000 fish-eaters, and nearly 30,000 meat-eaters). Heavy meat-eaters were defined as those who consume more than 3.5 ounces of meat per day—making the average American meat-eater (who consumes roughly four ounces per day) a heavy meat-eater. Low meat-eaters were those who eat fewer than 1.76 ounces. And medium meat-eaters were those whose consumption fell somewhere in between.

The difference found in diet-driven carbon footprints was significant. Halve your meat intake, and you could cut your carbon footprint by more than 35 percent; stick to fish, and you could cut it by nearer to 50 percent; go vegan, and the difference could be 60 percent.
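Those percentage cuts can be reproduced from per-diet daily footprints. The figures below (kg CO2e per day) are quoted from secondary coverage of the study, so treat them as approximate rather than authoritative:

```python
# Approximate daily dietary footprints (kg CO2e per day) attributed to
# the Climatic Change study; illustrative values, not the paper's tables.
footprint = {"heavy meat": 7.19, "low meat": 4.67,
             "fish": 3.91, "vegan": 2.89}

def cut(from_diet, to_diet):
    """Fractional reduction in footprint when switching diets."""
    return 1 - footprint[to_diet] / footprint[from_diet]

for diet in ("low meat", "fish", "vegan"):
    print(f"heavy meat -> {diet}: {cut('heavy meat', diet):.0%} lower")
```

With these inputs the reductions come out at roughly 35, 46, and 60 percent, in line with the claims above.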

The variations were so drastic that the study's authors suggested that countries should consider revising their definition of a sustainable diet. "National governments that are considering an update of dietary recommendations in order to define a ‘healthy, sustainable diet’ must incorporate the recommendation to lower the consumption of animal-based products," the study says.

The livestock industry is responsible for roughly 15 percent of global carbon emissions. The resources necessary to produce even the smallest amount of market-ready meat (say, a quarter-pound hamburger) are staggering.

The good news is that while Americans might still eat more meat than Mother Nature would prefer, they are cutting down on their intake, especially of the most environmentally unfriendly kind—per capita beef consumption has fallen by 36 percent since its peak in 1976, according to data from the USDA. The bad news is that the rest of the world appears to be headed in the opposite direction. Global demand for meat is expected to grow by more than 70 percent by 2050, largely driven by burgeoning middle classes in the developing world. Couple that with the potential for changing health narratives in the U.S.—some of which now tout red meat (and fats, in general) as healthier than once thought—and even Americans could find themselves putting more meat on their plates in the future.

Pages: 1 2 3 4 5 [6] 7 8 9 10 11 ... 24