Show Posts

This section allows you to view all posts made by this member. Note that you can only see posts made in areas you currently have access to.


Topics - Agrul

151
Tech Heads / i was desperate for a cheap new PC/laptop, so i bought
« on: September 10, 2015, 11:33:01 AM »
this desktop

and this laptop

they're both the literal worst

but i was impressed that i could get basic functionality out of $450. the laptop is especially hilarious w/ its 20GB HDD, didn't even notice it was that small until i went to install a second substantial program on it and got an out-of-memory error :grin:

152
At the smallest scales, everything in the universe can be broken down into fundamental morsels called particles. The Standard Model of particle physics—the reigning theory of these morsels—describes a small collection of known species that combine in myriad ways to build the matter around us and carry the forces of nature. Yet physicists know that these particles cannot be all there is—they do not account for the dark matter or dark energy that seem to contribute much of the universe’s mass, for example. Now two experiments have observed particles misbehaving in ways not predicted by any known laws of physics, potentially suggesting the existence of some new type of particle beyond the standard zoo. The results are not fully confirmed yet, but the fact that two experiments colliding different types of particles have seen a similar effect, and that hints of this behavior also showed up in 2012 at a third particle collider, has many physicists animated.

“It’s really bizarre,” says Mark Wise, a theorist at the California Institute of Technology who was not involved in the experiments. “The discrepancy is large and it seems like it’s on very sound footing. It’s probably the strongest, most enduring deviation we’ve seen from the Standard Model.” Finding such a crack in the Standard Model is exciting because it suggests a potential path toward expanding the model beyond those particles currently known.

The eyebrow-raising results come from the LHCb experiment at the Large Hadron Collider (LHC) in Switzerland and the Belle experiment at the High Energy Accelerator Research Organization (KEK) in Japan. Both observed an excess of certain types of leptons compared to others produced when particles called B mesons (made of a bottom quark and an antiquark) decay. Leptons are a category of particles that includes electrons, as well as their heavier cousins muons and taus. A Standard Model principle known as lepton universality says that all leptons should be treated equally by the weak interaction, the fundamental force responsible for radioactive decay. But when the experiments observed a large number of B meson decays, which should have produced equal numbers of electrons, muons and taus among their final products (after the different masses of the particles are taken into account), the decays actually made more taus.
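In symbols, tests of this kind are conventionally reported as a ratio of branching fractions. The article does not spell out the observable, so the following is the standard convention rather than the experiments’ exact definition:

```latex
% Conventional lepton-universality ratio for semileptonic B decays
% (standard notation; not quoted from the article)
R(D^{*}) = \frac{\mathcal{B}\left(\bar{B} \to D^{*}\,\tau^{-}\,\bar{\nu}_{\tau}\right)}
                {\mathcal{B}\left(\bar{B} \to D^{*}\,\ell^{-}\,\bar{\nu}_{\ell}\right)},
\qquad \ell = e, \mu
```

Lepton universality fixes this ratio to a definite Standard Model value once the lepton masses are accounted for, so a measured value above that prediction is precisely the “more taus” excess described above.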

Atom smashing

The LHC collides protons with protons, whereas the Belle accelerator smashes electrons into their antimatter counterparts, positrons. Both types of collisions sometimes result in B mesons, however, allowing each experiment to measure the end products when the unstable mesons decay. In a paper published in the September 11 issue of Physical Review Letters, the LHCb team announced that they had observed a potential excess of taus, at a rate about 25 to 30 percent greater than the Standard Model predicts. Belle saw a similar, but less pronounced, effect in data reported in a paper under review at Physical Review D. Both teams shared their findings in May at the Flavor Physics & CP Violation 2015 conference in Nagoya, Japan.

Intriguingly, both results also agree with earlier findings from 2012 (and expanded on in 2013) made by the B and B-Bar Experiment (BaBar) at the SLAC National Accelerator Laboratory in Menlo Park, Calif. “By itself neither the Belle result nor the LHCb result is significantly off from the Standard Model,” says Belle co-spokesperson Tom Browder of the University of Hawaii. “Together with BaBar we can make a ‘world average’ (combining all results), which is 3.9 sigma off from the Standard Model.” Sigma refers to standard deviations—a statistical measurement of a divergence—and the usual threshold among physicists for declaring a discovery is five sigma. Although a 3.9 sigma difference does not quite hit the mark, it indicates that the chance of this effect occurring randomly is just 0.011 percent. “Right now we have three suggestive but not yet conclusive hints of an extremely interesting effect,” says theorist Zoltan Ligeti of Lawrence Berkeley National Laboratory, who was not involved in the experiments. “We should know the answer definitively in a few years” as the experiments collect more data.
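The sigma-to-percentage conversion is just a Gaussian tail probability. A minimal sketch of the arithmetic (assuming the quoted figure is a two-sided tail; the article does not say which convention was used):

```python
import math

def two_sided_p(sigma):
    """Probability of a Gaussian fluctuation at least `sigma` standard
    deviations from the mean, in either direction: p = erfc(sigma / sqrt(2))."""
    return math.erfc(sigma / math.sqrt(2))

for s in (3.9, 5.0):
    p = two_sided_p(s)
    print(f"{s} sigma -> p = {p:.1e} ({100 * p:.4f}%)")
# 3.9 sigma -> p = 9.6e-05 (0.0096%), the same order as the 0.011 percent
# quoted above; 5.0 sigma -> p = 5.7e-07, the discovery threshold.
```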

If the discrepancy is real, rather than a statistical fluke, researchers will then face the tough challenge of figuring out what it means. “This effect is really not the kind that most physicists would have expected,” Ligeti says. “It is not easy to accommodate in the most popular models. In that sense it is quite surprising.”

For instance, the darling of so-called “new physics,” or beyond-the-Standard-Model, ideas—supersymmetry—does not usually predict an effect quite like this. Supersymmetry posits a host of undiscovered particles to mirror the ones already known. Yet none of its predicted particles easily produce this kind of violation of lepton universality. “I don’t think at this point we can say that this points to supersymmetry,” says Hassan Jawahery, a physicist at the University of Maryland and a member of the LHCb collaboration, “but it doesn’t necessarily violate supersymmetry.”

Yet if the signal is real, then some kind of new particle is probably implicated. In all B meson decays, at one point a heavier “virtual” particle is created and then quickly disappears—a strange phenomenon allowed by quantum mechanics. In the Standard Model this virtual particle is always a W boson (a particle that carries the weak force), which interacts equally with all leptons. But if the virtual particle were something more exotic that interacts with each lepton differently, depending on its mass, then more taus could be created at the end because taus are the heaviest leptons (and thus might interact more strongly with the virtual particle).

New Higgs or leptoquark?

One potentially appealing candidate for the virtual particle is a new type of Higgs boson that would be heavier than the particle discovered to much fanfare in 2012 at the LHC. The known Higgs boson is thought to give all other particles their mass. The new Higgs, in addition to being heavier than this known particle, would have other differing qualities—for example, to affect the B meson decays, it would have to have electromagnetic charge, where the known Higgs has none. “It would mean that the one Higgs we found so far is not the only one that is responsible for generating the mass for all the particles,” Jawahery says. Supersymmetry, in fact, predicts additional Higgs bosons beyond the one we know. Yet in most formulations of the model, these predicted Higgs particles would not create a discrepancy as large as the one showing up in the experiments.

Another option is an even more exotic hypothetical particle called a leptoquark—a composite of a quark and a lepton, which has never been seen in nature. This particle, too, would interact more strongly with the tau than the muon and the electron. “Leptoquarks can occur very naturally in certain types of models,” Ligeti says. “But there is no reason to expect them to be as low-mass as what would be needed to explain these data. I think most theorists would not consider these models particularly compelling right now.”

In fact, all of the explanations theorists can think of so far for the observations leave something to be desired—and do not do much to solve any of the larger outstanding problems of physics, such as the question of what makes dark matter or dark energy. “There’s nothing nice about these models—they’re just sort of cooked up to explain this fact, not to get at the trouble with other facts,” Wise says. “But just because the theorists are not comfortable with it, nature will do what nature does.”

There is also a chance, albeit slim, that physicists have incorrectly calculated the Standard Model’s predictions, and that the reigning rules still apply. “It’s possible the Standard Model calculation is not correct, but recent calculations have not revealed any serious problem there,” says Michael Roney of the University of Victoria in Canada, spokesperson for the BaBar Experiment. “It is also conceivable that the experiments have missed some more conventional explanation, but the experimental conditions at LHCb and BaBar are very different. In BaBar we have been continuing to mine our data in different ways but the effect persists.”

Physicists are optimistic the mystery will be sorted out soon with more data. In April the LHC started running collisions at higher energy, which for LHCb translates to more B mesons produced, and more chances to look for the discrepancy. Belle, meanwhile, is planning an upgraded experiment with an improved detector called Belle II scheduled to start collecting data in 2018. Both experiments should eventually find more data to confirm the effect, or see it fizzle if it was a statistical fluke. “If it is there then we have a huge program ahead of us for the next decade to study it in even more detail,” Jawahery says. “By then we would hopefully know what it also means, not just that it is there.”

http://www.scientificamerican.com/article/2-accelerators-find-particles-that-may-break-known-laws-of-physics1/

153
Cancer cells have been programmed back to normal by scientists in a breakthrough which could lead to new treatments and even reverse tumour growth.

For the first time, aggressive breast, lung and bladder cancer cells have been turned back into harmless benign cells by restoring the function which prevents them from multiplying excessively and forming dangerous growths.

Scientists at the Mayo Clinic in Florida, US, said it was like applying the brakes to a speeding car.

So far it has only been tested on human cells in the lab, but the researchers are hopeful that the technique could one day be used to target tumours so that cancer could be ‘switched off’ without the need for harsh chemotherapy or surgery.

"We should be able to re-establish the brakes and restore normal cell function,” said Profesor Panos Anastasiadis, of the Department for Cancer Biology.

“Initial experiments in some aggressive types of cancer are indeed very promising.

“It represents an unexpected new biology that provides the code, the software for turning off cancer."

Cells need to divide constantly to replace themselves. But in cancer the cells do not stop dividing, leading to runaway cell reproduction and tumour growth.

The scientists discovered that the glue which holds cells together is regulated by biological microprocessors called microRNAs. When everything is working normally, the microRNAs instruct the cells to stop dividing when they have replicated sufficiently. They do this by triggering production of a protein called PLEKHA7, which breaks the cell bonds. But in cancer that process does not work.

Scientists discovered they could switch on cancer by removing the microRNAs from cells and preventing them from producing the protein.

And, crucially, they found that they could reverse the process, switching the brakes back on and stopping the cancer. MicroRNAs are small molecules which can be delivered directly to cells or tumours, so an injection to increase their levels could switch off the disease.


“We have now done this in very aggressive human cell lines from breast and bladder cancer,” added Dr Anastasiadis.

“These cells are already missing PLEKHA7. Restoring either PLEKHA7 levels, or the levels of microRNAs in these cells turns them back to a benign state. We are now working on better delivery options.”

Cancer experts in Britain said the research solved a riddle that biologists had puzzled over for decades: why cells do not naturally prevent the proliferation of cancer.

“This is an unexpected finding,” said Dr Chris Bakal, a specialist in how cells change shape to become cancerous, at the Institute for Cancer Research in London.

“We have been trying to work out how normal cells might be suppressing cancer, and stopping dividing when they form contacts with each other, which has been a big mystery.

“Normal cells touch each other and form junctions, then they shut down proliferation. If there is a way to turn that back on, then that would be a way to stop tumours from growing.

“I think in reality it is unlikely that you could reverse tumours by reversing just one mechanism, but it’s a very interesting finding.”

Henry Scowcroft, Cancer Research UK’s senior science information manager, said: “This important study solves a long-standing biological mystery, but we mustn’t get ahead of ourselves.

“There’s a long way to go before we know whether these findings, in cells grown in a laboratory, will help treat people with cancer. But it’s a significant step forward in understanding how certain cells in our body know when to grow, and when to stop. Understanding these key concepts is crucial to help continue the encouraging progress against cancer we’ve seen in recent years.”

The research was published in the journal Nature Cell Biology.

http://www.telegraph.co.uk/news/science/science-news/11821334/Cancer-cells-programmed-back-to-normal-by-US-scientists.html#disqus_thread

154
General Discussion / The Rubber Room: The Worst Teachers in NYC
« on: September 03, 2015, 02:16:48 PM »
Old article, but I liked it.

In a windowless room in a shabby office building at Seventh Avenue and Twenty-eighth Street, in Manhattan, a poster is taped to a wall; its message could easily be the mission statement for a day-care center: “Children are fragile. Handle with care.” It’s a June morning, and there are fifteen people in the room, four of them fast asleep, their heads lying on a card table. Three are playing a board game. Most of the others stand around chatting. Two are arguing over one of the folding chairs. But there are no children here. The inhabitants are all New York City schoolteachers who have been sent to what is officially called a Temporary Reassignment Center but which everyone calls the Rubber Room.

These fifteen teachers, along with about six hundred others, in six larger Rubber Rooms in the city’s five boroughs, have been accused of misconduct, such as hitting or molesting a student, or, in some cases, of incompetence, in a system that rarely calls anyone incompetent.

The teachers have been in the Rubber Room for an average of about three years, doing the same thing every day—which is pretty much nothing at all. Watched over by two private security guards and two city Department of Education supervisors, they punch a time clock for the same hours that they would have kept at school—typically, eight-fifteen to three-fifteen. Like all teachers, they have the summer off. The city’s contract with their union, the United Federation of Teachers, requires that charges against them be heard by an arbitrator, and until the charges are resolved—the process is often endless—they will continue to draw their salaries and accrue pensions and other benefits.

“You can never appreciate how irrational the system is until you’ve lived with it,” says Joel Klein, the city’s schools chancellor, who was appointed by Mayor Michael Bloomberg seven years ago.

Neither the Mayor nor the chancellor is popular in the Rubber Room. “Before Bloomberg and Klein took over, there was no such thing as incompetence,” Brandi Scheiner, standing just under the Manhattan Rubber Room’s “Handle with Care” poster, said recently. Scheiner, who is fifty-six, talks with a raspy Queens accent. Suspended with pay from her job as an elementary-school teacher, she earns more than a hundred thousand dollars a year, and she is, she said, “entitled to every penny of it.” She has been in the Rubber Room for two years. Like most others I encountered there, Scheiner said that she got into teaching because she “loves children.”

“Before Bloomberg and Klein, everyone knew that an incompetent teacher would realize it and leave on their own,” Scheiner said. “There was no need to push anyone out.” Like ninety-seven per cent of all teachers in the pre-Bloomberg days, she was given tenure after her third year of teaching, and then, like ninety-nine per cent of all teachers before 2002, she received a satisfactory rating each year.

“But they brought in some new young principal from their so-called Leadership Academy,” Scheiner said. She was referring to a facility opened by Klein in 2003, where educators and business leaders, such as Jack Welch, the former chief executive of General Electric, hold classes for prospective principals. “This new principal set me up, because I was a whistle-blower,” Scheiner said. “She gave me an unsatisfactory rating two years in a row. Then she trumped up charges against me and sent me to the Rubber Room. So I’m fighting, and waiting it out.”

The United Federation of Teachers, the U.F.T., was founded in 1960. Before that, teachers endured meagre salaries, tyrannical principals, witch hunts for Communists, and gender discrimination against a mostly female workforce (at one point, there was a rule requiring any woman who got pregnant to take a two-year unpaid leave). Drawing its members from a number of smaller and ineffective teachers’ groups, the U.F.T. coalesced into a tough trade union that used strikes and political organizing to fight back. By the time Bloomberg took office, forty-two years later, many education reformers believed that the U.F.T. and its political allies had gained so much clout that it had become impossible for the city’s Board of Education, which already shared a lot of power with local boards, to maintain effective school oversight. In 2002, with the city’s public schools clearly failing, the State Legislature granted control of a new Department of Education to the new mayor, who had become a billionaire by building an immense media company, Bloomberg L.P., that is renowned for firing employees at will and not giving contracts even to senior executives.

Bloomberg quickly hired Klein, who, as an Assistant Attorney General in the Clinton Administration, was the lead prosecutor in a major antitrust case against Microsoft. When Klein was twenty-three, he took a year’s leave of absence from Harvard Law School to study education and teach math to sixth graders at an elementary school in Queens, where he grew up. Like Bloomberg, Klein came from a world far removed from the borough-centric politics and bureaucracy of the old board.

Test scores and graduation rates have improved since Bloomberg and Klein took over, but when the law giving the mayor control expired, on July 1st, some Democrats in the State Senate balked at renewing it, complaining that it gave the mayor “dictatorial” power, as Bill Perkins, a state senator from Manhattan, put it. Nevertheless, by August the senators had relented and voted to renew mayoral control.

One thing that the legislature did not change in 2002 was tenure, which was introduced in New York in 1917, as a good-government reform to protect teachers from the vagaries of political patronage. Tenure guarantees teachers with more than three years’ seniority a job for life, unless, like those in the Rubber Room, they are charged with an offense and lose in the arduous arbitration hearing.

In Klein’s view, tenure is “ridiculous.” “You cannot run a school system that way,” he says. “The three principles that govern our system are lockstep compensation, seniority, and tenure. All three are not right for our children.”

Brandi Scheiner says that her case is likely to be heard next year. By then, she will have twenty-four years’ seniority, which entitles her to a pension of nearly half her salary—that is, her salary at the time of retirement—for life, even if she is found incompetent and dismissed. Because two per cent of her salary is added to her pension for each year of seniority, a three-year stay in the Rubber Room will cost not only three hundred thousand dollars in salary but at least six thousand dollars a year in additional lifetime pension benefits.
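The arithmetic behind those figures is easy to check. A quick sketch, using $100,000 as a stand-in for the article’s “more than a hundred thousand dollars a year”:

```python
# Back-of-the-envelope check of the pension figures above
# (illustrative salary; the article gives only a lower bound).
salary = 100_000      # annual salary, dollars
accrual = 0.02        # pension grows by 2% of salary per year of seniority
years_in_room = 3

print(f"Salary paid during the stay: ${salary * years_in_room:,}")                 # $300,000
print(f"Added lifetime pension: ${salary * accrual * years_in_room:,.0f} a year")  # $6,000
```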

Scheiner worked at P.S. 40, an elementary school near Manhattan’s Stuyvesant Town. The write-ups on Web sites that track New York’s schools suggest that P.S. 40 is one of the city’s best. I spoke with five P.S. 40 parents, who said that Scheiner would have had nothing to “blow the whistle” about, because, as one put it, the principal, Susan Felder, is “spectacular.”

Scheiner refused to allow me access to the complete file related to her incompetence proceeding, which would detail the charges against her and any responses she might have filed, saying only that “they charged me with incompetence—boilerplate stuff.” (Nor could Felder comment, because Scheiner had insisted that her file be kept sealed.) But Scheiner did say that she and several of her colleagues in the Rubber Room had brought a “really interesting” class-action suit against the city for violations of their due-process and First Amendment rights as whistle-blowers. She said that the suit was pending, and that she would be vindicated. Actually, she filed three suits, two of which had long since been dismissed. And, a month and a day before she mentioned it to me, the magistrate handling the third case—in a move typically reserved for the most frivolous litigation—had ordered Scheiner and her co-plaintiffs to pay ten thousand dollars to the city in court costs, because that filing was so much like the second case. This third case is pending, though it no longer has a lawyer, because the one who brought these cases has since been disbarred, for allegedly lying to a court and allegedly stealing from Holocaust-survivor clients in unrelated cases.

It takes between two and five years for cases to be heard by an arbitrator, and, like Scheiner, most teachers in the Rubber Rooms wait out the time, maintaining their innocence. One of Scheiner’s Rubber Room colleagues pointed to a man whose head was resting on the table, beside an alarm clock and four prescription-pill bottles. “Look at him,” she said. “He should be in a hospital, not this place. We talk about human rights in China. What about human rights right here in the Rubber Room?” Seven of the fifteen Rubber Room teachers with whom I spoke compared their plight to that of prisoners at Guantánamo Bay or political dissidents in China or Iran.

It’s a theme that the U.F.T. has embraced. The union’s Web site has a section that features stories highlighting the injustice of the Rubber Rooms. One, which begins “Bravo!,” is about a woman I’ll call Patricia Adams, whose return to her classroom, at a high school in Manhattan, last year is reported as a vindication. The account quotes a speech that Adams made to union delegates; according to the Web site, she received a standing ovation as she declared, “My case should never have been brought to a hearing.” The Web site account continues, “Though she believes she was the victim of an effort to move senior teachers out of the system, the due process tenure system worked in her case.”

On November 23, 2005, according to a report prepared by the Education Department’s Special Commissioner of Investigation, Adams was found “in an unconscious state” in her classroom. “There were 34 students present in [Adams’s] classroom,” the report said. When the principal “attempted to awaken [Adams], he was unable to.” When a teacher “stood next to [Adams], he detected a smell of alcohol emanating from her.”

Adams’s return to teaching, more than two years later, had come about because she and the Department of Education had signed a sealed agreement whereby she would teach for one more semester, then be assigned to non-teaching duties in a school office, if she hadn’t found a teaching position elsewhere. The agreement also required that she “submit to random alcohol testing” and be fired if she again tested positive. In February, 2009, Adams passed out in the office where she had to report every day. A drug-and-alcohol-testing-services technician called to the scene wrote in his report that she was unable even to “blow into breathalyzer,” and that her water bottle contained alcohol. As the stipulation required, she was fired.

Randi Weingarten, the president of the U.F.T. until this month (she is now the president of the union’s national parent organization), said in July that the Web site “should have been updated,” adding, “Mea culpa.” The Web site’s story saying that Adams believed she was the “victim of an effort to move senior teachers out” was still there as of mid-August. Ron Davis, a spokesman for the U.F.T., told me that he was unable to contact Adams, after what he said were repeated attempts, to ask if she would be available for comment.

In late August, I reached Adams, and she told me that no one from the union had tried to contact her for me, and that she was “shocked” by the account of her story on the U.F.T. Web site. “My case had nothing to do with seniority,” she said. “It was about a medical issue, and I sabotaged the whole thing by relapsing.” Adams, whose case was handled by a union lawyer, said that, last year, when a U.F.T. newsletter described her as the victim of a seniority purge, she was embarrassed and demanded that the union correct it. She added, “But I never knew about this Web-site article, and certainly never authorized it. The union has its own agenda.” The next morning, Adams told me she had insisted that the union remove the article immediately; it was removed later that day. Adams, who says that she is now sober and starting a school for recovering teen-age substance abusers, asked that her real name not be used.

The stated rationale for the reassignment centers is unassailable: Get these people away from children, even if tenure rules require that they continue to be paid. Most urban school systems faced with tenure constraints follow the same logic. Los Angeles and San Francisco pay suspended teachers to answer phones, work in warehouses, or just stay home; in Chicago they do clerical work. But the policies implemented by other cities are on a far smaller scale—both because they have fewer teachers and because they have not been as aggressive as Klein and Bloomberg in trying to root out the worst teachers.

It seems obvious that by making the Rubber Rooms as boring and as unpleasant as possible Klein was trying to get bad teachers to quit rather than milk the long hearing process—and some do, although the city does not keep records of that.

“They’re in the Rubber Room because they have an entitlement to stay on the payroll,” says Dan Weisberg, the general counsel and vice-president for policy of a Brooklyn-based national education-reform group called the New Teacher Project. “It’s a job. It’s an economic decision on their part. That’s O.K. But don’t complain.” Until January, Weisberg ran the Department of Education’s labor-relations office, where, in 2007, he set up the Teacher Performance Unit, or T.P.U.—an élite group of lawyers recruited to litigate teacher-incompetence cases for the city.

“When we announced the T.P.U., the U.F.T. called a candlelight vigil”—at City Hall—“to protest what they called the Gotcha Squad,” says Chris Cerf, a deputy chancellor, who, like Klein and Weisberg, is an Ivy League-educated lawyer. “You would think candlelight vigils would be reserved for Gandhi or something like that, but you could hear this rally all the way over the Brooklyn Bridge.”

Randi Weingarten is unapologetic. “We believed that the way this Gotcha Squad was portrayed in the press by the city unfairly maligned all the teachers in the system,” she says. Weingarten, who was a lawyer before becoming a teacher and a U.F.T. officer, is a smart, charming political pro. She always tries to link the welfare of teachers to the welfare of those they teach—as in “what’s good for teachers is good for the children.”

Cerf’s response is that “this is not about teachers; it is about children.” He says, “We all agree with the idea that it is better that ten guilty men go free than that one innocent person be imprisoned. But by laying that on to a process of disciplining teachers you put the risk on the kids versus putting it on an occasional innocent teacher losing a job. For the union, it’s better to protect one thousand teachers than to wrongly accuse one.” Anthony Lombardi, the principal of P.S. 49, a mostly minority Queens elementary school, puts it more bluntly: “Randi Weingarten would protect a dead body in the classroom. That’s her job.”

“For Lombardi to say that,” Weingarten said, “shows he has no knowledge of who I am.”

Should a thousand bad teachers stay put so that one innocent teacher is protected? “That’s not a question we should be answering in education,” Weingarten said to me. “Teachers who are treated fairly are better teachers. You can’t have a situation that is fear-based. . . . That is why we press for due process.”

Steve Ostrin, who was assigned to a Brooklyn Rubber Room fifty-three months ago, might be that innocent man whom the current process protects. In 2005, a student at Brooklyn Tech, an élite high school where Ostrin was an award-winning social-studies teacher, accused him of kissing her when the two were alone in a classroom. After her parents told the police, Ostrin was arrested and charged with endangering the welfare of a child. He denied the charge, insisting that he was only joking around with the student and that the principal, who didn’t like him, seized upon the incident to go after him. The tabloids ran headlines about the arrest, and found a student who claimed that a similar thing had happened to her years before, though she had not reported it to the police. But many of Ostrin’s students didn’t believe the allegations. They staged a rally in support of him at the courthouse where the trial was held. Eleven months later, he was acquitted.

Nevertheless, the city refused to allow him to return to class. “Sometimes if they are exonerated in the courts we still don’t put them back,” Cerf said, adding that he was not referring to Ostrin in particular. “Our standard is tighter than ‘beyond a reasonable doubt.’ What would parents think if we took the risk and let them back in a classroom?”

Ostrin’s case may be vexing, but it is a distraction from the real issue: how to deal not with teachers accused of misconduct but with the far larger number who, like Scheiner, may simply not be teaching well. While maintaining that the union in no way condones failing teachers, Weingarten defends the elaborate protections that shield union members: “Teachers are not . . . bankers or lawyers. They don’t have independent power. Principals have huge authority over them. All we’re looking for is due process.”

Dan Weisberg, of the New Teacher Project, independently offered a similar analogy for the other side: “You’re not talking about a bank or a law firm. You’re talking about a classroom—which is far more important—and your ability to make sure that the right people are teaching there.”

By now, most serious studies on education reform have concluded that the critical variable when it comes to kids succeeding in school isn’t money spent on buildings or books but, rather, the quality of their teachers. A study of the Los Angeles public schools published in 2006 by the Brookings Institution concluded that “having a top-quartile teacher rather than a bottom-quartile teacher four years in a row would be enough to close the black-white test score gap.” But, in New York and elsewhere, holding teachers accountable for how well they teach has proved to be a frontier that cannot be crossed.

One morning in July, I attended a session of the arbitration hearing for Lucienne Mohammed, a veteran fifth-grade teacher. Mohammed, unlike most teachers sent to the Rubber Room, agreed to allow the record of her case to be public. (Her lawyer declined to make her available for an interview, however.) She had been assigned to P.S. 65, in Brooklyn’s East New York section, and was removed from the school in June of 2008, on charges of incompetence.

Mohammed’s case was the first to reach arbitration since the introduction of an initiative called Peer Intervention Program (P.I.P.) Plus, which was created to address the problem of tenured teachers who are suspected of incompetence, not those accused of a crime or other misconduct. P.I.P. Plus was included in the contract negotiated by Klein and Weingarten in 2007. The deal seemed good for both sides: a teacher accused of incompetence would first be assigned a “peer”—a retired teacher or principal—from a neutral consulting company agreed upon by the union and the city. The peer would observe the teacher for up to a year and provide counselling. If the observer determined that the teacher was indeed incompetent and was unlikely to improve, the observer would write a detailed report saying so. The report could then be used as evidence in a removal hearing conducted by an arbitrator agreed upon by the union and the city. “We as a union need to make sure we don’t defend the indefensible,” Weingarten told me. Klein and Weingarten both say that a key goal of P.I.P. Plus was to streamline incompetency arbitration hearings. It has not worked out that way.

The evidence of Mohammed’s incompetence—found in more than five thousand pages of transcripts from her hearing—seems as unambiguous as the city’s lawyer promised in his opening statement: “These children were abused in stealth. . . . It was chronic . . . a failure to complete report cards. . . . Respondent failed to correct student work, failed to follow the mandated curriculum . . . failed to manage her class.” The independent observer’s final report supported this assessment, ticking off ten bullet points describing Mohammed’s unsatisfactory performance. (Mohammed’s lawyer argues that she began to be rated unsatisfactory only after she became active with the union.)

This was the thirtieth day of a hearing that started last December. Under the union contract, hearings on each case are held five days a month during the school year and two days a month during the summer. Mohammed’s case is likely to take between forty and forty-five hearing days—eight times as long as the average criminal trial in the United States. (The Department of Education’s spotty records suggest that incompetency hearings before the introduction of P.I.P. Plus generally took twenty to thirty days; the addition of the peer observer’s testimony and report seems to have slowed things down.) Jay Siegel, the arbitrator in Mohammed’s case, who has thirty days to write a decision, estimates that he will exceed his deadline, because of what he says is the amount of evidence under consideration. This means that Mohammed’s case is not likely to be decided before December, a year after it began. That is about fifty per cent more time, from start to finish, than the O.J. trial took.

While the lawyers argued in measured tones, Mohammed—a slender, polite woman who appeared to be in her early forties—sat silently in one of six chairs bunched around a small conference table. The morning’s proceedings focussed first on a medical excuse that Mohammed produced for not showing up at the previous day’s hearing. Dennis DaCosta, an earnest young lawyer from the Teacher Performance Unit, pointed out that the doctor’s letter was eleven days old and therefore had nothing to do with her supposedly being sick the day before. The letter referred to a chronic condition, Antonio Cavallaro, Mohammed’s union-paid defense counsel, replied. Siegel said that he would reserve judgment.

Next came some discussion among the lawyers and Siegel about Defense Exhibit 33Q, a picture of Mohammed’s classroom. The photograph showed a neatly organized room, with a lesson plan chalked on the blackboard. But, under questioning by her own lawyer, Mohammed conceded that the picture had been taken, in consultation with her union representative, one morning before class, after the principal had begun complaining about her. The independent observer’s report had said that as of just a month before Mohammed was removed—and three months after the peer observer started observing and counselling her, and long after this picture was taken—Mohammed had still not “organized her classroom to support instruction and enhance learning.”

The majority of the transcript of the twenty-nine previous hearing days was given over to the lawyers and the arbitrator arguing issues that included whether and how Mohammed should have known about the contents of the Teachers’ Reference Manual; whether it was admissible that when Mohammed got a memo from the principal complaining about her performance, her students said, she angrily read it aloud in class; whether it was really a bad thing that she had appointed one child in her class “the enforcer,” and charged him with making the other kids behave; whether Mohammed’s union representative should have been present when she was reprimanded for not having a lesson plan; and whether the independent observer was qualified to evaluate Mohammed, even though she came from the neutral consulting company that the union had approved.

When the bill for the arbitrator is added to the cost of the city’s lawyers and court reporters and the time spent in court by the principal and the assistant principal, Mohammed’s case will probably have cost the city and the state (which pays the arbitrator) about four hundred thousand dollars.

Nor is it by any means certain that, as a result of that investment, New York taxpayers will have to stop paying Mohammed’s salary, eighty-five thousand dollars a year. Arbitrators have so far proved reluctant to dismiss teachers for incompetence. Siegel, who is serving his second one-year term as an arbitrator and is paid fourteen hundred dollars for each day he works on a hearing, estimates that he has heard “maybe fifteen” cases. “Most of my decisions are compromises, such as fines,” he said. “So it’s hard to tell who won or lost.” Has he ever terminated anyone solely for incompetence? “I don’t think so,” he said. In fact, in the past two years arbitrators have terminated only two teachers for incompetence alone, and only six others in cases where, according to the Department of Education, the main charge was incompetence.

Klein’s explanation is that “most arbitrators are not inclined to dismiss a teacher, because they have to get approved again every year by the union, and the union keeps a scorecard.” (Weingarten denies that the union keeps a scorecard.)

Antonio Cavallaro, the union lawyer, admitted that the process “needs some ironing out.”

Dan Weisberg says that because of the way cases are litigated by the union it’s impossible to move them along. He notes that, unlike in a criminal court, where the judge has to clear his docket, there is no such pressure on an arbitrator. One of Weisberg’s main concerns is the principals, who have to document cases and then spend time at the hearings. “My goal is to look them in the eye and say you should do the hard work,” he says. “I can’t do that if the principal is going to be on the stand for six days.”

Daysi Garcia, the principal of P.S. 65, is a Queens native and is considered by Klein to be a standout among the principals who attended the first classes of the Leadership Academy. She told me that, despite the five days she had to spend testifying, and the piles of paperwork she accumulated to make a record beforehand, she would do it again, because “when I think about the impact of a teacher like this on the children and how long that lasts, it’s worth it, even if it is hard.”

The document that dictates how Daysi Garcia can—and cannot—govern P.S. 65 is the U.F.T. contract, a hundred and sixty-six single-spaced pages. It not only keeps the Rubber Roomers on the payroll and Garcia writing notes to personnel files all day but dictates every minute of the six hours, fifty-seven and a half minutes of a teacher’s work day, including a thirty-seven-and-a-half-minute tutorial/preparation session and a fifty-minute “duty free” lunch period. It also inserts a union representative into every meaningful teacher-supervisor conversation.

The contract includes a provision that, this fall, will allow an additional seven hundred to eight hundred teachers to get paid for doing essentially no teaching. These are teachers who in the past year—or two or three—have been on what is called the Absent Teacher Reserve, because their schools closed down or the number of classes in the subject they teach was cut. Most “excessed” teachers quickly find new positions at other city schools. But these teachers, who have been on the reserve rolls for at least nine months, have refused to take another job (in almost half such cases, according to a study by the New Teacher Project, they have refused even to apply for another position) or their records are so bad or they present themselves so badly that no other principal wants to hire them. The union contract requires that they get paid anyway.

“Most of the excessed teachers get snapped up pretty fast,” Lombardi, the principal of P.S. 49, says. “You can tell from the records and the interviews who’s good and who’s not. So by the time they’ve been on the reserve rolls for more than nine months they’re not the people you want to hire. . . . I’ll do almost anything to avoid bringing them into my school.” These reserve teachers are ostensibly available to act as substitutes, but they rarely do so, because principals don’t want them or because they are not available on a given day; on an average school day the city pays more than two thousand specially designated substitute teachers a hundred and fifty-five dollars each.

Until this year, the city was hiring as many as five thousand new teachers annually to fill vacancies, while the teachers on the reserve list stayed there. This meant that, in keeping with Klein’s goals, new blood was coming into the schools—recruits from Teach for America or from fellowship programs, as well as those who enter the profession the conventional way. Now that New York, like most cities, is suffering through a budget crisis, Klein has had to freeze almost all new hiring and has told principals that they can fill openings only with teachers on the reserve list or with teachers who want to transfer from other schools.

Even so, the number of teachers staying on reserve for more than nine months is likely to exceed eleven hundred by next calendar year and cost the city more than a hundred million dollars annually. Added to the six hundred Rubber Roomers, that’s seventeen hundred idle teachers—more than enough to staff all the schools in New Haven.

The teachers’-union contract comes up for renewal in October, and Klein told me that he plans to push for a time limit of nine months or a year for reserve teachers to find new positions, after which they would be removed from the payroll. “If you can’t find a job by then, it’s a pretty good indicator that you’re not looking or you’re not qualified,” he said.

In Chicago, reserve-list teachers are removed from the payroll after ten months. Until December, the head of the Chicago school system was Arne Duncan, who is now President Obama’s Education Secretary. Duncan has consistently emphasized improving the quality of teachers by measuring and rewarding—or penalizing—them based on performance. “It’s my highest priority,” he told me.

Leading Democrats often talk about the need to reform public education, but they almost never openly criticize the teachers’ unions, which are perhaps the Party’s most powerful support group. In New York, where Weingarten is a sought-after member of Democratic-campaign steering committees, state legislators and New York City Council members are even more closely tied to the U.F.T., which has the city’s largest political-action fund and contributes generously to Democrats and Republicans alike. As a result, in April of 2008 the State Legislature passed a law, promoted by the union, that prohibited Klein from using student test data to evaluate teachers for tenure, something that he had often talked about doing.

Scores should be used only “in a thoughtful and reflective way,” Weingarten told me. “We acted in Albany because no one trusted that Joel Klein would use them to measure performance in a fair way.”

Reformers like Cerf, Klein, Weisberg, and even Secretary Duncan often use the term “value-added scores” to refer to how they would quantify the teacher evaluation process. It is a phrase that sends chills down the spine of most teachers’-union officials. If, say, a student started the school year rated in the fortieth percentile in reading and the fiftieth percentile in math, and ended the year in the sixtieth percentile in both, then the teacher has “added value” that can be reduced to a number. “You take that, along with observation reports and other measures, and you really can rate a teacher,” Weisberg says.
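As a toy illustration of that computation (a hypothetical formula; real value-added models also adjust for demographics, prior trajectories, and measurement error):

```python
def value_added(start_percentiles, end_percentiles):
    """Average percentile-point gain across a teacher's students over one
    school year -- the crude core of a 'value-added' score."""
    gains = [end - start for start, end in zip(start_percentiles, end_percentiles)]
    return sum(gains) / len(gains)

# The student described above: 40th percentile in reading and 50th in math
# at the start of the year, 60th in both at the end.
print(value_added([40, 50], [60, 60]))  # 15.0 percentile points "added"
```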

In a speech in July to the National Education Association, a confederation of teachers’ unions, Duncan was booed when he mentioned student test data. But he went on to say that “inflexible seniority and rigid tenure rules . . . put adults ahead of children. . . . These policies were created over the past century to protect the rights of teachers, but they have produced an industrial factory model of education that treats all teachers like interchangeable widgets.”

Duncan’s metaphor was deliberate. He was referring to “The Widget Effect,” a study of teacher-assessment processes in school systems across the country, published in June by the New Teacher Project and co-written by Weisberg. “Our schools are indifferent to instructional effectiveness,” the study declared. Under the subhead “All teachers are rated good or great,” it examined teacher rating processes, and found that in districts that have a binary, satisfactory-unsatisfactory system, ninety-nine per cent of teachers receive a satisfactory rating, and that even in the few school districts that attempt a broader range of rating options ninety-four per cent get one of the top two ratings.

The report lays out a road map for “a comprehensive performance evaluation system,” and recommends that for dismissals “an expedited one-day hearing should be sufficient for an arbitrator to determine if the evaluation and development process was followed and judgments made in good faith.” Lucienne Mohammed’s lawyer spent the equivalent of a day disputing whether she should have been familiar with her training materials.

In seven years, Klein has increased the percentage of third-year teachers not given tenure from three to six per cent. Unsatisfactory ratings for tenured teachers have risen from less than one per cent to 1.8 per cent. “Any human-resources professional will tell you that rating only 1.8 per cent of any workforce unsatisfactory is ridiculous,” Weisberg says. “If you look at the upper quartile and the lower quartile, you know that those people are not interchangeable.”

The Rubber Rooms house only a fraction of the 1.8 per cent who have been rated unsatisfactory. The rest still teach. There are fifty Rubber Roomers—a twentieth of one per cent of all New York City teachers*—awaiting removal proceedings because of alleged incompetence, as opposed to those who have been accused of misconduct.

“If you just focus on the people in the Rubber Rooms, you miss the real point, which is that, by making it so hard to get even the obvious freaks and crazies that are there off the payroll, you insure that the teachers who are simply incompetent or mediocre are never incented to improve and are never removable,” Anthony Lombardi says. In a system with eighty-nine thousand teachers, the untouchable six hundred Rubber Roomers and eleven hundred teachers on the reserve list are only emblematic of the larger challenge of evaluating, retraining, and, if necessary, weeding out the poor performers among the other 87,300.

While Mohammed’s hearing was lumbering on in June, the newsletter of the Chapel Street Rubber Room, in Brooklyn—where Mohammed had spent her school days since 2008—was being handed out by two of its teacher-editors. They were standing under a poster of the room’s mission statement: “TRC”—Temporary Reassignment Center—“Is a Community.” The newsletter’s banner exhorted its readers to “Experience. Share. Enrich. Grow.” Articles included an account of a U.F.T. staff director’s visit to Chapel Street and an essay by one of the room’s inhabitants about how to “quit doubting yourself,” entitled “Perception Is Everything.”

The walls of the large, rectangular room were covered with photographs of Barack Obama and various news clippings. Just to the right of a poster that proclaimed “Bloomberg’s 3 Rs: Rubber Room Racism,” a smiling young woman sat in a lounge chair that she had brought from home. She declined to say what the charges against her were or to allow her name to be used, but told me that she was there “because I’m a smart black woman.”

I asked the woman for her reaction to the following statement: “If a teacher is given a chance or two chances or three chances to improve but still does not improve, there’s no excuse for that person to continue teaching. I reject a system that rewards failure and protects a person from its consequences.”

“That sounds like Klein and his accountability bullshit,” she responded. “We can tell if we’re doing our jobs. We love these children.” After I told her that this was taken from a speech that President Obama made last March, she replied, “Obama wouldn’t say that if he knew the real story.”

But on July 24th President Obama and Secretary Duncan announced that they would award a large amount of federal education aid from the Administration’s stimulus package to school systems on the basis of how they address the issue of accountability. And Duncan made it clear that states where the law does not allow testing data to be used as a measure of teacher performance would not be eligible.

Duncan has fashioned the competition for this stimulus money as a “Race to the Top,” offering four billion dollars to be split among the dozen or so states that do the most to promote accountability in their schools. “That could mean five hundred million dollars for New York, which is huge,” Weisberg says. “But New York won’t be able to compete without radical changes in the law.” Such changes would have to include not only the provision forbidding Klein to use test scores to evaluate teachers (which Weisberg is most focussed on) but also provisions, such as those mandating teacher tenure, that are at the core of the teachers’-union contract. Klein has already come up with a debatable technical argument that the testing restriction won’t actually disqualify New York from at least applying for the money (because the restriction is about using test scores only for tenure decisions). Still, having that law on the books would obviously undercut an application claiming that New York should be declared one of the most accountable systems in the country—as would many provisions of the union contract, such as tenure and compensation based wholly on seniority.

We’ll soon see whether the lure of all that federal money will soften the union position and change the political climate in Albany. If it does, Bloomberg and Klein—who are determined reformers and desperate for the money—would have a chance to turn the U.F.T. contract into something other than a straitjacket when it comes up for renewal, in October. The promise of school funds might also push the legislature, which controls issues such as tenure, to allow a loosening of the contract’s job-security provisions and to repeal the law that forbids test scores to be used to evaluate teachers. If the stimulus money does not push the U.F.T. and the legislature to permit these changes, and if Duncan and Obama are serious about challenging the unions that are the Democrats’ base, the city and the state will miss out on hundreds of millions of dollars in education aid. More than that, publicly educated children will continue to live in an alternate universe of reserve-list teachers being paid for doing nothing, Rubber Roomers writing mission statements, union reps refereeing teacher-feedback sessions, competence “hearings” that are longer than capital-murder trials, and student-performance data that are quarantined like a virus. As the Manhattan Rubber Room’s poster says, it’s the children, not the teachers, who are fragile and need to be handled with care. ♦

*Correction, December 1, 2009: A twentieth of one per cent of all New York City teachers are Rubber Roomers, not half of one per cent, as originally stated.

http://www.newyorker.com/magazine/2009/08/31/the-rubber-room

155
General Discussion / an smbc for skars
« on: August 31, 2015, 10:53:00 PM »

156
General Discussion / The Death of Mt. McKinley
« on: August 31, 2015, 03:52:51 PM »
fuckin' natives, takin our mountains

Quote
President Barack Obama... Alaska...
His first step while he's there: officially renaming the country's tallest mountain from Mt. McKinley to Denali, an historic nod to the region's native population, which the White House says is under threat from the already-present threat of climate change.

http://www.cnn.com/2015/08/30/politics/obama-alaska-denali-climate-change/

157
LoLz / Holy fuck, that finals match tho.
« on: August 24, 2015, 09:57:50 PM »

158
I watched TTGL re-recently, AD.

Shit was upsettingly good.

Incidentally inspired my instrument thread.

[Embedded video no longer available]

159
General Discussion / Do you dudes use Quora?
« on: August 23, 2015, 08:26:20 AM »
Because you fucking should.

It's like Facebook but the shit people say matters.

http://www.quora.com/

160
General Discussion / What's your favorite instrument, TZT?
« on: August 23, 2015, 01:15:25 AM »
i like the piano, esp when this lady plays it

[Embedded video no longer available]

guitar & violin as runners-up

161
LoLz / sooooo laggy
« on: June 19, 2015, 08:33:19 PM »
Man, lately I can't play LoL at all without getting at least a few games with consistent 1200-4k lag spikes. Fios East suuuucks.

162
General Discussion / More Brain Zaps for Learning/Memory
« on: May 24, 2015, 11:37:01 AM »
Imagine you are enjoying your golden years, driving to your daily appointment for some painless brain zapping that is helping to stave off memory loss. That's the hope of a new study, in which people who learned associations (such as a random word and an image) after transcranial magnetic stimulation (TMS) were better able to learn more pairings days and weeks later—with no further stimulation needed. TMS uses a magnetic coil placed on the head to increase electrical signaling a few centimeters into the brain. Past studies have found that TMS can boost cognition and memory during stimulation, but this is the first to show that such gains can last even after the TMS regimen is completed.

In the new study, which was published in Science, neuroscientists first used brain imaging to identify the associative memory network of 16 young, healthy participants. This network, based around the hippocampus, glues together things such as sights, places, sounds and time to form a memory, explains neuroscientist Joel Voss of Northwestern University, a senior author of the paper.

Next, the researchers applied TMS behind the left ear of each participant for 20 minutes for five consecutive days to stimulate this memory network.


To see if participants' associative memory improved, one day after the stimulation regimen finished they were tested for their ability to learn random words paired with faces. Subjects who had had TMS performed 33 percent better, compared with those who received placebo treatments, such as sham stimulation.

“Twenty-four hours may not sound like a long time, but in fact that's quite long in terms of affecting the brain,” Voss says. His team followed up with the participants about 15 days later and found the benefit remained, according to another paper in press at Hippocampus. The team also imaged the subjects' brains one and 15 days after stimulation, finding increases in neural connectivity in their associative memory network.

Voss now plans to test whether this method works on individuals who have disorders in which the memory association network is weak, such as Alzheimer's disease, traumatic brain injury and schizophrenia.

http://www.scientificamerican.com/article/on-the-horizon-a-magnetic-zap-that-strengthens-memory/?WT.mc_id=SA_Facebook

163
General Discussion / No Man's Sky: Game w/ 18 Quintillion Planets
« on: May 17, 2015, 08:11:59 AM »
No Man’s Sky will let virtual travellers explore eighteen quintillion full-featured planets. (Credit: Hello Games)
The universe is being built in an old two-story building, in the town of Guildford, half an hour by train from London. About a dozen people are working on it. They sit at computer terminals in three rows on the building’s first floor and, primarily by manipulating lines of code, they make mathematical rules that will determine the age and arrangement of virtual stars, the clustering of asteroid belts and moons and planets, the physics of gravity, the arc of orbits, the density and composition of atmospheres—rain, clear skies, overcast. Planets in the universe will be the size of real planets, and they will be separated from one another by light-years of digital space. A small fraction of them will support complex life. Because the designers are building their universe by establishing its laws of nature, rather than by hand-crafting its details, much about it remains unknown, even to them. They are scheduled to finish at the end of this year; at that time, they will invite millions of people to explore their creation, as a video game, packaged under the title No Man’s Sky.

The game’s chief architect is a thirty-four-year-old computer programmer named Sean Murray. He is tall and thin, with a beard and hair that he allows to wander beyond the boundaries of a trim; his uniform is a pair of bluejeans and a plaid shirt. In 2006, frustrated by the impersonal quality of corporate game development, Murray left a successful career with Electronic Arts, one of the largest manufacturers of video games in the world. He believes in small teams and in the idea that creativity emerges from constraint, and so, in 2008, he and three friends founded a tiny company called Hello Games, using money he raised by selling his home. Since then, its sole product has been a game called Joe Danger, about a down-and-out stuntman whose primary skill is jumping over stuff with a motorcycle. Joe Danger, released in several iterations, earned a reputation for playability and humor. (In one version, it is possible to perform stunts as a cupcake riding a bike.) But it was hardly the obvious predecessor to a fully formed digital cosmos. No Man’s Sky will, for all practical purposes, be infinite. Players will begin at the outer edges of a galaxy containing 18,446,744,073,709,551,616 unique planets. By comparison, the game space of Grand Theft Auto: San Andreas appears to be about fourteen square miles.

From the moment Murray unveiled a hastily built trailer for No Man’s Sky, in late 2013, on the Spike TV network, anticipation for the game has taken on an aspect of delirium. For a big-budget franchise like Grand Theft Auto—what people in the industry call a triple-A game—an “announcement trailer” typically features carefully scripted, action-filled vignettes that present a simulacrum of actual play. The No Man’s Sky trailer, which was homemade, featured a minute or so of the actual game: a recording of Murray exploring a planet, beginning undersea, then boarding a ship, flying into space, and engaging in combat. The footage communicated nothing concrete about the game play, but the graphics were rendered with an artistic finesse rarely seen in games, and the arc of Murray’s journey—the unbroken sweep from ocean to land to heavens—implied an unprecedented range of possible discovery.

Other video-game developers advised Murray not to release the trailer, fearing that it was too vague and unconventional, and for days he deliberated. But Murray is not short on self-assurance, and he believed that the footage evoked a near-universal childhood experience: gazing up at the stars and wondering what space might be like. He decided to fly to Los Angeles and present the trailer himself, on the air. “Sean strikes me as incredibly driven and ambitious, but he is also polite and sweet about it,” Joe Shrewsbury, whose band, 65daysofstatic, is writing the game’s soundtrack, told me. Murray, who describes himself as an introvert, says that studio lights terrify him—in keeping with a habit of self-effacement that another colleague described as “the nervous-guy shtick.”

On the Spike TV set, Murray looked downward, as if shielding his eyes, but he also projected fanboy enthusiasm. “It is a huge game,” he said. “I can’t really do it justice. We wanted to make a game about exploration, and we wanted to make something that was real.” Nearly all video games rely on digital façades, drawn by artists, to give the illusion of an explorable world that is far larger than it really is, but No Man’s Sky will contain no such contrivance. Murray’s trailer featured luxuriant scenes of crashed ships on arctic terrain, giant sandworms—a galaxy of exotic dangers. “That planet on the horizon, which you see on the trailer, that’s a real place,” he said on the set. At the time, Murray was working on the game with only three other people, and when he told the show’s hosts they reacted incredulously. “If it is nighttime, and you are in space, and you see stars, those are real stars,” he added. “Those are suns, and they have planets around them—and you can go and visit them.”

When I first met with Murray, at his studio, earlier this year, he had just flown back from the North American headquarters of Sony PlayStation, in California. He had a long relationship with Sony. A few days before he unveiled the No Man’s Sky trailer, in 2013, he had distributed versions of it to people in the industry, and Sony had been immediately interested. “I sent Sean a barrage of texts,” Shahid Ahmad, a director of strategic content at Sony PlayStation, told me. “I said, ‘We need to get this on PlayStation. Tell me what you need.’ ”

Two weeks later, on Christmas Eve, a tributary of the Thames overflowed in Guildford, flooding the Hello Games studio. Murray rushed over and found laptops floating in waist-deep water; tens of thousands of dollars’ worth of equipment was destroyed. Sony’s offer of assistance remained, but Murray told me that he did not ask for funding. Unlike Hollywood, the video-game industry is marked by a vast chasm between big-budget productions and independent ones, and he had learned with Joe Danger that a small studio could easily become beholden to a distributor. Instead, he requested Sony’s help in securing a place for No Man’s Sky at the Electronic Entertainment Expo, or E3, the largest gaming trade show in America. No independently produced title had ever been featured on the main stage, but, as he recalled, “I said—and this was really cocky—‘We want to own E3.’ They were, like, ‘That’s not going to happen,’ but we pushed for it. We traded working for them for being onstage.” (Ahmad told me that the hesitation was largely logistical: “E3 takes time to plan.”)

Sony agreed, and also decided to throw its resources into promoting No Man’s Sky as a top title—an unprecedented gesture for an unfinished product by a tiny studio. The video-game industry now rivals Hollywood; by one estimate, it generated more than eighty billion dollars in revenue last year, and marketing budgets for triple-A games have become comparable to those of blockbuster films. Sony’s marketing strategy for No Man’s Sky suggests that it expects the game to make hundreds of millions of dollars; this year, Sony will promote it alongside half a dozen mega-titles, including the latest installment of the Batman franchise. Adam Boyes, a vice-president at Sony PlayStation, described it to me as “potentially one of the biggest games in the history of our industry.”

All Murray has to do now is deliver. Last year, when an interviewer asked him when the universe would be ready, he said, “We are this super-small team, and we are making this ridiculously ambitious game, and all we are going to do in telling people when it is going to come out, probably, is disappoint them.” Sony’s participation meant that timing for the game’s launch had to be firmly decided, but No Man’s Sky is not an easy project to rush. Because of its algorithmic structure, nearly everything in it is interconnected: changes to the handling of a ship can affect the way insects fly. The universe must be developed holistically; sometimes it must be deconstructed entirely, then reassembled. Before I arrived, Murray warned me, “The game is on the operating table, so you will see it in parts. Other games will have the benefit of having a level that plays really well, while the studio works on other levels. We don’t have that.” The previous “builds” of No Man’s Sky that he had publicly shown—the ones that had generated so much excitement—contained choreographed elements. Features that might have been light-years apart were pressed closer together; animals were invisibly corralled so that they could be reliably encountered. Shifts in the weather that would normally follow the rhythm of atmospheric change were cued to insure that they happened during a demo. Imagine trying to convey life on Earth in minutes: shortcuts would have to be taken.

VIDEO: Footage from the game, with commentary by Raffi Khatchadourian.
We were in a lounge on the second floor of the renovated studio; concept art hung beside a whiteboard covered with Post-its. The furniture was bright, simple, IKEA. Sitting in front of a flat-screen TV the size of a Hummer windshield, Murray loaded up a demo of the game that he had created for E3: a solar system of six planets. Hoping to preserve a sense of discovery in the game, he has been elusive about how it will play, but he has shared some details. Every player will begin on a randomly chosen planet at the outer perimeter of a galaxy. The goal is to head toward the center, to uncover a fundamental mystery, but how players do that, or even whether they choose to do so, is open to them. People can mine, trade, fight, or merely explore. As planets are discovered, information about them (including the names of their discoverers) is loaded onto a galactic map that is updated through the Internet. But, because of the game’s near-limitless proportions, players will rarely encounter one another by chance. As they move toward the center, the game will get harder, and the worlds—the terrain, the fauna and flora—will become more alien, more surreal.

Sitting in the lounge, we began on a Pez-colored planet called Oria V. Murray is known for nervously hovering during demos. “I’ll walk around a little, then I’ll let you have the controller for a bit,” he said. I watched as he traversed a field of orange grass, passing cyan ferns and indigo shrubs, down to a lagoon inhabited by dinosaurs and antelope. After three planets and five minutes, he handed me the controller, leaving me in a brilliantly colored dreamscape, with crystal formations, viridescent and sapphire, scattered in clusters on arid earth. Single-leaf flora the height of redwoods swayed like seaweed. I wandered over hills and came to a sea the color of lava and waded in. The sea was devoid of life. With the press of a button, I activated a jet pack and popped into the air. Fog hung across the sea, and Murray pointed to the hazy outline of distant cliffs. “There are some sort of caves over there,” he said, and I headed for them. The No Man’s Sky cosmos was shaped by an ideal form of wildness—mathematical noise—and the caves were as uncharted as any material caves. I climbed into one of them. “Let’s see how big it is,” Murray said.

The cave’s interior was rendered in blues, greens, purples, and browns, and the light filled it with warmth. Luminescent bits of matter, like inanimate fireflies, filled the air. Triple-A games are often self-serious, dominated by hues so dark they nearly seem black, but Murray favors vivid, polychromatic graphics. “I think that one of the reasons No Man’s Sky resonates is that, at a very reductive level, it’s bright—it’s colorful, vibrant,” he told me.

The game is an homage to the science fiction that Murray loved when he was growing up—Asimov, Clarke, Heinlein—and to the illustrations that often accompanied the stories. In the nineteen-seventies and eighties, sci-fi book covers often bore little relation to the stories within; sometimes they were commissioned independently, and in bulk, and for an imaginative teen-ager it was a special pleasure to imbue the imagery with its own history and drama. Space was presented as a romantic frontier, sublime in its vastness, where ships and futuristic architecture scaled to monumental proportions could appear simultaneously awesome and diminutive. Danger was a by-product of exploration: rockets that crashed on barren asteroids; plots by haywire computers; ominous riddles left behind by lost civilizations. “But inherently what is going on is optimistic,” Murray said. “You would read it and go, Wow, I would love to be this person—this is so exciting. Whereas at the moment a lot of sci-fi is dystopian, and you go, I would hate to be this person. How would I deal with it?”

No Man’s Sky’s references may be dime-store fiction, but the game reimagines the work with a sense of nostalgia and a knowing style that is often more sophisticated than the original. “One thing a lot of video games are missing is a very confident sense of style,” Frank Lantz, the director of New York University’s Game Center, told me. “No Man’s Sky has a personality.”

I approached a cave that looked out on the sea, and Murray gestured toward a portion of the digital geology. “I haven’t seen that before,” he said, and took the controller to get a better look. Murray’s primary coding contribution is to planetary terrain, and he had developed a special appreciation for such formations. After exploring for a bit, he said, “Sorry. You can have the controller back.”

From inside the cave, I turned and approached an opening that looked out upon a ridge high above the shore. “What happens if I jump off?” I asked.

“You’ll be fine,” he said. “We didn’t want you to break your legs and get hurt. It is about exploring. We didn’t want people feeling nervous.”

Each planet had a distinct biome. On one, we encountered a friendly-looking piscine-cetacean hybrid with a bulbous head. (Even aggressive creatures in the game do not look grotesque.) In another, granular soil the color of baked salt was embedded with red coral; a planet hung in the sky, and a hovering robot traversed the horizon. “Those are drones,” Murray said. “They will attack you if they find you killing animals or illegally mining resources.” On a grassy planet, doe-eyed antelope with zebra legs grazed around us. Mist rose off the grass as I headed down a ravine shaded by trees. “This is a place where no one has been before,” Murray said. The biome was Earth-like in light and in color, naturalistic. As I descended, the ravine deepened until rock façades took shape on either side. In spite of the work’s semi-finished state, the world was absorbing. “I’m sorry there’s no game-play element on this planet yet,” Murray said. His mind turned from the screen in front of us—the six planets, tidily assembled for the demo—to the full version of No Man’s Sky on the operating table on the studio’s first floor, below us. Until many improvements were fully realized, the whole of it would inevitably look worse than what we were seeing. “You can lose sight that it once looked like this,” he said.

This version of the game—a frequent reference point for the studio—was a reminder of a public promise: the presentation that Murray had given at E3, where he stood on a huge stage with images of No Man’s Sky projected onto ninety-foot screens. “There were five thousand people in the audience, and at least five million watching at home,” he told me. “I sat backstage, and, before walking out, I had a feeling that I could go to sleep—just turn around and go.” One of the studio’s programmers who was with Murray backstage recalled, “Sean got whiter and whiter—he was just catatonic.” To overcome his nerves, Murray focussed his mind on the story of the game, beginning with the studio’s origins. He told me, “By the time I walked out, I could have burst into tears, because what I was going to say was that this is basically the game I’ve always wanted to make.”

Murray’s earliest memories are of life on a cane farm in Brisbane, Australia. He was born in Ireland, but his parents migrated to Australia when he was two years old. “We basically lived in a glorified shed,” he told me. “It was up on stilts, and it had a corrugated-iron roof.” Two years later, Murray’s parents moved again, to work on a remote million-acre ranch in Queensland. The settlement resembled an alien outpost, with its own power-generation system and its water pumped on-site. Visitors who wanted to avoid a four-hundred-mile drive on a rutted track had to fly in. (The ranch had seven airstrips and an abandoned gold mine.) Dust storms swept across the desiccated soil. Merely crossing the property was like an expedition. “You would go out to check that the windmill, or whatever, was working,” he said. “And you always had to go out in twos. As a kid, you were told that, if something happened to the person you were with, then find some shade, and if there is no shade don’t go looking for it. You will survive for three days without water and without food, and so you have only one job: gather kindling to light a fire. You stay exactly in the same place, and you light a fire at set times, and that’s it. There is a plan: we can fly over, and in three days we can cover the whole grid.” Murray often accompanied his father on multi-day treks. At night, they camped under pristine night sky, with all of space arcing above them.

In the outback, Murray became fascinated with sci-fi. When he first encountered “Dune,” he said, “I can remember being hungry reading it, forgetting to eat.” Years later, when he formed Hello Games, he told his co-founders—two coders named Ryan Doyle and David Ream and an artist named Grant Duncan—to consider their childhoods as source material for games. “I said, ‘Think back to when you were a kid. What did you want to be? A cowboy, an astronaut, a stuntman, a fireman, a policeman, whatever.’ ” Working in Murray’s living room, the four men at first devoted their attention to fundamentals, writing software to determine how objects would behave in a theoretical game space. “We mentioned Pixar a lot, because their work is colorful but not childish,” Duncan told me. The inspiration for Joe Danger came from a stuntman figure that Duncan found in a box of old toys.

The partners worked for a year, and went nearly broke. “I had sold off my PS3,” Murray told me, referring to his PlayStation. “We were down to the bare essentials.” For the release, in June, 2010, Murray bought some cheap cider. “We decided, we are going to drink cider, and it will come out and do what it will do,” Murray said. The game did not appear online in the United Kingdom until after midnight. When it first loaded, the screen was black, causing momentary panic. But within an hour the partners had made back their money.

Murray started No Man’s Sky one morning two years later, during a difficult negotiation with Microsoft over the marketing for Joe Danger’s sequel. “Everyone else was at home,” he recalled. “I was in the studio on my own, and I just started programming. I was furious, and I kept working until three in the morning. Looking back, I think I had the equivalent of a midlife career crisis. What is the point of these games? Like, Joe Danger—how impactful is it?” Murray and his co-founders had joked that they would one day make an ambitious game, which they called Project Skyscraper. The following day, he told Duncan and Ream, “We’re doing it.” He had created only a small patch of sample terrain, without a clear sense of what it would be, and Ream told me, “The thing was quite abstract, and we were like, What are you doing?” Duncan was skeptical. Artists he knew were dismissive of the technique that Murray was using; one had warned him that the results “look like shit.”

Duncan and Ream began to design a relatively conventional game, in the mold of Joe Danger—another humorous take on a childhood dream profession. Their working title was Space Cadets. But Murray urged them to consider the project in more open-ended terms. “I had this feeling: I want to start a new company, like almost an alternate path for Hello Games,” he told me. He split his company into two, and for months the three men, along with a coder named Hazel McKendrick, worked on No Man’s Sky in secrecy, in a locked room.

To build a triple-A game, hundreds of artists and programmers collaborate in tight coördination: nearly every pixel in Grand Theft Auto’s game space has been attentively worked out by hand. Murray realized early that the only way a small team could build a title of comparable impact was by using procedural generation, in which digital environments are created by equations that process strings of random numbers. The approach had been used in 1984, for a space game called Elite, which Murray played as a child. Mark Riedl, the director of Georgia Tech’s Entertainment Intelligence Lab, told me, “Back in those days, games had a lot of procedural generation, because memory on computers was very small; it was largely forgotten, but now it is being rediscovered.” (Minecraft, an expansive world that was designed by only one person, also uses the technique.) Games based on procedural generation often suffer from unrelenting sameness, marked by easily detectable algorithmic patterns (imagine a row of more or less identical trees, stretching to infinity), or from visual turmoil. But Murray hoped that if a middle ground could be achieved he could create graphically rich environments worthy of discovery—a fictional version of exploration that had a grain of reality to it.

Once Murray decided on the basic mathematical architecture of the game, he needed random numbers to feed into it. No computer can generate true randomness, but programmers use a variety of algorithms, and sometimes the physical limitations of the machine, to create approximations. “Computers can understand numbers only of a set size,” Murray told me. “When you are building a computer, you are literally saying, This is where a number gets stored, and this is how many digits can fit in that space.” For a game console, that space is sixty-four bits. When a player first turns on No Man’s Sky, a “seed” number—currently, the phone number of a programmer at Hello Games—is plugged into an equation, to generate long strings of numbers, and when the computer tries to store them in that sixty-four-bit space they become arbitrarily truncated. “What you are left with is a random number,” Murray said. The seed defines the over-all structure of the galaxy, and the random numbers spawned from it serve as digital markers for stars. The process is then repeated: each star’s number becomes a seed that defines its orbiting planets, and the planetary numbers are used as seeds to define the qualities of planetary terrain, atmosphere, and ecology. In this way, the system combines entropy and structure: if two players begin with the same seed and the same formulas, they will experience identical environments.
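
In code, the seeding scheme described here might look something like the Python sketch below. It is a reconstruction, not Hello Games' source: the hash function, the constants, and the stand-in seed are all assumptions, and only the core idea (one parent seed deterministically spawning 64-bit child seeds through truncation) comes from Murray's account.

import hashlib
import struct

MASK64 = (1 << 64) - 1  # a 64-bit slot holds 2**64 values: 18,446,744,073,709,551,616

def child_seed(parent: int, index: int) -> int:
    # Mix the parent seed and a child index into a long number, then keep
    # only what fits in 64 bits: the "arbitrary truncation" Murray describes.
    digest = hashlib.sha256(struct.pack("<QQ", parent & MASK64, index & MASK64)).digest()
    return int.from_bytes(digest[:8], "little")

GALAXY_SEED = 1234567890  # stand-in; the real seed is a programmer's phone number

star = child_seed(GALAXY_SEED, 42)  # seed for star no. 42 in the galaxy
planet = child_seed(star, 3)        # seed for one of that star's planets

# Same seed plus same formulas means every player sees the same universe:
assert child_seed(GALAXY_SEED, 42) == star
print(hex(star), hex(planet))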

The design allows for extraordinary economy in computer processing: the terrain for eighteen quintillion unique planets flows out of only fourteen hundred lines of code. Because all the necessary visual information in the game is described by formulas, nothing needs to be rendered graphically until a player encounters it. Murray compared the process to a sine curve: one simple equation can define a limitless contour of hills and valleys—with every point on that contour generated independently of every other. “This is a lovely thing,” he said. “It means I don’t need to calculate anything before or after that point.” In the same way, the game continuously identifies a player’s location, and then renders only what is visible. Turn away from a mountain, an antelope, a star system, and it will vanish just as quickly as it appeared. “You can get philosophical about it,” Murray once said. “Does that planet exist before you visit it? Sort of not—until the maths create it.”
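
The sine-curve point is easy to demonstrate. In the Python sketch below (an illustration, not the game's code), terrain height is a pure function of the coordinates, so an engine can compute exactly the patch the player sees and discard it the moment the player turns away; the layered sines stand in for whatever noise functions No Man's Sky actually uses.

import math

def height(x: float, z: float) -> float:
    # Height at (x, z), computed from the coordinates alone: no neighbouring
    # points, no stored map. Layered sines are a stand-in for real noise.
    return (10.0 * math.sin(0.010 * x) * math.cos(0.013 * z)
            + 3.0 * math.sin(0.050 * x + 1.7)
            + 0.8 * math.sin(0.210 * z + 0.3))

def visible_patch(cx: float, cz: float, radius: int = 2, step: float = 1.0):
    # Evaluate only the points inside the player's view; everything outside
    # it remains pure formula until someone looks at it.
    return [[height(cx + i * step, cz + j * step)
             for i in range(-radius, radius + 1)]
            for j in range(-radius, radius + 1)]

print(visible_patch(1000.0, -250.0)[0][:3])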

Initially, the system proved fantastically difficult to control. It was generating planetary terrain that was wild, alien-seeming, and also impossible to traverse. If Murray pushed the system in the other direction, the terrain became dull and repetitive. There were also specific natural features, such as rivers, that did not lend themselves easily to equations. To make a river in a conventional game, an artist creates a mountain, places a digital drop of water on it, and maps the water’s trajectory downward. “That is the correct way,” Murray told me. But the process involves laborious computation, and requires that the topography be known in advance. Because of No Man’s Sky’s algorithmic structure—with every pixel rendered on the fly—the topography would not be known until the moment of encounter. Theoretically, the game could quickly render a sample of the terrain before deciding that a particular pixel belonged to a river, but then it would also have to render a sample of the terrain surrounding that sample, and so on. “What would end up happening is what we call an intractable problem to which there is only a brute-force solution,” Murray said. “There’s no way to know without calculating everything.” After much trial and error, he devised a mathematical sleight of hand to resolve the problem. Otherwise, the computer would have become mired in building an entire world merely to determine the existence of a drop of water.

Every morning, at a little past ten, Murray leads a brief meeting with his team. A dozen coders and artists stand among the rows of computers, or swivel their chairs around. In a quick rundown, problems are identified, goals set; in the evening, work is checked into a master build. Murray delegates readily but watchfully.

During my visit, four artists outlined their plans, and then sat down to work. The artists devise archetypes for the coders’ algorithms to mutate. One spent a day making insects: looking up images on Pinterest, designing features for an insect archetype, studying how the algorithms deformed the archetype in hundreds of permutations, then making corrections. “It’s a constant slog of iterating and polishing,” the art director, Grant Duncan, told me. He was working that day on architectural modules that could be combined in myriad ways. Because small changes can have unpredictable effects—the color of a single plant infecting every tree, rock, and animal on a planet—his team uses an algorithmic “drone” that navigates the universe, taking snapshots to measure the repercussions of decisions. Occasionally, Duncan stopped his work to offer suggestions. Reviewing some insects, he said, “Except for the colors, these shapes are kind of working—but the others are bonkers.”

Murray sat down with David Ream, whose focus is coding the game-play systems. Ream had been working to make spaceships handle more realistically in flight, and he wanted Murray to test his work. “I have to give the controller to Sean, because I find that I naturally play the game so that it works, because I know all the numbers,” Ream said. “And also because we have our strong bonds, so I can tell Sean to fuck off.”

Murray played for a few minutes, dogfighting with enemy ships. “This is so much more enjoyable than it was on Sunday,” he said. But he was worried that excessive realism would confuse players who were unaccustomed to the frictionless quality of motion in space. He suggested some tweaks. During the testing, Murray noticed that his ship had exited a planet’s atmosphere too rapidly, without the drama it had in the E3 build. “We’re missing something that used to be there,” he said. “It was a surprise to be suddenly in space.”

Hazel McKendrick walked over and said, “The atmosphere isn’t as thick.” She had adjusted formulas to provide a more natural effect of sunlight passing through it, and a better view of nearby planets. To re-create the old feel, she suggested, the atmosphere’s depth could be artificially increased as the ship passes through.

“So, annoyingly, by doing it wrong you get a nicer effect,” Ream said.

Throughout the day, other members of the team worked on shadows, on creature artificial intelligence, on imbuing objects with “collision,” or physicality. After a coder gave trees and rocks collision, they became destroyable; he shot at a hillside, causing rocks to tumble down, hitting one another in a cascade. Peculiar problems had emerged from the sphericality of planets; in conventional video games, digital spaces are perfectly flat. Until gravity was precisely calibrated, objects sometimes fell off planets. One of the programmers, Charlie Tangora, described a problem with cowlike creatures that kept walking on cave ceilings; it took some troubleshooting before he realized, “Oh, wow. You’re in the Southern Hemisphere. Everything is upside down.”

When Murray wasn’t being pulled away from his computer, he worked on the terrain. He told me that he was always searching for ideas. Last year, he saw the film “Interstellar,” which features scenes of a lifeless snowy planet that “had some very perfect ‘mathlike’ terrain.” The next day, he developed formulas that would create similar crevasses. More recently, he had noticed geological formations that an artist had hand-designed for another video game, and realized that the algorithms of No Man’s Sky were not equipped to make them. The problem nagged at him, until he found an equation, published in 2003 by a Belgian plant geneticist named Johan Gielis. The simple equation can describe a large number of natural forms—the contours of diatoms, starfish, spiderwebs, shells, snowflakes, crystals. Even Gielis was amazed at the range when he plugged it into modelling software. “All these beautiful shapes came rolling out,” he told Nature. “It seemed too good to be true—I spent two years thinking, What did I do wrong? and How come no one else has discovered it?” Gielis called his equation the Superformula.
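
Gielis's equation is compact enough to quote: in polar form, the radius at angle phi is r(phi) = (|cos(m*phi/4)/a|^n2 + |sin(m*phi/4)/b|^n3)^(-1/n1). The Python snippet below evaluates it for one set of parameters; the values are illustrative guesses, not the ones Hello Games uses.

import math

def superformula(phi: float, m: float, n1: float, n2: float, n3: float,
                 a: float = 1.0, b: float = 1.0) -> float:
    # Radius as a function of angle: m sets the symmetry; the n's control
    # how pinched or bloated the lobes become.
    t = (abs(math.cos(m * phi / 4.0) / a) ** n2
         + abs(math.sin(m * phi / 4.0) / b) ** n3)
    return t ** (-1.0 / n1)

# A five-lobed, starfish-like outline (parameters chosen for illustration):
for deg in range(0, 360, 45):
    r = superformula(math.radians(deg), m=5, n1=0.3, n2=0.3, n3=0.3)
    print(deg, round(r, 3))

Small shifts in m and the n's, swept into three dimensions, produce the range of shapes Gielis saw "come rolling out."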

Murray, sitting before his monitor, typed the Superformula into the terrain of a test planet. He began simply, creating walnut-shaped forms that floated in an infinite grid over a desert. The image resembled a nineteen-eighties album cover, but the over-all look was not the point. Whenever he refreshed the rendering, the floating shapes changed. Many were asymmetrical, marred by depressions and rivulets. Game designers refer to lines of code that require lots of processing time as “costly.” The Superformula is cheap.

“One of the hardest things for us to do is to create coherent shapes,” he told me as he worked. In order to produce varied landscapes, a formula must be able to cope with a wide range of random information without generating mathematical anomalies that cause glitches. “This sounds ridiculous, but it is hard to find a formula that you can rely on,” he said. The Superformula appeared to be reliable. He pointed to a rocky overhang, which looked like desert geology sculpted by harsh erosion. “This is quite naturalistic,” he said. He added more noise to the formula, rotated the shapes it made, played with their scale, buried them beneath the planet surface. “This is effectively more turbulence entering the maths.” He envisioned using the Superformula throughout the game, especially at the center of the galaxy, where landscapes would become more surreal. With only small shifts in its parameters, the equation was producing impressive variability. In one rendering, it produced rolling hills. Murray refreshed the screen: a star-shaped rock formation appeared. He seemed pleased. “It’s always a good sign when I am clicking the button, and there is that slight amount of excitement,” he said.

The allure of taking one more peek at the unknown had a way of distracting even the game’s creators. At one point, I sat next to Duncan, who was populating a test planet with alien fungus. “I’m trying to develop some weirder vegetation,” he told me. Birds flew high above a towering black obelisk—space architecture inspired by John Harris, a British illustrator. Duncan had activated the bird algorithm, but, oddly, a herbivorous dinosaur had also appeared. “I made this world for this test, and I have never encountered animals on it before,” he said. “I don’t know what it’s going to do.” The dinosaur scampered off. After several minutes, I asked what attributes of the fungus he was studying. “I’m just exploring,” he said, sheepishly. “Sometimes I’m, like, What am I doing? I’m supposed to be working.”

By May, the team was working furiously. Murray’s hair and beard were growing wilder. Discussions with Sony became more complicated; the company was banking on No Man’s Sky as a genre-defying hit, and, as its marketers began to consider different ways to promote it, the story of the game was slipping from his full control. Murray told me that he couldn’t sleep at night. “The biggest worry for me is that we release the game because of all the momentum behind it before we are happy with it,” he said. Because of the game’s scope, and because he had decided not to reveal key features, he feared that it had become a Rorschach test of popular expectation, with each potential player looking for something in it that might not be there. “Hype is a difficult taskmaster,” David Braben, one of the creators of Elite, told me when I asked him what he thought about the game.

Even a feature as simple as the Superformula—a hundred and twenty lines of code—created complications when it was written into the terrain-generation system. When I asked Murray how it was working, he told me, “It’s cool, though it currently plays hell with creature A.I.” He was spending as much time as he could coding, but distractions were hard to block.

People at Sony wanted to issue a companion book, and, once he realized that it might be inevitable, he decided to get involved. One afternoon, he met with Dave Gibbons, the co-creator of the “Watchmen” comic series, to discuss his possible role as editor. In the upstairs lounge, they talked excitedly about Philip K. Dick, and about “Terran Trade Authority,” an old sci-fi series that Murray had loved. Then Murray turned toward the flat-screen TV and brought Gibbons onto a snowy mountainous planet, from a build that had been created after E3. “A living, breathing universe,” he said. “I can walk in any direction for days and days, and I will eventually walk the entire planet and come back to where I started.”

“So you could really explore one planet and map it,” Gibbons said.

“For some people, that will be all they do, and they’ll be able to have quite a nice game,” Murray said. He climbed into a ship, and flew through an asteroid belt. “The thing that we haven’t really shown publicly, but I think is really cool, is that if I press a button I can pop out to a galactic map,” he said. He pressed a button, and all of space shrank into a pinpoint of light, representing that solar system.

The galactic map—as bright and compelling as an image from a Carl Sagan documentary—gave the ship’s location by framing its proximate sun in a white square. A panel of text noted the solar system’s computer-generated name, Ethaedair; a diagram of vectors indicated stars that were reachable with the ship’s hyperdrive. “This has been in games before, but it has always been a fake,” Murray said, gesturing to the map. “Normally, it would be a painting that somebody has made, and there would be two little levels that you can go between, or ten levels, each set on a pretend ‘solar system.’ ” Like a magician working toward a showstopper, he added, offhandedly, “But it is quite nice to just pull around . . .” He manipulated his controller, and all of space rotated around Ethaedair’s sun. Stars and plumes of luminous cosmic matter arced past; what had seemed like a two-dimensional representation suddenly revealed itself to be full of depth. Gibbons gasped, and Murray began to speak more softly: “If I pull back a bit, you start to get a sense of the size of what we are building.” Millions of stars drifted by. Gibbons laughed softly. “It’s like a huge box of chocolates!” he said.

“Maybe I should just go a little faster,” Murray said. Light-years of space unfolded at a terrific rate. It may not have been the universe as it actually was, but there was nonetheless an awesome reality on display: the system’s vast mathematics. Murray turned toward a phosphorescent glowing orb. “That’s the center,” he said. This version of the game allowed Murray to leap to any solar system he wanted, but, drawing out the suspense, he moved deeper into the galactic map’s three-dimensional space. “This build was brought together so I could do a demo onstage. I chickened out, because when I press this button, basically, I don’t know what we’re going to see—and it can be a really weird way to end a demo. Something might go terribly wrong. Or we might find a planet that is quite boring. But I can see now that I should have gone with it, because even when it is boring it still is something new.”

“It is a bit like it really does exist, isn’t it?” Gibbons said.

Murray stopped at a star cluster and admired its density. Finally, overcoming his hesitancy, he picked a destination. “I can’t promise if this is going to be interesting,” he said. The map vanished. He was back in his cockpit. His hyperdrive kicked on. Then all of space blurred, and the ship hurtled into the unknown. ♦

http://www.newyorker.com/magazine/2015/05/18/world-without-end-raffi-khatchadourian?mbid=social_facebook

164
Engineers in the Netherlands say a novel solar road surface that generates electricity and can be driven over has proved more successful than expected.

Last year they built a 70-metre test track along a bike path near the Dutch town of Krommenie on the outskirts of Amsterdam.


In the first six months since it was installed, the panels beneath the road have generated over 3,000 kWh. This is enough to provide a single-person household with electricity for a year.

"If we translate this to an annual yield, we expect more than the 70 kWh per square metre per year," says Sten de Wit, spokesman for SolaRoad, which has been developed by a public-private partnership.


"We predicted [this] as an upper limit in the laboratory stage. We can therefore conclude that it was a successful first half year."

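A back-of-envelope check of those figures, in Python (not SolaRoad's own math): the article gives only the track's length, so the active panel area below is an assumption, and naively doubling six months of output ignores seasonal swings in sunlight.

generated_kwh_6mo = 3000   # reported output for the first six months
panel_area_m2 = 70         # ASSUMED active area; the article states only a 70-metre track

annual_kwh = generated_kwh_6mo * 2        # naive extrapolation
print(annual_kwh)                         # 6000 kWh per year
print(round(annual_kwh / panel_area_m2))  # ~86 kWh per square metre, above the 70 kWh target

# A single-person household in the Netherlands uses very roughly 2,000 to
# 3,000 kWh per year, which is why six months of output is already described
# as a year's worth of electricity.
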
The project took cheap mass-produced solar panels and sandwiched them between layers of glass, silicon rubber and concrete.

"This version can have a fire brigade truck of 12 tonnes without any damage," said Arian de Bondt, a director at Ooms Civiel, one of a consortium of companies working together on the pilot project.

"We were working on panels for big buses and large vehicles in the long run."

The solar panels are connected to smart meters, which optimise their output and feed the electricity to street lighting or into the grid.

"If one panel is broken or in shadow or dirt, it will only switch off that PV panel," said Jan-Hendrik Kremer, Renewable Energy Systems consultant at technology company Imtech.

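A minimal Python sketch of the per-panel switching Kremer describes; the data model and the cut-off threshold are invented for illustration, since the article does not say how a panel signals trouble.

from dataclasses import dataclass

@dataclass
class Panel:
    panel_id: int
    output_watts: float
    fault: bool = False

def harvest(panels, min_watts: float = 5.0):
    # Switch off only the panels that are faulty, shaded, or dirty (signalled
    # here by a fault flag or low output); the rest keep feeding the grid.
    live = [p for p in panels if not p.fault and p.output_watts >= min_watts]
    off = [p.panel_id for p in panels if p not in live]
    return sum(p.output_watts for p in live), off

total, switched_off = harvest([Panel(1, 120.0), Panel(2, 0.0, fault=True),
                               Panel(3, 2.0), Panel(4, 115.0)])
print(total, switched_off)  # 235.0 [2, 3]
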
Five years of research

The research group spent the last five years developing the technology, but during the first six months of the trial a small section of a coating, designed to give grip to the smooth glass surface without blocking the sun, delaminated.

This was due to temperature fluctuations causing the coating to shrink. The team is now working on an improved version of the coating. More than 150,000 cyclists have ridden over the panels so far.

"We made a set of coatings, which are robust enough to deal with the traffic loads but also give traction to the vehicles passing by," said Stan Klerks, a scientist at Dutch research group TNO.

He said the slabs also had to "transfer as much light as possible on to the solar cells so the solar cells can do their work".

The group behind the project is now in talks with local councils in the Netherlands to see if the technology can be rolled out in other provinces. A cooperation agreement has also been signed with the US state of California.

"Solar panels on roofs are designed to have a lifetime, which is typically 20/25 years," said de Wit.

"This is the type of lifetime that we also want for these types of slabs. If you have a payback time of 15 years then afterwards you also have some payback of the road itself so that makes the road cheaper in the end."

http://www.aljazeera.com/news/2015/05/150510092535171.html

in other solar FREAKIN roadways news, the Numbers page from the Solar Freakin Roadways people seems to be something like 8 months late on getting updated now

165
General Discussion / Age of Ultron was kinda Meh (spoilers)
« on: May 08, 2015, 04:52:19 PM »
Well, I’ve finally seen Avengers: Age of Ultron.

I shuffled out of the packed theater feeling…letdown.

That’s not to say I didn’t enjoy myself.

Aside from all the insane people who brought their very young, very noisy children to a movie wholly inappropriate for them, I had a pretty good time.

But I didn’t have the same sort of good time I had at the first Avengers. Compared to the last major Marvel movie I saw—Guardians of the Galaxy—Age of Ultron simply pales.
In fact, I’m enjoying the Netflix Original Marvel TV show, Daredevil, quite a lot more than anything I saw today, though Ultron is a lot funnier. The running “watch your language” joke is cute. I chuckled many times, though not enough times to justify the price of admission on comedy alone.

So where did Age of Ultron go wrong? It’s filled with action, wise-cracks, and some great special effects, but something is still missing. It’s the perfect recipe on paper, but the final meal is…underwhelming.

Warning: There will be spoilers.

I’ve boiled it down to five major complaints that encompass all the little ways this Avengers film fell short. Let’s start with…

1. ‘Age of Ultron’ is too sappy for no reason, and without a payoff.

The movie constantly tries to tug at your heart-strings (but there are no strings on me!) and almost always fails to land a real emotional punch.

Black Widow and Hulk have a nice moment as she calms the “big guy” down. But that transforms weirdly, quickly, into an oddly forward Natasha Romanoff hitting on a Bruce Banner almost as confused as me.

By the end of the film, Banner is gone and Natasha is all bummed out, and the audience is pretty much unmoved. It’s a weird little side plot that doesn’t add anything but confusion to the story.

Meanwhile, Hawkeye, now questioning his relevance to the team (much as audiences did back when the first Avengers came out), reveals his wife and family to his more super-heroic pals. This is an attempt to humanize him, one presumes, but it just feels…off.

What’s the point? Why do we need to humanize Hawkeye? Just give him more funny lines and let him shoot things with exploding arrows. All these little touchy-feely distractions just slow down an otherwise action-packed adventure.

Which leads us to…

2. ‘Age of Ultron’ has a serious pacing problem.

It’s pretty standard in action movies to sprinkle in slower moments, comic relief, and so forth in between the action to give everyone time to catch their breath.

This works okay in Age of Ultron, but for some reason a lot of the slower scenes—not all, but a lot—just don’t work at all, and serve only to muck up the film’s momentum.

The “lift Thor’s hammer” scene is a good example of how to do humor in an action movie (though most everyone had already seen it, thanks to the over-marketing campaign moviegoers have been subjected to).

But many other slow scenes felt bogged down, and there was rarely a sense that our heroes were really in enough trouble to need to catch their breath in the first place.

Hiding out at Hawkeye’s farm? Yeah, these guys don’t even look beat up. Why not just turn around and beat up Ultron? He’s not at all scary (like he’s supposed to be). More on that later.

The wood-chopping was funny—there are lots of funny bits scattered about the film—and the dream sequences were interesting, but most of the slower moments were just boring.

Not even Samuel L. Jackson could save the day.

3. Unfortunately, the action scenes don’t improve matters.

I could make a mini-list about everything wrong with the action scenes in this movie. Other than a couple gems, the action in Age of Ultron fell well short of its predecessor.

I rather enjoyed watching the Hulk duke it out with an over-sized Iron Man, especially with all the macho talk and testosterone-fueled posturing (turns out, size really does matter!)

The Hulk/Iron Man fight reminded me of the best fights of the last movie, which often pitted our super-heroes against one another. And I think one reason I liked these fights so much is that we didn’t know who would win or what the outcome would be.

But we pretty much do know the outcome of the fights in Age of Ultron. That robot army doesn’t stand a chance. There’s not a moment in the entire film when it seems like the heroes are all that hard-up.

The one possible threat that might put a dent in our Avengers’ plot to save the earth was the creation of an Infinity-stone powered version of Ultron. Instead, we got Vision—a Jarvis-bot reprogrammed with Tony Stark’s Jarvis AI (which is even more awesome than Ultron’s AI, I guess) and released by Thor’s Mighty Hammer.

I think Vision is a cool character, and I like how he’s portrayed here, but talk about a serious letdown from a plot perspective. They castrate the big bad’s plans well before the final showdown.

The final showdown, meanwhile, is wholly lackluster except for the death of newly-introduced Flash, er, Quicksilver, who has one of the coolest powers and looks pretty dead by the end (but who knows…)

The stakes are rarely, if ever, high in these action sequences, or in the entire film for that matter (we are all fairly sure that Ultron will fail and that none of our heroes will die, nothing horrible will happen, etc.)

Meanwhile, the most bombastic action sequences are simply too messy and chaotic. When you’re trying to watch six or seven different heroes at once fight off (rather lame) robot attackers, it can be a little hard to follow.

While there are some terrific special effects at play, and some decent fight choreography, there just aren’t all that many “wow” moments to make these fights feel distinct, either. Maybe that’s just because so much stuff is going on all the time.

Maybe it’s because we’ve kind of seen some of that “so much stuff” before, in the last Avengers movie.

Think of the scene in the first Avengers when they’re trying to stop the SHIELD helicarrier from crashing. Iron Man, Captain America, and co. are all furiously trying to save the ship. It’s a great, tense scene. We move back and forth between this battle and other action, but it all flows together really well.

Fast-forward to Age of Ultron, with Iron Man trying to get to “the core” of the big machine Ultron has built in a giant flying city, and it not only doesn’t really make sense, it’s also hard to follow. The same dynamic is at play, but it’s much, much messier, and much less interesting.

4. We’re introduced to too many new (and old) characters, but not all the best characters.

Also worth noting: Quicksilver is the hero formerly known as Kick-Ass.

There’s also the matter of an ever-growing cast of characters. That’s something of an inevitability when you’re making something like the Marvel Cinematic Universe (which, ya know, is making an Ant-Man movie of all things.)

But still, it means we have a lot of divvying up of screen-time and not a ton of focus. We have Scarlet Witch and Quicksilver (but not the X-Men version of Quicksilver!) as well as Don Cheadle’s War Machine, and Falcon (though black Avengers are still just getting cameos here, and we’ll see if Scarlet Witch is anything other than a bit part in future films).

Seriously, the X-Men look positively diverse compared to the Avengers. Regardless, we still have an overly-crowded cast, with Nick Fury and various SHIELD agents also making an appearance. Meanwhile some of our favorite characters—or at least, Loki—don’t appear at all.

Which leads us to the most important new character in the entire movie: Ultron himself.

5. Unfortunately, the villain is lame.

Ultron is the funniest evil robot I’ve seen in a movie in a long time, maybe ever. He’s a lot like his “maker” Tony Stark, but with a dark, demented side.

But he’s not as scary as the Winter Soldier, or as interesting or entertaining as duplicitous Loki, or as ominous as say, hyper-intelligent Ava from Ex Machina.

In fact, Ultron is a really terrible super-villain. He’s a “villain of the week” at best, and not even a very good one. He’s supposed to be this enormously powerful AI that can use the internet however he pleases, yet he barely does anything other than find ways to blow things up. That doesn’t sound like a hyper-intelligent and adaptable being; it sounds like a cartoon villain.

So instead of using his tech to shut down global banking systems, hack military servers, start a nuclear war*, or do really anything intelligent at all, Ultron builds a great big bomb that requires him to lift an entire city out of the ground in order to detonate.

*Note: I realize in the film Ultron was stymied in his attempts to gain access to nuclear codes. That does not mean a more clever villain couldn’t have used his technological capabilities to start a war. He didn’t bother to create any chaos, any distractions for the heroes outside of the twins. A better villain would have thrust the world into chaos prior to his big destroy the world segment. Ultron failed to do anything particularly interesting in this regard.

This entire bad guy was devised in order to pull off a special effects gimmick. That’s the extent of thought that went into Ultron. Never once (or at least not for more than a split second if we watched a preview) do we think Ultron will be good. We aren’t given any time for him and Stark to form a relationship that could later turn to hatred.

There is none of the passion that makes a Frankenstein’s Monster actually work as a dramatic element. Stark is no Frankenstein, and Ultron is no Monster. They barely have any interaction at all. There is never that process that allows us to accept Ultron first as Stark’s “child” and then as his antagonist. It feels so rushed, so pointless.

(Note: I’m definitely feeling a little extra biased here having just watched Ex Machina, but I think I would feel this way regardless. If you’re going to go for Frankenstein’s monster, go for broke.)

The villain poses very little threat, very rarely puts any of the heroes into any sort of bind (save once, with the help of Scarlet Witch), and fails to impress at every turn. And my god, the Pinocchio song that he sings in the previews, and again in the movie—let this go down as one of the MCU’s greatest mistakes since Disney acquired the comic book company.

How disappointing.

All of these complaints aside, I still had fun at the Avengers sequel. Yes, there were some very irritating parents in the audience who brought kids far too young to a very long and violent movie, but aside from that I had fun. It was an entertaining sequel, but very much (one hopes) the middle-child of the Avengers films.

Let’s hope Infinity War and the Guardians of the Galaxy sequel fare better.

If I think of other things to add to this list, I’ll update the post.

Meanwhile, if you’ve seen the film and would like to chime in (with how astute and obviously correct I am, unless you truly must tell me what an idiot I am…) feel free to do so in the comments or on social media.

http://www.forbes.com/sites/erikkain/2015/05/02/avengers-review-5-things-age-of-ultron-gets-dead-wrong/

ultron movie with goofy unintimidating james spader ultron is goofy and unintimidating

166
"The researchers... found that racist searches were correlated with higher mortality rates for blacks, even after controlling for a variety of racial and socio-economic variables."

Where do America's most racist people live? "The rural Northeast and South," suggests a new study just published in PLOS ONE.

The paper introduces a novel but makes-tons-of-sense-when-you-think-about-it method for measuring the incidence of racist attitudes: Google search data. The methodology comes from data scientist Seth Stephens-Davidowitz. He's used it before to measure the effect of racist attitudes on Barack Obama's electoral prospects.

"Google data, evidence suggests, are unlikely to suffer from major social censoring," Stephens-Davidowitz wrote in a previous paper. "Google searchers are online and likely alone, both of which make it easier to express socially taboo thoughts. Individuals, indeed, note that they are unusually forthcoming with Google." He also notes that the Google measure correlates strongly with other standard measures social science researchers have used to study racist attitudes.

This is important, because racism is a notoriously tricky thing to measure. Traditional survey methods don't really work -- if you flat-out ask someone if they're racist, they will simply tell you no. That's partly because most racism in society today operates at the subconscious level, or gets vented anonymously online.

For the PLOS ONE paper, researchers looked at searches containing the N-word. People search frequently for it, roughly as often as they search for "migraine(s)," "economist," "sweater," "Daily Show," and "Lakers." (The authors attempted to control for variants of the N-word not necessarily intended as pejoratives, excluding the "a" version of the word, which analysis revealed was often used "in different contexts compared to searches of the term ending in '-er'.")

It's also important to note that not all people searching for the N-word are motivated by racism, and that not all racists search for that word, either. But aggregated over several years and several million searches, the data give a pretty good approximation of where a particular type of racist attitude is the strongest.

Interestingly, on the study's map, the most concentrated cluster of racist searches happened not in the South, but rather along the spine of the Appalachians running from Georgia all the way up to New York and southern Vermont.

Other hotbeds of racist searches appear in areas of the Gulf Coast, Michigan's Upper Peninsula, and a large portion of Ohio. But the searches get rarer the further West you go. West of Texas, no region falls into the "much more than average" category. This map follows the general contours of a map of racist Tweets made by researchers at Humboldt State University.

So some people are sitting at home by themselves, Googling a bunch of racist stuff. What does it matter? As it turns out, it matters quite a bit. The researchers on the PLOS ONE paper found that racist searches were correlated with higher mortality rates for blacks, even after controlling for a variety of racial and socio-economic variables.

"Results from our study indicate that living in an area characterized by a one standard deviation greater proportion of racist Google searches is associated with an 8.2% increase in the all-cause mortality rate among Blacks," the authors conclude. Now, of course, Google searches aren't directly leading to the deaths of African Americans. But previous research has shown that the prevalence of racist attitudes can contribute to poor health and economic outcomes among black residents.

"Racially motivated experiences of discrimination impact health via diminished socioeconomic attainment and by enforcing patterns in racial residential segregation, geographically isolating large segments of the Black population into worse neighborhood conditions," the authors write, summarizing existing research. "Racial discrimination in employment can also lead to lower income and greater financial strain, which in turn have been linked to worse mental and physical health outcomes."

http://www.washingtonpost.com/blogs/wonkblog/wp/2015/04/28/the-most-racist-places-in-america-according-to-google/?tid=sm_fb

167
A Nashville swingers club has undergone a conversion — it says it's now a church — in order to win city approval so it can open next to a Christian school.

The story began last fall, when a fixture in downtown Nashville called The Social Club sold its building and purchased a new one in a run-down office park several miles to the east.

The new building is geographically isolated at the end of a dead-end street, but it is near the back of Goodpasture Christian School, a large private school serving pre-school through high school children.

It might have been years before school officials and parents learned what was going on inside The Social Club — its website says it is "a private club for the enjoyment of both men and women ... to engage in any sexual activity" — if someone had not sent anonymous letters to the school president and the local councilwoman. Both say the person who tipped them off claimed to be a concerned club member, although they don't know that for sure.

Parents and religious leaders were called on to pack the Metro Nashville Council chambers to support a zoning change to prevent the club from opening. That's when the club, which had spent $750,000 on the building and begun renovations, suddenly transformed into a church.

The United Fellowship Center's plans are nearly identical to those of The Social Club but with some different labels. The dance floor has become the sanctuary. Two rooms labeled "dungeon" are now "choir" and "handbells." Forty-nine small, private rooms remain, but most of them have become prayer rooms.

Larry Roberts is the attorney for the club-turned-church. He previously vowed to take the city to court. Now, he says, it's the city that will have to sue.

"The ball is in Metro's court ... We've now gotten a permit to meet as a church, and a church is something that cannot be defined under the U.S. Constitution," he said.

Roberts said church members will "meet and have fellowship" in the new building, but no sex will take place there. "If people have something else in mind, they will go somewhere else."

Several of those who opposed The Social Club say they are skeptical of the change.

"I find it hard to believe that they've invested that kind of money and they're just going to change the activity," Goodpasture President Ricky Perry said. "I really hope that it's true."

Metro Zoning Administrator Bill Herbert said the department takes applicants at their word, so inspectors are treating the building as a church. As long as the United Fellowship Center is in compliance with codes, it will receive permission to operate.

"If it is not operating as a church, that's an enforcement issue," he said. "We can tell them to cease and desist, and if they refuse we can enforce it through the courts."

If it turns out to continue operating as a swingers club, it could also face trouble with the state after lawmakers passed a bill last month disallowing private sex clubs within 1,000 feet of schools, parks, day cares and houses of worship.

Metro Councilwoman Karen Bennett is a Goodpasture graduate who sponsored the legislation to change the zoning for private clubs. She said she will be watching to make sure the United Fellowship Center truly does operate as a church.

"I've heard many, many people say they're planning to attend when it opens," she said.

http://talkingpointsmemo.com/news/sex-club-nashville-blessing-church

168
Tech Heads / Windows Just Crapped Itself
« on: April 21, 2015, 10:27:07 PM »
Fffuuuuuck.

Desktop turned itself off 5 or 6 times over the course of 20 minutes before finally telling me Windows couldn't start.

Let it run its repair utility; ultimately got prompted to restore to an earlier date.

Did. Hasn't shut off again, but I don't remember the password I was using at this earlier date and the hint I gave doesn't help one iota. And I don't have a reset disk or USB.

Fuuuuuuuck.

Guess the desktop is back to being an Ubuntu-only machine.

169
In 2001, the Portuguese government did something that the United States would find entirely alien. After many years of waging a fierce war on drugs, it decided to flip its strategy entirely: It decriminalized them all.

If someone is found in the possession of less than a 10-day supply of anything from marijuana to heroin, he or she is sent to a three-person Commission for the Dissuasion of Drug Addiction, typically made up of a lawyer, a doctor and a social worker. The commission recommends treatment or a minor fine; otherwise, the person is sent off without any penalty. A vast majority of the time, there is no penalty.

Fourteen years after decriminalization, Portugal has not been run into the ground by a nation of drug addicts. In fact, by many measures, it's doing far better than it was before.

The background: In 1974, the dictatorship that had isolated Portugal from the rest of the world for nearly half a century came to an end. The Carnation Revolution was a bloodless military-led coup that sparked a tumultuous transition from authoritarianism to democracy and a society-wide struggle to define a new Portuguese nation.

The newfound freedom led to a raucous attitude of experimentalism toward politics and economy and, as it turned out, hard drugs.

Portugal's dictatorship had insulated it from the drug culture that had swept much of the Western world earlier in the 20th century, but the coup changed everything. After the revolution, Portugal gave up its colonies, and colonists and soldiers returned to the country with a variety of drugs. Borders opened up and travel and exchange were made far easier. Located on the westernmost tip of Europe, the country was a natural gateway for trafficking across the continent. Drug use became part of the culture of liberation, and the use of hard narcotics became popular. Eventually, it got out of hand, and drug use became a crisis.

At first, the government responded in a way the United States is all too familiar with: a conservative cultural backlash that vilified drug use and a harsh, punitive set of policies led by the criminal justice system. Throughout the 1980s, Portugal tried this approach, but to no avail: By 1999, nearly 1% of the population was addicted to heroin, and drug-related AIDS deaths in the country were the highest in the European Union, according to the New Yorker.

But by 2001, the country decided to decriminalize possession and use of drugs, and the results have been remarkable.

What's gotten better? In terms of usage rate and health, the data show that Portugal has by no means plunged into a drug crisis.

As this chart from Transform Drug Policy Foundation shows, the proportion of the population that reports having used drugs at some point saw an initial increase after decriminalization, but then a decline:



(Lifetime prevalence means the percentage of people who report having used a drug at some point in their life, past-year prevalence indicates having used within the last year, and past-month prevalence means those who've used within the last month. Generally speaking, the shorter the time frame, the more reliable the measure.)
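
In code terms the three measures are just proportions of respondents answering yes over different windows. A tiny sketch with invented survey data (Python):

```python
# Invented survey responses: one row per respondent, 0/1 answers to
# "ever used", "used in the past year", "used in the past month".
import numpy as np

responses = np.array([
    [1, 1, 0],
    [1, 0, 0],
    [0, 0, 0],
    [1, 1, 1],
    [0, 0, 0],
])

lifetime, past_year, past_month = responses.mean(axis=0)
print(f"lifetime {lifetime:.0%}, past-year {past_year:.0%}, past-month {past_month:.0%}")
```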

Drug use has declined overall among the 15- to 24-year-old population, those most at risk of initiating drug use, according to Transform.

There has also been a decline in the percentage of the population who have ever used a drug and then continue to do so:



Drug-induced deaths have decreased steeply, as this Transform chart shows:



HIV infection rates among injecting drug users have been reduced at a steady pace, and the problem has become more manageable relative to other countries with high rates, as can be seen in this chart from a 2014 report by the European Monitoring Centre for Drugs and Drug Addiction:



And a widely cited study published in 2010 in the British Journal of Criminology found that after decriminalization, Portugal saw a decrease in imprisonment on drug-related charges alongside a surge in visits to health clinics that deal with addiction and disease.

Not a cure but certainly not a disaster: Many advocates for decriminalizing or legalizing illicit drugs around the world have gloried in Portugal's success. They point to its effectiveness as an unambiguous sign that decriminalization works.

But some social scientists have cautioned against attributing all the numbers to decriminalization itself, as there are other factors at play in the national decrease in overdoses, disease and usage.

At the turn of the millennium, Portugal shifted drug control from the Justice Department to the Ministry of Health and instituted a robust public health model for treating hard drug addiction. It also expanded the welfare system in the form of a guaranteed minimum income. Changes in the material and health resources for at-risk populations for the past decade are a major factor in evaluating the evolution of Portugal's drug situation.

Alex Stevens, a professor of criminal justice at the University of Kent and co-author of the aforementioned criminology article, thinks the global community should be measured in its takeaways from Portugal. 

"The main lesson to learn decriminalizing drugs doesn't necessarily lead to disaster, and it does free up resources for more effective responses to drug-related problems,"  Stevens told Mic.

The road ahead: As Portugal faces a precarious financial situation, there are risks that the country could divest from its health services that are so vital in keeping the addicted community as healthy as possible and more likely to re-enter sobriety.

That would be a shame for a country that has illustrated so effectively that treating drug addiction as a moral problem — rather than a health problem — is a dead end.

In a 2011 New Yorker article discussing how Portugal has fared since decriminalizing, the author spoke with a doctor who discussed the vans that patrol cities with chemical alternatives to the hard drugs that addicts are trying to wean themselves off of. The doctor reflected on the spectacle of people lining up at the van, still slaves of addiction, but defended the act: "Perhaps it is a national failing, but I prefer moderate hope and some likelihood of success to the dream of perfection and the promise of failure."

http://mic.com/articles/110344/14-years-after-portugal-decriminalized-all-drugs-here-s-what-s-happening

170
The Justice Department and FBI have formally acknowledged that nearly every examiner in an elite FBI forensic unit gave flawed testimony in almost all trials in which they offered evidence against criminal defendants over more than a two-decade period before 2000.

Of 28 examiners with the FBI Laboratory’s microscopic hair comparison unit, 26 overstated forensic matches in ways that favored prosecutors in more than 95 percent of the 268 trials reviewed so far, according to the National Association of Criminal Defense Lawyers (NACDL) and the Innocence Project, which are assisting the government with the country’s largest post-conviction review of questioned forensic evidence.

The cases include those of 32 defendants sentenced to death. Of those, 14 have been executed or died in prison, the groups said under an agreement with the government to release results after the review of the first 200 convictions.

The FBI errors alone do not mean there was not other evidence of a convict’s guilt. Defendants and federal and state prosecutors in 46 states and the District are being notified to determine whether there are grounds for appeals. Four defendants were previously exonerated.

The admissions mark a watershed in one of the country’s largest forensic scandals, highlighting the failure of the nation’s courts for decades to keep bogus scientific information from juries, legal analysts said. The question now, they said, is how state authorities and the courts will respond to findings that confirm long-suspected problems with subjective, pattern-based forensic techniques — like hair and bite-mark comparisons — that have contributed to wrongful convictions in more than one-quarter of 329 DNA-exoneration cases since 1989.


In a statement, the FBI and Justice Department vowed to continue to devote resources to address all cases and said they “are committed to ensuring that affected defendants are notified of past errors and that justice is done in every instance. The Department and the FBI are also committed to ensuring the accuracy of future hair analysis, as well as the application of all disciplines of forensic science.”

Peter Neufeld, co-founder of the Innocence Project, commended the FBI and department for the collaboration but said, “The FBI’s three-decade use of microscopic hair analysis to incriminate defendants was a complete disaster.”

“We need an exhaustive investigation that looks at how the FBI, state governments that relied on examiners trained by the FBI and the courts allowed this to happen and why it wasn’t stopped much sooner,” Neufeld said.

Norman L. Reimer, the NACDL’s executive director, said, “Hopefully, this project establishes a precedent so that in future situations it will not take years to remediate the injustice.”

While unnamed federal officials previously acknowledged widespread problems, the FBI until now has withheld comment because findings might not be representative.

Sen. Richard Blumenthal (D-Conn.), a former prosecutor, called on the FBI and Justice Department to notify defendants in all 2,500 targeted cases involving an FBI hair match about the problem even if their case has not been completed, and to redouble efforts in the three-year-old review to retrieve information on each case.

“These findings are appalling and chilling in their indictment of our criminal justice system, not only for potentially innocent defendants who have been wrongly imprisoned and even executed, but for prosecutors who have relied on fabricated and false evidence despite their intentions to faithfully enforce the law,” Blumenthal said.

[Graphic: Flawed forensic testimony by state]

Senate Judiciary Committee Chairman Charles E. Grassley (R-Iowa) and the panel’s ranking Democrat, Patrick J. Leahy (Vt.), urged the bureau to conduct “a root-cause analysis” to prevent future breakdowns.

“It is critical that the Bureau identify and address the systemic factors that allowed this far-reaching problem to occur and continue for more than a decade,” the lawmakers wrote FBI Director James B. Comey on March 27, as findings were being finalized.

The FBI is waiting to complete all reviews to assess causes but has acknowledged that hair examiners until 2012 lacked written standards defining scientifically appropriate and erroneous ways to explain results in court. The bureau expects this year to complete similar standards for testimony and lab reports for 19 forensic disciplines.

Federal authorities launched the investigation in 2012 after The Washington Post reported that flawed forensic hair matches might have led to the convictions of hundreds of potentially innocent people since at least the 1970s, typically for murder, rape and other violent crimes nationwide.

The review confirmed that FBI experts systematically testified to the near-certainty of “matches” of crime-scene hairs to defendants, backing their claims by citing incomplete or misleading statistics drawn from their case work.

In reality, there is no accepted research on how often hair from different people may appear the same. Since 2000, the lab has used visual hair comparison to rule out someone as a possible source of hair or in combination with more accurate DNA testing.

Warnings about the problem have been mounting. In 2002, the FBI reported that its own DNA testing found that examiners reported false hair matches more than 11 percent of the time. In the District, the only jurisdiction where defenders and prosecutors have re-investigated all FBI hair convictions, three of seven defendants whose trials included flawed FBI testimony have been exonerated through DNA testing since 2009, and courts have exonerated two more men. All five served 20 to 30 years in prison for rape or murder.

University of Virginia law professor Brandon L. Garrett said the results reveal a “mass disaster” inside the criminal justice system, one that it has been unable to self-correct because courts rely on outdated precedents admitting scientifically invalid testimony at trial and, under the legal doctrine of finality, make it difficult for convicts to challenge old evidence.

“The tools don’t exist to handle systematic errors in our criminal justice system,” Garrett said. “The FBI deserves every recognition for doing something really remarkable here. The problem is there may be few judges, prosecutors or defense lawyers who are able or willing to do anything about it.”

Federal authorities are offering new DNA testing in cases with errors, if sought by a judge or prosecutor, and agreeing to drop procedural objections to appeals in federal cases.

However, biological evidence in the cases often is lost or unavailable. Among states, only California and Texas specifically allow appeals when experts recant or scientific advances undermine forensic evidence at trial.

Defense attorneys say scientifically invalid forensic testimony should be considered as violations of due process, as courts have held with false or misleading testimony.

The FBI searched more than 21,000 federal and state requests to its hair comparison unit from 1972 through 1999, identifying for review roughly 2,500 cases where examiners declared hair matches.

Reviews of 342 defendants’ convictions were completed as of early March, the NACDL and Innocence Project reported. In addition to the 268 trials in which FBI hair evidence was used against defendants, the review found cases in which defendants pleaded guilty, FBI examiners did not testify, did not assert a match or gave exculpatory testimony.

When such cases are included, by the FBI’s count examiners made statements exceeding the limits of science in about 90 percent of testimonies, including 34 death-penalty cases.

The findings likely only scratch the surface. The FBI said as of mid-April that reviews of about 350 trial testimonies and 900 lab reports are nearly complete, with about 1,200 cases remaining.

The bureau said it is difficult to check cases before 1985, when files were computerized. It has been unable to review 700 cases because police or prosecutors did not respond to requests for information.

Also, the same FBI examiners whose work is under review taught 500 to 1,000 state and local crime lab analysts to testify in the same ways.

Texas, New York and North Carolina authorities are reviewing their hair examiner cases, with ad hoc efforts underway in about 15 other states.

http://www.washingtonpost.com/local/crime/fbi-overstated-forensic-hair-matches-in-nearly-all-criminal-trials-for-decades/2015/04/18/39c8d8c6-e515-11e4-b510-962fcfabc310_story.html?postshare=1691429406292855

172
General Discussion / Psych journal bans p-values
« on: April 16, 2015, 11:52:41 PM »
Psychology researchers have recently found themselves engaged in a bout of statistical soul-searching. In apparently the first such move ever for a scientific journal, the editors of Basic and Applied Social Psychology announced in a February editorial that researchers who submit studies for publication would not be allowed to use a common suite of statistical methods, including a controversial measure called the p-value.

These methods, referred to as null hypothesis significance testing, or NHST, are deeply embedded in the modern scientific research process, and some researchers have been left wondering where to turn. “The p-value is the most widely known statistic,” says biostatistician Jeff Leek of Johns Hopkins University. Leek has estimated that the p-value has been used in at least three million scientific papers. Significance testing is so popular that, as the journal editorial itself acknowledges, there are no widely accepted alternative ways to quantify the uncertainty in research results—and uncertainty is crucial for estimating how well a study’s results generalize to the broader population.

Unfortunately, p-values are also widely misunderstood, often believed to furnish more information than they do. Many researchers have labored under the misbelief that the p-value gives the probability that their study’s results are just pure random chance. But statisticians say the p-value’s information is much less specific, and it can be interpreted only in the context of hypothetical alternative scenarios: The p-value summarizes how often results at least as extreme as those observed would show up if the study were repeated an infinite number of times when in fact only pure random chance were at work.

This means that the p-value is a statement about imaginary data in hypothetical study replications, not a statement about actual conclusions in any given study. Instead of being a “scientific lie detector” that can get at the truth of a particular scientific finding, the p-value is more of an “alternative reality machine” that lets researchers compare their results with what random chance would hypothetically produce. “What p-values do is address the wrong questions, and this has caused widespread confusion,” says psychologist Eric-Jan Wagenmakers at the University of Amsterdam.
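
That "alternative reality machine" is easy to demonstrate in a few lines of simulation. The sketch below uses made-up numbers (Python) to replay a study a hundred thousand times under pure chance and count how often the chance-only replications come out at least as extreme as the observed result; that fraction is the p-value:

```python
# Simulate the textbook definition of a (two-sided) p-value.
# The observed effect and sample size here are made up.
import numpy as np

rng = np.random.default_rng(1)
observed_mean = 0.21       # hypothetical effect seen in one real study
n = 50                     # hypothetical sample size
n_replications = 100_000   # "infinite" repetitions, approximately

# Replay the study under the null: only random noise, true effect zero.
null_means = rng.normal(0.0, 1.0, size=(n_replications, n)).mean(axis=1)

# The p-value is the fraction of chance-only replications that are
# at least as extreme as the result actually observed.
p_value = np.mean(np.abs(null_means) >= observed_mean)
print(f"simulated p-value: {p_value:.3f}")   # about 0.14 for these numbers
```

Note what the number is not: it is not the probability that the observed result is due to chance, only how rare such a result would be in a world where chance is all there is.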

Ostensibly, p-values allow researchers to draw nuanced, objective scientific conclusions as long as they are part of a careful process of experimental design and analysis. But critics have complained that in practice the p-value in the context of significance testing has been bastardized into a sort of crude spam filter for scientific findings: If the p-value on a potentially interesting result is smaller than 0.05, the result is deemed “statistically significant” and passed on for publication, according to the recipe; anything with larger p-values is destined for the trash bin.

Quitting p-values cold turkey was a drastic step. “The null hypothesis significance testing procedure is logically invalid, and so it seems sensible to eliminate it from science,” says psychologist David Trafimow of New Mexico State University in Las Cruces, editor of the journal. A strongly worded editorial discouraged significance testing in the journal last year. But after researchers failed to heed the warning, Trafimow says, he and associate editor Michael Marks decided this year to go ahead with the new diktat. “Statisticians have critiqued these concepts for many decades but no journal has had the guts to ban them outright,” Wagenmakers says.

Significance testing became enshrined in textbooks in the 1940s when scientists, in desperate search of data-analysis “recipes” that were easy for nonspecialists to follow, ended up mashing together two incompatible statistical systems—p-values and hypothesis testing—into one rote procedure. “P-values were never meant to be used the way we’re using them today,” says biostatistician Steven Goodman of Stanford University.

Although the laundry list of gripes against significance testing is long and rather technical, the complaints center around a common theme: Significance testing’s “scientific spam filter” does a poor job of helping researchers separate the true and important effects from the lookalike ones. The implication is that scientific journals might be littered with claims and conclusions that are not likely to be true. “I believe that psychologists have woken up and come to the realization that some work published in high-impact journals is plain nonsense,” Wagenmakers says.

Not that psychology has a monopoly on publishing results that collapse on closer inspection. For example, gene-hunting researchers in large-scale genomic studies used to be plagued by too many false-alarm results that flagged unimportant genes. But since the field developed new statistical techniques and moved away from the automatic use of p-values, the reliability of results has improved, Leek says.

Confusing as p-values are, however, not everyone is a fan of taking them out of researchers’ statistical tool kits. “This might be a case in which the cure is worse than the disease,” Goodman says. “The goal should be the intelligent use of statistics. If the journal is going to take away a tool, however misused, they need to substitute it with something more meaningful.”

One possible replacement that might fit the bill is a rival approach to data analysis called Bayesianism. (The journal said it will consider its use in submitted papers on a “case-by-case basis.”) Bayesianism starts from different principles altogether: Rather than striving for scientifically objective conclusions, this statistical system embraces the subjective, allowing researchers to incorporate their own prior knowledge and beliefs. One obstacle to the widespread use of Bayesianism has been the lack of user-friendly statistical software. To this end Wagenmakers’ team is working to develop a free, open-source statistical software package called JASP. It boasts the tagline: “Bayesian statistics made accessible.”
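
For a flavor of the difference, here is a minimal coin-flip sketch of a Bayes factor (illustrative Python; hypothetical data, and not JASP's implementation). Instead of asking how extreme the data are under a single null, a Bayes factor compares how well two hypotheses predicted the data that actually occurred:

```python
# Bayes factor for 60 heads in 100 flips (hypothetical data):
#   H0: the coin is fair (p = 0.5)
#   H1: any bias is equally plausible (uniform prior on p)
from scipy.stats import binom

k, n = 60, 100

p_data_h0 = binom.pmf(k, n, 0.5)  # probability of the data under H0
p_data_h1 = 1.0 / (n + 1)         # under H1 a uniform prior makes every
                                  # head count 0..n equally likely a priori

bf01 = p_data_h0 / p_data_h1      # > 1 favors H0, < 1 favors H1
print(f"BF01 = {bf01:.2f}")       # about 1.09: essentially no evidence either way
```

For these data the two-sided p-value is about 0.057, tantalizingly close to "significance," yet the Bayes factor sits near 1, meaning the data barely discriminate between a fair and a biased coin at all.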

Other solutions attack the problem from a different angle: human nature. Because researchers in modern science face stiff competition and need to churn out enough statistically significant results for publication, and therefore promotion, it is no surprise that research groups somehow manage to find significant p-values more often than would be expected, a phenomenon dubbed “p-hacking” in 2011 by psychologist Uri Simonsohn at the University of Pennsylvania.

Several journals are trying a new approach, spearheaded by psychologist Christopher Chambers of Cardiff University in Wales, in which researchers publicly “preregister” all their study analysis plans in advance. This gives them less wiggle room to engage in the sort of unconscious—or even deliberate—p-hacking that happens when researchers change their analyses in midstream to yield results that are more statistically significant than they would be otherwise. In exchange, researchers get priority for publishing the results of these preregistered studies—even if they end up with a p-value that falls short of the normal publishable standard.

Finally, some statisticians are banking on education being the answer. “P-values are complicated and require training to understand,” Leek says. Science education has yet to fully adapt to a world in which data are both plentiful and unavoidable, without enough statistical consultants to go around, he says, so most researchers are stuck analyzing their own data with only a couple of stats courses under their belts. “Most researchers do not care about the details of statistical methods,” Wagenmakers says. “They use them only to support their claims in a general sense, to be able to tell their colleagues, ‘see, I am allowed to make this claim, because p is less than .05, now stop questioning my result.’”

A new, online nine-course “data science specialization” for professionals with very little background in statistics might change that. Leek and his colleagues at Johns Hopkins rolled out the free courses last year, available via the popular Coursera online continuing education platform, and two million students have already registered. As part of the sequence, Leek says, a full monthlong course will be devoted specifically to understanding methods that allow researchers to convey uncertainty and generalizability of study findings—including, yes, p-values.

http://www.scientificamerican.com/article/scientists-perturbed-by-loss-of-stat-tool-to-sift-research-fudge-from-fact/?WT.mc_id=SA_Facebook

173
Dan Price, like a growing number of CEOs in recent months, is raising the minimum wage for his employees.

But while the chief executives of companies ranging from Aetna to Gap, Inc. to Wal-Mart are upping their wage floors by a few dollars an hour to help them compete for better talent, this CEO — who founded credit-card processing firm Gravity Payments — has another goal. On Monday, the New York Times reported that to protect employees' emotional well-being, Price is cutting his own salary and raising his employees' wages to at least $70,000 a year.

The decision is an extraordinary one when you look at the numbers. At the Seattle-based company, the average salary has been $48,000 a year among its 120 employees, the Times reported. Now, 70 of those workers will see a raise and 30 will see their salaries roughly double. To pay for those huge increases, the newspaper reported, Price plans to cut his nearly $1 million salary down to $70,000, as well as use roughly three-quarters of this year's profits. The report said Price would keep his salary low until those profits are earned back.

According to the Times, Price's idea came from research by Princeton economists Angus Deaton and Daniel Kahneman, who found, essentially, that money can buy happiness — up to a certain point. The duo's research showed that for salaries below about $75,000 a year, increases in income correlated with greater emotional well-being.

The extremely generous raise is, of course, far easier to pull off in a small company with few employees than in one with many. And Price may have a hard time keeping pay at that rate if the company were to struggle at some point — or potentially, if it grows very quickly. But the goodwill he just engendered is likely to pay itself back not only through hard work, greater loyalty and better talent, but through the greater returns that usually follow in such a culture.

http://www.washingtonpost.com/blogs/on-leadership/wp/2015/04/14/this-ceo-raised-all-his-employees-salaries-to-at-least-70000-by-cutting-his-own/?tid=sm_fb

174
People can control prosthetic limbs, computer programs and even remote-controlled helicopters with their mind, all by using brain-computer interfaces. What if we could harness this technology to control things happening inside our own body? A team of bioengineers in Switzerland has taken the first step toward this cyborglike setup by combining a brain-computer interface with a synthetic biological implant, allowing a genetic switch to be operated by brain activity. It is the world's first brain-gene interface.

The group started with a typical brain-computer interface, an electrode cap that can register subjects' brain activity and transmit signals to another electronic device. In this case, the device is an electromagnetic field generator; different types of brain activity cause the field to vary in strength. The next step, however, is totally new—the experimenters used the electromagnetic field to trigger protein production within human cells in an implant in mice.

The implant uses a cutting-edge technology known as optogenetics. The researchers inserted bacterial genes into human kidney cells, causing them to produce light-sensitive proteins. Then they bioengineered the cells so that stimulating them with light triggers a string of molecular reactions that ultimately produces a protein called secreted alkaline phosphatase (SEAP), which is easily detectable. They then placed the human cells plus an LED light into small plastic pouches and inserted them under the skin of several mice.

Human volunteers wearing electrode caps either played Minecraft or meditated, generating moderate or large electromagnetic fields, respectively, from a platform on which the mice stood. The field activates the implant's infrared LED, which triggers the production of SEAP. The protein then diffuses across membranes in the implant into the mice's bloodstream.

Playing Minecraft produced moderate levels of SEAP in the mice's bloodstream, and meditating produced high levels. A third type of mental control, known as biofeedback, involved the volunteers watching the light, which could be seen through the mice's skin, and learning to consciously turn the LED on or off—thereby turning SEAP production on or off.

“Combining a brain-computer interface with an optogenetic switch is a deceptively simple idea,” says senior author Martin Fussenegger of the Swiss Federal Institute of Technology in Zurich, “but controlling genes in this way is completely new.” By using an implant, the setup harnesses the power of optogenetics without requiring the user to have his or her own cells genetically altered. Fussenegger and his co-authors envision therapeutic implants one day producing chemicals to correct a wide variety of dysfunctions: neurotransmitters to regulate mood or anxiety, natural painkillers for chronic or acute pain, blood-clotting factors for hemophiliacs, and so on. Some patients would benefit greatly from having conscious control over intravenous dosage rather than relying on sensors—especially in cases such as pain, which is hard for anyone but the sufferer to measure, or locked-in patients or others who are conscious but cannot communicate.

http://www.scientificamerican.com/article/thought-controlled-genes-could-someday-help-us-heal/?WT.mc_id=SA_Facebook

175
General Discussion / Memories may not live in Synapses
« on: April 05, 2015, 12:26:21 PM »
As intangible as they may seem, memories have a firm biological basis. According to textbook neuroscience, they form when neighboring brain cells send chemical communications across the synapses, or junctions, that connect them. Each time a memory is recalled, the connection is reactivated and strengthened. The idea that synapses store memories has dominated neuroscience for more than a century, but a new study by scientists at the University of California, Los Angeles, may fundamentally upend it: instead memories may reside inside brain cells. If supported, the work could have major implications for the treatment of post-traumatic stress disorder (PTSD), a condition marked by painfully vivid and intrusive memories.

More than a decade ago scientists began investigating the drug propranolol for the treatment of PTSD. Propranolol was thought to prevent memories from forming by blocking production of proteins required for long-term storage. Unfortunately, the research quickly hit a snag. Unless administered immediately after the traumatic event, the treatment was ineffective. Lately researchers have been crafting a work-around: evidence suggests that when someone recalls a memory, the reactivated connection is not only strengthened but becomes temporarily susceptible to change, a process called memory reconsolidation. Administering propranolol (and perhaps also therapy, electrical stimulation and certain other drugs) during this window can enable scientists to block reconsolidation, wiping out the synapse on the spot.

The possibility of purging recollections caught the eye of David Glanzman, a neurobiologist at U.C.L.A., who set out to study the process in Aplysia, a sluglike mollusk commonly used in neuroscience research. Glanzman and his team zapped Aplysia with mild electric shocks, creating a memory of the event expressed as new synapses in the brain. The scientists then transferred neurons from the mollusk into a petri dish and chemically triggered the memory of the shocks in them, quickly followed by a dose of propranolol.

Initially the drug appeared to confirm earlier research by wiping out the synaptic connection. But when cells were exposed to a reminder of the shocks, the memory came back at full strength within 48 hours. “It was totally reinstated,” Glanzman says. “That implies to me that the memory wasn't stored in the synapse.” The results were recently published in the online open-access journal eLife.

If memory is not located in the synapse, then where is it? When the neuroscientists took a closer look at the brain cells, they found that even when the synapse was erased, molecular and chemical changes persisted after the initial firing within the cell itself. The engram, or memory trace, could be preserved by these permanent changes. Alternatively, it could be encoded in modifications to the cell's DNA that alter how particular genes are expressed. Glanzman and others favor this reasoning.

Eric R. Kandel, a neuroscientist at Columbia University and recipient of the 2000 Nobel Prize in Physiology or Medicine for his work on memory, cautions that the study's results were observed in the first 48 hours after treatment, a time when consolidation is still sensitive.

Though preliminary, the results suggest that for people with PTSD, pill popping will most likely not eliminate painful memories. “If you had asked me two years ago if you could treat PTSD with medication blockade, I would have said yes, but now I don't think so,” Glanzman says. On the bright side, he adds, the idea that memories persist deep within brain cells offers new hope for another disorder tied to memory: Alzheimer's.

http://www.scientificamerican.com/article/memories-may-not-live-in-neurons-synapses/?WT.mc_id=SA_Facebook

176
Big Wall Street banks are so upset with U.S. Democratic Senator Elizabeth Warren's call for them to be broken up that some have discussed withholding campaign donations to Senate Democrats in symbolic protest, sources familiar with the discussions said.

Representatives from Citigroup, JPMorgan, Goldman Sachs and Bank of America, have met to discuss ways to urge Democrats, including Warren and Ohio Senator Sherrod Brown, to soften their party's tone toward Wall Street, sources familiar with the discussions said this week.

Bank officials said the idea of withholding donations was not discussed at a meeting of the four banks in Washington but it has been raised in one-on-one conversations between representatives of some of them. However, there was no agreement on coordinating any action, and each bank is making its own decision, they said.

The amount of money at stake, a maximum of $15,000 per bank, means the gesture is symbolic rather than material.

Moreover, banks' hostility toward Warren, who is not a presidential candidate, will not have a direct impact on the presumed Democratic front runner in the White House race, Hillary Clinton. That's because their fund-raising groups focus on congressional races rather than the presidential election.

Still, political strategists say Clinton could struggle to raise money among Wall Street financiers who worry that Democrats are becoming less business friendly.

The tensions are a sign that the aftermath of the 2008 financial crisis - the bank bailouts and the fights over financial reforms to rein in Wall Street - is still a factor in the 2016 elections.

Citigroup has decided to withhold donations for now to the Democratic Senatorial Campaign Committee over concerns that Senate Democrats could give Warren and lawmakers who share her views more power, sources inside the bank told Reuters.

The Massachusetts senator's economic populism and take-no-prisoners approach have won her a strong following among liberals, who gathered 300,000 signatures for a petition urging her to run for the White House in 2016.

"They can threaten or bully or say whatever they want, but we aren't going to change our game plan," Warren said in a blog post on her website on Friday. "It's up to us to fight back against a financial system that allows those who broke our economy to emerge from a crisis in record-setting shape while ordinary Americans continue to struggle."

JPMORGAN MET DEMOCRATIC OFFICIALS

Citi spokeswoman Molly Meiners declined to comment specifically on the Warren issue, saying the bank's fund-raising political action committee (PAC) "contributes to candidates and parties across the political spectrum that share our desire for pro-business policies that promote economic growth."

JPMorgan representatives have met Democratic Party officials to emphasize the connection between its annual contribution and the need for a friendlier attitude toward the banks, a source familiar with JPMorgan's donations said. In past years, the bank has given its donation in one lump sum but this year has so far donated only a third of the amount, the source said.

Goldman, which already made its $15,000 donation for the year, took part in the Washington meeting between the four banks to talk about the anti-big-bank rhetoric of some Democratic lawmakers like Warren, but it has not had any discussions about withholding money, a source close to the bank said.

"We will continue working cooperatively with members of Congress, regulators and the industry to foster constructive discussions around policy questions," said Andrew Williams, a Goldman spokesman.

Bank of America is not coordinating with other banks on when and how much to give, according to a source familiar with the bank's thinking. It has not yet sent in its check.

"Our decision to contribute will be driven more by the fact that many members of both parties understand the important role we play in driving the real economy and serving customers across the country," said a spokesman, Larry Di Rita.

JPMorgan spokesman Andrew Gray said the bank had "always believed in the importance of engaging constructively with our public officials."

Spokesmen for the Democratic Senatorial Campaign Committee, Warren and Senate Democratic leader Harry Reid all declined to comment.

Warren, a former Harvard Law professor who joined the Senate Banking Committee after taking office in 2013, has accused big banks and other financial firms of unfair dealings that harm the middle class and help the rich grow richer.

In a Dec. 12 speech, she mentioned Citi several times as an example of a bank that had grown too large, saying it should have been broken apart by the Dodd-Frank financial reform law.

In January, Warren angered Wall Street when she successfully blocked the nomination of banker Antonio Weiss to a top post at the Treasury Department. She argued that as a regulator he would likely be too deferential to his former Wall Street colleagues.

http://www.reuters.com/article/2015/03/27/us-usa-election-banks-idUSKBN0MN0BV20150327

177
Some NFL players spend their offseason working out. Others travel around the world. Baltimore Ravens offensive lineman John Urschel has done both while also getting an article published in a math journal.

Urschel, the Ravens’ 2014 fifth-round pick who graduated from Penn State with a 4.0 GPA, also happens to be a brilliant mathematician. This week he and several co-authors published a piece titled “A Cascadic Multigrid Algorithm for Computing the Fiedler Vector of Graph Laplacians” in the Journal of Computational Mathematics. You can read the full piece here: http://arxiv.org/abs/1412.0565

Here’s the summary of the paper:

“In this paper, we develop a cascadic multigrid algorithm for fast computation of the Fiedler vector of a graph Laplacian, namely, the eigenvector corresponding to the second smallest eigenvalue. This vector has been found to have applications in fields such as graph partitioning and graph drawing. The algorithm is a purely algebraic approach based on a heavy edge coarsening scheme and pointwise smoothing for refinement. To gain theoretical insight, we also consider the related cascadic multigrid method in the geometric setting for elliptic eigenvalue problems and show its uniform convergence under certain assumptions. Numerical tests are presented for computing the Fiedler vector of several practical graphs, and numerical results show the efficiency and optimality of our proposed cascadic multigrid algorithm.”
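
For readers wondering what the paper's central object looks like, the sketch below computes a Fiedler vector on a toy graph using a dense eigensolver (Python; an illustration of the definition, not the paper's cascadic multigrid algorithm):

```python
# Fiedler vector of a toy graph: two triangles joined by one edge
# between nodes 2 and 3.
import numpy as np

A = np.array([
    [0, 1, 1, 0, 0, 0],
    [1, 0, 1, 0, 0, 0],
    [1, 1, 0, 1, 0, 0],
    [0, 0, 1, 0, 1, 1],
    [0, 0, 0, 1, 0, 1],
    [0, 0, 0, 1, 1, 0],
])
L = np.diag(A.sum(axis=1)) - A  # graph Laplacian: degree matrix minus adjacency

eigenvalues, eigenvectors = np.linalg.eigh(L)  # eigenvalues in ascending order
fiedler_value = eigenvalues[1]                 # second-smallest eigenvalue
fiedler_vector = eigenvectors[:, 1]

# The signs of the Fiedler vector give a two-way partition of the graph;
# here they separate the two triangles, the graph-partitioning use
# the summary mentions.
print(round(fiedler_value, 3), np.sign(fiedler_vector))
```

The paper's contribution is computing this vector efficiently for large graphs, where a dense eigendecomposition like the one above is infeasible: per the summary, the graph is coarsened by a heavy-edge scheme, solved cheaply, and refined back up with pointwise smoothing.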

When he’s not protecting Joe Flacco, the 23-year-old Urschel enjoys digging into extremely complicated mathematical models.

“I am a mathematical researcher in my spare time, continuing to do research in the areas of numerical linear algebra, multigrid methods, spectral graph theory and machine learning. I’m also an avid chess player, and I have aspirations of eventually being a titled player one day.”

See more at: http://yahoo.thepostgame.com/blog/balancing-act/201503/john-urschel-baltimore-ravens-nfl-football-math

Thanks to Kebmodee for bringing this to the attention of the It’s Interesting community.

http://its-interesting.com/2015/03/19/baltimore-ravens-offensive-lineman-john-urschel-publishes-paper-in-math-journal/

178
General Discussion / John Quiggin Thinks the TPP Sucks
« on: March 18, 2015, 12:02:39 AM »
JQ is A) a well-known microeconomic theorist (not the best cred for this article, vs macro) & B) Australian and way better than skars.

There can be few topics as eye-glazingly dull as international trade agreements. Endless hours of negotiation on such arcane topics as rules of origin and most favoured nation status combine with an alphabet soup of acronyms to produce a barely readable text hundreds of pages long. But unless you were actually involved in exporting or importing goods, or faced import competition, it used to be safe enough to leave the details to diplomats and trade bureaucrats.

That all changed with the emergence of “new generation” agreements, of which the most ambitious so far is the Trans-Pacific Partnership Agreement, or TPP, which is on course to be completed in May this year. Depending on the content of the final deal, it could affect almost everything we do, from buying a secondhand book to campaigning to protect a local park from development.

Although the new generation agreements are described as trade agreements, this is quite misleading. Except for restrictions on imports of agricultural commodities (which are unlikely to go away any time soon), tariffs, quotas and other restrictions on trade have largely disappeared in our region. The new generation agreements are primarily about imposing a particular model of global capitalism, with the United States as the model and multinational corporations as the main engines of economic activity. It’s already clear that the TPP will fit this pattern.

But what exactly do we know about the deal? If it were not for an embarrassing leak of the negotiated draft text of the intellectual property and environment chapters, released by WikiLeaks in late 2013 and early 2014, ordinary Australians would know nothing more than the barest details, namely that the TPP has been the subject of more than a decade of negotiations involving twelve countries, and that it builds on a web of bilateral deals with the United States at the centre.

Given the lack of public information, the negotiations are often described as secret, but this is not quite correct. While citizens in general have been kept in the dark, corporate lobbyists have been actively involved, apparently to the point of drafting much of the text as it affects their corporate interests.

By the time we do see the final text it will probably be too late to do much about it. So we have to make an educated guess, based on the WikiLeaks material and on previous new generation agreements, of which the most important were the (failed) Multilateral Agreement on Investment, or MAI, the Agreement on Trade-Related Aspects of Intellectual Property Rights, or TRIPS, and the Australia–US Free Trade Agreement, or AUSFTA.

These agreements are primarily concerned with protecting the rights of multinational corporations. This fact was clearest in relation to the MAI, which proposed to give these corporations the right to sue governments over legislation on issues such as environmental protection, cultural policy and labour market standards. These investor–state dispute settlement procedures have become a standard demand of US negotiators in bilateral trade agreements. They bypass normal courts, and are only available to corporations, with no corresponding right for states to sue investors.

The MAI would have made them a core part of the structure of global trade managed by the World Trade Organization. But the agreement was abandoned after a string of governments, beginning with France, withdrew from negotiations in the light of public concerns about its implications. Attempts to implement it by other means have continued, with the TPP being the most recent example.

The TRIPS agreement dealt with “intellectual property,” a term that refers to government-granted monopoly rights such as patents and copyright. As such, it is the direct opposite of a free trade policy. The idea of granting inventors and creators of cultural material a temporary right to control the use of their ideas is an old one and, within limits, generally a good one. But as valuable rights have fallen into the hands of corporations, pressure has increased to make them more permanent and to expand their scope.

When the US copyright system was established in 1790, writers and other creators enjoyed a copyright term of fourteen years, which could be extended for a further fourteen years if the author were still alive. This provided the chance to make a living out of writing while ensuring that the vast majority of literature and other cultural material was in the public domain. Over time, the term of copyright was extended to the author’s life and then beyond, and the scope was expanded to material that would not have been considered worthy of protection in the past. The result was to build up corporate interests centred on the exploitation of the system.

The archetypal example is the Disney Corporation, which derives a huge income from the character of Mickey Mouse. Under the legislation prevailing when Mickey was created in 1928, his copyright would have expired in 1984. Whenever Mickey’s copyright has come close to expiry, though, Disney has succeeded in inducing Congress to legislate for longer terms.

Another Disney property, Winnie the Pooh, is an even more egregious case. Mickey Mouse is at least a Disney product, but the rights to Winnie the Pooh were acquired in 1961, five years after the death of his creator A.A. Milne. Again, if it were not for repeated extensions of copyright, Winnie would be in the public domain.

Restrictions on the use of cartoon characters aren’t of great importance. But the expansion of copyright has had a chilling effect on creative activity of all kinds. Even such a simple act as singing the song “Happy Birthday,” composed over a hundred years ago using an even older tune, can potentially attract copyright action from the global conglomerate Warner/Chappell (which has a dubious claim to own the rights). This possibility becomes a certainty if the song is sung as part of a film or play.

If the copyright situation is bad, that of patents is even worse. The patent system for pharmaceuticals has been abused in various ways, from “me too” products with little additional benefit to “evergreening,” involving marginal changes to extend patent life beyond the legally intended period. Then there is the extension of patent protection to things that were never intended to be covered, from business methods to human genes. The result is to stifle the natural tendency of information to flow freely and contribute to new and unexpected innovations.

At the bottom of the heap are “patent trolls,” companies that file patents on trivially obvious activities, such as using a scanner attached to a network. These patents are invariably granted by the intellectual property authorities, whose job it is to decide whether a novel process has been identified. The trolls then send out letters demanding money from anyone who infringes their supposed patent. In many cases it is cheaper to settle than to fight.

The abuse of the patent system has become so bad that some studies conclude we would be better off abolishing patents altogether. Courts and policy-makers have responded to some extent, for example by finding against patent trolls. Unfortunately, trade negotiators haven’t got the message and are still pushing the most extreme version of the intellectual property agenda.

The implications of intellectual property deals and investor–state dispute mechanisms are best illustrated by the dispute over Australia’s legislation for plain packaging of cigarettes. The tobacco companies fought this legislation through the political process and lost. They took their case to the High Court, claiming that the legislation was an unconstitutional “taking” of their branding rights, a claim rejected by a 6–1 majority.


179
General Discussion / John Quiggin Thinks the TPP Sucks
« on: March 18, 2015, 12:02:16 AM »
JQ is A) a well-known microeconomic theorist (not the best cred for this article, which leans macro) & B) Australian, and way better than skars.

There can be few topics as eye-glazingly dull as international trade agreements. Endless hours of negotiation on such arcane topics as rules of origin and most favoured nation status combine with an alphabet soup of acronyms to produce a barely readable text hundreds of pages long. But unless you were actually involved in exporting or importing goods, or faced import competition, it used to be safe enough to leave the details to diplomats and trade bureaucrats.

That all changed with the emergence of “new generation” agreements, of which the most ambitious so far is the Trans-Pacific Partnership Agreement, or TPP, which is on course to be completed in May this year. Depending on the content of the final deal, it could affect almost everything we do, from buying a secondhand book to campaigning to protect a local park from development.

Although the new generation agreements are described as trade agreements, this is quite misleading. Except for restrictions on imports of agricultural commodities (which are unlikely to go away any time soon), tariffs, quotas and other restrictions on trade have largely disappeared in our region. The new generation agreements are primarily about imposing a particular model of global capitalism, with the United States as the model and multinational corporations as the main engines of economic activity. It’s already clear that the TPP will fit this pattern.

But what exactly do we know about the deal? If it were not for an embarrassing leak of the negotiated draft text of the intellectual property and environment chapters, released by WikiLeaks in late 2013 and early 2014, ordinary Australians would know nothing more than the barest details, namely that the TPP has been the subject of more than a decade of negotiations involving twelve countries, and that it builds on a web of bilateral deals with the United States at the centre.

Given the lack of public information, the negotiations are often described as secret, but this is not quite correct. While citizens in general have been kept in the dark, corporate lobbyists have been actively involved, apparently to the point of drafting much of the text as it affects their corporate interests.

By the time we do see the final text it will probably be too late to do much about it. So we have to make an educated guess, based on the WikiLeaks material and on previous new generation agreements, of which the most important were the (failed) Multilateral Agreement on Investment, or MAI, the Agreement on Trade-Related Aspects of Intellectual Property Rights, or TRIPS, and the Australia–US Free Trade Agreement, or AUSFTA.

These agreements are primarily concerned with protecting the rights of multinational corporations. This fact was clearest in relation to the MAI, which proposed to give these corporations the right to sue governments over legislation on issues such as environmental protection, cultural policy and labour market standards. These investor–state dispute settlement procedures have become a standard demand of US negotiators in bilateral trade agreements. They bypass normal courts, and are only available to corporations, with no corresponding right for states to sue investors.

The MAI would have made them a core part of the structure of global trade managed by the World Trade Organization. But the agreement was abandoned after a string of governments, beginning with France, withdrew from negotiations in the light of public concerns about its implications. Attempts to implement it by other means have continued, with the TPP being the most recent example.

The TRIPS agreement dealt with “intellectual property,” a term that refers to government-granted monopoly rights such as patents and copyright. As such, it is the direct opposite of a free trade policy. The idea of granting inventors and creators of cultural material a temporary right to control the use of their ideas is an old one and, within limits, generally a good one. But as valuable rights have fallen into the hands of corporations, pressure has increased to make them more permanent and to expand their scope.

When the US copyright system was established in 1790, writers and other creators enjoyed a copyright term of fourteen years, which could be extended for a further fourteen years if the author were still alive. This provided the chance to make a living out of writing while ensuring that the vast majority of literature and other cultural material was in the public domain. Over time, the term of copyright was extended to the author’s life and then beyond, and the scope was expanded to material that would not have been considered worthy of protection in the past. The result was to build up corporate interests centred on the exploitation of the system.

The archetypal example is the Disney Corporation, which derives a huge income from the character of Mickey Mouse. Under the legislation prevailing when Mickey was created in 1928, his copyright would have expired in 1984. Whenever Mickey’s copyright has come close to expiry, though, Disney has succeeded in inducing Congress to legislate for longer terms.

Another Disney property, Winnie the Pooh, is an even more egregious case. Mickey Mouse is at least a Disney product, but the rights to Winnie the Pooh were acquired in 1961, five years after the death of his creator A.A. Milne. Again, if it were not for repeated extensions of copyright, Winnie would be in the public domain.

Restrictions on the use of cartoon characters aren’t of great importance. But the expansion of copyright has had a chilling effect on creative activity of all kinds. Even such a simple act as singing the song “Happy Birthday,” composed over a hundred years ago using an even older tune, can potentially attract copyright action from the global conglomerate Warner/Chappell (which has a dubious claim to own the rights). This possibility becomes a certainty if the song is sung as part of a film or play.

If the copyright situation is bad, that of patents is even worse. The patent system for pharmaceuticals has been abused in various ways, from “me too” products with little additional benefit to “evergreening,” involving marginal changes to extend patent life beyond the legally intended period. Then there is the extension of patent protection to things that were never intended to be covered, from business methods to human genes. The result is to stifle the natural tendency of information to flow freely and contribute to new and unexpected innovations.

At the bottom of the heap are “patent trolls,” companies that file patents on trivially obvious activities, such as using a scanner attached to a network. These patents are invariably granted by the intellectual property authorities, whose job it is to decide whether a novel process has been identified. The trolls then send out letters demanding money from anyone who infringes their supposed patent. In many cases it is cheaper to settle than to fight.

The abuse of the patent system has become so bad that some studies conclude we would be better off abolishing patents altogether. Courts and policy-makers have responded to some extent, for example by finding against patent trolls. Unfortunately, trade negotiators haven’t got the message and are still pushing the most extreme version of the intellectual property agenda.

The implications of intellectual property deals and investor–state dispute mechanisms are best illustrated by the dispute over Australia’s legislation for plain packaging of cigarettes. The tobacco companies fought this legislation through the political process and lost. They took their case to the High Court, claiming that the legislation was an unconstitutional “taking” of their branding rights, a claim rejected by a 6–1 majority.

If it were not for the new generation trade deals, that would have been that. But these deals gave Big Tobacco many more venues for litigation. First, the tobacco companies ginned up such major cigarette producers as Ukraine and Honduras to bring disputes under the TRIPS agreement. Next, Philip Morris undertook a corporate restructure to reinvent itself as a Hong Kong company, taking advantage of a 1993 deal with Australia that incorporated investor–state dispute settlement provisions.

It goes without saying that these cases have no merit. But while they drag on, they deter other countries from following the Australian example. And should the unaccountable tribunals established under these agreements rule in favour of the tobacco companies (for whatever reason), Australia would have no avenue of redress.

The emergence of plain packaging legislation as a test case may yet prove to be a blessing in disguise. There are few litigants less sympathetic than Big Tobacco, reliant on a deadly and addictive product and marked by a long history of dishonesty, criminality and political corruption. The fact that the countries notionally bringing the dispute have no genuine interest in it makes the case even more unappealing.

Despite their trappings of legality, the tribunals of the World Trade Organization and similar bodies are political bodies. The WTO in particular has been badly burned by the political reaction against its decision that US policies requiring “dolphin safe” labelling of tuna represent an improper restriction of trade. As a result, it recently reversed its previous stance and upheld EU restrictions on the importation of skins from Canadian seals killed in the infamous clubbing hunt.

The political fallout from a decision in favour of Big Tobacco would be far worse than anything the pseudo-courts of international trade have experienced before. It would instantly confirm the most dire predictions of critics of investor–state dispute procedures and intellectual property rules. Precisely for this reason, it seems likely that the tobacco lawsuits will fail, setting precedents that will constrain future abuses of these provisions. But that doesn’t change the obviously undemocratic nature of agreements under which Australian health policy can potentially be overturned by the machinations of corporate lobbyists.

Given our recent experience with such deals, would an Australian government be willing to expose us to more such action? The Labor government responded to the plain packaging dispute by announcing that it would discontinue the practice of seeking to include investor–state dispute provisions in trade agreements with developing countries. More generally, there was some movement away from strong intellectual property policies in areas such as fair use of copyright materials.

But this shift has been reversed under the Abbott government, with its recent rush of bilateral agreements. Unsurprisingly, political journalists pay hardly any attention to the actual content of these agreements, and their signing is almost invariably treated as a political win for the government of the day.

This uncritical attitude is reflected in the generally favourable press received by trade minister Andrew Robb for the signing of agreements with Korea, Japan and China, bringing a rapid conclusion to negotiations that had proceeded at a glacial pace under Labor. No one in the normally hardbitten press gallery, it seemed, was cynical enough to suggest that the easiest way to conclude a negotiation is to accede to the demands of the other party while withdrawing any sticking points of your own.

In the case of the agreement with Japan, for example, Australia secured some modest concessions regarding tariffs on beef, which will be reduced from 38.5 per cent to 19 per cent over a period of fifteen years. In return, our government accepted the total exclusion of rice from the deal, and the maintenance of most restrictions on dairy products.

The Korean agreement, KAFTA, was arguably even worse. Reversing our previous position, the government agreed to the inclusion of investor–state dispute provisions. This was apparently done not in response to Korean demands but because US negotiators were pushing the provision in the parallel negotiations for the TPP.

It seems certain that the final agreement will involve a substantial loss of Australian sovereignty and an acceptance of economically damaging intellectual property rules. In return, Australia will receive marginal and long-drawn-out improvements in market access for agricultural commodities. While a Labor government might perhaps have held out for a better deal, it seems unlikely that the opposition will reject legislation implementing the agreement.

Ironically, our best hope lies in the United States. The Obama administration, backed by the Republican congressional leadership, is seeking approval to push the TPP through on a “fast track” basis, which would not permit any amendments. But it is facing stiff opposition both from Republicans (concerned about sovereignty and unwilling to grant any additional power to Obama) and from liberal Democrats, who reject the key provisions of the deal. In the current congressional atmosphere, inaction is the most likely result of any contentious process. So, it may be that the deal will fail at this crucial hurdle. We can only hope. •

http://insidestory.org.au/the-trans-pacific-partnership-it-might-be-about-trade-but-its-far-from-free

180
A vocal (and apparently very wealthy) biologist and outspoken “vaccination skeptic” was so confident that not only do vaccinations not work but that measles isn’t even real that he made a public bet with the world’s scientists: if they could prove the measles virus exists, he’d pay them $106,000. Hilariously, one medical doctor obliged.

Stefan Lanka, a German biologist (we should probably use that term loosely) who believes measles is “psychosomatic” and therefore exists only in a person’s head, made the bet on his anti-vaxxer website in 2011. Lanka’s bet is full of bizarre misunderstandings of science, medicine, and even common sense. German newspaper The Local translated the following passage:

“Because we know that the ‘measles virus’ doesn’t exist, and according to biology and medical science can’t exist, and because we know the real cause of measles, we want the reward to get people to enlighten themselves, for the enlightened to help the less enlightened and for the enlightened to influence those in power.”

Calling his bluff, German doctor David Barden gathered the most up-to-date and comprehensive research on the measles virus and sent the evidence to Lanka’s house.

Predictably, Lanka took one look at the combined effort of thousands of scientists, decades of research and the reams of data compiled and declared none of it valid. He reportedly refused to pay Dr. Barden – who then took the biologist to court.

Unfortunately for our intrepid anti-vaxxer, a German judge reviewed the research and – like most rational people – decided that the existence of measles was fairly obvious. The doctor had fulfilled all the requirements Lanka had set (which in this case was probably not that difficult), and Lanka was ordered by the court to pay out the $106,000 he had promised.

In an ironic twist, Lanka probably did achieve his goal of “enlightening” people about measles. Having become an international laughingstock, he has further discredited an anti-vaccination movement built on quackery, dodgy sources and an ignorance of science. That movement has fuelled a resurgence of measles across the world, including in countries, such as the United States and Germany, that had at one point all but eliminated the disease.

http://www.addictinginfo.org/2015/03/13/anti-vaxxer-bets-scientists-100000-they-cant-prove-measles-exists-anti-vaxxer-loses-100000/

Pages: 1 2 3 4 5 [6] 7 8 9 10 11 ... 26