151

**General Discussion / A. Cooper clip on Affluenza (or: "never trust psychiatrists")**

« **on:** December 18, 2013, 03:43:59 PM »


152

But it looks like Mom was wasting her money. Evidence continues to mount that vitamin supplements don't help most people, and can actually cause diseases that people are taking them to prevent, like cancer.

"Enough is enough," declares an editorial accompanying the studies in Annals of Internal Medicine. "Stop wasting money on vitamin and mineral supplements."

But enough is not enough for the American public. We spend $28 billion a year on vitamin supplements and are projected to spend more. About 40 percent of Americans take multivitamins, the editorial says.

Even people who know about all these studies showing no benefit continue to buy multivitamins for their families. Like, uh, me. They couldn't hurt, right?

If only it were as simple as popping a supplement and being set for life. But alas, no.

In most cases, no. But $28 billion is a lot to spend on a worthless medical treatment. So I called up Steven Salzberg, a professor of medicine at Johns Hopkins who has written about Americans' love affair with vitamins, to find out why we're so reluctant to give up the habit.

"I think this is a great example of how our intuition leads us astray," Salzberg told Shots. "It seems reasonable that if a little bit of something is good for you, then more should be better for you. It's not true. Supplementation with extra vitamins or micronutrients doesn't really benefit you if you don't have a deficiency."

Vitamin deficiencies can kill, and that discovery has made for some great medical detective stories. Salzberg points to James Lind, a Scottish physician who proved in 1747 that citrus juice could cure scurvy, which had killed more sailors than all wars combined. It was not until much later that scientists discovered that the magic ingredient was vitamin C.

Lack of vitamin D causes rickets. Lack of niacin causes pellagra, which was a big problem in the southern U.S. in the early 1900s. Lack of vitamin A causes blindness. And lack of folic acid can cause spina bifida, a crippling deformity.

Better nutrition and vitamin-fortified foods have made these problems pretty much history.

Now when public health officials talk about vitamin deficiencies and health, they're talking about specific populations and specific vitamins. Young women tend to be low on iodine, which is key for brain development in a fetus, according to a 2012 report from the Centers for Disease Control and Prevention. And Mexican-American women and young children are more likely to be iron deficient. But even in that group, we're talking about 11 percent of the children, and 13 percent of the women.

Recent studies have shown that too much beta carotene and vitamin E can cause cancer, and it's long been known that excess vitamin A can cause liver damage, coma and death. That's what happened to Arctic explorers when they ate too much polar bear liver, which is rich in vitamin A.

"You need a balance," Salzberg says. But he agrees with the Annals editorial — enough already. "The vast majority of people taking multivitamins and other supplemental vitamins don't need them. I don't need them, so I stopped."

I'm still struggling with the notion that mother didn't know best. But maybe when the current bottle of chewable kid vitamins runs out, I won't buy more.

http://www.npr.org/blogs/health/2013/12/17/251955878/the-case-against-multivitamins-grows-stronger?utm_content=socialflow&utm_campaign=nprfacebook&utm_source=npr&utm_medium=facebook

153

Quote

Every once in a while, one of my postdocs or students asks, in a grave voice, to speak to me privately. With terror in their eyes, they tell me that they have been unable to replicate one of my laboratory's previous experiments, no matter how hard they try. Replication is always a concern when dealing with systems as complex as the three-dimensional cell cultures routinely used in my lab. But with time and careful consideration of experimental conditions, they, and others, have always managed to replicate our previous data.

Articles in both the scientific and popular press [1–3] have addressed how frequently biologists are unable to repeat each other's experiments, even when using the same materials and methods. But I am concerned about the latest drive by some in biology to have results replicated by an independent, self-appointed entity that will charge for the service. The US National Institutes of Health is considering making validation routine for certain types of experiments, including the basic science that leads to clinical trials [4]. But who will evaluate the evaluators? The Reproducibility Initiative, for example, launched by the journal PLoS ONE with three other companies, asks scientists to submit their papers for replication by third parties, for a fee, with the results appearing in PLoS ONE. Nature has targeted [5] reproducibility by giving more space to methods sections and encouraging more transparency from authors, and has composed a checklist of necessary technical and statistical information. This should be applauded.

So why am I concerned? Isn't reproducibility the bedrock of the scientific process? Yes, up to a point. But it is sometimes much easier not to replicate than to replicate studies, because the techniques and reagents are sophisticated, time-consuming and difficult to master. In the past ten years, every paper published on which I have been senior author has taken between four and six years to complete, and at times much longer. People in my lab often need months — if not a year — to replicate some of the experiments we have done on the roles of the microenvironment and extracellular matrix in cancer, and that includes consulting with other lab members, as well as the original authors.

People trying to repeat others' research often do not have the time, funding or resources to gain the same expertise with the experimental protocol as the original authors, who were perhaps operating under a multi-year federal grant and aiming for a high-profile publication. If a researcher spends six months, say, trying to replicate such work and reports that it is irreproducible, that can deter other scientists from pursuing a promising line of research, jeopardize the original scientists' chances of obtaining funding to continue it themselves, and potentially damage their reputations.

Fair wind

Twenty years ago, a reproducibility movement would have been of less concern. Biologists were using relatively simple tools and materials, such as pre-made media and embryonic fibroblasts from chickens and mice. The techniques available were inexpensive and easy to learn, thus most experiments would have been fairly easy to double-check. But today, biologists use large data sets, engineered animals and complex culture models, especially for human cells, for which engineering new species is not an option.

Many scientists use epithelial cell lines that are exquisitely sensitive. The slightest shift in their microenvironment can alter the results — something a newcomer might not spot. It is common for even a seasoned scientist to struggle with cell lines and culture conditions, and unknowingly introduce changes that will make it seem that a study cannot be reproduced. Cells in culture are often immortal because they rapidly acquire epigenetic and genetic changes. As such cells divide, any alteration in the media or microenvironment — even if minuscule — can trigger further changes that skew results. Here are three examples from my own experience.

My collaborator, Ole Petersen, a breast-cancer researcher at the University of Copenhagen, and I have spent much of our scientific careers learning how to maintain the functional differentiation of human and mouse mammary epithelial cells in culture. We have succeeded in cultivating human breast cell lines for more than 20 years, and when we use them in the three-dimensional assays that we developed [6, 7], we do not observe functional drift. But our colleagues at biotech company Genentech in South San Francisco, California, brought to our attention that they could not reproduce the architecture of our cell colonies, and the same cells seemed to have drifted functionally. The collaborators had worked with us in my lab and knew the assays intimately. When we exchanged cells and gels, we saw that the problem was in the cells, procured from an external cell bank, and not the assays.

Another example arose when we submitted what we believe to be an exciting paper for publication on the role of glucose uptake in cancer progression. The reviewers objected to many of our conclusions and results because the published literature strongly predicted the prominence of other molecules and pathways in metabolic signalling. We then had to do many extra experiments to convince them that changes in media glucose levels, or whether the cells were in different contexts (shapes) when media were kept constant, drastically changed the nature of the metabolites produced and the pathways used [8].

A third example comes from a non-malignant human breast cell line that is now used by many for three-dimensional experiments. A collaborator noticed that her group could not reproduce its own data convincingly when using cells from a cell bank. She had obtained the original cells from another investigator. And they had been cultured under conditions in which they had drifted. Rather than despairing, the group analysed the reasons behind the differences and identified crucial changes in cell-cycle regulation in the drifted cells. This finding led to an exciting, new interpretation of the data that were subsequently published [9].

Repeat after me

The right thing to do as a replicator of someone else's findings is to consult the original authors thoughtfully. If e-mails and phone calls don't solve the problems in replication, ask either to go to the original lab to reproduce the data together, or invite someone from their lab to come to yours. Of course replicators must pay for all this, but it is a small price in relation to the time one will save, or the suffering one might otherwise cause by declaring a finding irreproducible.

When researchers at Amgen, a pharmaceutical company in Thousand Oaks, California, failed to replicate many important studies in preclinical cancer research, they tried to contact the authors and exchange materials. They could confirm only 11% of the papers [3]. I think that if more biotech companies had the patience to send someone to the original labs, perhaps the percentage of reproducibility would be much higher.

It is true that, in some cases, no matter how meticulous one is, some papers do not hold up. But if the steps above are taken and the research still cannot be reproduced, then these non-valid findings will eventually be weeded out naturally when other careful scientists repeatedly fail to reproduce them. But sooner or later, the paper should be withdrawn from the literature by its authors.

One last point: all journals should set aside a small space to publish short, peer-reviewed reports from groups that get together to collaboratively solve reproducibility problems, describing their trials and tribulations in detail. I suggest that we call this ISPA: the Initiative to Solve Problems Amicably.

http://www.nature.com/news/reproducibility-the-risks-of-the-replication-drive-1.14184

Quote

Raghuveer Parthasarathy pointed me to an article in Nature by Mina Bissell, who writes, “The push to replicate findings could shelve promising research and unfairly damage the reputations of careful, meticulous scientists.”

I can see where she’s coming from: if you work hard day after day in the lab, it’s gotta be a bit frustrating to find all your work questioned, for the frauds of the Dr. Anil Pottis and Diederik Stapels to be treated as a reason for everyone else’s work to be considered guilty until proven innocent.

That said, I pretty much disagree with Bissell’s article, and really the best thing I can say about it is that I think it’s a good sign that the push for replication is so strong that now there’s a backlash against it. Traditionally, leading scientists have been able to simply ignore the push for replication. If they are feeling that the replication movement is strong enough that they need to fight it, that to me is good news.

I’ll explain a bit in the context of Bissell’s article. She writes:

Articles in both the scientific and popular press have addressed how frequently biologists are unable to repeat each other’s experiments, even when using the same materials and methods. But I am concerned about the latest drive by some in biology to have results replicated by an independent, self-appointed entity that will charge for the service. The US National Institutes of Health is considering making validation routine for certain types of experiments, including the basic science that leads to clinical trials.

But, as she points out, such replications will be costly. As she puts it:

Isn’t reproducibility the bedrock of the scientific process? Yes, up to a point. But it is sometimes much easier not to replicate than to replicate studies, because the techniques and reagents are sophisticated, time-consuming and difficult to master. In the past ten years, every paper published on which I have been senior author has taken between four and six years to complete, and at times much longer. People in my lab often need months — if not a year — to replicate some of the experiments we have done . . .

So, yes, if we require everything to be replicated, it will reduce the resources that are available to do new research.

Replication is always a concern when dealing with systems as complex as the three-dimensional cell cultures routinely used in my lab. But with time and careful consideration of experimental conditions, they [Bissell's students and postdocs], and others, have always managed to replicate our previous data.

If all science were like Bissell’s, I guess we’d be in great shape. In fact, given her track record, perhaps we could give some sort of lifetime seal of approval to the work in her lab, and agree in the future to trust all her data without need for replication.

The problem is that there appear to be labs without 100% successful replication rates. Not just fraud (although, yes, that does exist); and not just people cutting corners, for example, improperly excluding cases in a clinical trial (although, yes, that does exist); and not just selection bias and measurement error (although, yes, these do exist too); but just the usual story of results that don’t hold up under replication, perhaps because the published results just happened to stand out in an initial dataset (as Vul et al. pointed out in the context of imaging studies in neuroscience) or because certain effects are variable and appear in some settings and not in others. Lots of reasons. In any case, replications do fail, even with time and careful consideration of experimental conditions. In that sense, Bissell indeed has to pay for the sins of others, but I think that’s inevitable: in any system that is less than 100% perfect, some effort ends up being spent on checking things that, retrospectively, turned out to be ok.
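
To make the "stand out in an initial dataset" point concrete, here is a toy simulation (my own sketch, not taken from Bissell or from Vul et al.; every number in it is made up for illustration): many small true effects, noisy measurements, and "publication" only of the estimates that happen to cross a threshold. The selected estimates then fall back toward the truth on honest replication, with no fraud and no sloppiness anywhere in the pipeline.

```python
import random
import statistics

random.seed(42)

# Toy model: every effect is genuinely small, but measurement noise is large,
# and only estimates that cross a significance-style threshold get "published".
N_EFFECTS = 10_000
TRUE_EFFECT = 0.1   # the real effect size, everywhere
NOISE_SD = 1.0      # noise dwarfs the effect
THRESHOLD = 2.0     # roughly a z > 2 publication filter

published = []
replications = []
for _ in range(N_EFFECTS):
    original = TRUE_EFFECT + random.gauss(0, NOISE_SD)
    if original > THRESHOLD:          # selected for publication
        published.append(original)
        # an honest replication simply redraws the noise
        replications.append(TRUE_EFFECT + random.gauss(0, NOISE_SD))

print(f"published estimates:  mean = {statistics.mean(published):.2f}")
print(f"replication attempts: mean = {statistics.mean(replications):.2f}")
# The published mean necessarily sits above the threshold, while the
# replications cluster around the small true effect: the originals "fail
# to replicate" purely because of selection on noisy estimates.
```

The point of the sketch is that a high failed-replication rate is exactly what selection on noise predicts, before you invoke any misconduct at all.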

Later on, Bissell writes:

The right thing to do as a replicator of someone else’s findings is to consult the original authors thoughtfully. If e-mails and phone calls don’t solve the problems in replication, ask either to go to the original lab to reproduce the data together, or invite someone from their lab to come to yours. Of course replicators must pay for all this, but it is a small price in relation to the time one will save, or the suffering one might otherwise cause by declaring a finding irreproducible.

Hmmmm . . . maybe . . . but maybe a simpler approach would be for the authors of the article to describe their methods clearly (with videos, for example, if that is necessary to demonstrate details of lab procedure) in the public record.

After all, a central purpose of scientific publication is to communicate with other scientists. If your published material is not clear—if a paper can’t be replicated without emails, phone calls, and a lab visit—this seems like a problem to me! If outsiders can’t replicate the exact study you’ve reported, they could well have trouble using your results in future research. To put it another way, if certain findings are hard to get, requiring lots of lab technique that is nowhere published—and I accept that this is just the way things can be in modern biology—then these findings won’t necessarily apply in future work, and this seems like a serious concern.

To me, the solution is not to require e-mails, phone calls, and lab visits—which, really, would be needed not just for potential replicators but for anyone doing further research in the field—but rather to expand the idea of “publication” to go beyond the current standard telegraphic description of methods and results, and beyond the current standard supplementary material (which is not typically a set of information allowing you to replicate the study; rather, it’s extra analyses needed to placate the journal referees), to include a full description of methods and data, including videos and as much raw data as is possible (with some scrambling if human subjects is an issue). No limits—whatever it takes! This isn’t about replication or about pesky reporting requirements, it’s about science. If you publish a result, you should want others to be able to use it.

Of course, I think replicators should act in good faith. If certain aspects of a study are standard practice and have been published elsewhere, maybe they don’t need to be described in detail in the paper or the supplementary material; a reference to the literature could be enough. Indeed, to the extent that full descriptions of research methods are required, this will make life easier for people to describe their setups in future papers.

Bissell points out that describing research methods isn’t always easy:

Twenty years ago . . . Biologists were using relatively simple tools and materials, such as pre-made media and embryonic fibroblasts from chickens and mice. The techniques available were inexpensive and easy to learn, thus most experiments would have been fairly easy to double-check. But today, biologists use large data sets, engineered animals and complex culture models . . . Many scientists use epithelial cell lines that are exquisitely sensitive. The slightest shift in their microenvironment can alter the results — something a newcomer might not spot. It is common for even a seasoned scientist to struggle with cell lines and culture conditions, and unknowingly introduce changes that will make it seem that a study cannot be reproduced. . . .

If the microenvironment is important, record as much of it as you can for the publication! Again, if it really takes a year for a study to be reproduced, if your finding is that fragile, this is something that researchers should know about right away from reading the article.

Bissell gives an example of “a non-malignant human breast cell line that is now used by many for three-dimensional experiments”:

A collaborator noticed that her group could not reproduce its own data convincingly when using cells from a cell bank. She had obtained the original cells from another investigator. And they had been cultured under conditions in which they had drifted. Rather than despairing, the group analysed the reasons behind the differences and identified crucial changes in cell-cycle regulation in the drifted cells. This finding led to an exciting, new interpretation of the data that were subsequently published.

That’s great! And that’s why it’s good to publish all the information necessary so that a study can be replicated. That way, this sort of exciting research could be done all the time.

Costs and benefits

The other issue that Bissell is (implicitly) raising is a cost-benefit calculation. When she writes of the suffering caused by declaring a finding irreproducible, I assume that ultimately she’s talking about a patient who will get sick or even die because some potential treatment never gets developed or never becomes available because some promising bit of research got dinged. On the other hand, when research published in a top journal does not hold up, this can waste thousands of hours of researchers’ time, spending resources that otherwise could have been used on productive research.

Indeed, even when we talk about reporting requirements, we are really talking about tradeoffs. Clearly writing up one’s experimental protocol (and maybe including a Youtube) and setting up data in archival form takes work; it represents time and effort that could otherwise be spent on research (or even on internal replication). On the other hand, when methods and data are not clearly set out in the public record, this can result in wasted effort by lots of other labs, following false leads as they try to figure out exactly how the experiment was done.

I can’t be sure, but my guess is that, for important, high-profile research, on balance it’s a benefit to put all the details in the public record. Sure, that takes some effort by the originating lab, but it might save lots more effort for each of dozens of other labs that are trying to move forward from the published finding.

Here’s an example. Bissell writes:

When researchers at Amgen, a pharmaceutical company in Thousand Oaks, California, failed to replicate many important studies in preclinical cancer research, they tried to contact the authors and exchange materials. They could confirm only 11% of the papers. I think that if more biotech companies had the patience to send someone to the original labs, perhaps the percentage of reproducibility would be much higher.

I worry about this. If people can’t replicate a published result, what are we supposed to make of it? If the result is so fragile that it only works under some conditions that have never been written down, what is the scientific community supposed to do with it?

And there’s this:

It is true that, in some cases, no matter how meticulous one is, some papers do not hold up. But if the steps above are taken and the research still cannot be reproduced, then these non-valid findings will eventually be weeded out naturally when other careful scientists repeatedly fail to reproduce them. But sooner or later, the paper should be withdrawn from the literature by its authors.

Yeah, right. Tell it to Daryl Bem.

What happened?

I think that where Bissell went wrong is by thinking of replication in a defensive way, and thinking of the result being to “damage the reputations of careful, meticulous scientists.” Instead, I recommend she take a forward-looking view, and think of replicability as a way of moving science forward faster. If other researchers can’t replicate what you did, they might well have problems extending your results. The easier you make it for them to replicate, indeed the more replications that people have done of your work, the more they will be able, and motivated, to carry on the torch.

Nothing magic about publication

Bissell seems to be saying that if a biology paper is published, it should be treated as correct, even if outsiders can’t replicate it, all the way until the non-replicators “consult the original authors thoughtfully,” send emails and phone calls, and “either to go to the original lab to reproduce the data together, or invite someone from their lab to come to yours.” After all of this, if the results still don’t hold up, they can be “weeded out naturally from the literature”—but, even then, only after other scientists “repeatedly fail to reproduce them.”

This seems pretty clear: you need multiple failed replications, each involving thoughtful conversation, email, phone, and a physical lab visit. Until then, you treat the published claim as true.

OK, fine. Suppose we accept this principle. How, then, do we treat an unpublished paper? Suppose someone with a Ph.D. in biology posts a paper on Arxiv (or whatever is the biology equivalent), and it can’t be replicated? Is it ok to question the original paper, to treat it as only provisional, to label it as unreplicated? That’s ok, right? I mean, you can’t just post something on the web and automatically get the benefit of the doubt that you didn’t make any mistakes. Ph.D.’s make errors all the time (just like everyone else).

Now we can engage in some salami slicing. According to Bissell (as I interpret here), if you publish an article in Cell or some top journal like that, you get the benefit of the doubt and your claims get treated as correct until there are multiple costly, failed replications. But if you post a paper on your website, all you’ve done is make a claim. Now suppose you publish in a middling journal, say, the Journal of Theoretical Biology. Does that give you the benefit of the doubt? What about Nature Neuroscience? PNAS? Plos-One? I think you get my point. A publication in Cell is nothing more than an Arxiv paper that happened to hit the right referees at the right time. Sure, approval by 3 referees or 6 referees or whatever is something, but all they did is read some words and look at some pictures.

It’s a strange view of science in which a few referee reports is enough to put something into a default-believe-it mode, but a failed replication doesn’t count for anything. Bissell is criticizing replicators for not having long talks and visits with the original researchers, but the referees don’t do any emails, phone calls, or lab visits at all! If their judgments, based simply on reading the article, carry weight, then it seems odd to me to discount failed replications that are also based on the published record.

My view that we should focus on the published record (including references, as appropriate) is not legalistic or nitpicking. I’m not trying to say: Hey, you didn’t include that in the paper, gotcha! I’m just saying that, if somebody reads your paper and can’t figure out what you did, and can only do that through lengthy emails, phone conversations, and lab visits, then this is going to limit the contribution your paper can make.

As C. Glenn Begley wrote in a comment:

A result that is not sufficiently robust that it can be independently reproduced will not provide the basis for an effective therapy in an outbred human population. A result that is not able to be independently reproduced, that cannot be translated to another lab using what most would regard as standard laboratory procedures (blinding, controls, validated reagents etc) is not a result. It is simply a ‘scientific allegation’.

To which I would add: Everyone would agree that the above paragraph applies to an unpublished article. I’m with Begley that it also applies to published articles, even those published in top journals.

A solution that should make everyone happy

Or, to put it another way, maybe Bissell is right that if someone can’t replicate your paper, it’s no big deal. But it’s information I’d like to have. So maybe we can all be happy: all failed replications can be listed on the website of the original paper (then grumps and skeptics like me will be satisfied), but Bissell and others can continue to believe published results on the grounds that the replications weren’t careful enough. And, yes, published replications should be held to the same high standard. If you fail to replicate a result and you want your failed replication to be published, it should contain full details of your lab setup, with videos as necessary.

http://andrewgelman.com/2013/12/17/replication-backlash/

154

TBA

155

TBA (not sure this has any problems in it, don't remember any. Might just have to work through each code example as it comes)

156

NITIN JAIN is the big man in Lasalgaon, a dusty town a day’s drive from Mumbai that boasts it has Asia’s biggest onion market. With a trim moustache and a smartphone stuck to his ear he struts past a thousand-odd tractors and trucks laden with red onions. Farmers hurl armfuls at his feet to prove their quality. A gaggle of auctioneers, rival traders and scribes follow him, squabbling and yanking each other’s hair. Asked why onion prices have risen so much, Mr Jain relays the question to the market. “Why?” he bellows. His entourage laughs. He says that the price of India’s favourite vegetable is a mystery that no calculation can explain.

High food prices perturb some men and women even bigger than Mr Jain. Raghuram Rajan, the boss of India’s central bank, is grappling with high inflation caused in large part by food prices: wholesale onion prices soared by 278% in the year to October and the retail price of all vegetables shot up by 46%. The food supply chain is decades out of date and cannot keep up with booming demand. India’s rulers are watching the cost of food closely, too, ahead of an election due by May. Electoral folklore says that pricey onions are deadly for incumbent governments.

High food prices perturb some men and women even bigger than Mr Jain. Raghuram Rajan, the boss of India’s central bank, is grappling with high inflation caused in large part by food prices: wholesale onion prices soared by 278% in the year to October and the retail price of all vegetables shot up by 46%. The food supply chain is decades out of date and cannot keep up with booming demand. India’s rulers are watching the cost of food closely, too, ahead of an election due by May. Electoral folklore says that pricey onions are deadly for incumbent governments.

A year ago it seemed that India had bitten the bullet by permitting foreign firms to own majority stakes in domestic supermarkets. The decision came after a fierce political battle. Walmart, Carrefour and Tesco have been waiting for years to invest in India. They say they would revolutionise shopping. Only 2-3% of groceries are bought in formal stores, with most people reliant on local markets. They would also modernise logistics chains, either by investing themselves, or indirectly, by stimulating food producers to spend on factories, warehouses and trucks, and establish direct contracts with farmers, eliminating layers of middlemen.

On the ground little has happened. Foreign firms complain of hellish fine print, including a stipulation to buy from tiny suppliers. Individual Indian states can opt out of the policy—which is unhelpful if you want to build a national supermarket chain. In October Walmart terminated its joint venture with Bharti, an Indian group. India has reduced the beast of Bentonville to a state of bewilderment. Tesco has cut expatriate staff.

The reaction from politicians has been indifference. “We have liberalised…to the extent that we can. People have to accept this and decide whether they want to invest,” said Palaniappan Chidambaram, India’s finance minister. Despite the apparently obvious benefits of supermarkets and the experience of most other countries, few Indians seem to want change.

You’re not in Bentonville anymore

Just how bad is India’s food supply chain? To find out The Economist followed the journey of an onion from a field in the heart of onion country, in western India, to a shopping bag in Mumbai, a city of 18m onion-munchers. The trip suggests an industry begging for investment and reform.

“The system hasn’t changed much—it’s been the same since the 1970s,” says Punjaram Devkar, an elderly farmer in a white cap. For generations his forefathers have grown onions near a hamlet called Karanjgaon. He owns a crudely irrigated six-hectare (14-acre) plot, larger than the national average farm of just 1.2 hectares. He does not want to buy more land; unreliable electricity and labour mean “it is too hard to manage.” There are four onion crops each year—in a good season production is three times higher than in a bad one. To hedge his bets he also grows sugar cane. Costs have soared because of rising rural wages, which have doubled in three years. He says welfare schemes have made workers lazy. “They just play cards all day.”

Storage facilities amount to a wooden basket inside a shed—at this time of year onions perish within 15 days, although the variety grown in the spring can last eight months. From here one of Mr Devkar’s finest is thrown into a small trailer, along with the produce of nearby farms, and taken to Lasalgaon. The roads are mostly paved but the 32km (19-mile) journey takes a couple of hours in a rickety old tractor.

Lasalgaon neophytes will find their eyes water upon entering its market, a huge car park full of red onions, trucks and skinny farmers. Although the auction is run twice daily by an official body, it doesn’t look wholly transparent. Some farmers complain that Mr Jain and another trader dominate the trade (Mr Jain denies this). Prices vary wildly day by day and according to size and quality, which are judged in a split second by eye. The average price today is $0.33 per kilo.

Neither traders nor farmers agree why prices have risen so steeply of late. They blame climate change, the media, too much rain last year, too little rain this year, labour costs, an erratic export regime. “Our biggest problem is illiteracy,” says one farmer. “We don’t know how to use technology.” Most folk agree that India needs better cold storage but worry that it is too pricey or that it ruins the taste of onions.

Farmers must pay a 1% fee to the auction house and a 4% commission to the traders. Sometimes they also have to stump up for fees for packing and loading. That takes place at several depots surrounding the market where farmers must drop off their loads and pour them onto tarpaulins on the ground. The onions may wait there for days but once put into hessian sacks they are loaded onto trucks operated by separate haulage firms and owned by intricate webs of independent consortiums.

At 8pm Prabhakar Vishad, a 20-year veteran of the onion-express highway from Lasalgaon to Mumbai, climbs into a battered Tata truck with “Blow Horn” painted in big letters on the back. Over the years the roads have improved and power steering has made life easier. Still, it is dangerous work, says Mr Vishad, who had a bad crash last year. By 6am next morning he sets his bloodshot eyes on Vashi market on the outskirts of Mumbai. It handles 100-150 truckloads of onions a day—enough to satisfy India’s commercial capital.

Onions are sometimes unpacked, sorted and repacked, with wastage rates of up to 20%. By 9am the market is a teeming maze of 300-odd selling agents, who mainly act on behalf of middlemen, and several thousand buyers—who are either retailers or sub-distributors. Everyone stands ankle deep in onions of every size. The bidding process is opaque. The selling agents each drape a towel on their arm. To make a bid you stick your hand under the towel and grip their hand, with secret clenches denoting different prices. Average prices today are about $0.54 per kilo. If the seller likes your tickles you hail a porter. He carries your newly bought sacks on his head to a dispatch depot where another group of couriers takes them into the city.

“I’m crazy, like the guys you see in the movies. I don’t negotiate,” declares Sanjay Pingle. One of the market’s biggest agents, he charges the seller a 6.5% commission. The buyers pay loading charges on top of that and a fee to the market. He says business is tough—bad debts from customers run at a fifth of sales and he has to pay interest rates of 22% on his own debts. The solution to the onion shortage is obvious, he says. “In China they keep things in storage facilities—if India had the same facilities as China has, prices would be lower.” He says he has seen photographs of Chinese technology on his mobile phone.

By the afternoon thousands of cars and trucks are picking up small batches of onions to take them into Mumbai. In Chembur, a middle-class neighbourhood, Anburaj Madar runs a big sub-distributor. He handles 200 sacks a day which he sells to retailers and restaurants. He buys daily from Vashi market and has space to store only about 12 hours’ worth of stock. Rent is dear and he too reckons cold storage destroys the flavour of onions. He marks up his prices by perhaps 20% but says a chunk of what he buys has to be thrown away—it is either damaged or of inferior quality.

For the onions that do make the cut the next stop is a small shop down the road where they are sold for another mark-up of 10% or so. From here Indubai Kakdi is hand-selecting onions with elaborate care. Buck-toothed and ragged, she sells seven kilos a day from a wooden barrow; she makes a 10% margin. She says climate change has made prices more volatile.

Peeling back the layers of truth

The journey of an onion from Mr Devkar’s field to the end customer in Mumbai takes only a few days but is enough to make you weep. There are some underlying reasons why prices have risen—higher rural wages have pushed up farmers’ costs. But the system is horribly fiddly. Farms are tiny with no economies of scale. The supply chain involves up to five middlemen. The onion is loaded, sorted or repacked at least four times. Wastage rates, either from damage or weight loss as onions dry out, are a third or more. Because India has no modern food-processing industry, low-quality onions that could be turned into paste or sauces are thrown away. Retail prices are about double what farmers receive, although the lack of any standard grading of size or quality makes comparisons hard.
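
Taking only the stage mark-ups quoted above at face value, the compounding is easy to sketch in Python. This is a rough back-of-the-envelope: it ignores wastage, auction fees and commissions, and the lack of standard grading that the article warns makes such comparisons hard.

```python
# Compound the (approximate) mark-ups quoted in the article, stage by stage.
farm_gate = 0.33                    # $/kg, Lasalgaon auction average
vashi = 0.54                        # $/kg, Vashi wholesale average
sub_distributor = vashi * 1.20      # ~20% mark-up at the Chembur sub-distributor
shop = sub_distributor * 1.10       # ~10% mark-up at the local shop
barrow = shop * 1.10                # ~10% margin on the wooden barrow

# retail-to-farm-gate multiple
print(round(barrow / farm_gate, 1))  # → 2.4
```

The quoted mark-ups alone compound to roughly 2.4x the farm-gate price, broadly consistent with the article's "about double" once grading and wastage noise is allowed for.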

The system is volatile as well as inefficient. Traders who buy onions from farmers may hoard them, but for the supply chain as a whole far too little inventory is stored. As a result small variations in demand and supply are amplified and cause violent swings in price. In the first week of December 2013 prices fell again.

It is easy to see how heavy investment by supermarket chains and big food-producers—whether Indian or foreign—could make a difference. They would cut out layers from the supply chain, build modern storage facilities and probably prod farmers to consolidate their plots.

The shoppers of Chembur agree that Indian onions are the world’s tastiest but are fed up with price swings. No one mentions reform as a solution and until there is popular support and political leadership it is hard to see much changing. And what of the last stage of the onion’s odyssey, to the stomach? By one stall stands an elderly lady who says she likes the vegetable so much that she doesn’t bother to cook it. Instead she chomps on raw onions as if they were apples. At least someone has an eye on efficiency.

http://www.economist.com/news/business/21591650-walmart-carrefour-and-tesco-have-been-knocking-indias-door-without-much-luck-route

157

This thread's going to deal with examining the proofs of specific theorems / propositions / lemmata of interest in various subjects. Everything from "shit I learned but never appreciated in detail in freshman calculus" to "differential topology on Calabi-Yau manifolds expressed as Fourier series known to lie in the complexity class PLS" is fair game. **Here's the current list, organized by subject** (crossed-off indicates "already done in a reply", as in other threads):

*Elementary & Vector Calculus*

- Differential & Integral forms of the Chain Rule

- Theorem for Differentiating Inverses of Fxns

- univariate Taylor's Theorem

- multivariate Taylor's Theorem

- L'Hopital's Theorem

- Fundamental Theorem of Calculus

- Green's Theorem

- Stokes Theorem

- Divergence Theorem

- Clairaut's Theorem

- Change of Variables Theorem(s)

*Number Theory*

- "Product Rule" for Finite Summations (? unsure of categorization, is calculus-like)

*General Algorithms & Computational Complexity*

- PLS=NP implies NP=co-NP

- weak Nash PPAD-completeness

- strong Nash FIXP-completeness

- NP ≠ co-NP implies no NP-Complete problem in co-NP & no co-NP complete problem in NP

- Integer programming w/ a Totally Unimodular constraint Matrix is in P

- Cook's Theorem (direct proof that an NP-complete problem exists)

- Prime factorization: Shor's algorithm, Schnorr-Seysen-Lenstra algorithm, AKS primality test showing PRIMES is in P

- Hierarchy Theorems

- Approximation Algorithms: PCP Theorem

- Time/space complexity of Euclid's algorithm

*Optimization*

- Karush-Kuhn-Tucker conditions: basic sufficient conditions for their use

- KKT conditions: invexity is the broadest class of relevant fxns

- Stochastic Programming is Irrational (d/n respect Stochastic Dominance?)

- Simplex Algorithm convergence

- Simplex Algorithm exponential worst-case complexity

- Simplex polynomial smoothed complexity

- Ellipsoid Method Polynomial-time Convergence

- Convex Programming Polynomial-time Convergence

- No Free Lunch Theorem

- Benders Decomposition convergence proof

*Decision Theory & Mathematical Psychology*

- Cumulative-prospect Theory respects 1st-order Stochastic Dominance

*Stochastic Processes*

- Under ??? conditions (irreducibility?) a Markov Chain has Steady-state Distribution

- MCMC converges to Arbitrary Distribution

*Logic*

- Gödel's Completeness Theorem

- Gödel's Incompleteness Theorems

*Econ & Finance*

- Debreu's Existence Theorem

- No Trade Theorem

*Game Theory*

- Existence for: Nash, Bayes-Nash, Perfect, Trembling-Hand, etc equilibria

- 3+ player rational games may have only irrational solutions

- 2 player rational games always have rational solutions

- Revelation Principle

*Graph Theory*

- Four-color Theorem

- Sperner's Lemma

*Fixed-Point Theorems*

- Tarski's Fixed Point Theorem

- Banach's Fixed Point Theorem

- Brouwer's Fixed Point Theorem

- Kakutani's Fixed Point Theorem

- C^{1} and C^{N} versions of Rouche's Theorem

- Schauder Fixed Point Theorem

*Stochastic Calculus*

- Ito's Lemma

*Topology*

- Tychonoff's Theorem

- Borsuk-Ulam Theorem (relatedly: Brouwer's Theorem)

- Baire Category Theorem

- Jordan Curve Theorem

- Ham Sandwich Theorem

- Poincaré conjecture

*Convex Analysis*

- Separating Hyperplane Theorem

- Farkas' Lemma

*Fourier Analysis*

- "Large Classes" of Fxns can be Fourier Expanded

*Functional Analysis*

- Metric spaces are completable

- Normed spaces are completable

- Hahn-Banach Theorem

- Banach Fixed Point Theorem

- Open Mapping Theorem (Functional Analysis)

- Closed Graph Theorem

- Baire Category Theorem

*Analysis*

- Implicit Fxn Theorem

*ODEs and PDEs*

- Poincaré-Bendixson Theorem

- Hartman-Grobman Theorem

*Chaos Theory*

- (In)Equivalences b/w Defns of Chaos in Elaydi

- Horseshoe Map is Chaotic (Wiggins)

- Shift Map is Chaotic (Wiggins)

*Probability & Measure*

- Central Limit Theorem(s)

- Strong Law of Large Numbers

*Algebra & Galois Theory*

- Classification/Factorization Theorems for (Finite) Abelian Groups

- Abel-Ruffini Theorem (general quintic poly's do not have closed-form solutions in radicals)

- Examples/proofs of Non-unique-factorization Domains

- Lagrange's Theorem (in group theory)

- Sylow Theorems

*Complex Analysis*

- Integral Theorems (Cauchy's Integral Theorem & the Residue Theorem: the ones that say a closed path integral gives 0 or some weird sum involving 2 pi i depending on how many badly behaved points the fxn has)

- Open Mapping Theorem

- C^{1} and C^{N} versions of Rouche's Theorem

- Proof of Euler's CIS (cos + i sin) Formula

*Statistical Inference*

- Ugly Duckling Theorem

- Some sorta general proof that regularization can be done w/ optimization or priors

*Physics*

- Gauss's Law
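
As a warm-up for one of the entries above (time/space complexity of Euclid's algorithm), the division-step count can be observed directly: it is O(log min(a, b)), with consecutive Fibonacci numbers as the worst case (Lamé's theorem). A minimal sketch:

```python
# Euclid's algorithm with a step counter: the number of division steps
# is worst-case logarithmic, maximized on consecutive Fibonacci numbers.
def gcd_steps(a, b):
    steps = 0
    while b:
        a, b = b, a % b
        steps += 1
    return a, steps

print(gcd_steps(89, 55))  # consecutive Fibonacci numbers → (1, 9)
```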


158

Gonna use this thread to keep track of which problems I attempt and do not finish, or skip entirely out of laziness. Also questions that arise in the course of reading an argument or proof that I don't fully resolve. Hopefully this will encourage me to return to them at a later date:

**Electromagnetism**

- Electromagnetism, Purcell & Morin, Ch. 1, # 7. (asks for me to write some code; was feeling too lazy to do this)

**Chaos**

- Chaos Theory, Elaydi, Sec. 1.4-1.5, # 4. (stuck on a sub-part of the proof of asymptotic instability of one of the equilibrium points. Had a neat idea for solving it but haven't made it work in practice yet)

**Functional Analysis**

- Functional Analysis, Kreyszig, Sec. 1.6 # 5(b). (haven't figured this one out; asks for an example of two homeomorphic spaces where one is complete and the other incomplete.)

**Classical Mechanics**

- Classical Mechanics, Taylor, Ch. 1, # 43 a). I solved the problem fully, but got stuck during one of the two approaches---the more rigorous and potentially satisfying of the two, imo---that I took in trying to derive the polar unit vector hat(phi). There's something I'm missing about how to use the normalization condition to jump from where I got to the final result.

- Classical Mechanics, Taylor, Ch. 2, # 50. I 'solved the problem' in the sense Taylor would have expected of an undergrad in his class, I think, but my justification for interchanging differentiation and infinite summation was just a quick reference to a theorem from real/complex/metric analysis, wasn't carefully detailed, and I didn't prove it independently, nor even prove that the preconditions (uniform convergence or infinite radius of convergence?) held for e^z, just kind of said they did. Shouldn't be too hard to go back later and cite/prove a relevant, specific result, and establish its premises; for completeness of understanding I think I should do so.
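
For the record, the standard result to cite here is term-by-term differentiation of power series inside the radius of convergence (where convergence is uniform on compact subsets); for e^z the radius is infinite by the ratio test:

```latex
% Standard result: a power series \sum a_n z^n may be differentiated
% term-by-term inside its radius of convergence R, because convergence
% is uniform on compact subsets of the open disc |z| < R.  For e^z the
% ratio test gives R = \infty:
\[
  \lim_{n \to \infty} \left| \frac{z^{n+1}/(n+1)!}{z^{n}/n!} \right|
  = \lim_{n \to \infty} \frac{|z|}{n+1} = 0 \quad \text{for every } z,
\]
% so the interchange is valid everywhere:
\[
  \frac{d}{dz} \sum_{n=0}^{\infty} \frac{z^{n}}{n!}
  = \sum_{n=1}^{\infty} \frac{z^{n-1}}{(n-1)!}
  = e^{z}.
\]
```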

- in Taylor's Classical Mechanics, he differentiates a function f(y') by y and claims the result is 0; similarly he differentiates f(y) by y' and gets 0. But y clearly determines y', and the form of y' constrains the form of y implicitly as well, so why don't we have to get all chain rule on this bitch?
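
A sketch of the standard resolution: in this notation f is a function of three independent slots, f = f(u, v, x), merely evaluated along a curve at u = y(x), v = y'(x). The partials differentiate the slots, so if f never uses the y-slot then ∂f/∂y = 0 by definition; the dependence between y and y' enters only through total derivatives along the curve, which is where the chain rule does act:

```latex
% The total derivative along the curve is where y and y' interact:
\[
  \frac{d}{dx}\, f\bigl(y(x),\, y'(x),\, x\bigr)
  = \frac{\partial f}{\partial y}\, y'
  + \frac{\partial f}{\partial y'}\, y''
  + \frac{\partial f}{\partial x},
\]
% e.g. in the Euler-Lagrange equation
\[
  \frac{\partial f}{\partial y} - \frac{d}{dx}\,\frac{\partial f}{\partial y'} = 0 .
\]
```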

**Smooth Manifolds**

- Lee2, Ch. 1, Lem. 1.6: every top n-mani has a cntble basis of precompact coord balls. At the end of p. 8, how do we know the collection of inv imgs of elems of *B* gives us precompact elems in *M*? Probably easy but need to write it out, can't see it immediately.

- Lee2, Ch. 1, in-chapter prob # 1.2. Sketched some ideas on how to show 2nd countability & Hausdorff-ness of *RP*^{n}, but I left a lot of detail unclear. Specifically --- Hausdorff q's: what epsilon should be chosen for the form of the sets I defined in the Hausdorff argument? And are those sets actually open, or do I need to perturb each component to get an open set? 2nd count q's: I think I guessed the correct basis, but I didn't go to any effort to show that it worked, and it has some similar problems to the Hausdorff argument, i.e. it relies on perturbing elements of reps of linear 1-d subspaces. Show rigorously that this does what it should do? Seems like a common theme is I could use a clean formal test for equivalence between the linear subspaces rep'd by a vector; I think just such a test is the "can multiply by a nonzero constant" property, i.e. this is a characterization of reps for a 1-d lin subspace.

**Topological Manifolds**

- Lee1, Ch. 2, Ex 2.28: got a little lazy and identified the discontinuity in arcsin graphically. Also didn't really show injectivity or surjectivity, just kind of stated them as well-known properties of the relevant trig fxns and complex exponential map, resp. Should return and show this stuff analytically.

**Calculus of Variations**

- Fox, Ch. 2, Sec 4, proof of Lemma 2. Fox wants to argue that the product (non-integral) term in a certain integration-by-parts vanishes; the product includes terms of the form t(b)^2/u(b) and t(a)^2/u(a), and we know the numerators are 0, but nowhere does Fox show that u(a), u(b) are nonzero. Later in the chapter he proves that u(x) "cannot have a double root," by which I think he means that only u(x)=0 or u'(x)=0 can be true, not both, for any given x. He remarks that this shows that "both t(b)^2/u(b) and t(a)^2/u(a) vanish since t(a) = t(b) = 0 by hypothesis." I have no idea why he thinks what he's shown implies what he's said it's shown; maybe there's some kind of Taylor expansion & limiting argument being made? He's generally pretty good about spelling out the details though, so I wonder if I'm not just making a stupid oversight.
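
One candidate for the Taylor-expansion-and-limit argument suspected above (my reconstruction, not necessarily Fox's): if u(b) ≠ 0 the term vanishes outright since t(b) = 0; if u(b) = 0, the "no double root" result forces u'(b) ≠ 0, and the boundary term can be read as a limit:

```latex
% Both numerator and denominator vanish at x = b, so by L'Hopital
% (equivalently a first-order Taylor expansion about b),
\[
  \lim_{x \to b} \frac{t(x)^{2}}{u(x)}
  = \lim_{x \to b} \frac{2\, t(x)\, t'(x)}{u'(x)}
  = \frac{2\, t(b)\, t'(b)}{u'(b)} = 0,
\]
% using t(b) = 0 again; the same argument applies at x = a.
```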


159

Gonna fill this thread up with videos/wikis/articles/etc that provide accessible ways for remembering/understanding/deriving various elementary facts. This video on deriving the most commonly used values in the unit circle, for example:

160

song is about chlamydia

161

162

Computer scientists at the Harvard School of Engineering and Applied Sciences (SEAS) and the Wyss Institute for Biologically Inspired Engineering at Harvard University have joined forces to put powerful probabilistic reasoning algorithms in the hands of bioengineers.

**In a new paper presented at the Neural Information Processing Systems conference on December 7, Ryan P. Adams and Nils Napp have shown that an important class of artificial intelligence algorithms could be implemented using chemical reactions.**

These algorithms, which use a technique called "message passing inference on factor graphs," are a mathematical coupling of ideas from graph theory and probability. They represent the state of the art in machine learning and are already critical components of everyday tools ranging from search engines and fraud detection to error correction in mobile phones.

**Adams' and Napp's work demonstrates that some aspects of artificial intelligence (AI) could be implemented at microscopic scales using molecules. In the long term, the researchers say, such theoretical developments could open the door for "smart drugs" that can automatically detect, diagnose, and treat a variety of diseases using a cocktail of chemicals that can perform AI-type reasoning.**

"We understand a lot about building AI systems that can learn and adapt at macroscopic scales; these algorithms live behind the scenes in many of the devices we interact with every day," says Adams, an assistant professor of computer science at SEAS whose Intelligent Probabilistic Systems group focuses on machine learning and computational statistics. **"This work shows that it is possible to also build intelligent machines at tiny scales, without needing anything that looks like a regular computer. This kind of chemical-based AI will be necessary for constructing therapies that sense and adapt to their environment. The hope is to eventually have drugs that can specialize themselves to your personal chemistry and can diagnose or treat a range of pathologies."**

Adams and Napp designed a tool that can take probabilistic representations of unknowns in the world (probabilistic graphical models, in the language of machine learning) and compile them into a set of chemical reactions that estimate quantities that cannot be observed directly. The key insight is that the dynamics of chemical reactions map directly onto the two types of computational steps that computer scientists would normally perform in silico to achieve the same end.

This insight opens up interesting new questions for computer scientists working on statistical machine learning, such as how to develop novel algorithms and models that are specifically tailored to tackling the uncertainty molecular engineers typically face. In addition to the long-term possibilities for smart therapeutics, it could also open the door for analyzing natural biological reaction pathways and regulatory networks as mechanisms that are performing statistical inference. Just like robots, biological cells must estimate external environmental states and act on them; designing artificial systems that perform these tasks could give scientists a better understanding of how such problems might be solved on a molecular level inside living systems.

"There is much ongoing research to develop chemical computational devices," says Napp, a postdoctoral fellow at the Wyss Institute, working on the Bioinspired Robotics platform, and a member of the Self-organizing Systems Research group at SEAS. Both groups are led by Radhika Nagpal, the Fred Kavli Professor of Computer Science at SEAS and a Wyss core faculty member. At the Wyss Institute, a portion of Napp's research involves developing new types of robotic devices that move and adapt like living creatures.

"What makes this project different is that, instead of aiming for general computation, we focused on efficiently translating particular algorithms that have been successful at solving difficult problems in areas like robotics into molecular descriptions," Napp explains. "For example, these algorithms allow today's robots to make complex decisions and reliably use noisy sensors. It is really exciting to think about what these tools might be able to do for building better molecular machines."

Indeed, the field of machine learning is revolutionizing many areas of science and engineering. The ability to extract useful insights from vast amounts of weak and incomplete information is not only fueling the current interest in "big data," but has also enabled rapid progress in more traditional disciplines such as computer vision, estimation, and robotics, where data are available but difficult to interpret. Bioengineers often face similar challenges, as many molecular pathways are still poorly characterized and available data are corrupted by random noise.

Using machine learning, these challenges can now be overcome by modeling the dependencies between random variables and using them to extract and accumulate the small amounts of information each random event provides.

"Probabilistic graphical models are particularly efficient tools for computing estimates of unobserved phenomena," says Adams. "It's very exciting to find that these tools map so well to the world of cell biology."

http://www.sciencedaily.com/releases/2013/12/131212160349.htm?utm_source=feedburner&utm_medium=email&utm_campaign=Feed%3A+sciencedaily%2Fcomputers_math%2Fstatistics+%28ScienceDaily%3A+Computers+%26+Math+News+--+Statistics%29
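As a toy illustration of the "message passing inference on factor graphs" the article keeps referencing (my own minimal sum-product sketch with made-up factor values, not the paper's chemical implementation), here is the smallest possible example, with one message and one marginal:

```python
# Sum-product message passing on a minimal factor graph:
# p(x, y) proportional to f1(x) * f2(x, y), with x and y binary.
# The factor values below are illustrative, not taken from the paper.

f1 = {0: 0.6, 1: 0.4}                       # unary factor on x
f2 = {(0, 0): 0.9, (0, 1): 0.1,
      (1, 0): 0.2, (1, 1): 0.8}             # pairwise factor on (x, y)

# Message from factor f2 to variable y: marginalize x out of f1(x) * f2(x, y).
msg_to_y = {y: sum(f1[x] * f2[(x, y)] for x in (0, 1)) for y in (0, 1)}

# Normalizing the incoming message yields the exact marginal p(y)
# (exact here because the graph is a tree).
Z = sum(msg_to_y.values())
marginal_y = {y: m / Z for y, m in msg_to_y.items()}
```

On larger graphs the same two operations (pointwise products of incoming messages and marginalizing sums) just repeat along every edge; as I read it, the paper's claim is that both operations map naturally onto chemical reaction dynamics.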

163

164

165

Ch9~

13,14,15,16,17 p. 298-299

4,11 p. 301-303

1,2,4,5 p. 306-307

4,7,11,17 p. 311-313

1,5,6,7 p. 315

6, 8, 13, 22, 29, 33, 34, 35, 43 p. 330-335

Ch13~

1,5,7,8 p. 519

3,7,12,20 p. 529-531

1,2,4 p. 535-536

1,2,5,6 p. 545

1,6,10,11 p. 551-552

1,2,3,9,10 p. 555-557

Ch14~

1,3,5,7,9 p. 566-567

2,5,7,20,31 p. 581-585

8,11,12,15,17 p. 589-591

3,5,7,8 p. 595-596

6,10,11,12,13 p. 603-606

13, 15, 18, 33, 37, 38, 44, 51 p. 617-624

2,3,4,5 p. 635-639

2,5,6 p. 644-645

2,8,12,14,15 p. 652-654

166

With all the talk lately about how the bacteria in the gut affect health and disease, it's beginning to seem like they might be in charge of our bodies. But we can have our say, by what we eat.

"It's a landmark study," says Rob Knight, a microbial ecologist at the University of Colorado, Boulder, who was not involved with the work. "It changes our view of how rapidly the microbiome can change."

Almost monthly, a new study suggests a link between the bacteria living in the gut and diseases ranging from obesity to autism, at least in mice. Researchers have had trouble, however, pinning down connections between health and these microbes in humans, in part because it’s difficult to make people change their diets for the weeks and months researchers thought it would take to alter the gut microbes and see an effect on health.

The scientists isolated DNA and other molecules, as well as bacteria, from stool samples from before, during, and after the experiment. In this way, they could determine which bacterial species were present in the gut and what they were producing. The researchers also looked at gene activity in the microbes.

From an evolutionary perspective, the fact that gut bacteria can help buffer the effects of a rapid change in diet, quickly revving up different metabolic capacities depending on the meal consumed, may have been quite helpful for early humans, David says. But this flexibility also has possible implications for health today.

"This is a very important aspect of a very hot area of science," writes Colin Hill, a microbiologist at University College Cork in Ireland, who was not involved with the work.

But how it should be shaped is still up in the air.

So Hill's best advice for now: "People should ideally consume a diverse diet, with adequate nutrients and micronutrients—whether it's derived from animal or plant or a mixed diet."

http://news.sciencemag.org/biology/2013/12/extreme-diets-can-quickly-alter-gut-bacteria

167

7,

26, 14, 12, 28, 7, 1, 17, 16 from Ch 2

168

2,5 from Ch 1 (Levin)

TBA (K&K)

TBA (Penrose)

169

7,8,13,15,19 from quantum Ch1 (Levin)

TBA (Griffiths)

170

5,7,21 from Ch3

27,31,45 from Ch4

11,41,45 from Ch5

171

8,9,10 from Ch 1

172

fffffffffffffffffffffffuuuuuuuuuuuuuuuuuuuuuuuuuuuccccckkkkkkkkkkkkkkkkkkkkkkkkkk

173

Text contains no problems! Will assign theorems/propositions/examples to be worked out in detail instead.

174

Introduction to Topological Manifolds by Lee (Lee1).

Geometrical Methods of Mathematical Physics by Schutz.

Introduction to Smooth Manifolds by Lee (Lee2).

4,14,19 from Lee1, Ch1

Ex. 2.1 from Schutz

Appendices (topol-alg-calc) proofs of statements/theorems/exercises, p. 540-596, Lee2

Ex. 1.1-1.6 from Lee2, (middle of) Ch 1

1,3,4,5,9 from Lee2, (end of) Ch1

175

Sec. 1.2 # 5, 8, 9, 13

Sec. 1.3 # 1, 5, 10, 20

Sec. 1.4 # 4, 6, 8, 14, 16

Sec 2.1 # 1, 6, 9, 10

Sec 2.2 # 2, 3, 6, 7, 12

Sec 2.3 # 1, 2, 12, 13, 14

Sec 2.4 # 1, 2, 3, 6

Sec 2.5 # 3, 7, 9, 11, 12

Sec 2.6 # 7, 8, 9, 10

Sec 2.7 # 2, 6, 7

Sec 2.8 # 1, 2, 3

Sec 3.1 # 1, 5, 7, 8

Sec 3.2 # 3, 6, 8

Sec 3.3 # 1, 4, 5, 8, 11

Sec 3.4 # 3, 5, 6, 7, 9

Sec 3.5 # 3, 5, 9, 12, 14

Sec 3.6 # 2, 6, 8, 9

Sec 3.7 # 3, 5, 6, 8, 9

Sec 3.8 # (write out proof of CLT in detail)

176

(TBA)

177

4,12,14 from 1.6

4,8,13 from 1.7

5,6,16 from 1.8

1,2 from 1.9

2,3,15 from 2.1-2.2

3,6,15 from 2.3-2.4

5,6,7 from 2.5

1,2,3 from 2.6

1,15,18 from 3.1-3.2

3,5 from 3.3

10,14 from 3.4

1,3,11 from 3.5

6,7,10 from 3.6

1,14,15 from 3.7

1,2,6 from 4.1-4.2

1,3,4 from 4.3-4.4

8,10,12 from 4.5-4.7

8,13,15 from 4.8

2,13,14 from 4.9-4.10

2,6,7 from 4.11

1,4,6 from 5.1

2,9,13 from 5.2

5,8,9 from 5.3

1,5,9 from 5.4-5.5

10,15,16 from 6.1-6.2

3,7,15 from 6.3

1,5,10 from 6.4

8,11 from 7.1-7.2

4,7,9 from 7.3-7.4

8,9 from 7.5

4,9 from 7.6

178

Ch2~

1,3,7 p. 83

4,5,7,9 p. 91-92

5,7,11,12,18 p. 100-102

1,3,8,11,12 p. 111-112

2,4,8,10 p. 118

6,7,9,10,11 p. 126-129

2,4,6,9,12 p. 133-136

1,3,5,6 p. 144-145

1,3,4,7 p. 145-146 (supp: topol groups)

Ch3~

2,4,5,11 p. 152

5,8,10,12 p. 157-159

1,6,7,8,9 p. 162-163

7,9,10,11,13 p. 170-172

1,3,4,6 p. 177-178

3,4,5,7 p. 181-182

4,7,8,11 p. 186

1,4,5,10 p. 187-188 (supp: nets)

Ch4~

1,3,11,13,15 p. 194-195

1,5,7 p. 199-200

4,5,6,10 p. 205-207

1,2,3,7,10 p. 212-214

1,3,4,6 p. 218

1,3,5,7,8 p. 223-224

1,3,4 p. 227

1,3,4,7 p. 228-229 (supp: basics review)

Ch5~

1,2,4 p. 235-237

3,4,5,9 p. 241-242

Ch6~

2,5,6 p. 248

1,4,8 p. 260-261

2 p. 262

Ch7~

2,3,7 p. 270-271

1,2,3 p. 274-275

3,4,5,7,8 p. 280-281

2,4,6,9 p. 288-290

2,3 p. 292-293

Ch8~

5,9,10,11,12 p. 298-300

1,2 p. 304

1,3,7 p. 315-316

3,7,8 p. 316-318

Ch9~ (Algebraic Topology)

1,2,3 p. 330

1,2,6 p. 334-335

3,4,5 p. 341

1,4,7,8 p. 347-348

1,2,4 p. 353

1,2, p. 356

1,2,4 p. 359

1,4,5,9 p. 366-367

2,3,4 p. 370

3,4,5 p. 375

Ch10~

1,2 p. 380-381

2,3,4,5,6 p. 384-385

1,2,3 p. 393-394

1 p. 398

1,2 p. 406

Ch11~

1,2,5,6 p. 411-412

2,3,4 p. 421

1,3,4 p. 425

1,2,3 p. 433

1,2,4,5 p. 438

1,2,3 p. 441

1,3,4 p. 445

Ch12~

2,3,4,5 p. 453-454

1,2,4 p. 457

1,2 p. 462

1,2,4 p. 470-471

2,4,5 p. 476

Ch13~

3,5,6,7 p. 483-484

1 p. 487

1,2,6 p. 492-494

2 p. 499

1,3 p. 499-500 (supp: topol props and pi_1)

Ch14~

1,2 p. 505-506

2,3 p. 513

1,2,3 p. 515

179

It is often the case that, for a tough, formal subject, textbooks come in at least two flavors: the dense, encyclopedic, painful, rewarding-if-engaged kind, and a lighter kind, whose style deals more in prose and somewhat less in math, with less of a rigid theorem-lemma-proof format in particular. I often find it helpful to have both kinds. The dense ones are tough to simply read, and it can be easy to get lost in their swamp of details if you are not tremendously on top of your game; so while the dense tomes are the only path to full understanding of a subject, the lighter froo-froo texts can be awesome for getting a bird's-eye view of the landscape and a quick, often exceedingly useful intuition about a discipline and its tools. Of course these categories are somewhat fuzzy, but I think it is usually clear enough where a text falls.

In short, the emphasis in the 'light/Type 2' books is on developing a key nugget of intuition and some appreciation for the most important results of a subject, while in the 'dense/Type 1' books it is on rigorous, exhaustive understanding. *Please note that this thread is **not** for 'popular' books on a subject, which might be called books of Type 3; Type 2 books are still textbooks, and still engage with the material in a rigorous, formal way, albeit less so than Type 1 books.* With those descriptions & caveats in mind, this thread is devoted to Type 2 textbooks, in whatever subject, because they can be somewhat hard to find.

**Here's a list of 'light/Type 2' texts, all much more accessible than the standard in their literature:**

**Abstract Algebra :**

A Book of Abstract Algebra by Pinter

**Differentiable Topology/Manifolds :**

Geometrical Methods of Mathematical Physics by Schutz

Differential Topology with a View to Applications* by Chillingworth

**Stochastic Calculus :**

An Introduction to the Mathematics of Financial Derivatives

**Vector Calculus :**

Div, Grad, Curl, and All That (I think this fits; haven't read it, AD may correct me)

**Quantum Physics :**

Understanding Quantum Physics by Morrison, and its sequel

**Number Theory :**

Excursions into Number Theory by Ogilvy

**Logic :**

Godel's Proof by Newman & Nagel

**Computational Complexity:**

Computers & Intractability* by Garey & Johnson

**Derivatives Theory:**

Derivatives by Wilmott (note: broader coverage than typical of Type 2, but same style)

**Linear Algebra:**

Introduction to Linear Algebra by Strang

**Might follow-up later with summaries/comments, not sure. Lemme know if you have anything you'd like to add to this list.**

(* denotes a book that is also widely cited in the literature. Always find it weird when a 'classic' just so happens to be eminently accessible & readable too.)

180

Text doesn't contain any problems! Will assign theorems to work through in detail.