Show Posts

This section allows you to view all posts made by this member. Note that you can only see posts made in areas you currently have access to.

Topics - Agrul

Pages: 1 2 3 4 5 [6] 7 8 9 10 11 ... 21
When I was growing up my mom gave me a multivitamin every day as a defense against unnamed dread diseases.

But it looks like Mom was wasting her money. Evidence continues to mount that vitamin supplements don't help most people, and can actually cause diseases that people are taking them to prevent, like cancer.

Three studies published Monday add to multivitamins' bad rap. One review found no benefit in preventing early death, heart disease or cancer. Another found that taking multivitamins did nothing to stave off cognitive decline with aging. A third found that high-dose multivitamins didn't help people who had had one heart attack avoid another.

"Enough is enough," declares an editorial accompanying the studies in Annals of Internal Medicine. "Stop wasting money on vitamin and mineral supplements."

But enough is not enough for the American public. We spend $28 billion a year on vitamin supplements and are projected to spend more. About 40 percent of Americans take multivitamins, the editorial says.

Even people who know about all these studies showing no benefit continue to buy multivitamins for their families. Like, uh, me. They couldn't hurt, right?

If only it were as simple as popping a supplement and being set for life. But alas, no.

In most cases, no. But $28 billion is a lot to spend on a worthless medical treatment. So I called up Steven Salzberg, a professor of medicine at Johns Hopkins who has written about Americans' love affair with vitamins, to find out why we're so reluctant to give up the habit.

"I think this is a great example of how our intuition leads us astray," Salzberg told Shots. "It seems reasonable that if a little bit of something is good for you, them more should be better for you. It's not true. Supplementation with extra vitamins or micronutrients doesn't really benefit you if you don't have a deficiency."

Vitamin deficiencies can kill, and that discovery has made for some great medical detective stories. Salzberg points to James Lind, a Scottish physician who proved in 1747 that citrus juice could cure scurvy, which had killed more sailors than all wars combined. It was not until much later that scientists discovered that the magic ingredient was vitamin C.

Ads often tout dietary supplements and vitamins as "natural" remedies. But studies show megadoses of some vitamins can actually boost the risk of heart disease and cancer, warns Dr. Paul Offit.

Lack of vitamin D causes rickets. Lack of niacin causes pellagra, which was a big problem in the southern U.S. in the early 1900s. Lack of vitamin A causes blindness. And lack of folic acid can cause spina bifida, a crippling deformity.

Better nutrition and vitamin-fortified foods have made these problems pretty much history.

Now when public health officials talk about vitamin deficiencies and health, they're talking about specific populations and specific vitamins. Young women tend to be low on iodine, which is key for brain development in a fetus, according to a 2012 report from the Centers for Disease Control and Prevention. And Mexican-American women and young children are more likely to be iron deficient. But even in that group, we're talking about 11 percent of the children, and 13 percent of the women.

Recent studies have shown that too much beta carotene and vitamin E can cause cancer, and it's long been known that excess vitamin A can cause liver damage, coma and death. That's what happened to Arctic explorers when they ate too much polar bear liver, which is rich in vitamin A.

"You need a balance," Salzberg says. But he agrees with the Annals editorial — enough already. "The vast majority of people taking multivitamins and other supplemental vitamins don't need them. I don't need them, so I stopped."

I'm still struggling with the notion that mother didn't know best. But maybe when the current bottle of chewable kid vitamins runs out, I won't buy more.

General Discussion / Replication in Science: Backlash & Back-Backlash
« on: December 17, 2013, 01:29:06 PM »
TL;DR: high-profile cancer researcher/biologist Mina Bissell argues in a Nature op-ed that failures to replicate are usually due to improper/non-identical procedure, that failure to replicate is not really very convincing evidence that a phenomenon isn't genuine, and that replication's generally a costlier process than the movement pushing for wider emphasis on replicability of studies would have us believe. Statistician, political scientist, and replication advocate Andrew Gelman picks up on and replies to Bissell's piece at length on his blog, arguing that she's wrong, is being unhelpfully defensive, and that most of the problems she describes with replications are really problems with the limited descriptions given by researchers of their methods.

Bissell pooh-poohs replication:

Every once in a while, one of my postdocs or students asks, in a grave voice, to speak to me privately. With terror in their eyes, they tell me that they have been unable to replicate one of my laboratory's previous experiments, no matter how hard they try. Replication is always a concern when dealing with systems as complex as the three-dimensional cell cultures routinely used in my lab. But with time and careful consideration of experimental conditions, they, and others, have always managed to replicate our previous data.

Articles in both the scientific and popular press [1–3] have addressed how frequently biologists are unable to repeat each other's experiments, even when using the same materials and methods. But I am concerned about the latest drive by some in biology to have results replicated by an independent, self-appointed entity that will charge for the service. The US National Institutes of Health is considering making validation routine for certain types of experiments, including the basic science that leads to clinical trials [4]. But who will evaluate the evaluators? The Reproducibility Initiative, for example, launched by the journal PLoS ONE with three other companies, asks scientists to submit their papers for replication by third parties, for a fee, with the results appearing in PLoS ONE. Nature has targeted [5] reproducibility by giving more space to methods sections and encouraging more transparency from authors, and has composed a checklist of necessary technical and statistical information. This should be applauded.

So why am I concerned? Isn't reproducibility the bedrock of the scientific process? Yes, up to a point. But it is sometimes much easier not to replicate than to replicate studies, because the techniques and reagents are sophisticated, time-consuming and difficult to master. In the past ten years, every paper published on which I have been senior author has taken between four and six years to complete, and at times much longer. People in my lab often need months — if not a year — to replicate some of the experiments we have done on the roles of the microenvironment and extracellular matrix in cancer, and that includes consulting with other lab members, as well as the original authors.

People trying to repeat others' research often do not have the time, funding or resources to gain the same expertise with the experimental protocol as the original authors, who were perhaps operating under a multi-year federal grant and aiming for a high-profile publication. If a researcher spends six months, say, trying to replicate such work and reports that it is irreproducible, that can deter other scientists from pursuing a promising line of research, jeopardize the original scientists' chances of obtaining funding to continue it themselves, and potentially damage their reputations.

Fair wind
Twenty years ago, a reproducibility movement would have been of less concern. Biologists were using relatively simple tools and materials, such as pre-made media and embryonic fibroblasts from chickens and mice. The techniques available were inexpensive and easy to learn, thus most experiments would have been fairly easy to double-check. But today, biologists use large data sets, engineered animals and complex culture models, especially for human cells, for which engineering new species is not an option.

Many scientists use epithelial cell lines that are exquisitely sensitive. The slightest shift in their microenvironment can alter the results — something a newcomer might not spot. It is common for even a seasoned scientist to struggle with cell lines and culture conditions, and unknowingly introduce changes that will make it seem that a study cannot be reproduced. Cells in culture are often immortal because they rapidly acquire epigenetic and genetic changes. As such cells divide, any alteration in the media or microenvironment — even if minuscule — can trigger further changes that skew results. Here are three examples from my own experience.

My collaborator, Ole Petersen, a breast-cancer researcher at the University of Copenhagen, and I have spent much of our scientific careers learning how to maintain the functional differentiation of human and mouse mammary epithelial cells in culture. We have succeeded in cultivating human breast cell lines for more than 20 years, and when we use them in the three-dimensional assays that we developed [6, 7], we do not observe functional drift. But our colleagues at biotech company Genentech in South San Francisco, California, brought to our attention that they could not reproduce the architecture of our cell colonies, and the same cells seemed to have drifted functionally. The collaborators had worked with us in my lab and knew the assays intimately. When we exchanged cells and gels, we saw that the problem was in the cells, procured from an external cell bank, and not the assays.

Another example arose when we submitted what we believe to be an exciting paper for publication on the role of glucose uptake in cancer progression. The reviewers objected to many of our conclusions and results because the published literature strongly predicted the prominence of other molecules and pathways in metabolic signalling. We then had to do many extra experiments to convince them that changes in media glucose levels, or whether the cells were in different contexts (shapes) when media were kept constant, drastically changed the nature of the metabolites produced and the pathways used [8].

A third example comes from a non-malignant human breast cell line that is now used by many for three-dimensional experiments. A collaborator noticed that her group could not reproduce its own data convincingly when using cells from a cell bank. She had obtained the original cells from another investigator. And they had been cultured under conditions in which they had drifted. Rather than despairing, the group analysed the reasons behind the differences and identified crucial changes in cell-cycle regulation in the drifted cells. This finding led to an exciting, new interpretation of the data that were subsequently published [9].

Repeat after me
The right thing to do as a replicator of someone else's findings is to consult the original authors thoughtfully. If e-mails and phone calls don't solve the problems in replication, ask either to go to the original lab to reproduce the data together, or invite someone from their lab to come to yours. Of course replicators must pay for all this, but it is a small price in relation to the time one will save, or the suffering one might otherwise cause by declaring a finding irreproducible.

When researchers at Amgen, a pharmaceutical company in Thousand Oaks, California, failed to replicate many important studies in preclinical cancer research, they tried to contact the authors and exchange materials. They could confirm only 11% of the papers [3]. I think that if more biotech companies had the patience to send someone to the original labs, perhaps the percentage of reproducibility would be much higher.

It is true that, in some cases, no matter how meticulous one is, some papers do not hold up. But if the steps above are taken and the research still cannot be reproduced, then these non-valid findings will eventually be weeded out naturally when other careful scientists repeatedly fail to reproduce them. But sooner or later, the paper should be withdrawn from the literature by its authors.

One last point: all journals should set aside a small space to publish short, peer-reviewed reports from groups that get together to collaboratively solve reproducibility problems, describing their trials and tribulations in detail. I suggest that we call this ISPA: the Initiative to Solve Problems Amicably.

Andrew Gelman comments & replies on his blog:

Raghuveer Parthasarathy pointed me to an article in Nature by Mina Bissell, who writes, “The push to replicate findings could shelve promising research and unfairly damage the reputations of careful, meticulous scientists.”

I can see where she’s coming from: if you work hard day after day in the lab, it’s gotta be a bit frustrating to find all your work questioned, for the frauds of the Dr. Anil Pottis and Diederik Stapels to be treated as a reason for everyone else’s work to be considered guilty until proven innocent.

That said, I pretty much disagree with Bissell’s article, and really the best thing I can say about it is that I think it’s a good sign that the push for replication is so strong that now there’s a backlash against it. Traditionally, leading scientists have been able to simply ignore the push for replication. If they are feeling that the replication movement is strong enough that they need to fight it, that to me is good news.

I’ll explain a bit in the context of Bissell’s article. She writes:

Articles in both the scientific and popular press have addressed how frequently biologists are unable to repeat each other’s experiments, even when using the same materials and methods. But I am concerned about the latest drive by some in biology to have results replicated by an independent, self-appointed entity that will charge for the service. The US National Institutes of Health is considering making validation routine for certain types of experiments, including the basic science that leads to clinical trials.

But, as she points out, such replications will be costly. As she puts it:

Isn’t reproducibility the bedrock of the scientific process? Yes, up to a point. But it is sometimes much easier not to replicate than to replicate studies, because the techniques and reagents are sophisticated, time-consuming and difficult to master. In the past ten years, every paper published on which I have been senior author has taken between four and six years to complete, and at times much longer. People in my lab often need months — if not a year — to replicate some of the experiments we have done . . .

So, yes, if we require everything to be replicated, it will reduce the resources that are available to do new research.

Bissell also writes:

Replication is always a concern when dealing with systems as complex as the three-dimensional cell cultures routinely used in my lab. But with time and careful consideration of experimental conditions, they [Bissell's students and postdocs], and others, have always managed to replicate our previous data.

If all science were like Bissell’s, I guess we’d be in great shape. In fact, given her track record, perhaps we could give some sort of lifetime seal of approval to the work in her lab, and agree in the future to trust all her data without need for replication.

The problem is that there appear to be labs without 100% successful replication rates. Not just fraud (although, yes, that does exist); and not just people cutting corners, for example, improperly excluding cases in a clinical trial (although, yes, that does exist); and not just selection bias and measurement error (although, yes, these do exist too); but just the usual story of results that don’t hold up under replication, perhaps because the published results just happened to stand out in an initial dataset (as Vul et al. pointed out in the context of imaging studies in neuroscience) or because certain effects are variable and appear in some settings and not in others. Lots of reasons. In any case, replications do fail, even with time and careful consideration of experimental conditions. In that sense, Bissell indeed has to pay for the sins of others, but I think that’s inevitable: in any system that is less than 100% perfect, some effort ends up being spent on checking things that, retrospectively, turned out to be ok.

Later on, Bissell writes:

The right thing to do as a replicator of someone else’s findings is to consult the original authors thoughtfully. If e-mails and phone calls don’t solve the problems in replication, ask either to go to the original lab to reproduce the data together, or invite someone from their lab to come to yours. Of course replicators must pay for all this, but it is a small price in relation to the time one will save, or the suffering one might otherwise cause by declaring a finding irreproducible.

Hmmmm . . . maybe . . . but maybe a simpler approach would be for the authors of the article to describe their methods clearly (with videos, for example, if that is necessary to demonstrate details of lab procedure) in the public record.

After all, a central purpose of scientific publication is to communicate with other scientists. If your published material is not clear—if a paper can’t be replicated without emails, phone calls, and a lab visit—this seems like a problem to me! If outsiders can’t replicate the exact study you’ve reported, they could well have trouble using your results in future research. To put it another way, if certain findings are hard to get, requiring lots of lab technique that is nowhere published—and I accept that this is just the way things can be in modern biology—then these findings won’t necessarily apply in future work, and this seems like a serious concern.

To me, the solution is not to require e-mails, phone calls, and lab visits—which, really, would be needed not just for potential replicators but for anyone doing further research in the field—but rather to expand the idea of “publication” to go beyond the current standard telegraphic description of methods and results, and beyond the current standard supplementary material (which is not typically a set of information allowing you to replicate the study; rather, it’s extra analyses needed to placate the journal referees), to include a full description of methods and data, including videos and as much raw data as is possible (with some scrambling if human subjects is an issue). No limits—whatever it takes! This isn’t about replication or about pesky reporting requirements, it’s about science. If you publish a result, you should want others to be able to use it.

Of course, I think replicators should act in good faith. If certain aspects of a study are standard practice and have been published elsewhere, maybe they don’t need to be described in detail in the paper or the supplementary material; a reference to the literature could be enough. Indeed, to the extent that full descriptions of research methods are required, this will make life easier for people to describe their setups in future papers.

Bissell points out that describing research methods isn’t always easy:

Twenty years ago . . . Biologists were using relatively simple tools and materials, such as pre-made media and embryonic fibroblasts from chickens and mice. The techniques available were inexpensive and easy to learn, thus most experiments would have been fairly easy to double-check. But today, biologists use large data sets, engineered animals and complex culture models . . . Many scientists use epithelial cell lines that are exquisitely sensitive. The slightest shift in their microenvironment can alter the results — something a newcomer might not spot. It is common for even a seasoned scientist to struggle with cell lines and culture conditions, and unknowingly introduce changes that will make it seem that a study cannot be reproduced. . . .

If the microenvironment is important, record as much of it as you can for the publication! Again, if it really takes a year for a study to be reproduced, if your finding is that fragile, this is something that researchers should know about right away from reading the article.

Bissell gives an example of “a non-malignant human breast cell line that is now used by many for three-dimensional experiments”:

A collaborator noticed that her group could not reproduce its own data convincingly when using cells from a cell bank. She had obtained the original cells from another investigator. And they had been cultured under conditions in which they had drifted. Rather than despairing, the group analysed the reasons behind the differences and identified crucial changes in cell-cycle regulation in the drifted cells. This finding led to an exciting, new interpretation of the data that were subsequently published.

That’s great! And that’s why it’s good to publish all the information necessary so that a study can be replicated. That way, this sort of exciting research could be done all the time.

Costs and benefits

The other issue that Bissell is (implicitly) raising is a cost-benefit calculation. When she writes of the suffering caused by declaring a finding irreproducible, I assume that ultimately she’s talking about a patient who will get sick or even die because some potential treatment never gets developed or never becomes available because some promising bit of research got dinged. On the other hand, when research that is published in a top journal does not hold up, it can waste thousands of hours of researchers’ time, spending resources that otherwise could have been used on productive research.

Indeed, even when we talk about reporting requirements, we are really talking about tradeoffs. Clearly writing up one’s experimental protocol (and maybe including a Youtube video) and setting up data in archival form takes work; it represents time and effort that could otherwise be spent on research (or even on internal replication). On the other hand, when methods and data are not clearly set out in the public record, this can result in wasted effort by lots of other labs, following false leads as they try to figure out exactly how the experiment was done.

I can’t be sure, but my guess is that, for important, high-profile research, on balance it’s a benefit to put all the details in the public record. Sure, that takes some effort by the originating lab, but it might save lots more effort for each of dozens of other labs that are trying to move forward from the published finding.

Here’s an example. Bissell writes:

When researchers at Amgen, a pharmaceutical company in Thousand Oaks, California, failed to replicate many important studies in preclinical cancer research, they tried to contact the authors and exchange materials. They could confirm only 11% of the papers. I think that if more biotech companies had the patience to send someone to the original labs, perhaps the percentage of reproducibility would be much higher.

I worry about this. If people can’t replicate a published result, what are we supposed to make of it? If the result is so fragile that it only works under some conditions that have never been written down, what is the scientific community supposed to do with it?

And there’s this:

It is true that, in some cases, no matter how meticulous one is, some papers do not hold up. But if the steps above are taken and the research still cannot be reproduced, then these non-valid findings will eventually be weeded out naturally when other careful scientists repeatedly fail to reproduce them. But sooner or later, the paper should be withdrawn from the literature by its authors.

Yeah, right. Tell it to Daryl Bem.

What happened?

I think that where Bissell went wrong is by thinking of replication in a defensive way, and thinking of the result being to “damage the reputations of careful, meticulous scientists.” Instead, I recommend she take a forward-looking view, and think of replicability as a way of moving science forward faster. If other researchers can’t replicate what you did, they might well have problems extending your results. The easier you make it for them to replicate, indeed the more replications that people have done of your work, the more they will be able, and motivated, to carry on the torch.

Nothing magic about publication

Bissell seems to be saying that if a biology paper is published, it should be treated as correct, even if outsiders can’t replicate it, all the way until the non-replicators “consult the original authors thoughtfully,” send emails and phone calls, and “either to go to the original lab to reproduce the data together, or invite someone from their lab to come to yours.” After all of this, if the results still don’t hold up, they can be “weeded out naturally from the literature”—but, even then, only after other scientists “repeatedly fail to reproduce them.”

This seems pretty clear: you need multiple failed replications, each involving thoughtful conversation, email, phone, and a physical lab visit. Until then, you treat the published claim as true.

OK, fine. Suppose we accept this principle. How, then, do we treat an unpublished paper? Suppose someone with a Ph.D. in biology posts a paper on Arxiv (or whatever is the biology equivalent), and it can’t be replicated? Is it ok to question the original paper, to treat it as only provisional, to label it as unreplicated? That’s ok, right? I mean, you can’t just post something on the web and automatically get the benefit of the doubt that you didn’t make any mistakes. Ph.D.’s make errors all the time (just like everyone else).

Now we can engage in some salami slicing. According to Bissell (as I interpret here), if you publish an article in Cell or some top journal like that, you get the benefit of the doubt and your claims get treated as correct until there are multiple costly, failed replications. But if you post a paper on your website, all you’ve done is make a claim. Now suppose you publish in a middling journal, say, the Journal of Theoretical Biology. Does that give you the benefit of the doubt? What about Nature Neuroscience? PNAS? Plos-One? I think you get my point. A publication in Cell is nothing more than an Arxiv paper that happened to hit the right referees at the right time. Sure, approval by 3 referees or 6 referees or whatever is something, but all they did is read some words and look at some pictures.

It’s a strange view of science in which a few referee reports is enough to put something into a default-believe-it mode, but a failed replication doesn’t count for anything. Bissell is criticizing replicators for not having long talks and visits with the original researchers, but the referees don’t do any emails, phone calls, or lab visits at all! If their judgments, based simply on reading the article, carry weight, then it seems odd to me to discount failed replications that are also based on the published record.

My view that we should focus on the published record (including references, as appropriate) is not legalistic or nitpicking. I’m not trying to say: Hey, you didn’t include that in the paper, gotcha! I’m just saying that, if somebody reads your paper and can’t figure out what you did, and can only do that through lengthy emails, phone conversations, and lab visits, then this is going to limit the contribution your paper can make.

As C. Glenn Begley wrote in a comment:

A result that is not sufficiently robust that it can be independently reproduced will not provide the basis for an effective therapy in an outbred human population. A result that is not able to be independently reproduced, that cannot be translated to another lab using what most would regard as standard laboratory procedures (blinding, controls, validated reagents etc) is not a result. It is simply a ‘scientific allegation’.

To which I would add: Everyone would agree that the above paragraph applies to an unpublished article. I’m with Begley that it also applies to published articles, even those published in top journals.

A solution that should make everyone happy

Or, to put it another way, maybe Bissell is right that if someone can’t replicate your paper, it’s no big deal. But it’s information I’d like to have. So maybe we can all be happy: all failed replications can be listed on the website of the original paper (then grumps and skeptics like me will be satisfied), but Bissell and others can continue to believe published results on the grounds that the replications weren’t careful enough. And, yes, published replications should be held to the same high standard. If you fail to replicate a result and you want your failed replication to be published, it should contain full details of your lab setup, with videos as necessary.

Agrulian Archives / General Relativity (undergrad/grad student level)
« on: December 17, 2013, 01:13:57 PM »
Brief Subject Overview: generalization of special relativity, due to Einstein. Models gravitation as the literal geometry of the world, and imposes the speed of light as a limit on the speed of objects (for objects that start out slower than light). If I understand correctly (probably I do not), special relativity argues that all physical laws are invariant across non-accelerating reference frames, as opposed to Newtonian mechanics, which only said something to this effect about Newton's first law. General relativity then further generalizes this, insisting that all physical law is invariant in all reference frames, including accelerating (non-inertial) frames.
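
Not from any of the candidate texts below, and subject to the same "probably I misunderstand" caveat, but for orientation here are the two standard formulas this material revolves around: the invariant interval of special relativity and the Einstein field equations (the precise sense in which gravity "is" geometry).

[code]
% Special relativity: the spacetime interval between nearby events takes the
% same value in every inertial (non-accelerating) frame.
ds^2 = -c^2\,dt^2 + dx^2 + dy^2 + dz^2

% General relativity: the Einstein field equations. The curvature of the
% metric g_{\mu\nu} (left side) is determined by the energy-momentum of
% matter and radiation (right side); G is Newton's constant.
R_{\mu\nu} - \tfrac{1}{2}R\,g_{\mu\nu} = \frac{8\pi G}{c^4}\,T_{\mu\nu}
[/code]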

Text(s): MTW's "Gravitation" OR Carroll's "Spacetime & Geometry" OR Schutz's "A First Course in General Relativity" OR Taylor & Wheeler's "Spacetime Physics" (not sure which of these I feel most comfortable with yet).

Assigned problems:


Replies will contain worked solutions, discussion, etc.

Agrulian Archives / Procedural Texturing (undergrad student level)
« on: December 17, 2013, 01:01:55 PM »
Brief Subject Overview: procedural texturing is the study of the automated creation of interesting and/or realistic 2D/3D graphics, landscapes, etc. It has been used in, for example, Minecraft and various Minecraft clones.
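
As a warm-up, here's a minimal sketch in Python of the basic trick (value noise plus a few fractal octaves), which is roughly what Minecraft-style heightmaps are built from. This isn't taken from Ebert et al; the function names and constants are just illustrative.

[code]
import math, random

def lattice_value(ix, iy, seed=0):
    """Deterministic pseudo-random value in [0, 1) for an integer lattice point."""
    rng = random.Random((ix * 73856093) ^ (iy * 19349663) ^ seed)
    return rng.random()

def smoothstep(t):
    """Ease curve so the interpolation has no visible grid creases."""
    return t * t * (3 - 2 * t)

def value_noise(x, y, seed=0):
    """Bilinearly interpolate lattice values -> smooth noise in [0, 1)."""
    ix, iy = math.floor(x), math.floor(y)
    fx, fy = x - ix, y - iy
    sx, sy = smoothstep(fx), smoothstep(fy)
    v00 = lattice_value(ix, iy, seed)
    v10 = lattice_value(ix + 1, iy, seed)
    v01 = lattice_value(ix, iy + 1, seed)
    v11 = lattice_value(ix + 1, iy + 1, seed)
    top = v00 + sx * (v10 - v00)
    bot = v01 + sx * (v11 - v01)
    return top + sy * (bot - top)

def fbm(x, y, octaves=4, lacunarity=2.0, gain=0.5, seed=0):
    """Fractal Brownian motion: sum noise at rising frequency, falling amplitude."""
    total, amplitude, frequency, norm = 0.0, 1.0, 1.0, 0.0
    for i in range(octaves):
        total += amplitude * value_noise(x * frequency, y * frequency, seed + i)
        norm += amplitude
        amplitude *= gain
        frequency *= lacunarity
    return total / norm  # normalized back into [0, 1)

# Tiny "terrain" demo: print an ASCII heightmap.
chars = " .:-=+*#%@"
for row in range(16):
    print("".join(chars[int(fbm(col * 0.15, row * 0.15) * (len(chars) - 1))]
                  for col in range(48)))
[/code]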

Text(s): Ebert et al's "Texturing & Modeling".

Assigned problems:

TBA (not sure this has any problems in it, don't remember any. Might just have to work through each code example as it comes)

Replies will contain worked solutions, discussion, etc.

NITIN JAIN is the big man in Lasalgaon, a dusty town a day’s drive from Mumbai that boasts it has Asia’s biggest onion market. With a trim moustache and a smartphone stuck to his ear he struts past a thousand-odd tractors and trucks laden with red onions. Farmers hurl armfuls at his feet to prove their quality. A gaggle of auctioneers, rival traders and scribes follow him, squabbling and yanking each other’s hair. Asked why onion prices have risen so much, Mr Jain relays the question to the market. “Why?” he bellows. His entourage laughs. He says that the price of India’s favourite vegetable is a mystery that no calculation can explain.

High food prices perturb some men and women even bigger than Mr Jain. Raghuram Rajan, the boss of India’s central bank, is grappling with high inflation caused in large part by food prices: wholesale onion prices soared by 278% in the year to October and the retail price of all vegetables shot up by 46%. The food supply chain is decades out of date and cannot keep up with booming demand. India’s rulers are watching the cost of food closely, too, ahead of an election due by May. Electoral folklore says that pricey onions are deadly for incumbent governments.

A year ago it seemed that India had bitten the bullet by permitting foreign firms to own majority stakes in domestic supermarkets. The decision came after a fierce political battle. Walmart, Carrefour and Tesco have been waiting for years to invest in India. They say they would revolutionise shopping. Only 2-3% of groceries are bought in formal stores, with most people reliant on local markets. They would also modernise logistics chains, either by investing themselves, or indirectly, by stimulating food producers to spend on factories, warehouses and trucks, and establish direct contracts with farmers, eliminating layers of middlemen.

On the ground little has happened. Foreign firms complain of hellish fine print, including a stipulation to buy from tiny suppliers. Individual Indian states can opt out of the policy—which is unhelpful if you want to build a national supermarket chain. In October Walmart terminated its joint venture with Bharti, an Indian group. India has reduced the beast of Bentonville to a state of bewilderment. Tesco has cut expatriate staff.

The reaction from politicians has been indifference. “We have liberalised…to the extent that we can. People have to accept this and decide whether they want to invest,” said Palaniappan Chidambaram, India’s finance minister. Despite the apparently obvious benefits of supermarkets and the experience of most other countries, few Indians seem to want change.

You’re not in Bentonville anymore

Just how bad is India’s food supply chain? To find out The Economist followed the journey of an onion from a field in the heart of onion country, in western India, to a shopping bag in Mumbai, a city of 18m onion-munchers. The trip suggests an industry begging for investment and reform.

“The system hasn’t changed much—it’s been the same since the 1970s,” says Punjaram Devkar, an elderly farmer in a white cap. For generations his forefathers have grown onions near a hamlet called Karanjgaon. He owns a crudely irrigated six-hectare (14-acre) plot, larger than the national average farm of just 1.2 hectares. He does not want to buy more land; unreliable electricity and labour mean “it is too hard to manage.” There are four onion crops each year—in a good season production is three times higher than in a bad one. To hedge his bets he also grows sugar cane. Costs have soared because of rising rural wages, which have doubled in three years. He says welfare schemes have made workers lazy. “They just play cards all day.”

Storage facilities amount to a wooden basket inside a shed—at this time of year onions perish within 15 days, although the variety grown in the spring can last eight months. From here one of Mr Devkar’s finest is thrown into a small trailer, along with the produce of nearby farms, and taken to Lasalgaon. The roads are mostly paved but the 32km (19-mile) journey takes a couple of hours in a rickety old tractor.

Lasalgaon neophytes will find their eyes water upon entering its market, a huge car park full of red onions, trucks and skinny farmers. Although the auction is run twice daily by an official body, it doesn’t look wholly transparent. Some farmers complain that Mr Jain and another trader dominate the trade (Mr Jain denies this). Prices vary wildly day by day and according to size and quality, which are judged in a split second by eye. The average price today is $0.33 per kilo.

Neither traders nor farmers agree why prices have risen so steeply of late. They blame climate change, the media, too much rain last year, too little rain this year, labour costs, an erratic export regime. “Our biggest problem is illiteracy,” says one farmer. “We don’t know how to use technology.” Most folk agree that India needs better cold storage but worry that it is too pricey or that it ruins the taste of onions.

Farmers must pay a 1% fee to the auction house and a 4% commission to the traders. Sometimes they also have to stump up for fees for packing and loading. That takes place at several depots surrounding the market where farmers must drop off their loads and pour them onto tarpaulins on the ground. The onions may wait there for days but once put into hessian sacks they are loaded onto trucks operated by separate haulage firms and owned by intricate webs of independent consortiums.

At 8pm Prabhakar Vishad, a 20-year veteran of the onion-express highway from Lasalgaon to Mumbai, climbs into a battered Tata truck with “Blow Horn” painted in big letters on the back. Over the years the roads have improved and power steering has made life easier. Still, it is dangerous work, says Mr Vishad, who had a bad crash last year. By 6am next morning he sets his bloodshot eyes on Vashi market on the outskirts of Mumbai. It handles 100-150 truckloads of onions a day—enough to satisfy India’s commercial capital.

Onions are sometimes unpacked, sorted and repacked, with wastage rates of up to 20%. By 9am the market is a teeming maze of 300-odd selling agents, who mainly act on behalf of middlemen, and several thousand buyers—who are either retailers or sub-distributors. Everyone stands ankle deep in onions of every size. The bidding process is opaque. The selling agents each drape a towel on their arm. To make a bid you stick your hand under the towel and grip their hand, with secret clenches denoting different prices. Average prices today are about $0.54 per kilo. If the seller likes your tickles you hail a porter. He carries your newly bought sacks on his head to a dispatch depot where another group of couriers takes them into the city.

“I’m crazy, like the guys you see in the movies. I don’t negotiate,” declares Sanjay Pingle. One of the market’s biggest agents, he charges the seller a 6.5% commission. The buyers pay loading charges on top of that and a fee to the market. He says business is tough—bad debts from customers run at a fifth of sales and he has to pay interest rates of 22% on his own debts. The solution to the onion shortage is obvious, he says. “In China they keep things in storage facilities—if India had the same facilities as China has, prices would be lower.” He says he has seen photographs of Chinese technology on his mobile phone.

By the afternoon thousands of cars and trucks are picking up small batches of onions to take them into Mumbai. In Chembur, a middle-class neighbourhood, Anburaj Madar runs a big sub-distributor. He handles 200 sacks a day which he sells to retailers and restaurants. He buys daily from Vashi market and has space to store only about 12 hours’ worth of stock. Rent is dear and he too reckons cold storage destroys the flavour of onions. He marks up his prices by perhaps 20% but says a chunk of what he buys has to be thrown away—it is either damaged or of inferior quality.

For the onions that do make the cut the next stop is a small shop down the road where they are sold for another mark-up of 10% or so. From here Indubai Kakdi is hand-selecting onions with elaborate care. Buck-toothed and ragged, she sells seven kilos a day from a wooden barrow; she makes a 10% margin. She says climate change has made prices more volatile.

Peeling back the layers of truth

The journey of an onion from Mr Devkar’s field to the end customer in Mumbai takes only a few days but is enough to make you weep. There are some underlying reasons why prices have risen—higher rural wages have pushed up farmers’ costs. But the system is horribly fiddly. Farms are tiny with no economies of scale. The supply chain involves up to five middlemen. The onion is loaded, sorted or repacked at least four times. Wastage rates, either from damage or weight loss as onions dry out, are a third or more. Because India has no modern food-processing industry, low-quality onions that could be turned into paste or sauces are thrown away. Retail prices are about double what farmers receive, although the lack of any standard grading of size or quality makes comparisons hard.

The system is volatile as well as inefficient. Traders who buy onions from farmers may hoard them, but for the supply chain as a whole far too little inventory is stored. As a result small variations in demand and supply are amplified and cause violent swings in price. In the first week of December 2013 prices fell again.

It is easy to see how heavy investment by supermarket chains and big food-producers—whether Indian or foreign—could make a difference. They would cut out layers from the supply chain, build modern storage facilities and probably prod farmers to consolidate their plots.

The shoppers of Chembur agree that Indian onions are the world’s tastiest but are fed up with price swings. No one mentions reform as a solution and until there is popular support and political leadership it is hard to see much changing. And what of the last stage of the onion’s odyssey, to the stomach? By one stall stands an elderly lady who says she likes the vegetable so much that she doesn’t bother to cook it. Instead she chomps on raw onions as if they were apples. At least someone has an eye on efficiency.

This thread's going to deal with examining the proofs of specific theorems / propositions / lemmata of interest in various subjects. Everything from "shit I learned but never appreciated in detail in freshman calculus" to "differential topology on Calabi-Yau manifolds expressed as Fourier series known to lie in the complexity class PLS" is fair game. Here's the current list, organized by subject (crossed-off indicates "already done in a reply", as in other threads):

Elementary & Vector Calculus
- Differential & Integral forms of the Chain Rule
- Theorem for Differentiating Inverses of Fxns
- univariate Taylor's Theorem
- multivariate Taylor's Theorem
- L'Hopital's Theorem
- Fundamental Theorem of Calculus
- Green's Theorem
- Stokes Theorem
- Divergence Theorem
- Clairaut's Theorem
- Change of Variables Theorem(s)

Number Theory
- "Product Rule" for Finite Summations (? unsure of categorization, is calculus-like)

General Algorithms & Computational Complexity
- PLS=NP implies NP=co-NP
- weak Nash PPAD-completeness
- strong Nash FIXP-completeness
- NP ≠ co-NP implies no NP-Complete problem in co-NP & no co-NP complete problem in NP
- Integer programming w/ a Totally Unimodular Matrix is in P
- Cook's Theorem (direct proof that an NP-complete problem exists)
- Prime factorization: Shor's algorithm, Schnorr-Seysen-Lenstra algorithm, AKS primality test showing PRIMES is in P
- Hierarchy Theorems
- Approximation Algorithms: PCP Theorem
- Time/space complexity of Euclid's algorithm

Mathematical Optimization
- Karush-Kuhn-Tucker conditions: basic sufficient conditions for their use
- KKT conditions: invexity is the broadest class of relevant fxns
- Stochastic Programming is Irrational (d/n respect Stochastic Dominance?)
- Simplex Algorithm convergence
- Simplex Algorithm exponential worst-case complexity
- Simplex polynomial smoothed complexity
- Ellipsoid Method Polynomial-time Convergence
- Convex Programming Polynomial-time Convergence
- No Free Lunch Theorem
- Benders Decomposition convergence proof

Decision Theory & Mathematical Psychology
- Cumulative-prospect Theory respects 1st-order Stochastic Dominance

Stochastic Processes
- Under ??? conditions (irreducibility?) a Markov Chain has Steady-state Distribution
- MCMC converges to Arbitrary Distribution

Mathematical Logic
- Gödel's Completeness Theorems
- Gödel's Incompleteness Theorems

Econ & Finance
- Debreu's Existence Theorem
- No Trade Theorem

Game Theory
- Existence for: Nash, Bayes-Nash, Perfect, Trembling-Hand, etc equilibria
- 3+ player rational games may have only irrational solutions
- 2 player rational games always have rational solutions
- Revelation Principle

Graph Theory
- Four-color Theorem
- Sperner's Lemma

Fixed-Point Theorems
- Tarski's Fixed Point Theorem
- Banach's Fixed Point Theorem
- Brouwer's Fixed Point Theorem
- Kakutani's Fixed Point Theorem
- C1 and CN versions of Rouche's Theorem
- Schauder Fixed Point Theorem

Stochastic Calculus
- Ito's Lemma

Topology
- Tychonoff's Theorem
- Borsuk-Ulam Theorem (relatedly: Brouwer's Theorem)
- Baire Category Theorem
- Jordan Curve Theorem
- Ham Sandwich Theorem
- Poincaré conjecture

Convex Analysis
- Separating Hyperplane Theorem
- Farkas' Lemma

Fourier Analysis
- "Large Classes" of Fxns can be Fourier Expanded

Functional Analysis
- Metric spaces are completable
- Normed spaces are completable
- Hahn-Banach Theorem
- Banach Fixed Point Theorem
- Open Mapping Theorem (Functional Analysis)
- Closed Graph Theorem
- Baire Category Theorem

- Implicit Fxn Theorem

ODEs and PDEs
- Poincaré-Bendixson Theorem
- Hartman-Grobman Theorem

Chaos Theory
- (In)Equivalences b/w Defns of Chaos in Elaydi
- Horseshoe Map is Chaotic (Wiggins)
- Shift Map is Chaotic (Wiggins)

Probability & Measure
- Central Limit Theorem(s)
- Strong Law of Large Numbers

Algebra & Galois Theory
- Classification/Factorization Theorems for (Finite) Abelian Groups
- Abel-Ruffini Theorem (quintic poly's do not have closed-form solutions in general)
- Examples/proofs of Non-factorization Domains
- Lagrange's Theorem (in group theory)
- Sylow Theorems

Complex Analysis
- Integral Theorems, i.e. Cauchy's integral theorem & the residue theorem (the ones that say a circular path integral gives 0 or some weird sum involving 2 pi i depending on how many badly behaved points the fxn has; see the statements after this list)
- Open Mapping Theorem
- C1 and CN versions of Rouche's Theorem
- Proof of Euler's CIS (cos + i sin) Formula
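
The statements referenced in the Integral Theorems item above, from memory:

[code]
% Cauchy's integral theorem: if f is holomorphic on and inside a simple
% closed contour C (no bad points), the loop integral vanishes.
\oint_C f(z)\, dz = 0

% Residue theorem: if f is holomorphic inside C except at isolated
% singularities z_1, ..., z_n, the "weird sum involving 2 pi i" is
\oint_C f(z)\, dz = 2\pi i \sum_{k=1}^{n} \operatorname{Res}(f, z_k)
[/code]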

Statistical Inference
- Ugly Duckling Theorem
- Some sorta general proof that regularization can be done w/ optimization or priors

Electromagnetism
- Gauss's Law

Agrulian Archives / Unresolved Questions
« on: December 15, 2013, 01:19:35 PM »
Gonna use this thread to keep track of which problems I attempt and do not finish, or skip entirely out of laziness. Also questions that arise in the course of reading an argument or proof that I don't fully resolve. Hopefully this will encourage me to return to them at a later date:

- Electromagnetism, Purcell & Morin, Ch. 1, # 7. (asks for me to write some code; was feeling too lazy to do this)

- Chaos Theory, Elaydi, Sec. 1.4-1.5, # 4. (stuck on a sub-part of the proof of asymptotic instability of one of the equilibrium points. Had a neat idea for solving it but haven't made it work in practice yet)

Functional Analysis
- Functional Analysis, Kreyszig, Sec. 1.6 # 5(b). (haven't figured this one out; asks for an example of homeomorphic spaces where one is complete and the other incomplete.)

Classical Mechanics
- Classical Mechanics, Taylor, Ch. 1, # 43 a). I solved the problem fully, but got stuck during one of the two approaches---the more rigorous and potentially satisfying of the two, imo---that I took in trying to derive the polar unit vector hat(phi). There's something I'm missing about how to use the normalization condition to jump from where I got to the final result.

- Classical Mechanics, Taylor, Ch. 2, # 50. I 'solved the problem' in the sense Taylor would have expected of an undergrad in his class, I think, but my justification for interchanging differentiation and infinite summation was just a quick reference to a theorem from real/complex/metric analysis; it wasn't carefully detailed, and I didn't prove it independently, nor even prove that the preconditions (uniform convergence or infinite radius of convergence?) held for e^z, just kind of said they did. Shouldn't be too hard to go back later and cite/prove a relevant, specific result, and establish its premises (see the note after this list); for completeness of understanding I think I should do so.

- In Taylor's Classical Mechanics, he differentiates a function f(y') by y and claims the result is 0; similarly he differentiates f(y) by y' and gets 0. But y clearly determines y', and the form of y' constrains the form of y implicitly as well, so why don't we have to get all chain rule on this bitch?
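
Re: the Taylor Ch. 2, # 50 item above, the specific result to cite (standard, stated here from memory) is term-by-term differentiation of power series:

[code]
% If f(z) = \sum_{n \ge 0} a_n z^n has radius of convergence R > 0, then f is
% differentiable for |z| < R and the series may be differentiated termwise:
f'(z) = \sum_{n \ge 1} n\,a_n z^{\,n-1}, \qquad |z| < R.
% For e^z = \sum_{n \ge 0} z^n / n!, the ratio test gives R = \infty, so the
% interchange of d/dz with the infinite sum is justified for all z.
[/code]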

Smooth Manifolds
- Lee2, Ch. 1, Lem. 1.6: every top n-mani has a cntble basis of precompact coord balls. At the end of p. 8, how do we know the collection of inv imgs of elems of B gives us precompact elems in M? Probably easy but need to write it out, can't see it immediately.

- Lee2, Ch. 1, in-chapter prob # 1.2. Sketched some ideas on how to show 2nd countability & Hausdorff-ness of RPn, but I left a lot of detail unclear. Specifically --- Hausdorff q's: what epsilon should be chosen for the form of the sets I defined in the Hausdorff argument? And are those sets actually open, or do I need to perturb each component to get an open set? 2nd count q's: I think I guessed the correct basis, but I didn't go to any effort to show that it worked, and it has some similar problems to the Hausdorff argument, i.e. it relies on perturbing elements of reps of linear 1-d subspaces. Show rigorously that this does what it should do? Seems like a common theme is I could use a clean formal test for equivalence between the linear subspaces rep'd by a vector; I think just such a test is the "can multiply by a nonzero constant" property, i.e. this is a characterization of reps for a 1-d lin subspace.

Topological Manifolds
- Lee1, Ch. 2, Ex 2.28: got a little lazy and identified the discontinuity in arcsin graphically. Also didn't really show injectivity or surjectivity, just kind of stated them as well-known properties of the relevant trig fxns and complex exponential map, resp. Should return and show this stuff analytically.

Calculus of Variations
- Fox, Ch. 2, Sec 4, proof of Lemma 2. Fox wants to argue that the product (non-integral) term in a certain integration-by-parts vanishes; the product includes terms of the form t(b)^2/u(b) and t(a)^2/u(a), and we know the numerators are 0, but nowhere does Fox show that u(a), u(b) are nonzero. Later in the chapter he proves that u(x) "cannot have a double root," by which I think he means that only u(x)=0 or u'(x)=0 can be true, not both, for any given x. He remarks that this shows that "both t(b)^2/u(b) and t(a)^2/u(a) vanish since t(a) = t(b) = 0 by hypothesis." I have no idea why he thinks what he's shown implies what he's said it's shown; maybe there's some kind of Taylor expansion & limiting argument being made? He's generally pretty good about spelling out the details though, so I wonder if I'm not just making a stupid oversight.

Agrulian Archives / Quick Tutorials on Subject Basics
« on: December 14, 2013, 10:01:48 PM »
Gonna fill this thread up with videos/wikis/articles/etc that provide accessible ways for remembering/understanding/deriving various elementary facts. This video on deriving the most commonly used values in the unit circle, for example:
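
For quick reference, the usual special-triangle derivation (45-45-90 and 30-60-90 triangles with hypotenuse 1) gives:

[code]
% 45-45-90 triangle with hypotenuse 1: the legs are equal, so Pythagoras gives
\cos\tfrac{\pi}{4} = \sin\tfrac{\pi}{4} = \tfrac{\sqrt{2}}{2}

% 30-60-90 triangle (half of an equilateral triangle with side 1):
\sin\tfrac{\pi}{6} = \cos\tfrac{\pi}{3} = \tfrac{1}{2}, \qquad
\cos\tfrac{\pi}{6} = \sin\tfrac{\pi}{3} = \tfrac{\sqrt{3}}{2}
[/code]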

Spamalot / I feel the love. And I feel it burn
« on: December 14, 2013, 05:01:38 PM »
song is about chlamydia

Agrulian Archives / The Arbitrary Mathematical Vomit Thread
« on: December 13, 2013, 02:21:05 AM »
Documenting whimsical observations / conversations 'bout math here.

General Discussion / Machine Learning for Adaptive Drugs
« on: December 13, 2013, 12:43:43 AM »
Computer scientists at the Harvard School of Engineering and Applied Sciences (SEAS) and the Wyss Institute for Biologically Inspired Engineering at Harvard University have joined forces to put powerful probabilistic reasoning algorithms in the hands of bioengineers.

In a new paper presented at the Neural Information Processing Systems conference on December 7, Ryan P. Adams and Nils Napp have shown that an important class of artificial intelligence algorithms could be implemented using chemical reactions.

These algorithms, which use a technique called "message passing inference on factor graphs," are a mathematical coupling of ideas from graph theory and probability. They represent the state of the art in machine learning and are already critical components of everyday tools ranging from search engines and fraud detection to error correction in mobile phones.
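
To make "message passing inference on factor graphs" a bit more concrete, here is a toy sum-product calculation in Python: a two-variable chain with a noisy observation, where a marginal is computed by multiplying factor messages and summing out the other variable. This is only an in-silico illustration of the algorithm class, not the authors' chemical implementation; all tables and names are made up.

[code]
import numpy as np

# Toy factor graph: hidden binary variables x1 -- x2 (chain), with a noisy
# observation y of x2. Factors are just nonnegative tables.
prior_x1   = np.array([0.6, 0.4])              # f1(x1)
coupling   = np.array([[0.9, 0.1],             # f12(x1, x2): x2 tends to copy x1
                       [0.2, 0.8]])
likelihood = np.array([[0.75, 0.25],           # f2(x2, y): P(y | x2)
                       [0.30, 0.70]])

def marginal_x1(observed_y):
    """Sum-product on the chain: multiply incoming factors, sum out x2."""
    # "Product" step: message from the observation factor into x2.
    msg_obs_to_x2 = likelihood[:, observed_y]          # shape (2,)
    # "Sum" step: push that message through the coupling factor, summing over x2.
    msg_x2_to_x1 = coupling @ msg_obs_to_x2            # shape (2,)
    # Combine with the prior factor on x1 and normalize.
    unnorm = prior_x1 * msg_x2_to_x1
    return unnorm / unnorm.sum()

print("P(x1 | y=1) =", marginal_x1(1))
print("P(x1 | y=0) =", marginal_x1(0))
[/code]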

Adams' and Napp's work demonstrates that some aspects of artificial intelligence (AI) could be implemented at microscopic scales using molecules. In the long term, the researchers say, such theoretical developments could open the door for "smart drugs" that can automatically detect, diagnose, and treat a variety of diseases using a cocktail of chemicals that can perform AI-type reasoning.

"We understand a lot about building AI systems that can learn and adapt at macroscopic scales; these algorithms live behind the scenes in many of the devices we interact with every day," says Adams, an assistant professor of computer science at SEAS whose Intelligent Probabilistic Systems group focuses on machine learning and computational statistics. "This work shows that it is possible to also build intelligent machines at tiny scales, without needing anything that looks like a regular computer. This kind of chemical-based AI will be necessary for constructing therapies that sense and adapt to their environment. The hope is to eventually have drugs that can specialize themselves to your personal chemistry and can diagnose or treat a range of pathologies."

Adams and Napp designed a tool that can take probabilistic representations of unknowns in the world (probabilistic graphical models, in the language of machine learning) and compile them into a set of chemical reactions that estimate quantities that cannot be observed directly. The key insight is that the dynamics of chemical reactions map directly onto the two types of computational steps that computer scientists would normally perform in silico to achieve the same end.

This insight opens up interesting new questions for computer scientists working on statistical machine learning, such as how to develop novel algorithms and models that are specifically tailored to tackling the uncertainty molecular engineers typically face. In addition to the long-term possibilities for smart therapeutics, it could also open the door for analyzing natural biological reaction pathways and regulatory networks as mechanisms that are performing statistical inference. Just like robots, biological cells must estimate external environmental states and act on them; designing artificial systems that perform these tasks could give scientists a better understanding of how such problems might be solved on a molecular level inside living systems.

"There is much ongoing research to develop chemical computational devices," says Napp, a postdoctoral fellow at the Wyss Institute, working on the Bioinspired Robotics platform, and a member of the Self-organizing Systems Research group at SEAS. Both groups are led by Radhika Nagpal, the Fred Kavli Professor of Computer Science at SEAS and a Wyss core faculty member. At the Wyss Institute, a portion of Napp's research involves developing new types of robotic devices that move and adapt like living creatures.

"What makes this project different is that, instead of aiming for general computation, we focused on efficiently translating particular algorithms that have been successful at solving difficult problems in areas like robotics into molecular descriptions," Napp explains. "For example, these algorithms allow today's robots to make complex decisions and reliably use noisy sensors. It is really exciting to think about what these tools might be able to do for building better molecular machines."

Indeed, the field of machine learning is revolutionizing many areas of science and engineering. The ability to extract useful insights from vast amounts of weak and incomplete information is not only fueling the current interest in "big data," but has also enabled rapid progress in more traditional disciplines such as computer vision, estimation, and robotics, where data are available but difficult to interpret. Bioengineers often face similar challenges, as many molecular pathways are still poorly characterized and available data are corrupted by random noise.

Using machine learning, these challenges can now be overcome by modeling the dependencies between random variables and using them to extract and accumulate the small amounts of information each random event provides.

"Probabilistic graphical models are particularly efficient tools for computing estimates of unobserved phenomena," says Adams. "It's very exciting to find that these tools map so well to the world of cell biology."

Brief Subject Overview: a number of phase transitions are familiar to us from everyday experience: freezing, melting, evaporation, etc. There are others but to be blunt I don't know much about them, though it's one of the areas of physics I am most interested in appreciating. My dim understanding is that in the classical theory of phase transitions there arose a number of limits of quantities that diverged, and no one could quite figure out how to sensibly force them to converge; the renormalization group theory is what ultimately corrected this issue. The text I'll be using is about this theory.
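
A concrete example of the kind of divergence I have in mind (a standard textbook fact, not specific to the text below): near a continuous phase transition the correlation length blows up as a power law.

[code]
% Correlation length near the critical temperature T_c:
\xi(T) \sim |T - T_c|^{-\nu}
% The renormalization group is, roughly, the machinery that tames such
% divergences and explains why the exponent \nu is universal across
% physically very different systems.
[/code]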

Text(s): Zinn-Justin's "Phase Transitions and Renormalization Group".

Assigned problems: TBA

Replies will contain worked solutions, discussion, etc.

Agrulian Archives / Classical Field Theory (??? level)
« on: December 12, 2013, 09:49:07 PM »
Brief Subject Overview: classical field theory concerns the interaction of physical fields with matter. The text I'll be using focuses on fluid dynamics and elastic deformations, and on the gravitational and electromagnetic fields; it is both non-relativistic and non-quantum.

Text(s): Soper's "Classical Field Theory".

Assigned problems: TBA

Replies will contain worked solutions, discussion, etc.

Agrulian Archives / Abstract Algebra & Galois Theory (grad student level)
« on: December 12, 2013, 02:16:16 PM »
Brief Subject Overview: abstract algebra identifies the properties that make elementary algebra on the real numbers work as it does, and uplifts these properties to the status of axioms; it then explores the consequences of various combinations of these properties. Central objects of study are groups, rings, fields, etc.; straightforward questions of algebraic interest include, for example, whether and when a polynomial equation has a solution in some underlying set (the reals being the most familiar example) on which we're capable of doing algebra. Abstract algebra intertwines with many other areas of higher mathematics, as, for example, in the definition of Lie groups, which combines ideas from differential topology and abstract algebra. So far I've taken a one-semester course in this subject, and hope to take a second semester of it in the spring; however, IIRC we won't be covering much of chapter 13 (field theory) or any of chapter 14 (Galois theory), and I've always wanted to at least understand the proof that the general quintic polynomial has no solution in radicals (which I believe comes from Galois theory), so I'd like to teach this subarea to myself. We will be covering chapter 9 (polynomials over fields) in the spring, so the plan of study will be a bit redundant with that, but there's nothing like repetition to breed mastery.
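
As a toy illustration of "uplifting properties to axioms" (my own Python sketch, not an exercise from the book): one can brute-force check the group axioms for a small finite set under a candidate operation.

# My own toy check, not an exercise from Dummit & Foote: verify the group
# axioms for a finite set under a candidate binary operation.
def is_group(elements, op):
    # closure
    if any(op(a, b) not in elements for a in elements for b in elements):
        return False
    # associativity
    if any(op(op(a, b), c) != op(a, op(b, c))
           for a in elements for b in elements for c in elements):
        return False
    # identity element
    e = next((x for x in elements
              if all(op(x, a) == a and op(a, x) == a for a in elements)), None)
    if e is None:
        return False
    # inverses
    return all(any(op(a, b) == e for b in elements) for a in elements)

n = 6
Zn = set(range(n))
print(is_group(Zn, lambda a, b: (a + b) % n))   # True: (Z/6Z, +) is a group
print(is_group(Zn, lambda a, b: (a * b) % n))   # False: 0 has no multiplicative inverse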

Text(s): Dummit & Foote's "Abstract Algebra". Focus on chapters 9, 13, 14.

Assigned problems:

13,14,15,16,17 p. 298-299
4,11 p. 301-303
1,2,4,5 p. 306-307
4,7,11,17 p. 311-313
1,5,6,7 p. 315
6, 8, 13, 22, 29, 33, 34, 35, 43 p. 330-335

1,5,7,8 p. 519
3,7,12,20 p. 529-531
1,2,4 p. 535-536
1,2,5,6 p. 545
1,6,10,11 p. 551-552
1,2,3,9,10 p. 555-557

1,3,5,7,9 p. 566-567
2,5,7,20,31 p. 581-585
8,11,12,15,17 p. 589-591
3,5,7,8 p. 595-596
6,10,11,12,13 p. 603-606
13, 15, 18, 33, 37, 38, 44, 51 p. 617-624
2,3,4,5 p. 635-639
2,5,6 p. 644-645
2,8,12,14,15 p. 652-654

Replies will contain worked solutions, discussion, etc.

General Discussion / Extreme Diets can Quickly alter Gut Bacteria
« on: December 12, 2013, 01:00:48 AM »

With all the talk lately about how the bacteria in the gut affect health and disease, it's beginning to seem like they might be in charge of our bodies. But we can have our say, by what we eat. For the first time in humans, researchers have shown that a radical change in diet can quickly shift the microbial makeup in the gut and also alter what those bacteria are doing. The study takes a first step toward pinpointing how these microbes, collectively called the gut microbiome, might be used to keep us healthy.

"It's a landmark study," says Rob Knight, a microbial ecologist at the University of Colorado, Boulder, who was not involved with the work. "It changes our view of how rapidly the microbiome can change."

Almost monthly, a new study suggests a link between the bacteria living in the gut and diseases ranging from obesity to autism, at least in mice. Researchers have had trouble, however, pinning down connections between health and these microbes in humans, in part because it’s difficult to make people change their diets for the weeks and months researchers thought it would take to alter the gut microbes and see an effect on health.

But in 2009, Peter Turnbaugh, a microbiologist at Harvard University, demonstrated in mice that a change in diet affected the microbiome in just a day. So he and Lawrence David, now a computational biologist at Duke University in Durham, North Carolina, decided to see if diet could have an immediate effect in humans as well. They recruited 10 volunteers to eat only what the researchers provided for 5 days. Half ate only animal products—bacon and eggs for breakfast; spareribs and brisket for lunch; salami and a selection of cheeses for dinner, with pork rinds and string cheese as snacks. The other half consumed a high-fiber, plants-only diet with grains, beans, fruits, and vegetables. For the several days prior to and after the experiment, the volunteers recorded what they ate so the researchers could assess how food intake differed.

The scientists isolated DNA and other molecules, as well as bacteria, from stool samples from before, during, and after the experiment. In this way, they could determine which bacterial species were present in the gut and what they were producing. The researchers also looked at gene activity in the microbes.

Within each diet group, differences between the microbiomes of the volunteers began to disappear. The types of bacteria in the guts didn't change very much, but the abundance of those different types did, particularly in the meat-eaters, David, Turnbaugh, and their colleagues report online today in Nature. In 4 days, bacteria known to tolerate high levels of bile acids increased significantly in the meat-eaters. (The body secretes more bile to digest meat.) Gene activity, which reflects how the bacteria were metabolizing the food, also changed quite a bit. In those eating meat, genes involved in breaking down proteins increased their activity, while in those eating plants, other genes that help digest carbohydrates surfaced. "What was really surprising is that the gene [activity] profiles conformed almost exactly to what [is seen] in herbivores and carnivores," David says. This rapid shift even occurred in the long-term vegetarian who switched to meat for the study, he says. "I was really surprised how quickly it happened."

From an evolutionary perspective, the fact that gut bacteria can help buffer the effects of a rapid change in diet, quickly revving up different metabolic capacities depending on the meal consumed, may have been quite helpful for early humans, David says. But this flexibility also has possible implications for health today.

"This is a very important aspect of a very hot area of science," writes Colin Hill, a microbiologist at University College Cork in Ireland, who was not involved with the work. "Perhaps by adjusting diet, one can shape the microbiome in a way that can promote health," adds Sarkis Mazmanian, a microbiologist at the California Institute of Technology in Pasadena, also unaffiliated with the study.

But how it should be shaped is still up in the air. "We're not yet at a point where we can make sensible dietary recommendations aimed at 'improving' the microbiota (and the host)," Hill writes. He and others are cautious, for example, about the implications of the increase seen in the meat-eaters of one bacterium, Bilophila wadsworthia, which in mice is associated with inflammatory bowel disease and high-fat diets. Says Knight, "There's still a long way to go before causality is established."

So Hill's best advice for now: "People should ideally consume a diverse diet, with adequate nutrients and micronutrients—whether it's derived from animal or plant or a mixed diet."

Agrulian Archives / Electricity & Magnetism (undergrad student level)
« on: December 11, 2013, 09:37:52 PM »
Brief Subject Overview: the study of electricity, magnetism and --- on realizing their intimate connection --- electromagnetism, is a very old area of classical physics, and one of the most successful, as judged by the limited need for changes to it in light of new evidence. In fact, Maxwell's equations, which (together with Newton's laws) determine the behavior of electromagnetic systems, were a central part of Einstein's inspiration in developing relativistic mechanics, and, in my dim understanding, needed essentially no revision in the process. Anyway, electromagnetism is the study of precisely what it sounds like, primarily on length and speed scales that are within the realm of classical mechanics. An interesting feature, to me, of electromagnetism is its tireless, creative use of the vector calculus; although gradients always seemed to me to have an obvious interpretation, one often doesn't hear very lucid explanations of the intuition behind vector operations like the curl and divergence. Here we get exactly those explanations, since those operators are fundamental to the models the text handles.
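
As a small illustration of that vector-calculus machinery (my own Python/sympy sketch, not an example from the text): the divergence measures net outflow per unit volume and the curl measures local rotation, and both are easy to compute symbolically for a toy field.

# My own toy field, not an example from Purcell & Morin (assumes sympy).
import sympy as sp

x, y, z = sp.symbols('x y z')
F = (x**2, x*y, sp.Integer(0))     # the field F = (x^2, x*y, 0)

div_F = sp.diff(F[0], x) + sp.diff(F[1], y) + sp.diff(F[2], z)
curl_F = (sp.diff(F[2], y) - sp.diff(F[1], z),
          sp.diff(F[0], z) - sp.diff(F[2], x),
          sp.diff(F[1], x) - sp.diff(F[0], y))

print(div_F)    # 3*x: net "outflow" per unit volume at each point
print(curl_F)   # (0, 0, y): local "rotation", here about the z-axis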

Text(s): Purcell & Morin's "Electricity and Magnetism".

Assigned problems:

7,29,10,17,25 from Ch 1
26, 14, 12, 28, 7, 1, 17, 16 from Ch 2

Replies will contain worked solutions, discussion, etc.

Agrulian Archives / Statistical Mechanics (undergrad student level)
« on: December 11, 2013, 09:31:46 PM »
Brief Subject Overview: statistical mechanics is the branch of physics that deals with many-particle systems, and with the properties of matter and everyday physical experience that arise from the statistical, average behavior of many individual particles interacting. In a sense statistical mechanics is not a "true" foundational area of physics, in that it is in principle derivable entirely from the laws of classical, quantum, electromagnetic, and relativistic mechanics. Of course, in practice nobody can solve the systems of equations that would result from attempting such a reduction, so statistical mechanics stands on its own as the study of such properties as pressure, heat, phase transitions in matter, etc., that arise fundamentally from the interactions of many particles, typically on the order of Avogadro's constant, about 6.02 x 10^23, of them.
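
A minimal sketch of the "average over many particles" idea (my own toy Python example, not drawn from any of the texts below): the Boltzmann factor exp(-E/kT) already fixes the occupation probabilities and mean energy of a two-level system.

# My own toy example, not from the texts below: a two-level system in thermal
# equilibrium at temperature T.
import math

k_B = 1.380649e-23            # Boltzmann constant, J/K
E0, E1 = 0.0, 1.0e-21         # made-up level energies, J
T = 300.0                     # temperature, K

w0, w1 = math.exp(-E0 / (k_B * T)), math.exp(-E1 / (k_B * T))   # Boltzmann factors
Z = w0 + w1                   # partition function
p0, p1 = w0 / Z, w1 / Z       # occupation probabilities
mean_E = p0 * E0 + p1 * E1    # average energy per particle

print(p0, p1, mean_E)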

Text(s): Mandl's "Statistical Physics", Kittel & Kroemer's "Thermal Physics", Penrose's "Foundations of Stat Mech". Not really sure which of these books I like best so far; Mandl seems too terse, and assumes a lot of prior physics background. Currently focusing on K&K; jury's still out on how that'll work out. Penrose from what I recall is clear and satisfyingly deductive but assumes familiarity with the Hamiltonian formulation of classical mechanics, which I don't have yet; may return to his book once I work through that chapter in Class Mech.

Assigned problems:

2,5 from Ch 1 (Levin)
TBA (Penrose)

Replies will contain worked solutions, discussion, etc.

Agrulian Archives / Quantum Mechanics (undergrad student level)
« on: December 11, 2013, 09:24:55 PM »
Brief Subject Overview: quantum mechanics is the physics of the very small, on the order of Planck's constant, or about 6.63 x 10^-34 kilogram square meters per second (i.e., joule-seconds). When this number doesn't seem negligibly small in comparison to the scales of the things with which you're working, quantum effects will probably start to become important. Quantum physics contains a number of major deviations from classical physics; most texts seem to begin by explaining particle-wave duality, particularly as applied to light, and motivated by a sequence of experiments with a pedigree of several hundred years, maybe the most iconic of which is the two-slit experiment.
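
To see why that number matters (a small Python sketch of my own, not from either text): the de Broglie wavelength lambda = h/(m*v) is atom-sized for an electron but absurdly small for everyday objects, which is roughly why quantum effects show up in one case and not the other.

# My own numbers, not an exercise from either text.
h = 6.626e-34                       # Planck's constant, J*s (= kg*m^2/s)

def de_broglie_wavelength(mass_kg, speed_m_s):
    return h / (mass_kg * speed_m_s)

print(de_broglie_wavelength(9.11e-31, 1.0e6))   # electron: ~7e-10 m, atomic scale
print(de_broglie_wavelength(0.145, 40.0))       # baseball: ~1e-34 m, utterly negligible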

Text(s): Levin's "An Introduction to Quantum Theory" & Griffiths's "An Intro to Quantum Mech". Was originally working out of Levin, but later read Griffiths and find him, at least thus far, both clearer and more concise, so am using Griffiths as the primary text now.

Assigned problems:

7,8,13,15,19 from quantum Ch1 (Levin)
TBA (Griffiths)

Replies will contain worked solutions, discussion, etc.

Agrulian Archives / Classical Mechanics (undergrad student level)
« on: December 11, 2013, 09:17:42 PM »
Brief Subject Overview: classical mechanics is, in my understanding, the foundation upon which most of a typical physics education is built. It is Newton's physics of motion, or Lagrange's, or Hamilton's --- all equivalent formulations of the same set of physical laws. They describe motion (position, velocity, acceleration, etc.) with great fidelity for nearly all sorts of physical things over a very broad range of masses, sizes, and speeds. For the very small and the very quickly moving, classical mechanics breaks down and gives way to quantum mechanics and (special/general) relativistic mechanics, but for most everyday ranges it works as an exceedingly successful approximation. Moreover, these other subject areas tend to explain themselves by contrast with, and through examples familiar from, classical mechanics; so classical mechanics is important for understanding physics generally. The same goes for other areas of physics: for example, Maxwell's equations and the theory of electromagnetism describe the behavior of electromagnetic forces, but the charged bodies those forces act on still obey Newton's three laws (in the realm of classical approximation), and one could hardly make practical use of Coulomb's Law, which gives the force exerted by one stationary electric charge on another, without knowing, for example, that F = ma, from classical mechanics.
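
As a tiny worked illustration of that point (my own numbers, in Python, not an example from Taylor): Coulomb's Law supplies a force, and F = ma is what turns that force into motion.

# My own numbers, not a problem from Taylor.
k_e = 8.988e9         # Coulomb constant, N*m^2/C^2
q = 1.602e-19         # elementary charge, C
m_e = 9.11e-31        # electron mass, kg
r = 5.29e-11          # roughly a Bohr radius, m

F = k_e * q * q / r**2    # Coulomb's Law: force between two stationary charges
a = F / m_e               # Newton's second law: the electron's acceleration

print(F)   # ~8.2e-8 N
print(a)   # ~9.0e22 m/s^2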

Text(s): Taylor's "Classical Mechanics".

Assigned problems: (NOTE -- odds only bc Taylor only has odd solutions)

10,24,43,44 from ClassMech Ch1
1,9,19,25,39,49,50,53 from Ch2
5,7,21 from Ch3
27,31,45 from Ch4
11,41,45 from Ch5

Replies will contain worked solutions, discussion, etc.

Agrulian Archives / Vibrations and Waves (high school level)
« on: December 11, 2013, 09:05:43 PM »
Brief Subject Overview: vibrations and waves is about the physics, and to some extent the mathematics, of waves that move in various media---light waves, sound waves, etc.
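
A minimal sketch of the basic object (my own Python example, not from King): a traveling wave y(x, t) = A sin(kx - wt), whose crests move at speed v = w/k.

# My own example, not a problem from King.
import math

A = 1.0                        # amplitude, m
k = 2 * math.pi / 0.5          # wavenumber for wavelength 0.5 m
w = 2 * math.pi * 4.0          # angular frequency for 4 Hz
v = w / k                      # wave speed = frequency * wavelength = 2 m/s

def y(x, t):
    return A * math.sin(k * x - w * t)   # a wave traveling in the +x direction

print(v)               # 2.0
print(y(0.125, 0.0))   # ~1.0: a crest at this x and t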

Text(s): King's "Vibrations and Waves".

Assigned problems:

8,9,10 from Ch 1

Replies will contain worked solutions, discussion, etc.

Agrulian Archives / i dont know how to sticky threads (fffffuuuuuu level)
« on: December 11, 2013, 08:54:53 PM »

Agrulian Archives / Copulas (grad student level)
« on: December 11, 2013, 04:55:44 PM »
Brief Subject Overview: in probability theory, multivariate probability distributions or density functions describe the probability that vectors of, say, N variables will occur together; i.e., the probability that variable 1 will have value x at the same time that variable 2 has value y, at the same time that variable 3 has value z, and so on. In general, knowledge of the value of one variable may tell us something about the likely values of another variable; that is, the probability distribution may contain a dependence between its variables. Copulas are a standard way of expressing and analyzing this dependence structure. They have become widely used in, for example, quantitative finance, as a result of which the Gaussian copula (which is just a single, widely used kind of copula) and its limitations became the focus of at least one scathing article in the wake of the financial crisis.
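
As a small illustration (my own Python sketch, not from Nelsen; it assumes numpy and scipy are available): the Gaussian copula can be sampled by drawing correlated normals and pushing each coordinate through the standard normal CDF, which leaves uniform marginals but keeps the dependence.

# My own sketch, not from Nelsen (assumes numpy and scipy).
import numpy as np
from scipy.stats import norm, multivariate_normal

rho = 0.7
cov = [[1.0, rho], [rho, 1.0]]
z = multivariate_normal(mean=[0.0, 0.0], cov=cov).rvs(size=10000, random_state=0)
u = norm.cdf(z)                   # each column is now Uniform(0,1); dependence remains

print(u.mean(axis=0))             # both near 0.5, as uniform marginals should be
print(np.corrcoef(u.T)[0, 1])     # the dependence inherited from rho survives the transform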

Text(s): Nelsen's "An Introduction to Copulas".

Assigned problems:

Text contains no problems! Will assign theorems/propositions/examples to be worked out in detail instead.

Replies will contain worked solutions, discussion, etc.

Brief Subject Overview: differential topology and the theory of (differentiable) manifolds is mostly concerned with studying topological or differentiable 'manifolds.' Topological manifolds are structures that look locally like Euclidean N-space in the sense that, for each point of the manifold and some neighborhood around that point, there is a continuous bijection with continuous inverse (a homeomorphism) between that neighborhood and some open ball in Euclidean N-space. Differentiable manifolds are topological manifolds with an extra property: if two such homeomorphisms (charts) f and g happen to have overlapping domains (as they often will in practice), then f(g^{-1}(.)) gives us a map from R^N -> R^N, and we can impose the condition that this map be differentiable (once, twice, etc., as desired). This kind of differentiability condition on a manifold gives it a great deal of extra structure, and essentially lets us "do calculus" in any space that "looks smooth like R^N" if you get close enough to it.
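
A concrete example of such a transition map (my own sympy sketch, not an exercise from the texts below): the unit circle with its two stereographic-projection charts, where the change of coordinates on the overlap works out to u -> 1/u, a perfectly smooth map away from u = 0.

# My own example, not an exercise from the texts below (assumes sympy).
import sympy as sp

u = sp.symbols('u', nonzero=True)

# Point of the unit circle hit by north-pole chart value u (chart: (x, y) -> x/(1-y)):
x = 2 * u / (u**2 + 1)
y = (u**2 - 1) / (u**2 + 1)

# Apply the south-pole chart (x, y) -> x/(1+y) to that point:
phi_S = x / (1 + y)

print(sp.simplify(phi_S))   # 1/u: the transition map, smooth wherever it is defined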


Text(s):

Introduction to Topological Manifolds by Lee (Lee1).
Geometrical Methods of Mathematical Physics by Schutz.
Introduction to Smooth Manifolds by Lee (Lee2).

Assigned problems:

4,14,19 from Lee1, Ch1
Ex. 2.1 from Schutz

Appendices (topol-alg-calc) proofs of statements/theorems/exercises, p. 540-596, Lee2
Ex. 1.1-1.6 from Lee2, (middle of) Ch 1
1,3,4,5,9 from Lee2, (end of) Ch1

Replies will contain worked solutions, discussion, etc.

Agrulian Archives / Measure Theory (grad student level)
« on: December 11, 2013, 04:38:54 PM »
Brief Subject Overview: measure theory studies, as the name suggests, the construction and properties of "measures." Measures are a generalization of probability distributions, and measure theory helps to place probability theory on a clear theoretical basis (it was rather informal prior to the theory of measures, I believe). Measure theory also helps us to define more general notions of integration than basic Riemann integration; by partitioning the range rather than the domain of a function before performing the finite summations that form the basis for our infinite summation (which is all integration is), we become capable of integrating even very badly behaved functions like the Dirichlet function, which has the value 1 on all rational numbers and 0 on all irrational numbers, and so is ugly as shit because it oscillates very wildly between 1 and 0---doing so infinitely often within any arbitrarily small interval, and in fact uncountably often there---and is effectively impossible to visualize or graph properly. Measure theory is also a pretty rich source of examples with interesting and theoretically useful behavior, such as the Cantor function (which itself forms a key illustration in the theory of fractals) or the "non-measurable" sets that we know must exist, such as Vitali's sets. As an aside, my adviser---who is a Bayesian statistician by training---has emphasized to me that he thinks measure theory is mostly used by probabilists to make their work more intimidating and impenetrable, and not because it actually helps them get anything extra done. I don't really know enough to weigh in one way or the other on that, but I think it's certainly true, for the time being and on net, of some areas that I've studied.
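
As a small worked example of the point about the Dirichlet function (my own summary, not a passage from the text): the rationals in [0,1] are countable and so have Lebesgue measure zero, which gives

\int_0^1 \mathbf{1}_{\mathbb{Q}}(x)\, d\mu = 1 \cdot \mu(\mathbb{Q} \cap [0,1]) + 0 \cdot \mu([0,1] \setminus \mathbb{Q}) = 1 \cdot 0 + 0 \cdot 1 = 0,

whereas the Riemann integral of the same function fails to exist, since every upper sum is 1 and every lower sum is 0.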

Text(s): Adams & Guillemin's "Measure Theory & Probability".

Assigned problems:

Sec. 1.1 # 16, 8, 11, 12, 19
Sec. 1.2 # 5, 8, 9, 13
Sec. 1.3 # 1, 5, 10, 20
Sec. 1.4 # 4, 6, 8, 14, 16
Sec 2.1 # 1, 6, 9, 10
Sec 2.2 # 2, 3, 6, 7, 12
Sec 2.3 # 1, 2, 12, 13, 14
Sec 2.4 # 1, 2, 3, 6
Sec 2.5 # 3, 7, 9, 11, 12
Sec 2.6 # 7, 8, 9, 10
Sec 2.7 # 2, 6, 7
Sec 2.8 # 1, 2, 3
Sec 3.1 # 1, 5, 7, 8
Sec 3.2 # 3, 6, 8
Sec 3.3 # 1, 4, 5, 8, 11
Sec 3.4 # 3, 5, 6, 7, 9
Sec 3.5 # 3, 5, 9, 12, 14
Sec 3.6 # 2, 6, 8, 9
Sec 3.7 # 3, 5, 6, 8, 9
Sec 3.8 # (write out proof of CLT in detail)

Replies will contain worked solutions, discussion, etc.

Brief Subject Overview: computational complexity theory is the study of computational problems, particularly those expressible in the binary language of digital computers. Complexity theory is concerned primarily with classifying tasks according to their difficulty (and, of course, solving those that can be solved); to achieve this, it assigns problems to complexity classes, such as the famous classes P and NP. There are many more classes as well: PLS, co-NP, PPAD, FIXP, etc. Defining new classes is often the first step in a complexity theorist's attempt to understand how hard a problem is to solve. Roughly, problems in P are efficiently solvable, while NP-hard problems are believed not to be. There also exist easier problem classes than P, and harder ones than NP; the most extreme example of the latter is the class of undecidable problems, which cannot be solved by any computer at all, even in principle, with an arbitrarily large amount of time and space in which to compute. Recently complexity theory has also been applied to problems outside of standard computer science, such as the complexity of finding solution concepts in game theory and economics, with the basic idea being that solution concepts without efficient algorithms are not good solution concepts, since nobody could be expected to find them in practice. A curious feature of computational complexity theory is that it is the source of many unsolved conjectures which are nevertheless strongly believed to be true; e.g., nobody has proven, but most theorists strongly believe, that P does not equal NP, and likewise that NP does not equal co-NP.
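
To give the flavor of NP in code (my own Python sketch, not a problem from Rich): a claimed solution to Subset-Sum can be verified in polynomial time, even though no polynomial-time method for finding one is known in general.

# My own example, not a problem from Rich.
def verify_subset_sum(numbers, target, certificate):
    # 'certificate' is a list of indices claimed to pick out a subset summing to target.
    if len(set(certificate)) != len(certificate):               # no repeated indices
        return False
    if any(i < 0 or i >= len(numbers) for i in certificate):    # indices in range
        return False
    return sum(numbers[i] for i in certificate) == target       # the actual check

nums = [3, 34, 4, 12, 5, 2]
print(verify_subset_sum(nums, 9, [2, 4]))   # True: 4 + 5 = 9
print(verify_subset_sum(nums, 9, [0, 1]))   # False: 3 + 34 != 9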

Text(s): Rich's "Automata, Computability, and Complexity".

Assigned problems:


Replies will contain worked solutions, discussion, etc.

Agrulian Archives / Chaos Theory (undergrad student level)
« on: December 11, 2013, 04:15:39 PM »
Brief Subject Overview: 'chaos' is a phenomenon found in discrete-time and continuous-time, deterministic and stochastic dynamical systems, i.e., systems that update their state from time period to time period according to some predetermined rule (with the rule defining a probability of moving from each state to another in stochastic systems), either in time periods 1,2,3,4... or in time periods indexed by all points in an interval, such as [0,∞). Intuitively, chaos is a situation in which the deterministic behavior of the system leads to seemingly random, unpredictable behavior in the large; formally, chaos has a number of different (and not all equivalent) definitions. Most of these definitions have as their centerpiece some kind of "topological mixing"; i.e., chaos requires that solutions starting in any given open set eventually end up in any other open set, given enough time. Another common, better-known condition is that solutions starting arbitrarily close together should separate from one another exponentially fast in time (up to some limit, at least, if the state space itself is bounded in size). Chaos theory is about defining and proving the existence of chaos in formal systems, understanding its determinants and behavior, understanding what limitations chaos does or doesn't imply for predictability, and identifying chaos in practice in real-world systems. It also concerns the formulation of various definitions of chaos, and studying whether and when they are or aren't equivalent. The text I use is on the low end of technical difficulty in this subject area, primarily because it deals with discrete-time systems; this makes it accessible and easy to get into, which is nice; also, the author provides exercises to work, which most authors on continuous-time chaos seem not to do, for some reason.
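
A minimal illustration of sensitive dependence (my own Python sketch; the logistic map is a standard example, though I haven't checked whether Elaydi uses these exact parameters): two initial conditions differing by 10^-10 end up macroscopically far apart after a few dozen iterations.

# The logistic map x_{n+1} = r*x*(1 - x) at r = 4, a standard chaotic example.
r = 4.0
x, y = 0.2, 0.2 + 1e-10      # two initial conditions differing by 10^-10

for n in range(60):
    x, y = r * x * (1 - x), r * y * (1 - y)

print(abs(x - y))            # the tiny initial gap has grown to order one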

Text(s): Elaydi's "Discrete Chaos".

Assigned problems:

3,6,11 from Chaos 1.1-1.3
1,2,4 from 1.4-1.5
4,12,14 from 1.6
4,8,13 from 1.7
5,6,16 from 1.8
1,2 from 1.9
2,3,15 from 2.1-2.2
3,6,15 from 2.3-2.4
5,6,7 from 2.5
1,2,3 from 2.6
1,15,18 from 3.1-3.2
3,5 from 3.3
10,14 from 3.4
1,3,11 from 3.5
6,7,10 from 3.6
1,14,15 from 3.7
1,2,6 from 4.1-4.2
1,3,4 from 4.3-4.4
8,10,12 from 4.5-4.7
8,13,15 from 4.8
2,13,14 from 4.9-4.10
2,6,7 from 4.11
1,4,6 from 5.1
2,9,13 from 5.2
5,8,9 from 5.3
1,5,9 from 5.4-5.5
10,15,16 from 6.1-6.2
3,7,15 from 6.3
1,5,10 from 6.4
8,11 from 7.1-7.2
4,7,9 from 7.3-7.4
8,9 from 7.5
4,9 from 7.6

Replies will contain worked solutions, discussion, etc.

Agrulian Archives / Topology (grad student level)
« on: December 11, 2013, 03:40:18 PM »
Brief Subject Overview: topology is the study of open sets, independent of any concrete notion of distance; its starting point is in identifying some properties of open sets in familiar settings like real/Euclidean N-space and defining open sets in more general spaces as any collection of sets that have those properties. Topology allows us to talk about, for example, continuity of functions or the number of holes in very abstract settings and spaces, where a familiar notion of distance may not be available, and artificially imposing one may be awkward. In short topology helps us to identify properties that depend only on the structure imposed on a space by its open sets, so that we do not need to worry about unimportant, extraneous detail in describing these properties.
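
As a toy illustration of the axioms (my own Python sketch, not an exercise from Munkres): for a finite set one can brute-force check whether a collection of subsets is a topology.

# My own toy check, not an exercise from Munkres.
from itertools import combinations

def is_topology(X, opens):
    opens = {frozenset(s) for s in opens}
    if frozenset() not in opens or frozenset(X) not in opens:   # must contain {} and X
        return False
    for A, B in combinations(opens, 2):                         # pairwise closure suffices
        if A | B not in opens or A & B not in opens:            # for a finite collection
            return False
    return True

X = {1, 2, 3}
print(is_topology(X, [set(), {1}, {1, 2}, X]))   # True: a nested ("chain") topology
print(is_topology(X, [set(), {1}, {2}, X]))      # False: {1} | {2} = {1, 2} is missing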

Text(s): Munkres "Topology".

Assigned problems:

1,3,7 p. 83
4,5,7,9 p. 91-92
5,7,11,12,18 p. 100-102
1,3,8,11,12 p. 111-112
2,4,8,10 p. 118
6,7,9,10,11 p. 126-129
2,4,6,9,12 p. 133-136
1,3,5,6 p. 144-145
1,3,4,7 p. 145-146 (supp: topol groups)

2,4,5,11 p. 152
5,8,10,12 p. 157-159
1,6,7,8,9 p. 162-163
7,9,10,11,13 p. 170-172
1,3,4,6 p. 177-178
3,4,5,7 p. 181-182
4,7,8,11 p. 186
1,4,5,10 p. 187-188 (supp: nets)

1,3,11,13,15 p. 194-195
1,5,7 p. 199-200
4,5,6,10 p. 205-207
1,2,3,7,10 p. 212-214
1,3,4,6 p. 218
1,3,5,7,8 p. 223-224
1,3,4 p. 227
1,3,4,7 p. 228-229 (supp: basics review)

1,2,4 p. 235-237
3,4,5,9 p. 241-242

2,5,6 p. 248
1,4,8 p. 260-261
2 p. 262

2,3,7 p. 270-271
1,2,3 p. 274-275
3,4,5,7,8 p. 280-281
2,4,6,9 p. 288-290
2,3 p. 292-293

5,9,10,11,12 p. 298-300
1,2 p. 304
1,3,7 p. 315-316
3,7,8 p. 316-318

Ch9~ (Algebraic Topology)
1,2,3 p. 330
1,2,6 p. 334-335
3,4,5 p. 341
1,4,7,8 p. 347-348
1,2,4 p. 353
1,2, p. 356
1,2,4 p. 359
1,4,5,9 p. 366-367
2,3,4 p. 370
3,4,5 p. 375

1,2 p. 380-381
2,3,4,5,6 p. 384-385
1,2,3 p. 393-394
1 p. 398
1,2 p. 406

1,2,5,6 p. 411-412
2,3,4 p. 421
1,3,4 p. 425
1,2,3 p. 433
1,2,4,5 p. 438
1,2,3 p. 441
1,3,4 p. 445

2,3,4,5 p. 453-454
1,2,4 p. 457
1,2 p. 462
1,2,4 p. 470-471
2,4,5 p. 476

3,5,6,7 p. 483-484
1 p. 487
1,2,6 p. 492-494
2 p. 499
1,3 p. 499-500 (supp: topol props and pi_1)

1,2 p. 505-506
2,3 p. 513
1,2,3 p. 515

Replies will contain worked solutions, discussion, etc.

Agrulian Archives / 'Light' Textbook Walkthroughs of Tough Subjects
« on: December 11, 2013, 03:21:11 PM »
It is often the case that, for a tough, formal subject, textbooks come in at least two flavors: the dense, encyclopedic, painful, rewarding-if-engaged kind, and a lighter kind, with a style dealing in more prose and somewhat less math, and particularly less of a rigid theorem-lemma-proof format. I often find it is helpful to have both kinds of books, since the dense ones are tough to simply read and it can be easy to get lost in their swamp of details if you are not tremendously on top of your game; while the dense tomes are the only path to full understanding of a subject, the lighter froo-froo texts can be awesome for getting a bird's-eye view of the landscape and a quick, often exceedingly useful intuition about a discipline and its tools. Of course these categories are somewhat fuzzy, but I think it is usually clear enough where a text falls.

In short, the emphasis in the 'light/Type 2' books is on developing a key nugget of intuition and some appreciation for the most important results of a subject, while in the 'dense/Type 1' books it is on rigorous, exhaustive understanding. Please note that this thread is not for 'popular' books on a subject, which might be called books of Type 3; Type 2 books are still textbooks, and still engage with the material in a rigorous, formal way, albeit less so than Type 1 books. With those descriptions & caveats in mind, this thread is devoted to the lighter Type 2 kind of textbook, in whatever subject, because such books can be somewhat hard to find.

Here's a list of 'light/Type 2' texts, all much more accessible than the standard in their literature:

Abstract Algebra :
A Book of Abstract Algebra by Pinter

Differentiable Topology/Manifolds :
Geometrical Methods of Mathematical Physics by Schutz
Differential Topology with a View to Applications* by Chillingworth

Stochastic Calculus :
An Introduction to the Mathematics of Financial Derivatives

Vector Calculus :
Div, Grad, Curl, and All That (I think this fits; haven't read it, AD may correct me)

Quantum Physics :
Understanding Quantum Physics by Morrison and Its Sequel

Number Theory :
Excursions into Number Theory by Ogilvy

Logic :
Godel's Proof by Newman & Nagel

Computational Complexity:
Computers & Intractability* by Garey & Johnson

Derivatives Theory:
Derivatives by Wilmott (note: broader coverage than typical of Type 2, but same style)

Linear Algebra:
Introduction to Linear Algebra by Strang

Might follow-up later with summaries/comments, not sure. Lemme know if you have anything you'd like to add to this list.

* denotes a book that is also widely cited in the literature. Always find it weird when a 'classic' just so happens to be eminently accessible & readable too.

Agrulian Archives / Set Valued Analysis (grad student level)
« on: December 11, 2013, 03:08:30 PM »
Brief Subject Overview: analysis is the "theory of calculus," and set-valued analysis concerns the generalization of this theory from the standard calculus, which focuses on functions mapping elements of a set S to individual real numbers, to a broader setting in which functions map a single element of a set S to multiple elements of some other set T. Set-valued analysis gives more general definitions of the corresponding notions from the usual "real-valued" analysis/calculus, such as limits of sequences and functions (two different notions of limit that are equivalent in real-valued analysis turn out to be non-equivalent in the set-valued setting). Set-valued analysis is also the natural setting in which the proof of Kakutani's Fixed Point Theorem emerges; that theorem is used in many settings, and in particular in game theory, to show that a problem has a solution.
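
A tiny concrete example of a set-valued map (my own Python sketch, not from Aubin & Frankowska): in matching pennies each player's best response to the other's mixing probability is a set, sometimes a whole interval, and the Kakutani-style fixed point x in F(x) sits at (1/2, 1/2).

# My own example, not from Aubin & Frankowska: best responses as intervals.
def best_response_p1(q):
    # Player 1 wants to match; q is P2's probability of heads.
    if q > 0.5:
        return (1.0, 1.0)       # strictly best to always play heads
    if q < 0.5:
        return (0.0, 0.0)
    return (0.0, 1.0)           # indifferent: every mixture in [0, 1] is a best response

def best_response_p2(p):
    # Player 2 wants to mismatch; p is P1's probability of heads.
    if p > 0.5:
        return (0.0, 0.0)
    if p < 0.5:
        return (1.0, 1.0)
    return (0.0, 1.0)

def contains(interval, x):
    lo, hi = interval
    return lo <= x <= hi

p, q = 0.5, 0.5
print(contains(best_response_p1(q), p) and contains(best_response_p2(p), q))   # True: (p, q) lies in its own best-response set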

Text(s): Aubin & Frankowska's "Set Valued Analysis".

Assigned problems:

Text doesn't contain any problems! Will assign theorems to work through in detail.

Replies will contain worked solutions, discussion, etc.
