Jun 04, 2013

I received this most fabulous message a few days ago… and I couldn’t resist sharing it with you. Plus, it raises some serious points about my approach that I discuss below too.

SO! I quit listening to P/A a few months ago because (hear me out) – I started noticing that I agreed with *everything* that was coming out of your lovely face. I started growing a little worried that I was getting super lazy and losing (or worse, ignoring) the whole critical thinking thing.

Naturally, being a mad social scientist, I decided to test the theory. I followed the questions for a few weeks and jotted down what I thought the correct responses should be, and for the past few days I’ve been listening to podcasts at work (productivity soared). So guess what! My worrywarting was totally unjustified. I got the gist of plenty of the questions spot on, although with a miserable fraction of the detail you provided. I’m so damn proud.

Anyhoo, the shamefully unreciprocated consumption of your podcasts on my part is over. As soon as Dwolla verifies my bank account the donations should be coming in biweekly. I adore your work, and not just because it gives me some vain sense of self-righteousness. That’s just a perk.

If you’re ever back home in Maryland and would like me to donate some steak and bacon, just drop me a line, chica!

The style of this email put me into fabulous fits of giggles, but I very much enjoyed its serious point too.

If you’re a fellow Objectivist, the basic answers to the questions I answer on Sunday’s Philosophy in Action Radio aren’t terribly difficult. In most cases, I know that basic answer when I choose the questions, and I bet many of you do too. If my goal were just to inform listeners — Objectivist or not — of the right answer, I’d answer six questions in fifteen minutes… and then shoot myself in the head.

Instead, the goal of the show is to work through the actual thinking required to answer such questions — meaning, to develop and apply relevant principles, to test those principles via real-world examples, to consider objections, and so on. That’s why the show consistently runs over an hour each week. It’s also why preparation for each show usually requires thinking through the issues involved, then some reading and research, then discussion with Greg, Tammy, and Paul, then more in-depth thinking, then hours of writing and organizing those thoughts.

By taking that approach, I’m able to explain my reasons for my answer in sufficient depth that people can (and do) change their minds — rationally, not rationalistically or dogmatically. Moreover, I’m teaching them — implicitly and explicitly — the principles and tools they need to think through new issues on their own in a rational way.

I’m very pleased — and proud — to be doing that. I’m also so grateful that so many others see the value in my approach, particularly when they help spread the word about the show and support it financially. That means the world to me.


On Sunday’s Philosophy in Action Radio, I’ll discuss Judith Thomson’s classic “violinist” argument in favor of abortion rights. It’s an engaging and accessible article which has been widely read and reprinted. If you’ve never read it — or you’ve not read it in a while — you might want to read or re-read it before Sunday’s broadcast. You can do so here: Judith Jarvis Thomson: A Defense of Abortion.

Here’s the introduction to whet your appetite.

Most opposition to abortion relies on the premise that the fetus is a human being, a person, from the moment of conception. The premise is argued for, but, as I think, not well. Take, for example, the most common argument. We are asked to notice that the development of a human being from conception through birth into childhood is continuous; then it is said that to draw a line, to choose a point in this development and say “before this point the thing is not a person, after this point it is a person” is to make an arbitrary choice, a choice for which in the nature of things no good reason can be given. It is concluded that the fetus is, or anyway that we had better say it is, a person from the moment of conception. But this conclusion does not follow. Similar things might be said about the development of an acorn into an oak tree, and it does not follow that acorns are oak trees, or that we had better say they are. Arguments of this form are sometimes called “slippery slope arguments” — the phrase is perhaps self-explanatory — and it is dismaying that opponents of abortion rely on them so heavily and uncritically.

I am inclined to agree, however, that the prospects for “drawing a line” in the development of the fetus look dim. I am inclined to think also that we shall probably have to agree that the fetus has already become a human person well before birth. Indeed, it comes as a surprise when one first learns how early in its life it begins to acquire human characteristics. By the tenth week, for example, it already has a face, arms and legs, fingers and toes; it has internal organs, and brain activity is detectable. On the other hand, I think that the premise is false, that the fetus is not a person from the moment of conception. A newly fertilized ovum, a newly implanted clump of cells, is no more a person than an acorn is an oak tree. But I shall not discuss any of this. For it seems to me to be of great interest to ask what happens if, for the sake of argument, we allow the premise. How, precisely, are we supposed to get from there to the conclusion that abortion is morally impermissible? Opponents of abortion commonly spend most of their time establishing that the fetus is a person, and hardly any time explaining the step from there to the impermissibility of abortion. Perhaps they think the step too simple and obvious to require much comment. Or perhaps instead they are simply being economical in argument. Many of those who defend abortion rely on the premise that the fetus is not a person, but only a bit of tissue that will become a person at birth; and why pay out more arguments than you have to? Whatever the explanation, I suggest that the step they take is neither easy nor obvious, that it calls for closer examination than it is commonly given, and that when we do give it this closer examination we shall feel inclined to reject it.

I propose, then, that we grant that the fetus is a person from the moment of conception. How does the argument go from here? Something like this, I take it. Every person has a right to life. So the fetus has a right to life. No doubt the mother has a right to decide what shall happen in and to her body; everyone would grant that. But surely a person’s right to life is stronger and more stringent than the mother’s right to decide what happens in and to her body, and so outweighs it. So the fetus may not be killed; an abortion may not be performed.

It sounds plausible. But now let me ask you to imagine this. You wake up in the morning and find yourself back to back in bed with an unconscious violinist. A famous unconscious violinist. He has been found to have a fatal kidney ailment, and the Society of Music Lovers has canvassed all the available medical records and found that you alone have the right blood type to help. They have therefore kidnapped you, and last night the violinist’s circulatory system was plugged into yours, so that your kidneys can be used to extract poisons from his blood as well as your own. The director of the hospital now tells you, “Look, we’re sorry the Society of Music Lovers did this to you — we would never have permitted it if we had known. But still, they did it, and the violinist is now plugged into you. To unplug you would be to kill him. But never mind, it’s only for nine months. By then he will have recovered from his ailment, and can safely be unplugged from you.” Is it morally incumbent on you to accede to this situation? No doubt it would be very nice of you if you did, a great kindness. But do you have to accede to it? What if it were not nine months, but nine years? Or longer still? What if the director of the hospital says, “Tough luck, I agree, but now you’ve got to stay in bed, with the violinist plugged into you, for the rest of your life. Because remember this: all persons have a right to life, and violinists are persons. Granted you have a right to decide what happens in and to your body, but a person’s right to life outweighs your right to decide what happens in and to your body. So you cannot ever be unplugged from him.” I imagine you would regard this as outrageous, which suggests that something really is wrong with that plausible-sounding argument I mentioned a moment ago.

In this case, of course, you were kidnapped, you didn’t volunteer for the operation that plugged the violinist into your kidneys. Can those who oppose abortion on the ground I mentioned make an exception for a pregnancy due to rape? Certainly. They can say that persons have a right to life only if they didn’t come into existence because of rape; or they can say that all persons have a right to life, but that some have less of a right to life than others, in particular, that those who came into existence because of rape have less. But these statements have a rather unpleasant sound. Surely the question of whether you have a right to life at all, or how much of it you have, shouldn’t turn on the question of whether or not you are a product of a rape. And in fact the people who oppose abortion on the ground I mentioned do not make this distinction, and hence do not make an exception in case of rape.

Nor do they make an exception for a case in which the mother has to spend the nine months of her pregnancy in bed. They would agree that would be a great pity, and hard on the mother; but all the same, all persons have a right to life, the fetus is a person, and so on. I suspect, in fact, that they would not make an exception for a case in which, miraculously enough, the pregnancy went on for nine years, or even the rest of the mother’s life.

Some won’t even make an exception for a case in which continuation of the pregnancy is likely to shorten the mother’s life; they regard abortion as impermissible even to save the mother’s life. Such cases are nowadays very rare, and many opponents of abortion do not accept this extreme view. All the same, it is a good place to begin: a number of points of interest come out in respect to it.

Again, you can read the whole article here: A Defense of Abortion by Judith Thomson. Then… please join us on Sunday morning for the live broadcast of Philosophy in Action Radio — or listen to the podcast later.

Apr 09, 2013

A few years ago, I read a fascinating little book entitled Heavy Drinking: The Myth of Alcoholism as a Disease by philosopher Herbert Fingarette. Drawing on a slew of psychological studies, Fingarette presented a compelling case against the disease model of addiction, including the common claim that the alcoholic cannot control his/her drinking.

Back in January, when I answered a question on the nature of addiction, I wanted to re-acquaint myself with Fingarette’s basic arguments. Happily, I found a fabulous article by him — Alcoholism: the mythical disease — that offers many of the same arguments as the book.

As a philosopher, the issue of most interest to me here concerns free will and responsibility — namely, do “alcoholics” lack control over their drinking? In this article, as well as in the book, Fingarette presents some fascinating empirical evidence on that score. (Since the article is available freely as a PDF, I’ll quote the whole section.) Fingarette writes:

In fact, alcoholics do have substantial control over their drinking, and they do respond to circumstances. Contrary to what the public has been led to believe, this is not disputed by experts. Many studies have described conditions under which diagnosed alcoholics will drink moderately or excessively, or will choose not to drink at all. Far from being driven by an overwhelming “craving,” they turn out to be responsive to common incentives and disincentives, to appeals and arguments, to rules and regulations. Alcohol does not automatically trigger uncontrolled drinking. Resisting our usual appeals and ignoring reasons we consider forceful are not results of alcohol’s chemical effect but of the fact that the heavy drinker has different values, fears, and strategies. Thus, in their usual settings alcoholics behave without concern for what others regard as rational considerations.

But when alcoholics in treatment in a hospital setting, for example, are told that they are not to drink, they typically follow the rule. In some studies they have been informed that alcoholic beverages are available, but that they should abstain. Having decided to cooperate, they voluntarily refrain from drinking. More significantly, it has been reported that the occasional few who cheated nevertheless did not drink to excess but voluntarily limited themselves to a drink or two in order to keep their rule violation from being detected. In short, when what they value is at stake, alcoholics control their drinking accordingly.

Alcoholics have been tested in situations in which they can perform light but boring work to “earn” liquor; their preference is to avoid the boring activity and forgo the additional drinking. When promised money if they drink only moderately, they drink moderately enough to earn the money. When threatened with denial of social privileges if they drink more than a certain amount, they drink moderately, as directed. The list of such experiments is extensive. The conclusions are easily confirmed by carefully observing one’s own heavy-drinking acquaintances, provided one ignores the stereotype of “the alcoholic.”

Some people object that these experiments take place in “protected” settings and are therefore invalid. This gets things backwards. The point is that it is precisely settings, circumstances, and motivations that are the crucial influences on how alcoholics choose to drink. The alcohol per se — either its availability or its actual presence in the person’s system — is not decisive.

Indeed, the alcohol per se or its ready availability seems to be irrelevant to how the alcoholic drinks. Among the most persuasive experiments demonstrating the irrelevance of alcohol to the alcoholic’s drinking are several studies in which alcoholic subjects were deceived about whether they were drinking an alcoholic or nonalcoholic beverage. Alan Marlatt and his colleagues, for example, asked a group of alcoholics to help them “taste-rate” three different brands of the same beverage. Each individual subject was installed in a private room with three large pitchers of beverage, each pitcher supposedly containing a different brand of the same beverage. Their task, of course, was phony. Unknown to them, the subjects had been assigned to one of four groups. One group was told that the beverage in the three pitchers was tonic water — which was true. But a second group was told that the beverage was a tonic-and-vodka mix — though in fact it, too, was pure tonic water. Those in the third group were told that the beverage was tonic-and-vodka — which in fact it was. Those in the fourth group were told that it was simply tonic water — whereas in fact it too was tonic-and-vodka. The subjects were left alone (actually observed through a one-way window) and allowed to “taste” the drinks at will, which they did. The total amount drunk and the rapidity of sips were secretly recorded.

The results of this study (and several similar ones) were illuminating. First, none of the alcoholic subjects drank all the beverage — even though, according to the disease theory, those who were actually drinking vodka ought to have proceeded to drink uncontrollably. Second, all of those who believed they were drinking vodka — whether they really were or had been deceived — drank more and faster. Conversely, all of those who believed they were drinking pure tonic — though some were actually drinking vodka — drank less and more slowly. The inference is unambiguous: the actual presence or absence of alcohol in the system made no difference in the drinking pattern; what the alcoholics believed was in the beverage did make a difference — in fact, all the difference.

These results fit into a more general pattern revealed by similar experiments on other aspects of alcohol-related behavior in both alcoholics and non-alcoholics: change the beliefs about the presence of alcohol (or the effect it is supposed to have), and the behavior changes. But the alcohol itself plays no measurable role.

Mark Keller, one of the early leaders of the alcoholism movement, has responded to such evidence by redefining (or as he would say, “reexplaining”) the key concept of “loss of control.” We are now told that this concept never connoted an automatically induced inability to stop drinking. Like other sophisticated advocates of the disease concept, Keller now means that one “can’t be sure.” The alcoholic who has resolved to stop drinking may or may not stand by his resolution. We are told that “loss of control” is compatible, though unpredictably, with temporary, long-term, or indefinite remission. Here medical terms such as “remission” provide a facade of scientific expertise, but the substance of what we are told is that “loss of control” is consistent with just about anything. This precludes prediction, and of course explains nothing. If it retains any empirical content at all, it amounts to a platitude: someone who for years has relied on a certain way of handling life’s stresses may resolve to change, but he or she “can’t be sure” whether that promise will be fully kept. This is reasonable. But it is not a scientific explanation of an inner process that causes drinking. Similarly, the idea that “craving” causes the alcoholic to drink uncontrollably has been tacitly modified. It was plausible in its original sense, which is still the popular understanding: an inordinately powerful, “overwhelming,” and “irresistible” desire. But the current experimental work regards “mild craving” as a form of “craving.” Of course the whole point of “craving” as an explanation of a supposed irresistible compulsion to drink is abandoned here. But the word is retained — and the public is misled.

There have been other adjustments in response to new evidence, designed to retain the “disease” terminology at whatever cost. We now read that “of course alcoholism is an illness that consists of not just one but many diseases, having different forms and causes.” We also hear — in pronouncements addressed to more knowledgeable audiences — that alcoholism is a disease with biological, psychological, social, cultural, economic, and even spiritual dimensions, all of them important. This is a startling amplification of the meaning of “disease,” to the point where it can refer to any human problem. It is an important step toward expanding the medicalization of human problems — a trend that has been deservedly criticized in recent years.

Fascinating, no? If you’re interested in the phenomenon of addiction, check out the whole article! Its other findings may surprise you.

Also, if you’ve not yet heard the 27 January 2013 discussion of the nature of addiction on Philosophy in Action Radio, you can listen to or download the podcast here:

For more details, check out the question’s archive page. The full episode — where I answered questions on the nature of addiction, unions for government employees, materialism in marriage, mandatory child support, and more — is available as a podcast too.

Note: I published a version of the above commentary in Philosophy in Action’s Newsletter a while back. Subscribe today!


What a charming interview with the creator of one of my favorite Facebook pages, I Fucking Love Science!

I love her broad interest in all things science-y — and I can very much relate to that, except that my interests center on normative domains, particularly philosophy, psychology, and literature.

Specialists are hugely valuable: the major work wouldn’t get done without them. But to spread the good work of those specialists beyond their scholarly bubbles requires advocates and champions. Those are the enthusiastic and knowledgeable people who translate awesome ideas into layman’s terms, to show regular folks just how nifty and useful and exciting and beneficial those ideas are.

That’s what I aim to do with Philosophy in Action… and it’s lovely (and useful) for me to see I Fucking Love Science as such a great exemplar in another domain.

Henri’s Christmas Carol

Posted on 24 December 2012 at 10:00 am  Animals, Cats, Existentialism, Funny, Philosophy

“Merry” Christmas, from Henri le Chat:

Yet Another Strange Email

Posted on 18 December 2012 at 2:00 pm  Funny, Philosophy

I received this email early in December. (That’s just around the end of the semester, as it happens!) I couldn’t help but laugh.

hi my name is noel from Tanzania i need philosophical material recently i am conducting proposal about the composition of body and soul are essential united; Plato and Aristotle compared thank I will be glad for your attention

Um, no.


The following comments on the validity of an evolutionary approach to nutrition are from an email that I wrote to an Objectivist philosopher skeptical of the paleo diet. (The email was sent many moons ago, and I only just found it again.) My comments stand pretty well on their own, I think, and I hope that they’ll be of interest to folks interested in thinking about paleo in a philosophical way.

I cannot point you to a single study that definitively proves the superiority of a paleo diet. For a hundred different reasons — most of which probably aren’t on your radar — such a study is not possible. (Gary Taubes and Mike Eades have written on that problem.) Nonetheless, a whole lot of smaller, more delimited studies (as well as well-established biology) support the claims made by advocates of a paleo diet. Plus, people report looking, feeling, and performing better — with improved lab values — on a paleo-type diet. Each of us has our own experiences and experiments to draw on too.

Hence, as I said in a thread on Facebook: “I think I’ve got very good grounds for saying that a paleo diet is (1) healthy for most people, (2) far superior to the diet of most Americans, (3) exceedingly delicious and satisfying, and (4) worth trying to see if you do better on it, particularly if you have certain kinds of health problems.”

I’m not claiming certainty, nor do I assume that my current diet is optimal. We have tons to learn about nutrition and health. Yet that’s hardly a reason to ignore what we do know — or to suppose that we can just keep eating however we please without experiencing pernicious consequences down the road.

Moreover, people are doing themselves harm by eating the standard American diet. In my own case, I was on my way to type 2 diabetes (based on my doctor’s blood glucose tests) and liver disease (based on a CT scan showing non-alcoholic fatty liver disease). We can’t assume that the standard American diet is a safe default just because it’s all around us — just as people shouldn’t assume that the standard American religion is a safe philosophical default.

To address your skepticism about an evolutionary approach to nutrition, let me ask you the following… Imagine that you were given a dog to care for, but you’d never seen or heard of a dog before. Would you say that the fact that dogs are very close relatives of wolves is irrelevant to the question of what you ought to feed this dog? Wouldn’t that evolutionary fact suggest that the dog needs meat, meat, and more meat — not tofu or corn or alfalfa?

That evolutionary inference certainly wouldn’t be the last word on proper diet for the dog by any stretch of the imagination. Yet that inference would help guide your inquiry into the optimal diet for the dog — and guide your feeding of him in the meantime. That evolutionary perspective would be particularly helpful if the government and its lackeys were busy promoting a slew of false views about optimal canine diet. Ultimately, it would help integrate and explain your various findings about canine nutrition, since the nature of the canine was shaped by its evolutionary history.

On this point, your comparison to evolutionary psychology is not apt. Evolutionary psychology is a cesspool. But that’s not because inferences from our evolutionary history are difficult, although that’s true. Evolutionary psychology is a cesspool because it depends heavily on some false philosophical assumptions — particularly determinism and innate ideas.

The same charges cannot be made against an evolutionary approach to nutrition. We know that every organism is adapted to eat certain kinds of foods rather than others. We know that human biology was shaped over the course of millions of years, during which time we ate certain kinds of foods but not others. That suggests the kinds of foods that we’re best adapted to eat. Moreover, we can see in skeletal remains that when people switched to other kinds of foods, particularly grains, they declined remarkably in basic measures of health. Then consider what we know about the nature of wheat, including its effects on the gut. Top that off with the positive effects people experience — improved well-being, fat loss, better lab values, less autoimmunity — when they stop eating wheat. Then you’ve got a compelling case against eating wheat.

The evolutionary perspective is not merely a useful starting point in such inquiries, to be discarded with advancements in modern science. It’s relevant causal history: it explains why we respond as we do to wheat. That enables us to integrate disparate findings about wheat (and other foods) into a unified theory of nutrition. That’s hugely important to developing nutrition as a science.


Last week, I listened to Leonard Peikoff’s podcast question on the election results. Given my strong disagreements with his October statement on the election, I wasn’t too surprised to find that I disagreed with much that he said. However, I didn’t expect to disagree with almost his whole analysis.

Here, I want to focus on two points: (1) the reasons why people voted for Obama over Romney and (2) the “catastrophe” of these election results. However, before reading my comments below, please listen to Dr. Peikoff’s statement for yourself. It’s less than five minutes long.

First: The Voters

Peikoff claims that the election shows that some American sense of life is left, but less than he thought earlier. He claims that Obama effectively bought off the country, and that something like 47% or 50% of people are only concerned with handouts from the federal government. He claims that immigrants are coming to America en masse for the sake of the welfare state, lacking any American sense of life.

Such claims cannot be substantiated. The election concerned a wide range of topics, and people voted for one candidate over the other for a wide range of reasons. Yes, some Obama voters wanted their government handouts, but I know many people who voted for him for other, better reasons. Similarly, some Romney voters wanted to impose a social conservative agenda, but I know many people who voted for him for other, better reasons. Also, we should remember that most people just barely care about politics. As a result, they’re remarkably ignorant about even the basics of political events and elections.

As I explained in this blog post, this election was not any kind of referendum on fundamental values that could magically reveal America’s sense of life. Contrary to the claims of some Objectivist intellectuals of late, a culture’s sense of life is complex, multi-faceted, and far deeper than politics. It cannot be fairly judged by yet another election between two statist candidates of slightly different flavors. Judging America’s sense of life on the basis of this presidential election is about as reliable and fair as judging a person’s sense of life based on which of the two abysmal movies he chooses to see at his small-town duplex. (For a lengthy discussion of cultural sense of life, see Ayn Rand’s comments in “Don’t Let it Go” in Philosophy: Who Needs It.)

Much of the problem, of course, is that Romney didn’t just run an “empty campaign,” as Peikoff claims. Romney wanted to initiate a trade war with China, crack down on illegal immigration, massively increase military spending (presumably for even more pointless and debasing wars abroad), force women to carry unwanted pregnancies to term, socialize medicine at the state level, and deny gays the right to marry and adopt children. Such positions are not “empty.” They are deeply wrong — and they clash with better elements of American culture, including its respect for individuals and their rights.

I do not blame ordinary voters for refusing to vote for Romney due to these abysmal positions of his. Even many Obama voters determined to preserve entitlements and subsidies were not motivated by personal greed for handouts, as Peikoff claims, but rather by a confused stew of semi-altruistic ideals. That’s bad, but it’s not the same as being bought off.

Undoubtedly, Obama will be worse than Romney would have been on many issues. Undoubtedly, Obama’s spending is dangerously out-of-control, and ObamaCare will be entrenched over the next four years. I fear another financial crisis. Yet the fact is that Romney didn’t even campaign for economic liberty. Instead, he consistently me-too’ed Obama on taxes and regulations, he supported state-level ObamaCare, and he planned to continue to spend like a drunken sailor. The result was that the two candidates didn’t look terribly different to voters, even on economics.

Second: The Catastrophe

Peikoff describes the election as a “catastrophe,” “the worst political event ever to occur in the history of this continent,” and even “worse than the Civil War.”

Let’s get some perspective. The secession of the southern states threatened the very existence of America, including the union of the northern states. The secession of the southern states, unless crushed, would have set a very dangerous precedent in which secession would become the solution to any political dispute. As James McPherson describes in his stellar history of the Civil War, Battle Cry of Freedom, the secession of the southern states inspired northern states and cities to contemplate their own secession from the union. (Bye-bye, New York City!) The result of that would have been very bloody anarchy. Lincoln knew that, and that’s why preserving the union was his primary objective.

However, preserving the union was not an easy task by any stretch of the imagination. The Confederacy might have won the war, particularly given the skill of Lee in comparison to the string of abysmal Union generals before Grant and Sherman emerged in the west. An independent Confederacy would not have been content to remain in its own territory: its longstanding agenda was to create an “empire of slavery.”

Also, the Civil War killed over 600,000 Americans. Proportionately, that would equal about six million people today. That was truly catastrophic.

The secession and Civil War constituted a grave existential threat to the United States. To say, as Peikoff does, that it was known that “freedom and normalcy” would return at the end of the war is false. Americans didn’t know who would win the war. They didn’t know what kind of government or nation they would have after the war. And they didn’t know what freedoms would or would not be respected and upheld by the government after the war. Such is only known to us now, when the historical perspective smooths away the painfully rough edges and unknowns of the past.

Another four years of President Barack Obama will be damaging, undoubtedly. (Four years of Romney would have been damaging too, just in somewhat different ways.) Yet that cannot be fairly compared with the Civil War: they’re not even remotely in the same category.

In addition to the comparison to the Civil War, Peikoff said that Obama’s re-election means that “it’s going to be four years of a government single-mindedly out to destroy America at home and weaken it abroad.” Such a dire prediction is not supported by Obama’s record or by his plans. With the House controlled by the GOP, Obama will not even have the latitude that he did in his first two years in the White House, let alone any “single-minded” government at his disposal. Moreover, when is government ever “single-minded”?

Obama is not a defender of individual rights by any stretch of the imagination. Yet, as I explained in my own post-election podcast, his views are significantly better than those of the Republicans on some important issues. Hence, Obama’s second term offers hope for strengthening abortion rights, reforming our insane immigration laws, and repealing the Defense of Marriage Act. Those would be positive developments not possible under Republicans.

Peikoff also indicated that totalitarian dictatorship was now perilously close, although “even after four years [of Obama], it is too early to achieve complete totalitarianism.”

Undoubtedly, America has its share of political problems. Many of those problems are quite serious, and most are unlikely to improve under Obama. Still, I simply cannot take secular apocalypticism seriously: the full context of facts paints a very different and far more hopeful picture of our future. Moreover, as I explained in this post, accurate political predictions are nearly impossible even for those immersed in the political news, and Peikoff’s 2004 prediction about the effects of a second Bush term is grounds for doubting his current prediction about the effects of a second Obama term.

In my view, the roots of American culture run deep — deeper than Peikoff and many other Objectivist intellectuals seem to think. On the whole, America respects the rule of law, free speech, and political dissent. It lauds achievement, technology, and hard work. It values honesty, integrity, and justice. These core values were not undone by this election, nor revealed to be illusory. They cannot and will not be undone by four more years of Obama in the White House.

America will survive Barack Obama — just as America survived George W. Bush, Bill Clinton, George H. W. Bush, Ronald Reagan, Jimmy Carter, and so on. America will survive Barack Obama — just as America would have survived Mitt Romney.

The Way Forward

Unfortunately, many Objectivists have been griping of late about how the election revealed the supposedly dismal state of the American culture. That’s unwarranted and unproductive in my view. You don’t win hard-working, responsible people over to your side by painting them as America-hating welfare queens.

American culture is far from perfect, but it’s improved tremendously in recent decades in many ways, as Dr. Eric Daniels explained in this interview on Progress in American History. Still, I recognize that free market ideas have taken a beating of late. The cause was not Obama: Obama just cashed in on the utter failure of the pragmatism and “compassionate conservatism” of George Bush and his fellow Republicans. Honestly, I’m slightly relieved that Mitt won’t be able to inflict further damage of that kind on America, as he surely would have done.

At this point, instead of bemoaning the abysmal state of American culture, advocates of free markets need to start asking themselves: “Why aren’t these ideas resonating with more Americans?” That’s a critical question to ask because many, many Americans are intelligent, thoughtful, hardworking, fair-minded, benevolent, and reasonable people, yet they don’t understand or support free markets.

I will not blame Americans for that disconnect. I want to strengthen and leverage the genuine values and virtues commonly found in Americans, whatever their political views at present. It’s my job as an intellectual to figure out how to do that well, not bemoan the supposed death of America.

Personally, my focus with Philosophy in Action Radio is finding effective ways to persuade people to embrace the principles required to live happy, healthy, and joyful lives. I want to strengthen people’s understanding and practice of justice, independence, responsibility, rationality, and other virtues in their relationships, careers, and parenting. Based on the growth of my audience (here too), I’m doing something right.

Basically, my goal is to foster people’s rationality and value-seeking — and thereby create a more rational, value-oriented culture. I don’t often gripe about the current state of politics. When I discuss politics, I much prefer to discuss the contours of a free society. I’d rather offer a positive vision of what the future can and ought to be, rather than bemoan the problems of the present.

Over the past few months, I’ve realized that promoting a free society requires more than just the usual “moral arguments for capitalism” typically offered by Objectivist intellectuals. For most people, such arguments are too far removed from their daily lives and values to even capture their attention, let alone resonate with them. That’s part of why the surge in interest in Ayn Rand hasn’t amounted to much cultural or political change, including in this election.

In my view, lasting advances in freedom require that people connect political liberty with their own deeply-held and actively-practiced positive values. First and foremost, people need to personally experience the benefits of pursuing their values on the basis of rational principles. Before they can understand and embrace rights as a principle, they need to live by reality, reason, and egoism as dominant themes in their lives. In essence, political activism can be worthwhile, but it cannot create cultural change by itself. Ultimately, I think, political change depends on cultural change, and cultural change depends on personal change.

Over the course of decades on the air, Dr. Laura — the religious-conservative advice talk show host — gradually drew that connection between practical ethics and politics for the religious right, and we’re reaping her bitter fruit today. We need to use that same method to create a culture that preaches and practices reason, egoism, and ultimately, rights.

I’m not belittling political activism. It matters, and if that’s what you want to do, that’s wonderful. My point is that lasting political change requires strengthening the basic philosophic values of the culture, at a deeper level than most Objectivists suppose.

America has time to do that, in my view. So as I work on it via Philosophy in Action Radio, I’m busy enjoying all that America has to offer, culturally and economically, thanks to the fact that we are still a fundamentally free society. That’s what I was most grateful for during this delightful Thanksgiving holiday.

Essential Versus Optional in Paleo

 Posted by on 24 November 2012 at 10:00 am  Epistemology, Food, Health, Philosophy

When I developed my list of Modern Paleo Principles in early 2010, I’d hoped to be able to sort out the essential principles from the optional tweaks. So forgoing grains would be essential to eating paleo whereas intermittent fasting would be just an optional tweak that a person might never even try. Sounds reasonable, right? Perhaps so, but the attempt was a total non-starter.

Almost as soon as I sat down to write out my list of principles, I realized that I couldn’t possibly separate them into “essential” and “optional,” except in a few clear cases. Similarly, I couldn’t rank the principles by priority except in a very rough way. Even with the core features of the diet captured in my definition — avoiding grains, sugars, and modern vegetable oils in favor of high-quality meat, fish, eggs, and vegetables — that kind of sorting just wasn’t possible.

But… why not? Why can’t we identify the essential versus optional principles of a paleo diet or rank its principles by priority? The answer is more interesting than I supposed at first. I see three major obstacles — (1) the value of health, (2) individual differences, and (3) the science of nutrition. Let’s examine each in turn.

Health Is Not Your Ultimate Value

Health is a major value, but it’s not a person’s proper ultimate value. Health is not all that matters in life.

A person’s ultimate value is (or rather, ought to be) his own life. Consequently, people can make legitimate trade-offs with respect to health, in order to serve other, higher values. For example, a paleo-eater might choose to eat restaurant salads with canola oil dressing at business lunches because that’s what best serves her career, even if that risks some harm to her health. Or a paleo-eater might enjoy the occasional “Mo’s Bacon Bar,” because the taste is just so worth the sugar hit. Such choices would be totally legitimate: optimizing health shouldn’t be treated as an out-of-context duty.

What does that mean? It means that no principle of paleo can be treated as “essential” — in the sense that if you violate it, then you’re doing wrong, you’ve fallen off the wagon, you’re no longer paleo. Paleo is not a religious dogma: it has no Ten Commandments — nor even a “thou shalt.” (That’s for the vegans!)

Instead, paleo involves a set of principles to help guide the actions that impact our health, particularly diet. However, if a person is willing to pay the price for deviating occasionally from those principles — if that’s not a sacrifice for him but an enhancement of his life — then he ought to deviate. That’s the rational approach.

Your Health Depends on Individual Context

People are not merely fodder for the aggregate statistics of epidemiologists. They are individuals — and each person’s particular background, constitution, and circumstances matter to his choices about diet.

For example, one paleo-eater might be diabetic, another hypothyroid, and another in perfect health. One person might be disposed to heart disease, whereas another would be more likely to suffer from cancer or stroke. One person might suffer terrible effects from eating wheat, whereas sugar might be the downfall of another. A paleo-eater might be able to find a source of grass-fed beef that matches his budget — or not. A person might have 200 pounds of fat to lose — or 20 pounds of muscle to gain. One person might look, feel, and perform better eating starchy tubers while another does better avoiding them. One person might need to work hard to eliminate the soy from his diet, whereas another has none to remove. One person might live with a supportive spouse, while another lives with a hostile vegan roommate. One person might prepare all his meals at home, while another must eat in restaurants, while another must eat in the college dorm.

In short, people’s backgrounds, constitutions, and circumstances are often hugely different in ways that will affect what they can and should eat. People will implement a paleo diet in very different ways, based on those differences. To claim, as a universal generalization, that certain paleo principles are essential while others are merely optional would be to run roughshod over those individual differences. Instead, each person needs to discover what’s more essential versus more optional for him. Each person needs to focus on his own life and values. The experiences of others are often useful guides or hints, but they don’t determine what’s essential versus optional for you.

The Science of Nutrition Is in Its Infancy

Ideally, with further development of science, we might be able to identify certain universal mid-level principles, such as “avoid foods that irritate your gut” or “avoid foods that promote the formation of small LDL.” Then people could focus on those principles, rather than adapting the particular recommendations of paleo to their own circumstances. Those kinds of integrations would be useful, undoubtedly, but I see at least three problems with aiming for that.

First, the science of nutrition is not as advanced or definitive as we might like, except on a few issues. I’m routinely amazed by how much we still have left to learn — on the value of tubers, on the different kinds of fats, on carbohydrate sources, and so on. So right now, we’re not in a position to clearly define and defend such mid-level principles. The science needs to be more settled for that.

Second, such mid-level principles wouldn’t be particularly helpful for guiding a person’s everyday choices about what to eat — unless he already knew, for example, what irritates guts in general and his gut in particular. So even if armed with a slew of solid mid-level principles, a person would still need to discover how to implement those principles well in his choices of what to eat for breakfast, lunch, and dinner.

Third, even if all that were known, individuals would still vary in their responses to foods, and they’d have to determine much of their own optimal diet by their own n=1 experiments. For example, people respond very differently to gluten. Personally, even small quantities of gluten give me migraines, but no digestive upset. Others have a different response — or no response at all.

One important conclusion from these reflections on the value of health, individual differences, and the science of nutrition is that even though the various paleo diets have a common core, the principles of paleo cannot be designated “essential” versus “optional” nor ranked in order of importance.

Of course, we can define a paleo diet, because it means something definite. We can also identify the general principles of a paleo approach to health; that’s what I hope that I’ve done with the Modern Paleo Principles. That’s crucial for doing paleo well, I think.

Yet to think of some of these principles as universally “essential” versus universally “optional” would be a mistake. Instead, they should stand in our minds as “more or less important for me.”

Of course, as an advocate of paleo, I’m interested to know what’s more or less important for most people or for people with certain medical conditions. Still, the individual’s mileage will always vary.

Also, a person often requires a few weeks or even months to learn how to implement the basic principles of paleo well in his own life, then even longer to tweak and optimize. For people really concerned to eat well — and to be fully healthy — that can be well worth the trouble!

Even within the broad range of paleo, we cannot hope to find a “one-size-fits-all” diet, except in the very broadest of strokes.

Jul 13 2012

From Great ‘Hello’ Mystery Is Solved:

Alexander Graham Bell invented the telephone. But Thomas Alva Edison coined the greeting. The word “hello,” it appears, came straight from the fertile brain of the wizard of Menlo Park, N.J., who concocted the sonorous syllables to resolve one of the first crises of techno-etiquette: What do you say to start a telephone conversation?

Here’s the interesting part, for me:

Like the telephone, the punchy “hello” was a liberator and a social leveler. “The phone overnight cut right through the 19th-century etiquette that you don’t speak to anyone unless you’ve been introduced,” Mr. Koenigsberg said. And “hello” was the edge of the blade.

Thank goodness for that! I abhor any and all forms of artificial social hierarchy. A person whom you don’t know as an individual should be treated with the same respect and consideration as anyone else — whether rich or poor, young or old, male or female, of respectable family or not, and so on. A person is not entitled to deference just because he was born into wealth or title. A person’s preferences should not be dismissed just because she’s young. A person is not of dubious character just because he’s poor. Moreover, a person’s wrong behavior should not be excused or indulged because of his talents or accomplishments, even if considerable.

Now, unless you read a good chunk of 18th and 19th century literature, you might not be familiar with just how much social leveling has happened in the 20th century. The rules of introduction used to be extremely strict. For example, consider this passage from Jane Austen’s Pride and Prejudice, where the ridiculous mess of pomposity and idiocy known as Mr. Collins introduces himself to Mr. Darcy.

“I have found out,” said [Mr. Collins], “by a singular accident, that there is now in the room a near relation of my patroness. I happened to overhear the gentleman himself mentioning to the young lady who does the honours of the house the names of his cousin Miss de Bourgh, and of her mother Lady Catherine. How wonderfully these sort of things occur! Who would have thought of my meeting with, perhaps, a nephew of Lady Catherine de Bourgh in this assembly! I am most thankful that the discovery is made in time for me to pay my respects to him, which I am now going to do, and trust he will excuse my not having done it before. My total ignorance of the connection must plead my apology.”

“You are not going to introduce yourself to Mr. Darcy!” [said Elizabeth Bennet.]

“Indeed I am. I shall entreat his pardon for not having done it earlier. I believe him to be Lady Catherine’s nephew. It will be in my power to assure him that her ladyship was quite well yesterday se’nnight.”

Elizabeth tried hard to dissuade him from such a scheme, assuring him that Mr. Darcy would consider his addressing him without introduction as an impertinent freedom, rather than a compliment to his aunt; that it was not in the least necessary there should be any notice on either side; and that if it were, it must belong to Mr. Darcy, the superior in consequence, to begin the acquaintance. Mr. Collins listened to her with the determined air of following his own inclination, and, when she ceased speaking, replied thus:

“My dear Miss Elizabeth, I have the highest opinion in the world in your excellent judgement in all matters within the scope of your understanding; but permit me to say, that there must be a wide difference between the established forms of ceremony amongst the laity, and those which regulate the clergy; for, give me leave to observe that I consider the clerical office as equal in point of dignity with the highest rank in the kingdom—provided that a proper humility of behaviour is at the same time maintained. You must therefore allow me to follow the dictates of my conscience on this occasion, which leads me to perform what I look on as a point of duty. Pardon me for neglecting to profit by your advice, which on every other subject shall be my constant guide, though in the case before us I consider myself more fitted by education and habitual study to decide on what is right than a young lady like yourself.” And with a low bow he left her to attack Mr. Darcy, whose reception of his advances she eagerly watched, and whose astonishment at being so addressed was very evident. Her cousin prefaced his speech with a solemn bow and though she could not hear a word of it, she felt as if hearing it all, and saw in the motion of his lips the words “apology,” “Hunsford,” and “Lady Catherine de Bourgh.” It vexed her to see him expose himself to such a man. Mr. Darcy was eyeing him with unrestrained wonder, and when at last Mr. Collins allowed him time to speak, replied with an air of distant civility. Mr. Collins, however, was not discouraged from speaking again, and Mr. Darcy’s contempt seemed abundantly increasing with the length of his second speech, and at the end of it he only made him a slight bow, and moved another way. Mr. Collins then returned to Elizabeth.

“I have no reason, I assure you,” said he, “to be dissatisfied with my reception. Mr. Darcy seemed much pleased with the attention. He answered me with the utmost civility, and even paid me the compliment of saying that he was so well convinced of Lady Catherine’s discernment as to be certain she could never bestow a favour unworthily. It was really a very handsome thought. Upon the whole, I am much pleased with him.”

As much as I love Jane Austen, I’m so glad not to live in her world. If I did, I’d surely be some family’s troublesome and impertinent servant, taking liberties left and right!

I’m even glad that our culture has become less formal in the last 30 years, such that people are immediately on a first-name basis. Really, I just couldn’t imagine calling Paul “Dr. Hsieh,” as we see in Jane Austen novels. (Then again, my pet name for Paul is “Mr. Woo,” which is partly a joke on the conventions of Jane Austen’s time!)

In sum, thank you, dear telephone!

Oh, and one final thought: This seems to be a clear case in which a major cultural shift was instigated by technology, rather than by any intellectual or philosophical changes. Of course, the culture had to be ripe for the change — and America’s ethos of the self-made individual is far more compatible with an etiquette of social equality than with an etiquette of social hierarchy. Nonetheless, the technology made a huge difference — and the internet and social media are pushing us even further in that direction. (Yay!)

In essence, cultural change requires and involves philosophy, but that doesn’t mean that philosophy — let alone philosophy departments — will be the catalyst.
