Common Sense

This page is a sub-page of our page on Disagreement Management.


Related KMR-pages:




Reclaiming common sense: Finding truth in a post-truth world,
by Robert Curry, 2019. Book review by John Tamny, September 24, 2019.

Sensemaking – What Makes Human Intelligence Essential in the Age of the Algorithm,
by Christian Madsbjerg, ABACUS, 2017.


Other relevant sources of information:

• …


/////// Quoting (Curry, 2019, p. 23):

The war on common sense

When Thomas Paine appealed to “common sense” to make the case for American independence, it probably never crossed his mind that there would ever be a need to make the case for common sense itself, at least not in America. But common-sense thinking has fallen out of favor. Because it has been under attack for a very long time, it no longer gets the respect it once commanded. Deep thinkers have discarded it, elites have learned to disdain it, and many of us have had our confidence in its value badly shaken.

The consequences are enormous. Faltering belief in common sense is behind the rejection of the Founders’ idea of America. More broadly, it is behind the astonishing rejection of Western civilization by its own people – a rejection that has reached what looks to be a civilization-ending crisis in Europe.

Examples of the war on common sense are now everywhere in public life. How about the denial of the plain fact that humans are either male or female? Not long ago, a boy in a tutu and a tiara who claimed he was a girl would still be regarded as a boy. Today, academic and cultural elites as well as government officials insist that “gender identity” is more real than biology. They say there are many genders, and one website tells me there are sixty-three. Elites tell us we had better get with the many-gender program, or else. And while we are at it, we had better get politically correct about marriage. We are told that marriage no longer means one thing, a union between a man and a woman. How long will it be until we have sixty-three varieties of marriage?

The war on moral common sense has reached new heights of absurdity. If we point out a need for common-sense steps to protect ourselves from Islamic terrorists, we are said to suffer from a psychological condition called “Islamophobia.” But unlike other phobias, such as claustrophobia, this condition is said to make us victimizers rather than victims. Similarly, if we say that America needs to secure its borders, we are met by cries that “walls are immoral.” Evidently the common-sense wisdom that good walls make good neighbors has been taken down by the masters of political correctness.

Political correctness is quite simply a war on common sense. It is a war by the elites on the common people and on the shared understanding of basic realities of life that has made it possible for us to rule ourselves under the Constitution. Once this common-sense understanding of reality has been vanquished, it is “mission accomplished” for the Americans who reject America.

The Founders realized that there might eventually come a time when their achievement was no longer understood – when perhaps one generation simply failed to pass the understanding on to the next. But the possibility that common sense could be abandoned to the extent it is today would most likely have been beyond their imagining. And no wonder, for a great deal of effort has gone into assailing it. Proponents of irrationalist doctrines that came on in wave after wave beginning in the nineteenth century – romanticism, Hegelianism, Marxism, progressivism, existentialism, postmodernism, and the like – have been pounding away at common sense for a very long time.

Because we no longer learn about the philosophy of common-sense realism that once marked the education of America’s elite, let’s begin with how Arthur Herman defines it:

The power of common judgment belongs to everyone, rich or poor, educated or uneducated; indeed, we exercise it every day in hundreds of ways. Of course, ordinary people make mistakes – but so do philosophers. And sometimes they cannot prove what they believe is true, but many philosophers have the same problem. On some things, however, like the existence of the real world and basic moral truths, they know they don’t have to prove it. These things are … self-evident, meaning they are “no sooner understood than they are believed” because they “carry the light of truth itself.”

The core idea of common-sense realism is that there are self-evident truths which do not need to be proved. Self-evident truths are the foundation of human understanding; they are the necessary basis for knowing anything at all. We know self-evident truths by means of our common sense.

Does that phrase “self-evident truths” ring a bell? Yes, it echoes the familiar line from the Declaration of Independence. When Jefferson and the other Founders wrote “We hold these truths to be self-evident,” they revealed their reliance on the philosophy of common-sense realism.

/////// End of quote from (Curry, 2019)

The duality between the Declarations of Dependence and Independence

The Declarations of Dependence specialize into three different types:

• The Declarations of Intradependence
• The Declarations of Interdependence
• The Declarations of Supradependence

Intradependence = dependence within me
Interdependence = dependence between us
Supradependence = dependence above me

More on these concepts can be found at our page on Social Algebra.


/////// Quoting (Curry, 2019, p. 29):

The Philosopher of Common Sense

If there are certain principles, as I think there are, which the constitution of our nature leads us to believe, and which we are under a necessity to take for granted in the common concerns of life, without being able to give a reason for them; these are what we call the principles of common sense; and what is manifestly contrary to them, is what we call absurd.

An Inquiry into the Human Mind
on the Principles of Common Sense

A brief visit with the founding father of the philosophy of common sense will help us to get our bearings as we explore the fascinating subject of common sense.

Thomas Reid was Professor of Moral Philosophy at the University of Glasgow, and he referred to his philosophical method as “common sense realism.” He published his great work, An Inquiry into the Human Mind on the Principles of Common Sense in 1764. It is difficult to overstate his importance to the American founding. As we have seen, Arthur Herman emphasizes this point, writing that “Common Sense Realism was virtually the official creed of the American Republic.”

Reid’s philosophical purpose was to provide a foundation for morality and for knowledge. He argued that there is an endowment of human nature that makes both morality and knowledge possible, and he called it common sense. Because we possess it, we human beings can master speech, geometry, ship building, and a host of other skills that are unique to humankind. It enables us to serve on a jury and weigh the evidence presented in a trial. With it we are able to make rational judgments and moral judgments. Common sense is the human attribute that makes it possible for us to be rational creatures and moral agents.

Reid’s fundamental insight was that our ability to make sense of our experience presupposes certain first principles. Because these principles are implicit in our conduct and our thought, they cannot be proved; there are no other truths from which they can be derived. However, to deny or even to doubt any of them is to involve ourselves in absurdity.

Consequently, the principles of common sense have the special authority of first principles: we cannot operate without them. They are inarguably true and “self-evident.” In Reid’s words (which have been quoted by Herman), common-sense judgments are “no sooner understood than they are believed,” because they “carry the light of truth itself.”

Whatever is contrary to the principles of common sense is, in Reid’s term, “absurd.” Let’s examine that word. According to Roget’s Thesaurus, its synonyms are “ridiculous, ludicrous, laughable, silly, loony, stupid, crazy, nonsense.” No doubt “nonsense” is the one that best captures Reid’s meaning. To reject the self-evident principles of common sense, then, is to indulge in nonsense.

As you know, the American Founders claimed they were guided by self-evident truths. They relied on self-evident truths because their deliberations were deeply informed by the thinking of Thomas Reid. And Reid continued to be at the center of American thought for more than a century. Generations of American academics were common-sense realists, and as noted earlier, Allen Guelzo points out that until the Civil War, every major American collegiate intellectual was a common-sense realist.

Reid is all but forgotten in America today. He is routinely missing from survey courses in philosophy and even from courses in American thought. You may very well have never heard of him.

/////// End of quote from (Curry, 2019)

/////// Quoting (Curry, 2019, p. 49):

Doing Science

Any physical theory is always provisional, in the sense that it is only a hypothesis: you can never prove it. No matter how many times the results of experiments agree with some theory, you can never be sure that the next time the result will not contradict the theory.

A Brief History of Time

You hear people speaking of “scientific proof,” don’t you? That phrase is used only by people who do not understand science. When someone – even a scientist! – tells you that science has proved something, you can be certain that “science” has not done so. Gregory Bateson makes the point emphatically in Mind and Nature (with capitals and italics): “SCIENCE NEVER PROVES ANYTHING. Science sometimes improves hypotheses and sometimes disproves them. But proof would be another matter.” He then puts it concisely. “Science probes, it does not prove.”

While mathematics and logic work by proofs, scientists don’t deal in proofs. “The work of the scientist consists in putting forward and testing theories,” as Karl Popper explained. Scientists test theories with experiments. All scientific theories are provisional, and may be contradicted by new evidence at any time. That is the significance of the word “theory.”

Newton’s theory of gravitational motion once defined an epoch. Newton unified the preceding theories of Galileo and of Kepler by correcting them and also explaining them on a higher level of universality. This was an awesome achievement. For over two centuries, its explanatory powers made it seem invincible. But it was eventually superseded by Einstein’s theory of relativity. There are scientists right now working on problems in physics generated by Einstein’s awesome theory. As robust and well-tested as it is, no doubt some new theory will someday do to Einstein’s theory what Einstein did to Newton’s.

Both Newton’s and Einstein’s theories are testable. A scientific theory must necessarily be stated in such a way as to be testable, and there must be a possibility of demonstrating that it is erroneous. In other words, it must be falsifiable.

Einstein was perfectly clear about this. He insisted that his equations had to be tested experimentally, and he offered three tests for this purpose. The most famous concerned his prediction of precisely how much a ray of light would be bent by passing close to the sun during a solar eclipse. He said it would be 1.745 seconds of arc. But a verification of that prediction was not enough for Einstein. He insisted that all three tests needed to be met. Concerning the third experiment, he wrote: “If it were proved that this effect does not exist in nature, then the whole theory would have to be abandoned.” Keep in mind that those experiments could not prove the theory to be true; they could only provide corroboration by eliminating specific ways it might be in error.

Although Einstein in no way suggested that “truth is relative,” his place in the history of science illustrates the provisional nature of scientific findings. Karl Popper elaborates on this point:

Scientific results are “relative” (if this term is to be used at all) only in so far as they are the results of a certain stage of scientific development and liable to be superseded in the course of scientific progress. But this does not mean that truth is “relative.” If an assertion is true, it is true forever. It only means that most scientific results have the character of hypotheses, i.e. statements … which are … liable to revision at any time.

/////// End of quote from (Curry, 2019)

/////// Quoting (Curry, 2019, p. 87):


It seems to me to be the greatest single shift in the consciousness of the West that has occurred, and all the other shifts that have occurred in the course of the nineteenth and twentieth centuries appear to me in comparison less important, and at any rate deeply influenced by it.

The Roots of Romanticism

Isaiah Berlin identified a transformation of the European consciousness without parallel in the years between 1760 and 1830, when the era of the Enlightenment was giving way to the era of romanticism. The romantics, he wrote, discarded “the tradition that there is a nature of things which must be learnt, which must be understood, which must be known, and to which people must adjust themselves.” They rebelled against the traditional understanding that there is an objective pattern to the world and the cosmos. That tradition, as C.S. Lewis observed in A Preface to Paradise Lost, was “never, in essence, assailed until rebellion and pride came in the romantic era, to be admired for their own sake.”

In Berlin’s description, the people who initiated and promoted the romantic movement “were not primarily interested in knowledge or in the advance of science.” The movement was driven by “a great turning towards emotionalism” and by “an outbreak of craving for the infinite.” The romantics admired “wild genius, outlaw, heroes, aestheticism, self-destruction.” Therefore, “common sense, moderation, was very far from their thoughts.”

Emotionalism is a doctrine in both aesthetics and ethics, making emotions the measure of value in art and in conduct. Emotional intensity was the romantic guiding star.

Emotional intensity characterizes the dream experience, as we have seen. The dream world lacks the “objective pattern” we know from the common-sense world of waking experience. By embracing emotional intensity and rejecting the belief in an objective pattern to the world, the romantics were making dreams, not reality, their touchstone – an astonishing development!

This gives credence to Isaiah Berlin’s bold claim that romanticism represented “the greatest single shift in the consciousness of the West that has occurred.”


Berlin reels off some examples from the vast range of characteristics that were said to define romanticism:

Romanticism is the primitive, the untutored, it is youth, the exuberant sense of life of the natural man, but it is also pallor, fever, disease, decadence, the maladie du siècle, La Belle Dame Sans Merci, the Dance of Death, indeed Death itself … It is the strange, the exotic, the grotesque, the mysterious, the supernatural, ruins, moonlight, enchanted castles, hunting horns, elves, giants, griffins, falling water, the old mill on the Floss, darkness and the powers of darkness, phantoms, vampires, nameless terror, the irrational, the unutterable.

Berlin goes on at length in this vein, emphasizing the problems raised by the contradictions among these various examples of romanticism.

If we consider the list from the standpoint of the dream world, however, the problem disappears. After all, a scholar of dreams would not try to define the dream by accumulating many examples of dreams and searching for what they have in common. That would be absurd. The content of dreams would not reward such an approach. Rather, the dream world might best be defined in terms of what it lacks: an objective order. Similarly, romanticism is marked primarily by its turning away from objectivity.

“It was the Romantics,” writes Bruce Fleming, “those grandfathers and -mothers of the modern world, who first rejected the notion of the representational window as [a] means out of the room of the self.” They exalted subjectivity instead. In the new way of thinking that followed, “All inquiry, all thought, all academic enterprises – even science! only there, of course, the scientists demurred, but what did they know? – were subjective, or at least we could never achieve an objective world even if it turned out to exist.”

The romantics believed they were rejecting the rationalism of the Enlightenment. They were, but they actually went much further. They rejected not only the continental Enlightenment with its emphasis on reason, but also the Scottish Enlightenment with its emphasis on common sense – the branch of the Enlightenment that greatly influenced the American Founders. Perhaps carried away by their reliance on emotionalism, the romantics broke with the idea that there is an objective, knowable pattern to the world.

By discarding objectivity and common sense, they opened the floodgates to waves of irrationalist doctrines: Hegelianism, Marxism, progressivism, existentialism, postmodernism, and all the rest.

/////// End of quote from (Curry, 2019)

/////// Quoting (Madsbjerg, 2017, p. 195):

Meaningful Differences

Great philosophers like Martin Heidegger, Albert Borgmann, and Hubert Dreyfus argue that the kind of skill exhibited in the mastery displayed by Heen, Vestager, Voss, and Corison is the navigational skill of finding a perspective. At the heart of this navigation is a phenomenon they call meaningful differences.

To help understand this concept, first try to imagine a world without meaningful differences. This is the world of nihilism I described in corporate cultures that have lost their way. When we experience the world as lacking in meaningful differences, everything and everyone is merely a resource to be optimized. Resources are so fungible, they can be used to any end. Cathy Corison’s grapes can be interchanged with steel from Margrethe Vestager’s Italian factory. The extension of this understanding is that humans themselves are resources, hence the term human resources.

In perhaps his best essay, “The Question Concerning Technology,” from 1954, Martin Heidegger describes this modern ideology, our world without meaningful differences. He cites technology as our modern way of “Being,” our lens on the world. But his use of the word technology has very little to do with devices or other technical inventions. Technology – or “technicity,” as he calls it – is the logic that pervades our entire existence.

Whereas Romans, or pre-modern societies, saw God’s work in everything, and the Enlightenment thinkers felt that we humans were rulers of the universe, Heidegger argues that technology, today, is now at the center of our being. Not only has it replaced the gods, it has also replaced us.

In Heidegger’s world, the spirit or logic of technology is “optimization”: the relentless pursuit of squeezing every bit of value out of physical matter around us – including trees, water, and even people. Two hundred years ago, a carpenter would look at a piece of wood and work with it to make the finest creation possible – a door handle, say – given the grain and texture of that particular piece of wood. Today, however, we optimize wood by making all wood into wood pulp and reassembling it again as standardized, non-unique, and perfectly flexible “wood.” In Heidegger’s mind, this is the invisible structure that is ordering the reality of our world today: we standardize, optimize, and make things available and flexible.

In a Silicon Valley state of mind, we experience the malaise that Heidegger characterizes across all aspects of our daily lives. Everything is available; everything is equivalent to everything else: every day, hour, and second are the same as every other day, hour, and second. We are no different from all the other cogs and widgets being shipped here and there in the transportation system.

Our school system is built to create flexible accountants that are interchangeable and available for use and optimization. Companies and governments can take in new people or throw them out again with great ease because everyone is trained in the same methods by the same institutions. Technology is what makes our existence so flexible, but it is also what makes it so easy to manipulate and then to dispose of. That is progress. Or is it?

Masters like Heen, Vestager, Voss and Corison have a very special role to play in this age of technology. Their acts of phronesis – engaging in their worlds with a knowledge and experience that is necessarily context-dependent – work to dispel the modern malaise. They are not fungible resources in the global system of goods. Instead, they respond to the call of their worlds.

Hubert Dreyfus, professor of philosophy at the University of California, Berkeley, further explained the unique role of these kinds of masters to me: “When we finally understand mastery and this calling of the world, then we understand the source of meaning in our lives isn’t in us, as the Cartesian model suggests – it is in being in the world. When people are doing a skillful thing in the world, they lose themselves. The distinction between the master and the world disappears. Seeing what masters can do, and what we can do, we can bring out what is best in ourselves.”

With their craft, these masters give us a glimpse of what it is to transcend ourselves. But this type of journey takes courage. “Risk is absolutely important in acquiring any skills,” Dreyfus told me, “because you have to leave the rules behind and leave behind what one generally does and stretch out into your own experience of the world. What distinguishes between risks we’re interested in and mere bravado is whether the risks are taken in the interest of what one is committed to, what one has defined themselves in terms of, and what makes the meaningful differences in their life. That kind of risk is a necessary step in becoming a master of anything.”

In other words, you will never see meaningful differences unless you actually give a damn. Do you?

/////// End of quote from (Madsbjerg, 2017)

/////// Quoting (Madsbjerg, 2017, p. 199):

The Future of Caring

A leading global provider of health care equipment and systems was trying to understand the future of long-term care for the elderly, especially given the aging populations in countries as varied as Japan, France, Canada, and the United States. We partnered with them for a sensemaking study that involved interviews and observations with more than 450 people across thirty-three different institutions in these four countries. The study included retirement homes, dementia homes, day care, and assisted living, and the goal was to form a perspective on where long-term care was headed and how the patient and caregiver experience was changing.

Until recently, the model for long-term care was following the curve of so many other rapidly growing industries. Financial pressures and the increasing elderly burden were – and still are – creating a strong demand for efficiency. Caregivers and their institutions felt a need to put more focus on the parts of the care that were easily measured and that provided an obvious return on investment – including a high resident-to-caregiver ratio, a low number of falls, and a low occurrence of pressure ulcers.

In this way, caregivers and institutions started to view the residents primarily through their bodily needs – bathing, toileting, moving in and out of bed – and they standardized their care to be as efficient as possible. As one caregiver in the study put it, “I don’t want to say we are robots, but we do have a job to do … there simply isn’t time getting to know them. You don’t know their history.”

The drive for efficiency in long-term care resembles the trajectory we know so well from the corporate farming system and its reliance on monoculture, or the education system and its drive for metrics, accountability, and standardized testing. This is the culmination of our modern era of management science: a highly optimized system that measures patients by quantitative accounts.

But a pattern emerged in the study that provides a new vision for long-term care. It is a development that has much to teach us about our attitudes toward abstracted knowledge in the form of measurements and returns on investment. Ultimately, it offers us a clearer understanding of what people are for.

Randall and the 3PM Solution

It is the changing of the shift at an assisted living facility in California and “Randall”, an 87-year-old resident, is beginning to grow agitated. This is a regular pattern with Randall, who suffers from dementia. The shift change – which occurs at 3PM every day – is filled with the commotion of moving bodies, new faces, the whirl of change. It triggers something from deep inside Randall and he often starts acting out, engaging in angry conversations with his own hallucinations.

“Barbara,” one of his caregivers, ushers Randall into the dining room and slowly, methodically, clears the rest of the residents out of the space. Randall starts to wander through the room, tipping tables and moving chairs. He grabs Barbara’s arm and grips it with intensity for several seconds. She moves her arm away with a nonchalant comment, “I don’t bend that way, dear.” And then she tries to distract him: “Look at the light coming in the windows, Randall. Look at how bright it is out there.”

All the while she performs a subtle dance with the other residents. She gives some of them a tap, waking them up if they have fallen asleep at the table, offering them support while they get out of chairs and get out of the room. She glides seamlessly around residents in wheelchairs and walkers, pushing them one by one into the hallway so Randall can be isolated. When the room is completely empty except for Randall, she shuts the doors and stays to observe him, making certain he is safe by watching through the glass.

“We put him in the dining room so he can get his energy out,” she told researchers in the study. “This space has more room for him and better light when he is feeling agitated. When the other residents see Randall in there, they know they don’t want to be part of it.”

This type of distress is typical for residents with dementia, and it is a game changer in the world of long-term care. The Organization for Economic Co-operation and Development (OECD) projects that between 2015 and 2035, dementia in the United States will increase by 55.6 percent. In Canada, they estimate an increase of 63 percent, and in Japan an increase of 74 percent.

The agitation that typifies some dementia patients is so time-consuming for everyone involved that the entire model of efficient care is being turned upside down. Facilities need to focus on how to keep the mood of agitation from spreading and how to minimize friction, both inside and outside of care tasks. Easily quantified measurements like occurrence of pressure ulcers have no bearing on the care a resident like Randall needs. What matters much more are the personalized strategies – artful hacks – that his caregivers perform every day.

This kind of care requires that the team of caregivers get to know Randall better, both his previous life and how he experiences being in the institution. For example, Barbara discovered that he was a schoolteacher for several decades. His reaction to the 3PM shift change was not random. His muscle memory responded to the hours of the school day – the energy of the 3PM release, with kids in the hallways and movement away from the classroom and toward home. When the shift change occurs at the facility, Randall feels confused and frustrated that he doesn’t recognize his surroundings. He senses that something is going on but doesn’t know what he is supposed to do.

His caregiver team put these pieces together by placing his behavior in a context with his life outside of the institution. Barbara calmed Randall down by addressing him as “Mr. Johnsson,” the name his students used in class. In anticipation of the chaos of 3PM, she and the rest of the staff began distracting him with songs and stories while the new caregivers signed in.

“We’ve got it down to a kind of science,” Barbara told the researchers. “Sometimes he just needs a change so we know when to switch people. Sometimes he responds really well to a softer voice that is almost like a whisper. But sometimes he needs you to respond in exactly the same tone as his. You kind of have to feel him out.”

And when none of the artful hacks manage to calm him, Randall’s team has devised a protocol. They guide him gently into the dining hall and isolate him briefly so he can “get his energy out.”

Randall’s caregiving strategies are written up on whiteboards and on typed sheets of paper and passed back and forth among his caregiving team: “Randall was a woodworker. Offer him Montessori blocks,” and “Randall was a guidance counselor. Ask him about his work.”

There is no way to codify this type of knowledge, because it is only relevant for the care of one person: Randall. If his institution attempted to apply management science to it – to “scale up” this knowledge – they would find it difficult to apply. The best care involves getting to know each patient within social context and then devising a series of strategies that best suit their individual needs.

This personalized caregiving may seem like an elaborate and costly strategy, but, in fact, Randall’s facility has found that it is the most efficient way to deal with Randall and his dementia. With the right combination of hacks, cues, and distractions, they can get Randall bathed, fed, and calmed much faster than if they expediently checked off their list of bodily needs and ignored Randall’s growing discomfort and specific triggers. And, it goes without saying, personalized caregiving aligns more fully with the vision of the caregivers. There is less burnout and stress and greater sense of purpose for the staff when they are encouraged to get to know each patient as a person.

You might look at these examples and say, well sure, we all want good care, but it’s too expensive to maintain. What we are discovering, however, is that the real expense is care metered out only through management science and codified knowledge. As it turns out, good-quality personalized care is cheaper care when it comes to dementia. The cost-effectiveness is a direct result of its efficiency. According to caregivers and their management in every country we studied, this more personalized care of dementia patients saves time in the day. When the entire culture inside the facility is calmer and more peaceful – when friction is averted – fewer patients suffer from falls and there are fewer pressure ulcers. The entire system works better.

“The focus is shifting more from the task to the person,” one administrator in the study told us. “If you can create relationships with the people you’re caring for, it’s going to be better for them. They’re probably going to have fewer behaviors, more quality of life and peace. This then makes our work easier and quicker.”

Such a shift necessitates a fundamental change in our assumptions about time and cost. This “new efficiency” in dementia care is entirely local and contextualized: it cannot be abstracted and scaled. Because there is no other Randall in the world, there is no standardized solution that can account for the behavior of Randall. Caregivers today rely on analog, “homemade” ways of sharing resident-specific knowledge – tips and tricks – with each other to make their work easier: notes on paper or whiteboards, or simply conversation. More can be done to make their experience-based, resident-specific knowledge readily available to them when it matters the most, often in care situations where residents easily become scared or agitated.

The exciting potential of new technology is not that it makes standardized procedures quicker but that it can help support personalized care. In other words, caregivers need support knowing exactly which “buttons to push” with each resident and adapting the care accordingly – rather than support pushing the same buttons more and more quickly.

In many situations – certainly not all, but many – human intelligence is still the most efficient intelligence for addressing contextual challenges. It is an efficiency based not on scalable knowledge but on a profound understanding of the particular.

Breaking the Spell

At age 82, Wendell Berry is an American treasure. He has been farming the same plot of land in Henry County, Kentucky, for several decades as well as teaching at his alma mater, the University of Kentucky, and has written over forty works of fiction, nonfiction, and poetry. In the 1980s, from his porch on his farm, Berry had a front-row seat to the transforming landscape of American agriculture. He wrote a prescient essay in 1985 entitled, “What Are People For?” that serves as a perfect coda to our sensemaking journey.

In the essay, Berry tracks the development of increasing urbanization and the hollowing out of American rural life and community. He calls attention to the name economists give to the masses of people that used to work in the farmlands: the “permanently unemployable.” According to the agricultural economists, these were the least efficient producers: “[T]oday, with hundreds of farm families losing their farms every day, the economists are still saying, as they have said all along, that these people deserve to fail … and the rest of us are better off for their failure.”

The knowledge these people held and the work that they did have now been eradicated by various combinations of machinery and chemicals. Some called this displacement a triumph of agricultural science, but Berry wonders what is going to happen to all those people now deemed useless. “Is the obsolescence of human beings now our social goal?” he asks.


There is a vast wealth of knowledge contained in the small but significant gestures that occur in our worlds every single day. And we dismiss that knowledge at great risk to our future well-being, productivity, safety, and the nourishment of our own human spirits. When I echo Wendell Berry in asking “What are people for?,” I am not suggesting that we do away with algorithms and machine learning.

This is not a nostalgic call to return to the ways of the past, nor is it an attempt to hide away on a technology-free island. When I ask “What are people for?,” I am not asking an either/or question. Instead, I am reminding us that a culture frozen under the spell of the hard sciences is not much of a culture at all. When we revere technology and its solutions above all else, we stop seeing the agility and nuance that characterize human intelligence at its best. By putting technology above us, we stop synthesizing data from other sources. We miss out on a sustainable efficiency that comes from holistic thinking, not from optimization.

Most important for sensemaking, when I echo Wendell Berry’s question, I ask why, in the Western world – and in America in particular – the idea of engaging in cultural inquiry has become a notion of needless luxury. Why are art, poetry, and music something we only dabble in as a hobby on weekends? Why is watching a play or going to a concert a privilege for snotty people, and reading a novel a waste of time? Art, we assume, is only relevant for the lucky few. “What’s the impact to the bottom line?,” people ask. There are no billable hours from time spent on stories or songs. Serious poetry and rigorous theory are discussed only by ladies who lunch. Snuggling up with a novel is my time, not productive time.

And yet, the answer to the question is clear: “What are people for?” People are for making and interpreting meaning. And the realm of humanities is an ideal training ground for such an endeavor. It offers us more than two thousand years of material as our playground. Yes – of course – works in the humanities exist to bring us delight and pleasure. But they are also helpful, practical tools for dealing with the core questions of strategy that exist in any culture or organization: how to understand other worlds, customs, meanings, and competitive markets. These skills – at the heart of sensemaking – are the very skills that can never be outsourced. Machine learning will never come close to achieving insights into them. This is because they require a perspective, and algorithms simply have no point of view.


Let go of the shallow dogma of business schools and the promise of universal principles from the natural sciences. The humanities aren’t a luxury; they are your competitive advantage.

So before you ridicule your daughter for wanting to study Confucian philosophy or look down on the people who choose to major in medieval French poetry, remember that you are quite likely working for such a person. It wouldn’t be surprising if the chairman of the board or the president who runs your company is a former history major or really into Slavic studies or an expert on ancient Greek. If your son is truly passionate about math, by all means encourage him to go into the world of STEM. But pushing yourself or your children away from the humanities and into natural science solely as a utility function isn’t actually very useful – either for them or for the future of our society. We certainly need masterful chemical engineers, mathematicians, and software developers, but we also need brilliant poets, singers, philosophers, and anthropologists. We need to synthesize the best ideas from all of these points of view, not attempt to optimize ourselves as individuals or as a culture.

For when we optimize ourselves, we lose sight of the meaningful differences between Randall – an old man with dementia – and any other old body taking up space in a residence. Optimization is about counting resources in an effort to scale them, and technology is a master of scale. But it need not be our master. Let us demote technology to a colleague or, better yet, a well-trained assistant or sidekick. When we claim our space as the sole interpreters of culture, we can emancipate ourselves and see technology for what it is: simply one more tool in the arsenal. It can help us to arrive at extraordinary places, but we still need to figure out what to do once we get there. The answers to that dilemma will only ever be found through inspired acts of mastery that are drawn out of us by our context.


What are people for? Algorithms can do many things but they will never actually give a damn. People are for caring.
