We could become far more intelligent than we are by adding to our stock of concepts, and by forcing ourselves to use them even when we don't like what they are telling us. This will be nearly always, because they generally tell us that our self-evidently superior selves and ingroups are error-besotted. We all start from radical ignorance in a world that is endlessly strange, vast, complex, intricate, and surprising. Deliverance from ignorance lies in good concepts — inference fountains that geyser out insights that organize and increase the scope of our understanding. We are drawn to them by the fascination of the discoveries they afford, but resist using them well and freely because they would reveal too many of our apparent achievements to be embarrassing or tragic failures. Those of us who are non-mythical lack the spine that Oedipus had — the obsidian resolve that drove him to piece together shattering realizations despite portents warning him off. Because of our weakness, "to see what is in front of one's nose needs a constant struggle" as Orwell says. So why struggle? Better instead to have one's nose and what lies beyond shift out of focus — to make oneself hysterically blind as convenience dictates, rather than to risk ending up like Oedipus, literally blinding oneself in horror at the harvest of an exhausting, successful struggle to discover what is true.
Alternatively, even modest individual-level improvements in our conceptual toolkit can have transformative effects on our collective intelligence, promoting incandescent intellectual chain reactions among multitudes of interacting individuals. If this promise of intelligence-amplification through conceptual tools seems like hyperbole, consider that the least inspired modern engineer, equipped with the conceptual tools of calculus, can understand, plan, and build things far beyond what da Vinci or the mathematics-revering Plato could have achieved without it. We owe a lot to the infinitesimal, Newton's counterintuitive conceptual hack — something greater than zero but less than any finite magnitude. Far simpler conceptual innovations than calculus have had even more far-reaching effects — the experiment (a danger to authority), zero, entropy, Boyle's atom, mathematical proof, natural selection, randomness, particulate inheritance, Dalton's element, distribution, formal logic, culture, Shannon's definition of information, the quantum…
Here are three simple conceptual tools that might help us see in front of our noses: nexus causality, moral warfare, and misattribution arbitrage. Causality itself is an evolved conceptual tool that simplifies, schematizes, and focuses our representation of situations. This cognitive machinery guides us to think in terms of the cause — of an outcome having a single cause. Yet for enlarged understanding, it is more accurate to represent outcomes as caused by an intersection or nexus of factors (including the absence of precluding conditions). In War and Peace, Tolstoy asks "When an apple ripens and falls, why does it fall? Because of its attraction to the earth, because its stem withers, because it is dried by the sun, because it grows heavier, because the wind shakes it…?" — with little effort any modern scientist could extend Tolstoy's list endlessly. We evolved, however, as cognitively improvisational tool-users, dependent on identifying actions we could take that lead to immediate payoffs. So, our minds evolved to represent situations in a way that highlighted the element in the nexus that we could manipulate to bring about a favored outcome. Elements in the situation that remained stable and that we could not change (like gravity or human nature) were left out of our representation of causes. Similarly, variable factors in the nexus (like the wind blowing) that we could not control, but that predicted an outcome (the apple falling), were also useful to represent as causes, in order to prepare ourselves to exploit opportunities or avoid dangers. So the reality of the causal nexus is cognitively ignored in favor of the cartoon of single causes. While useful for a forager, this machinery impoverishes our scientific understanding, rendering discussions (whether elite, scientific, or public) of the "causes" — of cancer, war, violence, mental disorders, infidelity, unemployment, climate, poverty, and so on — ridiculous.
Similarly, as players of evolved social games, we are designed to represent others' behavior and associated outcomes as caused by free will (by intentions). That is, we evolved to view "man," as Aristotle put it, as "the originator of his own actions." Given an outcome we dislike, we ignore the nexus, and trace "the" causal chain back to a person. We typically represent the backward chain as ending in — and the outcome as originating in — the person. Locating the "cause" (blame) in one or more persons allows us to punitively motivate others to avoid causing outcomes we don't like (or to incentivize outcomes we do like). More despicably, if something happens that many regard as a bad outcome, this gives us the opportunity to sift through the causal nexus for the one thread that colorably leads back to our rivals (where the blame obviously lies). Lamentably, much of our species' moral psychology evolved for moral warfare, a ruthless zero-sum game. Offensive play typically involves recruiting others to disadvantage or eliminate our rivals by publicly sourcing them as the cause of bad outcomes. Defensive play involves giving our rivals no ammunition to mobilize others against us.
The moral game of blame attribution is only one subtype of misattribution arbitrage. For example, epidemiologists estimate that it was not until 1905 that you were better off going to a physician. (Semmelweis noticed that doctors doubled the mortality rate of mothers at delivery.) For thousands of years, the role of the physician pre-existed its rational function, so why were there physicians? Economists, forecasters, and professional portfolio managers typically do no better than chance, yet command immense salaries for their services. Food prices are driven up to starvation levels in underdeveloped countries, based on climate models that cannot successfully retrodict known climate history. Liability lawyers win huge sums for plaintiffs who get diseases at no higher rates than others not exposed to "the" supposed cause. What is going on? The complexity and noise permeating any real causal nexus generates a fog of uncertainty. Slight biases in causal attribution, or in blameworthiness (e.g., sins of commission are worse than sins of omission), allow a stable niche for extracting undeserved credit or targeting undeserved blame. If the patient recovers, it was due to my heroic efforts; if not, the underlying disease was too severe. If it weren't for my macroeconomic policy, the economy would be even worse. The abandonment of moral warfare, and a wider appreciation of nexus causality and misattribution arbitrage, would help us all shed at least some of the destructive delusions that cost humanity so much.