I worry that as the problem-solving power of our technologies increases, our ability to distinguish between important and trivial or even non-existent problems diminishes. Just because we have "smart" solutions to fix every single problem under the sun doesn't mean that all of them deserve our attention. In fact, some of them may not be problems at all; that certain social and individual situations are awkward, imperfect, noisy, opaque, or risky might be by design. Or, as the geeks like to say, some bugs are not bugs—some bugs are features.
I find myself preoccupied with the invisible costs of "smart" solutions in part because Silicon Valley mavericks are not lying to us: technologies are not only becoming more powerful—they are also becoming more ubiquitous. We used to think that, somehow, digital technologies lived in a national reserve of some kind—first, we called this imaginary place "cyberspace" and then we switched to the more neutral label of "the Internet"—and it's only in the last few years, with the proliferation of geolocational services, self-driving cars, and smart glasses, that we grasped that, perhaps, such national reserves were a myth and digital technologies would literally be everywhere: in our fridges, in our belts, in our books, in our trash bins.
All this smart awesomeness will make our environment more plastic and more programmable. It will also make it very tempting to design out all imperfections—just because we can!—from our interactions, social institutions, politics. Why have an expensive law enforcement system if one can design smart environments, where no crimes are committed simply because those deemed "risky"—based, no doubt, on their online profiles—are barred access and are thus unable to commit crimes in the first place? So we are faced with a dilemma: do we want some crime or no crime? What would we lose—as a democracy—in a world without crime? Would public debate suffer if the media and the courts no longer had legal cases to review?
This is a very important question that I'm afraid Silicon Valley, with its penchant for efficiency and optimization, might not get right. Or take another example: If, through the right combination of reminders, nudges, and virtual badges, we can get people to be "perfect citizens"—recycle, show up at elections, care about urban infrastructure—should we go ahead and take advantage of the possibilities offered by smart technologies? Or should we, perhaps, accept that, in small doses, slacking off and idleness are productive in that they create spaces and openings where citizens can still be appealed to through deliberation and moral argument, not just the promise of a better shopping discount, courtesy of their smartphone app?
If problem-solvers can get you to recycle via a game, would they even bother with the less effective path of engaging you in moral reasoning? The difference, of course, is that those earning points in a game might end up not knowing anything about the "problem" they are solving, while those who've been through the argument and debate have a tiny chance of grasping the complexity of the issue and doing something that matters in the years to come, not just today.
Alas, smart solutions don't translate into smart problem-solvers. In fact, the opposite might be true: blinded by the awesomeness of our tools, we might forget that some problems and imperfections are just the normal costs of accepting the social contract of living with other human beings, treating them with dignity, and ensuring that, in our recent pursuit of a perfect society, we do not shut the door on change. The latter usually happens in rambunctious, chaotic, and imperfectly designed environments; sterile environments, where everyone is content, are not well known for innovation, of either the technological or the social variety.
When it comes to smart technologies, there's such a thing as too "smart" and it isn't pretty.