Our machines can do amazing things. Our mapping and travel tools can span numerous transit agencies and modes of transport to navigate us conveniently across the land. They still mess up, which is acceptable. But when they fail, we often don't even know that they have erred, or how, and that is less OK.
On an intermediate leg of a marathon journey from Washington, DC to Nairobi that included a DC Metrobus, a ZipCar, a BoltBus, a commuter train, an airtram, and two 6+ hour flights, I simply needed to get from Penn Station to JFK Airport. I already knew that the Long Island Rail Road was the best combination of price and speed for my needs, and HopStop's website confirmed it. Unfortunately, my BoltBus ran an hour late, and I found myself recalculating the trip from my phone using HopStop's mobile app. For whatever reason, whether an errant filter or another limitation of the mobile app, HopStop no longer showed me any LIRR options. In this case, I knew I wasn't seeing the results I needed; I just couldn't do anything about it.
Eli Pariser talks about the societal implications of opaque social algorithms in The Filter Bubble, where we don’t know what we don’t know, and couldn’t see it if we did. The ability to understand what we aren’t seeing is also a simple usability affordance. A few apps break the general trend in this department:
Hipmunk intelligently sorts the best flights available by eliminating the obviously bad choices (70% of possible results, according to cofounder Steve Huffman in this Forbes piece extolling Hipmunk's many virtues). But the site also wisely allows users to re-expose similar flights and dive into the larger world of possibilities when price or time is severely constrained.
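Hipmunk's actual ranking (its "Agony" sort) is proprietary, so this is only a rough sketch of the underlying pattern: hide any flight that another flight beats on both price and duration, but keep the hidden set around so the interface can re-expose it on demand. The `Flight` type and `split_results` function here are hypothetical, not Hipmunk's API.

```python
from dataclasses import dataclass

@dataclass
class Flight:
    price: float         # ticket price in dollars
    duration_min: int    # total travel time in minutes

def split_results(flights):
    """Partition flights into (shown, hidden).

    A flight is hidden when some other flight is at least as good on both
    price and duration, and strictly better on at least one. Crucially,
    the hidden list is returned rather than discarded, so a UI can offer
    a "show similar flights" control instead of silently dropping options.
    """
    shown, hidden = [], []
    for f in flights:
        dominated = any(
            g.price <= f.price and g.duration_min <= f.duration_min
            and (g.price < f.price or g.duration_min < f.duration_min)
            for g in flights
        )
        (hidden if dominated else shown).append(f)
    return shown, hidden
```

The design point is in the return value: the filter decides what to show by default, but never makes the worse options unreachable.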
Gmail’s Priority Inbox attempts to order your email based on your rules and habits. In my experience, it’s not quite there yet, but by hovering over the Priority icons, you can at least see why the feature sorted your email as it did, and correct for future cases.
If you're not going to share the secret sauce of how decisions are made, you should at least let users route around the decisions when they are poorly made. Admittedly, only a small group of users cares about this sort of thing. And maybe the apps we build will get smarter and smarter and smarter, and exposing the results the machine guesses are wrong will come to be seen as an in-between technology as the machine's guesses become more perfect. But I think it's more likely that we'll still want a grayer, more complicated version of what the machine tells us is possible, even as the machine's computational abilities exceed our continuously evolving definition of magic. Let us see.