I started my morning with an exciting (to me) development: Verizon had finally approved the Jelly Bean update for my Android phone. One of its features, Google Now, works much like iOS 6’s Passbook. Both systems pull from the vast number of things your phone and connected services know about you to surface relevant information when and where you need it. The day’s weather, the score of your favorite sports team, and traffic on your commute home are pushed to you so that you don’t even have to search. It was a fitting way to start the morning as I headed to the Ph.D. thesis defense of Polychronis Ypodimatopoulos (or Pol, or @ypodim, if you’re a fan of brevity).
The question that has guided Pol’s research is how we can enable people to easily tune in to what’s going on around them. When a curious mind walks into the Media Lab, how do they find out all of the amazing things happening inside this building? The Lab has Glass Infrastructure screens, but most other buildings do not.
The pain point here is the “If only I had known…” feeling. We work on projects similar to others’ without ever connecting. We could better exchange products and services, engage in joint activities, or pool resources, such as finding a roommate.
Cities offer many resources and opportunities, but navigating them remains a daunting challenge. High-rise buildings and crowds make us feel intimidated, not empowered. How can technology help us see our neighborhoods as the rich hives of potential they really are?
Pol’s answer is the decentralized social network. His first application was a mesh network on a mobile device that showed you what was around you. Writing apps for multiple platforms and battery-life limitations were obstacles, though, so he moved the network to the cloud with Ego. Ego sought to put the user at the center of activity, with apps orbiting us, rather than our current model of competing platforms, in which users revolve around sites like Facebook and Amazon.
Pol added indoor location-sharing to the mix with Bluetooth devices. The Twitter-like interface allows you to literally follow your friends. He charted the aggregate location results of two competing sessions at the same event, and could visualize the depressing effect poor venue selection has on the size of your audience.
The aforementioned Glass Infrastructure is a place-based social information system composed of 40 touch screens placed around the building. It maps out the Media Lab’s many people, groups, and projects for visitors. When this information was first exposed in a public place, students rushed to update their projects and headshots (the novelty effect has since worn off, however).
The GI architecture consists of a large touch screen in vertical orientation, which can read the RFID tags on the nametags of passersby. This allows the infrastructure to provide different applications for different user classes (Media Labbers, sponsors, visitors).
The Bird’s Eye View application for the Glass Infrastructure provides a collage of faces. When you walk by the screen, your photo is pinned for the next 5 minutes so that others can see your recent presence and potentially reach out to you.
These two projects introduced a decentralized social network and a place-based social information system, but Pol sought to create a discovery mechanism that works across different contexts. Siri and Google Glass are interesting ways to present the information around us, but Pol thinks there’s room for improvement in creating the actual content of what’s interesting around us.
One such application is discovering the experts around us. This is usually done by starting with a corpus of information, such as emails or a forum, and then identifying and suggesting expertise. But when you’re in a new space, you don’t have such a corpus. Foursquare doesn’t tell us much about the people around us. Twitter has hashtags, but we have to advertise a hashtag’s existence before people know to use it. Starbird et al. proposed hashtagging everything, which gets messy quickly. And combining Facebook with Highlight limits you to your existing social network’s reach.
Pol pulls up his own Facebook Profile. There’s a lot of information here, but it’s not enough to capture him, especially when you throw variables like location into the mix.
Brin.gy is a discovery service that aggregates our skills across users. It’s designed so that multiple discovery services could compete using the same format: you, the subject, list objects (topics) paired with a predicate (“talk to me about these topics”).
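That subject–predicate–object structure resembles RDF-style triples. Here’s a minimal sketch of how such profiles might be stored and queried; the names and data are my own illustrations, not the actual Brin.gy format:

```python
# Illustrative sketch of triple-based discovery profiles
# (subject, predicate, object). Not the real Brin.gy schema.

profiles = [
    ("alice", "talk_to_me_about", "espresso"),
    ("alice", "talk_to_me_about", "android"),
    ("bob",   "talk_to_me_about", "mesh networks"),
]

def who_knows(topic, triples):
    """Return subjects who listed the given topic under 'talk to me about'."""
    return sorted({s for s, p, o in triples
                   if p == "talk_to_me_about" and o == topic})

print(who_knows("espresso", profiles))  # → ['alice']
```

Because every service would speak the same triple format, a competing discovery service could index the exact same profile data.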
Pol simulated the discovery service with thousands of users at the scale of a city block. A coffee shop owner would be able to determine how many people walking by match the profile of a person likely to buy coffee. Pol sees this as a way to make markets more efficient, and let consumers group themselves to achieve better prices.
This being MIT, most of the skills listed in the demo are software development skills. Your personal taxonomy consists of tags for your skills, gender, and languages spoken. The average user contributed 15 values about themselves, which is actually the same number of data points Pol found in Facebook profiles.
Pol tested Brin.gy at O’Reilly Ignite Boston 9 to help attendees find people they might want to talk to. TED has actually produced a similar iPhone app, TEDConnect, with the same “Talk to me about” field.
This list of information doesn’t scale, though; it quickly becomes unwieldy. So Pol looked at exposing information selectively based on the user’s current context: the lists become applications scoped to specific geographically defined areas, depending on whether you’re looking for dinner or a study partner.
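One way to picture context-scoped exposure is filtering a user’s tags against whichever geographic area they’re standing in. The bounding boxes and tag categories below are hypothetical examples, not values from Pol’s system:

```python
# Sketch: expose only the profile tags relevant to the user's current area.
# Coordinates and categories are made up for illustration.

CONTEXTS = {
    # name: ((min_lat, min_lon), (max_lat, max_lon), relevant tags)
    "campus": ((42.358, -71.090), (42.362, -71.084), {"study", "research"}),
    "square": ((42.362, -71.100), (42.366, -71.094), {"dinner", "coffee"}),
}

def visible_tags(lat, lon, user_tags):
    """Return only the user's tags that fit the area they are in."""
    for (lo_pt, hi_pt, relevant) in CONTEXTS.values():
        if lo_pt[0] <= lat <= hi_pt[0] and lo_pt[1] <= lon <= hi_pt[1]:
            return user_tags & relevant
    return set()  # outside any known context: expose nothing

print(visible_tags(42.360, -71.087, {"study", "dinner"}))  # → {'study'}
```

A study-partner tag shows up on campus; the same user’s dinner tag would only surface in the restaurant district.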
Pol also mined a useful Media Lab listserv discussion to extract the knowledge within and import it into Brin.gy. He mapped the tips and displayed them alongside the original thread, then asked users which format they preferred. People liked the email thread for its personal touch and the contextual information in each email, but digesting many emails in a thread takes a long time. Brin.gy extracted the valuable data, but not the personal stories behind it.
Me: Are we forever doomed to choose between inefficient but meaningful personal narratives and rich, if soulless, databases?
Pol has attempted to design a sweet spot between the two, pointing to databases that link back to the personal context as a solution. Brin.gy provides a map with database entries of rich, concrete information, but each entry also shows the faces of the people who made the recommendation and a link to the email where they tell the story behind that great shot of espresso. You can have the best of both worlds.
Eyal asks if Pol has considered the emergency applications of such a skills database. Could we quickly determine if there’s a doctor in the vicinity?
Pol points out that verifying one’s medical degree would be important in such a scenario. He has also played out a scenario where your train station is closed and you are able to hitch a ride with someone headed in the same direction. But real-time location-sharing apps will probably leapfrog Pol’s tag-based attributes in this particular application.
Catherine Havasi notices a number of Google Maps pins in the Charles River and asks about moderation and verification. It turns out this was intentional, for a flashmob-by-sailboat app. But in general, Pol relies on user flagging for crowd moderation.
Privacy is managed by users themselves, who can set how many degrees of Facebook friend can view their location and attribute information.