All Posts in Google Tech Talk

July 26, 2012 - Comments Off on The Many Eyes of the Internet

The Many Eyes of the Internet


Another month, another Google Tech Talk. I first have to gratuitously thank Google for these great free talks. The quality and variety of speakers are truly astounding. This month's talk was The Distributed Camera: Modeling the World from Online Photos, where speaker Noah Snavely went over the work he and his team have done at Cornell involving 3D reconstruction of scenes using crowd-sourced photos. The project will be quite familiar to anyone who saw the TED talk on Microsoft's Photosynth a while back. However, being a technical talk, Snavely roughed out how his team's feature recognition algorithm works.

In effect, the idea is to grab hundreds to thousands of photos of a single landmark from sites like Flickr. An algorithm then detects distinctive features within each photo using a keypoint detector called SIFT. Each photo is then compared to every other photo for matching features. Once feature points have been matched between photos, the algorithm can begin solving for each camera's position and angle, computing 3D points from the 2D projections provided in each photo.
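The pipeline in the talk is Snavely's own, but the pairwise matching step it describes can be illustrated with off-the-shelf tools. Here is a minimal sketch using OpenCV's SIFT implementation and a standard ratio test; the file names and the 0.75 threshold are placeholders of mine, not details from the talk.

```python
# Minimal sketch of pairwise SIFT matching, assuming OpenCV (cv2) is installed.
# File names and the 0.75 ratio threshold are illustrative, not from the talk.
import cv2

def match_pair(path_a, path_b):
    img_a = cv2.imread(path_a, cv2.IMREAD_GRAYSCALE)
    img_b = cv2.imread(path_b, cv2.IMREAD_GRAYSCALE)

    sift = cv2.SIFT_create()
    kp_a, des_a = sift.detectAndCompute(img_a, None)   # keypoints + 128-d descriptors
    kp_b, des_b = sift.detectAndCompute(img_b, None)

    # Brute-force match each descriptor in A to its two nearest neighbours in B,
    # then keep only matches that pass the ratio test (best match clearly better
    # than the runner-up).
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    candidates = matcher.knnMatch(des_a, des_b, k=2)
    good = [m for m, n in candidates if m.distance < 0.75 * n.distance]
    return kp_a, kp_b, good

kp_a, kp_b, good = match_pair("notre_dame_001.jpg", "notre_dame_002.jpg")
print(f"{len(good)} putative feature matches")
```

In the real system, matched pairs like these feed the solver that recovers camera positions and the 3D point cloud.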

The algorithm does not use any camera GPS or time data, since both can be quite inaccurate depending on the conditions under which the photo was taken (bad GPS signal, indoors vs. outdoors, etc.). The output of all this hard work is a 3D point cloud in which each camera is shown as a small pyramid representing its position and angle (if you want more technical details on the algorithm, see Snavely's paper here).

Venice

One can easily see the benefit of such technologies. If you watch the TED talk on Photosynth you'll see how it can allow for a Google Maps-like zoom of landmarks by using the multitude of photos people have taken at different angles and levels of detail. Moreover, it gives a fairly accurate 3D model of a building, and such a model can be used for many tasks. For instance, imagine automatically updating street views, associating new photos with existing models, or even automatic annotation. However, the system isn't quite perfect.

How could such a model cover the uninteresting and banal parts of cities, where the number of photos is small to nonexistent? Snavely's solution was to turn the task of generating this data into a crowd-sourced game, an approach that has proven so successful in recent projects (think Foldit). The inaugural competition between the University of Washington and Cornell led to a narrow Cornell victory--after they discovered how to game the system by taking extreme close-ups of buildings, generating extra 3D points and thereby extra in-game points. There are also other sorts of data one could potentially pull in, including satellite imagery, blueprints, and more.

Another major weakness is inherent to the algorithm itself: since the computer simply compares similar features, any building with symmetry can lead to egregious errors. For example, given a dome with eight-fold symmetry, the algorithm can mistakenly think each of the eight sides is the same, duplicating and rotating all the camera positions and points about the dome! Such a shortcoming could be overcome by giving the algorithm a basic understanding of symmetry and making matching less greedy, possibly by comparing multiple features in each photo to check whether the viewing angle actually differs.

Lastly, the algorithm is slow--O(n²m²) by my estimates, where n is the number of photos and m is the average number of features per photo (though I haven't computed big-O in years, so don't take my word for it). Snavely admitted that even using up to 300 machines, it can take them days to process a couple thousand photos.
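To put that in perspective, here is a back-of-the-envelope calculation using assumed numbers (2,000 photos, 5,000 features each); the figures are mine for illustration, not from the talk.

```python
# Rough cost estimate for brute-force all-pairs matching.
# n and m are assumed values for illustration, not numbers from the talk.
n = 2_000          # photos
m = 5_000          # SIFT features per photo (assumed average)

image_pairs = n * (n - 1) // 2                 # every photo vs. every other photo
descriptor_comparisons = image_pairs * m * m   # naive all-pairs matching per image pair

print(f"{image_pairs:,} image pairs")                        # ~2 million
print(f"{descriptor_comparisons:,} descriptor comparisons")  # ~5 x 10^13
```

Tens of trillions of descriptor comparisons before any 3D solving even starts makes the "days on hundreds of machines" figure plausible, and is why real pipelines typically reach for approximate nearest-neighbour search rather than brute force.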

The talk was incredibly interesting and informative. Such technologies leverage crowd-sourcing as a natural extension of our new data-infused world. As the amount of data out there continues to grow beyond our ability to sort it into meaningful models, such automated systems will become increasingly important. I'd like to thank Google and Snavely again for giving us a peek into this fascinating future.

June 20, 2012 - Comments Off on Google Tech Talk: Continuous Integration

Google Tech Talk: Continuous Integration

Yesterday Angela and I left work a bit early to attend Google's latest tech talk in their NYC office on 15th Street. The topic was "Tools for Continuous Integration at Google Scale." Firstly, if you live in NYC and are reading this blog, you should definitely try to attend these events, which are hosted via Meetup. They always have great speakers in their lovely office cafeteria, and the fellow attendees are always good for a bit of networking. You can see past talks (and, I hope, this one soon) here.

We here at the Mechanism are always looking to improve our development pipeline, especially when it comes to version control. Seeing how Google handles these problems was simply mind-blowing. John Micco walked through the overall setup at Google, which allows their multitude of engineers to collaborate on projects easily, with subversion, testing and deployment all bundled onto a master branch. Their model consumes huge amounts of resources, mainly due to the tests run on each submit as well as a live dependency list that determines which tests need to be run for every submit. This allows the system to promise a 90-minute turnaround on every submit.
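Micco didn't share implementation details, but the core idea of that dependency list can be sketched as a reverse-dependency lookup: given the targets touched by a submit, run only the tests whose transitive dependencies include them. The graph and names below are made up for illustration; this is a toy, not Google's tooling.

```python
# Toy sketch of dependency-based test selection, not Google's actual tooling.
# deps maps each target to the targets it depends on; the graph is hypothetical.
deps = {
    "test_checkout": ["checkout", "cart"],
    "test_cart":     ["cart"],
    "test_search":   ["search", "index"],
    "cart":          ["db"],
    "checkout":      ["cart", "payments"],
    "search":        ["index"],
}

def transitive_deps(target):
    """Everything a target depends on, directly or indirectly."""
    seen = set()
    stack = list(deps.get(target, []))
    while stack:
        d = stack.pop()
        if d not in seen:
            seen.add(d)
            stack.extend(deps.get(d, []))
    return seen

def tests_to_run(changed_targets):
    tests = [t for t in deps if t.startswith("test_")]
    return [t for t in tests
            if set(changed_targets) & (transitive_deps(t) | {t})]

# A submit that touches only "db" triggers the cart and checkout tests, not search.
print(tests_to_run(["db"]))   # ['test_checkout', 'test_cart']
```

Skipping unaffected tests like this is what makes a fast turnaround on every submit even conceivable at that scale.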

While Micco could share how the tool was set up, he had no idea how it was being used by individual teams. For example, some teams such as Google+ have extremely high turnover and submit rates, deploying a product update every few weeks! Others, such as the Google Core team, take much longer and release less often, for obvious reasons. Likewise, some teams force all changes to be tested locally before being sent to the network, while others have no restrictions at all. This is the product of Google's team-independence philosophy.

Obviously such a continuous integration system isn't suitable for every company. In fact, only those with many engineers and a dynamic product necessitate such tools. Yet there are many benefits that small companies can likewise exploit. For Google, the biggest plus of this system is the ability to see exactly who broke what, so there's no need to untangle whose bad commit broke the code. This emerges from Google's desire to, in Micco's words, avoid "tribalization" of knowledge in the company while still allowing teams to act freely. This is key for any company and is a tenet of coding culture in general. Code should be clear and concise such that any other developer can easily figure out how it works and how to fix it. If only a small tribe of people (or even an individual) knows how to fix something easily, it hurts the company as a whole in the long run as that group changes or leaves.

Micco was keen to point out that all the Google teams using the system love it and have become more and more addicted to it. Even the mention of the system going down for maintenance is met with horror. Yet there is a fundamental problem with such a system: as the number of users, tests and commits increases over time, the computing resources required escalate exponentially, such that keeping ahead of demand is impossible! Yet now that the engineers are hooked, Micco posed this problem: how do we optimize resource utilization while still providing a quick turnaround?
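The squeeze is easy to see even under a simplified, assumed model: if both commits per day and the number of tests keep growing with the size of the organization, the number of test executions grows with their product. The numbers below are invented purely for illustration.

```python
# Illustrative only: assumed growth in both commits/day and test count.
# Worst case (every test on every commit), total executions grow with the product.
for year, commits_per_day, tests in [(1, 5_000, 20_000),
                                     (2, 10_000, 40_000),
                                     (3, 20_000, 80_000)]:
    executions = commits_per_day * tests
    print(f"year {year}: {executions:,} test executions/day")
```

Dependency-aware test selection (as sketched earlier) and quotas are two of the levers for pulling that curve back down, which is exactly where the next suggestion comes from.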

Beyond optimizing expensive operations such as testing and dependency mapping, Micco's simplest suggestion was also the most likely: to impose quotas on development teams--then avoid the rationing meeting.