It has been a long time since our last blog post, and we are aware that many of you desperately want to know what has happened to Enkin since then. The past months have been a busy and exciting time for us. Today, we are happy to break the silence and tell you the whole story from the beginning, finally answering the questions many of you have asked us.
In 2007, the two of us were graduate students of computational visualistics in Koblenz, Germany. We both went to Japan to spend a semester at Osaka University, doing research in robotics. During our time there, we heard about the Android Developer Challenge. Android, an open source platform for mobile devices, had just been announced and the Challenge aimed to motivate developers to create great applications for the upcoming Android-powered phones. Having some extra time on our hands, we decided to give it a try and develop a useful application. In the beginning, we had no idea what we had gotten ourselves into. We didn't even know what kind of application it should be.
Soon we discovered that the Android SDK offers access to a number of powerful technologies available on the latest smartphones: fast 3D graphics, the camera, GPS, orientation sensors, always-on internet, and a big touch screen. It was clear that there was great potential in combining all of these in the palm of your hand. In our conversations, a promising concept emerged, which we described as bridging the gap between maps and reality. The idea was to make navigation a lot easier by displaying information on top of the live camera image instead of leaving it up to the user to connect reality with the information on a map. Where the view was blocked, one could virtually hover up in the air and get an overview of the surroundings using a 3D landscape textured with satellite images.
We started coding, but there was one big problem: with no actual Android devices available at the time, all development had to be done on the emulator running on our laptops. The emulator had no camera, no sensors, and no GPS. Without real input, there was no way to properly test the user experience of our application. So we bought a webcam, a digital compass, and a GPS receiver, and we developed a program that could send live video, sensor data, and the current location to the emulator in real time. Development could continue, and our application began to take shape. We were well aware, however, that this solution might fail in the judging process of the Challenge, where the judges wouldn't have access to our little contraption of laptop, camera, GPS receiver, and digital compass. We decided to shoot and publish a short video demonstrating what the application could do and explaining how we extended the emulator. We submitted our application, which we were calling Enkin at that time (from the Japanese 遠近: perspective, near and far), and hoped for the best.
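As an aside for readers curious how feeding real data into the emulator can work: the Android emulator exposes a console on TCP port 5554 that accepts a `geo fix <longitude> <latitude>` command to set the device's location. The sketch below is not our actual program, just a minimal illustration of the GPS half of such a bridge, assuming a serial NMEA receiver at the hypothetical device path `/dev/ttyUSB0` (video and compass data would need their own custom channels).

```python
# Minimal sketch: forward GPS fixes from a serial NMEA receiver to the
# Android emulator console. Assumes a $GPGGA sentence stream; the serial
# device path is hypothetical, and newer emulators also require an auth
# token before accepting console commands (omitted here for brevity).
import socket

def nmea_to_decimal(value: str, hemisphere: str) -> float:
    """Convert an NMEA ddmm.mmmm coordinate to signed decimal degrees."""
    dot = value.index(".")
    degrees = float(value[: dot - 2])
    minutes = float(value[dot - 2:])
    decimal = degrees + minutes / 60.0
    return -decimal if hemisphere in ("S", "W") else decimal

def geo_fix_command(gpgga_sentence: str) -> str:
    """Build an emulator-console 'geo fix' command from a $GPGGA sentence."""
    fields = gpgga_sentence.split(",")
    lat = nmea_to_decimal(fields[2], fields[3])
    lon = nmea_to_decimal(fields[4], fields[5])
    return f"geo fix {lon:.6f} {lat:.6f}"  # console expects longitude first

def stream_to_emulator(host: str = "localhost", port: int = 5554) -> None:
    """Read NMEA lines from the receiver and push each fix to the console."""
    with socket.create_connection((host, port)) as console, \
         open("/dev/ttyUSB0") as gps:  # hypothetical serial device path
        for line in gps:
            if line.startswith("$GPGGA"):
                console.sendall((geo_fix_command(line) + "\n").encode())
```

For example, the standard sample sentence `$GPGGA,123519,4807.038,N,01131.000,E,1,08,0.9,545.4,M,46.9,M,,*47` yields `geo fix 11.516667 48.117300`, which the emulator then reports to the application as its current location.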
Publishing the demo video had another effect: it went viral, and within a few weeks we had hundreds of thousands of people watching it and telling us how excited they were about our project. We caught the attention of many, including Google. They expressed interest in acquiring the technology and having us join them to continue working on the project using the great resources available at Google. We said yes, moved to California, and this is where we are today. Did we mention that we hadn't won anything in the Challenge? It didn't matter much anymore.
We think that Google, with all its infrastructure and its innovative and playful culture, is exactly the right place for us to further pursue our ideas, which you know as Enkin.
3 comments:
So were you involved in developing Google Goggles?
There goes my theory that a Google Hit Squad took you out.
Well congrats on the opportunity at Google. Has enkin tech made it into any google product (hmm.. goggles was released today) or is it still all behind the scenes?
I can't wait to see this hit the market. You mentioned in your video that you would be pushing for better accelerometer technology (as I understood it); is that something we could see in future Android phones?
If this were included, new and interesting modes of interaction with the phone could be achieved.