Restoring Vision Through the Eyes of a Smartphone
For millions of people, navigating a complex world requires the assistance of a cane or a faithful companion. With the ubiquity of smartphones today, and their ever-increasing capabilities, there is an opportunity to create an application that helps the visually impaired become functionally independent.
After much research, we found an existing application, the vOICe, that promotes functional independence; however, we believe it can be improved. The vOICe uses the smartphone's camera to continuously capture images, which are processed and converted to sound. The key opportunity for enhancement is to simplify each captured image before converting it to sound: remove the background and preserve only what is important. We will leverage existing image processing techniques for this task.
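To make the image-to-sound idea concrete, here is a minimal sketch of column-wise sonification. This is not the vOICe's actual algorithm, only an illustration of the general approach: scan a grayscale image left to right, mapping row position to pitch and pixel brightness to loudness. The frequency range, duration, and sample rate are assumed values for demonstration.

```python
import numpy as np

def image_to_sound(image, duration=1.0, sample_rate=8000,
                   f_min=200.0, f_max=2000.0):
    """Sonify a grayscale image column by column, left to right.

    Each column becomes a short audio frame; within a frame, higher
    rows map to higher pitches and brighter pixels to louder tones.
    Illustrative sketch only -- not the vOICe's real mapping.
    """
    rows, cols = image.shape
    frame_len = int(duration * sample_rate / cols)
    t = np.arange(frame_len) / sample_rate
    # One sine frequency per image row; top rows get higher pitch.
    freqs = np.linspace(f_max, f_min, rows)
    frames = []
    for c in range(cols):
        amps = image[:, c].astype(float) / 255.0   # brightness -> loudness
        tones = amps[:, None] * np.sin(2 * np.pi * freqs[:, None] * t)
        frames.append(tones.sum(axis=0))
    audio = np.concatenate(frames)
    peak = np.abs(audio).max()
    return audio / peak if peak > 0 else audio     # normalize to [-1, 1]
```

A bright horizontal line in the image, for example, would produce a steady tone at a single pitch for the full duration of the scan.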
A challenge for this technology is the learning curve required for a user to become comfortable with the application, a curve we believe will flatten significantly if the user hears only the part of the scene that matters. Our biggest contributor to this simplification is depth-of-field blur: by discarding the blurred portions of the scene, we translate only the in-focus region, which is most likely to be important to the user. Seeing through sound becomes easier when the user no longer has to distinguish relevant from irrelevant soundwaves. Figure 1 shows the current implementation's output, while our intended output can be seen in Figure 2.
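One way the blur-removal step could work, sketched under stated assumptions rather than as our final design: out-of-focus regions lack high-frequency detail, so image blocks whose local Laplacian variance falls below a threshold can be treated as background and zeroed out. The block size and threshold below are illustrative placeholders that a real system would tune per camera.

```python
import numpy as np

def sharp_region_mask(image, block=8, threshold=50.0):
    """Keep only the in-focus regions of a grayscale image.

    Depth-of-field blur suppresses high-frequency detail, so blocks
    with low local Laplacian variance are treated as background and
    zeroed.  Sketch only; `block` and `threshold` are assumptions.
    """
    img = image.astype(float)
    # Discrete Laplacian: 4 * center minus the 4-neighbour sum,
    # computed on interior pixels only (border left at zero).
    lap = np.zeros_like(img)
    lap[1:-1, 1:-1] = (4 * img[1:-1, 1:-1]
                       - img[:-2, 1:-1] - img[2:, 1:-1]
                       - img[1:-1, :-2] - img[1:-1, 2:])
    mask = np.zeros(img.shape, dtype=bool)
    for r in range(0, img.shape[0], block):
        for c in range(0, img.shape[1], block):
            patch = lap[r:r + block, c:c + block]
            if patch.var() > threshold:        # sharp block: keep it
                mask[r:r + block, c:c + block] = True
    return np.where(mask, image, 0)
```

Feeding the masked image into the sonification stage means the blurred background contributes silence, leaving only the in-focus subject audible.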
Our team is excited to push forward with this project and hopes to have a working prototype by the end of summer 2012, at which point we will look into increasing the number of translations per second to communicate a sense of motion to the user!