The question:
Can a wearable be used to aid color-blind people? Can we combine eye-gaze technology and audiovisual frequency conversion to detect color and let color-blind people listen to and read color?
Current situation:
The EYEBORG is an inspirational project in the field of assistive technology. Here’s a great video about the device: EYEBORG
Eyeborg uses audio frequency to differentiate colors: it assigns a different audio tone to each color (a minimal sketch of this tone-mapping idea follows the list below). A few shortcomings of Eyeborg are:
1. It only detects the color directly in front of the device.
2. The audio tones heard by the user are continuous and arbitrarily assigned to colors.
3. It doesn't support self-learning of colors (this is especially a problem for people who have been color-blind since birth). Example: I would need someone to tell me that the frequency I'm hearing refers to the color 'green'.
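Eyeborg's actual color-to-tone mapping isn't specified in this post, so the sketch below is only an illustration of the general idea, assuming a linear map from hue to a 200-2000 Hz audible range (both the range and the linearity are assumptions). It also makes shortcoming 2 concrete: the resulting frequency carries no inherent meaning, which is why a congenitally color-blind user would need to be taught what each tone refers to.

```python
import colorsys

# Illustrative, Eyeborg-style sonification: map a color's hue to a tone.
# The 200-2000 Hz range and the linear mapping are assumptions for this
# sketch; the real device's mapping is not documented here.
def hue_to_tone(r: int, g: int, b: int) -> float:
    """Return a tone frequency (Hz) for an RGB color, via its hue."""
    hue, _, _ = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
    low_hz, high_hz = 200.0, 2000.0
    return low_hz + hue * (high_hz - low_hz)

print(f"red   -> {hue_to_tone(255, 0, 0):.0f} Hz")   # 200 Hz
print(f"green -> {hue_to_tone(0, 255, 0):.0f} Hz")   # 800 Hz
print(f"blue  -> {hue_to_tone(0, 0, 255):.0f} Hz")   # 1400 Hz
```

Nothing about 800 Hz says "green"; the association has to be memorized, which is exactly the learning burden our design tries to remove.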
Target audience:
The color-blind population
Relevant learning theories:
Dual Coding theory: The device provides visual information in the form of an on-screen text label for the color, as well as aural information by narrating the color's name. This use of both visual and audio information for learning colors is based on dual coding theory, which states that combining the two channels facilitates representational (visual), referential (textual) and associative (combination of both) processing of information. This leads to deeper learning and stronger mental models of the newly acquired information.
Product Design:
A wearable with an integrated eye tracker, a visual output (a small screen? sunglasses? Google Glass?) and an audio output. It will track the eye gaze, read the visual frequency of the area under the gaze, process that reading to determine the area's color, and display the color name as text on the screen in real time. For example, if the wearer is looking at a can of Coke, the screen will display the text 'RED' on the can. Along with that, the wearer can choose to listen to a voice reciting the name of the color.
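To make the gaze-to-color step concrete, here is a minimal sketch in Python, assuming the device exposes each camera frame as an RGB array and the eye tracker reports a gaze point in frame coordinates. The small palette, the 9x9 sampling window and plain RGB distance matching are all illustrative assumptions, not a finalized design.

```python
import numpy as np

# Hypothetical gaze-to-color step: average a small window of the camera
# frame around the gaze point, then pick the nearest named color.
# Palette, window size and RGB-distance matching are assumptions.
PALETTE = {
    "RED": (220, 30, 30),
    "GREEN": (30, 160, 60),
    "BLUE": (40, 60, 200),
    "YELLOW": (230, 210, 40),
    "WHITE": (245, 245, 245),
    "BLACK": (15, 15, 15),
}

def color_at_gaze(frame: np.ndarray, gaze_x: int, gaze_y: int, win: int = 4) -> str:
    """Estimate the color name at (gaze_x, gaze_y) in an RGB frame."""
    h, w, _ = frame.shape
    x0, x1 = max(0, gaze_x - win), min(w, gaze_x + win + 1)
    y0, y1 = max(0, gaze_y - win), min(h, gaze_y + win + 1)
    mean_rgb = frame[y0:y1, x0:x1].reshape(-1, 3).mean(axis=0)
    return min(PALETTE, key=lambda name: np.linalg.norm(mean_rgb - PALETTE[name]))

# The coke-can example: a frame that is mostly red around the gaze point.
frame = np.full((100, 100, 3), (210, 25, 25), dtype=np.uint8)
print(color_at_gaze(frame, 50, 50))  # -> RED
```

In practice a perceptual color space (e.g. HSV or CIELAB) or a trained classifier would likely match color names more reliably than raw RGB distance, especially under varying lighting.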
Research design:
The research question is whether the wearable can deliver on its promise; it needs to be tested under various conditions and with different people to measure the accuracy of the frequency conversion, the accuracy of the eye-gaze tracking, and their combined accuracy. The initial research would consist of a group of 10 people (who can see color) with different lifestyles using the device for 2 weeks. They would use it at least 4 hours every day in their routine life and report instances where the device gave incorrect output (the wrong color). To report an instance, a test subject presses a report button, which makes the device automatically take a picture of the scene (external camera) and a screenshot of the output screen (screen capture) at the same moment. This collection of images can be mined for patterns in the situations where the device fails. It can also be used to check whether the color reported by the device belonged to an object near the one the user was actually looking at, so it can be deduced whether the problem lies with the eye-gaze tracking or with the frequency conversion.
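As a sketch of the reporting mechanism, the snippet below shows what pressing the report button might log, assuming hypothetical capture_scene() and capture_screen() callables that return image bytes from the external camera and the screen grab (these interfaces are assumptions, not real APIs). Saving the gaze coordinates and the reported color alongside the two images is what later lets the analysis separate eye-gaze errors (the reported color belongs to a neighboring object) from frequency-conversion errors (the gaze was correct but the color computation was wrong).

```python
import json
import time
from pathlib import Path

LOG_DIR = Path("reports")

def on_report_pressed(gaze_xy, reported_color, capture_scene, capture_screen):
    """Log one user-reported failure: the scene photo, the output
    screenshot, and metadata for later pattern mining.

    capture_scene / capture_screen are assumed callables returning
    PNG bytes from the external camera and the screen grab.
    """
    LOG_DIR.mkdir(exist_ok=True)
    stamp = time.strftime("%Y%m%d-%H%M%S")
    (LOG_DIR / f"{stamp}_scene.png").write_bytes(capture_scene())
    (LOG_DIR / f"{stamp}_screen.png").write_bytes(capture_screen())
    meta = {"time": stamp, "gaze": gaze_xy, "reported_color": reported_color}
    (LOG_DIR / f"{stamp}_meta.json").write_text(json.dumps(meta))
```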
Group members:
Jiahui Li
Jennifer Zhou
Shashank Pawar