Ninth lecture
-
Interview
I wanted to start with a short interview; my presentation of the product is not transcribed here, of course.
Adrien : Thanks for answering these questions about my prototype. I am going to show it to you, and you will give me your impressions on the spot. Ok?
Julie : Yes, let's go.
Presentation
Julie : I think it is a good initiative. However, I have a few comments. First, relying on a hat is fine, but what if there is wind or rain? Does it still work? What if it falls off?
Adrien : Well, of course, the electronic components of the hat will not be exposed like in this blender; there will be a double bottom to protect them. And we will need to choose a waterproof textile. But indeed, I do think it won't be usable in extreme conditions.
Julie : Ok for the double bottom, but what about the size, then? Will there be several sizes?
Adrien : The components are not that big, so I don't think bulk is an issue. However, we would indeed need to offer many sizes. For such a product, they could even be made to measure.
Julie : Ok, and the hat is connected to the glasses? Wearing glasses all day while walking could be a handicap; at the very least they should be tinted when it is sunny.
Adrien : Yes, I think that is the main problem with this project: the need for two different objects. And ok for the tinted glasses.
-
Experience
The idea
I decided to run a fun experiment in my room. Julie wore wireless noise-cancelling headphones so that she could not hear the world around her. I called her on the phone connected to the headphones, playing the role of my product: as she walked, I told her a few seconds in advance that some event was about to occur. For instance, I told her she had to catch a ball coming from her left.
The results
It turned out that, without being used to it, such behaviour is far more complicated than we thought. The eye takes seconds (which is very long) to fully make sense of a new element. However, given enough preparation time before an event, a deaf person has no more difficulty than anyone else.
Conclusion
Giving deaf people information through AR could be a great way to help them in their daily life and support their autonomy. However, announcing half a second in advance that something (unspecified) is happening to your right is of little use. I think that, to work, such an idea would need a code for each type of event and for how dangerous it is.
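To make this idea concrete, here is a minimal sketch of what such an event code could look like. Everything in it is an assumption for illustration: the event types, the symbols, and the three-level danger scale are not part of the prototype.

```python
# Hypothetical event code for the AR display: each event type maps to
# a symbol shown on the glasses and an assumed danger level (1-3).
# The symbol is repeated according to the danger level, so even a very
# short glance conveys how urgent the situation is.

EVENT_CODE = {
    # event type: (symbol, danger level 1-3) -- illustrative values only
    "vehicle": ("!", 3),
    "bicycle": ("^", 2),
    "ball":    ("o", 1),
    "person":  (".", 1),
}

def make_alert(event: str, direction: str) -> str:
    """Build the short message displayed on the glasses."""
    symbol, danger = EVENT_CODE.get(event, ("?", 2))  # unknown events default to medium danger
    return f"{symbol * danger} {event} {direction}"

print(make_alert("vehicle", "right"))  # -> "!!! vehicle right"
print(make_alert("ball", "left"))      # -> "o ball left"
```

The point of repeating the symbol rather than writing the danger level out is that, as the experiment showed, the eye needs time to parse a new element; a denser symbol is faster to grasp than a word.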