A Face Recognition Application for People with Visual Impairments: Understanding Use Beyond the Lab

July 24, 2019



Hi everyone. Today I'm going to present an application that we designed to help people with visual impairments recognize their friends in social activities. I'll also discuss the lessons we learned while designing and evaluating AI technologies to address an important accessibility problem.

Imagine a person with visual impairments walking into a big conference room. It will be very difficult for her to recognize who is around, where everyone is seated, who she should go talk to, and even who the presenter is. This is a major challenge for people with visual impairments, one that can raise both emotional and social problems. Here is an experience that one woman, 64 years old and visually impaired, described: "My blind husband and I were going to have dinner and decided to meet inside the front door of a mall. I went in one side and he went in the other. Finally some woman came up and said, 'Are you meeting a blind gentleman? He's standing about ten feet away.'" This example emphasizes the challenge of recognizing people that visually impaired people face, even with so many technologies around us.

The goal of our research is to address this problem by designing a face recognition research prototype to help people with visual impairments recognize their friends in social activities. Given the fast development of AI technology, we want to leverage state-of-the-art computer vision to achieve accurate recognition, and to leverage the user's existing social network on Facebook, so users don't have to manually collect a dataset to train the algorithm. We also all know how important it is to protect people's privacy, so in our design we wanted to take bystanders' privacy into account and try to address their concerns about being unknowingly surveilled or recognized.

With these research goals, we designed Accessibility Bot, which is a Facebook Messenger bot that appears as a contact on the Facebook Messenger platform.
There are many benefits to using the Facebook Messenger platform. The major one is that it provides easy and reliable access to both the camera and the Facebook APIs that support the face recognition services we use to recognize the user's Facebook friends. The platform is also widely used and available on smartphones, so it is much easier for people to discover and access.

Our current prototype was built on an Android smartphone, and it works with TalkBack, the screen reader on Android. When users point the camera of Accessibility Bot and scan around the environment, our system automatically performs face detection and reports the number of faces in front of the camera; for example, for this image it reports two faces. Based on this feedback, users can know whether there are faces in front of them and then decide whether they want to trigger face recognition.

One thing we were concerned about during our design is that the people around the user may feel that they are being recorded or recognized without their knowledge. So we designed an explicit gesture, a double tap, to trigger the face recognition. This serves as a signal from the user to bystanders that some information is about to be captured. After the user double-taps, face recognition is triggered: we recognize both the user's Facebook friends and their facial features, including their appearance and facial expression. We display all of this information on the phone screen, and the system automatically lists the recognized Facebook friends' names from left to right based on their positions in the photo. For example, for the image shown, it reports two people from left to right: Eric Lee and Gerald Taylor. If the user is specifically interested in one friend, he can swipe to navigate this list and listen to more details.
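The detect-then-describe flow just described (count the faces, then on demand list recognized friends from left to right) can be sketched as follows. This is a minimal illustration, not the bot's actual code; the `Face` fields and the phrasing of the spoken summary are assumptions for the sake of the example:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Face:
    """A detected face: its bounding-box position plus optional identity details."""
    left: int                                  # x-coordinate of the box's left edge
    name: Optional[str] = None                 # None if not matched to a Facebook friend
    attributes: List[str] = field(default_factory=list)  # e.g. "eyes open", "smiling"

def summarize(faces: List[Face]) -> str:
    """Order faces left to right and build a screen-reader-friendly summary."""
    if not faces:
        return "No faces detected."
    ordered = sorted(faces, key=lambda f: f.left)
    names = [f.name or "an unrecognized person" for f in ordered]
    return f"{len(ordered)} faces, from left to right: " + ", ".join(names) + "."

def describe(face: Face) -> str:
    """Detail string the user hears after swiping to a specific friend."""
    who = face.name or "Unrecognized person"
    return who + (": " + ", ".join(face.attributes) if face.attributes else "")
```

For the example in the talk, `summarize` would announce "2 faces, from left to right: Eric Lee, Gerald Taylor.", and swiping to the first friend would read out his attributes via `describe`.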
For example, for the first friend: "Eric Lee, no eyeglasses, eyes open, happy mood, smiling, no facial hair." So this is the design of our Accessibility Bot.

The face recognition algorithm we use is the same algorithm behind the photo tag suggestion feature on Facebook, where suggestions are only made for people who have the face recognition setting turned on. Our bot also follows this standard to protect the privacy of the user's friends: they can choose whether or not they want to be recognized by adjusting this face recognition setting. One thing I want to note is that the algorithm has very high accuracy, 97 percent, as tested by researchers on a public dataset of Flickr photos.

Based on the design of our bot, we wanted to understand how useful it is in a real-world setting: in what situations people want to use it, whether it is effective, and whether it is socially acceptable. So we conducted a diary study with six participants with visual impairments. They were all legally blind: two of them were totally blind, two had ultra-low vision, and two had moderate low vision, which means they still had some functional vision to use. We first conducted an interview and a tutorial session to teach them how to use our app, and then we ran a seven-day diary study with daily entries. We asked the participants to use the bot on at least four days out of the week, and every day they needed to fill in a diary entry with eight questions; for example, we asked whether they used the bot that day, in what situations they used it, whether it was helpful or not, and whether they felt it was accurate. At the end, we conducted a one-hour interview and observed how they used the application to get general feedback.

Now I'm going to present some of the key findings from our user study. First, we found participants used the bot in many different situations, and each participant used
the bot in at least three different scenarios. As you can see in the chart, most participants used it at home with their families and in small gatherings with their friends. Interestingly, we found some unexpected use cases: for example, three participants used the app to take a selfie to check their own appearance, how they looked that day, and one participant used the app to take a picture of a physical photo to find out who was in it.

From our study we also found the app was helpful for most of the participants: four out of six thought it was very helpful, and participants reported that recognizing their Facebook friends was very useful, which definitely added to its perceived helpfulness.

Accuracy is another important factor that affected the user experience. In our study we did not measure the actual accuracy, because we could not collect the photos taken by the users, considering we could not get consent from all the people who appeared in them. Instead, we collected perceived accuracy by asking the participants how accurate they felt the application was. This is actually more meaningful for us, because it expresses the users' experience in real-world situations. We expected the perceived accuracy to be really high, given that our algorithm achieved 97% accuracy when tested in the lab, but surprisingly the perceived accuracy varied from 0.2 to 0.9, which is way lower than we expected. We explored the reasons and found that photo quality is one of the major problems: the photos can have low illuminance, faces can be cut off, and the photos can be blurry. Participants also mentioned that people's photos on Facebook sometimes don't look the same as those people do in real life, which brought additional difficulty to the recognition.
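The photo-quality problems described here (low illuminance, blur) suggest a client-side pre-check before a photo is sent for recognition, so the user could be prompted to retake the shot. Our prototype did not include such a check; the following is a hypothetical sketch with arbitrary thresholds, using mean brightness for the low-light case and the variance of a discrete Laplacian as a simple blur measure:

```python
def photo_quality_ok(gray, min_brightness=40, min_sharpness=100.0):
    """Heuristic pre-check on a grayscale image (2D list of 0-255 pixel values).

    Returns False for photos that are too dark or too blurry, the two
    quality problems participants most often ran into.
    """
    h, w = len(gray), len(gray[0])
    pixels = [p for row in gray for p in row]
    brightness = sum(pixels) / len(pixels)  # mean gray level: low => underexposed
    # Discrete Laplacian at each interior pixel; low variance indicates blur.
    lap = [
        gray[y + 1][x] + gray[y - 1][x] + gray[y][x + 1] + gray[y][x - 1]
        - 4 * gray[y][x]
        for y in range(1, h - 1)
        for x in range(1, w - 1)
    ]
    mean_lap = sum(lap) / len(lap)
    sharpness = sum((v - mean_lap) ** 2 for v in lap) / len(lap)
    return brightness >= min_brightness and sharpness >= min_sharpness
```

A uniformly dark frame fails the brightness check, a bright but featureless (defocused) frame fails the sharpness check, and a high-contrast frame passes both; the thresholds here are illustrative and would need tuning on real photos.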
We designed the explicit double-tap gesture to trigger face recognition in order to address bystanders' concerns, but it actually had a negative effect on the user experience: all of the participants wanted real-time face recognition. The major reason is that they have difficulty aiming the camera, especially in real-life situations. In a dynamic social environment, people are talking and moving, and it is very hard to take a clear photo, even for sighted people. The double-tap gesture can also shake the camera away from the target, and it introduces a delay.

In terms of social acceptability, except for one participant who felt using the camera could disturb other people, all the other participants felt our application is very important and socially acceptable, and they all believe they have an equal right to "see" their friends just as sighted people do. Beyond social acceptability, people also thought safety is an important factor in their lives: our application can help them know who is around, and may even help them know where their children are, which can help ensure their safety. Here is a quote from one of our participants: "Facial recognition is out there all the time. If computers can do it, why can't I take advantage of it? You are sighted, you can see and tell who I am, then why can't I? I am not taking any information; I just want to see who you are." This emphasizes how important these technologies are for people with visual impairments in social activities.

Now I want to offer a short discussion based on our findings. We found that potential privacy concerns posed a constraint on our design: we designed the explicit gesture to help protect the privacy of bystanders, but it degraded the experience of our users. This makes us think that even though privacy is a very important problem, we should not neglect access and equity. Maybe it is time for us to think about
privacy differently, by considering people with disabilities and their needs as an important perspective when discussing the problems raised by AI technology.

Also, surprisingly, we found our algorithm doesn't work that well in the wild: we observed low perceived accuracy, and one main reason is that the photos people with visual impairments take are not the same as typical photos taken by sighted people. Our algorithm followed the standard testing and training procedure and used a standard dataset, but the current training procedure does not take people's different disabilities into account. In the future, maybe we should bring people's needs into the training procedure; for example, we could collect photos taken by people with visual impairments to make the algorithm more effective.

To summarize, we designed Accessibility Bot, which can help people with visual impairments recognize their friends and facilitate social activities. We used a diary study to understand its use and challenges in daily life, and we reflected on the use of AI technology in the accessibility context. We hope our research is not only about face recognition; we also want to raise discussion on the relationship between AI technology and accessibility: when we apply AI technology to accessibility, how can we make it more effective? And when we address different problems with AI, for example the privacy problem, how could accessibility fit in, by considering people's different needs? At the end, I want to thank my collaborators Shaomei Wu and Lindsay Reynolds from Facebook, and my adviser, Shiri Azenkot. Thank you so much.

[Q&A]

Audience member (University of Toronto): Thanks for the great talk. I have two questions. The first one is: what were the scenarios in which your participants used the system?

Speaker: Sorry, can you repeat
the scenarios in which the participants used your system?

Speaker: Oh. I did talk about the situations in which they used it, but because this is a diary study, we didn't limit the situations of use. We gave them the application and asked them to use it in whatever situations they wanted, to understand which situations are suitable for this application.

Audience member: Okay. And you mentioned that users can use their phone to take pictures for face recognition. What do you think about using wearable cameras, for example a camera on the chest? And in that case, do you think we would still need to guide the user to adjust their body posture to take photos?

Speaker: That's a very good suggestion, and we did consider different wearable cameras at the very beginning, for example smart glasses, so people wouldn't have to aim the camera themselves. However, we also want our application to be available to a broader population, so we first wanted to start with a phone application on Facebook Messenger, which is a very widely used platform. This is just the beginning; after we understand its use in the real world, we definitely want to try different platforms in the future.

Audience member: Thank you.

Lauren Burke: Hello, I'm Lauren Burke, and I really enjoyed your presentation. I know that you're focusing on accessibility, but what about others in the family who love technology? Look back 20 or 30 years and you see that people's biggest problem was privacy, and I guess that still applies today. But I wanted to ask from a different perspective, a hardware perspective: not using the internet whatsoever, what are the requirements for offline processing? Is there enough memory, is there enough computational power to do this if you were to go offline, or is this something you've only tested using online
resources?

Speaker: For now we only use online resources, and one main reason we use the Facebook API is that it's very convenient for recognizing the user's Facebook friends, so users don't have to collect the training data themselves. But it's a very good question, because if we could do all the recognition locally, that would actually help protect privacy; I totally agree with that, and it is something we should look into in the future from both a software and a hardware perspective. For now, I have to say I don't have a solution for how to address that, especially since a smartphone has relatively low power and limited storage, so it depends on the advances of future technology.

Lauren Burke: Maybe we'll catch up on our end, so that would be nice to see as well.

Speaker: Yeah, I'm looking forward to that.

Annie Ross: Hi, Annie Ross again. Really great presentation. I had some questions about the secondary information you gave: the name, and then things about the person, like their facial expression and their mood. I was wondering, first, what considerations you made about what to include, because I could imagine, especially for mood, that inaccuracy might cause some problems; and also whether you heard from participants which of that information they used.

Speaker: Thank you for the question. Because of the limits of this session, I couldn't present all of our work. Before we designed Accessibility Bot, we conducted a formative study with eight participants with visual impairments, and we asked them about their experiences, their challenges, and what information they need in social scenarios. We got different items, for example people's identity, their relative location, their physical attributes like how they look, and their facial expression. We ranked all of these based on the priority our participants gave them, and we conducted
our design based on that. If you are interested in more details, you can look at our paper. Thank you.

Annie Ross: Thank you.

Speaker: Thank you so much. [Applause]
