I think I got a signal.

Okay. All right. Good afternoon, everyone. Welcome to Smart Health Frontiers. Oops, Dilma got kicked out. Okay, I'm hoping we can get Dilma Da Silva back in. Eric, is it possible?

Hello? Yes, you are back in. Okay.

It seems I'm back again. Am I back in? Oh, wonderful. I'm not sure what happened. I am trying to connect from, you know, a spot that should be more reliable. But anyway, I'm at a conference, because I'm also giving a talk very soon.

Hi, everyone. Good afternoon. I'm Dilma Da Silva. I'm the acting assistant director for the U.S. National Science Foundation's Directorate for Computer and Information Science and Engineering. I'm so pleased to welcome you today to this symposium, Smart Health Frontiers: driving innovation with fundamental science and technology to improve the lives of older adults. Today we have three very distinguished speakers, all of whom are working at the cutting edge of biomedical technology. But before we get to the talks, I want to take a moment to emphasize how important the Smart Health program that funds their work is. We're hosting this symposium to celebrate the tenth anniversary of the Smart Health program, a multi-disciplinary collaboration between the National Science Foundation and the National Institutes of Health. The Smart Health program is designed to accelerate innovative research in this area, bridging the gap between technological and biomedical research to help transform health and medicine in the United States. A key component of this process is to create sustainable partnerships between technologists and biomedical researchers, so that computer scientists and engineers are engaged in the most critical biomedical questions and biomedical researchers are able to take full...

Oh, sorry. It just happened. She disappeared. Oh no. See the importance of networking in this world; high-quality networking is essential. Yeah, if you have questions on Smart Health, please send them in by message. Do we have her back?

So I am back, Wendy, but I think the wisest thing is for you to take over, please. I'm not sure what is going on here with my 5G hotspot, so let me just finish up by saying that this is a really important program, and I think by showcasing these three amazing talks you get a flavor of what this program has accomplished over the last 10 years. So, Wendy, please carry on with the description and please introduce our three very distinguished speakers.
All right, well, thank you, Dilma. And I would ask, I think our sign language folks got kicked out too, so if we could get our interpreters back in, that would be fabulous.

So I just want to say that we're really excited about this, and as Dr. Da Silva was saying, this program bridges the gap between the technological and biomedical research communities to help transform health and medicine in the United States. A key component of this process is to create sustainable partnerships between technology researchers and biomedical researchers, to ensure the research done by computer scientists and engineers is targeting the most critical biomedical questions, and that biomedical researchers can take full advantage of the advances in this science. As I'm sure you're all aware, artificial intelligence and other technological advances are poised to make widespread changes in multiple sectors of American society, yet the integration of these advancements has been slow in health-related fields. Today's symposium showcases three amazing projects which are using advancements in technology to improve the lives of older adults, who experience chronic health conditions at a much higher rate than other age groups. With the percentage of older adults in the United States projected to grow, the ensuing strain on the healthcare system underscores the urgency for innovative solutions. Our speakers today will discuss how advances in computer science can contribute to the management of a variety of aging-related illnesses and impairments.

So I'm going to move to our next slide and introduce our first speaker. Our first speaker is John Stankovic. Dr. Stankovic is the BP America Professor Emeritus in the Computer Science Department at the University of Virginia, and he was the director of the School of Engineering's Link Lab. He's received awards too numerous to mention, but I will say he's received Distinguished Achievement Awards and he's just an amazing researcher. He's a prolific author, is frequently cited by other scholars, and has over 68,000 citations to his work. He's going to present on learning and improving Alzheimer's patient and in-home caregiver relationships via smart healthcare technology. Dr. Stankovic.

All right, thank you, Wendy. So rather than restating what was just read as the title, I just want to highlight that this is about helping the caregivers of Alzheimer's patients. There's a lot of work on helping the patients themselves, but much less on the caregiver. It's a collaboration between Ohio State, Virginia, and Tennessee. It requires expertise from different areas: myself, as an expert in sensing systems and some areas of machine learning;
an expert in reinforcement learning; Karen Rose, who has worked with dementia patients for decades and is from the nursing school; and Kristina Gordon from psychology, because we need to make recommendations to caregivers to help them reduce their stress, and she works in, you know, behavioral science. Okay, next slide.

One quick slide on motivation. There are over 6 million Americans living with Alzheimer's, and projections for 2060 are close to 14 million. In particular, what's sometimes forgotten is that there are over 11 million Americans serving as informal caregivers, and they themselves often get sick. You know, they worry, they get angry, there's conflict and stress, and this reduces their own health and quality of life. So let's not forget the caregivers. All right, next slide.

I use this slide as a brief overview of what we did. If you look there at the patient sitting in a chair, he has Alzheimer's; maybe his daughter is helping him at home, and she experiences anger and conflict. If you follow the arrows around, we have a microphone in the environment. The microphone's job is to be very, very passive, so that people can go about their lives the way they normally do. There is a four-stage pipeline for dealing with the microphone outputs, but what's important is that we're looking for this anger, conflict, stress, and so on through voice, so we have to employ machine learning. Then, if we find that the person is angry or in conflict, we invoke the reinforcement learning to develop various adaptive recommendations. That's where that computer-like chip is, and then it sends that information down to a smartphone to interact with the caregiver and give recommendations on how to reduce the stress. This is learned over time, with feedback back and forth from the caregiver. I'm not going to talk about the cloud or any of that; we're skipping it, we only have 15 minutes. And I think what's important is that we actually implemented this and put it in 11 homes, and each deployment lasted a full four months. That's, I think, very important as well. Next slide.

Since this is NSF, I wanted to focus on some of the scientific questions that we worked on, informed by the medical problem. The first one is: how can we build passive sensing systems to detect negative emotions? This is complicated because it has to work in the presence of household noise, in particular TV, where there are lots of voices and some people have their TV on all day, air conditioners, and many others. The microphone is somewhere in the room, so it picks up the speakers' voices at different amplitudes. Different environments have different reverberation content, and that causes differences in the processing of the voice. And the context changes over time.
You can't just train it for one particular situation; they might buy a new couch, put new drapes in, and so on. So it's very complicated. Next slide.

In the speech processing part, we have this acoustic sound coming in, and it's actually a four-stage pipeline; I'm showing just two stages here. One is to identify the speaker, and this uses a neural net. We're focused on the caregiver, so we want to know it's the caregiver speaking and not the patient, or vice versa, and whether there are guests or the TV is on, and so on. Then the bottom stage looks for different moods like anger, conflict, and so on. Both of these require novel solutions, which we have papers on; I want to emphasize that without being able to go into details. To detect anger, we have a novel CNN architecture combined with out-of-distribution techniques. And for conflict, we have a rather complicated architecture that uses a generative-adversarial-network style of approach to detect conflict. Next slide.

Another question is: how and when do we intervene to mitigate these negative emotional states? Again, this is a very complex problem. We need the nursing partners to help us understand exactly what dementia patients are like and how they change over different stages, and so on. And we also need experts in behavioral science so that we can implement these various techniques, and we use the reinforcement learning to actually decide what to recommend to the caregivers. Next slide.

We developed the recommendations into different categories, which I list here and which you can read. This is what the reinforcement learning techniques will understand and learn: what works for a particular person. The adaptive part is very important, because it even changes over time, over the four months. And we found that encouraging words in the morning are actually very, very valuable, more valuable than most of the other categories. Next slide.

The solution we developed is called contextual bandit federated learning. The contextual part has to do with the idea that we need to understand what time of day it is, what emotions the people are feeling, and so on, and we make different recommendations based on that. The bandit part is the reinforcement learning strategy, and the federated learning is because we can collect information from different families and potentially even use knowledge from other families to help a particular family. This is also very complicated, because most federated learning does not deal with heterogeneous clients, and each of our clients has a different reward system. There are privacy issues, so we want to keep the data local on the individual clients' laptops and not share it in the cloud. There are asynchronous updates, which is also not common, and varying amounts of data per client: some clients need a lot of recommendations, others don't need so many. Our paper also does some technical things, proving upper bounds on what's called accumulated regret and its trade-off with respect to communication cost. Next slide.
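To make the contextual bandit idea just described a little more concrete, here is a minimal illustrative sketch. It assumes a LinUCB-style linear reward model with made-up context features and recommendation categories; the deployed system's actual algorithm, features, and federated aggregation are more involved and are described in the project's papers.

```python
# Minimal sketch of a contextual-bandit recommendation step (LinUCB-style).
# Illustration only: context features, arm names, and the reward signal are
# hypothetical, and the federated aggregation across families is omitted.
import numpy as np

class LinUCBArm:
    """One arm = one recommendation category (e.g., 'encouraging words')."""
    def __init__(self, dim, alpha=1.0):
        self.A = np.eye(dim)       # accumulated feature covariance
        self.b = np.zeros(dim)     # reward-weighted feature sum
        self.alpha = alpha         # exploration strength

    def ucb(self, x):
        A_inv = np.linalg.inv(self.A)
        theta = A_inv @ self.b
        # expected reward plus an exploration bonus
        return theta @ x + self.alpha * np.sqrt(x @ A_inv @ x)

    def update(self, x, reward):
        self.A += np.outer(x, x)
        self.b += reward * x

# hypothetical context: [morning, evening, anger score, conflict score, stress score]
arms = {name: LinUCBArm(dim=5) for name in
        ["encouraging_words", "breathing_exercise", "mindful_activity", "take_a_walk"]}

def recommend(context):
    return max(arms, key=lambda name: arms[name].ucb(context))

context = np.array([1.0, 0.0, 0.7, 0.2, 0.5])   # e.g., morning, elevated anger
choice = recommend(context)
arms[choice].update(context, reward=1.0)         # caregiver feedback becomes the reward
```

In a federated version, each family would run updates like this locally and only periodically share model parameters, never raw data, which is consistent with the privacy design discussed a bit later in the talk.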
Just a brief inkling of some of the data we're getting in terms of findings with respect to the recommendations. On the left there's CG2, caregiver 2, and it turns out that person doesn't really like mindful activities; they don't use those. Whereas CG3, on the right-hand side, really likes mindful activities. So this is kind of known, but the value is in understanding it and making sure it's adaptive for the individuals. Next slide.

Another scientific question really has to do with privacy; most of the time when I present this, people always ask questions about it. Obviously privacy is a function of personal preferences and the different psychological and emotional issues people are going through, and we do need technical support to achieve it. Next slide.

Privacy I want to break into two parts: the first is at recruiting time, the second is during deployments. At recruiting time, of the first 100 people we tried to recruit, 68 rejected the study, and their primary reasons for rejecting it are shown here. Being worried about privacy is in bold there; 25% of people said that. So it's still an important issue, obviously, but it's not the only one. Next slide.

If you go to deployment time: what can we do during deployment to help with privacy, and how will users react to that? Basically we implemented a strategy with many elements, including that users can set the start and end time for the system. Let's say it could be from 7 in the morning to 7 or 9, or 3 in the afternoon to midnight, and they can change that easily. They can turn off the system just by hitting a button on the phone, and they can turn it back on whenever they want. Also, any voices from non-registered speakers are not recorded, and we can explain to them how that works. The content of speech is not recorded; this was one of the main things that seemed to ameliorate their concerns about privacy. We're only looking at things like tone and pitch, and we show them wiggly lines: here's what the acoustics look like, this is what we're collecting, not the words.
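As a rough illustration of the "tone and pitch, not the words" point, here is a minimal NumPy-only sketch that keeps just frame-level loudness and a crude pitch estimate and throws the waveform away. The frame sizes and the autocorrelation-based pitch estimate are illustrative assumptions, not the project's actual feature extractor.

```python
# Sketch: keep only paralinguistic features (energy, rough pitch) per audio frame
# and discard the raw waveform, so the words themselves are never stored.
# Pure-NumPy illustration; a real pipeline would use a proper audio front end.
import numpy as np

def frame_features(frame, sample_rate=16000, fmin=70.0, fmax=400.0):
    energy = float(np.sqrt(np.mean(frame ** 2)))            # RMS loudness
    frame = frame - frame.mean()
    # crude pitch estimate: autocorrelation peak within the speech F0 range
    corr = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    lo, hi = int(sample_rate / fmax), int(sample_rate / fmin)
    lag = lo + int(np.argmax(corr[lo:hi]))
    pitch_hz = sample_rate / lag if corr[lag] > 0 else 0.0
    return {"energy": energy, "pitch_hz": float(pitch_hz)}

def process(waveform, sample_rate=16000, frame_len=400, hop=160):
    # 25 ms frames with a 10 ms hop at 16 kHz; only these summary numbers are kept
    return [frame_features(waveform[i:i + frame_len], sample_rate)
            for i in range(0, len(waveform) - frame_len, hop)]

features = process(np.random.randn(16000).astype(np.float32))   # stand-in for real audio
```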
We even show that to them at various meetings we have with them over time. And data is not shared across users, thanks to the federated learning technique. Next slide.

And so with privacy, it turned out that all 11 dyads said that privacy was not a problem by the end of the study. They had concerns in the beginning, as we've gone through, but at the end I think they saw the value of the system versus the privacy issue. Next slide.

Overall, I don't want to go through all of this, I won't have time, but we are seeing things like, on the left, various positive experiences. Two I want to point out: one is the box that says being conscious of stress. Just having the system there, having the recommendations coming, makes people aware that they are in distress or in conflict, and they do claim that the system helped relax them. On the right, we tried in some ways to stress the negative things and pull the negative issues out. Here again, I'm not going to go through all of them, but the two I want to mention are: they still felt the system was time-consuming; many of these caregivers are just swamped, and any little thing that adds something extra to do makes them feel like they're overworked. And sometimes stress is not related to caregiving, and we didn't really have a way of recognizing caregiving stress versus other stress, but I think we can probably do it. Next slide.

I want to make one more comment: most of this work was done during COVID, so we weren't allowed to meet with people or bring systems to people's homes. So we developed, I think, a very interesting and novel result: we can box up our system and mail it, and the caregiver just takes it out of the box and is able to put it together, caregivers with no technical experience. Across these 11 deployments it took them, on average, about an hour, 1.3 hours. The value is that we can now recruit nationwide; you don't have to have people within driving distance of your students, or within driving distance of whoever is going to deploy your system. And not only that, it also is much more robust and makes the system work and run in a much, much better way. So a lot of these techniques can be used post-COVID-19. Okay, next slide.

So finally, next steps. I do think we can detect stress a little bit better using psychophysiological parameters, but the problem with this is that now you're requiring them to wear, say, a smartwatch, and they may or may not do that, so that's a trade-off. We do feel like we try to develop solutions that at development time are so robust that they're going to work when we deploy them, but that doesn't always work, and how do we build a new methodology for doing that?
I think it's similar to civil engineering: they know how to put together, as strongly as possible, a design for a new bridge, and then they put the bridge up and most of the time it stands, almost always. And I think the adaptive personalization could still be enhanced. Okay, next slide. And this is just the list of the people from the three universities who contributed most to this project. Thank you.

Thank you, that was a fabulous presentation. Again, if people have questions, there's an email on the screen now, SCH.correspondence@nsf.gov. Please send your questions to that email address and we'll get them to the speakers. Thank you so much.

So our next speaker is Laurel Riek. Dr. Riek is a professor of computer science and engineering at the University of California San Diego. She also has appointments in the Department of Emergency Medicine, the Contextual Robotics Institute, and the Design Lab. Dr. Riek directs the Healthcare Robotics Lab and leads research in human-robot teaming, assistive robotics, embodied AI, and health informatics, and builds intelligent systems that work in proximity with people. Dr. Riek's current research projects have applications in acute care, neurorehabilitation, and home health, and further health equity through community-driven research efforts. Dr. Riek has received too many awards to note, from both NSF and other agencies, as well as multiple best paper awards. Dr. Riek will be presenting on cognitively assistive robots for people with mild cognitive impairment. Dr. Riek.

Thanks very much for the introduction. It's really great to be here today. I work on technology that is really keyed on supporting disabled people and their families. Oh, could you do the next slide, please? Yeah, thanks. We also do work, which is mostly what I'll be talking about today, that supports the clinical workforce, including clinicians, community healthcare workers, and professional caregivers. And we take a health equity and human-centered approach in all of our work, where we center the voices and ideas of people who are historically excluded from technology development, to ensure that any systems we create are well aligned with their needs and reflective of their ideas. Next slide.

The act of care work, which means supporting others and supporting oneself, is undervalued. People who engage in care work are overworked and underpaid, and many live below the poverty line, unable to make ends meet. What's worse is how invisible all of this work is. Care workers do so much to keep others afloat, but they're often taken for granted. A really nice paper draws attention to this invisibility and critically examines the role technology plays, or fails to play, in addressing this.
The authors highlight the complex dynamics and nature of care work and how adding technology risks exacerbating these social ills. Next slide, please.

So given this, one big question to consider is: why AI? Why robots? A lot of technologists frame things around care gaps and how AI is going to solve all these problems: it's going to augment human care work, it's going to empower disabled people, and so on. But ultimately the sentiments fall flat when you start to ask the why questions. Why are there care gaps? Why is there a shortage of care workers? Why is their work being undervalued? Why do disabled people face so many barriers to access and care? Once you start to learn the answers to these questions, you begin to realize the problem is systemic and societal. We don't value disabled people, and we don't value those who provide care work. Next slide, please.

So AI is basically a band-aid. It is not going to fix these major systemic and societal problems on its own; it's potentially going to create a lot more. However, we are at an excellent inflection point to pause and reflect on how we develop and use AI, and to consider if, when, and how it can support caregivers and disabled people. Next slide, please. When we connect with the communities we want to work in, when we bring them into the research and development process, when we give them ownership of technology design, when we center their voices and ideas, we have a much better chance of getting to where we want to go. Next slide, please.

For many years, my students and I have been working closely with people with dementia and mild cognitive impairment, their care partners, and clinicians to envision the future of dementia care. We've been particularly motivated by questions of access. We explore how we can use technology to extend the reach of clinical and social services, how we can design accessible and effective robot- and AI-delivered health interventions, and how to do this in a way that reflects the goals of our community partners. Next slide, please.

As we've heard, MCI and dementia affect approximately 50 million people worldwide, and will affect approximately 131 million people by 2050. MCI, which is a prodromal stage between regular aging and dementia, can affect a person's cognitive abilities, ranging from mild symptoms such as reduced concentration and challenges with prospective memory in the early stages to the inability to engage in instrumental and functional activities of daily living in the later stages. People with MCI and dementia require personalized and unique interventions to support their health and independence. There's currently no known cure or effective treatment for MCI or dementia, but there are behavioral and lifestyle interventions that can effectively slow the progression of the disease.
And there are many things we can do to improve quality of life and support care partners in the later stages. So we've been developing robots to support people across the spectrum. For MCI, we've been working with people with MCI and with clinicians to develop Carmen, a physically embodied robot that can deliver a practice-based and evidence-based intervention for MCI. Next slide, please.

We've also been working with people with advanced dementia and care partners to design robots that can make life more fun and enjoyable. We can't cure anything, and we're not trying to; our aim here is to provide enrichment and to improve quality of life. Today I'll mostly be focusing on Carmen and people with MCI, but I'll briefly discuss our other work at the end. Next slide, please.

The intervention used on Carmen is CCT, or Compensatory Cognitive Training. It teaches people metacognitive strategies to help strengthen cognitive functioning and minimize the impact of impairment on their daily lives. For example, we might encourage someone to make a habit of placing their keys next to the door when they get home, or to use strategies such as visual imagery to compensate for memory difficulties. Next slide, please.

We're adapting CCT to be delivered by physically embodied robots. The inspiration here is that robots can extend the reach of an intervention, and this is a theme across many of our projects: the robot can help with the homework between sessions with a clinician, to reinforce concepts at home. We hope that one day we can extend this work to the point where the robot chiefly delivers the intervention, to help extend its reach. Now, you might ask: why a robot? Why not an app? Well, something we've found both in our own work and in work by a number of other people in the field is that there's something unique about a robot's physical embodiment that helps people stay engaged with it when it delivers an intervention, and sustain working with it, much more so than with an app. Next slide, please.

So how do we translate an intervention which we know works well in a clinical setting onto a robot to be delivered at home? We know long-term interaction leads to better adherence and outcomes, but how do we do this for people with MCI, who have unique requirements? For example, they can be easily frustrated or overly stimulated. We conducted some formative research to explore this in concert with our clinical collaborators and with people with MCI, and this yielded a number of helpful design considerations for designing robots for people with MCI, and design patterns for how to translate clinic-based interventions into the home. Next slide, please.

One of our most striking findings was how clinicians emphasize rewarding a person's perseverance over their performance,
as they might not necessarily show improvement throughout an intervention. So here a robot could reward users for behaviors such as maintaining a streak or trying out new strategies. Next slide, please.

It's also important to connect the intervention to the real world, so that people can transfer what they've learned into their everyday lives. Clinicians might ask people to reflect on opportunities where they can practice a cognitive strategy, such as remembering names at a social gathering or a grocery list. Robots could also employ similar strategies. Next slide, please.

It's also important to include a person's interests in an intervention to make it enjoyable. Robots could do this, for example, by leveraging games or books that a person enjoys to make things more interesting. Next slide. It's also important to help people set intervention goals, to help them be more aware of its impacts. Generally clinicians will work really closely with people to set very achievable goals, and a robot could facilitate this by asking users to reflect on their cognitive concerns and helping them to identify potential solutions. Next slide, please.

Finally, it's very important to make robots cognitively accessible to people with MCI by minimizing cognitive demand. This can be achieved by keeping intervention content simple and succinct, and organizing information logically to help users maintain focus. Similarly, repeating information can help users review the material, so it's important to reiterate important points and ask people if they'd like anything to be repeated. Next slide, please.

So we've used these findings to develop our robot Carmen, the Cognitively Assistive Robot for Motivation and Neurorehabilitation. Carmen delivers compensatory cognitive training, over time and autonomously, to people with MCI at home. It gives people with MCI an opportunity to practice compensatory strategies such as remembering lists of words like grocery lists, using a calendar, or engaging in note-taking. Carmen is based on the open-source robot platform Flexi, which was developed by colleagues at the University of Washington. We made a number of modifications to the hardware, and we also wrote all of the software from scratch. Next slide, please.

Carmen uses a content controller to determine what intervention content to give to a person and how it should behave while delivering that content. It then considers a person's responses, via tablet and speech, and adjusts its behavior accordingly. For example, if a person completes an activity and performs poorly, Carmen might give them encouragement, whereas it might celebrate if they performed well. At the end of the day, the robot sends the interaction data it collected to a database, and each night we have a machine learning system that can adjust Carmen's behavior to prepare for the next session.
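To make the content-controller idea concrete, here is a minimal rule-based sketch. The activity names, scoring, and thresholds are hypothetical; the real Carmen controller also folds in the tablet and speech input and the nightly machine learning update described above.

```python
# Minimal sketch of a content controller in the spirit described above:
# pick the next activity, then respond to performance with encouragement or
# celebration. Activity names and thresholds are hypothetical examples.
from dataclasses import dataclass, field

ACTIVITIES = ["note_taking", "calendar_use", "word_list_recall"]

@dataclass
class SessionState:
    completed: dict = field(default_factory=dict)   # activity -> last score in [0, 1]

def next_activity(state: SessionState) -> str:
    # prefer an activity the person has not tried yet, otherwise the weakest one
    untried = [a for a in ACTIVITIES if a not in state.completed]
    if untried:
        return untried[0]
    return min(state.completed, key=state.completed.get)

def respond(score: float) -> str:
    if score >= 0.8:
        return "Great job! You remembered almost everything."
    if score >= 0.5:
        return "Nice work. Strategies like this get easier with practice."
    return "That was a tough one. Let's try it again together tomorrow."

state = SessionState()
activity = next_activity(state)          # e.g., "note_taking"
state.completed[activity] = 0.6          # score reported back from the tablet
print(activity, "->", respond(state.completed[activity]))
```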
Next slide, please. I'd like to show you a brief demo of Carmen teaching one of these CCT strategies to a participant in their home. This was during a week-long deployment of the robot. You can play the video, please. Oh, I'm not sure if there's audio. Let's try it again.

It can be helpful to have a notes section in your planner where you can quickly take notes throughout your day. Let's practice note-taking. We'll play the colors game to practice this note-taking. Oh, great. Great job, you remembered five colors while practicing note-taking.

Okay, next slide, please. So we ran a feasibility study, first with clinicians and then with people with mild cognitive impairment. Each used the robot for one week, and they also completed a companion workbook, which the robot instructed them on how to use. We collected a number of measures, including several questionnaires and usage data, and we also conducted semi-structured interviews with participants to understand their impressions, their expectations, and their experiences using Carmen. We learned a lot of interesting things, which I'll summarize briefly. Next slide, please.

Participants with MCI expressed a willingness to try strategies with Carmen that they had previously written off as impossible. For example, after learning the over-learning strategy with the robot, participants talked about using it to help remember information for a class they were taking. They also talked about using the note-taking strategy to remember names and using the mindfulness strategy to de-stress. This was really exciting, given that our clinical participants said that even if people with MCI decide not to incorporate every strategy in their lives, just learning and using a few new habits can lead to marked changes in cognition. Next slide, please.

Participants said that they wanted Carmen to exhibit greater degrees of autonomy. For example, they wanted the robot to be more involved and provide greater support when completing their CCT activities. This was really interesting and raises questions about how much autonomy we want the robot to have versus how much clinicians might want participants to do the work on their own. Next slide, please.

Our clinical participants talked about translating Carmen to other populations. For example, maybe it could support children with ADHD, where the child could complete activities with the robot before unlocking screen time. The flexibility of Carmen's system design enables extending its use quite easily, so we're very excited about exploring applications like this. Next slide. In future work, we might explore using wearables and privacy-preserving IoT sensors to detect real-world strategy usage.
So, for example, a user could remember their keys when they left the house, and Carmen could provide praise or encouragement and help connect the dots between the intervention and the real world. Next slide.

Very briefly, I want to talk about the work that we're doing at the other end of the spectrum, which is supporting people with late-stage dementia and their care partners. This work is much more exploratory. We're starting with the community first, and then we're building the technology outward. Unlike the prior project with Carmen, we don't have an intervention, and we don't necessarily expect to have one. Instead, by engaging in collaborative work with community partners, we can start to understand what the shape of future technologies in this space might look like. Next slide.

Another thing about dementia is that it can lead to isolation and a loss of agency as the disease progresses. Next slide. Informal caregivers provide the majority of the support, but as I mentioned earlier, they're overburdened and face a huge lack of social and financial support. Next slide.

Researchers across a lot of disciplines are exploring ways to support both populations. A lot of researchers in the space adopt a critical dementia lens, which examines dementia's social construction to understand embodied expressions of personhood. And in public health, researchers explore how social determinants of health can impact health equity, both for caregivers and for people with dementia, and explore methods to address them. Our work explores the intersection of these two approaches to dementia care: how we can design technologies that promote agency for people with dementia and alleviate caregiver burden, and how we can incorporate community healthcare practices into dementia technology design to advance health equity. Next slide.

We've been bringing these research lenses into our collaborations with dementia community healthcare workers, dementia care partners, and people with advanced dementia. We've explored a lot of different research questions and are starting to triangulate on some key concepts for how to design technologies that can support people with advanced dementia and their care partners. For example, we've learned a lot about how technologies can help support interactions between care partners and people with dementia to support joy and laughter and fun and engagement. Our care partners have built robots that support them in times of difficulty but also in times of joy, and have thought about ways to help them reconnect with their loved ones. We've been exploring how robots can be designed to help affirm agency, not expecting people with dementia to rely on their memory, but rather supporting more casual interactions that affirm their personhood. We've tackled questions like how technologies can support emotional well-being and task assistance and promote sensory experiences such as music and touch,
which are often well preserved in the later stages of dementia. And finally, we've been thinking about what it means for robots to reflect and mirror people's personalities, to act as a listener and as a student that draws on a person's wisdom rather than being a fire hose of information, and what that means for robot design. These perspectives and designs are coming from our community partners, so we're hopeful that we're on the right track as we move forward in our work.

Next slide, please. To wrap up, I talked about our collaborative work with people with MCI, care partners, and clinicians to design and develop future technologies like Carmen that deliver clinic-based neurorehabilitative interventions for people with MCI in their homes. I've also discussed our other work designing technologies with people with advanced dementia and their care partners. Our hope is that these technologies can help to extend the reach of interventions, help bring community care practices into the home, and overall improve the experience of MCI and dementia for all. Last slide. I'd like to thank our community partners, my students, and our sponsors, especially NSF. Thank you for your attention; I look forward to your questions later.

Thank you so much. Our final speaker, last but definitely not least, is Octav Chipara, who's a professor in the Department of Computer Science at the University of Iowa. His research interests lie at the intersection of systems, networking, and machine learning. His interdisciplinary research aims to develop novel methods to evaluate, optimize, and personalize hearing aids. He too has received many awards, including a Test of Time Award at SenSys in 2022, and has had multiple best paper awards. Dr. Chipara is going to present on personalizing hearing amplification. Okay, next slide.

Thank you very much, Wendy, for the introduction. It is a pleasure to talk about some of the work that I have been doing with my collaborators on personalizing hearing aids. Next slide. Let us start by considering the prevalence of hearing loss, to get an understanding of how important this problem is. If we look at the United States, about one in six adults reports having some trouble hearing; in the aggregate, this amounts to about 37.5 million Americans who suffer from hearing loss. Next slide. If we project this worldwide, the World Health Organization projects that about one in four people will have some problem hearing by 2050; the increase in the prevalence of hearing loss is primarily driven by the aging of the population. If we look at the impact that hearing loss has on people, hearing loss tends to shrink our worlds. It really has a profound impact on our quality of life.
Research studies have shown that it can be associated with social isolation, anxiety, and depression. Fortunately, we do have interventions that tend to be quite effective. We have hearing aids that are very effective at dealing with hearing loss, particularly for the older population, as well as the associated cognitive issues. The reality, though, is that even though these interventions are available, they are not as widely adopted as we would like them to be. Even in the United States, only about one in four people over the age of 20 who could actually benefit from using a hearing aid actually uses one. There are many factors that contribute to this situation; three of the more important ones are these. First of all, they tend to be quite expensive: if you buy a hearing aid, it costs about $2,500 when it comes bundled with visits to the audiologist. Speaking of audiologists, well, we don't have enough of them, particularly in rural regions of the United States. And despite our best engineering efforts, a significant fraction of users are still unsatisfied with the performance of hearing aids, particularly in noisy conditions.

One of the things that I'm really excited about is the recent approval by the FDA, next slide, Wendy, of over-the-counter hearing aids, and one of the reasons I'm really excited about these devices is that they come at a much lower cost than traditional hearing aids. Unfortunately, the reason they come at this lower cost is that they come with no visits to the audiologist. It is up to the end user to configure how they amplify sounds and to make the best out of them. At the University of Iowa, we have developed various methods to do over-the-counter personalization. But before I present some of our work, I want to go through the traditional method that an audiologist uses to fit hearing aids. The first step an audiologist typically takes is to perform a hearing test, and the result of that hearing test is an audiogram that assesses how well a person hears in different frequency bands. Once they have this audiogram, they use what is called a fitting rationale, such as NAL-NL2, to determine the initial configuration of the hearing aid; this configuration defines how sounds are amplified. Over several visits back to the audiologist, this configuration is then refined based on feedback from the user to better meet their hearing needs. Next slide.

At a very high level, the way hearing aids work is that they take the incoming sound and divide it into multiple frequency bands, or channels, and each of these channels is amplified by a given gain. These gains are determined by the amount of hearing loss that a subject has. Next slide. However, you usually don't get just one gain number; you get a number that changes depending on how loud the sounds are. If you have soft sounds, the gains tend to be higher; if the sounds get too loud, the gains are reduced to maintain the comfort of the subject. The take-home message from this slide is that here I'm showing a hearing aid that uses only two channels, but in reality a hearing aid can use anywhere from six to eight channels, resulting in a large space of possible configurations that we need to pick from.
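As a small illustration of this level-dependent, per-channel amplification, here is a minimal sketch with two channels. The band edges, gain values, and the soft/loud anchor levels are made-up numbers for illustration only, not a prescription from NAL-NL2 or any other fitting rationale.

```python
# Rough sketch of level-dependent, per-channel gain (multi-band compression).
# All numbers below are illustrative, not clinically meaningful.
import numpy as np

# two channels for illustration; real hearing aids use roughly six to eight
CHANNELS = [
    {"band_hz": (125, 1000),  "gain_soft_db": 25, "gain_loud_db": 10},
    {"band_hz": (1000, 8000), "gain_soft_db": 35, "gain_loud_db": 15},
]

def channel_gain_db(channel, input_level_db):
    """Interpolate gain between a 'soft' (50 dB) and 'loud' (80 dB) input level."""
    soft_level, loud_level = 50.0, 80.0
    t = np.clip((input_level_db - soft_level) / (loud_level - soft_level), 0.0, 1.0)
    # softer sounds get more gain, louder sounds get less (compression)
    return (1 - t) * channel["gain_soft_db"] + t * channel["gain_loud_db"]

for level in (50, 65, 80):   # soft, moderate, loud input in dB SPL
    gains = [round(channel_gain_db(ch, level), 1) for ch in CHANNELS]
    print(f"{level} dB input -> per-channel gains {gains} dB")
```

Even in this toy version, each channel has several free numbers, which is why the full configuration space that users would otherwise have to search grows so quickly.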
Next slide. So if we are to support users in tuning their over-the-counter hearing aids, we need to come up with methods that enable them to do three key things: navigate this large space of possible configurations, find the one configuration that meets their individualized hearing needs, and use interfaces that allow them to give us feedback about their preferences. Next slide.

At the University of Iowa, we have developed an approach to tuning and personalizing over-the-counter hearing aids. Our solution is data-driven and based on machine learning. The way we approach this problem is that we start with population statistics about the characteristics of hearing loss, and then we identify a population of users that we want to target with an over-the-counter hearing aid. Then we account for their individual preferences: we take this large space of possible configurations and discretize it into a small number of presets that we expose to users, via user interfaces, to choose among. In the following, I will provide some additional details about this process. Next slide.

If you remember, when I introduced the process that audiologists use to prescribe hearing aids, I mentioned that they use a fitting rationale, NAL-NL2, that takes in an audiogram and maps it to a hearing aid configuration. It turns out that if two people have similar audiograms, unsurprisingly they get very similar hearing aid prescriptions. This can be a problem, because audiograms do not fully characterize a person's hearing loss, and also because each user has a subjective assessment of how they prefer sounds to sound. So what we have done at Iowa is develop methods that take in an average hearing aid configuration and augment it with the possible deviations from that configuration that are present among users with similar hearing loss. Next slide.

Using all these configurations, which account for the variability of users and for how hearing loss varies within a population, we use genetic algorithms to try to identify a small subset of presets, discrete configurations,
that are good enough for a large number of people within our target population. One of the very interesting results we are obtaining is that a small number of presets is actually sufficient to meet the needs of a significant fraction of the users we are considering. For example, when we consider a population with mild to moderate hearing loss, we can meet the needs of those users with approximately 30 presets and cover 80% of the users. Now that we have these few presets, we can present them to the user, and we have looked at various user interfaces we can expose to the user. We have considered a user interface based on pairwise comparison, where the feedback from the user is "do you prefer A or B," and this feedback is fed into a machine learning algorithm that can identify the best configuration for the user within a small number of pairwise comparisons (a rough sketch of this idea appears at the end of this talk). Next slide. Similarly, there are other types of user interfaces one can develop. Here we have used dimensionality reduction techniques to take this large space of configurations and project it into two dimensions, and then we provide users with a slider-based interface to navigate this two-dimensional space. Next slide.

I am very excited about the work that we have done, but I'm even more excited about some of the future opportunities available to us. We are considering two different directions to push our work on over-the-counter hearing aid personalization. One is what we are calling shallow hearing aid personalization, where we are trying to extend the reach of personalization without changing the signal processing algorithms used by hearing aids. We are specifically interested in how we can personalize other aspects, such as noise reduction or beamforming, and in moving from focusing just on the initial configuration of the device to supporting continuous refinement. Previous slide. We are also interested in developing user interfaces that are context-sensitive and goal-oriented, and those will be among the things we will be pursuing in our research. The other really exciting direction is the opportunity to perform much deeper personalization by replacing the current signal processing algorithms with deep neural networks. There is already some work in this area; however, people have mostly been using audiometric measurements to drive the fitting process. In contrast, we are looking at using EEG measurements, which better capture brain processing, and using that feedback for training the DNNs. Next slide. I want to thank all my collaborators and students who have participated in these projects. Thank you very much.
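Returning to the pairwise-comparison interface mentioned above, here is the rough sketch promised there: a toy example of learning a preferred preset from "do you prefer A or B?" answers. The Bradley-Terry-style update and the preset names are assumptions for illustration; the group's actual learner and pair-selection strategy are designed to converge in very few comparisons.

```python
# Toy sketch of preference learning from pairwise "A or B?" answers,
# using a simple Bradley-Terry-style score update over a handful of presets.
# Preset names, the learning rate, and the pair-selection rule are illustrative.
import numpy as np

presets = ["preset_%d" % i for i in range(6)]     # e.g., six candidate gain profiles
scores = {p: 0.0 for p in presets}                # latent "how good it sounds" score

def update(preferred, other, lr=0.5):
    """Bradley-Terry gradient step: raise the winner, lower the loser."""
    p_win = 1.0 / (1.0 + np.exp(scores[other] - scores[preferred]))
    scores[preferred] += lr * (1.0 - p_win)
    scores[other] -= lr * (1.0 - p_win)

def next_pair():
    """Ask the user to compare the current best against its closest competitor."""
    ranked = sorted(presets, key=scores.get, reverse=True)
    return ranked[0], ranked[1]

# simulated session: the user answers a few A-versus-B questions
answers = [("preset_2", "preset_0"), ("preset_2", "preset_5"), ("preset_4", "preset_2")]
for preferred, other in answers:
    update(preferred, other)

best = max(scores, key=scores.get)    # configuration to load onto the hearing aid
print("estimated best preset:", best)
```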
Thank you; thanks to all of our speakers. We have some questions, so I'm going to start with those. And actually, Dr. Chipara, I'll start with you. Amplification only works for some types of hearing loss; distortion is a complex issue. Do you have a way to address distortions?

We don't have a foolproof way of handling distortion. We are working on better ways of quantifying the impact of distortions, and the trade-offs people have between how much distortion we introduce and their personal preferences. For example, some people would like longer processing delays but fewer artifacts, whereas other people would prefer much shorter delays and latency in the signal processing, but accept lower quality in the denoising.

Thank you. And now I have a question for all three. People are wondering: how do you recruit participants for this, and what's the major motivation for study participation? Dr. Stankovic, I'll let you start.

So we actually recruited primarily, initially, through a dementia center at Ohio State, but then because of COVID we went online and used social media, different kinds of social media, to recruit. And the motivation really was that caregivers are overwhelmed; they need help and they know they need help. When this study was first introduced, I got so many calls from people saying, can I have the system now? They feel like they can improve their lives and reduce their stress. That's their motivation.

Thank you. Dr. Riek?

Yeah, I mean, I agree. We get a lot of calls from people who are really excited about the potential for these robots to be supportive, everywhere from people at the MCI stage to those with advanced dementia. Definitely care partners and informal caregivers are really excited about this work, and professional caregivers as well, who are looking for more ways to help with people's engagement. We've been working a lot with our collaborators on thinking about ways to support new activities people can engage with, and that's been really exciting. I think we've really learned a lot from these collaborations. We've spent about eight years building these very strong community ties, and we've been really fortunate; these collaborators have given us so many ideas.

Great, thank you. And Dr. Chipara?

Yeah, we are really lucky here to have a very good Communication Sciences and Disorders department that I collaborate with, and lots of our subjects come from a local registry of people with hearing loss.
Everybody is really excited to participate in research and lend their time for us to try to develop better methods to improve sound amplification.

Thank you. Dr. Stankovic, one for you: how do you assess and ensure the accuracy of the system's speech categorization, knowing whether it's actually an angry voice or not?

Yes, this is a very good question, and complicated. Most of the time, people develop these models based on datasets, test them, and then deploy them, and once they deploy them, they have no idea what's really true. One option is to have an EMA-like system where you ask the caregiver, in our case: were you angry at this point? And they would say yes or no, so the caregiver is considered ground truth. However, this is a good example of needing behavioral psychologists and dementia experts, because it's clear that many times people do not recognize their own emotions. So when the caregiver says, no, I wasn't angry, were they really not? In addition, we were able to take the audio clips where our algorithm said they were angry, along with what the caregiver said, and then label them after the fact as well. So now we have three different sources of assessment, and we find all sorts of mixed results: our algorithm says angry, the person says no, and the labels say yes, and all different combinations. So we're really studying this carefully now and trying to understand what we can learn from the fact that we have these multiple ways of assessing accuracy. And I think we're still far from having, you know, highly accurate algorithms.

Thank you. And so, we're at the end of our time, and I want to say thank you. I think this has been such a wonderful example of how foundational science is really addressing these important societal problems, you know, hearing and cognition and caregiver stress and dementia. These are really important societal issues, and I think you all have given us such great examples of how the NSF science is driving that forward, in partnership with our biomedical community. So with that, I'm going to say thank you. This is the first of our Frontiers series, so look forward to more later this year. Thank you. Thank you. Thank you.