So, here — are you going to make the intro? Correct, yes. Alright, we'll spotlight you now, and enjoy your webinar.

Hello, I'm Henry Kautz, Division Director of Information and Intelligent Systems in the Directorate for Computer and Information Science and Engineering at the National Science Foundation. I'm very pleased to host our speaker today, Odest Chadwick Jenkins. Chad is a professor of computer science and engineering at the University of Michigan, where he also helps direct the Robotics Institute. His research addresses problems in interactive robotics: human-robot interaction, robot perception, and robot learning from demonstration. He has also helped initiate very innovative work on undergraduate education in computer science using robotics. He's been the recipient of many honors, including the Presidential Early Career Award for Scientists and Engineers (PECASE), which is the highest award — or I guess next to the highest award — we have in this country for scientists and engineers. He also is editor-in-chief of the ACM Transactions on Human-Robot Interaction. Some of you may have met him at our last CISE Committee of Visitors meeting; he's been a great asset to the general CISE community and to the National Science Foundation as well.

I'd like to remind people in the audience that you can enter questions at any time: click on the Q&A tab — not the chat tab, the Q&A tab. At the end of Chad's presentation I'll moderate a Q&A with him. For those of you who are NSF employees, you can also join us this afternoon for an informal office hour with Chad. And with that, I'd like to turn it over to Chad. Chad.

Thank you so much, Henry. This is an immense privilege and pleasure, and I'm grateful for that wonderful introduction. I do have a lot to say, so I'm going to go at a pretty fast clip. These days when I give talks I'm like one of those debaters — I just say everything as fast as possible to get it out there, and that will lead to further discussion. So apologies if it's a little too much, but I do want to make the most of this opportunity to speak to this very important and amazing audience.

Before I do that, though — I found out this morning, and I just want to recognize Andrea, who left us. I had the great privilege to work with Andrea through CRA-WP. She was a great steward of our field. I didn't get enough opportunities to work with her, and I'm grateful for having had the chance to work with her to help provide more opportunities in our field and help make our world a better place. So for those of you who knew Andrea —
my heart goes out to you, and I just wanted to take a quick moment to say that.

So, jumping right into it: I'm the leader of my research group, the Laboratory for Progress (Perceptive Robotics and Grounded Reasoning Systems), and Associate Director of the University of Michigan Robotics Institute. I've been in computer science for a while, and in AI and robotics for a while. With this picture of me, here's a quick question to think about: what do I get asked the most when I'm just walking around campus? When people see me, what do you think is the most popular question I get? Feel free to enter it in the chat or just think about it — in the classroom I've learned that I should take a step whenever I ask a question so people can think about it, so I'll take a quick step.

All right, I hope you have your answers. The question I get the most is whether I'm with the football team. And I basically have to say no — if you ever see a real Division I football player, you realize how much I'm not one, even though I'm still a sizable dude. The actual answer is that I was with the football team, way back in 1988. I played football; I was not good at it, which is why I'm glad there were other opportunities for me in life. One of those great opportunities has been to be a roboticist and to enter the field of computing, and that has really been enabled by the investments the nation has made in me, from a number of different agencies, for which I'm incredibly grateful — and by work that doesn't always get recognized, by the people at NSF. I'll give one big shout-out to Doug Fisher: if you haven't seen Doug Fisher's blog posts on the CCC, I would highly recommend them. I know the people at NSF do a tremendous amount of work and are oftentimes undervalued, but it is a tremendous asset to the nation, and that's something I'm incredibly grateful for — because when I was fourteen, right there, I never could have imagined this was a possibility for me.

In fact, if I look back to when I was a kid — this is me as a kid; I don't know if my dad has seen this — this is us in Alabama, where we lived. My dad worked for the Federal Bureau of Prisons, so we moved around to all of your favorite prison towns, oftentimes with an L in the name: Lewisburg, Leavenworth, Lompoc, Lexington. That's where we lived; this one happened to be in Alabama, in Talladega.
If you had asked my dad what I was going to end up doing — which is why I played football — football really was the path of opportunity. My dad grew up in Beaumont, Texas; my mom grew up in Port Arthur. Football was the way to come up in the segregated South, because professional opportunities just weren't there. In fact, a lot of my dad's friends had to play football in the Prairie View Interscholastic League, the segregated league where they couldn't compete against white high schools. My dad saw those examples as the way to come up in the world. My dad's good friend Mel Farr — yes, that Mel Farr; if you're a Detroiter you know him from his car dealerships. He also knew Bubba Smith — yes, that Bubba Smith from Police Academy and all sorts of movies. That was what my dad saw, for his generation, as the path to opportunity.

But for me, I saw something different, and it started right in front of that TV in 1981, and it can be condensed into one word: Atari. My life was all about Atari. Me and that TV spent a lot of quality time together, later with the Super Nintendo and the Sega Genesis. It was the root of a lot of bad grades, I have to say, but it was inspiring. I didn't know whether I was going to be good enough to be a computer scientist, whether I was going to be smart enough, but I really loved video games — how do I make these, and what is this machine, and what does it do that produces these magical results on the screen? So when I went to college I said, let's give this a try, I'm going to try computer science — and it has been an amazing ride since then. I couldn't have imagined in high school that I would get to work on incredible graphics projects, work with robots, and get a PhD, with great mentors like Jessica Hodgins, who took me in at Georgia Tech and gave me my first research experiences. Maja Matarić was my PhD advisor. When I was at Brown, I was fortunate to have that opportunity — it came from Philip Klein just talking to me at CVPR, saying, hey, what are you doing, and taking the opportunity to reach out — and to have great mentors like Michael Black and Tom Dean. Tom Dean was the first one who said neural networks might actually be viable, and I was like, really? But he said it anyway.

This pathway has just been filled with great experiences and great people. So when students come into our lab wanting to see the robots and ask me, is this a real job, can I be a roboticist? — I say absolutely, yes, you can do it. You just have to know that it's possible.
And this leads to kids asking me all sorts of questions: will robots take my job, are you making R2-D2, can a robot bring me a drink — all of those things, and more questions than I can typically answer, which is one of the reasons I'm really bad at email; sorry if I've dropped emails, or even things that are more important. I'm trying to do my best. But I want to take a subset of these questions and address them in this talk: will robots take over the world, where's my robot, and maybe we'll get to "can a robot bring me a drink" — we'll see.

Let me take the first one: will robots take over the world? The answer is no, not really. I don't think robots are really that smart. Maybe by my grandchildren's grandchildren they may reach human-level capability, but I don't see it yet. More importantly, we program robots: we can tell robots what to do, and we can make them help humanity, and that's really what I think our goal should be.

I've only given one TED talk, and I did it with Henry Evans; it was one of the most inspiring opportunities of my career. This is Henry working with people from Willow Garage — the old Willow Garage. Henry suffered a stroke when he was about 40, and he requires caretaking from lots of people to do just the basic things in life: eating a meal, scratching an itch, daily hygiene, caretaking of the house, all of those things. But Henry happened to see some work being done at Willow Garage with Charlie Kemp's group at Georgia Tech, and they started making robot interfaces for Henry to use to do some of those basic things, like consuming a meal, scratching an itch, or, as you'll see in a second, being able to shave. And I thought this was great, because they were using some of our work to do it: they took the work we had created when I was at Brown University to make interfaces that Henry could use. Henry has the basic ability to control a mouse cursor, so you could quickly make web interfaces that Henry could use on accessible computing devices to control robots, and this gave us a glimpse into what this could look like. It was truly amazing to see. And it really built off of something that came out of my lab, which is Robot Web Tools.
Robot Web Tools is really about how we can create TCP/IP- and HTTP-level interoperability for robotics. The foundation of it is something called rosbridge, which is meant to be an application-layer, client-server protocol: minimal, simple, general, and enabling interoperability across the wide range of things we expect robots will need to connect to in our ecosystem. That could be the Robot Operating System, LCM, and other kinds of middleware; web and cloud systems; different programming environments — MATLAB, Java, Prolog, who knows — as well as embedded systems that can't take on massive software dependencies. So really, how do we start to create the kind of ecosystem we've seen with the Internet and the World Wide Web, and how can we help robots be a part of that?

So I was glad when my colleagues said Henry wanted to meet me and see if we could do something, because Henry wanted to fly a drone, and I was like, oh, totally, I can make that interface. I happened to be on sabbatical at Willow Garage at the time, in Menlo Park, California, so I drove to the Palo Alto mall, bought an AR.Drone, and in a day — actually an afternoon — I had this web interface, nice and big, with big buttons Henry could use through his devices for working with the computer. Then I took it up to his house in the hills of Palo Alto, and this was our first flight. It was just amazing to see him fly — for somebody who had lost mobility, to be able to move around and regain that through robotics, through something we produced. Henry was a much more assertive pilot than I usually am: I'm like, I don't know what this drone is going to do, but Henry's like, let's go for it. He went off and saw the solar panels on the roof, which he had never been able to see before; he was able to fly into the garden and look around in a way that he otherwise couldn't. And even though this was just a prototype result, it gave us a sense of what this could be and what it could look like, which I thought was amazing.

And we built on that. Putting this protocol out there — the rosbridge protocol — we've seen lots of groups use it to create all sorts of new and different types of interfaces, so it's been pretty popular and pretty prolific, which has been great to see. All of this is available on GitHub through the Robot Web Tools organization. We've seen steady rise and growth; Jihoon was a student of mine who led the project up to 2018, and we've seen a great uptick in use. These upticks usually occur around paper deadlines — the roboticists who just got their papers in for IROS download Robot Web Tools. We saw a similar effect in 2015, between the paper deadline and the video deadline: we see these spikes in interest and downloads.
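Since the talk describes rosbridge as a minimal, JSON-based, client-server protocol, here is a rough sketch of what those messages can look like, built in Python. The topic names, the velocity values, and the ws://localhost:9090 address are placeholders for illustration, and a real client would send these over an open WebSocket connection to a rosbridge server rather than printing them.

```python
# A minimal sketch of rosbridge-style protocol messages. Each operation is a small
# JSON object; a web page, phone app, or embedded device can speak this without
# taking on the full ROS software stack.
import json

# Ask the server to forward odometry messages to this client.
subscribe_msg = {
    "op": "subscribe",
    "topic": "/odom",
    "type": "nav_msgs/Odometry",
}

# Announce that we will publish velocity commands on /cmd_vel.
advertise_msg = {
    "op": "advertise",
    "topic": "/cmd_vel",
    "type": "geometry_msgs/Twist",
}

# Publish one velocity command, e.g. from a big friendly button in a web interface.
publish_msg = {
    "op": "publish",
    "topic": "/cmd_vel",
    "msg": {
        "linear": {"x": 0.2, "y": 0.0, "z": 0.0},
        "angular": {"x": 0.0, "y": 0.0, "z": 0.3},
    },
}

for m in (subscribe_msg, advertise_msg, publish_msg):
    print(json.dumps(m))  # in practice: send over a WebSocket to ws://localhost:9090
```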
Not only that: if you look on GitHub at the main Robot Operating System repository, which is used all across robotics, the traffic for rosbridge and Robot Web Tools is usually about half of that — and that's a lot of traffic, given what we've seen. So it has been really cool to be able to enable this level of interoperability and capability in robotics.

When I tell students that story, they're like, wow, this is cool and awesome and inspiring, and it feels like Silicon Valley — either Silicon Valley the place or Silicon Valley the TV show, depending on who it is. But inevitably this comes down to students asking: all right, we've done all this — where's my robot? Where is this thing that's supposed to be a part of my life, doing things for me and helping us? And I tell people: your robot is already here. If you're looking for your robot, just look around. We've got drones, we've got autonomous cars, we have robots working in manufacturing, telepresence robots. But usually when people think of a robot, they think of what we call mobile manipulation robots — robots that can work at a human scale. Every faculty candidate whose work touches on robots shows Rosie the robot from the Jetsons; I think that's what people are looking for, and those are really the kinds of robots we're talking about.

At this point I have to remark that one of my favorite shows of all time is The Price Is Right. I love The Price Is Right; when I have a bad day on campus, I come home and watch it, because it just makes me feel good. So in that spirit, if I had these items up for bid: how much do you think these robots cost? Just make a guess — feel free to think about it while I take another step. All right, and I can tell you the answer is not $1.

So: the Willow Garage robot, when I bought it, was about $400,000. That's more than a lot of people's homes. When I got to the University of Michigan in 2015, my first purchase was a Fetch mobile manipulation robot, which was about $100,000. Compare that to the first robot I actually really worked with, the Robonaut, which I'm guessing was around a million and a half dollars when they made it around 2000 or 2001. A few years back I extrapolated: I took my tools from linear algebra — plotting, taking data points, and fitting a polynomial to them.
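As a back-of-the-envelope illustration of the extrapolation being described, here is one simple way to fit a trend to the approximate prices just mentioned. This is only a sketch: the year assigned to the PR2 purchase is my assumption, and a straight line is fit to log-prices (an exponential-decay trend) rather than whatever fit was actually used.

```python
# Rough price-trend extrapolation from the approximate figures mentioned in the talk.
import numpy as np

years  = np.array([2001, 2010, 2015])             # Robonaut, PR2, Fetch (years approximate)
prices = np.array([1_500_000, 400_000, 100_000])  # US dollars (approximate)

# Fit a line to log10(price) vs. year, i.e. an exponential decay in price.
slope, intercept = np.polyfit(years, np.log10(prices), deg=1)
predicted_2020 = 10 ** (slope * 2020 + intercept)
print(f"extrapolated 2020 price: ${predicted_2020:,.0f}")
# Lands on the order of a few tens of thousands of dollars, the same ballpark as
# the guess quoted next in the talk.
```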
From that, I would guess that your robot — the robot that's actually going to be affordable, that you could use — was going to show up at around 2020, at about $40,000. And that was roughly right. We've seen robots like Hello Robot's mobile manipulator for the home at about $17,000; Boston Dynamics has its robots; and I'm a big fan of the Agility Robotics Digit, which comes in at about $250,000. We're going to see robots at this price point, and we're going to be able to produce more of them. They only made, I think, about three to five of the Robonaut; of the PR2s, I think there were 60 produced; Fetch robots I see everywhere — I'm sure hundreds of those have been made. When we start seeing economies of scale, with robots made in the thousands or tens of thousands, those costs are going to come down, and we're going to be thinking about programming robots the same way we program computers in the coming years.

But this is only the hardware. If we continue the computing analogy: what do we do to make them autonomous, to actually put the computing on board? When I was a student, it was all remote control — I would just exchange my embodiment for the robot's embodiment when I put on a head-mounted display, and that's not me delegating tasks to the robot. What we've seen over the last 20 years or so is that we can start delegating tasks to robots — basically having robots "put that there." Think about what an autonomous car does: it comes to location A, picks you up, and drops you off at location B; you are really just putting that there. Mobile manipulation robots essentially do the same thing: they pick something up in one location and drop it off at another.

What we want is to have a robot do a task for you — to basically say, this is what I want the outcome to be, this is what I want you to accomplish — and have the robot reason over everything it has to do to accomplish that goal and then carry those actions out. Similar to how digital computing made information programmable, robots are going to make your world programmable. And how are we going to do that?

We often think about the different types of environments our robots have to handle, but I would say the kitchen is still a major frontier for robotics: the diversity of objects, the form and function of all of those objects, and dealing with the occlusions and the uncertainty.
This is still a major challenge. But our goal is to develop computing systems — software systems — that allow any capable robot to be shown, to be trained, to perform any task in any environment. That's really where we're going. Typically the area that's been thinking about this is learning from demonstration. When I do my literature review, robot learning from demonstration starts with the MIT copy demo, which came out of Marvin Minsky and Patrick Winston. If you look back on it, what it did was this: you would go up to the robot and demonstrate a scene, the robot would perceive that scene, and then it would reproduce the scene it had been shown. That's the basic baseline demo. And this is something we're trying to push further and further: have robots not just copy a scene, but understand what you are really trying to show — what is the goal being demonstrated? That really is the essence of learning from demonstration.

But learning from demonstration is just one possibility. What is the paradigm we should use to program robots? It won't be the case that robots just learn and that's it, case closed. There's a whole paradigm that has to go around it. It could be cognitive architectures, it could be traditional programming languages, it could be different types of what we call interactive task learning systems. In recent years people have said the ultimate task learning system will be machine learning based — it will be neural networks; transformers are hot right now. But I would caution: even if your artificial intelligence is right 99% of the time — and you are seeing those kinds of accuracy rates — what happens in the 1% of the time that it's wrong, and who pays the cost for that? Robert Williams, for example, was arrested by the Detroit police based on a bad facial recognition hit. What happens when we put this into practice and you have failures one time out of 100? I don't know if people are going to be willing to tolerate that. What's going to happen to the people whose work is being replaced, and maybe not reinstated? We've seen all sorts of issues with bias and toxicity in these kinds of learning systems — who is training them, what is the product, and who is impacted by the products?

So what I would argue is that we should think about next-generation learning from demonstration in a different way.
I can't say it's going to be easier to program, but I think we can make it more performant and scalable to new tasks. The way I think of it — similar to the MIT copy demo — is that we want robots that can learn from in-the-workplace demonstration: I just want to go to the robot and show it what I want it to do. It takes the advances we've seen in 3D robot perception and 3D semantic mapping and combines them with declarative, planning-based programming paradigms, and this is what my group has been calling semantic robot programming.

What we want, in this case, is to be able to go up to the robot as we might to another person and say: here is this arbitrary thing I'd like to show you, and I want you to understand it. We build on combinations of neural networks and probabilistic inference to perceive that scene, represented, let's say, as a goal in PDDL — as a scene graph — in a way that expresses the goal condition we'd like the robot to understand. Once that goal has been specified, if the robot comes back to that scene at a later time and it's been messed up or moved around in different ways, the robot can perceive the current scene as well, represent it as a scene graph in PDDL, and then figure out how to transition from the current scene into the goal scene using task and motion planning methods — an area where we've seen great advances and where a number of reasonable options are still emerging.

So in this case we're not SSHing into the robot and telling it "run this, do that." We just show the robot what we want; the robot then understands "this is what you want" and reasons on its own through all the tasks and motions required to accomplish it. The robot has its semantic understanding of the world, and we establish a common ground that both I and the robot can understand. That, I think, is an interesting way to program robots.

We did this on a number of scenes — this is work from back in 2018. In the upper left-hand corner of each square you're seeing the goals that were shown to the robot, and then the robot executing and carrying out those tasks in a number of different environments. For the sake of time I'll just say the videos are online — trust me, the robot succeeded; it's able to accomplish these tasks. But the thing I've been hand-waving over is that, for this to work, robots have to be able to perceive in cluttered scenes; we have to be able to estimate in clutter.
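To make the goal representation described above a bit more concrete, here is a minimal, hypothetical sketch in Python: a demonstrated scene is reduced to a handful of symbolic relations (a tiny scene graph), and those relations become the :goal of a PDDL-style planning problem, with the later, messed-up scene as the :init. The predicates, object names, and domain name are invented for illustration and are not the representations actually used in this work.

```python
# A toy illustration of "show the goal, let the planner figure out the rest."
def scene_to_predicates(scene):
    """Turn a {object: supporting_surface} snapshot into (on obj surface) facts."""
    return {f"(on {obj} {support})" for obj, support in scene.items()}

# What the robot perceived during the human's demonstration (the desired state).
demonstrated = {"mug": "shelf", "plate": "rack", "spoon": "drawer"}

# What the robot perceives when it comes back later (the current, messed-up state).
current = {"mug": "table", "plate": "rack", "spoon": "table"}

goal_facts = scene_to_predicates(demonstrated)
init_facts = scene_to_predicates(current)

problem = f"""(define (problem tidy-kitchen)
  (:domain pick-and-place)  ; hypothetical domain
  (:init {' '.join(sorted(init_facts))})
  (:goal (and {' '.join(sorted(goal_facts))})))"""
print(problem)

# A task-and-motion planner would then search for pick-and-place actions that
# transform :init into :goal, with motion planning deciding how each action is
# physically executed.
```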
We want to understand the scene graph and deal with all the different kinds of uncertainty that come from occlusion, stacking, off-balance constraints, piles of stuff. And we should be able to do this without the usual crutches we see in research: no green screens, no simulated objects — blocks world was great for the 70s, I think we can do better — and no AR tags; it's not that we can't use them, but we shouldn't need them.

We want to follow the successful model we've seen in autonomous navigation. If you look at those laser rangefinders on top of the cars, they can build fabulous models of three-dimensional space. This is work from my colleagues Ryan Eustice and Ed Olson, where they put laser rangefinders on top of a car and built fabulous maps of Ann Arbor — this is the football stadium, this is downtown Ann Arbor. What you're seeing in blue is what the robot built previously, and what you're seeing in orange, red, and yellow is what the robot is seeing at the current time from its laser rangefinder. As a field we can build fabulous models of three-dimensional space, but our robots don't necessarily know what anything is. What if we know the geometry of space but not the semantics of space? For all of these points, which belong to the car, which to the building, which to the bicyclists going through this intersection? We need to understand what those are and treat them appropriately.

Building on this, we'd like to follow the successful pattern from goal-directed navigation: our perception is probabilistic, but then for reasoning we take an estimate from that distribution, and we do motion planning, reasoning, and control and action off of that. So what would this look like for our mobile manipulation robots? Our robots have to do scene estimation, then they can do task and motion planning and carry out those actions. We've started to take some strides into this area: my student Alphonsus let me know that anytime replanning algorithms exist, and we started to explore a number of different replanning algorithms and get them to work with the Digit for tasks such as packing groceries. I'm really excited about this area, because perception will not be perfect, but it will be good enough for us to work with. For this to work, our scene estimation has to be good enough that we get what I would call a low-entropy belief state — not perfect, but good enough that the uncertainty is overcome. And we've been trying to work in these terms in some of our projects.
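Here is a minimal, hypothetical sketch of what "good enough, low-entropy" scene estimation can mean in practice, along the lines described above: a coarse pose proposal — think of it as the output of a neural detector — seeds a simple particle filter, the filter refines the belief against noisy observations, and the robot only acts once the belief is sufficiently concentrated. The likelihood model, noise levels, and threshold are toy placeholders, not the lab's actual models.

```python
# Toy pipeline: detector proposal -> particle-filter refinement -> act only when
# the belief over the object pose (x, y, yaw) is concentrated enough.
import numpy as np

rng = np.random.default_rng(0)

def toy_observation_likelihood(poses, observed_pose, noise=0.05):
    """Placeholder likelihood: how well each hypothesized pose explains a noisy
    observed pose. A real system would score poses against depth images or
    point clouds instead."""
    d = np.linalg.norm(poses - observed_pose, axis=1)
    return np.exp(-0.5 * (d / noise) ** 2)

# 1. Stand-in neural front end: a coarse pose proposal for one object.
proposal = np.array([0.42, -0.10, 1.60])            # x [m], y [m], yaw [rad]

# 2. Seed particles around the proposal rather than uniformly over the table.
particles = proposal + rng.normal(scale=[0.05, 0.05, 0.2], size=(500, 3))

# 3. Refine the belief with a few rounds of weighting and resampling.
true_pose = np.array([0.45, -0.08, 1.45])           # unknown to the filter
for _ in range(5):
    observation = true_pose + rng.normal(scale=0.02, size=3)    # noisy sensing
    weights = toy_observation_likelihood(particles, observation)
    weights /= weights.sum()
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    particles = particles[idx] + rng.normal(scale=0.005, size=particles.shape)

# 4. Act only if the belief is concentrated ("low entropy"); otherwise keep sensing.
estimate = particles.mean(axis=0)
spread = particles.std(axis=0).max()
if spread < 0.02:
    print("act on pose estimate:", np.round(estimate, 3))
else:
    print("belief still too uncertain, gather more views")
```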
Some of our work has been looking at exactly this. This is our project SUM, where we dump stuff onto a table and the robot has to perceive the different objects — recognize them and do six-degree-of-freedom pose estimation for every object on the table — and bin them into laundry items that should go to the laundry room, which is what you're seeing on the left, and items that should go somewhere else in the house. You should note that it put the Comet in the bin that's supposed to go to the laundry, and sometimes people mistake that for something the robot did wrong. That's my students — my students don't do enough laundry, or dishes, so they made that labeling mistake; it's not the robot's mistake.

And we went too fast — sorry for the errant click; I'm going to move ahead. The way this works is that I don't trust neural networks, so we use a combination: neural networks do the front-end pass over the objects we see in the scene, and then we use that as a starting point for probabilistic inference — sometimes a particle filter, sometimes nonparametric belief propagation — to hone the estimates of belief, so that we get the best of both neural networks and probabilistic inference. That's what allows our robots to take a reasonable action while maintaining probability mass, so that even if the robot is wrong, it should be able to recover once new information comes in. That's what we're thinking about.

In semantic robot programming we've made a number of advances — I could go through the list: making probabilistic inference more efficient; dealing with the translucent and transparent objects that dominate our environments; handling different types of objects and manipulation beyond pick-and-place; inferring goal conditions and new types of scenes from multiple demonstrations; and going beyond the tabletop, because our robots have to operate at the building level, maybe even the city level — so how can we give our robots an idea of object permanence?

This is where we're going, and it really should leave me encouraged. We have a number of great things; we're making a lot of progress; I think we're doing a lot from a technical perspective. I have a number of great students — two of them have gone off and won best paper awards. Karthik, here, is doing a postdoc with Dieter Fox, who I'm grateful to for taking Karthik in and providing additional great mentorship. Dieter Fox is one of those roboticists I strive to be as good as — I probably will never achieve it, but I appreciate him.
I appreciate his example and his collegiality. I have a great research group — those are the ones who have graduated, but I have more great students coming along right now, working on a number of great projects. I've been fortunate over the past few years to be elected Fellow of some great societies, AAAI and AAAS — and I should mention, if you're in AAAS and you're not a member of Section T, you should be; that's the section on information, computing, and communication.

I would also say that Michigan has afforded us the chance to define the discipline of robotics: we are creating an undergraduate major, and a department is coming online next year. Robotics is emerging from being just a research area to being a full-on discipline, and we need to do that. I have tremendous respect for Jesse Grizzle and his leadership and mentorship of me — I am the De La Soul to Jesse's Johnny Cash, and I've always loved that partnership.

So I should feel encouraged, but I'm not; I'm struggling. I'm asking myself: will my work actually make the world a better place? Am I doing the right things? Talk is cheap — a lot of people say these things, but are they doing them? I usually come back to the show Silicon Valley on HBO, where people are always saying that whatever they're doing is going to make the world a better place. Making the world a better place through constructing elegant hierarchies for maximum code reuse and extensibility — all right. Making the world a better place through scalable, fault-tolerant distributed databases with ACID transactions — okay, sure, maybe. But the one that really got me was when they were talking about Hooli, the big conglomerate with questionable tech ethics. Their CEO, who was a shady character, would say that Hooli is about making the world a better place through minimal message-oriented transport layers. And that killed me when he said it in season one, episode one — because if you replace Hooli with rosbridge, you could just be talking about me. Do I like that guy? I'm not sure. It makes me question whether I will actually be able to make the world a better place. And to be honest, over recent years I've been discouraged; I'm not sure that I'm going to have that impact. A lot of that came out during the summer of 2020 — there were a lot of concerns about what's going to happen with artificial intelligence and the work we're doing, and how that's going to change society.
I wrote an op-ed for VentureBeat, who were nice enough to have me and walk me through the process of what it takes to write an op-ed. I'll summarize the argument here. Investment in artificial intelligence and robotics is absolutely critical for our future, for our nation. I'm glad to see that Congress is taking that up, and I don't balk at those big price tags — and they are massive price tags — because what we've seen over the last 100 years is that investment in science, especially basic science and innovation, has paid off massive returns. I'm speaking to you right now over the Internet, over Zoom; I never would have imagined that as a kid. These investments are absolutely needed. However, our current peer and merit review systems undercut that progress: even with the investment, we don't get the most out of it because of how we judge and analyze each other. In order to make the most of the investment, we have to rethink our peer review processes, and it comes down to something that has always been told to me: don't eat your seed corn.

If you think of the National Science Foundation as the nation's venture capitalist for science and technology, you have to think in terms of an investment portfolio: whenever you fund something, accept something into your conference, or give something an opportunity, you're not just giving up the things you rejected — you're giving up what those would have produced in the future. And I worry, when I look at our ecosystem, that we don't necessarily have a healthy one. If you take that investment and put it into an unhealthy ecosystem, all you get is self-preservation — some things bloom, but it isn't flourishing. With a healthy ecosystem, that same investment fosters growth, grows the impact, and maximizes the return on investment. I'm not great at agriculture — that's Michigan State, not the University of Michigan; Michigan State is where you go to do ag — so I'm going to butcher this a little bit. But when we're thinking about what we want to invest in, we want healthy soil for all of that investment to go into, not unhealthy soil. And this is where diversity matters: the foundation of making the most of an investment is an inclusive mindset that lets a thousand flowers bloom. On top of that, you have to cultivate the diverse ecosystem. Diversity is something that can be changed instantaneously — you can change it in the moment; that's like the derivative of change. But if you want to let it flourish,
you have to do it over time; it's not something that changes instantaneously. You have to do it over years and decades to produce the equity that leads to growth. But I worry that in our review systems we underestimate people, we undervalue them over time, and we end up marginalizing them, and that is what's stifling our growth. So we should think about what incentivizes our behavior as scientists in practice — not necessarily what motivates us as scientists, I should say. We're all drawn into this because we prize innovation and excellence; we're motivated by the amazing things we could do, and that's great and awesome. We like to think we balance that with providing equity and opportunity, and that's usually where I am. But neither of these exists in a vacuum: there has to be an economic incentive; they have to be sustainable from an economic perspective. In an ideal world, we'd be at the vertex of all three. Unfortunately, I think that historically — in computing, and up to this point in robotics and AI as well — our ecosystems have usually been divorced from equity and opportunity. What that leads to is disparate impact, and a sense that you have to choose: should I be a good steward of my field, where there's a small economic incentive, or should I focus on my own professional success, where I know I'll be rewarded economically and I'll be able to innovate, but I don't necessarily care about others? On one side, do I help extend the ladder of opportunity out to others, knowing I'm going to take a loss professionally, or do I do what's right for me and my own professional success, maybe pulling up that ladder of opportunity behind me? And that's the problem.

When we say disparate impact, we should note that we're looking at the system as a whole, not at one individual. And when we look at the numbers — these numbers were put together by Intel, and I think they're overestimates — they show the low representation of African Americans, Latinos, and women in our field. We should note that disparate impact is different from disparate treatment. Disparate treatment is the explicit segregation we've seen in the past, what I would call discriminative discrimination: if I have some opportunity,
I'm going to look at an individual, and the system has an explicit rule that says, given this individual and what they are, how we're going to treat them and their access to this opportunity. That's looking at each individual. Disparate impact is something different, which I would consider generative discrimination: the system has a joint distribution that says everybody could be anything, but there's a latent prior biasing that system in the back. That allows us to claim we have equitability — that's what gets presented to the world — but in the back there is a prior that's still segregating, and that's what leads to the same outcomes we've seen in the past. That's a huge issue, and I worry that in technology we're still producing the same things, because our economic incentive as a society is still the same: to generate exploitable labor, just as in America's past as an agrarian society or an industrial, factory-based society where you had to have people do that work. Since the introduction of Moore's law in 1965, we've seen a ramp-up in automation, and so really what you're seeing is information technology and robotics being used to replace that labor and marginalize a lot of demographics.

This is why we have civil rights legislation. This is why we had the civil rights movement, which says that in order for there to be equity for anybody, there has to be equity for everybody. This is why we've passed a number of statutes saying that we can't have disparate treatment, but we also can't have disparate impact. It has created the kind of environment that a lot of people cherish: if you like the university environment we have now, you have to live up to civil rights law, or we're going to spoil it — we're going to create an unhealthy environment. And I know that this legislation's impact shaped the mentors who then shaped me, so it's really important that we live up to it.

In this regard, a lesson of the civil rights movement is that how you treat Black people — how you treat the most vulnerable groups in society — is an indicator of how the system is operating for everybody. The treatment of Black people is an indicator of problems that affect everybody, and the same vulnerabilities used to stack the deck against one group can also be used to stack the deck against you.
And this is really what we saw with George Floyd — I think a lot of people saw that this is the system in action. This is a high-profile case, but there are many more out there, and we should think about all the people who have been impacted. These are just the stories we know about; throughout our history there are more people who suffer in silence, and we really do need to think about them when we're creating our systems and carrying them out. I'll mention one case that relates to artificial intelligence: Robert Williams, who was arrested purely off of a grainy video from a robbery. They ran it through a neural network, didn't do anything else, then arrested him in front of his family and friends and detained him for 30 hours. AI — the work we're doing — is going to have major consequences, and this is how we've been failing systemically. Facial recognition is something we should care about, and we should give a lot of thought to how we're actually doing it and who we're training to do it.

In artificial intelligence we've produced few Black AI researchers and marginalized the ones we have. Just as a note, I'm showing the work of Viola and Jones here — if you go back and read the Viola-Jones paper, they said you need to have diversity in your training set. If we had followed that wisdom, I think we might be in a better place with facial recognition. The people who become the researchers not only produce the ideas of the future, they also produce the people in the classrooms — as faculty and researchers, we have impact in both the classroom and the research lab. We've seen low production of Black computer science graduates; those graduates then go on to make the technology, and that leads to uneducated, unwise uses of AI technology. A lot of people have been thinking about what I call the front-end problem — how do we think about algorithmic fairness in these areas? — and there's been great work by Timnit Gebru and others on these issues, especially with respect to facial recognition, though it also affects many other areas: admissions, hiring, lending, and so on. As somebody in higher education, I think about the back-end problem, the server side: what are we doing, from the research lab to the classroom, to tend the wellspring from which all of these innovations come?
And I worry that when we think about our behavior, there's a certain playbook that we use, and it's pervasive across research. I'm going to joke about it a little, but I think there's a kernel of truth here. If you want to succeed in research, what you do is: create a bubble of competent-looking people that exudes success; use that exuding of success to get funding — get your proposals funded and produce results; use that mindshare to form alliances and build networks of like-minded researchers; collude to promote the views of the alliance, often through the review process, where you dismiss competing ideas — usually in a civil, but maybe passive-aggressive, way; then use the prestige you've built up to place your people in industry and academia; use that patronage to get more funding and more people; and then rinse, wash, repeat, doing the same thing over and over again, and build your empire. I worry this is exactly what law enforcement has done through police unions, where we need to rethink things like qualified immunity to have proper accountability. It's not that we want to defund the police — absolutely not, we need to fund the police — but we also need accountability, the same way I'd want accountability for the airline pilot who is going to get me from one place to another.

When we look at the playbook and how we run it — I've been showing this slide for a while, and there are two things I forgot to include: to keep this going you have to tokenize, to give the appearance of being diverse and inclusive while avoiding accountability; and if you can get away with it, then you can act with impunity and just do whatever you want. This has led to a lot of bad behaviors, because these incentives affect everybody. It leads to a lot of things that are problematic in our culture: hyper-competition; collusion; hitting local minima through the incrementality of our research results and overfitting to particular ideas; indifferent mentorship and reviewing, which can create a hostile environment; and it breeds misconduct, because people think they can get away with things.

I just want to make this real. In terms of hyper-competition: it's all about getting the money. That's what our students are drawn to — they oftentimes treat our CS programs as trade schools. If you ask our students,
the hard part of computer science is getting into the class. There's a New York Times article with that title talking about UT Austin, led by Don Fussell, who's a good 00:48:55.000 --> 00:49:01.000 guy I have had a lot of great interactions with, and who has been very helpful in creating a better environment. 00:49:01.000 --> 00:49:14.000 But even though they're talking about UT Austin, it's a nationwide problem that hits Michigan just as well, and really what's creating this is the economic incentive, because we've seen just massive growth in computer science 00:49:14.000 --> 00:49:28.000 enrollments. Computing programs are over a third of our College of Engineering, and really it's because students know that they can make $30,000 to $40,000 more coming out of college 00:49:28.000 --> 00:49:40.000 in computing than in any other area. And they're being motivated not necessarily by me as the professor but by the companies, where we need better stewardship; we need to work with them so we can address these issues and the 00:49:40.000 --> 00:49:52.000 negative effects that are coming to our computer science and computing programs. Students are asking for more, and we haven't necessarily grown the academy with all the opportunities. Students are always asking for robotics: when can I do robotics, how can I do robotics? 00:49:52.000 --> 00:50:01.000 And we say, well, you need to get through discrete math and a number of other things before you can do robotics, maybe in the third or fourth year. We're going to fix that. 00:50:01.000 --> 00:50:06.000 Look at how this has affected the inclusiveness of our programs. 00:50:06.000 --> 00:50:19.000 Michigan has a massive computer science program; we have over 2,500, almost 2,600, majors, and only 39 of them are Black, because it's super competitive and we weed out people. 00:50:19.000 --> 00:50:27.000 I don't have time to go into the data around her story, but if you want to look up Patreon, she's one of our students who did not get through, unfortunately. 00:50:27.000 --> 00:50:32.000 But I think she highlights a number of issues about our culture that we can improve. 00:50:32.000 --> 00:50:42.000 This doesn't just affect minority students; it affects students like my PhD student Jen. Jen came up to me when she was my TA for a class 00:50:42.000 --> 00:51:05.000 and said, I need to go change my room reservation for my office hours. I was like, why do you need to do that? We have this room reservation. Well, it turned out she was being pushed out of there by other teams. 00:51:05.000 --> 00:51:13.000 I went down there to advocate for her and say you have to give her her space, because she had the room reservation, and they did it begrudgingly. 00:51:13.000 --> 00:51:25.000 A lot of the students said that they didn't like my tone, and I was like, I don't know what to tell you, because we had the room reservation and I'm just trying to advocate 00:51:25.000 --> 00:51:35.000 for my students. And I was worried about that; I left that day shaken, because I thought I might face disciplinary action.
00:51:35.000 --> 00:51:43.000 But to the credit of my colleagues, they said this isn't acceptable. 00:51:43.000 --> 00:51:51.000 And I think this is something that affects me and my student; it affects her ability to do her job. 00:51:51.000 --> 00:52:02.000 But it's really ginned up by the competition, because students feel like they have to compete and do whatever it takes to get through the program. 00:52:02.000 --> 00:52:16.000 I always ask myself, if I'm treated like this as a tenured faculty member, how must first-year students feel? And they don't feel great. Oftentimes you see Reddit posts like this where they say, sometimes I think I picked the wrong field, 00:52:16.000 --> 00:52:18.000 but they do love computing. 00:52:18.000 --> 00:52:32.000 Oftentimes people are saying, well, universities are under strain to keep pace with student interest in computing, but it's not just the universities, right, it's the economic incentive. Before 00:52:32.000 --> 00:52:46.000 you start to inject more money and pick winners and losers among computer science departments and computing as a field, you have to think about your stewardship and think about the ecosystem that you're creating. 00:52:46.000 --> 00:53:01.000 And this gets to where hyper-competition starts to breed collusion, in that oftentimes when we think about what our review systems are doing, they're talking about merit: what is merit? 00:53:01.000 --> 00:53:15.000 Well, oftentimes people say we need to have more merit in our systems, we shouldn't have diversity, diversity is sort of a side thing, but merit is really the important 00:53:15.000 --> 00:53:29.000 thing. But that's not necessarily true, because merit is something that is a belief, not a fact. It is something that we have to decide on collectively, to choose what we're going 00:53:29.000 --> 00:53:33.000 to believe and what we're not going to believe, and that is what we call merit. 00:53:33.000 --> 00:53:46.000 And we don't only produce ideas; that's not solely what we're doing. It's not purely intellectual pursuit. We produce ideas and people, and I worry that some people in the field, some people in the academy, are just stating 00:53:46.000 --> 00:53:48.000 fallacies and they don't really realize it. 00:53:48.000 --> 00:53:58.000 And the result is that our differing views on what merit can be lead to the collusion problems that we have, 00:53:58.000 --> 00:54:13.000 and what we think is merit is actually really just recognition: it's really just a belief by others, a good impression from others.
And this has led to bad behavior in front of us, like what was talked about in the 00:54:13.000 --> 00:54:26.000 collusion-rings article about issues that came out in the computer architecture community, where a student felt so badly about submitting work that they didn't think was good, pressured 00:54:26.000 --> 00:54:40.000 by a faculty member who was colluding to get papers through the review system, that they committed suicide. And I worry this is only the tip of the iceberg: what we're seeing is just modern collusion, 00:54:40.000 --> 00:54:54.000 the first-order transactions of our system, but it is building on the vulnerabilities created by traditional collusion, where we have pedigree rings that basically decide who is going to be good 00:54:54.000 --> 00:55:01.000 enough based off of how they're regarded in the field, not necessarily the quality of their work. 00:55:01.000 --> 00:55:15.000 We've seen this in faculty hiring, which is probably the most depressing part of my job, in how people move around from the top universities: is that merit producing this, 00:55:15.000 --> 00:55:33.000 or is it just a sort of group dynamic? When we look at what incentivizes behavior: I'm the editor of a journal, and I worry that we could game the system to maximize our impact factor by forcing people 00:55:33.000 --> 00:55:46.000 to cite more of our work, but we don't do that, because I worry that citation practices incentivize collusion. In fact, we were trying to get indexed, and the indexing agency said, well, your impact is not good enough because you're not 00:55:46.000 --> 00:55:59.000 being cited enough across the field. And I was like, but did you actually look at the quality of our work? If it's all about citation counts, and that's what we need to do to get indexed, then I should just be trying to push 00:55:59.000 --> 00:56:13.000 people to cite me and push more and more work into conferences that don't really care about the quality of the work and just care about all the accolades and credentials that go with it. 00:56:13.000 --> 00:56:22.000 I oftentimes worry that when we judge people by their Google Scholar, I don't know if those citation numbers are merit, popularity, or collusion. 00:56:22.000 --> 00:56:31.000 It makes me think about Donna Strickland, who at the time she got her Nobel Prize didn't have a Wikipedia page; she had been denied a Wikipedia page. 00:56:31.000 --> 00:56:43.000 I think about my own interactions with Wikipedia: when I look at the Robot Operating System, which has a big Wikipedia page, I'm worried that I'm not cited on it, just to be honest about it. I talk to a lot of 00:56:43.000 --> 00:56:56.000 my minority colleagues, and a lot of them feel like they haven't gotten the recognition, because even though we've had impact, it doesn't go recognized. Open source requires funding in order to do this; I'd love to be able to go and engage 00:56:56.000 --> 00:57:06.000 more
with the Robot Operating System community, but I don't, because I've been defunded. But the thing that has increased is my service; that has gone through the roof, because I'm trying to be a good steward. 00:57:06.000 --> 00:57:17.000 And I worry that what's happening is that we've created this incentive structure where you have a lot of minorities who feel like they want to give back, but the time they 00:57:17.000 --> 00:57:24.000 spend on being good stewards puts them at a competitive disadvantage, and we see it in their portfolios. 00:57:24.000 --> 00:57:27.000 I see it at the PI meetings for robotics. 00:57:27.000 --> 00:57:39.000 I don't see many minorities, just a small number. When I look at the programs that are meant to address these issues of bias and explainability in artificial intelligence, I have no idea what's going on inside them, but I don't 00:57:39.000 --> 00:57:48.000 see minority participation. We see rampant cheating in classes and in the admissions systems, and this is really a major problem. 00:57:48.000 --> 00:57:59.000 And so this type of collusion oftentimes leads people to just go for the short-term metrics that are going to ensure success professionally. 00:57:59.000 --> 00:58:09.000 But where is that going to take us long term? Are our review systems incentivizing transformative work, or just ratification of what we've already seen in the past? 00:58:09.000 --> 00:58:21.000 I don't have time to do another quiz for this one, but I love this quote, that the community also had to overcome its biases and fears; think a little bit about who said that. 00:58:21.000 --> 00:58:22.000 Right. 00:58:22.000 --> 00:58:35.000 And so here's the quote exactly, and I will just reveal parts of it: they were diminished because people didn't understand what their system did; they were trying to 00:58:35.000 --> 00:58:47.000 think about their own brains, and we don't really understand those either. They wanted to model the internal state of their systems, and for that they were called unscientific and nuts; in fact, they were 00:58:47.000 --> 00:59:02.000 called neural nuts. And those neural nuts actually revolutionized speech and translation problems, because the person I'm talking about is Alex Waibel, who was the first to introduce the 00:59:02.000 --> 00:59:17.000 convolutional neural network trained by gradient descent and backprop, which has now transformed our field. He gave a talk, and I took this picture because I was just amazed by the talk, and it was for the tire tracks 00:59:17.000 --> 00:59:28.000 diagram that has shown the investments made over the decades and how they have not just catalyzed our field but led to a confluence of impact across our society. 00:59:28.000 --> 00:59:39.000 And those neural networks that were cast out of the field have now become the mainstream, because we had to let a thousand flowers bloom; we had to have that diversity.
00:59:39.000 --> 00:59:52.000 If you like your vaccine, thank a researcher who had to fight and claw and literally had to defend herself in order to bring on these new ideas. 00:59:52.000 --> 00:59:59.000 There are so many stories out there like that, stories of overcoming, that I think are really important. 00:59:59.000 --> 01:00:15.000 In fact, Brian is one person that everybody in AI should know, and I'm sure my colleagues at NSF know Brian. Brian is the original Black in AI, and so I'm just showing his 1977 publication with Hedgecock, 01:00:15.000 --> 01:00:18.000 great work in the computer vision realm. 01:00:18.000 --> 01:00:30.000 And Brian would tell you we need to fund Black scientists, as my colleagues in the biomedical space have shown, because that's what's going to be good for the entire ecosystem. We don't want to overfit to just artificial intelligence, quantum, 01:00:30.000 --> 01:00:41.000 and robotics; we have to create a broad spectrum of ideas, because we never know where transformation is going to come from. And if you're a graduate student, you're just like, what is this guy talking about with diversity and all these 01:00:41.000 --> 01:00:51.000 things? If you think your advisor is not treating you well, they're not incentivized to treat you well. Some people are great advisors and great mentors because they care, but that doesn't mean that we're incentivized to 01:00:51.000 --> 01:00:54.000 actually be those good mentors and advisors. 01:00:54.000 --> 01:01:05.000 And so this indifferent mentorship and reviewing really needs to be stopped. I know I'm going over time, I'm sorry, but I'm going to keep going, because there's something I really want to say, which is 01:01:05.000 --> 01:01:22.000 about misconduct and the language of the unheard. When I look at my own profile, one thing that's noticeable is that I've published with Walter. Walter and I started at Michigan at the same time. 01:01:22.000 --> 01:01:38.000 Walter was, I thought, a young minority faculty member with talent, who was a little full of himself, but I wanted to work with him to help him get better. 01:01:38.000 --> 01:01:48.000 After we published our first article together, I bought this shirt for him because I was joking around with him, and I tried to provide the mentorship to tell him, this is how you should do things, this is how you should behave 01:01:48.000 --> 01:01:58.000 in our society, in our environment. But Walter, I worry, didn't listen to that; he followed the rewards, he followed the playbook for how to be successful. 01:01:58.000 --> 01:02:09.000 He saw other examples in our field and followed them; he followed the money; he didn't necessarily try to be a good steward. 01:02:09.000 --> 01:02:22.000 And to be honest, Walter hit my limit, and at a certain point I said, Walter, I can't talk to you anymore, because I don't think you're respecting me.
01:02:22.000 --> 01:02:37.000 And it was a problem, so I was not surprised to see Walter accused of misconduct in his treatment of somebody who just wanted to be a good colleague, who just wanted to network and talk to people, 01:02:37.000 --> 01:02:54.000 and who because of that ended up in a position where an assault occurred, or I should say an assault was described; many people have said this, and I worry about that. 01:02:54.000 --> 01:03:03.000 It breaks my heart to hear somebody that I was collaborating with being accused of doing things like this. 01:03:03.000 --> 01:03:12.000 It's a terrible violation of people and has no place in our fields, and I worry that when you run the playbook, this is what we get out of it. 01:03:12.000 --> 01:03:24.000 I worry that we're not incentivizing the good stewardship that we need of our fields, so that we produce the people we're going to be proud of. 01:03:24.000 --> 01:03:34.000 But I also know that this meeting right here changed my perspective, because at a certain point I was like, Walter, you need to go do your own thing. 01:03:34.000 --> 01:03:36.000 This is your problem. 01:03:36.000 --> 01:03:50.000 But at this meeting, the issue of Walter came up and our students were really unhappy about it, and I realized we had lost the trust and respect of our students. 01:03:50.000 --> 01:04:04.000 But in that room there were also a lot of minority students who felt like they were being ignored, that they weren't getting the time to discuss their struggles with discrete math, to describe the issues they're having academically in the 01:04:04.000 --> 01:04:07.000 program, how they feel isolated. 01:04:07.000 --> 01:04:11.000 It made me realize that accountability is not necessarily punishment. 01:04:11.000 --> 01:04:23.000 Because if you look at the numbers, oftentimes when we punish, that actually just leads to continued disparate outcomes, continued disparate impact, that don't produce diversity. 01:04:23.000 --> 01:04:37.000 I worry that people don't realize that when they're so single-minded about punishing people they feel to be violators, for minority students and faculty that evokes something that 01:04:37.000 --> 01:04:45.000 they are probably not intending, but given the numbers it feels really bad. 01:04:45.000 --> 01:04:49.000 It feels like we're being marginalized. 01:04:49.000 --> 01:04:53.000 Accountability is atonement and progress. 01:04:53.000 --> 01:05:06.000 It has to be recognition of what the problems are and how we improve for the future. I asked my dad, who worked in the Federal Bureau of Prisons, one time, why are you treating these prisoners this way, 01:05:06.000 --> 01:05:18.000 why are you worried about how they feel about things? And he would tell me that the goal is not punishment. The goal is to help them understand where they went wrong and help them be better so they can contribute to our society.
01:05:18.000 --> 01:05:31.000 And that's really where we need to be. The signs of the toxicity are not just in these issues that we've been seeing; when I look at what has alienated a lot of minority students, they're 01:05:31.000 --> 01:05:34.000 all over the place. 01:05:34.000 --> 01:05:45.000 I would just give my own experience: when I was a young faculty member, in a faculty meeting about faculty hiring, somebody said, well, we have to worry that we could end up with a token hire like Chad, and I was 01:05:45.000 --> 01:06:00.000 like, what? I didn't know what to say, because somebody had said something that hurt and that turned out to be pretty right on, in that Brown University had 01:06:00.000 --> 01:06:14.000 a history of slavery and discrimination that seemed to be continued by Brown computer science. If I look at my entire time there, there were no other Black faculty hired or interviewed, 01:06:14.000 --> 01:06:27.000 no postdocs, one graduate admit, no diversity in our undergraduate program, but lots of funding, which hurt. And even though I think I'm pretty resilient, it took its toll over time. 01:06:27.000 --> 01:06:35.000 In 2016 I had shingles with Bell's palsy; half my face didn't work. 01:06:35.000 --> 01:06:50.000 And because I'm carrying this with me, I have to realize that carrying that burden is not good. I can't focus on the bad that people do and how much I may have been hurt. 01:06:50.000 --> 01:07:04.000 Somebody made fun of me; my face was not moving, that's just the way it was, and she just made fun of me. I can't carry that burden, because I want to be encouraging, I want to be positive, but we need our systems 01:07:04.000 --> 01:07:20.000 to be clear; like they say, clear eyes, full hearts, can't lose. But we have to be honest about what this is, because what we're talking about is not just what's in it for me. Funding is not a given: you're not entitled to federal funding, you're not 01:07:20.000 --> 01:07:31.000 entitled to public money. You have to ask yourself, what are you doing for others? That really is what it comes down to at the end of the day. Feel free to read my opinion editorial again. 01:07:31.000 --> 01:07:42.000 I give talks like this all over the place, so you can find a bunch online. It really comes down to, at the end of the day, what you can do: eliminate double standards in how you treat other people in our field, and really 01:07:42.000 --> 01:07:54.000 look and see how diverse your research lab, your classroom, and your committees are. How does that stand up to what we've seen in history? Because we want to eliminate those disparate impacts. 01:07:54.000 --> 01:07:56.000 You can be an open-minded reviewer. 01:07:56.000 --> 01:08:05.000 Think about the long term: every time I've reviewed for an NSF panel, somebody comes in and says, we're looking for not just the best idea right now but what could be in the future. 01:08:05.000 --> 01:08:08.000 Be open to those ideas.
01:08:08.000 --> 01:08:20.000 Think about how you can interact with different organizations, different institutions of higher education that don't have the same resources, and work with them as equal partners. We've tried to do that with what we're calling the Distributed Teaching 01:08:20.000 --> 01:08:33.000 Collaborative: pulling linear algebra ahead of calculus and trying to teach it in a fun way, and working with our colleagues at Morehouse College and at Berea College to create an equal partnership for 01:08:33.000 --> 01:08:42.000 teaching. This is part of what we want to do to transform our field through an undergraduate major in robotics, and I'm really excited about it. 01:08:42.000 --> 01:08:57.000 Because at the end of the day, what we want to do is eliminate disparate impact and bring things closer together, where the work that we do, like semantic robot programming and protocols for usable robots, 01:08:57.000 --> 01:09:10.000 incentivizes and helps us realize equal opportunity, where we can eliminate disparate impact in computing and live up to the civil rights movement. I know I've gone over time; I really appreciate people staying and listening, but this is 01:09:10.000 --> 01:09:18.000 how I felt about it, so I appreciate the extra time. With that, I'm done. Thank you very much. 01:09:18.000 --> 01:09:25.000 Wow, thank you. I think we'd have people standing and applauding 01:09:25.000 --> 01:09:26.000 for that. 01:09:26.000 --> 01:09:33.000 That was an amazing and moving presentation. 01:09:33.000 --> 01:09:36.000 I just want to check with our AV people. 01:09:36.000 --> 01:09:45.000 I'm seeing only about three questions in the Q&A box right now. 01:09:45.000 --> 01:09:54.000 Is everyone who's watching the webcast able to type in a Q&A question? I'm just a little surprised there weren't a lot more. 01:09:54.000 --> 01:10:10.000 And they're coming in now. Okay, well, while they come in, why don't we first turn the floor over for the first question from our beloved head of Computer and Information Science and Engineering, Margaret Martonosi. 01:10:10.000 --> 01:10:17.000 Thank you so much. I figured I would raise my hand while people add questions into the Q&A. 01:10:17.000 --> 01:10:31.000 Thank you, Chad, that was an extraordinary talk, and I am glad you went to all the places you went. I know that probably isn't easy to do all the time, but I really appreciate that you raised it. 01:10:31.000 --> 01:10:47.000 There are an awful lot of questions we could talk about right now, but I thought I would kick off the Q&A by asking you: if you had a magic wand and you could change one thing about, since we're here, NSF 01:10:47.000 --> 01:10:49.000 review processes, 01:10:49.000 --> 01:10:56.000 now or maybe over the next five years, what would you wave your magic wand to do? 01:10:56.000 --> 01:11:12.000 Right. 01:11:12.000 --> 01:11:24.000 They do as much as they're permitted to do, and I think they do; they provide as much stewardship as they can possibly provide, and we need to provide more support for the people at NSF.
01:11:24.000 --> 01:11:42.000 But I would say that we have to come to the point where we're willing to say that if we have institutions or departments or individual faculty members that haven't shown a record of being inclusive, of producing equity, we should 01:11:42.000 --> 01:11:46.000 not necessarily be funding them anymore. 01:11:46.000 --> 01:11:59.000 And I think that also applies to living up to the statutes of civil rights: if you can't eliminate disparate impacts in your organization, you shouldn't be eligible 01:11:59.000 --> 01:12:01.000 for federal funding. 01:12:01.000 --> 01:12:14.000 The hard part of that is deciding what is equitable, and that's where I believe the algorithmic fairness community can actually do us a great service in helping us understand what this could look like. 01:12:14.000 --> 01:12:25.000 What does it mean for you to be equitable in your outcomes? And how should the support, the funding levels, be proportional to that? 01:12:25.000 --> 01:12:30.000 I would also say, as somebody who's now a fellow of AAAS and AAAI, 01:12:30.000 --> 01:12:39.000 I would hope that those organizations would say, we're not going to make you a fellow in the future if you haven't produced equitable outcomes. 01:12:39.000 --> 01:12:45.000 But that requires telling the faculty and the graduate students now that this is going to be the case, not 01:12:45.000 --> 01:12:48.000 twenty years into their career. 01:12:48.000 --> 01:12:54.000 Thanks, Chad. It looks like there are eleven questions in the Q&A field now. 01:12:54.000 --> 01:13:00.000 So here's an interesting one. 01:13:00.000 --> 01:13:14.000 How do you keep people from gaming the equity and inclusion issue? Because people seem to be awfully good at gaming incentives, and we have certainly seen this. 01:13:14.000 --> 01:13:35.000 We can talk in more detail during the offline session about how sometimes we put out programs that were really aimed at helping fund people who have difficulty getting funds, 01:13:35.000 --> 01:13:48.000 and suddenly the awards are all going to people from Ivy League schools, and then we have to go back and think of new rules. So do you have any general thoughts on that issue, given these expert gamers out there? 01:13:48.000 --> 01:14:00.000 Absolutely. One of my colleagues said there's a law out there that says once 01:14:00.000 --> 01:14:07.000 an aspiration turns into a metric, it's going to be exploited, right, and so metrics are always going to be hard. 01:14:07.000 --> 01:14:21.000 I would say that I don't necessarily like punishing people for what they're trying to do, but rather trying to incentivize those types of good outcomes. 01:14:21.000 --> 01:14:25.000 Part of it is recognizing the invisible labor that goes on. 01:14:25.000 --> 01:14:36.000 For instance, I know a lot of great people who spend their time with Black in Computing and with various CRA events.
01:14:36.000 --> 01:14:47.000 I see the same people over and over; I can't tell you how many times. I'm just going to call out these names: I see Nancy Amato and Maria Gini everywhere doing the same service that I'm doing. 01:14:47.000 --> 01:14:53.000 And I don't know if they're getting the credit for it, I don't know if they're getting the funding for it. If they did, then I think you would see more people trying to do that. 01:14:53.000 --> 01:14:56.000 And I think that would lead to better stewardship. 01:14:56.000 --> 01:15:06.000 But also, look at people in terms of who they're producing: as academics, the goal of faculty is to produce more faculty. 01:15:06.000 --> 01:15:19.000 So we should look at who you've produced, and if you've been in our field for 30 years and you haven't produced a Black graduate student, you have lower than expected female participation, and you haven't necessarily engaged in something other 01:15:19.000 --> 01:15:28.000 than your direct interest, I think that's something our panels have to think about a bit more. 01:15:28.000 --> 01:15:44.000 I personally would be okay with a little bit more intervention from the program officers over what the external reviewers say, to be honest. 01:15:44.000 --> 01:15:54.000 There are a couple of questions that ask how people use citation counts, because there are a number of them and they're available. 01:15:54.000 --> 01:16:14.000 Do you think it's feasible and worthwhile for there to be actual numeric measures of that? So I actually believe that is true. Every time people have seen these graph analyses, like 01:16:14.000 --> 01:16:23.000 you produce a word cloud of different authors and how they're related in the field, like you get at the CCC, these clusters emerge. 01:16:23.000 --> 01:16:24.000 Right. 01:16:24.000 --> 01:16:41.000 I think we've got a lot of graph tools that we can use to look at the citation graphs to elucidate certain types of behavior that are more than just first-order or second-order relationships between authors, but 01:16:41.000 --> 01:16:51.000 to look and see, are there clusters forming that look like little clubs, right? And I think there's a lot of computational work that could go into that. 01:16:51.000 --> 01:17:02.000 But I would also say that we should treat citations as a very skewed and noisy measure; we should treat it as a likelihood and not a fact. 01:17:02.000 --> 01:17:15.000 You could say that right now we treat citation count as normally distributed, right, that the citation number is the expectation and there's a unimodal Gaussian around it, but that's 01:17:15.000 --> 01:17:26.000 not actually true. I think there's a lot of modality: there are certain people who absolutely earn those citation counts with the insights they've produced, but some people are just branding themselves well. 01:17:26.000 --> 01:17:40.000 And when you dig into it, you start to see where it turns out to be vaporware. And so I think we should relax our expectations as a field overall.
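The citation-graph analysis described above can be sketched with off-the-shelf community detection. The following is a minimal, hypothetical illustration rather than anything from the talk: the citations.csv edge list, the author-pair weighting, and the size and ratio thresholds are all assumptions. It uses networkx's greedy modularity communities to surface tightly knit clusters that cite each other far more than they cite the rest of the field.

    # Minimal sketch (hypothetical): look for author clusters in a citation graph that cite
    # each other far more than they cite outsiders. Assumes a made-up "citations.csv" with
    # columns citing_author, cited_author, count.
    import csv

    import networkx as nx
    from networkx.algorithms.community import greedy_modularity_communities

    G = nx.Graph()
    with open("citations.csv", newline="") as f:
        for row in csv.DictReader(f):
            a, b, w = row["citing_author"], row["cited_author"], int(row["count"])
            if a == b:
                continue  # skip self-citations in this rough pass
            if G.has_edge(a, b):
                G[a][b]["weight"] += w  # accumulate repeated author pairs
            else:
                G.add_edge(a, b, weight=w)

    # Partition authors into communities by modularity, then flag "little clubs":
    # clusters whose internal citation weight dwarfs their citations to everyone else.
    for i, comm in enumerate(greedy_modularity_communities(G, weight="weight")):
        members = set(comm)
        internal = external = 0
        for u, v, d in G.edges(members, data=True):
            if u in members and v in members:
                internal += d["weight"]
            else:
                external += d["weight"]
        ratio = internal / max(external, 1)
        if len(members) >= 5 and ratio > 3.0:  # thresholds are arbitrary illustrations
            print(f"cluster {i}: {len(members)} authors, internal/external ratio {ratio:.1f}")

Treating raw citation counts as heavy-tailed rather than Gaussian, as suggested above, would be a separate modeling step layered on top of this kind of structural check.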
01:17:40.000 --> 01:17:47.000 And maybe a related question: you mentioned open source. 01:17:47.000 --> 01:17:56.000 Could we be doing more to count contributions to open source in both funding decisions and tenure decisions and so on? 01:17:56.000 --> 01:17:59.000 Um, I don't know. 01:17:59.000 --> 01:18:11.000 I'll admit I don't know on that one, because sometimes open source can just be, I made a quick tool to do something; it doesn't necessarily add new insight, it's just a nice tool that does something. Other times there are open source 01:18:11.000 --> 01:18:24.000 tools that really have invigorated the field and have new insights underneath them. As a roboticist, I think of the world probabilistically, and then we have a big sensor fusion 01:18:24.000 --> 01:18:33.000 problem: we have all these different modes that we have to think about, and we have to put them together in terms of how we make our overall assessment. 01:18:33.000 --> 01:18:43.000 But I would say that we have to start thinking a little bit differently, in that we know there's going to be noise around these things and we have to calibrate our expectations; 01:18:43.000 --> 01:18:56.000 we have to be a bit more open-minded. There is one thing in the hiring and tenure review process I think is really important, which is making sure that you have a diverse set of letter writers for the cases that come 01:18:56.000 --> 01:19:09.000 through. If the letter writers look very much like the same people who are on the inside, then you're going to get the same outputs over and over and over. It's time to broaden that pool of letter 01:19:09.000 --> 01:19:14.000 writers to people you wouldn't have thought about and places you wouldn't have thought about previously. 01:19:14.000 --> 01:19:25.000 Great. I see several questions that are specifically about ways we could improve the NSF review process. 01:19:25.000 --> 01:19:47.000 Those might be better for the office hour; we can address them there if nobody puts in more questions, but let's see what else we have that's clearly of general interest. 01:19:47.000 --> 01:19:54.000 Well, actually, let me put in one of my own questions while I'm picking between them. 01:19:54.000 --> 01:20:02.000 I've observed the following paradox, and I wonder if you have too, and whether you have any ideas about what's causing it. 01:20:02.000 --> 01:20:15.000 There are the brand-name companies, the Microsofts and Googles; everyone applies to them, they get huge numbers of applications, and 90% of the applications are not even read by a human; 01:20:15.000 --> 01:20:18.000 they're weeded out algorithmically. 01:20:18.000 --> 01:20:30.000 I know several people who just graduated with their bachelor's, and most got weeded out algorithmically despite sterling credentials and great letters of recommendation.
01:20:30.000 --> 01:20:53.000 The one person I know who did get in is being paid an enormous amount of money to do very mundane work, fixing bugs in badly designed legacy software, that in no way lives up to how competitive it was to get there. There just seems 01:20:53.000 --> 01:21:10.000 to be something really broken with this system, where we have a lot of people, we make them fight each other to get in, then we pay them a lot of money, and then we have them doing work that somebody with 01:21:10.000 --> 01:21:16.000 a couple of years of an online course could do. 01:21:16.000 --> 01:21:18.000 Yeah. 01:21:18.000 --> 01:21:21.000 I have 100% observed that. 01:21:21.000 --> 01:21:33.000 And I believe it's the primary thing that is fueling the toxic culture we're seeing in a lot of computer science programs. 01:21:33.000 --> 01:21:47.000 It's making students unhappy, I think, even though they love computing; a lot of people love computing. I believe that we have to have better stewardship from these companies, because they are setting the incentives, they are setting 01:21:47.000 --> 01:22:04.000 the agenda more than anybody else. We need them to come in and work with us to figure out what's going to be a more humane, more productive way to cultivate an innovation workforce. 01:22:04.000 --> 01:22:18.000 I think right now we're eating our seed corn; it feels like we're just being raided by companies for talent at all levels, for our undergraduate 01:22:18.000 --> 01:22:28.000 students, graduate students, and faculty. And that really is part of the strain that we're seeing right now. 01:22:28.000 --> 01:22:32.000 If I could wave my magic wand, 01:22:32.000 --> 01:22:47.000 as Margaret said, not necessarily for NSF, but just as big an issue is the companies and how they work with us. We need to find better programs, better pathways, where students are not just trying 01:22:47.000 --> 01:23:02.000 to go get the biggest salary possible, but where we train them for what they actually need, not just now but in the future, not just for the needs of this company to do code cleanup, but to help transform those ideas. And so every 01:23:02.000 --> 01:23:06.000 time I hear about a public-private partnership, 01:23:06.000 --> 01:23:22.000 I want that to be not just something that's convenient now but a long-term, committed relationship where we are all working for the good of the overall ecosystem, and I think companies really need to work with us better. 01:23:22.000 --> 01:23:31.000 Let me pick another quick question from the Q&A. 01:23:31.000 --> 01:23:46.000 So, the writer says: I heard a comment from a grad student of color who stated that we need to clean the water in the tank before we encourage more people of color to enter STEM fields where they encounter racism.
01:23:46.000 --> 01:24:01.000 And so maybe you could respond. You obviously encourage people of color, and all kinds of people, to enter STEM, and we've made it pretty clear they're going to face that, so you don't think we should stop, but 01:24:01.000 --> 01:24:15.000 how do you prepare them for what they're going to face? Um, what I will tell a student is that the thing you need to do more than anything else, when you come into an undergraduate program in computing, or even a graduate program, is 01:24:15.000 --> 01:24:25.000 find a study group. I think there's a myth that students come in and get through these programs all by themselves, and that's just not true. 01:24:25.000 --> 01:24:36.000 A lot of students have support, some support system on campus, either cultural or academic, or usually a combination of both. 01:24:36.000 --> 01:24:48.000 People are going to behave the way they're going to behave, right, but if you have a group there that can help you understand, when something goes wrong, did I cause this 01:24:48.000 --> 01:24:57.000 and do I need to fix this, or has something crazy just happened to me and I have to go deal with it, 01:24:57.000 --> 01:25:11.000 or do I have to find recourse at a higher level. It really is about the people who can help you provide that calibration, who provide that support you can vent to when things go wrong, who can lift you up. 01:25:11.000 --> 01:25:22.000 I can tell you that a lot of the mentors I've had along the way, when I've hit a low point, have said, take that low point with a grain of salt, I believe in you, 01:25:22.000 --> 01:25:24.000 I know you can continue. 01:25:24.000 --> 01:25:32.000 And that meant the world to me. We have these bigger systemic issues, but if you have support 01:25:32.000 --> 01:25:45.000 and you don't feel isolated, that's really going to help catalyze more participation from groups that are typically underrepresented. 01:25:45.000 --> 01:25:56.000 Great. And I'm going to use my privilege as moderator for the last question, one that I wrote down during the first part of your talk. This is actually a scientific question. 01:25:56.000 --> 01:26:03.000 I guess it relates to some of your points about the structure of how science is done. 01:26:03.000 --> 01:26:20.000 There's a research community on AI planning, essentially the descendants of the STRIPS work, and there's very little interaction between that and the robotics community. 01:26:20.000 --> 01:26:36.000 It seems like they should be working more together, and I wonder if you see that as sort of an unrealized opportunity at the moment. I see that as one of the major ambitions of my group, to help bring those together.
01:26:36.000 --> 01:26:50.000 I've always wondered why there's this great work in the AI planning community that thinks about how we can reason at the task level, while in robotics we're just 01:26:50.000 --> 01:26:59.000 doing small pick-and-place tasks. I think for the next transformations that are going to occur in robotics, 01:26:59.000 --> 01:27:02.000 we have to bring these two worlds together. 01:27:02.000 --> 01:27:03.000 Absolutely. 01:27:03.000 --> 01:27:17.000 But the problem is that in our peer review systems we're usually rewarded for being siloed, for just showing that incremental contribution, and so there's no real incentive for us to bring these two worlds together. 01:27:17.000 --> 01:27:25.000 We have to hope that there are big enough names out there who will help lead this transformation. 01:27:25.000 --> 01:27:32.000 I would note that when I started in AI and machine learning, 01:27:32.000 --> 01:27:48.000 neural networks were completely pushed out, right, and it took people of the stature of a Geoff Hinton to bring them into the mainstream, and a lot of people 01:27:48.000 --> 01:28:02.000 who are probably not fully recognized stuck with it, oftentimes with funding that didn't come from the United States. When we look at the Turing Award winners for neural networks, two of the three are 01:28:02.000 --> 01:28:04.000 Canadian, right. 01:28:04.000 --> 01:28:15.000 And I think it says something about what we need to do in terms of how we're funding and supporting these ideas. As somebody who completely agrees that we have to bring the AI planning world together with the robotics world, 01:28:15.000 --> 01:28:26.000 I don't necessarily know if that's a mainstream view right now, which makes it hard to support, but I do believe that's where our next level of transformative impact will come from. 01:28:26.000 --> 01:28:49.000 Oh, great. And with that we're out of time, so I just want to thank Chad once again. Imagine, again, a big standing ovation. I hope that everyone who tuned in from the webcast enjoyed the presentation, and we at NSF are always happy 01:28:49.000 --> 01:29:00.000 to hear from the public, so if you have thoughts you'd like to share, we can't promise to answer every question, 01:29:00.000 --> 01:29:30.000 but we're a public service agency, so if something here inspired you, feel free to write to me.