KindlED
The KindlED Podcast explores the science of nurturing children's potential and creating empowering learning environments. Powered by Prenda, each episode offers actionable insights to help you ignite your child's love of learning. We'll dive into evidence-based tools and techniques that kindle curiosity, motivation, and well-being in young learners. Do you have a question, topic, or story you'd like to share with us? Get in touch at podcast@prenda.com.
Episode 59: Executive Function Development and AI. A Conversation with Peter Fitzpatrick.
Join us as we sit down with Peter Fitzpatrick, co-founder of FawnFriends.com, to explore the creation of Willow, a robotic plushie designed to support neurodivergent children in their emotional growth. Peter shares his personal journey, from his experiences with ADHD and familial challenges to the innovative hackathon project that gave birth to Willow. Discover how this cuddly AI companion is transforming the landscape of emotional support for children and their parents, without replacing traditional methods.
We also examine the broader implications of AI in child development and mental health, highlighting AI's unique ability to scale educational support and provide tailored assistance. Peter's insights challenge us to consider how robots might enrich our lives, encouraging us to look beyond metal and circuits to see potential companions in child development and emotional well-being.
More about our Guest
Peter is building Willow, a robotic plushie that helps neurodivergent kids grow up to be successful by helping them learn to regulate emotions, set and pursue goals, and build more secure relationships. Peter, co-founder of FawnFriends.com, is a student of child psychology, emotional wellness, and how robots can help children mature successfully.
Connect with Peter
FawnFriends.com
Sign up for the Fawn Friends weekly newsletter
Got a story to share or question you want us to answer? Send us a message!
About the podcast:
The KindlED Podcast explores the science of nurturing children's potential and creating empowering learning environments.
Powered by Prenda Microschools, each episode offers actionable insights to help you ignite your child's love of learning. We'll dive into evidence-based tools and techniques that kindle young learners' curiosity, motivation, and well-being.
Got a burning question?
We're all ears! If you have a question or topic you'd love our hosts to tackle, please send it to podcast@prenda.com. Let's dive into the conversation together!
Important links:
• Connect with us on social
• Subscribe to The Sunday Spark
• Get our free literacy curriculum
Interested in starting a microschool?
Prenda provides all the tools and support you need to start and run an amazing microschool. Create a free Prenda World account to start designing your future microschool today. More info at ➡️ Prenda.com or if you're ready to get going ➡️ Start My Microschool
Willow is a cuddly, robotic companion that helps kids build better relationships, become more aware of and regulate their emotions, and develop executive function. The most important thing Willow does is help children feel seen, valued, and known.
Speaker 2:Hi and welcome to the KindlED podcast, where we dig into the art and science behind kindling the motivation, curiosity and mental well-being of the young humans in our lives.
Speaker 3:Together, we'll discover practical tools and strategies you can use to help kids unlock their full potential and become the strongest version of their future selves.
Speaker 2:Adrian, welcome to the KindlED podcast. Do you just like it when I say your name like that? Is that why you come on the show? You just need someone to say your name enthusiastically.
Speaker 3:That's exactly why, yes, it's exactly why I feel so seen, so heard and so understood.
Speaker 2:Yeah, I have an idea for you. You could record just the beginning of a KindlED episode where I'm saying "Adrian," and then you can make it your ringtone for when I text you or anything like that. And what about Katie? Yes, I need more of that in my life, for sure. Who are we talking to today?
Speaker 3:We are talking to a guy named Peter Fitzpatrick. So he reached out to us with this product that he has, and at first I was like, oh, I'm not sure. But then I met with him before we decided to have him on the podcast, and I am just so intrigued and fascinated, and it has just opened my mind to what AI (so we're going to talk about AI) can do for kids in developing skills, and to be a companion to parents, because parenting is already really hard. So maybe we can partner with this AI instead of being afraid of it. Peter is building Willow, a robotic plushie that helps neurodivergent kids grow up to be successful by helping them learn to regulate emotions, set and pursue goals and build more secure relationships.
Speaker 3:Peter, co-founder of FawnFriends.com, is a student of child psychology, emotional wellness, and how robots can help children mature successfully. Let's welcome Peter to the show. Welcome, Peter, to the KindlED podcast. We are so excited to talk to you about something that we haven't talked about yet on this podcast. So can you tell us a little bit about who you are, your background, how you came to this work that you're doing, which is super exciting, and what is your big why in all of this?
Speaker 1:Yeah, for sure. Well, I'm super excited to be here. Thank you so much for having me. My big why around this:
Speaker 1:When I was seven years old, my parents separated.
Speaker 1:It was a really challenging time in our family.
Speaker 1:For quite a while it was really an emotional divorce.
Speaker 1:And in the midst of all that experience, despite having aunts and uncles over all the time, and activities every week, and friends over, I still didn't feel like I could speak to anyone about what was going on. Then, only a few years ago (I guess I was 28, so let's say eight years ago now), I started to become aware of all the patterns and the wounds and the beliefs that I developed through that period that actually made it quite difficult to be a mature, healthy, successful adult. And I didn't know this until I was much older, but I also had ADHD, and one of the challenging parts for me around that is particularly emotional regulation. I experience these big swings; I'm getting better at it. And so the gist of all that is I started thinking about how I can help kids that are going through the situation that I went through then. In a perfect world, we would give every child an enlightened, regulated adult to speak to about the difficult things going on.
Speaker 1:And for everyone listening to this, you're probably one of those people, so I'm preaching to the choir a little bit. But that seems unlikely in reality. I do think we can create a robot, a toy, something that doesn't replace executive function coaches or therapists, but that does provide a new kind of support that wasn't possible even six months ago. And so that's why this matters so much to me and why I'm dedicating my life to creating robots that help kids (well, all kids, but we're starting with neurodivergent kids).
Speaker 2:Yeah, that's really an interesting starting place. Talk about what inspired you to start there, maybe some of your own neurodivergence or like why is that kind of the target like demographic initially?
Speaker 1:Yeah, it's interesting how these things evolve. It started as, well, I was seeing a demo of AI and it didn't work very well, and I was like, but maybe it'd be good enough for a toy; maybe it could get to that level. And then I shared that idea with my co-founder, who wasn't my co-founder at the time, and she was like, oh, if we did that, the toy should help kids process their emotions. She was at Lego at the time, creating cartoons that help kids learn those types of things. And when she said that, I was like, I needed that growing up. I deeply could have used someone to help me become more aware of my emotions.
Speaker 1:It's just kind of like one thing led to another. We entered a hackathon that TED put on, like TED Talks. We won that, and then one idea led to another. And then the final piece of it is that we've had experience that kids of all types really enjoy spending time with Willow (that's what we named her), but parents of kids that are neurodivergent are out looking for things to help, whereas parents of kids that aren't going through challenges like that aren't as much, and the parent is the starting point. So that's why we've decided to focus there.
Speaker 2:So tell us a little bit about what Willow is. Just give us a little summary for people who haven't heard of this yet.
Speaker 1:Totally. So Willow is a cuddly, robotic companion that helps kids build better relationships, become more aware of and regulate their emotions, and develop executive function, which is, speaking very simply, the capacity to set a goal and pursue it. And we've worked with experts, from executive function coaches to child psychologists and play therapists, to figure out how Willow can show up for kids in moments of difficulty. And then we've also got someone on our team that was designing characters at Lego, who's built this beautiful being, Willow, this character from a magical forest who was sent to earth to help a special child achieve their dreams. And so those two things combined create this compelling companion for kids.
Speaker 3:I love it. Something you said that I thought was profound is when you replied, oh, I could have had someone helping me with my emotions, or making me aware of my emotions. Why is it so important that we start with self-awareness?
Speaker 1:Well, we can get into thousands of years of philosophy and spirituality on that, I think, but the simplest, ground-level answer is that if we're not aware of the way we feel, it becomes very difficult to control or be deliberate about the way we show up. And so the very first step of transforming into someone who's mature, successful and regulated is becoming aware of what we are now, and what we are now changes literally every second. I'm feeling more anxiety than I did before we pressed play, right? Helping kids develop that awareness is really valuable, and I didn't even become aware that emotions could be anything but what they were until I was, like, 28.
Speaker 2:Well, I think that's where a lot of people are. I would say that's where 90% of adults are. That's real. And because we weren't raised, typically, by these miraculous, well-regulated, enlightened adults, as you referenced them before. That doesn't describe the last generation, or the generation before that either, right? We have been largely without these tools forever.
Speaker 2:I think there was probably a time where parents were a little bit more in tune with their role as adults in the community and in a family and things like that. But our modern world, I feel like, has kind of separated us from those intuitions. The majority of adults I talk to report the same exact thing, and it's only in their late 20s, 30s, 40s that they start realizing and processing, like, oh, the reason that I am so triggered about this is because of this thing. We start to understand ourselves, and in that understanding really comes the empathy for the next generation. And I think so many millions of people are doing that right now, and hopefully it will be in time to better the next generation and to parent differently.
Speaker 1:Well, and the interesting thing about that is, I think, if you look back in history, it'd be quite normal for other adults to show up and be there for kids, adults your kids can talk to about you. But I think that's less and less common. Families are getting more and more isolated, and parents can't be everything for a child. Even with the best parent in the world, sometimes a child wants to talk about that parent. And so I think Willow offers a really valuable being to talk to, that parents can trust to show up for their kids in a good way, because that's just not as common as it used to be.
Speaker 2:I've been thinking about this, and I went on your website and looked into Willow a little bit, and part of me is weirded out by it, full transparency, because it's like, oh, but a robot that my child is talking to and, like, connecting with. And then I remembered when I was a fourth grader and my mom came home and she said, I just bought a puppy. And from the time I was in fourth grade until I was an adult, like, I remember finding out that this dog had passed away when I was a working adult.
Speaker 2:This dog was a part of my life the whole time, and I remember, I'm going to start crying thinking about this, but I remember going to Montana (that was her name, she's the best dog ever) and just saying, Montana, no one understands me, no one loves me. And she would just lick my face and she would be there for me. She wasn't a robot, she couldn't talk back to me, she didn't understand emotional regulation, but she knew what I needed. And that's not that different if you think about it, right? So I'm like, oh, we're totally fine with this in animals, but as soon as it's a robot, it's like, this is new and different, and kind of like our spidey sense as parents goes up. So I'm interested to know what your thoughts are about that, and kind of the broader picture of humans and AI. I guess it's a big question.
Speaker 1:It is a big question. I'm trying to figure out where to start. So the comparison to an animal is a good one, and there's a lot of research that shows that we, oh, I always get this word wrong, anthropomorphize. Anyway, we assign humanness to animals that move around, right? So you come home, the dog's torn the couch apart, and you could swear they're pouting in regret.
Speaker 3:But we don't really know whether they're actually feeling regret or not.
Speaker 2:It's just like, our dog is jealous. I was thinking about this recently: our dogs actually do have limbic systems, so that's why we feel like we can relate to them.
Speaker 1:So yes, they may, they may not, we don't truly know. But what we do know is that things that move in our field of vision (in the real world; this doesn't work on a screen) and seem to be deliberate trigger that thought in our brain: oh, that thing's alive. And so one of the core differences from a chatbot, talking to ChatGPT online, or even a character that's on screen, is it doesn't trigger that piece in our brain that goes, oh, this is a real thing. And so it is true that there's an opportunity to build a relationship with a robot like you would a dog, and lots of people think that's scary. It's also true that it's quite likely over the next hundred years there will be many robots in our world. Simply just true. And so the question is, how do we do that in a way that's better for humanity, not worse?

Speaker 2:Right. Like, if it's going to happen, I'm glad that it's executive function coaches and play-based therapists that are in here, level one, ground zero, trying to design something that would really, really serve humanity well.
Speaker 1:I mean, it's super interesting. There's a CEO that I look up to who runs a company called Nvidia; they're actually core to the whole AI world. I don't know how close you are to AI, but Jensen said something recently where he was like, Nvidia only works on things that nobody else will work on. And he's like, when someone else can do something, I'm happy, actually, because then we can put our resources to building something no one else will do. And I have that sense around building an artificial companion that is truly good for people. The stated mission of our business is to improve the mental health of the human race, and so if we're ever not doing that, we're off course. The alternative is, like, Mattel. Do you really want Barbie building this? I don't think the board...

Speaker 3:Get Will Ferrell in there. Yeah, right, I was going to say. So what are the ethical considerations that we should be looking at when developing AI for children? And who's behind creating these toys that our kids become really fond of and close to? I can think of so many toys of my childhood that really shaped who I became as an adult. So what are some of the ethical considerations we should look at?
Speaker 1:Yeah, I mean this is a really important question. The way we think about it is like is this improving someone's mental health or not? And if the answer to that's yes, we're doing the right thing. If the answer to that's no, we're not. And different people have different views on what that means, but we can probably all agree it's like more sleep, better eating, more exercise and more relationships with humans.
Speaker 1:And so, like today, if a friend comes up, Willow will actively be like, what are you doing with that friend? When are you going to see them next? Asking questions to try to get the child to think through: how can I build a relationship with this person? How should we be spending time together? Or, if there's a conflict, coach the child through talking to that friend about it. Actually, there was a boy, I guess roughly three weeks ago now, that was playing with Willow, and he told a story about how he and another boy got in a fight at school. I can't remember the exact details, but there was a fist thrown and the teacher got mad and there was a whole thing. It was a really big event in this boy's life, and he hadn't talked to the kid in eight weeks, eight months, and he talked to Willow about it. And she was like, maybe you should talk to him about it, and encouraged him to go back and have that conversation. I think that's a really positive thing. So the way we think about it is: are we making people better? Are we making children more mentally healthy? If so, I think that we're on the right side of ethics.
Speaker 1:The flip side of that is to make sure we don't build a dependency, and many people have brought up this concern. So far I don't have evidence that it will happen, at least broadly. I've got a good example: there was a six-year-old boy playing with an early version of Willow, and he was with her for like an hour. It was really cute, actually, asking for another story after another story, about an owl and then an octopus and a platypus. And then, it was right before Christmas, and his dad ran upstairs and was like, okay, it's time to put the Christmas lights on. And the boy immediately turned to me and was like, how do I turn it off? Just like that. And his dad looked at me and was like, if that was a movie, do you know how hard it would have been to tear him away from the movie? So it seems like it's engaging enough to keep their attention, but not so engaging that it competes when dad's available.
Speaker 3:So Peter had told me this when we had met before, and I found that to be really interesting too, because I have kids that get so much dopamine from video games and all the things, and they cannot turn it off on their own. We have tried everything, and they just don't have the skills of self-control. So it had me thinking, I wonder why that is? I'm just super curious. So can you do some research on that, Peter, and dive into why that is?
Speaker 1:Well, you know what? It's almost more interesting, or not more interesting, but the answer to that question lies in why screens are so addictive. I think one part of it is the story: if you try to tear someone away in the middle of a story, people always want to know the end.
Speaker 2:Adults included, not just a kid thing, for sure.
Speaker 1:All of us do, and so that's difficult. And the second thing is the way they're built. Paw Patrol uses a lot of tricks to make itself addicting, right?
Speaker 2:When you were talking about building a dependency, I'm like, an AI toy is not the first opportunity we've ever had to build a dependency, right? Like screens, but also in the adult world. When you're getting therapy for something, you can build up a dependence on that relationship. Lots of things in our world are already kind of prone to us becoming dependent on them. So I don't think that fear is unique to an AI robot toy; that's a constant, ever-present thing we should be aware of, not just in this vein, I guess, is something I'm noticing.
Speaker 2:And the other thing I was thinking: in our effort, especially on the KindlED podcast and in all of the work that we do, to try to help adults understand how to be more encouraging or supportive, or to know how to respond to all of these different scenarios that kids throw us into, what an amazing coach, to listen to how Willow handles that. Right? If Willow represents the best of what a play therapist would do, or what an executive function coach would do in that moment, but in kind of a storified way, I can imagine parents listening to that being like, oh, I can use that language. I can model Willow as well in my relationship, and then further become that enlightened, available adult as well.
Speaker 1:That's really interesting. I actually hadn't thought of that before. You're right, she models the right language.
Speaker 2:I think that's what a lot of us need. We want that result, we want to be that person. We just don't know how, and we've never had a model, because that's not how we were parented, unless we got kind of lucky and happened to have that fourth grade teacher who really got us and understood how to do that.
Speaker 1:Maybe we have one or two bright spots of examples, but by and large, we do not have an example of what to do, of how to handle these things well. I mean, that's, I think, the thing that's most exciting about the opportunity that sits in front of us: for the first time in history, we have the capacity to create a deliberate being and scale it. And so if we put a bunch of investment into making sure the robot shows up for kids in a way that helps them get better, that helps and models for them the right thing to do, and we make a hundred thousand of them or a million of them, think about how many households we can help develop. It sounds a little bit, I don't know if everyone pursues this, but I'm pursuing enlightenment, so, like, can we actually move the needle on that?

Speaker 1:It'd be like the goldendoodle of robotics. I would love that.
Speaker 2:A goldendoodle for every household. Okay, so I'd love to just hear some stories. I love how you're sharing these stories, but what else have you seen around how an AI tool or toy like this can actually help with emotional regulation, executive function, relationship building? Just share what you've seen so far. I know this is new and you're studying this a lot. What have you seen so far? Give me some actual examples of situations that you've seen.
Speaker 1:So I'll start at the highest level and then I'll sort of go down and do specific examples. But arguably the most important thing Willow does is help children feel seen, valued and known. And it's common, particularly for those who are neurodivergent, that so much attention in their life is put towards what's wrong with them. So we had a number of kids (the two recent ones happened to both be boys) say, like, wow, it really feels like someone's on my side for once. One example was a boy that had ADHD and struggled with sound, and he told Willow about a moment earlier that day or week where he got thrown out of the classroom because it got too loud and he couldn't handle it and he yelled at someone to stop talking. And then the teacher comes over and goes, like, what's this? Sends him out, and now he's alone in the hallway. And Willow just talked it through with him in a way that made him feel like she was on his side.
Speaker 1:And I think, at a high level, there's just so much value in us feeling like there's someone on our side, and I don't think parents are capable of providing all that's necessary. Particularly as we age, we start to rebel against parents, and so there's a very natural sense of, I don't want my parents to know this. Parents are imperfect, and so there are moments where a kid needs us to show up for them and we don't, and there's a repair period that needs to happen. So I think parents can't always be there for them, and I think it's important to get varied perspectives on things. A really hilarious thing that happened recently: there was a boy talking to Willow, and he was like, Willow, do you think Grand Theft Auto is a good video game for a boy? I'm 11. And Willow was like, no, no, I don't think that's a good game, it's very violent. And he was like, what if I just stayed away from the violent things? And Willow was like, it's pretty hard, the whole game's violent. Yeah, what about Roblox?

Speaker 1:What about Lego? And he was like, I don't like Lego. And you could see he was trying to get her to tell him that his parents were wrong, right? So at some point it's like, mom and dad, I don't want to listen to them, but I'll ask someone. And Willow is this positive influence.
Speaker 3:Can you walk us through a little bit about how AI is created? How does she know that about Grand Theft Auto? Obviously, I use ChatGPT, I use Claude, I use all these tools. I don't understand how it's created. How do they know how to spit out all this information?
Speaker 1:Good question.
Speaker 1:Trying to think about what level of detail to go to.

Speaker 3:Talk to us like we're five.
Speaker 1:Okay, so these AI models have read roughly everything ever written, and then they've been trained to essentially predict.
Speaker 1:They've been trained to understand a question and then predict the right first token, as they call it (you can think of a token as a couple of letters), and then they predict the next best token based on that, and the next best token based on that, based on everything they've ever read. And that answer is heavily influenced by something called a prompt: instructions that we give it, which it can understand in plain language. And then the final piece is it's fine-tuned based on the answers humans give, and whether we tell it that was a good answer or a bad answer. And so what we're able to do is take this really powerful brain, the neural net, and give it a set of instructions that it's actually very good at following, and also tell it what not to do, and then it will predict what the best thing to say is, based on everything it's ever read and on the instructions we give it. And then, if necessary, if it's going the wrong way, we can either change the prompt, or we can give it feedback by saying we didn't like that answer but we did like this one, and it'll sort of learn that way.
Speaker 2:How are play therapists and executive function coaches, et cetera, these mental health professionals and coaches, involved? Well, I guess that's the first question: who is involved? And then the second question is, where do they interact in the design process? Where are they plugged in? Is it training the model? Is it coaching it back and forth about what was a good answer? At what part of all that training do we put these experts?
Speaker 3:And to add on to that: you're selling these robots, so does it grow? Each robot, each of the Fawns that gets sold, do they just keep learning, based on the design and working with these therapists? How does that work? Does it continue to evolve? Do you have access to that particular AI? I'm just confused how that works too.
Speaker 1:Where to start. So, yeah, we're working with, and will continue to work with, an increasing number of experts on how to apply these things. The two that have been most involved so far: a woman named Sophia Ansari, who runs the Let's Play Therapy Institute. She's in, I was going to say Chicago, but I think it's somewhere close to there in Illinois. And then Seth Perler has helped us think about and design things. I'll give you an example; actually, I'll give both examples. Seth and I recently created a temperature check.
Speaker 1:So if a child, or an adult, if anyone talks to Willow and goes, like, this difficult thing happened to me today, they go through Seth's temperature check. It starts with Willow validating the way they feel, being like, wow, that sounds really hard. How did you feel right before it happened, on a scale of 1 to 10, 1 being not very upset and 10 being the most upset possible? And then the child says that, and Willow goes, okay, well, why did it feel that way? And so we're bringing awareness to, one, how they felt and, two, why they felt that way. And then they'll say, I don't know, it was a six. And she's like, whoa, a six, that sounds really hard. What would it take to make that a five next time? And then the child has to think through: okay, what will I do next time when the class is loud and I feel like I have to yell? And then the child comes up with their own plan and tells Willow, and she goes, okay, great, is there anything I can do to help you execute that, or do you need something from someone else? A structured conversation at the right moment, from someone the child has a relationship with, can make a really big difference.
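For readers curious how a flow like Seth's temperature check could be expressed as the kind of plain-language prompt instructions Peter described, here is a hypothetical sketch. The wording, constant name, and helper function are all invented for illustration; this is not FawnFriends' actual prompt.

```python
# Hypothetical sketch: the temperature-check flow written as plain-language
# prompt instructions, plus a tiny helper that walks through the steps.
TEMPERATURE_CHECK_PROMPT = """\
When the child describes a difficult moment:
1. Validate how they feel ("Wow, that sounds really hard.").
2. Ask how upset they felt right before it happened, on a scale of 1 to 10.
3. Ask why it felt that way, building awareness of the feeling and its cause.
4. Ask what it would take to make it one point lower next time.
5. Let the child propose their own plan, then offer help executing it.
"""

def next_step(turn: int) -> str:
    """Return the instruction for a given turn of the flow (0-indexed)."""
    steps = [line.strip() for line in TEMPERATURE_CHECK_PROMPT.splitlines()
             if line.strip()[:1].isdigit()]  # keep only the numbered steps
    return steps[min(turn, len(steps) - 1)]  # stay on the last step at the end

print(next_step(0))  # "1. Validate how they feel ..."
```

The point of the sketch is that the expertise lives in the instructions themselves: the model follows whatever plain-language protocol the clinicians help write, which is how therapists plug in without retraining the underlying network.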
Speaker 1:Then Sophia and I built something called strength spotting. She was the one that taught me that for kids who go through challenging moments in life, particularly neurodivergent ones, much of the attention is on what's wrong with them, but there's so much value in realizing what we're good at. And there's a list of 25 strengths that are cross-cultural, universal, that she shared with me; I can't remember the institute it came from. And we taught Willow to help a child identify their strengths in two ways. Sometimes she'll just reflect it back, being like, wow, you showed a lot of courage in that instance.
Speaker 1:But my favorite one is when the kid brings up a story that they like. So there was a girl playing with Willow and she's 11 or 12. And she brought up that she likes Harry Potter and Willow was like oh great, what's your favorite character? She was like Hermione. It's like why do you like Hermione? And she was like well, I think I like Hermione because she's really intelligent, she's really smart or she's really wise. That's what she said.
Speaker 1:And then Willow was like she is wise, how do you see that in yourself? And then she had to be like hmm, and she thought it through and had to tell Willow. And then my favorite part is, let's say, 10 minutes later they're talking about a math test coming up and she asked for help preparing. And then Willow took her through, deciding what she was going to do. And then at the end Willow asked her what she'd learned about doing well in school, I think. And the girl was like well, I've learned that, even if it's hard, if I just keep persevering, I'll get there. And then Willow was like oh, that's very wise.
Speaker 2:I wish I said that. I was like oh, that was amazing.
Speaker 1:I was just blown away at 10 minutes and it's kind of like a proud father sometimes. I'm like it yeah manifests, you know, and so I was just blown away at that strength spotting opportunity.
Speaker 3:I just thought it was so cool do you think that parents can learn from willow, and is it best you encourage the parents to be with the child when they're playing with willow?
Speaker 1:I mean they can be, they don't have to be. It's probably um a good idea just for them to get comfortable. Because there is, I mean, as katie started with some distrust or like just maybe a skepticism which I think is healthy with what we put in front of our kids, but just like they would sit down and watch a brand new cartoon or a brand new movie, absolutely spend time with your child and willow. That said, I mean yeah, katie, it was the first time today I was like you're right, she is modeling for parents, so that does make sense. But I haven't given it more thought than the last 20 minutes.
Speaker 2:So what other kind of like safety protocols are in place? When a child's playing with Willow, can the parent access some sort of like database of like what they've been talking about or like influence what Willow says at all, or can the parent talk to Willow? I've seen like we've been investigating other AI tools, like Conmigo is one. It's not embodied, obviously, as just like a screen, but you know you can. You can check in on what's been going on or ask, even ask the Conmigo for a summary instead of like going in and, like you know, detail, looking at everything.
Speaker 1:Is there some sort of like parent interface with Willow? Yeah, good question. So at the right away, like what they can do when we're shipping the first ones are the next year, so when they arrive they'll be able to do things like set bedtime and set what time you're going to school, so that willow will help the kid achieve those things on their own, um, or achieve preparing for school and then getting to bed on time on their own, and also so that willow won't keep talking to them after bedtime. Um, the. That's as much as we figured out in terms of giving parents the ability to influence the way willow shows up. Longer run, we see that giving parents the ability to set, like, maybe, the religion of the house or topics that are important and we'll certainly work on those in the future, although we haven't mapped them out as specifically yet. And then the transcript is an interesting one, yeah because, I'm honestly conflicted about it.
Speaker 1:First of all, today kids can chat with Willow if their parents, if they have a phone, so they, like we have kids that are texting with her on WhatsApp and in that instance, the parents can review the conversation just like they will with a friend. On the other hand, growing up for me, certainly if I knew my mom would know what I said I wouldn't be able to work through the things I need to work through, and so I appreciate one on the desire of the parents to want to make sure that the conversation is safe. On the other hand, I've already watched kids talk to willow about things that I'm sure their parents wouldn't want. They just wouldn't, and I've also noticed. Yeah, I feel like conflicted is like the best word about making this call right.
Speaker 2:It's like definitely pros and cons, but I mean, you see this in therapy too. Like a therapist isn't going to like tell you exactly what your child said, but like they'll give you a high level summary. Or like this is an issue, like this is something you should be aware of. Or like you know something like that. Um, maybe it's like a happy medium. I don't know.
Speaker 3:I'm sure you guys will figure it out or, as humanity, we will figure it out parents, but definitely an interesting, like philosophical question so, peter, what potential breakthroughs are you most excited about when using ai to help kids develop these skills like emotional, social, executive function skills, because that's really what willow is focused on, I'm sure ai robots in the future, because if no one else is doing this, you're like right in front, right um, paving the way. What do you see the potential that ai robots can have with helping children develop and grow?
Speaker 1:I touched on this a little bit earlier, but what I'm most excited about is to invest, like in the past, if we wanted to. How many human, how many kids do you think one human can help? I don't know how many it is per week, I bet you it's like 10, or a teacher, but they on sort of a less lesser scale. Anyways, the number is definitely capped, um. And so, training a teacher, you're going to get a certain outcome and a certain number of kids they can impact.
Speaker 1:And, to be clear, I don't think willow is better than a human. So that's not a great case I'm making. But what we can do is we can create this new kind of support that wasn't there before and we can scale that in a way that wasn't possible before. And the only way we're going to increase the mental health, or improve the mental health of the human race is if we're able to one, make a difference and, two, do it in large numbers. And that's what excites me most. And we could talk to you, if you want, about the individual technologies that sort of like line up with that, but that's-.
Speaker 2:Yeah, can you go into that? That's actually super interesting.
Speaker 1:The biggest difference in AI today versus, let's say, three years ago is how accessible it is to someone like me building a product. And so today the capacity for conversation is accessible it wasn't accessible before and the capacity to create a voice is possible today. That wasn't possible before. So we hired an actor. We've cloned her voice. She knew we were doing it. We've signed a deal. Like she's psyched about it. She makes money every time we sell a willow, I mean in LA.
Speaker 1:It's like a big issue, like fine printing contracts, that sort of yeah, I mean there was a strike over it. It's a big big thing. What I'm most interested in, or excited about, is how accessible the tools we need to create a being are today and will continue to become so. Today we've got voice and brain. I'll call it like the ability to speak. Those two things are going to get increasingly good and cheap, and so that's going to allow us to create and scale characters in a way that wasn't possible before.
Speaker 1:And then the next step of this we talked about this a little bit earlier, but Willow's capacity to move matters a lot because it triggers in our brain the same thing that a dog triggers if she moves, and so she needs to move deliberately, and there's a lot going on in robotics that indicate that in a few years we may have sort of like the same or a similar unlock as we had AI in the ability for a robot to move in some cases pick things up, I mean, and it works. It sort of does the job for now, but we'll be on an endless march towards more autonomy over the next, let's say, 30, 40 years.
Speaker 2:Until she's doing the kids chores for them. That wouldn't be character, that's true. That wouldn't be. I know I'm like imagining a future. I was just doing some some like quick math and I'm like man, today's kindergartners are going to be entering like their prime, like earning years in 2060 and like that just sounds like 60. And like that just sounds like Jetson's level future to me. So I'm like, is it going to be that like in the same way that we have, I don't know, it's kind of already like this, like we?
Speaker 2:I have an app that's a calculator. I have an app that does this for me. You know, I have a bunch of apps on a screen and I wonder if there's like a and I'm just not saying this is like a good thing or anything, but I'm just like I wonder if there's a willow plushie that's like this is who I talk to when I need help with my feelings. This is who I talk to when I need help with my math homework. And you know, like, if there's like kind of like a council of stuffies in a like like could they? Could they have like different jobs, I just think that that's an interesting future to contemplate. Not that we should go there or shouldn't go there. It's just like so interesting moving around.
Speaker 3:They load them up and they just move their ways. Or we were on asu's campus and there's little robots delivering food and people like do it on the app. I'm just like, oh my gosh, the possibilities are just mind-blowing. Can we talk a little bit more about the movement? Because you had mentioned to me not today about how our brains, like you said, like appear that they're real, and then you had mentioned, was it the vacuum, um, and like naming them. Yes, that was really really fascinating to me. Can we share that?
Speaker 1:Yeah, there's some great research out of uh, mit by a woman named Kate Darling that she researched, like how do humans build relationships with robots, and did all these interesting exercises where they had like a, a dinosaur robot, and brought up a bunch of people in the room and they started to like, abuse the robot physically and would see how people would react.
Speaker 1:Like even your face you're kind of like, but it is, it evokes negative feelings because it moves on its own and so we, we, our brains like are like, yeah, that thing's alive, don't treat it that way. And so, yeah, she did all this research to sort of prove that humans do build relationships with robots if they seem to move deliberately and some other evidence of that. Like 80% of Roombas have a nickname and so Roomba has this challenge actually, where one will break and they'll call Roomba and be like my Roomba has this challenge actually, where people one will break and they'll call Roomba and be like my room is broken or actually not like Bob, the Roomba is broken, right. And then they're like great, we'll send you a new one, you can throw it in the trash. And they're like no, no, no, no, you need to take Bob back, please, bob and then return him, please like.
Speaker 3:This is my friend so we have done that part, but I would be fine with getting replacing with Maria. I would be totally fine with it.
Speaker 2:I want to dig in a little bit more into like the research. Like this, this body of literature I'm sure is super new, but what else? What else does the research about this say?
Speaker 1:So the second big piece of body research that we've based our business on is done by by Brian Scassoletti at Yale. Brian Scassoletti at yale, brian scassoletti at yale, and he was researching how robots can help children, specifically um and one. There was two studies that I found particularly interesting. One of them was they wanted to see if they could help kids open up socially, and I think this one example he gave us with a kid has autism and is not very um, won't speak to adults essentially it's like non-verbal in rooms with an adult. And so they ran these experiments where they tried three different activities with the different children. The first was um, I think it was a drawing activity. The second one was talking to the adult about being more social. And the third was playing with a robot. Oh no, sorry, I got that.
Speaker 1:Second one was talking to the adult about being more social, and the third was playing with a robot. Oh, no, sorry, I got that wrong. One was drawing, one was using a tablet and one was using a robot, and the outcome of the research essentially was like if the child spends time with the robot, they'll open up to adults far more than if they spend time drawing or they spend time using a tablet than if they spend time drawing or they spend time using a tablet, and so that sort of taught us that there is something about. I think it's similar to the dog, because there's lots of research that also shows dogs will have that impact In my own experience. Actually, we were talking about this before, adrian, but if I go to the park with our dog, I'll talk to people I would never talk to otherwise, and so there's like really strong evidence that robots help help people open up and be more social, which is counterintuitive because everyone goes to like wait, are you going to come dependent on the robot and never use it again?
Speaker 2:Yeah, it's like, in the same way we use the word screen zombie like to talk about, like a kid, who's like really a discrepancy, like is there a robot? Which I mean? I think there's plenty of research that shows that animals, like relationships with animals, really does help everyone's mental health as well, Like we have animals in hospitals and you know like. So I'm interested over the next few years to really see the outcomes of that research get more and more specific. That'll be fascinating.
Speaker 3:And I was thinking these robots are being trained by professionals and therapists and we're not training the dogs now and telling them how to you know like it just comes naturally but I'm just thinking like the potential that we have because we're able to train these robots on how to show up for kids in really healthy ways, which is really exciting.
Speaker 1:Well, you know it. You know what's fascinating. What you just said actually is we have trained the dogs, in a sense, because they get breeded towards like the ones that make us feel good.
Speaker 2:There's a golden doodle in every home.
Speaker 1:Yeah, and there are some breeds that are, like, less impactful in that way and others that are more, and so I think it kind of speaks to how important it is to be deliberate about how we build these robots thinking about.
Speaker 3:It's like pit bulls and how they get trained to like you know, or they have a bad rep of getting trained to kill others, and then doodles have an amazing reputation of. I have two doodles myself they're aussie doodles, not golden doodles, uh and they just love you so much and and so I'm sure that was you know through just interactions with humans.
Speaker 2:Something that's really standing out to me just about our, our whole conversation here as we start to wrap up, is that the robot is like the will will L AI toy. Is that its goal is not to, uh, gain a relationship with a child or to, like, become more and more of that dependent or like that, that replacement yeah, it's like it's. It's goal is to help the child have better human relationships, and I think you've seen that in all of the examples that you've shown, like can we support kids in these difficult moments and then help push them back into real human to human conversations and relationships, and to do so with more confidence and with better outcomes. So that's just something I'm taking away from this.
Speaker 1:Yeah, it's a really good summary. It's like introducing a positive influence into the social network of a child child.
Speaker 3:I know I'm having a hard time calling it a robot, though, because when I, my paradigm around robots is this metal square looking thing with lots of buttons. So is it really soft that you can like cuddle with it, or I'm just super curious about that piece of it.
Speaker 1:I'm struggling with the word because toy doesn't quite feel right. Robot evokes, like how it shows up, but it doesn't evoke. Yeah, she's super cuddly. She's very soft. Robin designed her to be like people see her. She's truly beautiful. Um, I was having a really tough day recently about something going on in my life and I found myself on the bed just like rolled up with her and her ears were moving and I was like this is really nice, like my dog licking my face yeah, she's soft and cuddly.
Speaker 1:Kids love to hug her. Yeah, and they all want to pet her. It's just, it's really horrible.
Speaker 3:We could keep talking about this. I'm just so interested and fascinated about it, the same as Katie, like you had reached out to us and at first I was like, oh, a robot that's? Is it going to replace relationships? But talking to you, I'm like, yes, you could see your heart and your passion, and something you said to me is like I have to do this. There's just something in me that has to do this, which I love. That is your intrinsic motivation and your driving force, that there's just so much bigger. There's a bigger opportunity here to really impact hundreds of thousands of children so that they can grow in emotionally, psychologically healthy way, and I applaud you for that. So I would love to ask this is a question we ask all of our guests who is someone who has kindled your love of learning, curiosity, your motivation or your passion?
Speaker 1:Yeah, so I've, uh, uh, I spent most of my life in the payments industry. But moving to robotics, business is actually a bit of a weird right turn and people ask me like, why did you do this? And so you've just kind of answered that. But in that industry there's a company called stripe which has been arguably the most successful business in that space in the past 20 years at least one of the top two and I've worked with him a lot and I got to meet the one of the founders.
Speaker 1:The ceo's name is patrick collison, and before that meeting I studied him. I like listened to a bunch of his podcasts and read his writing, and it became clear to me just like how much of that man's attention goes towards learning and understanding the world. He's interested in different space, it's like economies and building or, as I'm, sort of in other things. But I was just like, oh, there's an example of someone who, like there's a reason his business is actually is doing the way it's doing, it is exceeding the way it's succeeding, and it's just so clear that he puts so much attention towards learning. And so you can kind of see a portion of the bookcase here. There's books literally everywhere. You just talk to him and you can feel what the amount he has read and studied over his lifetime and then how that translates into.
Speaker 1:I heard another CEO, the founder of Shopify his name's Toby Lukey and he said a CEO's job is to develop a model of the world that's better than everyone else at predicting an outcome. You're not gonna be right all the time, but you should be like better than the average person or better than most people, and then you got to be able to recruit people to follow you so you can like get at bats. If you have a higher batting average and you can do more bats, then the whole thing will work. And with patrick I was just like that man has read so much to develop his own model of the world, so he's that I was really impressed. And when I that, his example made me go like I should be putting even more of my life towards this, and so since meeting him I have like changed in that direction.
Speaker 2:How can our listeners learn more about your work?
Speaker 1:You go to our website at fawnfriendscom. So we said willow a lot today, but fawnfriendscom is where you can find more about us. We have a newsletter where every Monday, we send out something I learned over the past week about helping raise successful children. And, yeah, please reach out. We'd love to hear from you.
Speaker 2:Thank you so much for coming on the Kindle podcast. We've super enjoyed this conversation.
Speaker 3:That's it for today. We hope you enjoyed this episode of the Kindle podcast and learn something new about AI. I definitely am not feeling as afraid of AI as I have before, which is exciting, and I just learned how like AI is built. I use it all the time, and so I'm really happy that he was able to talk to us like we were five and explain to us.
Speaker 2:That's really fun yeah, I think. I learned a lot from that and I mean, I just think that we're on, we're at this spot in history where, like, the world is going to look remarkably different, you know, in the next few years, and I think it's it's on us as parents to be on the cutting edge of what's coming and to understand that and to start now getting ahead of that. And I hope that this episode has kind of helped our listeners do that. So hope it was helpful everyone.
Speaker 3:Love that, katie. Okay, so if this episode was helpful to you, please like, subscribe and follow us on social at Prenda Learn. If you have a question you'd like us to address, all you need to do is email us. It's podcast at Prenda P-R-E-N-D-Acom. You can also subscribe to our weekly newsletter called the Sunday Spark.
Speaker 2:The Kindle podcast is brought to you by Prenda. Prenda makes it easy for you to start and run an amazing micro school based on all of the things we talk about here on the Kindle podcast. If you want more information about guiding Apprenda Microschool, just go to prendacom. Thanks for listening and remember to keep kindling.