Artificial intelligence is advancing faster than ever, with a new crop of generative AI programs that are creating art, videos, humor, fake news, and plenty of controversy. The technologies powering this latest slate of tools have been in the works for years, but the public release of these programs—particularly a new chatbot enabled by OpenAI’s GPT system—represents a big step forward for machine intelligence. Same with the image-generating app Lensa, which creates painterly selfies that have captured the public’s imagination. Now, engineers are asking chat programs for coding help, students are using AI to generate book reports instantly, and researchers are testing the tools’ ethical boundaries. It's all gotten very weird, but AI is about to get bigger and even weirder still.
This week on Gadget Lab, WIRED's artificial intelligence reporter, Will Knight, joins us to talk about ChatGPT, how generative AI has grown up since the early days, and what the latest tools mean for everything from kids' homework to online disinformation campaigns.
Read Will’s WIRED story about ChatGPT. He’s also written a bunch of recent stories about generative AI. Follow all of WIRED’s AI coverage. Read more about Lensa from Olivia Snow. Try the new chatbot for yourself.
Will recommends the Tractive GPS Tracker for Cats. Mike recommends Das Keyboard MacTigr mechanical keyboard, which he reviewed this week. Lauren recommends that you keep an eye on your keys using a Tile or AirTag tracker.
Will Knight can be found on Twitter @willknight. Lauren Goode is @LaurenGoode. Michael Calore is @snackfight. Ring the main hotline at @GadgetLab. The show is produced by Boone Ashworth (@booneashworth). Our theme music is by Solar Keys.
How to Listen
You can always listen to this week's podcast through the audio player on this page, but if you want to subscribe for free to get every episode, here's how:
If you're on an iPhone or iPad, open the app called Podcasts, or just tap this link. You can also download an app like Overcast or Pocket Casts, and search for Gadget Lab. If you use Android, you can find us in the Google Podcasts app just by tapping here. We’re on Spotify too. And in case you really need it, here's the RSS feed.
Lauren Goode: Mike.
Michael Calore: Lauren.
Lauren Goode: Mike, how concerned are you that your job is going to be replaced by artificial intelligence?
Michael Calore: Not very.
Lauren Goode: OK, but how concerned are you that you might eventually fall for misinformation that's generated by an AI that has taken another reporter's job?
Michael Calore: That is much more concerning to me.
Lauren Goode: Yeah. It's a pretty scary thought, right?
Michael Calore: Yeah, it's like a doomsday scenario in our world.
Lauren Goode: Well, we're on the Gadget Lab, where we talk about doomsday scenarios, so let's get to it.
Michael Calore: Great.
[Gadget Lab intro theme music plays]
Lauren Goode: Welcome to Gadget Lab, the podcast where we explore the latest and greatest gadgets in technology. Join us as we unbox, test, and review the newest devices and apps and provide expert analysis on the latest trends in the tech world. Whether you are a tech enthusiast or just looking for some buying advice, Gadget Lab has something for everyone, so grab your headphones and join us as we dive into the exciting world of gadgets and technology.
OK. So that's not our usual intro. That is the intro that was written by ChatGPT, a new generative AI system from OpenAI. But first, let's do our actual intros. I'm Lauren Goode, and I'm a senior writer—senior robot writer?—at WIRED.
Michael Calore: My name is Michael Calore, and I'm a senior editor at WIRED.
Lauren Goode: We are also joined by WIRED senior writer Will Knight, who joins us from Cambridge, Massachusetts, and who has such a delightful accent that we couldn't help inviting him on. Hi, Will.
Will Knight: Hello. Thank you for having me.
Lauren Goode: Is that your real accent or was that generated by a robot?
Will Knight: It's entirely AI-generated.
Lauren Goode: Really? This stuff is getting really good. OK. So if you're a very online person or you spend a lot of time on Twitter, you've probably noticed people sharing Q&As with random questions, and the answers at least appear to be incredibly smart, thorough, and authoritative. These are being generated by the new AI chatbot, ChatGPT, and it's part of a broader trend of what's known as generative AI. This is artificial intelligence that's not just making our software a lot smarter behind the scenes. It's actually creating its own human-ish conversations, videos, and even advanced art. Will, I've said before in conversation, this feels like the year of generative AI, but this stuff has been in the works for years now. So I was hoping that first, you could define generative AI for us, and then tell us a little bit about the history of these giant entities like DeepMind and OpenAI.
Will Knight: OK, sure. Yeah. So you're right that this stuff has been in the works for a while, and the first wave of AI advances we saw were algorithms that could do discriminative tasks, so they would recognize what's in an image or recognize the words in speech. But for a long time, there have been other kinds of algorithms, including ones that will learn from a distribution of data and then be able to reproduce parts of it, and that's what these generative models are. They're able to reproduce what's found in a data set of human-made images or human-made music or whatever. And this has been an amazing year with the release of these AI art tools and now ChatGPT. But the thing that's really important to remember is that they are just slurping up and regurgitating, in a statistically clever way, stuff that people have made.
Michael Calore: So in the case of a chatbot, what is the data set that these software engineers are feeding it in order to get it to spit out speech that sounds human and well put together?
Will Knight: Yeah, that's a good question. The truth is that they're not a hundred percent clear on exactly where it all comes from, but it's clear when you look at some of the text that it comes from the web. They've scraped huge amounts of data from the web, and I think they're also feeding in lots of books. So it's just millions and millions and billions of lines of text written by people from all over the place. And so it can find these patterns in this text that will resemble the way a person writes, but it can also find bad things in that text, like bad words, biases, horrible stuff.
Michael Calore: One of the big standout features of ChatGPT that makes it feel all the more human is that it can follow a sustained conversation. Like everybody who has experimented with Alexa or Siri or Google Assistant knows, if you ask one of those AI chatbots, which is essentially what they are, a question and then you follow up with an additional question that plays off of that first question, often those systems don't know what to do because they treat every question like a brand new query but—
Lauren Goode: They're getting a little bit better.
Michael Calore: Yeah, they have been getting a little bit better. Yeah.
Lauren Goode: You could say, “Who is LeBron James,” and then follow up with, “And how tall is he?” And it might get it, but they're not very clever.
Michael Calore: Right. This is something that all of these companies have really been pushing as the next big step in this technology, but ChatGPT seems to be able to follow a conversation pretty well over a series of questions, right?
Will Knight: Yeah, that's right. There are a couple of interesting things. One of them is that it's got this memory, short-term memory. We're starting to see that more with these chatbots, these large-language-model chatbots. The other thing is that it had this additional training to make it good at answering questions. So it's using this model that's been out for a while that can extract information from the web, but they trained it in this way that they gave it human answers to questions and gave it a reward like a treat to try and make it better at answering the questions properly. And it turns out that that produces a much better answer to your questions, more coherent, more meaningful. And adding some memory as well has made something that's really compelling to people and seems to have caught everybody's imagination. The funny thing is, going back to the very early days of AI, the first chatbots, people were willing to believe that those were human. There's famously this one that was made at MIT called ELIZA, where it was a fake psychologist and people would tell it their secrets. So I think we're just really, really well designed to use language and conversation as a way to imbue intelligence on something. I think that's part of what's happening here.
Lauren Goode: What are some of the most interesting responses you both have seen from ChatGPT over the past week or so?
Will Knight: I think for me, some of the most interesting things, interesting answers, are some of the ones that are amazingly articulate and look very good, but are actually complete nonsense. They're just totally made up. And that's one of the things about these models, is that they're so different from human intelligence, in that they learn this pattern that is human language, and a lot of the signals make it coherent, but some of the important facts you'd need to know, or would need real-world experience for, they just completely don't care about at all. So it's fascinating to see that, and it would be very strange if a person was that good at making things up.
Michael Calore: For me, I think the most interesting things that I've seen are people who ask it to write book reports. So whether they're really students or they're just doing something student-like, they're saying, like, “Hey, I need 300 words about Catcher in the Rye.”
Lauren Goode: I just knew you were going to say that for some reason.
Michael Calore: Catcher in the Rye?
Lauren Goode: Yeah.
Michael Calore: Yeah. It's a very book-reporty book. And then ChatGPT is able to spit out something that the student could turn in, mostly unedited, and it might have a few errors in it, and like you said, it might have some nonsense in it that has crept in somehow, but it's pretty scarily accurate.
Lauren Goode: Yeah. You could definitely see how this would lead to some plagiarism. Is it really plagiarism? That's a whole other question. I think I've asked it to do some creative things, and I've gotten mixed responses. I asked it to write a children's book proposal about an adorable boy cat named Nugget, which happens to be the name of my objectively adorable boy cat, and it did write a potential children's book, but it wrote it in poem form. It wasn't entirely rhyming, but it was pretty clever. I was really impressed by that. But I also, Mike, you'll appreciate, I wrote, “Write a sitcom where Lauren Goode and Michael Calore are housemates.” And it said—
Michael Calore: Oh no.
Lauren Goode: We combine the cats. “Unfortunately, I am unable to write a sitcom as I am a large language model trained by OpenAI and do not have personal experiences or creative abilities.” But that's not necessarily true. I've seen on Twitter, the CEO of Box, Aaron Levie, has been having a lot of fun with this over the past few days. He's posting every day some new idea. And he just posted the other day an exchange where he prompted it to write a business proposal for a cloud company named Box that is looking to get into the cardboard box business. And it actually wrote a rather funny and clever proposal on why a cloud storage company would want to create an additional revenue stream. And so the joke is that this is going to replace CEOs. It's really not. I don't think it is. It's really not going to replace journalists, but there are some really fascinating examples being shown right now.
Will Knight: Yeah, some of the silliest things are the best. There was another one where somebody asked for, I think it was a history essay, but the person writing couldn't stop bragging about the pumpkins that they had grown. It was actually really well done.
Lauren Goode: So, Will, before we go to break, we should talk about the folks behind ChatGPT. OpenAI is a super interesting company. It claims its mission is to make AI open and accessible and safe. It started as a nonprofit, but now it has a for-profit arm. It actually held back on launching one of its earlier chat tools because it deemed it too dangerous, which created a lot of attention for that chat tool. But some researchers and AI experts say that OpenAI is just really good at marketing and, in some ways, maybe playing to people's fears about AI. And we should also note that Google owns a company called DeepMind that is working on similar large language models. Will, how alarming is it to you that these pretty big, well-funded, powerful entities are the folks who are building this AI, and what else do we need to know about them?
Will Knight: I think we should definitely be concerned if AI is just built by these big, powerful companies with vested interests in certain directions they want to go. One of the jokes about OpenAI is that they're not particularly open. They definitely produce really cutting-edge, good research, but they trickle it out sometimes, and they're not necessarily releasing the algorithms, which makes it harder for other people to reproduce these things. They get to decide what they can and can't do, and I think there are those who believe that these tools should be more open, should be available to anybody, so they can experiment with them and see what they can and can't do. DeepMind is probably the top AI research company there is. It's as if somebody built a large university department dedicated to AI, which is doing a ton of stuff, though the direction of that is defined by them and by their owner, Alphabet. One of the other recent things that happened was this completely open tool from a company called Stability AI, which was an art tool, but they released it so that anybody could use it, and they made it much more accessible, so the controls on it, the limits on what you could do, were removed. That was controversial, but I think there's a really good argument that these tools should be more available and not just in the hands of these big companies.
Lauren Goode: And because we can't get through a show these days without mentioning Elon Musk, we should also mention that Elon Musk is a cofounder of OpenAI.
Will Knight: He is a cofounder, although he distanced himself from the company at some point, claiming that there would be a conflict of interest with Tesla's AI work.
Lauren Goode: Interesting. That wouldn't seem obvious to me, but …
Michael Calore: So real quick. We know that GPT-3 is the model that the chatbot is built on, and we're expecting GPT-4. Is that right?
Will Knight: Yes. The rumor is that GPT-4 will come out next year and will blow everybody's minds again, as GPT-3 and 2 have. I'm sure it'll still have fundamental problems. It will no doubt be a lot better, though. But yeah, the AI behind ChatGPT is called “GPT-3 and a half,” or 3.5. So we're already seeing some of the progress towards 4. One of the things that OpenAI did, which has led to some of the success it's seen, is that it just determined that building much bigger models, with more data and more compute to a ridiculous degree, produces new results. People didn't really believe that, or some people suspected you wouldn't get great new results if you just scaled everything up. But they just put a huge amount of resources into making their AI algorithms bigger and hungrier.
Lauren Goode: Well, they might be bigger and hungrier, but that doesn't necessarily mean they're more accurate. We're going to take a break, and when we come back, we have to talk about misinformation.
Lauren Goode: OK. We need to talk about misinformation—and know that this part of the show is not an AI-generated script. This is a real human-written script, one that, as far as I know, isn't littered with untruths. Because one of the biggest concerns with generative AI is its potential to mess with our general consensus of what is a fact. A chatbot like ChatGPT takes such an authoritative tone and spits out answers so quickly that its answers might seem wholly believable, but it might also be lacking truth or nuance, and in some cases, it could even be downright racist or misogynistic. Will, let's move away from OpenAI for a few minutes and talk about Meta, because Meta faced some of this recently when it released its chatbot, Galactica. What happened there?
Will Knight: Right. Yeah. Meta made this chatbot, which was designed to answer questions about science, so it'd learned from tons of scientific publications, and they put it online just like ChatGPT to see how people use it. But quickly, they found, or people found, I should say, that it would spit out these awful biases, these terrible ideas, partly because there are traces of those sorts of biases in the data, and partly just because it has no idea what it's talking about and it will conjure up all sorts of things, even if they are horrible. So after a very short period really, Meta took it offline.
Lauren Goode: And one of the more ridiculous things that it was spitting out was the history of bears in space. Did you know that bears were in space?
Michael Calore: Oh yeah, I read all about it.
Lauren Goode: Yeah. I'm sure WIRED has done some articles about this in the past too. So obviously, these things are rife with misinformation. So what happens with large language models when someone takes a script that was generated by a chatbot, and let's say that script is factually inaccurate, and then uses that and feeds it to another large language model, and the cycle just goes on and on and on? Are we entering an era of meta-untruths? And I don't mean Meta, the company. I mean untruths layered on top of untruths. These falsehoods just perpetuate. And I guess maybe I'm wondering if this is any different from a quack on YouTube saying something with authority about our health that's just not true.
Will Knight: I think we are already in that era, to be honest, because Stack Overflow, the website that hosts answers to coding questions, already banned people from posting ChatGPT-generated code, because it can look really good, but it has these flaws, or it may have flaws, it may have bugs and whatever. One of the worries I have is that there are already these companies that are using language models to generate tons and tons of content, like marketing copy and blog posts, which they're putting online. So there may be this huge, growing amount of stuff online that's completely unmoored from truth, not made by people, and just weird. That could eventually feed back into these models and just exacerbate that issue, I guess.
Michael Calore: So we're seeing these models grow up in public. People use them, we have a lot of fun with them. We notice some of their flaws, and then the next version of it gets better. So all of these models are perpetually iterating and getting a little bit better each time. And one of the big pitfalls that we've already talked about a little bit in early AI was that in chats they would often spew racist statements or sexist statements, or they talk about Nazis—usually at the prodding of the person asking the question, because the person is trying to test its limits and see what it will say, but not always. Sometimes it just says these things that are a little bit more hidden or coded messages. They've been getting much better at that particular problem, haven't they?
Will Knight: Yeah, there's a lot of research going into trying to guide these language models not to produce bad stuff, hateful messages and so on. And yeah, there's a lot of work going into that. It's very far from solved. And you can see with something like ChatGPT that sometimes it's a hack. They try and prevent certain words going in. You can often get around those. But how to make these models inherently behave how you want them to is much more challenging, because that has to be part of the learning, and it's an unsolved problem. How to make them accurate is an even bigger conundrum, because understanding what's true really does require some greater understanding of the world. One of the weird things about these language models is that they're frozen in time. So if you ask ChatGPT some stuff, it'll come up with a very dated understanding of politics or whatever current events. And so how you create something that has this real-world understanding, and therefore understands the truth, is going to be a very big challenge. And there is also the fact that truth is not clear-cut all the time. There are certain scientific truths which are accepted, and there is an awful lot of stuff which is gray area and that people will disagree on. That's going to be interesting, and who knows what the ramifications will be of language models that lean too far to the left or right.
Lauren Goode: Yeah. When you asked, is it getting better, I guess my question is, what does better mean? Does better mean that it's getting faster, smarter, more human-like, that we find better use cases for it? And then does that run in parallel? Do those improvements happen in parallel with combating misinformation? Because it seems to me like the technology could get better, smarter, faster, but the misinformation problem would persist.
Michael Calore: It might. It might. And that is, I think, a big problem, because you want to have the best and smartest product out there before everybody else does. But when you put it in front of people, people are going to test it in ways that you didn't think of. Right?
Lauren Goode: Right. Yeah. We're humans at the end of the day who are putting these prompts in there, in this chat box. And also, it's pulling from texts on the internet. If it's not limited to rigorous scientific research papers, the foundational text could be flawed.
Will Knight: I think it's also a problem because humans are very well evolved to respond to language in inadvertently really strong ways. And so if you have something that just gets better and better at persuading, convincing, articulating things, people are going to buy that. And so you could imagine chatbots that are going to become very, very difficult, much more difficult to distinguish, even for a person who's putting the effort in, from a machine. And those could be just unmoored from reality or designed to give you a particular type of misinformation.
Michael Calore: Something else that humans respond to is flattery. I want to ask about Lensa. This is the app that is very hot right now. You feed it selfies and it gives you a magical, painterly version of you. But from what I can tell, everybody is giving it selfies and it's kicking back versions of them that are really hot. It makes everybody look super attractive. Even if it's already an attractive person, it enhances some of their qualities. A lot of women have noticed that when they feed it selfies, it shows them pictures with larger breasts—
Lauren Goode: Huge boobs.
Michael Calore: … fuller lips, bigger eyes, all of these things that are psychologically coded as attractive to the Western mind. This is misinformation, just of a different sort, right?
Will Knight: Yeah, totally. It's interesting that there are so many filters you can use on things like TikTok, and even some of these video apps will do that in real time. There's a Chinese one, which is their replacement for Zoom. And if you go into the details—I don't think you can do this in Zoom, or maybe I've just never seen it—you can go in and change your face in any dimension you want. You can make your lips fuller, all this in real time. I guess we are heading toward some era where you can't believe anything you see or read or whatever.
Michael Calore: So what you're saying is that you're not actually a fairy prince?
Will Knight: No, sadly not.
Michael Calore: Surrounded by lilies? Because that's what I see when I put your images into Lensa.
Will Knight: Oh, is it? OK. I have to try that.
Michael Calore: I hope you don't mind that I've been feeding Lensa photos of Will Knight.
Will Knight: No. As long as I can see them, as long as I can take a look.
Lauren Goode: I guess it depends on how far you want to extend that analogy. If it's misinformation to take the app du jour, which happens to be this super-weird creative AI app, and create these AI-generated images of yourself, how different … and those are not real. Those aren't what you look like. How different is that from just putting a smoothing filter on Instagram?
Michael Calore: Right. Not that much different probably.
Lauren Goode: So we just have our online avatars, our online personas and our IRL experiences, and those things are now wholly divergent, and that's it. That's the world we live in. It'll have a second life.
Michael Calore: Also, who's to say which one is reality and which one is the simulacrum?
Lauren Goode: Oh, such a good word. That's such a good WIRED word. I feel like that appears in at least every issue of WIRED. That's true. Who is to say? On that note, let's take a break, and we're going to come back with recommendations. Will they be human-generated or will they be AI-generated? TBD.
Lauren Goode: This is the part of the show where we give our weekly recommendations, but given today's subject matter, we thought we would turn this part over to the AI chatbot again. So I asked ChatGPT, “If you had to recommend something to a friend, a book, a podcast episode, an app, a food, an album, what would you recommend?” And it responded, “As a large language model trained by OpenAI, I do not have personal experiences or preferences so I am unable to make recommendations based on my own interests or experiences. My sole purpose is to assist you” … well, you're not really assisting me now … “with your questions and provide you with information based on the input I receive. I'm not able to browse the internet or access external information, so I'm unable to provide recommendations for books, podcasts, apps, foods or albums.” So I think we have to turn back to good old-fashioned flawed humans for our recommendations this week. Will, that's not to call you flawed, but we should start with you. What is your recommendation this week?
Will Knight: Proudly flawed and happy to. I can recommend the GPS tracking collar that we've got for our expensive hypoallergenic cat, which makes it very fun to see where we can … We know where she is if we need to try and find her, which I've had to do a couple of times, and you can see where she's been through the night, all this little GPS trace. There's even a little leaderboard of local cats and national cats, and we have some friends with cats who are on there, so it's a little social network. It's a bit creepy, a bit Big Brother-ish, but at least we know where she is. Unfortunately, she got frightened by something and has been stuck at home for the last three days, but I can see where she was, so I may go and try and figure out what it was that terrified her.
Lauren Goode: And how was she doing on the leaderboard?
Will Knight: She was number one.
Lauren Goode: Number one. Leona.
Will Knight: In the neighborhood, yes. She's very active.
Lauren Goode: That's amazing. Also, it's so funny that you said it's like a social network for cats, considering cats are incredibly antisocial.
Will Knight: That's not true. That's a myth. They're very social.
Michael Calore: It's misinformation.
Lauren Goode: Mine is social with me but he hates everyone else.
Will Knight: Yeah. Well, I'll cross-reference the GPSs, because I think they're going around together and doing things, maybe. There was a brilliant British TV show where they put cameras on cats, and they actually found that the cats had much bigger territories than anyone thought, and a lot of social activity that they never realized existed before.
Michael Calore: What's the name of the collar?
Will Knight: It's called Tractive.
Lauren Goode: And how much does it cost?
Will Knight: Oh, I think it's $50 and then there's a subscription. I can't remember exactly what that is. I try and put it out of my mind, but my wife loves it so much. We're well and truly subscribed.
Lauren Goode: So basically, we know who runs your household and it's Leona, the cat.
Will Knight: Absolutely, a hundred percent.
Lauren Goode: That's a pretty good one. I think we're becoming the podcast for smart pet collars because when Andrew was on a few weeks ago, he recommended a smart collar for his dogs.
Michael Calore: Yeah, we're all about surveillance on this show.
Lauren Goode: Yeah, clearly we are. We're our animal overlords. Mike, what's your recommendation?
Michael Calore: I would like to recommend a mechanical keyboard. This is something that I just reviewed, so if you're interested in learning more about it, you can read my review on WIRED.com, where my writing appears. It is a keyboard by Das Keyboard, a Texas company that makes excellent, professional, high-end mechanical keyboards, and it's called the MacTigr, so it's also cat-themed. It's specially made for Mac compatibility. Most mechanical keyboards have a PC, or Windows-ready, layout on them. So when you plug them into a Macintosh, you can get the numbers and the characters, the letters, to show up, but all the other stuff doesn't usually work, like the special keys, the modifier keys like command and control. If there's a volume knob or play/pause and media control keys on a mechanical keyboard, often they won't talk to whatever application you're using on a Macintosh. So an out-of-the-box Mac-compatible mechanical keyboard is a rare thing. There are some good examples out there, but this one is new and it's very nice and it's very exciting. I've been using it for about three months and I really love it. So I gave it a good rating and I highly recommend it. I will say the MacTigr is expensive. It's over $200, which is a lot. Mechanical keyboards usually top out at about $160, $170 for the nice ones. So it's a deluxe keyboard, and you will pay a deluxe price for it. But I'll also say the holidays are coming up, and mechanical keyboards go on sale quite often. So if you're interested in it, maybe set up a price alert or something for it.
Lauren Goode: Is it ergonomic?
Michael Calore: What do you mean?
Lauren Goode: Well, when you're using it, do you feel that it's more comfortable for your hands and wrists?
Michael Calore: I do. There's one thing about it that I don't like, which is that a lot of keyboards have little feet on the back that adjust the tilt. This doesn't have that. It has a pretty comfortable wedge shape to it, so it feels comfortable to type on. Also, the switches on it, the thing that gives mechanical keyboards their springy character, are really nice. I think it's a red switch, for the switch heads out there. So it's comfortable because your hands bounce along as you type on it, so it is a very comfortable experience.
Lauren Goode: Thank you for that recommendation.
Michael Calore: Sure thing.
Lauren Goode: I will not be buying it but it sounds cool.
Michael Calore: Maybe it's not your type.
Lauren Goode: No. It sounds like if I wanted to up my typing, I would just use GPT chat, ChatGPT, whatever the heck it's called. I'll just have it type for me. Isn't it the promise of AI anyway?
Michael Calore: Yes, do all of your work for you.
Lauren Goode: Yeah.
Michael Calore: What is your recommendation this week, Lauren?
Lauren Goode: My recommendation is somewhat related to Will's recommendation, except instead of tracking your cat, maybe you should track your keys because, ladies and gentlemen, I have lost my keys again for the second time in three months. I lost them yesterday. I was a little bit distracted going from point A to B. I have no idea where they are. It's a little concerning to me.
Michael Calore: Yikes.
Lauren Goode: Yeah. It includes my car key too, which, if you've ever had to get a car key replaced, is really something.
Michael Calore: Thankfully, you have some experience in getting a car key replaced.
Lauren Goode: Yes, because three months ago, my keys went into the ocean. I was standing in a wetsuit on the side of the road for a few hours waiting for the locksmith.
Michael Calore: So what's your proposed solution?
Lauren Goode: Oh yeah. OK. So the solution. I could just complain for the next 10 minutes. Would you like me to do that and talk about how distracted I am? I put on Twitter last night: What is the universe trying to tell you if you are losing your keys at an alarming frequency? And someone actually responded, it was quite funny, and said, "Avoid crypto," because obviously, you wouldn't want to lose your crypto keys. That was the best response. But a couple of people were like, "You may have ADHD." And I was like, "Oh, OK. Twitter diagnosis." Speaking of not believing health information you read online. So anyway, a couple … I'm not distracted. A couple of other people said, "Just get a Tile or an AirTag." Of all the people who do not have a Tile or an AirTag on their key chain at this point in time …
Michael Calore: The personal technology reporter for WIRED.
Lauren Goode: Yes, it should be me. And also, I should have learned this lesson in the fall and I didn't. So now, my recommendation would be slap one of those bad boys on your key chain unless you have serious and legitimate concerns about being tracked or stalked in some way. I absolutely understand that. But otherwise, just put a Bluetooth lost item thingy on your key thingy and then hopefully, you don't end up in the position I'm in again.
Michael Calore: Are you going to get the AirTag?
Lauren Goode: I think I will. I might even have AirTags floating around at home.
Michael Calore: Solid.
Lauren Goode: Yeah. So that is my recommendation. Keep track of your keys, folks, and your crypto keys.
Michael Calore: Well, that's a solid recommendation, Lauren.
Lauren Goode: Thank you very much.
Michael Calore: I really sincerely, for your own sanity, hope that you follow through with this and that it works for you.
Lauren Goode: I think you're hoping for your sanity.
Michael Calore: No.
Lauren Goode: Because twice in the past two weeks, I have texted you and said, "Mike, do you still have that spare house key of mine because I've lost it?"
Michael Calore: Always happy to help you out.
Lauren Goode: Thank you.
Michael Calore: But I'd be more happy if you were not stressing about losing your keys again.
Lauren Goode: Thank you. That's our show for this week. Will, thank you so much for joining us.
Will Knight: Thank you so much for having me.
Lauren Goode: And thanks to all of you for listening. If you have feedback, you can find all of us on Twitter. Just check the show notes. We'll put our handles in there. Our producer is the excellent and very human Boone Ashworth. Goodbye for now. We'll be back next week.
[Gadget Lab outro theme music plays]
Lauren Goode: It's from the chatbot. Here is a potential children's book about an adorable boy cat named Nougat:
Nougat, the cat, was an adorable little boy with soft, fluffy fur and a playful toy. He loved to chase balls and climb up the curtains and cuddle up close when it was time for his naps. That doesn't rhyme. Nougat was curious and always on the go, exploring the world with a happy-go-lucky glow. He made friends with the birds and the mice in the walls and always looked forward to his next adventure, big or small. One day—uh-oh—Nougat went outside to explore. He climbed up a tree and jumped onto the shore. But when it came time to come back home, he realized he was all alone. Nougat was scared and didn't know what to do. He meowed and cried, but no one was near. He wandered and wandered but couldn't find his way until a kind lady saw him and took him home to play. Nougat was grateful and cuddled up close. He knew he was saved and loved the most, and from then on, he always remembered to stay close to home and never wander.
The end. It wrote a book for me and Nougat.
Michael Calore: That's pretty good.
Lauren Goode: That's pretty good, except for when it didn't rhyme.