NASA & AI Innovation:
James Villarrubia Fireside Chat at Digital Summit Minneapolis 2025
How do you bring all the elements of a company like NASA together to innovate and deliver with artificial intelligence? In this fireside chat from Digital Summit Minneapolis 2025, Steve Krull, CEO of Be Found Online, interviews James Villarrubia, NASA’s Head of Digital Innovation and AI.
Learn about James' background, how NASA uses and trusts data for new discoveries, what sparked his interest in AI, and how to bring AI and curiosity to work. James speaks to overcoming organizational barriers, deciding when an idea isn't worth it, ensuring projects are adopted by other teams, and collaborating using AI.
In this chat, James shares:
- Key learnings from mistakes
- How to leverage and build AI within organizations
- How to coach teams to see through the AI noise
- Emerging technologies in AI
- A recipe for discovering whitespace and gaps in AI
This is a longer chat, so we timestamped the video to help guide you!
Want more digital marketing insights in video form? Follow BFO on LinkedIn!
Stay up to date on digital marketing news - Join our monthly newsletter!
Video Transcription (formatted and lightly edited for clarity)
Steve Krull:
Can we agree that AI is moving fast? As marketing leaders, some of you are perplexed, some of you are embracing the technology, some of you are challenged by it, and some are way out on the bleeding edge. As leaders in our organizations, and some of you in larger companies that deal with bureaucracy on an everyday basis: how do you get teams to work together? How do you innovate under this umbrella of AI? How do you make sense of it all and have a corporate mission to take this thing to the next level, bring people together, innovate, manage personality, and, I'll go on record because we're talking to a guy from NASA, manage a little bit of ego? Because there have got to be some brilliant people at NASA, including James.
So how do you bring all of those things together in a cohesive package, to innovate and deliver? And if you think about NASA and what NASA has to deliver, well, they gotta get shit right, don't they? So when we think about innovation and we think about AI, I am so humbled and honored to invite James Villarrubia to the stage. He is Head of Digital Innovation and AI at NASA, and he's offered to share his time with us today, to share some of those insights and some of that information about how they really get things done at NASA, what he does to bring teams together, and how they're leveraging AI to do that. I believe there'll be time for your questions. James, come on up, let's get this party started. Welcome.
So, all right, all we've got to say is, well, NASA. I got a chance to talk to James last week for about 20 or 30 minutes, and it was super cool to chit-chat and get ready for this, and thank you to Kyle and the Digital Summit team for the prep that they did as well. But let's start with NASA. Tell me how you got to NASA and what being at NASA means, because you've got quite a storied career to this point.
James Villarrubia:
Yeah, so I think I'm a bit different than, I would say, most careerists at NASA. I am not a career NASA person. I don't have a PhD; there are a lot of PhDs at NASA. My career was a bit more roundabout. I started as a statistician with the Pentagon and got dropped into a lot of crazy data problems: missing data, missing, you know, Humvees or troops or whatever it is in the Middle East. Like, hey, why is our data wrong?
So, a lot of crisis management at the Pentagon for the Joint Chiefs. Then I got picked up and went over to the White House to work on Fukushima, yet another crisis. And then the DOJ: oh gosh, we need to move to the cloud and we have all these data centers, yet again another crisis.
So that sort of kicked off a career in change management and tech, and I think what was interesting about that was seeing the progression alongside my career as tech changed. I was able to pick these things up, like, OK, well, this is new, let's try this because nothing else is working. Eventually, the crises at DOJ were too boring. So I was like, I'm gonna go to startups because that seems like a good idea. Then I went into cryptography, then human capital analytics and remote work before it was cool, then ed tech and AI, because I thought there was money there, about which I was totally wrong. And then I had the misfortune, pleasure, I don't know, of getting involved in a company doing AI and automation in healthcare, 4 months before the pandemic hit.
So I was like, oh cool, another crisis. I am in the right place. It finds me. I burned out a little bit on healthcare, and then put my hat in the ring for this White House fellowship in innovation. The White House started a fellowship program a while back, I think under Obama, and I encourage anyone who's interested to look at it. They said, hey, we're gonna take mid-to-late-career technologists who are really good at driving innovation in orgs outside of government, bring them in, and just pepper them through the federal government and say, hey, shake things up, do something a little different, right? Get the ball rolling. So it's the White House Presidential Innovation Fellows program. They reached back out, and they reached out with NASA and said, hey, do you wanna go build AI at NASA?
As an engineer, when NASA says, you know, wanna come work with us, you just say yes, right? I didn't ask too many questions. I was like, oh, sure, cool, because it was gonna be an adventure. I thought I was gonna be there for a year; it ended up being three. NASA does that to people. So yeah, I am wrapping up 3.5 years, almost 4, at NASA.
Steve Krull:
Wow, that's really, really incredible. So I heard something in there that I want to ask about, like finding lost Humvees. How does one look at data to find out what's going on there, for whatever you can share? Because I know it's problem solving, and we will get into AI, that's sort of my next question, but you sparked my interest by saying, I found lost trucks and lost military equipment. Tell me exactly how that goes when you're staring at a data set from 7,000 miles away.
James Villarrubia:
Oh, it wasn't 7,000 miles away. I was there. I was in a bunker there, to be clear. I would say one of the things that was frustrating about government data, and large bureaucratic data generally, is the gap between where you are as a statistician, or someone on the back end, and where that data was collected and what went into that process of collection. It's huge. When I mapped it out, I think it was 126 steps between where I saw the data and where it was first ingested. That's 126 opportunities for it to fail, to be wrong, or for assumptions to get baked in. And I can imagine that in digital marketing you probably have similar things crop up.
It's like, hey, we're not seeing any traffic on this page, maybe it's not converting well, we need to cut it or get rid of it, do something else. And then if you looked under it: oh, the link to that page on your site has been broken. That was the underlying reason, and if you're not asking those sorts of questions, you will miss a lot.
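To put numbers on why all those hand-offs matter, here's the compounding arithmetic in Python. The 99% per-step reliability is invented for illustration; James doesn't give a figure.

```python
# Back-of-the-envelope: if each of 126 pipeline steps independently
# preserves the data correctly 99% of the time (an invented rate),
# how often does the analyst at the end see uncorrupted data?
steps = 126
p_step = 0.99

p_clean = p_step ** steps
print(f"P(data survives all {steps} steps) = {p_clean:.1%}")  # ~28.2%
```

Even near-perfect hand-offs compound into mostly corrupted data at this depth, which is why challenging the collection assumptions matters so much.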
So I think one of the things that I learned, at least with data of that scale and complexity, is always, always challenge the assumptions that go into the data set you were given, because what you were told about it is almost always wrong, but there's usually something buried in there that is of real value. It's not the statistics after the fact; it's unearthing the root assumptions that went into it. That's usually the goal.
Steve Krull:
That's a fabulous takeaway: only trust the data that you know. Back it up and figure out which data you need and want. That's a great lesson for any of us, whether we're in marketing or working for the government.
So you've worked on the cutting edge of government. You've been in crisis management. And this question is really about what sparked an interest in AI. Did you find AI, or did AI find you?
James Villarrubia:
Yeah, so, also, caveat, disclaimer: my views expressed today are not the official views of NASA or the GSA or any other government agency that I will mention. I'm here in my personal capacity. Quote, "personal capacity," right? Clear. So, yeah, AI. I'd say I found it, mostly because I was trying to build a toolbox of things to solve these moments of crisis. When everything's on fire and you're like, I cannot get you to perfect, but I can get you to surviving, people tend to throw out a lot of rules. They're like, yeah, whatever you need. So in my career I would end up thinking, OK, we have all these tools, and they didn't work, because they sort of helped get us to this problem. How do we get out of it? We might have to look somewhere else. So I was building this toolbox throughout my career of all these different analytical techniques. And at the time when I started, natural language processing, what's now LLMs, was really still very early.
I mean, we were categorizing words and sentiments, it was the early days of that, and it was stuff that you could do on a laptop or your own machine. You did not need a billion dollars of compute, because the science just wasn't even there to make use of a billion dollars of compute yet. We've clearly gone past that. But it was always one of these tools that I took along with me on these journeys of problem discovery. And one of the things that has been most valuable in my career is to keep that curiosity about what the new tools are. If I'm not trying something new in a new project, I'm not learning, and if I'm not learning, I'm probably not gonna be as prepared for the next crisis. I'm gonna get stuck in the same rut of thinking that probably got the team to that crisis in the first place. I don't wanna add to that flame. I don't want to pour gasoline on the fire, as it were.
Steve Krull:
So you're gonna break some rules to get to the answers.
James Villarrubia:
Yes, yes.
Steve Krull:
How does that work? So you're obviously a naturally very curious person, finding these technologies and peeling back the onion. But then how do you bring your curiosity into an organization that's got, and we're talking about government here, rules, guidelines, procedures, books, and reams and reams and reams? I think, what did Marcus say yesterday, reams and reams and reams? How do you get through to the same people you're trying to help when you're trying to operationalize some of your ideas?
James Villarrubia:
Yeah, I think one of the things I have seen in a lot of organizations, NASA included, speaks to this culture of risk aversion, which is sort of the inverse of curiosity. I usually get drawn into an organization, or brought in, because they have a tech problem. I get in there and I'm like, OK, cool, but not really, and they're like, oh yeah, but please solve our tech. I'm like, yeah, yeah, I'll do that, but I need to talk to your HR person first, right? Oh, let me talk to your process manager. OK, OK, we're hiring the wrong people.
They're like, what does that have to do with it? Because if we don't solve that problem, even if we fix the tech, in 6 months there'll be a new problem, because you haven't hired the right people to avoid it. You keep hiring arsonists, and then you hire me to put out fires. So there is some culture change there, and I have learned as my career progressed that culture is usually the root cause of a lot of these big problems when they get to this sort of scale.
I think back to NASA. A great professor of mine once said that culture is reflected by what you put on the walls. At NASA we have photos of major events, but there are also NASA centers with elements of crashes and exploded shuttles, where lives were lost. So they take safety very, very, very seriously. So much so that when you go into a gift shop, which I think is also a pretty good reflection of a culture, what they sell in their gift shop, they have little key chains that say "failure is not an option," which I think is hilarious. And I think it's hilarious because no one at NASA ever said that.
That is a quote from Ed Harris in the movie Apollo 13. So another way to think about culture is that it is not history; it is the story you tell yourself about your history. They have adopted that framing from the movie to rewrite their own history. That's how deeply ingrained it is: it's not even actually what they did, because I think the Apollo missions were actually pretty risk-seeking, right?
They were cowboys, to some extent, but that's not the culture today. So when you're trying to drive innovation in a culture that is so risk averse, you have to really think strategically about, again, some of the people that you get involved, but also know that there are just gonna be walls put up. One of the examples I typically talk about is when I was building the first AI tool. When GPT-3 came out, I was like, this is gonna be big. I was on Twitter, one of those early followers, and I was like, oh, I wanna get my hands on this, because this is cheap and it's really good. I think I submitted the security waiver to get access that evening, probably the first or second person at NASA broadly to get access to this. And it's like, they don't know what I'm doing yet, so ask before they've realized what you've done. But I got that permission, and then I started building it.
It took 2 weeks to build the first prototype on ChatGPT. It took 9 months to get approval for the cloud sandbox to deploy it. And that is an insane inversion of reward versus risk versus effort. I keep thinking about that experience: OK, this is what I expected in government, because I've been in government before, so I had to have an out, I had to have another way. And the approach I typically take, since it's hard to get funding, hard to get budget, hard to get approval, is to build coalitions of the willing.
I just build a prototype, a tiny thing, and I put it in front of people that I think are curious, and they get excited and they're like, oh yeah, I'll help. And the people who are really good in your orgs at driving that curiosity, who wanna drive new and interesting things and change, they will find you if you put yourself out there.
And eventually, if you build a team of 5 or 6, you start to build a little value, and then you can take it to leadership and say, hey, we're doing good stuff, there's a there there. Then suddenly you get a little money, and you can snowball that way. But it is typically, I think, a bottom-up approach of coalitions of the willing.
And that really is also a good way to signal: hey, I'm trying to do something that's maybe different from our existing culture, and if you think that's a new culture you wanna be part of, come join us, right? That outward signal of culture change you're promoting gets you interest, because you don't want someone who's like, oh, that's a cool project, but I'm gonna come in and be a naysayer and crap on it all day. You want something that's like, oh, "yes, and," as my wife would say, bringing out the improv.
Oh yeah, my wife's an actress, so there are a lot of "yes, and" references in our house.
Steve Krull:
I like it. So, it's really interesting to hear about this, and there's a concept that I'm familiar with too. You talk about the coalition of the willing. I've studied bottom-line change, which is: if you've got an idea, you need to bring in a few people, and ultimately one or two of those people should be your skeptics, because if you can flip a skeptic, get them to 51%, it means they're all in. Now you can bring them along and they're gonna tell people that it's a good thing. Now, thinking about talking to the other departments you've spoken of, obviously you've run into some walls, and you probably hit your head on a few of them. How do you break through those, and have you been stopped in some of these efforts? Because right now it feels like it's been smooth sailing.
"I built some GPT and then we went to do this thing." Well, the 9 months of bureaucracy to get something implemented in a sandbox is obvious. So tell me about the walls you run into, how we as marketers can address some of those things in our organizations, and what sort of negotiations you went through to get the right people on your side to bring this to life.
James Villarrubia:
Yeah, so I think I should probably explain a little bit about the group that I work with inside NASA, because I think we are weird even within the weirdness of NASA. The best way to explain it quickly: who here has seen a sci-fi movie where some chaotic thing happens, climate change, or, like the movie The Core, the molten core of the earth has stopped spinning and we're gonna fix it with nukes? But then they're like, oh, but how do we get the nukes there? And there's always a team from NASA.
It's some guy in a warehouse or in a basement who's like, oh, we actually have a guy who's been working on this, right? Who's seen that movie, or any movie like that? It is a trope within sci-fi, and that does sort of exist within NASA. There are teams working on the weird stuff even within the weird stuff of NASA. Not the basement dwellers, I think that's a little mean, but the relatively underfunded groups. There are those teams who are like, hey, we're building a prototype of something where, yeah, it's 99% likely to fail, but if you crack it, it will change humanity. And one of the things that was cool about getting to work with this group was that they were struggling with that culture of, you know, failure is not an option.
I was like, well, you cannot have "failure is not an option" and then also be swinging for the fences, where a 99% failure rate is your goal. They were having that struggle of culture change. So when I joined, the first conversation I had with the project leader was: hey, you're not failing enough. And I'm gonna talk to you about HR. He's like, but you're the AI guy. I know, and I'm gonna talk to you about HR. He was sort of the first skeptic, but I walked him through it.
I went through the listening tours, I heard about the processes and the programs, and you have these brilliant people who wanna do things and can't seem to get the projects done. So it's not that you don't have the ideas; it's that something about how you validate them, how they are assembled, and how they are approved is breaking down. So let's rethink this structure. And once I got him on board, we started producing more failures, and that was actually a good thing. Part of that was: hey, you are looking at 10 ideas a year.
OK, that's actually not enough, because if you pick 3 ideas to fund and you have 10, that means you have a failure rate of 70%, but it also means your threshold for a good idea is the top 30%. And I was like, I wanna aim for 1,000 ideas. That was an accountability change: don't think of it as how many ideas we want to succeed. We still only want to fund three ideas. I'm not asking for any more money. I want to prove 997 ideas wrong. That is a very different framing.
And I think that worked really well within the structure of the organization, because again, I'm not asking for more and more funding. I'm asking for a pace of change, and I'm asking for us to ring the bell on our failures as a success. Look how many ideas we tried and looked at and said, that didn't work, and I can prove it didn't work. Because if your process has to get you from a 30% threshold to a 0.3% threshold, the process will look a lot different, right?
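The funnel arithmetic James is describing is easy to make concrete; a couple of lines of Python show how fixing the number of funded ideas while growing the pool tightens the bar:

```python
# The funded count stays fixed at 3, so growing the idea pool is what
# tightens the acceptance threshold.
for pool in (10, 1000):
    threshold = 3 / pool
    print(f"{pool} ideas, 3 funded -> top {threshold:.1%}, "
          f"failure rate {1 - threshold:.1%}")

# 10 ideas, 3 funded -> top 30.0%, failure rate 70.0%
# 1000 ideas, 3 funded -> top 0.3%, failure rate 99.7%
```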
You're gonna ask different questions, you're gonna dig a lot deeper. And from an experimentation standpoint, if you're thinking, oh, we're trying all these new marketing ideas, we're facing the change of AI in our ecosystem, we don't know what's gonna work, if you are struggling and feeling that in your orgs, I would say don't worry so much about having more good ideas. Struggle first with having more bad ideas.
Get in the habit of having a lot more bad ideas, and then, over the process of filtering them down, you will get really good at identifying what a good idea is, what the filter is that gets you to that 0.3%. And then the rigor with which you approach those ideas, the data that you have to fix or find, whatever it is, will get better and better over time. Eventually you'll know a lot better what a good idea is. That is the mechanism we took at NASA. Once we started waving the flag of look how many ideas we've thrown away, because we're really good at this, the ideas we came up with at the end were rock solid. Right? Some of them you might have seen in the news, right? The California wildfires.
The group we're working in within NASA, of the weird kids, my group was focused on aviation. So there's space science, and then there's the forgotten stepchild, aviation. We do planes too. But think of all the things that you experience as a person out in the world: you are not going to space, you are not on a satellite taking satellite images, right? You probably flew to get here, though. You might be buying your kids, or yourselves, a drone. Those are the things the aviation part of NASA is dealing with.
So we were focusing on things that were much more human, much more about today, so we could see that value transferred into NASA very quickly. Our swinging-for-the-fences ideas were things like: hey, it would be great if we had a really robust drone that could fly over these California wildfires, deal with the heat of these massive heat columns and not melt, so that I could find where the fire was spreading and find firemen lost in the fog, lost in the smoke, but that could operate without GPS, without satellites. Because the heat is messing with the satellite connections, and the fire has burned down all the cell towers, so GPS is basically lost unless you have some military-grade satellite.
So how do you deploy that system at scale? That was a really cool problem that we solved, and it came out of that process of filtering out what is the most valuable thing today. That's something we built that is being used today. And when you have those sorts of wins, you can take that win and say, hey, look, this was a weird idea that no one thought was gonna work, but look at it now. You gotta give us a little bit more breathing room. And you can't sell them answers, because when you start, you don't know the answer. You have to sell them the opportunities; you sell them the proof of how many things you're gonna prove wrong. Proof in the failure, right? And that's the way that we started to shift broader NASA culture.
Steve Krull:
You've identified yourself as a disrupter just through our conversation thus far. What I heard in that is that at NASA, right, life is on the line if you fail. So "fail now so you don't fail later" is really sort of obvious, but in a culture that's so ingrained in being risk averse. And it's so funny to think risk averse when you're sending people to the moon. I get the risk aversion, but we're taking huge risks in sending people to space. Thinking about that, and breaking down the culture: when do you decide an idea isn't worth it anymore?
What's that inflection point? Or do you try to be objective at the start? Obviously, Google has what they call moonshots, right? There's this farm of crazy-idea people that say, let's go build stuff and we're gonna fail a lot. When do you know it's not right? When do you know to throw it away?
James Villarrubia:
At NASA, yeah, we talk a lot about DARPA and Google X, like, oh, they're moonshot factories. We can't use the term "moonshot factory" because we actually have a moonshot factory, so we say we're a wicked-problem solver, right?
We need marketing help, that's what we need. But I would say one of the philosophical changes, particularly in the era of AI, was this: there's no such thing as a bad idea. And it's not a camp counselor thing of, oh, all ideas are good, welcome to kindergarten, it's your first day. No. It's the entropy of an idea: the fact that a human, some neurons, fired and said, hey, maybe this. That is of value, even if it is a total failure. I want that captured. I wanna pull that out. Because, oh, you're a PhD, you have so much institutional knowledge of this vast field, you're at the top of your field, and you had that manual brain spark? It may sound crazy. I want it.
And once we said, hey, we're gonna capture all the ideas and start celebrating the weirdest ones: every bad idea might just be an idea that needs to be combined with something else later, or an idea that hasn't quite hit its time yet. Or, yeah, that's gonna be a problem, but let's wait till we see a few more market signals that it's actually gonna be a real issue in 20 years, and let's start researching it now.
So that was really the shift: all ideas are good, we want to capture them all, but not everything gets funded today, and that's OK. That shift also freed the team from that sense of risk and failure. For ideation sessions we had these wicked wild days, where we'd invite all of NASA onto a massive Zoom call, on one Miro board, and we'd go through it.
Like, let's imagine that something crazy happens: some foreign country tries to do some geoengineering, they put some chemical in the clouds, and it causes a massive algae kill-off, or, you know, phytoplankton kill-off in the ocean, and all the fish die. What do you do?
You get 500 PhDs across a whole bunch of fields to ideate and brainstorm, and you're gonna come up with some weird ideas and a lot of very straightforward ideas, but you want to get them into the weird. We would give out an award for the weirdest idea, and if you win the weirdest-idea award at NASA, man, that's something you wanna put on your resume.
I like that sort of stuff. It was a way to change the culture, to change the conversation about what we celebrated and what success looks like, and that then started to spread out within NASA, which was very powerful once we started doing it.
Steve Krull:
When you're working with these people, are these people on your team, on different teams? You obviously cross bridges to other teams, and we've all faced, in our careers, the not-invented-here problem, where you show up with what you believe might be a good idea and it just sits on a desk, or gets moved to the side, filed in the circular file, as they say.
How do you combat that in an organization such as NASA? And I sort of want to bring ego into the conversation, because given the scientists you have working at NASA, the big brains at NASA, I imagine there might be a little bit of ego involved in some of what you're doing.
James Villarrubia:
Yes. So one of the things we had to adjust goes back to that HR strategy I pitched: I would rather have a team of 3 very excited, correctly motivated people than a team of 50 that are mostly naysayers. That was a hard thing to hear, but the way I described it is what I call helmet syndrome.
I promise it'll make sense. Go with me on this journey. So you're meeting these PhDs, these scientists at the top of their field. They're like, oh, I'm the department director of chemistry for this sort of thing at NASA. They are the world leader, right? And they might have been doing something a certain way for 20 years. And at this point, effectively, they've hit the wall, right? They are banging their head on that wall trying to break through, over and over and over again. And now you're pitching to them: hey, I've got this AI, I've got this new idea, let's just go around it, the wall doesn't matter anymore. And they're like, no, no, no, you don't understand. This is my wall. See that little dent? That's the dent I've been making with my head. I've got a special helmet that I've been designing to protect me when I make this dent. I can go, like, 60 miles an hour at this wall with this helmet. It keeps me safe.
I'm like, great, cool, but we don't need the helmet. We don't need the wall. Let's just go around. And you're talking right past each other, because they wanna tell you about how cool their helmet is at hitting their head on the wall, and you wanna just move on to the next problem.
So there is a lot of, not necessarily ego; I think it is dedication to and excitement about that problem, right? They're committed to that problem. But what that means is that when you get those people into a room to start trying to ideate and brainstorm, it's like, hey, we're doing something vaguely related to commercial aircraft. Oh, let's bring in a wing guy, let's bring in an FAA guy, let's bring in a fuel guy. And the fuel guy in the room is like, great, I'm here to talk about fuel; if you're not talking about fuel, I'm just gonna be on my phone. That is not really great for aviation. So we had to flip that narrative: OK, you can be on this team, but you gotta apply for it.
You gotta work for it. We're not just gonna take someone on the bench at NASA whose project has lost funding. No, no, no, you gotta want to join this. And we would let anyone in. It's like, yeah, we will eventually need a fuel person, but they're just gonna answer a few questions about fuel. I don't need them for the whole process. I would much rather have someone really curious.
So, hey, that marketing person over there on the NASA comms team? Yeah, they can participate in the scientific discussion. We even got a contractor to help us bring in the weird people that NASA didn't typically hire. Great, we're gonna bring in a social scientist. We're gonna bring in a sci-fi writer. We got a guy who wrote sci-fi short stories, and he was great. He would just constantly come up with super weird ideas and ask really good questions.
I was like, yes. And suddenly we're having conversations about, you know, the ethics of the Warhammer 40K universe, and I'm like, I don't know where this is going, but I'm really excited to be a part of it, and to have my AI just taking notes, like, all right, where does this go? I don't know. And that change in culture: yes, you can come into this conversation, but you have to leave your expertise at the door, not just your ego but your expertise. You're only allowed to answer questions or bring up ideas in things you've never worked in, because we're all gonna be a little naive on this, but that is the right way to start this conversation. And eventually, oh, well, actually, I can answer that question, we're on my field's topic now. Great, cool. You gotta earn the right to talk about your PhD field by being weird enough to survive the first few rounds of really deep, crazy ideation.
Steve Krull:
So tell me something. So AI is part of your job title.
James Villarrubia:
Yes.
Steve Krull:
Was AI always part of your job title? Was it part of the job description or because of your embracing of AI and what you did to shift the culture, did AI get added to your job title?
James Villarrubia:
No. So, I mean, the job title was sort of an invented title. Government titles are like "information specialist, level blah blah blah." And then they're like, oh, you're a White House Presidential Innovation Fellow, and they're like, yeah, but what do you do?
I was like, OK: Head of Digital Innovation and AI for Convergent Aeronautics Solutions. So the AI was always part of the job description, and certainly a key descriptor, but when I joined, it was just prior to this current AI revolution. So I was expecting to come in and have to use legacy AI, and that's five-years-old legacy, to do a lot of this work. And the theory was: we're doing all this crazy ideation, but we're still struggling to get to that 1,000 ideas. Maybe we can get from 10 to 100 with our human process.
We can't get to 1,000. AI might get us there, and AI might help us sort and sift through all of this information. Because we aren't trying to solve the things that Lockheed or Boeing is gonna solve next year, or in 5 years. We don't care about that; they're cool. We don't care about the things that are near-term. We care about solving problems that are 20 to 50 years away but are so big that you need to start solving them now to have a hope of solving them, of actually having completed it, in 50 years. Big, crazy ideas.
So we're looking at anything and everything. The scope of this AI search, this AI data capturing, was: any problem humanity might face in the next 10 to 50 years. Go. OK, cool. So my wife would ask, what are you working on? And I was like, I don't know. I honestly don't. And 29 days out of 30, you mostly go down rabbit holes of, oh, this is the way that humanity is going to end. But then there's that 30th day where you go, oh, but wait, this and this and this might come together, and we could actually solve it. Oh, that's a doable thing. We could crack that nut. That's really cool.
So we're fighting for those things. But we wanted to do this at scale. We wanted this AI to help us traverse the massive amount of data out there: every weak signal in the market, what investors were investing in, what they weren't investing in, what they were leaving on the table, anything and everything. That's a lot of data, and sort of a terrifying problem.
GPT then gets announced and launched, and I was like, oh, OK, I have a billion-dollar LLM that I can rent for cents per request. This is crazy. OK, I can definitely leverage this. And it really started to move us past data collection with AI, to data sifting and sorting and ranking with AI. Help us evaluate those 997. And then, finally, we reached the limits of human cognition. Some of these problems are so complex that you could get 5 PhDs in a room and it would take them a full lifetime to understand each other, to even start the conversation about solving the problem.
Think about how we solve climate change, right? There are little bits and pieces, but I mean really solve it, so we don't ever have this problem again in humanity. That is a big, juicy problem. Many PhD dissertations across lots of fields have been written about it. That was the scale we were dealing with.
So we started to look at: what are the problems that we as humans are biased toward looking at? What are we not looking at, maybe because it's too scary for us? Where is the white space between what the AI knows and what we are finding?
And that was the really cool thing, where we really started to hit pay dirt, as it were. We bent the AI to start looking for gaps in information, gaps in the research, things that no one was talking about but we should be talking about, and to draw all those lines and connections and predict those futures. That was a really cool component of AI.
The nice way to put it is that we're trying to induce AI not into hallucinating but into dreaming. Can you get AI to dream? Of course, again, 29 days out of 30 it's a dystopian, terrible future, so it's more like giving AI nightmares and then hoping that one day you have a really nice dream. A lot of AI nightmares. Poor AI. But it was a big part of the job initially, and it became a huge part of how we think about ideation and futurism. Because why not use AI to better collaborate? Why not have AI translate, and get those 5 PhDs in that room to move faster, come up with better ideas, and leverage their expertise at a scale that we just couldn't imagine before?
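James doesn't describe NASA's actual pipeline, but the sift-and-rank step he outlines looks roughly like this in miniature. The scoring function below is a stand-in for an LLM reviewer, and the ideas, fields, and weights are invented for illustration:

```python
# Hypothetical sketch of the AI sift-and-rank step: score a large idea
# pool and keep only the top slice. score_idea() stands in for an LLM
# reviewer; nothing here is NASA's actual tooling.
from dataclasses import dataclass

@dataclass
class Idea:
    title: str
    novelty: float      # 0-1: distance from existing, well-funded work
    feasibility: float  # 0-1: odds it can even be prototyped

def score_idea(idea: Idea) -> float:
    # Weight novelty heavily: the goal is whitespace, not increments.
    return 0.7 * idea.novelty + 0.3 * idea.feasibility

def sift(ideas: list[Idea], keep: int) -> list[Idea]:
    """Rank the whole pool, fund only the top few, discard the rest."""
    return sorted(ideas, key=score_idea, reverse=True)[:keep]

pool = [
    Idea("fire-hardened, GPS-free wildfire drone", novelty=0.9, feasibility=0.4),
    Idea("slightly lighter wing rivet", novelty=0.1, feasibility=0.9),
    Idea("ocean phytoplankton die-off response", novelty=0.8, feasibility=0.2),
]
for idea in sift(pool, keep=1):
    print("fund:", idea.title)
```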
Steve Krull:
So you talk about collaboration and AI. Any time I think about collaborating, I think about collaborating on spreadsheets or documents or presentations, right? That's how we collaborate. Tell us how you collaborate using AI. We all work in organizations with many people. How do you get people to collaborate with AI? How does that work?
James Villarrubia:
So I think part of the challenge is that a lot of the collaboration tools from, call it 10 or 15 years ago, largely from the software-as-a-service sector, were someone digitizing a paper process. Even digital whiteboards: the term is literally describing a physical thing in a room, but now it's digital. We were stuck with those tools because that is the user experience, the mechanism of collaboration, that humans are used to and comfortable with.
I don't know that we as a society have quite figured out what that next form of collaboration is, but we have Zoom meetings, we have someone taking notes, now AI taking notes, we have a digital whiteboard capturing ideas.
The thing that I cared about is: great, pick any one of those, and reduce the friction for the humans to put as many ideas on there as possible. But I wanted something that could be live and collaboratively edited, so everyone's working from the same space and I don't have to email things around, and I wanted something that is API-accessible, so the AI can go in and make comments and add things and edit. That was table stakes for us, because I can make the AI meet the humans where they are; it's a lot harder to change the whole paradigm of human collaboration.
So, OK, start there. But one of the things I also insisted on was: let these documents build. Don't delete things, just add, just keep adding. Leave traces of the assumptions you made. And they're like, oh yeah, and then, oh, that was wrong. I want the assumption, then how you figured out it was wrong, and then your next idea. Don't delete it, don't edit the document to remove things, just keep adding, because AI with that additional context can be much, much more valuable. It's not just gonna tell you the things you would likely get to at the end, because sometimes it'll be wrong.
It will actually walk you through, as an org: here are the things you probably would have started with and the mistakes you would have made. Or: your team keeps making this assumption and then finds out later that it's wrong, so let's get ahead of that. Or it offers a suggestion: hey, go find this out first, before you waste 4 weeks getting to this assumption that you eventually prove wrong.
So having that additional context makes the AI much more valuable. It is like a very good research assistant if you give it a lot of context. But you should give it a lot of context, and that means adapting your collaboration processes to leave a lot more breadcrumbs behind. Don't pick them up, don't eat them; don't make the Hansel and Gretel mistake. You wanna leave all of the evidence behind, not only for the team that might come behind you, but as proof of those failures, proof that you did the work, right?
It's not, oh, I spent 8 months and I finally got to one idea. It's, I spent 8 months and I got to 99 ideas, and I proved them all wrong to get to this 100th. That is valuable, and it's also valuable to your leadership and to the AI. So why not?
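As a rough sketch, the append-only "leave the breadcrumbs" pattern James describes might look like this. The entries are invented, and the structure is illustrative rather than any specific product's API:

```python
# Append-only decision log: entries are only ever added, never edited
# or deleted, and the full trail becomes the context handed to an AI.
import time

log = []  # in practice, a shared, live, API-accessible document

def append(kind: str, text: str) -> None:
    """Add an entry; nothing in the log is ever rewritten or removed."""
    log.append({"ts": time.time(), "kind": kind, "text": text})

append("assumption", "Cell towers survive the fire, so GPS stays available.")
append("finding", "Towers burned down; the GPS assumption was wrong.")
append("idea", "Navigate by onboard sensing instead of GPS.")

# Assemble the whole trail, mistakes included, as AI context.
context = "\n".join(f"[{entry['kind']}] {entry['text']}" for entry in log)
print(context)
```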
Steve Krull:
So, thinking about failing so much, in terms of generating so many ideas: is there ever a spot where you get a side eye? Somebody saying, hey, you guys are really failing a lot, can we get an idea to win here?
James Villarrubia:
Yes. One of my colleagues described it as: we need brownies along the way. Leadership does not like failure. Failure is not an option, James. You can't talk about how many times we fail.
OK, so what can I talk about? He was like, we need brownies. I was like, I don't know what a brownie is. What do you mean? Am I physically bringing them brownies? I've got some recipes. You ask an AI, hey, I need brownies, it will give you recipes; it will not give you shortcuts. And then the conversation becomes about, oh, quick wins.
And I don't even like that as an idea, because then you are surrendering the long-term, 50-year goal for the short-term goal. What I want is more of what I'd describe as a simple win. I want something that proves something along the journey. So you can design this: hey, I want a couple of failures before we have some sort of validation win, right? And you can frame "we proved this wrong" as a win; it removes one of those branches. But you have to have some of those simple wins along the journey, because you are designing something that needs political and legal and ethical and social staying power, not just technical staying power. Assume the teams working on these projects might change; the project needs to survive, be validated, and avoid funding cuts for 30 years.
People talk about the James Webb Space Telescope. Not my department of NASA, but I think it's worth talking about. When they launched it, it had 344 single points of failure, and all of them had to succeed. You do the numbers on that, the likelihood of any one of those components failing: I think it was six 9s. Every single one of those components had to pass at a six-9s-level SLA; it can only fail, like, once in every million opportunities. That's a crazy level of precision, every single one of them. And all of those components were probably built by a different team. That's cool, but now imagine they're on a 30-year journey. When they asked Congress for money for this, they underbid.
They're like, yeah, we know, day one, we're already lying about how much this is gonna cost; we're just not telling Congress. They knew from the start that it was gonna cost more, but they also knew that was what was gonna get the political football moving, and then there would be some momentum. And they played that political game for 30 years, on a project that had massive, terrible failures. There was one, a $900 million failure, where they shook the satellite to make sure it would withstand the launch, and bolts started just flying off of it.
Steve Krull:
Wow.
James Villarrubia:
And you're like, oh God. OK, where did they go? Where did they come from? Oh, we don't know. We have to take it apart and put it all back together again, figure out where the bolts went. But also, we should probably find a different bolt vendor, just saying. So that was a test that we needed to do, and it revealed a $900 million problem. But at that point, all right, yeah, we're gonna spend the $900 million.
But to keep it politically alive for that long, that is real skill. You should think about the political life of these projects, and about the people who come after you. If you really wanna make these changes, you should be thinking about what the ecosystem looks like if you build these AI tools. Assume that your funding for your team gets cut and the people coming behind you may not be as qualified or as good at this. They were not as versed in the world pre-AI, in traditional digital marketing, so they won't know the trade-offs that you had to make.
So you have to leave the breadcrumbs for them, leave that documentation behind, so the project can survive politically long enough to get to the end. But again: James Webb Space Telescope, 344 points of failure, all of them succeeded. Thirty years, way over budget, way over timeline, but probably one of the greatest scientific achievements of humanity. We are looking at light that is effectively from the Big Bang. Think of the precision of that satellite: it's like looking at a single match lit on the surface of the moon from Earth. One-photon-per-minute sort of precision. That is insane. We had to invent whole areas of physics and cooling systems that did not exist when we started the project, like, yeah, we'll figure it out eventually. That sort of work has to be done by someone. But they were clearly thinking about the politics and the culture, and how to survive those long-term changes.
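The reliability math behind those Webb numbers is worth checking, assuming "six 9s" means each single point of failure succeeds with probability 0.999999 (fails about once in a million opportunities):

```python
# Sanity check on the quoted Webb figures.
n = 344                 # single points of failure
p_six_nines = 0.999999  # per-component success probability

p_all = p_six_nines ** n
print(f"P(all {n} succeed at six 9s) = {p_all:.6f}")           # ~0.999656
print(f"Mission-loss odds: about 1 in {1 / (1 - p_all):.0f}")  # ~1 in 2900

# At a merely excellent three 9s per component, the mission fails ~29% of the time:
print(f"P(all succeed at three 9s) = {0.999 ** n:.3f}")        # ~0.709
```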
Steve Krull:
So outside of the failures that you're seeking, what are some of the key learnings or mistakes that you've made along the way? Because we talk about hallucinations. AI hallucinates; we've heard it, I don't know, 15 or 20 times in the last 24 hours. So obviously it hallucinates, and obviously there are gonna be mistakes made. Give me a pitfall or something else where you said, oops, we didn't intend to do that. I know that's a failure, but it's that wake-up call, and the learning that comes from it.
James Villarrubia:
Yeah. So even in NASA there was a big concern about hallucinations: oh, what if it lies to us? And I had to do a little bit of internal marketing around the idea of hallucinations: no, no, we want them, that's good, we want weird ideas, we want it to be wrong, because these things haven't happened. So that was a bit of rebranding. But there's a risk that you have to acknowledge: if you are asking for factual things, the AI is probably gonna be wrong. If you are asking it to come up with potential ideas that you then go and validate, and you can even have an AI go and validate them, hey, go find the source and I'm gonna click on it, yep, OK, that's right, you can do that. But don't ask it to wholesale invent facts and then trust them. Ask it to go find things, to search for things, to come back with suggestions.
Think of it as wanting the AI to be a partner in that ideation. Think about things like Grammarly or spell check, right? It used to be that humans write the thing and AI does the error checking. Now we're getting to: AI writes the thing, and humans need to do the error checking. But you have to leave the error checking in there, and it has to be human. And this is where we get into what the future of early careers looks like.
If you're trying to hire someone: what does an early dev job or marketing job look like? It used to be, I do all of this grunt work to learn how to write well, so I can identify the errors, so that I can then check the next generation.
Well, if AI is doing that, what does that early career path look like? I would say, in stepping those people up, they need to get to the validating and the error checking quickly. That is the skill you need to develop day one, because the AI is probably gonna write a lot of stuff better than you.
But you need to be good enough to tell when the AI is not good, and that may only be 10% of the time, 5% of the time. That is the skill that early-career people should really start developing, and they can. It's not impossible, but it is a shift in the way the world is working.
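A minimal sketch of that "AI drafts, human error-checks" loop might look like the following, where ai_draft() stands in for a real model call and the claims, sources, and verdicts are all invented:

```python
# Nothing is accepted until a human has opened the source and checked it.
def ai_draft() -> list[dict]:
    # Stand-in for an AI returning claims, each tied to a source.
    return [
        {"claim": "Drone survived 400 C heat-column test", "source": "https://example.com/a"},
        {"claim": "GPS works inside wildfire smoke", "source": "https://example.com/b"},
    ]

def human_checked(item: dict) -> bool:
    # Stand-in for a person clicking through and verifying the source.
    verdicts = {"https://example.com/a": True, "https://example.com/b": False}
    return verdicts[item["source"]]

accepted = [item for item in ai_draft() if human_checked(item)]
print(f"{len(accepted)} of {len(ai_draft())} AI claims survived human error-checking")
```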
Steve Krull:
Talking about skills and what we should develop: let's spin it back to marketing a little and talk about people who might be reluctant to start, or have played around with it, or are skeptical. How do we begin to start leveraging AI in our organizations? You talked some about collaboration, but what do we do, and how do we go about building some of this stuff?
James Villarrubia:
I mean, if you aren't using AI, I'm sorry, but you're going to get destroyed. My job is partially using AI to imagine futures, and I understand both AI and a lot of futurism: the world is moving very fast. And I would say you can look back for an example. When I do more traditional keynotes, I don't talk about the future, I talk about the past.
I go through history: hey, here are all these weird examples from history where basically the exact same thing happened. Now let's look at what's happening in AI. There's a lesson to be learned here, right? History doesn't repeat, it rhymes. A lot of rhymes. Social media management didn't really exist in 2006, 20 years ago, right?
That was the era of millennials graduating from college, entering a terrible job market, and, oh, here's this thing, Facebook. Hey, all of our jobs are missing, these are my friends, let's invent this whole thing called social media management. It became part of brand, and it became a huge component of marketing. It didn't exist 20 years ago. But I think we could probably say, on that S-curve of innovation, that we've hit diminishing returns on innovation in social media at this point.
The innovation is coming from AI. So as orgs, as marketing people, you're probably gonna transition from that social media S-curve to something with AI at the core of the innovation S-curve, and we're in the early days of that. What does it look like? I don't know. But to pretend that you are not at the diminishing-returns end of the social media S-curve is silly, right? It's been 20 years. Even what is new in social media is being largely driven by AI.
And when we talk about what that career looks like: you should be thinking about how you integrate AI into the traditional, old mechanisms, but also how you rethink what AI is gonna change in the user experience of the people who participate in your brand, who know your brand.
I was giving a lot of thought to this before coming, because I don't typically do a lot of marketing work, and I was thinking: what are brands that are just top of mind for me? I kept coming back to, I don't know if Liberty Mutual is in the room, but I've been binging a lot of, I think it's Prime, and I see so many Liberty Mutual commercials on repeat. And man, they're memorable. The Liberty Biberty guy gets me every time. It really works. And the emu, the old farmer saying you'll never make it, and, you know, saving people money on car insurance. An absurd idea, but it is sticky. It works.
That being said, that is a brand that is very top of mind for me; I get advertised to a lot. And I recently moved, so I was changing a lot of my insurance. Hey, it's every 5 years, I'm finally buying insurance. The first thing I did is I went to AI, and Liberty Mutual did not come up. And I was like, OK, cool. I trust the AI's perusal of all these various sources. I ask a few questions, and, great, I take that recommendation.
And I think it's not only about trying to get your brand to be accessible, generative or AI engine optimization, GEO, whatever the term is, I don't think we've even decided what the term is yet, but that is gonna be real. You also have to imagine what that does from a trust perspective for the brand, the exposure level. If you're not in there, and the cutoff is, it's only gonna show 5: it's not Google's first 10, it's the first 5, it's the first 3. And you can update your SEO, you can change that, but I don't know if the models will update very quickly, because inherent and baked into that model, if your brand's been around a while, is some sort of reputation for that brand. Oh, you had a big incident 10 years ago, 5 years ago? That's gonna be in there, because these data companies are starving for more data, and the likelihood of them saying, oh, we'll just delete that, we don't care that that happened to your company 5 years ago... it's been 5 years, and that's gonna be there.
So the nature of your reputation as a brand, how you engage, and what that user experience is, is gonna change pretty dramatically. And yes, be available for these AI models to search. But I don't think even the companies building the models have really decided what that looks like.
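There's no settled tooling for this yet, but a first-pass "GEO" visibility check could be as simple as the sketch below, where ask_assistant() is a stand-in for a real model call and the carrier names are placeholders:

```python
# Does the brand make the AI's short list at all? The cutoff matters:
# it's not Google's first 10, it's the first 5, or the first 3.
def ask_assistant(prompt: str) -> list[str]:
    # Stand-in for querying an AI assistant for ranked recommendations.
    return ["Carrier A", "Carrier B", "Carrier C", "Carrier D", "Carrier E"]

def brand_visible(brand: str, prompt: str, cutoff: int = 5) -> bool:
    """True if the brand appears in the assistant's top-N answer."""
    short_list = ask_assistant(prompt)[:cutoff]
    return brand.lower() in (name.lower() for name in short_list)

print(brand_visible("Liberty Mutual",
                    "Best car insurance for someone who just moved?",
                    cutoff=3))
```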
So there's a level of being nimble in this current ecosystem that you need to drive for, and I think part of that for your organization is, not necessarily failure, but just start trying. Try stuff with AI, get it out there, even if you fail. The success you should really be aiming for is: how many people in my company are comfortable enough with AI to know where it is bad? If they're using it and they're like, yeah, it's good at this and not good at that, great, you are ready.
That's it, that's the level your whole workforce basically needs to be at: just enough awareness that they know where to use it, where it's gonna fail, and where they need to fix it. And if you get there, you will be in a good place moving forward, because I can't tell you, oh, you need to focus on this tech or that tech, because I don't know. But if you have a team that is always looking and trying and figuring out where it breaks down, you'll be at that cutting edge, because you'll know what it's bad at, and then you can adjust your resources accordingly.
Steve Krull:
This level of disruption and technology is something we haven't seen before, right? The ground is shifting beneath us on a daily basis. Something new: a new tool, this AI, that AI, pick an AI tool, an AI tool built on this, built on that. How do you coach teams to see through it? It's kind of a fog, right, all of this coming at us. How do you coach teams and humans to see through that, to stay in the zone of really problem solving and identifying the right tools to solve the problems?
James Villarrubia:
I would like to say it's a lot like surfing, but I've never surfed, so it just seems conceptually a lot like surfing: ride the wave. I had this problem at NASA because there's just so much coming out that by the time I built my first prototype at NASA and got it approved to build, the tech was already out of date. So when we went to build the second version, like, oh, let's scale it, I was like, no, I have to rebuild it first. Sorry, hold on, totally new tech stack. And that was like months, just a couple of months had passed. So we're still in that space, and I think for the first time in a long time, software developers and their curiosity are actually driving the ecosystem of change here.
It's not actually brand uptake, it's software development uptake, because people can build products faster than customers can experience them and give you feedback on them, and that is a total shift, right? It used to be, oh, it took 10 developers for every 1 product manager and one designer. That ratio now, I think, is more like 2 to 1, because an AI-enabled developer can build things very quickly. 5-ish times faster, I don't know about 10x, but 5 times faster. So now you have people just building and building and building. I tend to look at it like, OK, I don't want to try things just because they came out, because there's so much coming out.
I try to look not for a market-emergent company but for market emergence. What is the new field of products that is coming out? Oh, there's 2, there's 3, there's 4 companies trying to do that. Now I'm gonna pay attention to that product space. Because if there are 3 or 4 companies getting enough traction that they got a little bit of funding, OK, great, there's probably a there there. One company is not a signal; one tech product is not a signal. At this point, whole emergent new fields of products that use and leverage AI, a whole new customer experience that leverages AI, that is what's indicative from the deep technical side.
I would not invest, and my team did not invest, in any technical solution that OpenAI or Google or anybody offered until at least one other major provider offered it. I don't wanna be first, leading edge. I wanna be just one generation behind that. I wanna know that when I adopt something, it's probably gonna stick around, so I can make some technical investments and strategize around it. Because there were a lot of things in that first generation, oh, God bless them, there were so many startups. GPT-3 comes out, so many startups got funding, and then 3 months later OpenAI came out and said, hey, we've launched all these products. They just offered out-of-the-box solutions that so many startups had built their whole model on, and those startups all disappeared overnight along with all that investor money.
So I think investors got a little bit smarter after that first wave, that first culling, but you need to be thoughtful about that as well, because you wanna be moving fast. You can't be left behind, but you can't be swinging at every ball. So wait for the product to have at least one other competitor, that's basically the rule of thumb. Look for at least one other legitimate competitor that has raised money and has actual traction, real customers. That, I think, is a good threshold.
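To make that rule of thumb concrete, here's a minimal sketch of how you might encode it as a screening filter. This is an illustration, not anything James or NASA actually runs; the field names and threshold are assumptions for the sake of the example.

```python
from dataclasses import dataclass

@dataclass
class Product:
    name: str
    category: str             # the emerging product space this tool belongs to
    funded_competitors: int   # other companies in the same space that raised money
    has_real_customers: bool  # actual traction, not just a launch post

def worth_evaluating(p: Product) -> bool:
    """James's rule of thumb: one product is not a signal.
    Only look seriously at a tool once at least one other funded,
    legitimate competitor with real customers exists in its space."""
    return p.funded_competitors >= 1 and p.has_real_customers

# A lone first-mover gets skipped; a space with a funded rival gets a look.
solo = Product("SoloLaunch", "ai-agents", funded_competitors=0, has_real_customers=False)
maturing = Product("SecondMover", "ai-agents", funded_competitors=2, has_real_customers=True)
print(worth_evaluating(solo))      # False
print(worth_evaluating(maturing))  # True
```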
Steve Krull:
So without giving away state secrets, are there emerging technologies? Talk about 3 or 4. What excites you? Are there certain technologies you can share with us that you're excited about in this field of AI, what you see in emerging technology?
James Villarrubia:
So there are two things I think about, more so on the commercial side. One of them is that AR and VR is coming at some point. Right now it is a terrible user experience because the headsets are heavy. And again, the market signals I look for are like, OK, when do fashion brands and gaming companies, where this would be cool, start getting involved?
Ray-Ban partnered with Facebook, and then Louis Vuitton partnered with League of Legends gaming. I was like, oh, OK, so gamers are gonna drive that tech, because AR and VR and gaming are a marriage made in heaven, they just don't know it yet. Maybe they do; they're engaged, just not at the altar yet. But now they've got major fashion labels that are gonna push consumer experience out there. OK, so that is coming, but the user experience is still very heavy headsets. Once you couple a really useful AI into the experience, it will be a game changer. We're just not quite there from a comfort and a price point yet.
The other half of that is, let me ask this question to the audience: who here has an iPhone, right? I have an iPhone. Who here loves Siri? Yeah, that's what I thought, one hand, cool. Who here has ever owned an Amazon Alexa? Who here feels like the Alexa really changed their life? Yeah, right, a couple people. Who here has fiddled with ChatGPT and thought, oh man, this is sort of life changing, it's changed the way I work, right?
Right, so something about GPT and this era of LLMs has crossed the user experience threshold of quality. People like it. Something that didn't quite happen with Siri or Amazon Alexa. So one of the areas I fully expect, Christmas 2026 would be my guess, maybe Christmas 2027, or just after Christmas, I don't know, a Labor Day sale or something, there's gonna be some marketing event where those brands relaunch the home experience as this-generation, LLM-enabled. And if they open that market up, like, hey, we don't wanna be in the hardware game, we wanna be in the software game, all bets are off, because that changes the nature of the home experience: how you shop, how your kids do homework, everything will start to change, and that will be for everyday consumers. Like, I plop the Amazon Alexa down and say, hey, homework mode, right? And it works with my kid to do homework. He's 3, he's probably not gonna be doing homework anytime soon, but by the time he is, that will be the nature of it, right? Like, hey, we're not gonna give you the answer, we're gonna ask you thought-provoking questions to get you there. And it works with my kid, and then maybe I'm there learning too alongside him, because I don't remember how to do long division. Let's do this funky new long division way, right? Common Core, whatever they came up with. So that is gonna be a future experience.
I have been trying to rig my Amazon Alexa to just update my shopping list. Like, hey, I'm in the kitchen, I see something's missing, hey Alexa, add this to my shopping list. God help me, it cannot do it. It is so annoying, right? And I'm a technologist, I do this stuff for a living, and I'm still having trouble making these APIs work together. But there will come a day when I can get my laptop to do it, right? I can hold the function key down and say, hey, add this to my shopping list. Oh, OK, great, cool, and it sends it off to do it, then updates the shopping list that I share with my wife. I roll into the grocery store, and it's all there.
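For what it's worth, the plumbing James is wishing for is conceptually simple once the pieces can talk to each other. Here's a hedged sketch of that flow; `transcribe_voice` and `SHOPPING_LIST_URL` are hypothetical stand-ins for whatever speech-to-text service and shared-list API you actually have, not real Alexa or Siri endpoints.

```python
import json
import urllib.request

# Hypothetical shared-list endpoint: a stand-in for whatever
# list service you and your family actually share.
SHOPPING_LIST_URL = "https://example.com/api/shopping-list"

def transcribe_voice(audio: bytes) -> str:
    """Stand-in for any speech-to-text service; would return e.g.
    'add paper towels to my shopping list'."""
    raise NotImplementedError("wire up your STT provider here")

def extract_item(utterance: str) -> str:
    """Naive parse: strip the command phrasing to get the item.
    A real version would hand this to an LLM instead."""
    return (utterance.lower()
            .removeprefix("add ")
            .removesuffix(" to my shopping list")
            .strip())

def add_to_shared_list(item: str) -> None:
    """POST the item to the shared list so it shows up on both phones."""
    body = json.dumps({"item": item}).encode()
    req = urllib.request.Request(
        SHOPPING_LIST_URL, data=body,
        headers={"Content-Type": "application/json"})
    urllib.request.urlopen(req)

# e.g. add_to_shared_list(extract_item("add paper towels to my shopping list"))
```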
I think, as a futurist, I would say that's coming, that's right around the corner. And the question I would then pose to you, across a lot of brands, you're probably not the Amazon Alexa team or the Siri team here, sorry if you are: what is gonna be the impact to your brand as the follow-on? What is the nature of consumer experience after that? What does the nature of shopping and brand engagement look like with not only AI but all these experiences in the home? What does that look like?
Then couple in, call it 5 years from now, maybe 10 at the outside, AR and VR, right? The, oh, I'm shopping for clothes, I'm shopping for this, I get to experience the car in AR and VR, really high-precision AR and VR, to get a sense for it. What's left? My wife is an interior designer, and she is dealing with, oh, AI is coming for interior design. And I say, yeah, but it can't do touch and feel. It can't do experience. You should be selling experience. You should be telling clients, hey, this is how the sofa will feel, this is how it'll feel to move around that space, how far you'll have to reach to grab something. AI can't do that. AI can make nice pretty pictures; it can't do that. That is your value prop. That's what's gonna be different in the future. So she's working on that, she's changing that, and that's something she felt was totally disconnected from AI.
But it's not. So my last suggestion is that the tech experience is gonna move that consumer threshold very quickly, and that will ripple out and impact almost every brand and every consumer experience. Be prepared for that now, start planning for that now, and be wrong, fail a few times, but get in the habit of being nimble with how you respond. So when the shift finally does happen, when that GPT-3 moment happens for that new consumer experience, you'll be ready to launch and roll out, like, oh yeah, we got this, we'll be first to market.
Steve Krull:
So, yeah, can I get a time check? Because I can't see the countdown. Well, that's a shame, sorry about that. Do we have questions in the audience? Because I could keep going for hours. Who's got a question for James? Anybody? Oh, we got one back here, good. Hey, Ry, would you do me a favor and run the catch box for us? Thank you. Right here in the middle, straight back. Appreciate that. Wait, oh, you're good.
Audience Member:
To go back to your comment about getting AI to dream: you mentioned one of the objectives was to identify some of the gaps in AI research, the white space. Are there any of those that you're comfortable sharing with us?
James Villarrubia:
It was white space broadly, in any area. So how about I give you a quick recipe for doing this, which might be helpful for thinking about brand strategy. If you're not doing scenario planning for your brands, you're missing out on a real strategic opportunity.
The quick recipe, I would say: do a broad STEEPLE analysis, social, technological, economic, environmental, political, legal, ethical. So ask the AI to do a STEEPLE analysis of your field, and then say, OK, list all of the actors and players in this market or this field, not just your competitors, but anything that might impact your market. That could be the president, that could be a tariff, that could be anything. And then say, OK, now go one by one: pick an actor, pick a reason, have them do something; pick another actor, pick a reason, have them respond. Go back and forth for a few rounds, and just do that over and over again. That's a very quick, shoehorned way to have an AI create a nice little futurist scenario that is somewhat based in fact, and you can add an article, based on this article that I just found, or these two articles, whatever it is, to add a little entropy up front.
Then run it: based on this article, play this game with me. If it does that, it will create this nice little narrative for you, like, imagine this happens. If you do that enough times, you'll start to see a lot of trends and interesting things that the AI would probably not tell you outright, that are probably not explicitly in an article anywhere, but it's telling you something latent in the space. One of the things I found really interesting, the white space it revealed, is that no matter how many times we ran that scenario builder with AI, every single time, teenagers would find a way around something. Always. Oh, China does this, and the US does this, and Europe responds to this, and then the teenagers found a way around it. I was like, oh man, the AI is telling me something about humanity: no matter what happens, teenagers are like, yeah, screw the rules, we're doing it anyway, right? I love that.
And then a year later we see Algospeak, right? We banned all these bad words on TikTok; OK, the kids just use different words. Oh, he didn't die, he's unalived. And I was like, OK, we all know what that means now. It's weird, but OK. So it's the AI detecting and proving what was a latent thing in the ecosystem that maybe we hadn't put a name on. You can do this with the current models, you can do it for yourself, and you should do it for your brands. Play those games out: what happens if there's an economic drop? Boom, play those games out, and you will be a lot more prepared to deal with the brand fallout, or the brand opportunities, in the future.
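If you'd rather run James's recipe programmatically than by hand, here's a minimal sketch of the loop, assuming a generic `ask()` wrapper around whatever chat model you use. The prompts paraphrase his steps and are assumptions for illustration, not his actual prompts.

```python
import random

def ask(prompt: str, history: list[str]) -> str:
    """Stand-in for your LLM call of choice (OpenAI, Anthropic, a local
    model...). Send the running transcript plus the new prompt and
    return the model's reply."""
    raise NotImplementedError("wire up your chat model here")

def run_scenario(field: str, seed_article: str, rounds: int = 6) -> list[str]:
    transcript: list[str] = []

    # Step 1: broad STEEPLE analysis of the field, seeded with an
    # article up front to add a little entropy.
    transcript.append(ask(
        f"Do a STEEPLE analysis (social, technological, economic, "
        f"environmental, political, legal, ethical) of {field}. "
        f"Base it partly on this article:\n{seed_article}", transcript))

    # Step 2: list every actor that might impact the market,
    # not just competitors (regulators, tariffs, presidents...).
    actors_reply = ask(
        f"List all actors and players that might impact {field}, "
        "one per line, names only.", transcript)
    actors = [a.strip() for a in actors_reply.splitlines() if a.strip()]
    transcript.append(actors_reply)

    # Step 3: go back and forth for a few rounds. Pick an actor, pick
    # a reason, have them act; next round, another actor responds.
    for _ in range(rounds):
        actor = random.choice(actors)
        transcript.append(ask(
            f"Pick a plausible motivation for {actor} and have them take "
            "one concrete action in this scenario, then describe how the "
            "situation changes.", transcript))

    return transcript
```

Run it many times and read the transcripts side by side; the recurring patterns (like the teenagers always routing around the rules) are the latent signal James is describing.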
Steve Krull:
Cool. Any other questions up front here?
Digital Summit Minneapolis:
Steve, we're actually at time, but any other questions that you guys have for James, you can ask him with Steve.
Steve Krull:
Thank you, James, it's been a pleasure.