As artificial intelligence (AI) continues to reshape how we work and learn, learning leaders need robust AI skills to leverage this innovative technology for more impactful — and efficient — training. However, with AI advancing quickly and new AI-powered learning technologies frequently hitting the market, building an AI skill set can seem daunting for busy training professionals.
In this episode of The Business of Learning, we spoke with Marc Ramos, an industry vet with 20 years of experience with Google, Novartis and Accenture, who was most recently the chief learning officer at Cornerstone OnDemand, and Melissa Brown, CPTM, learning and development leader at Holland and Hart LLP, about building your AI skills as a learning leader.
Tune in now for their insights and advice on:
- The skills learning leaders need in an AI-driven world, and how to develop them.
- Tips for becoming proficient with AI-enabled tools and technologies.
- The importance of “keeping a human in the loop.”
More Resources:
- Job Aid: AI Use Cases for L&D
- Wiki: AI in Training
For more insights on AI’s impact on L&D, download a free preview of the e-book, “The AI Revolution in L&D: Tomorrow’s Training Today.”
The transcript for this episode follows:
[Ad]
As a learning leader, your organization looks to you to close skills gaps, meet compliance standards, prepare your business for the future and more — which is a lot of responsibilities, especially for a team of one! That’s why we created Tia, your AI-powered Training Industry Assistant. Tia is upskilled on Training Industry’s vast library of resources including research, job aids, course content and more. Use Tia as your guide for tackling any L&D task. By becoming a Training Industry member, you’ll gain access to Tia in addition to becoming a part of the membership community, where you can connect with like-minded L&D practitioners and experts. Driving the business of learning can be challenging, but remember: You don’t have to go it alone! Try Tia today for just $30 per month. Sign up at TrainingIndustry.com/membership.
Michelle Eggleston Schwartz: Hi, and welcome back to The Business of Learning. I’m Michelle Eggleston Schwartz, editor in chief at Training Industry.
Sarah Gallo: And I’m Sarah Gallo, senior editor. Recent Training Industry research has found that over half of organizations are using AI for training fairly often. And as more companies embrace AI and its potential to streamline L&D, the need for AI upskilling has inevitably emerged. Research has also found that the majority of learning leaders are not confident in their ability to use AI effectively. Today, we’re speaking with two experts to learn more about how learning leaders can build their AI skills. With us, we have Marc Ramos, an industry vet with 20 years of experience with Google, Novartis and Accenture, who was most recently the chief learning officer at Cornerstone OnDemand. And with us, we have Melissa Brown, a Certified Professional in Training Management and learning and development leader at Holland and Hart LLP. Marc and Melissa, welcome to the podcast.
Melissa Brown: Great to be here.
Marc Ramos: Thank you very much. Appreciate it, Sarah.
Michelle Eggleston Schwartz: Yes, welcome. I’m so excited for this conversation today, because as we all know, AI is really top of mind for a lot of learning professionals. So, to kick things off: Can you both share some common use cases for AI in L&D, and any areas where you’re seeing, or not seeing, AI used on the job?
Marc Ramos: Yeah, I guess I can just jump in first. It’s a great question, and it’s a big question and a hot question. By that I mean, it’s not just L&D folks that are being impacted by AI from a workplace perspective. It’s probably across the board: HR, talent and other teams as well. Thinking about it from more of an L&D lens, I think, Michelle and Sarah, there’s maybe three common use cases that I’m coming across. The first one’s very obvious, I think, and that’s the production of content, right? Using your prompts to insert some text to create X: a video, a PowerPoint slide, some content, some graphics, or maybe even a song. So the content set of use cases is just abundant. The other thing is, you know, we need to do a lot of research related to our jobs, and whether you’re in marketing or whether you’re in L&D, you’re trying to figure out this thing called requirements or needs analysis. So is there a large language model, or LLM, that can help you accelerate the process to make things a little more efficient? But it’s not always just about efficiencies. It’s about efficacy, to make sure that the output is still the right output. And then I think the third piece is just what I consider the general-purpose bucket, right? The ability to work with your inbox a lot smoother, the ability to edit a document much faster, the ability to maybe structure some thinking. Where if you have a concept in your head that’s quite abstract, jot it down, and then it’s okay to ask Gemini or Claude or your favorite large language model, “What do you think? What’s your opinion?” Not to give you this verbose, so to speak, dialogue or heavy response, but just to get you thinking a little more creatively or differently about that idea. So those are maybe three that I’m seeing.
Melissa Brown: Really excellent points about use cases for AI, Marc, and obviously not just within L&D. But speaking directly to how we approach AI in L&D, I think I’m seeing kind of an interesting disconnect in how we use AI to create content in L&D. We personalize learning paths or analyze data to predict outcomes, but I think we’re missing something pretty critical: We’re overlooking the practical applications for individuals, dare I say, at the front line. I work at a law firm right now, and from a legal perspective, we’re focusing a lot on how attorneys can use AI to do legal research. That’s a fairly simple use case from an L&D perspective: How do we build that sort of training now? But what we’re not looking at (and I don’t want to imply it’s just this particular firm; it’s kind of an overarching theme) is that we’re missing a large share of individuals who could also benefit. Paralegals, for example, or legal assistants or customer service agents. They could be using AI, not just receiving training that was created from a learning and development perspective. Learning and development should potentially be looking to be more involved in getting that sort of access, knowledge and execution to these individuals to enhance their daily work, to improve their client communications or to manage their complex document coordination, whatever that looks like. So we’re looking at not just how L&D can use AI, if you will, to create training or change training, but really democratizing access across the board. And I think L&D has a big, big piece of that, that maybe we’re missing out on, or there could be a disconnect around.
Sarah Gallo: Very cool use cases from both of you there. So thanks for sharing a little bit more about that. And I’m sure we’ll see even more use cases as AI continues to evolve over time as well. As you both mentioned, many of these use cases currently are related to content development and delivery, so I’m hoping you could share a little bit more on how you see AI impacting the future of training design, development and delivery.
Melissa Brown: So, you know, as we were talking about in the last question, I’m just a big proponent of this. This is really, really dear to my heart right now; it’s central to my doctoral thesis, so there’s a lot going on in my brain around this. But again, L&D should be a central element, or a conduit, to ensure that we can help others effectively leverage AI. So when we talk about the future of training design and delivery, some component of that should be helping others actually use AI. What concerns me: I just finished the SHRM AI plus human intelligence specialty certification, which was, I think, a total of nine separate workshops, two hours apiece. And there was a lot of talk about, if you will (they didn’t use these terms), the haves and the have-nots: those that are adopting and getting involved and those that are not. And I fear that as we look further into that future, we could potentially be creating some sort of, if you will, digital caste system, where there are individuals that are very deep and very, very advanced in their use of AI, be it for L&D or whatever the case, and others that are not. That workplace divide could indeed have very, very long-lasting implications for workforce development, employee engagement, really a lot that we’re probably not thinking about right now. So how we can make L&D accessible and valuable to everybody, and how we look at that future at every organization, is very, very important.
Marc Ramos: Yeah, I think, Melissa, you’re spot on. A couple points. The first one that really resonates is based on this democratization theme that you brought up, Melissa, which I think is just so powerful: Does AI drive new forms of equity or not? It’s a very provocative question. The one thing that I really, really like about what you said is: Does the L&D team have a certain responsibility or a certain function to help educate, to help provide benefits and impact to other folks in the company? What’s really fascinating to me (and maybe I’m a little too selfish or a little too myopic) is that the L&D teams, the people creating content, are starting to see the value of what AI can do big time, more so than I think many other functions within the company. If that’s the case, if the learning and development function is the furthest upstream to see the benefits and the value of what AI can do for the other teams, more sidestream or downstream, finance, sales, whatever it might be, does L&D then have a responsibility, a new set of duties, to really help further scale and accelerate AI for the enterprise? I have this very cheesy term that I think about, and that is: Is L&D the new R&D? Are we the new research and development function, in very parallel, similar ways to the formal research and development function at a pharmaceutical company or a high-tech company? We do the research, we do the investigation, we try to understand the clients’ or users’ needs, we do the prototyping, we test different iterated releases. My point here is just echoing Melissa’s point: Do we have that responsibility to actually help guide others to the betterment of what AI can do? And then, getting back to your question in terms of the future, there’s a lot going on. I think there are probably two or three things that come to mind. The first is this whole point around agentic AI, AI agents. It’s profound and it’s here, and it’s going to be incredibly, incredibly valuable. Think about when you’ve had some sort of online help desk, right? A little chatbot pops up and answers your questions, or resolves the initial questions you have if you have a problem with your telephone or something like that. Well, an agent can do that, but an agent actually does work for you. So the first thing I think is really interesting is how agents can support the different methodologies and approaches that L&D teams have. Whether it’s the ADDIE approach (analysis, design, development, implementation and evaluation), could an agent do any one of those for you, or with you? That’s just one really quick example related to this agentic AI piece. The other thing, which is really fascinating, is around skills: how you can work with your large language model or an agent to identify the skills gap, the catalog of what we have to fill that gap, whether it’s courses or whether it’s people, and to help build that path for you. And the last thing, which is, I think, a little way out there, but I’m starting to see it on the fringes: We need to stop thinking about the course, the two-hour course, the 20-hour course, as this thing that you take that is in a container, right?
It has a set, fixed duration of two or 20 hours. It has a set, fixed capability of only addressing so many use cases. It tends to only support one type of role, whether you’re an engineer or a salesperson. I think what’s going to be really fascinating, maybe in two or three years, is AI’s ability to basically remove the container of the course and make sure that learning is provisioned dynamically. It’s alive. It’s a living, breathing entity. And if we get to that point, it’s going to be a radical change for, I think, the vast majority of learning content providers, as well as from a data architecture and systems perspective, and then obviously from a learning perspective. This is learning in the flow of work, but what if it happened dynamically, based on what these 12 similar users are going through, identifying and grabbing from them their best practices, whatever that might look like, and then feeding it to you? Because otherwise you might not know. So those are some things, maybe a little far out, that might be around the corner. But these are really, really exciting times, and I’m just echoing again Melissa’s point about really having that responsibility to help the other teams and teammates in your company.
Michelle Eggleston Schwartz: Love that. Those are such good insights. AI is really opening the door to extend training and reinforce it on the job, in the flow of work. There are just so many possibilities. And to the point you made about the agent aspect, Marc, there are really a lot of great use cases for AI. As Sarah mentioned earlier, we know that learning leaders need AI upskilling. What skills do learning leaders need to be successful in this AI-driven world?
Melissa Brown: If you don’t mind, I’m kind of really in the trenches on this one, so Marc, thank you for letting me step in. I think this is where somebody would expect me to say they need to be great prompt engineers, or they need to be iterative prompters, or they need to create a chatbot and customize it to do this one thing. And that’s great, but when we’re talking about learning leaders, I don’t think it’s about tech prowess; again, it’s that ability to bridge the gap. Not that… I love me some AI, right? But it’s not the technical component that I think can set a leader apart. In fact, I think there are three critical capabilities for this space. One is the vision to see the potential, as simplistic as that sounds. And maybe even to this audience [that might sound like] “Well, duh,” … [but] the reality is that’s not the case everywhere, and there are an ample number of leaders that may be tinkering with AI to some degree but don’t actually see the vision behind it, again harking back to some of the concepts or ideas from the agentic AI that Marc was referencing. Then the wisdom, which kind of goes back to, what is it, “the wisdom to know the difference”: the wisdom to make it accessible to everybody. Again, this goes back to that leadership capacity. If we as leaders are not thinking about this from an organizational perspective, without that wisdom to really foster and champion the notion that this should be accessible to everyone, then we’re going back into creating a deeper digital divide. And then finally, that courage that I mentioned, which is to challenge what we have right now, the status quo. It depends on what kind of industry you’re in. I’ve been in the call center space, where things indeed did move at the speed of light, because we were billing by the minute and every single breath that someone took cost somebody a dollar. Industries that are more traditional in framework or hierarchy may be a little slower to evolve or to change, for various reasons, even regulatory or legal ones. But to challenge that status quo, to change who gets access to these sorts of things, I think that makes a really big difference. When we’re implementing AI at the top and, as leaders, failing to (I’m going to keep using the same word) democratize it throughout the organization, we’re creating these, if you will, experts in … it may be those from the IT department or the C-suite or whatever groups those are, and leaving out that broader workforce. That’s not just a skills gap. That is indeed a leadership failure. So when we’re talking about leadership, whether from an L&D perspective or in the broader application, it’s very, very important that this is being considered.
Marc Ramos: I think Melissa is spot on again. The way I kind of look at it, there are the leadership skills, the leadership traits, the leadership attributes that a leader should take on. But there’s also the responsibility of leaders to help guide and be the beacon, for lack of better words, for the rest of your team, the rest of your division, your function, and obviously your company, and perhaps even your community and the ecosystem. There’s a really good friend of mine, Ross Stevenson, that I think this audience might want to check out. Ross is great. Ross recently came out with five skills, I guess, that leaders should have in the AI domain. One of them has to do with analytical judgment. And this is an obvious one, related to critical thinking and so forth, but it’s also making the right logical decision when you’re working with AI, whether it’s a GPT or a large language model. You have to have some sort of human, or humane, rationale in terms of, “Okay, what’s the right thing to do here?” The second one that Ross talks about is creative thinking, and this sounds pretty straightforward. The way I look at it (and Melissa was talking about this with challenging the status quo), part of creative thinking also means being a divergent thinker: thinking differently, but thinking differently with an intention to act, or to help train or teach others to also think differently. Then there’s obviously the innovation piece, and the flip side of the creative thinking coin, which is what risk looks like, so on and so forth. I think there’s a social piece here, maybe the third skill; I’ll call it social influence, which I think is what Ross calls it. And that’s important because it’s not only you as a leader and how you influence others, whether social, in person, whatever it might be. At some point, if this whole agentic AI thing takes off, guess what? Those agents are going to be other entities; they’re going to be part of your social domain. So you need to think a little more divergently, a little more creatively, about what your influence looks like related to X, Y or Z kind of social domain. The fourth one is around digital intelligence, which is pretty straightforward, and that includes technical proficiency and so forth. And the fifth one, which is really interesting, is adaptability, but adaptability in that humane sense I mentioned beforehand. When you think about adaptability, there’s emotional adaptability, because I think for a lot of folks, whether you’re an L&D or HR [leader], there’s a certain emotional response to what is going on: Is this going to affect my vocation? Is this going to affect my career? Is this going to affect my role? Is this going to affect my job? Those are all very, very important considerations, but there’s that emotional piece where you need to step back and understand how to adapt. And then, obviously, there’s the open-mindedness and the learning agility piece and so forth. So those are the five that I would consider, again coming from my friend Ross Stevenson. But I think it ultimately comes down to this: There are the skills that a leader should have, and then there’s the responsibility of what you do with those skills for others.
Sarah Gallo: I love all the skills you outlined, especially around that emotional piece, because it is so easy to forget. I do want to touch on one other challenge that we often hear about from our audience, and that’s AI resistance coming from senior leaders and stakeholders. So I’m hoping you can both share some tips on how to upskill yourself on AI when those senior leaders and stakeholders are resistant.
Marc Ramos: Yeah, that’s a very topical question. The bigger question, the meta question, so to speak, is how much of that resistance is because of AI itself. The reason I bring that up in terms of influencing stakeholders: Stakeholders are also going through the same questions. They don’t really know what the absolute impact is going to be for their world, right? Whether a stakeholder is [in] finance, because you need to get money, or a stakeholder is [in] procurement, because you need a renewed agreement. They’re in the same kind of land of unconscious incompetence; as they say, you don’t know what you don’t know. The thing is, they’re not always ready to admit that. So when you’re in an L&D function, or another support function where you have to fight to get the budget that you need, understand that’s also where they’re coming from, and have that higher level of empathy. Yes, they have a responsibility to make sure that there’s efficient spend and efficient allocation, but keep in mind they’re in the same arena of “you don’t know what you don’t know.” The other thing is, as I mentioned beforehand, if L&D is the new R&D, you have a responsibility to help educate them on the benefits and the value. If you’re in legal, for instance: If you basically know that renewing 10 similar contracts requires 90% identical effort, then AI can help streamline that process instead of you spending many hours on each renewal. But they might not know that, so you have a responsibility to teach them and train them on these new benefits. The benefit of that is then it’ll start [to] click. It’s like, “Wow, Melissa’s training team came to me and taught me how to do this. Oh my gosh, this is so great. I need more. How can I give you more funding?” Now, that’s very idealistic, and that’s not always the case, but that rationale, that thinking, is occurring. And I think there are a lot of other factors. At the end of the day, whether it’s because of AI or [a] pandemic or whatever, you really need to build trust. You have to have a high level of trust with the people that are your sponsors, as well as the people that are your stakeholders. And you need to be truthful, particularly related to the benefits of AI, because we’re all in the same ballpark, in these high states of unknowns and ambiguity. So I don’t know if that helps, but that’s the first thing that comes to my head.
Melissa Brown: I think it helps a lot, Marc. I just kind of feel like there’s the Melissa version of things that I say, and then the things I sometimes have in my head and don’t quite articulate, perhaps as well as I want to, or at all, and Marc handles all of that for me. So this is a very good pairing, Marc, and I very much appreciate it. Harking back on that question about keeping ourselves upskilled when the organization and leadership, the purse-string holders or decision makers, are resistant: I think resistance to this is no different than the resistance we’ve seen to, say, the cell phone, or when Kodak had resistance about going digital, or the story about Compaq not thinking that everybody was going to want a computer in their house. And that’s looking at it backwards. That’s starting with “Who wants a computer in their house?” rather than: What problems, what things happen in our life? What are our challenges? What are our pain points, be it, frankly, personal or professional, and how can AI in this case solve those problems? Not looking at AI and saying, “Okay, what can you do?” Because that’s backwards, but that’s how we traditionally look at things. “Oh, I got a new car. It can park itself.” Not looking at the problem, which is that I haven’t been able to parallel park since I took my driving test when I was 16. Let’s get AI to solve that problem. To make AI valuable and relatable, and to overcome that resistance, I think there are three key things, and anybody that knows me knows I’m always going on and on about them. The first is to start with those familiar challenges. It could be as simple as: Your inbox is out of control, so let’s use Copilot, for example, to organize your work; use it to craft better meeting summaries or better email replies to your clients, [to help with] basic, simple, relatable, familiar challenges. Second is to encourage individuals to treat those interactions like conversations. I think prior to this, everybody’s experience with something they could correlate in their mind was with Google, and when we try to interact with AI, any sort of generative AI, there’s a tendency, especially for those completely unfamiliar with it, to treat it like Google 2.0 — and it’s not. Of course, those of us listening here all know that, but encourage individuals to have conversations, not search queries. Search queries are Google. We want to have these conversational approaches. Now, that doesn’t mean you need to say “please” and “thank you,” though there have been a number of studies showing that can actually be helpful. But again, explaining the difference (this was actually a big indicator when I did a number of these workshops with my own team), really defining that it’s not Google; we don’t want to treat it like Google. We want to treat it very differently. And then third is to practice with real scenarios. Again, back to the workshop I held with my team: We went into, you know, these are the things that we do in our job. This is how long it takes. Is it necessarily hard? No. Does it take longer than you would like it to? Yes.
Is your final product the best you feel it could possibly be? Likely not. Let’s use that pain point, that problem, and solve it with AI. And this was as simple as going from theoretical examples to working on actual projects. In the workshop we did, we drafted real communications from communications that had been received by the department. We created actual training materials. We worked on writing their goals for 2025, which is something we’re required to do; it’s a mandate that comes in from HR, and people often dread doing it. Those are real-world applications. And it turned this team from, I wouldn’t say resistant, but “it’s a nice-to-have, I don’t need it,” into true adopters. And I think the more early champions you have, the more it can spread organically; following Kotter’s eight principles, those early champions really can make all the difference.
Sarah Gallo: I love that. And good for you for turning those resisters into true adopters. That is awesome. We’ll be right back after a brief message from our sponsor.
[Ad]
Artificial intelligence is impacting nearly every industry — including learning and development. As a learning leader, it’s essential to build AI skills to stay ahead of the AI revolution and effectively leverage AI for improved efficiency. That’s why we developed the AI Essentials for Training Managers Certificate, a program designed to develop and validate your ability to use AI for L&D. The program covers everything from writing effective prompts to identifying key use cases for AI in your training function and more. To learn more about the program, visit: trainingindustry.com/aicertificate. Don’t get left behind: Register now and equip yourself with the skills needed to thrive in the AI-driven future of work.
Michelle Eggleston Schwartz: I like the point you made earlier, Marc, around “you don’t know what you don’t know.” Looking through that lens, could you provide some tips on how learning leaders can become more proficient with these AI-enabled tools and technologies, like tips on how to write effective prompts? How can learning leaders essentially get more comfortable with these tools?
Marc Ramos: Yeah, that’s a great question. Let me just work backwards for a second. You’re familiar with Gartner’s hype curve, right? The first part of the roller coaster goes up; there’s a lot of excitement around this new thing. Then it peaks, and then reality kicks in. Then it goes into this thing called the trough of disillusionment, and then it comes out of the trough and kind of plateaus. We are somewhere in the trough. I think people get the hype and the excitement, and the adventure and the threat, but they can kind of see it now. There’s still so much stuff around the corner, I think, Michelle. But what’s really fascinating to me (and I’ll get back to your question) is that if you go to these conferences, you can just tell the temperament of the people walking around and the typical responses from a lot of the vendors, and there are spot-on responses. What’s really interesting now is people know there are a lot of questions we need to be asking a lot more thoroughly. So call them trough people; they’re in the trough of disillusionment. But getting back to “you don’t know what you don’t know”: What’s going to take care of that disillusionment? It’s okay to be skeptical. That’s healthy. As we used to say at Google, it’s okay to be uncomfortably excited, because that’s where we’re at. And part of that discomfort affords us the ability to ask a lot more questions, and I think that’s what’s happening now. The trough of disillusionment is supposed to end at the bottom of the slide, but the trough is getting deeper and deeper, right? It’s going below the waterline. But that’s fine, because we’re starting to ask a lot more questions. And then, getting back to the core of your question, there are just a lot of cool things happening now that are going to help drive adoption and stickiness. An obvious one: Whether you’re learning AI or learning how to make sourdough bread, hang out with an expert. Find somebody that does it really, really well and see if you can just hang out, follow them, or ideally do a project with them. The other one is, if you want to learn a product, use the product. If you want to learn more about what the heck a large language model is, well, check it out. You can find the top five in any search. Go in; the majority are free. And play around. Ask it some crazy questions. Ask it to do crazy things, within certain ethical standards, right? Just play with it. If you want to learn it, you’ve got to use it. It’s not rocket science, right? What’s the highest form of learning, in my opinion? It’s: Did you apply it or not? And did you apply it successfully? It’s about application. And part of application, as Melissa mentioned beforehand, is practice. You need to practice. You’ve got to get in there. If you want to practice and not be embarrassed, go ahead, do it by yourself. That’s fine. But you’ve got to practice, and by virtue of that, understand what you don’t know and the size of that domain of unconscious incompetence. That’s totally fine, but you won’t know unless you try. So I don’t know if that’s the best answer, but it’s the first thing that comes to mind.
Melissa Brown: That’s great. And I honestly couldn’t think of a better segue from what Marc just said: You have to practice. And how do they practice? Everything Marc said is exactly what I’ve told my team. You know, I’ve told them: Guys, I didn’t just come out of the womb knowing how to do this, right? Admittedly, I am a subscriber to, I think, everything. I think the only thing I don’t subscribe to is MidJourney; I do not have a MidJourney subscription. But besides that, I’m a junkie, and those are all personal expenses of mine. What’s interesting is that my job, and every other job I’ve ever had, has always reimbursed me for my cell phone, but there has yet to be talk about whether we’re reimbursing individuals for their AI subscriptions. And that sounds so out there, but the reality is, if I’m to use these products and I want to encourage everyone to practice, practice, practice, you run into limits very quickly. I learned this in my workshop: ChatGPT said you’ve run out of… okay, I don’t know exactly what it tells you, because I have a paid subscription, but ChatGPT says you’ve run out of available queries or whatnot; you must wait X amount of time. The shame, or the bad part, about that is that we want to encourage practice. We want to encourage, or at least I do, individuals to become familiar with different platforms. Certainly not to become an expert in each, but while some individuals prefer Platform X from a generative AI perspective, I am moderately in love with Claude. Don’t tell my husband. It is my absolute favorite. I love its artifacts; I love its projects. But I learned that on my own, not through the platform that is paid for by my organization. I don’t want to imply that we’re not [providing] tools, but oftentimes we’re providing single tools, and I want to explore all the tools. There was a recent study published by Wharton (I had seen it before, but Marc, you actually reposted it recently on LinkedIn), and it talks about 72% of C-suite or senior leaders using AI weekly. And I thought, great. But there was absolutely no data about anything else or anyone else: not [about whether] they’re allocating budget for others, or what percentage of others in their organizations are adopting as well. I found that, again, interesting … overall, that’s really not practical. It’s not practical to expect people to get good by practicing with something when we’re not effectively providing them those tools. It’s like expecting people to be great at Excel and not giving them a license to MS Office, right? It’s not practical, and at the end of the day, frankly, it’s not fair. And I understand that we’re just at the cusp of this, but these are decisions that have to be made when we talk about practical tips to become proficient. If you do not have adequate access to these tools, and in many cases I’m talking about front-line this, front-line that, 20 bucks here, 20 bucks there adds up really fast. Me talking about prompting techniques and all of that …
I could run a whole seminar just on that. But at the end of the day, again, that digital accessibility is going to become a barrier.
Sarah Gallo: I did want to touch a little bit on the human oversight element here. I know we’ve heard a lot from our own audience about the need for L&D leaders to really act as that trusted partner to AI. So can you talk a little bit more about how we can make sure we’re using AI while still keeping that human element, keeping a human in the loop?
Marc Ramos: Yeah, I’ll go first. Really, really good question. What’s striking to me is that whether it’s AI agents, or whether it’s the time you spend in Claude or Gemini to discover something, to research something, you’re interacting with a non-human entity. But what’s interesting is the value that is gained, as a human, as a result of that interaction. Does your attitude change? Do you find yourself more optimistic? Are you more willing to share what you’ve learned? What’s that old statement? Teaching is learning twice. So do you have that ability now because of that interaction with that thing that’s not as alive as you are? It might be breathing in its own technical, digital domain. That’s one way I would interpret the question. There’s also this whole belief related to collective intelligence, right? The whole aspect of collective intelligence is not anything new, so to speak: The collection of humans, and the diverse collection of diverse thoughts, ends up creating a better output. But when you start to think about AI as being part of that intelligence, again, whether it’s agents or not, that’s really, really interesting. And then, you know, Melissa was talking about equity and so forth. In fairness, it’ll be interesting to see at what point AI sits on that social ladder. Is it going to be at the bottom of that technical caste system, or the highest, or is it going to be the other way around? I don’t know if we’re ever going to have an answer, because I’m optimistic about how humans and humanity will adapt and grow from AI, and I’m actually pretty confident that AI is not going to do [a] really bad, mean thing; it’s going to figure out humans and how to help them even better. But there’s still that interaction. So what does that look like in the future? I don’t know. Again, I’m just being optimistic, Pollyannaish, so to speak, to a certain degree, but we’ll see what happens. I think it’s a really, really intriguing question.
Melissa Brown: I mean, I love this question about partnering with AI while still keeping humans human … [keeping] humans in the loop. I recently did a whole series, carried out throughout the year, that I had the opportunity to do in partnership with some of my peers, about change management and change resistance and change fatigue and change saturation. And then I went right from that, quite literally the very next day, and did a three-day AI workshop series. And I thought, wow, this is exactly what we’re talking about. One of the components that I used in the change management series was the Kübler-Ross model, and you might be hearing me say that right now and thinking, what in the world, how does that connect to what we’re talking about, partnering with AI and keeping the human in the loop? It’s exactly that, because when we’re talking about the human in the loop and that AI partnership, organizations and individuals go from denial through the stages of resistance, bargaining, negotiating, accepting. I think this is the same thing, and frankly, you can apply this model to so many different things; it might be embarrassing how many times I’ve used it to write a paper. When we’re looking at organizations or individuals going from that denial: “I can’t do that.” I just talked to somebody the other day who was talking about coding something and said, you know, “I can’t do that.” And I thought, wow, I don’t know a lot about coding a computer, but I do know AI can definitely code. Now, I didn’t know the specifics of what she was talking about, but again, that human element: I think what I was hearing was a fear from her that indeed, AI or its capabilities, without [keeping a] human [in the] loop, could replace her. Then bargaining, when we hear, “Oh, I can just do basic stuff. I don’t need it to fix my email. I can handle my email.” When we look at keeping that loop going, acceptance and understanding are when that partnership comes into play. And to Marc’s point, I don’t know where we are on that, and I don’t know whether that’s a societal perspective, an organizational perspective, or whether it always comes down to an individual’s perspective. My mom doesn’t have a cell phone, right? So in that model, when is she going to come around to acceptance? She hasn’t gotten there. To keep humans in the loop is basically to recognize that each individual has their own unique perspective and place on that journey. And so while we as L&D leaders, or leaders in any capacity, might be at one stage, we have to recognize that not everybody is at that stage. That human-AI feedback loop needs to continue and be perpetuated so that, as individuals are going through those stages, we can reach efficiency gains without those individuals seeing AI as a threat: a threat to their job, a threat to their industry, a threat to whatever that case may be. “Human in the loop” is just a great phrase in this question, because I think it’s exactly that: ensuring that the human aspect always remains in consideration.
Marc Ramos: If I can add something to that: I really, really, really love what Melissa was saying. This is a little off the wall, but it reminds me of the film Blade Runner. Blade Runner takes place in the future, and there are robots, and there’s this tagline, this main theme, related to why the latest and greatest robots are awesome. Why? Because they are “more human than human.” And sometimes I think about: Is AI going to be more human than human? Whether it’s an artificial general intelligence, an AGI thing, that’s really interesting to me, because in order for AI to be more human than human, it needs to recognize this one thing, and that’s humility. So will AI ever be in a state of humbleness, gratitude and humility? When that kicks in, I think we have an awesome new friend. What that looks like in reality is still to be discovered, but this combination, this correlation, this unity, this marriage, whatever it is, it’s near. And the excitement, at least for me, is trying to figure out what the heck it’s going to look like.
Michelle Eggleston Schwartz: Love that. And on that note, I have thoroughly enjoyed this conversation today. I think we could keep talking for another hour or two about this topic, but I want to thank you both so much for speaking with us today, Melissa and Marc. How can our listeners get in touch with you after this episode if they’d like to reach out?
Melissa Brown: I can actually be reached at, oh, you guys will like this one, Melissa Brown, easy enough.
Marc Ramos: Mine is similarly easy. Just find me on LinkedIn, Marc Ramos. There are a few of us, but you’ll find me. That’s probably the easiest way.
Melissa Brown: And that’s a good point. I’m just good old Melissa, middle initial L, Brown, because otherwise you’ll find a gynecologist and somebody else. So always middle initial L on LinkedIn: Melissa L. Brown.
Sarah Gallo: For more resources on AI in L&D, check out the description for this episode and the show notes on our website TrainingIndustry.com/podcast. And don’t forget to rate and review us wherever you tune in to The Business of Learning. Until next time.