As artificial intelligence (AI) reshapes the way we work, the need for employees to understand and use data has never been more important. According to recent research from DataCamp, 86% of leaders say data literacy is essential for their teams’ daily tasks — and 62% say the same about AI literacy. But without these foundational skills, organizations risk falling behind.
In this episode of The Business of Learning, we sat down with Dr. Bill Brantley, CPTM, president and chief learning officer at BAS2A, and Lauren LePage, learning strategist at Chick-fil-A. Tune in now to explore how L&D can build AI and data literacy across the workforce and gain insights on:
- How to assess AI readiness and measure data literacy using tools like skills mapping and competency models.
- The critical role of leadership in modeling data-informed decision-making and creating a culture of ethical AI use.
- Key steps to begin building an AI- and data-literate workforce.
More Resources:
- [Wiki] AI in Training
Learn to navigate your organization’s AI strategy and gain the frameworks needed to design learning programs that drive workforce readiness and AI adoption with the soon-to-be-launched AI Adoption and Workforce Readiness Certificate.
The transcript for this episode follows:
[Ad]
As a training professional, your job is to effectively manage the business of learning. You probably tune into this podcast to gain insights on L&D trends being used by some of the most innovative thought leaders in our market. But did you know that Training Industry also provides data-driven analysis and best practices through our premium research reports? Our entire catalog, including reports on deconstructing 70-20-10, learner preferences, the modern learner experience and AI’s impact on L&D, just to name a few, can be found at trainingindustry.com/shopresearch. New insights create new ways for L&D to do business. Let Training Industry research reports take your training initiatives to new heights. Go to trainingindustry.com/shopresearch to view the entire catalog.
Speaker 1:
Welcome to The Business of Learning, the learning leader’s podcast from Training Industry, where we speak with industry experts and thought leaders on all things learning and development.
Michelle Eggleston Schwartz: Hi, welcome back to The Business of Learning. I’m Michelle Eggleston Schwartz, editor in chief at Training Industry.
Sarah Gallo: And I’m Sarah Gallo, senior editor.
Sarah Gallo: Compared to its 2023 report, the World Economic Forum’s 2025 Future of Jobs report found an increase in relevance for big data and AI skills. And it’s not hard to see why. As AI continues to impact nearly all industries and businesses increasingly rely on data-driven decision-making, employees across functions need those foundational AI and data skills. In this episode, we’re excited to explore how L&D can help build AI and data literacy across the workforce. And with us, we have Dr. Bill Brantley, a Certified Professional in Training Management and president and chief learning officer at BAS2A, and Lauren LePage, learning strategist at Chick-fil-A. Bill and Lauren, welcome to the podcast.
Lauren LePage: Thanks, Sarah.
Dr. Bill Brantley: Thank you.
Michelle Eggleston Schwartz: Yes, welcome. To get started today, could you discuss a little bit about why AI and data literacy are becoming essential skills for today’s workforce, and what business risks organizations face when employees lack these skills?
Dr. Bill Brantley: So I can go ahead and take that one. The reason why you need to have employees that are skilled in data analysis and also AI is because more and more companies are depending on AI and data analysis, and we’ve seen this trend rising. If you don’t have employees that really understand how these work, there are just going to be some issues and some problems. A lot of times what we’re seeing is maybe a couple of things here. Like for the data: you can do all this great statistical analysis, and the AI can help you with it, but if you don’t have the skills to understand what it’s outputting, it can lead to some real problems. I’ve seen companies with a heavy reliance on AI and data analysis where things are, you know, spit out by the system, and people don’t know enough to question it. That could be an issue right there. We’re also seeing this now where they’re calling it “lazy brain,” where people are more and more using AI to do their work. So, you know, if you use ChatGPT all day to write your emails, after a while we’re seeing people don’t really know how to start an email or write an email. So yes, you need to have the data analysis skills, but that also helps you learn the weaknesses and the strengths of the tools you’re working with. To give you a really blatant example of this: back when I worked at the Office of Personnel Management, we had a database of all the federal civilian employees. Every month we would collect information and do what’s called edits, because the data would come back. And I remember one particularly amazing example where we had someone that was a Vietnam vet, had three master’s degrees and, you know, all these different attributes. Then when you looked at the age column, you saw that they were four years old.
So that’s a clear indication that there’s something wrong with the data. Sometimes the data is not that blatant. So you do need to have people that can double check that because you don’t want to turn your thinking over to the AI system.
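[Editor’s note: Bill’s OPM anecdote describes what data teams call an edit check: flagging records whose fields are individually valid but jointly impossible. A minimal sketch of the idea in Python (the field names and thresholds here are hypothetical illustrations, not OPM’s actual rules):]

```python
# Minimal sketch of an "edit" check: flag records whose fields
# are individually plausible but contradict each other.

def edit_check(record):
    """Return a list of consistency problems found in one personnel record."""
    problems = []
    age = record.get("age")
    if age is not None:
        if not 16 <= age <= 100:
            problems.append(f"implausible age: {age}")
        # A master's degree implies an adult employee.
        if record.get("masters_degrees", 0) > 0 and age < 22:
            problems.append("degree count inconsistent with age")
        if record.get("veteran") and age < 18:
            problems.append("veteran status inconsistent with age")
    return problems

# The anecdote's record: a four-year-old Vietnam vet with three master's degrees.
suspect = {"age": 4, "masters_degrees": 3, "veteran": True}
print(edit_check(suspect))
```

The point of checks like this is exactly what Bill describes next: most bad data is not this blatant, so automated cross-field rules plus a human who knows enough to question the output are both needed.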
Lauren LePage: That’s so true. I would just [say], I love what you said there. For us at Chick-fil-A, our culture is at the forefront of everything that we do. And so when I think about the risks in data and interpreting it incorrectly, we’re always trying to train on: what’s the story behind this, and what’s the true story about this? Because you can get AI to filter through all of this data, and it comes up with a story. And maybe it’s something where you were like, “Yeah, I was thinking that too,” and it feeds that unconscious bias that you may have. But it’s really important to look through that and evaluate it, and get other humans to look at it too, to make sure that you’re understanding the full story before you make some of the key decisions that the data might really impact here.
Sarah Gallo: Yeah, I think that’s key. I also want to talk a little bit about the measurement and baselining aspect here. How can organizations assess their workforce’s AI readiness and measure improvements in data literacy over time? And do you have any tips on using methods such as skills mapping or competency models to establish a baseline and track progress from there?
Lauren LePage: That’s a great one for Bill.
Dr. Bill Brantley: OK, well, one thing I look at with many organizations that are increasingly using AI is their value chains. And what we mean by a value chain is just all the things they do to create value. Once you’ve mapped that out, you can start putting AI into parts of it. For example, with my company, I have this offering: if you write a book, I can help you create a course out of it. And I have these different parts of the process. As training folks, we probably don’t like writing the questions for the knowledge checks and the exams all the time, because it takes forever, it seems like. I have used AI to help me. Now, I don’t like AI writing my content, but once I’ve written the content, I’ll let AI use that to create the knowledge checks. OK, so once I have AI put into my system, I can then see: here’s how I need to use AI here, here’s how I need to use AI there. So I take a look at this, look at the skills that are needed, maybe do some competency mapping, and then I can train my people. Because I’ve run into situations where companies [say], “Oh, I’ve got to train everyone on AI,” and they train everyone on AI, but it’s all kind of scattershot. What are you actually trying to teach them? There’s a lot to AI that they don’t need to know, so we want to be specific about what they need to learn. And then, how are they helping us in the organization with the processes that, you know, go back into the value chain?
Lauren LePage: Yeah, I would say at Chick-fil-A, we’re a little early on in this process. It’s more exploratory right now. And I think we probably come from more of a conservative front on the use of AI. So just being incredibly clear about what goes into AI is kind of the first step, because people are interested, and they’re going to use it. Everyone’s talking about ChatGPT, and it’s more like, what are you putting into ChatGPT? It would probably be wrong to come in and be like, “No, you cannot use this tool until we’ve fully done all the process to vet it and go through all the things.” But just being super clear on what goes into it. That’s kind of more the training that we’re focusing on. And that message from leadership is really: we need to be careful about what we put into it.
Dr. Bill Brantley: And if I can offer, as a Chick-fil-A consumer, I really love your ordering process and how quickly you can get stuff through. I’ve watched as your workers are using tablets and such, and I can see AI probably being in there, because that’s a specific use in their value chain: getting those orders out.
Lauren LePage: Yeah, definitely.
Michelle Eggleston Schwartz: Lauren, that’s such a good point you made about training employees on what goes into ChatGPT and these tools. So, I’d like to dig in a little more there around training employees on AI and data skills. Can you share any actionable tips or effective strategies or best practices?
Lauren LePage: Yeah, so like I said before, it’s more of this exploratory thing. I think in L&D, a lot of the challenge is really getting people to want to learn. So what’s really cool about getting a project like AI is that for a lot of people, the easy selling point is, “AI is going to make my job easier. OK, I’m in.” Getting the buy-in is interesting to see, because everyone’s really asking for it. It’s like, “When can I get Copilot?” We have an internal kind of ChatGPT, and it’s, “When can we get access to this?” And what’s also cool about those tools is you can ask them, “How do I use this, and what can I use it for?” Circling back to where we began, it’s more about what you put into it and the right prompts. Bill and I were talking a little bit about this: being really clear on the prompts that you are entering so that you get what you want. But then on the other side of that, especially from a training perspective, if I have my own agenda for what I want the output to be, I can find reasons to get that output and keep prompting ChatGPT. So, for example, I had this one quote from Truett Cathy and really wanted to use it. And I was like, “Oh, I bet I can make these connections.” And then I’m putting all these things into ChatGPT to make these connections. He says something about using time, love and resources and giving that to people. And I was like, “Oh, this is a perfect framework for training.” And then I get into it and I’m like, this is not what I had in mind at all. Instead of increasing my productivity, it was a wash. I mean, it was three hours of work where I was like, what was I even trying to do? So I think it’s kind of training on: what are you going after, and how are you using AI to go after that? And how are you taking your biases out of it?
How are you looking at it objectively when you do this? And really, it’s just having people practice, putting people in learning circles and talking about these best practices with each other, and then also making sure that legal’s in there and really uncovering some of these risks that we should all just be aware of.
Dr. Bill Brantley: And if I can add to that, the training is very important. It’s also kind of scary to folks when we talk about the data analysis and the statistics. We’ve all probably had statistics classes in college, and the math was just overwhelming. One of the things I’ve learned to do when I teach folks about data analysis, statistical analysis and AI is to not focus so much on the math as on the concepts behind it. Because one thing I’ve run into, to use this example, is that a lot of folks confuse correlation with causation. And this is the thing: AI does not really understand context when you get down to it. The trick with generative AI is that it’s just very good at finding the next most likely concept after you give it a concept. It doesn’t really understand things. I have some things that have been spit out by these systems that are just ludicrous. One example I love to use about correlation and causation: if you take a look at the number of Nicolas Cage films released in a year, there is a correlation between that and the number of swimming pool accidents. Now, what’s the connection, and what’s happening here? But I’ve seen folks that will take something like that from an AI system and say, oh, there must be a causation, there must be some kind of link. And that’s where you don’t have to know the exact technical statistical formula, but you need to know enough to say, I don’t know, let me check this, let me check on that, and make sure there is an actual link and not just some kind of weird correlation. So one of the things when I teach data analysis: don’t worry so much, because the systems can do the heavily mathematical analysis for you. It’s more like, learn how to question what it’s coming up with.
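[Editor’s note: The Nicolas Cage/pool accidents pairing was popularized by Tyler Vigen’s spurious-correlations project. Bill’s point is easy to demonstrate: two series with no causal connection can still show a strong Pearson correlation. A short sketch, using made-up yearly counts for illustration rather than the real figures:]

```python
import math

def pearson_r(xs, ys):
    """Plain Pearson correlation coefficient, computed from scratch."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Illustrative yearly counts (invented for the example):
cage_films = [2, 2, 2, 3, 1, 1, 2, 3, 4, 1, 4]
pool_accidents = [109, 102, 102, 98, 85, 95, 96, 98, 123, 94, 102]

r = pearson_r(cage_films, pool_accidents)
print(f"r = {r:.2f}")  # a sizable positive correlation, with no causal story behind it
```

This is exactly the kind of output an AI tool can surface confidently; the data-literate response is to ask what mechanism, if any, could link the two series before treating the number as meaningful.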
Sarah Gallo: Yeah. And that sort of analytical mindset is really important here. And I think, as we’ve touched on, we know that AI ethics and data privacy concerns are really key for companies and for learning leaders. How can strong data literacy and data skills help employees use AI tools more effectively, especially when it comes to things like data privacy, security and ethical concerns?
Dr. Bill Brantley: Take that one if you want to.
Lauren LePage: Oh, you could take it.
Dr. Bill Brantley: Well, one of the things I learned, especially when I was working with the federal government, is about data literacy. We have different databases, and we have these protections around privacy and personally identifiable information. And one thing that I had to teach folks is that I can take one database, run it against another database, take a third database and actually produce a composite picture of somebody. Things pop out from the intersection of these that you wouldn’t think would pop out. So a lot of times with the data analysis, the critical thinking is: OK, I’m looking at someone’s birth date, I’ve got that. I get someone’s Social Security number; I’ve got that in another database. You start pulling these together, and you can now breach the privacy there. So when we’re working in data analysis, there’s also the data storage: where is it stored? How are we making sure we control the access? And also, among some advanced concepts, I don’t know if you have heard of synthetic data. Basically, I can take a database with actual statistical information in it and have a computer give me a replica of that data, but it’s not connected to anybody. It’s artificial, basically made up, but it really replicates the characteristics of the other database. That way I protect the actual information, but I can still do the statistical analysis I need to do. That takes a little bit more advanced understanding. And with data privacy and AI, remember, in the early days, when you gave an AI system a prompt with some information in it, that information might show up in someone else’s results, because the AI system is training off your stuff.
So when I do training for law firms, we talk about this a lot: how to disguise this, and sometimes to just not go out to the public AI system but have something on premises that basically is not connected to the internet, where you can still do the kind of search and AI analysis that you want to. That other layer of security actually is not a difficult concept to learn. It’s just about being very conscious of it.
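[Editor’s note: The synthetic-data idea Bill describes can be sketched very simply: fit a distribution to the real column, then sample fresh values from it. Production tools model joint distributions across many columns; this toy version handles a single numeric column under a normality assumption, and the “employee ages” column is invented for illustration:]

```python
import random
import statistics

def synthesize(values, n, seed=0):
    """Sample n synthetic values from a normal distribution fitted to the real column.

    The synthetic values preserve the column's rough statistical shape
    (mean and spread) but are not tied to any real individual."""
    mu = statistics.mean(values)
    sigma = statistics.stdev(values)
    rng = random.Random(seed)  # seeded for reproducibility
    return [rng.gauss(mu, sigma) for _ in range(n)]

# Hypothetical real column: employee ages (made up for the example).
real_ages = [23, 27, 31, 34, 38, 41, 44, 47, 52, 58, 61, 63]
fake_ages = synthesize(real_ages, 1000)

# The synthetic column's mean lands close to the real one,
# so aggregate analysis still works without exposing real records.
print(round(statistics.mean(real_ages), 1), round(statistics.mean(fake_ages), 1))
```

Real synthetic-data work also has to preserve correlations between columns and guard against re-identification of outliers, which is why Bill notes it takes a more advanced understanding.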
Lauren LePage: Yeah, that’s exactly right. When I think about this, going to AI is basically collaborating with everyone in the world. This is just a giant group that I’m collaborating with, with everyone’s different viewpoints and everything. And if you’re not careful, I mean, there’s a lot that can happen. So I’m not going to put a whole Chick-fil-A franchise agreement in there and say, “Hey, can you create me a training on this?” Because then the whole world has access to it and can give opinions on it. And then that’s out there, and people can use it and access it. So it’s really, really important that that’s what we focus on in training: being able to distinguish what you do and don’t put in there.
[Ad]
As a learning leader, your organization looks to you to close skills gaps, meet compliance standards, prepare your business for the future and more, which is a lot of responsibility, especially for a team of one. That’s why we created Tia, your AI-powered Training Industry assistant. Tia is upskilled on Training Industry’s library of resources, including research, job aids, house content and more. Use Tia as your guide for tackling any L&D task. By becoming a Training Industry member, you’ll gain access to Tia in addition to becoming part of a membership community where you can connect with like-minded L&D practitioners and experts. Driving the business of learning can be challenging, but remember, you don’t have to go it alone. Try Tia today for just $30 per month. Sign up at trainingindustry.com/membership.
Michelle Eggleston Schwartz: Exactly. There’s so much to consider [to] get all of that information out to your employees and make sure that they’re abiding by all these standards. That’s a lot to keep track of. But I’d like to switch gears and touch on the leadership aspect here. What role does leadership play in building a culture of AI and data literacy, and how can L&D help leaders champion these skills?
Lauren LePage: Yeah, when I think about leadership coming in and creating a culture of AI, which is an interesting term, because culture is really important to Chick-fil-A and there’s a big human component to that. So when I think about leadership sharing the message of AI and data literacy, it’s very much … Chick-fil-A is like, what? No, we are all about our people, caring for people. We want to be the world’s most caring company. So from an employee perspective, it’s like, well, am I allowed to use this? And it’s like, well, we’re still a business. We still want increased productivity. So I think that message, uniquely for us, is leadership saying, “Yes, we want you to use these tools to help you in your role, in your job, to add business value,” and helping people be OK with that. But also being very clear: if you choose to use this, just be aware of the risks that come with it. So that’s going to be our huge message with leadership, just being aware of that.
Dr. Bill Brantley: Yeah, I like that a lot, and it makes a lot of sense. And of course, we know in the training world, not just in AI, you do need the leadership modeling certain behaviors and saying to folks, “This is why you need the training. This is why the training is important.” It’s why I try to get my managers on board whenever I pitch a new training piece. One of the things that managers and leaders can do is model critical thinking skills and also intellectual humility. Because a lot of what we do with AI … I love this analogy. The guy is a British researcher, so you’re going to see the British phrases in here, but [he] talked about AI [and said] just think about AI as that bloke in the bar who knows everything and will back you up in whatever you say. And I’ve seen this before: it’ll find for you what you want. Many times, people will use AI to prove a point because they already had an opinion and want AI to prove it. So it’s more like the leaders saying, we need to model critical thinking. What does the actual data show us, and what are some issues that we may have here? So I think leadership can foster the responsible use of AI. But the thing that we bring to the table as human beings is our critical thinking, and having our leadership model that to us, especially the intellectual humility: knowing what we don’t know, and knowing how to find out what we don’t know in a responsible manner.
Sarah Gallo: Yeah, knowing what we don’t know, that’s key. Well, we’ve covered a lot of ground today, but before we do wrap up, is there one step our listeners can take to begin building a more AI- and data-literate organization?
Lauren LePage: So, I would say, you know, get your IT department to start looking into these tools, and make sure that they’re adding it to their cybersecurity training, or your L&D department, whoever’s owning it, and just do the research and understand. I’m reading a book right now called “The Loop” by Jacob Ward, and I’m just really thinking about all the loops that are happening in AI, and even in social media, how you’re getting prompted with ads and things. There’s kind of this unsaid agenda behind things that happen. So it’s really, really important that the research is done for this and it’s owned by the right people. I would just say, do your research and be really clear in the training on what you should and shouldn’t do if you use these tools.
Dr. Bill Brantley: Yeah. You have to really know what you’re doing with that. And also, just make playing with the AI a daily habit. It reminds me of back in the ’80s, when we had spreadsheets and they seemed kind of scary, and we didn’t want to work with them too much. But the people that learned the most were the ones that played with them. And you’re not really going to break the AI system. I have yet to see anyone break an AI system badly doing this. So have some fun with it, see how it works, and start to get to know its limitations and such. Because, and this is part of what I see in the media, we tend to start putting AI on a pedestal and saying, oh, this is such an amazing tool. It is an amazing tool, but it is there for us to use. The more you use it, the better you will be with it, and the more you can appreciate it but also understand its limitations. It’s kind of like the spreadsheet thing I mentioned earlier and how that led to a large number of mergers and acquisitions, because we could do economic modeling. There were some folks that took the time to learn what an economic model was, its limitations and how it helps, and others that just said, we’ll go talk to the Excel guy in the office. I happened to be the Excel guy in the office, and I would get all these crazy requests. Now I’m starting to be the AI guy in the office, because I say to people: just play with it, have some fun, talk to it. You can actually talk to AI; they have the voice modes. Ask it some silly questions and such, just to get used to it.
Michelle Eggleston Schwartz: Love that. On that note, I would just like to thank both of you so much for speaking with us today and continuing this conversation around AI, because it really is taking off and being integrated into our work and home lives. How can our listeners get in touch with you after this episode if they’d like to reach out?
Lauren LePage: Yeah, absolutely. For me, you can email me at lauren.lepage@cfacorp.com or find me on LinkedIn.
Dr. Bill Brantley: Yeah, you can find me on LinkedIn. I’m under Bill Brantley, so you know how to kind of search there. It may show up as Dr. Bill Brantley. That’s fine. It’ll find me. And again, you can email me at bill@bas2aoffice.com.
Sarah Gallo: For more resources on this topic, check out the episode description or visit the show notes on our website at trainingindustry.com/podcast. And don’t forget to rate and review us wherever you tune into The Business of Learning. Until next time.