
D2L's Teach & Learn
Teach & Learn is a podcast for curious educators. Hosted by Dr. Cristi Ford and Dr. Emma Zone, each episode features candid conversations with some of the sharpest minds in the K-20 education space. We discuss trending educational topics and teaching strategies, and delve into the issues plaguing our schools and higher education institutions today.
Exploring AI Literacies, with Dr. Angela Gunder
As AI continues to transform the educational landscape, understanding its impact is crucial for educators.
In today's episode, Dr. Cristi Ford welcomes Dr. Angela Gunder, founder and CEO of Opened Culture (OEC), to discuss the multifaceted concept of AI literacies. Dr. Gunder explains the importance of understanding AI beyond its technical aspects, emphasizing the need for educators to grasp how AI influences collaboration, innovation and learner engagement.
They discuss the eight key dimensions of AI literacies, highlighting the skills and mindsets necessary for effective AI integration in educational contexts. Dr. Gunder shares insights from her research, supported by UNESCO IITE and MIT Open Learning, and provides practical examples of how educators can incorporate AI literacies into their teaching practices. The conversation also explores ethical considerations and cultural sensitivities required to ensure AI applications are inclusive and beneficial for all learners.
Dr. Angela Gunder is the Chief Executive Officer and founder of Opened Culture, a pioneering collaboratory that empowers educators and institutions to advance access to education through open collaboration, academic innovation, and community engagement. Through leading-edge research, professional development, and strategic guidance, she and her team support educational communities in guiding and sustaining cultures of openness to instantiate a more inclusive future of learning.
Dr. Gunder has earned accolades from UNESCO IITE, MIT Open Learning, and UPCEA for her transformative initiatives in fostering cultures of openness in digital learning and artificial intelligence (AI). Her research focuses on open remix practices, opened culture, digital literacies, narrative digital learning practices, and emerging technology for language acquisition. She holds a B.S. in Computer Science and Fine Art from Fordham University, a M.Ed. in Education Technology from Arizona State University, and a Ph.D. in Teaching, Learning, and Sociocultural Studies from The University of Arizona.
Remember to follow us on social media. You can find us on X, Instagram, LinkedIn, or Facebook @D2L. Check out our YouTube channel for the video version of this podcast and so much more. Please take a moment to rate, review and share the podcast, and reach out with comments and ideas for future episodes.
For more content for educators, by educators, please visit the Teaching & Learning Studio where you can listen to more podcast episodes, register for free D2L Master Classes and read articles written by educational leaders. Sign up for our Teaching & Learning Studio newsletter and never miss a thing.
To learn more about how D2L is transforming the way the world learns, visit our website at D2L.com
Class dismissed.
Dr. Cristi Ford (00:00):
With AI being a buzzword at most faculty conferences and trainings, this episode couldn't have come at a better time. Today, we're diving into AI literacies. Yes, that's plural. Our guest will break down why understanding AI isn't just about tech usage and what we make, but also exploring how AI affects the ability to collaborate, to innovate, and to connect to the learners that we serve. We hope you'll join us.
Speaker 2 (00:27):
Welcome to Teach & Learn, a podcast for curious educators, brought to you by D2L.
Dr. Cristi Ford (00:32):
Each week, we'll meet some of the sharpest minds in the K-20 space. Sharpen your pencils, class is about to begin.
(00:39):
Listeners, welcome back to another episode. I'm Dr. Cristi Ford, your host here at the Teach & Learn podcast. In today's episode, we're honored to have with us Dr. Angela Gunder, founder and CEO of Opened Culture, a pioneering organization dedicated to expanding access to education through open collaboration, academic innovation, and community engagement. Angela probably is not new to many of you because, prior to founding Opened Culture, Angela served as Chief Academic Officer and Vice President of Learning for the Online Learning Consortium, or OLC. And Angela still finds time to serve as an online instructor for the University of Arizona, a D2L partner institution. Angela, I'm so glad to have you here. Listening to all the things you're doing, I'm just tired thinking about it.
Dr. Angela Gunder (01:28):
Well, thank you for having me on the podcast, Cristi. Long time fan girl, long time listener. But I'm really happy about today because this topic that we're going to discuss is birthday cake for me. And we could go on and on, but I'm really passionate about some of the things that are coming out from global conversations right now that I hope we'll get to hit today.
Dr. Cristi Ford (01:51):
Well, I think this conversation is such a needed conversation in a time like today. We talk about AI, also referred to as generative AI or LLMs, and we're finding that it's prevalent in every faculty development training or conference that you go to. And so I think today's conversation is going to be a real treat for our listeners. We're going to spend some time talking about AI literacies, colleagues. That's right, it's plural. And Angela and her colleagues have enlightened me that this taxonomy is not a binary, all-or-nothing approach. And so I just want to jump right in, Angela, and ask a little bit about what are the dimensions of AI literacies? What does it mean? And, really, what motivated the research that led to this taxonomy?
Dr. Angela Gunder (02:37):
Yeah, so to tackle this question, I'd love to situate this around what all of us have felt, and all of us have been feeling, around the emergence and ubiquity of AI in education. And we've had a series of volatile shifts in education in the past several years that have really left us with quite a few mixed, conflicting emotions, for not only how we survive as a field, but how do we move into that thriving phase? And AI has been no exception in this polarization.
(03:10):
And I love to liken this to the arrival of an elephant on everybody's doorstep. There was no warning, there was no training manual that came with this elephant, no gift receipt to return it. It was just like, boom, elephant. Benevolent benefactor gave you this elephant. Now, you've got to figure out what to do with it. So we've had to grapple with what this has meant for us. Is this elephant here for good or evil? Am I qualified to have an elephant in my home? Does anyone that I know know what the heck to do with an elephant?
(03:47):
So when we think about AI, and how AI is different than a lot of the digital technology, the ed tech that has shown up on our doorsteps, we're talking about how to bring it into the classroom, and what's the knowledge needed in order to have it there in ethical and impactful ways. And in order to do that, we really need a vocabulary to describe all of the different emotions, the feelings, the perspectives, and the approaches to understanding and tackling that elephant at our doorstep conundrum.
(04:18):
So this work around AI literacies was born of that need. While there are tons of frameworks that are out there, like checklists that keep on popping up, my colleagues and I knew that we needed to take a big step back and create something that would give us a better way to understand both the skill sets and the mindsets needed for AI usage across diverse roles and different sociocultural contexts. And that really launched us into this research. We've been really privileged to be doing it with the support of UNESCO IITE, with MIT Open Learning and their initiatives that are funded by the Hewlett Foundation. And through these global conversations that we've been having with educators on every continent except Antarctica, the dimensions of AI literacies took shape and were born.
Dr. Cristi Ford (05:06):
So I'd love for you to take a moment and educate our listeners, because you did such a great favor to me in dropping some knowledge, around why is there a holistic approach to this work? And how can we think about this in an educational context?
Dr. Angela Gunder (05:20):
So I love this question, but I also know it's one that vexes every single spell checker and AI tool that I work with; I have to reproof and retrain all of my systems to remember. Yes, it's literacies. Yes, I mean it as a plural. But the reason that we use this as a plural is based on how we see these literacies appearing across different experiences and environments. So just like what we know from research around digital literacies, and even around literacy versus literacies, going back to the '80s and even earlier, we know that there isn't an on and off switch, or a single state of understanding and knowledge. Rather, as the technology evolves and changes, and especially with AI as it's utilized in different contexts, and as communities make decisions on the meaning and the impact of usage, our practices change as well.
(06:14):
So we want to move away from that on and off light switch of illiteracy versus literacy, knowing that we all have a spectrum of literacies. And that those AI literacies are an interconnected set of skills needed to understand, to use, and critically evaluate AI within complex and changing environments.
Dr. Cristi Ford (06:34):
So I really love that because it gets away from that deficit model of you either have it or you don't. It also gives us an opportunity to think about the continuous improvement. There is always an opportunity for us to continue to evolve, and have more textured conversations around this work. So I'm really, really excited to delve in a little bit more deeply. So let's talk a little bit about the taxonomy. I understand it outlines eight key dimensions. And for our listeners, we're going to link to the taxonomy so that you can have an opportunity to really focus in on those eight key dimensions. But can we talk a little bit more about the research? And maybe you can provide maybe a little bit of an overview of these dimensions, and explain how they collectively contribute to this comprehensive view of where we are today?
Dr. Angela Gunder (07:21):
Yes. So I have to give a major shout-out to the work of a fellow scholar and friend, Doug Belshaw, who really inspired this work. This taxonomy began as an open remix of his work on the essential elements of digital literacies. And my colleagues and I had observed that the most successful approaches that we were seeing that educators were taking around AI, those approaches that we knew were really moving the needle, were similar to what we were seeing with these essential elements of digital literacies. That it wasn't too different. And we really wanted to resituate these digital literacies across eight dimensions that map directly to AI. So we're talking about four skill sets and four mindsets that make up these eight Cs.
(08:14):
And with the four skill sets, we're talking about cultural AI literacies, which relate to understanding the social norms and practices of AI usage, creative AI literacies, being able to generate ideas on AI usage, act on them, and then evaluate them. So all three of those steps within that cycle. Constructive AI literacies, knowing how to build and do things with AI-enabled environments. And communicative AI literacies, being able to convey meaning for a specific purpose with and through AI.
(08:48):
We also have the four mindsets. So these are confident AI literacies, which I think about as building your muscles, and having that self-assurance and self-reliance in your usage of different AI tools, and in environments that might be new to you, where you're exploring, and it takes a little bit of courage to push forward and figure out what you're doing. Cognitive AI literacies are really related to that confident piece. As you're moving along, understanding how these tools work, where the data is coming from, and knowing how to move through them successfully. Critical AI literacies, hopefully, we'll get to dig into this one a little bit more because this is the biggest one that came up in our research and the findings. This has to do with examining the power structures that are embedded within AI environments, and particularly who they privilege and who they disadvantage. And then, last but not least, very closely connected to the critical AI literacies are the civic AI literacies, so understanding how to employ our knowledge and skills for the benefit of society.
(09:52):
And as I ran through those super quickly, and they're rich and deep, it's probably, even at a surface level, easy to see that, while there are eight discrete dimensions, they're all closely linked to each other, and they feed off of each other, which is a finding that was affirmed in the research when we talked to people about their practices.
(10:12):
The last thing I'll say here is that you've noticed that I refer to this as a taxonomy. My colleagues refer to this as a taxonomy. It's meant to be used with frameworks, not instead of a framework. This is a vocabulary for people to start unpacking situated practices, and really understand the culture of AI usage within different socioculturally-driven communities of practice. Whether it's a formal or an informal learning environment, these are words that you can use to start to find linkages and connections, and even surface challenges and new opportunities.
Dr. Cristi Ford (10:50):
I love that inclusive approach. And you keep me honest around the use of taxonomies, and thinking about the connection between other frameworks of learning. And how we can situate this new nomenclature, and really start to expand the use of this work. I guess I'd love to hear, as educators are listening, and probably are excited as I was when you started to unpack these key dimensions. And one of the things I know that you are very proud of, and it's been nice to see, is that it's not just been about the taxonomy, it's also been about looking at that taxonomy in action. So maybe if you can maybe talk with us a little bit about how educators can integrate these dimensions to their teaching and learning. What are the opportunities for leaders to think about the ways in which these key dimensions impact the work that they do? And then, of course, ultimately, how does this impact learners and their connection to this new world and workforce that is being informed by AI?
Dr. Angela Gunder (11:48):
Yeah, so I think a lot of folks are like me, where we want the theoretical and empirical evidence, but at the end of the day, we really want to get to the praxis. We want to know how all of this is applied. And when I think about the AI literacies in application, I think about it as a push and a pull. So it's related to the literacies that we need in order to use AI in ethical and meaningful ways, as well as the literacies that we gain by using AI in intentional ways. So I want to talk about two case examples of this, of the push and the pull.
(12:24):
And I'm going to give a shout-out to my IDs first because, once an ID, always an ID. And when we think about building learning content that is active, that's experiential, that's moving our students into real world experiences, we have to take into account how we can scaffold skill building. And thinking through how we create assignments that build in critical literacies, particularly to be able to verify information, as well as the constructive literacies of knowing how to just build and make things. And on top of that, that layer of those cultural literacies that help learners understand how to connect their learning to their world, and to the broader world. These literacies feed each other together, but are necessary in order for students to successfully complete assignments where AI is there and available. So as an ID, we need to think about how we can be intentional about incorporating context and instructions for why these individual literacies have to be employed within the work that students are doing. That's the push.
(13:34):
The pull, and let's talk about learners in this instance. And this is really related to what literacies are developed by using these AI tools, not the ones that we need to bring to the table, but the ones that we need to take away as we're gaining them, using AI. I think a lot about our communicative AI literacies. So using a generative AI tool, and the prompt engineering that's needed to produce successful outputs, is really a dialogic process, even though you can replicate certain prompts. It's also a personal process. I found out for myself, actually, some people were watching me type into ChatGPT. And they were like, "Wow, you're so kind to your AI tools." And I was like, "Was it because I'm afraid of AI overlords taking over in the future?" And then I was like, "No, I actually talked to AI like it's a learner, almost like it's one of my students." Because it is a learner. It's learning from me, it's learning my preferences, my expectations, my hopes for the experience.
(14:35):
So while there is one-shot prompting that does exist, our learners, as they're engaging more and more with these tools, they learn about how to effectively communicate their wants and their needs. They also learn some amazing things, especially for... Shout-out to my humanities educators, folks that are teaching writing across the curriculum. They have the opportunity to learn about personas, about audience, about genre, about tone. And all of those are communicative literacies that sit within that AI experience. So thinking about what our learners can and should be gaining, it's those communicative AI literacies that are developed through usage, through trial and error, not getting things right the first time. And one of the best things that we can do for our learners is to let them in on this secret. That when they complete this assignment, they're going to be leaving with these new durable skills that are going to help them with AI in the future, definitely in the workplace, but probably help them in terms of their interactions with humankind as well.
Dr. Cristi Ford (15:39):
So, so good. I appreciate you focusing on the educator impact and the leadership angle. But really to your point, we're doing all this to really be able to work with learners, to support learners, empower learners to be able to learn this language, learn this taxonomy. And to be able, to your point, to take these durable skills beyond a Humanities 101 course into the future of work and their lives. And so, I think I want to shift us a little bit, and anchor down and talk a little bit about a couple of key dimensions. Let's start with cultural AI literacies. As I was listening to you give the explanation to our listeners, I think I understand the emphasis around cultural AI literacies is understanding social norms and practices that shape AI tools. And so, if that is the case, how can educators ensure, in this realm, we're talking about bias, we're talking about the importance of being culturally sensitive and inclusive, how can we really ensure that these AI applications in the classroom are doing just that?
Dr. Angela Gunder (16:41):
This was a fascinating one because it brought up a lot of facets of the research that I do in general around open education, around digital learning, and really around care for students. And this was also a finding that, unsurprisingly, came up, largely, from the respondents that are living in the Global South, especially those that are working within the realm of open education. So though there have been significant shifts in how culturally affirming AI outputs can be, we are still not where we need to be in terms of representation. And part of it is the mechanics of AI. AI is going to take the largest body of knowledge, find generalizable knowledge across that data set, which means it's going to have a lot of stuff on the cutting room floor that it's not going to acknowledge unless we train it to have better plurality of perspectives, of lived experiences in the outputs.
(17:43):
So I want to go back to that push and that pull that we talked about. Cultural AI literacies also start with teaching critical literacies, which our librarians would say are just good information literacies from back in the day. It's understanding who benefits, and who's disadvantaged by different outputs. It also means us putting out more content and training data to help AI tools gain a plurality of perspectives. I remember when I first started using AI tools, and trying to push the edges, and explore the growth edges for the tools, I would ask for research on different topics. I would even give it my syllabus, and say, "What are some good articles that I could bring in from scholars?" And it took a while for me to get a plurality of scholars that were different races, genders, located in different parts of the world, had different conceptual frameworks for how they were presenting these ideas.
(18:42):
And that takes a certain level of knowledge to even know that that first output that comes out may not be supportive of the cultures that are in your classroom. And I say cultures because your students are not a monolith, that they're all coming with different lived experiences and perspectives. So this is a process that can and should be socially constructed with your learners, and not just for learners. You have to know who they are in order to know how to represent them. And one of the best ways that you can do that is actually engage them in a lot of these practices of co-building, co-constructing. Not only learning content, but how we think about the content within the class, and how it relates to the world around us.
Dr. Cristi Ford (19:31):
Now, you're singing off my sheet of music here. I love to talk about co-construction. And so, hearing that call out there, I think is really critical as educators are really grappling with themselves. You and I often talk about this topic. And we talk about the fact that if someone comes to you and says that they are an AI expert, you shouldn't believe them, right?
Dr. Angela Gunder (19:51):
Be wary.
Dr. Cristi Ford (19:51):
This is something that we're learning. We're learning and we're co-creating together. And so this focus that you've talked about, I think, is really critical. I want to cut across the field a little bit, and move on to another AI literacy and talk about cognitive AI literacies. And as I understood from the earlier example, you talked about cognitive AI literacies really involve developing the skills to navigate the AI environments effectively. And you and I have both seen this with learners, with faculty members. But what are some of the best practices that you've found in the research, or in talking with colleagues, in fostering these skills among students, particularly in relation to prompt engineering? You talked a little bit about that earlier, and the iterative interactions that AI models require.
Dr. Angela Gunder (20:39):
Yeah, so this one's fascinating and, actually, it relates to what we were just talking about too, about nobody's an expert right now. It's okay to have the freedom to fail, to explore, to play. I think about some of the early conversations that I had with George Siemens, who was saying, "Hey, if you're an educator, you should be trying out a new tool every day. Try out some of the tools that you've used before again, every day, because they're changing quite a bit."
(21:09):
But, also, just that notion of we're all learning is really important to make known to our students. So my favorite examples of folks really employing and developing these cognitive AI literacies have to do with faculty that are centering authenticity and transparency. That we are all trying to figure this out. That we want to make this information available to people, as we're exploring and as we're learning. Sometimes, for me, it's as easy as, if I'm on a Zoom call with my colleagues from all over the world, we'll pull up a screen share, and actually show in real time what the prompts are, and what the outputs are that we're receiving, and we talk about them.
(21:56):
We also had some really interesting discussions around the notion of, what does collaboration look like in the world of AI? My colleague, Punya Mishra, was talking about how, in some ways, AI is making us less open because we're working within systems where we can't have AI, as an educator, with a seat at the table. We have to leave the table to talk to AI, and then come back with those outputs. So figuring out ways to work around the system, like sharing things in Zoom, like writing in-flight reflections about work that you're doing, doing showcases with your fellow faculty. Those are really important practices right now, as our knowledge is changing, for us to develop those cognitive AI literacies.
(22:50):
But my favorite examples, and this is what I was building up to, and I'm going to pick on Liza Long, who is educating, not only learners on writing and writing across a lot of different genres and disciplines, but she has a lot of learners who are international students, for whom English is not their first language. And she sees her role as an educator as being a conduit and a gateway for this idea of none of us are experts. We're all trying to figure this out together.
(23:22):
So she actually will pull up, before students go off and do an assignment, before there's any message in the syllabus that says, "You cannot use AI for anything but this, or at all." She's sitting with them in a room and saying, "I'm going to pull up this tool. We're going to put some prompts in. And then we're going to have a discussion about what comes out of this." And if I fail in front of them, if I can't get something to work, that's good. It's good for them to see and to be reassured that we have to have this level of confidence, in order to develop those cognitive AI literacies.
Dr. Cristi Ford (23:53):
Man, I can't stop shaking my head on this one because this is a lived experience that I've had with you around this work. And so I'm going to actually ask a little bit of a follow-up here because I've seen you do this in practice around research. Also, as I think about the days when OpenAI allowed you to share your prompts and your output, just maybe talk a little bit more about, in practice and working with colleagues and other scholars, how this has also been impactful in this way?
Dr. Angela Gunder (24:22):
Yeah, I'm really big on reflective practices. I'm big on autoethnographies. I think the one gripe that folks hear from me is about the research-to-practice cycle. And shout-outs to my colleagues over at EDUCAUSE, Jenay Robert and Kristen Gay; we do a lot of work around how the knowledge needs to get to the people a lot quicker. And the knowledge can be in flight. There is no perfect, there is no complete. We need to be contributing to the potluck with whatever's in the fridge. We don't have time to go to Whole Foods and get a bunch of fancy stuff. We've got to give what we have, and that's going to really help us to get to where we need to be.
(25:00):
So one of the things that I really lean on, and it helps to have a project that you're working on with other colleagues, but I like to create a bit of a community of practice, almost like a research learning community, a faculty learning community, if you will, that is documenting and sharing practices. And there's a lot of ways that AI can help with this too. So, again, it could be things that you're doing to share your chat transcripts from one of the tools. It could be, as I said before, turning on Zoom, and having it show other people what you're doing in real time, and taking some time to talk about it.
(25:46):
Another thing, and I have mixed feelings about AI transcript tools because, sometimes, it gets the point and the summary, other times it doesn't. We found this big time in our research when we were hand coding our qualitative research, and then we were having an AI tool go through and pull out the emergent findings. The tools that we were using at the time were removing all criticality. Anything that anybody said that was negative about AI usage, it was taking that out, when that was some of the bigger findings that we had, including what is AI doing to the environment? What's the energy consumption like? It was stripping all of that stuff out and saying, "Everybody thinks AI is great for all of these different reasons."
(26:33):
So doing those tests in real time and live is really helpful, including seeing those summaries for what AI is picking up on, and then where's the nuance that's lost, where we need to bring our humanity into the space. I think we're going to see a great and grand evolution of a lot of the tools that we're using for research and for academic purposes. And it's a project that my collaboratory, Opened Culture, is actually going to be launching pretty soon. We're going to have a series of bloggers from around the world share their in-flight practices. We're going to do some coding and, hopefully, share the emergent findings from this series of test case blogs next year at the UPCEA DT&L conference in February of 2026. So really excited about that.
Dr. Cristi Ford (27:30):
So I was expecting you to say all of those, and one more that I just have to plug.
Dr. Angela Gunder (27:35):
Oh, what did I miss?
Dr. Cristi Ford (27:37):
What I really appreciated doing with you, though, is that metacognitive process. So in working on an academic project, or working on a research project, being able to then go back and chronicle the ways in which AI was used in that project. And being able to share and create community of practice around that work, that has been so critical for my own learning. And, really, as I'm trying to work with other individuals, helping them to think about that process as well. So shout-out to you for us being able to do that work together.
Dr. Angela Gunder (28:06):
Well, and I want to tie a bow around that too. The real reason that that came up was because my colleagues and I were starting to see people include AI usage statements. And I think when they first started with scholarly publications, they were very much related to, "We want to police this. We want to make sure that we're shining a light on anybody that is using AI." And it was very punitive and stigmatizing. But then we saw a little bit of a shift, where a lot of journals were saying, "Hey, we know you're going to be using AI, and we also want to learn more about what AI can do for us. Because this is a digital tool, like the many that we use. Can we be transparent, open, about how we're using these things?"
(28:46):
So if you haven't done so already, definitely take some time to explore some of the AI usage statements that are out there. Certainly, in the show notes for this podcast today, we're going to have links to our scholarly research. Please borrow, steal, and remix any of the AI usage statements that we're using. But that's that piece that we're talking about. That's that metacognitive reflection on, what do these tools mean to us in this very specific moment in time, as they're moving and evolving, and as our perspectives are moving and evolving as well?
Dr. Cristi Ford (29:20):
So I know we're getting close on time, but I want to draw in a little bit, because we talk a lot, and we hear educators and institutions talk a lot, about ethical considerations, and bias, and implication around this. And so, if we can maybe just quickly talk about critical AI literacies, and really the focus on ethical and responsible use of AI, before we start to wrap up.
Dr. Angela Gunder (29:41):
Yep, happy to do so. This was, and I mentioned this before, but this was, hands down, the most highly cited of the dimensions. And it really makes sense. It was part of like a sandwich, where constructive AI literacies were first, then critical, and then civic. And it makes total sense because folks are trying to figure out how these tools work. Then they're trying to figure out if they're helpful or harmful. And then, last but not least, for the most part, people are trying to figure out how these tools might benefit society. So the critical to civic piece came up big time with a lot of different areas. As I mentioned before, the environmental costs of using AI, that was a big area, cultural representation, plurality of perspectives, that came up with the critical. With the folks that we were talking about within the realm of open education, they were very concerned about attribution, about licensing, about intellectual freedom. Was AI making us more or less open in many ways?
(30:42):
And when I think about the big takeaway from the critical AI literacies, it's that this is a really good place to bring our students into the conversation around how we are better, and how we need to better collaborate and inform our tech providers of what our expectations of these tools must be. And putting our learners in this place is also preparing them for future industry. There are many, many jobs that need to be filled by folks that are thinking very carefully and critically around these tools. And not just from the lens of, "How do we make a lot of money?" Or even, "How do we replace certain roles and jobs to create more efficiency?" I always think about it as, how do we have AI do the things that it can do well so that we can get to the deeper thinking, the things that we uniquely provide as humans that we arguably don't get the time and the luxury to get to within our jobs?
(31:38):
And there's really great work right now coming out of UNESCO, and other groups, who are opening up that global dialogue around this idea of having us better inform the future of work, the future of learning, and the future of society. Also, have to give a shout-out to all of the community-based work that's happening. I think about Maha Bali and the work that she's been leading in Africa for the entire global community. And, also, the work that my buddy, Doug Belshaw, and I are doing through his co-op, We Are Open. And then, through my collaboratory, Opened Culture, we're really trying to drive the conversation around our shared values, our shared perspectives, and our shared hopes for what this will be in the future for AI and education.
Dr. Cristi Ford (32:22):
That's fantastic. As we're wrapping up here, I guess I want to just take a look ahead, and ask you maybe a two-part question. One, how do you envision the evolution of these AI literacies? But then, most importantly, for folks who are listening today, how do they get involved with the work? And how can they have greater exposure to what we've been talking about today?
Dr. Angela Gunder (32:43):
Yeah, so what's my Nostradamus future cast for this? I think we're going to see the continuation of a shift in the literacies employed, which is great. When I first started talking about the AI literacies findings, and it was with the ID community of EDUCAUSE and their conclave, I was pretty sure that I was going to see the creative literacy showing up. That was what I expected that all the IDs were doing. And it was just what I said before. It was the constructive, the critical AI literacies that are showing up the most right now. I think we're going to tip. I think we're going to see more of the communicative AI literacies developed and used. And I would even say we're using them now, but we're going to see them more called out in the future. I think we're going to see better examples of the cultural AI literacies changing and shifting.
(33:38):
But, most importantly, I think we're going to see more of the creative AI literacies employed, as the tools continue to change, as folks are observing how we're using the tools, and most importantly, as we're sharing practice with authenticity, with transparency, we're going to see a lot of really unique uses of these tools that give me a lot of hope. And I'm super excited for what will come from this. I'm not on the bandwagon of, "These tools are going to replace IDs." I'm thinking about, "Oh my gosh, I'm going to have the work that usually takes 16 weeks, as I'm developing a course, that a faculty member is going to teach, or that I'm going to teach at the University of Arizona, whittled down to an hour session, where we hammer out our entire course map due to some of these AI tools being able to take a syllabus and output all sorts of amazing things." So that I can get to using AI tools to build interactive engagements.
(34:39):
I think about H5P, and I think about the AI being developed within these tools. We use Brightspace at the University of Arizona. I think about Lumi being there so that I can do that fun stuff that I never got to do before. So I think we're going to see that shift, and that makes me really excited for the future, which might be a divergence of opinion for some folks who are nervous about AI. It doesn't mean that I don't still have critical concerns about it; it just means that I'm looking towards the horizon for the opportunities that are there.
(35:15):
And then connecting to what you all can do, I'm a more the merrier type of person. I'm also a question asker. I'm curious. I always want to learn. So part of what we're talking about right now with the AI literacies work is bringing people together around this concept, having them dip their toes into this concept. And then think about how they have lived examples that they could share so that we can really move the envelope on, or move the needle, putting metaphors together there. We can really move the needle on how we're seeing these literacies show up, and how we can be more intentional about embedding the development of the literacies.
(35:59):
So on the 28th of March, and for folks that are watching this way in the future, it'll be recorded, we're going to do an event for National AI Literacy Day. Doug Belshaw and I are going to be co-hosting a series of folks talking about AI literacies, and how they're seeing them in their different contexts. And then we're going to launch an open course. Opened Culture is building an open course on Brightspace about AI literacy. So folks will have the ability to get a primer, practice, and learn in a really fun and inclusive environment that is modeling the practices that we want to see in our own classrooms. There are also many links to the research and the publications on the Opened Culture site if you want to take a deep dive into these conversations that we had with all of the folks from all over the world.
(36:52):
And then, last but not least, it's really about sharing. So you have the ability to submit a case example on the Opened Culture website, and be a part of this continued research. And it could be as large or as small as you want it to be in terms of the example. It can be anonymous, or you can get credit. And if you're doing some awesome stuff, we would love to give you a shout-out there too. So if you have resources that you're developing, practices that you've learned, let us know, and we'll be there to amplify and be your biggest cheerleader for the work that you're doing to advance AI literacies.
Dr. Cristi Ford (37:28):
So good, so good, Angela. I just have to offer my immense gratitude to you for being here today, for the work that you're doing. I just find, as we look forward and where the future of education is going, it's these kinds of initiatives, these kinds of conversations, that are really going to catapult us to where we need to be. So thank you, thank you, thank you for joining us on the podcast today.
Dr. Angela Gunder (37:49):
Thanks for the time, this space, and all the love to y'all.
Dr. Cristi Ford (37:53):
Thank you to our dedicated listeners and curious educators everywhere. Remember to follow us on social media. You can find us on X, Instagram, LinkedIn, or Facebook, at D2L. And subscribe to the D2L YouTube channel. You can also sign up for the Teaching & Learning email list for the latest updates on episodes like this one, and articles, and master classes. And if you like what you heard, remember to rate, review, and share this episode. And remember to subscribe so you never miss a new episode. Thank you.
(38:23):
You've been listening to Teach & Learn, a podcast for curious educators, brought to you by D2L.
Speaker 2 (38:28):
To learn more about our K-20 and corporate solutions, visit d2l.com. Visit the Teaching and Learning Studio for more material for educators by educators, including master classes, articles, and interviews.
Dr. Cristi Ford (38:42):
And remember to hit that subscribe button. And please take a moment to rate, review, and share the podcast. Thanks for joining us. Until next time, school's out.