
D2L's Teach & Learn
Teach & Learn is a podcast for curious educators. Hosted by Dr. Cristi Ford and Dr. Emma Zone, each episode features candid conversations with some of the sharpest minds in the K-20 education space. We discuss trending educational topics and teaching strategies, and delve into the issues plaguing our schools and higher education institutions today.
How AI and Metacognition Can Improve Student Outcomes, with Dr. Sean McMinn
In this episode of Teach & Learn, Dr. Cristi Ford sits down with Dr. Sean McMinn, Director of the Center for Education Innovation at The Hong Kong University of Science and Technology. Dr. McMinn shares his insights on the importance of metacognitive awareness in an AI-driven society and provides practical strategies for educators to teach these concepts effectively. Listeners will learn about the role of AI as a copilot in the learning process and discover innovative approaches to integrating AI in education.
They discuss:
☑️ Why metacognitive awareness is crucial in an AI-driven educational environment.
☑️ How Dr. McMinn teaches metacognition to help students critically understand AI.
☑️ When AI should be used as a copilot rather than a decision-maker in the learning process.
☑️ Practical strategies educators can use to integrate AI responsibly in their teaching.
Dr. Sean McMinn is the Director of the Center for Education Innovation at The Hong Kong University of Science and Technology (HKUST). His role includes leading the development of digital education, including AI for education initiatives, at the university. Sean also currently sits on multiple international committees, including the AI and Education International Panel, Digital Education Council, and the Cyber-Physical Learning Alliance. He has won various awards for his work in digital education, including the 2024 Global MOOC and Online Education Alliance (GMA) Award for his work in AI Education, the 2016 School of Humanities and Social Science (SHSS) Teaching Excellence Award and the 2007 Teaching Innovation Award for his work with podcasts and education at HKUST.
Remember to follow us on social media. You can find us on X, Instagram, LinkedIn, or Facebook @D2L. Check out our YouTube channel for the video version of this podcast and so much more. Please take a moment to rate, review and share the podcast, and reach out with comments and ideas for future episodes.
For more content for educators, by educators, please visit the Teaching & Learning Studio where you can listen to more podcast episodes, register for free D2L Master Classes and read articles written by educational leaders. Sign up for our Teaching & Learning Studio newsletter and never miss a thing.
To learn more about how D2L is transforming the way the world learns, visit our website at D2L.com.
Class dismissed.
Dr. Cristi Ford (00:00):
With AI and metacognition at the forefront of educational innovation, it's crucial for educators to help students critically understand AI's utility and limitations. Join us as my guest shares insightful strategies for effectively teaching these concepts in an AI-driven environment.
Speaker 2 (00:24):
Welcome to Teach and Learn, a podcast for curious educators, brought to you by D2L.
Speaker 3 (00:28):
Each week we'll meet some of the sharpest minds in the K-20 space. Sharpen your pencils, class is about to begin.
Dr. Cristi Ford (00:36):
Listeners, welcome back to another episode of Teach and Learn. I'm Dr. Cristi Ford, and today we're diving into the fascinating intersection of AI, metacognition, and the future of learning and decision-making. I'm excited to welcome Dr. Sean McMinn to this podcast today. Sean is the Director of the Center for Education Innovation at The Hong Kong University of Science and Technology. Sean has over 20 years of experience in higher education. He's taught numerous courses, from digital literacy to social complex systems, and has a profound interest in educational technology and networked learning. He has also won Teaching Innovation awards, and last fall I had the pleasure of being hosted by him on his campus. Sean, I'm so happy to have you here today.
Dr. Sean McMinn (01:26):
Thank you. Pleasure to be here.
Dr. Cristi Ford (01:28):
So for my listeners, I just want to frame this conversation a bit, Sean, and explain why I reached out to you. For those who are joining us today: you recently published an article entitled Navigating AI Literacy in the Classroom: A Case Study with Generative AI's Data Analysis Tool. As I saw your post on LinkedIn and started to read it, I was genuinely excited: one, that you'd gotten something published so quickly around this work, and I want to make sure we talk about that at some point in this episode; but two, the focus and approach on metacognition really spoke to me. So I want to jump right in and talk a little bit about the article. You and your co-author note that students are both using AI more and worrying about over-dependence. What do you think is at the heart of this tension as you work with students on campus, and how do you think educators can respond to it?
Dr. Sean McMinn (02:37):
Yeah, so partly this is a result of the survey that we mentioned in the article. I co-teach a course with the co-author you're talking about, and we were concerned that students are offloading a lot of their thinking to generative AI. Through other surveys internally within the university, we also see that students are starting to develop an overdependence on the tool to help explain concepts or to search for information. And we quickly realized that, A, as the survey shows, students don't really fully understand artificial intelligence or how it works. Or if they do, they're still misusing the tool. It's not a tool that should be used for searching for information.
(03:29):
And that got us thinking a little bit more: if they're doing this, how are they using it to make decisions when they're working on assignments? How are they using it to make decisions on what to learn outside of the classroom to help them understand key concepts? And if they're using it to help them make decisions, are they thinking about that? And if not, why? I've had conversations with other leading experts worldwide, like Professor Rose Luckin, she's excellent, and we've had this discussion before. Metacognitive awareness is going to be increasingly important in the age of AI for this very reason: because AI can't do that. Rose would say that artificial intelligence is missing key meta-intelligences that humans have: meta-emotion, metacognition, meta-context, and meta-knowledge as well.
(04:31):
It doesn't know what it doesn't know. It's not aware of the context in which you're using it. It's not aware of how it's thinking, and it's not aware of how it's feeling, though it might synthesize feelings. But we find that students aren't thinking about that. And because they're not thinking about this, they're using the content that it generates verbatim; they're just using it to help with assignments. And that could be a dangerous thing, because it could mean that they are not making the right decisions. So really, this is what got us thinking a little bit more about why we need to teach more on the social science side of artificial intelligence, not just on the technical, computer science side. What is the impact on our thinking and our decision-making? How do we work with each other when AI becomes part of the workflow in problem solving?
Dr. Cristi Ford (05:27):
I love that you have identified metacognition as what I call a durable skill or an enduring skill that you're going to need in an AI-driven society. But for educators listening, given what you shared about the ways you have observed students using AI, how do we start to teach this? What does it look like to teach metacognition practically, in day-to-day ways and across different subjects and areas of emphasis?
Dr. Sean McMinn (05:58):
Yeah, I mean, when we start using terms like metacognition and metacognitive awareness and that sort of thing, it sounds bigger than it really is in some ways, doesn't it?
Dr. Cristi Ford (06:08):
Yeah.
Dr. Sean McMinn (06:09):
It's just helping teachers think about, well, how do we help students regulate their strategies when they're learning something? Well, we get them to think about their thinking. Have they thought of a goal? What is that goal? Then what strategies are they going to take to achieve that goal? Say they want to learn A. So they're going to take a strategy: I'm going to read various articles. But then, as they're reading an article, have them think out loud in a sense: is this the right article to get me to my goal? How is it helping me reach my goal? And thinking that way should help them focus on the key points that serve that goal, as opposed to doing a quick search with keywords, finding an article, reading it and then saying, "Well, the source is .edu, therefore it must be credible. I'll put that in."
(07:10):
So just having teachers put into their lessons or into their learning exercises steps where the students are forced to think about what they're doing. If they're evaluating, the students should ask themselves: how am I evaluating? What tools am I using to evaluate this? Are these the right tools, and are they helping me achieve the goal that I want to achieve? So that's a very simple way of looking at it. And in a sense, with AI, it's an important process. I'll quickly digress, just a little digression, but there's a recent paper out of Microsoft Research where they were talking about metacognitive offloading when using these tools, and how companies or designers, when they're designing an interface, should be embedding these sorts of questions into the design process of a chatbot.
(08:09):
So for example, if I'm going to write an email to my boss and I go on to ChatGPT and say, "Help me write an email to say this," then the chatbot should start to question me. It should say, "Well, who is your boss? What tone does your boss tend to like? What's the context?" That ensures you're going to get a more targeted, nuanced email that's appropriate to the context that you're writing for, as opposed to a generic email. Now, that's a very simple example, but this is something that you could apply in the classroom.
Dr. Cristi Ford (08:48):
So when you talked in the article about the ways students are using AI, from research to resume building, maybe we can unpack this idea that you introduce about AI acting as a copilot versus a decision-maker. Can we talk about that distinction and how you see it playing out in faculty meetings as well as in classrooms?
Dr. Sean McMinn (09:14):
Yeah. Well, I think one thing that Microsoft got right was the name they gave their product: Copilot. I think that's the right way to frame it, because if we look at AI as the decision-maker, well, A, you're eliminating human agency, which is important. You need to maintain human agency. But not just for the sake of saying we need humans in the loop. We need to ask ourselves, why do we need humans in the loop? Well, because the content or the decision-making might be very contextual to your needs, and the AI may not have full disclosure of or a full understanding of that context.
Dr. Cristi Ford (10:03):
It doesn't know that it doesn't know it.
Dr. Sean McMinn (10:04):
It doesn't know it, exactly. But there are also other things. It's not just the context; there's empathy. It doesn't know how the other stakeholders in a problem that you're working on might feel. For resume building, it doesn't know how good you are at collaborating with other people. It can use some words that synthesize how you might be, but it will be doing this based on words chosen as the most probable answers, which might not actually match who you are and what you're capable of. And then when you get to the interview stage and you're not able to actually demonstrate it, you're in trouble.
(10:48):
So it could help you, it could help you design, but you need to be making the full decisions, whether it's resume building, deciding how you're going to design a course, or, for teachers who are using AI to write course materials or assessments, what goes into those. Again, it might come up with some interesting assessment ideas, but they won't be contextual and appropriate to your students' needs. You as a teacher are the expert: you know who your students are, your context, your school's context, the purpose of your course. So you need to be making the final decision, but it's a great tool to help you get there.
Dr. Cristi Ford (11:38):
That's great. That's great. And I guess I want to follow up in terms of the approach in faculty meetings. When you're talking to your colleagues on campus, or you're having these kinds of dialogue and discourse, how are you sharing with other faculty members the ways in which you're doing this? You gave a great example, when we chatted in preparation for this podcast, about an origami task. Maybe you can share that experience here, and then tell us how you use it to elevate and socialize this work with other faculty on campus.
Dr. Sean McMinn (12:20):
Yeah. Okay, so the origami example is a really fun one. It's not scientific by any means, but we've run it about three times over the last three semesters, and it works quite well. The main purpose of this is to help students become metacognitively aware of the resources that they choose in order to get a job done. And it is also designed to help them understand the limitations of these tools, particularly, in this case, text-based chatbots. So what we do is we assign them into pairs and we tell one person: you're the observer. Take notes on your partner; observe how they solve a problem. And in this case, the problem is they have to learn how to do a frog origami, and the only resource that they can use to learn how to do the frog origami, sorry, that's a tongue twister for me tonight, is a YouTube video. So they must use YouTube to teach them.
(13:27):
So they have one person making it, and the other person observing their strategies: how do they do this? Then after they've done that task, we give them a second task, and the second task is to make a crane origami. It requires relatively the same kind of skills, although I think the crane might be a little bit more difficult than the frog, but this time we tell them that they cannot use YouTube; they can only use ChatGPT or a text-based chatbot to teach them how to create it. And what happens is, unsurprisingly, most of the students fail. They can't do it. The frog origami had almost a 100% success rate. For the crane origami, very little.
(14:16):
And again, they're being observed by their partner, and then they discuss afterwards: what happened? What were their strategies? How did the teaching tool impact their strategies? Were they thinking about things? And we discovered that those who were working with ChatGPT on the crane origami were forced to think about their strategy a lot more. They were forced to think a little bit about, well, what prompts? Am I using the right prompts? Why is this prompt not working? So it's forcing them to think about their strategy, but they learned that perhaps this isn't the right tool for this problem. So that's also another lesson. One thing that came out, we just recently did it two weeks ago, and I didn't mention this to you, is one student was able to do the crane origami very quickly, and I asked him, I said, "Did you know how to do it before?" He said, "Yes, I did, but I needed a reminder."
Dr. Cristi Ford (15:12):
Got you, yeah.
Dr. Sean McMinn (15:13):
"And ChatGPT just helped me remember how." And I said, "Well, that's very interesting." Because this is what studies are starting to show those who know how to use AI well, and those who have expert knowledge far outperform those who have novice knowledge, but can use the AI tools. So is, I mean, it's not a scientific example, but it kind of replicates that idea. And so it's a fun activity. It's a fun activity because A, it forces them to think a little bit about themselves as well, not just about AI, but it gets them to think a little bit about, well, how do I approach learning how to solve a problem and what is the importance of thinking about my strategies and reflecting on it, and then readjusting my strategies as I fail and then try something new and realign and fail again and try something new.
Dr. Cristi Ford (16:07):
Yeah, I love that. I love that. Maybe we can also connect this back to the article. One of the things you and your co-author talk about is this idea of fused, directed, and abdicated co-creation with AI. As you talk about that recent example, can we connect it to these terminologies? Or can you at least take some time and unpack for listeners the reference point you've created for each of those pieces?
Dr. Sean McMinn (16:37):
Well, in the article we talk a little bit about three different ways of looking at collaboration with AI. There's the cyborg kind of perspective, that's what the literature has called it, although in our course we call it the chimera approach, because with a chimera you can't really tell where the different parts of the different animals begin and end; they just come out when needed. Versus the centaur approach, where there's a distinct difference between the human and the AI. Or then there's complete dependence on the machine, where you're offloading all the decision-making, which is quite dangerous. So we're recommending taking a chimera approach. And what we're recommending when they're working this way is we're asking them, first, to understand your environment. What is there that can help you? Recognize the tools you have. What are the constraints? How do I process the information that's coming to me?
(17:44):
Then I plan: I set out my plans and I start to think clearly about how I achieve them, step by step. And actually, as reasoning models come into play, they can help a lot in this way. Then I assess whether or not each step along the way to my goal requires AI assistance. Because sometimes you might think, "No, I don't need it right now. I need to think about this." And in fact, in the experiment we're running right now, I noticed that some students are doing this, but they may not be thinking about it. What they're doing is, for the lower-order thinking skills, such as understanding and explaining, they're just relying on the tool, using it to help explain concepts and help them understand the task. But when it came to the second part of the task we're asking them to do, which was to evaluate and justify a solution, they actually moved away from the tool; they started thinking for themselves, relied less on the machine and started to write.
(18:53):
Not all, though; some did start to use the machine to help them justify. But the point here is, as you're goal setting, you need to determine which points are important for AI assistance and at which points you need human intervention and human insight. Then monitor everything you're doing; question everything along the way. Is this the recommended idea that I'm looking for? Is this logical, or is there any bias in this result? Or is there any bias in the question that I asked? And this is something that JC, my co-worker and the co-author of this, and I teach in the course: you need to recognize your own biases just as much as the biases that are in the tool, because sometimes the tool and its output might trigger or reinforce confirmation bias.
Dr. Cristi Ford (19:54):
That's right. That's right.
Dr. Sean McMinn (19:55):
And I think more often than not it will do that. So as you work on this, you guard against these kinds of issues so you're not being persuaded by the AI tool, you evaluate the situation, and you cross-reference everything that you're doing against other sources of information. And this is another worry we have: as you become reliant upon the tool, that tool starts to dominate your choice of resources as opposed to other resources, whether that's a colleague or a database of some sort or some other article. So are you cross-referencing it? Are you weighing the output of the AI too heavily? And then, through that, you adapt, you learn from each experience, and you reiterate as you go along. So in a sense, what you're doing is you're planning, monitoring, evaluating, and then you're adapting.
Dr. Cristi Ford (20:56):
Yeah, that's really, really helpful to hear. As you were talking about the origami example, if I remember correctly, you did that with first-year students at the university, is that right?
Dr. Sean McMinn (21:08):
Yes, yes.
Dr. Cristi Ford (21:09):
Okay. So when I think about this metacognitive AI awareness, and to your point, it is a sexy name for something that is sometimes really simple to do, I think about everything from biology to accounting. Thinking about the way you designed that small origami exercise, what advice, parameters or suggestions do you have around how educators in any classroom setting can start to build some of this metacognitive AI awareness into their own content area? How do they get started, what do they need to be thinking about, and what resources should they be using?
Dr. Sean McMinn (21:54):
Yeah, I mean, I never want to add workload for teachers, because they already work really hard. But if you're working on an assignment and students are going to use AI, because it can be hard to regulate, or you allow them to use AI, have a little subsection of your assessment where they're required to reflect on how AI helped them, and you can easily design a template that forces them to think about these questions. Now, can AI mimic those reflection questions? Of course it can. So how do you get around that? You ask them to take screenshots of their interactions with artificial intelligence, and then they annotate the screenshots, or maybe they present the screenshots and explain, "Here's what I was doing." They could do a video essay: "This is what I was doing. I used AI to help me with this problem. It didn't help, because... Now, upon reflecting on that, I realized that AI wasn't the best for that. I needed to look at this other source."
(22:57):
So what you're doing is you've embedded a reflection process with evidence, and it will be authentic if it's a video essay, a presentation or an annotated screenshot. So you know what they're thinking in that sense. And at the same time, they're learning a little bit of AI literacy and how it can or cannot help them with their learning. For everyday classrooms, you could do simple things. I mean, the origami was just a fun, creative way of doing it. But you could do something as simple as getting them to answer a prompt that you might want them to discuss first with AI. Maybe there's a reading in class; tell them to use that reading and have AI analyze it and summarize it for them. And then when they bring it into class, put them into groups, compare their results and discuss: do they agree with what the AI did? Is there anything missing? Why or why not? So you can do different small things to get them to think a little bit about that.
Dr. Cristi Ford (24:12):
I love that. And what it makes me think of, which leads me into my next question, is the ways in which we model metacognitive behavior in our own AI practices. When you talk about the ways educators can do that, I remember the day that ChatGPT introduced that share button, where I could take the prompts I'd gone through and the responses and share them with a colleague, who could then pick up from that place and take off.
(24:44):
And it was a really eye-opening opportunity to think about the collaborative approach around the work I was building in ChatGPT, but also, separate from that, to chronicle how I had used it: being really clear about the ways it had informed my thinking on a theoretical framework or connected ideas for me, so that I had this kind of running list. When I talked with my co-conspirator, we really started to talk about the ways we were utilizing this collaborative approach around AI, and to document that process. It was just an eye-opener for me that it didn't have to be this lonely experience, that I wasn't doing this on my own.
Dr. Sean McMinn (25:30):
Yeah, absolutely. I mean, now we're getting into actor-network theory here, in the sense that AI is another actor within a social network, and being able to share what you're doing with a chatbot with a colleague, who can then build upon that experience, is a fascinating thing.
Dr. Cristi Ford (25:51):
It is.
Dr. Sean McMinn (25:53):
And I'm not even sure we fully know what will emerge out of this new collaborative kind of experience. For me, what's also useful about the sharing is being able to look back at the prompts. And the experience you just explained would work very well for you, because you have expert knowledge in pedagogy, you have knowledge of frameworks. So in your conversation with OpenAI's tool, and then as you're sharing it with other colleagues who have that same expert knowledge, you can reflect quite well on whether or not it's taking you in the direction you want. It's the people who don't have that expertise that I worry a little bit about.
(26:46):
And this is where we need to put in steps somehow to get us to reflect upon that. Because I use AI sometimes for planning certain lessons and activities, and I know exactly what I want; I just might be busy and need a quick boost and some help from AI to get me there. And I find myself going, "Nope, nope. That's not what I want. Okay, maybe I'm not asking this right. How can I ask this?" And I'm thinking about my questions a little bit more deeply, and I go back and ask the questions, and it gets me closer to what I want. And during that process, I'm actually gaining a deeper understanding of what I want to do, because I'm thinking about it a lot more deeply.
Dr. Cristi Ford (27:29):
And you're refining. Again, to your point in the earlier part of this podcast, AI doesn't know what it doesn't know. And so you're refining that context and providing richer description and persona and all the nuance that helps you get the output that you need.
Dr. Sean McMinn (27:46):
Exactly.
Dr. Cristi Ford (27:47):
I am so fascinated by this work. I want to ask you maybe two questions in one as we wrap up here. One, as you think about the amazing work you're doing with students every day, really being able to do this applied research with students around AI, what's one piece of advice or takeaway for educators with respect to AI? And then the second thing is, I'm amazed at how quickly you're able to get publications and research out there. Do you have any recommendations for all of us? Because one of the things I hear from educators a lot is that the traditional publication cycle is very slow, and so by the time something gets published, in the realm of LLMs it's dated. So I guess I want to ask you for advice on both those fronts.
Dr. Sean McMinn (28:46):
Yeah, well, oh geez. They're simple questions, but hard to answer. The advice for others: I think, don't be too afraid of this phenomenon. It is having a large impact on education. But if we put our heads in the sand and just say, "Well, it's not going to affect my domain," then you're in trouble, because it's going to affect all domains, all disciplines, whether you're a historian or a social scientist or a marketing professor, whatever. I'm not saying embrace it and say that you must adopt it. I think you have to make sure that if you do adopt it in your teaching, if you adopt it in your assessments, you do it responsibly and make sure it fits your needs. Don't just use it because it's the next biggest thing.
(29:43):
I mean, I'm hearing a lot of people talk about DeepSeek, because it's caused a lot of noise in the industry, but I worry that people are just going to say, "Oh, DeepSeek everything, DeepSeek everything." Well, no, that's just one AI model. There might be other models that are more appropriate to what you want to do, or there might be future models, and so on. So I think: understand that you need not be afraid of it, but you do have to pay attention. Don't just use or adopt the tools in your teaching and learning because everyone else is; do it because it's appropriate to what you're trying to teach the students and the problems that you're trying to solve. And don't focus only on whether or not students are using it to cheat, because then we're losing the bigger picture.
(30:39):
We're losing the bigger picture that we need to help our students be equipped for a world that's fast-changing. A world where jobs are probably going to require people to have these adaptive skills, these hybrid thinking skills, in a sense, where they are going to be able to make decisions based on data coming from multiple tools. There are multiple pieces of advice you could give, but those are a couple. Your other question, publishing. I mean, I'm not publishing a lot, but I agree with you. Research is slow, and we recognized that at our university we had to act fast. We had to act faster than research, because the problem is now, not a year or two later when the article would be published. The problem is now. So to get the word out, we decided to try to get published, not necessarily in high-impact research journals, but in journals that will be read, while still taking the research seriously.
(31:55):
I'm not saying the research isn't serious, but we're working with outlets like Times Higher Education's Campus+, getting articles out through there, or working with organizations like the Digital Education Council, which has over 70 universities worldwide involved, and some other smaller alliances, publishing through these and presenting at conferences and talking about, well, what are we doing? How are we applying what we know empirically in the classroom now? That's how we're getting around that issue, because if we just rely on the traditional journals, people will still be worried about whether or not AI detectors will catch students cheating, and we won't actually be doing anything innovative in the classroom.
Dr. Cristi Ford (32:51):
That's right. That's really, really great feedback. And again, to your point, because you've done that, I immediately saw it, I read it, and it sparked this conversation so that we can share this globally with others who are grappling with some of the same issues. We are all in this digital transformation together, and if anybody tells you they're an expert, they're lying, because this work is constantly evolving. So I just want to thank you, Sean, for your time. Thank you for the work that you're doing. We'd love to stay connected. Listeners, since we talked about this article, we'll make sure to link it in this episode so you can take a look at it. Any final thoughts from you as we wrap up here today?
Dr. Sean McMinn (33:35):
No. The final thought is these are exciting times. I'm loving it. It's good to have something so impactful to do-
Dr. Cristi Ford (33:47):
That's right.
Dr. Sean McMinn (33:47):
I guess.
Dr. Cristi Ford (33:49):
That's right. Well, we will leave it on that note. Sean, thank you again. Thank you to our dedicated listeners and curious educators everywhere for joining us today. Remember to follow us on social media. You can find us on X, Instagram, LinkedIn or Facebook @D2L, and subscribe to D2L's YouTube channel so you can see this episode and others like it. We also have a Teaching & Learning Studio email list so you can keep up to date with the latest podcast episodes, articles and master classes. And if you liked what you heard, don't forget to rate us and share episodes like this one. I think this one's going to get a lot of shares. And remember to subscribe so you never miss what's in store. Thank you again for listening. Until next time.
Speaker 3 (34:30):
You've been listening to Teach and Learn, a podcast for curious educators brought to you by D2L.
Speaker 2 (34:35):
To learn more about our K-20 and corporate solutions, visit D2L.com. Visit the Teaching & Learning Studio for more material for educators, by educators, including master classes, articles, and interviews.
Speaker 3 (34:49):
And remember to hit that subscribe button and please take a moment to rate, review, and share the podcast. Thanks for joining us. Until next time, school's out.