 
D2L's Teach & Learn
Teach & Learn is a podcast for curious educators. Hosted by Dr. Cristi Ford and Dr. Emma Zone, each episode features candid conversations with some of the sharpest minds in the K-20 education space. We discuss trending educational topics and teaching strategies, and delve into the issues plaguing our schools and higher education institutions today.
How UT Sage Is Scaling Personalized Learning and Instructional Design, With Dr. Julie Schell
What happens when real collaboration meets practical innovation?
In this episode of Teach and Learn, Dr. Emma Zone sits down with Dr. Julie Schell, Assistant Vice Provost of Academic Technology and Director of the Office of Academic Technology at the University of Texas at Austin, to explore how the University is reimagining personalized education at scale.
First, Dr. Schell and her team focused on the foundation—developing a thoughtful framework grounded in pedagogy, ethics and data privacy. That solid underpinning then became the launchpad for UT Sage, a generative AI platform designed to support both students and faculty. What makes this platform so exceptional is that it is both a chatbot tutor for learners and an instructional design coach for educators.
Schell also shares how UT Sage is helping educators rethink their approach to teaching and learning, offering scalable support while keeping the human element at the forefront. From lesson planning to authentic assessment and beyond, this episode delves into how AI can enhance—and dramatically extend—the reach of the educator.
"We don't want to be so forward [thinking] that we're unaware of the limitations of generative AI, and we don't want to be so responsible that we miss the benefits." -Dr. Schell
What You'll Learn:
☑️ How UT Austin built the thoughtful framework that underpins UT Sage.
☑️ Why putting pedagogy first—and technology second—is essential for meaningful innovation.
☑️ What the "Big Six" limitations of AI are, and how educators can spot and navigate them.
☑️ The four "Fusion Skills" that help learners and faculty work effectively with AI.
☑️ How UT Sage helps faculty design student-centered lessons that scale without sacrificing quality.
🔖📚 Resources:
Remember to follow us on social media. You can find us on X, Instagram, LinkedIn, or Facebook @D2L. Check out our YouTube channel for the video version of this podcast and so much more. 
For more content, please visit the Teaching & Learning Studio.
To learn more about how D2L is transforming the way the world learns, visit our website at D2L.com 
 
Class dismissed.🍎 
Visit the Teaching & Learning Studio for more content for educators, by educators. Sign up for our newsletter today.
Dr. Emma Zone (00:00):
What happens when meaningful collaboration meets practical innovation in generative AI? In this episode of Teach & Learn, we explore how one university is reshaping personalized education at scale with a genAI-powered platform that acts both as a chatbot tutor for students and as an instructional design coach for faculty. Today, we're going to dive in and see how it started, how it's going, and what this could mean for the future of education.
Dr. Cristi Ford (00:32):
Welcome to Teach and Learn, a podcast for curious educators brought to you by D2L. Each week we'll meet some of the sharpest minds in the K to 20 space. Sharpen your pencils. Class is about to begin.
Dr. Emma Zone (00:42):
This ambitious endeavor would not be possible without the input, expertise, and vision of today's guest, and I'm so excited to welcome Julie. Joining me is the Assistant Vice Provost of Academic Technology and the Director of the Office of Academic Technology at the University of Texas at Austin. Julie Schell, thanks so much for being here. It's an absolute pleasure to have you today.
Dr. Julie Schell (01:05):
Hi, Emma. It's great to be here. Thanks for having me.
Dr. Emma Zone (01:08):
Yes. This has been a conversation that I've wanted to have since we had the chance to meet back in May, so super excited about this. So, Julie, our listeners love to hear about the backgrounds of our guests. So let's start there. Let's talk a little bit about your role and how you work to advance teaching and learning, particularly when we think about the use of technology.
Dr. Julie Schell (01:28):
Yes. Here at the University of Texas at Austin, as you mentioned, I am the Assistant Vice Provost of Academic Technology, and that means that I help coordinate the academic technology ecosystem on campus. And one of the things that we're really focused on in the technology environment here on campus is making sure that we're focused on pedagogy first and technology second. And I think that's always something that is close to my heart and something that I've been working on throughout my career: really thinking about, what does good learning look like? What does effective teaching look like? And then what are the different technology possibilities around us that can help foster and cultivate that effective teaching and learning?
Dr. Emma Zone (02:13):
Yeah, I love that. I often will say, like, when we start with the tool first, we miss something. And so I love that lens of: let's think about it through the lens of the educator, through the lens of the learner. And the technology plays different roles depending on who you are and where you are in that equation, right? Absolutely. So, love that. You know, when we first met back in the spring, you spoke a bit about this framework that really became, it sounds like, an underpinning piece to much of what's happening within not only your department but a lot of the initiatives happening across the university when it comes to AI. So can you talk a little bit more about the AI-Forward, AI-Responsible Framework, what it is and how it came about? Because, I'll be honest, we hear a lot about AI frameworks in the conversation. You can't, you know, go to a conference or a meeting about AI without someone talking about policy or a framework.
Dr. Julie Schell (03:10):
Mm-hmm <affirmative>.
Dr. Emma Zone (03:10):
And so, you know, what makes the AI-Forward, AI-Responsible Framework unique, and why is it so important to the work you're doing?
Dr. Julie Schell (03:18):
Yeah, thanks so much for that question. I think one of the things that makes it unique is that it's really simple. It's simple to understand. We needed a way to help us address the surprise onslaught of generative AI on campus. Of course, AI on campus, and AI research, has been part of the University of Texas for a long time. It's even been part of education for a long time. But when generative AI hit on campus, that took faculty and administration by surprise, I think, nationwide. And so we needed a way to help people understand, move, and navigate that onslaught of change with generative AI. And we definitely took a point of view and a position right away that we wanted people to be experimenting and innovating with generative AI.
(04:16):
So being AI-forward, but also always balancing that entrepreneurial mindset in teaching and learning with responsible adoption. And so that AI-Forward, AI-Responsible Framework is meant to help people, and remind people, that it's a balance. We don't wanna be so forward that we are completely unaware or unfamiliar with the limitations of generative AI, particularly for teaching and learning. And we don't want to be so on the responsible side that we're not thinking about those benefits or those strengths and how they might be able to foster meaningful learning in our classrooms. And so it really is a balance. And I think the thing that keeps that balance is the human in the loop, always balancing between being AI-forward and being AI-responsible. So that's the initial framework that we started off with, just to help us make sense of the changing environment.
Dr. Emma Zone (05:19):
Sure. Well, and I love that the AI-forward piece comes first, because I think there have been so many AI conversations grounded in fear from a lot of different people for lots of reasons, some warranted, and some maybe just fear of change and what this means for our own roles as we think about the ecosystem of higher ed and other places. I'm English faculty by trade, so I can understand where that comes from, but I really love the idea of: no, there is a place for this forward approach, and the balance is really what's critical. So when you were coming up with this and doing this work, how did you engage with the community, you know, within the university or even elsewhere, to help inform what the framework would look like?
Dr. Julie Schell (06:03):
So the first thing that we did, actually, was we developed a project where we created generative AI-based activities as well as lesson plans. And we created several templates for those activities and those lesson plans. And as part of the templates and the activities, we had an evaluation component to each one, to engage in critical analysis and critical discernment about the output that the AI tool was generating. So even though our office is in the Office of Academic Affairs, we wanted to extend the opportunity for faculty, staff, and students across the university to engage in this project. And so we developed a series of activities and lesson plans. For example, one of the activities is a word count activity. We had this activity built up, and what people could do is engage in this activity using generative AI.
(07:08):
And so the idea was: a lot of times when we're writing in academia, we'll need to meet a particular word count. So, for example, if we have to do an abstract or something like that, you have to meet a particular word count. And one of the things that is stressful is trying to figure out how to maintain your viewpoint and also meet the word count, right? And so one of the activities we had was instructions for generating your own abstract, with your own original content, and then using generative AI to help give some ideas about how to meet the word count but maintain the bulk of your particular content.
Dr. Emma Zone (07:49):
Right.
Dr. Julie Schell (07:50):
And so that's the forward part: using AI to engage in an activity that is a common scholarly or academic activity.
Dr. Emma Zone (07:59):
Sure.
Dr. Julie Schell (07:59):
And then the responsible part was, actually, we had a series of questions for the user to evaluate the output. So first of all, was the output accurate, or were there hallucinations in it? So, for example, did it say that you had 50 words, but you really had, you know, 38?
(08:21):
Was it aligned? So did it maintain the actual sentiment that you had originally hoped for in your original abstract? What was the user experience like? Was it a positive or negative user experience in that interaction with the bot? And then kind of a net promoter score: would you recommend using this particular generative AI tool to do this activity? Why or why not? And so that's kind of the responsible part. I would put that as, you know, critical evaluation or critical discernment, and really always being aware of the output and judging that output, keeping that human in the loop in there. So I think that was one of the really early activities that we engaged in, that we engaged the whole university community around. And we had staff from many different departments, as well as faculty members from several different departments, from pharmacy to social work. And then students also engaging. For example, we had students in the design department in the College of Fine Arts who were interacting with the lesson plans and giving feedback.
Dr. Emma Zone (09:35):
Yeah. That's really cool. I love that for a couple of reasons. First of all, going back to what I was saying about the fear piece, or when people don't know, like, what's my entry point? It creates an entry point that's accessible, because it uses scaffolding, like you're mentioning with the scholarly activity. Okay, I understand I have to have a word count based on an abstract or whatever the direction is for what I'm doing, right? That's not different. That's within my wheelhouse. I get that. Right. So I think that part of it is really nice, because it allows for an entry point that doesn't feel so out there that a person would put up that immediate barrier, right? But then there's also the piece that it's experiential and constructivist at the same time, where people are able to then kind of play with it, manipulate it, dispel maybe some of that confidence issue that might be happening, discover something delightful in the process. But then, to the responsible piece, the critical thinking part of it: we're always talking about how we're now helping students understand how to use these tools in a particular way, same as faculty, really, all of us. And so, what a great way, even at the genesis of this work, to be modeling that kind of behavior, because that's what we should be doing anytime we're engaging with any of these tools, whether it's for study, scholarship, or just even personal use. So that's really awesome.
Dr. Julie Schell (10:56):
Yeah. And I think the important point, too, is always keeping that human in the loop. So, you know, there's kind of transactional use and transformational use. Transactional use is, you know, putting your abstract in, asking it to reduce it to the correct word count, and then just blindly adopting that and submitting it. That's transactional use. What we're pushing for is the more transformational use, where you have a partnership, or a hybrid engagement with the AI tool. It gives you the output, but then you're that human in the loop with your expertise, going back and reading it, and probably actually doing some additional transformation to the output, not blindly adopting it. So that transactional versus transformational piece was really important for us early on.
Dr. Emma Zone (11:45):
Yeah, I love that, because I think that's what folks are often grappling with. And maybe it's because the transactional seems to come more naturally, because it's put it in, spit it out. But we also know there's inherent risk in a lot of that for lots of reasons, whether you're talking about biases or other things. So, such a good point. So you have this framework, and I'm trying to kind of parse through the timing of all of this.
Dr. Julie Schell (12:08):
Yeah.
Dr. Emma Zone (12:08):
That was clear from, like, a groundwork perspective; that had to happen. You start to sort of socialize, conceptualize something big. Is that sort of when UT Sage started to emerge? Was it in concert with that, or after? Can you talk a bit about how that helped underpin that conceptualization?
Dr. Julie Schell (12:25):
Yeah, absolutely. So we knew that we wanted to have something forward and something responsible for people to engage with at scale.
Dr. Cristi Ford (12:35):
Mm-hmm <affirmative>.
Dr. Julie Schell (12:36):
And we knew that, you know, we made this series of lesson plans and activities, but in order for it to really be beneficial, people needed to be able to engage and make their own, to teach their own particular content or engage in their own particular activities. And so, from the lesson plan experience, we thought: how might we scale the experience of designing a lesson and using a generative AI tool? And we envisioned the UT Sage project as an opportunity to do that. UT Sage is a natural progression from those lesson plans. Those lesson plans use gold-standard, student-centered learning experience design principles and learning science principles. For example, when you're thinking about designing a lesson, you're always thinking about who your learners are. What are their prior knowledge gaps? What are their strengths? You know, what grade level are they? What are their needs?
(13:48):
What are their common difficulties or pain points? And really thinking about that to set the stage for the framework of the teaching. And so that's an example of how we're focusing on pedagogy first. Sage operates from what's called the tetrahedral model of classroom learning. And it really prompts the teacher to think through who their learners are and what they want them to know and be able to do in a classroom. What are the common misconceptions or difficulties that they might encounter? What's the instructor's pedagogical content knowledge that they have built up over time? Sort of understanding that if a student has a particular difficulty, what's the expertise that you've built up throughout your career to help students move through those misconceptions and difficulties? And what are some of the different teaching activities, or learning activities, that you know throughout your career have been helpful in getting students to learn that particular content?
(14:53):
And so Sage is based on that model. And when you go into it, it is a tutor, as you mentioned at the beginning. It functions as a tutor that a faculty member or a teacher can design, and Sage walks you through those best practices in teaching to develop the tutor. So it's got a very sort of subtle learning science backend, so that you don't have to be trained as a learning scientist in order to have a learning science-powered tutor, because Sage will do that for you.
Dr. Emma Zone (15:30):
Yeah. See, I love that, because regardless of the tool we're talking about in technology (you know, AI is the conversation today, obviously), it really comes down to: what is good evidence-based practice? How are you addressing the needs of whoever is sitting in front of you, virtually or face-to-face? How are you thinking about, you know, the mission of your institution, or the program goals that you have in your program? And so it really kind of shifts the lens, I think, around exactly what you're describing. Not every faculty member is an expert in learning science or evidence-based practice or methods, right? Because that's not necessarily how the system works. And so this is such an interesting way to think through: okay, but what do I need to think about, and where do I see those gaps? And how does technology mediate that, without replacing me, but leveraging that expertise? And I think that's such a critical conversation when we're talking about any tool, digital or otherwise.
Dr. Julie Schell (16:29):
Yeah, I think leveraging the expertise, but also extending your expertise beyond this short amount of time that we have with students in our classrooms. I think that's one of the things that's most exciting about AI, but Sage in particular. As a faculty member, I'm teaching right now, and I'm using Sage in my own classroom. And one of the exciting things about it is that, because I'm training it personally, I am building the tutors for my particular class. I envision Sage as a way of extending the time that I have to engage with the students. Is it a replacement for me? No. Does it prompt extension of cognitive presence, or the amount of time and depth with which the students are engaging in curated content from me? Yes.
Dr. Cristi Ford (17:17):
Mm-hmm <affirmative>.
Dr. Julie Schell (17:17):
Um, and it's a really interesting and interactive way for the students to do that. And so that's something that I find exciting about it.
Dr. Emma Zone (17:24):
That's what I was going to ask: in what ways are you seeing student feedback, and even just, like, formative assessment data (we're gonna talk about assessment in a little bit)? Like, is that helping to inform the decisions you're making around the tool?
Dr. Julie Schell (17:37):
Yeah, definitely. Just in class, I had students actually writing their own learning outcomes. I teach a pedagogy class, so I'm teaching students to teach. And writing a learning outcome for the first time is a hard thing to do. Yeah. It doesn't come naturally. We had just had a mini lesson where I was giving a lecture on how to write a learning outcome, and then the students were doing an exercise where they had to write up a lesson plan and design learning outcomes for that lesson plan, live in class. I have a learning outcomes tutor that I made with Sage, and one of the things that we did is have them enter their learning outcomes into Sage to get feedback. And then we kind of evaluated the output in terms of how aligned the output the tutor gave was with the output I would give. And it was very aligned, and I think the students were excited about being able to get immediate feedback from something that I had trained myself.
Dr. Emma Zone (18:49):
Right. Well, that makes sense, too, because it helps build the trust factor. This isn't just some random bot that's existing out in the ether, right? It matches what they're already listening to or reading, and getting to know you, and that kind of thing as well. So it's just such a cool interplay. That's really exciting. So, beyond the student side, there's another piece to this related to instructional design, too, right?
Dr. Julie Schell (19:17):
Yeah. Yeah. So I think we see Sage as sort of an entry-level scaffolding to best practices in instructional design. So, say you wanted to teach a concept: you're teaching your students statistics, teaching mean, median, and mode, trying to help them understand what an average is versus what the mode is versus what the median is, and you want to build a tutor. Sage will actually walk you through what I was talking about earlier, that tetrahedral model of classroom learning. It'll walk you through a student-centered lesson plan design, naturally through conversation, or you can configure that through a form. So, like I said, it will ask me: who are your learners? One of the things that we know about learning is that prior knowledge is a primary determinant of successful learning.
(20:23):
And so really trying to understand who our students are and what their prior knowledge state is, is important. And so, through engaging with Sage, as you're starting to build a tutor, it's asking you to think about those things and reflect on those things; one of the first activities that you're doing is really understanding who your learners are. And then Sage will naturally tailor the lesson. So if I say, oh, my students are graduate students in design, it will tailor content to that level, with examples from design. If I say my students are physics pre-meds, it'll tailor the content to what the LLM believes is appropriate for that particular audience. So that's just one example. I think the other thing that it does from an instructional design standpoint is, of course, it helps teachers write learning outcomes. And we know from large meta-analyses on student learning and student success in higher education that one of the most impactful things that you can do, of all of the different interventions that people have tried, is to make explicit what success looks like at the beginning of a lesson.
Dr. Emma Zone (21:38):
Yes. Um,
Dr. Julie Schell (21:39):
So really articulating to students what, after doing the lesson, they should know, what they should be able to do, and any of the attitudes, beliefs, or mindsets that they should develop. Yes. Particularly confidence with the content, through engaging with the lesson. And Sage will walk the faculty member through that exercise. And if they don't know how to write a learning outcome, or they're unfamiliar, there are resources that it will point the faculty member to. So in those ways, and there are some other ways that it does that, it's a learning science-trained, entry-level instructional designer, not meant to replace instructional designers. Sure. We think human instructional design is really important, but it's really hard to do at scale.
Dr. Emma Zone (22:25):
Well, and, you know, full disclosure, my background and interest is very much rooted in reflective practice. Okay. So, to me, this is not only about the foundation, as you've described; it really is all about how you start to build mindsets for faculty as well, around reflective practice at all stages of the learning journey, whether you're talking about planning, or in the middle of the experience, or post, and how we're thinking not only about data around that work, but then ultimately what impact looks like for student learning. But I love what you said about what success looks like, because that's just as important for a faculty member to reflect on and define on the front end as it is for us to tell students. And, you know, admittedly, I taught for a very, very long time. I think we all know, like, okay, I know what success looks like, but can you actually articulate it?
Dr. Julie Schell (23:17):
Right? Yeah.
Dr. Emma Zone (23:17):
Because that's a very different thing. Like, oh, I know it when I see it. And it's like, okay, maybe not, though, right? I think it's a good way to kind of test ourselves as well.
Dr. Julie Schell (23:30):
Yeah. And I think that's a good example. You know, some people think a learning outcome is "know the Pythagorean theorem,"
Dr. Emma Zone (23:40):
Right?
Dr. Julie Schell (23:40):
And, you know, that's not an effective learning outcome. Whereas "know that you can calculate distance using triangles and the Pythagorean theorem" is a better learning outcome, right? Because it's more specific, it's more measurable, and there are multiple ways that students could demonstrate their abilities with that particular example. And so Sage can help someone start off. Most people, when they first start teaching, would put "know the Pythagorean theorem."
Dr. Emma Zone (24:17):
Right?
Dr. Julie Schell (24:17):
They wouldn't know how to make a learning outcome that would have enough wiggle room that students can demonstrate mastery with multiple approaches.
Dr. Emma Zone (24:31):
Yes.
Dr. Julie Schell (24:31):
And I think that is something where, you know, there are other ways to do this, but Sage is designed to actually help coach you through that.
Dr. Emma Zone (24:40):
Yeah. It's also helpful, too, to kind of push us to think beyond our ways of doing things, right? Because, especially when you talk about skill acquisition, there are many different ways to acquire skills. I think about all my time teaching in literature classes. There are certain faculty within that particular discipline who believe, I have to teach these particular works, for whatever reason, but in reality, the outcomes should be work-agnostic, right? There are lots of different ways to get to those critical thinking and critical reflection skills, and rhetoric and writing as well. So there are some interesting ties there. I wanna pivot a bit to a research article that was published in Frontiers in Education that you wrote along with your colleagues, Kasey Ford and Arthur B. Markman. We'll link that piece in the show notes; any of the resources we talk about today, we will link for our listeners. But in the AI-Forward, AI-Responsible Framework section, you talk about the Big Six, and I've heard you mention this a few different times in other interviews as well. So I think our audience would love to hear what the Big Six is and sort of why,
Dr. Julie Schell (25:47):
Yeah.
Dr. Emma Zone (25:47):
...that even needs to be considered.
Dr. Julie Schell (25:50):
So when we're thinking about learning with AI (not learning AI, but learning with AI), I think, again, keeping that AI-Forward, AI-Responsible Framework in mind, part of being responsible is knowing the limitations. There's sort of widespread knowledge about the limitations; sometimes it's hard to make sense of them. So I think just framing them as the Big Six is helpful. The first of those Big Six: there are serious concerns about privacy and security when it comes to AI. So understanding those concerns for yourself as a learner. One example: we only want students to use AI tools that we have contracts with here at the university, to ensure that their privacy and intellectual property and security rights are protected. So really just understanding, and being able to make efficacious choices with agency around, privacy and security is really important.
(26:57):
Of course, the most popular one, the one that everyone knows about, is that AI hallucinates. And so that's the second of the Big Six: there are hallucinations. You know, I heard someone from Hugging Face say this once, and I just love the phrase: AI is always confident, but it's not always competent. And particularly, I think, once you start to use AI for a long period of time, you start to realize it's not better than you. When you first start using it, you might think, oh, this is so incredible. But after you use it for a little bit, you realize that it's not better than you, and so you will start to see that it makes mistakes. And, you know, one of the things we do in Sage, for example, is that right above the chat window, every single time you type into Sage, you will see a note saying AI can make mistakes.
(27:47):
And so just making sure that we're always helping people be aware of that. The third one is misalignment. That's less well known than hallucinations, but that's when you ask AI to do a particular task and it does something that's off. This happens a lot in the image-based models, where you might ask it to give you an image of, you know, a student and her friends, and it gives you an image of a student with parrots or something like that. It's just misaligned. The next is bias. The models are trained by humans, so they have bias in them. And, you know, one of the things a researcher from Penn found, which I thought was really interesting, is that without intervention, if you use AI to produce a lesson plan, or ask it to help you produce a lesson plan, it'll actually come out being teacher-centered versus student-centered unless you intervene and adjust the prompt.
(28:46):
Because we tend to be more teacher-centered if we don't intervene with that. Yeah. And so that's an example of different biases that can come up in these models. Ethics is a huge concern. There are labor concerns, environmental concerns, security concerns in different ways, like with deepfakes and things like that, right? Concerns around the corporations and how they're engaging with our data. And so just having a real awareness of ethics. One of my colleagues talks about this term called technological somnambulism. And that just means don't sleepwalk through using AI; be aware, right? It doesn't mean don't use it, but be aware of all the ethical concerns. And then the sixth of the Big Six is cognitive offloading.
(29:37):
That's my favorite one. Yeah. When we offload tasks to generative AI, we need to be careful that we don't just leave that cognitive space unattended, right? So if a student wants to engage with generative AI, say they're gonna be learning the Pythagorean theorem, maybe they're asking about different things that you can do with the Pythagorean theorem, and they're asking it for a problem set, right? It's fine to use AI to practice in that way, but that space you're creating by not having to make up the problems yourself, you need to fill with something more challenging, to make sure that you're not atrophying; even in the offloading, you're still doing the thinking. And I do think, you know, as we've moved through the process of trying to help people understand the Big Six, there is a set of skills that we're trying to make sure students are learning, skills that are actually technology-agnostic. And I'm happy to share those four skills as we continue talking.
Dr. Emma Zone (30:50):
Yeah, I like that too, because I was going to comment on how the Big Six also feels agnostic in terms of, like, curricular approaches, whether you're talking fine arts or computer science or other disciplines. I guess it's discipline-agnostic as well. Yeah. But to me, it also feels very transferable to preparing students for the future workforce, in terms of coming to those positions AI-ready, which is, you know, obviously top of mind for lots of different parties, whether you're talking about our institutional parties or who goes into the workforce, you know, how are we preparing? And so I appreciate that, because it, again, creates a bit of training around that: you start to ingrain this into not only your studies, but also what it means to be a citizen utilizing AI in these different ways, in these different modalities. And to your point, the technology-agnostic part, I mean, this really becomes a greater worldview conversation too.
Dr. Julie Schell (31:52):
Yeah, definitely. And, you know, I care very deeply about my students' careers, and that they, in my classes as well as here at the university, are building durable and transferable skills, and that they're able to articulate the meaningful learning that they've experienced in our classes. And using AI, I think, does build up four skills that are durable and transferable, that I call Fusion Skills. I got that term from an HBR article, actually, and it really resonated: fusion skills are hybrid skills between the human and the AI that you're bringing together. The HBR article had three of them. One is intelligent interrogation, which basically means knowing how to prompt AI in order to get output. So having a lot of agency in what you're inputting into the models so that you can get higher quality output. So, intelligent interrogation. The second one is called reciprocal apprenticeship. And that is giving the AI enough contextual awareness, and having it have persistent memory, so that it can, again, give higher quality output: reduce those hallucinations, reduce that misalignment, reduce that bias. So those two skills are about having high agency on the input. Yes. So teaching students how much agency is required on the input side, I think, is really important.
Dr. Emma Zone (33:35):
Yes.
Dr. Julie Schell (33:36):
And then on the output side, in the article they talked about critical discernment, or judgment integration, actually. With judgment integration, that's where you're doing that real evaluation of the output. You're having a lot of agency on: does this make sense? Is this accurate? Does this sound correct? <laugh> Does it even sound like me? And then I think this fourth one is important. So I add a fourth fusion skill that I think is really important for our students, as well as anyone who's using AI. And that's creative transformation, because I really want to hear my students' voices, and when my team uses AI, I want their expertise, I want their human touch, I want their DNA in whatever they're producing. So always take that output and use your own original and creative position to transform it. That's the creative transformation piece. And just to tie this up a little bit, I actually think those four skills are technology-agnostic. They're AI-agnostic. The first is knowing how to ask the right questions,
Dr. Emma Zone (34:52):
Right?
Dr. Julie Schell (34:52):
Like knowing how to ask good questions. The second is giving enough context and understanding context, having that contextual awareness. The third is critical thinking and analysis. And the fourth is creative problem solving. Yeah. And engaging in creative acts and original action from your own point of view. And I think those four skills are transferable in a technology-heavy or a technology-light environment.
Dr. Emma Zone (35:22):
Right. Well, and I see so much commonality with things like university learning outcomes or gen ed outcomes. I mean, there's a lot of synergy around that work. I wanna pivot to the question about assessment, because I alluded to it at the beginning of our conversation. I loved your interview on the Texas Standard, and something you said in that interview really resonated with me, because it kind of goes back to my soapbox when we talk about technology, which is: it shines a light on what we are either doing well in our teaching, what our strengths are, or what we need to think about or be critical about. And so in that conversation, you talked a bit about how AI gives us the opportunity to rethink what we've been doing in education, and specifically you call out the need to unpack what we've been doing in assessment. So can you say more about that? Because I couldn't agree more, and I think there's just a lot of room. It actually relates to some of the skills you just talked about, but yeah, we'd love to hear more.
Dr. Julie Schell (36:21):
Yeah. Assessment is a real challenge, because it's hard to do authentic assessment at scale. So say we have a large class, for example. I teach a concept in my class called strategic empathy, which is something that we use in design to help us understand a user's pain points, or understand a user experience, so that we can design something that meets their needs. So it's not empathy to be nice, but empathy to be strategic: strategic empathy. And a traditional mode of teaching would likely be, for me, to give a lecture on strategic empathy, maybe give some examples, assign a reading, and then have an exam where students have to recognize the definition of what strategic empathy is.
(37:23):
And maybe, if I give a couple different examples, they have to recognize whether it's strategic empathy or a different kind of empathy, and maybe a more challenging question, to be able to apply strategic empathy, or how they might use strategic empathy in a particular problem. But that only gives us a way to evaluate the more surface-level learning that students might have about strategic empathy: their ability to recall that information, their ability to define it and potentially apply it to a question that they haven't seen before. It's not super authentic in terms of really understanding whether they have mastery and whether they'd be able to use strategic empathy in practice. And for measuring that kind of learning, or giving students the opportunity to show mastery of strategic empathy, I think we need to use project-based learning, which is sort of my soapbox, and which is hard to do at scale.
(38:34):
Yes. But we're in situations now where people are recognizing that measuring the ability to recall information in a testing environment, a more traditional mode of assessment, is not of as much value as finding ways where we can actually see whether students can demonstrate mastery of a particular topic. And I think being able to demonstrate mastery is really important for their careers. We need them to be able to articulate what they know and what they can do, and their competencies, when they're going out there into their careers. They don't necessarily need to have badges that say that, but we need them to be able to articulate that with confidence. Yes. What they can do. And I think they can't get that from doing a multiple choice exam.
Dr. Emma Zone (39:40):
Right, right. Well, it's interesting too, because I think there's also a little bit of connection to that idea you talked about at the beginning, transactional versus transformational.
(39:51):
Yeah. Whereas with an exam like that, you take it; it's a transaction. You prepare for a test, you sit down, you might take it on the computer, you might take it on paper, it doesn't matter. It's a very transactional engagement, and then it sort of goes away, right? And whether you then do something with that information or not, as a faculty member or as a program administrator or as a student, sort of remains to be seen. But the impact is very different when, as you're describing, you're being asked to think about it in a contextual way, connected to other things, and it feels much more grounded in something that's real and applicable to something, you know, outside of just sitting down to take an actual exam.
Dr. Julie Schell (40:33):
Yeah. We want students to have experiences analogous to what they're going to do in their futures and in their careers. And there may be cases where retrieving extensive information and solving problems in a time-sensitive environment might actually be applicable. But for the most part, that's not as transferable to most careers. It's transferable to some, and in those cases, I think it's appropriate. But I think what AI has done is shine a light, re-shine a light, on the fact that the ways in which we're measuring learning, or the ways in which we're trying to evaluate and then grow student learning, need to be remodeled. Yes, we need new models. And this isn't the first time that this has happened. But I do think AI in the classroom has helped shine a light and is getting people talking about doing assessment in different ways, which I think is really important.
Dr. Emma Zone (41:52):
It is, it is. And it's sort of that idea of: what does success look like?
Dr. Julie Schell (41:55):
Right?
Dr. Emma Zone (41:56):
And what does that mean in all of these different contexts, right? It's not just that we're defining that so you know how to be successful in the course. Maybe at that moment, that's what's most relevant for that learner, but ultimately it's about what you take away: what does this mean for extension, either to the next learning experience you have, or to the workforce, or both? So yeah, I appreciate it; see, it all connects, full circle. This is why the framework and all of these sort of seminal pieces fit together. Well, I know that we don't have that much more time, and I wanna give you a chance to kind of offer some words of wisdom. You know, this work is just astounding. It's been really fun to follow. I live in Texas, and so I'm always keeping an eye on things, and, you know, like I said, I've been waiting to have this conversation.
(42:40):
I think it's just a testament to the work you and your team are doing, from foundation through innovation, and really understanding how we think about teaching and learning ultimately. And that's what our podcast is all about, but that's, I think, a lot of what our passion is about too. I think we have some of that shared passion. So I would love it if you could leave our listeners with a bit of, you know, wisdom. They might be listening to this thinking, oh my gosh, I would love to do this, but I don't have the resources and this feels like a lot. Or, you know, how do I start? Anything you could offer to them in terms of wisdom or other tips that you might have.
Dr. Julie Schell (43:14):
I think the wisdom piece that I'd love to leave with people is to understand that preventing cheating with AI is a policy goal. It's a different goal than learning; that's a design goal. So if we could lean into designing for authentic learning, and ways in which we can measure authentic learning, then I think we're sort of on the right track. And it would negate the need for policies around preventing cheating with AI. Now, I certainly don't endorse using AI in ways that are against an individual instructor's policies. But I think it's important to differentiate: preventing cheating is a policy goal, and learning is a design goal. And if we can lean into that design side, then I think we don't need to do as much on the first part of that.
Dr. Emma Zone (44:21):
Yeah. Yeah. Absolutely true. Well, thank you so much. This has been a great way to spend my afternoon, to chat a little bit about this. And I know our listeners are gonna dive into the resources, so I just really appreciate having you here. Thank you so much for taking the time.
Dr. Julie Schell (44:36):
Thank you so much. It's been great.
Dr. Emma Zone (44:38):
Yeah. Yeah. And thank you to our dedicated listeners and curious educators everywhere. Remember to follow us on social media. You can find us on X, Instagram, LinkedIn, or Facebook at D2L, and of course, subscribe to the D2L YouTube channel. You can also sign up for the Teaching and Learning Studio email list for the latest updates on new episodes, articles, and masterclasses. And if you like what you heard, remember to rate, review, and share this episode. And remember to subscribe so you never miss what we have in store.
Dr. Cristi Ford (45:11):
You've been listening to Teach and Learn, a podcast for curious educators, brought to you by D2L.
Dr. Emma Zone (45:18):
To learn more about our K through 20 and corporate solutions, visit D2L.com. Visit the Teaching and Learning Studio for more material for educators, by educators, including masterclasses, articles, and interviews.
Dr. Cristi Ford (45:29):
And remember to hit that subscribe button. And please take a moment to rate, review, and share the podcast. Thanks for joining us. Until next time, school's out.
 
      