Note: This interview originally aired on January 31, 2023, as a Chronicle of Higher Education Virtual Event. The transcript of this event is intended to serve as a guide to the entire conversation. You can also access our summary and a video of the event.
Ian Wilhelm:
Hello and welcome, and thanks for joining us today for what will be a dynamic discussion about the use of AI technology and chatbots in higher education. I'm one of your co-hosts for today, Ian Wilhelm from the Chronicle of Higher Education.
Bridget Burns:
And I'm your other co-host, Bridget Burns from the University Innovation Alliance. And today we're going to hear from a panel of experts across higher education, research, and technology, who are going to share their perspectives and vantage points about how higher ed institutions can navigate and benefit from AI.
Ian Wilhelm:
Bridget, it's great to see you. And for the audience out there, we want to make sure this is a conversation that's speaking to your questions. So we will be looking for those questions in the Q&A. And as Bridget and I talked about, we want to make sure this is a conversation that covers a broad understanding of what AI is and how it's being used for student engagement. So please do send us your questions. Because as Bridget said, this topic is very timely, as the use of ChatGPT is now dominating headlines. And higher education continues to grapple with adapting to a technology that's emerging and changing as we speak.
Bridget Burns:
And AI has already been used widely in self-driving cars and smart assistants and marketing chatbots. We've all experienced that in the private sector. But as it makes its way into higher ed, I think the complexity of the questions around its use is only going to become greater. Because we want to make sure that we're using it as a student-centered tool. There's no longer a question of whether colleges or universities will adopt AI; it's a question of how to do it, and how to do it well.
Ian Wilhelm:
And Bridget, I think that's a great phrase you used, "student-centered tool." I think that's a lot of what we'll be talking about today. And the concern now is also how to ethically and efficiently implement AI to improve the student experience. So in today's virtual forum, a panel of experts in educational research and technology, like we said, will discuss this: how higher ed institutions can benefit from AI, how to avoid some of the common pitfalls, and how best to implement it.
And just to give us a common understanding, a place to start from, in the first part of today's conversation we're going to introduce Tim Renick, the Executive Director of the National Institute for Student Success. He's a very well-respected source, someone we call quite a bit here at the Chronicle for his perspective, and someone who supports many campuses as they implement new practices like chatbots.
Bridget Burns:
And Tim, thank you so much for joining us today. We really appreciate you giving us some of your expertise.
Tim Renick:
My pleasure.
Bridget Burns:
So let's just set the landscape for folks, because you work with institutions that are contemplating the use of technology, and you help support them. But you also have worked at an institution that did this. So I'm hoping you can give us a kind of case-study sampling. As you were navigating this idea, can you set some context: what we mean by AI and chatbots, the problem you were trying to solve at Georgia State, and how you went about it?
Tim Renick:
Sure. Bridget, Georgia State, as many of you know, is a very large institution: 50,000 students, 60% Pell-eligible, 75% non-white. And we struggle with delivering personalized services to our students at scale. How do you respond to students in a timely fashion? The problem we were facing that prompted our adoption of AI back in 2016 was an admissions problem, so-called summer melt. We were losing almost 20% of our confirmed incoming first-year class every year to melt. Students would come to orientation, they'd sign up for classes. In some cases, they'd put down deposits on housing and so forth. But the term would begin and they would be nowhere to be found.
So we looked at the data to try to figure out what was going on. Did all these students change their minds about going to college? What we found is that the majority of these students were getting tripped up by the bureaucracy. It wasn't that they were changing their minds about going to college. We were preventing them from making it to that first day through FAFSA, and transcripts, and immunization requirements, and deposits, and orientation, and registration rules, and so forth. As we looked at the students who were so-called melting, we found each one of them was getting tripped up by the bureaucracy. And disproportionately, these were students from first-generation and low-income backgrounds.
We had tried various ways of intervening during the summer. It's a particularly vulnerable time for those students, because they're no longer at their high school, so they don't have their teachers and counselors around. They're not yet on our campuses. How do you engage them? We tried phone campaigns, and we certainly emailed endlessly. But the summer melt rate kept going up. And then we began to engage in AI-enhanced texting. Texting was a tool students communicate with all the time; they're always texting their friends and so forth. What if we were able to build a tool that would help students answer key questions and navigate key obstacles to enrollment?
So we worked with our partner Mainstay and built up a knowledge base, initially of about 2,000 answers to questions commonly asked by incoming students. Questions about registration and orientation and the FAFSA and loans -- all the things that students ask about. And then these responses, these answers, were put on an AI-enhanced platform. The AI didn't write the answers, and it doesn't write the answers to the questions coming in via our chatbot. When a question comes in, the AI uses algorithms to pick the right response out of the knowledge base, or to recognize that there isn't a reliable response -- in which case it delivers the question to a staff person who writes and vets the answer, so the knowledge base is constantly getting smarter.
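To make the routing pattern Tim describes concrete, here is a minimal sketch in Python. It assumes a hypothetical knowledge base, a crude similarity score standing in for the real intent-matching model, and an invented confidence threshold; none of this is Mainstay's actual implementation.

```python
# Minimal sketch of the flow Tim describes: match an incoming question
# against a vetted knowledge base, answer only when the match is confident,
# and otherwise escalate to a staff member whose vetted reply grows the base.
# All names, answers, and thresholds here are illustrative assumptions.
from difflib import SequenceMatcher

knowledge_base = {
    "how do i complete the fafsa": "Start at studentaid.gov and list our school code.",
    "when is orientation": "Orientation sessions run through the summer; register online.",
}
CONFIDENCE_THRESHOLD = 0.75  # below this, a human answers instead

def similarity(a: str, b: str) -> float:
    """Crude stand-in for the real intent-matching model."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def escalate_to_staff(question: str) -> str:
    # A staff member writes and vets an answer, then adds it to the
    # knowledge base so the bot can handle similar questions next time.
    print(f"[escalation queue] {question!r}")
    return "A staff member will follow up with you shortly."

def route_question(question: str) -> str:
    best_answer, best_score = None, 0.0
    for known_question, answer in knowledge_base.items():
        score = similarity(question, known_question)
        if score > best_score:
            best_answer, best_score = answer, score
    if best_score >= CONFIDENCE_THRESHOLD:
        return best_answer  # vetted answer, sent within seconds
    return escalate_to_staff(question)

print(route_question("How do I complete the FAFSA?"))
```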
So we launched this new tool in May of 2016, just for our incoming first-year students for that fall, thinking maybe we'd have 6,000 or 8,000 exchanges with the students before the beginning of fall classes. Over those four months, we had over 180,000 exchanges. The average response time when a student asked a question was about six seconds. But we were also nudging the students to get them to do things, proactively guiding them through the process. We saw interesting things in the data. The use of the chatbot was heavier at 12:00 midnight than at 9:00 in the morning -- an indictment of our standard business practices. But that's when 17- and 18-year-olds are ready to do the FAFSA or deal with their registration issues. They're not doing it at nine in the morning. It's after work, it's after school, and they're taking on the questions late at night.
And we asked the students in focus groups after that first summer, what was their experience? And we heard multiple times from students, "I asked the chatbot questions I wouldn't have asked a human being." If you can't get your FAFSA completed because you don't have access to your father's taxes, because you haven't seen your biological father in the last two years, the last thing you want to do is go into some stranger's office and spill out your personal history. The students knew it was a bot. They liked the fact that it was non-judgmental and they could ask any question they wanted to.
So what was the bottom line? With this new tool in place, with students having access to easy answers, getting nudged in the right way, and being coached to navigate these processes, we reduced summer melt by more than 30%. That's hundreds of additional students, mostly low-income and first-generation, who are making it to the first day of class. It was a simple way of scaling personalized attention to our students, saving our staff hundreds of thousands of phone calls and emails while better cutting through the communication clutter. And because it worked so well for our incoming students, now every Georgia State student across all our campuses has the chatbot, from the time they first are confirmed to enroll at Georgia State to the time they hopefully graduate successfully. It's a tool at their side to guide them and coach them through the various processes that they engage in.
Bridget Burns:
So that's super helpful. We're getting lots of questions about the AI elements of the conversation, which I know we'll be able to bring some of the other experts to the table on. But Tim, what questions did you have around this idea of using AI? This is not just an FAQ that you draw on and pull from; there is an element where it's actually learning from your students. Can you talk about the discomfort you might have felt about that before you engaged? And what would you tell yourself now?
Tim Renick:
Yeah, there were concerns. We were one of the first schools in the country, in 2016, to really scale the use of AI chatbot technology for student engagement and student support services. So, one concern we had was: are the answers being produced by the chatbot accurate? And one way we controlled for that was by setting up a knowledge base and vetting the responses, and not having the AI and the chatbot generate new responses. The second thing we did from the very beginning, with the help of our tech partners at Mainstay, was to monitor very closely what was going on.
We knew that for the chatbot to work, it had to respond to a high number of questions immediately. Students get frustrated if they try a tool like this and half the time they're getting responses that are gibberish, or "I don't know the answer to that." They're going to move on to something else. So we worked very hard to make sure that students' questions were getting replied to in a timely fashion, and that most of them were handled by the AI. And over 90% of them were. And if any other issue came up, somebody was monitoring and looking closely at whether the student wasn't getting the response that they needed.
In practice, we had very, very few problems. Because even if a response that was selected was not quite the right one, the student was still getting accurate information, because we had vetted all of the responses coming out of the knowledge base. So there are guardrails you can set up. You monitor very closely what's going on. You can look at responses. The other thing you can do is set up trigger words. If a student uses a particular word, rather than the AI responding to it, you can have that text delivered directly to a human being. And I think, as the conversation proceeds, we'll talk about some ways where that sort of function can be really helpful for your Dean of Students office or other professionals on your campus.
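As a companion to the earlier sketch, here is one way the trigger-word guardrail Tim describes might look in code. The phrase list and the notification handler are invented for illustration; a production system would use carefully curated phrasing and vetted escalation channels.

```python
# Illustrative sketch of a trigger-word guardrail: certain phrases bypass
# the bot entirely and are routed straight to a human. The phrases and
# handlers below are hypothetical examples, not a real campus configuration.
TRIGGER_PHRASES = {"hurt myself", "losing the will", "in crisis"}

def notify_staff(message: str) -> None:
    """Page a person (e.g., the Dean of Students office); printing is a stand-in."""
    print(f"[urgent human escalation] {message!r}")

def answer_from_knowledge_base(message: str) -> str:
    """Stand-in for the normal knowledge-base routing shown earlier."""
    return "Here's the vetted answer to your question."

def handle_incoming_text(message: str) -> str:
    lowered = message.lower()
    if any(phrase in lowered for phrase in TRIGGER_PHRASES):
        notify_staff(message)  # a human replies, not the bot
        return "Thanks for telling us. We're connecting you with a person right now."
    return answer_from_knowledge_base(message)

print(handle_incoming_text("I need help in chemistry, I'm losing the will to live lol"))
```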
Bridget Burns:
Well, thank you so much for that additional context. And like I said, we're getting additional questions. And so, folks, thank you so much for putting those in, and we're going to keep working through them. I know that we want to get more into the AI and ChatGPT connection space, so we can go there in just a moment. But let's first bring on our other panelists so that we can broaden this discussion.
So, first I want to introduce Michael Berman, a higher education consultant who has worked at a variety of institutions throughout his career. We also have Drew Magliozzi, the CEO of Mainstay, which is the chatbot company that Georgia State uses. So hopefully, we can get a little bit more perspective on that case study. We also have Dawn Medley, who is at Stony Brook University, which also uses a chatbot, and she can give us perspective on their particular interpretation of how to apply one. And lastly, we have Aashir Nasim from Virginia Commonwealth University. Instead of purchasing a chatbot, I believe they actually created their own, so he can get a little bit more into that build-versus-buy discussion. Now we have a variety of perspectives, and I think we can dive into a broader discussion. So thank you all for joining us.
Michael Berman:
Thank you, Bridget.
Bridget Burns:
Great. So first, I'm going to go straight to the ChatGPT question, just because we're getting a lot of it. And Drew, I'm giving you a signal that I'm heading your way. I actually did try to use ChatGPT to generate the questions about chatbots, but the system was overloaded today. Can you help folks understand the difference between ChatGPT and what we're talking about here, and how there might be a connection?
Drew Magliozzi:
Yeah, there are a million flavors of AI, and it's been around for decades. But frankly, some recent discoveries have been game changers, such as ChatGPT, which is made by a company called OpenAI. It's getting a lot of buzz. It's a thing called a large language model. It's trained on vast amounts of data from the Internet, with a ton of computational resources -- I think about $100 million went into making it. It's remarkable at producing human-like writing, code, and even more. But it occasionally produces false, misleading, and downright biased content. A lot of educators are worried about the potential for cheating, and others are embracing the possibilities.
I've heard ChatGPT compared to a calculator, but for writing, in the sense that the calculator didn't eliminate the need to learn math, and in some ways even accelerated or advanced the learning. But the real question is how to accept a world where ChatGPT not only exists, but is going to get increasingly better over time, and how to adapt education to that reality, which is coming very quickly. In fact, experts say the next version of ChatGPT, due out in about a year, is expected to be about 50 times bigger and better than the current version.
And so I believe it's actually imperative to explore how AI can accelerate teaching and learning, so we can keep up with the disruption that's coming. And this might sound a little funny coming from the tech CEO in the group, but I don't think AI is the answer. It's really just a tool. And any tool can be used for good or to cause harm. That's why we've done ten rigorous research studies with various partners like Georgia State to actually see how it impacts students in the real world. We sort of have a Hippocratic oath, I guess, when it comes to AI. And I think maybe more companies and institutions should adopt something like this: at the baseline, measure outcomes to do no harm. And then set the bar at delivering proven outcomes that help to close equity gaps.
When it comes to generative AI -- at Mainstay, at least -- we're currently exploring how it can help human advisors incorporate five proven techniques of elite coaching into their student communications. We believe these new language models can actually help generate templates for advisors to modify and use, to express empathy and support most effectively based on the latest cognitive science. Basically, how we communicate matters a ton. And if used correctly, AI might actually be able to help us consistently be our very best to support students on their journey. There's obviously way more to go, but maybe that's a conversation for another time.
Bridget Burns:
I think it's just a tiny taste, but I appreciate that. I now want to take it to Michael Berman, because I know you've worked with a variety of institutions. And I want to drive to one question that seems to underlie some of what we're hearing. If you were giving advice to an institution, what should they watch out for, both good and bad, when making a decision about whether or not to use AI-enabled tools for their institution?
Michael Berman:
Well, thank you, Bridget, and thanks for including me in the panel. I would say, to follow up on the point that Drew made, and perhaps put it in slightly different language: a chatbot is not so much an IT decision as a decision about how you want to serve students. And I know that a lot of people have the impression that it reduces human touch, or that it's somehow alienating. But the data from the students doesn't really suggest that. It's an alternative form of communication. That would be like saying, "Well, email is less personal than talking to somebody face-to-face." Of course that's true, but you can't talk face-to-face with 2,000 or 20,000 or 50,000 people to give them all the same message. So you use email, and we're comfortable with that.
I think that AI is a much more sophisticated way to communicate, in many cases. And so I think it's about having a communication strategy and a student service strategy, and understanding the chatbot in that context. The worst thing you can do is treat it as an IT project and say, "Hey, IT folks, go off and research this and bring me AI chat." Because that's not going to be aligned with the needs of your students.
You have to start with the student. What is it the student needs? What are the goals of your institution? What are the problems that you're trying to solve? Start there. And the good news is that this type of AI, the type we're talking about today, is pretty mature. This is not cutting-edge stuff anymore. It's well understood. Mainstay and others have a good handle on how to use it effectively, and so you can learn from them.
Bridget Burns:
That's great.
Ian Wilhelm:
Thanks, Michael. I appreciate that sort of nuanced discussion. And you're right, I think people are skeptical when it comes to chatbots. We've got a comment here from someone named Brie in the audience, who says, "How do we ensure that making AI chat so widely accessible doesn't reduce live engagement of students with real humans, especially peer interactions? It seems like students are more and more opting for a passive experience. Are we doing students a disservice by reinforcing chat with computers as a way to get answers or solve their problems?"
And so, back to your point, I think what I've mostly heard is folks saying, "Hey, this actually can help with that human interaction, by making sure the types of questions students would prefer to ask a chatbot are handled by the chatbot," as Tim described. But Dawn, I want to get your thoughts on that, and on how you all have implemented this at Stony Brook. Have you had people ask whether this is taking away from that human interaction, which we want the college experience to be about, especially the residential college experience?
Dawn Medley:
So, at Stony Brook, we're known as a very strong STEM school. And so, we actually have a lot of researchers who are doing work on AI technology and how we can use that to enhance not only the student experience, but how we're teaching and how we're researching, and how we're determining things. And so, what we have found time and time again is any tool is available for good use and for misuse. And so, it's really important that when you enter into this that you have a partner that you can have these really honest and open conversations with.
And so, we believe we have students asking questions that they might not normally ask of an individual advisor or a mental health counselor, the same way Tim was describing. We're a residential campus where we're on campus from 9:00 to 5:00, and we support students really well during that timeframe. But what happens at eight o'clock at night? We have been able to utilize the tool to answer those questions in emergent situations, to support students with just-in-time information. And we see a lot of non-traditional students who are also wanting to do that, right?
So, we're seeing students ask questions that they don't want somebody else to know they're asking. Like where can they get a pregnancy test? Where could they find out about STD testing? What to do if they failed the test? Where can they get help? And so, AI is simply a tool that helps us filter down and make sure that we're delivering just-in-time, real-time support for our students. And that became even more evident during the pandemic, right? When we were all virtual and not in place. We wanted to make sure that we could provide services to students no matter where they were, and no matter where our staff members were.
And so, I think the potential of this is to allow us to have those deep-dive, beautiful, supportive, higher ed social work conversations with students, and eliminate some of the simpler questions. And so, it does allow our staff to connect more deeply. It allows them to connect in a way that benefits students, and it allows us to help the students be more successful in all areas.
Bridget Burns:
So Aashir, I want to bring you in, because we've heard from two institutions that have used an AI chatbot that a vendor provided. But you moved in a different direction. And we've got a lot of questions in the chat around how you make decisions in this space -- like where people can accumulate the FAQs, things like that. So, can you give a little perspective on how you navigated the decision of whether or not to do this, and whether to pursue an outside vendor or do it yourself? And any guidance you would give yourself in retrospect?
Aashir Nasim:
Yeah, it's really a pleasure to be here. We're here at VCU, a few years after what Tim and company did at Georgia State. We really wanted to be able to meet our students where they are -- that's really our value philosophy. And our practice was to develop an SMS messaging tool that could actually talk to students and get feedback from them: listening at scale and being able to respond to students. So we developed a technology called Climatext. Climatext allows us to send a focused prompt to our students asking them a particular question. Students can respond to that question, and we evaluate that data.
And the great thing about using the SMS technology was that we got a 50% response rate in just a couple of hours. What we were able to do was evaluate and consolidate this information, and within 48 hours we developed a document called a climate advisory. We would send this climate advisory out to about 250 senior leaders on campus, informing them about students' feelings, perceptions, and thoughts about particular issues. For one example, during the pandemic, we were able to use Climatext to understand what students' attitudes were about returning to campus. And of course, there were a lot of different concerns: concerns about access and accommodations, concerns about compliance issues, and concerns about the availability of mental health resources. Soon after we got this information from our students, we were able to share it through the climate advisory, and we were able to make decisions about the expansion of mental health resources for our students.
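For readers who want a feel for the mechanics, here is a rough sketch of the aggregation step in a Climatext-style loop: one focused prompt goes out, SMS replies come back, and the replies are tallied into a short advisory. The categories, keywords, and sample replies are invented for illustration and are not VCU's actual system.

```python
# Rough sketch of a Climatext-style feedback loop: collect SMS replies to
# one focused prompt, bucket them by topic, and emit a short advisory.
# Categories, keywords, and sample replies are illustrative assumptions.
from collections import Counter

def categorize(reply: str) -> str:
    """Naive keyword bucketing; a real system would be far more careful."""
    text = reply.lower()
    if "mental health" in text or "stress" in text:
        return "mental health resources"
    if "access" in text or "accommodation" in text:
        return "access and accommodations"
    return "other"

def build_advisory(replies: list[str]) -> str:
    counts = Counter(categorize(r) for r in replies)
    lines = [f"Climate advisory: {len(replies)} student responses received."]
    for topic, n in counts.most_common():
        lines.append(f"- {topic}: {n} mentions")
    return "\n".join(lines)

sample_replies = [
    "I'm worried about mental health support if we return",
    "Will there be accommodations for remote students?",
    "My stress levels are really high right now",
]
print(build_advisory(sample_replies))
```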
Fast forward a little bit toward the end of the pandemic: last year we really wanted to know what our students' perceptions were about online and remote learning. And we found, as all of us know now, that there were different attitudes about remote learning, and not everyone was able to thrive in that type of environment. What we found here at VCU is that our first-year students, our first-generation students, and our Pell-eligible students had less favorable ideas about, and less favorable experiences with, online learning. And with that information, our provost and our enrollment chief were able to make decisions about having the desired percentage of classes in person on campus -- particularly for our intro courses, our gen-ed courses, and the other classes a lot of our first-year students took, because they really thrived in that face-to-face interaction in the classroom. We were also able to make very good decisions about the extension of withdrawal deadlines and pass-fail grade policies.
So what we're able to do through this responsive feedback loop is collect information from our students, listening at scale and meeting them where they are, and then close that loop by responding and saying, "Hey, this is what we heard. This is how we're responding. This is how we're making the student experience better." Those are just a couple of examples of what we've been able to do at VCU. And now, working with our partner at Mainstay, we're very excited about the potential uses of this.
Ian Wilhelm:
Thanks, Aashir, appreciate that. And Tim, I want to bring you back into the conversation. We have a question from the audience regarding the monetary investment you made at Georgia State, and I want to hear a little bit about that. But I also want to ask you: how are you taking a research-based approach to the application of AI on campus? What were the findings of that research, and what do they indicate for other institutions?
So, two questions there. One, a question from Liam in the audience about the monetary investment you had to make. And then the question about how you're focusing your work on AI and taking a research-based approach, in the sense of, "Hey, what are we learning? How are we changing what we're doing because of those learnings?" And then perhaps applying those lessons elsewhere, or helping people apply them elsewhere.
Tim Renick:
Sure, Ian. Yeah, the two questions kind of combine, in a sense, because part of what we've seen in the research is that this approach has been highly effective in helping students stay enrolled at Georgia State. Now, my main motivation is the moral one: we have an obligation to these students, and we want to see them succeed. But that has monetary impact as well. So first, about the research. We initially used the chatbot and AI to help with summer melt. Then we expanded its use across all of our students, to help them navigate mostly enrollment and administrative issues. And more recently, we've begun to work with faculty members in high-enrollment courses with high non-pass rates to say, "The tool is already in the pockets of every student. This is on every smart device of our students. Can it help them succeed in your classes?"
And in every one of those iterations, we've used randomized controlled trials in order to do a careful analysis of the impact. In many cases, our partner has been Lindsey Page of Brown University, a researcher in AI and chatbot technology. And what we've seen is really resoundingly positive results. Not always. I saw one comment in the chat saying, well, it only improved FAFSA completion at Georgia State by 3.3%. That's fantastic across 50,000 students, where 60% of them are Pell-eligible. Getting another 3% or 4% to complete the FAFSA matters. And that's just one component of what we've seen.
Students are resolving holds on their accounts at higher rates. They're paying off balances. They're coming to see advisors when prompted at higher rates. And in the classroom, in some of these piloted courses, they're doing better in their final grades. The American government course was our first pilot. Final grades for students who had the chatbot were seven points higher; for first-generation students, they were 11 points higher. That translates into higher enrollment as well.
So we've made a sizable investment. We pay our technology partner, Mainstay, quite a large sum every year -- and I'll allow Drew to navigate that question so I won't misspeak. We also have a dedicated chatbot team at Georgia State: three FTEs now who do nothing all week long other than work the chatbot. But we're holding onto hundreds of additional students, and hundreds of additional students translate into millions of additional dollars from a revenue perspective. So we consider this to be a very positive intervention from an ROI perspective.
Bridget Burns:
That's great. Thank you for that. Drew, I'm going to bring you back in, because we're getting a lot of questions from folks specifically wanting us to go into the classroom. This conversation has been primarily focused on how an institution, specifically the administrative side of the house, might be using AI. But there have been a lot of questions about the use of ChatGPT and avoiding plagiarism, things like that. And I know that you perhaps can't answer all of those, but they give a sense of where folks' heads are. I want you to speak, in general, to the risks of implementing an AI chatbot, or the risks of AI overall. I think you hinted that it is not the savior. But we know that ChatGPT is going to change the world, and the question is how institutions can navigate this, be mindful of those risks, and mitigate them.
Drew Magliozzi:
I'll take the cheating one first, because it's a little bit more speculative. I would encourage folks to see this as the moment and the catalyst to embrace the flipped classroom more than ever before. It's been tried and tested and shows promise, and maybe this is a reason to do some writing in class. And then, second, I would encourage folks to find ways to teach with it, not try to avoid it. Say, "Hey, this is what ChatGPT generated; your assignment is to make it better." And strive to see how we can stack the skills of the technology with the skills of the people to create something even stronger. But then again, I think the real innovation is yet to come, from faculty members who are really pushing the boundaries on it.
When it comes to implementing this and doing it well: we've worked with more than 225 institutions, and the biggest barrier to innovation with this technology is frankly fear. And we get it. New technology is scary, and people are incredibly busy, and this asks them to do something that's different. But it can have a massive impact with the right approach. There are a few things that are absolutely essential for launching, whether it's Mainstay or any other vendor, and probably useful for all technology.
Number one, first and foremost, I would strongly advise people to involve students -- particularly your first-gen, historically resilient students -- to help with ongoing training and testing. In our experience, students almost never say, "Hey, what's the process for filing the FAFSA?" They just say, "I have no money." And we have to find out and tell them what they need in their exact words, and who better to ask than the students themselves? And it's been touched on a little bit: when you invite students into a conversation, particularly over text message, it's pretty intimate. And you can expect them to be candid, occasionally texting you the very thing that's keeping them up at 2:00 in the morning. This generation of students is more likely to whisper into their smartphones with their thumbs than walk into a stranger's office, as Tim said.
And so, I think it's absolutely essential that whatever tool you use makes sure no messages fall through the cracks. And how you reply to these situations matters a ton. That's why we've partnered with leading cognitive, behavioral, and emotion scientists, like the folks at the Yale Center for Emotional Intelligence, to ensure that messages are always supportive of students' social-emotional needs. And occasionally -- it's a very small number -- acute circumstances will arise. We've exchanged about 500 million messages over these several years, and everything that you can imagine being said to us has been said.
And we actually consider ourselves mandatory reporters, and we automatically detect and escalate sensitive topics to the right people on campus to take immediate action. I obviously can't talk about the specifics of some of these things, but those are the moments that are few and far between, and yet matter tremendously on the campuses where we operate.
Ian Wilhelm:
Yeah. Appreciate that, Drew. Thanks very much. I'm going to turn back to you, Michael, if I could, and ask you about the myths that can surround AI. What are the common myths that need to be busted in order for institutional leaders to recognize its potential on their campus -- both for good, and for problems if you don't implement it correctly? I just want to get a sense from you of what those myths are that you have to address when you're talking to folks about it for the first time.
Michael Berman:
Sure. Well, I think the first one I'll address is that AI is a single thing. It's actually a lot of different technologies -- an umbrella of technologies under a single name. And I know that right now, ChatGPT is the topic of the moment, and people are saying, "Well, it's going to generate answers. What if it generates answers that could harm the student?" But the current generation of chatbots, by and large, doesn't use that technology. And one of the reasons they don't is that it's not yet well enough understood how to do quality control and make sure it doesn't present inaccurate or biased information to students.
So, as Tim said, and I think Aashir said the same thing, they're essentially using the AI to understand as well as possible what the student is saying. And then directing them to a well-understood answer, or to a human being to answer if necessary. So, the current generation certainly isn't by and large creating new content on its own. When people say it's learning, it's not like it's learning on its own, like a child. It's learning because human beings are giving it more information and more context to do a better job of responding.
So, the image of an out-of-control AI that's just learning on its own and is going to start talking to the students in a way you don't understand -- that could be a legitimate concern in the future, but it's not really the present of what we're talking about with this type of tool. Another myth is that it's going to automatically save you a lot of time and money. It may not. There's a lot of human intervention. And one of the biggest mistakes that campuses make is to buy a product like Mainstay or another product, or to implement something like what Aashir's team did, and then just treat it as something that's going to run itself. "Well, the AI will run it." That's just not the reality.
Again, with the current generations of products, you need a lot of human involvement. And you need human beings who literally are going to respond at 2:00 in the morning when that message comes in from a student who's in crisis. You don't want the AI to try to help that student. You want to escalate that to a human being as quickly as you possibly can, and get them the help that they need. What the AI is doing is creating a portal where that information will get to you in a way that it otherwise might not. Because frankly, who is the student going to call at 2:00 in the morning? Maybe they have a dorm advisor if they're living on campus, but that's not the reality for most of our students, right? They're living off campus. Where do they go? Where do you go when you have a crisis at 2:00 or 3:00 in the morning? Well, it may be strange to think it might be an AI chatbot, but the reality is we know that sometimes it is. And that's a great opportunity to identify those problems and get those students the help and the attention they need as soon as possible.
Bridget Burns:
That's super helpful. And again, thank you all for the comments and the questions. You're really keeping us on our toes, so we're trying to make sure we get through as many as possible. Dawn, I did want to bring you in to follow up on the earlier conversation about adult learners. You in particular have focused the implementation of your chatbot on listening to and addressing the needs of adult learners. Can you help others understand how you approached those with some college but no degree, and how you might recommend communicating effectively with that group?
Dawn Medley:
Yes. So, at my previous institution, we had a large focus on adult student re-engagement. And as we continue to see the demographic cliff of graduating high school seniors, there are a lot of institutions out there that are really looking at not only how they can engage non-traditional students, but how they can re-engage those students who maybe attended their institution and then left. Some of the questions I've been watching in the chat are about how we truly connect with these students. If you are a non-traditional student, or you're a student who's been at an institution and left, you may not know where the front door is, or how to get back in that front door.
And so, one of the things we were able to do was run very targeted text-messaging campaigns out to those non-traditional students and those students with some college but no degree, and really address the issues they may have had. Did they have a past-due balance? Were they in federal loan default? Were they unsure about how to re-engage with their advisor? Maybe they wanted to switch majors.
Non-traditional students have an incredible variety of things in their way. It may be everything from childcare issues, to how to access their employer's tuition reimbursement program, to veterans' benefits. And by using the AI chatbot, we were able to engage with students where we thought they were. And then they could actually lead us to where we needed to be, to be a student-ready institution that could support them as they rejoined us.
And so, as we talk about being nimble institutions and we talk about being agile and student-ready, this is a tool that we can use to strengthen our individual connections with students and answer those questions in real time when students have real need. And I think there is no population probably more attuned to that than the non-traditional student population or those who are trying to reengage. Because the one thing we know about students who have left our institution is they know how to leave. We're not always sure that they know how to come back. And so I think that's most important when we're looking at these tools.
Bridget Burns:
Aashir, can you add a little bit more, building on what you heard from Dawn, speaking to the campus climate?
Aashir Nasim:
Yeah, absolutely. What Dawn, Tim, Michael, Drew, and everyone are talking about here is critically important: addressing the critical needs of students. But I also think there's an opportunity for us to zoom out to the 30,000-foot level. In my role as chief diversity officer here, we use the chatbot to understand organizational culture and climate. Oftentimes, when leaders want to make decisions on campus, we say, "Hey, what do students think?" And we may go to a student group that the provost or dean has lunch with periodically, or to the student government association, or we may only hear from the loudest students in student forums.
But if we zoom out, we're able to get the impressions and perceptions and thoughts of all our students. And then we're able to make decisions really quickly, within a very short amount of time, in real time, decisions that really involve the general population of students. And just as Dawn was pointing out, we can then begin to drill down into specific niches -- segmenting this data in such a way as to say, "Hey, these thoughts and perceptions are not universal; not all students are impacted the same." And that's when we start getting to that value practice of meeting students where they are: this particular constituency of students has particular needs that are different from those of other students.
And we're able to make decisions that are more tailored, that are more refined, that actually address student needs wherever they occur. So I think there's an opportunity, particularly for CDOs or chief enrollment officers, to look at this from the 30,000-foot level. But there are also opportunities for us to refine this in such a way that we can tailor our interventions, tailor our messages, to the populations that really need to hear them.
Ian Wilhelm:
Aashir, thanks for that. And I want to bring Tim back into this conversation. We've been talking about how these bots, these AI tools, can help us listen to students better. And you've got a great example of how that listening to students can also influence state policy in Georgia. I'd like to hear a little bit more about that. And I'm going to piggyback another question, Tim, and ask you a little bit about risk management. We talked a little bit about how this can be a tool for that; I'd like to hear more about the risk-management benefits and challenges it might provide.
Tim Renick:
Yeah, good questions. Maybe we'll put them both under the umbrella that Aashir is developing of listening to students and how AI can help --
Ian Wilhelm:
Good point.
Tim Renick:
-- us listen at scale. So, the example of state policy occurred fairly early on during the pandemic, where there was a lot of debate at the state level about when we were going to transition back from entirely remote to begin to do face-to-face classes. And we had some fairly strong guidance from our system office about the need to move back for fall of 2020, so about six months after we had moved to online classes. And so, we did a quick poll of our students -- now, imagine there are 50,000 students all on this tool -- and sent them a quick question: "Are you ready to come back for face-to-face classes this fall? And if we were to offer face-to-face classes, would you be willing to show up?"
And within literally an hour or so, we had something like 8,000 responses -- at a time when the state didn't really have any access to student voices. The campuses were shut down, and very little voice was being given to students. It had a profound impact. The bottom line is that many, many of the students, by far the majority, said, "No, I'm not ready to come back." We relayed those data to the state system office, and the state modified some of its instructions about rotating back to face-to-face classes.
So it's just one example of what Aashir is talking about. This is a way of giving voice to students at scale. If we had tried to do a traditional survey, it would've taken weeks and we'd probably have gotten a 3% response rate. But because the question came across on the home screen of a phone, the students could reply quickly, and we got a mass response in a short period of time. And we're listening to the students in different ways on a much more personal level as well.
There was a Wall Street Journal article recently that covered Georgia State's use of the chatbot, and one of the students the Wall Street Journal journalists contacted told a story that I didn't know about. She was in a difficult chemistry course, and late at night she was asking for help with chemistry tutoring via the chatbot. And she said on the chatbot, "I need help in chemistry because I'm losing the will to live, LOL." But because we had certain words set up as trigger words, we didn't treat that as an LOL. The message got delivered to somebody in the Dean of Students' office, who immediately reached out to that student. It was not a mere joke; she was in a very distressed state. And the university was able to connect her to counseling services and so forth.
So it does put an added obligation on campuses. When you're listening to students, you have to be responsible in responding to what you hear. And we have live staff monitoring the chatbot, ready to respond in these kinds of circumstances. Over the five or so years we've been using this technology, that's only one example of ways in which students have mentioned mental health crises that they haven't talked about with a counselor, or family members, or a roommate. But they've talked to the chatbot about it, and we've been able to get them help.
Bridget Burns:
That's super. That's a wonderful example that I think helps people understand the power of this, but also the responsibility. And so, Drew, I want to go to you, as we've got a question in the chat about data privacy and outside companies, which you represent. But I also want to connect this to what you just heard from Tim, which is that there are lots of ways to apply a chatbot. We're hearing about spaces where you get a lot of repetitive requests. There was a question earlier about large and small institutions, and I think there are lots of places at every institution where people are getting repetitive requests. That's where I would think about considering the development of a chatbot, or using a chatbot.
But when we're dealing with mental health challenges, we're into risk management, which is where this conversation is going. You shared the Hippocratic oath message, which we appreciated, and that got some good positive feedback. But can you speak to this issue of data privacy? You're an outside company, and from my perspective -- because I know that you have actually intervened to prevent a multitude of suicide attempts by students -- that's a huge responsibility for an outside company, and you've navigated it. Help us understand: as a campus thinking about an outside company, how do you think about these things, and what questions should I be asking you?
Drew Magliozzi:
Heck yes. So obviously, we're a vendor and a technology company, but really we consider ourselves a partner. We are not only fiduciaries of your data -- we've run the gauntlet of data privacy reviews -- but also fiduciaries of your student body. And look, there's no easy way to address these circumstances. They're very small in number, and as you said, they're not repetitive, but they absolutely need attention. We are very careful, as folks have pointed out, to tune the model to really recognize these things. And there's a multitude of ways students have expressed struggles, including potential self-harm.
We don't always close the loop ourselves; we escalate to advisors. But we've never heard of anything bad happening subsequently, which is a good sign. A few pieces of advice from what we've seen -- Tim, in testament to your team, the escalation went to the Dean of Students. Some folks prefer law enforcement as the escalation destination for these things, and we pretty strongly discourage that, because it can actually have an adverse effect when someone comes to intervene. But strangely enough, the key to having this sort of thing bubble to the surface -- to finding these little needles in the haystack -- is that you're not actually looking for them. You're approaching it from other areas, as Tim noted: "Hey, we're here for academic support," but you're talking to students in high-stress moments.
And these things will typically occur during midterms and finals, or at the start of a semester when transitions are happening. So be understanding of the fact that they could occur, prepare for the worst, but recognize that it's an extremely small number. We've done, like I said, hundreds of millions of messages, and it's only been about 100 total in the life of the company. But each one of those is of massive importance. And a text message is necessary, but definitely not sufficient, for addressing these issues directly. So it really is the folks on the ground who are doing the hard work.
Ian Wilhelm:
Thanks, Drew, I appreciate that. Michael, I was going to turn to you on this question. It's a big question, but I'm sure you get asked it a lot: what advice might you have on these questions of privacy and risk response?
Michael Berman:
Well, first of all, on the issue of privacy and protecting highly sensitive information, which this certainly involves: campuses have to develop a capability to analyze that themselves. And certainly, if you're on a campus, make sure you find out who the best people are. And if you don't have those people, advocate for getting some expertise in managing private data, because it's simply going to be a growing area, and the attention on it is going to be greater and greater.
I would say one of the considerations is how many chatbots you're going to have, how many different ways you're going to communicate. If you have a lot of different approaches, if you let a thousand flowers bloom, it's very, very difficult to maintain an understanding of the privacy issues. There's a good argument for having relatively few channels through which you communicate, having relatively few vendors, and then working very closely with those vendors, vetting them, and holding them accountable for protecting your data. So make sure you have good partners. Make sure you know what they're doing. And make sure you have expertise on your campus to look into what they're doing and vet that. This is not something to play around with -- certainly not if you're going to be collecting the kind of very sensitive information we've been talking about.
And just to flip the conversation, I think it's not going to be very many years until people ask you, "Why don't you have a service like this?" Because where is your student going to go when they're in crisis? I think about an analogy to the blue phones that most of us have on a physical campus. If a student were to be hurt late at night -- say they were attacked and there was no way for them to place a call to public safety -- you'd probably be asked, "Well, why didn't you have a way for them to call for help? They saw someone following them on a path, and there was nothing they could do."
So we provide various approaches. We provide blue phones. We provide patrols to try to protect students' physical security as best we can, because we care about them, and also because we want to manage the liability. And I think it's going to become very similar here: when students are off campus, where do they go? Where does a student go to get help? You need to be thinking about that question. Chatbots and AI are not going to be the only answer, but I think they're part of the mix.
Bridget Burns:
That's super helpful context. Thank you, Michael. And Dawn, I wanted to shift now. We've had a broad conversation about the use of AI and what exactly it means, including a little bit of discussion about the bias that could exist within AI. But I want to pull us back for a second. Clearly there's interest in a follow-up webinar on ChatGPT in the classroom, and we hear you on that. We'll definitely work with the Chronicle to identify the experts to support that discussion.
But Dawn, I'm just wondering if you can give us your quick synopsis of your recommendations for analyzing when and where to use a chatbot -- or whether to at all. Just your advice.
Dawn Medley:
So, I'll give you a foundational, simple example. When it comes to the financial aid process, a lot of students are really insecure and unsure about how to go through financial aid verification. And so, one of the ways we have utilized the chatbot is to walk them through the steps. "I don't know what happened here." Here's a question, here's an answer. We can walk students through that whole variety of questions and help them get through financial aid verification. Those are tasks. Those are not advice, those are not support. Those are not the caring, supportive things that our financial aid counselors do.
Then there are other situations, where students have to decide whether or not to take out a loan, whether to take a job, whether to drop down to half-time -- how they're going to actually pay for and finance their higher education. Those are deep-dive, personal conversations that we want them to be able to have with our professional staff. That is not "What is a tax transcript?" or "Where do I drop off this document?" So, two very different portions of a process, but both things that we as institutions need to be able to support our students through.
So, I see the AI and the chatbot helping us lead students through those task-oriented steps: "do this, do this, do this." And I see our staff continuing to grow those relationships, being supportive of students, and giving advice, right? A chatbot at this point cannot give advice and can't really comfort in the same way our professionals do. And I know that "comfort" may be a word that people aren't comfortable with on their campus, but that's what our professionals do. They allay fears. And that's what we want them to be able to do and have the time to do. The chatbot takes the task; our staff builds the relationship.
Ian Wilhelm:
Thanks, Dawn. I like "the chatbot takes the task, staff builds the relationship." We only have a few minutes left here, so I want to keep going on this idea of asking some of you for the best advice or key takeaways you'd like to make sure our audience leaves with. And just to echo what Bridget said: yes, the Chronicle and we can work on a session that talks about ChatGPT more specifically. We did put some resources in the chat for folks who are thinking more about how ChatGPT is going to affect, or not affect, how they teach in the classroom -- how it'll affect evaluation, composition, all those things. Those resources will be available there, and also in an email we'll send everybody after this.
Aashir, I was going to come back to you with your advice. You had a couple different things you said today, which we really appreciate.
Aashir Nasim:
Sure.
Ian Wilhelm:
What is something that you want to make sure someone leaves with today? We've heard a lot of different advice. What's something you want to make sure they take away?
Aashir Nasim:
Sure. I mean, I think that creating a thriving community on your campus supports the student experience and supports student success. And listening, and being able to meet students where they are, is part of that process. I think one of the more important things we can do as institutions is to close that feedback loop. Let students know that they've been heard, that their voice matters, that they belong, that they have agency, and that the university actually listens and responds. The extent to which we can close this responsive feedback loop and let students know that we've heard them reinforces their use of the chatbot, reinforces the utility of this particular tool.
So that would be my lasting advice to people: make sure that you let students know that you've heard them. Whether we're talking about things at the micro level we've been discussing -- mental health resources, financial services, advising -- or talking more globally about organizational culture and climate, let students know that you've heard them, and let them know, "This is the action we took as a result of responding to your need."
Bridget Burns:
Excellent. Tim, is there any other advice or context you would suggest folks have about this?
Tim Renick:
Yeah, just building upon the comments from Michael and Aashir: I think some of the comments suggest that people think of this as an all-or-nothing thing. If we're chatting with students via text, then we're not dealing with them personally. And the reality is your student body is complex and varied. We already have different modalities through which we communicate. Some students want to come in and meet face-to-face. We know other students are never going to knock on a door, so we email them, or we call them, or we have video chat and so forth.
And what we found is that texting is not only just another tool; given the nature of our current students, who grow up on these devices and talk to their friends via text and so forth, it's a particularly effective tool. So don't think of this as replacing personal connections. Think of this as another way you can build on those connections, and maybe reach students who otherwise wouldn't get the benefit of your support.
Bridget Burns:
And Drew?
Drew Magliozzi:
Yeah. I might actually add a slight variation, which is: don't treat it like a tool at all. Treat it as if it's a staff member you're hiring onto your team. Definitely don't buy AI technology, put it in the corner of your website, and forget about it. To get the full benefit, you've got to weave it into the process. And just as you would for a new colleague, a new hire to the team, assign your best people to help train it. Start simple and get more complex over time. And when mistakes inevitably happen, use them as teaching moments to get better. And lead with listening, as everyone has said. Texting is the most democratizing form of communication. Almost everyone uses it to talk to their family and friends. You have to be incredibly concise, but from that limitation comes a huge opportunity to get signal through the noise in both directions.
Bridget Burns:
Super helpful. I appreciate all this context about not treating it like a jump drive, and also really finding a partner, or finding folks inside your institution, who are going to approach this with a level of curiosity and attention -- and not treat you as just an entity to be provided a product. Because this is so critically important, especially as we think about that risk-management piece. And I also think that risk management should be part of the conversation in terms of supporting funding and staffing this, to make sure that the right folks are being identified and given the information they need.
So Ian, thanks again for having us. I want to thank all of our panelists for a rich discussion. It has clearly indicated that there is a great degree of interest here. I heard a lot in the questions that we're going to use to move forward, including how we use this to support faculty and staff mental health. There's a variety of ways that we can continue to support this discussion in higher ed. So thank you all for those questions and for spending your time with us.
Ian Wilhelm:
Thanks, Bridget. Thanks everyone for joining us again. Just to remind you, you will be getting an email with the link to the on-demand version of this broadcast. So please do share that with anyone who you think would be interested, as well as some of the links to the articles we posted today. And again, thanks to all our panelists. And of course, thanks to my co-host, Bridget Burns and the University Innovation Alliance. Bridget, thanks very much again.
Bridget Burns:
Always a pleasure. Thanks everybody.