I was tutoring the other day and a few learners asked me if Schoolhouse, a human-to-human tutoring platform, plans to incorporate AI.
It's a good question, and the answer is an evolving one. In short: we already are, but the particular uses of AI at Schoolhouse may surprise you.
In this post, I want to capture our current approach (as of March 2023) and our lessons so far. Rather than speculate about the future, let's look at what we've done to date.
I'll also note that while it's an exciting time, we've been playing with this technology since 2020. My first stab at using the GPT-3 API was to build a bot that would act as a student and test your knowledge by having you tutor it. TutorTheAI, as it was called. I soft-launched it on Reddit and also tested it out with the Schoolhouse community that was just forming at the time.
AI has evolved since then, but its potential to supercharge human tutors — to augment the human-to-human tutoring experience — was clear from the beginning. This is not about human tutors being replaced by AI, as I've discussed before. If anything, it would be tutors being replaced by other tutors who are leveraging AI. It seems likely that the relationship between AI and tutors will be a symbiotic one.
When trying to identify where AI can be most helpful on our platform — and by "helpful," I mean furthering our non-profit mission of connecting the world through learning — we've found that it's much more of an art than a science.
Two guiding questions have been helpful when considering the application of AI:
- Does it steer humans towards doing even more of what’s uniquely human?
- And simultaneously, does it play to the AI’s strengths?
For instance, AIs like the latest large language models are incredibly good at linguistic tasks. They sound like humans after all! On the flip side, they still sometimes struggle with math.
Even though Schoolhouse's focus is on math tutoring, language plays a key part. In fact, human dialogue is at the center of our Zoom tutoring sessions — it’s what makes them uniquely human.
As such, what if we could steer humans towards even more productive dialogue in these tutoring sessions using AI? After all, how one teaches (the pedagogy) is just as important as what one teaches (the content). And what if we could use AI to improve our tutors' pedagogy? That could have a transformative impact on tens of thousands of students being tutored.
This line of reasoning then brings up a third question — namely, does this application of AI address a core bottleneck of the platform? And yes it does!
For context, Schoolhouse records all tutoring sessions for safety and quality purposes, and then recruits more experienced tutors to review a subset of these recordings and provide feedback to the tutors. However, not every session recording can be reviewed in depth, and as we grow, it becomes harder to keep up.
That's where AI comes in. By partnering with Prof. Dora Demszky at Stanford, we have begun to provide automated feedback to tutors after their tutoring sessions. Imagine everything from simple stats on active learning:
AI feedback: "Your learners only spoke 14% of the time; next time, try to encourage even more participation."
...all the way to discerning types of dialogue:
AI feedback: "At this point in the session, you explained the solution yourself, but what if you had instead tried asking questions with Socratic dialogue like you had earlier on?"
Natural language processing and large language models alike allow us to detect various forms of language and how effective they may be for learning. Or even imagine an AI tutor that coached other tutors on how to use growth mindset language.
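To make the active-learning stat concrete, here's a minimal sketch of how a talk-time nudge could be computed from a diarized session transcript. This is illustrative rather than our actual pipeline: the `Utterance` structure, the speaker labels, and the 30% target are assumptions made for the example.

```python
from dataclasses import dataclass

@dataclass
class Utterance:
    speaker: str      # "tutor" or "learner" (speaker diarization assumed upstream)
    start_sec: float  # when the utterance began
    end_sec: float    # when it ended
    text: str

def learner_talk_share(transcript: list[Utterance]) -> float:
    """Fraction of total speaking time attributed to learners."""
    total = sum(u.end_sec - u.start_sec for u in transcript)
    learner = sum(u.end_sec - u.start_sec for u in transcript if u.speaker == "learner")
    return learner / total if total else 0.0

def talk_time_feedback(transcript: list[Utterance], target: float = 0.30) -> str:
    """Turn the raw stat into the kind of nudge a tutor might see."""
    share = learner_talk_share(transcript)
    if share < target:
        return (f"Your learners only spoke {share:.0%} of the time; "
                "next time, try to encourage even more participation.")
    return f"Nice balance: learners spoke {share:.0%} of the time."

# Example usage with a toy transcript
if __name__ == "__main__":
    session = [
        Utterance("tutor", 0, 60, "Let's look at quadratic equations..."),
        Utterance("learner", 60, 70, "Okay, so we factor first?"),
        Utterance("tutor", 70, 130, "Exactly, and here's why..."),
    ]
    print(talk_time_feedback(session))
```

The dialogue-level feedback (Socratic questioning, growth mindset language, and so on) is where the language models themselves come in, classifying utterances rather than just timing them.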
Taking a step back, Schoolhouse can be seen as a platform with two types of tutoring going on. One is the math tutoring itself, where a tutor helps a learner. The other is mentorship, where more experienced tutors mentor newer tutors on effective pedagogy. At the moment, AI is much better at the latter. This may seem surprising, since that kind of mentorship feels like it operates at a higher level. But remember that AI's natural proficiencies right now are in words, not numbers. This plays to its strengths.
At the same time, we are not replacing our peer review and mentorship systems at Schoolhouse. Rather, AI is augmenting those human processes so that every tutor can continue to receive feedback after each session. We're currently experimenting with these AI methods with a select few tutors, and in the coming months we hope to launch them to all tutors.
These tools are aimed at giving tutors feedback after their sessions, but we also have a number of training modules up front that we require new tutors to go through. We started to think about how we could improve these modules and get them even closer to the experience of real tutoring. After all, the best way to prepare for something is often to simply try it out. Perhaps we could use AI to simulate the tutoring experience for new tutors...
Sound familiar? We've decided to bring back the 2020-era TutorTheAI tool, this time in collaboration with Prof. Chris Piech at Stanford and leveraging the latest AI models. We haven't yet released this, but once we do, tutors will be able to experiment with different tutor moves on AI learners before progressing to real human learners. There'll even be an "Undo" button to replay scenarios. It'll now be possible to explore counterfactuals such as "If only I had introduced the concept to the learner this way..."
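For a sense of how such a tool might be structured, here's a minimal sketch of a simulated learner with an Undo button. It isn't the real TutorTheAI implementation: `ask_model` is a stub standing in for whatever LLM API gets used, and the prompt and class names are illustrative. The key idea is that the whole conversation state is just a message list, so "Undo" is simply rolling back the last exchange.

```python
# Sketch of a simulated AI learner with an "Undo" button (illustrative, not TutorTheAI's code).

LEARNER_PROMPT = (
    "You are a student who is confused about solving linear equations. "
    "Respond as a learner would: ask questions, make occasional mistakes, "
    "and only show understanding when the tutor's explanation earns it."
)

def ask_model(messages: list[dict]) -> str:
    """Placeholder for a real LLM call; here it just returns a canned confusion."""
    return "Hmm, I'm not sure I follow. Why do we do that step?"

class SimulatedLearner:
    def __init__(self):
        # The entire state is the message history, which makes Undo trivial.
        self.messages = [{"role": "system", "content": LEARNER_PROMPT}]

    def tutor_says(self, text: str) -> str:
        """Send one tutor move and get the AI learner's response."""
        self.messages.append({"role": "user", "content": text})
        reply = ask_model(self.messages)
        self.messages.append({"role": "assistant", "content": reply})
        return reply

    def undo(self) -> None:
        """Roll back the last tutor move and learner reply so a different move can be tried."""
        if len(self.messages) >= 3:
            del self.messages[-2:]

if __name__ == "__main__":
    learner = SimulatedLearner()
    print(learner.tutor_says("The answer is x = 3. Next problem."))
    learner.undo()  # replay the scenario with a more Socratic move instead
    print(learner.tutor_says("What could we do to both sides to isolate x?"))
```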
This just scratches the surface, but it provides a glimpse of Schoolhouse's experiments in AI so far and the questions we're asking ourselves. Yes, this nuanced take is not as flashy as some of the AI chatbots (we've tried building them ourselves), but it's where we've found the most success in helping students for the time being.
If I had to bet (and the fact that I even have to say this just shows how far society has come in the past few months), humans are not going anywhere. Instead, AI is going to make the human-to-human experience more essential than ever.
Thank you to Elysa K, Justin W, Evan T, Cassy M, and Maya B for providing feedback on this post.