Does AI Fit in Real Ed?


Co-Authored by Joshua Heyman

At COCE, we are committed to helping students succeed and creating positive learning experiences for both our students and instructors. Part of that is exploring available research and technology that supports students and allows instructors more teaching flexibility. Artificial intelligence (AI) stands out as an intriguing and rapidly advancing field that has numerous applications in higher education. But how does AI fit in real education?

Artificial intelligence broadly refers to technology that can perform tasks we associate with human thinking, particularly technology that can learn. AI permeates many aspects of our lives, from our smartphones and self-parking cars to the websites we visit. Elements of AI technology have had a tremendous impact on many fields, and yet, despite predictions to the contrary, AI hasn’t had the same impact on education. Ideas such as virtual mentors or AI tutoring, personalized course design and even automated essay scoring have been discussed for years, but have not progressed as many expected they would.

AI-Led Tutoring

For example, Intelligent Tutoring Systems (ITS) use artificial intelligence to analyze a learner’s various attributes to determine and implement a personalized instruction plan. This technology interacts in much the same way that a one-to-one tutor would, working with the student by asking questions about the subject area, providing feedback and adjusting the plan as it gathers more information about the student.
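To make the loop described above concrete, here is a deliberately simple sketch of the ask-assess-adjust cycle. Everything in it (the mastery estimate, the update rate, the difficulty thresholds) is a made-up illustration, not how any real ITS product works.

```python
class ToyTutor:
    """A toy sketch of an ITS feedback loop: estimate mastery, pick a
    question difficulty to match, and update the estimate from each answer."""

    def __init__(self):
        self.mastery = 0.5  # estimated probability the student knows the skill

    def next_difficulty(self):
        # Match the next question's difficulty to the current estimate.
        if self.mastery < 0.4:
            return "easy"
        if self.mastery < 0.7:
            return "medium"
        return "hard"

    def record_answer(self, correct):
        # Nudge the estimate toward 1.0 (correct) or 0.0 (incorrect).
        target = 1.0 if correct else 0.0
        self.mastery += 0.2 * (target - self.mastery)


tutor = ToyTutor()
for correct in [True, True, False, True, True]:
    level = tutor.next_difficulty()  # question the tutor would ask next
    tutor.record_answer(correct)
print(round(tutor.mastery, 3))
```

Real systems model learners far more richly (knowledge tracing across many skills, hint usage, response time), but the basic observe-and-adapt structure is the same.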

While this technology has yielded some very promising results, it is still unclear how effective these tutoring systems truly are:

  • Wenting Ma and colleagues found that student achievement was highest using ITS systems when compared to other forms of instruction, regardless of whether ITS was used as a supplement to traditional instruction or the sole means of content delivery.
  • Researcher Ryan Baker found that student achievement in ITS studies was heavily influenced by the way student achievement was measured, meaning that it is difficult to compare results of studies on ITS because they often use different definitions of student achievement. He believes technology may be effective in some applications, but still has a way to go in order to achieve its potential.

ITS technology (and research on it) is prevalent and rapidly growing. However, at least for now, the research has not demonstrated consistent enough results to suggest it be incorporated into our courses.


AI-Designed Courses

Another AI technology that may take shape in the near future is course design personalized to meet a specific learner’s needs. This technology is similar to ITS in that it uses AI to analyze student needs and adapt to them. Instead of working directly with the learner, however, this technology is more focused on content delivery, specifically tailoring the content and even the visual design of the learning environment for the student.

  • Early research on personalized learning paths by those such as Chih-Ming Chen has produced encouraging results in terms of improving student performance.
  • Fabiano Dorça and other researchers have conducted similar studies on a personalized course design approach specifically with this technology in mind. Results of these studies were promising and even showed some potential for being incorporated into existing learning management systems.

In many ways, this technology is similar to some learning management systems that specialize in a design in which students’ learning is adaptive based on their performance on assessments. While intriguing, more rigorous research on this technology needs to be done before it is ready for large-scale use.
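The adaptive delivery described above can be boiled down to a branching rule: serve different content depending on assessment performance. The sketch below is purely illustrative; the thresholds and module names are invented, and real adaptive platforms use far more signals than a single quiz score.

```python
def next_module(quiz_score):
    """Choose a (hypothetical) content path for a score out of 100.

    A toy stand-in for the adaptive branching that some learning
    management systems perform after each assessment.
    """
    if quiz_score < 60:
        return "remedial-review"      # reteach prerequisites first
    if quiz_score < 85:
        return "standard-lesson"      # continue the normal path
    return "enrichment-challenge"     # skip ahead to harder material


print(next_module(72))
```

AI-driven personalization goes further, adjusting not just the sequence but the content and presentation itself, yet the underlying idea of routing students by demonstrated need is the same.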


AI-Assisted Grading

The AI technology that we in COCE have researched and explored the most is Automated Essay Scoring (AES). This is what we found:

  • This form of automated grading is not a new idea; in fact, the idea was born in the 1960s, but the technology has been developed and advanced only in the last few decades.
  • AES technology requires an initial set of essays scored by a human; these examples are then used to train the software and build a scoring model for assessment.
  • This technology has been investigated as a means of reducing instructor workload, enabling instructors to focus more time on working with their students.

Most of the technology in this area is similar and goes well beyond simple word counts and spelling or grammar checking. Think of the phone in your pocket, a small computer in its own right. It may have a voice-activated personal assistant that understands spoken language, or text-entry functionality that completes words or suggests the next one as you type. These are examples of machine learning, and AES technology is based on similar machine learning and natural language processing (NLP) techniques to assess the structure of text. Human-graded essay examples train the software on style and rubric associations to build a scoring model, which can then analyze word choice, syntax, stylistic features and content of writing to assess mastery.
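To illustrate the train-then-score idea in miniature, the sketch below fits a scoring model from human-scored examples using a single, deliberately crude feature: word count. The training essays and scores are hypothetical, and no real AES engine is this simple; the point is only the workflow of learning a model from human-graded examples.

```python
def word_count(text):
    return len(text.split())

def fit_scoring_model(essays, human_scores):
    """Fit score ≈ intercept + slope * word_count(essay) by least squares.

    A toy stand-in for the far richer feature models (syntax, word
    choice, content) that commercial AES engines build from
    human-scored training essays.
    """
    xs = [word_count(e) for e in essays]
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(human_scores) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, human_scores))
             / sum((x - mean_x) ** 2 for x in xs))
    intercept = mean_y - slope * mean_x
    return lambda text: intercept + slope * word_count(text)


# Hypothetical human-scored training examples (scores on a 1-5 scale).
training = ["short answer", "a somewhat longer answer with more detail"]
scores = [1, 3]
score = fit_scoring_model(training, scores)
print(round(score("a brand new essay of middling length"), 2))
```

A model built only on surface features like this one is exactly what critics of AES warn about, which is why production systems layer in many deeper linguistic measures.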

This technology is primarily used in K-12 consortiums for standardized assessments that are aligned to Common Core standards. There has been very little use in higher education. There are a handful of vendors offering AES technology, and though they have slightly different approaches to their solutions, the general concepts are similar:

  • IntelliMetric by Vantage Learning and e-Rater by Educational Testing Service use the approach mentioned earlier (human-graded examples).
  • EASE (Enhanced AI Scoring Engine) uses an ever-growing library to train the system for scoring.
  • The Turnitin Scoring Engine uses customizable rubric criteria to train the engine.
  • Pearson’s WriteToLearn leverages the patented Knowledge Analysis Technologies (KAT) engine to measure reading comprehension.
  • Writing Roadmap by CTB/McGraw-Hill offers immediate feedback for content, organization, grammar, conventions and mechanics, which it scores based on an associated rubric.

Each vendor in this field puts its own proprietary spin on its product’s technology, but the general concepts do not depart far from one another.

To date, research on AES has focused on how well this technology can assess writing as compared to human raters. Typically this research has focused on short-form essay assignments, such as those used in many state tests and admission tests such as the SAT and the GRE, although there has also been limited research on more complex, longer assignments. Research on the technology has shown mixed results.

Several studies have produced promising results in favor of the accuracy and validity of AES systems:

  • Mark Shermis, an education researcher who has studied AES systems for years, has found strong correlations between the scores of human raters and the scores of AES systems, meaning that the two may be related to one another, statistically (although not necessarily causally).
  • Ou Lydia Liu and colleagues studied the application of AES technology on more complex assignments, which also yielded strongly correlated results between AES systems and human raters.
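The studies above compare systems by correlating AES scores with human raters’ scores. For readers unfamiliar with that measure, here is a minimal sketch of Pearson’s correlation coefficient applied to hypothetical paired scores (the score lists are invented for illustration, not drawn from any study).

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation between two equal-length score lists:
    covariance divided by the product of the standard deviations."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)


human   = [4, 3, 5, 2, 4, 3]  # hypothetical human-rater scores
machine = [4, 3, 4, 2, 5, 3]  # hypothetical AES scores for the same essays
print(round(pearson_r(human, machine), 2))
```

A value near 1.0 means the two raters rank essays similarly; as the critiques below note, even a high correlation shows agreement, not that either score is a valid measure of writing quality.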

Some research has challenged AES, both in its effectiveness and how it has been designed:

  • Jinhao Wang and fellow researchers found the technology to be unreliable in scoring written assessments, while Chase Geigle and colleagues found AES to be inconsistent with human raters on more complex assignments.
  • Les Perelman has challenged the research on AES, arguing that correlations alone are not sufficient support for the technology. He also argues that it overvalues the more superficial aspects of writing, such as word count, while ignoring more important elements.
  • Even Mark Shermis, whose studies produced positive results in favor of AES, cautioned against its use as the sole form of grading, at least until research can produce more tangible proof of accuracy.

Perception and Reality

Equally important to consider is the perception of the technology. Many educators are opposed to the idea of AES technology, regardless of its accuracy and legitimacy. They argue that a computer will never understand the human aspects of writing, and that assessment will ultimately be reduced to an algorithm, which in turn undervalues writing itself.

Due to the conflicting research on the topic, and the vital conversations happening concerning the technology, AES is likely not developed enough for us to pursue.

So, does AI fit in real education? At least for us, it seems that the time is not now. Artificial intelligence in higher education will undoubtedly continue to grow. Despite its limited presence today, as research continues and the technology becomes more refined, it will be applied more and more. COCE will continue to explore these and other technologies in its constant effort to improve the experiences of both our students and instructors.
