In 2020, Stanford University launched a free online introductory programming course called "Code in Place". The course reached 10,000 students in 2020 and 12,000 in 2021. However, the course could not provide individual feedback on student work because of the very large number of participants.
Computer scientists at Stanford wanted to correct this situation using artificial intelligence. The system they developed is based on a meta-learning approach, in which a first algorithm is used to improve the performance of a second. In effect, the computer learns how to learn to evaluate student work.
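The meta-learning idea can be sketched in miniature. In few-shot approaches of this kind, a handful of lecturer-labeled solutions for a new assignment (a "support set") is enough to grade the rest. The toy prototype-based classifier below illustrates that general principle only; it is not the Stanford system, and the feature vectors and grade labels are invented for the example.

```python
# Toy sketch of few-shot grading: for each new assignment, a few
# lecturer-labeled solutions (the support set) define one prototype
# per grade label; an unlabeled solution gets the label of the
# nearest prototype. All data here is invented, not Stanford's.

def prototype(examples):
    """Mean feature vector of the labeled examples for one grade label."""
    n, dim = len(examples), len(examples[0])
    return [sum(e[i] for e in examples) / n for i in range(dim)]

def distance(a, b):
    """Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def grade(solution, support):
    """support maps a grade label to lecturer-labeled feature vectors."""
    protos = {label: prototype(vecs) for label, vecs in support.items()}
    return min(protos, key=lambda label: distance(solution, protos[label]))

# Hypothetical features for one assignment, e.g. [tests passed, style score].
support = {
    "correct":    [[0.9, 0.8], [1.0, 0.7]],
    "off-by-one": [[0.5, 0.6], [0.4, 0.8]],
}

print(grade([0.95, 0.75], support))  # prints "correct"
```

The "meta" part in a real system lies in training the feature representation across many past assignments, so that a few labeled examples suffice on a new one; this sketch only shows the inner per-assignment step.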
However, training such a system requires a lot of data. Fortunately, the on-campus Stanford course on which "Code in Place" is based has existed for several years, so the researchers had a large repertoire of solutions submitted by students, each annotated by at least one lecturer. By analyzing how those solutions were categorized, the AI learned to assess students' work.
Once training was complete, the system was tested by grading student solutions that had previously been evaluated by several lecturers. The AI performed well, even surpassing human performance in some respects.
The real test, however, was using the AI to grade the 16,000 solutions submitted during the "Code in Place" course. A survey of students who received computer-generated evaluations found that 97.9% agreed with the feedback they received, a higher share than among students who received feedback from a human teacher (96.7%).
Artificial intelligence in the service of education?
According to the researchers, receiving feedback on one's work is essential to learning, because it distinguishes the areas a student has mastered from those that need improvement. Assessing tens of thousands of students, however, is an enormous workload: the researchers estimate that such a task would take 400 full-time lecturers about 8 months.
An application like the one the team developed could therefore raise the quality of online education by providing feedback to thousands of students. The researchers caution, however, that this approach cannot yet replace human instructors: their tests also showed that the algorithm was less effective than a human at evaluating more complex work.
In an interview with The New York Times, Oren Etzioni, a former professor of computer science at the University of Washington, said that advice and feedback from a professor, lecturer, or teacher will always be better than those provided by an algorithm. Still, the Stanford team's work is a step in the right direction, he said, because it is better to receive automated feedback than nothing at all.