If you’ve ever assigned a series of essays, you know how daunting grading all of them can be. Now imagine that you’re a state department of education, ACT, Inc., or another organization with literally thousands of essays to grade. How long do you think that takes? Days? Weeks? If the Hewlett Foundation has anything to say about it, no time at all, thanks to its contest to create new software that will automate the grading process.
There’s no consensus on whether multiple-choice tests or essay tests are more effective. Multiple-choice tests are far less subjective and better suited to simply testing knowledge. Conversely, it’s almost impossible to cheat off another student during an essay test, and wild guessing that proves nothing about one’s knowledge almost never works. As it currently stands, multiple-choice tests are much more common when there’s a time crunch, since they can be graded so quickly. We look forward to this contest bearing some fruit!
We also wonder how effective any essay-grading automation would be. How would software react to humor or an uncommon analogy it hasn’t been programmed to handle? Is it even possible for software to sense whether a paper has a consistent flow and train of thought? We look forward to finding out.
Have a great weekend, everyone!