From a distance, the idea that a computer can interpret, categorize, and describe a written document really does seem fantastical. It is difficult to reconcile the fantasy of a thinking machine with the reality of a world that can’t even build a jet pack.
In reality, autoclassification technologies perform admirably well when applied in the correct environments. There are circumstances where autoclassifiers do truly wondrous things, like discovering trends, interpreting sentiment, and (most important) reducing workload by 90% or more. But there are also circumstances where the tools will fail spectacularly. Thankfully, success is predictable and, even better, it is the more likely outcome.
I’ll give you an analogy. I’m certain by now that you’ve heard about Google’s self-driving car. It is not the first autonomous car ever constructed or tested, but it is the most successful by far. This isn’t because of the car or the computer, however; it’s because of the data. When Google began their experiments, they had access to highly accurate GPS and satellite data. The imaging algorithm didn’t have to waste its time looking for stoplights and crosswalks, because the system already knew where they were. Google knew in advance where cars were likely to be parked, and what was likely beyond the next hill or around the next bend. In essence, the car had access to critical, timely knowledge.
Autoclassification works beautifully when you can give it enough context in advance. Taxonomy, content modeling, and smart workflow can transform your raw content into immediately actionable data, and bring much-needed speed and accuracy to your operations.
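To make the principle concrete, here is a minimal sketch of context-assisted classification. The taxonomy, labels, and terms below are hypothetical examples, not part of any real product: the point is that the category knowledge is supplied in advance, so the classifier only has to match against it rather than discover structure on its own.

```python
# A hypothetical taxonomy, supplied in advance: category labels mapped
# to indicative terms. This is the "critical, timely knowledge" the
# classifier relies on, analogous to the car's map data.
TAXONOMY = {
    "billing": {"invoice", "refund", "payment", "charge"},
    "shipping": {"delivery", "tracking", "package", "courier"},
    "support": {"password", "login", "error", "crash"},
}

def classify(text: str) -> str:
    """Score a document against each category by counting taxonomy-term
    hits; return the best match, or 'unclassified' if nothing matches."""
    words = set(text.lower().split())
    scores = {label: len(words & terms) for label, terms in TAXONOMY.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "unclassified"

print(classify("Please send a refund for the duplicate charge on my invoice"))
# → billing
```

Note what the sketch does and does not do: with a well-built taxonomy it routes documents instantly and cheaply, but a document whose vocabulary the taxonomy never anticipated falls straight through to "unclassified" — the predictable failure mode described above.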