The significance of natural language processing
Natural language processing (NLP) is attracting widespread attention, largely because of its significance to the development of ever more advanced artificial intelligence, or AI.
This is for good reason: historically, natural language, although easily understood by human beings, has been notoriously difficult for computers to navigate. The degree is up for debate, but there is no question that advancements in NLP are changing this, and computers are ever closer to understanding and managing natural language effectively.
This will eventually lead to computers that can understand not only structured data expressed in programming languages, but also data in the form of natural language, both structured and unstructured. It is precisely this impact on AI that is most revolutionary.
The data-information-knowledge-wisdom model
The significance of the current advancements in NLP, especially in relation to unstructured data, becomes apparent in light of the DIKW model (often represented in the literature as a pyramid), which I have discussed in the past, including specific considerations related to the knowledge -> wisdom peak of the model. The model highlights, among other things, how data and information lead to knowledge, and how knowledge, managed well, leads to organizational wisdom.
What I like about the model is that it also visually represents key relationships and phenomena. One example is what I call a 'knowledge failure': a failure to efficiently and effectively manage knowledge to create organizational wisdom (as part of the knowledge -> wisdom peak).
Knowledge overproduction as knowledge failure
‘Knowledge overproduction’, one such type of failure, is defined as “the development of knowledge in excess of that required to optimally maximize organizational wisdom”. At least, this has been the case historically. A corollary is that the challenge for any organization is not always to know more, but to know enough.
This is because producing knowledge beyond what is necessary has downsides, and I have pointed to them before. Most notable is the wasted effort spent making sense of things (I am a fan of David Shenk’s image of ‘data smog’), but there is also the lost effort directed at knowledge development, sharing, processing, and management (since underlying knowledge creation is the effort of obtaining data and information, and of further developing information into knowledge).
However, with current advancements in NLP, the very nature of knowledge overproduction as a ‘knowledge failure’ is changing.
Knowledge overproduction and natural language processing
The anticipated ability of computer programs to understand not only programming language but natural language brings a fundamentally qualitative leap to the field of knowledge management. The overwhelming majority of the data and information in the world takes the form of natural language. This is where NLP holds its potential.
Make no mistake, the field of computing has already brought powerful abilities to bear on knowledge management. This is true at the fundamental level of computing itself. Historically, however, efficient and effective management of knowledge has been limited by the individual human beings involved (albeit with the support of software and hardware). The extent to which those limits shift with NLP is monumental.
What NLP does is make possible the efficient and effective management of inordinate amounts of data and information, beyond what has ever been possible by human capacity. And it makes this, for all intents and purposes, effortless.
Knowledge overproduction: an opportunity
What occurs when a non-human entity understands natural language to a sufficient degree to efficiently, effectively, and almost effortlessly manage limitless knowledge?
Another view of maximizing the knowledge -> wisdom transformation process is to look not at ‘knowledge failures’ but at ‘knowledge opportunities’. In contrast to knowledge failures, these are opportunities to more efficiently and effectively manage knowledge to create organizational wisdom. With its ability to manage unstructured data at a potentially unlimited scale, NLP is contributing (together with secondary factors such as digitization and big data) to the greatest knowledge opportunity of this generation.
As a professional in the health care industry, I see one example of unstructured data interaction in the specialists employed to review clinical documents on a regular basis. Because these documents are frequently unstructured data, a human being has historically reviewed them for a stated purpose. This is true not only in hospitals, but also in pharmacies and other health care settings. Imagine if an organization could quickly have its finger on the pulse of the aggregate and component data related to the minute details of things as significant as long-term patient outcomes. And this is but a speck of the market.
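To make the idea concrete, here is a toy sketch of what automating even the crudest slice of that review might look like. Everything in it is invented for illustration: the notes are hypothetical, and the keyword lists are a naive stand-in for genuine language understanding (real NLP systems rely on far richer language models than word matching). The point is only the shape of the workflow: unstructured text goes in, an aggregate signal comes out, with no human reading each document.

```python
# A toy sketch, not a clinical tool: scanning unstructured clinical notes
# for outcome-related language, the kind of review a specialist has
# historically performed by hand. All data and keywords are hypothetical.
import re
from collections import Counter

# Hypothetical free-text clinical notes (unstructured data).
notes = [
    "Patient reports marked improvement in mobility; pain resolved.",
    "No improvement observed; symptoms worsened over two weeks.",
    "Follow-up shows stable condition, wound healing well.",
]

# Naive outcome indicators; a placeholder for real language understanding.
POSITIVE = {"improvement", "resolved", "healing", "stable"}
NEGATIVE = {"worsened", "deteriorated", "relapse"}

def tally_outcomes(texts):
    """Count positive and negative outcome cues across a batch of notes."""
    counts = Counter()
    for text in texts:
        # Lowercase and split into words, ignoring punctuation.
        words = set(re.findall(r"[a-z]+", text.lower()))
        counts["positive"] += len(words & POSITIVE)
        counts["negative"] += len(words & NEGATIVE)
    return counts

print(tally_outcomes(notes))
```

Even this crude version exposes why real NLP matters: keyword matching happily counts "No improvement" as a positive cue, which is exactly the kind of mistake that natural language understanding, as opposed to pattern matching, is meant to eliminate.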
Imagine all of the data in the current era of big data: emails, reports, publications, texts…the list is as endless as technology is relentless in pushing new modes of producing, storing, and sharing knowledge. And imagine this across the industries of today, banking and finance, energy, information technology, social services, all of it produced and refined at tremendous cost, and, historically, consumed and synthesized at tremendous cost as well.
Imagine AI, with natural language understanding (NLU), radically changing the very foundations of the feasibility and cost structure of the review of unstructured data. NLP has an ability to radically shift how we transform knowledge into organizational wisdom. Eventually, NLP could bring us full human language in all digital systems, with almost effortless knowledge management and synthesis.
This article is published as part of the IDG Contributor Network.