What are the characteristics of a good model for practical knowledge?

Wikipedia credits statistician George Box with the phrase, “All models are wrong but some are useful.”

Box repeated the aphorism in a paper that was published in the proceedings of a 1978 statistics workshop.[2] The paper contains a section entitled “All models are wrong but some are useful”. The section is copied below.

Now it would be very remarkable if any system existing in the real world could be exactly represented by any simple model. However, cunningly chosen parsimonious models often do provide remarkably useful approximations. For example, the law PV = RT relating pressure P, volume V and temperature T of an “ideal” gas via a constant R is not exactly true for any real gas, but it frequently provides a useful approximation and furthermore its structure is informative since it springs from a physical view of the behavior of gas molecules.

For such a model there is no need to ask the question “Is the model true?”. If “truth” is to be the “whole truth” the answer must be “No”. The only question of interest is “Is the model illuminating and useful?”.

Source: All models are wrong
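
As one small, concrete illustration of Box's point (my own addition, not part of his paper), the sketch below compares the ideal-gas prediction with the van der Waals correction for one mole of carbon dioxide near room conditions. The constants are approximate textbook values; the exact numbers matter less than the moral.

    # Illustrative only: compare the "wrong but useful" ideal-gas law with the
    # van der Waals equation for one mole of CO2 near room conditions.
    R = 8.314      # gas constant, J/(mol*K)
    n = 1.0        # amount of gas, mol
    T = 298.15     # temperature, K
    V = 0.024      # volume, m^3 (about 24 liters)

    # Approximate van der Waals constants for CO2 (textbook values)
    a = 0.364      # Pa*m^6/mol^2
    b = 4.27e-5    # m^3/mol

    p_ideal = n * R * T / V
    p_vdw = n * R * T / (V - n * b) - a * n ** 2 / V ** 2

    print(f"ideal gas:     {p_ideal / 1000:8.2f} kPa")
    print(f"van der Waals: {p_vdw / 1000:8.2f} kPa")
    # The two values differ by well under one percent at these conditions:
    # the "wrong" model is entirely useful, which is Box's point.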

This wise advice also applies to developing a good reference model for representing, capturing, and managing practical knowledge.

See also, The Insight-Centric Knowledge Model — ICKMOD

So, what is a “model”?

In a post to the Ontolog Forum on 19 February 2013, Doug Foxvog observes:

At the most basic level, the model [that is, an ontology, PCM] is that of a hypothetical world: the statements are taken to be true for the purposes of reasoning without a need for the system being aware of any grounding of the truth of the statements. [emphasis added] Conclusions can be made by formal logic within the hypothetical world. A detected contradiction proves that at least one of the statements involved in the shortest proof of the contradiction is false.

Foxvog is pointedly correct. Computer ontologies are designed to be formalisms that can be processed by computers. ICKMOD is different. It draws its overall model from observations of how people communicate meaning in the real world. It’s not an ontology or a method for describing ontologies. However, all statements in ICKMOD should be supported as much as possible by formalized Facts and the formalized Concepts used to express those Facts precisely.

I might also add that Foxvog’s characterization may be slightly misleading, because many ontologies are quite well grounded in a view of reality — for example, an ontology of human anatomy. (Foxvog’s assertion was made in a narrow context. I have no doubt that this omission is just a matter of brevity.)
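
To make Foxvog's description concrete, here is a deliberately tiny sketch of my own (not Foxvog's, and not any real ontology language). The "world" consists only of the statements we choose to assert; reasoning proceeds by formal rules, and a contradiction tells us that at least one asserted statement must be false.

    # A toy "hypothetical world": a few facts and if-then rules, with no
    # grounding beyond the statements themselves.
    facts = {"penguin(Pingu)"}
    rules = [
        ("penguin(Pingu)", "bird(Pingu)"),       # penguins are birds
        ("bird(Pingu)", "flies(Pingu)"),         # (naive) all birds fly
        ("penguin(Pingu)", "not flies(Pingu)"),  # penguins do not fly
    ]

    # Forward-chain until no new statements can be derived.
    changed = True
    while changed:
        changed = False
        for antecedent, consequent in rules:
            if antecedent in facts and consequent not in facts:
                facts.add(consequent)
                changed = True

    # A statement and its negation together prove that at least one of the
    # asserted statements or rules is false, exactly as Foxvog observes.
    for statement in sorted(facts):
        if statement.startswith("not ") and statement[4:] in facts:
            print("Contradiction:", statement[4:], "vs.", statement)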

A useful model for representation of practical knowledge should …

With Box’s admonitions in mind, I have tried to adhere to the broad guidelines stated below. A useful model for representation of practical knowledge should …

  • Support both individual and group efforts. What I know must integrate well with what we know. Both individuals and organizations should see the problem of representing knowledge in the same way. Different perspectives, different interpretations, and different choices are expected. In fact, different interpretations of meaning — when resolved thoughtfully — are a benefit, not a disadvantage.
  • Allow creators/developers to start anywhere … incrementally. My bias, as the name of my model (ICKMOD) indicates, is to start with Insights. But someone else building a knowledgebase might start with a collection of Facts and Observations that seem to be related. The ability to start anywhere fits nicely with my conviction that incremental formalization is extremely helpful.
  • Always ask, “Is this useful?” All software applications approximate reality — except where they are purely about numbers. So also, applications created with TheBrain, ThinkComposer, and other “knowledge representation” tools will be approximations of reality. They are not guaranteed to be accurate or complete representations of reality. The marketplace will have to make judgments about whether they are useful.
  • Identify and keep a record of knowledge that has been consciously excluded as irrelevant. Much of our traditional process of building a reservoir of practical knowledge consists of excluding irrelevant information and assertions. So do we need to retain a browsable record of the stuff we have consciously excluded? Yes. That experience must be transferred to others. My years of reading posts to the Ontolog forum and other forums strongly indicate that the same irrelevant topics, issues, and assertions return to the forum periodically and bring with them prolonged, repetitive, distracting discussions … before they are dismissed once again. (One possible way to record such exclusions is sketched after this list.)
  • Provide consistent, clear terminology. I try to avoid the common academic habit of inventing new names for old things just to appear innovative. But I probably fail on this count, too. I try to choose terms that make sense to most people — without causing confusion with other common meanings of those terms. This is a challenge. When I do use a familiar term with a non-standard meaning, I give it an initial capital letter. But this practice probably makes little sense for readers of German.
  • A corollary to the preceding point: Help non-experts — including business managers — understand the significance and purpose of representing practical knowledge. Non-experts still do not understand what knowledge management means. People who label themselves as practitioners of KM only have themselves to blame. I am among the blameworthy, even though I proposed a very broad “Knowledge Management Reference Model” (KREF) 20 years ago.
  • Encourage and support the development and improvement of applications. One important aspect of this support: Enabling the transfer of data between competing and complementary tools devoted to representation of practical knowledge.
  • Adapt readily to changes in the captured information — the nodes, the relationships, and the metadata. At this early stage, even that division into nodes, relationships, and metadata risks being premature formalization, so the model must accommodate change.
  • Reflect good work already done in related disciplines — and harmonize that work as much as possible — thoughtfully and critically. Such harmonization must be based on how well these assertions from many sources reflect observed reality — not some set of over-arching generalizations — a grand unified theory, if you will. Reality should always trump theory, even when theory is helpful. Professional KR (“Knowledge Representation”) folks can probably lay claim to having produced the largest, most relevant, and most thoroughly vetted body of work in what I describe as “representation of practical knowledge.” But they are strongly biased toward the objective of automatic/computer-supported interpretation of natural language, and they appear to be dismissive of insights from the experience, innovations, and implementations in other fields — for example, simple tools for concept-mapping.
  • A corollary: Ground the model for representing practical knowledge in precise representation of meaning — precision sufficient for human purposes — that builds on good thinking done in the many domains associated with “knowledge representation” in a broad sense of the phrase.
  • Be suitable for addressing practical knowledge — knowledge that reflects a clear understanding of a set of immediate problems … but not necessarily mastery of the technologies and tools needed to execute specific solutions. This is typically the kind of knowledge that makes an area of expertise accessible to non-experts and to others within an organization working on different aspects of the same problem.
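
The sketch below is a hypothetical illustration of the point about excluded knowledge above, not a part of ICKMOD. It shows one way a knowledgebase might retain consciously excluded material together with the rationale for excluding it; the record structure, field names, and sample entry are all assumptions made for the example.

    # Hypothetical record structure (not part of ICKMOD) for retaining
    # consciously excluded material alongside the reason for excluding it.
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class Assertion:
        text: str                    # the assertion in natural language
        contributor: str             # who asserted it; no anonymity
        status: str = "candidate"    # "included", "excluded", or "candidate"
        rationale: str = ""          # why it was included or excluded
        related: List[str] = field(default_factory=list)  # ids of related items

    knowledgebase = [
        Assertion(
            text="Example of a recurring but irrelevant assertion (invented for this sketch).",
            contributor="recurring forum thread",
            status="excluded",
            rationale="Raised and dismissed repeatedly; excluded as not useful here.",
        ),
    ]

    # Browsing what was excluded, and why, keeps the next discussion short.
    for item in knowledgebase:
        if item.status == "excluded":
            print(f"EXCLUDED: {item.text}")
            print(f"  reason: {item.rationale} (per {item.contributor})")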

Implementations of modeled knowledgebases should benefit users

“Knowledgebases” created according to a “good” model and good practices should help users (readers or viewers) in the following ways:

  • Enable users to understand assertions about the realities of the domain.
  • Minimize ambiguity of such assertions.
  • Show relationships among ideas — for example, identify all significant supporting facts for an Insight.
  • See all significant aspects of the domain.
  • Identify arguments that are considered important or significant by most domain experts.
  • Identify arguments that are considered “true” by most domain experts.
  • Identify arguments that are considered “false” or deprecated by most domain experts.
  • Minimize duplication of information — especially duplication caused by (a) repetition of identical assertions and (b) restatement of the same assertion in different natural language.
  • Find assertions easily. And, conversely, hide information that is not relevant.
  • Easily see the connections between assertions about issues in the domain and the Contributors or Authorities (predicated Contributors) who made those assertions. All Contributors and Authorities must be identified explicitly; anonymity is not a good thing here. (One possible linkage is sketched below.)
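
One last hypothetical sketch, not an ICKMOD specification: if each assertion is mapped to a formalized Fact, exact repetitions and paraphrases of the same Fact collapse into a single entry, while every Contributor remains explicitly visible. The fact identifiers and sample assertions here are invented for the example.

    # Hypothetical sketch: group assertions by the formalized Fact they express,
    # so duplicates and paraphrases collapse while Contributors stay visible.
    from collections import defaultdict

    assertions = [
        # (fact_id, natural-language statement, contributor)
        ("F1", "The ideal gas law is only an approximation.", "G. Box"),
        ("F1", "PV = RT does not hold exactly for any real gas.", "workshop notes"),
        ("F2", "Useful models need not be literally true.", "G. Box"),
    ]

    by_fact = defaultdict(list)
    for fact_id, text, contributor in assertions:
        by_fact[fact_id].append((text, contributor))

    for fact_id, entries in by_fact.items():
        contributors = sorted({who for _, who in entries})
        print(f"{fact_id}: {len(entries)} assertion(s); contributors: {', '.join(contributors)}")
        for text in sorted({t for t, _ in entries}):
            print("  -", text)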

© Copyright 2017 Philip C. Murray

 
