What is a “Concept”?

I believe it is fair to define a Concept as our mental representation of a discrete thing (tangible or abstract). A Concept often correlates well with a single dictionary meaning of a word or phrase. Conversely, several different words or phrases in one natural language might describe the same Concept. “Discrete” isn’t the same as “simple.” The universe can be treated as a discrete Concept.

It may be easier to keep this distinction in mind if you think of a Concept as crossing the boundaries of natural languages. Of course, that’s not absolutely true, because language influences our understanding of Concepts; most of us understand that even the names of physical things in different languages don’t have exactly the same connotations. It’s also not true in practice, because some users of semantic technology do indeed concern themselves primarily with words — literal strings — not the Concepts behind them.
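
To make the distinction concrete, here is a minimal sketch of a Concept as a data structure, in the spirit of SKOS-style concept schemes: one stable identifier for the Concept itself, with the words and phrases of various natural languages attached as labels. The identifier and labels below are invented for illustration; they are not part of any real vocabulary.

    from dataclasses import dataclass, field

    @dataclass
    class Concept:
        """One discrete unit of meaning, identified independently of any particular word."""
        concept_id: str                              # the Concept itself
        labels: dict = field(default_factory=dict)   # language tag -> words/phrases that describe it

    # Several different words and phrases, in several languages, all point to one Concept.
    dog = Concept(
        concept_id="ex:Concept/dog",
        labels={
            "en": ["dog", "domestic dog", "canine"],
            "de": ["Hund"],
            "fr": ["chien"],
        },
    )

    # The meaning is attached to the identifier, not to any single string.
    print(dog.concept_id, "->", dog.labels["en"])

Two different strings (“dog” and “canine”) map to the same identifier; users of semantic technology who work only with the literal strings are, in this picture, working one level below the Concept.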

Throughout this resource, I will try to use the word Concept (with an initial capital letter) in this narrow sense. Perhaps surprisingly, the majority of folks in the Knowledge Representation/Ontology community, in the Library & Information Science/Knowledge Organization community, and in the Knowledge Management community use this term in the same way.

The “triangle of meaning”

The definition of Concept I provide above is sufficient for most purposes, but to be more precise about this very basic and important notion, I should provide a little history from the KR community, which is more consistent in its use of the term than other disciplines.

In the world of semantics, the “triangle of meaning” (also known, in some variations, as the “semiotic triangle” or the “triangle of reference”) relates a Symbol, a Thought or reference, and a Referent.

According to John Sowa, this triadic relation has a long history. (Sowa, J. F. (2000). Ontology, Metadata, and Semiotics. Presented at ICCS’2000 in Darmstadt, Germany, on August 14, 2000. Published in B. Ganter & G. W. Mineau, eds., Conceptual Structures: Logical, Linguistic, and Computational Issues, Lecture Notes in AI #1867, Springer-Verlag, Berlin, 2000, pp. 55-81.)

In that document, Sowa uses a revised version of the traditional “triangle of meaning.” The good news: Sowa uses Concept instead of Thought or reference, and Object instead of Referent. The bad news: He flops the triangle left-to-right, in effect eliminating any directionality of the relationship between Object and Symbol. (Or is that good news?) And in what I first read as an unintended irony, he uses a graphic symbol — a cat — at the two corners opposite the Symbol corner; that was just my connotative first-glance interpretation. Nevertheless, this is another great paper by Sowa.

But even in a very basic sense, the “triangle of meaning” seems imprecise, incomplete, or maybe just plain wrong. It is definitely somewhat misleading: to readers like me, it implies things that may not be intended, or it is simply hard to grasp, given the terminology.

You have to negotiate the meaning of even this most basic representation of meaning itself. I simply have no idea what “causal” means in the relationships in the first graphic. And what is the direction of that relationship? If the “causal” relationship between Symbol and Thought is read left-to-right (as implied in the “imputed” relationship between Symbol and Referent), does that mean a Symbol evokes a Thought?

Are Concepts “building blocks”?

Another significant implication of this graphic — an implication often evident in discussions of semantics and knowledge representation on the Ontolog forum — is that Concepts are the building blocks of communication: when we are speaking or writing to each other, we select Concepts and assemble them into propositions using syntactical relationships. Then we convert that internal representation of the proposition into natural language.
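
As a rough illustration of what that mindset assumes, here is one way such an internal assembly is often pictured: Concepts selected by identifier, arranged into a subject-predicate-object proposition, and only then rendered as a sentence. The identifiers and the rendering step below are my own invented sketch, not something taken from the Ontolog discussions.

    # Hypothetical Concept identifiers (invented for illustration).
    SPEAKER = "ex:Concept/speaker"
    SEES    = "ex:Concept/sees"
    BALL    = "ex:Concept/ball"

    # Step 1: select Concepts and assemble them into a proposition
    # using a syntactic (subject, predicate, object) relationship.
    proposition = (SPEAKER, SEES, BALL)

    # Step 2: convert the internal representation into natural language.
    surface_forms = {SPEAKER: "I", SEES: "see", BALL: "a ball"}
    sentence = " ".join(surface_forms[concept] for concept in proposition) + "."
    print(sentence)   # -> I see a ball.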

OK, that’s probably the way it happens some of the time. But there is a potentially infinite number of Concepts, many of which differ only slightly from other, similar Concepts. So this “building-block” mindset is somewhat backwards. The speaker [one human communicating with other humans] knows what he means, and Concepts are always used within some context. Speakers rarely start with a set of Concepts and assemble them carefully into propositions (and natural-language sentences) that can be understood in any context. Maybe never.

People don’t communicate in isolated Concepts. Even a child pointing to a spherical object and saying “Ball!” is asserting, “I have just seen one of those things whose name is ‘ball.’ It’s like the other balls I have seen. You’re excited that I know this, Dad, right? So nod your head vigorously and say, ‘Yes! You’re a really smart child.’” (And at that point, the child has a very limited understanding of what a ball is.)

Even translators of natural languages do not try to achieve a perfect match between meaning expressed in one language and the same meaning expressed in another. What they are trying to do is make the meaning as clear as possible in a target language and presumed context — for example, English among professional biologists.

When do we get more formal about the meaning of Concepts?

Humans don’t need perfect representations of every Concept in their communications. But they do need explicit representations of meaning in order to …

  • Achieve reproducible “completeness” of descriptions of meanings
  • Provide a memory of those “complete” descriptions
  • Support addressability of those descriptions — “That, specifically, is what I am talking about.”

And we do negotiate meaning as necessary.

Ultimately, when we need to settle on a meaning and put that meaning to work, we move beyond informal communication to more formal representations of meaning, which may consist of more formal language (for example, legalese) or of programming languages, pictures, and diagrams — usually in combination with words.
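
As one sketch of what such a formal representation might look like (my example; the identifier, definition, and file format are invented, and a real system might use OWL, SKOS, or a database instead), each Concept gets an addressable identifier, an explicit definition, and a stored record that can be recalled and pointed to later:

    import json

    # A formal record for one Concept: addressable, explicit, and storable.
    concept_record = {
        "id": "http://example.org/concepts/ball",                     # addressability: "That, specifically."
        "definition": "A spherical object used in games and play.",   # a reproducibly "complete" description
        "labels": {"en": ["ball"], "es": ["pelota", "bola"]},
        "broader": ["http://example.org/concepts/toy"],
    }

    # Memory: persist the description so it can be retrieved and cited later.
    with open("ball.json", "w", encoding="utf-8") as f:
        json.dump(concept_record, f, ensure_ascii=False, indent=2)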

© Copyright 2017 Philip C. Murray

 
