On representational format

 

A somewhat neglected but, I would argue, crucial issue in theorizing about language, conceptualization, and their interrelations is that of representation (who would deny this?), and that of representational format (RF) (perhaps less widely accepted). In particular, the designer of a representing device (RD) that represents as intelligently as possible must satisfy certain requirements which seem to be contradictory, or at least in tension:

(a) The RD must be able to represent as wide a variety of things as possible (the only RF that satisfies this requirement is symbolic representational format (SRF)).

(b) It must be able to represent as compactly as possible (the RF that best satisfies this requirement is what I will call analogical RF (ARF)).

(c) It must be able to use, and especially to store, representations that are as point-of-view-neutral (or “perspective-neutral”) as possible (ARF is more perspective-neutral than SRF), but it must also be able, sometimes, to store non-neutral specifications in any of its available RFs.

(d) It must be able to represent incomplete or fragmentary information or specifications (ARF is poorly adapted to this task; SRF is well adapted).

(e) It must be able to represent abstractions (SRF is the RF best adapted to this task).

(f) It must be able to use all of its representational capacities to metarepresent (SRF is indispensable to good metarepresentation, which typically involves fragmentary, and may require abstract, specifications).

(g) Especially, it must be able to develop external representation systems to supplement its necessarily limited internal representational capacities. Since this is a demanding task, necessarily done piecemeal, SRF is indispensable to it.

Let me elaborate a little on these points.
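Before doing so, it may help to make the contrast between SRF and ARF in (a)-(d) concrete with a toy sketch. The sketch below, in Python, is purely illustrative and not a claim about how any actual representing device is built; the data, and names such as symbolic_scene and analog_scene, are invented for the purpose. The same small spatial scene is specified once symbolically, as a set of discrete propositions, and once analogically, as a two-dimensional occupancy grid whose geometry carries the spatial relations implicitly.

# Toy illustration (not a model of any actual cognitive system):
# one small scene, specified first in a symbolic RF, then in an analogical RF.

# Symbolic RF (SRF): discrete propositions. Coverage is very wide:
# fragmentary and abstract facts are easy to state, but every spatial
# relation must be listed explicitly, one proposition at a time.
symbolic_scene = {
    ("left_of", "cup", "plate"),
    ("on", "plate", "table"),
    ("near", "cup", "plate"),          # fragmentary: no exact position given
    ("kind_of", "cup", "container"),   # abstraction
}

# Analogical RF (ARF): a 2D occupancy grid. Compact and geometry-rich:
# metric spatial relations among the placed objects are implicit and can be
# read off on demand, but partial or abstract information ("somewhere near",
# "is a container") has no natural place in the grid itself.
analog_scene = [
    [None,  None,    None],
    ["cup", "plate", None],
    [None,  "table", None],
]

def left_of(grid, a, b):
    # Derive a spatial relation from the analogical format instead of storing it.
    pos = {obj: (r, c) for r, row in enumerate(grid)
           for c, obj in enumerate(row) if obj}
    return pos[a][0] == pos[b][0] and pos[a][1] < pos[b][1]

print(("left_of", "cup", "plate") in symbolic_scene)  # True: stated explicitly
print(left_of(analog_scene, "cup", "plate"))          # True: derived from geometry

The point of the sketch is only that each format earns its keep differently: the grid is compact and answers many spatial questions for free, while the propositional set carries fragments and abstractions that the grid cannot naturally hold.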

For long-term memory (LTM) storage, SRF is quite often the worst choice, whether the SRF is that of some posited internal language of thought (LOT) or of some natural language. A specification in natural-language RF is, for many purposes, even worse than a specification in an SRF LOT would be; in fact, that is one reason why people have posited LOTs in SRF.

Of course, this does not mean that we do not have LTM specifications in natural language. Indeed we do, for instance in such cases as memorized poems, sayings, songs, prayers, etc., and also, arguably, for such specifications as multiplication tables and other numerical knowledge (cf., for example, Spelke and ??, 199?, and DeHaene, 199?), verbal definitions, and all sorts of information that cannot be fully represented in any other way. I argue elsewhere that, for instance, the information that kangaroos come from Australia, that Paris is the capital of France, that North is the opposite of South, or where the cities of a country are in relation to one another, and many other kinds of specification, are strongly language-dependent, in that, in learning and using them, we crucially involve our language-representation capacities, and that these specifications are stored in LTM in partly language-dependent RFs.

But a great deal of the knowledge we use in speaking, understanding, and thinking (with or without the support of language representations) is not stored in SRF, and it would be less convenient to use if it were. This is the case for various kinds of perceptual and near-perceptual specifications, schemas, episodic memories, melodies, faces, shapes, locations, layouts or “cognitive maps”, etc. The question of just what the RFs of these different kinds of LTM specifications are is, of course, difficult and controversial, but there are advantages, in numerous cases, to stripping LTM specifications, where this is possible, of their perspectival or attentional attributes.

For instance, it is controversial whether cognitive maps exist and, if so, what their nature is, how they vary from species to species, etc. (cf. JEB ??, Golledge, 1997), but a cognitive map will be more generally useful if it does not represent an environment from a single perspective, but is instead an abstract schema or set of specifications that can be used to generate, at use-time, a fragmentary Imap (internal map-like representation) adapted to a particular situation.
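To see concretely why perspective-neutral storage plus use-time generation is attractive here, consider another toy sketch (again mine, and not a reconstruction of any proposed cognitive map): landmark locations are stored once in an allocentric, viewpoint-free frame, and an egocentric, fragmentary view for the current position and heading is generated only when needed. The names allocentric_map and egocentric_view, and the data, are invented for the illustration.

import math

# Perspective-neutral LTM specification: landmark coordinates in a shared,
# viewpoint-free frame (arbitrary units, invented data).
allocentric_map = {"home": (0.0, 0.0), "well": (3.0, 4.0), "grove": (-2.0, 1.0)}

def egocentric_view(landmarks, position, heading_deg, max_range=4.0):
    # Generate, at use-time, a fragmentary egocentric "Imap": the bearing and
    # distance of each landmark currently within range of the given position.
    px, py = position
    view = {}
    for name, (x, y) in landmarks.items():
        dx, dy = x - px, y - py
        dist = math.hypot(dx, dy)
        if dist == 0.0 or dist > max_range:
            continue  # fragmentary: only what is relevant right now
        bearing = (math.degrees(math.atan2(dy, dx)) - heading_deg) % 360.0
        view[name] = {"distance": round(dist, 2), "bearing_deg": round(bearing, 1)}
    return view

# The same stored specification serves whatever perspective the RD occupies:
print(egocentric_view(allocentric_map, position=(1.0, 1.0), heading_deg=90.0))
print(egocentric_view(allocentric_map, position=(3.0, 3.0), heading_deg=0.0))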

A similar consideration motivates the kind of abstract 3D model for visual recognition (and imaging) proposed by Marr, and the various abstract knowledge structures often posited by psychologists and linguists. None of the specific proposals may be right; my point here is that there are major advantages to having LTM specifications that are abstract enough to serve as input to specific representational resources, the way a 3D model, not itself accessible to consciousness, could serve to generate introspectively accessible 2½-D representations (cf. Jackendoff, 198?), or the way a very schematic representation of a melody, a dance movement, a triangle, or a horse galloping could serve to generate, on-line, an open set of distinct introspectively accessible representations. There is no evidence that human memories are snapshot-like, and there is plausible evidence that memories are generated rather than simply activated (cf. for instance Schacter, 19??, Squire and Kandel, 199?), and, for well-known reasons, an RD would often be better off if this were how it was built: with a capacity to store perspective-neutral specifications in a specific RF, to be used in conjunction with a specific generative module or representational capacity. For instance, it would be able to recognize deviant instances or new presentations of known instances, handle representations more quickly without being overwhelmed by irrelevant detail, make useful generalizations, and, in general, respond more dynamically and creatively to new situations.
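A parallel toy sketch for the melody case (purely illustrative, and not a claim about how melodies are actually encoded): the stored specification is schematic, just pitch steps and relative durations, and concrete renditions in a particular key and tempo are generated from it on demand rather than retrieved ready-made. The schema and the render function are invented for the illustration.

# Schematic, perspective-neutral LTM specification of a melody fragment:
# relative pitch steps (in semitones) and relative durations, with no key,
# tempo, or timbre fixed in advance (invented example data).
melody_schema = {
    "steps": [2, 2, -4, 5],                  # intervals between successive notes
    "relative_durations": [1, 1, 1, 2, 1],   # in beats, one per note
}

def render(schema, start_pitch, tempo_bpm):
    # Generate one concrete rendition (MIDI-style pitch numbers and durations
    # in seconds) from the schematic specification.
    beat = 60.0 / tempo_bpm
    pitches = [start_pitch]
    for step in schema["steps"]:
        pitches.append(pitches[-1] + step)
    durations = [d * beat for d in schema["relative_durations"]]
    return list(zip(pitches, durations))

# One stored schema, an open set of distinct, concrete renditions:
print(render(melody_schema, start_pitch=60, tempo_bpm=120))  # in C, moderate tempo
print(render(melody_schema, start_pitch=67, tempo_bpm=90))   # transposed, slower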

This is the reason for giving our RD the capacity to store specifications in LTM in as perspective-neutral, as schematic, and, in fact, as low-level a form as possible. It is rather clearly also desirable for it to have the capacity to store specifications in less schematic and higher-level, hence, inevitably, sometimes less perspective-neutral, RFs, and to have some degree of choice over what it stores and how it stores it, as we clearly do, since we can rehearse and learn specifications in natural language and other external RFs, for instance arithmetic tables, logical and geometric proofs, maps, music notation, etc.