For which reasons I say language is both. And neither.
Vernon Mountcastle, Professor Emeritus of Neuroscience at Johns Hopkins University and discoverer of the cortex's columnar organization, postulated the existence of a common algorithm by which the functionality of any region of the cerebrum is largely identical to that of any other. Despite superficial physiological differences among neurons, he posited, what largely accounts for the functional differences between regions of the brain is proximity to a sensory input area. More recent research bears this out in many ways: while neural activity in hierarchically-lower, primary sensory areas is somewhat erratic, shifting nearly in step with the outside world, the "upstream," hierarchically-higher areas behave much more statically, consonant with the notion of object permanence often considered a hallmark of higher intelligence. That is to say, while we hear, view, or touch the world with the respective sensory apparatus, we "listen," "see," and "feel" in a largely homogeneous, non-unique way -- whether the converging association of neurons (or their host columns) represents a diagonal line; a set of eyes, nose, and mouth comprised of the same; or the lovely female singer they belong to.
Armed with this premise, we may begin using language as a powerful inductive tool to offer us a glimpse into the mechanisms of cognition unrivaled by modern cerebral imaging. But how?
As Jeff Hawkins, founder of the companies Palm and Numenta, writes in his 2004 manifesto On Intelligence, there is an inherently "nested" structure to the universe around us to which hierarchically-associative modeling is well-adapted. Additionally, the time-delayed feedback by which the system is modulated provides the temporal context necessary to recognize the sequences in which natural phenomena occur.
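To make the idea of temporal context a bit more concrete, here is a toy sketch (my own illustration, not anything from Hawkins's book): a model that simply counts which token tends to follow which across example sequences, then uses that memory of sequence to predict what comes next. The training sequences are invented for the example.

```python
# A minimal sketch of prediction from temporal context: learn which
# token tends to follow which, then guess the next element of a
# sequence. A crude stand-in for the sequence memory Hawkins
# describes; the "songs" below are invented example data.

from collections import defaultdict, Counter

def learn_transitions(sequences):
    """Count, across all sequences, which token follows which."""
    table = defaultdict(Counter)
    for seq in sequences:
        for prev, nxt in zip(seq, seq[1:]):
            table[prev][nxt] += 1
    return table

def predict_next(table, token):
    """Return the most frequently observed successor of `token`."""
    followers = table.get(token)
    return followers.most_common(1)[0][0] if followers else None

songs = [["do", "re", "mi"], ["do", "re", "fa"], ["do", "re", "mi"]]
table = learn_transitions(songs)
print(predict_next(table, "re"))  # -> mi
```

The point is only that recognizing a sequence requires remembering what came before -- exactly the temporal context that, in the cortical story above, time-delayed feedback is said to supply.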
As we've described, all cerebral activity must follow this formula in one capacity or another, and language, being just one among the myriad faculties of the brain, is no exception. Phonemes beget morphemes, beget words, beget semantically meaningful phrases, beget conversations, and so on... but the sequence of the inputs is just as important as the set of sounds or words. While this process is evocative of the underlying neural mechanisms of thought, we absolutely cannot discuss it in terms of consciousness without ultimately referring back to the organism and its persistent sense of self-agency. I do hope to explore this premise in depth later, but at present let us consider language only in terms of its value in externally representing a universal cortical process.
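The "phonemes beget morphemes, beget words" hierarchy can be sketched in a few lines of code. This is only an illustration of hierarchical chunking, with a deliberately tiny, hypothetical lexicon: fast-changing low-level tokens are buffered until they complete a known higher-level unit, which is then passed upstream as a single, more stable symbol.

```python
# Toy illustration of hierarchical composition: low-level tokens
# (phonemes) are buffered until they complete a known higher-level
# unit (a word), which then replaces them upstream. The two-entry
# lexicon is purely hypothetical.

LEXICON = {
    ("k", "ae", "t"): "cat",
    ("s", "ae", "t"): "sat",
}

def chunk(phonemes, lexicon):
    """Greedily recognize known phoneme sequences as words.

    The buffer holds erratic, fast-changing detail; only when a
    full unit is recognized does a slower, more stable symbol
    emerge at the next level of the hierarchy.
    """
    words, buffer = [], []
    for p in phonemes:
        buffer.append(p)
        if tuple(buffer) in lexicon:
            words.append(lexicon[tuple(buffer)])
            buffer = []
    return words

print(chunk(["k", "ae", "t", "s", "ae", "t"], LEXICON))
# -> ['cat', 'sat']
```

Note that order is doing all the work here: the same three phonemes in a different sequence would complete nothing, which is the sense in which the sequence of the inputs matters as much as the set of sounds.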
Throughout this discussion in the days and weeks to follow, I aim to shamelessly exploit language to illustrate the manner by which the brain models and forms predictive associations (and meaningful behavior) from sensory stimuli both top-down (linguistically and psychologically) and bottom-up (chemically and physiologically), eventually reconciling the two in a discussion of the embodiment principle and proposing a practical means by which to test our theory.
I know there aren't (m)any of you out there, but I invite you all to provide your feedback and sharpshoot me at any and all opportunities!