The term intentionality was introduced by Jeremy Bentham as a principle of utility in his doctrine of consciousness, for the purpose of distinguishing acts that are intentional from acts that are not. The term was later used by Edmund Husserl in his doctrine that consciousness is always intentional. It has been defined as “aboutness”, and according to the Oxford English Dictionary it is “the distinguishing property of mental phenomena of being necessarily directed upon an object, whether real or imaginary”.
The concept of intentionality was reintroduced in 19th-century philosophy by the philosopher and psychologist Franz Brentano, who described intentionality as a characteristic of sentience, of “mental phenomena”, by which they could be set apart from the insentient, natural “physical phenomena”. He used such phrases as “reference to a content”, “direction towards an object”, and “immanent objectivity”. Brentano coined the expression “intentional inexistence” (in the sense of existence within) to indicate the ontological status of mental phenomena directed upon objects that do not exist. For him, the property of being intentional, of having an intentional object, was the key to his psychological thesis distinguishing mental phenomena from physical phenomena, since physical phenomena sustain no intentionality.
A major problem within intentionality discourse is that participants often fail to make explicit whether or not they use the term to imply concepts such as agency or desire, that is, whether it involves teleology. Dennett explicitly invokes teleological concepts in the “intentional stance”. However, most philosophers use intentionality to mean something with no teleological import. Thus, a thought of a chair can be about a chair without any implication of an intention or even a belief relating to the chair. For philosophers of language, intentionality is largely an issue of how symbols can have meaning.
In current artificial intelligence and philosophy of mind, intentionality is a controversial subject, sometimes claimed to be something that a machine will never achieve. John Searle argued for this position with the Chinese room thought experiment, according to which no syntactic operations occurring in a computer would provide it with semantic content. Searle himself noted that his was a minority position in artificial intelligence and philosophy of mind.