Semantic Folding: A Brain Model of Language

By: Francisco Webber

Human language has been recognized as a highly complex domain for decades, and no computer system has so far reached human levels of performance in processing it. The only known computational system capable of proper language processing is the human brain.

While we gather more and more data about the brain, its fundamental computational processes still remain obscure. The lack of a sound computational brain theory also prevents a fundamental understanding of Natural Language Processing (NLP). As always when science lacks a theoretical foundation, statistical modeling is applied to accommodate as much sampled real-world data as possible.

A fundamental yet unsolved issue is the actual representation of language (data) within the brain, denoted as the Representational Problem. Taking Hierarchical Temporal Memory (HTM) theory, a consistent computational theory of the human cortex, as a starting point, Cortical.io has developed a corresponding theory of language data representation: The Semantic Folding Theory.

Semantic Folding describes a method of converting language from its symbolic representation (text) into an explicit, semantically grounded representation called a semantic fingerprint. This change in representation can solve many complex NLP problems by applying Boolean operators and a generic similarity function like Euclidean distance.
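As a rough illustration of what such computations look like, the sketch below models a fingerprint as the set of its active bit positions. The fingerprint size, the example words, and the bit positions are invented for illustration; this is not Cortical.io's actual format or API.

```python
import math

# Hypothetical sketch: a semantic fingerprint modeled as the set of active
# bit positions in a large binary vector. The grid size, words, and bit
# positions below are invented for illustration.

FINGERPRINT_BITS = 128 * 128  # assumed 2D map flattened to 16,384 bits

def overlap(fp_a: set[int], fp_b: set[int]) -> set[int]:
    """Boolean AND: bits active in both fingerprints (shared meaning)."""
    return fp_a & fp_b

def euclidean_distance(fp_a: set[int], fp_b: set[int]) -> float:
    """Euclidean distance between two binary vectors: the square root of
    the number of positions in which they differ."""
    return math.sqrt(len(fp_a ^ fp_b))

# Toy fingerprints for two related words (bit positions are made up).
fp_dog  = {12, 87, 305, 4096, 9000, 15000}
fp_wolf = {12, 305, 777, 9000, 14000, 15000}

print(sorted(overlap(fp_dog, fp_wolf)))     # shared semantic features
print(euclidean_distance(fp_dog, fp_wolf))  # smaller value = more similar
```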

Many practical problems of statistical NLP systems, and more recently of Transformer models, can be elegantly overcome by applying Semantic Folding: the need to create large training data sets, the high cost of computation, the fundamental tension between precision and recall, complex tuning procedures, and so on. This article will show how Semantic Folding makes highly efficient Natural Language Understanding (NLU) applications possible.

The process of encoding words into a sparse binary representational vector, using a topographical semantic space as a distributional reference frame, is called Semantic Folding.
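A minimal sketch of that folding step follows, under strong simplifying assumptions: the topographic layout of contexts on a 2D map (in practice learned from a reference corpus) is hard-coded here, and the grid, contexts, and function names are illustrative only.

```python
# Illustrative sketch only, not the actual Cortical.io pipeline. We assume
# the contexts (small text snippets) have already been arranged on a 2D map
# so that similar contexts are neighbors; here that layout is hard-coded.

# A toy 4 x 2 map: animal contexts in the left column, finance contexts in
# the right column (indices run row by row).
semantic_map = [
    {"dog", "barks", "loud"},    {"bank", "loan", "rate"},
    {"dog", "wolf", "pack"},     {"bank", "account", "money"},
    {"cat", "dog", "pet"},       {"stock", "market", "bank"},
    {"wolf", "howls", "moon"},   {"interest", "rate", "loan"},
]

def semantic_fingerprint(word: str) -> set[int]:
    """Fold a word onto the map: its fingerprint is the set of map positions
    whose context mentions the word, i.e. a sparse binary vector."""
    return {i for i, context in enumerate(semantic_map) if word in context}

print(sorted(semantic_fingerprint("dog")))   # [0, 2, 4] -> 'animal' region
print(sorted(semantic_fingerprint("bank")))  # [1, 3, 5] -> 'finance' region
```

Because the map is topographic, the active positions of related words cluster in the same regions, which is what makes the resulting vectors directly comparable.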

The Semantic Folding Theory

The Semantic Folding theory is built on top of the Hierarchical Temporal Memory theory. Both theories aim to apply the newest findings in theoretical neuroscience to the emerging field of machine intelligence.

Hierarchical Temporal Memory

The Hierarchical Temporal Memory (HTM) theory is a functional interpretation of practical findings in neuroscience research. HTM theory sees the human neo-cortex as a 2D sheet of modular, homologous microcircuits that are organized as hierarchically interconnected layers. Every layer is capable of detecting frequently occurring input patterns and learning time-based sequences thereof.
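To make the idea concrete, and only as a caricature (HTM's actual spatial pooling and temporal memory algorithms are considerably more sophisticated), the toy layer below simply counts how often input patterns occur and records which pattern tends to follow which; all names are invented.

```python
from collections import Counter, defaultdict

# Toy caricature of one HTM layer: it tracks frequently occurring input
# patterns and the temporal transitions between them. The real spatial
# pooler and temporal memory are far more elaborate.

class ToyLayer:
    def __init__(self):
        self.pattern_counts = Counter()          # frequent input patterns
        self.transitions = defaultdict(Counter)  # sequence memory
        self.previous = None

    def feed(self, pattern: frozenset[int]) -> None:
        """Feed one input pattern (set of active bit positions) into the layer."""
        self.pattern_counts[pattern] += 1
        if self.previous is not None:
            self.transitions[self.previous][pattern] += 1
        self.previous = pattern

    def predict(self, pattern: frozenset[int]):
        """Return the pattern most often observed after `pattern`, if any."""
        followers = self.transitions.get(pattern)
        return max(followers, key=followers.get) if followers else None

layer = ToyLayer()
a, b, c = frozenset({1, 5, 9}), frozenset({2, 6}), frozenset({3, 7, 11})
for p in [a, b, c, a, b, c, a, b]:
    layer.feed(p)

print(layer.predict(a) == b)  # True: the layer has learned the sequence a -> b
```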

The data is fed into an HTM layer in the form of Sparse Distributed Representations (SDRs).

SDRs are large binary vectors that are very sparsely filled, with every bit representing distinct semantic information. According to the HTM theory, the human neo-cortex is not a processor but a memory system for SDR pattern sequences.
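A small numerical illustration of why this sparsity matters is shown below; the vector size (2,048 bits) and roughly 2% activity are values commonly used in HTM examples, chosen here for illustration rather than prescribed by the theory.

```python
import random

# Illustrative parameters: a 2,048-bit vector with about 2% of bits active.
N_BITS = 2048
N_ACTIVE = 40

def random_sdr() -> set[int]:
    """A random SDR, modeled as the set of its active bit positions."""
    return set(random.sample(range(N_BITS), N_ACTIVE))

# Overlap between unrelated (random) SDRs is almost always near zero: the
# expected overlap here is under one bit, and even the maximum over 1,000
# random pairs is only a handful of bits.
random_overlaps = [len(random_sdr() & random_sdr()) for _ in range(1000)]
print(max(random_overlaps))
```

Because chance overlap is so small, any substantial overlap between two learned SDRs is a strong signal that they encode shared semantic information.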

Semantic Folding

By taking the HTM theory as a starting point, Semantic Folding proposes a novel approach to the representational problem, namely the capacity to represent meaning in a way that makes it computable. According to the HTM theory, the representation of words has to be in the SDR format, as all data in the neo-cortex has this format.


