Semantic Folding is the initial acquisition of a two-dimensional semantic space that serves as a distributional reference for encoding word meaning.
Every word is characterized by the list of contexts in which it appears. Technically, each context is represented as a vector, and these context vectors are used to build a two-dimensional map in which similar vectors are placed close to each other, using topological (local) inhibition mechanisms and competitive Hebbian learning principles.
This results in a 2D map that assigns a coordinate pair to every context in the context repository. The mapping can be maintained dynamically by positioning each new context onto the map as it arrives.
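The mapping step above can be sketched with a toy self-organizing map, which is one concrete instance of competitive learning with a local (topological) neighborhood; the function name, grid size, and training schedule here are illustrative assumptions, not the actual Semantic Folding implementation:

```python
import math
import random

def train_context_map(contexts, grid=4, epochs=40, seed=0):
    """Place context vectors on a grid x grid lattice so that similar
    contexts end up at nearby coordinates (a toy SOM sketch).
    Returns {context_index: (row, col)}."""
    rng = random.Random(seed)
    dim = len(contexts[0])
    # One weight vector per grid cell, randomly initialized.
    weights = {(r, c): [rng.random() for _ in range(dim)]
               for r in range(grid) for c in range(grid)}

    def sqdist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))

    for epoch in range(epochs):
        lr = 0.5 * (1 - epoch / epochs)                      # decaying learning rate
        radius = max(1.0, grid / 2 * (1 - epoch / epochs))   # shrinking neighborhood
        for vec in contexts:
            # Competitive step: find the best-matching unit (BMU).
            bmu = min(weights, key=lambda p: sqdist(weights[p], vec))
            # Cooperative step: pull the BMU and its lattice neighbors
            # toward the input, weighted by distance on the grid.
            for pos, w in weights.items():
                d = math.dist(pos, bmu)
                if d <= radius:
                    h = math.exp(-(d * d) / (2 * radius * radius))
                    for i in range(dim):
                        w[i] += lr * h * (vec[i] - w[i])

    # Final assignment: each context gets the coordinates of its BMU.
    return {i: min(weights, key=lambda p: sqdist(weights[p], v))
            for i, v in enumerate(contexts)}
```

After training, similar context vectors land on the same or adjacent grid cells, which is the property the word-encoding step relies on.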
This map is then used to encode every word as a binary vector: for each position on the map, the vector contains a "1" if the word occurs in the context at that position and a "0" otherwise.
After serialization, we have a binary vector that possesses all the advantages of a Sparse Distributed Representation (SDR).
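The encoding and serialization steps can be sketched as follows, assuming the map from the previous step supplies a (row, col) coordinate per context; the function name and data layout are illustrative:

```python
def word_fingerprint(word, context_positions, contexts, grid):
    """Encode a word as a serialized binary vector over the 2D map:
    the bit at index row*grid + col is 1 iff the word occurs in a
    context mapped to that cell (row-major serialization). If several
    contexts share a cell, the bit is simply OR-ed to 1."""
    bits = [0] * (grid * grid)
    for idx, (row, col) in context_positions.items():
        if word in contexts[idx]:
            bits[row * grid + col] = 1
    return bits
```

For example, with three contexts mapped onto a 2x2 grid:

```python
contexts = [["cat", "dog"], ["dog", "bone"], ["car", "road"]]
positions = {0: (0, 0), 1: (0, 1), 2: (1, 1)}
word_fingerprint("dog", positions, contexts, grid=2)  # → [1, 1, 0, 0]
```

Because most words occur in only a small fraction of all contexts, the resulting vector is sparse, which is what makes it usable as an SDR.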
The process of Semantic Folding encompasses the following steps: