Semantic Folding Theory - White Paper

Semantic Folding Theory and its Application in Semantic Fingerprinting
White Paper, Version 1.2
Author: Francisco E. De Sousa Webber
Vienna, March 2016



Contents

About this Document
  Evolution of this Document
  Contact
Abstract
Part 1: Semantic Folding
  Introduction
  Origins and Goals of Semantic Folding Theory
  The Hierarchical Temporal Memory Model
    Online Learning from Streaming Data
    Hierarchy of Regions
    Sequence Memory
    Sparse Distributed Representations
      Properties of SDR Encoded Data
    On Language Intelligence
  A Brain Model of Language
    The Word-SDR Layer
    Mechanisms in Language Acquisition
      The Special Case Experience (SCE)
      Mechanisms in Semantic Grounding
      Definition of Words by Context
    Semantic Mapping
      Metric Word Space
      Similarity
      Dimensionality in Semantic Folding
    Language for Cross-Brain Communication
Part 2: Semantic Fingerprinting
  Theoretical Background
    Hierarchical Temporal Memory
    Semantic Folding
  Retina DB
    The Language Definition Corpus
    Definition of a General Semantic Space
    Tuning the Semantic Space
    REST API
  Word-SDR – Sparse Distributed Word Representation
    Term to Fingerprint Conversion
    Getting Context
  Text-SDR – Sparse Distributed Text Representation
    Text to Fingerprint Conversion
    Keyword Extraction
    Semantic Slicing
  Expressions – Computing with Fingerprints
  Applying Similarity as the Fundamental Operator
    Comparing Fingerprints
    Graphical Rendering
  Application Prototypes
    Classification of Documents
    Content Filtering Text Streams
    Searching Documents
    Real-Time Processing Option
  Using the Retina API with an HTM Backend
  Advantages of the Retina API Approach
    Simplicity
    Quality
    Speed
    Cross-Language Ability
  Outlook
Part 3: Combining the Retina API with HTM
  Introduction
  Experimental Setup
  Experiment 1: "What does the fox eat?"
    Dataset
    Results
    Discussion
  Experiment 2: "The Physicists"
    Dataset
    Results
    Discussion
Part 4: References
  Literature
  Web


List of Figures

Fig. 1: Dense representation of the word cat - individual bits don't carry meaning
Fig. 2: Excerpt of a sparse representation of cat - every bit has a specific meaning
Fig. 3: Influence of dropped and shifted bits on sparse representations
Fig. 4: Grouping co-occurring features together improves noise resistance
Fig. 5: The word-SDR hypothesis
Fig. 6: Creation of a simple 1D word vector
Fig. 7: Distribution of the contexts on the semantic 2D map
Fig. 8: Encoding of a word as word-SDR
Fig. 9: Calling the Retina API to get information on the Retina Database
Fig. 10: A Semantic Fingerprint in JSON format as the Retina API returns it
Fig. 11: Word sense disambiguation of the word apple
Fig. 12: Aggregation of word-SDRs into a text-SDR
Fig. 13: The sparsification has an implicit disambiguation effect
Fig. 14: Computing with word meanings
Fig. 15: Similar text snippets result in similar fingerprints
Fig. 16: Distinct text snippets result in dissimilar fingerprints
Fig. 17: Classification using Semantic Fingerprints
Fig. 18: Filtering the Twitter firehose in real time
Fig. 19: A similarity-based configuration of a search system
Fig. 20: Text Anomaly Detection using an HTM backend
Fig. 21: Language independence of Semantic Fingerprints
Fig. 22: Overview of the experimental setup
Fig. 23: Concrete experiment implementation
Fig. 24: The "What does the fox eat?" experiment
Fig. 25: The "The Physicists" experiment
Fig. 26: Terminal log showing the results of "The Physicists" experiment


About this Document

Evolution of this Document

Document Version   Date               Edits                  Author(s)
1.0                25 November 2015   First Public Release   Francisco Webber
1.1                01 March 2016      Updated sections       Francisco Webber

Contact

Your feedback is important to us. We invite you to contact us at the following address with your comments, suggestions or questions: [email protected] (please remove spaces).


Abstract

The two faculties - making analogies and making predictions based on previous experiences - seem to be essential and could even be sufficient for the emergence of human-like intelligence.

It is common scientific practice to approach phenomena that cannot be explained by an existing set of scientific theories through the use of statistical methods. This is how medical research led to coherent treatment procedures that are useful for patients. By observing many cases of a disease and by identifying and taking account of its various cause-and-effect relationships, the statistical evaluation of these records enabled the making of well-thought-out predictions and (consequently) the discovery of adequate treatments and countermeasures. Nevertheless, since the rise of molecular biology and genetics, we can observe how medical science is moving from the time-consuming trial-and-error strategy to a much more efficient, deterministic procedure that is grounded on solid theories and will eventually lead to a fully personalized medicine.

The science of language has had a very similar development. In the beginning, extensive statistical analyses led to a good analytical understanding of the nature and functioning of human language and culminated in the discipline of linguistics. Following the increasing involvement of computer science in the field of linguistics, it turned out that the observed linguistic rules were extremely hard to use for the computational interpretation of language. In order to allow computer systems to perform language-based tasks comparably to humans, a computational theory of language was needed and, as no such theory was available, research again turned towards a statistical approach by creating various computational language models derived from simple word-count statistics. Despite initial successes, statistical Natural Language Processing (NLP) still suffers from two main flaws: the achievable precision is always lower than that of humans, and the algorithmic frameworks are chronically inefficient.

Semantic Folding Theory (SFT) is an attempt to develop an alternative computational theory for the processing of language data. While nearly all current methods of processing natural language based on its meaning use word statistics in some form or other, Semantic Folding uses a neuroscience-rooted mechanism of distributional semantics.

After capturing a given semantic universe of a reference set of documents by means of a fully unsupervised mechanism, the resulting semantic space is folded into each and every word-representation vector. These vectors are large, sparsely filled binary vectors. Every feature bit in this vector not only corresponds to but also equals a specific semantic feature of the folded-in semantic space and is therefore semantically grounded.

The resulting word vectors are fully conformant to the requirements for valid word-SDRs (Sparse Distributed Representations) in the context of the Hierarchical Temporal Memory (HTM) theory of Jeff Hawkins. While HTM theory focuses on the cortical mechanism for identifying, memorizing and predicting reoccurring sequences of SDR patterns, Semantic Folding Theory describes the encoding mechanism that converts semantic input data into a valid SDR format, directly usable by HTM networks.

The main advantage of using the SDR format is that it allows any data items to be directly compared. In fact, it turns out that by applying Boolean operators and a similarity function, many Natural Language Processing operations can be implemented in a very elegant and efficient way.

Douglas R. Hofstadter's Analogy as the Core of Cognition is a rich source of theoretical background on mental computation by analogy. In order to allow the brain to make sense of the world by identifying and applying analogies, all input data must be presented to the neo-cortex as a representation that is suited to the application of a distance measure.


Part 1: Semantic Folding

Introduction

Human language has been recognized as a very complex domain for decades. No computer system has so far been able to reach human levels of performance. The only known computational system capable of proper language processing is the human brain. While we gather more and more data about the brain, its fundamental computational processes still remain obscure. The lack of a sound computational brain theory also prevents a fundamental understanding of Natural Language Processing. As always when science lacks a theoretical foundation, statistical modeling is applied to accommodate as much sampled real-world data as possible.

A fundamental yet unsolved issue is the actual representation of language (data) within the brain, denoted as the Representational Problem.

Taking Hierarchical Temporal Memory (HTM) theory, a consistent computational theory of the human cortex, as a starting point, we have developed a corresponding theory of language data representation: Semantic Folding Theory.

The process of encoding words, by using a topographical semantic space as a distributional reference frame, into a sparse binary representational vector is called Semantic Folding and is the central topic of this document.

Semantic Folding describes a method of converting language from its symbolic representation (text) into an explicit, semantically grounded representation that can be generically processed by HTM networks. As it turns out, this change in representation, by itself, can solve many complex NLP problems by applying Boolean operators and a generic similarity function like Euclidean distance.

Many practical problems of statistical NLP systems, like the high cost of computation, the fundamental incongruity of precision and recall¹, the complex tuning procedures etc., can be elegantly overcome by applying Semantic Folding.

¹ The more you get of one, the less you have of the other.


Origins and Goals of Semantic Folding Theory

Semantic Folding Theory is built on top of Hierarchical Temporal Memory Theory [i]. The HTM approach to understanding how neo-cortical information processing works, while staying closely correlated to biological data, is somewhat different from the more mainstream projects that have either a mainly anatomic or a mainly functional mapping approach.

Neuroscientists working on micro-anatomic models [ii] have developed sophisticated techniques for following the actual 3D structure of the cortical neural mesh down to the microscopic level of dendrites, axons and their synapses. This enables the creation of a complete and exact map of all neurons and their interconnections in the brain. With this wiring diagram they hope to understand the brain's functioning from the ground up.

Research in functional mapping, on the other hand, has developed very advanced imaging and computational models to determine how the different patches of cortical tissue are interconnected to form functional pathways. By having a complete inventory [iii] of all existing pathways and their functional descriptions, the scientists hope to unveil the general information architecture of the brain.

In contrast to these primarily data-driven approaches, HTM theory aims to understand and identify principles and mechanisms by which the mammalian neo-cortex operates. Every characteristic identified can then be matched against evidence from neuro-anatomical, neuro-physiological and behavioral research. A sound theory of the neo-cortex will in the end fully explain all the empirical data that has been accumulated by generations of neuroscientists to date.

Semantic Folding Theory tries to accommodate all constraints defined by Hawkins' cortical learning principles while staying biologically plausible and explaining as many features and characteristics of human language as possible.

SFT provides a framework for describing how semantic information is handled by the neo-cortex for natural language perception and production, down to the fundamentals of semantic grounding during initial language acquisition.


This is achieved by proposing a novel approach to the representational problem, namely the capacity to represent meaning in a way that makes it computable by the cortical processing infrastructure. The possibility of processing language information at the level of its meaning will enable a better understanding of the nature of intelligence, a phenomenon closely tied to human language.

The Hierarchical Temporal Memory Model

The HTM Learning Algorithm is part of the HTM model developed by Jeff Hawkins. The intention here is not to give a full description of the HTM model, but rather to distill its most important concepts in order to understand the constraints within which the Semantic Folding mechanism operates.

Online Learning from Streaming Data

From an evolutionary point of view, the mammalian neo-cortex is a recent structure that improves the command and control functions of the older (pre-mammalian) parts of the brain. Being exposed to a constant stream of sensorial input data, it continuously learns about the characteristics of its surrounding environment, building a sensory-motor model of the world that is capable of optimizing an individual's behavior in real time, ensuring the well-being and survival of the organism. The optimization is achieved by using previously experienced and stored information to modulate and adjust the older brain's reactive response patterns.

Hierarchy of Regions

The neo-cortex, in general, is a two-dimensional sheet covering the majority of the brain. It is composed of microcircuits with a columnar structure, repeating over its entire extent.

Regardless of their functional role (visual, auditory or proprioceptive), the microcircuits do not change much of their inner architecture. This micro-architecture is even stable across species, suggesting not only that it is older on an evolutionary scale than the differentiation of the various mammalian families but also that it implements a basic algorithm to be used for all (data) processing.


Although anatomically identical, the surface of the neo-cortex is functionally subdivided into different regions. Every region receives inputs either originating from a sensorial organ or generated by the outputs of another region. The different regions are organized in hierarchies. Every region outputs a stable representation for each learned sequence of input patterns, which means that the fluctuations of input patterns become continuously slower while ascending the hierarchical layers.

Sequence Memory

Every cortical module performs the same fundamental operations while its inputs are exposed to a continuous stream of input data, among which it detects and memorizes the reoccurring sequences of patterns. Every recognized input sequence generates a distinct output pattern that is exposed at the output stage of the module. During the period where the input flow is within a known sequence, each module also generates a prediction pattern containing a union of all patterns that are expected to follow the current one, according to its stored sequences (experience).

The above capabilities describe a memory system rather than a processing system, as one might expect to find in this highest brain structure. This memory system is capable of processing data just by storing it. What is special about this memory is that its data input is different from its data output, and specific sequences of several input patterns lead to a single specific output pattern.

In contrast, all electronic memory components we integrate into computer systems use discrete addresses to store and retrieve data over a combined input/output port. These memory addresses do not represent data by themselves but rather denote the location of a specific storage cell within a uniformly organized array of such cells. This storage location is also independent of any actual data and holds whatever value is stored there without giving any indication of what the stored data corresponds to. All semantic information about the data has to be contributed by the associated program that stores and reads the data values in every storage cell it uses. This indirection decouples the semantics of the data from its actual (scalar) value.

It remains the sole responsibility of the associated software to handle the data in a meaningful way by executing a sequence of rules and transformations that act on the stored (scalar) data. The number of processing steps can vary substantially depending on the intended semantic goal, and all intended semantic aspects have to be known in advance, at the time the program is created. The inability to predict execution latency and the need to conceptualize all semantic processing steps in advance make it hard (if not impossible) to perform the input-to-output data conversion in real time. Furthermore, unexpected input data very often leads to a crash of the applied software.

The only technical memory architecture that comes close to the cortical memory described above is that of Content Addressable Memory (CAM), which corresponds in principle to standard memory cells with an address input and a data input/output, except that actual data is directly interpreted as an address and fed into the address input (hence content addressable). As an example, let's assume the string '2+3'. If each of the three characters is represented by an 8-bit ASCII code, we can interpret the string as a 24-bit address pointing to a specific memory location in an array of 2²⁴ (approx. 16 million) cells, in which the result '5' is stored. Whenever a term like '2+3' is entered on the address input, the result is returned one single cycle later on the data output.

Although this seems efficient in processing terms, such an architecture needs vast amounts of memory to become a general-purpose processing mechanism. A query like "which is the highest mountain on earth" would assume an address space of 38 × 8 bits = 304 bits, i.e. an array of approximately 3.26 × 10⁹¹ memory cells.
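The '2+3' example above can be sketched in a few lines of Python. This is an illustrative toy, not part of the paper: the dict stands in for the CAM's cell array and holds only the addresses actually in use, rather than all 2²⁴ cells.

```python
# Content-addressable lookup in the spirit of the '2+3' example: the ASCII
# string itself is interpreted as one wide binary address (8 bits per char).

def to_address(term: str) -> int:
    """Pack an ASCII string into a single integer address."""
    address = 0
    for ch in term:
        address = (address << 8) | ord(ch)
    return address

# Instead of allocating all 2**24 cells, store only the addresses used.
cam = {to_address("2+3"): "5"}

print(cam[to_address("2+3")])  # -> 5, retrieved in a single lookup step
```

The retrieval is a single associative step regardless of what the stored data means, which is the property the text attributes to cortical memory.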

The fact that memory has been, for a long time, the most expensive part of a computer has confined the use of CAMs to very specific and small applications, such as network appliances, where the address space is small and real-time processing important.

In contrast, the cortical memory has a mechanism to generate specific addresses for every memory cell depending on the data to be stored. This means that not all thinkable memory locations have to be present; rather, the address of a memory cell is learned whenever data gets stored. As a result, the cortical memory function implements the best of both worlds: indirection-free (no processor needed) semantic data storage with a minimum number of actually implemented memory cells. The result of any "query" can always be determined in a single step, while allowing query widths of thousands if not millions of bits.

This constant and minimal processing delay, independent of the nature of the processed data, is essential to provide an organism with useful real-time information about its surroundings.


A second big advantage of the HTM-CAM principle is that the amount of data that can be handled in real time increases linearly with the amount of available memory modules. More modules mean more processing power, which is a very effective way for evolution to adapt and improve the mammalian brain: just by augmenting the amount of cortical real estate.

Sparse Distributed Representations

The memory needed for a CAM can be substantially reduced if compression is applied to the addresses generated. Technical CAM implementations use dense binary hash values, where every combination of address bits points to a single memory location. Unfortunately, the computational effort to encode and decode the hash values is very high and counteracts, with growing memory space, the speed advantages of the CAM approach.

The architectural limitation to smaller and constant word sizes (8, 16, 32, 64 … bits), corresponding to a dense representation scheme, became the foundation of standard computer technology and triggered the race for ever-increasing clock rates pacing more and more powerful serial processing cores.

Fig. 1: Dense representation of the word cat - individual bits don't carry meaning

By using a dense representation format, every combination of bits identifies a specific data item. This would be efficient in the sense that it would allow for much smaller word sizes, but it would also create the need for a dictionary to keep track of all the data items recorded. The longer the dictionary list would become, the longer it would take to find and retrieve any specific item. This dictionary could link the set of stimuli corresponding to the word cat to the identifier 011000110110000101110100, thereby materializing the semantic grounding needed to process the data generated by the surrounding world.

Instead of realizing the semantic grounding through the indirection of a dictionary, it could also occur at the representation level directly. Every bit of the representation could correspond to an actual feature of the corresponding data item that has been perceived by one or more senses. This leads to a much longer representation in terms of number of bits, but these long binary words have only very few set bits (sparse filling).

Fig. 2: Excerpt of a sparse representation of cat - every bit has a specific meaning

By storing only the positions of the set bits, a very high compression rate becomes possible. Furthermore, the use of a constantly growing dictionary for semantic grounding can be avoided.
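A minimal sketch of this storage scheme, with a made-up word width and invented index positions:

```python
# A large, sparsely filled binary word is kept only as the index positions
# of its few set bits; the full vector is expanded on demand.

n_bits = 16_384                          # width of the hypothetical sparse word
active = {12, 301, 302, 4096, 9871}      # positions of the set bits

vector = [1 if i in active else 0 for i in range(n_bits)]

print(sum(vector), "of", n_bits, "bits set")  # -> 5 of 16384 bits set
```

Storing five indices instead of 16,384 bits is the compression win; as noted later under the SDR properties, even a subsample of the indices still identifies the item with little information loss.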

By using a sparse data representation, CAM computing becomes possible by simply increasing the number of cortical modules deployed. But one big problem remains: noise. Unlike silicon-based devices, biological systems are very imprecise and unreliable, introducing high levels of noise into the memory-computing process. False activation or false dropping of a single bit in a dense representation renders the whole word into something wrong or unreadable. Sparse representations are more resistant to dropped bits, as not all descriptive features are needed to identify a data item correctly. But shifted bit positions are still not tolerated, as the following example shows.

Page 15: Semantic Folding Theory - White Paper

SemanticFoldingTheory

Vienna,March2016 15

Fig. 3: Influence of dropped and shifted bits on sparse representations

In the above example, the various binary features are located at random positions within the sparse binary data word. A one-to-one match is necessary to compare two data items. If we now introduce a mechanism that tries to continually group² the feature bits that fire simultaneously within the data word, we gain several benefits.

Fig. 4: Grouping co-occurring features together improves noise resistance

A first advantage is the substantial improvement of noise resistance in the representation of messy real-world data. When a set bit shifts slightly to the left or the right (a blur effect happening frequently when biological building blocks are used), the

² In a spatial, topographical sense.


semantic meaning of the whole data-word remains very stable, thus contributing only a very small error value.

A second advantage is the possibility to compute a gradual similarity value, allowing a much finer-grained semantic comparison, which is mandatory for functions like disambiguation and inference.

If we assume the neo-cortex to be a memory system able to process data in real-time and to be built out of repeating microcircuits, the Sparse Distributed Representation is the minimum necessary data configuration, while being the biologically most convenient data format to be used.

Properties of SDR Encoded Data

I. SDRs can be efficiently stored by only storing the indices of the (very few) set bits. The information loss is negligible even if subsampled.

II. Every bit in a SDR has semantic meaning within the context of the encoding sensor.

III. Similar things look similar, if encoded as a SDR. Similarity can be calculated using computationally simple distance measures.

IV. SDRs are fault tolerant because the overall semantics of an item are maintained even if several of the set bits are discarded or shifted.

V. The union of several SDRs results in a SDR that still contains all the information of the constituents and behaves like a generic SDR. By comparing a new unseen SDR with a union-SDR, it can be determined if the new SDR is part of the union.

VI. SDRs can be brought to any level of sparsity in a semantically consistent fashion by using a locality based weighting scheme.
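The properties above can be illustrated with a small Python sketch. The vector size and sparsity are illustrative values, and all helper names are my own, not part of any Cortical.io or Numenta API:

```python
import random

SDR_SIZE = 16_384   # e.g. a 128 x 128 grid, flattened (illustrative)
SPARSITY = 0.02     # only ~2% of bits set (illustrative value)

def random_sdr(rng):
    """An SDR stored compactly as the set of indices of its set bits (Property I)."""
    n_set = int(SDR_SIZE * SPARSITY)
    return frozenset(rng.sample(range(SDR_SIZE), n_set))

def overlap(a, b):
    """Similarity as a computationally simple overlap count (Property III)."""
    return len(a & b)

def subsample(sdr, keep, rng):
    """Keep only `keep` of the set bits; semantics largely survive (Properties I, IV)."""
    return frozenset(rng.sample(sorted(sdr), keep))

def union(*sdrs):
    """Union-SDR that still contains every constituent (Property V)."""
    return frozenset().union(*sdrs)

def contains(union_sdr, sdr):
    """Membership test of an SDR against a union-SDR (Property V)."""
    return sdr <= union_sdr

rng = random.Random(42)
a, b, c = (random_sdr(rng) for _ in range(3))
u = union(a, b)                       # contains a and b, but not the unrelated c
sub_a = subsample(a, keep=100, rng=rng)  # still overlaps a far more than b
```

With realistic sizes, a random unrelated SDR is virtually never contained in a union-SDR, which is what makes the membership test in Property V reliable.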

On Language Intelligence

A frequent assumption about machine intelligence is that, in a machine executing a sufficiently complex computational algorithm, intelligence would emerge and manifest itself by generating output indistinguishable from that of humans.


In HTM theory, however, intelligence seems to be rather a principle of operation than an emerging phenomenon. The learning algorithm in the HTM microcircuits is a comparably simple storage mechanism for short sequences of SDR-encoded sensor or input data. Whenever a data item is presented to the circuit, a prediction of what data items are expected next is generated. This anticipatory sensor-data permits the pre-selection and optimization of the associated response by choosing from a library of previously experienced and stored output-SDR-sequences. This intelligent selection step is carried out by applying prediction and generalization functions to the SDR memory cells.

It has been shown that, on the one hand, prediction SDRs are generated by creating an "OR" (union) of all the stored SDRs that belong to the currently active sequence. This prediction SDR is passed down the hierarchy and used to disambiguate unclear data, to fill up incomplete data and to strengthen the storage persistence of patterns that have been predicted correctly. On the other hand, the output-SDRs can be regarded as a form of "AND" (intersection) of the stored SDRs within the current sequence. This generalized output-SDR is passed up the hierarchy to the inputs of the next higher HTM layer, leading to an abstraction of the input data to detect and learn higher-level sequences or to locate similar SDR-sequences.
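The two directions of flow can be mimicked with plain set operations on a toy sequence of stored SDRs (the bit indices are invented for illustration):

```python
# Toy stored SDR sequence: each SDR is the set of its set-bit indices.
stored_sequence = [
    {3, 17, 42, 99, 256},
    {3, 17, 58, 99, 301},
    {3, 17, 42, 77, 99},
]

# Downward "OR": the prediction SDR is the union of all SDRs belonging
# to the active sequence; it matches anything that could come next.
prediction_sdr = set().union(*stored_sequence)

# Upward "AND": the generalized output SDR is the intersection; only the
# features shared by the whole sequence survive the abstraction step.
output_sdr = set.intersection(*stored_sequence)
```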

In fact, intelligence is not solely rooted in the algorithm used. A perfectly working HTM circuit would not exhibit intelligent behavior by itself, but only after having been exposed to sufficient amounts of relevant special case experiences. Neo-cortical intelligence seems to be continuously saved into the HTM system, driven by an input data stream, while being exposed to the world.

A Brain Model of Language

By taking the HTM theory as a starting point, we can characterize Semantic Folding as a data-encoding mechanism for inputting language semantics into HTM networks.

Language is a creation of the neo-cortex to exchange information about the semantics of the world between individuals. The neo-cortex encodes mental concepts (stored semantics) into a symbolic representation that can be sent to muscle systems to form the externalization of the inner semantic representation. The symbols are then


materialized as acoustic signals to become speech, or as writing to become text, or as other more exotic encodings like Morse or Braille. In order to conceive the semantics of a communication, the receiver has to convert the symbols back into the inner representation format, which can then be directly utilized.

From the semantic point of view, the smallest unit³ that contains useful, namely lexical, information is the word.

The Word-SDR Layer

Hypothesis: All humans have a language receptive brain region characterized as a word-SDR layer.

During language production, language is encoded for the appropriate communication channel like speech, text or even Morse code or Braille. After the necessary decoding steps during the receiver's perception, there must be a specific location in the neo-cortex where the inner representation of a word appears for the first time.

³ From a more formal lexical semantic point of view, the morpheme is the smallest unit encapsulating meaning. Nevertheless, the breaking down of words into morphemes seems to occur only after a sufficient number of word occurrences has been assimilated, and probably occurs only at a later stage during language acquisition. Words would therefore be the algorithm-generic semantic atoms.


Fig. 5: The word-SDR hypothesis

The brain decodes language by converting the symbolic content of phoneme sequences or text strings into a semantically grounded neural representation of the meaning(s) of a word, the "semantic atom". These encoding and decoding capabilities are independent from the actual semantic processing. Humans are capable of learning to use new communication channels such as Braille or Morse, and can even be trained to use non-biological actuators like buttons or keyboards operated by fingers, lips, tongue or any other cortically controlled effector.

There is a place in the neo-cortex where the neurological representation of a word meaning appears for the first time, by whatever means it has been communicated. According to the HTM theory, the word representation has to be in the SDR format, as all data in the neo-cortex has this format. The word-SDRs all appear as the output of a specific hierarchical word-SDR layer. The word-SDR layer is the first step in the hierarchy of semantic processing within the neo-cortex and could be regarded as the language semantic receptive region.

Language is regarded as an inherently human capacity. No other mammal⁴ is capable of achieving an information density comparable to that of human communication, which suggests that the structure of the human neo-cortex is different,

⁴ Only mammals have a neo-cortex.


in that aspect, from other mammals. Furthermore, all humans (except for rare disabilities) have the innate capability for language, and all languages have very common structures. Therefore, language capacity has to be deeply and structurally rooted in the human neo-cortical layout.

Mechanisms in Language Acquisition

Although there is much discussion about the question whether language capacity is innate or learned, the externalization of language is definitely an acquired skill, as no baby has ever spoken directly after birth.

Language acquisition is typically bootstrapped via speech and then extended during childhood to its written form.

The Special Case Experience (SCE)

The neo-cortex learns exclusively by being exposed to a stream of patterns coming in from the senses. Initially, a baby is exposed to repeated basic phonetic sequences corresponding to words. The mother's repeated phonetic sequences are presented as utterances, increasing in complexity with new words being constantly introduced.

According to HTM theory, the neo-cortex detects reoccurring patterns (word-SDRs) and stores the sequences where they appear. Every word-sequence that is perceived within a short time unit corresponds to a Special Case Experience (SCE), comparable to perceiving a visual scene. In the same way that every perceived visual scene corresponds to a special case experience of a set of reoccurring shapes, colors and contrasts, every utterance corresponds to an SCE of a specific word-sequence. In the case of visual scene perception, the same objects never produce the exact same retina pattern twice. In comparison, the same concepts can be expressed with language by a very large number of concrete word combinations that never seem to repeat in their same exact manner.

The sum of the perceived special-case-experience utterances constitutes the only source of semantic building blocks during language acquisition.


Mechanisms in Semantic Grounding

The process of binding a symbol, like a written or spoken word, to a conceivable meaning represents the fundamental semantic grounding of language.

If we assume that all patterns that ascend the cortical hierarchy originate from sensorial inputs, we can hypothesize that the meaning of a word is grounded in the sensorial afferences at the very moment of the appearance of the word-SDR at the word-SDR layer. Whenever a specific word is perceived as an SCE, a snapshot of some (or all) of the sensorial afferences is made and tied to the corresponding word-SDR. Every subsequent appearance of the same word-SDR generates a new⁵ sensorial snapshot (state) that is AND-ed with the currently stored one. Over time, only the bits that are in common within all states remain active. These remaining bits can therefore be said to characterize the semantic grounding of that word.
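A minimal sketch of this AND-ing process, with invented feature bits standing in for sensorial snapshots of the word cat:

```python
# Hypothetical sensorial snapshots taken each time the word "cat" is
# perceived (sets of active feature-bit indices; all values invented).
snapshots = [
    {1, 4, 7, 12, 30, 55},   # cat seen on the sofa
    {1, 4, 9, 12, 41, 55},   # cat heard meowing
    {1, 4, 12, 23, 55, 60},  # cat being stroked
]

# AND each new snapshot with the stored state: over time only the bits
# common to all experiences remain, forming the semantic grounding.
grounding = snapshots[0]
for snap in snapshots[1:]:
    grounding = grounding & snap
```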

The mechanism described above is suitable for bootstrapping the semantic grounding process during the initial language acquisition phase. Over time, vocabulary acquisition is not only realized using sensory afferences but also based on known words that have been learned previously. Initially, semantic grounding through sensory states is predominant, until a basic set of concepts is successfully processed; then the definition of words using known words increases and becomes the main method for assimilating new words.

Definition of Words by Context

The mechanism of sensory semantic grounding seems to be specific to the developing neo-cortex, as the mature brain depends mostly on existing words to define new ones. The sensorial-state semantic grounding hypothesis could even be extended by correlating the sensorial-grounding phase with the neo-cortex before its pruning period, which explains why it is so hard for adults to specify how the generic semantic grounding could have happened during their early childhood.

The mature way of linking a concept to a specific word is by using other known words. Applying the previously introduced concept of a Special Case Experience, the mechanism can be described as follows: a sequence of words received by the sensory system within a sufficiently short perceptive time-interval can be regarded as an

⁵ The subsequent word-SDR snapshots are new in the sense that they have small differences to the previously stored one, and they are the same in that they have a large overlap with the previously stored one.


instance of a Linguistic Special Case Experience, corresponding to a statement consisting of one or more sentences. In the case of written language, this Linguistic Special Case Experience would be a text snippet representing a context. This text snippet can be regarded as a context for every word that is contained in it. Eventually, every word will get linked to more and more new contexts, strengthening its conceptual grounding. The linking occurs by OR-ing the new Special Case Experience with the existing ones, thereby increasing the number of contexts for each word.
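The OR-ing of contexts can be sketched the same way; here each SCE is reduced to its set of words, and the store simply accumulates, for every word, the indices of the contexts it appeared in (all data invented):

```python
# Each Linguistic SCE (text snippet) is a context: the set of its words.
sce_1 = {"the", "cat", "sat", "on", "mat"}
sce_2 = {"a", "cat", "chased", "mouse"}

# Per-word context store: linking happens by OR-ing each new SCE in,
# so every word accumulates more and more contexts over time.
contexts = {}
for i, sce in enumerate([sce_1, sce_2]):
    for word in sce:
        contexts.setdefault(word, set()).add(i)
```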

Semantic Mapping

As discussed previously, the representation for a word appears in the neo-cortex at the (virtual) receptive area for words. As the neo-cortex is organized as a 2-dimensional sheet, the word-layer will be organized in a 2-dimensional fashion too. While the brain is continuously exposed to incoming data and the extent of the cortical area is relatively stable, there has to be a mechanism that reuses the available storage space by storing the incoming SCEs in an associative way, one on top of each other. This can be achieved by the simple mechanism of having incoming bits that are triggered simultaneously (for features that occur concurrently and which are therefore semantically related) attract each other spatially within the 2-dimensional word-layer. As a result, every word is represented by its contexts, which are distributed across the word-layer, and every context-position within the 2D area of the word-layer consists of semantically highly related SCEs (utterances) that themselves share a significant number of words. After having perceived a large enough number of SCEs, the context positions end up being clustered in a way that puts similar (semantically close) contexts near to each other and dissimilar ones more distant from each other.

This mechanism ensures that two brains that are continuously exposed to similar SCEs end up having their contexts grouped in a similar way, creating an implicit consensus on the semantic distribution of features without requiring exposure to the exact same data. This constitutes a primary language learning mechanism that captures and defines the acquired semantic space of an individual while assuring a high degree of compatibility with the semantic space of his/her peers. Every word is semantically grounded by expressing it as a binary distribution of these context features, which are themselves distributed across the learned semantic space.


The primary acquisition of the 2D semantic space as a distributional reference for the encoding of word meaning is called Semantic Folding. In more formal terms: every word is characterized by the list of contexts in which it appears, a context being itself the list of terms encountered in a previously stored SCE (utterance or text snippet).

Fig. 6: Creation of a simple 1D word-vector

This one-dimensional vector could be directly used to represent every word. But to unlock the advantages of Semantic Folding, a second mapping step is introduced that not only captures the co-occurrence information but also the semantic relations among contexts, to enable understanding through the similarity of distributions.

Technically speaking, the contexts represent vectors that can be used to create a two-dimensional map in such a way that similar context-vectors are placed closer to each other, using topological (local) inhibition mechanisms and/or competitive Hebbian learning principles.


Fig. 7: Distribution of the contexts on the semantic 2D map

This results in a 2D map that associates a coordinate pair to every context in the repository of contexts (the sum of all perceived SCEs). This mapping process can be maintained dynamically by always positioning a newly perceived SCE onto the map, and is even capable of growing the map at its borders if new words or new concepts appear.

Every perceived SCE strengthens, adjusts or extends the existing semantic map.

This map is then used to encode every single word by associating a binary vector with each word, containing a "1" if the word is contained in the context at a specific position and a "0" if not, for all positions in the map.
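A toy version of this encoding step, assuming a miniature 4x4 map whose contents are invented for illustration:

```python
# A miniature 4x4 "semantic map": each position holds the word set of the
# context (snippet) placed there. Contents are invented for illustration.
N = 4
semantic_map = [[set() for _ in range(N)] for _ in range(N)]
semantic_map[0][0] = {"cat", "purr", "fur"}
semantic_map[0][1] = {"cat", "dog", "pet"}
semantic_map[2][3] = {"truck", "engine"}

def word_sdr(word):
    """Binary vector over all map positions, serialized row-wise:
    1 where the word occurs in the context at that position, else 0."""
    return [1 if word in semantic_map[r][c] else 0
            for r in range(N) for c in range(N)]
```

Because related contexts sit at neighboring positions, the set bits of a word's vector inherit the map's topology, which is what the next section exploits.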


Fig. 8: Encoding of a word as a word-SDR

After serialization, we have a binary vector that has a natural SDR format:

• A word typically appears only in a very small number of the stored contexts. The vector is therefore sparse.

• Although the vector is used in its serialized notation, the neighboring relationships between the different positions are still governed by the 2D topology, corresponding to a topological 2D-distribution.

• If a set bit shifts its position (up, down, left or right), it will misleadingly represent a different adjacent context. But as adjacent contexts have a very similar meaning due to the folded-in map, the error will be negligible or even unnoticeable, representing a high noise resistance.

• Words with similar meanings look similar due to the topological arrangement of the individual bit-positions.

• The serialized word-SDRs can be efficiently compressed by only storing the indices of the set bits.

• The serialized word-SDRs can be subsampled to a high degree without losing significant semantic information.

• Several serialized word-SDRs can be aggregated using a bitwise OR function without losing any information brought in by any of the union's members.

Metric Word Space

The set of all possible word-SDRs corresponds to a word-vector space. By applying a distance metric (like Euclidean distance) that represents the semantic


closeness of two words, the word-SDR space satisfies the requirements of a metric space:

• Distances between words are always non-negative.

• If the distance between two words is 0, then the two words are semantically identical (perfect synonyms).

• If two words A and B are at a distance d from each other, then d(A,B) = d(B,A) (symmetry).

• For three words A, B, C we have d(A,C) <= d(A,B) + d(B,C) (triangle inequality).
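These axioms can be spot-checked with the Hamming distance (one of the measures listed later in this section) on a few tiny invented word-SDRs:

```python
def hamming(a, b):
    """Hamming distance between two equal-length binary vectors."""
    return sum(x != y for x, y in zip(a, b))

# Three tiny word-SDRs (invented) to spot-check the metric axioms.
A = [1, 0, 1, 1, 0, 0, 1, 0]
B = [1, 0, 0, 1, 0, 1, 1, 0]
C = [0, 1, 0, 1, 1, 0, 0, 0]

non_negative = all(hamming(x, y) >= 0 for x in (A, B, C) for y in (A, B, C))
identical    = hamming(A, A) == 0                         # d(A,A) = 0
symmetric    = hamming(A, B) == hamming(B, A)             # d(A,B) = d(B,A)
triangle     = hamming(A, C) <= hamming(A, B) + hamming(B, C)
```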

By considering the word-SDR space as a metric space, we can revert to a rich research corpus of mathematical properties, characteristics and tools that find their correspondence in the metric space representation of natural language.

Similarity

Similarity is the most fundamental operation performed in the metric word-SDR space. Similarity should not be directly interpreted as word synonymy, as this only represents a special case of semantic closeness that assumes a specific type of distance measure, feature selection and arrangement. Similarity should be seen as a more flexible concept that can be tuned to many different, language-relevant nuances like:

- Associativity
- Generalization
- Dependency
- Synonymy
- Etc.

The actual distance measure used to calculate similarity can be varied depending on the goal of the operation. As the word-SDR vectors are composed of binary elements, the simplest distance measure consists in calculating the binary overlap. Two vectors are close if the number of overlapping bits is large. But care must be taken, as very unequal word-frequencies can lead to misinterpretations. When comparing a very frequent word that has many set bits with a rare word having a small number of set bits, even a full overlap would only result in the small number of overlap bits corresponding to the number of ones in the low-frequency term.


Other distance/similarity measures that could be applied:

- Euclidean distance
- Hamming distance
- Jaccard similarity
- Cosine similarity
- Levenshtein distance
- Sørensen–Dice index
- Etc.
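A sketch of why normalization matters when word frequencies differ, comparing raw overlap with two of the listed measures (sets of bit indices stand in for word-SDRs; the values are invented):

```python
import math

def overlap(a, b):
    """Raw binary overlap: biased toward high-frequency words."""
    return len(a & b)

def jaccard(a, b):
    """Overlap normalized by the union size."""
    return len(a & b) / len(a | b)

def cosine(a, b):
    """Overlap normalized by the geometric mean of the set-bit counts."""
    return len(a & b) / math.sqrt(len(a) * len(b))

frequent_word = set(range(100))       # many set bits (a very frequent word)
rare_word     = set(range(10))        # few set bits, fully contained above
other_rare    = set(range(200, 210))  # unrelated rare word

# Raw overlap caps at the rare word's bit count (10) even for perfect
# containment; Jaccard and cosine make the relation visible on a 0..1 scale.
```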

Dimensionality in Semantic Folding

The Semantic Folding process takes symbolic word representations as input and converts them into an n-dimensional SDR-vector that is semantically grounded through the 2D materialization-step of the semantic map.

The main reason to choose a 2D map over any other possible dimensionality lies in the fact that the word-SDRs are intended to be fed into cortical processing systems, which in turn implement cortical processing schemes that happen to have evolved into a 2D arrangement.

In order to achieve actual materialization of the word-SDRs, the neighboring relationships of adjacent bits in the data should directly translate to the topological space of neo-cortical circuits. Without successful materialization of the word-SDRs, semantic grounding would not be possible, making inter-individual communication (the primary purpose of language) extremely unreliable, if not impossible.

To propagate the map-topology throughout the whole cortical extent, all afferences and efferences to or from a specific cortical area have to maintain their topological arrangement. These projections can be links between regions or pathways between sensory organs and the cortical receptive fields.

If we consider the hypothetical word-SDR layer in the human cortex to be a receptive field for language, the similarity to all other sensorial input systems, using data in a 2D arrangement, becomes obvious:

• The organ of Corti in the cochlea is a sheet of sensorial cells, where every cell transmits a specific piece of sound information depending on where on the sheet it is positioned.


• Touch obviously generates topological information about the 2D surface of the body skin.

• The retina is a 2D structure where two neighboring pixels have a much higher probability of belonging to the same object than two distant ones.

Topographic projection seems to be a main neuro-anatomic principle that maintains an ordered mapping from a sensory surface to its associated cortical receptive structures. This constitutes another strong argument for using a 2D semantic map for Semantic Folding.

Language for Cross-Brain Communication

It seems reasonable to assume that a major reason for the development of language is the possibility for efficient communication. In the context currently discussed, communication can be described as the capability to send a representation of the current (neo-cortical) brain state to another individual, who can then experience or at least infer the cortical status of the sender. In this case, the sensorial afferences of one neo-cortex could become part of the input of a second neo-cortex that might process this compound data differently than the sender, as it accesses a different local SCE repository. The receiving individual can then communicate this new, alternative output state back to the first individual, therefore making cooperation much easier. In evolutionary biology terms, this mechanism can be regarded as a way of extending the cortical area beyond the limits of a single subject by extending information processing from one brain to the cortical real estate of social peers. The evolutionary advantages resulting from this extended cortical processing are the same that drove the growing of the neo-cortex in the first place: higher computational power or increased intelligence by extending the overall cortical real estate available to interpret the current situational sensory input.

Although every individual uses his proprietary version of a semantic map formed along his ontogenetic development, it is interesting to note that this mechanism nevertheless works efficiently. As humans who live in the same vicinity share many genetic, developmental and social parameters, the chance of their semantic maps evolving in a very similar fashion is high: they speak the same language, therefore their semantic maps have a large overlap. The farther apart two individuals are, regardless of


whether this is measured by geographical, socio-cultural or environmental distance, the smaller the overlap of their maps will be, making communication harder.

By developing techniques to record and play back language, such as writing systems, it became possible to make brain states available not only over space but also over time. This created a fast way to expose the cortex of an individual to a large set of historically accumulated Special Case Experiences (brain-states), which equally led to incremental improvement of the acquired cortical ability to make useful interpretations and predictions.


Part 2: Semantic Fingerprinting

Theoretical Background

HTM Theory and Semantic Folding Theory are both based on the same conceptual foundations. They aim to apply the newest findings in theoretical neuroscience to the emerging field of Machine Intelligence. The two technologies work together in a complementary fashion: Cortical.io's Semantic Folding is the encoder for the incoming stream of data, and Numenta's NuPIC (Numenta Platform for Intelligent Computing) is the intelligent backend.

Cortical.io has implemented Semantic Folding as a product called Retina API, which enables the conversion of text data into a cortex-compatible representation, technically called a Sparse Distributed Representation (SDR), and the operation of similarity and Boolean computations on these SDRs.

Hierarchical Temporal Memory

The Hierarchical Temporal Memory (HTM) theory is a functional interpretation of practical findings in neuroscience research. HTM theory sees the human neo-cortex as a 2D sheet of modular, homologous microcircuits that are organized as hierarchically interconnected layers. Every layer is capable of detecting frequently occurring input patterns and learning time-based sequences thereof.

The data is fed into an HTM layer in the form of Sparse Distributed Representations. SDRs are large binary vectors that are very sparsely filled, with every bit representing distinct semantic information. According to the HTM theory, the human neo-cortex is not a processor but a memory system for SDR pattern sequences.

When an HTM layer is exposed to a stream of input data, it starts to generate predictions of what it thinks would be the next incoming SDR pattern, based on what patterns it has seen so far. In the beginning, the predictions will, of course, differ from the actual data, but a few cycles later the HTM layer will quickly converge and make more correct predictions. This prediction capability can explain many behavioral manifestations of intelligence.


Semantic Folding

In order to apply HTM to a practical problem, it is necessary to convert the given input data into the SDR format. What characterizes SDRs?

• SDRs are large binary vectors (from several thousands to many millions of bits).

• SDRs have a very small fraction of their bits set to "1" at a specific point in time.

• Similar data looks similar when converted into SDR format.

• Every bit in the SDR has a specific (accountable) meaning.

• The union of several SDRs results in an SDR that still contains all the information of its constituent SDRs.

The process of Semantic Folding encompasses the following steps:

• Definition of a reference text corpus of documents that represents the Semantic Universe the system is supposed to work in. The system will know all vocabulary and its practical use as it occurs in this Language Definition Corpus (LDC).

• Every document from the LDC is cut into text snippets, with each snippet representing a single context.

• The reference collection snippets are distributed over a 2D matrix (e.g. 128x128 bits) in a way that snippets with similar topics (that share many common words) are placed closer to each other on the map, and snippets with different topics (few common words) are placed more distantly to each other on the map. This produces a 2D semantic map.

• In the next step, a list of every word contained in the reference corpus is created.

• By going down this list word by word, all the contexts a word occurs in are set to "1" in the corresponding bit-position of a 2D mapped vector. This produces a large, binary, very sparsely filled vector for each word. This vector is called the Semantic Fingerprint of the word. The structure of the 2D map (the Semantic Universe) is therefore folded into each representation of a word (Semantic Fingerprint). The list of words with their fingerprints is stored in a database that is indexed to allow for fast matching. The system that converts a given word into a fingerprint is called the Retina, as it acts as a sensorial organ for text. The fingerprint database is called the Retina Database (Retina DB).
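The steps above can be condensed into a toy pipeline. The map-placement step is deliberately trivial here (snippets keep their reading order), whereas a real Retina would cluster similar snippets together on the 2D grid; all names and data are illustrative:

```python
# Step 1: a toy Language Definition Corpus (LDC).
ldc = [
    "cats purr and chase mice",
    "dogs bark and chase cats",
    "trucks haul heavy freight",
    "engines power trucks",
]

# Steps 2-3: one snippet per context, laid out on a tiny 2x2 "map"
# (placeholder placement: no similarity-based clustering is done here).
contexts = [snippet.split() for snippet in ldc]

# Step 4: vocabulary of the reference corpus.
vocabulary = sorted({w for ctx in contexts for w in ctx})

# Step 5: the Semantic Fingerprint of a word is the binary vector with a
# "1" at every map position whose context contains the word.
retina_db = {w: [1 if w in ctx else 0 for ctx in contexts] for w in vocabulary}
```

Even in this tiny sketch, words from the same topic ("trucks", "engines") end up with set bits at neighboring positions, which is the effect the real clustering step produces at scale.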


Retina DB

The Retina DB consists of actual utterances that are distributed over a 128x128 grid. At each point of the matrix we find one to several text snippets. Their constituent words represent the topic located at this position in the semantic space. The choice of implementing the semantic space as a 2D structure is in analogy to the fact that the neo-cortex itself, like all biological sensors (e.g. the retina in the eye, the Corti organ in the ear, the touch sensors in the skin etc.), is arranged as a 2-dimensional grid.

The Language Definition Corpus

By selecting Wikipedia documents to represent the language definition corpus, the resulting Retina DB will cover general English. If, on the contrary, a collection of documents from the PubMed archive is chosen, the resulting Retina DB will cover medical English. An LDC collection of Twitter messages will lead to a "Twitterish" Retina. The same is, of course, true for other languages: the Spanish or French Wikipedia would lead to general Spanish or general French Retinas.

The size of the generated text snippets determines the associativity bias of the resulting Retina. If the snippets are kept very small (1-3 sentences), the word Socrates is linked to synonymous concepts like Plato, Archimedes or Diogenes. The bigger the text snippets are, the more the word Socrates is linked to associated concepts like philosophy, truth or discourse. In practice, the bias is set to a level that best matches the problem domain.

Definition of a General Semantic Space

In order to achieve cross-language functionality, a Retina for each of the desired languages has to be generated while keeping the topology of the underlying semantic space the same. As a result, the fingerprint for a specific concept like philosophy is nearly the same in all languages (having a similar topology).

Tuning the Semantic Space

By creating a specific Retina for a given domain, all word-fingerprints make better use of the available real estate of the 128x128 area, therefore improving the semantic resolution.

Tuning a Retina means selecting relevant, representative training material. This content selection task can best be carried out by a domain expert, in contrast to the


optimization of abstract algorithm parameters that traditionally requires the expertise of computer scientists.

REST API

The Retina engine, as well as an exemplary English Wikipedia database, is available as a freely callable REST API for experimentation and testing.

A web-accessible sandbox can be used by pointing a browser to http://api.cortical.io. All functionalities described in this document can be interactively tested there.

A first call to the

API: /retina endpoint. Get information on the available Retina

will return specifics for the published Retinas.

[
  {
    "retinaName": "en_associative",
    "description": "An English language retina balancing synonymous and associative similarity.",
    "numberOfTermsInRetina": 854523,
    "numberOfRows": 128,
    "numberOfColumns": 128
  }
]

Fig. 9: Calling the Retina API to get information on the Retina Database

Word-SDR – Sparse Distributed Word Representation

With the Retina API it is possible to convert any given word (stored in the Retina DB) into a word-SDR. These word-SDRs constitute the Semantic Atoms of the system. The word-SDR is a vector of 16,384 bits (128x128), where every bit stands for a concrete context (topic) that can be realized as a bag of words of the training snippets at this position.

Due to the topological arrangement of the word-SDRs, similar words like dog and cat do actually have similar word-SDRs. The similarity is measured as the degree of overlap between the two representations. In contrast, the words dog and truck have far fewer overlapping bits.


Term to Fingerprint Conversion

At the

API: /terms endpoint. Convert any word into its fingerprint

the word apple can be converted into the Semantic Fingerprint, which is rendered as a list of indexes of all set bits:

[ { "fingerprint": { "positions": [
1,2,3,5,6,7,34,35,70,77,102,122,125,128,129,130,251,252,255,258,319,379,380,381,382,383,385,389,392,423,
507,508,509,510,511,513,517,535,551,592,635,636,637,638,639,641,643,758,764,765,766,767,768,771,894,900,
1016,1017,1140,1141,1143,1145,1269,1270,1271,1273,1274,1275,1292,1302,1361,1397,1398,1399,1400,1401,1402,1403,1407,1430,
1526,1527,1529,1531,1535,1655,1656,1657,1658,1659,1716,1717,1768,1785,1786,1788,1790,1797,1822,1823,1831,1845,
1913,1915,1916,1917,1918,1931,2020,2035,2043,2044,2046,2059,2170,2176,2298,2300,2302,2303,2309,2425,
2512,2516,2517,2553,2554,2630,2651,2682,2685,2719,2766,2767,2768,2773,2901,3033,3052,3093,3104,3158,3175,3176,3206,
3286,3291,3303,3310,3344,3556,3684,3685,3693,3772,3812,3940,3976,3978,3979,4058,4067,4068,4070,4104,4105,
4194,4196,4197,4198,4206,4323,4324,4361,4362,4363,4377,4396,4447,4452,4454,4457,4489,4572,4617,4620,
4846,4860,4925,4970,4972,5023,5092,5106,5113,5114,5134,5174,5216,5223,5242,5265,5370,5434,5472,5482,
5495,5496,5497,5498,5607,5623,5751,5810,6010,6063,6176,6221,6336,6783,7174,7187,7302,7427,7430,7545,7606,
7812,7917,7935,8072,8487,8721,8825,8827,8891,8894,8895,8896,8898,8899,8951,9014,9026,9033,9105,9152,9159,
9461,9615,9662,9770,9779,9891,9912,10018,10090,10196,10283,10285,10416,10544,10545,10586,10587,10605,
10648,10649,10673,10716,10805,10809,10844,10935,10936,11050,11176,11481,11701,12176,12795,12811,12856,
12927,12930,12931,13058,13185,13313,13314,13442,13669,14189,14412,14444,14445,14446,14783,14911,15049,
15491,15684,15696,15721,15728,15751,15752,15833,15872,15875,15933,15943,15995,15996,15997,15998,15999,16000,
16122,16123,16127,16128,16129,16198,16250,16251,16255,16378
] } } ]

Fig. 10: A Semantic Fingerprint in JSON format as the Retina API returns it

The /terms endpoint also accepts multi-word terms like New York or United Nations and is able to represent domain-specific phrases like Director of Sales and Business Development or please fasten your seat belts by a unique Semantic Fingerprint. This is achieved by using the desired kind of tokenization during the Retina creation process.

Getting Context

In order to get back words for a given Semantic Fingerprint ("what does this fingerprint mean?"), the following endpoint can be used:

API: /terms/similar_terms endpoint. Find the closest matching Fingerprint

This endpoint can be called to obtain the terms having the most overlap with the input Fingerprint. The terms with most overlap constitute the contextual terms for a specific Semantic Fingerprint.

As terms are usually ambiguous and have different meanings in different contexts, the similar-terms function returns contextual terms for all existing contexts.

Fig. 11: Word Sense Disambiguation of the word apple

The fingerprint representation of the word apple contains all the different meanings, like the computer-related meaning, the fruit-related meaning or the records-related meaning. If we assume the following sequence of operations:

1) get the word with the most overlap with the word apple: the word computer
2) set all bits that are shared between apple and computer to "0"


3) send the resulting fingerprint again to the similar-terms function and get: the word fruit
4) set all bits that are shared between the fingerprint from step 2 and fruit to "0"
5) send the resulting fingerprint again to the similar-terms function and get: the word records
6) … continue until no more bits are left,

then we have identified all the contexts that this Retina knows for the word apple. This process can be applied to all words contained in the Retina. In fact, this form of computational disambiguation can be applied to any Semantic Fingerprint.
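The six-step loop above can be sketched as follows. Here `most_similar_term` and `term_fingerprint` are hypothetical stand-ins for the /terms/similar_terms and /terms endpoints, and the toy vocabulary is invented for illustration:

```python
def disambiguate(fingerprint, most_similar_term, term_fingerprint):
    # Repeat: find the closest term, record it as a context label,
    # then zero out the bits shared with that term (steps 1-6 above).
    contexts = []
    remaining = set(fingerprint)
    while remaining:
        term = most_similar_term(remaining)
        if term is None:
            break
        contexts.append(term)
        remaining -= set(term_fingerprint(term))
    return contexts

# Hypothetical stand-ins for the Retina endpoints (toy data only):
VOCAB = {"computer": {1, 2, 3, 4}, "fruit": {5, 6, 7}, "records": {8, 9}}

def term_fingerprint(term):
    return VOCAB[term]

def most_similar_term(bits):
    best = max(VOCAB, key=lambda t: len(VOCAB[t] & set(bits)))
    return best if VOCAB[best] & set(bits) else None

apple = {1, 2, 3, 4, 5, 6, 7, 8, 9}
```

Running `disambiguate(apple, …)` on this toy data yields the contexts in order of decreasing overlap, matching the computer/fruit/records walk-through above.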

API: /terms/context endpoint. Find the different contexts of a term

Using this endpoint, any term can be queried for its contexts. The most similar contextual term becomes the label for this context, and the subsequent most similar terms are returned. After having identified the different contexts, the similar-term list for each of them can be queried.

Text-SDR – Sparse Distributed Text Representation

The word-SDRs represent atomic units and can be aggregated to create document-SDRs (Document Fingerprints). Every constituent word is converted into its Semantic Fingerprint. All these fingerprints are then stacked, and the most often represented features produce the highest bit stack.


Fig. 12: Aggregation of word-SDRs into a text-SDR

Text to Fingerprint Conversion

The bit stacks of the aggregated fingerprint are now cut at a threshold that keeps the sparsity of the resulting document fingerprint at a defined level.

Fig. 13: The sparsification has an implicit disambiguation effect
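The stack-and-cut mechanism can be sketched like this; the function name and the default sparsity value are illustrative assumptions, only the 16,384-bit size comes from the text:

```python
from collections import Counter

def text_fingerprint(word_fingerprints, sparsity=0.02, n_bits=16384):
    # Stack all word-SDR bits, then keep only the highest stacks so the
    # resulting document fingerprint stays at the target sparsity.
    stacks = Counter()
    for fp in word_fingerprints:
        stacks.update(fp)
    keep = int(n_bits * sparsity)  # cut threshold: number of bits to retain
    return {bit for bit, _ in stacks.most_common(keep)}
```

Bits that occur in many constituent word fingerprints survive the cut; bits belonging to contexts mentioned only once are dropped, which is exactly the implicit disambiguation effect described below.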


The representational uniformity of word-SDRs and document-SDRs makes semantic computation easy and intuitive for documents of all sizes.

Another very useful side effect is that of implicit disambiguation. As previously seen, every word has feature bits for many different context groups in its fingerprint rendering. However, only the bits of the topics with the highest stacks will remain in the final document fingerprint. All other (ambiguous) bits will be eliminated during aggregation.

Fingerprints of texts can be generated using the following endpoint:

API: /text endpoint. Convert any text into its fingerprint

Based on the text-SDR mechanism, it is possible to dynamically allow the Retina to learn new, previously unseen words as they appear in texts.

Keyword Extraction

The word-SDRs also allow a very efficient mechanism to extract the semantically most important terms (or phrases) of a text. After internally generating a document fingerprint, the fingerprint of each constituent word is compared to it. The smallest set of word fingerprints that is needed to reconstruct the document fingerprint represents the semantically most relevant terms of the document.

API: /text/keywords endpoint. Find the key terms in a piece of text
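One plausible reading of "the smallest set of word fingerprints needed to reconstruct the document fingerprint" is a greedy set cover, sketched below; the names and the greedy strategy are our assumptions, not the documented algorithm:

```python
def extract_keywords(doc_fp, word_fps, max_terms=10):
    # Greedily pick the word whose fingerprint covers the most
    # still-uncovered bits of the document fingerprint.
    uncovered = set(doc_fp)
    keywords = []
    while uncovered and len(keywords) < max_terms:
        word, fp = max(word_fps.items(),
                       key=lambda kv: len(set(kv[1]) & uncovered))
        gain = set(fp) & uncovered
        if not gain:
            break  # remaining words add nothing
        keywords.append(word)
        uncovered -= gain
    return keywords
```

The greedy choice is the standard approximation for set cover; the exact smallest set is NP-hard to find in general, which is why a sketch like this settles for the approximation.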

Semantic Slicing

It is often necessary to slice text into topical snippets, each of which should have only one (main) topic. This is achieved by stepping through the text word by word and sensing how many feature bits change from one sentence to the next. If many bits change from one sentence fingerprint to the next, it can be assumed that a new topic has appeared, and the text is cut at this position.

API: /text/slices endpoint. Cutting text into topic-snippets
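A minimal sketch of this cut detection, assuming one fingerprint per sentence and an illustrative change threshold (the real endpoint's stepping and threshold are not specified here):

```python
def slice_text(sentence_fps, jump=0.5):
    # Cut where consecutive sentence fingerprints change too much:
    # a large fraction of changed bits suggests a topic switch.
    cuts = []
    for i in range(1, len(sentence_fps)):
        prev, cur = set(sentence_fps[i - 1]), set(sentence_fps[i])
        changed = len(prev ^ cur) / max(len(prev | cur), 1)
        if changed > jump:
            cuts.append(i)  # a new slice starts at sentence i
    return cuts
```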

Expressions – Computing with Fingerprints

As all Semantic Fingerprints are homologous (they have the same size and their feature space is equal), they can be used directly in Boolean expressions. Setting and resetting selections of bits can be achieved by AND-ing, OR-ing and SUBtracting Semantic Fingerprints with each other.

Fig. 14: Computing with word meanings

Subtracting the fingerprint of Porsche from the fingerprint of jaguar means that all the sports-car dots are eliminated in the jaguar fingerprint, and that only the big-cat dots are left. Similar, but not equal, would be to take the AND of the jaguar and the tiger fingerprints.

AND-ing organ and liver eliminates all piano and church dots (bits) initially also present in the organ fingerprint.

The expression endpoint at:

API: /expressions endpoint. Combining fingerprints using Boolean operators

can be used to create expressions of any complexity.
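Because fingerprints are homologous bit sets, the Boolean operators map directly onto set operations; the jaguar/porsche bit positions below are invented for illustration:

```python
def fp_and(a, b):
    return a & b  # keep only shared contexts

def fp_or(a, b):
    return a | b  # union of contexts

def fp_sub(a, b):
    return a - b  # remove b's contexts from a

# Invented toy bit sets:
jaguar = {1, 2, 3, 10, 11}   # big-cat bits {1,2,3} + sports-car bits {10,11}
porsche = {10, 11, 12}

big_cat_only = fp_sub(jaguar, porsche)  # the sports-car bits are gone
```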

Applying Similarity as the Fundamental Operator

Using the Semantic Fingerprint representation for a piece of text corresponds to having semantic features in a metric space. Vectors within this metric space are compared using distance measures. The Retina API offers several different measures: some of them are absolute, which means that they only take a full overlap into account; others also take the topological vicinity of the features into account.


Fig. 15: Similar text snippets result in similar fingerprints

There are two different semantic aspects that can be detected while comparing two Semantic Fingerprints:

• The absolute number of bits that overlap between two fingerprints describes the semantic closeness of the expressed concepts.
• By looking at the topological position where the overlap happens, the shared contexts can be explicitly determined.

Fig. 16: Distinct text snippets result in dissimilar fingerprints

Because they are expressed through the combination of 16K features, the semantic differences can be very subtle.


Comparing Fingerprints

API: /compare endpoint. Calculating the distance of two Fingerprints

The comparison of two Semantic Fingerprints is a purely mathematical (Boolean) operation that is independent of the Retina used to generate the fingerprints. This makes the operation very fast, as only bits are compared, but also very scalable, as every comparison constitutes an independent computation and can therefore be spread across as many threads as needed to stay within a certain timing window.
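Two simple measures of the kind the text describes, sketched over bit-position sets; the Jaccard form is an illustrative choice and not necessarily one of the API's actual measures:

```python
def overlap(a, b):
    # Absolute measure: number of shared set bits.
    return len(a & b)

def jaccard_distance(a, b):
    # One possible normalized distance over set-bit positions
    # (illustrative; the Retina API offers several measures).
    union = a | b
    return 1.0 - len(a & b) / len(union) if union else 0.0
```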

Graphical Rendering

API: /image/compare endpoint. Display the comparison of two Fingerprints

For convenience, image representations of Semantic Fingerprints can be obtained from the image endpoint, to be included in GUIs or rendered reports.

Application Prototypes

Based on the fundamental similarity operation, many higher-level NLP functionalities can be built. The higher-level functions in turn represent building blocks that can be included in many different business cases.

Classification of Documents

Traditionally, document classifiers are defined by providing a sufficiently large number of pre-classified documents and then by training the classifier with these training documents. The difficulty of this approach is that many complex classification tasks across a larger number of classes require large amounts of correctly labeled examples. The resulting classifier quality generally degrades with the number of classes and their (semantic) closeness.


Fig. 17: Classification using Semantic Fingerprints

With Semantic Fingerprints, there is no need to train the classifier. The only thing needed is a reference fingerprint that specifies an explicit set of semantic features describing a class. This reference set, or semantic class skeleton, can be obtained either through direct description, by enumerating a small number of generic class features and creating a Semantic Fingerprint of this list (for example the three words "mammal" + "mammals" + "mammalian"), or by formulating an expression. By computing the expression "tiger" AND "lion" AND "panther", a Semantic Fingerprint is created that specifies big-cat features.

For the creation of subtler classes, the classify endpoint:

API: /classify/create_category_filter endpoint. Create a category filter for a classifier

can be used to create optimized category filters based on a couple of example documents. The creation of a filter fingerprint has close to no latency (compared to the usual classifier training process), which makes on-the-fly classification possible.

The actual classification process is done by generating a text fingerprint for each document and comparing it with each category-filter fingerprint. By setting the (similarity) cut-off threshold accordingly, the classification sensitivity can be set optimally for each business case. As the cut-off is specified relative to the actual semantic closeness, it does not cause any deterioration in recall.
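The classification step then reduces to comparing the document fingerprint with each category-filter fingerprint and applying the cut-off; a minimal sketch with invented names, an illustrative similarity measure and an illustrative threshold:

```python
def classify(doc_fp, category_filters, cutoff=0.3):
    # Return every class whose filter-fingerprint similarity with the
    # document clears the cut-off, best match first (sketch only).
    labels = []
    for label, filt in category_filters.items():
        sim = len(doc_fp & filt) / max(len(filt), 1)
        if sim >= cutoff:
            labels.append((label, sim))
    return sorted(labels, key=lambda x: -x[1])
```

Raising `cutoff` makes the classifier stricter; lowering it makes it more inclusive, which is the sensitivity knob described above.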


Content Filtering Text Streams

Filtering text streams is also done using the fingerprint classifier described before. The main difference is that the documents do not preexist but are classified as they come in. The streaming text sources can be of any kind: tweets, news, chat, Facebook posts etc.

Fig. 18: Filtering the Twitter firehose in real time

Since the Semantic Fingerprint comparison process is extremely efficient, the content filtering can easily keep up with high-frequency sources like the Twitter firehose in real time, even on very moderate hardware.

Searching Documents

Using document similarity for enterprise search has been on the agenda of many products and solutions in the field. Widespread use of the approach has not been reached, mainly because, lacking an adequate document (text) representation, no distance measures could be developed that could keep up with the more common statistical search models.

With the Retina engine, searching is reduced to the task of comparing the fingerprints of all stored (indexed) documents with a query fingerprint that has either been generated from an example document ("Show me other documents like this one") or by typing in a description of what to look for ("Acts of vengeance of medieval kings").

Fig. 19: A similarity-based configuration of a search system

After the query fingerprint is generated, the documents are ordered by increasing distance. In contrast to traditional search engines, where a separate ranking procedure needs to be defined, the fingerprint-based search process generates an intrinsic order for the result set. Additionally, it is possible to provide personalized results by simply allowing the user to specify two or three documents that relate to his/her interests or working domain (without needing to be directly related to the actual search query). These user-selected domain documents are used to create a user-profile fingerprint. Now the query is executed again, and the (for example) 100 most similar documents are selected and then sorted by increasing distance from the profile fingerprint. In this way, two different users can cast the same search query on the same document collection and get different results depending on their topical preferences.
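The two-stage personalization described above can be sketched as follows; all names, the distance measure and the top-100 first stage are assumptions drawn from the text, not an actual Retina implementation:

```python
def personalized_search(query_fp, docs, profile_fp, top_n=100):
    # Stage 1: rank all documents by distance to the query fingerprint.
    # Stage 2: re-sort the top_n hits by distance to the user-profile
    # fingerprint, so two users get differently ordered results.
    def distance(a, b):
        union = a | b
        return 1.0 - len(a & b) / len(union) if union else 0.0

    hits = sorted(docs.items(), key=lambda kv: distance(query_fp, kv[1]))[:top_n]
    return [doc_id for doc_id, fp in
            sorted(hits, key=lambda kv: distance(profile_fp, kv[1]))]
```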

Real-Time Processing Option

The Retina API has been implemented as an Apache Spark module to enable its use within the Cloudera infrastructure. This makes it possible to smoothly handle large text-data loads of several terabytes and potentially even petabytes.


The ability to distribute fingerprint creation, comparison etc. across an arbitrarily large cluster of machines makes it possible to do real-time processing of data streams, in order to immediately send a trigger if some specific semantic constellation occurs.

Document collections of any size can be classified, simplifying the application of Big Data approaches to unstructured text by orders of magnitude.

The efficiency of the Retina API, combined with the workload-aggregation capability of large clusters, brings index-free searching for the first time within reach of real-world datasets. By simply implementing a brute-force comparison of all document fingerprints with the query fingerprint, index creation is no longer needed. Most of the costs in maintenance and IT infrastructure related to large search systems originate from the creation, updating and management operations on the index.

In an index-free search system, any previously stored document can be found within microseconds.

Using the Retina API with an HTM Backend

As stated in the beginning, HTM and Semantic Folding share the same theoretical foundations. All functionality described so far is solely based on taking advantage of the conversion of text into an SDR format.

In the following, the combination of the Retina API as text-data encoder with the HTM backend as sequence learner is used in a Text Anomaly Detection configuration.

Fig. 20: Text Anomaly Detection using an HTM backend

Being exposed to a stream of text-SDRs, the HTM network learns word transitions and combinations that occur in real-world sources. Based on the (text) data it was exposed to, the system constantly predicts what (word) it expects next. If the word it predicted is semantically sufficiently close to the word actually seen, the transition is strengthened in the HTM. If, on the other hand, an unexpected word (having a large semantic distance) occurs, an anomaly signal is generated.
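The anomaly decision can be sketched as a threshold on the overlap between the predicted and the actually seen word-SDR; the overlap measure and the threshold value are illustrative assumptions, not the HTM's actual scoring:

```python
def anomaly_signal(predicted_fp, actual_fp, threshold=0.2):
    # If the word actually seen shares too few bits with the predicted
    # word-SDR (i.e. it is semantically far away), flag an anomaly.
    denom = min(len(predicted_fp), len(actual_fp)) or 1
    similarity = len(predicted_fp & actual_fp) / denom
    return similarity < threshold
```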

Advantages of the Retina API Approach

Simplicity

1. No Natural Language Processing skills are needed.
2. Training of the system is fully unsupervised (no human work needed).
3. Tuning of the system is purely data-driven and only requires domain experts, not specialized technical staff.
4. The API provided is very simple and intuitive to utilize.
5. The technology can be easily integrated into larger systems by incorporating the API over REST or by the inclusion of a plain Java library with no external dependencies for local (Cloudera/Apache Spark) deployments.

Quality

1. The rich semantic feature set of 16K features allows a fine-grained representation of concepts.
2. All semantic features are self-learned, thus reducing semantic bias in the language model used.
3. The descriptive features are explicit and semantically grounded and can be inspected for the interpretation of any generated results.
4. By drastically reducing the vocabulary mismatch, far fewer false-positive results are generated.

Speed

1. Encoding the semantics in binary form (instead of the usual floating-point matrices) provides orders of magnitude of speed improvement over traditional methods.
2. All Semantic Fingerprints have the same size, which allows for an optimal processing-pipeline implementation.
3. The semantic representations are pre-calculated and therefore don't affect the query response time.
4. The algorithms only apply independent calculations (no corpus-relative computation) and can therefore easily scale to any performance needed.
5. The similarity algorithm can be easily implemented in hardware (FPGA & Gate Array technology) to achieve even further performance improvements. In a document-search context, the specialized hardware could provide a stable query response time of < 5 microseconds, independently of the size of the searched collection.

Cross-Language Ability

If aligned semantic spaces for different languages are used, the resulting fingerprints become language independent.

Fig. 21: Language independence of Semantic Fingerprints

This means that an English message fingerprint can be directly matched with an Arabic message fingerprint. When filtering text sources, the filter criterion can be designed in English while being directly applied to all other languages. An application can be developed using the English Retina while being deployed with a Chinese one.

Outlook

A Retina system can be used wherever language models are used in traditional NLP systems. Upcoming experimental work will show whether using a Retina system could improve Speech-to-Text, OCR or Statistical Machine Translation systems, as they all generate candidate sentences from which they have to choose the final response by taking the semantic context into account.


Another active field of research is to find out whether numeric measurements could also be interpreted as semantic entities like words. In this case the semantic grounding is done not by folding a collection of reference texts into the representation but by using log files of historic measurements. The correlation of the measurements will follow system-specific dependencies, just as the correlation of words follows linguistic relationships, and the system represented by the semantic space will not be "language" but an actual physical system from which the sensor data has been gathered.

A third field of research is to develop a hardware architecture that could speed up the process of similarity computation. In very large semantic search systems, holding billions of documents, the similarity computation is the only remaining bottleneck. By using a content-addressable memory (CAM) mechanism, the search-by-semantic-similarity process could reach very high velocities.


Part 3: Combining the Retina API with HTM

Introduction

It is an old dream of computer scientists to make the meaning of human language accessible to computer programs. However, to date, all approaches based on linguistics, statistics or probability calculus have failed to come close to the sophistication of humans in mastering the irregularities, ambiguities and combinatorial explosions typically encountered in natural language.

Considering this fact, imagine the following experimental setup:

Experimental Setup

Fig. 22: Overview of the experimental setup

• A Machine Learning (ML) program that has a number of binary inputs. This ML program can be trained on sequences of binary patterns by exposing them in a time series. The ML program has predictive outputs that try to anticipate what pattern to expect next, in response to a specific anterior sequence.
• A codec program that encodes an English word into a binary pattern and decodes any binary pattern into the closest possible English word. The codec has the characteristic of converting semantically close words into similar binary patterns and vice versa. The degree of similarity between two binary patterns is measured using a distance metric such as Euclidean distance.
• The codec operates using a data width of 16 Kbit (16,384 bits), so that every English word is encoded into a 16-Kbit pattern (binary word vector).
• The ML program is configured to allow patterns of 16 Kbit as input as well as 16-Kbit-wide prediction output patterns.


• The codec is linked with the ML program to form a compound system that allows for words as input and words as output.
• The encoder part of the codec (word-to-pattern converter) is linked to the ML program inputs in order to be able to feed in sequences of words. After every word of a sequence, the ML program outputs a binary pattern corresponding to a prediction of what it expects next. The ML program grounds its predictions on the (learned) experience of sequences it had seen previously.
• The decoder part of the codec (pattern-to-word converter) is linked to the prediction outputs of the ML program. In this way, a series of words can be fed into the compound system, which predicts the next expected word at its output based on previously seen sequences.

Fig. 23: Concrete experiment implementation

The ML program used in this experiment is the Hierarchical Temporal Memory (HTM) developed by Numenta. The code is publicly available under the name of NuPIC and actively supported by a growing community (iv). NuPIC implements the cortical theory developed by Jeff Hawkins.

NuPIC is a Pattern Sequence Learner. This means that the initially agnostic program (6) can be trained on sequences of data patterns and is able to predict a next pattern based on previously exposed sequences of patterns.

In the following experiments the data consists of English natural language. We use the Cortical.io API to encode words into binary patterns, which can be directly fed into the HTM Learning Algorithm (HTM LA). Being an online learning algorithm, the HTM LA learns every time it is exposed to input data. It is configured to store frequently occurring patterns and the sequences they appear in.

6 In its initial state the algorithm does not know of any SDR or sequence thereof.

After the HTM LA has read a certain number of words, it should start to predict the next word depending on the words read previously. The learning algorithm outputs a binary prediction pattern of the same size as the input pattern, which is then decoded by the Cortical.io API back into a word (7).

The full stop at the end of a sentence is interpreted by the HTM LA as an end-of-sequence signal, which ensures that a new sequence is started for each new sentence.

Experiment 1: "What does the fox eat?"

In this first experiment, the setup is used in its simplest form. A dataset of 36 sentences, each consisting of a simple statement about animals and what they eat or like, is fed in sequence into the HTM LA. A new pattern sequence is started after each full stop by signaling it to the HTM LA. Each sentence is submitted only once. The HTM LA sees a binary pattern of 16 Kbit for each word and does not know anything about the input, not even which language is being used.

7 The Cortical.io API returns the word for the closest matching fingerprint.


Dataset

The following 36 sentences are presented to the system:

1. frog eat flies.

2. cow eat grain.

3. elephant eat leaves.

4. goat eat grass.

5. wolf eat rabbit.

6. cat likes ball.

7. elephant likes water.

8. sheep eat grass.

9. cat eat salmon.

10. wolf eat mice.

11. lion eat cow.

12. dog likes sleep.

13. coyote eat mice.

14. coyote eat rodent.

15. coyote eat rabbit.

16. wolf eat squirrel.

17. cow eat grass.

18. frog eat flies.

19. cow eat grain.

20. elephant eat leaves.

21. goat eat grass.

22. wolf eat rabbit.

23. sheep eat grass.

24. cat eat salmon.

25. wolf eat mice.

26. lion eat cow.

27. coyote eat mice.

28. elephant likes water.

29. cat likes ball.

30. coyote eat rodent.

31. coyote eat rabbit.

32. wolf eat squirrel.

33. dog likes sleep.

34. cat eat salmon.

35. cat likes ball.

36. cow eat grass.

Please note that, for reasons of simplicity, the sentences are not necessarily grammatically correct.

Fig. 24: The "What does the fox eat?" experiment

Results

The HTM LA is a so-called Online Learning System that learns whenever it gets data as input and has no specific training mode. After each presented word (pattern), the HTM LA outputs its best guess of what it expects the next word to be. The quality of predictions rises while the 36 sentences are learned. We discard these preliminary outputs and only query the system by presenting the beginning of a 37th sentence: "fox eat". The final word is left out, and the prediction after the word eat is considered to be the answer to the implicit question: "What does the fox eat?". The system outputs the word rodent.

Discussion

The result obtained is remarkable, as it is correct in the sense that the response is actually something that could be food for some animal, and also correct in the sense that rodents are actually typical prey for foxes.

Without knowing more details, it looks as if the HTM was able to understand the meaning of the training sentences and was able to infer a plausible answer for a question about an animal that was not part of its training set. Furthermore, the HTM did not pick the correct answer from a list of possible answers but actually synthesized a binary pattern, for which the closest matching word in the Cortical.io Retina (8) happens to be rodent.

Experiment 2: "The Physicists"

The second experiment uses the same setup as experiment 1. This time, a different set of training sentences is used. In the first case the goal was to generate a simple inference based on a single list of examples. Now, the inference is structured in a slightly more complex fashion. The system is trained on examples of two different professions, physicists and singers, and what they like (mathematics and fans), as well as on what the profession actors like (fans).

8 This Retina has been trained on 400K Wikipedia pages. This is also the reason why it could understand (by analogy) what a fox is without ever having seen it in the training material. What it has seen are the words wolf and coyote, which share many bits with the word fox.


Dataset

Physicists:

1. marie curie be physicist.

2. hans bethe be physicist.

3. peter debye be physicist.

4. otto stern be physicist.

5. pascual jordan be physicist.

6. felix bloch be physicist.

7. max planck be physicist.

8. richard feynman be physicist.

9. arnold sommerfeld be physicist.

10. enrico fermi be physicist.

11. lev landau be physicist.

12. steven weinberg be physicist.

13. james franck be physicist.

14. karl weierstrass be physicist.

15. hermann von helmholtz be physicist.

16. paul dirac be physicist.

What Physicists like:

1. eugene wigner like mathematics.

2. wolfgang pauli like mathematics.

What Actors like:

1. pamela anderson like fans.

2. tom hanks like fans.

3. charlize theron like fans.

Singers:

1. madonna be singer.

2. rihanna be singer.

3. cher be singer.

4. madonna be singer.

5. elton john be singer.

6. kurt cobain be singer.

7. stevie wonder be singer.

8. rod stewart be singer.

9. diana ross be singer.

10. marvin gaye be singer.

11. aretha franklin be singer.

12. bonnie tyler be singer.

13. elvis presley be singer.

14. jackson browne be singer.

15. johnny cash be singer.

16. linda ronstadt be singer.

17. tina turner be singer.

18. joe cocker be singer.

19. chaka khan be singer.

20. eric clapton be singer.

21. elton john be singer.

22. willie nelson be singer.

23. hank williams be singer.

24. mariah carey be singer.

25. ray charles be singer.

26. chuck berry be singer.

27. cher be singer.

28. alicia keys be singer.

29. bryan ferry be singer.

30. dusty springfield be singer.

31. donna summer be singer.

32. james taylor be singer.

33. james brown be singer.

34. carole king be singer.

35. buddy holly be singer.

36. bruce springsteen be singer.

37. dolly parton be singer.

38. otis redding be singer.

39. meat loaf be singer.

40. phil collins be singer.

41. pete townshend be singer.

42. roy orbison be singer.

43. jerry lee lewis be singer.

44. celine dion be singer.

45. alison krauss be singer.

What Singers like:

1. katy perry like fans.

2. nancy sinatra like fans.

Please note that, for reasons of simplicity, the sentences are not necessarily grammatically correct.

Fig. 25: The "The Physicists" experiment

Results

The program is started using the datasets above. This is the terminal log of the running experiment:


Starting training of CLA ...
........................................................................
Finished training the CLA.
Querying the CLA:
eminem be            => singer
eminem like          => fans
niels bohr be        => physicist
niels bohr like      => mathematics
albert einstein be   => physicist
albert einstein like => mathematics
tom cruise like      => fans
angelina jolie like  => fans
brad pitt like       => fans
physicists like      => mathematics
mathematicians like  => mathematics
actors like          => fans
physicists be        => physicist

Fig. 26: Terminal log showing the results of "The Physicists" experiment

Discussion

After training the HTM with this small set of examples (note that the classes "What Physicists like" and "What Singers like" are each only characterized by two example sentences), a set of queries based on unseen examples of singers, actors and physicists is submitted. In all cases, the system was able to make the correct inferences, regardless of the verb used (be, like). The last four queries suggest that the system was also able to generalize from the concrete examples of the training sentences towards the corresponding class labels like physicists, actors and singers, and to associate with them the correct like-preferences.

For these inferences tobepossible, thesystemhas tohaveaccess tosomereal

world information.As theHTM itselfhadnopreemptiveknowledge, theonlypossible

Page 56: Semantic Folding Theory - White Paper

SemanticFoldingTheory

Vienna,March2016 56

sourceforbringinginthisinformationwouldhavebeenthroughthelanguageelements

usedastrainingmaterial.Buttheverysmallamountoftrainingmaterialclearlydoesnot

containallthatbackgroundinadescriptiveordeclarativeform.Sotheonlypointwhere

the relevant context could have been introduced is through the encoding step,

convertingthesymbolicstringintoabinarywordpattern.


Part 4: References

Literature

High-Level Perception, Representation, and Analogy: A Critique of Artificial Intelligence Methodology. David J. Chalmers, Robert M. French, Douglas R. Hofstadter. Center for Research on Concepts and Cognition, Indiana University, Bloomington, Indiana 47408. CRCC Technical Report 49, March 1991.
Sparse Representation, Modeling and Learning in Visual Recognition: Theory, Algorithms and Applications. Hong Cheng. University of Electronic Science and Technology of China, Chengdu, Sichuan, China (2012).
Brain Structure and Its Origins: in Development and in Evolution of Behavior and the Mind. Gerald E. Schneider. The MIT Press, Cambridge, Massachusetts; London, England (2014).
Sparse Modeling: Theory, Algorithms, and Applications. Irina Rish, IBM, Yorktown Heights, New York, USA; Genady Ya. Grabarnik, St. John's University, Queens, New York, USA.
Incomplete Nature: How Mind Emerged from Matter. Terrence W. Deacon. W. W. Norton & Company, New York, London (2013).
Evolution of Nervous Systems: A Comprehensive Reference. Jon H. Kaas, Georg F. Striedter, John L. R. Rubenstein. Elsevier Academic Press (2007).
On the Nature of Representation: A Case Study of James Gibson's Theory of Perception. Mark Bickhard, D. Michael Richie. Praeger Scientific (1983).
How Brains Think: Evolving Intelligence, Then and Now. William H. Calvin. Basic Books, New York (1996).
Foundational Issues in Artificial Intelligence and Cognitive Science: Impasse and Solution. Mark H. Bickhard, Loren Terveen. Elsevier (1995).
Evolutionary Cognitive Neuroscience. Edited by Steven M. Platek, Julian Paul Keenan, and Todd K. Shackelford. The MIT Press, Cambridge, Massachusetts; London, England (2006).
Origination of Organismal Form: Beyond the Gene in Developmental and Evolutionary Biology. Edited by Gerd B. Müller and Stuart A. Newman. The MIT Press, Cambridge, Massachusetts; London, England (2003).


The Cerebral Code: Thinking a Thought in the Mosaics of the Mind. William H. Calvin. Cambridge, MA: MIT Press (1996).
Associative Memory: A System-Theoretical Approach. T. Kohonen. Communication and Cybernetics, Editors: K. S. Fu, W. D. Keidel, W. J. M. Levelt, H. Wolter (1977).
Content-Addressable Memories. T. Kohonen. 2nd Edition, Springer Series in Information Sciences, Editors: Thomas S. Huang, Manfred R. Schroeder (1989).
On Intelligence. Jeff Hawkins, Sandra Blakeslee. Times Books (2004).
Surfaces and Essences: Analogy as the Fuel and Fire of Thinking. Douglas R. Hofstadter, Emmanuel Sander. Basic Books (2013).
Exploring the Thalamus and Its Role in Cortical Function. S. Murray Sherman, R. W. Guillery. Second Edition, The MIT Press (2005).
Foundations of Language: Brain, Meaning, Grammar, Evolution. Ray Jackendoff. Oxford University Press, USA (2003).
Human Language and Our Reptilian Brain: The Subcortical Bases of Speech, Syntax, and Thought. Philip Lieberman. Harvard University Press (2002).
Theory of Language: The Representational Function of Language. Karl Bühler. John Benjamins Publishing Company (2011).
Connections and Symbols (Cognition Special Issue). Steven Pinker. The MIT Press; 1st MIT Press edition (1988).
Ten Problems of Consciousness: A Representational Theory of the Phenomenal Mind. Michael Tye. MIT Press (1995).
Synergetics: Explorations in the Geometry of Thinking. R. Buckminster Fuller, E. J. Applewhite. Scribner (1975).
Big Brain: Origins and Future of Human Intelligence. Richard Granger. Palgrave Macmillan (2009).
The Language Instinct: How the Mind Creates Language. Steven Pinker. Perennial (HarperCollins) (1995).
Language in the Brain. Helmut Schnelle. Cambridge University Press (2010).


Web

i. Cortical Learning Algorithms White Paper, Numenta. http://numenta.com/learn/hierarchical-temporal-memory-white-paper.html
ii. Human Brain Project. https://www.humanbrainproject.eu
iii. The Human Connectome Project. http://www.humanconnectomeproject.org
iv. The NuPIC open community. http://www.numenta.org

Videos

Analogy as the Core of Cognition, Douglas Hofstadter, Stanford University, 2009. Retrieved from https://youtu.be/n8m7lFQ3njk
Linguistics as a Window to Understanding the Brain, Steven Pinker, Harvard University, 2012. Retrieved from https://youtu.be/Q-B_ONJIEcE
Semantics, Models, and Model Free Methods, Monica Anderson, Syntience Inc., 2012. Retrieved from https://vimeo.com/43890931
Modeling Data Streams Using Sparse Distributed Representations, Jeff Hawkins, Numenta, 2012. Retrieved from https://youtu.be/iNMbsvK8Q8Y
Seminar: Brains and Their Application, Richard Granger, Thayer School of Engineering at Dartmouth, 2009. Retrieved from https://youtu.be/AsYOl4REct0

Cortical.io

Website: http://www.cortical.io
Retina API: http://api.cortical.io

Demos

Keyword Extraction: http://www.cortical.io/keyword-extraction.html
Language Detection: http://www.cortical.io/language-detection.html
Expression Builder: http://www.cortical.io/expression-builder.html
Similarity Explorer: http://www.cortical.io/demos/similarity-explorer/
Topic Explorer Demo: http://www.cortical.io/topic-explorer.html
Cross-Lingual Topic Analyzer: http://www.cortical.io/cross-language.html
Topic Modeller: http://www.cortical.io/static/demos/html/topic-modeller.html