neuralcomputing03


智能科学 (Intelligence Science), Chapter 3: Neural Computing (神经计算)
史忠植, Institute of Computing Technology, Chinese Academy of Sciences (中国科学院计算技术研究所), http:/

Outline
- Introduction
- Perceptron
- Back Propagation
- Recurrent network
- Hopfield Networks
- Self-Organization Maps
- Summary

Biological Neural Systems
The brain is composed of approximately 100 billion (10^11) neurons. [Figure: schematic drawing of two biological neurons connected by synapses.]
A typical neuron collects signals from other neurons through a host of fine structures called dendrites. The neuron sends out spikes of electrical activity through a long, thin strand known as an axon, which splits into thousands of branches. At the end of each branch, a structure called a synapse converts the activity from the axon into electrical effects that inhibit or excite activity in the connected neurons. When a neuron receives excitatory input that is sufficiently large compared with its inhibitory input, it sends a spike of electrical activity down its axon. Learning occurs by changing the effectiveness of the synapses, so that the influence of one neuron on another changes.

Dimensions of a Neural Network
- Various types of neurons
- Various network architectures
- Various learning algorithms
- Various applications

Neural Computing (神经计算)
(1) Neural networks can approximate arbitrarily complex nonlinear relations.
(2) All quantitative and qualitative information is stored, distributed across the neurons of the network, giving strong robustness and fault tolerance.
(3) Parallel distributed processing makes fast, large-scale computation possible.
(4) They can learn and adapt to unknown or uncertain systems.
(5) They can process quantitative and qualitative knowledge at the same time.

Neural Computing: early history
In the 1940s the psychologist McCulloch and the mathematician Pitts proposed the excitatory/inhibitory neuron model, and Hebb proposed a rule for modifying the connection strength between neurons. Their results are still the foundation of many neural network models today.

McCulloch-Pitts Neuron
[Figure: the McCulloch-Pitts neuron model.]

Neural Computing: the 1950s to 1970s
Representative work of the 1950s and 1960s includes Rosenblatt's perceptron and Widrow's adaptive element, the Adaline. In 1969 Minsky and Papert published the influential book Perceptrons, which drew pessimistic conclusions; together with digital computers being in their heyday and achieving notable successes in artificial intelligence, this pushed artificial neural network research into a low period in the 1970s.

History
- Rosenblatt (1958) wires McCulloch-Pitts neurons with a training procedure.
- Rosenblatt's Perceptron (Rosenblatt, F., Principles of Neurodynamics, New York: Spartan Books, 1962).
- 1969, Minsky: failure on problems that are not linearly separable, such as XOR: (X1=T & X2=F) or (X1=F & X2=T). The weakness is repaired with hidden layers (Minsky, M. and Papert, S., Perceptrons, MIT Press, Cambridge, 1969).

Neural Computing: later developments
- Multilayer networks and the BP algorithm
- The Hopfield network model
- Adaptive Resonance Theory (ART)
- Self-organizing feature map theory
- The Helmholtz machine, proposed by Hinton and colleagues
- The Ying-Yang machine theory proposed by 徐雷 (Xu Lei)
- The statistical-manifold (information geometry) methods pioneered and developed by 甘利俊一 (S. Amari), applied to artificial neural network research

History
Late 1980s: neural networks re-emerge with Rumelhart and McClelland (Rumelhart, D., McClelland, J., Parallel and Distributed Processing, MIT Press, Cambridge, 1988).

The Neuron
The neuron is the basic information processing unit of a NN. It consists of:
1. A set of synapses or connecting links, each link characterized by a weight: W1, W2, ..., Wm.
2. An adder function (linear combiner) which computes the weighted sum of the inputs: u = Σ_j Wj xj.
3. An activation function (squashing function) for limiting the amplitude of the output of the neuron.

The Neuron
[Figure: input signals x1, x2, ..., xm; synaptic weights w1, w2, ..., wm; summing function; bias b; activation function; local field v; output y.]

Bias of a Neuron
The bias b has the effect of applying an affine transformation to u: v = u + b, where v is the induced local field of the neuron.

Bias as extra input
The bias is an external parameter of the neuron. It can be modeled by adding an extra input x0 = +1 whose weight w0 equals b. [Figure: input signals, synaptic weights, summing function, activation function, local field v, output y.]
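
To make the neuron model above concrete, here is a minimal sketch of one unit computing its weighted sum, induced local field, and output; the sigmoid squashing function and the numerical values are illustrative assumptions, not taken from the slides.

```python
import numpy as np

def neuron_output(x, w, b):
    """Single neuron: weighted sum of inputs, plus bias, through a squashing function."""
    u = np.dot(w, x)                  # adder / linear combiner
    v = u + b                         # induced local field (bias as an affine shift)
    return 1.0 / (1.0 + np.exp(-v))   # sigmoid activation limits the output amplitude

# Illustrative values (assumptions, not from the slides)
x = np.array([0.5, 1.0, -1.0])   # inputs x1..xm
w = np.array([0.2, 0.3, 0.1])    # synaptic weights w1..wm
print(neuron_output(x, w, b=-0.1))
```

Folding the bias in as an extra input x0 = +1 with weight w0 = b, as the slide describes, gives exactly the same output.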

Hebbian Learning
- Simultaneous activation causes increased synaptic strength.
- Asynchronous activation causes a weakened synaptic connection.
- Pruning: Hebbian, anti-Hebbian, and non-Hebbian connections.

Common Rules
- Simplify the complexity
- Task specific

Biological Firing Rate
The average firing rate depends on biological effects such as leakage, saturation, and noise.

Training Methods
- Supervised training
- Unsupervised training
- Self-organization
- Back-propagation
- Simulated annealing
- Credit assignment

Outline
Introduction · Perceptron · Back Propagation · Recurrent network · Hopfield Networks · Self-Organization Maps · Summary

Perceptron: Single Layer Feed-forward
Rosenblatt's perceptron is a network of processing elements (PE): an input layer of source nodes and an output layer of neurons.

Perceptron: Multilayer feed-forward
Input layer, hidden layer, output layer; for example a 3-4-2 network.

Perceptron: Learning Rule
Err = T - O, where O is the predicted output and T is the correct output.
Wj ← Wj + α * Ij * Err
Ij is the activation of unit j in the input layer, and α is a constant called the learning rate.

Perceptron: worked example
[Figure: a perceptron with inputs 2 and 1, weights 0.5 and 0.3, and a bias term of -1.] The weighted sum is 2(0.5) + 1(0.3) + (-1) = 0.3, so O = 1.
Learning Procedure:
- Randomly assign weights (between 0 and 1).
- Present inputs from the training data.
- Get the output O and nudge the weights to move the result toward the desired output T.
- Repeat; stop when there are no errors, or when enough epochs have been completed.
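
A minimal sketch of the perceptron learning rule and procedure just described (Wj ← Wj + α·Ij·Err with a step activation); the training set, learning rate, and stopping limit are illustrative assumptions.

```python
import numpy as np

def train_perceptron(X, T, alpha=0.1, epochs=20):
    """Perceptron rule: W_j <- W_j + alpha * I_j * (T - O), with a bias input fixed at +1."""
    Xb = np.hstack([X, np.ones((len(X), 1))])      # append the bias input
    W = np.random.uniform(0, 1, Xb.shape[1])       # random initial weights in (0, 1)
    for _ in range(epochs):
        errors = 0
        for I, t in zip(Xb, T):
            O = 1 if np.dot(W, I) > 0 else 0       # step activation
            W += alpha * I * (t - O)               # nudge weights toward the desired output
            errors += int(t != O)
        if errors == 0:                            # stop when no errors remain
            break
    return W

# Illustrative data: logical AND (an assumption, not from the slides)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
T = np.array([0, 0, 0, 1])
print(train_perceptron(X, T))
```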

Least Mean Square learning
LMS (Least Mean Square) learning is more general than the previous perceptron learning rule. The idea is to minimize the total error, measured over all training examples P, where O is the raw (real-valued) output of the unit:

E = 1/2 * Σ_{p in P} (T_p - O_p)^2

For example, if we have two patterns with T1 = 1, O1 = 0.8 and T2 = 0, O2 = 0.5, then E = (0.5)[(1 - 0.8)^2 + (0 - 0.5)^2] = 0.145.
We minimize the LMS error by gradient descent on the weights: W(new) = W(old) - C * ∂E/∂W, where C is the learning rate.

Activation Function
To apply the LMS learning rule, also known as the delta rule, we need a differentiable activation function.
Old: the hard threshold (step) function. New: a differentiable sigmoid, f(v) = 1 / (1 + e^{-v}).
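
A small sketch of the delta (LMS) rule with a sigmoid unit, matching the error function and gradient-descent update above; the training patterns and learning rate are illustrative assumptions.

```python
import numpy as np

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

def lms_epoch(W, X, T, C=0.5):
    """One pass of the delta rule: W <- W - C * dE/dW for E = 1/2 * sum_p (T_p - O_p)^2."""
    for x, t in zip(X, T):
        O = sigmoid(np.dot(W, x))
        grad = -(t - O) * O * (1 - O) * x   # dE/dW for one pattern; O(1 - O) is the sigmoid derivative
        W = W - C * grad
    return W

# Illustrative patterns (assumptions)
X = np.array([[1.0, 0.5], [0.2, -1.0]])
T = np.array([1.0, 0.0])
W = np.zeros(2)
for _ in range(100):
    W = lms_epoch(W, X, T)
print(W, sigmoid(X @ W))
```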

[Figures: the activities within a processing unit; representation of a processing unit; a neural network with two different programs; uppercase C and uppercase T; various orientations of the letters C and T; the structure of the character recognition system; the letter C in the field of view; the letter T in the field of view.]

Perceptron Classifier
For example, suppose there are 4 training data points (with 2 positive examples of the class and 2 negative examples). The initial random values of the weights will probably not divide these points accurately.

X1  X2  Class
 3   4    0
 6   1    1
 4   1    1
 1   2    0

Perceptron Classifier
But during training the weight values are changed, based on the reduction of error. Eventually a line can be found that does divide the points and solves the classification task, e.g. 4*X1 + (-3.5)*X2 = 0.

Dichotomising Perceptrons
Dichotomising perceptrons divide data into 2 groups. They have a single output node and a binary response for that node. With 2 input variables the decision line is placed in a 2-dimensional space. The presence of a bias input means the line does not have to pass through the origin.

Weights as a matrix
The set of weights can be represented as a matrix W; normally each row corresponds to a weight vector leading to one output node. With a single output node there is a single row in the weight matrix. The presentation of a set of inputs X can then be expressed as the matrix multiplication W.X. Normally each column of the input matrix corresponds to one set of inputs feeding into the network. This means a 1*n matrix is multiplied by an n*1 matrix, producing the single weighted sum that is the node activation.

Worked example
Take the weights W = (1, 2, -1), with input values X1 = 2 and X2 = 0 plus a constant bias input; with the bias input taken as -1 (as in the training example below), the sum the node receives is W.X = 1*2 + 2*0 + (-1)*(-1) = 3. f(W.X) denotes applying the transfer function to the result of the matrix multiplication, i.e. f(3), and it typically only makes sense for a single value.

Training a Dichotomizing Perceptron
1. Initialise the weights to zero.
2. Propagate the input and compute the actual output of the node (o).
3. Compare the actual output (o) with the desired output (d).
4. If the desired value was not obtained, change the weights using w_{t+1} = w_t + η*(d - o)*x_t. A large η gives large weight changes (and potentially quicker training); small η values are potentially slower. η may be large at the start of training and small at the end. Big errors (d - o) also cause big weight changes.
5. Repeat from step 2 until all inputs have been presented and the weights no longer change.

Example of Training (assume threshold T = 0)
A single piece of training data: input X = (1.5, -1); with the bias input, X is (1.5, -1, -1). The target output is 0. The initial weight vector is W_{t=0} = (0, 0, 0), η = 1, and the transfer function is a threshold at T = 0.
f(W*X) is f(0*1.5 + 0*(-1) + 0*(-1)) = f(0) = 1. This is incorrect, so the weights change:
W_{t=1} = (0, 0, 0) + 1*(0 - 1)*(1.5, -1, -1) = (-1.5, 1, 1).
Present the next input, say X = (1.5, -1, -1) again, with target output 0, and compute f(W*X) with the new weight vector W_{t=1} = (-1.5, 1, 1): this is f(-1.5*1.5 + 1*(-1) + 1*(-1)) = f(-2.25 - 1 - 1) = f(-4.25) = 0. There is now no error with respect to the target value, so no further weight change is needed; training is complete. (Note how the decision line / weight vector evolved as a search was made for the correct weights.)
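
The worked training example above translates directly into code. This sketch reproduces it, assuming (as the slides do) a threshold transfer function that returns 1 when the weighted sum is at least the threshold T = 0.

```python
import numpy as np

def f(net, T=0):
    """Threshold transfer function: output 1 when net >= T, else 0 (so f(0) = 1, as in the slides)."""
    return 1 if net >= T else 0

def train_dichotomizer(patterns, eta=1.0, max_passes=10):
    """w_{t+1} = w_t + eta * (d - o) * x_t, starting from zero weights."""
    n = len(patterns[0][0])
    w = np.zeros(n)
    for _ in range(max_passes):
        changed = False
        for x, d in patterns:
            o = f(np.dot(w, x))
            if o != d:
                w = w + eta * (d - o) * np.asarray(x)
                changed = True
        if not changed:          # weights no longer change: training is complete
            break
    return w

# The slide's example: X = (1.5, -1) with bias input -1, target output 0
patterns = [((1.5, -1.0, -1.0), 0)]
print(train_dichotomizer(patterns))
```

Running it performs exactly the two presentations shown in the slides: the first update moves the weights from (0, 0, 0) to (-1.5, 1, 1), and the second presentation produces no error, so training stops.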

Outline
Introduction · Perceptron · Back Propagation · Recurrent network · Hopfield Networks · Self-Organization Maps · Summary

Back-Propagated Delta Rule Networks (BP)
Inputs are put through a hidden layer before the output layer. All nodes are connected between layers.

Learning Rule
Measure the error, then reduce that error by appropriately adjusting each of the weights in the network.

BP Network Details
Forward pass: the error is calculated from the outputs and used to update the output weights.
Backward pass: the error at the hidden nodes is calculated by back-propagating the error at the outputs through the new weights; the hidden weights are then updated.

Learning Rule: Back-propagation Network
Err_i = T_i - O_i
W_{j,i} ← W_{j,i} + α * a_j * Δ_i, with Δ_i = Err_i * g'(in_i)
where g' is the derivative of the activation function g and a_j is the activation of hidden unit j.
W_{k,j} ← W_{k,j} + α * I_k * Δ_j, with Δ_j = g'(in_j) * Σ_i W_{j,i} * Δ_i

Backpropagation Networks
To bypass the linear classification problem, we can construct multilayer networks. Typically we have fully connected, feedforward networks. [Figure: input layer I1, I2, I3 plus a bias unit fixed at 1; hidden layer H1, H2; output layer O1, O2; weights W_{i,j} into the hidden layer and W_{j,k} into the output layer; the 1's are bias inputs.]

Back Propagation
We had computed that, for output unit k, f(sum) = O(k). For the output units the delta is Δ_k = O_k * (1 - O_k) * (T_k - O_k); for the hidden units (skipping some math) it is Δ_j = O_j * (1 - O_j) * Σ_k W_{j,k} * Δ_k. (These use the sigmoid activation, whose derivative is O(1 - O).) [Figure: layers I, H, O with weights W_{i,j} and W_{j,k}.]

Learning Rule: Back-propagation Network
E = 1/2 * Σ_i (T_i - O_i)^2, and ∂E/∂W_{k,j} = -I_k * Δ_j.

BP Algorithm
Learning procedure:
1. Randomly assign weights (between 0 and 1).
2. Present inputs from the training data and propagate them to the outputs.
3. Compute the outputs O and adjust the weights according to the delta rule, back-propagating the errors. The weights will be nudged closer so that the network learns to give the desired output.
4. Repeat; stop when there are no errors, or when enough epochs have been completed.
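
A compact sketch of the BP procedure and the delta-rule updates above, for one hidden layer of sigmoid units with bias inputs; the 2-3-1 network size, the XOR training set, the learning rate, and the epoch count are illustrative assumptions.

```python
import numpy as np

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

rng = np.random.default_rng(0)
# Illustrative data: XOR (an assumption; any small training set would do)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float)
T = np.array([[0], [1], [1], [0]], float)

n_in, n_hid, n_out, alpha = 2, 3, 1, 0.5
W_ih = rng.uniform(-1, 1, (n_in + 1, n_hid))   # +1 row for the bias input fixed at 1
W_ho = rng.uniform(-1, 1, (n_hid + 1, n_out))  # +1 row for the hidden-layer bias

for epoch in range(20000):
    for x, t in zip(X, T):
        xb = np.append(x, 1.0)                     # bias as an extra input
        a_h = sigmoid(xb @ W_ih)                   # hidden activations a_j
        ab = np.append(a_h, 1.0)
        O = sigmoid(ab @ W_ho)                     # network outputs O_i
        delta_o = (t - O) * O * (1 - O)            # output deltas: Err_i * g'(in_i)
        delta_h = a_h * (1 - a_h) * (W_ho[:-1] @ delta_o)   # hidden deltas: back-propagated error
        W_ho += alpha * np.outer(ab, delta_o)      # W_ji <- W_ji + alpha * a_j * delta_i
        W_ih += alpha * np.outer(xb, delta_h)      # W_kj <- W_kj + alpha * I_k * delta_j

# After training, the network's outputs for X are typically close to the targets 0, 1, 1, 0.
```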

Backpropagation
Very powerful: with enough hidden units it can learn essentially any function. It has the usual problem of generalization vs. memorization: with too many units the network will tend to memorize the input and not generalize well; some schemes exist to "prune" the neural network. Networks require extensive training and have many parameters to fiddle with; training can be extremely slow and may fall into local minima. It is an inherently parallel algorithm, ideal for multiprocessor hardware. Despite the cons, it is a very powerful algorithm that has seen widespread successful deployment.

Prediction by BP
[Figure: prediction example using a BP network.]

Outline
Introduction · Perceptron · Back Propagation · Recurrent network · Hopfield Networks · Self-Organization Maps · Summary

Recurrent Connections
A sequence is a succession of patterns that relate to the same object, for example the letters that make up a word, or the words that make up a sentence. Sequences can vary in length, which is a challenge: how many inputs should there be for varying-length inputs?

The simple recurrent network
The Jordan network has connections that feed back from the output to the input layer, and some input layer units also feed back to themselves. It is useful for tasks that depend on a sequence of successive states. The network can be trained by backpropagation and has a form of short-term memory. The simple recurrent network (SRN) has a similar form of short-term memory.

Jordan Network
[Figure: Jordan network architecture.]

Elman Network (SRN)
The number of context units is the same as the number of hidden units.

Short-term memory in the SRN
The context units remember the previous internal state. Thus, the hidden units have the task of mapping both an external input and the previous internal state to some desired output.
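
A minimal sketch of one forward step of an Elman-style SRN as described above, where the context units simply hold a copy of the previous hidden state; the layer sizes, random weights, and toy input sequence are illustrative assumptions (training, e.g. by backpropagation over the unrolled steps, is omitted).

```python
import numpy as np

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

class ElmanSRN:
    """Elman network: the hidden units see the external input plus the previous hidden state."""
    def __init__(self, n_in, n_hidden, n_out, rng=np.random.default_rng(0)):
        self.W_in = rng.normal(0, 0.5, (n_in, n_hidden))        # input -> hidden
        self.W_ctx = rng.normal(0, 0.5, (n_hidden, n_hidden))   # context -> hidden
        self.W_out = rng.normal(0, 0.5, (n_hidden, n_out))      # hidden -> output
        self.context = np.zeros(n_hidden)                       # short-term memory

    def step(self, x):
        h = sigmoid(x @ self.W_in + self.context @ self.W_ctx)  # map input + previous internal state
        self.context = h.copy()                                 # context units copy the hidden state
        return sigmoid(h @ self.W_out)

net = ElmanSRN(n_in=3, n_hidden=4, n_out=2)
for x in np.eye(3):            # a toy sequence of three one-hot inputs (assumption)
    print(net.step(x))
```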

Recurrent Network with hidden neuron(s)
The unit delay operator z^-1 implies a dynamic system. [Figure: recurrent network with input, hidden, and output layers, and z^-1 unit-delay elements in the feedback paths.]

Outline
Introduction · Perceptron · Back Propagation · Recurrent network · Hopfield Networks · Self-Organization Maps · Summary

Hopfield Network
- John Hopfield (1982)
- Associative memory via artificial neural networks
- Solution of optimization problems
- Statistical mechanics

Associative memory
The nature of associative memory: given part of the information, the rest of the pattern is recalled.

Physical Analogy with Memory
The location of the bottom of a bowl (X0) represents the stored pattern; the ball's initial position represents the partial knowledge. With a corrugated surface we can store memories X1, X2, ..., Xn and recall the one closest to the initial state.

Key Elements of an Associative Net
- It is completely described by a state vector v = (v1, v2, ..., vm).
- There is a set of stable states v^1, v^2, ..., v^n; these correspond to the stored patterns.
- The system evolves from an arbitrary starting state v to one of the stable states by decreasing its energy E.

Hopfield Network
- Every node is connected to every other node.
- The weights are symmetric.
- It is a recurrent network.
- The state of the net is given by the vector of the node outputs (x1, x2, x3).

Neurons in the Hopfield Network
The neurons are binary units: either active (1) or passive (0), or alternatively + and -. The network contains N neurons, and the state of the network is described as a vector of 0s and 1s.

State Transition
Choose one node randomly and fire it. Calculate its activation and make the state transition: the state transits to itself, or to another state at Hamming distance 1.

State Transition Diagram
Transitions tend to take place "downhill"; the diagram is drawn to reflect the way the system decreases its energy. No matter where we start in the diagram, the final state will be either state 3 or state 6.

Defining Energy for the Net
If two nodes i and j are connected by a positive weight: when i = 1 and j = 0 and j fires, the input from i through the positive weight lets j become 1; when both nodes are 1, they reinforce each other's current output. Define the energy term e_ij = -w_ij * x_i * x_j; the energy of the net is the sum of these terms over all pairs.

The architecture of the Hopfield Network
The network is fully interconnected: all the neurons are connected to each other. The connections are bidirectional and symmetric. The setting of the weights depends on the application.

Defining Energy for the Net
Since the weights are symmetric (w_ij = w_ji), the terms involving a selected node k can be collected: if node k is selected, the part of the energy that depends on x_k is -x_k * Σ_j w_kj x_j = -x_k * a_k.

Defining Energy for the Net
The energy change after node k is updated is ΔE = -Δx_k * a_k, where a_k = Σ_j w_kj x_j.
- If a_k > 0, the output goes from 0 to 1 or stays 1; in either case Δx_k ≥ 0, and ΔE ≤ 0.
- If a_k < 0, the output goes from 1 to 0 or stays 0; in either case Δx_k ≤ 0, and ΔE ≤ 0.
So whichever node is selected, the energy of the net decreases or stays the same; after at most N changes of the states, a stable state has been reached.

Asynchronous vs. Synchronous update
- Asynchronous update: only one node is updated at any time step.
- Synchronous update: every node is updated at the same time; this requires storing both the current state vector and the next state vector.

Finding the Weights
In a stable state the nodes do not change their values, so the weights must reinforce the node values: same-valued pairs are reinforced by a positive weight, different-valued pairs by a negative weight. The storage prescription of the Hopfield net uses v_i ∈ {-1, 1}: the spin representation.

Hebb Rule: training algorithm
For each training pattern:
    present the components of the pattern at the output nodes
    for every pair of nodes:
        if the two nodes have the same value then
            make a small positive increment to the internode weight
        else
            make a small negative decrement to the internode weight
        endif
endfor

Hebb Rule
The learning rule (Hebb rule) may be written Δw_ij = η * v_i * v_j with v_i ∈ {-1, +1}: the increment is positive when the two units take the same value and negative otherwise. The original rule was proposed by Hebb (1949) in The Organization of Behavior: "When an axon of cell A is near enough to excite a cell B and repeatedly or persistently takes part in firing it, some growth process or metabolic change takes place in one or both cells such that A's efficiency, as one of the cells firing B, is increased." That is, the correlation of activity between two cells is reinforced by increasing the synaptic strength between them.

Updating the Hopfield Network
The state of the network changes at each time step. There are four updating modes:
- Serial-Random: the state of a randomly chosen single neuron is updated at each time step.
- Serial-Sequential: the state of a single neuron is updated at each time step, in a fixed sequence.
- Parallel-Synchronous: all the neurons are updated at each time step, synchronously.
- Parallel-Asynchronous: the neurons that are not in refractoriness are updated at the same time.

The updating Rule
Here we assume that updating is serial-random; updating continues until a stable state is reached. Each neuron receives a weighted sum of the inputs from the other neurons, a_i = Σ_j w_ij x_j. If this input is positive the state of the neuron becomes 1, otherwise 0.
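
A small sketch of the serial-random updating rule and the pairwise energy defined above; the symmetric weight matrix and the starting state are illustrative assumptions.

```python
import numpy as np

def energy(W, x):
    """E = -1/2 * sum_ij w_ij x_i x_j  (the sum of the pairwise terms e_ij = -w_ij x_i x_j)."""
    return -0.5 * x @ W @ x

def serial_random_update(W, x, steps=100, rng=np.random.default_rng(0)):
    """Pick one neuron at random; set it to 1 if its weighted input is positive, else 0."""
    x = x.copy()
    for _ in range(steps):
        k = rng.integers(len(x))
        a_k = W[k] @ x                      # weighted sum of inputs from the other neurons
        x[k] = 1 if a_k > 0 else 0
    return x

# Illustrative symmetric weights with zero diagonal, and a starting state (assumptions)
W = np.array([[ 0.0,  1.0, -1.0],
              [ 1.0,  0.0, -1.0],
              [-1.0, -1.0,  0.0]])
x = np.array([1.0, 0.0, 1.0])
print(energy(W, x))
print(serial_random_update(W, x))
```

Because each accepted update can only lower the energy or leave it unchanged, repeated serial-random updates settle into one of the stable states.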

Convergence of the Hopfield Network
Does the network eventually reach a stable state (convergence)? To evaluate this, an energy value is associated with the network:

E = -1/2 * Σ_i Σ_j w_ij * x_i * x_j

The system has converged when the energy is minimized.

Convergence of the Hopfield Network
Why energy? There is an analogy with spin-glass models of ferromagnetism (the Ising model): the system is stable when the energy is minimized.

Convergence of the Hopfield Network
Why convergence? [Figure/derivation slide.]

Convergence of the Hopfield Network (4)
The change of E with each update: in each case the energy decreases or remains constant, and thus the system tends to stabilize.

The Energy Function
The energy function is similar to a multidimensional (N-dimensional) terrain, with a global minimum and several local minima.

Hopfield network as a model for associative memory
Associative memory associates different features with each other, e.g. (Karen, green), (George, red), (Paul, blue), and supports recall with partial cues.

Neural Network Model of associative memory
The neurons are arranged like a grid. [Figure.]

Setting the weights
Each pattern can be denoted by a vector of -1s and 1s. If the number of patterns is m, then the weights are set by Hebbian learning:

w_ij = Σ_{p=1}^{m} v_i^p * v_j^p   (with no self-connections, w_ii = 0)

"The neurons that fire together, wire together."

Learning in the Hopfield net
[Figure: learning in the Hopfield net.]
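
A sketch of the storage prescription just given, in the ±1 spin representation, together with asynchronous recall from a partial cue; the two stored patterns and the cue are illustrative assumptions.

```python
import numpy as np

def hebbian_weights(patterns):
    """w_ij = sum_p v_i^p v_j^p with zero self-connections (v_i in {-1, +1})."""
    N = patterns.shape[1]
    W = np.zeros((N, N))
    for v in patterns:
        W += np.outer(v, v)
    np.fill_diagonal(W, 0)        # no self-connections
    return W

def recall(W, x, steps=50, rng=np.random.default_rng(1)):
    """Asynchronous recall in the spin representation: x_k <- sign(sum_j w_kj x_j)."""
    x = x.copy()
    for _ in range(steps):
        k = rng.integers(len(x))
        x[k] = 1 if W[k] @ x >= 0 else -1
    return x

# Two illustrative patterns of N = 6 spins (assumptions)
patterns = np.array([[1, -1, 1, -1, 1, -1],
                     [1,  1, 1, -1, -1, -1]])
W = hebbian_weights(patterns)
cue = np.array([1, -1, 1, -1, 1, 1])   # the first pattern with one bit flipped
print(recall(W, cue))                  # typically settles back to the first stored pattern
```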

Storage Capacity
As the number of stored patterns m increases, the chance of accurate storage must decrease.
- Hopfield's empirical work in 1982: about half of the memories were stored accurately in a net of N nodes if m = 0.15N.
- McEliece's analysis in 1987: if we require almost all of the stored memories to be recalled accurately, then the maximum number of patterns is m = N / (2 ln N). For N = 100, m = 11.

Limitations of the Hopfield Net
- The number of patterns that can be stored and accurately recalled is severely limited.
- If too many patterns are stored, the net may converge to a novel, spurious pattern: a "no match" output.
- An exemplar pattern will be unstable if it shares many bits in common with another exemplar pattern.

Outline
Introduction · Perceptron · Back Propagation · Recurrent network · Hopfield Networks · Self-Organization Maps · Summary

Self-Organization Maps
Kohonen (1982, 1984). In biological systems, cells tuned to similar orientations tend to be physically located in proximity with one another (microelectrode studies with cats). Orientation tuning over the cortical surface forms a kind of map, with similar tunings found close to each other: a topographic feature map. The idea is to train a network using competitive learning to create such feature maps automatically.

SOM Clustering
The self-organizing map (SOM) is an unsupervised artificial neural network that maps high-dimensional data into a two-dimensional representation space; similar documents may then be found in neighboring regions. Disadvantages: the map has a fixed size in terms of the number of units and their particular arrangement, and hierarchical relations between the input data are not mirrored in a straightforward manner.

Features
Kohonen's algorithm creates a vector quantizer by adjusting the weights from common input nodes to M output nodes. Continuous-valued input vectors are presented without specifying the desired output. After learning, the weights are organized such that topologically close nodes are sensitive to inputs that are physically similar; the output nodes are thus ordered in a natural manner.

Initial setup of the SOM
The map consists of a set of units i in a two-dimensional grid. Each unit i is assigned a weight vector m_i of the same dimension as the input data. The initial weight vectors are assigned random values.

Winner Selection
Pick a random input vector x(t) and compute the unit c with the highest activity level (the winner c(t)) by the Euclidean distance formula: c(t) = argmin_i || x(t) - m_i(t) ||.

Learning Process (Adaptation)
The adaptation is guided by a learning rate α, which tunes the weight vectors from their random initial values towards the actual input space, and by a neighborhood around the winner that is decreased towards the currently presented input pattern, so that inputs are mapped onto regions close to each other in the grid of output patterns. This can be viewed as a neural network version of k-means clustering.

Learning Process (Adaptation)
[Figure: adaptation of the weight vectors.]

Neighborhood Strategy
A Gaussian is used to define the neighborhood kernel h_ci:

h_ci(t) = exp( -||r_c - r_i||^2 / (2 * σ(t)^2) )

where ||r_c - r_i||^2 denotes the (squared) distance between the grid positions of the winner node c and unit i. The time-varying parameter σ(t) enables the formation of large clusters in the beginning and fine-grained input discrimination towards the end of the learning process.

Within-Cluster Distance vs. Between-Clusters Distance
[Figure.]
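
A sketch of one SOM adaptation step combining the winner selection and the Gaussian neighborhood kernel above; the 5x5 grid, the α and σ schedules, and the random inputs are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
grid_h, grid_w, dim = 5, 5, 3                     # 5x5 map of units, 3-D inputs (assumption)
m = rng.random((grid_h * grid_w, dim))            # weight vectors m_i, random initialization
r = np.array([(i, j) for i in range(grid_h) for j in range(grid_w)], float)  # grid positions r_i

def som_step(x, t, alpha0=0.5, sigma0=2.0, T=1000.0):
    alpha = alpha0 * (1 - t / T)                  # decreasing learning rate
    sigma = sigma0 * (1 - t / T) + 0.1            # shrinking neighborhood width
    c = np.argmin(np.linalg.norm(x - m, axis=1))  # winner: closest m_i in Euclidean distance
    h = np.exp(-np.sum((r - r[c])**2, axis=1) / (2 * sigma**2))  # Gaussian kernel h_ci
    m[:] += alpha * h[:, None] * (x - m)          # m_i <- m_i + alpha * h_ci * (x - m_i)

for t in range(1000):
    som_step(rng.random(dim), t)                  # random illustrative inputs
```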

SOM Algorithm
[Flowchart: Initialise network, Get input, Find focus, Update focus, Update neighbourhood, Adjust neighbourhood size, then repeat from Get input.]

Learning
Decreasing the neighborhood ensures that progressively finer features are encoded; gradually lowering the learning rate ensures stability.

Basic Algorithm
Initialize the map (randomly assign weights). Then loop over the training examples:
- Assign the input unit values according to the values in the current example.
- Find the "winner", i.e. the output unit that most closely matches the input units, using some distance metric: for all output units j = 1 to m and input units i = 1 to n, find the unit that minimizes Σ_i (x_i - w_ij)^2.
- Modify the weights on the winner to more closely match the input: Δw_ij = c * (x_i - w_ij), where c is a small positive learning constant that usually decreases as the learning proceeds.

Result of Algorithm
Initially, some output nodes will randomly be a little closer to some particular type of input. These nodes become "winners", and the weight updates move them even closer to those inputs. Over time, nodes in the output layer become representative prototypes for examples in the input. Note that there is no supervised training here. Classification: given a new input, the class is the output node that is the winner.

Typical Usage: 2D Feature Map
In typical usage the output nodes form a 2D "map" organized in a grid-like fashion, and we update weights in a neighborhood around the winner. [Figure: input layer I1, I2, I3 feeding a 5x5 output grid O11 through O55.]

Modified Algorithm
Initialize the map (randomly assign weights). Then loop over the training examples:
- Assign the input unit values according to the values in the current example.
- Find the "winner", i.e. the output unit that most closely matches the input units, using some distance metric.
- Modify the weights on the winner to more closely match the input.
- Modify the weights in a neighborhood around the winner, so the neighbors on the 2D map also become closer to the input.
Over time this will tend to cluster similar items closer together on the map.

Updating the Neighborhood
[Figure: node O44 is the winner; the shading indicates the scaling used when updating its neighbors, e.g. c = 1 at the winner and c = 0.75, c = 0.5 further out on the 5x5 grid.] Consider what happens if O42 is the winner for some other input: the two winners "fight" over claiming O43, O33, and O53.

Selecting the Neighborhood
Typically a "sombrero function" or a Gaussian function is used. The neighborhood size usually decreases over time, to allow an initial "jockeying for position" and then "fine-tuning" as the algorithm proceeds.

Hierarchical and Partitive Approaches
Partitive algorithm:
1. Determine the number of clusters.
2. Initialize the cluster centers.
3. Compute the partitioning for the data.
4. Compute (update) the cluster centers.
5. If the partitioning is unchanged (or the algorithm has converged), stop; otherwise, return to step 3.
k-means minimizes the error function E = Σ_k Σ_{x in Q_k} ||x - c_k||^2, where Q_k is the set of data vectors assigned to cluster k and c_k is its center.
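
A short sketch of the partitive (k-means) procedure above, alternating partitioning and center updates and reporting the error function E; the generated 2-D data and the choice k = 2 are illustrative assumptions.

```python
import numpy as np

def kmeans(X, k, iters=100, rng=np.random.default_rng(0)):
    """Partitive algorithm: alternate partitioning and center updates until unchanged."""
    centers = X[rng.choice(len(X), k, replace=False)]      # 2. initialize the cluster centers
    labels = np.full(len(X), -1)
    for _ in range(iters):
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        new_labels = d.argmin(axis=1)                      # 3. compute the partitioning
        if np.array_equal(new_labels, labels):             # 5. stop when unchanged
            break
        labels = new_labels
        for j in range(k):                                 # 4. update the cluster centers
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    error = sum(((X[labels == j] - centers[j]) ** 2).sum() for j in range(k))  # k-means error E
    return centers, labels, error

# Illustrative 2-D data drawn around two centers (assumption)
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.3, (20, 2)), rng.normal(3, 0.3, (20, 2))])
print(kmeans(X, k=2)[0])
```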

Hierarchical and Partitive Approaches
Hierarchical clustering algorithm (dendrogram):
1. Initialize: assign each vector to its own cluster.
2. Compute the distances between all clusters.
3. Merge the two clusters that are closest to each other.
4. Return to step 2 until there is only one cluster left.
Partition strategy: cut the dendrogram at different levels.

Hierarchical SOM
GHSOM, the Growing Hierarchical Self-Organizing Map, grows in size in order to represent a collection of data at a particular level of detail.

Plastic Self Organising Maps
A family of similar networks: they add neurons using an error threshold, use link lengths for pruning, remove unconnected neurons, and converge quickly.

PSOM Architecture
[Figure: each neuron holds a weight vector, e.g. (0.1, 0.6, 0.2, 0.4).]

PSOM Algorithm
[Flowchart: Initialise; Accept input; Find focus; Is the Euclidean distance between the input and the focus larger than the error threshold? If yes, create neurons; if no, update the focus and its neighbourhood. Then age the links, remove long links, remove unconnected neurons, and accept the next input.]

神经网络集成 (Neural network ensembles)
In 1996, Sollich and Krogh defined a neural network ensemble as follows: a neural network ensemble uses a finite number of neural networks to learn the same problem, and the output of the ensemble on a given input example is determined jointly by the outputs of the constituent networks on that example.
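
A tiny sketch of the ensemble idea in the Sollich-Krogh sense: the ensemble's output on an input is determined jointly by its member networks, here simply by averaging; the three member predictors are illustrative stand-ins for trained networks.

```python
import numpy as np

def ensemble_output(members, x):
    """The ensemble's output on x is determined jointly by its members; here, by averaging."""
    return np.mean([m(x) for m in members], axis=0)

# Illustrative stand-ins for trained member networks (assumptions)
members = [lambda x: np.tanh(x @ np.array([0.5, -0.2])),
           lambda x: np.tanh(x @ np.array([0.3,  0.4])),
           lambda x: np.tanh(x @ np.array([-0.1, 0.6]))]
print(ensemble_output(members, np.array([1.0, 2.0])))
```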

神经元整体 (The neural whole)
Current research on artificial neural networks lacks the ability to analyze systems in terms of the overall structure of information processing, and therefore it is difficult for it to reflect the structure of human cognition. Because research on the overall and global structure has been neglected, neural networks are still largely ignorant of the mechanisms behind the hierarchical and functionally modular organization of complex model structures.

功能柱 (Functional columns)
At the end of the 1960s, American scientists discovered that in the visual cortex of the brain, large numbers of neurons with the same image-feature selectivity and the same receptive-field position are arranged, perpendicular to the cortical surface, into columnar structures: functional columns. For more than 30 years, brain research has regarded this vertical columnar structure as a basic principle of the functional organization of the brain. However, traditional research on functional columns cannot yet explain how the visual system processes large-scale, complex image information.

功能柱 (Functional columns, continued)
The laboratory of 李朝义 (Li Chaoyi) at the Institute of Neuroscience, Shanghai Institutes for Biological Sciences, Chinese Academy of Sciences, studying the cat visual cortex, found that the primary visual cortex contains a functional structure related to the processing of large-scale, complex image features. Unlike all currently known structures it is not columnar; it forms a great many small spheres, about 300 micrometers in diameter, scattered and embedded within the known vertical functional columns. This is a second-level functional architecture built on top of the simple-feature functional columns, and it processes various kinds of more complex image information. The visual system may use exactly this neural mechanism to separate a target from a complex background image with a limited amount of information.

神经场 (Neural fields)
How should functional columns be modeled? Neural field research starts from the overall structure of the information processing system; in general the system is represented as a non-Euclidean space (a manifold formed under a given topological structure). A key problem is to establish the coupling between the manifold of the environment's structure and the neural manifold, to use the ideas of manifolds, the concepts of topology, and statistical inference to study the properties of the overall structure, and to use global invariance properties to handle and analyze the process of optimally approximating the representational structure with the neural manifold.

神经场 (Neural fields)
[Figure: input units x, hidden units y, output units z; a Field Organization Model with weights W1 (field organization weights) and a Field Action Model with weights W2 (field action weights).]

神经场 (Neural fields, continued)
The neural field framework expresses the structure of holistic information processing in two respects. On the one hand, the coding of the representational structure and the model structure are expressed through topology, with a hierarchical, modular organization that forms tree-shaped chain structures; the model structure can be extended to infinite models and has a fractal-dimensional organizing mechanism, decomposing into hierarchical structures. We attempt to use algebraic topology to describe this structure and to capture its global invariance; the neural network then has a mechanism for modeling the structure, i.e. for approximating the system's structure. On the other hand, complex models are assembled from simple models, and simple models are embedded into more complex structures; information geometry studies the relation between local and global invariant metrics and studies global invariants, that is, the global optimization process of learning.

脑功能区之间的耦合 (Coupling between brain functional areas)
As for how local neural networks, built on cellular and molecular events, are assembled into the huge and complex brain that realizes higher functions, we lack effective research methods and have only very vague theoretical ideas.

展望 (Outlook)
We need to create a series of new methods, including some that are new in principle, to connect the working mechanisms of ion channels, synapses, and neurons with the higher functions of the brain, and to carry out research on neural computing based on the latest results of neurobiology.

References
1. Simon Haykin. Neural Networks: A Comprehensive Foundation. 2nd Edition, 1998, Prentice Hall.
2. Neural Networks Software, URL: http:/
