Transcript of "Going Deeper with Convolutions" (CIS 601 Presentation 2-2, cis.csuohio.edu/~sschung/CIS601)
Going Deeper with Convolutions
Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed, Dragomir Anguelov, Dumitru Erhan, Vincent Vanhoucke, and Andrew Rabinovich
PRESENTED BY: KAYLEE YUHAS AND KYLE COFFEY
About Neural Networks
• Neural networks can be used in many different capacities, often by capitalizing on skills they share with other AI systems:
  • Object classification, such as with images
    - Given images of 2 different wolves, can identify the subspecies
  • Speech recognition
  • Through interactive media such as video games, identifying how people respond to different stimuli in various environments and situations
• This work requires a hefty amount of resources to run smoothly
• Traditional neural network architecture has remained mostly constant
How to improve on traditional neural network setups?
• Increasing the performance of a neural network by increasing its size, while seemingly logically sound, has severe drawbacks:
  • An increased number of parameters makes the network prone to overfitting
  • A larger network size requires more computational resources

[Figure: curves fitted to noisy data; the green line shows an overfitted model]
How to improve on traditional neural network setups?
• How to improve performance without more hardware?
  • By approximating sparse network structures with computations on dense matrices
• This sparse architecture's name is Inception, based on the 2010 film of the same name
• Introducing sparsity into the architecture by replacing fully connected layers with sparse ones, even inside convolutions, is key
• This mimics biological systems
Inception Architecture: Naïve Version
• The paper's authors determined this was the optimal spatial spread, with "the decision based more on convenience than necessity"
  • This can be repeated spatially for scaling
  • This alignment also avoids patch-alignment issues
• However, 5x5 modules quickly become prohibitively expensive on convolutional layers with a large number of filters

In short: inputs come from the previous layer and go through various parallel convolutional layers. The pooling layer serves to control overfitting by reducing spatial size.
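The channel bookkeeping behind that expense can be sketched in a few lines. This is not code from the paper; the filter counts are hypothetical, and all branches are assumed to use "same" padding so their outputs can be concatenated along the channel axis:

```python
# Sketch (illustrative, not the paper's code): output-channel count of a naïve
# Inception module. The 1x1, 3x3, and 5x5 branches each contribute their own
# filters, and the max-pooling branch passes the input depth through unchanged.
def naive_inception_out_channels(c_in, n1x1, n3x3, n5x5):
    return n1x1 + n3x3 + n5x5 + c_in

# Hypothetical module: 192 input channels, 64/128/32 filters per branch.
print(naive_inception_out_channels(192, 64, 128, 32))  # 416
```

Because the pooling branch copies the full input depth, the output is always deeper than the input (416 vs. 192 here), so stacking naïve modules makes each stage strictly more expensive than the last.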
Inception Architecture: Dimensionality Reduction
• By computing reductions with 1x1 convolutions before reaching the more expensive 3x3 and 5x5 convolutions, the necessary processing power is tremendously reduced
• The use of dimensionality reductions allows for significant increases in the number of units at each stage without a sharp increase in necessary computational resources at later, more complex stages
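The savings can be made concrete with a multiply count. The feature-map and filter sizes below are illustrative, not taken from the paper's tables:

```python
# Sketch: multiply counts for a 5x5 convolution on a 28x28x192 feature map
# producing 32 output channels, with and without a hypothetical 1x1 "reduce"
# layer of 16 filters in front.
def conv_mults(h, w, c_in, k, c_out):
    # Each of the h*w output positions computes c_out dot products of size k*k*c_in.
    return h * w * c_out * k * k * c_in

direct = conv_mults(28, 28, 192, 5, 32)  # 5x5 straight on the 192-channel input
reduced = conv_mults(28, 28, 192, 1, 16) + conv_mults(28, 28, 16, 5, 32)  # 1x1 reduce, then 5x5
print(direct, reduced)  # 120422400 12443648 — roughly 9.7x fewer multiplies
```

The 1x1 layer is cheap because its dot products are only `c_in` long, yet it shrinks the depth that the expensive 5x5 filters must process, which is exactly the effect the slide describes.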
GoogLeNet
• An iteration of Inception the paper's authors used as their submission to the 2014 ImageNet Large Scale Visual Recognition Competition (ILSVRC).
• The network was designed to be so efficient it could run with a low memory footprint on individual devices that have limited computational resources.
  • If CNNs are to gain a foothold in private industry, having low overhead costs is especially important.
Here is a small sample of the architecture of GoogLeNet, where you can note the usage of dimensionality reduction as opposed to the naïve version.
GoogLeNet
• The entirety of the architecture is far too large to fit legibly on one slide.
GoogLeNet
• The GoogLeNet incarnation of the Inception architecture.
• "#3x3 reduce" and "#5x5 reduce" stand for the number of 1x1 filters in the reduction layer used before the 3x3 and 5x5 convolutions.
• While there are many layers to this, the main goal is to have the final "softmax" layers give "scores" to the image classes.
  • i.e. dogs, skin diseases, etc.
• The loss function determines how good or bad each score is.
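That scoring step can be sketched directly: softmax turns raw class scores into probabilities, and the standard cross-entropy loss penalizes low probability on the true class. The three-class scores below are made up for illustration:

```python
import math

# Sketch: softmax scoring plus cross-entropy loss, as described on the slide.
def softmax(scores):
    exps = [math.exp(s - max(scores)) for s in scores]  # shift by max for numerical stability
    total = sum(exps)
    return [e / total for e in exps]

def cross_entropy(probs, true_idx):
    # Small loss when the true class has high probability, large loss otherwise.
    return -math.log(probs[true_idx])

probs = softmax([2.0, 1.0, 0.1])     # three hypothetical class scores
loss_good = cross_entropy(probs, 0)  # true class scored highest: small loss
loss_bad = cross_entropy(probs, 2)   # true class scored lowest: large loss
print(loss_good < loss_bad)  # True
```

This is how "good or bad" is made numeric during training: the network's weights are adjusted to push this loss down.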
GoogLeNet
• GoogLeNet was 22 layers deep, when counting only layers with parameters.
  • 27 if you count pooling
  • About 100 total layers
• Could be trained to convergence with a few high-end GPUs in about a week
  • The main limitation would be memory usage
• It was trained to classify images into one of 1,000 leaf-node image categories in the ImageNet hierarchy
  • ImageNet is a large visual database designed specifically for visual object recognition software research
  • GoogLeNet performed quite well in this contest
GoogLeNet
• Left: GoogLeNet's performance at the 2014 ILSVRC, where it came in first place.
• Right: A breakdown of its classification performance.
  • Using multiple different CNNs and averaging their scores to get a prediction class for an image results in better scores than just 1 CNN. See the instance with 7 CNNs.
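The ensembling idea from that last bullet can be sketched in a few lines: average the per-class probability vectors from several models, then take the class with the highest average. The three "model" outputs below are hypothetical:

```python
# Sketch: ensemble prediction by averaging class probabilities across models.
def ensemble_predict(model_probs):
    n_models = len(model_probs)
    n_classes = len(model_probs[0])
    avg = [sum(p[c] for p in model_probs) / n_models for c in range(n_classes)]
    return max(range(n_classes), key=lambda c: avg[c])  # index of the best average

preds = [
    [0.6, 0.3, 0.1],  # model 1 favors class 0
    [0.2, 0.5, 0.3],  # model 2 favors class 1
    [0.5, 0.1, 0.4],  # model 3 favors class 0
]
print(ensemble_predict(preds))  # 0
```

Averaging smooths out any single model's idiosyncratic errors, which is why the 7-CNN ensemble on the slide outscores a single network.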
Summary
• Convolutional neural networks are still top performers among neural networks.
• The Inception framework allows for large scaling while minimizing processing bottlenecks, as well as "choke points" where the network becomes inefficient past a certain scale.
  • It also runs well on machines without powerful hardware.
• Reducing with 1x1 convolutions before passing to the 3x3 and 5x5 convolutions has proven efficient and effective.
• Further study: is mimicking actual biological conditions universally the best case for neural network architecture?

References
Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., & Rabinovich, A. (2015). Going deeper with convolutions. 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). doi:10.1109/cvpr.2015.7298594
Chabacano. (2008, February). Overfitting. Retrieved April 8, 2017, from https://en.wikipedia.org/wiki/Overfitting