Papers from recent years on the interpretability of deep learning models.
A version of this list sorted by citation count is also available.
PDFs of all 159 papers have been uploaded to Tencent Weiyun (two of them have to be retrieved via Sci-Hub).
Updated from time to time.
|Year|Publication|Paper|Citation|Code|
|:---:|:---:|:---|:---:|:---:|
|2020|CVPR|Explaining Knowledge Distillation by Quantifying the Knowledge|3||
|2020|CVPR|High-frequency Component Helps Explain the Generalization of Convolutional Neural Networks|16||
|2020|CVPRW|Score-CAM: Score-Weighted Visual Explanations for Convolutional Neural Networks|7|Pytorch|
|2020|ICLR|Knowledge consistency between neural networks and beyond|3||
|2020|ICLR|Interpretable Complex-Valued Neural Networks for Privacy Protection|2||
|2019|AI|Explanation in artificial intelligence: Insights from the social sciences|662||
|2019|NMI|Stop Explaining Black Box Machine Learning Models for High Stakes Decisions and Use Interpretable Models Instead|389||
|2019|NeurIPS|Can you trust your model's uncertainty? Evaluating predictive uncertainty under dataset shift|136|-|
|2019|NeurIPS|This looks like that: deep learning for interpretable image recognition|80|Pytorch|
|2019|NeurIPS|A benchmark for interpretability methods in deep neural networks|28||
|2019|NeurIPS|Full-gradient representation for neural network visualization|7||
|2019|NeurIPS|On the (In)fidelity and Sensitivity of Explanations|13||
|2019|NeurIPS|Towards Automatic Concept-based Explanations|25|Tensorflow|
|2019|NeurIPS|CXPlain: Causal explanations for model interpretation under uncertainty|12||
|2019|CVPR|Interpreting CNNs via Decision Trees|85||
|2019|CVPR|From Recognition to Cognition: Visual Commonsense Reasoning|97|Pytorch|
|2019|CVPR|Attention branch network: Learning of attention mechanism for visual explanation|39||
|2019|CVPR|Interpretable and fine-grained visual explanations for convolutional neural networks|18||
|2019|CVPR|Learning to Explain with Complemental Examples|12||
|2019|CVPR|Revealing Scenes by Inverting Structure from Motion Reconstructions|20|Tensorflow|
|2019|CVPR|Multimodal Explanations by Predicting Counterfactuality in Videos|4||
|2019|CVPR|Visualizing the Resilience of Deep Convolutional Network Interpretations|1||
|2019|ICCV|U-CAM: Visual Explanation using Uncertainty based Class Activation Maps|10||
|2019|ICCV|Towards Interpretable Face Recognition|7||
|2019|ICCV|Taking a HINT: Leveraging Explanations to Make Vision and Language Models More Grounded|28||
|2019|ICCV|Understanding Deep Networks via Extremal Perturbations and Smooth Masks|17|Pytorch|
|2019|ICCV|Explaining Neural Networks Semantically and Quantitatively|6||
|2019|ICLR|Hierarchical interpretations for neural network predictions|24|Pytorch|
|2019|ICLR|How Important Is a Neuron?|32||
|2019|ICLR|Visual Explanation by Interpretation: Improving Visual Feedback Capabilities of Deep Neural Networks|13||
|2018|ICML|Extracting Automata from Recurrent Neural Networks Using Queries and Counterexamples|71|Pytorch|
|2019|ICML|Towards A Deep and Unified Understanding of Deep Neural Models in NLP|15|Pytorch|
|2019|ICAIS|Interpreting black box predictions using fisher kernels|24||
|2019|ACMFAT|Explaining explanations in AI|119||
|2019|AAAI|Interpretation of neural networks is fragile|130|Tensorflow|
|2019|AAAI|Classifier-agnostic saliency map extraction|8||
|2019|AAAI|Can You Explain That? Lucid Explanations Help Human-AI Collaborative Image Retrieval|1||
|2019|AAAIW|Unsupervised Learning of Neural Networks to Explain Neural Networks|10||
|2019|AAAIW|Network Transplanting|4||
|2019|CSUR|A Survey of Methods for Explaining Black Box Models|655||
|2019|JVCIR|Interpretable convolutional neural networks via feedforward design|31|Keras|
|2019|ExplainAI|The (Un)reliability of saliency methods|128||
|2019|ACL|Attention is not Explanation|157||
|2019|EMNLP|Attention is not not Explanation|57||
|2019|arxiv|Attention Interpretability Across NLP Tasks|16||
|2019|arxiv|Interpretable CNNs|2||
|2018|ICLR|Towards better understanding of gradient-based attribution methods for deep neural networks|245||
|2018|ICLR|Learning how to explain neural networks: PatternNet and PatternAttribution|143||
|2018|ICLR|On the importance of single directions for generalization|134|Pytorch|
|2018|ICLR|Detecting statistical interactions from neural network weights|56|Pytorch|
|2018|ICLR|Interpretable counting for visual question answering|29|Pytorch|
|2018|CVPR|Interpretable Convolutional Neural Networks|250||
|2018|CVPR|Tell me where to look: Guided attention inference network|134|Chainer|
|2018|CVPR|Multimodal Explanations: Justifying Decisions and Pointing to the Evidence|126|Caffe|
|2018|CVPR|Transparency by design: Closing the gap between performance and interpretability in visual reasoning|79|Pytorch|
|2018|CVPR|Net2vec: Quantifying and explaining how concepts are encoded by filters in deep neural networks|60||
|2018|CVPR|What have we learned from deep representations for action recognition?|30||
|2018|CVPR|Learning to Act Properly: Predicting and Explaining Affordances from Images|24||
|2018|CVPR|Teaching Categories to Human Learners with Visual Explanations|20|Pytorch|
|2018|CVPR|What do deep networks like to see?|19||
|2018|CVPR|Interpret Neural Networks by Identifying Critical Data Routing Paths|13|Tensorflow|
|2018|ECCV|Deep clustering for unsupervised learning of visual features|382|Pytorch|
|2018|ECCV|Explainable neural computation via stack neural module networks|55|Tensorflow|
|2018|ECCV|Grounding visual explanations|44||
|2018|ECCV|Textual explanations for self-driving vehicles|59||
|2018|ECCV|Interpretable basis decomposition for visual explanation|51|Pytorch|
|2018|ECCV|Convnets and imagenet beyond accuracy: Understanding mistakes and uncovering biases|36||
|2018|ECCV|Vqa-e: Explaining, elaborating, and enhancing your answers for visual questions|20||
|2018|ECCV|Choose Your Neuron: Incorporating Domain Knowledge through Neuron-Importance|16|Pytorch|
|2018|ECCV|Diverse feature visualizations reveal invariances in early layers of deep neural networks|9|Tensorflow|
|2018|ECCV|ExplainGAN: Model Explanation via Decision Boundary Crossing Transformations|6||
|2018|ICML|Interpretability beyond feature attribution: Quantitative testing with concept activation vectors|214|Tensorflow|
|2018|ICML|Learning to explain: An information-theoretic perspective on model interpretation|117||
|2018|ACL|Did the Model Understand the Question?|63|Tensorflow|
|2018|FITEE|Visual interpretability for deep learning: a survey|243||
|2018|NeurIPS|Sanity Checks for Saliency Maps|249||
|2018|NeurIPS|Explanations based on the missing: Towards contrastive explanations with pertinent negatives|79|Tensorflow|
|2018|NeurIPS|Towards robust interpretability with self-explaining neural networks|145|Pytorch|
|2018|NeurIPS|Attacks meet interpretability: Attribute-steered detection of adversarial samples|55||
|2018|NeurIPS|DeepPINK: reproducible feature selection in deep neural networks|30|Keras|
|2018|NeurIPS|Representer point selection for explaining deep neural networks|30|Tensorflow|
|2018|NeurIPS Workshop|Interpretable convolutional filters with sincNet|37||
|2018|AAAI|Anchors: High-precision model-agnostic explanations|366||
|2018|AAAI|Improving the adversarial robustness and interpretability of deep neural networks by regularizing their input gradients|178|Tensorflow|
|2018|AAAI|Deep learning for case-based reasoning through prototypes: A neural network that explains its predictions|102|Tensorflow|
|2018|AAAI|Interpreting CNN Knowledge via an Explanatory Graph|79|Matlab|
|2018|AAAI|Examining CNN Representations with respect to Dataset Bias|37||
|2018|WACV|Grad-cam++: Generalized gradient-based visual explanations for deep convolutional networks|174||
|2018|IJCV|Top-down neural attention by excitation backprop|329||
|2018|TPAMI|Interpreting deep visual representations via network dissection|87||
|2018|DSP|Methods for interpreting and understanding deep neural networks|713||
|2018|Access|Peeking inside the black-box: A survey on Explainable Artificial Intelligence (XAI)|390||
|2018|JAIR|Learning Explanatory Rules from Noisy Data|155|Tensorflow|
|2018|MIPRO|Explainable artificial intelligence: A survey|108||
|2018|BMVC|Rise: Randomized input sampling for explanation of black-box models|85||
|2018|arxiv|Distill-and-Compare: Auditing Black-Box Models Using Transparent Model Distillation|30||
|2018|arxiv|Manipulating and measuring model interpretability|133||
|2018|arxiv|How convolutional neural networks see the world - A survey of convolutional neural network visualization methods|45||
|2018|arxiv|Revisiting the importance of individual units in cnns via ablation|43||
|2018|arxiv|Computationally Efficient Measures of Internal Neuron Importance|1||
|2017|ICML|Understanding Black-box Predictions via Influence Functions|767|Pytorch|
|2017|ICML|Axiomatic attribution for deep networks|755|Keras|
|2017|ICML|Learning Important Features Through Propagating Activation Differences|655||
|2017|ICLR|Visualizing deep neural network decisions: Prediction difference analysis|271|Caffe|
|2017|ICLR|Exploring LOTS in Deep Neural Networks|27||
|2017|NeurIPS|A Unified Approach to Interpreting Model Predictions|1411||
|2017|NeurIPS|Real time image saliency for black box classifiers|161|Pytorch|
|2017|NeurIPS|SVCCA: Singular Vector Canonical Correlation Analysis for Deep Learning Dynamics and Interpretability|160||
|2017|CVPR|Mining Object Parts from CNNs via Active Question-Answering|20||
|2017|CVPR|Network dissection: Quantifying interpretability of deep visual representations|540||
|2017|CVPR|Improving Interpretability of Deep Neural Networks with Semantic Information|56||
|2017|CVPR|MDNet: A Semantically and Visually Interpretable Medical Image Diagnosis Network|129|Torch|
|2017|CVPR|Making the V in VQA matter: Elevating the role of image understanding in Visual Question Answering|582||
|2017|CVPR|Knowing when to look: Adaptive attention via a visual sentinel for image captioning|620|Torch|
|2017|CVPRW|Interpretable 3d human action analysis with temporal convolutional networks|163||
|2017|ICCV|Grad-cam: Visual explanations from deep networks via gradient-based localization|2444|Pytorch|
|2017|ICCV|Interpretable Explanations of Black Boxes by Meaningful Perturbation|419|Pytorch|
|2017|ICCV|Interpretable Learning for Self-Driving Cars by Visualizing Causal Attention|114||
|2017|ICCV|Understanding and comparing deep neural networks for age and gender classification|52||
|2017|ICCV|Learning to disambiguate by asking discriminative questions|12||
|2017|IJCAI|Right for the right reasons: Training differentiable models by constraining their explanations|149||
|2017|IJCAI|Understanding and improving convolutional neural networks via concatenated rectified linear units|276|Caffe|
|2017|AAAI|Growing Interpretable Part Graphs on ConvNets via Multi-Shot Learning|37|Matlab|
|2017|ACL|Visualizing and Understanding Neural Machine Translation|92||
|2017|EMNLP|A causal framework for explaining the predictions of black-box sequence-to-sequence models|92||
|2017|CVPR Workshop|Looking under the hood: Deep neural network visualization to interpret whole-slide image analysis outcomes for colorectal polyps|21||
|2017|survey|Interpretability of deep learning models: a survey of results|99||
|2017|arxiv|SmoothGrad: removing noise by adding noise|356||
|2017|arxiv|Interpretable & explorable approximations of black box models|115||
|2017|arxiv|Distilling a neural network into a soft decision tree|188|Pytorch|
|2017|arxiv|Towards interpretable deep neural networks by leveraging adversarial examples|54||
|2017|arxiv|Explainable artificial intelligence: Understanding, visualizing and interpreting deep learning models|383||
|2017|arxiv|Contextual Explanation Networks|35|Pytorch|
|2017|arxiv|Challenges for transparency|83||
|2017|ACMSOPP|Deepxplore: Automated whitebox testing of deep learning systems|431||
|2017|CEURW|What does explainable AI really mean? A new conceptualization of perspectives|117||
|2017|TVCG|ActiVis: Visual Exploration of Industry-Scale Deep Neural Network Models|158||
|2016|NeurIPS|Synthesizing the preferred inputs for neurons in neural networks via deep generator networks|321|Caffe|
|2016|NeurIPS|Understanding the effective receptive field in deep convolutional neural networks|436||
|2016|CVPR|Inverting Visual Representations with Convolutional Networks|336||
|2016|CVPR|Visualizing and Understanding Deep Texture Representations|98||
|2016|CVPR|Analyzing Classifiers: Fisher Vectors and Deep Neural Networks|110||
|2016|ECCV|Generating Visual Explanations|303|Caffe|
|2016|ECCV|Design of kernels in convolutional neural networks for image classification|14||
|2016|ICML|Understanding and improving convolutional neural networks via concatenated rectified linear units|276||
|2016|ICML|Visualizing and comparing AlexNet and VGG using deconvolutional layers|41||
|2016|EMNLP|Rationalizing Neural Predictions|355|Pytorch|
|2016|IJCV|Visualizing deep convolutional neural networks using natural pre-images|281|Matlab|
|2016|IJCV|Visualizing Object Detection Features|27|Caffe|
|2016|KDD|Why should i trust you?: Explaining the predictions of any classifier|3511||
|2016|TVCG|Visualizing the hidden activity of artificial neural networks|170||
|2016|TVCG|Towards better analysis of deep convolutional neural networks|241||
|2016|NAACL|Visualizing and understanding neural models in nlp|364|Torch|
|2016|arxiv|Understanding neural networks through representation erasure|198||
|2016|arxiv|Grad-CAM: Why did you say that?|130||
|2016|arxiv|Investigating the influence of noise and distractors on the interpretation of neural networks|41||
|2016|arxiv|Attentive Explanations: Justifying Decisions and Pointing to the Evidence|54||
|2016|arxiv|The Mythos of Model Interpretability|1368||
|2016|arxiv|Multifaceted feature visualization: Uncovering the different types of features learned by each neuron in deep neural networks|161||
|2015|ICLR|Striving for Simplicity: The All Convolutional Net|2268|Pytorch|
|2015|CVPR|Understanding deep image representations by inverting them|1129|Matlab|
|2015|ICCV|Understanding deep features with computer-generated imagery|109|Caffe|
|2015|ICML Workshop|Understanding Neural Networks Through Deep Visualization|1216|Tensorflow|
|2015|AAS|Interpretable classifiers using rules and Bayesian analysis: Building a better stroke prediction model|385||
|2014|ECCV|Visualizing and Understanding Convolutional Networks|9873|Pytorch|
|2014|ICLR|Deep Inside Convolutional Networks: Visualising Image Classification Models and Saliency Maps|2745|Pytorch|
|2013|ICCV|Hoggles: Visualizing object detection features|301||
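Of the methods above, Grad-CAM (ICCV 2017) is the most-cited one with public code. The sketch below illustrates the idea, assuming a torchvision ResNet-50 (torchvision ≥ 0.13 for the `weights` argument) with `layer4` as the target layer; it is an illustrative re-implementation, not the authors' released code.

```python
import torch
import torch.nn.functional as F
from torchvision import models

# Pretrained backbone; any CNN with a clear last conv block works the same way.
model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT).eval()

activations, gradients = {}, {}

def forward_hook(module, inputs, output):
    activations["value"] = output.detach()

def backward_hook(module, grad_input, grad_output):
    gradients["value"] = grad_output[0].detach()

# Hook the last convolutional block; earlier layers give higher-resolution
# but less class-discriminative maps.
model.layer4.register_forward_hook(forward_hook)
model.layer4.register_full_backward_hook(backward_hook)

def grad_cam(x, class_idx=None):
    """Return an (H, W) heatmap in [0, 1] for a preprocessed batch of size 1."""
    logits = model(x)
    if class_idx is None:
        class_idx = logits.argmax(dim=1).item()
    model.zero_grad()
    logits[0, class_idx].backward()
    # Per-channel weights: global-average-pool the gradients (alpha_k in the paper).
    alpha = gradients["value"].mean(dim=(2, 3), keepdim=True)   # (1, C, 1, 1)
    # Weighted sum of activation maps; ReLU keeps only positive evidence.
    cam = F.relu((alpha * activations["value"]).sum(dim=1))     # (1, h, w)
    # Upsample to input resolution and normalize to [0, 1].
    cam = F.interpolate(cam.unsqueeze(1), size=x.shape[2:],
                        mode="bilinear", align_corners=False)[0, 0]
    cam = cam - cam.min()
    return cam / cam.max().clamp(min=1e-8)

# Usage: heatmap = grad_cam(batch)  # batch: (1, 3, 224, 224), ImageNet-normalized
```

The final ReLU is the paper's design choice: only features whose increase would raise the class score are kept, which is what makes the map class-discriminative.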
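SmoothGrad (arXiv 2017, also in the table) is a simple complement to gradient-based saliency: average vanilla input gradients over noisy copies of the input. A minimal sketch under the same assumptions (PyTorch, batch size 1); `n_samples` and `sigma` follow the paper's rough 10-20% noise suggestion but are not tuned here.

```python
import torch

def smooth_grad(model, x, class_idx, n_samples=25, sigma=0.15):
    """Average input-gradient saliency over noisy copies of x (batch of size 1)."""
    model.eval()
    span = (x.max() - x.min()).item()   # noise scale is relative to input range
    grads = torch.zeros_like(x)
    for _ in range(n_samples):
        noisy = (x + torch.randn_like(x) * sigma * span).requires_grad_(True)
        model.zero_grad()
        logits = model(noisy)
        logits[0, class_idx].backward()
        grads += noisy.grad
    # Mean gradient; abs + max over channels is a common way to visualize it.
    return (grads / n_samples).abs().max(dim=1, keepdim=True).values
```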