Guildsman: TensorFlow library for Clojure

Guildsman Resignation

As of July 6, 2018, I'm resigning from my work on Guildsman. While I have many things working, I never released a version. There are all kinds of reasons why I'm stopping this project. The most significant is that my career had been taking me into ML, but that is no longer the case.

Thanks to everyone who gave me support and encouragement!

UPDATE -- Mar 1, 2018

https://bpiel.github.io/guildsman/posts/creeping-2018-03-01/

Resources

I spoke about, and demonstrated, Guildsman at Conj 2017 in Baltimore. You can watch here:

https://www.youtube.com/watch?v=8_HOB62rpvw

If you want to know more, please reach out. Any of these work:

- file an issue
- twitter: @bpiel
- email: on my github profile page
- slack -- https://clojurians.slack.com #tensorflow (bpiel)

During this pre-release phase, I'll try to add to this README as it becomes clear through conversations what others are most interested in or confused by. Once this hits alpha, the project should be able to maintain the README itself, by learning from examples of other good READMEs. This is known as "self-documenting code".

YOU CAN HELP!

A few people have expressed interest in helping out with Guildsman. The state of the project makes it impractical for anyone to contribute directly (i.e. no docs, no tests, highly unstable). BUT, you can contribute to TensorFlow in a way that has a VERY meaningful impact on what Guildsman is capable of -- by implementing gradients in TensorFlow's C++ layer.

NOTE: There's been confusion around this, so I want to be very clear. These C++ gradient implementations go directly into TensorFlow's codebase. You submit a PR to TensorFlow. At no point is this code in Guildsman.

The reasons why these gradients are so important are laid out (partially, at least) in the video (linked above, especially starting around the 18 min mark).

A Guide To Implementing C++ Gradients in TensorFlow

Prerequisite Knowledge

More Important:

- familiarity with Python
- familiarity with C++ (My C++ is weak, but I've gotten by.)

Less Important:

- familiarity with the underlying math

The mathematical logic is already written out in Python. You only need to port it to C++. The difficulty of implementing a gradient varies wildly depending on its complexity. Most are not trivial. But, I don't think a deep understanding of the math makes the porting process easier.

If you do want to learn more about the math, this Wikipedia article is one place you could start. It describes the context in which the individual gradient implementations are being used, what the higher goal is, and how it is achieved.

https://en.wikipedia.org/wiki/Automatic_differentiation#Reverse_accumulation
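
A quick orientation (my paraphrase, not a quote from the article): in reverse accumulation, each op's registered gradient function receives the adjoints (incoming gradients) of its outputs and must produce the adjoints of its inputs via the chain rule:

$$\bar{x}_i = \sum_j \bar{y}_j \, \frac{\partial y_j}{\partial x_i}$$

where $y = f(x)$ is the op, $L$ is the final loss, and $\bar{y}_j = \partial L / \partial y_j$. Every gradient implementation discussed below is just this formula specialized to one op.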

The Process

Besides the actual coding, you'll need to determine which gradient to tackle, call dibs, and get your PR accepted. Each of those steps has its own unique set of challenges. If you have questions -- AFTER reading all of this :) -- please get in touch.

Here are instructions from TF related to contributing, both generally and gradients specifically. I wrote my own notes below, but please read these first.

https://github.com/tensorflow/tensorflow/blob/master/CONTRIBUTING.md

https://github.com/tensorflow/tensorflow/tree/master/tensorflow/cc/gradients

Legal stuff!

You MUST sign a Contributor License Agreement. If you read the TF instructions that I linked to above, then you already know that. This was a painless process for me, but that's affected by your legal relationship with your employer, or anyone who might own some part of your time/output.

Find a gradient implementation in the TF Python code that doesn't have a counterpart in C++.

- See the gradient TODO list.
- This github search should return all the py grads: https://github.com/tensorflow/tensorflow/search?utf8=%E2%9C%93&q=%22ops.RegisterGradient%22&type=
- This should return all the c++ grads: https://github.com/tensorflow/tensorflow/search?utf8=%E2%9C%93&q=%22REGISTER_GRADIENT_OP%22&type=
- Which one should you do???
  - For your first one, just try to find a simple one. Lines of code is a good indicator.
  - After that, the optimal choice would maximize (value to community) / (your time):
    - Check the prioritized list in the gradient TODO list section.
    - Anything in math_grad.py or nn_grad.py is probably not bad.
    - Any new gradient is better than no gradient. Just do it!
- You may be able to find github issues that request a specific gradient. Here's one (currently open) that I filed: https://github.com/tensorflow/tensorflow/issues/12686

Implement the thing.

I'm not even going to guess at what the most effective words to write here would be. Instead, there are examples below.

Implement a test.

Again, see examples below. The tests are shockingly simple. The good Google TF people have implemented some test helper tooling that takes any operation, calculates the correct gradient values and compares them to the output of a gradient implementation. If the two agree within some margin of error, the test passes! Implementing a test is just a matter of wiring the operation and its gradient (that you wrote) up to this gradient verifier.

Run the test.

Google has its own build tool, bazel, that TF uses. In addition to compilation (and who knows what else), you also use bazel to run tests. If there's a lot of compilation that needs to occur before a test can be run (ex: the first time you run a test), you may be waiting for hours. Don't worry, subsequent runs will be fast (though, still not as fast as I'd like). Here's an example showing how I run the nn_grad tests:

```
sudo tensorflow/tools/ci_build/ci_build.sh CPU bazel test //tensorflow/cc:gradients_nn_grad_test
```

That would get called from the root dir of the TF repo.

Fix code, run test, fix code, run test, fix code, run test... tests pass! Submit PR! Definitely cc me on the PR when you do! (@bpiel)

Example - BiasAdd

The first PR of mine accepted into TensorFlow implemented the gradient for BiasAdd. BiasAdd is just a special case of matrix addition that is optimized for neural networks, but that's not important for the purposes of this example. What is important is that this is a simple case. It's made especially simple by the fact that the gradient for BiasAdd is already implemented as its own operation, BiasAddGrad. All I had to do was write some glue code and register it so that the autodifferentiation logic could find it. This is not usually the case, but there are others like this. (A sketch of the more typical case, where you compose the gradient from existing ops, follows the docs links below.)

My PR: https://github.com/tensorflow/tensorflow/pull/12448/files

Python code (the code to be ported): https://github.com/tensorflow/tensorflow/blob/e5306d3dc75ea1b4338dc7b4518824a7698f0f92/tensorflow/python/ops/nn_grad.py#L237

```python
@ops.RegisterGradient("BiasAdd")
def _BiasAddGrad(op, received_grad):
  """Return the gradients for the 2 inputs of bias_op.

  The first input of unused_bias_op is the tensor t, and its gradient is
  just the gradient the unused_bias_op received.

  The second input of unused_bias_op is the bias vector which has one fewer
  dimension than "received_grad" (the batch dimension.) Its gradient is the
  received gradient Summed on the batch dimension, which is the first dimension.

  Args:
    op: The BiasOp for which we need to generate gradients.
    received_grad: Tensor. The gradients passed to the BiasOp.

  Returns:
    Two tensors, the first one for the "tensor" input of the BiasOp,
    the second one for the "bias" input of the BiasOp.
  """
  try:
    data_format = op.get_attr("data_format")
  except ValueError:
    data_format = None
  return (received_grad,
          gen_nn_ops.bias_add_grad(
              out_backprop=received_grad, data_format=data_format))
```

The C++ code I wrote: https://github.com/tensorflow/tensorflow/blob/e5306d3dc75ea1b4338dc7b4518824a7698f0f92/tensorflow/cc/gradients/nn_grad.cc#L106

```cpp
Status BiasAddGradHelper(const Scope& scope, const Operation& op,
                         const std::vector<Output>& grad_inputs,
                         std::vector<Output>* grad_outputs) {
  string data_format;
  BiasAddGrad::Attrs input_attrs;
  // Forward the op's data_format attr to the BiasAddGrad op.
  TF_RETURN_IF_ERROR(GetNodeAttr(op.output(0).node()->attrs(), "data_format",
                                 &data_format));
  input_attrs.DataFormat(data_format);
  // Gradient w.r.t. the bias: BiasAddGrad sums the incoming gradient
  // over the non-feature dimensions.
  auto dx_1 = BiasAddGrad(scope, grad_inputs[0], input_attrs);
  // Gradient w.r.t. the input tensor: pass the incoming gradient through.
  grad_outputs->push_back(Identity(scope, grad_inputs[0]));
  grad_outputs->push_back(dx_1);
  return scope.status();
}
REGISTER_GRADIENT_OP("BiasAdd", BiasAddGradHelper);
```

The test I wrote: https://github.com/tensorflow/tensorflow/blob/e5306d3dc75ea1b4338dc7b4518824a7698f0f92/tensorflow/cc/gradients/nn_grad_test.cc#L150

```cpp
TEST_F(NNGradTest, BiasAddGradHelper) {
  TensorShape shape({4, 5});
  TensorShape bias_shape({5});
  auto x = Placeholder(scope_, DT_FLOAT, Placeholder::Shape(shape));
  auto bias = Placeholder(scope_, DT_FLOAT, Placeholder::Shape(bias_shape));
  auto y = BiasAdd(scope_, x, bias);
  RunTest({x, bias}, {shape, bias_shape}, {y}, {shape});
}
```

Relevant Docs:

https://www.tensorflow.org/api_docs/cc/

https://www.tensorflow.org/api_docs/cc/class/tensorflow/ops/bias-add

https://www.tensorflow.org/versions/master/api_docs/cc/class/tensorflow/ops/bias-add-grad

https://www.tensorflow.org/api_docs/cc/struct/tensorflow/ops/bias-add-grad/attrs

https://www.tensorflow.org/api_docs/python/tf/nn/bias_add

https://www.tensorflow.org/api_docs/cc/class/tensorflow/ops/placeholder
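
For contrast with the BiasAdd glue code above, here is a minimal sketch of the more typical case, where no ready-made *Grad op exists and you compose the gradient from existing C++ ops. The op choice is illustrative only (Square's gradient already exists upstream in math_grad.cc), but the shape of the code is what a real port looks like:

```cpp
// Illustrative sketch only -- Square's gradient is already implemented
// upstream; this just shows the shape of a typical math_grad.cc entry.
// d/dx(x^2) = 2x, so dx = grad_in * 2 * x.
Status SquareGrad(const Scope& scope, const Operation& op,
                  const std::vector<Output>& grad_inputs,
                  std::vector<Output>* grad_outputs) {
  auto x = op.input(0);
  // Build the constant 2 with the same dtype as x.
  auto two = Cast(scope, Const(scope, 2), x.type());
  auto dydx = Mul(scope, two, x);
  // One incoming gradient (one output), one outgoing gradient (one input).
  grad_outputs->push_back(Mul(scope, grad_inputs[0], dydx));
  return scope.status();
}
REGISTER_GRADIENT_OP("Square", SquareGrad);
```

A matching test would mirror the BiasAdd test above: wire a Placeholder through the op and hand it to RunTest.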

Examples - TODO

I've (currently) had three other grads accepted in the following two PRs. I'll try to get to expanding those into nicer example write-ups like the one above.

https://github.com/tensorflow/tensorflow/pull/12665

https://github.com/tensorflow/tensorflow/pull/12391

Gradient TODO List

as of Oct 18, 2017

Prioritized

These seem to be more important. Ordered by priority:

SoftmaxCrossEntropyWithLogits

Floor

Cast

GatherV2

Pow

Sub

Prod

ConcatV2

Slice

Tile

TopKV2

All Gradients that are in Python, but not C++

Atan2

AvgPool

AvgPool3D

AvgPool3DGrad

AvgPoolGrad

BadGrad

BatchNormWithGlobalNormalization

Betainc

BiasAddGrad

BiasAddV1

Cast

Ceil

Cholesky

ComplexAbs

Concat

ConcatV2

Conv2DBackpropFilter

Conv2DBackpropInput

Conv3D

Conv3DBackpropFilterV2

Conv3DBackpropInputV2

CopyOp

copy_override

CropAndResize

Cross

CTCLoss

Cumprod

Cumsum

CustomSquare

DebugGradientIdentity

DepthwiseConv2dNative

Digamma

Dilation2D

EluGrad

Enter

Erfc

Exit

ExtractImagePatches

FakeQuantWithMinMaxArgs

FakeQuantWithMinMaxVars

FakeQuantWithMinMaxVarsPerChannel

FFT

FFT2D

FFT3D

Fill

Floor

FloorDiv

FloorMod

FractionalAvgPool

FractionalMaxPool

FusedBatchNorm

FusedBatchNormGrad

FusedBatchNormGradV2

FusedBatchNormV2

Gather

GatherV2

IdentityN

IFFT

IFFT2D

IFFT3D

Igamma

Igammac

InvGrad

IRFFT

IRFFT2D

LoopCond

LRN

MatrixDeterminant

MatrixDiagPart

MatrixInverse

MatrixSetDiag

MatrixSolve

MatrixSolveLs

MatrixTriangularSolve

MaxPool3D

MaxPool3DGrad

MaxPool3DGradGrad

MaxPoolGrad

MaxPoolGradGrad

MaxPoolGradV2

MaxPoolWithArgmax

Merge

NaNGrad

NextIteration

NthElement

PlaceholderWithDefault

Polygamma

Pow

PreventGradient

Print

Prod

ReadVariableOp

ReciprocalGrad

RefEnter

RefExit

RefMerge

RefNextIteration

RefSwitch

ReluGrad

ResizeBicubic

ResizeBilinear

ResizeNearestNeighbor

ResourceGather

Reverse

RFFT

RFFT2D

Rint

Round

RsqrtGrad

SegmentMax

SegmentMean

SegmentMin

SegmentSum

Select

SelfAdjointEigV2

SeluGrad

SigmoidGrad

Slice

SoftmaxCrossEntropyWithLogits

Softplus

SoftplusGrad

Softsign

SparseAdd

SparseDenseCwiseAdd

SparseDenseCwiseDiv

SparseDenseCwiseMul

SparseFillEmptyRows

SparseMatMul

SparseReduceSum

SparseReorder

SparseSegmentMean

SparseSegmentSqrtN

SparseSegmentSum

SparseSoftmax

SparseSoftmaxCrossEntropyWithLogits

SparseSparseMaximum

SparseSparseMinimum

SparseTensorDenseAdd

SparseTensorDenseMatMul

SplitV

SqrtGrad

StridedSlice

StridedSliceGrad

Sub

Svd

Switch

TanhGrad

TensorArrayConcat

TensorArrayConcatV2

TensorArrayConcatV3

TensorArrayGather

TensorArrayGatherV2

TensorArrayGatherV3

TensorArrayRead

TensorArrayReadV2

TensorArrayReadV3

TensorArrayScatter

TensorArrayScatterV2

TensorArrayScatterV3

TensorArraySplit

TensorArraySplitV2

TensorArraySplitV3

TensorArrayWrite

TensorArrayWriteV2

TensorArrayWriteV3

TestStringOutput

Tile

TopK

TopKV2

TruncateDiv

UnsortedSegmentMax

UnsortedSegmentSum

Zeta
