public class SimpleRnn extends BaseRecurrentLayer<SimpleRnn>
Nested classes/interfaces inherited from interface Layer: Layer.TrainingMode, Layer.Type

| Modifier and Type | Field and Description |
|---|---|
| static String | STATE_KEY_PREV_ACTIVATION |
Fields inherited from class BaseRecurrentLayer: stateMap, tBpttStateMap

Fields inherited from class BaseLayer: gradient, gradientsFlattened, gradientViews, optimizer, params, paramsFlattened, score, solver, weightNoiseParams

Fields inherited from class AbstractLayer: cacheMode, conf, dropoutApplied, dropoutMask, epochCount, index, input, iterationCount, iterationListeners, maskArray, maskState, preOutput

| Constructor and Description |
|---|
| SimpleRnn(NeuralNetConfiguration conf) |
| Modifier and Type | Method and Description |
|---|---|
| org.nd4j.linalg.api.ndarray.INDArray | activate(boolean training) - Trigger an activation with the last specified input |
| org.nd4j.linalg.primitives.Pair<Gradient,org.nd4j.linalg.api.ndarray.INDArray> | backpropGradient(org.nd4j.linalg.api.ndarray.INDArray epsilon) - Calculate the gradient relative to the error in the next layer |
| boolean | isPretrainLayer() - Returns true if the layer can be trained in an unsupervised/pretrain manner (AE, VAE, etc.) |
| org.nd4j.linalg.api.ndarray.INDArray | preOutput(boolean training) |
| org.nd4j.linalg.api.ndarray.INDArray | rnnActivateUsingStoredState(org.nd4j.linalg.api.ndarray.INDArray input, boolean training, boolean storeLastForTBPTT) - Similar to rnnTimeStep, this method computes activations using the state stored in the stateMap as the initialization |
| org.nd4j.linalg.api.ndarray.INDArray | rnnTimeStep(org.nd4j.linalg.api.ndarray.INDArray input) - Do one or more time steps using the previous time step state stored in stateMap. Can be used to efficiently do the forward pass one or n steps at a time, instead of always starting from t=0. If stateMap is empty, default initialization (usually zeros) is used. Implementations also update stateMap at the end of this method |
| org.nd4j.linalg.primitives.Pair<Gradient,org.nd4j.linalg.api.ndarray.INDArray> | tbpttBackpropGradient(org.nd4j.linalg.api.ndarray.INDArray epsilon, int tbpttBackLength) - Truncated BPTT equivalent of Layer.backpropGradient() |
Methods inherited from class BaseRecurrentLayer: rnnClearPreviousState, rnnGetPreviousState, rnnGetTBPTTState, rnnSetPreviousState, rnnSetTBPTTState

Methods inherited from class BaseLayer: accumulateScore, activate, calcL1, calcL2, clear, clearNoiseWeightParams, clone, computeGradientAndScore, fit, getGradientsViewArray, getOptimizer, getParam, getParamWithNoise, gradient, hasBias, initParams, iterate, layerConf, numParams, params, paramTable, preOutput, score, setBackpropGradientsViewArray, setParam, setParams, setParamsViewArray, setParamTable, setScoreWithZ, toString, transpose, update

Methods inherited from class AbstractLayer: activate, addListeners, applyConstraints, applyDropOutIfNecessary, applyMask, batchSize, conf, feedForwardMaskArray, getIndex, getInput, getInputMiniBatchSize, getListeners, getMaskArray, gradientAndScore, init, input, layerId, migrateInput, numParams, preOutput, setCacheMode, setConf, setIndex, setInput, setInputMiniBatchSize, setListeners, setMaskArray, type, validateInput

Methods inherited from class java.lang.Object: equals, finalize, getClass, hashCode, notify, notifyAll, wait

Methods inherited from interface Layer: activate, calcL1, calcL2, clearNoiseWeightParams, clone, feedForwardMaskArray, getEpochCount, getIndex, getInputMiniBatchSize, getIterationCount, getListeners, getMaskArray, migrateInput, preOutput, setCacheMode, setEpochCount, setIndex, setInput, setInputMiniBatchSize, setIterationCount, setListeners, setMaskArray, transpose, type

Methods inherited from interface Model: accumulateScore, addListeners, applyConstraints, batchSize, clear, computeGradientAndScore, conf, fit, getGradientsViewArray, getOptimizer, getParam, gradient, gradientAndScore, init, initParams, input, iterate, numParams, params, paramTable, score, setBackpropGradientsViewArray, setConf, setParam, setParams, setParamsViewArray, setParamTable, update, validateInput

public static final String STATE_KEY_PREV_ACTIVATION
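STATE_KEY_PREV_ACTIVATION is the key under which the previous step's hidden activation is held in the layer's stateMap. The following is a minimal sketch of inspecting and managing that state through the inherited rnnGetPreviousState() / rnnSetPreviousState() / rnnClearPreviousState() methods; the variable name `rnn` and the fact that it has already processed at least one time step are assumptions for illustration.

```java
// Assumption: 'rnn' is an initialized SimpleRnn instance that has already
// processed at least one step via rnnTimeStep(...), so stateMap is populated.
Map<String, INDArray> state = rnn.rnnGetPreviousState();
INDArray prevActivation = state.get(SimpleRnn.STATE_KEY_PREV_ACTIVATION);

// The state can be restored later, e.g. to resume inference mid-sequence...
rnn.rnnSetPreviousState(state);

// ...or cleared so the next rnnTimeStep call starts from the default (zero) state.
rnn.rnnClearPreviousState();
```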
public SimpleRnn(NeuralNetConfiguration conf)
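This constructor is normally invoked by the framework rather than by user code: the layer is typically declared via its configuration-side counterpart and instantiated when the network is initialized. The sketch below works under that assumption; the configuration class org.deeplearning4j.nn.conf.layers.recurrent.SimpleRnn, the RnnOutputLayer, and the layer sizes are illustrative choices, not taken from this page.

```java
import org.deeplearning4j.nn.conf.MultiLayerConfiguration;
import org.deeplearning4j.nn.conf.NeuralNetConfiguration;
import org.deeplearning4j.nn.conf.layers.RnnOutputLayer;
import org.deeplearning4j.nn.multilayer.MultiLayerNetwork;
import org.nd4j.linalg.activations.Activation;
import org.nd4j.linalg.lossfunctions.LossFunctions;

// Declare a SimpleRnn layer on the configuration side; the implementation
// layer documented on this page is created internally by net.init().
MultiLayerConfiguration conf = new NeuralNetConfiguration.Builder()
        .list()
        .layer(0, new org.deeplearning4j.nn.conf.layers.recurrent.SimpleRnn.Builder()
                .nIn(10).nOut(20)
                .activation(Activation.TANH)
                .build())
        .layer(1, new RnnOutputLayer.Builder(LossFunctions.LossFunction.MCXENT)
                .nIn(20).nOut(5)
                .activation(Activation.SOFTMAX)
                .build())
        .build();

MultiLayerNetwork net = new MultiLayerNetwork(conf);
net.init();
```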
public org.nd4j.linalg.api.ndarray.INDArray rnnTimeStep(org.nd4j.linalg.api.ndarray.INDArray input)

Specified by: rnnTimeStep in interface RecurrentLayer
Parameters: input - Input to this layer
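A sketch of step-at-a-time inference with rnnTimeStep; the variable `rnn` and the sizes are assumptions, and the input follows the usual DL4J recurrent shape [miniBatchSize, nIn, timeSeriesLength].

```java
int nIn = 10;

// First call: stateMap is empty, so the default (zero) initialization is used.
INDArray out1 = rnn.rnnTimeStep(Nd4j.rand(new int[]{1, nIn, 1}));

// Second call: continues from the state stored by the first call instead of
// re-running the forward pass from t = 0. More than one step may be passed at once.
INDArray out2 = rnn.rnnTimeStep(Nd4j.rand(new int[]{1, nIn, 5}));

// Clear the stored state before starting an unrelated sequence.
rnn.rnnClearPreviousState();
```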
public org.nd4j.linalg.api.ndarray.INDArray rnnActivateUsingStoredState(org.nd4j.linalg.api.ndarray.INDArray input, boolean training, boolean storeLastForTBPTT)

Specified by: rnnActivateUsingStoredState in interface RecurrentLayer
Parameters:
input - Layer input
training - If true: training mode. Otherwise: test mode
storeLastForTBPTT - If true: store the final state in tBpttStateMap for use in truncated BPTT training
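A sketch of processing a long sequence in chunks with rnnActivateUsingStoredState: each chunk's activations are initialized from the previously stored state, and setting storeLastForTBPTT keeps the final state in tBpttStateMap for truncated BPTT. The names `rnn`, `chunk1`, and `chunk2` are assumed 3D inputs of shape [miniBatchSize, nIn, chunkLength].

```java
// Training-mode activations for the first chunk; because storeLastForTBPTT == true,
// the final step's state is stored in tBpttStateMap for later truncated BPTT.
INDArray act1 = rnn.rnnActivateUsingStoredState(chunk1, true, true);

// The next chunk continues from the state left behind by the previous call.
INDArray act2 = rnn.rnnActivateUsingStoredState(chunk2, true, true);

// Test-time variant: not training, and do not store state for truncated BPTT.
INDArray testAct = rnn.rnnActivateUsingStoredState(chunk1, false, false);
```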
public org.nd4j.linalg.primitives.Pair<Gradient,org.nd4j.linalg.api.ndarray.INDArray> backpropGradient(org.nd4j.linalg.api.ndarray.INDArray epsilon)

Specified by: backpropGradient in interface Layer
Overrides: backpropGradient in class BaseLayer<SimpleRnn>
Parameters: epsilon - w^(L+1)*delta^(L+1), or equivalently dC/da, i.e., (dC/dz)*(dz/da) = dC/da, where C is the cost function and a = sigma(z) is the activation

public org.nd4j.linalg.primitives.Pair<Gradient,org.nd4j.linalg.api.ndarray.INDArray> tbpttBackpropGradient(org.nd4j.linalg.api.ndarray.INDArray epsilon, int tbpttBackLength)

Specified by: tbpttBackpropGradient in interface RecurrentLayer
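tbpttBackpropGradient is called by the enclosing network rather than by user code when truncated BPTT is enabled. The following sketch shows a configuration that triggers it, reusing the assumed configuration-side classes from the constructor sketch above; BackpropType here refers to org.deeplearning4j.nn.conf.BackpropType, and the lengths are illustrative.

```java
MultiLayerConfiguration conf = new NeuralNetConfiguration.Builder()
        .list()
        .layer(0, new org.deeplearning4j.nn.conf.layers.recurrent.SimpleRnn.Builder()
                .nIn(10).nOut(20).build())
        .layer(1, new RnnOutputLayer.Builder(LossFunctions.LossFunction.MCXENT)
                .nIn(20).nOut(5)
                .activation(Activation.SOFTMAX)
                .build())
        // Use truncated BPTT: gradients are propagated back a limited number of steps.
        .backpropType(BackpropType.TruncatedBPTT)
        .tBPTTForwardLength(50)
        .tBPTTBackwardLength(50)
        .build();
```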
public boolean isPretrainLayer()

Specified by: isPretrainLayer in interface Layer
Returns: true if the layer can be trained in an unsupervised/pretrain manner (AE, VAE, etc.)

public org.nd4j.linalg.api.ndarray.INDArray preOutput(boolean training)

public org.nd4j.linalg.api.ndarray.INDArray activate(boolean training)

Specified by: activate in interface Layer
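preOutput(boolean training) and activate(boolean training) operate on the input most recently supplied via the inherited setInput method. A minimal sketch follows, with `rnn` and `input` assumed to exist.

```java
// 'input' has the recurrent shape [miniBatchSize, nIn, timeSeriesLength].
rnn.setInput(input);

// Pre-activation output (before the layer's activation function is applied)...
INDArray preOut = rnn.preOutput(false);

// ...and the activations themselves, in test mode (training == false).
INDArray out = rnn.activate(false);
```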