Package apple.metalperformanceshaders
Class MPSCNNLoss
- java.lang.Object
  - org.moe.natj.general.NativeObject
    - org.moe.natj.objc.ObjCObject
      - apple.NSObject
        - apple.metalperformanceshaders.MPSKernel
          - apple.metalperformanceshaders.MPSCNNKernel
            - apple.metalperformanceshaders.MPSCNNLoss
-
- All Implemented Interfaces:
NSCoding, NSCopying, NSSecureCoding, NSObject
public class MPSCNNLoss extends MPSCNNKernel
MPSCNNLoss [@dependency] This depends on Metal.framework.

The MPSCNNLoss filter is used only for training. It performs both the forward and backward pass computations: it computes the loss between the input (predictions) and the target data (labels), as well as the loss gradient. The loss value can be a 1 x 1 x 1 image containing a scalar loss value, or an image of the same size as the input source image with per-feature-channel losses. The loss value is used to decide whether to continue the training operation or to terminate it once satisfactory results are achieved. The loss gradient is the first gradient computed for the backward pass and serves as input to the next gradient filter (in the backward direction).

The MPSCNNLoss filter is created with an MPSCNNLossDescriptor describing the type of loss filter and the type of reduction to use for computing the overall loss.

The MPSCNNLoss filter takes the output of the inference pass (predictions) as input. It also requires the target data (labels) and, optionally, weights for the labels. If per-label weights are not supplied, a single weight value can be used instead by setting the 'weight' property on the MPSCNNLossDescriptor object. The labels and optional weights are supplied by the user via the MPSCNNLossLabels object. The labels and weights are described by MPSCNNLossDataDescriptor objects, which are in turn used to initialize the MPSCNNLossLabels object.

If the specified reduction operation is MPSCNNReductionTypeNone, the destinationImage should be at least as large as the specified clipRect; the destinationImage will then contain per-element losses. Otherwise, a reduction operation is performed according to the specified reduction type, and the filter returns a scalar value containing the overall loss. For more information on the available reduction types, see MPSCNNTypes.h. Also see MPSCNNLossDescriptor for a description of the optional parameters.
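To make the reduction step concrete, here is a minimal CPU-side sketch in plain Java of the reduce operation described above. This is an illustration only, not the MPS implementation: the ReductionSketch class, the reduce helper, and the string-based reduction selector are invented for this example, and the real filter performs the reduction on the GPU.

```java
// Hypothetical CPU-side illustration of the overall-loss reduction.
// The real MPSCNNLoss filter performs this reduction on the GPU; the
// strings below stand in for MPSCNNReductionTypeSum / Mean /
// SumByNonZeroWeights (names abbreviated for this sketch).
final class ReductionSketch {
    static double reduce(double[] weightedLosses, double[] weights, String reductionType) {
        double sum = 0.0;
        for (double l : weightedLosses) sum += l;
        switch (reductionType) {
            case "Sum":
                return sum;
            case "Mean":
                return sum / weightedLosses.length;
            case "SumByNonZeroWeights": {
                // Average only over elements whose label weight is non-zero.
                int nonZero = 0;
                for (double w : weights) if (w != 0.0) nonZero++;
                return nonZero == 0 ? 0.0 : sum / nonZero;
            }
            default:
                throw new IllegalArgumentException("Unknown reduction: " + reductionType);
        }
    }
}
```

With MPSCNNReductionTypeNone, by contrast, no such reduction happens and the per-element weighted losses are written out directly.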
Here is a code example:

// Setup
MPSCNNLossDataDescriptor* labelsDescriptor =
    [MPSCNNLossDataDescriptor cnnLossDataDescriptorWithData: labelsData
                                                     layout: MPSDataLayoutHeightxWidthxFeatureChannels
                                                       size: labelsDataSize];
MPSCNNLossLabels* labels = [[MPSCNNLossLabels alloc] initWithDevice: device
                                                   labelsDescriptor: labelsDescriptor];
MPSCNNLossDescriptor* lossDescriptor =
    [MPSCNNLossDescriptor cnnLossDescriptorWithType: (MPSCNNLossType)MPSCNNLossTypeMeanAbsoluteError
                                      reductionType: (MPSCNNReductionType)MPSCNNReductionTypeSum];
MPSCNNLoss* lossFilter = [[MPSCNNLoss alloc] initWithDevice: device lossDescriptor: lossDescriptor];

// Encode the loss filter.
// The sourceImage is the output of a previous layer, for example, the SoftMax layer. The lossGradientsImage
// is the sourceGradient input image to the first gradient layer (in the backward direction), for example,
// the SoftMax gradient filter.
[lossFilter encodeToCommandBuffer: commandBuffer
                      sourceImage: sourceImage
                           labels: labels
                 destinationImage: lossGradientsImage];

// In order to guarantee that the loss image data is correctly synchronized for CPU side access,
// it is the application's responsibility to call the [labels synchronizeOnCommandBuffer:]
// method before accessing the loss image data.
[labels synchronizeOnCommandBuffer: commandBuffer];
MPSImage* lossImage = [labels lossImage];

For predictions (y) and labels (t), the available loss filter types are the following:

Mean Absolute Error loss filter. This filter measures the absolute error of the element-wise difference between the predictions and labels. This loss function is computed according to the following formulas:
  Compute losses:          losses = |y - t|
  Compute weighted losses: weighted_losses = weight(s) * losses
  Compute overall loss:    loss = reduce(weighted_losses, reductionType)

Mean Squared Error loss filter. This filter measures the squared error of the element-wise difference between the predictions and labels.
This loss function is computed according to the following formulas:
  Compute losses:          losses = (y - t)^2
  Compute weighted losses: weighted_losses = weight(s) * losses
  Compute overall loss:    loss = reduce(weighted_losses, reductionType)

SoftMax Cross Entropy loss filter. This loss filter is applied element-wise. It combines the LogSoftMax and Negative Log Likelihood operations in a single filter and is useful for training a classification problem with multiple classes. This loss function is computed according to the following formulas:
  Compute losses:          losses = -t * LogSoftMax(y)
  Compute weighted losses: weighted_losses = weight(s) * losses
  Compute overall loss:    loss = reduce(weighted_losses, reductionType)
If reductionType is MPSCNNReductionTypeMean, the accumulated loss value is divided by width * height instead of width * height * featureChannels.

Sigmoid Cross Entropy loss filter. This loss filter is applied element-wise. This loss function is computed according to the following formulas:
  Compute losses:          losses = max(y, 0) - y * t + log(1 + exp(-|y|))
  Compute weighted losses: weighted_losses = weight(s) * losses
  Compute overall loss:    loss = reduce(weighted_losses, reductionType)

Categorical Cross Entropy loss filter. This loss filter is applied element-wise. This loss function is computed according to the following formulas:
  Compute losses:          losses = -t * log(y)
  Compute weighted losses: weighted_losses = weight(s) * losses
  Compute overall loss:    loss = reduce(weighted_losses, reductionType)

Hinge loss filter. This loss filter is applied element-wise. The labels are expected to be 0.0 or 1.0. This loss function is computed according to the following formulas:
  Compute losses:          losses = max(1 - (t * y), 0.0f)
  Compute weighted losses: weighted_losses = weight(s) * losses
  Compute overall loss:    loss = reduce(weighted_losses, reductionType)

Huber loss filter. This loss filter is applied element-wise.
This loss function is computed according to the following formulas:
  Compute losses:          if |y - t| <= delta, losses = 0.5 * (y - t)^2
                           if |y - t| > delta,  losses = 0.5 * delta^2 + delta * (|y - t| - delta)
  Compute weighted losses: weighted_losses = weight(s) * losses
  Compute overall loss:    loss = reduce(weighted_losses, reductionType)

Cosine Distance loss filter. This loss filter is applied element-wise. The only valid reduction type for this loss filter is MPSCNNReductionTypeSum. This loss function is computed according to the following formulas:
  Compute loss:          loss = 1 - reduce_sum(y * t)
  Compute weighted loss: weighted_loss = weight * loss

Log loss filter. This loss filter is applied element-wise. This loss function is computed according to the following formulas:
  Compute losses:          losses = -(t * log(y + epsilon)) - ((1 - t) * log(1 - y + epsilon))
  Compute weighted losses: weighted_losses = weight(s) * losses
  Compute overall loss:    loss = reduce(weighted_losses, reductionType)

Kullback-Leibler Divergence loss filter. This loss filter is applied element-wise. The input (predictions) is expected to contain log-probabilities. This loss function is computed according to the following formulas:
  Compute losses:          losses = t * (log(t) - y)
  Compute weighted losses: weighted_losses = weight(s) * losses
  Compute overall loss:    loss = reduce(weighted_losses, reductionType)

For predictions (y) and labels (t), the loss gradient for each available loss filter type is computed as follows:

Mean Absolute Error loss. The loss gradient is computed according to the following formulas:
  Compute gradient:          d/dy = (y - t) / |y - t|
  Compute weighted gradient: weighted_gradient = weight(s) * gradient

Mean Squared Error loss. The loss gradient is computed according to the following formulas:
  Compute gradient:          d/dy = 2 * (y - t)
  Compute weighted gradient: weighted_gradient = weight(s) * gradient

SoftMax Cross Entropy loss.
The loss gradient is computed according to the following formulas. First, apply the same label smoothing as in the MPSCNNLoss filter.
  Compute gradient:          d/dy = y - t
  Compute weighted gradient: weighted_gradient = weight(s) * gradient

Sigmoid Cross Entropy loss. The loss gradient is computed according to the following formulas. First, apply the same label smoothing as in the MPSCNNLoss filter.
  Compute gradient:          d/dy = (1 / (1 + exp(-y))) - t
  Compute weighted gradient: weighted_gradient = weight(s) * gradient

Categorical Cross Entropy loss. The loss gradient is computed according to the following formulas:
  Compute gradient:          d/dy = -t / y
  Compute weighted gradient: weighted_gradient = weight(s) * gradient

Hinge loss. The loss gradient is computed according to the following formulas:
  Compute gradient:          d/dy = ((1 + ((1 - (2 * t)) * y)) > 0) ? 1 - (2 * t) : 0
  Compute weighted gradient: weighted_gradient = weight(s) * gradient

Huber loss. The loss gradient is computed according to the following formulas:
  Compute gradient:          d/dy = |y - t| > delta ? delta : y - t
  Compute weighted gradient: weighted_gradient = weight(s) * gradient

Cosine Distance loss. The loss gradient is computed according to the following formulas:
  Compute gradient:          d/dy = -t
  Compute weighted gradient: weighted_gradient = weight(s) * gradient

Log loss. The loss gradient is computed according to the following formulas:
  Compute gradient:          d/dy = (-2 * epsilon * t - t + y + epsilon) / (y * (1 - y) + epsilon * (epsilon + 1))
  Compute weighted gradient: weighted_gradient = weight(s) * gradient

Kullback-Leibler Divergence loss. The loss gradient is computed according to the following formulas:
  Compute gradient:          d/dy = -t / y
  Compute weighted gradient: weighted_gradient = weight(s) * gradient

The number of output feature channels remains the same as the number of input feature channels.
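As a concrete CPU-side reference for the per-element formulas above, here is a small Java sketch of the Huber loss and its gradient. The class and method names are invented for this illustration, and it is not MPS code. Note one deliberate deviation: for the out-of-range branch of the gradient, this sketch uses the conventional signed clip, delta * sign(y - t), rather than the unsigned constant stated above.

```java
// Hypothetical CPU-side illustration of the per-element Huber loss and
// gradient. MPS evaluates these on the GPU; this is a reference sketch only.
final class HuberSketch {
    // losses = 0.5 * (y - t)^2                       if |y - t| <= delta
    //          0.5 * delta^2 + delta * (|y-t|-delta) otherwise
    static double loss(double y, double t, double delta) {
        double diff = Math.abs(y - t);
        return diff <= delta
                ? 0.5 * diff * diff
                : 0.5 * delta * delta + delta * (diff - delta);
    }

    // d/dy = y - t if |y - t| <= delta; conventional signed clip
    // (delta * sign(y - t)) otherwise.
    static double gradient(double y, double t, double delta) {
        double diff = y - t;
        if (Math.abs(diff) <= delta) return diff;
        return diff > 0 ? delta : -delta;
    }
}
```

The two loss branches agree at |y - t| = delta (both give 0.5 * delta^2), which is why the quadratic branch must use (y - t)^2 rather than y^2.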
-
-
Nested Class Summary
-
Nested classes/interfaces inherited from class apple.NSObject
NSObject.Function_instanceMethodForSelector_ret, NSObject.Function_methodForSelector_ret
-
-
Constructor Summary
Constructors:
protected MPSCNNLoss(org.moe.natj.general.Pointer peer)
-
Method Summary
boolean _supportsSecureCoding() - This property must return YES on all classes that allow secure coding.
static boolean accessInstanceVariablesDirectly()
static MPSCNNLoss alloc()
static java.lang.Object allocWithZone(org.moe.natj.general.ptr.VoidPtr zone)
static boolean automaticallyNotifiesObserversForKey(java.lang.String key)
static void cancelPreviousPerformRequestsWithTarget(java.lang.Object aTarget)
static void cancelPreviousPerformRequestsWithTargetSelectorObject(java.lang.Object aTarget, org.moe.natj.objc.SEL aSelector, java.lang.Object anArgument)
static NSArray<java.lang.String> classFallbacksForKeyedArchiver()
static org.moe.natj.objc.Class classForKeyedUnarchiver()
static java.lang.String debugDescription_static()
float delta()
static java.lang.String description_static()
MPSImage encodeToCommandBufferSourceImageLabels(MTLCommandBuffer commandBuffer, MPSImage sourceImage, MPSCNNLossLabels labels) - Encode a MPSCNNLoss filter and return a gradient.
void encodeToCommandBufferSourceImageLabelsDestinationImage(MTLCommandBuffer commandBuffer, MPSImage sourceImage, MPSCNNLossLabels labels, MPSImage destinationImage) - Encode a MPSCNNLoss filter and return a gradient in the destinationImage.
float epsilon()
static long hash_static()
MPSCNNLoss init()
MPSCNNLoss initWithCoder(NSCoder aDecoder) - NS_DESIGNATED_INITIALIZER
MPSCNNLoss initWithCoderDevice(NSCoder aDecoder, java.lang.Object device)
MPSCNNLoss initWithDevice(java.lang.Object device) - Standard init with default properties per filter type
MPSCNNLoss initWithDeviceLossDescriptor(MTLDevice device, MPSCNNLossDescriptor lossDescriptor) - Initialize the loss filter with a loss descriptor.
static NSObject.Function_instanceMethodForSelector_ret instanceMethodForSelector(org.moe.natj.objc.SEL aSelector)
static NSMethodSignature instanceMethodSignatureForSelector(org.moe.natj.objc.SEL aSelector)
static boolean instancesRespondToSelector(org.moe.natj.objc.SEL aSelector)
static boolean isSubclassOfClass(org.moe.natj.objc.Class aClass)
static NSSet<java.lang.String> keyPathsForValuesAffectingValueForKey(java.lang.String key)
float labelSmoothing()
int lossType() - See MPSCNNLossDescriptor for information about this and the following properties.
static java.lang.Object new_objc()
long numberOfClasses()
boolean reduceAcrossBatch()
int reductionType()
static boolean resolveClassMethod(org.moe.natj.objc.SEL sel)
static boolean resolveInstanceMethod(org.moe.natj.objc.SEL sel)
static void setVersion_static(long aVersion)
static org.moe.natj.objc.Class superclass_static()
static boolean supportsSecureCoding()
static long version_static()
float weight()
Methods inherited from class apple.metalperformanceshaders.MPSCNNKernel
appendBatchBarrier, clipRect, destinationFeatureChannelOffset, destinationImageAllocator, destinationImageDescriptorForSourceImagesSourceStates, dilationRateX, dilationRateY, edgeMode, encodeToCommandBufferSourceImage, encodeToCommandBufferSourceImageDestinationImage, encodeToCommandBufferSourceImageDestinationStateDestinationImage, encodeToCommandBufferSourceImageDestinationStateDestinationStateIsTemporary, encodingStorageSizeForSourceImageSourceStatesDestinationImage, isBackwards, isResultStateReusedAcrossBatch, isStateModified, kernelHeight, kernelWidth, offset, padding, resultStateForSourceImageSourceStatesDestinationImage, setClipRect, setDestinationFeatureChannelOffset, setDestinationImageAllocator, setEdgeMode, setOffset, setPadding, setSourceFeatureChannelMaxCount, setSourceFeatureChannelOffset, sourceFeatureChannelMaxCount, sourceFeatureChannelOffset, strideInPixelsX, strideInPixelsY, temporaryResultStateForCommandBufferSourceImageSourceStatesDestinationImage
-
Methods inherited from class apple.metalperformanceshaders.MPSKernel
copyWithZone, copyWithZoneDevice, device, encodeWithCoder, label, options, setLabel, setOptions
-
Methods inherited from class apple.NSObject
accessibilityActivate, accessibilityActivationPoint, accessibilityAssistiveTechnologyFocusedIdentifiers, accessibilityAttributedHint, accessibilityAttributedLabel, accessibilityAttributedUserInputLabels, accessibilityAttributedValue, accessibilityContainerType, accessibilityCustomActions, accessibilityCustomRotors, accessibilityDecrement, accessibilityDragSourceDescriptors, accessibilityDropPointDescriptors, accessibilityElementAtIndex, accessibilityElementCount, accessibilityElementDidBecomeFocused, accessibilityElementDidLoseFocus, accessibilityElementIsFocused, accessibilityElements, accessibilityElementsHidden, accessibilityFrame, accessibilityHint, accessibilityIncrement, accessibilityLabel, accessibilityLanguage, accessibilityNavigationStyle, accessibilityPath, accessibilityPerformEscape, accessibilityPerformMagicTap, accessibilityRespondsToUserInteraction, accessibilityScroll, accessibilityTextualContext, accessibilityTraits, accessibilityUserInputLabels, accessibilityValue, accessibilityViewIsModal, addObserverForKeyPathOptionsContext, attemptRecoveryFromErrorOptionIndex, attemptRecoveryFromErrorOptionIndexDelegateDidRecoverSelectorContextInfo, autoContentAccessingProxy, awakeAfterUsingCoder, awakeFromNib, class_objc, classForCoder, classForKeyedArchiver, copy, dealloc, debugDescription, description, dictionaryWithValuesForKeys, didChangeValueForKey, didChangeValueForKeyWithSetMutationUsingObjects, didChangeValuesAtIndexesForKey, doesNotRecognizeSelector, fileManagerShouldProceedAfterError, fileManagerWillProcessPath, finalize_objc, forwardingTargetForSelector, forwardInvocation, hash, indexOfAccessibilityElement, isAccessibilityElement, isEqual, isKindOfClass, isMemberOfClass, isProxy, methodForSelector, methodSignatureForSelector, mutableArrayValueForKey, mutableArrayValueForKeyPath, mutableCopy, mutableOrderedSetValueForKey, mutableOrderedSetValueForKeyPath, mutableSetValueForKey, mutableSetValueForKeyPath, observationInfo, 
observeValueForKeyPathOfObjectChangeContext, performSelector, performSelectorInBackgroundWithObject, performSelectorOnMainThreadWithObjectWaitUntilDone, performSelectorOnMainThreadWithObjectWaitUntilDoneModes, performSelectorOnThreadWithObjectWaitUntilDone, performSelectorOnThreadWithObjectWaitUntilDoneModes, performSelectorWithObject, performSelectorWithObjectAfterDelay, performSelectorWithObjectAfterDelayInModes, performSelectorWithObjectWithObject, prepareForInterfaceBuilder, provideImageDataBytesPerRowOrigin_Size_UserInfo, removeObserverForKeyPath, removeObserverForKeyPathContext, replacementObjectForCoder, replacementObjectForKeyedArchiver, respondsToSelector, self, setAccessibilityActivationPoint, setAccessibilityAttributedHint, setAccessibilityAttributedLabel, setAccessibilityAttributedUserInputLabels, setAccessibilityAttributedValue, setAccessibilityContainerType, setAccessibilityCustomActions, setAccessibilityCustomRotors, setAccessibilityDragSourceDescriptors, setAccessibilityDropPointDescriptors, setAccessibilityElements, setAccessibilityElementsHidden, setAccessibilityFrame, setAccessibilityHint, setAccessibilityLabel, setAccessibilityLanguage, setAccessibilityNavigationStyle, setAccessibilityPath, setAccessibilityRespondsToUserInteraction, setAccessibilityTextualContext, setAccessibilityTraits, setAccessibilityUserInputLabels, setAccessibilityValue, setAccessibilityViewIsModal, setIsAccessibilityElement, setNilValueForKey, setObservationInfo, setShouldGroupAccessibilityChildren, setValueForKey, setValueForKeyPath, setValueForUndefinedKey, setValuesForKeysWithDictionary, shouldGroupAccessibilityChildren, superclass, validateValueForKeyError, validateValueForKeyPathError, valueForKey, valueForKeyPath, valueForUndefinedKey, willChangeValueForKey, willChangeValueForKeyWithSetMutationUsingObjects, willChangeValuesAtIndexesForKey
-
-
-
-
Method Detail
-
accessInstanceVariablesDirectly
public static boolean accessInstanceVariablesDirectly()
-
alloc
public static MPSCNNLoss alloc()
-
allocWithZone
public static java.lang.Object allocWithZone(org.moe.natj.general.ptr.VoidPtr zone)
-
automaticallyNotifiesObserversForKey
public static boolean automaticallyNotifiesObserversForKey(java.lang.String key)
-
cancelPreviousPerformRequestsWithTarget
public static void cancelPreviousPerformRequestsWithTarget(java.lang.Object aTarget)
-
cancelPreviousPerformRequestsWithTargetSelectorObject
public static void cancelPreviousPerformRequestsWithTargetSelectorObject(java.lang.Object aTarget, org.moe.natj.objc.SEL aSelector, java.lang.Object anArgument)
-
classFallbacksForKeyedArchiver
public static NSArray<java.lang.String> classFallbacksForKeyedArchiver()
-
classForKeyedUnarchiver
public static org.moe.natj.objc.Class classForKeyedUnarchiver()
-
debugDescription_static
public static java.lang.String debugDescription_static()
-
delta
public float delta()
-
description_static
public static java.lang.String description_static()
-
encodeToCommandBufferSourceImageLabels
public MPSImage encodeToCommandBufferSourceImageLabels(MTLCommandBuffer commandBuffer, MPSImage sourceImage, MPSCNNLossLabels labels)
Encode an MPSCNNLoss filter and return a gradient. This -encode call is similar to encodeToCommandBuffer:sourceImage:labels:destinationImage: above, except that it creates and returns the MPSImage with the loss gradient result.
Parameters:
commandBuffer - The MTLCommandBuffer on which to encode.
sourceImage - The source image from the previous filter in the graph (in the inference direction).
labels - The object containing the target data (labels) and, optionally, weights for the labels.
Returns:
The MPSImage containing the gradient result.
-
encodeToCommandBufferSourceImageLabelsDestinationImage
public void encodeToCommandBufferSourceImageLabelsDestinationImage(MTLCommandBuffer commandBuffer, MPSImage sourceImage, MPSCNNLossLabels labels, MPSImage destinationImage)
Encode an MPSCNNLoss filter and write the gradient into the destinationImage. This filter consumes the output of a previous layer (for example, the SoftMax layer containing predictions) and the MPSCNNLossLabels object containing the target data (labels) and, optionally, weights for the labels. The destinationImage contains the computed gradient for the loss layer. It serves as a source gradient input image to the first gradient layer (in the backward direction), in our example, the SoftMax gradient layer.
Parameters:
commandBuffer - The MTLCommandBuffer on which to encode.
sourceImage - The source image from the previous filter in the graph (in the inference direction).
labels - The object containing the target data (labels) and, optionally, weights for the labels.
destinationImage - The MPSImage into which to write the gradient result.
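For users of these Java (MOE) bindings, the Objective-C example earlier in this page translates roughly as follows. This is an unverified sketch, not runnable off-device: it assumes device, commandBuffer, sourceImage, and labels are already set up, and the descriptor factory and constant names follow the usual MOE selector mangling but are assumptions, not taken from this page.

```java
// Hypothetical MOE-binding translation of the Objective-C example above.
// Requires a Metal-capable device at runtime. The descriptor factory method
// and the loss/reduction constant names are assumed (MOE-style mangling of
// cnnLossDescriptorWithType:reductionType:); verify them against the
// generated bindings before use.
static MPSImage encodeLoss(MTLCommandBuffer commandBuffer,
                           MPSImage sourceImage,
                           MPSCNNLossLabels labels,
                           MTLDevice device) {
    MPSCNNLossDescriptor lossDescriptor = MPSCNNLossDescriptor
            .cnnLossDescriptorWithTypeReductionType(   // assumed name
                    MPSCNNLossTypeMeanAbsoluteError,   // assumed constant
                    MPSCNNReductionTypeSum);           // assumed constant
    MPSCNNLoss lossFilter = MPSCNNLoss.alloc()
            .initWithDeviceLossDescriptor(device, lossDescriptor);

    // This overload creates and returns the loss-gradient image directly;
    // see encodeToCommandBufferSourceImageLabels above.
    return lossFilter.encodeToCommandBufferSourceImageLabels(
            commandBuffer, sourceImage, labels);
}
```

As in the Objective-C example, remember to call synchronizeOnCommandBuffer on the labels object before reading the loss image on the CPU.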
-
epsilon
public float epsilon()
-
hash_static
public static long hash_static()
-
init
public MPSCNNLoss init()
Overrides:
init in class MPSCNNKernel
-
initWithCoder
public MPSCNNLoss initWithCoder(NSCoder aDecoder)
Description copied from interface: NSCoding
NS_DESIGNATED_INITIALIZER
Specified by:
initWithCoder in interface NSCoding
Overrides:
initWithCoder in class MPSCNNKernel
-
initWithCoderDevice
public MPSCNNLoss initWithCoderDevice(NSCoder aDecoder, java.lang.Object device)
Overrides:
initWithCoderDevice in class MPSCNNKernel
Parameters:
aDecoder - The NSCoder subclass with your serialized MPSKernel
device - The MTLDevice on which to make the MPSKernel
Returns:
A new MPSKernel object, or nil if failure.
-
initWithDevice
public MPSCNNLoss initWithDevice(java.lang.Object device)
Description copied from class: MPSCNNKernel
Standard init with default properties per filter type
Overrides:
initWithDevice in class MPSCNNKernel
Parameters:
device - The device that the filter will be used on. May not be NULL.
Returns:
A pointer to the newly initialized object. This will fail, returning nil, if the device is not supported. Devices must be MTLFeatureSet_iOS_GPUFamily2_v1 or later.
-
initWithDeviceLossDescriptor
public MPSCNNLoss initWithDeviceLossDescriptor(MTLDevice device, MPSCNNLossDescriptor lossDescriptor)
Initialize the loss filter with a loss descriptor.
Parameters:
device - The device the filter will run on.
lossDescriptor - The loss descriptor.
Returns:
A valid MPSCNNLoss object, or nil on failure.
-
instanceMethodForSelector
public static NSObject.Function_instanceMethodForSelector_ret instanceMethodForSelector(org.moe.natj.objc.SEL aSelector)
-
instanceMethodSignatureForSelector
public static NSMethodSignature instanceMethodSignatureForSelector(org.moe.natj.objc.SEL aSelector)
-
instancesRespondToSelector
public static boolean instancesRespondToSelector(org.moe.natj.objc.SEL aSelector)
-
isSubclassOfClass
public static boolean isSubclassOfClass(org.moe.natj.objc.Class aClass)
-
keyPathsForValuesAffectingValueForKey
public static NSSet<java.lang.String> keyPathsForValuesAffectingValueForKey(java.lang.String key)
-
labelSmoothing
public float labelSmoothing()
-
lossType
public int lossType()
See MPSCNNLossDescriptor for information about this and the other loss filter properties (delta, epsilon, labelSmoothing, numberOfClasses, reductionType, weight).
-
new_objc
public static java.lang.Object new_objc()
-
numberOfClasses
public long numberOfClasses()
-
reductionType
public int reductionType()
-
resolveClassMethod
public static boolean resolveClassMethod(org.moe.natj.objc.SEL sel)
-
resolveInstanceMethod
public static boolean resolveInstanceMethod(org.moe.natj.objc.SEL sel)
-
setVersion_static
public static void setVersion_static(long aVersion)
-
superclass_static
public static org.moe.natj.objc.Class superclass_static()
-
supportsSecureCoding
public static boolean supportsSecureCoding()
-
_supportsSecureCoding
public boolean _supportsSecureCoding()
Description copied from interface: NSSecureCoding
This property must return YES on all classes that allow secure coding. Subclasses of classes that adopt NSSecureCoding and override initWithCoder: must also override this method and return YES. The Secure Coding Guide should be consulted when writing methods that decode data.
Specified by:
_supportsSecureCoding in interface NSSecureCoding
Overrides:
_supportsSecureCoding in class MPSCNNKernel
-
version_static
public static long version_static()
-
weight
public float weight()
-
reduceAcrossBatch
public boolean reduceAcrossBatch()
-
-