Package apple.metalperformanceshaders
Class MPSCNNYOLOLossDescriptor
- java.lang.Object
-
- org.moe.natj.general.NativeObject
-
- org.moe.natj.objc.ObjCObject
-
- apple.NSObject
-
- apple.metalperformanceshaders.MPSCNNYOLOLossDescriptor
-
public class MPSCNNYOLOLossDescriptor extends NSObject implements NSCopying
MPSCNNYOLOLossDescriptor

[@dependency] This depends on Metal.framework.

The MPSCNNYOLOLossDescriptor specifies a loss filter descriptor that is used to create a MPSCNNLoss filter. The MPSCNNYOLOLoss is a filter that has been specialized for object detection tasks and follows a specific layout for the feature channels of the input, output, weight and label data.

The layout of the data within the feature channels is as follows: each anchor box uses (2 + 2 + 1 + numberOfClasses = 5 + numberOfClasses) feature channels, so the total number of feature channels used is (5 + numberOfClasses) * numberOfAnchorBoxes. The first feature channel for anchor box index 'anchorIdx' is at fcIndex = (5 + numberOfClasses) * anchorIdx, and the feature channels within each anchor box are stored in the layout 'XYWHCFFFFF...', where (XY) are the so-called raw x and y coordinates of the bounding box within each grid cell and (WH) are the corresponding width and height. 'C' signifies a confidence for having an object in the cell, and FFFFF... are the feature channel values for each class of object to be classified in the object detector.

The YOLO loss filter works by operating mostly independently on each anchor box:
* The XY channels of the inputs are first transformed to relative XY values by applying the sigmoid neuron to them, after which they are passed through the loss function defined by @ref XYLossDescriptor, which is typically chosen to be of type @ref MPSCNNLossTypeMeanSquaredError.
* The WH channels contain the raw width and height of the bounding box, and they are operated on by the loss function defined by @ref WHLossDescriptor, which is typically of type @ref MPSCNNLossTypeHuber.
* The C channel contains the confidence value of having an object in the bounding box, and it is operated on by the loss function defined by @ref confidenceLossDescriptor, which is typically chosen to be @ref MPSCNNLossTypeSigmoidCrossEntropy.
* The FFFFF... channels (one channel per class) contain the raw feature channel values for the object classes, used to identify which objects are the most probable ones in the bounding box; these channels are passed through the loss function defined by @ref classesLossDescriptor, which in typical cases is of type @ref MPSCNNLossTypeSoftMaxCrossEntropy.

For details on how to set up the label values and anchor boxes, see https://arxiv.org/abs/1612.08242
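The channel-layout arithmetic above can be sketched as a small helper. The class and method names below are illustrative only and are not part of the MPS or MOE API:

```java
// Mirrors the MPSCNNYOLOLoss feature-channel convention:
// each anchor box occupies 'XYWHC' plus one channel per class.
public class YoloChannelLayout {

    // 2 (XY) + 2 (WH) + 1 (C) + numberOfClasses channels per anchor box.
    public static int channelsPerAnchorBox(int numberOfClasses) {
        return 5 + numberOfClasses;
    }

    // Total feature channels in the input/output/label tensors.
    public static int totalChannels(int numberOfClasses, int numberOfAnchorBoxes) {
        return channelsPerAnchorBox(numberOfClasses) * numberOfAnchorBoxes;
    }

    // Index of the first feature channel (the raw X channel) for a given anchor box.
    public static int firstChannelIndex(int numberOfClasses, int anchorIdx) {
        return channelsPerAnchorBox(numberOfClasses) * anchorIdx;
    }

    public static void main(String[] args) {
        // YOLOv2-style setup: 20 classes, 5 anchor boxes -> 125 channels total.
        System.out.println(totalChannels(20, 5));      // 125
        System.out.println(firstChannelIndex(20, 2));  // 50
    }
}
```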
-
-
Nested Class Summary
-
Nested classes/interfaces inherited from class apple.NSObject
NSObject.Function_instanceMethodForSelector_ret, NSObject.Function_methodForSelector_ret
-
-
Constructor Summary
Constructors

Modifier   Constructor
protected  MPSCNNYOLOLossDescriptor(org.moe.natj.general.Pointer peer)
-
Method Summary
All Methods  Static Methods  Instance Methods  Concrete Methods

static boolean accessInstanceVariablesDirectly()
static MPSCNNYOLOLossDescriptor alloc()
static java.lang.Object allocWithZone(org.moe.natj.general.ptr.VoidPtr zone)
NSData anchorBoxes()
    [@property] anchorBoxes NSData containing the width and height for numberOfAnchorBoxes anchor boxes, two float values per box.
static boolean automaticallyNotifiesObserversForKey(java.lang.String key)
static void cancelPreviousPerformRequestsWithTarget(java.lang.Object aTarget)
static void cancelPreviousPerformRequestsWithTargetSelectorObject(java.lang.Object aTarget, org.moe.natj.objc.SEL aSelector, java.lang.Object anArgument)
MPSCNNLossDescriptor classesLossDescriptor()
    [@property] classesLossDescriptor The type of the loss filter applied to the class feature channels.
static NSArray<java.lang.String> classFallbacksForKeyedArchiver()
static org.moe.natj.objc.Class classForKeyedUnarchiver()
static MPSCNNYOLOLossDescriptor cnnLossDescriptorWithXYLossTypeWHLossTypeConfidenceLossTypeClassesLossTypeReductionTypeAnchorBoxesNumberOfAnchorBoxes(int XYLossType, int WHLossType, int confidenceLossType, int classesLossType, int reductionType, NSData anchorBoxes, long numberOfAnchorBoxes)
    Make a descriptor for a MPSCNNYOLOLoss object.
MPSCNNLossDescriptor confidenceLossDescriptor()
    [@property] confidenceLossDescriptor The type of the loss filter applied to the confidence channel.
java.lang.Object copyWithZone(org.moe.natj.general.ptr.VoidPtr zone)
static java.lang.String debugDescription_static()
static java.lang.String description_static()
static long hash_static()
MPSCNNYOLOLossDescriptor init()
static NSObject.Function_instanceMethodForSelector_ret instanceMethodForSelector(org.moe.natj.objc.SEL aSelector)
static NSMethodSignature instanceMethodSignatureForSelector(org.moe.natj.objc.SEL aSelector)
static boolean instancesRespondToSelector(org.moe.natj.objc.SEL aSelector)
static boolean isSubclassOfClass(org.moe.natj.objc.Class aClass)
static NSSet<java.lang.String> keyPathsForValuesAffectingValueForKey(java.lang.String key)
float maxIOUForObjectAbsence()
    [@property] neg_iou If the predicted IOU with the groundTruth is lower than this value, it is considered a confident object absence. Default is 0.3.
float minIOUForObjectPresence()
    [@property] pos_iou If the predicted IOU with the groundTruth is higher than this value, it is considered a confident object presence. Default is 0.7.
static java.lang.Object new_objc()
long numberOfAnchorBoxes()
    [@property] numberOfAnchorBoxes The number of anchor boxes used to detect objects per grid cell.
boolean reduceAcrossBatch()
    [@property] reduceAcrossBatch If set to YES, the reduction operation is also applied across the batch-index dimension.
int reductionType()
    [@property] reductionType The ReductionType shared across all losses (so they may generate same-sized output).
boolean rescore()
    [@property] rescore Multiplies the confidence groundTruth with the IOU (intersection over union) of the predicted and groundTruth bounding boxes.
static boolean resolveClassMethod(org.moe.natj.objc.SEL sel)
static boolean resolveInstanceMethod(org.moe.natj.objc.SEL sel)
float scaleClass()
    [@property] scaleClass Scale factor for classes loss and loss gradient. Default is 2.0.
float scaleNoObject()
    [@property] scaleNoObject Scale factor for no-object confidence loss and loss gradient. Default is 5.0.
float scaleObject()
    [@property] scaleObject Scale factor for object confidence loss and loss gradient. Default is 100.0.
float scaleWH()
    [@property] scaleWH Scale factor for WH loss and loss gradient. Default is 10.0.
float scaleXY()
    [@property] scaleXY Scale factor for XY loss and loss gradient. Default is 10.0.
void setAnchorBoxes(NSData value)
void setClassesLossDescriptor(MPSCNNLossDescriptor value)
void setConfidenceLossDescriptor(MPSCNNLossDescriptor value)
void setMaxIOUForObjectAbsence(float value)
void setMinIOUForObjectPresence(float value)
void setNumberOfAnchorBoxes(long value)
void setReduceAcrossBatch(boolean value)
void setReductionType(int value)
void setRescore(boolean value)
void setScaleClass(float value)
void setScaleNoObject(float value)
void setScaleObject(float value)
void setScaleWH(float value)
void setScaleXY(float value)
static void setVersion_static(long aVersion)
void setWHLossDescriptor(MPSCNNLossDescriptor value)
void setXYLossDescriptor(MPSCNNLossDescriptor value)
static org.moe.natj.objc.Class superclass_static()
static long version_static()
MPSCNNLossDescriptor WHLossDescriptor()
    [@property] WHLossDescriptor The type of the loss filter applied to the WH (width and height) channels.
MPSCNNLossDescriptor XYLossDescriptor()
    [@property] XYLossDescriptor The type of the loss filter applied to the XY (position) channels.
-
Methods inherited from class apple.NSObject
accessibilityActivate, accessibilityActivationPoint, accessibilityAssistiveTechnologyFocusedIdentifiers, accessibilityAttributedHint, accessibilityAttributedLabel, accessibilityAttributedUserInputLabels, accessibilityAttributedValue, accessibilityContainerType, accessibilityCustomActions, accessibilityCustomRotors, accessibilityDecrement, accessibilityDragSourceDescriptors, accessibilityDropPointDescriptors, accessibilityElementAtIndex, accessibilityElementCount, accessibilityElementDidBecomeFocused, accessibilityElementDidLoseFocus, accessibilityElementIsFocused, accessibilityElements, accessibilityElementsHidden, accessibilityFrame, accessibilityHint, accessibilityIncrement, accessibilityLabel, accessibilityLanguage, accessibilityNavigationStyle, accessibilityPath, accessibilityPerformEscape, accessibilityPerformMagicTap, accessibilityRespondsToUserInteraction, accessibilityScroll, accessibilityTextualContext, accessibilityTraits, accessibilityUserInputLabels, accessibilityValue, accessibilityViewIsModal, addObserverForKeyPathOptionsContext, attemptRecoveryFromErrorOptionIndex, attemptRecoveryFromErrorOptionIndexDelegateDidRecoverSelectorContextInfo, autoContentAccessingProxy, awakeAfterUsingCoder, awakeFromNib, class_objc, classForCoder, classForKeyedArchiver, copy, dealloc, debugDescription, description, dictionaryWithValuesForKeys, didChangeValueForKey, didChangeValueForKeyWithSetMutationUsingObjects, didChangeValuesAtIndexesForKey, doesNotRecognizeSelector, fileManagerShouldProceedAfterError, fileManagerWillProcessPath, finalize_objc, forwardingTargetForSelector, forwardInvocation, hash, indexOfAccessibilityElement, isAccessibilityElement, isEqual, isKindOfClass, isMemberOfClass, isProxy, methodForSelector, methodSignatureForSelector, mutableArrayValueForKey, mutableArrayValueForKeyPath, mutableCopy, mutableOrderedSetValueForKey, mutableOrderedSetValueForKeyPath, mutableSetValueForKey, mutableSetValueForKeyPath, observationInfo, 
observeValueForKeyPathOfObjectChangeContext, performSelector, performSelectorInBackgroundWithObject, performSelectorOnMainThreadWithObjectWaitUntilDone, performSelectorOnMainThreadWithObjectWaitUntilDoneModes, performSelectorOnThreadWithObjectWaitUntilDone, performSelectorOnThreadWithObjectWaitUntilDoneModes, performSelectorWithObject, performSelectorWithObjectAfterDelay, performSelectorWithObjectAfterDelayInModes, performSelectorWithObjectWithObject, prepareForInterfaceBuilder, provideImageDataBytesPerRowOrigin_Size_UserInfo, removeObserverForKeyPath, removeObserverForKeyPathContext, replacementObjectForCoder, replacementObjectForKeyedArchiver, respondsToSelector, self, setAccessibilityActivationPoint, setAccessibilityAttributedHint, setAccessibilityAttributedLabel, setAccessibilityAttributedUserInputLabels, setAccessibilityAttributedValue, setAccessibilityContainerType, setAccessibilityCustomActions, setAccessibilityCustomRotors, setAccessibilityDragSourceDescriptors, setAccessibilityDropPointDescriptors, setAccessibilityElements, setAccessibilityElementsHidden, setAccessibilityFrame, setAccessibilityHint, setAccessibilityLabel, setAccessibilityLanguage, setAccessibilityNavigationStyle, setAccessibilityPath, setAccessibilityRespondsToUserInteraction, setAccessibilityTextualContext, setAccessibilityTraits, setAccessibilityUserInputLabels, setAccessibilityValue, setAccessibilityViewIsModal, setIsAccessibilityElement, setNilValueForKey, setObservationInfo, setShouldGroupAccessibilityChildren, setValueForKey, setValueForKeyPath, setValueForUndefinedKey, setValuesForKeysWithDictionary, shouldGroupAccessibilityChildren, superclass, validateValueForKeyError, validateValueForKeyPathError, valueForKey, valueForKeyPath, valueForUndefinedKey, willChangeValueForKey, willChangeValueForKeyWithSetMutationUsingObjects, willChangeValuesAtIndexesForKey
-
-
-
-
Method Detail
-
WHLossDescriptor
public MPSCNNLossDescriptor WHLossDescriptor()
[@property] WHLossDescriptor The type of the loss filter applied to the WH (width and height) channels of each anchor box.
-
XYLossDescriptor
public MPSCNNLossDescriptor XYLossDescriptor()
[@property] XYLossDescriptor The type of the loss filter applied to the XY (bounding-box position) channels of each anchor box.
-
accessInstanceVariablesDirectly
public static boolean accessInstanceVariablesDirectly()
-
alloc
public static MPSCNNYOLOLossDescriptor alloc()
-
allocWithZone
public static java.lang.Object allocWithZone(org.moe.natj.general.ptr.VoidPtr zone)
-
anchorBoxes
public NSData anchorBoxes()
[@property] anchorBoxes NSData containing the width and height for numberOfAnchorBoxes anchor boxes. This NSData should have two float values per anchor box, representing the width and height of the anchor box.

[@code]
typedef struct anchorBox {
    float width;
    float height;
} anchorBox;

anchorBox gAnchorBoxes[MAX_NUM_ANCHOR_BOXES] = {
    { .width = 1.f, .height = 2.f },
    { .width = 1.f, .height = 1.f },
    { .width = 2.f, .height = 1.f },
};
NSData *labelsInputData = [NSData dataWithBytes: gAnchorBoxes length: MAX_NUM_ANCHOR_BOXES * sizeof(anchorBox)];
[@endcode]
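On the Java side, the same two-floats-per-box layout can be produced with a ByteBuffer before handing the bytes to the binding's NSData constructor. The packing below is a sketch; how the resulting bytes are wrapped into an NSData is binding-specific and not shown here:

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

// Packs (width, height) pairs into the flat float layout the anchorBoxes
// property expects: two 32-bit floats per anchor box.
public class AnchorBoxPacker {

    public static byte[] pack(float[][] boxes) {
        ByteBuffer buf = ByteBuffer.allocate(boxes.length * 2 * Float.BYTES)
                                   .order(ByteOrder.LITTLE_ENDIAN);
        for (float[] box : boxes) {
            buf.putFloat(box[0]); // width
            buf.putFloat(box[1]); // height
        }
        return buf.array();
    }

    public static void main(String[] args) {
        // Three anchor boxes, matching the Objective-C sample above.
        byte[] bytes = pack(new float[][] { {1f, 2f}, {1f, 1f}, {2f, 1f} });
        System.out.println(bytes.length); // 24 bytes = 3 boxes * 2 floats * 4 bytes
    }
}
```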
-
automaticallyNotifiesObserversForKey
public static boolean automaticallyNotifiesObserversForKey(java.lang.String key)
-
cancelPreviousPerformRequestsWithTarget
public static void cancelPreviousPerformRequestsWithTarget(java.lang.Object aTarget)
-
cancelPreviousPerformRequestsWithTargetSelectorObject
public static void cancelPreviousPerformRequestsWithTargetSelectorObject(java.lang.Object aTarget, org.moe.natj.objc.SEL aSelector, java.lang.Object anArgument)
-
classFallbacksForKeyedArchiver
public static NSArray<java.lang.String> classFallbacksForKeyedArchiver()
-
classForKeyedUnarchiver
public static org.moe.natj.objc.Class classForKeyedUnarchiver()
-
classesLossDescriptor
public MPSCNNLossDescriptor classesLossDescriptor()
[@property] classesLossDescriptor The type of the loss filter applied to the class feature channels of each anchor box.
-
cnnLossDescriptorWithXYLossTypeWHLossTypeConfidenceLossTypeClassesLossTypeReductionTypeAnchorBoxesNumberOfAnchorBoxes
public static MPSCNNYOLOLossDescriptor cnnLossDescriptorWithXYLossTypeWHLossTypeConfidenceLossTypeClassesLossTypeReductionTypeAnchorBoxesNumberOfAnchorBoxes(int XYLossType, int WHLossType, int confidenceLossType, int classesLossType, int reductionType, NSData anchorBoxes, long numberOfAnchorBoxes)
Make a descriptor for a MPSCNNYOLOLoss object.

- Parameters:
XYLossType - The type of the spatial position loss filter.
WHLossType - The type of the spatial size loss filter.
confidenceLossType - The type of the confidence loss filter.
classesLossType - The type of the classes loss filter.
reductionType - The type of reduction operation to apply.
anchorBoxes - An NSData containing an array of anchor boxes, each defined as struct { float width; float height; }.
numberOfAnchorBoxes - The number of anchor boxes used to detect objects per grid cell.
- Returns:
- A valid MPSCNNYOLOLossDescriptor object, or nil on failure.
-
confidenceLossDescriptor
public MPSCNNLossDescriptor confidenceLossDescriptor()
[@property] confidenceLossDescriptor The type of the loss filter applied to the confidence channel of each anchor box.
-
copyWithZone
public java.lang.Object copyWithZone(org.moe.natj.general.ptr.VoidPtr zone)
- Specified by:
copyWithZonein interfaceNSCopying
-
debugDescription_static
public static java.lang.String debugDescription_static()
-
description_static
public static java.lang.String description_static()
-
hash_static
public static long hash_static()
-
init
public MPSCNNYOLOLossDescriptor init()
-
instanceMethodForSelector
public static NSObject.Function_instanceMethodForSelector_ret instanceMethodForSelector(org.moe.natj.objc.SEL aSelector)
-
instanceMethodSignatureForSelector
public static NSMethodSignature instanceMethodSignatureForSelector(org.moe.natj.objc.SEL aSelector)
-
instancesRespondToSelector
public static boolean instancesRespondToSelector(org.moe.natj.objc.SEL aSelector)
-
isSubclassOfClass
public static boolean isSubclassOfClass(org.moe.natj.objc.Class aClass)
-
keyPathsForValuesAffectingValueForKey
public static NSSet<java.lang.String> keyPathsForValuesAffectingValueForKey(java.lang.String key)
-
maxIOUForObjectAbsence
public float maxIOUForObjectAbsence()
[@property] neg_iou If the predicted IOU with the groundTruth is lower than this value, it is considered a confident object absence. Default is 0.3.
-
minIOUForObjectPresence
public float minIOUForObjectPresence()
[@property] pos_iou If the predicted IOU with the groundTruth is higher than this value, it is considered a confident object presence. Default is 0.7.
-
new_objc
public static java.lang.Object new_objc()
-
numberOfAnchorBoxes
public long numberOfAnchorBoxes()
[@property] numberOfAnchorBoxes The number of anchor boxes used to detect objects per grid cell.
-
reductionType
public int reductionType()
[@property] reductionType The ReductionType shared across all losses (so they may generate same-sized output).
-
rescore
public boolean rescore()
[@property] rescore Rescore pertains to multiplying the confidence groundTruth with the IOU (intersection over union) of the predicted bounding box and the groundTruth bounding box. Default is YES.
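To make the rescore semantics concrete, here is a sketch of the target computation: with rescore enabled, the confidence target becomes the ground-truth confidence multiplied by the IOU of the predicted and ground-truth boxes. The box representation (x, y, width, height with a top-left origin) and helper names are our own, chosen for illustration:

```java
// Illustrative IOU and rescore computation; not part of the MPS API.
public class RescoreExample {

    // Boxes are {x, y, width, height} with (x, y) the top-left corner.
    public static float iou(float[] a, float[] b) {
        float x1 = Math.max(a[0], b[0]);
        float y1 = Math.max(a[1], b[1]);
        float x2 = Math.min(a[0] + a[2], b[0] + b[2]);
        float y2 = Math.min(a[1] + a[3], b[1] + b[3]);
        float inter = Math.max(0f, x2 - x1) * Math.max(0f, y2 - y1);
        float union = a[2] * a[3] + b[2] * b[3] - inter;
        return union <= 0f ? 0f : inter / union;
    }

    // With rescore == YES, the confidence target is scaled by the overlap.
    public static float rescoredConfidence(float groundTruthConfidence,
                                           float[] predicted, float[] truth) {
        return groundTruthConfidence * iou(predicted, truth);
    }

    public static void main(String[] args) {
        float[] pred  = {1f, 1f, 2f, 2f};
        float[] truth = {0f, 0f, 2f, 2f};
        // Overlap is 1x1 = 1; union is 4 + 4 - 1 = 7; IOU = 1/7.
        System.out.println(rescoredConfidence(1f, pred, truth));
    }
}
```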
-
resolveClassMethod
public static boolean resolveClassMethod(org.moe.natj.objc.SEL sel)
-
resolveInstanceMethod
public static boolean resolveInstanceMethod(org.moe.natj.objc.SEL sel)
-
scaleClass
public float scaleClass()
[@property] scaleClass Scale factor for classes loss and loss gradient. Default is 2.0.
-
scaleNoObject
public float scaleNoObject()
[@property] scaleNoObject Scale factor for no-object confidence loss and loss gradient. Default is 5.0.
-
scaleObject
public float scaleObject()
[@property] scaleObject Scale factor for object confidence loss and loss gradient. Default is 100.0.
-
scaleWH
public float scaleWH()
[@property] scaleWH Scale factor for WH loss and loss gradient. Default is 10.0.
-
scaleXY
public float scaleXY()
[@property] scaleXY Scale factor for XY loss and loss gradient. Default is 10.0.
-
setAnchorBoxes
public void setAnchorBoxes(NSData value)
[@property] anchorBoxes NSData containing the width and height for numberOfAnchorBoxes anchor boxes. This NSData should have two float values per anchor box, representing the width and height of the anchor box.

[@code]
typedef struct anchorBox {
    float width;
    float height;
} anchorBox;

anchorBox gAnchorBoxes[MAX_NUM_ANCHOR_BOXES] = {
    { .width = 1.f, .height = 2.f },
    { .width = 1.f, .height = 1.f },
    { .width = 2.f, .height = 1.f },
};
NSData *labelsInputData = [NSData dataWithBytes: gAnchorBoxes length: MAX_NUM_ANCHOR_BOXES * sizeof(anchorBox)];
[@endcode]
-
setClassesLossDescriptor
public void setClassesLossDescriptor(MPSCNNLossDescriptor value)
[@property] classesLossDescriptor The type of the loss filter applied to the class feature channels of each anchor box.
-
setConfidenceLossDescriptor
public void setConfidenceLossDescriptor(MPSCNNLossDescriptor value)
[@property] confidenceLossDescriptor The type of the loss filter applied to the confidence channel of each anchor box.
-
setMaxIOUForObjectAbsence
public void setMaxIOUForObjectAbsence(float value)
[@property] neg_iou If the predicted IOU with the groundTruth is lower than this value, it is considered a confident object absence. Default is 0.3.
-
setMinIOUForObjectPresence
public void setMinIOUForObjectPresence(float value)
[@property] pos_iou If the predicted IOU with the groundTruth is higher than this value, it is considered a confident object presence. Default is 0.7.
-
setNumberOfAnchorBoxes
public void setNumberOfAnchorBoxes(long value)
[@property] numberOfAnchorBoxes The number of anchor boxes used to detect objects per grid cell.
-
setReductionType
public void setReductionType(int value)
[@property] reductionType The ReductionType shared across all losses (so they may generate same-sized output).
-
setRescore
public void setRescore(boolean value)
[@property] rescore Rescore pertains to multiplying the confidence groundTruth with the IOU (intersection over union) of the predicted bounding box and the groundTruth bounding box. Default is YES.
-
setScaleClass
public void setScaleClass(float value)
[@property] scaleClass Scale factor for classes loss and loss gradient. Default is 2.0.
-
setScaleNoObject
public void setScaleNoObject(float value)
[@property] scaleNoObject Scale factor for no-object confidence loss and loss gradient. Default is 5.0.
-
setScaleObject
public void setScaleObject(float value)
[@property] scaleObject Scale factor for object confidence loss and loss gradient. Default is 100.0.
-
setScaleWH
public void setScaleWH(float value)
[@property] scaleWH Scale factor for WH loss and loss gradient. Default is 10.0.
-
setScaleXY
public void setScaleXY(float value)
[@property] scaleXY Scale factor for XY loss and loss gradient. Default is 10.0.
-
setVersion_static
public static void setVersion_static(long aVersion)
-
setWHLossDescriptor
public void setWHLossDescriptor(MPSCNNLossDescriptor value)
[@property] WHLossDescriptor The type of the loss filter applied to the WH (width and height) channels of each anchor box.
-
setXYLossDescriptor
public void setXYLossDescriptor(MPSCNNLossDescriptor value)
[@property] XYLossDescriptor The type of the loss filter applied to the XY (bounding-box position) channels of each anchor box.
-
superclass_static
public static org.moe.natj.objc.Class superclass_static()
-
version_static
public static long version_static()
-
reduceAcrossBatch
public boolean reduceAcrossBatch()
[@property] reduceAcrossBatch If set to YES, the reduction operation is also applied across the batch-index dimension, i.e., the loss value is summed over the images in the batch and the result of the reduction is written to the first loss image in the batch, while the other loss images are set to zero. If set to NO, no reductions are performed across the batch dimension and each image in the batch contains the loss value associated with that one particular image. NOTE: If reductionType == MPSCNNReductionTypeNone, this flag has no effect on results; no reductions are done in this case. NOTE: If reduceAcrossBatch is set to YES and reductionType == MPSCNNReductionTypeMean, the final forward loss value is computed by first summing over the components and then dividing the result by: number of feature channels * width * height * number of images in the batch. The default value is NO.
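The mean-reduction normalization described in the second NOTE can be sketched as plain arithmetic. This is illustrative only; the real reduction runs on the GPU inside the loss filter:

```java
// Sketch of reduceAcrossBatch == YES with MPSCNNReductionTypeMean:
// sum every loss component across the whole batch, then divide by
// channels * width * height * batchSize.
public class BatchMeanLoss {

    public static float reduce(float[] components, int channels,
                               int width, int height, int batchSize) {
        float sum = 0f;
        for (float c : components) {
            sum += c;
        }
        return sum / (channels * (float) width * height * batchSize);
    }

    public static void main(String[] args) {
        // 1 channel, 2x2 grid, batch of 2 -> 8 components, divisor 8.
        float[] components = new float[8];
        java.util.Arrays.fill(components, 1f);
        System.out.println(reduce(components, 1, 2, 2, 2)); // 1.0
    }
}
```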
-
setReduceAcrossBatch
public void setReduceAcrossBatch(boolean value)
[@property] reduceAcrossBatch If set to YES, the reduction operation is also applied across the batch-index dimension, i.e., the loss value is summed over the images in the batch and the result of the reduction is written to the first loss image in the batch, while the other loss images are set to zero. If set to NO, no reductions are performed across the batch dimension and each image in the batch contains the loss value associated with that one particular image. NOTE: If reductionType == MPSCNNReductionTypeNone, this flag has no effect on results; no reductions are done in this case. NOTE: If reduceAcrossBatch is set to YES and reductionType == MPSCNNReductionTypeMean, the final forward loss value is computed by first summing over the components and then dividing the result by: number of feature channels * width * height * number of images in the batch. The default value is NO.
-
-