Package apple.metalperformanceshaders
Class MPSCNNBinaryKernel
- java.lang.Object
  - org.moe.natj.general.NativeObject
    - org.moe.natj.objc.ObjCObject
      - apple.NSObject
        - apple.metalperformanceshaders.MPSKernel
          - apple.metalperformanceshaders.MPSCNNBinaryKernel
- All Implemented Interfaces:
NSCoding, NSCopying, NSSecureCoding, NSObject
- Direct Known Subclasses:
MPSCNNArithmetic, MPSCNNGradientKernel, MPSNNGridSample, MPSNNLossGradient, MPSNNReduceBinary
public class MPSCNNBinaryKernel extends MPSKernel
MPSCNNBinaryKernel [@dependency] This depends on Metal.framework. Describes a convolutional neural network kernel. A MPSCNNBinaryKernel consumes two MPSImages, a primary and a secondary, and produces one MPSImage.
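Conceptually, the binary kernel's data flow can be sketched in plain Java. This is an illustration only; the real kernel runs on the GPU via Metal and operates on MPSImage texture data, and `BinaryKernelSketch` is a hypothetical helper, not part of the bindings:

```java
import java.util.function.LongBinaryOperator;

// Illustration only: models the data flow of a binary kernel
// (two equally sized sources in, one destination out), not GPU execution.
public class BinaryKernelSketch {
    static long[] apply(long[] primary, long[] secondary, LongBinaryOperator op) {
        if (primary.length != secondary.length) {
            throw new IllegalArgumentException("source sizes must match");
        }
        long[] dest = new long[primary.length];
        for (int i = 0; i < primary.length; i++) {
            dest[i] = op.applyAsLong(primary[i], secondary[i]);
        }
        return dest;
    }

    public static void main(String[] args) {
        // e.g. an elementwise add, in the spirit of the MPSCNNArithmetic subclasses
        long[] out = apply(new long[]{1, 2, 3}, new long[]{10, 20, 30}, Long::sum);
        System.out.println(java.util.Arrays.toString(out)); // prints [11, 22, 33]
    }
}
```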
-
-
Nested Class Summary
-
Nested classes/interfaces inherited from class apple.NSObject
NSObject.Function_instanceMethodForSelector_ret, NSObject.Function_methodForSelector_ret
-
-
Constructor Summary
Constructors
protected MPSCNNBinaryKernel(org.moe.natj.general.Pointer peer)
-
Method Summary
All Methods | Static Methods | Instance Methods | Concrete Methods

boolean _supportsSecureCoding()
    This property must return YES on all classes that allow secure coding.
static boolean accessInstanceVariablesDirectly()
static MPSCNNBinaryKernel alloc()
static java.lang.Object allocWithZone(org.moe.natj.general.ptr.VoidPtr zone)
boolean appendBatchBarrier()
    Returns YES if the filter must be run over the entire batch before its results may be considered complete. The MPSNNGraph may split batches into sub-batches to save memory.
static boolean automaticallyNotifiesObserversForKey(java.lang.String key)
static void cancelPreviousPerformRequestsWithTarget(java.lang.Object aTarget)
static void cancelPreviousPerformRequestsWithTargetSelectorObject(java.lang.Object aTarget, org.moe.natj.objc.SEL aSelector, java.lang.Object anArgument)
static NSArray<java.lang.String> classFallbacksForKeyedArchiver()
static org.moe.natj.objc.Class classForKeyedUnarchiver()
MTLRegion clipRect()
    [@property] clipRect: an optional clip rectangle to use when writing data.
static java.lang.String debugDescription_static()
static java.lang.String description_static()
long destinationFeatureChannelOffset()
    [@property] destinationFeatureChannelOffset: the number of channels in the destination MPSImage to skip before writing output.
MPSImageAllocator destinationImageAllocator()
    Method to allocate the result image for -encodeToCommandBuffer:sourceImage:. Default: MPSTemporaryImage.defaultAllocator.
MPSImageDescriptor destinationImageDescriptorForSourceImagesSourceStates(NSArray<? extends MPSImage> sourceImages, NSArray<? extends MPSState> sourceStates)
    Get a suggested destination image descriptor for a source image. Your application is free to pass any destinationImage it likes to encodeToCommandBuffer:sourceImage:destinationImage:, within reason.
MPSImage encodeToCommandBufferPrimaryImageSecondaryImage(MTLCommandBuffer commandBuffer, MPSImage primaryImage, MPSImage secondaryImage)
    Encode a MPSCNNKernel into a command buffer.
void encodeToCommandBufferPrimaryImageSecondaryImageDestinationImage(MTLCommandBuffer commandBuffer, MPSImage primaryImage, MPSImage secondaryImage, MPSImage destinationImage)
    Encode a MPSCNNKernel into a command buffer.
MPSImage encodeToCommandBufferPrimaryImageSecondaryImageDestinationStateDestinationStateIsTemporary(MTLCommandBuffer commandBuffer, MPSImage primaryImage, MPSImage secondaryImage, org.moe.natj.general.ptr.Ptr<MPSState> outState, boolean isTemporary)
    Encode a MPSCNNKernel into a command buffer.
long encodingStorageSizeForPrimaryImageSecondaryImageSourceStatesDestinationImage(MPSImage primaryImage, MPSImage secondaryImage, NSArray<? extends MPSState> sourceStates, MPSImage destinationImage)
    The size of extra MPS heap storage allocated while the kernel is encoding. This is best effort and describes things that are likely to end up on the MPS heap.
static long hash_static()
MPSCNNBinaryKernel init()
MPSCNNBinaryKernel initWithCoder(NSCoder aDecoder)
    NS_DESIGNATED_INITIALIZER
MPSCNNBinaryKernel initWithCoderDevice(NSCoder aDecoder, java.lang.Object device)
    NSSecureCoding compatibility. While the standard NSSecureCoding/NSCoding method -initWithCoder: should work, the file can't know which device your data is allocated on, so the device has to be guessed and may be guessed incorrectly.
MPSCNNBinaryKernel initWithDevice(java.lang.Object device)
    Standard init with default properties per filter type.
static NSObject.Function_instanceMethodForSelector_ret instanceMethodForSelector(org.moe.natj.objc.SEL aSelector)
static NSMethodSignature instanceMethodSignatureForSelector(org.moe.natj.objc.SEL aSelector)
static boolean instancesRespondToSelector(org.moe.natj.objc.SEL aSelector)
boolean isBackwards()
    [@property] isBackwards: YES if the filter operates backwards.
boolean isResultStateReusedAcrossBatch()
    Returns YES if the same state is used for every operation in a batch. If NO, each image in a MPSImageBatch will need a corresponding (and different) state to go with it.
boolean isStateModified()
    Returns true if the -encode call modifies the state object it accepts.
static boolean isSubclassOfClass(org.moe.natj.objc.Class aClass)
static NSSet<java.lang.String> keyPathsForValuesAffectingValueForKey(java.lang.String key)
static java.lang.Object new_objc()
MPSNNPadding padding()
    [@property] padding: the padding method used by the filter. This influences how strideInPixelsX/Y should be interpreted.
long primaryDilationRateX()
    [@property] primaryDilationRateX: stride in source coordinates from one kernel tap to the next in the X dimension.
long primaryDilationRateY()
    [@property] primaryDilationRateY: stride in source coordinates from one kernel tap to the next in the Y dimension.
long primaryEdgeMode()
    [@property] primaryEdgeMode: the MPSImageEdgeMode to use when texture reads stray off the edge of the primary source image.
long primaryKernelHeight()
    [@property] primaryKernelHeight: the height of the MPSCNNBinaryKernel filter window, the vertical diameter of the region read by the filter for each result pixel.
long primaryKernelWidth()
    [@property] primaryKernelWidth: the width of the MPSCNNBinaryKernel filter window, the horizontal diameter of the region read by the filter for each result pixel.
MPSOffset primaryOffset()
    [@property] primaryOffset: the position of the destination clip rectangle origin relative to the primary source buffer.
long primarySourceFeatureChannelMaxCount()
    [@property] primarySourceFeatureChannelMaxCount: the maximum number of channels in the primary source MPSImage to use.
long primarySourceFeatureChannelOffset()
    [@property] primarySourceFeatureChannelOffset: the number of channels in the primary source MPSImage to skip before reading the input.
long primaryStrideInPixelsX()
    [@property] primaryStrideInPixelsX: the downsampling (or upsampling, if a backwards filter) factor in the horizontal dimension for the primary source image.
long primaryStrideInPixelsY()
    [@property] primaryStrideInPixelsY: the downsampling (or upsampling, if a backwards filter) factor in the vertical dimension for the primary source image.
static boolean resolveClassMethod(org.moe.natj.objc.SEL sel)
static boolean resolveInstanceMethod(org.moe.natj.objc.SEL sel)
MPSState resultStateForPrimaryImageSecondaryImageSourceStatesDestinationImage(MPSImage primaryImage, MPSImage secondaryImage, NSArray<? extends MPSState> sourceStates, MPSImage destinationImage)
    Allocate a MPSState (subclass) to hold the results from a -encodeBatchToCommandBuffer... operation.
long secondaryDilationRateX()
    [@property] secondaryDilationRateX: stride in source coordinates from one kernel tap to the next in the X dimension.
long secondaryDilationRateY()
    [@property] secondaryDilationRateY: stride in source coordinates from one kernel tap to the next in the Y dimension.
long secondaryEdgeMode()
    [@property] secondaryEdgeMode: the MPSImageEdgeMode to use when texture reads stray off the edge of the secondary source image.
long secondaryKernelHeight()
    [@property] secondaryKernelHeight: the height of the MPSCNNBinaryKernel filter window for the secondary image source.
long secondaryKernelWidth()
    [@property] secondaryKernelWidth: the width of the MPSCNNBinaryKernel filter window for the secondary image source.
MPSOffset secondaryOffset()
    [@property] secondaryOffset: the position of the destination clip rectangle origin relative to the secondary source buffer.
long secondarySourceFeatureChannelMaxCount()
    [@property] secondarySourceFeatureChannelMaxCount: the maximum number of channels in the secondary source MPSImage to use.
long secondarySourceFeatureChannelOffset()
    [@property] secondarySourceFeatureChannelOffset: the number of channels in the secondary source MPSImage to skip before reading the input.
long secondaryStrideInPixelsX()
    [@property] secondaryStrideInPixelsX: the downsampling (or upsampling, if a backwards filter) factor in the horizontal dimension for the secondary source image.
long secondaryStrideInPixelsY()
    [@property] secondaryStrideInPixelsY: the downsampling (or upsampling, if a backwards filter) factor in the vertical dimension for the secondary source image.
void setClipRect(MTLRegion value)
    Setter for the clipRect property.
void setDestinationFeatureChannelOffset(long value)
    Setter for the destinationFeatureChannelOffset property.
void setDestinationImageAllocator(MPSImageAllocator value)
    Setter for the destinationImageAllocator property.
void setPadding(MPSNNPadding value)
    Setter for the padding property.
void setPrimaryEdgeMode(long value)
    Setter for the primaryEdgeMode property.
void setPrimaryOffset(MPSOffset value)
    Setter for the primaryOffset property.
void setPrimarySourceFeatureChannelMaxCount(long value)
    Setter for the primarySourceFeatureChannelMaxCount property.
void setPrimarySourceFeatureChannelOffset(long value)
    Setter for the primarySourceFeatureChannelOffset property.
void setPrimaryStrideInPixelsX(long value)
    Setter for the primaryStrideInPixelsX property.
void setPrimaryStrideInPixelsY(long value)
    Setter for the primaryStrideInPixelsY property.
void setSecondaryEdgeMode(long value)
    Setter for the secondaryEdgeMode property.
void setSecondaryOffset(MPSOffset value)
    Setter for the secondaryOffset property.
void setSecondarySourceFeatureChannelMaxCount(long value)
    Setter for the secondarySourceFeatureChannelMaxCount property.
void setSecondarySourceFeatureChannelOffset(long value)
    Setter for the secondarySourceFeatureChannelOffset property.
void setSecondaryStrideInPixelsX(long value)
    Setter for the secondaryStrideInPixelsX property.
void setSecondaryStrideInPixelsY(long value)
    Setter for the secondaryStrideInPixelsY property.
static void setVersion_static(long aVersion)
static org.moe.natj.objc.Class superclass_static()
static boolean supportsSecureCoding()
MPSState temporaryResultStateForCommandBufferPrimaryImageSecondaryImageSourceStatesDestinationImage(MTLCommandBuffer commandBuffer, MPSImage primaryImage, MPSImage secondaryImage, NSArray<? extends MPSState> sourceStates, MPSImage destinationImage)
    Allocate a temporary MPSState (subclass) to hold the results from a -encodeBatchToCommandBuffer... operation.
static long version_static()
-
Methods inherited from class apple.metalperformanceshaders.MPSKernel
copyWithZone, copyWithZoneDevice, device, encodeWithCoder, label, options, setLabel, setOptions
-
Methods inherited from class apple.NSObject
accessibilityActivate, accessibilityActivationPoint, accessibilityAssistiveTechnologyFocusedIdentifiers, accessibilityAttributedHint, accessibilityAttributedLabel, accessibilityAttributedUserInputLabels, accessibilityAttributedValue, accessibilityContainerType, accessibilityCustomActions, accessibilityCustomRotors, accessibilityDecrement, accessibilityDragSourceDescriptors, accessibilityDropPointDescriptors, accessibilityElementAtIndex, accessibilityElementCount, accessibilityElementDidBecomeFocused, accessibilityElementDidLoseFocus, accessibilityElementIsFocused, accessibilityElements, accessibilityElementsHidden, accessibilityFrame, accessibilityHint, accessibilityIncrement, accessibilityLabel, accessibilityLanguage, accessibilityNavigationStyle, accessibilityPath, accessibilityPerformEscape, accessibilityPerformMagicTap, accessibilityRespondsToUserInteraction, accessibilityScroll, accessibilityTextualContext, accessibilityTraits, accessibilityUserInputLabels, accessibilityValue, accessibilityViewIsModal, addObserverForKeyPathOptionsContext, attemptRecoveryFromErrorOptionIndex, attemptRecoveryFromErrorOptionIndexDelegateDidRecoverSelectorContextInfo, autoContentAccessingProxy, awakeAfterUsingCoder, awakeFromNib, class_objc, classForCoder, classForKeyedArchiver, copy, dealloc, debugDescription, description, dictionaryWithValuesForKeys, didChangeValueForKey, didChangeValueForKeyWithSetMutationUsingObjects, didChangeValuesAtIndexesForKey, doesNotRecognizeSelector, fileManagerShouldProceedAfterError, fileManagerWillProcessPath, finalize_objc, forwardingTargetForSelector, forwardInvocation, hash, indexOfAccessibilityElement, isAccessibilityElement, isEqual, isKindOfClass, isMemberOfClass, isProxy, methodForSelector, methodSignatureForSelector, mutableArrayValueForKey, mutableArrayValueForKeyPath, mutableCopy, mutableOrderedSetValueForKey, mutableOrderedSetValueForKeyPath, mutableSetValueForKey, mutableSetValueForKeyPath, observationInfo, 
observeValueForKeyPathOfObjectChangeContext, performSelector, performSelectorInBackgroundWithObject, performSelectorOnMainThreadWithObjectWaitUntilDone, performSelectorOnMainThreadWithObjectWaitUntilDoneModes, performSelectorOnThreadWithObjectWaitUntilDone, performSelectorOnThreadWithObjectWaitUntilDoneModes, performSelectorWithObject, performSelectorWithObjectAfterDelay, performSelectorWithObjectAfterDelayInModes, performSelectorWithObjectWithObject, prepareForInterfaceBuilder, provideImageDataBytesPerRowOrigin_Size_UserInfo, removeObserverForKeyPath, removeObserverForKeyPathContext, replacementObjectForCoder, replacementObjectForKeyedArchiver, respondsToSelector, self, setAccessibilityActivationPoint, setAccessibilityAttributedHint, setAccessibilityAttributedLabel, setAccessibilityAttributedUserInputLabels, setAccessibilityAttributedValue, setAccessibilityContainerType, setAccessibilityCustomActions, setAccessibilityCustomRotors, setAccessibilityDragSourceDescriptors, setAccessibilityDropPointDescriptors, setAccessibilityElements, setAccessibilityElementsHidden, setAccessibilityFrame, setAccessibilityHint, setAccessibilityLabel, setAccessibilityLanguage, setAccessibilityNavigationStyle, setAccessibilityPath, setAccessibilityRespondsToUserInteraction, setAccessibilityTextualContext, setAccessibilityTraits, setAccessibilityUserInputLabels, setAccessibilityValue, setAccessibilityViewIsModal, setIsAccessibilityElement, setNilValueForKey, setObservationInfo, setShouldGroupAccessibilityChildren, setValueForKey, setValueForKeyPath, setValueForUndefinedKey, setValuesForKeysWithDictionary, shouldGroupAccessibilityChildren, superclass, validateValueForKeyError, validateValueForKeyPathError, valueForKey, valueForKeyPath, valueForUndefinedKey, willChangeValueForKey, willChangeValueForKeyWithSetMutationUsingObjects, willChangeValuesAtIndexesForKey
-
-
-
-
Method Detail
-
accessInstanceVariablesDirectly
public static boolean accessInstanceVariablesDirectly()
-
alloc
public static MPSCNNBinaryKernel alloc()
-
allocWithZone
public static java.lang.Object allocWithZone(org.moe.natj.general.ptr.VoidPtr zone)
-
automaticallyNotifiesObserversForKey
public static boolean automaticallyNotifiesObserversForKey(java.lang.String key)
-
cancelPreviousPerformRequestsWithTarget
public static void cancelPreviousPerformRequestsWithTarget(java.lang.Object aTarget)
-
cancelPreviousPerformRequestsWithTargetSelectorObject
public static void cancelPreviousPerformRequestsWithTargetSelectorObject(java.lang.Object aTarget, org.moe.natj.objc.SEL aSelector, java.lang.Object anArgument)
-
classFallbacksForKeyedArchiver
public static NSArray<java.lang.String> classFallbacksForKeyedArchiver()
-
classForKeyedUnarchiver
public static org.moe.natj.objc.Class classForKeyedUnarchiver()
-
clipRect
public MTLRegion clipRect()
[@property] clipRect An optional clip rectangle to use when writing data. Only the pixels in the rectangle will be overwritten. A MTLRegion that indicates which part of the destination to overwrite. If the clipRect does not lie completely within the destination image, the intersection between clip rectangle and destination bounds is used. Default: MPSRectNoClip (MPSKernel::MPSRectNoClip) indicating the entire image. clipRect.origin.z is the index of starting destination image in batch processing mode. clipRect.size.depth is the number of images to process in batch processing mode. See Also: @ref subsubsection_clipRect
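The intersection rule described above can be sketched in plain Java. `effectiveRegion` is a hypothetical helper using bare coordinates rather than the real MTLRegion struct; it illustrates the documented behavior that the effective write region is the intersection of the clip rectangle and the destination bounds:

```java
// Hypothetical plain-Java stand-in for MTLRegion's 2D footprint.
// Illustrates the documented rule: if the clipRect does not lie completely
// within the destination image, only the intersection is written.
public class ClipRectSketch {
    // Returns {x, y, width, height} of the clipped write region.
    static long[] effectiveRegion(long clipX, long clipY, long clipW, long clipH,
                                  long destW, long destH) {
        long x0 = Math.max(clipX, 0);
        long y0 = Math.max(clipY, 0);
        long x1 = Math.min(clipX + clipW, destW);
        long y1 = Math.min(clipY + clipH, destH);
        return new long[]{x0, y0, Math.max(x1 - x0, 0), Math.max(y1 - y0, 0)};
    }
}
```

For example, a 100x100 clip rectangle at (8, 8) against a 64x64 destination is trimmed to a 56x56 region.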
-
debugDescription_static
public static java.lang.String debugDescription_static()
-
description_static
public static java.lang.String description_static()
-
destinationFeatureChannelOffset
public long destinationFeatureChannelOffset()
[@property] destinationFeatureChannelOffset The number of channels in the destination MPSImage to skip before writing output. This is the starting offset into the destination image in the feature channel dimension at which destination data is written. This allows an application to pass a subset of all the channels in a MPSImage as the output of a MPSKernel. E.g., suppose a MPSImage has 24 channels and a MPSKernel outputs 8 channels. If we want channels 8 to 15 of this MPSImage to be used as output, we can set destinationFeatureChannelOffset = 8. Note that this offset applies independently to each image when the MPSImage is a container for multiple images and the MPSCNNKernel is processing multiple images (clipRect.size.depth > 1). The default value is 0, and any value specified shall be a multiple of 4. If the MPSKernel outputs N channels, the destination image MUST have at least destinationFeatureChannelOffset + N channels. Using a destination image with an insufficient number of feature channels results in an error. E.g., if the MPSCNNConvolution outputs 32 channels and the destination has 64 channels, then it is an error to set destinationFeatureChannelOffset > 32.
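The constraints above (the offset must be a multiple of 4, and offset plus the kernel's output channel count must fit within the destination's feature channels) can be checked with a small plain-Java sketch; `isValidOffset` is a hypothetical helper, not part of the bindings:

```java
// Checks the two documented constraints on destinationFeatureChannelOffset:
// it must be a multiple of 4, and offset + kernel output channels must
// fit within the destination image's feature channel count.
public class ChannelOffsetCheck {
    static boolean isValidOffset(long offset, long kernelOutputChannels, long destChannels) {
        return offset % 4 == 0 && offset + kernelOutputChannels <= destChannels;
    }
}
```

Using the doc's example: a 24-channel destination with an 8-channel kernel accepts offset 8 (channels 8 to 15), while a 64-channel destination with a 32-channel kernel rejects any offset above 32.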
-
destinationImageAllocator
public MPSImageAllocator destinationImageAllocator()
Method to allocate the result image for -encodeToCommandBuffer:sourceImage:. Default: MPSTemporaryImage.defaultAllocator.
-
encodeToCommandBufferPrimaryImageSecondaryImage
public MPSImage encodeToCommandBufferPrimaryImageSecondaryImage(MTLCommandBuffer commandBuffer, MPSImage primaryImage, MPSImage secondaryImage)
Encode a MPSCNNKernel into a command buffer. Create a texture to hold the result and return it. In the first iteration of this method, encodeToCommandBuffer:sourceImage:destinationImage:, some work was left for the developer to do in the form of correctly setting the offset property and sizing the result buffer. With the introduction of the padding policy (see the padding property), the filter can do this work itself. If you would like some input into what sort of MPSImage is used (e.g. temporary vs. regular), what size it is, or where it is allocated, you may set the destinationImageAllocator to allocate the image yourself. This method uses the MPSNNPadding padding property to figure out how to size the result image and to set the offset property. See the discussion in MPSNeuralNetworkTypes.h.
Parameters:
commandBuffer - The command buffer
primaryImage - A MPSImage to use as the primary source image for the filter.
secondaryImage - A MPSImage to use as the secondary source image for the filter.
Returns:
A MPSImage or MPSTemporaryImage allocated per the destinationImageAllocator containing the output of the graph. The returned image will be automatically released when the command buffer completes. If you want to keep it around for longer, retain the image. (ARC will do this for you if you use it later.)
-
encodeToCommandBufferPrimaryImageSecondaryImageDestinationImage
public void encodeToCommandBufferPrimaryImageSecondaryImageDestinationImage(MTLCommandBuffer commandBuffer, MPSImage primaryImage, MPSImage secondaryImage, MPSImage destinationImage)
Encode a MPSCNNKernel into a command buffer. The operation proceeds out-of-place. This is the older style of encode, which reads the offset, doesn't change it, and ignores the padding method.
Parameters:
commandBuffer - A valid MTLCommandBuffer to receive the encoded filter
primaryImage - A valid MPSImage object containing the primary source image.
secondaryImage - A valid MPSImage object containing the secondary source image.
destinationImage - A valid MPSImage to be overwritten by the result image. destinationImage may not alias primaryImage or secondaryImage.
-
hash_static
public static long hash_static()
-
init
public MPSCNNBinaryKernel init()
-
initWithCoder
public MPSCNNBinaryKernel initWithCoder(NSCoder aDecoder)
Description copied from interface: NSCoding
NS_DESIGNATED_INITIALIZER
Specified by: initWithCoder in interface NSCoding
Overrides: initWithCoder in class MPSKernel
-
initWithCoderDevice
public MPSCNNBinaryKernel initWithCoderDevice(NSCoder aDecoder, java.lang.Object device)
NSSecureCoding compatibility. While the standard NSSecureCoding/NSCoding method -initWithCoder: should work, the file can't know which device your data is allocated on, so the device has to be guessed and may be guessed incorrectly. To avoid that problem, use initWithCoder:device: instead.
Overrides: initWithCoderDevice in class MPSKernel
Parameters:
aDecoder - The NSCoder subclass with your serialized MPSKernel
device - The MTLDevice on which to make the MPSKernel
Returns:
A new MPSKernel object, or nil on failure.
-
initWithDevice
public MPSCNNBinaryKernel initWithDevice(java.lang.Object device)
Standard init with default properties per filter type.
Overrides: initWithDevice in class MPSKernel
Parameters:
device - The device that the filter will be used on. May not be NULL.
Returns:
A pointer to the newly initialized object. This will fail, returning nil, if the device is not supported. Devices must be MTLFeatureSet_iOS_GPUFamily2_v1 or later.
-
instanceMethodForSelector
public static NSObject.Function_instanceMethodForSelector_ret instanceMethodForSelector(org.moe.natj.objc.SEL aSelector)
-
instanceMethodSignatureForSelector
public static NSMethodSignature instanceMethodSignatureForSelector(org.moe.natj.objc.SEL aSelector)
-
instancesRespondToSelector
public static boolean instancesRespondToSelector(org.moe.natj.objc.SEL aSelector)
-
isBackwards
public boolean isBackwards()
[@property] isBackwards YES if the filter operates backwards. This influences how strideInPixelsX/Y should be interpreted.
-
isSubclassOfClass
public static boolean isSubclassOfClass(org.moe.natj.objc.Class aClass)
-
keyPathsForValuesAffectingValueForKey
public static NSSet<java.lang.String> keyPathsForValuesAffectingValueForKey(java.lang.String key)
-
new_objc
public static java.lang.Object new_objc()
-
padding
public MPSNNPadding padding()
[@property] padding The padding method used by the filter. This influences how strideInPixelsX/Y should be interpreted. Default: MPSNNPaddingMethodAlignCentered | MPSNNPaddingMethodAddRemainderToTopLeft | MPSNNPaddingMethodSizeSame. Some object types (e.g. MPSCNNFullyConnected) may override this default with something appropriate to their operation.
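Under the default MPSNNPaddingMethodSizeSame, the destination spans roughly ceil(sourceSize / stride) pixels in each dimension. A plain-Java sketch of that ceiling division (an illustration of the documented default, not MPS's internal computation; `sizeSameDestination` is a hypothetical helper):

```java
// Sketch of the "size same" rule under the default MPSNNPaddingMethodSizeSame:
// the destination covers ceil(sourceSize / stride) pixels.
public class PaddingSketch {
    static long sizeSameDestination(long sourceSize, long stride) {
        return (sourceSize + stride - 1) / stride; // ceiling division
    }
}
```

For example, a 7-pixel source at stride 2 yields a 4-pixel destination; under the default flags, the odd remainder is assigned toward the top left.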
-
primaryEdgeMode
public long primaryEdgeMode()
[@property] primaryEdgeMode The MPSImageEdgeMode to use when texture reads stray off the edge of the primary source image. Most MPSKernel objects can read off the edge of the source image. This can happen because of a negative offset property, because offset + clipRect.size is larger than the source image, or because the filter looks at neighboring pixels, such as a convolution filter. Default: MPSImageEdgeModeZero. See Also: @ref subsubsection_edgemode
-
primaryOffset
public MPSOffset primaryOffset()
[@property] primaryOffset The position of the destination clip rectangle origin relative to the primary source buffer. The offset is defined to be the position of clipRect.origin in source coordinates. Default: {0,0,0}, indicating that the top left corners of the clipRect and primary source image align. offset.z is the index of starting source image in batch processing mode. See Also: @ref subsubsection_mpsoffset
-
primaryStrideInPixelsX
public long primaryStrideInPixelsX()
[@property] primaryStrideInPixelsX The downsampling (or upsampling, if a backwards filter) factor in the horizontal dimension for the primary source image. If the filter does not do up- or downsampling, 1 is returned.
-
primaryStrideInPixelsY
public long primaryStrideInPixelsY()
[@property] primaryStrideInPixelsY The downsampling (or upsampling, if a backwards filter) factor in the vertical dimension for the primary source image. If the filter does not do up- or downsampling, 1 is returned.
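Together with primaryOffset, the stride properties determine which primary-source pixel a given destination pixel reads: a common convention is sourceCoord = offset + destCoord * stride. A plain-Java sketch of that mapping (`sourceCoord` is a hypothetical helper; the exact read pattern depends on the specific kernel and its padding policy):

```java
// Maps a destination pixel coordinate back to the primary-source coordinate
// it reads, using the documented meanings of primaryOffset (clipRect.origin
// in source coordinates) and primaryStrideInPixelsX/Y (sampling factor).
public class SourceCoordinateSketch {
    static long sourceCoord(long offset, long stride, long destCoord) {
        return offset + destCoord * stride;
    }
}
```

With stride 1 (the no-resampling default) and offset 0, destination and source coordinates coincide; a stride of 2 reads every second source pixel.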
-
resolveClassMethod
public static boolean resolveClassMethod(org.moe.natj.objc.SEL sel)
-
resolveInstanceMethod
public static boolean resolveInstanceMethod(org.moe.natj.objc.SEL sel)
-
secondaryEdgeMode
public long secondaryEdgeMode()
[@property] secondaryEdgeMode The MPSImageEdgeMode to use when texture reads stray off the edge of the secondary source image. Most MPSKernel objects can read off the edge of the source image. This can happen because of a negative offset property, because offset + clipRect.size is larger than the source image, or because the filter looks at neighboring pixels, such as a convolution filter. Default: MPSImageEdgeModeZero. See Also: @ref subsubsection_edgemode
-
secondaryOffset
public MPSOffset secondaryOffset()
[@property] secondaryOffset The position of the destination clip rectangle origin relative to the secondary source buffer. The offset is defined to be the position of clipRect.origin in source coordinates. Default: {0,0,0}, indicating that the top left corners of the clipRect and secondary source image align. offset.z is the index of starting source image in batch processing mode. See Also: @ref subsubsection_mpsoffset
-
secondaryStrideInPixelsX
public long secondaryStrideInPixelsX()
[@property] secondaryStrideInPixelsX The downsampling (or upsampling, if a backwards filter) factor in the horizontal dimension for the secondary source image. If the filter does not do up- or downsampling, 1 is returned.
-
secondaryStrideInPixelsY
public long secondaryStrideInPixelsY()
[@property] secondaryStrideInPixelsY The downsampling (or upsampling, if a backwards filter) factor in the vertical dimension for the secondary source image. If the filter does not do up- or downsampling, 1 is returned.
-
setClipRect
public void setClipRect(MTLRegion value)
[@property] clipRect An optional clip rectangle to use when writing data. Only the pixels in the rectangle will be overwritten. A MTLRegion that indicates which part of the destination to overwrite. If the clipRect does not lie completely within the destination image, the intersection between clip rectangle and destination bounds is used. Default: MPSRectNoClip (MPSKernel::MPSRectNoClip) indicating the entire image. clipRect.origin.z is the index of starting destination image in batch processing mode. clipRect.size.depth is the number of images to process in batch processing mode. See Also: @ref subsubsection_clipRect
-
setDestinationFeatureChannelOffset
public void setDestinationFeatureChannelOffset(long value)
[@property] destinationFeatureChannelOffset The number of channels in the destination MPSImage to skip before writing output. This is the starting offset into the destination image in the feature channel dimension at which destination data is written. This allows an application to pass a subset of all the channels in a MPSImage as the output of a MPSKernel. E.g., suppose a MPSImage has 24 channels and a MPSKernel outputs 8 channels. If we want channels 8 to 15 of this MPSImage to be used as output, we can set destinationFeatureChannelOffset = 8. Note that this offset applies independently to each image when the MPSImage is a container for multiple images and the MPSCNNKernel is processing multiple images (clipRect.size.depth > 1). The default value is 0, and any value specified shall be a multiple of 4. If the MPSKernel outputs N channels, the destination image MUST have at least destinationFeatureChannelOffset + N channels. Using a destination image with an insufficient number of feature channels results in an error. E.g., if the MPSCNNConvolution outputs 32 channels and the destination has 64 channels, then it is an error to set destinationFeatureChannelOffset > 32.
-
setDestinationImageAllocator
public void setDestinationImageAllocator(MPSImageAllocator value)
Method to allocate the result image for -encodeToCommandBuffer:sourceImage:. Default: MPSTemporaryImage.defaultAllocator.
-
setPadding
public void setPadding(MPSNNPadding value)
[@property] padding The padding method used by the filter. This influences how strideInPixelsX/Y should be interpreted. Default: MPSNNPaddingMethodAlignCentered | MPSNNPaddingMethodAddRemainderToTopLeft | MPSNNPaddingMethodSizeSame. Some object types (e.g. MPSCNNFullyConnected) may override this default with something appropriate to their operation.
-
setPrimaryEdgeMode
public void setPrimaryEdgeMode(long value)
[@property] primaryEdgeMode The MPSImageEdgeMode to use when texture reads stray off the edge of the primary source image. Most MPSKernel objects can read off the edge of the source image. This can happen because of a negative offset property, because offset + clipRect.size is larger than the source image, or because the filter looks at neighboring pixels, such as a convolution filter. Default: MPSImageEdgeModeZero. See Also: @ref subsubsection_edgemode
-
setPrimaryOffset
public void setPrimaryOffset(MPSOffset value)
[@property] primaryOffset The position of the destination clip rectangle origin relative to the primary source buffer. The offset is defined to be the position of clipRect.origin in source coordinates. Default: {0,0,0}, indicating that the top left corners of the clipRect and primary source image align. offset.z is the index of starting source image in batch processing mode. See Also: @ref subsubsection_mpsoffset
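The offset semantics above can be made concrete with a small coordinate-mapping sketch. For a unit-stride filter, the primaryOffset places clipRect.origin in primary-source coordinates, so a destination pixel d pixels to the right of the clip origin reads from source column offset.x + d. The helper below is illustrative only (hypothetical names, stride 1 assumed), not MPS API:

```java
// Illustrative only (not MPS API): maps a destination x coordinate to the
// primary-source x coordinate it reads from, for a unit-stride filter.
// The clip origin maps to offsetX; other pixels follow by translation.
public class OffsetMapping {
    static long sourceX(long offsetX, long clipOriginX, long destX) {
        return offsetX + (destX - clipOriginX);
    }

    public static void main(String[] args) {
        // Default offset {0,0,0} with clip origin 0: top-left corners align.
        System.out.println(sourceX(0, 0, 5)); // 5
        // offset.x = 2 shifts the read window right by two source pixels.
        System.out.println(sourceX(2, 0, 5)); // 7
    }
}
```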
-
setSecondaryEdgeMode
public void setSecondaryEdgeMode(long value)
[@property] secondaryEdgeMode The MPSImageEdgeMode to use when texture reads stray off the edge of the secondary source image. Most MPSKernel objects can read off the edge of the source image. This can happen because of a negative offset property, because the offset + clipRect.size is larger than the source image, or because the filter looks at neighboring pixels, such as a Convolution filter. Default: MPSImageEdgeModeZero. See Also: @ref subsubsection_edgemode
-
setSecondaryOffset
public void setSecondaryOffset(MPSOffset value)
[@property] secondaryOffset The position of the destination clip rectangle origin relative to the secondary source buffer. The offset is defined to be the position of clipRect.origin in source coordinates. Default: {0,0,0}, indicating that the top left corners of the clipRect and secondary source image align. offset.z is the index of starting source image in batch processing mode. See Also: @ref subsubsection_mpsoffset
-
setVersion_static
public static void setVersion_static(long aVersion)
-
superclass_static
public static org.moe.natj.objc.Class superclass_static()
-
supportsSecureCoding
public static boolean supportsSecureCoding()
-
_supportsSecureCoding
public boolean _supportsSecureCoding()
Description copied from interface: NSSecureCoding. This property must return YES on all classes that allow secure coding. Subclasses of classes that adopt NSSecureCoding and override initWithCoder: must also override this method and return YES. The Secure Coding Guide should be consulted when writing methods that decode data.
- Specified by: _supportsSecureCoding in interface NSSecureCoding
- Overrides: _supportsSecureCoding in class MPSKernel
-
version_static
public static long version_static()
-
appendBatchBarrier
public boolean appendBatchBarrier()
Returns YES if the filter must be run over the entire batch before its results may be considered complete. The MPSNNGraph may split batches into sub-batches to save memory. However, some filters, like batch statistics calculations, need to operate over the entire batch to calculate a valid result, in this case the mean and variance per channel over the set of images. In such cases, the accumulated result is commonly stored in a MPSState containing a MTLBuffer. (MTLTextures may not be able to be read from and written to in the same filter on some devices.) -isResultStateReusedAcrossBatch is set to YES, so that the state is allocated once and passed in for each sub-batch, and the filter accumulates its results into it one sub-batch at a time. Note that sub-batches may frequently be as small as 1. Default: NO
-
destinationImageDescriptorForSourceImagesSourceStates
public MPSImageDescriptor destinationImageDescriptorForSourceImagesSourceStates(NSArray<? extends MPSImage> sourceImages, NSArray<? extends MPSState> sourceStates)
Get a suggested destination image descriptor for a source image. Your application is certainly free to pass in any destinationImage it likes to encodeToCommandBuffer:sourceImage:destinationImage:, within reason. This is the basic design for iOS 10, so this method is not required. However, calculating the MPSImage size and MPSCNNBinaryKernel properties for each filter can be tedious and complicated work, so this method is made available to automate the process. The application may modify the properties of the descriptor before a MPSImage is made from it, so long as the choice is sensible for the kernel in question. Please see individual kernel descriptions for restrictions. The expected timeline for use is as follows:
1) This method is called:
a) The default MPS padding calculation is applied. It uses the MPSNNPaddingMethod of the .padding property to provide a consistent addressing scheme over the graph. It creates the MPSImageDescriptor and adjusts the .offset property of the MPSNNKernel. When using a MPSNNGraph, the padding is set using the MPSNNFilterNode as a proxy.
b) This method may be overridden by a MPSCNNBinaryKernel subclass to achieve any customization appropriate to the object type.
c) Source states are then applied in order. These may modify the descriptor and may update other object properties. See: -destinationImageDescriptorForSourceImages:sourceStates:forKernel:suggestedDescriptor: This is the typical way in which MPS may attempt to influence the operation of its kernels.
d) If the .padding property has a custom padding policy method of the same name, it is called. Similarly, it may also adjust the descriptor and any MPSCNNBinaryKernel properties. This is the typical way in which your application may attempt to influence the operation of the MPS kernels.
2) A result is returned from this method and the caller may further adjust the descriptor and kernel properties directly.
3) The caller uses the descriptor to make a new MPSImage to use as the destination image for the -encode call in step 5.
4) The caller calls -resultStateForSourceImage:sourceStates:destinationImage: to make any result states needed for the kernel. If there isn't one, it will return nil. A variant is available to return a temporary state instead.
5) An -encode method is called to encode the kernel.
The entire process 1-5 is more simply achieved by just calling an -encode... method that returns a MPSImage out the left hand side of the method. Simpler still, use the MPSNNGraph to coordinate the entire process from end to end. Opportunities to influence the process are of course reduced, as (2) is no longer possible with either method. Your application may opt to use the five step method if it requires greater customization as described, or if it would like to estimate storage in advance based on the sum of MPSImageDescriptors before processing a graph. Storage estimation is done by using the MPSImageDescriptor to create a MPSImage (without passing it a texture), and then calling -resourceSize. As long as the MPSImage is not used in an encode call and the .texture property is not invoked, the underlying MTLTexture is not created. No destination state or destination image is provided as an argument to this function because it is expected they will be made / configured after this is called. This method is expected to auto-configure important object properties that may be needed in the ensuing destination image and state creation steps.
- Parameters:
sourceImages - An array of source images that will be passed into the -encode call. Since MPSCNNBinaryKernel is a binary kernel, it is an array of length 2. sourceStates - An optional array of source states that will be passed into the -encode call.
- Returns:
- an image descriptor allocated on the autorelease pool
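The five-step flow above can be sketched in Java-flavored pseudocode. This is an illustrative outline only, not verified runnable code; the method names follow the MOE bindings documented in this reference, and `kernel`, `device`, `cb`, and the source images/states are assumed to already exist:

```
// Illustrative outline only; assumes a configured MPSCNNBinaryKernel
// subclass `kernel`, a MTLDevice `device`, and a MTLCommandBuffer `cb`.

// 1) Ask the kernel for a suggested descriptor (padding policy runs here).
MPSImageDescriptor desc =
    kernel.destinationImageDescriptorForSourceImagesSourceStates(sourceImages, sourceStates);

// 2) Optionally adjust the descriptor and kernel properties here.

// 3) Make the destination image from the descriptor.
MPSImage destination = makeImage(device, desc); // e.g. via an MPSImage initializer

// 4) Make any result state the kernel needs (may be null).
MPSState state = kernel.resultStateForPrimaryImageSecondaryImageSourceStatesDestinationImage(
    primaryImage, secondaryImage, sourceStates, destination);

// 5) Encode the kernel into the command buffer with the images and state.
```

Steps 1-5 collapse into a single call if you instead use an -encode... variant that returns the destination image, at the cost of losing step (2).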
-
encodeToCommandBufferPrimaryImageSecondaryImageDestinationStateDestinationStateIsTemporary
public MPSImage encodeToCommandBufferPrimaryImageSecondaryImageDestinationStateDestinationStateIsTemporary(MTLCommandBuffer commandBuffer, MPSImage primaryImage, MPSImage secondaryImage, org.moe.natj.general.ptr.Ptr<MPSState> outState, boolean isTemporary)
Encode a MPSCNNKernel into a command buffer. Create a texture and state to hold the results and return them. In the first iteration of this method, encodeToCommandBuffer:sourceImage:destinationState:destinationImage:, some work was left for the developer to do in the form of correctly setting the offset property and sizing the result buffer. With the introduction of the padding policy (see the padding property) the filter can do this work itself. If you would like to have some input into what sort of MPSImage (e.g. temporary vs. regular), what size it is, or where it is allocated, you may set the destinationImageAllocator to allocate the image yourself. This method uses the MPSNNPadding padding property to figure out how to size the result image and to set the offset property. See the discussion in MPSNeuralNetworkTypes.h. All images in a batch must have MPSImage.numberOfImages = 1.
- Parameters:
commandBuffer - The command buffer. primaryImage - A MPSImage to use as the primary source image for the filter. secondaryImage - A MPSImage to use as the secondary source image for the filter. outState - The address of the location to write the pointer to the result state of the operation. isTemporary - YES if the outState should be a temporary object.
- Returns:
- A MPSImage or MPSTemporaryImage allocated per the destinationImageAllocator containing the output of the graph. The offset property will be adjusted to reflect the offset used during the encode. The returned image will be automatically released when the command buffer completes. If you want to keep it around for longer, retain the image. (ARC will do this for you if you use it later.)
-
encodingStorageSizeForPrimaryImageSecondaryImageSourceStatesDestinationImage
public long encodingStorageSizeForPrimaryImageSecondaryImageSourceStatesDestinationImage(MPSImage primaryImage, MPSImage secondaryImage, NSArray<? extends MPSState> sourceStates, MPSImage destinationImage)
The size of extra MPS heap storage allocated while the kernel is encoding. This is best effort and just describes things that are likely to end up on the MPS heap. It does not describe all allocation done by the -encode call. It is intended for use with high water calculations for MTLHeap sizing. Allocations are typically for temporary storage needed for multipass algorithms. This interface should not be used to detect multipass algorithms.
-
isResultStateReusedAcrossBatch
public boolean isResultStateReusedAcrossBatch()
Returns YES if the same state is used for every operation in a batch. If NO, then each image in a MPSImageBatch will need a corresponding (and different) state to go with it. Set to YES to avoid allocating redundant state in the case when the same state is used all the time. Default: NO
-
isStateModified
public boolean isStateModified()
Returns true if the -encode call modifies the state object it accepts.
-
primaryDilationRateX
public long primaryDilationRateX()
[@property] primaryDilationRateX Stride in source coordinates from one kernel tap to the next in the X dimension, as applied to the primary source image.
-
primaryDilationRateY
public long primaryDilationRateY()
[@property] primaryDilationRateY Stride in source coordinates from one kernel tap to the next in the Y dimension, as applied to the primary source image.
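A dilation rate greater than 1 spreads the kernel taps apart, enlarging the region of source pixels the filter touches. The span in one dimension is (kernelSize - 1) * dilationRate + 1, a standard property of dilated convolutions rather than anything specific to this class; the helper below is a hypothetical illustration, not MPS API:

```java
// Hypothetical helper (not MPS API): the span of source pixels touched by
// a dilated kernel in one dimension is (kernelSize - 1) * dilationRate + 1.
public class DilationSpan {
    static long span(long kernelSize, long dilationRate) {
        return (kernelSize - 1) * dilationRate + 1;
    }

    public static void main(String[] args) {
        System.out.println(span(3, 1)); // 3: ordinary 3-tap kernel
        System.out.println(span(3, 2)); // 5: taps land on every other pixel
    }
}
```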
-
primaryKernelHeight
public long primaryKernelHeight()
[@property] primaryKernelHeight The height of the MPSCNNBinaryKernel filter window. This is the vertical diameter of the region read by the filter for each result pixel. If the MPSCNNKernel does not have a filter window, then 1 will be returned.
-
primaryKernelWidth
public long primaryKernelWidth()
[@property] primaryKernelWidth The width of the MPSCNNBinaryKernel filter window. This is the horizontal diameter of the region read by the filter for each result pixel. If the MPSCNNKernel does not have a filter window, then 1 will be returned.
-
primarySourceFeatureChannelMaxCount
public long primarySourceFeatureChannelMaxCount()
[@property] primarySourceFeatureChannelMaxCount The maximum number of channels in the primary source MPSImage to use. Most filters can insert a slice operation into the filter for free. Use this to limit the size of the feature channel slice taken from the input image. If the value is too large, it is truncated to be the remaining size in the image after the sourceFeatureChannelOffset is taken into account. Default: ULONG_MAX
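The truncation rule above amounts to: the usable slice is the smaller of the requested maxCount and whatever remains after the channel offset. A hypothetical helper (not MPS API) makes this concrete:

```java
// Hypothetical helper (not MPS API) showing the documented truncation:
// the usable slice is capped at what remains after the channel offset.
public class ChannelSlice {
    static long effectiveChannels(long maxCount, long offset, long imageChannels) {
        long remaining = imageChannels - offset;
        return Math.min(maxCount, remaining);
    }

    public static void main(String[] args) {
        // 24-channel image, offset 8, maxCount 8: channels 8..15 are read.
        System.out.println(effectiveChannels(8, 8, 24));              // 8
        // A too-large maxCount (like the ULONG_MAX default) is truncated to 16.
        System.out.println(effectiveChannels(Long.MAX_VALUE, 8, 24)); // 16
    }
}
```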
-
primarySourceFeatureChannelOffset
public long primarySourceFeatureChannelOffset()
[@property] primarySourceFeatureChannelOffset The number of channels in the primary source MPSImage to skip before reading the input. This is the starting offset into the primary source image in the feature channel dimension at which source data is read. Unit: feature channels. This allows an application to read a subset of all the channels in a MPSImage as the input of a MPSKernel. E.g. suppose a MPSImage has 24 channels and a MPSKernel needs to read 8 channels. If we want channels 8 to 15 of this MPSImage to be used as input, we can set primarySourceFeatureChannelOffset = 8. Note that this offset applies independently to each image when the MPSImage is a container for multiple images and the MPSCNNKernel is processing multiple images (clipRect.size.depth > 1). The default value is 0, and any value specified shall be a multiple of 4. If the MPSKernel inputs N channels, the source image MUST have at least primarySourceFeatureChannelOffset + N channels. Using a source image with an insufficient number of feature channels will result in an error. E.g. if the MPSCNNConvolution inputs 32 channels and the source has 64 channels, then it is an error to set primarySourceFeatureChannelOffset > 32.
-
resultStateForPrimaryImageSecondaryImageSourceStatesDestinationImage
public MPSState resultStateForPrimaryImageSecondaryImageSourceStatesDestinationImage(MPSImage primaryImage, MPSImage secondaryImage, NSArray<? extends MPSState> sourceStates, MPSImage destinationImage)
Allocate a MPSState (subclass) to hold the results from a -encodeBatchToCommandBuffer... operation. A graph may need to allocate storage up front before executing. This may be necessary to avoid using too much memory and to manage large batches. The function should allocate a MPSState object (if any) that will be produced by an -encode call with the indicated sourceImages and sourceStates inputs. Though the states can be further adjusted in the ensuing -encode call, the states should be initialized with all important data and all MTLResource storage allocated. The data stored in the MTLResource need not be initialized, unless the ensuing -encode call expects it to be. The MTLDevice used by the result is derived from the source image. The padding policy will be applied to the filter before this is called to give it the chance to configure any properties like MPSCNNKernel.offset. CAUTION: the result state should be made after the kernel properties are configured for the -encode call that will write to the state, and after -destinationImageDescriptorForSourceImages:sourceStates: is called (if it is called). Otherwise, behavior is undefined. Please see the description of -[MPSCNNKernel resultStateForSourceImage:sourceStates:destinationImage:] for more. Default: returns nil
- Parameters:
primaryImage - The MPSImage consumed by the associated -encode call. secondaryImage - The MPSImage consumed by the associated -encode call. sourceStates - The list of MPSStates consumed by the associated -encode call, for a batch size of 1.
- Returns:
- The list of states produced by the -encode call for batch size of 1. When the batch size is not 1, this function will be called repeatedly unless -isResultStateReusedAcrossBatch returns YES. If -isResultStateReusedAcrossBatch returns YES, then it will be called once per batch and the MPSStateBatch array will contain MPSStateBatch.length references to the same object.
-
secondaryDilationRateX
public long secondaryDilationRateX()
[@property] secondaryDilationRateX Stride in source coordinates from one kernel tap to the next in the X dimension, as applied to the secondary source image.
-
secondaryDilationRateY
public long secondaryDilationRateY()
[@property] secondaryDilationRateY Stride in source coordinates from one kernel tap to the next in the Y dimension, as applied to the secondary source image.
-
secondaryKernelHeight
public long secondaryKernelHeight()
[@property] secondaryKernelHeight The height of the MPSCNNBinaryKernel filter window for the second image source. This is the vertical diameter of the region read by the filter for each result pixel. If the MPSCNNBinaryKernel does not have a filter window, then 1 will be returned.
-
secondaryKernelWidth
public long secondaryKernelWidth()
[@property] secondaryKernelWidth The width of the MPSCNNBinaryKernel filter window for the second image source. This is the horizontal diameter of the region read by the filter for each result pixel. If the MPSCNNBinaryKernel does not have a filter window, then 1 will be returned.
-
secondarySourceFeatureChannelMaxCount
public long secondarySourceFeatureChannelMaxCount()
[@property] secondarySourceFeatureChannelMaxCount The maximum number of channels in the secondary source MPSImage to use. Most filters can insert a slice operation into the filter for free. Use this to limit the size of the feature channel slice taken from the input image. If the value is too large, it is truncated to be the remaining size in the image after the sourceFeatureChannelOffset is taken into account. Default: ULONG_MAX
-
secondarySourceFeatureChannelOffset
public long secondarySourceFeatureChannelOffset()
[@property] secondarySourceFeatureChannelOffset The number of channels in the secondary source MPSImage to skip before reading the input. This is the starting offset into the secondary source image in the feature channel dimension at which source data is read. Unit: feature channels. This allows an application to read a subset of all the channels in a MPSImage as the input of a MPSKernel. E.g. suppose a MPSImage has 24 channels and a MPSKernel needs to read 8 channels. If we want channels 8 to 15 of this MPSImage to be used as input, we can set secondarySourceFeatureChannelOffset = 8. Note that this offset applies independently to each image when the MPSImage is a container for multiple images and the MPSCNNKernel is processing multiple images (clipRect.size.depth > 1). The default value is 0, and any value specified shall be a multiple of 4. If the MPSKernel inputs N channels, the source image MUST have at least secondarySourceFeatureChannelOffset + N channels. Using a source image with an insufficient number of feature channels will result in an error. E.g. if the MPSCNNConvolution inputs 32 channels and the source has 64 channels, then it is an error to set secondarySourceFeatureChannelOffset > 32.
-
setPrimarySourceFeatureChannelMaxCount
public void setPrimarySourceFeatureChannelMaxCount(long value)
[@property] primarySourceFeatureChannelMaxCount The maximum number of channels in the primary source MPSImage to use. Most filters can insert a slice operation into the filter for free. Use this to limit the size of the feature channel slice taken from the input image. If the value is too large, it is truncated to be the remaining size in the image after the sourceFeatureChannelOffset is taken into account. Default: ULONG_MAX
-
setPrimarySourceFeatureChannelOffset
public void setPrimarySourceFeatureChannelOffset(long value)
[@property] primarySourceFeatureChannelOffset The number of channels in the primary source MPSImage to skip before reading the input. This is the starting offset into the primary source image in the feature channel dimension at which source data is read. Unit: feature channels. This allows an application to read a subset of all the channels in a MPSImage as the input of a MPSKernel. E.g. suppose a MPSImage has 24 channels and a MPSKernel needs to read 8 channels. If we want channels 8 to 15 of this MPSImage to be used as input, we can set primarySourceFeatureChannelOffset = 8. Note that this offset applies independently to each image when the MPSImage is a container for multiple images and the MPSCNNKernel is processing multiple images (clipRect.size.depth > 1). The default value is 0, and any value specified shall be a multiple of 4. If the MPSKernel inputs N channels, the source image MUST have at least primarySourceFeatureChannelOffset + N channels. Using a source image with an insufficient number of feature channels will result in an error. E.g. if the MPSCNNConvolution inputs 32 channels and the source has 64 channels, then it is an error to set primarySourceFeatureChannelOffset > 32.
-
setPrimaryStrideInPixelsX
public void setPrimaryStrideInPixelsX(long value)
[@property] primaryStrideInPixelsX The downsampling (or upsampling if a backwards filter) factor in the horizontal dimension for the primary source image. If the filter does not do up or downsampling, 1 is returned.
-
setPrimaryStrideInPixelsY
public void setPrimaryStrideInPixelsY(long value)
[@property] primaryStrideInPixelsY The downsampling (or upsampling if a backwards filter) factor in the vertical dimension for the primary source image. If the filter does not do up or downsampling, 1 is returned.
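As a rough guide to what these stride factors do to image size: under the common "size same" style of padding, a stride-s filter downsamples a W-pixel dimension to about ceil(W / s). The exact destination size is decided by the kernel's padding policy, so treat the helper below as a hypothetical illustration of the ceiling arithmetic only, not as the MPS sizing rule:

```java
// Hypothetical helper (not MPS API): under a "size same" padding policy,
// a stride-s filter downsamples a W-pixel dimension to roughly ceil(W / s).
// The actual destination size is determined by the kernel's padding policy.
public class StrideSize {
    static long outputSize(long inputSize, long stride) {
        return (inputSize + stride - 1) / stride; // ceiling division
    }

    public static void main(String[] args) {
        System.out.println(outputSize(224, 2)); // 112
        System.out.println(outputSize(7, 2));   // 4
    }
}
```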
-
setSecondarySourceFeatureChannelMaxCount
public void setSecondarySourceFeatureChannelMaxCount(long value)
[@property] secondarySourceFeatureChannelMaxCount The maximum number of channels in the secondary source MPSImage to use. Most filters can insert a slice operation into the filter for free. Use this to limit the size of the feature channel slice taken from the input image. If the value is too large, it is truncated to be the remaining size in the image after the sourceFeatureChannelOffset is taken into account. Default: ULONG_MAX
-
setSecondarySourceFeatureChannelOffset
public void setSecondarySourceFeatureChannelOffset(long value)
[@property] secondarySourceFeatureChannelOffset The number of channels in the secondary source MPSImage to skip before reading the input. This is the starting offset into the secondary source image in the feature channel dimension at which source data is read. Unit: feature channels. This allows an application to read a subset of all the channels in a MPSImage as the input of a MPSKernel. E.g. suppose a MPSImage has 24 channels and a MPSKernel needs to read 8 channels. If we want channels 8 to 15 of this MPSImage to be used as input, we can set secondarySourceFeatureChannelOffset = 8. Note that this offset applies independently to each image when the MPSImage is a container for multiple images and the MPSCNNKernel is processing multiple images (clipRect.size.depth > 1). The default value is 0, and any value specified shall be a multiple of 4. If the MPSKernel inputs N channels, the source image MUST have at least secondarySourceFeatureChannelOffset + N channels. Using a source image with an insufficient number of feature channels will result in an error. E.g. if the MPSCNNConvolution inputs 32 channels and the source has 64 channels, then it is an error to set secondarySourceFeatureChannelOffset > 32.
-
setSecondaryStrideInPixelsX
public void setSecondaryStrideInPixelsX(long value)
[@property] secondaryStrideInPixelsX The downsampling (or upsampling if a backwards filter) factor in the horizontal dimension for the secondary source image. If the filter does not do up or downsampling, 1 is returned.
-
setSecondaryStrideInPixelsY
public void setSecondaryStrideInPixelsY(long value)
[@property] secondaryStrideInPixelsY The downsampling (or upsampling if a backwards filter) factor in the vertical dimension for the secondary source image. If the filter does not do up or downsampling, 1 is returned.
-
temporaryResultStateForCommandBufferPrimaryImageSecondaryImageSourceStatesDestinationImage
public MPSState temporaryResultStateForCommandBufferPrimaryImageSecondaryImageSourceStatesDestinationImage(MTLCommandBuffer commandBuffer, MPSImage primaryImage, MPSImage secondaryImage, NSArray<? extends MPSState> sourceStates, MPSImage destinationImage)
Allocate a temporary MPSState (subclass) to hold the results from a -encodeBatchToCommandBuffer... operation. A graph may need to allocate storage up front before executing. This may be necessary to avoid using too much memory and to manage large batches. The function should allocate any MPSState objects that will be produced by an -encode call with the indicated sourceImages and sourceStates inputs. Though the states can be further adjusted in the ensuing -encode call, the states should be initialized with all important data and all MTLResource storage allocated. The data stored in the MTLResource need not be initialized, unless the ensuing -encode call expects it to be. The MTLDevice used by the result is derived from the command buffer. The padding policy will be applied to the filter before this is called to give it the chance to configure any properties like MPSCNNKernel.offset. CAUTION: the result state should be made after the kernel properties are configured for the -encode call that will write to the state, and after -destinationImageDescriptorForSourceImages:sourceStates: is called (if it is called). Otherwise, behavior is undefined. Please see the description of -[MPSCNNKernel resultStateForSourceImage:sourceStates:destinationImage:] for more. Default: returns nil
- Parameters:
commandBuffer - The command buffer to allocate the temporary storage against. The state will only be valid on this command buffer. primaryImage - The MPSImage consumed by the associated -encode call. secondaryImage - The MPSImage consumed by the associated -encode call. sourceStates - The list of MPSStates consumed by the associated -encode call, for a batch size of 1.
- Returns:
- The list of states produced by the -encode call for batch size of 1. When the batch size is not 1, this function will be called repeatedly unless -isResultStateReusedAcrossBatch returns YES. If -isResultStateReusedAcrossBatch returns YES, then it will be called once per batch and the MPSStateBatch array will contain MPSStateBatch.length references to the same object.
-
-