Package apple.metalperformanceshaders
Class MPSCNNConvolution
java.lang.Object
  org.moe.natj.general.NativeObject
    org.moe.natj.objc.ObjCObject
      apple.NSObject
        apple.metalperformanceshaders.MPSKernel
          apple.metalperformanceshaders.MPSCNNKernel
            apple.metalperformanceshaders.MPSCNNConvolution
- All Implemented Interfaces:
NSCoding, NSCopying, NSSecureCoding, NSObject
- Direct Known Subclasses:
MPSCNNFullyConnected
public class MPSCNNConvolution extends MPSCNNKernel
MPSCNNConvolution [@dependency] This depends on Metal.framework. The MPSCNNConvolution kernel specifies a convolution: it convolves the input image with a set of filters, each producing one feature map in the output image.
-
-
Nested Class Summary
-
Nested classes/interfaces inherited from class apple.NSObject
NSObject.Function_instanceMethodForSelector_ret, NSObject.Function_methodForSelector_ret
-
-
Constructor Summary
Constructors:
protected MPSCNNConvolution(org.moe.natj.general.Pointer peer)
-
Method Summary
boolean _supportsSecureCoding()
    This property must return YES on all classes that allow secure coding.
static boolean accessInstanceVariablesDirectly()
long accumulatorPrecisionOption()
    Precision of accumulator used in convolution.
static MPSCNNConvolution alloc()
static java.lang.Object allocWithZone(org.moe.natj.general.ptr.VoidPtr zone)
static boolean automaticallyNotifiesObserversForKey(java.lang.String key)
static void cancelPreviousPerformRequestsWithTarget(java.lang.Object aTarget)
static void cancelPreviousPerformRequestsWithTargetSelectorObject(java.lang.Object aTarget, org.moe.natj.objc.SEL aSelector, java.lang.Object anArgument)
long channelMultiplier()
    Channel multiplier.
static NSArray<java.lang.String> classFallbacksForKeyedArchiver()
static org.moe.natj.objc.Class classForKeyedUnarchiver()
MPSCNNConvolutionDataSource dataSource()
    [@property] dataSource The data source with which the convolution object was created.
static java.lang.String debugDescription_static()
static java.lang.String description_static()
MPSCNNConvolutionWeightsAndBiasesState exportWeightsAndBiasesWithCommandBufferResultStateCanBeTemporary(MTLCommandBuffer commandBuffer, boolean resultStateCanBeTemporary)
    GPU side export.
MPSNNNeuronDescriptor fusedNeuronDescriptor()
    Fused neuron descriptor passed in the convolution descriptor for fusion with convolution.
long groups()
    [@property] groups Number of groups input and output channels are divided into.
static long hash_static()
MPSCNNConvolution init()
MPSCNNConvolution initWithCoder(NSCoder aDecoder)
    NS_DESIGNATED_INITIALIZER
MPSCNNConvolution initWithCoderDevice(NSCoder aDecoder, java.lang.Object device)
    NSSecureCoding compatibility. While the standard NSSecureCoding/NSCoding method -initWithCoder: should work, the file can't know which device your data is allocated on, so we have to guess and may guess incorrectly.
MPSCNNConvolution initWithDevice(java.lang.Object device)
    Standard init with default properties per filter type.
MPSCNNConvolution initWithDeviceConvolutionDescriptorKernelWeightsBiasTermsFlags(MTLDevice device, MPSCNNConvolutionDescriptor convolutionDescriptor, org.moe.natj.general.ptr.ConstFloatPtr kernelWeights, org.moe.natj.general.ptr.ConstFloatPtr biasTerms, long flags)
    Initializes a convolution kernel. WARNING: This API is deprecated and will be removed in the future.
MPSCNNConvolution initWithDeviceWeights(MTLDevice device, MPSCNNConvolutionDataSource weights)
    Initializes a convolution kernel.
long inputFeatureChannels()
    [@property] inputFeatureChannels The number of feature channels per pixel in the input image.
static NSObject.Function_instanceMethodForSelector_ret instanceMethodForSelector(org.moe.natj.objc.SEL aSelector)
static NSMethodSignature instanceMethodSignatureForSelector(org.moe.natj.objc.SEL aSelector)
static boolean instancesRespondToSelector(org.moe.natj.objc.SEL aSelector)
static boolean isSubclassOfClass(org.moe.natj.objc.Class aClass)
static NSSet<java.lang.String> keyPathsForValuesAffectingValueForKey(java.lang.String key)
MPSCNNNeuron neuron()
    [@property] neuron MPSCNNNeuron filter to be applied as part of convolution.
float neuronParameterA()
    Parameter "a" for the neuron.
float neuronParameterB()
    Parameter "b" for the neuron.
float neuronParameterC()
    Parameter "c" for the neuron.
int neuronType()
    The type of neuron to append to the convolution. Please see the class description for a full list.
static java.lang.Object new_objc()
long outputFeatureChannels()
    [@property] outputFeatureChannels The number of feature channels per pixel in the output image.
void reloadWeightsAndBiasesFromDataSource()
    CPU side reload.
void reloadWeightsAndBiasesWithCommandBufferState(MTLCommandBuffer commandBuffer, MPSCNNConvolutionWeightsAndBiasesState state)
    GPU side reload.
void reloadWeightsAndBiasesWithDataSource(MPSCNNConvolutionDataSource dataSource)
    Deprecated. dataSource will be ignored.
static boolean resolveClassMethod(org.moe.natj.objc.SEL sel)
static boolean resolveInstanceMethod(org.moe.natj.objc.SEL sel)
MPSCNNConvolutionGradientState resultStateForSourceImageSourceStatesDestinationImage(MPSImage sourceImage, NSArray<? extends MPSState> sourceStates, MPSImage destinationImage)
    Allocate an MPSCNNConvolutionGradientState to hold the results from a -encodeBatchToCommandBuffer... operation.
void setAccumulatorPrecisionOption(long value)
    Precision of accumulator used in convolution.
static void setVersion_static(long aVersion)
long subPixelScaleFactor()
    [@property] subPixelScaleFactor Sub pixel scale factor which was passed in as part of MPSCNNConvolutionDescriptor when creating this MPSCNNConvolution object.
static org.moe.natj.objc.Class superclass_static()
static boolean supportsSecureCoding()
MPSCNNConvolutionGradientState temporaryResultStateForCommandBufferSourceImageSourceStatesDestinationImage(MTLCommandBuffer commandBuffer, MPSImage sourceImage, NSArray<? extends MPSState> sourceStates, MPSImage destinationImage)
    Allocate a temporary MPSState (subclass) to hold the results from a -encodeBatchToCommandBuffer... operation.
static long version_static()
-
Methods inherited from class apple.metalperformanceshaders.MPSCNNKernel
appendBatchBarrier, clipRect, destinationFeatureChannelOffset, destinationImageAllocator, destinationImageDescriptorForSourceImagesSourceStates, dilationRateX, dilationRateY, edgeMode, encodeToCommandBufferSourceImage, encodeToCommandBufferSourceImageDestinationImage, encodeToCommandBufferSourceImageDestinationStateDestinationImage, encodeToCommandBufferSourceImageDestinationStateDestinationStateIsTemporary, encodingStorageSizeForSourceImageSourceStatesDestinationImage, isBackwards, isResultStateReusedAcrossBatch, isStateModified, kernelHeight, kernelWidth, offset, padding, setClipRect, setDestinationFeatureChannelOffset, setDestinationImageAllocator, setEdgeMode, setOffset, setPadding, setSourceFeatureChannelMaxCount, setSourceFeatureChannelOffset, sourceFeatureChannelMaxCount, sourceFeatureChannelOffset, strideInPixelsX, strideInPixelsY
-
Methods inherited from class apple.metalperformanceshaders.MPSKernel
copyWithZone, copyWithZoneDevice, device, encodeWithCoder, label, options, setLabel, setOptions
-
Methods inherited from class apple.NSObject
accessibilityActivate, accessibilityActivationPoint, accessibilityAssistiveTechnologyFocusedIdentifiers, accessibilityAttributedHint, accessibilityAttributedLabel, accessibilityAttributedUserInputLabels, accessibilityAttributedValue, accessibilityContainerType, accessibilityCustomActions, accessibilityCustomRotors, accessibilityDecrement, accessibilityDragSourceDescriptors, accessibilityDropPointDescriptors, accessibilityElementAtIndex, accessibilityElementCount, accessibilityElementDidBecomeFocused, accessibilityElementDidLoseFocus, accessibilityElementIsFocused, accessibilityElements, accessibilityElementsHidden, accessibilityFrame, accessibilityHint, accessibilityIncrement, accessibilityLabel, accessibilityLanguage, accessibilityNavigationStyle, accessibilityPath, accessibilityPerformEscape, accessibilityPerformMagicTap, accessibilityRespondsToUserInteraction, accessibilityScroll, accessibilityTextualContext, accessibilityTraits, accessibilityUserInputLabels, accessibilityValue, accessibilityViewIsModal, addObserverForKeyPathOptionsContext, attemptRecoveryFromErrorOptionIndex, attemptRecoveryFromErrorOptionIndexDelegateDidRecoverSelectorContextInfo, autoContentAccessingProxy, awakeAfterUsingCoder, awakeFromNib, class_objc, classForCoder, classForKeyedArchiver, copy, dealloc, debugDescription, description, dictionaryWithValuesForKeys, didChangeValueForKey, didChangeValueForKeyWithSetMutationUsingObjects, didChangeValuesAtIndexesForKey, doesNotRecognizeSelector, fileManagerShouldProceedAfterError, fileManagerWillProcessPath, finalize_objc, forwardingTargetForSelector, forwardInvocation, hash, indexOfAccessibilityElement, isAccessibilityElement, isEqual, isKindOfClass, isMemberOfClass, isProxy, methodForSelector, methodSignatureForSelector, mutableArrayValueForKey, mutableArrayValueForKeyPath, mutableCopy, mutableOrderedSetValueForKey, mutableOrderedSetValueForKeyPath, mutableSetValueForKey, mutableSetValueForKeyPath, observationInfo, 
observeValueForKeyPathOfObjectChangeContext, performSelector, performSelectorInBackgroundWithObject, performSelectorOnMainThreadWithObjectWaitUntilDone, performSelectorOnMainThreadWithObjectWaitUntilDoneModes, performSelectorOnThreadWithObjectWaitUntilDone, performSelectorOnThreadWithObjectWaitUntilDoneModes, performSelectorWithObject, performSelectorWithObjectAfterDelay, performSelectorWithObjectAfterDelayInModes, performSelectorWithObjectWithObject, prepareForInterfaceBuilder, provideImageDataBytesPerRowOrigin_Size_UserInfo, removeObserverForKeyPath, removeObserverForKeyPathContext, replacementObjectForCoder, replacementObjectForKeyedArchiver, respondsToSelector, self, setAccessibilityActivationPoint, setAccessibilityAttributedHint, setAccessibilityAttributedLabel, setAccessibilityAttributedUserInputLabels, setAccessibilityAttributedValue, setAccessibilityContainerType, setAccessibilityCustomActions, setAccessibilityCustomRotors, setAccessibilityDragSourceDescriptors, setAccessibilityDropPointDescriptors, setAccessibilityElements, setAccessibilityElementsHidden, setAccessibilityFrame, setAccessibilityHint, setAccessibilityLabel, setAccessibilityLanguage, setAccessibilityNavigationStyle, setAccessibilityPath, setAccessibilityRespondsToUserInteraction, setAccessibilityTextualContext, setAccessibilityTraits, setAccessibilityUserInputLabels, setAccessibilityValue, setAccessibilityViewIsModal, setIsAccessibilityElement, setNilValueForKey, setObservationInfo, setShouldGroupAccessibilityChildren, setValueForKey, setValueForKeyPath, setValueForUndefinedKey, setValuesForKeysWithDictionary, shouldGroupAccessibilityChildren, superclass, validateValueForKeyError, validateValueForKeyPathError, valueForKey, valueForKeyPath, valueForUndefinedKey, willChangeValueForKey, willChangeValueForKeyWithSetMutationUsingObjects, willChangeValuesAtIndexesForKey
-
Method Detail
-
accessInstanceVariablesDirectly
public static boolean accessInstanceVariablesDirectly()
-
alloc
public static MPSCNNConvolution alloc()
-
allocWithZone
public static java.lang.Object allocWithZone(org.moe.natj.general.ptr.VoidPtr zone)
-
automaticallyNotifiesObserversForKey
public static boolean automaticallyNotifiesObserversForKey(java.lang.String key)
-
cancelPreviousPerformRequestsWithTarget
public static void cancelPreviousPerformRequestsWithTarget(java.lang.Object aTarget)
-
cancelPreviousPerformRequestsWithTargetSelectorObject
public static void cancelPreviousPerformRequestsWithTargetSelectorObject(java.lang.Object aTarget, org.moe.natj.objc.SEL aSelector, java.lang.Object anArgument)
-
classFallbacksForKeyedArchiver
public static NSArray<java.lang.String> classFallbacksForKeyedArchiver()
-
classForKeyedUnarchiver
public static org.moe.natj.objc.Class classForKeyedUnarchiver()
-
debugDescription_static
public static java.lang.String debugDescription_static()
-
description_static
public static java.lang.String description_static()
-
hash_static
public static long hash_static()
-
instanceMethodForSelector
public static NSObject.Function_instanceMethodForSelector_ret instanceMethodForSelector(org.moe.natj.objc.SEL aSelector)
-
instanceMethodSignatureForSelector
public static NSMethodSignature instanceMethodSignatureForSelector(org.moe.natj.objc.SEL aSelector)
-
instancesRespondToSelector
public static boolean instancesRespondToSelector(org.moe.natj.objc.SEL aSelector)
-
isSubclassOfClass
public static boolean isSubclassOfClass(org.moe.natj.objc.Class aClass)
-
keyPathsForValuesAffectingValueForKey
public static NSSet<java.lang.String> keyPathsForValuesAffectingValueForKey(java.lang.String key)
-
new_objc
public static java.lang.Object new_objc()
-
resolveClassMethod
public static boolean resolveClassMethod(org.moe.natj.objc.SEL sel)
-
resolveInstanceMethod
public static boolean resolveInstanceMethod(org.moe.natj.objc.SEL sel)
-
setVersion_static
public static void setVersion_static(long aVersion)
-
superclass_static
public static org.moe.natj.objc.Class superclass_static()
-
version_static
public static long version_static()
-
groups
public long groups()
[@property] groups Number of groups input and output channels are divided into.
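As a sketch of what the groups property implies (plain Java, independent of the MOE bindings; the channel counts below are hypothetical example values, not API defaults): the input and output channels are split into independent partitions, and a filter in group g only reads that group's slice of the input channels.

```java
public class GroupsDemo {
    // Channels per partition when `channels` is split into `groups` groups.
    static long channelsPerGroup(long channels, long groups) {
        if (channels % groups != 0)
            throw new IllegalArgumentException("channels must divide evenly by groups");
        return channels / groups;
    }

    // Which group produces a given output channel index.
    static long groupOfOutputChannel(long oc, long outputChannels, long groups) {
        return oc / channelsPerGroup(outputChannels, groups);
    }

    public static void main(String[] args) {
        // Hypothetical: 64 input and 128 output channels, split into 4 groups.
        System.out.println(channelsPerGroup(64, 4));          // 16 input channels per group
        System.out.println(channelsPerGroup(128, 4));         // 32 output channels per group
        System.out.println(groupOfOutputChannel(70, 128, 4)); // output channel 70 -> group 2
    }
}
```

With groups == 1 (the usual default) every filter sees all input channels and this reduces to an ordinary convolution.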
-
init
public MPSCNNConvolution init()
- Overrides:
init in class MPSCNNKernel
-
initWithDevice
public MPSCNNConvolution initWithDevice(java.lang.Object device)
Description copied from class: MPSCNNKernel
Standard init with default properties per filter type.
- Overrides:
initWithDevice in class MPSCNNKernel
- Parameters:
device - The device that the filter will be used on. May not be NULL.
- Returns:
A pointer to the newly initialized object. This will fail, returning nil if the device is not supported. Devices must be MTLFeatureSet_iOS_GPUFamily2_v1 or later.
-
initWithDeviceConvolutionDescriptorKernelWeightsBiasTermsFlags
public MPSCNNConvolution initWithDeviceConvolutionDescriptorKernelWeightsBiasTermsFlags(MTLDevice device, MPSCNNConvolutionDescriptor convolutionDescriptor, org.moe.natj.general.ptr.ConstFloatPtr kernelWeights, org.moe.natj.general.ptr.ConstFloatPtr biasTerms, long flags)
Initializes a convolution kernel. WARNING: This API is deprecated and will be removed in the future. It cannot be used when training. Also, serialization/unserialization won't work for MPSCNNConvolution objects created with this init. Please move to initWithDevice:weights: instead.
- Parameters:
device - The MTLDevice on which this MPSCNNConvolution filter will be used
convolutionDescriptor - A pointer to a MPSCNNConvolutionDescriptor.
kernelWeights - A pointer to a weights array. Each entry is a float value. The number of entries is inputFeatureChannels * outputFeatureChannels * kernelHeight * kernelWidth. The layout of the filter weights is such that the array can be reinterpreted as the 4D tensor weight[outputChannels][kernelHeight][kernelWidth][inputChannels / groups]. Weights are converted to half float (fp16) internally for best performance.
biasTerms - A pointer to bias terms to be applied to the convolution output. Each entry is a float value. The number of entries equals the number of output feature maps.
flags - Currently unused. Pass MPSCNNConvolutionFlagsNone.
- Returns:
A valid MPSCNNConvolution object, or nil on failure.
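The 4D weight layout weight[outputChannels][kernelHeight][kernelWidth][inputChannels / groups] given for kernelWeights above implies a row-major flat offset. A minimal sketch of that offset computation (plain Java, independent of the MOE bindings; the kernel dimensions below are hypothetical example values):

```java
public class WeightIndex {
    // Flat offset of weight[oc][ky][kx][ic] in a row-major array laid out as
    // [outputChannels][kernelHeight][kernelWidth][inputChannels / groups].
    static long offset(long oc, long ky, long kx, long ic,
                       long kernelHeight, long kernelWidth, long inChansPerGroup) {
        return ((oc * kernelHeight + ky) * kernelWidth + kx) * inChansPerGroup + ic;
    }

    public static void main(String[] args) {
        // Hypothetical 3x3 kernel reading 8 input channels per group.
        long kH = 3, kW = 3, icPerGroup = 8;
        // Output channel 1 starts right after channel 0's kH*kW*icPerGroup = 72 weights.
        System.out.println(offset(1, 0, 0, 0, kH, kW, icPerGroup)); // 72
        // Stepping ky by 1 advances kW * icPerGroup = 24 entries.
        System.out.println(offset(0, 1, 0, 0, kH, kW, icPerGroup)); // 24
        System.out.println(offset(0, 0, 2, 5, kH, kW, icPerGroup)); // 21
    }
}
```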
-
inputFeatureChannels
public long inputFeatureChannels()
[@property] inputFeatureChannels The number of feature channels per pixel in the input image.
-
neuron
public MPSCNNNeuron neuron()
[@property] neuron MPSCNNNeuron filter to be applied as part of convolution. Can be nil, in which case no neuron activation function is applied.
-
outputFeatureChannels
public long outputFeatureChannels()
[@property] outputFeatureChannels The number of feature channels per pixel in the output image.
-
channelMultiplier
public long channelMultiplier()
Channel multiplier. For a convolution created with MPSCNNDepthWiseConvolutionDescriptor, it is the number of output feature channels for each input channel. See MPSCNNDepthWiseConvolutionDescriptor for more details. Default is 0, which means a regular CNN convolution.
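As a hedged sketch of the arithmetic this implies (plain Java, independent of the MOE bindings; the channel counts are hypothetical example values): for a depthwise convolution, each input channel produces channelMultiplier output feature channels.

```java
public class DepthwiseDemo {
    // Output feature channels of a depthwise convolution: each input channel
    // is filtered into `channelMultiplier` output channels.
    static long depthwiseOutputChannels(long inputChannels, long channelMultiplier) {
        return inputChannels * channelMultiplier;
    }

    public static void main(String[] args) {
        // Hypothetical: 32 input channels with a channel multiplier of 2.
        System.out.println(depthwiseOutputChannels(32, 2)); // 64
        // A channelMultiplier of 0 signals a regular (non-depthwise) convolution.
    }
}
```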
-
initWithCoder
public MPSCNNConvolution initWithCoder(NSCoder aDecoder)
Description copied from interface: NSCoding
NS_DESIGNATED_INITIALIZER
- Specified by:
initWithCoder in interface NSCoding
- Overrides:
initWithCoder in class MPSCNNKernel
-
initWithCoderDevice
public MPSCNNConvolution initWithCoderDevice(NSCoder aDecoder, java.lang.Object device)
NSSecureCoding compatibility. While the standard NSSecureCoding/NSCoding method -initWithCoder: should work, the file can't know which device your data is allocated on, so we have to guess and may guess incorrectly. To avoid that problem, use initWithCoder:device: instead.
- Overrides:
initWithCoderDevice in class MPSCNNKernel
- Parameters:
aDecoder - The NSCoder subclass with your serialized MPSKernel
device - The MTLDevice on which to make the MPSKernel
- Returns:
A new MPSKernel object, or nil on failure.
-
initWithDeviceWeights
public MPSCNNConvolution initWithDeviceWeights(MTLDevice device, MPSCNNConvolutionDataSource weights)
Initializes a convolution kernel.
- Parameters:
device - The MTLDevice on which this MPSCNNConvolution filter will be used
weights - A pointer to an object that conforms to the MPSCNNConvolutionDataSource protocol. The protocol declares the methods that an instance of MPSCNNConvolution uses to obtain the weights and bias terms for the CNN convolution filter.
- Returns:
A valid MPSCNNConvolution object, or nil on failure.
-
neuronParameterA
public float neuronParameterA()
Parameter "a" for the neuron. Default: 1.0f. Please see the class description for the interpretation of a.
-
neuronParameterB
public float neuronParameterB()
Parameter "b" for the neuron. Default: 1.0f. Please see the class description for the interpretation of b.
-
neuronType
public int neuronType()
The type of neuron to append to the convolution. Please see the class description for a full list. Default is MPSCNNNeuronTypeNone.
-
subPixelScaleFactor
public long subPixelScaleFactor()
[@property] subPixelScaleFactor Sub pixel scale factor which was passed in as part of MPSCNNConvolutionDescriptor when creating this MPSCNNConvolution object.
-
supportsSecureCoding
public static boolean supportsSecureCoding()
-
_supportsSecureCoding
public boolean _supportsSecureCoding()
Description copied from interface: NSSecureCoding
This property must return YES on all classes that allow secure coding. Subclasses of classes that adopt NSSecureCoding and override initWithCoder: must also override this method and return YES. The Secure Coding Guide should be consulted when writing methods that decode data.
- Specified by:
_supportsSecureCoding in interface NSSecureCoding
- Overrides:
_supportsSecureCoding in class MPSCNNKernel
-
accumulatorPrecisionOption
public long accumulatorPrecisionOption()
Precision of accumulator used in convolution. See MPSNeuralNetworkTypes.h for discussion. Default is MPSNNConvolutionAccumulatorPrecisionOptionFloat.
-
dataSource
public MPSCNNConvolutionDataSource dataSource()
[@property] dataSource The data source with which the convolution object was created.
-
exportWeightsAndBiasesWithCommandBufferResultStateCanBeTemporary
public MPSCNNConvolutionWeightsAndBiasesState exportWeightsAndBiasesWithCommandBufferResultStateCanBeTemporary(MTLCommandBuffer commandBuffer, boolean resultStateCanBeTemporary)
GPU side export. Enqueues a kernel to export the current weights and biases stored in MPSCNNConvolution's internal buffers into the weights and biases MTLBuffers returned in an MPSCNNConvolutionWeightsAndBiasesState.
- Parameters:
commandBuffer - Metal command buffer on which the export kernel is enqueued.
resultStateCanBeTemporary - If FALSE, the state returned will be non-temporary. If TRUE, the returned state may or may not be temporary.
- Returns:
An MPSCNNConvolutionWeightsAndBiasesState containing the weights and biases buffers to which the weights were exported. This state can be temporary or non-temporary depending on the resultStateCanBeTemporary flag.
-
fusedNeuronDescriptor
public MPSNNNeuronDescriptor fusedNeuronDescriptor()
Fused neuron descriptor passed in the convolution descriptor for fusion with convolution.
-
neuronParameterC
public float neuronParameterC()
Parameter "c" for the neuron. Default: 1.0f. Please see the class description for the interpretation of c.
-
reloadWeightsAndBiasesFromDataSource
public void reloadWeightsAndBiasesFromDataSource()
CPU side reload. Reloads the updated weights and biases from the data provider into the internal weights and bias buffers. The weights and biases gradients needed for the update are obtained from the MPSCNNConvolutionGradientState object. The data provider passed in the init call is used for this purpose.
-
reloadWeightsAndBiasesWithCommandBufferState
public void reloadWeightsAndBiasesWithCommandBufferState(MTLCommandBuffer commandBuffer, MPSCNNConvolutionWeightsAndBiasesState state)
GPU side reload. Reloads the updated weights and biases from an update buffer, produced by an application-enqueued Metal kernel, into the internal weights and biases buffers. The weights and biases gradients needed for the update are obtained from the MPSCNNConvolutionGradientState object's gradientForWeights and gradientForBiases Metal buffers.
- Parameters:
commandBuffer - Metal command buffer on which the application's update kernel was enqueued, consuming the MPSCNNConvolutionGradientState's gradientForWeights and gradientForBiases buffers and producing the update buffer.
state - MPSCNNConvolutionWeightsAndBiasesState containing weights and biases buffers with the updated weights produced by the application's update kernel. The state's readcount will be decremented.
-
reloadWeightsAndBiasesWithDataSource
public void reloadWeightsAndBiasesWithDataSource(MPSCNNConvolutionDataSource dataSource)
Deprecated. dataSource will be ignored.
-
resultStateForSourceImageSourceStatesDestinationImage
public MPSCNNConvolutionGradientState resultStateForSourceImageSourceStatesDestinationImage(MPSImage sourceImage, NSArray<? extends MPSState> sourceStates, MPSImage destinationImage)
Allocate an MPSCNNConvolutionGradientState to hold the results from a -encodeBatchToCommandBuffer... operation.
- Overrides:
resultStateForSourceImageSourceStatesDestinationImage in class MPSCNNKernel
- Parameters:
sourceImage - The MPSImage consumed by the associated -encode call.
sourceStates - The list of MPSStates consumed by the associated -encode call, for a batch size of 1.
destinationImage - The destination image for the encode call
- Returns:
The list of states produced by the -encode call for a batch size of 1. -isResultStateReusedAcrossBatch returns YES for MPSCNNConvolution, so the same state is used across the entire batch. The state object is not reusable across batches.
-
setAccumulatorPrecisionOption
public void setAccumulatorPrecisionOption(long value)
Precision of accumulator used in convolution. See MPSNeuralNetworkTypes.h for discussion. Default is MPSNNConvolutionAccumulatorPrecisionOptionFloat.
-
temporaryResultStateForCommandBufferSourceImageSourceStatesDestinationImage
public MPSCNNConvolutionGradientState temporaryResultStateForCommandBufferSourceImageSourceStatesDestinationImage(MTLCommandBuffer commandBuffer, MPSImage sourceImage, NSArray<? extends MPSState> sourceStates, MPSImage destinationImage)
Description copied from class: MPSCNNKernel
Allocate a temporary MPSState (subclass) to hold the results from a -encodeBatchToCommandBuffer... operation. A graph may need to allocate storage up front before executing. This may be necessary to avoid using too much memory and to manage large batches. The function should allocate any MPSState objects that will be produced by an -encode call with the indicated sourceImages and sourceStates inputs. Though the states can be further adjusted in the ensuing -encode call, the states should be initialized with all important data and all MTLResource storage allocated. The data stored in the MTLResource need not be initialized, unless the ensuing -encode call expects it to be. The MTLDevice used by the result is derived from the command buffer. The padding policy will be applied to the filter before this is called to give it the chance to configure any properties like MPSCNNKernel.offset.

CAUTION: The kernel must have all properties set to the values that will ultimately be passed to the -encode call that writes to the state, before -resultStateForSourceImages:sourceStates:destinationImage: is called, or behavior is undefined. Please note that -destinationImageDescriptorForSourceImages:sourceStates:destinationImage: will alter some of these properties automatically based on the padding policy. If you intend to call that to make the destination image, then you should call it before -resultStateForSourceImages:sourceStates:destinationImage:. This will ensure that the properties used in the encode call and in the destination image creation match those used to configure the state.

The following order is recommended:

    // Configure MPSCNNKernel properties first
    kernel.edgeMode = MPSImageEdgeModeZero;
    kernel.destinationFeatureChannelOffset = 128; // concatenation without the copy
    ...

    // ALERT: will change MPSCNNKernel properties
    MPSImageDescriptor * d = [kernel destinationImageDescriptorForSourceImage: source
                                                                sourceStates: states];
    MPSTemporaryImage * dest = [MPSTemporaryImage temporaryImageWithCommandBuffer: cmdBuf
                                                                  imageDescriptor: d];

    // Now that all properties are configured properly, we can make the result state
    // and call encode.
    MPSState * __nullable destState = [kernel temporaryResultStateForCommandBuffer: cmdBuf
                                                                       sourceImage: source
                                                                      sourceStates: states];

    // This form of -encode will be declared by the MPSCNNKernel subclass
    [kernel encodeToCommandBuffer: cmdBuf
                      sourceImage: source
                 destinationState: destState
                 destinationImage: dest];

Default: returns nil
- Overrides:
temporaryResultStateForCommandBufferSourceImageSourceStatesDestinationImage in class MPSCNNKernel
- Parameters:
commandBuffer - The command buffer to allocate the temporary storage against. The state will only be valid on this command buffer.
sourceImage - The MPSImage consumed by the associated -encode call.
sourceStates - The list of MPSStates consumed by the associated -encode call, for a batch size of 1.
destinationImage - The destination image for the encode call
- Returns:
The list of states produced by the -encode call for a batch size of 1. When the batch size is not 1, this function will be called repeatedly unless -isResultStateReusedAcrossBatch returns YES. If -isResultStateReusedAcrossBatch returns YES, then it will be called once per batch and the MPSStateBatch array will contain MPSStateBatch.length references to the same object.
-
-