Package apple.metalperformanceshaders
Class MPSSVGF
- java.lang.Object
  - org.moe.natj.general.NativeObject
    - org.moe.natj.objc.ObjCObject
      - apple.NSObject
        - apple.metalperformanceshaders.MPSKernel
          - apple.metalperformanceshaders.MPSSVGF
- All Implemented Interfaces:
NSCoding, NSCopying, NSSecureCoding, NSObject
public class MPSSVGF extends MPSKernel implements NSSecureCoding, NSCopying
Reduces noise in images rendered with Monte Carlo ray tracing methods. This filter uses temporal reprojection to accumulate samples over time, followed by an edge-avoiding blur to smooth out the noise. It uses depth and surface normal textures to detect edges in the image(s) to be denoised. The filter also computes an estimate of the luminance variance of the accumulated samples for each pixel to reject neighboring pixels whose luminance is too dissimilar while blurring.

This filter requires noise-free depth and normal textures, so it is not compatible with stochastic visibility effects such as depth of field, motion blur, or pixel subsampling. These effects need to be applied as a post-process instead. Furthermore, because the depth and normal textures can only represent directly visible geometry, the filter may over-blur reflections. The use of temporal reprojection may introduce artifacts such as ghosting or streaking, as well as a temporal lag for changes in luminance such as moving shadows. However, the filter is relatively fast, as it is intended for realtime use. Slower but higher quality filters are available in the literature.

This filter can process up to two images simultaneously, assuming they share the same depth and normal textures. This is typically faster than processing the two images independently because memory bandwidth spent fetching depth and normal values and ALU time spent computing various weighting functions can be shared by both images. This is useful if, e.g., you want to denoise direct and indirect lighting terms separately to avoid mixing the two terms. The filter is also optimized for processing single-channel images for effects such as shadows and ambient occlusion. Denoising these images can be much faster than denoising a full RGB image, so it may be useful to separate out these terms and denoise them specifically.
This filter operates in three stages: temporal reprojection, variance estimation, and finally a series of edge-avoiding bilateral blurs.

The temporal reprojection stage accepts the image to be denoised for the current frame, the denoised image from the previous frame, the depth and normal textures from the current and previous frame and, finally, a motion vector texture. It uses the motion vector texture to look up the accumulated samples from the previous frame. It then compares the depth and normals to determine if those samples are consistent with the current frame. If so, the previous frame is blended with the current frame. This stage also accumulates the first and second moments of the sample luminance, which are used to compute the luminance variance in the next stage.

The variance estimation stage computes an estimate of the variance of the luminance of the accumulated samples for each pixel. This stage may fall back to a spatial estimate if not enough samples have been accumulated. The luminance variance is used in the final stage to reject outlying neighboring pixels while blurring, to avoid blurring across luminance discontinuities such as shadow boundaries.

The final stage performs consecutive edge-avoiding bilateral blurs to smooth out noise in the image. The blurs are dilated with increasing power-of-two step distances starting from 1, which cheaply approximates a very large radius bilateral blur. Each iteration blurs both the input image and the variance image, as variance is reduced after each iteration. It is recommended that the output of the first iteration be used as the input to the next frame's reprojection stage to further reduce noise.

Tips:
- It may be helpful to further divide out texture details such as surface albedo before denoising, to avoid blurring texture detail and to preserve any careful texture filtering that may have been performed. The albedo can be reapplied after denoising.
- High frequency geometry and normal maps may cause excessive disocclusions during reprojection, manifesting as noise.
- Jittering sample positions from frame to frame for temporal antialiasing may also cause disocclusions. However, this can be partially hidden by the temporal antialiasing algorithm itself.
- This kernel, like many convolutions, requires quite a bit of bandwidth. Use the texture pixel formats with the smallest number of bits per pixel and the lowest resolution possible for the required quality level. Lower resolution images can be combined with a bilateral upsampling filter, especially if the image being denoised is mostly low frequency lighting or ambient occlusion.
- The increasing dilation during the bilateral blurring stage can introduce ringing artifacts around geometric discontinuities. These can be partially hidden, at the cost of potentially increased noise, by reducing the bilateral blur's sigma value slightly after each iteration.
- Use lower precision pixel formats if possible to reduce memory bandwidth.

Refer to "Spatiotemporal Variance-Guided Filtering: Real-Time Reconstruction for Path-Traced Global Illumination" for more information.
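As a usage illustration, the three stages can be encoded once per frame with the encode* methods detailed below. The following is an unverified sketch, not a definitive implementation: the texture variables, their allocation and formats, and the ping-ponging of previous-frame resources are assumptions, and running it requires the MOE runtime and a Metal device.

```java
// Sketch of one frame of SVGF denoising (hypothetical texture setup elided).
MPSSVGF svgf = MPSSVGF.alloc().initWithDevice(device);
svgf.setChannelCount(3);            // RGB image; variance is packed as an extra channel
svgf.setBilateralFilterRadius(2);   // 5x5 bilateral filter (the default)
svgf.setBilateralFilterSigma(1.2f);

// Stage 1: temporal reprojection, accumulating color and luminance moments.
svgf.encodeReprojectionToCommandBufferSourceTexturePreviousTextureDestinationTexturePreviousLuminanceMomentsTextureDestinationLuminanceMomentsTexturePreviousFrameCountTextureDestinationFrameCountTextureMotionVectorTextureDepthNormalTexturePreviousDepthNormalTexture(
        commandBuffer, noisyColor, prevDenoised, reprojected,
        prevMoments, moments, prevFrameCount, frameCount,
        motionVectors, depthNormal, prevDepthNormal);

// Stage 2: variance estimation; the output packs color and variance together.
svgf.encodeVarianceEstimationToCommandBufferSourceTextureLuminanceMomentsTextureDestinationTextureFrameCountTextureDepthNormalTexture(
        commandBuffer, reprojected, moments, packedA, frameCount, depthNormal);

// Stage 3: five dilated bilateral blurs with step distances 1, 2, 4, 8, 16.
MTLTexture src = packedA, dst = packedB;
for (int i = 0; i < 5; i++) {
    svgf.encodeBilateralFilterToCommandBufferStepDistanceSourceTextureDestinationTextureDepthNormalTexture(
            commandBuffer, 1L << i, src, dst, depthNormal);
    // The output of iteration 0 should be copied out here for next frame's
    // reprojection; in this two-texture ping-pong it is overwritten later.
    MTLTexture tmp = src; src = dst; dst = tmp;
}
// After the loop, `src` holds the final denoised, variance-packed image.
```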
-
-
Nested Class Summary
-
Nested classes/interfaces inherited from class apple.NSObject
NSObject.Function_instanceMethodForSelector_ret, NSObject.Function_methodForSelector_ret
-
-
Constructor Summary
Constructors
protected MPSSVGF(org.moe.natj.general.Pointer peer)
-
Method Summary
boolean _supportsSecureCoding()
    This property must return YES on all classes that allow secure coding.
static boolean accessInstanceVariablesDirectly()
static MPSSVGF alloc()
static java.lang.Object allocWithZone(org.moe.natj.general.ptr.VoidPtr zone)
static boolean automaticallyNotifiesObserversForKey(java.lang.String key)
long bilateralFilterRadius()
    The radius of the bilateral filter.
float bilateralFilterSigma()
    The sigma value of the Gaussian function used by the bilateral filter.
static void cancelPreviousPerformRequestsWithTarget(java.lang.Object aTarget)
static void cancelPreviousPerformRequestsWithTargetSelectorObject(java.lang.Object aTarget, org.moe.natj.objc.SEL aSelector, java.lang.Object anArgument)
long channelCount()
    The number of channels to filter in the source image.
long channelCount2()
    The number of channels to filter in the second source image.
static NSArray<java.lang.String> classFallbacksForKeyedArchiver()
static org.moe.natj.objc.Class classForKeyedUnarchiver()
java.lang.Object copyWithZone(org.moe.natj.general.ptr.VoidPtr zone)
java.lang.Object copyWithZoneDevice(org.moe.natj.general.ptr.VoidPtr zone, MTLDevice device)
    Make a copy of this MPSKernel for a new device. -copyWithZone: will call this API to make a copy of the MPSKernel on the same device.
static java.lang.String debugDescription_static()
float depthWeight()
    Controls how samples' depths are compared during reprojection, variance estimation, and bilateral filtering.
static java.lang.String description_static()
void encodeBilateralFilterToCommandBufferStepDistanceSourceTextureDestinationTextureDepthNormalTexture(MTLCommandBuffer commandBuffer, long stepDistance, MTLTexture sourceTexture, MTLTexture destinationTexture, MTLTexture depthNormalTexture)
    Encode bilateral filter into a command buffer. Performs an edge-avoiding blur with radius given by the bilateralFilterRadius property, with sampling weighted by a Gaussian filter with sigma given by the bilateralFilterSigma property.
void encodeBilateralFilterToCommandBufferStepDistanceSourceTextureDestinationTextureSourceTexture2DestinationTexture2DepthNormalTexture(MTLCommandBuffer commandBuffer, long stepDistance, MTLTexture sourceTexture, MTLTexture destinationTexture, MTLTexture sourceTexture2, MTLTexture destinationTexture2, MTLTexture depthNormalTexture)
    Encode bilateral filter into a command buffer. Performs an edge-avoiding blur with radius given by the bilateralFilterRadius property, with sampling weighted by a Gaussian filter with sigma given by the bilateralFilterSigma property.
void encodeReprojectionToCommandBufferSourceTexturePreviousTextureDestinationTexturePreviousLuminanceMomentsTextureDestinationLuminanceMomentsTexturePreviousFrameCountTextureDestinationFrameCountTextureMotionVectorTextureDepthNormalTexturePreviousDepthNormalTexture(MTLCommandBuffer commandBuffer, MTLTexture sourceTexture, MTLTexture previousTexture, MTLTexture destinationTexture, MTLTexture previousLuminanceMomentsTexture, MTLTexture destinationLuminanceMomentsTexture, MTLTexture previousFrameCountTexture, MTLTexture destinationFrameCountTexture, MTLTexture motionVectorTexture, MTLTexture depthNormalTexture, MTLTexture previousDepthNormalTexture)
    Encode reprojection into a command buffer. Normal and depth values from the previous frame will be compared with normal and depth values from the current frame to determine if they are similar enough to reproject into the current frame.
void encodeReprojectionToCommandBufferSourceTexturePreviousTextureDestinationTexturePreviousLuminanceMomentsTextureDestinationLuminanceMomentsTextureSourceTexture2PreviousTexture2DestinationTexture2PreviousLuminanceMomentsTexture2DestinationLuminanceMomentsTexture2PreviousFrameCountTextureDestinationFrameCountTextureMotionVectorTextureDepthNormalTexturePreviousDepthNormalTexture(MTLCommandBuffer commandBuffer, MTLTexture sourceTexture, MTLTexture previousTexture, MTLTexture destinationTexture, MTLTexture previousLuminanceMomentsTexture, MTLTexture destinationLuminanceMomentsTexture, MTLTexture sourceTexture2, MTLTexture previousTexture2, MTLTexture destinationTexture2, MTLTexture previousLuminanceMomentsTexture2, MTLTexture destinationLuminanceMomentsTexture2, MTLTexture previousFrameCountTexture, MTLTexture destinationFrameCountTexture, MTLTexture motionVectorTexture, MTLTexture depthNormalTexture, MTLTexture previousDepthNormalTexture)
    Encode reprojection into a command buffer. Normal and depth values from the previous frame will be compared with normal and depth values from the current frame to determine if they are similar enough to reproject into the current frame.
void encodeVarianceEstimationToCommandBufferSourceTextureLuminanceMomentsTextureDestinationTextureFrameCountTextureDepthNormalTexture(MTLCommandBuffer commandBuffer, MTLTexture sourceTexture, MTLTexture luminanceMomentsTexture, MTLTexture destinationTexture, MTLTexture frameCountTexture, MTLTexture depthNormalTexture)
    Encode variance estimation into a command buffer. Variance is computed from the accumulated first and second luminance moments.
void encodeVarianceEstimationToCommandBufferSourceTextureLuminanceMomentsTextureDestinationTextureSourceTexture2LuminanceMomentsTexture2DestinationTexture2FrameCountTextureDepthNormalTexture(MTLCommandBuffer commandBuffer, MTLTexture sourceTexture, MTLTexture luminanceMomentsTexture, MTLTexture destinationTexture, MTLTexture sourceTexture2, MTLTexture luminanceMomentsTexture2, MTLTexture destinationTexture2, MTLTexture frameCountTexture, MTLTexture depthNormalTexture)
    Encode variance estimation into a command buffer. Variance is computed from the accumulated first and second luminance moments.
void encodeWithCoder(NSCoder coder)
static long hash_static()
MPSSVGF init()
MPSSVGF initWithCoder(NSCoder coder)
    NS_DESIGNATED_INITIALIZER
MPSSVGF initWithCoderDevice(NSCoder aDecoder, java.lang.Object device)
    NSSecureCoding compatibility. While the standard NSSecureCoding/NSCoding method -initWithCoder: should work, since the file can't know which device your data is allocated on, we have to guess and may guess incorrectly.
MPSSVGF initWithDevice(java.lang.Object device)
    Standard init with default properties per filter type.
static NSObject.Function_instanceMethodForSelector_ret instanceMethodForSelector(org.moe.natj.objc.SEL aSelector)
static NSMethodSignature instanceMethodSignatureForSelector(org.moe.natj.objc.SEL aSelector)
static boolean instancesRespondToSelector(org.moe.natj.objc.SEL aSelector)
static boolean isSubclassOfClass(org.moe.natj.objc.Class aClass)
static NSSet<java.lang.String> keyPathsForValuesAffectingValueForKey(java.lang.String key)
float luminanceWeight()
    Controls how samples' luminance values are compared during bilateral filtering.
long minimumFramesForVarianceEstimation()
    The minimum number of frames which must be accumulated before variance can be computed directly from the accumulated luminance moments.
static java.lang.Object new_objc()
float normalWeight()
    Controls how samples' normals are compared during reprojection, variance estimation, and bilateral filtering.
float reprojectionThreshold()
    During reprojection, the minimum combined depth and normal weight needed to consider a pixel from the previous frame consistent with a pixel from the current frame.
static boolean resolveClassMethod(org.moe.natj.objc.SEL sel)
static boolean resolveInstanceMethod(org.moe.natj.objc.SEL sel)
void setBilateralFilterRadius(long value)
    The radius of the bilateral filter.
void setBilateralFilterSigma(float value)
    The sigma value of the Gaussian function used by the bilateral filter.
void setChannelCount(long value)
    The number of channels to filter in the source image.
void setChannelCount2(long value)
    The number of channels to filter in the second source image.
void setDepthWeight(float value)
    Controls how samples' depths are compared during reprojection, variance estimation, and bilateral filtering.
void setLuminanceWeight(float value)
    Controls how samples' luminance values are compared during bilateral filtering.
void setMinimumFramesForVarianceEstimation(long value)
    The minimum number of frames which must be accumulated before variance can be computed directly from the accumulated luminance moments.
void setNormalWeight(float value)
    Controls how samples' normals are compared during reprojection, variance estimation, and bilateral filtering.
void setReprojectionThreshold(float value)
    During reprojection, the minimum combined depth and normal weight needed to consider a pixel from the previous frame consistent with a pixel from the current frame.
void setTemporalReprojectionBlendFactor(float value)
    When using MPSTemporalWeightingExponentialMovingAverage, how much to blend the current frame with the previous frame during reprojection.
void setTemporalWeighting(long value)
    How to weight samples during temporal reprojection.
void setVarianceEstimationRadius(long value)
    The radius of the spatial filter used when not enough frames have been accumulated to compute variance from accumulated luminance moments.
void setVarianceEstimationSigma(float value)
    The sigma value of the Gaussian function used by the spatial filter used when not enough frames have been accumulated to compute variance from accumulated luminance moments.
void setVariancePrefilterRadius(long value)
    The radius of the variance pre-filter of the bilateral filter.
void setVariancePrefilterSigma(float value)
    The sigma value of the Gaussian function used by the variance pre-filter of the bilateral filter.
static void setVersion_static(long aVersion)
static org.moe.natj.objc.Class superclass_static()
static boolean supportsSecureCoding()
float temporalReprojectionBlendFactor()
    When using MPSTemporalWeightingExponentialMovingAverage, how much to blend the current frame with the previous frame during reprojection.
long temporalWeighting()
    How to weight samples during temporal reprojection.
long varianceEstimationRadius()
    The radius of the spatial filter used when not enough frames have been accumulated to compute variance from accumulated luminance moments.
float varianceEstimationSigma()
    The sigma value of the Gaussian function used by the spatial filter used when not enough frames have been accumulated to compute variance from accumulated luminance moments.
long variancePrefilterRadius()
    The radius of the variance pre-filter of the bilateral filter.
float variancePrefilterSigma()
    The sigma value of the Gaussian function used by the variance pre-filter of the bilateral filter.
static long version_static()
-
Methods inherited from class apple.metalperformanceshaders.MPSKernel
device, label, options, setLabel, setOptions
-
Methods inherited from class apple.NSObject
accessibilityActivate, accessibilityActivationPoint, accessibilityAssistiveTechnologyFocusedIdentifiers, accessibilityAttributedHint, accessibilityAttributedLabel, accessibilityAttributedUserInputLabels, accessibilityAttributedValue, accessibilityContainerType, accessibilityCustomActions, accessibilityCustomRotors, accessibilityDecrement, accessibilityDragSourceDescriptors, accessibilityDropPointDescriptors, accessibilityElementAtIndex, accessibilityElementCount, accessibilityElementDidBecomeFocused, accessibilityElementDidLoseFocus, accessibilityElementIsFocused, accessibilityElements, accessibilityElementsHidden, accessibilityFrame, accessibilityHint, accessibilityIncrement, accessibilityLabel, accessibilityLanguage, accessibilityNavigationStyle, accessibilityPath, accessibilityPerformEscape, accessibilityPerformMagicTap, accessibilityRespondsToUserInteraction, accessibilityScroll, accessibilityTextualContext, accessibilityTraits, accessibilityUserInputLabels, accessibilityValue, accessibilityViewIsModal, addObserverForKeyPathOptionsContext, attemptRecoveryFromErrorOptionIndex, attemptRecoveryFromErrorOptionIndexDelegateDidRecoverSelectorContextInfo, autoContentAccessingProxy, awakeAfterUsingCoder, awakeFromNib, class_objc, classForCoder, classForKeyedArchiver, copy, dealloc, debugDescription, description, dictionaryWithValuesForKeys, didChangeValueForKey, didChangeValueForKeyWithSetMutationUsingObjects, didChangeValuesAtIndexesForKey, doesNotRecognizeSelector, fileManagerShouldProceedAfterError, fileManagerWillProcessPath, finalize_objc, forwardingTargetForSelector, forwardInvocation, hash, indexOfAccessibilityElement, isAccessibilityElement, isEqual, isKindOfClass, isMemberOfClass, isProxy, methodForSelector, methodSignatureForSelector, mutableArrayValueForKey, mutableArrayValueForKeyPath, mutableCopy, mutableOrderedSetValueForKey, mutableOrderedSetValueForKeyPath, mutableSetValueForKey, mutableSetValueForKeyPath, observationInfo, 
observeValueForKeyPathOfObjectChangeContext, performSelector, performSelectorInBackgroundWithObject, performSelectorOnMainThreadWithObjectWaitUntilDone, performSelectorOnMainThreadWithObjectWaitUntilDoneModes, performSelectorOnThreadWithObjectWaitUntilDone, performSelectorOnThreadWithObjectWaitUntilDoneModes, performSelectorWithObject, performSelectorWithObjectAfterDelay, performSelectorWithObjectAfterDelayInModes, performSelectorWithObjectWithObject, prepareForInterfaceBuilder, provideImageDataBytesPerRowOrigin_Size_UserInfo, removeObserverForKeyPath, removeObserverForKeyPathContext, replacementObjectForCoder, replacementObjectForKeyedArchiver, respondsToSelector, self, setAccessibilityActivationPoint, setAccessibilityAttributedHint, setAccessibilityAttributedLabel, setAccessibilityAttributedUserInputLabels, setAccessibilityAttributedValue, setAccessibilityContainerType, setAccessibilityCustomActions, setAccessibilityCustomRotors, setAccessibilityDragSourceDescriptors, setAccessibilityDropPointDescriptors, setAccessibilityElements, setAccessibilityElementsHidden, setAccessibilityFrame, setAccessibilityHint, setAccessibilityLabel, setAccessibilityLanguage, setAccessibilityNavigationStyle, setAccessibilityPath, setAccessibilityRespondsToUserInteraction, setAccessibilityTextualContext, setAccessibilityTraits, setAccessibilityUserInputLabels, setAccessibilityValue, setAccessibilityViewIsModal, setIsAccessibilityElement, setNilValueForKey, setObservationInfo, setShouldGroupAccessibilityChildren, setValueForKey, setValueForKeyPath, setValueForUndefinedKey, setValuesForKeysWithDictionary, shouldGroupAccessibilityChildren, superclass, validateValueForKeyError, validateValueForKeyPathError, valueForKey, valueForKeyPath, valueForUndefinedKey, willChangeValueForKey, willChangeValueForKeyWithSetMutationUsingObjects, willChangeValuesAtIndexesForKey
-
-
-
-
Method Detail
-
accessInstanceVariablesDirectly
public static boolean accessInstanceVariablesDirectly()
-
alloc
public static MPSSVGF alloc()
-
allocWithZone
public static java.lang.Object allocWithZone(org.moe.natj.general.ptr.VoidPtr zone)
-
automaticallyNotifiesObserversForKey
public static boolean automaticallyNotifiesObserversForKey(java.lang.String key)
-
bilateralFilterRadius
public long bilateralFilterRadius()
The radius of the bilateral filter. Defaults to 2, resulting in a 5x5 filter.
-
bilateralFilterSigma
public float bilateralFilterSigma()
The sigma value of the Gaussian function used by the bilateral filter. Must be greater than zero. Defaults to 1.2.
-
cancelPreviousPerformRequestsWithTarget
public static void cancelPreviousPerformRequestsWithTarget(java.lang.Object aTarget)
-
cancelPreviousPerformRequestsWithTargetSelectorObject
public static void cancelPreviousPerformRequestsWithTargetSelectorObject(java.lang.Object aTarget, org.moe.natj.objc.SEL aSelector, java.lang.Object anArgument)
-
channelCount
public long channelCount()
The number of channels to filter in the source image. Must be at least one and at most three. Defaults to 3.
-
channelCount2
public long channelCount2()
The number of channels to filter in the second source image. Must be at least one and at most three. Defaults to 3.
-
classFallbacksForKeyedArchiver
public static NSArray<java.lang.String> classFallbacksForKeyedArchiver()
-
classForKeyedUnarchiver
public static org.moe.natj.objc.Class classForKeyedUnarchiver()
-
copyWithZone
public java.lang.Object copyWithZone(org.moe.natj.general.ptr.VoidPtr zone)
- Specified by:
copyWithZone in interface NSCopying
- Overrides:
copyWithZone in class MPSKernel
-
copyWithZoneDevice
public java.lang.Object copyWithZoneDevice(org.moe.natj.general.ptr.VoidPtr zone, MTLDevice device)
Description copied from class: MPSKernel
Make a copy of this MPSKernel for a new device. -copyWithZone: will call this API to make a copy of the MPSKernel on the same device. This interface may also be called directly to make a copy of the MPSKernel on a new device. Typically, the same MPSKernel should not be used to encode kernels on multiple command buffers from multiple threads. Many MPSKernels have mutable properties that might be changed by the other thread while this one is trying to encode. If you need to use a MPSKernel from multiple threads, make a copy of it for each additional thread using -copyWithZone: or -copyWithZone:device:
- Overrides:
copyWithZoneDevice in class MPSKernel
- Parameters:
zone - The NSZone in which to allocate the object
device - The device for the new MPSKernel. If nil, then use self.device.
- Returns:
a pointer to a copy of this MPSKernel. This will fail, returning nil if the device is not supported. Devices must be MTLFeatureSet_iOS_GPUFamily2_v1 or later.
-
debugDescription_static
public static java.lang.String debugDescription_static()
-
depthWeight
public float depthWeight()
Controls how samples' depths are compared during reprojection, variance estimation, and bilateral filtering. The final weight is given by exp(-abs(Z1 - Z2) / depthWeight). Must be greater than zero. Defaults to 1.0.
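The documented weighting formula can be checked directly. A small standalone illustration (the helper name is hypothetical, not part of the API) of exp(-abs(Z1 - Z2) / depthWeight):

```java
public class DepthWeightDemo {
    // Depth comparison weight as documented: exp(-abs(z1 - z2) / depthWeight).
    // Identical depths give weight 1; the weight decays as depths diverge,
    // and a larger depthWeight parameter makes the comparison more tolerant.
    static double depthWeight(double z1, double z2, double depthWeightParam) {
        return Math.exp(-Math.abs(z1 - z2) / depthWeightParam);
    }

    public static void main(String[] args) {
        System.out.println(depthWeight(5.0, 5.0, 1.0)); // identical depths -> 1.0
        System.out.println(depthWeight(5.0, 6.0, 1.0)); // one unit apart -> exp(-1)
        System.out.println(depthWeight(5.0, 6.0, 4.0)); // larger parameter, higher weight
    }
}
```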
-
description_static
public static java.lang.String description_static()
-
encodeBilateralFilterToCommandBufferStepDistanceSourceTextureDestinationTextureDepthNormalTexture
public void encodeBilateralFilterToCommandBufferStepDistanceSourceTextureDestinationTextureDepthNormalTexture(MTLCommandBuffer commandBuffer, long stepDistance, MTLTexture sourceTexture, MTLTexture destinationTexture, MTLTexture depthNormalTexture)
Encode bilateral filter into a command buffer. Performs an edge-avoiding blur with radius given by the bilateralFilterRadius property, with sampling weighted by a Gaussian filter with sigma given by the bilateralFilterSigma property. Normal and depth values from neighboring pixels will be compared with depth and normal values of the center pixel to determine if they are similar enough to include in the blur. These values are weighted by the depthWeight, normalWeight, and luminanceWeight properties. Before the variance values are used for luminance weighting, the variance is prefiltered with a small Gaussian blur with radius given by the variancePrefilterRadius property and sigma given by the variancePrefilterSigma property.

This kernel should be run multiple times with a step distance of pow(2, i), starting with i = 0. It is recommended that the output of the first iteration be used as the image to be reprojected in the next frame. Then several more iterations should be run to compute the denoised image for the current frame. 5 total iterations is reasonable.

The bilateral filter can operate on two sets of source and destination textures simultaneously to share costs such as loading depth and normal values from memory, computing various weights, etc. The second set of textures may be nil. The two images are assumed to share the same normal and depth values. The number of channels to filter in the source image(s) is given by the channelCount and channelCount2 properties. Furthermore, the luminance variance is packed into the final channel of the source image(s) to reduce the number of texture sample instructions required. The filtered color and variance values are packed the same way in the destination image(s). Therefore, the source and destination images must have at least channelCount + 1 and channelCount2 + 1 channels. Channels beyond the required number are ignored when reading from source images and set to zero when writing to destination images.

The source image should be produced by either the variance estimation kernel or a previous iteration of the bilateral filter. The depth/normal texture must contain the depth and normal values for directly visible geometry for the current frame for each pixel. These values are packed into a four-channel texture to reduce the number of texture sampling instructions required to load them. The first channel must store the depth value from zero to infinity. The normals must be stored in the last three channels as the three signed X, Y, and Z components, each between negative one and one.
- Parameters:
commandBuffer - Command buffer to encode into
stepDistance - Number of pixels to skip between samples
sourceTexture - Source packed color and variance texture
destinationTexture - Destination packed color and variance texture
depthNormalTexture - The depth and normal values for the current frame
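The recommended step-distance schedule is simply pow(2, i) per iteration. A small standalone illustration (the helper is hypothetical, not part of the API):

```java
public class StepDistanceDemo {
    // Step distances for the dilated bilateral blur iterations:
    // pow(2, i) for i = 0 .. iterations-1, i.e. 1, 2, 4, 8, 16 for 5 iterations.
    static long[] stepDistances(int iterations) {
        long[] steps = new long[iterations];
        for (int i = 0; i < iterations; i++) {
            steps[i] = 1L << i; // pow(2, i)
        }
        return steps;
    }

    public static void main(String[] args) {
        // The docs suggest 5 total iterations; the dilation cheaply
        // approximates a much larger single-pass bilateral blur.
        System.out.println(java.util.Arrays.toString(stepDistances(5)));
    }
}
```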
-
encodeBilateralFilterToCommandBufferStepDistanceSourceTextureDestinationTextureSourceTexture2DestinationTexture2DepthNormalTexture
public void encodeBilateralFilterToCommandBufferStepDistanceSourceTextureDestinationTextureSourceTexture2DestinationTexture2DepthNormalTexture(MTLCommandBuffer commandBuffer, long stepDistance, MTLTexture sourceTexture, MTLTexture destinationTexture, MTLTexture sourceTexture2, MTLTexture destinationTexture2, MTLTexture depthNormalTexture)
Encode bilateral filter into a command buffer. Performs an edge-avoiding blur with radius given by the bilateralFilterRadius property, with sampling weighted by a Gaussian filter with sigma given by the bilateralFilterSigma property. Normal and depth values from neighboring pixels will be compared with depth and normal values of the center pixel to determine if they are similar enough to include in the blur. These values are weighted by the depthWeight, normalWeight, and luminanceWeight properties. Before the variance values are used for luminance weighting, the variance is prefiltered with a small Gaussian blur with radius given by the variancePrefilterRadius property and sigma given by the variancePrefilterSigma property.

This kernel should be run multiple times with a step distance of pow(2, i), starting with i = 0. It is recommended that the output of the first iteration be used as the image to be reprojected in the next frame. Then several more iterations should be run to compute the denoised image for the current frame. 5 total iterations is reasonable.

The bilateral filter can operate on two sets of source and destination textures simultaneously to share costs such as loading depth and normal values from memory, computing various weights, etc. The second set of textures may be nil. The two images are assumed to share the same normal and depth values. The number of channels to filter in the source image(s) is given by the channelCount and channelCount2 properties. Furthermore, the luminance variance is packed into the final channel of the source image(s) to reduce the number of texture sample instructions required. The filtered color and variance values are packed the same way in the destination image(s). Therefore, the source and destination images must have at least channelCount + 1 and channelCount2 + 1 channels. Channels beyond the required number are ignored when reading from source images and set to zero when writing to destination images.

The source image should be produced by either the variance estimation kernel or a previous iteration of the bilateral filter. The depth/normal texture must contain the depth and normal values for directly visible geometry for the current frame for each pixel. These values are packed into a four-channel texture to reduce the number of texture sampling instructions required to load them. The first channel must store the depth value from zero to infinity. The normals must be stored in the last three channels as the three signed X, Y, and Z components, each between negative one and one.
- Parameters:
commandBuffer - Command buffer to encode into
stepDistance - Number of pixels to skip between samples
sourceTexture - Source packed color and variance texture
destinationTexture - Destination packed color and variance texture
sourceTexture2 - Second source image
destinationTexture2 - Second destination image
depthNormalTexture - The depth and normal values for the current frame
-
encodeReprojectionToCommandBufferSourceTexturePreviousTextureDestinationTexturePreviousLuminanceMomentsTextureDestinationLuminanceMomentsTexturePreviousFrameCountTextureDestinationFrameCountTextureMotionVectorTextureDepthNormalTexturePreviousDepthNormalTexture
public void encodeReprojectionToCommandBufferSourceTexturePreviousTextureDestinationTexturePreviousLuminanceMomentsTextureDestinationLuminanceMomentsTexturePreviousFrameCountTextureDestinationFrameCountTextureMotionVectorTextureDepthNormalTexturePreviousDepthNormalTexture(MTLCommandBuffer commandBuffer, MTLTexture sourceTexture, MTLTexture previousTexture, MTLTexture destinationTexture, MTLTexture previousLuminanceMomentsTexture, MTLTexture destinationLuminanceMomentsTexture, MTLTexture previousFrameCountTexture, MTLTexture destinationFrameCountTexture, MTLTexture motionVectorTexture, MTLTexture depthNormalTexture, MTLTexture previousDepthNormalTexture)
Encode reprojection into a command buffer Normal and depth values from the previous frame will be compared with normal and depth values from the current frame to determine if they are similar enough to reproject into the current frame. These values are weighted by the depthWeight and normalWeight properties. If the combined weight exceeds the reprojectionThreshold property's value, the previous frame will be blended with the current frame according to the temporalWeighting and temporalReprojectionBlendFactor properties. The reprojection kernel can operate on two sets of source and destination textures simultaneously to share costs such as loading depth and normal values from memory, computing various weights, etc. The second set of textures may be nil. The two images are assumed to share the same depth and normal values. The number of channels in the source image(s), previous frame's image(s), and destination image(s) are given by the channelCount and channelCount2 properties. These images must have at least as many channels as given by these properties. Channels beyond the required number are ignored when reading from source images and set to zero when writing to the destination images, except the alpha channel which will be set to one if present. The previous frame's image will be ignored on the first frame. The source and destination luminance moments textures must be at least two-channel textures, which will be set to the accumulated first and second moments of luminance. Channels beyond the first two will be ignored when reading from the previous frame's texture and set to zero when writing to the destination texture. The previous frame's luminance moments will be ignored on the first frame. The frame count textures track the number of accumulated frames and must be at least R32Uint textures. The remaining channels will be ignored when reading from the source texture and set to zero when writing to the destination texture, if present. 
The previous frame count texture must be cleared to zero on the first frame, or whenever the accumulated images should be reset to the current frame's image.

The motion vector texture must be at least a two-channel texture describing how many texels each texel in the source image(s) has moved since the previous frame. Any remaining channels are ignored. This texture may be nil, in which case the motion vector is assumed to be zero, which is suitable for static images.

The depth/normal texture must contain the depth and normal values of the directly visible geometry for each pixel of the current frame. These values are packed into a four-channel texture to reduce the number of texture sampling instructions required to load them. The first channel must store the depth value, from zero to infinity. The normals must be stored in the last three channels as the signed X, Y, and Z components, each between negative one and one. The depth and normal values are not required if the motion vector texture is nil.

The destination texture, destination luminance moments texture, and destination frame count texture are used by subsequent stages of the denoising filter. The destination frame count texture is also used as the source frame count texture by the reprojection kernel in the next frame.
- Parameters:
commandBuffer - Command buffer to encode into
sourceTexture - Current frame to denoise
previousTexture - Previous denoised frame to reproject into the current frame
destinationTexture - Output blended image
previousLuminanceMomentsTexture - Previous accumulated luminance moments image
destinationLuminanceMomentsTexture - Output accumulated luminance moments image
previousFrameCountTexture - The number of frames accumulated in the previous source image
destinationFrameCountTexture - The number of frames accumulated in the destination texture(s), including the current frame
motionVectorTexture - Motion vector texture
depthNormalTexture - The depth and normal values for the current frame
previousDepthNormalTexture - The depth and normal values for the previous frame
-
encodeReprojectionToCommandBufferSourceTexturePreviousTextureDestinationTexturePreviousLuminanceMomentsTextureDestinationLuminanceMomentsTextureSourceTexture2PreviousTexture2DestinationTexture2PreviousLuminanceMomentsTexture2DestinationLuminanceMomentsTexture2PreviousFrameCountTextureDestinationFrameCountTextureMotionVectorTextureDepthNormalTexturePreviousDepthNormalTexture
public void encodeReprojectionToCommandBufferSourceTexturePreviousTextureDestinationTexturePreviousLuminanceMomentsTextureDestinationLuminanceMomentsTextureSourceTexture2PreviousTexture2DestinationTexture2PreviousLuminanceMomentsTexture2DestinationLuminanceMomentsTexture2PreviousFrameCountTextureDestinationFrameCountTextureMotionVectorTextureDepthNormalTexturePreviousDepthNormalTexture(MTLCommandBuffer commandBuffer, MTLTexture sourceTexture, MTLTexture previousTexture, MTLTexture destinationTexture, MTLTexture previousLuminanceMomentsTexture, MTLTexture destinationLuminanceMomentsTexture, MTLTexture sourceTexture2, MTLTexture previousTexture2, MTLTexture destinationTexture2, MTLTexture previousLuminanceMomentsTexture2, MTLTexture destinationLuminanceMomentsTexture2, MTLTexture previousFrameCountTexture, MTLTexture destinationFrameCountTexture, MTLTexture motionVectorTexture, MTLTexture depthNormalTexture, MTLTexture previousDepthNormalTexture)
Encodes reprojection into a command buffer.

Normal and depth values from the previous frame are compared with normal and depth values from the current frame to determine whether they are similar enough to reproject into the current frame. These values are weighted by the depthWeight and normalWeight properties. If the combined weight exceeds the reprojectionThreshold property's value, the previous frame is blended with the current frame according to the temporalWeighting and temporalReprojectionBlendFactor properties.

The reprojection kernel can operate on two sets of source and destination textures simultaneously to share costs such as loading depth and normal values from memory and computing various weights. The second set of textures may be nil. The two images are assumed to share the same depth and normal values.

The number of channels in the source image(s), previous frame's image(s), and destination image(s) is given by the channelCount and channelCount2 properties. These images must have at least as many channels as given by these properties. Channels beyond the required number are ignored when reading from the source images and set to zero when writing to the destination images, except the alpha channel, which is set to one if present. The previous frame's image is ignored on the first frame.

The source and destination luminance moments textures must have at least two channels, which receive the accumulated first and second moments of luminance. Channels beyond the first two are ignored when reading from the previous frame's texture and set to zero when writing to the destination texture. The previous frame's luminance moments are ignored on the first frame.

The frame count textures track the number of accumulated frames and must be at least R32Uint textures. Any remaining channels are ignored when reading from the source texture and set to zero when writing to the destination texture.
The previous frame count texture must be cleared to zero on the first frame, or whenever the accumulated images should be reset to the current frame's image.

The motion vector texture must be at least a two-channel texture describing how many texels each texel in the source image(s) has moved since the previous frame. Any remaining channels are ignored. This texture may be nil, in which case the motion vector is assumed to be zero, which is suitable for static images.

The depth/normal texture must contain the depth and normal values of the directly visible geometry for each pixel of the current frame. These values are packed into a four-channel texture to reduce the number of texture sampling instructions required to load them. The first channel must store the depth value, from zero to infinity. The normals must be stored in the last three channels as the signed X, Y, and Z components, each between negative one and one. The depth and normal values are not required if the motion vector texture is nil.

The destination texture, destination luminance moments texture, and destination frame count texture are used by subsequent stages of the denoising filter. The destination frame count texture is also used as the source frame count texture by the reprojection kernel in the next frame.
- Parameters:
commandBuffer - Command buffer to encode into
sourceTexture - Current frame to denoise
previousTexture - Previous denoised frame to reproject into the current frame
destinationTexture - Output blended image
previousLuminanceMomentsTexture - Previous accumulated luminance moments image
destinationLuminanceMomentsTexture - Output accumulated luminance moments image
sourceTexture2 - Second source image
previousTexture2 - Second previous image
destinationTexture2 - Second destination image
previousLuminanceMomentsTexture2 - Second previous luminance moments texture
destinationLuminanceMomentsTexture2 - Second destination luminance moments texture
previousFrameCountTexture - The number of frames accumulated in the previous source image
destinationFrameCountTexture - The number of frames accumulated in the destination texture(s), including the current frame
motionVectorTexture - Motion vector texture
depthNormalTexture - The depth and normal values for the current frame
previousDepthNormalTexture - The depth and normal values for the previous frame
-
encodeVarianceEstimationToCommandBufferSourceTextureLuminanceMomentsTextureDestinationTextureFrameCountTextureDepthNormalTexture
public void encodeVarianceEstimationToCommandBufferSourceTextureLuminanceMomentsTextureDestinationTextureFrameCountTextureDepthNormalTexture(MTLCommandBuffer commandBuffer, MTLTexture sourceTexture, MTLTexture luminanceMomentsTexture, MTLTexture destinationTexture, MTLTexture frameCountTexture, MTLTexture depthNormalTexture)
Encodes variance estimation into a command buffer.

Variance is computed from the accumulated first and second luminance moments. If the number of accumulated frames is below the minimumFramesForVarianceEstimation property, the luminance variance is instead computed using a spatial estimate. The spatial estimate is computed using a bilateral filter whose radius is given by the varianceEstimationRadius property. Neighboring samples are weighted according to a Gaussian function whose sigma is given by the varianceEstimationSigma property. Normal and depth values from neighboring pixels are compared with the depth and normal values of the center pixel to determine whether they are similar enough to include in the spatial blur. These values are weighted by the depthWeight and normalWeight properties.

The variance kernel can operate on two sets of source and destination textures simultaneously to share costs such as loading depth and normal values from memory and computing various weights. The second set of textures may be nil. The two images are assumed to share the same depth and normal values.

The reprojected source texture, luminance moments texture, and frame count texture are computed by the reprojection kernel. The computed variance is stored in the last channel of the destination image, while the source image is copied into the preceding channels, to reduce the number of texture sampling instructions required by the bilateral filter in the final stage of the denoising kernel. The number of channels in the source image(s) is given by the channelCount and channelCount2 properties. Therefore, the destination image(s) must have at least channelCount + 1 and channelCount2 + 1 channels, and the source image(s) must have at least channelCount and channelCount2 channels. Channels beyond the required number are ignored when reading from the source textures and set to zero when writing to the destination textures.
The depth/normal texture must contain the depth and normal values of the directly visible geometry for each pixel of the current frame. These values are packed into a four-channel texture to reduce the number of texture sampling instructions required to load them. The first channel must store the depth value, from zero to infinity. The normals must be stored in the last three channels as the signed X, Y, and Z components, each between negative one and one. If the minimumFramesForVarianceEstimation property is less than or equal to one, variance is estimated directly from the accumulated luminance moments, so the depth/normal texture may be nil.
- Parameters:
commandBuffer - Command buffer to encode into
sourceTexture - Current reprojected frame to denoise
luminanceMomentsTexture - Luminance moments texture
destinationTexture - Output packed color and variance image
frameCountTexture - Number of frames accumulated into the source image
depthNormalTexture - The depth and normal values for the current frame
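As a concrete illustration of the moments-to-variance step (the class and method names here are hypothetical, not part of the MPS API): given the accumulated first moment E[L] and second moment E[L^2] from the luminance moments texture, per-pixel variance follows from the standard identity Var[L] = E[L^2] - E[L]^2.

```java
// Sketch: how luminance variance falls out of the two accumulated moments.
// m1 = E[L], the accumulated mean luminance; m2 = E[L^2], the accumulated
// mean of squared luminance. The result is clamped at zero to guard against
// small negative values caused by floating-point error.
public class LuminanceVariance {
    public static float variance(float m1, float m2) {
        return Math.max(m2 - m1 * m1, 0.0f);
    }

    public static void main(String[] args) {
        // Two accumulated samples with luminances 0.2 and 0.4:
        // m1 = 0.3, m2 = (0.04 + 0.16) / 2 = 0.1, variance ~ 0.01
        System.out.println(variance(0.3f, 0.1f));
    }
}
```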
-
encodeVarianceEstimationToCommandBufferSourceTextureLuminanceMomentsTextureDestinationTextureSourceTexture2LuminanceMomentsTexture2DestinationTexture2FrameCountTextureDepthNormalTexture
public void encodeVarianceEstimationToCommandBufferSourceTextureLuminanceMomentsTextureDestinationTextureSourceTexture2LuminanceMomentsTexture2DestinationTexture2FrameCountTextureDepthNormalTexture(MTLCommandBuffer commandBuffer, MTLTexture sourceTexture, MTLTexture luminanceMomentsTexture, MTLTexture destinationTexture, MTLTexture sourceTexture2, MTLTexture luminanceMomentsTexture2, MTLTexture destinationTexture2, MTLTexture frameCountTexture, MTLTexture depthNormalTexture)
Encodes variance estimation into a command buffer.

Variance is computed from the accumulated first and second luminance moments. If the number of accumulated frames is below the minimumFramesForVarianceEstimation property, the luminance variance is instead computed using a spatial estimate. The spatial estimate is computed using a bilateral filter whose radius is given by the varianceEstimationRadius property. Neighboring samples are weighted according to a Gaussian function whose sigma is given by the varianceEstimationSigma property. Normal and depth values from neighboring pixels are compared with the depth and normal values of the center pixel to determine whether they are similar enough to include in the spatial blur. These values are weighted by the depthWeight and normalWeight properties.

The variance kernel can operate on two sets of source and destination textures simultaneously to share costs such as loading depth and normal values from memory and computing various weights. The second set of textures may be nil. The two images are assumed to share the same depth and normal values.

The reprojected source texture, luminance moments texture, and frame count texture are computed by the reprojection kernel. The computed variance is stored in the last channel of the destination image, while the source image is copied into the preceding channels, to reduce the number of texture sampling instructions required by the bilateral filter in the final stage of the denoising kernel. The number of channels in the source image(s) is given by the channelCount and channelCount2 properties. Therefore, the destination image(s) must have at least channelCount + 1 and channelCount2 + 1 channels, and the source image(s) must have at least channelCount and channelCount2 channels. Channels beyond the required number are ignored when reading from the source textures and set to zero when writing to the destination textures.
The depth/normal texture must contain the depth and normal values of the directly visible geometry for each pixel of the current frame. These values are packed into a four-channel texture to reduce the number of texture sampling instructions required to load them. The first channel must store the depth value, from zero to infinity. The normals must be stored in the last three channels as the signed X, Y, and Z components, each between negative one and one. If the minimumFramesForVarianceEstimation property is less than or equal to one, variance is estimated directly from the accumulated luminance moments, so the depth/normal texture may be nil.
- Parameters:
commandBuffer - Command buffer to encode into
sourceTexture - Current reprojected frame to denoise
luminanceMomentsTexture - Luminance moments texture
destinationTexture - Output packed color and variance image
sourceTexture2 - Second source image
luminanceMomentsTexture2 - Second luminance moments image
destinationTexture2 - Second destination image
frameCountTexture - Number of frames accumulated into the source image
depthNormalTexture - The depth and normal values for the current frame
-
encodeWithCoder
public void encodeWithCoder(NSCoder coder)
- Specified by: encodeWithCoder in interface NSCoding
- Overrides: encodeWithCoder in class MPSKernel
-
hash_static
public static long hash_static()
-
initWithCoder
public MPSSVGF initWithCoder(NSCoder coder)
Description copied from interface: NSCoding
NS_DESIGNATED_INITIALIZER
- Specified by: initWithCoder in interface NSCoding
- Overrides: initWithCoder in class MPSKernel
-
initWithCoderDevice
public MPSSVGF initWithCoderDevice(NSCoder aDecoder, java.lang.Object device)
Description copied from class: MPSKernel
NSSecureCoding compatibility: While the standard NSSecureCoding/NSCoding method -initWithCoder: should work, the file can't know which device your data is allocated on, so we have to guess and may guess incorrectly. To avoid that problem, use initWithCoder:device instead.
- Overrides: initWithCoderDevice in class MPSKernel
- Parameters:
aDecoder - The NSCoder subclass with your serialized MPSKernel
device - The MTLDevice on which to make the MPSKernel
- Returns: A new MPSKernel object, or nil on failure.
-
initWithDevice
public MPSSVGF initWithDevice(java.lang.Object device)
Description copied from class: MPSKernel
Standard init with default properties per filter type.
- Overrides: initWithDevice in class MPSKernel
- Parameters:
device - The device that the filter will be used on. May not be NULL.
- Returns: A pointer to the newly initialized object. This will fail, returning nil, if the device is not supported. Devices must be MTLFeatureSet_iOS_GPUFamily2_v1 or later.
-
instanceMethodForSelector
public static NSObject.Function_instanceMethodForSelector_ret instanceMethodForSelector(org.moe.natj.objc.SEL aSelector)
-
instanceMethodSignatureForSelector
public static NSMethodSignature instanceMethodSignatureForSelector(org.moe.natj.objc.SEL aSelector)
-
instancesRespondToSelector
public static boolean instancesRespondToSelector(org.moe.natj.objc.SEL aSelector)
-
isSubclassOfClass
public static boolean isSubclassOfClass(org.moe.natj.objc.Class aClass)
-
keyPathsForValuesAffectingValueForKey
public static NSSet<java.lang.String> keyPathsForValuesAffectingValueForKey(java.lang.String key)
-
luminanceWeight
public float luminanceWeight()
Controls how samples' luminance values are compared during bilateral filtering. The final weight is given by exp(-abs(L1 - L2) / (luminanceWeight * luminanceVariance + EPSILON)). Must be greater than or equal to zero. Defaults to 4.
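The luminance weighting formula above can be sketched directly (the class name is hypothetical, and the EPSILON value is an assumption, since the documentation does not specify it):

```java
// Sketch of the bilateral luminance weighting from the property docs:
// exp(-abs(L1 - L2) / (luminanceWeight * luminanceVariance + EPSILON)).
// EPSILON prevents division by zero when the variance is zero; its exact
// value is not documented, so 1e-5 here is an assumption.
public class LuminanceWeighting {
    private static final float EPSILON = 1e-5f;

    public static float weight(float l1, float l2,
                               float luminanceWeight, float luminanceVariance) {
        return (float) Math.exp(-Math.abs(l1 - l2)
                / (luminanceWeight * luminanceVariance + EPSILON));
    }

    public static void main(String[] args) {
        // With high variance (a noisy history) dissimilar luminances still get
        // a meaningful weight (~0.74); with near-zero variance they are
        // sharply rejected (weight near 0).
        System.out.println(weight(0.5f, 0.8f, 4.0f, 0.25f));
        System.out.println(weight(0.5f, 0.8f, 4.0f, 0.0001f));
    }
}
```

The effect is that noisy regions (high estimated variance) blur more aggressively, while converged regions preserve luminance edges.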
-
minimumFramesForVarianceEstimation
public long minimumFramesForVarianceEstimation()
The minimum number of frames that must be accumulated before variance can be computed directly from the accumulated luminance moments. If not enough frames have been accumulated, variance will be estimated with a spatial filter instead. Defaults to 4.
-
new_objc
public static java.lang.Object new_objc()
-
normalWeight
public float normalWeight()
Controls how samples' normals are compared during reprojection, variance estimation, and bilateral filtering. The final weight is given by pow(max(dot(N1, N2), 0), normalWeight). Must be greater than or equal to zero. Defaults to 128.
-
reprojectionThreshold
public float reprojectionThreshold()
The minimum combined depth and normal weight required during reprojection to consider a pixel from the previous frame consistent with a pixel from the current frame. Must be greater than or equal to zero. Defaults to 0.01.
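Putting the depthWeight and normalWeight formulas together with this threshold gives the consistency test the reprojection kernel performs. A sketch follows; the class name is hypothetical, and combining the two weights as a product is an assumption, since the documentation says only "combined":

```java
// Sketch of the reprojection consistency test. The individual weighting
// functions are the ones given in the depthWeight and normalWeight property
// docs; multiplying them to form the combined weight is an assumption.
public class ReprojectionConsistency {
    // exp(-abs(Z1 - Z2) / depthWeight), per the depthWeight documentation.
    public static float depthWeightOf(float z1, float z2, float depthWeight) {
        return (float) Math.exp(-Math.abs(z1 - z2) / depthWeight);
    }

    // pow(max(dot(N1, N2), 0), normalWeight), per the normalWeight documentation.
    public static float normalWeightOf(float[] n1, float[] n2, float normalWeight) {
        float dot = n1[0] * n2[0] + n1[1] * n2[1] + n1[2] * n2[2];
        return (float) Math.pow(Math.max(dot, 0.0f), normalWeight);
    }

    public static boolean consistent(float z1, float z2, float[] n1, float[] n2,
                                     float depthWeight, float normalWeight,
                                     float reprojectionThreshold) {
        float w = depthWeightOf(z1, z2, depthWeight)
                * normalWeightOf(n1, n2, normalWeight);
        return w > reprojectionThreshold;
    }

    public static void main(String[] args) {
        float[] up = {0.0f, 1.0f, 0.0f};
        float[] right = {1.0f, 0.0f, 0.0f};
        // Identical depth and normal: weight is 1, well above the 0.01 default.
        System.out.println(consistent(1.0f, 1.0f, up, up, 1.0f, 128.0f, 0.01f));
        // Perpendicular normals: the normal weight collapses to 0, so the
        // previous frame's pixel is rejected and history is discarded.
        System.out.println(consistent(1.0f, 1.0f, up, right, 1.0f, 128.0f, 0.01f));
    }
}
```

The large default normalWeight of 128 makes this test very sensitive to normal disagreement, which is what keeps history from bleeding across geometric edges.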
-
resolveClassMethod
public static boolean resolveClassMethod(org.moe.natj.objc.SEL sel)
-
resolveInstanceMethod
public static boolean resolveInstanceMethod(org.moe.natj.objc.SEL sel)
-
setBilateralFilterRadius
public void setBilateralFilterRadius(long value)
The radius of the bilateral filter. Defaults to 2 resulting in a 5x5 filter.
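The radius-to-footprint relation stated here (and for varianceEstimationRadius and variancePrefilterRadius below) is the usual one: a radius r filter spans (2r + 1) x (2r + 1) texels. A trivial sketch, with a hypothetical class name:

```java
// Relation between the filter radius properties and the kernel footprint:
// a radius r filter covers a (2r + 1) x (2r + 1) neighborhood of texels.
public class FilterFootprint {
    public static int kernelSize(int radius) {
        return 2 * radius + 1;
    }

    public static void main(String[] args) {
        System.out.println(kernelSize(2)); // 5 -> the default 5x5 bilateral filter
        System.out.println(kernelSize(3)); // 7 -> the default 7x7 variance estimation filter
        System.out.println(kernelSize(1)); // 3 -> the default 3x3 variance pre-filter
    }
}
```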
-
setBilateralFilterSigma
public void setBilateralFilterSigma(float value)
The sigma value of the Gaussian function used by the bilateral filter. Must be greater than zero. Defaults to 1.2.
-
setChannelCount2
public void setChannelCount2(long value)
The number of channels to filter in the second source image. Must be at least one and at most three. Defaults to 3.
-
setChannelCount
public void setChannelCount(long value)
The number of channels to filter in the source image. Must be at least one and at most three. Defaults to 3.
-
setDepthWeight
public void setDepthWeight(float value)
Controls how samples' depths are compared during reprojection, variance estimation, and bilateral filtering. The final weight is given by exp(-abs(Z1 - Z2) / depthWeight). Must be greater than zero. Defaults to 1.0.
-
setLuminanceWeight
public void setLuminanceWeight(float value)
Controls how samples' luminance values are compared during bilateral filtering. The final weight is given by exp(-abs(L1 - L2) / (luminanceWeight * luminanceVariance + EPSILON)). Must be greater than or equal to zero. Defaults to 4.
-
setMinimumFramesForVarianceEstimation
public void setMinimumFramesForVarianceEstimation(long value)
The minimum number of frames that must be accumulated before variance can be computed directly from the accumulated luminance moments. If not enough frames have been accumulated, variance will be estimated with a spatial filter instead. Defaults to 4.
-
setNormalWeight
public void setNormalWeight(float value)
Controls how samples' normals are compared during reprojection, variance estimation, and bilateral filtering. The final weight is given by pow(max(dot(N1, N2), 0), normalWeight). Must be greater than or equal to zero. Defaults to 128.
-
setReprojectionThreshold
public void setReprojectionThreshold(float value)
The minimum combined depth and normal weight required during reprojection to consider a pixel from the previous frame consistent with a pixel from the current frame. Must be greater than or equal to zero. Defaults to 0.01.
-
setTemporalReprojectionBlendFactor
public void setTemporalReprojectionBlendFactor(float value)
When using MPSTemporalWeightingExponentialMovingAverage, how much to blend the current frame with the previous frame during reprojection. The final value is given by current * temporalReprojectionBlendFactor + previous * (1 - temporalReprojectionBlendFactor). Must be between zero and one, inclusive. Defaults to 0.2.
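The two temporal weighting modes can be sketched as follows. The EMA formula is the one given above; the running-average formula for MPSTemporalWeightingAverage is an assumption based on the accumulated frame count, and the class name is hypothetical:

```java
// Sketch of the two temporal weighting modes used during reprojection.
public class TemporalBlend {
    // MPSTemporalWeightingExponentialMovingAverage, per the property docs:
    // current * blendFactor + previous * (1 - blendFactor)
    public static float ema(float current, float previous, float blendFactor) {
        return current * blendFactor + previous * (1.0f - blendFactor);
    }

    // Assumed behavior of MPSTemporalWeightingAverage: each of the n
    // accumulated frames and the current frame are weighted equally, using
    // the count from the frame count texture.
    public static float average(float current, float previous, int accumulatedFrames) {
        return (previous * accumulatedFrames + current) / (accumulatedFrames + 1);
    }

    public static void main(String[] args) {
        // With blendFactor 0.2 and four accumulated frames, the two modes
        // happen to agree on this input: both yield 0.2.
        System.out.println(ema(1.0f, 0.0f, 0.2f));
        System.out.println(average(1.0f, 0.0f, 4));
    }
}
```

The difference shows up over time: the running average gives a new frame ever-smaller influence as the count grows, while the EMA keeps the current frame's influence fixed at the blend factor, trading slower convergence for faster response to lighting changes.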
-
setTemporalWeighting
public void setTemporalWeighting(long value)
How to weight samples during temporal reprojection. Defaults to MPSTemporalWeightingAverage.
-
setVarianceEstimationRadius
public void setVarianceEstimationRadius(long value)
The radius of the spatial filter used when not enough frames have been accumulated to compute variance from accumulated luminance moments. Defaults to 3 resulting in a 7x7 filter.
-
setVarianceEstimationSigma
public void setVarianceEstimationSigma(float value)
The sigma value of the Gaussian function used by the spatial filter used when not enough frames have been accumulated to compute variance from accumulated luminance moments. Must be greater than zero. Defaults to 2.0.
-
setVariancePrefilterRadius
public void setVariancePrefilterRadius(long value)
The radius of the variance pre-filter of the bilateral filter. Defaults to 1 resulting in a 3x3 filter.
-
setVariancePrefilterSigma
public void setVariancePrefilterSigma(float value)
The sigma value of the Gaussian function used by the variance pre-filter of the bilateral filter. Must be greater than zero. Defaults to 1.33.
-
setVersion_static
public static void setVersion_static(long aVersion)
-
superclass_static
public static org.moe.natj.objc.Class superclass_static()
-
supportsSecureCoding
public static boolean supportsSecureCoding()
-
_supportsSecureCoding
public boolean _supportsSecureCoding()
Description copied from interface: NSSecureCoding
This property must return YES on all classes that allow secure coding. Subclasses of classes that adopt NSSecureCoding and override initWithCoder: must also override this method and return YES. The Secure Coding Guide should be consulted when writing methods that decode data.
- Specified by: _supportsSecureCoding in interface NSSecureCoding
- Overrides: _supportsSecureCoding in class MPSKernel
-
temporalReprojectionBlendFactor
public float temporalReprojectionBlendFactor()
When using MPSTemporalWeightingExponentialMovingAverage, how much to blend the current frame with the previous frame during reprojection. The final value is given by current * temporalReprojectionBlendFactor + previous * (1 - temporalReprojectionBlendFactor). Must be between zero and one, inclusive. Defaults to 0.2.
-
temporalWeighting
public long temporalWeighting()
How to weight samples during temporal reprojection. Defaults to MPSTemporalWeightingAverage.
-
varianceEstimationRadius
public long varianceEstimationRadius()
The radius of the spatial filter used when not enough frames have been accumulated to compute variance from accumulated luminance moments. Defaults to 3 resulting in a 7x7 filter.
-
varianceEstimationSigma
public float varianceEstimationSigma()
The sigma value of the Gaussian function used by the spatial filter used when not enough frames have been accumulated to compute variance from accumulated luminance moments. Must be greater than zero. Defaults to 2.0.
-
variancePrefilterRadius
public long variancePrefilterRadius()
The radius of the variance pre-filter of the bilateral filter. Defaults to 1 resulting in a 3x3 filter.
-
variancePrefilterSigma
public float variancePrefilterSigma()
The sigma value of the Gaussian function used by the variance pre-filter of the bilateral filter. Must be greater than zero. Defaults to 1.33.
-
version_static
public static long version_static()
-
-