Class MPSSVGF

  • All Implemented Interfaces:
    NSCoding, NSCopying, NSSecureCoding, NSObject

    public class MPSSVGF
    extends MPSKernel
    implements NSSecureCoding, NSCopying
Reduces noise in images rendered with Monte Carlo ray tracing methods. This filter uses temporal reprojection to accumulate samples over time, followed by an edge-avoiding blur to smooth out the noise. It uses depth and surface normal textures to detect edges in the image(s) to be denoised. The filter also computes an estimate of the luminance variance of the accumulated samples for each pixel, which it uses to reject neighboring pixels whose luminance is too dissimilar while blurring.
This filter requires noise-free depth and normal textures, so it is not compatible with stochastic visibility effects such as depth of field, motion blur, or pixel subsampling; these effects need to be applied as a post-process instead. Furthermore, because the depth and normal textures can only represent directly visible geometry, the filter may over-blur reflections. The use of temporal reprojection may introduce artifacts such as ghosting or streaking, as well as a temporal lag for changes in luminance such as moving shadows. However, the filter is relatively fast, as it is intended for realtime use; slower but higher-quality filters are available in the literature.
This filter can process up to two images simultaneously, assuming they share the same depth and normal textures. This is typically faster than processing the two images independently because the memory bandwidth spent fetching depth and normal values and the ALU time spent computing various weighting functions can be shared by both images. This is useful if, e.g., you want to denoise direct and indirect lighting terms separately to avoid mixing the two. The filter is also optimized for processing single-channel images for effects such as shadows and ambient occlusion. Denoising these images can be much faster than denoising a full RGB image, so it may be useful to separate out these terms and denoise them specifically.
This filter operates in three stages: temporal reprojection, variance estimation, and finally a series of edge-avoiding bilateral blurs.
The temporal reprojection stage accepts the image to be denoised for the current frame, the denoised image from the previous frame, the depth and normal textures from the current and previous frames and, finally, a motion vector texture. It uses the motion vector texture to look up the accumulated samples from the previous frame, then compares the depth and normals to determine whether those samples are consistent with the current frame. If so, the previous frame is blended with the current frame. This stage also accumulates the first and second moments of the sample luminance, which are used to compute the luminance variance in the next stage.
The variance estimation stage computes an estimate of the variance of the luminance of the accumulated samples for each pixel. This stage may fall back to a spatial estimate if not enough samples have been accumulated. The luminance variance is used in the final stage to reject outlying neighboring pixels while blurring, to avoid blurring across luminance discontinuities such as shadow boundaries.
The final stage performs consecutive edge-avoiding bilateral blurs to smooth out noise in the image. The blurs are dilated with increasing power-of-two step distances starting from 1, which cheaply approximates a very large radius bilateral blur. Each iteration blurs both the input image and the variance image, as variance is reduced after each iteration. It is recommended that the output of the first iteration be used as the input to the next frame's reprojection stage to further reduce noise.
Tips:
- It may be helpful to divide out texture detail such as surface albedo before denoising, to avoid blurring that detail and to preserve any careful texture filtering that may have been performed. The albedo can be reapplied after denoising.
- High-frequency geometry and normal maps may cause excessive disocclusions during reprojection, manifesting as noise.
- Jittering sample positions from frame to frame for temporal antialiasing may also cause disocclusions. However, this can be partially hidden by the temporal antialiasing algorithm itself.
- This kernel, like many convolutions, requires quite a bit of bandwidth. Use the texture pixel formats with the smallest number of bits per pixel and the lowest resolution possible for the required quality level. Lower-resolution images can be combined with a bilateral upsampling filter, especially if the image being denoised is mostly low-frequency lighting or ambient occlusion.
- The increasing dilation during the bilateral blurring stage can introduce ringing artifacts around geometric discontinuities. These can be partially hidden, at the cost of potentially increased noise, by reducing the bilateral blur's sigma value slightly after each iteration.
- Use lower-precision pixel formats if possible to reduce memory bandwidth.
Refer to "Spatiotemporal Variance-Guided Filtering: Real-Time Reconstruction for Path-Traced Global Illumination" for more information.
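The power-of-two dilation and the sigma-reduction tip above can be sketched as plain arithmetic. The five-iteration recommendation comes from the encode method's documentation; the 0.9 decay factor below is an illustrative assumption, not an API value:

```java
public class SvgfBlurSchedule {
    // Step distance for blur iteration i: powers of two starting at 1,
    // which cheaply approximates a very large radius bilateral blur.
    public static long stepDistance(int i) {
        return 1L << i; // 1, 2, 4, 8, 16, ...
    }

    // Slightly reduce the bilateral sigma each iteration to hide ringing
    // around geometric discontinuities, at the cost of some extra noise.
    // The 0.9 decay factor is an assumption chosen for illustration.
    public static float sigmaForIteration(float baseSigma, int i) {
        return baseSigma * (float) Math.pow(0.9, i);
    }
}
```

With five total iterations, the step distances would be 1, 2, 4, 8, and 16 texels, with the output of the first iteration also saved as the reprojection input for the next frame.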
    • Constructor Detail

      • MPSSVGF

        protected MPSSVGF​(org.moe.natj.general.Pointer peer)
    • Method Detail

      • accessInstanceVariablesDirectly

        public static boolean accessInstanceVariablesDirectly()
      • alloc

        public static MPSSVGF alloc()
      • allocWithZone

        public static java.lang.Object allocWithZone​(org.moe.natj.general.ptr.VoidPtr zone)
      • automaticallyNotifiesObserversForKey

        public static boolean automaticallyNotifiesObserversForKey​(java.lang.String key)
      • bilateralFilterRadius

        public long bilateralFilterRadius()
        The radius of the bilateral filter. Defaults to 2, resulting in a 5x5 filter.
      • bilateralFilterSigma

        public float bilateralFilterSigma()
        The sigma value of the Gaussian function used by the bilateral filter. Must be greater than zero. Defaults to 1.2.
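For intuition, the spatial Gaussian weighting over the default 5x5 footprint (radius 2, sigma 1.2) can be sketched in plain Java. This is a hypothetical helper mirroring what the kernel computes on the GPU, not part of the API:

```java
public class BilateralGaussianWeight {
    // Spatial Gaussian weight for a sample (dx, dy) texels from the
    // center pixel. sigma must be greater than zero, matching the
    // bilateralFilterSigma property's constraint.
    public static float weight(int dx, int dy, float sigma) {
        float d2 = dx * dx + dy * dy;
        return (float) Math.exp(-d2 / (2.0f * sigma * sigma));
    }
}
```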
      • cancelPreviousPerformRequestsWithTarget

        public static void cancelPreviousPerformRequestsWithTarget​(java.lang.Object aTarget)
      • cancelPreviousPerformRequestsWithTargetSelectorObject

        public static void cancelPreviousPerformRequestsWithTargetSelectorObject​(java.lang.Object aTarget,
                                                                                 org.moe.natj.objc.SEL aSelector,
                                                                                 java.lang.Object anArgument)
      • channelCount

        public long channelCount()
        The number of channels to filter in the source image. Must be at least one and at most three. Defaults to 3.
      • channelCount2

        public long channelCount2()
        The number of channels to filter in the second source image. Must be at least one and at most three. Defaults to 3.
      • classFallbacksForKeyedArchiver

        public static NSArray<java.lang.String> classFallbacksForKeyedArchiver()
      • classForKeyedUnarchiver

        public static org.moe.natj.objc.Class classForKeyedUnarchiver()
      • copyWithZoneDevice

        public java.lang.Object copyWithZoneDevice​(org.moe.natj.general.ptr.VoidPtr zone,
                                                   MTLDevice device)
        Description copied from class: MPSKernel
        Make a copy of this MPSKernel for a new device. -copyWithZone: will call this API to make a copy of the MPSKernel on the same device. This interface may also be called directly to make a copy of the MPSKernel on a new device. Typically, the same MPSKernel should not be used to encode kernels on multiple command buffers from multiple threads: many MPSKernels have mutable properties that might be changed by one thread while another is trying to encode. If you need to use an MPSKernel from multiple threads, make a copy of it for each additional thread using -copyWithZone: or -copyWithZone:device:
        Overrides:
        copyWithZoneDevice in class MPSKernel
        Parameters:
        zone - The NSZone in which to allocate the object
        device - The device for the new MPSKernel. If nil, then use self.device.
        Returns:
        a pointer to a copy of this MPSKernel. This will fail, returning nil if the device is not supported. Devices must be MTLFeatureSet_iOS_GPUFamily2_v1 or later.
      • debugDescription_static

        public static java.lang.String debugDescription_static()
      • depthWeight

        public float depthWeight()
        Controls how samples' depths are compared during reprojection, variance estimation, and bilateral filtering. The final weight is given by exp(-abs(Z1 - Z2) / depthWeight). Must be greater than zero. Defaults to 1.0.
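The stated formula can be checked directly. The standalone helper below is for illustration only; the kernel evaluates this on the GPU:

```java
public class DepthSimilarity {
    // Depth comparison weight from the documented formula:
    // exp(-abs(Z1 - Z2) / depthWeight). depthWeight must be > 0.
    public static float weight(float z1, float z2, float depthWeight) {
        return (float) Math.exp(-Math.abs(z1 - z2) / depthWeight);
    }
}
```

Identical depths yield a weight of 1; larger depthWeight values make the filter more tolerant of depth differences.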
      • description_static

        public static java.lang.String description_static()
      • encodeBilateralFilterToCommandBufferStepDistanceSourceTextureDestinationTextureDepthNormalTexture

        public void encodeBilateralFilterToCommandBufferStepDistanceSourceTextureDestinationTextureDepthNormalTexture​(MTLCommandBuffer commandBuffer,
                                                                                                                      long stepDistance,
                                                                                                                      MTLTexture sourceTexture,
                                                                                                                      MTLTexture destinationTexture,
                                                                                                                      MTLTexture depthNormalTexture)
        Encode the bilateral filter into a command buffer. Performs an edge-avoiding blur with radius given by the bilateralFilterRadius property, with sampling weighted by a Gaussian filter with sigma given by the bilateralFilterSigma property. Normal and depth values from neighboring pixels are compared with the depth and normal values of the center pixel to determine whether they are similar enough to include in the blur. These values are weighted by the depthWeight, normalWeight, and luminanceWeight properties. Before the variance values are used for luminance weighting, the variance is prefiltered with a small Gaussian blur with radius given by the variancePrefilterRadius property and sigma given by the variancePrefilterSigma property.
        This kernel should be run multiple times with a step distance of pow(2, i), starting with i = 0. It is recommended that the output of the first iteration be used as the image to be reprojected in the next frame; then several more iterations should be run to compute the denoised image for the current frame. Five total iterations is reasonable.
        The bilateral filter can operate on two sets of source and destination textures simultaneously to share costs such as loading depth and normal values from memory, computing various weights, etc. The second set of textures may be nil. The two images are assumed to share the same normal and depth values. The number of channels to filter in the source image(s) is given by the channelCount and channelCount2 properties. Furthermore, the luminance variance is packed into the final channel of the source image(s) to reduce the number of texture sample instructions required. The filtered color and variance values are packed the same way in the destination image(s). Therefore, the source and destination images must have at least channelCount + 1 and channelCount2 + 1 channels. Channels beyond the required number are ignored when reading from source images and set to zero when writing to destination images.
The source image should be produced by either the variance estimation kernel or a previous iteration of the bilateral filter. The depth/normal texture must contain the depth and normal values of directly visible geometry for the current frame for each pixel. These values are packed into a four-channel texture to reduce the number of texture sampling instructions required to load them. The first channel must store the depth value, from zero to infinity. The normals must be stored in the last three channels as the three signed X, Y, and Z components, each between negative one and one.
        Parameters:
        commandBuffer - Command buffer to encode into
        stepDistance - Number of pixels to skip between samples
        sourceTexture - Source packed color and variance texture
        destinationTexture - Destination packed color and variance texture
        depthNormalTexture - The depth and normal values for the current frame
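The depth/normal layout described above (depth in the first channel, signed unit normal in the last three) can be sketched as follows. The normalization step is an assumption added to keep the components within [-1, 1]; it is not mandated by the API:

```java
public class DepthNormalTexel {
    // Pack a depth value and a surface normal into one four-channel texel,
    // matching the layout the filter expects: depth in channel 0, signed
    // X, Y, Z normal components in channels 1-3.
    public static float[] pack(float depth, float nx, float ny, float nz) {
        float len = (float) Math.sqrt(nx * nx + ny * ny + nz * nz);
        return new float[] { depth, nx / len, ny / len, nz / len };
    }
}
```

In practice this packing would be done in the shader that writes the depth/normal texture, not on the CPU.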
      • encodeBilateralFilterToCommandBufferStepDistanceSourceTextureDestinationTextureSourceTexture2DestinationTexture2DepthNormalTexture

        public void encodeBilateralFilterToCommandBufferStepDistanceSourceTextureDestinationTextureSourceTexture2DestinationTexture2DepthNormalTexture​(MTLCommandBuffer commandBuffer,
                                                                                                                                                       long stepDistance,
                                                                                                                                                       MTLTexture sourceTexture,
                                                                                                                                                       MTLTexture destinationTexture,
                                                                                                                                                       MTLTexture sourceTexture2,
                                                                                                                                                       MTLTexture destinationTexture2,
                                                                                                                                                       MTLTexture depthNormalTexture)
        Encode the bilateral filter into a command buffer. Performs an edge-avoiding blur with radius given by the bilateralFilterRadius property, with sampling weighted by a Gaussian filter with sigma given by the bilateralFilterSigma property. Normal and depth values from neighboring pixels are compared with the depth and normal values of the center pixel to determine whether they are similar enough to include in the blur. These values are weighted by the depthWeight, normalWeight, and luminanceWeight properties. Before the variance values are used for luminance weighting, the variance is prefiltered with a small Gaussian blur with radius given by the variancePrefilterRadius property and sigma given by the variancePrefilterSigma property.
        This kernel should be run multiple times with a step distance of pow(2, i), starting with i = 0. It is recommended that the output of the first iteration be used as the image to be reprojected in the next frame; then several more iterations should be run to compute the denoised image for the current frame. Five total iterations is reasonable.
        The bilateral filter can operate on two sets of source and destination textures simultaneously to share costs such as loading depth and normal values from memory, computing various weights, etc. The second set of textures may be nil. The two images are assumed to share the same normal and depth values. The number of channels to filter in the source image(s) is given by the channelCount and channelCount2 properties. Furthermore, the luminance variance is packed into the final channel of the source image(s) to reduce the number of texture sample instructions required. The filtered color and variance values are packed the same way in the destination image(s). Therefore, the source and destination images must have at least channelCount + 1 and channelCount2 + 1 channels. Channels beyond the required number are ignored when reading from source images and set to zero when writing to destination images.
The source image should be produced by either the variance estimation kernel or a previous iteration of the bilateral filter. The depth/normal texture must contain the depth and normal values of directly visible geometry for the current frame for each pixel. These values are packed into a four-channel texture to reduce the number of texture sampling instructions required to load them. The first channel must store the depth value, from zero to infinity. The normals must be stored in the last three channels as the three signed X, Y, and Z components, each between negative one and one.
        Parameters:
        commandBuffer - Command buffer to encode into
        stepDistance - Number of pixels to skip between samples
        sourceTexture - Source packed color and variance texture
        destinationTexture - Destination packed color and variance texture
        sourceTexture2 - Second source image
        destinationTexture2 - Second destination image
        depthNormalTexture - The depth and normal values for the current frame
      • encodeReprojectionToCommandBufferSourceTexturePreviousTextureDestinationTexturePreviousLuminanceMomentsTextureDestinationLuminanceMomentsTexturePreviousFrameCountTextureDestinationFrameCountTextureMotionVectorTextureDepthNormalTexturePreviousDepthNormalTexture

        public void encodeReprojectionToCommandBufferSourceTexturePreviousTextureDestinationTexturePreviousLuminanceMomentsTextureDestinationLuminanceMomentsTexturePreviousFrameCountTextureDestinationFrameCountTextureMotionVectorTextureDepthNormalTexturePreviousDepthNormalTexture​(MTLCommandBuffer commandBuffer,
                                                                                                                                                                                                                                                                                         MTLTexture sourceTexture,
                                                                                                                                                                                                                                                                                         MTLTexture previousTexture,
                                                                                                                                                                                                                                                                                         MTLTexture destinationTexture,
                                                                                                                                                                                                                                                                                         MTLTexture previousLuminanceMomentsTexture,
                                                                                                                                                                                                                                                                                         MTLTexture destinationLuminanceMomentsTexture,
                                                                                                                                                                                                                                                                                         MTLTexture previousFrameCountTexture,
                                                                                                                                                                                                                                                                                         MTLTexture destinationFrameCountTexture,
                                                                                                                                                                                                                                                                                         MTLTexture motionVectorTexture,
                                                                                                                                                                                                                                                                                         MTLTexture depthNormalTexture,
                                                                                                                                                                                                                                                                                         MTLTexture previousDepthNormalTexture)
        Encode reprojection into a command buffer. Normal and depth values from the previous frame are compared with normal and depth values from the current frame to determine whether they are similar enough to reproject into the current frame. These values are weighted by the depthWeight and normalWeight properties. If the combined weight exceeds the reprojectionThreshold property's value, the previous frame is blended with the current frame according to the temporalWeighting and temporalReprojectionBlendFactor properties.
        The reprojection kernel can operate on two sets of source and destination textures simultaneously to share costs such as loading depth and normal values from memory, computing various weights, etc. The second set of textures may be nil. The two images are assumed to share the same depth and normal values. The number of channels in the source image(s), previous frame's image(s), and destination image(s) is given by the channelCount and channelCount2 properties. These images must have at least as many channels as given by these properties. Channels beyond the required number are ignored when reading from source images and set to zero when writing to the destination images, except the alpha channel, which is set to one if present. The previous frame's image is ignored on the first frame.
        The source and destination luminance moments textures must have at least two channels, which are set to the accumulated first and second moments of luminance. Channels beyond the first two are ignored when reading from the previous frame's texture and set to zero when writing to the destination texture. The previous frame's luminance moments are ignored on the first frame. The frame count textures track the number of accumulated frames and must be at least R32Uint textures. The remaining channels, if present, are ignored when reading from the source texture and set to zero when writing to the destination texture.
The previous frame count texture must be cleared to zero on the first frame, or to reset the accumulated images to the current frame's image. The motion vector texture must have at least two channels, representing how many texels each texel in the source image(s) has moved since the previous frame. The remaining channels, if present, are ignored. This texture may be nil, in which case the motion vector is assumed to be zero, which is suitable for static images. The depth/normal texture must contain the depth and normal values of directly visible geometry for the current frame for each pixel. These values are packed into a four-channel texture to reduce the number of texture sampling instructions required to load them. The first channel must store the depth value, from zero to infinity. The normals must be stored in the last three channels as the three signed X, Y, and Z components, each between negative one and one. The depth and normal values are not required if the motion vector texture is nil. The destination texture, destination luminance moments texture, and destination frame count texture are used by subsequent stages of the denoising filter. The destination frame count texture is also used as the source frame count texture for the reprojection kernel in the next frame.
        Parameters:
        commandBuffer - Command buffer to encode into
        sourceTexture - Current frame to denoise
        previousTexture - Previous denoised frame to reproject into current frame
        destinationTexture - Output blended image
        previousLuminanceMomentsTexture - Previous accumulated luminance moments image
        destinationLuminanceMomentsTexture - Output accumulated luminance moments image
        previousFrameCountTexture - The number of frames accumulated in the previous source image
        destinationFrameCountTexture - The number of frames accumulated in the destination texture(s) including the current frame
        motionVectorTexture - Motion vector texture
        depthNormalTexture - The depth and normal values for the current frame
        previousDepthNormalTexture - The depth and normal values for the previous frame
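The motion vector lookup amounts to offsetting the current texel coordinate by the stored vector. The subtraction below assumes the vector points from the previous position toward the current one; verify the convention your renderer uses, as this sketch is illustrative and not part of the API:

```java
public class MotionVectorLookup {
    // Where to sample the previous frame for a texel at (x, y), given a
    // motion vector in texels moved since the previous frame. The sign
    // convention here is an assumption for illustration.
    public static float[] previousCoord(float x, float y, float mvX, float mvY) {
        return new float[] { x - mvX, y - mvY };
    }
}
```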
      • encodeReprojectionToCommandBufferSourceTexturePreviousTextureDestinationTexturePreviousLuminanceMomentsTextureDestinationLuminanceMomentsTextureSourceTexture2PreviousTexture2DestinationTexture2PreviousLuminanceMomentsTexture2DestinationLuminanceMomentsTexture2PreviousFrameCountTextureDestinationFrameCountTextureMotionVectorTextureDepthNormalTexturePreviousDepthNormalTexture

        public void encodeReprojectionToCommandBufferSourceTexturePreviousTextureDestinationTexturePreviousLuminanceMomentsTextureDestinationLuminanceMomentsTextureSourceTexture2PreviousTexture2DestinationTexture2PreviousLuminanceMomentsTexture2DestinationLuminanceMomentsTexture2PreviousFrameCountTextureDestinationFrameCountTextureMotionVectorTextureDepthNormalTexturePreviousDepthNormalTexture​(MTLCommandBuffer commandBuffer,
                                                                                                                                                                                                                                                                                                                                                                                                             MTLTexture sourceTexture,
                                                                                                                                                                                                                                                                                                                                                                                                             MTLTexture previousTexture,
                                                                                                                                                                                                                                                                                                                                                                                                             MTLTexture destinationTexture,
                                                                                                                                                                                                                                                                                                                                                                                                             MTLTexture previousLuminanceMomentsTexture,
                                                                                                                                                                                                                                                                                                                                                                                                             MTLTexture destinationLuminanceMomentsTexture,
                                                                                                                                                                                                                                                                                                                                                                                                             MTLTexture sourceTexture2,
                                                                                                                                                                                                                                                                                                                                                                                                             MTLTexture previousTexture2,
                                                                                                                                                                                                                                                                                                                                                                                                             MTLTexture destinationTexture2,
                                                                                                                                                                                                                                                                                                                                                                                                             MTLTexture previousLuminanceMomentsTexture2,
                                                                                                                                                                                                                                                                                                                                                                                                             MTLTexture destinationLuminanceMomentsTexture2,
                                                                                                                                                                                                                                                                                                                                                                                                             MTLTexture previousFrameCountTexture,
                                                                                                                                                                                                                                                                                                                                                                                                             MTLTexture destinationFrameCountTexture,
                                                                                                                                                                                                                                                                                                                                                                                                             MTLTexture motionVectorTexture,
                                                                                                                                                                                                                                                                                                                                                                                                             MTLTexture depthNormalTexture,
                                                                                                                                                                                                                                                                                                                                                                                                             MTLTexture previousDepthNormalTexture)
        Encode reprojection into a command buffer. Normal and depth values from the previous frame are compared with those from the current frame to determine whether they are similar enough to reproject into the current frame; these comparisons are weighted by the depthWeight and normalWeight properties. If the combined weight exceeds the reprojectionThreshold property's value, the previous frame is blended with the current frame according to the temporalWeighting and temporalReprojectionBlendFactor properties.
        The reprojection kernel can operate on two sets of source and destination textures simultaneously to share costs such as loading depth and normal values from memory and computing the various weights. The second set of textures may be nil. The two images are assumed to share the same depth and normal values.
        The number of channels in the source image(s), previous frame's image(s), and destination image(s) is given by the channelCount and channelCount2 properties. These images must have at least as many channels as those properties specify. Channels beyond the required number are ignored when reading from the source images and set to zero when writing to the destination images, except the alpha channel, which is set to one if present. The previous frame's image is ignored on the first frame.
        The source and destination luminance moments textures must have at least two channels, which are set to the accumulated first and second moments of luminance. Channels beyond the first two are ignored when reading from the previous frame's texture and set to zero when writing to the destination texture. The previous frame's luminance moments are ignored on the first frame.
        The frame count textures track the number of accumulated frames and must be at least R32Uint textures. Any remaining channels are ignored when reading from the source texture and set to zero when writing to the destination texture. The previous frame count texture must be cleared to zero on the first frame, or whenever the accumulated images should be reset to the current frame's image.
        The motion vector texture must have at least two channels and describes how many texels each texel in the source image(s) has moved since the previous frame; any remaining channels are ignored. This texture may be nil, in which case the motion vectors are assumed to be zero, which is suitable for static images.
        The depth/normal texture must contain the depth and normal values of directly visible geometry for each pixel of the current frame. These values are packed into a four-channel texture to reduce the number of texture sampling instructions required to load them. The first channel must store the depth value, from zero to infinity. The normals must be stored in the last three channels as signed X, Y, and Z components, each between negative one and one. The depth and normal values are not required if the motion vector texture is nil.
        The destination texture, destination luminance moments texture, and destination frame count texture are used by subsequent stages of the denoising filter. The destination frame count texture is also used as the source frame count texture by the reprojection kernel in the next frame.
        Parameters:
        commandBuffer - Command buffer to encode into
        sourceTexture - Current frame to denoise
        previousTexture - Previous denoised frame to reproject into current frame
        destinationTexture - Output blended image
        previousLuminanceMomentsTexture - Previous accumulated luminance moments image
        destinationLuminanceMomentsTexture - Output accumulated luminance moments image
        sourceTexture2 - Second source image
        previousTexture2 - Second previous image
        destinationTexture2 - Second destination image
        previousLuminanceMomentsTexture2 - Second previous luminance moments texture
        destinationLuminanceMomentsTexture2 - Second destination luminance moments texture
        previousFrameCountTexture - The number of frames accumulated in the previous source image
        destinationFrameCountTexture - The number of frames accumulated in the destination texture(s) including the current frame
        motionVectorTexture - Motion vector texture
        depthNormalTexture - The depth and normal values for the current frame
        previousDepthNormalTexture - The depth and normal values for the previous frame
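        A per-frame reprojection pass might be encoded roughly as below. This is a sketch: every MTLTexture variable is assumed to have been created elsewhere with the pixel formats described above, and the method name is reconstructed from the MOE selector-derived naming convention shown for the variance methods in this document, so verify the exact spelling against the generated bindings.

        ```java
        // Sketch of the reprojection stage; all textures are assumed to exist.
        // The second image set is unused here, so null is passed for it.
        svgf.encodeReprojectionToCommandBufferSourceTexturePreviousTextureDestinationTexturePreviousLuminanceMomentsTextureDestinationLuminanceMomentsTextureSourceTexture2PreviousTexture2DestinationTexture2PreviousLuminanceMomentsTexture2DestinationLuminanceMomentsTexture2PreviousFrameCountTextureDestinationFrameCountTextureMotionVectorTextureDepthNormalTexturePreviousDepthNormalTexture(
                commandBuffer,
                noisyColor,        // sourceTexture: current noisy frame
                prevDenoised,      // previousTexture: last frame's blended result
                reprojected,       // destinationTexture: output blended image
                prevMoments,       // previousLuminanceMomentsTexture
                moments,           // destinationLuminanceMomentsTexture
                null, null, null, null, null, // second image set unused
                prevFrameCount,    // must be cleared to zero on the first frame
                frameCount,        // destinationFrameCountTexture
                motionVectors,     // may be null for a static camera
                depthNormals,      // packed depth + normal, current frame
                prevDepthNormals); // packed depth + normal, previous frame
        ```

        Note the ping-pong pattern: this frame's destination texture, luminance moments, and frame count become next frame's "previous" inputs.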
      • encodeVarianceEstimationToCommandBufferSourceTextureLuminanceMomentsTextureDestinationTextureFrameCountTextureDepthNormalTexture

        public void encodeVarianceEstimationToCommandBufferSourceTextureLuminanceMomentsTextureDestinationTextureFrameCountTextureDepthNormalTexture​(MTLCommandBuffer commandBuffer,
                                                                                                                                                     MTLTexture sourceTexture,
                                                                                                                                                     MTLTexture luminanceMomentsTexture,
                                                                                                                                                     MTLTexture destinationTexture,
                                                                                                                                                     MTLTexture frameCountTexture,
                                                                                                                                                     MTLTexture depthNormalTexture)
        Encode variance estimation into a command buffer. Variance is computed from the accumulated first and second luminance moments. If the number of accumulated frames is below the minimumFramesForVarianceEstimation property, the luminance variance is computed using a spatial estimate instead. The spatial estimate uses a bilateral filter whose radius is given by the varianceEstimationRadius property; neighboring samples are weighted according to a Gaussian function with sigma given by the varianceEstimationSigma property. Depth and normal values from neighboring pixels are compared with those of the center pixel to determine whether they are similar enough to include in the spatial blur; these comparisons are weighted by the depthWeight and normalWeight properties.
        The variance kernel can operate on two sets of source and destination textures simultaneously to share costs such as loading depth and normal values from memory and computing the various weights. The second set of textures may be nil. The two images are assumed to share the same depth and normal values.
        The reprojected source texture, luminance moments texture, and frame count texture are computed by the reprojection kernel. The computed variance is stored in the last channel of the destination image, while the source image is copied into the preceding channels, to reduce the number of texture sampling instructions required by the bilateral filter in the final stage of the denoising kernel. The number of channels in the source image(s) is given by the channelCount and channelCount2 properties. Therefore, the destination image(s) must have at least channelCount + 1 and channelCount2 + 1 channels, and the source image(s) must have at least channelCount and channelCount2 channels. Channels beyond the required number are ignored when reading from source textures and set to zero when writing to destination textures.
        The depth/normal texture must contain the depth and normal values of directly visible geometry for each pixel of the current frame. These values are packed into a four-channel texture to reduce the number of texture sampling instructions required to load them. The first channel must store the depth value, from zero to infinity. The normals must be stored in the last three channels as signed X, Y, and Z components, each between negative one and one. If the minimumFramesForVarianceEstimation property is less than or equal to one, variance is estimated directly from the accumulated luminance moments, so the depth/normal texture may be nil.
        Parameters:
        commandBuffer - Command buffer to encode into
        sourceTexture - Current reprojected frame to denoise
        luminanceMomentsTexture - Luminance moments texture
        destinationTexture - Output packed color and variance image
        frameCountTexture - Number of frames accumulated into the source image
        depthNormalTexture - The depth and normal values for the current frame
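        Chained after the reprojection pass, the variance estimation stage might be encoded as below. This is a sketch using the single-image method name documented above; the texture variables are assumptions, and `packedColorVariance` is assumed to have channelCount + 1 channels (for example RGBA for a three-channel source).

        ```java
        // Sketch of the variance estimation stage. Inputs are the outputs of
        // the reprojection pass; the destination packs color + variance.
        svgf.encodeVarianceEstimationToCommandBufferSourceTextureLuminanceMomentsTextureDestinationTextureFrameCountTextureDepthNormalTexture(
                commandBuffer,
                reprojected,         // sourceTexture from the reprojection pass
                moments,             // accumulated luminance moments
                packedColorVariance, // color in first channels, variance in last
                frameCount,          // frames accumulated into the source
                depthNormals);       // packed depth + normal, current frame
        ```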
      • encodeVarianceEstimationToCommandBufferSourceTextureLuminanceMomentsTextureDestinationTextureSourceTexture2LuminanceMomentsTexture2DestinationTexture2FrameCountTextureDepthNormalTexture

        public void encodeVarianceEstimationToCommandBufferSourceTextureLuminanceMomentsTextureDestinationTextureSourceTexture2LuminanceMomentsTexture2DestinationTexture2FrameCountTextureDepthNormalTexture​(MTLCommandBuffer commandBuffer,
                                                                                                                                                                                                              MTLTexture sourceTexture,
                                                                                                                                                                                                              MTLTexture luminanceMomentsTexture,
                                                                                                                                                                                                              MTLTexture destinationTexture,
                                                                                                                                                                                                              MTLTexture sourceTexture2,
                                                                                                                                                                                                              MTLTexture luminanceMomentsTexture2,
                                                                                                                                                                                                              MTLTexture destinationTexture2,
                                                                                                                                                                                                              MTLTexture frameCountTexture,
                                                                                                                                                                                                              MTLTexture depthNormalTexture)
        Encode variance estimation into a command buffer. Variance is computed from the accumulated first and second luminance moments. If the number of accumulated frames is below the minimumFramesForVarianceEstimation property, the luminance variance is computed using a spatial estimate instead. The spatial estimate uses a bilateral filter whose radius is given by the varianceEstimationRadius property; neighboring samples are weighted according to a Gaussian function with sigma given by the varianceEstimationSigma property. Depth and normal values from neighboring pixels are compared with those of the center pixel to determine whether they are similar enough to include in the spatial blur; these comparisons are weighted by the depthWeight and normalWeight properties.
        The variance kernel can operate on two sets of source and destination textures simultaneously to share costs such as loading depth and normal values from memory and computing the various weights. The second set of textures may be nil. The two images are assumed to share the same depth and normal values.
        The reprojected source texture, luminance moments texture, and frame count texture are computed by the reprojection kernel. The computed variance is stored in the last channel of the destination image, while the source image is copied into the preceding channels, to reduce the number of texture sampling instructions required by the bilateral filter in the final stage of the denoising kernel. The number of channels in the source image(s) is given by the channelCount and channelCount2 properties. Therefore, the destination image(s) must have at least channelCount + 1 and channelCount2 + 1 channels, and the source image(s) must have at least channelCount and channelCount2 channels. Channels beyond the required number are ignored when reading from source textures and set to zero when writing to destination textures.
        The depth/normal texture must contain the depth and normal values of directly visible geometry for each pixel of the current frame. These values are packed into a four-channel texture to reduce the number of texture sampling instructions required to load them. The first channel must store the depth value, from zero to infinity. The normals must be stored in the last three channels as signed X, Y, and Z components, each between negative one and one. If the minimumFramesForVarianceEstimation property is less than or equal to one, variance is estimated directly from the accumulated luminance moments, so the depth/normal texture may be nil.
        Parameters:
        commandBuffer - Command buffer to encode into
        sourceTexture - Current reprojected frame to denoise
        luminanceMomentsTexture - Luminance moments texture
        destinationTexture - Output packed color and variance image
        sourceTexture2 - Second source image
        luminanceMomentsTexture2 - Second luminance moments image
        destinationTexture2 - Second destination image
        frameCountTexture - Number of frames accumulated into the source image
        depthNormalTexture - The depth and normal values for the current frame
      • hash_static

        public static long hash_static()
      • initWithCoderDevice

        public MPSSVGF initWithCoderDevice​(NSCoder aDecoder,
                                           java.lang.Object device)
        Description copied from class: MPSKernel
        NSSecureCoding compatibility. While the standard NSSecureCoding/NSCoding method -initWithCoder: should work, the file can't know which device your data is allocated on, so we have to guess and may guess incorrectly. To avoid that problem, use initWithCoder:device instead.
        Overrides:
        initWithCoderDevice in class MPSKernel
        Parameters:
        aDecoder - The NSCoder subclass with your serialized MPSKernel
        device - The MTLDevice on which to make the MPSKernel
        Returns:
        A new MPSKernel object, or nil if failure.
      • initWithDevice

        public MPSSVGF initWithDevice​(java.lang.Object device)
        Description copied from class: MPSKernel
        Standard init with default properties per filter type
        Overrides:
        initWithDevice in class MPSKernel
        Parameters:
        device - The device that the filter will be used on. May not be NULL.
        Returns:
        A pointer to the newly initialized object. Returns nil if the device is not supported; devices must support MTLFeatureSet_iOS_GPUFamily2_v1 or later.
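        Creating and configuring the filter might look like the sketch below. The `device` variable and the numeric temporal-weighting value are assumptions: `device` is a MTLDevice obtained elsewhere, and the raw value 1 is assumed to correspond to MPSTemporalWeightingExponentialMovingAverage in the bindings (check the generated MPSTemporalWeighting constants).

        ```java
        // Sketch: allocate and configure an MPSSVGF denoiser for a
        // single-channel term such as shadows or ambient occlusion.
        MPSSVGF svgf = MPSSVGF.alloc().initWithDevice(device);
        if (svgf == null) {
            throw new IllegalStateException("device does not support MPSSVGF");
        }
        // Single-channel images denoise much faster than full RGB.
        svgf.setChannelCount(1);
        // Assumed raw value for MPSTemporalWeightingExponentialMovingAverage.
        svgf.setTemporalWeighting(1);
        // Respond faster to lighting changes at the cost of more noise.
        svgf.setTemporalReprojectionBlendFactor(0.2f);
        ```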
      • instanceMethodSignatureForSelector

        public static NSMethodSignature instanceMethodSignatureForSelector​(org.moe.natj.objc.SEL aSelector)
      • instancesRespondToSelector

        public static boolean instancesRespondToSelector​(org.moe.natj.objc.SEL aSelector)
      • isSubclassOfClass

        public static boolean isSubclassOfClass​(org.moe.natj.objc.Class aClass)
      • keyPathsForValuesAffectingValueForKey

        public static NSSet<java.lang.String> keyPathsForValuesAffectingValueForKey​(java.lang.String key)
      • luminanceWeight

        public float luminanceWeight()
        Controls how samples' luminance values are compared during bilateral filtering. The final weight is given by exp(-abs(L1 - L2) / (luminanceWeight * luminanceVariance + EPSILON)). Must be greater than or equal to zero. Defaults to 4.
      • minimumFramesForVarianceEstimation

        public long minimumFramesForVarianceEstimation()
        The minimum number of frames which must be accumulated before variance can be computed directly from the accumulated luminance moments. If enough frames have not been accumulated, variance will be estimated with a spatial filter instead. Defaults to 4.
      • new_objc

        public static java.lang.Object new_objc()
      • normalWeight

        public float normalWeight()
        Controls how samples' normals are compared during reprojection, variance estimation, and bilateral filtering. The final weight is given by pow(max(0, dot(N1, N2)), normalWeight). Must be greater than or equal to zero. Defaults to 128.
      • reprojectionThreshold

        public float reprojectionThreshold()
        During reprojection, minimum combined depth and normal weight needed to consider a pixel from the previous frame consistent with a pixel from the current frame. Must be greater than or equal to zero. Defaults to 0.01.
      • resolveClassMethod

        public static boolean resolveClassMethod​(org.moe.natj.objc.SEL sel)
      • resolveInstanceMethod

        public static boolean resolveInstanceMethod​(org.moe.natj.objc.SEL sel)
      • setBilateralFilterRadius

        public void setBilateralFilterRadius​(long value)
        The radius of the bilateral filter. Defaults to 2, resulting in a 5x5 filter.
      • setBilateralFilterSigma

        public void setBilateralFilterSigma​(float value)
        The sigma value of the Gaussian function used by the bilateral filter. Must be greater than zero. Defaults to 1.2.
      • setChannelCount2

        public void setChannelCount2​(long value)
        The number of channels to filter in the second source image. Must be at least one and at most three. Defaults to 3.
      • setChannelCount

        public void setChannelCount​(long value)
        The number of channels to filter in the source image. Must be at least one and at most three. Defaults to 3.
      • setDepthWeight

        public void setDepthWeight​(float value)
        Controls how samples' depths are compared during reprojection, variance estimation, and bilateral filtering. The final weight is given by exp(-abs(Z1 - Z2) / depthWeight). Must be greater than zero. Defaults to 1.0.
      • setLuminanceWeight

        public void setLuminanceWeight​(float value)
        Controls how samples' luminance values are compared during bilateral filtering. The final weight is given by exp(-abs(L1 - L2) / (luminanceWeight * luminanceVariance + EPSILON)). Must be greater than or equal to zero. Defaults to 4.
      • setMinimumFramesForVarianceEstimation

        public void setMinimumFramesForVarianceEstimation​(long value)
        The minimum number of frames which must be accumulated before variance can be computed directly from the accumulated luminance moments. If enough frames have not been accumulated, variance will be estimated with a spatial filter instead. Defaults to 4.
      • setNormalWeight

        public void setNormalWeight​(float value)
        Controls how samples' normals are compared during reprojection, variance estimation, and bilateral filtering. The final weight is given by pow(max(0, dot(N1, N2)), normalWeight). Must be greater than or equal to zero. Defaults to 128.
      • setReprojectionThreshold

        public void setReprojectionThreshold​(float value)
        During reprojection, minimum combined depth and normal weight needed to consider a pixel from the previous frame consistent with a pixel from the current frame. Must be greater than or equal to zero. Defaults to 0.01.
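        The depth and normal weighting formulas documented above can be evaluated in plain Java to see how the consistency test behaves. This is an illustration reconstructed from the documented formulas, not the actual kernel code; the per-pixel values are made up, and combining the two weights by multiplication is an assumption for the purpose of the example.

        ```java
        // Illustration of the documented depth/normal weighting formulas.
        float depthWeight = 1.0f;              // property default
        float normalWeight = 128.0f;           // property default
        float reprojectionThreshold = 0.01f;   // property default

        // Hypothetical per-pixel values: previous frame vs. current frame.
        float z1 = 10.0f, z2 = 10.05f;         // depths
        float[] n1 = {0.0f, 0.0f, 1.0f};       // unit normals
        float[] n2 = {0.0f, 0.1f, 0.995f};

        // exp(-abs(Z1 - Z2) / depthWeight)
        float wDepth = (float) Math.exp(-Math.abs(z1 - z2) / depthWeight);
        // pow(max(0, dot(N1, N2)), normalWeight)
        float dot = Math.max(0.0f, n1[0]*n2[0] + n1[1]*n2[1] + n1[2]*n2[2]);
        float wNormal = (float) Math.pow(dot, normalWeight);

        // A simple product is assumed here as the "combined" weight.
        float combined = wDepth * wNormal;
        boolean consistent = combined > reprojectionThreshold;
        ```

        With these values the small depth difference and slightly tilted normal still pass the threshold, so the previous frame's pixel would be blended in; a large depth jump or a near-perpendicular normal drives the weight toward zero and rejects the reprojection.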
      • setTemporalReprojectionBlendFactor

        public void setTemporalReprojectionBlendFactor​(float value)
        When using MPSTemporalWeightingExponentialMovingAverage, how much to blend the current frame with the previous frame during reprojection. The final value is given by current * temporalReprojectionBlendFactor + previous * (1 - temporalReprojectionBlendFactor). Must be between zero and one, inclusive. Defaults to 0.2.
      • setTemporalWeighting

        public void setTemporalWeighting​(long value)
        How to weight samples during temporal reprojection. Defaults to MPSTemporalWeightingAverage.
      • setVarianceEstimationRadius

        public void setVarianceEstimationRadius​(long value)
        The radius of the spatial filter used when not enough frames have been accumulated to compute variance from accumulated luminance moments. Defaults to 3, resulting in a 7x7 filter.
      • setVarianceEstimationSigma

        public void setVarianceEstimationSigma​(float value)
        The sigma value of the Gaussian function used by the spatial filter used when not enough frames have been accumulated to compute variance from accumulated luminance moments. Must be greater than zero. Defaults to 2.0.
      • setVariancePrefilterRadius

        public void setVariancePrefilterRadius​(long value)
        The radius of the variance pre-filter of the bilateral filter. Defaults to 1, resulting in a 3x3 filter.
      • setVariancePrefilterSigma

        public void setVariancePrefilterSigma​(float value)
        The sigma value of the Gaussian function used by the variance pre-filter of the bilateral filter. Must be greater than zero. Defaults to 1.33.
      • setVersion_static

        public static void setVersion_static​(long aVersion)
      • superclass_static

        public static org.moe.natj.objc.Class superclass_static()
      • supportsSecureCoding

        public static boolean supportsSecureCoding()
      • _supportsSecureCoding

        public boolean _supportsSecureCoding()
        Description copied from interface: NSSecureCoding
        This property must return YES on all classes that allow secure coding. Subclasses of classes that adopt NSSecureCoding and override initWithCoder: must also override this method and return YES. The Secure Coding Guide should be consulted when writing methods that decode data.
        Specified by:
        _supportsSecureCoding in interface NSSecureCoding
        Overrides:
        _supportsSecureCoding in class MPSKernel
      • temporalReprojectionBlendFactor

        public float temporalReprojectionBlendFactor()
        When using MPSTemporalWeightingExponentialMovingAverage, how much to blend the current frame with the previous frame during reprojection. The final value is given by current * temporalReprojectionBlendFactor + previous * (1 - temporalReprojectionBlendFactor). Must be between zero and one, inclusive. Defaults to 0.2.
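        The exponential-moving-average blend is a one-line computation; the sketch below evaluates the documented formula with made-up per-pixel values.

        ```java
        // current * blendFactor + previous * (1 - blendFactor)
        float blendFactor = 0.2f;               // temporalReprojectionBlendFactor
        float current = 1.0f, previous = 0.5f;  // hypothetical per-pixel values
        float blended = current * blendFactor + previous * (1.0f - blendFactor);
        // blended is approximately 0.6
        ```

        A smaller blend factor weights history more heavily, which smooths noise but increases temporal lag for moving shadows and other luminance changes.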
      • temporalWeighting

        public long temporalWeighting()
        How to weight samples during temporal reprojection. Defaults to MPSTemporalWeightingAverage.
      • varianceEstimationRadius

        public long varianceEstimationRadius()
        The radius of the spatial filter used when not enough frames have been accumulated to compute variance from accumulated luminance moments. Defaults to 3, resulting in a 7x7 filter.
      • varianceEstimationSigma

        public float varianceEstimationSigma()
        The sigma value of the Gaussian function used by the spatial filter used when not enough frames have been accumulated to compute variance from accumulated luminance moments. Must be greater than zero. Defaults to 2.0.
      • variancePrefilterRadius

        public long variancePrefilterRadius()
        The radius of the variance pre-filter of the bilateral filter. Defaults to 1, resulting in a 3x3 filter.
      • variancePrefilterSigma

        public float variancePrefilterSigma()
        The sigma value of the Gaussian function used by the variance pre-filter of the bilateral filter. Must be greater than zero. Defaults to 1.33.
      • version_static

        public static long version_static()