
How do I create a CVPixelBuffer directly from a CIImage instead of a UIImage in Swift?

  •  1
  • Chewie The Chorkie  · asked 6 years ago

    I'm recording filtered video through the iPhone camera, and CPU usage spikes when I convert the CIImage to a UIImage in real time while recording. My buffer function that builds the CVPixelBuffer takes a UIImage, which so far forces me to make this conversion. If possible, I'd like a buffer function that takes a CIImage instead, so I can skip converting the CIImage to a UIImage. I'm thinking this would give a huge boost to performance while recording video, because there wouldn't be any handoff between the CPU and GPU.

    Here is what I have right now. In my captureOutput function, I create a UIImage from the CIImage, which is the filtered image. I create a CVPixelBuffer from the UIImage via the buffer function and append it to the asset writer's pixel buffer input:

    let imageUI = UIImage(ciImage: ciImage)
    
    let filteredBuffer:CVPixelBuffer? = buffer(from: imageUI)
    
    let success = self.assetWriterPixelBufferInput?.append(filteredBuffer!, withPresentationTime: self.currentSampleTime!)
    

    The buffer function that takes a UIImage:

    func buffer(from image: UIImage) -> CVPixelBuffer? {
        let attrs = [kCVPixelBufferCGImageCompatibilityKey: kCFBooleanTrue, kCVPixelBufferCGBitmapContextCompatibilityKey: kCFBooleanTrue] as CFDictionary
        var pixelBuffer : CVPixelBuffer?
        let status = CVPixelBufferCreate(kCFAllocatorDefault, Int(image.size.width), Int(image.size.height), kCVPixelFormatType_32ARGB, attrs, &pixelBuffer)
    
        guard (status == kCVReturnSuccess) else {
            return nil
        }
    
        CVPixelBufferLockBaseAddress(pixelBuffer!, CVPixelBufferLockFlags(rawValue: 0))
        let pixelData = CVPixelBufferGetBaseAddress(pixelBuffer!)
    
        let videoRecContext = CGContext(data: pixelData,
                                width: Int(image.size.width),
                                height: Int(image.size.height),
                                bitsPerComponent: 8,
                                bytesPerRow: videoRecBytesPerRow,
                                space: (MTLCaptureView?.colorSpace)!, // It's getting the current colorspace from a MTKView
                                bitmapInfo: CGImageAlphaInfo.noneSkipFirst.rawValue)
    
        videoRecContext?.translateBy(x: 0, y: image.size.height)
        videoRecContext?.scaleBy(x: 1.0, y: -1.0)
    
        UIGraphicsPushContext(videoRecContext!)
        image.draw(in: CGRect(x: 0, y: 0, width: image.size.width, height: image.size.height))
        UIGraphicsPopContext()
        CVPixelBufferUnlockBaseAddress(pixelBuffer!, CVPixelBufferLockFlags(rawValue: 0))
    
        return pixelBuffer
    }
    
    2 Answers
        1
  •  2
  •   rob mayoff    6 years ago

    Create a CIContext and use it to render the CIImage directly into your CVPixelBuffer, using CIContext.render(_: CIImage, to buffer: CVPixelBuffer).
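
    A minimal sketch of that approach, assuming ciImage is the filtered image and pixelBuffer is a CVPixelBuffer of matching size (the context should be created once and reused, not rebuilt every frame; filterContext here is just an illustrative property name):

    import CoreImage
    import CoreVideo
    
    // Created once (for example as a property); building a CIContext per frame is expensive.
    let filterContext = CIContext()
    
    func render(_ ciImage: CIImage, into pixelBuffer: CVPixelBuffer) {
        // Renders the filtered CIImage straight into the pixel buffer,
        // with no intermediate UIImage or CGContext drawing.
        filterContext.render(ciImage, to: pixelBuffer)
    }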

        2
  •  1
  •   Chewie The Chorkie    6 years ago

    To expand on the answer I got from rob mayoff, here are the changes I made:

    In the captureOutput function, I changed my code to:

    let filteredBuffer : CVPixelBuffer? = buffer(from: ciImage)
    
    filterContext?.render(ciImage, to: filteredBuffer!)
    
    let success = self.assetWriterPixelBufferInput?.append(filteredBuffer!, withPresentationTime: self.currentSampleTime!)
    

    Notice that the buffer function is now passed a CIImage. I reworked the buffer function to take a CIImage, which let me strip out a lot of what was in it:

    func buffer(from image: CIImage) -> CVPixelBuffer? {
        let attrs = [kCVPixelBufferCGImageCompatibilityKey: kCFBooleanTrue, kCVPixelBufferCGBitmapContextCompatibilityKey: kCFBooleanTrue] as CFDictionary
        var pixelBuffer : CVPixelBuffer?
        let status = CVPixelBufferCreate(kCFAllocatorDefault, Int(image.extent.width), Int(image.extent.height), kCVPixelFormatType_32ARGB, attrs, &pixelBuffer)
    
        guard (status == kCVReturnSuccess) else {
            return nil
        }
    
        return pixelBuffer
    }
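
    As a further note, since assetWriterPixelBufferInput is an AVAssetWriterInputPixelBufferAdaptor, its pixelBufferPool can also supply reusable buffers once the asset writer has started writing, which avoids calling CVPixelBufferCreate for every frame. A minimal sketch of that variant (not part of the original answer):

    import AVFoundation
    import CoreVideo
    
    // Sketch: pull a reusable buffer from the adaptor's pool instead of creating a new one each frame.
    // The pool is only available after the asset writer has started writing.
    func pooledBuffer(from adaptor: AVAssetWriterInputPixelBufferAdaptor) -> CVPixelBuffer? {
        guard let pool = adaptor.pixelBufferPool else { return nil }
        var pixelBuffer: CVPixelBuffer?
        let status = CVPixelBufferPoolCreatePixelBuffer(kCFAllocatorDefault, pool, &pixelBuffer)
        return status == kCVReturnSuccess ? pixelBuffer : nil
    }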
    