
How to synthesize sound with CoreAudio on iPhone/Mac

  •  17
  •  AlBlue RACGAMERUP  ·  16 years ago

    I want to play a synthesized sound on the iPhone. Instead of using a pre-recorded sound and SystemSoundID to play an existing binary, I'd like to synthesize it. Partly that's because I want to be able to play the sound continuously (for example, while the user's finger is on the screen) instead of as a one-off sound sample.

    If I wanted to synthesize a middle A (A4, 440 Hz), I could compute a sine wave with sin(); what I don't know is how to arrange those samples into a packet that CoreAudio can then play. Most of the tutorials on the net deal only with playing existing binaries.

    Can anyone help me with a simple synthesized 440 Hz sine wave?

    3 Answers  |  up to 13 years ago
        1
  •  13
  •   PeyloW    15 years ago

    What you probably want to do is set up an AudioQueue. It lets you fill a buffer with synthesized audio data in a callback. You would set up the AudioQueue to run in a new thread, like so:

    #include <AudioToolbox/AudioToolbox.h>

    #define BUFFER_SIZE 16384
    #define BUFFER_COUNT 3

    static AudioQueueRef audioQueue;

    void SetupAudioQueue() {
        OSStatus err = noErr;
        // Describe the stream: 44.1 kHz, 16-bit signed integer, interleaved stereo.
        AudioStreamBasicDescription deviceFormat;
        deviceFormat.mSampleRate = 44100;
        deviceFormat.mFormatID = kAudioFormatLinearPCM;
        deviceFormat.mFormatFlags = kLinearPCMFormatFlagIsSignedInteger
                                  | kLinearPCMFormatFlagIsPacked;
        deviceFormat.mBytesPerPacket = 4;
        deviceFormat.mFramesPerPacket = 1;
        deviceFormat.mBytesPerFrame = 4;
        deviceFormat.mChannelsPerFrame = 2;
        deviceFormat.mBitsPerChannel = 16;
        deviceFormat.mReserved = 0;
        // Create a new output AudioQueue for the device.
        err = AudioQueueNewOutput(&deviceFormat, AudioQueueCallback, NULL,
                                  CFRunLoopGetCurrent(), kCFRunLoopCommonModes,
                                  0, &audioQueue);
        // Allocate buffers for the AudioQueue, and pre-fill them.
        for (int i = 0; i < BUFFER_COUNT; ++i) {
            AudioQueueBufferRef mBuffer;
            err = AudioQueueAllocateBuffer(audioQueue, BUFFER_SIZE, &mBuffer);
            if (err != noErr) break;
            AudioQueueCallback(NULL, audioQueue, mBuffer);
        }
        if (err == noErr) err = AudioQueueStart(audioQueue, NULL);
        if (err == noErr) CFRunLoopRun();
    }
    

    The callback method AudioQueueCallback will then be called whenever the AudioQueue needs more data. Implement it with something like:

    void AudioQueueCallback(void* inUserData, AudioQueueRef inAQ,
                            AudioQueueBufferRef inBuffer) {
        void* pBuffer = inBuffer->mAudioData;
        UInt32 bytes = inBuffer->mAudioDataBytesCapacity;
        // Write max <bytes> bytes of audio to <pBuffer>
        inBuffer->mAudioDataByteSize = actualNumberOfBytesWritten;
        OSStatus err = AudioQueueEnqueueBuffer(inAQ, inBuffer, 0, NULL);
    }
    
        2
  •  3
  •   AlBlue RACGAMERUP    14 years ago

    Davide Vosti's link http://lists.apple.com/archives/coreaudio-api/2008/Dec/msg00173.html no longer works, since Apple's mailing lists seem to be unresponsive. Here is Google's cache of it for completeness.

    //
    //  AudioUnitTestAppDelegate.m
    //  AudioUnitTest
    //
    //  Created by Marc Vaillant on 11/25/08.
    //  Copyright __MyCompanyName__ 2008. All rights reserved.
    //
    
    #import "AudioUnitTestAppDelegate.h"
    #include <AudioUnit/AudioUnit.h>
    //#include "MachTimer.hpp"
    #include <vector>
    #include <iostream>
    
    using namespace std;
    
    #define kOutputBus 0
    #define kInputBus 1
    #define SAMPLE_RATE 44100
    
    vector<int> _pcm;
    int _index;
    
    @implementation AudioUnitTestAppDelegate
    
    @synthesize window;
    
    void generateTone(
                    vector<int>& pcm, 
                    int freq, 
                    double lengthMS, 
                    int sampleRate, 
                    double riseTimeMS, 
                    double gain)
    {
      int numSamples = ((double) sampleRate) * lengthMS / 1000.;
      int riseTimeSamples = ((double) sampleRate) * riseTimeMS / 1000.;
    
      if(gain > 1.)
        gain = 1.;
      if(gain < 0.)
        gain = 0.;
    
      pcm.resize(numSamples);
    
      for(int i = 0; i < numSamples; ++i)
      {
        double value = sin(2. * M_PI * freq * i / sampleRate);
        if(i < riseTimeSamples)
          value *= sin(i * M_PI / (2.0 * riseTimeSamples));
        if(i > numSamples - riseTimeSamples - 1)
          value *= sin(2. * M_PI * (i - (numSamples - riseTimeSamples) + riseTimeSamples)/ (4. * riseTimeSamples));
    
        pcm[i] = (int) (value * 32500.0 * gain);
        pcm[i] += (pcm[i]<<16);
      }
    
    }
    
    static OSStatus playbackCallback(void *inRefCon, 
                                      AudioUnitRenderActionFlags *ioActionFlags, 
                                      const AudioTimeStamp *inTimeStamp, 
                                      UInt32 inBusNumber, 
                                      UInt32 inNumberFrames, 
                                      AudioBufferList *ioData) 
    {    
        cout<<"index = "<<_index<<endl;
        cout<<"numBuffers = "<<ioData->mNumberBuffers<<endl;
    
        int totalNumberOfSamples = _pcm.size();
        for(UInt32 i = 0; i < ioData->mNumberBuffers; ++i)
        {
          int samplesLeft = totalNumberOfSamples - _index;
          int numSamples = ioData->mBuffers[i].mDataByteSize / 4;
          if(samplesLeft > 0)
          {
            if(samplesLeft < numSamples)
            {
              memcpy(ioData->mBuffers[i].mData, &_pcm[_index], samplesLeft * 4);
              _index += samplesLeft;
              memset((char*) ioData->mBuffers[i].mData + samplesLeft * 4, 0, (numSamples - samplesLeft) * 4) ;
            }
            else
            {
              memcpy(ioData->mBuffers[i].mData, &_pcm[_index], numSamples * 4) ;
              _index += numSamples;
            }
          }
          else
            memset(ioData->mBuffers[i].mData, 0, ioData->mBuffers[i].mDataByteSize);
        }
    
        return noErr;
    }
    
    - (void)applicationDidFinishLaunching:(UIApplication *)application 
    {    
      //generate pcm tone  freq = 800, duration = 1s, rise/fall time = 5ms
    
      generateTone(_pcm, 800, 1000, SAMPLE_RATE, 5, 0.8);
      _index = 0;
    
      OSStatus status;
      AudioComponentInstance audioUnit;
    
      // Describe audio component
      AudioComponentDescription desc;
      desc.componentType = kAudioUnitType_Output;
      desc.componentSubType = kAudioUnitSubType_RemoteIO;
      desc.componentFlags = 0;
      desc.componentFlagsMask = 0;
      desc.componentManufacturer = kAudioUnitManufacturer_Apple;
    
      // Get component
      AudioComponent inputComponent = AudioComponentFindNext(NULL, &desc);
    
      // Get audio units
      status = AudioComponentInstanceNew(inputComponent, &audioUnit);
      //checkStatus(status);
    
      UInt32 flag = 1;
      // Enable IO for playback
      status = AudioUnitSetProperty(audioUnit, 
                      kAudioOutputUnitProperty_EnableIO, 
                      kAudioUnitScope_Output, 
                      kOutputBus,
                      &flag, 
                      sizeof(flag));
      //checkStatus(status);
    
      // Describe format
    
      AudioStreamBasicDescription audioFormat;
      audioFormat.mSampleRate = SAMPLE_RATE;
      audioFormat.mFormatID = kAudioFormatLinearPCM;
      audioFormat.mFormatFlags = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked;
      audioFormat.mFramesPerPacket = 1;
      audioFormat.mChannelsPerFrame = 2;
      audioFormat.mBitsPerChannel = 16;
      audioFormat.mBytesPerPacket = 4;
      audioFormat.mBytesPerFrame = 4;
    
      // Apply format
    
      status = AudioUnitSetProperty(audioUnit, 
                      kAudioUnitProperty_StreamFormat, 
                      kAudioUnitScope_Input, 
                      kOutputBus, 
                      &audioFormat, 
                      sizeof(audioFormat));
    //  checkStatus(status);
    
      // Set output callback
      AURenderCallbackStruct callbackStruct;
      callbackStruct.inputProc = playbackCallback;
      callbackStruct.inputProcRefCon = self;
      status = AudioUnitSetProperty(audioUnit, 
                      kAudioUnitProperty_SetRenderCallback, 
                      kAudioUnitScope_Global, 
                      kOutputBus,
                      &callbackStruct, 
                      sizeof(callbackStruct));
    
      // Initialize
      status = AudioUnitInitialize(audioUnit);
    
      // Start playing
    
      status = AudioOutputUnitStart(audioUnit);
    
      [window makeKeyAndVisible];
    }
    
    
    - (void)dealloc {
        [window release];
        [super dealloc];
    }
    
    
    @end
    
        3
  •  0
  •   mahboudz    16 years ago

    Many of the audio technologies allow data, rather than sound files, to be passed in. AVAudioPlayer, for example, has:

    -initWithData:error:
    Initializes and returns an audio player for playing a designated memory buffer.
    
    - (id)initWithData:(NSData *)data error:(NSError **)outError
    

    However, I'm not sure how you would pass in the data pointer, start the sound playing, and then keep it looping by passing in other data pointers, or by repeating the same one.
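One possible approach (a sketch under assumptions, not something this answer confirms): prepend a minimal 44-byte RIFF/WAVE header to the raw PCM, wrap header plus samples in an NSData for -initWithData:error:, and set the player's numberOfLoops to -1 so the same buffer repeats indefinitely. Building the header in C might look like this (the function name is mine):

```c
#include <stdint.h>
#include <string.h>

/* Write a canonical 44-byte WAV header for 16-bit linear PCM into <dst>.
   <dataBytes> is the length of the PCM payload that follows the header.
   Assumes a little-endian host, which holds on iPhone/Mac hardware. */
static void write_wav_header(uint8_t *dst, uint32_t dataBytes,
                             uint32_t sampleRate, uint16_t channels) {
    uint16_t bitsPerSample = 16;
    uint32_t byteRate = sampleRate * channels * bitsPerSample / 8;
    uint16_t blockAlign = channels * bitsPerSample / 8;
    uint32_t riffSize = 36 + dataBytes;   /* file size minus the 8-byte RIFF intro */
    uint32_t fmtSize = 16;                /* size of the fmt chunk body */
    uint16_t audioFormat = 1;             /* 1 = uncompressed linear PCM */

    memcpy(dst,      "RIFF", 4);
    memcpy(dst + 4,  &riffSize, 4);
    memcpy(dst + 8,  "WAVEfmt ", 8);
    memcpy(dst + 16, &fmtSize, 4);
    memcpy(dst + 20, &audioFormat, 2);
    memcpy(dst + 22, &channels, 2);
    memcpy(dst + 24, &sampleRate, 4);
    memcpy(dst + 28, &byteRate, 4);
    memcpy(dst + 32, &blockAlign, 2);
    memcpy(dst + 34, &bitsPerSample, 2);
    memcpy(dst + 36, "data", 4);
    memcpy(dst + 40, &dataBytes, 4);
}
```

The resulting bytes can then go straight into `[NSData dataWithBytes:length:]` and on to -initWithData:error:; whether looping a one-period buffer this way is gapless on a given OS version would need testing.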