
Android Framework Audio Subsystem (07): AudioTrack Data Transfer

Series outline link: Topic Index, Android Framework Audio Subsystem

Key points and overview for this chapter:

[Mind map: Android Framework audio subsystem (07), AudioTrack data transfer]

This chapter focuses on the data-transfer sub-branch of the AudioTrack flow analysis in the upper-left of the mind map above. It mainly analyzes AudioTrack's two modes, and how the shared memory between the APP's AudioTrack and the corresponding track in the playbackThread's mTracks is actually set up.

1 Shared-Memory Creation on the AudioTrack Side

1.1 The Two AudioTrack Modes

An AudioTrack created by an APP corresponds to a Track created in AudioFlinger's PlaybackThread. The APP can supply audio data to the AudioTrack in two modes: all at once (MODE_STATIC) or incrementally while playing (MODE_STREAM). Both pass the data through shared memory. The two modes differ in two respects:

@1 The two shared-memory modes

  1. MODE_STATIC: the data is supplied once, up front. The APP creates the shared memory and fills it in a single shot. By the time the playbackthread consumes the data it is fully prepared, so there is no synchronization problem. On the playbackThread side: obtainBuffer returns a buffer that already contains data (the data the APP commits at once can be large, so the playbackThread may need several playback passes over it), and the buffer is released when done.
  2. MODE_STREAM: the data is supplied while playing. The playbackThread creates the shared memory. The APP calls obtainBuffer to get a buffer, fills it with data, then calls releaseBuffer. The playbackThread likewise consumes it over multiple passes; here a ring-buffer mechanism passes the data continuously, and buffers are released when done.

1.2 Revisiting the AudioTrack Constructor

Based on this, we now analyze the code for the two modes separately. We start from the creation of the Java-layer AudioTrack object, which in turn leads to the creation of the native AudioTrack object. Following the previous chapter's analysis of the Java-layer AudioTrack, recall the call stack:

Java::AudioTrack -> Java::native_setup -> JNI dispatch -> android_media_AudioTrack_setup

We continue the analysis from the implementation of android_media_AudioTrack_setup:

static jint
android_media_AudioTrack_setup(JNIEnv *env, jobject thiz, jobject weak_this,
        jobject jaa,
        jint sampleRateInHertz, jint javaChannelMask,
        jint audioFormat, jint buffSizeInBytes, jint memoryMode, jintArray jSession) {
	//...
    //Key point 1: create the native AudioTrack object
    sp<AudioTrack> lpTrack = new AudioTrack();
	//...
    switch (memoryMode) {//configure parameters differently for MODE_STREAM vs MODE_STATIC
    case MODE_STREAM:
        //Key point 2.1: set(), configure the parameters
        //Note: the APP does not allocate memory here; it is allocated later by the playbackthread,
        //so the shared memory argument is null (0)
        status = lpTrack->set(
                AUDIO_STREAM_DEFAULT,// stream type, but more info conveyed in paa (last argument)
                sampleRateInHertz,
                format,// word length, PCM
                nativeChannelMask,
                frameCount,
                AUDIO_OUTPUT_FLAG_NONE,
                audioCallback, &(lpJniStorage->mCallbackData),//callback, callback data (user)
                0,// notificationFrames == 0 since not using EVENT_MORE_DATA to feed the AudioTrack
                0,// shared mem,
                true,// thread can call Java
                sessionId,// audio session ID
                AudioTrack::TRANSFER_SYNC,
                NULL,                         // default offloadInfo
                -1, -1,                       // default uid, pid values
                paa);
        break;
    case MODE_STATIC:
        //the APP side allocates the shared memory
        if (!lpJniStorage->allocSharedMem(buffSizeInBytes)) {
            ALOGE("Error creating AudioTrack in static mode: error creating mem heap base");
            goto native_init_failure;
        }
        //Key point 2.2: set(), configure the parameters
        //here the shared memory argument is the base address of the APP-allocated shared memory
        status = lpTrack->set(
                AUDIO_STREAM_DEFAULT,// stream type, but more info conveyed in paa (last argument)
                sampleRateInHertz,
                format,// word length, PCM
                nativeChannelMask,
                frameCount,
                AUDIO_OUTPUT_FLAG_NONE,
                audioCallback, &(lpJniStorage->mCallbackData),//callback, callback data (user)
                0,// notificationFrames == 0 since not using EVENT_MORE_DATA to feed the AudioTrack
                lpJniStorage->mMemBase,// shared mem
                true,// thread can call Java
                sessionId,// audio session ID
                AudioTrack::TRANSFER_SHARED,
                NULL,                         // default offloadInfo
                -1, -1,                       // default uid, pid values
                paa);
        break;
	//...
    default:
        ALOGE("Unknown mode %d", memoryMode);
        goto native_init_failure;
    }
    //...
    return AUDIOTRACK_SUCCESS;

native_init_failure:
    //...
    return (jint) AUDIOTRACK_ERROR_SETUP_NATIVEINITFAILED;
}

To summarize: the mode is chosen at the Java layer, and on entering the C++ layer:

  1. If the mode is MODE_STATIC: first allocate the shared memory (allocSharedMem), then call set() (see the sketch below).
  2. If the mode is MODE_STREAM: call set() directly; the memory is allocated later by the playbackthread.
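
For reference, below is a simplified sketch of what AudioTrackJniStorage::allocSharedMem does in the MODE_STATIC branch above (assumed from the flow shown; the exact code varies across Android versions): it allocates an ashmem-backed heap through Binder's MemoryHeapBase and wraps it in a MemoryBase, whose IMemory handle (mMemBase) is the shared-memory base address later passed to set().

#include <binder/IMemory.h>
#include <binder/MemoryBase.h>
#include <binder/MemoryHeapBase.h>

using namespace android;

// Sketch of the MODE_STATIC allocation: an anonymous shared-memory region
// that both the app process and the audio server can map.
static sp<IMemory> allocSharedMemSketch(size_t sizeInBytes) {
    sp<MemoryHeapBase> heap = new MemoryHeapBase(sizeInBytes, 0, "AudioTrack Heap Base");
    if (heap->getHeapID() < 0) {
        return 0;  // ashmem allocation failed
    }
    // MemoryBase exposes an offset/size window of the heap as an IMemory.
    return new MemoryBase(heap, 0, sizeInBytes);
}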

1.3 Shared-Memory Handling in AudioTrack

Following the previous chapter's analysis, recall the call stack:

AudioTrack::set->AudioTrack::createTrack_l           

In createTrack_l, the shared-memory-related operations are:

status_t AudioTrack::createTrack_l()
{
    const sp<IAudioFlinger>& audioFlinger = AudioSystem::get_audio_flinger();
    //...
    // Starting address of buffers in shared memory.  If there is a shared buffer, buffers
    // is the value of pointer() for the shared buffer, otherwise buffers points
    // immediately after the control block.  This address is for the mapping within client
    // address space.  AudioFlinger::TrackBase::mBuffer is for the server address space.
    void* buffers;
    if (mSharedBuffer == 0) {
        //points to the buffer provided by the playbackthread
        buffers = (char*)cblk + sizeof(audio_track_cblk_t);
    } else {
        //points to the buffer provided by the APP
        buffers = mSharedBuffer->pointer();
    }
    //...
    /* update proxy: this is where the APP's AudioTrack and the thread's Track share memory;
     * AudioTrackClientProxy and StaticAudioTrackClientProxy manage the buffer
     */
    if (mSharedBuffer == 0) {
        mStaticProxy.clear();
        mProxy = new AudioTrackClientProxy(cblk, buffers, frameCount, mFrameSizeAF);
    } else {
        mStaticProxy = new StaticAudioTrackClientProxy(cblk, buffers, frameCount, mFrameSizeAF);
        mProxy = mStaticProxy;
    }
    //...
    return NO_ERROR;
}

From the previous chapter we know that creating an AudioTrack on the APP side necessarily implies creating a Track in AudioFlinger::PlaybackThread. So next we focus on the PlaybackThread's Track object.

2 Shared-Memory Creation for the PlaybackThread's Track

Track inherits from TrackBase, and TrackBase contains the important shared-memory management logic we need to analyze; the focus here is sharedBuffer. The code is as follows:

// TrackBase constructor must be called with AudioFlinger::mLock held
AudioFlinger::ThreadBase::TrackBase::TrackBase(
            ThreadBase *thread,
            const sp<Client>& client,
            //...
            alloc_type alloc,
            track_type type)
    :   RefBase(),
        mThread(thread),
        mClient(client),
        mCblk(NULL),
        //...
{
    // if the caller is us, trust the specified uid
    if (IPCThreadState::self()->getCallingPid() != getpid_cached || clientUid == -1) {
        int newclientUid = IPCThreadState::self()->getCallingUid();
        if (clientUid != -1 && clientUid != newclientUid) {
            ALOGW("uid %d tried to pass itself off as %d", newclientUid, clientUid);
        }
        clientUid = newclientUid;
    }
	
    mUid = clientUid;
    size_t size = sizeof(audio_track_cblk_t);//the header control block
    size_t bufferSize = (buffer == NULL ? roundup(frameCount) : frameCount) * mFrameSize;
    if (buffer == NULL && alloc == ALLOC_CBLK) {
        size += bufferSize;
    }

    if (client != 0) {
        //if the APP provides the buffer, only the CBLK is allocated here; otherwise CBLK + buffer
        mCblkMemory = client->heap()->allocate(size);
        if (mCblkMemory == 0 ||
                (mCblk = static_cast<audio_track_cblk_t *>(mCblkMemory->pointer())) == NULL) {
            client->heap()->dump("AudioTrack");
            mCblkMemory.clear();
            return;
        }
    } else {
        // this syntax avoids calling the audio_track_cblk_t constructor twice
        mCblk = (audio_track_cblk_t *) new uint8_t[size];
        // assume mCblk != NULL
    }

    // construct the shared structure in-place.
    if (mCblk != NULL) {
        new(mCblk) audio_track_cblk_t();
        switch (alloc) {
        case ALLOC_READONLY: {
            const sp<MemoryDealer> roHeap(thread->readOnlyHeap());
            if (roHeap == 0 ||
                    (mBufferMemory = roHeap->allocate(bufferSize)) == 0 ||
                    (mBuffer = mBufferMemory->pointer()) == NULL) {
                ALOGE("not enough memory for read-only buffer size=%zu", bufferSize);
                if (roHeap != 0) {
                    roHeap->dump("buffer");
                }
                mCblkMemory.clear();
                mBufferMemory.clear();
                return;
            }
            memset(mBuffer, 0, bufferSize);
            } break;
        case ALLOC_PIPE:
            mBufferMemory = thread->pipeMemory();
            mBuffer = NULL;
            break;
        case ALLOC_CBLK:
            //buffer initialization
            if (buffer == NULL) {//points to the buffer provided by the playbackthread
                mBuffer = (char*)mCblk + sizeof(audio_track_cblk_t);
                memset(mBuffer, 0, bufferSize);
            } else {
                mBuffer = buffer;//points to the buffer provided by the APP
            }
            break;
        case ALLOC_LOCAL:
            mBuffer = calloc(1, bufferSize);
            break;
        case ALLOC_NONE:
            mBuffer = buffer;
            break;
        }
    }
}           
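
One detail worth noting in the size computation above is roundup(frameCount): when the buffer is allocated here (buffer == NULL), the frame count is rounded up to a power of two so that the ring-buffer index arithmetic described in section 3.1 can use a bitmask instead of a modulo. A minimal standalone equivalent of that helper (the real one lives in the audio_utils library) looks like this:

#include <cstddef>
#include <cstdio>

// Round n up to the next power of two (n >= 1), so that ring-buffer
// indices can be reduced with "& (LEN - 1)" instead of "% LEN".
static size_t roundupPow2(size_t n) {
    size_t v = 1;
    while (v < n) v <<= 1;
    return v;
}

int main() {
    printf("%zu %zu %zu\n", roundupPow2(1), roundupPow2(480), roundupPow2(1024));
    // prints: 1 512 1024
    return 0;
}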

The above covers the creation and allocation of the buffer. Next, the Track implementation:

AudioFlinger::PlaybackThread::Track::Track(
            PlaybackThread *thread,
            const sp<Client>& client,
            //...
            const sp<IMemory>& sharedBuffer,
            //...
            track_type type)
    :   TrackBase(thread, client, sampleRate, format, channelMask, frameCount,
                  (sharedBuffer != 0) ? sharedBuffer->pointer() : buffer,
                  sessionId, uid, flags, true /*isOut*/,
                  (type == TYPE_PATCH) ? ( buffer == NULL ? ALLOC_LOCAL : ALLOC_NONE) : ALLOC_CBLK,
                  type),
    mFillingUpStatus(FS_INVALID),
    // mRetryCount initialized later when needed
    mSharedBuffer(sharedBuffer),
    mStreamType(streamType),
    //...
{
    //shared-memory related code: AudioTrackServerProxy and StaticAudioTrackServerProxy manage the buffer here
    if (sharedBuffer == 0) {
        mAudioTrackServerProxy = new AudioTrackServerProxy(mCblk, mBuffer, frameCount,
                mFrameSize, !isExternalTrack(), sampleRate);
    } else {
        mAudioTrackServerProxy = new StaticAudioTrackServerProxy(mCblk, mBuffer, frameCount,
                mFrameSize);
    }
    mServerProxy = mAudioTrackServerProxy;
    mName = thread->getTrackName_l(channelMask, format, sessionId);
    //...
    // only allocate a fast track index if we were able to allocate a normal track name
    if (flags & IAudioFlinger::TRACK_FAST) {
        mAudioTrackServerProxy->framesReadyIsCalledByMultipleThreads();
        int i = __builtin_ctz(thread->mFastTrackAvailMask);
        mFastIndex = i;
        // Read the initial underruns because this field is never cleared by the fast mixer
        mObservedUnderruns = thread->getFastTrackUnderruns(i);
        thread->mFastTrackAvailMask &= ~(1 << i);
    }
}           

To summarize:

  1. AudioTrack uses an AudioTrackClientProxy or StaticAudioTrackClientProxy object to manage the shared memory.
  2. Track uses an AudioTrackServerProxy or StaticAudioTrackServerProxy object to manage the shared memory.

3 Audio Data Transfer

Audio data is passed through AudioTrack's write method. Continuing from the call stack analyzed in chapter 05, we now analyze the track's write path. The earlier call stack was:

Java-layer AudioTrack.write -> native_write_XXX -> writeToTrack
-> C++-layer track->sharedBuffer() or C++-layer track->write

writeToTrack is implemented as follows:

jint writeToTrack(const sp<AudioTrack>& track, jint audioFormat, const jbyte* data,
                  jint offsetInBytes, jint sizeInBytes, bool blocking = true) {
    ssize_t written = 0;
    //the playbackthread provides the shared memory: call the C++ track's write function
    if (track->sharedBuffer() == 0) {
        written = track->write(data + offsetInBytes, sizeInBytes, blocking);
        if (written == (ssize_t) WOULD_BLOCK) {
            written = 0;
        }
    } else {//the APP provides the shared memory: just memcpy into it directly
        const audio_format_t format = audioFormatToNative(audioFormat);
        switch (format) {

        default:
        case AUDIO_FORMAT_PCM_FLOAT:
        case AUDIO_FORMAT_PCM_16_BIT: {
            if ((size_t)sizeInBytes > track->sharedBuffer()->size()) {
                sizeInBytes = track->sharedBuffer()->size();
            }
            //copy the data into the shared memory here
            memcpy(track->sharedBuffer()->pointer(), data + offsetInBytes, sizeInBytes);
            written = sizeInBytes;
            } break;

        case AUDIO_FORMAT_PCM_8_BIT: {
            //same as above, except 8-bit data needs an intermediate conversion step
            if (((size_t)sizeInBytes)*2 > track->sharedBuffer()->size()) {
                sizeInBytes = track->sharedBuffer()->size() / 2;
            }
            int count = sizeInBytes;
            int16_t *dst = (int16_t *)track->sharedBuffer()->pointer();
            const uint8_t *src = (const uint8_t *)(data + offsetInBytes);
            memcpy_to_i16_from_u8(dst, src, count);
            written = sizeInBytes;
            } break;
        }
    }
    return written;
}           
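
The 8-bit branch above relies on memcpy_to_i16_from_u8 from the audio_utils library, which is why the destination needs twice the source size. A standalone sketch of the equivalent conversion (not the AOSP source) makes the reason clear: unsigned 8-bit PCM is offset-binary, so each sample is re-centered around zero and shifted into the high byte of a signed 16-bit sample.

#include <cstddef>
#include <cstdint>
#include <cstdio>

// Convert unsigned 8-bit PCM to signed 16-bit PCM: 0x80 (silence) maps to 0.
static void to_i16_from_u8(int16_t* dst, const uint8_t* src, size_t count) {
    while (count--) {
        *dst++ = (int16_t)((*src++ - 0x80) << 8);
    }
}

int main() {
    const uint8_t u8[3] = {0x00, 0x80, 0xFF};
    int16_t i16[3];
    to_i16_from_u8(i16, u8, 3);
    printf("%d %d %d\n", i16[0], i16[1], i16[2]);  // -32768 0 32512
    return 0;
}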

Recall from the code above:

  1. If track->sharedBuffer() == 0, the playbackthread provides the shared memory, and the C++ track's write method is called.
  2. If track->sharedBuffer() != 0, the APP provides the shared memory, and a plain memcpy fills track->sharedBuffer() directly.

3.1 Data Transfer Flow in MODE_STREAM

@1 Client-side proxy flow

We continue with the track's write method:

ssize_t AudioTrack::write(const void* buffer, size_t userSize, bool blocking)
{
    //...
    size_t written = 0;
    Buffer audioBuffer;

    while (userSize >= mFrameSize) {
        audioBuffer.frameCount = userSize / mFrameSize;
        //Key point 1: obtain a shared-memory buffer
        status_t err = obtainBuffer(&audioBuffer,
                blocking ? &ClientProxy::kForever : &ClientProxy::kNonBlocking);
        //...
        size_t toWrite;
        //copy data from the user buffer into audioBuffer
        if (mFormat == AUDIO_FORMAT_PCM_8_BIT && !(mFlags & AUDIO_OUTPUT_FLAG_DIRECT)) {
            toWrite = audioBuffer.size >> 1;
            memcpy_to_i16_from_u8(audioBuffer.i16, (const uint8_t *) buffer, toWrite);
        } else {
            toWrite = audioBuffer.size;
            memcpy(audioBuffer.i8, buffer, toWrite);
        }
        //account for the remaining data
        buffer = ((const char *) buffer) + toWrite;
        userSize -= toWrite;
        written += toWrite;
        //Key point 2: release the buffer
        releaseBuffer(&audioBuffer);
    }
    //every obtained buffer has already been released inside the loop
    return written;
}           

Next, the implementation of obtainBuffer:

status_t AudioTrack::obtainBuffer(Buffer* audioBuffer, int32_t waitCount)
{
    //... parameter conversion and computation
    return obtainBuffer(audioBuffer, requested);
}           

And the overload it delegates to after converting the parameters:

status_t AudioTrack::obtainBuffer(Buffer* audioBuffer, const struct timespec *requested,
        struct timespec *elapsed, size_t *nonContig)
{
    //... parameter conversion
    status = proxy->obtainBuffer(&buffer, requested, elapsed);
    //... fill in the result
}           

For MODE_STREAM, since mSharedBuffer == 0, the proxy here is an AudioTrackClientProxy:

status_t AudioTrack::createTrack_l(){
   //...
    void* buffers;
    if (mSharedBuffer == 0) {
        buffers = (char*)cblk + sizeof(audio_track_cblk_t);
    } else {
        buffers = mSharedBuffer->pointer();
    }
	//...
    // update proxy
    if (mSharedBuffer == 0) {
        mStaticProxy.clear();
        mProxy = new AudioTrackClientProxy(cblk, buffers, frameCount, mFrameSizeAF);
    } else {
        mStaticProxy = new StaticAudioTrackClientProxy(cblk, buffers, frameCount, mFrameSizeAF);
        mProxy = mStaticProxy;
    }
    //...
}           

AudioTrackClientProxy itself does not define obtainBuffer; the call actually resolves to its parent class ClientProxy::obtainBuffer. That is, ClientProxy is used to obtain an empty buffer, the audio data is written into it, and finally releaseBuffer is called.

@2 Server-side flow

We take Track::getNextBuffer (which obtains a buffer) as the entry point:

// AudioBufferProvider interface
status_t AudioFlinger::PlaybackThread::Track::getNextBuffer(
        AudioBufferProvider::Buffer* buffer, int64_t pts __unused)
{
    ServerProxy::Buffer buf;
    size_t desiredFrames = buffer->frameCount;
    buf.mFrameCount = desiredFrames;
    //call mServerProxy's obtainBuffer method
    status_t status = mServerProxy->obtainBuffer(&buf);
    buffer->frameCount = buf.mFrameCount;
    buffer->raw = buf.mRaw;
    if (buf.mFrameCount == 0) {
        mAudioTrackServerProxy->tallyUnderrunFrames(desiredFrames);
    }
    return status;
}
           

Next we look at the obtainBuffer call on mServerProxy (in MODE_STREAM, mServerProxy is an AudioTrackServerProxy). AudioTrackServerProxy does not define obtainBuffer either; its parent class ServerProxy::obtainBuffer is called to obtain a buffer that already contains data. Note: the final releaseBuffer is invoked directly from the TrackBase destructor, shown below:

AudioFlinger::ThreadBase::TrackBase::~TrackBase()
{
    // delete the proxy before deleting the shared memory it refers to, to avoid dangling reference
    delete mServerProxy;
    if (mCblk != NULL) {
        if (mClient == 0) {
            delete mCblk;
        } else {
            mCblk->~audio_track_cblk_t();   // destroy our shared-structure.
        }
    }
    mCblkMemory.clear();    // free the shared memory before releasing the heap it belongs to
    if (mClient != 0) {
        // Client destructor must run with AudioFlinger client mutex locked
        Mutex::Autolock _l(mClient->audioFlinger()->mClientLock);
        // If the client's reference count drops to zero, the associated destructor
        // must run with AudioFlinger lock held. Thus the explicit clear() rather than
        // relying on the automatic clear() at end of scope.
        mClient.clear();
    }
    // flush the binder command buffer
    IPCThreadState::self()->flushCommands();
}           

In other words, there is no need to call releaseBuffer explicitly here to free the shared memory.
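
A side note on the memory management seen in the constructor and destructor above: new(mCblk) audio_track_cblk_t() and the explicit mCblk->~audio_track_cblk_t() form the placement-new idiom. Because the control block lives inside memory allocated from the shared heap, it has to be constructed and destroyed in place rather than with plain new/delete. A standalone illustration:

#include <cstdio>
#include <cstdlib>
#include <new>

struct ControlBlock {
    int front = 0, rear = 0;
    ControlBlock()  { printf("constructed in place\n"); }
    ~ControlBlock() { printf("destroyed explicitly\n"); }
};

int main() {
    // Stands in for memory obtained from the shared heap.
    void* shared = malloc(sizeof(ControlBlock));
    // Construct the object inside existing memory: no allocation happens.
    ControlBlock* cblk = new (shared) ControlBlock();
    // Destroy it without freeing the memory it lives in.
    cblk->~ControlBlock();
    // The raw memory is released separately, like mCblkMemory.clear().
    free(shared);
    return 0;
}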

@3 Data synchronization

MODE_STREAM uses a ring buffer to synchronize the data: one side produces and the other consumes, and a ring buffer is the most reliable structure for that. Concretely, it is the cooperation of ClientProxy::obtainBuffer, ClientProxy::releaseBuffer, ServerProxy::obtainBuffer, and ServerProxy::releaseBuffer. Below is a brief outline of the ring-buffer logic and the principle behind it. The audio stream in shared memory consists of two parts, a data header and the data itself, as shown below:

[Figure: shared-memory layout, the audio_track_cblk_t header followed by the data buffer]

The ring buffer has several key variables: mFront (read pointer R), mRear (write pointer W), mFrameCount (buffer length LEN), and mFrameCountP2 (LEN rounded up to a power of two). The example and pseudocode below describe the ring-buffer logic:

[Figure: ring buffer with read pointer R (mFront) and write pointer W (mRear)]

The ring-buffer logic works as follows:

Ring buffer: initially R = 0, W = 0, buffer length LEN
Write one element: w = W % LEN; buf[w] = data; W++;
Read one element:  r = R % LEN; data = buf[r]; R++;
Empty test: R == W
Full test:  W - R == LEN

Note: mathematically, when LEN is a power of two, the following operations are equivalent:

w = W % LEN  is equivalent to  w = W & (LEN - 1)
r = R % LEN  is equivalent to  r = R & (LEN - 1)
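
To make this concrete, here is a minimal single-producer/single-consumer ring buffer in C++ that follows exactly the rules above. It is a sketch of the principle only, not the AOSP ClientProxy/ServerProxy code (which additionally supports blocking via a futex in the control block): R and W grow monotonically, LEN is a power of two so the modulo becomes a bitmask, empty is R == W, and full is W - R == LEN.

#include <cstdint>
#include <cstdio>

class RingBuffer {
public:
    explicit RingBuffer(uint32_t len) : mLen(len) {} // len must be a power of two, <= 1024

    bool write(int16_t sample) {              // producer side (client obtain + release)
        if (mW - mR == mLen) return false;    // full: W - R == LEN
        mBuf[mW & (mLen - 1)] = sample;       // w = W % LEN, computed as a bitmask
        mW++;                                 // publish one frame
        return true;
    }

    bool read(int16_t* sample) {              // consumer side (server obtain + release)
        if (mR == mW) return false;           // empty: R == W
        *sample = mBuf[mR & (mLen - 1)];      // r = R % LEN, computed as a bitmask
        mR++;                                 // consume one frame
        return true;
    }

private:
    uint32_t mLen;
    uint32_t mR = 0, mW = 0;                  // analogous to mFront / mRear
    int16_t  mBuf[1024] = {};                 // backing store (>= mLen entries)
};

int main() {
    RingBuffer rb(8);
    for (int16_t s = 0; s < 10; s++) {        // the last two writes hit "full"
        if (!rb.write(s)) printf("full at %d\n", s);
    }
    int16_t s;
    while (rb.read(&s)) printf("%d ", s);     // drains 0..7
    printf("\n");
    return 0;
}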

3.2 Data Transfer Flow in MODE_STATIC

@1 Client-side flow

As shown earlier, the native writeToTrack writes data directly into track->sharedBuffer()->pointer(). For MODE_STATIC, since mSharedBuffer != 0, the proxy here is a StaticAudioTrackClientProxy:

status_t AudioTrack::createTrack_l(){
   //...
    void* buffers;
    if (mSharedBuffer == 0) {
        buffers = (char*)cblk + sizeof(audio_track_cblk_t);
    } else {
        buffers = mSharedBuffer->pointer();
    }
	//...
    // update proxy
    if (mSharedBuffer == 0) {
        mStaticProxy.clear();
        mProxy = new AudioTrackClientProxy(cblk, buffers, frameCount, mFrameSizeAF);
    } else {
        mStaticProxy = new StaticAudioTrackClientProxy(cblk, buffers, frameCount, mFrameSizeAF);
        mProxy = mStaticProxy;
    }
    //...
}           

StaticAudioTrackClientProxy does not define obtainBuffer either; the call resolves to its parent class ClientProxy::obtainBuffer: obtain an empty buffer, write the audio data into it, then releaseBuffer.

Again we take Track::getNextBuffer (which obtains a buffer) as the entry point:

// AudioBufferProvider interface
status_t AudioFlinger::PlaybackThread::Track::getNextBuffer(
        AudioBufferProvider::Buffer* buffer, int64_t pts __unused)
{
    ServerProxy::Buffer buf;
    size_t desiredFrames = buffer->frameCount;
    buf.mFrameCount = desiredFrames;
    //call mServerProxy's obtainBuffer method
    status_t status = mServerProxy->obtainBuffer(&buf);
    buffer->frameCount = buf.mFrameCount;
    buffer->raw = buf.mRaw;
    if (buf.mFrameCount == 0) {
        mAudioTrackServerProxy->tallyUnderrunFrames(desiredFrames);
    }
    return status;
}
           

Next we look at the obtainBuffer call on mServerProxy (in MODE_STATIC, mServerProxy is a StaticAudioTrackServerProxy); StaticAudioTrackServerProxy overrides ServerProxy's obtainBuffer. As for releaseBuffer, the situation is the same as above: it is invoked via the TrackBase destructor, so there is no need to call it explicitly to free the buffer.

There is no data-synchronization problem in MODE_STATIC.
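
For contrast with the ring buffer, the sketch below shows the essence of what a static-mode server proxy has to do (a simplified illustration, not the AOSP StaticAudioTrackServerProxy, which also supports loop points): the APP filled the whole buffer up front, so obtainBuffer() just hands out the next contiguous region after the current playback position, and releaseBuffer() advances that position.

#include <algorithm>
#include <cstddef>

// Simplified static-mode server proxy: the buffer already holds all the data.
struct StaticServerProxySketch {
    char*  mBuffer;        // start of the APP-filled shared buffer
    size_t mFrameCount;    // total frames in the buffer
    size_t mFrameSize;     // bytes per frame
    size_t mPosition = 0;  // frames already consumed

    // Hand out the next readable region; *frames == 0 means playback is done.
    void obtainBuffer(char** raw, size_t* frames) {
        *frames = std::min(*frames, mFrameCount - mPosition);
        *raw = mBuffer + mPosition * mFrameSize;
    }

    // Advance the playback position past the frames the mixer consumed.
    void releaseBuffer(size_t framesConsumed) {
        mPosition = std::min(mPosition + framesConsumed, mFrameCount);
    }
};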

3.3 Data Transfer Summary

@1 For the different modes, the proxies point to different objects:

  1. AudioTrack holds mProxy, which manages the shared memory and provides the obtainBuffer and releaseBuffer functions.
  2. Track holds mServerProxy, which manages the shared memory and provides the obtainBuffer and releaseBuffer functions.

@2 AudioTrack and AudioFlinger exchange data through the mCblkMemory block in a producer-consumer fashion. Let us look at how ServerProxy and ClientProxy interact through the shared memory:

  1. When a track is created, AudioFlinger allocates a block of shared audio memory for it. AudioTrack and AudioFlinger then wrap that buffer through AudioTrackClientProxy and AudioTrackServerProxy to create mClientProxy and mServerProxy.
  2. AudioTrack (the APP side) writes data into the shared buffer through mClientProxy; AudioFlinger (the server side) reads the data out through mServerProxy. Client and server thus form a producer-consumer model over the shared memory via the proxies:
     - AudioTrackClientProxy::obtainBuffer() obtains a contiguous empty region from the audio buffer;
     - AudioTrackClientProxy::releaseBuffer() returns a data-filled region to the audio buffer;
     - AudioTrackServerProxy::obtainBuffer() obtains a contiguous data-filled region from the audio buffer;
     - AudioTrackServerProxy::releaseBuffer() returns a consumed, now-empty region to the audio buffer.