0. Introduction
Before reading this article, you may want to read the earlier articles first; they will help you understand this one better. The list is as follows:
SRS Streaming Server: RTMP Publish Message Handling (1)
SRS Streaming Server: RTMP Protocol Analysis (2)
SRS Streaming Media Architecture Analysis (1)
SRS Streaming Media: RTMP Publish Architecture Analysis (2)
SRS Streaming Media: RTMP Pull-Stream Architecture Analysis (3)
SRS Streaming Server: RTMP Protocol Analysis (1)
A Brief Overview of SRS Streaming Server Technology
Streaming Publish and Pull in Practice: RTMP Protocol Analysis (recommended by BAT interviewers)
Streaming Server Architecture and Application Analysis
Step-by-Step: Building a Streaming Media Server
Step-by-Step: Setting Up FFmpeg on Windows
A Detailed Guide to Setting Up FFmpeg on Ubuntu
HTTP in Practice: Wireshark Packet Capture Analysis
When a client publishes RTMP data to the SRS server and the server is configured correctly, SRS can output an HTTP-FLV stream that a playback client can then pull successfully. What does this process look like in detail? That is what this article analyses. First, a quick review of the overall architecture:
RTMP publisher -----> SRS server (create SOURCE -> create Consumer -> select the muxing format, encoder=FLV) <------ playback client pulls HTTP-FLV
1. A brief overview of HTTP-FLV
(1) HTTP has a Content-Length header field that gives the length of the HTTP body. If the server's response to a client request does not carry this field, the client keeps receiving data until the socket connection between server and client is closed. If the field is present, the client treats the transfer as complete once it has received that many bytes.
HTTP-FLV live streaming exploits exactly this: when the server answers the client's request it omits Content-Length, and right after the HTTP headers it starts sending FLV data, so the client keeps receiving and assumes there is always more data to come.
When the client sends its request, the SRS server returns:
HTTP/1.1 200 OK
Connection: Keep-Alive
Content-Type: video/x-flv
Server: SRS/3.0.141(OuXuli)
Transfer-Encoding: chunked
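To make the client side of this concrete, below is a minimal sketch (illustrative only, not SRS or FFmpeg code) of a client that sends a GET for the .flv path and then just keeps reading from the socket, because there is no Content-Length to tell it when to stop. The host, port and path are placeholders matching the test setup used later in this article.

// Minimal illustration: connect, send a GET, then read until the peer closes.
// Error handling is reduced to the bare minimum; host/port/path are placeholders.
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <sys/types.h>
#include <unistd.h>
#include <cstdio>
#include <string>

int main() {
    const char* host = "127.0.0.1";   // placeholder server IP
    const int port = 8081;            // the http_server listen port in the config below
    const std::string req =
        "GET /live/livestream.flv HTTP/1.1\r\n"
        "Host: 127.0.0.1\r\n"
        "Connection: Keep-Alive\r\n\r\n";

    int fd = socket(AF_INET, SOCK_STREAM, 0);
    sockaddr_in addr{};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(port);
    inet_pton(AF_INET, host, &addr.sin_addr);
    if (connect(fd, (sockaddr*)&addr, sizeof(addr)) != 0) {
        perror("connect");
        return 1;
    }

    send(fd, req.data(), req.size(), 0);

    // No Content-Length in the response, so just keep reading until the server
    // closes the connection. The body is chunk-encoded (Transfer-Encoding: chunked),
    // so a real player would also strip the chunk framing before parsing FLV tags.
    char buf[4096];
    ssize_t n;
    size_t total = 0;
    while ((n = recv(fd, buf, sizeof(buf), 0)) > 0) {
        total += (size_t)n;   // a real player would parse FLV tags here
    }
    printf("connection closed after %zu bytes\n", total);
    close(fd);
    return 0;
}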
(2) Configuration file
It consists of two main parts:
(1) configuring the HTTP server;
(2) configuring the HTTP-FLV service.
The configuration file looks like this:
listen              1935;
max_connections     1000;
#srs_log_tank       file;
#srs_log_file       ./objs/srs.log;
# run in the foreground
daemon              off;
# print logs to the terminal console
srs_log_tank        console;
http_api {
    enabled         on;
    listen          1985;
}
http_server {
    enabled         on;
    listen          8081;               # HTTP listen port. (1) This is the HTTP server; mind the port,
                                        # and if you are on a cloud server be sure to open it.
    dir             ./objs/nginx/html;
}
stats {
    network         0;
    disk            sda sdb xvda xvdb;
}
vhost __defaultVhost__ {                # use the default vhost
    # hls
    hls {
        enabled         on;
        hls_path        ./objs/nginx/html;
        hls_fragment    10;
        hls_window      60;
    }
    # required when using http-flv
    http_remux {
        enabled     on;
        mount       [vhost]/[app]/[stream].flv;   # enables flv; this is the flv play (pull) address pattern
        hstrs       on;
    }
}
(3) Test preparation
On the client machine, publish an RTMP stream with ffmpeg. Here xxx.xxx.xxx.xxx stands for the IP address; set it according to your own environment. The command is:
ffmpeg -re -i xxx.flv -vcodec copy -acodec copy -f flv -y rtmp://xxx.xxx.xxx.xxx/live/livestream
On the client, pull the RTMP and HTTP streams with the following commands:
ffplay http://xxx.xxx.xxx.xxx:8081/live/livestream.flv
ffplay rtmp://xxx.xxx.xxx.xxx/live/livestream
2. SRS function call chain when publishing RTMP
When an RTMP stream is published, SRS creates the corresponding handler based on the URL; when a client pulls the stream, the matching handler is looked up by URL, so URLs and handlers map one to one. In the flow below, an HTTP-FLV SOURCE is created while the RTMP stream is being published (the call relationship reads bottom-up, i.e. from frame 14 to frame 0). SOURCE itself was analysed in detail in earlier articles.
#0  SrsLiveStream::SrsLiveStream (this=0xa3da40, s=0xa3bbd0, r=0xa3ad40, c=0xa3d520) at src/app/srs_app_http_stream.cpp:514
#1  0x00000000005010bb in SrsHttpStreamServer::http_mount (this=0xa11fd0, s=0xa3bbd0, r=0xa3ad40) at src/app/srs_app_http_stream.cpp:912
#2  0x00000000005620f5 in SrsHttpServer::http_mount (this=0xa11e00, s=0xa3bbd0, r=0xa3ad40) at src/app/srs_app_http_conn.cpp:308
#3  0x00000000004cd3cc in SrsServer::on_publish (this=0xa11ea0, s=0xa3bbd0, r=0xa3ad40) at src/app/srs_app_server.cpp:1608
#4  0x00000000004e6a9b in SrsSource::on_publish (this=0xa3bbd0) at src/app/srs_app_source.cpp:2466
#5  0x00000000004d89f2 in SrsRtmpConn::acquire_publish (this=0xa30d00, source=0xa3bbd0) at src/app/srs_app_rtmp_conn.cpp:940
#6  0x00000000004d7a74 in SrsRtmpConn::publishing (this=0xa30d00, source=0xa3bbd0) at src/app/srs_app_rtmp_conn.cpp:822
#7  0x00000000004d5229 in SrsRtmpConn::stream_service_cycle (this=0xa30d00) at src/app/srs_app_rtmp_conn.cpp:534
#8  0x00000000004d4141 in SrsRtmpConn::service_cycle (this=0xa30d00) at src/app/srs_app_rtmp_conn.cpp:388
#9  0x00000000004d2f09 in SrsRtmpConn::do_cycle (this=0xa30d00) at src/app/srs_app_rtmp_conn.cpp:209
#10 0x00000000004d10fb in SrsConnection::cycle (this=0xa30d78) at src/app/srs_app_conn.cpp:171
#11 0x0000000000509c88 in SrsSTCoroutine::cycle (this=0xa30f90) at src/app/srs_app_st.cpp:198
#12 0x0000000000509cfd in SrsSTCoroutine::pfn (arg=0xa30f90) at src/app/srs_app_st.cpp:213
#13 0x00000000005bdd9d in _st_thread_main () at sched.c:337
#14 0x00000000005be515 in st_thread_create (start=0x5bd719 <_st_vp_schedule>, arg=0x700000001, joinable=1, stk_size=1) at sched.c:616
3. Key functions and classes in the SRS source code
Whether RTMP is publishing or playing, each client corresponds to one connection; an HTTP-FLV client likewise corresponds to one connection, and so does an HLS client.
(1) Important functions and files in the source code
Important data-handling functions in the SRS source:
SrsLiveStream::do_serve_http: handles sending data to the client.
SrsHttpConn: each HTTP client (and, analogously, each RTMP client) has such a connection object.
SrsConsumer: every SrsHttpConn is paired with a consumer, SrsConsumer, whether for RTMP or HTTP. SrsConsumer was covered in earlier articles; it acts as an intermediate data buffer.
(2) Important classes in the source code
SrsBufferCache: the cache used by the HTTP live stream encoders.
SrsFlvStreamEncoder: transmuxes RTMP into an HTTP FLV stream.
SrsTsStreamEncoder: transmuxes RTMP into an HTTP TS stream.
SrsAacStreamEncoder: takes the AAC audio carried in RTMP and serves it as an HTTP AAC stream.
SrsMp3StreamEncoder: takes the MP3 audio carried in RTMP and serves it as an HTTP MP3 stream.
SrsBufferWriter: writes the stream directly into the HTTP response.
SrsLiveStream: the HTTP live stream, which turns RTMP into HTTP-FLV or another format; it is the actual handler. SrsLiveEntry is the live entry used to manage an HTTP live stream.
SrsHttpStreamServer: the HTTP live streaming service, which serves the muxed FLV/TS/MP3/AAC streams.
SrsHttpResponseWriter: responsible for sending data to the client; underneath it uses SrsStSocket to do the actual send.
SrsHttpServeMux: the HTTP request multiplexer, essentially a router that records each path and its corresponding handler (see the sketch below).
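To illustrate the path-to-handler idea behind SrsHttpServeMux (a simplified model only, not SRS's actual implementation), a request multiplexer can be thought of as a map from a mount pattern to a handler:

// Simplified model of a path -> handler router; names are illustrative only.
#include <functional>
#include <map>
#include <string>
#include <cstdio>

using Handler = std::function<void(const std::string& path)>;

class TinyServeMux {
public:
    void handle(const std::string& pattern, Handler h) { routes_[pattern] = h; }
    bool serve(const std::string& path) {
        auto it = routes_.find(path);
        if (it == routes_.end()) return false;   // SRS also supports wildcard patterns
        it->second(path);
        return true;
    }
private:
    std::map<std::string, Handler> routes_;
};

int main() {
    TinyServeMux mux;
    // When RTMP publishing starts, SRS mounts a live-stream handler at a URL like this one.
    mux.handle("/live/livestream.flv", [](const std::string& p) {
        std::printf("serving HTTP-FLV for %s\n", p.c_str());
    });
    // When a player requests that URL, the handler is found and invoked.
    mux.serve("/live/livestream.flv");
    return 0;
}

In SRS the mount pattern comes from the http_remux mount configuration shown earlier, and the handler mounted at publish time is the SrsLiveStream described above.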
4. SRS source code walkthrough
From the source we can see that both the HTTP and the RTMP connection classes inherit from SrsConnection. The source is as follows:
// The http connection which request the static or stream content.
class SrsHttpConn : public SrsConnection
{
protected:
    SrsHttpParser* parser;
    ISrsHttpServeMux* http_mux;
    SrsHttpCorsMux* cors;
public:
    SrsHttpConn(IConnectionManager* cm, srs_netfd_t fd, ISrsHttpServeMux* m, std::string cip);
    virtual ~SrsHttpConn();
// Interface ISrsKbpsDelta
public:
    virtual void remark(int64_t* in, int64_t* out);
protected:
    virtual srs_error_t do_cycle();
protected:
    // When got http message,
    // for the static service or api, discard any body.
    // for the stream caster, for instance, http flv streaming, may discard the flv header or not.
    virtual srs_error_t on_got_http_message(ISrsHttpMessage* msg) = 0;
private:
    virtual srs_error_t process_request(ISrsHttpResponseWriter* w, ISrsHttpMessage* r);
    // When the connection disconnect, call this method.
    // e.g. log msg of connection and report to other system.
    // @param request: request which is converted by the last http message.
    virtual srs_error_t on_disconnect(SrsRequest* req);
// Interface ISrsReloadHandler
public:
    virtual srs_error_t on_reload_http_stream_crossdomain();
};
SrsRtmpConn also inherits from SrsConnection. The source is as follows:
// The client provides the main logic control for RTMP clients.
class SrsRtmpConn : virtual public SrsConnection, virtual public ISrsReloadHandler
{
    // For the thread to directly access any field of connection.
    friend class SrsPublishRecvThread;
private:
    SrsServer* server;
    SrsRtmpServer* rtmp;
    SrsRefer* refer;
    SrsBandwidth* bandwidth;
    SrsSecurity* security;
    // The wakable handler, maybe NULL.
    // TODO: FIXME: Should refine the state for receiving thread.
    ISrsWakable* wakable;
    // The elapsed duration in srs_utime_t
    // For live play duration, for instance, rtmpdump to record.
    // @see https://github.com/ossrs/srs/issues/47
    srs_utime_t duration;
    // The MR(merged-write) sleep time in srs_utime_t.
    srs_utime_t mw_sleep;
    // The MR(merged-write) only enabled for play.
    int mw_enabled;
    // For realtime
    // @see https://github.com/ossrs/srs/issues/257
    bool realtime;
    // The minimal interval in srs_utime_t for delivery stream.
    srs_utime_t send_min_interval;
    // The publish 1st packet timeout in srs_utime_t
    srs_utime_t publish_1stpkt_timeout;
    // The publish normal packet timeout in srs_utime_t
    srs_utime_t publish_normal_timeout;
    // Whether enable the tcp_nodelay.
    bool tcp_nodelay;
    // About the rtmp client.
    SrsClientInfo* info;
public:
    SrsRtmpConn(SrsServer* svr, srs_netfd_t c, std::string cip);
    virtual ~SrsRtmpConn();
public:
    virtual void dispose();
protected:
    virtual srs_error_t do_cycle();
// Interface ISrsReloadHandler
public:
    virtual srs_error_t on_reload_vhost_removed(std::string vhost);
    virtual srs_error_t on_reload_vhost_play(std::string vhost);
    virtual srs_error_t on_reload_vhost_tcp_nodelay(std::string vhost);
    virtual srs_error_t on_reload_vhost_realtime(std::string vhost);
    virtual srs_error_t on_reload_vhost_publish(std::string vhost);
// Interface ISrsKbpsDelta
public:
    virtual void remark(int64_t* in, int64_t* out);
private:
    // When valid and connected to vhost/app, service the client.
    virtual srs_error_t service_cycle();
    // The stream(play/publish) service cycle, identify client first.
    virtual srs_error_t stream_service_cycle();
    virtual srs_error_t check_vhost(bool try_default_vhost);
    virtual srs_error_t playing(SrsSource* source);
    virtual srs_error_t do_playing(SrsSource* source, SrsConsumer* consumer, SrsQueueRecvThread* trd);
    virtual srs_error_t publishing(SrsSource* source);
    virtual srs_error_t do_publishing(SrsSource* source, SrsPublishRecvThread* trd);
    virtual srs_error_t acquire_publish(SrsSource* source);
    virtual void release_publish(SrsSource* source);
    virtual srs_error_t handle_publish_message(SrsSource* source, SrsCommonMessage* msg);
    virtual srs_error_t process_publish_message(SrsSource* source, SrsCommonMessage* msg);
    virtual srs_error_t process_play_control_msg(SrsConsumer* consumer, SrsCommonMessage* msg);
    virtual void change_mw_sleep(srs_utime_t sleep_v);
    virtual void set_sock_options();
private:
    virtual srs_error_t check_edge_token_traverse_auth();
    virtual srs_error_t do_token_traverse_auth(SrsRtmpClient* client);
private:
    // When the connection disconnect, call this method.
    // e.g. log msg of connection and report to other system.
    virtual srs_error_t on_disconnect();
private:
    virtual srs_error_t http_hooks_on_connect();
    virtual void http_hooks_on_close();
    virtual srs_error_t http_hooks_on_publish();
    virtual void http_hooks_on_unpublish();
    virtual srs_error_t http_hooks_on_play();
    virtual void http_hooks_on_stop();
};
As covered in earlier articles, publishing an RTMP stream creates the data source, which in the code is the SOURCE. An HTTP-FLV client also pulls its data from that source, and it too has to bind a consumer to it; this idea has come up repeatedly in the previous articles.
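As a mental model only (a simplified sketch, not the real SrsSource/SrsConsumer classes, which also handle jitter correction, the gop cache and queue management), the source fans every message out to each registered consumer, and each consumer keeps its own queue that its connection drains:

// Simplified fan-out model: one source, many consumers, each with its own queue.
// Names and structure are illustrative only.
#include <memory>
#include <queue>
#include <string>
#include <vector>
#include <cstdio>

struct Message { std::string payload; };

class Consumer {
public:
    void enqueue(const Message& m) { q_.push(m); }
    // Drain up to max messages, mirroring the idea of dump_packets().
    int dump_packets(std::vector<Message>& out, int max) {
        int n = 0;
        while (!q_.empty() && n < max) { out.push_back(q_.front()); q_.pop(); ++n; }
        return n;
    }
private:
    std::queue<Message> q_;
};

class Source {
public:
    std::shared_ptr<Consumer> create_consumer() {
        auto c = std::make_shared<Consumer>();
        consumers_.push_back(c);
        return c;
    }
    // Called for every audio/video message coming from the RTMP publisher.
    void on_message(const Message& m) {
        for (auto& c : consumers_) c->enqueue(m);
    }
private:
    std::vector<std::shared_ptr<Consumer>> consumers_;
};

int main() {
    Source source;
    auto flv_client = source.create_consumer();   // e.g. an HTTP-FLV connection
    source.on_message({"video tag"});
    std::vector<Message> msgs;
    int n = flv_client->dump_packets(msgs, 128);
    std::printf("got %d message(s)\n", n);
    return 0;
}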
5. Debugging the source code
First run the SRS server under gdb:
gdb ./objs/srs
The screen looks like this:
Then run the commands:
set args -c ./conf/srs.conf
r
The screen looks like this:
On the Windows machine, start the ffmpeg publish; the publish command was given above.
The screen looks like this:
Once publishing succeeds, play the stream with ffplay on the Windows machine; the play commands were also given above.
The screen looks like this:
On the playback side you can see the HTTP-FLV log output, as shown below:
Capture the HTTP-FLV packets with Wireshark; set the display filter to: http or tcp.port==8081
The screen looks like this:
The playback client requests the SRS server with the path /live/livestream.flv HTTP/1.1, using the GET method.
As shown below:
The request packet looks like this:
The Wireshark capture also shows the SRS server's response to the client, which carries no Content-Length. The server-to-client response exchange looks like this:
6. How is HTTP-FLV implemented in the FFmpeg source?
At this point the client starts publishing; after stepping through with the debugger, the overall flow is as shown in the figure below:
The source below reflects the HTTP listening process (RTMP is similar), and the analysis follows this flow:
run_master() --> SrsServer::listen() --> SrsServer::listen_http_stream()
(1) The main() function, src/main/srs_main_server.cpp, line 192.
(2) The do_main() function, src/main/srs_main_server.cpp, line 184.
(3) The run() function, src/main/srs_main_server.cpp, line 409.
(4) The run_master() function, src/main/srs_main_server.cpp, line 469.
(5) SrsServer::listen(), src/app/srs_app_server.cpp, line 880.
(6) SrsServer::listen_http_stream(), src/app/srs_app_server.cpp, line 1295.
Searching for http_code in the FFmpeg source turns it up in http.c, where the HTTP handling is implemented. The source lives at the following path.
On the SRS server side, the analysis starts from SrsServer::listen(), the common entry point where the listeners for every protocol are set up.
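The body of listen() is not reproduced in this article; roughly speaking, it starts one listener per configured protocol (RTMP on 1935, the HTTP API on 1985, the HTTP stream on 8081 with the configuration above), and listen_http_stream() is the branch we care about. A self-contained toy model of that dispatch (a sketch, not SRS code):

// Simplified model of the dispatch in SrsServer::listen(): one listener per protocol.
// Illustrative only; the real function returns srs_error_t and wraps errors.
#include <cstdio>

// Stand-ins for the real SRS listener setup functions.
static bool listen_rtmp()        { std::printf("listen rtmp on 1935\n");     return true; }
static bool listen_http_api()    { std::printf("listen api on 1985\n");      return true; }
static bool listen_http_stream() { std::printf("listen http-flv on 8081\n"); return true; }

static bool server_listen()
{
    // The entry point tries each protocol in turn and fails fast on error,
    // which is how the analysis ends up inside listen_http_stream() below.
    if (!listen_rtmp())        return false;
    if (!listen_http_api())    return false;
    if (!listen_http_stream()) return false;
    return true;
}

int main() { return server_listen() ? 0 : 1; }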
For the HTTP protocol, listen_http_stream() is called, so the analysis moves on to the HTTP listen. The corresponding source is as follows:
srs_error_t SrsServer::listen_http_stream()
{
    srs_error_t err = srs_success;
    
    close_listeners(SrsListenerHttpStream);
    if (_srs_config->get_http_stream_enabled()) {
        SrsListener* listener = new SrsBufferListener(this, SrsListenerHttpStream);
        listeners.push_back(listener);
        
        std::string ep = _srs_config->get_http_stream_listen();
        
        std::string ip;
        int port;
        srs_parse_endpoint(ep, ip, port);
        
        if ((err = listener->listen(ip, port)) != srs_success) {
            return srs_error_wrap(err, "http stream listen %s:%d", ip.c_str(), port);
        }
    }
    
    return err;
}
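The endpoint string read from the configuration (8081 in the config above, or possibly an ip:port pair) is split into an ip and a port by srs_parse_endpoint(). A small illustration of that kind of parsing (this is not SRS's actual helper, just the idea):

// Illustrative endpoint parsing: "8081" -> 0.0.0.0:8081, "192.168.1.10:8081" -> that ip/port.
// Mirrors the idea of srs_parse_endpoint(), not its exact implementation.
#include <cstdio>
#include <cstdlib>
#include <string>

static void parse_endpoint(const std::string& ep, std::string& ip, int& port)
{
    size_t pos = ep.rfind(':');
    if (pos == std::string::npos) {
        ip = "0.0.0.0";                        // port only: listen on all interfaces
        port = std::atoi(ep.c_str());
    } else {
        ip = ep.substr(0, pos);
        port = std::atoi(ep.substr(pos + 1).c_str());
    }
}

int main() {
    std::string ip; int port = 0;
    parse_endpoint("8081", ip, port);
    std::printf("%s:%d\n", ip.c_str(), port);  // prints 0.0.0.0:8081
    return 0;
}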
7. Debugging the HTTP connection while pulling the stream
Set a breakpoint with the following command and start debugging:
b SrsServer::listen_http_stream()
The screen looks like this:
Enter the command:
n
to step through the code line by line.
At this point, if a client starts pulling the stream, you can see the SRS server's call stack, as shown below:
This HTTP flow is similar to the RTMP flow analysed earlier:
(1) st_thread_create, at sched.c:616.
(2) _st_thread_main, at sched.c:337.
(3) SrsSTCoroutine::pfn, at src/app/srs_app_st.cpp:213.
(4) SrsSTCoroutine::cycle, at src/app/srs_app_st.cpp:198.
(5) SrsTcpListener::cycle, at src/app/srs_app_listener.cpp:202.
(6) SrsBufferListener::on_tcp_client, at src/app/srs_app_server.cpp:167.
(7) SrsServer::accept_client, with listener type SrsListenerHttpStream, at src/app/srs_app_server.cpp:1400.
(8) SrsServer::fd2conn, with listener type SrsListenerHttpStream, at src/app/srs_app_server.cpp:1465.
Every kind of client eventually reaches do_serve_http. When a playback client pulls the HTTP data, which carries the actual audio and video, the analysis starts from here. The source is as follows:
srs_error_t SrsLiveStream::do_serve_http(ISrsHttpResponseWriter* w, ISrsHttpMessage* r)
{
    srs_error_t err = srs_success;
    
    string enc_desc;
    ISrsBufferEncoder* enc = NULL;
    
    srs_assert(entry);
    if (srs_string_ends_with(entry->pattern, ".flv")) {
        w->header()->set_content_type("video/x-flv");
        enc_desc = "FLV";
        enc = new SrsFlvStreamEncoder();
    } else if (srs_string_ends_with(entry->pattern, ".aac")) {
        w->header()->set_content_type("audio/x-aac");
        enc_desc = "AAC";
        enc = new SrsAacStreamEncoder();
    } else if (srs_string_ends_with(entry->pattern, ".mp3")) {
        w->header()->set_content_type("audio/mpeg");
        enc_desc = "MP3";
        enc = new SrsMp3StreamEncoder();
    } else if (srs_string_ends_with(entry->pattern, ".ts")) {
        w->header()->set_content_type("video/MP2T");
        enc_desc = "TS";
        enc = new SrsTsStreamEncoder();
    } else {
        return srs_error_new(ERROR_HTTP_LIVE_STREAM_EXT, "invalid pattern=%s", entry->pattern.c_str());
    }
    SrsAutoFree(ISrsBufferEncoder, enc);
    
    // Enter chunked mode, because we didn't set the content-length.
    w->write_header(SRS_CONSTS_HTTP_OK);
    
    // create consumer of souce, ignore gop cache, use the audio gop cache.
    SrsConsumer* consumer = NULL;
    if ((err = source->create_consumer(NULL, consumer, true, true, !enc->has_cache())) != srs_success) {
        return srs_error_wrap(err, "create consumer");
    }
    SrsAutoFree(SrsConsumer, consumer);
    srs_verbose("http: consumer created success.");
    
    SrsPithyPrint* pprint = SrsPithyPrint::create_http_stream();
    SrsAutoFree(SrsPithyPrint, pprint);
    
    SrsMessageArray msgs(SRS_PERF_MW_MSGS);
    
    // Use receive thread to accept the close event to avoid FD leak.
    // @see https://github.com/ossrs/srs/issues/636#issuecomment-298208427
    SrsHttpMessage* hr = dynamic_cast<SrsHttpMessage*>(r);
    SrsResponseOnlyHttpConn* hc = dynamic_cast<SrsResponseOnlyHttpConn*>(hr->connection());
    
    // update the statistic when source disconveried.
    SrsStatistic* stat = SrsStatistic::instance();
    if ((err = stat->on_client(_srs_context->get_id(), req, hc, SrsRtmpConnPlay)) != srs_success) {
        return srs_error_wrap(err, "stat on client");
    }
    
    // the memory writer.
    SrsBufferWriter writer(w);
    if ((err = enc->initialize(&writer, cache)) != srs_success) {
        return srs_error_wrap(err, "init encoder");
    }
    
    // if gop cache enabled for encoder, dump to consumer.
    if (enc->has_cache()) {
        if ((err = enc->dump_cache(consumer, source->jitter())) != srs_success) {
            return srs_error_wrap(err, "encoder dump cache");
        }
    }
    
    SrsFlvStreamEncoder* ffe = dynamic_cast<SrsFlvStreamEncoder*>(enc);
    
    // Set the socket options for transport.
    bool tcp_nodelay = _srs_config->get_tcp_nodelay(req->vhost);
    if (tcp_nodelay) {
        if ((err = hc->set_tcp_nodelay(tcp_nodelay)) != srs_success) {
            return srs_error_wrap(err, "set tcp nodelay");
        }
    }
    
    srs_utime_t mw_sleep = _srs_config->get_mw_sleep(req->vhost);
    if ((err = hc->set_socket_buffer(mw_sleep)) != srs_success) {
        return srs_error_wrap(err, "set mw_sleep %" PRId64, mw_sleep);
    }
    
    SrsHttpRecvThread* trd = new SrsHttpRecvThread(hc);
    SrsAutoFree(SrsHttpRecvThread, trd);
    
    if ((err = trd->start()) != srs_success) {
        return srs_error_wrap(err, "start recv thread");
    }
    
    srs_trace("FLV %s, encoder=%s, nodelay=%d, mw_sleep=%dms, cache=%d, msgs=%d",
        entry->pattern.c_str(), enc_desc.c_str(), tcp_nodelay, srsu2msi(mw_sleep), enc->has_cache(), msgs.max);
    
    // TODO: free and erase the disabled entry after all related connections is closed.
    // TODO: FXIME: Support timeout for player, quit infinite-loop.
    while (entry->enabled) {
        // Whether client closed the FD.
        if ((err = trd->pull()) != srs_success) {
            return srs_error_wrap(err, "recv thread");
        }
        
        pprint->elapse();
        
        // get messages from consumer.
        // each msg in msgs.msgs must be free, for the SrsMessageArray never free them.
        int count = 0;
        if ((err = consumer->dump_packets(&msgs, count)) != srs_success) {
            return srs_error_wrap(err, "consumer dump packets");
        }
        
        if (count <= 0) {
            // Directly use sleep, donot use consumer wait, because we couldn't awake consumer.
            srs_usleep(mw_sleep);
            // ignore when nothing got.
            continue;
        }
        
        if (pprint->can_print()) {
            srs_trace("-> " SRS_CONSTS_LOG_HTTP_STREAM " http: got %d msgs, age=%d, min=%d, mw=%d",
                count, pprint->age(), SRS_PERF_MW_MIN_MSGS, srsu2msi(mw_sleep));
        }
        
        // sendout all messages.
        if (ffe) {
            err = ffe->write_tags(msgs.msgs, count);
        } else {
            err = streaming_send_messages(enc, msgs.msgs, count);
        }
        
        // free the messages.
        for (int i = 0; i < count; i++) {
            SrsSharedPtrMessage* msg = msgs.msgs[i];
            srs_freep(msg);
        }
        
        // check send error code.
        if (err != srs_success) {
            return srs_error_wrap(err, "send messages");
        }
    }
    
    // Here, the entry is disabled by encoder un-publishing or reloading,
    // so we must return a io.EOF error to disconnect the client, or the client will never quit.
    return srs_error_new(ERROR_HTTP_STREAM_EOF, "Stream EOF");
}
As the source shows, for a .flv mount pattern a SrsFlvStreamEncoder is created here; it is the encoder that muxes the data into FLV for the playback client to consume.
Next, set a breakpoint and debug. Enter the command:
b SrsLiveStream::do_serve_http(ISrsHttpResponseWriter* w, ISrsHttpMessage* r)
The screen looks like this:
Enter the command to continue running:
c
Now start the playback client, then enter the command:
bt
to view the call stack, as shown below:
(1) st_thread_create, at sched.c:616.
(2) _st_thread_main, at sched.c:337.
(3) SrsSTCoroutine::pfn, at src/app/srs_app_st.cpp:213.
(4) SrsSTCoroutine::cycle, at src/app/srs_app_st.cpp:198.
(5) SrsConnection::cycle, at src/app/srs_app_conn.cpp:171.
(6) SrsHttpConn::do_cycle, at src/app/srs_app_http_conn.cpp:133.
(7) SrsHttpConn::process_request, at src/app/srs_app_http_conn.cpp:161.
(8) SrsHttpCorsMux::serve_http, at src/protocol/srs_http_stack.cpp:859.
(9) SrsHttpServer::serve_http, at src/app/srs_app_http_conn.cpp:300.
(10) SrsHttpServeMux::serve_http, at src/protocol/srs_http_stack.cpp:711.
(11) SrsLiveStream::serve_http, at src/app/srs_app_http_stream.cpp:544.
(12) SrsLiveStream::do_serve_http, at src/app/srs_app_http_stream.cpp:552.
After the playback side reads messages with consumer->dump_packets(&msgs, count), it hands them to ffe->write_tags(msgs.msgs, count), which is bound to an encoder (here, the FLV muxer). The source is shown below.
This is inside SrsLiveStream::do_serve_http(ISrsHttpResponseWriter* w, ISrsHttpMessage* r):
        int count = 0;
        if ((err = consumer->dump_packets(&msgs, count)) != srs_success) {
            return srs_error_wrap(err, "consumer dump packets");
        }
        
        if (count <= 0) {
            // Directly use sleep, donot use consumer wait, because we couldn't awake consumer.
            srs_usleep(mw_sleep);
            // ignore when nothing got.
            continue;
        }
        
        if (pprint->can_print()) {
            srs_trace("-> " SRS_CONSTS_LOG_HTTP_STREAM " http: got %d msgs, age=%d, min=%d, mw=%d",
                count, pprint->age(), SRS_PERF_MW_MIN_MSGS, srsu2msi(mw_sleep));
        }
        
        // sendout all messages.
        if (ffe) {
            err = ffe->write_tags(msgs.msgs, count);
        } else {
            err = streaming_send_messages(enc, msgs.msgs, count);
        }
The corresponding source file is srs_app_source.cpp. The playback side reads messages through SrsConsumer::dump_packets(SrsMessageArray* msgs, int& count); this call was analysed in more detail in earlier articles.
srs_error_t SrsConsumer::dump_packets(SrsMessageArray* msgs, int& count)
{
    srs_error_t err = srs_success;
    
    srs_assert(count >= 0);
    srs_assert(msgs->max > 0);
    
    // the count used as input to reset the max if positive.
    int max = count ? srs_min(count, msgs->max) : msgs->max;
    
    // the count specifies the max acceptable count,
    // here maybe 1+, and we must set to 0 when got nothing.
    count = 0;
    
    if (should_update_source_id) {
        srs_trace("update source_id=%d[%d]", source->source_id(), source->source_id());
        should_update_source_id = false;
    }
    
    // paused, return nothing.
    if (paused) {
        return err;
    }
    
    // pump msgs from queue.
    if ((err = queue->dump_packets(max, msgs->msgs, count)) != srs_success) {
        return srs_error_wrap(err, "dump packets");
    }
    
    return err;
}
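For intuition, queue->dump_packets(max, ...) simply moves up to max queued messages into the caller's array and reports how many it moved. A simplified model of that step (not the real SrsMessageQueue, which also tracks queue duration and can shrink itself) might look like this:

// Simplified model of "dump up to max messages from a queue into an array".
#include <algorithm>
#include <cstdio>
#include <vector>

struct SharedMsg { int seq; };

class TinyMessageQueue {
public:
    void enqueue(SharedMsg* m) { msgs_.push_back(m); }
    // Move at most max messages into pmsgs; report how many were moved in count.
    void dump_packets(int max, SharedMsg** pmsgs, int& count) {
        count = std::min<int>(max, (int)msgs_.size());
        std::copy(msgs_.begin(), msgs_.begin() + count, pmsgs);
        msgs_.erase(msgs_.begin(), msgs_.begin() + count);
    }
private:
    std::vector<SharedMsg*> msgs_;
};

int main() {
    TinyMessageQueue q;
    SharedMsg a{1}, b{2}, c{3};
    q.enqueue(&a); q.enqueue(&b); q.enqueue(&c);
    
    SharedMsg* out[2];
    int count = 0;
    q.dump_packets(2, out, count);      // moves the first two messages
    std::printf("dumped %d msgs, first seq=%d\n", count, out[0]->seq);
    return 0;
}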
In srs_app_http_stream.cpp, ffe->write_tags(msgs.msgs, count) is then called (it writes both the FLV header and the data), binding the encoder, i.e. the muxing format. A more detailed analysis will follow in a later article. The source is as follows:
srs_error_t SrsFlvStreamEncoder::write_tags(SrsSharedPtrMessage** msgs, int count)
{
    srs_error_t err = srs_success;
    
    // For https://github.com/ossrs/srs/issues/939
    if (!header_written) {
        bool has_video = false;
        bool has_audio = false;
        
        for (int i = 0; i < count && (!has_video || !has_audio); i++) {
            SrsSharedPtrMessage* msg = msgs[i];
            if (msg->is_video()) {
                has_video = true;
            } else if (msg->is_audio()) {
                has_audio = true;
            }
        }
        
        // Drop data if no A+V.
        if (!has_video && !has_audio) {
            return err;
        }
        
        if ((err = write_header(has_video, has_audio)) != srs_success) {
            return srs_error_wrap(err, "write header");
        }
    }
    
    return enc->write_tags(msgs, count);
}
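The FLV header that write_header() eventually emits is small and fixed by the FLV specification: the signature 'F' 'L' 'V', version 1, a flags byte whose bits mark whether audio and video are present, a 4-byte DataOffset of 9, then a 4-byte PreviousTagSize0 of 0. A minimal illustration of building those bytes (illustrative only, not SRS's SrsFlvTransmuxer code):

// Build the 9-byte FLV file header plus PreviousTagSize0, per the FLV spec.
#include <cstdint>
#include <cstdio>
#include <vector>

static std::vector<uint8_t> make_flv_header(bool has_video, bool has_audio)
{
    uint8_t flags = 0;
    if (has_audio) flags |= 0x04;   // bit 2: audio present
    if (has_video) flags |= 0x01;   // bit 0: video present
    
    std::vector<uint8_t> h = {
        'F', 'L', 'V',              // signature
        0x01,                       // version
        flags,                      // TypeFlags
        0x00, 0x00, 0x00, 0x09,     // DataOffset: header size = 9
        0x00, 0x00, 0x00, 0x00      // PreviousTagSize0
    };
    return h;
}

int main() {
    auto h = make_flv_header(true, true);
    for (uint8_t b : h) std::printf("%02x ", b);   // 46 4c 56 01 05 00 00 00 09 00 00 00 00
    std::printf("\n");
    return 0;
}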
8. The SRS server sends data to the client while it pulls the stream
The debugging screen looks like this:
Set a breakpoint in the function the SRS server uses to send data to the client and trace the call flow. Enter the command:
b SrsHttpResponseWriter::writev
(1) The trace below includes the connection setup described earlier and reflects the function call relationship (again read bottom-up, i.e. from frame 15 to frame 0):
#0  SrsHttpResponseWriter::writev (this=0x7ffff7f1ebd0, iov=0xaeaa80, iovcnt=240, pnwrite=0x0) at src/service/srs_service_http_conn.cpp:784
#1  0x00000000004fde62 in SrsBufferWriter::writev (this=0x7ffff7f1e860, iov=0xaeaa80, iovcnt=240, pnwrite=0x0) at src/app/srs_app_http_stream.cpp:511
#2  0x000000000040f109 in SrsFlvTransmuxer::write_tags (this=0xb92fb0, msgs=0xaea310, count=80) at src/kernel/srs_kernel_flv.cpp:538
#3  0x00000000004fd0b1 in SrsFlvStreamEncoder::write_tags (this=0xb51490, msgs=0xaea310, count=80) at src/app/srs_app_http_stream.cpp:345
#4  0x00000000004ff0dc in SrsLiveStream::do_serve_http (this=0xa3d9f0, w=0x7ffff7f1ebd0, r=0xb92840) at src/app/srs_app_http_stream.cpp:677
#5  0x00000000004fe108 in SrsLiveStream::serve_http (this=0xa3d9f0, w=0x7ffff7f1ebd0, r=0xb92840) at src/app/srs_app_http_stream.cpp:544
#6  0x000000000049c86f in SrsHttpServeMux::serve_http (this=0xa11fe0, w=0x7ffff7f1ebd0, r=0xb92840) at src/protocol/srs_http_stack.cpp:711
#7  0x0000000000562080 in SrsHttpServer::serve_http (this=0xa11e00, w=0x7ffff7f1ebd0, r=0xb92840) at src/app/srs_app_http_conn.cpp:300
#8  0x000000000049d6be in SrsHttpCorsMux::serve_http (this=0xb37440, w=0x7ffff7f1ebd0, r=0xb92840) at src/protocol/srs_http_stack.cpp:859
#9  0x0000000000561086 in SrsHttpConn::process_request (this=0xb93ff0, w=0x7ffff7f1ebd0, r=0xb92840) at src/app/srs_app_http_conn.cpp:161
#10 0x0000000000560ce8 in SrsHttpConn::do_cycle (this=0xb93ff0) at src/app/srs_app_http_conn.cpp:133
#11 0x00000000004d10fb in SrsConnection::cycle (this=0xb93ff0) at src/app/srs_app_conn.cpp:171
#12 0x0000000000509c88 in SrsSTCoroutine::cycle (this=0xb93f10) at src/app/srs_app_st.cpp:198
#13 0x0000000000509cfd in SrsSTCoroutine::pfn (arg=0xb93f10) at src/app/srs_app_st.cpp:213
#14 0x00000000005bdd9d in _st_thread_main () at sched.c:337
#15 0x00000000005be515 in st_thread_create (start=0x5bd719 <_st_vp_schedule>, arg=0x900000001, joinable=1, stk_size=1) at sched.c:616
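Because the response declared Transfer-Encoding: chunked and carried no Content-Length, every write the server makes has to be framed as an HTTP chunk: the payload size in hex, CRLF, the payload, CRLF. A small illustration of that framing (conceptual only; as the backtrace shows, SRS hands the chunk header and payload buffers to writev rather than copying them into one buffer):

// Frame a payload as one HTTP/1.1 chunk: "<size-in-hex>\r\n<payload>\r\n".
#include <cstdio>
#include <string>

static std::string make_chunk(const std::string& payload)
{
    char head[32];
    std::snprintf(head, sizeof(head), "%zx\r\n", payload.size());
    return std::string(head) + payload + "\r\n";
}

int main() {
    std::string chunk = make_chunk("FLV tag bytes...");
    // prints: 10\r\nFLV tag bytes...\r\n  (0x10 == 16 bytes of payload)
    std::printf("%s", chunk.c_str());
    return 0;
}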
(2) The playback flow when a client pulls HTTP-FLV
Once RTMP publishing succeeds, we trace the SRS server's playback flow while a client pulls the stream. When an HTTP-FLV stream is pulled, every player gets its own independent SrsFlvStreamEncoder; they do not affect one another. The call relationship reads bottom-up, i.e. from frame 11 to frame 0:
#0  SrsFlvStreamEncoder::SrsFlvStreamEncoder (this=0xa57820) at src/app/srs_app_http_stream.cpp:250
#1  0x00000000004fe2fd in SrsLiveStream::do_serve_http (this=0xa3da20, w=0x7ffff7eb5bd0, r=0xa5d7c0) at src/app/srs_app_http_stream.cpp:562
#2  0x00000000004fe108 in SrsLiveStream::serve_http (this=0xa3da20, w=0x7ffff7eb5bd0, r=0xa5d7c0) at src/app/srs_app_http_stream.cpp:544
#3  0x000000000049c86f in SrsHttpServeMux::serve_http (this=0xa11fe0, w=0x7ffff7eb5bd0, r=0xa5d7c0) at src/protocol/srs_http_stack.cpp:711
#4  0x0000000000562080 in SrsHttpServer::serve_http (this=0xa11e00, w=0x7ffff7eb5bd0, r=0xa5d7c0) at src/app/srs_app_http_conn.cpp:300
#5  0x000000000049d6be in SrsHttpCorsMux::serve_http (this=0xa52930, w=0x7ffff7eb5bd0, r=0xa5d7c0) at src/protocol/srs_http_stack.cpp:859
#6  0x0000000000561086 in SrsHttpConn::process_request (this=0xa5d120, w=0x7ffff7eb5bd0, r=0xa5d7c0) at src/app/srs_app_http_conn.cpp:161
#7  0x0000000000560ce8 in SrsHttpConn::do_cycle (this=0xa5d120) at src/app/srs_app_http_conn.cpp:133
#8  0x00000000004d10fb in SrsConnection::cycle (this=0xa5d120) at src/app/srs_app_conn.cpp:171
#9  0x0000000000509c88 in SrsSTCoroutine::cycle (this=0xa5d1c0) at src/app/srs_app_st.cpp:198
#10 0x0000000000509cfd in SrsSTCoroutine::pfn (arg=0xa5d1c0) at src/app/srs_app_st.cpp:213
#11 0x00000000005bdd9d in _st_thread_main () at sched.c:337
(3) Part of the log output while a client pulls the HTTP-FLV stream:
[Trace][10457][554] HTTP client ip=175.0.54.116, request=0, to=15000ms
[Trace][10457][554] HTTP GET http://111.229.231.225:8081/live/livestream.flv, content-length=-1
[Trace][10457][554] http: mount flv stream for sid=/live/livestream, mount=/live/livestream.flv
[Trace][10457][554] flv: source url=/live/livestream, is_edge=0, source_id=-1[-1]
[Trace][10457][554] create consumer, active=0, queue_size=0.00, jitter=30000000
[Trace][10457][554] set fd=10, SO_SNDBUF=46080=>175000, buffer=350ms
[Trace][10457][554] FLV /live/livestream.flv, encoder=FLV, nodelay=0, mw_sleep=350ms, cache=0, msgs=128
9. Summary
Building on the earlier articles, this one walked through the path from a client publishing RTMP to the SRS server through to a playback client pulling the HTTP-FLV data, and used the debugger to trace the SRS server's function call flow. Hopefully this helps untangle the relationships involved; it is very useful when analysing the source code.
That is all for this article. You are welcome to follow, share, like, bookmark, and discuss in the comments.
Follow-up material on the project will be published on the WeChat official account. If you want to study the project, follow the WeChat official account "記錄世界 from antonio".