Where the problem came from
The problem came up in the cnode community: how much memory node occupies at startup.
I tried it locally myself: start a node process and have it do nothing at all. My goodness, sure enough, the result was 900+ MB.
My machine:
$ cat /proc/version
Linux version 4.13.0-38-generic (buildd@lgw01-amd64-027) (gcc version 5.4.0 20160609 (Ubuntu 5.4.0-6ubuntu1~16.04.9)) #43~16.04.1-Ubuntu SMP Wed Mar 14 17:48:43 UTC 2018
$ cat /proc/cpuinfo | grep processor
processor : 0
processor : 1
processor : 2
processor : 3
As we know, the memory a process really gets from the OS is its RSS (resident set size); in the usual sense, "how much memory a process occupies" means exactly this RSS. The 900+ MB figure here is virtual memory, so the title of that cnode thread is a bit misleading.
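You can put the two numbers side by side with ps; a quick check (the PID 28708 below is illustrative, taken from the pmap output later in this post — use whatever your node PID is):

$ ps -o pid,vsz,rss,comm -p 28708
# VSZ is the mapped virtual size (the 900+ MB figure); RSS is the much smaller resident size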
So I set out to find where exactly node requests all this virtual memory.
The investigation
pmap

Use pmap to inspect the virtual memory usage. In the command output below I omit entries smaller than 100 KB; interested readers can run it themselves for the full listing.
$ pmap -d 28708
28708:   ./node
Address           Kbytes Mode  Offset           Device    Mapping
0000000000400000   28540 r-x-- 0000000000000000 008:00005 node
00000000021df000       4 r---- 0000000001bdf000 008:00005 node
00000000021e0000     104 rw--- 0000000001be0000 008:00005 node
00000000021fa000    2168 rw--- 0000000000000000 000:00000   [ anon ]
0000000003bd6000    1352 rw--- 0000000000000000 000:00000   [ anon ]
0000031078d80000     512 rw--- 0000000000000000 000:00000   [ anon ]
000003cf96a00000     512 rw--- 0000000000000000 000:00000   [ anon ]
00000439e7900000     512 rw--- 0000000000000000 000:00000   [ anon ]
0000083024e00000     512 ----- 0000000000000000 000:00000   [ anon ]
000008c587300000     512 rw--- 0000000000000000 000:00000   [ anon ]
00000b1f3fb00000     512 ----- 0000000000000000 000:00000   [ anon ]
00000f247cf80000     512 rw--- 0000000000000000 000:00000   [ anon ]
0000169de244f000     196 ----- 0000000000000000 000:00000   [ anon ]
0000169de2485000     492 ----- 0000000000000000 000:00000   [ anon ]
0000169de2505000     492 ----- 0000000000000000 000:00000   [ anon ]
0000169de2585000     492 ----- 0000000000000000 000:00000   [ anon ]
0000169de2604000     492 rwx-- 0000000000000000 000:00000   [ anon ]
0000169de2684000     492 rwx-- 0000000000000000 000:00000   [ anon ]
0000169de2704000     492 rwx-- 0000000000000000 000:00000   [ anon ]
0000169de27ff000  520512 ----- 0000000000000000 000:00000   [ anon ]
000017db76f80000     316 rw--- 0000000000000000 000:00000   [ anon ]
0000219337b00000     512 rw--- 0000000000000000 000:00000   [ anon ]
000025cdf0280000     512 rw--- 0000000000000000 000:00000   [ anon ]
000025e610580000     512 ----- 0000000000000000 000:00000   [ anon ]
000026bff1500000     512 rw--- 0000000000000000 000:00000   [ anon ]
000028eaed980000     512 ----- 0000000000000000 000:00000   [ anon ]
0000309c9b900000     512 rw--- 0000000000000000 000:00000   [ anon ]
000031a7c5980000     512 ----- 0000000000000000 000:00000   [ anon ]
0000389d07380000     512 rw--- 0000000000000000 000:00000   [ anon ]
00003a0dee480000     512 rw--- 0000000000000000 000:00000   [ anon ]
00007f1630000000     132 rw--- 0000000000000000 000:00000   [ anon ]
00007f1630021000   65404 ----- 0000000000000000 000:00000   [ anon ]
00007f1634000000     132 rw--- 0000000000000000 000:00000   [ anon ]
00007f1634021000   65404 ----- 0000000000000000 000:00000   [ anon ]
00007f1638ffb000    8192 rw--- 0000000000000000 000:00000   [ anon ]
00007f16397fc000    8192 rw--- 0000000000000000 000:00000   [ anon ]
00007f1639ffd000    8192 rw--- 0000000000000000 000:00000   [ anon ]
00007f163a7fe000    8192 rw--- 0000000000000000 000:00000   [ anon ]
00007f163afff000    8192 rw--- 0000000000000000 000:00000   [ anon ]
00007f163b800000    8192 rw--- 0000000000000000 000:00000   [ anon ]
00007f163c000000     132 rw--- 0000000000000000 000:00000   [ anon ]
00007f163c021000   65404 ----- 0000000000000000 000:00000   [ anon ]
00007f1640000000     132 rw--- 0000000000000000 000:00000   [ anon ]
00007f1640021000   65404 ----- 0000000000000000 000:00000   [ anon ]
00007f1644000000     132 rw--- 0000000000000000 000:00000   [ anon ]
00007f1644021000   65404 ----- 0000000000000000 000:00000   [ anon ]
00007f1648713000    8192 rw--- 0000000000000000 000:00000   [ anon ]
00007f1648f14000    8192 rw--- 0000000000000000 000:00000   [ anon ]
00007f1649715000    8192 rw--- 0000000000000000 000:00000   [ anon ]
00007f1649f16000    8192 rw--- 0000000000000000 000:00000   [ anon ]
00007f164a716000    1792 r-x-- 0000000000000000 008:00005 libc-2.23.so
00007f164a8d6000    2048 ----- 00000000001c0000 008:00005 libc-2.23.so
00007f164aaf8000    2044 ----- 0000000000018000 008:00005 libpthread-2.23.so
00007f164ad13000    2044 ----- 0000000000016000 008:00005 libgcc_s.so.1
00007f164af13000    1056 r-x-- 0000000000000000 008:00005 libm-2.23.so
00007f164b01b000    2044 ----- 0000000000108000 008:00005 libm-2.23.so
00007f164b21c000    1480 r-x-- 0000000000000000 008:00005 libstdc++.so.6.0.21
00007f164b38e000    2048 ----- 0000000000172000 008:00005 libstdc++.so.6.0.21
00007f164b5a5000    2044 ----- 0000000000007000 008:00005 librt-2.23.so
00007f164b7a9000    2044 ----- 0000000000003000 008:00005 libdl-2.23.so
00007f164b9aa000     152 r-x-- 0000000000000000 008:00005 ld-2.23.so
00007ffc8810d000     136 rw--- 0000000000000000 000:00000   [ stack ]
mapped: 994068K    writeable/private: 94540K    shared: 0K
The last line reports:

- mapped: the size of the virtual address space the process has mapped, i.e., the virtual memory reserved up front; this is the VSZ that ps prints
- writeable/private: the size of the private address space the process occupies, i.e., the memory the process actually uses
- shared: the size of the memory shared with other processes
What this output means in detail ties into memory management on Linux, which is far too big a topic for this post; here I only care about where the virtual memory comes from.
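As a quick sanity check, you can total the anonymous regions that are reserved but not yet accessible (mode -----) straight from the pmap output; a small sketch, with the PID again illustrative:

$ pmap -d 28708 | awk '$3 == "-----" && $NF == "]" { sum += $2 } END { print sum "K reserved, no access yet" }'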
In the output above, the bulk comes from the rows below: 520512 K (roughly 508 MB) plus five blocks of 65404 K (roughly 64 MB each, about 319 MB), around 830 MB in total:

0000169de27ff000  520512 ----- 0000000000000000 000:00000   [ anon ]
00007f1630021000   65404 ----- 0000000000000000 000:00000   [ anon ]
00007f1634021000   65404 ----- 0000000000000000 000:00000   [ anon ]
00007f163c021000   65404 ----- 0000000000000000 000:00000   [ anon ]
00007f1640021000   65404 ----- 0000000000000000 000:00000   [ anon ]
00007f1644021000   65404 ----- 0000000000000000 000:00000   [ anon ]
The specific breakdown
As a programmer, one glance tells you these fall into two classes: 512 MB and 64 MB. (In the full listing, each 65404 K block sits right behind a 132 K committed block, and 132 K + 65404 K = 65536 K, i.e., exactly 64 MB.)
The following passage comes from a post titled "Java programs consume a lot of virtual memory when running on Linux" (translated):

We know that glibc's memory management on Linux uses a marvelous thing called the arena. When glibc allocates memory, large allocations come from a central allocation area, while small allocations are served from a per-thread cache area set up when the thread is created. To fix the performance problem of memory allocation, glibc introduced this memory pool called the arena. And as it happens, on 64-bit systems its default size is 64 MB.
Red Hat Enterprise Linux 6 features version 2.11 of glibc, providing many features and enhancements, including… An enhanced dynamic memory allocation (malloc) behaviour enabling higher scalability across many sockets and cores. This is achieved by assigning threads their own memory pools and by avoiding locking in some situations. The amount of additional memory used for the memory pools (if any) can be controlled using the environment variables MALLOC_ARENA_TEST and MALLOC_ARENA_MAX. MALLOC_ARENA_TEST specifies that a test for the number of cores is performed once the number of memory pools reaches this value. MALLOC_ARENA_MAX sets the maximum number of memory pools used, regardless of the number of cores.
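You can correlate these arenas with the process's threads directly; a sketch against the running node process (PID illustrative; the 65404 K figure is taken from the pmap listing above and may differ on other systems):

$ pmap -d 28708 | grep 65404    # one hit per extra 64 MB thread arena
$ ls /proc/28708/task | wc -l   # number of threads in the process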
Let's verify:
$ export MALLOC_ARENA_MAX=1
$ ./node
# in another window
$ pmap -d 28567 | grep mapped
mapped: 666420K    writeable/private: 96472K    shared: 0K
Not a single 64 MB block is left (994068 - 666420 = 327648 K, almost exactly the 5 × 65536 K = 327680 K the five arenas occupied).
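If you want to watch the count shrink rather than vanish, other caps work too; a sketch (node -e just keeps the process alive without reading stdin; numbers will vary by machine):

$ MALLOC_ARENA_MAX=2 ./node -e 'setInterval(() => {}, 1e6)' &
$ pmap -d $! | grep 65404   # expect fewer 64 MB arena reservations than before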
Good. That leaves only the biggest one: the 512 MB block.
This block should be requested by node itself. I searched the node v8.x-staging code for virtual-memory-related code and found VirtualMemory. In principle, just reading the code and tracing the call chain would locate the allocation site. But I took a lazier route: build a debug version of Node.js and let gdb produce a backtrace.
Set the breakpoint at v8::base::VirtualMemory::VirtualMemory(unsigned long, unsigned long, void*):
$ ./configure --debug
$ make -j 4
make -C out BUILDTYPE=Release V=1
make -C out BUILDTYPE=Debug V=1
...
# grab a cup of tea and take a walk; the build will be done by the time you are back
...
$ gdb out/Debug/node
GNU gdb (Ubuntu 7.11.1-0ubuntu1~16.5) 7.11.1
Copyright (C) 2016 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.  Type "show copying" and "show warranty" for details.
This GDB was configured as "x86_64-linux-gnu".
Type "show configuration" for configuration details.
For bug reporting instructions, please see:
<http://www.gnu.org/software/gdb/bugs/>.
Find the GDB manual and other documentation resources online at:
<http://www.gnu.org/software/gdb/documentation/>.
For help, type "help".
Type "apropos word" to search for commands related to "word"...
Reading symbols from out/Debug/node...done.
(gdb) break v8::base::VirtualMemory::VirtualMemory(unsigned long, unsigned long, void*)
Breakpoint 1 at 0x26b6b90: file ../deps/v8/src/base/platform/platform-linux.cc, line 203.
(gdb) run
Starting program: /path/to/your/node/out/Debug/node
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1".
[New Thread 0x7ffff6b42700 (LWP 23574)]
[New Thread 0x7ffff6341700 (LWP 23575)]
[New Thread 0x7ffff5b40700 (LWP 23576)]
[New Thread 0x7ffff533f700 (LWP 23577)]

Thread 1 "node" hit Breakpoint 1, v8::base::VirtualMemory::VirtualMemory (this=0x7fffffffc780, size=536870912, alignment=4096, hint=0x137a187d000)
    at ../deps/v8/src/base/platform/platform-linux.cc:203
203         : address_(NULL), size_(0) {
(gdb) bt
#0  v8::base::VirtualMemory::VirtualMemory (this=0x7fffffffc780, size=536870912, alignment=4096, hint=0x137a187d000) at ../deps/v8/src/base/platform/platform-linux.cc:203
#1  0x000000000178cf0f in v8::internal::AlignedAllocVirtualMemory (size=536870912, alignment=4096, hint=0x137a187d000, result=0x7fffffffc810) at ../deps/v8/src/allocation.cc:117
#2  0x0000000001ea39e8 in v8::internal::CodeRange::SetUp (this=0x3ca6f20, requested=536870912) at ../deps/v8/src/heap/spaces.cc:122
#3  0x0000000001ea4649 in v8::internal::MemoryAllocator::SetUp (this=0x3ca7a00, capacity=1501560832, code_range_size=0) at ../deps/v8/src/heap/spaces.cc:304
#4  0x0000000001e22c46 in v8::internal::Heap::SetUp (this=0x3c65650) at ../deps/v8/src/heap/heap.cc:5922
#5  0x0000000001f5a8a7 in v8::internal::Isolate::Init (this=0x3c65630, des=0x7fffffffcc40) at ../deps/v8/src/isolate.cc:2786
#6  0x0000000002265697 in v8::internal::Snapshot::Initialize (isolate=0x3c65630) at ../deps/v8/src/snapshot/snapshot-common.cc:46
#7  0x00000000017d147d in v8::IsolateNewImpl (isolate=0x3c65630, params=...) at ../deps/v8/src/api.cc:8633
#8  0x00000000017d1284 in v8::Isolate::New (params=...) at ../deps/v8/src/api.cc:8580
#9  0x0000000002415900 in node::Start (event_loop=0x3a45fe0 <default_loop_struct>, argc=1, argv=0x3c63ea0, exec_argc=0, exec_argv=0x3c63f80) at ../src/node.cc:4856
#10 0x000000000240cba5 in node::Start (argc=1, argv=0x3c63ea0) at ../src/node.cc:4945
#11 0x00000000024843cb in main (argc=1, argv=0x7fffffffd8b8) at ../src/node_main.cc:106
(gdb)
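If you prefer not to type the breakpoint interactively, gdb's -ex flag can script the same session; a small convenience sketch:

$ gdb -q -ex 'break v8::base::VirtualMemory::VirtualMemory(unsigned long, unsigned long, void*)' -ex run -ex bt out/Debug/node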
Found the 512 MB (size=536870912 bytes is exactly 512 MB):
Thread 1 "node" hit Breakpoint 1, v8::base::VirtualMemory::VirtualMemory (this=0x7fffffffc780, size=536870912, alignment=4096, hint=0x137a187d000)
Following this call chain, we find this description of CodeRange in deps/v8/src/heap/spaces.h:
// All heap objects containing executable code (code objects) must be allocated
// from a 2 GB range of memory, so that they can call each other using 32-bit
// displacements. This happens automatically on 32-bit platforms, where 32-bit
// displacements cover the entire 4GB virtual address space. On 64-bit
// platforms, we support this using the CodeRange object, which reserves and
// manages a range of virtual memory.
And the comment inside CodeRange::SetUp(size_t requested) describes it very clearly:

// When a target requires the code range feature, we put all code objects
// in a kMaximalCodeRangeSize range of virtual address space, so that
// they can call each other with near calls.

On x64, kMaximalCodeRangeSize is defined as 512 MB, which is exactly the 536870912 bytes requested at the breakpoint.
Conclusion

Of the 900+ MB of virtual memory, 512 MB is reserved by V8 for code objects, and 64 MB × N (with N depending on the OS/glibc configuration) comes from glibc's arena-based memory allocation mechanism. I did not dig into the remaining smaller chunks; interested readers can explore them with the same approach.
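For example, the several 8192 K rw anonymous regions in the pmap output above are likely thread stacks (8 MB per thread is the common Linux default, though this is my assumption, not something traced here); a starting point for checking, with the PID again illustrative:

$ ulimit -s                      # default stack size in KB, typically 8192
$ ls /proc/28708/task | wc -l    # thread count: one stack reservation per extra thread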