Kafka 8 and memory - There is insufficient memory for the Java Runtime Environment to continue

I'm running Kafka on a DigitalOcean instance with 512 MB of RAM and I get the error below. I'm not an experienced Java developer. How can I tune Kafka to run in a small amount of RAM? This is a development box, and I don't want to pay more per hour for a bigger machine.

 #
 # There is insufficient memory for the Java Runtime Environment to continue.
 # Native memory allocation (malloc) failed to allocate 986513408 bytes for committing reserved memory.
 # An error report file with more information is saved as:
 # //hs_err_pid6500.log
 OpenJDK 64-Bit Server VM warning: INFO: os::commit_memory(0x00000000bad30000, 986513408, 0) failed; error='Cannot allocate memory' (errno=12)

You can adjust the JVM heap size by editing kafka-server-start.sh, zookeeper-server-start.sh and so on:

 export KAFKA_HEAP_OPTS="-Xmx1G -Xms1G" 

The -Xms parameter specifies the initial (minimum) heap size. To get your server to at least start, try changing it to use less memory. Given that you only have 512M, you should also lower the maximum heap size (-Xmx):

 export KAFKA_HEAP_OPTS="-Xmx256M -Xms128M" 
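As a hedged sketch of where this fits: in many versions of the distribution, kafka-server-start.sh only sets KAFKA_HEAP_OPTS when it is not already defined, so you can either edit that default in the script or override the variable at launch. Check your copy of the script, since the exact contents vary by Kafka version:

 # Sketch of the heap default as it typically appears in kafka-server-start.sh
 # (verify against your own script before editing).
 if [ "x$KAFKA_HEAP_OPTS" = "x" ]; then
     export KAFKA_HEAP_OPTS="-Xmx256M -Xms128M"   # reduced from the usual -Xmx1G -Xms1G default
 fi

 # If your script has this guard, you can leave it untouched and override at launch instead:
 KAFKA_HEAP_OPTS="-Xmx256M -Xms128M" bin/kafka-server-start.sh config/server.properties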

I'm not sure what the minimum memory requirement of Kafka is in its default configuration; you may also need to adjust the maximum message size in Kafka to get it running (see the sketch below).
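If memory stays tight even with a small heap, the broker-side message and fetch sizes in config/server.properties are the usual knobs to look at. The property names below do exist in Kafka's broker configuration, but the values are only illustrative for a 512M box, not tested recommendations:

 # config/server.properties (illustrative values only)
 # Largest message the broker will accept, in bytes.
 message.max.bytes=100000
 # Should be at least message.max.bytes so replicas can still fetch the largest message.
 replica.fetch.max.bytes=100000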

Area: HotSpot / gc

Synopsis

Crashes due to failure to allocate large pages. On Linux, failures when allocating large pages can lead to crashes. When running JDK 7u51 or later versions, the issue can be recognized in two ways:

- Before the crash happens, one or more lines similar to the following example will have been printed to the log:

 os::commit_memory(0x00000006b1600000, 352321536, 2097152, 0) failed; error='Cannot allocate memory' (errno=12); Cannot allocate large pages, falling back to regular pages

- If a file named hs_err is generated, it will contain a line similar to the following example:

 Large page allocation failures have occurred 3 times

The problem can be avoided by running with large page support turned off, for example, by passing the "-XX:-UseLargePages" option to the java binary.
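If your crash matches this large-pages pattern, one way to pass that option to the broker JVM is through the KAFKA_OPTS environment variable, which the standard kafka-run-class.sh appends to the java command line. This is a minimal sketch, assuming the stock start scripts; verify that your Kafka version honors the variable:

 # Sketch: disable large pages for the broker JVM via KAFKA_OPTS,
 # which kafka-run-class.sh adds to the java invocation.
 export KAFKA_OPTS="-XX:-UseLargePages"
 bin/kafka-server-start.sh config/server.properties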