I am new to Kafka and Hadoop technologies. I was trying to install and run my first single-node, single-broker Kafka cluster on an AWS EC2 VM instance. So far I have:
1) Installed Java
2) Updated the ~/.bashrc and ~/.bash_profile files with the Java-related entries (shown at the end of this question)
3) Successfully started the bundled ZooKeeper instance
4) But as soon as I try to start the Kafka broker, it throws the following error message:
```
$ bin/kafka-server-start.sh config/server.properties
Java HotSpot(TM) 64-Bit Server VM warning: INFO: os::commit_memory(0x00000000c0130000, 986513408, 0) failed; error='Cannot allocate memory' (errno=12)
There is insufficient memory for the Java Runtime Environment to continue.
Native memory allocation (malloc) failed to allocate 986513408 bytes for committing reserved memory.
An error report file with more information is saved as:
/usr/local/kafka/hs_err_pid2549.log
```
I am not sure what I am doing wrong. This AWS EC2 VM instance is a newly created Ubuntu t2.micro with an 8 GB general-purpose SSD volume.
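For reference, the Java-related entries I added in step 2 look like this (the JDK path is from my install and may differ on yours):

```
# Java entries appended to ~/.bashrc and ~/.bash_profile
# (the JDK path below is from my setup; adjust it for yours)
export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
export PATH=$PATH:$JAVA_HOME/bin
```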
I believe the error you're getting is about RAM, not disk (SSD) space.
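You can verify this on the instance itself. A quick check using standard Linux tools (nothing Kafka-specific):

```
# Show total, used, and free RAM and swap in megabytes
free -m

# List processes sorted by memory usage to see what is already eating RAM
ps aux --sort=-%mem | head
```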
A quick Google search confirms that a t2.micro instance has only 1 GB of RAM. From the error, and from this Stack Overflow answer, you simply don't have enough free memory: malloc failed to allocate 986,513,408 bytes, which is roughly 1 GB, and 1 GB is all the RAM your machine has before the OS and ZooKeeper take their share.
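The Kafka start script defaults to a 1 GB JVM heap (-Xmx1G -Xms1G) when KAFKA_HEAP_OPTS is unset, which is what the failed ~986 MB allocation corresponds to. A common workaround, sketched here assuming the stock scripts (which honor the KAFKA_HEAP_OPTS environment variable), is to shrink the heap before starting the broker:

```
# Limit Kafka's JVM heap; the values below are illustrative,
# tune them for your workload (the defaults are -Xms1G -Xmx1G)
export KAFKA_HEAP_OPTS="-Xms128M -Xmx256M"
bin/kafka-server-start.sh config/server.properties
```

The same variable is honored by the bundled ZooKeeper script (bin/zookeeper-server-start.sh). Alternatively, or additionally, you can add swap so the JVM can commit memory when physical RAM is tight, at the cost of performance if Kafka actually starts swapping:

```
# Create and enable a 1 GB swap file (the size is illustrative)
sudo fallocate -l 1G /swapfile
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile
```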