I messed up the root volume of my EC2 instance, so I attached that root volume to another EC2 instance in order to access the bad volume and fix my mistake. But when I start the other instance, the damaged volume becomes its root volume. I attached the volume as /dev/sdb (the kernel renamed it to /dev/xvdf), and the instance's original root volume is at /dev/sda (kernel name /dev/xvde). So the kernel should mount /dev/xvde as the root filesystem, but instead it is mounting the damaged volume (/dev/xvdf).
A snippet of the system log follows:
dracut: Starting plymouth daemon
xlblk_init: register_blkdev major: 202
blkfront: xvdf: barriers disabled
xvdf: unknown partition table
blkfront: xvde: barriers disabled
xvde: unknown partition table
EXT4-fs (xvdf): mounted filesystem with ordered data mode. Opts:
dracut: Mounted root filesystem /dev/xvdf
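Besides the boot log, you can confirm at runtime which device actually ended up mounted at / (a quick check, assuming `findmnt` from util-linux is available; on the broken boot described above it would report /dev/xvdf rather than /dev/xvde):

```shell
# Print the source device of the filesystem mounted at /
findmnt -no SOURCE /
```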
OR
The simple way is to attach the CentOS root volume to an Amazon Linux machine and fix the issue there. Don't attach a CentOS root volume to another EC2 instance that is also running CentOS. The CentOS images in the AWS Marketplace use "centos" as the label of the root filesystem, and the boot process mounts root by that label. So when you attach a CentOS root volume to another CentOS machine, there are two volumes with the same label, the wrong one can be selected as root, and you get exactly the anomaly above.
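If you do need to keep the rescue instance on CentOS, another option is to remove the label collision by relabeling the attached volume with `e2label` before rebooting. A minimal sketch, assuming e2fsprogs is installed; it demonstrates the relabel on a file-backed ext4 image so it runs without root, but on the real rescue instance you would point DEV at /dev/xvdf:

```shell
# DEV is a scratch ext4 image standing in for /dev/xvdf (assumption: e2fsprogs installed)
DEV=demo.img
dd if=/dev/zero of="$DEV" bs=1M count=16 status=none  # create a small scratch image
mkfs.ext4 -q -L centos "$DEV"                         # simulate the Marketplace "centos" label
e2label "$DEV"                                        # show the current label
e2label "$DEV" rescue-root                            # rename it so the labels no longer collide
e2label "$DEV"                                        # verify the new label
```

Once the labels differ, the boot process can no longer pick the wrong volume by label; remember to restore the original label before reattaching the volume to its own instance, since its bootloader configuration may reference it.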