OK, so I have the following architecture in AWS: one server that runs 24/7 and 3 other servers that are only ON when the ELB (Elastic Load Balancer) turns them on.
However, for the last 2 weeks my 24/7 server, let's call it the master, has been causing us trouble with response times: nothing has changed, yet all of a sudden the times started to increase.
I want to check whether there's something wrong with the master by turning on one of the other 3 servers, let's call them nodes. I have some questions I couldn't find answers to in the Amazon documentation.
I'll try to answer each of your questions, based on what I know of AWS OpsWorks:
What happens if the master is turned off?
If you are using auto healing, OpsWorks will try to keep that instance running. Otherwise, nothing will happen automatically.
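For reference, auto healing is a layer-level setting, so you can switch it on for the layer that holds your "master" instance. A minimal boto3 sketch, assuming a placeholder layer ID and region:

```python
import boto3

# Adjust the region to wherever your OpsWorks stack lives.
opsworks = boto3.client("opsworks", region_name="us-east-1")

# Enable auto healing on the layer that contains the instance you care about.
# The layer ID below is a placeholder.
opsworks.update_layer(
    LayerId="11111111-2222-3333-4444-555555555555",
    EnableAutoHealing=True,
)
```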
How can I assign a node to become the master?
There is no such thing as a "master" in the OpsWorks world. You will have to make use of the ELB and the recently added custom auto scaling feature.
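As a rough sketch of what "making use of the ELB" looks like with boto3 (the load balancer name, layer ID, and instance ID are placeholders): you attach the ELB to the layer so that every online instance in that layer receives traffic, instead of designating one box as special.

```python
import boto3

opsworks = boto3.client("opsworks", region_name="us-east-1")
elb = boto3.client("elb", region_name="us-east-1")

# Attach an existing Classic ELB to the OpsWorks layer.
opsworks.attach_elastic_load_balancer(
    ElasticLoadBalancerName="my-app-elb",
    LayerId="11111111-2222-3333-4444-555555555555",
)

# Alternatively, register an individual EC2 instance with the ELB directly.
elb.register_instances_with_load_balancer(
    LoadBalancerName="my-app-elb",
    Instances=[{"InstanceId": "i-0123456789abcdef0"}],
)
```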
Is this default architecture similar to a Failover cluster?
No, what you are describing does not look like a failover configuration.
If 2 is true when 1 happens, how long does it take for a node to become the master?
The time will depend on how long your instance takes to boot, the various timeout and health-check thresholds you can set on your ELB, and other factors.
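To give a sense of the health-check part: on a Classic ELB, an instance is taken out of service after roughly Interval × UnhealthyThreshold seconds of failed checks. A hedged boto3 sketch, where the ELB name, health-check path, and values are placeholders rather than recommendations:

```python
import boto3

elb = boto3.client("elb", region_name="us-east-1")

# With these values, an instance is marked OutOfService after about
# 10s * 2 = 20 seconds of failed checks, and InService again after
# 10s * 3 = 30 seconds of passing checks.
elb.configure_health_check(
    LoadBalancerName="my-app-elb",       # placeholder ELB name
    HealthCheck={
        "Target": "HTTP:80/health",      # placeholder health-check path
        "Interval": 10,                  # seconds between checks
        "Timeout": 5,                    # seconds before a single check times out
        "UnhealthyThreshold": 2,         # failed checks before OutOfService
        "HealthyThreshold": 3,           # passed checks before InService
    },
)
```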
Since AWS OpsWorks has added support for custom auto scaling, you should be able to create a CloudWatch alarm that triggers a "scale event" whenever an instance goes offline (fails its health check). Then, to verify which instance went offline and "re-instantiate" it, you can use the EC2 API or the data bags while your newly launched instance is being provisioned and your recipe's code is being executed, and maybe trigger some other alarm from there. With the custom auto scaling feature you can do pretty much anything.
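To make that concrete, here is a hedged boto3 sketch of the two pieces: a CloudWatch alarm on the ELB's UnHealthyHostCount metric (the alarm action ARN, ELB name, and account ID are placeholders for whatever scaling policy or SNS topic you actually wire up), plus a quick EC2 status query you could run from a recipe or small script to identify which instance went offline.

```python
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")
ec2 = boto3.client("ec2", region_name="us-east-1")

# 1) Fire an alarm whenever the ELB reports any unhealthy host.
cloudwatch.put_metric_alarm(
    AlarmName="app-unhealthy-host",
    Namespace="AWS/ELB",
    MetricName="UnHealthyHostCount",
    Dimensions=[{"Name": "LoadBalancerName", "Value": "my-app-elb"}],
    Statistic="Maximum",
    Period=60,
    EvaluationPeriods=1,
    Threshold=0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:ops-alerts"],  # placeholder
)

# 2) Ask EC2 which instances are not passing their status checks,
#    to figure out who went offline.
statuses = ec2.describe_instance_status(IncludeAllInstances=True)
for s in statuses["InstanceStatuses"]:
    if s["InstanceStatus"]["Status"] != "ok" or s["SystemStatus"]["Status"] != "ok":
        print("Instance needs attention:", s["InstanceId"])
```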