I've successfully deployed Azure Stack TP2 and created a few VMs on it using the provided "WindowsServer-2012-R2-Datacenter" image; I was even able to connect to them.
However, after a few weeks I started having problems with the VPN connection, so I restarted the server. The VPN connectivity came back, but I noticed that all of my virtual machines were stopped and that they switch to the "Failed" state when I try to start them. I'm also unable to create another VM; this is the error message I get:
{
  "status": "Failed",
  "error": {
    "code": "ResourceDeploymentFailure",
    "message": "The resource operation completed with terminal provisioning state 'Failed'.",
    "details": [
      {
        "code": "InternalExecutionError",
        "message": "Failed to change the diagnostics profile for VM 'testVM1'"
      }
    ]
  }
}
I've read on the Microsoft Azure Stack troubleshooting page that VMs not starting after a reboot may be related to Failover Cluster Manager not starting. However, when I open Failover Cluster Manager as the instructions suggest, I don't see any clusters there, so I'm not sure whether it's even configured to use it.
Has anyone run into this issue before?
The Microsoft Azure Stack troubleshooting page was indeed right; however, it didn't mention the following:
After a reboot of the Azure Stack TP2 server, every virtual machine left in the "Saved" state in Failover Cluster Manager needs to be started before you can create new virtual machines or use the existing ones. To do this:

1. Open Failover Cluster Manager and connect to the cluster (use "Connect to Cluster..." and enter "." to connect to the cluster running on the local server; this is why no cluster shows up at first).
2. Go to Roles and select every virtual machine whose status is "Saved".
3. Right-click the selection and click Start.
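The same thing can be scripted. Here is a minimal PowerShell sketch, assuming the FailoverClusters module is available on the TP2 host and you run it from an elevated session; the State filter is my assumption for catching the roles that show as "Saved" in the GUI:

Import-Module FailoverClusters

# List all clustered VM roles and their current state
Get-ClusterGroup | Where-Object { $_.GroupType -eq 'VirtualMachine' }

# Bring every VM role that is not online back up
# (assumption: the "Saved" roles report a non-Online state here)
Get-ClusterGroup |
    Where-Object { $_.GroupType -eq 'VirtualMachine' -and $_.State -ne 'Online' } |
    Start-ClusterGroup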
Moreover, if a new virtual machine deployment was started on the rebooted server while there were still virtual machines in the "Saved" state, and that deployment failed, the failed VM cannot be deleted from the Azure Stack portal until it has been started from Failover Cluster Manager.
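For example, to unblock deleting the 'testVM1' deployment from the error above (same assumptions as the sketch earlier; that the cluster role name matches the VM name is also an assumption):

# Bring the failed role online first, then delete the VM from the portal
Get-ClusterGroup -Name 'testVM1' | Start-ClusterGroup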