
Azure architecture and role communication


I have an application that includes multiple hosted services in Azure. Two are web roles, one is a worker role. The problem is, two of the roles now need to communicate. One is a web role that serves as the admin interface; the other is the worker role. The admin interface needs to issue commands, like pausing any running jobs, reporting status, etc. The second web role is just a site, unrelated to the first two.

(Just to preface, I want to make sure my use of Azure terms is correct):

  1. Hosted Service: An Azure 'application'. Multiple roles with two deployments, production and staging

  2. Deployment: A specific instance of all the roles, either in production or staging, with a single external endpoint (*.cloudapp.net)

  3. Role: A single 'job', either a web role or a worker role.

  4. Instance: The VMs that service a role

Also, to verify: Is it possible to add roles to an existing hosted service? That is, if I deploy 2 roles from one solution, can I add a third role in another deployment from a different solution?

Because each role is in its own hosted service, it presents some challenges. Here's my understanding of the choices in how they can communicate:

  1. Service Bus: This seems to be the best from an architecture standpoint. Each hosted service can connect a WCF service to the Service Bus, and the admin can issue commands to the worker role. The downside is that this is pretty cost-prohibitive.

  2. Internal endpoints: This seems best if cost is factored in. The downside is you have to deploy all the roles at once, and the web roles cannot have unique addresses. The only way to access both web roles externally is with port forwarding. As far as I'm aware, it's not possible to deploy 2 roles from one solution and 1 role from another?

  3. External WCF service: Each component can be in individual projects and individual hosted services. The downside is there's now an externally visible service for administration.

  4. Queue/Table storage: Admin can write commands to an Azure storage queue, and the worker roles can write their responses to table storage. This seems fine for generating reports, but not great for issuing synchronous commands. (A minimal sketch of this approach follows this list.)
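To make option 4 concrete, here's a minimal sketch of the queue half using the Azure storage client library. The queue name, command text, and connection string are placeholders I've chosen purely for illustration:

```csharp
// Minimal sketch of option 4's queue half: the admin role enqueues a
// command and a worker instance picks it up. Queue name, command text,
// and connection string are placeholders.
using System;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Queue;

class CommandQueueSketch
{
    static void Main()
    {
        var account = CloudStorageAccount.Parse("UseDevelopmentStorage=true");
        var queue = account.CreateCloudQueueClient()
            .GetQueueReference("admin-commands");
        queue.CreateIfNotExists();

        // Admin web role: enqueue a command.
        queue.AddMessage(new CloudQueueMessage("PauseJobs"));

        // Worker role: poll for commands. Each message is handed to only
        // one consumer, which matters if several instances should react.
        var message = queue.GetMessage();
        if (message != null)
        {
            Console.WriteLine("Received command: " + message.AsString);
            queue.DeleteMessage(message);
        }
    }
}
```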

Should multiple roles that all service "the application" go into the same Azure hosted service? If it makes the most sense from a logical standpoint, then I'd be happy to go with #2 and just deal with port forwarding.


Solution

  • First off, your definitions look pretty good and I think you understand the problem pretty well.

    Also, within each deployment, each external endpoint can only be assigned to one role. So if you want to run two sites on port 80, they need to be in the same role. This is just like setting up two sites in IIS on the same port (which is exactly what you're working with): the sites are distinguished using host headers. If you don't want to go to that effort, or if you want to deploy the sites separately, then you'll want to put your standalone site in its own hosted service/cloud project. (A sketch of the host-header setup follows.)
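    As a rough sketch of what that looks like in the service definition (site names, physical directories, and host names below are placeholders, not anything from your solution):

    ```xml
    <!-- Sketch: two sites in one web role, distinguished by host headers.
         Names, paths, and host names are illustrative placeholders. -->
    <WebRole name="WebRole1">
      <Sites>
        <Site name="Admin" physicalDirectory="..\AdminSite">
          <Bindings>
            <Binding name="HttpIn" endpointName="HttpIn" hostHeader="admin.example.com" />
          </Bindings>
        </Site>
        <Site name="Public" physicalDirectory="..\PublicSite">
          <Bindings>
            <Binding name="HttpIn" endpointName="HttpIn" hostHeader="www.example.com" />
          </Bindings>
        </Site>
      </Sites>
      <Endpoints>
        <InputEndpoint name="HttpIn" protocol="http" port="80" />
      </Endpoints>
    </WebRole>
    ```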

    For the communication part, the one option you've missed is Service Bus queues. Microsoft has released a library built on Service Bus queues that is specifically designed for inter-role communication.
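    As a hedged sketch of what a command over a Service Bus queue looks like (queue name, connection string, and command text are all assumptions for illustration):

    ```csharp
    // Sketch: the admin role sends a command over a Service Bus queue and
    // the worker role receives it. Queue name, connection string, and
    // command text are placeholders.
    using System;
    using Microsoft.ServiceBus.Messaging;

    class ServiceBusCommandSketch
    {
        static void Main()
        {
            const string connectionString = "<service bus connection string>";

            // Admin web role: send the command.
            var sender = QueueClient.CreateFromConnectionString(
                connectionString, "admin-commands");
            sender.Send(new BrokeredMessage("PauseJobs"));

            // Worker role: receive and act on it. As with storage queues,
            // each message is delivered to only one receiver.
            var receiver = QueueClient.CreateFromConnectionString(
                connectionString, "admin-commands", ReceiveMode.ReceiveAndDelete);
            var command = receiver.Receive(TimeSpan.FromSeconds(5));
            if (command != null)
            {
                Console.WriteLine("Received: " + command.GetBody<string>());
            }
        }
    }
    ```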

    Other than that, a few extra comments on your points: you're right that internal endpoints are the cheapest way to go, but you will be rolling it all yourself. Of course, you could set up WCF services to listen on these internal endpoints. (A sketch of that pattern follows.)
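    Here's a rough sketch of that pattern, assuming the worker role declares an internal TCP endpoint named "AdminControl" in its service definition; the contract, endpoint, and role names are mine, not yours:

    ```csharp
    // Sketch: WCF over internal endpoints. Assumes an internal endpoint
    // named "AdminControl" is declared in ServiceDefinition.csdef; the
    // contract and role names are illustrative.
    using System;
    using System.ServiceModel;
    using Microsoft.WindowsAzure.ServiceRuntime;

    [ServiceContract]
    public interface IAdminControl
    {
        [OperationContract]
        void PauseJobs();
    }

    public class AdminControl : IAdminControl
    {
        public void PauseJobs() { /* pause this instance's jobs */ }
    }

    public static class InternalEndpointSketch
    {
        // Worker role (e.g. in OnStart): host the service on this
        // instance's internal endpoint.
        public static ServiceHost StartHost()
        {
            var endpoint = RoleEnvironment.CurrentRoleInstance
                .InstanceEndpoints["AdminControl"].IPEndpoint;
            var host = new ServiceHost(typeof(AdminControl));
            host.AddServiceEndpoint(
                typeof(IAdminControl),
                new NetTcpBinding(SecurityMode.None),
                String.Format("net.tcp://{0}/AdminControl", endpoint));
            host.Open();
            return host;
        }

        // Admin web role: internal endpoints are per instance, so you can
        // reach every worker directly, with no load balancer in the way.
        public static void PauseAllWorkers()
        {
            foreach (var instance in RoleEnvironment.Roles["WorkerRole"].Instances)
            {
                var address = String.Format(
                    "net.tcp://{0}/AdminControl",
                    instance.InstanceEndpoints["AdminControl"].IPEndpoint);
                var channel = ChannelFactory<IAdminControl>.CreateChannel(
                    new NetTcpBinding(SecurityMode.None),
                    new EndpointAddress(address));
                channel.PauseJobs();
            }
        }
    }
    ```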

    An external WCF service might work OK, but if you have more than one instance of your role, all WCF calls will go through the load balancer and the message will only be sent to one of the instances. You would need to make multiple calls to make sure the message was received by all instances and even then you couldn't be sure it had worked without some other feedback method.

    Storage queues suffer from a similar issue: if you have two instances and want them both to receive the same message, there's no way to guarantee that this will happen. (One possible workaround is sketched below.)
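    If you do need every instance to see every command, one workaround (not something discussed above, so treat it as an assumption-laden sketch) is a Service Bus topic with one subscription per worker instance:

    ```csharp
    // Sketch: a Service Bus topic with one subscription per worker
    // instance, so that every instance sees every command. The topic name
    // is illustrative, and the instance ID may need shortening to fit
    // subscription name length limits.
    using Microsoft.ServiceBus;
    using Microsoft.ServiceBus.Messaging;
    using Microsoft.WindowsAzure.ServiceRuntime;

    public static class BroadcastSketch
    {
        const string ConnectionString = "<service bus connection string>";
        const string TopicName = "admin-broadcast";

        // Worker role: create this instance's own subscription, then
        // listen on it.
        public static SubscriptionClient StartListening()
        {
            var ns = NamespaceManager.CreateFromConnectionString(ConnectionString);
            if (!ns.TopicExists(TopicName))
            {
                ns.CreateTopic(TopicName);
            }
            // Derive a per-instance subscription name from the instance ID.
            var name = RoleEnvironment.CurrentRoleInstance.Id.Replace("_", "-");
            if (!ns.SubscriptionExists(TopicName, name))
            {
                ns.CreateSubscription(TopicName, name);
            }
            return SubscriptionClient.CreateFromConnectionString(
                ConnectionString, TopicName, name);
        }

        // Admin role: a single send fans out to every subscription.
        public static void Broadcast(string command)
        {
            var topic = TopicClient.CreateFromConnectionString(ConnectionString, TopicName);
            topic.Send(new BrokeredMessage(command));
        }
    }
    ```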