I am taking over a project to replace an ancient legacy system from the ground up. Before I came on board, the company hired a consultant who put together a basic sketch of the system and pushed SOA heavily. The result was a long list of "entity services" intended to be composed into more complex service combinations. For instance, a user wanting committee info would hit the "Committee" service, which would then call the "Person" service to get its members, the "Meeting" service to get its meetings, and so on.
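To make the shape of the proposal concrete, here is roughly what I understand it to imply. The contracts and DTOs below (ICommitteeService, PersonDto, and so on) are my own illustration in C#, not the consultant's actual design:

```csharp
using System.Collections.Generic;
using System.Linq;
using System.ServiceModel;

// Illustrative DTOs -- the real system would have richer types.
public class PersonDto { public int Id; public string Name; }
public class MeetingDto { public int Id; public string Subject; }
public class CommitteeDto
{
    public int Id;
    public string Name;
    public List<int> MemberIds = new List<int>();
    public List<PersonDto> Members;
    public List<MeetingDto> Meetings;
}

[ServiceContract]
public interface IPersonService
{
    [OperationContract]
    PersonDto GetPerson(int personId);
}

[ServiceContract]
public interface IMeetingService
{
    [OperationContract]
    List<MeetingDto> GetMeetingsForCommittee(int committeeId);
}

[ServiceContract]
public interface ICommitteeService
{
    [OperationContract]
    CommitteeDto GetCommittee(int committeeId);
}

// The Committee entity service composes the other entity services rather than
// reading related data itself, so one inbound call fans out into one downstream
// call per member plus one per related entity type.
public class CommitteeService : ICommitteeService
{
    private readonly IPersonService _people;
    private readonly IMeetingService _meetings;

    public CommitteeService(IPersonService people, IMeetingService meetings)
    {
        _people = people;
        _meetings = meetings;
    }

    public CommitteeDto GetCommittee(int committeeId)
    {
        CommitteeDto committee = LoadCommitteeRecord(committeeId);
        committee.Members = committee.MemberIds.Select(id => _people.GetPerson(id)).ToList();
        committee.Meetings = _meetings.GetMeetingsForCommittee(committeeId);
        return committee;
    }

    private CommitteeDto LoadCommitteeRecord(int committeeId)
    {
        // Placeholder for the service's own data access.
        return new CommitteeDto { Id = committeeId, Name = "Example committee" };
    }
}
```

Each of those nested calls is a full service invocation with its own message serialization and transport.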
I understand the flexibility this buys, but my concerns are about performance. It seems to me that a system built from such fine-grained services spends too many resources translating and passing service messages, and that its performance will be unacceptable. It also seems to me that the same flexibility gains can be had with plain reusable objects, although that trades away the technology-agnostic interface in exchange for performance.
For more background: the organization requesting this software does not currently have a stable of third-party software suites that need to be integrated with; this software will replace all of them. There are also no outside consumers who need to reach the data through anything other than the provided website interface -- all service calls will come from other pieces of our own system. The choice of SOA here seems to be based entirely on the idea of "preparation".
So my question -- what level of granularity is acceptable in a stable of services without sacrificing performance? Am I being too pessimistic about the performance hit we'll take by implementing all of our entities as services? Should functionality be exposed as web services only when it is actually needed, with the "preparation" effort instead going into designing the business layer so that services can later be dropped on top of it?
First off, finding the "sweet spot" in the number of services is certainly difficult. Too many services and your integration costs suffer; too few and your implementation costs suffer. You have to find a good balance.
My advice is to follow Juval Lowy's methodology and decompose your services by areas of volatility -- that is, areas of change. That will give you your level of granularity. You should also read his WCF book if you can.
As for performance, WCF can inherently support many thousands of calls per second, depending on your use cases and hardware. Services calling services is not a problem; the platform will support it if you do your part. For example, use the right binding for the right scenario: named pipes to call services on the same machine, and TCP to call services across machines, where possible. You should also implement a vertical slice of the application and performance-test it before building the rest of the application; this will validate your architecture.
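As a minimal sketch of that binding choice -- assuming self-hosting, and using a placeholder IPingService contract with illustrative addresses and port -- it might look like this:

```csharp
using System;
using System.ServiceModel;

// Placeholder contract used only to demonstrate binding choices.
[ServiceContract]
public interface IPingService
{
    [OperationContract]
    string Ping(string message);
}

public class PingService : IPingService
{
    public string Ping(string message) { return "pong: " + message; }
}

class Program
{
    static void Main()
    {
        // Expose the same service over named pipes (same-machine callers)
        // and TCP (cross-machine callers).
        var host = new ServiceHost(typeof(PingService), new Uri("net.pipe://localhost"));
        host.AddServiceEndpoint(typeof(IPingService), new NetNamedPipeBinding(), "ping");
        host.AddServiceEndpoint(typeof(IPingService), new NetTcpBinding(),
                                "net.tcp://localhost:8000/ping");
        host.Open();

        // A caller on the same machine uses the named-pipe endpoint,
        // which avoids the network stack entirely.
        var localFactory = new ChannelFactory<IPingService>(
            new NetNamedPipeBinding(),
            new EndpointAddress("net.pipe://localhost/ping"));
        Console.WriteLine(localFactory.CreateChannel().Ping("local"));

        // A caller on another machine would use the TCP endpoint instead
        // (the host name is a placeholder):
        // var remoteFactory = new ChannelFactory<IPingService>(
        //     new NetTcpBinding(),
        //     new EndpointAddress("net.tcp://app-server:8000/ping"));

        localFactory.Close();
        host.Close();
    }
}
```

The client only has to agree with the endpoint's binding and address; the call itself is identical either way, so moving a same-machine caller from TCP to named pipes can be done in configuration rather than code.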