Monday, 22 September 2014

Reference Architecture – vCloud Automation Center (vCAC) 6.0 Distributed Environment

vCAC Virtual Appliances: 

As you may already know, with vCAC 6.0 VMware introduced a new web tier to the architecture. In our case, two nodes are deployed for high availability and load balancing. These appliances are completely stateless, which means all configuration is stored on the database tier (see the Databases section below). That said, you can place any load balancer in front of these appliances to distribute end-user traffic to the portal and to monitor the connection in case one node goes down. In such a case, your LB should automatically switch the traffic to the surviving node. Also note that in your appliance configuration you must set the load balancer hostname, not the actual vCAC VA node hostname. I have included very detailed information on hostnames and IP addresses in my architecture to make the configuration parameters easier to follow. More on the LB configuration in the second blog post.
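As a rough illustration of the kind of health monitoring the load balancer performs against the VA nodes, the sketch below probes each node over TCP. The hostnames are hypothetical placeholders; a real LB would use its own built-in monitors rather than a script like this.

```python
import socket

def node_is_up(host, port=443, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Hypothetical node hostnames -- substitute your own vCAC VA FQDNs.
VCAC_VA_NODES = ["vcac-va-01.corp.local", "vcac-va-02.corp.local"]

def healthy_nodes(nodes):
    """Filter the node list down to those currently reachable."""
    return [n for n in nodes if node_is_up(n)]
```

Since the appliances are stateless, the LB is free to send traffic to any node this kind of check reports healthy.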

Identity Appliance (SSO):

This is a virtual appliance released with vCAC 6.0, and you can find it on the same download page. Contrary to what most of us assume, this is *not* a mandatory component anymore since vCenter SSO 5.5.0b was released. The latter is compatible with vCAC 6.0 and can be used instead of the Identity Appliance. One could still argue that the Identity Appliance is beneficial for advanced architectures, such as exposing/publishing the vCAC 6.0 portal on the Internet (more on that in a future blog post). Meanwhile, if you are architecting a solution for your private cloud, vCenter SSO 5.5.0b is definitely the way to go.


Databases:

There are two kinds of databases required in this distributed architecture:
1) MS-SQL database: this hosts the DBs for the vCenter Server (and related components) as well as the vCAC IaaS components and vCenter Orchestrator. You can use a Windows Failover Cluster (aka MSCS) for the high availability of this database tier.
2) PostgreSQL database: this hosts the DB for the vCAC appliances. Note that you only need this external DB if you are clustering your vCAC virtual appliances for load balancing and high availability, as explained above. If you have a relatively small environment, or you do not want to cluster your vCAC virtual appliances for any reason, you can use a single node with its embedded PostgreSQL DB. In that case, you can still achieve high availability using native vSphere HA. Another important point: in my architecture the PostgreSQL DB is clustered. While this will work just fine, I cannot confirm (at the time of this writing) that it is a configuration supported by VMware. If you want more details on clustering the PostgreSQL DB, you can check out this blog post.
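To make the failover idea concrete, here is a minimal, purely illustrative sketch of how a client-side wrapper might pick the active node from an ordered list of DB hosts; in a real clustered setup this is normally hidden behind a single virtual IP, and the hostnames below are made up.

```python
def pick_active_db(hosts, probe):
    """Return the first host the probe reports reachable, else None.

    `probe` is any callable host -> bool, e.g. a TCP check against
    port 5432 (the default PostgreSQL port).
    """
    for host in hosts:
        if probe(host):
            return host
    return None

# Example with a stubbed probe: the first node is "down", so the
# standby node is selected instead.
active = pick_active_db(
    ["pgsql-01.corp.local", "pgsql-02.corp.local"],  # hypothetical names
    probe=lambda h: h.endswith("02.corp.local"),
)
```

The ordered-list approach mirrors what the cluster does for you: clients always land on whichever node is currently serving the database.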

ITBM Standard Edition:

This is a standalone appliance that you can deploy and configure to integrate with vCAC and collect your statistics from vCenter Server, vCloud Director, and/or external clouds (as endpoints). When installed, ITBM registers itself with vCAC, and you get your UI and configuration panels right in the vCAC portal under a new tab called “Business Management”. Back to our architecture: as mentioned, this is a standalone appliance that you simply deploy inside your management cluster. Standard vSphere HA should be used here, since you do not really need 100% uptime for this type of application. A single ITBM appliance can support 20,000 VMs in version 1.0 without the need to scale up its resources.

vCloud Application Director (AppD):

This is another standalone appliance, which comes with the vCAC 6.0 Enterprise Edition. As with ITBM, you deploy this virtual appliance to your management cluster and then register it with your vCAC environment. Please note that in this 6.0 release you will still need to manage your configurations and application blueprints from AppD directly, not from within the vCAC 6.0 portal as with ITBM. Per the official product documentation, the minimum configuration for this appliance is 1 vCPU and 2 GB of memory. At that point, you have to ask yourself which is more important: performance or availability. The reason I mention this is that you can protect your AppD appliance with vSphere FT (using 1 vCPU). If performance is more important, you may want to go with the appliance's default setting of 2 vCPUs, but in that case you cannot protect it with vSphere FT, as FT does not support vSMP (yet!). In the latter case, you can still take some measures to ensure the availability of your AppD appliance, such as backups (or full image clones), enabling vSphere HA, and so forth.

vCAC IaaS Web tier:

This is the traditional web tier inherited from the old DynamicOps acquisition. It still has the same characteristics and recommendations as it did back in the vCAC 5.x days. The only difference here is that the IIS pages of this web tier are actually “framed” within the vCAC VA front-end portal. For example, when you want to add a new endpoint to your cloud environment, you navigate through the vCAC VA portal menus until you click the Endpoints button. At that point, an IIS page is actually being grabbed and framed for you within the vCAC portal. That way, cloud admins get a consistent experience across all the configuration wizards and panels they have been using since the old vCAC 4.x/5.x days. Back to the architecture: this web tier is active/active and stateless, as it has always been. Like the vCAC VAs, you will need a load-balancer virtual IP for these web nodes, as illustrated in the diagram.

vCAC IaaS Manager Service (aka Application Server):

This is another IaaS component ported from the old DynamicOps architecture. It is still an active/passive application tier that requires manual intervention to start the service if the active node goes down. You will also still need a load balancer with a virtual IP to pass the traffic to the active node. If that node goes down, the load balancer can switch over to the other node but, again, the service has to be started manually. It is also recommended to disable the service on the LB before enabling it on the new active Manager Service node, and only then perform the switchover.
Since this is an active/passive configuration, I also like to place the DEM-Orchestrator alongside it, for two reasons. First, the DEM-O is also active/passive in function: if you have two DEM-O nodes, both are active on the network (monitoring each other), but only one actually does the work. Unlike the Manager Service, though, its failover process is automated. Second, the DEM-O should be as close as possible to the Manager Service for best performance. You can certainly use dedicated VMs for your DEM Orchestrator nodes if you are worried about VM performance, or if you want to completely segregate all your management workloads.
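The manual nature of this failover can be summed up in a small decision sketch. This is my own illustration of the runbook logic, not a product feature; unlike the DEM-O, the Manager Service is never restarted automatically.

```python
def manager_service_failover(active_ok, passive_ok):
    """Illustrative runbook logic for the active/passive Manager Service.

    Returns the operator action. Starting the Windows service on the
    standby node is always a manual step in vCAC 6.0.
    """
    if active_ok:
        return "no action needed"
    if passive_ok:
        return ("disable the failed node on the LB, manually start the "
                "Manager Service on the standby node, then switch over")
    return "both nodes down: escalate"
```

The middle branch is the important one: the LB can move the VIP, but the service start itself stays in human hands.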

Model Manager Service vs. Model Manager Data:

During the setup of your IaaS components, you will see an item titled “ModelManagerData”, as shown in the screenshot below. Do not confuse this with the actual Model Manager web service, which is installed automatically on the vCAC web tier mentioned above. ModelManagerData is installed only once; its purpose is to populate the database (the MS-SQL DB) with the default model data.

vCenter Orchestrator:

As you probably know, vCO plays a much bigger role in this release of vCAC 6.0. You can do some amazing [X]aaS things right from your vCAC portal, which are in essence backed by your vCO engine. That said, you should not compromise on the availability of this important component in your infrastructure. Just like all the active/active components above, you can leverage the same load balancer to distribute the load across your vCO nodes. The vCO 5.5 Installation and Configuration product manual provides detailed instructions for your consideration when setting up this cluster.
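For intuition, a basic load balancer simply distributes requests across the vCO nodes in round-robin fashion, which can be sketched as follows (node names are placeholders, and real LBs layer health checks and persistence on top of this):

```python
import itertools

# Hypothetical vCO node hostnames behind the shared load balancer.
VCO_NODES = ["vco-01.corp.local", "vco-02.corp.local"]

def round_robin(nodes):
    """Yield nodes endlessly in order, as a simple round-robin LB would."""
    return itertools.cycle(nodes)

lb = round_robin(VCO_NODES)
first_four = [next(lb) for _ in range(4)]  # alternates between the two nodes
```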