First of all, from a solutions or cloud architect's point of view, why would we need multiple VPCs, and when does it make sense?
Considering a multi-VPC architecture essentially means segmentation: there is a business need to separate workloads from one another.
However, this segmentation can take different forms depending on the company structure, security policy, business functions and model, and so on. For example: a segment per environment (Production, Testing, Development), a segment per security zone (DMZ, Management, Internal), a segment per business function or department (IT, HR, Marketing), or a combination of these options. The driver of the segmentation can also vary: it may be security and regulatory driven, cost driven, technology driven, or based on a particular business model and offering. In addition, from an architecture point of view, breaking a single complex domain into smaller, manageable chunks almost always makes it more modular, scalable and flexible.
As a result, there is no single best design option that fits all the different requirements, even when it is based on a multi-VPC architecture, because the actual needs, scale and drivers vary. That being said, there are proven design patterns that can serve as the foundation, and on top of them the specifics can be added and integrated based on current and future requirements.
Let’s start with the basic design shown below. This is a simple multi-VPC design that requires direct communication between each VPC and the on-prem DC. With direct VPN (assuming ~1.25 Gbps per VPN tunnel is enough), this is a simple and easy design option.
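To make this concrete, here is a minimal boto3 sketch of standing up one such direct site-to-site VPN for a single VPC. The region, VPC ID, on-prem public IP and ASN are placeholder assumptions for the sketch, not values from this design.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Customer gateway: represents the on-prem VPN device (public IP and ASN are placeholders)
cgw = ec2.create_customer_gateway(BgpAsn=65000, PublicIp="203.0.113.10", Type="ipsec.1")

# Virtual private gateway attached to one of the VPCs (VPC ID is a placeholder)
vgw = ec2.create_vpn_gateway(Type="ipsec.1")
vgw_id = vgw["VpnGateway"]["VpnGatewayId"]
ec2.attach_vpn_gateway(VpcId="vpc-0a1b2c3d4e5f60001", VpnGatewayId=vgw_id)

# Site-to-site VPN connection (IPsec tunnels of roughly 1.25 Gbps each) using BGP
vpn = ec2.create_vpn_connection(
    CustomerGatewayId=cgw["CustomerGateway"]["CustomerGatewayId"],
    Type="ipsec.1",
    VpnGatewayId=vgw_id,
    Options={"StaticRoutesOnly": False},
)
print(vpn["VpnConnection"]["VpnConnectionId"])
```

In this model, the same set of steps is repeated per VPC, which is exactly why the approach stays simple only while the VPC count stays small.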
What if there is a need to provide VPC-to-VPC communication as well (full mesh connectivity)? Here we have two options: add VPC peering, as illustrated in the figure below.
Or we could consider VPN tunnels among the VPCs. The advantage of VPC peering is higher bandwidth and the guarantee that traffic is transported over the AWS backbone. However, VPC peering is not transitive, so centralized services, such as centralized internet access or reaching another VPC through a middle VPC over peering, cannot be achieved. This is where VPN can be used as an alternative, taking into consideration the bandwidth capacity limit as well as the possibility that the traffic may be sent over the public internet at some point along the path.
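For comparison, the following is a hedged boto3 sketch of the peering option between two VPCs; all VPC IDs, route-table IDs and CIDRs are placeholders. Note how routes must be created on both sides, and how nothing here gives transitive reachability.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Request a peering connection between two VPCs in the same account/region
peering = ec2.create_vpc_peering_connection(
    VpcId="vpc-0a1b2c3d4e5f60001",      # requester VPC (10.1.0.0/16)
    PeerVpcId="vpc-0a1b2c3d4e5f60002",  # accepter VPC  (10.2.0.0/16)
)
pcx_id = peering["VpcPeeringConnection"]["VpcPeeringConnectionId"]

# The accepter side must accept the request before the peering becomes active
ec2.accept_vpc_peering_connection(VpcPeeringConnectionId=pcx_id)

# Routes are needed in BOTH VPCs' route tables; peering is not transitive,
# so every VPC pair that needs to communicate requires its own peering + routes
ec2.create_route(RouteTableId="rtb-0a1b2c3d4e5f60001",
                 DestinationCidrBlock="10.2.0.0/16",
                 VpcPeeringConnectionId=pcx_id)
ec2.create_route(RouteTableId="rtb-0a1b2c3d4e5f60002",
                 DestinationCidrBlock="10.1.0.0/16",
                 VpcPeeringConnectionId=pcx_id)
```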
So far so good. But what if the number of VPCs increases, along with multiple connectivity options to the on-prem DC? From an operations point of view, building and managing full-mesh connectivity with tens of tunnels/peerings among the VPCs becomes complicated.
Also, connecting the on-premises links/tunnels requires each AWS VPN to be attached to each individual Amazon VPC. This connectivity option is time consuming to build and hard to manage, because it does not scale as the number of VPCs grows into the tens. This is where we can start looking at the Transit VPC architecture. As illustrated in the figure below, the Transit VPC creates a hub-and-spoke topology for the VPCs within AWS; the Transit VPC (the hub VPC) provides connectivity aggregation (typically aggregation of the VPN tunnels) as well as centralized access to on-prem and any additional network and application services (NGFW, SD-WAN, etc.). This is a proven architecture and is used by many organizations today.
As the organization's network grows to support a larger number of users in different parts of the world, the AWS footprint typically needs to scale as well. As highlighted earlier, connecting and managing tens or even hundreds of VPCs via peering requires enormous route tables, which are difficult to deploy and manage and can be error prone.
Although the Transit VPC architecture overcomes most of these issues, scale in the number of VPCs, manageability, and bandwidth capacity remain limitations an organization may face when the scale becomes very high, especially when you need to quickly and easily add more Amazon VPCs from multiple AWS accounts to support increased demand on your workloads. Technically, the maximum is 125 peering connections per VPC, and managing a large number of VPN tunnels and peering connections is neither scalable nor manageable at scale.
“With AWS Transit Gateway, you only have to create and manage a single connection from the central gateway in to each Amazon VPC, on-premises data center, or remote office across your network. Transit Gateway acts as a hub that controls how traffic is routed among all the connected networks which act like spokes. This hub and spoke model significantly simplifies management and reduces operational costs because each network only has to connect to the Transit Gateway and not to every other network. Any new VPC is simply connected to the Transit Gateway and is then automatically available to every other network that is connected to the Transit Gateway”
AWS Transit Gateway (TGW) acts as a connectivity aggregation point/hub, with which you can easily share AWS services, such as DNS, Active Directory, and IPS/IDS, across all of your Amazon VPCs.
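As a rough illustration (not a prescribed implementation), a TGW and a first VPC attachment could be created along these lines with boto3; the region, ASN and resource IDs are assumptions for the sketch.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Create the Transit Gateway; disabling default association/propagation keeps
# full control over which route table (routing domain) each attachment uses
tgw = ec2.create_transit_gateway(
    Description="hub for multi-VPC and on-prem connectivity",
    Options={
        "AmazonSideAsn": 64512,
        "DefaultRouteTableAssociation": "disable",
        "DefaultRouteTablePropagation": "disable",
    },
)
tgw_id = tgw["TransitGateway"]["TransitGatewayId"]

# Attach a VPC, giving the TGW one subnet per AZ for its network interfaces
attachment = ec2.create_transit_gateway_vpc_attachment(
    TransitGatewayId=tgw_id,
    VpcId="vpc-0a1b2c3d4e5f60001",
    SubnetIds=["subnet-0a1b2c3d4e5f6a001", "subnet-0a1b2c3d4e5f6b001"],
)
print(attachment["TransitGatewayVpcAttachment"]["TransitGatewayAttachmentId"])
```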
Let’s review the basic operation of the AWS TGW. Technically, it acts like a big elastic router connecting VPCs as well as on-prem (today VPN is supported; according to AWS, Direct Connect will be supported by early 2019). As illustrated below, each VPC or VPN is treated as an attachment, and this attachment is then associated with a TGW route table or routing domain (think of a routing domain as a VRF routing table in classical routing terms). A TGW can hold multiple routing domains/tables; a VPC can be associated with only one route table, but it can propagate its routes to more than one route table. This is useful when more complex segmented routing is required. Also, the cloud admin can create static route entries as well as static blackhole entries for explicit routing and traffic-engineering control; these static/blackhole routes take precedence over propagated routes.
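The association/propagation and static/blackhole behaviour described above maps to a handful of API calls. A minimal sketch, assuming placeholder TGW and attachment IDs and example CIDRs:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
TGW_ID = "tgw-0a1b2c3d4e5f60001"            # placeholder Transit Gateway
VPC_ATT = "tgw-attach-0a1b2c3d4e5f60001"    # placeholder VPC attachment
VPN_ATT = "tgw-attach-0a1b2c3d4e5f60002"    # placeholder VPN attachment

# A route table on the TGW is effectively a routing domain (VRF-like)
rt = ec2.create_transit_gateway_route_table(TransitGatewayId=TGW_ID)
rt_id = rt["TransitGatewayRouteTable"]["TransitGatewayRouteTableId"]

# An attachment is ASSOCIATED with exactly one route table...
ec2.associate_transit_gateway_route_table(
    TransitGatewayRouteTableId=rt_id, TransitGatewayAttachmentId=VPC_ATT)

# ...but it can PROPAGATE its routes into several route tables
ec2.enable_transit_gateway_route_table_propagation(
    TransitGatewayRouteTableId=rt_id, TransitGatewayAttachmentId=VPN_ATT)

# Static route for explicit traffic engineering (wins over propagated routes)
ec2.create_transit_gateway_route(
    DestinationCidrBlock="10.50.0.0/16",
    TransitGatewayRouteTableId=rt_id,
    TransitGatewayAttachmentId=VPN_ATT)

# Blackhole route: traffic to this prefix is silently dropped
ec2.create_transit_gateway_route(
    DestinationCidrBlock="192.168.100.0/24",
    TransitGatewayRouteTableId=rt_id,
    Blackhole=True)
```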
From a VPC architecture point of view, this flat, open communication model looks like the one illustrated below.
Also, isolated or segmented routing can be designed with the AWS TGW. For instance, the routing domain design below has three different routing tables: VPC A and B propagate their routes only to the routing table where the VPN is attached, which means VPC A and B can communicate with the VPN/on-prem, but there is no direct communication between VPC A and B.
From a VPC architecture point of view, this isolated/segmented communication model looks like the one illustrated below (think of it as having a shared services VPC (VPC C), while communication among the other VPCs is not permitted).
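A simplified sketch of this isolation pattern follows, using one route table shared by the spoke VPCs and one for the VPN (the design above uses a route table per attachment, but the association/propagation logic is the same); all IDs are placeholders.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
TGW   = "tgw-0a1b2c3d4e5f60001"              # placeholder IDs throughout
VPC_A = "tgw-attach-0a1b2c3d4e5f6000a"
VPC_B = "tgw-attach-0a1b2c3d4e5f6000b"
VPN   = "tgw-attach-0a1b2c3d4e5f6000c"

def new_rt():
    resp = ec2.create_transit_gateway_route_table(TransitGatewayId=TGW)
    return resp["TransitGatewayRouteTable"]["TransitGatewayRouteTableId"]

# Spoke route table: only the VPN propagates routes here, so VPC A and B can
# reach on-prem but never learn each other's prefixes (no direct A<->B traffic)
spoke_rt = new_rt()
for att in (VPC_A, VPC_B):
    ec2.associate_transit_gateway_route_table(
        TransitGatewayRouteTableId=spoke_rt, TransitGatewayAttachmentId=att)
ec2.enable_transit_gateway_route_table_propagation(
    TransitGatewayRouteTableId=spoke_rt, TransitGatewayAttachmentId=VPN)

# VPN route table: both spoke VPCs propagate their routes here, so on-prem
# can reach VPC A and VPC B
vpn_rt = new_rt()
ec2.associate_transit_gateway_route_table(
    TransitGatewayRouteTableId=vpn_rt, TransitGatewayAttachmentId=VPN)
for att in (VPC_A, VPC_B):
    ec2.enable_transit_gateway_route_table_propagation(
        TransitGatewayRouteTableId=vpn_rt, TransitGatewayAttachmentId=att)
```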
As shown in the above examples, the TGW routing table allows you to define the next hop as an "attachment" (VPC/VPN) to forward packets to (a transit gateway attachment is both a source and a destination of packets).
The isolated/segmented routing architecture can be extended into the VPC design. For example, in the design shown below, a private subnet in each AZ is associated with the TGW to provide internal backend connectivity to the other AWS VPCs/on-prem; the TGW automatically creates a network interface per AZ in the attached VPC subnets, while the VPC routing tables are used to control how traffic is routed within the VPC.
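A possible way to express this split of responsibilities with boto3, assuming placeholder subnet, VPC and route-table IDs: the TGW attachment pins the per-AZ private subnets, while an ordinary VPC route sends backend traffic to the TGW.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Attach the VPC using one private subnet per AZ; the TGW places an elastic
# network interface in each of these subnets (all IDs are placeholders)
ec2.create_transit_gateway_vpc_attachment(
    TransitGatewayId="tgw-0a1b2c3d4e5f60001",
    VpcId="vpc-0a1b2c3d4e5f60001",
    SubnetIds=["subnet-0a1b2c3d4e5f6a001",   # private subnet in AZ a
               "subnet-0a1b2c3d4e5f6b001"],  # private subnet in AZ b
)

# Routing WITHIN the VPC remains under the control of its own route tables,
# e.g. send internal/backend traffic to the TGW attachment
ec2.create_route(
    RouteTableId="rtb-0a1b2c3d4e5f60001",
    DestinationCidrBlock="10.0.0.0/8",
    TransitGatewayId="tgw-0a1b2c3d4e5f60001",
)
```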
Similarly, the connectivity can be terminated into different routing domains/tables of the TGW to provide isolated routing among different VPCs. This can be used for security-inspection type designs, where virtual appliances inspect traffic passing between different security zones/VPCs.
Since the AWS TGW can act as the connectivity aggregation point, services VPC(s) that provide specialized functions such as security inspection, SD-WAN, etc. can be moved from the Transit VPC architecture and connected as service VPCs to the AWS Transit Gateway, as shown below, considering that multiple VPN tunnels offer high aggregate bandwidth given the TGW's ability to support multiple tunnels from multiple virtual appliances (horizontally scalable).
This requires the virtual appliance to support VPN, BGP and source NAT. SNAT helps with stateful instances by ensuring return traffic uses the same path/appliance, while BGP dynamic routing with VPN dead peer detection helps maintain HA and handle failover among the different tunnels/virtual instances.
If a VPN tunnel or BGP is not an option for the virtual appliance, an ENI can be used via the TGW VPC attachment; however, you lose the ability to detect route or peer failure that BGP over the VPN tunnel provided, as the attachment has no built-in health-check mechanism. Also, from a performance point of view, this means there will almost always be one TGW attachment per AZ, and traffic will not be distributed evenly across instances, because ECMP requires multiple equal-cost routes learned over multiple paths via BGP.
Note: the termination of the on-prem connections can be done at the TGW, as the aggregation point, for simplicity; by using multiple routing tables, traffic can be steered through the inline services VPC, as shown below.
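Steering the spokes through the inline services VPC can then be done with a static default route in the spokes' TGW route table, for example (a minimal sketch; route table and attachment IDs are placeholders):

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# In the route table associated with the spoke VPC attachments, point the
# default route at the services VPC attachment so traffic is forced through
# the inline appliances before continuing on (IDs are placeholders)
ec2.create_transit_gateway_route(
    DestinationCidrBlock="0.0.0.0/0",
    TransitGatewayRouteTableId="tgw-rtb-0a1b2c3d4e5f60001",
    TransitGatewayAttachmentId="tgw-attach-0a1b2c3d4e5f600ff",  # services VPC
)
```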
According to AWS, it is partnering with Cisco and other vendors for TGW edge services.
Let’s look at these different VPC design options and think about them like an architect, to decide when to use what (the information in the table below is generic to a certain extent; you will need to dive deeper into each aspect when making the design decision for a real solution):
| | Direct full/partial mesh | Transit VPC | Transit GW |
|---|---|---|---|
| Scale | Very low | Low to medium | Very high |
| Performance | VPN ~1.25 Gbps per tunnel | VPN ~1.25 Gbps per tunnel | VPN ~1.25 Gbps per tunnel; with ECMP may go beyond 50 Gbps |
| Security | Encryption with IPsec; limited segmentation options | Encryption with IPsec; segmentation using virtual appliances in the hub VPC | Encryption with IPsec; flexible segmentation options (routing tables, virtual appliances, shared services/security VPC, etc.) |
| Manageability | The larger the scale, the more complex to manage | The larger the scale, the more complex to manage (increased number of tunnels, etc.) | Single management plane for the different routing domains and route propagations |
| Flexibility | Limited | Intermediate to high, depending on the scale and design requirements | High |
| Interoperability | Limited at scale (difficult to integrate and control routing and secure segmentation at scale) | High; a centralized VPC can provide centralized integration, shared services, etc. | Very high; single aggregation point for routing and segmentation, integration with specialized VPCs (security, SD-WAN, shared services, etc.) |
| Potential use case | Very limited number of VPCs | Medium number of VPCs, to provide centralized connectivity and segmentation with hybrid cloud such as SD-WAN with on-prem | Medium to very large number of VPCs, to provide centralized connectivity and complex segmentation with hybrid cloud such as SD-WAN with on-prem; VPC routing across multiple AWS accounts |
Note: there are other proven ways to provide shared services, such as the concept of PrivateLink: “You can create your own application in your VPC and configure it as an AWS PrivateLink-powered service (referred to as an endpoint service). Other AWS principals can create a connection from their VPC to your endpoint service using an interface VPC endpoint. You are the service provider, and the AWS principals that create connections to your service are service consumers.”
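As a hedged illustration of the provider/consumer flow described in that quote (the NLB ARN, VPC, subnet and security group IDs are placeholders):

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Provider side: expose an application fronted by a Network Load Balancer as
# an endpoint service (the NLB ARN is a placeholder)
svc = ec2.create_vpc_endpoint_service_configuration(
    NetworkLoadBalancerArns=[
        "arn:aws:elasticloadbalancing:us-east-1:111122223333:"
        "loadbalancer/net/shared-svc/0123456789abcdef"
    ],
    AcceptanceRequired=True,
)
service_name = svc["ServiceConfiguration"]["ServiceName"]

# Consumer side: an interface endpoint in the consuming VPC connects privately
# to the provider's service, with no peering or TGW in the path
ec2.create_vpc_endpoint(
    VpcEndpointType="Interface",
    VpcId="vpc-0a1b2c3d4e5f60001",
    ServiceName=service_name,
    SubnetIds=["subnet-0a1b2c3d4e5f6c001"],
    SecurityGroupIds=["sg-0a1b2c3d4e5f60001"],
)
```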
The following are the limits of the TGW at the time of writing this blog:
- Supports VPN to on-prem; Direct Connect support, according to AWS, in early 2019
- Supported within a region; inter-region TGW is on the roadmap
- Supports 5,000 TGW attachments
- 20 route tables per TGW
- 10,000 routes per TGW