
ACI – Tenant Building Blocks & Forwarding Logic

In my previous blog I described the concept of the Cisco ACI Policy Model at a high level. This blog will take you a bit deeper inside the ACI fabric to understand some of the key ACI constructs and the traffic forwarding logic.

In order to understand how the forwarding logic works, you first need to understand the building blocks that construct a tenant in Cisco ACI.

Building Blocks of a Tenant in Cisco ACI

Although the exact use and definition of a tenant in today's modern data center networks can vary from solution to solution, the general concept is still the same: a logical or physical entity that resides within a data center network to serve a certain user group (such as a marketing team tenant, a guest tenant, or a test and development tenant). In a modern multi-tenant data center network, multiple tenants normally utilize the same underlying infrastructure with logical separation at the higher layers (Layer 2 to Layer 7).

In Cisco ACI, a tenant can be defined as a logical container or folder for application policies. It can represent an actual tenant, an organization, or a domain, or it can simply be used as a convenient way to organize information.

In other words, in Cisco ACI all application configurations typically live inside a tenant, because the tenant represents a unit of isolation from a policy perspective, not a private network.

Technically, within a tenant you can define the following (a configuration sketch follows this list):

  • One or more Layer 3 networks (referred to as VRF instances or contexts)
  • One or more bridge domains (BDs) per network
  • One or more endpoint groups (EPGs) that divide the bridge domains
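To make this containment hierarchy concrete, here is a minimal sketch that creates a tenant with a VRF, a BD, and an EPG through the APIC REST API. The APIC address, credentials, and object names (Marketing, VRF1, BD1, WebApp, Web) are illustrative assumptions, not values from this post.

```python
import requests

APIC = "https://apic.example.com"  # hypothetical APIC address
session = requests.Session()

# Authenticate against the APIC (aaaLogin returns a session cookie).
session.post(f"{APIC}/api/aaaLogin.json", json={
    "aaaUser": {"attributes": {"name": "admin", "pwd": "password"}}
}, verify=False)

# Tenant -> VRF (fvCtx), BD (fvBD), ANP (fvAp) -> EPG (fvAEPg):
# the JSON nesting mirrors the containment hierarchy described above.
tenant = {
    "fvTenant": {
        "attributes": {"name": "Marketing"},
        "children": [
            {"fvCtx": {"attributes": {"name": "VRF1"}}},
            {"fvBD": {
                "attributes": {"name": "BD1"},
                "children": [
                    # The BD must point to a VRF, even for pure L2 use.
                    {"fvRsCtx": {"attributes": {"tnFvCtxName": "VRF1"}}}
                ]
            }},
            {"fvAp": {
                "attributes": {"name": "WebApp"},
                "children": [
                    {"fvAEPg": {
                        "attributes": {"name": "Web"},
                        "children": [
                            # The EPG references its bridge domain.
                            {"fvRsBd": {"attributes": {"tnFvBDName": "BD1"}}}
                        ]
                    }}
                ]
            }}
        ]
    }
}

resp = session.post(f"{APIC}/api/mo/uni.json", json=tenant, verify=False)
print(resp.status_code, resp.text)
```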

pushing_up_box_text_10650

The concept of a VRF is well known even if you are not familiar with the Cisco ACI architecture: a context in ACI is equivalent to a VRF instance. In the APIC GUI, a context is called a “Private Network.” A context defines a Layer 3 address domain; therefore, all of the endpoints within a context/VRF must have unique IP addresses. In addition, if you want one or more subnets to be leaked/shared between different contexts/VRFs, then each of these subnets must be unique among the different VRFs.
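As a hedged illustration of subnet leaking, the fvSubnet object carries a scope attribute that can be set to shared so the subnet can be leaked to other VRFs. This sketch reuses the session and APIC variables from the sketch above; the tenant and BD names remain assumptions.

```python
# Mark a BD subnet as shared so it can be leaked to other VRFs.
# "scope" accepts combinations of private/public/shared in ACI.
shared_subnet = {
    "fvSubnet": {
        "attributes": {
            "ip": "10.1.1.1/24",   # gateway IP for the subnet
            "scope": "shared"      # allow leaking across VRFs
        }
    }
}
session.post(f"{APIC}/api/mo/uni/tn-Marketing/BD-BD1.json",
             json=shared_subnet, verify=False)
```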

The figure below illustrates the ACI objects’ structure within a tenant.

[Figure: ACI object structure within a tenant]

 

What is a “Bridge Domain”?

Technically, a bridge domain (BD) is a container for subnets, and it affects forwarding behavior. Keep in mind that a bridge domain is not a VLAN, although it can act similarly to one; instead, think of it as a distributed switch that, on a given leaf, can be translated locally into a VLAN with local significance. In addition, a BD acts as a broadcast or flooding domain only if broadcast or flooding is enabled, which is rarely needed (we will see an example later in this post where you need to enable flooding in a BD).

Unlike traditional networks, where traffic is typically flooded within the bridge domain scope (within a VLAN), ACI offers the ability to control, as required, whether the following traffic types are flooded (a configuration sketch follows the list):

  • Unknown unicast flooding
  • ARP flooding
  • Multicast flooding
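These per-BD controls map to attributes on the fvBD object. A minimal sketch, reusing the session from the first example and assuming the same illustrative names:

```python
# Tune the BD's flooding behavior: with these settings, unknown unicast
# is sent to the spine proxy and ARP is forwarded as unicast using the
# endpoint database instead of being flooded.
bd_flood_settings = {
    "fvBD": {
        "attributes": {
            "name": "BD1",
            "unkMacUcastAct": "proxy",  # unknown unicast: proxy (vs "flood")
            "arpFlood": "no",           # forward ARP as unicast in the fabric
            "unkMcastAct": "flood"      # unknown multicast: flood in the BD
        }
    }
}
session.post(f"{APIC}/api/mo/uni/tn-Marketing/BD-BD1.json",
             json=bd_flood_settings, verify=False)
```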

ACI provides forwarding based on either the destination IP or the destination MAC address. The forwarding logic can be based either on a local table for the directly attached networks/hosts, or on global entries in a table that is a cached portion of the full global table residing in the spine switches. The full global table in the spine switches is used when an endpoint is not found in the local cache; this is also known as hardware proxy.

[Figure: ACI hardware proxy lookup via the spine mapping database]

By disabling ARP flooding you get more efficient forwarding, as ARP/GARP is forwarded as a unicast packet within the fabric based on the host database.

On the other hand, as shown in the figure below, at the egress point toward an external L2 switch, the ARP/GARP is forwarded as a flooded frame; this helps to support hosts reachable via downstream L2 switches.

[Figure: ARP forwarding within a BD toward an external L2 switch]

Similarly, destination host MACs or IPs that are not known by a specific ingress leaf switch are forwarded to one of the spine proxies for address lookup.

With this forwarding logic, if there is no entry present (a traffic miss) for that destination in the mapping database in the spine, the packet is handled differently for bridged versus routed traffic, as follows (1) (a toy model of this logic appears after the list):

  • For a bridged traffic miss, the packet is dropped.
  • For a routed traffic miss, the spine communicates with the leaves to initiate an ARP request for the destination address. The ARP request is initiated by all the leaves that have the destination subnet. After the destination host responds to the ARP request, the local database in the leaf and the mapping database in the spine are updated. This mechanism is required to support silent hosts.
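To summarize the lookup chain described above, here is a toy Python model of the decision logic; it is purely illustrative and in no way resembles the actual switch implementation:

```python
def forward(packet, leaf_table, spine_table):
    """Toy model of the ACI leaf/spine lookup chain (illustrative only)."""
    dst = packet["dst"]

    # 1. Check the ingress leaf's local table: directly attached endpoints
    #    plus a cached portion of the global table.
    if dst in leaf_table:
        return f"leaf: forward to {leaf_table[dst]}"

    # 2. Leaf miss: send the packet to a spine proxy for a global lookup.
    if dst in spine_table:
        return f"spine proxy: forward to {spine_table[dst]}"

    # 3. Spine mapping-database miss: behavior depends on the traffic type.
    if packet["type"] == "bridged":
        return "drop"  # bridged miss: the packet is dropped
    # Routed miss: leaves that own the destination subnet send an ARP
    # request; once the (silent) host answers, both tables are updated.
    return "ARP glean on all leaves with the destination subnet"


# A routed packet toward a silent host that no table knows about yet:
print(forward({"dst": "10.1.1.20", "type": "routed"}, {}, {}))
```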

The only obvious advantage someone might see in disabling the hardware-based proxy and using flooding for unknown hosts and ARP is that the fabric does not need to learn millions of source IP addresses coming from a given port.

A BD can be defined as either:

  • A Layer 2 BD, where no routing is enabled
  • A routed BD, where Layer 3 is enabled along with the pervasive gateway capability (also known as the anycast default gateway)

From a configuration point of view, because the bridge domain is a child of the VRF instance, even if you need a purely Layer 2 network you still need to create a VRF instance and associate the bridge domain with that VRF instance. The good thing here is that if you don't enable routing, no VRF resources will be allocated in hardware for that VRF. Also, whenever you create an EPG, you will need to reference a BD.
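For example, a purely Layer 2 BD still carries the VRF association but leaves routing disabled. A sketch, again reusing the earlier session and assuming the same illustrative names:

```python
# A purely Layer 2 BD: routing disabled, but still associated with a VRF.
# No hardware VRF resources are consumed while unicastRoute is "no".
l2_bd = {
    "fvBD": {
        "attributes": {"name": "BD-L2only", "unicastRoute": "no"},
        "children": [
            {"fvRsCtx": {"attributes": {"tnFvCtxName": "VRF1"}}}
        ]
    }
}
session.post(f"{APIC}/api/mo/uni/tn-Marketing.json", json=l2_bd, verify=False)
```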

Based on that, the relationships among the various objects can be summarized as follows:

  • The EPG points to a bridge domain (BD).
  • The bridge domain points to a Layer 3 network (VRF).
  • EPGs are grouped into application network profiles; an application profile can span multiple bridge domains, as sketched below.
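A minimal sketch of that last point: one application profile whose EPGs reference different bridge domains (names are assumptions, and the session comes from the first example):

```python
# One application profile whose EPGs reference different bridge domains:
# the ANP groups related EPGs even though they live in separate BDs.
app_profile = {
    "fvAp": {
        "attributes": {"name": "ThreeTier"},
        "children": [
            {"fvAEPg": {
                "attributes": {"name": "Web"},
                "children": [{"fvRsBd": {"attributes": {"tnFvBDName": "BD1"}}}]
            }},
            {"fvAEPg": {
                "attributes": {"name": "DB"},
                "children": [{"fvRsBd": {"attributes": {"tnFvBDName": "BD2"}}}]
            }},
        ]
    }
}
session.post(f"{APIC}/api/mo/uni/tn-Marketing.json",
             json=app_profile, verify=False)
```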

With this approach, by grouping EPGs into application profiles, the administrator makes the network aware of the relationships among application components, regardless of where the application is connected to the fabric (physically or virtually). The figure below illustrates the relationship among the aforementioned building blocks of a tenant.

[Figure: Relationship among the tenant building blocks (ANP, EPGs, BDs, VRF)]

First Hop Default Gateway in ACI

At this stage you know how the ACI forwarding construct is structured within a tenant with regard to BDs, VRFs, EPGs, and ANPs. Next, you need to understand how the first-hop Layer 3 gateway can be defined and how it works in Cisco ACI to route traffic between the different subnets within a tenant.

The good news here is that you do not need to worry about configuring an additional protocol such as HSRP to provide Layer 3 gateway services, because Cisco ACI offers what is known as a “pervasive SVI.” This provides a distributed default gateway (anycast gateway) that is global across the fabric and is configured on the top-of-rack (ToR) leaf switches wherever the bridge domain of a tenant is present with unicast routing enabled. In addition, the subnet default gateway addresses are programmed in all leaves that have endpoints present for the specific tenant IP subnet. This not only helps to reduce design and operational complexity; it also helps to a large extent to optimize traffic forwarding across the ACI fabric, because when a packet needs to be routed to another leaf or an external network, it does not need to cross the fabric to reach a default gateway that could reside on another leaf, as illustrated in the figure below.

[Figure: Distributed anycast default gateway across the leaf switches]
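Configuration-wise, the pervasive gateway is simply a subnet defined under a BD that has unicast routing enabled; the subnet's gateway IP is then programmed on every leaf where the BD is present. A sketch under the same assumptions as the earlier examples:

```python
# Enable routing on the BD and define the anycast (pervasive) gateway:
# the subnet's gateway IP is programmed on every leaf where the BD lives.
routed_bd = {
    "fvBD": {
        "attributes": {"name": "BD1", "unicastRoute": "yes"},
        "children": [
            {"fvSubnet": {"attributes": {"ip": "192.168.10.1/24"}}}
        ]
    }
}
session.post(f"{APIC}/api/mo/uni/tn-Marketing.json",
             json=routed_bd, verify=False)
```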

 

That said, Cisco ACI still supports an external gateway, which is commonly used when the fabric is deployed to provide Layer 2 transport only for a specific tenant, or if the organization's security standards dictate that the default gateway must be a firewall. With this approach you will need to enable ARP flooding on the BD without enabling unicast routing.
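A sketch of such a Layer 2-only BD fronting an external firewall gateway, with routing off and ARP flooding on (illustrative names, session from the first example):

```python
# BD for an external default gateway (e.g., a firewall): no fabric routing,
# ARP flooding enabled so hosts can resolve the external gateway's MAC.
l2_transport_bd = {
    "fvBD": {
        "attributes": {
            "name": "BD-FWgw",
            "unicastRoute": "no",      # the fabric does not route for this BD
            "arpFlood": "yes",         # flood ARP toward the external gateway
            "unkMacUcastAct": "flood"  # flood unknown unicast as well
        }
    }
}
session.post(f"{APIC}/api/mo/uni/tn-Marketing.json",
             json=l2_transport_bd, verify=False)
```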

[Figure: External default gateway with a Layer 2-only BD]

References:
(1) L. Avramov and M. Portolani, “The Policy Driven Data Center with ACI,” Cisco Press.
(2) Cisco Application Centric Infrastructure Design Guide.
(3) Cisco Live: Introduction to Application Centric Infrastructure.
Tags :
Marwan Al-shawi – CCDE No. 20130066, Google Cloud Certified Architect, AWS Certified Solutions Architect, Cisco Press author (author of the top Cisco certification design books “CCDE Study Guide” and the upcoming “CCDP Arch 4th Edition”). He is an experienced technical architect. Marwan has been in the networking industry for more than 12 years and has been involved in architecting, designing, and implementing various large-scale networks, some of which are global service provider-grade networks. Marwan holds a Master of Science degree in internetworking from the University of Technology, Sydney. Marwan enjoys helping and assisting others; therefore, he was selected as a Cisco Designated VIP by the Cisco Support Community (CSC) (the official Cisco Systems forums) in 2012, and by the Solutions and Architectures subcommunity in 2014. In addition, Marwan was selected as a member of the Cisco Champions program in 2015 and 2016.
