Monday, August 15, 2022

Aviatrix-Episode3: Connecting OnPrem Remote site to Aviatrix Cloud infrastructure via BGPoIPSEC (incl. BGP route approval)

 Today, we will simulate an "On Prem" Data Centre connected to our existing MultiCloud Network infrastructure. For this, we will use the Aviatrix feature called Site2Cloud (aka S2C).

You might remember from the previous blog post that it was used to connect Aviatrix GWs to AWS Cloud WAN via IPSEC & GRE.

What is S2C?



Connectivity options


S2C allows your Aviatrix GWs (Spoke or Transit) to be connected to many different entities. This can be:

  • On Prem DC or Branch
    • BGPoIPSEC to secure connections over Internet or Private Lines. High Performance Encryption (HPE) is available to overcome the bandwidth limitation of a single IPSEC tunnel (~1.25 Gbps)
    • BGPoGRE (AWS only on Private Lines (DX)) to extend Aviatrix overlay without IPSEC limitations
  • 3rd party appliances like SDWAN with BGPoLAN
    • Route exchange without any tunnelling protocol
    • High Performance, widely compatible SDWAN integration
    • Integrates with GCP NCC
  • Cloud Native Constructs (seen with AWS Cloud WAN). As an example, it can be:
    • BGPoIPSEC & BGPoGRE with AWS TGW / Cloud WAN
    • BGPoIPSEC with Azure VWAN
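The HPE point above (overcoming the ~1.25 Gbps single-tunnel limit by aggregating tunnels) can be illustrated with a back-of-the-envelope sketch. This is my own illustration, not an Aviatrix sizing tool:

```python
import math

# My own rough sketch, not an Aviatrix sizing tool: estimate how many parallel
# IPSEC tunnels HPE must aggregate to reach a target bandwidth, given the
# ~1.25 Gbps ceiling of a single IPSEC tunnel mentioned above.
SINGLE_TUNNEL_GBPS = 1.25

def tunnels_needed(target_gbps: float) -> int:
    return math.ceil(target_gbps / SINGLE_TUNNEL_GBPS)

print(tunnels_needed(10))  # a 10 Gbps target needs at least 8 tunnels
```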

BGP Route Approval



One cool feature of S2C is BGP Route Approval on the Aviatrix Transit GWs. It allows you to filter unwanted routes propagated over BGP by the remote connection. [small anecdote: I recently worked with a customer that had a big outage because a default route was propagated into its Public Cloud, causing substantial damage]

The process is the following:

1. New routes from remote connection are propagated to Aviatrix Transit over BGP

2. The Aviatrix Transit Gateway reports these new routes to the Aviatrix Controller

3. The Aviatrix Controller notifies the admin via email

4. The admin logs into the Aviatrix Controller to approve these new routes

5. If approved, the Aviatrix Controller programs the new routes to the Aviatrix Spoke Gateways.
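The five steps above can be sketched as a tiny state machine. The names below are hypothetical (the real logic lives inside the Aviatrix Controller), and the Lo1 CIDR is illustrative:

```python
# Minimal sketch of the BGP Route Approval flow (my own model, not Aviatrix code).
pending: set[str] = set()    # routes learned over BGP, awaiting approval
approved: set[str] = set()   # routes the admin has approved

def learn_route(cidr: str) -> None:
    """Steps 1-3: a new BGP route is reported to the Controller and queued."""
    if cidr not in approved:
        pending.add(cidr)
        print(f"email to admin: please review {cidr}")

def approve(cidr: str) -> None:
    """Steps 4-5: admin approves; Controller programs the spoke gateways."""
    if cidr in pending:
        pending.remove(cidr)
        approved.add(cidr)
        print(f"programming {cidr} on spoke gateways")

learn_route("10.10.10.0/24")   # Loopback0 (from the test later in this post)
learn_route("10.10.11.0/24")   # Loopback1 (illustrative CIDR)
approve("10.10.10.0/24")       # only Lo0 gets approved
```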

Other benefits provided by S2C

Imagine you are in a Mergers & Acquisitions context: you have acquired a new company whose Cloud Infrastructure and/or On Prem Data Centre has IP ranges overlapping with your own Cloud Infrastructure. You want to keep control over this, but what solutions do you have with Cloud Native tooling? Aviatrix offers multiple solutions based on NAT.

NOTE: NAT scenarios will be discussed in later blog posts & are out of scope for this one.

Architecture 



  • A simulated On Prem DC connects to the Cloud via a Cisco CSR1000V Virtual Router over the Internet (S2C, BGPoIPSEC)
  • The 2 Aviatrix Transit GWs in AWS connect to this On Prem DC via 2 distinct IPSEC Tunnels (when HA is enabled, this is automatically configured)
  • BGP Route Approval is enabled in the Aviatrix GWs located in AWS. We will only allow Loopback0.
  • We will then exclude Loopback0 in the Transit peering to see that VM VNET1 is not reachable anymore.

Configuration

Configuration of S2C - Aviatrix Controller



  • External Device for Cisco CSR Connection
  • BGPoIPSEC
  • Configure Aviatrix BGP ASN & Cisco CSR BGP ASN
  • Select the Primary Aviatrix GW (bear in mind that if you are in HA mode, 2 IPSEC Tunnels (1 per Aviatrix GW) will be created)
  • 'Learned CIDR Approval' is set to 'Enabled' to activate BGP Route Approval
  • Remote GW IP is the Public IP of the Cisco CSR Router
  • Pre Shared Key configured

Cisco CSR1000V Provisioning


1. Subscribe & Launch a new EC2


2. Launch EC2 Instance for Cisco Virtual Router


3. Allocate & Associate EIP to CSR Instance

4. You are now able to login into the CSR Instance

Download & Install BGP / IPSEC configuration from Aviatrix Controller




Only the following must be adapted according to your needs:

  • IKE crypto_policy number
  • IPSEC Tunnel Interfaces (*2)
  • Source Interface of the Cisco CSR for IPSEC (Public IP)
Please see the full configuration (posted separately).
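The placeholder substitution can also be scripted before pasting the template into the CSR. This is my own helper, not an Aviatrix tool: the placeholder names come from the downloaded template, while the values are examples you must adapt:

```python
# My own convenience script (not provided by Aviatrix): fill in the placeholders
# of the downloaded S2C configuration template. Excerpt of the template only.
template = """\
crypto isakmp policy <crypto_policy_number>
interface Tunnel <tunnel_number1>
  tunnel source <ios_wan_interface1>
"""

values = {  # example values - adapt to your own environment
    "<crypto_policy_number>": "1",
    "<tunnel_number1>": "1",
    "<ios_wan_interface1>": "GigabitEthernet1",
}

for placeholder, value in values.items():
    template = template.replace(placeholder, value)

print(template)
```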

The 2 IPSEC tunnels (to Transit & Transit HA) go UP.


You are even notified via email by the Aviatrix Controller of the status change of your IPSEC tunnels to the Cisco CSR (since Controller version 6.8)!!


Let's create the 2 loopback interfaces depicted in the diagram.

As we enabled BGP Route Approval, we receive notifications from Aviatrix Controller that we need to approve or deny the new CIDRs propagated via BGP by the Cisco CSR to the Aviatrix Transit GWs.


I decide to approve only Loopback0 for the purpose of the tests. (don't forget to click on 'Update')


Visualization & testing

Route table of the Azure Spoke Vnet1 GW in Copilot. Only 10.10.10.0/24 has been propagated, as foreseen.


Ping from Lo0 & Lo1 on the CSR to VM VNET1. Only the ping from Lo0 is successful, as wished.


Now filter Lo0 on the Transit peering between AWS & Azure. (the configuration must be symmetric on the 2 Transits, otherwise it will be rejected)


Ping NOK, as foreseen.


Bottom Line

  • S2C is a very easy Aviatrix product to use to connect your Cloud to any kind of Remote site via different flavours (BGPoLAN for SDWAN, BGPoIPSEC to connect Cloud Native constructs or remote sites over the Internet, BGPoGRE for remote private connections, etc.)
  • High Performance Encryption (IPSEC) can be enabled to give you more bandwidth to your remote site
  • Fancy mechanisms to overcome the Cloud Native limitations & relieve you from pain (NAT and BGP Route Approval)
  • You can even download the configuration of your remote device for an easy integration!

Next episodes foreseen:

Episode4: Embedded L4 Stateful FWs on Aviatrix GWs

Episode5: All you need to know about Aviatrix FQDN Filtering - Design Patterns

Episode6: Aviatrix Copilot Tour (including Cyber Threat Protection with ThreatIQ/ThreatGuard)

Episode7: How to spin up a fully resilient multicloud environment in minutes with Terraform

Aviatrix-Episode4: Embedded L4 Stateful FWs on Aviatrix GWs

 There is a cool Aviatrix feature on the Data Plane: the embedded L4 Stateful FW on every single Aviatrix Gateway!

You might say: 'Yet another Security product..' or 'It is just an L4 packet-filtering FW. What is so cool about it that it even deserves a dedicated blog post?'

The answer is: 'You don't need to install anything if you need that capability!'

You might remember a recent post about AWS Network FW.. Remember all the complexity to make it work: so many different architectures, some manual routing, additional VPCs, etc..

Here nothing 😏. You start straight with the FW configuration. 

Some facts about embedded L4 Stateful FW

  • Filters on CIDRS, protocol & ports
  • It is great for Centralized packet filtering on the Aviatrix Transit GWs (we will test it there, even though an Aviatrix Spoke GW could also have been used)


  • Action can be 'Allow', 'Deny' or 'Force Drop'
    • Deny: blocks new connections but allows existing ones
    • Force Drop: drops existing & new connections
  • This feature is automatically used by the Aviatrix platform to enforce the FW rules for 
    • Public Subnet Filtering (AWS GuardDuty Enforcement) 
    • ThreatGuard, which blocks malicious IPs & protects against Data Exfiltration, Bitcoin Mining, DDoS, etc.: this is called Cyber Threat Protection.

What will we test today ? 😺

You might remember Episode1 architecture.


We will simply filter traffic on the Aviatrix Transit GWs in AWS to deny only ICMP between SpokeB EC2 & VM VNET1 and allow everything else. Let's go!

Configuration

 1. Create Tag & Tag objects (example for SpokeB but same applies for VNET1)


2. Select the GW (same policy applies for HA GW)


3. Apply rules using TAGs or CIDRs


  • Sources & Destinations are based on the TAGs previously created
  • Protocol is ICMP
  • Action for this rule is Deny (remember the purpose of the test), whereas the base policy is Allow all: this means that everything will be allowed except the rule specified (ie ICMP between TAGs SpokeB & VNET1)
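The rule semantics above can be modelled in a few lines. This is my own sketch of the policy logic, not Aviatrix code; only the tag names come from the test:

```python
# My own model of the rule semantics (not Aviatrix's implementation):
# base policy is 'Allow all', plus one 'Deny' rule for ICMP between the
# two TAGs created earlier (SpokeB & VNET1), in either direction.
def evaluate(src_tag: str, dst_tag: str, protocol: str) -> str:
    if protocol == "icmp" and {src_tag, dst_tag} == {"SpokeB", "VNET1"}:
        return "deny"          # the specific rule of our test
    return "allow"             # base policy: everything else is allowed

print(evaluate("SpokeB", "VNET1", "icmp"))  # deny
print(evaluate("VNET1", "SpokeB", "tcp"))   # allow
```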

Testing

Ping is blocked as expected.


NOTE: You can visualize the logs by enabling 'Packet Logging' and sending it to a Syslog Server.

Bottom Line

  • Aviatrix embedded L4 Stateful FW is an easy feature to use
  • No rearchitecture is needed
  • It is free!

Next episodes foreseen:

Episode5: All you need to know about Aviatrix FQDN Filtering - Design Patterns

Episode6: Aviatrix Copilot Tour (including Cyber Threat Protection with ThreatIQ/ThreatGuard)

Episode7: How to spin up a fully resilient multicloud environment in minutes with Terraform

Sunday, August 14, 2022

Cisco CSR Config. with Aviatrix S2C

  Aviatrix Site2Cloud configuration template

!

! This configuration serves as a general guideline and may have to be modified to

! be functional on your device.

!

! If the provided encryption or authentication type is configured as 'n/a', then

! there was not a known mapping from the selected type to the encryption or

! authentication type expected by the Cisco device.  Please reference the Cisco

! documentation for your device and replace 'n/a' with the expected configuration.                                                                                                 

! This connection has two IPsec tunnels between the customer gateway and 

! Aviatrix gateways in the cloud. Tunnel #1 is the primary tunnel. The 

! customer gateway should be configured in such a way that it should

! switch over to tunnel #2 when tunnel #1 fails.

! You need to populate these values throughout the config based on your setup:

! <crypto_policy_number>: the IKE crypto policy number

! <tunnel_number1>: the primary IPSec tunnel interface number

! <tunnel_number2>: the backup IPSec tunnel interface number

! <ios_wan_interface1>: the primary source interface of tunnel packets

! <ios_wan_interface2>: the backup source interface of tunnel packets

! <customer_tunnel_ip1>: any un-used IPv4 address for the primary tunnel interface

!                        when static routing is used (e.g. 1.1.1.1)

! <customer_tunnel_ip2>: any un-used IPv4 address for the backup tunnel interface

!                        when static routing is used (e.g. 1.1.1.3)

! <netmask>: netmask for customer_tunnel_ip. Please use 255.255.255.255

!

! --------------------------------------------------------------------------------

! IPSec Tunnel #1 (Primary)

! --------------------------------------------------------------------------------

! #1: Internet Key Exchange (IKE) Configuration

! A policy is established for the supported ISAKMP encryption, 

! authentication, Diffie-Hellman, lifetime, and key parameters.

!

crypto keyring xxx

  pre-shared-key address xx key xx

  exit

!

crypto isakmp policy 1

  encryption 256-aes

  authentication pre-share

  hash sha256

  group 14

  lifetime 28800

  exit

!

! DPD configuration on Aviatrix gateway for this site2cloud connection is given below:

!     status       : enabled

!     initial delay: 10 seconds

!     retry        : 3 seconds

!     maxfail      : 3

!

crypto isakmp keepalive 10 3 periodic

!

crypto isakmp profile xx

  keyring xx

  self-identity address

  match identity address xx

  exit

!

!---------------------------------------------------------------------------------

! #2: IPSec Configuration

! The IPSec transform set defines the encryption, authentication, and IPSec

! mode parameters.

!

crypto ipsec transform-set xx esp-256-aes esp-sha256-hmac

  mode tunnel

  exit

crypto ipsec df-bit clear

!

crypto ipsec profile xx

  set security-association lifetime seconds 3600

  set transform-set xx

  set pfs group14

  set isakmp-profile xx

  set security-association lifetime kilobytes disable

  set security-association lifetime seconds 3600

  exit

!

!---------------------------------------------------------------------------------------

! #3: Tunnel Interface Configuration

! The virtual tunnel interface is used to communicate with the remote IPSec endpoint 

! to establish the IPSec tunnel.

!

interface Tunnel 1

  ip address 169.254.8.97 255.255.255.252

  ip mtu 1436

  ip tcp adjust-mss 1387

  tunnel source xx

  tunnel mode ipsec ipv4

  tunnel destination xx

  tunnel protection ipsec profile xx

  ip virtual-reassembly

  exit

!

!

! --------------------------------------------------------------------------------

! IPSec Tunnel #2 (Backup)

! --------------------------------------------------------------------------------

! #4: Internet Key Exchange (IKE) Configuration

!

crypto keyring xx

  pre-shared-key address xx key S2CTEST

  exit

!

crypto isakmp profile xx

  keyring xx

  self-identity address

  match identity address xx 255.255.255.255

  exit

!

!---------------------------------------------------------------------------------

! #5: IPSec Configuration

! The IPSec transform set defines the encryption, authentication, and IPSec

! mode parameters.

!

crypto ipsec transform-set xx esp-256-aes esp-sha256-hmac

  mode tunnel

  exit

!

crypto ipsec profile xx

  set security-association lifetime seconds 3600

  set transform-set xx

  set pfs group14

  set isakmp-profile xx

  set security-association lifetime kilobytes disable

  set security-association lifetime seconds 3600

  exit

!

!---------------------------------------------------------------------------------------

! #6: Tunnel Interface Configuration

! The virtual tunnel interface is used to communicate with the remote IPSec endpoint

! to establish the IPSec tunnel.

!

interface Tunnel 2

  ip address 169.254.188.9 255.255.255.252

  ip mtu 1436

  ip tcp adjust-mss 1387

  tunnel source xx

  tunnel mode ipsec ipv4

  tunnel destination xx

  tunnel protection ipsec profile xx

  ip virtual-reassembly

  exit

!

!---------------------------------------------------------------------------------------

! #7: BGP Routing Configuration

! The Border Gateway Protocol (BGPv4) is used to exchange routes from the VPC to on-prem

! network. Each BGP router has an Autonomous System Number (ASN).

!

router bgp 64512

  bgp log-neighbor-changes

  neighbor 169.254.8.98 remote-as 65000

  neighbor 169.254.8.98 timers 60 180

  ! bgp md5 authentication password needs to be added if configured

  ! neighbor 169.254.8.98 password 

  neighbor 169.254.188.10 remote-as 65000

  neighbor 169.254.188.10 timers 60 180

  ! bgp md5 authentication password needs to be added if configured

  ! neighbor 169.254.188.10 password 

 !

 address-family ipv4

  redistribute connected

  neighbor 169.254.8.98 activate

  neighbor 169.254.8.98 soft-reconfiguration inbound

  neighbor 169.254.188.10 activate

  neighbor 169.254.188.10 soft-reconfiguration inbound

  maximum-paths 4

 exit-address-family

!

!---------------------------------------------------------------------------------------

!

!

For vendor specific instructions, please go to the following URL:

http://docs.aviatrix.com/#site2cloud

Thursday, August 11, 2022

Aviatrix-Episode2: Aviatrix & AWS Cloud WAN compatibility & segmentation

The idea for this post came from an Aviatrix announcement.. I must admit I did not know about this compatibility until the official announcement.. 😅

When I read it, it looked very promising, meaning: Aviatrix + AWS Cloud WAN =

The compatibility includes:

  • Aviatrix integrates with AWS Cloud WAN using GRE encapsulation (AWS Cloud WAN Connect Attachment) and/or  IPSEC encryption (AWS Cloud WAN S2S VPN attachment)
  • Network Segmentation between Aviatrix & AWS Cloud WAN is possible with each AWS Cloud WAN attachment being part of an Aviatrix Network Domain (aka segment)
  • Use Cases
    • High Performance Encryption between AWS Cloud WAN & another Public Cloud
    • AWS Cloud WAN Segments can be extended to other Public Clouds using Aviatrix MCNS (Multi Cloud Network Segmentation), with each AWS Cloud WAN attachment being part of a specific segment (micro-segmentation is also possible)
    • IPSEC encryption to On Prem with Secure Edge
    • Overcoming IP overlapping challenges (with MAPPED NAT) 
    • Cyber Threat Protection of workloads being behind AWS Cloud WAN  

AWS Cloud WAN - Aviatrix Architecture

NOTE: I cannot test every single use case. And I must tell you the truth: I am super excited about testing one (and only one) specific use case listed above: SEGMENTATION!    


The rationale behind it is fairly simple.. AWS Cloud WAN is the AWS feature for inter-Region communication and, above all, segmentation across the AWS Cloud infrastructure. 

NOTE: for a precise explanation of AWS Cloud WAN, please read my dedicated blog post.

So, let's configure AWS Cloud WAN connected to Aviatrix via the 2 techniques (GRE & IPSEC) with 2 specific segments (DEV & PRD)

  • DEV segment via GRE
  • PRD segment via IPSEC

The final goal is that these 2 segments in AWS Cloud WAN can communicate with the same 2 segments in Azure via the Aviatrix infrastructure.


VM DEV must communicate with EC2 DEV & VM PRD must communicate with EC2 PRD - nothing else - all other communication between Azure & AWS, or within the same Public Cloud, is prohibited.
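The intended end state can be captured as a small connectivity check. This is my own sketch, with illustrative endpoint names:

```python
# My own sketch of the target policy (illustrative endpoint names): workloads
# may only talk within their own segment; everything else is prohibited.
segment = {
    "EC2 DEV": "DEV", "VM DEV": "DEV",
    "EC2 PRD": "PRD", "VM PRD": "PRD",
}

def allowed(a: str, b: str) -> bool:
    return segment[a] == segment[b]

print(allowed("VM DEV", "EC2 DEV"))   # True:  DEV <-> DEV is OK
print(allowed("VM PRD", "EC2 PRD"))   # True:  PRD <-> PRD is OK
print(allowed("VM DEV", "EC2 PRD"))   # False: cross-segment is prohibited
```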

Prerequisites

  • Aviatrix Controller & Copilot are already installed (see Episode1)
  • All VPCs & VNETs are already configured
  • All Aviatrix GWs are already configured + attached + Transit Peering + Connected Transit (see Episode1)
  • All VMs & EC2 are already configured
NOTE: Oh! BTW, I created all VPCs & VNETs from the Aviatrix Controller - easier & it configures everything for me (like the AWS IGW needed for accessing EC2 instances from the Internet).

Preliminary AWS Cloud WAN configuration

Step1: Create Global Network



Step2: Create Core Network & Primary Segment (DEV)



Step3: Create Attachments for DEV & PRD VPCs

NOTE: Don't forget the TAG; it is very important for the Attachment policies later in the configuration.


Step4: Add Static routes in VPC RTs to 10/8 towards the CNE



Step5: Create New Segment (PRD)



Step6: Create Attachment policies for DEV & PRD



We can see in the attachment policy rules that key/value pairs are used as the attachment condition. This reuses the Tag configured when the attachments were created in Step3.

Step7: Ensure that the policy is configured as below



  • The CNE must have its own ASN
  • The Inside CIDR block must be configured. It is mandatory for creating the Connect (GRE) Attachment later. This CIDR contains the outer IPs of the GRE Tunnels. (minimum /16)


AWS Cloud WAN Configuration for GRE & IPSEC attachments 

GRE connection for DEV Segment


GRE can only be done over Private IP@ (as we have to specify the Inside CIDR block).

1. Transit VPC Attachment creation

This step is needed so that the CNE can communicate with the Transit VPC to form the underlay of the GRE tunnels (10.0.0.0/23 [Transit VPC] must communicate with 10.10.0.0/16 [Inside CIDR block of the CNE]).


2. Connect Attachment creation


  • Connect Attachment = GRE
  • The Transport Attachment must be a VPC attachment (in our case, this is the Transit VPC attachment)

3. Connect Peer creation

This step is needed to set up BGP over GRE between the Transit GW & the CNE.


  • Peer GRE address is the Private IP address of the Aviatrix Transit Gateway (eth0)
  • Configure a CIDR for the BGP neighborship (it must be a /29 within 169.254.0.0/16, excluding the reserved ranges 169.254.0.0/29 through 169.254.5.0/29 and 169.254.169.248/29)
  • Peer ASN (Aviatrix Transit GW ASN)
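These CIDR constraints can be sanity-checked with a few lines of Python. This is my own validator, assuming the reserved /29 ranges documented for TGW/Cloud WAN Connect peers:

```python
import ipaddress

# My own hedged validator for the Connect Peer "BGP inside CIDR" constraints:
# a /29 within 169.254.0.0/16, excluding the documented reserved /29 ranges.
RESERVED = [ipaddress.ip_network(f"169.254.{x}.0/29") for x in range(6)]
RESERVED.append(ipaddress.ip_network("169.254.169.248/29"))

def valid_bgp_cidr(cidr: str) -> bool:
    net = ipaddress.ip_network(cidr)
    link_local = ipaddress.ip_network("169.254.0.0/16")
    if net.prefixlen != 29 or not net.subnet_of(link_local):
        return False
    return net not in RESERVED

print(valid_bgp_cidr("169.254.100.0/29"))  # True
print(valid_bgp_cidr("169.254.2.0/29"))    # False (reserved)
```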
4. Adding Static route

The CNE Inside Block must be reachable from the Transit VPC. Therefore the following route must be added towards the Core Network.


IPSEC connection for PRD Segment


The S2SVPN must be created on top of Public IPs that are part of the AWS Backbone.

1. S2SVPN Creation


  • Target GW must be set to 'Not associated' for Cloud WAN
  • CGW creation with Public IP@ of Aviatrix Transit GW in AWS Transit VPC
  • Pre Shared Key also manually configured (not shown in screenshot)

2. IPSEC Attachment Creation


  • Attachment type must be VPN
  • VPN id must be the S2SVPN previously created

AVIATRIX Configuration

GRE



  • Remote GW IP must be the CNE GRE @ (the Private IP created during Connect Peer creation). If HA is selected here, a second Connect Peer must be created on the AWS side. [for the purpose of my test, only one has been created]
Again, there is a different implementation regarding the GRE tunnels. The Aviatrix mapping is 1-to-1 whereas the AWS mapping is 2-to-1, meaning that the 2 GRE tunnels of the same Connect Peer are initiated from the same 169.254.X.Y IP@ (Aviatrix side) to 2 different 169.254.X.Y IP@ (AWS side). Therefore one of the 2 GRE Tunnels of a Connect Peer will always remain DOWN. [because of this, the second Local IP@ above is a fake one]
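A toy model of this mismatch (my own illustration; the 169.254 addresses are made up, not from the lab):

```python
# My own toy model of the GRE peering mismatch: AWS builds 2 tunnels from ONE
# Aviatrix-side 169.254 address to TWO Cloud WAN addresses (2-to-1), while
# Aviatrix pairs endpoints 1-to-1, so one AWS-side tunnel never matches.
aviatrix_tunnels = {("169.254.100.1", "169.254.100.2")}           # 1-to-1
aws_tunnels = {("169.254.100.1", "169.254.100.2"),
               ("169.254.100.1", "169.254.100.3")}                # 2-to-1

up = aviatrix_tunnels & aws_tunnels      # tunnels both sides agree on
down = aws_tunnels - aviatrix_tunnels    # AWS-side tunnel with no Aviatrix peer
print(len(up), len(down))                # 1 tunnel UP, 1 tunnel DOWN
```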


IPSEC



  • Remote GW IP must be the 2 Public IPs created with the S2SVPN. (only one depicted above)
  • Local Tunnel IP / Remote Tunnel IP must be the Inside IPv4 CIDR created with the S2SVPN for the first Local & first Remote pair. The second Local/Remote pair is a fake one because Aviatrix and AWS are not configured the same way [both AWS S2SVPN tunnels terminate on the Customer GW, which is a single Public IP, i.e. the Public IP of the primary Aviatrix Transit GW]. If I wanted the 4 IPSEC Tunnels from Aviatrix to work, I would have to configure a second AWS S2SVPN with Customer GW = Public IP of the HA Aviatrix Transit GW.

AWS shows VPN tunnels are UP. 


Aviatrix shows VPN Tunnels are UP.




We can see above that only the IPSEC tunnels from the Aviatrix Primary Transit GW are up, as explained previously.

AWS Cloud WAN Dashboard Visualization after all configuration above


  • The 2 IPSEC Tunnels are up to the Primary Aviatrix Transit GW (as explained, they both terminate on the same CGW, i.e. a single Aviatrix GW; no IPSEC is therefore configured to the Aviatrix HA Transit GW).
  • A single GRE tunnel is operational, as explained above. 
    • During Connect Peer configuration, only a single Remote Peer IP could be configured; it means the 2 GRE tunnels above also terminate on the Aviatrix Primary GW.
    • The GRE mapping between AWS and Aviatrix being different, only a single GRE tunnel of the Connect Peer can ever be UP.

Aviatrix Copilot View - Topology Network Map


As explained previously, there are no GRE or IPSEC tunnels from/to the Aviatrix HA Transit GW because additional configuration would be needed on the AWS side.


Aviatrix Segmentation configuration

The Aviatrix Segmentation configuration is a 4-step process.

1. Enable AWS & Azure Aviatrix GWs for Segmentation


2. Create DEV & PRD Segments (aka Network Domains)


3. Step3 would normally be a Connection Policy to allow communication between the DEV & PRD Segments, but this is not desired for the purpose of our test. We skip that step.

4. Associate Spoke or S2C to Network Domain


4 associations are needed:
  • In Azure: the respective VNETs to DEV & PRD
  • In AWS: S2C DEVGRE to DEV & S2C PRDIPSEC to PRD

Aviatrix Copilot Visualization of the Segmentation


We can see the 2 Network Domains (DEV & PRD), as well as what is part of each segment: 1 VNET per segment (in Azure) & 1 S2C per segment (DEVGRE in DEV & PRDIPSEC in PRD).

AWS Cloud WAN visualization - Logical


We can see the Transport VPC Attachment for GRE as part of the DEV segment.

Routes visualization on Cloud WAN


We can see in PRD Route Table that PRD segment in Azure has been propagated via BGP.

TESTING!!

Ping from PRD EC2: ping to PRD VM OK & ping to DEV VM NOK


Ping from DEV EC2: ping to PRD VM NOK & ping to DEV VM OK


Bottom Line

  • First of all, the integration between AWS Cloud WAN & Aviatrix works perfectly. Even the segmentation spanning AWS Cloud WAN & Aviatrix (extending to another Public Cloud: Azure) is perfectly operational.
  • Second, I spent almost 2 days trying to configure GRE on AWS Cloud WAN: that was tough! The good thing is that now you have the screenshots to do it easily.👍
  • GRE attachment
    • difficult setup (Inside CIDR block, VPC Attachment for Transport & Connect Attachment for GRE, Connect Peer for BGP, Static routes to the CNE)
    • Higher Bandwidth per attachment than IPSEC
    • Relies only on Private IPs for the underlay
  • IPSEC attachment
    • Easy setup but still needs CGW & S2SVPN constructs
    • Lower Bandwidth than GRE
    • Relies only on Public IPs for the underlay
    • Use it if you also need encryption for your data in-transit

Next episodes foreseen in August:

Episode3: Connecting OnPrem Remote site to Aviatrix Cloud infrastructure via BGPoIPSEC   (incl. BGP route approval)

Episode4: Embedded L4 Stateful FWs on Aviatrix GWs

Episode5: All you need to know about Aviatrix FQDN Filtering - Design Patterns

Episode6: Aviatrix Copilot Tour (including Cyber Threat Protection with ThreatIQ/ThreatGuard)

Episode7: How to spin up a fully resilient multicloud environment in minutes with Terraform
