Learning NSX-Part-6-Logical Switching and Transport Zones

In the last post of this series we briefly looked at what VXLAN is (in truth it's an ocean of knowledge in itself), and we also configured VXLAN on our cluster/hosts.

In this post we will talk about logical switching: we will cover the prerequisites and then see how to create a logical switch.

If you have missed the earlier posts of this series, you can read them from the links below:

1: Introduction to VMware NSX

2: Installing and Configuring NSX Manager

3: Deploying NSX Controllers

4: Preparing ESXi Hosts and Cluster

5: Configure VXLAN on the ESXi Hosts

Let’s start with introduction to Logical Switching.

What is Logical Switching?

The functionality of a logical switch is very similar to that of a physical switch, i.e. it allows isolation of applications and tenants for security purposes. When deployed, a logical switch creates a broadcast domain to isolate the VMs running in the infrastructure.

Logical switches provide the capability of creating isolated segments (similar to VLANs in the physical world), and these segments not only provide isolation but can also span large compute clusters.

The logical switch is mapped to a VXLAN, which encapsulates the traffic going over the physical network.

Logical switches can be created either as local or as universal (which can span vCenter Servers in a cross-vCenter NSX deployment). A universal logical switch gives you the capability of extending your VXLAN across two different sites.

Why logical switching?

To answer this question I am not going to include a long paragraph with complex networking terms. Instead I will ask a simple question: in the physical networking world, why VLANs?

A cloud service provider or any organization can have multiple tenants running in their data center. In a public cloud environment a tenant can simply be referred to as a customer, while in an organization a tenant can be one of the organization's departments.

These tenants run their business-critical applications inside VMs deployed on top of the virtualization stack, and they share the same infrastructure (compute, storage, and network).

Now the question arises: how do we stop each tenant from sneaking into another tenant's VMs or applications? Tenants require isolation from each other for various reasons, including security, fault isolation, and avoiding overlapping IP addressing issues.

So to solve the above problem we use logical switching.

Prerequisites for creating a Logical Switch

Before you go and start creating logical switches in your environment, make sure you meet the following requirements:

  • vSphere distributed switches must be configured. You cannot deploy logical switches on standard switches.
  • NSX controllers must be deployed.
  • Your compute host clusters must be prepared and ready to go.
  • VXLAN must be configured.
  • A Transport Zone and a segment ID pool must be configured.

Our last post covered all of the above requirements. So let's jump into the lab and start creating logical switches.

To start creating a logical switch, log in to the vSphere Web Client and navigate to Home | Networking & Security | Logical Switches.

1. Click on the green "+" button to start creating a new logical switch.

ls-1

2. Provide a name and a nice little description for your logical switch. Select the transport zone to which this logical switch will be mapped.

Also select the replication mode for the logical switch.

Note:

By default, the logical switch inherits the control plane replication mode set in the Transport Zone. You can change this by selecting one of the available modes.

IP discovery is enabled by default and allows Address Resolution Protocol (ARP) suppression between VMs connected to the same logical switch. This setting is optional, but there is rarely a reason to disable it.

Enable the MAC learning setting if your virtual machines have multiple MAC addresses or use virtual NICs that are trunking VLANs. This setting builds a VLAN/MAC pairing table on each vNIC.

ls-2
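For reference, the same logical switch can also be created through the NSX REST API. Below is a minimal sketch using curl; the manager hostname, credentials, and transport zone scope ID (vdnscope-1) are placeholder assumptions, and the exact paths should be checked against the API guide for your NSX version (the scope ID can be listed with GET /api/2.0/vdn/scopes). The controlPlaneMode element is optional; if omitted, the switch inherits the replication mode of the transport zone, as noted above.

# curl -k -u 'admin:VMware1!' -H 'Content-Type: application/xml' \
    -X POST 'https://nsxmgr.lab.local/api/2.0/vdn/scopes/vdnscope-1/virtualwires' \
    -d '<virtualWireCreateSpec><name>Web-Tier-LS</name><description>Created via API</description><controlPlaneMode>UNICAST_MODE</controlPlaneMode></virtualWireCreateSpec>'

The call returns the ID of the new virtual wire (for example virtualwire-5), which is the object backing the logical switch.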

3. When you create a logical switch, a new dPortGroup is created on the vDS. You can verify this by going into your network inventory and checking for the presence of the newly created portgroup.

ls-3

Migrating VMs to the Logical Switch

Once the logical switch is created, we can migrate workloads onto it. There are two methods of doing this.

A: Go to Networking & Security | Logical Switches and click on the VM icon as shown in the figure. This method allows you to migrate multiple VMs at once from their current network (portgroup) to the logical switch.

ls-4.PNG

B: The second method is rather long: it requires going VM by VM, editing each VM's settings, and changing the portgroup association of the vNIC.

Continuing with the first method, select the VMs from the list and click on the blue arrow button to add them to the selection window, then hit Next.

ls-5

ls-6

On the vNIC selection page, select the vNICs (in case your VM has more than one) whose portgroup association you want to change, and hit Next. In my case each VM was equipped with a single vNIC.

ls-7

On the Ready to Complete page, review your settings and hit Finish to complete the wizard.

ls-8

You can verify the change of portgroup association by selecting the VM and going to the Manage > VM Hardware tab.

ls-9

So now we have created our logical switch and migrated a few VMs onto it. In the next post of this series we will learn about the Distributed Logical Router.

I hope you enjoyed reading this post. Feel free to share this on social media if it is worth sharing. Be sociable :)

 


Learning NSX-Part-5-Configure VXLAN on the ESXi Hosts

In the last post of this series we saw how to prepare ESXi hosts and clusters for NSX. In this post we will talk a little bit about VXLAN, its benefits, and how to configure VXLAN on the ESXi hosts.

If you have missed the earlier posts of this series, you can read them here:

1: Introduction to VMware NSX

2: Installing and Configuring NSX Manager

3: Deploying NSX Controllers

4: Preparing ESXi Hosts and Cluster

Let's start our discussion with what VXLAN is.

Virtual Extensible LAN (VXLAN) is an encapsulation protocol for running an overlay network on existing Layer 3 infrastructure. An overlay network is a virtual network that is built on top of existing network Layer 2 and Layer 3 technologies to support elastic compute architectures.

In VXLAN the original layer 2 frame is encapsulated in a User Datagram Protocol (UDP) packet and delivered over a transport network. This technology provides the ability to extend layer 2 networks across layer 3 boundaries and consume capacity across clusters.
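A quick back-of-the-envelope view of that encapsulation shows where the extra bytes go, and why the physical network needs a larger MTU (we will come back to this in the configuration steps below):

Inner Ethernet frame (standard MTU) : 1500 bytes
+ VXLAN header                      :    8 bytes
+ Outer UDP header                  :    8 bytes
+ Outer IP header                   :   20 bytes
+ Outer Ethernet header             :   14 bytes
Total on the wire                   : 1550 bytes

Hence the commonly recommended MTU of 1600 on the transport network, which leaves headroom for extras such as 802.1Q tags.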

Why should we use VXLAN?

VXLAN enables you to create a logical network for your virtual machines across different networks. You can create a layer 2 network on top of your layer 3 networks.

If you are from a networking background, you are aware that traditional VLAN identifiers are 12 bits long, which limits networks to 4094 VLANs. If you are a cloud service provider, it is likely that you will hit this limit in the near future as your environment grows.

The primary goal of VXLAN is to extend the virtual LAN (VLAN) address space by adding a 24-bit segment ID and increasing the number of available IDs to 16 million. The VXLAN segment ID in each frame differentiates individual logical networks so millions of isolated Layer 2 VXLAN networks can co-exist on a common Layer 3 infrastructure. As with VLANs, only virtual machines (VMs) within the same logical network can communicate with each other.
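The math behind these numbers is straightforward:

VLAN ID  : 12 bits -> 2^12 = 4096 values (IDs 0 and 4095 are reserved, leaving 4094 usable VLANs)
VXLAN VNI: 24 bits -> 2^24 = 16,777,216 possible segment IDs (roughly 16 million)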

VXLAN Benefits

  1. You can theoretically create as many as 16 million VXLANs in an administrative domain.
  2. You can enable migration of virtual machines between servers that exist in separate Layer 2 domains by tunneling the traffic over Layer 3 networks. This functionality allows you to dynamically allocate resources within or between data centers without being constrained by Layer 2 boundaries or being forced to create large or geographically stretched Layer 2 domains.

Since you now have a bit of an idea about VXLAN, let's jump into the lab and see how to configure it on the ESXi hosts.

VXLAN can be configured by logging in to the vSphere Web Client and navigating to Networking & Security > Installation > Host Preparation.

You will see that the VXLAN status is "Not Configured". Click on it and a new wizard will open for configuring the VXLAN settings.

vxlan-1

Provide the following information to configure VXLAN networking:

  • Switch – Select the vDS from the drop-down for attaching the new VXLAN VMkernel interface.
  • VLAN – Enter the VLAN ID to use for the VXLAN VMkernel interface. If you are not using a VLAN in your environment, enter "0"; traffic will then be passed untagged.
  • MTU – The recommended minimum value of MTU is 1600, which allows for the overhead incurred by VXLAN encapsulation.
  • VMKNic IP Addressing – You can specify either IP Pool or DHCP for IP addressing.

vxlan-2

If you have not created any IP Pool yet, then you can do so by selecting “New IP Pool” and a new wizard will be launched to create a new pool.

Provide a name for the pool and define the gateway, prefix length (netmask), DNS, etc., along with the range of IPs that will be used in this pool.

vxlan-3

Once you hit OK, you will return to the original window. Select the pool which you have just created.

The next setting is the VMKNic teaming policy. This option defines the teaming policy used for bonding the physical NICs for use with the VTEP port group. The number of VTEPs changes as you select the appropriate policy.

A very good read on the various teaming policies available under NSX can be found here.

I have left the teaming policy at the default, "Fail Over". Hit OK once you are done supplying all the info.

vxlan-4

Now you will see that VXLAN preparation has started on your cluster.

vxlan-5

Within a minute or so you will see the VXLAN status change from Busy to Configured.

vxlan-6

VXLAN configuration will create a new VMkernel port on each host in the cluster as the VXLAN Tunnel Endpoint (VTEP). You can verify this by selecting your host and navigating to Manage > Networking > VMkernel Adapters

vxlan-7

You can verify that the IPs allocated to these VMkernel interfaces are from your defined pool by clicking on the Logical Network Preparation tab > VXLAN Transport.

vxlan-8
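As an optional sanity check, you can also verify VTEP-to-VTEP connectivity and the jumbo MTU directly from the ESXi shell. A minimal sketch, assuming 192.168.100.52 is the VTEP IP of another host from your pool:

# vmkping ++netstack=vxlan -d -s 1572 192.168.100.52

The ++netstack=vxlan option sources the ping from the VXLAN VMkernel interface, -d sets the don't fragment bit, and a 1572-byte payload plus 28 bytes of ICMP/IP headers produces a 1600-byte packet. If this fails while a plain vmkping succeeds, the physical network is most likely not passing the larger MTU.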

Next we need to configure the segment ID pool, which identifies the VXLAN networks:

On the Logical Network Preparation tab, click the Segment ID button, then click Edit to open the Segment ID pool dialog box.

vxlan-9

Enter the segment ID range and click OK to complete.

Note: VMware NSX VNI IDs start from 5000.

vxlan-10
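If you prefer automation, the same segment ID pool can be configured through the NSX REST API. A sketch, with placeholder hostname, credentials, and range (verify the path against the API guide for your version):

# curl -k -u 'admin:VMware1!' -H 'Content-Type: application/xml' \
    -X POST 'https://nsxmgr.lab.local/api/2.0/vdn/config/segments' \
    -d '<segmentRange><name>Lab-Segment-Pool</name><begin>5000</begin><end>5999</end></segmentRange>'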

The next task is to configure a global transport zone:

A transport zone specifies the hosts and clusters that are associated with logical switches created in the zone. Hosts in a transport zone are automatically added to the logical switches that you create.

On the Logical Network Preparation tab, click Transport Zones, then click the green plus sign to open the New Transport Zone dialog box.

vxlan-11

Provide a name for the transport zone and select the replication mode as per your environment's needs.

A little note on Replication Mode

Unicast has no physical network requirements apart from the MTU. All traffic is replicated by the VTEPs, so unicast has higher overhead on the source VTEP and UTEP. In NSX, the default mode of traffic replication is unicast.

Multicast mode offloads replication to the physical network. The VTEP never goes to the NSX Controller instance: as soon as the VTEP receives broadcast traffic, it sends a single multicast copy and the physical network replicates it to all the other VTEPs. Multicast therefore has the lowest overhead on the source VTEP, but it requires IGMP and multicast routing in the physical network.

Hybrid mode is not the default mode of operation in NSX for vSphere, but it is important for larger-scale deployments. It uses multicast (L2 IGMP) for replication within a segment and unicast across L3 boundaries, and the configuration overhead of L2 IGMP is significantly lower than that of multicast routing.

vxlan-12

After clicking OK, you will see the newly created transport zone.

vxlan-13
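For completeness, here is a sketch of the equivalent API call to create a transport zone. The cluster MOID (domain-c7) is a placeholder; you can look yours up in the vCenter managed object browser, and the XML schema should be verified against the API guide for your NSX version:

# curl -k -u 'admin:VMware1!' -H 'Content-Type: application/xml' \
    -X POST 'https://nsxmgr.lab.local/api/2.0/vdn/scopes' \
    -d '<vdnScope><name>Global-TZ</name><clusters><cluster><cluster><objectId>domain-c7</objectId></cluster></cluster></clusters><controlPlaneMode>UNICAST_MODE</controlPlaneMode></vdnScope>'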

We are done with configuring VXLAN on the ESXi hosts. In the next post of this series we will learn about logical switching.

I hope you enjoyed reading this post. Feel free to share this on social media if it is worth sharing. Be sociable :)


Learning NSX-Part-4-Preparing ESXi Hosts and Cluster

In the previous posts of this series, we talked about NSX Manager and NSX Controller deployment, and we validated the NSX control cluster status.

If you have missed the earlier posts of this series, you can read them here:

1: Introduction to VMware NSX

2: Installing and Configuring NSX Manager

3: Deploying NSX Controllers

In this post we are going to learn how to prepare clusters and ESXi hosts for NSX.

At this point we have the NSX Manager and controllers ready, with the connection established between the control and management planes. The next step is to prepare the clusters and ESXi hosts.

NSX installs three vSphere Installation Bundles (VIBs) that enable NSX functionality on the host. One VIB enables the layer 2 VXLAN functionality, the second enables the distributed router, and the third enables the distributed firewall. After the VIBs are added to a distributed switch, that distributed switch is referred to as an NSX virtual switch.

Log in to vCenter Server using the vSphere Web Client and navigate to Networking & Security > Installation > Host Preparation. Choose your cluster and click the Install link.

Once you click the Install option, NSX will start installing the VIBs on the ESXi hosts that are part of the cluster.

host-0

Give it a few seconds to install the VIBs. Once the installation is complete, you will see the Installation status as OK and the Firewall status as Enabled.

host-1

At this point VXLAN is not yet configured; we will cover that part in the next post of this series.

Verify Status of NSX VIBs

You can verify the VIB installation by logging on to the ESXi hosts using SSH.

You can use the following commands to check the status of the NSX VIBs:

# esxcli software vib list | grep vxlan

# esxcli software vib list | grep vsip

host-2

You can also get detailed information about these VIBs by using the command:

# esxcli software vib get | less

You will have to navigate through the output to find the vxlan and vsip VIB information.

host-3

host-4

Verifying NSX User World Agent (UWA) Status:

The user world agent (UWA) is composed of the netcpad and vsfwd daemons on the ESXi host. The UWA uses SSL to communicate with the NSX Controllers on the control plane, and it mediates between the NSX Controllers and the hypervisor kernel modules, except for the distributed firewall. Communication between the NSX Manager or NSX Controller instances and the ESXi host happens through the UWA, which retrieves information from NSX Manager through the message bus agent.

You can verify the status of the user world agents (UWA) using the command below:

# /etc/init.d/netcpad status

host-6
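The vsfwd half of the UWA can be checked in a similar way, and you can also confirm that netcpad actually has sessions open to the controllers on TCP port 1234. A sketch; service names and ports can vary between NSX versions, so treat these as examples:

# /etc/init.d/vShield-Stateful-Firewall status

# esxcli network ip connection list | grep 1234

The second command should show established connections from the host to each controller IP if the control plane channel is healthy.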

By running esxtop, we can verify that the netcpa daemon is running.

host-7.PNG

After cluster preparation completes, you can see that vxlan is loaded under the custom stacks in the TCP/IP configuration of the ESXi hosts.

host-5.PNG

I hope you enjoyed reading this post. Feel free to share this on social media if it is worth sharing. Be sociable :)


Learning NSX-Part-3-Deploying NSX Controllers

In the last two posts of this series we learned what NSX is and how to install and configure NSX Manager.

In this post we will be talking about NSX Controllers. Before diving into the lab, we will first discuss a little theory about NSX Controllers and their importance.

NSX Controllers

NSX Controllers are the control plane for NSX. They are deployed in a cluster arrangement: as you deploy them, you can add more controllers for better performance and high availability, so that if you lose one of them, you do not lose control-plane functionality. They are important; if you lose enough of them, things stop working.

NSX Controllers store the following tables:

1: MAC Table
2: ARP Table
3: VTEP Table

Each vSphere host has one or more VMkernel interfaces (VTEPs) for VXLAN functionality. The NSX Controllers keep these tables because your NSX deployment may be spread over a big data center, or over multiple data centers.

Your vSphere hosts and clusters may be spread over multiple layer 3 networks, while the overlay presents what looks like a single layer 2 adjacent segment to the VMs. When these VMs or hosts need to talk to each other and do things like ARP table lookups or MAC address lookups, or need to know the IP of the VTEP interface on another host, they would normally send out a broadcast.

But we don't want to broadcast that stuff all over the place, so the controllers keep these tables; if a host needs those records, the controller sends it a copy, which greatly reduces the broadcast traffic we want to get rid of.

NSX Controller considerations:

1: Deployed in odd numbers

Controllers form a cluster and use a voting quorum, so they should be deployed in odd numbers to be resilient. The minimum you can deploy is one, but a single controller is not resilient and is not supported by VMware (you can still choose to deploy one controller in lab environments to test things).

A three-node NSX Controller cluster allows you to tolerate the failure of one node, but if two go down, things stop working. These clusters want a voting majority. The idea is that in case of a split brain, where two controllers end up in one partition and the third in another, the side with two controllers knows it has the majority (since the cluster started with three nodes) and can institute changes.

If you have only two nodes and they split into different partitions, neither can push any changes, as neither has a majority. The maximum currently supported is five.
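The quorum math is easy to summarize: a majority is floor(n/2) + 1, which is also why even numbers of controllers buy you nothing extra:

Controllers deployed : 1    2    3    4    5
Majority required    : 1    2    2    3    3
Failures tolerated   : 0    0    1    1    2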

2: Not in the data path

Controllers sit outside the data path, so existing traffic keeps forwarding even if controllers fail. But that doesn't mean you allow all of them to fail: if you have a three-node cluster and one node fails, either fix it or deploy a new node so that a voting majority is always available.

3: Work is striped across the controllers using the concept of slices.

Controllers scale for both performance and availability. A slicing method is used to spread the workload: every job is divided into slices, which are then spread across the available nodes. When a new controller is added or an existing one fails, these slices are redistributed.

This can be understood easily via the two pictures below.

In this picture there are three controllers, and each one has been assigned a slice of the workload.

slicing-1

Now controller 3 has failed, and its workload is shifted to the two remaining controllers.

slicing 2.PNG

NSX Controller Cluster Functions

NSX Controllers mainly perform these two functions:

1: VXLAN functions

2: Distributed Router

An election is held to choose the master for each role. When a master controller fails, a new election is held to promote another controller as master.

Controllers are deployed by NSX Manager; you don't need any OVA/OVF files to deploy them. Each deployed controller has 4 GB RAM and 4 vCPUs by default.

We have talked enough theory now. It's time to jump into the lab and see some action.

To deploy a new controller, navigate to the Installation section under Networking & Security and click on the green '+' button.

con-1

Type the name of the controller and select the cluster/resource pool and datastore where you want to deploy the controller. Also, in the Connected To option, select the same layer 2 portgroup in which your NSX Manager resides.

For the IP pool, click Select.

con-2

If you have not created any IP pool for the NSX Controllers yet, do so by clicking on New IP Pool.

con-3

 

Provide a name for the IP pool, along with the gateway, DNS, and prefix length (netmask). Also define a pool of IPs under Static IP Pool; this can be a continuous IP range, or comma-separated entries if the IPs are not continuous. Hit OK once you are done filling in the relevant entries.

con-4

Select the newly created pool and hit OK to continue.

con-5

Lastly, type in the password for accessing the controllers over SSH and hit OK to finish the wizard.

con-6

You will see that NSX Manager is now deploying a new controller for you. Also, in the Recent Tasks pane, you will see a task triggered for deploying an OVF template.

con-7

Once the controller deployment is successful, you will see its status as Connected. Now you can deploy additional controllers.

con-8

Again, click on the green '+' button to spin up a new controller. Provide a name and the necessary info, and hit OK.

Note: The password option only appears for the first NSX Controller node. The second and third nodes use the same password, so there is no password field for them.

con-9

I have deployed three controllers in my lab, and all show as Connected.

con-10
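You can also list the deployed controllers through the NSX Manager REST API; a quick sketch with a placeholder hostname and credentials:

# curl -k -u 'admin:VMware1!' 'https://nsxmgr.lab.local/api/2.0/vdn/controller'

The XML response includes each controller's ID, IP address, version, and status.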

If you switch to the Hosts and Clusters view in vCenter, you will see three VMs deployed, corresponding to the three controllers.

con-11

Now let's have a look at the controller cluster status.

NSX Control Cluster Status

Once you SSH to the controllers, you can start asking them some questions.

Say, for example, you can ask for the cluster status (how are you doing today, buddy?).

# show control-cluster status

con-12

You will see output with the controller stating: yes, I am fine; my join status is complete; I am connected to the cluster majority and can be safely restarted; here are my cluster ID and UUID; I am happy, activated, and ready to go.

NSX Control Cluster Connection Status

To verify the controller nodes' intra-cluster communication status, we can use the command below:

# show control-cluster connections

con-13

The master controller listens on port 2878 (you can see a "Y" in the "listening" column). The other controller nodes will have a dash (-) in the "listening" column for port 2878.

NSX Control Cluster Role Status

Next is to query the controller roles. You can ask a controller: "Are you the one who is in charge here? Are you the master of anything?"

The command below can be used to query the role status:

# show control-cluster roles

Below is the output from my three NSX Controller nodes. Each controller node can be the master for a different role.

con-14.PNG

con-15

con-16

At this point we have NSX Manager installed and the NSX Controllers deployed, so we have our management plane and control plane established. We are now ready for host preparation, which we will cover in the next post.

I hope you enjoyed reading this post. Feel free to share this on social media if it is worth sharing. Be sociable :)


Learning NSX-Part-2-Installing and Configuring NSX Manager

In the last post of this series we had a look at what NSX is and how it fits into a software-defined datacenter. We also had a brief look at the core NSX components.

In this post we will be talking about the basic installation and configuration of NSX Manager.

NSX Manager provides a centralized management plane across your datacenter. It provides the management UI and API for NSX. NSX Manager runs as a virtual appliance on an ESXi host, and during installation it injects a plugin into the vSphere Web Client, through which it can be managed. Each NSX Manager manages a single vCenter Server environment.

There are a few prerequisites that must be met before proceeding with the installation of NSX Manager. These are as follows:

1: vSphere infrastructure should be ready, with at least two clusters.

2: NSX can be managed only via the vSphere Web Client. Make sure you have the Web Client installed and ready to use.

3: For NSX 6.x, VMware vCenter Server 5.5 or later is recommended.

4: Make sure DNS and NTP servers are ready in your infrastructure. Ensure that all the components (ESXi, vCenter, and NSX Manager) are time-synced with the configured NTP servers, and that all name resolution is working fine.

5: Ensure you have all the required system resources (CPU and memory) available in your cluster to deploy the various NSX components like NSX Manager, Controllers, etc.

6: Ensure you have configured your distributed switch to use jumbo frames, i.e. an MTU of 1600 or more.

Apart from that, NSX requires the ports below for installation and daily operations (a quick reachability check is shown after the list):

  • 443 between the ESXi hosts, vCenter Server, and NSX Manager.
  • 443 between the REST client and NSX Manager.
  • TCP 902 and 903 between the vSphere Web Client and ESXi hosts.
  • TCP 80 and 443 to access the NSX Manager management user interface and initialize the vSphere and NSX Manager connection.
  • TCP 1234 between the ESXi hosts and the NSX Controller cluster.
  • TCP 22 for CLI troubleshooting.
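A quick way to confirm reachability on these ports is a TCP probe with nc, which is available on ESXi as well as on most management machines. A sketch with placeholder addresses:

# nc -zv nsxmgr.lab.local 443

# nc -zv 192.168.100.201 1234

The first probe checks the NSX Manager UI/API port from a management machine; the second, run from an ESXi host, checks the ESXi-to-controller port listed above.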

The NSX Manager virtual machine is packaged as an Open Virtualization Appliance (OVA) file, which allows you to use the vSphere Web Client to import the NSX Manager into the datastore and virtual machine inventory. VMware NSX can be downloaded from here.

nsx-dl

Once the NSX Manager OVA file is downloaded, we can start the installation, which is the very straightforward process of deploying any other OVA file.

nsx-01

Verify OVF template details and hit Next.

nsx-02.PNG

Accept EULA and hit Next.

nsx-03

Provide a name for your NSX manager and hit Next.

nsx-04

Select the cluster where you need to deploy the NSX manager and hit Next.

nsx-05

Select appropriate datastore for your installation and proceed.

nsx-06

If you are testing NSX Manager in a lab, you can go for the thin provisioned scheme. In production it is recommended to go for the thick provisioned scheme.

nsx-07

Map the appropriate network for NSX manager management interface.

nsx-08

On the properties page, provide the password and the IP/Netmask information etc.

nsx-09

Once you have finished configuring all the installation options, power on the VM after deployment. If you launch the console of the NSX Manager at this moment, you will see a boot process that looks very similar to that of any other VMware appliance.

nsx-10

Once the NSX Manager boot process is complete, it can be accessed via https://<NSX FQDN>/login.jsp

nsx-11
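At this point the REST API should be answering as well. As a quick sanity check, you can query the appliance management API; a sketch, with the hostname and credentials as placeholders (endpoint paths may differ between NSX versions):

# curl -k -u 'admin:VMware1!' 'https://nsxmgr.lab.local/api/1.0/appliance-management/global/info'

This returns basic version and uptime information for the appliance.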

At this point we have successfully installed NSX manager. Let’s configure it now.

Once you have logged in to the NSX Manager admin console, click on Manage Appliance Settings.

nsx-12

 

Verify the NTP settings and change the time zone as per your country.

nsx-13

Verify the network settings and provide any missing info by clicking on the Edit button.

nsx-14

Now click on NSX Management Service to associate this NSX Manager with a vCenter Server.

nsx-15

First, configure the Lookup Service URL. This is the URL of the machine where your PSC (Platform Services Controller) is running.

Note that with vSphere 6, the lookup service runs on port 443 and not on port 7444.

Provide the lookup service host, port, and SSO admin credentials to configure the lookup service. Hit OK once you have filled in all the relevant fields.

nsx-16

Accept the SSL certificate that is presented.

nsx-17

Next is to enter the vCenter details.

NOTE: If the SSO admin is used to connect NSX to vCenter, only the SSO admin will have access to the NSX section in the vSphere Web Client. If you do use the SSO admin to connect, but use a different user to log in to the Web Client, you will need to log in to the Web Client as the SSO admin first and give the desired user the appropriate permissions to access NSX.

nsx-18

Hit OK after entering the vCenter Server details and accept the SSL certificate.

nsx-19

Once both the lookup service and vCenter information are provided in NSX Manager, you should see the status as "Connected" with a green light for both the Lookup Service and vCenter Server.

nsx-20

Now click on the Summary page to verify that all services are running on the NSX Manager.

nsx-21

We are done with the basic NSX Manager configuration here. Next we will log in to the vSphere Web Client to explore more of NSX.

I hope you enjoyed reading this post. Feel free to share this on social media if it is worth sharing. Be sociable :)


Learning NSX-Part-1-Introduction

VMware NSX is the network virtualization and security platform that emerged from VMware after they acquired Nicira in 2012. This acquisition launched VMware into the software-defined networking (SDN) and network functions virtualization (NFV) world.

VMware NSX is a software networking and security virtualization platform that delivers the operational model of a virtual machine for the network. Virtual networks reproduce the Layer 2 to Layer 7 network model in software, allowing complex multi-tier network topologies to be created and provisioned programmatically in seconds, without the need to reconfigure the underlying physical network. NSX also provides a new model for network security: security profiles are distributed to and enforced by virtual ports, and they move with the virtual machines.

With VMware NSX, virtualization now delivers for networking what it has already delivered for compute and storage. NSX can be configured through the vSphere Web Client, a command line interface (CLI), and REST API.

NSX includes a library of logical networking services – logical switches, logical routers, logical firewalls, logical load balancers, logical VPN, and distributed security. You can create custom combinations of these services in isolated software-based virtual networks that support existing applications without modification, or deliver unique requirements for new application workloads.

Virtual networks are programmatically provisioned and managed independently of the underlying networking hardware. This decoupling from hardware introduces agility, speed, and operational efficiency that can transform datacenter operations. Benefits of NSX include:

  • Datacenter automation
  • Self-service networking services
  • Rapid application deployment with automated network and service provisioning
  • Isolation of dev, test, and production environments on the same physical infrastructure
  • Multi-tenant clouds on shared infrastructure

 

NSX Architecture

An NSX-V deployment consists of a data plane, control plane and management plane:

nsx-1

NSX Core Components

The two major components that make up the NSX ecosystem are:

NSX Manager

NSX Manager provides a centralized management plane across your datacenter. It provides the management UI and API for NSX. NSX Manager runs as a virtual appliance on an ESXi host, and during installation it injects a plugin into the vSphere Web Client, through which it can be managed. Each NSX Manager manages a single vCenter Server environment.

Along with providing the management API and a UI for administrators, NSX Manager also installs a number of VIBs on the ESXi hosts when initiating host preparation. These VIBs provide VXLAN, distributed routing, the distributed firewall, and a user world agent.

The diagram below shows the NSX Manager components, plugin, and integration inside the vSphere Web Client.

nsx-2

NSX Controller

The NSX Controller is a user-space VM deployed by NSX Manager. It is one of the core components of NSX and could be termed the "distributed hive mind" of NSX. It provides a control plane to distribute network information to hosts. Controllers are deployed in a cluster arrangement: as you deploy them, you can add more controllers for better performance and high availability, so that if you lose one of them, you do not lose control functionality.

NSX is a very vast topic, and we will cover the parts in upcoming posts. There is a lot to discuss about the core components and NSX services, and we will touch upon them one by one. Till then, stay tuned.

I hope you enjoyed reading this post. Feel free to share this on social media if it is worth sharing. Be sociable :)


Learning VSAN-Part-3-Storage Policies and VSAN

In the last two posts of this series, we discussed the VSAN architecture and walked through the steps needed to configure VSAN. If you have missed the earlier posts of this series, you can read them here:

1: Overview and Architecture of VSAN

2: Installation and Configuration

In this post we will discuss storage policies and their role in a vSAN environment.

Storage policy based management is an important part of software-defined storage and the software-defined datacenter. VMware vSAN is one of the most robust and complete implementations of storage policy based management.

When you use Virtual SAN, you can define virtual machine storage requirements, such as performance and availability, in the form of a policy. The policy requirements are then pushed down to the Virtual SAN layer when a virtual machine is being created. The virtual disk is distributed across the Virtual SAN datastore to meet the requirements.

When you enable Virtual SAN on a cluster, a single Virtual SAN datastore is created which represents all the storage in vSAN cluster. In addition, enabling Virtual SAN configures and registers Virtual SAN storage providers.

You can locate vSphere storage providers by navigating to vCenter server > Manage > Storage Providers

This is where we would see a VASA-enabled/capable disk array. With vSAN, this registration is supposed to have been done automatically for each of the ESXi hosts that are part of the vSAN cluster.

VM storage profiles allow the capabilities of the underlying storage to be presented to administrators for easier assignment to virtual machines.

Note: if you are running an older version of vSAN, there was a known bug where the vCenter Server and the vSAN cluster could get out of sync, causing the VASA providers not to be created automatically; because of that, you could not create storage policies. To remediate this, you have to manually create the entries for the storage providers.

The process to create storage provider entries is fairly simple. Navigate to vCenter Server > Manage > Storage Providers.

Click on green + button and add the entries as below:

Name: Name for the storage provider

URL : http://esxi-fqdn:8080/version.xml

Username: root

Password: Password of root user on esxi host

vsan-1

In my case the storage providers were already listed, as I am using the latest version of vSAN in my lab.

vsan-2.PNG

Note: Only one of the providers will be active; the rest will be on standby.

vsan-4.PNG

If you select your storage provider and look at the Storage System Details section, it will show that the provider supports the policy based management profile.

vsan-5.PNG

Now, once the storage providers are created, you can go ahead with creating storage policies.

Before proceeding with the creation of new storage policies, let's first understand the capabilities offered by vSAN:

vSAN Storage Capabilities

vSAN storage capabilities can be divided into 5 major categories:

Number of failures to tolerate – This option allows admins to configure the number of failures to tolerate. A failure can be a network, disk, or host failure within the vSAN cluster. This value is important when designing for the resiliency of your cluster.

Number of disk stripes per object – When designing for the performance of a specific VM or group of VMs, you can determine whether you need additional throughput by striping the data across additional disk spindles. By default the value is a single spindle. If a read or write cannot be handled from cache it will go to the spindle, and using more than one spindle can increase performance as needed.

Flash read cache reservation – There is the option to explicitly reserve an amount of flash capacity on the SSD for read cache on a per object basis. This is configured as a percentage of the virtual machine disk.

Object space reservation – You can also reserve a percentage of the VM disk space on the hard drives during provisioning. This would be similar to thick provisioning on a standard datastore.

Force provisioning – If a policy is created with any of the previous options and vSAN cannot provide the requested service, this option will forcefully provision the VM anyway. If the resources become available at a later time, vSAN will attempt to bring the VM back into compliance.

Let’s jump into creating storage policy now.

1: To create storage policies, navigate to the vCenter Server home screen and click on VM Storage Policies.

vsan-6.PNG

On the VM Storage Policies page, click on the Create a New VM Storage Policy icon to create a new policy.

On the Create New VM Storage Policy wizard page, provide a name and description for the policy and hit Next.

vsan-8

Next is to create a rule set.

In this example we are going to create a policy for high availability of a VM.

Click on <Add Rule> and select Number of failures to tolerate.

Number of Failures to Tolerate indicates resiliency against host, network, or disk failures in the cluster. Increasing this number causes vSAN to create copies of the object on additional hosts, up to four copies in total, which would allow for three concurrent failures without data loss.
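The arithmetic behind this setting is worth keeping in mind. For a Number of Failures to Tolerate of n, vSAN (with mirroring) keeps n+1 copies of each object plus witness components, and the cluster needs at least 2n+1 hosts contributing storage:

FTT (n)              : 1    2    3
Data copies (n+1)    : 2    3    4
Minimum hosts (2n+1) : 3    5    7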

vsan-10.PNG

 

Click on Next to continue.

Select the compatible datastore from the list and hit Next.

vsan-11.PNG

On Ready to complete page, review your settings and hit Finish to close the wizard.

vsan-12.PNG

Select the newly created policy and click on the Summary tab; you will see that it shows 0 non-compliant VMs, 0 compliant VMs, and 0 unknown VMs. This is because we have not applied this policy to any VM yet.

vsan-13.PNG

Now that our policy has been created, let's apply it to one of the VMs.

To apply the storage policy to a VM, right-click the VM and select VM Policies > Edit VM Storage Policies.

Change the VM storage policy from Datastore Default to the one you created (Tier-1 in our example) and hit OK.

vsan-15

If you go to VM Storage Policies again and click on the Summary page, it will tell you whether the VM to which you applied the storage profile is compliant or not. If you are seeing a non-compliant VM, it means vSAN cannot satisfy the capabilities you defined in the storage profile.

vsan-111.PNG

I hope you enjoyed reading this post. Feel free to share this on social media if it is worth sharing. Be sociable :)

 
