OpenStack Neutron 101

What is ‘Neutron’?

Neutron is the networking component of OpenStack. It’s the component responsible for managing your cloud networking resources and for providing connectivity between network devices.

It manages resources such as networks, routers, subnets, and ports, and it allows users to develop and implement plugins that make use of various networking devices and technologies.

A moment of history

Neutron wasn’t part of OpenStack from the beginning. It was introduced in September 2012 under the name ‘Quantum’. Before that, the networking code was part of the Nova (Compute) source code, which made it very hard to update the networking code independently and extend it further.

Later on, in 2013, it was renamed ‘Neutron’ due to a trademark issue with the name ‘Quantum’.

Neutron API Server

Users communicate with Neutron by making RESTful API requests to the Neutron API server, which is ultimately a Python process (usually called ‘neutron-server’). This communication can be done in one of several ways:

  • OpenStack CLI – using the OpenStack command-line interface. For example, to create a network, a user runs the following command: ‘openstack network create smb-net’
  • Horizon – the OpenStack UI component, where a user can click different buttons to manage network resources.
  • Direct API calls – similar to the CLI but a more direct approach, where you use tools like ‘curl’ to communicate directly with the OpenStack APIs, and in particular the Neutron API server (see the sketch right after this list).
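
For illustration, here is a minimal, hedged sketch of a direct API call. The ‘controller’ hostname is an assumption about your deployment; 9696 is the default neutron-server port:

    # Get a token from Keystone, then list networks via the Neutron API
    TOKEN=$(openstack token issue -f value -c id)
    curl -s -H "X-Auth-Token: $TOKEN" http://controller:9696/v2.0/networks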

Such requests are basically CRUD operations (create, read, update, delete) on different network resources such as:

  • Networks
  • Subnets
  • Ports
  • Security Groups
  • Routers

Note that any request will first be authenticated against the Keystone API and only then processed by the neutron server.

For example, consider a user sending a request to create a router. The request is first authenticated against the Keystone server; once validated, it is processed by the neutron server, which persists it in the DB so it can track progress and update the user accordingly. The message is then forwarded to the L3 agent, which takes care of such requests.
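
A hedged sketch of that flow with ‘curl’ (again, ‘controller’ and port 9696 are deployment assumptions; the request body follows the Neutron API’s ‘router’ resource format):

    # Authenticate with Keystone, then ask the Neutron API to create a router
    TOKEN=$(openstack token issue -f value -c id)
    curl -s -X POST http://controller:9696/v2.0/routers \
         -H "X-Auth-Token: $TOKEN" \
         -H "Content-Type: application/json" \
         -d '{"router": {"name": "demo-router"}}'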

Neutron entities

Neutron provides and manages several networking entities:

  • Port – a connection point for attaching a single device (e.g. the NIC of a virtual server) to a network. The port also describes the associated network configuration, such as the MAC and IP addresses to be used with it. It can be either physical or virtual.
  • Network – a virtual, isolated layer-2 broadcast domain which is typically reserved to the project that created it, unless the network has been explicitly configured to be shared.
  • Subnet – a block of IP addresses assigned to a network. Subnets are used to allocate IP addresses when new ports are created on a network. Overlapping subnets are fine as long as they are used in different projects. Generally, IP addresses are managed by the IPAM service.
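
To make this concrete, here is a hedged sketch of creating these three entities with the OpenStack CLI (the names and the subnet range are made up for illustration):

    # Create a network, carve a subnet out of it, then create a port on it
    openstack network create demo-net
    openstack subnet create demo-subnet --network demo-net --subnet-range 192.168.1.0/24
    openstack port create --network demo-net demo-port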

There are several more entities Neutron supports. We’ll discuss them in more detail later on.

ML2 Plugin

ML2 deserves its own post but let’s go over it briefly as it has a very important role in OpenStack networking.

Probably the best way to understand what the ML2 plugin is, is to understand why it was introduced (yes, it wasn’t part of OpenStack from the beginning).

Before the ML2 plugin was introduced, you would use Neutron with one specific plugin. So if you wanted to use Neutron with Open vSwitch you could, but you couldn’t also use Linux bridges at the same time.

In addition, anyone who wanted to develop a new plugin had to duplicate a lot of code, and there was no place where all plugins could share common code.

So ML2 was introduced to solve the above issues and to let users utilize several networking technologies simultaneously. Now, for example, you can use both Open vSwitch and Linux bridges at the same time. The same applies to tunneling protocols like VXLAN and GRE.

ML2 allows using several networking technologies simultaneously via drivers. It distinguishes between two types of drivers: network type drivers and mechanism drivers.

Network type drivers allow you to create a specific type of network, for example VXLAN, GRE, or Geneve.

Mechanism drivers allow you to connect to or utilize existing systems and mechanisms, like Open vSwitch or Linux bridge. Of course, several companies provide their own mechanism drivers, but the two I just mentioned are ready to use out of the box.
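
To make this concrete, here is an illustrative snippet of the ML2 configuration (typically /etc/neutron/plugins/ml2/ml2_conf.ini); the exact values are assumptions for the example:

    [ml2]
    # Network type drivers available, and the default type for project networks
    type_drivers = flat,vlan,vxlan,gre
    tenant_network_types = vxlan
    # Two mechanism drivers loaded side by side
    mechanism_drivers = openvswitch,linuxbridge

    [ml2_type_vxlan]
    # VXLAN network IDs available for allocation (illustrative range)
    vni_ranges = 1:1000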

That’s it for now on the ML2 plugin. Let’s keep something for a separate post 😉

Project Network & Provider Network

You’ll hear these terms a lot when deep diving into using or managing an OpenStack cloud.

Let’s start with the project network. A project network is a network that you, or anyone else with access to your OpenStack project, can create. It’s a virtual network created on the virtual switch you are using (usually Open vSwitch) on the compute node.

In contrast, a provider network is a network that only the cloud administrator can create, not a regular project user. A provider network also usually involves configuration of the underlying physical infrastructure, meaning there will be a mapping between an underlying physical network and a segmentation ID. Once done, the administrator can share the network and let other users see it and use it.
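
As an illustration, here is a hedged sketch of creating such a network with the CLI; ‘physnet1’ and the VLAN segment are assumptions that must match the physical infrastructure mappings of your deployment:

    # Admin-only: create a shared provider network mapped to VLAN 100 on physnet1
    openstack network create provider-net \
        --provider-network-type vlan \
        --provider-physical-network physnet1 \
        --provider-segment 100 \
        --share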

Routers

Neutron allows you to create virtual routers and manage routing using the L3 agent, which is usually installed on the network node. Routing is possible between networks in your OpenStack environment, or between your instances and the external network.

Below, in the drawing, you can see there is a virtual router on the network node, making routing possible from Host_A and Host_B to Host_C. The router has internal and external network ports. But in order for it to work, it will also need to use NAT and floating IPs.
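
A hedged CLI sketch of setting up such a router (the names reuse the made-up examples from earlier):

    # Create a router, attach an internal subnet, set the external gateway
    openstack router create demo-router
    openstack router add subnet demo-router demo-subnet
    openstack router set --external-gateway provider-net demo-router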

NAT

Network Address Translation is used for modifying the source or destination IP address. Whenever your instance tries to reach Host_C on the public network, NAT will translate your instance’s IP address from a 192.168.1.x address to a routable address that can be used outside of the OpenStack environment, on the public network. The same goes for Host_C trying to reach the instances in your project.
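
Under the hood, the L3 agent implements this with iptables NAT rules inside the router’s network namespace. A hedged way to peek at them on the network node (the qrouter-<uuid> name comes from ‘ip netns list’):

    # List the SNAT/DNAT rules Neutron programmed for this router
    ip netns exec qrouter-<router-uuid> iptables -t nat -L -n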

Floating IP

A floating IP is most commonly used to allow public connectivity to your OpenStack instances from the external world. The internal address of your instances is not known to hosts on the external network; hence, the 192.168.1.2 address can’t be used by host X to reach host Y.

To allow such connectivity you would use a floating IP. The floating IP is allocated from the provider network range (i.e. from the external network). So, based on the drawing above, for host X to reach host Y, it would have to use the address ‘190.40.10.2’.
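
A hedged CLI sketch, reusing the drawing’s address and a made-up server name:

    # Allocate a floating IP from the provider network and attach it to a server
    openstack floating ip create provider-net
    openstack server add floating ip my-vm 190.40.10.2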

Network Namespaces

Neutron uses Linux ‘network namespaces’ to implement and support different entities such as the virtual router.

In this post, we will not deep dive into ‘network namespaces’. You can read a more detailed explanation of ‘network namespaces’ here.
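
Still, a quick hedged peek on the network node shows what Neutron creates: routers live in qrouter-* namespaces and DHCP servers in qdhcp-* namespaces:

    # List Neutron's namespaces, then inspect the interfaces of one router
    ip netns list
    ip netns exec qrouter-<router-uuid> ip addr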

Security Groups

You can think of security groups as a virtual firewall that filters any traffic going to or from your OpenStack instances.

For example, you might want to permit HTTP traffic to your instance; you can do so by applying a security group to a specific Neutron port, as in the sketch below.
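
A hedged CLI sketch with made-up names:

    # Create a group, allow inbound TCP/80, and apply it to a port
    openstack security group create web-sg
    openstack security group rule create --protocol tcp --dst-port 80 --ingress web-sg
    openstack port set --security-group web-sg demo-port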

As opposed to iptables, the order in which the rules are applied doesn’t matter.

Types of Network Traffic

  • Guest/VM data – actual instance traffic (green lines in the drawing below).
  • External – similar to VM data traffic, but with access to the public network (orange lines in the drawing below).
  • API – access to the OpenStack API services, such as Horizon, Neutron, Nova, and Glance. It should be accessible from the public network (purple lines in the drawing below).
  • Management – internal communication between services, such as the different agents (L3, metadata, DHCP…) and routers. The communication between services is done via RPC (blue lines in the drawing).

Neutron Services & Agents

Apart from the neutron server, there are several other components involved in this magic called ‘OpenStack Networking’.

Message Queue

Used for interaction (messages/notifications) between the different networking services or agents.

In the context of OpenStack, the most common messaging technology is probably RabbitMQ.

This is how the services communicate among themselves, notifying each other of events that sometimes require their involvement in the process.
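
If you’re curious, a hedged way to peek at the broker on the controller node (assuming RabbitMQ is the broker and Neutron’s conventional ‘q-’ queue-name prefix):

    # List the queues Neutron's services and agents communicate over
    rabbitmqctl list_queues | grep ^q-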

DHCP agent

The DHCP agent is in charge of IP address allocation. It receives network create/delete notifications from the neutron server, and it then uses dnsmasq as the DHCP server to allocate IP addresses.

When you boot a VM, it sends a DHCP request over the guest/VM data network. The request reaches the network node, specifically the dnsmasq instance of that network, which sends a reply back to the instance with the allocated IP address.
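
On the network node, each network gets its own qdhcp-<network-uuid> namespace with a dnsmasq process bound inside it. A hedged way to see this:

    # List the DHCP namespaces and the dnsmasq processes serving them
    ip netns list | grep qdhcp
    ps aux | grep dnsmasq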

L3 agent

Responsible for managing everything related to routing (yes, its name might have given it away 🙂 )

The L3 agent uses Linux network namespaces. A namespace provides an isolated copy of the network stack: you get your own private loopback, and since the scope is limited to the namespace, you can reuse addresses.

You can set up HA by using the L3 agent on another network node with a separate namespace and syncing state between them using VRRP.
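
For illustration, a hedged sketch of creating an HA router (the --ha flag is typically admin-only; under the hood, Neutron runs keepalived, a VRRP implementation, in each router namespace):

    # Create a router with an active/standby instance on each network node
    openstack router create --ha demo-ha-router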

L2 agent

The L2 agent runs on the hypervisor and communicates with the neutron server using RPC. Normally, it is installed on the network and compute nodes. Its main job is to watch for devices being added or removed and to configure the network on the host accordingly. It can handle Linux bridges, OVS, and security group rules.

A common example of its usage is a newly created VM with a single NIC that must be connected to some network. The actual connection is made by the L2 agent, which makes sure the NIC is attached to the right network, as the sketch below shows. It also handles OVS flows, Linux bridges, VLAN tagging, and security groups.
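
A hedged way to see what the OVS flavor of the L2 agent manages on a compute node; br-int is the conventional name for the integration bridge:

    # Show the bridges/ports the agent wires up, and the flows it programs
    ovs-vsctl show
    ovs-ofctl dump-flows br-int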

Metadata agent

A proxy to the Nova metadata service. It provides information requested by instances, for example IP address, hostname, and project. Normally it’s installed on the network node.
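
From inside an instance, the metadata service is reached at a well-known link-local address, which the metadata agent proxies through to Nova; a hedged example:

    # Query the instance's own metadata from within the instance
    curl http://169.254.169.254/openstack/latest/meta_data.json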

Neutron – Nova Interaction

Here is an example of Neutron-Nova interaction, using a simple VM creation workflow.

More about Neutron & Computer Networking

If this 101 post made you hungry for more information on Neutron or computer networking in general, you can find some resources we gathered on the subject here.
