
OpenContrail SDN Workshop – Paris


Juniper Networks Inc. and Cloudwatt (Orange Cloud for Business) jointly organized an SDN Workshop as part of CloudWeek Paris. 75 people from Network Engineering and DevOps organizations attended the event.

Bruno Rossi, Sr. Manager, Sales at Juniper Networks, kicked off the proceedings with an introductory perspective on recent advances in networking and introduced the subsequent speakers.

Foucault de Bonneval, Product Owner, SDN at Cloudwatt (Orange Cloud for Business), explained how Cloudwatt is using OpenContrail in production and offering a sovereign French Infrastructure as a Service public cloud.

Aniket Daptari, Sr. Product Manager from Juniper Networks, was the Keynote Speaker for the event and he spoke about the OpenContrail project, its history and background, architecture of the solution, customer use-cases, and some of the important features of Contrail Networking and Contrail Cloud Platform products. He also performed a live demonstration of some of the features to supplement his explanation.

You can watch the recordings of the event in the two-part series below:

Part 1: OpenContrail Overview by Aniket Daptari

Part 2: OpenContrail Overview by Aniket Daptari

 

 

 


OpenStack Neutron LbaaS integration with physical F5 in OpenContrail SDN


This is a guest blog from tcpCloud, authored by Marek Celoud & Jakub Pavlik (tcp cloud engineers). To see the original post, click here.

In this blog we would like to show how to integrate physical F5 under OpenContrail SDN and create Load Balancer pools through standard Neutron LbaaS API.

Load balancers are a very important part of any cloud, and OpenStack Neutron has offered LBaaS features since the Grizzly release. However, the upstream implementation with OpenvSwitch/HAProxy does not provide high availability by design. The OpenContrail SDN has provided an HA LBaaS feature with HAProxy since the Icehouse release, and Symantec, for example, has shown great performance results (http://www.slideshare.net/RudrajitTapadar/meetup-vancouverpptx-1).

However, lots of companies still need to use physical load balancers, especially from F5 Networks, for performance (hardware SSL offloading) and other feature benefits. Therefore integration with physical load balancers is mandatory. A second mandatory requirement is tight integration with Neutron LBaaS, to enable developers to manage different LBaaS providers through a standard API and to orchestrate the infrastructure with OpenStack Heat.

Different SDN solutions exist that support integration with a physical F5, but none can provide it through the Neutron LBaaS API. They usually offer the possibility to manage the F5 in their own administrator dashboard, which does not provide the real benefits of automation. OpenContrail is the only SDN/NFV solution that has released a driver for physical and virtual F5 balancers which complies with the previous two requirements.

In this blog we show:

  • How to configure OpenContrail to use the F5 driver.
  • How to provision a physical F5 through the Neutron LBaaS API.
  • How to automatically orchestrate it via OpenStack Heat.

LAB OVERVIEW

OpenContrail 2.20 contains a beta release of the driver for managing a physical or virtual F5 through the OpenStack Neutron LBaaS API.

OpenStack Neutron LBaaS v1 contains the following objects and their dependencies: member, pool, VIP, and health monitor.

(Figure: Neutron LBaaS v1 objects and their dependencies)

The F5 can currently operate only in "global routed mode", where all the VIPs are assumed to be routable from clients and all members are routable from the F5 device. Therefore the entire L2 and L3 configuration on the F5 must be pre-provisioned.

In global routed mode, because all access to and from the F5 device is assumed to be globally routed, no segregation between tenant services on the F5 device is possible. In other words, overlapping addresses across tenants/networks are not a valid configuration.

The following assumptions are made for the global routed mode of F5 LBaaS support:

  • All tenant networks are in the same namespace as the fabric/corporate network
  • The IP fabric is also in the same namespace as the corporate network
  • All VIPs are also in the same namespace as the tenant/corporate networks
  • The F5 can be attached either to the corporate network or to the IP fabric

The following network diagram captures the lab topology in which we tested the F5 integration.

(Figure: lab network topology for the F5 integration)

  • VLAN F5-FROM-INET 185.22.120.0/24 – VLAN with public IP addresses used for VIPs on the F5 load balancer.
  • VLAN F5-TO-CLOUD 192.168.8.8/29 – VLAN between the F5 and the Juniper MX LB VRF (subinterface). It is the transport network used for communication between the members and the F5.
  • Underlay network 10.0.170.0/24 – internal underlay network for OpenContrail/OpenStack services (iBGP peering, MPLSoverGRE termination on the Juniper MX). Each compute node (vRouter) and the Juniper MX have IP addresses from this subnet.
  • VIP network 185.22.120.0/24 – used for the VIP pool. The same network as F5-FROM-INET, but created as a virtual network in Neutron. A Neutron LBaaS VIP cannot be created from a network that does not exist in OpenStack.
  • Overlay member VN (Virtual Network) 172.16.50.0/24 – a standard OpenStack Neutron network with a route target into the LB routing instance (VRF) on the Juniper MX. This network is propagated into the LB VRF.

Initial configuration on F5

  • Pre-configured VLANs on the specific ports with appropriate Self IPs. The F5 must be able to reach the members in the OpenStack cloud and the INET side for the VIP pool.
  • Management access reachable from the OpenContrail controllers.

Initial configuration on Juniper MX (DC Gateway)

  • In this case the configuration of the MX is manual, so the VRFs for LB and INET must be pre-provisioned.
  • Static routes must be configured correctly.

INITIAL OPENCONTRAIL CONFIGURATION

OpenContrail 2.20 contains two new components that are responsible for managing the F5:

  • contrail-f5 – package with the BIG-IP interface for the F5 load balancer.
  • f5_driver.py – the driver itself, delivered in the package contrail-config-openstack.

We need to create a service appliance set definition for the general F5 balancer and a service appliance for one specific F5 device. This configuration enables F5 to be used as an LBaaS provider through the Neutron API.

Service Appliance Set as LBaaS Provider

In Neutron, the load balancer provider is statically configured in neutron.conf using the following parameter:

[service_providers]
service_provider = LOADBALANCER:Opencontrail:neutron_plugin_contrail.plugins.opencontrail.loadbalancer.driver.OpencontrailLoadbalancerDriver:default

In OpenContrail, the Neutron LBaaS provider is configured using the configuration object "service appliance set". This config object includes the Python module to load as the LBaaS driver. All the configuration knobs of the LBaaS driver are populated into this object and passed to the driver.

OpenContrail F5 driver options in the current beta version (a sketch combining these knobs into the --properties argument follows the list):

  • device_ip – IP address used for management configuration of the F5.
  • sync_mode – replication.
  • global_routed_mode – the only mode that is currently supported.
  • ha_mode – standalone is the default setting.
  • use_snat – use the F5 for SNAT.
  • vip_vlan – VLAN name on the F5 where the VIP subnet is routed. In our case F5-FROM-INET.
  • num_snat – 1.
  • user – admin user for the connection to the F5.
  • password – password for the admin user on the F5.
  • MX parameters – (mx_name, mx_ip, mx_f5_interface, f5_mx_interface) are used for dynamic provisioning of routing instances (VRFs) between the Juniper MX and the F5. We have not tested this feature with the F5 driver yet.
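As a quick reference, here is a minimal sketch of how such a set of knobs can be assembled into the JSON string that the --properties argument expects. The values are illustrative only and mirror the lab command shown further below:


import json

# Illustrative values only; they mirror the lab command used later in this post.
f5_driver_properties = {
    "global_routed_mode": "True",   # the only mode supported in this beta
    "sync_mode": "replication",
    "ha_mode": "standalone",        # default
    "use_snat": "True",
    "num_snat": "1",
    "vip_vlan": "F5-FROM-INET",     # VLAN on the F5 where the VIP subnet is routed
}

# Paste the output into: service_appliance_set.py ... --properties '<output>'
print(json.dumps(f5_driver_properties))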

First, the contrail-f5 and python-suds packages must be installed. After that, create a service_appliance_set for the Neutron LBaaS provider F5.


apt-get install python-suds contrail-f5
/opt/contrail/utils/service_appliance_set.py --api_server_ip 10.0.170.30 --api_server_port 8082 --oper add --admin_user admin --admin_password password --admin_tenant_name admin --name f5 --driver "svc_monitor.services.loadbalancer.drivers.f5.f5_driver.OpencontrailF5LoadbalancerDriver" --properties '{"use_snat": "True", "num_snat": "1", "global_routed_mode":"True", "sync_mode": "replication", "vip_vlan": "F5-FROM-INET"}'

A service appliance set consists of service appliances (either a physical device (F5) or a virtual machine) that load balance the traffic.


/opt/contrail/utils/service_appliance.py --api_server_ip 10.0.170.30 --api_server_port 8082 --oper add --admin_user admin --admin_password password --admin_tenant_name admin --name bigip --service_appliance_set f5 --device_ip 10.0.170.254 --user_credential '{"user": "admin", "password": "admin"}'

Note: the tcp cloud OpenContrail packages and the OpenContrail Launchpad builds ship the service_appliance*.py scripts in /usr/lib/.

Finally, a vipnet virtual network must be created with the subnet that is propagated on the F5 interface. This subnet is used for VIP allocation.
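For illustration, the sketch below creates such a network and subnet with python-neutronclient; the same can of course be done with the neutron CLI or the Contrail UI. The credentials and Keystone URL are placeholders, and the names/CIDR follow the lab topology above:


from neutronclient.v2_0 import client

# Placeholders: adjust credentials and the Keystone endpoint for your environment.
neutron = client.Client(username="admin", password="password",
                        tenant_name="admin",
                        auth_url="http://10.0.170.30:5000/v2.0")

# VIP network mirroring the F5-FROM-INET VLAN (185.22.120.0/24 from the lab above).
net = neutron.create_network({"network": {"name": "vipnet"}})["network"]
neutron.create_subnet({"subnet": {"network_id": net["id"],
                                  "name": "vipsubnet",
                                  "ip_version": 4,
                                  "cidr": "185.22.120.0/24"}})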

CREATING LOAD BALANCER VIA NEUTRON LBAAS

We booted two instances with an Apache web server on port 80 into 172.16.50.0/24. This network is terminated in the LB VRF. Use the following steps to create a load balancer in Contrail.

Create a pool for HTTP.


neutron lb-pool-create --lb-method ROUND_ROBIN --name mypool --protocol HTTP --subnet-id 99ef11f3-a04f-45fe-b3bb-c835b9bbd86f --provider f5

Add members into the pool.


neutron lb-member-create --address 172.16.50.3 --protocol-port 80 mypool 
neutron lb-member-create --address 172.16.50.4 --protocol-port 80 mypool

Create and associate a VIP with the pool. After this command the F5 configuration is applied.


neutron lb-vip-create --name myvip --protocol-port 80 --protocol HTTP --subnet-id vipsubnet mypool

Finally, create a sample health monitor.


neutron lb-healthmonitor-create --delay 20 --timeout 10 --max-retries 3 --type HTTP

Associate a health monitor to a pool.


neutron lb-healthmonitor-associate  mypool

When you log in to the F5 management dashboard, you have to switch to a new partition, which is dynamically created with each LBaaS instance.

(Screenshot: F5 partition created for the LBaaS instance)

Local Traffic -> Network Map shows a map of all the objects created and configured by the F5 driver.

(Screenshot: Local Traffic -> Network Map)

A green dot shows that everything is available and active. If you select Virtual Servers, you can see the details of the created VIP.

(Screenshot: Virtual Servers / VIP detail)

The last screenshot shows the VLAN selected for the VIP.

(Screenshot: VLAN selected for the VIP)

HEAT ORCHESTRATION

As already mentioned at the beginning, the goal is to manage the F5 just like other OpenStack resources through the Heat engine. To enable Heat orchestration for LBaaS with F5, there must be a resource for the Neutron LBaaS provider, which was added in OpenStack Liberty. Therefore we had to backport this resource into OpenStack Juno and Kilo. This link contains the Gerrit review for the LBaaS provider: https://review.openstack.org/#/c/185197/

Note: You can use our Ubuntu repositories, where this feature is included http://www.opentcpcloud.org/en/documentation/packages-and-repositories/

We prepared a sample template for the F5 LBaaS provider, which can be downloaded and customized as required: https://github.com/tcpcloud/heat-templates/blob/master/templates/lbaas_contrail_f5_test.hot

Once we have a template with the appropriate parameters, we can launch the stack.


heat stack-create -e env/test_contrail_f5_lbaas/demo_ce.env -f template/test_contrail_f5_lbaas.hot test_contrail_f5_lbaas_demo_ce

Check the status.


heat stack-list
+--------------------------------------+--------------------------------+-----------------+----------------------+
| id                                   | stack_name                     | stack_status    | creation_time        |
+--------------------------------------+--------------------------------+-----------------+----------------------+
| a4825267-7444-46af-87da-f081c5405470 | test_contrail_f5_lbaas_demo_ce | CREATE_COMPLETE | 2015-10-02T12:18:06Z |
+--------------------------------------+--------------------------------+-----------------+----------------------+

Describe resources in this stack and verify balancer configuration.


root@prx01:/srv/heat/env# heat stack-show test_contrail_f5_lbaas_demo_ce
+-----------------------+-----------------------------------------------------------
| Property              | Value
+-----------------------+-----------------------------------------------------------
| capabilities          | []
| creation_time         | 2015-10-02T12:18:06Z
| description           | Contrail F5 LBaaS Heat Template
| id                    | a4825267-7444-46af-87da-f081c5405470
| links                 | http://10.0.170.10:8004/v1/2c114f (self)
| notification_topics   | []
| outputs               | []
| parameters            | {
|                       |   "OS::project_id": "2c114f0779ac4367a94679cad918fbd4",
|                       |   "OS::stack_name": "test_contrail_f5_lbaas_demo_ce",
|                       |   "private_net_cidr": "172.10.10.0/24",
|                       |   "public_net_name": "public-net",
|                       |   "key_name": "public-key-demo",
|                       |   "lb_name": "test-lb",
|                       |   "public_net_pool_start": "185.22.120.100",
|                       |   "instance_image": "ubuntu-14-04-x64-1441380609",
|                       |   "instance_flavor": "m1.medium",
|                       |   "OS::stack_id": "a4825267-7444-46af-87da-f081c5405470",
|                       |   "private_net_pool_end": "172.10.10.200",
|                       |   "private_net_name": "private-net",
|                       |   "public_net_id": "621fdf52-e428-42e4-bd61-98db21042f54",
|                       |   "private_net_pool_start": "172.10.10.100",
|                       |   "public_net_pool_end": "185.22.120.200",
|                       |   "lb_provider": "f5",
|                       |   "public_net_cidr": "185.22.120.0/24"
|                       | }
| parent                | None
| stack_name            | test_contrail_f5_lbaas_demo_ce
| stack_owner           | demo
| stack_status          | CREATE_COMPLETE
| stack_status_reason   | Stack CREATE completed successfully
| stack_user_project_id | 76ea6c88fdd14410987b8cc984314bb8
| template_description  | Contrail F5 LBaaS Heat Template
| timeout_mins          | None
| updated_time          | None
+-----------------------+-----------------------------------------------------------

This template is a sample, so you have to manually configure the route target for the private network, or try to use the Contrail Heat resources, which are not part of this blog post.

CONCLUSION

We demonstrated that OpenContrail is the only SDN solution that enables managing a physical F5 through the Neutron LBaaS API instead of through its own management portal. The next step is the implementation of this feature at our pilot customers, where we want to continue with production testing scenarios. Future releases should also provide dynamic MX configuration, multi-tenancy, etc.

The OpenContrail team is also working on the integration of other vendors' load balancers, which will be available in the next release.

Billing for Contrail network services using Openbook


Talligent Openbook now supports Contrail metrics for enhanced network billing in OpenStack cloud.

OpenContrail Analytics provides a deep set of network statistics related to the operations of virtual instances, virtual networks, and floating IPs and pushes them to Ceilometer via a new service plug-in.  Because Openbook currently integrates with Ceilometer, Openbook will be able to directly consume this set of detailed and accurate bandwidth measurements for billing, chargeback, capacity planning, and other management reporting.  Service providers can now bill on bandwidth leaving the datacenter, as well as detailed metrics at the instance, floating IP, and virtual network level.

Openbook by Talligent enables cloud providers to create and track on demand cloud services based on the OpenStack platform.  Service providers and enterprises are deploying ever more complex cloud solutions to meet customer demand.  It is important that the Openbook platform expand to include SDN metrics and products as they become available from solutions like OpenContrail, whether through the Ceilometer service or directly via Openbook’s API.  These metrics can be packaged by tiers, metered and sold by the GB, delivered on-site or as part of a shared infrastructure, and reported on by tenant, customer, cost center, or business unit.

“It is imperative for Wingu to be able to measure and bill customers on their proper bandwidth usage.  Talligent and OpenContrail are providing the key bandwidth metrics and have a platform that allows us to create new, high value network offerings…”
Thomas Lee, General Manager Cloud Services, Wingu 

The Ceilometer driver from OpenContrail provides traffic statistics for instances, floating IPs, and virtual networks.  From the description:

“For the floating IPs meters, the driver will query neutron to obtain the list of floating IPs and extract the virtual machines/instances associated with the floating IPs. It will then query the OpenContrail analytics REST API server to extract the floating IP statistics associated with those virtual machines and floating IPs and populate the meters. Similarly for the virtual network meters, the driver will query neutron/nova to obtain the list of networks and then query the OpenContrail analytics REST API server to extract the inter and intra virtual network statistics and populate the meters.”

More detail about the plug-in is available from the github wiki:

https://github.com/Juniper/contrail-controller/wiki/Ceilometer-and-OpenContrail-Driver-Enhancements
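As a rough sketch of how a billing or reporting tool might consume these meters programmatically once the driver has populated them, the snippet below uses python-ceilometerclient. The credentials and the meter name are assumptions for illustration; consult the wiki linked above for the authoritative list of meters:


from ceilometerclient import client

# Placeholders: adjust credentials for your deployment. The meter name is an
# assumption used for illustration only.
ceilo = client.get_client("2",
                          os_username="admin", os_password="password",
                          os_tenant_name="admin",
                          os_auth_url="http://keystone:5000/v2.0")

# Read recent floating-IP bandwidth samples and print who consumed what, when.
for sample in ceilo.samples.list(meter_name="ip.floating.transmit.bytes", limit=10):
    print(sample.resource_id, sample.counter_volume, sample.timestamp)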

Talligent supports the decision of the Contrail team to integrate with Ceilometer for a couple of key reasons:

1) This plug-in extends the value of Ceilometer as a funnel for key metrics about the OpenStack environment; and

2) the community benefits from being able to use the common Ceilometer API to pick up this new information.  This initiative is additional validation that Ceilometer will be the primary telemetry module for OpenStack.

Openbook v2.5 is tested to work with the Juniper Networks Contrail Cloud and the OpenContrail Ceilometer Driver.

If you are interested in learning more about the use of OpenContrail and the OpenContrail Ceilometer Driver to bill and report on bandwidth metrics in OpenStack, please contact Talligent at openbook@talligent.com for more information.

About Talligent

Talligent empowers enterprises and service providers to deploy production ready OpenStack clouds by dramatically improving the visibility and control of cloud infrastructure consumption. We are OpenStack experts – we know its capabilities and are developing solutions that take advantage of OpenStack’s key benefits without vendor lock-in. Talligent OpenBook provides the functionality to turn your private or public cloud into an efficient and automated multi-tenant environment.

Contrail Analytics Streaming API


Contrail Analytics collects information from the various components of the system, and provides the visibility into flows, logs and UVEs that is needed to operate this system. This information is provided via a REST API, which can be used to build management and analytics applications and dashboards. The Contrail UI uses this REST API as well.

In addition to providing HTTP GET APIs (Operational State in the OpenContrail system: UVE – User Visible Entities through Analytics API), we also provide a Streaming API for UVEs and Alerts. This is a very useful option for enabling rapid development of powerful and efficient applications on top of Contrail Analytics. The Contrail Analytics Streaming API uses the Server-Sent Events EventSource API, which is standardized as part of HTML5.

https://w3c.github.io/eventsource/

See the Contrail Analytics Streaming API features in a demo:

API details

Two APIs are provided – one for alarms (read more about it in the detailed Contrail Alerts blogpost), and the other for entire UVE contents:

GET http://<analytics-ip>:<rest-api-port>/analytics/alarm-stream
GET http://<analytics-ip>:<rest-api-port>/analytics/uve-stream

A client may ask for a subset of updates using filters in the URL parameters:
"tablefilt=<table>[,<table>…]" will provide updates for only the given UVE table types (e.g. control-node, virtual-machine, etc.)

For the uve-stream, we can further filter by UVE Content Structures:
"cfilt=<structure>[,<structure>…]"

Let's take an example.
We can ask for a stream of all bgp-peer UVE updates as follows:

(Screenshot: streaming all bgp-peer UVE updates)

First, we read all existing bgp-peer UVEs and report their contents. Then, we report updates. Continuous updates will be reported as the UVE is updated, until the user closes the connection. The data is reported with "key", "type", and "value" fields; a minimal client sketch follows the list below.

  • The “key” is the UVE Key.
  • The “type” is a UVE Content Structure. Multiple of these may be reported per UVE-Key. When any attribute under a Structure changes, the entire Structure is sent again. When the UVE is deleted, the streaming API will report the “key” with a “type” of null.
  • The “value” is a JSON object representing the contents (attributes) of the Structure above. When the Structure is deleted, the streaming API will report the “key” and Structure “type” with a value of null.
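To make this concrete, here is a minimal client sketch (not an official tool) that subscribes to the uve-stream with a tablefilt filter and prints each update. The analytics node address and the REST API port (assumed here to be the common default of 8081) are placeholders:


import json
import requests

# Placeholders/assumptions: analytics node address and port 8081 (adjust as needed).
url = "http://analytics-node:8081/analytics/uve-stream"
resp = requests.get(url, params={"tablefilt": "bgp-peer"}, stream=True)

for line in resp.iter_lines():
    # Server-Sent Events put the payload on lines prefixed with "data:".
    if not line or not line.startswith(b"data:"):
        continue
    update = json.loads(line[len(b"data:"):].decode())
    # A null "type" or "value" indicates deletion, as described above.
    print(update.get("key"), update.get("type"), update.get("value"))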

Let's take another example. We have an Analytics Node where the contrail-snmp-collector process has stopped. This will cause an alarm.
This is what we see in the alarm stream:

(Screenshot: the alarm raised in the alarm stream)

Keep the stream open. Now we repair the system by restarting the contrail-snmp-collector. The stream reports that the alarm has been deleted:

(Screenshot: the alarm deletion reported in the alarm stream)

Contrail Alerts


OpenContrail networking provides network virtualization to data center applications using a layered, horizontally scalable software system. We have the abstractions in place to present the operational state of this system (Operational State in the OpenContrail system: UVE – User Visible Entities through Analytics API). The system is architected to be as simple as possible to operate for the functionality it delivers. An important element of this is Contrail Alerts – in addition to providing detailed operational state in an easy-to-navigate way, we also need to clearly highlight unusual conditions that may require more urgent administrator attention and action.

We provide Alerts on a per-UVE basis. Contrail Analytics will raise (or clear) instances of these alerts (alarms) using python-coded “rules” that examine the contents of the UVE and the object’s configuration. Some rules will be built-in. Others can be added using python Entry-Point based plugins.

See Contrail Alerts features in a demo:

Contrail Analytics APIs for Alerts

There is an API to get the list of supported alerts as follows:

(Screenshot: GET of the list of supported alerts)
Let's look at an example.

We have a system configured with 2 BGP peers as gateways, but those gateways themselves have not been configured with this system’s control node yet. Based on the state of these peers, we raise an alarm against the control-node UVE.

We can look at the alarms being reported on this system via the following API:

GET http://<analytics-ip>:<rest-api-port>/analytics/alarms

(Screenshot: alarms reported via GET /analytics/alarms)

The API reports the type of alarm, its severity, a description listing the reason why it exists, whether it has been acknowledged yet, and the timestamp. We provide an API for acknowledging alarms as follows:

POST http://<analytics-ip>:<rest-api-port>/analytics/alarms/acknowledge

Body: {"table": <table>, "name": <name>, "type": <type>, "token": <token>}
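For example, a minimal Python sketch (not an official client) that lists the current alarms and then acknowledges one could look like this; every value in the body is a placeholder taken from the alarm contents returned by the GET, and the REST API port is assumed to be 8081:


import requests

# Placeholders/assumptions: analytics node address and port 8081 (adjust as needed).
base = "http://analytics-node:8081"

# List the alarms currently raised in the system.
print(requests.get(base + "/analytics/alarms").json())

# Acknowledge one alarm; the values below are placeholders from the GET output.
ack = {"table": "ObjectBgpRouter",        # UVE table of the alarming object
       "name": "control-node-1",          # UVE key (hypothetical)
       "type": "BgpConnectivity",         # alarm type as reported in the GET output
       "token": "<token-from-alarm>"}     # token echoed in the alarm contents
requests.post(base + "/analytics/alarms/acknowledge", json=ack)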

When the alarm condition is resolved, the alarm will be deleted automatically, whether or not it has been acknowledged.
The alarm is also shown along with the rest of the UVE if the UVE GET API is used:

GET http://<analytics-ip>:<rest-api-port>/analytics/uves/control-node/<name>

In addition to these GET APIs, a streaming interface is also available for both UVEs and Alarms. That interface is described in detail in the Contrail Analytics Streaming API blog post.

Alarm Processing and Plugins

New Alerts can be added to Contrail Analytics by installing python plugins onto the Analytics Nodes. Consistent hashing techniques are used to distribute alarm processing among all functioning Analytics Nodes (the hash is based on the UVE Key). So, the python plugin for an Alert must be installed on each Analytics Node.
Let us look at the plugin for the alert used in the example above.

This module plugin is here:
controller/src/opserver/plugins/alarm_bgp_connectivity/

We install this plugin as follows (from alarm_bgp_connectivity/setup.py):

#
# Copyright (c) 2013 Juniper Networks, Inc. All rights reserved.
#

from setuptools import setup, find_packages

setup(
    name='alarm_bgp_connectivity',
    version='0.1dev',
    packages=find_packages(),
    entry_points = {
        'contrail.analytics.alarms': [
            'ObjectBgpRouter = alarm_bgp_connectivity.main:BgpConnectivity',
        ],
    },
    zip_safe=False,
    long_description="BGPConnectivity alarm"
)

"ObjectBgpRouter" represents the control-node UVE.
See UVE_MAP in controller/src/analytics/viz.sandesh.

The implementation is as follows (from alarm_bgp_connectivity/main.py):

from opserver.plugins.alarm_base import AlarmBase

class BgpConnectivity(AlarmBase):
    """Not enough BGP peers are up in BgpRouterState.num_up_bgp_peer"""

    def __call__(self, uve_key, uve_data):
        err_list = []
        if not uve_data.has_key("BgpRouterState"):
            return self.__class__.__name__, AlarmBase.SYS_WARN, err_list

        ust = uve_data["BgpRouterState"]

        l,r = ("num_up_bgp_peer","num_bgp_peer")
        cm = True
        if not ust.has_key(l):
            err_list.append(("BgpRouterState.%s != None" % l,"None"))
            cm = False
        if not ust.has_key(r):
            err_list.append(("BgpRouterState.%s != None" % r,"None"))
            cm = False
        if cm:
            if not ust[l] == ust[r]:
                err_list.append(("BgpRouterState.%s != BgpRouterState.%s" \
                        % (l,r), "%s != %s" % (str(ust[l]), str(ust[r]))))

        return self.__class__.__name__, AlarmBase.SYS_WARN, err_list

This plugin code is called any time a control-node UVE changes. It can examine the contents of the UVE and decide whether an alarm should be raised or not. In this case, we compare the "BgpRouterState.num_bgp_peer" attribute of the UVE with the "BgpRouterState.num_up_bgp_peer" attribute.
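As an illustration of the plugin contract, here is a hypothetical direct call with a fabricated UVE dictionary (normally the alarm generator invokes the plugin; this sketch assumes the same Python 2 environment and packages as the plugin above):


from opserver.plugins.alarm_base import AlarmBase
from alarm_bgp_connectivity.main import BgpConnectivity

# Fabricated control-node UVE content: 2 configured BGP peers, none of them up.
uve = {"BgpRouterState": {"num_bgp_peer": 2, "num_up_bgp_peer": 0}}

name, severity, errors = BgpConnectivity()("control-node-1", uve)
print(name)                              # BgpConnectivity
print(severity == AlarmBase.SYS_WARN)    # True
print(errors)
# [('BgpRouterState.num_up_bgp_peer != BgpRouterState.num_bgp_peer', '0 != 2')]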

Contrail UI

A dashboard listing all Alarms present in the system is also available in Contrail UI as follows:
(Screenshot: Contrail UI dashboard listing all alarms)

Recap: OpenContrail @ OpenStack Summit Tokyo



Last week, over 5,000 people attended the OpenStack Summit in Tokyo. The event was a great success with the presence of many technology and business leaders, both as speakers and as audience. OpenContrail had a huge presence with 6 speaking sessions and 2 brown bag talks, along with the OpenContrail User Group Meeting. If you missed the opportunity to attend the event in person, you came to the right place. All the sessions were recorded and posted by the OpenStack Summit organizers.

Check below for the videos related to the OpenContrail sessions at the summit. The videos for the OpenContrail User Group meeting are posted in this blog. You can watch similar videos about user talks, demos and presentations on the videos page.

Virtual Brown Bag Sessions

 

Open source approach to secure multi-tenancy via OpenStack in VMware clusters

In this brown bag session at the OpenStack Summit, Aniket Daptari, Sr. Product Manager from Juniper Networks, talks about how OpenContrail helps in implementing secure multi-tenancy within VMware clusters, along with a customer use case.


Open source secure multi-tenancy in containerized and hybrid environments

 

In this brown bag session, Aniket Daptari, Sr. Product Manager from Juniper Networks, covers what modern cloud applications leveraging a micro-services architecture look like, and what infrastructure needs they impose.

Main Conference Speaking Sessions

 

Extending OpenStack Heat to Orchestrate Security Policies and Network Function Service Chains

Some Neutron plugins like OpenContrail not only implement the APIs specified by Neutron but also extend the API set to provide additional functionality. Some examples include the ability to specify security policies and the ability to define sequences of network functions to be applied to selected tenant traffic. This session highlights some of these functionalities.


Software-Defined WAN Implementation to Optimize Enterprise Branch Networking

Connecting an Enterprise Branch to the Enterprise Headquarters and Data Center in a seamless way, with centralized provisioning and monitoring, is something that all Telcos desire to accomplish for their enterprise customers in the immediate term. Check out the video for more info.

 

Apply, Rinse, Repeat. (re)Build OpenStack Ready Infrastructure Like a Pro [Crowbar + Contrail]

The work of running OpenStack has been getting easier; unfortunately, it's still just as hard to operate the underlying physical data center. In this presentation, we're going to talk about how to use open source tools to build a consistent and repeatable underlay for your OpenStack infrastructure using OpenCrowbar and OpenContrail.


SaaS Experience: Building OpenStack on OpenStack CI With SDN and Containers

Workday, as a SaaS leader in human resources management and finance, has faced a number of challenges in adopting and deploying open-source technologies such as OpenStack and OpenContrail. One of these challenges was to find a quick way to evaluate and validate open source technologies in a development environment before considering them for production.

 

Gohan: An Open-source Service Development Engine for SDN/NFV Orchestration

Gohan is an API server for XaaS services. Using the gohan webui front-end, you can define new XaaS APIs on the fly. Resource status is synced with agents, which realize the XaaS service, using etcd.


Decomposing Lithium’s Monolith With Kubernetes and OpenStack

Application developers are rapidly moving to container-based models for dynamic service delivery and efficient cluster management. In this session, we will discuss an OpenStack production environment that is rapidly evolving to leverage a hybrid cloud platform to deliver containerized micro-services in a SaaS development/continuous integration environment.

Panel Talks

 

 

Containers Are Hot, but How Do They Network?

While containers have been around for decades, they are only now becoming the rage for developers and DevOps practitioners alike. Indeed, they have seen a sudden surge in popularity and are rapidly becoming the standard for developing, packaging and deploying applications.


OpenStack Consumption Models: Three User Perspectives

Deploying, managing, and maintaining an OpenStack powered cloud are a part of daily life for every operator. In this panel, we will take a critical view of the various options that are available for building an OpenStack cloud.

OpenContrail User Group Meeting – Tokyo



On October 29th, we had our OpenContrail User Group Meeting, where folks who have used OpenContrail shared their experiences in a casual way. All the speakers have deployed OpenContrail into production, and some of them have even contributed code back to the OpenContrail source code. The recordings for this session are now available, and you can listen to their stories below.

The OCUG meeting was organized in conjunction with the OpenStack Summit. The videos related to OpenContrail from the OpenStack main conference event are posted in this blog.

A huge thanks to everyone who participated in this meeting!

 

Allegro Tech

Speakers: Michal Dopierala & Krzysztof Kowalik from Allegro Tech.

 

Cloudwatt/Orange Labs

Speakers: Edouard Thuleau & Jean Philippe Braun from Cloudwatt and Thomas Morin from Orange Labs.

Intercloud Systems

Speaker: Nabeel Asim, Chief Technologist – NFV and SDN from Intercloud Systems

Lithium Technologies

Speaker: Lachlan Evenson, Team Lead, Cloud Platform Engineering from Lithium Technologies

NTT i3

Speaker: Ichiro Fukuda, Chief Architect, Infrastructure from NTT i3

tcpCloud

Speakers: Jakub Pavlík, Chief Architect – Cloud Operations & Filip Pytloun from tcpCloud.

Workday

Speaker: Edgar Magana, Cloud Operations Architect from Workday

Installing Kubernetes & OpenContrail


In this post we walk through the steps required to install a 2 node cluster running Kubernetes that uses OpenContrail as the network provider. In addition to the 2 compute nodes, we use a master and a gateway node. The master runs both the kubernetes api server and scheduler as well as the opencontrail configuration management and control plane.

OpenContrail implements an overlay network using standards-based network protocols.

This means that, in production environments, it is possible to use existing network appliances from multiple vendors that can serve as the gateway between the un-encapsulated network (a.k.a. underlay) and the network overlay. However for the purposes of a test cluster we will use an extra node (the gateway) whose job is to provide access between the underlay and overlay networks.

For this exercise, I decided to use my MacBook Pro, which has 16G of RAM. However, all the tools used are also supported on Linux; it should be relatively simple to reproduce the same steps on a Linux machine or on a cloud such as AWS or GCE.

The first step in the process is to obtain the binaries for kubernetes release-1.1.1. I then unpacked the tar file into ~/tmp and extracted the linux binaries required to run the cluster using the command:

cd ~/tmp;tar zxvf kubernetes/server/kubernetes-server-linux-amd64.tar.gz

In order to create the 4 virtual machines required for this scenario I used VirtualBox and vagrant. Both are trivial to install on OS X.

In order to provision the virtual machines we use ansible. Ansible can be installed via "pip install ansible". I then created a default ansible.cfg that enables the pipelining option and disables ssh connection sharing. The latter was required to work around failures on tasks that use "delegate_to" and run concurrently (i.e. run_once is false). From a cursory internet search, it appears that the openssh server that ships with ubuntu 14.04 has a concurrency issue when handling multiplexed sessions.


~/.ansible.cfg
[defaults]
pipelining=True
 
[ssh_connection]
ssh_args = -o ControlMaster=no -o ControlPersist=60s

With ansible and vagrant installed, we can proceed to create the VMs used by this testbed. The vagrant configuration for this example is available in github. The servers.yaml file lists the names and resource requirements for the 4 VMs. Please note that if you are adapting this example to run with a different vagrant provider, the Vagrantfile needs to be edited to specify the resource requirements for that provider.
After checking out this directory (or copying over the files), the VMs can be created by executing the command:

vagrant up

Vagrant will automatically execute

config.yaml

which will configure the hostname on the VMs.

The Vagrantfile used in this example will cause vagrant to create VMs with 2 interfaces: a NAT interface (eth0) used for the ssh management sessions and external access, and a private network interface (eth1) providing a private network between the host and the VMs. OpenContrail will use the private network interface; the management interface is optional and may not exist in other configurations (e.g. AWS, GCE).

After vagrant up completes, it is useful to add entries to /etc/hosts on all the VMs so that names can be resolved. For this purpose I used another ansible script, invoked as:

ansible-playbook -u vagrant -i .vagrant/provisioners/ansible/inventory/vagrant_ansible_inventory resolution.yaml

This step must be executed independently of the ansible configuration performed by vagrant, since vagrant invokes ansible for one VM at a time, while this playbook expects to be invoked for all hosts.

The command above depends on the inventory file that vagrant creates automatically when configuring the VMs. We will use the contents of this inventory file to provision kubernetes and OpenContrail as well.

With the VMs running, we need to check out the ansible playbooks that configure kubernetes + opencontrail. While an earlier version of the playbook is available upstream in the kubernetes contrib repository, the most recent version of the playbook is in a development branch on a fork of that repository. Check out the repository via:

git clone -b opencontrail https://github.com/pedro-r-marques/contrib.git

The branch HEAD commit id, at the time of this post, is 15ddfd5.

I will work on upstreaming the updated opencontrail playbook to both the kubernetes and openshift provisioning repositories as soon as possible.

With the ansible playbook available in the contrib/ansible directory, it is necessary to edit the file ansible/group_vars/all.yml and replace the network provider:

# Network implementation (flannel|opencontrail)
networking: opencontrail

We then need to create an inventory file:

[opencontrail:children]
masters
nodes
gateways
 
[opencontrail:vars]
localBuildOutput=/Users/roque/src/golang/src/k8s.io/kubernetes/_output/dockerized
opencontrail_public_subnet=100.64.0.0/16
opencontrail_interface=eth1
 
[masters]
k8s-master ansible_ssh_user=vagrant ansible_ssh_host=127.0.0.1 ansible_ssh_port=2222 ansible_ssh_private_key_file=/Users/roque/k8s-provision/.vagrant/machines/k8s-master/virtualbox/private_key
 
[etcd]
k8s-master ansible_ssh_user=vagrant ansible_ssh_host=127.0.0.1 ansible_ssh_port=2222 ansible_ssh_private_key_file=/Users/roque/k8s-provision/.vagrant/machines/k8s-master/virtualbox/private_key
 
[gateways]
k8s-gateway ansible_ssh_user=vagrant ansible_ssh_host=127.0.0.1 ansible_ssh_port=2200 ansible_ssh_private_key_file=/Users/roque/k8s-provision/.vagrant/machines/k8s-gateway/virtualbox/private_key
 
[nodes]
k8s-node-01 ansible_ssh_user=vagrant ansible_ssh_host=127.0.0.1 ansible_ssh_port=2201 ansible_ssh_private_key_file=/Users/roque/k8s-provision/.vagrant/machines/k8s-node-01/virtualbox/private_key
k8s-node-02 ansible_ssh_user=vagrant ansible_ssh_host=127.0.0.1 ansible_ssh_port=

This inventory file does the following:

  • Declares the hosts for the roles masters, gateways, etcd and nodes; the ssh information is derived from the inventory created by vagrant.
  • Declares the location of the kubernetes binaries downloaded from the github release;
  • Defines the IP address prefix used for ‘External IPs’ by kubernetes services that require external access;
  • Instructs opencontrail to use the private network interface (eth1); without this setting the opencontrail playbook defaults to eth0.

Once this file is created, we can execute the ansible playbook by running the script "setup.sh" in the contrib/ansible directory.

This script runs through all the steps required to provision kubernetes and opencontrail. It is not unusual for the script to fail on some network-based operations (downloading the repository keys for docker, for instance, or downloading a file from github); the ansible playbook is meant to be declarative (i.e. it defines the end state of the system) and is supposed to be re-run if a network-based failure is encountered.

At the end of the script we should be able to login to the master via the command “vagrant ssh k8s-master” and observe the following:

  • kubectl get nodes
    This should show two nodes: k8s-node-01 and k8s-node-02.
  • kubectl --namespace=kube-system get pods
    This command should show that the kube-dns pod is running; if this pod is in a restart loop, that usually means that the kube2sky container is not able to reach the kube-apiserver.
  • curl http://localhost:8082/virtual-networks | python -m json.tool
    This should display a list of virtual-networks created in the opencontrail API.
  • netstat -nt | grep 5269
    We expect 3 established TCP sessions for the control channel (xmpp) between the master and the nodes/gateway.

On the host (OS X) one should be able to access the diagnostic web interface of the vrouter agent running on each compute node. This interface displays the information regarding the interfaces attached to each pod.

Once the cluster is operational, one can start an example application such as "guestbook-go". This example can be found in the kubernetes examples directory. In order for it to run successfully, the following modifications are necessary:

    • Edit guestbook-controller.json, in order to add the labels “name” and “uses” as in:

"spec":{
  [...]
  "template":{
    "metadata":{
      "labels":{
        "app":"guestbook",
        "name":"guestbook",
        "uses":"redis"
      }
    },
  [...]
}
    • Edit redis-master-service.json and redis-slave-service.json in order to add a service name. The following is the configuration for the master:
"metadata": {
  [...]
  "labels": {
         "app":"redis",
         "role": "master",
         "name":"redis"
  }
}
  • Edit redis-master-controller.json and redis-slave-controller.json in order to add the “name” label to the pods. As in:
    "spec":{
       [...]
       "template":{
          "metadata":{
             "labels":{
                "app":"redis",
                "role":"master",
                "name":"redis"
             }
          },
       [...]
     }

After the example is started the guestbook service will be allocated an ExternalIP on the external subnet (e.g. 100.64.255.252).

In order to access the external IP network from the host, one needs to add a route to that subnet via 192.168.1.254 (the gateway node's address). Once that is done, you should be able to access the application with a web browser at http://100.64.255.252:3000.


Kube-O-Contrail – get your hands dirty with Kubernetes and OpenContrail


This blog is co-authored by Sanju Abraham and Aniket Daptari from Juniper Networks.

The OpenContrail team participated in the recently concluded KubeCon 2015. It was the inaugural conference for the Kubernetes ecosystem. At the conference, we helped the attendees with a hands-on workshop.

In the past we have demonstrated the integration of OpenContrail with OpenStack, CloudStack, VMware vCenter, IBM Cloud Orchestrator and some other orchestrators. With the growing acceptance of Containers as the compute vehicle of choice to deploy modern applications, the OpenContrail team extended the overlay virtual networking to Containers.

As Kubernetes came along, groups of containers deployed together to implement a logical piece of functionality started being managed together as Pods and multiple Pods as Services.

The OpenContrail team extended the same networking primitives and constructs to Kubernetes entities like Pods and Services – providing not just security via isolation for Pods, interconnecting them based on the app tier inter-relationships specified in the app deployment manifest, but also load balancing across the various Pods that implement a particular service behind the service's "ClusterIP".

OpenContrail also creates the construct of Virtual Networks for every collection of Pods along with a CIDR block allocated for that Virtual Network. Then, as Pods are spawned, OpenContrail assigns an IP for every new Pod created.

When entities like Webservers need to be accessed from across the internet and need to have a public facing IP address, OpenContrail also provides NATting in a fully distributed fashion.

In summary, OpenContrail provides all the following functionalities in a fully distributed fashion:

IPAM, DHCP, DNS, Load balancing, NAT and Firewalling.

All the above sounds pretty cool and the next thing on anyone’s mind is, “Fine, how do I see these in action for myself?”

In order to reap all the above benefits from OpenContrail, we have committed all the necessary OpenContrail code to Kubernetes mainline – well almost. Our pull request to merge our changes with Kubernetes mainline is open and we anticipate it getting approved within the next few weeks.

What that allows is that whenever any one deploys Kubernetes:

1) On baremetal servers in an on-prem private cloud,
2) On top of OpenStack perhaps using Murano in an on-prem private cloud,
3) On a public cloud like GCE,
4) Or a public cloud like AWS,

All the OpenContrail goodness is right there along with Kubernetes. All that needs to be done to leverage the OpenContrail goodness is to set the value of an environment variable “NETWORK_PROVIDER” to “opencontrail” before Kubernetes is installed.

So let’s go through the steps to first deploy Kubernetes in a public cloud, say GCE, that includes and enables OpenContrail, and then deploy a sample application and see what benefits OpenContrail brings along.

Step 1: Deploying Kubernetes in GCE along with OpenContrail.

In order to do this, we will build Kubernetes and deploy it:

a) git clone -b opencontrail-integration https://github.com/Juniper/kubernetes.git

b) ~/build/release.sh

c) export NETWORK_PROVIDER=opencontrail

d) ./cluster/kube-up.sh

…Starting cluster using provider: gce

Kubernetes cluster is running.  The master is running at:

https://104.197.128.44

The user name and password to use is located in /Users/adaptari/.kube/config.


... calling validate-cluster
Found 3 node(s).
NAME                    LABELS                                         STATUS                     AGE
kube-oc-2-master        kubernetes.io/hostname=kube-oc-2-master        Ready,SchedulingDisabled   1m
kube-oc-2-minion-59ws   kubernetes.io/hostname=kube-oc-2-minion-59ws   Ready                      1m
kube-oc-2-minion-htl8   kubernetes.io/hostname=kube-oc-2-minion-htl8   Ready                      1m
Validate output:
NAME                 STATUS    MESSAGE              ERROR
controller-manager   Healthy   ok                   nil
scheduler            Healthy   ok                   nil
etcd-0               Healthy   {"health": "true"}   nil
etcd-1               Healthy   {"health": "true"}   nil
Cluster validation succeeded
Done, listing cluster services:

 

Kubernetes master is running at https://104.197.128.44

GLBCDefaultBackend is running at https://104.197.128.44/api/v1/proxy/namespaces/kube-system/services/default-http-backend
Heapster is running at https://104.197.128.44/api/v1/proxy/namespaces/kube-system/services/heapster
KubeDNS is running at https://104.197.128.44/api/v1/proxy/namespaces/kube-system/services/kube-dns
KubeUI is running at https://104.197.128.44/api/v1/proxy/namespaces/kube-system/services/kube-ui
Grafana is running at https://104.197.128.44/api/v1/proxy/namespaces/kube-system/services/monitoring-grafana
InfluxDB is running at https://104.197.128.44/api/v1/proxy/namespaces/kube-system/services/monitoring-influxdb

e) To view the Contrail components, you can issue:

docker ps | grep contrail | grep -v pause

Step 2: Now that we have Kubernetes running with OpenContrail, let’s find and prepare an app.

The main forte of OpenContrail lies in abstraction. Abstraction is necessary for the speed and agility that developers care most about. This abstraction is accomplished by letting developers specify app tier inter-relationships in the form of annotations in the app deployment manifests. The OpenContrail controller will then infer the policy requirements based on the inter-relationships specified in the app manifests and program the corresponding security policies into the vRouters for fully distributed enforcement.

Therefore, the existing app manifests of applications need to be patched with the annotations.

So, patch the existing app, guestbook-go.

https://github.com/Juniper/contrail-kubernetes/blob/vrouter-manifest/cluster/patch_guest_book
The patch above introduces the labels “name” and “uses” that help specify the app tier inter-relationships.
To apply the patch,

git apply --stat patch
git apply --check patch
git apply patch

Step 3: Now that the app is ready, let’s go ahead and deploy it:

kubectl create -f guestbook-go/redis-master-controller.json
kubectl create -f guestbook-go/redis-master-service.json
kubectl create -f guestbook-go/redis-slave-controller.json
kubectl create -f guestbook-go/redis-slave-service.json
kubectl create -f guestbook-go/guestbook-controller.json
kubectl create -f guestbook-go/guestbook-service.json

Notice here that the way Kubernetes was installed or the way apps are deployed has not changed one bit. The only thing that has changed is the introduction of an environment variable and the introduction of annotations in the form of labels – “name” and “uses”.

Finally, to view the replication controllers and service pods created from the above commands, use:

kubectl get rc
kubectl get pods

Step 4: Establish ssh tunnel into the public IP allocated to the guestbook webserver from localhost. Then point browser to http://localhost:3000 (or port used in port forwarding).

This completes the hands-on exercise for OpenContrail with Kubernetes.

In the next part of this blog, we will continue the ride deeper into OpenContrail and look closely at what components OpenContrail has introduced and what benefits those components provide.

Data Center Micro Segmentation in Contrail Virtual Networks


Micro-segmentation divides the data center into smaller, more protected zones. Servers can be added to multiple application tiers and, depending on the type of application, traffic is controlled as it flows from one tier to another rather than at individual server ports. In a real-world scenario an application tier may not have a 1:1 mapping to a Layer 3 subnet, so applying firewall rules on a physical or virtual firewall appliance based on the IP address of each server becomes highly unmanageable and does not scale.

With the Contrail security groups feature, one can follow a declarative model: label the servers based on the application they cater to, and then construct security rules that define the traffic flow between these different applications rather than referring to IP addresses.

Use case example:

As shown in the figure below, the subnet 172.16.0.0/16 hosts all the servers, and these servers may be web, application or database servers depending on the end-application requirement. In this specific example we have 2 servers in each tier.

As shown by the arrows at the bottom of the figure, the idea is to make sure that the web tier can talk to the app tier and the app tier can access the db tier, but the web tier cannot access the db tier directly.

For simplicity, the use case in this example is demonstrated using ssh.

(Figure: web, app and db tiers within the 172.16.0.0/16 subnet)

Configuration steps:

The idea here is to create one security group for each tier. Under the security group match conditions we match the traffic flow between the different application tiers by referring to the security group names we created for each application tier and the traffic direction, instead of matching individual server IP addresses or subnets. We then launch the VMs, associating them with the respective security group based on the tier they belong to.
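A minimal sketch of this configuration with python-neutronclient is shown below (the same can be done from the Contrail or Horizon UI). Credentials and the Keystone URL are placeholders, and ssh (TCP port 22) is used to match the verification steps that follow:


from neutronclient.v2_0 import client

# Placeholders: adjust credentials and the Keystone endpoint for your environment.
neutron = client.Client(username="admin", password="password",
                        tenant_name="demo",
                        auth_url="http://keystone:5000/v2.0")

# One security group per application tier.
groups = {}
for tier in ("web", "app", "db"):
    sg = neutron.create_security_group({"security_group": {"name": tier}})
    groups[tier] = sg["security_group"]["id"]

def allow_ssh(src, dst):
    # Ingress rule on the destination tier that matches the *source security group*
    # (remote_group_id) instead of individual server IP addresses or subnets.
    neutron.create_security_group_rule({"security_group_rule": {
        "security_group_id": groups[dst],
        "direction": "ingress",
        "protocol": "tcp",
        "port_range_min": 22,
        "port_range_max": 22,
        "remote_group_id": groups[src]}})

allow_ssh("web", "app")   # web tier may ssh to the app tier
allow_ssh("app", "db")    # app tier may ssh to the db tier
# No rule from web to db, so that traffic is blocked by default.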

(Screenshot: security group rules per application tier)

 

(Screenshot: VMs associated with their tier security groups)

Verification:

Check 1: Log in to a machine in the web tier and ssh to a VM in the app tier. Ssh should be allowed to pass through.

(Screenshot: ssh from the web tier to the app tier succeeds)

Check 2: Log in to a machine in the web tier and ssh to a VM in the database tier. Ssh should be blocked.

(Screenshot: ssh from the web tier to the db tier is blocked)

 

Hybrid service chaining across multiple Hypervisors


OpenContrail supports multiple types of hypervisors and containers. These can spawn simple tenant VMs as well as more complex service instances implementing a Virtualized Network Function (VNF).

This video demonstrates a service chain composed of two service instances. One is running on a KVM host, and the other on an ESXi host.

This hybrid service chain enables value-added services for tenant VMs which are also spread across KVM and ESXi hypervisors.

Also, check out how to introduce secure multi-tenancy to VMware/vCenter clusters via open source network virtualization.

Kubernetes and OpenStack multi-cloud networking


This is a guest blog from tcpCloud, authored by Marek Celoud & Jakub Pavlik (tcp cloud engineers). To see the original post, click here.

This blog gives a first insight into the use of real bare-metal Kubernetes clusters for application workloads from a networking point of view. A special thanks goes to Lachlan Evenson and his colleagues from Lithium for collaboration on this post and for providing real use cases.

Since the last OpenStack Summit in Tokyo last November we have realized the magnitude of the impact containers have on the global community. Everyone has been speaking about using containers and Kubernetes instead of standard virtual machines. There are a couple of reasons for that: their lightweight nature, easy and fast deployments, and the fact that developers love them. They can easily develop, maintain, scale and rolling-update their applications. We at tcp cloud, focused on building private cloud solutions based on open source technologies, wanted to get hold of Kubernetes and see if it can really be used in a production setup alongside or within OpenStack-powered virtualization.

Kubernetes brings a new way to manage container-based workloads and, for a start, enables features for containers similar to what OpenStack provides for VMs. If you start using Kubernetes you will soon realize that you can easily deploy it on AWS, GCE or Vagrant, but what about your on-premise bare-metal deployment? How do you integrate it into your current OpenStack or virtualized infrastructure? All blog posts and manuals document small clusters running in virtual machines with sample web applications, but none of them show a real scenario for bare-metal or enterprise performance workloads with integration into a current network design. Properly designing the networking is the most difficult part of the architectural design, just as with OpenStack. Therefore we have defined the following networking requirements:

  • Multi-tenancy – separation of container workloads is a basic requirement of every security policy standard; the default Flannel networking, for example, only provides a flat network architecture.
  • Multi-cloud support – not every workload is suitable for containers, and you still need to put heavy loads like databases in VMs or even on bare metal. For this reason a single control plane for the SDN is the best option.
  • Overlay – related to multi-tenancy. Almost every OpenStack Neutron deployment uses some kind of overlay (VXLAN, GRE, MPLSoverGRE, MPLSoverUDP) and we have to be able to interconnect them.
  • Distributed routing engine – East-West and North-South traffic cannot go through one central software service. Network traffic has to go directly between OpenStack compute nodes and Kubernetes nodes. Optimally, routing is provided on routers instead of proprietary gateway appliances.

Based on these requirements we decided to start with the OpenContrail SDN. Our mission was to integrate the OpenStack workload with Kubernetes and then find a suitable application stack for the actual load testing.

OpenContrail overview

OpenContrail is an open source SDN & NFV solution with tight ties to OpenStack since Havana. It was one of the first production-ready Neutron plugins along with Nicira (now VMware NSX-VH), and the last summit's survey showed it is the second most deployed solution after Open vSwitch and the first among the vendor-based solutions. OpenContrail has integrations with OpenStack, VMware, Docker and Kubernetes.

The Kubernetes network plugin kube-network-manager has been under development since the OpenStack Summit in Vancouver last year, and the first announcement was released at the end of the year.

The kube-network-manager process uses the kubernetes controller framework to listen to changes in objects that are defined in the API and adds annotations to some of these objects. It then creates a network solution for the application using the OpenContrail API, which defines objects such as virtual networks, network interfaces and access control policies. More information is available in this blog post.

Architecture

We started testing with two independent Contrail deployments and then set up a BGP federation. The reason for the federation is the keystone authentication of kube-network-manager: when contrail-neutron-plugin is enabled, the Contrail API uses keystone authentication, and this feature is not yet implemented in the kubernetes plugin. The Contrail federation is described in more detail later in this post.

The following schema shows the high-level architecture: the OpenStack cluster is on the left side and the Kubernetes cluster is on the right side. OpenStack and OpenContrail are deployed in a fully highly available, best-practice design, which can be scaled up to hundreds of compute nodes.

(Figure: high-level architecture – OpenStack cluster on the left, Kubernetes cluster on the right)

The following figure shows the federation of two Contrail clusters. In general, this feature enables Contrail controller connections between different sites of a multi-site DC without the need for a physical gateway. The control nodes at each site are peered with the other sites using BGP. It is possible to stretch both L2 and L3 networks across multiple DCs this way.

This design is usually used for two independent OpenStack clouds or two OpenStack regions. All components of Contrail, including the vRouter, are exactly the same. Kube-network-manager and neutron-contrail-plugin just translate the API requests for the different platforms. The core functionality of the networking solution remains unchanged. This brings not only a robust networking engine, but analytics too.

opencontrail-kubernetes-bgp

Application Stack

Overview

Let's have a look at a typical scenario. Our developers gave us a docker-compose.yml, which is used for development and local tests on their laptops. This makes things easier, because our developers already know Docker and the application workload is Docker-ready. The application stack contains the following components:

  • Database – PostgreSQL or MySQL database cluster.
  • Memcached – for content caching.
  • Django app Leonardo – Django CMS Leonardo was used for application stack testing.
  • Nginx – web proxy.
  • Load balancer – HAProxy load balancer for scaling containers.

When we want to move it into production, we can transform everything into Kubernetes replication controllers with services, but as we mentioned at the beginning, not everything is suitable for containers. Therefore we separate the database cluster onto OpenStack VMs and rewrite the rest into Kubernetes manifests.

Application deployment

This section describes workflow for application provisioning on OpenStack and Kubernetes.

OpenStack side

In the first step, we launched a Heat database stack on OpenStack. This created 3 VMs with PostgreSQL and a database network. The database network is a private, tenant-isolated network.
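
Launching such a stack from the Heat CLI could look roughly like the following sketch; the template file and parameter names here are illustrative, not the actual ones used. Once the stack is created, the resulting VMs show up in nova:

# Launch the database stack from a Heat template (file and parameter names are hypothetical)
heat stack-create -f leonardo-db-stack.yaml \
    -P db_network_name=leonardodb \
    -P db_cluster_size=3 \
    leonardo-db

# Wait until the stack reaches CREATE_COMPLETE
heat stack-list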

# nova list
+--------------------------------------+--------------+--------+------------+-------------+-----------------------+
| ID                                   | Name         | Status | Task State | Power State | Networks              |
+--------------------------------------+--------------+--------+------------+-------------+-----------------------+
| d02632b7-9ee8-486f-8222-b0cc1229143c | PostgreSQL-1 | ACTIVE | -          | Running     | leonardodb=10.0.100.3 |
| b5ff88f8-0b81-4427-a796-31f3577333b5 | PostgreSQL-2 | ACTIVE | -          | Running     | leonardodb=10.0.100.4 |
| 7681678e-6e75-49f7-a874-2b1bb0a120bd | PostgreSQL-3 | ACTIVE | -          | Running     | leonardodb=10.0.100.5 |
+--------------------------------------+--------------+--------+------------+-------------+-----------------------+

Kubernetes side

On the Kubernetes side we have to launch the manifests with the Leonardo and Nginx services. All of them can be viewed there.

In order for them to run successfully with networking isolation, pay attention to the following sections.

  • leonardo-rc.yaml – Replication Controller for the Leonardo app with 3 replicas and the virtual network leonardo
apiVersion: v1
kind: ReplicationController
...
  template:
metadata:
  labels:
    app: leonardo
    name: leonardo # label name defines and creates new virtual network in contrail
...
  • leonardo-svc.yaml – the leonardo service exposes the application pods on port 8000 with a virtual IP from the cluster network.
apiVersion: v1
kind: Service
metadata:
  labels:
    name: ftleonardo
  name: ftleonardo
spec:
  ports:
    - port: 8000
  selector:
    name: leonardo # selector/name matches label/name in replication controller to receive traffic for this service
...
  • nginx-rc.yaml – NGINX Replication Controller with 3 replicas, the virtual network nginx and a policy allowing traffic to the leonardo-svc network. This sample does not use SSL.
apiVersion: v1
kind: ReplicationController
...
  template:
    metadata:
      labels:
        app: nginx
        uses: ftleonardo # uses creates policy to allow traffic between leonardo service and nginx pods.
        name: nginx # creates virtual network nginx with policy ftleonardo
...
  • nginx-svc.yaml – creates the service with a cluster VIP and a public virtual IP to make the application accessible from the Internet.
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: nginx
    name: nginx
...
  selector:
    app: nginx # selector/name matches label/name in RC to receive traffic for the svc
  type: LoadBalancer # this creates new floating IPs from external virtual network and associate with VIP IP of the service.
...

Let's run all the manifests by calling kubectl:

kubectl create -f /directory_with_manifests/

This creates following pods and services in Kubernetes.

# kubectl get pods
NAME             READY     STATUS    RESTARTS   AGE
leonardo-369ob   1/1       Running   0          35m
leonardo-3xmdt   1/1       Running   0          35m
leonardo-q9kt3   1/1       Running   0          35m
nginx-jaimw      1/1       Running   0          35m
nginx-ocnx2      1/1       Running   0          35m
nginx-ykje9      1/1       Running   0          35m
# kubectl get service
NAME         CLUSTER_IP      EXTERNAL_IP     PORT(S)    SELECTOR        AGE
ftleonardo   10.254.98.15    <none>          8000/TCP   name=leonardo   35m
kubernetes   10.254.0.1      <none>          443/TCP    <none>          35m
nginx        10.254.225.19   185.22.97.188   80/TCP     app=nginx       35m

Only the Nginx service has a public IP, 185.22.97.188, which is a floating IP configured by the LoadBalancer service type. All traffic is now balanced by ECMP on the Juniper MX.

To get the cluster fully working, routing must be set up between the leonardo virtual network in the Kubernetes Contrail and the database virtual network in the OpenStack Contrail. Go into both Contrail UIs and set the same route target on both networks. This can also be automated through Contrail Heat resources.

route-target

The following figure shows how the final production application stack should look. At the top there are 2 Juniper MXs with a public VRF where the floating IPs are propagated. The traffic is balanced through ECMP over MPLSoverGRE tunnels to the 3 nginx pods. Nginx proxies requests to the Leonardo application servers, which store sessions and content in the PostgreSQL database cluster running on OpenStack VMs. The connection between pods and VMs is direct, without any central routing point; the Juniper MXs are used only for outgoing connections to the Internet. Because the application stores its sessions in the database (normally this would be memcached or redis), we do not need a specific L7 load balancer and ECMP works without any problem.

opencontrail-kubernetes-scenario

Other Outputs

This section shows other interesting outputs from the application stack. The Nginx service description with the LoadBalancer type shows the floating IP and the private cluster IP, followed by the 3 IP addresses of the nginx pods. Traffic is distributed through vRouter ECMP.

# kubectl describe svc/nginx
Name:                   nginx
Namespace:              default
Labels:                 app=nginx,name=nginx
Selector:               app=nginx
Type:                   LoadBalancer
IP:                     10.254.225.19
LoadBalancer Ingress:   185.22.97.188
Port:                   http    80/TCP
NodePort:               http    30024/TCP
Endpoints:              10.150.255.243:80,10.150.255.248:80,10.150.255.250:80
Session Affinity:       None

The Nginx routing table shows the internal routes between pods and the route 10.254.98.15/32, which points to the leonardo service.

nginxRT

The previous route, 10.254.98.15/32, is the cluster IP shown in the description of the leonardo service.

# kubectl describe svc/ftleonardo
Name:                   ftleonardo
Namespace:              default
Labels:                 name=ftleonardo
Selector:               name=leonardo
Type:                   ClusterIP
IP:                     10.254.98.15
Port:                   <unnamed>       8000/TCP
Endpoints:              10.150.255.245:8000,10.150.255.247:8000,10.150.255.252:8000

The routing table for leonardo looks similar to the nginx one, except for the 10.0.100.X/32 routes, which point to the OpenStack VMs in the other Contrail cluster.

leonardoRT

The last output is from the Juniper MX VRF, showing multiple routes to the nginx pods.

185.22.97.188/32   @[BGP/170] 00:53:48, localpref 200, from 10.0.170.71
                      AS path: ?, validation-state: unverified
                    > via gr-0/0/0.32782, Push 20
                    [BGP/170] 00:53:31, localpref 200, from 10.0.170.71
                      AS path: ?, validation-state: unverified
                    > via gr-0/0/0.32778, Push 36
                    [BGP/170] 00:53:48, localpref 200, from 10.0.170.72
                      AS path: ?, validation-state: unverified
                    > via gr-0/0/0.32782, Push 20
                    [BGP/170] 00:53:31, localpref 200, from 10.0.170.72
                      AS path: ?, validation-state: unverified
                    > via gr-0/0/0.32778, Push 36
                   #[Multipath/255] 00:53:48, metric2 0
                    > via gr-0/0/0.32782, Push 20
                      via gr-0/0/0.32778, Push 36

Conclusion

We have proved that you can use a single SDN solution for OpenStack, Kubernetes, bare metal and VMware vCenter. More importantly, this use case can actually be used in production environments.

If you are more interested in this topic, you can vote for our session Multi-cloud Networking for OpenStack Summit at Austin.

Currently we are working on requirements for Kubernetes networking stacks, and we will then provide a detailed comparison between different Kubernetes network plugins like Weave, Calico, Open vSwitch, Flannel and Contrail at a scale of 250 bare metal servers.

We are also working on OpenStack Magnum with a Kubernetes backend to give developers a self-service portal for simple testing and development. They will be able to prepare application manifests inside OpenStack VMs, push changes to the final production definitions into git, and finally use them in production.

Special thanks go to Pedro Marques from Juniper for his support and contribution during testing.

Jakub Pavlik & tcp cloud team

OpenStack Neutron IPv6 support in OpenContrail SDN


This is a guest blog from tcpCloud, authored by Marek Celoud & Jakub Pavlik (tcp cloud engineers). To see the original post, click here.

As private cloud deployers and integrators (primarily based on OpenStack), we get asked by many customers about support for IPv6. Most of our deployments run on OpenContrail SDN & NFV; the reasons are described in our previous blogs (http://www.tcpcloud.eu/en/blog/2015/07/13/opencontrail-sdn-lab-testing-1-tor-switches-ovsdb/). OpenContrail SDN has supported IPv6 for quite a long time, but there are not many real tests, so we decided to share the procedure we used to configure and use IPv6 in OpenStack.

This short blog describes support for IPv6 in OpenStack using the Neutron plugin for the OpenContrail SDN/NFV platform.

With cloud deployments there is significant growth in the need for public IP addresses, and these deployments are facing problems due to the lack of IPv4 addresses. One of the solutions is to migrate to public IPv6.

We start with the capability of IPv6 for internal communication between virtual machines within the same virtual network and across different virtual networks. Then we show how to expose public IPv6 addresses to the external world. In our case we use Juniper MX routers as the cloud gateway.

Creating IPv6 network

We need to consider a few things when creating an IPv6 virtual network. The first is to also add an IPv4 subnet, because without an IPv4 address an instance cannot connect to the nova metadata API. Cloud images are built to use cloud-init to connect to the API at 169.254.169.254:80, so if you create a network without an IPv4 subnet, your instance will not receive metadata. The second consideration is whether you want your IPv6-capable instances to reach the Internet. There is currently a problem with IPv6 floating IP pools, so if you want to reach the external world, you need to boot into a network with an associated route target.

We first create a private IPv6 network for demonstration.

ipv61
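
The same network can also be created from the neutron CLI. A rough sketch is shown below; the network and subnet names are illustrative, and the IPv4 subnet is there only so instances can reach the nova metadata service:

# Dual-stack network: IPv4 subnet for metadata, IPv6 subnet for the actual workload
neutron net-create ipv6-demo-net
neutron subnet-create --name v4-subnet ipv6-demo-net 192.168.10.0/24
neutron subnet-create --name v6-subnet --ip-version 6 ipv6-demo-net fd00::/64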

Once the network is created we can boot instances. We will boot 2 of them to demonstrate working communication. You will probably need to modify the network interface configuration, because DHCP is not enabled for IPv6. To obtain an IPv6 address on demand you can use:

#dhclient -6

ipv63

As you can see, the instance has both an IPv4 and an IPv6 address associated with its interface.

ipv64

Before testing communication, we need to modify the security groups to allow the traffic. For testing purposes we will allow everything.

ipv62
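
From the CLI, opening up the default security group could look roughly like the sketch below; allowing all ingress traffic is acceptable only for a lab test like this one:

# Allow all ingress IPv6 and IPv4 traffic in the default security group (lab only)
neutron security-group-rule-create --direction ingress --ethertype IPv6 default
neutron security-group-rule-create --direction ingress --ethertype IPv4 default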

We choose ubuntu-ipv6-1 from the instance list and try to ping the instance ubuntu-ipv6-2 on its IPv6 address fd00::3.

ipv65

As you can see, we are now able to ping the other instance.

ipv66

This capability is nice, but not very useful without connectivity to the external world. We will create a network with an associated route target to propagate its routes to the Juniper MX routers via BGP. The picture below shows a sample architecture. There is one VRF, CLOUD-INET, created on each of the MX routers, and the route target associated with this VRF matches the route target added to the virtual network in Contrail. The picture shows both IPv4 and IPv6 addresses propagated into the same VRF. There is also an INET virtual router connected to the VRF via lt tunnel interfaces running OSPF and OSPFv3. From this virtual router a default route ::/0 is aggregated from all the Internet routes received from upstream EBGP.

ipv66 expanded

There are a few things to configure on the MX routers to enable IPv6 traffic from the cloud. The first is enabling IPv6 tunneling through the MPLS tunnels.

protocols {
    mpls {
        ipv6-tunneling;
        interface all;
    }
}

It is also good practice to filter which routes you export to and import from the cloud. We only need the default route present in the cloud, and we also want only IPv6 prefixes to be imported from Contrail, because of the IPv4 pool created alongside the IPv6 virtual network.


policy-statement CLOUD-INET-EXPORT {
    term FROM-MX-IPV6 {
        from {
            protocol ospf3;
            route-filter ::/0 exact;
        }
        then {
            community add CLOUD-INET-EXPORT-COMMUNITY;
            accept;
        }
    }
    term LAST {
        then reject;
    }
}
policy-statement CLOUD-INET-IMPORT {
    term FROM-CONTRAIL-IPV6 {
        from {
            family inet6;
            community CLOUD-INET-IMPORT-COMMUNITY;
            route-filter 2a06:f6c0::/64 orlonger;
        }
        then accept;
    }
    term LAST {
        then reject;
    }
}
community CLOUD-INET-EXPORT-COMMUNITY members target:64513:10;
community CLOUD-INET-IMPORT-COMMUNITY members target:64513:10;

So now we create the network 2a06:f6c0::/64 and associate route target 64513:10 with it. We can also make it shared so all tenants can boot in this network. Once we create an instance in this network, the routing information already appears in the MX routing table.
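
Creating that shared network from the neutron CLI could look roughly like this; the route target itself is associated with the virtual network through the Contrail UI or API, not through neutron:

# Shared network carrying the public IPv6 prefix
# (an IPv4 subnet for metadata can be added as in the earlier example)
neutron net-create --shared inet6-public
neutron subnet-create --name inet6-v6 --ip-version 6 inet6-public 2a06:f6c0::/64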


# run show route table CLOUD-INET.inet6.0

CLOUD-INET.inet6.0: 8 destinations, 9 routes (8 active, 0 holddown, 0 hidden)
+ = Active Route, - = Last Active, * = Both

::/0               *[OSPF3/150] 20:37:13, metric 0, tag 0
                    > to fe80::6687:8800:0:2f7 via lt-0/0/0.3
2a06:f6c0::3/128   *[BGP/170] 00:00:15, localpref 100, from 10.0.106.84
                      AS path: ?, validation-state: unverified
                    > via gr-0/0/0.32789, Push 1046
                    [BGP/170] 00:00:15, localpref 100, from 10.0.106.85
                      AS path: ?, validation-state: unverified
                    > via gr-0/0/0.32789, Push 1046

We can also verify that the default route is propagated by inspecting the routing tables in Contrail.

ipv610

Once we verify that the instance has a public IPv6 address, we can try to access the Internet.

ipv67

ping google
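
The screenshots above show the checks run from inside the instance; roughly the same verification can be done from a shell on the VM, for example:

# Verify the global IPv6 address and reachability of the outside world
ip -6 addr show dev eth0
ping6 -c 3 google.com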

Conclusion

We have proved that the OpenContrail SDN solution is fully IPv6 capable with the OpenStack cloud platform, for both private and public communication, and that it can communicate directly with edge routers such as Juniper MX, Cisco ASR, etc.

Getting to GIFEE with SDN: Demo


A few short years ago, espousing open source and cloud computing was even more difficult than touting the importance of clean energy and the realities of climate change. The doubters and naysayers, vocal as they are, are full of reasons why things are (fine) as they are. Reasons, however, don't get you results. We needed transformative action in IT, and today, as we're right between the Google NEXT event and the OpenStack Summit in Austin, open source and cloud are the norm for the majority.

After pausing for a moment of vindication – we told you so – we get back to work to improve further and look forward, and a good place to look is indeed at Google: a technology trailblazer by sheer necessity. We heard a lot about the GCP at NEXT, especially their open source project Kubernetes, powering GKE. What’s most exciting about such container-based computing with Docker is that we’ve finally hit the sweet spot in the stack with the right abstractions for developers and infrastructure & ops pros. With this innovation now accessible to all in the Kubernetes project, Google’s infrastructure for everyone else (#GIFEE) and NoOps is within reach. Best of all, the change this time around is less transformative and more incremental…

One thing you'll like about a serverless architecture stack like Kubernetes is that you can run it on bare metal if you want the best performance possible, but you can just as easily run it on top of IaaS-provided VMs in a public or private cloud, which gives us a great deal of flexibility in so many ways. Then of course, if you just want to deploy workloads and not worry about the stack, an aaS offering like GKE or ECS is a great way to get to NoOps faster. We have a level playing field across public and private clouds and a variety of underpinnings.

For those that are not only using a public micro-services stack aaS offering like GKE, but supplementing or fully building one internally with Kubernetes or a PaaS on top of it like OpenShift, you'll need some support. Just like you didn't build an OpenStack IaaS by yourself (I hope), there's no reason to go it alone for your serverless architecture micro-services stack. There are many parts under the hood, and one of them you need baked into your stack from the get-go is software-defined secure networking. It was a pleasure to get back in touch with my developer roots and put together a demo of how you can solve your networking and security microsegmentation challenges using OpenContrail.

I've taken the test setup for OpenContrail with OpenShift, and forked and modified it to create a pure demo cluster of OpenContrail + OpenShift (thus including Kubernetes), showing off the OpenContrail features with Kubernetes and OpenShift. If you learn by doing like me, then maybe best of all, this demo cluster is also open source and Ansible-automated, so it is easy to stand up or tear down on AWS with just a few commands, going from nada to running OpenShift and OpenContrail consoles with a running sample app. Enjoy getting your hands dirty, or sit back and watch the demo video.

If you are looking to setup and run this demo yourself, please see: https://github.com/jameskellynet/container-networking-ansible

Enhancing OpenStack LBaaSv1 via Custom Attributes in OpenContrail


Note: This blog is co-authored by Aniket Daptari from Juniper Networks and Varun Lodaya from Symantec. Varun will be presenting his work at the upcoming OpenContrail User Group meeting during the OpenStack Summit (April 27th). RSVP to the event if you want to see his talk.

 

This blog highlights how OpenContrail has fostered the notion of user and developer communities. Here, we highlight one example of a specific user and how they have contributed as a developer to enhance the value they extract from OpenContrail. Symantec has now been a long-time user of and contributor to OpenContrail. In this particular blog we aim to highlight one of their most recent contributions, which enhances the LBaaS offering in a manner that not only addresses their specific use case, but is generic enough for other users to leverage.

Further, this general approach may be used to extend other API sets beyond LBaaS as well.

With this, we also want to highlight the contribution of OpenContrail developer Varun Lodaya, who has played a critical role in the design and development of this enhancement.

Abstract:

With LBaaS, OpenStack attempts to define a single set of APIs to consume load balancing functionality regardless of the implementation of those APIs. This gives the OpenStack operator flexibility in choosing the load balancing implementation on the backend, as well as in making changes to that backend. It has allowed various vendors to provide their southbound LBaaS drivers, for example F5, A10, HAProxy, NGINX, etc. Now, while each of these load balancers offers many varied features, the northbound LBaaS APIs (v1.0) are a bit limited. It is perhaps impractical to provide APIs for every feature provided by all the load balancers. Therefore, there is a need for a way to support load balancer functionality beyond what is made available via the LBaaS v1.0 APIs.

OpenContrail user Symantec had the following LBaaS use cases, which translated into a requirement to exercise additional functionality beyond what is available via LBaaS v1. In response to that requirement, Symantec developers designed the LBaaS custom attributes support.

The following are the use cases in the words of Cloud Platform Engineering developer, Varun Lodaya, from Symantec:

Use Cases:

● Enable our cloud users to manage all their LBaaS features/configurations themselves.
● Empower the cloud providers to decide what capabilities they want their users to have since they know their infrastructure capabilities and limitations the best.
● Support tenant SSL certs with custom attributes.

Design:

We came up with a modular design that caters to all of the above use cases. The design flow is as follows:
● Based on the LBaaS driver that the cloud provider is using, the cloud provider will identify the additional features they want to expose to their users. These additional features will be made available to the users via custom attributes. The cloud provider will then generate a validation file that contains all the custom attributes (corresponding to the features) they want to provide to their users. In addition to the list of additional features, the cloud provider will also specify any limits they might want to enforce associated with those custom attributes.
● This validation file will be used and enforced by OpenContrail when facilitating the invocation of the custom attributes.
● When users exercise the load balancing functionality via the LBaaS APIs, they invoke the additional functionality by specifying a list of key-value dicts while configuring the LBaaS pool. This list of key-value dicts is saved in a database.
● When the LBaaS VIP is created, the OpenContrail Service Monitor process reads the custom attributes from the lb_pool object and validates them against the corresponding custom validation file.
● If validation fails, the service_monitor process moves the vip/service_instance to the “Error” state with the corresponding error message. It is then the user's responsibility to go back and correct their custom attributes so that they pass validation.
● If validation passes, the custom attributes are pushed down to the corresponding drivers. In the case of OpenContrail's implementation of LBaaS using HAProxy, these custom attributes get pushed to the vRouter, from where they get applied to the corresponding HAProxy process.
● This custom attributes extension can also be used to support tenant-level SSL certs with LBaaSv1. Users can manage their certs in different ways, one of which is the OpenStack project Barbican.
● Once they have their certificate pem files ready, they can provide the certificate references as custom attributes to OpenContrail.
● OpenContrail then downloads the certificates via references provided in custom attributes and updates the corresponding southbound driver with the SSL certificates.
● Currently, we support OpenStack Barbican as the certificate manager and the OpenContrail code fetches the certificates and private keys from Barbican, but the code is generic enough to be able to plug in any cert manager driver, which could then fetch the certificates from third-party certificate managers.

Usage:

CLI:
Neutron CLI to provide custom attributes:

neutron lb-pool-create --name Test_Pool --subnet-id <subnetid>
--lb-method ROUND_ROBIN --protocol HTTP --custom-attributes type=dict
list=true client_timeout=50000,
tls_container=http://<barbican_ep>/v1/containers/<container_ref_uuid>
neutron lb-pool-update <pool-id> --custom-attributes type=dict
list=true server_timeout=100000

Barbican CLI to manage certs :

barbican secret store --payload-content-type='text/plain'
--name='certificate' --payload="$(cat server.crt)"
barbican secret store --payload-content-type='text/plain'
--name='private_key' --payload="$(cat server.key)"
barbican secret container create --name='tls_container'
--type='certificate' --secret="certificate=$(barbican secret list |
awk '/ certificate / {print $2}')" --secret="private_key=$(barbican
secret list | awk '/ private_key / {print $2}')"

API:
Neutron API to provide custom attributes:

curl -i -X POST http://<neutron_end_point>:9696/v2.0/ports.json -H
"User-Agent: python-neutronclient" -H "Content-Type:
application/json" -H "Accept: application/json" -H "X-Auth-Token:
0cb1bad6081e4ca383495a3f5a3ea718" -d '{"port": {"network_id":
"9be1ce8e-5226-4046-be82-100fcd041dc1", "fixed_ips": [{"subnet_id":
"c2968821-e52b-4c2e-a895-4ded7abf2edb", "ip_address":
"192.168.1.5"}], "custom_attributes": [{"client_timeout=50000",
"tls_container=http://<barbican_ep>/v1/containers/<container_uuid>"}]
, "admin_state_up": true}}'

Barbican API to create private_key secrets :

curl -i -X POST https://<barbicanapi_end_point>:9311/v1/secrets -H
"content-type:application/json" -H "X-Auth-Token:$TOKEN" \
-d '{"name": "Private_Key", "payload": "<private_key>",
"payload_content_type": "text/plain"}'

Barbican API to create certificate secrets:

curl -i  -X POST https://<barbicanapi_end_point>:9311/v1/secrets -H
"content-type:application/json" -H "X-Auth-Token:$TOKEN" \
-d '{"name": "Certificate", "payload": "<certificate>",
"payload_content_type": "text/plain"}'

Barbican API to create pem containers:

curl -i -X POST https://<barbicanapi_end_point>:9311/v1/containers -H
"content-type:application/json" -H "X-Auth-Token:$TOKEN" \
-d '{"name": "tls_container", "type": "certificate", "secret_refs": \
[{"name": "private_key", "secret_ref": "<key_ref>"},{"name": \
"certificate", "secret_ref": "<certificate_ref>"}]}'

Barbican API to update secret ACLs:

curl -i -X PUT https://<barbicanapi_end_point>:9311/v1/secrets/<secret_uuid>/acls \
-H "content-type: application/json" -H "X-Auth-Token:$TOKEN" \
-d '{"read": {"users": "[<user_uuids>]"}}'

EXAMPLE:
How to use Custom Attributes for SSL Cert Support with HAProxy:
Steps:
1) Add a reference to the new config file (which contains the keystone auth credentials) to the
/etc/contrail/contrail-vrouter-agent.conf file as follows:

cat /etc/contrail/contrail-vrouter-agent.conf | grep -B3 lb_custom
[SERVICE-INSTANCE]
# Path to the script which handles the netns commands
netns_command = /usr/bin/opencontrail-vrouter-netns
lb_custom_attr_conf_path = /etc/contrail/contrail-vrouter-custom-attr.conf

2) The keystone credentials file contains the following by default:

/etc/contrail/contrail-vrouter-custom-attr.conf
[DEFAULT]

[KEYSTONE]
keystone_endpoint=http://172.16.38.189:5000
barbican_endpoint=http://172.16.38.188:9311
domain_name=default
username=admin
password=abc123
project_name=demo
keystone_version=v3

[CERT]
#cert_manager=Barbican_Cert_Manager
cert_manager=Generic_Cert_Manager

3) Restart contrail vrouter agent for it to read this new config.

root@ubuntu:/var/log/keystone# service supervisor-vrouter restart
supervisor-vrouter stop/waiting
supervisor-vrouter start/running, process 18287

Steps 4 to 7 would vary based on the driver selected in the config.

Barbican Cert Manager Flow:

4) Create the barbican secrets first. Make sure payload-content-type is ‘text/plain’.

barbican secret store --payload-content-type='text/plain' --name='certificate'
--payload="$(cat ssl.crt)"
barbican secret store --payload-content-type='text/plain' --name='private_key'
--payload="$(cat ssl.key)"

5) Create the barbican container referencing both the secrets

barbican container create --name='tls_container' --secret="certificate=$(barbican
secret list | awk '/ certificate / {print $2}')" --secret="private_key=$(barbican
secret list | awk '/ private_key / {print $2}')"

6) Now create the load balancer pool with tls_container as the custom-attribute as follows:

neutron lb-pool-create --subnet-id c41ec07c-9330-4469-b7f7-33fd4f29fce1 --lb-method
ROUND_ROBIN --protocol HTTPS --name TestPool --custom-attributes type=dict list=true
tls_container=http://172.16.38.188:9311/v1/containers/f3c48a4b-efab-4050-9c6e-289fb6c10168

7) Create the load balancer VIP now

neutron lb-vip-create --subnet-id c41ec07c-9330-4469-b7f7-33fd4f29fce1 --protocol HTTP
--protocol-port 443 --name TestVip TestPool

Basic Cert Manager Flow:

4) Store the secrets in a shared folder.

5) Now create the load balancer pool with tls_container as the custom-attribute as follows:

neutron lb-pool-create --subnet-id c41ec07c-9330-4469-b7f7-33fd4f29fce1 --lb-method
ROUND_ROBIN --protocol HTTPS --name TestPool --custom-attributes type=dict list=true
tls_container=/var/lib/contrail/shared_crts/crt.pem

7) Create the load balancer VIP now

neutron lb-vip-create --subnet-id c41ec07c-9330-4469-b7f7-33fd4f29fce1 --protocol HTTP
--protocol-port 443 --name TestVip TestPool

8) Monitor the logs by tailing the file below:

/var/log/contrail/haproxy_parse.log

BLUEPRINT and SOURCE CODE:
https://blueprints.launchpad.net/opencontrail/+spec/lbaas-custom-attr-support
https://bugs.launchpad.net/opencontrail/+bug/1475393
https://bugs.launchpad.net/opencontrail/+bug/1546253
https://bugs.launchpad.net/opencontrail/+bug/1547645

 


OpenContrail User Group Meeting – Austin


OCUG_Austin

During the recent OpenStack Summit in Austin, we repeated the tradition of hosting the OpenContrail User Group meeting. Speakers included companies like AT&T, Symantec, Tieto, and Mirantis. The meeting also featured an eventful panel discussion with representatives from Workday, tcpcloud and Lithium Technologies. The recordings of the sessions are now available, and you can listen to their stories below.

A huge thanks to everyone who participated in this meeting!

 

Symantec

Speaker: Varun Lodaya

 

AT&T

Speaker: Paul Carver

Tieto

Speaker: Lukas Kubin

AT&T / Mirantis

Speaker: Munish Mehan (AT&T); Randy DeFauw (Mirantis)

Panel Discussion

Participants: Edgar Magana(Workday); Jakub Pavlik (tcpcloud); Lachlan Evenson (Lithium Technologies)

Port Tuples and Service Chain Redundancy in Contrail


Port Tuples in Contrail

One of the reasons why OpenContrail is so flexible in adopting new compute flavors (different hypervisors, containers, bare metal servers, etc.) is the VM Interface (VMI) concept. OpenContrail uses the VMI object abstraction as a means to interconnect a heterogeneous compute environment to the overlay network. Thanks to this abstraction layer, many of the features that were originally made to work for VMs also work seamlessly for other compute flavors.

In OpenContrail 3.0 we go one step further and define a Port Tuple as an ordered set of VM Interfaces. A given Port Tuple is an ordered list of network interfaces connected to the same VM, container, or physical appliance. By chaining port tuples, it is possible to build a service chain out of heterogeneous network functions, some of them virtual (VMs, containers) and others physical. In summary, Port Tuples are to NFV what VM Interfaces are to overlay networking.

Disclaimer: One of the scenarios displayed in the video (bare metal servers connected to vrouter CPE) is upcoming but not officially supported as of Contrail 3.0 yet. It is shown as a way to illustrate the power of the port tuple concept.

Service Chain Redundancy in Contrail

OpenContrail has recently boosted its control plane feature set in several ways. One of them is routing policies. If you are familiar with network operating systems like Junos or IOS XR, you have certainly used routing policies more than once. In OpenContrail this feature works in exactly the same way: thanks to routing policies, it is possible to filter and modify routes in a fine-grained manner, adding much greater control plane flexibility within OpenContrail itself.

At the time of publication of this post, the infrastructure has been completely developed and routing policies can be applied to Service Instance interfaces (left, right). In other words, it is currently possible to do fine-grained route leaking through a Service Chain. And the same infrastructure can be used in the future for other purposes too.

Among other use cases, the currently available routing policy feature set allows for Service Chain Redundancy. You can have a primary service chain in data center #1 and the backup service chain in data center #2. Combined with the Service Health Check feature, Service Providers can easily implement HA on their NFV offerings.

Physical Network Function (PNF) service chaining with Contrail


When it comes to service functions there is a huge amount of focus on making the shift to virtualization, or NFV. Virtual Network Functions (VNFs) can be firewalls, load balancers, routers, route reflectors, BNGs, EPCs; the list goes on. This is not without good reason, as virtualization comes with many benefits such as increased agility, simpler automation, more granular scaling, licensing models that allow a true “pay as you grow” business model and, in general, the opportunity for service providers to revolutionize how they offer services to their customers. So why would we want to keep using physical network functions (PNFs) and develop SDN solutions that support them? Well, there are actually some pretty good reasons. Firstly, many service providers have made huge investments in these appliance-based solutions and quite rightly expect to continue to realize the benefit of those investments for some years into the future. Secondly, when it comes to raw throughput performance, ASIC-based forwarding is still far superior to x86-powered forwarding. Serious improvements have been made, but the gap is still wide.

As you probably know, Contrail provides the capability to insert network functions providing services like those described above into the traffic path between two different virtual networks, on demand and in a dynamic way. There is no explicit dependency on the network function itself for service stitching to happen. As of the most recent Contrail releases, PNF service chaining is also supported: we can now create service chains that are PNF only, VNF only, or a hybrid of PNF and VNF, with multiple instances of both physical and virtual functions in a single service chain. These PNFs and VNFs are included as part of the network policy definition that is applied between two virtual networks, as has always been the case for VNF service chaining. While it uses slightly different mechanisms under the hood to realize the correct route leaking and next-hop updates that ensure traffic between the two virtual networks is correctly directed through the service appliances, the logic for PNF service chaining is the same as that used in the VNF service chaining approach. The main difference is that in the case of PNF service chaining, Contrail pushes the required configuration to the MX router via Netconf rather than installing forwarding state on the vRouters running on the compute nodes. What's really nice is that you can add many distinct chains running over the same physical appliance using the same interfaces, with each chain using a different VLAN tag in order to maintain traffic segregation on the PNF.

Below is an example workflow of traffic flowing between two virtual networks/zones that is subject to physical and/or virtual network services, plus an additional service chain between two different virtual networks that uses the same appliance. Some of this is covered in the video below.

This is an attempt to show how to unleash the power of automation by leveraging existing network services as well as virtual services for Cloud environments!

OpenContrail In Service Software Upgrade


This blog goes over the procedure followed for In Service Software Upgrade of OpenContrail.

OpenContrail is an OpenStack Neutron plugin in the OpenStack environment, and it primarily has two components: the Contrail controller services, and the vRouter and associated services on the compute nodes. Since this is OpenContrail ISSU, we assume that OpenContrail and OpenStack are installed on separate computational resources, as shown in Figure 1, and can be upgraded independently. In this blog, we will go over the procedure for an OpenContrail In Service Software Upgrade from Version 1 (V1) to Version 2 (V2).

OpenContrail ISS Upgrade Blog Image 1

As the first step, spawn a parallel V2 Contrail controller cluster and launch the ISSU task, as shown in Figure 2.

OpenContrail ISS Upgrade Blog Image 2

The ISSU task will first BGP-peer the V1 and V2 controllers, as shown in Figure 3.

OpenContrail ISS Upgrade Blog Image 3

Then the ISSU task will freeze the northbound communication of the V1 Contrail controller cluster and start a run-time ISSU sync, as shown in Figure 4. Note that the datapath is not impacted during this stage.

OpenContrail ISS Upgrade Blog Image 4

Then it will open the northbound communication with OpenStack, as shown in Figure 5.

OpenContrail ISS Upgrade Blog Image 5

Note that the run-time ISSU config sync and the Contrail controller BGP peering ensure that all the state generated in the V1 Contrail controller cluster is available in the V2 Contrail controller cluster. The system is now ready for compute node upgrades.

Now admins can perform rolling upgrades of the computes, individually or in batches, as shown in Figures 6 and 7. This facilitates any testing the admin may intend to do before all computes are upgraded.

OpenContrail ISS Upgrade Blog Image 6

OpenContrail ISS Upgrade Blog Image 7

The admin completes all compute upgrades, as shown in Figure 8.

OpenContrail ISS Upgrade Blog Image 8

The admin can also roll back the upgraded computes if an issue is detected, individually or in batches, as shown in Figure 9.

OpenContrail ISS Upgrade Blog Image 9

Once all computes are upgraded, the admin can initiate decommissioning of the V1 Contrail controller cluster. For that, the ISSU task freezes the northbound communication of the V1 Contrail controller with OpenStack and does a final ISSU config sync of the state, as shown in Figure 10. Rolling back from this step onwards is not recommended.

OpenContrail ISS Upgrade Blog Image 10

The ISSU task finalizes the upgrade by decommissioning the V1 Contrail controller and setting the V2 Contrail controller as the new Neutron plugin for OpenStack, as shown in Figure 11.

OpenContrail ISS Upgrade Blog Image 11

The ISSU task is terminated and the upgrade is done, as shown in Figure 12.

OpenContrail ISS Upgrade Blog Image 12

As you can see from the upgrade procedure, a hybrid approach is taken: the Contrail controller cluster is upgraded side by side, while the computes are upgraded in place. Communication between the older and newer versions happens over standard protocols like BGP or through the ISSU task, which facilitates focused testing and makes the procedure less error prone.

Note that running all the Contrail controller services, including support services such as RabbitMQ and database services such as Zookeeper and Cassandra, on the same computational node is for illustration purposes only. These support and database services could run external to the Contrail controller and can be shared between the old and new versions, as they are logically partitioned so that the old and new versions don't step on each other.

In the screencast below, all the support and database services run on the OpenStack node, and an HAProxy fronts Neutron. This avoids touching OpenStack components when the upgrade happens. For convenience, the ISSU task runs on the V2 Contrail controller, and the whole procedure is driven through fabric scripts, which are referred to as the ISSU task.

In summary, it is not a hitless upgrade, but it is a minimal-hit upgrade with lots of flexibility for admins: for example, connectivity to a VM that was spawned after the upgrade is not impacted even after a rollback!

Openstack Bare Metal server integration using Contrail SDN in a multi vendor DC fabric environment


Introduction:

In this blog we are going to show a hands-on lab demonstration of how the Juniper Contrail SDN controller allows a cloud administrator to seamlessly integrate bare-metal servers into a virtualized network environment that may consist of existing virtual machine workloads. The accompanying video shows the steps required to configure the various features of this lab.

The solution uses the standards-based OVSDB protocol to program the Ethernet ports, VLANs and MAC table entries on the switches, so any switch that supports the OVSDB protocol can be used as the top-of-rack switch to implement the Contrail BMS solution; we have already demonstrated one such solution with Cumulus Linux switches in this video.

Specifically, in this blog we are going to use a QFX5100 and an Arista switch as the two top-of-rack switches.

Setup:

The figure below shows the physical topology, which consists of an all-in-one Contrail controller and compute node (IP 10.84.30.34) where we are going to spin up the VMs. The second server (IP 10.84.30.33) acts as the TSN node, which runs the TOR agents (OVSDB clients) for the two TOR switches and also provides network services such as ARP/DHCP/DNS to the bare-metal servers through a service known as the TOR services node. The TSN node's vRouter forwarder translates the OVSDB message exchanges with the TOR switches into XMPP messages that are advertised to the control node; this is done by the vRouter forwarder agent running on the TSN node.

The two switches, QFX5100 (IP=10.84.30.44) and Arista 7050 (IP=10.84.30.7.38), each have one bare-metal server connected to them on 1G interfaces, as shown in the figure. In the case of Arista, the OVSDB session is established not with the TOR but with the CloudVision VM; CloudVision in turn pushes the OVSDB state into the TOR switch using Arista's CloudVision protocol. 10.84.0.0/16 is the DC fabric underlay subnet. 10.84.63.144 is the loopback IP address of the QFX5100, and 10.84.63.189 is the loopback of the Arista switch.

Finally, the MX SDN gateway (10.84.63.133) is also connected to the DC fabric. The MX gateway provides two main functions: 1. public access to the VMs and bare-metal servers, and 2. inter-VN connectivity between the bare-metal servers.

Contrail integration with Arista TORs_blog_image1

 


Provisioning & control-plane:

The TOR switches can be added to the cluster during provisioning of the cluster itself, or added to an existing cluster using fab tasks. The TSN and TOR agent roles are configured under the 'tsn' and 'toragent' roles in the env.roledefs section of testbed.py. All the TOR switch related parameters, such as the TOR switch loopback address (the VXLAN VTEP source), the OVSDB transport type (PSSL vs TCP), the OVSDB TCP port, etc., are defined under the env.tor_agent section of testbed.py. Please refer to the GitHub link provided in the references section for more details.

Once the cluster is provisioned along with the TOR switches, the only other thing required is to apply the necessary OVSDB, VXLAN and related configuration on the TOR switches. Once all of this is configured properly, the control plane protocols should come up between the various nodes, the TOR switches and the MX gateway. The configurations for the TOR switches and the MX gateway are provided below.

The TSN has OVSDB sessions to the QFX5100 and the Arista CloudVision VM. The TSN also has an XMPP session with the control node. The control node and the MX gateway exchange BGP routes over an IBGP session.

Data Plane:

As you can see in the video, we are going to create two virtual networks, RED (IP subnet 172.16.1.0/24) and BLUE (IP subnet 172.16.2.0/24). The QFX5100 ge-0/0/16 logical interface is added to the RED VN and the Arista switch Et3 interface is added to the BLUE VN. In addition, on the all-in-one Contrail node we are going to create one VM each in the RED and BLUE VNs.

The resulting data plane interactions are shown in Fig 3. The RED and BLUE VTEPs are created on the QFX5100 and Arista 7050 switches via OVSDB. Intra-VN communication between the bare-metal server and the virtual machine in the RED and BLUE VNs is facilitated by the VXLAN tunnels that are set up between the TOR switches and the compute node (the all-in-one node in this case, 10.84.30.34). Broadcast Ethernet frames for protocols such as ARP, DHCP, etc. are directed towards the TSN node using VXLAN tunnels established between the TOR switches and the TSN node. All inter-VN traffic originated by a bare-metal server and destined to another bare-metal server or a VM in a different VN (RED to BLUE or vice versa) is sent to the EVPN instances for the respective VNs on the MX SDN gateway, where routing takes place between the RED and BLUE VRFs. The VRF and EVPN (virtual-switch routing instance) configuration on the MX gateway is automated using the Contrail Device Manager feature.

In addition, the all-in-one compute node has VXLAN and MPLSoGRE tunnels to the MX gateway for L2 and L3 stretch of the virtual networks.

Contrail integration with Arista TORs_blog_image3

 

Contrail integration with Arista TORs_blog_image4

The resulting logical tenant virtual network connectivity is shown below.

Contrail integration with Arista TORs_blog_image5

References:

Sample testbed.py file & the MX GW, QFX5100 , Arista Switch, Cloudvision configurations
https://github.com/vshenoy83/Contrail-BMS-Configs.git
