DATASHEET
Big Cloud Fabric 2.5
HYPERSCALE NETWORKING FOR ALL
Big Switch’s Big Cloud Fabric is a leaf-spine Clos fabric providing physical and virtual workload connectivity in data center pods. Embracing hyperscale data center design principles, the Big Cloud Fabric solution enables rapid innovation and ease of provisioning and management while reducing overall costs.
BIG CLOUD FABRIC OVERVIEW
Big Switch’s Big Cloud Fabric™ (BCF) is the industry’s first bare metal SDN data center fabric, bringing hyperscale data center design principles to cloud environments and making it ideal for current and next-generation data centers. Applications can now take advantage of the high east-west bisectional bandwidth, secure multi-tenancy, and workload elasticity natively provided by Big Cloud Fabric. Customers benefit from unprecedented application agility due to automation, massive operational simplification due to SDN, and dramatic cost reduction due to HW/SW disaggregation.
Big Cloud Fabric supports both physical and virtual (multi-hypervisor) workloads and a choice of orchestration software¹. It provides L2 switching, L3 routing, and L4-7 service insertion and chaining while ensuring high bisectional bandwidth. The scalable fabric is fully resilient with no single point of failure and supports headless-mode operation. Big Cloud Fabric is available in two editions:
• P-Clos — Leaf-spine physical Clos fabric controlled via SDN Controller
• Unified P+V Clos — Leaf-spine plus virtual switches (vSwitches) controlled by SDN Controller (future release)
This datasheet describes the P-Clos edition of Big Cloud Fabric.
ARCHITECTURE: SDN SOFTWARE MEETS BARE METAL HARDWARE
Software Defined Networking (SDN) fabric architecture refers to a separation of the network’s data and control planes, followed by a centralization of the control plane functionality. In practice, it implies that the network’s policy plane, management plane, and much of the control plane are externalized from the hardware device itself, using an SDN controller, with a few on-device off-load functions for scale and resiliency. The network state is centralized but hierarchically implemented, instead of being fully distributed on a box-by-box basis across access and aggregation switches.
Controller-based designs not only bring agility via centralized programmability and automation, but they also streamline fabric designs (e.g. leaf-spine L2/L3 Clos) that are otherwise cumbersome to implement and fragile to operate in a box-by-box design.
1. See specific hypervisor and orchestration support in a later section of this datasheet
BIG SWITCH NETWORKS
Our mission is to bring hyperscale networking to a broader audience—ultimately removing the network as the biggest obstacle to rapid deployment of new applications.
We do this by delivering all the design philosophies of hyperscale networking in a single solution.
The Big Cloud Fabric Features:
• Bare Metal Hardware to Reduce Cost
• SDN Controller Technology to Reduce Complexity
• Core and Pod Designs to Innovate Faster
Get hands-on experience with our offering: register for a free online trial at bsnlabs.bigswitch. Contact our sales team at *******************
The Big Cloud Fabric architecture consists of a physical switching fabric, which is based on a leaf-spine Clos architecture. Bare metal Leaf and Spine switches running Switch Light™ Operating System form the individual nodes of this physical fabric.
Intelligence in the fabric is hierarchically placed: most of it in the Big Cloud Fabric Controller (where configuration, automation and troubleshooting occur), and some of it off-loaded to Switch Light for resiliency and scale-out.
BIG CLOUD FABRIC SYSTEM COMPONENTS
• Big Cloud Fabric Controller Cluster — a centralized and hierarchically implemented SDN controller, deployed as a cluster of virtual machines or hardware appliances for high availability (HA)
• Bare Metal Leaf and Spine Switch Hardware — the term ‘bare metal’ refers to the fact that the Ethernet switches are shipped without an embedded networking OS. The merchant silicon networking ASICs used in these switches are the same as those used by most incumbent switch vendors and have been widely deployed in production in hyperscale data center networks. These bare metal switches ship with the Open Network Install Environment (ONIE) for automatic and vendor-agnostic installation of a third-party network OS. A variety of switch HW configurations (10G/40G) and vendors are available on the Big Switch hardware compatibility list.
• Switch Light™ Operating System — a light-weight bare metal switch OS purpose-built for SDN
• OpenStack Plug-In (optional) — a BSN Neutron plug-in or ML2 mechanism driver for integration with various distributions of OpenStack
• CloudStack Plug-In (optional) — a BSN Networking plug-in for integration with CloudStack
DEPLOYMENT SCENARIOS
Big Cloud Fabric is designed from the ground up to satisfy the requirements of physical, virtual, or a combination of physical and virtual workloads. Some of the typical Pod deployment scenarios include:
• Private/Public Clouds
• OpenStack (Nova or Neutron Networking) / CloudStack Pods
• High Performance Computing / Big Data / Software Defined Storage Pods
• Virtual Desktop Infrastructure (VDI) Pods
• Specialized NFV Pods
The P-Clos fabric can be designed to support the deployment scenarios listed above using a combination of bare metal Ethernet switch options. A few examples are listed in the table shown in Figure 2.
Figure 1: Big Cloud Fabric (Leaf-Spine Clos Architecture)
USING BCF: A 3-TIER APPLICATION EXAMPLE
The Big Cloud Fabric supports a multi-tenant model, which is easily customizable for the specific requirements of different organizations and applications. This model increases the speed of application provisioning, simplifies configuration, and helps with analytics and troubleshooting. Some of the important terminology used to describe the functionality includes the following (a brief provisioning sketch follows the list):
• Tenant — A logical grouping of L2 and/or L3 networks and services.
• Logical Segment — An L2 network consisting of logical ports and end-points. This defines the default broadcast domain boundary.
• Logical Router — A tenant router providing routing and policy enforcement services for inter-segment, inter-tenant, and external networks.
• External Core Router — A physical router that provides connectivity between Pods within a data center and to the Internet.
• Tenant Services — Services available to tenants and deployed as dedicated or shared services (individually or as part of a service chain).
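To make these constructs concrete, the following minimal sketch shows how a tenant, its logical segments, and a logical router interface might be provisioned programmatically against the controller. The controller does expose REST APIs, but the URL paths, payload fields, and session handling shown here are illustrative assumptions for this example, not the documented BCF API.

```python
import requests

# Illustrative sketch only: the URL paths, JSON fields, and session header
# below are hypothetical placeholders, not the documented BCF REST API.
CONTROLLER = "https://bcf-controller.example.com:8443"
HEADERS = {"Cookie": "session_cookie=<token>"}  # assumes a prior login step

def post(path, payload):
    """POST a JSON payload to the controller and raise on HTTP errors."""
    resp = requests.post(CONTROLLER + path, json=payload,
                         headers=HEADERS, verify=False)
    resp.raise_for_status()
    return resp.json()

# 1. Create the BLUE tenant (a logical grouping of segments and services).
post("/api/v1/tenants", {"name": "BLUE"})

# 2. Create one logical segment (L2 broadcast domain) per application tier.
for tier in ("web", "app", "db"):
    post("/api/v1/tenants/BLUE/segments", {"name": tier})

# 3. Attach each segment to the tenant's logical router so the tiers can
#    reach each other, subject to inter-segment policy.
for tier in ("web", "app", "db"):
    post("/api/v1/tenants/BLUE/logical-router/interfaces", {"segment": tier})
```

The same constructs can equally be created through the controller GUI or CLI, or driven indirectly through an orchestration plug-in such as the OpenStack integration described below.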
Tenant Workflow
In the most common scenario, end consumers or tenants of the data center infrastructure deal with a logical network topology that defines the connectivity and policy requirements of applications. As an illustrative example, the canonical 3-tier application in Figure 3 shows the various workload nodes of a tenant named “BLUE”. Typically, a tenant provisions these workloads using orchestration software such as OpenStack, or directly through the BCF Controller GUI/CLI (a brief sketch of this workflow follows). As part of that provisioning workflow, the Big Cloud Fabric Controller seamlessly handles programming the logical topology onto the physical switches.
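As a sketch of this workflow, the snippet below uses the OpenStack SDK to create the three BLUE networks and subnets; with the BCF Neutron plug-in or ML2 mechanism driver configured, each Neutron network is realized as a logical segment on the fabric without any switch-by-switch configuration. The cloud name and addressing are assumptions made for illustration.

```python
import openstack

# Minimal sketch of tenant provisioning through OpenStack.  The cloud name
# and CIDRs below are assumptions; the BCF Neutron plug-in (or ML2 driver)
# is expected to translate each network into a logical segment on the fabric.
conn = openstack.connect(cloud="blue-cloud")

tiers = {
    "blue-web": "10.10.1.0/24",
    "blue-app": "10.10.2.0/24",
    "blue-db":  "10.10.3.0/24",
}

for name, cidr in tiers.items():
    net = conn.network.create_network(name=name)
    conn.network.create_subnet(network_id=net.id, name=name + "-subnet",
                               ip_version=4, cidr=cidr)
    print(f"created {name} ({cidr}); segment provisioned via BCF plug-in")
```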
Figure 2: Example BCF Deployment Scenarios
Figure 3: BCF Logical Topology
Mapping Logical to Physical
The BLUE Tenant has three logical network segments; each segment represents the broadcast domain for one of the three tiers: Web, App and Database. Let’s say in this example that Web 1,2 and App 1,2 are virtualized workloads while DB 1,2 are physical workloads. Following the rules defined by the data center administrator, the orchestration system provisions the requested workloads across different physical nodes within the data center. As an example, the logical topology shown in Figure 3 could be mapped onto the pod network as shown in Figure 4. The Big Cloud Fabric Controller handles the task of providing optimal connectivity between these workloads dispersed across the pod, while ensuring tenant separation and security.
In order to simplify the example, we only show racks that host virtualized and physical workloads in the figure below, but similar concepts apply for implementing tenant connectivity to the external router and for chaining shared services.
An illustrative sample set of entries in various forwarding tables highlights some of the salient features of the Big Cloud Fabric described in earlier sections:
• L3 routing decision is made at the first-hop leaf switch (no hair-pinning)
• L2 forwarding across the pod without special fabric encapsulation (no tunneling)
• Full load-balancing across the various LAG links (leaf and spines)
• Leaf/Spine mesh connectivity within the physical fabric for resilience
BIG CLOUD FABRIC BENEFITS
Centralized Controller Reduces Management Consoles By Over 30:1
With configuration, automation and most troubleshooting done via the Big Cloud Fabric controller, the number of management consoles involved in provisioning new physical capacity or new logical apps goes down dramatically. For example, in a 16-rack pod with dual leaf switches per rack and two spine switches, a traditional network design would have 34 management consoles. The Big Cloud Fabric design has only one—the controller console—that performs the same functions. The result is massive time savings, reduced error rates and simpler automation designs. As a powerful management tool, the controller console exposes a web-based GUI, a traditional networking-style CLI and REST APIs.
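As an illustration of that single point of management, a short script against the controller’s REST API can inventory every leaf and spine in the pod, a task that would otherwise require a console session per switch. The endpoint path and response fields used here are assumptions for the example, not the documented API.

```python
import requests

# Hypothetical fabric inventory query: one REST call to the controller
# replaces per-switch console sessions.  Path and fields are illustrative.
CONTROLLER = "https://bcf-controller.example.com:8443"

resp = requests.get(CONTROLLER + "/api/v1/fabric/switches",
                    headers={"Cookie": "session_cookie=<token>"},
                    verify=False)
resp.raise_for_status()

# Print a one-line status summary per fabric switch (assumed field names).
for switch in resp.json():
    print(f"{switch['name']:20} role={switch['role']:6} "
          f"state={switch['connection-state']}")
```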
Streamlined Configuration, Enabling Rapid Innovation
In the Big Cloud Fabric design, configuration in the CLI, GUI or REST API is based on the concept of logical tenants. Each tenant has administrative control over a logical L2/L3/policy design that connects the edge ports under the tenant’s control. The Big Cloud Fabric controller has the intelligence to translate the logical design into optimized entries in the forwarding tables of the leaf and spine switches.
Figure 5: Application Centric Configuration
Figure 4: BCF Logical to Physical Mapping
Network/Security/Audit Workflow Integration
The Big Cloud Fabric controller exposes a series of REST APIs used to integrate with application template and audit systems, starting with OpenStack. By integrating network L2/L3/policy provisioning with OpenStack HEAT templates in the Horizon GUI, the time to deploy new applications is reduced dramatically, as security reviews are done once (on a template) rather than many times (on every application). Connectivity edit and audit functions allow for self-service modifications and rapid audit-friendly reporting, ensuring efficient reviews for complex applications that go beyond the basic templates.
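As a sketch of this template-driven approach, the snippet below captures one tier of the BLUE application’s networking as a HEAT template (using the standard OS::Neutron::Net and OS::Neutron::Subnet resource types) and launches it as a stack through the OpenStack SDK; the same template could equally be launched from the Horizon GUI or the heat CLI. The cloud name, resource names and addressing are assumptions made for illustration.

```python
import openstack

# Sketch only: the application's network topology is described once in a HEAT
# template (reviewed once by security), then instantiated per deployment.
# Only the Web tier is shown; cloud name, names and CIDR are illustrative.
template = {
    "heat_template_version": "2016-04-08",
    "resources": {
        "web_net": {
            "type": "OS::Neutron::Net",
            "properties": {"name": "blue-web"},
        },
        "web_subnet": {
            "type": "OS::Neutron::Subnet",
            "properties": {
                "network": {"get_resource": "web_net"},
                "cidr": "10.10.1.0/24",
            },
        },
    },
}

conn = openstack.connect(cloud="blue-cloud")
# Launch the stack; the BCF Neutron plug-in realizes the resulting networks
# as logical segments on the fabric.
conn.orchestration.create_stack(name="blue-3tier-web", template=template)
```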
Bare Metal Switch Hardware Reduces CapEx Costs By Over 50%
By adding up hardware, software, maintenance and optics/cables, a complete picture of the hard costs over three years shows that the savings are dramatic.
Scale-out & Elastic Fabric
The Big Cloud Fabric’s flexible, scale-out design allows users to start at the size and scale that satisfies their immediate needs while future-proofing their growth. By providing a choice of hardware and software solutions across the layers of the networking stack, along with pay-as-you-grow economics, the fabric can start small and grow gradually instead of locking customers into a fully integrated proprietary solution, providing a path to a modern data center network. When new switches are added, the controller brings them into the fabric and extends the current configuration to them, reducing the errors that manual, box-by-box reconfiguration could otherwise introduce. Customers benefit from one-time configuration of the fabric; any workload added beyond that point requires only incremental configuration.
DC-grade Resilience
The Big Cloud Fabric provides DC-grade resiliency that allows the fabric to operate in the face of link or node failures, as well as in the rare situation when the entire controller cluster is unavailable (headless mode). Swapping a switch (in case of HW failure or switch repurposing) is similar to changing a line card in a modular chassis: after re-cabling and power-up, the switch boots by downloading the correct image, configuration and forwarding tables. Additionally, the BCF Controller coordinates and orchestrates the entire fabric upgrade, ensuring minimal fabric downtime. These capabilities further enhance fabric resiliency and simplify operations.
Figure 6: BCF Graphical User Interface (GUI)