Deploying white box (whitebox) switches in the enterprise network – Network Definition
White box and brite box (branded white box) switches share three basic attributes: they are built on commodity hardware, use chipsets (ASICs) from established vendors, and run your choice of network operating system (NOS).
Within that broad definition, however, the options can vary considerably, mostly in terms of the NOS.
At the hardware level, white box switches are based on commodity, “bare metal” hardware from manufacturers such as Accton, Delta Networks, Foxconn and Quanta Cloud Technology. These same players supply hardware to major networking industry vendors such as Cisco, Juniper and Arista, in a standard 1U, 48-port platform that supports speeds up to 100G.
The networking chipsets the hardware is based on likewise come from large, established vendors including Broadcom, Cavium, Intel, Marvell and Mellanox. Chip vendors supply an application programming interface (API), which is used by the NOS to control the ASIC. They also usually include a software development kit (SDK) to program the ASIC, for example, to set up VLANs or ACL entries.
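As a rough illustration of the kind of programming surface an ASIC SDK exposes, the sketch below models VLAN and ACL setup with an invented, in-memory stand-in. The `AsicSdk` class and its method names are hypothetical; real vendor SDKs differ considerably in detail, but the shape of the calls a NOS makes is similar:

```python
# Hypothetical sketch of an ASIC SDK's programming surface.
# The AsicSdk class is an invented in-memory stand-in, not a real vendor API.

class AsicSdk:
    """Minimal stand-in for the VLAN/ACL calls a NOS makes into an ASIC SDK."""

    def __init__(self):
        self.vlans = {}          # vlan_id -> set of member ports
        self.acl_entries = []    # ordered list of match/action rules

    def create_vlan(self, vlan_id):
        self.vlans.setdefault(vlan_id, set())

    def add_vlan_member(self, vlan_id, port):
        self.vlans[vlan_id].add(port)

    def add_acl_entry(self, priority, match, action):
        # Real SDKs program TCAM entries in hardware; here we just
        # keep a rule list sorted by priority (lowest number wins).
        self.acl_entries.append(
            {"priority": priority, "match": match, "action": action})
        self.acl_entries.sort(key=lambda e: e["priority"])

# A NOS would drive the SDK roughly like this:
sdk = AsicSdk()
sdk.create_vlan(100)
sdk.add_vlan_member(100, "eth1")
sdk.add_vlan_member(100, "eth2")
sdk.add_acl_entry(10, {"src_ip": "10.0.0.0/8"}, "permit")
sdk.add_acl_entry(5, {"src_ip": "0.0.0.0/0"}, "deny")
```

The point of the abstraction is that the NOS never touches ASIC registers directly; it expresses intent (VLAN membership, ACL rules) through the SDK, which handles the chip-specific details.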
The main difference between white box switches and models from legacy vendors is the NOS. Most white box switches employ an “open” Linux-based NOS that is intended to be disaggregated, or abstracted, from the underlying network hardware. Hardware-software disaggregation means the user is free to swap out either the hardware or the NOS at will; the two are not tied to one another, as they are with legacy switches from major vendors, which install their own NOSs and thereby turn their offerings into proprietary switches. The approach is analogous to the use of virtual servers, which disaggregate the server OS from the underlying hardware – and provide similar benefits.
This deployment model creates the potential for a portable, open-source NOS that can run on a wide variety of switches from multiple vendors, in contrast to the traditional architecture built around a single legacy networking equipment vendor. The switches are highly programmable, and were originally intended for software-defined networking (SDN) use.
The Open Compute Networking Project, for example, is an effort by the Open Compute Project (OCP) aimed at creating a set of disaggregated, open network technologies, including the Linux-based NOS and developer tools.
Similarly, the Open Network Install Environment (ONIE) is an open source initiative driven by a community of vendors to define an open “install environment” for white box switches. Also a project of the OCP, ONIE is intended to enable an ecosystem where end users can choose among different NOSs and install them on a common set of white box switches in the same manner that they provision servers. Open Network Linux, another OCP project, is one example of such an open-source NOS that uses the ONIE install environment to install into a white box switch’s flash memory.
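To make the ONIE install model concrete, the sketch below generates the ordered list of default NOS-installer filenames an ONIE image searches for, most specific first, based on the naming convention described in the ONIE documentation. The platform strings used in the example are illustrative:

```python
# Sketch of ONIE's default-installer filename waterfall (most specific
# first), following the naming convention in the ONIE documentation.

def onie_installer_candidates(arch, vendor, machine, machine_rev):
    """Return NOS installer filenames ONIE looks for, in search order."""
    base = "onie-installer"
    return [
        f"{base}-{arch}-{vendor}_{machine}-r{machine_rev}",  # exact platform
        f"{base}-{arch}-{vendor}_{machine}",                 # platform, any rev
        f"{base}-{vendor}_{machine}",                        # vendor/machine
        f"{base}-{arch}",                                    # any box of this arch
        base,                                                # generic fallback
    ]

# Illustrative platform values; real ones come from the switch's ONIE image.
for name in onie_installer_candidates("x86_64", "accton", "as5712_54x", "0"):
    print(name)
```

This waterfall is what lets one HTTP or TFTP server provision a mixed fleet: each switch asks for the most specific installer that matches its platform and falls back toward the generic name.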
Depending on the exact approach, white box network switches are capable of simplifying network deployment, operations and support. The Pica8 PICOS network operating system, for example, enables a flatter network architecture that extends the leaf-spine network architecture that has been used successfully in data centers to the wider network (see diagram).
Additionally, the switches provide increased flexibility for optimized hardware and software at lower costs for many enterprises.
As deployment of multi-vendor white box switches becomes increasingly common in distributed campus environments and in remote offices at the access edge, a need has emerged for an effective automation framework.
Enterprise network automation is an element of software-defined networking (SDN). In an SDN, a network controller handles network control and forwarding functions based on automated, policy-based software programs.
But an enterprise automation framework handles other aspects. It must seamlessly span a full-scale network deployment, enabling admins to deploy and manage switches from either a simplified GUI or a centralized CLI. Such an in-band, easy-to-use automation framework brings powerful new capabilities, not only accelerating and standardizing day-to-day network tasks but also supporting the zero-touch provisioning needed to run today’s large networks.
For example, Pica8’s Automation Framework for the enterprise, paired with the company’s PicaPilot switch orchestration and management software, slashes the operational overhead of ongoing configuration management, upgrades, and policy and security changes. Pica8’s built-in framework not only reduces OpEx/CapEx costs, it also allows the integration of any number of Pica8-supported multi-vendor 1G-to-100G open white box/brite box switches.
With the size of networks continuing to expand, the implementation of network automation has become a core feature in the enterprise.
Pica8’s Automation Framework automates tasks including network switch provisioning, configuration, licensing and other ongoing management functions.
Pica8’s modern model comprises the following:
- Single IP management – By taking today’s per-switch IP addressing management and aggregating it under one IP address, it eliminates the operational overhead associated with managing a large network.
- Quick-start GUI – Activation and configuration of hundreds of remote switches is so simple it can be performed by entry-level IT personnel (no coding experience required).
- Centralized CLI – Enables network portability by providing a centralized standard command line interface for advanced orchestration services and/or network services.
- Auto/ZTP provisioning solution – Provides auto-provisioning and configuration of an entire network during deployment as well as providing network-wide visibility.
- Aggregated management solution – Aggregates Syslog and SNMP management to view and filter on various parameters for troubleshooting and monitoring purposes.
- Redundant network connectivity – Provides improved performance and high-availability redundancy by using Multi-Chassis Link Aggregation (MLAG) to address insufficient uplink bandwidth issues.
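To illustrate the aggregated-management idea above, the sketch below filters a combined syslog feed from several switches by severity and source. The severity numbering follows the standard syslog scale (0 = emergency through 7 = debug), but the record format and sample messages are invented for illustration:

```python
# Sketch of filtering an aggregated syslog feed by severity and source.
# The record format and messages are simplified, invented stand-ins.

SEVERITY = {"emerg": 0, "alert": 1, "crit": 2, "err": 3,
            "warning": 4, "notice": 5, "info": 6, "debug": 7}

def filter_logs(records, max_severity="err", host=None):
    """Keep records at or above the given severity, optionally from one host."""
    threshold = SEVERITY[max_severity]
    return [r for r in records
            if SEVERITY[r["severity"]] <= threshold
            and (host is None or r["host"] == host)]

logs = [
    {"host": "sw-access-1", "severity": "info", "msg": "port eth3 up"},
    {"host": "sw-access-1", "severity": "err",  "msg": "MLAG peer link down"},
    {"host": "sw-spine-1",  "severity": "crit", "msg": "fan failure"},
]

for r in filter_logs(logs):
    print(r["host"], r["msg"])
```

Aggregating first and filtering centrally is what makes this useful at scale: one query surfaces, say, every error-or-worse event across hundreds of access switches instead of requiring a per-box login.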