What Would You Do With Two Million Flows?

Today, Pica8 announced support for Table Type Patterns (TTP) in PicOS, our leading SDN operating system. The premise of this announcement is that with TTP, network engineers and operators can now implement SDN at greater scale – in some cases, up to two million flows (a 1,000x increase from previous methodologies) – while still using standard, white box hardware.

The magic of the technology is how PicOS can seamlessly leverage the capabilities of different switch ASICs. This empowers users with greater choice, and enables them to take advantage of unique capabilities of the ASIC they choose – such as memory space, programmable pipelines, and table management.

In terms of how we achieve greater flow scale with TTP, it’s similar to what I wrote about OpenFlow scale last year: all tables within the ASIC (VLAN, MAC, IP, TCAM, etc.) are exposed and can be programmed via OpenFlow. But what’s more interesting is how we are seeing customers put this functionality to use.
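To make that concrete, here is a minimal sketch of the idea. The table names, IDs, and flow-mod fields below are hypothetical illustrations, not PicOS's actual API; in practice the switch's TTP document describes which tables exist and what each can match on.

```python
# Illustrative only: with TTP, each ASIC table is exposed as an OpenFlow
# table with its own ID and match capabilities. IDs here are made up.
TABLES = {
    "vlan": 10,  # VLAN assignment/filtering table
    "mac": 20,   # L2 MAC table
    "ip": 30,    # L3 routing table (large, LPM-based)
    "acl": 60,   # TCAM/ACL table (small, flexible matches)
}

def flow_mod(table, match, actions, priority=100):
    """Build a simplified OpenFlow flow-mod aimed at a specific ASIC table."""
    return {
        "table_id": TABLES[table],
        "priority": priority,
        "match": match,
        "actions": actions,
    }

# Steer a host route into the large L3 table instead of the small TCAM,
# leaving TCAM entries free for exception rules. This table-aware placement
# is what drives the flow-scale increase.
rule = flow_mod("ip", {"ipv4_dst": "10.0.0.5/32"}, [{"output": 4}])
```

Because exact-match tables like the MAC and IP tables are far larger than the TCAM, placing rules into the right table is what turns thousands of flows into millions.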

Example 1: Cloud Brokering

For ISPs, automation and self-service portals are nirvana for the OpEx reduction alone. If a customer wants to increase their bandwidth from 10Mbps to 100Gbps, but only wants to do it from 8:00am – 5:00pm, and also wants to apply a firewall filter and a QoS policy, this would be hard to do quickly with standard network provisioning and protocols.

ISPs are therefore looking at OpenFlow to achieve this level of automation and granular control. The network uses Layer-2 and Layer-3 protocols as the baseline transport, and OpenFlow rules define the exception-based forwarding that end users want.
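A self-service portal might translate the request above ("more bandwidth, 8:00am – 5:00pm, plus a firewall filter and QoS") into exception rules along these lines. This is a hedged sketch: the field names, the `rules_for` helper, and the push/expire mechanics are all illustrative assumptions, not Pica8's actual provisioning interface.

```python
from datetime import datetime, time

# Hypothetical customer request captured by a self-service portal.
request = {
    "port": 7,             # customer-facing switch port (assumed)
    "rate_mbps": 100_000,  # upgraded rate for the window
    "start": time(8, 0),
    "end": time(17, 0),
}

def rules_for(req, now):
    """Return the OpenFlow-style exception rules to install at time `now`."""
    if not (req["start"] <= now.time() < req["end"]):
        return []  # outside the window: baseline L2/L3 forwarding only
    return [{
        "match": {"in_port": req["port"]},
        "actions": [
            {"meter_mbps": req["rate_mbps"]},  # QoS: rate-limit via a meter
            {"goto": "firewall_table"},        # then apply the filter policy
        ],
    }]

# At 09:00 the exception rule is active; at 18:00 the customer's traffic
# falls back to the baseline transport with no rule installed.
active = rules_for(request, datetime(2015, 6, 1, 9, 0))
idle = rules_for(request, datetime(2015, 6, 1, 18, 0))
```

The key point is that the baseline network never changes; the portal only adds and removes exception rules, one small set per tenant, which is why rule count grows quickly with multi-tenancy.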

When considering the requirements of multi-tenancy, dynamic VLANs, virtualized services, and scale, it’s easy to see why scaling the number of OpenFlow rules would be important in this scenario.

Example 2: Elephant and Mice Flows in the Data Center

Handling elephant and mice flows in the data center is a well-known problem statement. Data center networks have standardized on some variant of spine-and-leaf architectures, which makes perfect sense when it comes to east-west traffic and the ability to quickly add scale.

However, problems can still arise when it comes to handling flows of different sizes and how packets get queued when bandwidth is at a premium. At its heart, it’s a traffic-engineering problem. The beauty of using OpenFlow in this instance is that it does not disrupt what is already working with the spine-and-leaf architecture. Whether it’s MLAG, BGP, or any other standard protocol, our customers have been able to strategically stitch in OpenFlow rules to handle these elephant flows as special cases.
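The "stitch in" pattern can be sketched as follows. The threshold, field names, and queue/uplink choices here are hypothetical examples, not a description of any specific customer deployment: the idea is simply that only elephants get a higher-priority exception rule, while mice fall through to the existing MLAG/BGP forwarding untouched.

```python
# Illustrative sketch: poll flow byte counters and install a higher-priority
# OpenFlow rule only for elephant flows. All values below are assumptions.
ELEPHANT_BYTES = 10 * 1024 ** 2  # assumed cutoff: 10 MB observed so far

def exception_rules(flow_bytes):
    """flow_bytes maps a flow key (e.g. a 5-tuple) to bytes observed."""
    rules = []
    for flow, nbytes in flow_bytes.items():
        if nbytes >= ELEPHANT_BYTES:
            rules.append({
                "priority": 200,                     # above the baseline rules
                "match": {"flow": flow},
                "actions": [{"set_queue": 1},        # park it on a deep queue
                            {"output": "uplink2"}],  # pin to a spare uplink
            })
    return rules

stats = {
    ("10.0.0.1", "10.0.0.2", 6, 49152, 5001): 900 * 1024 ** 2,  # elephant
    ("10.0.0.3", "10.0.0.4", 17, 53000, 53): 4 * 1024,          # mouse
}
rules = exception_rules(stats)  # only the elephant gets a special-case rule
```

Since each special-cased flow consumes one rule, racks dense with virtual machines are exactly where the larger rule budget that TTP unlocks matters.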

And again, considering the number of virtual machines and workloads in data center racks today, it’s easy to see why the ability to increase the number of OpenFlow rules is important.

So while adding scale is always good, it’s what you can now do with that scale that really gets our customers excited. These are just two examples of what our Pica8 customers are doing with Layer-2 and Layer-3 networks with OpenFlow. We’re also talking to many other customers and partners who are looking at interesting use cases around multicast convergence, NFV gateways, and disaster recovery (DR) as a service. We plan to share more and write about these cases in the coming months.

Until then, just imagine what you could do with two million flows.