Is The Global Chip Shortage Disrupting Your Network Upgrade Plans? Pica8 Might Be The Key to Your Network Challenges
Top 3 Telecom Provider with a Heavy Retail Presence Selects Pica8 over Cisco
In 2017, a top 3 telecom provider with a heavy retail presence (more than 4,000 stores in the U.S.) made a business decision that has been delivering solid dividends ever since: It started replacing some 5,000 Cisco Catalyst switches in its stores with Pica8’s PicOS® Software Switches on white box hardware. The strategy is saving the company about 50% in capital and support costs. But cost savings accounted for only about half the rationale for making the change. The open strategy also gave the company the flexibility it wanted to implement a multi-vendor network with Aruba WiFi and to enable an automation strategy, based on Pica8’s AmpCon™ Controller, that includes zero-touch switch provisioning in its stores and simplifies ongoing switch lifecycle management, thereby reducing operational costs.

The decision to replace Cisco

Five years ago, as part of its regular, rotating access switch refresh process, the customer examined its costs for upgrading the Cisco 3850 switches in its stores. The 50% cost savings the company identified came from replacing its Cisco switches with Pica8 PicOS Software Switches deployed bare metal on comparable Power over Ethernet (PoE) white box switches from Edgecore. Pica8 software runs on an unmodified Debian Linux-based kernel. White box switches are built by the same ODMs on the same basic chipset platforms as switches from Cisco, Juniper and the like but, without the brand name, cost far less. Support costs are also considerably lower than what the customer was paying Cisco. With Edgecore/PicOS switches installed in about 1,400 stores to date, and more on the way, the savings have panned out as expected.

On top of the cost savings, three other factors collectively played an equally important role in the decision. First, the company had plans to build an automation framework that would enable it to manage its entire IT infrastructure, including the network, from a single centralized console.
The company wanted any new switches to support network automation tools with open APIs that would integrate with that framework, rather than Cisco’s proprietary automation platform and tools. Second, even if the customer did decide to use Cisco’s automation tools (which eventually became DNA Center), in many cases doing so would mean incurring the additional expense of upgrading to the latest switch and software versions. This would require a massive unplanned upgrade and paying for features and functions the customer didn’t need in its stores. Finally, the customer said it was not getting the level of responsiveness from Cisco that it needed in terms of addressing bugs and issues with different features. In contrast, Pica8 consistently receives compliments on its responsiveness in our quarterly business review calls.

Pica8 passes the security test

After it made the decision to go with an open switch approach, the company held a proof-of-concept bake-off between Pica8’s PicOS and Cumulus Linux. Pica8 won out largely on the strength of our ability to address certain access layer protocols that are critical to ensuring security and robustness in the access layer. These protocols address issues like user authentication, guarding against malformed packets, and withstanding bombardments of spam traffic that can bring down switches – all issues of concern in access environments such as retail stores. Pica8 has been servicing campus and access networks since its inception, so we had no problem meeting the customer’s requirements. The competition had its roots in data center networks, where such issues don’t generally come up.

Automation makes the grade

Pica8 was also able to deliver an automation framework that fit the customer’s enterprise IT and network management vision. Our AmpCon Controller enables several capabilities that help the customer cut its network operational expenses and enhance operational reliability, including zero-touch deployment.
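As a rough illustration of what such a zero-touch flow involves, here is a minimal Python sketch. The `Switch` fields, the `ztp_provision` function, and the `server` record are illustrative assumptions for this post, not the actual AmpCon agent API.

```python
from dataclasses import dataclass, field

@dataclass
class Switch:
    serial: str
    os_version: str
    licensed: bool = False
    config: str = ""
    online: bool = False
    log: list = field(default_factory=list)

def ztp_provision(switch, server):
    """Run one zero-touch workflow: update software, license, configure, bring up."""
    # 1. Update software if the server specifies a newer target version.
    if switch.os_version != server["target_os"]:
        switch.os_version = server["target_os"]
        switch.log.append("software-updated")
    # 2. Install the license assigned to this serial number.
    switch.licensed = True
    switch.log.append("license-installed")
    # 3. Push the pre-loaded, per-site configuration.
    switch.config = server["configs"][switch.serial]
    switch.log.append("config-applied")
    # 4. Bring the switch up on the network.
    switch.online = True
    switch.log.append("online")
    return switch
```

The point of the single workflow is that every step happens in order, unattended, once the switch phones home; no on-site technician touches any of it.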
Installing a new switch in one of its stores is a simple process for the customer: plug it in, connect it to the network and turn it on. From there, an agent on the switch enables it to find the AmpCon server, update the software if necessary, add licensing, install a pre-loaded switch configuration and bring the switch up on the network in a single deployment workflow. That means the customer can quickly deploy switches with no truck roll or on-site technician required – an important consideration given that its stores generally don’t have IT staff on-site.

AmpCon also helps with ongoing operational tasks, including configuration backups and software updates. The customer can queue up software upgrades for all switches in a given region, for example, and execute them overnight with the push of a button. Should a switch fail, AmpCon can execute an RMA workflow to push its configuration to a replacement switch. It will also flag any switches whose support is scheduled to expire soon, and automatically install updated licenses with support extensions.

Reliable switches are essential to any company with distributed sites. They support network traffic and IT infrastructure that is critical to keeping sites functional, so they need enterprise-level features that promote reliability and ease operations. As our work with this large customer shows, Pica8 understands those requirements well and can deliver on them even under the most demanding circumstances. And our open approach means you not only save on costs but can take advantage of the flexibility that is inherent to open networking.

To learn more, read our white paper, “An Enterprise Approach to White Box Networking.” Or, if you have questions, feel free to get in touch.

Niraj Jain is the COO of Pica8.
Want NAC without Cisco’s Financial Handcuffs? Then You Want Pica8
In recent years, we’ve seen a real upsurge of interest in deploying network access control (NAC) in the enterprise campus. We’re not at all surprised by this, given that the network is now used by more types of devices (IoT, security cameras, BYOD) and users (students, employees, contractors) than ever before. What’s more, organizations are more conscious about securing their networks and data – rightfully so – and the capabilities and enhancements NAC has gained over the years make it an intriguing option. But is it millions of dollars’ worth of intriguing? As we understand it, enterprises contacting Cisco about NAC are likely to find themselves being funneled toward Cisco DNA Center, whether they want that expensive network management platform or not. It’s not that Cisco DNA is required for customers seeking to implement NAC, but you can expect a hard sell in that direction.

We believe we’ve got a far better solution, and we know we’ve got one that’s less expensive and based on open networking principles. With Pica8, NAC is built right in. It’s part of PICOS, our open NOS. All that’s required is a bit of configuration – defining policy for users and devices on the server, and some straightforward configuration on the switches. The result is a network protected by PICOS-enabled switches acting as front-line sentries, tightly integrated with and controlled by the NAC server as the security policy “brain” of the network.

NAC’s growing impact

While NAC started life as a configuration tool for dial-up access servers, it now serves as a vital network-wide policy management tool for enterprise networks. NAC can individually authenticate both a device (via certificate or MAC address) and that device’s user (typically via credentials). Attempts that seem suspicious can be quarantined, while those that aren’t are assigned to the appropriate VLAN (by department or business unit, for example).
Next, the appropriate authorizations are applied, ensuring that a given user can access only predetermined data and areas of the network. Today, these traditional functions are just the beginning for NAC; there’s good reason NAC servers are now more accurately called policy servers. Modern NAC solutions consider the full context of the network access request – including user, device, time and location – when making access decisions. Users logging in from a personal device or off-premises location (and isn’t that important in a post-pandemic world?) may face reduced access privileges compared to those logging in from company-issued laptops. Additionally, the NAC server can be connected to DNS servers, DHCP servers and other network infrastructure components, from which it collects as much user information as possible. Using that data, it constantly monitors back-end system activities. If a user begins to act in a suspicious manner, the NAC server can shut down that port, notify the questionable user’s manager, and take other actions – all on the fly, in real time.

In short, we support a comprehensive set of NAC services in an affordable, easily managed manner. It’s rare for an enterprise to deploy, for example, Aruba wireless but Cisco NAC, or vice versa. But Pica8 PICOS switches can fit well in either environment, because they’re built on open networking principles, and we’ve focused on ease of integration. That’s the beauty of open networking.

IBN without the pain

Just as Pica8’s NAC approach is more straightforward than our competitors’, so too is our intent-based networking, which is garnering more and more interest (though its definition and operation remain a moving target). IBN is just a much easier lift with Pica8. Our AmpCon framework, available for a fraction of what Cisco DNA costs, replaces legacy vendor-driven operational complexity with simplicity, addressing the networking-specialist skills shortage and reducing operating expenses along the way.
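The context-aware NAC decision described earlier – authenticate the device, authenticate the user, then grant, restrict, or quarantine based on context – can be sketched in a few lines. This is an illustrative toy, not PICOS or any NAC vendor’s API; all names and the policy rules are assumptions for the example.

```python
def nac_decision(request, known_devices, credentials):
    """Return an (action, vlan) pair for an access request.

    request       -- dict with device_mac, user, password, on_premises, department
    known_devices -- dict mapping enrolled MAC addresses to "corporate" or "byod"
    credentials   -- dict mapping usernames to expected passwords
    """
    device = request["device_mac"]
    user = request["user"]
    # Unknown device: quarantine it for inspection rather than admit it.
    if device not in known_devices:
        return ("quarantine", "vlan-quarantine")
    # User credentials must check out.
    if credentials.get(user) != request["password"]:
        return ("deny", None)
    # Personal device or off-premises login gets reduced privileges.
    if known_devices[device] == "byod" or not request["on_premises"]:
        return ("restricted", "vlan-guest")
    # Company-issued device, on-site: full access on the department VLAN.
    return ("allow", f"vlan-{request['department']}")
```

A real policy server evaluates far richer context (posture, time of day, back-end telemetry), but the shape of the decision is the same: every request maps to an action plus a VLAN assignment.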
Taken together, Pica8’s NAC solution and AmpCon IBN serve as another example of Pica8’s flexibility, accessibility and affordability. We believe it’s time for organizations to break free of vendor networking lock-in and take-it-or-leave-it pricing. For more technical details on NAC, see this previous blog post. To learn more about the Pica8 vision for IBN, download our white paper, “An Open Approach to Implementing Intent-Based Networking.” Click here to try out PICOS or request a demo.
Tech Talk: Why Telemetry is Critical to Successful Intent-Based Networking
Interest in intent-based networking (IBN) continues to grow, even as the definition remains vague and varies by vendor. In this post, I’d like to offer my take on one element that I believe will be important for any successful IBN implementation: efficient and effective open telemetry. Telemetry is key to data collection and monitoring which, in turn, is crucial for managing not only networks, but everything the network supports, including applications and storage systems. With effective telemetry, we can automate responses to network issues to ensure reliability, uptime and performance.

Open telemetry with gNMI

At its core, telemetry involves two things:

- The continuous collection of data from networking devices, such as switches and routers
- Ensuring all data collected is time-stamped

Continuous data collection is important in order to be able to diagnose issues on the network, including those in the past, and to establish a baseline of “normal” network performance. The timestamp is important to establish when a given incident occurred, and the time differential between different, related incidents. Together, the two elements enable you to identify trends and, soon, to diagnose issues in real time using artificial intelligence tools.

The real challenge with telemetry is coming up with an efficient way to collect data while handling data traffic in parallel, which is exactly the issue gNMI (gRPC Network Management Interface) is designed to overcome. gNMI is an open source (OpenConfig project) unified management protocol for streaming telemetry and configuration management that leverages the open source gRPC framework. This means a single gRPC service definition can cover both configuration and telemetry. The gNMI service defines operations for configuration management, operational state retrieval, and bulk data collection via streaming telemetry.
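The two essentials above – continuous collection and timestamping – can be sketched in a few lines of Python. This is purely conceptual; a real collector would receive these samples over a gNMI subscription stream rather than build them locally, and the record fields here are illustrative assumptions.

```python
import time

def collect_sample(path, value, clock=time.time):
    """Wrap one device metric in a timestamped record (conceptual gNMI update)."""
    return {"path": path, "value": value, "ts": clock()}

def time_between(samples, path_a, path_b):
    """Time differential between the latest samples of two related metrics.

    This is what timestamping buys you: correlating when related
    incidents occurred, even when analyzed long after the fact.
    """
    latest = {}
    for s in samples:              # later samples overwrite earlier ones
        latest[s["path"]] = s["ts"]
    return abs(latest[path_a] - latest[path_b])
```

With every sample stamped at the source, a central analyzer can order incidents across devices and measure the gaps between them without trusting when the data happened to arrive.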
Spawning new management capabilities

This efficient data-collection capability is important because it will give users management capabilities they simply have not had up to now. It will enable us to use sFlow to monitor network devices in real time, while using gNMI to efficiently send the data to a central data collection server for storage and analysis. To date, sFlow has been used mainly for non-real-time historical analysis and troubleshooting, because there simply wasn’t an efficient way to get sFlow data to a server.

This capability will enable users to do things like set thresholds and get alarms for any criteria they like. In the past, most alarm criteria were defined by the switch vendor, for conditions like the CPU running too hot or memory running low. These things rarely happen in switches, but when they do you certainly want to know. It would be preferable, however, to monitor CPU utilization by dynamically adjusting the threshold based on the telemetry data itself. For example, if the CPU normally runs at 10% during off-shift hours but spikes to 60% on a particular night, the telemetry data analyzer could automatically generate an alarm. However, if a CPU usually runs 70% busy around 9 a.m. because all staff are being authenticated at that hour, the telemetry analyzer would not trigger an alarm, even if the CPU hit 80%. This is just one example of how telemetry data can help the analyzer observe and learn. The same technique can just as easily be applied to network security or performance monitoring.

Event logs are another consideration. Right now, you can set up a system log server to collect data from a network device event log. But you can’t analyze it in real time because, again, there’s no efficient way to get the data to the server. gNMI provides just that mechanism. In short, gNMI provides a way to implement telemetry in an open, efficient, standards-based way.
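The adaptive-threshold idea in the CPU example above can be sketched as follows: learn a per-hour baseline from past readings and alarm only on a sharp deviation from that baseline, rather than on a fixed vendor-set limit. The class name, the simple mean baseline, and the 30-point margin are illustrative assumptions.

```python
from collections import defaultdict

class AdaptiveThreshold:
    """Alarm when a reading deviates sharply from the learned hourly baseline."""

    def __init__(self, margin=30.0):
        self.margin = margin                 # allowed % points above baseline
        self.history = defaultdict(list)     # hour-of-day -> past CPU readings

    def baseline(self, hour):
        readings = self.history[hour]
        return sum(readings) / len(readings) if readings else None

    def observe(self, hour, cpu_pct):
        """Record a reading; return True if it should raise an alarm."""
        base = self.baseline(hour)
        alarm = base is not None and cpu_pct > base + self.margin
        self.history[hour].append(cpu_pct)
        return alarm
```

With this logic, a 60% spike during quiet 2 a.m. hours (baseline ~10%) raises an alarm, while 80% during the 9 a.m. login rush (baseline ~70%) does not – exactly the behavior described above.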
We are working on integrating it with our Linux-based network operating system, PICOS, so you’ll soon have an efficient way to stream data out of any network device.

Automated troubleshooting and response

While streaming telemetry is new for networks, it has long been used with servers and other IT infrastructure. So, once we have widespread availability of streaming telemetry from network elements, network and IT managers will have much better visibility into the state of not just the network, but the entire IT infrastructure. This is where we can expect AI to play a role, by analyzing the available data in order to identify the root causes of problems. For decades, we’ve all struggled to troubleshoot application performance problems. Was it actually a problem with the network, or was it the server or the application code? By constantly collecting data from all the pieces of the IT infrastructure and applying AI, we can finally start answering those important remediation questions – in real time.

The next step is to automate the response, which is what IBN is all about. You collect and analyze all the event data and, if it is a network problem, IBN automatically adjusts data paths to correct it. But none of this is feasible if you can’t efficiently collect the data in the first place. IBN also doesn’t work effectively if you’re not combining network data with event data from other components, including servers and storage systems. So, it won’t be just a single IBN server managing the network, but lots of servers monitoring different components. This is the IBN vision Pica8 is working towards. I fully expect we’ll have the gRPC-gNMI piece done around the end of the third quarter of this year. From there, you can use our AmpCon open network services platform, or your own controller, to implement it. That’s what open networking is all about – choice.
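The collect-analyze-respond loop just described can be reduced to a toy dispatcher: classify each fault by component, and trigger a network remediation only when the network itself is at fault. This is entirely illustrative; real IBN controllers make far richer decisions.

```python
def ibn_loop(events, reroute):
    """Dispatch events; call reroute(event) only for network faults.

    events  -- iterable of dicts with at least "id" and "component" keys
    reroute -- callback that adjusts data paths for a network fault
    """
    actions = []
    for event in events:
        if event["component"] == "network":
            reroute(event)                       # e.g. adjust data paths
            actions.append(("reroute", event["id"]))
        else:
            # Server or application faults are escalated, not rerouted.
            actions.append(("escalate", event["id"]))
    return actions
```

The key point the loop makes concrete: without event data from servers and storage alongside network data, the dispatcher cannot even tell which branch to take.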
To learn more about the Pica8 vision for IBN, download our latest white paper, “An Open Approach to Implementing Intent-Based Networking.” Click here to try out PICOS or request a demo.
Welcoming Walmart to the Open Networking Community
Want to catch the latest in open networking? Head to your local Walmart store. It appears Walmart has bought into open networking in a big way. The company recently joined LF Networking (LFN), the collaboration ecosystem for open source networking projects that is part of the Linux Foundation.

“By joining LFN, Walmart has the opportunity to contribute, influence the cloud growth and better support the enterprise and service provider communities by open-sourcing innovative technologies across its retail infrastructure,” said Koby Avital, Executive Vice President, Walmart Global Tech.

Those comments were echoed by Subhadra Tatavarti during her (virtual) appearance at the Open Networking & Edge Executive Forum 2021. Now with Wipro Ltd., at the time of the conference Tatavarti was Sr. Director of Technology Commercialization at Walmart. She made it clear the move to open source was driven in part by the pandemic, which accelerated a trend that had already started: more traffic at the network edge. Traditionally, the bulk of Walmart transactions were in-store, not online. “What Covid has done is completely flip it for us. We’ve seen up to a 76% increase in e-commerce traffic,” Tatavarti said. (Many other companies experienced the same sort of change in traffic behavior, as I wrote about previously.)

The increase in online activity was accompanied by another change in customer behavior. “More and more customers were buying online but picking up in store,” she said. In essence, stores were becoming fulfillment centers, which drove the need for additional technology in-store, such as for customer check-in and supply chain management.

Like many enterprises, Walmart has been adopting a hybrid cloud strategy. The company has 1.7PB of analytics data in the cloud, for example, and its check-out applications are also cloud-driven, Tatavarti said.
Over the past few years, the company has also been investing heavily in artificial intelligence (AI) and machine learning (ML) applications to drive operational efficiency. In thinking about the company’s edge strategy, Walmart had to decide which workloads would be hosted in the cloud vs. at the edge. “Workloads that can be deployed on edge are those that require low latency and fast compute,” she said. “Workloads like AI and AR/VR [augmented reality/virtual reality] need a fairly robust edge platform.” The same goes for video security systems and AI/ML-driven inferencing engines.

This should sound like a familiar story. It’s been Pica8’s position for some time that the explosive growth of enterprise applications like AI/ML, IoT, digital health and smart buildings has outrun the ability of big-iron networking infrastructure to adapt to a distributed workforce/customer environment with Wi-Fi 6 as the last mile. Covid’s redistribution of the workforce has simply moved the problem to the top of the heap.

But the way Walmart addressed the issue should give comfort to any company that has been considering an open source approach to networking – and other infrastructure. “It’s almost impossible to run an extremely large organization without the latest, greatest technology,” Tatavarti said. Which is why Walmart is a “huge consumer of open source technology,” including Node.js, OpenStack, Cassandra, Hadoop, and Cloud Native Computing Foundation projects. She’s right, of course. The beauty of open source software is that it’s constantly being updated, thanks to the contributions of the open source community. By adopting open source software, you’re basically taking advantage of the expertise of engineers at hundreds of large companies like Walmart.
What’s more, with an overall open networking strategy, you don’t have to wait for the next release from a legacy vendor in order to take advantage of the “latest, greatest technology.” With disaggregated networking, you can update your network operating system whenever it makes sense to do so from a feature/function perspective, while continuing to make full use of open source tools like Ansible, Puppet and Chef. The same goes for the underlying network hardware. In terms of technology, that can put you months if not years ahead of legacy vendors with their outdated hardware and software upgrade schedules.

Walmart has a GPU in every store to support security applications and payment and checkout systems, including self-checkout, all of which are highly sensitive to latency. The fact that the company is using open source edge network technology should be seen as a vote of confidence in open networking. And adopters stand to benefit from the work Walmart is doing. One example Tatavarti cited is eBPF, which enables programmability within the Linux kernel to add various features and functions. (Pica8’s PICOS runs an unmodified Linux kernel to enable just this type of use.) Walmart used eBPF to control the number of concurrent connections coming in and manage them effectively, delivering a good customer experience without crashing the network. She also made reference to eBPF-based technologies for deploying network functions and for observability.

“We are in the early phase of figuring out how to give back what we have done with eBPF to the developer community,” Tatavarti said. In terms of the open source community, “We hope we can become bigger partners and play a bigger role going forward.”

So, on behalf of Pica8, the company that delivered the first open, Linux-based NOS in 2012, let me say: welcome aboard, Walmart.
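The connection-limiting idea Tatavarti described can be illustrated with a toy admission check. Walmart’s real implementation runs as an eBPF program inside the Linux kernel; this Python sketch only shows the admission logic, and every name in it is an illustrative assumption.

```python
class ConnectionLimiter:
    """Cap concurrent connections so overload sheds new arrivals gracefully."""

    def __init__(self, max_concurrent):
        self.max_concurrent = max_concurrent
        self.active = set()      # identifiers of currently open connections

    def admit(self, conn_id):
        """Admit a new connection unless the concurrency cap is reached."""
        if len(self.active) >= self.max_concurrent:
            return False         # shed load instead of crashing the service
        self.active.add(conn_id)
        return True

    def close(self, conn_id):
        """Release a finished connection, freeing a slot for new arrivals."""
        self.active.discard(conn_id)
```

Doing this check in the kernel with eBPF means excess connections are turned away before they consume application resources, which is what keeps the customer experience intact under surges.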
If you want to learn more about what open, disaggregated white/brite box networking can do for your network, check out our white paper, “An Enterprise Approach to White Box Networking.”