If Rip Van Winkle had gone to sleep around 2006 and woken up 10 years later, he'd find the world a strange brew of the new and the old. He'd be amazed that phones had grown a brain, and dismayed that a most excellent rendition of the Dark Knight had wandered back to the wasteland, as most Dark Knight capers do. People had warmed up to electric cars, but not to climate change. And if Ol' Rip were a network operations guy at one of the large webscale companies, he might think he'd died and woken up in heaven. Networks were no longer slow as molasses to deploy, manage, and upgrade. He'd find some things had stayed the same (IPv4 still ruled the roost), and others not so much. He would be puzzled by the terminology and the discussions as he wandered the hallways: SDN, open networking, OpenFlow, microservices, Ansible, Puppet, Kubernetes, and so on.
This tutorial is an attempt to bring folks up to speed on what's happened with networking in the past 10 years or so, especially in the data center, concluding with some thoughts on why exciting times lie ahead. The talk will be roughly divided into the following sections:
The tutorial will include demos and hands-on work with some modern tools.
The audience is expected to be familiar with basic networking concepts (bridging, routing, broadcast, multicast, etc.).
The key takeaways from this talk will be:
Some preliminary ideas for hands-on work:
eBPF (extended Berkeley Packet Filter) is a modern kernel technology that can be used to introduce dynamic tracing into a system that wasn't prepared or instrumented in any way. The tracing programs run in the kernel, are guaranteed never to crash or hang your system, and can probe every module and function -- from the kernel to user-space frameworks such as Node and Ruby.
In this workshop, you will experiment with Linux dynamic tracing first-hand. First, you will explore BCC, the BPF Compiler Collection, which is a set of tools and libraries for dynamic tracing. Many of your tracing needs will be answered by BCC, and you will experiment with memory leak analysis, generic function tracing, kernel tracepoints, static tracepoints in user-space programs, and the "baked" tools for file I/O, network, and CPU analysis. You'll be able to choose between working on a set of hands-on labs prepared by the instructors, or trying the tools out on your own test system.
Next, you will hack on some of the bleeding edge tools in the BCC toolkit, and build a couple of simple tools of your own. You'll be able to pick from a curated list of GitHub issues for the BCC project, a set of hands-on labs with known "school solutions", and an open-ended list of problems that need tools for effective analysis. At the end of this workshop, you will be equipped with a toolbox for diagnosing issues in the field, as well as a framework for building your own tools when the generic ones do not suffice.
This workshop is part of the "full lifecycle" workshop track, which includes Post-Mortems, Incident Response, and Effective Design Review Participation. Using several example cases, participants in this session will learn to apply a variety of points of view to analyze a design for issues that could affect its reliability and operability.
The sample designs and playlist can be found at https://goo.gl/VIiN6i - now updated with the comments and suggestions that came in during the workshop.
Participants will have the opportunity to try their hand at designing a reliable, distributed, multi-datacenter, near-real-time log processing system.
The session will start with a short presentation on lessons learned about designing reliable distributed systems; participants will then break out into small groups, assisted by Google facilitators, and work through a real-world design challenge, from high-level architecture down to an estimate of the computing resources required to run the service.
The session will likely appeal to experienced engineers who want to have fun tackling a real-world design problem faced by many teams at Google.