What tools should I use? Part 1

By
Charles Cary
January 15, 2020

We started Shoreline in January 2019 with the goal of developing tools to empower software service operators. As we move into our next stage, getting our product out to customers and growing as a company, we want to provide some insight into who we are and how we can be a resource to the ops community.

This is the first in a series of posts where we’ll detail our journey up to this point and our plans for the future through the frame of product development.

The Pipeline

As operators, it’s very easy for us to draw parallels between software delivery and industrial manufacturing. For example, Ford-style assembly line manufacturing has a sequence of transformation steps that turn raw materials into usable goods. This is conceptually similar to the idea of a build pipeline transforming raw code through linting, type checking, compilation, testing, and deployment into usable software.
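As a minimal sketch of that analogy (the stage names and commands below are placeholders, not any particular project’s toolchain), a build pipeline can be modeled as an ordered list of stages that each check or transform the code and halt the line on the first failure:

```python
import subprocess
import sys

# Placeholder stages and commands; swap in your own toolchain.
STAGES = [
    ("lint", ["flake8", "."]),
    ("type-check", ["mypy", "src"]),
    ("build", ["python", "-m", "build"]),
    ("test", ["pytest", "-q"]),
    ("deploy", ["./deploy.sh", "staging"]),
]

def run_pipeline() -> bool:
    for name, cmd in STAGES:
        print(f"--- {name} ---")
        if subprocess.run(cmd).returncode != 0:
            # Like a stopped assembly line, one failed stage halts everything downstream.
            print(f"{name} failed; halting pipeline.", file=sys.stderr)
            return False
    return True

if __name__ == "__main__":
    sys.exit(0 if run_pipeline() else 1)
```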

Similarly, the Toyota system divides the production process into work groups that signal to each other when parts should be manufactured. Service-oriented architecture likewise divides the code and the teams into separate units that interact via messages, composing the whole system out of its parts.

The similarities between the two are even evident in the language we use. Terms like builds, pipelines, procedures, and toolchain all trace back to industrial manufacturing.

Simplified, the purpose of manufacturing management is to drive higher output at lower cost with reduced variability. The earliest wave of this discipline focused on increasing efficiency. This was followed by a focus on improved quality and reduced variability.

The most well-known methods to emerge from this later wave include:

  • Six Sigma - A set of data-driven techniques that originated at Motorola, focused on reducing variation and eliminating defects; quality is measured in defects per million opportunities (DPMO).
  • Toyota Production System - This precursor to lean manufacturing has the notable principle of “andon” — empowering any worker to stop production when a quality problem is discovered.

Manufacturing Software?

With the many surface-level similarities between manufacturing and software delivery, it’s no surprise that ideas about variability reduction have moved into software engineering. One need look no further than agile methodology, whose principles are rooted in this minimization of variability, to see the similarities.

  • Iterative development - The idea of iteratively developing and building a product is inspired by just-in-time (JIT) manufacturing.
  • Time - Time is a major focus of waste-reduction efforts in manufacturing, and it is just as central to how software teams plan and deliver.
  • Minimal inventory - Just as manufacturers avoid building products that may never be used, in software development we avoid building up a backlog of ideas that might not apply to our customers or goals. Instead, we build as needed, based on customer and market demand.
  • Transparency - Kanban boards, almost ubiquitous in software development, are used as a tool to communicate real-time capacity and work in progress, just like kanban cards are used on the factory floor.

The Divergence

While methods of variability reduction are indeed applicable to software delivery, it’s important to remember that the relationship between variability and quality means different things in industrial manufacturing and in software.

First, let’s step back and discuss some key differences between the processes of industrial manufacturing and software engineering. Although both start with a design phase, they diverge quickly after that. After a physical product goes into production, the manufacturing process adheres to the original design. In software, especially in organizations with a strong DevOps culture, the processes of design and delivery are more cyclical than linear.

This divergence feeds into how one defines quality. In manufacturing, quality is determined by the rate of defective parts, measured in defects per million opportunities (DPMO). If a device doesn’t work, the customer is unhappy. You minimize the prevalence of defects by minimizing the amount of variation in the production process.
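For concreteness, DPMO scales the observed defect count to one million opportunities to fail. A quick sketch with made-up figures (the numbers below are purely illustrative):

```python
def dpmo(defects: int, units: int, opportunities_per_unit: int) -> float:
    """Defects per million opportunities."""
    return defects / (units * opportunities_per_unit) * 1_000_000

# Hypothetical example: 25 defects found across 10,000 units,
# each unit having 5 opportunities for a defect.
print(dpmo(25, 10_000, 5))  # 500.0
```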

Software, on the other hand, is judged by how well a single artifact performs its task. From a customer’s perspective, we measure software quality in the features and capabilities of the software. Engineers often focus on bugs per line of code as a measure of quality, but that metric has little influence over customer buying decisions. Customers buy your product because it does something they need and offers a benefit they can’t find with your competitors.

We think minimal variability matters most when we are running software, not when we are designing or writing it.

Software Production Needs Variability

So now we come to Shoreline. As you may know, we are a team of operators; our founding team is made up of AWS veterans. Our backgrounds presented an interesting challenge as we thought about how we would bring our product to market: how to allow enough variability to create an innovative product while still being able to deliver it.

There had to be at least some variability in our process. We believe that good design comes from new ideas generated through experimentation, discussion, and thinking. But designing takes time, and as a startup of operators we knew that time is one of the easiest dimensions on which to compete. It’s also one of the easiest metrics to control.

In agile/scrum, engineering tasks generally take as long as the time assigned to them. The value of the sprint for reducing variability is that tasks are capped at sprint length (i.e., two weeks). However, prioritizing efficiency and time over everything else leaves little opportunity for innovation.

One option would have been to stay in stealth and waterfall our way toward being slightly better than incumbents on existing metrics. But it’s possible that by the time we got a product out the door, we would already be playing catch-up. We couldn’t just create a better way to do the same thing; we needed an entirely new metric to optimize.

Alternatively, we could have started pushing out a product as quickly as possible. But without the time to implement feedback from our test users, and to simply be creative, we wouldn’t have had much to bring to the table.

The Two-part Control Loop

At Shoreline we use a two-part control loop to satisfy the two competing demands of low variability for delivery and high variability for innovation. One loop sets targets and the other drives the system toward those targets. We use a Gantt chart as the outer control loop and a Trello-style workboard as the inner control loop.

A Gantt chart is useful because it lets us explicitly model the dependencies between our items, helping us address the “mythical man-month” problem. Overall, it forces us to be analytical in our planning. We can also analyze staffing scenarios to determine costs and be intentional about creating long-running design tasks. It aids in hiring, too, by surfacing the skills we need so we can hire for strengths.

Importantly, the two-part loop helps predict when we will ship in the long term. We continuously run a two-week to one-month sprint in which we pull items from the current Gantt chart into a workboard-style flow. This creates a shorter-term push to get tasks completed by the due date, and it lets individual contributors plan the order of the tasks themselves, an ordering that top-down plans often get wrong at the micro level.
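As a rough sketch of how the two loops interact (the task names, data structures, and sprint logic below are purely illustrative, not our actual tooling), the outer loop selects what the long-range plan says is due, and the inner loop works through it:

```python
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    due_week: int        # target date from the Gantt chart (outer loop)
    done: bool = False

def outer_loop(plan, current_week, sprint_weeks=2):
    """Outer loop: pick the tasks the Gantt chart says are due within the next sprint."""
    return [t for t in plan if not t.done and t.due_week <= current_week + sprint_weeks]

def inner_loop(sprint_tasks):
    """Inner loop: individual contributors order and complete the sprint's tasks."""
    for task in sorted(sprint_tasks, key=lambda t: t.due_week):
        task.done = True              # stand-in for actually doing the work
        print(f"completed: {task.name}")

# Hypothetical long-range plan
plan = [Task("design CLI", 1), Task("build parser", 2), Task("alpha release", 6)]
inner_loop(outer_loop(plan, current_week=0))  # the sprint picks up the first two tasks
```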

These quick wins of task completion create intermediate successes that lead to ongoing progress. In our next post, we’ll go into more detail about how we implement these control loops.

If you’re interested in learning more about our journey, please sign up for our newsletter. We love connecting with other operators to discuss their challenges and experiences.