The Bleeding Edge: Embracing SDLC Controls and the Inevitability of Automation

Author: Ed Moyle, CISSP
Date Published: 1 November 2023

It is a truism that many trust professionals (i.e., cybersecurity, audit, governance, risk and privacy professionals) have historically not been directly or universally involved in software development. This is, of course, a generalization, and there are exceptions. After all, some organizations have highly sophisticated application security programs and a deep understanding of product security. They build privacy and security into software and work in lockstep with developers and engineers. However, it is fair to say that these lucky few are the exception rather than the rule.

The reason I bring this up is that there is a meta-trend at work in many organizations: the acceleration and automation of software development and release processes. I believe it behooves trust practitioners to sit up, take notice of this trend and, if necessary, alter how we do things. We all know that from time to time business dynamics and market forces collide to create an inflection point: a moment when, depending on how we approach the challenge, new risk can be created. Approached strategically, however, these same inflection points often carry risk reduction opportunities with them. Those of us who have been in the trust profession long enough saw this happen with the rise of virtualization and the virtual datacenter approximately a decade ago. We experienced it again only a few years ago with the cloud.

I believe we are at such an inflection point right now in an entirely new domain: software development. What I am referring to are several different (yet related) technologies and paradigms that have emerged in recent years and are working to transform software development. The cumulative effect of these trends can be enormously transformative to enterprises and trust professionals alike. From an organization’s point of view, these emerging technologies change some of the most critical and sensitive business processes.

At the same time, for us as practitioners, they have the potential to vastly alter how we assure and secure the output of these processes—as well as the processes themselves.

Economic Dimensions of Control Selection

To understand what I am getting at, take a moment and reflect on some of the new and emerging trends in software development. Over the past several years, you have probably noticed that development methodologies have been rife with changes, including:

  • The rise of DevOps and DevSecOps
  • Modular architectures (e.g., microservices, service mesh)
  • Modularization of deployment through application containerization
  • Automation of deployment provisioning via infrastructure as code (IaC)
  • Continuous development methodologies such as continuous integration (CI) or continuous delivery/deployment (CD)
  • Expanded support automation (e.g., testing automation)

It may be tempting to lump these changes together under the umbrella of DevOps or to fixate on each of them individually rather than acknowledging the aggregate trend. However, each of these trends has something in common with the others that speaks directly to the core of how we approach application security generally and the risk associated with software output specifically. While it is subtle, I think understanding what is at the core of each of these trends has tremendous ramifications for how we approach application security, application risk and application assurance.

If that sounds hyperbolic, let me explain what I mean and draw out some of the implications of these changes taken together. Once seen in aggregate, I think they absolutely necessitate that we approach application security and product security more rigorously, more comprehensively and more holistically.

To begin laying this out, I will start with the concept of opportunity cost and how we choose to approach our security, audit, governance, risk and/or privacy programs. If you are familiar with opportunity cost as it applies to financial investments, you may already know exactly what I am driving at here. If not, allow me to explain. In a financial or investment context, opportunity cost refers to potential opportunities that you miss out on when your capital is tied up somewhere else. As an example, say that I have US$1000 to invest. Looking at available options, I decide to put those funds into a certificate of deposit (CD) with a three-year term at a yield of 6 percent.

This may seem like a pretty good investment given that the current average yield for a three-year CD is about 5 percent.1 But what happens when, on the way home from the bank, my friend calls to tell me about an investment that carries the same risk as a CD but pays 10 percent instead of 6? That sounds great—except my money is tied up in the CD for the next three years, so I cannot make the new investment. This scenario illustrates the concept of opportunity cost.
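To put rough numbers on it (assuming annual compounding and that both options are held for the full three years, a simplification for illustration):

  US$1000 × (1.06)³ ≈ US$1191  versus  US$1000 × (1.10)³ ≈ US$1331

The difference of roughly US$140 is the opportunity cost of having committed early to the lower-yielding CD.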

Opportunity cost represents something we could have done but were not able to because we chose to do something else instead. In the example, I missed out on a better yield because I had already invested in the CD. The same thing happens every day in our jobs for those of us dealing with risk. Any time we do anything, we make a choice. By applying resources to area A, we don’t have them to invest in area B. This is true of financial resources of course, but also our time, staff members’ time, our attention, executive attention and numerous other things. Anything we do, we do at the expense of something—or someone—else.

This, of course, has implications for how we select controls. Deploying a control means that we have fewer resources available for other projects. Any time or money we reclaim introduces the possibility that we could use those resources to offset risk somewhere new. It also speaks to the type of controls that we might choose to deploy.

As an example, consider a customer service center that can access customer data records containing customers’ personally identifiable information (PII) such as their Social Security numbers or other national identifiers. If we want to ensure that customer service representatives do not accidentally provide sensitive details to unauthorized persons, there are several options available for how to go about it. One option (of many) is to train representatives not to do this. We can send them periodic reminders, conduct onboarding and refresher training to ensure that it is etched into their memories, run drills and simulation exercises to help keep employees resistant to social engineering, and periodically test them to ensure that the lessons stick.

But is this the most efficient option? Another option is to change the underlying system so that customer service personnel cannot access sensitive data without a manager providing an override and signing off. In this case, customer service reps are literally unable to provide the information to a social engineer. Even if they were completely taken in by the pretext (the attacker’s subterfuge), they could not comply, because they do not have access to the underlying data being requested.
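As a minimal sketch of what such a system-level control might look like (all names and the redaction logic here are hypothetical, not a prescribed implementation):

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class ManagerApproval:
        """Token representing a manager's sign-off for one specific record."""
        manager_id: str
        record_id: str

    def get_customer_record(record_id: str,
                            approval: Optional[ManagerApproval] = None) -> dict:
        # Stand-in for a real data store lookup.
        record = {"record_id": record_id, "name": "J. Smith", "ssn": "123-45-6789"}
        if approval is None or approval.record_id != record_id:
            record["ssn"] = "***-**-****"  # no valid override: redact the PII
        return record

    # A tricked representative simply cannot disclose the number:
    print(get_customer_record("C-1001")["ssn"])  # -> ***-**-****

The key design point is that the restriction lives in the service layer rather than in employee behavior, so no amount of pretexting can extract what the representative never receives.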

Returning to the concept of opportunity cost, in the first scenario (training), we incur a lower opportunity cost in the form of dollars, but a higher opportunity cost in terms of time. Meaning, the time spent creating training campaigns, sending reminders, developing the training, updating the training and conducting simulations could have been invested elsewhere, but instead was dedicated to the operation of the control. Likewise, in the second scenario (updating the application), we incur a higher opportunity cost in dollars (representing the investment required to update the application), but we reclaim time.

Now, I am not arguing that one approach is better and the other is worse. Which approach is better will obviously depend on circumstances and context. Instead, I am merely pointing out that the economics of these two approaches are different, even if their outcomes are similar. How are they different? On the one hand, the training controls represent an ongoing cost over time. Because of factors such as employee attrition, the fallibility of human memory and human nature, training and associated measures will need to be routinely repeated to remain effective. This in turn means that the control will cost about as much to operate in years four or five as it does in years one or two. By contrast, the approach of changing the application has a low maintenance overhead but a higher (potentially much higher) initial overhead, because it requires changing or customizing the application in use.
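One way to make the tradeoff concrete is a simple break-even comparison (the symbols and figures below are illustrative assumptions, not benchmarks). If the training control costs Ct per year indefinitely, and the application change costs C0 upfront plus Cm per year to maintain, the automated approach comes out ahead after n years, where:

  n × Ct > C0 + n × Cm,  i.e.,  n > C0 / (Ct − Cm)

With hypothetical figures of Ct = US$50,000 per year, C0 = US$120,000 and Cm = US$10,000 per year, the application change breaks even at year three and is cheaper every year thereafter.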

The Pace of Development Forces Our Hand

Even if you agree with all the above, you may be scratching your head wondering what any of this has to do with either software development or how we approach risk in the software development life cycle (SDLC). And, historically, the answer might have been “not that much.” Until recently, we could weigh the relative tradeoffs of controls such as these and make the optimal choices for our organizations based on what resources were available and where we wanted to invest. In some cases we may have favored an automated approach (with higher upfront costs), and in other cases more manual approaches (with higher ongoing overhead but lower bootstrap costs). The decision of which to choose could be based on any number of contextual factors.

Today, however, when it comes to how software is created, I would argue that we are left with very little choice. Recall the laundry list of new software development trends I asked you to consider earlier and what they all have in common. I would argue that the primary “through line” for those trends boils down to one thing: reduction in the time spent on software development. All of them, viewed in the abstract, serve to reclaim time. Some reduce time to market, some shorten iteration cycles, some favor automated methods and thereby reduce the time required of individual engineers, some increase agility (thereby reducing the time and friction required for release) and so on.

Looking at this through the lens of opportunity cost, one thing becomes clear. We are moving away from paradigms where we can selectively choose whether to automate (at a higher upfront cost and lower ongoing maintenance cost) or implement manual controls (at a lower initial cost and higher ongoing maintenance cost). Instead, we must automate our security controls to stay relevant. Any method or control at our disposal that requires a high level of manual intervention is in the crosshairs of irrelevance because of its fundamental incompatibility with how software development is evolving.


Consider the controls that historically have been applied to application security: static application security testing (SAST), dynamic application security testing (DAST), threat modeling, application penetration testing, vulnerability scanning, software composition analysis (SCA), secure code training and more.

Think for a moment about how much manual overhead goes into the care and feeding of these controls. SAST and DAST both require manual intervention to weed out the legions of false positives. Threat modeling is often a highly manual process requiring the creation of diagrams and manual analysis of component interaction points. Application penetration testing requires a human engineer to get the most value, and training requires developer time—and time from the security and compliance teams to administer. The point? The methods available to us are on a direct collision course with software development mechanisms. This is not a good sign. And, frankly, it is likely to get worse as artificial intelligence (AI) technology continues to evolve and further streamlines software development.

It would be fruitless to explain to enterprise leaders why important goals related to product development, internal application deployment and innovation at large must slow down arbitrarily for assurance or security reasons. Likewise, it is suboptimal to insist upon manual techniques solely for the sake of security and risk management. Such a stance expends political capital and undermines the willingness of other teams (e.g., development, operations, engineering) to engage with us.

Conclusion

So, what can be done? There are several things, most of which involve taking actions that may feel uncomfortable at first because they move in directions that have historically been challenge areas. First and foremost, we need to get into the weeds and understand how “the sausage is made”2 from a software development point of view. It is important to understand how applications are developed, what methodologies are employed to do so, developers’ and operations teams’ toolsets, and their goals. Because these factors are neither static nor areas where security and assurance teams have historically had much exposure, developing this understanding may require new skills and a concerted effort.

Part of building this understanding is to help build trust, which you can only do when you know what you are talking about. Seeking knowledge will help identify ways to automate security control operation. This is what we are striving for. We need new, automated methods because the ones we have relied upon for a generation are in danger of losing viability at scale given the demands of modern development. We should be looking very carefully at our security controls and how we can automate the work that we do. Can we consume IaC to automatically produce the data flow diagrams used for threat modeling? Can we have DAST run automatically at the conclusion of a build? Can we integrate software composition analysis and/or SAST into source control repositories so that a commit automatically triggers evaluation? The mechanisms will differ depending on the business, but the thought process should be the same: What can we automate, and where?
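As one illustration of that thought process, the following is a minimal sketch of the first question above: deriving data flow diagram edges for threat modeling from an IaC export. The input layout assumed here (a JSON document with “resources” entries carrying “name” and “depends_on” fields) is a simplification for illustration; a real pipeline would parse the actual output of whatever IaC tooling is in use (e.g., terraform show -json).

    import json
    import sys

    def iac_to_dot(iac: dict) -> str:
        """Emit a Graphviz DOT digraph with one edge per declared dependency."""
        lines = ["digraph dataflow {"]
        for resource in iac.get("resources", []):
            for dep in resource.get("depends_on", []):
                lines.append(f'  "{dep}" -> "{resource["name"]}";')
        lines.append("}")
        return "\n".join(lines)

    if __name__ == "__main__":
        # Usage: python iac_to_dfd.py infra.json > dataflow.dot
        with open(sys.argv[1]) as f:
            print(iac_to_dot(json.load(f)))

Run in a pipeline after every merge, a sketch like this keeps the threat model’s diagram inputs current without anyone redrawing boxes and arrows by hand.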

Also, we need to further our understanding of the specific tools and trends upon which developers increasingly rely, because many of these technologies can provide direct value from security and assurance points of view. Technologies such as IaC can prove tremendously valuable when leveraged for security purposes, as can strategies such as creating a software bill of materials (SBOM), containerization and more. When approached creatively, all of these can be enormously useful to us. Therefore, learning how they work is imperative.
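For instance, once an SBOM exists as machine-readable JSON, simple assurance checks can run automatically on every build. Below is a minimal sketch assuming a CycloneDX-style layout (a top-level “components” array with “name” and “version” fields); the deny list entry is a hypothetical placeholder.

    import json

    # Hypothetical known-bad (name, version) pins; in practice this would be
    # driven by vulnerability intelligence rather than a hardcoded set.
    DENY_LIST = {("log4j-core", "2.14.1")}

    def flagged_components(sbom_path: str):
        """Yield SBOM components whose (name, version) appears on the deny list."""
        with open(sbom_path) as f:
            sbom = json.load(f)
        for comp in sbom.get("components", []):
            if (comp.get("name"), comp.get("version")) in DENY_LIST:
                yield comp

    if __name__ == "__main__":
        for comp in flagged_components("bom.json"):
            print(f"flagged: {comp['name']}@{comp['version']}")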

Additionally, we need to forge relationships with the stakeholders within development and operations teams who are closest to the work. Reach out to DevOps teams, production operations teams, developers and testers to get them on your side. Show them value, because they will prove critical to your efforts to automate controls down the road and to keep the digital trust services that you provide relevant to the work developers and operations teams do. At the end of the day, these teams will always understand the work that they do better than a trust professional can. Enlist them to help you by openly communicating your goals and being flexible about how you accomplish those goals.

It is banal to say that the purpose of trust-supporting disciplines is to enable the business. We all say this, but an important factor that is often overlooked is the willingness to step out of our comfort zones to help make it happen. Nowadays, software and business are equivalent. The more we can bolster one, the better we can support the other.

Endnotes

1 Tierney, S.; “Current CD Rates: September 2023,” 31 August 2023, NerdWallet, http://www.nerdwallet.com/article/banking/current-cd-rates
2 Merriam-Webster, “How the Sausage Is Made,” http://www.merriam-webster.com/dictionary/how%20the%20sausage%20is%20made

ED MOYLE | CISSP

Is currently director of Software and Systems Security for Drake Software. In his years in information security, Moyle has held numerous positions including director of thought leadership and research for ISACA®, application security principal for Adaptive Biotechnologies, senior security strategist with Savvis, senior manager with CTG, and vice president and information security officer for Merrill Lynch Investment Managers. Moyle is co-author of Cryptographic Libraries for Developers and Practical Cybersecurity Architecture and a frequent contributor to the information security industry as an author, public speaker and analyst.