Presentation: Implementing OSSF Scorecards Across an Organization


Transcript

Swan: Welcome to implementing OpenSSF Scorecards across the organization. This is the mascot of the Open Source Security Foundation. Scorecards is one of the projects within that. This is the logo for Scorecards. I’ll start with a Scorecard badge. You’ll see a badge like this appearing in open source repos where Scorecards have been implemented. This one is telling you that it’s got a score of 8.2. The background to that number is green, which is telling you that 8.2 is a good score. That should give you confidence that the project is doing lots of things to help secure the repository in terms of how things are getting in there. As an open source user, you’re looking for that badge to say these people care about security. As an open source contributor, you’re looking for that badge because it’s letting you know that that’s going to be a safe environment to work within. As an open source creator, you’re looking for that badge as something designating a project that’s got a thoughtful approach to security.

This is a page I put together in the .github repo inside of our GitHub organization. It’s a summary of all of the Scorecards that we have for the repos in the organization where I work. You can see there’s a mixture there of high 7s and low 8s. That’s reflective of where we generally are in shoring up those repos. I’ll get into the detail of what has been involved with that, why there’s a little bit of a spread going on there, and the kind of work that’s actually involved to get those scores higher. The reason why we did this was, we saw the Dart project at Google, which we use quite extensively, implementing OpenSSF Scorecards themselves. We thought that this was a good way to tell the world that you care about security. We’re building a platform for end-to-end encryption. Security is key when you’re doing crypto stuff. The people using that generally want to know that you’re developing software in a secure manner. Scorecards was a way of visually representing to the consumers of our software that we do care, and that we’re building some security into what we do.

Background

I’m Chris. I’m an engineer at Atsign. Atsign is a platform for end-to-end encryption. It’s like WhatsApp or Signal, but for anything. We’re doing a lot of work at the moment in the Internet of Things space, particularly things like medical devices, but also providing remote admin for the Internet of Things. If people are going to be using that end-to-end encryption channel to gain access to the remote administration of their field-deployed devices, then they’re going to want that software to be developed in a secure manner. The Scorecards help people see that.

Outline

I’m going to start out with who are the Open Source Security Foundation, and what is a Scorecard. Scorecards is just one of the projects going on within that organization. My recommendation, if you’re going to try implementing Scorecards, is actually to start with a different OpenSSF application called Allstar, because I think that helps lay the foundations for then doing some of the more hand-to-hand combat, repo by repo, that’s necessary to actually achieve a decent Scorecard. I then say, pick a repo that’s representative of your environment, and start with just getting that one repo into shape, implementing Scorecards and improving the Scorecard for that repo. That will tell you a lot about what you need to do for the rest of the repos in the organization. Hopefully, you’ve got enough discipline in that organization that it’ll be a relatively straightforward process to repeat what’s been done to that first repo, at scale. There’s a handful of tools that I’ve been using to help automate that process and take some of the drudge and toil out of it.

That takes you to scaling across multiple repositories. I showed you scores there in the high 7s and low 8s, and I think this runs into a Pareto 80:20 thing: it’s pretty easy, 20% of the work, to get 80% of the score. Then it’s pretty hard, 80% of the work, to get the next 20% of the score. You have to think about the law of diminishing returns and how much effort it’s worth expending to keep pushing that score up. Really, what are you trying to achieve with that? What are the outcomes you’re looking for? Then I’ll finish up talking about some of the toil that arises from doing this. It is not a journey that comes without cost, and it creates a bunch of additional work that needs to somehow be managed. There are ways of dealing with that. It’s still, in some cases, a bit clumsier than I would like it to be. If I think about it from an engineering perspective, a bunch of the tools could actually do a better job of taking toil away. I’m doing my best to find workarounds, but also to work with the tool providers in order to make those things better.

Who Are OpenSSF, and What Is a Scorecard?

Who are OpenSSF, and what is a Scorecard? OpenSSF is the Open Source Security Foundation. You will find their home at openssf.org. That is the home of a number of projects. Scorecards is one of them. There was a bit of talk about things like Sigstore. There’s a whole bunch of stuff happening around Supply-chain Levels for Software Artifacts, SLSA being the abbreviation. A lot of the things that are going on in the software supply chain security world at the moment are happening through this organization. If you follow the money, or if you follow the contributors, you will find an awful lot of Google going on here. One of the presenters worked for Google. There are a lot of Google projects that are using this stuff. A lot of Google’s open source, things like the Dart programming language being a good example, is making use of the Scorecards and the SLSA tools in order to show the world that Google is making the appropriate investments in security, and that these things are going to be safe to use as the basis of projects that people are building.

The Scorecard looks at a holistic set of security practices. If we start at the top and move our way around clockwise: code vulnerabilities. How is code getting into a project, and what do we know about the vulnerability of that code? Have we got mechanisms in place to deal with that? Maintenance. Is stuff being maintained, and are there proper mechanisms in place for that maintenance to be done in a safe and effective manner? Continuous testing. We’ve got a bunch of conversations happening at a conference like this about CI and CD. Those are things that are well known to be necessary to produce high quality software. Is there evidence that testing is going on? Is there evidence that if we’re doing deployments, the pipelines are being maintained throughout? Source risk assessment: what do we know about the dependencies coming into a project? How are we controlling those? As they change under our feet, how do we deal with that? I’ll be spending a lot of time on that, because that’s where pretty much all of the issues and a lot of the toil come from. Then build risk assessment. How are we creating builds? How are we thinking about the security of the build process? How would things get into that? How would we stop unwanted things from getting into it?

Start with Allstar

My first recommendation for practitioners implementing this stuff is: start with a different OpenSSF project, which is called Allstar. Allstar is a GitHub app. Its recommended configuration is to look at every project in your organization and raise a bunch of issues automatically, which will tell you about all the things that you’re doing wrong. If things like branch protection aren’t configured, then Allstar is going to nag you about that. If you don’t have a SECURITY.md in your repos, Allstar is going to nag you about that. It’ll generate a whole bunch of issues. Then, as soon as you fix those things, it’ll actually recognize that those things have been fixed and close the issues down for you automatically. Having implemented Allstar, what will happen is an initial surge of issues being generated in the repos across the organization. Then, as you deploy things that mitigate those, Allstar is going to take care of shutting down those issues. You should then have a really sound foundation on which to implement the Scorecard.

There are ways of implementing Allstar a little bit differently. You can have opt-in, where you’re saying, these are the repos that I want Allstar to take a look at. It’s also possible to operate it in an opt-out mode, where you’re saying, look at everything except for these ones, because for whatever reason I don’t want the bother. One of the things I’ve been tending to do with the repos in our organization is, as new stuff comes along, give it a little while to actually crystallize. For instance, we’ve got an intern at the moment working on a new language SDK. It would be a lot of effort for them to be asking permission and review for every minor change that they’re doing until they get it working and it’s ready for launch. It’s at that launch point that we then have it cut across and say, ok, it’s time for grownup activities. We’re going to make sure that branch protection is in place. We’re going to make sure that this repo has got all of the READMEs and contributing guides and security files in there to let people know what they need to do in those various different circumstances. At that point, you can flick the switch and say, ok, Allstar, have at it. If there’s anything that you’ve missed, then it’s going to automatically tell you about it in issues. Template repos are a super good idea for a lot of that boilerplate. We call ours the archetype. GitHub has a thing on it called Insights. One of the Insights is community standards. Those are the things that they expect a repository to have in order to be a good quality open source repo. Those are things like: have a README, have a contributing guideline, have a code of conduct. Actually, SECURITY.md isn’t one of the ones that it checks there. Put all of those into a template, even if it’s just, this is what a README should look like. Then it helps people get started, because the files are there as placeholders and can be customized into shape.
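
As an illustration of those modes, the Allstar docs describe an org-level config living in a special repo called .allstar. A minimal sketch, based on the fields documented in the Allstar README, might look like this; the repo name being excluded is hypothetical:

```yaml
# .allstar/allstar.yaml — sketch of an org-level Allstar config
optConfig:
  # Opt-out mode: Allstar checks every repo in the organization...
  optOutStrategy: true
  # ...except the ones listed here (hypothetical repo name)
  optOutRepos:
    - new_language_sdk
```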

Getting Allstar whipped into shape across an organization ends up being about a whole bunch of config and a whole bunch of files. The sad thing about dealing with these is you have to have different approaches to config and files. There’s not one universal mechanism that we can reach into. The tools I’ve picked up in order to make the changes that are necessary are Terraform and git-xargs. Terraform is a super popular way of doing infrastructure as code. Pretty much all of the automation happening in cloud native land is happening with Terraform. Terraform has got a mature, but sometimes awkwardly documented, GitHub provider. There are plenty of examples out there, and I’ll give some pointers towards the end, of Terraform scripts that can be applied to get repos into the right shape for Allstar and for Scorecard. git-xargs is a fun little tool. What this lets you do is take a script and take a list of repos, and basically run that script on the list of repos. When you’ve got something that you want to change in a set of repos, whether it’s the introduction of a file, or the correction of a mistake in a file that you’ve deployed everywhere, or whatever it happens to be, git-xargs is your friend for generating dozens of pull requests at once, rather than having to go through all of that file manipulation in a more manual way.
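
To give a flavor of git-xargs, here’s a sketch along the lines of the examples in its README; the branch name, the repos.txt list, and the command being run are all hypothetical:

```bash
# Run a command against every repo listed in repos.txt, commit the
# result on a new branch, and open a pull request in each repo
git-xargs \
  --branch-name add-security-md \
  --commit-message "Add SECURITY.md boilerplate" \
  --repos repos.txt \
  cp /tmp/templates/SECURITY.md SECURITY.md
```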

Doing Your First Repository

After doing Allstar, it’s now time to properly engage with Scorecard. This is the process that I think is necessary for your first repository. Scorecard gets implemented as a GitHub Action. On installing the action, it’ll run every time there’s a change made to the default branch. The installation of the action itself is a change to the default branch. It’s going to automatically go off and generate a SARIF file. That SARIF file will then start populating the security tab of the repo. Initially, you’ll get something like this. It won’t be 128 closed issues, it’ll be 128-plus open issues for all of the things that have been identified in the repo. Maybe I picked a bad repo to start with. I picked one of our busiest repos, and my mindset there was, if I can get this one whipped into shape, then all of the rest will fall easily afterwards. That’s how it worked out. You’re going to get lots of issues.
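
For reference, the ossf/scorecard-action docs suggest a workflow along these lines. I’ve shown the actions with version tags for readability, though in practice you’d pin them to full commit SHAs, which is exactly what Scorecard’s own pinning check asks for:

```yaml
# .github/workflows/scorecard.yml — sketch based on the ossf/scorecard-action docs
name: Scorecard analysis
on:
  push:
    branches: [ main ]
  schedule:
    - cron: '30 1 * * 6'  # also run weekly

permissions: read-all

jobs:
  analysis:
    runs-on: ubuntu-latest
    permissions:
      security-events: write  # to upload the SARIF results
      id-token: write         # to publish results to the OpenSSF API
    steps:
      - uses: actions/checkout@v4
        with:
          persist-credentials: false
      - uses: ossf/scorecard-action@v2
        with:
          results_file: results.sarif
          results_format: sarif
          publish_results: true
      # This upload is what populates the repo's security tab
      - uses: github/codeql-action/upload-sarif@v3
        with:
          sarif_file: results.sarif
```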

Thankfully, there’s a great deal of help at hand. You’re actually pointed to the help by the issues themselves. If you click into an issue that’s about dependency pinning, it’s going to point you in the direction of the StepSecurity app. Inside of that StepSecurity app, you can take something like a GitHub Actions workflow, and it will take care of recommending the right permissions. It’ll take care of pinning all of the actions’ dependencies to SHAs, and produce an output that can either be PR’d directly into the repo, that’s one option. Or you can just cut and paste it into whatever editor you’re using in order to manipulate the repo. A lot of the manual drudge work has actually been taken care of there. I don’t know if I’d have actually had the stamina to have done 128 issues without this StepSecurity tool, because going off and looking at the SHAs line by line, for everything that’s involved there, would have been a truly painful task.
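
The before and after of that pinning looks something like this; the SHA shown is illustrative, and the tooling resolves the real one for you:

```yaml
# Before: a mutable tag, which could later be moved to point at different code
- uses: actions/checkout@v4

# After: pinned to a full commit SHA, with the tag kept as a trailing
# comment so humans can still see roughly which version it is
- uses: actions/checkout@b4ffde65f46336ab88eb53be808477a3936bae11 # v4.1.1
```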

There’s a whole bunch of other dependencies that you’re going to have to deal with besides just GitHub Actions. Whatever your choice of language and framework, there is probably some package manager associated with it. Things like npm and PyPI and Docker. We use a lot of pub.dev, doing Dart development. Pub.dev is the Dart package manager. All of these things really should have their dependencies pinned, and the Scorecard is going to whine at you if they’re not. Once you’ve got the dependencies pinned, you can then get Dependabot to take care of doing pull requests on an automatic basis as the dependencies move under your feet. Of course, those pull requests are then going to run through your CI pipelines. Hopefully, those tests are going to ensure that any problems with the dependencies are surfaced before they get anywhere near being merged into the default branch.
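
A sketch of what the matching .github/dependabot.yml might look like, assuming a Dart repo that also has GitHub Actions workflows and a Dockerfile; the directories and intervals are assumptions to adapt:

```yaml
# .github/dependabot.yml — minimal sketch for a Dart repo
version: 2
updates:
  - package-ecosystem: "github-actions"
    directory: "/"
    schedule:
      interval: "weekly"
  - package-ecosystem: "pub"        # pub.dev dependencies in pubspec.yaml
    directory: "/"
    schedule:
      interval: "weekly"
  - package-ecosystem: "docker"     # base images in the Dockerfile
    directory: "/"
    schedule:
      interval: "weekly"
```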

Scaling Across Multiple Repositories

Scaling then across multiple repositories. It’s a rinse and repeat of some of the things done before for Allstar. Having done the first repository, that should give an idea of the config that’s necessary for all of the rest. That’s going to be things like branch protection. Then put a Terraform script in place that’s going to do branch protection for all of the rest of the repos that you want to run Scorecard on. Similarly, all of the files that need to be slid into place, including the Scorecard action itself, can be done with git-xargs. There’s no need then to be doing manual effort on a repository-by-repository basis. Where doing this will get you to is a whole bunch of repositories that then need more of the same treatment. Having been through the process of doing the StepSecurity stuff on the first repo, doing the repeat of that across a dozen or so more repos wasn’t actually a huge lift. It’s one of those tasks that gets easier with familiarity. Doing the whole of the rest of the organization actually probably took less time than doing the first repo. That should give a sense of the amount of effort that’s involved there.
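
As a sketch of that Terraform, using the github_branch_protection resource from the Terraform GitHub provider; the repo names and review rules here are hypothetical, not our actual config:

```hcl
# branch_protection.tf — sketch; repo names and rules are hypothetical
variable "repos" {
  type    = list(string)
  default = ["sdk-repo", "docs-repo", "server-repo"]
}

resource "github_branch_protection" "main" {
  for_each      = toset(var.repos)
  repository_id = each.value   # the provider accepts the repo name here
  pattern       = "main"

  required_pull_request_reviews {
    required_approving_review_count = 1
    dismiss_stale_reviews           = true
  }
}
```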

80:20

When I talked about scores, I was showing that you get a nice happy shade of green when you’re above 8, and a duller shade of green when you’re in the high 7s. Those Scorecards go all the way to red if you’re just not doing anything at all, and work their way through the spectrum up to that happy shade of green. The best Scorecards I’ve seen have been in the low 9s. I’ve yet to see a 10. That includes the Scorecards on the OpenSSF’s own repositories. It’s really hard to achieve those super high scores. I think there are a few almost-discrepancies there. There are certain repos we’ve got that are actually documentation repos. The Scorecard is saying, you’ve not done any dynamic code analysis; but there’s no code to do dynamic analysis on. It can become a little bit unattainable.

If you look at the residue that I found in our repos, this is where it gets hard. If I look at the residue here, the thing that it’s asking me to do is static source code analysis. That generally is going to require the introduction of some additional tooling. There is some open source tooling in this space. A lot of what people use in practice is commercial tooling. Dependent upon your choice of language and framework, there might not even be tooling available. Doing this with Dart is not straightforward at the moment. It’s one of those things where, if you’re on a mature platform and you’ve got an existing investment in these kinds of tools, you might be able to check these off. For some of the things, it becomes a bit harder. Fuzzing, so dynamic analysis. Again, getting those tools in place can be a challenge. There’s some open source stuff. I’ve been peeking at how the Dart project is making use of OSS-Fuzz. I still don’t quite understand what’s going on there. It’s not clear to me how I would apply that to our project.

SAST Tooling, and GitHub Actions

Participant: How does it know whether you’re doing stuff based on well-known open source SAST tools, or is it looking at GitHub Actions?

Swan: It’s looking for the evidence of tools that are in a dictionary of SAST tools, which will include open source tools and commercial tools.

Participant: It’ll look at GitHub Actions to see if they concur.

Swan: Yes, it’s looking at the workflows to see if those things are actually being engaged.

Participant: It only works with GitHub Actions?

Swan: Everything I’ve talked about has been in the context of GitHub. Other management tools are available for source code. GitHub is where the center of mass is for a lot of people, especially in open source projects. To that end, GitHub Actions is GitHub’s universal solution to almost everything. Sometimes it’s a bit of a sledgehammer to crack a nut. I happen to think it’s a really clever product management decision. A lot of things that we as customers would ask GitHub to do, instead of them having to do feature creation and product management around that, and all of the maintenance that goes along with that, and the exploding test matrix of doom, as you take on more features, they’ve just said, “Here’s GitHub Actions. It can do anything. Here’s some core actions to get you started.” The community has provided a lot more. The implementation of Scorecards itself is as an action, and it spends a lot of time looking at the rest of your actions to see what they’re doing in terms of the testing, and how they’re using dependencies within the actions as well.

80:20

The last one here is where the fun really starts. CII Best Practices is the old name for what is now OpenSSF Best Practices. OpenSSF Best Practices is a giant questionnaire. You sign up for an account and you associate your repo with your account. Then you start answering these questionnaires. The list there is basics, change control, reporting, quality, security, and analysis. You might be able to see, on the far side, the scores here. For this particular repo I’m looking at, we got an overall in-progress badge of 84%. The basics is fully green, 13 out of 13. As we get towards the end, the analysis, which is where you’re into things like those SAST and DAST tools, is 2 out of 8, and an amber badge: must do better. The questionnaire itself is very long and very detailed. Reckon on an hour-plus per repo to answer these questionnaire questions. As you get to the end of it, I’ve found some of these are just really hard to accomplish, especially in certain contexts. How are we doing dynamic code analysis? It’s tough with Dart at the moment. A bit of a struggle. This is one of the areas where, if you’re using more mature languages and frameworks, you’re actually at an advantage, because there’s better tooling available to actually be saying, yes, I’m doing this stuff, and sliding it into place.

The Toil of It All

The toil of it all. Make friends with the new boss, because Dependabot will be giving you a lot of work. This is a screengrab I took from a docs repo. There’s no actual code to maintain in that. Here you can see, in the space of just over a week, a whole bunch of Dependabot PRs just bumping things to do with, in this case, Scorecards itself. All of these PRs have been generated by the action for Scorecards. It’s a little bit busy. This is from an actual code repo. The intention of this illustration is to just say, there’s a lot coming at you very often. Let’s actually drill down into some of the nonsense going on here. One of the irritating things about Dependabot and Dockerfiles at the moment is you can’t just say, look for Dockerfiles across my whole repo; you can’t even give it an array of locations within the repo. You have to have a different docker entry for every location where it will find Dockerfiles. In this particular repo, there are four of those. That means that rather than one PR, we get four PRs every time there’s a change to something in a Dockerfile. This is obviously pretty annoying.
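
Concretely, that repetition in .github/dependabot.yml ends up looking something like this; the four directory paths are hypothetical stand-ins for wherever the Dockerfiles live:

```yaml
# Four near-identical entries, one per Dockerfile location, because
# Dependabot doesn't take a glob or an array of directories here
version: 2
updates:
  - package-ecosystem: "docker"
    directory: "/"
    schedule: { interval: "daily" }
  - package-ecosystem: "docker"
    directory: "/tools/build"
    schedule: { interval: "daily" }
  - package-ecosystem: "docker"
    directory: "/tools/release"
    schedule: { interval: "daily" }
  - package-ecosystem: "docker"
    directory: "/examples/demo"
    schedule: { interval: "daily" }
```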

The workaround that I’ve come up with for this is a little script called rollup. What I actually want to be doing here is just say, Dependabot, roll up, and give it a list of PRs, or a PR range. What this script does is it makes use of the GitHub command line tooling and just a little bit of looping. It takes parameters of a starting PR and a finishing PR, so it’s taking a PR range. It’s just merging all of the branches into the base branch, and then pushing that base branch back up. At least Dependabot is smart enough that when you approve and merge that PR, it goes, ah, all of those merged branches have now found their way into main or trunk or whatever you call it; I can close all of those PRs. All of the PRs blink out of existence. It’s good about that. The main reason why you have to do this is you do not want to be sat there for four different runs of your CI, because if you do this the manual way, then you merge the first one, and that’s going to trigger all of the rest of the Dependabot PRs to rebase themselves and run the CI again, and that gets tedious really quickly.
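
The actual script is in the gist linked from the Resources section; as a rough sketch of the shape of it, assuming the gh CLI and a contiguous range of Dependabot PRs that share a base branch:

```bash
#!/usr/bin/env bash
# rollup.sh — a sketch of the roll-up idea, not the author's exact gist
# Usage: ./rollup.sh <first-pr-number> <last-pr-number>
set -euo pipefail
first=$1
last=$2

# Check out the head branch of the first PR in the range
gh pr checkout "$first"

# Merge the head branches of the remaining PRs into it
for pr in $(seq $((first + 1)) "$last"); do
  branch=$(gh pr view "$pr" --json headRefName -q '.headRefName')
  git fetch origin "$branch"
  git merge --no-edit "origin/$branch"
done

# Push the combined branch; the first PR now carries all of the bumps.
# Once it's approved and merged, Dependabot sees that the other branches
# have landed on the default branch and closes their PRs automatically.
git push
```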

As I mentioned with that docs repo, Scorecard’s own dependencies can change with annoying regularity. It’s these two culprits: actions/checkout and the GitHub CodeQL action. They don’t change on a daily basis, but probably on a weekly basis. This is causing a lot of background toil. The main point with these is, if you’ve just deployed Scorecards across your organization, then you don’t just have to do this once, you have to do this x times. In my case, x is about 12, heading towards 15 at the moment. What I actually want to be able to do here is say to Dependabot, once I’ve approved and merged one of these, just do it for the rest. That is not a capability at the moment. That’s another script I’ve got sketched out in my head; I haven’t had the time to actually type it into an editor yet and do all the testing necessary for it. There’s a lot of manual effort and toil at the moment associated with dealing with these things. It’s where I think the Dependabot team could be making things easier.

I have heard of organizations that basically just go, auto-merge all of my Dependabot PRs. I totally think that that’s cheating, because if you’re doing that, then all of these changing dependencies aren’t getting the kind of review that they deserve. I think all of these need to be eyeballed at least once, so you’ve got an understanding of what’s going to change. I’ll give an example of why that’s necessary as well. One of the recent changes to one of the Docker actions, if you did actually look at the text, had a warning in it saying this might break some stuff. They released it as a minor. A minor in semver should be: this won’t break any stuff. Of course, a bunch of people merged it and it broke a bunch of stuff. They then rolled out a different point release the following day to go, we’ll go back to how things were before. Then there was a major release to say, this is actually a breaking change. There is a need to pay some degree of attention to this. Not everything is going to get caught by your CI. For things that relate to deployable artifacts, it depends upon the depth of your end-to-end testing whether those things are actually going to be picked up. In this particular case, the new images that were being created were to OCI spec with a SLSA attestation in them. All good stuff, one would think, except that older versions of Docker, including pieces that we use for automating deployments, weren’t capable of working with those kinds of images. The automated deployments broke as a result of that.

Base dependencies can be amplified. What do we see going on here? We’ve got a bump to Debian, from a release on a particular date to a release on a later date. Going alongside of that, we’ve got a bump to Dart. That’s not because Dart itself has changed, otherwise we’d see a version number change for Dart. What’s happening here is, the official Dart Docker image depends upon Debian. That Debian change has flowed into Dart. We’ve got our familiar four Docker Dart PRs, plus one more for the Debian. In this case, we’ve got five PRs all caused by the same thing. This implies that you’ve actually got to have a certain amount of understanding of your dependency graph, and how things are related here. I see these every week or two, and I know what’s going on. I expect that hash change on Dart when there’s been a change to Debian. Again, the review process needs to be informed in that way. I think a visualization of the dependency graph would be a helpful thing to guide people’s hand as they’re doing those reviews.

Review

An OpenSSF Scorecard can show that you care about security. I think it can also be one of those things that helps with the safety of your environment. If we go to the recent DORA report, there was this section of unexpected findings. The biggest one in there was people saying, “I tried trunk-based development, and I hurt myself.” I know exactly what’s happening there. They’ve tried to do trunk-based development without branch protection, and then some fool has pushed something to trunk and it’s just gone and stomped over a whole bunch of other things that should have been happening. It’s at that point they’re like, “I’ve hurt myself. The Git gods need to help because I’m now in trouble and I don’t know how to roll back properly.” There should probably be mechanisms in things like GitHub to say, actually, I’m not playing in child mode anymore and hacking on an individual repo. I am an organization, and I want to move into org mode and be safe by default, and have things like branch protection automatically enabled.

Allstar provides a good starting point. The Allstar app lets you very quickly establish a foundation across the whole organization. It’s got two deployment modes, opt-in or opt-out, which let you trim out any repos that you don’t want involved in that. Pick a first repo to get the hang of what’s needed. It’s up to you whether you pick an easy one or a hard one. I found myself picking a hard one, and everything was easy after that. Then you can automate across the rest of the organization. Once you’ve taken the lessons from that first repo, it should be fairly cookie cutter across everything else that you’re doing. I reckon it takes about 20% of the effort to get 80% of the score. Getting those high 7s and low 8s isn’t a huge amount of work. Getting from there up towards a 10 is. Then, Scorecards do create ongoing toil, and that needs to be minimized. There are things you can do as an engineer to work around some of the limitations of the tools. I think the main thing that we can do as a community is get on the case of GitHub to improve how these tools can work for all of us.

Call to Action

My call to action in this case wouldn’t be dive headlong into the implementation journey I’ve just been describing, but rather, run the Scorecard command line tool against one of your repos. This will allow you in the privacy of your own shell to see the sorts of issues that you’re going to be dealing with, and to think through how you might actually respond to that.
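
Assuming you’ve installed the scorecard binary (it’s on Homebrew, and there are releases on GitHub), running it looks something like this; the repo name is a placeholder, and it wants a GitHub token with read access in the environment:

```bash
# Scorecard uses this token to read repo metadata via the GitHub API
export GITHUB_AUTH_TOKEN="<your-token>"

# Run the checks against a repo you're familiar with (placeholder name)
scorecard --repo=github.com/your-org/your-repo
```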

Resources

This talk is based upon the blog post, “Implementing OSSF Scorecards Across a GitHub Organization.” That’s the written down version of the journey I took there. The second blog post is about the rollups. The code I showed earlier for the shell script is available in a gist. That blog post will take you there. While you’re there, you’ll probably see a link to my Dependabot wish list. That’s the bunch of stuff that I’m trying to push with GitHub product management; I think we probably all want that stuff as well. I mentioned our .github repo. .github is a special repo in any GitHub organization, where you can do things like org-wide templates, and you can do an organization README. I find it’s also a good place to tell the world about how you’re using GitHub. We have a section in that that we call atGitHub. It’s saying, this is what we do at Atsign Foundation, in order to make use of the GitHub tools. I talk there about our use of Allstar and Scorecards, but also a bunch of other things that we do. I think for any open source organization, it’s good to just be transparent about how you’re doing these things. Some of that stuff comes through in contributing guides, but maybe not all of it all the time. I just said open source there, and everything that’s happening with these OpenSSF tools is focused on open source, but I don’t think they’re actually just for open source. I think they’re tools that are useful and can be applied to any repositories that you’re working with. Even if you’re not doing stuff out in the open, I think there’s a lot of utility to the approaches that they have on offer here.

Then, lastly, Varun Sharma is the creator of StepSecurity. Varun used to be a software engineer at Microsoft, focused on security. He saw that this stuff was going to be important and StepSecurity is his startup that’s implementing some of the tools that I think eventually people will be paying for, to have added value in how they manage these environments. He did a talk at QCon London. As part of that talk, he stepped through this demo repo that he’s got on GitHub, showing what was necessary to improve the scores there. As part of that demo repo, he’s got a whole bunch of Terraform templates that you could use as the basis of shaping up your own environment. If you don’t want to start from scratch, just go and grab Varun’s work. It’s good.

Questions and Answers

Participant: Dart uses this a lot, does Flutter?

Swan: Yes. Flutter and Dart are two side-by-side projects emanating out of Google. The story here is, Dart was created about a dozen years ago as a client-side programming language. The original idea was they were going to put a Dart plugin into Chrome, and you’d be able to run stuff there like the old days of applets, I think. That never happened, and Dart went into the wilderness for a while, and then has emerged with Flutter. Flutter is Google’s cross-platform development framework. People mostly use it to create iOS and Android apps out of the same codebase. Dart is the underlying language for that. Pretty much everything I’ve referenced with Dart also applies to Flutter. That team have got a common set of repos that you’ll find on GitHub. You’ll definitely see the Scorecards there on the Flutter repos, as well as the Dart ones.

Participant: Does it make sense to learn how to use it by maybe forking other repos and then play around with it?

Swan: The thing that I would recommend trying first, is to run the Scorecard command line tool against one of your own repos that you’re familiar with. It’ll actually spit out a report, and that’ll give you a sense of how much work have I got to do with this thing.

Participant: Because I’m more interested in the mechanics of what it’s looking for.

The best practices seem useless, unless I’m really missing something, because you’re filling out a questionnaire at a point in time, and that’s going to change. How do you keep it maintained? It’s the whole quantitative analysis versus qualitative.

Swan: That best practices questionnaire is a bit like the thing that auditors do. Generally speaking, audits happen on a periodic basis. I saw this with black box testing. With an important public-facing web app, you’d maybe do a black box test every year. Then along came tools which automated that, and they could be doing it continuously. They could be doing it every hour if you needed it to. They could be sensing when you change something, and running the scan then. It’s much better to have that continued vigilance. The nature of the questionnaire is actually much more structural. It’s sort of, have you got the pieces in place to run automated testing on these bits and those bits?

Participant: The subjective nature of answering it gives you a false sense of security.

Swan: People can lie and cheat as well.

Participant: Not even lie, they just don’t understand the whole point of them. They don’t really understand the context of the question.

Swan: It is a bunch of work, and it is somewhat fraught. What they do do is start sending you reminders. Once you’ve done a best practices questionnaire, you will actually be prompted on a periodic basis to say, has anything changed? Do you want to come along and either fess up that you’ve opened some stuff up, or tell us about the improvements that we persuaded you to make, because last time you were here, we said that still needs a bit of work.

Participant: As you rightly said, there’s a chance of, not fraud, but somebody just misunderstanding a question or whatever. So is there some idea of an external review of these?

Swan: I think that comes to the question of due diligence and who is being duly diligent. When somebody sees a badge for best practices, if they understand what’s happening behind that, you’ve gone and filled out a questionnaire and you’ve said that you’re doing these things, and that’s raised it to a particular level. If you really care about that, then you’re going to validate that those answers are actually true and correct. Maybe you’ll have your own similar questionnaire and people to actually check on that stuff. In financial services organizations, that’s the work that’s happening with internal and external audit. As we think about supply chain security and how we buy things, that is often supplier management questionnaires that are being sent off, and the evidence that we gather to validate against those. If we look at that best practices piece, it felt to me like it leant heavily on the NIST Cybersecurity Framework. That in turn is an almanac of various NIST guidelines, the ISO 27000 series, COBIT, and a bunch of other good things. I think we’ve got broad agreement in that about what are the things we should be doing. It’s then just a question of validating that they’re actually happening.
