Agencies Shouldn’t ‘Just Trust’ Software Vendors’ Security Assurances, IG Warns


NIST advisors debating the merits of OMB’s policy on software vendors’ “self-attestation” to secure development practices found common ground on a need for audits and testing.


A key National Institute of Standards and Technology advisor expressed skepticism at a recent meeting about a policy that encourages agencies to accept software vendors’ security promises.

The sentiment may be broadly felt but is rarely voiced by individual federal officials, and it sparked a discussion about what could come next in the administration’s efforts to avoid a repeat of the infamous SolarWinds hack.

“You can’t just trust vendors, we have to stop that,” said Brett Baker, inspector general for the National Archives and Records Administration. “Somebody had to say it,” he added.

Baker’s comment came Wednesday in an exchange with Steve Lipner on the sidelines of a meeting of NIST’s Information Security and Privacy Advisory Board witnessed by Nextgov. Baker and Lipner, the executive director of SAFECode—a nonprofit organization formed by major technology companies to coordinate their security efforts—are both members of the board. Lipner, who previously worked for many years on security at Microsoft, is the board chair.

The two were following up on Baker’s reaction during the meeting to a briefing the board received on M-22-18, the memo the Office of Management and Budget issued to agencies under President Joe Biden’s executive order to improve the nation’s cybersecurity. That order came in the wake of the SolarWinds breach, which was part of a campaign that compromised the security of at least nine federal agencies and more than a hundred companies.

SolarWinds’ IT management software is ubiquitous across the federal government. After hackers gained entry into the firm’s delivery mechanism and were able to insert malware, undetected, into a routine update, thousands of its customers installed the update and became vulnerable to unauthorized access.

The executive order detailed a set of security practices—such as “establishing multi-factor, risk-based authentication and conditional access across the enterprise”—that it said should be included in guidelines for the development of software, which NIST would issue. And it instructed the director of OMB to “take appropriate steps to require that agencies comply with such guidelines” in their procurement and use of software.

When NIST issued its guidelines in February, it recommended agencies err on the side of allowing software vendors to attest to their own compliance with the secure software development practices. OMB followed suit with M-22-18 in September, requiring agencies to collect a signed self-attestation form—to be developed by the Cybersecurity and Infrastructure Security Agency—from their software vendors.

“You have to remember they’re submitting to the government a form signed by, you know, senior professionals within their organization attesting to certain standards, so it is something that I would hope software producers are taking relatively seriously before signing on that bottom line,” said Mitch Herckis, the OMB official delivering the briefing. Herckis, who is director of federal cybersecurity at the office of the federal chief information officer, said the office also sought to expedite the attestation process while avoiding undue burden on industry stakeholders and a falloff in the number of federal contractors.

The OMB memo leaves it up to agencies to determine whether they should require vendors to submit to a security assessment by a third party. It also makes optional the collection of evidence that would support the vendors’ attestation—artifacts such as a Software Bill of Materials, log entries and reports from source-code vulnerability scans and other tests.

Baker told Nextgov the collection of such artifacts “would be helpful to agencies to get more insight into whether or not they should trust that vendor.”

During the meeting, he asked Herckis why OMB chose to rely only on the word of the software vendors in issuing its requirements for agencies.


“They do those things just to provide assurance to people that invest in them, but they actually have insecure applications and software,” Baker said, referencing standards that allow self-assessment to indicate compliance with security controls in the private sector. “It just seems like, given SolarWinds a few years ago, do we want to go into maybe a little bit more oversight and assurance? … I’m just saying self-assessment isn’t enough.”

Lipner beat Herckis to the punch in response, using his position as chair to jump in and highlight issues with third-party security assessments.

The most likely “outcome is that you get documentation basically outsourcing your assurance,” he said. Lipner also pointed to the fact that SolarWinds had been evaluated for security by a third party under the Common Criteria Scheme prior to the breach of its software.

One high-profile example that has demonstrated how challenging it can be to implement a successful system of third-party assessment is the Cybersecurity Maturity Model Certification program. The Defense Department launched the initiative, citing a lack of confidence in similar self-attestation forms that Defense contractors must already submit pledging their adherence to security standards from NIST. The Biden administration suspended the program last November amid controversy over conflict-of-interest issues and opposition from large information technology vendors.

Other stakeholders, such as Sen. Rob Portman, R-Ohio, and Pieter Zatko, the former head of security at Twitter, have also recently identified current dynamics around third-party cybersecurity certification in the U.S. as problematic.

Testifying before Congress, Zatko raised the conflict-of-interest implications for entities hiring their own assessors. He also explained how easy it was for Twitter to evade the Federal Trade Commission’s enforcement process, which relies on evaluators simply asking a series of questions, instead of getting the “ground truth” on an entity’s security through the use of auditable standards.

Baker, who has a doctorate in information technology and systems management, has been relatively reserved during his five-year tenure on the NIST board. But on Wednesday, he pressed on to finish his point.

“You need to have testing to make sure the controls work properly,” he said, noting lessons the inspector general community has learned in evolving from a similar questions-based approach for checking agencies’ compliance with the Federal Information Security Management Act. “That’s kind of where I’m at with it.”

Lipner did not disagree on the value of testing. After the briefing, he sought to reconcile his position with Baker’s, arguing that the attestations vendors make about their security can be used to hold them publicly accountable—and set an example for others—by selecting firms to independently audit.

He told Nextgov that effectively addressing the quest for higher assurance is a matter of scale, as it takes high-level capabilities to conduct appropriate assessments.

There are “[errors] that you can find, but they’re not trivial to find, and you can’t just hire people en masse to do it,” Lipner said.

Asked how agencies might select vendors to audit, Lipner said, “even random is not bad, but you can do better than that.” He pointed to the value of security researchers for detecting inconsistencies in vendors’ stated security practices and a record of cybersecurity incidents as factors that could prompt an audit.

The next stage of the executive order that administration officials are working to execute is the proposal of new procurement rules from the Federal Acquisition Regulatory Council to cement—and potentially add to—M-22-18.

Ultimately, Lipner said, “I think there has to be some basis for [audits] and, maybe that comes in the FAR guidance, or what have you. If you’re selling to the government, you’re kind of subject to the government’s rules, as I understand it.”

Under the executive order, OMB was required to submit recommendations to the FAR council, and an OMB representative also chairs the body. Baker told Nextgov OMB has the ability to empower agencies to take a proactive, evidence-based approach to securing their software.

“The memo that came out, it’s a start,” he said. “I’m not saying that this is not the right direction, it’s better than where we were at a few months ago. It’s a process. Maybe over time,” OMB can do more.

https://www.nextgov.com/cybersecurity/2022/10/agencies-shouldnt-just-trust-software-vendors-security-assurances-ig-warns/379054/


