January 20, 2017
Last month, the Certificate Authority Security Council (CASC) officially announced its new minimum requirements for publicly trusted Code Signing Certificates. For the first time, Certificate Authorities (CAs) will have a set of standardized issuance and management policies explicitly designed for Code Signing.
The new minimum requirements spell out CA policies in detail, covering topics such as certificate contents, revocation and status checking, verification practices, and more.
While CAs have been hard at work to comply with all of these new requirements, you may be wondering what the changes mean for you.
Let’s take a look at how some of the new requirements will affect those who use these certificates to sign code:
Private keys need to be stored on cryptographic hardware
According to the CASC, one of the leading causes of Code Signing attacks is key compromise. That is, a malicious party gains access to the private key of a legitimate, “good” publisher and uses it to sign a malicious file, making it look trustworthy and increasing the chances it will be downloaded.
Storing the keys on secure cryptographic hardware, such as a USB token or Hardware Security Module (HSM), significantly decreases the chance of key compromise compared to storing the keys locally (the most common method prior to the new requirements).
We’ve been recommending stronger private key protection for a while now, and it’s been a requirement for Extended Validation (EV) Code Signing Certificates since they were introduced in 2014.
Under the new guidelines, though, it will be required for ALL Code Signing Certificates. Specifically, all private keys will need to be stored on a FIPS 140-2 Level 2 HSM or equivalent on-premises hardware, or in a secure cloud-based signing service.
All new GlobalSign Code Signing orders after January 30, 2017, will include a USB token to store the certificate and protect the private key.
Standardized and strict identity verification
The other leading cause of Code Signing attacks, according to the CASC, is issuing certificates to malicious publishers, who then use the certificate to sign viruses or malware.
To prevent this, the new requirements outline specific precautions CAs must take before issuing, including:
- Strict identity verification of the publisher, including legal identity, address, date of formation, and more
- Cross-checking against lists of suspected or known malware publishers, producers, and distributors
- Maintaining and cross-referencing an internal list of certificates that were revoked because they were used to sign suspect code and certificate requests that were previously rejected by the CA
Many CAs already follow most of these processes, but standardization makes it harder for a malicious publisher who is rejected by one CA to shop around for another with weaker vetting procedures.
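The cross-checks above can be sketched in a few lines of code. This is purely illustrative: the list names, sample publishers, and `screen_applicant` function are hypothetical, not drawn from the CASC requirements themselves.

```python
# Hypothetical sketch of a CA's pre-issuance screening step.
# All data sources and names below are illustrative placeholders.

KNOWN_MALWARE_PUBLISHERS = {"Bad Actor Ltd"}       # external deny lists
PREVIOUSLY_REJECTED = {"Shady Software Inc"}       # CA's own rejection log
REVOKED_FOR_SUSPECT_CODE = {"Sketchy Apps LLC"}    # certs revoked for abuse

def screen_applicant(legal_name: str) -> bool:
    """Return True only if the applicant clears every cross-check."""
    for deny_list in (KNOWN_MALWARE_PUBLISHERS,
                      PREVIOUSLY_REJECTED,
                      REVOKED_FOR_SUSPECT_CODE):
        if legal_name in deny_list:
            return False
    return True

print(screen_applicant("Honest Code Co"))      # True
print(screen_applicant("Shady Software Inc"))  # False
```

Because every CA maintains and consults the same kinds of lists, a rejection at one CA no longer disappears into a silo.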
Reporting and responding to certificate misuse or suspect code
In addition to trying to prevent these certificates from being issued in the first place, the new requirements dictate that CAs operate a “Certificate Problem Reporting” system, through which third parties (such as anti-malware organizations, relying parties, and software suppliers) can report suspected “Private Key Compromise, Certificate misuse, Certificates used to sign Suspect Code, Takeover Attacks, or other types of possible fraud, compromise, misuse, inappropriate conduct, or any other matter related to Certificates.”
CAs are then held to very strict standards regarding how to respond to these reported problems. For example, they must begin investigating within 24 hours and maintain 24×7 communication about any incidents.
There are also strict guidelines and timelines regarding revocation, in the event that malware or any other abuse is suspected.
The new reporting systems mean that even if a malicious publisher slips through the verification process, their certificate can be promptly reported, investigated, and revoked.
The requirements also address timestamping, which records the exact time a piece of code was signed. One scenario that benefits from timestamping is a key compromise followed by certificate revocation:
For example, if your key was used to sign legitimate code but was later compromised and used to sign malicious code, you could set the revocation date to fall between the two signing events.
This way, your legitimate code would continue to be trusted, but the malicious code would not.
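The trust decision described above reduces to a simple date comparison. The sketch below is illustrative only; the `signature_trusted` function and the sample dates are hypothetical, not part of any real verifier.

```python
from datetime import datetime, timezone

# Illustrative sketch: a timestamped signature stays trusted if the
# trusted timestamp predates the certificate's revocation date.

def signature_trusted(signed_at: datetime, revoked_at: datetime) -> bool:
    """Trust signatures whose timestamp precedes the revocation date."""
    return signed_at < revoked_at

legit_signing = datetime(2017, 1, 5, tzinfo=timezone.utc)      # before compromise
malicious_signing = datetime(2017, 1, 20, tzinfo=timezone.utc)  # after compromise
revocation_date = datetime(2017, 1, 10, tzinfo=timezone.utc)    # set between the two

print(signature_trusted(legit_signing, revocation_date))      # True
print(signature_trusted(malicious_signing, revocation_date))  # False
```

Without a trusted timestamp, a verifier cannot know when the signature was made, so revocation would have to invalidate everything the key ever signed, legitimate code included.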
A safer future for code signing
The new standards and requirements are an essential step toward cutting down on Code Signing attacks.
Microsoft is the first application software vendor to adopt the guidelines and will start enforcing them on February 1, 2017. TRUSTZONE’s Code Signing Certificates will comply with the new requirements before then.