Posted 6 hours ago / 93 comments / support.sectigo.com

5 hours ago by jeffbee

Quick reminder from your friendly local SRE: never, ever issue certificates that expire on a weekend. Make certs expire in the middle of the afternoon on a business day wherever your operators live and work. The cert in question expires at May 30 10:48:38 2020 GMT, which smells suspiciously like a fixed interval after the cert was generated rather than a well-chosen point in time.
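That advice is easy to automate as a deploy-time check. A minimal sketch in Python, assuming the timestamp is in OpenSSL's notAfter text format (the function name is illustrative):

```python
from datetime import datetime

def expires_on_weekend(not_after: str) -> bool:
    """Return True if a certificate's expiry falls on a Saturday or Sunday.

    `not_after` uses OpenSSL's text format, e.g. "May 30 10:48:38 2020 GMT".
    """
    expiry = datetime.strptime(not_after, "%b %d %H:%M:%S %Y %Z")
    return expiry.weekday() >= 5  # Monday is 0, Saturday is 5

# The certificate discussed above expired on a Saturday:
print(expires_on_weekend("May 30 10:48:38 2020 GMT"))  # True
```

A stricter version of the same check could also reject expiry times outside business hours in the operators' time zone.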

4 hours ago by lmilcin

All my applications use a component that watches the configured certs (everything in the cert and trust stores) and returns a warning in the application's telemetry if any certificate is less than a week from expiration. This is checked periodically while the application runs.

This not only makes sure we don't miss an expiration but also ensures we don't forget to configure any of the applications.

We had a situation where a cert was replaced, but the file was placed at an incorrect path and was not actually used by the app. Having the app report on what is actually in use is the best way to prevent this from ever happening.
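A minimal sketch of that kind of watcher, assuming the application can already extract each loaded cert's expiry time (the names here are illustrative, not the actual component):

```python
from datetime import datetime, timedelta, timezone

WARN_WINDOW = timedelta(days=7)

def expiry_warnings(loaded_certs, now=None):
    """Given {name: expiry_datetime} for every cert the app actually
    loaded (cert store and trust store alike), return telemetry warning
    strings for any cert less than a week from expiration."""
    now = now or datetime.now(timezone.utc)
    return [
        f"certificate '{name}' expires in {(expiry - now).days} day(s)"
        for name, expiry in sorted(loaded_certs.items())
        if expiry - now < WARN_WINDOW
    ]
```

The key point from the comment survives the sketch: the inventory must come from what the app actually loaded into memory, not from what happens to sit on disk.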

2 hours ago by colechristensen

I've used this https://manpages.debian.org/testing/nagios-plugins-contrib/c...

After one scrambling emergency with a cert expiring in the middle of the day, a constant check with warnings and alerts a couple of weeks before expiry turned cert renewal from a firefight into something trivial.

3 hours ago by undefined

[deleted]

2 hours ago by kl4m

Good old "cert replaced but apache/nginx failed to reload" has bitten me more than once...

3 hours ago by BurningFrog

There is just no substitute for Reality!

5 hours ago by btgeekboy

If you get to the point where the exact expiration date on the certificate matters, you've already lost the game.

2 hours ago by colechristensen

Engineering for failure is important: you should always set yourself up with several lines of defence, each of which can fail. Some lines of defence make failing "impossible"; others make a failure softer, for when the "impossible" happens anyway.

4 hours ago by DavidSJ

Defense in depth.

4 hours ago by techslave

it’s more like blue m&m’s than an actual requirement

2 hours ago by alasdair_

>it’s more like blue m&m’s than an actual requirement

Did you mean Van Halen's famous "WARNING: ABSOLUTELY NO BROWN M&Ms" clause?

https://www.snopes.com/fact-check/brown-out/

5 hours ago by encoderer

Great tip. Did you notice that the cert in this case was issued 20 years ago? It's crazy to me that it was still being used to sign certs as recently as last week (according to Twitter).

10 minutes ago by imron

> It’s crazy to me that it was still being used to sign certs as recently as last week (according to twitter)

It's likely because it was issued 20 years ago. People have been using it for 20 years, and no one realized it was about to stop working.

5 hours ago by jeffbee

Of course, but that doesn't really excuse them. My first experience with middle-of-Sunday-night SSL certificate expiration was in December 1998, and it was already a well-known doctrine by then. I'd expect a commercial certificate authority to have these kinds of things squared away.

4 hours ago by user5994461

My experience with commercial CAs is that they set the expiry exactly one year from creation. It doesn't matter if that lands on a weekend or a holiday.

3 hours ago by jis

It's actually worse. The new root (valid, I believe, until 2038) uses the same key as the now-expired certificate. It has to, or it would not be possible to validate the certificates that were already issued. And this new one is a root certificate installed in browsers!

What "should" happen is that no certificate is issued with an expiration date later than that of its issuing certificate. Then, as the issuing certificate gets closer to expiration, a new one with a new key pair should be created, and that new certificate should sign the subordinate certificates.

3 hours ago by jis

Sorry to reply to my own comment, but I want to clarify: two certificates (at least) expired. The root named "AddTrust External CA Root" and a subordinate certificate with a subject of "USERTrust RSA Certification Authority." Both expired around the same time.

The "USERTrust RSA Certification Authority" certificate signed yet another layer of intermediate certificates.

The "USERTrust RSA Certification Authority" certificate was promoted to a self-signed certificate, now in the browser trust stores, using the same key pair as the original certificate that was signed by "AddTrust External CA Root." It has an expiration of 2038 (although that concept is a bit vague in a root certificate).

an hour ago by adrianmonk

> rather than at a well-chosen point in time

So, you're saying that "I'm not going to be working here anymore by then... hahahaha" isn't well-chosen?

3 hours ago by sleevi

Andrew Ayer has a write-up about this at https://www.agwa.name/blog/post/fixing_the_addtrust_root_exp...

At the core, this is not a problem with the server, or the CA, but with the clients. However, servers have to deal with broken clients, so it’s easy to point at the server and say it was broken, or to point at the server and say it’s fixed, but that’s not quite the case.

I discussed this some in https://twitter.com/sleevi_/status/1266647545675210753 , as clients need to be prepared to discover and explore alternative certificate paths. Almost every major CA relies on cross-certificates, some even with circular loops (e.g. DigiCert), and clients need to be capable of exploring those certificates and finding what they like. There’s not a single canonical “correct” certificate chain, because of course different clients trust different CAs.

Regardless of your CA, you can still do things to reduce the risk. Tools like mkbundle in CFSSL (with https://github.com/cloudflare/cfssl_trust ) or https://whatsmychaincert.com/ help you configure a chain that maximizes interoperability, even with dumb and old clients.

Of course, using shorter-lived certificates, and automating their renewal, also helps prepare your servers, by removing the toil from configuration changes and making sure you pick up updates (to the certificate path) in a timely fashion.

Tools like Censys can be used to explore the certificate graph and visualize the nodes and edges. You’ll see plenty of sites rely on this, and that means clients need to not be lazy in how they verify certificates. Or, alternatively, that root stores should impose more rules on how CAs sign such cross-certificates, to reduce the risk posed to the ecosystem by these events.

2 hours ago by mehrdadn

Given you mention OpenSSL is currently terrible at verifying "real" certificates: why doesn't e.g. Google just throw a bit of money at them and fix their bugs when they're clearly so well-known? It seems like such an obvious thing to do for a company whose entire business is built on the web. Is there really too little benefit to justify the cost of the engineer(s) it would take even for big companies? Or are the projects somehow blocking help?

an hour ago by sleevi

Google has, in the past. Look at the ChangeLog for 1.0.0: the massive improvements made there (around PKITS) were sponsored by Google.

Google has a healthy Patch Rewards program ( https://www.google.com/about/appsecurity/patch-rewards/ ) that rewards patches to a variety of Open Source Projects.

Google also funds a variety of projects through the Core Infrastructure Initiative ( https://www.coreinfrastructure.org/ ), which OpenSSL is part of: https://www.coreinfrastructure.org/announcements/the-linux-f...

3 hours ago by telesilla

Andrew Ayer's tip on getting Debian sorted may have saved me hours.

4 hours ago by LeonM

This one bit me today and abruptly ended my day at the beach.

The certificate reseller advised my customer that it was okay to include the cross-signing cert in the chain, because browsers will automatically ignore it once it expires, and use the Comodo CA root instead.

And that was true for browsers, I guess. But my customer also has about 100 machines in the field that use cURL to access their HTTPS API endpoint, and cURL will throw an error if one of the certs in the chain has expired (it may depend on the order; I don't know).
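One way to catch this class of failure before the field machines do is to probe the endpoint the same way a strict client would. A sketch using only the Python standard library (the hostname is a placeholder); a chain the verifier rejects raises `ssl.SSLCertVerificationError`, roughly what those cURL clients hit:

```python
import socket
import ssl
import time

def days_until_leaf_expiry(host, port=443):
    """Connect with full verification and report how many days the leaf
    certificate has left. A rejected chain raises an ssl error instead,
    which is the failure mode described above."""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
    # notAfter is in OpenSSL's text format, e.g. "May 30 10:48:38 2020 GMT"
    not_after = ssl.cert_time_to_seconds(cert["notAfter"])
    return (not_after - time.time()) / 86400

# Hypothetical endpoint:
# print(days_until_leaf_expiry("api.example.com"))
```

Note that the verification result depends on the TLS library the probe itself links against, so ideally run it from an environment matching the oldest client you support.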

Anyway, 100 machines went down and I had a stressed out customer on the phone.

4 hours ago by 0x0

Sounds like a good test case for exercising those otherwise useless "million dollar insurances" that some certificate vendors flash in their sales materials?

3 hours ago by donmcronald

Have you ever read the terms? I don't know if they even publish them anymore, but I read one many years ago. TL;DR:

1. The CA must misissue a cert.

2. The misissued cert is used by a malicious party to impersonate you.

3. Every user (your users) must prove their damages and claim individually.

4. There might have been a low maximum, per-user claim, but I can't remember.

I'd be amazed if there's a single person on the internet who's been paid out by that warranty.

3 hours ago by mehrdadn

Is that a cURL bug?

2 hours ago by mshade

It seems to affect only older versions of curl, or curl built against OpenSSL 1.0.2 or older. My MacBook's curl fails, but my Arch Linux box's curl works fine.

5 hours ago by admax88q

Honestly, certificates should either never expire or expire daily. If certificate revocation works, then it's pointless to have expiring certs; it's just a mechanism for CAs to seek rent.

If certificate revocation doesn't work, then certs need to expire super frequently to limit the potential damage if compromised.

A certificate that expires in 20 years does absolutely nothing for security compared to a certificate that never expires. Odds are that in 20 years the crypto will need to be updated anyway, effectively revoking the certificate.

4 hours ago by josephcsible

Exactly. Certificate expiration has never really been about security. It's purely for practicality, so that CRLs won't grow without bound.

This is especially true now that we have OCSP stapling. From a security perspective, a short-lived certificate is exactly equivalent to a long-lived certificate with mandatory OCSP stapling and a short-lived OCSP response, but the latter is much more complicated.

And in this case, since it's a root, it goes even further than that. Root CAs can't be revoked anyway, so if they're compromised, a software update to distrust them is required. There's really no good reason for them to expire at all.

3 hours ago by sleevi

It’s not true that expiration is not about security. Dan Geer’s talk in 1998, noted at https://cseweb.ucsd.edu/~goguen/courses/275f00/geer.html , is just as relevant today in the design of key management systems.

Expiration is not “just” about cryptographic risk either; there are plenty of operational risks. If you’re putting your server on the Internet, and exposing a service, you should be worried about key compromise, whether by hacker or by Heartbleed. Lifetimes are a way of expressing, and managing, that risk, especially in a world where revocation has a host of failure modes (operational, legal/political, interoperability) that may not be desirable.

As for Root expiration, it’s definitely more complicated than being black and white. It’s a question about whether software should fail-secure (fail-closed) or fail-insecure (fail-open). The decision to trust a CA, by a software vendor, is in theory backed by a variety of evidence, such as the CA’s policies and practices, as well as additional evidence such as audits. On expiration, under today’s model, all of those requirements largely disappear; the CA is free to do whatever they want with the key. Rejecting expired roots is, in part, a statement that what is secure now can’t be guaranteed as secure in 5 years, or 10 years, or 6 months, whatever the vendor decides. They can choose to let legacy software continue to work, but insecurely, potentially laying the accidental groundwork for the botnets of tomorrow, or they can choose to have legacy software stop working then, on the assumption that if they were receiving software updates, they would have received an update to keep things working / extend the timer.

Ultimately, this is what software engineering is: balancing these tradeoffs, both locally and in the broader ecosystem, to try and find the right balance.

3 hours ago by josephcsible

I don't see anything about expiration in that talk.

If you don't have a strong revocation system, then your host is vulnerable whether or not you have expiration, since attackers aren't going to wait until the day before your key expires to try to steal it.

In general, when a CA's root certificate expires, it creates a new one and gives it to browser and OS vendors. What's the difference between the CA continuing to guard their old private key, and starting to guard the new private key?

3 hours ago by undefined

[deleted]

2 hours ago by cheerlessbog

Expiration may be useful but how is expiration in 2038 useful?

4 hours ago by jakub_g

Someone on Twitter (I forget who, maybe SwiftOnSecurity?) recently suggested, tongue-in-cheek, that certs should not hard-expire, but should instead add an exponentially increasing slowdown to the TLS handshake.

Once the slowdown gets big enough, someone will notice and have a look.

4 hours ago by vbezhenar

To revoke a certificate, you must keep a list of revoked certificates. Without expiration dates, that list would grow without bound. And that list must be downloaded periodically by every entity that wants to verify certificates.

4 hours ago by josephcsible

They said "certificates should never expire or should expire daily". Roots already can't be revoked, so they should never expire. Intermediates and leaves should expire daily. Since OCSP responses are currently often valid for that long anyway, there would then be no need for revocation at all.

2 hours ago by elcomet

What if your CA is down for a day? Imagine Let's Encrypt being down for 24 hours and all of its certificates going invalid. That would be millions of websites unavailable.

6 hours ago by elithrar

Great thread by Ryan Sleevi tracking the many (and growing) reports of issues caused by this root expiring: https://twitter.com/sleevi_/status/1266647545675210753

Top offender so far seems to be GnuTLS.

6 hours ago by MobileVet

This appears to have caused our Heroku managed apps to go offline for 70+ minutes.

https://status.heroku.com/incidents/2034

Anyone who was already connected was able to continue accessing the sites, but new connections failed. This mostly affected web users.

Our main app server (also on Heroku) thankfully continued to crank along, and that kept the mobile traffic going, which is 90% of our users.

Edit: adding Heroku ticket link

6 hours ago by Mojah

This issue is largely caused by people still stuffing old root certificates into their certificate chains and serving that to their users.

As a general rule of thumb:

1) You don't need to add root certificates to your certificate chain

2) You especially don't need to add expired root certificates to the chain

For additional context, and for the ability to check with `openssl` which certificates you should remove from your chain, I found this post useful: https://ohdear.app/blog/resolving-the-addtrust-external-ca-r...
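In the same spirit, here is a small sketch that splits a PEM bundle and asks the `openssl` CLI for each certificate's expiry, so an expired root hiding in a chain file stands out (assumes `openssl` is on the PATH; the helper names are illustrative):

```python
import re
import subprocess

PEM_RE = re.compile(
    r"-----BEGIN CERTIFICATE-----.*?-----END CERTIFICATE-----",
    re.DOTALL,
)

def split_bundle(pem_text):
    """Return each PEM certificate block found in a bundle."""
    return PEM_RE.findall(pem_text)

def end_dates(pem_text):
    """Shell out to `openssl x509 -noout -enddate` for every cert in
    the bundle; returns strings like "May 30 10:48:38 2020 GMT"."""
    dates = []
    for pem in split_bundle(pem_text):
        out = subprocess.run(
            ["openssl", "x509", "-noout", "-enddate"],
            input=pem, capture_output=True, text=True, check=True,
        )
        # stdout looks like "notAfter=May 30 10:48:38 2020 GMT"
        dates.append(out.stdout.strip().split("=", 1)[1])
    return dates
```

Pointing `end_dates` at the bundle your server actually serves (not the one you think it serves) is the useful version of this check.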

6 hours ago by toast0

You shouldn't need to send the root certificate (unless the clients are _really_ dumb; I've worked with a lot of dumb clients and did not see any issues with sending only the intermediates and the end-entity cert). But a fair number of cert chain verifiers are fairly dumb and won't stop when they reach a root they know, which makes things tricky.

If some of your clients don't have the UserTrust CA but do have the AddTrust CA, then up until today you probably wanted to include the UserTrust CA cert signed by AddTrust. Clients with the UserTrust CA should see that the intermediate cert is signed by UserTrust and not even read the cross-signed cert, but many do see the cross-signed cert and then make the trust decision based on the AddTrust CA.

It's hard to identify clients in the TLS handshake in order to give each one a cert chain tailored to its individual needs; there are some extensions for signaling supported CA certs, but they're largely unused.

6 hours ago by encoderer

Any guess at what percentage is this, versus the case where the certs are cross-signed with a newer root but older clients with outdated bundles don't trust the newer root?

(At Cronitor, we saw about a 10% drop in traffic, presumably from those with outdated bundles)

6 hours ago by Mojah

Hard to say, as we don't have any insight into the client side. But we can say that only ~2% of our clients had expiring root certificates in their chains in the last few weeks, so it's definitely a minority.

Since you don't control the clients in any way, it may be that some clients haven't updated their local certificate stores in ages and don't yet trust the new root certificates.

7 hours ago by encoderer

I have never really wanted to go "serverless" until today.

TIL that I can buy a cert that expires in a year, signed by a root certificate that expires sooner. Still not sure WHY this is the case, but it is definitely the case.

4 hours ago by ta17711771

Because the certificate authority paradigm is LITERALLY INSANE.

4 hours ago by AmericanChopper

It’s the PKI paradigm that creates most of the insanity. Authentication is still an unsolved issue with PKI, there’s many ways that you can perform authentication, but all of the different approaches lead to one form of insanity or another. The CA system has its share of insanity, but it is the most successful PKI implementation in history, and by a long way.

6 hours ago by Sphax

As far as I understand, your certificate is still valid, but you need to remove the expired intermediate certificate from your bundle. That was the case for me, anyway.

6 hours ago by encoderer

If your traffic comes from a browser you are fine with this, but if it's coming from e.g. cURL you will find that you need to include an intermediate chain.

(The reason for the difference being that browsers stay up to date; many old client systems do not.)

We ended up getting a new cert from a different provider.
