7 hours ago by ani-ani

"Apple also did an investigation of their logs and determined there was no misuse or account compromise due to this vulnerability."

Given the simplicity of the exploit, I really doubt that claim. Seems more likely they just don't have a way of detecting whether it happened.

6 hours ago by drivebycomment

The one case (and about the only case) I can think of where they can make the claim above is this:

If they have a log of all JWTs issued that records which user requested the token and which email went into the JWT, then they can retroactively check whether they ever issued a (user, email) pair they shouldn't have. If the only anomaly they found was this researcher's attempt, they can assert there was no misuse.
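A minimal sketch of the kind of retroactive audit being described here. The field names and data shapes are hypothetical, not Apple's actual logging schema:

```python
# Hypothetical audit: flag any issued JWT whose email does not belong
# to the user who requested it. All names are illustrative.
def audit_issued_tokens(log_entries, emails_for_user):
    """log_entries: iterable of (user_id, email_in_jwt) pairs.
    emails_for_user: maps user_id -> set of emails that user owns.
    Returns the pairs that should never have been issued."""
    suspicious = []
    for user_id, email in log_entries:
        if email not in emails_for_user.get(user_id, set()):
            suspicious.append((user_id, email))
    return suspicious

entries = [
    ("bananas", "bananas@example.com"),    # legitimate: own email
    ("attacker", "victim@example.com"),    # researcher-style probe
]
owned = {
    "bananas": {"bananas@example.com"},
    "attacker": {"attacker@example.com"},
}
print(audit_issued_tokens(entries, owned))  # -> [('attacker', 'victim@example.com')]
```

If such a log exists, the whole forensic claim reduces to one pass over it.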

5 hours ago by mlthoughts2018

How could you prove the user was the correct user in any given case?

4 hours ago by drivebycomment

I can think of two possible "root causes" with this vulnerability.

One is where the API ("2nd step" mentioned in the doc, POST with a desired email address to get a JWT) is an authenticated API, meaning it requires a valid credential, but Apple's implementation of this API made a mistake of not checking if the user-requested email belongs to the user or not. In this case, the log can give enough information for the forensic analysis to determine misuse. I presumed this was the case.

The other possibility is if they implemented that API as unauthenticated. I presumed this was not the case - as this is a more difficult mistake to make, and given that they claimed some knowledge of no misuse - but I have no way to know for sure this isn't the case here. The end result would be the same. If the root cause was this case, indeed it's difficult to know if no misuse has happened.

2 hours ago by tedunangst

Assume they have two log entries.

  request 678: request from user bananas
  request 678: issued token for bananas
That looks good.

  request 987: request from user <blank>
  request 987: issued token for carrots
That doesn't look good.

4 hours ago by rubyn00bie

They very likely have a complete log of the actions performed; I'd guess they'd perform some kind of replay/playback after the bug was fixed and see what failed to pass. Assuming their changes immediately flag things like the researcher's initial attempts and discovery, it'd probably be pretty safe to say that no one was affected if no other instances are flagged.

4 hours ago by lordofmoria

I agree, especially given how many developer "eyes" were on this from having to integrate the Sign in with Apple flow into their apps.

Just as a first-hand anecdote to back this up: a dev at my former company (which did a mix of software development and security consulting) found a much more complex security issue with Apple Pay within the first hour of starting to implement the feature for a client and engaging with the relevant docs.

How did no one else notice this? The only thing I can think of is the "hidden in plain sight" effect. Or maybe the redacted URL endpoint here was not obvious?

6 hours ago by thephyber

It depends what the fix was. If the fix was just to add a validation check to the POST endpoint to validate that the logged in user session matched the payload (and session data was comprehensively logged/stored), this may be verifiable.

There are obviously lots of hypotheticals for which this might not be verifiable.

7 hours ago by deathgrips

Yeah, doesn't this just mean they didn't detect misuse?

5 hours ago by thephyber

It's not clear because it's not a direct quote and Apple probably wasn't explicit about the difference. I wouldn't infer one way or the other from this sentence.

4 hours ago by mazeltovvv

This is an amazing bug; I am indeed surprised this happened in such a critical protocol. My guess is that nobody clearly specified the protocol; anyone would have been able to catch this in an abstract English spec.

If this is not the issue, then the implementation might be too complex for people to compare it with the spec (gap between the theory and the practice). I would be extremely interested in a post mortem from Apple.

I have a few follow up questions.

1. seeing how simple the first JWT request is, how can Apple actually authenticate the user at this point?

2. If Apple does not authenticate the user for the first request, how can they check that this bug wasn't exploited?

3. Anybody can explain what this payload is?

  {
    "iss": "https://appleid.apple.com",
    "aud": "com.XXXX.weblogin",
    "exp": 158XXXXXXX,
    "iat": 158XXXXXXX,
    "sub": "XXXX.XXXXX.XXXX",
    "c_hash": "FJXwx9EHQqXXXXXXXX",
    "email": "contact@bhavukjain.com",  // or "XXXXX@privaterelay.appleid.com"
    "email_verified": "true",
    "auth_time": 158XXXXXXX,
    "nonce_supported": true
  }

My guess is that c_hash is the hash of the whole payload and it is kept server side.
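On question 3: the payload shown is just the decoded middle segment of a JWT, which has the shape header.payload.signature in base64url. A minimal decoding sketch (the token below is fabricated for illustration, not a real Apple token):

```python
# Decode the payload (middle) segment of a JWT. This does NOT verify
# the signature; it only shows where the claims shown above live.
import base64
import json

def decode_jwt_payload(token: str) -> dict:
    payload_b64 = token.split(".")[1]
    payload_b64 += "=" * (-len(payload_b64) % 4)   # restore stripped padding
    return json.loads(base64.urlsafe_b64decode(payload_b64))

# Build a fake, unsigned token just to demonstrate the round trip.
claims = {"iss": "https://appleid.apple.com", "email": "contact@bhavukjain.com"}
seg = base64.urlsafe_b64encode(json.dumps(claims).encode()).rstrip(b"=").decode()
token = "eyJhbGciOiJSUzI1NiJ9." + seg + ".fake-signature"
print(decode_jwt_payload(token))
```

The security of the scheme rests entirely on verifying the third segment (the signature) against Apple's public key, which is exactly why an Apple-signed token with an arbitrary email is so dangerous.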

2 hours ago by sdhankar

The bug is not in the protocol. The bug is in the extra value-add Apple was doing by letting the user choose another email address. 1. The account takeover happens on the third-party sites that use Apple login. 2. This seems like a product request to add value for the user by providing a relay email address of the user's choice. From the report: `I found I could request JWTs for any Email ID from Apple and when the signature of these tokens was verified using Apple's public key, they showed as valid.`

It's not a bug in the protocol or the security algorithm. A lock by itself does not provide any security if it's not put in the right place.

2 hours ago by albertTJames

Exactly, a case of broken security by overdoing privacy.

4 hours ago by arcdigital

For #3 it's part of the JWT ID Token. Take a look at https://openid.net/specs/openid-connect-core-1_0.html#Hybrid...

3 hours ago by guessmyname

All your questions can be answered by reading "Sign in with Apple REST API" [1][2]:

1. User clicks or touches the "Sign in with Apple" button

2. App or website redirects the user to Apple's authentication service with some information in the URL, including the application ID (aka OAuth Client ID), Redirect URL, scopes (aka permissions), and an optional state parameter

3. User types their username and password and, if correct, Apple redirects them back to the "Redirect URL" with an identity token, authorization code, and user identifier to your app

4. The identity token is a JSON Web Token (JWT) and contains the following claims:

• iss: The issuer-registered claim key, which has the value https://appleid.apple.com.

• sub: The unique identifier for the user.

• aud: Your client_id in your Apple Developer account.

• exp: The expiry time for the token. This value is typically set to five minutes.

• iat: The time the token was issued.

• nonce: A String value used to associate a client session and an ID token. This value is used to mitigate replay attacks and is present only if passed during the authorization request.

• nonce_supported: A Boolean value that indicates whether the transaction is on a nonce-supported platform. If you sent a nonce in the authorization request but do not see the nonce claim in the ID token, check this claim to determine how to proceed. If this claim returns true you should treat nonce as mandatory and fail the transaction; otherwise, you can proceed treating the nonce as optional.

• email: The user's email address.

• email_verified: A Boolean value that indicates whether the service has verified the email. The value of this claim is always true because the servers only return verified email addresses.

• c_hash: Required when using the Hybrid Flow. Code hash value is the base64url encoding of the left-most half of the hash of the octets of the ASCII representation of the code value, where the hash algorithm used is the hash algorithm used in the alg Header Parameter of the ID Token's JOSE Header. For instance, if the alg is HS512, hash the code value with SHA-512, then take the left-most 256 bits and base64url encode them. The c_hash value is a case-sensitive string.

[1] https://developer.apple.com/documentation/sign_in_with_apple...

[2] https://developer.apple.com/documentation/sign_in_with_apple...
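The c_hash rule quoted above can be sketched directly. This assumes the ID token is signed with RS256 (so the hash is SHA-256); the code value below is a made-up example, not a real authorization code:

```python
# c_hash per OIDC Core: hash the ASCII authorization code with the
# alg's hash function, keep the left-most half of the digest, and
# base64url-encode it without padding.
import base64
import hashlib

def compute_c_hash(code: str) -> str:
    digest = hashlib.sha256(code.encode("ascii")).digest()
    left_half = digest[: len(digest) // 2]   # left-most 128 bits of SHA-256
    return base64.urlsafe_b64encode(left_half).rstrip(b"=").decode("ascii")

print(compute_c_hash("c1a2b3.0.example-code"))
```

This lets a relying party confirm that the authorization code and the ID token came from the same response, which is unrelated to the email bug but explains what that claim is doing in the payload.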

2 hours ago by PunksATawnyFill

Let's start with the fact that Apple is forcing people to use an E-mail address as a user ID. That's just straight-up stupid.

How many members of the public think that they have to use their E-mail account password as their password for Apple ID and every other amateur-hour site that enforces this dumb rule?

MILLIONS. I would bet a decent amount of money on it. So if any one of these sites is hacked and the user database is compromised, all of the user's Web log-ins that have this policy are wide open.

Then there's the simple fact that everyone's E-mail address is on thousands of spammers' lists. A simple brute-force attack using the top 100 passwords is also going to yield quite a trove, I'd imagine.

Apple IDs didn't originally have to be E-mail addresses. They're going backward.

2 hours ago by ath0

The thing that made this bug possible is that, while your Apple ID has to be an email address, Apple has a mechanism to avoid exposing it to third parties, unlike Google's or Facebook's single sign-on implementations; the bug seems to be in the step between verifying your identity and telling Apple whether you would or would not like your email address to be exposed.

If anything, the issue is that third parties treat the email address as a unique, unchangeable identity, and then agree to rely on Apple's assertion of what your email address is. But given how hard identity is - and the challenges in dealing with passwords, account recovery, and name changes at scale - it's a pretty reasonable tradeoff to make.

an hour ago by 0x0

Sign in with facebook also lets the user choose whether or not to share their email address.

7 hours ago by phamilton

> The Sign in with Apple works similarly to OAuth 2.0.

> similarly

I understand why they wanted to modify OAuth 2.0, but departing from a spec is a very risky move.

> $100,000

That was a good bounty. Appropriate given scope and impact. But it would have been a lot cheaper to offer a pre-release bounty program. We (Remind) occasionally add unreleased features to our bounty program with some extra incentive to explore (e.g. "Any submissions related to new feature X will automatically be considered High severity for the next two weeks"). Getting some eyeballs on it while we're wrapping up QA means we're better prepared for public launch.

This particular bug is fairly run-of-the-mill for an experienced researcher to find. The vast majority of bug bounty submissions I see are simple "replay requests but change IDs/emails/etc". This absolutely would have been caught in a pre-release bounty program.

7 hours ago by zemnmez

> I understand why they wanted to modify OAuth 2.0, but departing from a spec is a very risky move.

The token described in this disclosure is an OpenID Connect 1.0 Token. OIDC is a state-of-the-art AuthN protocol that extends OAuth 2.0 with additional security controls. It's used by Google, Facebook, and Twitch, among others.

I'd do more analysis, but the author leaves off the most important part here (not sure why)

https://openid.net/specs/openid-connect-core-1_0.html#IDToke...

6 hours ago by thephyber

The important part is in the author's article. The POST to the exposed endpoint generates a valid JWT for the email address in the payload, not for the one in the logged-in session. Everything else is extraneous.

2 hours ago by dtech

That also explains why Apple could rule out abuse from the logs, which some commenters have disputed.

If they have all the JWTs, seeing if one had a different e-mail than the logged-in user should be fairly doable.

4 hours ago by SV_BubbleTime

Oh. Ok, so you did have to have an existing logged in session for any account, then could leverage that to get the token for another account by changing out the email?

6 hours ago by phamilton

My understanding is that the token itself is fine and within spec. But they altered the flow to accept an email address in one of the request payloads which opened the door for spoofing the email address. I've never seen an OAuth or OpenID flow that relied on the payload for identity.
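A hypothetical reconstruction of the bug class being described: the endpoint trusts a client-supplied email in the request payload instead of the email bound to the authenticated session. Every name here is invented for illustration; this is not Apple's code:

```python
# Stand-in for Apple's JWT signer: the point is only that whatever
# claims reach it come back bearing a valid signature.
def sign(claims):
    return {"signed": True, **claims}

def issue_id_token_vulnerable(session, payload):
    # BUG: whatever email the client POSTs gets signed.
    return sign({"sub": session["sub"], "email": payload["email"]})

def issue_id_token_fixed(session, payload):
    # FIX: only sign emails the authenticated session actually owns.
    email = payload.get("email", session["email"])
    if email not in session["owned_emails"]:
        raise PermissionError("email not owned by authenticated user")
    return sign({"sub": session["sub"], "email": email})

session = {
    "sub": "attacker-id",
    "email": "attacker@example.com",
    "owned_emails": {"attacker@example.com"},
}
forged = issue_id_token_vulnerable(session, {"email": "victim@example.com"})
print(forged["email"])  # the vulnerable path happily signs the victim's email
```

The one-line ownership check is exactly the kind of validation a replay-and-mutate bounty submission probes for.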

4 hours ago by SV_BubbleTime

Wait... I don't get it.

Why was Apple signing a response JWT when the user only supplied an email?

I'm not a web guy so I just don't see what they were going for here.

4 hours ago by donmcronald

This is likely it IMO. They probably pass the preferred email around as a parameter and the user can jump into the flow and modify it.

6 hours ago by noctune

I think it's actually the OIDC access token and not the ID token. The OIDC spec does not mandate any structure for the access token, but letting it be a JWT isn't out-of-spec.

7 hours ago by saagarjha

Apple supposedly marks certain beta builds with a bounty multiplier. I say supposedly because like their "research iPhones" they mentioned it in a presentation once and I never heard about it again.

2 hours ago by g_p

This might be what you're referring to:

From https://developer.apple.com/security-bounty/payouts/

"Issues that are unique to designated developer or public betas, including regressions, can result in a 50% additional bonus if the issues were previously unknown to Apple."

2 hours ago by saagarjha

Yes, that's it.

7 hours ago by snazz

I'm guessing that the research iPhones were given to a very select group of security researchers with track records of reporting important vulnerabilities under some kind of NDA.

6 hours ago by saagarjha

1. Still never heard of anyone getting them and 2. that's worse than useless.

6 hours ago by tyrion

How is this something that can happen? I mean, the only responsibility of an "authentication" endpoint is to release a JWT authenticating the current user.

At least from the writeup, the bug seems so simple that it is unbelievable that it could have passed code review and testing.

I suspect things were maybe not as simple as explained here; otherwise this is at the same incompetence level as storing passwords in plaintext :O.

6 hours ago by enitihas

Apple has had more simple "unbelievable" bugs, e.g.:

https://news.ycombinator.com/item?id=15800676 (Anyone can login as root without any technical effort required)

And to top it off (https://news.ycombinator.com/item?id=15828767)

Apple keeps having all sorts of very simple "unbelievable" bugs.

5 hours ago by meowface

You can't forget the infamous "goto fail": https://www.imperialviolet.org/2014/02/22/applebug.html

There seems to be kind of a common theme to these:

- SSL certificates not validated at all

- root authentication not validated at all

- JWT token creation for arbitrary Apple ID users not validated at all

I think these are all very likely due to error and not malice, but it's pretty crazy how these gaping holes keep being found.

6 hours ago by saagarjha

More recent example of Apple "undoing" patches: https://www.synacktiv.com/posts/exploit/return-of-the-ios-sa...

5 hours ago by fishywang

Last year (or maybe 2018?) my employer hired an external consultant to give engineers security training (all sessions were optional; they offered a few on different topics, and engineers could sign up for the ones that interested them). In one of the sessions I signed up for, during the pre-session chat (while waiting for everyone who had signed up to show up in the conference room), the external trainer "casually" remarked that "if you have an Android phone, you should throw it out of the window right now and buy an iPhone instead". That's the point where I lost all respect for them.

(The session itself was ok-ish. It was some training about XSRF, nothing special either.)

(That incident also prompted me to purchase a sheet of [citation needed] stickers from xkcd to put on my laptop, so the next time this kind of thing happens I can just point to the sticker. But I haven't gotten a chance to yet since receiving the stickers.)

5 hours ago by kohtatsu

This was pretty true not long ago. The window for OEM software patches on Android is still notoriously short, whereas Apple's first 64-bit phone, the 5s from Fall 2013, is still getting patches (May 20th was the last one, iOS 12.4.7).

Apple pioneered usable security with Touch ID and the Secure Enclave; a lot of Android fingerprint readers were gimmicks for years, same with the face unlocks. https://manuals.info.apple.com/MANUALS/1000/MA1902/en_US/app...

They also invest piles of money into privacy https://apple.com/privacy (1 minute overview), https://apple.com/privacy/features (in-depth with links to whitepapers).

I imagine that's where your teacher was coming from.

4 hours ago by SV_BubbleTime

It's none of my concern what camp people fall into... but...

I hired a very high-level pen test company; they mandated iPhones for their company work. They were the best infosec company we've ever hired. Sample of one.

I wouldn't suggest iPhones are safer than Android, but I also wouldn't suggest in any way that they are less safe overall.

6 hours ago by Jaxkr

Apple has really lost their touch; software quality has declined dramatically.

6 hours ago by SaltyBackendGuy

Anecdotally, I upgraded my wife's iMac to Catalina and she's experiencing issues (rendering latency) she never had before (she hadn't upgraded the OS since buying it 4 years ago). I figured it was good to get on the latest and greatest for security reasons; now she won't let me touch her computer anymore.

6 hours ago by ksec

I used to be in the latest-is-safest camp as well. But after all these years I am starting to understand why people don't update.

It is extremely frustrating, especially when Catalina removes features that were working perfectly.

5 hours ago by StreamBright

Welcome to the late adopter group. Never upgrade, unless it is absolutely necessary.

6 hours ago by iphone_elegance

does it really matter though?

4 hours ago by hootbootscoot

Someone with an AV production studio totalling over $100k in Apple products and a few million in outboard gear whose drivers worked fine before may just care a bit...

The whole pro-multimedia production crowd probably cares...

(vs the current Apple paramour: the multimedia consumer who wants to order pizza and get back to netflix on their phablet or whatever..)

5 hours ago by VMisTheWay

I'm not sure Apple ever had quality.

They are the Nintendo of computing. They have some novelties, but in general they are average at best. Notice that both Nintendo and Apple are big advertisers.

5 hours ago by donmcronald

My guess is that it has to do with that private relay because OAuth isn't too complex by itself. During the OAuth flow they probably collect the user preference, (if needed) go out to the relay service and get a generated email, and POST back to their own service with the preferred email to use in the token.

If that's it, it's about as bad as doing password authentication in JavaScript and passing authenticated=true as a request parameter.

Edit: Looking at the OAuth picture in the article, my guess would be something like adding a step between 1 and 2 where the server asks "what email address do you want here" and the (client on the) user side is responsible for interacting with the email relay service and posting back a preferred email address. Or the server does it but POSTs back to the same endpoint, which means the user could just include whatever they want right from the start.

The only thing that makes me think I might not be right is that doing it like that is just way too dumb.

AND I'm guessing a bunch of Apple services probably use OAuth amongst themselves, so this might be the worst authentication bug of the decade. The $100k is a nice payday for the researcher, but I bet the scope of the damage that could have been done was MASSIVE.

Edit 2: I still don't understand why the token wouldn't mainly be linked to a subject that's a user id. Isn't 'sub' the main identifier in a JWT? Maybe it's just been too long and I don't remember right.

5 hours ago by randomfool

The only thing I can think of is some 'test mode' override which inadvertently got enabled in production.

1. Don't add these.

2. If you must add something, structure it so it can only exist in test-only binaries.

3. If you really really need to add a 'must not enable in prod' flag then you must also continuously monitor prod to ensure that it is not enabled.

Really hoping they follow up with a root-cause explanation.

5 hours ago by saagarjha

Apple? No way.

7 hours ago by cfors

Wow. That's almost inexcusable, especially given the requirement forcing iOS apps to implement this. If they hadn't extended the window (originally April 2020, pushed to July 2020), many more apps would have been totally exploitable through this.

After this, they should remove the requirement of Apple Sign in. How do you require an app to implement this with such a ridiculous zero day?

6 hours ago by thephyber

I'm of the mind that just about any security bug is "excusable" if it passed a good-faith effort by a qualified security audit team and a development process is in place to minimize such incidents.

The problem I have is that I can't tell what their processes are beyond the generic wording on this page[1]

[1] support.apple.com/guide/security/introduction-seccd5016d31/web

6 hours ago by resfirestar

Even if there was clear evidence that this system underwent a proper security audit, with a failure this basic you would have to ask why it didn't work. What is going on inside Apple that brought them to the point of releasing a lock that simply opens with any key, despite the efforts of their state of the art lock design process and qualified lock auditors?

6 hours ago by Areading314

Writing some test cases for "can anyone generate a valid token" or "does an invalid token allow access" should be the first thing to do when writing an auth system.

3 hours ago by driverdan

> That's almost inexcusable

No, it's completely inexcusable. There should never be such a simple, major security vulnerability like this. Overlooking something this basic is incompetence.

6 hours ago by yreg

I believe the deadline is June 30. [0]

[0] - https://developer.apple.com/news/?id=03262020b

7 hours ago by gruez

Is it me or is this writeup low on details? There are a couple of commenters saying that this is a great writeup, but all it amounts to is:

1. what sign in with apple is

2. sign in with apple is like oauth2

3. there's some bug (not explained) that allows JWTs to be generated for arbitrary emails

4. this bug is bad because you can impersonate anyone with it

5. I got paid $100k for it

7 hours ago by antoncohen

I think the write up is so short because the bug is so simple. Send a POST to appleid.apple.com with an email address of your choice, and get back an auth token for that user. Use the auth token to log-in as that user. It's that simple.
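The shape of that request, as described in the write-up, is just a POST whose JSON body contains an email. The actual endpoint was redacted in the article, so the URL below is an obviously fake placeholder, and nothing here is ever sent over the network:

```python
# Build (but do not send) a request with the reported shape:
# POST { "email": ... } to the token-issuing endpoint.
import json
import urllib.request

def build_token_request(email: str, endpoint: str) -> urllib.request.Request:
    body = json.dumps({"email": email}).encode("utf-8")
    return urllib.request.Request(
        endpoint,
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_token_request("victim@example.com", "https://example.invalid/redacted")
print(req.get_method(), json.loads(req.data))
```

That the entire attacker-controlled input fits in one line of JSON is what makes the bug, and the write-up, so short.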

7 hours ago by snazz

Did it show what URL you had to send the request to? It looked to me like that was redacted. I'm guessing that that URL would have been in the developer documentation.

5 hours ago by adrianmonk

The URL has "X"s in it. I don't know if that means it is redacted or is variable.

Note that when they give the POST request, they say "Sample Request (2nd step)".

But what is step 2? The diagram above shows step 2 as a response, not a request. At least that's how I interpret an arrow pointing back toward the user. So the write-up conflicts with the diagram.

How do you resolve that conflict? One guess is that "Sample Request (2nd step)" should say "1st step" instead.

Another guess is that the arrow directions don't necessarily always indicate whether a step is a request or a response, so that step 1 could be a request and response, and step 2 could be another request and response that POSTs to a secret URL that was learned about in step 1. (This guess could make sense because the request is a JSON message with just the email field. There must be credentials somewhere, so either it's redacted or some kind of credentials were given another way, like in step 1.)

If this second guess is right, then a follow-on guess is that the crux of the bug is that in step 1, you sign in with a particular email, then Apple says "OK, now here's a secret URL to call to get a JWT token", and then in step 2, you change email, and it doesn't notice/care that you changed emails between step 1 and 2.

7 hours ago by ahupp

It seems low on details because the exploit was incredibly simple. AFAICT you didn't have to do anything special to get the signed token, they just gave it out.

> Here on passing any email, Apple generated a valid JWT (id_token) for that particular Email ID.

7 hours ago by cheez

it's literally that simple.

6 minutes ago by ljm

> For this vulnerability, I was paid $100,000 by Apple under their Apple Security Bounty program.

Fucking hell. Even after tax, that's a substantial pay-out.

4 hours ago by Ronnie76er

Just want to mention something about the id_token provided. I'm on my phone, so I don't have Apple's implementation handy, but in OIDC the relying party (Spotify, for example) is supposed to use the id_token to verify the authenticated user, specifically via the sub claim in the JWT id_token.

https://openid.net/specs/openid-connect-core-1_0-final.html#...

It's likely (although like others have noted, this is scant on details), that this value was correct and represented the authenticated user.

A relying party should not use the email value to authenticate the user.

Not contesting that this is a bug that should be fixed and a potential security issue, but perhaps not as bad.

Anyone else? Am I reading this right?
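The distinction being drawn can be shown in a few lines. This assumes, as speculated above, that the forged token carried the attacker's own (correct) sub but a spoofed email; the dicts stand in for a relying party's account store and a verified id_token's claims:

```python
# Two ways a relying party might map a verified id_token to an account.
accounts_by_sub = {"user-123": "victim's account"}
accounts_by_email = {"victim@example.com": "victim's account"}

# Hypothetical forged token: attacker's sub, victim's email.
forged_claims = {"sub": "attacker-456", "email": "victim@example.com"}

print(accounts_by_sub.get(forged_claims["sub"]))      # lookup by sub fails safely
print(accounts_by_email.get(forged_claims["email"]))  # lookup by email is taken over
```

Under that assumption, only relying parties that key accounts on email (a common practice, since it survives across identity providers) would be exposed.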

25 minutes ago by cfors

So the way I believe it works is that the vulnerability was that any valid email could be used to generate an Apple-signed JWT. Server-side validation would be unable to tell that the token wasn't issued on behalf of that user, since Apple actually signed it.
