Apple's CSAM Detection with Matthew Green


We’re talking about Apple’s new proposed client-side CSAM detection system. We weren’t sure if we were going to cover this, and then we realized that not all of us have been paying super close attention to what the hell this thing is, and have a lot of questions about it. So we’re talking about it, with our special guest Professor Matthew Green.

We cover how Apple’s system works, what it does (and doesn’t), where we have unanswered questions, and where some of the gaps are.

It’s kind of odd to me that iMessage has worse reporting than, like, League of Legends does.

This rough transcript has not been edited and may have errors.

Deirdre: Hello, welcome to Security Cryptography Whatever. I am Deirdre.

David: I’m David. And today we don’t have Thomas with us, but we do have Professor Matthew Green from Johns Hopkin— Johns Hopkins? Which one’s plural? Is it John Hopkins or Johns Hopkins?

Matt: They’re all plural, Johns Hopkins.

David: Okay.

Matt: Hi, thanks for having me on here.

Deirdre: Yeah. Full disclosure upfront, Matt is on the board of the Zcash Foundation and I’m an employee of the Zcash Foundation, but we’re not doing anything Zcash-related today. So that’s okay.

Today we’re talking about Apple’s new proposed CSAM detection system, and we originally were sort of bandying about, should we talk about this on the podcast?

Do we have a take, do we have anything to add to this? And we weren’t sure if we were going to do anything. And then we realized that not all of us have been paying super close attention to what the hell this thing is, and have a lot of questions about it. So we totally have stuff to talk about.

David: Yeah. So basically my knowledge of this situation is that Apple demo’d some, quote, "privacy preserving" client-side scanning to detect, basically, child porn on people’s phones, with the goal of then submitting that, I believe, to the automated child porn API that the government runs—

Deirdre: Actually, I don’t think they have— well, I don’t know if they have an API, but they submit a report to the National Center for Missing & Exploited Children’s CyberTipline. And I don’t know how— since it’s large enough numbers from providers like Facebook, there has to be something automated to it, but there are still humans in the loop when they make those reports.

David: Yeah, I was talking with someone who works on this from the Facebook Messenger angle a few years back, and they implied, uh, something around, I believe at the time, 24 million submissions per year, of which 18 million were coming from Facebook—

Deirdre: oh,

David: and they were automated through some

Deirdre: yeah,

David: API.

Deirdre: Yeah. There were, I think, 21 million reports to that tip line, and 20 million of them were from Facebook / WhatsApp / Instagram. And people tend to look at those very large numbers and freak the fuck out. But you have to remember that this is Facebook scale, which is, across all of those services, about 3 billion users. And Facebook alone reported in 2012, 2013 that they were ingesting somewhere between 300 million and 400 million images a day, and that was like eight or nine years ago. So 20 million reports in a year— you can judge whether that is a large number, but it’s spread across many users and many images, so—

Matt: But keep in mind, those may not all be unique users either. There’s a pattern where people make an account, get it closed down for doing exactly this, and then open up, you know, 10 new accounts. And so it’s not totally clear; you should maybe divide that number by something. Yep.

Deirdre: These are clearly back-of-the-envelope calculations— you’re trying to figure out what the scale is. Yeah.

David: 60 million alerts sounds just useless to me. Like, I used to sell, effectively, an alerting SaaS product, and like the first rule of doing that was: don’t alert people about 60 million things. Like, what are you going to do with that?

Deirdre: Yeah, that’s a good question.

David: but yeah, so I guess I know that they’re doing some kind of hypothetically privacy-preserving fuzzy hash on the client side.

I don’t know how they’ve defined privacy-preserving, or if I’m just injecting that word. And I don’t know where they’re running this, or under what conditions they’re running this. And then I know that it’s somehow also related to the kind of family controls, where I think they added in a new feature.

Where, if you’re like 13 and on a family Apple account with your parents, it’ll tell the parents if you’re sharing nudes, something like that.

Deirdre: yeah.

Matt: Yup.

David: So, lots of stuff all going around. But that’s pretty much the extent of my knowledge. I haven’t kept up with what the technology actually is, what the crypto is, or the conditions in which they’re running it, or why they decided to do it.

Deirdre: Right. So three weeks ago, the first week of August— which is also the week that I went back to DEF CON for the first time in two years, and it ruined— not ruined my DEF CON, but you know, whatever— they announced three things. One, they added some stuff to Siri, so that if you try to, like, ask Siri for child abuse images or something, it’ll be like, no, maybe you should go talk to someone about this query.

The second thing was in iMessage / Messages: they will have a nudes filter for incoming nudes being sent to you, if you are a minor on a family plan and you’re explicitly 12 and under, or under 13, I forget which. It’ll say, "Hey, we think this is a nude photo. Are you sure you want to look at it?"

And if you say "yes", it will be like, "we want to take care of you are going to notify your parents" on this plan and you don’t really get an option to opt out of that. So that’s one thing. That is one classifier that they’re using. But then the third thing, is this CSAM detection on the client, but it’s not just the client.

It’s this whole smorgasbord of things that they’ve put together. And that is for any photo, on an iOS device, iPad device, or macOS that gets this update, that would get uploaded to iCloud Photos. And as a lot of people probably know, iCloud Photo backups are on by default, and they push you towards leaving your iCloud backups on all over the product.

Like, I have iCloud turned off on all of my Apple products, because I just don’t use it. I just don’t use iCloud because I don’t have an iPhone, I don’t have an iPad, I just have a lot of Macs. But they still push you and nudge you in the UI and the UX to turn this on all over the place.

And just to add some definition: CSAM stands for child sexual abuse materials, usually it’s images, but can include video. This thing that Apple’s proposing is just static images, as far as we know.

So, all of the documents that they’ve released and all of the interviews and, presentations that they’ve given about the system so far seems to be a combination of client-side scanning where on your local device, they run images through a new hash function, a perceptual fuzzy match called NeuralHash.

This is a brand new hash function that is very poorly specified in terms of what we would like to see from quote unquote, cryptographic hash functions. It’s not necessarily the same as a cryptographic hash function, but it’s a hash function nonetheless, that— it runs through that.

As part of the OS update, they push down a blinded database of hashes of known CSAM images. By blinded: it’s blinded with a secret scalar for P-256; P-256 is a common NIST elliptic curve. So you take the hash of the image you want to back up to iCloud, and you take the set of blinded hashes in this database, and you try to do a private set intersection on your device.

The whole point of this is so that they can ship down these hashes of known, previously reported CSAM images onto your device— but you, if you are a bad guy, for example, won’t be able to tell what they are and what they correspond to, and try to game them. So if you get an intersection, you get a hit locally.
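The blinding idea can be sketched with a toy discrete-log group, using plain modular exponentiation in place of P-256 scalar multiplication. All names, parameters, and the hash-to-group map here are illustrative stand-ins, not Apple's actual protocol: the point is only that the client, lacking the server's secret scalar, can't tell which images the blinded entries correspond to.

```python
# Toy sketch of blinded hash matching: exponentiation mod a prime stands in
# for P-256 scalar multiplication. Purely illustrative, not Apple's protocol.
import hashlib

P = 2**127 - 1  # a Mersenne prime, standing in for the elliptic curve group

def hash_to_group(data: bytes) -> int:
    """Map a (Neural)hash value to a group element (hypothetical stand-in)."""
    h = int.from_bytes(hashlib.sha256(data).digest(), "big")
    return pow(3, h % (P - 1), P)

# The server blinds each known-CSAM hash with its secret scalar before
# shipping the database down in the OS update.
server_secret = 0x1234567890ABCDEF
known_hashes = [b"known-image-hash-1", b"known-image-hash-2"]
blinded_db = {pow(hash_to_group(h), server_secret, P) for h in known_hashes}

# The client never learns server_secret, so it can't invert blinded_db back
# to plain hashes and game the list. A match can only be completed once the
# secret is applied to the client's value (we play both roles here to show
# that matching values do coincide).
client_image_hash = b"known-image-hash-2"
client_element = hash_to_group(client_image_hash)
matched = pow(client_element, server_secret, P) in blinded_db
print(matched)  # True: same underlying hash, so the blinded values coincide
```

A real deployment uses an actual elliptic curve and a carefully designed hash-to-curve map; this sketch only shows why blinding hides the database contents from the device.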

Then what it does is generate a share for a threshold secret sharing scheme— they generate a key share for this threshold secret sharing protocol. They create a voucher that encrypts the result of your match and other data about the match.

And they include, like, the public key, or the key share, or whatever. And when the image gets uploaded to iCloud, it includes this voucher. The idea is, you need to hit 30 matches uploaded to iCloud. Those could be multiple copies of the same image, or 30 completely independent matches of known CSAM images, uploaded to iCloud. Once you hit that 30— and 30 is just a parameter that Apple has set. They’ve decided— they claim— based on the possibility of false positives with the hash function, and the largest iCloud photo libraries they’ve ever seen, based on all of these parameters, that 30 is a good level to avoid false positives, and all this sort of stuff.
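The threshold mechanic— nothing decryptable until 30 vouchers exist— is the classic behavior of threshold secret sharing. Here is a minimal textbook Shamir sketch (with a small threshold for the demo; the field, threshold, and variable names are illustrative, not Apple's actual scheme):

```python
# Minimal textbook Shamir secret sharing over a prime field.
# Apple's design reportedly uses a ~30-share threshold; we use 3 here.
import random

PRIME = 2**61 - 1  # prime field modulus (illustrative choice)

def make_shares(secret: int, threshold: int, n: int):
    """Split `secret` into n shares; any `threshold` of them reconstruct it."""
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(threshold - 1)]
    def f(x):
        return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the secret."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret

secret = 123456789
shares = make_shares(secret, threshold=3, n=5)
print(reconstruct(shares[:3]) == secret)  # True: 3 shares suffice
# Fewer than threshold shares reveal nothing: interpolating 2 points of a
# degree-2 polynomial almost surely lands on the wrong value.
print(reconstruct(shares[:2]) == secret)
```

In the real system the "secret" protects a decryption key for the vouchers, so Apple's servers can open them only once enough matching shares have accumulated.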

David: so this is happening when users are already uploading pictures to iCloud?

Deirdre: Yes, because if you just turn off backup to iCloud, this whole system is circumvented. At least the way that they have currently done it.

David: Don’t they have these images in plain text, if you’re uploading them to iCloud?

Deirdre: They— well, right now they do, yes. So let me finish the last part, and then we can talk about where the cracks show.

They meet this threshold. They have enough shares of the secret sharing to reconstruct the secret, and they can decrypt all of these vouchers.

They compare the image that you’ve uploaded— right now it’s in plain text, but in the voucher they have a "visual derivative" [I’m using air quotes] and the information about the match that you did. And then on the iCloud server side, they check it again. They check the private set intersection again, to make sure that it’s accurate. A human at Apple, or one of their contractors, looks at the visual derivative of the image, which is probably some pixelated form, so that you’re trying to shield a human from looking at child sexual abuse imagery as much as possible, while also having a human checking for errors in this automated system.

And if they confirm that this is a match, they will then wrap that up in a report to the National Center for Missing and Exploited Children with all of the details, as they are required to by US law. So that, later, NCMEC will find the law enforcement agency that would take over this case? Somehow? Maybe? Crossing our fingers? And then that law enforcement agency would go to Apple and get a warrant for everything Apple has, to hand over information about this case.

So you raise a very good question, which is that, iCloud currently isn’t end-to-end encrypted. So why the hell are they building this whole thing? Which seems to operate as if the things that you’re uploading are end-to-end encrypted.

That’s a very good question.

Matt: So people have a lot of theories about this, and the only reasonable theory— I mean, let’s just not fool ourselves— the only reasonable theory is that at some point in the future, Apple is hoping to make these photos end-to-end encrypted, or at least to keep that option open. It’s weird; we hear a lot of people arguing about this in different ways, tailoring their arguments to whatever their belief is, but let’s just go with the assumption that that’s going to happen, ’cause it makes sense.

Deirdre: And they have tried— there are Reuters reports that they tried to do this several years ago, and that pressure from the FBI and other US agencies— basically, they said, "please don’t, it will make our lives harder"— and Apple backed down. But they’ve done enough work in the past, and this sort of slots together with it. It only really makes sense— instead of just doing the private set intersection on the iCloud side when you upload an image— to push it down to the client because you want to have everything that you upload to iCloud encrypted end-to-end, in theory.

Oh, I left out one more part: the whole "is this really privacy preserving or not" part. One leakage that you might notice in the system I originally described is that, if only the accounts that have matches are uploading these vouchers, you would be able to distinguish between accounts that have matches and ones that don’t, whether or not you can see the plain text of the image.

So what they did is: everyone who is uploading to iCloud uploads dummy vouchers that are literally empty, so that you have some background noise— in case someone steals your phone and uses it for downloading this sort of child abuse imagery, and it’s a one-time or two-time hit, or something like that.

Or you’re a person who downloads a bunch of adult, consensual porn, and sometimes there are, unfortunately, child sexual abuse images in there, and it’s just a one-time, two-time hit. Or there is a bad match in this hash function that we don’t know much about. That level of background noise is kind of spread across all uploaders to iCloud.

So it’s only when you hit this threshold of 30 matches that they can get decrypted. And everyone else is kind of preserved with a baseline noise, to protect their privacy and to not have them stand out as an outlier— like, "Hmm, you know, Jane Smith’s device never uploads any vouchers with their iCloud photos, and John Smith’s does", that sort of thing. But of course we don’t know much about, like, how many dummy vouchers there are. It’s very fuzzy, and, you know, we would like more detail, please.

David: what exactly do we mean by private set intersection in this context? Like, does it mean both what it normally means? And also what does it normally mean?

Matt: So, what it normally means is you have two different people who have a set of elements or strings or whatever. And typically what you want to do is you want to have some kind of comparison, some kind of set intersection, figuring out which of those elements are shared on both sides and you want to do it privately.

And the private part is a little tricky, right? So you don’t want anyone to learn what the entire set— so if Alice and Bob are going to compare their lists, you don’t want Alice to learn anything about the non-matching ones and vice-versa, uh, on Bob’s side. and so that’s important.

There’s also the question of who learns about the matches. Some PSI protocols say, okay, if there’s a server and a client, then the client should be the one who learns. So, for example, a lot of people have been talking about using PSI for Signal contact discovery and things like that. Signal has a big list of phone numbers, and I have a small phone book of my own contacts, and with PSI you could find the matches between Signal’s big list and my little list— but the idea of PSI is that only I would learn the matches. I don’t want Signal learning about them.

And so, what’s actually happening here is kind of backwards. It’s the opposite direction: with a client and a server, in this system only the server learns about the matches after you’ve crossed 30, not the client. The client doesn’t get any knowledge, hopefully, at all about whether there are matches— or at least it’s not really part of the design.

So it’s server notification PSI.

Deirdre: And it’s using a cuckoo hash table, which— it’s funny, because I was learning more about cuckoo hash tables because of this. And it’s like, no, you don’t just have the NeuralHash function; you’ve got another hash function as well. And I think they haven’t done much description of that other one at all, that I can see.
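For readers unfamiliar with the data structure: a cuckoo hash table gives every key two candidate slots under two hash functions, and an insert that finds both occupied evicts ("cuckoos") the occupant into its other slot. A minimal sketch, purely illustrative— Apple's actual table layout and hash functions are not public in this detail:

```python
# Minimal cuckoo hash table: two hash functions, two candidate slots per key;
# an insert evicts the occupant when both slots are taken. Illustrative only.
import hashlib

class CuckooTable:
    def __init__(self, size=16):
        self.size = size
        self.slots = [None] * size

    def _h(self, key, seed):
        # Two hash functions derived from SHA-256 with different seeds.
        d = hashlib.sha256(seed.to_bytes(1, "big") + key.encode()).digest()
        return int.from_bytes(d, "big") % self.size

    def insert(self, key, max_kicks=32):
        for _ in range(max_kicks):
            for seed in (0, 1):
                i = self._h(key, seed)
                if self.slots[i] is None:
                    self.slots[i] = key
                    return True
            # Both slots full: evict the occupant of the first slot and
            # try to re-insert the evicted key instead.
            i = self._h(key, 0)
            key, self.slots[i] = self.slots[i], key
        return False  # a real implementation would resize/rehash here

    def contains(self, key):
        # Lookup is O(1): a key can only ever live in one of its two slots.
        return any(self.slots[self._h(key, s)] == key for s in (0, 1))

t = CuckooTable()
for k in ["hash-a", "hash-b", "hash-c"]:
    t.insert(k)
print(t.contains("hash-b"), t.contains("hash-z"))
```

The constant-time, two-probe lookup is the property that makes cuckoo tables attractive for compactly shipping a large blinded database to a phone.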

Matt: Yeah, I don’t know what they’ve actually said about it, and I don’t know what the accuracy rate of it is. And— because it’s summertime— I still haven’t gotten all the way through, down to the end, and I think there’s a little weird stuff where they add some noise, in this 30, these dummy vouchers.

And there are some parts of this that I haven’t dug down into quite deep enough to know if there are any problems. So I’ve just been kind of assuming that, you know, this is all doing, roughly speaking, what it claims to be. And even if you just assume it’s doing all of these things perfectly, there are a lot of problems.

So, without even getting into those very, very deep details.

Deirdre: Yeah, exactly.

David: Cuckoo hash is something else that Apple built themselves?

Deirdre: Uh, the idea of a cuckoo hash table is not unique to Apple; it’s a thing that has existed. But their instantiation of this one uses their own hash functions, as far as we can tell, and it’s brand new and not very well specified, beyond a few paragraphs here and there and some documentation they put on the website.

Matt: Yeah, it’s weird. I mean, there’s a lot of stuff that was really over-specified. There’s an entire document that says, here’s how this PSI protocol works, which is clearly written by some of Apple’s cryptographers and also Dan Boneh, who’s a Stanford cryptographer. And then there are parts of this system that were just never specified.

So NeuralHash. Nope. We don’t get to know how NeuralHash works. It’s fundamental to how this entire system works, because if it doesn’t work well, then everything’s going to be crazy broken. Um, but no, Apple’s not going to publish that.

Of course, the good news is, thanks to the magic of reverse engineering, somebody got a copy of it about a week or two ago, and since then has been, like, letting loose the hounds of finding all kinds of collisions and problems with it. So that’s been really interesting. But the way that Apple has rolled this out is, uh, very partial information, and then a little bit of extra information.

You know, all this other stuff happening at the same time— it just makes it super confusing. And not even confusing, just, like: this is something they’re releasing on a billion iCloud users, allegedly very soon.

Deirdre: Yeah. In theory it’s iOS 15 and the latest releases of macOS and all that, which is in like a couple of weeks.

Matt: and the idea that they’ve just kind of dumped all this excess information in one area, not any information in the other area. And then they expect people to actually tell us whether this is safe. I don’t think it’s possible and I’m, I’m shocked that they’re doing it at such a huge scale. It’s really amazing to me.

Deirdre: Not to beat up on NeuralHash— other people are beating up on it— because even if NeuralHash were perfect and had no false positives and the whole shebang, we still have a lot of other problems with the system in general: both what it does, as opposed to what it’s supposedly supposed to do, and also all the questions we have. NeuralHash seems to be a new perceptual fuzzy hash, kind of like PhotoDNA, which has been used for over 15 years server-side by a lot of these electronic service providers to catch other CSAM. But PhotoDNA has been protected to avoid reverse engineering, because it’s very vulnerable to reverse engineering.

So, yeah— unlike PhotoDNA, NeuralHash is getting deployed on your device, in your hand, because it’s trying to do this client-side. They have said that it is a convolutional neural net on the inside, and then it turns the resulting vector of whatever classification it has done into a bit string, like a hash function would. This is an interesting innovation in terms of trying to move perceptual fuzzy hashes forward, because there’s been a ton of innovation in machine learning and image classifying, especially in the past 15 years since PhotoDNA came out.
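The "vector to bit string" step is commonly done with random-hyperplane locality-sensitive hashing: one bit per hyperplane, recording which side of the plane the embedding falls on, so that nearby embeddings usually share bits. This is the general technique only; Apple has not published NeuralHash's exact construction, and the dimensions and values below are made up:

```python
# Sketch: turning a neural-net embedding vector into a compact bit string
# via random hyperplane projections (a standard LSH trick). This is the
# generic technique, not Apple's published NeuralHash construction.
import random

random.seed(0)
DIM, BITS = 8, 16
# Fixed random hyperplanes, analogous to a final projection layer.
hyperplanes = [[random.gauss(0, 1) for _ in range(DIM)] for _ in range(BITS)]

def embedding_to_bits(vec):
    """One bit per hyperplane: which side of the plane the vector falls on."""
    return "".join(
        "1" if sum(h_i * v_i for h_i, v_i in zip(h, vec)) >= 0 else "0"
        for h in hyperplanes
    )

v = [0.3, -1.2, 0.7, 0.0, 2.1, -0.4, 0.9, -0.1]   # a made-up embedding
v_nearby = [x + 0.01 for x in v]                   # slightly perturbed "image"
# Nearby embeddings usually hash to the same (or a very close) bit string.
print(embedding_to_bits(v))
print(embedding_to_bits(v_nearby))
```

Only dot products whose value sits very close to zero can flip under a small perturbation, which is why similar images tend to collide on purpose— the defining property of a perceptual hash.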

But we also know that there are adversarial examples against neural nets that show they can be gamed really easily. And that’s what all of those examples Matt was just talking about are. Like, "Here, this is a puppy, and we tweak it a little bit and we’re able to get, like, a collision out of NeuralHash".

And it’s like, that’s not great. I don’t want a weirdly tweaked version of a puppy that someone sends me— and that then accidentally gets backed up to my iCloud Photos, because that’s on by default— to get me flagged. And you just have to trust that Apple’s people will catch it, because that’s kind of where several of their safeguards are.

It’s just, literally, a person will look at a pixelated version of an image and be on their best— they’re well rested, not traumatized— looking closely at it and making sure that it is, in fact, a puppy and not child abuse, and doesn’t get you reported to the only organization designated by US regulations to hold and forward on child abuse imagery in the United States to law enforcement.

Matt: So I think one of the things that’s helpful, before we totally just talk about the tech— I know you’ve been involved in this, Deirdre, going back a couple of years. There’s a lot of background and context to this that I think people from the outside don’t necessarily see. And part of this is that, I think it was back in 2018, Attorney General William Barr and a few of his counterparts in other countries were very upset.

They became very upset that Facebook was about to deploy end-to-end encryption

Deirdre: Yep.

Matt: broadly, there was a lot of publicity about this. And so they wrote an open letter to Facebook saying specifically, "we are very concerned that if you activate encryption in your new systems, this is going to imperil our ability to, access CSAM", uh, so child sexual abuse materials, "and also just scan for other things like terrorist content". And they even mentioned disinformation content, uh, that could mess with elections. So there’s actually a pretty big list here. What they were asking for was not just, you know, CSAM, but CSAM was definitely the headline.

This led, within our community— and our community, speaking broadly, is the cryptographers and also the people who care about policy and the people who are in industry— to a series of pretty loud discussions, in places like Lawfare. Alex Stamos at Stanford had a whole bunch of workshops about this.

And specifically, for the last two years, we have all been talking about this kind of problem: how do you deal with end-to-end encryption when governments are asking very explicitly for the ability to scan files? And everybody was talking about this in terms of, like, end-to-end encrypted messaging. We didn’t really think that much about backup, but it was sort of there.

And so I just want to point this out that this has been a big issue. Everyone kind of knew that there was huge amounts of government pressure. Everybody was talking about this. We were having meetings with NCMEC. Deirdre, I think you came to one of those?

Deirdre: Not with NCMEC, but to several of those workshops. Yes.

Matt: Yes. Okay. This was not something that just popped out of the blue with Apple. People are asking, you know, was Apple pressured, or is Apple able to resist government pressure? I mean, the idea that this has nothing to do with that government ask is like saying that the latest results in COVID vaccines have nothing to do with the pandemic.

I mean, of course the context here is that the government— the most powerful entities in the world— asked for this capability and made it clear they were going to apply pressure. And there was even legislation in Congress, which failed, that was potentially going to mandate this.

Deirdre: Yes. There was a hearing on this, and I don’t know if it was Tim Cook or someone else from Apple who was, you know, on the stand in front of the Senate or the House or whatever. And they were told, "if you’re not going to do something about this problem" [this problem being CSAM on electronic service provider platforms], "we will make you do something about it".

So there was a threat of legislation and regulation.

Matt: Yes. So you can’t look at anything Apple is doing here as just coming out of the blue. The government made a threat, the government made the demand, and everybody said, "who’s going to crumble first?" I thought it was going to be Snapchat, I really did. Uh, Snapchat actually, I think, is building in some client-side scanning stuff, or has done it.

And so I thought they were going to be first. Apple jumping ahead of everybody in line to—

Deirdre: It’s wild—

Matt: It’s wild. It is insane. It is unexpected based on their reputation. It’s unexpected based on the fact that they don’t even have end-to-end encryption, as David mentioned. So—

So once you— and in iCloud Photos, yes.

They don’t even have end-to-end encryption. So what are they doing announcing this crazy client-side system when they don’t need it? Weird. But, you know, you have to look at this in the context of: this shoe was waiting to drop from someone, it just happens to be Apple, and the cause of the shoe dropping was all of this government pressure.

So Apple builds this system. And the question that you have to ask is, well, did Apple actually do the work to really prove to people that deploying this was a good idea? They’re doing it as we speak, and they seem to be doing more of it on a daily basis— USENIX talks and so on.

The only other thing I want to say about this that is non-technical is that there is an aspect of this that is a little bit stronger than a lot of the other services I’ve seen. Many people have been talking about CSAM distribution— like me sending, you know, some horrible file to you.

There are systems out there that also scan for CSAM in backups. So, for example, I think Microsoft— there have been— yeah.

Deirdre: When you share out of your Drive, it is automatically scanned.

Matt: Yes, but when you share— the critical thing is if you upload something. And I don’t have literal documents proving this, because Google doesn’t really advertise it, but generally speaking, most of the cloud providers, when you upload something to your personal drive or your personal backup, seem to draw a distinction between private backup and distribution by sharing it with somebody—

Deirdre: else.

Yep. That coheres with what I’ve heard. Yeah.

Matt: Yeah. They do the scanning when you share it, because the justification for CSAM scanning is preventing distribution of it.

What Apple’s doing is not that. Apple does have a photo sharing feature— you can share albums; I’ve done this with my daughter, you know, mostly my dog pictures and whatever— but Apple could absolutely have implemented this scanning on share.

They chose not to do it. They’re doing it for every single photo. I have 29,000 photos in my photo library. If I have iCloud Photos turned on, everything since 2010 that I’ve ever taken a picture of or downloaded from the internet will get scanned, even if I never share it with anyone. And a lot of people are not making a big deal of that. I just want to make a big deal of it, because it’s enormous.

Deirdre: And I want to dovetail on that, because this is not an image classifier to detect any child abuse.

Matt: Yes,

Deirdre: This is detecting a match against known, reported child abuse images.

Therefore, if you are a person creating new child abuse images— and may you rot in hell— it will not detect you. One, you can turn off iCloud Photos; you could just do that, because it’s—

Matt: But even if you don’t— yeah, if you have your iPhone camera—

Deirdre: The only way you would get detected and reported by the system is if you also had 30 hits on known child abuse imagery, and that’s—


Matt: — in your photo library.

Deirdre: In your photo library. So that kind of goes to what I want to talk about: the system is being touted as a way to protect children, but it only detects known images. You have to have 30 hits, and those will then get reported to NCMEC. It does not detect general child abuse images.

It is not a nude classifier. Even the nudes classifier may detect something new that it has not seen before and report it to a human, but that’s not what this is. And so I guess my point is that even if it works perfectly, and NeuralHash never has any false positives— if everything is perfect and it’s never wrong— it is very narrow in what it actually does, but very broad in scope in what it enables, for every iOS, macOS, and iPadOS user that it rolls out to.

Building this whole system from scratch is a big lift, and there are a lot of pieces of cryptography flying around in this system. But take this system as designed— because it’s coming, basically, it seems like, whether we like it or not— and tweak it to, like, oh, just search for some other images too.

Because, as far as we know— we don’t know enough about NeuralHash to know if they’ve trained this convolutional neural net on a specific corpus, or if it’s just a general image matcher. And if it is just an image matcher, a fuzzy image hasher: perceptual hashes allow you to pixelate the image, change the color, rotate it slightly, stuff like that, and still match, because to your human eyes, "that is the same image". With a regular cryptographic hash, if you tweak the bytes you put into the hash slightly, it would output two completely different hashes. Perceptual image hashes are supposed to avoid that.
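The contrast described here is easy to demonstrate. A cryptographic hash avalanches on a one-bit tweak, while a toy "average hash" (a simple stand-in for real perceptual hashes like PhotoDNA or NeuralHash, which are far more sophisticated) is unchanged by the same tweak:

```python
# Cryptographic avalanche vs. perceptual stability under a tiny image tweak.
# average_hash is a toy aHash, a stand-in for real perceptual hashes.
import hashlib

img = bytes(range(64))                        # toy 8x8 grayscale "image"
img_tweaked = bytes([img[0] ^ 1]) + img[1:]   # flip one low bit of one pixel

# Cryptographic hash: the two digests are completely different.
print(hashlib.sha256(img).hexdigest()[:16])
print(hashlib.sha256(img_tweaked).hexdigest()[:16])

def average_hash(pixels):
    """One bit per pixel: above or below the mean brightness (toy aHash)."""
    mean = sum(pixels) / len(pixels)
    return "".join("1" if p > mean else "0" for p in pixels)

# Perceptual hash: identical bit strings despite the tweak.
print(average_hash(img) == average_hash(img_tweaked))  # True
```

That stability under small edits is exactly what makes perceptual hashes useful for matching re-encoded or lightly edited images, and also what gives adversarial tweaking its attack surface.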

If this system is out there, it is not that much work to change it to detect other things. And we already know that Apple has bent to pressure from different governments in markets such as China— to change how their systems work, to change where they store their iCloud keys— to maintain access to the market in China, for example. And there are governments that are pressuring electronic service providers to look for certain content, or prevent certain content on their platforms, for reasons political or otherwise. And now that this thing is theoretically going to be out there, it’s very scary to think that it’s not that much work to tweak it for very different ends than detecting known child abuse imagery.

Matt: Yep. I just want to add two more things to that. One of them is that this thing that Apple is talking about— this ability to scan for known images— if you think about it on, like, a two-dimensional scale: on one axis you have how effective this is going to be at stopping the real problems with child porn— getting to distributors, getting to originators— and it’s about as close to the zero end as possible.

It is the most restricted system, in terms of effectiveness at solving the problem, that you could build. So, not very good. At the same time, you think about the risk, and the number of different problems and potential issues this system could have— this totally untested system— and the number of files it’s going to be scanning.

The number of users that are going to be affected by it— it’s about as high-risk as you could get. So if you’re balancing, you know, what is the societal benefit of catching people with this thing, versus what is the risk that this will reopen, you know, the gateway to hell of all kinds of risks— you have found the sweet spot of, wow, you couldn’t have done worse. Thanks, Apple.

So it’s very frustrating to see. They could have started with something smaller, something more limited, and made a lot of difference here, potentially, but they’re not. So that’s one thing I want to say. And just before I stop ranting, um, I do want to point out that this idea— that we are just going to be scanning for known CSAM images and that’s going to be acceptable— is, I think, a very dated and almost obsolete idea.

Deirdre: Hm.

Matt: Google has actually begun, uh, using neural networks to scan for new CSAM imagery

Deirdre: Oh!

Matt: They have very sophisticated systems.

Deirdre: There was an update from OnlyFans, who had their first transparency, uh, trust & safety report this month, and said that they are also training up their own neural nets to detect CSAM versus !CSAM. And it's kind of fascinating, because OnlyFans might have their own corpus with which to train, to make sure that they don't get false positives with their own neural nets, that other electronic service providers might not have available. But they're doing that too! They don't have anything that's end-to-end encrypted; everything is unencrypted and public, because you have to serve it to anyone who logs into OnlyFans. But they're doing it too. So it's not even just the big companies like Google, but

Matt: Yeah,

Deirdre: OnlyFans! Anyway, go ahead.

Matt: Yeah, no, that's absolutely right. So this perceptual hashing technology that Apple is currently implementing is last-generation technology. It is not going to be the industry standard for much longer. You know, it takes a little while for us to have the neural network technology, make everything efficient enough, and build up the corpi, you know, corpuses, to build these kinds of new detection systems. But they're coming. They're already here in places, and they're coming everywhere within five to ten years, tops. And at that point, Apple is going to have to make a decision. They have a powerful neural net processor on the phone.

Do they want to have a system that is obsolete and ineffective, or do they want to update their system to scan for new imagery using this processor? They're going to have to make a decision. And once they've made the decision that CSAM scanning, on your device, of your private files, is something that they are responsible for, they cannot tell governments no.

They can't tell governments they're going to keep using the ineffective old system. It's really hard to imagine a world in which we are not moving to powerful neural network-based scanning for new types of content, and I don't see a way out of it.

It's just inevitable, based on what they announced two weeks ago.
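For listeners who want a concrete picture of what a perceptual hash is, here is a toy "average hash" in Python. This is only a sketch of the general idea; NeuralHash and PhotoDNA are far more sophisticated, and the 4x4 "image" here is purely illustrative. The point is that, unlike a cryptographic hash, small changes to the input leave the fingerprint unchanged.

```python
# Toy perceptual "average hash": a stand-in for the general idea behind
# systems like NeuralHash or PhotoDNA (whose real designs differ greatly).

def average_hash(pixels):
    """pixels: a small 2D list of grayscale values (0-255).
    Returns a bit string: 1 where a pixel is above the mean, else 0."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return "".join("1" if p > mean else "0" for p in flat)

def hamming(a, b):
    """Number of differing bits; a small distance means 'same' image."""
    return sum(x != y for x, y in zip(a, b))

# A made-up 4x4 "image" and a slightly brightened copy hash identically,
# because brightening every pixel shifts the mean by the same amount.
# A cryptographic hash of the raw bytes would change completely.
img = [[10, 200, 30, 220],
       [15, 210, 25, 230],
       [12, 205, 35, 215],
       [18, 220, 28, 225]]
tweaked = [[p + 3 for p in row] for row in img]

assert average_hash(img) == average_hash(tweaked)
```

Matching is then done by Hamming distance between fingerprints rather than exact equality, which is what makes these functions robust to re-encoding but also, as discussed later, malleable.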

Deirdre: And to the pushback of, "okay, well, what would you do instead?": well, what I would do instead is what WhatsApp is trying to do, and what the non-encrypted platforms and services, the cloud drives, already do. They look at patterns in the metadata.

They look at sharing patterns, they look at account churn-and-burn patterns, like swarming of people. Because the people that we're actually trying to get flagged to law enforcement, the baddies: they know each other, they coordinate, they share content, and then they burn it all down. They trash the accounts to make it look like they were never there.

And it's very hard to trace them. Those are recognizable patterns in the metadata, not in the plaintext. You don't necessarily need the plaintext to notice that sort of information. And I know that WhatsApp has described doing research on that, to be able to pinpoint stuff.

Something that also helps is literally having a reporting function in your service. WhatsApp has this; a lot of the reports from WhatsApp to NCMEC come out of reporting. If you're in a group, or in your one-to-one chat, and it's end-to-end encrypted, one of the ends can say, "this is sketchy, I need to report this person", or, "they're talking about something sketchy, or they're sharing images that are making me uncomfortable".

You can report them to WhatsApp and WhatsApp can open a file. And if they get enough reports with enough plain text, that one of the ends has reported to them, they can then send that on to NCMEC or to law enforcement. Whoever needs to be reported to.

iMessage has a reporting function for spam: "this is a spammy phone number." They do not have a reporting function for "this is abusive, this person is harassing me", or anything like that. It just goes into a hole, without even describing what is bad about the other user or the content that they're sending you. It's just a flag.

And the fact that they went with this whole rigmarole of a system, with thresholds and NeuralHash and private set intersection and all this stuff, before even fleshing out their reporting mechanism in iMessage, kind of says a lot about where Apple's priorities are in terms of making their platforms safer for their users. All their users.
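The "threshold" piece mentioned here is, per Apple's published overview, built on threshold secret sharing: the server learns nothing about any individual match until an account crosses roughly 30 matches. Here is a minimal Shamir secret sharing sketch in Python of just that threshold idea. It is not Apple's actual construction (which combines this with NeuralHash and private set intersection), and all the numbers and names are illustrative.

```python
import random

PRIME = 2**61 - 1  # a Mersenne prime, plenty large for this toy

def make_shares(secret, threshold, n):
    """Split `secret` into n shares; any `threshold` of them reconstruct it,
    fewer reveal nothing. Each match could carry one share, so the server
    only recovers the key after `threshold` matches accumulate."""
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(threshold - 1)]
    def f(x):
        return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation of the polynomial at x = 0."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        secret = (secret + yi * num * pow(den, PRIME - 2, PRIME)) % PRIME
    return secret

key = 123456789  # made-up stand-in for an account's decryption key
shares = make_shares(key, threshold=30, n=100)
assert reconstruct(shares[:30]) == key  # 30 shares: key recovered
# 29 shares interpolate to an essentially random value, not the key.
```

The design choice is that crossing the threshold is cryptographically enforced, rather than a policy the server promises to follow.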

David: It's kind of odd to me that iMessage has worse reporting than, like, League of Legends does. Um, but if we could temporarily jump back to something you were saying earlier, Matt, about the neural networks getting better at detecting new CSAM: why do we think that running that type of neural network client-side on someone's iPhone would be any better than the system that is being described now?

Matt: better in terms of more private or better in terms of better at detecting?

David: Well, you suggested that Apple would never, like, renege on their current system in favor of this quote, "better" system. Why do you think that system's any better, in effectiveness, in privacy, or both?

Matt: So I think that one of the big questions is: is it enough to detect people sharing old content, or do you need to actually detect new content? And I don't know the answer to that. Um, I'm not a law enforcement officer, I don't have any clue. What I did understand, when we talked to the general counsel of NCMEC, is they said that typically their database is not the most up-to-date; a lot of the stuff in it is older content from years and years ago. It gets identified because it is some of the more popular, as weird as that sounds, the more common CSAM that has been shared, and so that's why it tends to get flagged. And what they said is, even though a lot of the CSAM that people are actually sharing is not the stuff that's in their database, some of these files get shared anyway. So maybe thousands of files will get shared, and they will get one hit that happens to be in the database. But all it takes is one hit.

Deirdre: Yeah. It's a canary. It's a signal,

Matt: Yeah.

Deirdre: right?

Matt: Yeah. Yeah. The point is that these databases, in particular Apple's database, are not going to be the best databases of fresh, current, what's-out-there, kids-are-actually-being-abused-right-now CSAM. And as Deirdre pointed out, if your goal is to stop the abuse of kids, you want to get closer to the sources of new abuse material. Of course you want to stop all crime.

Deirdre: and otherwise.

Matt: Yeah, exactly.

So people are literally abusing children and taking photos of it, but your database can't catch those people, because they're not producing content that's in NCMEC's old database. You can't catch those people. You will catch people who are, you know, sitting at home sharing it, and I'm not going to opine on whether that's a good goal or not.

But it is certainly not what, you know, Google has decided to do, which is: "we will train a neural network to find even brand new CSAM we've never seen before". That capability is clearly going to be attractive and more effective, especially as these databases start to kind of fade away. I think you're going to see a move towards that. Just my intuition.

Deirdre: That kind of plays into: if this system as specified by Apple worked perfectly and had no false positives, it means it's probably going to get a hit when someone acquires, or has been shared, content that's been seen before. It has to come from somewhere. So it's either going to be shared out from Apple or shared into Apple.

If that's end-to-end encrypted, that's kind of sucky, but it's not all end-to-end encrypted. So why not do more scanning on those incoming edges first? But they're not doing that first. They're going all the way to assuming that everything is end-to-end encrypted and doing it on the client. And it's tough.

David: Let's say there's newly created CSAM. As long as that stays within the Apple ecosystem and never gets shared out of Apple, it would not be detected under the current system. Like, you're relying on Google to detect it when you share it out and then someone shares it back.

Matt: Yup. Yup.

Deirdre: One thing that the system as designed seems to be good for is finding known CSAM, flagging it, deactivating those accounts, reporting them to NCMEC, and making Apple's numbers go up. So instead of Apple having a billion devices and only 200 or 300 reports to NCMEC a year, as compared to Facebook, Instagram, and WhatsApp's 20 million a year, because they have 3 billion users, it will go up. Because of the 30-count threshold, because of a lot of things, I do not think that when Apple turns this on, it's going to come close to 20 million a year. I originally did a back-of-the-envelope calculation of, well, given a billion devices and how many photos and blah, blah, blah, it would be somewhere between 60 and 80 million a year, eventually. But Facebook, WhatsApp, and Instagram's numbers include reporting; they include a ton of stuff.

And they include people looking at images, not strictly classifying them against a finite set. They look at reports and say: this is a nude image of a person we know to be a minor, and we are required by law to report it. That means that the numbers from Facebook, WhatsApp, and Instagram include minors who are consensually sexting each other, showing nude images of each other.

Those have to be reported to NCMEC by law. That includes people who, quote unquote, have "edgy" behavior. They have a lot of consensual adult porn, and sometimes in that swath of imagery there are minors who are nude. And if that's reported to and seen by Facebook slash Instagram slash WhatsApp, it has to get reported.

Facebook put out a study in February of this year that basically said that 75% of the reports they have to make to NCMEC are considered non-malicious; 25% are of CSAM that is known, or that explicitly shows abuse or exhibition of a child or a minor.

So that's only 25% of about 20 million. That's 5 million, out of billions and billions of users and devices. Apple's not going to get that high in their reports, but it will be higher. It may be in the millions, because they have billions of users and lots and lots of photos.

This system will be good at detecting and shutting down some accounts, making their numbers go up, and making it look like Apple is doing something. Anything!

Matt: I think that is the goal. I mean, I think this is kind of a Goodhart's law situation, where the measure that law enforcement and others have adopted is: how many reports are you making? If you're not making enough reports, you're in trouble. And it's amusing, because there was a huge series of reports in the New York Times a year or two ago, all about the epidemic of CSAM. And you read through those reports, and the evidence of the epidemic, meaning that the problem is getting worse, is that the number of reports is going up.

And you dig down a little further, and you get quotes from the people who actually build this stuff: no, no, no, what's happening is that our detection systems have gotten better. And so this is the kind of trap that you get into, though. You build a detection system, it produces tons of reports, and then everybody

feels like you're not doing enough. And you receive pressure from governments, and angry reports in the New York Times saying that you're slacking off on the job. And so you have to build better detection systems, and the next thing you know, there are 500 million reports a day

going into NCMEC, which doesn't have the resources to process them. Law enforcement can't do anything with them, and the whole system is just useless. But, you know, you built a great surveillance system, and if a government wants to abuse that system, they could do all sorts of things.

So it’s very hard to see what’s happening here.

Deirdre: Yup. It would be great if we had numbers from NCMEC. In the United States, all reports of this sort of stuff have to go through NCMEC, because by law they're the only ones allowed to handle this sort of imagery and these reports. We have all the top-of-the-funnel numbers, like 21 million reports in 2020 or whatever. The bottom of the funnel, like, okay, of those reports, how many get turned into indictments, arrests, convictions, or acquittals? We do not have those numbers. And part of the reason we do not have those numbers is that NCMEC is slammed and underfunded. We would love to know whether those 21 million reports turned into, you know, 600,000 convictions of child abusers, but we don't have those numbers. We have very fuzzy, kind of back-room, "don't quote me on this, but it's somewhere in the low thousands or tens of thousands" numbers. Those are extremely fuzzy, not-on-the-record numbers. We have 21 million now, with voluntary reporting and some scanning reports from Facebook, WhatsApp, and Instagram. If we turn on this system for a billion devices, is that going to go from 21 million a year to a hundred million a year, or whatever? And is that going to help?

Because the whole system of protecting children is literally making reports to NCMEC and just hoping NCMEC handles it, hoping that NCMEC farms it out to the correct law enforcement jurisdiction, which may or may not be in the United States.

And now we have this system that is great at matching images, and we just have to hope and pray. It's really frustrating, because in all of Apple's interviews about this, they've been asked: well, what is stopping the system from getting abused, and turning from a child-abuse-images system into a "people say bad things about Modi" system, or a "people have images of Xi looking like Winnie the Pooh" system? What's stopping that? And they're like, "we will not do that." That is their answer. Apple's answer is: we will not bow to pressure. And that is not very reassuring.

Matt: That's pretty much it. "We will definitely, you know, despite having stood up so very much to this first level of government pressure, we plan to stand up to the next level." Again, you can't prove what's going to happen in the future, so who knows? But what I did want to say was this. Going back to the protocol: obviously there are a lot of different problems with the protocol. One is the NeuralHash potential for collisions, and we could talk about that for hours, but let's not, uh, it's been talked about a lot on the internet.

One of the other things that's really interesting about this protocol is that Apple makes a big deal about the fact that they can put some limits on what the database is, that they can prevent themselves from changing the database, and that they can prevent governments from forcing them to put other files into the database.

And they've done a lot of claiming about this. They actually say in this frequently-asked-questions document that they "hash the encrypted database", I think is what they call it, and that it'll be on your phone, and they can update it, and you can check the hash. However, when I looked through all of their technical proofs, none of that stuff exists.

There is no property in there that says: "hey, here is a proof that this database", or whatever they call it, the PData list or PPH list, "is collision-resistant, that we can't change it through some flaw in the protocol". Now, I fully believe that those features probably do exist,

Deirdre: but they are not elucidated in any of these proofs, any of these specs, anything that they have released. They only talked about those things, like a root hash of the database or whatever, after all of their initial materials came out and people were like, "uh, we have questions."

Matt: And it wasn't even like they did the proofs and kept them, you know, kind of on the side, and just didn't announce them as big features. As far as I can tell, there are no proofs or technical work that backs up these ideas. And it just makes the whole thing feel improvisational.

It's not like they had a bunch of safety features that they were keeping back and are slowly announcing. They're kind of making it up on the spot.

Deirdre: Yeah. Like people

Matt: It's really concerning.

Deirdre: There's a lot of literature about gossip protocols and transparency trees, about how you publicly attest to a data structure so that people can see it and then check it. You can either gossip it amongst devices, comparing views of the world: what do you see? What do you see?

And if you see a discrepancy, you can flag it. Or you can publish it in multiple places. They've said something about how the organizations they're working with, like NCMEC, are going to publish their own something-something, and that way you can check that we're all doing the right thing.

Like, how often are you updating those? There are lots of questions, and yes, it does feel like they're kind of making these up, really ad hoc, after the fact. If they're not, please give us documentation and proofs and designs. We would love to read them.

It’s very interesting.
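One standard way to do the kind of public attestation being described here is a Merkle tree: publish a single root hash over the whole database, and anyone holding the same list of entries can recompute and compare it, while any change to any entry changes the root. Apple has not published such a construction for its database, so this Python sketch is just an illustration of the generic technique, with made-up entries.

```python
import hashlib

def _h(b):
    return hashlib.sha256(b).digest()

def merkle_root(leaves):
    """leaves: list of byte strings (e.g. the entries of a hash database).
    Hash each leaf, then repeatedly hash pairs of nodes together until a
    single root remains. Publishing that root lets anyone with the same
    list verify it; changing any entry changes the root."""
    level = [_h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])  # duplicate the last node on odd levels
        level = [_h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0].hex()

db = [b"hash-entry-1", b"hash-entry-2", b"hash-entry-3"]  # made-up entries
root = merkle_root(db)
# Tampering with any entry yields a different root:
assert merkle_root([b"hash-entry-1", b"hash-entry-2", b"TAMPERED"]) != root
```

This is the same primitive Certificate Transparency logs are built on; the gossip part is then just devices comparing the roots they each saw.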

Matt: The number of people, the number of photos, the amount of data being affected is just too big for this to be anything less than perfect. I mean, if you can't do it right, do it at a small scale,

Deirdre: yeah,


Matt: don't do it like this.

David: And it kind of seems like they only published in detail the parts that they worked on with academics, who are the people that are, like, good at publishing stuff.

Matt: But they also had a computer vision expert. I don't know what he was doing; he published a two-page report. And then NeuralHash gets reverse engineered and made public in, what, less than a week? A week to reverse engineer it, and like 12 hours to find really kind of good-looking collisions in it.

And the computer vision expert's report basically says, "it seems like the false positive rate is low." Huh. I can kind of give them that, because, you know, it's low maybe against accidental collisions, although I think we've found accidental collisions now as well, not just adversarial ones. So, um, give them that. But how do you do a report and not even think about the problem of malicious adversarial collisions that people could create? Why would you not cover that, or say you thought about it? It's crazy stuff.

Deirdre: Yeah.

David: I mean, it just seems like anything that's ML-based, neural-net-based: my mental model, and I'm certainly not an ML expert, is that they're basically a really big compression function. You're approximating a function that maps every possible image in the world to the thing that's in the image, to saying what the image is, and that function in naïve form is clearly way too big to store anywhere. So you come up with some way to compress it down based off of what you've seen, which means that necessarily there's going to be a thing that sits just outside of how you compressed it.

Right? It's basically overfitting as a service, is how ML works, which is fine for a lot of applications, but the adversaries are always going to be able to find gaps, is my understanding.

Matt: Yes, they are always going to be able to find a way. Even with the traditional hash functions there are flaws, and some of my students have been looking at this for the last year. But these new functions just seem so much more malleable to these kinds of things.
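A toy illustration of why lossy, short fingerprints are so malleable: when the output space is small, a brute-force search finds a colliding input quickly. This is not NeuralHash (a much longer, neural-network-based hash whose published collisions were found with gradient methods, not brute force); it just truncates SHA-256 to 16 bits to make the search cheap, and every input string here is made up.

```python
import hashlib
import itertools

def tiny_fingerprint(data):
    """A deliberately short (16-bit) fingerprint: SHA-256 truncated to
    two bytes. With only 65,536 possible outputs, collisions are cheap
    to find, the way any compressive matcher leaves collision room."""
    return int.from_bytes(hashlib.sha256(data).digest()[:2], "big")

target = tiny_fingerprint(b"target-image-bytes")  # stand-in for an image

# Brute-force a *different* input with the same fingerprint. On average
# this takes about 2**16 tries, which runs in well under a second.
for i in itertools.count():
    candidate = b"adversarial-%d" % i
    if candidate != b"target-image-bytes" and tiny_fingerprint(candidate) == target:
        break

assert tiny_fingerprint(candidate) == target
```

Real perceptual hashes have far larger outputs, but because nearby inputs are supposed to collide by design, an adversary can search the "nearby" space with gradient descent instead of brute force, which is what was demonstrated against NeuralHash.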

Deirdre: And everything else. I want to learn more about NeuralHash so that we can understand it better. There's always this trade-off: like, with PhotoDNA, they specifically do not license PhotoDNA for client-side applications, because it's fragile to reverse engineering.

And the whole point is you don't want the adversary to have the hash and be able to game it. There's a little bit of concern about that with NeuralHash, but it's already out there. So you kinda, you gotta give us a little bit more about this thing so that we can try to make it better? Find vulns that you can fix? You know, we don't want false positives, maybe, if you're going to put this in our iPhones.

Matt: But don't make us reverse engineer it. And by "us", I mean the community. Don't make people reverse engineer things to find out if they're secure when you're deploying them to a billion users. I mean, the fact that it lasted five seconds: the best way you can put this is that Apple's security system relied on the obscurity, the secrecy, of NeuralHash.

I'm going to try to be charitable and say maybe they didn't expect it to remain secret. But then why not publish it? Why force people to reverse engineer it? Like, what is the thinking there?

Don’t know, it’s baffling.

Deirdre: I think it was their VP, I forget his name, who did an interview with the Wall Street Journal, and basically he said, "people can go look at the software and check that we're doing things correctly, the way that we say we're doing them". And then Corellium, who are just settling a lawsuit with Apple, because they make tools to test Apple iOS software

and are getting sued by Apple for it, are saying: "hello?!" It is incredibly hard to reverse engineer your software, and you are incredibly litigious when we try to do it. So

Matt: Yup.

Deirdre: don't give us that.

Matt: Yeah, I'd be scared. I mean, I would actually be scared if somebody who kept their name anonymous hadn't done this. Um, it would be a little scary doing this, and I would definitely ask for legal representation before I published it under my own name, if I was the kind of person with the skill to do it, which I'm not.

Deirdre: David, thank you for listening to us both yell about this. Thank you, Matt. Very happy to have you on. Uh, this is great. Awesome.