Privacy roundup, week 8 November

November 8, 2021
Rebecca LaChance
On a light blue background, a dark blue symbol representing a person throwing things into a bin. The person wears the new infinity-sign logo for Meta and throws away three smiling pictures of people.

All anyone is talking about lately is Meta, Meta, Meta... and apparently even this roundup is not exempt! We’re also welcoming a “Tweet of the Week” segment – might your witty whoppers be next? Read on, you know you want to...


Meta, née Facebook, announced last week that it will delete the facial scans of over one billion users as part of its decision to stop using facial recognition… for now. Many privacy advocates point out that deleting these scans may ultimately not matter much, since DeepFace, the model that was trained on those faces, remains untouched. Will the tiny sticking plaster the tech conglomerate is applying to the gaping wound of public outcry over privacy actually do any good?

We remain staunch believers that companies shouldn’t have unfettered access to our biometric data and that our faces shouldn’t be stored in massive data repositories owned by big tech without permission. Users deserve to be in control!


In further facial recognition news, last week Australian regulators ordered Clearview AI to stop collecting photos taken in Australia and to remove any already in its systems. Clearview AI, which harvests images from the internet and social media, has a database of over ten billion images, and its facial recognition system is primarily marketed to law enforcement. While the company claims the images it collects are fair game because they’re publicly available, privacy advocates and the Office of the Australian Information Commissioner consider the practice invasive. Angelene Falk, Australia’s Information Commissioner, said:

"The covert collection of this kind of sensitive information is unreasonably intrusive and unfair. It carries significant risk of harm to individuals, including vulnerable groups such as children and victims of crime, whose images can be searched on Clearview AI's database."

Once again, where and how do consumers draw the line on their own private data in a social media-driven world?


image by Dave Birch

Dave Birch made us giggle with this screenshot last Wednesday. These days phishing attempts are so widespread that every interaction requires a keen eye, and when legitimate sources look suspect, consumer trust erodes entirely.

Self can solve that problem. By ensuring that every interaction takes place between verified parties, you can always trust that the person on the other end of the line is exactly who you think they are.

Don’t miss early access to the future of trust:

More blog posts from Self