Meta's Smart Glasses Just Got Creepier
E41

Welcome back to This Week in Privacy,

our weekly series where we discuss the

latest updates on what we're working on

within the Privacy Guides community and

this week's top stories in the data

privacy and cybersecurity space,

including Meta's AI glasses are getting

creepier somehow,

vulnerabilities in popular cloud-based

password managers,

reminders about the privacy concerns of

AI, and more.

I'm Nate,

and with me this week is Jordan.

Hello, Jordan.

How are you?

I'm good, thanks.

Ready to dive into this week's top stories

in data privacy and cybersecurity.

All righty.

Before we do that,

for those of you just joining us,

Privacy Guides is a nonprofit which

researches and shares privacy-related

information and facilitates a community on

our forum and Matrix where people can ask

questions and get advice about staying

private online and preserving their

digital rights.

One more quick piece of business before we

jump in.

We want to thank misanthropic forty two on

YouTube for becoming a member.

If you become a member on YouTube,

you get videos a week early.

It gets you a little badge in the

chat.

And if you want to support us and

you don't use YouTube,

we will talk about how to do that

a little bit later.

But first,

we're going to jump into our story about

Meta's facial recognition glasses.

Jordan,

why don't you go ahead and take it

away?

Yes,

so for anyone who's sort of out of

the loop,

Meta has been creating a new smart glasses

brand for the last couple of years now.

And now they've announced their plans to

add facial recognition to its smart

glasses.

Basically how these glasses worked before

was that they had a camera built in

and they allowed you to record and do

all sorts of,

you know, smart activities on them.

But now they're planning to add facial

recognition.

So quoting from this article by The New

York Times here, five years ago,

Facebook shut down the facial recognition

system for tagging people in photos on its

social network,

saying it wanted to find the right balance

for a technology that raises privacy and

legal concerns.

Now it wants to bring facial recognition

back.

Meta, Facebook's parent company,

plans to add the feature to its smart

glasses,

which it makes with the owner of Ray-Ban

and Oakley.

As soon as this year,

according to four people involved with the

plans who are not authorized to speak

publicly about confidential discussions,

the feature, internally called NameTag,

would let wearers of smart glasses

identify people and get information about

them via Meta's artificial intelligence

assistant.

Meta's plans could change.

The Silicon Valley company has been

conferring since early last year about how

to release a feature that carries safety

and privacy risks,

according to an internal document viewed

by the New York Times.

The document from May described plans to

first release NameTag to attendees of a

conference for the blind,

which the company did not do last year

before making it available to the general

public.

Meta's internal memo said the political

tumult in the United States was good

timing for the feature's release.

Really?

Really?

I don't know about that.

We will launch during the dynamic

political environment where many civil

society groups that would expect to attack

us would have their resources focused on

other concerns,

according to the document from Meta

Reality Labs, which works on hardware,

including smart glasses.

So I guess we should say,

from a privacy perspective,

these are glasses that you can just basically

walk around with and record people.

It does have, you know,

protections against recording people,

obviously.

Like there is...

a single light on one side of the

glasses,

which is meant to alert somebody if

they're being recorded.

But it's extremely common for people to

basically do a DIY hack to disable the

light so they can record people without

their consent.

And I think this would also be quite

a huge problem if we are using facial

recognition on

glasses because, again,

you would be able to use the glasses

to identify people without their consent

and they wouldn't know that they're being

recorded or being identified.

I think it goes without saying, though:

Meta's glasses were already creepy,

and this is just a move

to make them even more creepy and invade

people's privacy.

So,

I don't know if people remember,

but there was

a video floating around a couple of years

ago,

and just quoting from the New York Times

article here,

Meta's smart glasses have been used to

identify people before.

In 2024,

two Harvard students used Ray-Ban Metas

with a commercial facial recognition tool

called PimEyes to identify strangers on the

subway in Boston,

and then they released a viral video about

it.

At the time,

Meta pointed to the importance of a small

white LED light on the top right corner

of the frames that indicates to people

that the user is recording.

So I think this is an extremely flimsy

response,

especially because of how easy it is to

disable and cover.

I think

that's a little bit ridiculous, that that's

the only protection that the company is

sort of pointing to.

So yeah,

basically this is what they're saying,

the AI assistant.

So Meta's smart glasses require a wearer to

activate them to ask the AI assistant a

question or to take a photo or video.

The company is also working on glasses

internally called super sensing that would

continually run cameras and sensors to

keep a record of someone's day,

similar to how AI note takers summarize

video call meetings,

three people involved with the plan said.

And they're saying that the facial

recognition would be a key feature for

super sensing glasses.

So they could, for example,

remind wearers of tasks when they saw a

colleague.

Mark Zuckerberg has questioned if the

glasses should keep their LED light on to

show people that they are using the super

sensing feature or if they should use

another signal.

one person involved with the plan said.

I think this is...

obviously there need to be more

protections,

but I think this sort of tool shouldn't

actually be allowed, right?

Because I feel like a lot of the

laws we have around photography and

recording in public spaces are based

on very old things, right?

Like back fifty years ago,

the only people that had cameras were

journalists taking photos for, you know,

newspapers and stuff like that.

That obviously makes sense,

but basically recording every single

person you interact with and then using

facial recognition on them is clearly an

invasion of not only that person's

privacy,

but everyone you're interacting with,

so...

I can't believe that they would

actually consider putting this out. It kind

of makes sense, though, because like they

said, they're trying to push it out at

a time when people aren't as fully engaged

on this stuff, and I'm sure this is

what a lot of companies do when they're

pushing out awful stuff like this. So yeah,

those were my primary thoughts on this.

But Nate, do you have any thoughts

after having a look at this article?

I have several, as usual.

Where to begin?

I guess just to add a little bit

more of the facts to

the context of this... actually, no,

let me start with this.

This is a solution in search of a

problem.

And the reason I say that,

for people who disagree with me for some

reason,

is because

this article details basically that

this plan is super early in development.

And Meta is really trying to figure out

what this is going to look like.

And basically what's happening is OpenAI

has announced that they're going to

release their own glasses for some reason.

Because, like any great company,

they keep adding things

nobody asked for to a product that

everybody was perfectly happy with.

And then I think Snap has been wanting

to release glasses for years as well.

And

I think there's someone else too,

but basically the space is starting to

have competitors.

And so they realize like, well,

we need something that makes us stand out.

And so now they've literally thrown around

facial recognition,

which I think is the leading idea,

but I want to say there were other

ideas they were tossing around too.

And so, yeah,

there's a lot of different discussions

that they're still having internally.

Like what is this facial recognition gonna

look like?

Like, for example,

they say here in the article,

possible options include recognizing

people a user knows because they are

connected on a Meta platform.

So like, for example,

if you're friends with somebody

on Instagram and you're out shopping and

they walk past you in the grocery store,

your glasses will ping and be like, Oh,

Hey, that's that person.

Which to me is ridiculous because like,

I don't necessarily need to know every

time one of my friends walks past me.

And also like,

what if you're really not that close?

Like there's just,

there's so many problems with this.

And, um,

I've actually had this happen to me.

Not obviously not with this, but, uh,

years and years and years ago,

I befriended somebody on Tumblr and yes,

I used to use Tumblr once upon a

time.

And then

I think we ended up like texting or

something.

And then they showed up on my people

you may know on Facebook.

And this was years before I ever cared

about privacy.

I think this was even before Snowden.

And even back then I was like,

that feels really creepy.

And that feels like too much.

And I'm really uncomfortable with that.

And just, I don't know, like, and that's,

again, these are people, you know,

these are people, quote unquote,

these are people you're somehow connected

to.

And how long before Meta just starts

rolling this out in general,

where it goes into like, it's not just,

you know, you're connected on Facebook.

It's because that's how it started, right?

Like Facebook was your feed of people you

follow.

And then it became, you know,

somebody you tangentially know,

like a friend of a friend.

And now you're seeing posts that people

liked that you don't even follow that

page.

And how long before this turns into that?

where people are showing up in your little

glasses HUD, your heads up display,

just because you're tangentially connected

somehow.

So yeah,

I just wanna point that out first.

I also, I need to point out,

I wanna point this out every single time.

They invented or discovered, I don't know.

I don't know if they invented it,

but they workshopped facial recognition

years ago, years and years and years ago.

And they shelved it because it was too

creepy, even for them.

And then once Clearview AI came along,

suddenly they were cool with it.

And so I just need to point

out that Meta has no moral compass,

of which that quote was probably indicative.

I'll get to the quote again in a

second.

Meta has no moral compass and they

basically just wait for something to

become socially acceptable enough that

it's okay.

Like Meta would probably popularize the

Hunger Games if they thought they could

get away with it.

They don't care.

They just want a dollar,

which is shown in, just to state it

again, that quote.

I actually laughed when you read that,

Jordan, and at your response.

You're like, really?

Are you sure about that?

Because like, oh,

we're going to launch during a dynamic

political environment where many civil

liberty groups that we would expect to

attack us would have their resources focused on other concerns.

Like they literally said the quiet part

out loud.

Like, hey,

now's the perfect time to do this because

we know that nobody's going to like this

and they're going to be busy paying

attention to everything else that is wrong

in the world.

And we can do our evil thing.

It's like cartoon villains are less

cartoonishly evil than that.

I just,

I don't know how else to put it.

But the last thing I want to touch

on is you mentioned that story that was

originally covered by 404 Media,

where when Meta launched their Ray-Ban

glasses,

they didn't have facial recognition

hooked into them originally.

And some researchers hooked it up to

PimEyes

and started identifying random people on

the subway.

And Meta got super pissed about this

coverage because they told 404 Media,

they're like, well, that's not us.

The researchers did that.

We have this quote unquote safety feature

built in, which is a little light.

That's apparently super easy to bypass.

To be fair,

it's a little bit more advanced than like

just put a piece of tape over it.

Apparently you can tell when you do that,

but it's not hard to do.

There's tutorials online everywhere.

And, you know,

we've got safety features and we don't do

that.

And, you know,

there's always going to be bad people who

do things with technology.

Like they were so defensive and like,

how dare you accuse us of doing something

so nefarious?

And now they're doing the exact same

thing.

And how long before somebody finds a way

to jailbreak this?

And now it doesn't just show people you

know.

Now it does exactly what those researchers

did,

except you just made it a thousand times

easier because the capability is already

there.

They just have to jailbreak it.

And I mean,

I hope we don't have to say the

obvious here,

but like this will be used for stalkers.

This will be used for, you know,

I mean,

mainly I think that's the big one.

I'm sure it'll be used for all kinds

of other things.

And it's, I don't know,

I don't even know what to say. This

just to me seems like such an obviously

bad idea and an overstep. And again, just

that quote, the fact that they said

the quiet part out loud, I don't even

know where to go from there. It's like

they've shown their true colors as

being downright evil. Real quick,

actually, let me share this.

The EFF did notice, despite Meta's best

efforts, and they wrote this

blog post called Seven Billion Reasons for

Facebook to Abandon Face Recognition

Plans.

And what did they say at the,

I think it was at the end here.

Yeah.

Meta's conclusion that it can avoid

scrutiny by releasing a privacy invasive

product during a time of political crisis

is craven and morally bankrupt.

And that was like such a good way

to put it.

Like there is absolutely no way anyone at

Meta can pretend to have a shred of

ethics right now because nobody with a

moral compass would do this.

It's just, yeah,

I think those are all my thoughts.

That's just so crazy.

Yeah, it is kind of surprising, especially

because, like, I know there's a lot of

stuff people are already concerned about,

um, ICE agents wearing Meta smart glasses,

and then it's like, now they're planning on

adding facial recognition. I don't know, it

just seems very, like, dystopian, very creepy.

Um, I think people need to be a

little bit more loud about this.

I was reading through information about

these Meta smart glasses,

like, in preparation for this episode.

And it seems like they're actually selling

more than ever.

Like they sold triple the amount that they

did in 2023,

in 2024 and 2025.

So it's kind of concerning how popular

these things are getting,

because I think the worst possible outcome

here is that these products become a

very

popular thing that a lot of people

have, and that would basically almost

create like a dragnet surveillance network.

Like we talked about this last week with

Amazon Ring's pet searching system,

Search Party.

Search Party, yes.

Um, so I think it's, you know,

it's a similar thing.

It's like Nate said,

it's going to always be used for like

the most nefarious purposes,

like stalking people or, you know, just,

I think the fact that someone could

identify your name and other information

about you without your consent is kind of

going against the whole idea of privacy,

right?

Because you should be able to control who

knows that information.

And it kind of blurs the line.

Yes,

I think it's great that the EFF has

come out with a like, I guess,

statement against this.

And we're really big fans of the EFF.

So I hope they keep up the great

work on that.

And yeah, I mean,

I hope that this doesn't become a reality

because like we said,

it was part of a plan.

So I mean, it's still not...

actually implemented so we can only hope

that people like Meta will get the idea

once a lot of this stuff is leaked

that this is a really bad idea and

people really don't want it but I'm kind

of afraid that some people just

don't seem to see the issue with a

lot of this stuff like just how popular

these smart glasses are already is kind of

indicative of the climate of people's like

I guess people caring about their privacy

so it's kind of a little bit unfortunate

um but I guess with that being said

we should move on to the next article

here Nate

Alrighty.

Yeah.

Let's go ahead and head over to our

next article,

which I believe is about password

managers.

Yes, it is.

Let me get this pulled up here.

Alrighty.

So several popular, well,

three specifically,

three popular password managers fall short

of quote unquote zero knowledge claims.

So this came from researchers at ETH

Zurich,

which we have seen them do quite a

bit of good research in the past on

cryptography and cybersecurity.

And they did audits with permission of

Bitwarden, LastPass, and Dashlane.

And so they basically had a – I

thought I saw it in here somewhere.

They had a –

Yeah, in controlled tests,

the team was able to recover passwords and

tamper with vault data,

challenging longstanding zero-knowledge

encryption claims made by vendors.

And then the findings were published in a

technical paper and disclosed to vendors

under a coordinated ninety-day process.

So usually the way these audits work is

the vendor will set up

basically like a parallel environment,

but it won't actually have any user data.

So that way,

if they do find any problems,

it's like nobody's actually exposed,

but now we know these problems exist.

So they probably did something similar

here where they set up like a test

server and ETH Zurich found some pretty

troubling stuff.

So unfortunately, Bitwarden did the worst.

They had twelve attacks against Bitwarden,

seven against LastPass and six against

Dashlane.

And real quick,

why those three is because apparently

those three are the most popular password

managers out there currently.

And they account for more than sixty

million users and about twenty three

percent of the market.

So those three account for about a quarter

of all password manager users.

Um, yeah, if you're watching,

you can see here,

there's like a list of the type of

attacks they found.

And you can kind of tell the BW

is Bitwarden.

The LP is LastPass.

The DL is Dashlane.

Um, Bitwarden,

I should say Bitwarden and Dashlane have

fixed most of these, I believe.

Uh, LastPass is working on fixing them.

The fact that LastPass is still in the

top three at this point makes me sad.

But anyways, um,

Yeah, so it's, well, real quick,

let me just say, so Bitwarden,

I personally found their blog post to be

the best because they did actually give a

full explanation of all twelve

vulnerabilities.

I believe that they said all of them

were medium or low impact,

required an attacker to already have full

server control, which is worth noting,

but at the same time, it's, in theory,

we would hope that these are designed in

such a way where it doesn't matter.

Like,

that's kind of why they did this research.

Products like Bitwarden, Signal,

I'm trying to think of some others,

Proton, in theory.

In theory,

the way these products are designed is

that it doesn't matter if the server is

malicious because everything happens on

device, everything is really secure,

and the server being malicious is more

kind of like a bummer than an actual

problem.

And that was not the case here.

So again,

getting back to the vendor response,

Bitwarden did fix...

I think nine of them and three of

them, they, uh,

I guess the term is they accepted it.

They basically said like, we hear you,

we acknowledge it.

And here's why we're not fixing it.

Um,

the reasons they gave made sense

in my opinion.

Like one of them was, uh,

they basically said like,

we need this functionality for shared

vaults to work.

Like if you share with another user or

a family member, which I hear,

but at the same time,

all three of them that they didn't fix,

they also said like,

We'd be open to looking into this in

the future,

which I appreciate the humility.

But at the same time, it's like,

why not just fix it now?

I don't know.

I just I don't like that they left

stuff open,

even though I understood their answers.

It's like, but can you fix it anyways?

There's got to be a way to do

it.

Dashlane was a lot less open on their

blog post.

They said that they did fix some stuff,

but they didn't really give that same

detailed breakdown that Bitwarden did,

which makes sense because Dashlane and

LastPass are not open source.

So they're just not as transparent,

I guess.

And LastPass, like I said,

I think they fixed one of the issues.

And I think they've got a couple others

that they've got the fixes ready for,

but they haven't rolled out yet.

And then they've got a few more that

are still in progress.

Um, although again,

personal biased opinion,

I would not use LastPass if you

paid me after their last big data breach.

So yeah.

And I think this is really disappointing

because again,

the idea of an attack like this is

we want to make sure that

your vaults are protected no matter what.

That is the whole point of a

password manager: that you can trust

it.

And again,

It's very frustrating when that is not the

case and that does not turn out to

be true.

There really isn't an excuse for that.

It's very frustrating.

But at the same time, I think,

because I know already there's probably

some of our more hardcore veteran

listeners or viewers,

they're thinking like, oh, well,

this is why I use KeePass.

This is why I use offline password

managers, which is great.

If you have the kind of organizational

skill to do that, that's fantastic.

And I'm totally in favor of it.

We do recommend KeePass, or KeePassXC

specifically, I think, on Privacy Guides.

But for a lot of people,

offline password managers are

too much work.

And the problem with security is security

requires you to trade convenience,

but everybody has a different threshold of

convenience.

And for everybody,

once something becomes too inconvenient,

they're going to stop doing it because

it's just too much work and it's not

worth it.

And everybody has a different level.

You know, some people don't mind KeePass.

Some people do.

So that's kind of the concern or the

unfortunate side of KeePass,

because yes, in a perfect world,

that would be great.

But for a lot of people that requires

you to manually sync up across multiple

devices and

that requires you to manually keep really

good backups.

And the nice thing about cloud-based

password managers is it's just easy.

Bitwarden syncs across every device.

It looks really clean.

I don't have to worry about keeping it

updated.

Well, I mean,

I have to keep the app updated,

but you know what I mean?

It's just such a seamless,

easy user experience.

So it is really unfortunate to see

when these kind of things happen.

And I'm really glad that Bitwarden

especially took this to heart and they

took the criticism and they fixed it.

I hope that they will fix the remaining

vulnerabilities if possible,

because I feel like it's one thing when

there's a vulnerability and you say like,

okay, we hear you,

but the odds of that are pretty low.

It's kind of out of scope.

It's very unlikely.

But this is, again,

this is like the whole thing the product

is supposed to do is keep your vault

safe, even if the server is compromised.

So I feel like this one's a pretty

big deal.

And the last thing I want to throw

in there real quick is

Uh, 1Password was not audited,

but they went ahead and released a blog

post and basically said like,

this wouldn't impact us because of how

they're designed.

Like, I

still don't fully understand 1Password's

structure,

even though I've read about it like a

million times,

but they basically have some kind of, um,

like a two-password system where you sign

up and it's not quite your recovery key,

but it kind of is.

I don't know.

Either way,

the way that they have their setup,

they said that this would not have

affected them.

So if you are a 1Password user,

go ahead and pat yourself on the back.

And as usual,

1Password continues to have great

security.

And Proton Pass, I don't think,

has released a blog post, surprisingly,

and they were not part of this audit,

so I don't know how they fare, but...

I think those are my thoughts.

Did you have any additional takeaways from

that, Jordan?

Yes.

So I did end up putting together a

post on our social media channels and I

did a little bit more research into this

article and sort of like the timelines of

things.

And one interesting thing that I did find

was LastPass was sort of

downplaying some of the severity risks of

these vulnerabilities that were found by

ETH Zurich.

So they said,

our own assessment of these risks may not

fully align with the severity ratings

assigned by the ETH Zurich team.

And I think the interesting thing to think

about here is I don't think we should

be trusting LastPass,

especially because in 2022,

they basically had a breach which impacted

one point six million of their users

because they didn't adequately secure

their infrastructure.

And it also showed that a lot of

the fields in LastPass weren't actually

encrypted and were stored in plain text.

There was a breach of the server, like

we're talking about in this circumstance.

If something is zero knowledge, then you

know you should expect that every single

piece of data is actually protected, right?

So

Zero knowledge needs to cover every single

data field.

It needs to cover metadata.

It needs to cover everything, right?

Otherwise there's information that the

provider has and it's no longer zero

knowledge.

I think there's definitely been an

interesting debate that we've been having

on the team about this. You know,

zero knowledge, zero access,

all these buzzwords that a lot of

companies like to throw around,

they're becoming the military grade

encryption sort of, you know,

thing that we always kind of make fun

of, because it doesn't really mean anything

unless the implementation is actually

correct.

So I think one thing as a takeaway

from this is

If you're still using LastPass,

please stop.

There's so many great options now,

especially because, you know,

you've got all sorts of options that you

can pick.

Like Nate was saying,

if you don't need that high level of

security of a local password manager,

like KeePass,

then you can also use a bunch of

these reputable cloud-based ones.

And I think the way that Bitwarden handled

this was incredibly professional.

It showed that they have a good

understanding of how to disclose fixes,

how to actually show and be transparent

about fixing things.

So I think they had a great response

and, you know,

one password wasn't affected and Dashlane

also had a good response,

but I think we should try and center

this back on some of the recommendations

that we have on the site.

So we do recommend Proton Pass,

which Nate talked about a little bit.

They weren't included in this,

so we're not sure if that affects them

or not,

but

It's another password manager that we

recommend.

They've been audited.

They've passed rigorous checks from our

community members and our staff members to

be recommended on privacy guides.

And we also recommend Bitwarden because

they're open source.

They're transparent.

They offer a high level of security.

There's a couple of other ones that we

do recommend, such as 1Password,

which, like Nate said, does have

great security but does come with the

unfortunate side effect of being

proprietary. And there's also the Psono

password manager,

which is a German password manager.

It's definitely more of a niche

recommendation because

It's less popular than Bitwarden,

1Password, and Proton Pass,

but it still meets all of our criteria

as well.

And of course,

when we move on to the local password

managers, there's KeePassXC,

which is basically the gold standard of

KeePass clients,

and it's available on all desktop

platforms,

which is

Great.

And there's also KeePassDX,

which is available on Android,

which allows you to access your KeePass

databases on your Android device.

And we also recommend KeePassium,

which is available on iOS and macOS to

access your KeePass databases there.

So I think this is a great opportunity

to push people towards safer tools,

like tools that follow proper security

protocols, right?

I think that's all we can kind of

take away from this.

I think every service is gonna have people

that find issues.

The best thing we can hope for is

how quickly these companies respond,

how well they respond and how

transparently they respond.

So I think the gold standard there was

Bitwarden.

They took it very seriously.

They actually,

implemented all the changes.

Most of the changes, actually,

I should say,

there's a couple that they weren't able to

fix.

But I think that is what we should

be looking for when these things happen,

because every tool is going to have an

issue;

it's always going to have vulnerabilities.

It's just the way the company treats these

vulnerabilities.

That is the most important thing.

So I think this is, I guess,

a great

a great time to kind of segue into

our next story here.

Unless, Nate,

you have anything else you want to add?

I just wanted to say I'm really glad

you mentioned that about LastPass.

I didn't really notice that their response

contradicted.

I just skimmed it real quick to see

if they were like, here's what we found,

here's what we fixed,

kind of like what Bitwarden did.

So yeah,

they are not the most trustworthy.

Yeah, that's crazy for them to be like,

oh, this isn't as bad.

And it's like, yeah,

let's take your word for it.

Yeah.

So I guess with that being said,

we can move into some exciting iOS-based

news here.

And after that,

we'll talk a little bit about Microsoft

Copilot sending confidential files,

but first let's dive into the iOS

news here.

So iOS 26.3 adds a

unique new privacy feature, and it's Apple

at its best.

This is an article from

9to5Mac.

And basically this is an update that

allows people to have additional privacy

against their cellular provider,

which is like, you know,

the company you pay for your mobile plan.

And basically it's because of this new

C1X modem in Apple's new products.

So basically before they were using like

Qualcomm modems instead of having their

own custom Silicon,

but now Apple's developed their own custom

modem,

which I guess may means that they

decoupled from a third party company and

they're keeping more things inside.

Um,

So I think it's definitely an

interesting move from Apple.

And I think this sort of feature is

best explained by quoting from the article

here.

Cellular networks can determine your

location based on which cell towers your

device connects to.

The Limit Precise Location setting

enhances your location privacy by reducing

the precision of location data available

to cellular networks.

With this setting turned on,

some information made available to

cellular networks is limited.

As a result,

they might be able to determine only a

less precise location, for example,

the neighborhood where your device is

located,

rather than a more precise location such

as a street address.

The setting doesn't impact signal quality

or user experience.
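Apple hasn't published how the coarsening works under the hood, but the effect described in that quote can be sketched in a few lines. This is purely an illustration, not Apple's algorithm: rounding coordinates to two decimal places throws away street-level precision (roughly tens of meters) and keeps only neighborhood-level precision (roughly a kilometer).

```python
# Illustration only: Apple hasn't documented how Limit Precise Location
# works internally. This toy function shows the general idea from the
# quote above: coarsening a fix from street-level to neighborhood-level
# precision by rounding coordinates.

def coarsen_location(lat: float, lon: float, decimals: int = 2) -> tuple:
    """Round coordinates; 2 decimal places is roughly 1 km of precision."""
    return (round(lat, decimals), round(lon, decimals))

# A street address needs ~4-5 decimal places (~10 m precision);
# 2 decimal places only narrows you down to a neighborhood.
precise = (37.33182, -122.03118)   # hypothetical street-level fix
print(coarsen_location(*precise))  # (37.33, -122.03)
```

The same fix that pinpoints a building now only identifies a neighborhood, which matches the "street address vs. neighborhood" distinction in Apple's description.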

So I'm not entirely sure how this feature

works, I guess.

It's not super clear,

especially because this is

basically a brand new feature that

only just came out last week.

It's important to remember, though,

that this feature is only

available on very specific devices,

which have the C1X modem.

So that would be the iPhone Air,

the iPhone,

and the iPad Pro with a cellular connection.

And of course,

it's only supported by very specific

carriers.

So in Germany, Telekom is supported.

In the United Kingdom,

EE and BT are supported.

In the United States,

Boost Mobile is supported.

And in Thailand,

AIS and True are supported.

So

It's basically an additional privacy

setting that Apple has added to their

devices.

I think this is definitely a positive,

especially because 5G

connections enable much closer tracking of

your location.

Because the towers have to be

closer together,

it's much easier to identify your location

based on your cellular signals.

So I think this is definitely a step

in the right direction that I think we

should see other companies also following

suit because this is sort of an issue

that some people deal with.

And reducing the amount of data points

that your carrier has is definitely a

benefit.

What are you thinking about this one,

Nate?

No, I agree.

I think this is really, really cool.

To me,

this reminds me a lot of how in

modern smartphone OSs, Android, iOS,

you can... Maybe not Android,

but I know iOS for sure.

You can tell an app if you want

it to have precise location or coarse

location.

C-O-A-R-S-E, coarse, like rough location.

And I think that's amazing because...

I mean, first of all,

I think a lot of apps shouldn't require

you to have location anyways.

Like a lot of, you know,

rewards apps for fast food.

They're like, oh,

what's your precise location?

Just to function.

And it's like, no,

I don't want to find the nearest store.

I know what store I'm going to.

And so it's really cool to see them

roll out this feature.

The one thing I didn't find that I'm

a little curious about is if

it will be bypassed for emergency

services,

like if you call 911 here in the U.S.,

I know it's something else in other

countries,

but if you call emergency services,

will that bypass it and give precise

location or will it continue to only give,

um, rough location?

I have to assume

it'll bypass it, but yeah, I'm also,

I guess I'm curious to see what exactly

this defends against.

My money says probably things like

geofence warrants and stuff like that.

Um,

But yeah, I don't know.

Overall,

I think this is a really cool feature.

And the thing I'm excited about is that

in my experience,

phones are always an arms race, right?

We've seen that a lot,

especially with the privacy stuff.

Like Apple rolled out,

I think it was Apple that first rolled

out granular app permissions.

And then Android came in later.

And then

Apple rolled out the privacy dashboard and

then in screen time,

and then Android rolled out the same

thing.

And I think Android actually beat Apple to

one of them.

I can't remember which one,

but even GrapheneOS, you know,

GrapheneOS rolled out storage scopes and

contact scopes.

And then it took a couple of years,

but Apple rolled out contact scopes,

or no, storage scopes;

I think contact scopes are still coming,

but you know,

now everybody gets to benefit from

that.

So this is to me,

this is one of those things where a

rising tide lifts all ships.

And so it's,

We obviously would prefer for people to

use GrapheneOS or something,

but we've covered this many times.

There are a lot of countries where Pixels

are not available.

Pixels are expensive, whatever the case.

Maybe you just bought a brand new phone

and this is when you decided to get

into privacy.

And I totally don't blame you for not

throwing away a brand new phone and running

out to buy another one.

So privacy is for everybody,

regardless of what phone OS they're using.

Some make it easier than others.

And it's really cool when we see features

like this roll out

that help everybody.

And my hope is that now Android will

be forced to copy this and we'll get

something similar on the Android side as

well.

So I think that's my main thought with

that one.

Yeah, I mean, it's definitely interesting.

I think with Android,

they don't have the same level of control

that Apple does, because Apple is doing

this through its new C1X modem.

I think a lot of Android devices,

they're all reliant on these massive

silicon companies like Qualcomm, Broadcom,

et cetera, et cetera.

So I think the chances of seeing it

are definitely lower because, you know,

Apple is in this position of control here

where they have the ability to basically,

I think this is one of the benefits

of Apple really,

because they have such control over

everything.

They can make these bespoke solutions that

other companies just wouldn't be able to

do.

So it's definitely,

it's good that they're using this new chip

for additional privacy protection.

But yeah,

I feel like that wraps that story up.

Do you want to talk a little bit

more about some more upcoming iOS features

here, Nate?

Sure.

Let's talk about,

this will be pretty quick,

but the iOS 26.4 beta,

the first version has already been

released.

And like I said,

we'll keep this quick because there's

really just a couple of things that we

have talked about in the past.

The first one we'll go ahead and talk

about is end to end encryption for RCS.

So I want to say Jonah and I

talked about this a few episodes ago.

RCS is

the new standard that's supposed to be

replacing SMS,

and it brings a lot of really fun

little features.

All the same stuff you enjoy with

iMessage, really: bigger attachment sizes,

you can emoji react to messages,

like you can give a thumbs-up reaction.

I think GIF support too,

but don't quote me on that.

It's just all around a better user

experience.

But one of the cool things it brings

with it is the ability to have end-to-end

encrypted messages.

However, comma,

people need to enable that.

So originally,

Apple said that they were not going to

support end-to-end encryption with

Android.

And I'm told, uh,

I didn't look into that too closely

admittedly,

but I'm told by multiple people that's

because Google was trying to force like

their proprietary version of it.

And Apple was like, no,

this is an open standard.

We're not going to play ball with you.

And eventually Google backed down and they

went with the open standard again.

That's just what I'm told.

But either way, uh,

Apple has since changed course and said,

yes,

we will support end-to-end encrypted RCS.

And I believe the code for this originally

showed up in this last beta that just

came out, the 26.3.

But even at the time,

whoever I was hosting,

I'm pretty sure it was Jonah I was

hosting with.

They pointed out it's like this isn't a

guarantee.

It's just, you know, it's coming.

It may be this one.

It may be the next one.

And now it's looking like it's going to

be the next one.

26.4.

We are seeing actual code for encrypted

RCS, which is fantastic.

The drawback is that

this still has to be enabled by the

carrier as well, from what I've been told.

So just like with this location thing we

were just talking about for cell phones,

the capability will be there,

at least a lot more widely than the

cell phone thing,

but the carriers will have to choose to

support it.

And unfortunately,

I don't know if there's enough incentive

for them to.

I don't know how much they're going to

care.

I hope they will, but no guarantees.

Fingers crossed.

And then the other thing that's really

cool is Apple.

Again, Apple and Google, arms race.

They're always copying each other.

Apple has this thing called stolen device

protection,

which is supposed to protect your phone if

it gets stolen,

if it gets snatched out of your hand

or something.

And it basically does a lot of things.

I think, uh,

it requires additional

authentication to access Apple Pay or

your iCloud account, things like that.

If it detects that it's not in a

familiar location, things of that nature.

So it's,

it's a pretty neat little feature if

you're an iPhone user.

I think it requires iCloud,

but don't quote me.

But anyways, up until now,

that has been an opt-in feature.

You have to go enable it,

and now it will be enabled by default.

It will become opt-out,

which normally we are not fans when things

are opt-out,

but I think this is kind of one

of the good times where something should

be that way by default.

And there's some other stuff in that

article as well,

if you guys are interested,

but this is a podcast about privacy.

So that's kind of what we focused on.

Do you have anything you want to add

to those, Jordan?

Yeah,

I especially want

to talk about the stolen device

protection,

because what we were seeing with that is

some people would basically look over

people's shoulders and see them entering

their PIN.

And basically that would allow them to

take full control of someone's device.

Right.

Because there were no restrictions on

accessing the device.

Basically,

if you had the PIN,

you could change iCloud settings,

you could drain someone's bank account

using Apple Pay.

It was a little bit ridiculous.

But someone in the chat, Lucas Truman,

said it exists on Samsung.

So the stolen device protection

on Google devices is different from Apple's

implementation.

The stolen device protection on Android

devices actually uses like proximity

sensors and stuff like that to basically

identify if someone is running away with

your phone and then it will lock the

phone automatically.

Whereas Apple's implementation is actually

more that it just requires a security

delay if you start to try and make

sensitive changes on your device,

like Nate was saying,

like changing your iCloud password,

disabling Find My,

these sort of sensitive things,

it adds a security delay.
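As a rough sketch (Apple's actual implementation isn't public, and all the names below are made up), the security-delay idea is just a gate: a sensitive change is refused until a fixed delay has elapsed since it was first requested, which buys the owner time to mark the device as lost.

```python
import time

# Toy model of the security-delay idea behind Stolen Device Protection.
# Names and structure are illustrative, not Apple's API.
class SecurityDelay:
    def __init__(self, delay_seconds, clock=time.monotonic):
        self.delay = delay_seconds
        self.clock = clock        # injectable clock so examples run instantly
        self.requested_at = None  # when the sensitive change was first requested

    def request_change(self):
        """Return True only once the delay has elapsed since the first request."""
        now = self.clock()
        if self.requested_at is None:
            self.requested_at = now  # first attempt starts the countdown
            return False
        return now - self.requested_at >= self.delay

# Simulated one-hour delay driven by a fake clock:
t = [0.0]
gate = SecurityDelay(3600, clock=lambda: t[0])
assert gate.request_change() is False  # blocked: countdown just started
t[0] = 3600.0
assert gate.request_change() is True   # allowed: the delay has passed
```

The point of the delay is exactly what the hosts describe: a thief who shoulder-surfed the PIN can't immediately change account settings, because the change only goes through after the waiting period.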

I think this is really important to be

enabled by default because this is the

sort of thing that basically destroys

the incentive for phone theft.

Because if thieves have this extra barrier

that they now have to worry about to

break into someone's device to

steal their money,

they're much less likely to actually steal

devices.

And I think that's why it's so important

that these features are enabled by

default,

because

It's a great way to basically stop thieves

from deciding to do this and steal

people's phones.

I don't understand why people do that.

It's like a literal tracking device.

I'm not really sure why people are still

doing this,

but

I guess you can drain people's bank

accounts,

but I guess after iOS 26.4,

it's going to be a lot more difficult

to do these things.

I would also just mention that this isn't

a silver bullet.

Obviously,

it gives you thirty minutes after you're

in an unfamiliar location.

I would set this to always, just in

case,

so it always activates.

I would not leave it on the

away-from-familiar-locations setting.

it makes it a little bit more annoying

to change things.

It's fine.

I don't think it's a problem.

Most people, I think, you know, if you're,

you're probably not making sensitive

changes on your phone very often.

And I think it's better to have that

protection because, you know,

if someone did steal your device,

they could just follow you home and then

unlock your device and steal all your

money.

It doesn't really make that much sense,

but it could happen,

and it would stop them from getting quicker

access to your device.

So yeah,

I think this is why this is quite

an important feature.

I don't really have anything more to add

on the RCS front.

I think Nate did a pretty good job

of covering that.

But I think this stolen device protection

thing is pretty important.

Cool.

Okay.

Well, before we jump into our next story,

which I think is about AI,

we're actually going to take a quick pause

and a detour.

And we're going to talk about some updates

here behind the scenes at Privacy Guides.

We have been chugging along a lot behind

the scenes lately.

And, for one, we have a bunch of new

videos to share with you guys.

So if you are not subscribed to us

on YouTube,

a lot of you are already watching.

Actually, whatever platform you're on,

whatever platform you're watching on,

except maybe Twitch.

But go ahead and subscribe and follow us

because we do post about

new releases, news, and stuff like this.

So definitely do that.

But on YouTube and on PeerTube,

we now have our private browsing video

out.

Actually,

I think the private browsing one is still

syncing to PeerTube.

But that one should be up on PeerTube

any day now.

But that is out to the public now.

And it's all about private browsing.

Again,

I want to reiterate for a lot of

the veterans in the crowd,

you already know this stuff.

But believe it or not,

there are people who still think that

incognito mode is actually private.

So this is a great video to share

with people and

explain to them why it's not.

In addition to that, we go through a

lot of the other popular browsers, Vivaldi,

Opera, Safari.

We talk about how they

measure up, and then of course our top

recommendations, which I think you guys

probably know, but I'm going to go ahead

and pretend it's a secret and not spoil it.

And then our smartphone privacy and

security course is still going strong.

The intermediate level just published to

everybody as well.

It is now public.

And that is iOS and Android.

So again, whichever phone you're on,

go ahead and check that out.

That's all about how to replace the stock

apps with much more privacy respecting

versions.

So like calendars, email, browsers.

And if you're on Android,

how to get those apps in a much

more private fashion.

Unfortunately,

that same option does not currently exist

on iOS, at least outside of Europe.

But yeah.

And then one more thing before I turn

it over to Jordan is we did make

some updates to the website.

We removed Yattee,

which is a YouTube front end for iOS

because it appears that it's no longer

being maintained.

According to the comments and the issue

that was opened,

it doesn't work very well.

It's even been removed from the app store.

Which is a shame.

We have removed Dataveria from our list of

people search websites.

We have updated information about uBlock

Origin Lite's capabilities.

So definitely check that out.

And the BitLocker command line workaround

no longer works in Windows Home.

So we have updated our instructions on

that.

as well as some stuff about Firefox.

And then I think the rest of it

is kind of like code behind the scenes

stuff.

But that is all in the show notes.

And actually,

that was a last-minute addition.

So it's not in the newsletter yet,

but I will make sure to add it

after this episode.

Jordan,

did you have anything to add to this

section?

Yes, thank you.

So basically,

there were some people asking whether the

iOS and Android advanced sections for

our smartphone security guide are coming

out, and I can say that the iOS

advanced video is pretty much done.

It's in the pre-production stage right

now; it just needs some more small edits

to make sure it's

ready to go up.

And then the Android

one is also at a similar draft

level, I guess.

It still needs some changes

before it goes up, so I'm hoping to

work on that a bit today and also

tomorrow, and then we can get that

out to our members next week.

That's the plan on that.

We also had some more

news articles going out.

So Freya was working on posting those on

Ghost and that gets shared to our forum

and stuff.

So if you haven't been catching those,

there was one that they wrote about

Project Toscana,

which is basically an upgraded Google Face

Unlock.

They're basically planning to upgrade the

Face Unlock system on Google Pixel

devices.

They also had another one about the iOS

26.4 beta's RCS support.

So if you're interested in reading more

about that,

definitely check out those articles from

Freya.

And there's also another one going up

soon,

which

is about AI.

So definitely check that out.

Make sure you follow our news feed because

Freya is managing that, as well

as Nate.

Nate does some great posts there like data

breach roundups and all sorts of stuff

like that.

So if you're interested in keeping up to

date on the latest news,

definitely check out the Privacy Guides news

page.

But yeah,

that's sort of everything we're working

on.

We're hoping to work on some

less course-related stuff and move into

some more, you know,

I think we had a private email video

planned.

So we're looking at that.

And yeah,

we're sort of finishing out all the

projects that we had going so far.

So that's sort of where we are at

on the video front.

Awesome.

Do you want to take this next story

or would you like me to jump into

that one?

Yes.

This next story,

like we were talking about before,

we hinted on it.

Microsoft says

bug causes Copilot to summarize

confidential emails.

Microsoft says

a Microsoft 365 Copilot bug

has been causing the AI assistant to

summarize confidential emails

since late January,

bypassing data loss prevention policies

that organizations rely on to protect

sensitive information.

According to a service alert seen by

Bleeping Computer,

this bug, tracked under CW1226324

(that's a mouthful)

and first detected on January 21st,

affects the Copilot work tab chat feature,

which incorrectly reads and summarizes

emails stored in users' sent items and

draft folders,

including messages that carry

confidentiality labels explicitly designed

to restrict access by automated tools.

Copilot Chat

for Microsoft Office

is the company's AI-powered content-aware

chat

that lets users interact with AI agents.

Microsoft began rolling out Copilot Chat

to Word, Excel, PowerPoint, Outlook,

and OneNote for paying Microsoft 365

business customers in September 2025.

So the problem was that users' email

messages with a confidentiality label applied

were being incorrectly processed by

Microsoft 365 Copilot Chat,

Microsoft said when it confirmed the

issue.
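In other words, the work-tab chat was skipping the label check that automated tools are supposed to apply before processing anything. A minimal sketch of what such a data-loss-prevention filter looks like (the labels and field names here are illustrative, not Microsoft's actual API):

```python
# Sketch of a DLP-style filter: automated tools are supposed to exclude
# messages carrying a restrictive confidentiality label before doing
# anything like summarization. Labels and field names are made up.

BLOCKED_LABELS = {"Confidential", "Highly Confidential"}

def summarizable(messages):
    """Return only the messages an AI assistant may process."""
    return [m for m in messages if m.get("label") not in BLOCKED_LABELS]

inbox = [
    {"subject": "Lunch plans", "label": None},
    {"subject": "Q3 layoffs draft", "label": "Confidential"},
]
print([m["subject"] for m in summarizable(inbox)])  # ['Lunch plans']
```

The bug amounts to this filter silently not being applied, so labeled messages flowed into the summarizer anyway.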

So this is obviously a major problem.

And I think this is sort of the

issue with integrating AI into so many of

these things, right?

If you don't implement these things

correctly,

you're basically just sharing confidential

or private stuff with an AI chat company,

which, you know,

their policies around what they're using

to train their models,

their policies around the chats that

you're sending are somewhat vague.

So

This is like a breach of confidentiality

agreements.

A lot of companies utilize these

confidentiality labels to make sure that

people aren't sending these emails outside

the company.

And the fact that Microsoft Copilot was

just scanning and summarizing this is

just...

absolutely ridiculous.

I'm sure that a lot of companies who

had specifically super sensitive stuff

they were discussing are probably really

mad at Microsoft right now because,

you know,

technically they've now sent all

of their confidential information to

Copilot Chat.

So it's just kind of a fail by

Microsoft.

This should have been caught.

I think the fact that we've got all

these AI chat bots that are like

summarizing people's entire inboxes is

kind of bad.

Like it's,

this is sort of inevitable when you

basically grant full access to inboxes.

I think we should try and avoid these

sorts of tools, because this is sort of

the inevitable outcome:

unless someone sets it up perfectly,

it ends up sending things to an AI server

that it wasn't supposed to.

I think there's not really a good way

to implement these sorts of tools.

What do you think, Nate?

I think my favorite part of that that

I somehow missed when I was reading that

article is the part where it summarizes

the sent items and the drafts.

So not only was it doing something that

it explicitly was not supposed to,

it wasn't even being useful in the

process.

Like,

I don't need you to summarize my sent

and draft emails when I'm the one writing

them.

Or, I don't know,

I guess maybe a lot of people are

using Copilot to write their emails

nowadays.

So, like, it's Copilot reading Copilot.

But to me,

that was the moment where I was just

like, oh, my God, are you serious?

Like, again,

not only are you putting users' data at

risk,

you're not even doing a good job while

you're at it.

That's just...

Oh, classic.

What are the kids calling it these days?

Microslop.

So yeah, that cracked me up.

But yeah, there it is.

Lucas at Microslop.

Yep.

Ten out of ten.

No, it's just, yeah.

And that's, I don't understand how,

like you said, yeah,

that should have been something you would

catch in testing.

And it's just so...

I don't know.

Because I want to be fair and I want

to acknowledge that, like, okay,

there's always going to be bugs.

There's always going to be mistakes,

but there's certain bugs and mistakes that

it's just like,

how did it get that far through the

life cycle?

And no one caught it.

Like we see that with all kinds of

like product names and ads and slogans.

We see that all the time where it's

like,

how did this get all the way from

the boardroom to the TV and not one

person spoke up?

which I'm sure usually somebody did,

but they were told to shut up and

do your job.

And maybe that's what happened here.

So yeah, that's just crazy.

But yeah, I mean,

the privacy aspect of this is very

obvious.

What if you're sending sensitive medical

data, sensitive national secrets?

The government uses Microsoft for reasons

that are completely beyond me at this

point.

But the government uses Microsoft Windows

and Outlook and Azure and all kinds of

stuff.

And so like...

It's bad enough we've got our own

government officials adding journalists to

Signal chats.

Now we've got Microsoft itself training

on sensitive war plans.

It's just – it's crazy.

This is so not good for anybody.

So –

Yeah.

I think it also goes,

it's like the double whammy, you know,

we're like burning down forests and

building like power plants to fund all of

this.

And like,

it's like an endless money pit of like

AI data centers.

And then it's just being used for like

the most unnecessary stuff,

like summarizing people's sent and draft

folders.

Like what?

It's just so ridiculous.

I think this timeline is

very scuffed.

I mean, yeah,

I don't really have too much more to

add on it.

Like,

I feel like we've kind of talked about

this for quite a bit, but I think,

you know, obviously no one here,

please don't use Microsoft products.

I think it goes without saying.

A lot of companies use these tools at

their businesses.

So, you know,

if you work for a company that uses

these tools, I'd probably look at

maybe letting them know that there might

be a confidentiality breach because this

is probably pretty common, right?

A lot of companies are using Microsoft

tools.

It's extremely common.

So, yeah,

there might be some massive breaches in

the future from someone accidentally

sending their war plans to Microsoft

Copilot or something.

I don't know.

We'll see.

I just want to add real quick,

you reminded me of,

I don't know if you've seen that meme

going around.

It's like a cartoon,

like the kind they draw in newspapers,

where the guy is like, oh,

we've invented this machine that answers

questions,

but you have to feed it twelve giraffes

per day.

And the other guy is like, wow,

that's a lot of giraffes,

but you said it answers questions, right?

And it's like, oh, no.

No, no, no, no, no, no, no, no,

no.

And that's just, yeah,

that's what came to mind when you talked

about burning forests and stuff.

Yeah, the cost is insane.

And it doesn't even do the thing it's

supposed to do well.

Anyways.

Yeah, it is kind of frustrating.

And that's kind of the atmosphere at the

moment.

All righty.

And then we had one more AI story.

I guess I'll go ahead and take this

one.

Yeah.

So this, this kind of tying in with,

uh,

AI and remembering the AI is a privacy

invasion.

So the headline,

this comes from 404 Media,

says Grok exposed a porn

performer's

legal name and birth date without even

being asked.

And I think, um, sorry guys,

I didn't sign in before,

before we recorded,

but I remember this one,

I was going to show you the screenshot,

but literally, so somebody, um,

I guess somebody posted a picture of this

girl online and she is,

she's an adult performer.

Um,

But they posted a picture and somebody

else replied and they asked Grok, like,

who is this?

And that was literally it.

It was like, who is this?

And Grok said like, oh, that's a,

what is her stage name?

Siri Dahl, I guess.

And it was like, oh, that's Siri Dahl.

She's an adult performer,

but her legal name is this and her

birthday is this.

And it's just so like,

like on the one hand,

I understand how AI doesn't really

understand what you're asking because just

in case anybody's under the delusion out

there, AI is not sentient.

I don't care how convincing it looks.

I'm certainly not anywhere remotely

convinced.

But it,

I understand the concept of like,

it didn't know if you were asking like,

who is this person for real?

Who is this person's name?

But just the fact that it threw that

out there,

and especially this goes back to, again,

we mentioned this in previous episodes,

the idea that AI just trains on everything

indiscriminately without consent.

And I guarantee you...

I can see a world where this woman

may have said like, sure,

it can know my stage

name.

It can know the sites I've been on.

It can know the videos I've been in.

I don't mind that.

But I think she would have been like,

I would prefer it not tell people my

legal name, right?

Like that's why so many performers use

stage names is to give themselves a little

bit of a layer of privacy.

Yeah.

just the fact that it scooped that

up and just threw it out there.

And you know, your personal morals aside,

well, it's not fine,

but your personal morals aside,

you can say like, well,

she's an adult performer, whatever.

I'm using a fake name.

Nate is not my real name.

I'm very open about that fact.

And what happens when you ask the AI,

like, Hey, who's this?

And it doxes me.

Like anybody, any of you,

like every single one of you in the

chat right now are using like fake names.

I'm hoping,

I'm hoping Lucas Truman isn't your real

name, but maybe it is.

We have handles and we have usernames and

we have those for a reason.

And the whole point of privacy is that

you're supposed to have that...

that consent to be able to say who

you want to share data with and what

data you want to share with them.

And when AI scrapes all that up,

even if it never shares it like it

did here,

it's still taking away that agency from

you and it's taking away your right to

privacy and your control over that

information.

Yeah,

I wanted to throw this one into our

AI section because I felt like that was

a really important reminder.

And just, again,

the fact that it did so without being

prompted.

Grok has been completely insane from the

get-go.

Some people like that for some reason,

but the fact that it just threw that

out there completely unchecked really blew

me away.

And I was like, wow,

we should talk about this.

That one, I think, was pretty quick.

That's all I had on that one.

Do you have any thoughts on that, Jordan?

I mean,

I think this is a very

unfortunate case of AI doing

something that you don't intend,

which seems to be very common with these

tools.

Like it's,

I guess I'm kind of interested to know

like I also don't have access to this

full article so I think I might need

to reload it or something but I think

you brought up some good points like I

think you know the whole point of these

adult performers using these like

pseudonyms or like stage names and you

know trying to have some level of privacy

because

Obviously, when you're in that industry,

I think the chances of, you know,

stalking and doxing and swatting is

substantially higher and your safety

concerns would be much worse than the

average person.

Like, you would need much more security.

So it's very concerning that there's

basically been a

breach of, you know, this person's

information against their consent that's

probably causing them a bunch of issues

right now.

So that's really unfortunate, but,

yeah, I don't really have too much more

to add.

I think this is

just a very unfortunate and sad story

because, yeah,

I think many people wouldn't like their

personal name being connected to their

activities.

And I think this is also especially

important with, like,

adult performers because, you know,

there's stigma around that industry and I

think some people don't want to deal with

that.

So, yeah,

that's kind of my take on it.

It's understandable that somebody...

Yeah,

there's a tweet that Nate's showing on the

screen.

So someone asked Grok, who is she?

What is her name?

And Grok appears to have responded,

she appears to be Siri Dahl,

an American adult film actress,

and then all the information about her.

So it's kind of unfortunate that...

this AI tool could identify somebody so

easily?

I guess it is somebody whose face is

pretty widely shared on the internet,

but still connecting that back to an

actual person's name.

How exactly did that happen?

That's kind of my question.

Where did it get this information?

I guess it's, you know,

possible that there was data brokers with

her information that were listed and an AI

scraped up that information and associated

it or something,

or doxing sites that found this

information and then basically just

published it.

I'm not really sure how that, you know,

happened.

Do you have any,

did you see anything in the article that

like suggested that or explained how that

happened?

Because it seems pretty, pretty terrible.

No.

And, scrolling through the article again,

that's what's horrifying: she said that,

up until now,

she's been able to keep her real name

kind of unknown.

And it says she's been paying for like

data removal services for years and stuff

like that.

And it's, it's unfortunate that

You know,

the thing that got me into privacy was

it was actually Michael Bazzell's podcast

back when there were two of them,

like him and his co-host.

And they were talking about why they

split up their data, because I will

admit I was that guy: I used Gmail,

Google Search, Google Chrome,

Google Drive, Google Calendar.

I used Google everything.

And they were talking about that and they

were like, yeah,

but the defender needs to get it right

every single time.

The attacker only needs to get it right

once.

And that was the moment it clicked for

me.

And I'm like, wow,

like not obviously like all of us,

I have nothing to hide, but I'm like,

if my Google account gets breached,

that's my entire life.

That's again, my calendar,

my browsing history, my searches,

my YouTube, like everything, my files,

everything.

And so that's when I kind of started

to diversify a little bit.

And when I did, I realized I'm like,

oh,

this is actually a lot easier than you

would think it is.

And that's kind of what got me started

in privacy.

But

It's unfortunate.

So when we think about that phrase,

you know,

the defender needs to get it right every

single time,

we think about it in terms of like

data breaches, right?

Or at least I do;

I guess I shouldn't speak for everybody,

but I think of it in terms of

data breaches.

We want to minimize how much data we

put out there.

We want to make sure we're diversifying so

that the fallout is reduced.

But

it also goes for this stuff.

It also goes for these data removal

services.

It goes for, and it's unfortunate because,

you know, you were asking, like,

do we know how that got out there?

Flip a coin and take your pick, right?

Or, like, throw a rock and, you know,

there's so many ways,

especially in America,

where our privacy laws can best be

described as LOL.

Like there's just so many opportunities

and it's impossible to defend against them

all.

We try our best, you know, which is

one of the reasons we recommend data

removal services, Michael Bazzell's

workbook, and Yael Grauer's Big Ass

Data Broker Opt-Out List.

And whether you want to do it automated,

whether you want to do it personally,

whether you want to do a hybrid,

it's just there's so much to defend

against and it can be so exhausting.

And unfortunately,

I've personally seen some people burn out

and quit because it is so much work

and it's so exhausting.

And it's just, yeah,

who knows where they got it from?

There's a million places they could have

gotten it from.

And it's really hard,

just horrendously unfortunate.

Um,

especially when somebody seems to be doing

everything right and trying their best and

using the data removal services.

And I don't know which ones she was

using. But I mean,

we just got that study from Consumer

Reports, right?

Like a couple of years ago where we

finally got some transparency about which

ones work and which ones don't.

And that was one study.

And we really need a lot more of

them because like,

we still don't really know for sure which

ones are the most effective.

And yeah,

this could have come from so many,

so many places.

And it's just so unfortunate that somebody

is trying to do everything right and still

failing.

And now, it says in this article,

they're like making a game out of it,

asking Grok, like,

what kind of car does she drive and

what's her address?

And she's like, yeah,

how long before Grok guesses?

They said so far it hasn't been able

to reply accurately yet,

but she worries it's only a matter of

time.

And it's like, cool, that's awesome.

So I don't know.

Yeah, that's that's a depressing story.

I feel so sorry for her.

Yeah,

I think it's like I was saying before,

this leads to, you know,

this can lead to violence,

it can lead to abuse.

Like it said,

almost instantly harassers started opening

Facebook accounts in her name and posting

stolen photos and adult clips with her

real name on sites for leaking OnlyFans

content. So, you know,

these people that post on adult websites,

I think they're at much higher risk of

receiving harassment and abuse,

so I think it's especially important for

someone like this to have this

protection. But when it comes back to

the data broker stuff,

I think it doesn't matter so much about

the tools. I think there really needs to

be a change where that data isn't allowed

to be collected in the first place,

because you're basically just playing a

cat-and-mouse game. In plenty of

countries, this information is not

allowed to be used for this purpose;

that's the whole point of a lot of

data protection laws: they stop this

sort of thing from happening. And

you know,

maybe it was great back in eighteen twenty

five when everyone needed to have access

to the public records of everyone because,

you know,

they needed to work out where to go

to, I don't know, find someone.

That's great.

But we live in the interconnected like

Internet age,

like people are instantly able to access

information.

They can find out things significantly

faster.

You don't have to go down to like

a courthouse.

You don't have to go down to like

a

government building to access this

information; it's readily available on

the internet. It shouldn't be,

and it shouldn't be allowed to be used

for advertising and all this creepy

stuff, like training AI.

Um, and honestly, I feel like Grok has

the least amount of ethical guidelines

set for it. It'll just answer absolutely

everything, and it won't have any concern

over the ethics. Like, "Where does this

person live? What's their address?"

It'll just be like,

"Certainly, here's the address of blah,

blah, blah."

It's just, you know, with AI,

the guardrails are not very great.

Like we talked about with the Microsoft

copilot issue,

the guardrails are very thin.

People can use them for malicious

purposes.

Like we saw with this, you know,

finding out a person's identity or finding

out where they live, you know,

this is all concerning stuff.

And,

I think it's like a combination of things.

AI tools don't have really any ethical

frameworks.

They can be used for like really abusive

stuff.

And also just like the US's lack of

any national privacy laws restricting

companies to use people's information.

And I'm sure that, you know,

these AI chatbots have also been trained

on... like,

a bunch of personal information has

been sucked up by these chatbots, right?

Like, I'm sure that, you know,

if you asked Grok about where Elon Musk

lives, it probably wouldn't say,

but I'm sure if you said some other

famous person whose address is somewhat

public,

it would come up and tell you exactly

where they live.

So it's, you know,

I don't think this is

great,

especially because it's so accessible now.

Like it's basically available to anyone.

Like you can use an AI chatbot for

free.

Anyone can access this.

So it becomes really concerning when it's

used to dox people.

So I think we'll only continue to see

more of this unless some changes are made,

which is kind of unfortunate. But yeah,

you can only protect yourself as much as

possible.

Like Nate was saying,

using all these data removal tools and

getting DMCA requests and all sorts of

stuff,

there's only so much you can do to

protect yourself.

And if the laws in the country don't

then there's only so much you can do.

You're basically just removing stuff for

them to add it back again.

Um,

so I can definitely understand why someone

might feel worn out after they've just

constantly been removing the information

from the same sites over and over again.

Um, it definitely makes sense.

Yeah.

Thank you for completing my thought

there, 'cause that's where I was going

and I forgot to close the loop on it,

which is:

We shouldn't need to pay for these data

removal services.

We should just have strong data privacy

laws.

And I know this is really not the

best example,

but it makes me think of years ago,

somebody tracked...

I think it was a journalist at Wired.

They got the phone records of every phone

that went in and out of Epstein's Island

for like a year or a month or

something.

And the ones that went back to Europe,

as soon as they hit European airspace,

the record stopped because GDPR is so

strong there.

that they just basically stopped keeping

the records or they had been deleted by

that point or something.

Um,

like everywhere else we could track the

phones right back to their front door.

But that was the only one that like,

as soon as they hit European airspace,

they stopped.

And so, like,

I know laws are really contentious in the

privacy space.

Cause some people are like, Oh,

nobody pays attention to laws.

They just don't work.

And yeah,

some companies will bypass them.

And then when that happens,

you have the right to sue them.

Hopefully, if they're well-written laws

and you have a private right of action.

Like, it's not a silver bullet.

We definitely need all these layers.

We need the technical layers that enforce

the laws,

but we also need the laws that give

you that protection in the first place.

And I'm not going to say I guarantee it,

but I would be very shocked for something

like this to happen in Europe.

I don't believe she was European.

Because they just have stronger privacy

laws.

Yeah, it said she was American, right?

Yeah, an American film actress.

So yeah, that's definitely that.

And actually one more thing that occurred

to me real quick while you were talking

is the article says that the reason she

and a lot of other adult actors use

fake names and try to protect their

privacy is because they don't want their

family being harassed, which is a thing.

that I could absolutely see terrible

people doing.

You don't have to like porn.

That's fine.

I'm not here to convince you that you

should.

But a lot of people will take it

way too far and say,

I'm going to call your family and send

them pictures of you naked, clips of you,

and basically just harass them because

it'll guilt you.

And that's something I see more with

authoritarian governments, I think,

or at least that I read about more.

There was a human rights lawyer in Iran

whose daughter was basically arrested

one time.

Um, I read this book years ago,

so I may have the finer details wrong,

but like her daughter was arrested coming

in and out of the country.

And basically,

the government was trying to pressure the

author, the actual lawyer, the mom.

They were trying to pressure her to quit

her job and stop being a lawyer.

And she told her daughter straight up.

She's like,

I have to pretend like I don't care

because if I bend and I give in

to their demands,

every time they want to pressure me,

they're going to go straight to you and

they're going to harass you.

And it took months,

but finally the government stopped

harassing her daughter and left her alone

and they've never bothered her since.

And, I don't know,

the thought I guess I'm trying to get at

is: we tend to think of privacy as a

very individual thing.

And a lot of the work is,

you have to download Signal,

you have to sign up for the services,

you have to be mindful what data you

put out there,

but it is important to think about the

impact on the people around you as well.

And yeah.

Yeah.

I don't know if that necessarily applies

to everybody,

but I thought that was an interesting

thing that the article pointed out.

I think we've talked about that story

plenty,

unless you have any more thoughts to add.

No, I'm good.

I guess we could move on here to some

quick stories, just to quickly cover,

because we've talked about this every

week.

There's more of these age verification

laws for children coming out.

It's really unfortunate.

More countries are doing it.

There was even some movement in the US

as well.

I saw today, this morning from Politico.

I think the governor of California was

considering it.

So there's definitely movement in the US

as well.

This doesn't apply just to Australia and

Europe; it's now happening in the United

States. Well, it's moving to happen;

we'll see how it goes. So, yeah:

"Newsom backs social media restrictions

for teens under..." So he didn't entirely

say this was going to happen; he just

was less against it than usual.

I'm not really certain about this.

I don't really know much about this guy.

But I think, basically, he said that he

was convinced this was a good idea by

some of the movement in Australia,

where we basically implemented a social

media ban for under-sixteens.

Personally,

I think it has been largely pretty bad.

It hasn't worked that well.

Um,

So basically, according to this article,

they're now having a debate over whether

this would be a good idea to implement.

And there's also another article that we

have here, which covers basically,

it's from TechCrunch,

and it's basically covering all the

countries that are moving to implement

social media bans and age verification and

identity verification.

So Australia, obviously,

we talked about that when it happened.

Denmark is set to ban social media for

platforms for children under fifteen.

And France is also pushing for it for

kids under fifteen.

So it needs to get through the Senate,

though, before it actually gets passed.

In Germany,

there's also a movement to add a social

media ban for under sixteens.

There's still, you know,

it still needs to go through and be

approved before it actually happens.

Greece is also close to announcing a

social media ban for children under

fifteen.

Malaysia is also considering one for under

sixteen year olds.

Slovenia is also drafting legislation to

prohibit people under the age of fifteen

from accessing social media.

Spain has also announced that they plan

to ban social media for children under

the age of sixteen.

And the United Kingdom is weighing a ban

on social media for children under

sixteen.

So the UK, I would assume,

would be pretty likely just because

they've had the Online Safety Act for a

while, which kind of required this stuff.

So it's pretty much a...

it's just a continuation of what we've

been saying before here.

This is not great for people's privacy.

There's no way that you do this without,

you know, identifying people.

And not everyone has ID as well.

So you're locking people out of platforms.

And I think you're also locking children

out of communities. You know,

some people have very niche interests.

They're from a minority group.

They find solace in, you know, these,

these platforms,

these online platforms to talk and meet

people.

So taking away that access is probably

going to be pretty detrimental.

I think we're going to see that in

the next couple of years,

but it doesn't seem to have been super

effective.

At least in Australia,

a lot of people are still bypassing it,

but it could be that further restrictions

need to be put in place before it

becomes effective.

So we can only hope that that won't

happen, because we probably don't want,

you know,

people to have to upload their ID in

every case,

because right now they're using age

assurance technology,

which we've talked about and said,

you know, that doesn't work very well.

It's kind of racist.

It's not great at determining people's

age.

And you basically have to send a biometric

scan of your face.

But yeah,

do you have anything you want to add

here, Nate?

I feel like we've definitely talked about

this a lot.

We probably don't need to go into super

detail.

Yeah, I don't really have anything to add.

I just, it's,

like you said... and I mean,

I'm not Australian, but from what I hear,

Australia's ban has not really been that

successful. Um,

the UK's Online Safety Act has been a

downright catastrophe. But, you know,

politicians are going to be politicians

and they're going to... who was it?

I should know this one; he's American.

George W. Bush.

There's that famous picture of him

standing on the aircraft carrier saying

"Mission Accomplished" like three days

into the Iraq War,

and we were there for another twenty

years. Or maybe it was three months in,

but still.

And that's just what comes to mind

when, you know,

all these politicians are like, oh, yeah,

we should do what Australia did.

And it's like, OK, come on, guys, like.

Yeah, I don't know.

Changing your mind is not cool these days,

I guess.

But yeah, from what I've seen,

it does not seem to have been successful

anywhere at all.

And I cannot imagine that that's going to

bear out any different for anybody else.

I don't know why they think it's going

to work better for them.

But yeah, it is.

I will echo real quick what somebody said

in our private Signal chat:

it's really concerning how much this is

spreading like wildfire.

So I can sit here and laugh at

them and I can be like, God,

they're so stupid.

And I'm going to do that.

But also, you know,

it's one of those things where we can't

not take it seriously,

because unfortunately even stupid people

are dangerous in the right situations,

and they're going to push this through

if nobody stops them.

And it's really important that we keep

trying. Like,

this is why we keep sharing this every

week,

even though we don't really have anything

new to add.

It's just to remind you guys that like,

this is happening.

And especially if you live in one of

these countries or one of these States,

you need to like contact your politicians

and speak up and be like, Hey,

this is bad.

And you know,

I'm not going to get too far into

the whole thing, but like,

Try not to come at them like they're

idiots.

I know I was just making fun of

politicians,

but try not to come at them and

be like, oh, you guys are so dumb.

Try to come at them in good faith

because if nothing else,

you're probably going to win them over

better that way.

You certainly stand a better chance.

Yeah,

we really need to fight back against this

and educate everyone on why this is a

terrible idea and get people to understand

that so that there's pushback because

otherwise they're just going to rush it

right through and everybody's going to be

like, yeah, the children, sure.

And it's going to hurt everyone in the

long run.

So yeah, that's all I got.

And with that,

if you don't have anything else to add

to that,

we are actually going to move into our

forum updates.

So in a minute,

we will be taking viewer questions.

So if you're watching,

go ahead and be sure to...

to go ahead and leave those.

There we go.

That's the one I was trying to do.

Be sure to go ahead and leave those

in the live chat.

Or if you're on the forum,

you can leave them on the forum too.

We will be checking that.

But for now,

we are going to go to the forum

and we're going to talk about a few

of the hot topics that people have been

discussing this week.

And the first one,

this actually came in pretty last minute.

So I don't know if you had a

chance to look at this one, Jordan,

but it says,

I verified my LinkedIn identity and here's

what I actually handed over.

We thought this one might be a good

one to discuss in light of all this

age verification stuff.

This is not the forum post,

but real quick,

I am going to share what it links

to.

This is like a blog post that somebody

wrote.

Um, so this person says that, uh,

they wanted the blue check mark on

LinkedIn.

The one that says this person is real

in a sea of fake recruiters, bot accounts,

and AI generated headshots.

It seemed like the smart thing to do.

So I tapped verify.

I scanned my passport.

I took a selfie three minutes later,

done badge acquired.

Then I did what apparently nobody does.

I went and read the privacy policy and

terms of service.

Not LinkedIn's, the other company's.

Um,

Which I know this author probably did this

on purpose,

but I think it's funny that they did

the thing and then read the privacy

policy.

But yeah,

so apparently LinkedIn uses this company

called Persona,

which I believe is the same one that

Discord is going to be using.

Don't quote me on that.

But yeah, you can see here,

they went and read the privacy policy and

basically Persona collects full name,

passport photos, selfie, facial geometry,

NFC chip data.

I do want to point out a lot

of the time privacy policies say they may

collect this stuff.

So like, for example, he says like,

where is it?

Postal address?

I don't know how they would get that

from just his passport because U.S.

passports do not have your postal address

on them.

I think most passports don't.

But maybe they ask for it during signup

or maybe it's one of those like if

you submit a driver's license,

we'll read the address off that.

So I just want to point out like

maybe not everything was taken.

And obviously, we know things like IP

address you can obfuscate by using a VPN.

But then, like, MAC address,

OS version, language...

all of this is completely insane.

Um, he said here's the weirdest:

hesitation detection,

copy-and-paste detection.

Um, which quick unrelated note,

if you run a website and you disable

copy and paste,

you suck because password managers are a

thing.

I hate when companies do that.

Um, but yeah.

And then they share all this stuff with

their quote-unquote "global network of

trusted third-party data sources."

And they use it for training AI,

I think he said.

And yeah, right there, uh,

to train their AI,

where does my face go?

Here's what LinkedIn gets.

Here's what Persona gets, and the company

that they... good God.

Oh man.

Yeah.

This is just really crazy.

And, um,

just on the topic of all this age

verification stuff, it's really worth a

read. And he does have some actionable

stuff towards the end, for the record.

Like, you know,

if you can try to contact these companies,

tell them to delete your data because

they've already verified you.

There's no reason they should be holding

onto it in theory, but, um,

and he's European.

So this is very much written from the

perspective of, like, you know,

the GDPR says this and you can contact

this person, but, um,

Um, it's just very eye-opening, because

I think a lot of people, especially

those who aren't really into privacy,

kind of think, especially when the

company says so, like Discord:

"Oh, you'll submit it and we'll delete

it as soon as we're done."

But they don't all say that,

and there are situations where they

won't do that. So yeah,

it's kind of a very crazy thing.

I thought it was a good read.

Do you have anything you wanted to add

to that?

So, yeah, you're right about Discord.

So Discord was using k-ID originally.

And because people kept bypassing it,

they switched to Persona.

So the benefit of k-ID is allegedly,

in massive quotation marks,

the picture doesn't leave your device and

the scanning process is done on your

device, right?

That allowed people to bypass it.

So with Persona,

you actually are sending the image to

them.

And people looked a little bit more into

Persona, which I didn't even know about.

Unfortunately,

Persona is very common;

it's like a very common age verification,

identity verification service.

Um, so people are bringing up that,

basically, this Persona service actually

has links to Peter Thiel's Palantir.

So if you don't know Palantir,

they make like spyware and like mass

surveillance technology.

Um, so yeah,

It's kind of concerning that there was a

link here.

I'm reading this article from Kotaku,

and it says that one of Persona's biggest

investors was Peter Thiel.

So I'm not really sure.

I don't know.

This is very sus, obviously.

It seems like there's some...

connection here.

I'm not entirely sure what the whole

context is for that, but, you know,

I think it's also great that this person

was able to put together a list of

all of these data points that Persona is

collecting,

because I feel like a lot of people

just assume that this data is

probably collected,

but it's good to get a validation that

they are collecting passport photos,

selfies, facial geometry.

I'm not sure about NFC chip data.

I feel like you would have to scan

the chip itself,

which as far as I know,

it doesn't ask you to do.

National ID number, nationality, sex,

birth date, et cetera.

So a lot of that does make sense

for the service they're providing,

but

Obviously,

you shouldn't need to provide this in the

first place.

This is a lot of sensitive information.

Someone could steal your identity with all

this information.

Yeah, so Draken Blacklight said,

Peter Thiel is evil, and yeah,

I would say so.

Like, yeah, evil CEO vibes, definitely.

I don't really know too much about him,

but I know quite a bit about Palantir,

and it's, like,

one of the worst companies to ever exist.

Like,

why would you name your company Palantir?

Like, it's, like...

That's like, are we the baddies?

You know what I mean?

So yeah, Nate,

did you have anything you want to add

there?

No, no, I don't think so.

I love that sketch for the record.

That's one of my favorite sketches of all

time.

Are we the baddies?

Yeah.

No, that's all I got on that one.

I think we can move on to,

we had one other quick forum post,

I think.

Did you have a chance to look at

that one at all or?

No, you can take it for sure.

Okay.

Yeah, it's just Ente.

I swear I never pronounce it right.

I think it's "en-TAY."

It might be "EN-tee."

I think it's "en-TAY."

Ente, the popular photo manager.

They also have a password manager...

no, not a password manager,

an authenticator app.

They have a 2FA authenticator app.

But Ente, really cool people.

They have released Ente Locker,

which let me go...

I can go ahead and share the blog

post here real quick.

So this is kind of...

I haven't played with it myself,

and I feel bad; I should have,

because admittedly I did get an email

from them like two weeks ago that was

like, "Hey,

we're giving people early access."

Like, cause you know,

I'm an influencer and they're like,

just don't release it until this day.

And I've just been so busy.

I haven't looked at it at all.

And I feel bad.

because I want to.

But it's supposed to be like a little

vault to organize and store your important

documents.

So for example,

he cites like medical records,

insurance policies, identity cards,

passwords, and notes.

They say that you can share them with

trusted people.

You can set up trusted contacts.

So kind of like a lot of password

managers these days have legacy contacts

where they can request access to your

vault.

And if you don't reject it in a certain

amount of time,

like three days or seven days,

they get access.

And the idea is God forbid,

you step outside tomorrow,

you get hit by a bus,

then somebody can still get access to your

passwords and, you know,

make sure the bills are paid or whatever.

I think it's really more for, um,

like head of household kind of scenarios

or caretakers.
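The trusted-contact flow described above is essentially a dead-man's-switch timer: a contact requests access, the owner can veto within a waiting period, and access unlocks only once that window lapses unrejected. As a rough sketch of that general mechanism in Python (a hypothetical illustration only, not Ente's actual implementation; the `Vault` and `AccessRequest` names are made up):

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta

@dataclass
class AccessRequest:
    contact: str
    requested_at: datetime
    rejected: bool = False

@dataclass
class Vault:
    owner: str
    # The veto window, e.g. the "three days or seven days" mentioned above.
    wait_period: timedelta = timedelta(days=7)
    requests: list = field(default_factory=list)

    def request_access(self, contact: str, now: datetime) -> AccessRequest:
        # A trusted contact asks for access; the owner would be notified
        # and the clock starts ticking.
        req = AccessRequest(contact=contact, requested_at=now)
        self.requests.append(req)
        return req

    def reject(self, req: AccessRequest) -> None:
        # The owner vetoes the request before the window closes.
        req.rejected = True

    def has_access(self, req: AccessRequest, now: datetime) -> bool:
        # Access is granted only if the owner never rejected the request
        # and the full waiting period has elapsed.
        return (not req.rejected) and (now - req.requested_at >= self.wait_period)
```

So if the owner does nothing for the full window, the contact gets in; a single rejection at any point permanently blocks that request.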

Um,

They say it's free for up to a

hundred items, all features included.

If you're a subscriber,

you can store up to a thousand items.

It's fully end-to-end encrypted,

fully open source.

Uh, I don't know if this is self-hostable,

but honestly,

I wouldn't be surprised if it is in

the future because I know Ente Photos is

already self-hostable and they even have a

blog post about how to do that.

So yeah.

Um,

I think the question I've been seeing a

lot of people ask and one I admit

I will ask and, uh,

I might email them back and ask this

question, actually.

But basically,

why would I use this instead of something

like Nextcloud or Proton Drive?

Well,

I think Nextcloud is obvious for several

reasons.

But something like Proton Drive or one

of the other encrypted cloud storage

services that we would normally

recommend.

And in my personal opinion,

I would say maybe the answer there is

that it's simpler,

or kind of a more minimal version.

Let me put it that way.

'Cause I will admit,

as much as I preach digital minimalism

and try to live by it,

my password manager still has hundreds

of entries.

It's 'cause, you know,

there are all kinds of things that I

use a couple times a year,

like my medical portal.

I'm relatively healthy right now,

so I don't really log into the doctor's

office a lot other than to schedule

physicals and, you know,

stuff like that.

What else? I don't even know,

but there are things that I just don't

log into very often,

but I have accounts for them.

And so, you know, again, God forbid,

if I were to get hit by a

bus,

I think

I could see how it'd be stressful for

my wife to have to go through hundreds

of entries and try to figure out like,

you know, okay,

which one is where we pay the rent?

Which she should have access to, too.

But you know,

which one's where we pay the rent,

which one's utilities,

which one's insurance,

which one's banking this, that,

and the other.

And I could definitely see it being really

useful to have a space that's like:

look, here are the ten or fifteen things

you need to keep a roof over our heads

and groceries in the fridge,

and just figure everything else out

later.

So I think for me,

or maybe if you don't use the cloud,

if you just keep really steady backups,

this could be the one minimal cloud you

use,

especially if you're already an Entei

user.

I don't know.

Those are kind of my thoughts,

just off the cuff.

But do you have any thoughts on this

product, Jordan?

I mean, yeah.

I kind of see it as...

I guess,

an extension to a password manager,

or just like an alternative.

I don't know.

I think this is an interesting idea.

I'm not sure if this sort of,

I feel like there's been a couple of

companies that have tried to do a similar

product.

I guess it's nice to have everything in

a separate spot.

It kind of makes sense that

Ente is kind of trying to have, like,

a drive-ish sort of thing.

I feel like it's more like a password

manager, though. I mean,

if we're being honest,

this is basically just a password

manager.

So, I mean,

it'll be interesting to see how this

product works in like, you know,

I haven't personally tried it out.

It says it is available right now in

the popular app stores.

So it could be an interesting alternative,

but I think we'll have to watch it

closely.

It seems like it's less about

auto-filling,

like it's more just like a specific place

to store private documents and stuff.

So I'm not sure if this needed to

be a separate product or not.

I feel like there's already products that

do this.

But I think if they build out the

features to really cover the

important things people need a

separate app for, it could work.

For instance, compartmentalization is good.

I think, you know,

not having all those documents in your

password manager,

but instead in this separate app would be

a security benefit to some people.

So, I mean,

I don't think it's a terrible idea.

I think it's something we'll have to watch

because right now it does look relatively

basic,

but it is kind of expected considering

they only just released the product.

So yeah, overall, pretty interested in it.

But I think it will need some extra

testing,

and we'll see how the development process

goes.

Yeah, I'm with you.

I'll be interested to see where they take

it.

I will say, to me,

it doesn't strike me as a password

manager, because the photos they show,

the screenshots are like, there's a JPEG,

there's a couple of PDFs.

I don't know.

This one here looks interesting.

It says "thing," and then the subtext is,

uh, "save location of real-world items."

So like,

I guess you could drop like a GPS

pin, like, Oh,

this is the storage unit or something,

which I mean, for the record,

of course there's workarounds.

Like you could put that in the note

section of your password manager.

Right.

But I don't know.

That's interesting.

I kind of like that.

I think I do want to dig into

this a little more.

I'm interested.

I'm not, like you said,

I'm not totally sold.

Um,

but I would be interested to know what

they think the use cases for this are.

And yeah,

I think I'll shoot him an email this

weekend, because I do like it.

It's just, yeah.

And real quick, actually,

I think that does take us to our

questions.

So I was going to mention Lucas here.

Lucas said,

would a one thousand password mean one

thousand items?

Like a one-thousand-character password?

I hope not.

But yeah, I mean,

I think a password would count as an

item, to be honest.

So yeah,

let's go ahead and transition to our Q&A

section here.

Unfortunately,

it does not look like we have any

questions on the forum.

Might be kind of a light week.

But were there any questions you

specifically noticed, Jordan,

or any comments you wanted to shout out?

Someone said, floating head,

question mark.

Yeah.

You know, it's good.

You need to protect your privacy.

You know,

everyone has certain requirements.

So that's just my requirement.

I feel like everyone can understand that

in this community, you know.

So not really surprising, I don't think.

Is it a locked note app or a

password manager, Lucas says.

I guess it's...

I feel like it's in between that because

it does store passwords technically and it

also stores...

like Nate was saying,

like that other information.

There's like a one-minute intro video

from the CEO, Vishnu.

If you watch that,

the reason he decided to create it

was because his dad was struggling with

storing documents securely and having

a way to access them.

So it does seem like, you know,

It is a tool that is applicable to

some people,

maybe not as much for people who are

really into privacy and security stuff,

but I think, you know,

It's clear that they've thought out this

product quite a lot.

So I don't know.

I think it's worth downloading it and

trying it yourself if it fits for what

you need,

if you need this sort of app.

I mean,

I don't need every single app that you

need.

Everyone has different needs.

So if this is something you do need,

maybe check it out.

And maybe it might actually be something

that...

fits your use case.

Obviously,

Privacy Guides is not going to recommend

this until it's been vetted by the

community and it passes all of our

criteria.

But I would say that's pretty likely that

will happen just because Ente has a

couple of their products listed on Privacy

Guides already because they've been so

great about auditing their software and,

you know,

being generally proactive.

So I wouldn't be surprised if the same

thing happened for this new app.

Yeah, for sure.

Um, not really a question,

but earlier when we were talking about

Persona,

Lucas said that apparently Discord has

stated they will not continue to use

Persona for age verification because of

all the pushback.

So, um,

I've heard a lot of conflicting things.

I've heard they started working with

Persona and then

they're not.

And like you said,

I've heard there's ties to Peter Thiel,

and, um, yeah.

So I hope they stop working with them,

but I haven't done any digging on that

myself.

And then just one other one real quick.

Hello Hello said,

they're straight from Mordor.

For those who don't know,

because you're not a massive nerd like me,

Palantir is a name from The Lord of the

Rings.

It's been a while.

I'm due for a rewatch because I know

it's like the twenty fifth anniversary of

Fellowship of the Ring this year.

I think the Palantir is like the little

orb that Sauron,

he's like the big bad guy,

and his second-in-command Saruman

use to communicate over long distances,

kind of like a crystal ball.

Um, I could be getting that wrong.

Like I said,

it's been a long time and I haven't

read the book since I was really young,

so that's not going to save me either.

But, um, yeah,

but it is literally like there's a company

in China that named themselves Skynet

because the CEO really liked the

Terminator series.

And it's the exact same thing.

It's just like, oh, I'm a huge,

and they all do it.

There's another one called Anduril,

which is another Lord of the Rings

reference.

Um, there's others I'm forgetting,

but they all do it.

And I feel like they do it to

be like cheeky and funny.

Like, Oh,

we're just a bunch of harmless nerds.

And it's like, okay,

and actually it's not hypothetical because

this actually happened, but like,

imagine in a thousand years,

some company's like, oh yeah,

we named ourselves Third Reich

Industries.

It's like,

what the hell is wrong with you?

Like, why would you do that?

Like you said, it's like,

are we the baddies?

Like, yes, you're trying to prove you are.

It's, it's insane, but.

Yeah, I digress.

I mean,

I personally thought that their logo

looked a little bit like the Eye of

Sauron, but I mean... Oh,

hold on.

I've never seen their logo.

I have to go look this up now.

I don't think I've seen their logo.

I feel like it does look like the

Palantir thing that you were saying.

It does look like that.

Yeah, it's like him holding the orb.

I see it.

Oh my God.

Yeah.

I almost wonder if they are like genuinely

malicious and this is them just like

leaning into it.

You know how a lot of times conspiracy

theorists will be like, oh,

they're leaving all these breadcrumbs and

it's like, no, you're reading into it.

And this almost makes me wonder if like,

are they right?

Like,

are they genuinely leaving breadcrumbs and

we're just ignoring it?

Because that is so spot on.

That is horrifying.

I don't want to live in this timeline

anymore.

Okay.

I digress.

Do you have any other questions or

comments you want to highlight?

Not really.

I think this has just sort of been

a quiet week this week.

So thanks, everybody,

for chatting in the chat with us and

stuff like that.

But yeah,

it doesn't seem like we have any questions

on the forum, unfortunately.

Yeah.

Thank you, guys,

who showed up and chatted.

Thank you for the regulars.

It's always nice to see you guys.

Um,

all the updates from This Week in Privacy

will be shared on the blog every week,

uh, which is already out now.

And in case you guys didn't know,

we've started sending that out right when

we start streaming.

So, um,

if you want to go sign up for

that as a newsletter or subscribe via RSS,

that's a good little indicator right

there,

a reminder to get that notification

for people who are audio listeners.

The podcast is also available on all

audio platforms as well as RSS,

and the video will be synced to PeerTube

shortly after this.

Privacy Guides is an impartial nonprofit

organization that is focused on building a

strong privacy advocacy community and

delivering the best digital privacy and

consumer technology rights advice on the

internet.

If you wanna support our mission,

you can make a donation on our website,

privacyguides.org.

To make a donation,

click the red heart icon located in the

top right corner of the page.

You can contribute using standard fiat

currency via debit or credit card,

or you can donate anonymously using Monero

or with your favorite cryptocurrency.

Becoming a paid member unlocks exclusive

perks like early access to videos,

priority during the live stream Q&A,

and on our forum,

you get a cool little badge in your

profile and the warm,

fuzzy feeling of supporting independent

media.

So thank you all so much for watching,

and we will be back next week with

more news.

Bye.
