Claude Code Leaked Its Own Source Code
E47


Hi, folks,

we've got a lot to talk about.

Claude's source code was leaked.

LinkedIn scrapes your browser extensions.

There's a horribly insecure messenger app

going around and more.

All this and more coming up this week

on This Week in Privacy number forty-seven.

So stay tuned.

Welcome back to This Week in Privacy,

our weekly series where we discuss the

latest updates on what we're working on

in the PrivacyGuides community and this

week's top stories in data privacy and

cybersecurity.

I am Nate,

and joining me again this week is Jonah.

How was your week, Jonah?

You know, my week has been pretty good,

thanks for asking.

Besides misspeaking during the intro

there.

Can't complain.

A lot of these things happen, right?

Yes, yes.

All righty.

Yeah.

I guess with that,

we'll jump right into our headline story

this week.

And you guys have probably heard about

this one.

So there's an AI called Claude,

Claude Code specifically,

because there's a few different kinds of

Claude.

I'm not a heavy AI user myself,

so I've heard that Claude is one of

the better ones, in that the results

it puts out are mostly accurate.

It puts out mostly good code.

That's just what I've heard.

You could do a lot worse than that

one,

but we're not going to talk about that.

We're here to talk about the fact that

Claude Code had its source code leaked

thanks to some human error.

To clarify,

this is the source code for the app

itself, the Claude Code CLI,

not like the models or anything like that.

But it still gives us a little bit

of insight into what's going on under the

hood.

And...

I guess I'll go over it a little

bit, but I'm also mostly going to,

I mean, we're a privacy podcast, right?

So we're going to focus mostly on the

privacy and security stuff.

But just to kind of give you a

little bit of a recap.

So this happened because when they

published the newest version of the NPM

package, there was a source map file,

which I'll be honest,

that's technical stuff that goes over my

head.

But basically it allowed clever people who

noticed it

to access the source code.

Like we said,

it was almost two thousand TypeScript

files and more than five hundred and

twelve thousand lines of code.

I saw somebody else round up to five

hundred and thirteen thousand.

So, yeah.
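To sketch the mechanism a bit for the curious: a source map is just a JSON file that a bundler publishes next to minified code, and when it includes a `sourcesContent` field, it carries the complete original source files inside it. The map below is a made-up, minimal example, not the real leaked file, but it shows why shipping one is equivalent to shipping the source:

```python
import json

# A source map is just JSON published alongside the minified bundle.
# When it includes "sourcesContent", it embeds the full original
# source files -- which is what made this leak possible.
# This map is invented for illustration.
source_map_text = json.dumps({
    "version": 3,
    "sources": ["src/cli.ts"],
    "sourcesContent": ["export function main() { /* original TypeScript */ }"],
    "mappings": "AAAA",
})

source_map = json.loads(source_map_text)

# Recover original filenames mapped to their original code.
recovered = dict(zip(source_map["sources"], source_map.get("sourcesContent", [])))

for name, code in recovered.items():
    print(f"{name}: {len(code)} characters of original source")
```

So no exploit was needed: anyone who noticed the `.map` file in the published package could read the original TypeScript straight out of it.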

I mean,

it's one of those once the cat's out

of the bag things, right?

Or once the horse has left the barn.

Because everybody quickly went and

downloaded this,

and other repos are springing up,

which we'll talk about in a second.

Anthropic tried to get some of them taken

down with a DMCA takedown,

a copyright thing, basically.

Unrelated, and we're not going to talk about it,

but according to the official story,

GitHub interpreted that DMCA a little harshly

and took down even things that were not

supposed to be taken down.

But yeah, it's been a whole thing.

So I've also seen some pretty polarizing

takes here because I think it was this

article.

Yeah,

this article said that its sophistication

is, quote, both inspiring and humbling,

according to some people who looked at the

code.

I saw some people on Mastodon look at

the code and say that it was pretty

sloppy and kind of shocking that it was

so bad.

But I mean, to be fair,

Mastodon tends to be a pretty anti-AI

crowd.

So I don't know who's telling the truth

there, but.

Yeah.

So, and then real quick,

before I jump into an analysis part,

we have like a follow-up to this story

that's related that says,

Claude Code leak used to push InfoStealer

malware on GitHub.

And this one comes from Bleeping Computer.

Basically, once the leak was out there,

a lot of people started...

putting up their own GitHub repos where

they would advertise that this was Claude

Code with all the paywalled stuff removed,

basically.

So free premium Claude Code.

And they would game the SEO to make

sure that it would show up in the

front.

If y'all are watching the video,

you can see here,

this one outlined in red is like the

third result from the top on Google.

And this is one of the malicious ones

that the article focused on.

And yeah, turns out, shocker,

it includes an InfoStealer malware.

I'm going to go out on a limb.

The article didn't say this,

but I'm going to go out on a

limb and say that it did work once

you fired it up.

Because usually that's how it is, right?

It works,

so you don't think anything's wrong.

But when you install it,

it's actually got that InfoStealer in

there.

And they said that there were multiple

repos like this.

So...

So, cybersecurity takeaways from this.

We're covering this as a headline story

partially because it's a really big story going around,

but there are a couple reminders here.

One of them, as far as the repo thing goes:

we always talk about making sure

you get things from an official source.

And not to go too far out of my way

to pick on Google here, not like they don't deserve it,

but we've been covering this whole

Google sideloading story a lot,

and Google is trying to act like, oh,

this is all for security, right?

It's dangerous to get apps from a

third-party store,

even though the Play Store has plenty of

malware on its own.

But even so,

the point is: get things from a

trusted source.

Signal, for example, does have an APK,

but it's kind of hard to find.

But it is okay to get it from

the Play Store because that's a trusted

source.

There are also other places where you

could get the APK directly.

There are third-party app stores like F-Droid,

which I know there are some concerns about.

But the point is: when I go to download something,

typically what I do is I go straight

to the developer's website and I go, okay,

what are their official channels?

And then they'll say it's on the Play Store,

it's on F-Droid, it's on GitHub directly.

And then I'll look at the list and

decide which one I want to use.

It's not so much the channel;

it's making sure it's official.
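One concrete habit that goes along with using official channels: when a developer publishes a checksum next to a download, compare it yourself before installing. A minimal sketch, where the file bytes and the published digest are stand-ins, not a real release:

```python
import hashlib

def sha256_hex(data: bytes) -> str:
    """Hex SHA-256 digest of a downloaded file's bytes."""
    return hashlib.sha256(data).hexdigest()

# Stand-ins: in practice you'd read the downloaded file from disk and
# copy the digest from the developer's official site or release notes.
downloaded = b"example installer contents"
published_digest = sha256_hex(b"example installer contents")

if sha256_hex(downloaded) == published_digest:
    print("checksum OK -- file matches what the developer published")
else:
    print("checksum MISMATCH -- do not install")
```

It only helps if the digest itself comes from a page you trust, which is exactly why going straight to the developer's own site matters.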

So maybe don't try to get free Claude

doing that.

And then, yeah,

just the other takeaway I had was the

whole source code leak thing.

Anthropic was really quick to own up that

it was a human error.

They said here, what was it?

Yeah,

this was a release packaging issue caused

by human error, not a security breach.

Yeah,

not like AI doesn't do this kind of

stuff all the time.

But, you know,

it's just remembering that there is...

remembering the human element in

everything.

You know,

if you listen to any social engineering

people,

they're always quick to point out that

humans tend to be the weakest link in

any system.

You know,

I could spend a lot of time trying

to,

if I'm trying to get into a building,

right?

I could spend a lot of time trying

to hack the door code or the card

readers or whatever.

Or I could come up with a really

convincing story for why I need to be

there,

usually involving a high-vis vest and a

clipboard, in my opinion.

But yeah,

so I think those are kind of the

more technical things that I took away.

Jonah,

was there anything specific about this

story that jumped out to you from your

expertise?

Yeah, there were a couple things that I noticed,

and I was trying to find a tweet

that I saw from somebody else,

but I couldn't pull it up here.

But I'll talk about some stuff.

Going back to what you said about Mastodon,

I do think it's interesting,

like the supposed quality or sloppiness of

the code,

because I believe Anthropic has said for a

while that all of their code base is

now AI-generated by all of their

developers.

That does, I think,

at least bring into question whether you

can DMCA or copyright any of this code

at all.

Maybe you can't because it's all

AI-generated,

which AI companies have been pretty firm

about saying, you know, this is not...

like a copyright concern at all.

So it's kind of a taste of their

own medicine there that all of this is

out, I think.

The main thing that I think we see

in this source code,

because like you said,

the models aren't leaked,

but there is a lot of information about

the

system prompts that Claude uses for a lot

of different tasks,

which definitely gives a lot of insight

into how Claude works.

To their competitors,

I think it gives a lot of insight

into how you could make a similar product.

And to people who are trying to do prompt

injections to bypass some of the

restrictions in place in Claude Code:

you can more easily see how they're

implemented and get around them.

So I don't know how people are going

to end up using that,

but I think that there is a lot

of opportunity for people to do something

with it.
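Part of why the prompts were exposed at all: system prompts ship as plain string literals inside the packaged JavaScript, so even without the source map, the longest strings in a bundle tend to be readable. A toy sketch, where the bundle text is invented for illustration:

```python
import re

# System prompts are just string literals inside a shipped JS bundle,
# so anyone with the package can read them. This bundle text is
# invented, not the real Claude Code bundle.
bundle = '''
const SYSTEM_PROMPT = "You are a coding assistant. Follow the safety rules below at all times.";
const ok = "short";
function run() { /* minified logic */ }
'''

# Long double-quoted literals are a decent first-pass heuristic:
# prompt text tends to be among the longest strings in a bundle.
candidates = re.findall(r'"([^"]{40,})"', bundle)

for text in candidates:
    print(text)
```

Which is part of why seeing the actual source, rather than just the strings, makes bypassing those restrictions that much easier: you see not only the prompt but how it's wired up.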

All of the AI stuff, I mean,

we've talked about it on the show before,

not...

the most interesting to me from a security

or privacy standpoint,

because, like, Claude Code and all of these

AI models,

they're going to run fully in the cloud.

So they get all of this information.

I think it is sort of dangerous to

be using and relying on,

especially for sensitive information.

And that hasn't changed from any of this.

But yeah, it's interesting stuff.

The tweet that I was trying to pull

up talked about how

Anthropic is using their AI to contribute

security patches to a lot of different

open source projects.

And they've been doing that out in the

open.

Certainly,

I've seen a lot of security

vulnerabilities submitted to

GitHub from Anthropic.

I think one of the latest Mastodon

security vulnerability patches was

submitted by Anthropic.

So I believe I've seen contributions to

that and to Firefox and a lot of

other open source projects from them.

Unfortunately,

I just cannot find this source,

but maybe I'll be able to pull it

up later.

But I saw some information about internal

tools that Anthropic is using where the

system prompt is like,

create these security vulnerability

patches without giving any indication that

AI or Claude Code is used at all.

So it's very specifically told not to

attribute anything to Claude or Anthropic.

It's told...

you know,

not to include comments that might

indicate it's AI, et cetera.

So I think that that's really interesting

that they are,

I don't know what cases they're using

those tools in.

I would have to find out more information

about that,

but I think it's interesting that they are

doing that.

Yeah,

it looks like you pulled up on the

screen some of the instructions that I

saw.

Yeah,

I found it on another article from Ars

Technica.

Yeah,

I don't know where the original thing is.

But yeah,

basically they were saying there's an

undercover mode.

So as you can see there...

they're basically telling Claude that

they're operating undercover in a public

open source repository.

So it can't include any Anthropic-related

information.

I can imagine that's probably used because

a lot of open source projects are very

anti-AI contributions and anti-AI pull

requests and just automatically close

anything that's AI generated.

So this is probably a way for them

to

try and get around those restrictions.

Whether that's a good idea for them to

be doing or not, I guess that's a debate,

but it seems to be what they are doing,

and that's kind of confirmed with this.

So I thought that was fascinating.

Yeah, I agree.

I feel very torn on that, because on

the one hand,

there's probably an angle I'm missing

here.

On the one hand,

I understand the idea of like,

let's just assume they're doing that

altruistically, right?

Like we want to make these open source

projects better.

We want to make them more secure.

You know,

like I don't think at Privacy Guides,

for example, correct me if I'm wrong,

we don't typically go out and solicit

people to like, hey,

come check out our website and make sure

all this information is accurate.

But we totally welcome it if somebody does

come up and they're like, hey,

I found an inaccuracy and they report it.

And I feel like that's kind of what

they're doing is, you know,

on the one hand,

it's like it's still creating a more

secure project, right?

Assuming that the bug report is good.

I know that's historically been a problem:

a lot of AI slop bug reports

that aren't really valid and aren't

really bugs, or whatever the case.

And semi-related,

but I did see an article earlier this

week that said that actually there's been a

noticeable increase in quality on AI bug

reports.

So maybe they're starting to make some

progress on that.

But either way, point being,

I understand the idea of the end result

is the same and either way it makes

the project more secure.

But it also feels very disrespectful of

like, if I don't want AI reporting it,

why would you go out of your way

to hide that?

And I don't know,

it's a really weird thing and I don't

know how to feel about it.

But I did see that too.

That's really strange.

Yeah, I would be really interested to see

data on all of the security-related

pull requests or vulnerability reports

that Anthropic specifically has submitted,

because I feel like there are two different

types of AI contributions to these projects.

I think a lot of them are

kind of slop contributions, because

a lot of people in the open source

space, or some students, for example,

they want to pad their GitHub profiles

because it looks more attractive to

developers.

I see that quite a bit where if

you can get like a PR merged into

a major project,

it just kind of

looks good for you.

And so I think a lot of people

are just spreading a wide net and just

submitting a ton of AI slop pull requests

and hoping that some of them get accepted,

which is very annoying for open source

maintainers.

But on the other hand,

if Anthropic themselves,

if they have a legitimate interest in

improving open source tools,

which they probably do because a lot of

these big companies do use these open

source tools themselves for a lot of

different reasons,

I can imagine that

somebody like an engineer at

Anthropic being paid to use AI and submit

these pull requests might be doing a

better job: not just completely

submitting slop, but using AI to find

these vulnerabilities and write this code

while checking it themselves before

submitting it, and writing explainers,

because they're getting paid to do this,

unlike the people who are just

rapid-fire submitting

vulnerability reports and PRs, right?

I don't know if that's true or not,

but I would imagine Anthropic would

probably argue that that's true and would

probably use that as the reason that

they're doing this.

And like I said,

I have definitely seen AI companies report

security vulnerabilities that were patched

to open source projects,

and some of them were major

vulnerabilities.

So there is some merit to the idea

that AI can find these vulnerabilities

more easily than, I mean,

I don't know if it's more easily than

people who are auditing the code,

but it certainly is happening.

So yeah, I mean,

if all of the reports that Anthropic

themselves are submitting are accurate and

worthwhile to fix,

I don't know if that's necessarily a

problem.

But of course, people are

all along the spectrum of AI and AI

contributions and AI code specifically.

So yeah,

I think that's going to be quite a

debate in the open source community for a

while,

and I don't know how people are going

to handle that.

Yeah, I don't know either.

It seems like one of the better uses

of AI, in my opinion,

as opposed to writing songs or putting out

blog posts.

It's still just, yeah.

Like you said,

I wonder what the success ratio is,

especially from Claude.

And is there a human review?

It doesn't sound like it from that

snippet that I shared,

but personally,

that's where I fall.

Like, I'm not a developer,

so maybe I just don't understand

how bad the problem is, but

I don't mind if AI helps you find

the vulnerability,

as long as a human looks it over.

But yeah, I'm sure there's a lot

of people that are not doing that,

unfortunately.

Yeah, I mean, we've definitely talked

about this in the Privacy Guides community,

and when we're talking about all these

different tools that we recommend,

people really want to see audits,

but they're extremely expensive.

And if AI is not being used

to write new code, but it's being

used as a second pair of eyes

to take a look at all of this

code, that could be a good thing.

You know,

it is not going to be perfectly accurate,

but if we're being honest,

all of these security audits that projects

are paying for are not completely accurate

or totally thorough either.

And they're certainly going to be cheaper

to run AI than have a whole team

of people auditing this code.

So while I would imagine it's probably

going to...

be worse quality and probably have more

false positives if you're using AI.

I do think that doing it and revealing

some of these vulnerabilities is probably

better for a lot of open source code

bases than not doing any sort of audits

at all and just hoping that the maintainer

catches all of these bugs.

So I can definitely see a use case

here.

It's a tricky situation.

Yeah, for sure.

I know it's not really AI per se,

but I get the emails from GitHub every

once in a while, and you probably do too,

that are like, hey,

there's a thing that you use from NPM

or whatever,

and there's a vulnerability, go ahead

and upgrade.

So yeah, I don't know.

I mean, mine's,

mine's just a static website,

so I can't imagine the damage would be

too terrible,

but still, it's nice to get that proactively,

without having to go out and get a

whole code audit.

So useful stuff.

But I don't have anything to add to

that story unless you did.

Did you want to tell us about this

next story out of California?

Yeah.

So this one was reported by the Los

Angeles Times.

Their headline is California bill would

require parent bloggers to delete content

of minors on social media.

Yeah.

So they have a quote here from somebody

directly impacted.

It says,

as the daughter of a social media

influencer,

Kami Barrett says she navigates life

within a digital footprint she wished

never existed.

Everything my mom posted is still on

social media, she said.

Photos I wish never saw the light of

day, private details about my health,

even when I started my first menstrual

cycle.

She was saying this at a Wednesday news

conference to advocate for Senate Bill,

which would require social media platforms

to offer a process for adults to request

the removal of content that features

themselves as minors and was created by a

family member who received compensation

for sharing material online.

So yeah, this is an interesting story,

and I guess it specifically relates to all

of these family influencers that we see,

which has definitely become more of a

problem lately.

Especially, I would imagine,

in California.

So it's interesting,

but probably makes sense that this is only

going to apply to...

kind of public influencers,

ones who are receiving money or

sponsorships in exchange for all of this

stuff.

But it is only going to be available

for adults.

So there isn't really a process that

prevents any of this stuff from being

posted in the first place or anything like

that.

It's only a retroactive thing that

adults can do about their childhood if

they were a part of, like, a family

influencer situation.

I would say I don't know if that

makes a lot of sense from my perspective,

because I think, as we always say,

anything that you post on the internet is

sort of permanent.

All of this stuff is going to be archived,

and it could potentially be years before

you're able to take any of it down.

So children who are uncomfortable with

all of this going on at the moment,

I don't think, have a lot of protections.

And I don't know how that should be

handled, to be honest.

I know that's been a debate that's been

going on for quite a while: how children

should be compensated for that,

whether it's considered child labor.

There are all sorts of laws, especially

in the entertainment industry, in Hollywood,

and on the internet, that come into play here.

So

I don't know if this is going to

really impact a lot of the people that

we see in the privacy guides community who

are trying to clean up their digital

footprint,

because I think a lot of people are

more concerned about smaller scale

situations than some of these commercial

ventures that this bill is going to

attack.

But I do think it's a good idea

for more privacy protections and some sort

of

process to get that data removed if you

are an adult and you don't want that

information out there.

So it seems to be a good thing.

I'm not sure how effective it'll be or

if it goes far enough,

but I think any protections and processes

to protect your privacy are good at the

end of the day.

Was there anything you wanted to note in

this article, Nate?

No, I agree with you.

It's funny.

I think most people would agree I'm a

lot more

lenient with some privacy stuff than a lot

of other privacy people are.

But like kids are kind of one of

the few things where I'm actually kind of

like,

like in a perfect world,

I think it should be illegal to post

pictures of your kids online at all.

Or at the very least publicly; like,

you know,

if you're going to post pictures of your

kids,

it has to be in a closed

group chat or a friends-only Facebook post.

Again, in a perfect world

there wouldn't be Facebook,

but that's beside the point.

So yeah, and I agree with you.

It's really sad.

Cause, like, even in this article,

one of the people they talked to said

that,

I think it was that first girl, Barrett.

Yeah, Kami Barrett.

Further down,

she says that she recalled being a target

for predators and online bullying,

said her mother was aware of the problems

it created,

but continued to share her daughter's life

on social media.

So, like, cool, thanks.

Now that I'm twenty, twenty-five, thirty,

I can ask you to take it down,

but that doesn't help me when I'm ten,

fifteen, sixteen, seventeen.

You know, like you said,

the damage is already done in so many

ways, and...

I mean, I guess, yeah, I don't know.

It's just, it's crazy.

And it's one thing I thought was

interesting is it says the legislation

requires that social media platforms offer

a process for adults to request the

removal of content.

And then basically from there,

they pass it on to the parent and

the parent has ten days to take it

down.

After ten days,

they get a three thousand dollar a day

fine.

So I don't know.

I'm with you.

I feel like it doesn't go far enough,

and it's not proactive enough,

but at the same time, I mean,

I guess it's better than nothing.

I don't know.

It frustrates me.

I wish it would do more,

but it's a story for sure.

A couple of things to note about this

story.

This bill hasn't passed yet.

It's just a proposal.

But the person in question in this

article was talking about their support

for it.

The other thing I would note is similar

laws do exist in a couple of other

states, including here in Minnesota.

There are some laws here that more tightly

restrict how children can participate

in commercial content in the first place.

So I think if you're under thirteen,

you can't actively participate in any of this

content creation at all.

You can maybe be featured in it,

but you can't be an active part in it.

So I think that in Minnesota,

at least, a lot of those toy unboxing

channels where people have their children

unbox a bunch of toys and that kind

of thing, that's not allowed.

Teenagers here in Minnesota are allowed to

participate,

but there are laws in both of these

situations around how that revenue is

split between everyone involved,

so there are some protections I think for

people participating in these commercial

ventures.

But from a privacy perspective,

I think they probably don't go far enough

in any case.

But it is interesting to see how this

is being handled.

It is a very, I think,

new issue with the internet and everything

that none of the existing laws were really

equipped to handle around child labor and

stuff like that.

So it's good that this is at least

getting attention,

and we'll see how this plays out.

Yeah,

it does say in California they have a

law that was signed two years ago that

content creators that feature minors in

at least thirty percent of the material

have to place some of their earnings into

a trust that children can access when they

turn eighteen.

So, yeah, like you said,

it's an issue that's

starting to get attention, for sure.

Also, just on a personal note,

they interviewed Alyson Stoner,

who they said was a former child actor

who appeared in films like Step Up and

Cheaper by the Dozen.

They were also Isabella in Phineas and

Ferb, and no mention of that.

And I feel so offended because I love

that show.

I just had to call that out.

Interesting.

I had to.

So in a little bit,

we are going to talk about LinkedIn's

browser scanning.

So that should be fun.

But first,

we're going to go ahead and jump into

site updates and talk a little bit about

what's been going on at Privacy Guides

this week.

Just this afternoon,

we dropped a new video.

It is currently members only.

So we usually leave those members only for

about a week.

This one is about encrypted email.

This is another one of those like really

beginner friendly videos that if you're a

bit of a privacy veteran,

you probably know this stuff,

but hopefully it's something that you can

share with your friends and family.

It talks about why mainstream providers

like Gmail and Yahoo aren't quite good

enough and how encrypted email works and

some of the different ones we recommend,

pros and cons of each.

So yeah,

if you are not a member yet and

you want to check that out,

you can join on YouTube or you can

go to privacyguides.org slash donate and

that will take you to a link where

you can sign up for a membership.

But that's what we did this week in

the video department.

And I will turn it over to Jonah.

Very cool.

Yeah, another thing we did recently,

Nate and I recorded this a few weeks

ago, but it's finally live.

We did a panel discussion on the Firewalls

Don't Stop Dragons podcast.

So episode four seventy four of that

podcast is now out.

It's called Privacy Guides Panel.

Nate and I are on it and we

talked about a ton of interesting stuff.

So I would definitely recommend checking

that episode out if you want to listen

to those discussions.

You can look at the table of contents

here.

It looks like Nate's showing that on the

screen,

but you can find the Firewalls Don't Stop

Dragons website for more information.

And if any of those topics sound

interesting to you,

Definitely check it out because it was a

ton of fun for us to record.

I think we talked about a lot of

cool, interesting, informative stuff.

So hopefully somebody finds it useful or

at least finds it entertaining.

In other news,

we again published a bunch of news briefs

that we're not covering here on this show,

but you can find our articles at

privacyguides.org slash news about them.

We have stories on macOS

improving security in the Terminal app,

a grandmother who was wrongfully arrested

because of facial recognition,

the iOS twenty-six point five beta

including end-to-end encryption for RCS

messages, Walmart's

digital price labels, and more.

So definitely check that out.

Again,

it's privacyguides.org slash news if you

want to read those stories and let us

know if you have any questions about them

on the forum or anything else,

because there's always a lot of

discussions about these stories.

over there as well.

Everything that we do at Privacy Guides

is made possible by our supporters.

You can sign up for a membership

or donate at privacyguides.org slash donate,

or you can support us by picking

up some swag, like this water bottle,

for example, at shop.privacyguides.org.

Privacy Guides is a nonprofit which

researches and shares privacy-related

information,

and we facilitate a community on our forum

and on Matrix where people can ask questions

and get advice about staying private

online and preserving their digital

rights.

Now let's move on to our next story.

This is about NextCloud and OnlyOffice.

That is right.

So, full disclosure,

I am a Nextcloud user and a

little bit of a Nextcloud fanboy.

So I'm bummed to hear this story,

but OnlyOffice has suspended their

partnership with Nextcloud for forking

its project without permission.

And this comes on the heels of another

announcement.

So earlier this week,

Nextcloud, IONOS, and several other

European tech companies

came together and announced this new open

source project called Euro Office,

which they describe as, quote,

a sovereign replacement for Microsoft with

intuitive interface and strong

compatibility backed by European open

source community.

OnlyOffice has basically claimed that

this is a fork of their code.

And they say that this violates license

agreements, because they offer OnlyOffice

as source-available, or open source:

they use the AGPL version three.

So specifically towards the end here,

if you're watching on screen,

you can see this, but towards the end,

it says we require compliance with

applicable licensing conditions,

including,

but not limited to the preservation of

OnlyOffice branding, logo, and all required

attribution elements as defined in our

licensing terms, which is,

If this is a brand new project,

it would, of course,

have none of those things.

So for those who do not use Nextcloud,

you may or may not know that

one of the things it comes with

by default is an online document editor, or

office editor.

And there's a couple different ways to

make this work.

You can use Collabora Online,

or you can use OnlyOffice.

And this has been...

I think they said for eight years,

OnlyOffice has partnered with Nextcloud,

and now they are terminating that.

They do say...

I think they said that no, yeah,

no existing partners or clients will be

affected.

So basically if you've already got it

installed, you're good to go.

I don't know what that means for updates

and stuff, but yeah,

I guess we'll find out.

They also, interestingly,

just to throw it out there,

OnlyOffice said that in the past,

and I'm quoting the article here,

Nextcloud has behaved in a manner not

expected from a partner,

including trying to poach its employees

and influencing customers against the

company,

but directly forking the project and

repacking it was the straw that broke the

camel's back.

Yeah, then kind of a statement here.

They said: partnership is built on trust,

and trust requires shared principles.

Where those principles are no longer upheld,

continuing operation is no longer

sustainable.

For this reason,

we made the decision to suspend our

partnership cooperation.

And then just kind of, I guess,

a little bit more background towards the

end.

LibreOffice has criticized OnlyOffice for

being, quote unquote, fake open source.

They say, for one reason,

OnlyOffice defaults to Microsoft Office

formats like DOCX, XLSX, and PPTX,

which are Word, Excel, and PowerPoint,

rather than open standards like Open

Document Format or ODF.

And also, apparently,

when Nextcloud was asked why they didn't

just collaborate with OnlyOffice,

they said that there were a number of

reasons,

including that OnlyOffice is a Russian

company that tends to obscure its origins.

Developers often leave code comments in

Russian and many users are hesitant to use

software potentially linked to the Russian

government.

They also claim that OnlyOffice

discourages contributions,

ignores pull requests and lacks

transparency since commit messages

frequently reference internal issue

trackers only.

So yeah,

I don't know that I have a lot

of thoughts on this one.

Jonah, did you have any,

like, what do you know about this

AGPLv3, for example?

Yeah, so what's in question here is, well,

OnlyOffice says that they've added

provisions to the AGPL requiring certain

attribution in forks of the project.

So we could,

if I could share my screen here,

let's see.

Huh.

So they're talking about two things in

their license.

You have to retain the original product

logo when you distribute the program.

And they do not grant any rights under

trademark law for the use of any

OnlyOffice trademarks.

And the Euro Office Project Initiative

basically removed these provisions,

saying that, basically,

if I can find it here...

section 7 of the AGPL says that you can

remove any additional restrictions,

or any of these terms,

from that license on your own.

And this is kind of the basis of

Euro Office's claim that they can kind of

change this license.

And they say that they don't have to

use their logo to give attribution to

OnlyOffice.

The AGPL is still going to require that

they provide some attribution somehow,

but according to the Euro Office project,

they don't have to use the OnlyOffice

trademark.

I think this is kind of interesting

because usually open source projects like

OnlyOffice in this position,

fight tooth and nail for forks to not

use their branding at all.

So the fact that they want them to

use their logo is kind of strange because

we've seen like Mozilla, for example,

when there's any Firefox forks,

they want to make sure that there's no

Firefox branding whatsoever associated

with that because they don't want it

associated with their

project.

And related to this,

there's actually another case around it.

This started a few years ago,

and I think the latest update on this

was in, um...

A company called Neo4j

started a lawsuit against another company,

PureThink,

about a very similar issue.

Basically, Neo4j added a lot of clauses

to their AGPL license,

and PureThink said that because the AGPL

says that you can remove certain passages

or restrictions that were added onto the

AGPL, they were able to do that.

And PureThink actually lost this case.

This 2025 article is

basically announcing an appeal that's

taking place.

I don't know if that's actually gone to

court yet.

So yeah,

this article says that the AGPL allows

added-on terms, like the Commons Clause

that Neo4j was using,

to be stripped from the license.

And Neo4j said that because they added

it,

you have to comply with all of the

terms of the license.

And the court basically agreed that any

terms in the license have to be followed

regardless of what the AGPL says.

And then the Free Software Foundation and

other organizations in the open source

space

said that that's not the case, and that

they did intend this tenet, or this

provision, in the AGPL to work, and for

these restrictions to be removed because

they believe that you can't really have

restrictions on free and open source

software,

which is kind of the point of the

AGPL.

So

It's a strange case.

It's definitely in a gray area.

And it really depends on how much

OnlyOffice wants to fight this.

But I think you could certainly argue,

in the Neo4j case and probably in this

OnlyOffice case,

that any of these restrictions being added

onto the AGPL that place very specific

limits on how the software can be used

probably make the software not open

source.

So at the end of the day,

you shouldn't be calling it an

AGPL-licensed project.

If you really want these terms to be

followed,

I think they would have to call it

something else,

and it would be at odds with open source

in the same way that a lot of these

source-available licenses that we see are.

It's definitely a hot debate in the

community in general.

We've seen a lot of talk about, like,

the FUTO license, for example,

not being open source and they went with

a different name because of that.

But there's certainly other licenses that

projects are trying to use and they still

continue to claim to be open source when

in reality they're source available.

So I think that if only Office really

wants to follow through on having these

restrictions in place,

I think that would be very at odds

with their claims that they are an open

source

project.

Which would be a bit concerning,

because the entire idea of open source is

that these forks should be able to exist,

and you should be able to completely

fork and create this Euro Office that

Nextcloud is making without any

restrictions or preservation of OnlyOffice

branding.

That doesn't make a lot of sense for

a fork to be doing.

And so OnlyOffice is in a bit of

a strange situation here.

It's always the business

case against open source software in

general.

They don't want people taking their work.

And OnlyOffice clearly believes that

because they say that they've spent years

building a fully functional,

production-ready office document editor.

But at the same time,

they marketed that as an open source

project.

And that is what people kind of expect

from that.

I would also note,

Nextcloud kind of has a history of

forking open source projects in not a

very collaborative way.

I mean, Nextcloud itself was forked from

ownCloud, of course,

and that division was,

I don't think,

super well received by ownCloud

themselves.

So it's kind of a situation that they're

used to.

But I think a lot of people side

with Nextcloud in that case.

And I think that a lot of people

are going to side with Nextcloud here as

well.

So it might just kind of be what

it is.

Yeah, and kind of going back to

what you were saying:

you mentioned that a lot of companies

put work into it, and then

they don't want people stealing that work.

It's one of those things where,

in that case,

and I say this kind of spitefully,

then just don't be open source.

Because obviously, in a perfect world,

I would prefer everything was,

or at the very least,

like you said,

be transparent about being

source available.

Because,

in a perfect world,

I would love for everything to be at

very least source available,

because that's how we're able to verify

that the code is doing what it's doing.

And it helps build that trust,

at the very least, I think.

Especially things that deal with security,

like password managers,

should at the very least have their

cryptographic bits be source available,

bare minimum,

because security is something where

everyone benefits.

Right.

But

To me,

it's just such a crappy thing because

that's the risk you take.

And I've talked to a lot of projects

that are not open source and I've asked

them that.

I'm like,

why don't you guys have any open source

clients or anything?

And that's usually the number one reason

they give is they're like,

we're worried that people are going to

take our stuff and steal it.

We have no real way to control that.

And then, to counter their argument,

there are plenty of companies who seem to

be doing just fine despite that.

But yeah, so it's, I don't know.

It's just me.

I say this with a little bit of

bitterness in my voice towards only

office.

It's like, then just don't be open source.

Like it feels almost, um,

it's going to be a really niche reference.

And I don't think this is as much

of an issue anymore,

but back in like the early two thousands,

um, there was a real,

I don't know if you'd call it an

issue or not.

I guess it depends on how you feel

about it, but a lot of bands would,

um,

would market themselves as Christian bands

because the Christian market would be a

lot easier to break into.

And then once they hit a certain level

of success,

they would quote unquote go mainstream.

And some of them would even like

vehemently deny, like, no,

we were never a Christian band.

And it's like, well,

we have interviews with you where you said

that you were, so whatever, dude.

But to me,

it just feels the same way.

It's like,

you don't actually believe this stuff.

It's just some kind of marketing gimmick.

And in this case, the open source,

I just feel like that, I mean,

I guess in the Christian thing too,

just feels kind of crappy.

It's like, you know,

don't say that you believe in this stuff

just to get ahead in the competition,

like commit to it or don't.

So I don't know.

That just, that really frustrates me.

Quick thanks to at Sod This All for

gifting a Privacy Guides membership.

Thank you for your support.

Oh, nice.

I just saw that.

Yeah.

I had comments closed down so I could

see the screen a little better.

But yeah, thank you.

That's super cool.

All right.

I think that'll take us into a LinkedIn

story that Jonah actually alerted me to

this story right before we started

recording.

This one's like hot off the presses.

Yeah,

I think this was reported just yesterday

or today, if I remember correctly.

I definitely saw it only yesterday,

but maybe it's been talked about a bit

for a while.

But there's this report that LinkedIn is

illegally or allegedly illegally searching

your computer.

They are scanning installed browser

extensions without user permission.

So this is reported by Apple Insider.

And something is wrong with my computer.

There we go.

They say researchers have determined that

Microsoft's LinkedIn is scanning browser

plugins and other information without

permission and building user profiles

using data that the company did not get

permission to take.

A European advocacy group claims LinkedIn

is probing browser extensions through its

website code.

Fairlinked e.V. published a BrowserGate

report alleging LinkedIn detects installed

browser extensions by probing for known

identifiers through JavaScript.

The group says the technique reveals

personally identifiable information.
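Just to make the technique concrete: a page can probe for an extension by attempting to load one of its web-accessible resources and watching whether the load succeeds. Here's a minimal TypeScript sketch of that idea; the extension IDs and paths below are invented for illustration, and the real script reportedly checks thousands of known identifiers.

```typescript
// A probe answers: "did this URL load successfully?" In a real page it
// would be something like
//   (url) => fetch(url).then(() => true).catch(() => false)
// or an <img> element with onload/onerror handlers.
type Probe = (url: string) => Promise<boolean>;

// chrome-extension:// URLs are only reachable for resources an extension
// lists under "web_accessible_resources" in its manifest.
function probeUrl(extensionId: string, resourcePath: string): string {
  return `chrome-extension://${extensionId}/${resourcePath}`;
}

// Walk a list of known (id, resource) pairs and collect the ones that
// respond -- a responding resource means the extension is installed.
async function detectInstalled(
  candidates: Array<{ id: string; path: string; name: string }>,
  probe: Probe,
): Promise<string[]> {
  const found: string[] = [];
  for (const c of candidates) {
    if (await probe(probeUrl(c.id, c.path))) {
      found.push(c.name);
    }
  }
  return found;
}
```

The probe itself is the only browser-specific part; the rest is just iterating a fingerprint list, which is why a site can trivially scale this to thousands of extensions.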

And so this is a threat that we've

talked about before,

I think in a previous episode of this

show,

but definitely on the forum where the

browser extensions that you install can

definitely add to your browser fingerprint

and can specifically identify you based on

what extensions you have installed.

And that's been a known threat for quite

a while,

but I think this is one of the

first and maybe the largest examples of a

real world situation where this is

happening.

And so if we look at this Fairlinked

BrowserGate website,

they point out a lot of different problems

with these tools,

namely that

Microsoft is designated as a gatekeeper

under the Digital Markets Act in the EU.

So Microsoft Windows and Microsoft

LinkedIn are both regulated products under

the DMA, and they need to allow,

as a result, free, effective,

high-quality, continuous,

real-time access to all data,

including personal data that's generated

through the use of these products,

which LinkedIn is not doing because

they're doing this

in the background.

They also point out that this search of

all of your browser extensions can reveal

a lot of different personal information,

and they give some examples of extensions

that could potentially reveal that.

It could reveal your political opinions,

for example,

because there are extensions like

anti-woke, anti-Zionist tag,

no more Musk that you can install.

I don't know what those extensions do,

but obviously having them installed

definitely shares a bit about what you

believe.

It could share some...

Could reveal your religious beliefs

because there are extensions like Porta AI,

which blurs haram content, or Dean Shield,

which blocks haram sites.

It could reveal potential disabilities or

neurodivergence through extensions you

have installed like Simplify which aids

neurodivergent users in browsing the

internet.

Certainly,

LinkedIn could be getting your employment

information.

There's a lot of obvious ways to do

that,

but there are job search extensions that

people use on LinkedIn where that could

reveal information to LinkedIn or your

current employer.

And then it just reveals a lot of

potential trade secrets because LinkedIn

is this network where so many

professionals are located and they share a

ton of information about where they work

and Microsoft would have access to all of

that data and they would also have access

to all of the extensions that these people

have installed,

some of which would be mandated by their

companies.

So, like, whether you use...

The examples that they give are Apollo

and ZoomInfo.

You could imagine other browser extensions

of professional tools that would be

installed by these companies.

I don't know what tools companies use,

to be honest.

I know in the education space,

we would use tools like GoGuardian,

for example.

And so...

In that example,

they could find out what we're using.

But a similar case would apply to all

of these organizations and their employees

who use LinkedIn.

Fairlinked says on their BrowserGate site

that LinkedIn has not disclosed this

practice in its privacy policy.

There's no mention of extension scanning

in any public-facing document that

LinkedIn has published.

And so on this BrowserGate website,

which you can find at browsergate.eu,

they list 6,222 extensions

that a hidden JavaScript

program on LinkedIn will scan your browser

for.

I believe this only applies to Chrome

browsers,

but that's probably most people visiting

LinkedIn, I would imagine.

And you can't opt in or opt out

of that.

And there's, again, no mention of

any of this happening in any of their

privacy policies,

which is definitely very concerning.

It's kind of a mass breach of your

personal data.

They say that this is deceiving

EU regulators, which is probably true.

And so I think it's just interesting to

note, for sure,

that this definitely lends a lot of

credence to the idea that your

browser fingerprints are going to identify

you and reveal a lot of information about

you and what you do,

especially when it's being done by a

company like

LinkedIn that has probably a lot of

information about you if you use it.

It has your real name.

Some people ID verify on LinkedIn.

They have your whole resume and being able

to tie all of this digital data to

those profiles.

creates a very unique and very

comprehensive profile of you when you use

the service.

So I think it is very concerning for

sure.

And it shows that the threats that we

talk about when it comes to your privacy

are in fact a real issue.

And these companies are trying to get all

of this data wherever they can.

Yeah, for the record,

I tried to show the browsergate.eu

website.

For some reason,

it's not loading on the device.

It worked fine earlier,

but I guarantee it's DNS.

It's always a DNS issue.

OK, so my first thought,

because I was recently educated on this:

for general browser fingerprinting,

like the day-to-day stuff,

Some browsers like Firefox, for example,

they do actually try to obfuscate what

extensions you have installed.

And I guess just to back up a

little further, I know that for, again,

for general fingerprinting,

it's not always a guarantee that having

more extensions will make you more

fingerprintable because it generally

depends on what does the extension do and

whether or not it modifies the page.

But obviously this one is going out of

its way to scan your extensions, right?

So that's a little bit of a different

story.

Though I would argue that general

fingerprinting probably does that, too.

But going back to what I was saying

about Firefox,

I know Firefox basically tries to,

and I'm probably going to get the fine

details wrong on this, so I apologize,

but they basically try to like randomize

the ID that your extensions have to make

it a little bit harder for you to

be fingerprinted.

Do you think that would,

do you think that would stop something

like this or slow it down?

Or is it just going to be able

to get past that anyways?

Um, it could potentially, but, I mean,

it depends.

You know, I'm not sure how these programs

work.

Randomizing it could work if you can't

find the files in the first place,

and that probably is a strong protection

against it.

But...

If those extensions modify the page

itself, which a lot of extensions do,

then that probably is still going to be

detectable.

And so that's only going to protect your

privacy against certain extensions you

have installed that make public resources

available, but don't modify the page,

which I don't think would be a ton

of extensions,

especially like password managers.

I can imagine where...

like if they edit the page itself to

add like a pop up or like a

drop down menu to logins,

that's going to be impacted.

So if you disabled all of that autofill

stuff, and you kept the extension,

and only like manually copied from it on

certain pages, you know,

it could potentially protect you in that

situation.

But I don't think most people are doing

that.

So I don't know how extensive that

protection would really be.

Which even then,

my thought process is that kind of defeats

one of the advantages of a password

manager, which is if it doesn't autofill,

that could be an indicator that you're on

a phishing page.

So if it never autofills,

then you never have that moment of like,

wait, am I on the right page?

Yeah.

Yeah.

And then I guess my other thought, too,

is just not really a question,

but just a thought.

You pointed out that this was tested on

Chromium browsers,

which is probably what most people are

going to use anyways.

I, at my last job,

they gave us work computers that came with

Microsoft Edge.

And I mean,

99 percent of what I did was

logging into the company stuff anyway.

So I just used Edge, because that's what

it came with.

And I think at one point I did get

Brave installed on it,

and then I was never able to do it again.

But I think I did try Firefox,

because I was like, well, you know,

it's not Edge, right?

It'll be a way bigger improvement in

privacy.

But I got really annoyed because

everything in a corporate environment is

optimized to work

with Edge.

And so it was just so much extra

friction to use Firefox.

So where I'm going with this is, yeah,

like most corporate environments are

probably going to be using either Chrome

or Edge because everybody's familiar with

Chrome.

And where I was going with that is

at my job, they said like, yeah,

if you go to our little app store,

you can download Chrome or whatever.

We don't care.

Use whatever browser you want.

So most people are probably going to be

using Chrome or Edge.

Maybe some will be using Safari,

which I think the article did say that

Yeah,

Safari users are less likely to be

affected by the specific mechanism based

on how extension detection typically works

across browsers.

Apple's browser model limits

fingerprinting surfaces.

But it kind of goes back to...

It's unfortunate because not everything...

Where am I?

How am I trying to word this?

It's important to try to compartmentalize

your professional life and your personal

life, right?

Like never...

Never do personal stuff on a work

computer.

For some reason, people do anyways,

and I don't know why.

But even then,

LinkedIn is something that...

You wouldn't get in trouble for doing that

on a work computer, I would imagine,

but it's also something you would do at

home, right?

LinkedIn is supposed to be something that

follows you from job to job to job.

It's not necessarily specific to that job.

So it is something that I could see

people checking on a home device,

which is so frustrating because it's like

you're trying to...

I don't know.

It's like,

it's one of those things where like,

you're not really necessarily doing

anything wrong and you're still getting

punished.

And that's, that's super frustrating,

but yeah, I guess I,

I just wanted to point that out.

It's, it's, yeah, I don't know for sure.

I mean, for a ton of people, I think,

their work laptop is their only computer,

in a lot of cases,

besides their phone.

I know a lot of people

in that situation.

And yeah,

definitely do not recommend doing that.

You should get your own personal laptop

and use that instead.

But I know a lot of people do

that anyways.

Another thing that I wanted to share,

not in the notes,

but kind of related to this is browser

extensions aren't the only ways that

websites can potentially fingerprint you

or like software you have installed on

your computer.

Sometimes the software itself on your

computer can work against you.

And so kind of recently,

I think this has been going on for

a while,

but it's been picked up by some news

sources.

Basically, Adobe Creative Cloud,

of course it's Adobe,

is changing the hosts file on your

computer,

which allows websites to detect whether

you have Adobe Creative Cloud installed.

So this is posted to Reddit.

Basically,

Adobe is adding this line to your hosts

file.

And then when you visit the Adobe website,

it tries loading an image from that exact

domain.

And if the image loads because of this

line that they've added that points that

domain to a specific IP,

then they know that you have Creative

Cloud installed.

And that could, of course,

be checked by any number of different

websites to detect whether you have Adobe

Creative Cloud installed.
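You can check for this kind of override yourself by looking at the hosts file directly. This is a simplified sketch of scanning hosts-file text for entries that remap a dotted, public-looking domain; the domain used in the test is an invented stand-in, not the one Adobe actually adds.

```typescript
// Return every hosts-file entry that maps a dotted domain name to an IP.
// Plain "localhost"-style names (no dot) are expected and skipped; a
// public-looking domain pinned to some IP is the kind of override a
// website could detect by trying to load a resource from that hostname.
function overriddenDomains(hostsText: string): string[] {
  const hits: string[] = [];
  for (const raw of hostsText.split("\n")) {
    const line = raw.split("#")[0].trim(); // strip comments and whitespace
    if (!line) continue;
    const [ip, ...names] = line.split(/\s+/);
    for (const name of names) {
      if (name.includes(".")) hits.push(`${name} -> ${ip}`);
    }
  }
  return hits;
}
```

On Linux and macOS the file lives at /etc/hosts; on Windows it's under System32\drivers\etc. Note the website never reads this file itself; it only observes the side effect of whether a request to that hostname succeeds.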

And so even if you don't have any

browser extensions,

there are other ways that software

on your device itself can um increase your

browser fingerprinting profile um

regardless of what you do with the browser

so that is something to definitely keep an

eye out for because the only thing that

would really protect you against this is

either not letting creative cloud do this

which i don't know if there's a mechanism

to do that but it might be

uh worth looking into or using a browser

like tor browser which is going to bypass

all of your local network stuff

specifically but that's um challenging to

do and not a lot of people are

doing that for day-to-day use um and so

software that does something like this um

is a problem i don't know of any

other software that's going to do this

besides adobe but um

Of course, again,

of course it's Adobe doing that, but yeah,

that is another attack vector unrelated to

extensions that websites could be using

that you'd also have to look out for.

That's insane though.

Editing the hosts file.

I don't even like screwing with that.

That's some deep level stuff.

Oh my God.

Wow.

These companies are out of control, man.

Yeah.

My brain hurts.

Just related to that, it's always DNS,

right?

DNS can be used against you.

DNS is used for evil.

Anyways,

I think that's all I have to say.

Do you want to talk about our next

story here?

Yeah.

My brain is still hurting from the hosts

file thing, so we'll just move on.

So this next story,

it helps if I share the actual screen.

Here we go.

So this next story comes from 404 Media.

It says,

a secure chat app's encryption is so bad,

it's, quote unquote, meaningless.

I mean, okay,

we'll go through it a little bit.

So the app is called TeleGuard, and I've

heard of it a little bit.

It actually rang a bell when I read

this.

I really, I'm not going to lie.

I really wanted a moment where I went

and checked the DMs on the forum,

because we get a lot of projects at

Privacy Guides that message us directly.

And they're like, hey,

you should recommend our product.

And we always tell them like,

go post on the forum.

This is a community project.

Let the community vet it.

Um, so I went and checked,

and I thought, like,

maybe I knew their name because they

messaged us,

but nothing like that, I guess.

So I don't know where I've heard it

from, but, um,

it has been mentioned on the forum once

or twice,

but never really like heavily recommended

or anything.

Just, I don't know.

But either way.

Yeah.

So this is an app that markets itself

as a secure end to end encrypted messaging

platform.

It's been downloaded at least a million

times.

Um, but apparently this researcher

found...

It says no storage, highly encrypted,

kind of like military-grade encryption,

right?

Anyways, Swiss-made.

And there's an anonymous researcher in

March who contacted 404 Media.

They said that the private encryption keys

are sent to the company server upon

account registration.

And, um,

Jonah can correct me if I'm wrong about

any of this, cause I'm,

I'm speaking a little bit outside my

element here, but I think I'm,

I'm right about this.

Um, there are services like Proton,

for example, that, um,

I don't know if I'd say the private

key gets sent to the server,

but they do have a way where like

you can log in from any device and

your email is decrypted.

Right.

But they also store that in such a

way where they don't really get the key

itself.

Um, I, again,

I could have the details wrong here,

but my point being like,

I think there is a way to store

private keys,

but they weren't doing it this way.

They weren't doing it in a way where

it's like,

we don't have access to your private key.

Like, no, they just had your private keys.

Um,

So yeah, they also...

I think it's further down.

They go through every single issue they

found,

which is basically like your private key

was derived from your user ID.

So anybody who had your user ID could

plug it into this API and decrypt your

messages, which is anybody you message.

Or, well, on Signal, for example,

a lot of people will post their

username publicly, because they're like,

hey, anybody who wants to contact me,

go ahead.

They said further down that metadata was

stored in plain text.

So basically every single mistake you

could possibly imagine a company doing or

a messenger doing,

it seems like they were doing.
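To illustrate why deriving the "private" key from the user ID is fatal: if the derivation is deterministic and the input is public, anyone who knows the ID can recompute the key. This is a sketch of the general failure mode only; the SHA-256 derivation below is an invented stand-in, not TeleGuard's actual scheme.

```typescript
import { createHash } from "node:crypto";

// A deterministic derivation from a public identifier. Anyone holding
// the same userId gets the same output -- so calling the result a
// "private" key is meaningless.
function deriveKeyFromUserId(userId: string): string {
  return createHash("sha256").update(userId).digest("hex");
}

// The account owner and a stranger who merely knows the user ID compute
// the exact same key, which is what lets anyone decrypt the messages.
const ownerKey = deriveKeyFromUserId("user-12345");
const strangerKey = deriveKeyFromUserId("user-12345");
```

A sound design would instead generate the private key from a secure random source on the user's device and never transmit it; only the public key leaves the device.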

And oh, man, hold on.

I do have to find...

Yeah.

So after publication,

the CEO contacted 404 Media via LinkedIn,

hopefully from a company computer,

in a direct message, and said, quote,

the information is incorrect!

The person who gave you the technical

information has completely misled you.

That person is not competent!

Uh, the CEO did not provide any evidence

for this, or point to any specifics.

Yeah.

I don't know.

I always like when people do that kind

of stuff.

Very professional.

Um,

So is my making fun of them,

but whatever.

So yeah, I personally,

I wanted to share this story because I

feel like in the privacy community in

general,

I see a lot of people who I

think...

we get excited about new projects.

I think there's two kinds of privacy

people.

I think there's the people who get excited

about new projects and the people who are

suspicious of anything new.

But I see a lot of people who

get excited about new projects and they're

constantly like, oh,

there's this new messenger I just heard

about.

I'm excited to try it.

What does everybody think?

And first of all,

I think that's really awesome when you go

to other members of the community.

What do people think?

And because I have seen one of the

messages that I mentioned when I was

trying to figure out where I've heard of

this app before.

I went to the privacy guides forum and

one person was asking like, hey,

what does everybody think of this?

And a lot of people were like, oh,

it's proprietary.

Like this seems weird.

This seems weird.

There's a lot of red flags here.

I don't think anybody did like an actual

technical analysis like this person did.

But, you know,

it's good to get that kind of feedback

from other people.

Like I'm very open about the fact that

I don't really know a lot of code.

I did take a...

There's a little app that kind of gamifies

learning code, kind of like Duolingo does.

And allegedly it taught me Python,

but I wouldn't trust me to code anything

in Python if I were you.

I can now look at Python and recognize

it as Python, basically.

So that said, like,

I think it's really good to,

in my case, you know, like, hey,

I don't know enough about code to

understand this.

Can anybody else weigh in on this?

That's a really good thing.

But I think it's just this:

be a little bit cautious, right?

There's a fine line because on the one

hand, if we never trust anything new,

we would never have any mass adoption of

all these great tools like Proton, Tuta,

Signal, SimpleX.

All these really good tools would never

get out of the small phase because nobody

would ever trust them.

But at the same time,

we have seen so many apps that shut

down or get sold.

Every once in a while,

it does turn out to be a honeypot.

And so there's a very fine line between

these things.

And-

Yeah, I would also ask,

especially with chat messengers,

one of my personal beefs is I feel

like there's

an obnoxious amount of messengers.

And one of the questions I always ask

with any new product, not just messengers,

but any new product is what are you

solving?

Like people send me links all the time

and they're like, this looks really cool.

And I'm like, okay, what is it doing?

What is it solving?

What problem is this solving that,

you know, whether it's a search engine,

an email provider, whatever,

like what is it doing that this existing

tool doesn't already do?

And I'd say about half the time people

are like, oh, I don't know.

I just,

I saw it and thought it was cool.

For me personally,

it's gotta be solving a problem.

But yeah.

So, um, I don't know.

Did you have any thoughts on this?

I know,

I think this one may have gone below

your radar a little bit,

but did you have any thoughts about it?

Yeah.

Um, yeah,

I think that's all a good takeaway.

Um,

thankfully, the only thing I would say,

in the case of TeleGuard specifically,

is that we've known about some of the

issues with it for a while.

I know that they note at the end

of this article that

TeleGuard handed over information to the

FBI, according to the Washington Post.

That article was shared on our forum and

in all of the posts where TeleGuard is

brought up or in the thread about

TeleGuard itself.

People have known for a while that they

can provide information like the push

notification tokens and other information

and hopefully are avoiding that.

But yeah,

it's definitely a good thing to keep in

mind because there is a balance,

like you said.

We do need to have more products in

this space,

but knowing whether they work well is...

kind of tricky.

And it's always good to keep an eye

on this stuff because they certainly do

not always work the way that they

market themselves for sure.

I wasn't aware until I just read this

article that it was made by Swisscows.

I've heard a lot of, well,

not a lot,

but I've heard their search engine brought

up a few times.

I know that they also have a file

storage service,

which, as far as I know,

is just based on Nextcloud and uses the

Nextcloud end-to-end encryption,

which isn't the best.

So they kind of just seem to be

one of those companies where they're just

putting stuff out there, probably with

open-source tools, without really adding

too much or changing it. I don't know

if TeleGuard is its own homebrew product.

I would imagine it is, because I don't

know of any open-source stuff that would

have this poor of an encryption

implementation. At least the people who

are forking Element, for example,

are getting a reasonably decent

encryption implementation,

whereas I don't know what's going on with

TeleGuard.

But yeah,

I think in this specific case,

people already know not to use it.

And otherwise, with stuff not like this,

it's everything you said, for sure.

Yeah, I looked into Swisscows briefly.

I think the only thing it has going

for it, the search engine I mean,

is that it says it will censor adult

content,

which I think could be useful if you

have really young kids,

just as like one of those layers of

defense, you know,

maybe set that as the default search

engine on the family computer and

But then we get into the whole topic

of like,

at what point is it appropriate to kind

of transition your kids off that?

But I don't know.

I remember when I looked into it,

that was kind of the only advantage I

saw was like, okay,

I could see this if I had young

kids and I just wanted it as one

more layer of defense of like,

I don't want them to accidentally find

their way onto something bad.

But yeah, of course,

404 Media does note at the end of this

article that TeleGuard has a reputation

of being linked to cam models and child

abusers.

So how much would I trust their approach

to child safety?

Probably not that far.

But yeah, in general,

it's probably a good idea for companies to

be a bit more thoughtful about all of

that stuff.

Yeah, that's fair.

I don't know.

I trust 404 Media,

but I'm not going to lie.

When I read that part,

my brain kind of went to like,

I wonder how much Teleguard does get used

for that stuff.

I don't know.

Yeah,

maybe if you turn a blind eye to

it.

I don't know how much they market it,

but I know that Kik had this reputation.

Maybe it still does, I don't know.

It does with me, that's for sure.

Yeah,

I've definitely heard this about various

messaging apps to the point where it seems

to be...

If you have that reputation and it

remains,

it seems to be kind of intentional,

and if it's on the radar of these

officers saying that it's notorious for

it,

that is a bit of a red flag.

Of course, with law enforcement,

it can always go either way because a

lot of law enforcement officers will say

GrapheneOS, for example,

is notorious for being used by criminals

when in reality it's just a security tool.

But seeing as how this chat app doesn't

seem to provide adequate security,

I don't think it's the same sort of

situation.

Yeah.

Which not to get off topic,

but I know I've said in the past,

like,

cause there was that story about a year

or two ago about, um, apparently in Spain,

just having a pixel phone automatically

makes you suspicious,

like maybe not legally, but in practice,

it makes you suspicious because the only

people in Spain that have Pixels are drug

dealers using GrapheneOS.

Um, and so it's the,

my argument when we covered that story

back then was like,

this is why we need to normalize tools,

uh, privacy tools,

because

if the only people using Signal are,

not that they're doing anything wrong,

of course,

but like dissidents and drug dealers,

then like it becomes like, oh,

you're on Signal,

you have something to hide, which,

you know,

in some countries being a dissident is

illegal.

So my point is I'm not trying to

morally group them into the same thing,

but my point,

it becomes something suspicious.

Whereas, like, my stepdad is using

Signal and probably does not know what it

is.

I had to download it and put it

on his phone and set it up for

him and get him in the family chat.

He didn't even know it could do video

or voice calls.

I tried to call him on it one

time and he didn't pick up.

So I called him on the regular phone

and he's like,

did you just try to call me on

signal?

I'm like, yeah, it does voice calls.

He's like, oh, I didn't know that.

So,

but my point being like when everybody's

using it,

then it takes away from that stigma

because they can't point to it and be

like, oh,

only bad people are using signal.

Really?

Really?

My seventy-year-old stepdad,

you think, is running drugs from the

border?

Come on.

So anyways, yeah,

I just I know that's a little off

topic,

but I always feel the need to say

that.

So I think that'll take us into forum

updates if I remember correctly.

Yeah, well, in a minute, everyone,

we're going to start taking viewer

questions.

Of course,

you can always leave them in the chat

anytime.

But if you've been holding on to any

questions about any of these stories that

we've talked about so far, go ahead,

start leaving them now here in the chat

or in the forum thread for this live

stream.

Otherwise, yeah,

let's check it on the community forum.

There's always a lot of activity on the

forum every week,

so you should always check it out.

But here's a couple discussions that we

had

wanted to highlight from this week.

The first one is here about Russia's

internet blocks.

Let me get this pulled up.

This was just a discussion on a New

York Times piece which talked about

Russian internet restrictions and how

Russians are evading them.

So it's a bit of a cat and

mouse game there.

So the person who posted this said,

as some background, since early March,

Moscow and St.

Petersburg have experienced widespread

mobile internet blackouts,

not just blocked apps,

but full mobile data shutdowns.

Telegram is reportedly being blocked

entirely starting in April.

The government regulators now have the

authority to disconnect Russia from the

global internet entirely.

And some regions of Russia are on

lockdown.

whitelist mode, meaning everything on the

internet is blocked except state-approved

services like Yandex and government

portals. So yeah, this person was

interested in whether or not there's a way

around this government censorship, which

could expand to Europe and North America.

The whitelisting situation,

that is pretty tricky because that's going

to block even the ability to use Tor

bridges, for example.

I know that Tor bridges are probably the

best way to get around censorship,

but if you're in a full whitelist

situation, that may not work.

At the end of the day,

if your internet service provider isn't

going to allow you to make any sort

of connections,

There isn't much you can do about that

besides find an entirely alternative

network.

So people in this thread note that Russian

citizens have started using Meshtastic to

communicate,

which is a decentralized network that

doesn't use the internet at all.

It uses LoRa radios,

which are small devices that you can

connect to your phone to communicate,

but they have very limited range,

although you can set up a mesh with

them.
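Since Meshtastic came up, here's a toy sketch of the core idea behind a mesh like that. This is my own illustration, not the actual Meshtastic protocol; the node positions, range, and hop limit are made-up numbers.

```python
# Toy flood-routing sketch of a mesh network: each node rebroadcasts a
# message it hasn't seen before, up to a hop limit, which is how lots of
# short-range radios can together cover a long distance.

RANGE = 5  # toy radio range, arbitrary units
nodes = {"A": 0, "B": 4, "C": 8, "D": 12}  # node positions on a line

def neighbors(name):
    """Nodes close enough to hear `name` directly."""
    return [n for n in nodes
            if n != name and abs(nodes[n] - nodes[name]) <= RANGE]

def flood(source, hop_limit=3):
    """Return the set of nodes a message from `source` reaches."""
    delivered = {source}
    frontier = [source]
    for _ in range(hop_limit):
        nxt = []
        for cur in frontier:
            for n in neighbors(cur):
                if n not in delivered:
                    delivered.add(n)
                    nxt.append(n)
        frontier = nxt
    return delivered

# "A" can only hear "B" directly, but relaying carries the message on:
print(sorted(flood("A")))  # ['A', 'B', 'C', 'D']
```

With one hop the message only reaches the direct neighbor; each relay extends the reach, which is the whole appeal of a mesh when infrastructure is cut off.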

There's probably other solutions,

but I think, yeah,

there's probably not too much you could do

from a technical perspective here that I

can think of.

Was there anything in this form that you

wanted to highlight specifically?

No,

it was really just kind of the Russian

internet blocks in general.

I know when those, well,

when the war in Ukraine first started,

I know Russia started cracking down on

VPNs.

And at the time I was with Surveillance

Report and Henry really made a good point

about how this is one of the drawbacks

of a centralized app store.

And at the time we were talking about

Apple,

but now it seems like we're starting to

talk about Android too.

Um, because, you know,

with Android and sideloading, uh,

which I know people don't like that term,

but you know,

Android and installing third-party

installs, whatever you want to call it.

It's, um,

it's kind of hard for Android to be

like, well,

we blocked VPN installs because they can't

block VPN installs and Tor installs.

Whereas Apple, you know, when,

when Russia came to Apple and was like,

Hey,

remove proton VPN and Nord VPN and all

these VPNs, they had no choice,

but to be like, all right, we'll do.

Cause you know that everything's so

centralized and locked down, but.

Yeah, with a total internet blackout,

the thing that comes to mind is years

ago, again, back on surveillance report.

So I interviewed John Todd,

who was the president of Quad Nine,

I think.

He's from Quad Nine.

I think he was the president at the

time.

I'm not sure if he's still there.

But it was interesting because we talked

about,

or it briefly came up about censorship

resistance.

And something he said that always stuck

with me is he's not a fan of,

DNS over HTTPS specifically for stuff like

this, because if the government starts

doing mass blocking at a DNS level and

you use something like DoH, it makes your

traffic just blend in, and eventually it

kind of, like,

I'm not trying to use language that is

sympathetic to a government for the

record,

but it kind of backs them into a

corner where they just decide to shut off

the internet entirely because they can't

figure out what traffic is going around

the censorship.
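To make that point concrete, here's a toy sketch (my own illustration, not anything from Quad9) of why a port-based DNS censor can't single out DoH. The flows below are made-up examples, not captured traffic.

```python
# Classic DNS rides on UDP port 53 and is trivial to filter out;
# DNS-over-HTTPS (DoH) rides inside ordinary HTTPS on TCP port 443,
# so to a port-based censor it looks exactly like web browsing.

flows = [
    {"dst_port": 53,  "proto": "udp", "label": "classic DNS query"},
    {"dst_port": 443, "proto": "tcp", "label": "normal web browsing"},
    {"dst_port": 443, "proto": "tcp", "label": "DoH query"},
]

def looks_like_dns(flow):
    # All a simple port-based filter can see: is this the DNS port?
    return flow["dst_port"] == 53 and flow["proto"] == "udp"

identifiable = [f["label"] for f in flows if looks_like_dns(f)]
print(identifiable)  # ['classic DNS query']
```

The DoH query blends in with the web traffic, so a censor that wants it gone has to block far more than DNS, which is exactly the "shut it all off" corner described above.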

And I guess he was really ahead of

his time with that prediction because

that's basically what we're looking at

right now.

So yes,

it's a really tricky thing because how

would you, you know, and especially,

I don't know,

I feel like completely disconnecting from

the global internet is a completely

different beast that I don't even know how

we would handle that.

And I guess at that point it's,

I mean,

it's what are you trying to do?

If you're just trying to talk to people

locally, then yeah.

Things like Meshtastic, I think,

I really want to get into that,

but it looks like it would require a

little bit of skill just to kind of

first time dive in, you know,

to figure out the hardware and figure out

the install and the apps and the,

it feels like a bit of a commitment,

but if there's maybe a way to make

those things a little bit more

user-friendly or...

I don't know.

Yeah,

it's a good question because there's

different things you would need in that

situation, right?

I would need to be able to communicate

with my family here in the country,

hypothetically.

But then I would also need to be

able to communicate with the wider

internet and get information,

which here in the US, unfortunately,

we are kind of the wider internet.

But in another country,

that wouldn't be the case.

Or I mean, Proton even.

I wouldn't be able to check my ProtonMail,

so...

Yeah.

I don't know.

It's crazy.

And even Meshtastic,

that puts people in a dangerous situation.

And there's always the possibility that

Russia could, I mean,

both ban the use of it,

but also ban the import of Meshtastic

hardware.

I doubt any of it is being made

domestically in Russia.

And if anything is or could be,

the Russian government could stop that.

Yeah.

and also just using it,

or any sort of radio service,

you can be trivially tracked.

It does have a short range,

so it depends where you are,

but if people from the government go

around and try to track down people

who are using Meshtastic in the future,

they would pretty much be able to find

out who's using it.

So there are concerns there.

I mean, we even talked about in a

previous episode, I think it was in

Belarus if I remember correctly,

ham radio enthusiasts were being accused

of basically being espionage agents

for using their own radio waves to

communicate rather than these

government-sanctioned things.

So it does like any of this amateur

radio stuff does put you in a dangerous

position in a country like this.

And especially if it becomes too

widespread,

it's very easy to imagine that Russia

would take a similar position to the

Internet in general and just blanket ban

it because they don't really need it.

The other reason this can't really be

solved from like a technical perspective

is

Um, like,

I don't think it's something that another

country like the United States or someone

else could kind of reach in and try

to solve for Russian citizens.

Like immediately what might come to mind

is something like Starlink, for example,

providing direct access to the internet,

bypassing, you know,

anything going on in Russia.

But Starlink,

like when that technology is in place,

we see it used for, um,

a lot of different things that the United

States and companies like SpaceX

definitely do not want to promote or

support.

We saw in the war with Ukraine,

for example,

Russian frontline troops were using

Starlink extensively to communicate on the

battlefield.

That's actually the reason SpaceX

does not operate in that region at all

and hasn't for many years.

And bringing it back for Russian citizens

to get around something like this would

just enable that usage of it again,

which they definitely don't want to do.

So it puts Russians in not a great

situation and really the only solution.

like we say for a lot of these

very widespread privacy issues,

whether it's age verification in Western

countries or mass censorship in other

countries, like in this case,

it's more of a social issue that you

have to resolve within your own country.

And hopefully people can fight back

against this there.

Because, I mean,

this should not be acceptable.

So, yeah.

Yeah,

it's definitely something tricky that I

don't know if we're qualified to solve.

But I guess if there's any takeaways on

this one,

it would just be kind of a...

I'm pretty open about having a mild

interest in disaster prep.

And sometimes that gets wrongly

characterized as, you know, worrying

about the end of the world, which I

don't care about. But, you know, just

little things like floods, hurricanes,

tornadoes, earthquakes. And unfortunately,

we are in an incredibly digital world,

so you have to think about outages and

cyber attacks. So I guess, yeah,

if nothing else, this is just

kind of a thought experiment of

if you're listening and you're in a

situation where you don't have to worry

about this yet,

just think about it a little bit.

Like don't lose any sleep over it,

but you know, what,

what would I do in that situation?

And just kind of give that some thought,

I guess.

The other forum post we were going to

look at, this one,

there's probably not too much to say on

this one, but there's a new video,

a YouTuber, this got shared on our forum,

that said, if you ran this debloater,

reinstall your system immediately.

And this is specifically,

so for the Windows users out there,

you know that there's a lot of scripts

that promise to do all kinds of different

things to your system.

Um,

there's a lot of ones that are popular

in the privacy community that promise to

remove a lot of telemetry and stuff.

There's also some that claim to optimize

the graphics and the performance and this,

that, and the other,

there's even entire Windows ISOs.

I want to say it was called AtlasOS.

And if I've got that wrong,

I apologize to those guys.

But there was one that advertised itself

as like a gaming distro.

And it's basically like you install

Windows from scratch using this customized

ISO and it comes pre-optimized for gaming.

But the downside is it turns off Windows

Defender.

which I don't know why you would do

that.

So yeah, I'll be honest,

I didn't watch this specific video,

but basically it was not trustworthy.

I think it may have even come with

actual malware,

but don't quote me on that.

And so this whole thread is basically

talking about these debloaters and

stuff.

And I think the official position of

Privacy Guides is that we don't recommend

them because they are...

They're tricky.

I know there's some that are open source

or source available, I should say.

And if you know code and you're

comfortable doing it, then sure,

you could look through it and make sure

that you verify what it's doing.

But it's definitely very...

A lot of these debloaters,

you're giving them a lot of power over

your Windows system.

And...

If you're going to use one,

you have to be, like,

came-down-off-a-mountain-and-found-a-religion

positive that this thing is trustworthy,

because it would not take much for it to

do something malicious,

whether that's planting malware,

crypto mining, stealing data, whatever.

So, yeah.

Absolutely.

I think I just wanted to take a

moment to mention that.

And it's worth noting this tool in

question is open source as well.

As far as I know,

it doesn't come with malware,

but it basically acts...

the way that malware would.

There's a pinned comment on this video

saying there's a few inaccurate

statements, but overall,

the conclusion of the video is that this

is all implemented poorly.

I do think this is a classic case

of like,

somebody putting something out there

without really fully understanding what it

does.

I don't want to say whether it is

or not,

but I think we're just going to see

this happening more often as more people

try to use AI code tools to solve all

of their problems without really

understanding what they are.

Whether or not that's the case in this

situation,

it's definitely something to look out for

when you're running any sort of scripts

that you don't fully understand.

I think that is absolutely the most

important takeaway here,

that you cannot run any of these

debloating scripts unless you know exactly

what they do and can see how they're

doing it.

At which point you could probably

do it yourself, by the way.

But yeah, all of these scripts like this,

they affect the system so substantially.

And Windows is already such an insecure

and non-private platform in the first

place that it doesn't really make a lot

of sense to me to try and improve

it, especially to this degree,

unfortunately.

There isn't really a ton...

that you can do at the end of

the day to improve your privacy on Windows

because the operating system itself is

going to be constantly fighting against

you.

So that's unfortunate.

It's cool to see more videos from this

YouTuber.

I first...

heard about this person who made this

video calling the other YouTuber who made

the script out because they made a video

about Freely around the same time that I

published a video about Freely.

So they came up in my feed.

And I think we had some overlapping

complaints.

I haven't watched the rest of their

videos,

but I think anybody who is creating

content in the privacy space,

that's always a good thing.

And so if they continue to be brought

up and they continue to post more useful

content like this,

I think that's fantastic.

Yeah, I think about that a lot.

There's...

I think there's still plenty of room in

the privacy space for more voices,

for sure.

Yeah, real quick,

looking through his description on his

video, you're right.

It doesn't look like it installs malware,

but it disables crucial security

components and makes your system severely

vulnerable to malware.

So it also makes your system much more

unstable and prone to corruption and

breaking.

I recommend that anyone who ran this tool

immediately reinstall a fresh copy of

Windows.

So yeah, they're dangerous things.

You've got to make sure they're trusted if

you're going to use them at all.

I also totally hear your argument of like,

it's already so like such a lost cause

that it may be safer for a lot

of people to just not even try and

just stick to like the,

the toggles and the settings and stuff.

So, yeah.

Yeah.

Well,

we've been going for about an hour and

a half here.

We'll probably give a last call for any

questions or comments that people want to

leave on the forum.

I don't think we had any on our

forum post today,

and I'm not sure if I've seen any

in the chat here.

I know there's a bit of a delay

on this live stream between when I'm

saying this and when you'll hear it,

so we'll give people a couple of minutes

if you want to add anything else.

Otherwise,

we'll probably begin to wrap things up.

So this is your final warning,

anyone who's watching and wants to chime

in on any of these stories.

It's always my favorite part of a

livestream, knowing that delay is there.

You say stuff like, hey,

we're going to open the floor,

and now I have to fill time for

a couple minutes and give people time to

hear it and write their questions.

It just feels so awkward, but...

Yeah,

apparently we don't have any questions in

the forum here.

Got one question from Hogan in the chat

here.

There's been a couple of supply chain

attacks recently.

The current best practice is to always

update your apps,

but this opens you up to those attacks.

Does it still make sense to keep apps

updated as recent as possible?

Definitely depends on the app.

I know you have definitely seen this more

prominently in apps that are built with

NPM, probably a lot of web-based apps.

But in general,

I do think it's probably the safer option

to keep your apps up to date versus

not updating them because

Typically,

all of these updates are going to... Well,

not all of them,

because some of them just add features.

But most updates that you see are going

to be patching known vulnerabilities or

vulnerabilities that you already see in

the wild.

And so the potential of a new update

having a zero-day vulnerability that

hasn't been discovered yet is probably a

lot lower than the potential of using code

that

almost certainly has known vulnerabilities

that can be exploited.
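That trade-off can be sketched in a few lines. This is a toy model of my own; the app name and version numbers below are invented, not real advisory data.

```python
# Toy sketch of the update trade-off: an install older than the known
# patched release almost certainly carries published vulnerabilities,
# while a fresh release only *might* carry an undiscovered one.

def parse(version):
    """Turn '2.4.1' into a comparable tuple (2, 4, 1)."""
    return tuple(int(part) for part in version.split("."))

# Hypothetical advisory data: first version with the known fixes.
KNOWN_PATCHED = {"exampleapp": "2.4.1"}

def has_known_vulns(name, installed):
    """True if the installed version predates the known patched one."""
    fixed = KNOWN_PATCHED.get(name)
    return fixed is not None and parse(installed) < parse(fixed)

print(has_known_vulns("exampleapp", "2.3.0"))  # True: stale install
print(has_known_vulns("exampleapp", "2.4.1"))  # False: up to date
```

Real tools like `npm audit` or OS update checkers do essentially this comparison against published advisory databases, just at scale.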

So yeah,

I would definitely recommend keeping apps

up to date.

And especially the lower level you go,

the more important it is.

Keeping your operating system up to date

is super important.

As we saw, and I don't know if we

talked about this on the show,

but recently iOS had a bunch of updates

for zero-day vulnerabilities in Safari and

some other security vulnerabilities which

were not patched at all in the previous

version of iOS.

You had to be on the latest iOS to receive

some of these security updates.

And so it's examples like that where even

a company like Apple,

which is relatively well known to provide

security patches to older versions of

their operating system,

they're almost never doing that super

consistently.

And in that case, they weren't.

And so it's always, I think,

a danger to not be fully up to

date.

It's interesting, kind of related to this,

I just installed an app on my phone

and during the setup process it said you

should disable automatic OS updates

because they don't validate how it works.

And I was like, that's terrible advice.

That was a medical device related app.

So I think they were saying that because

they have to validate how OS updates work,

but it's like that kind of puts all

of the users of this device in danger.

So that's...

Yeah, sometimes you will definitely see

software advice that's at odds with

security advice, but generally, yeah,

keep your stuff up to date.

Yeah, I agree.

I would be interested to see an actual

study on how many supply chain attacks

happen versus how many known

vulnerabilities or zero-days are being

patched. I'd be willing to bet that

the supply chain attacks are more rare

just by raw numbers.

And, you know, something I struggle with

a lot in all areas of life, I forget

what the name of it is, but it's a

logical fallacy. News is news because

it's out of the ordinary, right?

Even the example I like to use is

traffic accidents.

Nobody ever goes on... Tonight at five,

man gets home from office safely without

incident.

Nobody talks about that.

And even traffic accidents are so common

that we don't even

really talk about the accidents that much.

It's usually just like, hey,

traffic's bad because there's an accident.

It's more about the traffic.

But

News is news because it's unusual.

So when we see all these supply chain

attacks, it's because, I'm guessing,

they're still the exception instead of the

norm.

But that said, I hope...

kind of a dark way to look at

it, but I,

I hope we're seeing enough of them that,

um,

companies are starting to wake up and

realize the importance of securing their

supply chain, whatever that may look like.

Um,

and hopefully we will start to see those

go down because if they do become too

common,

it becomes a problem for the companies

too.

Cause think about it:

that's money they have to spend to

regain control,

kick out the person,

try to push out the good code to

fix the bad code.

Um, the reputational damage,

like all of that stuff.

So,

It affects them too.

I don't know if we're at that point

yet, but... Absolutely.

I mean,

to take it to the most extreme example,

right?

Like,

you never would ever see a news article

today about a new vulnerability in Windows

XP or something like that.

But everyone knows you can't be using

Windows XP on the open internet because

it's just so insanely vulnerable to all of

these attacks.

But, like...

We already know you shouldn't be using it.

So if a new attack is discovered,

that's not going to make the news.

And that's going to be the case for,

I think,

a lot of apps that you don't keep

up to date, which is why, in general,

I would still say the updates are super

important.

Yep.

All right.

I guess that's all we got this week.

Okay.

All right.

Well, I think, yeah, we can, I'll just,

I'm going to give the forum thread one

more check unless you just did,

but it looks like there's nothing else.

Yeah,

I've got it open on another window here.

Cool.

Okay, well, thanks everyone for tuning in.

Like usual,

all of the updates from This Week in

Privacy,

we share them on the blog and in

our email newsletter every week,

so you can sign up for that newsletter

or subscribe with your favorite RSS reader

if you want to stay

tuned about new episodes,

and also all of the sources for this

episode.

That's where we post them all,

so if you want links to all the

articles we talked about, check that out.

For people who prefer audio,

we have a podcast available on all podcast

platforms in RSS,

so you can use your own podcast reader.

The recording for this video is also going

to be synced to PeerTube like usual,

so you can watch it outside of YouTube.

Here at Privacy Guides,

we are an impartial nonprofit organization

that is focused on building a strong

privacy advocacy community and delivering

the best digital privacy and consumer

technology rights advice on the internet.

If you want to support our mission,

you can make a donation on our website

at privacyguides.org.

To make a donation,

you can click the red heart icon located

in the top right corner of our website,

or go to privacyguides.org/donate.

You can contribute using standard fiat

currency via debit or credit card,

or opt to donate anonymously using Monero

or with your favorite cryptocurrency.

Becoming a paid member of Privacy Guides

will unlock exclusive perks like early

access to video content,

early access to the show notes for this

show,

and priority during the This Week in

Privacy livestream Q&A.

You'll also get a cool badge on your

profile on the forum and the warm,

fuzzy feeling of supporting independent

media.

Thank you all again for watching,

and we will see you next week.