Did Signal's Founder Create the Most Private AI?
E36

Hello, everybody.

Welcome back to This Week in Privacy.

This is our weekly series where we discuss

the latest updates with what we're working

on in the Privacy Guides community and

this week's top stories in the data

privacy and cybersecurity space.

The stories this week include Moxie

Marlinspike's new AI chatbot,

a new privacy services alliance,

a new advanced Linux malware, and more.

I'm Jonah,

and with me this week is Nate.

How are you doing, Nate?

I'm good.

How are you?

I'm doing great.

Thank you.

For those of you who are just tuning

in for the first time or might not

know,

Privacy Guides is a nonprofit which

researches and shares privacy-related

information,

and we facilitate a community on our forum and Matrix where people can ask questions, discuss, and get advice about staying private online and preserving their digital rights.

So we always kick off the show with

what we've been working on at Privacy

Guides this week,

and I'll hand it over to Nate to

share a bit about what he's been doing

on the video side of things.

Awesome.

Yeah,

so there's been some exciting

developments.

For those of you who didn't know,

we have a new video now out to

the public.

It's loosely based on Em's written article about how privacy is like broccoli.

So if you haven't checked that out yet,

please go check it out.

Very proud of everyone on the team. It's obviously not just me; Jordan edited it, and everybody helped double-check the writing and everything.

So yeah,

we're continuing to work on our video

courses, the smartphone security guide,

and I believe there's still some work

going on.

Well,

there's definitely some work going on on

the threat modeling course.

And I also, just came in, yesterday I recorded a new video about... I should know what this was. I just recorded it and now I'm drawing a blank.

Private browsing.

That was it.

Yeah.

Yesterday I just recorded a new video

about private browsing and that will be

coming out very soon as well.

We have, yeah, we've got a lot going on behind the scenes. It's hard to keep track of all of this. I can share some other updates on our end. Most notably, if you missed it last week, we had some personnel changes. Our wonderful intern Kevin is no longer working with us; his internship ended, so he's moving on to other projects. But it was a great experience working with him. I'm super glad that we got the chance to do that, and I hope to see Kevin stick around on the forum and do some volunteer work, maybe some one-off projects with us in the future.

But of course, that's up to him.

We'll see how that goes.

But I just wanted to, again,

thank Kevin for all of his work and

explain a bit about what the show is

going to look like going forward.

Of course, it's going to be mainly Nate and me hosting this show.

Jordan may be hosting some episodes as

well, like they used to.

But that'll be kind of the roundup for This Week in Privacy going forward.

Besides that,

we published several news articles this week at privacyguides.org/news.

There were Instagram password resets in the news, and RCS updates potentially coming to iOS, which I believe we're going to talk about later in the show.

Threema,

which is a pretty popular end-to-end

encrypted messaging app,

was recently acquired by another private

capital firm.

There are some vulnerabilities in agentic AI browsers.

There's news about Windscribe and their

partnership with some other privacy

services,

which is another topic that we're going to

cover in more detail here in the show.

And an article on the WhisperPair

Bluetooth vulnerability.

So definitely a lot of cool stuff coming

out there.

Freya has been working really hard getting

timely news briefs and news updates out on

our site for people who are interested in

that sort of thing.

So if you want to keep up to date with that, definitely give the privacyguides.org/news site a follow, or follow it on various social media platforms or the news category on our forum, which is where all of these articles are posted.

I also want to highlight an article that

I wrote a year ago.

I know with a lot of political news

going on in the world,

I think it's just always a good time

to maybe give a reminder of this.

We do have an article on smartphone security and keeping your smartphone safe if you are attending any events in person, like protests, or activities like that. If you're an activist, or you're a journalist who needs to be sure that your baseline smartphone security is up to speed, definitely give this article a read. I'll have a link to this in the show notes when we send this out later.

But yeah,

just since the article is a bit old,

I wanted to resurface it here.

I think off the top of my head,

that's all of the major updates we've been

working on at Privacy Guides.

It's been a pretty busy few weeks for

me.

We've been working on a lot of things

behind the scenes.

And if you're a regular member on our

forum,

you may have seen some of the notes from meetings that we've had, and we have some future stuff that's still in the works.

But I think that's kind of everything for

now.

Nate,

if you don't have anything else to add

in terms of Privacy Guides updates,

I think I'll hand this over to you

to share some of our news stories,

maybe our first one.

Yeah, all right. So our headline story this week is about Moxie Marlinspike's new AI chatbot. This has been kind of making waves. For those of you who don't know who Moxie Marlinspike is, you probably at least know his work, which is the Signal messenger. Moxie created Signal, and he passed off the reins to the current president.

Well,

I don't know if he picked her necessarily,

but he passed off the reins.

And the one who has taken over is Meredith Whittaker.

Anyways,

she's doing a great job with Signal.

But he moved on, what was that,

about a year?

No, more than a year ago.

My sense of time is all messed up.

But he's been kind of laying low,

actually.

At least I feel like.

I certainly haven't heard his name in

quite a while.

And now it's popped up with this new chatbot called Confer.

And I believe the link for that is

confer.to if any of you want to check

it out.

But it's very minimal right now.

And what really sets this one apart is

this might be, to my knowledge,

the first really private AI chatbot on the

user side.

And we're going to dig into that in

a little bit.

But...

Things like Lumo, which this article here unfortunately gets wrong at the end, and as far as I know even things like Brave's Leo, though I could be wrong about that one, don't really protect the user. They're basically like a no-logs VPN for LLMs.

They promise they won't keep your logs,

but technically they can access them.

If there was a situation like, famously, years ago, when Proton was forced by law to monitor a certain account and record the IP addresses, because the authorities were trying to figure out who was behind that account. And eventually they were able to do that.

So this is the first time that they're trying to create something more like Signal, where that's not even possible. It's all encrypted from start to finish, for real.

And yeah, I mean, okay. So if you go to the confer.to website and you click on the blog, you can dig in there, and there are some really technical details.

I don't think there's any code or

anything.

I mean, the thing's open source,

so you can go look at the code

if you're that technical.

This Ars Technica article, I think,

does a pretty good job of dumbing it

down.

And basically,

it really relies on what are called

trusted execution environments.

This goes way over my head,

and I'm sure Jonah can probably break it

down a little bit simpler.

But basically, the way they describe it,

they say it prevents even server

administrators from peeking at or

tampering with conversations.

And they talk about how it's designed

where you can access it from different

devices,

and it can synchronize just like Signal.

And I don't know.

It's really cool stuff.

Before I jump into the next part of

this thought, Jonah,

is there anything you want to add?

Yeah, so I don't know.

I'm probably less excited about this

Confer stuff than other people I've seen

in the community.

All of this private AI stuff has been

trending quite a bit lately,

and they especially rely on these trusted execution environments, or TEEs. Confer is no different; for example, if anyone wants to look into this later, we've talked a lot about Maple AI on the forum, which uses similar technology to validate, in theory, the security of how these AI models are being run.

And basically, what these trusted execution environments are, are features built into the CPUs of the servers that these models are running on, where they can run a certain set of code and the hardware can validate, in theory, that the code hasn't been changed or modified.

So if an AI service like Confer, for example, is releasing an open-source model and they're saying this code is what's running on here, the hardware should, in theory, ensure that they're not swapping out that code behind the scenes, and that might protect you.
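To make that attestation idea a bit more concrete, here is a minimal Python sketch of the kind of check a client could perform. Everything in it is hypothetical and simplified for illustration: real TEEs such as Intel SGX or AMD SEV-SNP return a quote signed by the CPU vendor's keys rather than a bare hash, and the function names and pinned measurement below are made up.

```python
import hashlib

# Hypothetical pinned measurement of the open-source build we audited.
# In a real TEE flow this would be verified against a quote signed by
# the CPU vendor's attestation keys, not a bare hash comparison.
EXPECTED_MEASUREMENT = hashlib.sha256(b"audited-enclave-build-v1").hexdigest()

def attest(reported_code: bytes) -> bool:
    """Return True only if the server's reported code matches the pinned build."""
    return hashlib.sha256(reported_code).hexdigest() == EXPECTED_MEASUREMENT

def send_prompt(prompt: str, reported_code: bytes) -> str:
    # The client refuses to talk to an enclave whose measurement has changed.
    if not attest(reported_code):
        raise RuntimeError("attestation failed: server code was modified")
    return f"sent: {prompt}"

print(send_prompt("hello", b"audited-enclave-build-v1"))  # measurement matches
print(attest(b"tampered-build"))                          # prints False
```

The point of the sketch is only the shape of the guarantee: the client gets to compare what the server claims to be running against a known build before handing over any data.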

The problem with TEEs is that there are quite a few limitations, and they weren't really designed to protect against the operator of the server potentially being a physical attacker here.

And so we've seen security experts like Matthew Green, for example, on Twitter caution, not against the use, but against relying too much on TEEs for the security of AI models, because they can't provide the same type of guarantees that full end-to-end encryption can provide. End-to-end encryption is simple math, whereas this is a bit more policy-based.

It's certainly a step better than,

you know,

them just saying they're not going to look

at it,

but not really doing anything about it.

But it's nowhere near the guarantees that

you're going to get from end to end

encryption.

And so it's tricky to say that any of these are truly private, because while some AI models, like Lumo or this one or others, will use end-to-end encryption for the storage of your chat logs, they don't at all use end-to-end encryption on the chat as you're chatting with it. And they can theoretically read or log that chat at that time.

And there isn't really any way around this

because these AI servers,

they need access to your data in order

to run the AI query on it.
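As a toy illustration of that gap (the cipher here is a deliberately fake stand-in, not real cryptography, and all the names are invented): a provider can store your chat history under a key only the client holds, but the live prompt still has to reach the model as plaintext at query time.

```python
import hashlib
from itertools import count

def toy_keystream(key: bytes):
    # Fake stand-in for a real cipher (never use this for actual security):
    # derive an endless byte stream from the key with SHA-256 counters.
    for i in count():
        yield from hashlib.sha256(key + i.to_bytes(8, "big")).digest()

def toy_encrypt(key: bytes, data: bytes) -> bytes:
    # XOR with the keystream; applying it twice recovers the plaintext.
    return bytes(b ^ k for b, k in zip(data, toy_keystream(key)))

# The client holds the key, so the provider only ever stores ciphertext...
client_key = b"held-only-on-the-client"
stored_log = toy_encrypt(client_key, b"yesterday's chat history")

# ...but to answer a live prompt, the model needs the plaintext, so the
# provider (or its TEE) necessarily sees it at query time.
def run_inference(plaintext_prompt: str) -> str:
    # Whatever runs here can read the prompt; storage encryption doesn't help.
    return f"model saw: {plaintext_prompt}"

print(run_inference("is this mole something to worry about?"))
print(toy_encrypt(client_key, stored_log))  # only the client key decrypts this
```

Encrypting the stored history changes nothing about what the model operator could observe during the conversation itself, which is exactly the distinction being drawn here.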

And so that's why I think all of these cloud AI models probably don't reach the level of privacy that a lot of people are going to want from AI, especially for some of the things that people are using this for.

I think we're going to maybe talk about

this in a bit,

but I know there's a ton of stories

about how AI is starting to be used

for health stuff and personal,

very sensitive information.

And these protections, in my opinion, don't really go far enough to protect that data, unfortunately.

That's kind of how I would describe the

technical side of things.

I know there's a lot that we want

to talk about that isn't necessarily

technical,

but more just privacy concerns with AI in

general,

beyond the pure security of the system.

But yeah, I definitely want to draw the distinction between TEEs, which are at least better than nothing in this case, versus end-to-end encryption, because end-to-end encryption is a whole different ballgame, it's much better security, and AI is not providing that in any of these cases.

Cool, yeah, and thank you for clarifying what I meant when I said that most others, like Lumo, are more like a no-logs VPN, in the sense that they can read your chats, they just promise not to store them. And they do store them in a way where they can't read them, but while you're having that chat, they totally could.

And that's what this is supposed to do

differently.

But yeah, because I'm excited to get into this, let's go ahead and talk about what you mentioned towards the end there: that this is all fine and nice from a technical perspective, potentially, but it doesn't solve a lot of the privacy concerns with AI. And, man, this is really good.

So just to kind of give listeners a

tiny peek behind the curtain,

we have a weekly meeting,

a staff meeting at Privacy Guides.

And part of that discussion is always what

story do we think would make the best

headline story?

And when we talked about this,

our staff member, Em, had a lot to say, really. She didn't just make a point; she gave a really good talk, I guess you could call it that.

But anyways, she pointed out something that we never really talk about, and I'm sure some of you have thought about this, but I personally never have: AI, at least in its current form, really can't be made private. Maybe it can from the end user's side, but that doesn't account for the data and how the data is scraped.

I think a lot of us know that anything you put online, you should treat as being public. At least, that's something I've always said in the past: anything you post online, be prepared for it to be breached or anything like that.

But even so,

that doesn't make it right for these

companies to go around scraping all this

data without consent.

And I do want to acknowledge that there's at least one AI, or, well, I guess you could call it a training set, that they're trying to create drawn from consenting users. But at this time, there's really no mechanism like that. I think Creative Commons is currently in talks to create a flag where basically you can say, yes, I consent to let AI train on this data. But certainly ChatGPT, Anthropic, and whatever some of the other ones out there are, I don't think any of them have ever respected that, because they existed before that was a thing. And we already know that there are so many stories about, like,

Copyrighted material, the Harry Potter books, the New York Times currently in a lawsuit. There are so many stories where we know that this is what's happening: they're training on data that is not licensed for free use, that is not consented to.

And yeah,

that's a trickier thing that I don't think

can be solved in a technical way.

Any thoughts on that one, Jonah?

Yeah,

there's so many concerns with the privacy,

and I totally agree with what Em had

brought up in our meeting and on Mastodon.

A lot of these things that are being built right now are just being used in very un-private ways, and building them in the first place requires harming privacy, exactly like you said.

And so it's very concerning what's going

on with AI here.

Yeah,

I think she summed it up pretty well.

So I'm not sure if I have anything

else to add beyond that off the top

of my head.

Sorry,

I was just looking at the chat here,

seeing if something was coming up.

So I was a bit distracted.

But yeah,

I think that's kind of where I'm at

with that.

Yeah,

we're trying something new on the back end

here, and there's a lot of moving parts.

Yeah, let us know.

Definitely, if you're watching,

leave a comment how the stream is going

in terms of quality.

Or if you're chatting on a different

platform,

let us know and we can see if

that shows up.

Because, yeah,

we have a bit of a different setup

than usual.

But, yeah, going back to AI, I don't know. I feel like there's not much to add to what Em said. But to me, again, it was really enlightening, because I've never thought of that before.

And it feels so... because I've seen some people say, particularly in the Mastodon post that Em sent us, where she kind of put all these thoughts into words, someone argued, like I said at the top: well, you post anything online and you know it's going to be public. And I think this kind of falls under...

A long time ago,

I wrote a blog post where I made

the argument that there's a difference

between an expectation of privacy and an

expectation to be stalked.

And, you know,

I've been in situations where because I

have, you know,

very visible arm tattoos for those who've

never seen my arms before.

And I usually wear short sleeve shirts,

especially in the warmer months.

And I've had many times where, like, there was one time I went to the store, and as I was leaving the store, a friend texted me and was like, hey, I just saw you at the store.

And I'm just like,

How did they know it was me?

And this was like during COVID.

So like everybody had a mask on.

I'm like,

how did they know it was me?

And then I'm like, oh, right.

Duh.

But and, you know,

I'm not mad about that.

Right.

But the difference is that friend didn't

then immediately like get in the car and

follow me around town as I ran errands

or went back home or whatever I did.

And it feels different when, you know,

you post something publicly and sure,

somebody might see it.

Somebody might get upset.

versus that might get scraped up.

That might get added to a data training

set.

I personally, like many people, have disappearing messages enabled on Mastodon because I want them to go away. And if they get caught up in a training data set, then they never go away. They're there forever.

And it's just such an interesting

perspective that I never considered.

Because when we look at these new

technologies,

I think there's two kinds of problems.

There's technical problems and then

there's,

I don't know what I would call them,

but there's all the other problems.

And technical problems to me are things

like the energy usage.

And for the record,

I am not trying to downplay this,

but the energy usage of AI is really

bad for the environment, right?

That's a technical problem.

As we go, I think,

not to sound overly optimistic,

but I think we'll learn how to make

more energy-efficient data centers.

We'll learn how to maybe switch to

renewable energies, maybe someday,

eventually.

And all these things that can reduce the

environmental impact of AI.

But then there's the other things, like what we were just talking about: the fact that all this private data was taken, the fact that this is potentially putting people out of work. These are much harder things to solve. And maybe there is a solution, if you want to be an optimist, but I think starting this discussion is certainly part of finding that solution, for sure, because nobody's really talking about that side of things.

I don't know if I said this, but we usually think of AI privacy from the end user's side: my prompts, the responses, that stuff. But I feel like we don't talk enough about the training data, in the sense of the privacy invasion from it.

So yeah, it's really interesting stuff.

Absolutely.

And beyond just the training aspect of it, I think AI has definitely shown that privacy and protecting your data online have never been more important, because we could talk about this other story that we have, reported by 404 Media, about the AI sexual abuse material that Grok is generating on Twitter, or X, right now.

And I think it also shows that you have to be careful with the data and images that you're posting online, especially personal stuff, because now people can basically use these models.

They're getting better and better at

creating very photorealistic things,

and they can take innocuous images or just

selfies or any images of people and turn

it into stuff that you probably don't want

to see or don't want on the internet.

And certain people are going to be more

affected by this than others.

But I think that...

You know,

AI is creating a very dangerous

environment right now where any data that

you put out can potentially be misused or

blown far out of proportion,

far beyond what you originally intended

when you were maybe making a post.

And I think it's just, I don't know.

AI, I think,

has created a lot of terrible situations

all around in terms of privacy, for sure,

and in terms of safety on the internet.

And I don't really see a way that

it's ever going to be rectified.

There aren't a lot of great solutions

here.

And so it makes me very hesitant to recommend or use AI in any capacity, because it's really creating a monster that we don't know how to control. And I don't think it's a good idea for people to become reliant on tools like this and use it in their everyday lives for all sorts of things, because eventually it's either going to create these dangerous situations, or it's going to have to be reined in by these tech companies.

And then you're in a situation where,

you know,

tech companies are kind of censoring the

stuff that you can create online.

There's always a censorship problem,

I think,

with a lot of these centralized services

and big tech services where outsourcing

all of your control to these centralized

cloud providers instead of trying to do

everything yourself puts you in a very bad

situation.

We see it all the time in other

industries,

and it's something that I think we can

catch right away and try to avoid going

forward.

I don't know what the general vibe for

AI is among the general population right

now.

I don't think AI is a huge thing

for most people outside of the tech

sphere.

And I think most people are rejecting AI right now, which is probably a good thing, because it just seems to be creating a lot more harm than good in so many different ways.

Yeah, that's fair. And to be clear, when I kind of take a potentially optimistic approach,

I'm not necessarily trying to be pro AI.

I'm just trying to be fair, trying to point something out. And this is definitely not a one-to-one; it may even be a disingenuous comparison, but

One comparison I keep hearing people make

is newspapers.

When newspapers first went to mass print,

we had a huge problem with misinformation

and disinformation and what's now called

yellow journalism,

which is – we still see to an

extent the super sensational just making

things up out of thin air because it's

scandalous and it sells.

I think now we call it clickbait. But that was eventually something that we were able to mostly figure out by creating legislation and having these very strict libel laws and things like that.

And to be fair, laws are retroactive,

right?

Like laws work after the fact,

after somebody has already been harmed.

So I'm not saying that's a perfect

solution,

but I think we can agree that in

the end,

newspapers ended up being better than they

were.

So that's kind of the lens I'm trying to look at this through: which of these problems are technical problems that can be solved, and which are the others? But you're right. Going back to the whole training data thing.

Okay.

Let's say we created a system where people have to opt in to have their data scraped for AI training, whatever that looks like, whether it comes with, what's the word I'm looking for, compensation or whatever.

Would that be enough?

Because what makes AI so, quote-unquote, good or effective is that it just has obnoxious amounts of training data.

And I would be surprised if enough people

opted in to really make AI as effective

as it is now.

And again, to be clear,

I'm not saying like, oh,

people should opt in.

Like, no,

it's your content that you're creating.

Do whatever you want with it.

But it's kind of backing up what you were saying, Jonah, about how we may be facing something that there may be no way to use privately.

And I think that is,

I agree with you.

I think that's really good that people are

opting not to use this,

but I do worry for people.

We had at least one person in the forum who said that they're in a field where it's becoming increasingly difficult to navigate without having AI on your resume. I don't even know what businesses are using it for, but apparently everybody wants you to be able to learn AI, whatever that even means. I don't know. So I guess, know how the prompts work?

I really don't know.

But if you're in a field like that and you're someone who says, I don't like to use AI, or even if you just don't like it, don't see a use for it, don't see the value... I've tinkered with AI since the early days, and

There's a few things it does really well,

but overall,

I don't see how it became this trillion

dollar industry that's propping up the

entire US economy.

It's just not that, for me at least,

it doesn't do that much.

So if I were in one of those

fields where they're like, well,

how often do you use AI?

Hardly ever because it just doesn't – I

don't have a use for it.

It doesn't do anything for me.

And anyways, my point, what I'm trying to get to, is that it's unfortunate that some people are in a position where they're now stuck, where they have to show that they know how to use AI. And I don't know.

It's like phones,

like phones are not really private, right?

And it's hard to make them private.

And some people can afford to not have

a phone.

They can work in a field or they

can be self-employed where they don't have

a phone,

but not everybody has that luxury.

And it's really unfortunate people are

being put in that situation.

Yeah,

I think you bring up a really good

point,

especially with the whole idea of having to opt in or consensually share your data with AI.

This is something that really bothers me about the current AI landscape, actually, because I think these tech companies have kind of created this situation: now that they've created the problem, and all of these problems that AI causes, they're trying to sell you on various solutions to take control of it after the fact. And these problems wouldn't exist without all of the AI that's being pushed on consumers by these tech companies, fairly irresponsibly. I mean, we were just taking a look at YouTube Studio the other day, looking at some of their likeness detection features, and how that's going to require me to scan my face and send them my ID if I want to monitor YouTube for people who are potentially creating AI-generated videos of me, for example.

And this is not a position that I think people should be put in in the first place, because it's just yet another thing in a long string of events where tech companies create these problems as an excuse to try and get more and more of your data.

And now I have to share even more

data with Google that they didn't

necessarily have before because of the AI

problems that they've created.

And I'm sure that this is going to

be commonplace

On other platforms, if it isn't already,

it'll be coming soon.

And I'm sure there's not going to be

a single way to opt out of it

everywhere across the internet because

there's just no coordination like that.

And there isn't really a great way to

do it privately.

And so by just accepting AI and kind

of normalizing all of this,

that's just kind of the society that we're

creating here.

We're just losing the ability to control

who has access to our data and who

benefits from it.

And unfortunately, at this point, it seems pretty clear that the only people who are really benefiting from having all of our data are these big tech companies. So I don't know. It's ridiculous, I think.

Yeah, for sure.

The normalization, that's a huge problem with privacy, right? It's so normal to use these tools that, as privacy advocates, it sometimes can... I don't know.

I'm sure a lot of us have been

in that situation where it's like, oh,

I don't have Facebook.

What do you mean you don't have Facebook?

And they're like... I don't know.

Usually when I say that,

people are just like, whoa,

that sounds awesome.

And I'm like, yeah, just delete it.

It's not that hard.

But I also know some people have just

been met with like, you know,

they're isolated now because it's like,

oh, well, you're not on Facebook.

So I didn't send you an event invite

because apparently you don't exist

anymore.

So yeah, when you used that word, normalization, that really jumped out at me. That's such a problem with a lot of this privacy-invasive tech: it becomes normalized.

I don't have any more thoughts to add

to that one.

Do you have anything to add before we

move on to our next story?

I think that kind of covers all the

stuff I was thinking about with AI.

So we can take a look at our

next post here.

This comes from the Windscribe blog,

actually.

The headline is, Windscribe partners with Kagi, Notesnook, Addy.io, and Ente to create a privacy-focused alliance.

And so basically,

what Windscribe has done is they've

partnered with all the services I've just

named to give people kind of exclusive

discounts or deals on all those services

if you're a Windscribe user.

And I don't remember if we've talked about this in a previous episode, you can remind me, but I know Ente has done a similar thing with other services in the past. And it looks like Windscribe is kind of joining in on that initiative.

So I think it's pretty cool what they're

doing.

I guess the question that we would

probably want to talk about is how do

we feel about these privacy alliances?

Do you have any opinions?

I have a couple of things to say

for sure,

but I can let you go first.

I got to be honest,

I think this is the first one I've

seen,

or at least the first one on this

scale, for sure.

I don't really have too much of an

issue with it personally.

I think the thing that disappoints me is

that a lot of these are like Addy.io,

Addy.io,

twenty five percent off the first year.

Same thing with NT,

twenty five percent off for the first

twelve months.

I think.

I don't know.

Maybe it's just me being cheap,

but I'm one of those people that if

I'm going to sign up for a discount,

I would like to continue to have that

discount.

But I mean,

at least they're being upfront about it.

But I don't know.

I think...

I don't have too many issues with it, because I think Windscribe's logic makes sense. If you read the blog post that they put out, they ask: why a privacy-focused partnership instead of just building a suite in-house? And their answer is basically compartmentalization. If you compartmentalize, then if any one of these services goes away or becomes compromised or what have you, it's just that one service. It's not across the board. It's not your entire account.

And I think they make a really good

point there.

And also one thing they didn't say,

but one thing I've historically said that

I really believe that is that when you

try to do everything,

usually you end up doing everything kind

of poorly.

So I would definitely prefer like

Windscribe.

We're going to focus on our VPN and

we're going to make a really good VPN

and we're going to let Ente handle the

photo storage.

We're going to let Kagi handle

the search.

And, you know,

I've heard really good things about

Kagi.

I still haven't used it myself,

but I've used Ente,

and I'm very happy with it.

So yeah,

it's kind of nice to see that

specialization there.

The only thing I can think of that might

not be great is that a lot of,

quote-unquote, normie users really want

the ecosystem, right?

That's one of the amazing things about

Google, right?

You get an email that says, hey,

let's have lunch on the...,

and it automatically asks,

do you want to add this to your calendar?

Or it used to ask.

I think now it just does it,

but I don't use Google anymore, so I

have no idea.

And it adds it to your calendar.

And then when you send an email and

you add an attachment that's too big,

it's like, oh,

do you want to just one click,

add this to Google Drive and send it

that way?

And they make it so seamless and

everything works together.

And so that is kind of the argument

for things like Proton, for example.

If you don't use them, that's fine.

You don't have to.

But it's a really compelling alternative

for people who want that ecosystem.

And that's kind of the only thing I

could see getting in the way from my

perspective is some people may say like,

well,

why would I sign up for six different

services when I could

just go somewhere else and get it all

at once.

But yeah,

I think those are kind of all my

thoughts.

Absolutely, I totally agree.

Well, I'll go through a

couple of your points.

The first thing you mentioned, that some

of these discounts aren't lifetime plans,

I think is really unfortunate,

because I do think that the big draw

for this for a lot of people

would be to escape an ecosystem like

Proton.

I understand all your points

about the ecosystem,

and definitely a lot of people are

into that sort of thing.

And I definitely use a lot of Proton

services myself personally,

but I also know a lot of people

who don't want to put all of their

eggs in one basket and they don't want

to use ProtonMail and ProtonDrive and

ProtonVPN, right?

And they would rather trust

individually vetted services.

And I think there's also something to be

said about companies that really just

specialize in doing one thing and one

thing

really well,

like Kagi with search or Ente with photos,

for example.

I think all of these privacy services and

companies still exist in a pretty niche

market,

and I'm glad that more people are

becoming concerned about the security and

privacy of their data,

and they're switching to these services.

But there's, you know,

there's still a lot of growth to be

had in this sector.

And I think that prevents a lot of

companies from growing super big at the

moment.

Proton, I think,

is a good exception.

But I know

people have a lot of complaints about how

Proton is slow to add new features or

they're not integrating all of their

products properly or that sort of thing.

And it's true.

And I think it's just really hard to

build like a full ecosystem right off the

bat.

And if you could have all of these

separate teams that are much more

streamlined,

they don't have to worry about integration

as much.

They can just focus on their own features.

Like Windscribe can just focus on trying

to be the best VPN they can be,

for example,

and they can leave

like cloud storage and search and photo

storage to these other companies.

You know,

I think that's really beneficial and would

help a lot of companies.

But if you're only giving away like trials

or limited time discounts,

it's not going to be very compelling just

from a cost perspective, unfortunately.

I'm not sure if there's a super great

way for a coalition of these companies to

work together on something like that,

or if there's an opportunity for someone

to sell bundles.

Because at the end of the day,

especially with most modern payment

systems right now, you have to...

have like one central company that's in

charge of billing,

and that's obviously going to give that

company, whoever it is,

a lot of power over the other companies

in this coalition, right?

And so I'm sure there's probably some

cryptocurrency solution to this where

everything could be decentralized and

split up,

but not everyone is going to

pay in cryptocurrency.

But I would maybe want to see a

solution that's more integrated than this,

where it's not just like exclusive

discount codes,

but maybe it's a bundle where you could

sign up with any of these services and

get billed through the service of your

choice.

And that might decentralize it a bit,

but then you get access to all of

these other services for the lifetime of

the bundle.

And I think that that would be a

lot more compelling for a lot of people

who are switching from something like

Proton,

that kind of

includes many of the things that are being

sold here.

How that would work exactly, again,

I don't know.

But I think that that would be the

biggest draw for a bundle of privacy

products like this.

And it's kind of a shame that

They're not doing that right now,

but maybe they'll go in that direction.

For now at least,

I guess if you're a Windscribe user,

this is a pretty good opportunity to use

some of the services that we recommend.

We haven't recommended or evaluated all of

the services,

including Windscribe itself, on Privacy

Guides.

I know there's a lot of discussions on

our forum if people are interested in

learning about pretty much all of these.

But yeah, I think...

that would be the direction that I would

want to see something like this go in.

And we'll see if that happens.

Yeah.

And just to back up what you were

saying,

I totally get the appeal.

The compartmentalization is the

appeal for some people, right?

Like you were saying, like, sure,

there are a lot of people who want

the ecosystem,

but there's also a lot of people who

want whatever the best thing is.

You know,

if you think Mullvad is better than

Proton and you would rather use Mullvad

and kind of

mix and match.

I think that's great.

And I do agree with Windscribe that that

is certainly more secure from a

compartmentalization perspective.

But yeah, I'm with you.

I think really,

even if you want these disparate services,

it would be really cool if there was

some kind of... I don't know.

This feels to me a lot like the

concept of sister cities,

which I've never really understood.

I swear to God,

I've lived in places where the sister

city is in, like, Russia.

And I'm like, how?

I'm in Texas.

What's going on?

And you know,

it really means nothing.

It's just some kind of like cooperation.

They're like, Oh,

maybe you should check this place out.

We've, you know, shook hands or whatever.

And, you know,

I don't mean to downplay this to that

extent,

but it feels very similar that it's kind

of like, well,

we just got together and agreed we all

like each other and we're really cool.

And they are really cool services.

Again, I want to stress that.

It would be nice to see something that's

a little bit more cohesive, I guess,

or benefits the user a little more other

than just some kind of like temporary,

which again,

I think some of them are like permanent,

right?

I think at least one of them was,

I'm trying to pull the page back up

here.

Notesnook was a permanent discount.

As far as I'm seeing.

And I think there was one other one.

There's not any, well, Control D,

but that's also run by Windscribe, so.

Oh, okay,

that's the one I was thinking of.

Yeah, fifty percent off Control D.

So yeah, a lifetime discount,

that's dope.

So it would be cool to see something a

little bit more cohesive like that.

But I don't know,

at the same time,

that could be a really cool...

I'm trying so hard to get a lot of

my family members to try Ente,

because most of them are just in Google

Photos or Apple Photos,

whatever phone they're using.

Yeah.

So maybe, I mean, if nothing else,

maybe this could be a nice like, hey,

here's twenty five percent off.

Give it a shot.

So totally.

I really like your sister cities analogy,

actually,

because that is kind of what what this

is.

I know.

I think all of those sister city things

are like with international cities and

there isn't much like true connection

between them.

And that is sort of what this feels

like at the moment.

Like, it's a lot of disparate services

where, you know,

there's cross-promotional stuff going

on.

There's

limited time discounts,

but there isn't a true partnership or

working together on something extremely

cohesive.

It's just awareness.

Windscribe is probably for the most part

just making people aware of these services

more than providing an actual long-term

value for their users.

But I think awareness of other privacy

companies is certainly a good thing.

So I'm not going to knock it for

that,

but I don't think

in its current iteration,

this is going to create a great

alternative for someone coming from

something like Proton, for example.

But yeah,

if you were going to try out some

or all of these services anyways,

especially because you can pick and

choose,

it's not like some bundles where you have

to register for everything and then you

might not even want to use some of

these things.

You know, it's not that serious.

So yeah,

I think it's a cool opportunity for them.

I always like to see privacy companies

work together on this sort of thing

rather than, you know,

constantly compete with each other,

especially at times when it doesn't make

any sense to be competing with each

other at all.

So yeah, I think it's cool.

Yeah, for sure.

Man, you said one last thing I wanted

to touch on, but it got away from me.

So, all right.

Yeah.

I think I would just ask you,

maybe we didn't cover this.

Do you have any thoughts about like any

of these companies in particular?

How many of these have you used?

Because I know not all of these are

even recommended on our site, for example.

But I know they've been talked about a

lot on the forum.

Well, actually, real quick,

my thought just came back to me.

I was going to say,

you said this is kind of Windscribe just

like kind of bringing awareness of these

companies.

To their defense,

that can be used because, you know,

back when I was on Surveillance Report,

we took a sponsor, and our first sponsor

was JMP Chat, the voice-over-IP service.

And I thought for sure that I was

like, oh, everybody knows JMP Chat.

And I was floored how many people left

comments like, oh,

I've never heard of this before.

This is really cool.

And I'm like,

Really?

And I don't mean that in a bad

way.

Like, really?

But I was like, really?

That many people have never heard of this.

So I'm sure even Windscribe probably has

tons of people that are like, oh,

I've never heard of Kagi.

I've never heard of Addy.

So...

But in answer to your question,

I use Ente.

I'm kind of in this weird space where

I'm halfway between Ente and

Nextcloud,

and I'm not sure which one I want

to commit to, to be totally honest.

I like Nextcloud, but I'm debating.

The encryption in Nextcloud is still not

great,

so it's like I could have all my

photos end-to-end encrypted,

but then they don't integrate,

but then how much do I use the

integration?

So anyways...

I've used Addy.io in the past.

I've tinkered with Kagi a little bit.

I haven't really used it personally.

I've used it to test it out,

but I've never used it in my

day-to-day to see how it would

integrate into my workflow.

And I've looked into Notesnook.

One of these days,

I actually want to do a video about

privacy respecting alternatives to things

like Notion,

which Notion is already not terrible,

but there's so many alternatives,

like Obsidian, Notesnook.

There's one called AnyType, I think it is.

So yeah, Notesnook,

I looked into it a little bit as

a potential note alternative,

but I haven't actually used it myself.

How about you?

Do you have any experience with any of

these?

Um, yeah,

I'm also in the boat of maybe switching

to Ente.

Um,

but I haven't really like fully committed

to any of these photo backup platforms

myself yet.

Um,

otherwise I don't really use a lot of

these.

I do need to get better at, um,

note-taking, and maybe Notesnook would be

a good solution.

Maybe that'll be my

New Year's resolution this year, and

I'll have to report back on what I

ended up doing.

Makes sense.

Yeah.

I've, I've been pretty happy with, um,

Addy, like all the ones I've used.

It's not like I stopped using them

because, oh,

they had this big problem.

But, uh,

I don't know how many of them would

qualify to be listed on,

on privacy guides.

I know we have some really strict

standards, but for me, it was just,

I found other things that integrated with

my needs better or, you know,

my workflow or they were a little bit

cheaper or something, but.

Yeah, like I said before,

for better or for worse,

I'm kind of in the Proton ecosystem right

now.

And I'm thinking about changing it,

but I haven't yet.

So that's kind of where I'm at.

Fair enough.

I will admit I'm one of those people

that's constantly like,

I'll have a workflow that works.

Like let's say Nextcloud, right?

Let's say I'm all in on Nextcloud.

And then I'll have that moment where I'm

like,

but it's not really end-to-end encrypted.

So what if I did replace the notes

and then I go back to this system

where everything's like,

I've got my notes here and I've got

my photos here and I've got this here

and this here.

And then I'm like, yeah,

but I really miss Nextcloud.

I'm constantly trying different things and

going back and forth and it's awful.

I don't know why I'm like this.

All right.

Okay.

If that's all we have on that topic,

first of all,

I've been asked to let you guys know,

as a reminder,

that you can get this bottle on

shop.privacyguides.org.

Our next story,

we are going to talk about messaging a

little bit.

We're going to talk a little bit about

RCS and iMessage later,

but first we're going to talk about

Threema, which is an encrypted messenger.

It is not recommended by Privacy Guides.

I think historically,

I think they've added forward secrecy now,

but in the past they were missing it.

And I think there's a few,

maybe a few other shortcomings,

but it's not the worst messenger in my

opinion.

And yeah,

Well, for now,

it's not the worst messenger because they

have just been acquired by a venture

capital firm.

And this is called, what is it?

I'm probably going to mispronounce this.

Comatose Capital, or Comatose, maybe?

I'm not sure,

but I believe they are a German company.

Somebody mentioned in the group chat

that this is actually not the first time

Threema's been acquired by a company

like this.

So

I think it was about five years ago.

They were acquired by another private

equity company.

So this is just a second private equity

acquisition,

but it has been kind of the case

for a while that they were owned by

this.

They weren't their own company.

Which on the one hand,

I could see that as an argument for

maybe this won't really affect the quality

of the product at all because they've

already kind of...

Although I don't want to take shots at

Threema because I think anybody who's

trying to make privacy and security...

is doing a good thing,

but I do have to be honest that

they are severely lacking on a lot of

basic features that other messengers

already have.

Um,

So yeah,

it's not the most feature rich platform

and it costs money for those who didn't

know.

It's five dollars one time for the

individual.

Like a lot of companies,

they have like an individual arm and they

have like a business-to-business arm.

I think the B-to-B one is

like a subscription.

But if you're just an individual user,

it's five bucks one time.

And that's a hard sell when I could

go download Signal, SimpleX, Session,

pretty much any of them.

So yeah, that's awesome.

It's already a hard sell to get people

to use it.

And like I said,

it is missing a few of the more

advanced security features that we've

come to expect out of things like Signal,

like perfect forward secrecy.

But yeah, I mean...

I don't know.

Do you want to talk about why?

The price was always the thing that was

holding back Threema, I think,

from gaining widespread recognition or

recommendations in our community and on

our site.

I think even one of our criteria right

now, which are, of course,

always subject to change if the

community feels otherwise,

but I think we settled on only

recommending free messengers.

Because I think while a lot of people

in our community are willing to pay for

more private and more secure services,

with something like a messenger or social

network or something along those lines,

there is a network effect.

And the reality is you are going to want

to communicate with people who don't care

about privacy and security and aren't

going to pay for a messenger like

this.

And so it was a very niche

use case where Threema would make

sense compared to something like Signal.

Or especially Simplex,

which doesn't even require a phone number.

But even in Signal's case,

I think most people have phone numbers and

most people expect that's a way to text

people on your phone.

And so slotting in Signal to replace those

messages makes a lot of sense for people.

It was definitely argued to me in the

past that Threema makes sense for people

who don't have phone numbers.

To acquire a phone number to use Signal,

for example,

probably costs more than the five dollars

that Threema costs.

You could argue that Threema is actually

cheaper than Signal from that perspective.

But I think the reality is most people

do have phone numbers and most people are

looking for free messengers.

And especially with the introduction of

completely free ones like SimpleX,

it was just challenging to recommend.

With a messenger specifically,

if we want to improve privacy in the

space overall,

we need to be promoting services that...

Everyone can use,

and you can get your entire network on

because that improves the baseline

security and privacy for everyone.

Whereas with Threema,

if it's only people who are willing to

pay for it,

you're only going to get people who

already care about privacy.

It's a bit like preaching to the choir,

I think.

I've never used Threema for this reason.

I think there's easier ways to reach me.

I think you've mentioned that you've used

Threema in the past.

I don't know if you want to share

a bit about your experience with that.

Yeah, I mean,

I don't have much to share because I

haven't used it a lot.

I think I've only run into like one

or two other people who use it.

And honestly, even those people,

like after a couple months of chatting on

and off,

and they're not people I chat with every

day.

It's, you know,

people who are in the privacy community

who are basically like, oh, I have

Threema, I'll help you test it out.

And, you know,

we'd chat like a

couple of times a month or something.

And,

and then after like four to six months,

they're just like, yeah,

I'm just gonna like go all in on

signal because that's where most of the

people I talk to are.

So I'm going to stop using this.

And, um, like I said,

it's already missing so many things that,

you know, Signal has,

like emoji reactions.

You're very limited in the emoji

reactions you can use on Threema.

I think they do have polls now,

but I think they're very limited polls.

It's just, I don't know.

It's just,

it's not as good of an experience.

And again, I hate to say that.

Cause I think, you know,

anybody who's trying to further privacy

and security, I think that's great,

but it's just,

it hasn't really been the best experience.

And going back to the payment thing,

I agree with you.

Like people are just so, and again,

I see the argument from both sides,

because on the one hand,

we shouldn't be conditioned to expect

things for free, right?

If it's free, you are the product,

which isn't totally true,

but it's a great shorthand.

And when we have so many free services,

nine out of ten times,

they're selling our data or they're doing

something like that,

something shady to monetize.

But on the other hand,

And it's good also that Threema has like

a business model, right?

Like you pay for the product the same

way you would anything else.

But at the same time,

that is such a hard sell to like

try and get, you know,

I always use my family as an example,

but to try and get my sister to

like, hey,

you should switch to Threema when,

you know, you have to pay for it.

It's missing a lot of features.

It's,

not the prettiest UI.

And this is coming from me.

I'm the kind of person that normally

doesn't even care what the UI looks like.

I have Qubes right in front of me

right now, which is true.

So it's just a really hard sell,

unfortunately.

I agree with you though.

I think if I were calling the

shots,

I would tell Threema that they should make

their individual facing arm totally free

and they should just focus on monetizing

their business to business side

And that's how they should do their

business model.

You know,

we see Telegram and now we

see Signal kind of venturing into

monetizing certain features of these

platforms where you can provide a very

good base service for free,

but then optional stuff,

especially for power users,

which are probably the core demographic of

what Threema is serving right now with

their five dollar pricing.

I think people will pay for those

features,

but they're not necessary for everyone.

And I think that's what you need

for any messenger to take off.

Your experience,

I think, really

validates the point I was making.

I think the background behind our criteria

that the messengers that we recommend on

our site have to be free basically comes

down to any of these paid messengers,

I don't really see them taking off as

more than a neat tool for a hobbyist

who's into security to mess around with.

But it's not going to get the kind

of mass appeal that you need from certain

products.

It's fine if, you know,

Ente charges more than Google Photos,

for example,

because these aren't social platforms.

I can protect all of my data.

If other people aren't protecting their

data,

I think that's unfortunate and that should

be fixed.

So there's that,

but it's not going to affect me, right?

But with a messenger,

like the only thing I'm doing is

communicating with other people and they

might not care about security and privacy

as much as myself.

But I want to...

It benefits me when those types of people

can use these platforms and they simply

won't find pretty much any price worth it

for something that other companies can

provide for free.

So I would agree.

I would just hope for a different

monetization model.

I think there's room here.

I don't know if they'll do that.

Threema, like at this point,

they kind of seem a lot like Wire,

which used to be pretty widely recommended

in the privacy community, as you know.

But then they really pivoted after they

were acquired to be very business to

business focused.

And I can imagine Threema kind of

following that same direction where they

just focus on their business product and

kind of drop the consumer side of things.

Which would be a bit unfortunate,

but also I don't know how much Threema

is currently adding to the landscape

at the moment,

so it is what it is,

I think there's a couple different

directions that they could go in and we'll

see if they do any of that or

if this new owner

is the type of private equity firm that

strips their acquisitions for parts and

completely shuts everything down.

You never know with these private equity

things.

That's usually what they do, yeah.

So yeah, if you're a Threema user,

I would be concerned by this acquisition.

But if you're not a Threema user,

which I would imagine a lot of people

are not,

I don't think there's going to be a

lot of impact in the privacy space from

this news.

Oh, gosh.

They own Petco.

Petco GmbH.

Okay.

Well,

that is not a good sign for Threema.

I would say, yeah.

GmbH, that's Germany, isn't it?

Or is that Switzerland?

I think GmbH is Germany.

I think there's a couple of different

countries in the EU that use that one.

Use that, yeah.

I couldn't tell you off the top of

my head.

Oh, nevermind.

They don't own them anymore.

They sold their majority stake.

Okay.

Sorry.

I'm just poking around their website now.

Yeah.

Yeah.

Um,

I think I kind of came into the

privacy scene on the tail end of Wire,

but I remember that too.

Wire used to be pretty solid.

It was much more polished than Threema,

and it was free,

and it didn't require any private

information.

But yeah,

they went all in on business-to-business.

I think maybe you can still download Wire,

but they certainly don't make it easy.

And, um,

Yeah, it's unfortunate.

That was kind of the other big thing

I wanted to mention was, like you said,

venture capital firms – or private equity

firms, sorry.

Their whole – there's a podcast called

Stuff You Should Know that I love,

and late last year they did an episode

about private equity,

and it –

covered all of that.

Like that's usually,

that's their whole job basically is they

buy a company,

they make it run super efficient and by

efficient, we mean we fire everybody,

we triple the workload.

We, you know, it's, it's honestly,

it's like a corporate pump and dump scam.

I don't even know how they get away

with it,

but that's what a lot of them do.

So hopefully Threema can survive this.

Um, but I will say they've

done a few really interesting

marketing stunts in the past that I think

have, uh,

done good things to raise awareness for

privacy.

Like

I still see sometimes they have a,

you can still access it actually.

They have a website where you can upload

a picture and it'll blur it and then

it'll put a banner on it that says

hashtag normalize privacy or regain

privacy.

That's what it is.

And that was part of an awareness campaign

they did a couple years ago.

And I think they also did something in

Europe where they rented an ice cream

truck.

I could be remembering the details of this

wrong,

but they rented an ice cream truck and

they were giving people free ice cream.

But in return,

you had to hand over personal data.

They would ask people for their phone

number or whatever and their date of

birth.

And it was funny watching people with the

ice cream cone and they're like, why?

No, no, here, have it back.

I don't know.

And they're like, yeah, exactly.

It's insane.

So why are we doing this with other

services?

So yeah, I do agree.

Overall,

they haven't really made a huge dent,

but I really appreciate the innovative

marketing stunts like that they used to

do.

I think those were super fun.

That is funny.

I didn't hear that story,

but I think we don't have to get

too much into this,

but it's really interesting how people

definitely treat the online space

differently than real life.

If people were asked for that on a

website,

they'd have no problem entering that

information.

Your browser would probably autofill it

for you.

But when you ask this in real life,

people suddenly realize what's happening

here.

I don't know.

why people make that distinction in their

mind.

But that's a really funny way to kind

of realize that.

Yeah, really, really true.

Really quick before we move on to the

next story,

I'll address one comment that we got in

the YouTube chat here.

That was about this story before we move

on.

They asked about Jami and if we've used

it.

That's not something that we've really

looked into too much on our website.

And I think...

Like, whenever I've looked into Jami in

the past, that's more of like a video

conferencing service.

I know it has instant

messaging built in,

but I don't know how usable it is.

In my mind, it's sort

of like a free-software Skype

alternative.

I think a lot of people

will probably use something like Signal,

and either Signal video calls or Jitsi

video calls, instead of Jami.

That's usually what I see recommended.

Uh, if you have any additional questions,

do you want to share more about like

what you would use Jami for?

And if it makes sense for you,

I would encourage, um,

the user who asked this to, uh,

post on the forum about it and maybe

get some more, more opinions.

Yeah, I agree.

Not to spend too long on it,

but Jami is a name that I've seen

pop up from time to time.

And I feel like it's hard because it's

not super popular.

I feel like for me, as a not

very technical person,

like I don't know any code,

I feel like it's really hard for me

to kind of get a good...

What's the word I'm looking for?

Get really good insight into how it

measures up to things like Signal or some

of these other alternatives.

So I would definitely like to know more

about it.

I'd like to know what is going on

under the hood that makes it better or

worse or what use case it's for.

I haven't found a lot of people using

it,

so I've never had a chance to really

test it myself.

But yeah, like I said,

it pops up from time to time.

So I would love to learn more about

it.

I just feel like I have a hard

time finding that information myself.

All right.

I believe...

Is it my turn to take the next

story or is it yours?

I can look at this.

Our next story is encrypted RCS.

Signs of that were spotted in the iOS

twenty six point three beta.

So the article that we have was actually

posted by Freya on our site as a

news brief reporting on a few different

sources.

Basically,

people have discovered in the iOS twenty

six point three beta some settings that

indicate carriers will be able to enable

end to end encryption for RCS messaging

and indicate that in iMessage.

So that's pretty exciting news for people

who have been following RCS support on

iOS for a while.

Um, because of course, right now

it is all not encrypted,

and Apple said

that they weren't committing to using the

same sort of encryption standard that

Google is using right now in Google

Messages on Android,

because it was

something that Google developed on their

own, instead of working with the GSMA to

create a standardized encryption protocol

for all these platforms and services to

use.

But now, um, there is a new

standard called Messaging Layer

Security, or MLS, and that's

what RCS is going to be using in

the future.

And I guess that's what's being added to

iOS.

So the appearance of this stuff in the

iOS beta doesn't necessarily indicate that

it's coming in the iOS twenty six point

three release.

That release could come with this code

still disabled, for example.

So we might not see encrypted RCS right

away,

but it is a sign that it is

actually coming at some point.

They're actively working on support for

it.

And I think that people who

are on iOS right now or who are

on Android and use Google Messages and

chat with people on iOS,

are going to be excited about this because

there's definitely a lot of benefit to

encrypting all of your messages.

RCS definitely is not the ideal messaging

platform for a lot of reasons.

There isn't a lot of protection of your metadata,

like who you're chatting with and that

sort of thing compared to something like

Signal, for example.

So we're still going to recommend a lot of different, more secure and more private messengers on our site.

But especially in the United States, and I don't know how much this is the case in other places.

I know a lot of other countries just

standardized on various messengers like

WhatsApp or whatever.

But texting is extremely common here.

And it's definitely used around the world.

And

It's easy for a lot of people,

and people just default to it.

And so improving, again,

kind of what I was talking about with

Threema earlier,

I think anything that improves the

baseline security of all of these people

who don't care about privacy and security

and who aren't seeking out private and

secure alternatives like Signal,

it's still a good thing.

It's a step in the right direction,

even though it's not the best you could

be doing.

A lot of people rely on this,

and it's going to benefit a lot of

people.

So hopefully...

This is a sign that this will come

in the final release sooner rather than

later.

But I guess time will tell when this

will actually come out.

So I don't have too much to add

to this, but just to clarify,

and you may or may not know this,

you said, and the article says here too,

that it's a carrier setting.

So does that mean the carriers would have

to choose to opt into this,

like Verizon and T-Mobile?

Most likely.

This is definitely the biggest bummer with

RCS.

RCS...

Right now,

it can be implemented in two ways.

You can do everything yourself as a carrier, add support for it, and interoperate with other carriers that are using the Universal Profile; much like texting, it's a carrier-based platform.

The other thing you can do with RCS, which a lot of carriers do (I don't remember which ones in the US do this, but I think there's a list on Wikipedia or somewhere), is not run RCS yourself; a lot of carriers purchase a service from Google that does it for them.

And so the reality behind RCS right now

is that Google is actually running all of

the service behind it for I think the

majority of people,

but if not the majority,

definitely a lot of them.

And so it's basically just a centralized

Google Messenger right now that your

carrier is kind of promoting on your

phones.

So obviously, from a metadata perspective,

that gives Google a lot of data,

but also it is, yeah,

there's a lot of middlemen involved here.

It's not just like an over-the-top service

that these tech companies are working on

together.

It's integrating with the traditional

carrier platform.

And whether you're going to Google servers

or whether you're going to the carrier

servers,

that is something that the carrier has to

set up on their end, which is...

Yeah, I agree with your reaction.

A bummer.

Yeah.

Because I feel like it's going to be a challenge to get carriers to go ahead and roll this out.

Like,

I feel like if Apple did it or

even Google,

if they did it at the phone level,

it would just, they would do it,

but I feel like carriers have very little

incentive and.

Kind of going back to what you said

earlier at the beginning when you were

covering this,

I agree with you that we look at

something like iMessage and people who do

not care about privacy or security,

who use the I have nothing to hide

argument liberally,

these are the same people who are using

iMessage.

And they're getting end-to-end encryption

just talking to each other without even

knowing what it is or knowing that it's

enabled.

And so, yeah,

it would be really cool if RCS could

roll out

to the general public and be available for

everyone cross-platform.

But I guess the only thing I could

see maybe as incentive for the carriers is

I know that RCS also comes with a

lot of those quality of life features that

iMessage is known for,

like bigger attachments and reacting to

messages and stuff.

So maybe we'll get lucky and maybe

carriers will roll it out because of the

features and the privacy and the security

will just be an added bonus.

But

Yeah, at the moment I do think there's a lot of pressure, and I think Apple adding better RCS support is going to add even more pressure for these carriers to support it.

I'm in one group chat with some family members on RCS right now, and it's very nice to be able to see read receipts and typing indicators and all the normal group stuff that you don't get on SMS, because SMS is a terrible platform.

And so there have been some improvements

there.

And I think people will realize that,

especially if they're in chats with RCS

users,

and they will eventually demand carriers

do it,

especially in the US, where texting is so common.

I don't know how it'll be

in places where people don't use SMS in

the first place.

Maybe there's less incentive to use RCS,

but I think at least for a good

amount of people,

there is pressure to support it,

which is all right.

Like I said, it's not my favorite,

but it increases the baseline,

especially if this gets included.

So I'm hopeful that we'll see wide

adoption.

Yeah, same here.

I think actually I'm going to keep on the Google and Apple theme, and go to our story about how Google Gemini is going to power Apple's AI features such as Siri.

And yeah, so, oh man,

this is kind of a confusing story for

me because,

so Apple's been trying to roll out their

Apple Intelligence,

clever little bit of marketing there,

which is just on-device AI.

And-

Man,

I know we've talked about AI so much

already tonight, but again,

on the user end,

from what I've been understanding,

it seems relatively private.

A lot of it is going to be

done on device.

And I think Apple actually has a very similar architecture to Confer.

Jonah can definitely correct me if I'm

wrong,

but I think they have a very similar

architecture where they try to run

everything in these trusted modules and

they try really hard to make it as

private as possible.

And Apple has been running into a lot

of delays rolling out their Apple

intelligence thing.

And,

One of the few things I don't like

about TechCrunch is they're very sparse on

technical details here.

But basically,

they say that Apple and Google have signed

a deal where Google's Gemini is going to

power at least some of the AI features.

And the headline specifically says,

like Siri, for example.

This is not an exclusive deal,

according to this article.

So Apple may...

Potentially, and this is me speculating, Apple might tap Claude for something else, or ChatGPT; I think originally they did contract with ChatGPT.

So yeah, it's again,

it's very sparse on technical details as

far as privacy stuff goes.

It just says here in the article that Apple has focused on privacy with its AI rollout, with much of the processing happening on device or through tightly controlled infrastructure.

Apple says it will maintain these privacy

standards through its partnership with

Google.

That's kind of all they said.

So, yeah.

I don't know.

What do you think about this?

There's a lot going on here.

Okay.

So, yeah,

I know Apple says that they are going

to maintain their privacy standards.

To their credit, from the things Apple has been working on lately, to my knowledge they were kind of the first to go in this private compute direction that Confer, the AI company we just talked about earlier in the show, and Maple AI and others are now following.

I think Apple was kind of the first to create this, and they kind of have a

big advantage compared to their

competitors in the sense that they can

build their own hardware and CPUs to make

the security more robust rather than just

relying on these off the shelf solutions

from Intel and AMD or Nvidia, for example.

Something I want more clarity about when

it comes to how this will work is

like what exactly Google's involvement is.

I saw a lot of rumors that this

would happen leading up to this

announcement where people were basically

saying that Apple and Google came to

an agreement where like Apple would get

access to Gemini's models basically and

they could create their own models based

on that or add additional training data or

whatever and they could run everything

themselves on these private compute

servers that they have.

So it isn't like the current

implementation that Apple has right now

with ChatGPT where Siri will sometimes

offload your request to ChatGPT and just

send it over.

In theory,

if Apple is running all of this and

keeping it on their cloud and they're just

using these models that Google has created

and that's what their partnership is, then keeping everything in one ecosystem and

not giving more data to Google, I think,

is an improvement for sure.

But all of this private AI stuff,

no matter how it's implemented,

has all of the problems that we talked

about earlier on in the show and the

problems with AI in general.

And I don't think that Apple's private

compute is going to be at a level

of privacy...

and security that I would be comfortable

with using for anything serious if I was

going to use AI at all.

And that's unfortunate because I think

that Apple is kind of doing the best

you really technically can do from the

security perspective if you want to get

back into the technical specifics of how

AI works.

But the best that's possible right now

with our current technology isn't

good enough in my opinion for people who

are serious about their privacy and

security.

And any of this cloud stuff, I think it sets a very dangerous path that we are going down with technology.

Because it seems like all of these tech

companies like Apple and especially like

ChatGPT or Google,

what they're trying to do is offload as

much as possible to the cloud.

And in doing so,

they're making normal hardware for people

more expensive.

We talked about this a few episodes ago,

and I know there was a news brief

about the RAM pricing,

which is

crazy right now because all of these data

centers are buying it up.

And what's really happening is people are

being priced out of the market where you

can own your local compute.

And that's a trend.

I was sharing this, I think,

in one of the group chats we have,

but it's a trend that we really see

in society at large for many years.

People were

priced out of the housing market.

I know that's a hot topic,

especially all around the world right now.

People are being priced out of even the

car market.

So many more people are leasing than before, or people are just relying on things like Uber or Lyft.

I know Tesla really wants to do this

with their robo taxis where people won't

own cars generally.

They will just rely on other people who

own cars to taxi them around.

I think that that is the direction that

tech companies want

everything to go in because they can

control all of it and they can create

this subscription model that you have to

pay for and local compute is kind of

going away and I think that's very scary

and dangerous, because they're really forcing everyone into this position where everyone's going to be locked in as renters on all of these platforms and won't have any agency over anything.

All of these computers are just going to be thin clients for the cloud, which is extremely unfortunate.

It's not a direction that I think people should tolerate.

And I guess I'm having a camera issue, but hopefully you can see me.

Yeah, that's my concern with all of this.

Yeah, no, I agree with you.

Because this whole everything-on-the-cloud thing is a problem even from a practical perspective.

like I swear I'd have to go find

it,

but I swear I read a story several

years ago about somebody who rented a car

and they were in like Arizona.

And they couldn't start the car because

the car couldn't get cell signal to call

home and do whatever stupid checks it had

to do to like verify that they could

start the car.

And just things like that are just so

it's, it's a practical perspective.

You know,

what happens when the power goes out?

You know, that's a very common scenario we've all been in.

I mean, I guess when the power's out you're not really using computers, but you know what I mean, or even your phone.

What happens when the power goes out and now the grid is overloaded, with everybody texting and everybody checking Twitter like, oh, what's happening, does anybody know why the power is out?

And you can't do anything, because you can't make that connection to the server for whatever license you're supposed to have.

It just seems like such a

like I get it on the one hand,

right?

Like I love the cloud in the sense

of like, I, you know,

if my computer crashes,

I have a copy of my data or,

you know,

to not have to destroy my own CPU

doing this.

God,

I wish I could render videos in the

cloud and not have to destroy my GPU

to do it.

But, you know,

it comes with practical drawbacks of just

that, that,

resilience, you know,

what happens when AWS knocks out a third of the internet traffic, or Cloudflare, whoever.

And it's just, yeah.

I mean,

I feel like I've seen that multiple times

just in the last several months of like

some major outage and all my friends in

discord are just like, well,

I guess I'm just gonna like, you know,

take an extended lunch today or something.

Cause I can't do anything.

Cause the cloud's out.

My whole job is on the cloud and

Yeah,

it seems so very short-sighted in the name

of profits,

which I know is so hard to believe

that tech companies would do that.

But yeah, I don't like it either.

It's horrible.

I don't have much else to add to

that one.

Yeah, I think...

That's kind of all I have to say.

We did have one more discussion question for that topic, about whether Google kind of won this AI thing despite being ruled against in court for making anti-competitive deals.

It's a really interesting case, I think,

this one.

And again,

I want to see more about this because

you know,

if Google is actually like controlling all

of the stuff that Apple is doing behind

the scenes, that would be very concerning,

especially from an antitrust standpoint.

But if it is a deal where Apple is just kind of building on Google's work, but doing it themselves,

that's pretty typical of Apple across

their software and their hardware.

I mean,

most of like Apple's advances in hardware

come from like Samsung making better

screens and that sort of thing.

And if it's a situation like that with

Google,

It's probably not a huge anti-competitive

concern,

but if Gemini branding is going to be

prominently featured and stuff,

and Gemini is kind of buying their way

into being the AI company that people

think about, yeah,

it is a weird situation for Google to

be in.

So definitely something I hope antitrust

people keep an eye on,

but I don't think they have much teeth

at the moment against these big tech

companies.

Sadly true.

Just to add on to that,

I don't know much about a Google AI

antitrust lawsuit,

but it says here in the TechCrunch article

that Google and Apple specifically have

faced lawsuits.

In August,

a federal judge ruled that Google acted illegally to maintain a monopoly in online search by paying companies like Apple to present its search engine as the default.

So I don't know.

Yeah, this could...

Hmm.

I don't know.

Yeah.

I guess now that I think about it,

I could see a scenario like Europe saying,

Hey,

you have to offer other models or

something.

But like you said,

that may only be the case if Google

is maintaining everything.

If like you said, if Google's just like,

okay, here's a copy of our model,

go host it on your server and do

whatever, then I don't know.

I mean,

I would still argue that's Apple being a monopoly,

but governments seem to be a little bit

easier on that.

Yeah.

I mean, and it still has, um,

troubling implications, I think,

for the AI industry.

Because whoever trains these models,

they have a lot of control over what

the AI does.

And so they can definitely shift things to

show up in certain ways or prioritize

certain responses.

I don't know what these AI companies could

do,

but it does give Google a lot of

power either way.

Agreed.

All right.

Let's see.

I think this is our...

We have another news story before we move

on to forum updates here.

But this is from Ars Technica.

Never-before-seen Linux malware is far more advanced than typical: VoidLink includes an unusually broad and advanced array of capabilities.

So basically this article from Ars Technica,

it kind of dives into this new Linux

malware that can infect Linux machines and

it has a lot of advanced capabilities that

attackers can use to perform various

things on your computer.

I feel like we've talked a bit about

malware on Linux in the past.

I think this is a trend that's only

going to continue,

especially as more people adopt Linux.

The reality is all of these malware

targets or malware developers are going to

target the platforms that most people use.

And so if we see more people adopt

something like the Steam Deck or more

people adopt Linux on desktop because

gaming is getting better or because they

want to escape...

all of the Copilot nonsense in Microsoft Windows, or for whatever reason,

we will see Linux become more and more

of a target

just inherently, I think.

And so we're going to probably see more

articles like this.

But it does demonstrate what I think a

lot of people in the privacy and

especially the security community have

been saying about Linux and desktop Linux,

especially for a while,

which is that I think Linux does have

a good ways to go as far as

defending itself against

malware like this.

I think Linux has for a very long time greatly benefited from not having a very big market share on desktop.

People will always say, you know, Linux has very high usage on the server, and so therefore there should be more malware for it, but that isn't true, because the desktop ecosystem is just a very different threat landscape.

You're running so many different applications, like web browsers especially, that are downloading arbitrary code from the internet, and anything that you're doing random desktop things on is going to have a much larger attack surface than something like a Linux server.

If I set up a Linux server,

it's only going to do whatever I install.

And so the attack surface is very small,

and that's why you haven't seen a lot

of malware targeting these Linux servers.

But yeah, this is...

I think that's basically my only point.

I think this is a trend that we'll

continue to see.

So I hope that Linux distro developers and the Linux kernel project take security a

bit more seriously because there are

security features that we see on

mainstream big tech platforms like macOS

and Windows that Linux still could benefit

from.

And it hasn't seen much of a focus

yet, unfortunately.

Did you have any takeaways from this

article when you read through it that I

didn't cover, Nate?

Well, kind of to add to that,

because I think your takeaway is spot on,

like Linux needs better security.

I don't think a lot of people who know what they're talking about would argue with that.

But no, it's interestingly,

this kind of plays into what we were

talking about right before we transitioned

to this story, because it says that...

This particular malware is actually aimed

at servers.

It's specifically aimed at virtual

machines and stuff,

and it can detect popular hosting

providers like AWS, Azure, Tencent,

and they say that there's indications the

developers are gonna add detection for

Huawei and DigitalOcean.

And it's very modular.

So that's kind of been one of Linux's

saving graces, I guess you could say,

is because, you know,

when you buy a Mac,

you're buying the entire device, right?

And like when you're buying Windows,

you're generally buying,

it's a little bit mix and match,

but generally there's, you know,

a handful of people make the chips and

a handful of people make, you know,

the RAM and all that, even less now.

But it's, you know,

Linux machines are so varied in their

hardware and their capabilities and what

they're designed for,

like you were saying.

And

So this one is very modular,

and it can do all kinds of different

things depending on what type of machine

it's on and what,

if I'm reading this right,

and what the attacker needs it to do,

which is really interesting.

And kind of just to back up what

you were saying about we're going to see

more of this, the article says, like,

similar things have targeted Windows

servers for years,

but they're less common on Linux.

And, you know, like I said,

this goes back to everything.

This goes back to Linux needs better

security.

This goes back to...

Was I saying resilience?

Because if the VM that's hosting my app

goes down, can I use the app?

Oh, man.

This was a good story to end on,

I think,

because so many things come together.

And yeah,

is it going to take down AI data

centers now?

How are people going to live without their

AI chatbot telling them what kind of

coffee they want?

So yeah, I think they'll survive.

Oh, I don't know, man.

But I can't pick between the hot coffee and the cold brew.

I don't know.

I got nothing.

But yeah, I agree with you a hundred percent on your takeaway.

I always hate telling developers what to do, because I'm not a code person, and

I would love to know code.

I'm trying to learn code for the record.

That's one of my goals this year is

to learn at least Python.

I feel like that's a good foundation to

start with.

And I feel bad saying like, oh,

you should go do this thing when I

can't really contribute to that.

But, you know, with privacy and security, Carey Parker from Firewalls Don't Stop Dragons likes to say that privacy is a team sport.

And like you were saying with the

messengers,

and when we do things that raise the default level of privacy and security, it raises everyone with it.

What's the phrase?

Like a rising tide lifts all boats.

And so it's not me trying to sound entitled and be like, well, these developers need to do what I want.

It's like, no,

like if we put more emphasis into

security, everyone benefits by default.

And,

Yeah,

clearly the article backs up what you were

saying about we're just going to see more

of this,

whether it's on the server side or the

desktop side.

Yeah,

my wife now has two Linux devices because

she now has a Steam Deck and a Pop!_OS machine, which, quick side note,

when she got the Steam Deck,

I took great joy in telling her,

now you can tell people I use Arch,

by the way.

So I had to make the joke.

Yay!

I feel accomplished tonight.

Yeah, that's all I got.

Just kind of backing up what you were

saying.

Let's move on to some forum threads that have been popular this week.

I think one that has gotten a lot

of discussion over the past few days was

about Mailbox,

which is recommended on our site as an

email provider.

There's this thread on the Mailbox.org forum, basically, that was also linked on our forum for further discussion,

talking about

some issues that people are having with Guard and Mailbox.

If you're unfamiliar or haven't checked their website in a while, Mailbox.org recently went through a whole refresh.

It seems like they redesigned their whole

website.

They refreshed all of their apps.

And it seems like they might have changed

development.

And so unfortunately,

this isn't something I would say that

we've gotten a great chance to take a

longer look at.

But I would ask anyone who uses Mailbox.org right now: if you have any experiences with things changing, or potential problems with this new version, definitely let us know in this forum thread, because it's something that we want to keep an eye on.

I know we have a couple of team members who do use Mailbox (I don't personally), but we're going to have them look into some of these things and hopefully find out more about what's going on.

But yeah, according to the users on these forums, there are potential security issues, it seems, with Guard, which is their tool that basically encrypts all of your messages with PGP, so it's important.

And I guess it's leaving behind traces even after you log out on a machine; that seems to be the main gist of the issue.

So again,

that's something that we'll want to

validate,

but it's something you might want to be

aware of if you are a mailbox user.

And hopefully we'll have more information

to share on that soon.

I think that kind of covers it.

Did you see any comments in this thread

that you wanted to cover?

No,

I just wanted to back up what you

said about if anybody is a Mailbox user

who can kind of shed some light.

Because it very quickly devolved.

I don't want to say devolved.

That's not the right word.

It very quickly turned into people discussing Proton and Tuta and some of the features they offer.

Because a lot of people were like, oh,

I use Mailbox because it has this feature.

And people were like, well,

Proton offers that.

And they say, oh,

but I don't like that it doesn't do

this.

And they say, well, Tuta does that.

You know, which is fine.

It's totally fine.

But it kind of...

As an outsider coming in who's never used

Mailbox,

I'm kind of looking at this and I'm

like, so what is the issue?

Like,

I actually asked Jonah that before we

started recording.

I was like, what is Guard?

What is going on?

But yeah, and it's also interesting.

It's not necessarily related,

but we did kind of note that Mailbox.org

appears to have really facelifted their

website.

So...

Which it looks great, for the record.

Looks super modern, super slick,

very awesome.

But it does seem that they've done a

lot of things,

both on the user end and behind the

scenes.

And yeah,

I think we're just trying to get a

better idea of exactly what's going on

here.

And if there is anything to be concerned

about,

we definitely want to make sure mailbox

users know.

We want to make sure that we know,

so we can see if there's any concerns

that affect our recommendations,

all that kind of stuff.

Yeah,

all of those mailbox changes are

relatively recent,

so it sounds like they're more extensive

than I had thought when they first

announced that.

For sure.

All right.

Did we have any other forum threads you

wanted to discuss,

or should we turn it over to questions?

I don't think so.

I think we can look through the chat

and see if anyone had any questions for

us this week and the forum thread as

well.

Let me get it pulled up.

But if you have any you want to highlight right off the bat, definitely get started.

Let's see.

I'm looking through the forum thread right

now.

I don't know if we do have any answers to this.

One of our members, Bits on Data Dev, says,

referring to the headline story,

the confer AI, he says,

where do the trusted execution

environments run?

They'd be more trusted if I knew where

these were and how they're insulated from

the surrounding environment.

For now,

I can't seem to find much info on

it.

It's Marlinspike,

so I feel pretty sure I can play

with this for fun,

but I'm not about to have a therapy

session with it anytime soon,

though that should never be the use case

for AI in an ideal world.

Thank you for getting one step ahead of

me there.

Yeah, I don't.

Unfortunately, correct me if I'm wrong,

but I think a lot of what we

know either comes from like that article.

I don't know if Moxie's done any direct

interviews yet,

but there's reputable sources like Ars

Technica, for example.

But there's also,

I think I mentioned it at the top,

but I may have accidentally stumbled over

myself and rushed through it.

If you go to confer.to,

which is their website,

and I think right at the beginning,

it prompts you to log in.

There is no like free tier for this

thing.

But it says on that login page,

it's like, oh, click here to learn more.

And he has three blog posts.

I do remember now, I did say this,

where he digs in a little bit more

to the details.

I don't know if he gives you that

level of detail that you're looking for,

but if you have any more technical

questions, I would start there for sure,

because that blog post gets very

technical.

Yeah,

that's probably a good place to look.

I haven't seen where Confer is hosted

specifically,

if you're talking about the hosting

provider side of things.

Typically...

Like I know with Maple AI, for example,

they run on Amazon Web Services and Amazon

is selling a service right now to people

who run this sort of product where you

can rent access on these trusted execution

environments that are hardware validated.

And I know Intel and AMD have hardware for this.

Notably,

Signal has been using Trusted Execution

Environments for things like contact

discovery and other features if you've

seen any of their integrations with

Intel's platform on their blog.

So that's the sort of thing that this

is like.

The trusted execution environments, yeah,

they're running on various providers,

and it's mainly relying on the hardware to

isolate that environment from the rest of

the stack.

However, and I know that we talked about this, because I remember talking about it in a previous episode (maybe we can find that and link it after), the problem with all of these things is that they aren't fully validated yet, and protecting against physical access, like I said earlier, is not something that they were designed to protect against in the first place.

Now they're kind of being used for that purpose.

Maybe we'll see improvements to that end, I don't know.

But at the end of the day, vulnerabilities in these platforms have been found before.

One was found very recently, because we talked about it on this show, and other ones have been found in the past.

Very often, but not always, they do rely on some sort of physical access, but it just kind of shows that these aren't the best protections against people who have physical access to the machines.

You can't fully rely on them for that.

And the only thing that they can really

do is kind of isolate the code that's

running and also

in theory,

validate the code that's being run.

But the code that's being run could be

anything.

It could be code that's spying on you,

for example.

And if that's running in the TEE and

you don't know it,

what protection is it really giving you?

None at all.
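The gap Nate is describing can be made concrete. Attestation proves *which* code (by its measurement hash) is running, not that the code is benign. Here's a minimal sketch of the verifier-side comparison, with made-up measurement values, assuming the attestation report's signature chain has already been verified out of band:

```python
import hashlib
import hmac

# Hypothetical: the measurement (hash of the loaded code) reported by
# the TEE's attestation quote, after its signatures have been verified.
reported_measurement = hashlib.sha256(b"enclave build v1.2.3").hexdigest()

# The measurement the client has pinned as "the code I expect".
expected_measurement = hashlib.sha256(b"enclave build v1.2.3").hexdigest()

def measurement_matches(reported: str, expected: str) -> bool:
    # Constant-time comparison; this check is the whole guarantee
    # that remote attestation gives you.
    return hmac.compare_digest(reported, expected)

print(measurement_matches(reported_measurement, expected_measurement))  # True
# Note what this does NOT tell you: whether the code behind
# expected_measurement logs your prompts. If nobody audited that build,
# a matching hash just proves you're talking to unaudited code.
```

The design point: the hash binds you to a specific binary, so the value of the check is exactly the value of whatever auditing was done on that binary.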

And so I think in this case,

like with Confer,

I believe everything should be open source

so people can audit this.

But are people auditing the code that's

being run in these trusted execution

environments across the board?

I don't really know if that's the case.

I don't know.

What Confer has done about that,

but I don't think that's happening with a

lot of these other companies who are doing

similar things to Confer.

Totally agree.

But a quick note,

I don't know if you remember in the

or if you saw in the original article

about Confer,

the Ars Technica one we spoke about at

the top.

Towards the bottom,

it talks about how he's actually offering

remote attestation.

or I don't know how to pronounce that

word,

but how you can remotely verify that the

code is running on the server,

that it has not been tampered with and

that there is no additional code.

I don't know, to my non-technical brain,

that sounds like a big claim.

So I don't know anything about that,

but I'm just saying they did say that's

a thing at least with confer.

Yeah.

And, and we see this with other, um,

Similar platforms as well.

I just keep bringing up Maple because it's

the only one that we've...

talked about on the forum a bit,

but they have cryptographic proofs that

they publish on their website as well,

where you can ensure that you're talking

with their secure servers.

But what does that tell you about the

code that's actually being run on it

itself?

That is unclear.

And that's the main thing that I would

caution people against.

Just because you know that you're

interacting with a trusted piece of code

doesn't mean that the code is trustworthy.

It really depends.

Like, Confer doesn't have to be malicious to

have bugs in their code.

There's buggy code all the time.

And so it's not a guarantee by any

means that there's no way to get your

data out of this.

And yeah,

that's the kind of thing that I would

be very concerned about.

I don't really know how what you're

describing would

work in a web-based client like

Confer because then we get into another

issue where we talk about this on our

website with end-to-end encrypted web

applications like Proton where Proton

could in theory send you a totally

different version of the website that like

runs JavaScript that decrypts your data,

for example,

and they could do it surreptitiously where

it would be very difficult to detect that

they're doing it.

I mean,

unless you're like going through the

inspect element source code and you're

looking at all the JavaScript and you're

seeing if it's different and then you just

have to assume

they're targeting you and they didn't just

push out an update.

Like that's the kind of thing that's very

hard to detect.

And I don't think Confer can really do

anything about that without a native

client.
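To make the web-client problem concrete: about the only manual defense is pinning a hash of the JavaScript bundle you were served and re-checking it on every load, which is exactly what nobody does. A hypothetical sketch (the bundle contents and the pinned hash are made up for illustration):

```python
import hashlib

# Hypothetical JS bundle as served to you on day one.
bundle_day_one = b"app.js contents: honest client-side crypto code"

# You record its hash somewhere out-of-band.
pinned = hashlib.sha256(bundle_day_one).hexdigest()

def bundle_unchanged(served_bundle: bytes, pinned_hash: str) -> bool:
    # Re-hash whatever the server gave you this visit and compare.
    return hashlib.sha256(served_bundle).hexdigest() == pinned_hash

# A later visit serves you something different. But is it a targeted
# backdoor or just a routine update? The hash check alone can't say.
bundle_later = b"app.js contents: crypto code plus one extra fetch()"
print(bundle_unchanged(bundle_later, pinned))  # False either way
```

Which is the point: a changed hash is ambiguous between "update" and "attack", so in practice a web app can swap the code under you without most users ever noticing.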

I think going back to Apple's AI

implementation,

if we want to talk about the security,

their Private Cloud Compute might

be a bit better against this, because they

can run code in your operating system,

like on iOS, that validates the servers.

I mean, if they implement

this properly. With Apple, everything is

proprietary, so who knows what they're

doing; you can't really trust

these platforms either. But I'm just saying,

in theory, running a native client, there

could be some validation involved. But with

something like a web app like Confer,

if they change the server,

if they want to redirect you to a malicious

server,

they can just give you a different version

of the website that connects to this

malicious server and says, yep,

the server's verified.

It's all good.

And how would you know otherwise?

I don't think that would be very easy

for most people to detect.

So yeah, all of this private AI stuff,

if it's running in the cloud...

and not locally,

I wouldn't really trust it.

And that is the main reason that on

our website,

we do have some AI recommendations.

But if you're going to use AI at

all,

we only recommend local AI models at this

time because it's really the only way to

ensure that your data that you're

inputting into it isn't going to be

monitored or logged by other parties.

That's just the reality of the situation

at the moment.
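For contrast, here is what "local only" looks like in practice. A sketch assuming an Ollama-style server listening on localhost (the model name is illustrative); the point is that the prompt is addressed to the loopback interface and never leaves your machine:

```python
import json
import urllib.request

# Build a request to a local inference server (Ollama's default port).
# "llama3" is a placeholder for whatever model you have pulled locally.
req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=json.dumps({"model": "llama3", "prompt": "Summarize my notes."}).encode(),
    headers={"Content-Type": "application/json"},
)

# The prompt only ever travels to 127.0.0.1; no third party can log it,
# because no third party ever sees it.
print(req.full_url)  # http://localhost:11434/api/generate
# urllib.request.urlopen(req) would send it, if a local server is running.
```

That property, the data never crossing the network boundary, is exactly what no cloud TEE setup can match.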

Fair enough.

Here's another question about Confer from

the forum.

Do you think Confer's way of doing AI

is something other AI products will follow,

and how difficult would it be for existing

AI products to migrate to that?

I would say...

I think that it's likely that other AI

products will do this.

It does seem like NVIDIA is putting more

resources into this trusted execution

environment from what I've seen in their

GPUs for AI data centers to use.

Again,

like I said a few times on this

show already,

what Confer is doing isn't super new.

We've seen it from other companies like

Maple and like Apple.

I think Confer could be doing a better

job.

They probably have more security-minded

people behind it.

I don't know,

I haven't looked too much into it,

but it's

something that has been done before.

It's something that I think we'll probably

continue to see happening.

And as far as I know,

there isn't much stopping AI companies

from implementing this in hardware,

especially as the hardware supports this

and gets better.

Um,

So I don't see why most companies, like

OpenAI, the maker of ChatGPT, would be

incentivized to do this.

It seems like they're perfectly happy to

just have all of your data.

So I don't know if it will happen,

but it definitely could happen for sure.

Yeah, I agree.

It's the incentive thing for me.

I'm sure, as you noted,

there's already other companies trying to

do this and trying to create those private

alternatives,

but

As far as the ones that are around,

the OpenAI, Anthropic,

I don't see what their incentive would be

to do it.

We did have one more quick thing in

the forum.

It's actually a shout out from,

we have a pretty active member who goes

by Nombre Falso.

And he said,

there's an age verification bill making

its way through Florida.

SB 482 would require you to verify your

identity to use an AI chatbot.

But he makes a pretty good point that

with AI getting so integrated into things

like Gemini, for example,

where Google's rolling it out to every

part of their product system,

does that mean that eventually they can

make the argument that you have to verify

your identity to use Gmail?

So pretty concerning stuff.

And if you're,

In Florida,

definitely go check out that link in the

forum and learn more about it.

I only have access to the YouTube chat.

I'm not sure if we've missed anything in

some of the other chats,

because I know we're live streaming to a

few different platforms right now.

I don't believe we have.

So I guess we can do kind of

a last call for any questions if people

are still wondering about anything.

Otherwise...

It looks like we've gotten through

everything on the forum.

Yeah.

I'm not seeing anything in the YouTube

chat that we haven't already addressed.

There's the question about Jami.

We did have a new member sign up

tonight.

What is that?

Oh, man.

I don't know if I can pronounce this

because I think this is a Greek name.

Ionis Karopoulos.

But they became a member on YouTube

tonight,

I think at the beginning of the stream.

And I think we missed that.

So thank you so much for signing up

and supporting Privacy Guides and our

mission.

Really appreciate it.

We got another question in the chat from

unredacted.

Any update on the bad internet bills that

we had talked about?

As far as I know,

it's been pretty slow over the holidays.

I haven't seen any new news,

but they are continuing to advance.

I haven't seen any good news in that

direction either.

And I don't think...

No news is good news in this case.

I think no news means that they're working

on things behind the scenes to continue

pushing it through.

So yeah,

when we see more updates on that,

we'll definitely be sharing on our social

media and keeping people up to date.

But I don't know.

It's hard to keep track of all of

the many vacations that Congress feels

welcome to take away from their jobs.

And so I don't know if they're actually

doing anything right now over in

Washington or if they're just kind of

lounging around for the winter.

So that could have something to do with

it.

Yeah.

But yeah, if there's updates,

I'll let you know.

Yeah,

it looks like Congress reconvened on

January fifth, if I'm reading this right,

with a subcommittee markup of six bills.

It says that's the Committee on Energy and

Commerce,

which I think is what a lot of

these bills fell under,

if I remember correctly.

Yeah, they met up yesterday, actually.

And the next meeting was scheduled today at

three.

So three p.m.

So, I mean, theoretically,

if anything happened,

hopefully we should hear about it any day

now.

It could be ongoing right now.

Maybe that's the update.

You know, that's true.

For all the hate that politicians

rightfully get,

they usually do work pretty late when they

have these meetings.

So, yeah,

they might be talking about it as we

speak.

Everybody, uh, focus real,

real hard and we're going to send them

a message to tell them to stop being

stupid.

Please.

Yeah.

Jokes aside.

Um, yeah.

Unredacted said, thanks.

Hard to keep track these days.

Yeah.

Trust me.

You're telling me, there's so much to keep

track of. But all righty.

Well,

I think that kind of wraps everything up

then Nate,

do you want to take the outro here?

Sure.

I can do that.

So all the updates from this week will

be shared on the blog.

So if you are not signed up yet

for the newsletter, go ahead and do that.

Or you can of course,

subscribe with your favorite RSS reader.

For people who prefer audio,

we also offer a podcast available on all

platforms and RSS,

and this video will also be synced to

PeerTube.

Privacy Guides is an impartial nonprofit

organization that is focused on building a

strong privacy advocacy community and

delivering the best digital privacy and

consumer technology rights advice on the

internet.

If you wanna support our mission,

then you can make a donation on our

website, privacyguides.org.

To make a donation, click the red heart icon

located in the top right corner of the

page. You can contribute using standard

fiat currency via debit or credit card, or

you can donate anonymously using Monero or

your favorite cryptocurrency. Becoming a

paid member unlocks exclusive perks like

early access to video content and priority

during the live stream Q&A. You'll also

get a cool badge on your profile in

the Privacy Guides forum, and the warm

fuzzy feeling of supporting independent

media. So thank you guys so much for

watching, and we'll see you next week.

Thanks, everyone!