alt.hn

3/25/2026 at 10:28:49 AM

I tried to prove I'm not AI. My aunt wasn't convinced

https://www.bbc.com/future/article/20260324-i-tried-to-prove-im-not-an-ai-deepfake

by dabinat

3/25/2026 at 11:24:41 AM

AI companies love to hype up how AI will provide a great benefit to the economy and transform intellectual labor, but I hardly see any discussion of how much damage it will cause to the economy when you can no longer trust that you're on a video call with an actual person. Maybe the person you're interviewing is actually an AI impersonating someone, or maybe they never existed in the first place. Information found online will also no longer be trustable: footage of some incident somewhere may have been entirely fabricated by AI, and we already experience misleading articles today.

Money will have to be wasted on unnecessary flights to see stuff or meet people in-person instead of video, and the availability of actual information will become more and more limited as the sea of online information gets polluted with crap. It may never be possible to calculate the full extent of the damage in monetary value.

by a2128

3/25/2026 at 11:49:57 AM

Partially agree. However, this problem has existed with scam e-mails since the 90s.

For me the solution is in signed e-mails and signed documents. If a person invites me to an online meeting with a signed e-mail, I can trust that it's really them.

Same for footage of wars, etc. The journalist taking it basically signs the videos and verifies its authenticity. If it is AI generated, then we would lose trust in that person and wouldn't use their material anymore.

by roflmaostc

3/25/2026 at 12:11:24 PM

How do you prove the signature isn't fake?

Ultimately ID requires either a government ID service, a third party corporate ID service, or some kind of open hybrid - which doesn't exist.

All of those have their issues.

by TheOtherHobbes

3/25/2026 at 12:33:28 PM

I think he was referring to a cryptographic signature, possibly using the "web of trust" to get the key. I'm not convinced we need central authority to solve this.

by olmo23

3/25/2026 at 12:32:40 PM

People at my org were gleeful when they learned they could hook LLMs into Slack. Even if we had some reliable, well-used signature system, I think people would just let AI use it to send emails on their behalf.

by tenacious_tuna

3/25/2026 at 12:37:51 PM

If the AI age has taught me anything, it's that most people do not care what their output is. They'll put their name on anything, taste or quality does not matter in the least. It's incredibly depressing.

by bigfishrunning

3/25/2026 at 7:28:08 PM

Enshittification never stopped; we just stopped talking about it because it became normal. Quality does not matter anymore. I agree it's depressing, seeing AI slop being pushed and no one even putting in the time or effort to say this is bad and you should feel bad.

by daheza

3/25/2026 at 1:50:45 PM

That's a different problem though. It's doing it on their behalf, not on behalf of a scammer who's impersonating them.

by Ajedi32

3/25/2026 at 3:29:29 PM

Until their computer is taken over....

by pixl97

3/25/2026 at 2:24:54 PM

Well we should treat that as their own output. If it's crap, treat it the same way you would if they produced the crap themselves.

by MarsIronPI

3/26/2026 at 6:30:58 AM

> Ultimately ID requires either a government ID service, a third party corporate ID service,

These are valid approaches to the problem, but they are not necessary.

> or some kind of open hybrid - which doesn't exist.

PGP has existed for decades. It doesn't have a great UX, and it isn't used outside of its narrow niches, but it exists and does exactly this.

by ordu

3/26/2026 at 7:49:19 PM

PGP works if you vouch for keys in person, both of you are honest and can be trusted to act in good faith when not in person, have good key chain and rotation hygiene, and the private keys can't be exfiltrated.

by heavyset_go

3/26/2026 at 7:02:44 AM

Picture this: your grandma calls you in a panic, and you tell her, "Drop me your public PGP key so I can verify the signature." PGP is dead outside of niche geek circles exactly because key management is basically an unsolvable problem for the average person.

by KurSix

3/26/2026 at 12:32:02 PM

> PGP is dead outside of niche geek circles exactly because key management is basically an unsolvable problem for the average person

Can this problem be solved with better software?

I believe it can; it's just that the average person doesn't need PGP. No demand for software solving this problem, therefore no software for it.

The problem could be solved with, say, a store of known PGP public keys along with their history (where each key was acquired) and a simple algorithm that calculates trust in a key as the probability of it being valid (or whatever term cryptographers would use in this case).

You could start with the PGP keys of people you know, getting them offline as QR codes and marking them as "high trust", and then pull the keys stored on their devices (lowering those keys' trust levels along the way). There are some issues with how to calculate the probability, because when we pull the same key from different sources we can't know whether their reported trust levels are independent variables, but I believe you can deal with that by pulling the whole chain of transfers of the key, starting from the owner of the key and ending at your device.

This is just a rough idea of how it could be built; maybe other solutions are possible. My point is: the ugliness of PGP is a result of PGP being made by nerds, for nerds. There is no demand for PGP-like solutions outside of nerd communities. But maybe LLM-induced corrosion of trust will create that demand?
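The trust-propagation part of this idea is easy to prototype. Below is a minimal, hypothetical sketch (the product-of-hop-trusts rule and every name in it are assumptions for illustration, not an established algorithm): trust along a chain of transfers is the product of per-hop trust values, and the trust in a key is the best value over all chains, found with a max-product variant of Dijkstra's algorithm.

```python
import heapq

def key_trust(edges, me, key_owner):
    """Best-chain trust in key_owner's key, as seen from me.

    edges: dict mapping holder -> {next_holder: trust in [0, 1]}.
    Trust along a chain is the product of hop trusts; overall trust
    is the maximum over all chains (max-product Dijkstra).
    """
    heap = [(-1.0, me)]  # negated, so heapq pops the highest trust first
    seen = set()
    while heap:
        neg_trust, holder = heapq.heappop(heap)
        if holder in seen:
            continue
        seen.add(holder)
        if holder == key_owner:
            return -neg_trust
        for nxt, hop in edges.get(holder, {}).items():
            if nxt not in seen:
                heapq.heappush(heap, (neg_trust * hop, nxt))
    return 0.0  # no chain of transfers found

# Keys scanned in person as QR codes get trust 1.0; keys pulled from
# someone else's device inherit a discounted value.
edges = {
    "me":    {"alice": 1.0, "bob": 0.5},
    "alice": {"carol": 0.8},
    "bob":   {"carol": 0.9},
}
print(key_trust(edges, "me", "carol"))  # 0.8 via alice beats 0.45 via bob
```

Note that this deliberately ignores the correlated-sources problem raised above; handling that would mean tracking whole transfer chains rather than collapsing each key to a single scalar.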

by ordu

3/25/2026 at 3:07:14 PM

Same way security cameras prove that they are authentic camera recordings that have not been modified. If modified, the video will no longer match the signature that was generated with it.
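A toy version of that check, using an HMAC as a stand-in for the camera's signature (a real device would use an asymmetric signature from a secure element so verifiers don't need the secret; the key and data here are made up):

```python
import hashlib
import hmac

CAMERA_KEY = b"secret-provisioned-into-the-camera"  # hypothetical

def sign_recording(video):
    # Tag is computed over the exact recorded bytes.
    return hmac.new(CAMERA_KEY, video, hashlib.sha256).digest()

def verify_recording(video, tag):
    # Constant-time comparison; any modified bit breaks the match.
    return hmac.compare_digest(sign_recording(video), tag)

original = b"\x00\x01frame-data..."
tag = sign_recording(original)
print(verify_recording(original, tag))                            # True
print(verify_recording(original.replace(b"\x01", b"\x02"), tag))  # False
```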

by SirMaster

3/26/2026 at 9:40:51 AM

> If a person invites me to an online meeting with a signed e-mail, I can trust that it's really them.

In the interview scenario, generating an email signature is hardly beyond what an AI can do.

You have no prior knowledge of this person or their signature; it's not some government-issued ID. In essence it's just random data unless you know the person to be real.

by SomeUserName432

3/26/2026 at 3:58:58 AM

> If it is AI generated, then we would lose trust in that person

You are assuming that only you can generate fake AI videos of yourself.

by pjaoko

3/26/2026 at 4:52:31 AM

OP was talking about journalists attesting to the authenticity of video they produce

by nsomaru

3/25/2026 at 2:31:45 PM

As with any problem, scale changes its nature.

With cash, you can only steal so much (or have transactions of up to certain size) until you run into geographical and physical constraints. With cryptocurrency, it’s possible to lose any amount.

With humans writing scam emails, you can only have so many of them until one blows the whistle. With LLMs, a single person can distribute an arbitrary amount.

At some point, quantity becomes a new quality, and drawing a parallel becomes disingenuous because the new quality has no precedent in human history.

by strogonoff

3/25/2026 at 3:30:31 PM

> (or have transactions of up to certain size)

And by that you mean tens of millions to billions right? Bank transfer scamming/fraud is a thing.

by pixl97

3/25/2026 at 3:42:47 PM

The highlighted parallel is usually drawn between cryptocurrency and cash, not between cryptocurrency and banks. With both cash and cryptocurrency, as is the idea behind the analogy, 1) there’s no intermediary and 2) once it’s gone, it’s gone. Obviously, the banking system is not immune to fraud (not sure why you think I made that claim, unless your definition of “cash” includes electronic transfers), but banks and/or payment systems can (and do) resolve these cases and have certain KYC requirements.

by strogonoff

3/25/2026 at 12:57:42 PM

There are people hosting agents online to talk to other agents etc. on their behalf. How difficult is it to just instruct such an agent to do the tasks you mentioned? You're assuming it's done by "bad actors" while it's most likely just going to be done by "everyone" that knows how to do it.

by mk89

3/25/2026 at 11:56:35 AM

Spam emails in the 90’s don’t come remotely close to the operations people can set up by themselves with AI now. It doesn’t even compare.

by Forgeties79

3/25/2026 at 1:50:23 PM

I mean, emails were and still are a huge security risk. Sometimes I'm more scared of employees opening and engaging with emails than I am of anything else.

by hansonkd

3/25/2026 at 1:51:56 PM

> Information found online will also no longer be trustable

Most information you can access publicly, including Wikipedia, is the result of an astroturfing fight. Most information online hasn't been trustworthy for a double-digit number of years now.

> we already experience misleading articles today

Again, this has been happening for decades.

> footage of some incident somewhere may have been entirely fabricated by AI

Not like we did not already have doctored footage plaguing the public.

> Money will have to be wasted on unnecessary flights to see stuff or meet people in-person instead of video

Necessity to inspect the supply chain for snake oil has been a thing since at least EA (the Nasir one).

We may be dealing with a spam-scale version of the problem, but the problems themselves were already there.

by friendzis

3/25/2026 at 2:00:25 PM

All these are true, but just as it happened before the internet, it's accelerating even further. There are clear costs that cannot just be hand waved away.

by pstuart

3/25/2026 at 2:22:37 PM

I'm not sure we can say it's accelerating. The techniques that adversarial actors use have always been changing, and when they shift tactics it can take a while for an adequate defense to be adopted. We're still dealing with SQL injection in the OWASP Top Ten. What I think would indicate an acceleration is the most security-oriented organizations continuously failing to defend against new attacks. If we start hearing about JPMorgan and Google getting popped every month or two, we're in trouble.

by ottah

3/25/2026 at 3:17:52 PM

The acceleration is in the decrease of the cost to produce misinformation.

Misinformation in pure text form has always been cheapest, but is even cheaper now that text generation is basically a solved problem. Photos have been more expensive, it used to take time and skill with a photo editor to produce a believable image of an event that never happened. The cost is now very low, it's mostly about prompting skills. Fake videos were considerably harder, especially coupled with speech. Just a few years ago I could assume any video I saw was either real or a time-consuming, deliberate fake.

We've now entered a time where fake videos of famous people take actual effort to tell apart, and can be produced for a low cost - something accessible to an individual, not a big corporation. We can have an entirely fake video of Trump, or another world leader, giving a speech and it will look like the real thing, with the audiovisual "tells" of it being fake getting harder to notice every few months.

by ACS_Solver

3/25/2026 at 4:11:15 PM

> The acceleration is in the decrease of the cost to produce misinformation.

So it's a spam issue. And normally, while annoying, spam is possible to fight; however, on these topics we have built structures that disable the very mechanisms that allow us to fight it. That's worrying.

The fact that someone can instruct their computer to astroturf their flight tracking app on some forum for nerds is irrelevant - people have been instructing "marketing agencies" to astroturf their brand of caffeinated sugar water on tv, radio and press for decades and centuries. For a very long time the "traditional media" was aware that their ability to sell astroturfing capacity was hanging on their general trustworthiness. Then the internets rose to prominence, traditional media followed by selling more and more of their capacity to astroturfers. Now we have a worrying situation that the internets might be spammed by astroturfers a bit too much, but the backup is broken already. Now that's truly frightening.

Welcome to the post-truth world, where objective references outside of your own village cannot exist.

by friendzis

3/25/2026 at 9:09:50 PM

It's an algorithm issue. When people hold a media consumption device in front of their face all day and the algorithms are played, then it's literally a brainwashing device.

by pstuart

3/26/2026 at 2:52:36 AM

It is not an algorithm issue. It would still be a huge problem with zero algorithmic social media.

by Dylan16807

3/25/2026 at 2:10:28 PM

Laws will be passed to make it "safer". Just like it is happening with the id verification systems. Every image or video gen will require a watermark. Something visible which cannot be removed easily or hidden which can be detected and blocked. Access to models which do not comply will be made harder through id verification checks or something.

There will be some regulatory capture in between.

The world will kick into gear only when something really bad happens. Maybe an influential person, rich or a politician, will be fooled into doing something catastrophic by a deepfake video/image. Until then, normal people being affected isn't going to move the needle.

by thisisit

3/25/2026 at 2:36:33 PM

Verification needs to work the other way around, some kind of verifiable chain of trust for photos and videos from real cameras. Watermarking all generated media is impossible.

by Miraste

3/25/2026 at 3:05:24 PM

I don't really understand why this is so hard or why it wasn't just done from the get go.

Just have Apple and Google digitally sign videos and photos recorded from phones and then have Google and Meta, etc display that they are authentic when shown on their platforms.

by SirMaster

3/25/2026 at 3:26:13 PM

You're talking about the metadata of the files, which can always be edited and someone will inevitably try to make software to do exactly that. Also, Adobe's proposal for handling generated content is exactly this and they're not able to get buy-in from other companies.

by alpha_squared

3/25/2026 at 3:28:50 PM

Edit the metadata in what way? It's a cryptographic hash.

If the bits that make up the video as was recorded by the camera don't match the hash anymore, then you know it was modified. That doesn't mean it's fake, it just means use skepticism when viewing. On the other hand the ones that have not been modified and still match can be trusted.

by SirMaster

3/25/2026 at 4:16:09 PM

Essentially 0% of professional photography or videography uses "straight out of the camera" (SOOC) JPEGs or video. It's always raw photos or "log" video, then edited to look like what the photographer actually saw. The signal would be so noisy as to be useless.

by SAI_Peregrinus

3/25/2026 at 9:23:13 PM

But we are talking about consumer devices here.

Are you saying Apple and Google can't put a secure hash into the output from their camera apps that applies after their internal processing is done?

by SirMaster

3/26/2026 at 7:15:56 AM

Sure they could, but then you trim the video by 2 seconds, tweak the colors, or just send it over WhatsApp, which recompresses the file with its own encoder. The hash breaks instantly. Cryptography protects bits, but video is about visual meaning. The slightest pixel modification kills the hardware signature. Plus, it does absolutely nothing to fix the "analog hole" problem: a scammer can just point that cryptographically signed iPhone camera at a high-quality deepfake playing on a monitor.

by KurSix

3/26/2026 at 2:01:56 PM

I would assume WhatsApp would read the hash and verify it when the video is chosen to be sent to someone, so the receiver would see that the video selected by the sender was indeed authentic. Assuming you trust Meta to re-encode it and not mess with it.

As far as recording a monitor, I guess, but I feel like you can tell that someone is recording a monitor.

As far as editing, no, it won't work in those cases, but the point here is not to verify ALL videos, but to have an easy way for people to verify important videos. People will learn that if you edit a video it won't be verified, so they will be less inclined to edit it if they want to make it clear it's authentic. Think of people recording some event going down on the street, or recording a video message for family and friends.

If AI video generation is going to get that good, don't you think it would be a good idea to have a way to record provably authentic videos if we need? Like a police interaction or something. There is no real reason to need to edit that.

Also, could a video hash just be computed every X seconds, and give the user the choice to trim the video at each of those intervals?
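The per-interval idea in that last question is workable as a sketch: hash fixed-size segments independently, and a trim remains verifiable as long as it cuts on segment boundaries. A rough illustration with byte counts standing in for seconds of video (real provenance schemes such as C2PA are considerably more involved):

```python
import hashlib

SEGMENT = 4  # bytes per segment here; a camera might use X seconds of video

def segment_hashes(data):
    """Hash each fixed-size segment independently."""
    return [hashlib.sha256(data[i:i + SEGMENT]).digest()
            for i in range(0, len(data), SEGMENT)]

def verify_trimmed(trimmed, signed_hashes):
    """True if trimmed is a contiguous run of whole original segments."""
    got = segment_hashes(trimmed)
    n = len(got)
    return any(got == signed_hashes[i:i + n]
               for i in range(len(signed_hashes) - n + 1))

video = b"AAAABBBBCCCCDDDD"
hashes = segment_hashes(video)  # signed by the camera at record time
print(verify_trimmed(b"BBBBCCCC", hashes))  # True: trimmed on boundaries
print(verify_trimmed(b"ABBBBCCC", hashes))  # False: cut mid-segment
```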

by SirMaster

3/25/2026 at 3:26:39 PM

It becomes a hard problem quickly when you introduce editing, and most photos and videos on social media are edited. I'm not sure how it would work. It seems more feasible than universal watermarks, though.

by Miraste

3/25/2026 at 7:05:04 PM

It's pretty much impossible to do this in a useful way, _and_ it would also cement even more control over the media landscape to those companies.

by rcxdude

3/25/2026 at 3:04:26 PM

> Laws will be passed to make it "safer". Just like it is happening with the id verification systems. Every image or video gen will require a watermark. Something visible which cannot be removed easily or hidden which can be detected and blocked. Access to models which do not comply will be made harder through id verification checks or something.

I've thought about this off and on, and about how to implement it. Not easily, was my general takeaway.

Or rather, it's easy to implement, but you're in an adversarial relationship with bad actors, and easy implementations may be easily broken.

E.g. your certs have to come from somewhere and stay protected, and how do you update and control them? Key management for every single camera on every phone, etc.

by red-iron-pine

3/25/2026 at 1:13:40 PM

"Is this a deepfake video call" is a major plot point in a pretty big movie currently in theaters, so I think this is getting into the broader zeitgeist.

by collinmcnulty

3/26/2026 at 4:27:18 AM

We need some sort of end-to-end verification, i.e. from the sender's camera to the receiver's display/speakers.

Maybe Apple will be able to pull it off? I.e. if you FaceTime me, I know that you are a person.

by whatever1

3/26/2026 at 3:05:52 AM

What do you do when people don't protect their signatures? There are already scams where people get tricked into forwarding messages to other people from their own numbers or email.

by kelvinjps10

3/25/2026 at 11:55:53 AM

> footage of some incident somewhere may have been entirely fabricated by AI,

Or the opposite, where people attempt to get out of trouble by calling real evidence into question by calling it “AI”

by Forgeties79

3/25/2026 at 12:43:08 PM

Either way, the lack of trust is the damage.

by bigfishrunning

3/25/2026 at 1:05:21 PM

Definitely

by Forgeties79

3/25/2026 at 12:36:06 PM

We are still in the early stage of AI and already I struggle to tell what is real or fake on my Twitter feed. It will only get better in its deception with time.

You know those incriminating Epstein photos with his associates? A few years from now a common defense from people like that would be that the photos were AI generated, and it would be difficult to prove them wrong beyond reasonable doubt.

People in previous cases already attempted to dismiss incriminating pics of themselves as being the work of clever Photoshop artists.

by chistev

3/26/2026 at 12:38:41 PM

No no,

AI has plateaued, it's not getting better!

by Bombthecat

3/25/2026 at 11:35:43 AM

What's the solution apart from an identity providing service?

by whateverboat

3/25/2026 at 11:43:40 AM

I don't know of a solution. I don't think even identity verification will meaningfully solve this. People will get hacked, or provide their SEO-spamming agent with their own identity, or purposefully post fake videos under their own identity. As it becomes more normal to scan your ID to access random websites, it will also become easier to steal people's identities and the value of identity verification will go down.

by a2128

3/25/2026 at 12:09:54 PM

People don't get hacked - devices get hacked. So all we need is a better chain of trust between two people. This is not a technology development problem as much as a technology implementation problem. And a political problem

by intrasight

3/25/2026 at 12:44:36 PM

People get hacked -- a device could be flawless, but if a person is a victim of "Social Engineering" and hands the attacker a password, there's nothing the designer of the device could do about it.

by bigfishrunning

3/25/2026 at 1:12:18 PM

2FA has tried to solve exactly this. Not many attacked people will hand over their password AND their phone. Yes, I know, they might hand over one authentication code (and I know people who did exactly that)... We should also look into reducing the attack surface: if you get your Instagram hacked, you shouldn't get your Facebook hacked as well. But the current big-tech centralization leads us to that single point of failure, because they don't care about users' concerns, only about market grab. So... what now? Do we bring politics into this?
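For context on what that one-time code actually is: most authenticator apps implement TOTP (RFC 6238), an HMAC over the current 30-second time window, which is why a phished code goes stale almost immediately. A minimal sketch:

```python
import hashlib
import hmac
import struct
import time

def totp(secret, at=None, step=30, digits=6):
    """RFC 6238 time-based one-time password (SHA-1 variant)."""
    counter = int((time.time() if at is None else at) // step)
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F  # dynamic truncation per RFC 4226
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 Appendix B test vector: SHA-1, 8 digits, T = 59s
print(totp(b"12345678901234567890", at=59, digits=8))  # 94287082
```

A code is only valid within its `step` window (plus whatever grace period the server allows), which is what limits how long a handed-over code can be replayed.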

by soco

3/26/2026 at 2:04:39 PM

You're on the right path. As long as we continue to use email as a fallback to every other form of authentication, it will remain a single point of failure and a relatively weak one at that.

OP is still correct. No matter what, humans will remain the weakest link... it's in our nature to sympathize, and every one of us has distracted/weak moments. It's just a matter of time; look at the guy who runs haveibeenpwned... getting pwned.

by slumberlust

3/25/2026 at 1:25:23 PM

One authentication code is often all that's needed to *change where the authentication codes are sent*

Not to mention that most 2FA still uses SMS, which has its own well-understood security flaws.

by bigfishrunning

3/25/2026 at 1:41:15 PM

The best thing I can think of is domain names. Domains are tied to addresses and billing, and sites are people or businesses, with physical locations one can visit.

Maybe a good startup idea would be "local verify", where a client can check locally whether an online destination is real.

by prox

3/25/2026 at 11:59:40 AM

Agreed. The sphere of trust around each of us will shrink back to only those in our physical proximity. Outside of that, no one can be trusted.

by nathanaldensr

3/25/2026 at 2:21:51 PM

Touching grass. Valuing in-person connections. Focusing on the community, meatspaces and actual people around you.

Getting off of the Internet and off of our devices. It's not just a solution to AI/LLMs modifying our reality but also a solution to [gestures wildly at the cultural, societal and global communication impacts of the past ~16 years].

This sentiment is unpopular, but it's true. Prioritize true connections and experiences.

by jjulius

3/25/2026 at 11:43:41 AM

I’m seeing a huge increase in companies requiring in person interviews now. Seems there is a real possibility the internet as we know it will be destroyed.

by Gigachad

3/25/2026 at 11:49:32 AM

I think you might be right and I think I'll like some of the consequences and hate some of the others.

More in-person stuff feels like a win to me (and I say this as someone who probably counts as introverted).

Not being able to trust any online interactions anymore? Seems like a new height in what was already a negative.

by rkomorn

3/26/2026 at 12:17:12 AM

Agreed. I don't think there is any saving the internet as a social space long term. And I'm not entirely sad about that either. I think a return to in person interaction, public social spaces, and a retreat from social media would do the world a lot of good.

Though there is a nightmarish possibility that people just accept this and willingly interact purely with bots, giving up all real relationships for AI ones.

by Gigachad

3/25/2026 at 11:51:39 AM

LinkedIn is completely destroyed now. There are tons of AI bots there, but real humans are now fronts for AI too. So you can't even trust content from people you know.

An identity service is not useful, because that person might be real but just a pipe to AI, like we see on LinkedIn.

by dominotw

3/25/2026 at 11:44:05 AM

That's just shifting the problem not solving it.

by adithyassekhar