ai still can't break math

i was watching some video on tiktok about parents watching an AI-generated video of their daughter from the future, sharing her fears about what AI is able to do. and it’s all sorts of things, from fake videos of her that her peers at school could embarrass her over, all the way to CSAM. and it got me thinking a bit about AI-generated images and videos, verified accounts, and the like.

growing up, “you can’t believe what you see on TV” was something i heard quite often, especially when watching scifi movies and the like. for me, it was maybe more a comfort to know the monsters aren’t real, that sort of thing. but i think i just kind of took for granted that you really can’t believe everything you see on TV. and largely, that skepticism extended to things on the internet.

and this has kind of always been a problem online, where users are largely anonymous. and a long time ago, we already had solutions for this in cryptography. like, if you had something you wanted to share and you wanted people to know it was actually you who wrote it, you had to use encryption and digital signatures. i used to think PGP/GPG would be a really important part of our lives in the future, so much so that i used to go to PGP key-signing parties to further build my “web of trust”. but really, nothing ever came of it. it has stayed a niche, crypto computer nerd thing.

weirdly, what did take off was having “verified badges” on social media profiles. which is not the same thing at all as strong cryptography. it was, i guess, a low enough bar of trust for people to accept. but it still couldn’t account for things like accounts being hacked or the platform itself being corrupt.

and then, what do you do if AI can make videos or text so convincing, it can make others believe that it is you? i’m fairly lucky in that i’m a nobody; nobody is going to care to impersonate me to try to ransom money from my non-existent rich relatives. but i could easily imagine someone doing these impersonations on a large scale to scam the elderly, who might be convinced by a phone call.

so i guess this all winds back to: what happens if you can’t believe anything you read or see or hear? AI can be very convincing, but in its current form it doesn’t break the laws of physics, and so strong cryptography is still safe. so why not write important things and then cryptographically sign them?

it definitely solves a lot of authenticity problems, and largely it is platform agnostic, because it’s the content that is signed. verification is easy too, because you only need the public key portion of someone’s cryptographic keys.

so, for example, let’s say you’re making a video plea to someone and you want to assure them that your video is real and not generated by some stranger using AI. you could take the video file you created, digitally sign it, and distribute the two together. so when people download your video and signature, they can verify the two are authentic. or you could take a transcript of your video and digitally sign that instead. because that’s the content of your video, that should be fairly safe.
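the sign-then-verify flow above can be sketched in a few lines. this is just a minimal sketch, assuming python’s third-party `cryptography` package and an Ed25519 keypair; in practice you’d more likely use gpg with a detached signature, but the shape is the same: sign the file’s bytes with a private key, and anyone holding the public key can check them.

```python
# minimal sketch of signing a file's bytes and verifying the signature.
# assumes the third-party "cryptography" package; gpg detached signatures
# work the same way conceptually.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# the creator generates a keypair once and publishes the public half
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

# stand-in for the raw bytes of the video file
video_bytes = b"pretend this is the raw bytes of my video file"

# sign the content itself, so verification is platform agnostic
signature = private_key.sign(video_bytes)

# anyone with the public key can verify; verify() raises on mismatch
public_key.verify(signature, video_bytes)  # ok, no exception

# any tampering with the content breaks the signature
tampered = video_bytes + b" (deepfaked)"
try:
    public_key.verify(signature, tampered)
except InvalidSignature:
    print("tampered content detected")
```

note that the signature travels separately from the video, so it doesn’t matter which platform hosts the file.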

and the cryptography is sound; it’s the reason every website uses HTTPS to secure their communication. we use it every day already, but very few people do it themselves to authenticate their own content. maybe this will all become important again. who knows.

anyway, i ran out of steam writing this at the moment lol, but it’s weird how i used to be really, really into this stuff, and back then it just didn’t seem all that important. but here we are now. time is a flat circle. maybe i’ll start going to key-signing parties again.