- cross-posted to:
- videos@lemmy.world
- youshouldknow@lemmy.world
cross-posted from: https://sh.itjust.works/post/57126527
How to use data poisoning to trick the algorithm that’s profiling you (and why “personalization” is more fragile than you think)
Note: For education and defensive awareness only. I’m explaining the concept of data poisoning so teams can recognize risks and build safer systems. I’m not encouraging or providing guidance for misuse. :)
If you’re being tracked, scored, and predicted from your clicks… this is how the machine actually works (and how it breaks).
If a retailer can guess you’re pregnant before your family knows… imagine what ad platforms and recommendation feeds can infer about your money, your health, and your next life move from boring little signals you barely notice.
I’m Addie. I’ve spent 15 years in cybersecurity, and I teach cyber threats before they blindside you. In this vid, I break down the real mechanics behind prediction engines, why “scale” doesn’t protect models from manipulation, and how tiny amounts of poison in training data (or your own behavior) can make these systems confidently wrong.
Here’s what you’ll be able to do after this:
- Understand how behavioral profiling and predictive analytics pull "private truths" from normal shopping and scrolling
- Spot how personalized ads and recommendation systems build a story about you from clicks, watch time, and purchases
- Learn what data poisoning means (in plain English) and why it works at web scale
- See how an AI backdoor attack can hide in massive training sets without "breaking" accuracy
- Recognize why adtech and real-time bidding are fragile when signals get polluted by bots and noise
- Understand model collapse and what happens when AI training data becomes AI-generated sludge
- Start testing feedback loops safely so you can build hacking instincts without doing anything reckless

Sources:
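To make the "small amounts of poison" point concrete: none of this code is from the video; it's a minimal sketch of the idea. The nearest-centroid classifier, the 1-D synthetic data, and the 5% poison rate are all my own illustrative choices, but they show the mechanism the description gestures at: a handful of mislabeled training points can drag a model's learned representation far enough to flip predictions it previously got right.

```python
# Toy illustration of training-data poisoning (illustrative example, not from the video):
# a nearest-centroid classifier trained on 1-D points, where mislabeling a small,
# targeted slice of training data shifts a class centroid and flips predictions.
import random

random.seed(0)

def train_centroids(data):
    """Compute the mean feature value (centroid) for each label."""
    sums, counts = {}, {}
    for x, label in data:
        sums[label] = sums.get(label, 0.0) + x
        counts[label] = counts.get(label, 0) + 1
    return {label: sums[label] / counts[label] for label in sums}

def predict(centroids, x):
    """Assign x to the label whose centroid is nearest."""
    return min(centroids, key=lambda label: abs(centroids[label] - x))

# Clean training data: class "A" clusters near 0.0, class "B" near 1.0.
clean = [(random.gauss(0.0, 0.1), "A") for _ in range(100)] + \
        [(random.gauss(1.0, 0.1), "B") for _ in range(100)]

# Poison: just 10 extra points (5% of the set), all mislabeled "A" and placed
# far out near x = -3.0, dragging the "A" centroid away from its true cluster.
poison = [(random.gauss(-3.0, 0.1), "A") for _ in range(10)]

clean_model = train_centroids(clean)
poisoned_model = train_centroids(clean + poison)

# A point at 0.4 is closer to the clean "A" centroid (~0.0) than to "B" (~1.0)...
print(predict(clean_model, 0.4))      # "A" with clean training data
# ...but the poisoned "A" centroid has drifted to roughly -0.27, so the same
# point is now closer to "B": the prediction flips without touching "B" at all.
print(predict(poisoned_model, 0.4))   # "B" after poisoning
```

Real web-scale systems are vastly more complex, but the fragility is the same in kind: the model averages over whatever it's fed, so a small, deliberately placed fraction of bad signal can move the boundary.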
Could you just stop reposting this fucking useless video? 🙄
If you explain why it's useless, sure.
I explained it the last time I saw it posted.
Did you watch it yourself? I think it’s quite obvious. She doesn’t explain how you can poison your dataset. At best she’s explaining the concept and telling you how to make your ad-feed more “wholesome”.
Also, the script reeks of slop.
Ofc it's easily digestible, quick content to inform people. Clickbait? A bit, for sure. AI script? Who knows? While she didn't give an easy step-by-step guide, at least she explains why small amounts of poisoning may actually help break big models. And you can find more detailed info in the link in the description. https://arxiv.org/abs/2302.10149 https://arxiv.org/pdf/2302.10149
The next video is how to keep flies from landing in your mouth; it's going to be a banger, because I guess YouTube people have a real problem with it.
As well as anyone, I believe. Feel free to argue why.




