Breach, Bots & Deepfake Drama
12 January 2026

A total of 9 breach events were found and analysed, resulting in 1,860,834 exposed accounts containing a total of 21 different types of personal data. The breaches found publicly and freely available included 1M+ Valid USA Forex 1 Million, Aternos [2], Costco - Taiwan, Do Big GPT and Alain Afflelou. Sign in to view the full library of breach events, which includes, where available, reference articles relating to each breach.
Categories of Personal Data Discovered
Contact, Geolocation, Digital Behaviour, Finance, Sociodemographic, Technology, Commerce, National Identifiers, Health and Environment.
In the past week, our team uncovered 9 new data breach events exposing approximately 1,860,834 user accounts (yes, about 1.86 million people just got that sinking feeling). These leaks spilled a cocktail of 21 different types of personal data, from the usual name-rank-serial-number stuff to more sensitive tidbits. Notable breaches on the list include an ironically named “Valid USA Forex 1M” database, a hit on Aternos (popular with Minecraft server nerds), a leak from Costco Taiwan, something called “Do Big GPT” (apparently doing big data leaks), and a breach at French eyewear retailer Alain Afflelou.
If your organisation’s email domain pops up in these dumps, pay attention. Why? Because credentials from third-party breaches are hacker gold: attackers try those passwords on your corporate accounts faster than you can say “password reuse.” In fact, a staggering 88% of web app attacks involve stolen credentials[1]. That means an employee reusing their work password on, say, some forum that got breached is basically handing cybercriminals a master key to your network. The takeaway: use unique passwords (and throw multi-factor auth on ’em for good measure) so your company doesn’t star in the next “Big Breach” headline[2][1]. Remember, awareness is the first line of defense, and maybe also a good reason to poke your team about updating that 90-day-old password.
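If you want to act on that advice rather than just nod along, below is a minimal Python sketch of one practical check: testing whether a password already appears in known breach corpora via the public Have I Been Pwned “Pwned Passwords” range API. The endpoint and k-anonymity scheme are the publicly documented ones (not something from the breaches above); only the first five characters of the SHA-1 hash ever leave your machine.

```python
# Minimal sketch using the public Have I Been Pwned "range" endpoint.
# Only the first 5 hex chars of the SHA-1 hash are sent; the suffix is matched locally.
import hashlib
import urllib.request

def pwned_count(password: str) -> int:
    sha1 = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = sha1[:5], sha1[5:]
    req = urllib.request.Request(
        f"https://api.pwnedpasswords.com/range/{prefix}",
        headers={"User-Agent": "breach-digest-example"},  # courtesy header, not required
    )
    with urllib.request.urlopen(req) as resp:
        body = resp.read().decode("utf-8")
    for line in body.splitlines():          # each line: "<HASH-SUFFIX>:<count>"
        candidate, _, count = line.partition(":")
        if candidate.strip() == suffix:
            return int(count)
    return 0

if __name__ == "__main__":
    hits = pwned_count("Password123!")
    print("seen in breaches" if hits else "not found in breach corpora", hits)
```

A count above zero is a strong hint the password should be retired, ideally alongside MFA rather than instead of it.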
Cyber Spotlight, Oddities & Outrages This Week
Musk’s AI Gets a Dressing-Down (Literally): The UK is shaking a stern finger at Elon Musk again, this time over indecent AI-generated images on X. Musk’s new Grok AI image tool was caught generating naughty deepfakes, and Britain’s regulators absolutely lost the plot. In response to a government ultimatum, X jammed Grok’s image creator behind a paywall (figuring trolls are cheap, I guess)[3]. But officials say that’s not nearly enough: Claire Waxman, the UK’s Victims’ Commissioner, even declared that “X was no longer a safe space for women” on account of the flood of fake nudes[4]. Now, don’t get us wrong: non-consensual explicit images are serious trauma fuel that needs tackling. But one has to chuckle at lawmakers acting like this is some novel apocalypse; people have been making indecent images since the internet’s dial-up days. The difference? Now any joker with an AI can undress a photo of someone with a single click, which is disturbing on about 500 levels (especially when it targets minors, truly appalling). The UK is threatening to ban X in Britain if Musk doesn’t get a grip. Good luck, Elon: looks like “free speech absolutism” isn’t flying on this side of the pond. Lawmakers face a whack-a-mole game here: it’s not just X’s tool; all AI image generators can be misused for this. They’ll need more than a paywall and stern words to put this genie back in the bottle.

Your TV Is Spying (and Lying), Smile for the Camera: In “well, that’s creepy” news, five major TV manufacturers, including LG, Samsung, and TCL, got slapped with lawsuits in Texas for basically spying on you through your Smart TV. Turns out your TV isn’t just binge-watching with you; it’s binge-watching you. The practice is something called Automated Content Recognition (ACR), and it’s exactly as invasive as it sounds. Every few seconds, your smart TV takes a little screenshot of whatever you’re watching (yes, even if you’re gaming on a console over HDMI)[5]. It then generates a digital fingerprint of the scene and matches it against a database to identify the content[6], kind of like Shazam, but for breaching your privacy. So whether you’re streaming the latest hit series or playing old Mario Kart, Big Brother TV knows. Why do they do this? To track your viewing habits for ads and “recommendations,” of course[7][8]. The Texas Attorney General calls ACR an “uninvited, invisible digital invader”[9] and is dragging these companies to court for violating consumer protection laws. The juiciest part: many of these TVs made it ridiculously hard to opt out of the tracking. Some TCL models didn’t even have an opt-out at all, effectively forcing users to agree to surveillance in the fine print[10]. (Classic move: “By using this TV, you consent to us watching you back.”) The lawsuits argue that’s deceptive and unlawful. We’ll keep an eye on this one, ironically using much less intrusive methods than a peeping Smart TV.
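To make the “Shazam for your screen” comparison concrete, here’s a toy perceptual-fingerprint sketch in Python (a simple difference hash, or dHash). It is purely illustrative: the actual ACR pipelines, frame capture and reference databases the TVs use are proprietary and not described in the lawsuits. It just shows how a tiny downscaled frame can be reduced to a 64-bit fingerprint that survives small changes and can be matched against a database.

```python
# Toy dHash fingerprint: 1 bit per horizontal brightness gradient of a 9-wide by 8-tall
# grayscale frame. Real ACR systems differ in detail; this only illustrates the matching idea.
def dhash(gray_rows):
    """gray_rows: 8 rows of 9 pixel brightness values (0-255); returns a 64-bit int."""
    bits = 0
    for row in gray_rows:
        for left, right in zip(row, row[1:]):
            bits = (bits << 1) | (1 if left > right else 0)
    return bits

def hamming(a, b):
    """Number of differing bits; a small distance suggests the same content."""
    return bin(a ^ b).count("1")

# Two nearly identical downscaled frames yield nearby fingerprints (distance ~0),
# which is how a capture can be matched against a reference database entry.
frame_a = [[(x * 7 + y * 3) % 256 for x in range(9)] for y in range(8)]
frame_b = [[v + 2 for v in row] for row in frame_a]   # slight brightness shift
print(hamming(dhash(frame_a), dhash(frame_b)))
```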
Free VPN, Expensive Privacy: Time for our weekly lesson in “there’s no free lunch on the internet.” A hugely popular free VPN browser extension (with over 6 million Chrome users) turned out to be an absolute privacy dumpster fire. Researchers caught the extension, named Urban VPN, doing the very thing VPNs are supposed to prevent: harvesting users’ data, in this case, your AI chatbot conversations![11][12] That’s right, this browser plugin was quietly logging everything you typed into ChatGPT, Google’s Bard, Bing Chat, you name it, and sending it off to its parent company (a data broker) for profit. The kicker: Urban VPN had a 4.7-star rating and a shiny “Recommended” badge from Google, which sure aged poorly. Starting in an update last July, the extension began injecting scripts into every tab whenever you opened an AI chat, effectively acting like a man-in-the-middle, grabbing all your prompts and the AI’s answers, plus conversation IDs and timestamps[13]. And it didn’t matter if the VPN was “on” or “off”: the snooping was always on[14], with no opt-out (other than uninstalling). Urban VPN’s makers claimed they only collected data to “protect” users from unsafe AI content, like phishing links[15]. Uh-huh… interesting definition of “protect,” considering they vacuumed up entire conversations instead of, you know, just scanning for malicious links. This is a classic reminder that when a product is free, often you are the product[16]. So if you’re using a “free VPN” extension that promises the world, maybe give it a side-eye, or better yet, a prompt trip to the trash bin.
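If that story makes you want to audit your own browser, here is a rough Python sketch that walks locally installed Chrome extensions and flags any manifest requesting broad host access, the kind of permission that lets an extension read every tab (AI chats included). The extensions directory below is an assumption for Chrome’s default profile on Linux; adjust the path for your OS, browser and profile.

```python
# Rough audit sketch: flag installed Chrome extensions whose manifests request broad host access.
# Path is an assumption (Chrome default profile on Linux); adapt for macOS/Windows or other profiles.
import json
from pathlib import Path

EXT_DIR = Path.home() / ".config/google-chrome/Default/Extensions"
BROAD = {"<all_urls>", "*://*/*", "http://*/*", "https://*/*"}

for manifest_path in EXT_DIR.glob("*/*/manifest.json"):
    data = json.loads(manifest_path.read_text(encoding="utf-8-sig"))
    requested = set(data.get("permissions", [])) | set(data.get("host_permissions", []))
    broad_hits = requested & BROAD
    if broad_hits:
        # "name" may be a localisation placeholder like "__MSG_appName__"
        print(f"{data.get('name', manifest_path.parent)}: {sorted(broad_hits)}")
```

Broad host access isn’t proof of wrongdoing (ad blockers need it too), but it does tell you which extensions could read your chats if they ever turned hostile.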
Vulnerability Chat, Top 5 Exploits to Know About
Cybercriminals kicked off the new year popping vulnerabilities like it’s their birthday. Here are the five hottest vulnerabilities cybersecurity folks were chatting about last week, and why you should care:

HPE OneView RCE (CVE-2025-37164): A “perfect 10” critical bug in HPE’s OneView infrastructure management software was added to CISA’s exploited list, meaning attackers are actively abusing it[17]. This nasty code injection flaw lets an outsider execute arbitrary code on the OneView appliance, potentially taking over whatever it manages (think entire servers and data center gear). HPE rushed out a hotfix on Dec 17 after a researcher dropped a proof-of-concept exploit[18][19]. If your org uses OneView and hasn’t patched yet, assume breach and patch yesterday. An attacker with OneView access has the keys to your kingdom.
Ancient PowerPoint Ghost (CVE-2009-0556): No, that isn’t a typo: a Microsoft PowerPoint vulnerability from 2009 is also on CISA’s radar now[17]. It’s a code execution bug: a user merely opening a rigged PPT file lets an attacker run malware on their machine[20]. Microsoft patched this back when Obama was president, but apparently enough systems never got the memo (likely those on outdated Office versions) that attackers are still finding victims. Let this sink in: a 2009 bug is actively being used to hack people in 2026. Time to hunt down those old unpatched PCs hiding in your environment and finally lay them to rest.
“MongoBleed”, MongoDB Memory Leak (CVE-2025-14847): Dubbed “basically Heartbleed for MongoDB”[21][22], this high-severity flaw in the popular database is a data snooper’s dream. By sending malformed compressed data, an attacker can trick MongoDB into leaking chunks of its memory. That could include all sorts of sensitive info: user data, passwords, keys, whatever happens to be sitting in RAM[22]. A proof-of-concept hit GitHub over the holidays, and by New Year’s, reports of active exploitation surfaced[23][22]. The MongoDB team was quick to patch (fixes are available for all supported versions[24]). If you run a vulnerable MongoDB instance, upgrading is urgent. Otherwise, an attacker might silently bleed your data dry (pun intended). Can’t patch immediately? At least disable zlib compression on the server as a temporary mitigation[24][25].
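For quick triage across an estate, here is a hedged Python sketch (assuming the pymongo driver is installed) that flags servers still running pre-fix builds. The “fixed” version numbers in it are placeholders, not figures from this article, so substitute the patched releases listed in MongoDB’s advisory for CVE-2025-14847; the fallback, as noted above, is disabling zlib compression.

```python
# Hedged triage sketch: flag MongoDB servers below an assumed fix version.
# The ASSUMED_FIXED numbers are placeholders - replace them with the versions
# from MongoDB's advisory for CVE-2025-14847.
from pymongo import MongoClient

ASSUMED_FIXED = {6: (6, 0, 99), 7: (7, 0, 99), 8: (8, 0, 99)}  # placeholders per major version

def needs_patch(uri: str) -> bool:
    raw = MongoClient(uri, serverSelectionTimeoutMS=3000).server_info()["version"]
    version = tuple(int(p) for p in raw.split("-")[0].split(".")[:3])
    fixed = ASSUMED_FIXED.get(version[0])
    return fixed is not None and version < fixed

if __name__ == "__main__":
    uri = "mongodb://localhost:27017"  # point at the instance you want to check
    if needs_patch(uri):
        print("below assumed fix version: upgrade, or disable zlib compression as a stopgap")
    else:
        print("at or above the assumed fix version (or an unlisted major release)")
```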
WatchGuard Firebox Firewall 0-day (CVE-2025-32978): Owners of WatchGuard Firebox appliances had a rough week. A critical unauthenticated RCE in the Firebox’s VPN (IKEv2) module emerged, and attackers are already exploiting it in the wild[26][27]. This bug (CVSS 9.3) allows a complete takeover of affected firewall devices, effectively punching a hole straight through your perimeter defense[27]. WatchGuard rushed out an emergency patch and even published indicators of compromise (IOCs) since they’ve seen active attacks hitting unpatched boxes[28]. Pro tip: if you haven’t updated your firewall firmware, do it now. And if your Firebox was exposing its VPN or management interface to the internet, assume bad guys were knocking. This is another reminder that edge devices (VPNs, firewalls, routers) are prime targets: keep them updated and minimise what services you expose.
D-Link “DNSChanger” Router RCE (CVE-2026-0625): Old D-Link DSL routers don’t usually make news, until researchers find a critical command injection bug and see it being actively used by attackers[29][30]. This one is gnarly: an unauthenticated attacker can exploit a flaw in a router’s dnscfg.cgi script (used for DNS config) to run any shell commands they want[31]. In plain English: a remote attacker could completely hijack vulnerable routers, messing with your internet traffic or joining your network. Logs show hackers have been probing this bug since at least late November[30]. The affected models are legacy (several popular DSL router models from ~2016-2019) and many are end-of-life with no official fixes[32][33]. If by chance you have one of these units in service, replace it. Once a device goes EoL and unpatchable, it’s open season for attackers. This D-Link flaw is being leveraged in DNS hijacking campaigns[34][35], where attackers redirect all your web traffic to malicious servers. There’s no sugar-coating it: a vulnerable gateway means your entire network is at risk until you swap it out.
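One cheap smoke test for the DNS-hijacking angle: compare what your default resolver (usually the router) returns for a stable domain against a trusted public resolver. The Python sketch below assumes the dnspython package and uses Cloudflare’s 1.1.1.1 as the reference; CDN-backed domains legitimately return different answers from different vantage points, so treat a mismatch as a prompt to investigate, not proof of compromise.

```python
# Smoke test: do my default resolver (often the router) and a public resolver agree?
# Assumes the dnspython package (pip install dnspython).
import dns.resolver

def a_records(domain: str, nameserver: str = "") -> set[str]:
    resolver = dns.resolver.Resolver()
    if nameserver:
        resolver.nameservers = [nameserver]      # query a specific server instead of the default
    return {rr.to_text() for rr in resolver.resolve(domain, "A")}

if __name__ == "__main__":
    domain = "example.com"
    via_default = a_records(domain)              # whatever your router/ISP hands out
    via_public = a_records(domain, "1.1.1.1")    # Cloudflare's public resolver as a reference
    verdict = "resolvers agree" if via_default & via_public else "answers differ - worth a closer look"
    print(verdict, via_default, via_public)
```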
Information Privacy Headlines, Data & Privacy News Bites
States Push Privacy to 20/20: On January 1st, three more U.S. states (Indiana, Kentucky, and Rhode Island) activated new comprehensive consumer privacy laws, bringing the total to 20 states with their own data protection rules[36]. Companies now face a patchwork of requirements, from giving residents the right to access or delete their data, to limits on selling data without opt-out, and special rules for minors’ data. Notably, some existing state laws also got upgrades: e.g., Connecticut’s privacy law gained stricter rules on data minimisation and profiling, and Utah added a right for consumers to correct their info[37]. The trend is clear: privacy legislation in the US is full steam ahead, even without a federal law. If your business handles consumer data, it’s time to update that compliance checklist (and maybe your New Year’s resolution is to actually read those state privacy notices you’ve been putting off!).

EU Court Slaps Meta’s Hand (Again): Meta (Facebook’s parent) rang in 2026 with a massive legal defeat in Europe. Austria’s Supreme Court ruled Meta’s targeted ad model illegal under the GDPR, setting a precedent across the EU[38]. The case, originally launched by privacy activist Max Schrems, took 11 years to resolve and could force Meta to seriously overhaul how it monetises user data in the EU. The court decided Meta has been processing user info for personalised ads without valid consent, including grabbing data from third-party sites/apps and even inferring sensitive traits (like your politics or health status) to target ads[39][40]. Under the ruling, EU users can demand Meta hand over all their data and disclose who it’s shared with, within 14 days[41]. Meta also must stop using sensitive personal data for ads unless users explicitly opt in[40]. Oh, and if Meta doesn’t comply? The decision is enforceable EU-wide and could lead to hefty fines or even prison time for Meta’s executives (in theory) for non-compliance[42]. Meta, unsurprisingly, says the judgment is about “old practices” (they claim they no longer do some of these things) and is digging in on its stance that its current “pay or okay” ad consent approach is GDPR-compliant. Regardless, privacy regulators and companies alike are watching closely; this is a biggie that could reshape online advertising in Europe.
Meta Purges 550K Teen Accounts: Facing a world-first law in Australia that bans kids under 16 from social media, Meta shut down over half a million accounts believed to belong to under-16 users[43][44]. Australia’s “kids off social media” law took effect Dec 10 and threatens fines up to A$50 million (~US$33M) if platforms don’t comply[45]. Meta’s response: approximately 330k Instagram accounts, 173k Facebook accounts, and 40k Threads accounts of under-16s were wiped in Australia[43]. This move makes Australia the first democracy to attempt such a sweeping social media age ban, meant to curb harms to minors[46]. Meta isn’t thrilled: they publicly criticised the law, suggesting that strict age gating will just drive teens to less regulated apps (the “whack-a-mole” effect)[47]. They’d prefer standardised age verification across the industry instead of outright bans. It’s a bold experiment in online child safety; whether it meaningfully protects kids or just sends them to TikTok alternatives remains to be seen. Other countries are watching Australia as a potential blueprint (or cautionary tale) for youth online protection.
Global Crackdown on AI Deepfakes: Governments around the globe are turning up the heat on AI-generated fake images and videos, especially explicit or privacy-invading ones. The UK’s media regulator Ofcom opened an investigation into Elon Musk’s platform X (Twitter) after its new AI chatbot Grok produced a flood of sexualised “deepfake” images of women and even minors[48]. Officials are probing whether X failed its duty to protect users from illegal content. Over in Asia, Indonesia became the first country to outright block access to Grok, slamming the door on it over concerns about AI-generated pornographic content[49]. Meanwhile, Germany’s Justice Ministry announced it’s drafting measures to more effectively combat malicious deepfakes[50], aiming to give authorities legal tools to punish those who use AI to violate people’s personal rights (for example, fake nudes intended to humiliate someone). All this comes as calls grow worldwide for safeguards against AI abuse, from revenge porn deepfakes to AI-generated propaganda. The message to AI providers is clear: get your content moderation and safety rails in place, or regulators will do it for you (with bans and fines at the ready).
New Year Brings New AI Privacy Rules: A wave of AI-focused laws kicked in with 2026, highlighting the intersection of privacy and artificial intelligence. In New York, a novel law is now in effect requiring advertisers to disclose when images or video use “synthetic media” (AI-generated humans) in ads[51]. No more sneaking AI-created models or influencers into commercials without telling viewers: transparency is the goal, so people aren’t duped by an AI avatar posing as a real person. Over in California, a groundbreaking law (SB 243) just took effect regulating AI “companion” chatbots[52]. It forces companies offering human-like AI chat pals to clearly disclose that it’s AI (especially if the bot’s so realistic a user might think it’s human)[53]. They also have to put in safeguards for minors, e.g. preventing the AI from giving adult content to kids and periodically reminding young users “Hey, this isn’t a real person”[54]. Interestingly, the law even requires bot makers to monitor for signs of a user expressing suicidal thoughts and provide resources (the brave new world of AI mental health intervention)[54]. On the federal level, the U.S. administration issued an Executive Order aiming to preempt or challenge state AI laws that might go overboard and hinder innovation[55]. It sets up a task force to possibly fight certain state regulations (citing interstate commerce and free speech issues), a reminder that the tug-of-war between AI tech development and privacy/safety regulation is just beginning. Buckle up for a contentious year on that front.
[1] [2] Verizon DBIR 2025: Credentials Are Still #1 Threat
https://www.descope.com/blog/post/dbir-2025
[3] [4] Elon Musk’s X threatened with UK ban over wave of indecent AI images | Grok AI | The Guardian
https://www.theguardian.com/technology/2026/jan/09/musks-x-ordered-by-uk-government-to-tackle-wave-of-indecent-imagery-or-face-ban
[5] [10] Lawsuits Allege Smart TVs Spy on Texans Inside Their Homes - Texas Scorecard
https://texasscorecard.com/state/lawsuits-allege-smart-tvs-spy-on-texans-inside-their-homes/
[6] What TV Advertisers Need To Know About ACR In 2023 | AdExchanger
https://www.adexchanger.com/data-exchanges/what-tv-advertisers-need-to-know-about-acr-in-2023/
[7] Texas sues Smart TV Makers for Spying on People's Watch Habits
https://www.privacyguides.org/news/2025/12/24/texas-sues-smart-tv-makers-for-spying-on-peoples-watch-habits/
[8] [9] Texas sues TV makers for taking screenshots of what people watch
https://www.bleepingcomputer.com/news/security/texas-sues-tv-makers-for-spying-on-users-selling-data-without-consent/
[11] Chrome extension slurps up AI chats after users installed it for privacy | Malwarebytes
https://www.malwarebytes.com/blog/news/2025/12/chrome-extension-slurps-up-ai-chats-after-users-installed-it-for-privacy
[12] [13] [14] Browser Extension Harvests 8M Users’ AI Chatbot Data
https://www.darkreading.com/endpoint-security/chrome-extension-harvests-ai-chatbot-data
[15] [16] 8 Million Users' AI Conversations Sold for Profit by "Privacy" Extensions
https://www.koi.ai/blog/urban-vpn-browser-extension-ai-conversations-data-collection
[17] [18] [19] [20] CISA flags exploited Office relic alongside fresh HPE flaw • The Register
https://www.theregister.com/2026/01/08/cisa_oneview_powerpoint_bugs/
[21] [22] [23] [24] [25] 'Heartbleed of MongoDB' under active exploit • The Register
https://www.theregister.com/2025/12/30/mongodb_vuln_exploited_cve_2025_14847/
[26] [27] [28] Critical-rated WatchGuard Firebox flaw under active attack • The Register
https://www.theregister.com/2025/12/19/watchguard_firebox/
[29] [30] [31] [32] [33] [34] [35] Ongoing Attacks Exploiting Critical RCE Vulnerability in Legacy D-Link DSL Routers
https://thehackernews.com/2026/01/active-exploitation-hits-legacy-d-link.html
[36] [37] 2026 Year in Preview: U.S. Data, Privacy, and Cybersecurity Predictions | Wilson Sonsini
https://www.wsgr.com/en/insights/2026-year-in-preview-us-data-privacy-and-cybersecurity-predictions.html
[38] [39] [40] [41] [42] Austria's top court rules Meta's ad model illegal, orders overhaul of user data practices in EU | Reuters
https://www.reuters.com/sustainability/boards-policy-regulation/austrias-top-court-rules-metas-ad-model-illegal-orders-overhaul-user-data-2025-12-18/
[43] [44] [45] [46] [47] Meta shuts 550,000 accounts on Australia’s kids social media ban
https://www.chinadailyasia.com/hk/article/626877
[48] [49] [50] Data Privacy News | Today's Latest Stories | Reuters
https://www.reuters.com/legal/data-privacy/
[51] [52] [53] [54] [55] New Privacy, Data Protection and AI Laws in 2026 - Pearl Cohen
https://www.pearlcohen.com/new-privacy-data-protection-and-ai-laws-in-2026/