A small disappearance

A while back I noticed that a number of things had quietly stopped working. My YouTube channel went silent — uploads were technically there, but views and comments dried up to a degree that wasn’t normal variance. Posts on LinkedIn that linked to my own articles got vanishingly little reach, sometimes none at all. On Thingiverse, several routine actions started bouncing off a Cloudflare challenge page that no amount of patience or different browsers could clear.

Nothing told me why. There was no email, no warning, no community-strike notice, no banner in any dashboard. The two most plausible candidates I could think of were:

  1. A small steganography tool I had published on GitHub, and
  2. A batch of AI-assisted music I had been uploading.

But “plausible candidate” is not the same as “confirmed cause,” and that gap — between obvious downstream symptoms and totally invisible upstream rules — is the actual subject of this post.

I read four older research-style documents on this topic that had been written for me by an LLM. They were vivid and dramatic. They talked about an “Algorithmic Oubliette,” “Civil Death,” cross-product “contamination,” and a unified “Entity Sentiment Score” that follows you from a flagged repository to a ghosted YouTube comment to a spam-filtered email. Some of what those documents claimed is well-documented. A surprising amount of it isn’t, or is misattributed, or conflates several different platforms’ independent enforcement systems into one mega-conspiracy.

So this is the rewrite I wish I’d had when I first started looking. It is meant to be measured: every concrete claim about how a system works has a citation, and where the evidence runs out, I say so. The personal experience is the hook, not the proof.

What we actually know about the present-day Google

Google is, legally, a monopolist

This is no longer a contested point — it’s a court finding. In August 2024, U.S. District Judge Amit Mehta ruled in United States v. Google LLC that Google had violated Section 2 of the Sherman Act and held an illegal monopoly in general search and search text advertising.1 After a 15-day remedies trial in May 2025, Mehta issued a remedies opinion on 2 September 2025, followed by a finalized judgment on 5 December 2025.2

The remedies stopped short of the structural break-up the DOJ wanted (no forced sale of Chrome or Android), but they did:

  • Bar Google from entering exclusive distribution contracts for Search, Chrome, Google Assistant, and the Gemini app for six years;
  • Compel Google to share parts of its search index and aggregated user-interaction data with qualified competitors;
  • Extend the conduct rules explicitly to Google’s GenAI products so the same monopoly playbook can’t be re-used in AI search.3 4

Both sides have appealed. As of early 2026 the matter is heading to the D.C. Circuit Court of Appeals.5

So when older write-ups call Google a “monopolist in search,” they are not editorializing — they are quoting the operative legal finding.

The EU is investigating Google’s “site reputation abuse” policy

The European Commission opened a formal Digital Markets Act investigation on 13 November 2025 into Google’s site reputation abuse policy. The Commission’s preliminary monitoring suggested Google was demoting news and other publishers in Search results when those publishers carried content from commercial partners (sponsored articles, branded hubs, affiliate sections). If the policy is found to breach the DMA’s fair-and-non-discriminatory access rules, fines can reach 10 % of Alphabet’s global turnover, or 20 % for repeat infringement.6 7

This matters for the broader story because it’s an antitrust regulator acknowledging, in writing, that Google’s quality / spam policies can function as ranking penalties that hurt third parties even when those third parties believe they are doing nothing wrong. That’s the same general shape as an individual creator getting demoted by a policy they don’t know they tripped — just at the level of news publishers.

It is also worth noting Google’s own response: the company called the investigation “misguided” and warned it could “reward bad actors who rely on parasitic content strategies.”8 You can read the policy itself in two ways at the same time — as a legitimate anti-spam tool, and as a strong demotion lever wielded with limited transparency.

Google Safe Browsing protects billions of devices — and produces a lot of false positives

Google Safe Browsing is real and large. Google’s own published figures put it at more than 5 billion devices protected and on the order of 10 billion URLs and files scanned per day,9 and the system increasingly relies on machine-learning classifiers rather than static blocklists.

The policy categories matter for understanding false positives. Safe Browsing covers Malware, Unwanted Software, and Social Engineering. The third category — “social engineering / deceptive sites” — is the one most prone to false positives because the heuristic looks at structural and visual patterns: a login form on a site whose styling resembles a major brand, for instance.10

The result is that GitHub Pages users routinely report being flagged. The pattern is well-documented in the GitHub community discussions:

  • Sites containing nothing but a “Hello World” HTML file get flagged when published from accounts that have a previous Safe Browsing history.11
  • Educational projects that mimic the look of well-known sites (Netflix clone, login portals) regularly trigger phishing classification.12
  • Because GitHub Pages is path-routed under username.github.io, a flag on one repository can affect the visibility of every other repository that user hosts.10

GitHub’s own community moderators acknowledge this, and the recommended fixes (request a Safe Browsing review, rename slugs, deploy to a different provider as a control) are workarounds, not solutions to the underlying classifier behavior.
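The path-routing point is easy to demonstrate for yourself. A minimal sketch (the username and repository names are invented):

```python
from urllib.parse import urlparse

# Hypothetical project-page URLs for one GitHub Pages user. Every project
# site lives under the same hostname, differing only by path, so a
# host-level Safe Browsing flag covers all of them at once.
pages = [
    "https://example-user.github.io/stego-tool/",
    "https://example-user.github.io/blog/",
    "https://example-user.github.io/notes/",
]

hosts = {urlparse(u).hostname for u in pages}
print(hosts)  # one shared hostname for three separate repositories
```

Three repositories, one hostname: if the classifier flags the host, every project page inherits the warning.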

What this doesn’t establish — and what the older write-ups overclaim — is that this Safe Browsing flag automatically propagates as a “negative trust score” to your Gmail reputation, your YouTube comments, your Ads account, and so on. Some of that integration exists in narrow cases (a domain on the Safe Browsing blocklist will get a “Deceptive site ahead” interstitial in Chrome, and the same data feeds the Web Risk API used by other tools). But there is no public, auditable architecture that says “GitHub flag → cascading penalties across the entire Google account graph.” That is inferred, not documented.
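For what it’s worth, the public half of this system can be queried directly. The Safe Browsing Lookup API (v4) takes a request shaped like the one below; you POST it to the `threatMatches:find` endpoint with an API key, and an empty `matches` object in the response means the URL is not currently flagged. The client ID and URL here are placeholders:

```python
import json

# The body below would be POSTed here, with ?key=<your API key> appended.
API_ENDPOINT = "https://safebrowsing.googleapis.com/v4/threatMatches:find"

def build_lookup_request(urls, client_id="my-checker", client_version="0.1"):
    """Build the JSON body for a Safe Browsing v4 Lookup API query.

    The three threat types mirror the policy categories discussed above.
    """
    return {
        "client": {"clientId": client_id, "clientVersion": client_version},
        "threatInfo": {
            "threatTypes": ["MALWARE", "SOCIAL_ENGINEERING", "UNWANTED_SOFTWARE"],
            "platformTypes": ["ANY_PLATFORM"],
            "threatEntryTypes": ["URL"],
            "threatEntries": [{"url": u} for u in urls],
        },
    }

body = build_lookup_request(["https://example-user.github.io/stego-tool/"])
print(json.dumps(body, indent=2))
```

Checking your own pages this way at least tells you whether a flag exists; it still tells you nothing about why.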

reCAPTCHA v3 really does behavioral profiling — and there’s a real privacy debate about it

reCAPTCHA v3 calculates a risk score between 0.0 and 1.0 for each interaction, derived from the user’s behavior, device signals, browser fingerprint, and (importantly) any active Google Account session.13 The system is invisible by default — there is no checkbox or puzzle — and the risk score is what tells the host site whether to allow the action.

This is not just industry chatter. The peer-reviewed and gray literature is pretty clear:

  • A 2024 paper presented at USENIX Security argued that reCAPTCHA v2 should be abandoned outright; the same researchers note that v3 “is purely behavioral … privacy-invasive, since we (the public) don’t know how it works. It’s essentially a ‘black box.’”14
  • A 2019 reinforcement-learning attack defeated reCAPTCHA v3’s behavior-based challenges roughly 97 % of the time — an awkward result for a system whose privacy cost is justified by its security value.14
  • Wikipedia’s own summary, well-sourced, notes that the system “favors those who have an active Google account login, and displays a higher risk towards those using anonymizing proxies and VPN services.”15 Cloudflare publicly switched away from reCAPTCHA in April 2020, citing privacy concerns.15
  • Comparative studies confirm that v3’s “vast amount of tracking” is the central trade-off of the design.16

So the privacy critique is real. The maximalist version of the claim — that reCAPTCHA v3 is silently linking your “anonymous” forum posts to your Gmail identity using gait analysis from your phone’s accelerometer — is something else: it is a leap from documented capabilities to inferred deployments, and I haven’t found any independent technical confirmation that this specific cross-platform de-anonymization actually happens at scale.

DMCA notices do affect ranking — Google has said so

Here is one place where the dramatic version is closer to the truth than the conservative one. Google publicly said back in 2012 that “the number of valid copyright removal notices we receive for any given site” can be used as a ranking signal, and that the policy was specifically meant to help users find legitimate sources of content.17

The public paper trail runs through Lumen. Google ingests an enormous volume of DMCA takedown notices (roughly 75 million per month at the time of the 2016 transparency report) and forwards copies of those notices to the Lumen database for transparency.18 When Search delists a result over a copyright complaint, the SERP often shows a footer linking to the Lumen entry.

Two things are true about this at the same time:

  1. The system is actively abused. Lumen’s own researchers documented an organized scheme of more than 33,000 DMCA notices using back-dated “fake originals” to try to delist legitimate articles. In that particular set, 99.2 % of the abusive takedowns failed — but the 0.8 % that succeeded still removed roughly 300 legitimate URLs from search.19
  2. Once a notice is filed, the notice itself (sender, recipient, complained-of URL) sits in Lumen indefinitely. Lumen redacts only specific fields (URLs are truncated to top-level domain in public view since 2019).20

So, yes — DMCA notices can demote you, and the public record of them is durable. What’s not documented is the further claim that Lumen entries are pulled into a unified “person-entity reputation score” that propagates through unrelated Google products. That step is inferred.

The “violation by associated account” policy is real

This one tends to surprise people. Google’s enforcement explicitly contemplates suspending one account because of activity on another account that Google has linked to the same person, often without telling you which other account or how the link was made.21

The way this surfaces in practice is well-described in user reports: a suspension notice citing “Violation by associated account”, no disclosure of which other account triggered it, and no specific remediation path beyond the standard appeal form. Google’s own Cloud documentation acknowledges the cross-product reach: a suspended Workspace user “can keep using Drive, Calendar, and other Google services” in some cases, but not in others; suspended Cloud projects shut down all attached workloads; and Cloud Billing suspensions cascade to every resource attached to that billing account.22 23

What this confirms: cross-product enforcement is a real, documented policy mechanism. What it does not confirm: that it operates through a sci-fi-grade “Entity Sentiment Score” derived from your stylistic fingerprint and gait data. Most known cases trace back to mundane signals like shared payment instruments, shared phone numbers, shared recovery emails, or repeated logins from the same device fingerprint.

YouTube did, in fact, change the rules under AI-music creators in 2025

This one is directly relevant to my own situation, and it’s worth being precise about, because it cuts cleanly through some of the more conspiratorial framings.

In July 2025, YouTube updated its monetization policies to explicitly target what it now calls “inauthentic content” (the renamed and clarified successor to the older “repetitious content” rule). In 2026, the “reused content” policy was updated again to explicitly cover AI audio. The result, as documented by industry trackers:

  • Channels publishing pure AI-music dumps (especially generic instrumentals from a single tool with formulaic titling, low watch-time-per-video, and high upload cadence) became primary enforcement targets.24 25
  • According to one industry estimate, roughly 40 % of pure-AI-music channels have been disqualified from the Partner Program or had monetization suspended since late 2025.24
  • YouTube’s own July 2025 policy guidance warns that music “without clear human input” may face “limited reach, blocked monetisation, or takedowns.”26
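To make the targeting concrete, here is a toy version of the kind of heuristic those signals imply. The thresholds and title prefixes are invented for illustration; YouTube’s actual classifier is not public and is certainly more sophisticated:

```python
def looks_like_low_effort_dump(videos, uploads_per_week):
    """Toy score combining the enforcement signals industry trackers describe:
    formulaic titling, low watch time per video, high upload cadence."""
    if not videos:
        return False
    avg_watch = sum(v["avg_watch_minutes"] for v in videos) / len(videos)
    formulaic = sum(
        v["title"].lower().startswith(("lofi", "relaxing", "ambient"))
        for v in videos
    ) / len(videos)
    return uploads_per_week > 7 and avg_watch < 1.0 and formulaic > 0.8

channel = [
    {"title": "Lofi Beats to Study To #214", "avg_watch_minutes": 0.4},
    {"title": "Relaxing Rain Piano Vol. 87", "avg_watch_minutes": 0.6},
    {"title": "Ambient Sleep Drones 10 Hours", "avg_watch_minutes": 0.3},
]
print(looks_like_low_effort_dump(channel, uploads_per_week=20))  # True
```

The uncomfortable part is how cheaply such a classifier separates the targeted channels from everything else, which is presumably why enforcement could scale so quickly once the policy landed.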

Limited reach without an explicit notice is, operationally, what most creators experience as a “shadow ban.” It isn’t called that in the policy, but the symptom — uploads existing and being technically searchable while receiving very little algorithmic distribution — is exactly what the policy is designed to produce for content YouTube classifies as low-effort.

If your channel was an AI-music channel, this is by far the most likely explanation. Not a sentiment score, not a steganography tool, not a Lumen entry. A specific policy, narrowly targeting your specific content type, rolled out at a specific time.

What probably wasn’t Google at all

This is the part I think the older documents got most seriously wrong: they collapsed several genuinely independent platform reputation systems into a single Google-shaped explanation. A clearer mapping of my own three symptoms looks like this:

Cloudflare blocks (the Thingiverse case) are usually not Google

Cloudflare runs its own threat-intelligence system and IP reputation database. When Cloudflare challenges or blocks you, the most common reasons are: your IP previously hosted abusive traffic, your IP shows bot-like patterns to Cloudflare’s WAF, you’re on a residential range that recently had infected hosts, or the site owner has set custom rules.27 28

Cloudflare’s documentation is explicit that “Cloudflare doesn’t reveal information about why an IP is getting blacklisted,” but the usual suspects are malware on a network, an IP previously used by abusive infrastructure (the negative reputation persists), or even neighboring IPs in the same range dragging down the whole block.28

None of that is Google. The fact that Thingiverse uses Cloudflare doesn’t import Google’s view of you into Cloudflare’s enforcement. They are independent systems running on independent data. This was a useful realization for me, and it should be useful for anyone else who has been quietly told by an LLM that Google has somehow seized control of every CDN on the internet.

LinkedIn link demotion is a LinkedIn decision

LinkedIn’s product team has gone back and forth publicly on whether posts containing external links are penalized. As of late 2025 a senior product director publicly denied an intentional algorithmic penalty for links.29 But the practical experience of marketers and the industry’s empirical write-ups remains that posts with external links underperform link-free posts by a substantial margin, with one widely cited estimate of a 25–35 % reach reduction.30

Whether this is an explicit penalty, a downstream effect of dwell-time-on-platform optimization, or simple algorithmic preference for native content, the salient point is the same: it’s a LinkedIn product decision, not a Google one.

My GitHub steganography tool: a plausible Safe Browsing trigger, not a verified one

Here I have to be honest about the limits of what I can prove. Stegomalware — malware that hides payloads inside images or other media — is an active, well-documented threat category. Wikipedia’s overview, sourced from peer-reviewed and law-enforcement work, summarizes the field well: the EU-funded SIMARGL project under Horizon 2020 specifically built ML detection systems for this; Europol’s CUIng (Criminal Use of Information Hiding) initiative was founded in 2016 and now spans dozens of countries.31

Tools and code that demonstrate steganography techniques live in a difficult space. They are used by security researchers, CTF players, and academic projects, but the same techniques are documented as being in active criminal use. So it’s not implausible that an automated heuristic would flag a steganography utility — especially if it contains compiled binaries — and propagate that flag through Safe Browsing’s Web Risk API. But “plausible” is not “confirmed.”

What I could not find is any public mechanism by which a GitHub repository’s Safe Browsing classification visibly demotes its author across other Google products. The propagation may exist; it isn’t documented in a way I can cite.

The honest synthesis

Here is the picture I’m now comfortable defending:

The verifiable parts of the original framing are real and serious. Google is, by court finding, a monopolist in general search and search advertising. It runs an enforcement infrastructure (Safe Browsing) that is large enough to make false-positive collateral damage statistically inevitable. It uses DMCA takedown volume as a ranking signal and has said so publicly. It enforces policy across associated accounts and across products in ways that are hard to predict and harder to appeal. It runs reCAPTCHA v3 as a behavioral profiling system whose privacy properties are reasonably criticized by the academic community. It updated its YouTube monetization rules in July 2025 in a way that significantly demoted AI-music content with limited human input.

The dramatic parts of the original framing — the “Algorithmic Oubliette,” the “Trinity of Binding,” the unified Entity Sentiment Score, the gait-and-stylometry-driven civil death — are mostly inferential extensions, not documented architecture. Stylometric authorship attribution works in the lab on short texts, even on tweets,32 33 but I have not found credible technical documentation that Google operationally uses it to link your Reddit pseudonym to your Gmail account in real time. It’s not that this is impossible — capabilities exist; deployments are unverified.

Several of the symptoms in my own case turn out not to be Google at all. Cloudflare runs an independent reputation system. LinkedIn runs its own algorithm. The fact that things go wrong simultaneously across three different services tends to feel like coordinated suppression but is more parsimoniously explained as three different enforcement systems making independent calls about something I did — for me, plausibly the AI-music uploads (clearly inside YouTube’s July 2025 enforcement scope) and the steganography tool (a known false-positive trigger for Safe Browsing classifiers).

There is no central appeals process for the modern platform stack. This is the deepest structural problem, and it has nothing to do with whether the conspiratorial framing is true. Even if every individual decision is locally reasonable, the user who is hit by simultaneous independent decisions has no map of the territory and no single counterparty to argue with. The DSA in Europe is meant to address some of this through transparency obligations, and the DMA investigation into “site reputation abuse” is testing one slice of it for news publishers,6 but as an individual creator you are still arguing with seven black boxes at once, each of which can plausibly say “the others did it.”

What I think a careful reader should take away

A few practical positions I now hold, after sorting through all of this:

The first is epistemic. When something invisible happens to you in the platform stack, the natural temptation is to construct a unified theory of who did it and why. Resist this. The platforms are mostly independent, their classifiers are mostly opaque, and the most economical explanation is often that you tripped a specific narrow rule on one specific platform in a way that has obvious-in-hindsight overlap with a totally separate rule on a totally separate platform.

The second is operational. If the platforms ever publish a policy change in your specific area of activity (AI-generated music, steganography, browser-fingerprinting research, scraping, anything that lives near a moderation boundary), assume that change is now your most likely cause for any subsequent mysterious enforcement on that platform. Read the policy carefully before constructing more elaborate theories.

The third is structural. The DOJ ruling, the DMA investigation, and the documented pattern of cross-account suspensions without explanation all point at the same underlying problem: the gap between what platforms can do and what they are required to explain. That gap is where the dramatic framings get their oxygen, and it’s also where the actual policy fight is — much more than in any individual creator’s experience of going dark.

The fourth, and the only one that’s really about me: document everything. Screenshots, dates, exact error text, the URL and the response body of every block, when YouTube last surfaced a video versus when it stopped, every email — even if no one ever reads them, having a real timeline of facts is what lets you tell a measured story like this one instead of a dramatic one. The dramatic story is more fun. The measured story is the one that will hold up if you want to do anything useful with it later.
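A minimal append-only log is enough for this. A sketch (the filename is arbitrary):

```python
import datetime
import json
import pathlib

LOG = pathlib.Path("enforcement-timeline.jsonl")

def record_incident(platform, symptom, detail):
    """Append one timestamped observation to an append-only JSONL timeline."""
    entry = {
        "ts": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "platform": platform,
        "symptom": symptom,
        "detail": detail,
    }
    with LOG.open("a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")
    return entry

record_incident(
    "thingiverse",
    "cloudflare-challenge",
    "403 with cf-mitigated: challenge on every action, all browsers",
)
```

One line per observation, never edited afterward. The point is not the tooling; it is that a timestamp recorded on the day beats a memory reconstructed six months later.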


A note on the original documents

This piece replaces four older LLM-generated research-style documents on the same theme. I want to flag the most important corrections so that the original framings don’t keep circulating uncorrected.

The original documents repeatedly cite U.S. Patent 11516223B2 (“Secure personalized trust-based messages classification system and method”) as a Google patent describing stylometric authentication. The patent is real and the description is roughly accurate, but the assignee is Safehouse Signals Inc., not Google. The fact that the patent is hosted on patents.google.com (which indexes essentially every U.S. patent) was conflated with Google authorship. This matters because the patent is then used to support claims about Google’s specific operational capabilities that the patent itself does not establish.

The original documents talk about an “Entity Sentiment Score” that propagates negative signals from a flagged GitHub repository to YouTube comment ghosting and Gmail spam routing. Several real Google patents describe author-reputation systems (notably US 8150842B2 from the now-defunct Google Authorship / rel=author program) but none establish that Google currently runs a unified cross-product reputation score driven by the kinds of inputs the documents describe. It is plausible. It is not documented.

The original documents describe “Civil Death” through Google as a near-totalizing loss of digital identity. The cross-product enforcement is real (the “violation by associated account” policy is documented), but the dramatic framing skips over the much more banal reality that most documented suspensions trace to identifiable signals (shared payment instruments, recovery emails, device fingerprints, phone numbers) rather than to a unified sentiment graph.

The original documents attribute Cloudflare-mediated blocks (Thingiverse and similar) to Google’s reach. Cloudflare runs its own threat-intelligence and IP-reputation systems, independent of Google’s Safe Browsing. Conflating them obscures the actual mechanism and makes the problem feel more centralized than it is.

I leave the original documents untouched in my own archive as an example of the genre — they’re a useful artifact of how a sophisticated-sounding LLM output can mix verifiable facts, real patents, real court cases, and dramatic interpretive scaffolding into a single document where the reader has no easy way to tell which is which.


Sources

  1. U.S. Department of Justice, “Department of Justice Wins Significant Remedies Against Google” (press release, Sept. 2 2025), summarizing the August 2024 liability ruling — https://www.justice.gov/opa/pr/department-justice-wins-significant-remedies-against-google

  2. CNBC, “Judge finalizes remedies in Google antitrust case” (Dec. 5 2025) — https://www.cnbc.com/2025/12/05/judge-finalize-remedies-in-google-antitrust-case.html

  3. Winston & Strawn, “Antitrust Remedies in United States v. Google: AI and the Evolving Search Market” — https://www.winston.com/en/blogs-and-podcasts/competition-corner/antitrust-remedies-in-united-states-v-google-ai-and-the-evolving-search-market

  4. National Taxpayers Union, “Google Antitrust Ruling: Key Takeaways from the District Court’s Decision” — https://www.ntu.org/publications/detail/google-antitrust-ruling-key-takeaways-from-the-district-courts-decision

  5. United States v. Google LLC (2020), Wikipedia (current as of early 2026) — https://en.wikipedia.org/wiki/United_States_v._Google_LLC_(2020)

  6. European Commission, “Commission opens investigation into potential Digital Markets Act breach by Google in demoting media publishers’ content in search results” (Nov. 13 2025) — https://digital-strategy.ec.europa.eu/en/news/commission-opens-investigation-potential-digital-markets-act-breach-google-demoting-media

  7. European Commission, official DMA proceedings page — https://digital-markets-act.ec.europa.eu/commission-opens-investigation-potential-digital-markets-act-breach-google-demoting-media-publishers-2025-11-13_en

  8. EUobserver, “EU probes Google for punitive demotions of news rankings” (Nov. 13 2025) — https://euobserver.com/rule-of-law/ar40b4cc11

  9. Google Blog, “Defending 1 billion Chrome users with Enhanced Protection” — https://blog.google/products/chrome/google-chrome-safe-browsing-one-billion-users/

  10. GitHub Community, “GitHub Pages: Site Flagged? Here’s What to Know” (Discussion #175572) — https://github.com/orgs/community/discussions/175572

  11. GitHub Community, “GitHub Pages site always shows browser phishing warning, even with clean HTML/CSS” (Discussion #169269) — https://github.com/orgs/community/discussions/169269

  12. GitHub Community, “Deceptive site ahead” (Discussion #57690) — https://github.com/orgs/community/discussions/57690

  13. Friendly Captcha, “reCAPTCHA v3 Guide” — https://friendlycaptcha.com/insights/recaptcha-v3/

  14. The Register, “Google’s reCAPTCHA v2 just labor exploitation, boffins say” (July 2024), summarizing Searles, Prapty & Tsudik, “Dazed & Confused: A Large-Scale Real-World User Study of reCAPTCHA v2” — https://www.theregister.com/2024/07/24/googles_recaptchav2_labor/

  15. reCAPTCHA, Wikipedia — https://en.wikipedia.org/wiki/ReCAPTCHA

  16. “A Comparative Analysis of the Effectiveness of Recaptcha V3 against Recaptcha V2, Hidden Fields, and Other Anti-Spam Techniques” (2024) — https://www.researchgate.net/publication/386742491

  17. Zeo, “What to do with DMCA Notices in SEO Efforts?” — discusses Google’s 2012 statement that the volume of copyright notices is a ranking signal — https://zeo.org/resources/blog/what-to-do-with-dmca-notices-in-seo-efforts

  18. Lumen Database Team, “Evolution of DMCA Notices: Trends and a Timeline” — https://lumendatabase-org.medium.com/evolution-of-dmca-notices-trends-and-a-timeline-32636581af01

  19. Lumen Database Team, “Over thirty thousand DMCA notices reveal an organized attempt to abuse copyright law” — https://lumendatabase-org.medium.com/over-thirty-thousand-dmca-notices-reveal-an-organized-attempt-to-abuse-copyright-law-9aa7c07a2ccc

  20. Lumen (website), Wikipedia — https://en.wikipedia.org/wiki/Lumen_(website)

  21. User reports of Google’s “Violation by associated account” suspensions, Quora — https://www.quora.com/Google-suspended-my-account-for-reason-saying-Violation-by-associated-Account-when-I-dont-have-any-violation-What-options-do-I-have-now-when-I-asked-them-to-share-about-an-associated-account-that-caused-the-problem

  22. Google Cloud, “Project suspension guidelines” — https://cloud.google.com/resource-manager/docs/project-suspension-guidelines

  23. Google Workspace Help, “Restore a suspended Gmail account” — https://knowledge.workspace.google.com/admin/users/restore-a-suspended-gmail-account

  24. OutlierKit Resources, “AI-Generated Music on YouTube: Monetization Policy, RPM, and Which Niches Still Work in 2026” — https://outlierkit.com/resources/ai-generated-music-youtube-monetization-2026/

  25. MilX, “AI slop on YouTube and how to protect your channel in 2026” — https://milx.app/en/news/why-youtube-just-suspended-thousands-of-ai-channels-and-how-to-protect-yours

  26. Silverman Sound (Shane Ivers), “AI Music Copyright: Legal Risks Content Creators Must Know (2026),” summarizing YouTube’s July 2025 policy update — https://www.silvermansound.com/ai-music-copyright-legal-risks-content-creators

  27. WhatIsMyIPAddress, “What to Do When Cloudflare is Blocking You” — https://whatismyipaddress.com/cloudflare-blocking-me

  28. Cloudflare Community, “Getting blocked on all cloudflare servers” — https://community.cloudflare.com/t/getting-blocked-on-all-cloudflare-servers/392198

  29. LinkedIn (via Threads), Sr. Director of Product Management statement that posts with links are not intentionally penalized for reach — https://www.threads.com/@mattnavarra/post/DOWa_61Cown/

  30. Umbrella Local, “Avoiding LinkedIn’s Outbound Link Penalties” — discusses the empirical 25–35 % reach reduction observed for link-containing posts — https://umbrellalocal.com/avoiding-linkedins-outbound-link-penalties/

  31. Stegomalware, Wikipedia — overview of how steganography is used by malware in the wild, including the Europol CUIng initiative and the EU SIMARGL project — https://en.wikipedia.org/wiki/Stegomalware

  32. “Stylometric Analysis for Authorship Attribution on Twitter,” Springer — https://link.springer.com/chapter/10.1007/978-3-319-03689-2_3

  33. “Writer Identification Using Microblogging Texts for Social Media Forensics,” arXiv — https://arxiv.org/pdf/2008.01533