Since early 2021, threat actors have orchestrated a highly scalable campaign that exploits the popularity and trust of the video-sharing platform YouTube to deliver infostealer malware. According to Check Point’s research, this campaign, dubbed the YouTube Ghost Network, has uploaded more than 3,000 malicious videos to date, with activity tripling in 2025 alone.
The modus operandi of the network relies on three coordinated roles within compromised or created YouTube channels:
- Video-accounts upload tutorial-style or cheat-software-themed videos that point to download links.
- Post-accounts publish community updates and redirect users to malicious archives stored on services such as MediaFire, Dropbox or Google Drive.
- Interact-accounts artificially boost engagement metrics (likes, comments, views) to lend legitimacy to the malicious content. One researcher described the strategy as “taking advantage of trust signals including views, likes and comments” to make malicious content appear benign.
How the infection chain works
Users searching for game cheats, cracked software or other popular tools often land on videos that mimic legitimate walkthroughs. The descriptions or pinned comments include links, often obfuscated via URL shorteners, that lead to archive files or installers. Once a user downloads and executes the payload, malware families such as Lumma Stealer, Rhadamanthys Stealer or RedLine Stealer are deployed.
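As a rough illustration of how this pattern can be triaged, the sketch below flags description links that either hide their destination behind a URL shortener or point directly at an archive on a file-hosting service. The domain and extension lists are assumptions for the example, not an exhaustive blocklist; tune them to your own threat intelligence.

```python
from urllib.parse import urlparse

# Hypothetical indicator lists -- replace with your own threat-intel feeds.
SHORTENERS = {"bit.ly", "tinyurl.com", "rb.gy", "cutt.ly"}
FILE_HOSTS = {"mediafire.com", "dropbox.com", "drive.google.com"}
ARCHIVE_EXTS = (".zip", ".rar", ".7z", ".exe")

def classify_link(url: str) -> str:
    """Return a coarse risk label for a link found in a video description."""
    parsed = urlparse(url)
    host = parsed.netloc.lower().removeprefix("www.")
    if host in SHORTENERS:
        return "suspicious: shortened URL hides the real destination"
    if host in FILE_HOSTS and parsed.path.lower().endswith(ARCHIVE_EXTS):
        return "suspicious: direct archive download from a file host"
    return "unclassified"
```

A real deployment would also resolve shortened URLs in a sandbox and inspect the final landing page, since classifying on the visible link alone misses multi-hop redirects.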
In one example channel with nearly 10,000 subscribers, the actor replaced legitimate content with a malware-distribution video advertising free crypto-software; that channel had been compromised for over a year before the malicious uploads began.
Why this campaign stands out
Unlike one-off phishing or malware push campaigns, the Ghost Network is resilient. It partitions roles so that banned accounts can be replaced without disrupting the operation. The attackers maintain modular infrastructure that can adapt quickly even after takedowns.
Moreover, by embedding themselves in YouTube’s ecosystem of high-trust content (tutorials, game cheats, tool walkthroughs), the attackers exploit cognitive trust: users expect video tutorials to be helpful. That expectation becomes the attack surface. In fact, many videos reached between 147,000 and 293,000 views before removal.
Implications for security professionals
For security teams, this network exposes several risk vectors:
Firstly, content platforms like YouTube are being weaponised for malware delivery, not just drive-by downloads. Teams must treat video links and description entries as potential threat vectors.
Secondly, the masquerade of legitimate software (pirated, cracked tools) remains a consistent lure. Even tech-savvy users can fall prey when trust signals are high.
Thirdly, the engagement-boosting mechanism means malicious content might reside on seemingly legitimate or popular channels. The presence of high view-counts or engagement may reduce suspicion rather than increase it.
Finally, the infrastructure is modular and resilient, which means takedown alone is unlikely to fully mitigate risk. Security teams must implement monitoring and response strategies that assume the attacker will regenerate assets.
Mitigation strategies
Security teams should employ the following controls:
- Monitor inbound links from high-traffic video platforms. When a user downloads software promoted on YouTube, inspect the source of the installer, verify code signing, and treat untrusted archives with caution.
- Use endpoint detection and threat-intelligence feeds to track stealer families like Lumma, Rhadamanthys and RedLine. Correlate telemetry with suspicious downloads flagged via YouTube links.
- Raise awareness among users that not all high-view or high-engagement videos are safe. Specifically, discourage downloading software from descriptions or pinned comments of tutorial-style videos.
- Collaborate with video platforms to flag large-scale campaigns that exhibit fake engagement or unusual upload patterns (e.g., rapid substitution of content in compromised channels).
- Have incident response plans ready for stealer infections, since account hijacks, credential theft and lateral movement typically follow.
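The telemetry-correlation control above could be sketched as a simple join between proxy/EDR download events and a watchlist of video-platform referrers. The event schema and field names here are hypothetical; map them onto whatever your EDR or proxy logs actually expose.

```python
from dataclasses import dataclass

@dataclass
class DownloadEvent:
    # Hypothetical telemetry schema -- adapt to your EDR/proxy log fields.
    host: str          # endpoint hostname
    referrer: str      # HTTP referrer observed for the download
    filename: str      # name of the downloaded file

VIDEO_REFERRERS = ("youtube.com", "youtu.be")           # assumed watchlist
RISKY_EXTS = (".zip", ".rar", ".7z", ".exe", ".msi")    # assumed risky types

def flag_video_sourced_downloads(events):
    """Return events where a risky file type was fetched via a video-platform referrer."""
    return [
        e for e in events
        if any(r in e.referrer for r in VIDEO_REFERRERS)
        and e.filename.lower().endswith(RISKY_EXTS)
    ]
```

Flagged events would then feed the incident-response playbook: isolate the endpoint, hunt for stealer artefacts, and rotate any credentials the machine held.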
As attackers continue to evolve, platforms built to host user content will remain attractive nodes in the malware delivery chain. The Ghost Network demonstrates how threat actors can weaponise legitimate features (likes, comments, posts) to propagate malware at scale. Security teams must expand their threat models beyond traditional email/phishing vectors to include video platforms, online tutorials, and social-media-style delivery.