Malicious Machine Learning Models Discovered on Hugging Face: Report


Hugging Face, the artificial intelligence (AI) and machine learning (ML) hub, is said to contain malicious ML models. A cybersecurity research firm discovered two such models containing code that can be used to package and distribute malware to those who download the files. As per the researchers, threat actors are using a hard-to-detect technique that abuses Pickle file serialisation to insert malicious software. The researchers claimed to have reported the malicious ML models, and Hugging Face has removed them from the platform.

Researchers Discover Malicious ML Models in Hugging Face

ReversingLabs, a cybersecurity research firm, discovered the malicious ML models and detailed the new exploit being used by threat actors on Hugging Face. Notably, a large number of developers and companies host open-source AI models on the platform that can be downloaded and used by others.

The firm discovered that the modus operandi of the exploit involves using Pickle file serialisation. For the unaware, ML models are stored in a variety of data serialisation formats, which can be shared and reused. Pickle is a Python module that is used for serialising and deserialising ML model data. It is generally considered an unsafe data format as Python code can be executed during the deserialisation process.
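The risk described above stems from Pickle's design: an object can define how it is reconstructed, and the callable it specifies runs at load time. The benign sketch below illustrates the mechanism (the class name and payload string are illustrative, not taken from the discovered models; a real attacker would substitute a call such as `os.system`):

```python
import pickle

# Any object can define __reduce__; the callable it returns is invoked
# during deserialisation. This is why loading an untrusted Pickle file
# is equivalent to running untrusted code.
class Payload:
    def __reduce__(self):
        # Benign stand-in: just transforms a string at unpickling time.
        # A malicious file would return something like (os.system, ("...",)).
        return (str.upper, ("code ran during unpickling",))

data = pickle.dumps(Payload())
result = pickle.loads(data)  # str.upper executes here, at load time
print(result)
```

Running this prints the uppercased string, demonstrating that code chosen by the file's author executed during `pickle.loads()` without the loader ever calling it explicitly.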

On closed platforms, Pickle files are typically loaded only from trusted sources, which limits the risk. However, since Hugging Face is an open-source platform where these files are shared broadly, attackers can abuse the system to hide malware payloads.

During the investigation, the firm found two models on Hugging Face that contained malicious code. These ML models evaded the platform’s security measures and were not flagged as unsafe. The researchers named the technique of inserting malware “nullifAI” as “it involves evading existing protections in the AI community for an ML model.”

These models were stored in PyTorch format, which is essentially a compressed Pickle file. The researchers found that the models were compressed using the 7z format which prevented them from being loaded using PyTorch’s “torch.load()” function. This compression also prevented Hugging Face’s Picklescan tool from detecting the malware.
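One defensive consequence of the above: a standard PyTorch checkpoint saved with `torch.save()` is a ZIP archive containing a `data.pkl` entry, so a file that claims to be a PyTorch model but is not a valid ZIP (as with the 7z-compressed models the researchers found) deserves suspicion rather than any fallback to raw unpickling. A minimal sketch of such a pre-load sanity check, assuming only Python's standard library (the function name is hypothetical):

```python
import zipfile

def looks_like_pytorch_zip(path: str) -> bool:
    """Return True if `path` resembles a standard torch.save() archive.

    torch.save() writes a ZIP container whose payload includes a
    data.pkl entry. Files that fail this check (e.g. 7z-compressed
    blobs) should not be passed to any Pickle-based loader.
    """
    if not zipfile.is_zipfile(path):
        return False
    with zipfile.ZipFile(path) as zf:
        return any(name.endswith("data.pkl") for name in zf.namelist())
```

This is a heuristic, not a malware scanner: a valid-looking archive can still carry a malicious Pickle payload, which is why tools such as Picklescan inspect the serialised opcodes as well.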

The researchers claimed that this exploit can be dangerous, as unsuspecting developers who download these models will unknowingly end up installing the malware on their devices. The cybersecurity firm reported the issue to the Hugging Face security team on January 20 and claimed that the models were removed in less than 24 hours. Additionally, the platform is said to have made changes to the Picklescan tool to better identify such threats in “broken” Pickle files.



