According to a report from Protect AI, a company focused exclusively on securing Artificial Intelligence (AI) and Machine Learning (ML) systems, several major vulnerabilities have been identified in tools used across open-source AI/ML projects.
These threats surfaced over the past few months and were first reported by members of Huntr, the bug bounty platform for AI and ML. Huntr is an active, community-based bounty program with over 15,000 members who actively search for vulnerabilities in the AI/ML space, and it is the only bounty program focused specifically on AI and ML.
Protect AI launched the program back in August 2023 to provide vital intelligence on potential risks and to enable a rapid response that safeguards AI systems. The bounty program actively protects AI/ML open-source software (OSS), systems, and foundational models.
“With over 15,000 members now, Protect AI’s huntr is the largest and most concentrated set of threat researchers and hackers focused exclusively on AI/ML security,” said Daryan Dehghanpisheh, president and co-founder of Protect AI. “Huntr’s operating model is focused on simplicity, transparency, and rewards. The automated features and Protect AI’s triage expertise in contextualizing threats for maintainers help all contributors of open-source software in AI to build more secure software packages,” he added.
Three Critical Vulnerabilities
MLflow Remote Code Execution: Server takeover and loss of sensitive data are the two potential outcomes of this flaw. Users may be tricked into loading malicious data sources whose contents are executed remotely, and attackers could run commands while impersonating legitimate users.
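The report does not publish exploit code, but the general class of flaw is well understood. A minimal sketch of why loading an untrusted artifact can mean code execution, assuming the artifact is deserialized with Python's `pickle` (a common unsafe pattern in ML tooling generally; this is an illustration, not MLflow's actual loading path):

```python
import io
import pickle
from contextlib import redirect_stdout

class EvilArtifact:
    """Illustrative only: a crafted object that runs code when unpickled."""
    def __reduce__(self):
        # On deserialization, the victim process calls this function with
        # attacker-chosen arguments. Here it is a harmless print(), but it
        # could just as easily be os.system or any other callable.
        return (print, ("attacker-controlled code executed",))

# The attacker serializes the object and ships it as a "model" or "dataset".
payload = pickle.dumps(EvilArtifact())

# The victim merely loads the artifact -- no method call is needed.
buf = io.StringIO()
with redirect_stdout(buf):
    pickle.loads(payload)

print(buf.getvalue().strip())
```

The point of the sketch is that deserializing attacker-controlled bytes is itself the trigger; no further interaction with the loaded object is required.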
MLflow Arbitrary File Overwrite: A bypass was detected in the MLflow function that validates whether a file path is safe. The flaw allows a cybercriminal to remotely overwrite files on the MLflow server, which can lead to denial-of-service attacks, system takeover, and data destruction.
MLflow Local File Include: Under this flaw, an MLflow installation on any operating system can be manipulated into revealing the contents of sensitive files, even encrypted ones. This can lead to system takeover and the theft of sensitive information.
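Both the file-overwrite and local-file-include flaws stem from path validation that can be bypassed. A hedged sketch of the general failure mode, assuming a server that checks paths with a naive string-prefix test (the `BASE` directory and both helper functions are hypothetical, not MLflow's actual code):

```python
import os

# Hypothetical artifact root an application wants to confine access to.
BASE = "/srv/mlflow/artifacts"

def naive_is_safe(user_path: str) -> bool:
    # Flawed check: a raw prefix test does not account for ".." traversal,
    # so a path can "start inside" the root yet resolve far outside it.
    return user_path.startswith(BASE)

def robust_is_safe(user_path: str) -> bool:
    # Canonicalize first, then verify the resolved path is still under BASE.
    resolved = os.path.realpath(user_path)
    root = os.path.realpath(BASE)
    return os.path.commonpath([resolved, root]) == root

# A traversal payload: lexically prefixed by BASE, resolves to /etc/passwd.
attack = BASE + "/../../../etc/passwd"

print(naive_is_safe(attack))                       # bypassed: True
print(robust_is_safe(attack))                      # caught: False
print(robust_is_safe(BASE + "/model/weights.bin")) # legitimate path: True
```

Canonicalizing before comparing is the standard mitigation for this class of bug; the prefix test alone is what a traversal payload defeats.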
In the full report, Protect AI stresses the urgency of addressing these weaknesses as open-source AI platforms continue to gain traction. The company has also provided a comprehensive list of recommendations for users affected by any of the flaws.
Protect AI’s co-founder Daryan Dehghanpisheh told Metaverse Post, “Urgency in addressing AI/ML system vulnerabilities hinges on their business impact. With AI/ML’s critical role in contemporary business and the severe nature of potential exploits, most organizations will find this urgency high. The primary challenge in securing AI/ML systems lies in comprehending risks across the MLOps lifecycle.”
“To mitigate these risks, companies must conduct threat modeling for their AI and ML systems, identify exposure windows, and implement suitable controls within an integrated and comprehensive MLSecOps program,” he added.