LLMs

Package hallucination: LLMs may deliver malicious code to careless devs

Package hallucination: LLMs may deliver malicious code to careless devs 2025-04-14 at 15:46 By Zeljka Zorz LLMs’ tendency to “hallucinate” code packages that don’t exist could become the basis for a new type of supply chain attack dubbed “slopsquatting” (courtesy of Seth Larson, Security Developer-in-Residence at the Python Software Foundation). A known occurrence Many software […]

React to this headline:

Loading spinner

Package hallucination: LLMs may deliver malicious code to careless devs Read More »

The quiet data breach hiding in AI workflows

The quiet data breach hiding in AI workflows 2025-04-14 at 08:30 By Mirko Zorz As AI becomes embedded in daily business workflows, the risk of data exposure increases. Prompt leaks are not rare exceptions. They are a natural outcome of how employees use large language models. CISOs cannot treat this as a secondary concern. To

React to this headline:

Loading spinner

The quiet data breach hiding in AI workflows Read More »

Excessive agency in LLMs: The growing risk of unchecked autonomy

Excessive agency in LLMs: The growing risk of unchecked autonomy 2025-04-08 at 08:39 By Help Net Security For an AI agent to “think” and act autonomously, it must be granted agency; that is, it must be allowed to integrate with other systems, read and analyze data, and have permissions to execute commands. However, as these

React to this headline:

Loading spinner

Excessive agency in LLMs: The growing risk of unchecked autonomy Read More »

The rise of compromised LLM attacks

The rise of compromised LLM attacks 2025-04-07 at 07:03 By Help Net Security In this Help Net Security video, Sohrob Kazerounian, Distinguished AI Researcher at Vectra AI, discusses how the ongoing rapid adoption of LLM-based applications has already introduced new cybersecurity risks. These vulnerabilities will not be in the LLM itself, but rather in how

React to this headline:

Loading spinner

The rise of compromised LLM attacks Read More »

Two things you need in place to successfully adopt AI

Two things you need in place to successfully adopt AI 2025-03-31 at 08:32 By Help Net Security Organizations should not shy away from taking advantage of AI tools, but they need to find the right balance between maximizing efficiency and mitigating organizational risk. They need to put in place: 1. A seamless AI security policy

React to this headline:

Loading spinner

Two things you need in place to successfully adopt AI Read More »

Knostic Secures $11 Million to Rein in Enterprise AI Data Leakage, Oversharing 

Knostic Secures $11 Million to Rein in Enterprise AI Data Leakage, Oversharing  2025-03-05 at 13:02 By Ryan Naraine Knostic provides a “need-to-know” filter on the answers generated by enterprise large language models (LLM) tools. The post Knostic Secures $11 Million to Rein in Enterprise AI Data Leakage, Oversharing  appeared first on SecurityWeek. This article is

React to this headline:

Loading spinner

Knostic Secures $11 Million to Rein in Enterprise AI Data Leakage, Oversharing  Read More »

Man vs. machine: Striking the perfect balance in threat intelligence

Man vs. machine: Striking the perfect balance in threat intelligence 2025-02-24 at 08:00 By Mirko Zorz In this Help Net Security interview, Aaron Roberts, Director at Perspective Intelligence, discusses how automation is reshaping threat intelligence. He explains that while AI tools can process massive data sets, the nuanced judgment of experienced analysts remains critical. Roberts

React to this headline:

Loading spinner

Man vs. machine: Striking the perfect balance in threat intelligence Read More »

DeepSeek’s popularity exploited by malware peddlers, scammers

DeepSeek’s popularity exploited by malware peddlers, scammers 2025-01-29 at 15:18 By Zeljka Zorz As US-based AI companies struggle with the news that the recently released Chinese-made open source DeepSeek-R1 reasoning model performs as well as theirs for a fraction of the cost, users are rushing to try out DeepSeek’s AI tool. In the process, they

React to this headline:

Loading spinner

DeepSeek’s popularity exploited by malware peddlers, scammers Read More »

GitLab CISO on proactive monitoring and metrics for DevSecOps success

GitLab CISO on proactive monitoring and metrics for DevSecOps success 2025-01-09 at 07:32 By Mirko Zorz In this Help Net Security interview, Josh Lemos, CISO at GitLab, talks about the shift from DevOps to DevSecOps, focusing on the complexity of building systems and integrating security tools. He shares tips for maintaining development speed, fostering collaboration,

React to this headline:

Loading spinner

GitLab CISO on proactive monitoring and metrics for DevSecOps success Read More »

Microsoft: “Hack” this LLM-powered service and get paid

Microsoft: “Hack” this LLM-powered service and get paid 2024-12-09 at 18:04 By Zeljka Zorz Microsoft, in collaboration with the Institute of Science and Technology Australia and ETH Zurich, has announced the LLMail-Inject Challenge, a competition to test and improve defenses against prompt injection attacks. The setup and the challenge LLMail is a simulated email client

React to this headline:

Loading spinner

Microsoft: “Hack” this LLM-powered service and get paid Read More »

Assessing AI risks before implementation

Assessing AI risks before implementation 2024-11-25 at 06:33 By Help Net Security In this Help Net Security video, Frank Kim, SANS Institute Fellow, explains why more enterprises must consider many challenges before implementing advanced technology in their platforms. Without adequately assessing and understanding the risks accompanying AI integration, organizations will not be able to harness

React to this headline:

Loading spinner

Assessing AI risks before implementation Read More »

Harmonic Raises $17.5M to Defend Against AI Data Harvesting

Harmonic Raises $17.5M to Defend Against AI Data Harvesting 2024-10-02 at 15:46 By Ryan Naraine Harmonic has raised a total of $26 million to develop a new approach to data protection using pre-trained, specialized language models.  The post Harmonic Raises $17.5M to Defend Against AI Data Harvesting appeared first on SecurityWeek. This article is an

React to this headline:

Loading spinner

Harmonic Raises $17.5M to Defend Against AI Data Harvesting Read More »

Could APIs be the undoing of AI?

Could APIs be the undoing of AI? 2024-09-30 at 08:01 By Help Net Security Application programming interfaces (APIs) are essential to how generative AI (GenAI) functions with agents (e.g., calling upon them for data). But the combination of API and LLM issues coupled with rapid rollouts is likely to see numerous organizations having to combat

React to this headline:

Loading spinner

Could APIs be the undoing of AI? Read More »

AI cybersecurity needs to be as multi-layered as the system it’s protecting

AI cybersecurity needs to be as multi-layered as the system it’s protecting 2024-09-09 at 08:01 By Help Net Security Cybercriminals are beginning to take advantage of the new malicious options that large language models (LLMs) offer them. LLMs make it possible to upload documents with hidden instructions that are executed by connected system components. This

React to this headline:

Loading spinner

AI cybersecurity needs to be as multi-layered as the system it’s protecting Read More »

ITSM concerns when integrating new AI services

ITSM concerns when integrating new AI services 2024-08-06 at 07:31 By Help Net Security Let’s talk about a couple of recent horror stories. Late last year, a Chevrolet dealership deployed a chatbot powered by a large language model (LLM) on their homepage. This LLM, trained with detailed specifications of Chevrolet vehicles, was intended to respond

React to this headline:

Loading spinner

ITSM concerns when integrating new AI services Read More »

How companies increase risk exposure with rushed LLM deployments

How companies increase risk exposure with rushed LLM deployments 2024-07-10 at 07:31 By Mirko Zorz In this Help Net Security interview, Jake King, Head of Threat & Security Intelligence at Elastic, discusses companies’ exposure to new security risks and vulnerabilities as they rush to deploy LLMs. King explains how LLMs pose significant risks to data

React to this headline:

Loading spinner

How companies increase risk exposure with rushed LLM deployments Read More »

Monocle: Open-source LLM for binary analysis search

Monocle: Open-source LLM for binary analysis search 2024-07-08 at 06:31 By Help Net Security Monocle is open-source tooling backed by a large language model (LLM) for performing natural language searches against compiled target binaries. Monocle can be provided with a binary and search criteria (authentication code, vulnerable code, password strings, etc.), and it will decompile

React to this headline:

Loading spinner

Monocle: Open-source LLM for binary analysis search Read More »

DeepKeep Launches AI-Native Security Platform With $10 Million in Seed Funding

DeepKeep Launches AI-Native Security Platform With $10 Million in Seed Funding 2024-05-01 at 17:17 By Ionut Arghire AI-Native Trust, Risk, and Security Management (TRiSM) startup DeepKeep raises $10 million in seed funding. The post DeepKeep Launches AI-Native Security Platform With $10 Million in Seed Funding appeared first on SecurityWeek. This article is an excerpt from

React to this headline:

Loading spinner

DeepKeep Launches AI-Native Security Platform With $10 Million in Seed Funding Read More »

Scroll to Top