LLMs

You can’t trust AI chatbots not to serve you phishing pages, malicious downloads, or bad code

You can’t trust AI chatbots not to serve you phishing pages, malicious downloads, or bad code 2025-07-03 at 16:03 By Zeljka Zorz Popular AI chatbots powered by large language models (LLMs) often fail to provide accurate information on any topic, but researchers expect threat actors to ramp up their efforts to get them to spew […]

React to this headline:

Loading spinner

You can’t trust AI chatbots not to serve you phishing pages, malicious downloads, or bad code Read More »

AI tools are everywhere, and most are off your radar

AI tools are everywhere, and most are off your radar 2025-07-03 at 08:06 By Anamarija Pogorelec 80% of AI tools used by employees go unmanaged by IT or security teams, according to Zluri’s The State of AI in the Workplace 2025 report. AI is popping up all over the workplace, often without anyone noticing. If

React to this headline:

Loading spinner

AI tools are everywhere, and most are off your radar Read More »

How cybercriminals are weaponizing AI and what CISOs should do about it

How cybercriminals are weaponizing AI and what CISOs should do about it 2025-07-01 at 08:31 By Mirko Zorz In a recent case tracked by Flashpoint, a finance worker at a global firm joined a video call that seemed normal. By the end of it, $25 million was gone. Everyone on the call except the employee

React to this headline:

Loading spinner

How cybercriminals are weaponizing AI and what CISOs should do about it Read More »

We know GenAI is risky, so why aren’t we fixing its flaws?

We know GenAI is risky, so why aren’t we fixing its flaws? 2025-06-27 at 07:33 By Help Net Security Even though GenAI threats are a top concern for both security teams and leadership, the current level of testing and remediation for LLM and AI-powered applications isn’t keeping up with the risks, according to Cobalt. GenAl

React to this headline:

Loading spinner

We know GenAI is risky, so why aren’t we fixing its flaws? Read More »

Users lack control as major AI platforms share personal info with third parties

Users lack control as major AI platforms share personal info with third parties 2025-06-25 at 07:02 By Help Net Security Some of the most popular generative AI and large language model (LLM) platforms, from companies like Meta, Google, and Microsoft, are collecting sensitive data and sharing it with unknown third parties, leaving users with limited

React to this headline:

Loading spinner

Users lack control as major AI platforms share personal info with third parties Read More »

Free AI coding security rules now available on GitHub

Free AI coding security rules now available on GitHub 2025-06-17 at 16:47 By Sinisa Markovic Developers are turning to AI coding assistants to save time and speed up their work. But these tools can also introduce security risks if they suggest flawed or unsafe code. To help address that, Secure Code Warrior has released a

React to this headline:

Loading spinner

Free AI coding security rules now available on GitHub Read More »

Before scaling GenAI, map your LLM usage and risk zones

Before scaling GenAI, map your LLM usage and risk zones 2025-06-17 at 08:46 By Mirko Zorz In this Help Net Security interview, Paolo del Mundo, Director of Application and Cloud Security at The Motley Fool, discusses how organizations can scale their AI usage by implementing guardrails to mitigate GenAI-specific risks like prompt injection, insecure outputs,

React to this headline:

Loading spinner

Before scaling GenAI, map your LLM usage and risk zones Read More »

86% of all LLM usage is driven by ChatGPT

86% of all LLM usage is driven by ChatGPT 2025-06-11 at 07:01 By Help Net Security ChatGPT remains the most widely used LLM among New Relic customers, making up over 86% of all tokens processed. Developers and enterprises are shifting to OpenAI’s latest models, such as GPT-4o and GPT-4o mini, even when more affordable alternatives

React to this headline:

Loading spinner

86% of all LLM usage is driven by ChatGPT Read More »

The hidden risks of LLM autonomy

The hidden risks of LLM autonomy 2025-06-04 at 08:42 By Help Net Security Large language models (LLMs) have come a long way from the once passive and simple chatbots that could respond to basic user prompts or look up the internet to generate content. Today, they can access databases and business applications, interact with external

React to this headline:

Loading spinner

The hidden risks of LLM autonomy Read More »

Why security teams cannot rely solely on AI guardrails

Why security teams cannot rely solely on AI guardrails 2025-05-12 at 09:19 By Mirko Zorz In this Help Net Security interview, Dr. Peter Garraghan, CEO of Mindgard, discusses their research around vulnerabilities in the guardrails used to protect large AI models. The findings highlight how even billion-dollar LLMs can be bypassed using surprisingly simple techniques,

React to this headline:

Loading spinner

Why security teams cannot rely solely on AI guardrails Read More »

Even the best safeguards can’t stop LLMs from being fooled

Even the best safeguards can’t stop LLMs from being fooled 2025-05-08 at 08:48 By Mirko Zorz In this Help Net Security interview, Michael Pound, Associate Professor at the University of Nottingham shares his insights on the cybersecurity risks associated with LLMs. He discusses common organizational mistakes and the necessary precautions for securing sensitive data when

React to this headline:

Loading spinner

Even the best safeguards can’t stop LLMs from being fooled Read More »

SWE-agent: Open-source tool uses LLMs to fix issues in GitHub repositories

SWE-agent: Open-source tool uses LLMs to fix issues in GitHub repositories 2025-04-23 at 08:36 By Mirko Zorz By connecting powerful language models like GPT-4o and Claude Sonnet 3.5 to real-world tools, the open-source tool SWE-agent allows them to autonomously perform complex tasks: from fixing bugs in live GitHub repositories and solving cybersecurity challenges, to browsing

React to this headline:

Loading spinner

SWE-agent: Open-source tool uses LLMs to fix issues in GitHub repositories Read More »

When AI agents go rogue, the fallout hits the enterprise

When AI agents go rogue, the fallout hits the enterprise 2025-04-17 at 08:45 By Mirko Zorz In this Help Net Security interview, Jason Lord, CTO at AutoRABIT, discusses the cybersecurity risks posed by AI agents integrated into real-world systems. Issues like hallucinations, prompt injections, and embedded biases can turn these systems into vulnerable targets. Lord

React to this headline:

Loading spinner

When AI agents go rogue, the fallout hits the enterprise Read More »

Package hallucination: LLMs may deliver malicious code to careless devs

Package hallucination: LLMs may deliver malicious code to careless devs 2025-04-14 at 15:46 By Zeljka Zorz LLMs’ tendency to “hallucinate” code packages that don’t exist could become the basis for a new type of supply chain attack dubbed “slopsquatting” (courtesy of Seth Larson, Security Developer-in-Residence at the Python Software Foundation). A known occurrence Many software

React to this headline:

Loading spinner

Package hallucination: LLMs may deliver malicious code to careless devs Read More »

The quiet data breach hiding in AI workflows

The quiet data breach hiding in AI workflows 2025-04-14 at 08:30 By Mirko Zorz As AI becomes embedded in daily business workflows, the risk of data exposure increases. Prompt leaks are not rare exceptions. They are a natural outcome of how employees use large language models. CISOs cannot treat this as a secondary concern. To

React to this headline:

Loading spinner

The quiet data breach hiding in AI workflows Read More »

Excessive agency in LLMs: The growing risk of unchecked autonomy

Excessive agency in LLMs: The growing risk of unchecked autonomy 2025-04-08 at 08:39 By Help Net Security For an AI agent to “think” and act autonomously, it must be granted agency; that is, it must be allowed to integrate with other systems, read and analyze data, and have permissions to execute commands. However, as these

React to this headline:

Loading spinner

Excessive agency in LLMs: The growing risk of unchecked autonomy Read More »

The rise of compromised LLM attacks

The rise of compromised LLM attacks 2025-04-07 at 07:03 By Help Net Security In this Help Net Security video, Sohrob Kazerounian, Distinguished AI Researcher at Vectra AI, discusses how the ongoing rapid adoption of LLM-based applications has already introduced new cybersecurity risks. These vulnerabilities will not be in the LLM itself, but rather in how

React to this headline:

Loading spinner

The rise of compromised LLM attacks Read More »

Two things you need in place to successfully adopt AI

Two things you need in place to successfully adopt AI 2025-03-31 at 08:32 By Help Net Security Organizations should not shy away from taking advantage of AI tools, but they need to find the right balance between maximizing efficiency and mitigating organizational risk. They need to put in place: 1. A seamless AI security policy

React to this headline:

Loading spinner

Two things you need in place to successfully adopt AI Read More »

Knostic Secures $11 Million to Rein in Enterprise AI Data Leakage, Oversharing 

Knostic Secures $11 Million to Rein in Enterprise AI Data Leakage, Oversharing  2025-03-05 at 13:02 By Ryan Naraine Knostic provides a “need-to-know” filter on the answers generated by enterprise large language models (LLM) tools. The post Knostic Secures $11 Million to Rein in Enterprise AI Data Leakage, Oversharing  appeared first on SecurityWeek. This article is

React to this headline:

Loading spinner

Knostic Secures $11 Million to Rein in Enterprise AI Data Leakage, Oversharing  Read More »

Scroll to Top