LLMs

Shadow AI: New ideas emerge to tackle an old problem in new form

2025-10-31 at 09:13, by Zeljka Zorz. Shadow AI is the second-most prevalent form of shadow IT in corporate environments, 1Password’s latest annual report has revealed. Based on a survey of over 5,000 IT/security professionals and knowledge workers in the US, UK, Europe, […]

AI chatbots are sliding toward a privacy crisis

2025-10-31 at 09:00, by Sinisa Markovic. AI chat tools are taking over offices, but at what cost to privacy? People often feel anonymous in chat interfaces and may share personal data without realizing the risks. Cybercriminals see the same opening, and it may only be a matter […]

Faster LLM tool routing comes with new security considerations

2025-10-23 at 09:23, by Sinisa Markovic. Large language models depend on outside tools to perform real-world tasks, but connecting them to those tools often slows them down or causes failures. A new study from the University of Hong Kong proposes a way to fix that. The […]

Companies want the benefits of AI without the cyber blowback

2025-10-22 at 07:19, by Anamarija Pogorelec. 51% of European IT and cybersecurity professionals said they expect AI-driven cyber threats and deepfakes to keep them up at night in 2026, according to ISACA. AI takes centre stage in the threat outlook. The main reason for this concern […]

Most AI privacy research looks the wrong way

2025-10-20 at 13:19, by Mirko Zorz. Most research on LLM privacy has focused on the wrong problem, according to a new paper by researchers from Carnegie Mellon University and Northeastern University. The authors argue that while most technical studies target data memorization, the biggest risks come from […]

When trusted AI connections turn hostile

2025-10-16 at 09:02, by Mirko Zorz. Researchers have revealed a new security blind spot in how LLM applications connect to external systems. Their study shows that malicious Model Context Protocol (MCP) servers can quietly take control of hosts, manipulate LLM behavior, and deceive users, all while staying undetected by […]

GPT needs to be rewired for security

2025-10-02 at 09:18, by Help Net Security. LLMs and agentic systems already shine at everyday productivity, including transcribing and summarizing meetings, extracting action items, prioritizing critical emails, and even planning travel. But in the SOC (where mistakes have real cost), today’s models stumble on work that demands high […]

A2AS framework targets prompt injection and agentic AI security risks

2025-10-01 at 08:31, by Mirko Zorz. AI systems are now deeply embedded in business operations, and this introduces new security risks that traditional controls are not built to handle. The newly released A2AS framework is designed to protect AI agents at runtime and prevent real-world […]

Microsoft spots LLM-obfuscated phishing attack

2025-09-25 at 19:00, by Zeljka Zorz. Cybercriminals are increasingly using AI-powered tools and (malicious) large language models to create convincing, error-free emails, deepfakes, online personas, lookalike/fake websites, and malware. There’s even been a documented instance of an attacker using the agentic AI coding assistant Claude Code (along with Kali Linux) […]

Building a stronger SOC through AI augmentation

2025-09-24 at 09:22, by Mirko Zorz. In this Help Net Security interview, Tim Bramble, Director of Threat Detection and Response at OpenText, discusses how SOC teams are gaining value from AI in detecting and prioritizing threats. By learning what “normal” looks like across users and systems, AI helps […]

LLMs can boost cybersecurity decisions, but not for everyone

2025-09-19 at 09:11, by Mirko Zorz. LLMs are moving fast from experimentation to daily use in cybersecurity. Teams are starting to use them to sort through threat intelligence, guide incident response, and help analysts handle repetitive work. But adding AI into the decision-making process brings new […]

Google introduces VaultGemma, a differentially private LLM built for secure data handling

2025-09-16 at 09:31, by Sinisa Markovic. Google has released VaultGemma, a large language model designed to keep sensitive data private during training. The model uses differential privacy techniques to prevent individual data points from being exposed, which makes it safer for handling confidential […]

Garak: Open-source LLM vulnerability scanner

2025-09-10 at 09:00, by Help Net Security. LLMs can make mistakes, leak data, or be tricked into doing things they were not meant to do. Garak is a free, open-source tool designed to test these weaknesses. It checks for problems like hallucinations, prompt injections, jailbreaks, and toxic outputs. By running […]

BruteForceAI: Free AI-powered login brute force tool

2025-09-03 at 09:31, by Help Net Security. BruteForceAI is a penetration testing tool that uses LLMs to improve the way brute-force attacks are carried out. Instead of relying on manual setup, the tool can analyze HTML content, detect login form selectors, and prepare the attack process automatically. It […]

AI isn’t taking over the world, but here’s what you should worry about

2025-08-29 at 10:03, by Help Net Security. In this Help Net Security video, Josh Meier, Senior Generative AI Author at Pluralsight, debunks the myth that AI could “escape” servers or act on its own. He explains how large language models actually work, […]

Agentic AI coding assistant helped attacker breach, extort 17 distinct organizations

2025-08-28 at 15:29, by Zeljka Zorz. Cybercriminals have started “vibe hacking” with AI’s help, AI startup Anthropic has shared in a report released on Wednesday. An attacker used the agentic AI coding assistant Claude Code for nearly all steps of a data extortion operation […]

AI Security Map: Linking AI vulnerabilities to real-world impact

2025-08-27 at 09:40, by Mirko Zorz. A single prompt injection in a customer-facing chatbot can leak sensitive data, damage trust, and draw regulatory scrutiny in hours. The technical breach is only the first step. The real risk comes from how quickly one weakness in an AI […]

LLMs at the edge: Rethinking how IoT devices talk and act

2025-08-26 at 08:01, by Mirko Zorz. Anyone who has set up a smart home knows the routine: one app to dim the lights, another to adjust the thermostat, and a voice assistant that only understands exact phrasing. These systems call themselves smart, but in […]

Why a new AI tool could change how we test insider threat defenses

2025-08-25 at 09:04, by Mirko Zorz. Insider threats are among the hardest attacks to detect because they come from people who already have legitimate access. Security teams know the risk well, but they often lack the data needed to train systems that […]

Review: Adversarial AI Attacks, Mitigations, and Defense Strategies

2025-08-25 at 07:50, by Mirko Zorz. Adversarial AI Attacks, Mitigations, and Defense Strategies shows how AI systems can be attacked and how defenders can prepare. It’s essentially a walkthrough of offensive and defensive approaches to AI security. About the author: John Sotiropoulos is the Head of AI […]
