LLMs

BruteForceAI: Free AI-powered login brute force tool

2025-09-03 at 09:31, by Help Net Security. BruteForceAI is a penetration testing tool that uses LLMs to improve the way brute-force attacks are carried out. Instead of relying on manual setup, the tool can analyze HTML content, detect login form selectors, and prepare the attack process automatically. It […]
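
The teaser only hints at how the automatic selector detection might work. As a rough sketch of the idea (not BruteForceAI's actual code), the snippet below fetches a login page and asks an LLM for the CSS selectors of the username, password, and submit elements; the llm_complete callable and the prompt wording are assumptions made purely for illustration.

```python
import json
import requests  # assumed dependency, used only to fetch the page


def detect_login_selectors(url: str, llm_complete) -> dict:
    """Illustrative sketch: ask an LLM to pick out login-form CSS selectors.

    llm_complete is a hypothetical callable (prompt string in, model text
    out) standing in for whatever LLM backend a real tool would use.
    """
    html = requests.get(url, timeout=10).text

    prompt = (
        "The following HTML contains a login form. Return a JSON object with "
        "CSS selectors for the username field, password field, and submit "
        'button, using the keys "username", "password", and "submit".\n\n'
        + html[:20000]  # truncated so the prompt stays within context limits
    )

    # A production tool would validate the returned selectors against the
    # parsed DOM before driving a browser with them.
    return json.loads(llm_complete(prompt))
```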

AI isn’t taking over the world, but here’s what you should worry about

2025-08-29 at 10:03, by Help Net Security. In this Help Net Security video, Josh Meier, Senior Generative AI Author at Pluralsight, debunks the myth that AI could “escape” servers or act on its own. He explains how large language models actually work, […]

Agentic AI coding assistant helped attacker breach, extort 17 distinct organizations

2025-08-28 at 15:29, by Zeljka Zorz. Cybercriminals have started “vibe hacking” with AI’s help, AI startup Anthropic has shared in a report released on Wednesday. An attacker used the agentic AI coding assistant Claude Code for nearly all steps of a data extortion operation […]

AI Security Map: Linking AI vulnerabilities to real-world impact

2025-08-27 at 09:40, by Mirko Zorz. A single prompt injection in a customer-facing chatbot can leak sensitive data, damage trust, and draw regulatory scrutiny in hours. The technical breach is only the first step. The real risk comes from how quickly one weakness in an AI […]

LLMs at the edge: Rethinking how IoT devices talk and act

2025-08-26 at 08:01, by Mirko Zorz. Anyone who has set up a smart home knows the routine: one app to dim the lights, another to adjust the thermostat, and a voice assistant that only understands exact phrasing. These systems call themselves smart, but in […]

Why a new AI tool could change how we test insider threat defenses

2025-08-25 at 09:04, by Mirko Zorz. Insider threats are among the hardest attacks to detect because they come from people who already have legitimate access. Security teams know the risk well, but they often lack the data needed to train systems that […]

Review: Adversarial AI Attacks, Mitigations, and Defense Strategies

2025-08-25 at 07:50, by Mirko Zorz. Adversarial AI Attacks, Mitigations, and Defense Strategies shows how AI systems can be attacked and how defenders can prepare. It’s essentially a walkthrough of offensive and defensive approaches to AI security. About the author: John Sotiropoulos is the Head of AI […]

Using lightweight LLMs to cut incident response times and reduce hallucinations

2025-08-21 at 09:03, by Mirko Zorz. Researchers from the University of Melbourne and Imperial College London have developed a method for using LLMs to improve incident response planning with a focus on reducing the risk of hallucinations. Their approach uses a smaller, fine-tuned LLM […]
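
The excerpt does not say how the approach keeps hallucinations in check, so the following is only a generic illustration of the problem, not the researchers' method: a small model is asked to choose response steps strictly from a vetted playbook, and anything outside that list is discarded. The run_small_llm callable is a placeholder.

```python
# Generic illustration of constraining an LLM's incident-response output to a
# vetted playbook; not the method from the Melbourne/Imperial College paper.
APPROVED_ACTIONS = [
    "isolate affected host",
    "reset compromised credentials",
    "block malicious IP at the firewall",
    "preserve forensic evidence",
    "notify the incident response lead",
]


def plan_response(alert_summary: str, run_small_llm) -> list[str]:
    """Ask a (hypothetical) small fine-tuned model to pick playbook actions.

    run_small_llm is a placeholder callable: prompt in, one action per line
    out. Keeping only exact matches against the approved list is one blunt
    way to filter out hallucinated steps.
    """
    prompt = (
        "Choose the most appropriate response actions for this alert, one per "
        "line, strictly from this list:\n- "
        + "\n- ".join(APPROVED_ACTIONS)
        + f"\n\nAlert: {alert_summary}"
    )
    suggestions = [line.strip("- ").strip() for line in run_small_llm(prompt).splitlines()]
    return [action for action in suggestions if action in APPROVED_ACTIONS]
```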

How security teams are putting AI to work right now

2025-08-18 at 09:42, by Mirko Zorz. AI is moving from proof-of-concept into everyday security operations. In many SOCs, it is now used to cut down alert noise, guide analysts during investigations, and speed up incident response. What was once seen as experimental technology is starting […]

Employees race to build custom AI apps despite security risks

2025-08-15 at 07:37, by Help Net Security. The latest Netskope findings show a 50% increase in GenAI platform usage among enterprise end-users, driven by growing employee demand for tools to develop custom AI applications and agents. Chart: Top LLM interfaces by percentage in organizations (source: Netskope). […]

Free courses: Master AI tools from Microsoft, AWS, and Google

2025-08-14 at 07:32, by Anamarija Pogorelec. Learn how AI technologies can be applied to enhance security, create safe and responsible applications, develop intelligent agents, and improve information discovery. You’ll gain practical skills, explore new tools, and work on projects that help you apply what you […]

What GPT‑5 means for IT teams, devs, and the future of AI at work

2025-08-07 at 20:58, by Sinisa Markovic. OpenAI has released GPT‑5, the newest version of its large language model. It’s now available to developers and ChatGPT users, and it brings some real changes to how AI can be used in business and […]

AI can write your code, but nearly half of it may be insecure

2025-08-07 at 09:15, by Help Net Security. While GenAI excels at producing functional code, it introduces security vulnerabilities in 45 percent of cases, according to Veracode’s 2025 GenAI Code Security Report, which analyzed code produced by over 100 LLMs across 80 real-world […]
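
For context on what “insecure” usually means in reports like this, SQL injection through string interpolation is a classic case where generated code is perfectly functional yet unsafe. The example below is illustrative only and is not taken from the Veracode dataset.

```python
import sqlite3


def find_user_unsafe(conn: sqlite3.Connection, username: str):
    # Functional but vulnerable: the value is interpolated into the SQL text,
    # so input like "x' OR '1'='1" changes the query's logic.
    query = f"SELECT id, email FROM users WHERE name = '{username}'"
    return conn.execute(query).fetchall()


def find_user_safe(conn: sqlite3.Connection, username: str):
    # Parameterized query: the driver passes the value separately from the
    # SQL statement, so it cannot alter the statement's structure.
    return conn.execute(
        "SELECT id, email FROM users WHERE name = ?", (username,)
    ).fetchall()
```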

What attackers know about your company thanks to AI

2025-08-01 at 08:48, by Help Net Security. In this Help Net Security video, Tom Cross, Head of Threat Research at GetReal Security, explores how generative AI is empowering threat actors. He breaks down three key areas: how GenAI lowers the technical barrier for attackers, enables highly […]

New AI model offers faster, greener way for vulnerability detection

2025-07-31 at 08:33, by Mirko Zorz. A team of researchers has developed a new AI model, called White-Basilisk, that detects software vulnerabilities more efficiently than much larger systems. The model’s release comes at a time when developers and security teams face mounting pressure to secure […]

Fighting AI with AI: How Darwinium is reshaping fraud defense

2025-07-29 at 16:07, by Mirko Zorz. AI agents are showing up in more parts of the customer journey, from product discovery to checkout. And fraudsters are also putting them to work, often with alarming success. In response, cyberfraud prevention leader Darwinium is launching two AI-powered […]

Vulnhuntr: Open-source tool to identify remotely exploitable vulnerabilities

2025-07-28 at 08:13, by Mirko Zorz. Vulnhuntr is an open-source tool that finds remotely exploitable vulnerabilities. It uses LLMs and static code analysis to trace how data moves through an application, from user input to server output. This helps it spot complex, multi-step vulnerabilities that traditional tools […]
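
The excerpt describes the general recipe (static analysis plus an LLM) rather than Vulnhuntr's implementation, so the sketch below is only a minimal illustration of that pairing: Python's ast module flags candidate user-input sources and dangerous sinks, and a hypothetical ask_llm callable is then asked to reason about whether tainted data can actually reach a sink.

```python
import ast

# Illustrative source/sink patterns; a real tool's lists are far more extensive.
SOURCES = {"request.args.get", "request.form.get", "input"}
SINKS = {"eval", "exec", "os.system", "subprocess.run"}


def call_name(node: ast.Call) -> str:
    """Render a call's dotted name, e.g. os.system or request.args.get."""
    parts, func = [], node.func
    while isinstance(func, ast.Attribute):
        parts.append(func.attr)
        func = func.value
    if isinstance(func, ast.Name):
        parts.append(func.id)
    return ".".join(reversed(parts))


def find_sources_and_sinks(source_code: str) -> dict:
    """Collect calls that look like user-input sources or dangerous sinks."""
    hits = {"sources": [], "sinks": []}
    for node in ast.walk(ast.parse(source_code)):
        if isinstance(node, ast.Call):
            name = call_name(node)
            if name in SOURCES:
                hits["sources"].append((node.lineno, name))
            elif name in SINKS:
                hits["sinks"].append((node.lineno, name))
    return hits


def assess_with_llm(source_code: str, hits: dict, ask_llm) -> str:
    """Hypothetical second stage: let an LLM judge whether any source can
    reach a sink, where multi-step data flows require actual reasoning."""
    prompt = (
        f"Potential taint sources: {hits['sources']}\n"
        f"Potential sinks: {hits['sinks']}\n"
        "Given the code below, explain whether user-controlled data can reach "
        "any sink, and describe the call chain if so.\n\n" + source_code
    )
    return ask_llm(prompt)
```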

Are your employees using Chinese GenAI tools at work?

2025-07-21 at 07:35, by Anamarija Pogorelec. Nearly one in 12 employees are using Chinese-developed generative AI tools at work, and they’re exposing sensitive data in the process. That’s according to new research from Harmonic Security, which analyzed the behavior of roughly 14,000 end users in the […]

Behind the code: How developers work in 2025

2025-07-11 at 13:01, by Anamarija Pogorelec. How are developers working in 2025? Docker surveyed over 4,500 people to find out, and the answers are a mix of progress and ongoing pain points. AI is gaining ground but is still unevenly used. Security is now baked into everyday workflows. […]

Employees are quietly bringing AI to work and leaving security behind

2025-07-11 at 08:06, by Help Net Security. While IT departments race to implement AI governance frameworks, many employees have already opened a backdoor for AI, according to ManageEngine. The rise of unauthorized AI use: Shadow AI has quietly infiltrated organizations across North America, creating […]
