LLMs

Free courses: Master AI tools from Microsoft, AWS, and Google

Free courses: Master AI tools from Microsoft, AWS, and Google 2025-08-14 at 07:32 By Anamarija Pogorelec Learn how AI technologies can be applied to enhance security, create safe and responsible applications, develop intelligent agents, and improve information discovery. You’ll gain practical skills, explore new tools, and work on projects that help you apply what you […]

React to this headline:

Loading spinner

Free courses: Master AI tools from Microsoft, AWS, and Google Read More »

What GPT‑5 means for IT teams, devs, and the future of AI at work

What GPT‑5 means for IT teams, devs, and the future of AI at work 2025-08-07 at 20:58 By Sinisa Markovic OpenAI has released GPT‑5, the newest version of its large language model. It’s now available to developers and ChatGPT users, and it brings some real changes to how AI can be used in business and

React to this headline:

Loading spinner

What GPT‑5 means for IT teams, devs, and the future of AI at work Read More »

AI can write your code, but nearly half of it may be insecure

AI can write your code, but nearly half of it may be insecure 2025-08-07 at 09:15 By Help Net Security While GenAI excels at producing functional code, it introduces security vulnerabilities in 45 percent of cases, according to Veracode’s 2025 GenAI Code Security Report, which analyzed code produced by over 100 LLMs across 80 real-world

React to this headline:

Loading spinner

AI can write your code, but nearly half of it may be insecure Read More »

What attackers know about your company thanks to AI

What attackers know about your company thanks to AI 2025-08-01 at 08:48 By Help Net Security In this Help Net Security video, Tom Cross, Head of Threat Research at GetReal Security, explores how generative AI is empowering threat actors. He breaks down three key areas: how GenAI lowers the technical barrier for attackers, enables highly

React to this headline:

Loading spinner

What attackers know about your company thanks to AI Read More »

New AI model offers faster, greener way for vulnerability detection

New AI model offers faster, greener way for vulnerability detection 2025-07-31 at 08:33 By Mirko Zorz A team of researchers has developed a new AI model, called White-Basilisk, that detects software vulnerabilities more efficiently than much larger systems. The model’s release comes at a time when developers and security teams face mounting pressure to secure

React to this headline:

Loading spinner

New AI model offers faster, greener way for vulnerability detection Read More »

Fighting AI with AI: How Darwinium is reshaping fraud defense

Fighting AI with AI: How Darwinium is reshaping fraud defense 2025-07-29 at 16:07 By Mirko Zorz AI agents are showing up in more parts of the customer journey, from product discovery to checkout. And fraudsters are also putting them to work, often with alarming success. In response, cyberfraud prevention leader Darwinium is launching two AI-powered

React to this headline:

Loading spinner

Fighting AI with AI: How Darwinium is reshaping fraud defense Read More »

Vulnhuntr: Open-source tool to identify remotely exploitable vulnerabilities

Vulnhuntr: Open-source tool to identify remotely exploitable vulnerabilities 2025-07-28 at 08:13 By Mirko Zorz Vulnhuntr is an open-source tool that finds remotely exploitable vulnerabilities. It uses LLMs and static code analysis to trace how data moves through an application, from user input to server output. This helps it spot complex, multi-step vulnerabilities that traditional tools

React to this headline:

Loading spinner

Vulnhuntr: Open-source tool to identify remotely exploitable vulnerabilities Read More »

Are your employees using Chinese GenAI tools at work?

Are your employees using Chinese GenAI tools at work? 2025-07-21 at 07:35 By Anamarija Pogorelec Nearly one in 12 employees are using Chinese-developed generative AI tools at work, and they’re exposing sensitive data in the process. That’s according to new research from Harmonic Security, which analyzed the behavior of roughly 14,000 end users in the

React to this headline:

Loading spinner

Are your employees using Chinese GenAI tools at work? Read More »

Behind the code: How developers work in 2025

Behind the code: How developers work in 2025 2025-07-11 at 13:01 By Anamarija Pogorelec How are developers working in 2025? Docker surveyed over 4,500 people to find out, and the answers are a mix of progress and ongoing pain points. AI is gaining ground but still unevenly used. Security is now baked into everyday workflows.

React to this headline:

Loading spinner

Behind the code: How developers work in 2025 Read More »

Employees are quietly bringing AI to work and leaving security behind

Employees are quietly bringing AI to work and leaving security behind 2025-07-11 at 08:06 By Help Net Security While IT departments race to implement AI governance frameworks, many employees have already opened a backdoor for AI, according to ManageEngine. The rise of unauthorized AI use Shadow AI has quietly infiltrated organizations across North America, creating

React to this headline:

Loading spinner

Employees are quietly bringing AI to work and leaving security behind Read More »

You can’t trust AI chatbots not to serve you phishing pages, malicious downloads, or bad code

You can’t trust AI chatbots not to serve you phishing pages, malicious downloads, or bad code 2025-07-03 at 16:03 By Zeljka Zorz Popular AI chatbots powered by large language models (LLMs) often fail to provide accurate information on any topic, but researchers expect threat actors to ramp up their efforts to get them to spew

React to this headline:

Loading spinner

You can’t trust AI chatbots not to serve you phishing pages, malicious downloads, or bad code Read More »

AI tools are everywhere, and most are off your radar

AI tools are everywhere, and most are off your radar 2025-07-03 at 08:06 By Anamarija Pogorelec 80% of AI tools used by employees go unmanaged by IT or security teams, according to Zluri’s The State of AI in the Workplace 2025 report. AI is popping up all over the workplace, often without anyone noticing. If

React to this headline:

Loading spinner

AI tools are everywhere, and most are off your radar Read More »

How cybercriminals are weaponizing AI and what CISOs should do about it

How cybercriminals are weaponizing AI and what CISOs should do about it 2025-07-01 at 08:31 By Mirko Zorz In a recent case tracked by Flashpoint, a finance worker at a global firm joined a video call that seemed normal. By the end of it, $25 million was gone. Everyone on the call except the employee

React to this headline:

Loading spinner

How cybercriminals are weaponizing AI and what CISOs should do about it Read More »

We know GenAI is risky, so why aren’t we fixing its flaws?

We know GenAI is risky, so why aren’t we fixing its flaws? 2025-06-27 at 07:33 By Help Net Security Even though GenAI threats are a top concern for both security teams and leadership, the current level of testing and remediation for LLM and AI-powered applications isn’t keeping up with the risks, according to Cobalt. GenAl

React to this headline:

Loading spinner

We know GenAI is risky, so why aren’t we fixing its flaws? Read More »

Users lack control as major AI platforms share personal info with third parties

Users lack control as major AI platforms share personal info with third parties 2025-06-25 at 07:02 By Help Net Security Some of the most popular generative AI and large language model (LLM) platforms, from companies like Meta, Google, and Microsoft, are collecting sensitive data and sharing it with unknown third parties, leaving users with limited

React to this headline:

Loading spinner

Users lack control as major AI platforms share personal info with third parties Read More »

Free AI coding security rules now available on GitHub

Free AI coding security rules now available on GitHub 2025-06-17 at 16:47 By Sinisa Markovic Developers are turning to AI coding assistants to save time and speed up their work. But these tools can also introduce security risks if they suggest flawed or unsafe code. To help address that, Secure Code Warrior has released a

React to this headline:

Loading spinner

Free AI coding security rules now available on GitHub Read More »

Before scaling GenAI, map your LLM usage and risk zones

Before scaling GenAI, map your LLM usage and risk zones 2025-06-17 at 08:46 By Mirko Zorz In this Help Net Security interview, Paolo del Mundo, Director of Application and Cloud Security at The Motley Fool, discusses how organizations can scale their AI usage by implementing guardrails to mitigate GenAI-specific risks like prompt injection, insecure outputs,

React to this headline:

Loading spinner

Before scaling GenAI, map your LLM usage and risk zones Read More »

86% of all LLM usage is driven by ChatGPT

86% of all LLM usage is driven by ChatGPT 2025-06-11 at 07:01 By Help Net Security ChatGPT remains the most widely used LLM among New Relic customers, making up over 86% of all tokens processed. Developers and enterprises are shifting to OpenAI’s latest models, such as GPT-4o and GPT-4o mini, even when more affordable alternatives

React to this headline:

Loading spinner

86% of all LLM usage is driven by ChatGPT Read More »

The hidden risks of LLM autonomy

The hidden risks of LLM autonomy 2025-06-04 at 08:42 By Help Net Security Large language models (LLMs) have come a long way from the once passive and simple chatbots that could respond to basic user prompts or look up the internet to generate content. Today, they can access databases and business applications, interact with external

React to this headline:

Loading spinner

The hidden risks of LLM autonomy Read More »

Why security teams cannot rely solely on AI guardrails

Why security teams cannot rely solely on AI guardrails 2025-05-12 at 09:19 By Mirko Zorz In this Help Net Security interview, Dr. Peter Garraghan, CEO of Mindgard, discusses their research around vulnerabilities in the guardrails used to protect large AI models. The findings highlight how even billion-dollar LLMs can be bypassed using surprisingly simple techniques,

React to this headline:

Loading spinner

Why security teams cannot rely solely on AI guardrails Read More »

Scroll to Top