LLMs

Microsoft: “Hack” this LLM-powered service and get paid

Microsoft: “Hack” this LLM-powered service and get paid 2024-12-09 at 18:04 By Zeljka Zorz Microsoft, in collaboration with the Institute of Science and Technology Australia and ETH Zurich, has announced the LLMail-Inject Challenge, a competition to test and improve defenses against prompt injection attacks. The setup and the challenge LLMail is a simulated email client […]

React to this headline:

Loading spinner

Microsoft: “Hack” this LLM-powered service and get paid Read More »

Assessing AI risks before implementation

Assessing AI risks before implementation 2024-11-25 at 06:33 By Help Net Security In this Help Net Security video, Frank Kim, SANS Institute Fellow, explains why more enterprises must consider many challenges before implementing advanced technology in their platforms. Without adequately assessing and understanding the risks accompanying AI integration, organizations will not be able to harness

React to this headline:

Loading spinner

Assessing AI risks before implementation Read More »

Harmonic Raises $17.5M to Defend Against AI Data Harvesting

Harmonic Raises $17.5M to Defend Against AI Data Harvesting 2024-10-02 at 15:46 By Ryan Naraine Harmonic has raised a total of $26 million to develop a new approach to data protection using pre-trained, specialized language models.  The post Harmonic Raises $17.5M to Defend Against AI Data Harvesting appeared first on SecurityWeek. This article is an

React to this headline:

Loading spinner

Harmonic Raises $17.5M to Defend Against AI Data Harvesting Read More »

Could APIs be the undoing of AI?

Could APIs be the undoing of AI? 2024-09-30 at 08:01 By Help Net Security Application programming interfaces (APIs) are essential to how generative AI (GenAI) functions with agents (e.g., calling upon them for data). But the combination of API and LLM issues coupled with rapid rollouts is likely to see numerous organizations having to combat

React to this headline:

Loading spinner

Could APIs be the undoing of AI? Read More »

AI cybersecurity needs to be as multi-layered as the system it’s protecting

AI cybersecurity needs to be as multi-layered as the system it’s protecting 2024-09-09 at 08:01 By Help Net Security Cybercriminals are beginning to take advantage of the new malicious options that large language models (LLMs) offer them. LLMs make it possible to upload documents with hidden instructions that are executed by connected system components. This

React to this headline:

Loading spinner

AI cybersecurity needs to be as multi-layered as the system it’s protecting Read More »

ITSM concerns when integrating new AI services

ITSM concerns when integrating new AI services 2024-08-06 at 07:31 By Help Net Security Let’s talk about a couple of recent horror stories. Late last year, a Chevrolet dealership deployed a chatbot powered by a large language model (LLM) on their homepage. This LLM, trained with detailed specifications of Chevrolet vehicles, was intended to respond

React to this headline:

Loading spinner

ITSM concerns when integrating new AI services Read More »

How companies increase risk exposure with rushed LLM deployments

How companies increase risk exposure with rushed LLM deployments 2024-07-10 at 07:31 By Mirko Zorz In this Help Net Security interview, Jake King, Head of Threat & Security Intelligence at Elastic, discusses companies’ exposure to new security risks and vulnerabilities as they rush to deploy LLMs. King explains how LLMs pose significant risks to data

React to this headline:

Loading spinner

How companies increase risk exposure with rushed LLM deployments Read More »

Monocle: Open-source LLM for binary analysis search

Monocle: Open-source LLM for binary analysis search 2024-07-08 at 06:31 By Help Net Security Monocle is open-source tooling backed by a large language model (LLM) for performing natural language searches against compiled target binaries. Monocle can be provided with a binary and search criteria (authentication code, vulnerable code, password strings, etc.), and it will decompile

React to this headline:

Loading spinner

Monocle: Open-source LLM for binary analysis search Read More »

DeepKeep Launches AI-Native Security Platform With $10 Million in Seed Funding

DeepKeep Launches AI-Native Security Platform With $10 Million in Seed Funding 2024-05-01 at 17:17 By Ionut Arghire AI-Native Trust, Risk, and Security Management (TRiSM) startup DeepKeep raises $10 million in seed funding. The post DeepKeep Launches AI-Native Security Platform With $10 Million in Seed Funding appeared first on SecurityWeek. This article is an excerpt from

React to this headline:

Loading spinner

DeepKeep Launches AI-Native Security Platform With $10 Million in Seed Funding Read More »

Immediate AI risks and tomorrow’s dangers

Immediate AI risks and tomorrow’s dangers 2024-03-08 at 08:37 By Helga Labus “At the most basic level, AI has given malicious attackers superpowers,” Mackenzie Jackson, developer and security advocate at GitGuardian, told the audience last week at Bsides Zagreb. These superpowers are most evident in the growing impact of fishing, smishing and vishing attacks since

React to this headline:

Loading spinner

Immediate AI risks and tomorrow’s dangers Read More »

Meta plans to prevent disinformation and AI-generated content from influencing voters

Meta plans to prevent disinformation and AI-generated content from influencing voters 2024-02-27 at 14:50 By Zeljka Zorz Meta, the company that owns some of the biggest social networks in use today, has explained how it means to tackle disinformation related to the upcoming EU Parliament elections, with a special emphasis on how it plans to

React to this headline:

Loading spinner

Meta plans to prevent disinformation and AI-generated content from influencing voters Read More »

Microsoft Catches APTs Using ChatGPT for Vuln Research, Malware Scripting

Microsoft Catches APTs Using ChatGPT for Vuln Research, Malware Scripting 2024-02-14 at 22:02 By Ryan Naraine Microsoft threat hunters say foreign APTs are interacting with OpenAI’s ChatGPT to automate malicious vulnerability research, target reconnaissance and malware creation tasks. The post Microsoft Catches APTs Using ChatGPT for Vuln Research, Malware Scripting appeared first on SecurityWeek. This

React to this headline:

Loading spinner

Microsoft Catches APTs Using ChatGPT for Vuln Research, Malware Scripting Read More »

How are state-sponsored threat actors leveraging AI?

How are state-sponsored threat actors leveraging AI? 2024-02-14 at 18:31 By Helga Labus Microsoft and OpenAI have identified attempts by various state-affiliated threat actors to use large language models (LLMs) to enhance their cyber operations. Threat actors use LLMs for various tasks Just as defenders do, threat actors are leveraging AI (more specifically: LLMs) to

React to this headline:

Loading spinner

How are state-sponsored threat actors leveraging AI? Read More »

Protecting against AI-enhanced email threats

Protecting against AI-enhanced email threats 2024-02-13 at 07:31 By Helga Labus Generative AI based on large language models (LLMs) has become a valuable tool for individuals and businesses, but also cybercriminals. Its ability to process large amounts of data and quickly generate results has contributed to its widespread adoption. AI in the hands of cybercriminals

React to this headline:

Loading spinner

Protecting against AI-enhanced email threats Read More »

Security Experts Describe AI Technologies They Want to See

Security Experts Describe AI Technologies They Want to See 2024-01-22 at 19:32 By Ryan Naraine SecurityWeek interviews a wide spectrum of security experts on AI-driven cybersecurity use-cases that are worth immediate attention. The post Security Experts Describe AI Technologies They Want to See appeared first on SecurityWeek. This article is an excerpt from SecurityWeek RSS

React to this headline:

Loading spinner

Security Experts Describe AI Technologies They Want to See Read More »

AI Data Exposed to ‘LeftoverLocals’ Attack via Vulnerable AMD, Apple, Qualcomm GPUs

AI Data Exposed to ‘LeftoverLocals’ Attack via Vulnerable AMD, Apple, Qualcomm GPUs 2024-01-17 at 15:31 By Eduard Kovacs Researchers show how a new attack named LeftoverLocals, which impacts GPUs from AMD, Apple and Qualcomm, can be used to obtain AI data. The post AI Data Exposed to ‘LeftoverLocals’ Attack via Vulnerable AMD, Apple, Qualcomm GPUs

React to this headline:

Loading spinner

AI Data Exposed to ‘LeftoverLocals’ Attack via Vulnerable AMD, Apple, Qualcomm GPUs Read More »

Protecto Joins Cadre of Startups in AI Data Protection Space

Protecto Joins Cadre of Startups in AI Data Protection Space 08/11/2023 at 21:47 By Ryan Naraine Silicon Valley startup is pitching APIs to help organizations protect data and ensure compliance throughout the AI deployment lifecycle. The post Protecto Joins Cadre of Startups in AI Data Protection Space appeared first on SecurityWeek. This article is an

React to this headline:

Loading spinner

Protecto Joins Cadre of Startups in AI Data Protection Space Read More »

Generative AI Startup Nexusflow Raises $10.6 Million

Generative AI Startup Nexusflow Raises $10.6 Million 29/09/2023 at 19:16 By Ionut Arghire Nexusflow scores funding to build an open-source LLM that can deliver high accuracy when retrieving data from multiple security sources. The post Generative AI Startup Nexusflow Raises $10.6 Million appeared first on SecurityWeek. This article is an excerpt from SecurityWeek RSS Feed

React to this headline:

Loading spinner

Generative AI Startup Nexusflow Raises $10.6 Million Read More »

Scroll to Top