Artificial Intelligence

Differential privacy in AI: A solution creating more problems for developers?

Differential privacy in AI: A solution creating more problems for developers? 2024-09-19 at 08:01 By Help Net Security In the push for secure AI models, many organizations have turned to differential privacy. But is the very tool meant to protect user data holding back innovation? Developers face a tough choice: balance data privacy or prioritize […]

React to this headline:

Loading spinner

Differential privacy in AI: A solution creating more problems for developers? Read More »

Security leaders consider banning AI coding due to security risks

Security leaders consider banning AI coding due to security risks 2024-09-19 at 06:02 By Help Net Security 92% of security leaders have concerns about the use of AI-generated code within their organization, according to Venafi. Tension between security and developer teams 83% of security leaders say their developers currently use AI to generate code, with

React to this headline:

Loading spinner

Security leaders consider banning AI coding due to security risks Read More »

Intezer Raises $33M to Extend AI-Powered SOC Platform

Intezer Raises $33M to Extend AI-Powered SOC Platform 2024-09-17 at 20:17 By Ryan Naraine Intezer is looking to tap into booming market for AI-powered tooling to address the severe shortage of skilled cybersecurity professionals.  The post Intezer Raises $33M to Extend AI-Powered SOC Platform appeared first on SecurityWeek. This article is an excerpt from SecurityWeek

React to this headline:

Loading spinner

Intezer Raises $33M to Extend AI-Powered SOC Platform Read More »

The AI Threat: Deepfake or Deep Fake? Unraveling the True Security Risks

The AI Threat: Deepfake or Deep Fake? Unraveling the True Security Risks 2024-09-17 at 17:34 By Kevin Townsend When it comes to adversarial use of AI, the real question is whether the AI threat is a deep fake, or whether the deepfake is the AI threat. The post The AI Threat: Deepfake or Deep Fake?

React to this headline:

Loading spinner

The AI Threat: Deepfake or Deep Fake? Unraveling the True Security Risks Read More »

Distributed Denial of Truth (DDoT): The Mechanics of Influence Operations and The Weaponization of Social Media

Distributed Denial of Truth (DDoT): The Mechanics of Influence Operations and The Weaponization of Social Media 2024-09-13 at 16:01 By Jose Tozo With the US election on the horizon, it’s a good time to explore the concept of social media weaponization and its use in asymmetrically manipulating public opinion through bots, automation, AI, and shady

React to this headline:

Loading spinner

Distributed Denial of Truth (DDoT): The Mechanics of Influence Operations and The Weaponization of Social Media Read More »

Benefits and best practices of leveraging AI for cybersecurity

Benefits and best practices of leveraging AI for cybersecurity 2024-09-12 at 06:31 By Help Net Security AI has become a key player in protecting valuable organizational insights from threats. Thanks to AI-enabled data protection practices such as behavior monitoring, enterprises no longer have to be reactive to a cyberattack but can be proactive before a

React to this headline:

Loading spinner

Benefits and best practices of leveraging AI for cybersecurity Read More »

Google’s AI Model Faces European Union Scrutiny From Privacy Watchdog

Google’s AI Model Faces European Union Scrutiny From Privacy Watchdog 2024-09-12 at 04:17 By Associated Press Ireland’s Data Protection Commission said it has opened an inquiry into Google’s Pathways Language Model 2, also known as PaLM2. The post Google’s AI Model Faces European Union Scrutiny From Privacy Watchdog appeared first on SecurityWeek. This article is

React to this headline:

Loading spinner

Google’s AI Model Faces European Union Scrutiny From Privacy Watchdog Read More »

Compliance and Risk Management Startup Datricks Raises $15 Million

Compliance and Risk Management Startup Datricks Raises $15 Million 2024-09-11 at 18:34 By Ionut Arghire The Tel Aviv company attracts $15 million in a Series A investment to build an AI-powered compliance and risk management platform. The post Compliance and Risk Management Startup Datricks Raises $15 Million appeared first on SecurityWeek. This article is an

React to this headline:

Loading spinner

Compliance and Risk Management Startup Datricks Raises $15 Million Read More »

The AI Convention: Lofty Goals, Legal Loopholes, and National Security Caveats

The AI Convention: Lofty Goals, Legal Loopholes, and National Security Caveats 2024-09-10 at 15:16 By Kevin Townsend Signed on September 5, 2024, the AI Convention is a laudable intent but suffers from the usual exclusions and exemptions necessary to satisfy multiple nations. The post The AI Convention: Lofty Goals, Legal Loopholes, and National Security Caveats

React to this headline:

Loading spinner

The AI Convention: Lofty Goals, Legal Loopholes, and National Security Caveats Read More »

How human-led threat hunting complements automation in detecting cyber threats

How human-led threat hunting complements automation in detecting cyber threats 2024-09-10 at 07:01 By Mirko Zorz In this Help Net Security interview, Shane Cox, Director, Cyber Fusion Center at MorganFranklin Consulting, discusses the evolving methodologies and strategies in threat hunting and explains how human-led approaches complement each other to form a robust defense. Cox also

React to this headline:

Loading spinner

How human-led threat hunting complements automation in detecting cyber threats Read More »

AI cybersecurity needs to be as multi-layered as the system it’s protecting

AI cybersecurity needs to be as multi-layered as the system it’s protecting 2024-09-09 at 08:01 By Help Net Security Cybercriminals are beginning to take advantage of the new malicious options that large language models (LLMs) offer them. LLMs make it possible to upload documents with hidden instructions that are executed by connected system components. This

React to this headline:

Loading spinner

AI cybersecurity needs to be as multi-layered as the system it’s protecting Read More »

Best practices for implementing the Principle of Least Privilege

Best practices for implementing the Principle of Least Privilege 2024-09-09 at 07:02 By Mirko Zorz In this Help Net Security interview, Umaimah Khan, CEO of Opal Security, shares her insights on implementing the Principle of Least Privilege (PoLP). She discusses best practices for effective integration, benefits for operational efficiency and audit readiness, and how to

React to this headline:

Loading spinner

Best practices for implementing the Principle of Least Privilege Read More »

The AI Wild West: Unraveling the Security and Privacy Risks of GenAI Apps

The AI Wild West: Unraveling the Security and Privacy Risks of GenAI Apps 2024-09-05 at 17:31 By Alastair Paterson GenAI users are uploading data to over eight apps every month – what are the security and privacy concerns? The post The AI Wild West: Unraveling the Security and Privacy Risks of GenAI Apps appeared first

React to this headline:

Loading spinner

The AI Wild West: Unraveling the Security and Privacy Risks of GenAI Apps Read More »

Acuvity Raises $9 Million Seed Funding for Gen-AI Governance and In-house Development

Acuvity Raises $9 Million Seed Funding for Gen-AI Governance and In-house Development 2024-09-05 at 17:31 By Kevin Townsend Activity emerged from stealth with $9 million seed funding to provide solutions for enterprises to safely adopt GenAI. The post Acuvity Raises $9 Million Seed Funding for Gen-AI Governance and In-house Development appeared first on SecurityWeek. This

React to this headline:

Loading spinner

Acuvity Raises $9 Million Seed Funding for Gen-AI Governance and In-house Development Read More »

How Do You Know When AI is Powerful Enough to be Dangerous? Regulators Try to Do the Math

How Do You Know When AI is Powerful Enough to be Dangerous? Regulators Try to Do the Math 2024-09-05 at 14:16 By Associated Press An AI model trained on 10 to the 26th floating-point operations per second must now be reported to the U.S. government and could soon trigger even stricter requirements in California. The

React to this headline:

Loading spinner

How Do You Know When AI is Powerful Enough to be Dangerous? Regulators Try to Do the Math Read More »

Clearview AI Fined $33.7 Million by Dutch Data Protection Watchdog Over ‘Illegal Database’ of Faces

Clearview AI Fined $33.7 Million by Dutch Data Protection Watchdog Over ‘Illegal Database’ of Faces 2024-09-03 at 17:16 By Associated Press Dutch agency said a database with billions of photos of faces amounted to serious violations of GDPR. The post Clearview AI Fined $33.7 Million by Dutch Data Protection Watchdog Over ‘Illegal Database’ of Faces

React to this headline:

Loading spinner

Clearview AI Fined $33.7 Million by Dutch Data Protection Watchdog Over ‘Illegal Database’ of Faces Read More »

California Advances Landmark Legislation to Regulate Large AI Models

California Advances Landmark Legislation to Regulate Large AI Models 2024-08-30 at 16:01 By Associated Press Efforts in California to establish first-in-the-nation safety measures for the largest artificial intelligence systems cleared an important vote. The post California Advances Landmark Legislation to Regulate Large AI Models appeared first on SecurityWeek. This article is an excerpt from SecurityWeek

React to this headline:

Loading spinner

California Advances Landmark Legislation to Regulate Large AI Models Read More »

Cisco to Acquire AI Security Firm Robust Intelligence

Cisco to Acquire AI Security Firm Robust Intelligence 2024-08-27 at 15:01 By Eduard Kovacs Cisco intends to acquire Robust Intelligence, a California-based company that specializes in securing AI applications. The post Cisco to Acquire AI Security Firm Robust Intelligence appeared first on SecurityWeek. This article is an excerpt from SecurityWeek RSS Feed View Original Source

React to this headline:

Loading spinner

Cisco to Acquire AI Security Firm Robust Intelligence Read More »

GenAI models are easily compromised

GenAI models are easily compromised 2024-08-22 at 06:01 By Help Net Security 95% of cybersecurity experts express low confidence in GenAI security measures while red team data shows anyone can easily hack GenAI models, according to Lakera. Attack methods specific to GenAI, or prompt attacks, are easily used by anyone to manipulate the applications, gain

React to this headline:

Loading spinner

GenAI models are easily compromised Read More »

Scroll to Top