generative AI

Old methods, new technologies drive fraud losses

Old methods, new technologies drive fraud losses 2024-08-28 at 06:01 By Help Net Security GenAI, deepfakes and cybercrime are critical threats putting intensifying pressures on businesses, according to Experian. Top online security concerns for consumers According to the FTC, consumers reported losing more than $10 billion to fraud in 2023 alone, representing a 14% increase […]

React to this headline:

Loading spinner

Old methods, new technologies drive fraud losses Read More »

GenAI buzz fading among senior executives

GenAI buzz fading among senior executives 2024-08-26 at 05:01 By Help Net Security GenAI adoption has reached a critical phase, with 67% of respondents reporting their organization is increasing its investment in GenAI due to strong value to date, according to Deloitte. “The State of Generative AI in the Enterprise: Now decides Next,” is based

React to this headline:

Loading spinner

GenAI buzz fading among senior executives Read More »

GenAI models are easily compromised

GenAI models are easily compromised 2024-08-22 at 06:01 By Help Net Security 95% of cybersecurity experts express low confidence in GenAI security measures while red team data shows anyone can easily hack GenAI models, according to Lakera. Attack methods specific to GenAI, or prompt attacks, are easily used by anyone to manipulate the applications, gain

React to this headline:

Loading spinner

GenAI models are easily compromised Read More »

The AI balancing act: Unlocking potential, dealing with security issues, complexity

The AI balancing act: Unlocking potential, dealing with security issues, complexity 2024-08-15 at 06:31 By Help Net Security The rapid integration of AI and GenAI technologies creates a complex mix of challenges and opportunities for organizations. While the potential benefits are clear, many companies struggle with AI literacy, cautious adoption, and the risks of immature

React to this headline:

Loading spinner

The AI balancing act: Unlocking potential, dealing with security issues, complexity Read More »

AI security 2024: Key insights for staying ahead of threats

AI security 2024: Key insights for staying ahead of threats 2024-08-08 at 07:01 By Mirko Zorz In this Help Net Security interview, Kojin Oshiba, co-founder of Robust Intelligence, discusses his journey from academic research to addressing AI security challenges in the industry. Oshiba highlights vulnerabilities in technology systems and the proactive measures needed to mitigate

React to this headline:

Loading spinner

AI security 2024: Key insights for staying ahead of threats Read More »

Securing against GenAI weaponization

Securing against GenAI weaponization 2024-08-08 at 06:31 By Help Net Security In this Help Net Security video, Aaron Fulkerson, CEO of Opaque, discusses how the weaponization of generative AI (GenAI) has made existing data privacy practices (like masking, anonymization, tokenization, etc.) obsolete. Fulkerson provides recommendations for companies to realize they must proactively plan to mitigate

React to this headline:

Loading spinner

Securing against GenAI weaponization Read More »

Enhancing threat detection for GenAI workloads with cloud attack emulation

Enhancing threat detection for GenAI workloads with cloud attack emulation 2024-07-29 at 08:01 By Help Net Security Cloud GenAI workloads inherit pre-existing cloud security challenges, and security teams must proactively evolve innovative security countermeasures, including threat detection mechanisms. Traditional cloud threat detection Threat detection systems are designed to allow early detection of potential security breaches;

React to this headline:

Loading spinner

Enhancing threat detection for GenAI workloads with cloud attack emulation Read More »

The most urgent security risks for GenAI users are all data-related

The most urgent security risks for GenAI users are all data-related 2024-07-25 at 06:01 By Help Net Security Regulated data (data that organizations have a legal duty to protect) makes up more than a third of the sensitive data being shared with GenAI applications—presenting a potential risk to businesses of costly data breaches, according to

React to this headline:

Loading spinner

The most urgent security risks for GenAI users are all data-related Read More »

GenAI network acceleration requires prior WAN optimization

GenAI network acceleration requires prior WAN optimization 2024-07-19 at 07:32 By Help Net Security As GenAI models used for natural language processing, image generation, and other complex tasks often rely on large datasets that must be transmitted between distributed locations, including data centers and edge devices, WAN optimization is essential for robust deployment of GenAI

React to this headline:

Loading spinner

GenAI network acceleration requires prior WAN optimization Read More »

ChatGPTriage: How can CISOs see and control employees’ AI use?

ChatGPTriage: How can CISOs see and control employees’ AI use? 2024-07-16 at 08:01 By Help Net Security It’s been less than 18 months since the public introduction of ChatGPT, which gained 100 million users in less than two months. Given the hype, you would expect enterprise adoption of generative AI to be significant, but it’s

React to this headline:

Loading spinner

ChatGPTriage: How can CISOs see and control employees’ AI use? Read More »

Pressure mounts for C-Suite executives to implement GenAI solutions

Pressure mounts for C-Suite executives to implement GenAI solutions 2024-07-15 at 06:01 By Help Net Security 87% of C-Suite executives feel under pressure to implement GenAI solutions at speed and scale, according to RWS. Despite these pressures, 76% expressed an overwhelming excitement across their organization for the potential benefits of GenAI. However, this excitement is

React to this headline:

Loading spinner

Pressure mounts for C-Suite executives to implement GenAI solutions Read More »

Can AI be Meaningfully Regulated, or is Regulation a Deceitful Fudge?

Can AI be Meaningfully Regulated, or is Regulation a Deceitful Fudge? 2024-07-10 at 16:46 By Kevin Townsend Few people understand AI, nor how to use nor control it, nor where it is going. Yet politicians wish to regulate it. The post Can AI be Meaningfully Regulated, or is Regulation a Deceitful Fudge? appeared first on

React to this headline:

Loading spinner

Can AI be Meaningfully Regulated, or is Regulation a Deceitful Fudge? Read More »

US Disrupts AI-Powered Russian Bot Farm on X

US Disrupts AI-Powered Russian Bot Farm on X 2024-07-10 at 15:01 By Ionut Arghire The US and allies blame Russian state-sponsored threat actors for using Meliorator AI software to create a social media bot farm. The post US Disrupts AI-Powered Russian Bot Farm on X appeared first on SecurityWeek. This article is an excerpt from

React to this headline:

Loading spinner

US Disrupts AI-Powered Russian Bot Farm on X Read More »

How companies increase risk exposure with rushed LLM deployments

How companies increase risk exposure with rushed LLM deployments 2024-07-10 at 07:31 By Mirko Zorz In this Help Net Security interview, Jake King, Head of Threat & Security Intelligence at Elastic, discusses companies’ exposure to new security risks and vulnerabilities as they rush to deploy LLMs. King explains how LLMs pose significant risks to data

React to this headline:

Loading spinner

How companies increase risk exposure with rushed LLM deployments Read More »

Infostealing malware masquerading as generative AI tools

Infostealing malware masquerading as generative AI tools 2024-07-05 at 08:01 By Help Net Security Over the past six months, there has been a notable surge in Android financial threats – malware targeting victims’ mobile banking funds, whether in the form of ‘traditional’ banking malware or, more recently, cryptostealers, according to ESET. Vidar infostealer targets Windows

React to this headline:

Loading spinner

Infostealing malware masquerading as generative AI tools Read More »

Maintaining human oversight in AI-enhanced software development

Maintaining human oversight in AI-enhanced software development 2024-07-03 at 07:31 By Mirko Zorz In this Help Net Security, Martin Reynolds, Field CTO at Harness, discusses how AI can enhance the security of software development and deployment. However, increased reliance on AI-generated code introduces new risks, requiring human oversight and integrated security practices to ensure safe

React to this headline:

Loading spinner

Maintaining human oversight in AI-enhanced software development Read More »

The impossibility of “getting ahead” in cyber defense

The impossibility of “getting ahead” in cyber defense 2024-07-02 at 07:01 By Help Net Security As a security professional, it can be tempting to believe that with sufficient resources we can achieve of state of parity, or even relative dominance, over cyber attackers. After all, if we got to an ideal state – fully staffed

React to this headline:

Loading spinner

The impossibility of “getting ahead” in cyber defense Read More »

Microsoft Details ‘Skeleton Key’ AI Jailbreak Technique

Microsoft Details ‘Skeleton Key’ AI Jailbreak Technique 2024-06-28 at 16:31 By Eduard Kovacs Microsoft has tricked several gen-AI models into providing forbidden information using a jailbreak technique named Skeleton Key. The post Microsoft Details ‘Skeleton Key’ AI Jailbreak Technique appeared first on SecurityWeek. This article is an excerpt from SecurityWeek RSS Feed View Original Source

React to this headline:

Loading spinner

Microsoft Details ‘Skeleton Key’ AI Jailbreak Technique Read More »

Why are threat actors faking data breaches?

Why are threat actors faking data breaches? 2024-06-24 at 07:16 By Help Net Security Earlier this year Europcar discovered a hacker selling info on its 50 million customers on the dark web. The European car rental company immediately launched an investigation, only to discover that the data being sold was completely doctored, possibly using generative

React to this headline:

Loading spinner

Why are threat actors faking data breaches? Read More »

42% plan to use API security for AI data protection

42% plan to use API security for AI data protection 2024-06-18 at 06:01 By Help Net Security While 75% of enterprises are implementing AI, 72% report significant data quality issues and an inability to scale data practices, according to F5. Data and the systems companies put in place to obtain, store, and secure it are

React to this headline:

Loading spinner

42% plan to use API security for AI data protection Read More »

Scroll to Top