A recent survey of 500 security professionals by HackerOne, a security research platform, found that 48% believe AI poses the most significant security risk to their organization. Among their top concerns about AI are:
- Leaked training data (35%).
- Unauthorized use (33%).
- Hacking of AI models by external parties (32%).
These concerns highlight the urgent need for companies to rethink their AI security strategies before vulnerabilities become real threats.
AI tends to generate false positives for security teams
While the full report on hacker-powered security will not be available until late fall, further research from a SANS Institute report sponsored by HackerOne revealed that 58% of security professionals believe security teams and threat actors could find themselves in an “arms race” to leverage generative AI tactics and techniques in their work.
Security professionals in the SANS survey said they have found success using AI to automate tedious tasks (71%). However, the same respondents acknowledged that threat actors could leverage AI to make their own operations more efficient. Specifically, respondents “were most concerned about AI-powered phishing campaigns (79%) and automated vulnerability exploitation (74%).”
SEE: Security chiefs are frustrated with AI-generated code.
“Security teams need to find the best applications for AI to keep up with adversaries while also taking into account its current limitations, or they risk creating more work for themselves,” said Matt Bromiley, an analyst at the SANS Institute, in a press release.
The solution? AI implementations should be externally audited. Over two-thirds of respondents (68%) chose “external audit” as the most effective way to identify AI safety and security issues.
“Teams are now more realistic about the current limitations of AI” than they were last year, Dane Sherrets, senior solutions architect at HackerOne, said in an email to TechRepublic. “Humans bring a lot of important context to both defensive and offensive security that AI can’t replicate yet. Issues like hallucinations have also made teams hesitant to deploy the technology in critical systems. However, AI is still great for increasing productivity and performing tasks that don’t require deep context.”
Additional findings from the SANS 2024 AI survey, released this month, include:
- 38% plan to adopt AI into their security strategy in the future.
- 38.6% of respondents said they have experienced shortcomings when using AI to detect or respond to cyber threats.
- 40% cite legal and ethical implications as a challenge to AI adoption.
- 41.8% of companies have faced pushback from employees who do not trust decisions made by AI, which SANS attributes to “a lack of transparency.”
- Currently, 43% of organizations use artificial intelligence in their security strategy.
- AI technology in security operations is most often used in anomaly detection systems (56.9%), malware detection (50.5%), and automated incident response (48.9%).
- Fifty-eight percent of respondents said AI systems struggle to detect new threats or respond to anomalous indicators, which SANS attributes to a lack of training data.
- Of those who reported shortcomings in using AI to detect or respond to cyber threats, 71% said AI was generating false positives; a minimal sketch of how that can happen follows this list.
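On that last point, here is a minimal, hypothetical sketch in Python of why threshold-based anomaly detection over-alerts; the z-score cutoff, the `flag_anomalies` helper, and the sample login counts are illustrative assumptions, not any vendor's actual detector.

```python
import statistics

def flag_anomalies(daily_login_counts, threshold=2.0):
    """Flag days whose login volume deviates from the mean by more
    than `threshold` standard deviations (a simple z-score test)."""
    mean = statistics.mean(daily_login_counts)
    stdev = statistics.stdev(daily_login_counts)
    anomalies = []
    for day, count in enumerate(daily_login_counts):
        z = (count - mean) / stdev if stdev else 0.0
        if abs(z) > threshold:
            anomalies.append((day, count, round(z, 2)))
    return anomalies

# A benign traffic spike (say, the Monday after a holiday) is flagged
# exactly like an attack would be -- a classic false positive.
counts = [102, 98, 110, 105, 99, 240, 101]
print(flag_anomalies(counts))  # flags day 5; the detector cannot say why
```

A benign spike scores the same as a malicious one, so every flagged day still needs a human analyst to triage it, which is exactly the extra work Bromiley warns about.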
Anthropic Asks Security Researchers for Input on AI Safety Measures
In August, generative AI maker Anthropic expanded its bug bounty program on HackerOne.
Specifically, Anthropic wants the hacker community to stress-test “the mitigations we use to prevent misuse of our models,” including attempting to bypass guardrails designed to prevent AI from providing recipes for explosives or cyberattacks. Anthropic says it will award up to $15,000 to those who successfully identify new jailbreaking attacks and will give HackerOne security researchers early access to its upcoming safety mitigation system.
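For context on what “bypassing guardrails” means in practice, below is a minimal, hypothetical sketch in Python of the kind of pre-screening filter that can sit between a user and a model; the `screen_prompt` function, the pattern blocklist, and the refusal message are illustrative assumptions, not Anthropic's actual mitigations.

```python
import re

# Illustrative blocklist; production guardrails use trained
# classifiers, not keyword patterns like these.
BLOCKED_PATTERNS = [
    r"\b(build|make)\s+(a|an)\s+(bomb|explosive)\b",
    r"\bwrite\s+(malware|ransomware)\b",
]

def screen_prompt(prompt: str) -> tuple[bool, str]:
    """Return (allowed, message): refuse prompts matching any
    blocked pattern before they ever reach the model."""
    for pattern in BLOCKED_PATTERNS:
        if re.search(pattern, prompt, re.IGNORECASE):
            return False, "Request refused by safety policy."
    return True, "Forwarded to model."

print(screen_prompt("How do I write ransomware?"))
# -> (False, 'Request refused by safety policy.')
```

Naive keyword filters like this one are trivially evaded by rephrasing, which is why Anthropic is paying researchers to find novel jailbreaks against its more sophisticated mitigations.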