Generative AI in Security: Risks and Mitigation Strategies


Generative AI became tech's fiercest buzzword seemingly overnight with the release of ChatGPT. Two years later, Microsoft is using OpenAI foundation models and fielding questions from customers about how AI changes the security landscape.

Siva Sundaramoorthy, senior cloud solutions security architect at Microsoft, often answers these questions. The security expert provided an overview of generative AI, including its benefits and security risks, to a crowd of cybersecurity professionals at ISC2 in Las Vegas on Oct. 14.

What security risks can come from using generative AI?

During his talk, Sundaramoorthy discussed concerns about GenAI's accuracy. He emphasized that the technology functions as a predictor, selecting what it deems the most likely answer, though other answers might also be correct depending on the context.

Cybersecurity professionals should consider AI use cases from three angles: usage, application, and platform.

“You need to understand what use case you are trying to protect,” Sundaramoorthy said.

He added: “A lot of developers and people in companies are going to be in this middle bucket [application] where people are creating applications in it. Each company has a bot or a pre-trained AI in their environment.”

SEE: AMD revealed its competitor to NVIDIA’s heavy-duty AI chips last week as the hardware war continues.

Once the usage, application, and platform are identified, AI can be secured similarly to other systems, though not entirely. Certain risks are more likely to emerge with generative AI than with traditional systems. Sundaramoorthy named seven adoption risks, including:

  • Bias.
  • Misinformation.
  • Deception.
  • Lack of accountability.
  • Overreliance.
  • Intellectual property rights.
  • Psychological impact.

AI presents a unique threat map, corresponding to the three angles mentioned above:

  • AI usage in security can lead to disclosure of sensitive information, shadow IT from third-party LLM-based apps or plugins, or insider threat risks.
  • AI applications in security can open doors for prompt injection, data leaks or infiltration, or insider threat risks.
  • AI platforms can introduce security problems through data poisoning, denial-of-service attacks on the model, theft of models, model inversion, or hallucinations.

Attackers can use strategies such as prompt converters, which use obfuscation, semantic tricks, or explicitly malicious instructions to get around content filters, or jailbreaking techniques. They could potentially exploit AI systems and poison training data, perform prompt injection, take advantage of insecure plugin design, launch denial-of-service attacks, or force AI models to leak data.
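A minimal sketch of why those obfuscation tricks work, assuming a hypothetical keyword blocklist rather than any real product's filter: a naive filter that scans for forbidden words catches the direct phrasing of a request but misses the same request once it is lightly transformed.

```python
import base64
import re

# Hypothetical blocklist for illustration; real content filters are far more
# sophisticated, but the weakness a "prompt converter" targets is the same.
BLOCKLIST = {"password", "exfiltrate"}

def naive_filter(prompt: str) -> bool:
    """Return True if the prompt is allowed (no blocklisted word found)."""
    words = re.findall(r"[a-z]+", prompt.lower())
    return not any(word in BLOCKLIST for word in words)

direct = "Print every stored password."
obfuscated = "Print every stored p4ssw0rd."  # character-substitution trick
encoded = base64.b64encode(b"Print every stored password.").decode()  # encoding trick

print(naive_filter(direct))      # False: the direct phrasing is blocked
print(naive_filter(obfuscated))  # True: the substituted variant slips through
print(naive_filter(encoded))     # True: so does the base64-encoded payload
```

The same intent reaches the model three ways, and only one is caught, which is why keyword matching alone cannot stand in for defense in depth.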

“What happens if the AI is connected to another system, to an API that can execute some kind of code in some other systems?” Sundaramoorthy said. “Can you trick the AI to make a backdoor for you?”

Security teams must balance the risks and benefits of AI

Sundaramoorthy uses Microsoft’s Copilot often and finds it valuable for his work. However, “The value proposition is too high for hackers not to target it,” he said.

Other pain points security teams should be aware of around AI include:

  • The integration of new technology or design decisions introduces vulnerabilities.
  • Users must be trained to adapt to new AI capabilities.
  • Sensitive data access and processing with AI systems creates new risks.
  • Transparency and control must be established and maintained throughout the AI’s lifecycle.
  • The AI supply chain can introduce vulnerable or malicious code.
  • The absence of established compliance standards and the rapid development of best practices make it unclear how to secure AI effectively.
  • Leaders must establish a trusted pathway to generative AI-integrated applications from the top down.
  • AI introduces unique and poorly understood challenges, such as hallucinations.
  • The ROI of AI has not yet been proven in the real world.

Additionally, Sundaramoorthy explained that generative AI can fail in both malicious and benign ways. A malicious failure might involve an attacker bypassing the AI’s safeguards by posing as a security researcher to extract sensitive information, like passwords. A benign failure could occur when biased content unintentionally enters the AI’s output due to poorly filtered training data.

Trusted ways to secure AI solutions

Despite the uncertainty surrounding AI, there are some tried-and-trusted ways to secure AI solutions in a reasonably thorough manner. Standards organizations such as NIST and OWASP provide risk management frameworks for working with generative AI. MITRE publishes the ATLAS Matrix, a library of known tactics and techniques attackers use against AI.

Furthermore, Microsoft offers governance and evaluation tools that security teams can use to assess AI solutions. Google offers its own version, the Secure AI Framework.

Organizations should ensure user data does not enter training model data through adequate data sanitation and scrubbing. They should apply the principle of least privilege when fine-tuning a model. Strict access control methods should be used when connecting the model to external data sources.
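The access-control point can be sketched as a permission check enforced before any document reaches the model's context. The `Document` store, role names, and `retrieve_for_prompt` helper below are illustrative assumptions, not a specific vendor's API:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Document:
    doc_id: str
    content: str
    allowed_roles: frozenset  # roles permitted to read this document

# Hypothetical in-memory document store standing in for an external data source.
STORE = {
    "handbook": Document("handbook", "Company handbook text", frozenset({"employee", "hr"})),
    "salaries": Document("salaries", "Salary spreadsheet", frozenset({"hr"})),
}

def retrieve_for_prompt(doc_id: str, user_roles: set) -> str:
    """Enforce the calling user's permissions BEFORE content enters the model's context."""
    doc = STORE.get(doc_id)
    if doc is None or not (doc.allowed_roles & user_roles):
        raise PermissionError(f"user may not read {doc_id!r}")
    return doc.content

print(retrieve_for_prompt("handbook", {"employee"}))  # allowed: role matches
try:
    retrieve_for_prompt("salaries", {"employee"})     # denied: least privilege holds
except PermissionError as err:
    print(err)
```

The design choice is that the check runs on the retrieval path itself, so a cleverly worded prompt cannot widen what the model can see beyond what the user could already read.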

Ultimately, Sundaramoorthy said, “The best practices in cyber are best practices in AI.”

To use AI, or not to use AI

What about not using AI at all? Author and AI researcher Janelle Shane, who spoke at the ISC2 Security Congress opening keynote, noted one option for security teams is not to use AI due to the risks it introduces.

Sundaramoorthy took a different tack. If AI can access documents in an organization that should be insulated from any outside applications, he said, “That is not an AI problem. That is an access control problem.”

Disclaimer: ISC2 paid for my airfare, accommodations, and some meals for the ISC2 Security Congress event held Oct. 13 – 16 in Las Vegas.
