
Telecommunications System & Management


Making AI Safer Through Identification and Remediation of AI Risks Across Security, Ethics, Transparency, Compliance, and Other Pillars

Abstract

Murai Rao & Kashyap Murali

100 billion dollars. That is how much AI spending is projected to reach in a mere three years. With such rapid investment in a next-generation technology, its safety becomes a growing concern for everyone who comes into contact with it, from customers to vendors alike. Questions are raised about an AI model's safety, including potential security risks, bias against minority groups, a lack of transparency in the decision-making process, and compliance violations in the AI environment, and these questions often lack proper answers. They are not limited to a specific vertical; they are domain-agnostic, and ensuring that a business's AI is safe is what will set it apart in the eyes of its customers. This talk will explore what AI safety is, why we need it, and how to ensure that one's AI model is safe both technically and procedurally.

