# Use AI for security

This principle in the security pillar of the [Google Cloud Well-Architected Framework](/architecture/framework) provides recommendations to use AI to help you improve the security of your cloud workloads.

Because of the increasing number and sophistication of cyber attacks, it's important to take advantage of AI's potential to help improve security. AI can help to reduce the number of threats, reduce the manual effort required by security professionals, and help compensate for the scarcity of experts in the cyber-security domain.
## Principle overview
Use AI capabilities to improve your existing security systems and processes. You can use [Gemini in Security](/security/ai) as well as the intrinsic AI capabilities that are built into Google Cloud services.
These AI capabilities can transform security by providing assistance across every stage of the security lifecycle. For example, you can use AI to do the following:
- Analyze and explain potentially malicious code without the need for reverse engineering.
- Reduce repetitive work for cyber-security practitioners.
- Use natural language to generate queries and interact with security event data.
- Surface contextual information.
- Offer recommendations for quick responses.
- Aid in the remediation of events.
- Summarize high-priority alerts for misconfigurations and vulnerabilities, highlight potential impacts, and recommend mitigations (see the sketch after this list).
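As a concrete illustration of the alert-summarization use case above, the following is a minimal sketch that sends a hypothetical finding to a Gemini model through the Vertex AI SDK and asks for a summary, potential impact, and mitigations. The project ID, region, model name, and finding payload are placeholders and assumptions; adapt them to your environment.

```python
# Minimal sketch: ask a Gemini model to summarize a security finding and
# suggest mitigations. Assumes the Vertex AI SDK is installed
# (pip install google-cloud-aiplatform) and the Vertex AI API is enabled.
# The finding below is a hypothetical example payload, not real alert output.
import vertexai
from vertexai.generative_models import GenerativeModel

vertexai.init(project="your-project-id", location="us-central1")  # placeholders
model = GenerativeModel("gemini-1.5-pro")  # assumption: model available in your region

finding = {
    "category": "PUBLIC_BUCKET_ACL",
    "resource": "//storage.googleapis.com/example-bucket",
    "severity": "HIGH",
    "description": "Cloud Storage bucket is publicly readable.",
}

prompt = (
    "Summarize this security finding for an on-call engineer, explain the "
    f"potential impact, and recommend mitigations:\n{finding}"
)
response = model.generate_content(prompt)
print(response.text)
```

In practice, the finding would come from your alerting pipeline rather than a hard-coded dictionary.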
## Levels of security autonomy
AI and automation can help you achieve better security outcomes when you're dealing with ever-evolving cyber-security threats. By using AI for security, you can achieve greater levels of autonomy to detect and prevent threats and improve your overall security posture. Google defines four [levels of autonomy](https://services.google.com/fh/files/misc/google-cloud-product-vision-ai-powered-security.pdf#page=6) when you use AI for security, and they outline the increasing role of AI in assisting and eventually leading security tasks:
1. **Manual**: Humans run all of the security tasks (prevent, detect, prioritize, and respond) across the entire security lifecycle.
2. **Assisted**: AI tools, like Gemini, boost human productivity by summarizing information, generating insights, and making recommendations.
3. **Semi-autonomous**: AI takes primary responsibility for many security tasks and delegates to humans only when required (see the sketch after this list).
4. **Autonomous**: AI acts as a trusted assistant that drives the security lifecycle based on your organization's goals and preferences, with minimal human intervention.
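The hand-off at the semi-autonomous level can be pictured with a small, purely illustrative sketch: the AI acts on high-confidence verdicts and escalates everything else to an analyst. The `AutonomyLevel` enum, `Verdict` shape, and confidence threshold below are illustrative assumptions, not part of Google's definitions.

```python
# Purely illustrative sketch of the semi-autonomous hand-off: the AI acts on
# high-confidence verdicts and delegates the rest to a human analyst. The
# enum, dataclass, and 0.9 threshold are assumptions for illustration only.
from dataclasses import dataclass
from enum import Enum


class AutonomyLevel(Enum):
    MANUAL = 1
    ASSISTED = 2
    SEMI_AUTONOMOUS = 3
    AUTONOMOUS = 4


@dataclass
class Verdict:
    alert_id: str
    action: str        # e.g. "quarantine_host"
    confidence: float  # 0.0 to 1.0, produced by the AI triage step


def route(verdict: Verdict, level: AutonomyLevel, threshold: float = 0.9) -> str:
    """Decide whether the AI acts on its own verdict or hands off to a human."""
    if level is AutonomyLevel.SEMI_AUTONOMOUS and verdict.confidence >= threshold:
        return f"auto-remediate {verdict.alert_id}: {verdict.action}"
    return f"escalate {verdict.alert_id} to a human analyst"


print(route(Verdict("alert-42", "quarantine_host", 0.95), AutonomyLevel.SEMI_AUTONOMOUS))
print(route(Verdict("alert-43", "disable_account", 0.40), AutonomyLevel.SEMI_AUTONOMOUS))
```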
## Recommendations
The following sections describe the recommendations for using AI for security. The sections also indicate how the recommendations align with the [core elements](https://developers.google.com/machine-learning/resources/saif) of Google's Secure AI Framework (SAIF) and how they're relevant to the [levels of security autonomy](#levels_of_security_autonomy).
[[["容易理解","easyToUnderstand","thumb-up"],["確實解決了我的問題","solvedMyProblem","thumb-up"],["其他","otherUp","thumb-up"]],[["難以理解","hardToUnderstand","thumb-down"],["資訊或程式碼範例有誤","incorrectInformationOrSampleCode","thumb-down"],["缺少我需要的資訊/範例","missingTheInformationSamplesINeed","thumb-down"],["翻譯問題","translationIssue","thumb-down"],["其他","otherDown","thumb-down"]],["上次更新時間:2025-02-05 (世界標準時間)。"],[[["\u003cp\u003eAI significantly enhances cloud security by reducing threats, minimizing manual effort, and addressing the shortage of cybersecurity experts.\u003c/p\u003e\n"],["\u003cp\u003eAI capabilities, including Gemini in Security and built-in Google Cloud services, can assist at every security lifecycle stage, from analyzing malicious code to summarizing alerts.\u003c/p\u003e\n"],["\u003cp\u003eUsing AI for security can achieve increasing levels of autonomy: manual, assisted, semi-autonomous, and autonomous, where AI increasingly leads security tasks.\u003c/p\u003e\n"],["\u003cp\u003eAI helps simplify security, making it more accessible to experts and non-experts alike, by summarizing alerts, recommending mitigations, and automating tasks like malware analysis.\u003c/p\u003e\n"],["\u003cp\u003eImplementing secure development practices for AI systems is crucial, and AI can be leveraged for secure coding, data cleaning, and validating tools, which are relevant to all levels of security autonomy.\u003c/p\u003e\n"]]],[],null,["# Use AI for security\n\nThis principle in the security pillar of the\n[Google Cloud Well-Architected Framework](/architecture/framework)\nprovides recommendations to use AI to help you improve the security of your\ncloud workloads.\n\nBecause of the increasing number and sophistication of cyber attacks, it's\nimportant to take advantage of AI's potential to help improve security. AI can\nhelp to reduce the number of threats, reduce the manual effort required by security\nprofessionals, and help compensate for the scarcity of experts in the cyber-security\ndomain.\n\nPrinciple overview\n------------------\n\nUse AI capabilities to improve your existing security systems and processes.\nYou can use\n[Gemini in Security](/security/ai)\nas well as the intrinsic AI capabilities that are built into Google Cloud services.\n\nThese AI capabilities can transform security by providing assistance across\nevery stage of the security lifecycle. For example, you can use AI to do the\nfollowing:\n\n- Analyze and explain potentially malicious code without reverse engineering.\n- Reduce repetitive work for cyber-security practitioners.\n- Use natural language to generate queries and interact with security event data.\n- Surface contextual information.\n- Offer recommendations for quick responses.\n- Aid in the remediation of events.\n- Summarize high-priority alerts for misconfigurations and vulnerabilities, highlight potential impacts, and recommend mitigations.\n\nLevels of security autonomy\n---------------------------\n\nAI and automation can help you achieve better security outcomes when you're\ndealing with ever-evolving cyber-security threats. By using AI for security, you\ncan achieve greater levels of autonomy to detect and prevent threats and improve\nyour overall security posture. Google defines four\n[levels of autonomy](https://services.google.com/fh/files/misc/google-cloud-product-vision-ai-powered-security.pdf#page=6)\nwhen you use AI for security, and they outline the increasing role of AI in\nassisting and eventually leading security tasks:\n\n1. 
- [Enhance threat detection and response with AI](#enhance_threat_detection_and_response_with_ai)
- [Simplify security for experts and non-experts](#simplify_security_for_experts_and_non-experts)
- [Automate time-consuming security tasks with AI](#automate_time-consuming_security_tasks_with_ai)
- [Incorporate AI into risk management and governance processes](#incorporate_ai_into_risk_management_and_governance_processes)
- [Implement secure development practices for AI systems](#implement_secure_development_practices_for_ai_systems)

**Note:** For more information about Google Cloud's overall vision for using Gemini across our products to accelerate AI for security, see the whitepaper [Google Cloud's Product Vision for AI-Powered Security](https://services.google.com/fh/files/misc/google-cloud-product-vision-ai-powered-security.pdf).

### Enhance threat detection and response with AI

This recommendation is relevant to the following [focus areas](/architecture/framework/security#focus_areas_of_cloud_security):

- Security operations (SecOps)
- Logging, auditing, and monitoring

AI can analyze large volumes of security data, offer insights into threat actor behavior, and automate the analysis of potentially malicious code. This recommendation is aligned with the following SAIF elements:

- Extend detection and response to bring AI into your organization's threat universe.
- Automate defenses to keep pace with existing and new threats.

Depending on your implementation, this recommendation can be relevant to the following levels of autonomy:

- **Assisted**: AI helps with threat analysis and detection.
- **Semi-autonomous**: AI takes on more responsibility for the security task.

[Google Threat Intelligence](/security/products/threat-intelligence), which uses AI to analyze threat actor behavior and malicious code, can help you implement this recommendation.

### Simplify security for experts and non-experts

This recommendation is relevant to the following [focus areas](/architecture/framework/security#focus_areas_of_cloud_security):

- Security operations (SecOps)
- Cloud governance, risk, and compliance

AI-powered tools can summarize alerts and recommend mitigations, and these capabilities can make security more accessible to a wider range of personnel. This recommendation is aligned with the following SAIF elements:

- Automate defenses to keep pace with existing and new threats.
- Harmonize platform-level controls to ensure consistent security across the organization.

Depending on your implementation, this recommendation can be relevant to the following levels of autonomy:

- **Assisted**: AI helps you to improve the accessibility of security information.
- **Semi-autonomous**: AI helps to make security practices more effective for all users.

Gemini in [Security Command Center](/security/products/security-command-center) can provide summaries of alerts for misconfigurations and vulnerabilities.
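As a hedged sketch of how such alert summaries might be surfaced, the following example pulls active, high-severity findings with the Security Command Center client library and assembles a digest that could be handed to a Gemini model (as in the earlier sketch). The organization ID is a placeholder, and the filter string is an assumption that may need adjusting for your environment.

```python
# Minimal sketch: pull active, high-severity Security Command Center findings
# and build a digest that could be passed to a generative model for a
# plain-language summary. Assumes the google-cloud-securitycenter client
# library; the parent string and filter are placeholders/assumptions.
from google.cloud import securitycenter

client = securitycenter.SecurityCenterClient()
parent = "organizations/ORGANIZATION_ID/sources/-"  # placeholder: all sources in your org

results = client.list_findings(
    request={
        "parent": parent,
        "filter": 'state="ACTIVE" AND severity="HIGH"',
    }
)

digest_lines = []
for result in results:
    finding = result.finding
    digest_lines.append(
        f"{finding.category} on {finding.resource_name} (severity: {finding.severity.name})"
    )

digest = "\n".join(digest_lines)
print(digest)  # pass this digest to a generative model to summarize for non-experts
```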
### Automate time-consuming security tasks with AI

This recommendation is relevant to the following [focus areas](/architecture/framework/security#focus_areas_of_cloud_security):

- Infrastructure security
- Security operations (SecOps)
- Application security

AI can automate tasks such as analyzing malware, generating security rules, and identifying misconfigurations. These capabilities can help to reduce the workload on security teams and accelerate response times. This recommendation is aligned with the SAIF element about automating defenses to keep pace with existing and new threats.

Depending on your implementation, this recommendation can be relevant to the following levels of autonomy:

- **Assisted**: AI helps you to automate tasks.
- **Semi-autonomous**: AI takes primary responsibility for security tasks, and only requests human assistance when needed.

Gemini in [Google SecOps](/security/products/security-operations) can help to automate high-toil tasks by assisting analysts, retrieving relevant context, and making recommendations for next steps.

### Incorporate AI into risk management and governance processes

This recommendation is relevant to the following [focus area](/architecture/framework/security#focus_areas_of_cloud_security): Cloud governance, risk, and compliance.

You can use AI to build a model inventory and risk profiles. You can also use AI to implement policies for data privacy, cyber risk, and third-party risk. This recommendation is aligned with the SAIF element about contextualizing AI system risks in surrounding business processes.

Depending on your implementation, this recommendation can be relevant to the semi-autonomous level of autonomy. At this level, AI can orchestrate security agents that run processes to achieve your custom security goals.
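To make the model-inventory idea more concrete, the following is a hypothetical sketch of what an inventory entry with a coarse risk profile might capture. The field names and risk factors are illustrative assumptions, not a Google Cloud schema; AI tooling could help populate and maintain such records.

```python
# Hypothetical sketch of a model inventory entry with a simple risk profile,
# illustrating the kind of metadata AI-assisted governance tooling might
# collect. All field names and risk factors are assumptions for illustration.
from dataclasses import dataclass, field


@dataclass
class ModelInventoryEntry:
    name: str
    owner: str
    training_data_sources: list[str]
    handles_personal_data: bool
    third_party_dependencies: list[str] = field(default_factory=list)

    def risk_profile(self) -> dict[str, str]:
        """Derive a coarse risk profile from the inventory metadata."""
        return {
            "data_privacy": "high" if self.handles_personal_data else "low",
            "third_party": "high" if self.third_party_dependencies else "low",
        }


entry = ModelInventoryEntry(
    name="fraud-detection-v3",
    owner="risk-engineering",
    training_data_sources=["transactions-2024"],
    handles_personal_data=True,
    third_party_dependencies=["open-source-embedding-model"],
)
print(entry.risk_profile())  # {'data_privacy': 'high', 'third_party': 'high'}
```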
### Implement secure development practices for AI systems

This recommendation is relevant to the following [focus areas](/architecture/framework/security#focus_areas_of_cloud_security):

- Application security
- AI and ML security

You can use AI for secure coding, cleaning training data, and validating tools and artifacts. This recommendation is aligned with the SAIF element about expanding strong security foundations to the AI ecosystem.

This recommendation can be relevant to all levels of security autonomy, because a secure AI system needs to be in place before AI can be used effectively for security. The recommendation is most relevant to the assisted level, where security practices are augmented by AI.

To implement this recommendation, follow the [Supply-chain Levels for Software Artifacts (SLSA)](https://slsa.dev) guidelines for AI artifacts and use validated container images.
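As one small, tool-agnostic illustration of artifact validation, the following sketch checks that container image references in a Kubernetes-style manifest are pinned by digest rather than a mutable tag. The manifest is a hypothetical example, and this check is only one step; it is not a substitute for full SLSA provenance verification with dedicated tooling.

```python
# Minimal, tool-agnostic sketch: check that container image references in a
# Kubernetes-style manifest are pinned by digest rather than a mutable tag.
# The manifest below is a hypothetical example for illustration.
import re

MANIFEST = """
containers:
  - name: app
    image: us-docker.pkg.dev/example/app@sha256:0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef
  - name: sidecar
    image: us-docker.pkg.dev/example/sidecar:latest
"""

IMAGE_RE = re.compile(r"image:\s*(\S+)")
DIGEST_RE = re.compile(r"@sha256:[0-9a-f]{64}$")

for image in IMAGE_RE.findall(MANIFEST):
    status = "pinned by digest" if DIGEST_RE.search(image) else "NOT pinned (mutable tag)"
    print(f"{image}: {status}")
```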