📢 Malaysia’s Ministry of Science, Technology, and Innovation, the Malaysian Communications & Multimedia Commission, and the Department of Personal Data Protection have jointly published Malaysia’s national AI guidelines titled ‘Malaysia National Guidelines on AI Governance & Ethics’. Take a look below for my key takeaways (but first the important stuff) 👇: Let’s connect! 🤝Are you an organisation? Get in touch here: https://lnkd.in/e6GYrsrf 📰 Sign up to the SecureAI newsletter here: https://lnkd.in/ebanc77m 🗣️Book Rafah Knight for your next speaking engagement/ guest lecture here: https://lnkd.in/efFDXBPT 💡AI Security Takeaways… 1️⃣ It is crucial to recognise the risks associated with AI systems (e.g., data sources, system design, and the deployment context). 2️⃣ It is important to understand how adversaries could manipulate AI systems- this is vital for building strong security practices. 3️⃣ AI systems must be reliable, robust, and tested against various risk scenarios to mitigate vulnerabilities. 4️⃣ It is important to implement strong encryption, access controls, and regular security assessments to protect AI systems from cyber threats. 5️⃣ It is important to ensure data privacy of individuals is protected, this can be achieved by using de-identification and anonymisation in the design phase. 6️⃣ Decisions made using AI system must be understandable this is essential to build trust and accountability. 7️⃣ Having mechanisms like human-in-the-loop (HITL) provides the opportunity for human intervention where required. 8️⃣ It is crucial that AI does not discriminate against any particular group of people ; and is accessible to diverse communities. 9️⃣ AI systems must be compliant with with data protection and cybersecurity regulations. 🔟 It is important to conduct audits, risk assessments, and updates to AI systems. #dataprivacy #security #GenAI #Cybersecurity #cyber #ResponsibleAI #EthicalAI #TrustworthyAI #TrustandSafety #OnlineSafety #AI
Rafah Knight’s Post
More Relevant Posts
-
Had the pleasure of watching Rafah Knight speak at the techUK Cyber Den yesterday. Rafah has some practical insights on the risks associated with AI systems. Malaysia are really taking AI and Cyber Security seriously with the National Guidelines on AI Governance & Ethics and The Malaysia Cyber Security Act 2024. #procheckup #malaysia #techuk #secureai
📢 Malaysia’s Ministry of Science, Technology, and Innovation, the Malaysian Communications & Multimedia Commission, and the Department of Personal Data Protection have jointly published Malaysia’s national AI guidelines titled ‘Malaysia National Guidelines on AI Governance & Ethics’. Take a look below for my key takeaways (but first the important stuff) 👇: Let’s connect! 🤝Are you an organisation? Get in touch here: https://lnkd.in/e6GYrsrf 📰 Sign up to the SecureAI newsletter here: https://lnkd.in/ebanc77m 🗣️Book Rafah Knight for your next speaking engagement/ guest lecture here: https://lnkd.in/efFDXBPT 💡AI Security Takeaways… 1️⃣ It is crucial to recognise the risks associated with AI systems (e.g., data sources, system design, and the deployment context). 2️⃣ It is important to understand how adversaries could manipulate AI systems- this is vital for building strong security practices. 3️⃣ AI systems must be reliable, robust, and tested against various risk scenarios to mitigate vulnerabilities. 4️⃣ It is important to implement strong encryption, access controls, and regular security assessments to protect AI systems from cyber threats. 5️⃣ It is important to ensure data privacy of individuals is protected, this can be achieved by using de-identification and anonymisation in the design phase. 6️⃣ Decisions made using AI system must be understandable this is essential to build trust and accountability. 7️⃣ Having mechanisms like human-in-the-loop (HITL) provides the opportunity for human intervention where required. 8️⃣ It is crucial that AI does not discriminate against any particular group of people ; and is accessible to diverse communities. 9️⃣ AI systems must be compliant with with data protection and cybersecurity regulations. 🔟 It is important to conduct audits, risk assessments, and updates to AI systems. #dataprivacy #security #GenAI #Cybersecurity #cyber #ResponsibleAI #EthicalAI #TrustworthyAI #TrustandSafety #OnlineSafety #AI
To view or add a comment, sign in
-
Kudos to Malaysia, one of the jurisdictions to watch in Asia Governance and Ethics will prove to be a good starting point #AI #Governance #Ethics #Malaysia #RiskManagement Advisory MERKUR GROUP MERKUR eSOLUTIONS
📢 Malaysia’s Ministry of Science, Technology, and Innovation, the Malaysian Communications & Multimedia Commission, and the Department of Personal Data Protection have jointly published Malaysia’s national AI guidelines titled ‘Malaysia National Guidelines on AI Governance & Ethics’. Take a look below for my key takeaways (but first the important stuff) 👇: Let’s connect! 🤝Are you an organisation? Get in touch here: https://lnkd.in/e6GYrsrf 📰 Sign up to the SecureAI newsletter here: https://lnkd.in/ebanc77m 🗣️Book Rafah Knight for your next speaking engagement/ guest lecture here: https://lnkd.in/efFDXBPT 💡AI Security Takeaways… 1️⃣ It is crucial to recognise the risks associated with AI systems (e.g., data sources, system design, and the deployment context). 2️⃣ It is important to understand how adversaries could manipulate AI systems- this is vital for building strong security practices. 3️⃣ AI systems must be reliable, robust, and tested against various risk scenarios to mitigate vulnerabilities. 4️⃣ It is important to implement strong encryption, access controls, and regular security assessments to protect AI systems from cyber threats. 5️⃣ It is important to ensure data privacy of individuals is protected, this can be achieved by using de-identification and anonymisation in the design phase. 6️⃣ Decisions made using AI system must be understandable this is essential to build trust and accountability. 7️⃣ Having mechanisms like human-in-the-loop (HITL) provides the opportunity for human intervention where required. 8️⃣ It is crucial that AI does not discriminate against any particular group of people ; and is accessible to diverse communities. 9️⃣ AI systems must be compliant with with data protection and cybersecurity regulations. 🔟 It is important to conduct audits, risk assessments, and updates to AI systems. #dataprivacy #security #GenAI #Cybersecurity #cyber #ResponsibleAI #EthicalAI #TrustworthyAI #TrustandSafety #OnlineSafety #AI
To view or add a comment, sign in
-
AI has become a buzzword in recent times. While holding great potential for efficiency and innovation, also carries certain inherent threats such as vulnerability to cyber security risks. In light of this, the Cyber Security Agency of Singapore ("CSA") has developed the Guidelines on Securing AI Systems ("Guidelines") to help system owners secure AI throughout its lifecycle. In this legal update, our Technology, Media & Telecommunications Practice Group provides a deep dive into the Guidelines and highlights its key features, which include: ◾ Understanding the various AI threats, including classical cybersecurity risks and novel attacks such as Adversarial Machine Learning (ML) ◾ Four-step risk assessments process to identify potential risks and priorities ◾ Guidelines for securing the five lifecycle stages of the AI System CSA will be conducting a public consultation on the Guidelines and the Companion Guide until 15 September 2024. 🔊 Follow us on LinkedIn to stay updated on all the latest legal updates! 🔊 Read our latest legal update here: https://lnkd.in/gVc3uNdx Check out Rajah & Tann Asia’s AI Tool Kit here: https://lnkd.in/gK5BY39q Learn more about our Technology, Media & Telecommunications Practice: https://lnkd.in/gwQtH_Nx Rajesh Sreenivasan: https://lnkd.in/g-qir9C Steve Tan: https://lnkd.in/gZ6VfEtz Benjamin Cheong: https://lnkd.in/dwmRMYmb #AI #Security #ResponsibleAI #Innovation #DigitalTrust #cybersecurity #policy #Singapore #legalupdate #guide #legal #lawyer #law #RajahTannAsia #RTA #LawyersWhoKnowAsia #LWKA
To view or add a comment, sign in
-
A framework for AI security governance was released at China Cybersecurity Week in Guangzhou. Developed by China's National Technical Committee 260 on Cybersecurity, the framework outlines principles for managing AI security, including accommodative and prudent approaches to safety, swift risk governance, and promoting cooperation for joint governance. It analyzes AI's risks and provides technical and preventive measures for secure AI application. The framework aims to foster social participation and create a safe, reliable, and transparent environment for AI development. Source : https://lnkd.in/gzEMgcM8 #AISecurity #CyberGovernance #TechSafety #AIStandards
To view or add a comment, sign in
-
AI & Cyber Security Updates 🤖 With safety at the top of the agenda of the AI Seoul Summit starting today, which builds upon the legacy of the Bletchley AI Safety Summit, it is worth watching this space 👀. Note that the UK's AI Safety Institute is in growth mode with their first overseas office to open in San Francisco this summer. 🔐The Department for Science, Innovation & Technology (DSIT) has developed a voluntary Code of Practice based on the National Cyber Security Centre’s (NCSC) November 2023 guidelines to ensure a 'secure by design' approach across all AI technologies. The UK Government intends for this Code of Practice to be developed as a global standard by the European Telecommunications Standards Institute (ETSI). It will be interesting to see whether this prompts the European Union Agency for Cybersecurity (ENISA) to accelerate their plans to create a voluntary scheme. For any of you looking at AI tools, you are invited to fill in an online survey questionnaire until 10 July 2024 (see 📝 link below) to provide your views on the Code of Practice principles tackling every stage of the AI life cycle (secure design, development, deployment and maintenance). #AI #AISeoulSummit #cybersecurity
To view or add a comment, sign in
-
AI & Cyber Security Updates 🤖 With safety at the top of the agenda of the AI Seoul Summit starting today, which builds upon the legacy of the Bletchley AI Safety Summit, it is worth watching this space 👀. Note that the UK's AI Safety Institute is in growth mode with their first overseas office to open in San Francisco this summer. 🔐The Department for Science, Innovation & Technology (DSIT) has developed a voluntary Code of Practice based on the National Cyber Security Centre’s (NCSC) November 2023 guidelines to ensure a 'secure by design' approach across all AI technologies. The UK Government intends for this Code of Practice to be developed as a global standard by the European Telecommunications Standards Institute (ETSI). It will be interesting to see whether this prompts the European Union Agency for Cybersecurity (ENISA) to accelerate their plans to create a voluntary scheme. For any of you looking at AI tools, you are invited to fill in an online survey questionnaire until 10 July 2024 (see 📝 link below) to provide your views on the Code of Practice principles tackling every stage of the AI life cycle (secure design, development, deployment and maintenance). #AI #AISeoulSummit #cybersecurity https://lnkd.in/es2jx8jt
Call for views on the Cyber Security of AI
gov.uk
To view or add a comment, sign in
-
AI & Cyber Security Updates 🤖 With safety at the top of the agenda of the AI Seoul Summit starting today, which builds upon the legacy of the Bletchley AI Safety Summit, it is worth watching this space 👀. Note that the UK's AI Safety Institute is in growth mode with their first overseas office to open in San Francisco this summer. 🔐The Department for Science, Innovation & Technology (DSIT) has developed a voluntary Code of Practice based on the National Cyber Security Centre’s (NCSC) November 2023 guidelines to ensure a 'secure by design' approach across all AI technologies. The UK Government intends for this Code of Practice to be developed as a global standard by the European Telecommunications Standards Institute (ETSI). It will be interesting to see whether this prompts the European Union Agency for Cybersecurity (ENISA) to accelerate their plans to create a voluntary scheme. For any of you looking at AI tools, you are invited to fill in an online survey questionnaire until 10 July 2024 (see 📝 link below) to provide your views on the Code of Practice principles tackling every stage of the AI life cycle (secure design, development, deployment and maintenance). #AI #AISeoulSummit #cybersecurity
Call for views on the Cyber Security of AI
gov.uk
To view or add a comment, sign in
-
Artificial intelligence is redefining how we approach cybersecurity, significantly improving data and systems protection. The implementation of AI in this field has proven its ability to automate and optimize processes, providing a more personalized and proactive approach to emerging threats. This technology not only enhances the speed and accuracy in detecting attacks but also has the capacity to continuously learn, adapting to the increasingly complex and sophisticated challenges we face daily. It is crucial to acknowledge the need for balance, ensuring that human judgment brings essential control and insight to these technological innovations. Among the many benefits, AI also presents significant ethical dilemmas. Data privacy, potential bias in algorithms, and reliance on automated systems are critical issues that must be addressed carefully. Collaboration between technologists, regulators, and ethics experts is essential. The goal is to maximize the benefits of AI while protecting individual privacy, ensuring a secure and responsible digital future. Our preparation for a safer digital landscape requires not only a conscious adoption of technology but also the creation of regulations and norms that evolve in tandem with these advancements. It is vital to foster a trusted environment where AI collaborates effectively with human oversight to provide comprehensive and adaptive protection. #ArtificialIntelligence #Cybersecurity #DigitalProtection #TechnologicalInnovation #DigitalEthics https://ow.ly/q6Jz50TPFu0
To view or add a comment, sign in
-
A lot of banter around how to secure AI in the last few months. Here’s a comprehensive framework covering people, process and technology aspects around securing AI and protecting your private Data.
The second pillar of our Responsible AI Framework is all about strengthening AI model security with robust controls, vulnerability testing, and AI red teaming. All this, while ensuring privacy and regulatory compliance. Are you ready to take the first step toward aligning with the UAE government’s Charter for the Development and Use of Artificial Intelligence? Our experts are here to guide you. Write to us at contactus@cpx.net #ArtificialIntelligence #AI #EthicalAI #ResponsibleAI #TransparentAI #AIGovernance #AIinUAE #Cybersecurity
To view or add a comment, sign in
-
🎙️The Intersection of the Cyber Resilience Act and AI Act: AI Requirements in Cybersecurity Given that the AI Act recently entered into force, building on the regulatory framework established by the Cyber Resilience Act (CRA), the European Union is solidifying its approach to AI in cybersecurity. Both acts address the critical role of AI in cybersecurity and impose stringent requirements to ensure safety and trust. 💡Essential AI cybersecurity requirements: ♟️Both the CRA and AI Act categorize AI systems based on their potential impact on health, safety, and fundamental rights. ♟️AI systems must provide clear information and maintain accountability, especially those interacting with users or processing sensitive data. ♟️High-risk AI systems must comply with essential cybersecurity requirements set out in both regulations, ensuring robust data quality, transparency, human oversight, and risk mitigation measures. ♟️The CRA states that AI systems embedded in digital products be secure from design through their entire lifecycle, ensuring continuous protection against cyber threats. 💡Integration of AI Categories in CRA: Products with digital elements classified as high-risk AI systems according to Article 6 of the AI Act must comply with the essential requirements set out in the CRA. Moreover, when these high-risk AI systems meet the essential CRA requirements, they should be considered in compliance with the cybersecurity requirements of Article 15 of the AI Act too. Conformity assessment procedures for these high-risk AI systems should follow the provisions of Article 43 of the AI Act, ensuring no reduction in the necessary level of assurance for critical products with digital elements. 👉These integrated regulations ensure that AI technologies not only drive innovation but also uphold the highest standards of security and ethics, reflecting the EU’s commitment to a resilient digital environment. #CyberResilienceAct #AIAct #AI #CyberSecurity
To view or add a comment, sign in
Thank you! will add this to our free Global AI Regulation (and policy) tracker.