Prof. Hernan Huwyler, Cyber Compliance and Risk Academic Director, IE Law School

Professor Hernan Huwyler is a governance, risk, and compliance specialist who develops compliance and cybersecurity controls for multinational companies. He currently holds executive roles in Copenhagen, working for organizations such as Danske Bank, Deloitte North Europe, and ISS. Hernan has a background in quantitative risk, predictive modeling, finance, technology, and governance practices, having previously worked for Veolia, ExxonMobil, and other leading companies in the US. He is the academic director of executive programs in compliance at the IE Law School. Hernan is a frequent lecturer on AI, data management, regulations, and auditing at large professional and academic institutions.


As organizations increasingly adopt artificial intelligence solutions such as virtual customer chatbots, recommendation engines, fraud detection systems, predictive models, computer vision, and supply chain optimization, they need to define and monitor specific policies and controls to keep these systems secure and compliant. This article outlines the key policy areas for setting boundaries when using, developing, and procuring AI-based solutions.

Acceptable Use of AI Policy

When guiding employees and consultants who use AI tools to generate text and images, organizations need to establish an acceptable use of AI policy. This policy should permit prompting only with non-restricted data; limit the sharing of company documents, software code, business plans, contracts, and personal data in prompts and custom GPTs; and block access to unapproved web apps that use generative AI. Approved use cases can include drafting emails, summarizing manuals, researching code functions, and illustrating brochures. However, fact-checking is necessary to identify hallucinations, and the approval process for new use cases should be clearly defined. Organizations can also restrict usage to approved processing resources to prevent overspending, require human oversight, and ensure compliance with AI, privacy, consumer protection, intellectual property, and anti-discrimination laws.
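
One way to operationalize the restriction on prompting with sensitive data is to screen prompts before they reach an approved generative AI service. The sketch below is illustrative only: the patterns are simplistic stand-ins for an organization's own data loss prevention classifiers, and send_to_approved_model is a hypothetical call to a sanctioned AI gateway.

```python
import re

# Illustrative patterns for restricted content. A real deployment would rely
# on the organization's data loss prevention (DLP) classifiers, not regexes.
RESTRICTED_PATTERNS = {
    "email_address": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "payment_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "confidential_marking": re.compile(
        r"\b(confidential|internal only|trade secret)\b", re.IGNORECASE
    ),
}

def screen_prompt(prompt: str) -> list[str]:
    """Return the names of restricted-data patterns found in a prompt."""
    return [name for name, rx in RESTRICTED_PATTERNS.items() if rx.search(prompt)]

def submit_prompt(prompt: str) -> None:
    findings = screen_prompt(prompt)
    if findings:
        # Block the request and route it to the exception/approval channel.
        raise ValueError(f"Prompt blocked, restricted data detected: {findings}")
    # send_to_approved_model(prompt)  # hypothetical call to the sanctioned gateway
```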

To develop an effective acceptable use of AI policy, organizations should first gain insight into how AI systems are identified and managed by the asset management team, factoring in future initiatives and projects for adopting AI-powered solutions. By creating a comprehensive catalog of current and prospective AI systems, policy authors can better understand the scope and use of these technologies by employees, contractors, partners, and customers. Collaboration with cross-functional stakeholders, including AI developers, data scientists, application owners, business experts, and compliance officers, is needed to identify use cases, risks, and specific requirements. The policy should clearly outline prohibited activities and establish guidelines for data types, controls, and ownership of AI systems. Additionally, establishing a reporting channel for concerns about unacceptable use and implementing comprehensive training programs to educate employees on responsible AI practices are essential components of an effective policy.
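
A catalog entry can be as simple as a structured record per system. The following is a minimal sketch of what such a record might hold; the fields and example values are assumptions, not a standard schema.

```python
from dataclasses import dataclass, field

@dataclass
class AISystemRecord:
    """One entry in a hypothetical AI system catalog; all fields are illustrative."""
    name: str
    owner: str                   # accountable business or application owner
    status: str                  # e.g. "in production", "pilot", "planned"
    data_categories: list[str]   # kinds of data the system ingests
    user_groups: list[str]       # employees, contractors, partners, customers
    approved_use_cases: list[str] = field(default_factory=list)

catalog = [
    AISystemRecord(
        name="customer-support-chatbot",
        owner="Customer Service",
        status="in production",
        data_categories=["support tickets", "product manuals"],
        user_groups=["customers", "employees"],
        approved_use_cases=["answering product questions"],
    ),
]
```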

Responsible AI Policy

To develop a responsible AI policy aligned with fairness, transparency, privacy, and accountability principles, organizations should involve subject matter experts, top management, employees, customers, and partners. The policy requires a prior assessment of risks to AI principles, such as data quality, security, bias, and potential harm. It establishes clear roles, responsibilities, and decision-making processes for AI use, development, and procurement, and ensures adequate oversight and accountability.

Organizations should consider factors such as the sensitivity of the data used, the criticality of the use case, and the potential for harm. This evaluation should be conducted regularly to keep the organization's risk profile up to date and accurate. The policy needs to allocate control responsibilities to set limits on data collection and usage, implement security controls, and establish procedures for monitoring and reporting potential bias or harm. Organizations should also monitor the performance and impacts of AI systems through regular audits and assessments, which helps identify issues early and make adjustments as necessary to ensure that AI principles are upheld.
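
To make such evaluations repeatable, some organizations reduce the factors to simple ratings that map to review tiers. The scoring below is a hedged illustration: the 1-3 scales and thresholds are assumptions, not an established methodology.

```python
def risk_tier(data_sensitivity: int, criticality: int, harm_potential: int) -> str:
    """Map three 1-3 ratings (low/medium/high) to a review tier for an AI use case."""
    for rating in (data_sensitivity, criticality, harm_potential):
        if rating not in (1, 2, 3):
            raise ValueError("Ratings must be 1 (low), 2 (medium), or 3 (high)")
    score = data_sensitivity + criticality + harm_potential
    # Assumed thresholds: any high harm rating escalates regardless of total score.
    if harm_potential == 3 or score >= 8:
        return "high: formal impact assessment and executive sign-off"
    if score >= 5:
        return "medium: compliance review before deployment"
    return "low: standard approval process"

print(risk_tier(data_sensitivity=3, criticality=2, harm_potential=2))
# -> "medium: compliance review before deployment"
```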

AI developers, modelers, and architects should adhere to the responsible AI policy when developing AI algorithms. The policy should require tracing training data to verify intellectual property rights and obtain consents, restricting training data to exclude age, race, income, and other prohibited categories, and requesting impact assessments on data agreements. Additionally, metric objectives should align with the risk exposure associated with responsible AI principles, and only approved AI techniques for explainability should be used. Organizations should vigilantly monitor for potential misuse and restrict usage to approved processing resources to prevent overspending.
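
As one concrete step, a training pipeline can drop columns for prohibited categories and record what was removed for audit. This sketch assumes pandas and illustrative column names; the authoritative list of prohibited attributes should come from legal review.

```python
import pandas as pd

# Illustrative list; the authoritative set of prohibited attributes should
# come from legal review of the applicable anti-discrimination laws.
PROHIBITED_COLUMNS = {"age", "race", "income", "gender", "religion"}

def prepare_training_data(df: pd.DataFrame, source: str) -> pd.DataFrame:
    """Drop prohibited attribute columns and print an audit line for provenance."""
    dropped = sorted(PROHIBITED_COLUMNS & set(df.columns))
    cleaned = df.drop(columns=dropped)
    # A real pipeline would write this to an audit log tied to the data agreement.
    print(f"source={source!r} dropped_columns={dropped} rows={len(cleaned)}")
    return cleaned
```

Note that dropping direct attributes alone does not eliminate bias, since correlated proxy variables can remain, so the bias metrics and monitoring described above are still required.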

AI Procurement Policy

When assessing potential providers of AI-based solutions, consultancy services, and training data, organizations should establish an AI procurement policy that enforces adherence to responsible AI principles when third parties are involved. The policy should clearly outline the intended uses, systems, and interfaces in the license terms; assign responsibilities for restricting access, users, and permitted uses involving the AI supplier; and establish security and algorithm controls. Additionally, accuracy, precision, uptime, and performance targets should be defined, and human rights and societal impacts should be evaluated. Ownership and licensing rights for AI inputs and outcomes should be specified, and responsibility and liability for any harm or damages should be assigned.
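
Once accuracy, precision, and uptime targets are written into the license terms, they can be checked mechanically against the vendor's reported or measured metrics. The targets below are hypothetical placeholders, not recommended values.

```python
# Hypothetical contractual targets; the actual figures belong in the license terms.
SLA_TARGETS = {"accuracy": 0.95, "precision": 0.90, "uptime": 0.999}

def check_vendor_sla(measured: dict[str, float]) -> list[str]:
    """Return a list of SLA breaches given the vendor's measured metrics."""
    breaches = []
    for metric, target in SLA_TARGETS.items():
        value = measured.get(metric)
        if value is None:
            breaches.append(f"{metric}: not reported")
        elif value < target:
            breaches.append(f"{metric}: {value:.3f} below target {target:.3f}")
    return breaches

print(check_vendor_sla({"accuracy": 0.96, "precision": 0.88, "uptime": 0.9995}))
# -> ['precision: 0.880 below target 0.900']
```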

In general, organizations should identify the potential for misuse based on the sensitivity of the data, the criticality of the use case, and the potential for harm. They should consider regulatory requirements, end-user and societal expectations, and all interested parties, including users, data providers, decision-makers, and suppliers. Assessments should go beyond the "intended uses" to identify vulnerabilities and threats across the lifecycle: the training and feedback data, model training, deployment, use, and decommissioning. Common misuse scenarios include producing deep fakes to propagate misinformation, acquiring knowledge for committing crimes and cyber attacks, supporting terrorist attacks or military invasions, and exploiting consumers by manipulating algorithms to inflate prices during vulnerable conditions.
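
A lifecycle-wide assessment can start from a checklist that maps each stage to example threat questions. The entries below are illustrative prompts for discussion, not an exhaustive threat model.

```python
# Illustrative questions only; a real assessment would be far more detailed.
LIFECYCLE_THREATS = {
    "training and feedback data": "Could the data be poisoned or used without consent?",
    "model training": "Could prohibited attributes or their proxies leak into features?",
    "deployment": "Could crafted inputs extract training data or bypass safeguards?",
    "use": "Could outputs be repurposed for deception, such as deep fakes?",
    "decommissioning": "Are model weights and training data securely destroyed?",
}

def unassessed_stages(completed: set[str]) -> dict[str, str]:
    """Return the lifecycle stages whose threat questions are still open."""
    return {s: q for s, q in LIFECYCLE_THREATS.items() if s not in completed}

print(unassessed_stages(completed={"deployment", "use"}))
```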

Adopting AI securely and compliantly requires organizations to establish clear boundaries, implement acceptable use policies, set responsible AI policies for developers, create AI procurement policies, and identify possible misuse scenarios. By following these best practices, organizations can improve the productivity of their processes and innovate in their services with AI systems while managing risks and ensuring compliance with laws, contracts, and societal expectations.
