This document is a work-in-progress draft of Strise's AI Strategy.

Strise's position on the AI Act

Intro

Acknowledging ongoing concerns about businesses' use of AI technologies, we wish to offer a clear position on our interpretation of the AI risks relevant to Strise. This interpretation aligns with the classifications proposed in the EU Artificial Intelligence Act. This document describes our safe and considerate use of AI at Strise, which aims to maximize user value in handling Anti-Money Laundering (AML) and Know Your Customer (KYC) related tasks.

This document specifically refers to the functionality and capabilities offered in Strise Review and Strise Monitor, the two Strise modules built to support AML and compliance processes.

The AI Act

The Artificial Intelligence Act, proposed by the European Union, aims to mitigate potential risks posed by AI systems by establishing a clear legal framework for their development, deployment, and usage. The AI Act classifies AI systems into categories based on the severity of potential impact.

Unacceptable-risk AI Systems

These systems present a clear risk of causing considerable harm by violating fundamental rights under EU law. This includes any AI system that deploys subliminal techniques beyond a user's consciousness to significantly distort behavior or exploit vulnerabilities.

High-risk AI Systems

These AI applications could result in substantial adverse effects if they malfunction, are used incorrectly, or are used with malicious intent. Such AI systems are typically found in critical sectors such as healthcare (e.g., surgical robots), transport (e.g., autonomous vehicles), or legal contexts (e.g., predictive policing).

Limited-risk AI Systems

This category includes AI systems that are subject to transparency obligations. To illustrate, this would encompass systems like chatbots, wherein users must be made aware of their interaction with an AI system, not a human.

Minimal-risk AI Systems

These are AI systems that are free to use because they pose minimal or no risk to users' rights or safety, such as spam filters and video game bots.

Strise’s AI Risk Classification position

Based on these risk categories, Strise falls into the third category, Limited-risk AI Systems, and is therefore subject to the corresponding obligations under the AI Act. Because the Strise system also uses generative AI, it must additionally comply with the Act's obligations governing generative AI.

AI Act: Limited Risk and Generative AI Obligations

According to the EU AI Act, Limited-risk AI systems must comply with minimal transparency requirements that allow users to make informed decisions. Users should be made aware when they are interacting with AI, and after interacting with such an application, they can decide whether they want to continue using it.

Moreover, under the EU AI Act, systems that use generative AI (e.g., ChatGPT) must comply with additional transparency requirements: disclosing that content was generated by AI, designing the model to prevent it from generating illegal content, and potentially publishing summaries of any copyrighted data used to train the underlying model.