
Anti-Money Laundering
Consulting Services & Strategies


Interview with SafetyDetectives: A Deep Dive into AML and Data Privacy

In a candid interview with SafetyDetectives, Amber Scott and David Vijan, co-founders of Outlier Compliance Group, delve into the intricacies of anti-money laundering (AML) and data privacy in the evolving landscape of financial regulation. With backgrounds as former bankers turned compliance experts, Amber and David offer a unique perspective on the challenges and innovations shaping AML strategies today.

Can you please introduce yourself and talk about your role at Outlier?

Amber: Hi, I’m Amber Scott, the co-founder and CEO at Outlier Compliance Group. David and I were both previously bankers, working in the compliance space. For me, the idea for Outlier started once I left banking and started working in the consulting space. I saw how the leverage model worked, which was the idea that, essentially, if you throw enough smart folks at a problem, you can solve it. This was really different from the approach that Malcolm Gladwell espoused in his book Outliers, which is the idea that to be terribly good at something, you have to practice it a lot, roughly 10,000 hours.

When Outlier was founded, the idea was really that everyone on the team would have at least 10,000 hours of in-house compliance experience, so that people would understand compliance, how organizations work, and how operationalizing those concepts really worked in the long term.

David: Hi, I am David Vijan. I am a co-founder and CRO here at Outlier. We are an AML consulting firm, a compliance consulting firm, that specializes in AML, privacy, and other regulatory compliance consulting matters.

With financial crime tactics becoming more sophisticated, what sets your AML solution apart from others in detecting these threats?

Amber: I think it’s important to preface that our solutions are really consulting services, as opposed to software. When it comes to software, I won’t say that we’re exactly software agnostic, because we do recommend solutions and we always look for those solutions to be a good fit for our clients. However, in theory, we could work with any software solution.

I think that there are always two really important considerations.

  1. Does the software in question meet the regulatory requirements? That is, does it meet the regulator’s expectations in terms of what needs to be implemented?
  2. Does it manage the risk effectively?

Ideally, both of those conditions are met.

How does artificial intelligence and machine learning play a role in your solution’s detection and reporting capabilities?

David: As Amber mentioned, our wheelhouse is not in software-related solutions per se. AI in general is great, but we do have to remember the rule of garbage in, garbage out. That’s definitely something that we have to keep in mind here. AI really has to be understood by compliance staff.

We’ve seen compliance teams play around with AI, and they’re trying to develop policies and procedures using it. And while it does spit out something, it doesn’t have the level of detail that would meet the expectations of the regulator. It wouldn’t pass muster.

That’s a very important piece to the process, as it needs to be explainable to the regulator, but also meet their requirements and expectations. Because at the end of the day, it’s the regulator’s expectations that we’re really trying to satisfy.

Also, with AI, the rationale for decisions needs to be translatable into human-readable language. If you present something to someone and they’re not able to recreate or understand it, it doesn’t really meet our regulatory obligations or do what we need it to do.

Amber: This is incredibly important in an examination context with your regulator. If you’re an in-house compliance person, you’re going to be called upon to explain how you came to a certain decision. The answer can’t be “I did what the robot told me to do”, “it came out of a black box”, or “we don’t understand the rationale for a decision”. It has to be something that you can translate into human-readable, human-understandable language, and that needs to be part of your documentation all the way down.
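As a rough illustration of the documentation point Amber makes, here is a minimal Python sketch of what a human-readable decision record for an automated alert disposition might capture. All field names and values are hypothetical, not any particular vendor’s or regulator’s format:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AlertDecisionRecord:
    """One reviewable record per automated alert disposition."""
    alert_id: str
    decision: str                  # e.g. "escalate" or "close"
    model_version: str             # which model or ruleset produced the score
    score: float
    reasons: list = field(default_factory=list)  # plain-language factors
    reviewed_by: str = ""          # human analyst who confirmed the outcome
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

    def narrative(self) -> str:
        """Render the rationale in language an examiner can follow."""
        reason_text = "; ".join(self.reasons) or "no factors recorded"
        return (f"Alert {self.alert_id} was marked '{self.decision}' "
                f"(score {self.score:.2f}, model {self.model_version}) "
                f"because: {reason_text}. Reviewed by {self.reviewed_by}.")

record = AlertDecisionRecord(
    alert_id="A-1042",
    decision="escalate",
    model_version="tm-rules-2.3",
    score=0.91,
    reasons=["rapid movement of funds after account opening",
             "counterparty in a high-risk jurisdiction"],
    reviewed_by="analyst.jsmith",
)
print(record.narrative())
```

The point of the `narrative()` method is exactly the one made above: every automated outcome carries its rationale in words a human can repeat back to a regulator.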

How do you approach data privacy and security, especially when dealing with sensitive financial data?

Amber: I think it’s important to acknowledge that there’s a natural tension between anti-money laundering (AML) and privacy. For us at Outlier, as a service firm, we consider it very important to minimize the amount of data and personal information that we ingest, particularly when we’re talking about our customers’ customers.

However, that’s not always practical or even possible for our clients who have very different requirements. From their perspective, it’s always important to understand:

  • Where the data lives across various systems
  • How you are using that data
  • How different systems are communicating with one another, both your own internal systems and your vendor systems, that you’re going to be using to do various functions.

Having a solid mapping of where that personal information, or PI, lives and how that PI is used is incredibly important, as is keeping that mapping updated on a regular basis.

At the other end, it’s not just about knowing what’s happening during that lifecycle; you also need a plan to anonymize or purge PI that’s no longer required or no longer in use.

There’s this funny thing about data: when we’re holding on to personal or sensitive information, the risk associated with that data never goes away. It can actually increase over time, while the usefulness of that data decreases over time. So you have something that stays risky but doesn’t stay useful to you. That alone should be a motivator to look at how we age off data and move away from retaining forever data that no longer has a use for us, and that we couldn’t justify holding if it became problematic.

David: Those are very important pieces. In our consulting services, we often see clients that don’t know where their data lives. It’s really important to understand where it’s mapped. Under privacy legislation, and we’re not really going to get into that, there are principles, and one of them is limited use. Consent is given for a certain purpose, and sometimes we hear the business say, “Oh, well, we’ll use the data for something else later.” Well, there’s a whole other consent requirement you have to go back for. To Amber’s point, is there really a reason to hang on to data as it ages? Yes, in some cases there are regulatory requirements, but we’ve seen data that goes back 10 to 20 years still in organizations’ systems. Is there a reason it’s still there, and what is the risk? It’s probably not worth hanging on to it that long.
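The retention point above can be sketched in code: records are checked against a per-type retention policy and flagged for anonymization or purge once they age out. The policy values, record shapes, and type names below are illustrative assumptions, not regulatory guidance:

```python
from datetime import date

# Hypothetical retention limits per record type, in days.
RETENTION_POLICY = {
    "transaction_record": 5 * 365,   # e.g. a regulatory retention period
    "marketing_profile": 2 * 365,    # shorter: no regulatory requirement
}

def records_to_purge(records, today=None):
    """Flag PI records past their retention window for anonymization or purge."""
    today = today or date.today()
    flagged = []
    for rec in records:
        limit_days = RETENTION_POLICY.get(rec["type"])
        if limit_days is None:
            continue  # unknown type: route to manual review, don't auto-purge
        age_days = (today - rec["collected_on"]).days
        if age_days > limit_days:
            flagged.append(rec["id"])
    return flagged

records = [
    {"id": "r1", "type": "marketing_profile",
     "collected_on": date(2015, 6, 1)},
    {"id": "r2", "type": "transaction_record",
     "collected_on": date(2024, 1, 15)},
]
print(records_to_purge(records, today=date(2025, 1, 1)))  # → ['r1']
```

Note the design choice of skipping unmapped record types rather than purging them: as both speakers stress, you can only age off data you have actually mapped.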

Can you discuss the significance of real-time monitoring versus batch processing in AML detection and reporting?

David: There definitely is value in having both approaches, and often you need both. Real-time is going to help with certain things, such as fraud in progress, that need to be caught right away. Examples are listed-person and sanctions hits. Those are transactions that you want to stop, and that’s where real-time is really important.

But sometimes batch processing is needed because it can learn. It detects longer transaction patterns, which will actually help you with different types of alerts. It’s important to look at those patterns over time and to adjust the parameters, so that the system adapts over time and recurring patterns become the normal baseline.
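As a toy illustration of the two modes David describes, the sketch below screens a single transaction in real time (block before settlement) and scans a period of transactions in batch (a simple structuring pattern). The watchlist entry, thresholds, and field names are hypothetical:

```python
SANCTIONED_NAMES = {"ACME SHELL LTD"}  # illustrative watchlist entry

def screen_real_time(txn):
    """Real-time check: block listed-person/sanctions hits before settlement."""
    if txn["counterparty"].upper() in SANCTIONED_NAMES:
        return "BLOCK"
    return "ALLOW"

def batch_structuring_alerts(txns, threshold=10_000, window_total=9_000):
    """Batch check: flag customers whose sub-threshold transactions sum
    close to a reporting threshold over the review period."""
    totals = {}
    for t in txns:
        if t["amount"] < threshold:
            totals[t["customer"]] = totals.get(t["customer"], 0) + t["amount"]
    return [c for c, total in totals.items() if total >= window_total]

txns = [
    {"customer": "C1", "counterparty": "Acme Shell Ltd", "amount": 5000},
    {"customer": "C1", "counterparty": "Local Shop", "amount": 4500},
]
print(screen_real_time(txns[0]))       # → BLOCK
print(batch_structuring_alerts(txns))  # → ['C1']
```

The real-time function must answer instantly per transaction; the batch function only makes sense across a window of history, which is exactly why production systems typically need both, with the batch parameters tuned as patterns shift.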

Amber: Absolutely. Nothing stays the same, except for the idea that things will change eventually.

That segues nicely into our next question. How do you see the future of AML evolving, especially with the advent of new payment methods and financial technologies?

I think it’s important to say that monitoring at scale is impossible without technology solutions. We still, from time to time, see cases where people say all of their monitoring is manual. I think we’re coming into a space where that’s not going to be the expectation of regulators at all, and it’s important to note that. There is an expectation that we’re using some kind of technology solution, and those solutions are going to continue to evolve.

The best solutions, in my opinion, consider the whole scope of a customer’s activity. This means their activity across different products and services. For example, if a customer has a mortgage, checking account, and credit card with us, we’re not looking at the risks of each of those products in isolation. We’re seeing the scope of the activity across all the products and services that the customer is using with us.

We’re also looking at the changes in patterns over time. We’re bringing in open-source intelligence or OSINT. So, what do we know about that customer from different potential sources? Where there’s virtual currency, we’re also looking at the risks that can be incurred from on-chain activity. If we know that a certain wallet is associated with that customer, we look at the risk of that wallet, not just in the transactions that are happening with our institution, but we’re able to monitor the general level of that wallet over time and what that wallet is interacting with.

Similarly, we can see connections between customers, so groups of people and entities that transact with each other, people that may own companies or entities together, sit on boards together, those types of things where you have multiple touchpoints between individuals. I think, in particular, if there’s one of those individuals that suddenly becomes high risk, that’s something that can trigger us to take a look at the other individuals to see if they may be involved in similar activity that would also change their risk ratings.

I think one of the biggest challenges is still data across various regions and across various languages. As we move more towards open banking and open data, I think this becomes very interesting because there are a number of external data points that we’ll be able to pull in and use in terms of monitoring and risk in very novel ways that we don’t necessarily see today.

 

Changes to PIPEDA, Canada’s Private-Sector Privacy Law

Background

On November 17, 2020, Bill C-11, the Digital Charter Implementation Act, 2020 was introduced. If passed, the proposed Act would repeal part 1 of the Personal Information Protection and Electronic Documents Act (PIPEDA) and a new Consumer Privacy Protection Act (CPPA) would regulate the way in which personal information is collected, used and disclosed by private sector organizations in the course of their commercial activity.

The bill would also create an administrative tribunal to hear appeals of decisions made by the Privacy Commissioner of Canada and impose penalties. Currently, such appeals are heard in federal court.

As technology continues to evolve, the proposed Act is meant to protect Canadians by creating and enhancing current obligations, including:

  • Increasing control and transparency when Canadians’ personal information is handled by companies;
  • Giving Canadians the freedom to move their personal information from one organization to another;
  • Ensuring that Canadians have the ability to request that their personal information be destroyed;
  • Providing the Privacy Commissioner with broad order-making powers, including the ability to force an organization to comply; and
  • Imposing fines of up to 5% of revenue or $25 million.

What Will Change?

The proposed Act brings about many changes. Highlighted below are what we feel are some of the most significant:

Privacy Program: Organizations would be required to maintain a privacy management program setting out policies and procedures the organization takes to protect and deal with personal information. The Office of the Privacy Commissioner (OPC) could request these procedures at any time.

Consent: The Act adopts elements of the OPC’s Guidelines for obtaining meaningful consent, creating transparency requirements.

Exceptions: The Act defines a list of “business activities” for which an organization can process personal information without consent.

Transfers to Service Providers: The Act would establish that consent is not required to transfer personal information to a service provider.

Automated Decision-Making: Under the Act, if an organization uses an “automated decision system”, it must document how a prediction, recommendation or decision about a person is made.

Data Mobility: The Act would require that, on the request of an individual, an organization disclose, as soon as feasible, the personal information it holds about that individual to another organization, if both organizations are subject to a “data mobility framework”.

Disposal of PI: The Act would provide individuals with an explicit right to request the deletion of their personal information.

Revised OPC powers: The OPC would have the authority to issue enforcement orders and recommend penalties. Currently, the OPC only has the power to recommend measures after an investigation.

Private Right of Action: The Act would allow individuals to sue companies within two years following a regulatory investigation. The individual would have to prove loss in order to recover damages.

Codes of practice and certification: The Act would allow for the creation of codes of practice and certification programs to facilitate compliance with the Act, which would be subject to approval by the OPC.

What Do We Do?

For now, we wait, but plan for changes to your privacy program in the years ahead. If the bill moves forward, the draft legislation will be open for a comment period in which you are encouraged to submit comments. The OPC released a statement on November 19, 2020 related to the bill. Our guess is that we will see amendments based on the OPC’s statement.

We’re Here To Help

If you have questions related to this or privacy legislation in general, please contact us.

Meaningful Consent

The Office of the Privacy Commissioner of Canada’s Guidelines for obtaining meaningful consent became effective on January 1, 2019. The new guideline builds on earlier work examining the current state of consent in Canada (see the Background section below), and is meant to help businesses distinguish between the things an organization “must do” to obtain meaningful consent and the things an organization “should do” related to consent.

The guideline comprises seven guiding principles for obtaining meaningful consent. These are:

  1. Emphasize key elements (What personal information is being collected, with whom personal information is being shared, for what purposes personal information is collected, used or disclosed and risk of harm and other consequences);
  2. Allow individuals to control the level of detail they get and when;
  3. Provide individuals with clear options to say ‘yes’ or ‘no’;
  4. Be innovative and creative;
  5. Consider the consumer’s perspective;
  6. Make consent a dynamic and ongoing process; and
  7. Be accountable: Stand ready to demonstrate compliance.

Consent – Must Dos

The new guideline lists the following things an organization must do in order to meet its obligations related to consent:

  1. Make privacy information readily available in complete form, while giving emphasis or bringing attention to the four key elements: what personal information is being collected, with sufficient precision for individuals to meaningfully understand what they are consenting to; with what parties personal information is being shared; for what purposes personal information is being collected, used or disclosed, in sufficient detail for individuals to meaningfully understand what they are consenting to; and risks of harm and other consequences.
  2. Provide information in manageable and easily-accessible ways.
  3. Make available to individuals a clear and easily accessible choice for any collection, use or disclosure that is not necessary to provide the product or service.
  4. Consider the perspective of your consumers, to ensure consent processes are user-friendly and generally understandable.
  5. Obtain consent when making significant changes to privacy practices, including use of data for new purposes or disclosures to new third parties.
  6. Only collect, use or disclose personal information for purposes that a reasonable person would consider appropriate, under the circumstances.
  7. Allow individuals to withdraw consent (subject to legal or contractual restrictions).

There are also requirements related to the form of consent and consent for children under the age of 13. 

Background

The new guideline builds on previous publications examining the current state of consent.

In May 2016, the Office of the Privacy Commissioner of Canada (OPC) published a discussion paper exploring potential enhancements to the Personal Information Protection and Electronic Documents Act (PIPEDA). The paper asked organizations, individuals and other interested parties to provide comments related to key issues and potential solutions to the consent model as currently formulated.

On June 15, 2017 the Office of the Privacy Commissioner of Canada (OPC) published a report on qualitative public opinion research conducted with Canadians on the issue of consent under the PIPEDA. The purpose of the research was to understand Canadians’ opinions, attitudes, and concerns with respect to consent.

It was noted that the question of consent became a recurring theme in discussions and emerged as the key measure used by participants for assessing what are acceptable or not acceptable uses of personal information by companies. There was widespread agreement among participants that consent implies both understanding and acceptance of terms and conditions related to the collection and use of their personal information.

On September 21, 2017, the OPC also published their Report on Consent in their 2016-17 Annual Report to Parliament. The report outlined recommendations to address consent challenges posed by the digital age.

Keep In Mind

Consent is one of the foundational elements of PIPEDA. To ensure your organization is always meeting requirements related to consent, you should be able to answer yes to (and evidence) the following questions from the OPC’s PIPEDA Self-Assessment Tool related to consent, regardless of the types of products or services you offer:

  • You obtain customer consent for any collection, use or disclosure of personal information.
  • If you don’t obtain customer consent for the collection, use and disclosure of personal information, you have determined that it is not required under s.7 of PIPEDA.
  • You make reasonable efforts to ensure that clients and customers are notified of the purposes for which personal information will be used or disclosed.
  • You do not require clients and customers to consent to the collection, use or disclosure of personal information beyond what is necessary to fulfill explicitly specified and limited purposes as a condition of supplying a product or service.
  • You assess the purposes and limit the collection, use and disclosure of personal information when it is required as a condition for obtaining a product or service.
  • You obtain consent through lawful and fair means.
  • You allow a client or customer to withdraw consent at any time subject to legal or contractual restrictions and reasonable notice.
  • You inform clients and customers of the implication of the withdrawal of consent.
  • You consider the sensitivity and intended use of personal information, and the reasonable expectations of clients and customers in determining which form of consent (implied or expressed) you will accept for the collection, use and disclosure of personal information.

It is important to note that evidence of consent should be retained in a manner that is easily retrievable and easily sortable.  

We’re Here To Help

If you have questions about this new guideline regarding your consent obligations under PIPEDA, or compliance in general, please contact us.

Mandatory Breach Reporting under PIPEDA

Back in late 2017, we published an article on breach reporting. On November 1, 2018, the new provisions to the Personal Information Protection and Electronic Documents Act (PIPEDA) related to breach of security safeguards, along with the Breach of Security Safeguards Regulations, came into force.

The regulations require organizations to report to the Office of the Privacy Commissioner (OPC) and affected individuals, any breach of security safeguards involving personal information under its control, if it is reasonable to believe the breach creates a “real risk of significant harm”. Failure to report a breach is punishable by a fine of up to CAD 100,000.

On October 29, 2018, the OPC published the final guidance intended to assist organizations with the Breach of Security Safeguards Regulations. The guidance provides direction on how organizations can assess whether a breach creates a “real risk of significant harm” (the guidance provides a non-exhaustive list of the types of harm that will be considered significant) and provides a breach report form that organizations may use to report a breach to the OPC.

We’re Here To Help

If you have questions regarding how your organization will be impacted by these requirements, or any questions related to privacy legislation in general, please contact us.

Finalized Breach of Security Safeguards Regulations

Back in June of 2015, the Digital Privacy Act received royal assent, resulting in amendments to the Personal Information Protection and Electronic Documents Act (PIPEDA). Most amendments came into force at that time, except for the much-anticipated requirements related to breach notification. These requirements will come into force once regulations have been developed and put into place, and will affect any organization that collects, uses or discloses personal information in the course of commercial activities.

On September 2, 2017, a draft of those regulations was published for public comment in the Canada Gazette, and on April 18, 2018, the final Breach of Security Safeguards Regulations under PIPEDA were published. The regulations set out prescribed requirements for mandatory breach reporting and will come into force on November 1, 2018.

The objective of the regulations is to:

  • Ensure that all Canadians receive consistent information about data breaches that pose a risk of significant harm to them.
  • Ensure that data breach notifications contain sufficient information to enable individuals to understand the significance and potential impact of the breach.
  • Ensure that the Commissioner receives consistent and comparable information about data breaches that pose a risk of significant harm.
  • Ensure that the Commissioner is able to provide effective oversight and verify that organizations are complying.

The regulations require organizations to report, to the privacy Commissioner, any breach of security safeguards involving personal information under its control if it is reasonable to believe the breach creates a real risk of significant harm. The regulations state that such a report must contain the following:

  • a description of the circumstances of the breach and, if known, the cause;
  • the day or the period in which the breach occurred;
  • a description of the personal information that was involved in the breach;
  • an estimate of the number of individuals impacted – where the breach creates a real risk of significant harm;
  • the steps that the organization has taken to reduce the risk of harm to the impacted individuals;
  • the steps that the organization has taken or will take to notify impacted individuals; and
  • the name and contact information of a person who can answer, on behalf of the organization, the Privacy Commissioner’s questions about the breach.
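The required report contents listed above can be captured in a simple structure, with a pre-submission check for gaps. The field names here are illustrative, not the OPC’s official form fields:

```python
from dataclasses import dataclass

@dataclass
class BreachReport:
    """Contents the Regulations require in a report to the Commissioner.
    Field names are illustrative, not official form fields."""
    circumstances: str            # what happened and, if known, the cause
    occurred: str                 # day or period of the breach
    personal_info_involved: str   # description of the PI affected
    estimated_individuals: int    # estimate where real risk of harm exists
    risk_reduction_steps: str     # steps taken to reduce harm
    notification_steps: str       # steps taken or planned to notify individuals
    contact_name: str             # person who can answer the OPC's questions
    contact_info: str

    def missing_fields(self):
        """List any empty fields to resolve before the report is submitted."""
        return [name for name, value in vars(self).items()
                if value in ("", None)]

report = BreachReport(
    circumstances="Lost laptop containing customer records; cause under review",
    occurred="2018-11-05",
    personal_info_involved="names, account numbers",
    estimated_individuals=120,
    risk_reduction_steps="remote wipe initiated, credentials rotated",
    notification_steps="",          # deliberately blank to show the check
    contact_name="J. Doe",
    contact_info="privacy@example.com",
)
print(report.missing_fields())  # → ['notification_steps']
```

Because the final regulations allow new information to be submitted after the initial report, a gap flagged here need not delay reporting; it simply marks what the follow-up submission must cover.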

Organizations that experience such a breach will also have to do the following:

  • Determining if the breach poses a “real risk of significant harm” to any individual whose personal information was involved in the breach by conducting a risk assessment;
  • Notifying affected individuals if it is determined that there is a real risk of significant harm. How the notification will take place depends on several factors, such as whether contact information of the impacted individuals is known, cost, and whether the method chosen to deliver such a notification will cause further harm;
  • Issuing notification that contains:
    • a description of the circumstances of the breach;
    • the day or period during which the breach occurred;
    • a description of the personal information that was involved in the breach;
    • the steps that the organization has taken to reduce the risk of harm to the impacted individuals;
    • the steps that the impacted individuals could take to reduce the risk of harm resulting from the breach;
    • a toll-free number or email address that the impacted individuals can use to obtain further information about the breach; 
    • information about the organization’s internal complaint process and about the individual’s right, under PIPEDA, to make a complaint to the Privacy Commissioner;
  • Notifying other organizations or government institutions if they believe those parties may be able to reduce the risk of harm to the impacted individuals (e.g. law enforcement agencies). If this is the case, consent of individuals is not required for such disclosures; and
  • Keeping records of any data breach for a minimum of 24 months.

In determining if there is a “real risk of significant harm”, the assessment of risk conducted must consider factors such as the sensitivity of the personal information involved, whether or not the data was encrypted, whether the personal information was misused, if the information has been recovered, etc. The true risk of such factors may not always be known at the time that the risk assessment is first conducted.  One distinction from the draft regulations is that the final regulations also refer to harm “that could result from the breach” rather than harm “resulting from the breach”. This final wording is more practical than that of the language found in the draft, as potential harms will often be speculative at the time the breach is first discovered.

In reporting “as soon as feasible,” the final regulations allow for an organization to submit new information to the Commissioner after the initial report has been submitted. This is a significant improvement over the draft regulations, since organizations often do not have all information at the time a report is required to be submitted.

We’re Here To Help

If you have questions regarding these new requirements or any questions related to privacy legislation in general, please contact us.

PIPEDA’s Security Breach Notification Provisions

Back in September, we published an article on the Breach of Security Safeguards Regulations. Those requirements will come into force on November 1, 2018, according to an Order in Council issued on March 26, 2018.

The much-anticipated requirements will require organizations to report, to the privacy commissioner and affected individuals, any breach of security safeguards involving personal information under its control if it is reasonable to believe the breach creates a real risk of significant harm.

While the final regulation is not yet available, a draft of the regulation can be found here.

We’re Here To Help

If you have questions regarding how your organization will be impacted by these requirements or any questions related to privacy legislation in general, please contact us.

Breach of Security Safeguards Regulations

Back in June of 2015, the Digital Privacy Act received royal assent, resulting in amendments to the Personal Information Protection and Electronic Documents Act (PIPEDA). Most amendments came into force at that time, except for the much-anticipated requirements related to breach notification. These requirements will come into force once regulations have been developed and put into place, and will affect any organization that collects, uses or discloses personal information in the course of commercial activities.

On September 2, 2017, a draft of those regulations was published in the Canada Gazette. The draft regulations will require organizations to report, to the privacy commissioner, any breach of security safeguards involving personal information under its control if it is reasonable to believe the breach creates a real risk of significant harm. The draft regulations state that such a report would have to contain the following:

  • a description of the circumstances of the breach and, if known, the cause;
  • the day or the period in which the breach occurred;
  • a description of the personal information that was involved in the breach;
  • an estimate of the number of individuals impacted – where the breach creates a real risk of significant harm;
  • the steps that the organization has taken to reduce the risk of harm to the impacted individuals;
  • the steps that the organization has taken or will take to notify impacted individuals; and
  • the name and contact information of a person who can answer, on behalf of the organization, the Privacy Commissioner’s questions about the breach.

Organizations that experience such a breach will also have to do the following:

  • Determine if the breach poses a “real risk of significant harm” to any individual whose personal information was involved in the breach by conducting a risk assessment;
  • Notify affected individuals if it is determined that there is a real risk of significant harm. How the notification will take place depends on several factors, such as whether contact information of the impacted individuals is known, cost, and whether the method chosen to deliver such a notification will cause further harm;
  • Issue notification that contains:
    • a description of the circumstances of the breach;
    • the day or period during which the breach occurred;
    • a description of the personal information that was involved in the breach;
    • the steps that the organization has taken to reduce the risk of harm to the impacted individuals;
    • the steps that the impacted individuals could take to reduce the risk of harm resulting from the breach;
    • a toll-free number or email address that the impacted individuals can use to obtain further information about the breach; and
    • information about the organization’s internal complaint process and about the individual’s rights under PIPEDA, and that they can make a complaint with the privacy commissioner;
  • Notify other organizations or government institutions if they believe they may be able to reduce the risk of harm to the impacted individuals (i.e. law enforcement agencies). If this is the case, consent of individuals is not required for such disclosures; and
  • Keep records of any data breach for a minimum of 24 months.

The determination if there is a real risk of significant harm to an individual, and reporting “as soon as feasible” requirements, are likely to be the most challenging for organizations.

In determining if there is a “real risk of significant harm”, the assessment of risk conducted must consider factors such as the sensitivity of the personal information involved, whether or not the data was encrypted, whether the personal information could be misused, if the information has been recovered, etc. The true risk of such factors may not always be known at the time that the risk assessment is first conducted. If not known, it may be best to use a worst-case scenario in the assessment.
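The worst-case approach suggested above can be sketched as a simple check in which any unknown factor is treated as risky. The factor names and the all-or-nothing logic are illustrative only; a real assessment weighs these factors with judgment rather than a boolean test:

```python
# Each factor: True = increases risk, False = mitigates, None = unknown.
RISK_FACTORS = ("sensitive_information", "unencrypted",
                "could_be_misused", "not_recovered")

def real_risk_of_significant_harm(assessment):
    """Return True if any factor indicates, or may indicate, significant harm.
    Unknown (None) or missing factors are treated as risky: the worst case."""
    for factor in RISK_FACTORS:
        value = assessment.get(factor)   # missing factor counts as unknown
        if value is None or value is True:
            return True
    return False

assessment = {"sensitive_information": True, "unencrypted": False,
              "could_be_misused": None, "not_recovered": False}
print(real_risk_of_significant_harm(assessment))  # → True
```

Defaulting unknowns to risky errs on the side of reporting, which aligns with the final regulations’ wording of harm “that could result from the breach”.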

In reporting “as soon as feasible” after an organization determines that a breach has occurred, to both the Privacy Commissioner and impacted individuals, organizations may be hesitant to provide specific information. They may be hesitant because details and information may change as the breach is investigated further, or for fear of litigation risk down the road. Additionally, there is reputational risk that organizations will be concerned about. When notifying the Privacy Commissioner, organizations may want to state that the investigation is ongoing and that updates will be provided in a timely manner. When notifying impacted individuals, organizations should ensure that all required information is contained in the notification. It is best to be transparent and truthful in such notifications, as not doing so may cause even greater litigation and reputational risk.

Regulatory Impact Analysis and Regulations

The draft regulations are open for a comment period, to read full details of the draft and the accompanying regulatory impact analysis statement please visit the Canada Gazette.

We’re Here To Help

If you have questions regarding this or any questions related to privacy legislation in general, please contact us.
