AI-driven Data Collection 

Customer-centric companies live and breathe their customers and are laser-focused on providing amazing experiences. Firms want to learn as much as they can about their existing and potential customers. The more a business learns about its customers' needs, the better it can pivot its resources to provide more value to its consumer base and run a more efficient and profitable business. In collecting customer data, firms have found innovative ways to source, normalize and aggregate large data sets. The use of artificial intelligence (AI) and machine learning (ML) comes with benefits as well as regulatory challenges. AI can increase end-to-end efficiency, enhance accuracy and decision making, uncover market gaps and opportunities, empower employees with automation and provide a hyper-personalized customer experience. It also brings privacy and data protection hurdles that must be overcome to facilitate compliant and sustainable use of AI.

AI Regulatory Considerations 

As firms and their third parties leverage AI to collect and/or process customer data, they should be aware of the relevant privacy and data protection laws that govern their use of AI. The EU General Data Protection Regulation (GDPR) and the U.S. California Consumer Privacy Act (CCPA) introduced stringent requirements for the processing of personal information. Newer regulations such as the California Privacy Rights Act, China’s Personal Information Protection Law, Virginia’s Consumer Data Protection Act, the Colorado Privacy Act and upcoming U.S. state regulations such as the Utah Consumer Privacy Act have added to the complexity of compliance efforts given their disparate requirements. Currently, the U.S. federal government is considering two new privacy laws, the American Data Privacy and Protection Act and the Consumer Online Privacy Rights Act, which would enable the federal government to enforce privacy regulations across the U.S., including algorithmic assessment and privacy-by-design provisions. These evolving regulations, coupled with the rise of high-profile data breach incidents, have strengthened the mandate for ethical and compliant data processing.

Looking beyond North America, the EU is in the process of developing an AI Act to govern the use of AI by organizations. This would require organizations to establish risk management systems, data and data governance, technical documentation, record keeping, transparency and provision of information to users, human oversight, and accuracy, robustness and cybersecurity.

Industry AI Privacy Challenges 

It is important to be familiar with these laws, their impacts, and the challenges organizations may face when developing an AI model that complies with all of them. As regulations around the use of data and AI increase, so do the challenges that organizations encounter. For example, WW International, Inc. (formerly known as Weight Watchers) and its subsidiary Kurbo, Inc. illegally collected personal data from children as young as eight and harvested their sensitive health information. The organization was fined $1.5 million for violating the Children's Online Privacy Protection Act (COPPA). Along with the financial penalty determined by the FTC, WW International, Inc. was required to delete related work products that used the collected data; that is, algorithms that may have been developed and trained using those data had to be destroyed. This is an added layer to the compliance expectations organizations face.

As the use of data collection increases, additional challenges arise:

    • Increasing and expanding regulations compel organizations to frequently modify their business practices to remain in compliance. Without a strong data and AI management framework, complying with new regulations can be burdensome and expensive 
    • Customer concerns over data processing are shaped by the trust customers have in the organizations collecting their data. Poor data and AI management practices can lead to the deployment of AI that undermines an organization's reputation and damages public trust  
    • AI algorithms that are biased and opaque can lead to unexpected outcomes and raise ethical issues. Algorithms trained on insufficiently representative data can produce bias and discrimination 
    • AI systems can pose a threat to customer privacy because they use personal data and, when deployed, may be perceived as an intrusion into someone's private life 

The penalties for violating these data regulations can compound across the data value chain. Not only must organizations delete the raw data, but downstream work products should also be evaluated for “contamination” by that data. As the quantity of data collected increases, firms encounter new challenges brought on by this deluge of information: identifying the source of their data, tagging features and datasets, setting the correct entitlements for the confidentiality and sensitivity of data, and tracking how users access specific information. 
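
As a minimal sketch of what this kind of bookkeeping can look like, the snippet below models a catalog record that carries a dataset's source, sensitivity tags and entitled roles, together with a simple entitlement check. The `DatasetRecord` fields and the `is_access_allowed` helper are hypothetical illustrations, not tied to any particular catalog product.

```python
from dataclasses import dataclass, field
from datetime import datetime

# Hypothetical, minimal catalog record: real data catalogs track far more,
# but the core idea is the same -- every dataset carries its source,
# sensitivity classification, tags, and the roles entitled to read it.
@dataclass
class DatasetRecord:
    name: str
    source_system: str                      # where the data was collected from
    sensitivity: str                        # e.g. "public", "internal", "pii"
    tags: set[str] = field(default_factory=set)
    allowed_roles: set[str] = field(default_factory=set)
    registered_at: datetime = field(default_factory=datetime.utcnow)

def is_access_allowed(record: DatasetRecord, user_roles: set[str]) -> bool:
    """Grant access only when the user holds at least one entitled role."""
    return bool(record.allowed_roles & user_roles)

# Example usage: a customer-profile dataset tagged as PII.
profiles = DatasetRecord(
    name="customer_profiles",
    source_system="crm_export",
    sensitivity="pii",
    tags={"customer", "marketing"},
    allowed_roles={"data_steward", "privacy_officer"},
)

print(is_access_allowed(profiles, {"marketing_analyst"}))   # False
print(is_access_allowed(profiles, {"data_steward"}))        # True
```

Even a lightweight record like this makes downstream "contamination" questions tractable, because every model or report can be traced back to named, classified sources.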

While collecting data presents new opportunities for organizations to innovate and incorporate AI into their day-to-day business, it also presents additional risks and challenges, from compromising consumer privacy to algorithmic bias to a lack of trust in the organization. AI poses unique challenges to the data lifecycle. Governance from data collection through data use to data disposal is necessary to assess an organization's compliance and build trust with its customers. 

Empowering AI Compliance  

As AI decisions increasingly impact and influence people at scale, implementing proper governance becomes crucial to the safe and compliant use of data. As a leader among data and analytics service providers, Accenture believes that to implement AI responsibly, organizations should “establish transparent, cross-domain governance structures, identifying roles, expectations and accountability to build internal confidence and trust in AI technologies.” 

Apple has made it a priority to keep privacy front and center when using customer data to develop its machine learning models. This includes not only removing identifying features from the data but also consciously raising the level of customer privacy by applying differential privacy and establishing an opt-in-only program.
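
Differential privacy is worth a brief concrete illustration. The sketch below applies the Laplace mechanism to a simple counting query, so that adding or removing any single customer changes the published result only by a bounded, noise-masked amount. The `noisy_count` helper and the epsilon values are illustrative assumptions, not Apple's implementation.

```python
import random

def noisy_count(true_count: int, epsilon: float = 1.0) -> float:
    """Return a differentially private count using the Laplace mechanism.

    A counting query has sensitivity 1 (adding or removing one person
    changes the count by at most 1), so Laplace noise with scale
    1 / epsilon gives epsilon-differential privacy for this query.
    """
    scale = 1.0 / epsilon
    # The difference of two independent exponential draws with the same
    # rate is Laplace-distributed, so we can build the noise from
    # random.expovariate without extra dependencies.
    noise = random.expovariate(1.0 / scale) - random.expovariate(1.0 / scale)
    return true_count + noise

# Example: publish how many opted-in users triggered a feature,
# without revealing whether any specific individual is in the count.
print(noisy_count(12_340, epsilon=0.5))
```

Smaller epsilon values inject more noise and give stronger privacy at the cost of accuracy, which is the trade-off an opt-in program has to tune deliberately.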

To develop responsible and fair models, careful consideration and attention should be given throughout the stages of the data lifecycle, including data collection, data usage, data modeling and data deletion. To ensure complete and ongoing attention is given to model design and development, organizations should establish:

  • Proper data governance around how data is collected and managed (e.g., correctly cataloguing and tracking data). 
  • Privacy by design that is incorporated in the data value chain through informed consent (opt-in and opt-out), data minimization and data anonymization, as well as the proper security controls for data access. 
  • Guardrails for building AI models that are fair, unbiased and transparent through Responsible AI principles.
    • Understand the definition of fairness within the model or use case context, including conducting algorithmic fairness assessments (see the sketch after this list)  
    • Architect models and systems that are explainable and transparent across processes and functions  
    • Empower individuals in your business to raise doubts or concerns with AI systems 
    • Leverage privacy principles and safeguards to ensure personal and/or sensitive data is never used unethically 
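
The fairness assessment mentioned in the list above can be made concrete with a small sketch. The function below computes the gap in positive-prediction rates between groups (a demographic parity difference); the group labels, sample predictions and the implied review threshold are illustrative assumptions, not a prescribed standard or a specific client methodology.

```python
from collections import defaultdict

def demographic_parity_gap(predictions, groups):
    """Return the largest gap in positive-prediction rates across groups.

    `predictions` is an iterable of 0/1 model outputs and `groups` holds the
    protected-attribute value for each prediction. A large gap suggests the
    model favors some groups and warrants a deeper fairness review.
    """
    positives = defaultdict(int)
    totals = defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += int(pred)
    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values()), rates

# Example: approval predictions for two hypothetical customer segments.
preds  = [1, 1, 1, 1, 0, 1, 0, 0, 0, 0]
groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]
gap, rates = demographic_parity_gap(preds, groups)
print(rates)   # {'A': 0.8, 'B': 0.2}
print(gap)     # 0.6 -- large enough to flag for review
```

Demographic parity is only one of several fairness definitions; the point of the guardrail is to pick the metric that fits the use case and track it routinely, not to treat any single number as sufficient.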

Accenture is helping companies implement ethical principles within clear processes and procedures that establish accountability early in model development. It is critical to invest time in identifying potential sources of bias and key fairness metrics to ensure that AI products are responsible by design. Using the EU's Ethics Guidelines for Trustworthy AI as a backdrop, a global communications vendor engaged Accenture to help develop internal ethical principles and translate them into operational actions, activities, and structures. Accenture helped define a governance framework that supported day-to-day activities while also monitoring each phase of the AI lifecycle. Ultimately, these tools can help measure performance over time to assess the continued safety of algorithms and models. 
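
As a hedged illustration of that ongoing measurement, the sketch below compares a model metric across monitoring windows against an agreed baseline and flags windows that drift beyond a tolerance. The metric, baseline and tolerance are hypothetical values chosen for illustration, not part of any specific framework.

```python
def check_metric_drift(baseline: float, recent_values: list[float],
                       tolerance: float = 0.05) -> list[int]:
    """Return the indices of monitoring windows whose metric drifted
    more than `tolerance` away from the agreed baseline."""
    return [
        i for i, value in enumerate(recent_values)
        if abs(value - baseline) > tolerance
    ]

# Example: weekly accuracy of a deployed model against a 0.91 baseline.
weekly_accuracy = [0.90, 0.91, 0.89, 0.84, 0.83]
drifted = check_metric_drift(baseline=0.91, recent_values=weekly_accuracy)
if drifted:
    print(f"Review needed: drift detected in weeks {drifted}")  # weeks [3, 4]
```

The same pattern extends to fairness metrics, so drift in either accuracy or equity can trigger the governance process rather than going unnoticed.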

Looking forward, the focus of organizations can revolve less around the technicalities of data deletion, algorithm deletion or model deletion and more around building a holistic governance framework. As AI becomes more prevalent in regular business practices, it is important to guard against its unrestrained development and application. Regulatory bodies continue to pursue new remedies to curb illegal and abusive data practices. Organizations that have data and AI built into their business should be aware of the consequences of having a relaxed data and model governance program in place. Accenture is ready to help you meet your goals. 
