Questions around AI and privacy

by Ryno Nel | February 18, 2025

AI has been on many people’s minds for the past few years. While the initial hype may be subsiding, individuals and companies are still working out how AI, and especially generative chatbots, will impact their businesses. Some have already concluded that AI will become an integral part of their daily operations.

The question, however, is this: if AI becomes an integral part of a business’s daily operations, how does that impact the company’s existing policies and procedures around data privacy? As many of you might know, the current popular generative chatbots are already embroiled in data privacy and copyright scandals. Most of these scandals revolve around the data that was initially used to train and update the models. Stories have surfaced that many of these companies used copyrighted films and books, as well as data scraped from websites that prohibited scraping, to train their AI models.

One might argue that once model training is complete, there is nothing to worry about, since the companies no longer need my data as a user for their AI model. However, that is not the whole story. Many of these companies, especially with cloud-based free versions of the AI model, will use the user’s input data as additional data to improve and enhance their model. This means that your client’s bank statement (which you asked your favourite AI chatbot to help you analyse) can, in some instances, become part of the model’s training dataset.

This poses a few questions with regard to current data privacy regulations in South Africa.

  • One aspect to consider is whether entering data into a software program in this way amounts to sharing data with a third party, and if so, whether the data subject must consent to having their data used in AI models hosted by third parties.
  • Is the act of uploading or inputting data that may contain some form of customer data subject to the same data privacy standards as other client data?
  • Does this mean that if a business wants to incorporate AI into operations where the AI can access private data, these models must be hosted on-site, or as a walled garden for only the business to use?

These, and many more, are questions that most adopters of this technology are struggling with, and unfortunately, the local regulatory bodies are behind in helping us find the right answers. In South Africa, the Information Regulator is responsible for overseeing the protection of personal information under the Protection of Personal Information Act (POPIA), which aligns with global data protection standards such as the GDPR in the EU. When data is entered into an AI program, the Information Regulator will consider whether the responsible party who entered the data can prove compliance with all eight conditions for lawful processing set out in POPIA. That becomes very difficult if one does not fully understand how AI models treat inputted data, and careful consideration is therefore the right course of action.

Hopefully, clear guidelines will be provided soon, but until then it is important to understand how these systems use data and adjust your company’s data protection policies accordingly.

At wauko we are passionate about business! Enhancing cash flow and bottom-line results for our clients is our priority, but data protection and privacy remain critical. As such, we have implemented processes, protocols and policies to help mitigate these risks.

If you are interested in collaborating on the subject of data protection and privacy, contact Ryno Nel on 021 819 7819 or at rnel@wauko.com to connect with us.
