Industry News


  • 10/29/2024 8:42:22 AM
    Warning for LinkedIn users in South Africa

    LinkedIn has been accused of using its users’ data without consent, raising significant legal and ethical concerns.

    The South African Artificial Intelligence Association (SAAIA) has called on the Information Regulator to investigate whether LinkedIn is violating the law by using user data to train its artificial intelligence (AI) models.

    Nathan-Ross Adams, head of regulatory affairs at SAAIA and a key contributor to the submission, stated: “Our letter of complaint to the Information Regulator is focused on LinkedIn’s use of South African users’ personal information to train its generative artificial intelligence (AI) models.”

    SAAIA raised three main concerns:

    • The practice does not comply with the lawful processing conditions under Chapter 3 of POPIA (the Protection of Personal Information Act);
    • This practice likely constitutes an interference with personal information as defined in section 73 of POPIA;
    • Given the significant public interest, this matter warrants an investigation by the Information Regulator.

    Speaking on The Money Show with Stephen Grootes, Adams said the company had acted with a lack of courtesy and transparency.

    “I am concerned about the fact that the choice architecture, or simply put – how we decide whether our data is trained or not – has not been done in a way that complies with South African privacy laws like POPIA.”

    This means that users’ personal information may be used without them knowing what it’s being used for.

    He explained that LinkedIn, the world’s largest professional network platform, allows users to connect based on information such as their employment history.

    “Many people post on LinkedIn to create this sense of a shared professional experience.”

    The complaint arose after LinkedIn recently changed its terms and, by default, automatically opted users in most countries into its AI training programme, Adams said.

    Essentially, LinkedIn trains its AI models on information such as users’ interactions on the platform, their posts, and their profiles.

    “And this goes against privacy laws which require specific, voluntary and informed consent.”

    Interestingly, though, he pointed out that LinkedIn chose to exclude the European Economic Area (EEA), which includes the EU, as well as the UK.

    “So if other countries with similar laws to South Africa are being excluded, why does it apply to us as well?”

    What is especially alarming is that it isn’t clear whether private conversations on LinkedIn are being used to train this AI platform.

    “Based on what we know now, and how the privacy terms of LinkedIn are currently worded, it isn’t clear at this point,” Adams said.

    “The vague reason that they’ve given is to improve the LinkedIn experience.”

    “Because of this vague approach that they’ve taken, we don’t know what’s actually being used. And when it comes to these AI models that are pretty advanced, it’s very difficult for them to explain the different ways that it’s being used as well.”

    He pointed out that this move isn’t exclusive to LinkedIn, with many other companies across the globe also using user data to train AI platforms.

    “It’s one of the areas that many companies are struggling with.”

    “How do we embrace this technology in a way that’s really innovative, that’s going to move humanity forward, but also in a way that our stakeholders – which includes the users of LinkedIn – can trust at the end of the day?”

    However, Adams said that the difference is that the way LinkedIn approached this change likely does not comply with the POPI Act.

    It specifically violates people’s rights to know how their data is used, where it goes, and what data is being utilised.

    “Other platforms have specifically labelled the AI systems that they’re using.”

    “For example, with Meta, it would be Meta AI. With Google, it’s Gemini. With OpenAI, it’s ChatGPT.”

    “So there’s the expectation. It’s a very clear and upfront expectation,” Adams said.

    Users are aware that when they use these platforms, their inputs will be used to train and improve their AI models.

    “But when you have a platform that is not generally expected to be using an AI model in this way, the consumer, the user, needs to know upfront.”

    Users need to be made aware when there is a change in terms, but in this case, it “wasn’t released until after this was discovered.”

    While there are also monetary and intellectual property issues, one of the biggest concerns is that people haven’t been given the ability to make the choice themselves.

    “It’s more about autonomy and control and the ability to make decisions for oneself,” Adams explained.

    “The challenge that we have is that the setting was turned on by default.”

    This means that not only do people not necessarily know this is happening, but if they want to opt out, they need to “put in the effort to unsubscribe themselves from the AI model with an action that most people won’t even know to take.”

    - BUSINESSTECH


