Everything SMEs using AI need to know about compliance

As AI becomes more integral to business operations, Australian SMEs must understand the evolving privacy laws that govern the use of customer data. With stricter regulations on the horizon, it’s crucial for businesses to ensure compliance to avoid regulatory penalties and safeguard customer trust. Here’s what every SME using AI needs to know about privacy laws and how to stay compliant.

A recent report from Vanta, a leading trust management platform, reveals that most Australian businesses are unprepared for upcoming privacy law reforms, particularly regarding the use of customer data to train artificial intelligence (AI). The findings show that only 28% of Australian businesses are currently compliant with regulations governing the use of customer data for AI training. Additionally, only 24% ensure that data is anonymised before being used in AI models. These statistics come as the Australian government prepares to enforce stricter privacy laws aimed at protecting consumer data.

The report highlights a concerning gap in AI best practices among Australian businesses compared to international standards. For example, only 54% of Australian businesses have a formal AI policy—11 percentage points lower than in the UK—and just half have dedicated teams to oversee AI security and compliance, lagging behind the US and UK.

Jonathon Coleman, APAC General Manager at Vanta, cautioned that AI presents both vast opportunities and significant risks, especially when it comes to using customer data. “As privacy regulations tighten, businesses must adapt quickly to ensure they are using AI responsibly and compliantly. Proving compliance, however, remains a challenge,” he said. Coleman also noted that AI can help businesses automate up to 85% of compliance processes, offering a way to streamline traditionally cumbersome compliance documentation.

Some businesses, like global fintech InDebted, are already implementing strong AI data protection practices. Pierre Bergamin, CTO of InDebted, shared, “To build trust, we ensure all our AI use is fully compliant, relying on anonymised customer data to prevent any risk of personal information resurfacing.”

As privacy regulations continue to evolve, businesses will need to adopt best practices for AI and data management to ensure customer trust and avoid compliance pitfalls.

How privacy law applies to AI and customer data

The Privacy Act 1988 and Australian Privacy Principles (APPs) govern the use of personal information in AI systems. Organisations must understand their obligations under these laws, especially when using customer data for AI training or testing.

What is personal information?

Personal information includes any data that can identify an individual, such as names, contact details, and even images or videos. It also includes sensitive information such as race, health data, and political opinions, which are given higher privacy protection. Notably, even false information generated by AI systems, such as deepfakes or AI hallucinations, can still be considered personal information under the law.

Risks of using personal data in AI

When personal information is used in AI systems, it is difficult to track how the data is used, or to remove it once it has been entered, especially with generative AI models. Such information is vulnerable to security threats, and even anonymised data can be re-identified. Businesses should therefore exercise caution when using personal data in AI and ensure their practices comply with the APPs.
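To make the re-identification risk concrete, here is a minimal Python sketch with entirely invented data: a record that has had its name stripped is linked back to an individual simply by matching quasi-identifiers such as postcode, birth year and gender against an outside dataset.

```python
# Hypothetical data: "anonymised" AI training records with direct identifiers
# removed but quasi-identifiers (postcode, birth year, gender) retained.
anonymised_records = [
    {"postcode": "2000", "birth_year": 1985, "gender": "F", "balance_owed": 4200},
    {"postcode": "2000", "birth_year": 1991, "gender": "M", "balance_owed": 150},
]

# A public or leaked dataset that pairs the same quasi-identifiers with names.
public_records = [
    {"name": "Jane Citizen", "postcode": "2000", "birth_year": 1985, "gender": "F"},
]

QUASI_IDENTIFIERS = ("postcode", "birth_year", "gender")

def reidentify(anonymised, public):
    """Link 'anonymised' rows back to named individuals by joining on
    quasi-identifiers."""
    matches = []
    for record in anonymised:
        for person in public:
            if all(record[key] == person[key] for key in QUASI_IDENTIFIERS):
                matches.append({"name": person["name"], **record})
    return matches

print(reidentify(anonymised_records, public_records))
# [{'name': 'Jane Citizen', 'postcode': '2000', 'birth_year': 1985, ...}]
```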

APP 6 and the use of personal information

APP 6 stipulates that personal data must only be used for the purpose it was collected for or a closely related secondary purpose. Businesses must ensure that any use or disclosure of personal information in AI systems aligns with these principles. If data is used for a secondary purpose, it must meet specific conditions, such as obtaining consent or ensuring that individuals would reasonably expect the use.
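As an illustration of how such a purpose check might be wired into a data pipeline before records reach an AI system, consider the simplified Python sketch below. The Record fields, purpose labels and rules are assumptions for demonstration only, not legal advice or an official codification of APP 6.

```python
from dataclasses import dataclass, field

@dataclass
class Record:
    primary_purpose: str              # the purpose the data was collected for
    consented_purposes: set = field(default_factory=set)

def use_is_permitted(record: Record, proposed_use: str,
                     reasonably_expected: bool = False) -> bool:
    """Permit use for the primary purpose, a purpose the individual has
    consented to, or a related secondary use they would reasonably expect."""
    if proposed_use == record.primary_purpose:
        return True
    if proposed_use in record.consented_purposes:
        return True
    return reasonably_expected

record = Record(primary_purpose="customer_support")
print(use_is_permitted(record, "ai_model_training"))   # False: blocked
record.consented_purposes.add("ai_model_training")
print(use_is_permitted(record, "ai_model_training"))   # True: consent obtained
```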

Best practices for using personal data in AI

Organisations should minimise the amount of personal information entered into AI systems and carefully consider whether each use complies with APP 6. Sensitive personal information should not be entered into publicly available AI systems, such as chatbots, given the significant privacy risks involved.
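One practical way to apply this minimisation principle is to redact obvious identifiers before any text is sent to an external AI system. The sketch below is illustrative only: the regex patterns are assumptions, and a production system would pair a vetted PII-detection tool with human review.

```python
import re

# Illustrative patterns only; real redaction needs a vetted PII-detection tool.
REDACTION_PATTERNS = {
    "[EMAIL]": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "[PHONE]": re.compile(r"\b(?:\+61\s?|0)\d(?:[ -]?\d){7,9}\b"),
}

def redact(text: str) -> str:
    """Replace matched identifiers with placeholder tokens before the text
    leaves the organisation."""
    for placeholder, pattern in REDACTION_PATTERNS.items():
        text = pattern.sub(placeholder, text)
    return text

prompt = "Customer jane@example.com on 0412 345 678 asked about her invoice."
print(redact(prompt))
# Customer [EMAIL] on [PHONE] asked about her invoice.
```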

Collecting personal information through AI

Businesses collecting personal data via AI systems, such as chatbots, must comply with APP 3 and APP 5. APP 3 requires that personal information collected through these systems be reasonably necessary for the business’s functions and collected by lawful and fair means. Businesses must also inform individuals that they are interacting with an AI system rather than a human, and meet the transparency requirements of APP 5.
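As a rough sketch of what these disclosure obligations could look like in practice, the Python example below opens every chatbot session with an AI disclosure and a collection notice before any personal information is requested. The wording is invented for illustration and is not a compliance-approved notice.

```python
AI_DISCLOSURE = "Hi! You are chatting with an automated AI assistant, not a human."

COLLECTION_NOTICE = (
    "We collect your name and contact details to respond to your enquiry. "
    "See our privacy policy for how this information is handled and stored."
)

def start_chat_session() -> list[str]:
    """Open every session with the AI disclosure and a collection notice
    before any personal information is requested."""
    return [AI_DISCLOSURE, COLLECTION_NOTICE, "How can I help you today?"]

for message in start_chat_session():
    print(f"BOT: {message}")
```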


Yajush Gupta

Yajush is a journalist at Dynamic Business. He previously worked with Reuters as a business correspondent and holds a postgrad degree in print journalism.
