Whether you’re a dedicated AI advocate or a confirmed AI sceptic, you need to tread carefully. Because while AI can introduce a lot of good to your business, it can also bring a lot of risk.
It’s safe to say AI is not a fad. It’s here to stay and it will be an increasingly important part of how we work and play. You only have to look at the investments going into AI – and where that investment is coming from – for proof it’s not going anywhere.
As business owners, we’ve got a decision to make about AI. It’s not if, but when, we use it. Some of you will already be experimenting with it, some of you may have it embedded into your business across a few different areas. Others may have steered well clear up until now.
Wherever you are on your adoption curve, one thing needs to be kept in mind above everything else: security, safety and ethical use across the business. For this, you need to create a framework that enables everyone to use AI responsibly and ethically.
Just as AI and how it can be used is constantly evolving, so too are the security implications, loopholes and illegal uses. And you need to sense-check AI’s biases, too.
Implementing AI safely and securely is essential for any business – and while you can retrofit some processes and protocols, it’s better to consider it all when you’re right at the beginning. Here’s how you can build an ethical AI framework in your business.
Avoid BYO AI
Many of us have played around with AI in our personal lives – we’ve signed up for a ChatGPT account or some other AI tool, which is great for familiarising ourselves with what’s possible. What you want to avoid, however, is different people within the business bringing different AI tools to work and uploading company information. For one thing, it’s really bad business practice. For another, you’ve got no visibility or control over what information is going into a platform. No AI should be used by your people unless you’ve authorised it and it’s a company account that you have access to. Otherwise, you risk sensitive information being uploaded to who knows where – and once it’s out there, you can’t take it back.
Read the small print – where’s your data going?
On many ‘free’ iterations of AI platforms, the small print will read something like ‘your interactions will be used to train our AI models’. In other words: we’ll store everything you upload and use that information as we wish. That’s fine, up to a point, if it’s a generic interaction you’re having. If it’s more sensitive information – customer details or business strategy, for example – you need to be more careful.
Remember the ‘generative’ element of AI
The likes of ChatGPT and Gemini are what’s called genAI – generative AI – and as such, they learn and develop. Unlike Excel, where the same formula gives you the same answer today as it did yesterday and will in 10 years’ time, genAI is different. You’re not guaranteed to get the same answer to the same query today and tomorrow. In fact, it’s unlikely you will.
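If you want to see this for yourself, ask the same question twice and compare the answers. Here’s a minimal sketch using the OpenAI Python SDK – it assumes you have the openai package installed and an API key in the OPENAI_API_KEY environment variable, and the model name is just an example:

```python
# A minimal sketch: ask the same question twice and compare the answers.
# Assumes the `openai` package is installed and OPENAI_API_KEY is set.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def ask(question: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # example model name - use whichever you have access to
        messages=[{"role": "user", "content": question}],
    )
    return response.choices[0].message.content


question = "Suggest a tagline for an electrical contracting business."
first = ask(question)
second = ask(question)

# Unlike a spreadsheet formula, the two answers will usually differ.
print("Same answer both times?", first == second)
```

Most platforms let you turn the randomness down (via a ‘temperature’ setting), but even then you shouldn’t bank on identical answers every time.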
Remember! You’re still responsible
There’s a real opportunity for AI to help every day – from regulations to manuals, knowledge can be uploaded that you and your team can access, ask questions of and get guidance on. However, it’s vitally important to remember that any AI model can hallucinate, get things wrong or interpret something incorrectly. And, ultimately, you’re responsible. So never take the AI’s output as gospel.
Put the right guardrails in place
When you’re implementing AI, having the right guardrails in place is important. What’s it going to be used for, and what are you going to put into it? What’s the process for checking its output? If you’re using it to assess CVs when recruiting, what have you done to ensure there’s no inherent bias in the candidates the AI is selecting?
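One practical starting point is to audit the outcomes rather than the model itself: compare the rate at which the AI shortlists candidates from different groups, a check often called the ‘four-fifths rule’. The sketch below uses made-up data purely for illustration; in practice you’d feed in your own screening records:

```python
# Illustrative sketch of a simple selection-rate (adverse impact) check.
# The data here is made up; in practice you'd use your real screening records.
from collections import defaultdict

# Each record: (group the candidate belongs to, whether the AI shortlisted them)
screening_results = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

shortlisted = defaultdict(int)
total = defaultdict(int)
for group, selected in screening_results:
    total[group] += 1
    if selected:
        shortlisted[group] += 1

rates = {group: shortlisted[group] / total[group] for group in total}
print("Selection rates:", rates)

# Four-fifths rule of thumb: flag for review if any group's rate is
# below 80% of the highest group's rate.
highest = max(rates.values())
for group, rate in rates.items():
    if rate < 0.8 * highest:
        print(f"Review needed: {group} selected at {rate:.0%} vs best {highest:.0%}")
```

If one group’s selection rate falls well below the best-performing group’s, that’s a flag to review the process with a human eye – not proof of bias on its own.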
Building an ethical AI framework: The 10 Australian voluntary AI guardrails at a glance
In September 2024, the Australian Government published a set of 10 voluntary guardrails it advises businesses to work to:
Guardrail one creates the foundation for your organisation’s use of AI. Set up the required accountability processes to guide your organisation’s safe and responsible use of AI, including:
- an overall owner for AI use
- an AI strategy
- any training your organisation will need.
Guardrail two: Set up a risk management process that assesses AI impact and risk based on how you use the AI system. Begin with the full range of potential harms, informed by a stakeholder impact assessment (guardrail 10). You must complete risk assessments on an ongoing basis to ensure the risk mitigations remain effective.
Guardrail three: You must have appropriate data governance, privacy and cyber security measures in place to protect AI systems. These will differ depending on use case and risk profile, but organisations must account for the unique characteristics of AI systems, such as:
- data quality
- data provenance
- cyber vulnerabilities.
Guardrail four: Thoroughly test AI systems and AI models before deployment, then monitor for potential behaviour changes or unintended consequences. Perform these tests against clearly defined acceptance criteria that reflect your risk and impact assessment.
Guardrail five: It is critical to enable human control or intervention mechanisms as needed across the AI system lifecycle. AI systems are generally made up of multiple components supplied by different parties in the supply chain. Meaningful human oversight lets you intervene if you need to and reduces the potential for unintended consequences and harms.
Guardrail six: Create trust with users. Give people, society and other organisations confidence that you are using AI safely and responsibly. Disclose when you use AI, what its role is and when content has been generated by AI. Disclosure can take many forms; it’s up to the organisation to identify the most appropriate mechanism based on the use case, stakeholders and technology used.
Guardrail seven: Organisations must provide processes for the users, organisations, people and wider society impacted by AI systems to challenge how AI is being used and to contest decisions, outcomes or interactions that involve AI.
Guardrail eight: Organisations must provide information to other organisations across the AI supply chain so they can understand:
- the components used, including data, models and systems
- how the system was built
- how to manage the risks of using the AI system.
Guardrail nine: Organisations must maintain records to show that they have adopted and are complying with the guardrails. This includes maintaining an AI inventory and consistent AI system documentation (see the sketch after this list).
Guardrail ten: It is critical for organisations to identify and engage with stakeholders over the life of the AI system. This helps organisations identify potential harms and understand whether there are any potential or real unintended consequences from the use of AI. Deployers must identify potential bias, minimise the negative effects of unwanted bias, ensure accessibility and remove ethical prejudices from the AI solution or component.
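For a small business, the AI inventory guardrail nine calls for can be as simple as a spreadsheet. If you’d rather keep it in code, here’s a minimal sketch of the kind of record it implies – the fields, names and values are illustrative assumptions, not an official template:

```python
# Illustrative sketch of a single AI inventory record.
# Fields are examples only - adapt them to your own business and risk profile.
from dataclasses import dataclass


@dataclass
class AISystemRecord:
    name: str                # what you call the system internally
    vendor: str              # who supplies it
    purpose: str             # what it's used for in the business
    owner: str               # the accountable person (guardrail one)
    data_handled: list[str]  # categories of data that go into it
    risk_level: str          # outcome of your risk assessment (guardrail two)
    last_reviewed: str       # when the risk assessment was last refreshed


inventory = [
    AISystemRecord(
        name="Quote-drafting assistant",
        vendor="ExampleAI Pty Ltd",  # hypothetical vendor
        purpose="Drafting customer quotes from job notes",
        owner="Office manager",
        data_handled=["customer names", "job descriptions"],
        risk_level="medium",
        last_reviewed="2025-01-15",
    ),
]

for record in inventory:
    print(f"{record.name} ({record.vendor}) - owner: {record.owner}, risk: {record.risk_level}")
```

However you store it, the point is the same: one place that lists every AI system in use, who owns it, what data it touches and when its risk was last reviewed.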