GenAI needs to be used responsibly – here’s how

AI continues to dominate headlines, with businesses increasingly recognising the value it brings to the workplace. And yet, more than one in four organisations have now banned the use of GenAI in the workplace over concerns about privacy and data security risks.

As GenAI continues to make headway, its pivotal role in shaping the future of work is undeniable. Implementing a blanket ban is not a viable business strategy and will not serve as the easy fix you’re looking for.

Business owners need to cultivate an environment in which employees are encouraged and supported to leverage AI tools responsibly. To foster responsible adoption and use of generative AI within companies, a multi-faceted approach is essential.

Whitelist approved tools that meet organisational demands

Every business has unique needs and requirements, so begin by clearly identifying and approving generative AI tools that meet your organisation’s standards for compliance, cost-effectiveness and practicality. This selective endorsement process ensures that only vetted applications are used.

Your team is likely already using some of these tools (think ChatGPT, GitHub Copilot, Grammarly, etc.) thanks to their widespread availability, so it’s wise to streamline the set of applications in use to ensure coherence and compliance.
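To make this concrete, the sketch below shows one way an approved-tools list could be represented and checked programmatically, for instance behind an internal request portal or proxy rule. It is a minimal illustration only: the tool names, domains and vetting fields are assumptions, not a recommendation of specific products or criteria.

```python
# Hypothetical allowlist of GenAI tools approved for internal use.
# Tool names, domains and review fields are illustrative only --
# replace them with your organisation's own vetting criteria.
APPROVED_GENAI_TOOLS = {
    "chatgpt": {"domain": "chat.openai.com", "data_classification": "public-only"},
    "github-copilot": {"domain": "github.com", "data_classification": "internal"},
    "grammarly": {"domain": "grammarly.com", "data_classification": "public-only"},
}


def is_tool_approved(tool_name: str) -> bool:
    """Return True if the named tool has been vetted and whitelisted."""
    return tool_name.lower() in APPROVED_GENAI_TOOLS


if __name__ == "__main__":
    for candidate in ("chatgpt", "random-ai-notetaker"):
        status = "approved" if is_tool_approved(candidate) else "not approved - raise a request with IT"
        print(f"{candidate}: {status}")
```

In practice a check like this would sit alongside procurement and security review; the point is simply that an explicit, machine-readable allowlist is easier to enforce and audit than an informal understanding of which tools are acceptable.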

Develop an AI policy

If your organisation lacks a specific AI usage policy, make one. This policy should be drafted after a thorough assessment of your organisation’s current AI usage, and clearly articulate the types of data that are permitted for AI interactions, as well as those that are off-limits.

With an established policy in place, you can set out rules and responsibilities for employees and prevent unauthorised AI use, mitigating the risk of sensitive data exposure. Without one, you risk an array of headaches down the line, such as violating NDAs by inputting private information into a third-party service. Staying informed about incoming laws and regulations like the EU’s AI Act is also essential to keep pace with industry best practice.

Naturally, this also helps to cultivate a more cohesive organisational structure, allowing employees to use GenAI tools with confidence. Do ensure that you continue to update the policy over time and keep your employees informed along the way. After all, they are the main users of these tools and, without their compliance, your organisation could find itself at risk.
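To illustrate how a policy’s data rules might be backed up in practice, here is a minimal sketch of a pre-submission check that scans a prompt for obviously sensitive patterns before it leaves the organisation. The patterns and function names are hypothetical and deliberately simplistic; a real deployment would lean on dedicated data loss prevention tooling rather than a handful of regexes.

```python
import re

# Illustrative patterns for data an AI policy might declare off-limits.
# These are deliberately simple and will miss many cases; real deployments
# would rely on proper data loss prevention tooling.
SENSITIVE_PATTERNS = {
    "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "card-like number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "confidential marker": re.compile(r"\bCONFIDENTIAL\b", re.IGNORECASE),
}


def check_prompt(prompt: str) -> list[str]:
    """Return the names of any sensitive patterns found in the prompt."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items() if pattern.search(prompt)]


if __name__ == "__main__":
    prompt = "Summarise this CONFIDENTIAL contract for jane.doe@example.com"
    findings = check_prompt(prompt)
    if findings:
        print("Blocked before sending to a third-party tool:", ", ".join(findings))
    else:
        print("Prompt passed the policy check")
```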

Educate and train employees

With an estimated 30% of hours worked today likely to be automated in the future, it’s essential that employees are upskilled to work with AI quickly and confidently. Yet only 7% of surveyed respondents had received any AI skills training in the last 12 months.

Empower your workforce by providing education and training on the opportunities and limitations of generative AI. Offer a comprehensive overview of what GenAI-powered tools are, their capabilities and safe and effective usage practices – it’s easy to forget that this is an alien concept for most!

Make sure that you discuss its potential applications within the organisation, whether that be automating processes, predictive maintenance, data analysis or something else entirely. It’s also worth addressing the ethical and legal issues surrounding GenAI, such as algorithmic bias and hallucinations, and opening a forum for Q&As.

Once the theory is in place, you can organise hands-on training, offering tutorials and workshops so employees can interact with AI software and put what they’ve learned into practice.

Select privacy-conscious services

When choosing the right GenAI software for your business, opt for services that do not use your data to train their models, or that offer mechanisms to keep your data secure within a proprietary environment.

You can do this by carefully reviewing privacy policies before adopting the software, so you understand each provider’s data practices: what they collect, how they use it, and with whom it’s shared. Recent customer feedback is a good starting point for some desk research. If their policies lack transparency and clarity – avoid!

It’s worth noting that reputable providers like Microsoft, Alphabet, and OpenAI are known for their robust security measures and their options for data privacy and commercial data protection.

Promote an AI-forward culture

Encourage a culture that embraces the transformative potential of generative AI through strategic initiatives, cultural shifts and practical implementation.

As you might expect, leadership buy-in goes a long way. When senior management loudly champions AI initiatives, it sets the tone for embracing technological advancements and empowers more junior staff members to take advantage of these tools themselves.

In a company promoting an AI-forward culture, most (if not all) actions and ideas are supported by generative AI. This does not mean that GenAI outputs are always incorporated; on the contrary, they can be actively rejected. They should, however, become a core part of business as usual and be used by default, not merely as an option.

On a more practical note, all employees should be equipped with the necessary tools and resources to facilitate AI development. There’s no point in promoting an AI-forward culture if your infrastructure is unable to cope with what that entails! Consider investing in data infrastructure, cloud computing resources, and collaboration tools tailored to AI projects.

By adopting these strategies, your organisation will not only ensure the responsible integration of generative AI in the workplace but also position itself to fully leverage the potential of emerging technologies. This proactive approach will enable your company to optimise operational efficiency, spur innovation, and sustain a competitive advantage, all while safeguarding the integrity of your digital infrastructure and data.


Michal Szymczak is the Head of AI Strategy at Zartis. With over ten years of experience, Michal is a software engineer with a demonstrated history of working in the information technology and services industry. Alongside excellent communication and management skills, he is skilled in Web Development, the .NET Framework, Application Lifecycle Management, Cloud (both Azure and AWS), and Scrum.
