From the lab to production: Driving GenAI business success

Generative AI (GenAI) was the technology story of 2023. Spurred on by the breakaway success of ChatGPT, the likes of Amazon, Microsoft and Google have accelerated their own efforts, creating a tidal wave of innovation that promises to reshape the way businesses and users harness technology to drive productivity. GenAI has already made significant strides in sectors such as pharma and law, but what we’ve seen so far is just the beginning. Its true power will only become clear once organisations take it out of the experimental stage and begin to use it more widely in production.

However, in order to ride the wave rather than get caught up in it, organisations must overcome some key challenges around cost and trust. Doing so will require a robust data roadmap that leverages the cloud.

Cost and trust are the biggest barriers

When it comes to GenAI, the old computing maxim of “garbage in, garbage out” applies: you can’t expect to generate useful results if the model is trained on untrustworthy data. The challenge is that data governance and security are still at a nascent stage in many organisations, with crucial information often locked away in silos, making it effectively unusable without costly integration. In practice, this means that AI training data may be of poor quality and lack crucial business context, which can lead to hallucinations (fictional information that seems realistic) or factual responses that lack the necessary context. Either way, the output adds no value for the business.

Another critical pain point is the high cost of in-house GenAI projects. While outsourcing comes with its own security and compliance risks, doing everything internally can be eye-wateringly expensive. A single cutting-edge GPU, designed specifically for running large language models (LLMs), costs around $30,000, and an organisation wanting to train a model with, say, 175 billion parameters might need 2,000 of them. That’s a hefty bill in the region of tens of millions of dollars, as the rough calculation below shows.
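To put that figure in perspective, here is a back-of-envelope sketch using only the numbers quoted above; it covers GPU hardware alone and ignores networking, power, cooling and staffing, so treat it as indicative rather than a real quote.

# Rough estimate of the GPU bill described above, hardware only.
gpu_unit_cost_usd = 30_000   # approximate price of one cutting-edge, LLM-class GPU
gpus_required = 2_000        # illustrative fleet size for training a ~175-billion-parameter model

hardware_bill = gpu_unit_cost_usd * gpus_required
print(f"Estimated GPU spend: ${hardware_bill:,}")   # Estimated GPU spend: $60,000,000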

Taking GenAI from the lab to production

This is why cloud infrastructure is becoming increasingly popular as a foundation for AI. Cloud providers have the GPU resources to let customers scale their GenAI projects and pay only for what they use. This enables organisations to experiment with GenAI and turn off the model once they’ve finished tinkering, rather than having to provision GPUs in on-premises environments. That saves on CapEx and provides the flexibility organisations need to take operations back in-house in the future if required.

Once they’ve decided to adopt the cloud, how can organisations get GenAI projects out of the lab and into production, where they can deliver real value? The BRIESO model – Build, Refine, Identify, Experiment, Scale and Optimise – is instructive here:

Build: First, create a modern data architecture and universal enterprise data mesh. Whether on-premises or in the cloud, this will enable the organisation to gain visibility and control of its data. It will also help establish a unified ontology for mapping, securing and achieving compliance across all data silos. Look for tools that not only meet current demand but can scale to accommodate future growth. Open source solutions often offer the greatest flexibility.

Refine: Next, it’s time to refine and optimise data according to existing business requirements. It’s important at this stage to anticipate future requirements as accurately as possible. This will reduce the chances of migrating too much unnecessary data, which will add no value but may increase the cost of the project significantly.

Identify: Spot opportunities to utilise the cloud for specific workloads. A workload analysis is useful here, helping to determine where the most value can be derived. It’s about connecting data across locations – whether on-premises or in multiple clouds – to optimise the project. Now is also a good time to consider potential use cases for development.

Experiment: Try pre-built, third-party GenAI platforms to find the one that best aligns with business requirements. There are plenty to choose from, including Amazon’s Bedrock, Microsoft’s Azure OpenAI Service (which offers the GPT models behind ChatGPT) and Google’s Vertex AI. It’s important not to rush into a decision too early: the model must integrate closely with existing enterprise data for the project to stand any chance of success. The sketch below shows what an early experiment might look like.
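This is a minimal sketch of an early trial, assuming an Azure OpenAI Service deployment and the openai Python package; the endpoint, API key and deployment name are placeholders, and Bedrock and Vertex AI offer comparable SDKs.

from openai import AzureOpenAI

# Placeholder credentials -- substitute the values from your own Azure OpenAI resource.
client = AzureOpenAI(
    azure_endpoint="https://<your-resource>.openai.azure.com",
    api_key="<your-api-key>",
    api_version="2024-02-01",
)

# Ask a deployed GPT model a question grounded in a snippet of enterprise data.
# In a real project this context would come from the governed data layer built earlier.
policy = "Returns are accepted within 30 days with proof of purchase."
response = client.chat.completions.create(
    model="<your-gpt-deployment>",  # the deployment name chosen when provisioning the model
    messages=[
        {"role": "system", "content": f"Answer using only this company policy: {policy}"},
        {"role": "user", "content": "Can a customer return an item after six weeks?"},
    ],
)
print(response.choices[0].message.content)

Because each provider exposes a similar API surface, swapping platforms at this stage is relatively cheap, which is exactly why it pays to experiment before committing.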

Scale and Optimise: Once a suitable platform is chosen, pick one or two use cases to scale into production. Continuously optimise the process, keeping an eye on GPU-related costs in case they start to spiral, and as the organisation’s GenAI capabilities grow, look for further efficiencies in how they are used. A flexible AI platform is crucial to long-term success.

The future is here

IT and business leaders are understandably excited about the transformative potential of GenAI applications. From enhanced customer service to seamless supply chain management and supercharged DevOps – it’s no surprise that 98% of global executives agree AI foundation models will play an important role in their strategy over the coming 3-5 years.

But before anyone gets too carried away, there’s still plenty of work to be done. A modern data architecture must be the starting point for any successful AI project. Then it’s time to refine, identify, experiment, scale and optimise. The future awaits.

Paul Mackay

Paul Mackay is Regional Vice President Cloud - EMEA & APAC at Cloudera.
