“Computer – Enhance!” Using AI to maximise developer capabilities.

Developer teams are facing ever-increasing pressure to innovate and perform as businesses evolve into software-driven organisations. In what can be a cutthroat race to beat competitors, even the slightest advantage can be transformational. Yet given the lack of best practices around generative AI – and, often, of basic understanding of it – CIOs face a crucial dilemma.

Lock developers away from AI tools entirely, and competitors willing to exploit the technology will leap ahead. Yet overly permissive use can create major issues, from unwittingly sharing sensitive data and IP to inadvertently infringing copyright. Unwise use of AI can even cause reputational damage. Ultimately the best position lies somewhere in the middle, and since developers are already making use of AI tools, finding a responsible approach that creates maximum value is essential.

Due diligence

The first step is understanding the landscape. A sign of AI’s rapid adoption is the growing awareness of “shadow AI” that exists outside of IT teams’ visibility, for instance because developers are using ChatGPT or other tools of their own volition. This use might be harmless, or it might be opening the organisation up to huge risks. The only way to be sure is to discover exactly what tools are being used, and how. Nothing is more dangerous than an unknown unknown.
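To make that discovery concrete, many teams can start with the egress data they already collect. Below is a minimal sketch in Python of one such check, counting requests to well-known generative AI endpoints in a proxy log. The log path, the log format (one requested hostname per line) and the hostname list are illustrative assumptions, not a definitive inventory of AI services.

# Minimal sketch: count requests to well-known generative AI endpoints in a
# proxy log. Assumes a plain-text log with one requested hostname per line;
# the log path and hostname list are illustrative assumptions only.
from collections import Counter

AI_HOSTS = {
    "api.openai.com",
    "chat.openai.com",
    "api.anthropic.com",
    "generativelanguage.googleapis.com",
}

def count_ai_requests(log_path: str) -> Counter:
    """Tally how often each known AI hostname appears in the log."""
    hits = Counter()
    with open(log_path) as log:
        for line in log:
            host = line.strip().lower()
            if host in AI_HOSTS:
                hits[host] += 1
    return hits

if __name__ == "__main__":
    for host, count in count_ai_requests("proxy_hosts.log").most_common():
        print(f"{host}: {count} requests")

A report like this won’t catch everything – traffic from personal devices, for instance – but it turns an unknown unknown into a starting point for conversation.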

The IT team can then begin building its AI policy, using its investigation into shadow AI to guide its thinking. By knowing what developers and others want to accomplish with AI, the organisation can ensure its own approach supports these goals in a safe, controlled manner.

This won’t be an overnight process – as well as existing AI use, teams will need to understand the different tools available; their unique strengths, weaknesses and vulnerabilities; and the developer team and business’s overall goals. But it will be the key to answering the questions at the heart of the dilemma: where AI should be used, and where it shouldn’t.

Owning the means of production

The next step to consider is mitigating the risks associated with generative AI. For most, the largest is the way in which these tools learn from and share data. Many individuals still don’t appreciate that information entered into public AI tools can effectively become public. Anything entered, from an innocuous question to the layout of a specific data set, may be retained by the provider and used to train its models – potentially informing answers to other users, and existing permanently outside of the organisation’s control.
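While policy catches up, a simple technical safeguard is to scrub prompts before they leave the network. The Python sketch below is illustrative only: the regular expressions are assumptions standing in for whatever the organisation’s own data classification actually requires.

# Minimal sketch: redact obviously sensitive strings before a prompt is sent
# to an external AI service. The patterns are illustrative assumptions; a real
# filter would be driven by the organisation's data classification rules.
import re

REDACTIONS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "<EMAIL>"),         # email addresses
    (re.compile(r"\b(?:\d[ -]?){13,16}\b"), "<CARD_NUMBER>"),    # card-like digit runs
    (re.compile(r"(?i)api[_-]?key\s*[:=]\s*\S+"), "<API_KEY>"),  # inline API keys
]

def scrub(prompt: str) -> str:
    """Replace sensitive-looking substrings before the prompt leaves the network."""
    for pattern, placeholder in REDACTIONS:
        prompt = pattern.sub(placeholder, prompt)
    return prompt

print(scrub("Summarise this: contact jo@example.com, api_key=sk-12345"))
# -> Summarise this: contact <EMAIL>, <API_KEY>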

This question of ownership will be familiar to enterprises that have already battled to retain control over their data when adopting cloud services. The most desirable solution might be a non-public AI instance, where the organisation retains control of the AI’s database – and so of all the data in queries and responses. But this isn’t a simple fix: owning and managing such an instance requires resources many enterprises won’t have. And if an organisation owns the database, it also needs to ensure there is enough data for the AI to “learn” from. After all, the more developers need to second-guess an AI and coach it towards useful answers, the less value it offers.
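For teams that do take this route, wiring developers up to an internal instance can be straightforward. The sketch below assumes a model served inside the network behind an OpenAI-compatible endpoint, as self-hosting tools such as vLLM or Ollama provide; the internal hostname and model name are placeholders, not real services.

# Minimal sketch: query a self-hosted model through an OpenAI-compatible
# endpoint, so prompts and responses never leave the organisation's network.
# The base_url and model name are assumptions about the local deployment.
from openai import OpenAI

client = OpenAI(
    base_url="http://llm.internal.example:8000/v1",  # internal inference server
    api_key="unused-for-internal-server",            # many local servers ignore this
)

response = client.chat.completions.create(
    model="internal-code-assistant",  # whichever model the internal server hosts
    messages=[{"role": "user", "content": "Suggest a unit test for a date parser."}],
)
print(response.choices[0].message.content)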

The case for an open source foundation

One way to mitigate legal questions in software – especially in the open source world, where ownership questions are legion – is to go through a foundation. This model has worked well for open source software: it clarifies ownership and makes companies feel safe using the code. And while some companies are training proprietary LLMs, others are making their models and data sets open source. Many questions remain unanswered: who has the right to use a model, to train it, or to fine-tune it? Can that be done with any data? What about strong copyleft licences such as the AGPL? And what about ethical questions, such as those raised in UNESCO’s recommendation on the ethics of AI (https://www.unesco.org/en/artificial-intelligence/recommendation-ethics)? A body modelled after the Eclipse or Apache foundation would bring clarity, and possibly encourage the adoption of open source LLMs.

Fixed focus

The crucial question for most organisations will be where and how developers can use AI in a way that helps their work without increasing risk. This is where a more targeted approach can pay dividends.

For instance, forbidding developers from using AI to write production code that directly uses company data isn’t an overly authoritarian measure – most developers would resent the suggestion that they can’t craft their own final code anyway. However, suggesting and troubleshooting test code that doesn’t use corporate data; answering common queries; or suggesting ways to follow best practice are all ways AI can support and enhance developers without creating risk.
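As an illustration of the safe end of that spectrum, the test below is the kind of artefact an AI assistant can reasonably help draft. It exercises a hypothetical parse_invoice_total() helper using purely synthetic data, so nothing sensitive ever needs to appear in a prompt.

# Minimal sketch of AI-assistable test code: it uses only synthetic data, so
# no corporate records ever appear in a prompt. parse_invoice_total() is a
# hypothetical stand-in for in-house code, included so the example runs.
import unittest

def parse_invoice_total(line: str) -> float:
    """Toy stand-in for in-house parsing code."""
    return float(line.rsplit("TOTAL:", 1)[1].strip().lstrip("£"))

class ParseInvoiceTotalTest(unittest.TestCase):
    def test_parses_total_from_synthetic_line(self):
        self.assertEqual(parse_invoice_total("INV-0001 TOTAL: £42.50"), 42.50)

    def test_raises_on_line_without_total(self):
        with self.assertRaises(IndexError):
            parse_invoice_total("INV-0002 AMOUNT DUE: £10.00")

if __name__ == "__main__":
    unittest.main()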

Ideally, this approach should be integrated with the tools developers already use. For example, if existing development tools or databases include AI-supported options, those options will be more accessible and will become part of developers’ workflows far more easily.

The critical component

AI hasn’t changed an essential truth. Any business’s most important asset is still people – whether customers, partners or employees. Despite fears that AI will replace developers, it lacks the creativity, reliability and ultimately the intelligence to do so. Instead, if used the right way it can supercharge performance without increasing risk – giving development teams a platform to apply their skills and creativity where they are most valuable.
