Companies need their data to work for them and support their business, but many struggle to get their database environments right. Existing approaches to managing data and databases are often haphazard and informal – the digital equivalent of duct tape and bubblegum. While this might work for MacGyver, he only had to get to the end of an episode; he didn’t need to live with his creations long-term.

Rather than managing their databases properly, many firms are simply taping them together, cutting corners, and failing to structure them for growth or cost savings.


What is your data problem?

Databases today have to support complex and fluid environments, with a lot more scale than in the past. This often involves multiple database technologies used in one stack. Businesses are no longer solely running Oracle or SQL Server; instead, they have to be “everything shops.”

According to our Open Source Data Management Software Survey, over 90% of organisations have more than one database technology in their work environment, and 85% use more than one open source database technology. 89% of respondents have more than one OSS database, with 43% running both PostgreSQL and a variant of MySQL. Meanwhile, 54% are running some purpose-built NoSQL database, and 73% are running a relational database as well as a NoSQL purpose-built database.

This creates a major challenge: firms are managing multiple database technologies, but lack the skill set to run them all efficiently. In addition, with developers taking a more active role in technology stacks and growing the environment, there are often hundreds if not thousands of database installations, large and small, that need to be managed and maintained at the same time.

This scattered approach to databases is already having a massive impact. Downtime is prevalent, and many firms have to take the costly step of upgrading their hardware to keep up.

There is another problem with scale: the bigger the organisation, the more problems you discover. When there are a large number of developer groups and teams are stretched, firms may not have the specialist people they need to support their environments. Personnel-wise, companies have shifted away from employing database administrators, focusing instead on more generalist staff.

At the same time, you don’t have to look that far to see firms having massive outages. Database breaches or outages are widespread – and often caused by issues that could be easily resolved. For example, data can be leaked because of configuration issues such as forgetting to set a password. Or someone can make a small configuration change, test it on one server, but accidentally apply it to all servers in the field.
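One common safeguard against that last mistake is a canary rollout. The sketch below shows the idea: a configuration change cannot leave the canary group until it has been verified on a canary server. Server names, roles, and the verification set are illustrative assumptions, not a real tool.

```python
# Hypothetical sketch of a rollout guard: a configuration change may not
# be applied fleet-wide until it has been verified on a canary server.

def plan_rollout(fleet, verified_on):
    """Return the server names a change may currently be applied to."""
    canaries = [s["name"] for s in fleet if s["role"] == "canary"]
    if not any(name in verified_on for name in canaries):
        # Not yet tested anywhere: restrict the rollout to canaries only.
        return canaries
    # Verified on a canary: the whole fleet becomes eligible.
    return [s["name"] for s in fleet]

fleet = [
    {"name": "db-canary-1", "role": "canary"},
    {"name": "db-prod-1", "role": "prod"},
    {"name": "db-prod-2", "role": "prod"},
]

print(plan_rollout(fleet, verified_on=set()))            # → ['db-canary-1']
print(plan_rollout(fleet, verified_on={"db-canary-1"}))  # → all three servers
```

The point of the design is that "apply to all servers" is never the default; it only becomes possible after an explicit verification step.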


Taking a more structured approach

So, what changes should firms make to achieve the cost savings and growth they are striving for? First, companies should understand their workloads and put processes in place to revisit and adapt their environments over time. One challenge in the modern IT space is that app usage can change drastically as user numbers grow. From a technical perspective, this means the set-up that worked six months ago won’t work today.

To resolve this, it is important to prioritise which apps and databases are the most critical. At the same time, less important apps can be handled through additional automation. It is now hard to imagine developing an app without planning for it to be cloud native – so people should ensure their databases run like apps and can scale up and down.

Alongside all this, it is important to standardise. All tooling should be in one place, and everyone involved in supporting the applications should understand how app workloads are changing and ensure things are set up to run effectively. There are methods and tools available to help with data backup and recovery management that can replay workloads and ensure systems can fail over properly.


New ways to run applications and databases

In addition to using smart technology tools, many businesses are adopting a DevOps culture with automation. But even these firms still have outages, and the applications themselves require ongoing testing. Teams are also considering tools such as Kubernetes to manage and scale environments that run in software containers, using automation and orchestration.

These new approaches to running applications offer more options and make it easier to scale up. However, an important consideration is how you communicate the need for change to management. This is not always simple, as there is rarely one single thing that will solve an organisation’s data management issues. Instead, expertise and better thought processes are needed.

Cost is always a critical concern for business leaders, so using cost metrics in your approach will help convince them that change is needed. For example, firms with rapidly growing user numbers have often had to upgrade their cloud instances repeatedly – for those expanding the fastest, up to six times in twelve months. Each of these instance changes represents a big jump in spending, so not preparing for growth can be more expensive in the long run. This data can be used internally to unlock investment upfront and prevent costs from skyrocketing.
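The arithmetic behind this can be sketched with a toy model. All figures below are illustrative assumptions (a 50% cost bump per upgrade and a fixed one-off migration overhead per instance change), not survey data; the point is simply that each unplanned instance change adds a one-off cost on top of the permanently higher running rate.

```python
def annual_spend(base_monthly, upgrade_months, bump=0.5, migration_cost=0.0):
    """Total 12-month spend when the instance is upgraded in the given months.

    Each upgrade raises the monthly cost by `bump` (e.g. 0.5 = +50%) and,
    if the move is disruptive, incurs a one-off `migration_cost`.
    """
    cost = base_monthly
    total = 0.0
    for month in range(1, 13):
        if month in upgrade_months:
            cost *= 1 + bump           # bigger instance, higher running rate
            total += migration_cost    # one-off cost of the instance change
        total += cost
    return total

# A fast-growing firm upgrading six times in twelve months (assumed numbers).
upgrades = {2, 4, 6, 8, 10, 12}
reactive = annual_spend(1000, upgrades, migration_cost=2000)
# The same capacity growth, planned so each step needs no disruptive migration.
planned = annual_spend(1000, upgrades, migration_cost=0)
print(reactive - planned)  # → 12000.0 spent purely on disruptive changes
```

Even in this simplified model, the gap between the two runs is exactly the accumulated migration overhead – spend that planning ahead avoids entirely.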

Additionally, skill set is key. Developers and architects are today’s decision makers around technology, but staff still need proper training and education. This need for better skills has led to trends such as cloud and database-as-a-service. For some firms, these services are useful as things are then managed by a third party. However, there is a layer above that which still requires attention: cloud providers don’t tune the database instance to the workload of your app, manage potential slowdowns, or help you avoid bottlenecks.

As teams take new infrastructure and deployment approaches from testing into production, those new applications will need support. It is here that detailed knowledge and skills are still required.


Making changes and planning ahead

Overall, firms need a proper set-up in place and tooling to be able to scale up and – the way the economy is at the moment – scale down. Every time a new app spins up and workloads change, it is important to have tools and capabilities already in place to ensure things are ready to go from a database perspective, as well as being focused on the application side.

Utilising this planned approach can remove the need for the duct tape and bubblegum that keeps systems linked together. A best-practice approach that looks at how to make the most of the potential that exists in these new systems can deliver better results. It might seem complex at first, but in the long run solving these problems early can help reduce costs, simplify management over time, and streamline performance.