In the last decade, successful organisations have become the equivalent of Willy Wonka’s chocolate factory: a psychedelic wonderland pushing out highly creative and innovative ideas that need to be turned into reality, and fast. At the heart of this transformation is IT, which has gone from being the support function that keeps the lights on to a vital mechanism that transforms ideas into applications that add business value. You only have to look at how often updates are pushed to a smartphone to see how quickly this is happening.
As organisations seek the agility to create new features and applications daily, adoption of cloud platforms and methodologies has increased, thanks to their promise of transformative gains in execution, expanded service offerings and lower infrastructure costs. The cloud has become the backbone for delivering change and innovation at lightning speed.
However, moving applications and their associated data to the cloud can be slow, and not without risk. Regulators are mandating the protection of sensitive data at all times. With the cost and pace of regulatory reform continuing to march on, organisations are unable to justify the risk of having sensitive information in the cloud.
The obstacles to the cloud
However, in reality it’s not governance, risk or compliance concerns that are the true barrier to progress. Instead, a huge blind spot is emerging in existing data security strategies. Organisations are spending heavily on securing their production data, yet the stringent security controls and protocols relied upon to mask sensitive data are not being applied to the databases used to create new features or applications. Our research shows that 80 per cent of this non-production data sits unmasked within IT environments, opening the door to cyber criminals.
Even with regulations and standards like PCI DSS, Solvency II and the Data Protection Directive in place, personally identifiable information such as name, age and location is visible to any developer, tester or analyst with the right access details. This means non-production environments are quickly emerging as the least secure point of entry for savvy cyber criminals – both on premise and in the cloud.
That’s not to say there isn’t technology that can help. When it comes to migrating to the cloud, organisations need to develop security approaches that start from the inside out. This means that before data is even migrated to the cloud, it needs to be transported in a form that is unusable if stolen by malicious cyber criminals. The answer to being able to take advantage of cloud services is to embed data security into everyday practices and remove the stigma around data masking.
Data masking, the process of obfuscating or scrambling data, exists, but it’s a costly, time-consuming and manual exercise. Waiting an extra week to mask data each time the business needs a refresh can mean slipping behind the competition. As a workaround, some companies end up using synthetic or dummy data, which mimics the characteristics of production data but contains none of the sensitive content. This solves the data privacy issue, but with production and test data not matching, it’s a fast route to more bugs entering the development process. And bugs mean delays.
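To make the idea concrete, here is a minimal sketch of deterministic data masking in Python. The field names and salt are illustrative assumptions, not any specific product’s API; the point is that the same input always masks to the same token, so relationships between records survive masking while the raw values become unusable if stolen.

```python
import hashlib

def mask_value(value: str, salt: str = "per-project-secret") -> str:
    """Replace a sensitive value with a repeatable, irreversible token."""
    digest = hashlib.sha256((salt + value).encode()).hexdigest()
    return digest[:12]  # short token; the original cannot be recovered from it

def mask_record(record: dict, sensitive_fields: set) -> dict:
    """Return a copy of the record with only the sensitive fields masked."""
    return {
        key: mask_value(str(val)) if key in sensitive_fields else val
        for key, val in record.items()
    }

# Hypothetical row: "name", "age" and "location" are the PII fields here.
row = {"name": "Jane Doe", "age": 42, "location": "London", "order_id": 1001}
masked = mask_record(row, {"name", "age", "location"})
# Non-sensitive fields such as order_id pass through untouched.
```

Because the masking is deterministic, two tables that share a customer name still join correctly after masking, which is exactly what synthetic data struggles to guarantee.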
The golden ticket
Instead of taking a weekly or monthly snapshot of production data and then manually applying data masking on an ad hoc basis, organisations need to insert a new layer into the IT architecture that automatically masks data as part of its delivery. One approach is to create a permanent, one-time copy of production data and then apply virtualisation and masking at the data level. This makes it possible to mask an entire database once and then use it to generate multiple virtual copies of the data that are also masked.
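The “mask once, virtualise many” flow can be sketched as below. The class and method names are hypothetical, not a vendor’s API: production data is masked in a single pass into a golden snapshot, and every environment is provisioned from that snapshot, so the unmasked data is never re-read.

```python
import copy

class MaskedSnapshot:
    """One-time masked copy of production data that serves many environments."""

    def __init__(self, production_rows, mask_fn):
        # Single masking pass over the production data.
        self.rows = [mask_fn(row) for row in production_rows]

    def virtual_copy(self):
        # Each consumer gets an isolated copy of the already-masked snapshot.
        # A real system would use copy-on-write storage to keep this cheap;
        # deepcopy stands in for that here.
        return copy.deepcopy(self.rows)

def mask(row):
    # Illustrative masking rule: blank out the PII fields, keep the rest.
    return {k: ("***" if k in {"name", "email"} else v) for k, v in row.items()}

production = [{"name": "Jane Doe", "email": "jane@example.com", "balance": 250}]
snapshot = MaskedSnapshot(production, mask)

dev_env = snapshot.virtual_copy()
test_env = snapshot.virtual_copy()
# Every environment sees only masked values; production stays untouched.
```

Provisioning a new environment is then a cheap copy of masked data rather than a fresh extract-and-mask cycle, which is where the speed gain comes from.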
By adopting this approach, organisations can guarantee easy provisioning of masked data. IT retains control by setting the data masking policy, the data retention rules and who has access to the data. Developers, testers and analysts can all provision, refresh or reset data in minutes, and they only ever see the masked data. It also allows a centralised view of the organisation’s data and safeguards information for whoever needs it, for whatever project. Whether on premise, off shore or in the cloud, all data is secured and can be delivered as a service to the application that needs it.
Whether the threat comes from outside hackers or malicious insiders, those who want to steal or leak data will always target the weakest point within IT systems. By bringing data masking into a service-based model, organisations can readily extend masked data to any environment – including the cloud – instead of relying on synthetic data or duplicates of non-masked copies. Obfuscating data becomes a fluid part of data delivery, which means that any non-production environment can be moved to the cloud with zero danger of data theft. Even if data is compromised, its true value and integrity are not.
Even more importantly, security becomes an enabler. Ask any organisation if it knows where all its data is, and the chances are it won’t. With secure data delivered as a service, organisations can centralise data, mask it and keep tabs on where it’s going. Within 24 hours, businesses have full visibility into where every copy of their production data is, and the confidence that it is masked. As a result, organisations become more agile, with both storage and production times reduced. With the blind spot resolved, organisations can realise the benefits of the cloud and accelerate migrations without risking security.