It can be hard to pin down exactly what the phrase ‘Big Data’ means, since it has no official definition, but it refers to just what it sounds like: a very large amount of data.

Although the concept of Big Data is relatively new, its origins date back to the 1960s and 70s, when the first data centres and databases were built. Back then, data collection and analysis were just beginning to shape the business world (a process which eventually became known as ‘business intelligence’) but, with the advent of the commercial internet in the 90s, the practice skyrocketed, leading to ever more sophisticated uses for data technology in the 2000s and beyond.

The really interesting thing about Big Data, though, is where it comes from and the ways it may be used. Both are complex considerations, since the fields of data science and data analytics (and the technology they rely upon) are constantly evolving.

We can see this at work in our own use of data-driven technology, which is increasing exponentially. Smartphone apps, GPS, fitness trackers, social media, online messaging services, and mobile banking all process, share, and store large amounts of data about us in order to function. These technologies also have a relatively short shelf-life, as they are soon superseded by software updates and newer, better-functioning devices. With each upgrade comes increased functionality and convenience, the price for which is usually – you guessed it – more data.

So, the more that technology integrates with and improves our lives, the more data there is to be captured and processed about us. To put this into perspective, we now produce data at a mind-boggling rate of around 2.5 quintillion bytes per day!

The 3Vs of Big Data

The sheer amount of data gathered across internet-enabled devices, known collectively as the Internet of Things (IoT), is the result of three elements. These are the ‘3Vs’ said to characterise Big Data:

  • Volume – The IoT is growing exponentially year on year, as are the number of internet users and, consequently, the amount of data available for analysis.
  • Variety – Data doesn’t just refer to numbers and statistics as it once did. Big Data is made up of both structured and unstructured data, including (but not limited to) non-numerical data such as emails, pictures, and voicemails.
  • Velocity – Velocity refers both to the speed at which data is generated (which, as we know, is enormous) and to the speed at which it can be processed. It’s worth noting that advancements in technology mean data can now be analysed more or less in real time, as is the case with smart meters, chatbots, e-commerce, and so on.

Big Data in Practice

It’s hard to envisage Big Data without its comrade, cloud computing. Scalable as well as cost-effective, the cloud provides the necessary infrastructure that Big Data demands if it is to be used effectively for analysis.

For example, many organisations virtualise their servers in the cloud to make them more efficient, and to make data management and subdivision more straightforward. As you can imagine, being able to rapidly crunch large volumes of data from numerous sources is useful for businesses looking to data for market confidence. It also allows organisations to build a more complete picture of us, the customer and data subject.

Let’s take an emerging industry like financial services technology (or ‘fintech’ for short) as an example. In many ways, fintech has thrived because of its user-centric approach to things like insurance, foreign currency, and money transfers. In what is often viewed as direct opposition to the large, traditional banks, fintech has somehow humanised financial services, using cloud-based platforms and mobile apps to bring accessibility, convenience, and transparency to the end user. Indeed, the global adoption rate of fintech products sits at around 52% – and this number is rising rapidly.

Big Data is the driving force behind fintech’s better, more efficient customer service because it enables firms to profile and categorise their customer base. Categories may depend on age, gender, socioeconomic status, geographical location, spending habits, online behaviour, debt history, general health, and much more. By using this data to cross-analyse users, fintechs can personalise banking suggestions and target their marketing to meet customer needs. They can also identify the most valuable customers in terms of spending power, offering certain financial products only to statistically viable subsets.
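
To make this concrete, here is a minimal sketch (in Python, using scikit-learn) of the kind of behavioural segmentation described above. The customer attributes, segment count, and data are illustrative assumptions rather than a real fintech pipeline:

```python
# A toy sketch of customer segmentation: group customers into behavioural
# segments by clustering a few hypothetical attributes.
import numpy as np
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

# Hypothetical customer attributes (in practice these would come from
# transaction histories, app analytics, credit files, and so on)
rng = np.random.default_rng(42)
customers = pd.DataFrame({
    "age": rng.integers(18, 75, size=500),
    "monthly_spend": rng.gamma(shape=2.0, scale=800.0, size=500),
    "logins_per_week": rng.poisson(lam=4, size=500),
    "outstanding_debt": rng.gamma(shape=1.5, scale=2000.0, size=500),
})

# Scale features so no single attribute dominates the distance metric
scaled = StandardScaler().fit_transform(customers)

# Group customers into a handful of behavioural segments
kmeans = KMeans(n_clusters=4, n_init=10, random_state=0)
customers["segment"] = kmeans.fit_predict(scaled)

# Each segment can then be profiled, e.g. to find the highest-spending group
print(customers.groupby("segment")["monthly_spend"].mean().sort_values())
```

Segmentation at real fintech scale obviously involves far richer features and more careful validation than this toy example suggests, but the principle is the same: let the data reveal groups of similar customers, then tailor products and marketing to each group.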

Another example of Big Data use by fintech companies (as well as other commercial and retail enterprises) is natural language processing, or ‘NLP’. NLP is a type of machine learning that works by processing data in the form of language and cross-referencing it with the wealth of information already held about the data subject. An example might be chatting with a ‘digital finance assistant’ in your mobile banking app. By drawing on Big Data, the chatbot can make personalised recommendations that feel natural and relevant. Understanding customers and their behavioural patterns in this way also helps fintechs manage risk, detect fraud, and optimise performance.
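
For illustration, the sketch below shows one simplified version of the intent-classification step that might sit behind such a chatbot. The training phrases, intent labels, and model choice are invented assumptions; a production assistant would use far larger models and datasets:

```python
# A toy intent classifier: map a customer's message to an intent, which can
# then be cross-referenced with stored customer data to build a response.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Invented examples of customer messages, labelled with the intent they express
training_messages = [
    "how much did I spend on groceries last month",
    "show me my spending this week",
    "I want to save for a holiday",
    "help me set up a savings goal",
    "what is my current account balance",
    "how much money do I have right now",
]
training_intents = [
    "spending_summary", "spending_summary",
    "savings_goal", "savings_goal",
    "balance_enquiry", "balance_enquiry",
]

# TF-IDF features plus a simple linear classifier stand in for the far
# larger language models a real assistant would rely on
intent_model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
intent_model.fit(training_messages, training_intents)

# The predicted intent would then drive a personalised, data-backed reply
message = "show me my spending for March"
print(intent_model.predict([message])[0])  # likely 'spending_summary' here
```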

So, whilst new technologies often appear more human, they are, in fact, artificially intelligent. Their celebrated usability is a result of Big Data and is driven by complex algorithms and AI. Although many argue that consumers will benefit from increased access to more personalised, cost-effective products that encourage fair competition, others are more cautious; after all, Big Data raises big questions:

  • What happens if data security is compromised?
  • Who (or what) is held accountable by regulatory watchdogs for decisions made by robots?
  • Just how do firms using Big Data protect our consumer rights?

Big Data and the Law

These questions are particularly pertinent under the GDPR (and the UK’s implementation of it, the Data Protection Act 2018). Under this legislation, accountability sits with data controllers (the organisations that decide how and why personal data is processed) to adequately protect personal data and to ensure that it is processed ethically and lawfully. This means that, in order to comply, data controllers must:

  • Be transparent about how they intend to use data (including putting measures in place to track and audit data use, and enabling customers to access records of how their data is being used).
  • Obtain informed consent from data subjects to use their data in the manner they intend to. Organisations risk breaching data privacy and data security laws if they carry out profiling on data for which they only have implied consent.
  • Ensure that automated decision software is fair and unbiased (a minimal sketch of one such check follows this list).
  • Protect data integrity by using only accurate data and updating this data as and when required.
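
On the point about automated decisions, here is a minimal sketch of one basic fairness check: comparing approval rates across a protected attribute (sometimes called demographic parity). The data, column names, and 0.8 threshold are illustrative assumptions, not requirements set out in the GDPR itself:

```python
# A toy fairness check on an automated credit-approval model: compare
# approval rates between groups and flag large disparities for review.
import pandas as pd

# Hypothetical decisions produced by an automated model
decisions = pd.DataFrame({
    "group":    ["A", "A", "A", "A", "B", "B", "B", "B"],
    "approved": [1,   1,   0,   1,   1,   0,   0,   0],
})

# Approval rate per group
rates = decisions.groupby("group")["approved"].mean()
print(rates)

# Flag the model for review if any group's approval rate falls below
# 80% of the best-treated group's rate (an illustrative threshold)
disparity = rates.min() / rates.max()
if disparity < 0.8:
    print(f"Potential bias: disparity ratio {disparity:.2f} is below 0.8")
```

A check like this is only a starting point, of course; demonstrating fairness to a regulator involves far more than a single ratio.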

One way in which fintechs and other technology-driven businesses plan to remain compliant with data protection laws involves yet another innovation: regulatory technology, or ‘regtech’. Indeed, the continued crossover between regulation and technology seems essential as firms encounter ever more regulatory and reporting requirements. Extending disruptive digital technologies to regulation does seem like the next logical step.


Darren Hockley is MD of eLearning provider, DeltaNet International. The company specialises in the development of engaging compliance courses designed to mitigate risk and improve employee performance.
