Whether we call it pharma-tech, life sciences or perhaps even health-tech, a new data-driven health industry is emerging.
Big data analytics has already proved an effective means of fuelling development in drug research, so in this respect, its impact is being felt at the procedural end of the industry. But data analytics also helps at the human level.
By providing care-workers with insight into patient care needs and helping them make sense of the noise of busy working environments, it can deliver schedule-based reminders for when medication needs to be distributed. It can also raise automated alerts that track warning signs and flag when a patient may be becoming increasingly unwell.
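To make that alerting idea concrete, here is a minimal sketch of a threshold-based early-warning check. The vital-sign names and ranges here are illustrative assumptions, not clinical guidance; real systems use validated scoring schemes such as NEWS2.

```python
def check_vitals(readings):
    """Return a list of warning strings for out-of-range vital signs."""
    # Illustrative thresholds only -- not clinical guidance.
    thresholds = {
        "heart_rate": (50, 110),      # beats per minute
        "temperature": (35.5, 38.0),  # degrees Celsius
        "spo2": (94, 100),            # oxygen saturation, %
    }
    alerts = []
    for name, value in readings.items():
        low, high = thresholds[name]
        if not (low <= value <= high):
            alerts.append(f"{name} out of range: {value}")
    return alerts

# A raised heart rate trips an alert; the other readings pass.
print(check_vitals({"heart_rate": 120, "temperature": 37.0, "spo2": 96}))
```

In practice the thresholds would be patient-specific and the alert would feed a care-worker's dashboard rather than a console.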
Noisy data everywhere
That element of ‘noise’ pervades healthcare in another form: what we could call data noise. Even the most modern healthcare practices operate with operational silos and fragmented facilities management and billing systems. Add to that the fragmentation that occurs between different medical service lines… and it is clear how complexity can spiral until the data starts to represent a health risk in and of itself.
The answer that some firms in this space are adopting has a front end and back end element. At the front end, data visualisation technologies are helping to coalesce complex data sources and provide dashboard illustrations that express the live state of operations in any given facility.
But at the back end, pharma-tech data warehouse systems also need a booster shot. For databases, that boost can come from Graphics Processing Unit (GPU) acceleration, which has the power to handle gigabytes, terabytes and even petabytes of data.
GPUs were originally designed for graphics-intensive software applications (hence the G in the name), but a GPU-accelerated database can handle the massive speed of data ingestion and analytics needed in today’s healthcare environments.
Based upon the ingestion task and the analytics required, these systems can automate the intake and compression of data into its appropriate place, ready for use by the medical function to which it relates.
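As an illustration of that kind of automated intake, the sketch below groups incoming records by the medical function they relate to and compresses each group on the way in. The record fields and function names are hypothetical, and a real GPU-accelerated warehouse would do this at a vastly larger scale.

```python
import gzip
import json
from collections import defaultdict

def ingest(records):
    """Route records to their medical function and gzip each group at rest."""
    groups = defaultdict(list)
    for record in records:
        groups[record["function"]].append(record)
    # Compress each group so it takes up less space at rest.
    return {
        function: gzip.compress(json.dumps(batch).encode("utf-8"))
        for function, batch in groups.items()
    }

stores = ingest([
    {"function": "imaging", "patient": "A1", "result": "clear"},
    {"function": "genomics", "patient": "A2", "variant": "BRCA1"},
])
print(sorted(stores))  # the stored groups, keyed by function
```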
Information ingestion indigestion
Possibly one of the toughest pills to swallow in pharma-data management is the ingestion challenge. Next-generation genomic sequencers, digital imaging devices and wearables are all adding to the growing mountain of pharma-data that needs to be managed.
This situation is compounded by the fragmented nature of many research institute data pools, with some of the most valuable information sometimes solely residing on one individual researcher’s desktop or laptop machine.
Data compression techniques can help here. Lossless compression transforms data into a smaller format with zero loss of accuracy as a result of the transformation. The information needs to be decompressed before it can be used again, but the efficiency gained means it can be worth the effort.
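A minimal sketch of that zero-loss property, using Python’s standard zlib library: the compressed copy is smaller on disk, and decompressing it restores the original byte-for-byte. The sample data is illustrative.

```python
import zlib

# A repetitive, spreadsheet-style payload compresses very well.
original = b"patient_id,reading\n" + b"P001,98.6\n" * 1000

compressed = zlib.compress(original)

assert len(compressed) < len(original)          # smaller at rest
assert zlib.decompress(compressed) == original  # zero loss of accuracy
```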
An ‘encoding’ process converts data out of text-based spreadsheet formats into a common, compact format that takes up less disk space and is therefore more efficient. When ingestion and compression techniques are used in unison, an entire data lifecycle - ingest, compress, encode, decode, analyse and visualise - becomes possible.
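One common form of such encoding is dictionary encoding, sketched below: repeated text values from a spreadsheet-style column are replaced by small integer codes, and a dictionary lets the original values be recovered exactly. The column values are illustrative.

```python
def encode(column):
    """Replace each string with an integer code; return codes + dictionary."""
    dictionary = {}
    codes = []
    for value in column:
        if value not in dictionary:
            dictionary[value] = len(dictionary)
        codes.append(dictionary[value])
    return codes, dictionary

def decode(codes, dictionary):
    """Reverse the encoding using the dictionary."""
    reverse = {code: value for value, code in dictionary.items()}
    return [reverse[code] for code in codes]

column = ["MRI", "X-ray", "MRI", "MRI", "CT"]
codes, dictionary = encode(column)
assert decode(codes, dictionary) == column  # lossless round-trip
```

Columnar storage formats apply exactly this idea: small integers are cheaper to store and scan than repeated strings, which is part of what makes the ingest-compress-encode-decode lifecycle efficient.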
Health industry commentators and broadcasters have discussed the increasing workload that healthcare workers have had to endure throughout the Covid-19 (Coronavirus) pandemic. As the healthcare industry now starts to employ the use of artificial intelligence (AI) with big data analytics to drive predictive modelling capable of forecasting the likely effect of drug concoctions on patients, the data workload needed to fuel this work will also naturally increase.
The healthcare data workload also increases when we start to build increasingly complex medical treatments tuned to a patient’s individual genetic make-up. When these systems are twinned with models that also take into account a patient’s behavioural state, lifestyle and perhaps even their economic wealth, then the data workload increases further still.
As we look to the future and the development of nano-medicines that are capable of sending precise drug payloads into exact areas or even exact cells of the body, then we will be working at an exponentially higher level of data management that will require a higher degree of automation and power in the data acceleration layer, as we have already suggested.
Our future pharma-tech prognosis
Going forward, then, we need to re-engineer the healthcare industry for a new stream of data ingestion and data encoding that can rapidly handle complex queries which will be richer, broader and more pervasive. But pharma-tech will only be able to shoulder the new data-rich landscape in which it has to perform analytics if we think strategically about the infrastructural elements of data management that we have tabled here.
It might sound like a tough pill to swallow initially and there will be some hiccups (technology ones… and probably human ones too) along the way. But overall, the prognosis is good for the data-driven pharma-tech future. Now, please wash your hands.