For all the flash and charisma of generative artificial intelligence (AI), the biggest transformations of this new era may be buried deep in the software stack.
Hidden from view, AI algorithms are changing the world one database at a time. They’re upending systems built to track the world’s data in endless regular tables, replacing them with newer AI capabilities that are complex, adaptive and seemingly intuitive.
The updates are coming at every level of the data storage stack. Basic data structures are under review. Database makers are transforming how we store information to work better with AI models. The role of the database administrator, once staid and mechanistic, is evolving to be more expansive. Out with the bookish clerks and in with the mind-reading wizards.
Here are 10 ways the database is changing, adapting and improving as AI becomes increasingly omnipresent.
Vectors and embeddings
AI developers like to store information as long vectors of numbers. In the past, databases stored these values as rows, with each number in a separate column. Now, some databases support pure vectors, so there’s no need to break the information into rows and columns. Instead, the databases store them together. Some vectors used for storage are hundreds or even thousands of numbers long.
Such vectors are usually paired with embeddings, mappings that convert complex data into a single list of numbers. Designing embeddings is still very much an art, one that often relies on knowledge of the underlying domain. When embeddings are well designed, databases can offer quick access and complex queries.
Adding vectors to databases brings more than convenience. New query functions can do more than just search for exact matches. They can locate the “closest” values, which helps implement systems like recommendation engines or anomaly detection. Embedding data in the vector space simplifies tricky problems involving matching and association to mere geometric distance.
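The geometric idea can be sketched in a few lines of plain Python. The item names, vectors, and three-dimensional size below are invented for illustration; real embeddings run to hundreds or thousands of numbers, and real vector databases are far more elaborate than this in-memory toy.

```python
import math

# Toy in-memory "vector store": record ids paired with embedding vectors.
store = {
    "laptop":   [0.9, 0.1, 0.0],
    "notebook": [0.8, 0.2, 0.1],
    "banana":   [0.0, 0.9, 0.4],
}

def cosine(a, b):
    """Cosine similarity: 1.0 means the vectors point the same direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norms = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norms

def nearest(query, k=1):
    """Rank stored items by similarity instead of testing exact equality."""
    ranked = sorted(store, key=lambda name: cosine(query, store[name]), reverse=True)
    return ranked[:k]
```

A query vector near the "laptop" embedding returns "laptop" even though no stored value matches it exactly — the "closest value" behavior described above.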
Vector databases like Pinecone, Vespa, Milvus, Marqo and Weaviate offer vector queries. Some unexpected tools like Lucene or Solr also offer similarity matching that can deliver similar results with large blocks of unstructured text.
Similarity search
The new vector-based query systems feel more magical and mysterious than what we had in days of yore.
The old queries would look for matches; these new AI-powered databases sometimes feel more like they’re reading the user’s mind. They use similarity searches to find data items that are “close” and those are often a good match for what users want.
The math underneath it all may be as simple as finding the distance in n-dimensional space, but somehow that’s enough to deliver the unexpected.
These algorithms have long run separately as full applications, but they’re slowly being folded into the databases themselves, where they can support better, more complex queries.
Oracle is just one example of a database that’s targeting this marketplace. Oracle has long offered various functions for fuzzy matching and similarity search. Now it directly offers tools customized for industries like online retail.
Smarter indices
In the past, databases built simple indices that supported faster searching by particular columns. Database administrators were skilled at crafting elaborate queries with joins and filtering clauses that ran faster with just the right indices. Now, vector databases are designed to create indices that effectively span all the values in a vector. We’re just beginning to figure out all the applications for finding vectors that are “nearby” each other.
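One way such indices can work is random-hyperplane hashing, which buckets vectors so that “nearby” ones tend to share an index entry, letting a query probe one bucket instead of scanning every row. This is a deliberately small sketch — the dimensions, bit count, and `doc-1` record are toy values, and production vector indices are far more sophisticated.

```python
import random

random.seed(0)      # deterministic toy example
DIM, BITS = 4, 8    # toy sizes; real embeddings span hundreds of dimensions

# Each random hyperplane contributes one bit of the bucket key.
planes = [[random.uniform(-1, 1) for _ in range(DIM)] for _ in range(BITS)]

def bucket(vec):
    """Hash a vector to a bit string; nearby vectors tend to collide."""
    return "".join(
        "1" if sum(p * v for p, v in zip(plane, vec)) >= 0 else "0"
        for plane in planes
    )

index = {}

def insert(key, vec):
    index.setdefault(bucket(vec), []).append(key)

def candidates(vec):
    """Probe only the query's bucket instead of scanning every row."""
    return index.get(bucket(vec), [])

insert("doc-1", [1.0, 0.0, 0.0, 0.0])
```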
But that’s just the start. When the AI is trained on the database, it effectively absorbs all the information in it. Now, we can send queries to the AI in plain language and the AI will search in complex and adaptive ways.
Data classification
AI is not just about adding some new structure to the database. Sometimes it's adding new structure inside the data itself. Some data arrives in a messy pile of bits. There may be images with no annotations or big blobs of text written by someone long ago.
Artificial intelligence algorithms are starting to clean up the mess, filter out the noise, and impose order on messy datasets. They fill out the tables automatically. They can classify the emotional tone of a block of text, or guess the attitude of a face in a photograph.
The algorithms can extract small details from images and learn to detect patterns. They classify the data, extract important details, and create a regular, cleanly delineated tabular view of the information.
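As a deliberately toy illustration of filling a derived column, a keyword count can stand in for a trained sentiment model — the word lists and review strings below are invented, and real classification services use learned models rather than lookups.

```python
# Keyword counting stands in for a trained sentiment model here.
POSITIVE = {"great", "love", "excellent"}
NEGATIVE = {"broken", "awful", "hate"}

def classify_tone(text):
    """Guess the emotional tone of a block of text."""
    words = set(text.lower().replace(",", " ").split())
    score = len(words & POSITIVE) - len(words & NEGATIVE)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

rows = [
    {"review": "I love this, excellent value"},
    {"review": "arrived broken, awful support"},
]
for row in rows:
    row["tone"] = classify_tone(row["review"])  # fill the column automatically
```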
Amazon Web Services offers various data classification services that connect AI tools like SageMaker with databases like Aurora.
Automated tuning
Good databases handle many of the details of data storage. Still, in the past, programmers had to spend time fussing over various parameters and schemas to make a database function efficiently. The role of database administrator was established to handle these tasks.
Many of these higher-level meta-tasks are being automated now, often by using machine learning algorithms to understand query patterns and data structures. They’re able to watch the traffic on a server and develop a plan to adjust to demands. They can adapt in real-time and learn to predict what users will need.
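A minimal sketch of the idea, assuming a hypothetical monitor that counts which columns incoming queries filter on and recommends an index once a column crosses a threshold — real autonomous tuners weigh far more signals than this.

```python
from collections import Counter

filter_counts = Counter()

def observe(query_columns):
    """Record the columns one query filtered on."""
    filter_counts.update(query_columns)

def recommendations(threshold=3):
    """Columns hot enough to deserve an index."""
    return [col for col, n in filter_counts.items() if n >= threshold]

# Simulated traffic: three queries filter on customer_id, one on order_date.
for cols in [["customer_id"], ["customer_id"], ["order_date"], ["customer_id"]]:
    observe(cols)

print(recommendations())  # → ['customer_id']
```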
Oracle offers one of the best examples. In the past, companies paid big salaries to database administrators who tended their databases. Now, Oracle calls its databases autonomous because they come with sophisticated AI algorithms that adjust performance on the fly.
Cleaner data
Running a good database requires not just keeping the software functioning but also ensuring that the data is as clean and free of glitches as possible. AIs simplify this workload by searching for anomalies, flagging them, and maybe even suggesting corrections.
They might find places where a client’s name is misspelled, then find the correct spelling by searching the rest of the data. They can also learn incoming data formats and ingest the data to produce a single unified corpus, where all the names, dates, and other details are rendered as consistently as possible.
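The misspelled-name case can be sketched with Python’s standard difflib, under the simplifying assumption that the correct spelling is just the more frequent close match in the rest of the data — the names below are invented.

```python
import difflib
from collections import Counter

names = ["Jonathan Smith", "Jonathan Smith", "Jonathon Smith", "Jonathan Smith"]

def canonical(name, corpus, cutoff=0.85):
    """Replace a rare spelling with the dominant close spelling, if any."""
    counts = Counter(corpus)
    others = [c for c in counts if c != name]
    match = difflib.get_close_matches(name, others, n=1, cutoff=cutoff)
    # Only "correct" the name if a close spelling is more common than it.
    if match and counts[match[0]] > counts.get(name, 0):
        return match[0]
    return name
```

Here `canonical("Jonathon Smith", names)` yields the dominant spelling, while the already-common spelling passes through untouched.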
Microsoft’s SQL Server is an example of a database that’s tightly integrated with Data Quality Services to clean up data with problems like missing fields or duplicate entries.
Fraud detection
Creating more secure data storage is a special application for machine learning. Some are using machine learning algorithms to look for anomalies in their data feed because these can be a good indication of fraud. Is someone going to the ATM late at night for the first time? Has the person ever used a credit card on this continent? AI algorithms can sniff out dangerous rows and turn a database into a fraud detection system.
Some organizations are applying these algorithms internally. AIs aren’t just trying to optimize the database for usage patterns; they’re also looking for unusual cases that may indicate someone is breaking in. It’s not every day that a remote user requests complete copies of entire tables. A good AI can smell something fishy.
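The bulk-export case above can be caricatured with simple statistics: flag any value that sits far from the mean in standard-deviation terms. This is a toy sketch — real systems use far more robust methods, and the session numbers below are invented.

```python
import statistics

def anomalies(values, threshold=2.0):
    """Return the values more than `threshold` standard deviations from the mean."""
    mean = statistics.mean(values)
    spread = statistics.pstdev(values)
    if spread == 0:
        return []
    return [v for v in values if abs(v - mean) / spread > threshold]

# Rows read per user session; one bulk table export stands out.
rows_read = [120, 95, 110, 105, 130, 98, 50000]
print(anomalies(rows_read))  # → [50000]
```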
IBM’s Guardium Security is one example of a tool that’s integrated with the data storage layers to control access and watch for anomalies.
Merging the database and generative AI
In the past, AIs stood apart from the database. When it was time to train the model, the data would be extracted from the database, reformatted, then fed into the AI.
New systems train the model directly from the data in place. This can save time and energy for the biggest jobs, where simply moving the data might take days or weeks. It also simplifies life for devops teams by making training an AI model as simple as issuing one command.
There's even talk of replacing the database entirely. Instead of sending a query to a relational database, users will send it directly to an AI, which will just magically answer it in any format.
The approach has its downsides. In some cases, AIs hallucinate and come up with answers that are flat-out wrong. In other cases, they may change the format of their output on a whim.
But when the domain is limited enough and the training set is deep and complete, artificial intelligence can deliver satisfactory results, and it does so without the trouble of defining tabular structures and forcing the user to write queries that find data inside them. Storing and searching data with generative AI can be more flexible for both users and creators.