
Your AI Doesn’t Need a Bigger Brain, It Needs to Understand Your Business


We’re currently facing a big problem: a false sense of productivity when using AI. A report recently published by Workday found that 85% of employees say they save one to seven hours a week by using AI, but nearly 40% of those time savings are lost to rework. This includes correcting errors, rewriting content, and verifying outputs from one-size-fits-all AI tools.

Given how LLMs (Large Language Models) work under the hood, this comes as no surprise. The widely used transformer architecture powers next-token prediction, which makes models great guessers. Relying on that ability alone isn’t enough. LLMs have to make a lot of assumptions when they’re inevitably given limited information.

Every company already has a plethora of data, structured and unstructured, that could significantly change how useful AI truly is. What if we could plug AI right into our businesses? With secure and controlled access to everything your company is, the game completely changes.

AI Plugged In

Imagine a digital twin of your business. Everything it suggests, orchestrates, and automates is completely relevant to the inner workings of the business and the goals your company is trying to reach.

Rather than picturing a “jack of all trades” superstar employee, picture this concept as your business personified. It interacts with everyone in the company in different ways, based on their role and current work. It changes, grows, and matures with time. It understands your data deeply yet remains flexible as things change.

To power a platform like this, you need to take advantage of the various components and strategies of context engineering. It’s easy enough to say you’re going to let AI access all your data, but without a framework governing how that data is exposed, you will quickly overload the context window and render AI incapable of sense-making.

One commonly used strategy is Retrieval Augmented Generation (RAG). This is the practice of fetching data related to the user’s request and feeding it to the AI as additional context before generating a response. This strategy greatly improves LLM accuracy and reduces hallucinations.
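The retrieve-then-prompt loop can be sketched in a few lines. This is a minimal illustration, not a production pipeline: real systems score relevance with vector embeddings, while simple word overlap stands in for similarity here, and the documents are invented.

```python
# Minimal RAG sketch: retrieve the documents most relevant to a query,
# then prepend them to the prompt as extra context for the model.
# Word overlap stands in for embedding similarity; docs are illustrative.

DOCS = [
    "Store 12 is located in Denver and stocks 4,000 products.",
    "The dairy aisle restock happens every Tuesday morning.",
    "Supplier Acme delivers produce to stores 10 through 15.",
]

def score(query: str, doc: str) -> int:
    """Count words shared between the query and a document."""
    return len(set(query.lower().split()) & set(doc.lower().split()))

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the top-k documents most related to the query."""
    return sorted(DOCS, key=lambda d: score(query, d), reverse=True)[:k]

def build_prompt(query: str) -> str:
    """Feed retrieved context to the model before the user's question."""
    context = "\n".join(retrieve(query))
    return f"Context:\n{context}\n\nQuestion: {query}"

print(build_prompt("Which supplier delivers produce to store 12?"))
```

Swapping `score` for an embedding-based similarity function is the only change needed to turn this shape into a conventional vector-search RAG setup.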

Graph RAG is a specific type of RAG that pulls data in from a graph database. Graph DBs store data as nodes and edges, creating a web of connections. In most databases, relationships are implied with foreign keys and references, but in graph databases, relationships are explicit objects.

Graph DBs are optimized for questions like:

  • “How is A connected to B?”
  • “What depends on this if I change it?”
  • “Who are the friends of friends of this user?”

Instead of expensive joins that grow slower as data grows, graph databases follow pointers directly between nodes and have near-constant-time hops, even as the data scales. If your core question is “How do things relate and what happens if I follow that thread?” a graph database usually wins.

3 Essential Graphs for Superior AI Context

One of the first things new employees learn is how the company’s core business entities fit together: feature names, product components, team ownership, and the relationships between them. All of this information makes up the Knowledge Graph. This is the foundation that defines the basic structure of the company.

For example, your favorite grocery store may have entities, such as “Store,” “Customer,” “Product,” “Supplier,” “Delivery Truck,” etc., and relationships like “MADE_BY_BRAND,” “LOCATED_IN_AISLE,” and “EXPIRES_ON.” Having all these entities and relationships mapped out in a graph gives AI a glimpse into the inner workings of a business.

Knowledge graphs are extremely helpful tools for AI to have good entity recognition. When referring to a specific entity in a conversation with AI, this context helps it understand exactly what you’re talking about and how it relates to other aspects of the business.

With these core definitions in place, the second graph that needs to be utilized consists of the metadata for your actual data sources. If the Knowledge Graph is filled with ontological definitions of entities and relationships, the Data Graph tells AI where to actually pull the data from.

A grocery store will have a variety of data sources stored in different places—POS data in a CDW, inventory data in an ERP system, customer data in a CRM, etc. Each data source will also have a unique schema, with (hopefully) consistent column naming conventions and data types.

When we’re ready to fetch data to include in an AI context, we need to know where it lives and how it’s shaped. We also need to know how each column maps to the nodes and edges of the Knowledge Graph. This metadata can be stored in the Data Graph for quick and efficient retrieval when needed.
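One minimal sketch of this metadata is a lookup keyed by Knowledge Graph entity, pointing at the system, table, and column mappings where the data lives. The system, table, and column names below are invented for illustration.

```python
# Sketch of a Data Graph: metadata linking each Knowledge Graph entity
# to the source system, table, and columns where its data actually
# lives. All names are hypothetical.

DATA_GRAPH = {
    "Product": {
        "system": "ERP",
        "table": "inventory.products",
        # source column -> Knowledge Graph node or edge it feeds
        "columns": {"sku": "Product.id", "expiry_dt": "EXPIRES_ON"},
    },
    "Customer": {
        "system": "CRM",
        "table": "crm.customers",
        "columns": {"cust_id": "Customer.id", "email": "Customer.email"},
    },
}

def where_is(entity: str) -> str:
    """Tell the retrieval layer where to pull an entity's data from."""
    meta = DATA_GRAPH[entity]
    cols = ", ".join(meta["columns"])
    return f"{entity}: {meta['system']} table {meta['table']} (columns: {cols})"

print(where_is("Product"))
```

At retrieval time, the AI (or the orchestration layer around it) consults this graph first, then issues a query against the right system instead of scanning everything.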

The third graph that can really bolster the accuracy of AI responses is the Decision Graph. This graph captures decision-making within a business and shows how it grows over time. You can think of it as a memory database for AI.

It’s important that decisions are captured in as many places as possible. In the article “AI’s trillion-dollar opportunity: Context graphs,” the authors highlight the problem of missing decision traces. Judgments aren’t always recorded. For example, some approval chains happen outside systems. “A VP approves a discount on a Zoom call or in a Slack DM. The opportunity record shows the final price. It doesn’t show who approved the deviation or why.” Unfortunately, hard-and-fast rules don’t exist for every scenario, and exception logic often lives in people’s heads.

The Decision Graph is the missing piece of context that enables AI to adapt to each scenario rather than blindly follow rules.
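A minimal Decision Graph entry might capture who approved a deviation, why, and which entities it touched, so the judgment isn’t lost in a Slack DM. The record structure and example below are invented to illustrate the idea, not a prescribed schema.

```python
# Sketch of a Decision Graph entry: a decision trace linked to the
# entities it touched, so AI can surface precedent instead of guessing.
# Field names and the example record are hypothetical.
from dataclasses import dataclass, field

@dataclass
class Decision:
    action: str          # what was decided
    approver: str        # who signed off
    rationale: str       # why the rule was bent
    touches: list[str] = field(default_factory=list)  # linked entities

LOG: list[Decision] = []

def record(d: Decision) -> None:
    LOG.append(d)

def precedent(entity: str) -> list[Decision]:
    """Surface past decisions that touched an entity, giving AI the
    exception logic that otherwise lives only in people's heads."""
    return [d for d in LOG if entity in d.touches]

record(Decision("20% discount approved", "VP Sales",
                "Strategic renewal at risk", ["Opportunity:8842"]))
print(precedent("Opportunity:8842")[0].approver)
```

Linking each `Decision` to Knowledge Graph entities via `touches` is what makes this a graph rather than a log: the next time a similar deviation comes up, the relevant precedent is one hop away.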

Building the Brain

Hitting the context window limit is a problem that isn’t going away anytime soon. That’s why it’s more important than ever to figure out what information AI needs in a given situation, as well as what information it doesn’t need. Context graphs can make up the difference.

AI doesn’t need more memory; it needs more understanding. The digital twin of your business needs a brain.

Emme Tuft is a Senior Software Engineer specializing in custom web applications powered by data and AI. She is part of the Developer Innovation team at Domo, where she builds “art-of-the-possible” solutions that bring ideas to life and helps major customers develop AI and data products on the Domo platform. Emme’s background in bioinformatics and full-stack development informs her approach to building scalable, data-driven applications. She is also the author of a recent article in the Journal of Open Research Software on building a programming assignment management system. 

Photo courtesy Valeria Nikitina for Unsplash+
