Price Performant Compute
Drastically Reduce Your Modern Data Platform Costs
Using the right tools for the right jobs to optimize costs and performance in cloud data management
The Vehicle Analogy
Imagine you run a business and you need different tools for different tasks. For example, you wouldn't use a heavy-duty truck for a quick trip to the grocery store or a sports car to haul construction materials. Similarly, when companies move their data to the cloud, they often choose one main tool to store and process all their data, like putting everything in one big warehouse or lake. However, this warehouse or lake is very expensive to use for all kinds of tasks, especially for small or less important ones. Gartner (2023) notes that most organisations overspend in the cloud because they default to a single, high-cost compute layer for all workloads, even though most tasks do not require it.
Not all tasks need such a powerful and costly warehouse
The problem is that not all tasks need such a powerful and costly warehouse. In fact, most of these tasks are small and don't need that much power, making this approach very costly and inefficient. McKinsey’s analysis of cloud waste (2023) highlights that up to 70% of cloud spend can be eliminated simply by aligning workload types to appropriate compute resources.
The Solution
The solution is to store different types of data in different places that are best suited for them. For example, data used for artificial intelligence (AI) should go to a place designed for AI tasks, while data used for business reports should go to a place that's better for creating those reports. Deloitte Insights (2023) specifically recommends “fit-for-purpose data architectures,” pointing out that AI workloads perform best on specialised repositories, while BI tasks are far more cost-effective on analytic-optimised systems.
AI Data
Store in AI-optimized repositories
Business Reports
Store in analytics-focused systems
Operational Data
Store in transaction-optimized databases
The Data Control Plane
To manage all this, we use a "data control plane," which acts like a smart dispatcher. It knows where all the data is stored and directs each task to the right place, like directing a sports car to the grocery store and a truck to the construction site. This way, the company can handle its data efficiently and cost-effectively, using the right tools for the right jobs. Forrester (2024) describes this as the emerging ‘unified control plane’, a method to simplify multi-repository complexity while improving operational performance.
Price Performant Compute (PPC)
I call this price performant compute (PPC). Thinking back to the vehicle analogy, PPC is like making sure you're using the right vehicle for the right job to get the best value for your money.

In the context of managing data for an organisation, "price performant compute" means using the most cost-effective and efficient computing resources for each specific task. Instead of running every data task on an expensive, powerful system, we use the system that fits the needs of each task. For example, simple tasks run on cheaper systems, while more complex tasks that genuinely need more power use the more expensive systems.
This approach ensures that the company is spending its money wisely and getting the best performance possible for each task, without overspending on computing power that isn't needed. It's all about matching the task to the right resource to optimize costs and performance.
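The matching logic behind PPC can be sketched in a few lines of code. This is a purely illustrative example, not Latttice's implementation: the tier names, capacities, and prices are invented for the sketch, and the rule is simply "pick the cheapest tier that can still handle the task".

```python
# Hypothetical sketch of price performant compute: route each task to the
# cheapest compute tier that still meets its requirements. Tier names,
# capacities, and prices below are illustrative only.

# Compute tiers, ordered cheapest-first, with the workload "power" each
# can handle (arbitrary units).
TIERS = [
    {"name": "small-analytics", "capacity": 2, "cost_per_hour": 0.10},
    {"name": "warehouse", "capacity": 6, "cost_per_hour": 1.50},
    {"name": "ai-cluster", "capacity": 10, "cost_per_hour": 8.00},
]

def cheapest_tier(required_capacity: int) -> str:
    """Return the cheapest tier that satisfies the task's requirement."""
    for tier in TIERS:  # already sorted cheapest-first
        if tier["capacity"] >= required_capacity:
            return tier["name"]
    raise ValueError("No tier can handle this workload")

# A simple report needs little power; an AI training job needs a lot.
print(cheapest_tier(1))   # small-analytics
print(cheapest_tier(9))   # ai-cluster
```

The point of the sketch is the ordering: because the list is sorted by cost, the first tier that qualifies is automatically the best-value one, which is the vehicle analogy in code form.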
The Challenge of Managing Multiple Repositories
Imagine you own a business with several storage locations for different types of items. You have a warehouse for large equipment, a storage unit for office supplies, and a refrigerated unit for perishable goods. While each location is ideal for its specific items, managing all these different storage spots can become complicated. You need to keep track of where everything is, ensure each location is maintained properly, and move items between locations as needed. This can be time-consuming and requires a lot of coordination.
Similarly, in a company, storing different types of data in different repositories that are best suited for each type makes sense. However, managing all these separate repositories can be challenging. You need to know where all your data is stored, ensure each repository is functioning correctly, and manage the movement and processing of data across these repositories. This can be complex and require significant effort and resources. IDC (2024) highlights that multi-repository ecosystems often introduce silos and operational overhead unless unified by a governance or control plane.
How a Control Plane Can Help
Data Tiles: A Data Mesh Solution
At Data Tiles, we have developed a data mesh solution that operates as a "control plane" to provide this central oversight. Here's how it works:
Unified View
The solution provides one entry point to access all your data, regardless of where it's stored. Just like a control room that shows you the status of all your storage locations on one screen, the solution lets you see and manage all your data repositories from a single interface. This aligns with Forrester’s 2024 recommendation that organisations adopt unified access layers to reduce complexity and increase data usability.
Smart Routing
Data products are developed to ensure that when you need to use your data, that workload is directed to the most appropriate repository. The control plane knows which repository is best suited to each type of task and executes the task there. This is like the control room automatically dispatching the right vehicle for each job, ensuring efficiency and cost-effectiveness. Deloitte’s AI and Data Infrastructure report emphasises that routing tasks to fit-for-purpose compute layers improves both speed and cloud cost efficiency.
Simplified Management
By using the solution, you no longer have to manually manage each repository. The control plane handles the coordination, ensuring that all repositories are working together seamlessly. This reduces the complexity and effort required to manage your data. IDC refers to this as “computational governance,” a key principle of modern data mesh implementations.

The control plane solves the challenge of managing multiple repositories by providing a centralised system that offers a unified view of all your data, smart routing of tasks, and simplified management. This ensures that you can efficiently and effectively use your data without the headache of managing each repository individually.
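The dispatcher role described above can be sketched as a small routing table. This is a hypothetical illustration, not the Data Tiles control plane itself: the class, method names, and repository names are invented for the example.

```python
# Hypothetical sketch of a data control plane acting as a dispatcher:
# one entry point that routes each workload to the repository registered
# for that workload type. All names here are illustrative only.

class ControlPlane:
    def __init__(self):
        self.routes = {}  # workload type -> repository name

    def register(self, workload_type: str, repository: str) -> None:
        """Record which repository serves a given workload type."""
        self.routes[workload_type] = repository

    def dispatch(self, workload_type: str) -> str:
        """Return the repository a task of this type should run against."""
        if workload_type not in self.routes:
            raise KeyError(f"No repository registered for {workload_type!r}")
        return self.routes[workload_type]

plane = ControlPlane()
plane.register("ai", "ai-optimized-store")
plane.register("reporting", "analytics-warehouse")
plane.register("transactions", "oltp-database")

print(plane.dispatch("reporting"))  # analytics-warehouse
```

The design choice worth noting is that callers only ever talk to the control plane, never to a repository directly, which is what makes the unified view and simplified management possible.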
Meet Latttice!
Imagine you have a smart assistant for your business that can manage all your different storage locations: warehouses, storage units, refrigerated units, and more. This assistant knows exactly where everything is and can automatically move items to the right place and fetch them when needed. This is what Latttice does for your data.
What Latttice provides:
Unified Access Point
A single, easy-to-use entry point to access all your data, no matter where it is stored. Just like having one app to manage all your different storage locations, Latttice gives you one platform to access and control all your data repositories.
Smart Task Routing
Easy development of data products across multiple repositories, with AI-powered, zero-code functionality. This lets you rapidly run tasks against the repository best suited to each job. For instance, if you need to run an AI task, it executes against a repository designed for AI workloads; if you need to generate a business report, it executes against a repository best suited to that purpose. This ensures that each task is handled efficiently and cost-effectively. Forrester's 2024 State of Data Strategy report specifically highlights the importance of zero-code and AI-enabled data access for business teams.
Simplified Management
Latttice takes care of the complex coordination between different data repositories. You don't have to worry about manually managing each one or moving data around. Latttice ensures everything works together smoothly, reducing the hassle and effort on your part.
How Does This Work?
Centralized Control
Latttice acts like a control tower for your data, giving you a comprehensive view and control over all your data repositories from one place.
Optimized Performance
By directing each task to the right repository, Latttice ensures that your computing resources are used efficiently. This means you get the best performance without overspending on expensive computing power for tasks that don't need it. Gartner repeatedly highlights workload-aware routing as a core strategy for reducing cloud waste.
Ease of Use
With Latttice, you don't need to be a data expert to manage your data. Its AI-powered, zero-code, user-friendly interface makes it easy for anyone to access and use data effectively, without worrying about the underlying complexities.

Real-World Example
Consider a retail business that needs to analyse sales data to make informed decisions. With Latttice, you simply access the platform, and it fetches the sales data from the appropriate repository. If you then need to run a complex AI model to predict future sales trends, Latttice executes this task against a powerful AI-optimized repository. All of this happens seamlessly, without you having to worry about where the data is stored or how to process it. IDC (2024) notes that separating repositories for reporting versus AI is one of the most effective ways to reduce cost and improve model performance.
Conclusion
The concept of price performant compute (PPC) addresses the inefficiency and high costs associated with running all data tasks on expensive computing resources. By storing different types of data in repositories tailored to specific workloads, such as AI or business intelligence, companies can optimize their computing expenses. Latttice simplifies this process by providing a centralized control plane that unifies access to all data repositories, smartly executing tasks against the appropriate systems. This approach ensures efficient use of resources, reduces costs, and streamlines data management, allowing companies to focus on leveraging their data for better decision-making without the burden of managing complex and capital-intensive infrastructures.
Join a data conversation,
Cameron Price.
Further Reading & References:
  1. Gartner (2023), How to Optimize Cloud Costs Without Compromising Performance. Highlights the importance of matching workloads to the right compute resources to achieve cost efficiency.
  2. McKinsey & Company (2023), Cloud's Trillion-Dollar Prize Is Up for Grabs. Discusses how organizations can optimize cloud usage by choosing fit-for-purpose architectures rather than over-provisioning.
  3. Forrester (2024), The State of Data Strategy. Emphasizes the need for unified control planes to manage multi-repository environments effectively.
  4. Deloitte Insights (2023), AI and Data Infrastructure: Aligning the Right Tools with the Right Tasks. Details how targeted compute allocation improves both speed and cost outcomes.
  5. IDC (2024), Data Mesh and the Modern Enterprise. Explores how data mesh principles, combined with centralized governance, reduce complexity and improve agility in multi-repository ecosystems.