Data engineering, encompassing ETL (Extract, Transform, Load) and ELT (Extract, Load, Transform) processes, is often the unsung hero of data projects. Yet a closer look at many data warehouse initiatives shows that a disproportionate share of time and engineering effort, often more than 70%, goes into these processes. In the current paradigm, we find that most of the data movement ETL/ELT orchestrates, more than 80% of it, is superfluous, inflating project timelines and costs.

This inefficiency is exacerbated by the integration of big data technologies, which introduce complexity through their distributed nature and demand highly specialized skills that remain in short supply. The scarcity of such expertise makes these projects expensive and hard to scale. The resulting pipelines frequently become maintenance nightmares, with a significant share of resources diverted to simply keeping the lights on rather than to innovating or extracting insight from the data.
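To make the three stages concrete, here is a minimal ETL sketch using only the Python standard library. The source file (`orders.csv`), target database (`warehouse.db`), and the `orders` schema are hypothetical names chosen for illustration; a production pipeline would layer orchestration, retries, incremental loading, and monitoring on top of this skeleton.

```python
import csv
import sqlite3


def extract(path):
    """Extract: stream raw rows from a source file (here, a CSV)."""
    with open(path, newline="") as f:
        yield from csv.DictReader(f)


def transform(rows):
    """Transform: normalize types and drop rows that fail validation."""
    for row in rows:
        try:
            yield (
                row["order_id"],
                row["customer"].strip().lower(),
                float(row["amount"]),
            )
        except (KeyError, ValueError):
            continue  # a real pipeline would route bad rows to a quarantine table


def load(rows, db_path):
    """Load: write cleaned rows into the warehouse (here, SQLite)."""
    with sqlite3.connect(db_path) as conn:
        conn.execute(
            "CREATE TABLE IF NOT EXISTS orders "
            "(order_id TEXT, customer TEXT, amount REAL)"
        )
        conn.executemany("INSERT INTO orders VALUES (?, ?, ?)", rows)


if __name__ == "__main__":
    load(transform(extract("orders.csv")), "warehouse.db")
```

Even this toy version hints at the maintenance burden described above: every new source, schema change, or malformed record adds branching logic to the transform step, and that logic must be kept running indefinitely.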