The increase in the scale of LHC computing expected for Run 3, and even more so for Run 4 (HL-LHC), over the course of the next 10 years will almost certainly require radical changes to the computing models and data processing of the LHC experiments. Translating the requirements of the physics programme into resource needs is an extremely complicated process, subject to significant uncertainties; currently it cannot be done without complex tools and procedures developed internally by each LHC collaboration. Recently there has been much interest in developing a common model for estimating resource costs, which would benefit the experiments, WLCG and the sites, in particular in understanding and optimising the path towards HL-LHC. For example, such a model could be used to estimate the impact of changes in the computing models, or to optimise resource allocation at the site level. In this presentation we outline some preliminary ideas on how this could be achieved, with a special focus on the site perspective, and provide some real-world examples.