The computational, storage, and network requirements of the Compact Muon Solenoid (CMS) Experiment, from Run 1 at the LHC to the future Run 4 at the High-Luminosity Large Hadron Collider (HL-LHC), have grown by at least an order of magnitude. Computing plays a significant role in CMS, from the first steps of data processing to the final delivery of analyzed data to physicists. In this paper, we share the insights and lessons learned over the past ten years of Run 1 and Run 2 and discuss the developments and upgrades completed during the current shutdown of the LHC. We analyze the evolution of CMS computing tools in the areas of distributed grid computing infrastructure, data management, and data production. We also quantitatively assess the key performance indicators of maintenance and operations and highlight the upcoming challenges and solutions for the future.