A major effort to re-engineer existing HEP software is needed for the efficient future exploitation of the resources being invested in the computer centres used by HEP experiments. New generations of computers exploit higher levels of parallelism, through new CPU micro-architectures and computing systems with multiple CPUs. Significant agility will be needed to adapt, and even redesign, the algorithms and data structures of existing HEP code to fully utilize the available processing power. Much work remains to evaluate and select the most promising emerging software technologies, and to adapt our codes to new programming models that execute efficiently in parallel on these new architectures. This paper discusses the motivation for a new initiative to bring the whole HEP software community together to prepare our data processing applications for the future challenge of improving software performance.