A two-day LPCC workshop on public fast simulators for the LHC was held on 11-12 June 2012 at CERN. The first day, Monday, was mostly dedicated to reviews and discussion of three existing fast detector simulators, PGS, Turbosim and Delphes, and of two schemes currently under development, Atom and MadAnalysis5. In addition to the presentations of the five codes by their authors, we had short contributions from several people providing user feedback. A few comments seem in order:

- PGS is a parametric simulator developed by J. Conway to emulate the Tevatron experiments; it also performs well for CMS and ATLAS analyses. PGS is written in Fortran and is widely used by theorists, in particular in the US. John Conway does not want to maintain it "eternally", but further development of this tool would be very useful for the community.

- Delphes, written in C++, is a "modern version" of PGS, maintained by a group in Louvain and developed on a community basis. It is the current state of the art, and many people, experimentalists as well as theorists, are using it.

- Turbosim takes a different approach than the two codes above: it abstracts a generator-to-simulation mapping from a large sample of fully simulated events and applies this map to particle- or generator-level events to obtain the simulated output (a toy illustration is sketched at the end of this section). It was created by Bruce Knuteson in the Tevatron era and was also shown to work well in CMS. Maintaining it requires one or two people. The downside is that it relies on collaboration-internal information, and it is not clear to what extent it can be a "public" tool.

- Atom is a framework for analysis re-use, built on the Rivet library of past experimental analysis codes and results. Atom currently models detector effects with parametrized efficiencies, but integration with fast detector simulations is under development.

- MadAnalysis5 is a new and very general framework for collider phenomenology analyses, developed by Eric Conte, Benjamin Fuks, and Guillaume Serret. It offers two modes of running. The first, easier to handle, exploits a powerful Python interface to implement physics analyses through a set of intuitive commands. The second requires implementing the analyses in C++, directly within the core of the analysis framework. The MadAnalysis5 developers wish to include a fast detector simulation by means of shared libraries.

The second half of Monday afternoon and all of Tuesday were dedicated to extensive discussions of I/O formats, run performance, analysis tools, object implementation, difficult topologies, and validation.

Regarding I/O formats, an important point was the huge size of the HepMC files used as input. Several suggestions were made for coping with these large files, the most notable being the use of named pipes (sketched below). Making the output files more configurable, and providing a more flexible analysis framework in which efficiencies are easier to input, were also discussed.

On the subject of object definitions, a long to-do list was discussed that would help refine the performance of the simulators. Specific requests were to make isolation criteria (even full functions) easily accessible to the user, to update the b- and tau-tagging, to add photon conversion, and to add a dedicated object for long-lived charged particles.
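As a toy illustration of the Turbosim idea mentioned above, the following sketch (with invented numbers and function names, and objects reduced to a bare pT value purely for illustration) stores (generator-level, reconstructed) pairs abstracted from a fully simulated sample, and maps a new generator-level object onto the reconstructed partner of its nearest stored neighbour:

```python
import bisect

# Lookup table abstracted from a large sample of fully simulated events:
# sorted (generator-level pT, reconstructed pT) pairs.  Real objects live
# in a higher-dimensional space (pT, eta, phi, particle type, ...); the
# numbers here are invented for illustration.
table = sorted([(20.0, 17.8), (50.0, 48.1), (100.0, 97.4), (200.0, 192.0)])
keys = [gen for gen, _ in table]

def turbosim_map(gen_pt):
    """Replace a generator-level object by the reconstructed partner of
    its nearest neighbour in the lookup table."""
    i = bisect.bisect_left(keys, gen_pt)
    candidates = table[max(0, i - 1):i + 1]   # the (at most two) bracketing entries
    nearest = min(candidates, key=lambda pair: abs(pair[0] - gen_pt))
    return nearest[1]

print(turbosim_map(60.0))   # -> 48.1, from the nearest stored pair (50.0, 48.1)
```

In practice the lookup is performed in the full space of object properties, and it is precisely the fully simulated sample behind the table that brings in the collaboration-internal information noted above.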
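The named-pipe suggestion made under I/O formats can be sketched as follows. The executable names and options below are placeholders, not the command lines of any actual generator or simulator; the point is only that the generator writes its HepMC stream into a FIFO while the simulation reads concurrently from the other end, so the huge event file never materializes on disk:

```python
import os
import subprocess
import tempfile

# Create a named pipe (FIFO, POSIX systems) to carry the HepMC event stream.
workdir = tempfile.mkdtemp()
fifo = os.path.join(workdir, "events.hepmc")
os.mkfifo(fifo)

# Placeholder command lines: substitute the actual generator and
# fast-simulation invocations; only the use of the FIFO as the "file"
# argument matters here.
generator = subprocess.Popen(["my_generator", "--hepmc-output", fifo])
simulator = subprocess.Popen(["my_fastsim", "--hepmc-input", fifo])

generator.wait()   # the generator finishes writing ...
simulator.wait()   # ... and the simulator drains the pipe
os.remove(fifo)
```

Since a FIFO blocks its writer until a reader attaches, both processes are started before either is waited on; the pipe also naturally throttles the generator to the speed of the simulation.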
The question of pile-up, and whether to simulate it as calorimeter noise or more realistically, was also thoroughly discussed. The recent performance papers from the experiments that include pile-up will be studied in order to decide on the implementation.

The difficult topologies discussed included exotics, boosted objects, topologies with soft objects, and multi-object topologies.

Validation was another big subject. Here one has to distinguish between validation of the code, validation of objects, and validation of analysis performance. Object validation also depends on the topology in which the object is considered. The validation of object implementations is a particularly important task at the moment and requires input from the ATLAS and CMS experiments; a variety of distributions (pT, ETmiss, etc.) for benchmark processes will be necessary for detailed comparisons. Relevant processes to consider are, for example, ttbar, boosted tops, and multi-top production. We will dig out what is currently available, in particular for ttbar, and we hope that detailed performance studies by ATLAS and CMS will become public soon. There were also discussions of validation measures.

Finally, we had a presentation of Geant5, which took a much wider perspective than the tool-specific questions discussed at the workshop.

Summary and outlook:

- The general view was that it would be extremely important to develop a modular version of Delphes. For example, object definitions, including efficiency functions, should be implemented as modules that provide basic functionality and are easy for the expert user to modify and extend (a toy sketch of such a module chain is given at the end of this note).

- Discussions (and actual work) have thus started on a new comprehensive framework for detector simulations, i.e. not a new program but a set of concepts and conventions for formats and communication. By design, it should be able to link, in a modular and flexible fashion, all the features included in the current versions of Delphes and PGS, as well as the items on the development route of MadAnalysis5. Moreover, it must offer a way to alleviate the limitations of the currently available tools pointed out by their users.

- We also concluded that the existence of at least two independent public*) fast simulation tools is highly desirable. This allows users to select the tool closest to their needs and enables cross-checks. We nevertheless encourage the developers of the different tools to collaborate, also in contact with the experiments, and to agree on issues of common interest, for example the nature and format of the experimental input.

- We need a dedicated effort towards validation, and much information from the experimental collaborations is needed to this end. This issue will be followed up in a future dedicated workshop.

*) Note added: A "public" simulation code should not rely, for its release (and the release of its updates), on the approval or endorsement of anyone but the primary authors. The experimental input should be based on publicly available information provided by the experiments, and the use of these inputs for development, validation and tuning is the responsibility of the code authors only.
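To make the modularity advocated in the first summary point concrete, here is a toy sketch, under our own assumptions and not the actual Delphes design: every detector effect is an independent module with a common interface, a simulation is just an ordered chain of such modules, and efficiency and smearing functions are plain user-supplied callables (the flat 97% efficiency and 2% resolution below are placeholders, not a detector tune):

```python
import random

class Module:
    """Common interface: each module applies one detector effect to the event."""
    def process(self, event):
        raise NotImplementedError

class EfficiencyModule(Module):
    """Drops objects according to a user-supplied efficiency function."""
    def __init__(self, collection, efficiency_fn):
        self.collection = collection
        self.efficiency_fn = efficiency_fn
    def process(self, event):
        event[self.collection] = [o for o in event[self.collection]
                                  if random.random() < self.efficiency_fn(o)]

class SmearingModule(Module):
    """Replaces each object by a smeared copy via a user-supplied function."""
    def __init__(self, collection, smear_fn):
        self.collection = collection
        self.smear_fn = smear_fn
    def process(self, event):
        event[self.collection] = [self.smear_fn(o) for o in event[self.collection]]

def run_chain(modules, event):
    for module in modules:          # the simulation is just an ordered chain
        module.process(event)
    return event

# A toy event: muon transverse momenta in GeV.
event = {"muons": [25.0, 41.5, 118.0]}
chain = [
    EfficiencyModule("muons", lambda pt: 0.97),                      # flat placeholder efficiency
    SmearingModule("muons", lambda pt: random.gauss(pt, 0.02 * pt)), # 2% placeholder resolution
]
print(run_chain(chain, event))
```

The point of such an interface is that adding, say, a long-lived-particle object or a different isolation function would mean registering one more module or callable, without touching the framework core.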