Dual-energy (DE) chest x-rays (CXRs) make it possible to selectively image two relevant materials, soft tissue and bone, to better characterize various chest pathologies and potentially improve CXR diagnosis. Recently, deep-learning-based image synthesis techniques have attracted much attention as an alternative to existing DE methods (e.g., dual-exposure- and sandwich-detector-based techniques) because of their superior ability for image mapping. The cycle-consistent generative adversarial network (Cycle-GAN) has become a central approach to synthesizing medical images. In this study, we propose a Cycle-GAN-based method for image-to-image translation between conventional CXRs and the selective images of the two relevant materials. In addition, to avoid anatomical structural errors in the synthesized results, we use a correlation coefficient loss that directly enforces structural similarity between the input material-selective image and the synthesized image, and we incorporate shape-consistency information to further improve the synthesized DE images. Our results indicate that the proposed method effectively synthesizes material-selective images. Its effectiveness was validated by comparing its performance with that of other network methods, such as U-Net and the multi-level wavelet CNN (MWCNN), for DE synthetic CXRs.
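The abstract does not give the exact form of the correlation coefficient loss. A minimal sketch, assuming the common formulation of one minus the Pearson correlation between the synthesized and reference images (the function name and `eps` stabilizer are ours, not from the paper):

```python
import numpy as np

def correlation_coefficient_loss(synthesized, target, eps=1e-8):
    """Hypothetical structural-similarity penalty: 1 - Pearson correlation.

    Approaches 0 when the two images are perfectly linearly correlated
    (structures aligned) and grows toward 2 when they are anti-correlated.
    """
    x = np.asarray(synthesized, dtype=np.float64).ravel()
    y = np.asarray(target, dtype=np.float64).ravel()
    # Center both images so the correlation ignores global intensity offsets.
    xc = x - x.mean()
    yc = y - y.mean()
    # Pearson correlation coefficient; eps guards against flat images.
    r = (xc * yc).sum() / (np.sqrt((xc ** 2).sum() * (yc ** 2).sum()) + eps)
    return 1.0 - r
```

In training, such a term would be added to the adversarial and cycle-consistency losses with a weighting factor, penalizing synthesized images whose anatomical structures drift from the input material-selective image.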