10-14 October 2016
San Francisco Marriott Marquis
America/Los_Angeles timezone

A Comparison of Deep Learning Architectures with GPU Acceleration and Their Applications

11 Oct 2016, 14:00
GG A+B (San Francisco Marriott Marquis)



Oral Track 5: Software Development


Dr Jianlin Zhu (South-central University For Nationalities (CN)), Dr Jin Huang (Wuhan Textile University (CN))


The goal of this comparison is to summarize the state-of-the-art techniques of deep learning as accelerated by modern GPUs. Deep learning, also known as deep structured learning or hierarchical learning, is a branch of machine learning based on a set of algorithms that attempt to model high-level abstractions in data by using multiple processing layers composed of multiple non-linear transformations. Deep learning is part of a broader family of machine learning methods based on learning representations of data. These representations are inspired by advances in neuroscience and are loosely based on interpretations of information processing and communication patterns in the nervous system, such as neural coding, which attempts to define a relationship between various stimuli and the associated neuronal responses in the brain. In this paper, a brief history of deep learning research is discussed first. Then, different deep learning models, such as deep neural networks, convolutional neural networks, deep belief networks and recurrent neural networks, are analyzed to summarize the major work reported in the deep learning literature. We then discuss the general deep learning system architecture, including the hardware layer and software middleware. In this architecture, the GPU subsystem is widely used to accelerate computation, and its architecture is discussed in particular. To show the performance of a deep learning system with GPU acceleration, we choose various deep learning models, compare their performance with and without GPU, and list the resulting acceleration rates. Various deep learning models have been applied to fields such as computer vision, automatic speech recognition, natural language processing, audio recognition and bioinformatics. Selected applications are reviewed to show state-of-the-art results on various tasks. Finally, future directions of deep learning are discussed.
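The "multiple processing layers composed of multiple non-linear transformations" described above can be illustrated with a minimal sketch of a forward pass through a small multilayer network. This is not code from the paper; the layer sizes, the ReLU non-linearity, and the function names (`relu`, `mlp_forward`) are illustrative assumptions.

```python
import numpy as np

def relu(x):
    """Element-wise non-linear transformation applied after each layer."""
    return np.maximum(0.0, x)

def mlp_forward(x, weights, biases):
    """Forward pass through stacked layers: each layer applies an affine
    map followed by a non-linearity, producing progressively higher-level
    representations of the input."""
    h = x
    for W, b in zip(weights, biases):
        h = relu(h @ W + b)
    return h

# Hypothetical sizes: 4-dim input, hidden layers of 8 and 3 units.
rng = np.random.default_rng(0)
weights = [rng.standard_normal((4, 8)), rng.standard_normal((8, 3))]
biases = [np.zeros(8), np.zeros(3)]
out = mlp_forward(rng.standard_normal((5, 4)), weights, biases)  # batch of 5 inputs
```

On a GPU-accelerated system of the kind discussed in the paper, the dense matrix products in such a forward pass are exactly the operations offloaded to the GPU subsystem.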

Primary Keyword (Mandatory) Artificial intelligence/Machine learning
Secondary Keyword (Optional) High performance computing
Tertiary Keyword (Optional) Computing middleware

Primary author

Dr Jin Huang (Wuhan Textile University (CN))


Co-authors

Daicui Zhou (Central China Normal University CCNU (CN)), Dr Jianlin Zhu (South-central University For Nationalities (CN))

Presentation Materials

There are no materials yet.