| Name | Description | Size | Format |
|---|---|---|---|
| | | 185.41 KB | Adobe PDF |
Abstract
In recent years, deep learning architectures have gained attention by winning important international detection and classification challenges. However, because of their high energy consumption, the need to run them on low-power devices at acceptable throughput is greater than ever. This paper addresses the problem by introducing energy-efficient deep learning based on local training and on low-power mobile GPU parallel architectures, all conveniently supported by the same high-level description of the deep network. It also proposes to determine the maximum dimensions that a particular type of deep learning architecture, the stacked autoencoder, can support by characterizing the hardware limitations of a representative group of mobile GPUs and platforms.
Keywords
Parallel processing; Mobile GPU; Low-power; Energy savings; Deep Learning; Stacked Autoencoders