
Deep Learning on the Edge

Reference number
SM19-0033
Start and end dates
200101-221231
Amount granted
1 500 000 SEK
Administrative organization
KTH - Royal Institute of Technology
Research area
Information, Communication and Systems Technology

Summary

Neural networks (NNs) have driven significant improvements in many applications of artificial intelligence (AI), such as image classification and speech recognition. Current general-purpose processors cannot execute the large computations involved in NNs with high performance. The advancement of NNs therefore depends on the development of embedded hardware, which extends AI applications to mobile and other edge devices. Currently, such services are provided through the cloud, running on CPUs or GPUs. However, cloud-based execution raises serious concerns about security, internet connectivity, and power consumption. As a result, the need for on-device processing is increasing steadily. The main objective of this mobility plan is to develop algorithms and techniques that enable the local and efficient execution of NNs on resource-limited hardware platforms. We first analyze algorithms currently developed at Ericsson to make them more lightweight (without loss of accuracy), taking into account the limited hardware resources of edge devices. In the next step, we develop specialized hardware for the efficient execution of NN algorithms. The expected results are (1) generalized solutions that simplify algorithms for hardware execution, and (2) fast, customized hardware architectures for the efficient execution of NN algorithms. We also believe this research will strengthen the connection between KTH and Ericsson and could open up possible future collaborations.
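The summary does not name the lightweight-model techniques to be used; one common step in this direction is post-training weight quantization, which replaces 32-bit floating-point weights with small integers so they fit the memory and arithmetic units of edge hardware. The sketch below is a hypothetical, minimal illustration of symmetric 8-bit quantization, not the project's actual method:

```python
def quantize_int8(weights):
    """Symmetric per-tensor int8 quantization of a list of float weights.

    Maps each weight w to an integer q in [-127, 127] with a shared
    scale factor, so that w is approximated by q * scale. Storing q as
    int8 shrinks the model roughly 4x versus float32.
    """
    max_abs = max(abs(w) for w in weights) or 1.0  # avoid divide-by-zero
    scale = max_abs / 127.0
    q = [round(w / scale) for w in weights]
    return q, scale


def dequantize(q, scale):
    """Recover approximate float weights from quantized integers."""
    return [v * scale for v in q]


# Toy example: the reconstruction error per weight is at most scale / 2.
weights = [0.02, -0.5, 0.31, 0.127, -0.9]
q, scale = quantize_int8(weights)
approx = dequantize(q, scale)
```

In practice, whether such a step preserves accuracy ("without loss of accuracy", as the plan states) is checked empirically per model, and finer-grained variants (per-channel scales, quantization-aware training) are used when a single per-tensor scale loses too much precision.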
