
Deep Learning on the Edge

Reference number
SM19-0033
Start and end date
2020-01-01 – 2022-12-31
Granted amount
SEK 1,500,000
Administrating organisation
KTH - Royal Institute of Technology
Research area
Information, communication and systems technology

Summary

Neural networks (NNs) have driven significant improvements in many applications of artificial intelligence (AI), such as image classification and speech recognition. Current general-purpose processors cannot handle the large amount of computation involved in NNs with high performance. The continued advancement of NNs therefore depends on the development of embedded hardware, which would extend the use of AI applications to mobile and other edge devices. Today, such services are typically provided through the cloud, running on CPUs or GPUs. However, cloud-based execution raises serious concerns about security, internet connectivity, and power consumption, so the need for on-device processing is growing steadily. The main objective of this mobility plan is to develop algorithms and techniques that enable the local and efficient execution of NNs on resource-limited hardware platforms. We first analyse the algorithms currently developed at Ericsson and make them more lightweight (without loss of accuracy), taking into account the limited hardware resources of edge devices. In the next step, we develop specialized hardware for the efficient execution of NN algorithms. The expected results are (1) generalized solutions for simplifying algorithms for hardware execution, and (2) fast, customized hardware architectures for the efficient execution of NN algorithms. We also believe this research will strengthen the connection between KTH and Ericsson and could open up future collaborations.
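The summary does not specify how the algorithms are made more lightweight; one common technique for fitting trained NNs onto resource-limited edge hardware is quantizing the weights to 8-bit integers. The sketch below is a minimal, hypothetical illustration (not the project's own method) of symmetric per-tensor int8 quantization, showing the 4x storage reduction and the small approximation error it introduces.

```python
# Hypothetical illustration: symmetric per-tensor int8 weight quantization,
# one common way to shrink a trained NN for execution on edge hardware.
import numpy as np

def quantize_int8(w: np.ndarray):
    """Map float32 weights to int8 plus a per-tensor scale factor."""
    scale = np.abs(w).max() / 127.0                     # largest magnitude maps to 127
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover approximate float32 weights to measure the quantization error."""
    return q.astype(np.float32) * scale

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    w = rng.normal(0.0, 0.05, size=(256, 256)).astype(np.float32)  # toy weight matrix
    q, scale = quantize_int8(w)
    w_hat = dequantize(q, scale)
    # int8 storage is 4x smaller than float32; the reconstruction error stays small.
    print("mean |error|:", np.abs(w - w_hat).mean())
    print("storage reduction: 4x (float32 -> int8, excluding the scale factor)")
```

In practice, frameworks such as TensorFlow Lite and PyTorch offer post-training quantization along these lines, and pruning or knowledge distillation are alternative ways to reduce model size without a significant loss of accuracy.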
