The industry also continues to examine how engineers use neural networks, with particular attention to the problem of network simplification. Neural networks were initially used as classifiers; in an automotive application, for example, they can distinguish between a tree and a traffic sign in real time. Now, neural networks are being extended to full scene analysis. In our automotive example, full scene analysis involves more sophisticated recognition, such as the ability to first distinguish among a generic sign, a stop sign, and a person wearing a shirt with the number 50 on it, and then to respond accordingly based on that assessment.
There are also ways to exploit frame sequences to address larger issues such as the context of a scene. For example, from a single frame it is hard to determine whether a pedestrian on the sidewalk is about to cross the street; a sequence of frames provides more clarity. Additionally, sensor fusion opens up new possibilities beyond the images captured by traditional RGB cameras. By fusing additional sensor data into the neural network's pipeline, the network can be trained to provide even richer insights.
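One common form of the fusion described above is "early fusion," where data from a second sensor is stacked with the camera image as extra input channels before the network sees it. The sketch below illustrates the idea with NumPy; the depth-sensor pairing, shapes, and function name are assumptions for illustration, not a specific product's API.

```python
import numpy as np

def fuse_rgb_depth(rgb, depth):
    """Concatenate an HxWx3 RGB frame with an HxW depth map along the
    channel axis, yielding an HxWx4 tensor a network can be trained on.
    Assumes both sensors are registered to the same pixel grid."""
    if rgb.shape[:2] != depth.shape[:2]:
        raise ValueError("sensor frames must share the same spatial grid")
    return np.concatenate([rgb, depth[..., np.newaxis]], axis=-1)

# Illustrative stand-ins for one captured frame from each sensor.
rgb = np.random.rand(480, 640, 3).astype(np.float32)   # camera frame
depth = np.random.rand(480, 640).astype(np.float32)    # depth-sensor frame

fused = fuse_rgb_depth(rgb, depth)
print(fused.shape)  # (480, 640, 4)
```

The same stacking trick applies to frame sequences: several consecutive frames can be concatenated along the channel axis so the network sees motion context rather than a single snapshot.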
Improving Efficiency of Today’s Neural Networks
There’s also room to vastly improve the efficiency of today’s neural networks. Today’s networks demand too much memory bandwidth and compute, especially for multiply-accumulate operations. Accuracy can still be enhanced and fine-tuned. Reducing redundancy in the coefficients, and reducing the effective depth of the network, can improve both the training efficiency and the runtime performance of a neural network.
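One standard way to reduce coefficient redundancy is magnitude pruning: zero out the smallest weights so that fewer multiply-accumulate operations (and less bandwidth) are needed. The sketch below is a minimal NumPy illustration of that idea; the 50% sparsity target and the threshold policy are assumptions for the example, not a prescription from the text.

```python
import numpy as np

def prune_weights(weights, sparsity=0.5):
    """Zero out the `sparsity` fraction of weights with the smallest
    magnitudes. Returns the pruned array and the achieved sparsity."""
    flat = np.abs(weights).ravel()
    k = int(flat.size * sparsity)
    if k == 0:
        return weights.copy(), 0.0
    # k-th smallest magnitude becomes the pruning threshold.
    threshold = np.partition(flat, k - 1)[k - 1]
    pruned = np.where(np.abs(weights) <= threshold, 0.0, weights)
    achieved = float(np.mean(pruned == 0.0))
    return pruned, achieved

# Illustrative weight matrix for one layer.
w = np.random.randn(64, 64)
pruned, achieved = prune_weights(w, sparsity=0.5)
```

In practice, pruning is followed by fine-tuning to recover accuracy, and sparse storage formats are needed before the zeroed weights actually save bandwidth on the target hardware.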
Running CNN algorithms on a DSP with clusters of cores, rather than on a GPU, can also yield greater efficiency, along with performance scaling. Indeed, a bandwidth-optimized vision cluster with configurable processors is