Media Coverage

2021-04-22 11:03:12 | Patty Chen | Providing comprehensive AI modeling tools, NYCU makes self-driving car identification more accurate


Jiun-In Guo, Deputy Dean of the College of Electrical and Computer Engineering, National Yang Ming Chiao Tung University and Director of the e-AI RD Center.

 
The application of embedded AI technology is expanding steadily, and deep learning is currently one of its most widely used approaches. Deep learning requires a complete and accurate training model before the inference side can succeed. In this forum, Jiun-In Guo, Deputy Dean of the College of Electrical and Computer Engineering at National Yang Ming Chiao Tung University and Director of the e-AI RD Center, delivered a speech on the topic of "Construction and Application of Embedded AI Deep Learning Operation Models".
 
The Intelligent Vision System Laboratory (NYCU iVS Lab) of National Yang Ming Chiao Tung University focuses on a range of smart vision research, and self-driving cars are part of that work. Jiun-In Guo said that self-driving cars have become a common trend in the global automotive and technology industries, and the lab's research in this field covers the various functions and related technologies that ADAS requires. On the sensor side, its research includes LiDAR in addition to visual sensors. He pointed out that image recognition is currently the mainstream direction of AI development; in the automotive field, AI can also be applied to LiDAR for object detection and analysis.
 
Regarding recommendations for adopting AI, he said that developers must first master the core technologies of image processing, software, and hardware before building AI models. Here Jiun-In Guo emphasized that fixed-point rather than floating-point operations must be used in modeling, so that the model can meet the constraints of self-driving car systems. To illustrate current AI design trends and challenges, he cited a recent electric-vehicle accident. A few days earlier, on a highway in Taiwan, a driver had taken his hands off the wheel of an electric car, which then drove straight into an overturned container truck lying across the road ahead. Under normal conditions, that brand's electric cars detect a vehicle ahead when it gets too close and brake automatically; in this incident, however, the AI could not recognize the stationary, overturned container truck as a vehicle, and its white body further confused the visual judgment, ultimately causing the crash.
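The fixed-point requirement mentioned above can be illustrated with a minimal sketch of post-training quantization to a signed fixed-point format. The Q0.7 layout, bit widths, and function names here are illustrative assumptions, not the lab's actual toolchain:

```python
import numpy as np

def quantize_fixed_point(weights, frac_bits=7, total_bits=8):
    """Quantize float weights to signed fixed-point with `frac_bits`
    fractional bits (e.g. Q0.7 when total_bits=8)."""
    scale = 2 ** frac_bits
    lo = -(2 ** (total_bits - 1))          # most negative representable code
    hi = 2 ** (total_bits - 1) - 1         # most positive representable code
    return np.clip(np.round(weights * scale), lo, hi).astype(np.int32)

def dequantize(q, frac_bits=7):
    """Map fixed-point codes back to real values for accuracy checks."""
    return q.astype(np.float64) / (2 ** frac_bits)

w = np.array([0.5, -0.25, 0.123, 0.9])
q = quantize_fixed_point(w)                # integer codes usable on int-only hardware
w_hat = dequantize(q)                      # reconstruction to measure rounding error
```

Running integer codes like `q` instead of floats is what lets the model fit the compute and power budget of an embedded automotive platform, at the cost of the small rounding error visible in `w_hat`.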
 
This incident reveals several problems with AI in self-driving cars: the camera may fail to detect vehicles in the lane; fog and strong light can interfere with the system's recognition of white vehicles; the radar may ignore stationary vehicles; and the way camera and radar data are fused needs improvement. NYCU iVS Lab is now committed to solving these problems.
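As a rough illustration of the fusion problem, the sketch below associates radar returns with camera detections by azimuth and keeps unmatched static returns as "unconfirmed" rather than discarding them, which is one simple way to avoid the ignored-static-vehicle failure described above. The data format and tolerance are hypothetical; this is not the lab's method:

```python
def fuse_detections(camera_objs, radar_objs, angle_tol_deg=3.0):
    """Late-fusion sketch: match each radar return to a camera detection
    by azimuth. Unmatched radar returns are kept, not dropped, because
    they may be static obstacles the camera failed to classify."""
    confirmed, unconfirmed = [], []
    for r in radar_objs:
        match = next(
            (c for c in camera_objs
             if abs(c["azimuth_deg"] - r["azimuth_deg"]) <= angle_tol_deg),
            None,
        )
        if match:
            confirmed.append({**r, "label": match["label"]})
        else:
            unconfirmed.append(r)  # candidate static obstacle: keep for further checks
    return confirmed, unconfirmed
```

A downstream planner could then treat "unconfirmed" returns conservatively (e.g., slow down) instead of silently ignoring them.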
 
Jiun-In Guo then discussed the core technologies and applications of embedded AI inference. He pointed out that standard embedded deep-learning development requires collecting and labeling data and then constructing a training model. NYCU iVS Lab has launched platforms for each of these stages, giving AI developers quick and easy tools at every step and helping the industry shorten development timelines.
 
Jiun-In Guo said that the tools launched by NYCU iVS Lab have been field-tested and are highly practical. Take data labeling as an example: with the lab's ezLabel tool, an annotator only needs to mark an object in two frames, one before and one after, and the tool labels the object across the entire sequence, greatly reducing manual labeling time. ezLabel is an open web platform available to deep-learning experts and the general public worldwide; ezLabel 2.3 has so far accumulated more than 610 users.
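The two-keyframe idea can be sketched as propagating bounding boxes between annotated frames. Linear interpolation is an assumption made here for illustration; the article does not describe ezLabel's actual propagation algorithm:

```python
def interpolate_boxes(box_a, box_b, n_frames):
    """Given a box (x, y, w, h) annotated at two keyframes, generate
    boxes for the n intermediate frames by linear interpolation.
    A sketch of keyframe-based labeling, not ezLabel's real method."""
    boxes = []
    for i in range(1, n_frames + 1):
        t = i / (n_frames + 1)                 # fraction of the way from A to B
        boxes.append(tuple(a + t * (b - a) for a, b in zip(box_a, box_b)))
    return boxes
```

With 1 annotated pair covering, say, 30 intermediate frames, the annotator draws 2 boxes instead of 32, which is where the labeling-time saving comes from.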
 
In the model-construction part, NYCU iVS Lab built a lightweight SSD model and MTSAN (Multi-Task Semantic Attention Network). The lightweight SSD model addresses the long-standing pain point that such models struggled to detect thin, elongated objects because of insufficient anchor density. After incorporating CSPNet, the lab not only improved computation speed and accuracy but also cut the computation and parameter counts roughly in half. MTSAN, for its part, combines object detection with pixel-level semantic segmentation of the scene to enhance object features; Jiun-In Guo noted that this step alone raises accuracy (mAP) by 4.5%.
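The anchor-density pain point can be illustrated with a sketch of SSD-style anchor generation: adding extreme aspect ratios (here 5:1 and 1:5) gives anchors that overlap thin, elongated objects such as poles or lying trailers well enough to be matched during training. The strides, scales, and ratios below are illustrative, not the lab's configuration:

```python
def make_anchors(fmap_w, fmap_h, stride, scales, ratios):
    """Generate (cx, cy, w, h) anchors for one SSD feature map.
    Each grid cell gets len(scales) * len(ratios) anchors; area is
    preserved per scale while the ratio stretches width vs. height."""
    anchors = []
    for gy in range(fmap_h):
        for gx in range(fmap_w):
            cx, cy = (gx + 0.5) * stride, (gy + 0.5) * stride  # cell center in pixels
            for s in scales:
                for r in ratios:
                    w, h = s * (r ** 0.5), s / (r ** 0.5)
                    anchors.append((cx, cy, w, h))
    return anchors

# ratios 5.0 and 0.2 add wide/tall anchors for elongated objects
dense = make_anchors(2, 2, 16, scales=[32], ratios=[1.0, 5.0, 0.2])
```

Without the extreme ratios, a thin object rarely reaches the IoU threshold against any square-ish anchor, so it is never assigned a positive training sample.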
 
After MTSAN is introduced into a self-driving car, it can be integrated with a forward collision warning system (FCWS) or a lane departure warning system (LDWS) to determine the lane accurately. On mountain roads it can identify curved lane lines, and with 2D and 3D convolution-based behavior analysis it can also predict the direction of vehicles approaching from behind and the likelihood that they will overtake.
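A simplified FCWS decision of the kind referred to above can be sketched as a time-to-collision check. The 2.5 s threshold and the raw speed/distance inputs are illustrative assumptions; production systems work on filtered sensor tracks:

```python
def time_to_collision(distance_m, ego_speed_mps, lead_speed_mps):
    """Return the time in seconds until the ego vehicle reaches the lead
    vehicle at current speeds, or None when the gap is not closing."""
    closing_speed = ego_speed_mps - lead_speed_mps
    if closing_speed <= 0:
        return None                      # same speed or pulling away: no collision course
    return distance_m / closing_speed

def fcws_alert(distance_m, ego_speed_mps, lead_speed_mps, threshold_s=2.5):
    """Raise a forward-collision warning when TTC drops below the threshold."""
    ttc = time_to_collision(distance_m, ego_speed_mps, lead_speed_mps)
    return ttc is not None and ttc < threshold_s
```

Note that a stationary obstacle (lead speed 0) still yields a finite TTC here, which is exactly the case the overturned-truck accident shows must not be filtered out upstream.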
 
At the end of the speech, Jiun-In Guo summarized by citing the United States' blueprint for AI development over the next 20 years. He said that future AI must be integrated with context and, at the same time, build an open knowledge field that gathers everyone's contributions, so that AI can realize human-like intelligence and responses and carry out meaningful interactions. In addition, AI must be able to learn on its own and integrate all kinds of information from its surroundings, cultivating the ability to handle difficult challenges.
 
As for AI applications in self-driving cars, he pointed to the need to strengthen R&D on various perception technologies so that vehicles can accurately recognize all kinds of road objects and their movement intentions; this will be the focus of future work across industry, education, and research. Through such R&D, the probability of car accidents can be significantly reduced, building a safe and reliable traffic environment.
 

Source: https://www.digitimes.com.tw/iot/article.asp?cat=130&cat1=40&id=0000607775_SPR8I9Y662CLUO66K07LI