Cutting-edge research will be presented at CVPR 2019
We are thrilled to announce that our new paper on compact networks for image embedding has been accepted at CVPR 2019, the world’s premier computer vision conference.
The paper “Learning Metrics from Teachers: Compact Networks for Image Embedding” will be presented in Long Beach, California, USA, in June 2019. Congratulations to our amazing PhD candidate Vacit Oguz Yazici and to Arnau Ramisa, VP Innovation at Wide Eyes, as well as to our collaborators at CVC: Lu Yu, Xialei Liu, Joost van de Weijer, and Yongmei Cheng.
As usual, the organizers have initially released only the accepted papers’ IDs (ours is 1651); the full details will come later. Well, here’s a spoiler alert…
A popular technique for training efficient networks is knowledge distillation. With this technique, a slow but accurate “teacher” neural network assists in training a smaller “student” neural network, which typically yields much better accuracy for the student than training it alone. However, knowledge distillation is defined only for classification networks, which severely limits its applicability in tasks like similarity search. In this paper we propose two new loss functions that extend knowledge distillation to metric learning networks, and we show the benefits of using them. By exploiting knowledge from more complex teacher networks, we obtain a significantly better recall score in all evaluated setups, compared to the standard training process for the student network.
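To give a flavor of the idea, here is a minimal, hypothetical sketch of distance-based distillation for metric learning. This is not the paper’s exact formulation (the loss names, shapes, and details here are illustrative assumptions): the student is simply encouraged to reproduce the teacher’s pairwise embedding distances over a batch.

```python
import numpy as np

def pairwise_distances(X):
    """Euclidean distance matrix between rows of X (batch of embeddings)."""
    sq = np.sum(X ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    return np.sqrt(np.maximum(d2, 0.0))  # clamp small negatives from rounding

def distance_distillation_loss(student_emb, teacher_emb):
    """Illustrative (hypothetical) distillation loss for metric learning:
    penalize the squared difference between the student's and the
    teacher's pairwise distance matrices for the same batch of images.
    Note the embeddings may have different dimensionalities; only the
    distance structure is matched."""
    d_student = pairwise_distances(student_emb)
    d_teacher = pairwise_distances(teacher_emb)
    return np.mean((d_student - d_teacher) ** 2)

# Toy usage: a 3-image batch, teacher in 4-D, student in 2-D.
teacher = np.array([[0.0, 0.0, 0.0, 0.0],
                    [1.0, 0.0, 0.0, 0.0],
                    [0.0, 1.0, 0.0, 0.0]])
student = np.array([[0.0, 0.0],
                    [1.0, 0.0],
                    [0.0, 1.0]])
loss = distance_distillation_loss(student, teacher)
```

In this toy case the student already reproduces the teacher’s distance structure, so the loss is zero; during training this term would be minimized alongside the usual metric learning objective.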
CVPR brings together great people and their great research; there is always something new to see and learn. Of course, there are always papers that publish groundbreaking results and bring valuable new knowledge into the field. We are convinced this paper will be one of those at CVPR 2019.