
"The Second International Workshop on the Intelligent Information Processing for Smart City (IIP4SC)": Workshop Announcement

Venue: Conference Room 220, HouDe Building

Date: January 13, 2020

Conference Name: The Second International Workshop on the Intelligent Information Processing for Smart City (IIP4SC)

 

Schedule: January 13, 2020 (Monday)

Morning Session
Chairs: Josef Kittler, Honghui Fan
Room: HouDe Building 220, Jiangsu University of Technology

8:30-8:40    Opening Remarks
8:40-9:20    Talk 1: Client-Specific Anomaly Detection, Part I, Josef Kittler
9:20-10:00   Talk 2: Deep Metric Learning for Visual Content Understanding, Part I, Jiwen Lu
10:00-10:40  Talk 3: Weak Person Re-identification, Part I, Wei-Shi Zheng
10:40-11:20  Talk 4: Deep Hashing for Large-scale Image and Video Retrieval, Part I, Ruiping Wang
11:20-11:50  Talk 5: Robust Facial Analysis in The Wild, Zhen-Hua Feng
11:50-12:20  Talk 6: Feature Selection for Advanced Visual Tracking, Tianyang Xu

Afternoon Session
Chairs: Jiwen Lu, Jun Sun
Room: IoT School D328, Jiangnan University

15:00-15:10  Opening Ceremony, Chair: Xiao-Jun Wu
15:10-15:50  Talk 1: Client-Specific Anomaly Detection, Part II, Josef Kittler
15:50-16:30  Talk 2: Deep Metric Learning for Visual Content Understanding, Part II, Jiwen Lu
16:30-17:10  Talk 3: Weak Person Re-identification, Part II, Wei-Shi Zheng
17:10-17:50  Talk 4: Deep Hashing for Large-scale Image and Video Retrieval, Part II, Ruiping Wang
17:50-18:20  Talk 5: PANNs: Large-Scale Pretrained Audio Neural Networks for Audio Pattern Recognition, Qiuqiang Kong

Organizing Committee of IIP4SC:

Advisory Co-Chairs: Josef Kittler

General Co-Chairs: Xiao-Jun Wu, Jiwen Lu, Feiyue Ye

Program Co-Chairs: Honghui Fan, Jun Sun                                

 

Title: Client-Specific Anomaly Detection

Abstract: In many applications, the core decision-making task is anomaly detection. It can be formulated in a number of ways depending on the amount and type of data available for designing the anomaly detection system. In some applications only normal data is available for training; in such scenarios, anomaly detection reduces to one-class classification. We review various approaches to one-class anomaly detection and discuss their application to face spoofing attack detection. We then focus on client-specific designs and demonstrate their merit. Through extensive experiments with different one-class systems, it will be shown that using client-specific information in a one-class anomaly detection formulation (both in model construction and in decision boundary selection) improves performance significantly. We also show that while two-class solutions perform better than anomaly-based approaches in known attack scenarios, the converse is true in the unseen-attack case.
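The one-class formulation above can be illustrated with a toy sketch. This is not the speaker's method: it simply fits a Gaussian density to normal-only training data and scores test samples by Mahalanobis distance, so that samples far from the normal cluster (e.g. spoofing attacks) receive high anomaly scores. All data and function names are illustrative assumptions.

```python
import numpy as np

def fit_one_class(normal_data):
    """Fit a simple Gaussian model using only normal training samples."""
    mu = normal_data.mean(axis=0)
    cov = np.cov(normal_data, rowvar=False) + 1e-6 * np.eye(normal_data.shape[1])
    return mu, np.linalg.inv(cov)

def anomaly_score(x, mu, inv_cov):
    """Squared Mahalanobis distance: large score = likely anomaly."""
    d = x - mu
    return float(d @ inv_cov @ d)

rng = np.random.default_rng(0)
normal = rng.normal(0.0, 1.0, size=(500, 2))   # "genuine" training samples
mu, inv_cov = fit_one_class(normal)

genuine = np.array([0.1, -0.2])                # near the normal cluster
spoof = np.array([6.0, 6.0])                   # far from the normal cluster
print(anomaly_score(genuine, mu, inv_cov) < anomaly_score(spoof, mu, inv_cov))  # True
```

A client-specific variant, as discussed in the talk, would fit one such model per enrolled client instead of a single global model.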

Bio: Josef Kittler received the B.A., Ph.D., and D.Sc. degrees from the University of Cambridge, in 1971, 1974, and 1991, respectively. He is a Distinguished Professor of Machine Intelligence at the Centre for Vision, Speech and Signal Processing, University of Surrey, Guildford, U.K. He conducts research in biometrics, video and image database retrieval, medical image analysis, and cognitive vision. He published the textbook Pattern Recognition: A Statistical Approach and over 700 scientific papers. His publications have been cited more than 60,000 times (Google Scholar). He is a series editor of Springer Lecture Notes in Computer Science. He currently serves on the Editorial Boards of Pattern Recognition Letters, Pattern Recognition and Artificial Intelligence, and Pattern Analysis and Applications. He also served as a member of the Editorial Board of IEEE Transactions on Pattern Analysis and Machine Intelligence during 1982-1985. He served on the Governing Board of the International Association for Pattern Recognition (IAPR) as one of the two British representatives during 1982-2005, and as President of the IAPR during 1994-1996.

Title: Deep Metric Learning for Visual Content Understanding

Abstract: In this talk, I will overview the trend of deep metric learning techniques and discuss how they are employed to boost the performance of various visual content understanding tasks. Specifically, I will introduce some of our proposed deep metric learning methods including discriminative deep metric learning, deep localized metric learning, deep coupled metric learning, multi-manifold deep metric learning, deep transfer metric learning, deep adversarial metric learning, and multi-view deep metric learning, which are developed for different application-specific visual content understanding tasks such as face recognition, person re-identification, object recognition, action recognition, visual tracking, image set classification, and visual search. Lastly, I will discuss some open problems in deep metric learning to show how to further develop more advanced deep metric learning methods in the future.
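A minimal sketch of the triplet criterion underlying many of the deep metric learning methods the talk surveys: an anchor is pulled toward a positive (same identity) and pushed away from a negative (different identity) by at least a margin. The vectors and margin here are made-up toy values, not from any of the listed methods.

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=1.0):
    """Hinge loss: zero once the negative is a margin farther away than the positive."""
    d_pos = np.sum((anchor - positive) ** 2)   # squared distance to same-class sample
    d_neg = np.sum((anchor - negative) ** 2)   # squared distance to other-class sample
    return max(0.0, d_pos - d_neg + margin)

a = np.array([0.0, 0.0])
p = np.array([0.1, 0.0])   # same identity: already close
n = np.array([3.0, 0.0])   # different identity: already far
print(triplet_loss(a, p, n))  # 0.0 — the margin is already satisfied
print(triplet_loss(a, n, p) > 0.0)  # True — a violated triplet incurs loss
```

In a deep metric learning method, the embeddings come from a trained network and this loss drives the network's weight updates.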

Bio: Jiwen Lu is currently an Associate Professor with the Department of Automation, Tsinghua University, China. His current research interests include computer vision, pattern recognition, machine learning, and intelligent robotics. He has authored/co-authored over 200 scientific papers in these areas, of which 70 are PAMI/IJCV/CVPR/ICCV/ECCV papers. He was a recipient of the National 1000 Young Talents Program of China in 2015, the National Science Fund of China Award for Excellent Young Scholars in 2018, the Best Platinum Paper Award of IEEE ICME'2018, and the Multimedia Rising Star Award of IEEE ICME'2019. He serves as the Co-Editor-in-Chief for Pattern Recognition Letters, and an Associate Editor for IEEE Transactions on Image Processing, IEEE Transactions on Circuits and Systems for Video Technology, IEEE Transactions on Biometrics, Behavior, and Identity Science, and Pattern Recognition. He was/is the Program Co-Chair of IEEE ICME'2020 and IEEE AVSS'2020, and an Area Chair for CVPR'2020, ICME'2015/2017-2019, ICIP'2017-2019, ICPR 2018, and ICB 2015-2016.


Title: Weak Person Re-identification

Abstract: Person re-identification (re-id) is an important research topic in visual surveillance. In practice, however, re-id still suffers from many unresolved challenges, such as illumination variation and clothing change. Moreover, the performance of most current re-id algorithms depends heavily on massive annotated data, and how to model re-id with large amounts of weakly annotated data, or with no annotation at all, remains an urgent open problem. In this talk we will introduce research on weak person re-identification, including weakly supervised solutions for re-id with weak labels and new models for re-id with weak visual cues.
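One common way to exploit such weak, video-level labels is multiple-instance max pooling: a weak label only asserts that an identity appears in *some* frame of a video, so the video (a "bag" of frames) is scored by its best-matching frame. This is a hypothetical sketch of that idea, not the speaker's model; the feature vectors are toy values.

```python
import numpy as np

def bag_score(frame_feats, id_template):
    """Score a video (bag of frames) against an identity template.

    frame_feats: (num_frames, dim) array, one feature vector per frame.
    The weak label only says the identity appears somewhere, so we
    score the bag by its best-matching frame (multiple-instance max pooling).
    """
    sims = frame_feats @ id_template   # one similarity per frame
    return float(sims.max())

template = np.array([1.0, 0.0])                # the queried identity
video_with_person = np.array([[0.0, 1.0],      # frames of other people...
                              [0.9, 0.1]])     # ...plus one matching frame
video_without = np.array([[0.0, 1.0],
                          [0.1, 0.9]])
print(bag_score(video_with_person, template) > bag_score(video_without, template))  # True
```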

Bio: Wei-Shi Zheng is a full Professor at Sun Yat-sen University. He received the PhD degree in Applied Mathematics from Sun Yat-sen University in 2008. He has published more than 120 papers, including more than 90 publications in leading journals (among them 12 TPAMI/IJCV papers) and top conferences (ICCV, CVPR, IJCAI, AAAI). He has co-organised four tutorials, at ACCV 2012, ICPR 2012, ICCV 2013, and CVPR 2015. His research interests include person/object association and activity understanding in visual surveillance, and the related large-scale machine learning algorithms. In particular, Dr. Zheng has been actively researching person re-identification over the last five years. He reviews for many journals and conferences and was recognised for outstanding reviewing at recent top conferences (ECCV 2016 & CVPR 2017). He has participated in the Microsoft Research Asia Young Faculty Visiting Programme, and has served as a senior PC member/area chair/associate editor for AVSS 2012, ICPR 2018, IJCAI 2019/2020, AAAI 2020, and BMVC 2018/2019. He is an IEEE MSA TC member and an associate editor of Pattern Recognition. He is a recipient of the Excellent Young Scientists Fund of the National Natural Science Foundation of China and of a Royal Society-Newton Advanced Fellowship of the United Kingdom.


Title: Deep Hashing for Large-scale Image and Video Retrieval

Abstract: Recent years have witnessed the explosive growth of image and video data on the Internet, posing great challenges to retrieving images/videos relevant to a given query. Meanwhile, retrieval tasks have also become more diverse, such as 1) retrieving images from the same category, 2) retrieving images with specified attributes, and 3) combinations of the above. To deal with such tasks, hashing is often adopted for its high efficiency in both time and storage. In this talk, I will introduce recent progress in our group on this topic. First, to tackle traditional category retrieval, we propose a novel Deep Supervised Hashing (DSH) method that takes pairs of images as training inputs and learns the desired compact binary code for each image in an end-to-end manner. Then, to address multiple retrieval tasks, we propose a unified framework named Dual Purpose Hashing (DPH) to jointly preserve category and attribute similarities in a multi-task learning fashion. Furthermore, we extend the idea of DSH to Deep Heterogeneous Hashing (DHH), which learns unified binary codes for both images and videos in a single framework to tackle cross image-video retrieval.
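The retrieval step that makes hashing so efficient can be sketched in a few lines: real-valued embeddings are quantised to binary codes, and search ranks database items by Hamming distance. This is a generic illustration of the binarise-then-search idea, not the DSH/DPH/DHH code; in a real system the embeddings come from a trained network, whereas here they are made-up numbers.

```python
import numpy as np

def to_binary_code(embedding):
    """Quantise a real-valued embedding to a compact binary code (sign thresholding)."""
    return (embedding > 0).astype(np.uint8)

def hamming_distance(a, b):
    """Number of differing bits — cheap to compute and to store."""
    return int(np.count_nonzero(a != b))

query = to_binary_code(np.array([0.9, -0.3, 0.4, -0.8]))
database = {
    "img_a": to_binary_code(np.array([0.8, -0.1, 0.6, -0.9])),   # similar embedding
    "img_b": to_binary_code(np.array([-0.7, 0.5, -0.2, 0.3])),   # dissimilar embedding
}
nearest = min(database, key=lambda k: hamming_distance(query, database[k]))
print(nearest)  # img_a
```

Deep hashing methods such as DSH train the network so that this quantisation loses as little of the pairwise similarity structure as possible.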

Bio: Ruiping Wang is a Professor at the Institute of Computing Technology (ICT), Chinese Academy of Sciences (CAS). He has published more than 70 papers in peer-reviewed journals and conferences, including IEEE TPAMI, IJCV, CVPR, ICCV, ECCV, and ICML. Dr. Wang serves as an Associate Editor for Pattern Recognition (Elsevier), Neurocomputing (Elsevier), The Visual Computer (Springer), and IEEE Biometrics Compendium, as an Area Chair for IEEE WACV 2018-2020 and ICME 2019/2020, and as Publication Chair for IEEE FG 2018 and IJCB 2020. He has co-organized tutorials at CVPR 2015/ECCV 2016/ICCV 2019 and workshops at ACCV 2016/CVPR 2019. His current research interests include video-based face recognition/retrieval, large-scale image retrieval, visual scene understanding, distance metric learning, and manifold learning. He is a recipient of the Excellent Young Scientists Fund of the National Natural Science Foundation of China.



Title: Robust Facial Analysis in The Wild

Abstract: Image- and video-based facial analysis is one of the most interesting research topics in computer vision and pattern recognition. It plays very important roles in CCTV surveillance, border control, security systems, human-computer interaction, etc. However, robust facial analysis in the wild is a very challenging task in the presence of large appearance variations, e.g., in pose, expression, illumination, makeup, image blur, and partial occlusion. In this talk, I will introduce some recent advances in unconstrained facial analysis applications, including 2D facial landmark localisation, 3D face reconstruction, and face recognition.

Bio: Zhen-Hua Feng is a senior research fellow at the Centre for Vision, Speech and Signal Processing (CVSSP), University of Surrey, United Kingdom. He received the Ph.D. degree in machine intelligence from the University of Surrey in 2016. His research interests include pattern recognition, machine learning and computer vision. He has published more than 40 scientific papers in top-ranking conferences and journals, including IJCV, CVPR, ICCV, ECCV, IEEE and ACM transactions, etc. He has received the 2017 European Biometrics Industry Award from the European Association for Biometrics (EAB), the 2017 Departmental Prize for Excellence in Research from the EEE department of the University of Surrey and the AMDO 2018 Best Paper Award for Commercial Applications.



Title: Feature Selection for Advanced Visual Tracking

Abstract: Visual object tracking is one of the most popular topics in computer vision and machine intelligence, motivated by a wide spectrum of practical applications in robotics, medical image analysis, intelligent transportation, and human-computer interaction. In this talk, we will introduce some advanced tracking methods based on feature selection techniques, discussing accuracy, speed, and robustness in the design of more practical trackers.
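A crude stand-in for the kind of feature selection the talk covers (a hypothetical sketch, not the speaker's tracker): rank feature channels by how much of their response energy falls on the target region, and keep only the most target-specific channels for matching. The feature maps and mask here are toy values.

```python
import numpy as np

def select_channels(feature_maps, target_mask, k=2):
    """Keep the k channels whose energy concentrates on the target region.

    feature_maps: (C, N) array, one flattened response map per channel.
    target_mask:  boolean (N,) mask marking the target's pixels.
    """
    on_target = np.abs(feature_maps[:, target_mask]).sum(axis=1)
    total = np.abs(feature_maps).sum(axis=1) + 1e-8    # avoid division by zero
    ratio = on_target / total                          # fraction of energy on target
    return np.argsort(ratio)[::-1][:k]                 # indices of the top-k channels

mask = np.array([True, True, False, False])
feats = np.array([[5.0, 4.0, 0.1, 0.1],   # channel 0: fires on the target
                  [0.1, 0.1, 5.0, 4.0],   # channel 1: fires on background
                  [2.0, 2.0, 0.5, 0.5]])  # channel 2: mostly on the target
print(select_channels(feats, mask, k=2))  # [0 2]
```

Discarding background-dominated channels like channel 1 is one way such selection trades a little accuracy head-room for speed and robustness.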

Bio: Tianyang Xu received the B.Sc. degree in electronic science and engineering from Nanjing University, Nanjing, China, in 2011, and the PhD degree from the School of Internet of Things Engineering, Jiangnan University, Wuxi, China, in 2019. He is currently a research fellow at the Centre for Vision, Speech and Signal Processing (CVSSP), University of Surrey, Guildford, United Kingdom. His research interests include visual tracking and deep learning. He has published several scientific papers in venues including the International Conference on Computer Vision, IEEE Transactions on Image Processing, IEEE Transactions on Circuits and Systems for Video Technology, and Pattern Recognition. He achieved the top tracking performance on the VOT2018 public dataset.



Title: PANNs: Large-Scale Pretrained Audio Neural Networks for Audio Pattern Recognition


Abstract: Audio pattern recognition is an essential task for sound understanding and an important research topic in machine learning, covering several tasks such as audio tagging, acoustic scene classification, and sound event detection. Recently, neural networks have been applied to solve audio pattern recognition problems. However, previous systems focus on small datasets, which limits the performance of audio pattern recognition systems. Recently, in computer vision and natural language processing, systems pretrained on large datasets have generalized well to several tasks. However, there is limited research on pretraining neural networks on large datasets for audio pattern recognition. We propose large-scale pretrained audio neural networks (PANNs) trained on AudioSet. We propose to use the Wavegram, a feature learned from the waveform, together with the mel spectrogram as input. We investigate the performance and complexity of a variety of convolutional neural networks. Our proposed AudioSet tagging system achieves a state-of-the-art mean average precision (mAP) of 0.439, outperforming the best previous system at 0.392. We transferred a PANN to six audio pattern recognition tasks and achieved state-of-the-art performance on many of them. Source code and pretrained models have been released.
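The time-frequency front end such systems consume can be sketched in a few lines: frame the waveform, window each frame, and take magnitude FFTs. This is a plain spectrogram without the mel filterbank or the learned Wavegram the abstract describes, and is not the PANNs code; the frame sizes and test tone are toy assumptions.

```python
import numpy as np

def log_spectrogram(wave, frame=256, hop=128):
    """Framed, Hann-windowed magnitude FFT: a basic time-frequency input
    (cf. the mel spectrogram / Wavegram used by audio taggers)."""
    frames = [wave[i:i + frame] * np.hanning(frame)
              for i in range(0, len(wave) - frame + 1, hop)]
    mag = np.abs(np.fft.rfft(np.stack(frames), axis=1))
    return np.log(mag + 1e-10)   # log compression, as is standard for audio input

sr = 16000
t = np.arange(sr) / sr
wave = np.sin(2 * np.pi * 440 * t)          # one second of a 440 Hz tone
spec = log_spectrogram(wave)                # shape: (num_frames, frame // 2 + 1)
peak_bin = int(spec.mean(axis=0).argmax())
print(peak_bin * sr / 256)                  # nearest FFT bin to 440 Hz
```

A convolutional network like a PANN then treats `spec` as an image-like input for tagging or event detection.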

Bio: Dr Qiuqiang Kong is a research scientist at the ByteDance AI Lab. He received the B.Sc. and M.Eng. degrees from the South China University of Technology in 2012 and 2015, respectively, and the Ph.D. degree from the Centre for Vision, Speech and Signal Processing at the University of Surrey, United Kingdom, in 2019. His research interests include sound understanding, audio signal processing, and music information retrieval. He has published more than 20 papers in top venues of his research fields, such as ICASSP and IEEE/ACM Transactions on Audio, Speech, and Language Processing. He won first prize in the large-scale weakly supervised sound event detection for smart cars task of the Detection and Classification of Acoustic Scenes and Events (DCASE) challenge.