Latest News
Publisher: College of Information Science and Technology
Date: April 11, 2025
The Jinan University Integrated Media Center proudly announces that two notable research achievements from the College of Information Science and Technology have been accepted for presentation at the prestigious AAAI 2025 conference. The AAAI Conference on Artificial Intelligence, organized by the Association for the Advancement of Artificial Intelligence (AAAI), is regarded as a top-tier venue in artificial intelligence and machine learning, and is classified as a Class A conference in the latest recommendations of the China Computer Federation (CCF). AAAI 2025 received a total of 12,957 submissions, with a highly competitive acceptance rate of just 23.4%.
Overview of Selected Papers
1. Vision Transformers Outperform WideResNets in Robustness on Small-Scale Datasets
•Authors: Wu Juntao, Song Ziyu, Zhang Xiaoyu, Xie Shujun, Lin Longxin, Wang Ke (All from Jinan University)
•Corresponding Author: Wang Ke
•Abstract: Vision Transformers (ViTs) have historically been regarded as less effective than WideResNet models for achieving robust performance on small-scale datasets. Although WideResNet has long held state-of-the-art (SOTA) robust accuracy on datasets such as CIFAR-10 and CIFAR-100, this paper challenges that view and investigates whether ViTs can in fact exceed WideResNet in both robustness and accuracy. The answer is affirmative: by leveraging data generated with diffusion models for adversarial training, ViTs surpass WideResNet on both counts. Under the L-infinity norm threat model with epsilon = 8/255, the proposed method achieves robust accuracy of 74.97% on CIFAR-10 and 44.07% on CIFAR-100, gains of 3.9 and 1.4 percentage points over the previous SOTA models. Notably, the ViT-B/2 model used in this study has only about one-third as many parameters as WRN-70-16, yet delivers superior performance. This result opens new pathways for future research and suggests that ViTs, or other efficient architectures, may eventually supplant the long-established dominance of WideResNet.
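For readers unfamiliar with this setting, the sketch below shows one step of projected gradient descent (PGD) adversarial training under an L-infinity threat model with epsilon = 8/255, the threat model cited in the abstract. It is a minimal PyTorch illustration only: the attack steps, step sizes, and plain training loop are assumptions for exposition and do not reproduce the authors' actual recipe, which additionally mixes diffusion-generated images into the training batches and uses a ViT-B/2 backbone.

    # Minimal sketch of PGD adversarial training under an L-infinity threat model
    # with epsilon = 8/255. Model, optimizer, and hyperparameters are illustrative
    # assumptions, not the authors' training recipe.
    import torch
    import torch.nn.functional as F

    def pgd_attack(model, images, labels, eps=8/255, alpha=2/255, steps=10):
        """Craft L-infinity bounded adversarial examples with projected gradient descent."""
        adv = images.clone().detach()
        adv += torch.empty_like(adv).uniform_(-eps, eps)  # random start inside the eps-ball
        adv = adv.clamp(0, 1)
        for _ in range(steps):
            adv.requires_grad_(True)
            loss = F.cross_entropy(model(adv), labels)
            grad = torch.autograd.grad(loss, adv)[0]
            adv = adv.detach() + alpha * grad.sign()                      # ascend the loss
            adv = torch.min(torch.max(adv, images - eps), images + eps)  # project back to the eps-ball
            adv = adv.clamp(0, 1)
        return adv.detach()

    def adversarial_training_step(model, optimizer, images, labels):
        """One simplified training step on adversarial examples only."""
        model.eval()                    # freeze normalization statistics while attacking
        adv = pgd_attack(model, images, labels)
        model.train()
        optimizer.zero_grad()
        loss = F.cross_entropy(model(adv), labels)
        loss.backward()
        optimizer.step()
        return loss.item()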
2. A Syntactic Approach to Computing Complete and Correct Abstractions in the Situation Calculus
•Authors: Fang Liangda, Wang Xiaoman, Chen Chang (Jinan University), Luo Kailun (Dongguan University of Technology), Cui Zhenhe (Hunan University of Science and Technology), Guan Quanlong (Jinan University)
•Corresponding Author: Guan Quanlong
•Abstract: Abstraction is a crucial concept in artificial intelligence, yet no syntactic method has so far been available for computing a complete and correct abstract action theory from a given low-level basic action theory together with a refinement mapping. This paper addresses that gap. It proposes a variant of the situation calculus, termed the linear integer situation calculus, as the descriptive framework for the high-level basic action theory, and adapts the abstraction framework originally proposed by Banihashemi, De Giacomo, and Lespérance so that it connects the linear integer situation calculus with the extended situation calculus. The paper further identifies a class of Golog programs, called restricted actions, used to constrain the low-level Golog programs and to impose conditions on the refinement mapping. Finally, it introduces a syntactic operation that computes a complete and correct abstract action theory from the low-level basic action theory and the restricted refinement mapping.
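As a rough illustration of the kind of refinement mapping discussed here (the delivery-robot example below is ours, not taken from the paper), a mapping m rewrites each high-level action as a low-level Golog program and each high-level fluent as a low-level formula:

\[
\begin{aligned}
m(\mathit{deliver}(x)) &= \mathit{pickUp}(x)\,;\ \mathit{goTo}(\mathit{dest}(x))\,;\ \mathit{putDown}(x)\\
m(\mathit{Delivered}(x)) &= \mathit{At}(x,\mathit{dest}(x)) \land \lnot\,\mathit{Holding}(x)
\end{aligned}
\]

Roughly speaking, the abstraction is complete and correct when every high-level execution can be realized at the low level through m and every low-level execution is faithfully reflected at the high level; the paper's contribution is a syntactic operation that produces such an abstract action theory directly.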
These achievements reflect the continuous innovation and research excellence at Jinan University's College of Information Science and Technology, contributing significantly to the ongoing advancements in artificial intelligence.