Eagle W H Liang

PhD Student

Email: w.liang@adelaide.edu.au

News

    2024-08-05: FinSen dataset released - see [Github]

Publications [Google Scholar]

  • Enhancing Financial Market Predictions: Causality-Driven Feature Selection - Wenhao Liang, Zhengyang Li, Weitong Chen. Presented at ADMA 2024. This paper explores the integration of financial news and stock market data across 197 countries with LSTM models to improve the accuracy of market predictions.
  • Correlation Analysis of Adversarial Attack in Time Series Classification - Zhengyang Li, Wenhao Liang, Chang Dong, Weitong Chen, Dong Huang. Presented at ADMA 2024. This study investigates the vulnerability of time series classification models to adversarial attacks, with a focus on how these models process local versus global information under such conditions.

Research Group

  • DT Lab - Data Transpose Lab at The University of Adelaide focuses on AI, big data, and cloud computing research aimed at transforming industries through cutting-edge technology.
  • Lab Leaders - Dr. Wei Zhang is a Senior Lecturer and Associate Head of People and Culture at the School of Computer and Mathematical Sciences, and a researcher at the Australian Institute for Machine Learning (AIML), The University of Adelaide. She is also an Honorary Lecturer at the School of Computing, Macquarie University. She received her PhD in Computer Science from The University of Adelaide in 2017.
  • Lab Leaders - Dr. Tony Weitong Chen is an ARC EC Industry Fellow, a Lecturer at the University of Adelaide (UoA), and a researcher at the Australian Institute for Machine Learning (AIML), having previously served as an Associate Lecturer and Postdoctoral Research Fellow at the University of Queensland. He earned his PhD from the University of Queensland in 2020, after completing his Master's degree at the University of Queensland and his Bachelor's degree at Griffith University. His research primarily focuses on Machine Learning, with a special interest in its applications to medical data.
  • Located in: The University of Adelaide, North Terrace, Adelaide, SA

Research Scope: Model Optimization

  • Efficient Model Training and Inference: Explore topics like quantization, pruning, knowledge distillation, neural architecture search, and memory-efficient training for optimizing models.
  • Hyperparameter Optimization: Research advanced optimization techniques like Bayesian optimization, genetic algorithms, and multi-objective optimization to improve model performance.
  • Optimization Algorithms: Investigate advanced optimizers, second-order methods, and adaptive learning rate techniques to enhance model convergence and stability.
  • Meta-Learning and Few-Shot Learning: Focus on developing models that can quickly adapt to new tasks with minimal data using techniques like MAML and optimization-based meta-learning.
  • Model Calibration: Study techniques to improve model calibration, such as uncertainty estimation, temperature scaling, and extensions to focal loss for better handling of imbalanced data.
  • Model Interpretability and Explainability: Research explainable AI methods that make models more interpretable through saliency maps, SHAP values, and post-hoc explanation techniques.
  • Causal Reasoning: Explore causal inference methods to identify cause-and-effect relationships in data, improving the robustness and generalization of machine learning models.
  • Transfer Learning and Fine-Tuning: Work on optimizing the pretraining and fine-tuning processes in transfer learning models, with a focus on domain adaptation and continual learning.
  • Fairness, Robustness, and Privacy in Models: Explore methods to ensure fairness, robustness to adversarial attacks, and privacy preservation in models through differential privacy and federated learning.
  • Optimizing for Specific Hardware: Research how to optimize models for edge computing, TinyML, and specialized AI hardware such as GPUs, TPUs, and FPGAs.
  • Optimization in Reinforcement Learning: Study policy optimization methods and the benefits of model-free vs. model-based approaches in reinforcement learning.
  • Emerging Trends: Investigate cutting-edge topics such as quantum machine learning, Segment Anything Model 2 (SAM 2), and neurosymbolic AI, which combines neural networks with symbolic reasoning for optimized learning.
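As a concrete illustration of the Model Calibration topic above, here is a minimal sketch of temperature scaling: a single scalar T is fitted on held-out logits to minimize negative log-likelihood, which typically reduces overconfidence without changing predicted classes. The function names and the grid-search fitting strategy are illustrative choices, not code from any of the papers listed here.

```python
import numpy as np

def softmax(logits, T=1.0):
    """Temperature-scaled softmax over the last axis."""
    z = logits / T
    z = z - z.max(axis=1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def nll(probs, labels):
    """Mean negative log-likelihood of the true labels."""
    return -np.mean(np.log(probs[np.arange(len(labels)), labels] + 1e-12))

def fit_temperature(logits, labels, grid=np.linspace(0.5, 5.0, 46)):
    """Pick the temperature that minimizes held-out NLL.

    A 1-D grid search is used here for simplicity; in practice T is often
    optimized with a few steps of L-BFGS on a validation set.
    """
    return min(grid, key=lambda T: nll(softmax(logits, T), labels))
```

Because T only rescales logits, argmax predictions (and hence accuracy) are unchanged; only the confidence of the probability estimates moves toward better calibration.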