Jiajun Zhou


The creative spirits. The underdogs. The resolute. The determined. The indefatigable. The outsiders. The defiant. The independent thinkers. The fighters and the true believers.


📰 AI Researcher

Ph.D. Student @ HKU | Research Associate @ UCSB | Specializing in Efficient AI, LLM Optimization, and Hardware-Aware Training

🚀 Actively seeking full-time opportunities in AI research, ML systems, or hardware-software co-design starting in 2025/2026.

📫 zhoutomas177@gmail.com (Personal) | LinkedIn
✉️ ryjjc@connect.hku.hk | 📍 Mountain View, California (current)

🎓 Education

Ph.D., EEE | The University of Hong Kong (December 2025)

M.S., IC Design | HKUST (December 2019)

B.Eng., IC Design | National Huaqiao University (June 2018)


💼 Experience

Research Intern @ Samsung Research America

May 2025 – Present

Research Associate @ UCSB

Sept. 2023 – Apr. 2025 | Advisor: Prof. Zheng Zhang

Research Assistant @ CUHK

Apr. 2021 – Dec. 2021 | Advisor: Prof. Guoliang Xing

Mixed-Signal IC Design Engineer @ ASTRI

Sep. 2019 – Mar. 2021 | Technology Co-Design Group


🧠 Selected Publications

  1. QuZO: Quantized Zeroth-Order Fine-Tuning for LLMs – (Under Review)
  2. LoRETTA: Tensor-Train Adaptation for LLMs – NAACL 2024
  3. DyBit: Dynamic Bit-Precision Inference – IEEE TCAD 2023
  4. MSD: Mixing Signed Digits on FPGAs – FCCM 2023
  5. NoiseZO: RRAM-Driven ZO Optimization – DAC 2025
  6. HKLUT: Hundred-Kilobyte Lookup Tables for Super-Resolution – IJCAI 2024
  7. PECAN: Product-Quantized CAM Network – DATE 2023
  8. Lite It Fly: All-Deformable-Butterfly Network – IEEE TNNLS (brief)

πŸ“ Full publication list on Google Scholar


🚀 Research Highlights

Machine learning and systems, with a focus on efficient training and inference.
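
To give a flavor of the zeroth-order fine-tuning theme behind QuZO and NoiseZO above, here is a minimal, hypothetical Python sketch of a two-point zeroth-order gradient estimate. It illustrates the general technique only, not the actual method from those papers; `zo_gradient` and the toy loss are made-up names for this sketch.

```python
# Hypothetical two-point zeroth-order (ZO) gradient estimator -- an illustration
# of the general ZO idea, not the method from QuZO or NoiseZO.
import torch

def zo_gradient(loss_fn, params, mu=1e-3):
    """Estimate the gradient of loss_fn at params from two forward passes."""
    u = torch.randn_like(params)              # random probe direction
    loss_plus = loss_fn(params + mu * u)      # forward pass only,
    loss_minus = loss_fn(params - mu * u)     # no backpropagation needed
    return (loss_plus - loss_minus) / (2 * mu) * u

# Toy usage: drive w toward 3 by "ZO gradient descent" on ||w - 3||^2.
w = torch.zeros(4)
toy_loss = lambda p: ((p - 3.0) ** 2).sum()
for _ in range(300):
    w = w - 0.05 * zo_gradient(toy_loss, w)
print(w)  # roughly [3., 3., 3., 3.]
```

The appeal of this style of optimization is that it needs only forward evaluations of the loss, which is what makes it attractive for memory-constrained or quantized fine-tuning settings.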


📷 Fun Fact

I enjoy exploring the intersection of AI algorithms and hardware, whether it's crafting efficient LLMs, squeezing memory on an edge chip, or analyzing training efficiency.


🤝 Academic

I'm passionate about bridging academia and decentralized technology: co-authoring papers on efficient LLM training, collaborating with global research labs, and exploring blockchain infrastructure projects that bring AI infrastructure and intelligent agents on-chain.


🔧 Technical Skills

Languages: Python, C/C++, MATLAB, Verilog
Frameworks & Platforms: PyTorch, TensorFlow (incl. Lite & Keras), CUDA
Tools: Cadence, Xilinx Vivado & ISE, HSPICE, ModelSim, VS Code
