Jinjia Li
Researcher
Jinjia Li is passionate about advancing AI systems through reasoning optimization, with expertise in designing efficient architectures that strengthen logical inference. His research focuses on improving computational efficiency in knowledge-intensive tasks by optimizing models' reasoning pathways. He specializes in techniques such as knowledge distillation, sparse attention mechanisms, and neurosymbolic approaches that preserve accuracy while reducing computational cost.
Previously, Jinjia built reasoning optimization frameworks that achieved 40% faster inference without sacrificing accuracy in complex question-answering systems. His work bridges theoretical advances with practical deployment considerations, particularly in resource-constrained environments. He is proficient in PyTorch, TensorRT, and compiler-level optimizations for AI accelerators.
Jinjia actively contributes to open-source projects on efficient reasoning architectures and has published research on dynamic computation methods for multi-hop reasoning. He is currently exploring adaptive computation strategies for variable-complexity queries, energy-efficient reasoning on edge devices, and verifiable reasoning systems with formal guarantees.