Job Description
Position Overview
We are seeking an experienced AI Research Scientist to lead foundation model development initiatives. The ideal candidate will have hands-on experience training large-scale models at major tech companies and a proven track record of advancing the state of the art in foundation models.
Key Responsibilities
Lead the architecture design and training of large-scale foundation models
Develop and optimize model training pipelines for distributed systems
Drive research initiatives in model scaling, efficiency, and performance
Implement innovative approaches to improve model capabilities and training efficiency
Collaborate with the engineering team to productionize research breakthroughs
Guide technical decisions related to model architecture and training strategies
Mentor junior researchers and contribute to building our research culture
Required Qualifications
Ph.D. in Computer Science, Machine Learning, or related field
3+ years of experience in training large-scale models at major tech companies, including:
International tech leaders (e.g., Google, Meta, Microsoft, OpenAI, Anthropic) OR
Leading Chinese tech companies (e.g., ByteDance, Alibaba, Baidu, Tencent, SenseTime, Huawei)
Proven experience with distributed training systems and large-scale model optimization
Deep understanding of transformer architectures and their variants
Strong track record in developing and training foundation models
Extensive experience with PyTorch and/or JAX
Publication record in top-tier conferences (NeurIPS, ICML, ICLR)
Preferred Qualifications
Experience with both Chinese and international AI ecosystems
Familiarity with Chinese AI infrastructure (e.g., ModelArts, PAI, ByteMLab)
Background in scaling laws and efficient training strategies
Experience with video generation models or multimodal architectures
Track record of open-source contributions to major ML frameworks
Experience with ML infrastructure design and implementation
Familiarity with mixed-precision training and model parallelism
Experience with custom CUDA kernels and optimization
Technical Expertise
Large-Scale Training: Distributed training frameworks, model parallelism strategies
Infrastructure:
International cloud platforms (AWS/GCP)
Chinese cloud platforms (Alibaba Cloud, Tencent Cloud, Huawei Cloud)
Languages: Python, CUDA, C++ (optional)
Frameworks:
Standard: PyTorch, JAX, DeepSpeed, Megatron-LM
Chinese ecosystem: PaddlePaddle, MindSpore (a plus)
Development Tools: Git, Docker, Kubernetes
Monitoring: Weights & Biases, MLflow, or similar tools
What We Offer
Opportunity to shape the future of foundation models in video generation
Leadership role in technical decision-making
Access to substantial computing resources and infrastructure
Competitive compensation package including equity
Regular collaboration with top researchers in the field
Support for conference attendance and research publication
International exposure and collaboration opportunities
Location
Hong Kong (on-site, Hong Kong Science and Technology Park)
Expected Impact
Drive the development of next-generation foundation models
Lead research initiatives that push the boundaries of model capabilities
Build and mentor a world-class research team
Work Location
Address: Rooms 317-318, Building 10W, Hong Kong Science Park, Sha Tin District, Hong Kong
Posted By
Mr. Zhang (HR)
Video Rebirth Limited
Industry: Computer Software
Company size: 11-20 employees
Company type: Wholly foreign-owned enterprise (foreign company office)
Address: Rooms 317-318, Building 10W, Hong Kong Science Park