[Remote] AI Speech Research Intern

Remote Full-time
Note: This is a remote role open to candidates in the USA.

Centific AI Research seeks a PhD Research Intern to design and evaluate speech-first models, with a focus on Spoken Language Models (SLMs) that reason over audio and interact conversationally. You will move ideas from prototype to practical demos, working with scientists and engineers to deliver measurable impact.

Responsibilities

Research areas:
- End-to-end speech dialogue systems (speech-in/speech-out) and speech-aware LLMs
- Alignment between speech encoders and text backbones via lightweight adapters (see the illustrative sketch at the end of this posting)
- Efficient speech tokenization and temporal compression suitable for long-form audio
- Reliable evaluation across recognition, understanding, and generation tasks, including robustness and safety
- Latency-aware inference for streaming and real-time user experiences

During the internship, you will:
- Prototype a conversational SLM using an SSL speech encoder and a compact adapter on an existing LLM; compare against strong baselines
- Create a data recipe that blends conversational speech with instruction-following corpora; run targeted ablations and report findings
- Build an evaluation harness that covers ASR/ST/SLU and speech QA, including streaming metrics (latency, stability, endpointing)
- Ship a minimal demo with streaming inference and logging; document setup, metrics, and reliability checks
- Author a crisp internal write-up: goals, design choices, results, and next steps for productionization

Skills
- PhD candidate in CS/EE (or a related field) with research in speech, audio ML, or multimodal LMs
- Fluency in Python and PyTorch, with hands-on GPU training; familiarity with torchaudio or librosa
- Working knowledge of modern sequence models (Transformers or SSMs) and training best practices
- Depth in at least one area: (a) discrete speech tokens/temporal compression, (b) modality alignment to LLMs via adapters, or (c) post-training/instruction tuning for speech tasks
- Strong experimentation habits: clean code, ablations, reproducibility, and clear reporting
- Experience with speech generation (neural codecs/vocoders) or hybrid text+speech decoding
- Background in multilingual or code-switching speech and domain adaptation
- Hands-on work evaluating safety, bias, hallucination, or spoofing risks in speech systems
- Distributed training/serving (FSDP/DeepSpeed) and experience with ESPnet, SpeechBrain, or NVIDIA NeMo

Benefits
- Comprehensive healthcare, dental, and vision coverage
- 401(k) plan
- Paid time off (PTO)
- And more!

Company Overview
Zero distance innovation for GenAI creators and industries: by expertly engineering platforms and curating multimodal, multilingual data, Centific empowers the 'Magnificent Seven' and enterprise clients with safe, scalable AI deployment. We are a team of over 150 PhDs and data scientists, along with more than 4,000 AI practitioners and engineers. Centific was founded in 2020 and is headquartered in Redmond, Washington, USA, with a workforce of 5,001-10,000 employees.

Company H1B Sponsorship
Centific has a track record of offering H1B sponsorship, with 10 sponsorships in 2025, 22 in 2024, and 14 in 2023. Please note that this does not guarantee sponsorship for this specific role.
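
For candidates unfamiliar with the adapter-based alignment mentioned under Responsibilities, the following is a minimal, hypothetical sketch of the general idea: frozen SSL speech-encoder features are temporally compressed by frame stacking and projected into an LLM's embedding space by a small trainable adapter. All module names, dimensions, and the stacking factor are illustrative assumptions, not a description of Centific's actual systems.

```python
# Illustrative sketch only: lightweight adapter mapping SSL speech features
# into an LLM embedding space with simple temporal compression.
# Dimensions and the frame-stacking factor are assumptions for demonstration.
import torch
import torch.nn as nn


class SpeechAdapter(nn.Module):
    """Stacks adjacent frames (temporal compression) and projects them
    into the text backbone's embedding dimension."""

    def __init__(self, encoder_dim: int, llm_dim: int, stack: int = 4):
        super().__init__()
        self.stack = stack
        self.proj = nn.Sequential(
            nn.Linear(encoder_dim * stack, llm_dim),
            nn.GELU(),
            nn.Linear(llm_dim, llm_dim),
        )

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # feats: (batch, frames, encoder_dim), e.g. ~50 Hz SSL features
        b, t, d = feats.shape
        t = t - (t % self.stack)                 # drop ragged tail frames
        stacked = feats[:, :t].reshape(b, t // self.stack, d * self.stack)
        return self.proj(stacked)                # (batch, t // stack, llm_dim)


if __name__ == "__main__":
    # Stand-in for frozen SSL encoder output (e.g. a 1024-dim HuBERT-style model);
    # random features keep the sketch self-contained.
    speech_feats = torch.randn(2, 200, 1024)     # 2 utterances, 200 frames each
    adapter = SpeechAdapter(encoder_dim=1024, llm_dim=4096, stack=4)
    soft_tokens = adapter(speech_feats)
    # soft_tokens would be concatenated with text embeddings and fed to the LLM.
    print(soft_tokens.shape)                     # torch.Size([2, 50, 4096])
```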