[Remote] AI Social Risk Analyst

Remote Full-time
Note: This is a remote position open to candidates in the USA. OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. The AI Social Risk Analyst will be responsible for identifying and mitigating risks associated with AI-enabled social experiences, providing strategic analysis, and developing actionable risk intelligence to maintain a safe online ecosystem.

Responsibilities
• Map and prioritize the AI-social risk landscape
• Build and continuously refine a clear picture of how AI is used in social-like products (e.g., Sora-powered clips, group chats, messaging assistants, creator tools)
• Design and maintain harm taxonomies tailored to AI-mediated communication (e.g., synthetic harassment, coordinated AI-assisted brigading, synthetic identity/brand misuse, reputational and intimate harms)
• Maintain a risk register and prioritization framework that surfaces the top issues by severity, prevalence, exposure, and trajectory (a toy illustration follows this list)
• Partner with investigations, operations, and product teams to surface new patterns of misuse across Sora, chats, and partner integrations
• Run structured deep dives on incidents, from synthetic impersonation and scams to targeted harassment or coordinated influence using AI-generated media
• Connect individual incidents into system-level stories about actors, incentives, product design weaknesses, and cross-product spillover
• Translate findings into clear, ranked risk lists and concrete proposals for mitigations that product, safety, and policy teams can execute on
• Collaborate with Safety Systems, Integrity, and Product to scope solutions such as classification improvements, UX guardrails, friction, enforcement flows, and detection signals
• Track whether mitigation work is landing: follow key indicators, pressure-test assumptions, and push for course corrections when the data demands it
• Help define the core metrics and signals that indicate whether AI-social environments are safe (e.g., key harm prevalence, severity distributions, escalation rates, brand safety issues)
• Work with data science and visualization colleagues to shape monitoring views and dashboards that highlight leading indicators and unusual changes in user behavior or abuse patterns
• Propose targeted probes, structured reviews, and experiments that surface new risk modes around major launches and feature changes
• Produce concise, decision-ready briefs on AI-social risks for leadership, safety forums, and partner teams
• Run scenario analyses that explore how AI-social harms might evolve over the next 6–24 months (e.g., how attackers might adapt to Sora, how group chats could be used for coordination, likely pressure points for brands and public figures)
• Benchmark OpenAI’s AI-social risk profile and mitigations against external incidents and other platforms, highlighting gaps, strengths, and opportunities
• Contribute to product readiness and launch reviews by laying out expected abuse modes, risk tradeoffs, and monitoring/response plans
• Turn risk insights into practical guidance for internal teams (product, marketing, partnerships, comms) and, where appropriate, external partners using OpenAI technologies in social and brand contexts
• Develop reusable frameworks, playbooks, FAQs, and briefing materials that make it easier for the broader organization to understand AI-social risks and respond consistently
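
To make the risk-register bullet above concrete, here is a minimal, hypothetical sketch in Python of an entry scored across the four dimensions named in this posting (severity, prevalence, exposure, trajectory). The field names, 1–5 scales, weights, and example risks are illustrative assumptions only and do not describe OpenAI's actual register or scoring.

```python
# Hypothetical risk-register sketch: rank entries by a weighted score over
# severity, prevalence, exposure, and trajectory. All values are illustrative.
from dataclasses import dataclass

@dataclass
class RiskEntry:
    name: str
    severity: int    # 1 (minor) .. 5 (critical)
    prevalence: int  # 1 (rare) .. 5 (widespread)
    exposure: int    # 1 (narrow reach) .. 5 (broad reach)
    trajectory: int  # 1 (declining) .. 5 (rapidly growing)

    def priority(self, weights=(0.4, 0.25, 0.2, 0.15)) -> float:
        """Weighted score used only to rank entries; higher = more urgent."""
        dims = (self.severity, self.prevalence, self.exposure, self.trajectory)
        return sum(w * d for w, d in zip(weights, dims))

register = [
    RiskEntry("Synthetic impersonation scams", severity=5, prevalence=3, exposure=4, trajectory=4),
    RiskEntry("Coordinated AI-assisted brigading", severity=4, prevalence=2, exposure=3, trajectory=5),
    RiskEntry("Brand misuse in generated clips", severity=3, prevalence=4, exposure=4, trajectory=3),
]

# Surface the top issues first, as the prioritization framework is meant to do.
for entry in sorted(register, key=lambda e: e.priority(), reverse=True):
    print(f"{entry.priority():.2f}  {entry.name}")
```
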
Skills
• Significant experience (typically 5+ years) in trust and safety, integrity, security, policy analysis, or intelligence work focused on social media, messaging, online communities, or adjacent environments
• Demonstrated ability to analyze complex online harms (e.g., harassment, coordinated abuse, scams, synthetic media, influence operations, brand safety issues) and convert analysis into concrete, prioritized recommendations
• Strong analytical skills and comfort working with both qualitative and quantitative inputs, including: (1) casework, incident reports, OSINT, product context, and policy frameworks; (2) basic metrics and trends in partnership with data science (e.g., harm prevalence, severity profiles, exposure, escalation rates; a toy illustration appears at the end of this posting)
• Strong adversarial and product intuition: able to foresee how actors might adapt AI-social and creative tools for misuse and to evaluate how product mechanics, incentives, and UX decisions influence risk
• Experience designing and using risk frameworks and taxonomies (e.g., harm classification schemes, severity/likelihood matrices, prioritization models) to structure ambiguous spaces and support decision-making
• Proven ability to work cross-functionally with product, engineering, data science, operations, legal, and policy teams, including pushing for clarity on tradeoffs and following through on mitigation work
• Excellent written and verbal communication skills, including experience producing concise, executive-ready briefs and explaining sensitive, complex issues in grounded, concrete terms
• Comfort operating in fast-changing, ambiguous environments: you can identify weak signals, form hypotheses, test them quickly, and adjust as the product and threat landscape evolves

Company Overview
• OpenAI is an AI research and deployment company that develops advanced AI models, including ChatGPT. It is a sub-organization of OpenAI Foundation. It was founded in 2015 and is headquartered in San Francisco, California, USA, with a workforce of 201-500 employees.

Company H1B Sponsorship
• OpenAI has a track record of offering H1B sponsorships: 1 in 2025, 1 in 2024, 1 in 2023, 18 in 2022, 10 in 2021, and 6 in 2020. Please note that this does not guarantee sponsorship for this specific role.
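
As a companion to the quantitative skills listed above (harm prevalence, severity profiles, escalation rates), here is a minimal, hypothetical sketch of how such indicators might be computed from labeled incident data. The record fields, denominators, and formulas are illustrative assumptions, not OpenAI's actual metrics.

```python
# Hypothetical incident records; in practice these would come from labeled
# casework or detection pipelines. All definitions below are illustrative only.
incidents = [
    {"harm": "synthetic_harassment", "severity": 4, "escalated": True},
    {"harm": "impersonation_scam",   "severity": 5, "escalated": True},
    {"harm": "brand_misuse",         "severity": 2, "escalated": False},
    {"harm": "synthetic_harassment", "severity": 3, "escalated": False},
]
sessions_reviewed = 10_000  # assumed denominator for prevalence

# Harm prevalence: incidents per 1,000 reviewed sessions.
prevalence_per_1k = 1_000 * len(incidents) / sessions_reviewed

# Severity distribution: share of incidents at each severity level.
severity_counts = {}
for inc in incidents:
    severity_counts[inc["severity"]] = severity_counts.get(inc["severity"], 0) + 1
severity_dist = {s: c / len(incidents) for s, c in sorted(severity_counts.items())}

# Escalation rate: fraction of incidents that required escalation.
escalation_rate = sum(inc["escalated"] for inc in incidents) / len(incidents)

print(f"prevalence per 1k sessions: {prevalence_per_1k:.2f}")
print(f"severity distribution: {severity_dist}")
print(f"escalation rate: {escalation_rate:.0%}")
```
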
Apply Now
