
Principal AI Security & Risk Researcher

2026-04-22 | Ciph Lab | San Francisco, CA
Description:

Ciph Lab | Remote | Equity-Only (Pre-Seed)

About Ciph Lab

Ciph Lab is building Intelligence Resources™, software that operationalizes responsible AI governance at scale. We're a 4-month-old AI governance company, AI-first and remote-first, transitioning from consultancy to agents and a SaaS platform.

AI security isn't static: new jailbreaks, prompt injections, and model vulnerabilities emerge constantly. Traditional security assessments can't keep pace. We're building adaptive governance systems with security-by-design that evolve as the threat landscape changes.

The Opportunity

We're seeking a Principal AI Security & Risk Researcher to join our founding research team and lead our security track. This isn't traditional red teaming or pentesting: you'll be designing continuous security monitoring systems and building frameworks that help enterprises assess and mitigate AI risks at scale.

You'll research emerging AI threats (jailbreaks, prompt injections, model vulnerabilities), translate findings into actionable security frameworks, and collaborate with our technical team to build automated security testing and audit telemetry.

This is a founding research role with equity ownership and a hand in defining how organizations approach AI security.

What You'll Do

AI Security Research:
- Research emerging AI attack vectors, guardrail bypasses, and defense mechanisms
- Monitor threat intelligence feeds and security research communities
- Experiment with new AI security tools and assessment methodologies
- Stay current with LLM vulnerabilities, adversarial techniques, and model safety

Security Framework Design:
- Design security assessment frameworks for generative AI and agentic systems
- Develop risk evaluation methodologies that adapt as threats evolve
- Create audit telemetry and security monitoring protocols
- Translate security research into operational frameworks that enterprises can deploy

Building Adaptive Systems:
- Collaborate with the technical team to build automated security testing tools
- Design continuous threat monitoring and alerting systems
- Create security validation processes for framework updates
- Ensure monitoring systems themselves are secure (meta-security)
- Build audit trails for compliance documentation

Thought Leadership:
- Contribute to Ciph Lab's weekly newsletter on AI security and risk
- Position the company as a trusted voice in AI security governance
- Share insights publicly (while protecting proprietary methods)

What We're Looking For

Required:
- 5+ years in cybersecurity, with 2+ years focused on AI/ML security, red teaming, or adversarial testing
- Deep understanding of LLM architectures, prompt injection, jailbreaking, and model safety mechanisms
- Experience developing security testing frameworks or vulnerability assessment tools
- Strong research capabilities, with the ability to translate technical findings into actionable frameworks

Preferred:
- Experience with AI governance frameworks (NIST AI RMF, ISO 42001, EU AI Act)
- Background in enterprise risk assessment or security audit methodologies
- Familiarity with agent architectures, RAG systems, or multi-modal AI security
- Published work in AI security, adversarial ML, or related fields

Critical Attributes:
- Self-directed: you identify threats proactively, set research priorities, and drive security strategy without oversight
- Systems thinker: you see how security connects to governance, compliance, and technical implementation
- Continuous learner: you stay ahead of rapidly evolving AI threats and defense mechanisms
- Collaborative: you work effectively with legal, governance, and technical experts
- Disciplined remote worker: you manage time effectively, maintain momentum on long-term research, and show up consistently

What Makes This Different

Not your typical security role:
- You're building adaptive security infrastructure, not just finding vulnerabilities
- You work at the intersection of AI security, governance, and compliance
- You're designing living security frameworks that update as threats emerge
- You're shaping standards in an emerging field with limited precedent

High autonomy, flexible structure:
- Remote-first; manage your own schedule
- Weekly team meetings (Wednesdays 5-6 pm PT)
- Async collaboration via Slack and shared tools
- 5-10 hours/week commitment (scales up during peak periods)

Research-first culture:
- Time budgeted for learning and experimentation
- Expected to share discoveries and insights with the team
- Contribute to thought leadership and industry positioning

Commitment & Compensation

- Time: 5-10 hours/week, plus a 1-hour weekly meeting
- Structure: part-time, flexible, remote
- Compensation: 0.5-2% equity (4-year vest, 1-year cliff)

This role is for someone who:
- Values equity ownership in defining AI security standards
- Wants a ground-floor opportunity in AI governance
- Sees AI security expertise as a high-value emerging specialty
- Thrives in ambiguity and early-stage environments
- Treats equity as motivation to build something meaningful

Success in This Role

- First 30 days: audit existing frameworks through a security lens, identify vulnerabilities, and propose a research roadmap
- First 90 days: deliver an AI security assessment methodology, design a threat monitoring strategy, and begin building security tools with the technical team
- Ongoing: keep frameworks secure as threats evolve, contribute thought leadership, and advance automated security testing

Why This Matters

AI governance without robust security is performative compliance. Organizations need frameworks that don't just check boxes but genuinely reduce risk.

As AI threats evolve (and they will), enterprises need systems that automatically detect, assess, and respond to new vulnerabilities. Your work ensures that happens.

You'll help define what "auditable AI security" means in practice.

How to Apply

Send to founder@ciph-lab.com:
- Resume/CV
- A brief note (200-300 words) on your interest in AI security governance and what you'd bring to this role

We review applications on a rolling basis.

Ciph Lab is an equal opportunity employer. We value diverse perspectives and multidimensional talent.

