GenAI Expert Analyst
Job Description
We are seeking a highly skilled and creative AI Red Team Security Analyst to join ouradvanced threat-hunting squad focused on identifying vulnerabilities in generative AI systems and large language models (LLMs). This role emphasizes proactive prompt hacking and adversarial testing to uncover edge-case failures and security risks, including prompt injections, data leakage, and model exploitation.Key Responsibilities:
Prompt-Based Vulnerability Discovery:● Design and execute adversarial prompt attacks to identify and exploit weaknesses inLLM guardrails, safety alignment, and context handling. ● Writing adversarial prompts to identify weaknesses in various AI models, including Large Language Models (LLMs), Text-to-Image, Text-to-Video, and beyond.● Utilize advanced red-teaming techniques such as:○ Token manipulation○ Role-playing scenarios○ Indirect and recursive instructions○ Context confusion and prompt injection○ Hypothetical and fictional framing○ Multi-step/chained reasoning attacks○ Format manipulation and obfuscation○ Character/persona-based social engineering○ Emotional manipulation tactics○ Logic-based confusion and contradiction testsSource Development & Intelligence Cultivation:● Develop and maintain relationships with underground sources, forums, and semi-privatechannels to gain early access to new attack vectors and methods.● Identify influential actors, thought leaders, and exploit developers within the threatlandscape to anticipate future risks.● Act as a liaison between threat intelligence and red teaming functions to ensure acontinuous feedback loop from real-world adversaries.Threat Modeling & Scenario Design:● Craft realistic, high-impact adversarial scenarios and attack simulations to test LLMresilience under sophisticated manipulations.● Model threat actor behaviors across different domains (e.g., social engineering,misinformation, data exfiltration).Reporting & Documentation:● Document findings, attack paths, and risk assessments with actionable insights forengineering, policy, and safety teams.● Produce red team reports and contribute to living documentation of known exploitpatterns and mitigation strategies.Cross-Team Collaboration & Tooling:● Partner with AI safety, security, and product teams to drive model hardening efforts.● Contribute to internal tools for automated prompt testing, anomaly detection, andbehavior logging.Additional Wants: ● Familiarity with Generative AI models, though direct technical experience is nota prerequisite. ● Experience with various model types (Text-to-Text, Text-to-Image) is desirable.Familiarity with different abuse areas in the T&S field. ● Prior experience with OSINT (Open Source Intelligence) will be considered anasset.*Prompt writing experience is a great advantage