GenAI Expert Analyst
Location
Anywhere
Experience
Senior
Occupation
Freelance
Company Type
Start-up
Salary
Job Description
Posted on:
June 16, 2025
We are seeking a highly skilled and creative AI Red Team Security Analyst to join our
advanced threat-hunting squad focused on identifying vulnerabilities in generative AI systems and large language models (LLMs). This role emphasizes proactive prompt hacking and adversarial testing to uncover edge-case failures and security risks, including prompt injections, data leakage, and model exploitation.
Key Responsibilities:
Prompt-Based Vulnerability Discovery:
● Design and execute adversarial prompt attacks to identify and exploit weaknesses in
LLM guardrails, safety alignment, and context handling.
● Writing adversarial prompts to identify weaknesses in various AI models, including Large Language Models (LLMs), Text-to-
Image, Text-to-Video, and beyond.
● Utilize advanced red-teaming techniques such as:
○ Token manipulation
○ Role-playing scenarios
○ Indirect and recursive instructions
○ Context confusion and prompt injection
○ Hypothetical and fictional framing
○ Multi-step/chained reasoning attacks
○ Format manipulation and obfuscation
○ Character/persona-based social engineering
○ Emotional manipulation tactics
○ Logic-based confusion and contradiction tests
Source Development & Intelligence Cultivation:
● Develop and maintain relationships with underground sources, forums, and semi-private
channels to gain early access to new attack vectors and methods.
● Identify influential actors, thought leaders, and exploit developers within the threat
landscape to anticipate future risks.
● Act as a liaison between threat intelligence and red teaming functions to ensure a
continuous feedback loop from real-world adversaries.
Threat Modeling & Scenario Design:
● Craft realistic, high-impact adversarial scenarios and attack simulations to test LLM
resilience under sophisticated manipulations.
● Model threat actor behaviors across different domains (e.g., social engineering,
misinformation, data exfiltration).
Reporting & Documentation:
● Document findings, attack paths, and risk assessments with actionable insights for
engineering, policy, and safety teams.
● Produce red team reports and contribute to living documentation of known exploit
patterns and mitigation strategies.
Cross-Team Collaboration & Tooling:
● Partner with AI safety, security, and product teams to drive model hardening efforts.
● Contribute to internal tools for automated prompt testing, anomaly detection, and
behavior logging.
Additional Wants:
● Familiarity with Generative AI models, though direct technical experience is not
a prerequisite.
● Experience with various model types (Text-to-Text, Text-to-Image) is desirable.
Familiarity with different abuse areas in the T&S field.
● Prior experience with OSINT (Open Source Intelligence) will be considered an
asset.
*Prompt writing experience is a great advantage
advanced threat-hunting squad focused on identifying vulnerabilities in generative AI systems and large language models (LLMs). This role emphasizes proactive prompt hacking and adversarial testing to uncover edge-case failures and security risks, including prompt injections, data leakage, and model exploitation.
Key Responsibilities:
Prompt-Based Vulnerability Discovery:
● Design and execute adversarial prompt attacks to identify and exploit weaknesses in
LLM guardrails, safety alignment, and context handling.
● Writing adversarial prompts to identify weaknesses in various AI models, including Large Language Models (LLMs), Text-to-
Image, Text-to-Video, and beyond.
● Utilize advanced red-teaming techniques such as:
○ Token manipulation
○ Role-playing scenarios
○ Indirect and recursive instructions
○ Context confusion and prompt injection
○ Hypothetical and fictional framing
○ Multi-step/chained reasoning attacks
○ Format manipulation and obfuscation
○ Character/persona-based social engineering
○ Emotional manipulation tactics
○ Logic-based confusion and contradiction tests
Source Development & Intelligence Cultivation:
● Develop and maintain relationships with underground sources, forums, and semi-private
channels to gain early access to new attack vectors and methods.
● Identify influential actors, thought leaders, and exploit developers within the threat
landscape to anticipate future risks.
● Act as a liaison between threat intelligence and red teaming functions to ensure a
continuous feedback loop from real-world adversaries.
Threat Modeling & Scenario Design:
● Craft realistic, high-impact adversarial scenarios and attack simulations to test LLM
resilience under sophisticated manipulations.
● Model threat actor behaviors across different domains (e.g., social engineering,
misinformation, data exfiltration).
Reporting & Documentation:
● Document findings, attack paths, and risk assessments with actionable insights for
engineering, policy, and safety teams.
● Produce red team reports and contribute to living documentation of known exploit
patterns and mitigation strategies.
Cross-Team Collaboration & Tooling:
● Partner with AI safety, security, and product teams to drive model hardening efforts.
● Contribute to internal tools for automated prompt testing, anomaly detection, and
behavior logging.
Additional Wants:
● Familiarity with Generative AI models, though direct technical experience is not
a prerequisite.
● Experience with various model types (Text-to-Text, Text-to-Image) is desirable.
Familiarity with different abuse areas in the T&S field.
● Prior experience with OSINT (Open Source Intelligence) will be considered an
asset.
*Prompt writing experience is a great advantage