Security Lit - AI/ML Security Engineer (4-5 yrs)
Job Title : AI/ML Security Engineer.
We are seeking an experienced AI/ML Security Engineer to address emerging AI risks, including deepfake voice morphing, image injection, and AI-powered fraud schemes that could impact the bank's operations and customer trust.
The ideal candidate will proactively test AI/ML systems, evaluate existing controls, and design innovative solutions to mitigate risks associated with malicious AI use in the banking sector.
Key Responsibilities :
Testing and Mitigation of AI-Related Risks :
- Identify and mitigate AI-based threats such as deepfake voice morphing, fraudulent image manipulation, and injection-based attacks.
- Conduct testing on AI/ML models to simulate real-world adversarial attacks, including voice cloning, synthetic identity fraud, and AI-driven phishing calls (see the illustrative sketch after this list).
- Develop detection techniques and preventive mechanisms for AI-generated fraud.
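To make the adversarial-testing responsibility above concrete, the sketch below probes a toy fraud classifier with an FGSM-style perturbation and reports the accuracy drop. It is only a minimal illustration: the data, the from-scratch logistic regression, and the perturbation budget are all hypothetical stand-ins, and a real engagement would target the bank's actual models, ideally with a maintained adversarial-ML toolkit.

```python
"""
Illustrative sketch: a fast-gradient-sign (FGSM-style) robustness probe against a
toy logistic-regression fraud classifier. All data and model details here are
hypothetical placeholders, not the bank's systems.
"""
import numpy as np

rng = np.random.default_rng(0)

# --- Hypothetical training data: 2-class "legitimate vs. fraudulent" features ---
n, d = 1000, 20
X = rng.normal(size=(n, d))
true_w = rng.normal(size=d)
y = (X @ true_w + 0.5 * rng.normal(size=n) > 0).astype(float)

# --- Train a simple logistic-regression stand-in for the target model ---
def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

w, b = np.zeros(d), 0.0
for _ in range(500):                        # plain gradient descent
    p = sigmoid(X @ w + b)
    w -= 0.5 * (X.T @ (p - y) / n)
    b -= 0.5 * float(np.mean(p - y))

def accuracy(X_eval, y_eval):
    return float(np.mean((sigmoid(X_eval @ w + b) > 0.5) == y_eval))

# --- FGSM-style evasion: perturb inputs along the sign of the loss gradient ---
# For logistic loss, d(loss)/dx = (p - y) * w, so the attack direction is simple.
eps = 0.3                                   # perturbation budget (hypothetical)
p = sigmoid(X @ w + b)
grad_x = (p - y)[:, None] * w[None, :]
X_adv = X + eps * np.sign(grad_x)

print(f"clean accuracy:       {accuracy(X, y):.3f}")
print(f"adversarial accuracy: {accuracy(X_adv, y):.3f}")
```

The gap between the two printed numbers is the kind of evidence such testing would feed into the reports described under "Collaboration and Reporting" below.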
Evaluation of Existing Controls :
- Assess the bank's current security controls to identify vulnerabilities where AI/ML abuse could occur.
- Perform testing to evaluate the resilience of fraud detection systems against AI-powered threats.
AI/ML Application Security Testing :
- Perform penetration testing and security reviews of AI/ML systems, APIs, and models to ensure resilience against adversarial AI risks.
- Identify risks related to AI/ML misuse and data poisoning attacks in the bank's models (a minimal sketch follows).
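As a hedged illustration of the data-poisoning risk just mentioned, the sketch below screens a synthetic training set for label-flip poisoning using a simple nearest-neighbour label-consistency heuristic. The data, the flip rate, and the disagreement threshold are all hypothetical; a production check would combine data provenance tracking with several statistical tests rather than rely on one heuristic.

```python
"""
Illustrative sketch: a nearest-neighbour label-consistency screen for spotting
possible label-flip poisoning in a training set. Data, thresholds, and the
heuristic itself are hypothetical placeholders.
"""
import numpy as np

rng = np.random.default_rng(1)

# --- Hypothetical training set with a small fraction of flipped labels ---
n, d = 600, 8
X = rng.normal(size=(n, d))
y = (X[:, 0] + X[:, 1] > 0).astype(int)           # "true" labels
poisoned = rng.choice(n, size=30, replace=False)  # attacker flips 5% of labels
y_train = y.copy()
y_train[poisoned] ^= 1

def knn_disagreement(X, labels, k=10):
    """Fraction of each point's k nearest neighbours whose label disagrees with it."""
    # Pairwise squared Euclidean distances (fine at this toy scale).
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
    np.fill_diagonal(d2, np.inf)                  # exclude the point itself
    nn_idx = np.argsort(d2, axis=1)[:, :k]
    neighbour_labels = labels[nn_idx]
    return (neighbour_labels != labels[:, None]).mean(axis=1)

scores = knn_disagreement(X, y_train)
suspect = np.where(scores >= 0.8)[0]              # flag highly inconsistent records

flagged_poisoned = np.intersect1d(suspect, poisoned).size
print(f"flagged {suspect.size} records; {flagged_poisoned} of the 30 poisoned ones")
```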
Research & Development of Defense Mechanisms :
- Stay updated on emerging AI risks, such as generative AI for fraud, voice synthesis, and image manipulation techniques.
- Collaborate with cross-functional teams to implement advanced detection systems for deepfakes and AI-based fraud calls.
Collaboration and Reporting :
- Work with fraud prevention, cybersecurity, and development teams to strengthen defenses against AI-related threats.
- Provide comprehensive reports on identified risks, testing results, and actionable mitigation plans.
Required Skills & Experience :
Experience :
- 4-5+ years of experience in cybersecurity, fraud detection, or AI/ML security roles.
- Hands-on experience in testing and mitigating AI-driven threats, such as deepfakes and adversarial attacks.
Technical Skills :
- Strong understanding of AI/ML concepts, including generative models (e.g., GANs), synthetic media, and adversarial machine learning.
- Experience with tools for deepfake detection, voice-morphing analysis, and image-manipulation analysis.
- Proficiency in programming/scripting languages (e.g., Python) for model testing and automation.
- Familiarity with AI/ML testing frameworks and cloud AI services (AWS, Azure, GCP).
Analytical Skills :
- Ability to analyze and simulate AI-driven attacks such as voice cloning, synthetic image injection, and fraud calls.
Certifications (Preferred) : OSCP, CEH, or AI/ML security-focused certifications.
Preferred Qualifications :
- Experience identifying deepfake and synthetic media attacks.
- Familiarity with voice morphing tools, fraud detection techniques, and AI abuse scenarios.
- Awareness of regulatory and compliance requirements related to AI risk in the BFSI sector.
Let's build a safer digital world, together.
Functional Areas: Software/Testing/Networking