Staff Security Engineer - Credit Karma
Company Overview
Intuit is the global financial technology platform that powers prosperity for the people and communities we serve. With approximately 100 million customers worldwide using products such as TurboTax, Credit Karma, QuickBooks, and Mailchimp, we believe that everyone should have the opportunity to prosper. We never stop working to find new, innovative ways to make that possible.
Job Overview
We’re hiring a Staff Product Security Engineer to lead the design, development, and deployment of security capabilities across both traditional application security and AI/ML systems. You’ll build and integrate security tooling, leveraging open-source and vendor solutions, to strengthen our Secure Development Lifecycle and vulnerability-reduction efforts (including SAST, DAST, SCA, secrets scanning, and vulnerability management), while also securing the full AI lifecycle: data ingestion, training/fine-tuning, evaluation, model registry, inference, agentic workflows, and MCP servers/tools.
You’ll partner closely with product engineering, ML engineering, and platform teams to implement scalable controls, define standards, and operationalize continuous assurance across apps and AI systems, covering secure coding practices, supply chain integrity, identity and access controls, runtime protections, and AI-specific risks such as model security, prompt/tool safety, and AI pipeline governance.
Responsibilities
What You’ll Do
- Lead security architecture reviews and threat modeling across apps/APIs/cloud and AI/ML systems (agents, MCP servers, tool integrations, orchestration).
- Implement security controls across the SDLC and AI lifecycle.
- Build “secure-by-default” automation and guardrails (policy-as-code, CI/CD gates, least privilege/sandboxing, provenance verification).
- Own and mature SAST/DAST/SCA and vulnerability management: tool tuning, pipeline integration, triage, remediation workflows, and metrics/SLAs.
- Evaluate and integrate OSS/vendor AppSec and AI security tooling (scanning, secrets, prompt safety, agent runtime monitoring, data leakage controls).
- Deliver reusable secure patterns/SDKs and partner with platform teams on runtime hardening (IAM, secrets, Kubernetes, logging/monitoring, isolation).
- Automate testing for OWASP Top 10 and AI-specific risks; integrate it into release gates and continuous monitoring.
- Define standards aligned with enterprise policy and AISPM-style practices; enable teams and communicate risk/roadmaps to leadership.
Qualifications
What We’re Looking For
- 6+ years in product/application security in large-scale systems.
- Demonstrated experience building or operationalizing security tooling (CI/CD integrations, scanners, policy engines, security automation, detection/monitoring).
- Strong foundation in security architecture, design reviews, and threat modeling for modern cloud-native systems.
- Practical understanding of AI/ML systems and workflows: model development lifecycle, model registry/deployments, evals, vector databases/RAG, and agent frameworks.
- Deep familiarity with common software vulnerabilities (OWASP Top 10) and modern cloud threats; strong ability to communicate risk to engineers.
- Ability to collaborate with software engineers and ML engineers—meeting business goals while enforcing security requirements.
- Experience applying security and compliance frameworks (examples: NIST, ISO 27001/27002 concepts, SOC 2 controls, OAuth/OIDC, PCI where relevant).
- Proficiency in one or more: Python, Go, Java, TypeScript/Node, Rust, Scala.
What Would Be Great to See
- Hands-on experience securing agentic workflows, tool calling, function execution, and MCP servers (or similar tool/plugin servers).
- Experience with LLM platforms and deployments (e.g., GPT, Gemini, Claude, Llama) and associated security risks and mitigations.
- Familiarity with AI threat landscape and testing approaches: prompt injection (direct/indirect), tool injection, RAG poisoning, data leakage, jailbreaks, model extraction/inversion risks.
- Experience with provenance and integrity controls: artifact signing, attestations, SBOMs, SLSA-style build practices, model/dataset lineage, registry governance.
- Familiarity with secure model onboarding (third-party/open model risk), license/compliance considerations, and lifecycle governance.
- Exposure to cloud security tooling and environments (e.g., GCP/AWS/Azure), Kubernetes, service mesh, IAM, secrets management (Vault/KMS), OPA/policy-as-code, CI/CD (CircleCI/GitHub Actions), and observability (Splunk).
- Experience designing enterprise-wide security patterns and standards (reference architectures, paved roads).
- Strong cryptography fundamentals and real-world usage (TLS, HMAC, key management, encryption at rest/in transit).
Intuit provides a competitive compensation package with a strong pay-for-performance rewards approach. This position may be eligible for a cash bonus, equity rewards, and benefits, in accordance with our applicable plans and programs (see more about our compensation and benefits at Intuit®: Careers | Benefits). Pay offered is based on factors such as job-related knowledge, skills, experience, and work location. To drive ongoing fair pay for employees, Intuit conducts regular comparisons across categories of ethnicity and gender.