Riga, Latvia (Remote)
We're looking for an
AI Engineering Specialist (Agentic Delivery)
to join the Insurance Solutions team
About the Role
We are hiring two AI Engineering Specialists to join our PINS team as we transition toward AI-native engineering practices. This is a practitioner role for engineers who have embraced autonomous AI coding agents as their primary development method.
This is not an AI/ML research, prompt engineering, or AI solution architect role. We are looking for software engineers who use agentic AI tools – particularly Claude Code – as their core engineering instrument to ship production software.
The emergence of frontier models capable of sustained autonomous work has fundamentally changed software development. Engineers can now delegate multi-hour coding tasks to AI agents, orchestrate multiple agents working in parallel, and maintain human oversight while dramatically accelerating delivery. We need practitioners who have already made this transition and can help our teams do the same.
Context
PINS develops and operates an insurance core platform and related products, including claims automation, document processing, and customer support solutions.
Our engineering transformation focuses on:
Adopting autonomous AI coding agents for daily engineering work
Accelerating delivery while maintaining reliability and quality standards
Building team capabilities in AI-native development practices
Establishing engineering workflows optimized for human-AI collaboration
What You Will Do
Delivery Acceleration
Ship production features using AI-augmented workflows
Refactor, migrate, and modernize existing codebases with agent assistance
Automate routine engineering tasks through agent workflows
Produce outcomes that are safe, maintainable, and operable in production
Capability Building
Transfer AI-native development practices to teammates through demonstration and mentoring
Build shared prompt libraries, workflow documentation, and reusable tooling
Contribute to team standards for AI-assisted code review and quality assurance
Help colleagues transition from traditional to AI-augmented development methods
What Success Looks Like
You treat AI as a delivery engine, not an experiment. You:
Convert vague requests into precise requirements and agent-executable tasks
Orchestrate agents to produce changes across multiple files and services
Ensure changes are verified, not just generated:
Tests updated/added, executed, and passing
Edge cases and failure modes considered
Readiness for code review and merging demonstrated
Measure your success by delivery outcomes and team velocity, not personal commit count
Mandatory Requirements
Demonstrated daily use of Claude Code or equivalent autonomous coding agents in production work
Understanding of agentic workflow patterns: task decomposition, sequential and parallel execution, human-in-the-loop checkpoints
Experience with project configuration (CLAUDE.md files, rules files, context management)
Ability to effectively delegate multi-step tasks to AI agents and verify results
Practical experience with prompt engineering: clear specification, iterative refinement, context optimization
Strong programming background with production system experience
Polyglot proficiency: ability to read, review, and debug code in multiple languages (Java, Python, TypeScript) even if you don't write them manually every day – the AI writes, you verify
Understanding of software architecture, testing practices, and code quality standards
Experience working with legacy systems and real-world production constraints
Familiarity with version control workflows, code review practices, and CI/CD pipelines
Delegate, Review, Own mentality: you hand off implementation to agents, spend the majority of time reviewing (not writing) code, and take full responsibility for outcomes regardless of who wrote the syntax
Tolerance for ambiguity: comfortable working with probabilistic tools, handling AI mistakes by improving context and specifications rather than abandoning the approach
Strong focus on maintainability, quality, and measurable business outcomes
Systematic verification habits for AI-generated output
Never merge code you don't understand
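For context on the project-configuration requirement above: a CLAUDE.md file is a markdown file in the repository that gives a coding agent standing instructions about the project. The stack, commands, and paths below are hypothetical, purely illustrative of the kind of configuration experience we mean:

```markdown
# CLAUDE.md — project instructions for the coding agent (hypothetical example)

## Stack
- Java 21 backend services (Gradle), TypeScript front end, Python tooling scripts

## Conventions
- Every change must include updated or new unit tests; run `./gradlew test` before declaring a task done
- Never commit directly to main; open a pull request and leave final review to a human

## Boundaries
- Do not modify files under `infrastructure/` without explicit approval
- Ask before adding new third-party dependencies
```

Candidates who have maintained files like this (or equivalent rules files and context-management setups) will recognize how much agent output quality depends on them.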
Strong Advantages
Experience orchestrating multiple AI agents working in parallel
Multi-step workflow automation with proper error handling and recovery
Automated validation loops: agent-assisted testing, regression protection, release checklists
Cost and token budget management for sustained agent operations
Experience with sub-agent delegation and specialization patterns
Experience transitioning teams from traditional to AI-augmented development
Track record of building shared tooling, documentation, or processes for AI adoption
Ability to address resistance and build adoption through demonstrated results
Cloud platform experience (Azure, AWS, or GCP)
Container orchestration and deployment automation
Infrastructure-as-code practices
Exposure to regulated industries (insurance, finance, healthcare)
Experience with enterprise software development constraints
Understanding of compliance and audit requirements
Optional Competency: AI Product Development
This competency is not required but represents an opportunity for expanded scope within the team.
The Litmus Test
Traditional Engineer:
"I used Copilot to autocomplete a function."
AI-Native Engineer:
"I wrote a spec for the claims processing module, fed it to Claude Code with our architecture context, reviewed the 12-file PR it generated, had it write and run the integration tests, fixed two edge cases it missed, and merged it – all in one afternoon."
We are hiring the latter.
What We Offer
We are excited to expand our team. Apply and let's talk! 🤩
For more information visit our home page, Facebook, Instagram, and LinkedIn profiles, and see the team in action on YouTube.