
    Platform / DevOps Engineer

    Engineering · Remote / Hybrid · B2B / Employment Contract · 15 000 – 25 000 PLN (take-home)

    About Lekta AI

    Founded in 2016 in Kraków, Poland, Lekta AI is a technology company working with major European banking, insurance, and telco enterprises. The company was founded with an ambitious goal: to provide the best customer service in the world. Lekta stands out from the crowd of conversational AI companies because we combine technological expertise with the domain-specific knowledge of contact-center industry veterans. Through our proprietary conversational neurosymbolic engine and backend integrations, our systems are intent- and context-aware, handling over 1.5 million conversations every month. Today, with offices in Poland and Spain, Lekta is rapidly expanding its business division to scale our operations internationally.

    About the role

    Lekta is hiring a Platform / DevOps Engineer to own the infrastructure that carries our platform end-to-end — runtime, IDE, and the layers beneath. You'll work across both public cloud and private cloud deployments, because the customers we serve span both. Infrastructure decisions in this domain land directly in product quality: sub-second latency, multi-tenant isolation, deterministic compliance, and inference cost at scale all live in your remit. The systems you design now will shape how Lekta scales for years to come.

    We are building the next generation of conversational AI — a platform where the creator's experience is itself AI-native. Think of what Lovable did for app-building, applied to conversational agents: a visual IDE, a copilot that helps the user shape, refine and evolve the agent, and a runtime engineered to actually deploy what gets built into enterprise production. What makes the engineering genuinely hard is the combination our agents have to deliver simultaneously: fluent (natural language, real conversation), fast (sub-second on live voice channels), and reliable (predictable behaviour under real customer pressure, every time). Most stacks force a choice between two of those at the cost of the third. We don't, and that constraint shapes almost every engineering problem we work on.

    AI is a power tool, and we treat it like one. Lekta engineers are encouraged — and equipped — to use AI deeply across their work: Claude Code, model APIs, copilot tooling. We invest in the accounts, the tooling, and the time it takes to get fluent with them. We don't measure people on lines committed; we measure on what gets delivered and how well it holds up in production. We are not an agentic-workflow company. AI generates; the engineer directs and reviews. Every change going out under your name has been read, understood, and signed off by you.

    Responsibilities

    • Design, run and evolve Lekta's multi-tenant production infrastructure on Kubernetes across public and private cloud environments
    • Own the infrastructure for real-time conversational channels, where latency is part of the product spec
    • Build the routing, fallback, and reliability layer for inference workloads across multiple model providers
    • Run cost engineering for the platform — per-tenant attribution, budgets, and optimisation of inference and infrastructure spend
    • Establish observability that lets on-call engineers reason about agent behaviour, not just request timing
    • Own the CI/CD pipeline that protects production deploys, with quality and regression gates
    • Establish the compliance baseline for regulated enterprise customers — secrets, PII handling, audit trails

    Qualities

    • Significant production experience running Kubernetes at meaningful scale
    • Multi-tenant SaaS architecture background, with a clear view on isolation and pooling trade-offs
    • Hands-on experience operating across both public and private cloud environments
    • Strong Postgres skills — replication, indexing, partitioning, tuning
    • IaC fluency and configuration-level networking depth (ingress, mTLS, service mesh, DNS) — debugged, not just read about
    • Articulated view on when to use managed services and when to self-host, grounded in both
    • Track record of shipping production work with AI in your daily loop, and reviewing what it produces
    • Experience with real-time audio infrastructure (SIP / WebRTC / RTP), inference cost optimisation, or banking / fintech compliance is a plus
    • Fluent in English (B2) and Polish (C1); other languages are a plus

    Why join us

    • Visible impact — your engineering decisions land in real customer experience, not in roadmap slides
    • Real ownership — few layers between you and production, and close work with engineering leadership and founders
    • AI-first workflow with company-paid tooling — Claude Code, model access, IDEs of your choice
    • Flexible contract types (B2B / Employment Contract), work arrangements, and working hours
    • Remote, hybrid or in-office — from a location of your choice
    • Direct, low-bureaucracy, results-oriented culture; no micromanagement
    • Competitive salary, with an honest conversation about trajectory as Lekta scales

    We accept CVs in both Polish and English. The recruitment process can be conducted in the candidate's language of choice; however, fluency in English is required. Only selected candidates will be contacted. If you believe you could contribute to our business but don't see a role for yourself, please submit your CV through the dedicated open application section on our website.

    Interested in this role?