PostHog

Backend Engineer — Ingestion

PostHog · 1 week ago
Location

Remote (EMEA)

Workplace

Remote

Type

Full Time

Salary

GBP 120,000 – 180,000

Level

Mid

Role

Backend Engineer

Posted

Apr 20, 2026


The role

Summary

Backend Engineer focused on high-throughput data ingestion pipelines at PostHog, a well-funded product analytics platform processing billions of events monthly. This role offers end-to-end ownership of critical infrastructure that powers analytics, feature flags, CDP, and revenue analytics products, requiring expertise in distributed systems, event streaming, and multi-tenant SaaS architecture.

What you'll do

Data Ingestion Pipeline Architecture: Design, implement, and evolve the complete event ingestion pipeline that captures and processes 10+ billion events monthly from PostHog customers, ensuring scalability to handle 100+ billion events as the platform grows
End-to-End System Ownership: Own the entire ingestion service lifecycle from initial design through deployment and production operation, making independent architectural decisions and trade-offs without committee oversight
Reliability and Data Integrity: Ensure 100% reliable event delivery and data integrity for all customer events flowing through the platform, implementing proper error handling, retry logic, deduplication, and monitoring
Performance Optimization: Continuously optimize pipeline performance using profiling, benchmarking, and systems-level analysis to reduce latency, decrease resource consumption, and maximize throughput
Multi-Tenant Infrastructure: Build and maintain infrastructure supporting dozens of PostHog products (analytics, feature flags, CDP, revenue analytics) consuming ingested event data, ensuring tenant isolation and fair resource allocation
Open-Source Development: Contribute to PostHog's open-source ingestion components, writing production-grade Rust and Node.js code that the community can audit, extend, and integrate into their own systems
Incident Response and On-Call: Participate in on-call rotation, respond to production incidents affecting event ingestion, conduct thorough post-mortems, and implement preventive measures to improve system resilience
Technology Evaluation: Evaluate and benchmark emerging technologies against current stack (Kafka, PostgreSQL, ClickHouse, Redis, S3), making data-driven decisions about tool adoption based on performance, reliability, and operational burden
Collaboration with Product Teams: Work closely with feature flag, analytics, CDP, and revenue analytics teams to understand their data requirements, optimize query patterns, and ensure ingestion pipeline meets their performance and reliability needs
Mentorship and Knowledge Sharing: Document architectural decisions, contribute to internal knowledge base, and help mentor junior engineers on distributed systems principles and PostHog's ingestion infrastructure

What we look for

Technical

High-Throughput Distributed Systems: Proven experience designing and building event-driven distributed systems capable of reliably processing and delivering billions of events with minimal latency
Kafka and Stream Processing: Production-grade experience with Apache Kafka, including architecture decisions, topic design, partition strategies, and handling backpressure in high-volume scenarios
Database Systems at Scale: Hands-on work with PostgreSQL, Redis, ClickHouse, or similar technologies managing large data volumes, understanding replication, failover, and query optimization
Multi-Tenant Data Isolation: Demonstrated experience building multi-tenant SaaS platforms with proper tenant isolation, row-level security, and scalable data partitioning strategies
Systems Programming: Strong ability to write efficient, low-latency code in Rust, Go, Node.js, or similar languages; understanding of memory management and performance characteristics

Education

Computer Science Foundation: Bachelor's degree in Computer Science, Computer Engineering, or a related field; or equivalent professional software engineering experience demonstrating core CS principles

Experience

Distributed Systems Experience: Minimum of 4–5 years of professional backend engineering experience, with at least 2 of those years spent specifically on distributed, high-throughput data systems
Production SaaS Operations: Experience owning production systems in multi-tenant SaaS environments, including responsibility for uptime, performance, scaling decisions, and incident response
Data Platform Experience: Background building or maintaining data platforms, analytics infrastructure, event ingestion systems, or similar components that process significant data volumes
Rapid Iteration Experience: Demonstrated ability to ship production code frequently and safely, with comfort deploying multiple times daily and managing risk through careful testing and monitoring

Skills

Required skills

Distributed Systems Architecture: Design and implement highly scalable, event-driven distributed systems capable of processing billions of events monthly, with emphasis on fault tolerance and data consistency
Backend Programming Languages: Proficiency in Node.js, Go, Rust, or similar systems programming languages for high-throughput data processing and infrastructure development
Message Streaming Platforms: Hands-on experience with Apache Kafka or similar event streaming systems, including topics, partitioning, consumer groups, and high-volume event ingestion patterns
Database Systems at Scale: Production experience with PostgreSQL, Redis, ClickHouse, or similar databases, understanding query optimization, replication, and handling massive data volumes
Multi-Tenant SaaS Architecture: Experience building and operating multi-tenant software-as-a-service platforms with isolated data models, tenant-aware query patterns, and scalable infrastructure
Rapid Deployment and CI/CD: Ability to ship production changes frequently and safely using modern deployment practices, automated testing, and continuous integration pipelines without service degradation

Nice to have

Customer Data Platform Experience: Background working with CDPs, analytics platforms, metric collection systems, or log aggregation engines that process and route high-volume customer data
On-Call and Incident Management: Experience carrying a pager, responding to production incidents, conducting post-mortems, and implementing monitoring and alerting strategies for mission-critical systems
Cloud Infrastructure and DevOps: Comfortable provisioning, maintaining, and optimizing cloud infrastructure on AWS, GCP, or similar platforms, including containerization, orchestration, and infrastructure-as-code
Performance Engineering: Proficiency with benchmarking tools, profiling, flame graphs, and optimization techniques for identifying and resolving performance bottlenecks in data pipelines
Observability and Monitoring: Knowledge of observability practices including structured logging, metrics collection, distributed tracing, and working with monitoring platforms to ensure system visibility
Open-Source Contribution: Experience contributing to or maintaining open-source projects, and familiarity with GitHub workflows, code review practices, and community-driven development

Compensation & benefits

Salary

GBP 120,000 – 180,000 (annual)

Stock options

Available

Benefits

Remote Work Flexibility

Fully remote position with European/UK timezone preference, enabling flexible work-life balance while collaborating with a distributed team spanning North America and Europe

Autonomous Decision Making

Complete autonomy in choosing projects and technical directions based on impact to customers and personal interests; engineers lead product teams and make architectural decisions

Transparent Company Operations

Access to public company handbook covering strategy, compensation philosophy, board meeting notes, fundraising plans, and product roadmap enabling informed decision-making at all levels

Meeting-Free Building Time

Tuesdays and Thursdays designated as meeting-free days with default async communication (PRs, Issues, Slack) to maximize uninterrupted coding and deep work sessions

Fast-Paced Product Development

Opportunity to ship frequently with immediate visibility into impact; work with small, autonomous teams of highly capable engineers who can outship much larger organizations

Building Mission-Critical Infrastructure

Your code directly impacts customer experience by operating in the hot path of event ingestion; see tangible results from architectural decisions through immediate metrics and user feedback

Open-Source Impact

Build open-source software components that you can showcase publicly; contribute to tools used by the broader data engineering community beyond PostHog customers

Strong Financial Position

Company has raised over 100 million dollars from top-tier investors, is default alive with 10% month-over-month revenue growth, and operates efficiently enabling long-term planning

Inclusive Work Environment

Commitment to diversity and inclusion with dedicated handbook resources; flexible accommodations for candidates with disabilities during interview process and ongoing employment

Innovation and Experimentation Culture

Freedom to pursue unconventional solutions and experimental approaches; the company values ambitious technical choices and views calculated risk-taking as a competitive advantage




PostHog

PostHog is a product analytics tool that helps teams track their product and understand their users.

San Francisco, California, United States · Founded 2019 · posthog.com

Tech Stack

Languages
Rust, Node.js/JavaScript, Go
Frameworks
Event Streaming Architectures, Microservices Architecture
Data Infrastructure
Apache Kafka, PostgreSQL, ClickHouse, Redis, Amazon S3
Tools
Docker, Kubernetes, Git and GitHub, Cloud Providers (AWS/GCP), Datadog or Similar APM
Other
SQL, gRPC/Protocol Buffers, Data Serialization Formats, Linux and System Administration