Snowflake

Senior Software Engineer - Observe Data Management

Location

US-CA-Menlo Park

Type

Full Time

Salary

USD 200,000 – 287,500

Level

Senior

Role

Senior Software Engineer

Posted

Apr 20, 2026


The role

Summary

Snowflake is seeking a Senior Software Engineer for its Observe Data Management team to design and scale high-throughput data ingestion pipelines that process over 1 petabyte of telemetry data daily. The role involves developing performance-critical distributed systems components and contributing to the OpenTelemetry open-source ecosystem, all while maintaining enterprise-grade reliability and low-latency data processing.

What you'll do

Data Pipeline Engineering: Design, build, and scale high-throughput data ingestion and processing pipelines handling petabyte-scale telemetry including logs, metrics, traces, and events
System Development: Develop performance-critical, distributed systems components in Go and/or C++ that operate reliably across AWS and Azure cloud environments
Open-Source Contribution: Contribute to OpenTelemetry and drive the company's open-source strategy, including external community engagement and upstream contributions
System Reliability: Architect solutions that maintain enterprise-grade availability and low latency under extreme data volumes
Cross-Team Collaboration: Work with SRE, product, and platform teams to define data reliability standards and improve detection-to-resolution times for customers
Technical Leadership: Help shape the technical roadmap for the Data Management team and mentor engineers across the organization

What we look for

Technical

Programming Languages: Proficiency in Go and/or C++, with the ability to write high-performance, production-grade systems code
Distributed Systems: Deep expertise in distributed systems architecture and development
Cloud Platforms: Hands-on experience building and running services across AWS and Azure

Education

Academic Qualification: B.S. in Computer Science, Engineering, or equivalent practical experience

Experience

Software Engineering: 5+ years of software engineering experience with a focus on distributed systems
Data Pipeline Experience: Demonstrated experience designing and operating large-scale data ingestion or stream processing pipelines

Skills

Required skills

Go Programming: Proficient system-level programming in Go
C++ Programming: Advanced development capabilities in C++
Distributed Systems: Strong understanding of distributed system design and implementation
Systems Programming: Expertise in concurrency, memory management, networking, and I/O

Nice to have

OpenTelemetry: Experience with OpenTelemetry SDKs, instrumentation, or ecosystem tooling
Open Source Contribution: Track record of open-source project contributions or maintainership
Data Lakehouse: Familiarity with Apache Iceberg or other open table formats and data lakehouse architectures

Compensation & benefits

Salary

USD 200,000 – 287,500 (annual)

Benefits

Innovative Work Environment

Opportunity to work with a cutting-edge, AI-powered observability platform at a leading data cloud company

Ownership Culture

High-impact role with genuine system ownership and direct influence on enterprise customer reliability

Professional Growth

Chance to work on petabyte-scale data infrastructure and contribute to open-source technologies
Interview process

  1. Initial Screening: Recruiter phone screen to assess candidate background and fit
  2. Technical Interview: In-depth technical discussion focusing on distributed systems, systems programming, and problem-solving skills
  3. Systems Design Challenge: Evaluation of the candidate's ability to design scalable, reliable data processing architectures
  4. Team Fit Interview: Discussion with team members to assess cultural alignment and collaborative potential
  5. Final Interview: Meeting with senior leadership to discuss long-term potential and strategic contributions

Apply for this position