Miare

Senior Data Engineer

Location: Tehran / Tarasht
Employment Type: Full Time
Working Days: Saturday to Wednesday
Company Size: 51–200 employees
Industry: Internet Provider / E-commerce / Online Services
Scope: Iranian company dealing only with Iranian entities
Founded: 1396 (Iranian calendar, 2017–2018)
Ownership: Privately held

Key Requirements

5 years of experience in a similar position

Job Description

Job Objective

We are building a governed, self-service data platform that enables teams to create and document their own analytics while maintaining a single source of truth for company-wide metrics.

You will join the data team to build and operate the core data platform. This is an infrastructure-first role with minimal stakeholder-facing responsibilities. You will not be responsible for requirements interviews, end-user support, or cross-team coordination. Success is measured by technical outcomes: platform reliability, performance, correctness, automation, and adherence to engineering best practices.

Key Responsibilities

  • Design and operate production-grade data ingestion (batch and streaming/CDC).

  • Build scalable transformation layers with clear modeling, incremental processing, and reproducible backfills.

  • Implement testing and data quality gates and support their enforcement through tooling and automation.

  • Automate documentation and maintain clear, versioned platform standards (schemas, conventions, runbooks).

  • Tune and scale ClickHouse and PostgreSQL for OLAP performance, concurrency, and cost-efficiency.

  • Implement and operate platform observability (metrics, dashboards, alerting) and reduce incidents through preventive engineering.

  • Operate and maintain streaming infrastructure: Kafka, Kafka Connect, and Debezium (CDC), including stability and upgrades.

  • Build and maintain platform CI/CD with GitLab CI/CD and strong release hygiene.

  • Maintain engineering quality through code review, documentation, and adherence to defined architectural standards.

  • Responsibilities are shared across the data platform team; no single engineer is expected to own all components end-to-end.
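The incremental-processing and reproducible-backfill responsibilities above can be sketched in Python. This is an illustrative toy, not Miare's implementation: `PartitionedTable` stands in for a real partitioned warehouse table (e.g. ClickHouse's drop-partition-then-insert pattern), and all names are hypothetical.

```python
from datetime import date, timedelta

def partition_keys(start: date, end: date):
    """Yield one daily partition key per day in [start, end)."""
    day = start
    while day < end:
        yield day.isoformat()
        day += timedelta(days=1)

class PartitionedTable:
    """Toy stand-in for a partitioned warehouse table."""
    def __init__(self):
        self.partitions = {}

    def overwrite_partition(self, key, rows):
        # Replacing the whole partition makes the load idempotent:
        # re-running the same window converges to the same state.
        self.partitions[key] = list(rows)

def backfill(table, extract, start: date, end: date):
    """Reprocess a date range deterministically, one partition at a time."""
    for key in partition_keys(start, end):
        table.overwrite_partition(key, extract(key))

# Usage: a backfill is just a re-run over an explicit window,
# so running it twice leaves the table unchanged.
source = {"2024-01-01": [1, 2], "2024-01-02": [3]}
table = PartitionedTable()
backfill(table, lambda k: source.get(k, []), date(2024, 1, 1), date(2024, 1, 3))
backfill(table, lambda k: source.get(k, []), date(2024, 1, 1), date(2024, 1, 3))  # safe re-run
```

Because each partition is overwritten atomically from the same inputs, retries and backfills cannot double-count rows.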

Role Complexity

This role is intentionally structured to maximize time spent on building and operating the data platform:

  • Low meeting load / low context switching. The Data Platform Lead handles most cross-team coordination and intake.

  • You focus on engineering outcomes. Your time goes into implementing platform architecture, reliability, performance, quality controls, and automation.

  • Written interfaces over meetings. Standards, contracts, documentation, and tooling are the primary way teams consume the platform.

In this role, you will have:

  • Real technical influence: You will contribute to architectural decisions and lead implementation of agreed designs, under the direction of the Data Platform Lead.

  • Few distractions: minimal stakeholder management; strong focus time for deep engineering.

  • Hard problems, production scale: performance tuning, operational reliability, streaming/CDC, and automation at scale.

Requirements

  • 5+ years of experience as a Data Engineer / Data Platform Engineer.

  • Proven track record of delivering high-reliability, high-scale pipelines and platform components in production.

  • Practical experience tuning large-scale ClickHouse and PostgreSQL databases.

  • Experience handling large datasets using PySpark for batch processing.

  • Ability to operate and maintain a production Kafka setup including Kafka Connect and Debezium (CDC).
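The Kafka Connect / Debezium requirement above usually means registering and maintaining connector configurations. A hypothetical sketch of a Debezium PostgreSQL source config (Debezium 2.x property names; every hostname, credential, and table name is a placeholder):

```python
import json

# Hypothetical Debezium PostgreSQL connector config. The secret
# reference syntax depends on the configured config provider.
connector = {
    "name": "orders-cdc",
    "config": {
        "connector.class": "io.debezium.connector.postgresql.PostgresConnector",
        "plugin.name": "pgoutput",
        "database.hostname": "postgres.internal",   # placeholder
        "database.port": "5432",
        "database.user": "cdc_user",                 # placeholder
        "database.password": "<from-secret-store>",  # placeholder
        "database.dbname": "orders",
        "topic.prefix": "orders",
        "table.include.list": "public.orders,public.order_items",
        "slot.name": "orders_cdc",
    },
}

# This JSON body would be submitted to Kafka Connect's REST API
# (POST /connectors, or PUT /connectors/orders-cdc/config).
payload = json.dumps(connector, indent=2)
print(payload)
```

Operating this in production adds replication-slot monitoring and careful connector upgrades, which the role description calls out explicitly.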

Skills

  • Expert SQL and query optimization, with the ability to reason about execution plans and performance trade-offs.

  • Strong Python for production-grade data systems (not notebooks-only).

  • Hands-on experience with Airflow (or equivalent orchestration) in production.

  • Practical, deep experience with ClickHouse and PostgreSQL, including tuning and scaling at high volume.

  • Monitoring/alerting experience using Grafana (or equivalent) to ensure system health.

  • Strong understanding of data engineering architecture: idempotency, retries, lineage, partitioning, SLAs/SLOs, backfills.

  • Production experience with Kafka ecosystems: brokers, topics, consumer groups, lag, Connect, Debezium.

  • Familiarity with Kubernetes.

  • Strong code review habits and documentation standards.
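Among the Kafka skills listed above, consumer lag is the one most often wired into alerting. The computation itself is simple; this toy sketch uses plain dicts keyed by (topic, partition) rather than a real Kafka client:

```python
def consumer_lag(end_offsets, committed_offsets):
    """Per-partition lag = log-end offset minus committed offset,
    floored at zero (a fresh commit can momentarily exceed a stale
    end-offset snapshot)."""
    return {
        tp: max(end_offsets[tp] - committed_offsets.get(tp, 0), 0)
        for tp in end_offsets
    }

# Usage: offsets as they might be fetched from the broker and the
# consumer group's committed positions (values here are made up).
end = {("clicks", 0): 1_000, ("clicks", 1): 500}
committed = {("clicks", 0): 990, ("clicks", 1): 500}
lag = consumer_lag(end, committed)
total_lag = sum(lag.values())
```

In practice this per-partition lag would be exported as a metric and graphed/alerted in Grafana, matching the monitoring skills above.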

Preferred Skills

  • Experience with object storage tooling like MinIO (S3-compatible) and lake-style patterns.

  • Experience designing governed self-service systems: templates, guardrails, contracts, standardized layers.

Tech Stack (Current / Expected)

  • Orchestration: Airflow

  • Storage/Compute: ClickHouse, PostgreSQL

  • Streaming/CDC: Kafka, Kafka Connect, Debezium

  • Object Storage: MinIO

  • Platform: Kubernetes, GitLab CI/CD

  • Observability: Grafana

  • Primary programming language: Python
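One theme that cuts across the stack above and the responsibilities earlier is automated data-quality gates. A minimal, hypothetical sketch in Python (the stack's primary language) of a gate that blocks promotion of a bad batch; the check names and fields are illustrative only:

```python
class DataQualityError(Exception):
    """Raised when a batch fails its quality gate."""

def run_quality_gate(rows, *, min_rows, non_null):
    """Fail fast if the batch is too small or required fields contain
    nulls. In a real pipeline this would run as an orchestrated task
    that blocks data from reaching downstream layers."""
    failures = []
    if len(rows) < min_rows:
        failures.append(f"row count {len(rows)} < {min_rows}")
    for field in non_null:
        nulls = sum(1 for r in rows if r.get(field) is None)
        if nulls:
            failures.append(f"{nulls} null value(s) in '{field}'")
    if failures:
        raise DataQualityError("; ".join(failures))
    return True

# Usage with a made-up batch:
batch = [{"order_id": 1, "amount": 10.0}, {"order_id": 2, "amount": 5.5}]
gate_ok = run_quality_gate(batch, min_rows=1, non_null=["order_id", "amount"])
```

Raising instead of logging is the point of a gate: the failure stops the pipeline rather than letting bad data propagate.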

 

Benefits & Perks

  • Location: Miare Headquarters – Sharif Innovation Station (Tarasht, Tehran)

  • Full-time, on-site position

  • Competitive compensation aligned with senior-level IC impact and market benchmarks.

  • Flexible hours

 

Job Requirements

Age: 25–40 years old
Gender: Men / Women



This job posting is closed.