Description:
Join our Data Crawling Team and focus on acquiring data at scale. You’ll design and maintain crawlers, parsing logic, and backend services that power large-scale web data collection. The role is Python-focused, hands-on with crawling technologies, and ideal for developers who enjoy solving challenges in extraction, automation, and backend system design.
Responsibilities:
- Develop and maintain scalable data crawlers and backend services.
- Build stable, reusable, and efficient data pipelines.
- Optimize and refactor existing systems for performance and reliability.
- Research and prototype new crawling methods, parsing techniques, and pipeline designs.
Core Requirements:
- Strong Python skills (OOP, scripting, libraries).
- Solid programming fundamentals and debugging ability.
- Familiarity with databases (SQL/NoSQL: PostgreSQL, MySQL, Redis).
- Knowledge of scraping frameworks/tools (Scrapy, Selenium, Playwright).
- Understanding of data parsing libraries (BeautifulSoup, lxml, PyQuery, etc.).
- Experience with version control (Git), Linux, and Docker.
Preferred (Plus):
- Familiarity with message brokers (Kafka, RabbitMQ).
- Knowledge of concurrency, parallelism, and scalable system design.
- Understanding of SQL, ETL pipelines, and big data concepts.
- Familiarity with DevOps practices (CI/CD).
Soft Skills:
- Problem-solving and critical thinking
- Curiosity and fast learning
- Teamwork and communication
- Adaptability and time management
Benefits:
- Close-knit team
- Game time
- Monthly gatherings
- Occasional packages and gifts
- Flexible working hours
- Insurance
- Lunch / breakfast
- Release bonus