Contract: Senior Database Automation Engineer (LATAM)
Seeking a Senior Database Automation Engineer (LATAM) with 7+ years of relational database experience and strong Python skills to build automation for Upwork's database systems.
Join Upwork's Data Infrastructure team within the Data Platform Services (DPS) organization. The team is responsible for designing, operating, and automating all database systems (Postgres, MySQL, DynamoDB, MongoDB) across Upwork's global infrastructure. This role combines database engineering expertise with software development rigor to build mission-critical automation for Upwork's core data assets.
You’ll orchestrate complex systems spanning Terraform, RDS, Presto, and Rancher to solve challenges like:
- Zero-downtime migrations and cross-region replication
- End-to-end database provisioning (infrastructure deployment, user/access configuration, service integration); a brief sketch follows this list
- Vulnerability management and security hardening at scale
- Incident response for high-severity database alerts (24/7 on-call rotation)
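To give a flavor of the provisioning work, here is a minimal, hypothetical sketch; the instance identifier, role name, and credential handling are illustrative assumptions, not Upwork's actual tooling:

```python
# Minimal, hypothetical sketch: wait for a newly provisioned RDS Postgres
# instance to become available, then bootstrap an application role.
# Identifiers and credentials are placeholders; real automation would pull
# them from Terraform outputs and a secrets manager.
import boto3
import psycopg2


def bootstrap_instance(instance_id: str, admin_user: str, admin_password: str) -> None:
    rds = boto3.client("rds")

    # Block until RDS reports the instance as "available".
    rds.get_waiter("db_instance_available").wait(DBInstanceIdentifier=instance_id)

    # Look up the endpoint the provisioning run assigned.
    instance = rds.describe_db_instances(DBInstanceIdentifier=instance_id)["DBInstances"][0]
    endpoint = instance["Endpoint"]["Address"]

    # Post-provisioning configuration: create an application role.
    conn = psycopg2.connect(host=endpoint, user=admin_user,
                            password=admin_password, dbname="postgres")
    conn.autocommit = True
    try:
        with conn.cursor() as cur:
            # Placeholder password; a real framework would source this securely.
            cur.execute("CREATE ROLE app_service WITH LOGIN PASSWORD 'change-me'")
    finally:
        conn.close()


if __name__ == "__main__":
    bootstrap_instance("example-db-instance", "postgres", "example-password")
```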
Key Responsibilities:
- Design and implement Python-based automation frameworks (not one-off scripts) for database lifecycle management; see the sketch after this list
- Collaborate with infrastructure teams to integrate systems via APIs (AWS, Kubernetes, HashiCorp)
- Optimize Postgres performance, replication, and backup strategies (Postgres covers roughly 99% of our relational database use cases)
- Participate in on-call shifts aligned with LATAM time zones, including weekend coverage
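On the "frameworks, not scripts" point: think reusable, testable building blocks with shared logging and retry behavior rather than ad-hoc one-offs. A minimal sketch under that assumption (class and task names are purely illustrative, not Upwork's actual codebase):

```python
# Minimal, illustrative sketch of a framework-style task abstraction:
# shared logging and retry behavior live in a base class, and each
# lifecycle operation plugs in as a subclass.
import abc
import logging
import time

logger = logging.getLogger("db_automation")


class AutomationTask(abc.ABC):
    """Base class giving every task consistent logging and retries."""

    max_attempts = 3
    retry_delay_seconds = 5.0

    @abc.abstractmethod
    def execute(self) -> None:
        """Do the actual work; subclasses implement this."""

    def run(self) -> None:
        for attempt in range(1, self.max_attempts + 1):
            try:
                logger.info("%s: attempt %d", type(self).__name__, attempt)
                self.execute()
                return
            except Exception:
                logger.exception("%s failed on attempt %d", type(self).__name__, attempt)
                if attempt == self.max_attempts:
                    raise
                time.sleep(self.retry_delay_seconds)


class RotateReplicaTask(AutomationTask):
    """Hypothetical example task; real logic would call RDS/Postgres APIs."""

    def execute(self) -> None:
        logger.info("rotating read replica (placeholder)")


if __name__ == "__main__":
    logging.basicConfig(level=logging.INFO)
    RotateReplicaTask().run()
```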
Must-haves (required skills):
- Hybrid expertise: Deep experience in both database engineering/administration and software development. Candidates who have transitioned from database engineering to software development (or vice versa) are especially encouraged to apply.
- Programming skills: Strong background in Python (required); ability to develop robust automation beyond basic scripting. Experience with Ruby or Perl is acceptable if you can quickly adapt to Python.
- Database expertise: 7+ years of professional experience with relational databases, with a strong preference for Postgres. Experience with MySQL or Oracle is also valued. NoSQL experience (e.g., DynamoDB) is a plus but not required.
- SQL proficiency: Solid understanding of SQL; experience with procedural languages (PL/pgSQL for Postgres or PL/SQL for Oracle) is beneficial but not essential, as most automation is done in Python.
- Cloud & DevOps familiarity: Experience with Terraform and related infrastructure-as-code tools is a plus, but not a core requirement. Familiarity with cloud environments (AWS, GCP, Azure) is helpful.
- Automation mindset: Passion for automating repetitive tasks and improving operational efficiency.
- Ownership & accountability: Proactive, resourceful, and able to take full responsibility for solving problems and delivering outcomes.
- Collaboration: Strong communication skills; able to work effectively in a distributed, multicultural team.
Additional Details:
- Location: Candidates must be based in a LATAM time zone.
- On-call rotation: The role requires participation in a 24/7 on-call schedule, including weekends. Actual incident frequency is low, but availability during assigned shifts is essential. Flexibility is provided to balance workload after incidents.
Why Join Upwork’s Data Infrastructure Team?
- Work on challenging, high-impact automation projects at the heart of Upwork’s business.
- Collaborate with experienced engineers in a supportive, global team environment.
- Gain exposure to a wide array of technologies and complex systems orchestration.
- Opportunity to shape and improve the core data infrastructure of the world’s leading work marketplace.
If you are passionate about databases, enjoy building automation for complex systems, and thrive in a collaborative, distributed environment, we encourage you to apply.