Gravity IT Resources
Title: Sr. DevOps Engineer
We are seeking a Senior DevOps Engineer to develop cloud-based data infrastructure that serves rich analytics to customers through SaaS products and other data-driven solutions. You will handle builds, provisioning, configuration, monitoring, alerting, deployment, and management of infrastructure across a multitude of distributed cloud-based services that support our products. You will build automation and leverage IaC tools to make your work repeatable, consistent, and less toilsome. The ideal candidate is a driven, collaborative problem solver with outstanding cloud data engineering skills, strong communication and analytical skills, the ability to work as part of a highly collaborative Agile team, and the curiosity to tackle challenging problems.
What you'll do:
- Write Terraform code to provision infrastructure repeatably
- Work with Security teams to provision relevant service accounts and permissions
- Build and extend CI/CD pipelines for ease of deployment and for robust build/test/lint/scan checks
- Set up and tune Kubernetes workloads. Modify configurations to meet app needs, ensure uptime, and keep costs reasonable.
- Work with Data Scientists, Application Developers, and DevOps Engineers to architect build pipelines and enable the development of machine learning models, data driven applications, and other analytical solutions for both internal consumption and external SaaS customers
- Design table structure for proper level of normalization and performance to match use cases
- Develop cloud-based systems for ingesting customer data across multiple sources in real time and via streaming processes
- Create highly performant SQL queries/stored procedures and Python code against a variety of data systems
- Perform RDBMS management tasks such as query optimization, DDL, view creation, permission management, index recommendation, etc.
- Develop and implement databases, data collection systems, data analytics and other strategies that optimize statistical efficiency and quality
- Support bug fixing and performance analysis along the data pipeline
- Be an active participant in Agile development methodology and ceremonies.
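The query-tuning and index-recommendation work described above can be sketched minimally. This is a hypothetical illustration using SQLite (the table, column names, and index are assumptions for demonstration); the same workflow applies to Postgres, BigQuery, or other RDBMS targets via their own query planners.

```python
# Hypothetical sketch: confirming that an index changes a full table scan
# into an index search. Uses SQLite for a self-contained demo.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE events (id INTEGER PRIMARY KEY, customer_id INTEGER, ts TEXT, payload TEXT)"
)
conn.executemany(
    "INSERT INTO events (customer_id, ts, payload) VALUES (?, ?, ?)",
    [(i % 100, f"2024-01-{i % 28 + 1:02d}", "x") for i in range(1000)],
)

# Without an index, filtering by customer_id scans the whole table.
plan_before = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM events WHERE customer_id = 42"
).fetchall()
print(plan_before)

# Adding an index on the filter column turns the scan into an index search.
conn.execute("CREATE INDEX idx_events_customer ON events (customer_id)")
plan_after = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM events WHERE customer_id = 42"
).fetchall()
print(plan_after)
```

The same EXPLAIN-driven loop (measure, index, re-measure) underpins index recommendation on production databases.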
What you'll need:
- BS/MS degree in Computer Science, Engineering, Mathematics, Statistics, or equivalent hands-on experience
- Demonstrated willingness to learn quickly, adapt, and apply new knowledge
- At least three years of demonstrated experience with cloud-based systems (GCP preferred)
- Expertise in at least one scripting language (preferably Python)
- Experience with monitoring/alerting systems (Grafana, Prometheus, Graphite, etc.)
- Ability to travel quarterly
- Excellent communication & presentation skills
- Software delivery experience in a cloud/DevOps environment, including GitFlow and containerization
What will set you apart:
- GCP professional certification
- CKA or CKS professional certification
- Demonstrated leadership and experience setting up and managing Kubernetes clusters at scale.
- Robust Kubernetes tooling experience, such as Kubecost, Jaeger, service meshes, Helm, etc.
- Demonstrated experience interacting with data systems such as APIs, BigQuery, Bigtable, and Redis.
- Expert-level knowledge of building and maintaining microservice-based, event-driven systems.
- Experience deploying and maintaining managed services via Infrastructure-as-Code (IaC).
- SQL expertise or DBA experience using RDBMS like Postgres, MySQL, SQL Server and/or Oracle.
- Database experience including indexing, partitioning, normalization and database design (data lake, data warehouse and ODS).
- Experience with DevOps systems such as Terraform, GitHub Actions, ArgoCD, etc.
- Familiarity working with APIs for data scraping and importing data via JSON and YAML
- Experience working as a technical customer liaison
- Understanding of the principles of Master Data Management and Data Quality Services
- Experience interacting with and retrieving data from POS (point-of-sale) systems
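The API-import familiarity mentioned above can be illustrated with a minimal sketch. The payload shape, field names, and the "unknown" default below are illustrative assumptions, not a real customer API.

```python
# Hypothetical sketch: flattening a JSON API payload into rows suitable
# for loading into a warehouse table.
import json

raw = '''
{
  "customers": [
    {"id": 1, "name": "Acme", "pos_system": "square"},
    {"id": 2, "name": "Globex", "pos_system": null}
  ]
}
'''

payload = json.loads(raw)

# Flatten nested records into tuples, defaulting a missing POS system
# to "unknown" so downstream joins never see NULLs.
rows = [
    (c["id"], c["name"], c["pos_system"] or "unknown")
    for c in payload["customers"]
]
print(rows)
```

YAML payloads follow the same pattern with a YAML parser in place of `json.loads`.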