Sr. Data Solutions Engineer

Gravity IT Resources
Title: Sr. Data Solutions Engineer
Job Type: Contract-To-Hire
Location: Miramar, FL (Hybrid)
Job Description:
The Senior Data Solutions Engineer is responsible for building, managing, and optimizing complex, reusable enterprise data pipelines efficiently throughout the development lifecycle for internal consumers such as business/data analysts and data scientists. This role operates in a fast-paced environment, working closely with the Data Science and Business teams on Revenue Management initiatives focused on building data solutions and curated data that enable intelligent pricing automation, which relies on machine learning and data-derived business rules. The Senior Engineer uses both technical and analytical skills to understand and solve business problems with the available resources and the current technology stack, while ensuring data governance and data security compliance.
- Create and maintain technical design documentation
- Gather and document business requirements, data mappings, and designs
- Create, build, and maintain complex data pipelines from disparate sources that meet functional / non-functional business requirements
- Create new ETL/ELT processes and maintain and reuse existing ones, employing a variety of data integration and data preparation tools
- Mentor engineers in finding optimal and efficient solutions for designing, preparing, and storing data for analytical and operational use cases.
- Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing pipelines for greater scalability, etc.
- Work with stakeholders including Product, Data and Business teams to assist with data-related technical issues and support their data needs
- Create datasets for: (1) operational reports, key performance indicators/metrics, or other insights into current organizational activities, (2) analytics and data science to provide the ability to uncover the answers to major questions that help organizations make objective decisions and/or gain a competitive edge
- Write, debug and implement complex queries involving multiple tables or databases across platform(s)
- Collaborate with the Enterprise Architecture team to ensure alignment on data standards and processes
- Work with data and analytics experts to strive for greater functionality in data systems
- Manage and develop data processing and assign customers to marketing segments in the CDP using demographic data, behavioral data, and intent signals; work with data scientists to incorporate machine-learning outcomes into the overall customer segmentation model
Required Skills:
- Significant experience in using best practices in designing, building and managing data pipelines that require data transformations as well as metadata and workload management
- Significant experience in working with large, heterogeneous datasets in building and optimizing data pipelines, pipeline architectures and integrated datasets using traditional and new data integration technologies (such as ETL, ELT, data replication, change data captures, message-oriented data movement, API design, stream data integration and data virtualization)
- Significant experience with streaming technologies (Kafka, Pub/Sub, Kinesis) and log-based architectures, and experience writing batch and stream processing jobs (e.g., Apache Beam, Google Cloud Dataflow, Apache Spark, Apache Storm)
- Significant experience in performing root cause analysis on internal and external data and processes to identify issues and opportunities for improvement
- Expert level knowledge with programming languages including SQL, PL/SQL, T-SQL
- Expert level knowledge with relational SQL databases such as Oracle and SQL Server
- Significant experience with programming/scripting languages and streaming technologies such as Python, Java, Scala, and Kafka
- Experience with NoSQL databases is a plus
- Experience supporting and working with cross-functional teams in a dynamic environment
Minimum Qualifications:
- Bachelor’s Degree in Computer Science, IT, or an equivalent field
- 5+ years of experience in a data/cloud engineering role
- 5+ years of experience working and creating datasets for a data warehouse
- 5+ years of experience with ETL development tools, Informatica or Azure Data Factory (ADF) preferred
- 3+ years of cloud experience, Azure preferred