Data Engineer

Gravity IT Resources
Job Title: Data Engineer
Location: Columbus, Ohio (Remote)
Job-Type: Full Time
Employment Eligibility: Gravity cannot transfer or sponsor a work visa for this position. Applicants must be eligible to work in the U.S. for any employer directly (we are not open to contract or “corp to corp” arrangements).
Position Overview:
Gravity is looking for a Data Engineer who will design and build data flows that assemble and refine complex data sets into usable information supporting organizational initiatives. This individual will also provide some oversight of people and team activities to ensure the successful delivery of the team’s work streams.
They will work with data architects, developers, analysts, and other stakeholders to design and build pipelines and cloud-based technical processes that are consistent with and in support of the architecture direction and that satisfy functional and non-functional requirements. Their work will support and drive capabilities in Business Intelligence, Operational Reporting, Enterprise Data Warehouse, Data Lake, Big Data, and Enterprise Application Integrations.
Duties & Responsibilities:
- Create and maintain pipeline and cloud-based services
- Work with data and design teams to define solutions and support data requirements
- Support peers and activities that enhance the overall productivity and capabilities of the department and team
- Coordinate and plan projects
Required Experience & Skills:
- BS or graduate degree in Computer Science, Engineering, Mathematics, Statistics, or a related field
- 3+ years of experience in similar technical roles (ETL, Application Development, Data Science, Big Data, Reporting)
- Exposure to and experience with document markup formats (JSON, XML), document data stores, and REST API endpoints for data retrieval and updates
- Solid background in relational databases and SQL
- Experience building and optimizing data pipelines and data sets
- Experience with cloud technologies and their purposes; AWS preferred, including S3, EC2, EMR, DynamoDB, Aurora, Athena, Glue, and Lambda
- Ability to analyze data, find patterns, identify issues, and enhance and improve the integrity and quality of data and associated technical processes
- Ability to build processes supporting data transformation, data structures, metadata, dependency management, and workload management
- Working knowledge of message queuing, stream processing, and scalable data stores / processes
- Experience with data warehouse and associated modeling / design (data mart, dimensions, facts)
- Exposure to and familiarity with object-oriented/functional scripting languages: Python, Java, C++, Scala, etc. (Python preferred)
- Experience with enterprise integration and ETL platforms/iPaaS (SnapLogic, Informatica, SSIS)
- Experience supporting and working with cross-functional teams in a dynamic environment
- Strong organizational and interpersonal skills
- Protect and take care of our company’s and members’ data every day by committing to work within our company ethics and policies
- MarkLogic or Snowflake experience is a big plus