
Data DevOps Engineer

Location: San Diego, California, United States
Job # 12147773
Date Posted: 04-01-2019
Data DevOps Engineer - San Diego, CA
Pay Rate: DOE
Direct Hire
Start Date: ASAP

Our client is a leading IT consulting firm headquartered in San Diego, CA. With well-known clients, they are building out their team of data experts to work directly with those clients. They are currently looking for a Data DevOps Engineer with key skill sets in Kafka/Confluent and StreamSets, who will be responsible for building infrastructure as code, automating development environments, and implementing monitoring and alerting to increase the stability and resilience of data systems, allowing engineers to focus on data engineering.

Ideal candidates are experienced data pipeline builders and data wranglers who enjoy optimizing and building data systems from the ground up, moving data from source systems to analytic systems, and modeling data for analytical reporting. They also understand how to manage customers, and how external and internal perceptions of their work product relate to customer satisfaction. Having the confidence and knowledge to recommend solutions, and the experience to know what will and won't work, are important traits for this consultant.
Tasks:
 
1.  Pipeline release automation
      Deliverables:
  • Design and build an automated release process for pipelines in StreamSets Data Collector using GitLab CI/CD
  • Integrate automated tests using GitLab CI/CD and/or QuerySurge triggers (see the first sketch after the task list)
2.  Infrastructure Management
      Deliverables:
  • Maintain infrastructure as code using Terraform for application, monitoring, and alerting software in both AWS and Kubernetes environments
  • Manage release deployments for Elasticsearch, StreamSets Data Collector, Prometheus, Alertmanager, and proprietary dev tools
3.  Monitoring, Alerting and Logging
      Deliverables:
  • Manage metrics collection and alerting using Prometheus, Alertmanager, and Grafana on all systems: StreamSets Data Collector, Kafka, Elasticsearch, and AWS serverless applications (see the second sketch after the task list)
  • Manage QA report delivery and storage using S3 as a static website
  • Implement and maintain logging infrastructure (architecture TBD)
4.  Documentation
      Deliverables:
  • Diagrams of all architecture and processes needed to troubleshoot, release, and recover downed systems
  • Documentation of SLAs and alerting thresholds
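
For the pipeline release automation task, a GitLab CI/CD job usually just runs a small script against the pipeline definitions committed to the repository. The sketch below is a minimal, illustrative example of such a check; the pipelines/*.json layout and the pipelineConfig/stages key names are assumptions about a typical StreamSets Data Collector export, not details of this client's environment.

```python
#!/usr/bin/env python3
"""Illustrative CI smoke test for exported StreamSets pipeline definitions.

Assumptions (not taken from this posting): pipeline exports are committed
under pipelines/*.json, and each export contains a "pipelineConfig" object
with a "stages" list, as is typical of StreamSets Data Collector exports.
Adjust the path and key names to the actual repository layout.
"""
import json
import pathlib
import sys

PIPELINE_DIR = pathlib.Path("pipelines")  # assumed location of pipeline exports


def check_pipeline(path: pathlib.Path) -> list[str]:
    """Return a list of problems found in one exported pipeline file."""
    try:
        doc = json.loads(path.read_text())
    except json.JSONDecodeError as exc:
        return [f"{path}: not valid JSON ({exc})"]
    stages = doc.get("pipelineConfig", {}).get("stages", [])
    if not stages:
        return [f"{path}: no stages found (unexpected for a pipeline export)"]
    return []


def main() -> int:
    files = sorted(PIPELINE_DIR.glob("*.json"))
    if not files:
        print(f"No pipeline exports found under {PIPELINE_DIR}/", file=sys.stderr)
        return 1
    problems = [p for f in files for p in check_pipeline(f)]
    for problem in problems:
        print(problem, file=sys.stderr)
    print(f"Checked {len(files)} pipeline file(s); {len(problems)} problem(s).")
    return 1 if problems else 0


if __name__ == "__main__":
    sys.exit(main())
```

In GitLab CI this would run as a plain script step in a test stage, failing the pipeline before any release job runs if a pipeline export is malformed.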
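
For the monitoring and alerting task, application-level metrics are typically exposed over HTTP so Prometheus can scrape them, Grafana can chart them, and Alertmanager can page when a rule fires. The sketch below uses the official prometheus_client Python library; the metric name, the port, and the get_records_behind() helper are hypothetical placeholders rather than details of the client's systems.

```python
"""Illustrative sketch: exposing a custom pipeline-lag metric to Prometheus.

The metric name, port, and get_records_behind() are hypothetical stand-ins
for however the real system measures how far a pipeline has fallen behind
its source.
"""
import random
import time

from prometheus_client import Gauge, start_http_server

PIPELINE_LAG = Gauge(
    "pipeline_records_behind",  # assumed metric name
    "Records the pipeline still has to process from its source",
    ["pipeline"],
)


def get_records_behind(pipeline: str) -> float:
    """Hypothetical placeholder; replace with a real lag measurement."""
    return random.uniform(0, 5000)


def main() -> None:
    start_http_server(9200)  # assumed scrape port; register it in Prometheus config
    while True:
        lag = get_records_behind("orders_ingest")
        PIPELINE_LAG.labels(pipeline="orders_ingest").set(lag)
        time.sleep(15)  # roughly align with a typical scrape interval


if __name__ == "__main__":
    main()
```

A Prometheus scrape job pointed at that port, plus an alerting rule on pipeline_records_behind, completes the loop from metric to Alertmanager notification.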

Requirements:

Universal Skills
Must possess the following set of fundamental skills:
  • Uses technology to contribute to development of customer objectives and to achieve goals in creative and effective ways.
  • Communicates clearly and effectively in careful consideration of the audience, and in terms and tone appropriate to them.
  • Accepts responsibility for the successful delivery of a superior work product.
  • Gathers requirements and composes estimates in collaboration with the customer.
  • Respects coworkers and has a casual, friendly attitude.
  • Has an interest in and a passion for technology. This is not a joke, and yes, it’s a requirement.
Technology:
  • The primary skill sets are Kafka/Confluent and StreamSets (or a similar tool, such as Kinesis).
    • Terraform, Lambda, DynamoDB, and Athena architecture
  • Experience with data warehouse tools (Teradata, Oracle, Netezza, SQL, etc.) as well as cloud-based data warehouse tools (Snowflake, Redshift, Google BigQuery).
  • Experience building and optimizing traditional and/or event driven data pipelines.
  • Advanced working SQL knowledge and experience working with relational databases.
  • Familiarity with data processing tools such as Hadoop, Apache Spark, Hydra, etc.
  • Knowledge of cloud-based or streaming solutions such as Confluent/Kafka, Databricks, and Spark Streaming.
  • Experience with ETL/ELT tools such as Matillion, FiveTran, Talend, Informatica, Oracle Data Integrator, or IBM Infosphere, and understands the pros/cons of transforming data in ETL or ELT fashion.
  • Good understanding of data warehouse concepts of schemas, tables, views, materialized views, stored procedures, and roles/security.
  • Adept at building processes to support data transformation, data structures, metadata, dependency and workload management.
  • Experience with BI tools such as Looker, Tableau, Power BI, and MicroStrategy.
  • Familiarity with StreamSets is a plus.
  • Investigate emerging technologies.
  • Research the most appropriate technology solutions to solve complex and unique business problems.
  • Research and manage important and complex design decisions.
Consulting:
  • Direct interaction with the customer regarding significant matters, often involving coordination among groups.
  • Work on complex issues where analysis of situations or data requires an in-depth evaluation of variable factors.
  • Exercise good judgment in selecting methods, techniques and evaluation criteria for obtaining solutions.
  • Attend sales calls as technical expert and offer advice or qualified recommendations based on clear and relevant information.
  • Research and vet project requirements with customer and technical leadership.
  • Assist in the creation of SOWs, proposals, estimates and technical documentation.
  • Act as a vocal advocate for Fairway and pursue opportunities for continued work from each customer.
Supervision:
  • Determine methods and procedures on new or special assignments.
  • Requires minimal day-to-day supervision from the client management team.
Experience:
  • Typically requires 5+ years of related experience.
  • Typically requires a BS in computer science or a higher degree.

Benefits:

  • Work from Home
  • Flexible Hours
  • 100% covered employee health insurance
  • 401(k) with employer match
  • Fun team building events/days/activities
  • New HQ with adjustable desks