
Data Architect

Location: San Diego, California, United States
Job # 12147896
Date Posted: 04-01-2019
Data Architect - San Diego, CA
Pay Rate: DOE
Direct Hire
Start Date: ASAP

Our client is a leading IT consulting service headquartered in San Diego, CA. With well-known clients, they are building out their team of data experts who work directly with those clients. At this time, they are looking for a Data Architect with key skill sets in Kafka/Confluent and StreamSets who will be responsible for designing logical and physical data models for Snowflake and the real-time database. This person will design and implement best practices for real-time/near-real-time data ingestion into Snowflake, including partitioning and clustering schemes, unique ID generation, logging, auditing, error handling, and replay strategies, and will architect the schema migration strategy.

Ideal candidates are experienced data pipeline builders and data wranglers who enjoy optimizing and building data systems from the ground up, moving data from source systems to analytic systems, and modeling data for analytical reporting. They have also learned a great deal about managing customers, and about how external and internal perceptions of their work product relate to customer satisfaction. Having the confidence and knowledge to recommend solutions, and the experience to know what will and won't work, are important traits for this consultant.
Tasks:
 
1.  Create Enterprise Data Models
     Deliverables: 
  • Build tables in Snowflake to support unified data models for fan, event, venue, artist, sales, attendance, etc. (see the sketch below)
  • ERDs, DFDs, and other diagrams
  • Source/Target mappings
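
For illustration only, below is a minimal sketch of what a couple of such Snowflake tables could look like, driven from Python via the snowflake-connector-python package. Every table and column name here is an assumption made for the example, not part of the client's actual model.

# Minimal sketch only: illustrative table and column names, not the client's model.
import snowflake.connector  # pip install snowflake-connector-python

DDL = [
    """
    CREATE TABLE IF NOT EXISTS venue (
        venue_id   NUMBER AUTOINCREMENT,   -- surrogate key generated by Snowflake
        venue_name STRING NOT NULL,
        city       STRING,
        capacity   NUMBER
    )
    """,
    """
    CREATE TABLE IF NOT EXISTS event (
        event_id        NUMBER AUTOINCREMENT,
        event_source_id STRING,            -- natural key from the source system
        venue_id        NUMBER,            -- reference to venue (informational in Snowflake)
        artist_name     STRING,
        event_date      DATE
    )
    """,
]

def create_tables(conn) -> None:
    """Create the illustrative tables over an already-open snowflake.connector connection."""
    cur = conn.cursor()
    try:
        for stmt in DDL:
            cur.execute(stmt)
    finally:
        cur.close()
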
2.  Architect and Implement Schema Versioning and Migration Strategy
       Deliverables:
  • Design and build processes for versioning and migrating Snowflake schemas, covering operations such as adding columns, removing columns, deprecating columns, materialized views, automatic clustering, and promotion from dev -> QA -> preprod -> prod
  • Design a process for supporting concurrent versions of tables
  • Design and help build a functioning CI/CD pipeline to automate migrations (see the migration-runner sketch below)
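
As a rough illustration of this task, the sketch below shows a forward-only migration runner of the kind a CI/CD job could run against each environment in turn. The schema_version tracking table, the V<version>__<description>.sql file-naming convention, and the one-statement-per-file assumption are illustrative choices, not requirements from the client.

# Minimal sketch of a forward-only schema migration runner for Snowflake.
# Assumptions (illustrative): a schema_version tracking table, migration files
# named V<version>__<description>.sql containing one statement each, and an
# already-open snowflake.connector connection passed in by the CI/CD job.
import pathlib

def _applied_versions(cur) -> set:
    cur.execute("""
        CREATE TABLE IF NOT EXISTS schema_version (
            version    STRING,
            applied_at TIMESTAMP_NTZ DEFAULT CURRENT_TIMESTAMP()
        )""")
    cur.execute("SELECT version FROM schema_version")
    return {row[0] for row in cur.fetchall()}

def migrate(conn, migrations_dir: str = "migrations") -> None:
    """Apply, in version order, any migration files not yet recorded for this environment."""
    cur = conn.cursor()
    try:
        done = _applied_versions(cur)
        for path in sorted(pathlib.Path(migrations_dir).glob("V*.sql")):
            version = path.name.split("__")[0]            # e.g. "V003"
            if version in done:
                continue                                  # already applied here
            cur.execute(path.read_text())                 # e.g. ALTER TABLE ... ADD COLUMN ...
            cur.execute("INSERT INTO schema_version (version) VALUES (%s)", (version,))
    finally:
        cur.close()
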
3.  Define reference architecture (best practices) for data ingestion into Snowflake
      Deliverables:
  • Build StreamSets Data Collector pipeline(s) that demonstrate ingestion best practices, including ID generation, de-duplication, cleansing, enrichment, late-arriving data, error handling, and data replay (see the sketch below)
  • Supporting documentation
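
The sketch below illustrates just one of the practices listed above: an idempotent, de-duplicating MERGE from a raw staging table into a modeled table, so that a replayed or partially failed batch does not create duplicate rows. The raw_events table and the event_source_id and load_ts columns are assumptions made for the example.

# Minimal sketch of one ingestion best practice: an idempotent, de-duplicating
# MERGE from a raw staging table into a modeled table, safe to re-run on replay.
# raw_events, event_source_id, and load_ts are illustrative names, not the client's.
MERGE_SQL = """
MERGE INTO event AS tgt
USING (
    SELECT event_source_id, artist_name, event_date
    FROM raw_events
    -- keep only the latest record per source id (de-duplication)
    QUALIFY ROW_NUMBER() OVER (PARTITION BY event_source_id ORDER BY load_ts DESC) = 1
) AS src
ON tgt.event_source_id = src.event_source_id
WHEN MATCHED THEN UPDATE SET
    artist_name = src.artist_name,
    event_date  = src.event_date
WHEN NOT MATCHED THEN INSERT (event_source_id, artist_name, event_date)
    VALUES (src.event_source_id, src.artist_name, src.event_date)
"""

def merge_events(conn) -> None:
    """Run the de-duplicating merge; safe to re-run after an error or a data replay."""
    cur = conn.cursor()
    try:
        cur.execute(MERGE_SQL)
    finally:
        cur.close()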

Requirements:

Universal Skills
Must possess the following set of fundamental skills:
  • Uses technology to contribute to development of customer objectives and to achieve goals in creative and effective ways.
  • Communicates clearly and effectively in careful consideration of the audience, and in terms and tone appropriate to them.
  • Accepts responsibility for the successful delivery of a superior work product.
  • Gathers requirements and composes estimates in collaboration with the customer.
  • Respects coworkers and has a casual, friendly attitude.
  • Has an interest in and a passion for technology. This is not a joke, and yes, it’s a requirement.
Technology:
  • The primary skill sets are Kafka/Confluent and StreamSets (or a similar tool, such as Kinesis).
    • Terraform, Lambda, DynamoDB, Athena architecture
  • Experience with data warehouse tools (Teradata, Oracle, Netezza, SQL, etc.) as well as cloud-based data warehouse tools (Snowflake, Redshift, Google BigQuery).
  • Experience building and optimizing traditional and/or event-driven data pipelines.
  • Advanced working SQL knowledge and experience working with relational databases.
  • Familiarity with data processing tools such as Hadoop, Apache Spark, Hydra, etc.
  • Knowledge of cloud-based or streaming solutions such as Confluent/Kafka and Databricks/Spark Streaming.
  • Experience with ETL/ELT tools such as Matillion, Fivetran, Talend, Informatica, Oracle Data Integrator, or IBM InfoSphere, along with an understanding of the pros and cons of transforming data in an ETL versus ELT fashion.
  • Good understanding of data warehouse concepts of schemas, tables, views, materialized views, stored procedures, and roles/security.
  • Adept at building processes to support data transformation, data structures, metadata, dependency and workload management.
  • Experience with BI tools such as Looker, Tableau, Power BI, and MicroStrategy.
  • Familiarity with StreamSets is a plus.
  • Investigate emerging technologies.
  • Research the most appropriate technology solutions to solve complex and unique business problems.
  • Research and manage important and complex design decisions.
Consulting:
  • Direct interaction with the customer regarding significant matters often involving coordination among groups.
  • Work on complex issues where analysis of situations or data requires an in-depth evaluation of variable factors.
  • Exercise good judgment in selecting methods, techniques and evaluation criteria for obtaining solutions.
  • Attend sales calls as a technical expert and offer advice or qualified recommendations based on clear and relevant information.
  • Research and vet project requirements with customer and technical leadership.
  • Assist in the creation of SOWs, proposals, estimates and technical documentation.
  • Act as a vocal advocate for Fairway and pursue opportunities for continued work from each customer.
Supervision:
  • Determine methods and procedures on new or special assignments.
  • Requires minimal day-to-day supervision from the client management team.
Experience:
  • Typically requires 5+ years of related experience.
  • Typically requires BS in computer science or higher.

Benefits:

  • Work from Home
  • Flexible Hours
  • 100% covered employee health insurance
  • 401(k) with employer match
  • Fun team building events/days/activities
  • New HQ with adjustable desks