
Home Depot Data Engineer in Houston, Texas

POSITION PURPOSE

Congrats! You're learning about an exciting new role with The Home Depot's Global Custom Commerce (GCC) team that will revitalize the way we manage and view data. For this role, we are looking for someone who can develop, implement, test, and maintain data pipelines (batch and streaming) and data structures within a cloud-based, column-oriented data store. This person will support the GCC BI and Data Science strategic initiatives to drive better customer experiences and more profitable outcomes.

Why work here? Our entrepreneurial roots and maverick mentality, coupled with the resources and backing of The Home Depot, the #1 home improvement retailer in the world, give you a unique opportunity to be a transformative retail disrupter. Plus, GCC is the world's largest online window covering company, and we've got a demonstrably awesome 20-year track record. From our open-floor office to our open-door ethos, our culture is rooted in improving, evolving, and having fun (we're pretty serious about cake, cook-offs, ping pong, meaningful work, and exciting projects). Most importantly, our team members are always inspired, engaged, and ready for growth. That means you'll have the resources and the runway to create truly magical, out-of-the-box work. Moreover, you will play an important role in leveraging our culture, people, systems, processes, and technology to provide incredible customer experiences while growing business for GCC and The Home Depot. This is your chance to be part of something big, in a small start-up environment. We're ranked as one of the Top 5 Workplaces in Texas and have consistently won the following awards: The Best Place to Work in Houston (Houston Business Journal), Houston's Top Workplaces (Houston Chronicle), and Houston's Best and Brightest.

MAJOR TASKS, RESPONSIBILITIES AND KEY ACCOUNTABILITIES

20%- Implement a real-time streaming data ingestion and processing pipeline using Google Dataflow (Apache Beam); a minimal sketch of this kind of pipeline follows this list

20%- Interface with business intelligence analysts and others in IT (i.e., data engineers, architects, WebOps) in frequent whiteboard sessions to discuss the design, implementation, and testing of data pipelines

20%- Maintain data architecture standards and ETL/ELT best practices consistent with a column-oriented data store in an analytic use case

20%- Active and engaged participation in the Scrum delivery process

20%- Support solutions in production
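
To make the streaming responsibility above concrete, here is a minimal sketch of that kind of pipeline using the Apache Beam Python SDK. The Pub/Sub topic, BigQuery table, and schema are illustrative placeholders, not details from this posting.

import json

import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

def run():
    # Streaming mode is required for unbounded sources like Pub/Sub.
    options = PipelineOptions(streaming=True)
    with beam.Pipeline(options=options) as p:
        (
            p
            | "ReadEvents" >> beam.io.ReadFromPubSub(
                topic="projects/example-project/topics/orders")  # placeholder topic
            | "ParseJson" >> beam.Map(lambda msg: json.loads(msg.decode("utf-8")))
            | "WriteToBigQuery" >> beam.io.WriteToBigQuery(
                "example-project:analytics.orders",  # placeholder table
                schema="order_id:STRING,amount:FLOAT,event_ts:TIMESTAMP",
                write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND,
            )
        )

if __name__ == "__main__":
    run()

The same code runs locally on the DirectRunner for testing, or on Google Dataflow by passing --runner=DataflowRunner with the usual project and region options.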

NATURE AND SCOPE

This role reports to the Sr. Manager of EDW.

This role has no direct reports.

ENVIRONMENTAL JOB REQUIREMENTS

Environment:

Located in a comfortable indoor area. Any unpleasant conditions would be infrequent and not objectionable.

Travel:

Typically requires overnight travel less than 10% of the time.

MINIMUM QUALIFICATIONS

Must be eighteen years of age or older.

Must be legally permitted to work in the United States.

Education Required:

The knowledge, skills and abilities typically acquired through the completion of a bachelor's degree program or equivalent degree in a field of study related to the job.

Years of Relevant Work Experience: 6 years

Physical Requirements:

Most of the time is spent sitting in a comfortable position and there is frequent opportunity to move about. On rare occasions there may be a need to move or lift light articles.

Preferred Qualifications:

Familiarity with Agile methodologies

Experience with Spark dataframes, SparkSQL, Kafka, KSQL, real-time streaming, and message bus technologies (see the sketch after this list)

Experience with data science technologies such as SAS, R, Matlab, or similar

Experience with data warehousing and dimensional modeling
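
As a rough illustration of the Spark and Kafka qualifications above, the following sketch uses PySpark Structured Streaming to consume a Kafka topic and aggregate it with SparkSQL. The broker address, topic name, and event schema are assumptions for illustration only.

from pyspark.sql import SparkSession
from pyspark.sql.functions import col, from_json
from pyspark.sql.types import DoubleType, StringType, StructType

# Requires the spark-sql-kafka connector package on the classpath.
spark = SparkSession.builder.appName("kafka-streaming-demo").getOrCreate()

schema = StructType().add("order_id", StringType()).add("amount", DoubleType())

events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "localhost:9092")  # placeholder broker
    .option("subscribe", "orders")                        # placeholder topic
    .load()
    .select(from_json(col("value").cast("string"), schema).alias("e"))
    .select("e.*")
)

# SparkSQL over the streaming DataFrame.
events.createOrReplaceTempView("orders")
totals = spark.sql(
    "SELECT order_id, SUM(amount) AS total FROM orders GROUP BY order_id")

query = totals.writeStream.outputMode("complete").format("console").start()
query.awaitTermination()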

Knowledge, Skills, Abilities and Competencies:

Experience building real-time streaming data ingestion and processing pipelines using Apache Beam (running on Google Dataflow, Apache Apex, Apache Flink, or Apache Spark) or Kafka in an analytics or data science use case

Experience with data processing tools (e.g. Hadoop, Spark, Dataflow, etc.)

Experience building ETL/ELT pipelines (see the ELT sketch after this list)

Experience with column-oriented databases (e.g., Redshift, BigQuery, Vertica)

Ability to go from whiteboard discussion to code

Ability to effectively communicate with technical and non-technical audiences

Strong programming ability

Success in a highly dynamic environment and ability to shift priorities with agility

Ability to act independently with minimal supervision

Willingness to explore and implement new ideas and technologies

Experience working directly with subject matter experts in both business and technology domains

Experience as a team lead
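
To ground the ETL/ELT and column-store items above, here is a minimal ELT sketch assuming the google-cloud-bigquery client library; the dataset and table names are placeholders, not details from this posting. In the ELT pattern, raw events are loaded first and the transform runs as SQL inside the column-oriented store itself.

from google.cloud import bigquery

client = bigquery.Client()  # uses default credentials and project

# ELT: the "T" happens inside the warehouse via SQL, after the load step.
# Dataset and table names below are placeholders.
transform_sql = """
CREATE OR REPLACE TABLE analytics.daily_sales AS
SELECT DATE(event_ts) AS day, SUM(amount) AS revenue
FROM raw_events.orders
GROUP BY day
"""

client.query(transform_sql).result()  # blocks until the job completes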

We are an Equal Opportunity Employer and do not discriminate against any employee or applicant for employment because of race, color, sex, age, national origin, religion, sexual orientation, gender identity, status as a veteran, or on the basis of disability or any other federal, state, or local protected class.
