
Job Details


Verizon Communications Inc

Lead Developer - Data Engineer (587966-1B)

Technology

Lead Developer

Yearly

No

Temple Terrace, Florida, United States

When you join Verizon

Verizon is one of the world’s leading providers of technology and communications services, transforming the way we connect across the globe. We’re a diverse network of people driven by our shared ambition to shape a better future. Here, we have the ability to learn and grow at the speed of technology, and the space to create within every role. Together, we are moving the world forward – and you can too. Dream it. Build it. Do it here.

What you’ll be doing...

This position is responsible for fraud analytics, data pipelining, point anomaly model creation, and real-time dashboards that provide trending, operations, and executive KPI measurements in the Identity Access Management and Fraud Detection development organization, which supports all Verizon external customers. The position will develop and design Machine Learning and big data solutions, using distributed computing technologies, for real-time problems faced by the business. In addition, the position will technically lead resources based on the requirements provided by the architect. Multiple technologies are involved, and a deep understanding of Big Data tool sets is required. This position will also implement new, and enhance existing, dashboard measurements and alarms used by the IT, Operations, Finance, Legal, and Security organizations; these are built from Splunk, Oracle, Kibana, and other data sources, consolidating logging detail counters in Elastic and exposing that data on real-time Splunk and Grafana dashboards.

The position requires a dynamic, highly motivated senior developer who will apply AI/ML and data analysis to fraud data analytics, data pipelining, point anomaly model creation, and real-time dashboards. Candidates need to be strategic thinkers who can solve complex problems and apply business judgment grounded in data science experience, including mining data from primary and secondary sources and reorganizing that data into a format easily read by either human or machine. This position supports monitoring and analyzing dashboard trends, enabling real-time alarming, and coordinating with application teams on logging standardization. Candidates will develop and design LogStash and other jobs that consolidate application log counters using a standard methodology and output to Elasticsearch and InfluxDB; build processes to migrate historical data as well as manipulate real-time data; and create and organize Grafana dashboards providing business transaction details, operational and server health details, and customer funnels aggregated across channels.
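As an illustration of the log-counter consolidation step described above, the sketch below parses heterogeneous application log lines into uniformly shaped counter documents of the kind that could then be bulk-indexed into Elasticsearch. The log format and field names (`app`, `status`, `count`) are illustrative assumptions, not the posting's actual schema:

```python
import re
from collections import Counter

# Hypothetical log line format; a real pipeline would match each
# application's actual format, standardized across teams.
LOG_PATTERN = re.compile(r"app=(?P<app>\S+)\s+status=(?P<status>\d{3})")

def consolidate(log_lines):
    """Aggregate raw log lines into per-(app, status) counter documents."""
    counts = Counter()
    for line in log_lines:
        m = LOG_PATTERN.search(line)
        if m:
            counts[(m["app"], m["status"])] += 1
    # Emit one flat document per counter, ready for bulk indexing.
    return [
        {"app": app, "status": status, "count": n}
        for (app, status), n in sorted(counts.items())
    ]

lines = [
    "2024-01-01T00:00:01 app=login status=200 latency=12ms",
    "2024-01-01T00:00:02 app=login status=403 latency=9ms",
    "2024-01-01T00:00:03 app=login status=200 latency=15ms",
]
print(consolidate(lines))
# [{'app': 'login', 'status': '200', 'count': 2},
#  {'app': 'login', 'status': '403', 'count': 1}]
```

In production, comparable parsing and aggregation would typically live in Logstash filters rather than application code, with Elasticsearch or InfluxDB as the sink and Grafana reading from there.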

  • Orchestrate, design, and develop reusable data products utilizing data transformation and real-time data ingestion, performing tasks connected with data analytics, testing, and system design in big data and cloud environments.
  • Research and implement appropriate ML algorithms and tools.
  • Research fraud data analytics, data pipelining, point anomaly model creation, and real-time dashboard creation, and provide key insights on a regular basis by analyzing fraud and other channel logs.
  • Develop and design Machine Learning in a big data environment, using distributed computing technologies for real-time business problems and languages and tools such as Python, R, H2O, and TensorFlow.
  • Mine data from primary and secondary sources, then reorganize that data into a format easily read by either human or machine.
  • Use statistical tools to interpret data sets, paying particular attention to trends and patterns that could be valuable for diagnostic and predictive analytics efforts.
  • Prepare reports for executive leadership that effectively communicate trends, patterns, and predictions using relevant data.
  • Explore and visualize data to gain an understanding of it, then identify differences in data distribution that could affect performance when the model is deployed in the real world.
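The "point anomaly" work the responsibilities above refer to can be sketched with a simple rolling z-score over a log counter series. This is a minimal illustration, not the organization's actual model; the window size and threshold are arbitrary assumptions:

```python
from statistics import mean, stdev

def point_anomalies(counts, window=5, threshold=3.0):
    """Return indices whose value deviates more than `threshold`
    standard deviations from the mean of the preceding `window` points."""
    flagged = []
    for i in range(window, len(counts)):
        history = counts[i - window:i]
        mu, sigma = mean(history), stdev(history)
        if sigma == 0:
            continue  # flat history: no basis for a z-score
        if abs(counts[i] - mu) / sigma > threshold:
            flagged.append(i)
    return flagged

# A steady login-counter stream with one sudden spike at index 7:
series = [100, 102, 98, 101, 99, 100, 103, 480, 101, 99]
print(point_anomalies(series))  # → [7]
```

A production fraud model would of course use richer features and the ML tools the posting names (H2O, TensorFlow), but the flagged indices here are exactly what a real-time alarm or Grafana annotation would be driven by.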

What we’re looking for...

You'll need to have:

  • Bachelor's degree or four or more years of work experience.
  • Four or more years of relevant work experience.
  • Strong understanding of best practices for building Data Lake and analytical architectures on big data platforms.
  • Four years of programming experience in Java/Scala.
  • Experience with machine learning algorithms, data models, and data analytics.
  • Experience collaborating with data engineers, architects, data scientists, and enterprise platform team in designing and deploying data products.
  • Experience with Big Data technologies.
  • Experience designing and building data pipelines.
  • Willingness to work occasional evenings and weekends.

Even better if you have:

  • BS in computer science or a related field.
  • Industry-related certifications.
  • Expertise in the Python and R programming languages.
  • Experience in data analytics using Elasticsearch and Splunk.
  • Experience building real-time and batch dashboards using Spark, NiFi, Logstash, Elasticsearch, and Grafana / Looker.
  • Experience with ML tools like H2O / TensorFlow / PredictionIO.
  • Experience with distributed data platforms (HDFS, Elasticsearch, Splunk, Cassandra).
  • Experience working with NoSQL and in-memory databases.
  • Experience with Kafka, including cross-DC replication.
  • Experience with CI/CD processes - Git (Bitbucket), Jenkins, Jira, Confluence.
  • Understanding of best practices for building Data Lake and analytical architectures on Hadoop, Teradata, and public cloud infrastructure.
  • Experience working with Docker images.
  • Experience with SBT and POM.
  • Knowledge of application secure coding standards including OWASP best practices.
  • Experience with Google Drive suite services (e.g., Google Docs, Google Sheets).
  • Excellent verbal, written, organizational, time management, and interpersonal communication skills; can present findings concisely and effectively to critical audiences.
  • Proactive style, excels in a deadline-driven environment, works well under pressure with ability to organize and manage multiple priorities.
  • Ability to work with the business and technical team leads to develop and understand requirements.
  • Knowledgeable of current industry standards for secure web and mobile design.
  • Self-starter with strong self-management skills.
  • Strong analytical and problem-solving abilities.
  • High level of motivation.
  • Technology savvy and the ability to learn new things quickly.
  • Absolute customer focus.

Equal Employment Opportunity

We're proud to be an equal opportunity employer - and celebrate our employees' differences, including race, color, religion, sex, sexual orientation, gender identity, national origin, age, disability, and Veteran status. At Verizon, we know that diversity makes us stronger. We are committed to a collaborative, inclusive environment that encourages authenticity and fosters a sense of belonging. We strive for everyone to feel valued, connected, and empowered to reach their potential and contribute their best. Check out our diversity and inclusion page to learn more.

COVID-19 Vaccination Requirement

Verizon requires new hires to be fully vaccinated against COVID-19. Verizon provides reasonable accommodations consistent with legal requirements (e.g., for medical or religious reasons).