Oracle Software Developer 2 in Harrisburg, Pennsylvania
Design, develop, troubleshoot and debug software programs for databases, applications, tools, networks etc.
As a member of the software engineering division, you will apply basic to intermediate knowledge of software architecture to perform software development tasks associated with developing, debugging or designing software applications or operating systems according to provided design specifications. Build enhancements within an existing software architecture and occasionally suggest improvements to the architecture.
Duties and tasks are standard with some variation; displays understanding of roles, processes and procedures. Performs moderately complex problem solving with assistance and guidance in understanding and applying company policies and processes. BS degree or equivalent experience relevant to functional area. 1 year of software engineering or related experience.
Oracle is an Affirmative Action-Equal Employment Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, sexual orientation, gender identity, disability, protected veteran status, age, or any other characteristic protected by law.
The Oracle Cloud Infrastructure (OCI) team can provide you the opportunity to build and operate a suite of massive-scale, integrated cloud services in a broadly distributed, multi-tenant cloud environment. OCI is committed to providing the best in cloud products that meet the needs of our customers, who are tackling some of the world's biggest challenges.
We offer unique opportunities for smart, hands-on engineers with the expertise and passion to solve difficult problems in distributed, highly available services and virtualized infrastructure. At every level, our engineers have a significant technical and business impact, designing and building innovative new systems to power our customers' business-critical applications.
Oracle Big Data Overview
SQL relational data has been fundamental to computing within the business context for the last 30 years. It has been table stakes for running any modern business. Transactional systems have powered enterprise data management, e-commerce, and were the initial engines for the Internet era.
As compute technology has expanded, internet services have exploded and massive amounts of data have become available. Multi-structured data from machine sensors, connected devices, click streams, system logs, and more, all with the potential of affording the business significant value, needs to be collected, processed, and analyzed. However, this data does not fit well into the relational paradigm, in part because it often requires scale-out capability, going from terabytes of data to petabytes, making it unsuitable for traditional data management systems.
These new sources of data drove NewSQL and NoSQL development, new NoSQL databases, and also systems such as Hadoop that have become popular over the last three to five years. Fundamentally, enterprises really want and need to derive value out of the data for the business, often as a combination of traditional and new data sources. Once joined, this combination can provide new insights and be mined with technologies like machine learning and artificial intelligence.
In short, this is what we're aiming for with the next generation of Big Data technology. Oracle's vision overall is simple and straightforward:
Manage both the traditional and the new data sets together on a single cloud platform
Leverage cloud object storage as the primary means to store data
Allow any user to work with any kind of data quickly, securely, and efficiently
Provide an Oracle managed platform where customers can focus on value vs. infrastructure provisioning and maintenance
At a high level, the key requirements for this platform include:
Ability to integrate various datasets across disparate data sources
Enable discovery, lineage, governance of all enterprise data
Offer high-performance compute infrastructure with the appropriate capabilities to easily process, analyze, and visualize all data under management
Use ML and AI as an integrated part of the cloud services to serve thousands of customers while maintaining reasonable spend.
The Oracle Big data platform already has key offerings supporting both the vision and the requirements:
Big Data Cloud: a Hadoop-centric platform offering traditional Hadoop technologies such as MapReduce and Hive, in addition to Spark and Alluxio
Event Hub: a Kafka-based messaging platform serving as a pillar for data ingest, eventing, and inter-process communication
Data Integration Platform: a complete platform offering a broad range of data integration capabilities, from ingest and replication to ETL and data quality.
Some of these services are being re-envisioned to align more closely with the overall OCI platform as well as with cloud data lake use cases. The services currently under development include:
Data Flow: an autonomous, serverless cloud service offering Apache Spark on the cloud. This will form the foundational execution engine for SQL-on-data-lake, batch, and big data workloads.
Data Catalog: a service integrated with all Oracle data services to provide the metadata-based services required for data governance, lineage, and discovery; a key ingredient of any data lake offering.
On top of this, there are several initiatives underway to offer cutting-edge data processing capabilities that are unique to Oracle and can be shared in person.
In the last year, Oracle has acquired two important companies to help round out the Big Data Platform: datascience.com and SparkLine Data.
Datascience.com offers a complete data scientist workbench for authoring, managing, and deploying machine and deep learning. Hundreds of data science teams use the platform to organize work, easily access data and computing resources, and execute end-to-end model development workflows. We see the DataScience.com platform as key to improving productivity, reducing operational costs, and deploying machine learning solutions faster and more effortlessly.
Sparkline Data offers an industry-unique technology for building terabyte-scale data warehouses designed to work with modern distributed computing frameworks such as Apache Spark.
Our strategy is built around using the best of open source software, optimized for the cloud. Our team already includes PMC members, committers, and contributors across a broad range of open source projects. These team members are key to delivering on our strategy.
The initial rollout of the product strategy will be in place in calendar year 2019, with production availability of the Data Flow, Data Catalog, Data Integration, and Data Science cloud services. In the foreseeable future, service teams will be working on advancing the capabilities of individual services while keeping customer use cases in focus.
The intersection of [a real] cloud, big data, and machine learning makes the new Oracle a very exciting place to work. It affords newcomers an opportunity to have a material impact on the products and services being developed, as well as an awesome opportunity to work on one of the few full-scale cloud buildouts still in progress.
Job: Product Development
Title: Software Developer 2
Requisition ID: 19001LTO
Other Locations: United States