Oportun is a mission-driven, technology-powered provider of inclusive, affordable financial services and a certified Community Development Financial Institution (CDFI).
We seek to serve the 100 million people in the US who are shut out of the financial mainstream because they are credit invisible or are mis-scored because they have limited credit history. By lending money to hardworking, low-to-moderate income individuals, we help them move forward in their lives, demonstrate their creditworthiness, and establish the credit history they need to access new opportunities.
Since 2006, we have lent over $6.8 billion through more than 3.1 million affordable small-dollar loans and have helped over 730,000 people start establishing credit. In recognition of our inventive approach, Time Magazine named us one of its 50 Genius Companies inventing the future.
The Bay Area News Group recognized Oportun as a Top Workplace in 2019. Come and be a part of our community of employees, partners, and customers who are devoted to expanding financial opportunity for millions. When we work together, we can make life better.
Do you want to be part of a BIG data transformation journey? Do you love exploring new avenues and pioneering things in the technology space? Do you love designing and implementing business-critical data management and engineering solutions using emerging technologies? Do you enjoy solving complex business problems in a fast-paced, collaborative, and iterative delivery environment? If this excites you, then keep reading!
We're seeking a hands-on Data Operations Engineer who can design, code, and provide architecture solutions for the team. The right candidate for this role is passionate about technology, can interact with product owners, analysts, and technical stakeholders, thrives under pressure, and is hyper-focused on delivering exceptional results while working well with a team.
Design and develop scalable Big Data solutions across the entire data supply chain, with a focus on ensuring that delivered functionality can be monitored for health and that the design is extensible.
Create or implement solutions for metadata management.
Create and review technical and user-focused documentation for data solutions (data models, data dictionaries, business glossaries, process and data flows, architecture diagrams, etc.).
Extend and enhance the business Data Warehouse and Data Lake.
Solve complex data integration problems across multiple systems.
Design and execute strategies for real-time data analysis and decisioning.
Collaborate with management, business partners, analysts, developers, architects, and engineers to support all data quality efforts.
Participate in second-level production support.
Verify the accuracy of data and testing methods, and maintain and support the Analytics Data Platform.
You don't just learn how things work, you learn why. Understanding how systems work at a fundamental level is a passion of yours.
Be open and willing to learn new skills!
WHAT SKILLS ARE WE LOOKING FOR IN AN IDEAL CANDIDATE?
In data management, data access (Big Data, traditional Data Marts and Data Warehousing).
In advanced programming (Python, shell scripting, and Java)
With interactive and batch processing using Spark SQL and Spark scripting.
In applied data technologies:
Kafka, Spark Streaming
Current data warehousing concepts and technologies such as Redshift, Spark, Hadoop, and web services to support business-driven decisioning
In data architecture and data assembly
In Data Governance and Data Security
Experience (requires little direction):
Functional requirements, detailed technical specifications, and test cases for new or modified projects
Understanding of data sources (e.g., third-party RDBMSs: Microsoft Access, SQL Server, Oracle, and MySQL)
Data integration tools (Talend preferred)
Data manipulation scripting languages
Business Intelligence, MDM, XML, SOA/WebServices
Executing deliverables using Agile
Proficiency in verbal and written English (90%)
Excellent organizational and project management skills
Bachelor’s degree in computer science/data processing or equivalent
5+ years of experience in Data Warehousing or similar analytic data experience
5+ years of experience with Java programming and developing frameworks
2+ years’ experience with Hadoop and Spark
2+ years’ experience with Amazon EMR/EC2 (or equivalent)
2+ years’ experience with Python
Experience with Postgres and MySQL
A solid understanding of basic core computer science concepts
Familiarity with Linux
Experience with Bitbucket and a solid understanding of core concepts with Git a plus
Familiarity with Jenkins and CI/CD
Experience with AWS technologies such as Aurora, Athena, EMR, Redshift, S3
Experience with Scala a plus
Experience with Talend Data Integration (Big Data) platform a plus