As part of the Big Data team, you will help build a lasting legacy for the Home Credit group. We believe that information is everything, and data is at its heart. We are looking for a candidate with a passion for data, who will go the extra mile to extend his/her knowledge and is willing to share insights with others. Your contribution to the Big Data team will secure Home Credit Indonesia's position as a data-driven company that helps people fulfill their dreams and ambitions in a financially secure way by providing innovative consumer finance services.
Data Engineer is a key role in the Big Data team. You will provide consultation on building an Enterprise Data Lake from scratch, find the best approaches to collect, transform, and store data, maintain a scalable yet stable platform, and work with the team to implement models.
* Establish the Enterprise Data Lake.
* Write automated jobs for data collection, with proper monitoring and notification systems.
* Integrate with various data sources: databases (relational & NoSQL), text files, web services, emails, log files, websites, call recordings, IVR, cloud data sources, etc.
* Transform data in various formats: tables, delimited text, free text, JSON, HTML, XML, audio waveforms, bitmaps, etc.
* Research optimal solutions for building the data lake, whether in software, hardware, procedures, best practices, or new data sources.
* Share findings and lessons learned with the BICC team and other teams at the BI Forum, with well-explained documentation on Phabricator.
* Deliver all tasks and ad-hoc requests to a high standard, with proper analysis, code documentation (well commented and stored in Git), and thorough testing.
* Provide expertise and consultation on data architecture and infrastructure.
* Cooperate with Business Intelligence and other IT teams located in Jakarta and the Czech Republic.
* Implement provided machine learning algorithms on a scalable distributed system.
* Bachelor's degree in Computer Science, Computer Engineering, or Information Technology.
* More than 3 years of professional experience in the data field, in positions such as ETL (Extract, Transform, Load) developer, database administrator, or EAI (Enterprise Application Integration) developer.
* Able to converse fluently in English.
* 2+ years of experience with SQL (Oracle, MySQL, Hive, etc.).
* 2+ years of software engineering experience (Java, Python, R), with Git as the version control tool.
* 1+ years of experience with an ETL tool (Pentaho, DataStage, SSIS, ODI, Spark, etc.).
* 1+ years of experience with any Linux distribution.
* Basic knowledge of the Hadoop ecosystem is an advantage.
* Experience with NoSQL databases (neo4j, MongoDB, HBase) is an advantage.
* Eager to learn new technologies, with the ability to pick them up quickly.