Hybrid Details:
3 days/week onsite
Duration:
11 months to start
Job Duties:
- Create integration and application technical design documentation.
- Conduct peer-reviews of functional design documentation.
- Complete development, configuration, test cases, and unit testing.
- Perform code reviews and ensure standards are applied to each solution component.
- Resolve complex defects during testing phases and identify root causes.
- Support and execute performance testing.
- Work with Git repositories and CI/CD deployment processes.
- Production Support: Assist in troubleshooting and tuning production environments.
- Ensure best practices are followed from a technical perspective during all phases of the project.
Minimum Skills Required:
- At least 6 years of relevant experience in the design, development, and complete end-to-end delivery of enterprise-wide big data solutions.
- Experience designing and developing big data solutions using Spark, Scala, AWS Glue, Lambda, SNS/SQS, and CloudWatch is a must.
- Strong Application development experience in Scala/Python.
- Strong Database SQL experience, preferably Redshift.
- Experience in Snowflake is an added advantage.
- Experience with ETL/ELT process and frameworks is a must.
- Strong background in AWS cloud services such as Lambda, Glue, S3, EMR, SNS, SQS, CloudWatch, and Redshift.
- Expertise in SQL and experience with relational databases such as Oracle, MySQL, and PostgreSQL.
- Proficient in Python programming for data engineering tasks and automation.
- Experience with shell scripting in Linux/Unix environments.
- Experience with big data technologies such as Hadoop and Spark.
- Financial Services experience required
- Nice to have: knowledge of machine learning models, regression, and validation.
#LI-Hybrid