- Architect and implement best-in-class solutions using technologies such as Spark, Cassandra, Kafka, Airflow, Google Cloud Dataflow, BigQuery, Snowflake, and Amazon Kinesis/Redshift, among others
- Create robust and automated pipelines to ingest and process structured and unstructured data from source systems into analytical platforms using batch and streaming mechanisms
- Work with data scientists to operationalize and scale machine learning training and scoring components by joining and aggregating data from multiple datasets to produce complex models and low-latency feature stores
- Leverage graph databases and other NoSQL data stores to accomplish tasks that are not possible with traditional databases
- 3+ years of deep software development and engineering experience
- 1+ years of experience in the data and analytics space
- 1+ years in key aspects of software engineering such as parallel data processing, data flows, REST APIs, JSON, XML, and microservice architectures
- 2+ years of Java and/or Scala experience
- 2+ years of experience with RDBMS concepts and strong SQL skills
Knowledge, Skills, and Abilities:
- Collegial and collaborative working style
- Must be a self-starter and team player, capable of working and communicating effectively with internal and client teams
- Some consulting experience preferred
- Strong verbal and written communication skills
- An understanding of what it takes to make a modern data solution production-ready, including operations and monitoring strategies and tools
- Detail-oriented with the curiosity that compels you to dive deep into the problem, whether to identify the root cause of a quality issue or understand hidden patterns
- Understanding and experience with stream processing and analytics tools and technologies such as Kafka, Spark Streaming, Storm, Flink, etc.
- Experience working in a scrum/agile environment and associated tools (Jira)
- Proficient with application build and continuous integration tools (e.g., Maven, Gradle, SBT, Jenkins, Git, etc.)
- General knowledge of big data and analytics solutions provided by major public cloud vendors (AWS, Google Cloud Platform, Microsoft Azure)
- Hands-on experience with DevOps solutions such as Jenkins, Puppet, Chef, Ansible, CloudFormation, etc.
- Any certifications related to big data platforms, NoSQL databases, or cloud providers are a plus
- Experience with large data sets and associated job performance tuning and troubleshooting
- Ability to wrangle data at scale using tools such as BigQuery, Hive, Spark, and other distributed data processing tools
- At least upper-intermediate level of English
WHAT WILL YOU GET WITH ELEKS
- Above-average compensation and a competitive social package
- Close cooperation with the customer
- Challenging tasks
- Competence development
- Ability to influence project technologies
- Team of professionals
- Dynamic environment with minimal bureaucracy
- Medical insurance