Princeton, NJ | Direct Hire
- Position: Data Engineer
- Location: Princeton, NJ
- Salary: up to $130,000
Our client in Princeton, NJ is looking for a Big Data Engineer to join their team on a direct-hire basis.
Our client is growing and building out their Data and Analytics Reporting team. The Big Data Engineer will collect, store, process, and analyze large data sets. The primary focus will be on choosing optimal solutions for these purposes, then implementing, maintaining, and monitoring them. You will also be responsible for integrating those solutions with the architecture used across the company.
The ideal candidate will have a strong understanding of traditional databases and data analysis procedures, as well as big data practices in a Hadoop data lake environment. You must be tech-savvy, possess excellent troubleshooting skills, and be agile enough to jump into new technologies in the data space.
The goal of this role is to ensure that information flows in a timely and secure manner to and from business units and external parties, enabling swift, data-driven decisions.
Responsibilities:
- Select and integrate the Big Data tools and frameworks required to provide requested capabilities
- Implement and document ETL processes
- Monitor performance and advise on any necessary infrastructure changes
- Define data retention policies and assist in enforcement
- Formulate techniques for quality data collection to ensure adequacy, accuracy, and legitimacy of data
- Devise and implement efficient and secure procedures for data handling and analysis with attention to all technical aspects
- Design and manage the migration of data from legacy systems to new data solutions
- Monitor and analyze information and data systems and evaluate their performance to discover ways of enhancing them (new technologies, upgrades etc.)
- Troubleshoot data-related problems and authorize maintenance or modifications
Qualifications:
- Bachelor's Degree in Mathematics, Computer Science, Information Management, or Statistics preferred
- Proficient understanding of distributed computing principles
- Experience managing a Hadoop cluster, with all included services
- Ability to resolve ongoing issues with operating the cluster
- Proficiency with Hortonworks HDP ecosystem
- Experience building stream-processing systems using solutions such as Storm or Spark Streaming
- Good knowledge of Big Data querying tools, such as Pig, Hive, and Impala
- Experience with Spark
- Experience with integration of data from multiple data sources
- Experience with NoSQL databases, such as HBase, Cassandra, MongoDB
- Knowledge of various ETL techniques and frameworks
- Experience with various messaging systems, such as Kafka
- Good understanding of Lambda Architecture, along with its advantages and drawbacks
- Experience with SyncSort
- Experience with Flume, SSIS
- Experience with Hortonworks HDF (NiFi)
If you feel you are the right fit for the role above, please click the apply online button below, and I will be sure to reach out ASAP!