Job Requirements
1. Familiar with building and managing Hadoop/CDH ecosystems; solid grasp of the principles and usage of open-source projects such as Hadoop, MapReduce, HDFS, YARN, ZooKeeper, and Spark, with hands-on experience in cluster deployment and tuning
2. Proficient in Java/Scala development, with experience building big data platforms and hands-on development experience with the Flink and Spark frameworks
3. Basic experience in back-end business development; able to build basic web APIs in Go (Golang)
4. Familiar with the Linux operating system and its common commands; fluent in writing shell and Python scripts
5. Familiar with SQL tuning, with practical experience in MySQL and TiDB indexing and slow-query analysis
6. Understanding of real-time computing solutions, with exposure to CDC (change data capture) pipelines at large data volumes (see the sketch after this list)
7. Experience using Redis, Kafka, and Elasticsearch (ES); familiar with the general use of Elasticsearch SQL (ES SQL)
8. Good teamwork and communication skills
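
To illustrate the CDC requirement above, here is a minimal sketch of consuming a MySQL changelog with Flink. It assumes the third-party Ververica flink-cdc-connectors dependency; the hostname, credentials, and database/table names are placeholders, not details from this posting.

```scala
import com.ververica.cdc.connectors.mysql.source.MySqlSource
import com.ververica.cdc.debezium.JsonDebeziumDeserializationSchema
import org.apache.flink.api.common.eventtime.WatermarkStrategy
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment

object MySqlCdcJob {
  def main(args: Array[String]): Unit = {
    // Placeholder connection details; replace with real values.
    val source = MySqlSource.builder[String]()
      .hostname("mysql.internal")
      .port(3306)
      .databaseList("app_db")
      .tableList("app_db.orders")
      .username("flink")
      .password("******")
      .deserializer(new JsonDebeziumDeserializationSchema()) // emits Debezium-style JSON
      .build()

    val env = StreamExecutionEnvironment.getExecutionEnvironment
    // A CDC source reads an initial snapshot, then tails the binlog
    // for inserts, updates, and deletes.
    env
      .fromSource(source, WatermarkStrategy.noWatermarks[String](), "MySQL CDC Source")
      .print()
    env.execute("mysql-cdc-demo")
  }
}
```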
Job Responsibilities
1. Write offline disaster-recovery scripts; requires proficiency in shell or Python and the ability to analyze and resolve SQL performance issues
2. Write batch-processing jobs, using Spark or Flink for data processing (see the Spark sketch after this list)
3. Write stream-processing jobs, using Flink for data processing; requires an understanding of upsert streams and of Flink development (see the Flink sketch after this list)
4. Design table schemas with a deep understanding of the business; designs must account for performance, scalability, and compatibility
5. Develop back-end web services, using Go for back-end CRUD API development
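
As referenced in responsibility 2, a minimal Scala sketch of a Spark batch job; the HDFS paths, table layout, and column names are hypothetical examples, not specifics from this posting.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

object DailyOrderBatch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("daily-order-batch").getOrCreate()

    // Hypothetical input: raw order records landed on HDFS.
    val orders = spark.read.parquet("hdfs:///warehouse/ods/orders")

    // Aggregate order amount and count per user per day.
    val daily = orders
      .groupBy(col("user_id"), to_date(col("created_at")).as("dt"))
      .agg(sum(col("amount")).as("total_amount"), count(lit(1)).as("order_cnt"))

    // Overwrite the date-partitioned output.
    daily.write.mode("overwrite").partitionBy("dt")
      .parquet("hdfs:///warehouse/dws/daily_order_stats")

    spark.stop()
  }
}
```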
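
As referenced in responsibility 3, a minimal Scala sketch of the upsert-stream idea: a grouped aggregation over an unbounded Flink table produces an updating (changelog) result. The datagen source and schema here are stand-ins for a real Kafka/CDC input.

```scala
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment
import org.apache.flink.table.api.bridge.java.StreamTableEnvironment

object UpsertDemo {
  def main(args: Array[String]): Unit = {
    val env = StreamExecutionEnvironment.getExecutionEnvironment
    val tEnv = StreamTableEnvironment.create(env)

    // A datagen source standing in for a real input stream (schema hypothetical).
    tEnv.executeSql(
      """CREATE TABLE orders (
        |  user_id BIGINT,
        |  amount  DOUBLE
        |) WITH (
        |  'connector' = 'datagen',
        |  'rows-per-second' = '5',
        |  'fields.user_id.min' = '1',
        |  'fields.user_id.max' = '10'
        |)""".stripMargin)

    // A GROUP BY over an unbounded stream yields an updating table:
    // each new order revises the running total for its user.
    val totals = tEnv.sqlQuery(
      "SELECT user_id, SUM(amount) AS total_amount FROM orders GROUP BY user_id")

    // toChangelogStream exposes the update semantics as a changelog of rows
    // (+I inserts, -U/+U updates), which is the essence of an upsert stream.
    tEnv.toChangelogStream(totals).print()
    env.execute("upsert-demo")
  }
}
```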