Hadoop/Big Data Senior Software Engineer (Platform), Flurry
THIS JOB HAS EXPIRED
The Flurry platform utilizes Hadoop to its fullest and continues to push the envelope daily. Flurry's use of MapReduce for highly distributed computing and HBase for storage and querying powers the processing of billions of sessions daily. The cluster tuning and optimization necessary to support such scale requires top-notch expertise in the Hadoop ecosystem and distributed computing.
The Flurry Platform Team is looking for a senior software engineer with distributed computing experience in the Hadoop ecosystem to bring our platform to the highest level of robustness, performance, features, and efficiency required to support mission-critical functionality at Flurry. The platform engineer will work in our agile environment, responding to the large-scale data processing needs of the Flurry business through the best use of open source technologies and advancing the state of distributed computing and storage.
We are looking for smart people who excel in large scale distributed computation. We strive to create an environment of casual intensity where people enjoy coming to work every day.
Responsibilities:
Analyze the performance and stability characteristics of the Flurry platform to identify bottlenecks, failure points, and security holes contributed by open source software in the system
Design and implement enhancements and bug fixes in open source software to meet the large-scale data and node requirements of the Flurry platform
Contribute tested solutions back to the community and achieve contributor/committer status
Monitor the status of open source software updates and releases to ensure the platform receives necessary patches
Work on upgrades and migrations to keep the Flurry platform up to date with stable open source releases
Work on platform automation test infrastructure and test suites to ensure changes in open source stacks do not destabilize the platform and its services
Provide the direction and requirements necessary to monitor the platform components built on top of open source stacks
Work on the high availability, replication, backup, and disaster recovery solutions required for the Flurry platform
Skills and Experience:
Extensive software development experience with highly-scalable, distributed, large multi-node environments
6+ years of Unix environment experience (Red Hat Linux, FreeBSD)
Extensive high-quality, object oriented software development experience using Java or C++ deployed on Linux/Unix
Expert at building, configuring and monitoring highly available, large scale distributed systems
Strong system and application troubleshooting and performance tuning skills (Hardware, Linux, Networking, JVMs, etc.)
Understanding of the Hadoop ecosystem (HDFS, HBase, MapReduce, ZooKeeper)
Well-versed in highly scalable solutions in data storage, analysis and reporting for large-scale, distributed data sets
Knowledge of the core elements of file system, kernel, and database internals: latency, throughput, reliability, availability, consistency, security, etc.
Experience in contributing to Apache open source projects a plus
Attitude and Behavior:
The candidate must demonstrate a go-getter attitude and drive solutions to problems. S/he must be able to foster teamwork, promote team collaboration and communication, and gauge project progress. S/he must be personable, use respectable manners, maintain stable composure, demonstrate a positive attitude, be able to work under pressure, and be capable of multi-tasking.
Bachelor's degree (BS) in Computer Science or equivalent experience (Master's or PhD a plus)
490 Second Street
San Francisco, CA 94107