Data Warehouse Engineer II
MINDBODY | Product Development
Skills & Requirements
JOB FAMILY SUMMARY:
MINDBODY’s Data Science department makes sense of our data through meaningful and actionable insights: identifying trends, delivering reports, informing product and infrastructure advancements, and promoting a culture of data-driven decision making.
The Data Warehouse Engineering (DE) branch of MINDBODY’s Data Science department is focused on providing innovative, large-scale data platform solutions in a shared services model to support the enterprise data needs of MINDBODY. This involves building data pipelines to pull together information from different source systems; integrating, consolidating, and cleansing data; and structuring it for use in reporting and analytical applications. The team also architects distributed systems and data stores, and collaborates with data science teams to build the right solutions for them. The data provided by the DE team is used by the data science team to support key functions and initiatives within Product Development, Business Development, Customer Experience/Success, Marketing, and Sales at MINDBODY.
The Data Warehouse Engineer II focuses on designing, implementing, and supporting new and existing data solutions — data processing and data sets — to support various advanced analytical needs. You will create and support data pipelines that integrate with multiple external sources via APIs, databases, and flat files. You will liaise with members of the wider MINDBODY Data Science teams to ensure alignment with existing systems and consistency with internal standards and best practices.
MINIMUM QUALIFICATIONS AND REQUIREMENTS:
- Bachelor's degree or higher in a quantitative/technical field (e.g., Computer Science, Statistics, Engineering)
- 5+ years of relevant experience in one of the following areas: data warehousing, business intelligence, or business analytics
- 5+ years of hands-on experience in writing complex, highly-optimized SQL queries across large datasets
- 3+ years of experience in scripting languages such as Python
- Experience in data modeling, ETL development, and data warehousing
- Data warehousing experience with Oracle, Redshift, etc.
- Experience with AWS services including S3, Redshift, EMR and RDS
- Experience with Big Data Technologies (Hadoop, Hive, HBase, Pig, Spark, etc.)
- Experience in working and delivering end-to-end projects independently
- Strong attention to detail, analytical mindset, and highly organized
- Desire to work in a fast-paced, potentially ambiguous, start-up-like atmosphere
- Strong technical aptitude and demonstrated ability to quickly evaluate and learn new technologies
- Strong interpersonal skills, with the ability to work independently and within a team environment
PRINCIPAL DUTIES AND RESPONSIBILITIES:
- Design, implement, and support a platform providing access to large datasets
- Interface with other technology teams to extract, transform, and load data from a wide variety of data sources
- Implement data structures using best practices in data modeling, ETL/ELT processes, SQL, and Redshift
- Model data and metadata for ad hoc and pre-built reporting
- Interface with business customers, gathering requirements and delivering complete reporting solutions
- Build robust and scalable data integration (ETL) pipelines using SQL, Python, and Spark
- Build and deliver high-quality datasets to support business analyst and customer reporting needs
- Continually improve ongoing reporting and analysis processes, automating or simplifying self-service support for customers
- Provide on-call support for after-hours critical batch process issues
- Participate in strategic & tactical planning discussions, including annual budget processes
- All other duties as assigned