About Fisker Inc.
California-based Fisker Inc. is revolutionizing the automotive industry by developing the most emotionally desirable and eco-friendly electric vehicles on Earth. Passionately driven by a vision of a clean future for all, the company is on a mission to become the No. 1 e-mobility service provider with the world’s most sustainable vehicles. To learn more, visit www.FiskerInc.com – and enjoy exclusive content across Fisker’s social media channels: Facebook, Instagram, Twitter, YouTube and LinkedIn. Download the revolutionary new Fisker mobile app from the App Store or Google Play store.
Responsibilities
- Work with leadership, engineering, and data science teams to understand data needs.
- Design, build and launch efficient and reliable data pipelines to best utilize connected vehicle data for real-time systems and within data warehouses.
- Educate your partners: use your data and analytics experience to "see what's missing," identifying and addressing gaps in their existing logging and processes.
- Help ensure that best practices are followed when storing, retrieving, and accessing data.
Basic Qualifications
- 3+ years of Python development experience.
- 3+ years of SQL experience.
- 3+ years of experience with workflow management engines (e.g., Airflow, Luigi, Prefect, Dagster, digdag.io, Google Cloud Composer, AWS Step Functions, Azure Data Factory, UC4, Control-M).
- 3+ years of experience with data modeling.
- Experience organizing queries, tables, and pipelines with proper indexing, partitioning, and sharding.
- 3+ years of experience in custom ETL design, implementation, and maintenance.
- Experience working with cloud or on-premises Big Data/MPP analytics platforms (e.g., ClickHouse, Spark, Netezza, Teradata, AWS Redshift, Google BigQuery, Azure Data Warehouse, or similar).
- Bachelor's degree in Computer Science, Computer Engineering, relevant technical field, or equivalent practical experience.
Preferred Skills
- Experience with more than one programming language, ideally Go, C++, or Java.
- Experience with designing and implementing real-time pipelines.
- Experience with data quality and validation.
- Experience with SQL performance tuning and E2E process optimization.
- Experience with notebook-based data science workflows.
- Experience with Airflow.
- Experience querying massive datasets using Spark, Presto, Hive, Impala, etc.
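To give a flavor of the custom ETL and data-quality work described above, here is a minimal Python sketch of an extract-transform-load step with a basic validation rule. Everything in it (the `extract`/`transform`/`load` functions, the sample records, and the `telemetry` table) is a hypothetical illustration, not a Fisker system or API:

```python
# Minimal ETL sketch: extract raw records, drop rows that fail a
# data-quality check, and load the survivors into a SQLite table.
# All names and data here are illustrative placeholders.
import sqlite3


def extract():
    # Stand-in for reading connected-vehicle records from a source system.
    return [
        {"vin": "VIN001", "speed_kph": 88.0},
        {"vin": "VIN002", "speed_kph": None},  # bad record: missing value
        {"vin": "VIN003", "speed_kph": 42.5},
    ]


def transform(rows):
    # Data-quality rule: reject records with a missing speed reading.
    return [r for r in rows if r["speed_kph"] is not None]


def load(rows, conn):
    # Write validated records to the warehouse table.
    conn.execute("CREATE TABLE IF NOT EXISTS telemetry (vin TEXT, speed_kph REAL)")
    conn.executemany("INSERT INTO telemetry VALUES (:vin, :speed_kph)", rows)
    conn.commit()


if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    load(transform(extract()), conn)
    count = conn.execute("SELECT COUNT(*) FROM telemetry").fetchone()[0]
    print(count)  # only the valid rows survive validation
```

In a production pipeline each of these steps would typically run as a task in a workflow engine such as Airflow, with the validation failures logged rather than silently dropped.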