Data Engineer

Remote - US


Juul Labs’ mission is to impact the lives of the world’s one billion adult smokers by eliminating combustible cigarettes. We have the opportunity to address one of the world’s most intractable challenges through a commitment to exceptional quality, research, design, and innovation. Backed by leading technology investors, we are committed to the same excellence when it comes to hiring great talent.

We are a diverse team that is united by this common purpose and we are hiring the world’s best engineers, scientists, designers, product managers, operations experts, and customer service and business professionals. If the opportunity to build your career at one of the fastest growing companies is compelling, read on for more details.


Responsibilities:

  • Leverage Python to design scalable data solutions and data pipeline frameworks that automate the ingestion, processing, and delivery of both structured and unstructured data.
  • Drive development of large-scale data engineering projects.
  • Create data pipelines with Airflow, dbt, and the Google Cloud Platform suite.
  • Build, manage, and support data models.
  • Work comfortably in an agile Scrum environment using Jira.
  • Work in a group setting with our sales, operations, research, and finance teams on data storage, retrieval, and analysis.
  • Develop new systems and tools that enable stakeholders to consume and understand data more intuitively.
  • Create and establish design standards and assurance processes to ensure compatibility and operability of data connections, flows, and storage requirements.
  • Validate model transformations for data integrity (source/target table values and counts match expectations; data is properly cleansed).
  • Keep Juul on the cutting edge of data technology.

Our Data Stack:
  • Airflow, Fivetran
  • Google Cloud Platform (GCP): BigQuery, Storage, Dataflow, Pub/Sub, Cloud Functions/Run, Vertex AI, Cloud Build
  • dbt
  • Monte Carlo, Datafold
  • Tableau


Requirements:

  • 2+ years of data engineering or software engineering experience with a focus on data
  • Experience developing Python code to process large-scale datasets and workflows
  • Skilled with Python libraries and packages (pandas, PyArrow) in conjunction with the Google Cloud Platform (BigQuery, Storage, Pub/Sub)
  • Experience with version control (Git) and containers (Docker)
  • Proficient in analytical SQL in support of data modeling and manipulating multiple data formats


Bachelor’s degree in Computer Science, Engineering, Math, or equivalent experience preferred.


Perks & Benefits:

  • A place to grow your career. We’ll help you set big goals - and exceed them
  • People. Work with talented, committed and supportive teammates
  • Equity and performance bonuses. Every employee is a stakeholder in our success
  • Boundless snacks and drinks
  • Cell phone subsidy, commuter benefits and discounts on JUUL products
  • Excellent medical, dental and vision benefits