Data Engineer


Who we are

Founded in 1984, Summit Partners is a global alternative investment firm currently managing more than $35 billion in capital dedicated to growth equity, fixed income, and public equity opportunities. Summit invests across growth sectors of the economy and has invested in more than 500 companies in eCommerce, technology, healthcare, and other growth industries. These companies have completed more than 160 public equity offerings, and more than 200 have been acquired through strategic mergers and sales. For more information, check us out on LinkedIn!

We are building an industry-leading data platform to help investors find the next great growth companies. To do this, Summit Partners is looking for an experienced software engineer to specialize in data acquisition, transformation, processing, and storage.

We are growing our elite, agile engineering team to build next-generation, data-driven tools for investment analysis. We have several complex and engaging data projects in service of our ambitious goals and need smart, enthusiastic, and creative engineers to deliver them. This is an opportunity to work with a great team on engaging projects using state-of-the-art tools and services.

If using the latest tools and technologies to build scalable, high-velocity data pipelines, information extraction, and machine classification with terabytes of data interests you, contact us!

What you'll do

  • Work with an all-star team building a unique data and analytics platform used by our global investing teams
  • Build and maintain mechanisms to acquire and ingest data efficiently in ways that scale with more data and more data sources
  • Perform transformations to optimize application performance, analytics, and modeling
  • Determine the best data-store for each project, focusing on minimizing technological footprint
  • Work with our Data Science team to provide data for analysis and model-building
  • Work with our application team to provide data for operational services

Candidates should have at least five years of professional software-engineering experience as well as proficiency in modern, agile SDLC methodologies.

What you’re like

  • Curious, passionate, and motivated to find value in data and communicate it to teammates.
Candidates should also have at least two years of experience with the following:
  • Demonstrated experience with multiple relational and non-relational databases, particularly Postgres and MongoDB (or similar), as well as columnar and time-series data. Experience with Amazon's Redshift and Aurora offerings is a bonus
  • Confidence using Elasticsearch/OpenSearch or a similar search engine
  • Knowledge of Python and one or more ETL tools (e.g. AWS Glue, dbt, Airflow, etc.)
  • Familiarity with AWS infrastructure and services
  • Passion for building and deploying large-scale data pipelines
  • Comfort managing infrastructure with infrastructure-as-code tools like Terraform
  • Ability to work independently within a distributed team