Data Engineer

ThroughPut, a pioneer in Supply Chain AI software, is looking for qualified applicants with at least 3-5 years of experience in Data Engineering. If you love working with large volumes and varied types of data sets in a complex environment, this opportunity is for you. You will work with datasets from industries as varied as Automotive Manufacturing, Inventory Management & Warehousing, Transportation & Logistics, IoT, etc.

Job Responsibilities:
  • Design, model, architect, install, test and maintain highly scalable data management systems, including cloud-based and on-premise databases.
  • Architect, build & maintain data pipeline infrastructure for extraction, cleaning, transformation, and loading (batch processing & streaming) of data from a wide variety of sources, including data lakes and cloud-based relational or non-relational databases, employing Data Integration Tools (Talend, Informatica, etc.) and/or scripting languages like Python.
  • Performance tuning, monitoring, alerting and support for all data infrastructure.
  • Maintain data security & privacy guidelines. Implement backup & disaster recovery procedures.
  • Work with stakeholders (Internal & External) including Product, Engineering & Operations teams to assist with data-related issues and ensure adoption.
Skill Requirements:
  • Minimum 3-5 years’ experience in a Data Engineer position or similar role.
  • Advanced SQL knowledge and experience working with relational/non-relational databases, query authoring (SQL) tools and excellent SQL troubleshooting skills.
  • Advanced knowledge using Data Integration Tools (Talend, Informatica etc.) to build robust data pipelines.
  • Expertise with cloud-based Big Data & streaming technologies, including setting up data lakes, Hadoop, Apache Storm / Kafka, and real-time streaming with Kinesis.
  • Set up & maintain document & cache stores including MongoDB, Redis, and/or Aerospike.
  • Experience setting up and maintaining Apache Spark clusters.
  • Database management, including setup, design, architecture & maintenance of standard relational & columnar databases such as MySQL, Postgres, Redshift, and Snowflake.
  • Working knowledge of Shell scripting, Python and / or Java.
  • Ability to work in a rapidly evolving, fast paced start-up environment.
  • Bachelor’s degree in Computer Science or related field.
Location:

Being located in the Seattle, WA region or the Silicon Valley, CA region is a nice-to-have but not required. Working remotely from anywhere else in the US works well, provided the candidate is accustomed to successfully completing projects remotely and is open to occasional travel (3-6 times per year) for team-building on/off-site meetings.

About ThroughPut:

ThroughPut Inc. (www.throughput.ai) is the Artificial Intelligence (AI) Supply Chain pioneer that enables companies to detect, prioritize and alleviate dynamic operational bottlenecks in real-time. ThroughPut’s supply chain Bottleneck Management System (BMS) runs on the ELI platform, helping clients utilize their existing enterprise databases, such as ERP, MES, and PLC systems, to automatically solve bottlenecks today. The technology was designed by Fortune 500 geo-market logistics leaders and Silicon Valley analytics experts with decades of experience in the industrial technology and enterprise space. ELI thinks like an operations manager, providing human domain expertise and timely insights to the right operations leads, which current, status-quo static Business Intelligence and Analytics tools neither effectively capture nor leverage. ThroughPut’s dynamic insights include real-time resource allocation recommendations, granular root causes, and operational process stability analysis.

Contact:

jobs@throughput.ai

Apply for this Position

Apply to our company, and together we’ll work towards creating a waste-free environment.
