Company & Role Overview
Summary
Responsibilities
- Create source-to-target mapping logic and develop CDC/ETL/ELT jobs to support the data pipeline, following best practices and producing supporting documentation
- Design and implement data quality checks as well as error handling throughout the data pipeline
- Perform source system analysis for data ingestion into the data lake platform
- Maintain and enhance data models across all the organization's data systems
- Work with the Scrum Master and DataOps/Release engineers for PI Planning, Sprint planning, tracking, backlog grooming, and Sprint retrospectives
- Actively participate in code and design peer reviews
- Ensure solutions and processes comply with company security policies, governmental and local laws, data privacy regulations, and Sarbanes-Oxley (SOX)
- Develop and demonstrate a thorough understanding of the Trulieve business
- Develop and demonstrate a high degree of Data Literacy and understanding of Data and Analytics competencies, including reporting, data science, data engineering, and data governance
- Respond to and resolve support incidents as assigned, meeting service SLAs; assist in managing issues throughout the process, including problem identification, root cause analysis, and remediation
- Support user acceptance testing, production deployments, and hyper-care transitions
- Perform other duties as assigned.
Qualifications
- 3+ years of proven experience as a Data Engineer
- Bachelor's degree in computer science, math, analytics, engineering, or a related field
- Expert knowledge of writing and executing SQL queries and of relational database design
- Hands-on experience with a variety of data integration methodologies (ELT/ETL) in a data warehouse environment; experience using Python for web-based API integration and ETL scripting is a plus
- Ability to read and interpret solution design, data architecture and design, and data models
- Experience developing and supporting AWS cloud-based data stores, databases, and other data serving layers
- Solid understanding of information/data management techniques such as Data Warehousing, ETL Processing, and Data Modeling
- Solid understanding of various business processes and organizations and the data they generate and consume
- Experience designing and developing normalized and dimensional models
- Expertise in data warehouse performance optimization
- Ability to translate business requirements into data engineering requirements for building cloud-based data solutions
- Hands-on experience with SAP HANA modeling and strong knowledge of SAP ECC or S/4 tables and fields
- Hands-on experience with Parquet and key-value data formats preferred
- Experience with reporting tools such as Tableau, SAP Analytics Cloud, and Power BI
- Experience with Microsoft PowerPoint and diagramming tools such as Microsoft Visio
- Familiarity with all aspects of data management: data governance, data mastering, metadata management, and data taxonomies and ontologies
- Experience in an Agile Scrum or SAFe environment
- Strong written and verbal communication skills
- Ability to work independently as well as on a team
- Self-motivated, continuous improvement mindset and can work with minimal direction
- Experience in balancing multiple projects and efficiently meeting goals in a fast-paced environment
- Must be able to pass a level 2 background check