Job Description
<p><u><strong>Key Responsibilities:<br><br></strong></u></p><ul><li>Design and develop conceptual, logical, and physical data models to support business requirements and data architecture standards.</li><li>Collaborate with business analysts, data architects, and data engineers to understand data requirements and translate them into effective data models.</li><li>Ensure data models are aligned with industry best practices, data governance policies, and regulatory compliance requirements.</li><li>Conduct data analysis and profiling to identify data quality issues, anomalies, and inconsistencies.</li><li>Optimize data models for performance, scalability, and data retrieval efficiency.</li><li>Work closely with database administrators and developers to implement data models in database management systems.</li><li>Document data model designs, data flows, and metadata for effective communication and reference.</li><li>Convert Oracle SQL statements into HQL scripts for data extraction, transformation, and loading in the Hive environment.</li><li>Conduct performance tuning and optimization of HQL scripts to enhance query efficiency and data processing speed.</li><li>Perform data modelling reviews and provide recommendations for improvement.</li><li>Stay up to date with emerging trends, technologies, and best practices in data modelling and database design.</li><li>Participate in data governance initiatives and contribute to the overall data management strategy of the organization.</li></ul><p><u><strong>Qualifications and Skills:<br><br></strong></u></p><ul><li>Bachelor’s or Master’s degree in Computer Science, Information Systems, or a related field.</li><li>5+ years of experience in data modelling, data warehousing, and ETL processes.</li><li>Strong understanding of data modelling concepts (e.g., entity-relationship modelling, dimensional modelling).</li><li>Experience in designing data models for relational databases and familiarity with database management systems (e.g., Oracle, SQL Server, MySQL).</li><li>Knowledge of big data environments such as Apache Hadoop, Apache Spark, Hive, or Apache Cassandra, and the ability to design data models suitable for distributed and scalable processing.</li><li>Experience using version control systems such as Git to manage changes, track revisions, and collaborate effectively on data model designs.</li><li>Knowledge of data governance principles and practices.</li><li>Excellent analytical and problem-solving skills.</li><li>Strong communication and collaboration skills to work effectively with cross-functional teams.</li><li>Ability to prioritize tasks and work on multiple projects simultaneously.</li><li>Attention to detail and a commitment to producing high-quality deliverables.</li></ul>