We recently introduced our HowTo blog series, which is designed to present simple use-case tutorials to help you as you evaluate Anypoint Platform. The goal of this blog post is to give you a short introduction on how to implement a simple ETL (Extract, Transform, and Load) scenario using MuleSoft's batch processing module. Anypoint Platform brings together leading application integration technology with powerful data integration capabilities for implementing such a use case.

In a typical scenario, you have to extract large amounts of data, either by reading a flat file, by polling a database, or by invoking an API using the platform. Once you have obtained the data, you have to transform it to conform to the target system's requirements. This transformation might require merging the extracted data with data from other sources, changing the data format, and then mapping it into the target system. The transformed data is then loaded into target systems such as SaaS applications, databases, or Hadoop.

In this blog, I will be demonstrating an example which satisfies some of the key requirements for a typical ETL flow (sketched in the configuration below):

- Ability to extract only new/changed records from a database.
- Ability to transform the records by merging them with data from different tables.
- Ability to process these records in parallel for accelerated loading.
- Follow a predefined load/step/commit model to ensure reliability and recovery.
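In Mule 3.x, these requirements map naturally onto the batch module's phases. The following is a minimal, illustrative sketch rather than the exact flow built in this post: the `accounts` table, the `lastAccountId` watermark variable, the `Source_DB` configuration, and the credentials are all hypothetical placeholders. A poll with a watermark handles incremental extraction, batch steps process records in parallel, and a batch commit block accumulates records for bulk loading.

```xml
<mule xmlns="http://www.mulesoft.org/schema/mule/core"
      xmlns:batch="http://www.mulesoft.org/schema/mule/batch"
      xmlns:db="http://www.mulesoft.org/schema/mule/db"
      xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
      xsi:schemaLocation="
        http://www.mulesoft.org/schema/mule/core http://www.mulesoft.org/schema/mule/core/current/mule.xsd
        http://www.mulesoft.org/schema/mule/batch http://www.mulesoft.org/schema/mule/batch/current/mule-batch.xsd
        http://www.mulesoft.org/schema/mule/db http://www.mulesoft.org/schema/mule/db/current/mule-db.xsd">

    <!-- Hypothetical source database; replace host/credentials with your own -->
    <db:mysql-config name="Source_DB" host="localhost" port="3306"
                     user="user" password="secret" database="etl_demo"/>

    <batch:job name="etl-batch-job">
        <!-- EXTRACT: poll the source table; the watermark remembers the
             highest id seen, so each run selects only new records -->
        <batch:input>
            <poll>
                <fixed-frequency-scheduler frequency="1" timeUnit="HOURS"/>
                <watermark variable="lastAccountId" default-expression="0"
                           selector="MAX" selector-expression="#[payload.id]"/>
                <db:select config-ref="Source_DB">
                    <db:parameterized-query><![CDATA[
                        SELECT id, name, email FROM accounts
                        WHERE id > #[flowVars.lastAccountId]
                    ]]></db:parameterized-query>
                </db:select>
            </poll>
        </batch:input>

        <batch:process-records>
            <!-- TRANSFORM: records within a step are processed in parallel;
                 merge in data from other tables / remap formats here -->
            <batch:step name="transform-step">
                <!-- lookup, merge, and mapping logic goes here -->
            </batch:step>

            <!-- LOAD: the commit block accumulates records into groups
                 of 100 so the target can be written in bulk -->
            <batch:step name="load-step">
                <batch:commit size="100">
                    <!-- bulk insert into the target system here -->
                </batch:commit>
            </batch:step>
        </batch:process-records>

        <!-- Report results once all records have been processed -->
        <batch:on-complete>
            <logger level="INFO"
                    message="Loaded #[payload.successfulRecords] records, #[payload.failedRecords] failures"/>
        </batch:on-complete>
    </batch:job>
</mule>
```

The poll's watermark gives you the new/changed-records extraction, the steps give you parallel record processing, and the input/step/commit structure is exactly the predefined load model that lets a failed job report and recover per record rather than failing wholesale.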