CDC Connectors for Apache Flink®
Flink CDC

Flink CDC is a distributed data integration tool for real-time and batch data. It brings simplicity and elegance to data integration, using YAML to describe the data movement and transformation in a data pipeline.

Flink CDC prioritizes efficient end-to-end data integration and offers enhanced functionality such as full database synchronization, sharding-table synchronization, schema evolution, and data transformation.
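
At its core, a pipeline definition needs little more than a source, a sink, and a few pipeline attributes. Below is a minimal sketch of that shape (connector options elided for brevity); the Getting Started example further down fills it in completely:

 source:
   type: mysql
   # connection options for the upstream database ...

 sink:
   type: doris
   # connection options for the downstream system ...

 pipeline:
   name: MySQL to Doris
   parallelism: 2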

(Figure: Flink CDC framework design)

Getting Started

  1. Prepare an Apache Flink cluster and set the FLINK_HOME environment variable.
  2. Download the Flink CDC release tarball, extract it, and put the pipeline connector jars into the Flink lib directory, as sketched below.
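A hedged shell sketch of steps 1 and 2; every version number, file name, and path here is a placeholder for illustration, not a pinned recommendation:

 # Placeholder versions and paths; substitute your actual downloads.
 export FLINK_HOME=/opt/flink-1.20.0          # point at your Flink installation
 tar -xzf flink-cdc-3.3.0-bin.tar.gz          # extract the Flink CDC release
 cp flink-cdc-pipeline-connector-mysql-3.3.0.jar $FLINK_HOME/lib/
 cp flink-cdc-pipeline-connector-doris-3.3.0.jar $FLINK_HOME/lib/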
  3. Create a YAML file describing the data source and data sink; the following example synchronizes all tables under the MySQL app_db database to Doris:
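 # source: the upstream database whose changes are captured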
 source:
   type: mysql
   hostname: localhost
   port: 3306
   username: root
   password: 123456
   tables: app_db.\.*

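 # sink: the downstream system the captured changes are written to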
 sink:
   type: doris
   fenodes: 127.0.0.1:8030
   username: root
   password: ""

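 # transform: per-table projection and filtering; addone and format are
 # user-defined functions registered in the pipeline section below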
 transform:
   - source-table: adb.web_order01
     projection: \*, format('%S', product_name) as product_name
     filter: addone(id) > 10 AND order_id > 100
     description: project fields and filter
   - source-table: adb.web_order02
     projection: \*, format('%S', product_name) as product_name
     filter: addone(id) > 20 AND order_id > 200
     description: project fields and filter

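 # route: map each source table to a (possibly differently named) sink table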
 route:
   - source-table: app_db.orders
     sink-table: ods_db.ods_orders
   - source-table: app_db.shipments
     sink-table: ods_db.ods_shipments
   - source-table: app_db.products
     sink-table: ods_db.ods_products

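 # pipeline: job-level attributes, including registration of the UDFs
 # used in the transform rules above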
 pipeline:
   name: Sync MySQL Database to Doris
   parallelism: 2
   user-defined-function:
     - name: addone
       classpath: com.example.functions.AddOneFunctionClass
     - name: format
       classpath: com.example.functions.FormatFunctionClass
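
The user-defined-function entries above register addone and format by class path. For orientation only, here is a minimal sketch of what such a class might look like, assuming the pipeline UDF convention of a public eval method that the transform engine resolves at runtime (see the Developer Guide for the exact interface contract; the package and class names simply mirror the example YAML):

 package com.example.functions;

 // Hypothetical sketch of the addone UDF referenced in the transform rules:
 // the expression addone(id) resolves to this class's eval method.
 public class AddOneFunctionClass {
     public Integer eval(Integer value) {
         return value == null ? null : value + 1;
     }
 }
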
  4. Submit the pipeline job using the flink-cdc.sh script.
 bash bin/flink-cdc.sh /path/mysql-to-doris.yaml
  5. View the job execution status through the Flink Web UI or in the downstream database.

Try it out yourself with our more detailed tutorial. You can also check the connector overview for a comprehensive catalog of the currently provided connectors and their detailed configuration options.

Join the Community

There are many ways to participate in the Apache Flink CDC community. The mailing lists are the primary place where all Flink committers are present. For user support and questions, use the user mailing list. If you've found a problem with Flink CDC, please create a Flink JIRA issue and tag it with the Flink CDC tag.
Bugs and feature requests can be discussed either on the dev mailing list or on JIRA.

Contributing

Contributions to Flink CDC are welcome; please see our Developer Guide and APIs Guide.

License

Apache 2.0 License.

Special Thanks

The Flink CDC community welcomes everyone willing to contribute, whether by reporting bugs, improving the documentation, or contributing code for bug fixes, new tests, or new features.
Thanks to all contributors for their enthusiastic contributions.