[docs] Bump connector version to flink 1.15.2 in docs (#1684)

Hang Ruan 2 years ago committed by GitHub
parent 1111f0270e
commit 8685fd4027

@@ -31,6 +31,7 @@ The following table shows the version mapping between Flink<sup>®</sup> CDC Con
| <font color="DarkCyan">2.0.*</font> | <font color="MediumVioletRed">1.13.*</font> |
| <font color="DarkCyan">2.1.*</font> | <font color="MediumVioletRed">1.13.*</font> |
| <font color="DarkCyan">2.2.*</font> | <font color="MediumVioletRed">1.13.\*</font>, <font color="MediumVioletRed">1.14.\*</font> |
+| <font color="DarkCyan">2.3.*</font> | <font color="MediumVioletRed">1.13.\*</font>, <font color="MediumVioletRed">1.14.\*</font>, <font color="MediumVioletRed">1.15.\*</font> |
## Features

@@ -405,7 +405,7 @@ Data Type Mapping
----------------
[BSON](https://docs.mongodb.com/manual/reference/bson-types/), short for **Binary JSON**, is a binary-encoded serialization of a JSON-like format used to store documents and make remote procedure calls in MongoDB.
-[Flink SQL Data Type](https://ci.apache.org/projects/flink/flink-docs-release-1.13/docs/dev/table/types/) is similar to the SQL standard's data type terminology, which describes the logical type of a value in the table ecosystem. It can be used to declare input and/or output types of operations.
+[Flink SQL Data Type](https://nightlies.apache.org/flink/flink-docs-release-1.15/docs/dev/table/types/) is similar to the SQL standard's data type terminology, which describes the logical type of a value in the table ecosystem. It can be used to declare input and/or output types of operations.
To enable Flink SQL to process data from heterogeneous data sources, the data types of those sources need to be uniformly converted to Flink SQL data types.
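As an illustration of this conversion, a MongoDB collection might be declared as a Flink SQL table roughly as follows. This is a minimal sketch; the collection, field names, and credentials are hypothetical, and BSON types are mapped to plausible Flink SQL counterparts:
```sql
-- Hypothetical collection "products": BSON ObjectId -> STRING,
-- BSON String -> STRING, BSON Double -> DOUBLE, BSON Date -> TIMESTAMP_LTZ(3)
CREATE TABLE products (
  _id STRING,
  name STRING,
  weight DOUBLE,
  created_at TIMESTAMP_LTZ(3),
  PRIMARY KEY (_id) NOT ENFORCED
) WITH (
  'connector' = 'mongodb-cdc',
  'hosts' = 'localhost:27017',
  'username' = 'flinkuser',
  'password' = 'flinkpw',
  'database' = 'inventory',
  'collection' = 'products'
);
```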
@@ -525,7 +525,7 @@ Reference
- [Replica set protocol](https://docs.mongodb.com/manual/reference/replica-configuration/#mongodb-rsconf-rsconf.protocolVersion)
- [Connection String Options](https://docs.mongodb.com/manual/reference/connection-string/#std-label-connections-connection-options)
- [BSON Types](https://docs.mongodb.com/manual/reference/bson-types/)
-- [Flink DataTypes](https://ci.apache.org/projects/flink/flink-docs-release-1.13/docs/dev/table/types/)
+- [Flink DataTypes](https://nightlies.apache.org/flink/flink-docs-release-1.15/docs/dev/table/types/)
FAQ
--------

@@ -686,7 +686,7 @@ $ ./bin/flink run \
--fromSavepoint /tmp/flink-savepoints/savepoint-cca7bc-bb1e257f0dab \
./FlinkCDCExample.jar
```
-**Note:** Please refer to the doc [Restore the job from previous savepoint](https://nightlies.apache.org/flink/flink-docs-release-1.14/docs/deployment/cli/#command-line-interface) for more details.
+**Note:** Please refer to the doc [Restore the job from previous savepoint](https://nightlies.apache.org/flink/flink-docs-release-1.15/docs/deployment/cli/#command-line-interface) for more details.
Data Type Mapping
----------------

@@ -693,7 +693,7 @@ $ ./bin/flink run \
--fromSavepoint /tmp/flink-savepoints/savepoint-cca7bc-bb1e257f0dab \
./FlinkCDCExample.jar
```
-**Note:** Please refer to the doc [Restore the job from previous savepoint](https://nightlies.apache.org/flink/flink-docs-release-1.14/docs/deployment/cli/#command-line-interface) for more details.
+**Note:** Please refer to the doc [Restore the job from previous savepoint](https://nightlies.apache.org/flink/flink-docs-release-1.15/docs/deployment/cli/#command-line-interface) for more details.
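For context, the savepoint restored above can be produced with the standard Flink CLI; a minimal sketch, where `<jobId>` stands in for the actual job ID:
```shell
# Trigger a savepoint for a running job (replace <jobId> with the actual job ID)
$ ./bin/flink savepoint <jobId> /tmp/flink-savepoints

# Or stop the job gracefully while writing a final savepoint
$ ./bin/flink stop --savepointPath /tmp/flink-savepoints <jobId>
```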
Data Type Mapping
----------------

@@ -1,7 +1,7 @@
# Changelog JSON Format
**WARNING:** The CDC format `changelog-json` has been deprecated since Flink CDC version 2.2.
-The CDC format `changelog-json` was introduced when Flink did not yet offer any CDC format. Flink now ships several well-maintained CDC formats, i.e. [Debezium CDC, MAXWELL CDC, CANAL CDC](https://nightlies.apache.org/flink/flink-docs-release-1.14/docs/connectors/table/formats/overview/); we recommend users adopt those CDC formats instead.
+The CDC format `changelog-json` was introduced when Flink did not yet offer any CDC format. Flink now ships several well-maintained CDC formats, i.e. [Debezium CDC, MAXWELL CDC, CANAL CDC](https://nightlies.apache.org/flink/flink-docs-release-1.15/docs/connectors/table/formats/overview/); we recommend users adopt those CDC formats instead.
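For example, a Kafka source table can consume Debezium change events with the built-in `debezium-json` format; a minimal sketch, where the topic name and schema are hypothetical:
```sql
-- Hypothetical Kafka topic carrying Debezium-formatted change events
CREATE TABLE orders_cdc (
  order_id BIGINT,
  amount DECIMAL(10, 2)
) WITH (
  'connector' = 'kafka',
  'topic' = 'orders',
  'properties.bootstrap.servers' = 'localhost:9092',
  'scan.startup.mode' = 'earliest-offset',
  'format' = 'debezium-json'
);
```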
### Compatibility Note

@@ -61,7 +61,7 @@ docker-compose down
*Download links are available only for stable releases; SNAPSHOT dependencies need to be built by yourself.*
-- [flink-sql-connector-elasticsearch7_2.11-1.13.2.jar](https://repo.maven.apache.org/maven2/org/apache/flink/flink-sql-connector-elasticsearch7_2.11/1.13.2/flink-sql-connector-elasticsearch7_2.11-1.13.2.jar)
+- [flink-sql-connector-elasticsearch7-1.15.2.jar](https://repo.maven.apache.org/maven2/org/apache/flink/flink-sql-connector-elasticsearch7/1.15.2/flink-sql-connector-elasticsearch7-1.15.2.jar)
- [flink-sql-connector-db2-cdc-2.3-SNAPSHOT.jar](https://repo1.maven.org/maven2/com/ververica/flink-sql-connector-db2-cdc/2.3-SNAPSHOT/flink-sql-connector-db2-cdc-2.3-SNAPSHOT.jar)
**3. Launch a Flink cluster and start a Flink SQL CLI**
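With the JAR packages in place under `lib/`, this step boils down to two standard scripts; a minimal sketch assuming a default Flink distribution:
```shell
# Run from the Flink home directory
$ ./bin/start-cluster.sh   # start a local standalone cluster
$ ./bin/sql-client.sh      # open the Flink SQL CLI
```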

@@ -109,7 +109,7 @@ db.customers.insertMany([
```Download links are available only for stable releases; SNAPSHOT dependencies need to be built by yourself.```
-- [flink-sql-connector-elasticsearch7_2.11-1.13.2.jar](https://repo.maven.apache.org/maven2/org/apache/flink/flink-sql-connector-elasticsearch7_2.11/1.13.2/flink-sql-connector-elasticsearch7_2.11-1.13.2.jar)
+- [flink-sql-connector-elasticsearch7-1.15.2.jar](https://repo.maven.apache.org/maven2/org/apache/flink/flink-sql-connector-elasticsearch7/1.15.2/flink-sql-connector-elasticsearch7-1.15.2.jar)
- [flink-sql-connector-mongodb-cdc-2.3-SNAPSHOT.jar](https://repo1.maven.org/maven2/com/ververica/flink-sql-connector-mongodb-cdc/2.3-SNAPSHOT/flink-sql-connector-mongodb-cdc-2.3-SNAPSHOT.jar)
4. Launch a Flink cluster, then start a Flink SQL CLI and execute the following SQL statements in it:
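Typically, the first statement executed in the CLI enables checkpointing so that the CDC source can commit its progress; a minimal sketch with an arbitrarily chosen interval:
```sql
-- Enable checkpointing; the 3-second interval is an arbitrary example value
SET 'execution.checkpointing.interval' = '3s';
```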

@@ -73,11 +73,11 @@ This command automatically starts all the containers defined in the Docker Compo
We can also visit [http://localhost:5601/](http://localhost:5601/) to see if Kibana is running normally.
### Preparing Flink and JAR package required
-1. Download [Flink 1.13.2](https://archive.apache.org/dist/flink/flink-1.13.2/flink-1.13.2-bin-scala_2.11.tgz) and unzip it to the directory `flink-1.13.2`
-2. Download the required JAR packages listed below and put them under `flink-1.13.2/lib/`:
+1. Download [Flink 1.15.2](https://archive.apache.org/dist/flink/flink-1.15.2/flink-1.15.2-bin-scala_2.12.tgz) and unzip it to the directory `flink-1.15.2`
+2. Download the required JAR packages listed below and put them under `flink-1.15.2/lib/`:
**Download links are available only for stable releases; SNAPSHOT dependencies need to be built by yourself.**
-- [flink-sql-connector-elasticsearch7_2.11-1.13.2.jar](https://repo.maven.apache.org/maven2/org/apache/flink/flink-sql-connector-elasticsearch7_2.11/1.13.2/flink-sql-connector-elasticsearch7_2.11-1.13.2.jar)
+- [flink-sql-connector-elasticsearch7-1.15.2.jar](https://repo.maven.apache.org/maven2/org/apache/flink/flink-sql-connector-elasticsearch7/1.15.2/flink-sql-connector-elasticsearch7-1.15.2.jar)
- [flink-sql-connector-mysql-cdc-2.3-SNAPSHOT.jar](https://repo1.maven.org/maven2/com/ververica/flink-sql-connector-mysql-cdc/2.3-SNAPSHOT/flink-sql-connector-mysql-cdc-2.3-SNAPSHOT.jar)
- [flink-sql-connector-postgres-cdc-2.3-SNAPSHOT.jar](https://repo1.maven.org/maven2/com/ververica/flink-sql-connector-postgres-cdc/2.3-SNAPSHOT/flink-sql-connector-postgres-cdc-2.3-SNAPSHOT.jar)
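Once the cluster is running, a MySQL source for a tutorial like this could be declared roughly as follows; a minimal sketch, where the hostname, credentials, and schema are hypothetical:
```sql
-- Hypothetical MySQL source table captured via the mysql-cdc connector
CREATE TABLE orders (
  order_id INT,
  customer_name STRING,
  price DECIMAL(10, 5),
  PRIMARY KEY (order_id) NOT ENFORCED
) WITH (
  'connector' = 'mysql-cdc',
  'hostname' = 'localhost',
  'port' = '3306',
  'username' = 'root',
  'password' = '123456',
  'database-name' = 'mydb',
  'table-name' = 'orders'
);
```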
@@ -151,7 +151,7 @@ We can also visit [http://localhost:5601/](http://localhost:5601/) to see if Kib
1. Use the following command to change to the Flink directory:
```
-cd flink-1.13.2
+cd flink-1.15.2
```
2. Use the following command to start a Flink cluster:
@@ -311,7 +311,7 @@ After finishing the tutorial, run the following command to stop all containers i
```shell
docker-compose down
```
-Run the following command in the Flink directory `flink-1.13.2` to stop the Flink cluster:
+Run the following command in the Flink directory `flink-1.15.2` to stop the Flink cluster:
```shell
./bin/stop-cluster.sh
```

@@ -111,7 +111,7 @@ VALUES (default, '2020-07-30 10:08:22', 'Jark', 50.50, 102, false),
```Download links are only available for stable releases.```
-- [flink-sql-connector-elasticsearch7_2.11-1.13.2.jar](https://repo.maven.apache.org/maven2/org/apache/flink/flink-sql-connector-elasticsearch7_2.11/1.13.2/flink-sql-connector-elasticsearch7_2.11-1.13.2.jar)
+- [flink-sql-connector-elasticsearch7-1.15.2.jar](https://repo.maven.apache.org/maven2/org/apache/flink/flink-sql-connector-elasticsearch7/1.15.2/flink-sql-connector-elasticsearch7-1.15.2.jar)
- [flink-sql-connector-oceanbase-cdc-2.3-SNAPSHOT.jar](https://repo1.maven.org/maven2/com/ververica/flink-sql-connector-oceanbase-cdc/2.3-SNAPSHOT/flink-sql-connector-oceanbase-cdc-2.3-SNAPSHOT.jar)
### Use Flink DDL to create dynamic table in Flink SQL CLI
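A typical pairing under this heading is a CDC source table plus an Elasticsearch sink table; a minimal sketch of the sink side, where the index name and schema are hypothetical:
```sql
-- Hypothetical Elasticsearch sink; records are upserted by primary key
CREATE TABLE enriched_orders_sink (
  order_id INT,
  customer_name STRING,
  PRIMARY KEY (order_id) NOT ENFORCED
) WITH (
  'connector' = 'elasticsearch-7',
  'hosts' = 'http://localhost:9200',
  'index' = 'enriched_orders'
);
```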

@@ -54,7 +54,7 @@ docker-compose down
*Download links are available only for stable releases; SNAPSHOT dependencies need to be built by yourself.*
-- [flink-sql-connector-elasticsearch7_2.11-1.13.2.jar](https://repo.maven.apache.org/maven2/org/apache/flink/flink-sql-connector-elasticsearch7_2.11/1.13.2/flink-sql-connector-elasticsearch7_2.11-1.13.2.jar)
+- [flink-sql-connector-elasticsearch7-1.15.2.jar](https://repo.maven.apache.org/maven2/org/apache/flink/flink-sql-connector-elasticsearch7/1.15.2/flink-sql-connector-elasticsearch7-1.15.2.jar)
- [flink-sql-connector-oracle-cdc-2.3-SNAPSHOT.jar](https://repo1.maven.org/maven2/com/ververica/flink-sql-connector-oracle-cdc/2.3-SNAPSHOT/flink-sql-connector-oracle-cdc-2.3-SNAPSHOT.jar)
**3. Launch a Flink cluster and start a Flink SQL CLI**

@@ -63,12 +63,12 @@ This command automatically starts all the containers defined in the Docker Compo
We can also visit [http://localhost:5601/](http://localhost:5601/) to see if Kibana is running normally.
### Preparing Flink and JAR package required
-1. Download [Flink 1.13.2](https://archive.apache.org/dist/flink/flink-1.13.2/flink-1.13.2-bin-scala_2.11.tgz) and unzip it to the directory `flink-1.13.2`
-2. Download the required JAR packages listed below and put them under `flink-1.13.2/lib/`:
+1. Download [Flink 1.15.2](https://archive.apache.org/dist/flink/flink-1.15.2/flink-1.15.2-bin-scala_2.12.tgz) and unzip it to the directory `flink-1.15.2`
+2. Download the required JAR packages listed below and put them under `flink-1.15.2/lib/`:
**Download links are available only for stable releases; SNAPSHOT dependencies need to be built by yourself.**
- [flink-sql-connector-mysql-cdc-2.3-SNAPSHOT.jar](https://repo1.maven.org/maven2/com/ververica/flink-sql-connector-mysql-cdc/2.3-SNAPSHOT/flink-sql-connector-mysql-cdc-2.3-SNAPSHOT.jar)
-- [flink-sql-connector-elasticsearch7_2.11-1.13.2.jar](https://repo.maven.apache.org/maven2/org/apache/flink/flink-sql-connector-elasticsearch7_2.11/1.13.2/flink-sql-connector-elasticsearch7_2.11-1.13.2.jar)
+- [flink-sql-connector-elasticsearch7-1.15.2.jar](https://repo.maven.apache.org/maven2/org/apache/flink/flink-sql-connector-elasticsearch7/1.15.2/flink-sql-connector-elasticsearch7-1.15.2.jar)
### Preparing data in databases
#### Preparing data in PolarDB-X
@@ -116,7 +116,7 @@ We can also visit [http://localhost:5601/](http://localhost:5601/) to see if Kib
1. Use the following command to change to the Flink directory:
```
-cd flink-1.13.2
+cd flink-1.15.2
```
2. Use the following command to start a Flink cluster:
@@ -255,7 +255,7 @@ After finishing the tutorial, run the following command to stop all containers i
```shell
docker-compose down
```
-Run the following command in the Flink directory `flink-1.13.2` to stop the Flink cluster:
+Run the following command in the Flink directory `flink-1.15.2` to stop the Flink cluster:
```shell
./bin/stop-cluster.sh
```

@@ -63,7 +63,7 @@ docker-compose down
*Download links are available only for stable releases; SNAPSHOT dependencies need to be built by yourself.*
-- [flink-sql-connector-elasticsearch7_2.11-1.13.2.jar](https://repo.maven.apache.org/maven2/org/apache/flink/flink-sql-connector-elasticsearch7_2.11/1.13.2/flink-sql-connector-elasticsearch7_2.11-1.13.2.jar)
+- [flink-sql-connector-elasticsearch7-1.15.2.jar](https://repo.maven.apache.org/maven2/org/apache/flink/flink-sql-connector-elasticsearch7/1.15.2/flink-sql-connector-elasticsearch7-1.15.2.jar)
- [flink-sql-connector-sqlserver-cdc-2.3-SNAPSHOT.jar](https://repo1.maven.org/maven2/com/ververica/flink-sql-connector-sqlserver-cdc/2.3-SNAPSHOT/flink-sql-connector-sqlserver-cdc-2.3-SNAPSHOT.jar)

@@ -116,7 +116,7 @@ docker-compose down
*Download links are available only for stable releases; SNAPSHOT dependencies need to be built by yourself.*
-- [flink-sql-connector-elasticsearch7_2.11-1.13.2.jar](https://repo.maven.apache.org/maven2/org/apache/flink/flink-sql-connector-elasticsearch7_2.11/1.13.2/flink-sql-connector-elasticsearch7_2.11-1.13.2.jar)
+- [flink-sql-connector-elasticsearch7-1.15.2.jar](https://repo.maven.apache.org/maven2/org/apache/flink/flink-sql-connector-elasticsearch7/1.15.2/flink-sql-connector-elasticsearch7-1.15.2.jar)
- [flink-sql-connector-tidb-cdc-2.3-SNAPSHOT.jar](https://repo1.maven.org/maven2/com/ververica/flink-sql-connector-tidb-cdc/2.3-SNAPSHOT/flink-sql-connector-tidb-cdc-2.3-SNAPSHOT.jar)

@@ -109,7 +109,7 @@ db.customers.insertMany([
```Download links are only available for released versions; SNAPSHOT versions need to be built locally.```
-- [flink-sql-connector-elasticsearch7_2.11-1.13.2.jar](https://repo.maven.apache.org/maven2/org/apache/flink/flink-sql-connector-elasticsearch7_2.11/1.13.2/flink-sql-connector-elasticsearch7_2.11-1.13.2.jar)
+- [flink-sql-connector-elasticsearch7-1.15.2.jar](https://repo.maven.apache.org/maven2/org/apache/flink/flink-sql-connector-elasticsearch7/1.15.2/flink-sql-connector-elasticsearch7-1.15.2.jar)
- [flink-sql-connector-mongodb-cdc-2.3-SNAPSHOT.jar](https://repo1.maven.org/maven2/com/ververica/flink-sql-connector-mongodb-cdc/2.3-SNAPSHOT/flink-sql-connector-mongodb-cdc-2.3-SNAPSHOT.jar)
4. Then launch a Flink cluster and start the Flink SQL CLI.

@@ -69,11 +69,11 @@ docker-compose up -d
This command automatically starts all the containers defined in the Docker Compose configuration in detached mode. You can use docker ps to check whether these containers started properly, or visit [http://localhost:5601/](http://localhost:5601/) to see if Kibana is running normally.
### Download Flink and the required dependencies
-1. Download [Flink 1.13.2](https://archive.apache.org/dist/flink/flink-1.13.2/flink-1.13.2-bin-scala_2.11.tgz) and unzip it to the directory `flink-1.13.2`
-2. Download the dependencies listed below and put them under the directory `flink-1.13.2/lib/`:
+1. Download [Flink 1.15.2](https://archive.apache.org/dist/flink/flink-1.15.2/flink-1.15.2-bin-scala_2.12.tgz) and unzip it to the directory `flink-1.15.2`
+2. Download the dependencies listed below and put them under the directory `flink-1.15.2/lib/`:
**Download links are only available for released versions; SNAPSHOT versions need to be built locally.**
-- [flink-sql-connector-elasticsearch7_2.11-1.13.2.jar](https://repo.maven.apache.org/maven2/org/apache/flink/flink-sql-connector-elasticsearch7_2.11/1.13.2/flink-sql-connector-elasticsearch7_2.11-1.13.2.jar)
+- [flink-sql-connector-elasticsearch7-1.15.2.jar](https://repo.maven.apache.org/maven2/org/apache/flink/flink-sql-connector-elasticsearch7/1.15.2/flink-sql-connector-elasticsearch7-1.15.2.jar)
- [flink-sql-connector-mysql-cdc-2.3-SNAPSHOT.jar](https://repo1.maven.org/maven2/com/ververica/flink-sql-connector-mysql-cdc/2.3-SNAPSHOT/flink-sql-connector-mysql-cdc-2.3-SNAPSHOT.jar)
- [flink-sql-connector-postgres-cdc-2.3-SNAPSHOT.jar](https://repo1.maven.org/maven2/com/ververica/flink-sql-connector-postgres-cdc/2.3-SNAPSHOT/flink-sql-connector-postgres-cdc-2.3-SNAPSHOT.jar)
@@ -147,7 +147,7 @@ docker-compose up -d
1. Use the following command to change to the Flink directory:
```
-cd flink-1.13.2
+cd flink-1.15.2
```
2. Use the following command to start a Flink cluster:
@@ -308,7 +308,7 @@ Flink SQL> INSERT INTO enriched_orders
```shell
docker-compose down
```
-Run the following command in the Flink directory `flink-1.13.2` to stop the Flink cluster:
+Run the following command in the Flink directory `flink-1.15.2` to stop the Flink cluster:
```shell
./bin/stop-cluster.sh
```

@@ -110,7 +110,7 @@ VALUES (default, '2020-07-30 10:08:22', 'Jark', 50.50, 102, false),
```Download links are only available for released versions; SNAPSHOT versions need to be built locally.```
-- [flink-sql-connector-elasticsearch7_2.11-1.13.2.jar](https://repo.maven.apache.org/maven2/org/apache/flink/flink-sql-connector-elasticsearch7_2.11/1.13.2/flink-sql-connector-elasticsearch7_2.11-1.13.2.jar)
+- [flink-sql-connector-elasticsearch7-1.15.2.jar](https://repo.maven.apache.org/maven2/org/apache/flink/flink-sql-connector-elasticsearch7/1.15.2/flink-sql-connector-elasticsearch7-1.15.2.jar)
- [flink-sql-connector-oceanbase-cdc-2.3-SNAPSHOT.jar](https://repo1.maven.org/maven2/com/ververica/flink-sql-connector-oceanbase-cdc/2.3-SNAPSHOT/flink-sql-connector-oceanbase-cdc-2.3-SNAPSHOT.jar)
### Create tables with Flink DDL in the Flink SQL CLI

@@ -54,7 +54,7 @@ docker-compose down
*Download links are only available for released versions; SNAPSHOT versions need to be built locally.*
-- [flink-sql-connector-elasticsearch7_2.11-1.13.2.jar](https://repo.maven.apache.org/maven2/org/apache/flink/flink-sql-connector-elasticsearch7_2.11/1.13.2/flink-sql-connector-elasticsearch7_2.11-1.13.2.jar)
+- [flink-sql-connector-elasticsearch7-1.15.2.jar](https://repo.maven.apache.org/maven2/org/apache/flink/flink-sql-connector-elasticsearch7/1.15.2/flink-sql-connector-elasticsearch7-1.15.2.jar)
- [flink-sql-connector-oracle-cdc-2.3-SNAPSHOT.jar](https://repo1.maven.org/maven2/com/ververica/flink-sql-connector-oracle-cdc/2.3-SNAPSHOT/flink-sql-connector-oracle-cdc-2.3-SNAPSHOT.jar)
**3. Then launch a Flink cluster and start the Flink SQL CLI:**

@@ -105,12 +105,12 @@ VALUES (default, '2020-07-30 10:08:22', 'Jark', 50.50, 102, false),
```
### Download Flink and the required dependencies
-1. Download [Flink 1.13.2](https://archive.apache.org/dist/flink/flink-1.13.2/flink-1.13.2-bin-scala_2.11.tgz) and unzip it to the directory `flink-1.13.2`
-2. Download the dependencies listed below and put them under the directory `flink-1.13.2/lib/`
+1. Download [Flink 1.15.2](https://archive.apache.org/dist/flink/flink-1.15.2/flink-1.15.2-bin-scala_2.12.tgz) and unzip it to the directory `flink-1.15.2`
+2. Download the dependencies listed below and put them under the directory `flink-1.15.2/lib/`
```Download links are only available for released versions; SNAPSHOT versions need to be built locally.```
- For subscribing to the PolarDB-X binlog: [flink-sql-connector-mysql-cdc-2.3-SNAPSHOT.jar](https://repo1.maven.org/maven2/com/ververica/flink-sql-connector-mysql-cdc/2.3-SNAPSHOT/flink-sql-connector-mysql-cdc-2.3-SNAPSHOT.jar)
-- For writing to Elasticsearch: [flink-sql-connector-elasticsearch7_2.11-1.13.2.jar](https://repo.maven.apache.org/maven2/org/apache/flink/flink-sql-connector-elasticsearch7_2.11/1.13.2/flink-sql-connector-elasticsearch7_2.11-1.13.2.jar)
+- For writing to Elasticsearch: [flink-sql-connector-elasticsearch7-1.15.2.jar](https://repo.maven.apache.org/maven2/org/apache/flink/flink-sql-connector-elasticsearch7/1.15.2/flink-sql-connector-elasticsearch7-1.15.2.jar)
3. Start the Flink service:
```shell
./bin/start-cluster.sh
```

@@ -63,7 +63,7 @@ docker-compose down
```Download links are only available for released versions; SNAPSHOT versions need to be built locally.```
-- [flink-sql-connector-elasticsearch7_2.11-1.13.2.jar](https://repo.maven.apache.org/maven2/org/apache/flink/flink-sql-connector-elasticsearch7_2.11/1.13.2/flink-sql-connector-elasticsearch7_2.11-1.13.2.jar)
+- [flink-sql-connector-elasticsearch7-1.15.2.jar](https://repo.maven.apache.org/maven2/org/apache/flink/flink-sql-connector-elasticsearch7/1.15.2/flink-sql-connector-elasticsearch7-1.15.2.jar)
- [flink-sql-connector-sqlserver-cdc-2.3-SNAPSHOT.jar](https://repo1.maven.org/maven2/com/ververica/flink-sql-connector-sqlserver-cdc/2.3-SNAPSHOT/flink-sql-connector-sqlserver-cdc-2.3-SNAPSHOT.jar)

@@ -116,7 +116,7 @@ docker-compose down
```Download links are only available for released versions; SNAPSHOT versions need to be built locally.```
-- [flink-sql-connector-elasticsearch7_2.11-1.13.2.jar](https://repo.maven.apache.org/maven2/org/apache/flink/flink-sql-connector-elasticsearch7_2.11/1.13.2/flink-sql-connector-elasticsearch7_2.11-1.13.2.jar)
+- [flink-sql-connector-elasticsearch7-1.15.2.jar](https://repo.maven.apache.org/maven2/org/apache/flink/flink-sql-connector-elasticsearch7/1.15.2/flink-sql-connector-elasticsearch7-1.15.2.jar)
- [flink-sql-connector-tidb-cdc-2.3-SNAPSHOT.jar](https://repo1.maven.org/maven2/com/ververica/flink-sql-connector-tidb-cdc/2.3-SNAPSHOT/flink-sql-connector-tidb-cdc-2.3-SNAPSHOT.jar)
