@PublicEvolving
public interface StreamTableEnvironment
extends org.apache.flink.table.api.TableEnvironment

This table environment is the entry point and central context for creating Table and SQL API programs that integrate with the Java-specific DataStream API. It is unified for bounded and unbounded data processing.

A stream table environment is responsible for:
- converting a DataStream into a Table and vice-versa,
- registering and retrieving Tables and other meta objects from a catalog.

Note: If you don't intend to use the DataStream API, TableEnvironment is meant for pure table programs.
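As a rough sketch of how the pieces of this interface fit together, the following program creates an environment, converts a DataStream into a Table, and converts the result back. The import path `org.apache.flink.table.api.java.StreamTableEnvironment`, the sample data, and the field names `user, cnt` are assumptions for illustration, not taken from this page:

```java
// Sketch of the typical round trip between the DataStream and Table APIs.
// The POJO-free Tuple2 data and the job name are invented for illustration.
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.table.api.Table;
import org.apache.flink.table.api.java.StreamTableEnvironment;

public class RoundTripSketch {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        StreamTableEnvironment tableEnv = StreamTableEnvironment.create(env);

        // DataStream -> Table, with explicit field names.
        DataStream<Tuple2<String, Long>> clicks = env.fromElements(
                Tuple2.of("alice", 1L), Tuple2.of("bob", 2L));
        Table table = tableEnv.fromDataStream(clicks, "user, cnt");

        // Table -> DataStream; safe here because no updates are produced.
        DataStream<Tuple2<String, Long>> out =
                tableEnv.toAppendStream(table, clicks.getType());
        out.print();

        tableEnv.execute("round-trip");
    }
}
```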
| Modifier and Type | Method and Description |
|---|---|
| org.apache.flink.table.descriptors.StreamTableDescriptor | connect(org.apache.flink.table.descriptors.ConnectorDescriptor connectorDescriptor): Creates a table source and/or table sink from a descriptor. |
| static StreamTableEnvironment | create(org.apache.flink.streaming.api.environment.StreamExecutionEnvironment executionEnvironment): Creates a table environment that is the entry point and central context for creating Table and SQL API programs that integrate with the Java-specific DataStream API. |
| static StreamTableEnvironment | create(org.apache.flink.streaming.api.environment.StreamExecutionEnvironment executionEnvironment, org.apache.flink.table.api.EnvironmentSettings settings): Creates a table environment that is the entry point and central context for creating Table and SQL API programs that integrate with the Java-specific DataStream API. |
| static StreamTableEnvironment | create(org.apache.flink.streaming.api.environment.StreamExecutionEnvironment executionEnvironment, org.apache.flink.table.api.TableConfig tableConfig): Deprecated. Use create(StreamExecutionEnvironment) and TableEnvironment.getConfig() for manipulating TableConfig. |
| <T> void | createTemporaryView(String path, org.apache.flink.streaming.api.datastream.DataStream<T> dataStream): Creates a view from the given DataStream in a given path. |
| <T> void | createTemporaryView(String path, org.apache.flink.streaming.api.datastream.DataStream<T> dataStream, String fields): Creates a view from the given DataStream in a given path with specified field names. |
| org.apache.flink.api.common.JobExecutionResult | execute(String jobName): Triggers the program execution. |
| <T> org.apache.flink.table.api.Table | fromDataStream(org.apache.flink.streaming.api.datastream.DataStream<T> dataStream): Converts the given DataStream into a Table. |
| <T> org.apache.flink.table.api.Table | fromDataStream(org.apache.flink.streaming.api.datastream.DataStream<T> dataStream, String fields): Converts the given DataStream into a Table with specified field names. |
| void | insertInto(org.apache.flink.table.api.Table table, org.apache.flink.table.api.StreamQueryConfig queryConfig, String sinkPath, String... sinkPathContinued): Deprecated. Use TableEnvironment.insertInto(String, Table). |
| <T> void | registerDataStream(String name, org.apache.flink.streaming.api.datastream.DataStream<T> dataStream): Deprecated. |
| <T> void | registerDataStream(String name, org.apache.flink.streaming.api.datastream.DataStream<T> dataStream, String fields): Deprecated. |
| <T,ACC> void | registerFunction(String name, org.apache.flink.table.functions.AggregateFunction<T,ACC> aggregateFunction): Registers an AggregateFunction under a unique name in the TableEnvironment's catalog. |
| <T,ACC> void | registerFunction(String name, org.apache.flink.table.functions.TableAggregateFunction<T,ACC> tableAggregateFunction): Registers a TableAggregateFunction under a unique name in the TableEnvironment's catalog. |
| <T> void | registerFunction(String name, org.apache.flink.table.functions.TableFunction<T> tableFunction): Registers a TableFunction under a unique name in the TableEnvironment's catalog. |
| void | sqlUpdate(String stmt, org.apache.flink.table.api.StreamQueryConfig config): Evaluates a SQL statement such as INSERT, UPDATE or DELETE, or a DDL statement. NOTE: Currently only SQL INSERT statements are supported. |
| <T> org.apache.flink.streaming.api.datastream.DataStream<T> | toAppendStream(org.apache.flink.table.api.Table table, Class<T> clazz): Converts the given Table into an append DataStream of a specified type. |
| <T> org.apache.flink.streaming.api.datastream.DataStream<T> | toAppendStream(org.apache.flink.table.api.Table table, Class<T> clazz, org.apache.flink.table.api.StreamQueryConfig queryConfig): Converts the given Table into an append DataStream of a specified type. |
| <T> org.apache.flink.streaming.api.datastream.DataStream<T> | toAppendStream(org.apache.flink.table.api.Table table, org.apache.flink.api.common.typeinfo.TypeInformation<T> typeInfo): Converts the given Table into an append DataStream of a specified type. |
| <T> org.apache.flink.streaming.api.datastream.DataStream<T> | toAppendStream(org.apache.flink.table.api.Table table, org.apache.flink.api.common.typeinfo.TypeInformation<T> typeInfo, org.apache.flink.table.api.StreamQueryConfig queryConfig): Converts the given Table into an append DataStream of a specified type. |
| <T> org.apache.flink.streaming.api.datastream.DataStream<org.apache.flink.api.java.tuple.Tuple2<Boolean,T>> | toRetractStream(org.apache.flink.table.api.Table table, Class<T> clazz): Converts the given Table into a DataStream of add and retract messages. |
| <T> org.apache.flink.streaming.api.datastream.DataStream<org.apache.flink.api.java.tuple.Tuple2<Boolean,T>> | toRetractStream(org.apache.flink.table.api.Table table, Class<T> clazz, org.apache.flink.table.api.StreamQueryConfig queryConfig): Converts the given Table into a DataStream of add and retract messages. |
| <T> org.apache.flink.streaming.api.datastream.DataStream<org.apache.flink.api.java.tuple.Tuple2<Boolean,T>> | toRetractStream(org.apache.flink.table.api.Table table, org.apache.flink.api.common.typeinfo.TypeInformation<T> typeInfo): Converts the given Table into a DataStream of add and retract messages. |
| <T> org.apache.flink.streaming.api.datastream.DataStream<org.apache.flink.api.java.tuple.Tuple2<Boolean,T>> | toRetractStream(org.apache.flink.table.api.Table table, org.apache.flink.api.common.typeinfo.TypeInformation<T> typeInfo, org.apache.flink.table.api.StreamQueryConfig queryConfig): Converts the given Table into a DataStream of add and retract messages. |
Methods inherited from interface org.apache.flink.table.api.TableEnvironment:
create, createTemporaryView, dropTemporaryTable, dropTemporaryView, explain, explain, explain, from, fromTableSource, getCatalog, getCompletionHints, getConfig, getCurrentCatalog, getCurrentDatabase, insertInto, insertInto, listCatalogs, listDatabases, listFunctions, listModules, listTables, listTemporaryTables, listTemporaryViews, listUserDefinedFunctions, loadModule, registerCatalog, registerFunction, registerTable, registerTableSink, registerTableSink, registerTableSource, scan, sqlQuery, sqlUpdate, unloadModule, useCatalog, useDatabase

static StreamTableEnvironment create(org.apache.flink.streaming.api.environment.StreamExecutionEnvironment executionEnvironment)
Creates a table environment that is the entry point and central context for creating Table and SQL API programs that integrate with the Java-specific DataStream API. It is unified for bounded and unbounded data processing.
A stream table environment is responsible for:
- converting a DataStream into a Table and vice-versa,
- registering and retrieving Tables and other meta objects from a catalog.
Note: If you don't intend to use the DataStream API, TableEnvironment is meant for pure table programs.
Parameters:
executionEnvironment - The Java StreamExecutionEnvironment of the TableEnvironment.

static StreamTableEnvironment create(org.apache.flink.streaming.api.environment.StreamExecutionEnvironment executionEnvironment, org.apache.flink.table.api.EnvironmentSettings settings)
Creates a table environment that is the entry point and central context for creating Table and SQL API programs that integrate with the Java-specific DataStream API. It is unified for bounded and unbounded data processing.
A stream table environment is responsible for:
- converting a DataStream into a Table and vice-versa,
- registering and retrieving Tables and other meta objects from a catalog.
Note: If you don't intend to use the DataStream API, TableEnvironment is meant for pure table programs.
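A minimal sketch of the settings-based variant, assuming the EnvironmentSettings builder of the surrounding Flink release; the planner/mode choices shown (useBlinkPlanner, inStreamingMode) are assumptions about the available builder options, not stated on this page:

```java
// Sketch: create a StreamTableEnvironment with explicit EnvironmentSettings.
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.java.StreamTableEnvironment;

public class CreateWithSettings {
    public static void main(String[] args) {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // Pick a planner and execution mode explicitly instead of the defaults.
        EnvironmentSettings settings = EnvironmentSettings.newInstance()
                .useBlinkPlanner()
                .inStreamingMode()
                .build();

        StreamTableEnvironment tableEnv = StreamTableEnvironment.create(env, settings);
    }
}
```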
Parameters:
executionEnvironment - The Java StreamExecutionEnvironment of the TableEnvironment.
settings - The environment settings used to instantiate the TableEnvironment.

@Deprecated
static StreamTableEnvironment create(org.apache.flink.streaming.api.environment.StreamExecutionEnvironment executionEnvironment, org.apache.flink.table.api.TableConfig tableConfig)
Deprecated. Use create(StreamExecutionEnvironment) and TableEnvironment.getConfig() for manipulating TableConfig.
Creates a table environment that is the entry point and central context for creating Table and SQL API programs that integrate with the Java-specific DataStream API. It is unified for bounded and unbounded data processing.
A stream table environment is responsible for:
- converting a DataStream into a Table and vice-versa,
- registering and retrieving Tables and other meta objects from a catalog.
Note: If you don't intend to use the DataStream API, TableEnvironment is meant for pure table programs.
Parameters:
executionEnvironment - The Java StreamExecutionEnvironment of the TableEnvironment.
tableConfig - The configuration of the TableEnvironment.

<T> void registerFunction(String name, org.apache.flink.table.functions.TableFunction<T> tableFunction)
Registers a TableFunction under a unique name in the TableEnvironment's catalog. Registered functions can be referenced in Table API and SQL queries.
Type Parameters:
T - The type of the output row.
Parameters:
name - The name under which the function is registered.
tableFunction - The TableFunction to register.

<T,ACC> void registerFunction(String name, org.apache.flink.table.functions.AggregateFunction<T,ACC> aggregateFunction)
Registers an AggregateFunction under a unique name in the TableEnvironment's catalog. Registered functions can be referenced in Table API and SQL queries.
Type Parameters:
T - The type of the output value.
ACC - The type of aggregate accumulator.
Parameters:
name - The name under which the function is registered.
aggregateFunction - The AggregateFunction to register.

<T,ACC> void registerFunction(String name, org.apache.flink.table.functions.TableAggregateFunction<T,ACC> tableAggregateFunction)
Registers a TableAggregateFunction under a unique name in the TableEnvironment's catalog. Registered functions can only be referenced in the Table API.
Type Parameters:
T - The type of the output value.
ACC - The type of aggregate accumulator.
Parameters:
name - The name under which the function is registered.
tableAggregateFunction - The TableAggregateFunction to register.

<T> org.apache.flink.table.api.Table fromDataStream(org.apache.flink.streaming.api.datastream.DataStream<T> dataStream)
Converts the given DataStream into a Table. The field names of the Table are automatically derived from the type of the DataStream.
Type Parameters:
T - The type of the DataStream.
Parameters:
dataStream - The DataStream to be converted.
Returns:
The converted Table.

<T> org.apache.flink.table.api.Table fromDataStream(org.apache.flink.streaming.api.datastream.DataStream<T> dataStream, String fields)
Converts the given DataStream into a Table with specified field names.
Example:
DataStream<Tuple2<String, Long>> stream = ...
Table tab = tableEnv.fromDataStream(stream, "a, b");
Type Parameters:
T - The type of the DataStream.
Parameters:
dataStream - The DataStream to be converted.
fields - The field names of the resulting Table.
Returns:
The converted Table.

@Deprecated
<T> void registerDataStream(String name, org.apache.flink.streaming.api.datastream.DataStream<T> dataStream)
Deprecated. Use createTemporaryView(String, DataStream).
Creates a view from the given DataStream and registers it under the given name. Registered views can be referenced in SQL queries. The field names of the Table are automatically derived from the type of the DataStream.
The view is registered in the namespace of the current catalog and database. To register the view in a different catalog use createTemporaryView(String, DataStream).
Temporary objects can shadow permanent ones. If a permanent object in a given path exists, it will be inaccessible in the current session. To make the permanent object available again you can drop the corresponding temporary object.
Type Parameters:
T - The type of the DataStream to register.
Parameters:
name - The name under which the DataStream is registered in the catalog.
dataStream - The DataStream to register.

<T> void createTemporaryView(String path, org.apache.flink.streaming.api.datastream.DataStream<T> dataStream)
Creates a view from the given DataStream in a given path. Registered views can be referenced in SQL queries. The field names of the Table are automatically derived from the type of the DataStream.
Temporary objects can shadow permanent ones. If a permanent object in a given path exists, it will be inaccessible in the current session. To make the permanent object available again you can drop the corresponding temporary object.
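A short sketch of this method in context: register a stream as a temporary view, then reference it from SQL. The view name, sample data, and the derived field names f0/f1 are assumptions for illustration:

```java
// Sketch: register a DataStream as a temporary view and query it with SQL.
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.table.api.Table;
import org.apache.flink.table.api.java.StreamTableEnvironment;

public class TemporaryViewSketch {
    public static void main(String[] args) {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        StreamTableEnvironment tableEnv = StreamTableEnvironment.create(env);

        DataStream<Tuple2<String, Long>> clicks = env.fromElements(
                Tuple2.of("alice", 1L), Tuple2.of("bob", 2L));

        // Field names are derived automatically from the Tuple2 type (f0, f1).
        tableEnv.createTemporaryView("clicks", clicks);

        // The registered view can now be referenced in SQL queries.
        Table result = tableEnv.sqlQuery("SELECT f0, SUM(f1) FROM clicks GROUP BY f0");
    }
}
```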
Type Parameters:
T - The type of the DataStream.
Parameters:
path - The path under which the DataStream is created. See also the TableEnvironment class description for the format of the path.
dataStream - The DataStream out of which to create the view.

@Deprecated
<T> void registerDataStream(String name, org.apache.flink.streaming.api.datastream.DataStream<T> dataStream, String fields)
Deprecated. Use createTemporaryView(String, DataStream, String).
Creates a view from the given DataStream and registers it under the given name with specified field names. Registered views can be referenced in SQL queries.
Example:
DataStream<Tuple2<String, Long>> stream = ...
tableEnv.registerDataStream("myTable", stream, "a, b")
The view is registered in the namespace of the current catalog and database. To register the view in a different catalog use createTemporaryView(String, DataStream).
Temporary objects can shadow permanent ones. If a permanent object in a given path exists, it will be inaccessible in the current session. To make the permanent object available again you can drop the corresponding temporary object.
Type Parameters:
T - The type of the DataStream to register.
Parameters:
name - The name under which the DataStream is registered in the catalog.
dataStream - The DataStream to register.
fields - The field names of the registered view.

<T> void createTemporaryView(String path, org.apache.flink.streaming.api.datastream.DataStream<T> dataStream, String fields)
Creates a view from the given DataStream in a given path with specified field names. Registered views can be referenced in SQL queries.
Example:
DataStream<Tuple2<String, Long>> stream = ...
tableEnv.createTemporaryView("cat.db.myTable", stream, "a, b")
Temporary objects can shadow permanent ones. If a permanent object in a given path exists, it will be inaccessible in the current session. To make the permanent object available again you can drop the corresponding temporary object.
Type Parameters:
T - The type of the DataStream.
Parameters:
path - The path under which the DataStream is created. See also the TableEnvironment class description for the format of the path.
dataStream - The DataStream out of which to create the view.
fields - The field names of the created view.

<T> org.apache.flink.streaming.api.datastream.DataStream<T> toAppendStream(org.apache.flink.table.api.Table table, Class<T> clazz)
Converts the given Table into an append DataStream of a specified type.
The Table must only have insert (append) changes. If the Table is also modified by update or delete changes, the conversion will fail.
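A minimal sketch of an append-only conversion using the Class-based overload; the source data, field names, and the use of Row as the target type are assumptions for illustration:

```java
// Sketch: convert an insert-only Table back into a DataStream<Row>.
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.table.api.Table;
import org.apache.flink.table.api.java.StreamTableEnvironment;
import org.apache.flink.types.Row;

public class ToAppendStreamSketch {
    public static void main(String[] args) {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        StreamTableEnvironment tableEnv = StreamTableEnvironment.create(env);

        DataStream<Tuple2<String, Long>> input = env.fromElements(
                Tuple2.of("alice", 1L), Tuple2.of("bob", 2L));
        Table table = tableEnv.fromDataStream(input, "a, b");

        // A simple projection produces only insert changes, so the append
        // conversion is safe; an aggregation here would make it fail.
        Table projected = table.select("a, b");
        DataStream<Row> out = tableEnv.toAppendStream(projected, Row.class);
        out.print();
    }
}
```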
The fields of the Table are mapped to DataStream fields as follows:
- Row and Tuple types: Fields are mapped by position; field types must match.
- POJO DataStream types: Fields are mapped by field name; field types must match.
Type Parameters:
T - The type of the resulting DataStream.
Parameters:
table - The Table to convert.
clazz - The class of the type of the resulting DataStream.
Returns:
The converted DataStream.

<T> org.apache.flink.streaming.api.datastream.DataStream<T> toAppendStream(org.apache.flink.table.api.Table table,
org.apache.flink.api.common.typeinfo.TypeInformation<T> typeInfo)
Converts the given Table into an append DataStream of a specified type.
The Table must only have insert (append) changes. If the Table is also modified by update or delete changes, the conversion will fail.
The fields of the Table are mapped to DataStream fields as follows:
- Row and Tuple types: Fields are mapped by position; field types must match.
- POJO DataStream types: Fields are mapped by field name; field types must match.
Type Parameters:
T - The type of the resulting DataStream.
Parameters:
table - The Table to convert.
typeInfo - The TypeInformation that specifies the type of the DataStream.
Returns:
The converted DataStream.

<T> org.apache.flink.streaming.api.datastream.DataStream<T> toAppendStream(org.apache.flink.table.api.Table table,
Class<T> clazz,
org.apache.flink.table.api.StreamQueryConfig queryConfig)
Converts the given Table into an append DataStream of a specified type.
The Table must only have insert (append) changes. If the Table is also modified by update or delete changes, the conversion will fail.
The fields of the Table are mapped to DataStream fields as follows:
- Row and Tuple types: Fields are mapped by position; field types must match.
- POJO DataStream types: Fields are mapped by field name; field types must match.
Type Parameters:
T - The type of the resulting DataStream.
Parameters:
table - The Table to convert.
clazz - The class of the type of the resulting DataStream.
queryConfig - The configuration of the query to generate.
Returns:
The converted DataStream.

<T> org.apache.flink.streaming.api.datastream.DataStream<T> toAppendStream(org.apache.flink.table.api.Table table,
org.apache.flink.api.common.typeinfo.TypeInformation<T> typeInfo,
org.apache.flink.table.api.StreamQueryConfig queryConfig)
Converts the given Table into an append DataStream of a specified type.
The Table must only have insert (append) changes. If the Table is also modified by update or delete changes, the conversion will fail.
The fields of the Table are mapped to DataStream fields as follows:
- Row and Tuple types: Fields are mapped by position; field types must match.
- POJO DataStream types: Fields are mapped by field name; field types must match.
Type Parameters:
T - The type of the resulting DataStream.
Parameters:
table - The Table to convert.
typeInfo - The TypeInformation that specifies the type of the DataStream.
queryConfig - The configuration of the query to generate.
Returns:
The converted DataStream.

<T> org.apache.flink.streaming.api.datastream.DataStream<org.apache.flink.api.java.tuple.Tuple2<Boolean,T>> toRetractStream(org.apache.flink.table.api.Table table, Class<T> clazz)
Converts the given Table into a DataStream of add and retract messages.
The message will be encoded as Tuple2. The first field is a Boolean flag, the second field holds the record of the specified type T.
A true Boolean flag indicates an add message, a false flag indicates a retract message.
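A sketch of why a retract stream is needed: an aggregation updates its result per key, so each update is emitted as a retraction of the old row followed by an addition of the new one. The view name, fields, and data below are invented for illustration:

```java
// Sketch: consume an updating aggregation as a (flag, record) retract stream.
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.table.api.Table;
import org.apache.flink.table.api.java.StreamTableEnvironment;
import org.apache.flink.types.Row;

public class ToRetractStreamSketch {
    public static void main(String[] args) {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        StreamTableEnvironment tableEnv = StreamTableEnvironment.create(env);

        DataStream<Tuple2<String, Long>> input = env.fromElements(
                Tuple2.of("alice", 1L), Tuple2.of("alice", 2L));
        tableEnv.createTemporaryView("clicks", input, "name, cnt");

        // Each new row per name updates the running sum: the previous result
        // arrives with flag=false (retract), the new one with flag=true (add).
        Table counts = tableEnv.sqlQuery(
                "SELECT name, SUM(cnt) FROM clicks GROUP BY name");
        DataStream<Tuple2<Boolean, Row>> retract =
                tableEnv.toRetractStream(counts, Row.class);
        retract.print();
    }
}
```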
The fields of the Table are mapped to DataStream fields as follows:
- Row and Tuple types: Fields are mapped by position; field types must match.
- POJO DataStream types: Fields are mapped by field name; field types must match.
Type Parameters:
T - The requested record type.
Parameters:
table - The Table to convert.
clazz - The class of the requested record type.
Returns:
The converted DataStream.

<T> org.apache.flink.streaming.api.datastream.DataStream<org.apache.flink.api.java.tuple.Tuple2<Boolean,T>> toRetractStream(org.apache.flink.table.api.Table table, org.apache.flink.api.common.typeinfo.TypeInformation<T> typeInfo)
Converts the given Table into a DataStream of add and retract messages.
The message will be encoded as Tuple2. The first field is a Boolean flag, the second field holds the record of the specified type T.
A true Boolean flag indicates an add message, a false flag indicates a retract message.
The fields of the Table are mapped to DataStream fields as follows:
- Row and Tuple types: Fields are mapped by position; field types must match.
- POJO DataStream types: Fields are mapped by field name; field types must match.
Type Parameters:
T - The requested record type.
Parameters:
table - The Table to convert.
typeInfo - The TypeInformation of the requested record type.
Returns:
The converted DataStream.

<T> org.apache.flink.streaming.api.datastream.DataStream<org.apache.flink.api.java.tuple.Tuple2<Boolean,T>> toRetractStream(org.apache.flink.table.api.Table table, Class<T> clazz, org.apache.flink.table.api.StreamQueryConfig queryConfig)
Converts the given Table into a DataStream of add and retract messages.
The message will be encoded as Tuple2. The first field is a Boolean flag, the second field holds the record of the specified type T.
A true Boolean flag indicates an add message, a false flag indicates a retract message.
The fields of the Table are mapped to DataStream fields as follows:
- Row and Tuple types: Fields are mapped by position; field types must match.
- POJO DataStream types: Fields are mapped by field name; field types must match.
Type Parameters:
T - The requested record type.
Parameters:
table - The Table to convert.
clazz - The class of the requested record type.
queryConfig - The configuration of the query to generate.
Returns:
The converted DataStream.

<T> org.apache.flink.streaming.api.datastream.DataStream<org.apache.flink.api.java.tuple.Tuple2<Boolean,T>> toRetractStream(org.apache.flink.table.api.Table table, org.apache.flink.api.common.typeinfo.TypeInformation<T> typeInfo, org.apache.flink.table.api.StreamQueryConfig queryConfig)
Converts the given Table into a DataStream of add and retract messages.
The message will be encoded as Tuple2. The first field is a Boolean flag, the second field holds the record of the specified type T.
A true Boolean flag indicates an add message, a false flag indicates a retract message.
The fields of the Table are mapped to DataStream fields as follows:
- Row and Tuple types: Fields are mapped by position; field types must match.
- POJO DataStream types: Fields are mapped by field name; field types must match.
Type Parameters:
T - The requested record type.
Parameters:
table - The Table to convert.
typeInfo - The TypeInformation of the requested record type.
queryConfig - The configuration of the query to generate.
Returns:
The converted DataStream.

org.apache.flink.table.descriptors.StreamTableDescriptor connect(org.apache.flink.table.descriptors.ConnectorDescriptor connectorDescriptor)
Creates a table source and/or table sink from a descriptor.
Descriptors allow for declaring the communication to external systems in an implementation-agnostic way. The classpath is scanned for suitable table factories that match the desired configuration.
The following example shows how to read from a Kafka connector using a JSON format and register a table source "MyTable" in append mode:
tableEnv
.connect(
new Kafka()
.version("0.11")
.topic("clicks")
.property("zookeeper.connect", "localhost")
.property("group.id", "click-group")
.startFromEarliest())
.withFormat(
new Json()
.jsonSchema("{...}")
.failOnMissingField(false))
.withSchema(
new Schema()
.field("user-name", "VARCHAR").from("u_name")
.field("count", "DECIMAL")
.field("proc-time", "TIMESTAMP").proctime())
.inAppendMode()
.createTemporaryTable("MyTable")
Specified by: connect in interface org.apache.flink.table.api.TableEnvironment
Parameters:
connectorDescriptor - connector descriptor describing the external system

void sqlUpdate(String stmt, org.apache.flink.table.api.StreamQueryConfig config)

Evaluates a SQL statement such as INSERT, UPDATE or DELETE, or a DDL statement. NOTE: Currently only SQL INSERT statements are supported.
All tables referenced by the query must be registered in the TableEnvironment.
A Table is automatically registered when its Table#toString() method is
called, for example when it is embedded into a String.
Hence, SQL queries can directly reference a Table as follows:
// register the configured table sink into which the result is inserted.
tEnv.registerTableSink("sinkTable", configuredSink);
Table sourceTable = ...
String tableName = sourceTable.toString();
// sourceTable is not registered to the table environment
tEnv.sqlUpdate("INSERT INTO sinkTable SELECT * FROM " + tableName, config);
Parameters:
stmt - The SQL statement to evaluate.
config - The QueryConfig to use.

@Deprecated
void insertInto(org.apache.flink.table.api.Table table, org.apache.flink.table.api.StreamQueryConfig queryConfig, String sinkPath, String... sinkPathContinued)
Deprecated. Use TableEnvironment.insertInto(String, Table).
Writes the Table to a TableSink that was registered under the specified name.
See the documentation of TableEnvironment.useDatabase(String) or TableEnvironment.useCatalog(String) for the rules on the path resolution.
Parameters:
table - The Table to write to the sink.
queryConfig - The StreamQueryConfig to use.
sinkPath - The first part of the path of the registered TableSink to which the Table is written. This is to ensure at least the name of the TableSink is provided.
sinkPathContinued - The remaining part of the path of the registered TableSink to which the Table is written.

org.apache.flink.api.common.JobExecutionResult execute(String jobName) throws Exception
Triggers the program execution. The program execution will be logged and displayed with the provided name.
It calls the StreamExecutionEnvironment.execute(String) on the underlying StreamExecutionEnvironment. In contrast to the TableEnvironment this environment translates queries eagerly. Therefore the values in the QueryConfig parameter are ignored.
Specified by: execute in interface org.apache.flink.table.api.TableEnvironment
Parameters:
jobName - Desired name of the job
Throws:
Exception - which occurs during job execution.

Copyright © 2014–2020 The Apache Software Foundation. All rights reserved.