@Internal public final class StreamTableEnvironmentImpl extends org.apache.flink.table.api.internal.TableEnvironmentImpl implements StreamTableEnvironment
The implementation for a Java StreamTableEnvironment. This enables conversions from/to
DataStream.
It binds to a given StreamExecutionEnvironment.
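For orientation, a hedged sketch of the two conversions this class provides. The tableEnv, the clicks stream, and the field names below are illustrative assumptions, not part of the API docs:

```java
// Assumed to exist already:
//   StreamTableEnvironment tableEnv;
//   DataStream<Tuple2<String, Long>> clicks;

// DataStream -> Table, naming the fields explicitly.
Table table = tableEnv.fromDataStream(clicks, "name, cnt");

// Table -> DataStream. A plain projection emits only insert (append)
// changes, so toAppendStream applies; an aggregating query would need
// toRetractStream instead.
DataStream<Row> rows = tableEnv.toAppendStream(table.select("name, cnt"), Row.class);
```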
| Constructor and Description |
|---|
StreamTableEnvironmentImpl(org.apache.flink.table.catalog.CatalogManager catalogManager,
org.apache.flink.table.module.ModuleManager moduleManager,
org.apache.flink.table.catalog.FunctionCatalog functionCatalog,
org.apache.flink.table.api.TableConfig tableConfig,
org.apache.flink.streaming.api.environment.StreamExecutionEnvironment executionEnvironment,
org.apache.flink.table.delegation.Planner planner,
org.apache.flink.table.delegation.Executor executor,
boolean isStreamingMode) |
| Modifier and Type | Method and Description |
|---|---|
org.apache.flink.table.descriptors.StreamTableDescriptor |
connect(org.apache.flink.table.descriptors.ConnectorDescriptor connectorDescriptor)
Creates a table source and/or table sink from a descriptor.
|
static StreamTableEnvironment |
create(org.apache.flink.streaming.api.environment.StreamExecutionEnvironment executionEnvironment,
org.apache.flink.table.api.EnvironmentSettings settings,
org.apache.flink.table.api.TableConfig tableConfig) |
<T> void |
createTemporaryView(String path,
org.apache.flink.streaming.api.datastream.DataStream<T> dataStream)
Creates a view from the given
DataStream in a given path. |
<T> void |
createTemporaryView(String path,
org.apache.flink.streaming.api.datastream.DataStream<T> dataStream,
String fields)
Creates a view from the given
DataStream in a given path with specified field names. |
org.apache.flink.streaming.api.environment.StreamExecutionEnvironment |
execEnv()
This is a temporary workaround for Python API.
|
String |
explain(boolean extended) |
<T> org.apache.flink.table.api.Table |
fromDataStream(org.apache.flink.streaming.api.datastream.DataStream<T> dataStream)
Converts the given
DataStream into a Table. |
<T> org.apache.flink.table.api.Table |
fromDataStream(org.apache.flink.streaming.api.datastream.DataStream<T> dataStream,
String fields)
Converts the given
DataStream into a Table with specified field names. |
void |
insertInto(org.apache.flink.table.api.Table table,
org.apache.flink.table.api.StreamQueryConfig queryConfig,
String sinkPath,
String... sinkPathContinued)
Writes the
Table to a TableSink that was registered under the specified name. |
protected boolean |
isEagerOperationTranslation() |
protected org.apache.flink.table.operations.QueryOperation |
qualifyQueryOperation(org.apache.flink.table.catalog.ObjectIdentifier identifier,
org.apache.flink.table.operations.QueryOperation queryOperation) |
<T> void |
registerDataStream(String name,
org.apache.flink.streaming.api.datastream.DataStream<T> dataStream)
Creates a view from the given
DataStream. |
<T> void |
registerDataStream(String name,
org.apache.flink.streaming.api.datastream.DataStream<T> dataStream,
String fields)
Creates a view from the given
DataStream in a given path with specified field names. |
<T,ACC> void |
registerFunction(String name,
org.apache.flink.table.functions.AggregateFunction<T,ACC> aggregateFunction)
Registers an
AggregateFunction under a unique name in the TableEnvironment's catalog. |
<T,ACC> void |
registerFunction(String name,
org.apache.flink.table.functions.TableAggregateFunction<T,ACC> tableAggregateFunction)
Registers a
TableAggregateFunction under a unique name in the TableEnvironment's
catalog. |
<T> void |
registerFunction(String name,
org.apache.flink.table.functions.TableFunction<T> tableFunction)
Registers a
TableFunction under a unique name in the TableEnvironment's catalog. |
void |
sqlUpdate(String stmt,
org.apache.flink.table.api.StreamQueryConfig config)
Evaluates a SQL statement such as INSERT, UPDATE or DELETE, or a DDL statement.
NOTE: Currently only SQL INSERT statements are supported.
|
<T> org.apache.flink.streaming.api.datastream.DataStream<T> |
toAppendStream(org.apache.flink.table.api.Table table,
Class<T> clazz)
Converts the given
Table into an append DataStream of a specified type. |
<T> org.apache.flink.streaming.api.datastream.DataStream<T> |
toAppendStream(org.apache.flink.table.api.Table table,
Class<T> clazz,
org.apache.flink.table.api.StreamQueryConfig queryConfig)
Converts the given
Table into an append DataStream of a specified type. |
<T> org.apache.flink.streaming.api.datastream.DataStream<T> |
toAppendStream(org.apache.flink.table.api.Table table,
org.apache.flink.api.common.typeinfo.TypeInformation<T> typeInfo)
Converts the given
Table into an append DataStream of a specified type. |
<T> org.apache.flink.streaming.api.datastream.DataStream<T> |
toAppendStream(org.apache.flink.table.api.Table table,
org.apache.flink.api.common.typeinfo.TypeInformation<T> typeInfo,
org.apache.flink.table.api.StreamQueryConfig queryConfig)
Converts the given
Table into an append DataStream of a specified type. |
<T> org.apache.flink.streaming.api.datastream.DataStream<org.apache.flink.api.java.tuple.Tuple2<Boolean,T>> |
toRetractStream(org.apache.flink.table.api.Table table,
Class<T> clazz)
Converts the given
Table into a DataStream of add and retract messages. |
<T> org.apache.flink.streaming.api.datastream.DataStream<org.apache.flink.api.java.tuple.Tuple2<Boolean,T>> |
toRetractStream(org.apache.flink.table.api.Table table,
Class<T> clazz,
org.apache.flink.table.api.StreamQueryConfig queryConfig)
Converts the given
Table into a DataStream of add and retract messages. |
<T> org.apache.flink.streaming.api.datastream.DataStream<org.apache.flink.api.java.tuple.Tuple2<Boolean,T>> |
toRetractStream(org.apache.flink.table.api.Table table,
org.apache.flink.api.common.typeinfo.TypeInformation<T> typeInfo)
Converts the given
Table into a DataStream of add and retract messages. |
<T> org.apache.flink.streaming.api.datastream.DataStream<org.apache.flink.api.java.tuple.Tuple2<Boolean,T>> |
toRetractStream(org.apache.flink.table.api.Table table,
org.apache.flink.api.common.typeinfo.TypeInformation<T> typeInfo,
org.apache.flink.table.api.StreamQueryConfig queryConfig)
Converts the given
Table into a DataStream of add and retract messages. |
protected void |
validateTableSource(org.apache.flink.table.sources.TableSource<?> tableSource) |
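The append/retract distinction in the method summary can be sketched as follows (a hedged example; the clicks view, its fields, and the tableEnv variable are hypothetical and assumed to be registered already):

```java
// An aggregation updates rows it has already emitted, so only a retract
// stream can represent it. Each element is a Tuple2<Boolean, Row>:
// f0 == true marks an add message, f0 == false retracts an earlier row.
Table counts = tableEnv.sqlQuery(
        "SELECT name, COUNT(*) AS cnt FROM clicks GROUP BY name");
DataStream<Tuple2<Boolean, Row>> changes =
        tableEnv.toRetractStream(counts, Row.class);
```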
Methods inherited from class org.apache.flink.table.api.internal.TableEnvironmentImpl:
create, createTable, createTemporaryView, dropTemporaryTable, dropTemporaryView, execute, explain, explain, from, fromTableSource, getCatalog, getCompletionHints, getConfig, getCurrentCatalog, getCurrentDatabase, getPlanner, insertInto, insertInto, listCatalogs, listDatabases, listFunctions, listModules, listTables, listTemporaryTables, listTemporaryViews, listUserDefinedFunctions, loadModule, registerCatalog, registerFunction, registerTable, registerTableSink, registerTableSink, registerTableSource, scan, sqlQuery, sqlUpdate, unloadModule, useCatalog, useDatabase
Methods inherited from class java.lang.Object:
clone, equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait
Methods inherited from interface StreamTableEnvironment:
create, create, create, execute
Methods inherited from interface org.apache.flink.table.api.TableEnvironment:
create, createTemporaryView, dropTemporaryTable, dropTemporaryView, explain, explain, from, fromTableSource, getCatalog, getCompletionHints, getConfig, getCurrentCatalog, getCurrentDatabase, insertInto, insertInto, listCatalogs, listDatabases, listFunctions, listModules, listTables, listTemporaryTables, listTemporaryViews, listUserDefinedFunctions, loadModule, registerCatalog, registerFunction, registerTable, registerTableSink, registerTableSink, registerTableSource, scan, sqlQuery, sqlUpdate, unloadModule, useCatalog, useDatabase
public StreamTableEnvironmentImpl(org.apache.flink.table.catalog.CatalogManager catalogManager,
org.apache.flink.table.module.ModuleManager moduleManager,
org.apache.flink.table.catalog.FunctionCatalog functionCatalog,
org.apache.flink.table.api.TableConfig tableConfig,
org.apache.flink.streaming.api.environment.StreamExecutionEnvironment executionEnvironment,
org.apache.flink.table.delegation.Planner planner,
org.apache.flink.table.delegation.Executor executor,
boolean isStreamingMode)
public static StreamTableEnvironment create(org.apache.flink.streaming.api.environment.StreamExecutionEnvironment executionEnvironment, org.apache.flink.table.api.EnvironmentSettings settings, org.apache.flink.table.api.TableConfig tableConfig)
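A minimal, hedged sketch of bootstrapping through this factory method (the blink planner settings are one common choice for this Flink version, not mandated by the signature):

```java
StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
StreamTableEnvironment tableEnv = StreamTableEnvironmentImpl.create(
        env,
        EnvironmentSettings.newInstance().useBlinkPlanner().inStreamingMode().build(),
        new TableConfig());
```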
public <T> void registerFunction(String name, org.apache.flink.table.functions.TableFunction<T> tableFunction)
Description copied from interface: StreamTableEnvironment
Registers a TableFunction under a unique name in the TableEnvironment's catalog.
Registered functions can be referenced in Table API and SQL queries.
Specified by: registerFunction in interface StreamTableEnvironment
Type Parameters: T - The type of the output row.
Parameters:
name - The name under which the function is registered.
tableFunction - The TableFunction to register.
public <T,ACC> void registerFunction(String name, org.apache.flink.table.functions.AggregateFunction<T,ACC> aggregateFunction)
Description copied from interface: StreamTableEnvironment
Registers an AggregateFunction under a unique name in the TableEnvironment's catalog.
Registered functions can be referenced in Table API and SQL queries.
Specified by: registerFunction in interface StreamTableEnvironment
Type Parameters:
T - The type of the output value.
ACC - The type of aggregate accumulator.
Parameters:
name - The name under which the function is registered.
aggregateFunction - The AggregateFunction to register.
public <T,ACC> void registerFunction(String name, org.apache.flink.table.functions.TableAggregateFunction<T,ACC> tableAggregateFunction)
Description copied from interface: StreamTableEnvironment
Registers a TableAggregateFunction under a unique name in the TableEnvironment's
catalog. Registered functions can only be referenced in Table API.
Specified by: registerFunction in interface StreamTableEnvironment
Type Parameters:
T - The type of the output value.
ACC - The type of aggregate accumulator.
Parameters:
name - The name under which the function is registered.
tableAggregateFunction - The TableAggregateFunction to register.
public <T> org.apache.flink.table.api.Table fromDataStream(org.apache.flink.streaming.api.datastream.DataStream<T> dataStream)
Description copied from interface: StreamTableEnvironment
Converts the given DataStream into a Table.
The field names of the Table are automatically derived from the type of the DataStream.
Specified by: fromDataStream in interface StreamTableEnvironment
Type Parameters: T - The type of the DataStream.
Parameters: dataStream - The DataStream to be converted.
Returns: The converted Table.
public <T> org.apache.flink.table.api.Table fromDataStream(org.apache.flink.streaming.api.datastream.DataStream<T> dataStream,
String fields)
Description copied from interface: StreamTableEnvironment
Converts the given DataStream into a Table with specified field names.
Example:
DataStream<Tuple2<String, Long>> stream = ...
Table tab = tableEnv.fromDataStream(stream, "a, b");
Specified by: fromDataStream in interface StreamTableEnvironment
Type Parameters: T - The type of the DataStream.
Parameters:
dataStream - The DataStream to be converted.
fields - The field names of the resulting Table.
Returns: The converted Table.
public <T> void registerDataStream(String name, org.apache.flink.streaming.api.datastream.DataStream<T> dataStream)
Description copied from interface: StreamTableEnvironment
Creates a view from the given DataStream.
Registered views can be referenced in SQL queries.
The field names of the Table are automatically derived from the type of the DataStream.
The view is registered in the namespace of the current catalog and database. To register the view in
a different catalog use StreamTableEnvironment.createTemporaryView(String, DataStream).
Temporary objects can shadow permanent ones. If a permanent object in a given path exists, it will be inaccessible in the current session. To make the permanent object available again you can drop the corresponding temporary object.
Specified by: registerDataStream in interface StreamTableEnvironment
Type Parameters: T - The type of the DataStream to register.
Parameters:
name - The name under which the DataStream is registered in the catalog.
dataStream - The DataStream to register.
public <T> void createTemporaryView(String path, org.apache.flink.streaming.api.datastream.DataStream<T> dataStream)
Description copied from interface: StreamTableEnvironment
Creates a view from the given DataStream in a given path.
Registered views can be referenced in SQL queries.
The field names of the Table are automatically derived from the type of the DataStream.
Temporary objects can shadow permanent ones. If a permanent object in a given path exists, it will be inaccessible in the current session. To make the permanent object available again you can drop the corresponding temporary object.
Specified by: createTemporaryView in interface StreamTableEnvironment
Type Parameters: T - The type of the DataStream.
Parameters:
path - The path under which the DataStream is created.
See also the TableEnvironment class description for the format of the path.
dataStream - The DataStream out of which to create the view.
public <T> void registerDataStream(String name, org.apache.flink.streaming.api.datastream.DataStream<T> dataStream, String fields)
Description copied from interface: StreamTableEnvironment
Creates a view from the given DataStream in a given path with specified field names.
Registered views can be referenced in SQL queries.
Example:
DataStream<Tuple2<String, Long>> stream = ...
tableEnv.registerDataStream("myTable", stream, "a, b")
The view is registered in the namespace of the current catalog and database. To register the view in
a different catalog use StreamTableEnvironment.createTemporaryView(String, DataStream).
Temporary objects can shadow permanent ones. If a permanent object in a given path exists, it will be inaccessible in the current session. To make the permanent object available again you can drop the corresponding temporary object.
Specified by: registerDataStream in interface StreamTableEnvironment
Type Parameters: T - The type of the DataStream to register.
Parameters:
name - The name under which the DataStream is registered in the catalog.
dataStream - The DataStream to register.
fields - The field names of the registered view.
public <T> void createTemporaryView(String path, org.apache.flink.streaming.api.datastream.DataStream<T> dataStream, String fields)
Description copied from interface: StreamTableEnvironment
Creates a view from the given DataStream in a given path with specified field names.
Registered views can be referenced in SQL queries.
Example:
DataStream<Tuple2<String, Long>> stream = ...
tableEnv.createTemporaryView("cat.db.myTable", stream, "a, b")
Temporary objects can shadow permanent ones. If a permanent object in a given path exists, it will be inaccessible in the current session. To make the permanent object available again you can drop the corresponding temporary object.
Specified by: createTemporaryView in interface StreamTableEnvironment
Type Parameters: T - The type of the DataStream.
Parameters:
path - The path under which the DataStream is created.
See also the TableEnvironment class description for the format of the path.
dataStream - The DataStream out of which to create the view.
fields - The field names of the created view.
protected org.apache.flink.table.operations.QueryOperation qualifyQueryOperation(org.apache.flink.table.catalog.ObjectIdentifier identifier,
org.apache.flink.table.operations.QueryOperation queryOperation)
Overrides: qualifyQueryOperation in class org.apache.flink.table.api.internal.TableEnvironmentImpl
public <T> org.apache.flink.streaming.api.datastream.DataStream<T> toAppendStream(org.apache.flink.table.api.Table table,
Class<T> clazz)
Description copied from interface: StreamTableEnvironment
Converts the given Table into an append DataStream of a specified type.
The Table must only have insert (append) changes. If the Table is also modified
by update or delete changes, the conversion will fail.
The fields of the Table are mapped to DataStream fields as follows:
- Row and Tuple types: Fields are mapped by position, field types must match.
- POJO DataStream types: Fields are mapped by field name, field types must match.
Specified by: toAppendStream in interface StreamTableEnvironment
Type Parameters: T - The type of the resulting DataStream.
Parameters:
table - The Table to convert.
clazz - The class of the type of the resulting DataStream.
Returns: The converted DataStream.
public <T> org.apache.flink.streaming.api.datastream.DataStream<T> toAppendStream(org.apache.flink.table.api.Table table,
org.apache.flink.api.common.typeinfo.TypeInformation<T> typeInfo)
Description copied from interface: StreamTableEnvironment
Converts the given Table into an append DataStream of a specified type.
The Table must only have insert (append) changes. If the Table is also modified
by update or delete changes, the conversion will fail.
The fields of the Table are mapped to DataStream fields as follows:
- Row and Tuple types: Fields are mapped by position, field types must match.
- POJO DataStream types: Fields are mapped by field name, field types must match.
Specified by: toAppendStream in interface StreamTableEnvironment
Type Parameters: T - The type of the resulting DataStream.
Parameters:
table - The Table to convert.
typeInfo - The TypeInformation that specifies the type of the DataStream.
Returns: The converted DataStream.
public <T> org.apache.flink.streaming.api.datastream.DataStream<T> toAppendStream(org.apache.flink.table.api.Table table,
Class<T> clazz,
org.apache.flink.table.api.StreamQueryConfig queryConfig)
Description copied from interface: StreamTableEnvironment
Converts the given Table into an append DataStream of a specified type.
The Table must only have insert (append) changes. If the Table is also modified
by update or delete changes, the conversion will fail.
The fields of the Table are mapped to DataStream fields as follows:
- Row and Tuple types: Fields are mapped by position, field types must match.
- POJO DataStream types: Fields are mapped by field name, field types must match.
Specified by: toAppendStream in interface StreamTableEnvironment
Type Parameters: T - The type of the resulting DataStream.
Parameters:
table - The Table to convert.
clazz - The class of the type of the resulting DataStream.
queryConfig - The configuration of the query to generate.
Returns: The converted DataStream.
public <T> org.apache.flink.streaming.api.datastream.DataStream<T> toAppendStream(org.apache.flink.table.api.Table table,
org.apache.flink.api.common.typeinfo.TypeInformation<T> typeInfo,
org.apache.flink.table.api.StreamQueryConfig queryConfig)
Description copied from interface: StreamTableEnvironment
Converts the given Table into an append DataStream of a specified type.
The Table must only have insert (append) changes. If the Table is also modified
by update or delete changes, the conversion will fail.
The fields of the Table are mapped to DataStream fields as follows:
- Row and Tuple types: Fields are mapped by position, field types must match.
- POJO DataStream types: Fields are mapped by field name, field types must match.
Specified by: toAppendStream in interface StreamTableEnvironment
Type Parameters: T - The type of the resulting DataStream.
Parameters:
table - The Table to convert.
typeInfo - The TypeInformation that specifies the type of the DataStream.
queryConfig - The configuration of the query to generate.
Returns: The converted DataStream.
public <T> org.apache.flink.streaming.api.datastream.DataStream<org.apache.flink.api.java.tuple.Tuple2<Boolean,T>> toRetractStream(org.apache.flink.table.api.Table table, Class<T> clazz)
Description copied from interface: StreamTableEnvironment
Converts the given Table into a DataStream of add and retract messages.
The message will be encoded as Tuple2. The first field is a Boolean flag,
the second field holds the record of the specified type T.
A true Boolean flag indicates an add message, a false flag indicates a retract message.
The fields of the Table are mapped to DataStream fields as follows:
- Row and Tuple types: Fields are mapped by position, field types must match.
- POJO DataStream types: Fields are mapped by field name, field types must match.
Specified by: toRetractStream in interface StreamTableEnvironment
Type Parameters: T - The type of the requested record type.
Parameters:
table - The Table to convert.
clazz - The class of the requested record type.
Returns: The converted DataStream.
public <T> org.apache.flink.streaming.api.datastream.DataStream<org.apache.flink.api.java.tuple.Tuple2<Boolean,T>> toRetractStream(org.apache.flink.table.api.Table table, org.apache.flink.api.common.typeinfo.TypeInformation<T> typeInfo)
Description copied from interface: StreamTableEnvironment
Converts the given Table into a DataStream of add and retract messages.
The message will be encoded as Tuple2. The first field is a Boolean flag,
the second field holds the record of the specified type T.
A true Boolean flag indicates an add message, a false flag indicates a retract message.
The fields of the Table are mapped to DataStream fields as follows:
- Row and Tuple types: Fields are mapped by position, field types must match.
- POJO DataStream types: Fields are mapped by field name, field types must match.
Specified by: toRetractStream in interface StreamTableEnvironment
Type Parameters: T - The type of the requested record type.
Parameters:
table - The Table to convert.
typeInfo - The TypeInformation of the requested record type.
Returns: The converted DataStream.
public <T> org.apache.flink.streaming.api.datastream.DataStream<org.apache.flink.api.java.tuple.Tuple2<Boolean,T>> toRetractStream(org.apache.flink.table.api.Table table, Class<T> clazz, org.apache.flink.table.api.StreamQueryConfig queryConfig)
Description copied from interface: StreamTableEnvironment
Converts the given Table into a DataStream of add and retract messages.
The message will be encoded as Tuple2. The first field is a Boolean flag,
the second field holds the record of the specified type T.
A true Boolean flag indicates an add message, a false flag indicates a retract message.
The fields of the Table are mapped to DataStream fields as follows:
- Row and Tuple types: Fields are mapped by position, field types must match.
- POJO DataStream types: Fields are mapped by field name, field types must match.
Specified by: toRetractStream in interface StreamTableEnvironment
Type Parameters: T - The type of the requested record type.
Parameters:
table - The Table to convert.
clazz - The class of the requested record type.
queryConfig - The configuration of the query to generate.
Returns: The converted DataStream.
public <T> org.apache.flink.streaming.api.datastream.DataStream<org.apache.flink.api.java.tuple.Tuple2<Boolean,T>> toRetractStream(org.apache.flink.table.api.Table table, org.apache.flink.api.common.typeinfo.TypeInformation<T> typeInfo, org.apache.flink.table.api.StreamQueryConfig queryConfig)
Description copied from interface: StreamTableEnvironment
Converts the given Table into a DataStream of add and retract messages.
The message will be encoded as Tuple2. The first field is a Boolean flag,
the second field holds the record of the specified type T.
A true Boolean flag indicates an add message, a false flag indicates a retract message.
The fields of the Table are mapped to DataStream fields as follows:
- Row and Tuple types: Fields are mapped by position, field types must match.
- POJO DataStream types: Fields are mapped by field name, field types must match.
Specified by: toRetractStream in interface StreamTableEnvironment
Type Parameters: T - The type of the requested record type.
Parameters:
table - The Table to convert.
typeInfo - The TypeInformation of the requested record type.
queryConfig - The configuration of the query to generate.
Returns: The converted DataStream.
public org.apache.flink.table.descriptors.StreamTableDescriptor connect(org.apache.flink.table.descriptors.ConnectorDescriptor connectorDescriptor)
Description copied from interface: StreamTableEnvironment
Creates a table source and/or table sink from a descriptor.
Descriptors allow for declaring the communication to external systems in an implementation-agnostic way. The classpath is scanned for suitable table factories that match the desired configuration.
The following example shows how to read from a Kafka connector using a JSON format and registering a table source "MyTable" in append mode:
tableEnv
.connect(
new Kafka()
.version("0.11")
.topic("clicks")
.property("zookeeper.connect", "localhost")
.property("group.id", "click-group")
.startFromEarliest())
.withFormat(
new Json()
.jsonSchema("{...}")
.failOnMissingField(false))
.withSchema(
new Schema()
.field("user-name", "VARCHAR").from("u_name")
.field("count", "DECIMAL")
.field("proc-time", "TIMESTAMP").proctime())
.inAppendMode()
.createTemporaryTable("MyTable")
Specified by: connect in interface StreamTableEnvironment
Specified by: connect in interface org.apache.flink.table.api.TableEnvironment
Overrides: connect in class org.apache.flink.table.api.internal.TableEnvironmentImpl
Parameters: connectorDescriptor - connector descriptor describing the external system
public void sqlUpdate(String stmt, org.apache.flink.table.api.StreamQueryConfig config)
Description copied from interface: StreamTableEnvironment
Evaluates a SQL statement such as INSERT, UPDATE or DELETE, or a DDL statement.
NOTE: Currently only SQL INSERT statements are supported.
All tables referenced by the query must be registered in the TableEnvironment.
A Table is automatically registered when its Table#toString() method is
called, for example when it is embedded into a String.
Hence, SQL queries can directly reference a Table as follows:
// register the configured table sink into which the result is inserted.
tEnv.registerTableSink("sinkTable", configuredSink);
Table sourceTable = ...
String tableName = sourceTable.toString();
// sourceTable is not registered to the table environment
tEnv.sqlUpdate("INSERT INTO sinkTable SELECT * FROM " + tableName, config);
Specified by: sqlUpdate in interface StreamTableEnvironment
Parameters:
stmt - The SQL statement to evaluate.
config - The QueryConfig to use.
public void insertInto(org.apache.flink.table.api.Table table,
org.apache.flink.table.api.StreamQueryConfig queryConfig,
String sinkPath,
String... sinkPathContinued)
Description copied from interface: StreamTableEnvironment
Writes the Table to a TableSink that was registered under the specified name.
See the documentation of TableEnvironment.useDatabase(String) or
TableEnvironment.useCatalog(String) for the rules on the path resolution.
Specified by: insertInto in interface StreamTableEnvironment
Parameters:
table - The Table to write to the sink.
queryConfig - The StreamQueryConfig to use.
sinkPath - The first part of the path of the registered TableSink to which the Table is
written. This is to ensure at least the name of the TableSink is provided.
sinkPathContinued - The remaining part of the path of the registered TableSink to which the
Table is written.
@Internal public org.apache.flink.streaming.api.environment.StreamExecutionEnvironment execEnv()
protected void validateTableSource(org.apache.flink.table.sources.TableSource<?> tableSource)
Overrides: validateTableSource in class org.apache.flink.table.api.internal.TableEnvironmentImpl
protected boolean isEagerOperationTranslation()
Overrides: isEagerOperationTranslation in class org.apache.flink.table.api.internal.TableEnvironmentImpl
public String explain(boolean extended)
Specified by: explain in interface org.apache.flink.table.api.TableEnvironment
Overrides: explain in class org.apache.flink.table.api.internal.TableEnvironmentImpl
Copyright © 2014–2020 The Apache Software Foundation. All rights reserved.