
Flink API Extensions

I. Introduction

To maintain a fair degree of consistency between the Scala and Java APIs, some of the features that allow a higher level of expressiveness in Scala have been left out of the standard APIs for both batch and streaming.

If you want to enjoy the full Scala experience, you can opt in to extensions that enhance the Scala API via implicit conversions.

To use all available extensions, simply add the corresponding import:

1.DataSet API

import org.apache.flink.api.scala.extensions._
           

2.DataStream API

import org.apache.flink.streaming.api.scala.extensions._
           

Alternatively, you can also import individual extensions as needed.

II. Pattern Matching Extension

Normally, neither the DataSet nor the DataStream API accepts anonymous pattern matching functions to deconstruct tuples, case classes, or collections, such as the following:

val data: DataSet[(Int, String, Double)] = // [...]
data.map {
  case (id, name, temperature) => // [...]
  // The previous line causes the following compilation error:
  // "The argument types of an anonymous function must be fully known. (SLS 8.5)"
}
           

This extension introduces new methods in both the DataSet and DataStream Scala APIs that correspond one to one to the methods of the standard API. These extension methods do support anonymous pattern matching functions.

1.DataSet API

[Image: table of DataSet API extension methods and their standard API counterparts]

To use this extension exclusively for the DataSet API, you can add the following import:

import org.apache.flink.api.scala.extensions.acceptPartialFunctions
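
For example, with this import in place, the map from the failing snippet above compiles once it is written with the extension methods. Below is a minimal sketch (the object name and sample data are made up for illustration):

import org.apache.flink.api.scala._
import org.apache.flink.api.scala.extensions.acceptPartialFunctions

object DataSetExtensionSketch {
  def main(args: Array[String]): Unit = {
    val env = ExecutionEnvironment.getExecutionEnvironment

    // Same element type as the failing example: (id, name, temperature)
    val data: DataSet[(Int, String, Double)] = env.fromElements(
      (1, "sensor-a", 21.5), (2, "sensor-b", 36.0))

    // mapWith / filterWith are the extension counterparts of map / filter
    // and accept anonymous pattern matching functions directly.
    data
      .filterWith { case (_, _, temperature) => temperature > 30.0 }
      .mapWith { case (id, name, _) => (id, name) }
      .print()
  }
}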
           

2.DataStream API

[Image: table of DataStream API extension methods and their standard API counterparts]

To use this extension exclusively for the DataStream API, you can add the following import:

import org.apache.flink.streaming.api.scala.extensions.acceptPartialFunctions
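
Similarly, a minimal sketch for the DataStream API (the Reading case class and the sample values are made up for illustration; a streaming job also needs env.execute() to run):

import org.apache.flink.streaming.api.scala._
import org.apache.flink.streaming.api.scala.extensions.acceptPartialFunctions

// Hypothetical event type used only in this sketch.
case class Reading(sensor: String, temperature: Double)

object DataStreamExtensionSketch {
  def main(args: Array[String]): Unit = {
    val env = StreamExecutionEnvironment.getExecutionEnvironment

    val readings: DataStream[Reading] = env.fromElements(
      Reading("s1", 21.5), Reading("s2", 35.0), Reading("s1", 36.5))

    readings
      .filterWith { case Reading(_, temperature) => temperature > 30.0 } // filter counterpart
      .keyingBy { case Reading(sensor, _) => sensor }                    // keyBy counterpart
      .reduceWith { case (Reading(s, t1), Reading(_, t2)) =>             // reduce counterpart
        Reading(s, math.max(t1, t2))
      }
      .print()

    env.execute("acceptPartialFunctions sketch")
  }
}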
           

III. Code Example

The following code snippet shows an example of using these extension methods together with the DataSet API:

package cn.extensions

import org.apache.flink.api.scala._
import org.apache.flink.api.scala.ExecutionEnvironment

/**
  * Created by Administrator on 2020/5/29.
  */
case class Person(x: String, y: Int)
object Match {
  def main(args: Array[String]): Unit = {
    // set up the batch execution environment
    val env = ExecutionEnvironment.getExecutionEnvironment

    val text = "Apache Flink apache spark apache solr hbase hive flink kafka redis tachyon redis"
    val persons = text.toLowerCase.split(" ").map(row => Person(row, 1))
    
    import org.apache.flink.api.scala.extensions._
    val ds = env.fromCollection(persons)
    val result = ds.filterWith {
      case Person(x, y) => y > 0
    }.groupingBy{
      case Person(x, _) => x
    }.sum("y")

    result.first(5).print()
  }
}
           

The exception thrown at runtime:

Exception in thread "main" java.lang.UnsupportedOperationException: Aggregate does not support grouping with KeySelector functions, yet.
	at org.apache.flink.api.scala.operators.ScalaAggregateOperator.translateToDataFlow(ScalaAggregateOperator.java:220)
	at org.apache.flink.api.scala.operators.ScalaAggregateOperator.translateToDataFlow(ScalaAggregateOperator.java:55)
	at org.apache.flink.api.java.operators.OperatorTranslation.translateSingleInputOperator(OperatorTranslation.java:148)
	at org.apache.flink.api.java.operators.OperatorTranslation.translate(OperatorTranslation.java:102)
	at org.apache.flink.api.java.operators.OperatorTranslation.translateSingleInputOperator(OperatorTranslation.java:146)
	at org.apache.flink.api.java.operators.OperatorTranslation.translate(OperatorTranslation.java:102)
	at org.apache.flink.api.java.operators.OperatorTranslation.translate(OperatorTranslation.java:63)
	at org.apache.flink.api.java.operators.OperatorTranslation.translateToPlan(OperatorTranslation.java:52)
	at org.apache.flink.api.java.ExecutionEnvironment.createProgramPlan(ExecutionEnvironment.java:955)
	at org.apache.flink.api.java.ExecutionEnvironment.createProgramPlan(ExecutionEnvironment.java:922)
	at org.apache.flink.api.java.LocalEnvironment.execute(LocalEnvironment.java:85)
	at org.apache.flink.api.java.ExecutionEnvironment.execute(ExecutionEnvironment.java:816)
	at org.apache.flink.api.java.DataSet.collect(DataSet.java:413)
	at org.apache.flink.api.java.DataSet.print(DataSet.java:1652)
	at org.apache.flink.api.scala.DataSet.print(DataSet.scala:1726)
	at cn.extensions.Match$.main(Match.scala:29)
	at cn.extensions.Match.main(Match.scala)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:497)
	at com.intellij.rt.execution.application.AppMain.main(AppMain.java:144)

           

From the stack trace, the error is caused by the groupingBy extension method: aggregations such as sum do not yet support grouping with a KeySelector function, so this extension method cannot be used here. Switching back to the standard groupBy (on the field name) fixes it:

[Image: corrected code using groupBy]
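
For reference, a minimal sketch of the corrected job (keeping filterWith from the extensions but grouping by the field name instead of a KeySelector function; the object name is changed only to distinguish it from the original):

import org.apache.flink.api.scala._
import org.apache.flink.api.scala.extensions._

case class Person(x: String, y: Int)

object MatchFixed {
  def main(args: Array[String]): Unit = {
    val env = ExecutionEnvironment.getExecutionEnvironment

    val text = "Apache Flink apache spark apache solr hbase hive flink kafka redis tachyon redis"
    val persons = text.toLowerCase.split(" ").map(row => Person(row, 1))

    val ds = env.fromCollection(persons)
    val result = ds.filterWith {
      case Person(_, y) => y > 0   // the extension method itself is fine here
    }.groupBy("x")                 // group on the field name, not a KeySelector function
      .sum("y")

    result.first(5).print()
  }
}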
