
Spring Cloud Stream Elmhurst SR1 翻譯

Spring Cloud Stream Reference Guide

Table of Contents

  • Spring Cloud Stream Core
  • Binder Implementations
  • Appendices
  • Appendix A: Building
  • A.1. Basic Compile and Test
  • A.2. Documentation
  • A.3. Working with the code
  • A.4. Sign the Contributor License Agreement
  • A.5. Code Conventions and Housekeeping

Spring Cloud Stream Core

1. Quick Start

You can try Spring Cloud Stream in less than 5 minutes, even before you jump into any details, by following this three-step guide.

您可以在不到5分鐘的時間内嘗試Spring Cloud Stream,只需按照下面這個三步指南操作,甚至無需事先深入任何細節。

We show you how to create a Spring Cloud Stream application that receives messages coming from the messaging middleware of your choice (more on this later) and logs received messages to the console. We call it LoggingConsumer. While not very practical, it provides a good introduction to some of the main concepts and abstractions, making it easier to digest the rest of this user guide.

我們将向您展示如何建立一個Spring Cloud Stream應用程式,該應用程式接收來自您選擇的消息傳遞中間件的消息(稍後将詳細介紹)并将收到的消息記錄到控制台。我們稱之為LoggingConsumer。雖然不太實用,但它提供了一些主要概念和抽象的良好介紹,使其更容易消化本使用者指南的其餘部分。

The three steps are as follows:

  1. Creating a Sample Application by Using Spring Initializr
  2. Importing the Project into Your IDE
  3. Adding a Message Handler, Building, and Running

這三個步驟如下:

  1. 使用Spring Initializr建立示例應用程式
  2. 将項目導入IDE
  3. 添加消息處理程式,建構和運作

1.1. Creating a Sample Application by Using Spring Initializr

To get started, visit the Spring Initializr. From there, you can generate our LoggingConsumer application. To do so:

  1. In the Dependencies section, start typing stream. When the “Cloud Stream” option appears, select it.
  2. Start typing either 'kafka' or 'rabbit'.
  3. Select “Kafka” or “RabbitMQ”.

    Basically, you choose the messaging middleware to which your application binds. We recommend using the one you have already installed or feel more comfortable with installing and running. Also, as you can see from the Initializr screen, there are a few other options you can choose. For example, you can choose Gradle as your build tool instead of Maven (the default).

  4. In the Artifact field, type 'logging-consumer'.

    The value of the Artifact field becomes the application name. If you chose RabbitMQ for the middleware, your Spring Initializr should now be as follows:

  5. Click the Generate Project button.

    Doing so downloads the zipped version of the generated project to your hard drive.

  6. Unzip the file into the folder you want to use as your project directory.

要開始使用,請通路Spring Initializr。從那裡,您可以生成我們的LoggingConsumer應用程式。為此:

  1. 在“依賴關系”部分中,開始鍵入stream。當出現“Cloud Stream”選項時,選擇它。
  2. 開始輸入'kafka'或'rabbit'。
  3. 選擇“Kafka”或“RabbitMQ”。

基本上,您在這裡選擇的是應用程式要綁定的消息傳遞中間件。我們建議使用您已經安裝好的,或者您在安裝和運作方面更熟悉的那一個。此外,從Initializr頁面中可以看到,還有一些其他選項可供選擇。例如,您可以選擇Gradle(而不是預設的Maven)作為建構工具。

  4. 在“工件(Artifact)”字段中,鍵入“logging-consumer”。

Artifact字段的值成為應用程式名稱。如果你選擇RabbitMQ作為中間件,你的Spring Initializr現在應該如下:

  5. 單擊“生成項目(Generate Project)”按鈕。

這樣做會將生成的項目的壓縮版本下載到硬碟驅動器。

  6. 将檔案解壓縮到要用作項目目錄的檔案夾中。
We encourage you to explore the many possibilities available in the Spring Initializr. It lets you create many different kinds of Spring applications.
我們鼓勵您探索Spring Initializr中的許多可能性。它允許您建立許多不同類型的Spring應用程式。
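
If you chose RabbitMQ as the middleware, the generated pom.xml should contain a binder starter dependency along the lines of the following sketch (the exact coordinates and version are managed by the Initializr and the Spring Cloud BOM, so treat this only as an illustration):

如果您選擇了RabbitMQ作為中間件,生成的pom.xml中應包含類似下面的綁定器啟動器依賴(具體坐標和版本由Initializr和Spring Cloud BOM管理,此處僅作示意):

<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-stream-rabbit</artifactId>
</dependency>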

1.2. Importing the Project into Your IDE   将項目導入IDE

Now you can import the project into your IDE. Keep in mind that, depending on the IDE, you may need to follow a specific import procedure. For example, depending on how the project was generated (Maven or Gradle), you may need to follow specific import procedure (for example, in Eclipse or STS, you need to use File → Import → Maven → Existing Maven Project).

現在,您可以将項目導入IDE。請記住,根據IDE,您可能需要遵循特定的導入過程。例如,根據項目的生成方式(Maven或Gradle),您可能需要遵循特定的導入過程(例如,在Eclipse或STS中,您需要使用File→Import→Maven→Existing Maven Project)。

Once imported, the project must have no errors of any kind. Also, src/main/java should contain com.example.loggingconsumer.LoggingConsumerApplication.

導入後,項目必須沒有任何錯誤。另外,src/main/java應該包含com.example.loggingconsumer.LoggingConsumerApplication。

Technically, at this point, you can run the application’s main class. It is already a valid Spring Boot application. However, it does not do anything, so we want to add some code.

從技術上講,此時,您可以運作應用程式的主類。它已經是一個有效的Spring Boot應用程式。但是,它沒有做任何事情,是以我們想添加一些代碼。

1.3. Adding a Message Handler, Building, and Running   添加消息處理器,建構,并運作

Modify the com.example.loggingconsumer.LoggingConsumerApplication class to look as follows:

将com.example.loggingconsumer.LoggingConsumerApplication類修改為如下所示:

@SpringBootApplication

@EnableBinding(Sink.class)

public class LoggingConsumerApplication {

public static void main(String[] args) {

                SpringApplication.run(LoggingConsumerApplication.class, args);

        }

@StreamListener(Sink.INPUT)

        public void handle(Person person) {

                System.out.println("Received: " + person);

        }

public static class Person {

                private String name;

                public String getName() {

                        return name;

                }

                public void setName(String name) {

                        this.name = name;

                }

                public String toString() {

                        return this.name;

                }

        }

}

As you can see from the preceding listing:

  • We have enabled Sink binding (input-no-output) by using @EnableBinding(Sink.class). Doing so signals to the framework to initiate binding to the messaging middleware, where it automatically creates the destination (that is, queue, topic, and others) that are bound to the Sink.INPUT channel.
  • We have added a handler method to receive incoming messages of type Person. Doing so lets you see one of the core features of the framework: It tries to automatically convert incoming message payloads to type Person.

從前面的清單中可以看出:

  • 我們通過使用@EnableBinding(Sink.class)啟用了Sink綁定(只有輸入,沒有輸出)。這樣做會向架構發出信號,以啟動與消息傳遞中間件的綁定,架構會自動建立綁定到Sink.INPUT通道的目标(即隊列,主題等)。
  • 我們添加了一個handler方法來接收類型為Person的傳入消息。這樣做可以讓您看到架構的核心功能之一:它嘗試自動將傳入的消息有效負載轉換為Person類型。

You now have a fully functional Spring Cloud Stream application that listens for messages. From here, for simplicity, we assume you selected RabbitMQ in step one. Assuming you have RabbitMQ installed and running, you can start the application by running its main method in your IDE.

您現在擁有一個功能齊全的Spring Cloud Stream應用程式,可以偵聽消息。從這裡開始,為簡單起見,我們假設您在第一步中選擇了RabbitMQ。假設您已安裝並運作RabbitMQ,則可以在IDE中運作其main方法來啟動應用程式。

You should see following output:

你應該看到以下輸出:

        --- [ main] c.s.b.r.p.RabbitExchangeQueueProvisioner : declaring queue for inbound: input.anonymous.CbMIwdkJSBO1ZoPDOtHtCg, bound to: input

        --- [ main] o.s.a.r.c.CachingConnectionFactory       : Attempting to connect to: [localhost:5672]

        --- [ main] o.s.a.r.c.CachingConnectionFactory       : Created new connection: rabbitConnectionFactory#2a3a299:0/SimpleConnection@. . .

        . . .

        --- [ main] o.s.i.a.i.AmqpInboundChannelAdapter      : started inbound.input.anonymous.CbMIwdkJSBO1ZoPDOtHtCg

        . . .

        --- [ main] c.e.l.LoggingConsumerApplication         : Started LoggingConsumerApplication in 2.531 seconds (JVM running for 2.897)

Go to the RabbitMQ management console or any other RabbitMQ client and send a message to input.anonymous.CbMIwdkJSBO1ZoPDOtHtCg. The anonymous.CbMIwdkJSBO1ZoPDOtHtCg part represents the group name and is generated, so it is bound to be different in your environment. For something more predictable, you can use an explicit group name by setting spring.cloud.stream.bindings.input.group=hello (or whatever name you like).

轉到RabbitMQ管理控制台或任何其他RabbitMQ用戶端,並向input.anonymous.CbMIwdkJSBO1ZoPDOtHtCg發送一條消息。其中anonymous.CbMIwdkJSBO1ZoPDOtHtCg部分代表組名稱,並且是自動生成的,是以它在您的環境中必然會有所不同。如果希望更可預測,您可以通過設定spring.cloud.stream.bindings.input.group=hello(或任何您喜歡的名稱)來使用顯式的組名。
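
For example, a minimal application.properties for the LoggingConsumer with an explicit group might look like the following (the group name hello is only an example):

例如,為LoggingConsumer指定顯式組名的一個最簡application.properties可能如下所示(組名hello僅為示例):

spring.cloud.stream.bindings.input.group=hello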

The contents of the message should be a JSON representation of the Person class, as follows:

消息的内容應該是Person類的JSON表示,如下所示:

{"name":"Sam Spade"}

Then, in your console, you should see:

然後,在您的控制台中,您應該看到:

Received: Sam Spade

You can also build and package your application into a boot jar (by using ./mvnw clean install) and run the built JAR by using the java -jar command.

您還可以将應用程式建構并打包到引導jar中(通過使用./mvnw clean install),并使用該java -jar指令運作建構的JAR 。

Now you have a working (albeit very basic) Spring Cloud Stream application.

現在您有一個工作(盡管非常基本的)Spring Cloud Stream應用程式。

2. What’s New in 2.0?

Spring Cloud Stream introduces a number of new features, enhancements, and changes. The following sections outline the most notable ones:

  • New Features and Components
  • Notable Enhancements

Spring Cloud Stream引入了許多新功能,增強功能和更改。以下部分概述了最值得注意的部分:

  • 新功能和元件
  • 值得注意的增強功能

2.1. New Features and Components   新功能和元件

  • Polling Consumers: Introduction of polled consumers, which lets the application control message processing rates. See “Using Polled Consumers” for more details. You can also read this blog post for more details.
  • Micrometer Support: Metrics has been switched to use Micrometer. MeterRegistry is also provided as a bean so that custom applications can autowire it to capture custom metrics. See “Metrics Emitter” for more details.
  • New Actuator Binding Controls: New actuator binding controls let you both visualize and control the Bindings lifecycle. For more details, see Binding visualization and control.
  • Configurable RetryTemplate: Aside from providing properties to configure RetryTemplate, we now let you provide your own template, effectively overriding the one provided by the framework. To use it, configure it as a @Bean in your application (a minimal sketch follows this list).
  • 輪詢消費者:引入輪詢的消費者,讓應用程式控制消息處理速率。有關詳細資訊,請參閱“使用輪詢的消費者”。您還可以閱讀此部落格文章了解更多詳情。
  • Micrometer支援:度量标准(Metrics)已切換為使用Micrometer。MeterRegistry也作為bean提供,以便自定義應用程式可以自動裝配它以捕獲自定義度量。有關詳細資訊,請參閱“度量标准發射器(Metrics Emitter)”。
  • 新的執行器綁定控制:新的執行器綁定控制可讓您可視化並控制Bindings的生命周期。有關更多詳細資訊,請參閱綁定可視化和控制。
  • 可配置的RetryTemplate:除了提供用於配置RetryTemplate的屬性之外,我們現在允許您提供自己的模闆,有效地覆寫架構提供的模闆。要使用它,請在應用程式中將其配置為一個@Bean(列表後附有一個簡單示例)。
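
As a minimal sketch of the last item above, assuming the standard Spring Retry classes, a custom RetryTemplate could be declared as a bean roughly as follows (the bean name and policy values shown here are illustrative, not prescribed by the framework):

作為上面最後一項的簡單示意(假設使用標準的Spring Retry類),可以大致像下面這樣將自定義的RetryTemplate聲明為一個bean(這裡的bean名稱和策略取值僅作示意,並非架構的規定):

@Configuration
public class RetryConfiguration {

    // RetryTemplate, SimpleRetryPolicy and FixedBackOffPolicy come from the
    // Spring Retry library (org.springframework.retry)
    @Bean
    public RetryTemplate myRetryTemplate() {
        RetryTemplate template = new RetryTemplate();
        // retry a failed handler invocation at most 3 times
        template.setRetryPolicy(new SimpleRetryPolicy(3));
        // wait one second between attempts
        FixedBackOffPolicy backOff = new FixedBackOffPolicy();
        backOff.setBackOffPeriod(1000);
        template.setBackOffPolicy(backOff);
        return template;
    }
}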

2.2. Notable Enhancements   值得注意的增強功能

This version includes the following notable enhancements:

  • Both Actuator and Web Dependencies Are Now Optional
  • Content-type Negotiation Improvements
  • Notable Deprecations

此版本包括以下顯着增強功能:

  • Actuator和Web依賴關系現在都是可選的
  • 内容類型協商改進
  • 值得注意的棄用

2.2.1. Both Actuator and Web Dependencies Are Now Optional

This change slims down the footprint of the deployed application in the event that neither actuator nor web dependencies are required. It also lets you switch between the reactive and conventional web paradigms by manually adding one of the following dependencies.

如果既不需要執行器(actuator)依賴也不需要Web依賴,則此更改會減少已部署應用程式的占用空間。它還允許您通過手動添加以下依賴項之一,在響應式和傳統Web範例之間切換。

The following listing shows how to add the conventional web framework:

以下清單顯示了如何添加傳統的Web架構:

<dependency>

        <groupId>org.springframework.boot</groupId>

        <artifactId>spring-boot-starter-web</artifactId>

</dependency>

The following listing shows how to add the reactive web framework:

以下清單顯示了如何添加響應式Web架構:

<dependency>

        <groupId>org.springframework.boot</groupId>

        <artifactId>spring-boot-starter-webflux</artifactId>

</dependency>

The following list shows how to add the actuator dependency:

以下清單顯示了如何添加執行器依賴項:

<dependency>

    <groupId>org.springframework.boot</groupId>

    <artifactId>spring-boot-starter-actuator</artifactId>

</dependency>

2.2.2. Content-type Negotiation Improvements   内容類型協商改進

One of the core themes for version 2.0 is improvements (in both consistency and performance) around content-type negotiation and message conversion. The following summary outlines the notable changes and improvements in this area. See the “Content Type Negotiation” section for more details. Also this blog post contains more detail.

  • All message conversion is now handled only by MessageConverter objects.
  • We introduced the @StreamMessageConverter annotation to provide custom MessageConverter objects.
  • We introduced the default Content Type as application/json, which needs to be taken into consideration when migrating a 1.3 application or operating in the mixed mode (that is, 1.3 producer → 2.0 consumer).
  • Messages with textual payloads and a contentType of text/… or …/json are no longer converted to Message<String> for cases where the argument type of the provided MessageHandler cannot be determined (that is, public void handle(Message<?> message) or public void handle(Object payload)). Furthermore, a strong argument type may not be enough to properly convert messages, so the contentType header may be used as a supplement by some MessageConverters.

版本2.0的核心主題之一是圍繞内容類型協商和消息轉換的改進(在一致性和性能方面)。以下摘要概述了該領域的顯著變化和改進。有關詳細資訊,請參閱“内容類型協商”部分。另外這篇部落格文章中包含更多細節。

  • 現在,所有消息轉換僅由MessageConverter對象處理。
  • 我們引入了@StreamMessageConverter注釋來提供自定義MessageConverter對象。
  • 我們引入了預設的Content Type,即application/json,在遷移1.3應用程式或在混合模式下運作時(即1.3生産者→2.0消費者)需要考慮這一點。
  • 對於無法確定所提供的MessageHandler參數類型的情況(即public void handle(Message<?> message)或public void handle(Object payload)),帶有文本有效載荷且contentType為text/…或…/json的消息不再被轉換為Message<String>。此外,強類型的參數可能不足以正確轉換消息,是以contentType頭可能被某些MessageConverters用作補充。

2.3. Notable Deprecations   值得注意的廢棄

As of version 2.0, the following items have been deprecated:

  • Java Serialization (Java Native and Kryo)
  • Deprecated Classes and Methods

從2.0版開始,不推薦使用以下項目:

  • Java序列化(Java Native和Kryo)
  • 不推薦使用的類和方法

2.3.1. Java Serialization (Java Native and Kryo)   Java序列化(Java原生和Kryo)

JavaSerializationMessageConverter and KryoMessageConverter remain for now. However, we plan to move them out of the core packages and support in the future. The main reason for this deprecation is to flag the issue that type-based, language-specific serialization could cause in distributed environments, where Producers and Consumers may depend on different JVM versions or have different versions of supporting libraries (that is, Kryo). We also wanted to draw the attention to the fact that Consumers and Producers may not even be Java-based, so polyglot style serialization (i.e., JSON) is better suited.

JavaSerializationMessageConverter和KryoMessageConverter目前仍然保留。但是,我們計劃在未來將它們從核心軟體包和支援中移除。這種棄用的主要原因是指出基於類型的,特定於語言的序列化可能在分布式環境中引起的問題,其中生産者和消費者可能依賴於不同的JVM版本或具有不同版本的支援庫(即Kryo)。我們還想提請注意消費者和生産者甚至可能不是基於Java的這一事實,是以多語言風格的序列化(即JSON)更為适合。

2.3.2. Deprecated Classes and Methods   不推薦使用的類和方法

The following is a quick summary of notable deprecations. See the corresponding javadoc for more details.

  • SharedChannelRegistry. Use SharedBindingTargetRegistry.
  • Bindings. Beans qualified by it are already uniquely identified by their type — for example, provided Source, Processor, or custom bindings:

public interface Sample {

        String OUTPUT = "sampleOutput";

@Output(Sample.OUTPUT)

        MessageChannel output();

}

  • HeaderMode.raw. Use none, headers or embeddedHeaders
  • ProducerProperties.partitionKeyExtractorClass in favor of partitionKeyExtractorName and ProducerProperties.partitionSelectorClass in favor of partitionSelectorName. This change ensures that both components are Spring configured and managed and are referenced in a Spring-friendly way.
  • BinderAwareRouterBeanPostProcessor. While the component remains, it is no longer a BeanPostProcessor and will be renamed in the future.
  • BinderProperties.setEnvironment(Properties environment). Use BinderProperties.setEnvironment(Map<String, Object> environment).

以下是值得注意的棄用項的快速摘要。有關更多詳細資訊,請參閱相應的javadoc。

  • SharedChannelRegistry。使用SharedBindingTargetRegistry。
  • Bindings。由它限定的bean已經可以通過它們的類型唯一標識,例如,架構提供的Source,Processor或自定義綁定:

public interface Sample {

    String OUTPUT = "sampleOutput";

    @Output(Sample.OUTPUT)

    MessageChannel output();

}

  • HeaderMode.raw。使用none,headers或embeddedHeaders。
  • 棄用ProducerProperties.partitionKeyExtractorClass,改用partitionKeyExtractorName;棄用ProducerProperties.partitionSelectorClass,改用partitionSelectorName。此更改確保兩個元件都由Spring配置和管理,并以Spring友好的方式引用。
  • BinderAwareRouterBeanPostProcessor。雖然該元件仍然存在,但它不再是一個BeanPostProcessor,并且將來會被重命名。
  • BinderProperties.setEnvironment(Properties environment)。使用BinderProperties.setEnvironment(Map<String, Object> environment)。

This section goes into more detail about how you can work with Spring Cloud Stream. It covers topics such as creating and running stream applications.

本節詳細介紹了如何使用Spring Cloud Stream。它涵蓋了建立和運作流應用程式等主題。

3. Introducing Spring Cloud Stream   介紹Spring Cloud Stream

Spring Cloud Stream is a framework for building message-driven microservice applications. Spring Cloud Stream builds upon Spring Boot to create standalone, production-grade Spring applications and uses Spring Integration to provide connectivity to message brokers. It provides opinionated configuration of middleware from several vendors, introducing the concepts of persistent publish-subscribe semantics, consumer groups, and partitions.

Spring Cloud Stream是一個用于建構消息驅動的微服務應用程式的架構。Spring Cloud Stream建構于Spring Boot之上,用于建立獨立的生産級Spring應用程式,并使用Spring Integration提供與消息代理的連接配接。它提供了來自多個供應商的中間件的固定配置,介紹了持久性釋出 - 訂閱語義,消費者組,以及分區的概念。

You can add the @EnableBinding annotation to your application to get immediate connectivity to a message broker, and you can add @StreamListener to a method to cause it to receive events for stream processing. The following example shows a sink application that receives external messages:

您可以将@EnableBinding注解添加到應用程式以立即連接配接到消息代理,并且可以将@StreamListener注解添加到方法以使其接收流處理事件。以下示例顯示了接收外部消息的接收器應用程式:

@SpringBootApplication

@EnableBinding(Sink.class)

public class VoteRecordingSinkApplication {

  public static void main(String[] args) {

    SpringApplication.run(VoteRecordingSinkApplication.class, args);

  }

  @StreamListener(Sink.INPUT)

  public void processVote(Vote vote) {

      votingService.recordVote(vote);

  }

}

The @EnableBinding annotation takes one or more interfaces as parameters (in this case, the parameter is a single Sink interface). An interface declares input and output channels. Spring Cloud Stream provides the Source, Sink, and Processor interfaces. You can also define your own interfaces.

@EnableBinding注解接收一個或多個接口參數(在這種情況下,該參數是一個單個的Sink接口)。接口聲明輸入和輸出管道。Spring Cloud Stream提供了Source,Sink,和Processor接口。您還可以定義自己的接口。

The following listing shows the definition of the Sink interface:

下面顯示了Sink接口的定義:

public interface Sink {

  String INPUT = "input";

  @Input(Sink.INPUT)

  SubscribableChannel input();

}

The @Input annotation identifies an input channel, through which received messages enter the application. The @Output annotation identifies an output channel, through which published messages leave the application. The @Input and @Output annotations can take a channel name as a parameter. If a name is not provided, the name of the annotated method is used.

@Input注解辨別一個輸入管道,通過它接收進入應用程式的消息。@Output注解辨別一個輸出通道,通過它釋出離開應用程式的消息。@Input和@Output注解可以接收管道名稱作為參數。如果未提供名稱,則使用注解方法的名稱。

Spring Cloud Stream creates an implementation of the interface for you. You can use this in the application by autowiring it, as shown in the following example (from a test case):

Spring Cloud Stream為您建立了一個接口實作。您可以通過自動裝配在應用程式中使用它,如以下示例所示(來自測試用例):

@RunWith(SpringJUnit4ClassRunner.class)

@SpringApplicationConfiguration(classes = VoteRecordingSinkApplication.class)

@WebAppConfiguration

@DirtiesContext

public class StreamApplicationTests {

  @Autowired

  private Sink sink;

  @Test

  public void contextLoads() {

    assertNotNull(this.sink.input());

  }

}

4. Main Concepts   主要概念

Spring Cloud Stream provides a number of abstractions and primitives that simplify the writing of message-driven microservice applications. This section gives an overview of the following:

Spring Cloud Stream提供了許多抽象和原語,簡化了消息驅動的微服務應用程式的編寫。本節概述了以下内容:

  • Spring Cloud Stream’s application model   Spring Cloud Stream的應用程式模型
  • The Binder Abstraction   Binder抽象
  • Persistent publish-subscribe support   持久化釋出-訂閱支援
  • Consumer group support   消費者組支援
  • Partitioning support   分區支援
  • A pluggable Binder SPI   可插拔的Binder SPI

4.1. Application Model   應用程式模型

A Spring Cloud Stream application consists of a middleware-neutral core. The application communicates with the outside world through input and output channels injected into it by Spring Cloud Stream. Channels are connected to external brokers through middleware-specific Binder implementations.

Spring Cloud Stream應用程式由中間件中立的核心組成。應用程式通過Spring Cloud Stream注入其中的輸入和輸出管道與外界通信。通過中間件特定的Binder實作,将管道連接配接到外部代理。

Figure 1. Spring Cloud Stream Application

4.1.1. Fat JAR   胖JAR

Spring Cloud Stream applications can be run in stand-alone mode from your IDE for testing. To run a Spring Cloud Stream application in production, you can create an executable (or “fat”) JAR by using the standard Spring Boot tooling provided for Maven or Gradle. See the Spring Boot Reference Guide for more details.

Spring Cloud Stream應用程式可以在IDE中以獨立模式運作以進行測試。要在生産中運作Spring Cloud Stream應用程式,可以使用為Maven或Gradle提供的标準Spring Boot工具建立可執行(或“胖”)JAR。有關更多詳細資訊,請參見Spring Boot Reference Guide。

4.2. The Binder Abstraction   Binder抽象

Spring Cloud Stream provides Binder implementations for Kafka and Rabbit MQ. Spring Cloud Stream also includes a TestSupportBinder, which leaves a channel unmodified so that tests can interact with channels directly and reliably assert on what is received. You can also use the extensible API to write your own Binder.

Spring Cloud Stream為Kafka和Rabbit MQ提供了Binder實作。Spring Cloud Stream還包含一個TestSupportBinder,它保留了一個未修改的管道,以便測試可以直接與管道互動,并可靠地斷言收到的内容。您還可以使用可擴充API編寫自己的Binder。

Spring Cloud Stream uses Spring Boot for configuration, and the Binder abstraction makes it possible for a Spring Cloud Stream application to be flexible in how it connects to middleware. For example, deployers can dynamically choose, at runtime, the destinations (such as the Kafka topics or RabbitMQ exchanges) to which channels connect. Such configuration can be provided through external configuration properties and in any form supported by Spring Boot (including application arguments, environment variables, and application.yml or application.properties files). In the sink example from the Introducing Spring Cloud Stream section, setting the spring.cloud.stream.bindings.input.destination application property to raw-sensor-data causes it to read from the raw-sensor-data Kafka topic or from a queue bound to the raw-sensor-data RabbitMQ exchange.

Spring Cloud Stream使用Spring Boot進行配置,Binder抽象使Spring Cloud Stream應用程式可以靈活地連接配接到中間件。例如,部署者可以在運作時動态選擇管道連接配接的目的地(例如Kafka主題或RabbitMQ交換)。可以通過外部配置屬性以及Spring Boot支援的任何形式(包括應用程式參數,環境變量,和application.yml或application.properties檔案)來提供此類配置。在Introducing Spring Cloud Stream部分的接收器示例中,将spring.cloud.stream.bindings.input.destination應用程式屬性設定為raw-sensor-data以使其從raw-sensor-data Kafka主題或綁定到raw-sensor-data RabbitMQ交換的隊列中讀取。
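
For example, the sink application's input binding mentioned above could be configured in application.properties as follows (using the destination name from the example):

例如,上面提到的接收器應用程式的輸入綁定可以在application.properties中這樣配置(使用示例中的目标名稱):

spring.cloud.stream.bindings.input.destination=raw-sensor-data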

Spring Cloud Stream automatically detects and uses a binder found on the classpath. You can use different types of middleware with the same code. To do so, include a different binder at build time. For more complex use cases, you can also package multiple binders with your application and have it choose the binder (and even whether to use different binders for different channels) at runtime.

Spring Cloud Stream自動檢測并使用類路徑中找到的綁定器。您可以使用具有相同代碼的不同類型的中間件。為此,請在建構時包含不同的綁定器。對于更複雜的用例,您還可以在應用程式中打包多個綁定器,并讓它在運作時選擇綁定器(甚至是否為不同的通道使用不同的綁定器)。

4.3. Persistent Publish-Subscribe Support   持久化釋出-訂閱支援

Communication between applications follows a publish-subscribe model, where data is broadcast through shared topics. This can be seen in the following figure, which shows a typical deployment for a set of interacting Spring Cloud Stream applications.

應用程式之間的通信遵循釋出 - 訂閱模型,其中資料通過共享主題廣播。這可以在下圖中看到,該圖顯示了一組互動式Spring Cloud Stream應用程式的典型部署。

Figure 2. Spring Cloud Stream Publish-Subscribe

Data reported by sensors to an HTTP endpoint is sent to a common destination named raw-sensor-data. From the destination, it is independently processed by a microservice application that computes time-windowed averages and by another microservice application that ingests the raw data into HDFS (Hadoop Distributed File System). In order to process the data, both applications declare the topic as their input at runtime.

傳感器向HTTP端點報告的資料將發送到名為raw-sensor-data的公共目的地。從該目的地開始,它由一個計算時間窗平均值的微服務應用程式和另一個將原始資料攝入HDFS(Hadoop分布式檔案系統)的微服務應用程式獨立處理。為了處理資料,兩個應用程式都在運作時將該主題聲明為它們的輸入。

The publish-subscribe communication model reduces the complexity of both the producer and the consumer and lets new applications be added to the topology without disruption of the existing flow. For example, downstream from the average-calculating application, you can add an application that calculates the highest temperature values for display and monitoring. You can then add another application that interprets the same flow of averages for fault detection. Doing all communication through shared topics rather than point-to-point queues reduces coupling between microservices.

釋出 - 訂閱通信模型降低了生産者和消費者的複雜性,并允許将新應用程式添加到拓撲中,而不會中斷現有流程。例如,在平均值計算應用程式的下遊,您可以添加計算顯示和監視的最高溫度值的應用程式。然後,您可以添加另一個應用程式來解釋相同的平均流量以進行故障檢測。通過共享主題而不是點對點隊列進行所有通信可以減少微服務之間的耦合。

While the concept of publish-subscribe messaging is not new, Spring Cloud Stream takes the extra step of making it an opinionated choice for its application model. By using native middleware support, Spring Cloud Stream also simplifies use of the publish-subscribe model across different platforms.

雖然釋出 - 訂閱消息的概念并不新鮮,但Spring Cloud Stream采取了額外的步驟,使其成為其應用程式模型的自覺選擇。通過使用原生中間件支援,Spring Cloud Stream還簡化了跨不同平台的釋出 - 訂閱模型的使用。

4.4. Consumer Groups   消費者組

While the publish-subscribe model makes it easy to connect applications through shared topics, the ability to scale up by creating multiple instances of a given application is equally important. When doing so, different instances of an application are placed in a competing consumer relationship, where only one of the instances is expected to handle a given message.

雖然釋出 - 訂閱模型使通過共享主題輕松連接配接應用程式,但通過建立給定應用程式的多個執行個體來擴充的能力同樣重要。執行此操作時,應用程式的不同執行個體将放置在競争的消費者關系中,其中隻有一個執行個體需要處理給定的消息。

Spring Cloud Stream models this behavior through the concept of a consumer group. (Spring Cloud Stream consumer groups are similar to and inspired by Kafka consumer groups.) Each consumer binding can use the spring.cloud.stream.bindings.<channelName>.group property to specify a group name. For the consumers shown in the following figure, this property would be set as spring.cloud.stream.bindings.<channelName>.group=hdfsWrite or spring.cloud.stream.bindings.<channelName>.group=average.

Spring Cloud Stream通過消費者組的概念對此行為進行模組化。(Spring Cloud Stream消費者組與Kafka消費者組類似并受其啟發。)每個消費者綁定都可以使用spring.cloud.stream.bindings.<channelName>.group屬性來指定組名稱。對于下圖中顯示的消費者,此屬性将設定為spring.cloud.stream.bindings.<channelName>.group=hdfsWrite或spring.cloud.stream.bindings.<channelName>.group=average。

Figure 3. Spring Cloud Stream Consumer Groups

All groups that subscribe to a given destination receive a copy of published data, but only one member of each group receives a given message from that destination. By default, when a group is not specified, Spring Cloud Stream assigns the application to an anonymous and independent single-member consumer group that is in a publish-subscribe relationship with all other consumer groups.

訂閱給定目的地的所有組都會收到已釋出資料的副本,但每個組中隻有一個成員從該目的地接收給定的消息。預設情況下,當未指定組時,Spring Cloud Stream會將應用程式分配給一個匿名且獨立的單成員消費者組,該組與所有其他消費者組處於釋出-訂閱關系。
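
For example, the HDFS-writing consumer from the preceding figure could declare its group in application.properties as follows (assuming its input binding is named input):

例如,上圖中寫入HDFS的消費者可以在application.properties中這樣聲明其消費者組(假設其輸入綁定名為input):

spring.cloud.stream.bindings.input.group=hdfsWrite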

4.5. Consumer Types   消費者類型

Two types of consumer are supported:

  • Message-driven (sometimes referred to as Asynchronous)
  • Polled (sometimes referred to as Synchronous)

支援兩種類型的消費者:

  • 消息驅動(有時稱為異步)
  • 輪詢(有時稱為同步)

Prior to version 2.0, only asynchronous consumers were supported. A message is delivered as soon as it is available and a thread is available to process it.

When you wish to control the rate at which messages are processed, you might want to use a synchronous consumer.

在2.0版之前,僅支援異步消費者。消息一旦可用就會傳遞,并且有一個線程可以處理它。

如果要控制處理消息的速率,可能需要使用同步消費者。

4.5.1. Durability   持久性

Consistent with the opinionated application model of Spring Cloud Stream, consumer group subscriptions are durable. That is, a binder implementation ensures that group subscriptions are persistent and that, once at least one subscription for a group has been created, the group receives messages, even if they are sent while all applications in the group are stopped.

與Spring Cloud Stream的固定應用模型一緻,消費者組訂閱是持久的。也就是說,綁定器實作確定組訂閱是持久的,并且一旦建立了組的至少一個訂閱,該組就接收消息,即使它們是在組中的所有應用程式都被停止時發送的。

Anonymous subscriptions are non-durable by nature. For some binder implementations (such as RabbitMQ), it is possible to have non-durable group subscriptions.
匿名訂閱本質上是非持久的。對于某些綁定器實作(例如RabbitMQ),可以具有非持久的組訂閱。

In general, it is preferable to always specify a consumer group when binding an application to a given destination. When scaling up a Spring Cloud Stream application, you must specify a consumer group for each of its input bindings. Doing so prevents the application’s instances from receiving duplicate messages (unless that behavior is desired, which is unusual).

通常,在将應用程式綁定到給定目的地時,最好始終指定消費者組。擴充Spring Cloud Stream應用程式時,必須為每個輸入綁定指定一個消費者組。這樣做可以防止應用程式的執行個體接收重複的消息(除非需要這種行為,這是不正常的)。

4.6. Partitioning Support   分區支援

Spring Cloud Stream provides support for partitioning data between multiple instances of a given application. In a partitioned scenario, the physical communication medium (such as the broker topic) is viewed as being structured into multiple partitions. One or more producer application instances send data to multiple consumer application instances and ensure that data identified by common characteristics are processed by the same consumer instance.

Spring Cloud Stream支援在給定應用程式的多個執行個體之間對資料進行分區。在分區方案中,實體通信媒體(例如代理主題)被視為被劃分為多個分區。一個或多個生産者應用程式執行個體将資料發送到多個消費者應用程式執行個體,并確保由共同特征辨別的資料由同一個消費者執行個體處理。

Spring Cloud Stream provides a common abstraction for implementing partitioned processing use cases in a uniform fashion. Partitioning can thus be used whether the broker itself is naturally partitioned (for example, Kafka) or not (for example, RabbitMQ).

Spring Cloud Stream提供了一種通用抽象,用於以統一的方式實作分區處理用例。是以,無論代理本身是否天然支援分區(例如Kafka天然支援,而RabbitMQ不支援),都可以使用分區。

Figure 4. Spring Cloud Stream Partitioning

Partitioning is a critical concept in stateful processing, where it is critical (for either performance or consistency reasons) to ensure that all related data is processed together. For example, in the time-windowed average calculation example, it is important that all measurements from any given sensor are processed by the same application instance.

分區是有狀态進行中的一個關鍵概念,其中確定所有相關資料一起處理至關重要(出于性能或一緻性原因)。例如,在時間視窗平均值計算示例中,重要的是來自任何給定傳感器的所有測量值都由同一應用程式執行個體處理。

To set up a partitioned processing scenario, you must configure both the data-producing and the data-consuming ends.
要設定分區處理方案,必須同時配置資料生成和資料消費兩端。
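
As a rough sketch, and assuming an output binding named output on the producer side and an input binding named input on the consumer side, the two ends could be configured with properties such as the following (the property names are standard Spring Cloud Stream properties; the values are only illustrative):

作為一個粗略的示意(假設生産者端的輸出綁定名為output,消費者端的輸入綁定名為input),兩端可以使用類似下面的屬性進行配置(屬性名為標準的Spring Cloud Stream屬性,取值僅作示意):

# producer side: derive the partition key from the payload and use two partitions
spring.cloud.stream.bindings.output.producer.partitionKeyExpression=payload.id
spring.cloud.stream.bindings.output.producer.partitionCount=2

# consumer side: mark the input as partitioned and identify this instance
spring.cloud.stream.bindings.input.consumer.partitioned=true
spring.cloud.stream.instanceCount=2
spring.cloud.stream.instanceIndex=0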

5. Programming Model   程式設計模型

To understand the programming model, you should be familiar with the following core concepts:

  • Destination Binders: Components responsible to provide integration with the external messaging systems.
  • Destination Bindings: Bridge between the external messaging systems and application provided Producers and Consumers of messages (created by the Destination Binders).
  • Message: The canonical data structure used by producers and consumers to communicate with Destination Binders (and thus other applications via external messaging systems).

要了解程式設計模型,您應該熟悉以下核心概念:

  • 目标綁定器:負責提供與外部消息系統內建的元件。
  • 目标綁定:外部消息系統與應用程式提供的消息生産者和消費者之間的橋接(由目标綁定器建立)。
  • 消息:生産者和使用者用于與目标綁定器(以及通過外部消息系統的其他應用程式)通信的規範資料結構。

5.1. Destination Binders   目标綁定器

Destination Binders are extension components of Spring Cloud Stream responsible for providing the necessary configuration and implementation to facilitate integration with external messaging systems. This integration is responsible for connectivity, delegation, and routing of messages to and from producers and consumers, data type conversion, invocation of the user code, and more.

目标綁定器是Spring Cloud Stream的擴充元件,負責提供必要的配置和實作,以促進與外部消息系統的內建。此內建負責與生産者和消費者之間的消息的連接配接,委派,和路由,資料類型轉換,使用者代碼的調用等。

Binders handle a lot of the boilerplate responsibilities that would otherwise fall on your shoulders. However, to accomplish that, the binder still needs some help in the form of a minimalistic yet required set of instructions from the user, which typically come in the form of some type of configuration.

綁定器處理了許多原本會落在你肩上的樣板(boilerplate)工作。然而,為了實作這一點,綁定器仍然需要來自使用者的一些幫助,其形式是一組簡約但必需的指令,通常表現為某種類型的配置。

While it is out of scope of this section to discuss all of the available binder and binding configuration options (the rest of the manual covers them extensively), Destination Binding does require special attention. The next section discusses it in detail.

雖然讨論所有可用的綁定器和綁定配置選項超出了本節的範圍(本手冊的其餘部分将對其進行全面介紹),但目标綁定确實需要特别注意。下一節将詳細讨論它。

5.2. Destination Bindings   目标綁定

As stated earlier, Destination Bindings provide a bridge between the external messaging system and application-provided Producers and Consumers.

如前所述,目标綁定在外部消息系統和應用程式提供的生産者和消費者之間提供了一個橋梁。

Applying the @EnableBinding annotation to one of the application’s configuration classes defines a destination binding. The @EnableBinding annotation itself is meta-annotated with @Configuration and triggers the configuration of the Spring Cloud Stream infrastructure.

将@EnableBinding注解應用于其中一個應用程式的配置類可定義目标綁定。@EnableBinding注解本身是具有@Configuration的元注解,并觸發Spring Cloud Stream基礎設施的配置。

The following example shows a fully configured and functioning Spring Cloud Stream application that receives the payload of the message from the INPUT destination as a String type (see Content Type Negotiation section), logs it to the console and sends it to the OUTPUT destination after converting it to upper case.

以下示例顯示了一個完全配置且正常運作的Spring Cloud Stream應用程式,該應用程式将來自INPUT目标的消息負載接收為String類型(請參閱内容類型協商部分),将消息負載記錄到控制台,并在将其轉換為大寫後将其發送到OUTPUT目标。

@SpringBootApplication

@EnableBinding(Processor.class)

public class MyApplication {

public static void main(String[] args) {

SpringApplication.run(MyApplication.class, args);

}

@StreamListener(Processor.INPUT)

@SendTo(Processor.OUTPUT)

public String handle(String value) {

System.out.println("Received: " + value);

return value.toUpperCase();

}

}

As you can see the @EnableBinding annotation can take one or more interface classes as parameters. The parameters are referred to as bindings, and they contain methods representing bindable components. These components are typically message channels (see Spring Messaging) for channel-based binders (such as Rabbit, Kafka, and others). However other types of bindings can provide support for the native features of the corresponding technology. For example Kafka Streams binder (formerly known as KStream) allows native bindings directly to Kafka Streams (see Kafka Streams for more details).

如您所見,@EnableBinding注解可以接收一個或多個接口類作為參數。這些參數稱為綁定,它們包含表示可綁定元件的方法。這些元件通常是基于通道的綁定器(例如Rabbit,Kafka等)的消息通道(請參閱Spring Messaging)。然而,其他類型的綁定可以為相應技術的原生特征提供支援。例如,Kafka Streams binder(以前稱為KStream)允許直接原生綁定到Kafka Streams(有關詳細資訊,請參閱Kafka Streams)。

Spring Cloud Stream already provides binding interfaces for typical message exchange contracts, which include:

  • Sink: Identifies the contract for the message consumer by providing the destination from which the message is consumed.
  • Source: Identifies the contract for the message producer by providing the destination to which the produced message is sent.
  • Processor: Encapsulates both the sink and the source contracts by exposing two destinations that allow consumption and production of messages.

Spring Cloud Stream已經為典型的消息交換協定提供了綁定接口,其中包括:

  • Sink:通過提供消息所用的目标來辨別消息消費者的合同。
  • Source:通過提供發送生成的消息的目标來辨別消息生産者的合同。
  • Processor:通過公開允許消費和生成消息的兩個目标來封裝Sink和Source合同。

public interface Sink {

  String INPUT = "input";

  @Input(Sink.INPUT)

  SubscribableChannel input();

}

public interface Source {

  String OUTPUT = "output";

  @Output(Source.OUTPUT)

  MessageChannel output();

}

public interface Processor extends Source, Sink {}

While the preceding example satisfies the majority of cases, you can also define your own contracts by defining your own bindings interfaces and use @Input and @Output annotations to identify the actual bindable components.

雖然前面的示例滿足大多數情況,但您也可以通過定義自己的綁定接口以及使用@Input和@Output注解辨別實際的可綁定元件來定義自己的合同。

For example:

public interface Barista {

    @Input

    SubscribableChannel orders();

    @Output

    MessageChannel hotDrinks();

    @Output

    MessageChannel coldDrinks();

}

Using the interface shown in the preceding example as a parameter to @EnableBinding triggers the creation of the three bound channels named orders, hotDrinks, and coldDrinks, respectively.

使用前面例子中顯示的接口作為@EnableBinding注解的一個參數将觸發三個綁定通道的建立,分别是命名為orders,hotDrinks和coldDrinks。

You can provide as many binding interfaces as you need, as arguments to the @EnableBinding annotation, as shown in the following example:

您可以根據需要提供盡可能多的綁定接口,作為@EnableBinding注解的參數,如以下示例所示:

@EnableBinding(value = { Orders.class, Payment.class })

In Spring Cloud Stream, the bindable MessageChannel components are the Spring Messaging MessageChannel (for outbound) and its extension, SubscribableChannel (for inbound).

在Spring Cloud Stream中,可綁定MessageChannel元件是Spring Messaging MessageChannel(用于出站)及其擴充SubscribableChannel(用于入站)。

Pollable Destination Binding   可輪詢的目的地綁定

While the previously described bindings support event-based message consumption, sometimes you need more control, such as rate of consumption.

雖然之前描述的綁定支援基于事件的消息消費,但有時您需要更多控制,例如消費速率。

Starting with version 2.0, you can now bind a pollable consumer:

從2.0版開始,您現在可以綁定可輪詢消費者:

The following example shows how to bind a pollable consumer:

以下示例顯示如何綁定可輪詢消費者:

public interface PolledBarista {

    @Input

    PollableMessageSource orders();

. . .

}

In this case, an implementation of PollableMessageSource is bound to the orders “channel”. See Using Polled Consumers for more details.

在這種情況下,PollableMessageSource的實作被綁定到orders“通道”。有關詳細資訊,請參閱使用輪詢的消費者。

Customizing Channel Names   自定義管道名稱

By using the @Input and @Output annotations, you can specify a customized channel name for the channel, as shown in the following example:

通過使用@Input和@Output注解,您可以為通道指定自定義通道名稱,如以下示例所示:

public interface Barista {

    @Input("inboundOrders")

    SubscribableChannel orders();

}

In the preceding example, the created bound channel is named inboundOrders.

在前面的示例中,建立的綁定通道被命名為inboundOrders。

Normally, you need not access individual channels or bindings directly (other than configuring them via the @EnableBinding annotation). However, there may be times, such as testing or other corner cases, when you do.

通常,您無需直接通路單個通道或綁定(除了通過@EnableBinding注解配置它們之外)。但是,有時您可能需要這樣做,例如在測試或其他特殊情況下。

Aside from generating channels for each binding and registering them as Spring beans, for each bound interface, Spring Cloud Stream generates a bean that implements the interface. That means you can have access to the interfaces representing the bindings or individual channels by auto-wiring either in your application, as shown in the following two examples:

除了為每個綁定生成通道并将它們注冊為Spring bean之外,對于每個綁定接口,Spring Cloud Stream都會生成一個實作該接口的bean。這意味着您可以通過在應用程式中自動裝配來通路表示綁定或單個通道的接口,如以下兩個示例所示:

Autowire Binding interface

自動裝配綁定接口

@Autowired

private Source source;

public void sayHello(String name) {

    source.output().send(MessageBuilder.withPayload(name).build());

}

Autowire individual channel

自動裝配單個通道

@Autowired

private MessageChannel output;

public void sayHello(String name) {

    output.send(MessageBuilder.withPayload(name).build());

}

You can also use standard Spring’s @Qualifier annotation for cases when channel names are customized or in multiple-channel scenarios that require specifically named channels.

您還可以在自定義通道名稱的情況下或在需要特定命名通道的多通道方案中使用标準Spring的@Qualifier注解。

The following example shows how to use the @Qualifier annotation in this way:

以下示例顯示如何以這種方式使用@Qualifier注解:

@Autowired

@Qualifier("myChannel")

private MessageChannel output;

5.3. Producing and Consuming Messages   生産及消費消息

You can write a Spring Cloud Stream application by using either Spring Integration annotations or Spring Cloud Stream native annotation.

您可以使用Spring Integration注釋或Spring Cloud Stream原生注釋編寫Spring Cloud Stream應用程式。

5.3.1. Spring Integration Support

Spring Cloud Stream is built on the concepts and patterns defined by Enterprise Integration Patterns and relies in its internal implementation on an already established and popular implementation of Enterprise Integration Patterns within the Spring portfolio of projects: Spring Integration framework.

Spring Cloud Stream建立在Enterprise Integration Patterns定義的概念和模式之上,其内部實作依賴于Spring項目組合中已經成熟且流行的企業整合模式實作:Spring Integration架構。

So it is only natural for it to support the foundation, semantics, and configuration options that are already established by Spring Integration.

For example, you can attach the output channel of a Source to a MessageSource and use the familiar @InboundChannelAdapter annotation, as follows:

是以,它自然而然地支援Spring Integration已經建立的基礎,語義,和配置選項。

例如,您可以将Source的輸出通道附加到MessageSource上并使用熟悉的@InboundChannelAdapter注釋,如下所示:

@EnableBinding(Source.class)

public class TimerSource {

  @Bean

  @InboundChannelAdapter(value = Source.OUTPUT, poller = @Poller(fixedDelay = "10", maxMessagesPerPoll = "1"))

  public MessageSource<String> timerMessageSource() {

    return () -> new GenericMessage<>("Hello Spring Cloud Stream");

  }

}

Similarly, you can use @Transformer or @ServiceActivator while providing an implementation of a message handler method for a Processor binding contract, as shown in the following example:

同樣,您可以使用@Transformer或@ServiceActivator注解,同時為處理器綁定契約提供消息處理程式方法的實作,如以下示例所示:

@EnableBinding(Processor.class)

public class TransformProcessor {

  @Transformer(inputChannel = Processor.INPUT, outputChannel = Processor.OUTPUT)

  public Object transform(String message) {

    return message.toUpperCase();

  }

}

While this may be skipping ahead a bit, it is important to understand that, when you consume from the same binding using @StreamListener annotation, a pub-sub model is used. Each method annotated with @StreamListener receives its own copy of a message, and each one has its own consumer group. However, if you consume from the same binding by using one of the Spring Integration annotation (such as @Aggregator, @Transformer, or @ServiceActivator), those consume in a competing model. No individual consumer group is created for each subscription.
雖然這裡稍微提前講了一點,但重要的是要了解,當您使用@StreamListener注解消費同一個綁定時,使用的是釋出-訂閱模型。每個使用@StreamListener注解的方法都會收到自己的消息副本,並且每個方法都有自己的消費者組。不過,如果你通過使用Spring Integration的注解之一(如@Aggregator,@Transformer,或@ServiceActivator)消費同一個綁定,則它們以競争消費模型進行消費,不會為每個訂閱單獨建立消費者組。

5.3.2. Using @StreamListener Annotation   使用@StreamListener注解

Complementary to its Spring Integration support, Spring Cloud Stream provides its own @StreamListener annotation, modeled after other Spring Messaging annotations (@MessageMapping, @JmsListener, @RabbitListener, and others) and provides conveniences, such as content-based routing and others.

作為其Spring Integration支援的補充,Spring Cloud Stream提供了自己的@StreamListener注解,它仿照其他Spring Messaging注解(@MessageMapping,@JmsListener,@RabbitListener等)設計,並提供了諸如基於内容的路由等便利功能。

@EnableBinding(Sink.class)

public class VoteHandler {

  @Autowired

  VotingService votingService;

  @StreamListener(Sink.INPUT)

  public void handle(Vote vote) {

    votingService.record(vote);

  }

}

As with other Spring Messaging methods, method arguments can be annotated with @Payload, @Headers, and @Header.

與其他Spring Messaging的方法一樣,方法的參數可以使用@Payload,@Headers,和@Header注解。

For methods that return data, you must use the @SendTo annotation to specify the output binding destination for data returned by the method, as shown in the following example:

對于傳回資料的方法,必須使用@SendTo注釋指定方法傳回的資料的輸出綁定目标,如以下示例所示:

@EnableBinding(Processor.class)

public class TransformProcessor {

  @Autowired

  VotingService votingService;

  @StreamListener(Processor.INPUT)

  @SendTo(Processor.OUTPUT)

  public VoteResult handle(Vote vote) {

    return votingService.record(vote);

  }

}

5.3.3. Using @StreamListener for Content-based routing   使用@StreamListener進行基于内容的路由

Spring Cloud Stream supports dispatching messages to multiple handler methods annotated with @StreamListener based on conditions.

Spring Cloud Stream支援将消息分派給使用基于條件的@StreamListener注釋的多個處理程式方法。

In order to be eligible to support conditional dispatching, a method must satisfy the following conditions:

  • It must not return a value.
  • It must be an individual message handling method (reactive API methods are not supported).

為了有資格支援條件分派,方法必須滿足以下條件:

  • 它不能傳回值。
  • 它必須是單獨的消息處理方法(不支援反應式API方法)。

The condition is specified by a SpEL expression in the condition argument of the annotation and is evaluated for each message. All the handlers that match the condition are invoked in the same thread, and no assumption must be made about the order in which the invocations take place.

條件由注釋的條件參數中的SpEL表達式指定,并對每條消息進行評估。比對條件的所有處理程式都在同一個線程中調用,并且不必假設調用發生的順序。

In the following example of a @StreamListener with dispatching conditions, all the messages bearing a header type with the value bogey are dispatched to the receiveBogey method, and all the messages bearing a header type with the value bacall are dispatched to the receiveBacall method.

在以下帶排程條件的@StreamListener示例中,header類型為bogey值的所有消息都将被排程到receiveBogey方法,header類型為bacall值的所有消息都将被排程到receiveBacall方法。

@EnableBinding(Sink.class)

@EnableAutoConfiguration

public static class TestPojoWithAnnotatedArguments {

    @StreamListener(target = Sink.INPUT, condition = "headers['type']=='bogey'")

    public void receiveBogey(@Payload BogeyPojo bogeyPojo) {

       // handle the message

    }

    @StreamListener(target = Sink.INPUT, condition = "headers['type']=='bacall'")

    public void receiveBacall(@Payload BacallPojo bacallPojo) {

       // handle the message

    }

}

Content Type Negotiation in the Context of condition   條件上下文中的内容類型協商

It is important to understand some of the mechanics behind content-based routing using the condition argument of @StreamListener, especially in the context of the type of the message as a whole. It may also help if you familiarize yourself with the Content Type Negotiation before you proceed.

重要的是要了解使用@StreamListener的condition參數進行基於内容的路由背後的一些機制,尤其是在整個消息類型的上下文中。在繼續操作之前,先熟悉内容類型協商也會有所幫助。

Consider the following scenario:

請考慮以下情形:

@EnableBinding(Sink.class)

@EnableAutoConfiguration

public static class CatsAndDogs {

    @StreamListener(target = Sink.INPUT, condition = "payload.class.simpleName=='Dog'")

    public void bark(Dog dog) {

       // handle the message

    }

    @StreamListener(target = Sink.INPUT, condition = "payload.class.simpleName=='Cat'")

    public void purr(Cat cat) {

       // handle the message

    }

}

The preceding code is perfectly valid. It compiles and deploys without any issues, yet it never produces the result you expect.

上述代碼完全有效。它編譯和部署沒有任何問題,但它永遠不會産生您期望的結果。

That is because you are testing something that does not yet exist in a state you expect. That is because the payload of the message has not yet been converted from the wire format (byte[]) to the desired type. In other words, it has not yet gone through the type conversion process described in the Content Type Negotiation.

那是因為你正在測試一些在你期望的狀态下尚不存在的東西。這是因為消息的有效負載尚未從有線格式(byte[])轉換為所需類型。換句話說,它尚未經曆内容類型協商中描述的類型轉換過程。

So, unless you use a SpEL expression that evaluates raw data (for example, the value of the first byte in the byte array), use message header-based expressions (such as condition = "headers['type']=='dog'").

是以,除非您使用評估原始資料的SpEL表達式(例如,位元組數組中第一個位元組的值),否則請使用基於消息頭的表達式(例如condition = "headers['type']=='dog'")。

At the moment, dispatching through @StreamListener conditions is supported only for channel-based binders (not for reactive programming) support.
目前,通過@StreamListener條件進行排程隻支援基于通道的綁定器(不支援響應式程式設計)。

5.3.4. Using Polled Consumers   使用輪詢的消費者

When using polled consumers, you poll the PollableMessageSource on demand. Consider the following example of a polled consumer:

使用輪詢消費者時,您可以按需輪詢PollableMessageSource。考慮以下輪詢消費者的示例:

public interface PolledConsumer {

    @Input

    PollableMessageSource destIn();

    @Output

    MessageChannel destOut();

}

Given the polled consumer in the preceding example, you might use it as follows:

鑒于前面示例中的輪詢消費者,您可以按如下方式使用它:

@Bean

public ApplicationRunner poller(PollableMessageSource destIn, MessageChannel destOut) {

    return args -> {

        while (someCondition()) {

            try {

                if (!destIn.poll(m -> {

                    String newPayload = ((String) m.getPayload()).toUpperCase();

                    destOut.send(new GenericMessage<>(newPayload));

                })) {

                    Thread.sleep(1000);

                }

            }

            catch (Exception e) {

                // handle failure (throw an exception to reject the message);

            }

        }

    };

}

The PollableMessageSource.poll() method takes a MessageHandler argument (often a lambda expression, as shown here). It returns true if the message was received and successfully processed.

PollableMessageSource.poll()方法接受一個MessageHandler參數(通常是lambda表達式,如此處所示)。如果收到并成功處理了消息,則傳回true。

As with message-driven consumers, if the MessageHandler throws an exception, messages are published to error channels, as discussed in “[binder-error-channels]”.

與消息驅動的消費者一樣,如果MessageHandler抛出異常,則将消息釋出到錯誤通道,如“ [binder-error-channels] ”中所述。

Normally, the poll() method acknowledges the message when the MessageHandler exits. If the method exits abnormally, the message is rejected (not re-queued). You can override that behavior by taking responsibility for the acknowledgment, as shown in the following example:

通常,poll()方法在MessageHandler退出時确認消息。如果方法異常退出,則拒絕該消息(不重新排隊)。您可以通過承擔确認責任來覆寫該行為,如以下示例所示:

@Bean

public ApplicationRunner poller(PollableMessageSource dest1In, MessageChannel dest2Out) {

    return args -> {

        while (someCondition()) {

            if (!dest1In.poll(m -> {

                StaticMessageHeaderAccessor.getAcknowledgmentCallback(m).noAutoAck();

                // e.g. hand off to another thread which can perform the ack

                // or acknowledge(Status.REQUEUE)

            })) {

                Thread.sleep(1000);

            }

        }

    };

}

You must ack (or nack) the message at some point, to avoid resource leaks.
您必須在某個時候确認(或否定确認)消息,以避免資源洩漏。
Some messaging systems (such as Apache Kafka) maintain a simple offset in a log. If a delivery fails and is re-queued with StaticMessageHeaderAccessor.getAcknowledgmentCallback(m).acknowledge(Status.REQUEUE);, any later successfully ack’d messages are redelivered.
某些消息系統(例如Apache Kafka)在日志中維護一個簡單的偏移量。如果使用StaticMessageHeaderAccessor.getAcknowledgmentCallback(m).acknowledge(Status.REQUEUE);傳遞失敗并重新排隊,則會重新傳遞任何以後成功确認的消息。

There is also an overloaded poll method, for which the definition is as follows:

還有一個重載的poll方法,其定義如下:

poll(MessageHandler handler, ParameterizedTypeReference<?> type)

The type is a conversion hint that allows the incoming message payload to be converted, as shown in the following example:

type是一個轉換提示,允許轉換傳入消息負載,如以下示例所示:

boolean result = pollableSource.poll(received -> {

Map<String, Foo> payload = (Map<String, Foo>) received.getPayload();

            ...

}, new ParameterizedTypeReference<Map<String, Foo>>() {});

5.4. Error Handling   錯誤處理

Errors happen, and Spring Cloud Stream provides several flexible mechanisms to handle them. The error handling comes in two flavors:

  • application: The error handling is done within the application (custom error handler).
  • system: The error handling is delegated to the binder (re-queue, DL, and others). Note that the techniques are dependent on binder implementation and the capability of the underlying messaging middleware.

錯誤發生時,Spring Cloud Stream提供了幾種靈活的機制來處理它們。錯誤處理有兩種形式:

  • application:錯誤處理在應用程式中完成(自定義錯誤處理程式)。
  • system:将錯誤處理委托給綁定器(重新排隊,DL,等)。請注意,這些技術取決于綁定器實作和底層消息中間件的功能。

Spring Cloud Stream uses the Spring Retry library to facilitate successful message processing. See Retry Template for more details. However, when all fails, the exceptions thrown by the message handlers are propagated back to the binder. At that point, binder invokes custom error handler or communicates the error back to the messaging system (re-queue, DLQ, and others).

Spring Cloud Stream使用Spring Retry庫來促進消息處理成功。有關詳細資訊,請參閱Retry Template。但是,當全部失敗時,消息處理程式抛出的異常将傳播回綁定器。此時,綁定器調用自定義錯誤處理程式或将錯誤傳回消息系統(重新排隊,DLQ,等)。

Application Error Handling   應用程式錯誤處理

There are two types of application-level error handling. Errors can be handled at each binding subscription or a global handler can handle all the binding subscription errors. Let’s review the details.

有兩種類型的應用程式級錯誤處理。可以在每個綁定訂閱處處理錯誤,或者全局處理程式可以處理所有綁定訂閱錯誤。我們來看看細節。

Figure 5. A Spring Cloud Stream Sink Application with Custom and Global Error Handlers

For each input binding, Spring Cloud Stream creates a dedicated error channel with the following semantics <destinationName>.errors.

對于每個輸入綁定,Spring Cloud Stream使用以下語義<destinationName>.errors建立專用錯誤通道。

The <destinationName> consists of the name of the binding (such as input) and the name of the group (such as myGroup).
<destinationName>由綁定名稱(例如input)和組名(例如myGroup)組成。

Consider the following:

考慮以下:

@StreamListener(Sink.INPUT) // destination name 'input.myGroup'

public void handle(Person value) {

throw new RuntimeException("BOOM!");

}

@ServiceActivator(inputChannel = Processor.INPUT + ".myGroup.errors") //channel name 'input.myGroup.errors'

public void error(Message<?> message) {

System.out.println("Handling ERROR: " + message);

}

In the preceding example the destination name is input.myGroup and the dedicated error channel name is input.myGroup.errors.

在前面的示例中,目标名稱是input.myGroup,專用錯誤通道名稱是input.myGroup.errors。

The use of @StreamListener annotation is intended specifically to define bindings that bridge internal channels and external destinations. Given that the destination specific error channel does NOT have an associated external destination, such channel is a prerogative of Spring Integration (SI). This means that the handler for such destination must be defined using one of the SI handler annotations (i.e., @ServiceActivator, @Transformer etc.).
@StreamListener注釋的使用專門用于定義橋接内部通道和外部目标的綁定。鑒于目标特定的錯誤通道沒有關聯的外部目标,此類通道是Spring Integration(SI)的特權。這意味着必須使用SI處理程式注釋之一(即@ServiceActivator,@Transformer等)定義此類目标的處理程式。
If group is not specified, an anonymous group is used (something like input.anonymous.2K37rb06Q6m2r51-SPIDDQ), which is not suitable for error handling scenarios, since you don’t know what it is going to be until the destination is created.
如果未指定組則使用匿名組(類似于input.anonymous.2K37rb06Q6m2r51-SPIDDQ),這不适合錯誤處理,因為在建立目标之前您不知道它将是什麼。

Also, in the event you are binding to the existing destination such as:

此外,如果您綁定到現有目标,例如:

spring.cloud.stream.bindings.input.destination=myFooDestination

spring.cloud.stream.bindings.input.group=myGroup

the full destination name is myFooDestination.myGroup and then the dedicated error channel name is myFooDestination.myGroup.errors.

則完整的目标名稱是myFooDestination.myGroup,專用的錯誤通道名稱是myFooDestination.myGroup.errors。

Back to the example…​

回到例子......

The handle(..) method, which subscribes to the channel named input, throws an exception. Given there is also a subscriber to the error channel input.myGroup.errors all error messages are handled by this subscriber.

訂閱input通道的handle(..)方法會抛出異常。鑒于還存在input.myGroup.errors錯誤通道的訂閱者,是以所有錯誤消息都由該訂閱者處理。

If you have multiple bindings, you may want to have a single error handler. Spring Cloud Stream automatically provides support for a global error channel by bridging each individual error channel to the channel named errorChannel, allowing a single subscriber to handle all errors, as shown in the following example:

如果您有多個綁定,則可能需要單個錯誤處理程式。Spring Cloud Stream通過将每個獨立的錯誤通道橋接到命名為errorChannel的通道自動為全局錯誤通道提供支援,允許單個訂閱者處理所有錯誤,如以下示例所示:

@StreamListener("errorChannel")

public void error(Message<?> message) {

        System.out.println("Handling ERROR: " + message);

}

This may be a convenient option if error handling logic is the same regardless of which handler produced the error.

如果錯誤處理邏輯相同,無論哪個處理程式産生錯誤,這可能是一個友善的選項。

Also, error messages sent to the errorChannel can be published to the specific destination at the broker by configuring a binding named error for the outbound target. This option provides a mechanism to automatically send error messages to another application bound to that destination or for later retrieval (for example, audit). For example, to publish error messages to a broker destination named myErrors, set the following property:

此外,通過将命名為error的綁定配置為出站目标,可以将發送到errorChannel的錯誤消息釋出到代理的特定目标。此選項提供了一種機制,可以将錯誤消息自動發送到綁定到該目标的另一個應用程式,或供以後檢索(例如,審計)。例如,要将錯誤消息釋出到命名為myErrors的代理目标,請設定以下屬性:

spring.cloud.stream.bindings.error.destination=myErrors

The ability to bridge global error channel to a broker destination essentially provides a mechanism which connects the application-level error handling with the system-level error handling.
将全局錯誤通道橋接到代理目标的能力實質上提供了一種将應用程式級錯誤處理與系統級錯誤處理相連接配接的機制。

System Error Handling   系統錯誤處理

System-level error handling implies that the errors are communicated back to the messaging system and, given that not every messaging system is the same, the capabilities may differ from binder to binder.

系統級錯誤處理意味着将錯誤傳遞回消息系統,并且假設并非每個消息系統都相同,則功能可能因綁定器而異。

That said, in this section we explain the general idea behind system level error handling and use Rabbit binder as an example. NOTE: Kafka binder provides similar support, although some configuration properties do differ. Also, for more details and configuration options, see the individual binder’s documentation.

也就是說,在本節中,我們将解釋系統級錯誤處理背後的一般概念,并以Rabbit綁定器為例。注意:雖然某些配置屬性有所不同,但Kafka綁定器提供了類似的支援。另外,有關更多詳細資訊和配置選項,請參閱各個綁定器的文檔。

If no internal error handlers are configured, the errors propagate to the binders, and the binders subsequently propagate those errors back to the messaging system. Depending on the capabilities of the messaging system such a system may drop the message, re-queue the message for re-processing or send the failed message to DLQ. Both Rabbit and Kafka support these concepts. However, other binders may not, so refer to your individual binder’s documentation for details on supported system-level error-handling options.

如果未配置内部錯誤處理程式,則錯誤會傳播到綁定器,然後綁定器會将這些錯誤傳播回消息系統。根據消息系統的功能,這樣的系統可以丢棄消息,重新排隊消息以進行重新處理或将失敗的消息發送到DLQ。Rabbit和Kafka都支援這些概念。但是,其他綁定器可能不會,是以請參閱各個綁定器的文檔,以擷取有關受支援的系統級錯誤處理選項的詳細資訊。

Drop Failed Messages   丢棄失敗消息

By default, if no additional system-level configuration is provided, the messaging system drops the failed message. While acceptable in some cases, for most cases, it is not, and we need some recovery mechanism to avoid message loss.

預設情況下,如果未提供其他系統級配置,則消息系統将丢棄失敗的消息。雖然在某些情況下可接受,但在大多數情況下,它不是,我們需要一些恢複機制來避免消息丢失。

DLQ - Dead Letter Queue   死信隊列

DLQ allows failed messages to be sent to a special destination: the Dead Letter Queue.

DLQ允許将失敗的消息發送到特殊目的地: - 死信隊列。

When configured, failed messages are sent to this destination for subsequent re-processing or auditing and reconciliation.

配置後,失敗的消息将發送到此目标,以便後續重新處理或稽核和協調。

For example, continuing on the previous example and to set up the DLQ with Rabbit binder, you need to set the following property:

例如,繼續上一個示例并使用Rabbit綁定器設定DLQ,您需要設定以下屬性:

spring.cloud.stream.rabbit.bindings.input.consumer.auto-bind-dlq=true

Keep in mind that, in the above property, input corresponds to the name of the input destination binding. The consumer indicates that it is a consumer property and auto-bind-dlq instructs the binder to configure DLQ for input destination, which results in an additional Rabbit queue named input.myGroup.dlq.

請記住,在上面的屬性中,input對應于輸入目标綁定的名稱。consumer表示它是一個消費者屬性,并且auto-bind-dlq訓示綁定器為input目标配置DLQ,這會生成一個命名為input.myGroup.dlq的額外Rabbit隊列。

Once configured, all failed messages are routed to this queue with an error message similar to the following:

配置完成後,所有失敗的消息都将路由到此隊列,并顯示類似于以下内容的錯誤消息:

delivery_mode:  1
headers:
  x-death:
    count:        1
    reason:       rejected
    queue:        input.hello
    time:         1522328151
    exchange:
    routing-keys: input.myGroup
Payload {"name":"Bob"}

As you can see from the above, your original message is preserved for further actions.

從上面的内容可以看出,您的原始消息會被保留以供進一步操作。

However, one thing you may have noticed is that there is limited information on the original issue with the message processing. For example, you do not see a stack trace corresponding to the original error. To get more relevant information about the original error, you must set an additional property:

但是,您可能注意到的一件事是,有關消息處理的原始問題的資訊有限。例如,您沒有看到與原始錯誤對應的堆棧跟蹤。要擷取有關原始錯誤的更多相關資訊,您必須設定其他屬性:

spring.cloud.stream.rabbit.bindings.input.consumer.republish-to-dlq=true

Doing so forces the internal error handler to intercept the error message and add additional information to it before publishing it to DLQ. Once configured, you can see that the error message contains more information relevant to the original error, as follows:

這樣做會強制内部錯誤處理程式攔截錯誤消息,并在将其釋出到DLQ之前向其添加其他資訊。配置完成後,您可以看到錯誤消息包含與原始錯誤相關的更多資訊,如下所示:

delivery_mode:  2
headers:
  x-original-exchange:
  x-exception-message:    has an error
  x-original-routingKey:  input.myGroup
  x-exception-stacktrace: org.springframework.messaging.MessageHandlingException: nested exception is
      org.springframework.messaging.MessagingException: has an error, failedMessage=GenericMessage [payload=byte[15],
      headers={amqp_receivedDeliveryMode=NON_PERSISTENT, amqp_receivedRoutingKey=input.hello, amqp_deliveryTag=1,
      deliveryAttempt=3, amqp_consumerQueue=input.hello, amqp_redelivered=false, id=a15231e6-3f80-677b-5ad7-d4b1e61e486e,
      amqp_consumerTag=amq.ctag-skBFapilvtZhDsn0k3ZmQg, contentType=application/json, timestamp=1522327846136}]
      at org.spring...integ...han...MethodInvokingMessageProcessor.processMessage(MethodInvokingMessageProcessor.java:107)
      at. . . . .
Payload {"name":"Bob"}

This effectively combines application-level and system-level error handling to further assist with downstream troubleshooting mechanics.

這有效地結合了應用程式級和系統級錯誤處理,以進一步幫助下遊故障排除機制。
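For later re-processing or auditing, a separate consumer can read from the DLQ directly. The following is only a sketch that uses plain Spring AMQP rather than a binding (it assumes spring-rabbit is on the classpath and reuses the input.myGroup.dlq queue name from the example above):

@RabbitListener(queues = "input.myGroup.dlq")
public void reprocessFromDlq(org.springframework.amqp.core.Message failed) {
    // inspect headers such as x-exception-message or x-death and decide whether to
    // re-publish, repair, or simply archive the failed message
    System.out.println("Recovered from DLQ: " + failed);
}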

Re-queue Failed Messages   重新排隊失敗消息

As mentioned earlier, the currently supported binders (Rabbit and Kafka) rely on RetryTemplate to facilitate successful message processing. See Retry Template for details. However, for cases when max-attempts property is set to 1, internal reprocessing of the message is disabled. At this point, you can facilitate message re-processing (re-tries) by instructing the messaging system to re-queue the failed message. Once re-queued, the failed message is sent back to the original handler, essentially creating a retry loop.

如前所述,目前支援的綁定器(Rabbit和Kafka)依賴于RetryTemplate以促進消息的成功處理。有關詳細資訊,請參閱Retry Template。但是,對于max-attempts屬性設定為1的情況,将禁用消息的内部重新處理。此時,您可以通過訓示消息系統重新排隊失敗的消息來促進消息重新處理(重新嘗試)。重新排隊後,失敗的消息将被發送回原始處理程式,實質上是建立重試循環。

This option may be feasible for cases where the nature of the error is related to some sporadic yet short-term unavailability of some resource.

對于錯誤的性質與某些資源的某些零星但短期不可用相關的情況,此選項可能是可行的。

To accomplish that, you must set the following properties:

要實作此目的,您必須設定以下屬性:

spring.cloud.stream.bindings.input.consumer.max-attempts=1

spring.cloud.stream.rabbit.bindings.input.consumer.requeue-rejected=true

In the preceding example, max-attempts is set to 1, essentially disabling internal retries, and requeue-rejected (short for requeue rejected messages) is set to true. Once set, the failed message is resubmitted to the same handler and loops continuously, or until the handler throws AmqpRejectAndDontRequeueException, essentially allowing you to build your own retry logic within the handler itself.

在前面的示例中,将max-attempts設定為1基本上禁用内部重試和requeue-rejected(重新排隊拒絕消息的簡稱)被設定為true。一旦設定,失敗的消息将重新送出到同一個處理程式并繼續循環或直到處理程式抛出AmqpRejectAndDontRequeueException,基本上允許您在處理程式本身内建構自己的重試邏輯。
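The following sketch shows one way to implement such handler-side retry logic (the attempt counter, the threshold of three, and the process(..) method are illustrative assumptions):

// imports assumed: org.springframework.amqp.AmqpRejectAndDontRequeueException,
// java.util.concurrent.atomic.AtomicInteger
@EnableBinding(Sink.class)
public static class RequeueAwareHandler {

    // naive counter for illustration; assumes concurrency=1 and one failing message at a time
    private final AtomicInteger attempts = new AtomicInteger();

    @StreamListener(Sink.INPUT)
    public void handle(String payload) {
        int attempt = attempts.incrementAndGet();
        try {
            process(payload);   // hypothetical business logic that may throw
            attempts.set(0);    // success: reset the counter
        }
        catch (RuntimeException e) {
            if (attempt >= 3) { // illustrative threshold
                attempts.set(0);
                // stops the re-queue loop; the message is rejected (and goes to the DLQ, if configured)
                throw new AmqpRejectAndDontRequeueException("giving up after " + attempt + " attempts");
            }
            throw e;            // the broker re-queues the message and redelivers it to this handler
        }
    }

    private void process(String payload) {
        // placeholder for the actual processing that may fail
    }
}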

Retry Template   重試模闆

The RetryTemplate is part of the Spring Retry library. While it is out of scope of this document to cover all of the capabilities of the RetryTemplate, we will mention the following consumer properties that are specifically related to the RetryTemplate:

RetryTemplate是Spring Retry庫的一部分。雖然涵蓋RetryTemplate的所有功能超出了本文檔的範圍,但我們仍将提及以下與RetryTemplate特别相關的消費者屬性:

maxAttempts

The number of attempts to process the message.

處理消息的嘗試次數。

Default: 3.

backOffInitialInterval

The backoff initial interval on retry.

重試時的退避初始間隔。

Default 1000 milliseconds.

backOffMaxInterval

The maximum backoff interval.

最大退避間隔。

Default 10000 milliseconds.

backOffMultiplier

The backoff multiplier.

退避乘數。

Default 2.0.

While the preceding settings are sufficient for the majority of customization requirements, they may not satisfy certain complex requirements, at which point you may want to provide your own instance of the RetryTemplate. To do so, configure it as a bean in your application configuration. The application-provided instance will override the one provided by the framework. Also, to avoid conflicts, you must qualify the instance of the RetryTemplate you want to be used by the binder as @StreamRetryTemplate. For example,

雖然前面的設定足以滿足大多數自定義要求,但它們可能無法滿足某些複雜要求,您可能希望提供自己的RetryTemplate執行個體。為此,請将其配置為應用程式配置中的bean。應用程式提供的執行個體将覆寫架構提供的執行個體。另外,為了避免沖突,您必須将想要被綁定器使用的RetryTemplate執行個體限定為@StreamRetryTemplate。例如,

@StreamRetryTemplate

public RetryTemplate myRetryTemplate() {

    return new RetryTemplate();

}

As you can see from the above example you don’t need to annotate it with @Bean since @StreamRetryTemplate is a qualified @Bean.

從上面的例子可以看出,你不需要使用@Bean注釋它,因為@StreamRetryTemplate是合格的@Bean。
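For instance, a customized instance that changes both the retry policy and the backoff policy (the values shown are purely illustrative) might look like the following:

// imports assumed: org.springframework.retry.support.RetryTemplate,
// org.springframework.retry.policy.SimpleRetryPolicy,
// org.springframework.retry.backoff.ExponentialBackOffPolicy
@StreamRetryTemplate
public RetryTemplate myRetryTemplate() {
    RetryTemplate template = new RetryTemplate();
    template.setRetryPolicy(new SimpleRetryPolicy(5)); // 5 attempts instead of the default 3

    ExponentialBackOffPolicy backOff = new ExponentialBackOffPolicy();
    backOff.setInitialInterval(2000);  // 2 seconds
    backOff.setMaxInterval(20000);     // 20 seconds
    backOff.setMultiplier(3.0);
    template.setBackOffPolicy(backOff);

    return template;
}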

5.5. Reactive Programming Support   反應式程式設計支援

Spring Cloud Stream also supports the use of reactive APIs where incoming and outgoing data is handled as continuous data flows. Support for reactive APIs is available through spring-cloud-stream-reactive, which needs to be added explicitly to your project.

Spring Cloud Stream還支援使用響應式API,其中傳入和傳出資料作為連續資料流進行處理。可以通過spring-cloud-stream-reactive支援反應式API,需要将其明确添加到您的項目中。
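For example, assuming Maven is used (with the version managed by the Spring Cloud BOM), the dependency looks like the following:

<dependency>
  <groupId>org.springframework.cloud</groupId>
  <artifactId>spring-cloud-stream-reactive</artifactId>
</dependency>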

The programming model with reactive APIs is declarative. Instead of specifying how each individual message should be handled, you can use operators that describe functional transformations from inbound to outbound data flows.

具有反應式API的程式設計模型是聲明性的。您可以使用描述從入站資料流到出站資料流的功能轉換的運算符,而不是指定應如何處理每條消息。

At present, Spring Cloud Stream supports only the Reactor API. In the future, we intend to support a more generic model based on Reactive Streams.

目前Spring Cloud Stream僅支援Reactor API。将來,我們打算支援基于Reactive Streams的更通用的模型。

The reactive programming model also uses the @StreamListener annotation for setting up reactive handlers. The differences are that:

  • The @StreamListener annotation must not specify an input or output, as they are provided as arguments and return values from the method.
  • The arguments of the method must be annotated with @Input and @Output, indicating which input or output the incoming and outgoing data flows connect to, respectively.
  • The return value of the method, if any, is annotated with @Output, indicating the output where data should be sent.

反應式程式設計模型還使用@StreamListener注釋來設定反應式處理程式。不同之處在于:

  • @StreamListener注釋不能指定輸入或輸出,因為它們被提供為該方法的參數和傳回值。
  • 方法參數必須用@Input和@Output注釋,分别訓示傳入和傳出資料流連接配接到哪個輸入或輸出。
  • 方法傳回值(如果有)用@Output注釋,表示應該發送資料的輸出。
Reactive programming support requires Java 1.8.
反應式程式設計支援需要Java 1.8。
As of Spring Cloud Stream 1.1.1 and later (starting with release train Brooklyn.SR2), reactive programming support requires the use of Reactor 3.0.4.RELEASE and higher. Earlier Reactor versions (including 3.0.1.RELEASE, 3.0.2.RELEASE and 3.0.3.RELEASE) are not supported. spring-cloud-stream-reactive transitively retrieves the proper version, but it is possible for the project structure to manage the version of the io.projectreactor:reactor-core to an earlier release, especially when using Maven. This is the case for projects generated by using Spring Initializr with Spring Boot 1.x, which overrides the Reactor version to 2.0.8.RELEASE. In such cases, you must ensure that the proper version of the artifact is released. You can do so by adding a direct dependency on io.projectreactor:reactor-core with a version of 3.0.4.RELEASE or later to your project.
從Spring Cloud Stream 1.1.1及更高版本開始(從版本系列Brooklyn.SR2開始),反應式程式設計支援需要使用Reactor 3.0.4.RELEASE和更高版本。不支援早期的Reactor版本(包括3.0.1.RELEASE,3.0.2.RELEASE和3.0.3.RELEASE)。spring-cloud-stream-reactive傳遞性地檢索正确的版本,但項目結構可以管理io.projectreactor:reactor-core早期版本的版本,尤其是在使用Maven時。對于使用Spring Initializr和Spring Boot 1.x生成的項目就是這種情況,它将Reactor版本覆寫到2.0.8.RELEASE。在這種情況下,您必須確定釋放正确版本的工件。您可以通過向項目io.projectreactor:reactor-core的版本3.0.4.RELEASE或更高版本添加直接依賴項來實作此目的。
The use of term, “reactive”, currently refers to the reactive APIs being used and not to the execution model being reactive (that is, the bound endpoints still use a 'push' rather than a 'pull' model). While some backpressure support is provided by the use of Reactor, we do intend, in a future release, to support entirely reactive pipelines by the use of native reactive clients for the connected middleware.
術語“反應式”的使用目前指的是所使用的反應式API而不是被動反應的執行模型(即,綁定端點仍然使用“推”而不是'拉'模型)。雖然使用Reactor提供了一些背壓支援,但我們打算在未來的版本中通過使用連接配接中間件的原生反應式用戶端來支援完全的反應式管道。

Reactor-based Handlers   基于反應堆的處理程式

A Reactor-based handler can have the following argument types:

  • For arguments annotated with @Input, it supports the Reactor Flux type. The parameterization of the inbound Flux follows the same rules as in the case of individual message handling: It can be the entire Message, a POJO that can be the Message payload, or a POJO that is the result of a transformation based on the Message content-type header. Multiple inputs are provided.
  • For arguments annotated with @Output, it supports the FluxSender type, which connects a Flux produced by the method with an output. Generally speaking, specifying outputs as arguments is only recommended when the method can have multiple outputs.

基于Reactor的處理程式可以具有以下參數類型:

  • 對于帶@Input注釋的參數,它支援Reactor Flux類型。入站Flux的參數化遵循與單個消息處理相同的規則:它可以是整個Message,可以是Message負載的POJO,或者是基于Message内容類型頭的轉換結果的POJO 。提供多個輸入。
  • 對于帶@Output注釋的參數,它支援FluxSender類型,該類型将方法生成的Flux與輸出連接配接。一般而言,僅當方法可以具有多個輸出時,才建議将輸出指定為參數。

A Reactor-based handler supports a return type of Flux. In that case, it must be annotated with @Output. We recommend using the return value of the method when a single output Flux is available.

基于Reactor的處理程式支援Flux傳回類型。在這種情況下,它必須使用@Output注釋。我們建議在單個輸出Flux可用時使用方法的傳回值。

The following example shows a Reactor-based Processor:

以下示例顯示了基于Reactor的Processor:

@EnableBinding(Processor.class)

@EnableAutoConfiguration

public static class UppercaseTransformer {

@StreamListener

  @Output(Processor.OUTPUT)

  public Flux<String> receive(@Input(Processor.INPUT) Flux<String> input) {

    return input.map(s -> s.toUpperCase());

  }

}

The same processor using output arguments looks like the following example:

使用輸出參數的同一處理器類似于以下示例:

@EnableBinding(Processor.class)

@EnableAutoConfiguration

public static class UppercaseTransformer {

@StreamListener

  public void receive(@Input(Processor.INPUT) Flux<String> input,

     @Output(Processor.OUTPUT) FluxSender output) {

     output.send(input.map(s -> s.toUpperCase()));

  }

}

Reactive Sources   反應源

Spring Cloud Stream reactive support also provides the ability for creating reactive sources through the @StreamEmitter annotation. By using the @StreamEmitter annotation, a regular source may be converted to a reactive one. @StreamEmitter is a method level annotation that marks a method to be an emitter to outputs declared with @EnableBinding. You cannot use the @Input annotation along with @StreamEmitter, as the methods marked with this annotation are not listening for any input. Rather, methods marked with @StreamEmitter generate output. Following the same programming model used in @StreamListener, @StreamEmitter also allows flexible ways of using the @Output annotation, depending on whether the method has any arguments, a return type, and other considerations.

Spring Cloud Stream反應式支援還通過@StreamEmitter注釋提供了建立反應源的功能。通過使用@StreamEmitter注釋,可以将正常源轉換為反應源。@StreamEmitter是一個方法級别的注釋,用于将方法标記為到使用@EnableBinding聲明的輸出的發射器。您不能同時使用@Input注釋和@StreamEmitter注釋,因為使用此注釋标記的方法不會偵聽任何輸入。相反,标記為@StreamEmitter的方法生成輸出。遵循@StreamListener,@StreamEmitter中使用的相同的程式設計模型還允許靈活的方式使用@Output注釋,具體取決于方法是否具有任何參數,傳回類型,和其他注意事項。

The remainder of this section contains examples of using the @StreamEmitter annotation in various styles.

本節的其餘部分包含各種樣式的使用@StreamEmitter注釋的示例。

The following example emits the Hello, World message every millisecond and publishes to a Reactor Flux:

以下示例每毫秒發出一次Hello, World消息并釋出到Reactor Flux:

@EnableBinding(Source.class)

@EnableAutoConfiguration

public static class HelloWorldEmitter {

@StreamEmitter

  @Output(Source.OUTPUT)

  public Flux<String> emit() {

    return Flux.intervalMillis(1)

            .map(l -> "Hello World");

  }

}

In the preceding example, the resulting messages in the Flux are sent to the output channel of the Source.

在前面的示例中,将Flux中的結果消息發送到Source的輸出通道。

The next example is another flavor of an @StreamEmitter that sends a Reactor Flux. Instead of returning a Flux, the following method uses a FluxSender to programmatically send a Flux from a source:

下一個例子是發送Reactor Flux的@StreamEmmitter的另一種例子。以下方法使用FluxSender以程式設計方式發送來自源的Flux,而不是傳回Flux:

@EnableBinding(Source.class)

@EnableAutoConfiguration

public static class HelloWorldEmitter {

  @StreamEmitter

  @Output(Source.OUTPUT)

  public void emit(FluxSender output) {

    output.send(Flux.intervalMillis(1)

            .map(l -> "Hello World"));

  }

}

The next example is exactly same as the above snippet in functionality and style. However, instead of using an explicit @Output annotation on the method, it uses the annotation on the method parameter.

下一個示例在功能和樣式上與上述代碼段完全相同。但是,它不使用方法上的顯式@Output注釋,而是使用方法參數上的注釋。

@EnableBinding(Source.class)

@EnableAutoConfiguration

public static class HelloWorldEmitter {

  @StreamEmitter

  public void emit(@Output(Source.OUTPUT) FluxSender output) {

    output.send(Flux.intervalMillis(1)

            .map(l -> "Hello World"));

  }

}

The last example in this section is yet another flavor of writing reactive sources by using the Reactive Streams Publisher API and taking advantage of the support for it in the Spring Integration Java DSL. The Publisher in the following example still uses Reactor Flux under the hood, but, from an application perspective, that is transparent to the user and only needs Reactive Streams and the Java DSL for Spring Integration:

本節的最後一個示例是另一種使用Reactive Streams Publisher API編寫反應源的方法,并利用Spring Integration Java DSL中對它的支援。下面的例子中的Publisher仍然在引擎蓋下使用Reactor Flux,但是,從應用的角度來看,這是對使用者透明的,隻需要Reactive 流和Spring Integration的Java DSL:

@EnableBinding(Source.class)

@EnableAutoConfiguration

public static class HelloWorldEmitter {

  @StreamEmitter

  @Output(Source.OUTPUT)

  @Bean

  public Publisher<Message<String>> emit() {

    return IntegrationFlows.from(() ->

                new GenericMessage<>("Hello World"),

        e -> e.poller(p -> p.fixedDelay(1)))

        .toReactivePublisher();

  }

}

6. Binders   綁定器

Spring Cloud Stream provides a Binder abstraction for use in connecting to physical destinations at the external middleware. This section provides information about the main concepts behind the Binder SPI, its main components, and implementation-specific details.

Spring Cloud Stream提供了一個Binder抽象,用于連接配接外部中間件的實體目标。本節提供有關Binder SPI背後的主要概念,其主要元件,以及特定于實作的細節的資訊。

6.1. Producers and Consumers   生産者和消費者

The following image shows the general relationship of producers and consumers:

下圖顯示了生産者和消費者的一般關系:

Figure 6. Producers and Consumers

A producer is any component that sends messages to a channel. The channel can be bound to an external message broker with a Binder implementation for that broker. When invoking the bindProducer() method, the first parameter is the name of the destination within the broker, the second parameter is the local channel instance to which the producer sends messages, and the third parameter contains properties (such as a partition key expression) to be used within the adapter that is created for that channel.

生産者是向通道發送消息的任何元件。可以将通道綁定到具有該代理的Binder實作的外部消息代理。調用bindProducer()方法時,第一個參數是代理中目标的名稱,第二個參數是生産者向其發送消息的本地通道執行個體,第三個參數包含要在為該通道建立的擴充卡中使用的屬性(如分區鍵表達式)。

A consumer is any component that receives messages from a channel. As with a producer, the consumer’s channel can be bound to an external message broker. When invoking the bindConsumer() method, the first parameter is the destination name, and a second parameter provides the name of a logical group of consumers. Each group that is represented by consumer bindings for a given destination receives a copy of each message that a producer sends to that destination (that is, it follows normal publish-subscribe semantics). If there are multiple consumer instances bound with the same group name, then messages are load-balanced across those consumer instances so that each message sent by a producer is consumed by only a single consumer instance within each group (that is, it follows normal queueing semantics).

消費者是從通道接收消息的任何元件。與生産者一樣,消費者的通道可以綁定到外部消息代理。調用bindConsumer()方法時,第一個參數是目标名稱,第二個參數提供邏輯消費者組的名稱。由給定目标的消費者綁定表示的每個組接收生産者發送到該目标的每個消息的副本(即,它遵循正常的釋出 - 訂閱語義)。如果有多個使用相同組名綁定的消費者執行個體,則會在這些消費者執行個體之間對消息進行負載平衡,以便生産者發送的每條消息僅由每個組中的單個消費者執行個體消費(即,它遵循正常的隊列語義)。

6.2. Binder SPI   綁定器SPI(串行外圍接口)

The Binder SPI consists of a number of interfaces, out-of-the box utility classes, and discovery strategies that provide a pluggable mechanism for connecting to external middleware.

綁定器SPI由許多接口,開箱即用的實用程式類,和發現政策組成,這些政策提供了可連接配接到外部中間件的可插拔機制。

The key point of the SPI is the Binder interface, which is a strategy for connecting inputs and outputs to external middleware. The following listing shows the definition of the Binder interface:

SPI的關鍵點是Binder接口,這是一種将輸入和輸出連接配接到外部中間件的政策。以下清單顯示了Binder接口的定義:

public interface Binder<T, C extends ConsumerProperties, P extends ProducerProperties> {

    Binding<T> bindConsumer(String name, String group, T inboundBindTarget, C consumerProperties);

    Binding<T> bindProducer(String name, T outboundBindTarget, P producerProperties);

}

The interface is parameterized, offering a number of extension points:

  • Input and output bind targets. As of version 1.0, only MessageChannel is supported, but this is intended to be used as an extension point in the future.
  • Extended consumer and producer properties, allowing specific Binder implementations to add supplemental properties that can be supported in a type-safe manner.

A typical binder implementation consists of the following:

  • A class that implements the Binder interface;
  • A Spring @Configuration class that creates a bean of type Binder along with the middleware connection infrastructure.
  • A META-INF/spring.binders file found on the classpath containing one or more binder definitions, as shown in the following example:

    kafka:\

    org.springframework.cloud.stream.binder.kafka.config.KafkaBinderConfiguration

接口已參數化,提供了許多擴充點:

  • 輸入和輸出綁定目标。從版本1.0開始,僅支援MessageChannel,但這将在未來用作擴充點。
  • 擴充的消費者和生産者屬性,允許特定的Binder實作添加可以以類型安全的方式支援的補充屬性。

典型的綁定器實作包括以下内容:

  • 一個實作Binder接口的類;
  • 一個Spring @Configuration類,它建立一個與中間件連接配接基礎結構一起的Binder類型的bean。
  • 在包含一個或多個綁定器定義的類路徑中找到的META-INF/spring.binders檔案,如以下示例所示:

kafka:\

org.springframework.cloud.stream.binder.kafka.config.KafkaBinderConfiguration
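To make the structure concrete, the following is only a sketch of such a @Configuration class for a hypothetical binder registered in META-INF/spring.binders as mybinder:com.example.MyBinderConfiguration (MyMessagingBinder stands for an assumed class that implements the Binder interface):

@Configuration
public class MyBinderConfiguration {

    // exposes the Binder implementation together with whatever middleware
    // connection infrastructure (connection factories and so on) it requires
    @Bean
    public MyMessagingBinder myMessagingBinder() {
        return new MyMessagingBinder();
    }
}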

6.3. Binder Detection   綁定器檢測

Spring Cloud Stream relies on implementations of the Binder SPI to perform the task of connecting channels to message brokers. Each Binder implementation typically connects to one type of messaging system.

Spring Cloud Stream依賴于Binder SPI的實作來執行将通道連接配接到消息代理的任務。每個Binder實作通常連接配接到一種類型的消息系統。

6.3.1. Classpath Detection   類路徑檢測

By default, Spring Cloud Stream relies on Spring Boot’s auto-configuration to configure the binding process. If a single Binder implementation is found on the classpath, Spring Cloud Stream automatically uses it. For example, a Spring Cloud Stream project that aims to bind only to RabbitMQ can add the following dependency:

預設情況下,Spring Cloud Stream依靠Spring Boot的自動配置來配置綁定過程。如果在類路徑上找到單個Binder實作,則Spring Cloud Stream會自動使用它。例如,旨在僅綁定到RabbitMQ的Spring Cloud Stream項目可以添加以下依賴項:

<dependency>

  <groupId>org.springframework.cloud</groupId>

  <artifactId>spring-cloud-stream-binder-rabbit</artifactId>

</dependency>

For the specific Maven coordinates of other binder dependencies, see the documentation of that binder implementation.

有關其他綁定器依賴項的特定Maven坐标,請參閱該綁定器實作的文檔。

6.4. Multiple Binders on the Classpath   類路徑上的多個綁定器

When multiple binders are present on the classpath, the application must indicate which binder is to be used for each channel binding. Each binder configuration contains a META-INF/spring.binders file, which is a simple properties file, as shown in the following example:

當類路徑上存在多個綁定器時,應用程式必須訓示每個通道綁定使用哪個綁定器。每個綁定器配置都包含一個META-INF/spring.binders檔案,該檔案是一個簡單的屬性檔案,如以下示例所示:

rabbit:\

org.springframework.cloud.stream.binder.rabbit.config.RabbitServiceAutoConfiguration

Similar files exist for the other provided binder implementations (such as Kafka), and custom binder implementations are expected to provide them as well. The key represents an identifying name for the binder implementation, whereas the value is a comma-separated list of configuration classes that each contain one and only one bean definition of type org.springframework.cloud.stream.binder.Binder.

其他提供的綁定器實作(例如Kafka)存在類似的檔案,并且預期自定義綁定器實作也提供它們。鍵表示綁定器實作的辨別名稱,而值是以逗号分隔的配置類清單,每個配置類包含一個且僅包含一個org.springframework.cloud.stream.binder.Binder類型的bean定義。

Binder selection can either be performed globally, using the spring.cloud.stream.defaultBinder property (for example, spring.cloud.stream.defaultBinder=rabbit) or individually, by configuring the binder on each channel binding. For instance, a processor application (that has channels named input and output for read and write respectively) that reads from Kafka and writes to RabbitMQ can specify the following configuration:

綁定器選擇可以全局執行,使用spring.cloud.stream.defaultBinder屬性(例如spring.cloud.stream.defaultBinder=rabbit),或者單獨執行,通過在每個通道綁定上配置綁定器。例如,從Kafka讀取并寫入RabbitMQ的處理器應用程式(具有已命名input和output分别用于讀取和寫入的通道)可指定以下配置:

spring.cloud.stream.bindings.input.binder=kafka

spring.cloud.stream.bindings.output.binder=rabbit

6.5. Connecting to Multiple Systems   連接配接多個系統

By default, binders share the application’s Spring Boot auto-configuration, so that one instance of each binder found on the classpath is created. If your application should connect to more than one broker of the same type, you can specify multiple binder configurations, each with different environment settings.

預設情況下,綁定器共享應用程式的Spring Boot自動配置,以便建立在類路徑中找到的每個綁定器的一個執行個體。如果您的應用程式應連接配接到多個相同類型的代理,則可以指定多個綁定器配置,每個配置具有不同的環境設定。

Turning on explicit binder configuration disables the default binder configuration process altogether. If you do so, all binders in use must be included in the configuration. Frameworks that intend to use Spring Cloud Stream transparently may create binder configurations that can be referenced by name, but they do not affect the default binder configuration. In order to do so, a binder configuration may have its defaultCandidate flag set to false (for example, spring.cloud.stream.binders.<configurationName>.defaultCandidate=false). This denotes a configuration that exists independently of the default binder configuration process.
啟用顯式綁定器配置會完全禁用預設綁定器配置過程。如果這樣做,則所有正在使用的綁定器必須包含在配置中。打算透明地使用Spring Cloud Stream的架構可以建立通過名稱引用的綁定器配置,但它們不會影響預設的綁定器配置。為此,綁定器配置可以将其defaultCandidate标志設定為false(例如,spring.cloud.stream.binders.<configurationName>.defaultCandidate=false)。這表示獨立于預設綁定器配置過程而存在的配置。

The following example shows a typical configuration for a processor application that connects to two RabbitMQ broker instances:

以下示例顯示連接配接到兩個RabbitMQ代理執行個體的處理器應用程式的典型配置:

spring:

  cloud:

    stream:

      bindings:

        input:

          destination: thing1

          binder: rabbit1

        output:

          destination: thing2

          binder: rabbit2

      binders:

        rabbit1:

          type: rabbit

          environment:

            spring:

              rabbitmq:

                host: <host1>

        rabbit2:

          type: rabbit

          environment:

            spring:

              rabbitmq:

                host: <host2>

6.6. Binding visualization and control   綁定可視化和控制

Since version 2.0, Spring Cloud Stream supports visualization and control of the Bindings through Actuator endpoints.

從2.0版開始,Spring Cloud Stream通過執行器端點支援綁定的可視化和控制。

Starting with version 2.0, actuator and web are optional, so you must first add one of the web dependencies as well as add the actuator dependency manually. The following example shows how to add the dependency for the Web framework:

從版本2.0開始,執行器和Web是可選的,您必須首先添加一個Web依賴項,并手動添加執行器依賴項。以下示例顯示如何添加Web架構的依賴項:

<dependency>

     <groupId>org.springframework.boot</groupId>

     <artifactId>spring-boot-starter-web</artifactId>

</dependency>

The following example shows how to add the dependency for the WebFlux framework:

以下示例顯示如何為WebFlux架構添加依賴項:

<dependency>

       <groupId>org.springframework.boot</groupId>

       <artifactId>spring-boot-starter-webflux</artifactId>

</dependency>

You can add the Actuator dependency as follows:

您可以按如下方式添加執行器依賴關系:

<dependency>

    <groupId>org.springframework.boot</groupId>

    <artifactId>spring-boot-starter-actuator</artifactId>

</dependency>

To run Spring Cloud Stream 2.0 apps in Cloud Foundry, you must add spring-boot-starter-web and spring-boot-starter-actuator to the classpath. Otherwise, the application will not start due to health check failures.
要在Cloud Foundry中運作Spring Cloud Stream 2.0的應用程式,您必須添加spring-boot-starter-web和spring-boot-starter-actuator到classpath中。否則,由于運作狀況檢查失敗,應用程式將無法啟動。

You must also enable the bindings actuator endpoints by setting the following property: --management.endpoints.web.exposure.include=bindings.

您還必須通過設定以下屬性來啟用綁定執行器端點:--management.endpoints.web.exposure.include=bindings。

Once those prerequisites are satisfied, you should see the following in the logs when the application starts:

一旦滿足這些先決條件。應用程式啟動時,您應該在日志中看到以下内容:

: Mapped "{[/actuator/bindings/{name}],methods=[POST]. . .

: Mapped "{[/actuator/bindings],methods=[GET]. . .

: Mapped "{[/actuator/bindings/{name}],methods=[GET]. . .

To visualize the current bindings, access the following URL: <host>:<port>/actuator/bindings

要顯示目前綁定,請通路以下URL:<host>:<port>/actuator/bindings

Alternatively, to see a single binding, access one of the URLs similar to the following: <host>:<port>/actuator/bindings/myBindingName

或者,要檢視單個綁定,請通路與以下内容類似的其中一個URL:<host>:<port>/actuator/bindings/myBindingName

You can also stop, start, pause, and resume individual bindings by posting to the same URL while providing a state argument as JSON, as shown in the following examples:

您還可以通過釋出POST請求到同一URL來停止,啟動,暫停,和恢複單個綁定,同時提供state參數作為JSON,如以下示例所示:

curl -d '{"state":"STOPPED"}' -H "Content-Type: application/json" -X POST <host>:<port>/actuator/bindings/myBindingName curl -d '{"state":"STARTED"}' -H "Content-Type: application/json" -X POST <host>:<port>/actuator/bindings/myBindingName curl -d '{"state":"PAUSED"}' -H "Content-Type: application/json" -X POST <host>:<port>/actuator/bindings/myBindingName curl -d '{"state":"RESUMED"}' -H "Content-Type: application/json" -X POST <host>:<port>/actuator/bindings/myBindingName

PAUSED and RESUMED work only when the corresponding binder and its underlying technology supports it. Otherwise, you see the warning message in the logs. Currently, only Kafka binder supports the PAUSED and RESUMED states.
PAUSED和RESUMED隻有在相應的綁定器及其底層技術支援時才能工作。否則,您會在日志中看到警告消息。目前,隻有Kafka綁定器支援PAUSED和RESUMED狀态。

6.7. Binder Configuration Properties   綁定器配置屬性

The following properties are available when customizing binder configurations. These properties are exposed via org.springframework.cloud.stream.config.BinderProperties.

自定義綁定器配置時,可以使用以下屬性。這些屬性通過org.springframework.cloud.stream.config.BinderProperties暴露。

They must be prefixed with spring.cloud.stream.binders.<configurationName>.

它們必須以spring.cloud.stream.binders.<configurationName>為字首。

type

The binder type. It typically references one of the binders found on the classpath — in particular, a key in a META-INF/spring.binders file.

綁定器類型。它通常引用類路徑中找到的一個綁定器 - 特别是META-INF/spring.binders檔案中的一個鍵。

By default, it has the same value as the configuration name.

預設情況下,它具有與配置名稱相同的值。

inheritEnvironment

Whether the configuration inherits the environment of the application itself.

配置是否繼承應用程式本身的環境。

Default: true.

environment

Root for a set of properties that can be used to customize the environment of the binder. When this property is set, the context in which the binder is being created is not a child of the application context. This setting allows for complete separation between the binder components and the application components.

一組屬性的根,可用于自定義綁定器的環境。設定此屬性後,建立綁定器的上下文不是應用程式上下文的子項。此設定允許綁定器元件和應用元件之間的完全分離。

Default: empty.

defaultCandidate

Whether the binder configuration is a candidate for being considered a default binder or can be used only when explicitly referenced. This setting allows adding binder configurations without interfering with the default processing.

綁定器配置是否可以被視為預設綁定器,或者隻能在顯式引用時使用。此設定允許添加綁定器配置,而不會幹擾預設處理。

Default: true.

7. Configuration Options   配置選項

Spring Cloud Stream supports general configuration options as well as configuration for bindings and binders. Some binders let additional binding properties support middleware-specific features.

Spring Cloud Stream支援正常配置選項以及綁定和綁定器的配置。某些綁定器允許其他綁定屬性支援特定于中間件的功能。

Configuration options can be provided to Spring Cloud Stream applications through any mechanism supported by Spring Boot. This includes application arguments, environment variables, and YAML or .properties files.

可以通過Spring Boot支援的任何機制向Spring Cloud Stream應用程式提供配置選項。這包括應用程式參數,環境變量,以及YAML或.properties檔案。

7.1. Binding Service Properties   綁定服務屬性

These properties are exposed via org.springframework.cloud.stream.config.BindingServiceProperties

這些屬性通過org.springframework.cloud.stream.config.BindingServiceProperties暴露。

spring.cloud.stream.instanceCount

The number of deployed instances of an application. Must be set for partitioning on the producer side. Must be set on the consumer side when using RabbitMQ and with Kafka if autoRebalanceEnabled=false.

應用程式的已部署執行個體數。必須在生産者端設定以進行分區。使用RabbitMQ和Kafka(如果autoRebalanceEnabled=false)時必須在消費者端設定autoRebalanceEnabled=false。

Default: 1.

spring.cloud.stream.instanceIndex

The instance index of the application: A number from 0 to instanceCount - 1. Used for partitioning with RabbitMQ and with Kafka if autoRebalanceEnabled=false. Automatically set in Cloud Foundry to match the application’s instance index.

應用程式的執行個體索引:從0到instanceCount - 1的數字。用于RabbitMQ和Kafka(如果autoRebalanceEnabled=false)的分區。在Cloud Foundry中自動設定以比對應用程式的執行個體索引。

spring.cloud.stream.dynamicDestinations

A list of destinations that can be bound dynamically (for example, in a dynamic routing scenario). If set, only listed destinations can be bound.

可動态綁定的目标清單(例如,在動态路由方案中)。如果設定,則隻能綁定列出的目标。

Default: empty (letting any destination be bound).

預設值:空(允許綁定任何目标)。

spring.cloud.stream.defaultBinder

The default binder to use, if multiple binders are configured. See Multiple Binders on the Classpath.

如果配置了多個綁定器,則使用預設綁定器。請參閱類路徑上的多個綁定器。

Default: empty.

spring.cloud.stream.overrideCloudConnectors

This property is only applicable when the cloud profile is active and Spring Cloud Connectors are provided with the application. If the property is false (the default), the binder detects a suitable bound service (for example, a RabbitMQ service bound in Cloud Foundry for the RabbitMQ binder) and uses it for creating connections (usually through Spring Cloud Connectors). When set to true, this property instructs binders to completely ignore the bound services and rely on Spring Boot properties (for example, relying on the spring.rabbitmq.* properties provided in the environment for the RabbitMQ binder). The typical usage of this property is to be nested in a customized environment when connecting to multiple systems.

此屬性僅在cloud配置檔案處于活動狀态且Spring Cloud Connectors随應用程式提供時才适用。如果屬性是false(預設值),則綁定器會檢測到合适的綁定服務(例如,綁定在雲計算中的RabbitMQ綁定器的RabbitMQ服務)并使用它來建立連接配接(通常通過Spring Cloud Connectors)。設定true為時,此屬性訓示綁定器完全忽略綁定服務并依賴Spring Boot屬性(例如,依賴于RabbitMQ綁定器環境中提供的spring.rabbitmq.*屬性)。在連接配接到多個系統時,此屬性的典型用法是嵌套在自定義環境中。

Default: false.

spring.cloud.stream.bindingRetryInterval

The interval (in seconds) between retrying binding creation when, for example, the binder does not support late binding and the broker (for example, Apache Kafka) is down. Set it to zero to treat such conditions as fatal, preventing the application from starting.

例如,當綁定器不支援後期綁定和代理(例如,Apache Kafka)時,重試綁定建立之間的間隔(以秒為機關)已關閉。将其設定為零以将此類條件視為緻命的,進而阻止應用程式啟動。

Default: 30

7.2. Binding Properties   綁定屬性

Binding properties are supplied by using the format of spring.cloud.stream.bindings.<channelName>.<property>=<value>. The <channelName> represents the name of the channel being configured (for example, output for a Source).

綁定屬性使用spring.cloud.stream.bindings.<channelName>.<property>=<value>的格式提供。<channelName>表示被配置的通道名稱(例如,output為Source)。

To avoid repetition, Spring Cloud Stream supports setting values for all channels, in the format of spring.cloud.stream.default.<property>=<value>.

為避免重複,Spring Cloud Stream支援設定所有通道的值,格式為spring.cloud.stream.default.<property>=<value>。

In what follows, we indicate where we have omitted the spring.cloud.stream.bindings.<channelName>. prefix and focus just on the property name, with the understanding that the prefix is included at runtime.

在下文中,我們指出我們在哪裡省略了spring.cloud.stream.bindings.<channelName>.字首并僅關注屬性名稱,并了解運作時會包含該字首。

7.2.1. Common Binding Properties   通用綁定屬性

These properties are exposed via org.springframework.cloud.stream.config.BindingProperties

這些屬性通過org.springframework.cloud.stream.config.BindingProperties暴露。

The following binding properties are available for both input and output bindings and must be prefixed with spring.cloud.stream.bindings.<channelName>. (for example, spring.cloud.stream.bindings.input.destination=ticktock).

以下綁定屬性可用于輸入和輸出綁定,并且必須以spring.cloud.stream.bindings.<channelName>.(例如spring.cloud.stream.bindings.input.destination=ticktock)為字首。

Default values can be set by using the spring.cloud.stream.default prefix (for example, spring.cloud.stream.default.contentType=application/json).

可以使用spring.cloud.stream.default字首設定預設值(例如`spring.cloud.stream.default.contentType=application/json`)。

destination

The target destination of a channel on the bound middleware (for example, the RabbitMQ exchange or Kafka topic). If the channel is bound as a consumer, it could be bound to multiple destinations, and the destination names can be specified as comma-separated String values. If not set, the channel name is used instead. The default value of this property cannot be overridden.

綁定中間件上通道的目标(例如,RabbitMQ交換或Kafka主題)。如果通道綁定為消費者,則可以綁定到多個目标,并且可以将目标名稱指定為逗号分隔String值。如果未設定,則使用通道名稱。無法覆寫此屬性的預設值。

group

The consumer group of the channel. Applies only to inbound bindings. See Consumer Groups.

通道的消費者組。僅适用于入站綁定。見消費者組。

Default: null (indicating an anonymous consumer).

預設值: null(表示匿名消費者)。

contentType

The content type of the channel. See “Content Type Negotiation”.

通道的内容類型。請參閱“ 内容類型協商 ”。

Default: null (no type coercion is performed).

預設值: null(不執行類型強制)。

binder

The binder used by this binding. See “Multiple Binders on the Classpath” for details.

此綁定使用的綁定器。有關詳細資訊,請參閱“ 類路徑上的多個綁定器 ”。

Default: null (the default binder is used, if it exists).

預設值: null(如果存在,使用預設綁定器)。

7.2.2. Consumer Properties   消費者屬性

These properties are exposed via org.springframework.cloud.stream.binder.ConsumerProperties

這些屬性通過org.springframework.cloud.stream.binder.ConsumerProperties暴露。

The following binding properties are available for input bindings only and must be prefixed with spring.cloud.stream.bindings.<channelName>.consumer. (for example, spring.cloud.stream.bindings.input.consumer.concurrency=3).

以下綁定屬性僅可用于輸入綁定,并且必須以spring.cloud.stream.bindings.<channelName>.consumer.(例如spring.cloud.stream.bindings.input.consumer.concurrency=3)為字首。

Default values can be set by using the spring.cloud.stream.default.consumer prefix (for example, spring.cloud.stream.default.consumer.headerMode=none).

可以使用spring.cloud.stream.default.consumer字首(例如,spring.cloud.stream.default.consumer.headerMode=none)設定預設值。

concurrency

The concurrency of the inbound consumer.

入站消費者的并發性。

Default: 1.

partitioned

Whether the consumer receives data from a partitioned producer.

消費者是否從分區生産者接收資料。

Default: false.

headerMode

When set to none, disables header parsing on input. Effective only for messaging middleware that does not support message headers natively and requires header embedding. This option is useful when consuming data from non-Spring Cloud Stream applications when native headers are not supported. When set to headers, it uses the middleware’s native header mechanism. When set to embeddedHeaders, it embeds headers into the message payload.

設定none為時,禁用輸入上的header解析。僅對本身不支援消息headers并且需要header嵌入的消息中間件有效。當不支援原生headers時,從非Spring Cloud Stream應用程式中消費資料時,此選項很有用。設定為headers時,它使用中間件的原生header機制。設定為embeddedHeaders時,它會将headers嵌入到消息負載中。

Default: depends on the binder implementation.

預設值:取決于綁定器實作。

maxAttempts

If processing fails, the number of attempts to process the message (including the first). Set to 1 to disable retry.

如果處理失敗,則為處理消息的嘗試次數(包括第一次)。設定1為禁用重試。

Default: 3.

backOffInitialInterval

The backoff initial interval on retry.

重試時的退避初始間隔。

Default: 1000.

backOffMaxInterval

The maximum backoff interval.

最大退避間隔。

Default: 10000.

backOffMultiplier

The backoff multiplier.

退避乘數。

Default: 2.0.

instanceIndex

When set to a value greater than equal to zero, it allows customizing the instance index of this consumer (if different from spring.cloud.stream.instanceIndex). When set to a negative value, it defaults to spring.cloud.stream.instanceIndex. See “Instance Index and Instance Count” for more information.

當設定為大于等于零的值時,它允許自定義此消費者的執行個體索引(如果與spring.cloud.stream.instanceIndex不同)。設定為負值時,預設為spring.cloud.stream.instanceIndex。有關詳細資訊,請參閱“ 執行個體索引和執行個體計數 ”。

Default: -1.

instanceCount

When set to a value greater than equal to zero, it allows customizing the instance count of this consumer (if different from spring.cloud.stream.instanceCount). When set to a negative value, it defaults to spring.cloud.stream.instanceCount. See “Instance Index and Instance Count” for more information.

設定為大于等于零的值時,它允許自定義此消費者的執行個體計數(如果與spring.cloud.stream.instanceCount不同)。設定為負值時,預設為spring.cloud.stream.instanceCount。有關詳細資訊,請參閱“ 執行個體索引和執行個體計數 ”。

Default: -1.

useNativeDecoding

When set to true, the inbound message is deserialized directly by the client library, which must be configured correspondingly (for example, setting an appropriate Kafka producer value deserializer). When this configuration is being used, the inbound message unmarshalling is not based on the contentType of the binding. When native decoding is used, it is the responsibility of the producer to use an appropriate encoder (for example, the Kafka producer value serializer) to serialize the outbound message. Also, when native encoding and decoding is used, the headerMode=embeddedHeaders property is ignored and headers are not embedded in the message. See the producer property useNativeEncoding.

設定為true時,用戶端庫直接反序列化入站消息,必須相應地對其進行配置(例如,設定适當的Kafka生産者值反序列化器)。使用此配置時,入站消息解組不基于綁定的contentType。當使用原生解碼時,生産者有責任使用适當的編碼器(例如,Kafka生産者值序列化器)來序列化出站消息。此外,使用原生編碼和解碼時,将忽略headerMode=embeddedHeaders屬性,并且不會在消息中嵌入headers。檢視生産者屬性useNativeEncoding。

Default: false.

7.2.3. Producer Properties   生産者屬性

These properties are exposed via org.springframework.cloud.stream.binder.ProducerProperties

這些屬性通過org.springframework.cloud.stream.binder.ProducerProperties暴露。

The following binding properties are available for output bindings only and must be prefixed with spring.cloud.stream.bindings.<channelName>.producer. (for example, spring.cloud.stream.bindings.input.producer.partitionKeyExpression=payload.id).

以下綁定屬性僅可用于輸出綁定,并且必須以spring.cloud.stream.bindings.<channelName>.producer.(例如spring.cloud.stream.bindings.input.producer.partitionKeyExpression=payload.id)為字首。

Default values can be set by using the prefix spring.cloud.stream.default.producer (for example, spring.cloud.stream.default.producer.partitionKeyExpression=payload.id).

可以使用字首spring.cloud.stream.default.producer(例如,spring.cloud.stream.default.producer.partitionKeyExpression=payload.id)設定預設值。

partitionKeyExpression

A SpEL expression that determines how to partition outbound data. If set, or if partitionKeyExtractorClass is set, outbound data on this channel is partitioned. partitionCount must be set to a value greater than 1 to be effective. Mutually exclusive with partitionKeyExtractorClass. See “Partitioning Support”.

一個SpEL表達式,用于确定如何對出站資料進行分區。如果設定,或者設定了partitionKeyExtractorClass,則對此通道上的出站資料進行分區。partitionCount必須設定為大于1的值才能生效。與partitionKeyExtractorClass互斥。請參閱“ 分區支援 ”。

Default: null.

partitionKeyExtractorClass

A PartitionKeyExtractorStrategy implementation. If set, or if partitionKeyExpression is set, outbound data on this channel is partitioned. partitionCount must be set to a value greater than 1 to be effective. Mutually exclusive with partitionKeyExpression. See “Partitioning Support”.

一個PartitionKeyExtractorStrategy實作。如果設定,或者設定了partitionKeyExpression,則對此通道上的出站資料進行分區。partitionCount必須設定為大于1的值才能生效。與partitionKeyExpression互斥。請參閱“ 分區支援 ”。

Default: null.
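A minimal sketch of such a strategy is shown below (the customerId header it reads is an illustrative assumption); it would then be referenced through the partitionKeyExtractorClass property described above:

// imports assumed: org.springframework.cloud.stream.binder.PartitionKeyExtractorStrategy,
// org.springframework.messaging.Message
public class CustomerIdKeyExtractor implements PartitionKeyExtractorStrategy {

    @Override
    public Object extractKey(Message<?> message) {
        // partitions by a hypothetical 'customerId' header of the outbound message
        return message.getHeaders().get("customerId");
    }
}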

partitionSelectorClass

A PartitionSelectorStrategy implementation. Mutually exclusive with partitionSelectorExpression. If neither is set, the partition is selected as the hashCode(key) % partitionCount, where key is computed through either partitionKeyExpression or partitionKeyExtractorClass.

一個PartitionSelectorStrategy實作。與partitionSelectorExpression互斥。如果沒有設定,則分區被選擇為hashCode(key) % partitionCount,其中key通過partitionKeyExpression或partitionKeyExtractorClass計算。

Default: null.

partitionSelectorExpression

A SpEL expression for customizing partition selection. Mutually exclusive with partitionSelectorClass. If neither is set, the partition is selected as the hashCode(key) % partitionCount, where key is computed through either partitionKeyExpression or partitionKeyExtractorClass.

用于自定義分區選擇的SpEL表達式。與partitionSelectorClass互斥。如果沒有設定,則分區被選擇為hashCode(key) % partitionCount,其中key通過partitionKeyExpression或partitionKeyExtractorClass計算。

Default: null.

partitionCount

The number of target partitions for the data, if partitioning is enabled. Must be set to a value greater than 1 if the producer is partitioned. On Kafka, it is interpreted as a hint. The larger of this and the partition count of the target topic is used instead.

如果啟用了分區,則為資料的目标分區數。如果生産者已分區,則必須設定為大于1的值。在Kafka上,它被解釋為暗示。使用較大的這個和目标主題的分區計數來代替。

Default: 1.

requiredGroups

A comma-separated list of groups to which the producer must ensure message delivery even if they start after it has been created (for example, by pre-creating durable queues in RabbitMQ).

逗号分隔的組清單,生産者必須確定消息傳遞給它們,即使它們在它建立之後啟動(例如,通過在RabbitMQ中預先建立持久隊列)。

headerMode

When set to none, it disables header embedding on output. It is effective only for messaging middleware that does not support message headers natively and requires header embedding. This option is useful when producing data for non-Spring Cloud Stream applications when native headers are not supported. When set to headers, it uses the middleware’s native header mechanism. When set to embeddedHeaders, it embeds headers into the message payload.

設定none為時,它會禁用輸出中的header嵌入。它僅對于本身不支援消息headers并且需要header嵌入的消息中間件有效。當不支援原生headers時,在為非Spring Cloud Stream應用程式生成資料時,此選項很有用。設定為headers時,它使用中間件的原生header機制。設定為embeddedHeaders時,它會将headers嵌入到消息負載中。

Default: Depends on the binder implementation.

預設值:取決于綁定器實作。

useNativeEncoding

When set to true, the outbound message is serialized directly by the client library, which must be configured correspondingly (for example, setting an appropriate Kafka producer value serializer). When this configuration is being used, the outbound message marshalling is not based on the contentType of the binding. When native encoding is used, it is the responsibility of the consumer to use an appropriate decoder (for example, the Kafka consumer value de-serializer) to deserialize the inbound message. Also, when native encoding and decoding is used, the headerMode=embeddedHeaders property is ignored and headers are not embedded in the message. See the consumer property useNativeDecoding.

設定true為時,出站消息由用戶端庫直接序列化,必須相應地配置(例如,設定适當的Kafka生産者值序列化器)。使用此配置時,出站消息編組不基于綁定的contentType。當使用原生編碼時,消費者有責任使用适當的解碼器(例如,Kafka消費者值反序列化器)來反序列化入站消息。此外,使用原生編碼和解碼時,将忽略headerMode=embeddedHeaders屬性,并且不會在消息中嵌入headers。檢視消費者屬性useNativeDecoding。

Default: false.

errorChannelEnabled

When set to true, if the binder supports asynchronous send results, send failures are sent to an error channel for the destination. See “[binder-error-channels]” for more information.

設定為時true,如果綁定器支援異步發送結果,則發送失敗将發送到目标的錯誤通道。有關詳細資訊,請參閱“ [binder-error-channels] ”。

Default: false.

7.3. Using Dynamically Bound Destinations   使用動态綁定目标

Besides the channels defined by using @EnableBinding, Spring Cloud Stream lets applications send messages to dynamically bound destinations. This is useful, for example, when the target destination needs to be determined at runtime. Applications can do so by using the BinderAwareChannelResolver bean, registered automatically by the @EnableBinding annotation.

除了使用@EnableBinding定義的通道外,Spring Cloud Stream還允許應用程式将消息發送到動态綁定的目标。例如,當需要在運作時确定目标時,這很有用。應用程式可以通過使用由@EnableBinding注釋自動注冊的BinderAwareChannelResolver bean來實作。

The 'spring.cloud.stream.dynamicDestinations' property can be used for restricting the dynamic destination names to a known set (whitelisting). If this property is not set, any destination can be bound dynamically.

'spring.cloud.stream.dynamicDestinations'屬性可用于将動态目标名稱限制為已知集合(白名單)。如果未設定此屬性,則可以動态綁定任何目标。

The BinderAwareChannelResolver can be used directly, as shown in the following example of a REST controller using a path variable to decide the target channel:

可直接使用BinderAwareChannelResolver,如圖在下面的REST controller 例子中,使用路徑變量來決定目标通道:

@EnableBinding

@Controller

public class SourceWithDynamicDestination {

    @Autowired

    private BinderAwareChannelResolver resolver;

    @RequestMapping(path = "/{target}", method = POST, consumes = "*/*")

    @ResponseStatus(HttpStatus.ACCEPTED)

    public void handleRequest(@RequestBody String body, @PathVariable("target") String target,

           @RequestHeader(HttpHeaders.CONTENT_TYPE) Object contentType) {

        sendMessage(body, target, contentType);

    }

    private void sendMessage(String body, String target, Object contentType) {

        resolver.resolveDestination(target).send(MessageBuilder.createMessage(body,

                new MessageHeaders(Collections.singletonMap(MessageHeaders.CONTENT_TYPE, contentType))));

    }

}

Now consider what happens when we start the application on the default port (8080) and make the following requests with CURL:

現在考慮當我們在預設端口(8080)上啟動應用程式并使用CURL發出以下請求時會發生什麼:

curl -H "Content-Type: application/json" -X POST -d "customer-1" http://localhost:8080/customers

curl -H "Content-Type: application/json" -X POST -d "order-1" http://localhost:8080/orders

The destinations, 'customers' and 'orders', are created in the broker (in the exchange for Rabbit or in the topic for Kafka) with names of 'customers' and 'orders', and the data is published to the appropriate destinations.

目的地,“customers”和“orders”,在代理(在Rabbit的交換中或在Kafka的主題中)中建立,其名稱為“customers”和“orders”,并且資料将釋出到适當的目的地。

The BinderAwareChannelResolver is a general-purpose Spring Integration DestinationResolver and can be injected in other components — for example, in a router using a SpEL expression based on the target field of an incoming JSON message. The following example includes a router that reads SpEL expressions:

BinderAwareChannelResolver是一個通用的Spring Integration DestinationResolver,可以注入其他元件 - 例如,在路由器中使用基于傳入JSON消息的target字段的SpEL表達式。以下示例包含一個讀取SpEL表達式的路由器:

@EnableBinding

@Controller

public class SourceWithDynamicDestination {

    @Autowired

    private BinderAwareChannelResolver resolver;

    @RequestMapping(path = "/", method = POST, consumes = "application/json")

    @ResponseStatus(HttpStatus.ACCEPTED)

    public void handleRequest(@RequestBody String body, @RequestHeader(HttpHeaders.CONTENT_TYPE) Object contentType) {

        sendMessage(body, contentType);

    }

    private void sendMessage(Object body, Object contentType) {

        routerChannel().send(MessageBuilder.createMessage(body,

                new MessageHeaders(Collections.singletonMap(MessageHeaders.CONTENT_TYPE, contentType))));

    }

    @Bean(name = "routerChannel")

    public MessageChannel routerChannel() {

        return new DirectChannel();

    }

    @Bean

    @ServiceActivator(inputChannel = "routerChannel")

    public ExpressionEvaluatingRouter router() {

        ExpressionEvaluatingRouter router =

            new ExpressionEvaluatingRouter(new SpelExpressionParser().parseExpression("payload.target"));

        router.setDefaultOutputChannelName("default-output");

        router.setChannelResolver(resolver);

        return router;

    }

}

The Router Sink Application uses this technique to create the destinations on-demand.

路由器接收器應用程式使用此技術按需建立目的地。

If the channel names are known in advance, you can configure the producer properties as with any other destination. Alternatively, if you register a NewBindingCallback<> bean, it is invoked just before the binding is created. The callback takes the generic type of the extended producer properties used by the binder. It has one method:

如果事先知道通道名稱,則可以将生産者屬性配置為與任何其他目标一樣。或者,如果您注冊NewBindingCallback<> bean,則會在建立綁定之前調用它。回調采用綁定器使用的擴充生産者屬性的泛型類型。它有一個方法:

void configure(String channelName, MessageChannel channel, ProducerProperties producerProperties,

        T extendedProducerProperties);

The following example shows how to use the RabbitMQ binder:

以下示例顯示如何使用RabbitMQ綁定器:

@Bean

public NewBindingCallback<RabbitProducerProperties> dynamicConfigurer() {

    return (name, channel, props, extended) -> {

        props.setRequiredGroups("bindThisQueue");

        extended.setQueueNameGroupOnly(true);

        extended.setAutoBindDlq(true);

        extended.setDeadLetterQueueName("myDLQ");

    };

}

If you need to support dynamic destinations with multiple binder types, use Object for the generic type and cast the extended argument as needed.
如果需要支援具有多個綁定器類型的動态目标,請使用Object泛型類型并根據需要轉換擴充參數。
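A sketch of such a generic callback is shown below (the Rabbit branch mirrors the example above; branches for other binder types would follow the same pattern):

@Bean
public NewBindingCallback<Object> genericDynamicConfigurer() {
    return (name, channel, props, extended) -> {
        if (extended instanceof RabbitProducerProperties) {
            ((RabbitProducerProperties) extended).setAutoBindDlq(true);
        }
        // add instanceof branches for other binder types (for example, Kafka) as needed
    };
}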

8. Content Type Negotiation   内容類型協商

Data transformation is one of the core features of any message-driven microservice architecture. Given that, in Spring Cloud Stream, such data is represented as a Spring Message, a message may have to be transformed to a desired shape or size before reaching its destination. This is required for two reasons:

  1. To convert the contents of the incoming message to match the signature of the application-provided handler.
  2. To convert the contents of the outgoing message to the wire format.

資料轉換是任何消息驅動的微服務架構的核心功能之一。鑒于此,在Spring Cloud Stream中,此類資料表示為Spring Message,消息在到達目标之前可能必須被轉換為所需的形狀或大小。這有兩個原因:

  1. 轉換傳入消息的内容以比對應用程式提供的處理程式的簽名。
  2. 将傳出消息的内容轉換為有線格式。

The wire format is typically byte[] (that is true for the Kafka and Rabbit binders), but it is governed by the binder implementation.

有線格式通常是byte[](對于Kafka和Rabbit綁定器也是如此),但它受綁定器實作的控制。

In Spring Cloud Stream, message transformation is accomplished with an org.springframework.messaging.converter.MessageConverter.

在Spring Cloud Stream中,消息轉換是通過消息轉換器org.springframework.messaging.converter.MessageConverter完成的。

As a supplement to the details to follow, you may also want to read the following blog post.
作為要遵循的細節的補充,您可能還想閱讀以下部落格文章。

8.1. Mechanics   機制

To better understand the mechanics and the necessity behind content-type negotiation, we take a look at a very simple use case by using the following message handler as an example:

為了更好地了解内容類型協商背後的機制和必要性,我們通過使用以下消息處理程式作為示例來檢視一個非常簡單的用例:

@StreamListener(Processor.INPUT)

@SendTo(Processor.OUTPUT)

public String handle(Person person) {..}

For simplicity, we assume that this is the only handler in the application (we assume there is no internal pipeline).
為簡單起見,我們假設這是應用程式中唯一的處理程式(我們假設沒有内部管道)。

The handler shown in the preceding example expects a Person object as an argument and produces a String type as an output. In order for the framework to succeed in passing the incoming Message as an argument to this handler, it has to somehow transform the payload of the Message type from the wire format to a Person type. In other words, the framework must locate and apply the appropriate MessageConverter. To accomplish that, the framework needs some instructions from the user. One of these instructions is already provided by the signature of the handler method itself (Person type). Consequently, in theory, that should be (and, in some cases, is) enough. However, for the majority of use cases, in order to select the appropriate MessageConverter, the framework needs an additional piece of information. That missing piece is contentType.

前面示例中顯示的處理程式将Person對象作為參數,并生成String類型作為輸出。為了使架構成功将傳入Message作為參數傳遞給此處理程式,它必須以某種方式将Message類型的負載從有線格式轉換為Person類型。換句話說,架構必須找到并應用适當的MessageConverter。為此,架構需要使用者的一些訓示。其中一條指令已由處理程式方法本身(Person類型)的簽名提供。是以,從理論上講,這應該(并且在某些情況下)應該足夠了。但是,對于大多數用例,要選擇合适的MessageConverter,架構需要額外的資訊。那個缺失的部分是contentType。

Spring Cloud Stream provides three mechanisms to define contentType (in order of precedence):

  1. HEADER: The contentType can be communicated through the Message itself. By providing a contentType header, you declare the content type to use to locate and apply the appropriate MessageConverter.
  2. BINDING: The contentType can be set per destination binding by setting the spring.cloud.stream.bindings.input.content-type property.
The input segment in the property name corresponds to the actual name of the destination (which is “input” in our case). This approach lets you declare, on a per-binding basis, the content type to use to locate and apply the appropriate MessageConverter.
  3. DEFAULT: If contentType is not present in the Message header or the binding, the default application/json content type is used to locate and apply the appropriate MessageConverter.

Spring Cloud Stream提供了三種機制來定義contentType(按優先順序排列):

  1. HEADER:contentType可以通過Message本身進行通信。通過提供contentType header,您可以聲明要用于查找和應用适當的MessageConverter的内容類型。
  2. BINDING:每個目标綁定都可以通過spring.cloud.stream.bindings.input.content-type屬性設定contentType。
屬性名稱中的input段對應于目标的實際名稱(在我們的示例中為“input”)。此方法允許您在每個綁定的基礎上聲明用于查找和應用适當MessageConverter的内容類型。
  3. DEFAULT:如果Message header或綁定中不存在contentType,則使用預設的application/json内容類型來查找和應用适當的MessageConverter。

As mentioned earlier, the preceding list also demonstrates the order of precedence in case of a tie. For example, a header-provided content type takes precedence over any other content type. The same applies for a content type set on a per-binding basis, which essentially lets you override the default content type. However, it also provides a sensible default (which was determined from community feedback).

如前所述,前面的清單還示範了出現多個來源時的優先順序。例如,header提供的内容類型優先于任何其他内容類型。這同樣适用于基于每個綁定設定的内容類型,它基本上允許您覆寫預設内容類型。但是,它也提供了合理的預設值(根據社群回報确定)。

Another reason for making application/json the default stems from the interoperability requirements driven by distributed microservices architectures, where producer and consumer not only run in different JVMs but can also run on different non-JVM platforms.

使application/json成為預設值的另一個原因源于分布式微服務架構驅動的互操作性要求,其中生産者和消費者不僅在不同的JVM中運作,而且還可以在不同的非JVM平台上運作。
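
As a quick, hedged illustration of the first two mechanisms, the sketch below attaches a contentType header to an outgoing Message (the channel and payload are hypothetical); the BINDING mechanism would instead set the spring.cloud.stream.bindings.output.content-type property and leave the message untouched:

作為對前兩種機制的簡單示意(通道和負載均為假設的示例),下面的草圖為出站Message附加了contentType header;BINDING機制則是改為設定spring.cloud.stream.bindings.output.content-type屬性,而不修改消息本身:

import org.springframework.messaging.MessageChannel;
import org.springframework.messaging.MessageHeaders;
import org.springframework.messaging.support.MessageBuilder;
import org.springframework.util.MimeTypeUtils;

public class ContentTypeHeaderExample {

    // HEADER mechanism: the contentType travels with the Message itself and
    // takes precedence over the per-binding setting and the application/json default.
    public void send(MessageChannel output, Object payload) {
        output.send(MessageBuilder.withPayload(payload)
                .setHeader(MessageHeaders.CONTENT_TYPE, MimeTypeUtils.APPLICATION_JSON)
                .build());
    }
}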

When the non-void handler method returns, if the return value is already a Message, that Message becomes the payload. However, when the return value is not a Message, the new Message is constructed with the return value as the payload while inheriting headers from the input Message minus the headers defined or filtered by SpringIntegrationProperties.messageHandlerNotPropagatedHeaders. By default, there is only one header set there: contentType. This means that the new Message does not have a contentType header set, thus ensuring that the contentType can evolve. You can always opt out of this behavior by returning a Message from the handler method, where you can inject any header you wish.

當非void處理程式方法傳回時,如果傳回值已經是Message,那麼該Message成為負載。但是,當傳回值不是Message時,将構造新Message,使用傳回值作為負載,同時繼承輸入Message的headers,減去由SpringIntegrationProperties.messageHandlerNotPropagatedHeaders定義或過濾的headers。預設情況下,那裡隻設定了一個header:contentType。這意味着新的Message沒有設定contentType header,進而確保contentType可以演進。您始終可以通過從處理程式方法傳回Message來選擇不採用此行為,并在其中注入任何所需的header。
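
As a minimal, hedged sketch of that opt-out (it reuses the Processor channels and the Person type from the handler example above; the custom header name is purely illustrative), returning a Message lets the handler decide exactly which headers, including contentType, the outgoing message carries:

作為這種“選擇不採用預設行為”的最小示意(沿用上面處理程式示例中的Processor通道和Person類型;自定義header名稱僅作說明),傳回Message可以讓處理程式自行決定出站消息攜帶哪些headers(包括contentType):

import org.springframework.cloud.stream.annotation.StreamListener;
import org.springframework.cloud.stream.messaging.Processor;
import org.springframework.messaging.Message;
import org.springframework.messaging.MessageHeaders;
import org.springframework.messaging.handler.annotation.SendTo;
import org.springframework.messaging.support.MessageBuilder;
import org.springframework.util.MimeTypeUtils;

public class ExplicitHeadersHandler {

    @StreamListener(Processor.INPUT)
    @SendTo(Processor.OUTPUT)
    public Message<String> handle(Person person) {
        // Because a Message is returned, the framework uses it as-is,
        // so the headers set here (including contentType) are preserved.
        return MessageBuilder.withPayload(person.toString())
                .setHeader(MessageHeaders.CONTENT_TYPE, MimeTypeUtils.TEXT_PLAIN)
                .setHeader("x-origin", "logging-consumer") // illustrative custom header
                .build();
    }
}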

If there is an internal pipeline, the Message is sent to the next handler by going through the same process of conversion. However, if there is no internal pipeline or you have reached the end of it, the Message is sent back to the output destination.

如果存在内部管道,則通過相同的轉換過程将Message發送到下一個處理程式。但是,如果沒有内部管道或者您已到達它的末尾,則會将Message發送回輸出目标。

8.1.1. Content Type versus Argument Type   内容類型與參數類型

As mentioned earlier, for the framework to select the appropriate MessageConverter, it requires argument type and, optionally, content type information. The logic for selecting the appropriate MessageConverter resides with the argument resolvers (HandlerMethodArgumentResolvers), which trigger right before the invocation of the user-defined handler method (which is when the actual argument type is known to the framework). If the argument type does not match the type of the current payload, the framework delegates to the stack of the pre-configured MessageConverters to see if any one of them can convert the payload. As you can see, the Object fromMessage(Message<?> message, Class<?> targetClass); operation of the MessageConverter takes targetClass as one of its arguments. The framework also ensures that the provided Message always contains a contentType header. When no contentType header was already present, it injects either the per-binding contentType header or the default contentType header. The combination of contentType and argument type is the mechanism by which the framework determines whether a message can be converted to a target type. If no appropriate MessageConverter is found, an exception is thrown, which you can handle by adding a custom MessageConverter (see “User-defined Message Converters”).

如前所述,對于選擇适當的MessageConverter的架構,它需要參數類型和可選的内容類型資訊。選擇适當的MessageConverter的邏輯關鍵在于參數解析(HandlerMethodArgumentResolvers),這在使用者定義的處理程式方法調用之前觸發(在架構已知實際參數類型時)。如果參數類型與目前負載的類型不比對,則架構委托給預先配置的MessageConverters堆棧,以檢視它們中的任何一個是否可以轉換負載。如您所見,MessageConverter的 Object fromMessage(Message<?> message, Class<?> targetClass); 操作将targetClass作為其參數之一。該架構還確定提供的Message始終包含contentType頭。如果尚未存在contentType标頭,則會注入每個綁定的contentType header或預設contentType header。contentType參數類型的組合是架構确定消息是否可以轉換為目标類型的機制。如果找不到合适的MessageConverter,則抛出異常,您可以通過添加自定義MessageConverter來處理該異常(請參閱“ 使用者定義的消息轉換器 ”)。

But what if the payload type matches the target type declared by the handler method? In this case, there is nothing to convert, and the payload is passed unmodified. While this sounds pretty straightforward and logical, keep in mind handler methods that take a Message<?> or Object as an argument. By declaring the target type to be Object (which is an instanceof everything in Java), you essentially forfeit the conversion process.

但是如果負載類型與處理程式方法聲明的目标類型比對怎麼辦?在這種情況下,沒有任何東西可以轉換,并且載荷是未經修改的。雖然這聽起來非常簡單和合乎邏輯,但請記住采用Message<?>或Object作為參數的處理程式方法。通過将目标類型聲明為Object(這是Java中的instanceof所有内容),您基本上會喪失轉換過程。

Do not expect Message to be converted into some other type based only on the contentType. Remember that the contentType is complementary to the target type. If you wish, you can provide a hint, which MessageConverter may or may not take into consideration.
不要指望僅基于contentType就将Message轉換為其他類型。請記住,contentType是目标類型的補充。如果您願意,可以提供一個提示,MessageConverter可能會也可能不會考慮它。

8.1.2. Message Converters   消息轉換器

MessageConverters define two methods:

MessageConverters定義兩種方法:

Object fromMessage(Message<?> message, Class<?> targetClass);

Message<?> toMessage(Object payload, @Nullable MessageHeaders headers);

It is important to understand the contract of these methods and their usage, specifically in the context of Spring Cloud Stream.

了解這些方法及其用法的合同非常重要,特别是在Spring Cloud Stream的上下文中。

The fromMessage method converts an incoming Message to an argument type. The payload of the Message could be any type, and it is up to the actual implementation of the MessageConverter to support multiple types. For example, some JSON converter may support the payload type as byte[], String, and others. This is important when the application contains an internal pipeline (that is, input → handler1 → handler2 →. . . → output) and the output of the upstream handler results in a Message which may not be in the initial wire format.

fromMessage方法将傳入Message轉換為參數類型。Message的載荷可以是任何類型,是否支援多種類型取決于MessageConverter的實際實作。例如,某些JSON轉換器可以支援byte[]、String和其他載荷類型。當應用程式包含内部管道(即,輸入→處理程式1→處理程式2→...→輸出),并且上遊處理程式的輸出生成的Message可能不是初始有線格式時,這一點很重要。

However, the toMessage method has a more strict contract and must always convert Message to the wire format: byte[].

但是,toMessage方法具有更嚴格的合同,并且必須始終将Message轉換為有線格式:byte[]。

So, for all intents and purposes (and especially when implementing your own converter) you regard the two methods as having the following signatures:

是以,對于所有意圖和目的(尤其是在實作您自己的轉換器時),您認為這兩種方法具有以下簽名:

Object fromMessage(Message<?> message, Class<?> targetClass);

Message<byte[]> toMessage(Object payload, @Nullable MessageHeaders headers);

8.2. Provided MessageConverters   已提供的消息轉換器

As mentioned earlier, the framework already provides a stack of MessageConverters to handle most common use cases. The following list describes the provided MessageConverters, in order of precedence (the first MessageConverter that works is used):

  1. ApplicationJsonMessageMarshallingConverter: Variation of the org.springframework.messaging.converter.MappingJackson2MessageConverter. Supports conversion of the payload of the Message to/from POJO for cases when contentType is application/json (DEFAULT).
  2. TupleJsonMessageConverter: DEPRECATED Supports conversion of the payload of the Message to/from org.springframework.tuple.Tuple.
  3. ByteArrayMessageConverter: Supports conversion of the payload of the Message from byte[] to byte[] for cases when contentType is application/octet-stream. It is essentially a pass through and exists primarily for backward compatibility.
  4. ObjectStringMessageConverter: Supports conversion of any type to a String when contentType is text/plain. It invokes Object’s toString() method or, if the payload is byte[], a new String(byte[]).
  5. JavaSerializationMessageConverter: DEPRECATED Supports conversion based on java serialization when contentType is application/x-java-serialized-object.
  6. KryoMessageConverter: DEPRECATED Supports conversion based on Kryo serialization when contentType is application/x-java-object.
  7. JsonUnmarshallingConverter: Similar to the ApplicationJsonMessageMarshallingConverter. It supports conversion of any type when contentType is application/x-java-object. It expects the actual type information to be embedded in the contentType as an attribute (for example, application/x-java-object;type=foo.bar.Cat).

如前所述,架構已經提供了一個MessageConverters棧來處理大多數常見用例。以下清單按優先順序(使用的第一個有效的MessageConverter)描述了所提供的MessageConverters:

  1. ApplicationJsonMessageMarshallingConverter:org.springframework.messaging.converter.MappingJackson2MessageConverter的變體。當contentType是application/json(預設)時,支援Message載荷與POJO之間的相互轉換。
  2. TupleJsonMessageConverter:DEPRECATED 支援Message的負載與org.springframework.tuple.Tuple之間的相互轉換。
  3. ByteArrayMessageConverter:當contentType是application/octet-stream時,支援将Message的載荷從byte[]轉換為byte[]。它本質上是一種傳遞,主要用于向後相容。
  4. ObjectStringMessageConverter:當contentType是text/plain時,支援任何類型到String的轉換。它調用Object的toString()方法,或者,如果負載是byte[],則調用new String(byte[])。
  5. JavaSerializationMessageConverter:DEPRECATED 支援基于Java序列化的轉換,當contentType為application/x-java-serialized-object時。
  6. KryoMessageConverter:DEPRECATED 支援基于Kryo序列化的轉換,當contentType為application/x-java-object時。
  7. JsonUnmarshallingConverter:類似于ApplicationJsonMessageMarshallingConverter。當contentType是application/x-java-object時,它支援任何類型的轉換。它期望将實際類型資訊嵌入到contentType屬性中(例如,application/x-java-object;type=foo.bar.Cat)。

When no appropriate converter is found, the framework throws an exception. When that happens, you should check your code and configuration and ensure you did not miss anything (that is, ensure that you provided a contentType by using a binding or a header). However, most likely, you found some uncommon case (such as a custom contentType perhaps) and the current stack of provided MessageConverters does not know how to convert. If that is the case, you can add custom MessageConverter. See User-defined Message Converters.

如果找不到合适的轉換器,架構将抛出異常。當發生這種情況時,您應該檢查您的代碼和配置,并確保您沒有遺漏任何内容(即,確保您通過使用綁定或header提供了contentType)。但是,最有可能的是,您發現了一些不常見的情況(例如自定義contentType)并且目前提供的MessageConverters棧不知道如何轉換。如果是這種情況,您可以添加自定義MessageConverter。請參閱使用者定義的消息轉換器。

8.3. User-defined Message Converters   使用者定義的消息轉換器

Spring Cloud Stream exposes a mechanism to define and register additional MessageConverters. To use it, implement org.springframework.messaging.converter.MessageConverter, configure it as a @Bean, and annotate it with @StreamMessageConverter. It is then appended to the existing stack of MessageConverters.

Spring Cloud Stream公開了一種定義和注冊附加MessageConverters的機制。要使用它,請實作org.springframework.messaging.converter.MessageConverter,将其配置為@Bean,并使用@StreamMessageConverter注釋它。然後将它附加到現有的MessageConverter棧上。

It is important to understand that custom MessageConverter implementations are added to the head of the existing stack. Consequently, custom MessageConverter implementations take precedence over the existing ones, which lets you override as well as add to the existing converters.
重要的是要了解自定義MessageConverter實作被添加到現有棧的頭部。是以,自定義MessageConverter實作優先于現有實作,這使您可以覆寫以及添加到現有轉換器。

The following example shows how to create a message converter bean to support a new content type called application/bar:

以下示例說明如何建立消息轉換器bean以支援名為application/bar的新内容類型:

@EnableBinding(Sink.class)
@SpringBootApplication
public static class SinkApplication {

    ...

    @Bean
    @StreamMessageConverter
    public MessageConverter customMessageConverter() {
        return new MyCustomMessageConverter();
    }
}

public class MyCustomMessageConverter extends AbstractMessageConverter {

    public MyCustomMessageConverter() {
        super(new MimeType("application", "bar"));
    }

    @Override
    protected boolean supports(Class<?> clazz) {
        return (Bar.class.equals(clazz));
    }

    @Override
    protected Object convertFromInternal(Message<?> message, Class<?> targetClass, Object conversionHint) {
        Object payload = message.getPayload();
        return (payload instanceof Bar ? payload : new Bar((byte[]) payload));
    }
}

Spring Cloud Stream also provides support for Avro-based converters and schema evolution. See “Schema Evolution Support” for details.

Spring Cloud Stream還為基于Avro的轉換器和模式演變提供支援。有關詳細資訊,請參閱“ 架構演進支援 ”。

9. Schema Evolution Support   架構演進支援

Spring Cloud Stream provides support for schema evolution so that the data can be evolved over time and still work with older or newer producers and consumers and vice versa. Most serialization models, especially the ones that aim for portability across different platforms and languages, rely on a schema that describes how the data is serialized in the binary payload. In order to serialize the data and then to interpret it, both the sending and receiving sides must have access to a schema that describes the binary format. In certain cases, the schema can be inferred from the payload type on serialization or from the target type on deserialization. However, many applications benefit from having access to an explicit schema that describes the binary data format. A schema registry lets you store schema information in a textual format (typically JSON) and makes that information accessible to various applications that need it to receive and send data in binary format. A schema is referenceable as a tuple consisting of:

  • A subject that is the logical name of the schema
  • The schema version
  • The schema format, which describes the binary format of the data

The following sections go through the details of the various components involved in the schema evolution process.

Spring Cloud Stream為模式演變提供支援,以便資料可以随着時間的推移而發展,并且仍然可以與較舊或較新的生産者和消費者一起使用,反之亦然。大多數序列化模型,特别是那些旨在跨不同平台和語言進行可移植性的模型,依賴于描述如何在二進制負載中序列化資料的模式。為了序列化資料然後解釋它,發送方和接收方都必須能夠通路描述二進制格式的模式。在某些情況下,可以從序列化的負載類型或反序列化的目标類型推斷出模式。但是,許多應用程式受益于能夠通路描述二進制資料格式的顯式模式。模式系統資料庫允許您以文本格式(通常是JSON)存儲模式資訊,并使該資訊可供需要它以二進制格式接收和發送資料的各種應用程式通路。模式可作為元組引用,包括:

  • 作為架構的邏輯名稱的主題
  • 架構版本
  • 模式格式,描述資料的二進制格式

以下部分将詳細介紹模式演變過程中涉及的各個元件。

9.1. Schema Registry Client   架構系統資料庫用戶端

The client-side abstraction for interacting with schema registry servers is the SchemaRegistryClient interface, which has the following structure:

用于與模式系統資料庫伺服器互動的用戶端抽象是SchemaRegistryClient接口,它具有以下結構:

public interface SchemaRegistryClient {

    SchemaRegistrationResponse register(String subject, String format, String schema);

    String fetch(SchemaReference schemaReference);

    String fetch(Integer id);

}
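
For illustration only, a minimal sketch of calling this interface directly; the injected field, the subject, the format, and the schema string are all hypothetical, and in most applications the Avro converters use the client on your behalf:

僅作說明,下面是直接調用此接口的最小草圖;注入的字段、subject、format和schema字元串均為假設示例,大多數應用程式中由Avro轉換器代為使用該用戶端:

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.cloud.stream.schema.client.SchemaRegistryClient;

public class SchemaRegistryClientUsage {

    @Autowired
    private SchemaRegistryClient client;

    public void registerUserSchema(String avroSchemaJson) {
        // Register the schema under the "user" subject in the "avro" format;
        // the server assigns (or reuses) an id and version for it.
        client.register("user", "avro", avroSchemaJson);

        // A previously registered schema can be fetched back by its numeric id
        // (the id value below is hypothetical).
        String definition = client.fetch(1);
        System.out.println(definition);
    }
}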

Spring Cloud Stream provides out-of-the-box implementations for interacting with its own schema server and for interacting with the Confluent Schema Registry.

Spring Cloud Stream提供了開箱即用的實作,可以與自己的架構伺服器進行互動,并與Confluent Schema Registry進行互動。

A client for the Spring Cloud Stream schema registry can be configured by using the @EnableSchemaRegistryClient, as follows:

可以使用@EnableSchemaRegistryClient配置Spring Cloud Stream模式系統資料庫的用戶端,如下:

@EnableBinding(Sink.class)
@SpringBootApplication
@EnableSchemaRegistryClient
public static class AvroSinkApplication {

    ...

}

The default converter is optimized to cache not only the schemas from the remote server but also the parse() and toString() methods, which are quite expensive. Because of this, it uses a DefaultSchemaRegistryClient that does not cache responses. If you intend to change the default behavior, you can use the client directly on your code and override it to the desired outcome. To do so, you have to add the property spring.cloud.stream.schemaRegistryClient.cached=true to your application properties.
預設轉換器經過優化,不僅可以緩存來自遠端伺服器的模式,還可以緩存非常昂貴的parse()和toString()方法。是以,它使用不緩存響應的DefaultSchemaRegistryClient。如果您打算更改預設行為,可以直接在代碼上使用用戶端并将其覆寫到所需的結果。為此,您必須将屬性spring.cloud.stream.schemaRegistryClient.cached=true添加到應用程式屬性中。

9.1.1. Schema Registry Client Properties   架構系統資料庫用戶端屬性

The Schema Registry Client supports the following properties:

Schema Registry Client支援以下屬性:

spring.cloud.stream.schemaRegistryClient.endpoint

The location of the schema-server. When setting this, use a full URL, including protocol (http or https) , port, and context path.

架構伺服器的位置。設定此項時,請使用完整的URL,包括協定(http或https),端口和上下文路徑。

Default

localhost:8990/

spring.cloud.stream.schemaRegistryClient.cached

Whether the client should cache schema server responses. Normally set to false, as the caching happens in the message converter. Clients using the schema registry client should set this to true.

用戶端是否應緩存架構伺服器響應。通常設定為false,因為緩存發生在消息轉換器中。使用模式系統資料庫用戶端的用戶端應将此設定為true。

Default

true

9.2. Avro Schema Registry Client Message Converters   Avro架構系統資料庫用戶端消息轉換器

For applications that have a SchemaRegistryClient bean registered with the application context, Spring Cloud Stream auto configures an Apache Avro message converter for schema management. This eases schema evolution, as applications that receive messages can get easy access to a writer schema that can be reconciled with their own reader schema.

對于在應用程式上下文中注冊了SchemaRegistryClient bean的應用程式,Spring Cloud Stream會自動配置Apache Avro消息轉換器以進行模式管理。這樣可以簡化模式演變,因為接收消息的應用程式可以輕松通路可與自己的讀取器模式協調的編寫器模式。

For outbound messages, if the content type of the channel is set to application/*+avro, the MessageConverter is activated, as shown in the following example:

對于出站消息,如果通道的内容類型設定為application/*+avro,則會激活MessageConverter,如以下示例所示:

spring.cloud.stream.bindings.output.contentType=application/*+avro

During the outbound conversion, the message converter tries to infer the schema of each outbound message (based on its type) and register it to a subject (based on the payload type) by using the SchemaRegistryClient. If an identical schema is already found, then a reference to it is retrieved. If not, the schema is registered, and a new version number is provided. The message is sent with a contentType header by using the following scheme: application/[prefix].[subject].v[version]+avro, where prefix is configurable and subject is deduced from the payload type.

在出站轉換期間,消息轉換器嘗試推斷每個出站消息的模式(基于其類型),并使用SchemaRegistryClient将其注冊到主題(基于負載類型)。如果已找到相同的模式,則擷取對其的引用。如果不是,則注冊模式,并提供新的版本号。通過以下模式使用contentType header發送消息:application/[prefix].[subject].v[version]+avro,其中prefix是可配置的并且subject從負載類型推導出。

For example, a message of the type User might be sent as a binary payload with a content type of application/vnd.user.v2+avro, where user is the subject and 2 is the version number.

例如,User類型的消息可以作為二進制載荷發送,其内容類型為application/vnd.user.v2+avro,其中user是主題,2是版本号。

When receiving messages, the converter infers the schema reference from the header of the incoming message and tries to retrieve it. The schema is used as the writer schema in the deserialization process.

接收消息時,轉換器會從傳入消息的header中推斷出架構引用,并嘗試擷取它。該模式在反序列化過程中用作編寫器模式。

9.2.1. Avro Schema Registry Message Converter Properties   Avro架構系統資料庫用戶端消息轉換器屬性

If you have enabled Avro based schema registry client by setting spring.cloud.stream.bindings.output.contentType=application/*+avro, you can customize the behavior of the registration by setting the following properties.

如果通過設定spring.cloud.stream.bindings.output.contentType=application/*+avro啟用了基于Avro的架構系統資料庫用戶端,則可以通過設定以下屬性來自定義注冊行為。

spring.cloud.stream.schema.avro.dynamicSchemaGenerationEnabled

Enable if you want the converter to use reflection to infer a Schema from a POJO.

如果希望轉換器使用反射從POJO中推斷架構,則啟用。

Default: false

spring.cloud.stream.schema.avro.readerSchema

Avro compares schema versions by looking at a writer schema (origin payload) and a reader schema (your application payload). See the Avro documentation for more information. If set, this overrides any lookups at the schema server and uses the local schema as the reader schema. Default: null

Avro通過檢視編寫器模式(原始負載)和讀取器模式(您的應用程式負載)來比較模式版本。有關更多資訊,請參閱Avro文檔。如果設定,則會覆寫架構伺服器上的任何查找,并使用本地架構作為讀取器模式。預設:null

spring.cloud.stream.schema.avro.schemaLocations

Registers any .avsc files listed in this property with the Schema Server.

使用架構伺服器注冊此屬性中列出的所有.avsc檔案。

Default: empty

spring.cloud.stream.schema.avro.prefix

The prefix to be used on the Content-Type header.

要在Content-Type header上使用的字首。

Default: vnd

9.3. Apache Avro Message Converters   Apache Avro消息轉換器

Spring Cloud Stream provides support for schema-based message converters through its spring-cloud-stream-schema module. Currently, the only serialization format supported out of the box for schema-based message converters is Apache Avro, with more formats to be added in future versions.

The spring-cloud-stream-schema module contains two types of message converters that can be used for Apache Avro serialization:

  • Converters that use the class information of the serialized or deserialized objects or a schema with a location known at startup.
  • Converters that use a schema registry. They locate the schemas at runtime and dynamically register new schemas as domain objects evolve.

Spring Cloud Stream通過其spring-cloud-stream-schema子產品為基于模式的消息轉換器提供支援。目前,基于模式的消息轉換器開箱即用的唯一序列化格式是Apache Avro,未來版本中将添加更多格式。

spring-cloud-stream-schema子產品包含兩種類型的消息轉換器,可用于Apache Avro序列化:

  • 使用序列化或反序列化對象的類資訊或具有啟動時已知位置的模式的轉換器。
  • 使用模式系統資料庫的轉換器。他們在運作時定位模式,并在域對象發展時動态注冊新模式。

9.4. Converters with Schema Support   具有架構支援的轉換器

The AvroSchemaMessageConverter supports serializing and deserializing messages either by using a predefined schema or by using the schema information available in the class (either reflectively or contained in the SpecificRecord). If you provide a custom converter, then the default AvroSchemaMessageConverter bean is not created. The following example shows a custom converter:

AvroSchemaMessageConverter通過使用預定義的模式,或使用類中可用的模式資訊(反射性的或包含在SpecificRecord中)支援序列化和反序列化消息。如果您提供自定義轉換器,則不會建立預設的AvroSchemaMessageConverter bean。以下示例顯示了自定義轉換器:

To use custom converters, you can simply add them to the application context, optionally specifying one or more MimeTypes with which to associate them. The default MimeType is application/avro.

要使用自定義轉換器,隻需将其添加到應用程式上下文中,可以選擇指定一個或多個與之關聯的MimeTypes。預設MimeType是application/avro。

If the target type of the conversion is a GenericRecord, a schema must be set.

如果轉換的目标類型是GenericRecord,則必須設定模式。

The following example shows how to configure a converter in a sink application by registering the Apache Avro MessageConverter without a predefined schema. In this example, note that the mime type value is avro/bytes, not the default application/avro.

以下示例顯示如何通過注冊沒有預定義模式的Apache Avro MessageConverter來在接收器應用程式中配置轉換器。在此示例中,請注意mime類型值avro/bytes,而不是預設值application/avro。

@EnableBinding(Sink.class)
@SpringBootApplication
public static class SinkApplication {

    ...

    @Bean
    public MessageConverter userMessageConverter() {
        return new AvroSchemaMessageConverter(MimeType.valueOf("avro/bytes"));
    }
}

Conversely, the following application registers a converter with a predefined schema (found on the classpath):

相反,以下應用程式使用預定義模式(在類路徑中找到)注冊轉換器:

@EnableBinding(Sink.class)
@SpringBootApplication
public static class SinkApplication {

    ...

    @Bean
    public MessageConverter userMessageConverter() {
        AvroSchemaMessageConverter converter = new AvroSchemaMessageConverter(MimeType.valueOf("avro/bytes"));
        converter.setSchemaLocation(new ClassPathResource("schemas/User.avro"));
        return converter;
    }
}

9.5. Schema Registry Server   架構系統資料庫伺服器

Spring Cloud Stream provides a schema registry server implementation. To use it, you can add the spring-cloud-stream-schema-server artifact to your project and use the @EnableSchemaRegistryServer annotation, which adds the schema registry server REST controller to your application. This annotation is intended to be used with Spring Boot web applications, and the listening port of the server is controlled by the server.port property. The spring.cloud.stream.schema.server.path property can be used to control the root path of the schema server (especially when it is embedded in other applications). The spring.cloud.stream.schema.server.allowSchemaDeletion boolean property enables the deletion of a schema. By default, this is disabled.

Spring Cloud Stream提供架構注冊伺服器實作。要使用它,您可以将spring-cloud-stream-schema-server工件添加到項目中并使用@EnableSchemaRegistryServer注釋,該注釋将模式系統資料庫伺服器REST控制器添加到您的應用程式。此注釋旨在與Spring Boot Web應用程式一起使用,并且伺服器的偵聽端口由server.port屬性控制。spring.cloud.stream.schema.server.path屬性可用于控制模式伺服器的根路徑(特别是當它嵌入其他應用程式時)。spring.cloud.stream.schema.server.allowSchemaDeletion布爾屬性允許删除模式。預設情況下,禁用此功能。

The schema registry server uses a relational database to store the schemas. By default, it uses an embedded database. You can customize the schema storage by using the Spring Boot SQL database and JDBC configuration options.

模式系統資料庫伺服器使用關系資料庫來存儲模式。預設情況下,它使用嵌入式資料庫。您可以使用Spring Boot SQL資料庫和JDBC配置選項自定義架構存儲。

The following example shows a Spring Boot application that enables the schema registry:

以下示例顯示了啟用架構系統資料庫的Spring Boot應用程式:

@SpringBootApplication
@EnableSchemaRegistryServer
public class SchemaRegistryServerApplication {

    public static void main(String[] args) {
        SpringApplication.run(SchemaRegistryServerApplication.class, args);
    }
}

9.5.1. Schema Registry Server API   架構系統資料庫伺服器API

The Schema Registry Server API consists of the following operations:

Schema Registry Server API包含以下操作:

  • POST / — see “Registering a New Schema”
  • GET /{subject}/{format}/{version} — see “Retrieving an Existing Schema by Subject, Format, and Version”
  • GET /{subject}/{format} — see “Retrieving an Existing Schema by Subject and Format”
  • GET /schemas/{id} — see “Retrieving an Existing Schema by ID”
  • DELETE /{subject}/{format}/{version} — see “Deleting a Schema by Subject, Format, and Version”
  • DELETE /schemas/{id} — see “Deleting a Schema by ID”
  • DELETE /{subject} — see “Deleting a Schema by Subject”

Registering a New Schema   注冊新架構

To register a new schema, send a POST request to the / endpoint.

The / accepts a JSON payload with the following fields:

  • subject: The schema subject
  • format: The schema format
  • definition: The schema definition

要注冊新架構,請向/端點發送POST請求。

/接受具有以下字段的JSON載荷:

  • subject:架構主題
  • format:架構格式
  • definition:架構定義

Its response is a schema object in JSON, with the following fields:

  • id: The schema ID
  • subject: The schema subject
  • format: The schema format
  • version: The schema version
  • definition: The schema definition

它的響應是JSON中的模式對象,包含以下字段:

  • id:架構ID
  • subject:架構主題
  • format:架構格式
  • version:架構版本
  • definition:架構定義
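
For example, a hedged sketch of registering a schema over plain HTTP with Spring's RestTemplate, assuming the server runs at the documented default endpoint (localhost:8990/) and that Jackson is on the classpath; the subject and Avro definition are illustrative:

例如,下面是使用Spring的RestTemplate通過HTTP注冊架構的示意草圖,假設伺服器運作在文檔中的預設端點(localhost:8990/)且類路徑上有Jackson;subject和Avro定義僅作說明:

import java.util.HashMap;
import java.util.Map;
import org.springframework.web.client.RestTemplate;

public class RegisterSchemaExample {

    public static void main(String[] args) {
        // The three documented request fields: subject, format, definition.
        Map<String, String> request = new HashMap<>();
        request.put("subject", "user");
        request.put("format", "avro");
        request.put("definition",
                "{\"type\":\"record\",\"name\":\"User\",\"fields\":[{\"name\":\"name\",\"type\":\"string\"}]}");

        // POST to the server root; the response is the schema object described
        // above (id, subject, format, version, definition).
        RestTemplate rest = new RestTemplate();
        Map<?, ?> response = rest.postForObject("http://localhost:8990/", request, Map.class);
        System.out.println(response);
    }
}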

Retrieving an Existing Schema by Subject, Format, and Version   按主題,格式,和版本檢索現有架構

To retrieve an existing schema by subject, format, and version, send GET request to the /{subject}/{format}/{version} endpoint.

Its response is a schema object in JSON, with the following fields:

  • id: The schema ID
  • subject: The schema subject
  • format: The schema format
  • version: The schema version
  • definition: The schema definition

要按主題,格式和版本檢索現有架構,請将GET請求發送到/{subject}/{format}/{version}端點。

它的響應是JSON中的模式對象,包含以下字段:

  • id:架構ID
  • subject:架構主題
  • format:架構格式
  • version:架構版本
  • definition:架構定義

Retrieving an Existing Schema by Subject and Format   按主題和格式檢索現有架構

To retrieve an existing schema by subject and format, send a GET request to the /{subject}/{format} endpoint.

Its response is a list of schemas with each schema object in JSON, with the following fields:

  • id: The schema ID
  • subject: The schema subject
  • format: The schema format
  • version: The schema version
  • definition: The schema definition

要按主題和格式檢索現有架構,請向/{subject}/{format}端點發送GET請求。

它的響應是JSON中每個模式對象的模式清單,包含以下字段:

  • id:架構ID
  • subject:架構主題
  • format:架構格式
  • version:架構版本
  • definition:架構定義

Retrieving an Existing Schema by ID   按ID檢索現有架構

To retrieve a schema by its ID, send a GET request to the /schemas/{id} endpoint.

Its response is a schema object in JSON, with the following fields:

  • id: The schema ID
  • subject: The schema subject
  • format: The schema format
  • version: The schema version
  • definition: The schema definition

要通過其ID檢索架構,請向/schemas/{id}端點發送GET請求。

它的響應是JSON中的模式對象,包含以下字段:

  • id:架構ID
  • subject:架構主題
  • format:架構格式
  • version:架構版本
  • definition:架構定義

Deleting a Schema by Subject, Format, and Version   按主題,格式,和版本删除架構

To delete a schema identified by its subject, format, and version, send a DELETE request to the /{subject}/{format}/{version} endpoint.

要删除由其主題、格式和版本辨別的模式,請向/{subject}/{format}/{version}端點發送DELETE請求。

Deleting a Schema by ID   按ID删除架構

To delete a schema by its ID, send a DELETE request to the /schemas/{id} endpoint.

要按其ID删除架構,請向/schemas/{id}端點發送DELETE請求。

Deleting a Schema by Subject  按主題删除架構

DELETE /{subject}

Delete existing schemas by their subject.

DELETE /{subject}

按主題删除現有架構。

This note applies to users of Spring Cloud Stream 1.1.0.RELEASE only. Spring Cloud Stream 1.1.0.RELEASE used the table name, schema, for storing Schema objects. Schema is a keyword in a number of database implementations. To avoid any conflicts in the future, starting with 1.1.1.RELEASE, we have opted for the name SCHEMA_REPOSITORY for the storage table. Any Spring Cloud Stream 1.1.0.RELEASE users who upgrade should migrate their existing schemas to the new table before upgrading.
本說明僅适用于Spring Cloud Stream 1.1.0.RELEASE的使用者。Spring Cloud Stream 1.1.0.RELEASE使用表名schema來存儲Schema對象。Schema是許多資料庫實作中的關鍵字。為了避免将來出現任何沖突,從1.1.1.RELEASE開始,我們為存儲表選擇了SCHEMA_REPOSITORY這個名稱。任何更新的Spring Cloud Stream 1.1.0.RELEASE使用者都應該在更新之前将其現有架構遷移到新表。

9.5.2. Using Confluent’s Schema Registry   使用Confluent的架構系統資料庫

The default configuration creates a DefaultSchemaRegistryClient bean. If you want to use the Confluent schema registry, you need to create a bean of type ConfluentSchemaRegistryClient, which supersedes the one configured by default by the framework. The following example shows how to create such a bean:

預設配置建立一個DefaultSchemaRegistryClient bean。如果要使用Confluent模式系統資料庫,則需要建立一個ConfluentSchemaRegistryClient類型的bean,它取代架構預設配置的bean。以下示例顯示如何建立此類bean:

@Bean
public SchemaRegistryClient schemaRegistryClient(@Value("${spring.cloud.stream.schemaRegistryClient.endpoint}") String endpoint) {
    ConfluentSchemaRegistryClient client = new ConfluentSchemaRegistryClient();
    client.setEndpoint(endpoint);
    return client;
}

The ConfluentSchemaRegistryClient is tested against Confluent platform version 4.0.0.
ConfluentSchemaRegistryClient針對Confluent平台版本4.0.0進行測試。

9.6. Schema Registration and Resolution   架構注冊和解析

To better understand how Spring Cloud Stream registers and resolves new schemas and its use of Avro schema comparison features, we provide two separate subsections:

為了更好地了解Spring Cloud Stream如何注冊和解析新架構及其對Avro架構比較功能的使用,我們提供了兩個單獨的小節:

  • “Schema Registration Process (Serialization)”
  • “Schema Resolution Process (Deserialization)”

9.6.1. Schema Registration Process (Serialization)   架構注冊過程(序列化)

The first part of the registration process is extracting a schema from the payload that is being sent over a channel. Avro types such as SpecificRecord or GenericRecord already contain a schema, which can be retrieved immediately from the instance. In the case of POJOs, a schema is inferred if the spring.cloud.stream.schema.avro.dynamicSchemaGenerationEnabled property is set to true (the default).

注冊過程的第一部分是從通過通道發送的負載中提取模式。Avro類型,例如SpecificRecord或GenericRecord已經包含模式,可以立即從執行個體中檢索。在POJO的情況下,如果spring.cloud.stream.schema.avro.dynamicSchemaGenerationEnabled屬性設定為true(預設值),則推斷出模式。

Figure 7. Schema Writer Resolution Process

Once a schema is obtained, the converter loads its metadata (version) from the remote server. First, it queries a local cache. If no result is found, it submits the data to the server, which replies with versioning information. The converter always caches the results to avoid the overhead of querying the Schema Server for every new message that needs to be serialized.

擷取模式之後,轉換器從遠端伺服器加載其中繼資料(版本)。首先,它查詢本地緩存。如果未找到任何結果,則會将資料送出給伺服器,伺服器會回複版本資訊。轉換器始終緩存結果,以避免為每個需要序列化的新消息進行查詢架構伺服器的開銷。

Figure 8. Schema Registration Process

With the schema version information, the converter sets the contentType header of the message to carry the version information — for example: application/vnd.user.v1+avro.

使用模式版本資訊,轉換器設定消息的contentType header以攜帶版本資訊 - 例如:application/vnd.user.v1+avro。

9.6.2. Schema Resolution Process (Deserialization)   架構解析過程(反序列化)

When reading messages that contain version information (that is, a contentType header with a scheme like the one described under “Schema Registration Process (Serialization)”), the converter queries the Schema server to fetch the writer schema of the message. Once it has found the correct schema of the incoming message, it retrieves the reader schema and, by using Avro’s schema resolution support, reads it into the reader definition (setting defaults and any missing properties).

當讀取包含版本資訊的消息(即,具有類似“ 模式注冊過程(序列化) ”中描述的方案的contentType header)時,轉換器查詢模式伺服器以擷取消息的寫入器模式。一旦找到傳入消息的正确模式,它就會檢索讀取器模式,并通過使用Avro的模式解析支援将其讀入讀取器定義(設定預設值和任何缺少的屬性)。

Figure 9. Schema Reading Resolution Process

You should understand the difference between a writer schema (the application that wrote the message) and a reader schema (the receiving application). We suggest taking a moment to read the Avro terminology and understand the process. Spring Cloud Stream always fetches the writer schema to determine how to read a message. If you want to get Avro’s schema evolution support working, you need to make sure that a readerSchema was properly set for your application.
您應該了解編寫器模式(編寫消息的應用程式)和讀取器模式(接收應用程式)之間的差別。我們建議花點時間閱讀Avro術語并了解該過程。Spring Cloud Stream始終擷取編寫器模式以确定如何閱讀消息。如果您希望Avro的架構演變支援能夠正常工作,您需要確定您的應用程式正确設定了讀取器模式readerSchema。

10. Inter-Application Communication   應用程式之間通信

Spring Cloud Stream enables communication between applications. Inter-application communication is a complex issue spanning several concerns, as described in the following topics:

Spring Cloud Stream支援應用程式之間的通信。跨應用程式通信是一個複雜的問題,涉及多個問題,如以下主題中所述:

  • “Connecting Multiple Application Instances”
  • “Instance Index and Instance Count”
  • “Partitioning”

10.1. Connecting Multiple Application Instances   連接配接多個應用程式執行個體

While Spring Cloud Stream makes it easy for individual Spring Boot applications to connect to messaging systems, the typical scenario for Spring Cloud Stream is the creation of multi-application pipelines, where microservice applications send data to each other. You can achieve this scenario by correlating the input and output destinations of “adjacent” applications.

雖然Spring Cloud Stream使獨立的Spring Boot應用程式連接配接到消息系統很容易,但是Spring Cloud Stream的典型場景是建立多個應用程式管道,其中微服務應用彼此之間發送資料。你可以通過關聯“相鄰”應用程式的輸入和輸出目标來實作此方案。

Suppose a design calls for the Time Source application to send data to the Log Sink application. You could use a common destination named ticktock for bindings within both applications.

假設一個設計要求Time Source應用程式将資料發送到Log Sink應用程式。您可以在兩個應用程式中使用綁定的公共目标,命名為ticktock。

Time Source (that has the channel name output) would set the following property:

Time Source(具有通道名稱output)将設定以下屬性:

spring.cloud.stream.bindings.output.destination=ticktock

Log Sink (that has the channel name input) would set the following property:

Log Sink(具有通道名稱input)将設定以下屬性:

spring.cloud.stream.bindings.input.destination=ticktock

10.2. Instance Index and Instance Count   執行個體索引和執行個體計數

When scaling up Spring Cloud Stream applications, each instance can receive information about how many other instances of the same application exist and what its own instance index is. Spring Cloud Stream does this through the spring.cloud.stream.instanceCount and spring.cloud.stream.instanceIndex properties. For example, if there are three instances of a HDFS sink application, all three instances have spring.cloud.stream.instanceCount set to 3, and the individual applications have spring.cloud.stream.instanceIndex set to 0, 1, and 2, respectively.

在擴充Spring Cloud Stream應用程式時,每個執行個體都可以接收有關同一應用程式存在多少其他執行個體以及它自己的執行個體索引的資訊。Spring Cloud Stream通過spring.cloud.stream.instanceCount和spring.cloud.stream.instanceIndex屬性實作此目的。例如,如果HDFS接收器應用程式有三個執行個體,所有三個執行個體都将spring.cloud.stream.instanceCount設定為3,并且獨自的應用程式分别将spring.cloud.stream.instanceIndex設定為0,1,和2。

When Spring Cloud Stream applications are deployed through Spring Cloud Data Flow, these properties are configured automatically; when Spring Cloud Stream applications are launched independently, these properties must be set correctly. By default, spring.cloud.stream.instanceCount is 1, and spring.cloud.stream.instanceIndex is 0.

當Spring Cloud Stream應用程式通過Spring Cloud Data Flow部署時,這些屬性會自動配置; 當Spring Cloud Stream應用程式獨立啟動時,必須正确設定這些屬性。預設情況下,spring.cloud.stream.instanceCount是1,spring.cloud.stream.instanceIndex是0。

In a scaled-up scenario, correct configuration of these two properties is important for addressing partitioning behavior (see below) in general, and the two properties are always required by certain binders (for example, the Kafka binder) in order to ensure that data are split correctly across multiple consumer instances.

在擴充方案中,正确配置這兩個屬性對于解決分區行為(見下文)非常重要,并且某些綁定器(例如,Kafka綁定器)始終需要這兩個屬性,以確定資料在多個消費者執行個體之間正确分割。

10.3. Partitioning   分區

Partitioning in Spring Cloud Stream consists of two tasks:

Spring Cloud Stream中的分區包含兩個任務:

  • “Configuring Output Bindings for Partitioning”
  • “Configuring Input Bindings for Partitioning”

10.3.1. Configuring Output Bindings for Partitioning   配置輸出綁定以進行分區

You can configure an output binding to send partitioned data by setting one and only one of its partitionKeyExpression or partitionKeyExtractorName properties, as well as its partitionCount property.

您可以通過設定partitionKeyExpression或partitionKeyExtractorName屬性二者之一(且只能設定一個),以及partitionCount屬性,将輸出綁定配置為發送分區資料。

For example, the following is a valid and typical configuration:

例如,以下是有效且典型的配置:

spring.cloud.stream.bindings.output.producer.partitionKeyExpression=payload.id

spring.cloud.stream.bindings.output.producer.partitionCount=5

Based on that example configuration, data is sent to the target partition by using the following logic.

基于該示例配置,使用以下邏輯将資料發送到目标分區。

A partition key’s value is calculated for each message sent to a partitioned output channel based on the partitionKeyExpression. The partitionKeyExpression is a SpEL expression that is evaluated against the outbound message for extracting the partitioning key.

對于發送到分區輸出通道的每條消息,基于partitionKeyExpression計算分區key的值。partitionKeyExpression是一個SpEL表達式,該表達式針對提取分區key的出站消息進行評估。

If a SpEL expression is not sufficient for your needs, you can instead calculate the partition key value by providing an implementation of org.springframework.cloud.stream.binder.PartitionKeyExtractorStrategy and configuring it as a bean (by using the @Bean annotation). If you have more than one bean of type org.springframework.cloud.stream.binder.PartitionKeyExtractorStrategy available in the Application Context, you can further filter it by specifying its name with the partitionKeyExtractorName property, as shown in the following example:

如果SpEL表達式不足以滿足您的需要,您可以通過提供org.springframework.cloud.stream.binder.PartitionKeyExtractorStrategy實作并将其配置為bean(通過使用@Bean注釋)來計算分區key值。如果在應用程式上下文中有多個org.springframework.cloud.stream.binder.PartitionKeyExtractorStrategy類型的bean可用,則可以通過使用partitionKeyExtractorName屬性指定其名稱來進一步過濾它,如以下示例所示:

--spring.cloud.stream.bindings.output.producer.partitionKeyExtractorName=customPartitionKeyExtractor
--spring.cloud.stream.bindings.output.producer.partitionCount=5
. . .
@Bean
public CustomPartitionKeyExtractorClass customPartitionKeyExtractor() {
    return new CustomPartitionKeyExtractorClass();
}
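
For reference, a minimal sketch of what the CustomPartitionKeyExtractorClass used above might look like; it assumes the PartitionKeyExtractorStrategy interface exposes a single extractKey(Message<?>) method and uses a hypothetical customerId header as the key:

作為參考,下面是上文所用的CustomPartitionKeyExtractorClass可能的最小實作草圖;它假設PartitionKeyExtractorStrategy接口只有一個extractKey(Message<?>)方法,并以假設的customerId header作為key:

import org.springframework.cloud.stream.binder.PartitionKeyExtractorStrategy;
import org.springframework.messaging.Message;

public class CustomPartitionKeyExtractorClass implements PartitionKeyExtractorStrategy {

    @Override
    public Object extractKey(Message<?> message) {
        // Partition by a header value instead of a payload SpEL expression.
        return message.getHeaders().get("customerId");
    }
}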

In previous versions of Spring Cloud Stream, you could specify the implementation of org.springframework.cloud.stream.binder.PartitionKeyExtractorStrategy by setting the spring.cloud.stream.bindings.output.producer.partitionKeyExtractorClass property. Since version 2.0, this property is deprecated, and support for it will be removed in a future version.
在以前版本的Spring Cloud Stream中,您可以通過設定spring.cloud.stream.bindings.output.producer.partitionKeyExtractorClass屬性來指定org.springframework.cloud.stream.binder.PartitionKeyExtractorStrategy實作。從版本2.0開始,不推薦使用此屬性,并且将在以後的版本中删除對該屬性的支援。

Once the message key is calculated, the partition selection process determines the target partition as a value between 0 and partitionCount - 1. The default calculation, applicable in most scenarios, is based on the following formula: key.hashCode() % partitionCount. This can be customized on the binding, either by setting a SpEL expression to be evaluated against the 'key' (through the partitionSelectorExpression property) or by configuring an implementation of org.springframework.cloud.stream.binder.PartitionSelectorStrategy as a bean (by using the @Bean annotation). Similar to the PartitionKeyExtractorStrategy, you can further filter it by using the spring.cloud.stream.bindings.output.producer.partitionSelectorName property when more than one bean of this type is available in the Application Context, as shown in the following example:

一旦計算出消息key,分區選擇過程就将目标分區确定為0和partitionCount - 1之間的值。适用于大多數情況的預設計算基于以下公式:key.hashCode() % partitionCount。這可以在綁定上自定義,通過設定要根據'key'(通過partitionSelectorExpression屬性)計算的SpEL表達式,或者通過配置org.springframework.cloud.stream.binder.PartitionSelectorStrategy bean 的實作(通過使用@Bean注釋)來定制。與PartitionKeyExtractorStrategy類似,如果在應用程式上下文中有多個此類型的bean可用時,您可以使用spring.cloud.stream.bindings.output.producer.partitionSelectorName屬性進一步過濾它,如以下示例所示:

--spring.cloud.stream.bindings.output.producer.partitionSelectorName=customPartitionSelector
. . .
@Bean
public CustomPartitionSelectorClass customPartitionSelector() {
    return new CustomPartitionSelectorClass();
}
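
Similarly, a minimal sketch of the CustomPartitionSelectorClass bean shown above, assuming the PartitionSelectorStrategy interface defines selectPartition(Object key, int partitionCount); it simply makes the default modulo formula explicit:

類似地,下面是上文所示CustomPartitionSelectorClass bean的最小草圖,假設PartitionSelectorStrategy接口定義了selectPartition(Object key, int partitionCount)方法;它只是把預設的取模公式寫得更明確:

import org.springframework.cloud.stream.binder.PartitionSelectorStrategy;

public class CustomPartitionSelectorClass implements PartitionSelectorStrategy {

    @Override
    public int selectPartition(Object key, int partitionCount) {
        // Mirror the default key.hashCode() % partitionCount calculation,
        // guarding against negative hash codes.
        return Math.floorMod(key.hashCode(), partitionCount);
    }
}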

In previous versions of Spring Cloud Stream you could specify the implementation of org.springframework.cloud.stream.binder.PartitionSelectorStrategy by setting the spring.cloud.stream.bindings.output.producer.partitionSelectorClass property. Since version 2.0, this property is deprecated and support for it will be removed in a future version.
在以前版本的Spring Cloud Stream中,您可以通過設定spring.cloud.stream.bindings.output.producer.partitionSelectorClass屬性來指定org.springframework.cloud.stream.binder.PartitionSelectorStrategy實作。從版本2.0開始,不推薦使用此屬性,并且将在以後的版本中删除對該屬性的支援。

10.3.2. Configuring Input Bindings for Partitioning   配置輸入綁定以進行分區

An input binding (with the channel name input) is configured to receive partitioned data by setting its partitioned property, as well as the instanceIndex and instanceCount properties on the application itself, as shown in the following example:

通過設定輸入綁定(通道名稱為input)的partitioned屬性,以及在應用程式本身上設定instanceIndex和instanceCount屬性,可将其配置為接收分區資料,如下例所示:

spring.cloud.stream.bindings.input.consumer.partitioned=true

spring.cloud.stream.instanceIndex=3

spring.cloud.stream.instanceCount=5

The instanceCount value represents the total number of application instances between which the data should be partitioned. The instanceIndex must be a unique value across the multiple instances, with a value between 0 and instanceCount - 1. The instance index helps each application instance to identify the unique partition(s) from which it receives data. It is required by binders using technology that does not support partitioning natively. For example, with RabbitMQ, there is a queue for each partition, with the queue name containing the instance index. With Kafka, if autoRebalanceEnabled is true (default), Kafka takes care of distributing partitions across instances, and these properties are not required. If autoRebalanceEnabled is set to false, the instanceCount and instanceIndex are used by the binder to determine which partition(s) the instance subscribes to (you must have at least as many partitions as there are instances). The binder allocates the partitions instead of Kafka. This might be useful if you want messages for a particular partition to always go to the same instance. When a binder configuration requires them, it is important to set both values correctly in order to ensure that all of the data is consumed and that the application instances receive mutually exclusive datasets.

instanceCount值表示應在其間分區資料的應用程式執行個體的總數。instanceIndex必須是跨多個執行個體的唯一值,值介于0和instanceCount - 1之間。執行個體索引可幫助每個應用程式執行個體識别從中接收資料的唯一分區。使用原生不支援分區的技術的綁定器需要這些屬性。例如,使用RabbitMQ,每個分區都有一個隊列,隊列名稱包含執行個體索引。使用Kafka,如果autoRebalanceEnabled是true(預設),則Kafka負責跨執行個體分發分區,并且不需要這些屬性。如果autoRebalanceEnabled設定為false,則binder使用instanceCount和instanceIndex來确定執行個體所訂閱的分區(您必須至少具有與執行個體一樣多的分區)。由綁定器(而不是Kafka)來分配分區。如果您希望特定分區的消息始終轉到同一個執行個體,這可能很有用。當綁定器配置需要它們時,重要的是正确設定兩個值以確保消費所有資料并且應用程式執行個體接收互斥資料集。

While a scenario in which using multiple instances for partitioned data processing may be complex to set up in a standalone case, Spring Cloud Dataflow can simplify the process significantly by populating both the input and output values correctly and by letting you rely on the runtime infrastructure to provide information about the instance index and instance count.

雖然在單機情況下使用多個執行個體進行分區資料處理的配置可能很複雜,但Spring Cloud Dataflow可以通過正确填充輸入和輸出值,并讓您依賴運作時基礎結構來提供執行個體索引和執行個體計數的資訊,進而顯著簡化這一過程。

11. Testing

Spring Cloud Stream provides support for testing your microservice applications without connecting to a messaging system. You can do that by using the TestSupportBinder provided by the spring-cloud-stream-test-support library, which can be added as a test dependency to the application, as shown in the following example:

Spring Cloud Stream支援在不連接配接消息系統的情況下測試您的微服務應用程式。您可以使用spring-cloud-stream-test-support庫提供的TestSupportBinder,可以将其作為測試依賴項添加到應用程式中,如以下示例所示:

<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-stream-test-support</artifactId>
    <scope>test</scope>
</dependency>

The TestSupportBinder uses the Spring Boot autoconfiguration mechanism to supersede the other binders found on the classpath. Therefore, when adding a binder as a dependency, you must make sure that the test scope is being used.
TestSupportBinder使用Spring Boot自動配置機制,取代類路徑中的其它綁定器。是以,在添加綁定器作為依賴時,必須確保使用的是test範圍。

The TestSupportBinder lets you interact with the bound channels and inspect any messages sent and received by the application.

TestSupportBinder讓你與綁定通道互動并檢查應用程式發送和接收的任何消息。

For outbound message channels, the TestSupportBinder registers a single subscriber and retains the messages emitted by the application in a MessageCollector. They can be retrieved during tests and have assertions made against them.

對于出站消息通道,TestSupportBinder注冊單個訂閱者并保留應用程式在MessageCollector中發出的消息。可以在測試期間檢索它們并對它們進行斷言。

You can also send messages to inbound message channels so that the consumer application can consume the messages. The following example shows how to test both input and output channels on a processor:

您還可以将消息發送到入站消息通道,以便消費者應用程式可以消費消息。以下示例顯示如何在處理器上測試輸入和輸出通道:

@RunWith(SpringRunner.class)
@SpringBootTest(webEnvironment = SpringBootTest.WebEnvironment.RANDOM_PORT)
public class ExampleTest {

    @Autowired
    private Processor processor;

    @Autowired
    private MessageCollector messageCollector;

    @Test
    @SuppressWarnings("unchecked")
    public void testWiring() {
        Message<String> message = new GenericMessage<>("hello");
        processor.input().send(message);
        Message<String> received = (Message<String>) messageCollector.forChannel(processor.output()).poll();
        assertThat(received.getPayload(), equalTo("hello world"));
    }

    @SpringBootApplication
    @EnableBinding(Processor.class)
    public static class MyProcessor {

        @Autowired
        private Processor channels;

        @Transformer(inputChannel = Processor.INPUT, outputChannel = Processor.OUTPUT)
        public String transform(String in) {
            return in + " world";
        }
    }
}

In the preceding example, we create an application that has an input channel and an output channel, both bound through the Processor interface. The bound interface is injected into the test so that we can have access to both channels. We send a message on the input channel, and we use the MessageCollector provided by Spring Cloud Stream’s test support to capture that the message has been sent to the output channel as a result. Once we have received the message, we can validate that the component functions correctly.

在前面的示例中,我們建立了一個具有輸入通道和輸出通道的應用程式,兩者都通過Processor接口綁定。綁定接口被注入到測試中,以便我們可以通路兩個通道。我們在輸入通道上發送消息,我們使用Spring Cloud Stream的測試支援提供的MessageCollector來捕獲消息已經被發送到輸出通道的結果。收到消息後,我們可以驗證元件是否正常運作。

11.1. Disabling the Test Binder Autoconfiguration   關閉測試綁定器自動配置

The intent behind the test binder superseding all the other binders on the classpath is to make it easy to test your applications without making changes to your production dependencies. In some cases (for example, integration tests) it is useful to use the actual production binders instead, and that requires disabling the test binder autoconfiguration. To do so, you can exclude the org.springframework.cloud.stream.test.binder.TestSupportBinderAutoConfiguration class by using one of the Spring Boot autoconfiguration exclusion mechanisms, as shown in the following example:

測試綁定器取代類路徑上所有其他綁定器的目的是使測試應用程式變得很容易,而無需更改生産依賴項。在某些情況下(例如,內建測試),使用實際的生産綁定器代替是有用的,這需要禁用測試綁定器自動配置。為此,您可以使用Spring Boot自動配置排除機制之一排除org.springframework.cloud.stream.test.binder.TestSupportBinderAutoConfiguration類,如以下示例所示:

@SpringBootApplication(exclude = TestSupportBinderAutoConfiguration.class)
@EnableBinding(Processor.class)
public static class MyProcessor {

    @Transformer(inputChannel = Processor.INPUT, outputChannel = Processor.OUTPUT)
    public String transform(String in) {
        return in + " world";
    }
}

When autoconfiguration is disabled, the test binder is available on the classpath, and its defaultCandidate property is set to false so that it does not interfere with the regular user configuration. It can be referenced under the name, test, as shown in the following example:

禁用自動配置時,測試綁定器在類路徑上可用,并且其defaultCandidate屬性設定為false,是以不會幹擾正常的使用者配置。它可以在名稱test下引用,如以下示例所示:

spring.cloud.stream.defaultBinder=test

12. Health Indicator   健康名額

Spring Cloud Stream provides a health indicator for binders. It is registered under the name binders and can be enabled or disabled by setting the management.health.binders.enabled property.

Spring Cloud Stream為綁定器提供了健康訓示器。它是在名稱binders下注冊的,可以通過設定management.health.binders.enabled屬性來啟用或禁用。

By default management.health.binders.enabled is set to false. Setting management.health.binders.enabled to true enables the health indicator, allowing you to access the /health endpoint to retrieve the binder health indicators.

預設management.health.binders.enabled設定為false。設定management.health.binders.enabled為true啟用健康訓示器,允許您通路/health端點以檢索綁定器健康訓示器。

Health indicators are binder-specific and certain binder implementations may not necessarily provide a health indicator.

健康名額是特定于綁定器的,某些綁定器實作可能不一定提供健康訓示器。

13. Metrics Emitter   名額發射器

Spring Boot Actuator provides dependency management and auto-configuration for Micrometer, an application metrics facade that supports numerous monitoring systems.

Spring Boot Actuator為Micrometer提供依賴關系管理和自動配置,Micrometer是一個支援衆多監控系統的應用程式名額外觀。

Spring Cloud Stream provides support for emitting any available micrometer-based metrics to a binding destination, allowing for periodic collection of metric data from stream applications without relying on polling individual endpoints.

Spring Cloud Stream支援将任何可用的基于Micrometer的度量标准發送到綁定目标,允許定期從流應用程式收集度量标准資料,而無需依賴輪詢各個端點。

Metrics Emitter is activated by defining the spring.cloud.stream.bindings.applicationMetrics.destination property, which specifies the name of the binding destination used by the current binder to publish metric messages.

通過定義spring.cloud.stream.bindings.applicationMetrics.destination屬性來激活度量标準發射器,該屬性指定目前綁定器用于釋出度量标準消息的綁定目标的名稱。

For example:

spring.cloud.stream.bindings.applicationMetrics.destination=myMetricDestination

The preceding example instructs the binder to bind to myMetricDestination (that is, Rabbit exchange, Kafka topic, and others).

前面的示例訓示綁定器綁定到myMetricDestination(即,Rabbit交換,Kafka主題,和其他)。

The following properties can be used for customizing the emission of metrics:

以下屬性可用于自定義名額的釋出:

spring.cloud.stream.metrics.key

The name of the metric being emitted. Should be a unique value per application.

要發出的名額名稱。每個應用程式應該是唯一值。

Default: ${spring.application.name:${vcap.application.name:${spring.config.name:application}}}

spring.cloud.stream.metrics.properties

Allows white listing application properties that are added to the metrics payload

允許添加到名額負載的白名單應用程式屬性

Default: null.

spring.cloud.stream.metrics.meter-filter

Pattern to control the 'meters' one wants to capture. For example, specifying spring.integration.* captures metric information for meters whose name starts with spring.integration.

用于控制要捕獲哪些'meters'(計量器)的模式。例如,指定spring.integration.*會捕獲名稱以spring.integration開頭的meters的度量标准資訊。

Default: all 'meters' are captured.

spring.cloud.stream.metrics.schedule-interval

Interval to control the rate of publishing metric data.

用于控制釋出度量标準資料的速率的時間間隔。

Default: 1 min

Consider the following:

考慮以下:

java -jar time-source.jar \
    --spring.cloud.stream.bindings.applicationMetrics.destination=someMetrics \
    --spring.cloud.stream.metrics.properties=spring.application** \
    --spring.cloud.stream.metrics.meter-filter=spring.integration.*

The following example shows the payload of the data published to the binding destination as a result of the preceding command:

以下示例顯示了作為上述指令的結果釋出到綁定目标的資料的負載:

{
    "name": "application",
    "createdTime": "2018-03-23T14:48:12.700Z",
    "properties": {
    },
    "metrics": [
        {
            "id": {
                "name": "spring.integration.send",
                "tags": [
                    {
                        "key": "exception",
                        "value": "none"
                    },
                    {
                        "key": "name",
                        "value": "input"
                    },
                    {
                        "key": "result",
                        "value": "success"
                    },
                    {
                        "key": "type",
                        "value": "channel"
                    }
                ],
                "type": "TIMER",
                "description": "Send processing time",
                "baseUnit": "milliseconds"
            },
            "timestamp": "2018-03-23T14:48:12.697Z",
            "sum": 130.340546,
            "count": 6,
            "mean": 21.72342433333333,
            "upper": 116.176299,
            "total": 130.340546
        }
    ]
}

Given that the format of the Metric message has slightly changed after migrating to Micrometer, the published message will also have a STREAM_CLOUD_STREAM_VERSION header set to 2.x to help distinguish between Metric messages from the older versions of the Spring Cloud Stream.
鑒于度量标準消息的格式在遷移到Micrometer後略有變化,已釋出的消息也将STREAM_CLOUD_STREAM_VERSION設定标題,2.x以幫助區分舊版Spring Cloud Stream的度量标準消息。

14. Samples

For Spring Cloud Stream samples, see the spring-cloud-stream-samples repository on GitHub.

有關Spring Cloud Stream示例,請參閱GitHub上的spring-cloud-stream-samples存儲庫。

14.1. Deploying Stream Applications on CloudFoundry   在CloudFoundry上部署流應用程式

On CloudFoundry, services are usually exposed through a special environment variable called VCAP_SERVICES.

在CloudFoundry上,服務通常通過名為VCAP_SERVICES的特殊環境變量公開。

When configuring your binder connections, you can use the values from an environment variable as explained on the dataflow Cloud Foundry Server docs.

配置綁定器連接配接時,可以使用環境變量中的值,如資料流Cloud Foundry Server文檔中所述。

Binder Implementations

15. Apache Kafka Binder

15.1. Usage

To use Apache Kafka binder, you need to add spring-cloud-stream-binder-kafka as a dependency to your Spring Cloud Stream application, as shown in the following example for Maven:

要使用Apache Kafka綁定器,您需要将spring-cloud-stream-binder-kafka作為依賴項添加到Spring Cloud Stream應用程式中,如以下Maven示例所示:

<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-stream-binder-kafka</artifactId>
</dependency>

Alternatively, you can also use the Spring Cloud Stream Kafka Starter, as shown in the following example for Maven:

或者,您也可以使用Spring Cloud Stream Kafka Starter,如下面的Maven示例所示:

<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-stream-kafka</artifactId>
</dependency>

15.2. Apache Kafka Binder Overview   概述

The following image shows a simplified diagram of how the Apache Kafka binder operates:

下圖顯示了Apache Kafka綁定器如何運作的簡化圖:

Figure 10. Kafka Binder

The Apache Kafka Binder implementation maps each destination to an Apache Kafka topic. The consumer group maps directly to the same Apache Kafka concept. Partitioning also maps directly to Apache Kafka partitions as well.

Apache Kafka Binder實作将每個目标映射到Apache Kafka主題。消費者組直接映射到相同的Apache Kafka概念。分區也直接映射到Apache Kafka分區。

The binder currently uses the Apache Kafka kafka-clients 1.0.0 jar and is designed to be used with a broker of at least that version. This client can communicate with older brokers (see the Kafka documentation), but certain features may not be available. For example, with versions earlier than 0.11.x.x, native headers are not supported. Also, 0.11.x.x does not support the autoAddPartitions property.

綁定器目前使用Apache Kafka kafka-clients 1.0.0 jar,旨在與至少該版本的代理一起使用。此用戶端可以與較舊的代理進行通信(請參閱Kafka文檔),但某些功能可能不可用。例如,對于早于0.11.xx的版本,不支援原生headers。此外,0.11.xx不支援autoAddPartitions屬性。

15.3. Configuration Options

This section contains the configuration options used by the Apache Kafka binder.

For common configuration options and properties pertaining to binder, see the core documentation.

本節包含Apache Kafka綁定器使用的配置選項。

有關綁定器的常見配置選項和屬性,請參閱核心文檔。

Kafka Binder Properties

spring.cloud.stream.kafka.binder.brokers

A list of brokers to which the Kafka binder connects.

Kafka綁定器連接配接的brokers清單。

Default: localhost.

spring.cloud.stream.kafka.binder.defaultBrokerPort

brokers allows hosts specified with or without port information (for example, host1,host2:port2). This sets the default port when no port is configured in the broker list.

brokers允許使用具有或不具有端口資訊的主機(例如,host1,host2:port2)。這在代理清單中未配置端口時設定預設端口。

Default: 9092.
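
For example (the host names below are placeholders, not values from this guide), kafka2 would be contacted on port 9093 while kafka1 falls back to the configured default port:

spring.cloud.stream.kafka.binder.brokers=kafka1.example.com,kafka2.example.com:9093
spring.cloud.stream.kafka.binder.defaultBrokerPort=9092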

spring.cloud.stream.kafka.binder.configuration

Key/Value map of client properties (both producers and consumer) passed to all clients created by the binder. Due to the fact that these properties are used by both producers and consumers, usage should be restricted to common properties — for example, security settings.

用戶端屬性(生産者和消費者)的鍵/值映射傳遞給由綁定器建立的所有用戶端。由于生産者和消費者都使用這些屬性,是以應将使用限制為通用屬性 - 例如,安全設定。

Default: Empty map.

spring.cloud.stream.kafka.binder.headers

The list of custom headers that are transported by the binder. Only required when communicating with older applications (<= 1.3.x) with a kafka-clients version < 0.11.0.0. Newer versions support headers natively.

由綁定器傳輸的自定義headers清單。僅在與舊版應用程式(⇐ 1.3.x)通信且kafka kafka-clients <0.11.0.0時才需要。較新版本本身支援headers。

Default: empty.

spring.cloud.stream.kafka.binder.healthTimeout

The time to wait to get partition information, in seconds. Health reports as down if this timer expires.

等待擷取分區資訊的時間,以秒為機關。如果此計時器到期,健康狀況将報告為關閉。

Default: 10.

spring.cloud.stream.kafka.binder.requiredAcks

The number of required acks on the broker. See the Kafka documentation for the producer acks property.

broker所需的确認數量。有關生産者acks屬性,請參閱Kafka文檔。

Default: 1.

spring.cloud.stream.kafka.binder.minPartitionCount

Effective only if autoCreateTopics or autoAddPartitions is set. The global minimum number of partitions that the binder configures on topics on which it produces or consumes data. It can be superseded by the partitionCount setting of the producer or by the value of instanceCount * concurrency settings of the producer (if either is larger).

僅在設定autoCreateTopics或autoAddPartitions時生效。綁定器在其生成或消費資料的主題上配置的全局最小分區數。它可以被生産者的partitionCount設定或生産者的instanceCount * concurrency設定的值取代(如果其中任何一個更大)。

Default: 1.

spring.cloud.stream.kafka.binder.replicationFactor

The replication factor of auto-created topics if autoCreateTopics is active. Can be overridden on each binding.

autoCreateTopics處于活動狀态時,自動建立的主題的複制因子。可以在每個綁定上重寫。

Default: 1.

spring.cloud.stream.kafka.binder.autoCreateTopics

If set to true, the binder creates new topics automatically. If set to false, the binder relies on the topics being already configured. In the latter case, if the topics do not exist, the binder fails to start.

如果設定為true,則綁定器會自動建立新主題。如果設定為false,則綁定器依賴于已配置的主題。在後一種情況下,如果主題不存在,則綁定器無法啟動。

This setting is independent of the auto.create.topics.enable setting of the broker and does not influence it. If the server is set to auto-create topics, they may be created as part of the metadata retrieval request, with default broker settings.
此設定與代理的auto.create.topics.enable設定無關,并且不會影響它。如果伺服器設定為自動建立主題,則可以使用預設代理設定將它們建立為中繼資料檢索請求的一部分。

Default: true.

spring.cloud.stream.kafka.binder.autoAddPartitions

If set to true, the binder creates new partitions if required. If set to false, the binder relies on the partition size of the topic being already configured. If the partition count of the target topic is smaller than the expected value, the binder fails to start.

如果設定為true,則綁定器會根據需要建立新分區。如果設定為false,則綁定器依賴于已配置主題的分區大小。如果目标主題的分區計數小于預期值,則綁定器無法啟動。

Default: false.

spring.cloud.stream.kafka.binder.transaction.transactionIdPrefix

Enables transactions in the binder. See transaction.id in the Kafka documentation and Transactions in the spring-kafka documentation. When transactions are enabled, individual producer properties are ignored and all producers use the spring.cloud.stream.kafka.binder.transaction.producer.* properties.

啟用綁定器中的事務。見Kafka文檔中的transaction.id和spring-kafka文檔中的事務。啟用事務時,将忽略單獨的producer屬性,并且所有生産者都使用spring.cloud.stream.kafka.binder.transaction.producer.*屬性。

Default null (no transactions)

spring.cloud.stream.kafka.binder.transaction.producer.*

Global producer properties for producers in a transactional binder. See spring.cloud.stream.kafka.binder.transaction.transactionIdPrefix and Kafka Producer Properties and the general producer properties supported by all binders.

事務綁定器中生産者的全局生産者屬性。請參閱spring.cloud.stream.kafka.binder.transaction.transactionIdPrefix和Kafka Producer屬性以及所有綁定器支援的正常生産者屬性。

Default: See individual producer properties.
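
As a minimal sketch of enabling transactions (the prefix and the producer overrides below are illustrative assumptions, not values taken from this guide; Kafka's transactional producers generally require acks=all and a positive retries value):

spring.cloud.stream.kafka.binder.transaction.transactionIdPrefix=tx-
spring.cloud.stream.kafka.binder.transaction.producer.configuration.acks=all
spring.cloud.stream.kafka.binder.transaction.producer.configuration.retries=3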

spring.cloud.stream.kafka.binder.headerMapperBeanName

The bean name of a KafkaHeaderMapper used for mapping spring-messaging headers to and from Kafka headers. Use this, for example, if you wish to customize the trusted packages in a DefaultKafkaHeaderMapper that uses JSON deserialization for the headers.

用于映射spring-messaging headers與Kafka headers之間的KafkaHeaderMapper的bean名稱。例如,如果您希望在使用用于headers的JSON反序列化的DefaultKafkaHeaderMapper中自定義受信任的包,請使用此選項。

Default: none.
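
A minimal sketch of such a bean (the bean name and the trusted package are assumptions used for illustration; the method would live in a @Configuration or @SpringBootApplication class):

@Bean("customHeaderMapper")
public KafkaHeaderMapper customHeaderMapper() {
    DefaultKafkaHeaderMapper mapper = new DefaultKafkaHeaderMapper();
    // trust the application's own package so JSON-typed headers can be restored
    mapper.addTrustedPackages("com.acme.events");
    return mapper;
}

The bean would then be referenced with spring.cloud.stream.kafka.binder.headerMapperBeanName=customHeaderMapper.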

Kafka Consumer Properties   Kafka消費者屬性

The following properties are available for Kafka consumers only and must be prefixed with spring.cloud.stream.kafka.bindings.<channelName>.consumer..

以下屬性僅适用于Kafka消費者,必須帶有字首spring.cloud.stream.kafka.bindings.<channelName>.consumer.。

admin.configuration

A Map of Kafka topic properties used when provisioning topics — for example, spring.cloud.stream.kafka.bindings.input.consumer.admin.configuration.message.format.version=0.9.0.0

Kafka主題屬性的Map,配置主題時使用-例如,spring.cloud.stream.kafka.bindings.input.consumer.admin.configuration.message.format.version=0.9.0.0

Default: none.

admin.replicas-assignment

A Map<Integer, List<Integer>> of replica assignments, with the key being the partition and the value being the assignments. Used when provisioning new topics. See the NewTopic Javadocs in the kafka-clients jar.

副本配置設定的Map <Integer,List <Integer >>,其中鍵是分區,值是指派。在配置新主題時使用。檢視kafka-clients jar中的NewTopic Javadocs。

預設值:無。

Default: none.

admin.replication-factor

The replication factor to use when provisioning topics. Overrides the binder-wide setting. Ignored if replicas-assignments is present.

配置主題時使用的複制因子。覆寫綁定器範圍的設定。如果replicas-assignments存在則忽略。

Default: none (the binder-wide default of 1 is used).

預設值:none(使用綁定器範圍的預設值1)。

autoRebalanceEnabled

When true, topic partitions are automatically rebalanced between the members of a consumer group. When false, each consumer is assigned a fixed set of partitions based on spring.cloud.stream.instanceCount and spring.cloud.stream.instanceIndex. This requires both the spring.cloud.stream.instanceCount and spring.cloud.stream.instanceIndex properties to be set appropriately on each launched instance. The value of the spring.cloud.stream.instanceCount property must typically be greater than 1 in this case.

true時,主題分區會在消費者組的成員之間自動重新平衡。false時,為每個消費者配置設定一組基于spring.cloud.stream.instanceCount和spring.cloud.stream.instanceIndex的固定分區。這需要在每個已啟動的執行個體上正确設定spring.cloud.stream.instanceCount和spring.cloud.stream.instanceIndex屬性。在這種情況下,spring.cloud.stream.instanceCount屬性的值通常必須大于1。

Default: true.
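
For example, a sketch of the static assignment for the second of three instances of an application (the values are illustrative):

spring.cloud.stream.kafka.bindings.input.consumer.autoRebalanceEnabled=false
spring.cloud.stream.instanceCount=3
spring.cloud.stream.instanceIndex=1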

ackEachRecord

When autoCommitOffset is true, this setting dictates whether to commit the offset after each record is processed. By default, offsets are committed after all records in the batch of records returned by consumer.poll() have been processed. The number of records returned by a poll can be controlled with the max.poll.records Kafka property, which is set through the consumer configuration property. Setting this to true may cause a degradation in performance, but doing so reduces the likelihood of redelivered records when a failure occurs. Also, see the binder requiredAcks property, which also affects the performance of committing offsets.

當autoCommitOffset是true時,此設定訓示每個記錄處理之後是否送出偏移量。預設情況下,在處理完consumer.poll()傳回的記錄批中的所有記錄後,将送出偏移量。可以使用max.poll.recordsKafka屬性控制輪詢傳回的記錄數,該屬性通過消費者configuration屬性設定。将此設定為true可能會導緻性能下降,但這樣做會降低發生故障時重新傳送記錄的可能性。另外,請參閱binder requiredAcks屬性,該屬性也會影響送出偏移量的性能。

Default: false.

autoCommitOffset

Whether to autocommit offsets when a message has been processed. If set to false, a header with the key kafka_acknowledgment of the type org.springframework.kafka.support.Acknowledgment header is present in the inbound message. Applications may use this header for acknowledging messages. See the examples section for details. When this property is set to false, Kafka binder sets the ack mode to org.springframework.kafka.listener.AbstractMessageListenerContainer.AckMode.MANUAL and the application is responsible for acknowledging records. Also see ackEachRecord.

是否在處理消息時自動送出偏移量。如果設定為false,則入站消息中将出現帶有org.springframework.kafka.support.Acknowledgment類型的kafka_acknowledgment key的header。應用程式可以使用此header來确認消息。有關詳細資訊,請參閱示例部分。當此屬性設定為false時,Kafka binder将ack模式設定為org.springframework.kafka.listener.AbstractMessageListenerContainer.AckMode.MANUAL,應用程式負責确認記錄。另見ackEachRecord。

Default: true.

autoCommitOnError

Effective only if autoCommitOffset is set to true. If set to false, it suppresses auto-commits for messages that result in errors and commits only for successful messages. It allows a stream to automatically replay from the last successfully processed message, in case of persistent failures. If set to true, it always auto-commits (if auto-commit is enabled). If not set (the default), it effectively has the same value as enableDlq, auto-committing erroneous messages if they are sent to a DLQ and not committing them otherwise.

僅在autoCommitOffset設定為true時有效。如果設定為false,則禁止對導緻錯誤的消息進行自動送出,僅自動送出成功的消息。它允許流在上次成功處理的消息中自動重放,以防出現持續故障。如果設定為true,則始終自動送出(如果啟用了自動送出)。如果沒有設定(預設值),它實際上具有與enableDlq相同的值,如果它們被發送到DLQ則自動送出錯誤消息,否則不送出它們。

Default: not set.

resetOffsets

Whether to reset offsets on the consumer to the value provided by startOffset.

是否将消費者的偏移重置為startOffset提供的值。

Default: false.

startOffset

The starting offset for new groups. Allowed values: earliest and latest. If the consumer group is set explicitly for the consumer 'binding' (through spring.cloud.stream.bindings.<channelName>.group), 'startOffset' is set to earliest. Otherwise, it is set to latest for the anonymous consumer group. Also see resetOffsets (earlier in this list).

新組的起始偏移量。允許的值:earliest和latest。如果為消費者“綁定”(通過spring.cloud.stream.bindings.<channelName>.group)明确設定了消費者組,則将“startOffset”設定為earliest。否則,它将為匿名使用者組設定為latest。另見resetOffsets(在此清單的前面)。

Default: null (equivalent to earliest).

預設值:null(相當于earliest)。

enableDlq

When set to true, it enables DLQ behavior for the consumer. By default, messages that result in errors are forwarded to a topic named error.<destination>.<group>. The DLQ topic name can be configured by setting the dlqName property. This provides an alternative option to the more common Kafka replay scenario for the case when the number of errors is relatively small and replaying the entire original topic may be too cumbersome. See Dead-Letter Topic Processing for more information. Starting with version 2.0, messages sent to the DLQ topic are enhanced with the following headers: x-original-topic, x-exception-message, and x-exception-stacktrace as byte[].

設定為true時,它會為消費者啟用DLQ行為。預設情況下,導緻錯誤的消息将轉發到名為error.<destination>.<group>的主題。可以通過設定dlqName屬性來配置DLQ主題名稱。對于錯誤數量相對較小并且重放整個原始主題的情況可能過于繁瑣的情況,這為更常見的Kafka重放場景提供了備選選項。有關詳細資訊,請參閱死信主題處理處理。從2.0版開始,發送到DLQ主題的消息已使用以下标題得到增強:x-original-topic,x-exception-message,和x-exception-stacktrace作為byte[]。

Default: false.
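
A minimal sketch of enabling a DLQ for an input binding with an explicit DLQ topic (the topic name is an assumption for illustration; the dlqName property is described later in this list):

spring.cloud.stream.kafka.bindings.input.consumer.enableDlq=true
spring.cloud.stream.kafka.bindings.input.consumer.dlqName=my-input-dlq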

configuration

Map with a key/value pair containing generic Kafka consumer properties.

包含通用Kafka消費者屬性的鍵/值對映射。

Default: Empty map.

dlqName

The name of the DLQ topic to receive the error messages.

用于接收錯誤消息的DLQ主題的名稱。

Default: null (If not specified, messages that result in errors are forwarded to a topic named error.<destination>.<group>).

預設值:null(如果未指定,則導緻錯誤的消息将轉發到名為error.<destination>.<group>的主題)。

dlqProducerProperties

Using this, DLQ-specific producer properties can be set. All the properties available through kafka producer properties can be set through this property.

使用它,可以設定DLQ特定的生産者屬性。可以通過此屬性設定通過kafka生産者屬性提供的所有屬性。

Default: Default Kafka producer properties.

預設值:預設Kafka生産者屬性。

standardHeaders

Indicates which standard headers are populated by the inbound channel adapter. Allowed values: none, id, timestamp, or both. Useful if using native deserialization and the first component to receive a message needs an id (such as an aggregator that is configured to use a JDBC message store).

訓示入站通道擴充卡填充的标準headers。允許值:none,id,timestamp,或both。如果使用本地反序列化并且第一個接收消息的元件需要id(例如配置為使用JDBC消息存儲的聚合器),則非常有用。

Default: none

converterBeanName

The name of a bean that implements RecordMessageConverter. Used in the inbound channel adapter to replace the default MessagingMessageConverter.

實作RecordMessageConverter的bean 名稱。在入站通道擴充卡中用于替換預設的MessagingMessageConverter。

Default: null
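
A minimal sketch of such a bean (the bean name is an assumption; StringJsonMessageConverter is one RecordMessageConverter implementation shipped with spring-kafka):

@Bean("myConverter")
public RecordMessageConverter myConverter() {
    // replaces the default MessagingMessageConverter in the inbound channel adapter
    return new StringJsonMessageConverter();
}

It would then be referenced with spring.cloud.stream.kafka.bindings.input.consumer.converterBeanName=myConverter.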

idleEventInterval

The interval, in milliseconds, between events indicating that no messages have recently been received. Use an ApplicationListener<ListenerContainerIdleEvent> to receive these events. See Example: Pausing and Resuming the Consumer for a usage example.

訓示最近未收到消息的事件之間的間隔(以毫秒為機關)。使用ApplicationListener<ListenerContainerIdleEvent>來接收這些事件。有關用法示例,請參閱示例:暫停和恢複使用者。

Default: 30000

Kafka Producer Properties   Kafka生産者屬性

The following properties are available for Kafka producers only and must be prefixed with spring.cloud.stream.kafka.bindings.<channelName>.producer..

以下屬性僅适用于Kafka生産者,必須以spring.cloud.stream.kafka.bindings.<channelName>.producer.為字首。

admin.configuration

A Map of Kafka topic properties used when provisioning new topics — for example, spring.cloud.stream.kafka.bindings.output.producer.admin.configuration.message.format.version=0.9.0.0

Kafka主題屬性的Map,配置新主題時使用-例如,spring.cloud.stream.kafka.bindings.output.producer.admin.configuration.message.format.version=0.9.0.0

Default: none.

admin.replicas-assignment

A Map<Integer, List<Integer>> of replica assignments, with the key being the partition and the value being the assignments. Used when provisioning new topics. See NewTopic javadocs in the kafka-clients jar.

副本配置設定的Map <Integer,List <Integer >>,其中鍵是分區,值是指派。在配置新主題時使用。請參閱kafka-clients jar中的NewTopic javadocs。

Default: none.

admin.replication-factor

The replication factor to use when provisioning new topics. Overrides the binder-wide setting. Ignored if replicas-assignments is present.

配置新主題時使用的複制因子。覆寫綁定器範圍的設定。如果replicas-assignments存在則忽略。

Default: none (the binder-wide default of 1 is used).

預設值:none(使用綁定器範圍的預設值1)。

bufferSize

Upper limit, in bytes, of how much data the Kafka producer attempts to batch before sending.

Kafka生産者在發送之前嘗試批量處理的資料的上限(以位元組為機關)。

Default: 16384.

sync

Whether the producer is synchronous.

生産者是否是同步的。

Default: false.

batchTimeout

How long the producer waits to allow more messages to accumulate in the same batch before sending the messages. (Normally, the producer does not wait at all and simply sends all the messages that accumulated while the previous send was in progress.) A non-zero value may increase throughput at the expense of latency.

生産者在發送消息之前等待允許更多消息在同一批次中累積的時間。(通常,生産者根本不會等待,隻是發送在上一次發送過程中累積的所有消息。)非零值可能會以延遲為代價來增加吞吐量。

Default: 0.

messageKeyExpression

A SpEL expression evaluated against the outgoing message used to populate the key of the produced Kafka message — for example, headers['myKey']. The payload cannot be used because, by the time this expression is evaluated, the payload is already in the form of a byte[].

針對用于填充生成的Kafka消息的key的傳出消息評估的SpEL表達式 - 例如,headers['myKey']。無法使用負載,因為在評估此表達式時,負載已經是byte[]的形式。

Default: none.
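
For example, to key the produced records by a header that the application sets on the outgoing message (the header name is illustrative):

spring.cloud.stream.kafka.bindings.output.producer.messageKeyExpression=headers['myKey']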

headerPatterns

A comma-delimited list of simple patterns to match Spring messaging headers to be mapped to the Kafka Headers in the ProducerRecord. Patterns can begin or end with the wildcard character (asterisk). Patterns can be negated by prefixing with !. Matching stops after the first match (positive or negative). For example !ask,as* will pass ash but not ask. id and timestamp are never mapped.

逗号分隔的簡單模式清單,用于比對被映射到ProducerRecord中的Kafka Headers的Spring消息頭。模式可以以通配符(星号)開頭或結尾。可以通過添加字首來否定模式!。首次比對後即停止(正面或負面)。例如,!ask,as*将傳遞ash但不傳遞ask。 id和timestamp永遠不會映射。

Default: * (all headers - except the id and timestamp)

預設值: * (所有标題 - 除了id和timestamp)

configuration

Map with a key/value pair containing generic Kafka producer properties.

包含通用Kafka生産者屬性的鍵/值對映射。

Default: Empty map.

The Kafka binder uses the partitionCount setting of the producer as a hint to create a topic with the given partition count (in conjunction with the minPartitionCount, the maximum of the two being the value being used). Exercise caution when configuring both minPartitionCount for a binder and partitionCount for an application, as the larger value is used. If a topic already exists with a smaller partition count and autoAddPartitions is disabled (the default), the binder fails to start. If a topic already exists with a smaller partition count and autoAddPartitions is enabled, new partitions are added. If a topic already exists with a larger number of partitions than the maximum of (minPartitionCount or partitionCount), the existing partition count is used.
Kafka綁定器使用生産者的partitionCount設定作為提示來建立具有給定分區計數的主題(結合使用minPartitionCount,兩者的最大值是正在使用的值)。在為綁定器配置minPartitionCount和為應用程式配置partitionCount時要小心,因為使用的值越大。如果主題已存在且分區計數較小且autoAddPartitions已禁用(預設值),則綁定器無法啟動。如果已存在具有較小分區計數且autoAddPartitions已啟用的主題,則會添加新分區。如果主題已存在且分區數大于(minPartitionCount或partitionCount)的最大分區數,則使用現有分區計數。

Usage examples

In this section, we show the use of the preceding properties for specific scenarios.

在本節中,我們将展示對特定方案使用前面的屬性。

Example: Setting autoCommitOffset to false and Relying on Manual Acking

This example illustrates how one may manually acknowledge offsets in a consumer application.

此示例說明了如何在消費者應用程式中手動确認偏移量。

This example requires that spring.cloud.stream.kafka.bindings.input.consumer.autoCommitOffset be set to false. Use the corresponding input channel name for your example.

此示例需要将spring.cloud.stream.kafka.bindings.input.consumer.autoCommitOffset設定為false。使用相應的輸入通道名稱作為示例。

@SpringBootApplication

@EnableBinding(Sink.class)

public class ManuallyAcknowdledgingConsumer {

 public static void main(String[] args) {

     SpringApplication.run(ManuallyAcknowdledgingConsumer.class, args);

 }

 @StreamListener(Sink.INPUT)

 public void process(Message<?> message) {

     Acknowledgment acknowledgment = message.getHeaders().get(KafkaHeaders.ACKNOWLEDGMENT, Acknowledgment.class);

     if (acknowledgment != null) {

         System.out.println("Acknowledgment provided");

         acknowledgment.acknowledge();

     }

 }

}

Example: Security Configuration

Apache Kafka 0.9 supports secure connections between client and brokers. To take advantage of this feature, follow the guidelines in the Apache Kafka Documentation as well as the Kafka 0.9 security guidelines from the Confluent documentation. Use the spring.cloud.stream.kafka.binder.configuration option to set security properties for all clients created by the binder.

Apache Kafka 0.9支援用戶端和代理之間的安全連接配接。要利用此功能,請遵循Apache Kafka文檔中的準則以及Confluent文檔中的Kafka 0.9 安全準則。使用spring.cloud.stream.kafka.binder.configuration選項為綁定器建立的所有用戶端設定安全性屬性。

For example, to set security.protocol to SASL_SSL, set the following property:

例如,要設定security.protocol為SASL_SSL,請設定以下屬性:

spring.cloud.stream.kafka.binder.configuration.security.protocol=SASL_SSL

All the other security properties can be set in a similar manner.

可以以類似的方式設定所有其他安全屬性。

When using Kerberos, follow the instructions in the reference documentation for creating and referencing the JAAS configuration.

使用Kerberos時,請按照參考文檔中的說明建立和引用JAAS配置。

Spring Cloud Stream supports passing JAAS configuration information to the application by using a JAAS configuration file and using Spring Boot properties.

Spring Cloud Stream支援使用JAAS配置檔案和Spring Boot屬性将JAAS配置資訊傳遞給應用程式。

Using JAAS Configuration Files   使用JAAS配置檔案

The JAAS and (optionally) krb5 file locations can be set for Spring Cloud Stream applications by using system properties. The following example shows how to launch a Spring Cloud Stream application with SASL and Kerberos by using a JAAS configuration file:

可以使用系統屬性為Spring Cloud Stream應用程式設定JAAS和(可選)krb5檔案位置。以下示例顯示如何使用JAAS配置檔案啟動使用SASL和Kerberos的Spring Cloud Stream應用程式:

 java -Djava.security.auth.login.config=/path.to/kafka_client_jaas.conf -jar log.jar \

   --spring.cloud.stream.kafka.binder.brokers=secure.server:9092 \

   --spring.cloud.stream.bindings.input.destination=stream.ticktock \

   --spring.cloud.stream.kafka.binder.configuration.security.protocol=SASL_PLAINTEXT

Using Spring Boot Properties   使用Spring Boot屬性

As an alternative to having a JAAS configuration file, Spring Cloud Stream provides a mechanism for setting up the JAAS configuration for Spring Cloud Stream applications by using Spring Boot properties.

作為擁有JAAS配置檔案的替代方法,Spring Cloud Stream提供了一種使用Spring Boot屬性為Spring Cloud Stream應用程式設定JAAS配置的機制。

The following properties can be used to configure the login context of the Kafka client:

以下屬性可用于配置Kafka用戶端的登入上下文:

spring.cloud.stream.kafka.binder.jaas.loginModule

The login module name. Not necessary to be set in normal cases.

登入子產品名稱。沒有必要在正常情況下設定。

Default: com.sun.security.auth.module.Krb5LoginModule.

spring.cloud.stream.kafka.binder.jaas.controlFlag

The control flag of the login module.

登入子產品的控制标志。

Default: required.

spring.cloud.stream.kafka.binder.jaas.options

Map with a key/value pair containing the login module options.

包含登入子產品選項的鍵/值對映射。

Default: Empty map.

The following example shows how to launch a Spring Cloud Stream application with SASL and Kerberos by using Spring Boot configuration properties:

以下示例說明如何使用Spring Boot配置屬性啟動帶有SASL和Kerberos的Spring Cloud Stream應用程式:

 java --spring.cloud.stream.kafka.binder.brokers=secure.server:9092 \

   --spring.cloud.stream.bindings.input.destination=stream.ticktock \

   --spring.cloud.stream.kafka.binder.autoCreateTopics=false \

   --spring.cloud.stream.kafka.binder.configuration.security.protocol=SASL_PLAINTEXT \

   --spring.cloud.stream.kafka.binder.jaas.options.useKeyTab=true \

   --spring.cloud.stream.kafka.binder.jaas.options.storeKey=true \

   --spring.cloud.stream.kafka.binder.jaas.options.keyTab=/etc/security/keytabs/kafka_client.keytab \

   --spring.cloud.stream.kafka.binder.jaas.options.principal=kafka-client-1@EXAMPLE.COM

The preceding example represents the equivalent of the following JAAS file:

上面的示例與以下JAAS檔案等效:

KafkaClient {

    com.sun.security.auth.module.Krb5LoginModule required

    useKeyTab=true

    storeKey=true

    keyTab="/etc/security/keytabs/kafka_client.keytab"

    principal="[email protected]";

};

If the topics required already exist on the broker or will be created by an administrator, autocreation can be turned off and only client JAAS properties need to be sent.

如果所需主題已存在于代理上或将由管理者建立,則可以關閉自動建立,并且隻需要發送用戶端JAAS屬性。

Do not mix JAAS configuration files and Spring Boot properties in the same application. If the -Djava.security.auth.login.config system property is already present, Spring Cloud Stream ignores the Spring Boot properties.
不要在同一個應用程式中混合使用JAAS配置檔案和Spring Boot屬性。如果-Djava.security.auth.login.config系統屬性已存在,則Spring Cloud Stream會忽略Spring Boot屬性。
Be careful when using the autoCreateTopics and autoAddPartitions with Kerberos. Usually, applications may use principals that do not have administrative rights in Kafka and Zookeeper. Consequently, relying on Spring Cloud Stream to create/modify topics may fail. In secure environments, we strongly recommend creating topics and managing ACLs administratively by using Kafka tooling.
使用Kerberos時使用autoCreateTopics和autoAddPartitions要小心。通常,應用程式可能使用在Kafka和Zookeeper中沒有管理權限的主體。是以,依賴Spring Cloud Stream來建立/修改主題可能會失敗。在安全環境中,我們強烈建議您使用Kafka工具建立主題和管理ACL。

Example: Pausing and Resuming the Consumer   暫停和恢複消費者

If you wish to suspend consumption but not cause a partition rebalance, you can pause and resume the consumer. This is facilitated by adding the Consumer as a parameter to your @StreamListener. To resume, you need an ApplicationListener for ListenerContainerIdleEvent instances. The frequency at which events are published is controlled by the idleEventInterval property. Since the consumer is not thread-safe, you must call these methods on the calling thread.

如果您希望暫停消費但不會導緻分區重新平衡,則可以暫停和恢複消費者。這可以通過将Consumer作為參數添加到您的@StreamListener來達成。要恢複,您需要一個ListenerContainerIdleEvent執行個體的ApplicationListener。釋出事件的頻率由idleEventInterval屬性控制。由于消費者不是線程安全的,是以必須在調用線程上調用這些方法。

The following simple application shows how to pause and resume:

以下簡單的應用程式顯示了如何暫停和恢複:

@SpringBootApplication

@EnableBinding(Sink.class)

public class Application {

public static void main(String[] args) {

SpringApplication.run(Application.class, args);

}

@StreamListener(Sink.INPUT)

public void in(String in, @Header(KafkaHeaders.CONSUMER) Consumer<?, ?> consumer) {

System.out.println(in);

consumer.pause(Collections.singleton(new TopicPartition("myTopic", 0)));

}

@Bean

public ApplicationListener<ListenerContainerIdleEvent> idleListener() {

return event -> {

System.out.println(event);

if (event.getConsumer().paused().size() > 0) {

event.getConsumer().resume(event.getConsumer().paused());

}

};

}

}

15.4. Error Channels   錯誤管道

Starting with version 1.3, the binder unconditionally sends exceptions to an error channel for each consumer destination and can also be configured to send async producer send failures to an error channel. See Error Handling for more information.

The payload of the ErrorMessage for a send failure is a KafkaSendFailureException with properties:

  • failedMessage: The Spring Messaging Message<?> that failed to be sent.
  • record: The raw ProducerRecord that was created from the failedMessage

There is no automatic handling of producer exceptions (such as sending to a Dead-Letter queue). You can consume these exceptions with your own Spring Integration flow.

從版本1.3開始,綁定器無條件地為每個消費者目标向錯誤通道發送異常,并且還可以配置為将異步生産者發送失敗發送到錯誤通道。有關更多資訊,請參閱錯誤處理

發送失敗的錯誤消息ErrorMessage的負載是一個KafkaSendFailureException,具有以下屬性:

  • failedMessage:發送失敗的Spring Messaging Message<?>。
  • record:從失敗消息failedMessage中建立的原始生産者記錄ProducerRecord

生産者異常沒有自動處理(例如發送到死信隊列)。您可以使用自己的Spring Integration流程來消費這些異常。
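
As a sketch of such a flow (assuming a binding whose producer has errorChannelEnabled set, so that send failures are published to the global errorChannel, and assuming accessors that mirror the properties listed above):

@ServiceActivator(inputChannel = "errorChannel")
public void handleSendFailure(ErrorMessage errorMessage) {
    if (errorMessage.getPayload() instanceof KafkaSendFailureException) {
        KafkaSendFailureException failure = (KafkaSendFailureException) errorMessage.getPayload();
        // the Spring Messaging message that could not be sent
        Message<?> failed = failure.getFailedMessage();
        // the raw ProducerRecord created from it
        ProducerRecord<?, ?> record = failure.getRecord();
        // for illustration only: log it; a real flow might re-publish or park the record
        System.err.println("Send failed: " + failed + ", record: " + record);
    }
}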

15.5. Kafka Metrics   Kafka名額

Kafka binder module exposes the following metrics:

spring.cloud.stream.binder.kafka.someGroup.someTopic.lag: This metric indicates how many messages have not yet been consumed from a given binder’s topic by a given consumer group. For example, if the value of the metric spring.cloud.stream.binder.kafka.myGroup.myTopic.lag is 1000, the consumer group named myGroup has 1000 messages waiting to be consumed from the topic called myTopic. This metric is particularly useful for providing auto-scaling feedback to a PaaS platform.

Kafka綁定器子產品公開以下名額:

spring.cloud.stream.binder.kafka.someGroup.someTopic.lag:此度量标準訓示給定的消費者組從給定的綁定器主題尚未消費的消息數。例如,如果度量标準的spring.cloud.stream.binder.kafka.myGroup.myTopic.lag值為1000,則名為myGroup的消費者組具有1000個等待從myTopic主題消費的消息。此名額對于向PaaS平台提供自動縮放回報特别有用。

15.6. Dead-Letter Topic Processing   死信Topic處理

Because you cannot anticipate how users would want to dispose of dead-lettered messages, the framework does not provide any standard mechanism to handle them. If the reason for the dead-lettering is transient, you may wish to route the messages back to the original topic. However, if the problem is a permanent issue, that could cause an infinite loop. The sample Spring Boot application within this topic is an example of how to route those messages back to the original topic, but it moves them to a “parking lot” topic after three attempts. The application is another spring-cloud-stream application that reads from the dead-letter topic. It terminates when no messages are received for 5 seconds.

因為您無法預測使用者将如何處理死信消息,是以架構不提供任何标準機制來處理它們。如果死信的原因是暫時的,您可能希望将消息路由回原始主題。但是,如果問題是一個永久性問題,那麼可能會導緻無限循環。本主題中的示例Spring Boot應用程式是如何将這些消息路由回原始主題的示例,但是在三次嘗試之後它将它們移動到“停車場”主題。該應用程式是另一個Spring-cloud-stream應用程式,它從死信主題中讀取。它在5秒内沒有收到任何消息時終止。

The examples assume the original destination is so8400out and the consumer group is so8400.

這些示例假設原始目标是so8400out,而消費者組是so8400。

There are a couple of strategies to consider:

  • Consider running the rerouting only when the main application is not running. Otherwise, the retries for transient errors are used up very quickly.
  • Alternatively, use a two-stage approach: Use this application to route to a third topic and another to route from there back to the main topic.

有幾種政策需要考慮:

  • 考慮僅在主應用程式未運作時運作重新路由。否則,瞬态錯誤的重試會很快耗盡。
  • 或者,使用兩階段方法:使用此應用程式路由到第三個主題,使用另一個主題從那裡路由回主要主題。

The following code listings show the sample application:

以下代碼清單顯示了示例應用程式:

application.properties

spring.cloud.stream.bindings.input.group=so8400replay

spring.cloud.stream.bindings.input.destination=error.so8400out.so8400

spring.cloud.stream.bindings.output.destination=so8400out

spring.cloud.stream.bindings.output.producer.partitioned=true

spring.cloud.stream.bindings.parkingLot.destination=so8400in.parkingLot

spring.cloud.stream.bindings.parkingLot.producer.partitioned=true

spring.cloud.stream.kafka.binder.configuration.auto.offset.reset=earliest

spring.cloud.stream.kafka.binder.headers=x-retries

Application

@SpringBootApplication

@EnableBinding(TwoOutputProcessor.class)

public class ReRouteDlqKApplication implements CommandLineRunner {

    private static final String X_RETRIES_HEADER = "x-retries";

    public static void main(String[] args) {

        SpringApplication.run(ReRouteDlqKApplication.class, args).close();

    }

    private final AtomicInteger processed = new AtomicInteger();

    @Autowired

    private MessageChannel parkingLot;

    @StreamListener(Processor.INPUT)

    @SendTo(Processor.OUTPUT)

    public Message<?> reRoute(Message<?> failed) {

        processed.incrementAndGet();

        Integer retries = failed.getHeaders().get(X_RETRIES_HEADER, Integer.class);

        if (retries == null) {

            System.out.println("First retry for " + failed);

            return MessageBuilder.fromMessage(failed)

                    .setHeader(X_RETRIES_HEADER, new Integer(1))

                    .setHeader(BinderHeaders.PARTITION_OVERRIDE,

                            failed.getHeaders().get(KafkaHeaders.RECEIVED_PARTITION_ID))

                    .build();

        }

        else if (retries.intValue() < 3) {

            System.out.println("Another retry for " + failed);

            return MessageBuilder.fromMessage(failed)

                    .setHeader(X_RETRIES_HEADER, new Integer(retries.intValue() + 1))

                    .setHeader(BinderHeaders.PARTITION_OVERRIDE,

                            failed.getHeaders().get(KafkaHeaders.RECEIVED_PARTITION_ID))

                    .build();

        }

        else {

            System.out.println("Retries exhausted for " + failed);

            parkingLot.send(MessageBuilder.fromMessage(failed)

                    .setHeader(BinderHeaders.PARTITION_OVERRIDE,

                            failed.getHeaders().get(KafkaHeaders.RECEIVED_PARTITION_ID))

                    .build());

        }

        return null;

    }

    @Override

    public void run(String... args) throws Exception {

        while (true) {

            int count = this.processed.get();

            Thread.sleep(5000);

            if (count == this.processed.get()) {

                System.out.println("Idle, terminating");

                return;

            }

        }

    }

    public interface TwoOutputProcessor extends Processor {

        @Output("parkingLot")

        MessageChannel parkingLot();

    }

}

15.7. Partitioning with the Kafka Binder   使用Kafka綁定器進行分區

Apache Kafka supports topic partitioning natively.

Apache Kafka原生支援主題分區。

Sometimes it is advantageous to send data to specific partitions — for example, when you want to strictly order message processing (all messages for a particular customer should go to the same partition).

有時将資料發送到特定分區是有利的 - 例如,當您要嚴格訂購消息處理時(特定客戶的所有消息都應該轉到同一分區)。

The following example shows how to configure the producer and consumer side:

以下示例顯示如何配置生産者和消費者方:

@SpringBootApplication

@EnableBinding(Source.class)

public class KafkaPartitionProducerApplication {

    private static final Random RANDOM = new Random(System.currentTimeMillis());

    private static final String[] data = new String[] {

            "foo1", "bar1", "qux1",

            "foo2", "bar2", "qux2",

            "foo3", "bar3", "qux3",

            "foo4", "bar4", "qux4",

            };

    public static void main(String[] args) {

        new SpringApplicationBuilder(KafkaPartitionProducerApplication.class)

            .web(false)

            .run(args);

    }

    @InboundChannelAdapter(channel = Source.OUTPUT, poller = @Poller(fixedRate = "5000"))

    public Message<?> generate() {

        String value = data[RANDOM.nextInt(data.length)];

        System.out.println("Sending: " + value);

        return MessageBuilder.withPayload(value)

                .setHeader("partitionKey", value)

                .build();

    }

}

application.yml

spring:

  cloud:

    stream:

      bindings:

        output:

          destination: partitioned.topic

          producer:

            partitioned: true

            partition-key-expression: headers['partitionKey']

            partition-count: 12

The topic must be provisioned to have enough partitions to achieve the desired concurrency for all consumer groups. The above configuration supports up to 12 consumer instances (6 if their concurrency is 2, 4 if their concurrency is 3, and so on). It is generally best to “over-provision” the partitions to allow for future increases in consumers or concurrency.
必須配置主題以具有足夠的分區以實作所有消費者組的所需并發性。上面的配置最多支援12個消費者執行個體(如果它們concurrency是2,則為6,如果它們的并發性為3,則為4,依此類推)。通常最好“過度配置”分區以允許将來增加消費者或并發性。
The preceding configuration uses the default partitioning (key.hashCode() % partitionCount). This may or may not provide a suitably balanced algorithm, depending on the key values. You can override this default by using the partitionSelectorExpression or partitionSelectorClass properties.
上述配置使用預設分區(key.hashCode() % partitionCount)。根據鍵值,這可能會或可能不會提供适當平衡的算法。您可以使用partitionSelectorExpression或partitionSelectorClass屬性覆寫此預設值。

Since partitions are natively handled by Kafka, no special configuration is needed on the consumer side. Kafka allocates partitions across the instances.

由于分區由Kafka原生處理,是以在消費者方面不需要特殊配置。Kafka在執行個體之間配置設定分區。

The following Spring Boot application listens to a Kafka stream and prints (to the console) the partition ID to which each message goes:

以下Spring Boot應用程式偵聽Kafka流并列印(到控制台)每條消息所針對的分區ID:

@SpringBootApplication

@EnableBinding(Sink.class)

public class KafkaPartitionConsumerApplication {

    public static void main(String[] args) {

        new SpringApplicationBuilder(KafkaPartitionConsumerApplication.class)

            .web(false)

            .run(args);

    }

    @StreamListener(Sink.INPUT)

    public void listen(@Payload String in, @Header(KafkaHeaders.RECEIVED_PARTITION_ID) int partition) {

        System.out.println(in + " received from partition " + partition);

    }

}

application.yml

spring:

  cloud:

    stream:

      bindings:

        input:

          destination: partitioned.topic

          group: myGroup

You can add instances as needed. Kafka rebalances the partition allocations. If the instance count (or instance count * concurrency) exceeds the number of partitions, some consumers are idle.

您可以根據需要添加執行個體。Kafka重新平衡分區配置設定。如果執行個體計數(或instance count * concurrency)超過分區數,則某些消費者處于空閑狀态。

16. Apache Kafka Streams Binder

16.1. Usage

For using the Kafka Streams binder, you just need to add it to your Spring Cloud Stream application, using the following Maven coordinates:

要使用Kafka Streams綁定器,隻需使用以下Maven坐标将其添加到Spring Cloud Stream應用程式:

<dependency>

  <groupId>org.springframework.cloud</groupId>

  <artifactId>spring-cloud-stream-binder-kafka-streams</artifactId>

</dependency>

16.2. Kafka Streams Binder Overview

Spring Cloud Stream’s Apache Kafka support also includes a binder implementation designed explicitly for Apache Kafka Streams binding. With this native integration, a Spring Cloud Stream "processor" application can directly use the Apache Kafka Streams APIs in the core business logic.

Spring Cloud Stream的Apache Kafka支援還包括為Apache Kafka Streams綁定明确設計的綁定器實作。通過這種本地內建,Spring Cloud Stream“processor”應用程式可以直接在核心業務邏輯中使用 Apache Kafka Streams API。

Kafka Streams binder implementation builds on the foundation provided by the Kafka Streams in Spring Kafka project.

Kafka Streams綁定器實作建立在Spring Kafka 項目中Kafka Streams提供的基礎之上。

As part of this native integration, the high-level Streams DSL provided by the Kafka Streams API is available for use in the business logic, too.

作為此原生內建的一部分,Kafka Streams API提供的進階Streams DSL也可用于業務邏輯。

An early version of the Processor API support is available as well.

還提供了早期版本的Processor API支援。

As noted early on, Kafka Streams support in Spring Cloud Stream is strictly only available for use in the Processor model: a model in which messages are read from an inbound topic, business processing is applied, and the transformed messages are written to an outbound topic. It can also be used in Processor applications with no outbound destination.

如前所述,Kafka Streams在Spring Cloud Stream中的支援嚴格僅适用于處理器模型。可以應用從入站主題讀取的消息,業務處理以及轉換後的消息可以寫入出站主題的模型。它也可以在沒有出站目的地的處理器應用程式中使用。

16.2.1. Streams DSL

This application consumes data from a Kafka topic (e.g., words), computes the word count for each unique word in a 5-second time window, and sends the computed results to a downstream topic (e.g., counts) for further processing.

該應用程式使用來自Kafka主題(例如words)的資料,在5秒時間視窗中計算每個唯一單詞的單詞計數,并且将計算結果發送到下遊主題(例如counts)以進行進一步處理。

@SpringBootApplication

@EnableBinding(KStreamProcessor.class)

public class WordCountProcessorApplication {

@StreamListener("input")

@SendTo("output")

public KStream<?, WordCount> process(KStream<?, String> input) {

return input

                .flatMapValues(value -> Arrays.asList(value.toLowerCase().split("\\W+")))

                .groupBy((key, value) -> value)

                .windowedBy(TimeWindows.of(5000))

                .count(Materialized.as("WordCounts-multi"))

                .toStream()

                .map((key, value) -> new KeyValue<>(null, new WordCount(key.key(), value, new Date(key.window().start()), new Date(key.window().end()))));

    }

public static void main(String[] args) {

SpringApplication.run(WordCountProcessorApplication.class, args);

}

}

Once built as an uber-jar (e.g., wordcount-processor.jar), you can run the above example as follows.

一旦建構為超級jar(例如,wordcount-processor.jar),您可以運作上面的示例,如下所示。

java -jar wordcount-processor.jar  --spring.cloud.stream.bindings.input.destination=words --spring.cloud.stream.bindings.output.destination=counts

This application will consume messages from the Kafka topic words and the computed results are published to an output topic counts.

此應用程式将消費來自Kafka主題words的消息,并将計算結果釋出到輸出主題counts。

Spring Cloud Stream will ensure that the messages from both the incoming and outgoing topics are automatically bound as KStream objects. As a developer, you can exclusively focus on the business aspects of the code, i.e. writing the logic required in the processor. Setting up the Streams DSL specific configuration required by the Kafka Streams infrastructure is automatically handled by the framework.

Spring Cloud Stream将確定來自傳入和傳出主題的消息自動綁定為KStream對象。作為開發人員,您可以專注于代碼的業務方面,即編寫處理器中所需的邏輯。設定Kafka Streams基礎結構所需的Streams DSL特定配置由架構自動處理。

16.3. Configuration Options

This section contains the configuration options used by the Kafka Streams binder.

For common configuration options and properties pertaining to binder, refer to the core documentation.

本節包含Kafka Streams綁定器使用的配置選項。

有關綁定器的常用配置選項和屬性,請參閱核心文檔。

16.3.1. Kafka Streams Properties

The following properties are available at the binder level and must be prefixed with spring.cloud.stream.kafka.streams.binder. literal.

在綁定器級别可以使用以下屬性,并且必須以spring.cloud.stream.kafka.streams.binder.為字首。

configuration

Map with a key/value pair containing properties pertaining to Apache Kafka Streams API. This property must be prefixed with spring.cloud.stream.kafka.streams.binder.. Following are some examples of using this property.

包含與Apache Kafka Streams API相關的屬性的鍵/值對映射。此屬性必須以spring.cloud.stream.kafka.streams.binder.為字首。以下是使用此屬性的一些示例。

spring.cloud.stream.kafka.streams.binder.configuration.default.key.serde=org.apache.kafka.common.serialization.Serdes$StringSerde

spring.cloud.stream.kafka.streams.binder.configuration.default.value.serde=org.apache.kafka.common.serialization.Serdes$StringSerde

spring.cloud.stream.kafka.streams.binder.configuration.commit.interval.ms=1000

For more information about all the properties that may go into streams configuration, see StreamsConfig JavaDocs in Apache Kafka Streams docs.

有關可能進入流配置的所有屬性的更多資訊,請參閱Apache Kafka Streams文檔中的StreamsConfig JavaDocs。

brokers

Broker URL

Default: localhost

zkNodes

Zookeeper URL

Default: localhost

serdeError

Deserialization error handler type. Possible values are - logAndContinue, logAndFail or sendToDlq

反序列化錯誤處理程式類型。可能的值是 - logAndContinue,logAndFail或sendToDlq

Default: logAndFail

applicationId

Application ID for all the stream configurations in the current application context. You can override the application id for an individual StreamListener method using the group property on the binding. You have to ensure that you are using the same group name for all input bindings in the case of multiple inputs on the same method.

目前應用程式上下文中所有流配置的應用程式ID。您可以使用綁定上的group屬性覆寫單個StreamListener方法的應用程式ID。在相同方法的多個輸入的情況下,您必須確定為所有輸入綁定使用相同的組名。

Default: default
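
As a sketch (the names are illustrative), a binder-wide application ID combined with a per-binding override through the group property:

spring.cloud.stream.kafka.streams.binder.applicationId=word-count-app
spring.cloud.stream.bindings.input.group=word-count-input-group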

The following properties are only available for Kafka Streams producers and must be prefixed with spring.cloud.stream.kafka.streams.bindings.<binding name>.producer. literal.

以下屬性僅适用于Kafka Streams生産者,并且必須以spring.cloud.stream.kafka.streams.bindings.<binding name>.producer.為字首。

keySerde

key serde to use

要使用的鍵正反序列化

Default: none.

valueSerde

value serde to use

要使用的值正反序列化

Default: none.

useNativeEncoding

flag to enable native encoding

啟用原生編碼的标志

Default: false.

The following properties are only available for Kafka Streams consumers and must be prefixed with spring.cloud.stream.kafka.streams.bindings.<binding name>.consumer. literal.

以下屬性僅适用于Kafka Streams消費者,并且必須以spring.cloud.stream.kafka.streams.bindings.<binding name>.consumer.為字首。

keySerde

key serde to use

要使用的鍵正反序列化

Default: none.

valueSerde

value serde to use

要使用的值正反序列化

Default: none.

materializedAs

state store to materialize when using incoming KTable types

使用傳入的KTable類型時要具體化的狀态存儲

Default: none.

useNativeDecoding

flag to enable native decoding

啟用原生解碼的标志

Default: false.

dlqName

DLQ topic name.

DLQ主題名稱。

Default: none.

16.3.2. TimeWindow properties:

Windowing is an important concept in stream processing applications. The following properties are available to configure time-window computations.

視窗化是流處理應用程式中的一個重要概念。以下屬性可用于配置時間視窗計算。

spring.cloud.stream.kafka.streams.timeWindow.length

When this property is given, you can autowire a TimeWindows bean into the application. The value is expressed in milliseconds.

給出此屬性後,您可以将TimeWindows bean自動裝入應用程式。該值以毫秒表示。

Default: none.

spring.cloud.stream.kafka.streams.timeWindow.advanceBy

Value is given in milliseconds.

值以毫秒為機關。

Default: none.
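
For example (the values are illustrative), the following configures a 30-second window that advances every 5 seconds; the resulting TimeWindows bean can be autowired, as the branching example later in this chapter does:

spring.cloud.stream.kafka.streams.timeWindow.length=30000
spring.cloud.stream.kafka.streams.timeWindow.advanceBy=5000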

16.4. Multiple Input Bindings   多個輸入綁定

For use cases that requires multiple incoming KStream objects or a combination of KStream and KTable objects, the Kafka Streams binder provides multiple bindings support.

對于需要多個傳入KStream對象或KStream和KTable對象組合的用例,Kafka Streams綁定器提供多個綁定支援。

Let’s see it in action.

讓我們看看它的實際效果。

16.4.1. Multiple Input Bindings as a Sink   多個輸入綁定作為接收器

@EnableBinding(KStreamKTableBinding.class)

.....

.....

@StreamListener

public void process(@Input("inputStream") KStream<String, PlayEvent> playEvents,

                    @Input("inputTable") KTable<Long, Song> songTable) {

                    ....

                    ....

}

interface KStreamKTableBinding {

    @Input("inputStream")

    KStream<?, ?> inputStream();

    @Input("inputTable")

    KTable<?, ?> inputTable();

}

In the above example, the application is written as a sink, i.e. there are no output bindings, and the application has to decide what to do with the data downstream. When you write applications in this style, you might want to send the information downstream or store it in a state store (see below for Queryable State Stores).

在上面的示例中,應用程式被寫為接收器,即沒有輸出綁定,應用程式也必須決定下遊處理。以此樣式編寫應用程式時,您可能希望将資訊發送到下遊或将其存儲在狀态存儲中(請參閱下面的可查詢狀态存儲)。

In the case of incoming KTable, if you want to materialize the computations to a state store, you have to express it through the following property.

在傳入KTable的情況下,如果要将計算具體化到狀态存儲,則必須通過以下屬性表達它。

spring.cloud.stream.kafka.streams.bindings.inputTable.consumer.materializedAs: all-songs

16.4.2. Multiple Input Bindings as a Processor   多個輸入綁定作為處理器

@EnableBinding(KStreamKTableBinding.class)

....

....

@StreamListener

@SendTo("output")

public KStream<String, Long> process(@Input("input") KStream<String, Long> userClicksStream,

                                     @Input("inputTable") KTable<String, String> userRegionsTable) {

....

....

}

interface KStreamKTableBinding extends KafkaStreamsProcessor {

    @Input("inputX")

    KTable<?, ?> inputTable();

}

16.5. Multiple Output Bindings (aka Branching)   多個輸出綁定(又稱分支)

Kafka Streams allow outbound data to be split into multiple topics based on some predicates. The Kafka Streams binder provides support for this feature without compromising the programming model exposed through StreamListener in the end user application.

Kafka Streams允許基于某些謂詞将出站資料拆分為多個主題。Kafka Streams綁定器為此功能提供支援,而不會影響最終使用者應用程式中通過StreamListener公開的程式設計模型。

You can write the application in the usual way, as demonstrated above in the word count example. However, when using the branching feature, you are required to do a few things. First, you need to make sure that your return type is KStream[] instead of a regular KStream. Second, you need to use the SendTo annotation listing the output bindings in order (see the example below). For each of these output bindings, you need to configure the destination, content type, and so on, complying with the standard Spring Cloud Stream expectations.

您可以按照正常方式編寫應用程式,如上面單詞計數示例中所示。但是,在使用分支功能時,您需要執行一些操作。首先,您需要確定傳回類型是KStream[]而不是正常類型KStream。其次,您需要在訂單中使用包含輸出綁定的SendTo注釋(請參閱下面的示例)。對于每個輸出綁定,您需要配置目标,内容類型等,符合标準的Spring Cloud Stream期望。

Here is an example:

這是一個例子:

@EnableBinding(KStreamProcessorWithBranches.class)

@EnableAutoConfiguration

public static class WordCountProcessorApplication {

    @Autowired

    private TimeWindows timeWindows;

    @StreamListener("input")

    @SendTo({"output1","output2","output3})

    public KStream<?, WordCount>[] process(KStream<Object, String> input) {

Predicate<Object, WordCount> isEnglish = (k, v) -> v.word.equals("english");

Predicate<Object, WordCount> isFrench =  (k, v) -> v.word.equals("french");

Predicate<Object, WordCount> isSpanish = (k, v) -> v.word.equals("spanish");

return input

.flatMapValues(value -> Arrays.asList(value.toLowerCase().split("\\W+")))

.groupBy((key, value) -> value)

.windowedBy(timeWindows)

.count(Materialized.as("WordCounts-1"))

.toStream()

.map((key, value) -> new KeyValue<>(null, new WordCount(key.key(), value, new Date(key.window().start()), new Date(key.window().end()))))

.branch(isEnglish, isFrench, isSpanish);

    }

    interface KStreamProcessorWithBranches {

            @Input("input")

            KStream<?, ?> input();

            @Output("output1")

            KStream<?, ?> output1();

            @Output("output2")

            KStream<?, ?> output2();

            @Output("output3")

            KStream<?, ?> output3();

        }

}

Properties:

spring.cloud.stream.bindings.output1.contentType: application/json

spring.cloud.stream.bindings.output2.contentType: application/json

spring.cloud.stream.bindings.output3.contentType: application/json

spring.cloud.stream.kafka.streams.binder.configuration.commit.interval.ms: 1000

spring.cloud.stream.kafka.streams.binder.configuration:

  default.key.serde: org.apache.kafka.common.serialization.Serdes$StringSerde

  default.value.serde: org.apache.kafka.common.serialization.Serdes$StringSerde

spring.cloud.stream.bindings.output1:

  destination: foo

  producer:

    headerMode: raw

spring.cloud.stream.bindings.output2:

  destination: bar

  producer:

    headerMode: raw

spring.cloud.stream.bindings.output3:

  destination: fox

  producer:

    headerMode: raw

spring.cloud.stream.bindings.input:

  destination: words

  consumer:

    headerMode: raw

16.6. Message Conversion   消息轉換

Similar to message-channel based binder applications, the Kafka Streams binder adapts to the out-of-the-box content-type conversions without any compromise.

與基于消息通道的綁定器應用程式類似,Kafka Streams綁定器可以适應開箱即用的内容類型轉換,而不會有任何妥協。

It is typical for Kafka Streams operations to know the type of SerDe’s used to transform the key and value correctly. Therefore, it may be more natural to rely on the SerDe facilities provided by the Apache Kafka Streams library itself for the inbound and outbound conversions rather than using the content-type conversions offered by the framework. On the other hand, you might already be familiar with the content-type conversion patterns provided by the framework and want to continue using them for inbound and outbound conversions.

Kafka Streams操作通常會知道用于正确轉換鍵和值的SerDe的類型。是以,依賴于Apache Kafka Streams庫本身在入站和出站轉換中提供的SerDe工具而不是使用架構提供的内容類型轉換可能更為自然。另一方面,您可能已經熟悉架構提供的内容類型轉換模式,并且您希望繼續用于入站和出站轉換。

Both the options are supported in the Kafka Streams binder implementation.

Kafka Streams綁定器實作中都支援這兩個選項。

Outbound serialization   出站序列化

If native encoding is disabled (which is the default), then the framework will convert the message using the contentType set by the user (otherwise, the default application/json will be applied). It will ignore any SerDe set on the outbound in this case for outbound serialization.

如果禁用原生編碼(這是預設設定),則架構将使用使用者設定的contentType轉換消息(否則,将應用預設的application/json)。在這種情況下,它将忽略出站序列化的出站上的任何SerDe設定。

Here is the property to set the contentType on the outbound.

以下是在出站上設定contentType屬性。

spring.cloud.stream.bindings.output.contentType: application/json

Here is the property to enable native encoding.

以下是啟用原生編碼的屬性。

spring.cloud.stream.bindings.output.nativeEncoding: true

If native encoding is enabled on the output binding (user has to enable it as above explicitly), then the framework will skip any form of automatic message conversion on the outbound. In that case, it will switch to the Serde set by the user. The valueSerde property set on the actual output binding will be used. Here is an example.

如果在輸出綁定上啟用了原生編碼(使用者必須如上所述顯式啟用它),那麼架構将跳過出站的任何形式的自動消息轉換。在這種情況下,它将切換到使用者設定的Serde。将使用在實際輸出綁定上設定的valueSerde屬性。這是一個例子。

spring.cloud.stream.kafka.streams.bindings.output.producer.valueSerde: org.apache.kafka.common.serialization.Serdes$StringSerde

If this property is not set, then it will use the "default" SerDe: spring.cloud.stream.kafka.streams.binder.configuration.default.value.serde.

如果未設定此屬性,則它将使用“預設”SerDe : spring.cloud.stream.kafka.streams.binder.configuration.default.value.serde.

It is worth mentioning that the Kafka Streams binder does not serialize the keys on the outbound - it simply relies on Kafka itself. Therefore, you either have to specify the keySerde property on the binding, or it defaults to the application-wide common keySerde.

值得一提的是,Kafka Streams 綁定器不會在出站時序列化keys - 它隻依賴于Kafka本身。是以,您必須在綁定上指定keySerde屬性,否則它将預設為應用程式範圍的公共keySerde。

Binding level key serde:

綁定級别的key serde:

spring.cloud.stream.kafka.streams.bindings.output.producer.keySerde

Common Key serde:

公共Key serde:

spring.cloud.stream.kafka.streams.binder.configuration.default.key.serde

If branching is used, then you need to use multiple output bindings. For example,

如果使用分支,則需要使用多個輸出綁定。例如,

interface KStreamProcessorWithBranches {

            @Input("input")

            KStream<?, ?> input();

            @Output("output1")

            KStream<?, ?> output1();

            @Output("output2")

            KStream<?, ?> output2();

            @Output("output3")

            KStream<?, ?> output3();

        }

If nativeEncoding is set, then you can set different SerDe’s on individual output bindings as below.

如果設定了nativeEncoding,那麼您可以在各個輸出綁定上設定不同的SerDe,如下所示。

spring.cloud.stream.kafka.streams.bindings.output1.producer.valueSerde=IntegerSerde

spring.cloud.stream.kafka.streams.bindings.output2.producer.valueSerde=StringSerde

spring.cloud.stream.kafka.streams.bindings.output3.producer.valueSerde=JsonSerde

Then if you have SendTo like this, @SendTo({"output1", "output2", "output3"}), the KStream[] from the branches are applied with proper SerDe objects as defined above. If you are not enabling nativeEncoding, you can then set different contentType values on the output bindings as below. In that case, the framework will use the appropriate message converter to convert the messages before sending to Kafka.

然後,如果您有這樣的SendTo,@SendTo({"output1", "output2", "output3"}),分支中的KStream[]将應用上面定義的适當的SerDe對象。如果未啟用nativeEncoding,則可以在輸出綁定上設定不同的contentType值,如下所示。在這種情況下,架構将使用适當的消息轉換器在發送到Kafka之前轉換消息。

spring.cloud.stream.bindings.output1.contentType: application/json

spring.cloud.stream.bindings.output2.contentType: application/java-serialized-object

spring.cloud.stream.bindings.output3.contentType: application/octet-stream

Inbound Deserialization   入站反序列化

Similar rules apply to data deserialization on the inbound.

類似的規則适用于入站資料反序列化。

If native decoding is disabled (which is the default), then the framework will convert the message using the contentType set by the user (otherwise, the default application/json will be applied). It will ignore any SerDe set on the inbound in this case for inbound deserialization.

如果禁用原生解碼(這是預設設定),則架構将使用使用者設定的contentType轉換消息(否則,将應用預設的application/json)。在這種情況下,它将忽略入站反序列化的入站上的任何SerDe集。

Here is the property to set the contentType on the inbound.

以下是在入站中設定contentType屬性。

spring.cloud.stream.bindings.input.contentType: application/json

Here is the property to enable native decoding.

以下是啟用原生解碼的屬性。

spring.cloud.stream.bindings.input.nativeDecoding: true

If native decoding is enabled on the input binding (the user has to enable it explicitly, as above), then the framework will skip doing any message conversion on the inbound. In that case, it will switch to the SerDe set by the user. The valueSerde property set on the actual input binding will be used. Here is an example.

如果在輸入綁定上啟用了原生解碼(使用者必須如上所述明确啟用它),那麼架構将跳過對入站進行任何消息轉換。在這種情況下,它将切換到使用者設定的SerDe。将使用在實際輸出綁定上設定的valueSerde屬性。這是一個例子。

spring.cloud.stream.kafka.streams.bindings.input.consumer.valueSerde: org.apache.kafka.common.serialization.Serdes$StringSerde

If this property is not set, it will use the default SerDe: spring.cloud.stream.kafka.streams.binder.configuration.default.value.serde.

如果未設定此屬性,則将使用預設的SerDe: spring.cloud.stream.kafka.streams.binder.configuration.default.value.serde.

It is worth mentioning that the Kafka Streams binder does not deserialize the keys on the inbound - it simply relies on Kafka itself. Therefore, you either have to specify the keySerde property on the binding, or it defaults to the application-wide common keySerde.

值得一提的是,Kafka Streams綁定器不會對入站keys進行反序列化 - 它隻依賴于Kafka本身。是以,您必須在綁定上指定keySerde屬性,否則它将預設為應用程式範圍的公共keySerde。

Binding level key serde:

綁定級别的key serde:

spring.cloud.stream.kafka.streams.bindings.input.consumer.keySerde

Common Key serde:

公共Key serde:

spring.cloud.stream.kafka.streams.binder.configuration.default.key.serde

As in the case of KStream branching on the outbound, the benefit of setting value SerDe per binding is that if you have multiple input bindings (multiple KStreams object) and they all require separate value SerDe’s, then you can configure them individually. If you use the common configuration approach, then this feature won’t be applicable.

與出站時KStream分支的情況一樣,每個綁定設定值SerDe的好處是,如果您有多個輸入綁定(多個KStreams對象)并且它們都需要單獨的SerDe值,那麼您可以單獨配置它們。如果使用公共配置方法,則此功能将不适用。
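
As a hedged illustration, two input bindings that need different value SerDes could be configured individually like this (the binding names input1/input2 and the Serde classes are examples only):

spring.cloud.stream.kafka.streams.bindings.input1.consumer.valueSerde=org.apache.kafka.common.serialization.Serdes$StringSerde

spring.cloud.stream.kafka.streams.bindings.input2.consumer.valueSerde=org.apache.kafka.common.serialization.Serdes$LongSerde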

16.7. Error Handling   錯誤處理

Apache Kafka Streams provides the capability for natively handling exceptions from deserialization errors. For details on this support, please see the Apache Kafka Streams documentation. Out of the box, Apache Kafka Streams provides two kinds of deserialization exception handlers - logAndContinue and logAndFail. As the name indicates, the former will log the error and continue processing the next records and the latter will log the error and fail. logAndFail is the default deserialization exception handler.

Apache Kafka Streams提供了原生處理反序列化錯誤異常的功能。有關此支援的詳細資訊,請參閱Apache Kafka Streams文檔。開箱即用,Apache Kafka Streams提供了兩種反序列化異常處理程式 - logAndContinue和logAndFail。如名稱所示,前者將記錄錯誤并繼續處理下一條記錄,後者將記錄錯誤并失敗。logAndFail是預設的反序列化異常處理程式。

16.7.1. Handling Deserialization Exceptions   處理反序列化異常

Kafka Streams binder supports a selection of exception handlers through the following properties.

Kafka Streams binder通過以下屬性支援一系列異常處理程式。

spring.cloud.stream.kafka.streams.binder.serdeError: logAndContinue

In addition to the above two deserialization exception handlers, the binder also provides a third one for sending the erroneous records (poison pills) to a DLQ topic. Here is how you enable this DLQ exception handler.

除了上述兩個反序列化異常處理程式之外,綁定器還提供了第三個反序列化異常處理程式,用于将錯誤記錄(毒丸)發送到DLQ主題。以下是啟用此DLQ異常處理程式的方法。

spring.cloud.stream.kafka.streams.binder.serdeError: sendToDlq

When the above property is set, all the deserialization error records are automatically sent to the DLQ topic.

設定上述屬性後,所有反序列化錯誤記錄将自動發送到DLQ主題。

spring.cloud.stream.kafka.streams.bindings.input.consumer.dlqName: foo-dlq

If this is set, then the error records are sent to the topic foo-dlq. If this is not set, then it will create a DLQ topic with the name error.<input-topic-name>.<group-name>.

如果設定了此項,則會将錯誤記錄發送到主題foo-dlq。如果未設定,則會建立名稱為error.<input-topic-name>.<group-name>的DLQ主題。

A couple of things to keep in mind when using the exception handling feature in Kafka Streams binder.

  • The property spring.cloud.stream.kafka.streams.binder.serdeError is applicable for the entire application. This implies that if there are multiple StreamListener methods in the same application, this property is applied to all of them.
  • The exception handling for deserialization works consistently with native deserialization and framework provided message conversion.

在Kafka Streams綁定器中使用異常處理功能時要記住幾件事。

  • spring.cloud.stream.kafka.streams.binder.serdeError屬性适用于整個應用。這意味着如果在同一個應用程式中有多個StreamListener方法,則此屬性将應用于所有這些方法。
  • 反序列化的異常處理與原生反序列化和架構提供的消息轉換一緻。

16.7.2. Handling Non-Deserialization Exceptions   處理非反序列化異常

For general error handling in Kafka Streams binder, it is up to the end user applications to handle application level errors. As a side effect of providing a DLQ for deserialization exception handlers, Kafka Streams binder provides a way to get access to the DLQ sending bean directly from your application. Once you get access to that bean, you can programmatically send any exception records from your application to the DLQ.

對于Kafka Streams綁定器中的一般錯誤處理,最終使用者應用程式可以處理應用程式級錯誤。作為為反序列化異常處理程式提供DLQ的副作用,Kafka Streams綁定器提供了一種直接從應用程式通路發送bean的DLQ的方法。一旦通路該bean,就可以以程式設計方式将任何異常記錄從應用程式發送到DLQ。

Robust error handling with the high-level DSL remains difficult; Kafka Streams does not yet natively support error handling.

使用進階DSL仍然難以進行強大的錯誤處理; Kafka Streams本身并不支援錯誤處理。

However, when you use the low-level Processor API in your application, there are options to control this behavior. See below.

但是,在應用程式中使用低級Processor API時,可以選擇控制此行為。見下文。

@Autowired

private SendToDlqAndContinue dlqHandler;

@StreamListener("input")

@SendTo("output")

public KStream<?, WordCount> process(KStream<Object, String> input) {

    input.process(() -> new Processor() {

                ProcessorContext context;

                @Override

                public void init(ProcessorContext context) {

                    this.context = context;

                }

                @Override

                public void process(Object o, Object o2) {

                    try {

                        .....

                        .....

                    }

                    catch(Exception e) {

                        //explicitly provide the kafka topic corresponding to the input binding as the first argument.

                        //DLQ handler will correctly map to the dlq topic from the actual incoming destination.

                        dlqHandler.sendToDlq("topic-name", (byte[]) o, (byte[]) o2, context.partition());

                    }

                }

                .....

                .....

    });

}

16.8. Interactive Queries   互動式查詢

As part of the public Kafka Streams binder API, we expose a class called QueryableStoreRegistry. You can access this as a Spring bean in your application. An easy way to get access to this bean is to "autowire" it in your application.

作為公共Kafka Streams binder API的一部分,我們公開了一個名為QueryableStoreRegistry的類。您可以在應用程式中将其作為Spring bean進行通路。從應用程式通路此bean的一種簡單方法是在應用程式中“自動裝配”該bean。

@Autowired

private QueryableStoreRegistry queryableStoreRegistry;

Once you gain access to this bean, then you can query for the particular state-store that you are interested. See below.

一旦獲得對此bean的通路權限,就可以查詢您感興趣的特定狀态存儲。見下文。

ReadOnlyKeyValueStore<Object, Object> keyValueStore =

                                                queryableStoreRegistry.getQueryableStoreType("my-store", QueryableStoreTypes.keyValueStore());
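
Once you have the ReadOnlyKeyValueStore, querying it works like any other Kafka Streams interactive query. A minimal sketch (the store name above and the key used here are illustrative):

Object value = keyValueStore.get("some-key");
// or iterate over all entries in the store
KeyValueIterator<Object, Object> iterator = keyValueStore.all();
while (iterator.hasNext()) {
    KeyValue<Object, Object> entry = iterator.next();
    System.out.println(entry.key + " -> " + entry.value);
}
iterator.close();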

16.9. Accessing the underlying KafkaStreams object   通路底層的KafkaStreams對象

StreamsBuilderFactoryBean from spring-kafka, which is responsible for constructing the KafkaStreams object, can be accessed programmatically. Each StreamsBuilderFactoryBean is registered as stream-builder and appended with the StreamListener method name. If your StreamListener method is named process, for example, the stream builder bean is named stream-builder-process. Since this is a factory bean, it should be accessed by prepending an ampersand (&) when accessing it programmatically. The following is an example, and it assumes the StreamListener method is named process:

可以通過程式設計方式通路負責構造KafkaStreams對象的spring-kafka中的StreamsBuilderFactoryBean。每個StreamsBuilderFactoryBean都被注冊為stream-builder并附加StreamListener方法名稱。例如,如果您的StreamListener方法被命名為process,則流建構器bean的名稱為stream-builder-process。由于這是一個工廠bean,是以在以程式設計方式通路它時,應該在名稱前添加一個&符号(&)。以下是一個示例,它假定該StreamListener方法命名為process:

StreamsBuilderFactoryBean streamsBuilderFactoryBean = context.getBean("&stream-builder-process", StreamsBuilderFactoryBean.class);

                        KafkaStreams kafkaStreams = streamsBuilderFactoryBean.getKafkaStreams();

17. RabbitMQ Binder

17.1. Usage

To use the RabbitMQ binder, you can add it to your Spring Cloud Stream application, by using the following Maven coordinates:

要使用RabbitMQ綁定器,可以使用以下Maven坐标将其添加到Spring Cloud Stream應用程式中:

<dependency>

  <groupId>org.springframework.cloud</groupId>

  <artifactId>spring-cloud-stream-binder-rabbit</artifactId>

</dependency>

Alternatively, you can use the Spring Cloud Stream RabbitMQ Starter, as follows:

或者,您可以使用Spring Cloud Stream RabbitMQ Starter,如下所示:

<dependency>

  <groupId>org.springframework.cloud</groupId>

  <artifactId>spring-cloud-starter-stream-rabbit</artifactId>

</dependency>

17.2. RabbitMQ Binder Overview

The following simplified diagram shows how the RabbitMQ binder operates:

以下簡化圖顯示了RabbitMQ綁定器的運作方式:

Figure 11. RabbitMQ Binder

By default, the RabbitMQ Binder implementation maps each destination to a TopicExchange. For each consumer group, a Queue is bound to that TopicExchange. Each consumer instance has a corresponding RabbitMQ Consumer instance for its group’s Queue. For partitioned producers and consumers, the queues are suffixed with the partition index and use the partition index as the routing key. For anonymous consumers (those with no group property), an auto-delete queue (with a randomized unique name) is used.

預設情況下,RabbitMQ Binder實作将每個目标映射到一個TopicExchange。對于每個消費者組,都有一個Queue與此TopicExchange綁定。每個消費者執行個體都有一個與其組Queue對應的RabbitMQ Consumer執行個體。對于分區生産者和消費者,隊列以分區索引為字尾,并使用分區索引作為路由鍵。對于匿名消費者(沒有group屬性的消費者),使用自動删除隊列(具有随機的唯一名稱)。

By using the optional autoBindDlq option, you can configure the binder to create and configure dead-letter queues (DLQs) (and a dead-letter exchange DLX, as well as routing infrastructure). By default, the dead letter queue has the name of the destination, appended with .dlq. If retry is enabled (maxAttempts > 1), failed messages are delivered to the DLQ after retries are exhausted. If retry is disabled (maxAttempts = 1), you should set requeueRejected to false (the default) so that failed messages are routed to the DLQ, instead of being re-queued. In addition, republishToDlq causes the binder to publish a failed message to the DLQ (instead of rejecting it). This feature lets additional information (such as the stack trace in the x-exception-stacktrace header) be added to the message in headers. This option does not need retry enabled. You can republish a failed message after just one attempt. Starting with version 1.2, you can configure the delivery mode of republished messages. See the republishDeliveryMode property.

通過使用可選autoBindDlq選項,您可以配置綁定器以建立和配置死信隊列(DLQ)(以及死信交換DLX,以及路由基礎結構)。預設情況下,死信隊列的名稱即目标名稱,追加.dlq字尾。如果啟用了重試(maxAttempts > 1),則在重試耗盡後,失敗的消息将傳遞到DLQ。如果禁用重試(maxAttempts = 1),則應設定requeueRejected為false(預設值),以便将失敗的消息路由到DLQ,而不是重新排隊。此外,republishToDlq導緻綁定器将失敗的消息釋出到DLQ(而不是拒絕它)。此功能可以将其他資訊(例如,x-exception-stacktrace header中的堆棧跟蹤)添加到headers中的消息。此選項不需要重試。隻需一次嘗試即可重新釋出失敗的消息。從1.2版開始,您可以配置重新釋出的消息的傳遞模式。檢視republishDeliveryMode屬性。

Setting requeueRejected to true (with republishToDlq=false ) causes the message to be re-queued and redelivered continually, which is likely not what you want unless the reason for the failure is transient. In general, you should enable retry within the binder by setting maxAttempts to greater than one or by setting republishToDlq to true.
設定requeueRejected為true(with republishToDlq=false)會導緻消息重新排隊并連續重新傳遞,這可能不是您想要的,除非失敗的原因是暫時的。通常,您應該通過設定maxAttempts為大于1或通過設定republishToDlq為true在綁定器中開啟重試。

See RabbitMQ Binder Properties for more information about these properties.

有關這些屬性的更多資訊,請參見RabbitMQ Binder屬性。

The framework does not provide any standard mechanism to consume dead-letter messages (or to re-route them back to the primary queue). Some options are described in Dead-Letter Queue Processing.

該架構沒有提供任何标準機制來消費死信消息(或将它們重新路由回主隊列)。死信隊列進行中描述了一些選項。

When multiple RabbitMQ binders are used in a Spring Cloud Stream application, it is important to disable 'RabbitAutoConfiguration' to avoid the same configuration from RabbitAutoConfiguration being applied to the two binders. You can exclude the class by using the @SpringBootApplication annotation.
當在Spring Cloud Stream應用程式中使用多個RabbitMQ綁定器時,禁用“RabbitAutoConfiguration”以避免将相同的RabbitAutoConfiguration配置應用于兩個綁定器非常重要。您可以使用@SpringBootApplication注釋排除此類。

Starting with version 2.0, the RabbitMessageChannelBinder sets the RabbitTemplate.usePublisherConnection property to true so that the non-transactional producers avoid deadlocks on consumers, which can happen if cached connections are blocked because of a memory alarm on the broker.

從版本2.0開始,RabbitMessageChannelBinder將RabbitTemplate.usePublisherConnection屬性設定為true,以便非事務生産者避免在消費者上死鎖,如果由于代理上的記憶體警報而阻塞高速緩存連接配接,則可能發生這種情況。

17.3. Configuration Options   配置選項

This section contains settings specific to the RabbitMQ Binder and bound channels.

For general binding configuration options and properties, see the Spring Cloud Stream core documentation.

本節包含特定于RabbitMQ Binder和綁定通道的設定。

有關正常綁定配置選項和屬性,請參閱Spring Cloud Stream核心文檔。

RabbitMQ Binder Properties   RabbitMQ綁定器屬性

By default, the RabbitMQ binder uses Spring Boot’s ConnectionFactory. Consequently, it supports all Spring Boot configuration options for RabbitMQ. (For reference, see the Spring Boot documentation). RabbitMQ configuration options use the spring.rabbitmq prefix.

預設情況下,RabbitMQ綁定器使用Spring Boot的ConnectionFactory。是以,它支援RabbitMQ的所有Spring Boot配置選項。(有關參考,請參閱Spring Boot文檔)。RabbitMQ配置選項使用spring.rabbitmq字首。

In addition to Spring Boot options, the RabbitMQ binder supports the following properties:

除Spring Boot選項外,RabbitMQ binder還支援以下屬性:

spring.cloud.stream.rabbit.binder.adminAddresses

A comma-separated list of RabbitMQ management plugin URLs. Only used when nodes contains more than one entry. Each entry in this list must have a corresponding entry in spring.rabbitmq.addresses. Only needed if you use a RabbitMQ cluster and wish to consume from the node that hosts the queue. See Queue Affinity and the LocalizedQueueConnectionFactory for more information.

以逗号分隔的RabbitMQ管理插件URL清單。僅在nodes包含多個條目時使用。此清單中的每個條目都必須在spring.rabbitmq.addresses中包含相應的條目。僅在您使用RabbitMQ叢集并希望從承載隊列的節點消費時才需要。有關更多資訊,請參閱Queue Affinity和LocalizedQueueConnectionFactory。

Default: empty.

spring.cloud.stream.rabbit.binder.nodes

A comma-separated list of RabbitMQ node names. When more than one entry, used to locate the server address where a queue is located. Each entry in this list must have a corresponding entry in spring.rabbitmq.addresses. Only needed if you use a RabbitMQ cluster and wish to consume from the node that hosts the queue. See Queue Affinity and the LocalizedQueueConnectionFactory for more information.

以逗号分隔的RabbitMQ節點名稱清單。當存在多個條目時,用于查找隊列所在的伺服器位址。此清單中的每個條目都必須在spring.rabbitmq.addresses中包含相應的條目。僅在您使用RabbitMQ叢集并希望從承載隊列的節點消費時才需要。有關更多資訊,請參閱Queue Affinity和LocalizedQueueConnectionFactory。

Default: empty.

spring.cloud.stream.rabbit.binder.compressionLevel

The compression level for compressed bindings. See java.util.zip.Deflater.

壓縮綁定的壓縮級别。見java.util.zip.Deflater。

Default: 1 (BEST_SPEED).

spring.cloud.stream.rabbit.binder.connection-name-prefix

A connection name prefix used to name the connection(s) created by this binder. The name is this prefix followed by #n, where n increments each time a new connection is opened.

用于命名此綁定器建立的連接配接的連接配接名稱字首。名稱是此字首後跟#n,其中每次打開新連接配接時n遞增。

Default: none (Spring AMQP default).
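
For example, the following setting (the prefix value is illustrative) results in connections named myApp#0, myApp#1, and so on:

spring.cloud.stream.rabbit.binder.connection-name-prefix=myApp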

RabbitMQ Consumer Properties   RabbitMQ消費者屬性

The following properties are available for Rabbit consumers only and must be prefixed with spring.cloud.stream.rabbit.bindings.<channelName>.consumer..

以下屬性僅适用于Rabbit消費者,必須以spring.cloud.stream.rabbit.bindings.<channelName>.consumer.為字首。
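
For example, for a binding named input, a couple of these properties might be set as follows (the values are illustrative):

spring.cloud.stream.rabbit.bindings.input.consumer.acknowledgeMode=MANUAL

spring.cloud.stream.rabbit.bindings.input.consumer.prefetch=10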

acknowledgeMode

The acknowledge mode.

确認模式。

Default: AUTO.

autoBindDlq

Whether to automatically declare the DLQ and bind it to the binder DLX.

是否自動聲明DLQ并将其綁定到綁定器DLX。

Default: false.

bindingRoutingKey

The routing key with which to bind the queue to the exchange (if bindQueue is true). For partitioned destinations, -<instanceIndex> is appended.

用于将隊列綁定到交換機的路由密鑰(如果bindQueue是true)。對于分區目的地,追加-<instanceIndex>。

Default: #.

bindQueue

Whether to bind the queue to the destination exchange. Set it to false if you have set up your own infrastructure and have previously created and bound the queue.

是否将隊列綁定到目标交換機。如果您已設定自己的基礎架構并且之前已建立并綁定隊列,請将其設定為false。

Default: true.

deadLetterQueueName

The name of the DLQ

DLQ的名稱

Default: prefix+destination.dlq

deadLetterExchange

A DLX to assign to the queue. Relevant only if autoBindDlq is true.

要配置設定給隊列的DLX。僅在autoBindDlq是true時相關。

Default: 'prefix+DLX'

deadLetterRoutingKey

A dead letter routing key to assign to the queue. Relevant only if autoBindDlq is true.

用于配置設定給隊列的死信路由密鑰。僅在autoBindDlq是true時相關。

Default: destination

declareExchange

Whether to declare the exchange for the destination.

是否聲明目的地的交換。

Default: true.

delayedExchange

Whether to declare the exchange as a Delayed Message Exchange. Requires the delayed message exchange plugin on the broker. The x-delayed-type argument is set to the exchangeType.

是否将交換聲明為為Delayed Message Exchange。需要代理上的延遲消息交換插件。x-delayed-type參數設定為exchangeType。

Default: false.

dlqDeadLetterExchange

If a DLQ is declared, a DLX to assign to that queue.

如果聲明了DLQ,則為配置設定給該隊列的DLX。

Default: none

dlqDeadLetterRoutingKey

If a DLQ is declared, a dead letter routing key to assign to that queue.

如果聲明了DLQ,則為配置設定給該隊列的死信路由密鑰。

Default: none

dlqExpires

How long before an unused dead letter queue is deleted (in milliseconds).

删除未使用的死信隊列需要多長時間(以毫秒為機關)。

Default: no expiration

dlqLazy

Declare the dead letter queue with the x-queue-mode=lazy argument. See “Lazy Queues”. Consider using a policy instead of this setting, because using a policy allows changing the setting without deleting the queue.

聲明帶有x-queue-mode=lazy參數的死信隊列。請參閱“懶惰隊列”。請考慮使用政策而不是此設定,因為使用政策允許更改設定而不删除隊列。

Default: false.

dlqMaxLength

Maximum number of messages in the dead letter queue.

死信隊列中的最大消息數。

Default: no limit

dlqMaxLengthBytes

Maximum number of total bytes in the dead letter queue from all messages.

所有消息中死信隊列中的最大總位元組數。

Default: no limit

dlqMaxPriority

Maximum priority of messages in the dead letter queue (0-255).

死信隊列中消息的最大優先級(0-255)。

Default: none

dlqTtl

Default time to live to apply to the dead letter queue when declared (in milliseconds).

聲明時應用于死信隊列的預設時間(以毫秒為機關)。

Default: no limit

durableSubscription

Whether the subscription should be durable. Only effective if group is also set.

訂閱是否應該是持久的。僅group設定時有效。

Default: true.

exchangeAutoDelete

If declareExchange is true, whether the exchange should be auto-deleted (that is, removed after the last queue is removed).

如果declareExchange為true,則是否應自動删除交換(即,在删除最後一個隊列後删除)。

Default: true.

exchangeDurable

If declareExchange is true, whether the exchange should be durable (that is, it survives broker restart).

如果declareExchange是true,則交換是否應該是持久的(即,它在代理重新開機後仍然存在)。

Default: true.

exchangeType

The exchange type: direct, fanout or topic for non-partitioned destinations and direct or topic for partitioned destinations.

交換類型:direct,fanout或用于非分區目标的topic和用于分區目标的direct或topic。

Default: topic.

exclusive

Whether to create an exclusive consumer. Concurrency should be 1 when this is true. Often used when strict ordering is required, with a hot standby instance ready to take over after a failure. See recoveryInterval, which controls how often a standby instance attempts to consume.

是否建立獨家消費者。如果是true,則并發應該是1。通常在需要嚴格排序時使用,但在發生故障後啟用熱備用執行個體。請參閱recoveryInterval,它控制備用執行個體嘗試使用的頻率。

Default: false.

expires

How long before an unused queue is deleted (in milliseconds).

删除未使用的隊列需要多長時間(以毫秒為機關)。

Default: no expiration

failedDeclarationRetryInterval

The interval (in milliseconds) between attempts to consume from a queue if it is missing.

隊列缺失時,嘗試從隊列中消費的時間間隔(以毫秒為機關)。

Default: 5000

headerPatterns

Patterns for headers to be mapped from inbound messages.

從入站消息中映射的headers的模式。

Default: ['*'] (all headers).

lazy

Declare the queue with the x-queue-mode=lazy argument. See “Lazy Queues”. Consider using a policy instead of this setting, because using a policy allows changing the setting without deleting the queue.

聲明帶有x-queue-mode=lazy參數的隊列。請參閱“懶惰隊列”。請考慮使用政策而不是此設定,因為使用政策允許更改設定而不删除隊列。

Default: false.

maxConcurrency

The maximum number of consumers.

最大消費者數量。

Default: 1.

maxLength

The maximum number of messages in the queue.

隊列中的最大消息數。

Default: no limit

maxLengthBytes

The maximum number of total bytes in the queue from all messages.

所有消息中隊列中的最大總位元組數。

Default: no limit

maxPriority

The maximum priority of messages in the queue (0-255).

隊列中消息的最大優先級(0-255)。

Default: none

missingQueuesFatal

When the queue cannot be found, whether to treat the condition as fatal and stop the listener container. Defaults to false so that the container keeps trying to consume from the queue — for example, when using a cluster and the node hosting a non-HA queue is down.

當無法找到隊列時,是否将條件視為緻命并停止監聽器容器。預設設定為false以便容器繼續嘗試從隊列中消費 - 例如,在使用群集時,托管非HA隊列的節點已關閉。

Default: false

prefetch

Prefetch count.

預取計數。

Default: 1.

prefix

A prefix to be added to the name of the destination and queues.

要添加到destination和隊列名稱的字首。

Default: "".

queueDeclarationRetries

The number of times to retry consuming from a queue if it is missing. Relevant only when missingQueuesFatal is true. Otherwise, the container keeps retrying indefinitely.

如果丢失,則從隊列重試消費的次數。隻有當missingQueuesFatal是true時有關。否則,容器将無限期地重試。

Default: 3

queueNameGroupOnly

When true, consume from a queue with a name equal to the group. Otherwise the queue name is destination.group. This is useful, for example, when using Spring Cloud Stream to consume from an existing RabbitMQ queue.

如果為true,則從名稱等于group的隊列中消費。否則隊列名稱是destination.group。例如,當使用Spring Cloud Stream從現有RabbitMQ隊列中消費時,這很有用。

Default: false.

recoveryInterval

The interval between connection recovery attempts, in milliseconds.

連接配接恢複嘗試之間的間隔,以毫秒為機關。

Default: 5000.

requeueRejected

Whether delivery failures should be re-queued when retry is disabled or republishToDlq is false.

當禁用重試或republishToDlq為false時,投遞失敗的消息是否應重新排隊。

Default: false.

republishDeliveryMode

When republishToDlq is true, specifies the delivery mode of the republished message.

當republishToDlq是true時,指定重新釋出消息的傳遞方式。

Default: DeliveryMode.PERSISTENT

republishToDlq

By default, messages that fail after retries are exhausted are rejected. If a dead-letter queue (DLQ) is configured, RabbitMQ routes the failed message (unchanged) to the DLQ. If set to true, the binder republishes failed messages to the DLQ with additional headers, including the exception message and stack trace from the cause of the final failure.

預設情況下,重試耗盡後仍失敗的消息將被拒絕。如果配置了死信隊列(DLQ),RabbitMQ會將失敗的消息(未更改)路由到DLQ。如果設定為true,則綁定器會使用其他headers將失敗的消息重新釋出到DLQ,包括異常消息和最終失敗原因的堆棧跟蹤。

Default: false

transacted

Whether to use transacted channels.

是否使用事務化通道。

Default: false.

ttl

Default time to live to apply to the queue when declared (in milliseconds).

聲明時應用于隊列的預設時間(以毫秒為機關)。

Default: no limit

txSize

The number of deliveries between acks.

确認之間的傳遞數量。

Default: 1.

Rabbit Producer Properties   Rabbit生産者屬性

The following properties are available for Rabbit producers only and must be prefixed with spring.cloud.stream.rabbit.bindings.<channelName>.producer..

以下屬性僅适用于Rabbit生産者,必須帶有spring.cloud.stream.rabbit.bindings.<channelName>.producer.字首。
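
For example, for a producer binding named output, a couple of these properties might be set as follows (the values are illustrative):

spring.cloud.stream.rabbit.bindings.output.producer.deliveryMode=NON_PERSISTENT

spring.cloud.stream.rabbit.bindings.output.producer.compress=true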

autoBindDlq

Whether to automatically declare the DLQ and bind it to the binder DLX.

是否自動聲明DLQ并将其綁定到綁定器DLX。

Default: false.

batchingEnabled

Whether to enable message batching by producers. Messages are batched into one message according to the following properties (described in the next three entries in this list): 'batchSize', batchBufferLimit, and batchTimeout. See Batching for more information.

是否啟用生産者的消息批處理。根據以下屬性将消息批處理為一條消息(在此清單的下三個條目中描述):'batchSize',batchBufferLimit,和batchTimeout。有關更多資訊,請參閱批處理。

Default: false.

batchSize

The number of messages to buffer when batching is enabled.

啟用批處理時要緩沖的消息數。

Default: 100.

batchBufferLimit

The maximum buffer size when batching is enabled.

啟用批處理時的最大緩沖區大小。

Default: 10000.

batchTimeout

The batch timeout when batching is enabled.

批處理啟用時的批處理逾時。

Default: 5000.

bindingRoutingKey

The routing key with which to bind the queue to the exchange (if bindQueue is true). Only applies to non-partitioned destinations. Only applies if requiredGroups are provided and then only to those groups.

用于将隊列綁定到交換機的路由密鑰(如果bindQueue是true)。僅适用于非分區目标。僅在提供requiredGroups時适用,然後僅适用于這些組。

Default: #.

bindQueue

Whether to bind the queue to the destination exchange. Set it to false if you have set up your own infrastructure and have previously created and bound the queue. Only applies if requiredGroups are provided and then only to those groups.

是否将隊列綁定到目标交換機。如果您已設定自己的基礎架構并且之前已建立并綁定隊列,請将其設定為false。僅在提供requiredGroups時适用,然後僅适用于這些組。

Default: true.

compress

Whether data should be compressed when sent.

是否應在發送時壓縮資料。

Default: false.

deadLetterQueueName

The name of the DLQ. Applies only if requiredGroups are provided and then only to those groups.

DLQ的名稱,僅在提供requiredGroups時适用,然後僅适用于這些組。

Default: prefix+destination.dlq

deadLetterExchange

A DLX to assign to the queue. Relevant only when autoBindDlq is true. Applies only when requiredGroups are provided and then only to those groups.

要配置設定給隊列的DLX。隻有當autoBindDlq是true時有關。僅在提供requiredGroups時适用,然後僅适用于這些組。

Default: 'prefix+DLX'

deadLetterRoutingKey

A dead letter routing key to assign to the queue. Relevant only when autoBindDlq is true. Applies only when requiredGroups are provided and then only to those groups.

用于配置設定給隊列的死信路由密鑰。隻有當autoBindDlq是true時有關。僅在提供requiredGroups時适用,然後僅适用于這些組。

Default: destination

declareExchange

Whether to declare the exchange for the destination.

是否聲明目的地的交換。

Default: true.

delayExpression

A SpEL expression to evaluate the delay to apply to the message (x-delay header). It has no effect if the exchange is not a delayed message exchange.

用于評估應用于消息(x-delay header)的延遲的SpEL表達式。如果交換不是延遲消息交換,則無效。

Default: No x-delay header is set.

delayedExchange

Whether to declare the exchange as a Delayed Message Exchange. Requires the delayed message exchange plugin on the broker. The x-delayed-type argument is set to the exchangeType.

是否将交換聲明為Delayed Message Exchange。需要代理上的延遲消息交換插件。x-delayed-type參數設定為exchangeType。

Default: false.

deliveryMode

The delivery mode.

投遞模式。

Default: PERSISTENT.

dlqDeadLetterExchange

When a DLQ is declared, a DLX to assign to that queue. Applies only if requiredGroups are provided and then only to those groups.

聲明DLQ時,則為将配置設定給該隊列的DLX。僅在提供requiredGroups時适用,然後僅适用于這些組。

Default: none

dlqDeadLetterRoutingKey

When a DLQ is declared, a dead letter routing key to assign to that queue. Applies only when requiredGroups are provided and then only to those groups.

聲明DLQ時,則為配置設定給該隊列的死信路由鍵。僅在提供requiredGroups時适用,然後僅适用于這些組。

Default: none

dlqExpires

How long (in milliseconds) before an unused dead letter queue is deleted. Applies only when requiredGroups are provided and then only to those groups.

删除未使用的死信隊列之前的時間(以毫秒為機關)。僅在提供requiredGroups時适用,然後僅适用于這些組。

Default: no expiration

dlqLazy

Declare the dead letter queue with the x-queue-mode=lazy argument. See “Lazy Queues”. Consider using a policy instead of this setting, because using a policy allows changing the setting without deleting the queue. Applies only when requiredGroups are provided and then only to those groups.

使用x-queue-mode=lazy參數聲明死信隊列。請參閱“懶惰隊列”。請考慮使用政策而不是此設定,因為使用政策允許更改設定而不删除隊列。僅在提供requiredGroups時适用,然後僅适用于這些組。

Default: false.

dlqMaxLength

Maximum number of messages in the dead letter queue. Applies only if requiredGroups are provided and then only to those groups.

死信隊列中的最大消息數。僅在提供requiredGroups時适用,然後僅适用于這些組。

Default: no limit

dlqMaxLengthBytes

Maximum number of total bytes in the dead letter queue from all messages. Applies only when requiredGroups are provided and then only to those groups.

所有消息中死信隊列中的最大總位元組數。僅在提供requiredGroups時适用,然後僅适用于這些組。

Default: no limit

dlqMaxPriority

Maximum priority of messages in the dead letter queue (0-255). Applies only when requiredGroups are provided and then only to those groups.

死信隊列中消息的最大優先級(0-255)僅在提供requiredGroups時才适用,然後僅适用于這些組。

Default: none

dlqTtl

Default time (in milliseconds) to live to apply to the dead letter queue when declared. Applies only when requiredGroups are provided and then only to those groups.

聲明時應用于死信隊列的預設時間(以毫秒為機關)。僅在提供requiredGroups時适用,然後僅适用于這些組。

Default: no limit

exchangeAutoDelete

If declareExchange is true, whether the exchange should be auto-delete (it is removed after the last queue is removed).

如果declareExchange是true,是否應該自動删除交換(在删除最後一個隊列後删除它)。

Default: true.

exchangeDurable

If declareExchange is true, whether the exchange should be durable (survives broker restart).

如果declareExchange是true,交換是否應該是持久的(在broker重新開機後仍然存活)。

Default: true.

exchangeType

The exchange type: direct, fanout or topic for non-partitioned destinations and direct or topic for partitioned destinations.

交換類型:direct,fanout或用于非分區目标的topic和用于分區目标的direct或topic。

Default: topic.

expires

How long (in milliseconds) before an unused queue is deleted. Applies only when requiredGroups are provided and then only to those groups.

删除未使用的隊列之前的時間(以毫秒為機關)。僅在提供requiredGroups時适用,然後僅适用于這些組。

Default: no expiration

headerPatterns

Patterns for headers to be mapped to outbound messages.

要映射到出站消息的headers模式。

Default: ['*'] (all headers).

lazy

Declare the queue with the x-queue-mode=lazy argument. See “Lazy Queues”. Consider using a policy instead of this setting, because using a policy allows changing the setting without deleting the queue. Applies only when requiredGroups are provided and then only to those groups.

使用x-queue-mode=lazy參數聲明隊列。請參閱“懶惰隊列”。請考慮使用政策而不是此設定,因為使用政策允許更改設定而不删除隊列。僅在提供requiredGroups時适用,然後僅适用于這些組。

Default: false.

maxLength

Maximum number of messages in the queue. Applies only when requiredGroups are provided and then only to those groups.

隊列中的最大消息數。僅在提供requiredGroups時适用,然後僅适用于這些組。

Default: no limit

maxLengthBytes

Maximum number of total bytes in the queue from all messages. Only applies if requiredGroups are provided and then only to those groups.

所有消息中隊列中的最大總位元組數。僅在提供requiredGroups時适用,然後僅适用于這些組。

Default: no limit

maxPriority

Maximum priority of messages in the queue (0-255). Only applies if requiredGroups are provided and then only to those groups.

隊列中消息的最大優先級(0-255)。僅在提供requiredGroups時适用,然後僅适用于這些組。

Default: none

prefix

A prefix to be added to the name of the destination exchange.

要添加到destination交換機名稱的字首。

Default: "".

queueNameGroupOnly

When true, consume from a queue with a name equal to the group. Otherwise the queue name is destination.group. This is useful, for example, when using Spring Cloud Stream to consume from an existing RabbitMQ queue. Applies only when requiredGroups are provided and then only to those groups.

當true時,使用名稱等于group的隊列消費。否則隊列名稱是destination.group。例如,當使用Spring Cloud Stream從現有RabbitMQ隊列中消費時,這很有用。僅在提供requiredGroups時适用,然後僅适用于這些組。

Default: false.

routingKeyExpression

A SpEL expression to determine the routing key to use when publishing messages. For a fixed routing key, use a literal expression, such as routingKeyExpression='my.routingKey' in a properties file or routingKeyExpression: '''my.routingKey''' in a YAML file.

一個SpEL表達式,用于确定釋出消息時要使用的路由鍵。對于固定路由鍵,請使用文字表達式,例如在屬性檔案中routingKeyExpression='my.routingKey'或在YAML檔案中routingKeyExpression: '''my.routingKey'''。

Default: destination or destination-<partition> for partitioned destinations.

預設值:用于分區目的地的destination或destination-<partition>。
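
As a hedged illustration, the two styles mentioned above might look like this for a producer binding named output (the header name and routing key are examples only):

#route based on a message header
spring.cloud.stream.rabbit.bindings.output.producer.routingKeyExpression=headers['routeTo']

#fixed routing key (a literal SpEL expression)
spring.cloud.stream.rabbit.bindings.output.producer.routingKeyExpression='my.routingKey'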

transacted

Whether to use transacted channels.

是否使用事務化通道。

Default: false.

ttl

Default time (in milliseconds) to live to apply to the queue when declared. Applies only when requiredGroups are provided and then only to those groups.

聲明時應用于隊列的預設時間(以毫秒為機關)。僅在提供requiredGroups時适用,然後僅适用于這些組。

Default: no limit

In the case of RabbitMQ, content type headers can be set by external applications. Spring Cloud Stream supports them as part of an extended internal protocol used for any type of transport — including transports, such as Kafka (prior to 0.11), that do not natively support headers.
在RabbitMQ的情況下,内容類型headers可以由外部應用程式設定。Spring Cloud Stream支援將它們作為擴充内部協定的一部分,用于任何類型的傳輸 - 包括Kafka(0.11之前)這類并非原生支援headers的傳輸。

17.4. Retry With the RabbitMQ Binder   使用RabbitMQ綁定器重試

When retry is enabled within the binder, the listener container thread is suspended for any back off periods that are configured. This might be important when strict ordering is required with a single consumer. However, for other use cases, it prevents other messages from being processed on that thread. An alternative to using binder retry is to set up dead lettering with time to live on the dead-letter queue (DLQ) as well as dead-letter configuration on the DLQ itself. See “RabbitMQ Binder Properties” for more information about the properties discussed here. You can use the following example configuration to enable this feature:

  • Set autoBindDlq to true. The binder creates a DLQ. Optionally, you can specify a name in deadLetterQueueName.
  • Set dlqTtl to the back off time you want to wait between redeliveries.
  • Set the dlqDeadLetterExchange to the default exchange. Expired messages from the DLQ are routed to the original queue, because the default deadLetterRoutingKey is the queue name (destination.group). Setting to the default exchange is achieved by setting the property with no value, as shown in the next example.

在綁定器中啟用重試時,監聽器容器線程將在配置的任何退避時段內暫停。當單個消費者需要嚴格排序時,這可能很重要。但是,對于其他用例,它會阻止在該線程上處理其他消息。使用綁定器重試的另一種方法是為死信隊列(DLQ)設定生存時間,并在DLQ本身上配置死信路由。有關此處讨論的屬性的更多資訊,請參閱“ RabbitMQ Binder屬性 ”。您可以使用以下示例配置來啟用此功能:

  • 設定autoBindDlq為true。綁定器建立DLQ。(可選)您可以在deadLetterQueueName中指定名稱。
  • 設定dlqTtl為您希望在兩次重新投遞之間等待的退避時間。
  • 設定dlqDeadLetterExchange為預設交換。來自DLQ的過期消息将路由到原始隊列,因為預設的deadLetterRoutingKey是隊列名稱(destination.group)。通過将屬性設定為無值來實作設定為預設交換,如下一個示例所示。

To force a message to be dead-lettered, either throw an AmqpRejectAndDontRequeueException or set requeueRejected to true (the default) and throw any exception.

要将消息強制為死信,請抛出AmqpRejectAndDontRequeueException或設定requeueRejected為true(預設值)并抛出任何異常。

The loop continues without end, which is fine for transient problems, but you may want to give up after some number of attempts. Fortunately, RabbitMQ provides the x-death header, which lets you determine how many cycles have occurred.

循環繼續沒有結束,這對于瞬态問題很好,但是你可能想在經過一些嘗試後放棄。幸運的是,RabbitMQ提供了x-death header,可以讓您确定發生了多少次循環。

To acknowledge a message after giving up, throw an ImmediateAcknowledgeAmqpException.

放棄後要确認一條消息,抛出ImmediateAcknowledgeAmqpException。

Putting it All Together   所有的放在一起

The following configuration creates an exchange myDestination, with the queue myDestination.consumerGroup bound to that topic exchange with a wildcard routing key #:

以下配置建立一個名為myDestination的主題交換機,并將隊列myDestination.consumerGroup以通配符路由鍵#綁定到該交換機:

---

spring.cloud.stream.bindings.input.destination=myDestination

spring.cloud.stream.bindings.input.group=consumerGroup

#disable binder retries

spring.cloud.stream.bindings.input.consumer.max-attempts=1

#dlx/dlq setup

spring.cloud.stream.rabbit.bindings.input.consumer.auto-bind-dlq=true

spring.cloud.stream.rabbit.bindings.input.consumer.dlq-ttl=5000

spring.cloud.stream.rabbit.bindings.input.consumer.dlq-dead-letter-exchange=

---

This configuration creates a DLQ bound to a direct exchange (DLX) with a routing key of myDestination.consumerGroup. When messages are rejected, they are routed to the DLQ. After 5 seconds, the message expires and is routed to the original queue by using the queue name as the routing key, as shown in the following example:

此配置建立一個綁定到直接交換機(DLX)的DLQ,路由鍵為myDestination.consumerGroup。當消息被拒絕時,它們將被路由到DLQ。5秒後,消息將過期,并使用隊列名稱作為路由鍵路由到原始隊列,如以下示例所示:

Spring Boot application

@SpringBootApplication

@EnableBinding(Sink.class)

public class XDeathApplication {

    public static void main(String[] args) {

        SpringApplication.run(XDeathApplication.class, args);

    }

    @StreamListener(Sink.INPUT)

    public void listen(String in, @Header(name = "x-death", required = false) Map<?,?> death) {

        if (death != null && death.get("count").equals(3L)) {

            // giving up - don't send to DLX

            throw new ImmediateAcknowledgeAmqpException("Failed after 4 attempts");

        }

        throw new AmqpRejectAndDontRequeueException("failed");

    }

}

Notice that the count property in the x-death header is a Long.

請注意,x-death header中的count屬性是Long。

17.5. Error Channels   錯誤管道

Starting with version 1.3, the binder unconditionally sends exceptions to an error channel for each consumer destination and can also be configured to send async producer send failures to an error channel. See “[binder-error-channels]” for more information.

從版本1.3開始,綁定器無條件地為每個消費者目标向錯誤通道發送異常,并且還可以配置為将異步生成器發送失敗發送到錯誤通道。有關詳細資訊,請參閱“ [binder-error-channels] ”。

RabbitMQ has two types of send failures:

  • Returned messages,
  • Negatively acknowledged Publisher Confirms.

RabbitMQ有兩種類型的發送失敗:

  • 傳回的消息,
  • 負面确認的釋出者确認。

The latter is rare. According to the RabbitMQ documentation "[A nack] will only be delivered if an internal error occurs in the Erlang process responsible for a queue.".

後者很少見。根據RabbitMQ文檔,“隻有在負責隊列的Erlang程序中發生内部錯誤時才會傳遞[A nack]。”

As well as enabling producer error channels (as described in “[binder-error-channels]”), the RabbitMQ binder only sends messages to the channels if the connection factory is appropriately configured, as follows:

  • ccf.setPublisherConfirms(true);
  • ccf.setPublisherReturns(true);

除了啟用生産者錯誤通道(如“ [binder-error-channels] ”中所述),如果連接配接工廠配置正确,RabbitMQ綁定器僅向通道發送消息,如下所示:

  • ccf.setPublisherConfirms(true);
  • ccf.setPublisherReturns(true);

When using Spring Boot configuration for the connection factory, set the following properties:

  • spring.rabbitmq.publisher-confirms
  • spring.rabbitmq.publisher-returns

将Spring Boot配置用于連接配接工廠時,請設定以下屬性:

  • spring.rabbitmq.publisher-confirms
  • spring.rabbitmq.publisher-returns
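
In other words, a Boot-configured connection factory needs both of the following entries in application.properties:

spring.rabbitmq.publisher-confirms=true

spring.rabbitmq.publisher-returns=true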

The payload of the ErrorMessage for a returned message is a ReturnedAmqpMessageException with the following properties:

  • failedMessage: The spring-messaging Message<?> that failed to be sent.
  • amqpMessage: The raw spring-amqp Message.
  • replyCode: An integer value indicating the reason for the failure (for example, 312 - No route).
  • replyText: A text value indicating the reason for the failure (for example, NO_ROUTE).
  • exchange: The exchange to which the message was published.
  • routingKey: The routing key used when the message was published.

傳回消息的ErrorMessage的負載是ReturnedAmqpMessageException,具有以下屬性的:

  • failedMessage:發送失敗的spring-messaging Message<?>。
  • amqpMessage:原始的spring-amqp Message。
  • replyCode:一個整數值,訓示失敗的原因(例如,312 - 無路由)。
  • replyText:訓示失敗原因的文本值(例如,NO_ROUTE)。
  • exchange:消息釋出的交換。
  • routingKey:釋出消息時使用的路由密鑰。

For negatively acknowledged confirmations, the payload is a NackedAmqpMessageException with the following properties:

  • failedMessage: The spring-messaging Message<?> that failed to be sent.
  • nackReason: A reason (if available — you may need to examine the broker logs for more information).

對于否定确認的确認,負載是一個NackedAmqpMessageException,具有以下屬性:

  • failedMessage:發送失敗的spring-messaging Message<?>。
  • nackReason:一個原因(如果可用 - 您可能需要檢查代理日志以擷取更多資訊)。

There is no automatic handling of these exceptions (such as sending to a dead-letter queue). You can consume these exceptions with your own Spring Integration flow.

沒有自動處理這些異常(例如發送到死信隊列)。您可以使用自己的Spring Integration流程來使用這些異常。
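
As a minimal, hedged sketch of such a flow, a @ServiceActivator can subscribe to the producer's error channel and inspect the payload; the channel name used here is illustrative (see "[binder-error-channels]" for the actual naming convention):

@ServiceActivator(inputChannel = "myDestination.errors")
public void handleSendFailure(ErrorMessage errorMessage) {
    Throwable cause = errorMessage.getPayload();
    if (cause instanceof ReturnedAmqpMessageException || cause instanceof NackedAmqpMessageException) {
        // inspect failedMessage, replyCode/replyText or nackReason and decide how to recover
        System.err.println("Send failure: " + cause.getMessage());
    }
}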

17.6. Dead-Letter Queue Processing   死信隊列處理

Because you cannot anticipate how users would want to dispose of dead-lettered messages, the framework does not provide any standard mechanism to handle them. If the reason for the dead-lettering is transient, you may wish to route the messages back to the original queue. However, if the problem is a permanent issue, that could cause an infinite loop. The following Spring Boot application shows an example of how to route those messages back to the original queue but moves them to a third “parking lot” queue after three attempts. The second example uses the RabbitMQ Delayed Message Exchange to introduce a delay to the re-queued message. In this example, the delay increases for each attempt. These examples use a @RabbitListener to receive messages from the DLQ. You could also use RabbitTemplate.receive() in a batch process.

因為您無法預測使用者将如何處理死信消息,是以架構不提供任何标準機制來處理它們。如果死信的原因是暫時的,您可能希望将消息路由回原始隊列。但是,如果問題是一個永久性問題,那麼可能會導緻無限循環。以下Spring Boot應用程式顯示了如何将這些消息路由回原始隊列但在三次嘗試後将它們移動到第三個“停車場”隊列的示例。第二個示例使用RabbitMQ延遲消息交換為重新排隊的消息引入延遲。在此示例中,每次嘗試的延遲都會增加。這些示例使用@RabbitListener來接收來自DLQ的消息。您也可以RabbitTemplate.receive()在批進行中使用。

The examples assume the original destination is so8400in and the consumer group is so8400.

這些示例假設原始目标是so8400in,而消費者組是so8400。

Non-Partitioned Destinations   未分區目标

The first two examples are for when the destination is not partitioned:

前兩個示例適用于目标未分區的情況:

@SpringBootApplication

public class ReRouteDlqApplication {

    private static final String ORIGINAL_QUEUE = "so8400in.so8400";

    private static final String DLQ = ORIGINAL_QUEUE + ".dlq";

    private static final String PARKING_LOT = ORIGINAL_QUEUE + ".parkingLot";

    private static final String X_RETRIES_HEADER = "x-retries";

    public static void main(String[] args) throws Exception {

        ConfigurableApplicationContext context = SpringApplication.run(ReRouteDlqApplication.class, args);

        System.out.println("Hit enter to terminate");

        System.in.read();

        context.close();

    }

    @Autowired

    private RabbitTemplate rabbitTemplate;

    @RabbitListener(queues = DLQ)

    public void rePublish(Message failedMessage) {

        Integer retriesHeader = (Integer) failedMessage.getMessageProperties().getHeaders().get(X_RETRIES_HEADER);

        if (retriesHeader == null) {

            retriesHeader = Integer.valueOf(0);

        }

        if (retriesHeader < 3) {

            failedMessage.getMessageProperties().getHeaders().put(X_RETRIES_HEADER, retriesHeader + 1);

            this.rabbitTemplate.send(ORIGINAL_QUEUE, failedMessage);

        }

        else {

            this.rabbitTemplate.send(PARKING_LOT, failedMessage);

        }

    }

    @Bean

    public Queue parkingLot() {

        return new Queue(PARKING_LOT);

    }

}

@SpringBootApplication

public class ReRouteDlqApplication {

    private static final String ORIGINAL_QUEUE = "so8400in.so8400";

    private static final String DLQ = ORIGINAL_QUEUE + ".dlq";

    private static final String PARKING_LOT = ORIGINAL_QUEUE + ".parkingLot";

    private static final String X_RETRIES_HEADER = "x-retries";

    private static final String DELAY_EXCHANGE = "dlqReRouter";

    public static void main(String[] args) throws Exception {

        ConfigurableApplicationContext context = SpringApplication.run(ReRouteDlqApplication.class, args);

        System.out.println("Hit enter to terminate");

        System.in.read();

        context.close();

    }

    @Autowired

    private RabbitTemplate rabbitTemplate;

    @RabbitListener(queues = DLQ)

    public void rePublish(Message failedMessage) {

        Map<String, Object> headers = failedMessage.getMessageProperties().getHeaders();

        Integer retriesHeader = (Integer) headers.get(X_RETRIES_HEADER);

        if (retriesHeader == null) {

            retriesHeader = Integer.valueOf(0);

        }

        if (retriesHeader < 3) {

            headers.put(X_RETRIES_HEADER, retriesHeader + 1);

            headers.put("x-delay", 5000 * retriesHeader);

            this.rabbitTemplate.send(DELAY_EXCHANGE, ORIGINAL_QUEUE, failedMessage);

        }

        else {

            this.rabbitTemplate.send(PARKING_LOT, failedMessage);

        }

    }

    @Bean

    public DirectExchange delayExchange() {

        DirectExchange exchange = new DirectExchange(DELAY_EXCHANGE);

        exchange.setDelayed(true);

        return exchange;

    }

    @Bean

    public Binding bindOriginalToDelay() {

        return BindingBuilder.bind(new Queue(ORIGINAL_QUEUE)).to(delayExchange()).with(ORIGINAL_QUEUE);

    }

    @Bean

    public Queue parkingLot() {

        return new Queue(PARKING_LOT);

    }

}

Partitioned Destinations   已分區目标

With partitioned destinations, there is one DLQ for all partitions. We determine the original queue from the headers.

對于已分區目标,所有分區都有一個DLQ。我們從headers中确定原始隊列。

republishToDlq=false

When republishToDlq is false, RabbitMQ publishes the message to the DLX/DLQ with an x-death header containing information about the original destination, as shown in the following example:

當republishToDlq是false,RabbitMQ使用含有關于原始目的地資訊的x-death header将消息釋出到DLX/DLQ,如圖以下示例:

@SpringBootApplication
public class ReRouteDlqApplication {

    private static final String ORIGINAL_QUEUE = "so8400in.so8400";

    private static final String DLQ = ORIGINAL_QUEUE + ".dlq";

    private static final String PARKING_LOT = ORIGINAL_QUEUE + ".parkingLot";

    private static final String X_DEATH_HEADER = "x-death";

    private static final String X_RETRIES_HEADER = "x-retries";

    public static void main(String[] args) throws Exception {
        ConfigurableApplicationContext context = SpringApplication.run(ReRouteDlqApplication.class, args);
        System.out.println("Hit enter to terminate");
        System.in.read();
        context.close();
    }

    @Autowired
    private RabbitTemplate rabbitTemplate;

    @SuppressWarnings("unchecked")
    @RabbitListener(queues = DLQ)
    public void rePublish(Message failedMessage) {
        Map<String, Object> headers = failedMessage.getMessageProperties().getHeaders();
        Integer retriesHeader = (Integer) headers.get(X_RETRIES_HEADER);
        if (retriesHeader == null) {
            retriesHeader = Integer.valueOf(0);
        }
        if (retriesHeader < 3) {
            headers.put(X_RETRIES_HEADER, retriesHeader + 1);
            List<Map<String, ?>> xDeath = (List<Map<String, ?>>) headers.get(X_DEATH_HEADER);
            String exchange = (String) xDeath.get(0).get("exchange");
            List<String> routingKeys = (List<String>) xDeath.get(0).get("routing-keys");
            this.rabbitTemplate.send(exchange, routingKeys.get(0), failedMessage);
        }
        else {
            this.rabbitTemplate.send(PARKING_LOT, failedMessage);
        }
    }

    @Bean
    public Queue parkingLot() {
        return new Queue(PARKING_LOT);
    }

}

republishToDlq=true

When republishToDlq is true, the republishing recoverer adds the original exchange and routing key to headers, as shown in the following example:

當republishToDlq是true時,重新釋出恢複器将原始交換和路由關鍵添加到headers中,因為顯示在下面的例子:

@SpringBootApplication
public class ReRouteDlqApplication {

    private static final String ORIGINAL_QUEUE = "so8400in.so8400";

    private static final String DLQ = ORIGINAL_QUEUE + ".dlq";

    private static final String PARKING_LOT = ORIGINAL_QUEUE + ".parkingLot";

    private static final String X_RETRIES_HEADER = "x-retries";

    private static final String X_ORIGINAL_EXCHANGE_HEADER = RepublishMessageRecoverer.X_ORIGINAL_EXCHANGE;

    private static final String X_ORIGINAL_ROUTING_KEY_HEADER = RepublishMessageRecoverer.X_ORIGINAL_ROUTING_KEY;

    public static void main(String[] args) throws Exception {
        ConfigurableApplicationContext context = SpringApplication.run(ReRouteDlqApplication.class, args);
        System.out.println("Hit enter to terminate");
        System.in.read();
        context.close();
    }

    @Autowired
    private RabbitTemplate rabbitTemplate;

    @RabbitListener(queues = DLQ)
    public void rePublish(Message failedMessage) {
        Map<String, Object> headers = failedMessage.getMessageProperties().getHeaders();
        Integer retriesHeader = (Integer) headers.get(X_RETRIES_HEADER);
        if (retriesHeader == null) {
            retriesHeader = Integer.valueOf(0);
        }
        if (retriesHeader < 3) {
            headers.put(X_RETRIES_HEADER, retriesHeader + 1);
            String exchange = (String) headers.get(X_ORIGINAL_EXCHANGE_HEADER);
            String originalRoutingKey = (String) headers.get(X_ORIGINAL_ROUTING_KEY_HEADER);
            this.rabbitTemplate.send(exchange, originalRoutingKey, failedMessage);
        }
        else {
            this.rabbitTemplate.send(PARKING_LOT, failedMessage);
        }
    }

    @Bean
    public Queue parkingLot() {
        return new Queue(PARKING_LOT);
    }

}

17.7. Partitioning with the RabbitMQ Binder   使用RabbitMQ綁定器進行分區

RabbitMQ does not support partitioning natively.

RabbitMQ本身不支援分區。

Sometimes, it is advantageous to send data to specific partitions — for example, when you want to strictly order message processing, all messages for a particular customer should go to the same partition.

有時,将資料發送到特定分區是有利的 - 例如,當您要嚴格訂購消息處理時,特定客戶的所有消息都應該轉到同一分區。

The RabbitMessageChannelBinder provides partitioning by binding a queue for each partition to the destination exchange.

RabbitMessageChannelBinder通過将每個分區的隊列綁定到目的地交換提供分區。

The following Java and YAML examples show how to configure the producer:

以下Java和YAML示例顯示如何配置生産者:

Producer

@SpringBootApplication

@EnableBinding(Source.class)

public class RabbitPartitionProducerApplication {

    private static final Random RANDOM = new Random(System.currentTimeMillis());

    private static final String[] data = new String[] {

            "abc1", "def1", "qux1",

            "abc2", "def2", "qux2",

            "abc3", "def3", "qux3",

            "abc4", "def4", "qux4",

            };

    public static void main(String[] args) {

        new SpringApplicationBuilder(RabbitPartitionProducerApplication.class)

            .web(false)

            .run(args);

    }

    @InboundChannelAdapter(channel = Source.OUTPUT, poller = @Poller(fixedRate = "5000"))

    public Message<?> generate() {

        String value = data[RANDOM.nextInt(data.length)];

        System.out.println("Sending: " + value);

        return MessageBuilder.withPayload(value)

                .setHeader("partitionKey", value)

                .build();

    }

}

application.yml

    spring:

      cloud:

        stream:

          bindings:

            output:

              destination: partitioned.destination

              producer:

                partitioned: true

                partition-key-expression: headers['partitionKey']

                partition-count: 2

                required-groups:

                - myGroup

The configuration in the preceding example uses the default partitioning (key.hashCode() % partitionCount). This may or may not provide a suitably balanced algorithm, depending on the key values. You can override this default by using the partitionSelectorExpression or partitionSelectorClass properties.

The required-groups property is required only if you need the consumer queues to be provisioned when the producer is deployed. Otherwise, any messages sent to a partition are lost until the corresponding consumer is deployed.

前面示例中的配置使用預設分區(key.hashCode() % partitionCount)。根據鍵值,這可能會或可能不會提供适當平衡的算法。您可以使用partitionSelectorExpression或partitionSelectorClass屬性覆寫此預設值。

僅當您需要在部署生産者時配置消費者隊列時,才需要required-groups屬性。否則,在部署相應的消費者之前,發送到分區的任何消息都将丢失。

The following configuration provisions a topic exchange:

以下配置提供了主題交換:

The following queues are bound to that exchange:

以下隊列綁定到該交換:

The following bindings associate the queues to the exchange:

以下綁定将隊列關聯到交換:

The following Java and YAML examples continue the previous examples and show how to configure the consumer:

以下Java和YAML示例繼續前面的示例,并說明如何配置消費者:

Consumer

@SpringBootApplication

@EnableBinding(Sink.class)

public class RabbitPartitionConsumerApplication {

    public static void main(String[] args) {

        new SpringApplicationBuilder(RabbitPartitionConsumerApplication.class)

            .web(false)

            .run(args);

    }

    @StreamListener(Sink.INPUT)

    public void listen(@Payload String in, @Header(AmqpHeaders.CONSUMER_QUEUE) String queue) {

        System.out.println(in + " received from queue " + queue);

    }

}

application.yml

    spring:

      cloud:

        stream:

          bindings:

            input:

              destination: partitioned.destination

              group: myGroup

              consumer:

                partitioned: true

                instance-index: 0

The RabbitMessageChannelBinder does not support dynamic scaling. There must be at least one consumer per partition. The consumer’s instanceIndex is used to indicate which partition is consumed. Platforms such as Cloud Foundry can have only one instance with an instanceIndex.
RabbitMessageChannelBinder不支援動态擴充。每個分區必須至少有一個消費者。消費者的instanceIndex用于訓示消費了哪個分區。Cloud Foundry等平台隻能有一個帶有instanceIndex的執行個體。

Appendices

Appendix A: Building

A.1. Basic Compile and Test   基本編譯和測試

To build the source you will need to install JDK 1.7.

要建構源代碼,您需要安裝JDK 1.7。

The build uses the Maven wrapper so you don’t have to install a specific version of Maven. To enable the tests for Redis, Rabbit, and Kafka bindings you should have those servers running before building. See below for more information on running the servers.

建構使用Maven包裝器,是以您不必安裝特定版本的Maven。要為Redis,Rabbit,和Kafka綁定啟用測試,您應該在建構之前運作這些伺服器。有關運作伺服器的更多資訊,請參見下文。

The main build command is

主建構指令是

$ ./mvnw clean install

You can also add '-DskipTests' if you like, to avoid running the tests.

如果願意,您還可以添加'-DskipTests',以避免運作測試。

You can also install Maven (>=3.3.3) yourself and run the mvn command in place of ./mvnw in the examples below. If you do that you also might need to add -P spring if your local Maven settings do not contain repository declarations for spring pre-release artifacts.
您也可以自己安裝Maven(>= 3.3.3),并在下面的示例中用mvn指令代替./mvnw。如果這樣做,且您的本地Maven設定不包含spring pre-release工件的存儲庫聲明,則可能還需要添加-P spring。
Be aware that you might need to increase the amount of memory available to Maven by setting a MAVEN_OPTS environment variable with a value like -Xmx512m -XX:MaxPermSize=128m. We try to cover this in the .mvn configuration, so if you find you have to do it to make a build succeed, please raise a ticket to get the settings added to source control.
請注意,您可能需要將MAVEN_OPTS環境變量設定為類似-Xmx512m -XX:MaxPermSize=128m的值,以增加Maven可用的記憶體量。我們嘗試在.mvn配置中涵蓋這一點,是以如果您發現必須這樣做才能使建構成功,請提出一個票證以將設定添加到源代碼管理中。

The projects that require middleware generally include a docker-compose.yml, so consider using Docker Compose to run the middleware servers in Docker containers. See the README in the scripts demo repository for specific instructions about the common cases of mongo, rabbit and redis.

需要中間件的項目通常包括docker-compose.yml,是以請考慮使用 Docker Compose在Docker容器中運作middeware伺服器。有關mongo,rabbit,和redis常見情況的具體說明,請參閱腳本示範存儲庫中的README 。

A.2. Documentation

There is a "full" profile that will generate documentation.

有一個“完整”的配置檔案将生成文檔。
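
Assuming the profile is activated in the standard Maven way, the documentation build would look something like this:

$ ./mvnw clean install -P full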

A.3. Working with the code   使用代碼

If you don’t have an IDE preference we would recommend that you use Spring Tools Suite or Eclipse when working with the code. We use the m2eclipe eclipse plugin for maven support. Other IDEs and tools should also work without issue.

如果您沒有IDE首選項,我們建議您在使用代碼時使用 Spring Tools Suite或 Eclipse。我們使用 m2eclipe eclipse插件來支援maven。其他IDE和工具也應該沒有問題。

A.3.1. Importing into eclipse with m2eclipse   使用m2eclipse導入eclipse

We recommend the m2eclipe eclipse plugin when working with eclipse. If you don’t already have m2eclipse installed it is available from the "eclipse marketplace".

在使用eclipse時,我們建議使用m2eclipe eclipse插件。如果您還沒有安裝m2eclipse,可以從“eclipse marketplace”獲得。

Unfortunately m2e does not yet support Maven 3.3, so once the projects are imported into Eclipse you will also need to tell m2eclipse to use the .settings.xml file for the projects. If you do not do this you may see many different errors related to the POMs in the projects. Open your Eclipse preferences, expand the Maven preferences, and select User Settings. In the User Settings field click Browse and navigate to the Spring Cloud project you imported selecting the .settings.xml file in that project. Click Apply and then OK to save the preference changes.

不幸的是m2e還不支援Maven 3.3,是以一旦将項目導入Eclipse,你還需要告訴m2eclipse将該.settings.xml檔案用于項目。如果不這樣做,您可能會看到許多與項目中的POM相關的錯誤。打開Eclipse首選項,展開Maven首選項,然後選擇使用者設定。在“使用者設定”字段中,單擊“浏覽”并導航到導入的Spring Cloud項目,選擇該.settings.xml項目中的檔案。單擊應用,然後單擊确定以儲存首選項更改。

Alternatively you can copy the repository settings from .settings.xml into your own ~/.m2/settings.xml.
或者,您可以将存儲庫設定複制.settings.xml到您自己的設定中~/.m2/settings.xml。

A.3.2. Importing into eclipse without m2eclipse   不使用m2eclipse導入eclipse

If you prefer not to use m2eclipse you can generate eclipse project metadata using the following command:

如果您不想使用m2eclipse,可以使用以下指令生成eclipse項目中繼資料:

$ ./mvnw eclipse:eclipse

The generated eclipse projects can be imported by selecting import existing projects from the file menu.

可以通過從file菜單中選擇import existing projects導入生成的eclipse項目。

Contributing   貢獻

Spring Cloud is released under the non-restrictive Apache 2.0 license, and follows a very standard Github development process, using Github tracker for issues and merging pull requests into master. If you want to contribute even something trivial please do not hesitate, but follow the guidelines below.

Spring Cloud是在非限制性Apache 2.0許可下釋出的,遵循非常标準的Github開發過程,使用Github跟蹤器解決問題并将拉取請求合并到master中。如果您想貢獻一些微不足道的東西,請不要猶豫,但請遵循以下指南。

A.4. Sign the Contributor License Agreement   簽署貢獻者許可協定

Before we accept a non-trivial patch or pull request we will need you to sign the contributor’s agreement. Signing the contributor’s agreement does not grant anyone commit rights to the main repository, but it does mean that we can accept your contributions, and you will get an author credit if we do. Active contributors might be asked to join the core team, and given the ability to merge pull requests.

在我們接受非平凡的更新檔或拉取請求之前,我們需要您簽署 貢獻者的協定。簽署貢獻者的協定不會授予任何人對主存儲庫的送出權利,但它确實意味着我們可以接受您的貢獻,如果我們這樣做,您将獲得作者信用。可能會要求活躍的貢獻者加入核心團隊,并且能夠合并拉取請求。

A.5. Code Conventions and Housekeeping   代碼約定和内務管理

None of these is essential for a pull request, but they will all help. They can also be added after the original pull request but before a merge.

  • Use the Spring Framework code format conventions. If you use Eclipse you can import formatter settings using the eclipse-code-formatter.xml file from the Spring Cloud Build project. If using IntelliJ, you can use the Eclipse Code Formatter Plugin to import the same file.
  • Make sure all new .java files have a simple Javadoc class comment with at least an @author tag identifying you, and preferably at least a paragraph on what the class is for.
  • Add the ASF license header comment to all new .java files (copy from existing files in the project)
  • Add yourself as an @author to the .java files that you modify substantially (more than cosmetic changes).
  • Add some Javadocs and, if you change the namespace, some XSD doc elements.
  • A few unit tests would help a lot as well — someone has to do it.
  • If no-one else is using your branch, please rebase it against the current master (or other target branch in the main project).
  • When writing a commit message please follow these conventions, if you are fixing an existing issue please add Fixes gh-XXXX at the end of the commit message (where XXXX is the issue number).

這些都不是拉取請求所必需的,但它們都會有所幫助。它們也可以在原始拉取請求之後但在合并之前添加。

  • 使用Spring Framework代碼格式約定。如果使用Eclipse,則可以使用Spring Cloud Build項目中的eclipse-code-formatter.xml檔案 導入格式化程式設定 。如果使用IntelliJ,則可以使用 Eclipse Code Formatter Plugin導入同一檔案。
  • 確定所有新.java檔案都有一個簡單的Javadoc類注釋,至少有一個@author辨別您的 标記,最好至少有一個關于該類所用内容的段落。
  • 将ASF許可證頭注釋添加到所有新.java檔案(從項目中的現有檔案複制)
  • 将您自己添加為@author您實際修改的.java檔案(超過整容更改)。
  • 添加一些Javadoc,如果更改命名空間,則添加一些XSD doc元素。
  • 一些單元測試也會有很多幫助 - 有人必須這樣做。
  • 如果沒有其他人使用您的分支,請将其重新綁定到目前主伺服器(或主項目中的其他目标分支)。
  • 在編寫送出消息時,請遵循這些約定,如果要修複現有問題,請Fixes gh-XXXX在送出消息的末尾添加(其中XXXX是問題編号)。

Last updated 2018-07-11 12:49:33 UTC
