
Design and implementation of Zhihu commercial test bench Simba

Author: Flash Gene

At present, the Zhihu client ships on a weekly release cycle, and each version goes through stages such as development self-testing, product acceptance, and QA regression. Commercial ads span a large number of ad slots and templates, and it is impossible to serve every ad format online during testing. In the early days, all advertising mock data was prepared offline, which led to the following pain points during acceptance:

  • The offline mock data used by R&D and QA is not unified, which raises communication cost and makes operation complex
  1. R&D and QA each maintain mock data locally and replace responses with Charles' Map Local feature. Each party's copy is inconsistent and quickly goes stale, so when reproducing a bug the mismatched data produces mismatched behavior, which only adds communication overhead.
  2. Manually maintained mock data has to be edited frequently when testing abnormal cases, which easily introduces bad data and false-positive bugs.
  3. Data such as videos carries expiration checks, so local mock data has to be regenerated every time, a cumbersome step that wastes a lot of time.
  • The barrier to entry for testing is high

    Everyone involved in project acceptance has to learn how to use Charles, install an SSL certificate on their phone, and understand the purpose of each piece of local mock data.

  • Online acceptance takes a long time

    For online acceptance, a test contract, order, and advertisement must first be created in the ad delivery system, and the ad can only be served online after it passes review. The process is long and the preparation cumbersome.

  • Tracking ("buried point") acceptance is complex

    Ad tracking payloads are encrypted, so during acceptance QA has to decrypt each tracking record to confirm it is correct. In addition, tracking data for commercial ads is sent to the production tracking statistics service, which automation cannot query in real time, so the commercial client automation cannot verify tracking events.

To solve these problems, the Zhihu commercial quality assurance team designed and implemented a client workbench, Simba. On the workbench, any project member can preview every template and style in every online ad slot by scanning an ad QR code. Members can also inspect and edit the ad template data to be tested (hereafter "benchmark data") for fine-grained testing. During testing, tracking data is parsed and aggregated automatically, and the workbench displays the counts of each type of tracking event per ad in real time. Simba also exposes a creative interface to the client automated-testing service: after the creative library is updated, no automation use-case code needs to change, and creatives are picked up in real time. The workbench principles are described below.

Workbench principle

Preview

The workbench fetches fields such as ad id and creative id from the business database, pieces together a scheme link according to the preview-link rules, and generates a QR code from it. After scanning the QR code, the client routes to the corresponding ad page and writes the preview link into the request header of the ad request it sends to the backend. Backend services such as splash screen, Q&A, and homepage pass the preview link through to the ad engine, and the ad engine returns the specified ad based on the preview link in the request header. This is shown in Figure 1.


Figure 1 Schematic diagram of the ad preview
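The link-stitching step can be sketched as follows. This is a minimal illustration only: the scheme host, path, and parameter names are assumptions, not Zhihu's actual preview-link rules.

```python
from urllib.parse import urlencode

def build_preview_link(page: str, ad_id: int, creative_id: int) -> str:
    """Piece together a client scheme link that routes to the ad page and
    carries the preview parameters for the ad engine (illustrative format)."""
    params = {"ad_id": ad_id, "creative_id": creative_id, "preview": 1}
    return f"zhihu://{page}?{urlencode(params)}"

# The workbench would render a link like this as a QR code for the client to scan.
link = build_preview_link("feed", 1001, 2002)
```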

Multi-user proxy

Besides the real-time preview via QR code described above, the workbench also provides proxy services and benchmark data so that acceptors can modify ad data for more detailed testing. Daily work requires multiple people to run acceptance at the same time, so Mitmproxy was introduced into the workbench to implement multi-user proxying. Each user obtains a dedicated proxy service on the workbench, and requests and responses do not interfere with one another, as shown in Figure 2. Mitmproxy can intercept and modify requests and responses, and it ships with a command-line tool, Mitmdump. When starting the Mitmdump service, the -s parameter loads a script that customizes how received requests are handled and what response data is returned, and the -H parameter adds a header to each proxied request to distinguish users.


Figure 2 Multi-user proxy
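A per-user tagging script of this kind might look like the sketch below, loaded with something like `mitmdump -s addon.py -p 8081`. The header name and port value are illustrative assumptions, not the workbench's actual conventions.

```python
# Minimal sketch of a Mitmdump addon script for per-user tagging.
USER_PORT = "8081"  # each acceptor gets a dedicated proxy port (assumed convention)

def request(flow):
    """mitmproxy request hook: tag every intercepted request with this
    proxy's port so the workbench backend can tell users apart."""
    flow.request.headers["X-Simba-User"] = USER_PORT
```

Because each user's Mitmdump instance runs on its own port with its own tag, the backend can route mock data and tracking statistics back to the right person.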

These are the main principles of the workbench. Next, we introduce its architecture and implementation.

Workbench architecture

The workbench front end is implemented with React + AntD and has three modules: preview, mock, and proxy (mitmproxy).

The workbench back end uses the Python Flask framework and has four modules: preview, scheduled tasks (benchmark data), mock, and proxy (mitmproxy).


Figure 3 Workbench architecture

Below we take a closer look at the four main modules.

Preview

QA creates a batch of test ads in the production environment and leaves them in a rejected state, so they have no impact on production. The workbench backend generates a scheme link using each ad slot's preview-request stitching rules and stores it in the workbench database. The workbench front end displays the test ad creatives and the QR code corresponding to each scheme. Users can quickly filter the ads to be accepted by ad slot, ad module, and ad style, then use the Zhihu client's QR scanner to jump to the designated ad page and accept the specified ad. This is shown in Figure 4.

Figure 4 Advertisement preview

Benchmark data generation

1. Benchmark data timer: Using the scheme links stored by the preview module and the production ad request headers captured through the proxy service, the workbench backend simulates ad requests on a daily schedule, obtains the ad response data, and saves it in the workbench database as benchmark data. When a response field changes, the benchmark data is updated.

2. Video data update timer: The workbench filters the preview links of video ads out of all scheme links, finds the ads those schemes return in the benchmark data and user data, re-fetches each video link by video ID, and periodically updates the workbench's video benchmark data.

3. Real-time ad landing page update: For testing different types of ad landing pages, the workbench can replace the value of a specified field in the benchmark data. This way, a single benchmark record plus 10 preset landing-page values is enough to verify 10 different landing page types, reducing the cost of maintaining benchmark data.
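The field-replacement idea can be sketched like this. The field names and the shape of the benchmark record are assumptions for illustration, not Zhihu's real ad response schema.

```python
import copy

def with_landing_page(benchmark: dict, landing_url: str) -> dict:
    """Return a copy of one benchmark record with its landing-page field
    swapped, so one record can cover many landing-page types."""
    data = copy.deepcopy(benchmark)  # never mutate the shared benchmark record
    data["creative"]["landing_url"] = landing_url
    return data

# One benchmark record, several preset landing-page values.
base = {"creative": {"landing_url": "https://example.com/a", "title": "demo"}}
variants = [with_landing_page(base, u)
            for u in ("zhihu://answer/1", "https://example.com/form")]
```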

Mock

The mock module provides benchmark data replication, benchmark data enablement, request header verification, and tracking statistics.

1. Benchmark data replication: As mentioned above, the benchmark data is regularly refreshed against the latest online interface to keep it current. When an acceptor starts a round of testing, they can copy the benchmark data into their personal directory as their own mock data, then select specific records there for testing. This is shown in Figure 5.


Figure 5 Mock data module

2. Benchmark data enablement: When the workbench receives an ad request from Mitmdump, it checks whether mock data is enabled; if so, it returns the user's enabled benchmark data. Otherwise, it rebuilds the request from the original data stored in the header and sends it to the target service.

3. Request header verification: If a request header is wrong, the returned ad data will not match expectations. The header has many fields to check and they are easy to forget, so the workbench provides an automatic check: for every request forwarded to the workbench, the important header fields are verified and the results are displayed on the workbench in real time. This is shown in Figure 6.


Figure 6 Request header verification
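An automatic check of this kind might look like the sketch below. The required field names here are placeholders, not the actual fields the workbench validates.

```python
# Hypothetical required header fields, for illustration only.
REQUIRED_HEADERS = ("x-api-version", "x-app-build", "authorization")

def check_headers(headers: dict) -> list:
    """Return the required header fields that are missing or empty, so the
    workbench can surface them to the acceptor in real time."""
    lowered = {k.lower(): v for k, v in headers.items()}
    return [name for name in REQUIRED_HEADERS if not lowered.get(name)]

missing = check_headers({"X-API-Version": "7", "Authorization": "Bearer t"})
```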

4. Tracking statistics: When the workbench receives a tracking request from Mitmdump, it parses the parameters in the request header and URL, decrypts them, aggregates the information by ad, and stores it in the database. The workbench front end tallies the key tracking data of the currently displayed ads by ad slot, ad, exposure count, and click count, as shown in Figure 7.


Figure 7 Tracking statistics module
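The aggregation step can be sketched as below. The parameter names, and the simplification that the payload has already been decrypted, are assumptions for illustration.

```python
from collections import Counter
from urllib.parse import urlparse, parse_qs

def tally_events(tracking_urls, counter=None):
    """Parse the ad id and event type out of each tracking URL and count
    exposures/clicks per ad (parameter names are assumed)."""
    counter = counter if counter is not None else Counter()
    for url in tracking_urls:
        q = parse_qs(urlparse(url).query)
        ad_id, event = q["ad_id"][0], q["event"][0]
        counter[(ad_id, event)] += 1
    return counter

stats = tally_events([
    "https://host/track?ad_id=1001&event=impression",
    "https://host/track?ad_id=1001&event=click",
    "https://host/track?ad_id=1001&event=impression",
])
```

A per-ad counter like this is what the front end would read to display exposure and click counts in real time.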

5. Client UI automation use-case interface: Automated tests open the page containing the ad via its Scheme in order to operate on the ad. The workbench's automation use-case interface takes an ad slot id and, after deduplicating by style, returns all the Schemes of that slot to the automation for testing.

6. Automated tracking verification interface: When writing automation use cases, a fixed parameter is appended to the Scheme, for example Key=<workbench host>:<workbench port>/<workbench route>. The engine recognizes the keyword "Key", extracts the workbench's tracking receiver address from it, and adds that address as a third-party monitoring link on the ad. When the ad is exposed or clicked, the workbench therefore receives the ad's tracking requests and processes them with the tracking statistics function above. If the scheme contains no "Key" keyword, the engine does nothing special, so normal ads are unaffected. When the automation needs to verify tracking data, it calls the workbench's automated tracking verification interface to obtain the ad's exposure and click counts for assertions.
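Appending the "Key" parameter might look like this. The exact separator and address format are assumptions following the convention described above.

```python
def add_tracking_key(scheme: str, host: str, port: int, route: str) -> str:
    """Append the workbench tracking-receiver address to an ad scheme so the
    engine adds it as a third-party monitoring link (format is assumed)."""
    sep = "&" if "?" in scheme else "?"
    return f"{scheme}{sep}Key={host}:{port}/{route}"

url = add_tracking_key("zhihu://feed?ad_id=1001", "simba.internal", 8080, "track")
```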

Mitmproxy

The Mitmproxy module filters and forwards requests. The Mitmdump services started for users run on the backend server and do not depend on any local environment. Each user uses their own Mitmdump proxy service, so no user's operations affect another's. This is shown in Figure 8.

1. When starting Mitmdump, a parameter adds a header, such as the proxy port, to every request this proxy receives, marking the request so the backend can identify which user it belongs to.

2. Another parameter specifies a script. The script selectively intercepts requests (ad requests, tracking requests, and so on); after interception it modifies the request's Host, Port, Path, and Scheme, forwards the request to the mock module, and adds a header storing the request's original data for later use.


Figure 8 Proxy module
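The intercept-and-forward script described above can be sketched as a Mitmdump addon (loaded with `mitmdump -s forward.py`). The workbench address, header names, and filtered paths are illustrative assumptions.

```python
# Assumed workbench mock-module address and intercept rules, for illustration.
MOCK_HOST, MOCK_PORT = "simba.internal", 8080
AD_PATHS = ("/ads", "/track")  # ad requests and tracking requests

def request(flow):
    """mitmproxy request hook: stash the original target in headers, then
    point the request at the workbench mock module."""
    if not flow.request.path.startswith(AD_PATHS):
        return  # unrelated traffic passes through untouched
    flow.request.headers["X-Original-Host"] = flow.request.host
    flow.request.headers["X-Original-Scheme"] = flow.request.scheme
    flow.request.host = MOCK_HOST
    flow.request.port = MOCK_PORT
    flow.request.scheme = "http"
```

Storing the original host and scheme in headers is what lets the mock module rebuild and forward the request to the real target when the user has no mock data enabled.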

Summary

The Zhihu commercial test workbench came into being to solve acceptance of the full set of online ad card templates. It is now widely used in scenarios such as style acceptance, development self-testing, and version regression, greatly accelerating testing and shortening test time. Combined with client UI automation, screenshots of every style are captured automatically, which makes designers' acceptance far easier and has increased their acceptance efficiency fourfold. Version integration regression time has dropped from 6 hours to 4 hours. The platform continues to take in product and development requirements for iteration, providing strong support for commercial quality assurance.

Author: wmaidouw

Source: https://zhuanlan.zhihu.com/p/70876764
