
Lightweight mock interface and document management platform - Ming

Author: Flash Gene

"Ming" is a mock interface and document management platform within Zhihu.

It is designed to be simple and easy to use, and it solves two long-standing problems: mock interfaces and documents used to be unmanaged and chaotic, and they chronically lagged behind the online interfaces.

Although there are many similar open-source platform tools in the industry, the design and implementation of "Ming" is based on the team's own actual situation, making it more streamlined and better suited to the team's needs.

Background

Before "Ming" was built, mock interfaces and interface documents lacked unified management and were scattered across various internal systems, which caused several problems.

Interface documents lacked unified management

Some teams liked to write interface documentation on an internal collaborative documentation platform, while others preferred git repositories, with directories of varying locations and depths.

When engineers later wanted to consult an interface's historical documentation, it was often impossible to find, and many documents were simply lost.

Mock interfaces lacked unified management

Mock interfaces were usually written by the front end in its own code repository: a bit of Node.js HTTP service code that produced a simple mock interface.

To use a mock interface, you had to start the local mock service, switch the local development proxy, modify the mock interface code, and so on; given all these steps, writing a mock interface was often skipped altogether.

Another problem was that there was no way to look up historical interfaces, and after a while no one remembered what a given piece of mock interface code did.

Mock interfaces and documentation lag behind online interfaces

Once a requirement has been developed and gone live, its mock interface and documentation become unattended and begin to be forgotten.

As requirements iterate, developers are likely to forget to update the mock interface and documentation, so they drift further and further from the online interface and end up as a heap of historical garbage.

Lack of testing after interface changes

When engineers modify an old interface, mistakes are inevitable; they introduce bugs and lead to online failures. Avoiding such problems usually takes a lot of effort and time spent writing test scripts.

Everyone understands that tests should be written during development, but in reality we often don't have the time and energy to do so.

Lightweight solution

To solve these problems, which are common across companies, the industry has produced many similar interface management platforms, each with its own focus.

The "Ming" platform also has a unique focus, which is "lightweight".

Interface definition

The process of defining an interface is actually the process of creating an interface on the platform, with the aim of eventually generating "documentation" and "mock interfaces".

However, when it comes to how to define interfaces, different interface management platforms use different schemes.

Most interface management platforms treat interface definition as producing a structured interface description schema.

JSON Schema

Some platforms use JSON Schema, which specifies each field's type, its allowed values, and so on.

A JSON schema looks like this:

{
  "type": "object",
  "properties": {
    "data": {
      "items": {
        "properties": {
          "type": { "type": ["number"] },
          "created_at": { "type": ["number"] },
          "updated_at": { "type": ["number"] }
        },
        "type": "object",
        "required": ["type", "created_at", "updated_at"]
      },
      "type": "array"
    }
  },
  "required": ["message", "data", "pagination"]
}           

Very detailed, but verbose: all it says is that there is an object structure with three numeric fields, { type, created_at, updated_at }.

Once the structure gets even slightly more complex, we can no longer take it in quickly; it is a language suited to machines, not to human reading.

OpenAPI

Other platforms push teams to maintain a specification called OpenAPI, which describes every aspect of an interface in great detail; OpenAPI is intended to cover a much broader scope than JSON Schema, including headers, servers, and so on.

An OpenAPI snippet describing an interface's return value reads as follows:

responses:
  '200':
    description: A user object.
    content:
      application/json:
        schema:
          $ref: '#/components/schemas/User'   # Reference to an object
        example: 
          # Properties of the referenced object
          id: 10
          name: Jessica Smith           

As you can see, OpenAPI even has a $ref reference mechanism; it is as powerful, and as detailed, as a specification can get.

These specifications are very thorough, but they make interface definitions extremely complex, and every definition ends up long and hard to read.

When we want to create an interface on a platform that uses these specifications, how much effort does it take? There are roughly three approaches:

  • Learn the syntax and hand-write the schema
  • Fill out a form: add fields one by one, specifying each field's type, description, enumeration, whether it is required or nullable, its legal value range, and so on
  • Automatically generate the corresponding JSON Schema from the interface's return value (JSON)

The first approach, learning the syntax, is unrealistic: we cannot guarantee that everyone will learn the grammar rules.

The second, the form, requires almost no syntax, but it is too cumbersome: constantly switching between mouse clicks and keyboard input takes a lot of effort.

In fact, once the interface is actually written, we may find that the definition is more detailed and standardized than what we implemented, and it easily drifts away from reality.

What about the third approach, auto-generation? A return value (JSON) actually carries less information than a JSON Schema, so the generated schema is bound to contain distorted information and still needs manual patching.
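
To see why, consider this hypothetical return value (the comments are ours):

{
  "name": "Jessica", // nullable? optional? constrained length? a sample can't say
  "tags": []         // the element type of an empty array is unknowable
}

A generator can only guess, for example that name is a required string and tags an array of unknown items, and either guess may be wrong.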

The interface return value is the definition

The interface definition in "Ming" is very simple: there is no complex abstraction over the response body or the request body.

Therefore, creating an interface on the "Ming" platform is very fast:

First, fill in the necessary basic information such as interface routing, interface name, and interface description in the form.

Then paste the interface's return value into the JSON code editor, add comments as needed, and click submit;

An interface is created.


Create an interface

Note: The JSON code editor here is actually a JSON5 editor, so you can write comments.
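
For instance, the pasted content might look like the following (a hypothetical example; the field names echo the schema shown earlier, and the comment contents are invented):

{
  "message": "success",
  "data": [
    {
      "type": 1,                // notification type, e.g. 1 = like
      "created_at": 1577808000, // Unix timestamp in seconds
      "updated_at": 1577808000
    }
  ],
  "pagination": { "is_end": false } // hypothetical pagination field
}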

Trade-offs

"JSON Schema" and "Open API" are more detailed and strict as interface definitions, which conform to common standards, but they are too cumbersome to edit and create, and the use efficiency is low, and it is easy for engineers to rush to work in order to use it quickly.

The interface definition of "Ming" is simpler and more efficient to use, but there are no strict constraints on the interface definition.

I believe OpenAPI and JSON Schema will become ever more widely adopted standards, and that interface management platforms will do their best to remove the friction of writing interface definitions.

However, in scenarios where interface documentation does not need such strict detail, a solution like "Ming" that deliberately weakens the interface definition is easier to use and produces results faster, so it will always have a place.

Document generation vs. mock generation

Different interface definition methods determine the process of document generation and mock generation, and each has its own advantages and disadvantages.

Documentation

If you use OpenAPI or JSON Schema as the interface definition, the resulting documentation will be more detailed.

The disadvantage, however, is that a JSON Schema definition does not show what the interface actually returns: knowing only that a field is a string, without seeing what the string looks like, can be unintuitive in some cases.

"Ming" uses the actual return value as the interface definition, so the fields are not defined in such detail in the document it generates, only a few comments.

The advantage is that it is more intuitive: seeing the actual content of each field makes it easier to relate the interface to the real business.

Mock

A JSON Schema or OpenAPI interface definition cannot by itself generate mock interfaces; a separate set of mock generation rules has to be introduced, such as a purpose-built DSL or hand-written JS code that produces mock data.

"Ming" uses the actual return value as the interface definition, which naturally supports the generation of mock interfaces, which is very convenient.

Automated testing

Different types of tests serve different purposes, and only two types of tests will be discussed here for the time being:

  • Usability testing: determine whether the interface is working properly, e.g. by running assertions in a test script, to avoid causing online failures
  • Automatic document synchronization detection: compare the interface definition with the actual return value of the online interface to determine whether the documentation lags behind it

Usability testing

Most of the testing features offered by interface management platforms are usability tests, typically used as follows:

Create a test task, preset some headers such as cookies, write assertion statements in the task, and have it request the online interface at short intervals to check whether the test script passes.
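
For concreteness, such a hand-written test task might look like this sketch (the URL, cookie, and assertions are hypothetical; global fetch assumes Node.js 18+):

const assert = require('assert');

// Request the online interface with headers preset in the test task,
// then run the task's assertion statements against the response.
async function checkEndpoint() {
  const res = await fetch('https://example.com/api/notifications', {
    headers: { cookie: 'session=xxx' }, // preset cookie
  });
  assert.strictEqual(res.status, 200);
  const body = await res.json();
  assert.ok(Array.isArray(body.data), 'data should be an array');
}

// The platform runs the task at a fixed interval.
setInterval(() => checkEndpoint().catch(console.error), 60 * 1000);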

Such tests must be created manually, their code written by hand, and they must be started manually; the cost of writing test scripts is high.

Automatic document synchronization detection

Many interface management platforms seem to lack the concept of "document synchronization detection", or at least do not offer it as a stand-alone feature; it tends to be blurred together with availability checks.

In fact, once the interface definition is known, document synchronization detection can run automatically: as soon as the interface document has been created, the interface has been jointly debugged, and the interface is in use, the platform can enable this check on its own.

"Ming" uses the actual return value of the interface as the interface definition, which is certainly not enough if it is used to compare whether the online interface is synchronized or not.

Therefore, when "Ming" automatically creates a detection task, it generates a JSON Schema from the JSON return value in the interface definition and uses that schema to check whether the definition lags behind the online interface.
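
A minimal sketch of such schema generation (not Ming's actual implementation; real generators handle many more cases):

// Derive a JSON Schema from a sample return value by walking it recursively.
function toSchema(value) {
  if (Array.isArray(value)) {
    return {
      type: 'array',
      // Guess the item schema from the first element, if there is one
      ...(value.length > 0 && { items: toSchema(value[0]) }),
    };
  }
  if (value !== null && typeof value === 'object') {
    const properties = {};
    for (const [key, v] of Object.entries(value)) properties[key] = toSchema(v);
    return {
      type: 'object',
      properties,
      // Without extra hints, every observed field is assumed to be required
      required: Object.keys(value),
    };
  }
  return { type: value === null ? 'null' : typeof value };
}

The "every field is required" guess is exactly the kind of distortion that the format annotations described later (such as @optional) exist to correct.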

Trade-offs

In fact, there is no conflict between "automatic detection of document synchronization" and "usability testing".

Both tests can be implemented regardless of how the interface is defined, whether via "JSON Schema", "OpenAPI", or Ming's "return value as definition".

However, the JSON Schema that "Ming" uses to check whether interface documents are in sync is generated automatically (comments in the JSON can help produce a more accurate schema, but writing them is not mandatory), so its accuracy is somewhat reduced.

Different states of the mock interface

In practice, some interfaces have very complex return values and return different response bodies depending on the request parameters or on the current state of the data in the database.

In order to meet this need, there are basically two options:

  • Add conditional logic to the interface, setting different return values for different cases via conditional statements or a DSL
  • Keep multiple copies of the platform's data, run a sandbox service for each copy, and modify the interface definitions inside a sandbox so they return different data

Implement conditional statements

Many interface management platforms implement this feature by letting users write conditional branching logic for mock interfaces.

Here are some examples:

if (cookie._type === 'error') {
  mockJson.errcode = 400;
}

if (cookie._type === 'empty') {
  mockJson.data.list = [];
}

Writing JS code like this lets the mock interface return different response bodies, status codes, and so on.

This is a powerful feature: the mock interface gains the same kind of dynamic logic as the real interface. But what it brings along is not as simple as we might think.

The learning cost is prohibitive

The first hurdle is syntax: users must at least learn JS.

In addition, the code above uses variables and fields that appear out of nowhere, such as cookie, cookie._type, mockJson, and mockJson.data.

The platform presumably has to provide documentation for writing this code, listing every usable variable and its detailed structure, every usable function, and the meaning of every field.

The debugging cost is out of control

The code needs to be debugged, but without a local IDE, the execution of the running code is completely invisible.

As a result, once something goes wrong in the code, debugging becomes extremely difficult, and the user cannot tell what is wrong.

After all, we are partially reimplementing the logic of the real interface, and mistakes are inevitable.

This can make this feature difficult to use.

Maintenance costs are out of control

The possible code is endless: in practice it is no surprise to see users write anything at all, and responding to their oncall requests can consume a lot of manpower.

The demands are endless too.

Although only part of the real interface's logic is being reimplemented, that does not make the requirements simple: to let users express every interface state through conditional statements, ever more functions, methods, and variables have to be supported, without end.

This will also incur very large maintenance costs.

Sandboxed platform

Users can start a temporary "Ming" sandbox service at any time and modify interface return values and status codes inside the sandbox without affecting the data of the main "Ming" service.

The people who need this feature are generally front-end and client developers: they write the sandbox's interface address into their code, modify the corresponding interface in the sandbox, and can then debug different states of the same interface.

"Ming" uses this solution, and the above mentioned learning costs, usage costs, and maintenance costs do not exist.

Trade-offs

The advantage of the "conditional statement" solution is that the mock interface has the ability to implement part of the dynamic logic of the real interface, which is more convenient in some cases.

However, the disadvantage is that it brings a huge "learning cost", "use cost" and "maintenance cost".

The advantage of the "sandbox" approach adopted by "Ming" is that it avoids the costs of the above three aspects.

The disadvantage is that it is not flexible enough when encountering a few scenarios that really require the mock interface to have dynamic logic.

System implementation

Minimal prototype

The front end is built with the unified development framework and component library used for internal back-office projects.

The back end uses Node.js for the interface service and Redis, with persistence enabled, as the database.

The Node.js service obtains all the interface data from Redis, loads it into memory, and returns the corresponding mock interface data by matching routes.

But loading all of the data into memory on every mock request performs poorly.

Therefore, we added cache optimization: every 10 s, all mock interface data is loaded from Redis into memory, and requests are answered from the cache, greatly improving interface response speed.
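
A sketch of that refresh loop, assuming the ioredis client and a hypothetical mock:* key layout:

const Redis = require('ioredis');
const redis = new Redis();

// In-memory cache of all interface definitions, refreshed every 10 s
let cache = new Map();

async function refresh() {
  const next = new Map();
  // KEYS is acceptable here given the modest number of interfaces
  for (const key of await redis.keys('mock:*')) {
    next.set(key, JSON.parse(await redis.get(key)));
  }
  cache = next; // swap in one step so requests never see a half-built cache
}

refresh();
setInterval(refresh, 10 * 1000);

// Mock requests read only from memory:
const lookup = (routeKey) => cache.get(`mock:${routeKey}`);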

Platform screenshots


Project list


Document Center


Interface documentation

Front-end project integration

Once back-end developers start using the interface platform, the front end only needs to write the mock address into the project code to develop against mock interfaces.

However, this is not safe enough: there is a real risk of the front end shipping a mock interface address to production.

Therefore, we decided the interface management platform could integrate with front-end projects to establish an automatic mapping between online interfaces and mock interfaces.

Similar to Nginx reverse-proxy route matching, we sort all mock interfaces using "longer route" and "route contains parameters" as the higher-priority rules, match the online address against them one by one, and return the corresponding mock interface.
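
A sketch of one plausible reading of that matching rule (route shapes such as /api/posts/:id are hypothetical):

// Sort mock routes so that longer routes, and among equal lengths routes
// containing parameters, are tried first; then return the first match.
function sortRoutes(routes) {
  return [...routes].sort((a, b) => {
    if (b.path.length !== a.path.length) return b.path.length - a.path.length;
    return Number(b.path.includes(':')) - Number(a.path.includes(':'));
  });
}

// Turn a pattern like '/api/posts/:id' into a RegExp; ':id' matches one segment
function toRegExp(path) {
  return new RegExp('^' + path.replace(/:[^/]+/g, '[^/]+') + '$');
}

function matchMock(onlinePath, routes) {
  return sortRoutes(routes).find((r) => toRegExp(r.path).test(onlinePath)) || null;
}

For example, matchMock('/api/posts/42', mocks) would prefer '/api/posts/:id' over a shorter '/api' route.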

During local development, the front-end project automatically switches the request domain according to environment variables, so front-end developers can safely use online interface addresses in their code.

Interface validation

When "Ming" automatically creates a detection task, it uses the JSON return value in the interface definition to automatically generate a JSON schema to compare whether the JSON schema lags behind the online interface.

Although the JSON Schema is generated automatically from the return value, users can add "format annotations" to the JSON code to guide the generation.

An example of a "format annotation":

{
  "data": [
    {
      "name": "刘看山" // @optional
    }
  ]
}           

The @optional here marks the field as optional, and this single comment changes the final JSON Schema structure.
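
Under the generation scheme sketched earlier, the schema produced for this example would look roughly like this, with name left out of required:

{
  "type": "object",
  "properties": {
    "data": {
      "type": "array",
      "items": {
        "type": "object",
        "properties": {
          "name": { "type": "string" }
        },
        "required": [] // "name" is omitted because of the @optional annotation
      }
    }
  },
  "required": ["data"]
}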

Scheduled verification tasks

This is implemented with a message queue and a timer service.


Message queue for scheduled verification


Verification task state machine
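
A heavily hedged sketch of how the timer and queue might cooperate (the queue client and all helper functions here are hypothetical; the internal services are not described in the article):

const queue = require('./queue'); // hypothetical publish/consume client

// Timer side: periodically enqueue one verification task per interface
setInterval(async () => {
  for (const iface of await listVerifiableInterfaces()) { // hypothetical
    await queue.publish('verify', { interfaceId: iface.id });
  }
}, 60 * 60 * 1000); // interval is illustrative; the real one is not specified

// Worker side: request the online interface and validate the response
// body against the auto-generated JSON Schema
queue.consume('verify', async ({ interfaceId }) => {
  const iface = await loadInterface(interfaceId); // hypothetical
  const body = await (await fetch(iface.onlineUrl)).json();
  if (!validate(body, iface.schema)) { // hypothetical schema validator
    await notifyOwner(iface); // e.g. suggest a format annotation
  }
});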

If verification fails, a suggested fix is given

A verification failure may be due to a mistake in the interface itself, or to an error in the automatically generated JSON Schema.

If the interface itself is wrong, the online interface is faulty: the relevant owner is reminded to investigate the cause of the failure and review recent changes to the interface.

If the JSON Schema is wrong, the automatically generated schema may need to be adjusted with the help of a "format annotation".

The suggested fix is exactly that: a "format annotation" that may resolve the mismatch.

Platform screenshot


Notification when an interface validation fails

Sandboxing

Start the sandbox service

The build and deployment process of the sandbox service is basically the same as the main service's; different build scripts distinguish the two.

The sandboxed service is isolated from the main service

Since the data is stored in Redis, sandbox data can be separated from main-service data by giving its Redis keys a different prefix.

Copying and overwriting data is dangerous, so this part of the logic needs unit tests and runtime checks.
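
A sketch of the prefix scheme and a guarded copy (the key names and the sandbox id are hypothetical):

// All keys are namespaced: 'main:*' for the main service,
// 'sandbox:<id>:*' for each sandbox
const prefix = process.env.SANDBOX_ID
  ? `sandbox:${process.env.SANDBOX_ID}:`
  : 'main:';
const key = (k) => prefix + k;

// Copying main data into a sandbox overwrites many keys, so guard it
async function copyMainToSandbox(redis, sandboxId) {
  if (!sandboxId) throw new Error('refusing to copy without a sandbox id');
  for (const mainKey of await redis.keys('main:*')) {
    const value = await redis.get(mainKey);
    await redis.set(mainKey.replace(/^main:/, `sandbox:${sandboxId}:`), value);
  }
}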

Summary

This article introduced Ming, a mock interface and document management platform used internally at Zhihu. Its defining characteristic is being lightweight: it focuses on efficiency and maintenance cost, and avoids introducing overly complex product logic.

Compared with some open-source platforms in the industry, "Ming" is less feature-complete.

There is still a lot of room for expansion in "postman-like interface testing", "usability testing", "stress testing", "deep integration with back-end interface development", "reference to public business JSON structure", "simple dynamic logic support", etc.

Author: Ma Liang Liang Liangjun

Source: https://zhuanlan.zhihu.com/p/100629469
