This is the Kubernetes deployment version of the Spring Cloud Alibaba (hereinafter referred to as SCA) Best Practices, which requires you to prepare the following environment.
Here, the Pod services in Kubernetes are exposed externally by means of NodePort, so configure the IP mapping for the Kubernetes cluster node before starting the test.
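For example, you can look up the node IP and the NodePort assigned to each exposed service before editing the mapping (the namespace below is an assumption; use the one the chart installs into):

```shell
# Find the cluster node IP and the NodePort assigned to each exposed service:
kubectl get nodes -o wide
kubectl get svc -n default
```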
Running the above command quickly deploys the best-practice example through Helm, following the Helm Chart documentation provided by the SCA community.
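Shape-wise this is a standard Helm install, roughly like the hedged sketch below; the repo URL and chart name are placeholders, so take the real ones from the SCA Helm Chart documentation:

```shell
# Placeholder repo URL and chart name -- substitute the real values from the SCA Helm Chart docs.
helm repo add sca-community https://example.com/sca-charts
helm install integrated-example sca-community/spring-cloud-alibaba-integrated-example
```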
You can check the deployment status of each container resource with the `kubectl` command provided by Kubernetes. Wait patiently for **all containers to finish starting** before experiencing the usage scenarios and capabilities of each component on the corresponding pages.
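For instance, you can list the Pods and keep watching until every container reports `Running` and `Ready`:

```shell
# List all Pods across namespaces, then watch until everything is Running and Ready:
kubectl get pods --all-namespaces
kubectl get pods -w
```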
For the distributed transaction capability, the SCA community provides a scenario **where a user places an order to purchase goods**; after the order is placed, the item's stock and the user's balance are deducted.
While initializing the `integrated-mysql` container, **the business database tables are initialized**: a new user is created with userId `admin` and a balance of $3, along with a new item numbered 1 with 100 units in stock.
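Conceptually, that seed data corresponds to inserts like the following; the table and column names here are illustrative, since the real schema ships with the `integrated-mysql` init scripts:

```sql
-- Illustrative seed data only; the actual DDL/DML lives in the integrated-mysql init scripts.
INSERT INTO account (user_id, balance) VALUES ('admin', 3);    -- user admin, balance of $3
INSERT INTO storage (commodity_code, stock) VALUES ('1', 100); -- item 1, 100 units in stock
```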
So after the steps above, the application creates an order, deducts the stock of item number 1 (100 - 1 = 99), and deducts $2 from the admin user's balance (3 - 2 = 1).
If the same interface is requested again, the inventory is again deducted first (99 - 1 = 98), but an exception is thrown because the admin user's balance is insufficient. Seata catches it, executes the second phase of the two-phase distributed transaction, and rolls the transaction back.
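Below is a minimal sketch of how such an order flow is typically wrapped with Seata. The service and client names are hypothetical, not the example's actual classes, but `@GlobalTransactional` is the real Seata annotation driving the two-phase behavior described above:

```java
import io.seata.spring.annotation.GlobalTransactional;
import org.springframework.stereotype.Service;

// Hypothetical remote clients for the storage and account services.
interface StorageClient { void deduct(String commodityCode, int count); }
interface AccountClient { void debit(String userId, int amount); }

@Service
public class OrderService {
    private final StorageClient storageClient;
    private final AccountClient accountClient;

    public OrderService(StorageClient storageClient, AccountClient accountClient) {
        this.storageClient = storageClient;
        this.accountClient = accountClient;
    }

    // Seata opens a global transaction spanning both remote calls. If debit()
    // throws (insufficient balance), the second phase rolls back the stock
    // deduction already performed by the first branch.
    @GlobalTransactional
    public void placeOrder(String userId, String commodityCode, int count, int price) {
        storageClient.deduct(commodityCode, count);  // e.g. 100 - 1 = 99
        accountClient.debit(userId, count * price);  // e.g. 3 - 2 = 1, or throws
    }
}
```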
For service circuit breaking and rate limiting, as well as peak clipping and valley filling under high traffic, the SCA community provides a scenario **where users like products**. In this scenario, we provide two ways to deal with high traffic.
- Sentinel binds rules to specified gateway routes on the gateway side for circuit breaking and degradation of services.
- RocketMQ performs peak clipping: under high traffic, the producer sends messages to RocketMQ, while the consumer pulls and consumes them at a configurable rate, increasing the number of like requests the system can absorb while reducing the pressure that direct high-traffic requests would put on the database.
The gateway route bound to the like service has a flow limit rule with a threshold of 5, while 10 concurrent requests are simulated on the front end through asynchronous processing.
Therefore, we can see that Sentinel performs flow control on the Gateway side, returning the fallback to the client for the excess traffic, while the number of likes in the database is updated (+5).
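A hedged sketch of such a gateway rule, using Sentinel's gateway adapter API, is shown below; the route id `praise-route` is an assumption, and in the example the rule is more likely pushed from Nacos rather than hard-coded:

```java
import java.util.Collections;

import com.alibaba.csp.sentinel.adapter.gateway.common.rule.GatewayFlowRule;
import com.alibaba.csp.sentinel.adapter.gateway.common.rule.GatewayRuleManager;

public class PraiseGatewayRuleSketch {
    public static void loadRule() {
        // Allow at most 5 requests per 1-second window on the assumed route id;
        // requests beyond the threshold receive the configured fallback response.
        GatewayFlowRule rule = new GatewayFlowRule("praise-route")
                .setCount(5)
                .setIntervalSec(1);
        GatewayRuleManager.loadRules(Collections.singleton(rule));
    }
}
```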
Since the consumption rate and interval of the `integrated-praise-consumer` consumer module were configured in Nacos beforehand, clicking the button makes the application simulate 1,000 like requests: `integrated-praise-provider` delivers the 1,000 requests to the Broker, and the consumer module consumes them at the configured rate, updating the product's like count in the database and demonstrating how RocketMQ clips peaks and fills valleys under high traffic.
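The sketch below illustrates the pattern rather than the example's actual code (the topic, group names, and nameserver address are placeholders): the producer bursts 1,000 messages to the Broker, and a pull consumer drains them at a throttled pace so the database only ever sees a steady write rate.

```java
import java.util.List;

import org.apache.rocketmq.client.consumer.DefaultLitePullConsumer;
import org.apache.rocketmq.client.producer.DefaultMQProducer;
import org.apache.rocketmq.common.message.Message;
import org.apache.rocketmq.common.message.MessageExt;

public class PeakShavingSketch {
    public static void main(String[] args) throws Exception {
        // Burst side: deliver 1,000 "like" requests to the Broker at once.
        DefaultMQProducer producer = new DefaultMQProducer("praise-producer-group");
        producer.setNamesrvAddr("127.0.0.1:9876"); // placeholder address
        producer.start();
        for (int i = 0; i < 1000; i++) {
            producer.send(new Message("praise-topic", ("itemId=1#" + i).getBytes()));
        }
        producer.shutdown();

        // Drain side: pull small batches and pause between polls, so like-count
        // writes reach the database at a steady, configurable rate.
        DefaultLitePullConsumer consumer = new DefaultLitePullConsumer("praise-consumer-group");
        consumer.setNamesrvAddr("127.0.0.1:9876");
        consumer.subscribe("praise-topic", "*");
        consumer.start();
        while (true) {
            List<MessageExt> batch = consumer.poll();
            for (MessageExt msg : batch) {
                // updateLikeCount(msg); // hypothetical database update
            }
            Thread.sleep(100); // consumption interval, configured via Nacos in the example
        }
    }
}
```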
You can see that the number of likes in the database is being dynamically updated.
Of course, each component offers more than what is demonstrated in the best practices, so if you are interested or want to go deeper, feel free to read the example documentation for each individual component.