Compute

1.1 UHost Cloud Compute Instance

SCloud UHost is built on mature cloud computing technology, high-performance infrastructure, high-quality network bandwidth, and premium data centers. It provides computing units that are secure and stable, fast to deploy, flexible to scale, and easy to manage.

1.1.1 Cloud Instance Type

UHost instances are divided into three types according to application scenario: Outstanding (O), Standard (N), and High Frequency (C).

  Type Features Applicable Scenarios
Outstanding (O) The latest generation of cloud host, with excellent computing, storage, and network performance; built on the latest Intel Cascade Lake and AMD EPYC2 processors Databases, MMO games, artificial intelligence, etc.
Standard (N) Flexible configuration and a wide range of choices Enterprise applications, in-memory services, data analysis, etc.
High Frequency (C) Adopts a 3.2GHz CPU with strong computing performance High-frequency trading, data processing, graphics rendering, etc.

 

Summary of basic parameters by instance type:

  Parameter Standard (N) High Frequency (C) Outstanding (O)
CPU:Memory ratio 1:1-1:8 1:1-1:8 1:1-1:8
CPU type IvyBridge/Haswell/Broadwell/Skylake Skylake (models with base frequency ≥3.0GHz) Skylake/Cascadelake
Disk type Cloud disk, Standard local disk, SSD local disk Cloud disk, SSD local disk RSSD cloud disk, SSD cloud disk
Network features Network Enhancement 1.0/2.0 (2.0 only on Skylake and above) and Hot Scale-Up Network Enhancement 1.0 and Hot Scale-Up Network Enhancement 2.0 and Hot Scale-Up

1.1.1.1 Standard Instance N

  1. Scenario: provides the most flexible combinations of CPU, memory, and disk; suitable for balanced compute, storage, and network scenarios.
  2. CPU platform support: IvyBridge/Haswell/Broadwell/Skylake
  3. CPU memory combination (support ratio 1:1-1:8):
CPU RAM
1 core 1G, 2G, 4G, 8G
2 cores 2G, 4G, 6G, 8G, 12G, 16G
4 cores 4G, 6G, 8G, 12G, 16G, 32G
8 cores 8G, 12G, 16G, 24G, 32G, 48G, 64G
16 cores 16G, 24G, 32G, 48G, 64G, 128G
24 cores 24G, 32G, 64G, 96G, 192G
32 cores 32G, 64G, 96G, 128G
  4. Disk type support: cloud disk, Standard local disk, SSD local disk

Specific selection range:

System Disk Data Disk
SSD cloud disk (20-500GB) SSD cloud disk (20-4000GB), Standard cloud disk (20-8000GB)
Standard local disk (20-100GB) Standard local disk (20-2000GB)
SSD local disk (20-100GB) SSD local disk (20-1000GB)

 

  5. Feature support: Network Enhancement 1.0/2.0 (2.0 only on Skylake and above) and Hot Scale-Up
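As a rough illustration, the 1:1 to 1:8 CPU:memory ratio constraint described above can be expressed as a simple check. This is a minimal sketch under stated assumptions: `is_valid_combo` is a hypothetical helper, and the console only offers the specific combinations listed in the table, not every ratio-valid pair.

```python
# Hypothetical helper: checks whether a CPU/RAM pair satisfies the
# 1:1 to 1:8 CPU:memory ratio that Standard (N) instances support.
# Note: the actual console offers only the discrete combinations
# listed in the table above, so this is a necessary condition only.
def is_valid_combo(cores: int, ram_gb: int) -> bool:
    """Return True if ram_gb is between 1x and 8x the core count."""
    return cores <= ram_gb <= cores * 8

# A few combinations from the table above:
assert is_valid_combo(1, 8)       # 1 core, 8G  -> ratio 1:8, allowed
assert is_valid_combo(8, 24)      # 8 cores, 24G -> ratio 1:3, allowed
assert not is_valid_combo(4, 64)  # 4 cores, 64G -> ratio 1:16, not offered
```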

1.1.1.2 High Frequency Instance C

  1. Scenario: models with a CPU frequency of ≥3.0GHz, suitable for compute-intensive services such as high-frequency trading, rendering, and artificial intelligence.
  2. CPU platform support: Skylake
  3. CPU memory combination (support ratio 1:1-1:8):
CPU RAM
1 core 1G, 2G, 4G, 8G
2 cores 2G, 4G, 8G, 16G
4 cores 4G, 8G, 16G, 32G
8 cores 8G, 16G, 32G, 64G
16 cores 16G, 32G, 64G, 128G
32 cores 32G, 64G, 128G

 

  4. Disk type support: cloud disk, SSD local disk

Specific selection range:

System Disk Data Disk
SSD cloud disk (20-500GB) SSD cloud disk (20-4000GB), Standard cloud disk (20-8000GB)
SSD local disk (20-100GB) SSD local disk (20-1000GB)
  5. Feature support: Network Enhancement 1.0 and Hot Scale-Up.
1.1.1.3 Outstanding Instance O

The latest generation of cloud hosts, with excellent computing, storage, and network performance. Suitable for all types of demanding scenarios.

  1. CPU platform support: Skylake/Cascadelake
  2. CPU memory combination (support ratio 1:1-1:8):
CPU RAM
4 cores 4G, 8G, 16G, 32G
8 cores 8G, 16G, 32G, 64G
16 cores 16G, 32G, 64G, 128G
32 cores 32G, 64G, 128G, 256G
64 cores 64G, 128G, 256G
  3. Disk type support: RSSD cloud disk, SSD cloud disk

Specific selection range:

System Disk Data Disk
SSD cloud disk (20-500GB) RSSD cloud disk (20-32000GB)
  4. Feature support: Network Enhancement 2.0 and Hot Scale-Up

1.1.2 Product Advantages

  1. Elastic and Flexible

You can easily scale cloud resources horizontally and vertically at any time based on business demand, preventing resource waste. Create or release cloud hosts in minutes; upgrade or downgrade CPU and memory within 5 minutes; adjust public network bandwidth online; and easily copy host data and environments with custom images. Open APIs are also provided to meet batch-management and automation needs.

  2. Stable and Reliable

SCloud promises 99.95% service availability and data durability of no less than 99.9999%. Cloud host local disks use RAID for data protection to prevent data loss. UHost features industry-leading kernel optimization and hot-patch technology, non-stop online migration, and data snapshot capability.

  3. High Performance

The performance of the host CPU and memory is industry-leading, and unique storage technology increases disk random read/write I/O capacity to 10 times that of a standard SAS disk. SSD-disk cloud hosts provide ultra-high IOPS performance, and excellent network processing capabilities meet a wide range of business application requirements.

UHost supports up to Intel Cascadelake CPUs, self-developed Network Enhancement 2.0 technology, disk Binlog technology, and full-NVMe RSSD cloud disks, achieving up to 1.2 million IOPS and 10 million PPS of network performance.

  4. Secure

UHost provides 100% network isolation between users to ensure data security. A network firewall function strictly controls public network access; combined with the VPC function, private subnets can be established under a single account to support internal security management. A wealth of monitoring and security tools is also provided.

  5. Data Center

SCloud operates in Tier-3 data centers around the world, relying on high-quality hardware resources and infrastructure to provide users with premium BGP and international bandwidth resources. You can select the data center that best matches the user regions your business needs to cover.

  6. Wide Coverage with 31 Availability Zones Globally

You can launch cloud hosts in more than 31 availability zones around the world, covering five continents, and deliver high-quality cloud computing services with a consistent experience for your global business.

1.2 UK8S Container Cloud

SCloud Container Service for Kubernetes (UK8S) is a Kubernetes-based container management service. You can deploy, manage, and expand your containerized applications on UK8S without worrying about the operation and maintenance of the Kubernetes cluster itself. UK8S is fully compatible with native Kubernetes API, based on SCloud private network, and integrates cloud products such as ULB, UDisk, EIP, VPC, etc.

1.2.1 About Kubernetes

Kubernetes (K8S) is a system for the automated deployment, scaling, and management of containerized applications, and has become a production-grade container orchestration and management platform. Using K8S, you gain the following benefits:

  1. Self-Repair

Automatically restarts failed containers; when a node becomes unavailable, reschedules its containers onto other nodes; terminates containers that fail health checks.

  2. Reduce Costs

Under the premise of not affecting availability, containers are automatically scheduled according to container resource requirements and constraints to maximize resource utilization and save costs.

  3. Dynamic Scaling

Automatically adjusts the number of application replicas according to CPU usage or other metrics.

  4. Automatic Deployment and Rollback

Supports multiple deployment strategies and can quickly roll back to the previous version when an error occurs.

1.2.2 Basic Concept

 

The diagram above summarizes the Kubernetes architecture, including the ApiServer, Master, Node, and Hub (image repository) concepts, which are briefly introduced in turn below.

ApiServer The ApiServer is the only entry point for operating the cluster, providing authentication, authorization, access control, API registration, and discovery. It runs as a component on the Master.
Node A Node is a Kubernetes worker node containing the services required to run Pods. A Node can be a virtual machine or a physical machine; in UK8S, currently only UHost virtual machines are supported.
Master The Master is also a Kubernetes node, but unlike a Node it usually does not run business Pods. Instead it hosts components such as the ApiServer, Scheduler, Controller Manager, Cloud Controller Manager, and ETCD for controlling and managing the cluster.
Hub The Hub (image repository) provides Docker image management, storage, and distribution capabilities.

1.2.3 Product Advantages

  Item UK8S Self-built
Cluster management One-click cluster creation in 5 minutes; multi-zone support; Master high availability via ULB4 Users deploy and build the cluster themselves
Network solution High-performance network plug-in adapted to VPC with no performance loss; internal-network interworking with physical cloud, hybrid cloud, and public cloud by default Choose and deploy third-party network plug-ins yourself; must adapt to the existing network architecture
Storage plan Currently supports SATA UDisk, SSD UDisk, and UFS, with cloud storage such as US3 planned Build a storage cluster yourself and adapt it to Kubernetes storage types
Load balancing Integrated internal- and public-network ULB4/ULB7: high performance, high availability, automatic failover Deploy load balancing yourself
Using K8S Web terminal provided; graphical management interface consistent with other cloud products Install kubectl and the dashboard yourself
Technical support K8S expert team provides 24/7 technical support year-round, plus guidance and usage standards for migrating business from VM to K8S Learn and explore on your own

1.2.4 Product Features

  1. Automated cluster deployment and operation and maintenance

Create a Kubernetes cluster with one click, support cloud hosts of various specifications, and dynamically increase or decrease working nodes.

  2. Deep integration of SCloud cloud products

Based on the Kubernetes extension interface, it integrates cloud products such as UDisk, ULB, and VPC.

  3. High availability ensures uninterrupted business

The cluster's three Masters are highly available, and nodes and applications support cross-availability-zone deployment to ensure high business availability.

  4. Compatible with native Kubernetes

Fully compatible with the community’s native API and CLI, multi-version support, keeping up with the latest version of the community.

  5. Fully controllable private cluster

Clusters of different users are tenant-isolated from one another, and cluster resources are fully visible and transparent to their owners.

 

1.3 Cube Serverless

Cube is a container instance service based on a serverless architecture. If you have used Kubernetes, you can think of a Cube instance as the smallest business unit in Kubernetes, a Pod. There is no need to wait for virtual machines or a Kubernetes cluster to start: you can deploy and manage your services on Cube and have your service containers up and serving in seconds.

 

The Cube product is built on the open-source community's Firecracker as its virtualization and containerization foundation. We have also optimized it for running containerized services, so Cube instances combine VM-level security isolation with a light system footprint and sub-second startup.

1.3.1 Product Advantages

  1. No server operation and maintenance required

SCloud’s infrastructure resources are used to provide support for the business, without the need for operation and maintenance of infrastructure resources.

  2. Billed by Second

Charges according to the number of seconds used, thereby reducing resource costs.

  3. Deploy in Seconds

Containers start from images in seconds, with no dependency on host-cluster creation time.

  4. Self-Healing

You no longer need to worry about a running container instance going down: Cube's control and scheduling system automatically restarts failed containers.

1.3.2 Basic Architecture

1.3.2.1 Scheduling and Orchestration

 

 

The bottom layer uses K8S as the orchestration and scheduling system for Cube containers, making it stable and reliable, while compatibility with the K8S API enables a wider range of application scenarios.

 

1.3.2.2 Light-Weight Virtualization

 

 

In order to balance security and performance, Cube container runs in Micro VM lightweight virtual machine. Micro VM has the following characteristics:

  1. Highly Secure: based on KVM virtualization
  2. Fast Deployment: boots in only 125ms; a single server can launch 120 VMs per second
  3. Great Performance: the smallest VM takes only 5MiB of memory; a single server can support 1000+ VMs

 

1.3.2.3 Container Network

 

 

The SCloud SDN network is used as the Cube’s network solution, which is consistent with the principle of UHost and has the following characteristics:

  • Cube instances are reachable at Layer 2
  • Each Cube instance has a fixed internal IP
  • QoS restrictions avoid mutual interference with UHost instances
  • Interoperates with UHost, UDB, and other resources
  • A mature, stable, and reliable SDN solution

 

1.3.3 Product Specifications

Supported Region:

  • Hong Kong, Tokyo

 

Specification

Type Specification
Compute AMD 2nd-gen EPYC CPU (2.9GHz frequency) / Intel Cascadelake CPU (2.5GHz frequency)
Storage RSSD UDisk, with IOPS up to 1.15 million
Network 25G Ethernet Network

 

CPU/Memory Combination

CPU (Core) Memory (Gi)
0.1 0.125(128Mi)
0.5 0.5, 1, 2
1 1, 2, 4
2 2, 4, 8
4 4, 8, 16
8 8, 16, 32
16 16, 32, 64
32 32, 64, 128, 256

 

Cloud Disk Performance

In a Cube container group, each container in the group is allocated 10G of read/write scratch space with RSSD UDisk-level performance. Due to the ephemeral nature of containers, data in this space is not persisted.

Cube supports mounting RSSD UDisk block storage, which can reach up to 1.15 million IOPS.

 

Performance & Disk Size

Parameter RSSD UDisk
Single Disk IOPS Min(1800 + 50*size, 1,150,000)
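The single-disk IOPS formula above can be evaluated directly. This is a small sketch; `rssd_iops` is a hypothetical helper name, with the disk size given in GB.

```python
# Single-disk IOPS for RSSD UDisk, per the formula in the table above:
# min(1800 + 50 * size_gb, 1,150,000).
def rssd_iops(size_gb: int) -> int:
    """Return the IOPS limit for an RSSD UDisk of the given size in GB."""
    return min(1800 + 50 * size_gb, 1_150_000)

print(rssd_iops(100))    # 6800  (1800 + 50 * 100)
print(rssd_iops(32000))  # 1150000, capped at the 1.15M ceiling
```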

 

Performance & Cube Specification

CPU Core Storage IOPS Storage Throughput (MB/s)
1 18000 72
2 36000 143
4 72000 285
8 143000 570
16 285000 1140
64 1150000 2280

 

 

Network

2.1 Bandwidth

SCloud UNet supports purchasing bandwidth that takes effect immediately or at a later scheduled time. Bandwidth packages can be configured flexibly according to business requirements. For example, for a short promotional event that needs extra bandwidth, a bandwidth package can be purchased on a daily basis.

2.2 Shared Bandwidth

Shared bandwidth is a bandwidth mode in which multiple hosts can share the total amount of network bandwidth.

2.3 Elastic IP

SCloud Public Elastic IP (EIP) is a standard static IP address on the Internet. Binding public network elastic IP with cloud host UHost, load balancing ULB, NAT gateway and other services can provide these services with the ability to access the public network.

2.4 Private Network

VPC (Virtual Private Cloud) is a logically isolated network environment belonging to users. In a private network, you can create a VPC of a specified network segment, create a subnet in the VPC, and manage cloud resources independently, and at the same time implement security protection through network ACLs. The private network VPC provides users with the following capabilities:

  1. Manage cloud resources by planning VPC network segments and provide flexible capacity expansion capabilities.
  2. Provides cloud resources with flexible access to the Internet by binding to elastic IP and NAT gateway
  3. Realize the intercommunication requirements between different VPCs through VPC network intercommunication
  4. Realize cross-regional disaster recovery through VPC network intercommunication and high-speed channels
  5. Realize data center seamlessly connecting to the cloud network through VPC network intercommunication and dedicated line access
  6. Provides security isolation and access control capabilities between cloud resources by binding network ACLs

2.4.1 Private Network Components

The private network includes components such as VPC, subnet, NAT gateway, and network ACL:

2.4.1.1 VPC:

VPC is a logically isolated network environment belonging to users. In a private network, you can create a VPC of a specified network segment, create a subnet in the VPC, and manage cloud resources independently.

When creating a VPC, you can independently plan the network segment and flexibly specify the network segment for the VPC. The current range of network segments supported by VPC is as follows:

  • 10.0.0.0/8 (10.0.0.0-10.255.255.255); mask range: largest /8, smallest /29.
  • 172.16.0.0/12 (172.16.0.0-172.31.255.255); mask range: largest /12, smallest /29.
  • 192.168.0.0/16 (192.168.0.0-192.168.255.255); mask range: largest /16, smallest /29.
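The segment rules above can be checked with Python's standard `ipaddress` module. This is an illustrative sketch; `is_valid_vpc_segment` is a hypothetical helper, not an SCloud API.

```python
import ipaddress

# The three private ranges a VPC segment must fall inside, mapped to the
# largest (shortest) mask each range allows; the smallest mask is /29.
VPC_RANGES = {
    ipaddress.ip_network("10.0.0.0/8"): 8,
    ipaddress.ip_network("172.16.0.0/12"): 12,
    ipaddress.ip_network("192.168.0.0/16"): 16,
}

def is_valid_vpc_segment(cidr: str) -> bool:
    """Return True if cidr fits one of the ranges with a mask between the
    range's largest mask and /29."""
    net = ipaddress.ip_network(cidr, strict=True)
    for parent, max_prefix in VPC_RANGES.items():
        if net.subnet_of(parent) and max_prefix <= net.prefixlen <= 29:
            return True
    return False

assert is_valid_vpc_segment("10.10.0.0/16")
assert not is_valid_vpc_segment("8.8.8.0/24")   # public range, not allowed
assert not is_valid_vpc_segment("10.0.0.0/30")  # mask smaller than /29
```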

2.4.1.2 Subnet:

To divide the VPC address space scientifically and effectively, it is split into finer-grained network segments. These independent segments are called subnets.

  • Cloud resources in the subnet support cross-availability zone deployment, providing a strong guarantee for cross-availability zone disaster recovery.
  • The minimum mask of the subnet segment is /30 mask. When there are cloud resources in a subnet, the subnet cannot be deleted.
  • We will create a default VPC and a default subnet in each region, and users can directly create cloud resources in the default VPC.

2.4.1.3 NAT Gateway:

The NAT gateway is an enterprise-level VPC public network gateway that allows cloud resources that are not bound to an elastic IP in the subnet to access the public network, and can also configure port forwarding rules to enable cloud resources to provide external services.

2.4.1.3.1 Mode Setting

NAT gateway can choose normal mode and whitelist mode.

  • In normal mode, all cloud resources in the designated subnet of the NAT gateway that are not bound to an elastic IP can access the public network through the NAT gateway.
  • In whitelist mode, only cloud resources that are in the NAT gateway's designated subnets and defined in the whitelist can access the public network through the NAT gateway.
2.4.1.3.2 Port Forwarding

Users can configure port forwarding to map the internal network ports of cloud resources in the VPC to the NAT gateway, so that cloud resources can provide services to the outside world.

Cloud resources bound to elastic IPs in the designated subnet of the NAT gateway will not appear in the port forwarding configuration options list.

2.4.1.3.3 Network Exit

You can specify an elastic IP for a single cloud resource in the designated subnet of the NAT gateway to access the public network, or you can specify the same elastic IP for all cloud resources to access the public network.

2.4.1.4 Network ACL:

Network ACL is a security policy at the subnet level, used to control the data flow in and out of the subnet. Users can set up outbound rules and inbound rules to accurately control the traffic entering and leaving the subnet.

Network ACLs are stateless. To allow certain access, you must therefore add both the corresponding inbound and outbound rules; adding only the inbound rule without the matching outbound rule will cause access failures.

2.4.1.4.1 Associated Subnet

After creating a network ACL, users can bind and unbind the ACL with any subnet of the VPC to which it belongs.

2.4.1.4.2 Outbound/Inbound Rules

Network ACL rules are divided into outbound rules and inbound rules. The user’s updates to the network ACL rules will be automatically applied to the associated subnet.

Network ACL rules include the following components:

  • Strategy: Allow or Deny.
  • Source IP/Destination IP: The network segment targeted by the outbound/inbound rule.
  • Protocol type: supports TCP, UDP, ICMP and GRE protocol types. You can select ALL to specify all protocol types.
  • Destination port: The allowed port range for TCP and UDP protocol types is 1-65535. There is no need to specify the port for other protocol types.
  • Priority: The priority corresponding to the rule. The smaller the number, the higher the priority. The available range is 1-30000. Only one outbound/inbound rule of the same priority can be created.
  • Association target: the effective scope of the ACL rule, either all resources in the subnet or designated resources in the subnet. "All resources in the subnet" means the rule applies to every resource in the subnet bound to the ACL; "designated resources in the subnet" means the rule applies only to the selected resources and not to unselected ones.
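How the Allow/Deny strategy and priority ordering might combine can be sketched as follows. This is an assumption-laden illustration, not SCloud's actual implementation: it assumes rules are evaluated in ascending priority-number order (smaller number wins, per the description above) and that traffic matching no rule is denied.

```python
import ipaddress

# Sketch of inbound ACL evaluation: rules are checked in ascending
# priority order (smaller number = higher priority), and the first rule
# whose segment and protocol both match decides the outcome.
def evaluate(rules, src_ip, protocol):
    for rule in sorted(rules, key=lambda r: r["priority"]):
        segment = ipaddress.ip_network(rule["segment"])
        if (ipaddress.ip_address(src_ip) in segment
                and rule["protocol"] in (protocol, "ALL")):
            return rule["strategy"]
    return "Deny"  # assumption: unmatched traffic is dropped

rules = [
    {"priority": 100, "segment": "0.0.0.0/0",   "protocol": "ALL", "strategy": "Deny"},
    {"priority": 10,  "segment": "10.0.1.0/24", "protocol": "TCP", "strategy": "Allow"},
]
print(evaluate(rules, "10.0.1.5", "TCP"))   # Allow (priority 10 wins)
print(evaluate(rules, "192.0.2.1", "TCP"))  # Deny  (only the catch-all matches)
```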

2.4.1.5 Route Table:

The route table is a VPC-level product that can control the network traffic path of cloud resources. A route table is composed of multiple routing rules, which are effective for all resources in the subnet through binding with the subnet.

2.4.1.5.1 Basic Concepts
Default route table: the route table the system creates along with the VPC. Newly created subnets are bound to the default route table by default. The default route table cannot be deleted or edited; all of its rules are system routes.
Custom route table: a route table created by the user, to which routing rules can be added. Custom route tables can be created, deleted, and edited. Note that the system routes in a custom route table can only be viewed, not edited.
System routes: routing rules the system adds to a route table by default. System routes can only be viewed.
Custom routes: user-defined routing rules, each consisting of a destination, a next-hop type, and a next hop. Custom routes can be added, edited, and deleted. Route tables take effect at subnet granularity: each subnet must be bound to exactly one route table, while a single route table can be bound to multiple subnets.
2.4.1.5.2 Routing Rule Type

The next-hop types of route table rules are enumerated as follows:

Next-Hop Type Next Hop Description
LOCAL LOCAL System route; each network segment in the VPC adds a LOCAL route, indicating that the VPC is reachable by default
VPC interconnection vnet id System route, added by default after VPC interconnection is enabled
Internet Gateway Internet Gateway System route specifying the public network exit: a cloud resource bound to an EIP exits via the EIP by default; otherwise, if the subnet is bound to a NAT gateway, it exits via the NAT gateway
Public Service Public Service System route pointing to the DNS and YUM sources provided by SCloud, as well as ULB proxy addresses, health-check addresses, etc.
CUSTOM CUSTOM System route for special SCloud internal services such as UAEK
IPSecVPN vpngw id Custom route to the IPSec VPN gateway, added in the VPC after the IPSec VPN product is enabled; currently the system adds it automatically and customers cannot add it themselves
Internal VIP vip id Custom route whose target is a VIP
Instance uhost id (phost id) Custom route whose target is a cloud host or physical cloud host
2.4.1.5.3 Use Restrictions:
  1. The destination network segments of routing rules may overlap, but cannot be completely identical. If a destination address matches multiple routing rules, priority is determined by the longest-prefix-match algorithm.
  2. Custom routing rules support internal VIP and instance (cloud host, physical cloud host) as next-hop types.
  3. You cannot add custom routing rules to the default route table. If you need custom rules, you can create a custom route table and add custom routing rules.
  4. Each route table supports up to 50 custom routing rules.
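The longest-prefix-match behavior in restriction 1 can be sketched as follows. This is an illustrative example only; the route entries and the `uhost-xyz` id are hypothetical.

```python
import ipaddress

# Sketch of longest-prefix matching as described for route tables: when
# several destination segments match, the most specific (longest mask) wins.
def pick_route(routes, dest_ip):
    """Return the matching route with the longest prefix, or None."""
    addr = ipaddress.ip_address(dest_ip)
    matches = [r for r in routes if addr in ipaddress.ip_network(r["dest"])]
    if not matches:
        return None
    return max(matches, key=lambda r: ipaddress.ip_network(r["dest"]).prefixlen)

routes = [
    {"dest": "0.0.0.0/0",     "next_hop": "Internet Gateway"},
    {"dest": "172.16.0.0/12", "next_hop": "LOCAL"},
    {"dest": "172.16.5.0/24", "next_hop": "Instance uhost-xyz"},  # hypothetical id
]
print(pick_route(routes, "172.16.5.9")["next_hop"])  # Instance uhost-xyz (/24 wins)
print(pick_route(routes, "8.8.8.8")["next_hop"])     # Internet Gateway
```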

2.4.2 Product Advantages

  1. Flexible VPC network segment definition: It supports the free combination of three network segments 10/172/192, and new network segments can be added to the VPC at any time for flexible expansion.
  2. Use of subnets across availability zones: The subnet can cover any availability zone in the region to realize disaster recovery across availability zones.
  3. Support for cross-availability-zone VIPs: lifts the single-zone restriction on VIP usage, effectively supporting cross-zone high-availability architectures.

2.5 Firewall

The public network firewall (UFirewall) is a software firewall provided for cloud hosts and physical cloud hosts on the SCloud cloud platform.

Public firewall rules are applied directly at the data center's public network entrance and do not consume host computing resources. The independently configurable software firewall controls and manages public network access by binding firewall rules to cloud resources, providing essential protection for cloud resource security. You can configure the public network firewall from the console and achieve public network access control over host resources without logging in to the resource instances for internal adjustments.

2.6 IPSecVPN

The VPN gateway (IPSec VPN) provides disaster-tolerant and highly available VPN services. It needs to be used in conjunction with the user’s VPC in SCloud, the user’s local gateway, and public network services. Users can choose a variety of encryption and authentication algorithms to ensure the reliability of the tunnel.

  • Provide Enhanced and Standard Types to choose from according to requirement
  • Provide Highly Available services

The SCloud VPN gateway service complies with relevant national laws and regulations and does not provide the function of accessing Internet content.

 

2.6.1 Basic Concepts

2.6.1.1 Service Structure of VPN Gateway

The VPN gateway service is composed of three parts:

VPN Gateway: The VPN gateway of the public cloud on the SCloud side needs to be associated with the corresponding UVPC

Customer Gateway: The customer’s gateway on the local network

Tunnel: The tunnel connecting the VPN gateway and the customer gateway requires the customer to configure the corresponding algorithm and strategy. The tunnel is established in the public network, and the network quality is affected by the public network.

2.6.1.2 VPN Gateway Terms

Term Description
VPN gateway The customer’s UVPC egress gateway in the SCloud public cloud.
Customer gateway The customer’s gateway in the local network. The customer needs to set the customer gateway’s IP, name and other information on the console.
Tunnel The channel connecting the customer gateway and the VPN gateway; the customer must set its encryption algorithm, authentication algorithm, key, etc. Once configured, the tunnel is established when either party initiates a connection.
EIP Public network elastic IP, bound to VPN to provide public network access address and bandwidth.

2.6.2 Function Overview

Features Description
IKE authentication support Provide authentication for packets in the IKE negotiation process, supporting three authentication algorithms: md5, sha1 and sha2-256
IKE encryption support Provides encryption protection for the messages in the IKE negotiation process, and supports four encryption algorithms: 3des, aes128, aes192 and aes256
IKE DH group Specify the Diffie-Hellman group used when IKE exchange keys, support 1,2,5,14,15,16
ID type Used to describe the endpoint identity of the VPN gateway, optional automatic identification, IP address representation or domain name representation
IPSec authentication support IPSec provides authentication protection function for user data, supports md5 and sha1 authentication algorithms
IPSec encryption support IPSec provides encryption protection function for user data, supports four encryption algorithms: 3des, aes128, aes192 and aes256
IPSec security protocol IPSec supports two security protocols, AH and ESP. AH only supports authentication and protection of data. ESP supports authentication and encryption. ESP protocol is recommended.
PFS PFS (Perfect Forward Secrecy) is a security feature: if one key is compromised, the security of other keys is unaffected. The supported DH groups are 1, 2, 5, 14, 15, 16, and Disable
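The supported algorithm sets above can be collected into a simple validation sketch. This is illustrative only; the field names in `cfg` are hypothetical and not the console's actual parameter names.

```python
# Algorithm sets taken from the function overview table above.
SUPPORTED = {
    "ike_auth":    {"md5", "sha1", "sha2-256"},
    "ike_enc":     {"3des", "aes128", "aes192", "aes256"},
    "ike_dh":      {1, 2, 5, 14, 15, 16},
    "ipsec_auth":  {"md5", "sha1"},
    "ipsec_enc":   {"3des", "aes128", "aes192", "aes256"},
    "ipsec_proto": {"AH", "ESP"},
}

def check_tunnel(cfg: dict) -> list:
    """Return the names of parameters whose values are not supported."""
    return [k for k, v in cfg.items() if k in SUPPORTED and v not in SUPPORTED[k]]

cfg = {"ike_auth": "sha1", "ike_enc": "aes256", "ike_dh": 14,
       "ipsec_auth": "sha1", "ipsec_enc": "aes128", "ipsec_proto": "ESP"}
print(check_tunnel(cfg))             # [] -> every value is supported
print(check_tunnel({"ike_dh": 3}))   # ['ike_dh'] -> DH group 3 not supported
```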

 

2.7 Load Balancer

ULB (SCloud Load Balancer) is a load balancing service that automatically distributes application traffic among multiple cloud resources. It can realize automatic failover, improve business availability, and improve resource utilization.

Load balancing (ULB) provides traffic distribution, based on network packets or proxy mode, for multiple hosts or other service instances. It is used in high-concurrency environments to build a "service cluster" composed of multiple service nodes. A "service cluster" expands processing capacity and fault tolerance, automatically eliminates the impact of any single node's failure on the overall service, and improves service availability.

For Layer 7, ULB supports the HTTP and HTTPS protocols (like Nginx or HAProxy); for Layer 4, it supports the TCP and UDP protocols.

2.7.1 ULB Composition

ULB service consists of the following three parts:

  1. ULB Service Instance (SCloud LoadBalancer): used to receive and distribute traffic.
  2. Virtual Server / Listener (VServer): Listener, each VServer is a set of load balancing front-end port configuration.
  3. Backend Server (Real Server): the cloud resource that the backend actually processes the request.

 

 

2.7.2 Product Advantages

  1. Achieve Traffic Balance

Supports HTTP and TCP protocols, and distributes service traffic to back-end business hosts according to forwarding rules.

  2. Health Check

Performs health checks on back-end servers according to the rules, automatically isolates abnormal hosts, and quickly switches away from problematic resources as soon as an issue is found to ensure service availability.

  3. Session Retention

Provides HTTP session retention. With session retention enabled, requests can be forwarded to a specific instance according to user characteristics, and subsequent requests from users matching those characteristics remain bound to that instance. Both server-inserted cookies and user-specified cookies are supported.

  4. Monitoring Data

The load balancing level provides monitoring of the number of new connections per second, incoming bandwidth, and outgoing bandwidth. The back-end server level provides monitoring of the number of connections per second, inbound bandwidth, and outbound bandwidth.

  5. Safe and Stable

Use hot standby switching, combined with a distributed architecture, to ensure the high availability of ULB itself.

2.7.3 Product Features

  1. Support internal network and public network load balancing

ULB supports two load balancing scenarios, internal network and public network. You can choose to create suitable instances according to your needs to achieve high availability and horizontal expansion for internal network and public network services respectively.

  2. Support request proxy and packet forwarding dual modes

ULB supports a request proxy mode and a packet forwarding mode. The packet forwarding mode has better performance, supporting tens of millions of concurrent connections and 10-gigabit-level traffic. The request proxy mode integrates application-layer processing functions such as SSL offloading, domain name forwarding, and path forwarding, and also supports the X-Forwarded-Proto header and protocols such as WebSocket.

  3. Support client idle connection timeout setting

ULB lets you customize the client idle timeout by modifying the idle connection timeout parameter. This suits scenarios where establishing a TCP connection is expensive. For example, mobile clients often need to keep idle connections alive: because mobile signals are unstable, re-establishing a connection can take a long time and consume significant resources. In such cases, you can raise the ULB client idle timeout to avoid repeated disconnects and reconnects. The currently supported range is 1-86400 s.

  4. Function overview


Product Features Public Network ULB Internal Network ULB Description
Layer 4 forwarding (TCP/UDP)  
Layer 7 forwarding (HTTP/HTTPS)  
Load balancing algorithm Round-Robin, source address hashing, Weighted Round-Robin, Least Connections Round-Robin, source address hashing, consistent hashing, Least Connections  
Health check Performs health checks on back-end service nodes according to the configured rules, automatically isolates abnormal nodes, and switches away from problem nodes as soon as issues are found to ensure service availability.
Session persistence Session persistence is supported; requests from the same user can be forwarded to the same back-end service node.
Cross-availability zone disaster recovery Support binding back-end service nodes in different availability zones to achieve cross-availability zone disaster recovery
Public Firewall Supports binding a public network firewall to manage access-source blacklists and whitelists
Domain forwarding Supports forwarding traffic to different back-end nodes according to the access domain name and URL
Certificate management Support HTTPS certificate management
SSL Offloading Support HTTPS SSL Offloading
WebSocket Support WebSocket protocol
IPv6 address support Support forwarding IPv6 traffic
HTTP/2 HTTP/2 is not currently supported
Redirect Redirecting HTTP access to HTTPS is not currently supported
Mutual authentication HTTPS mutual (two-way) authentication is not currently supported

2.7.4 Technology Architecture

ULB (SCloud Load Balancer) provides traffic distribution capabilities to ensure business scalability and high availability. It supports two scenarios, internal network and public network, and two forwarding modes: Application Load Balancer (ULB7) and Network Load Balancer (ULB4). The following introduces the basic architecture of ULB.

2.7.4.1 Internal Network ULB4

Internal Network ULB4 is self-developed based on DPDK technology. A single server can provide more than 30 million concurrent connections, 10 million pps, and 10G wire-speed forwarding capability. ULB4 is deployed as a cluster of at least 4 servers, using ECMP + BGP to achieve high availability.

The internal network ULB4 uses a forwarding mode similar to DR. The schematic diagram of internal network load balancing forwarding is as follows:

 

As shown in the figure above, the ULB4 servers of the same cluster announce the same VIP to their attached access switches, and the access switches are configured with the ECMP algorithm, so traffic can be balanced across multiple servers, forming a cluster. When a ULB4 server's forwarding fails, its BGP announcements also stop, and the server is removed from the cluster within three seconds to preserve high availability. At the same time, the cluster health check module issues an alert so that an engineer can intervene. In addition, the servers of the same ULB4 cluster are distributed across availability zones, ensuring the cluster's high availability across availability zones.

In addition, ULB4 includes a module that performs health checks on the back-end nodes (currently only TCP port probes are supported) and reports the results to ulb4manager and the ULB4 forwarding servers. After a ULB4 forwarding server receives a client's business packet, it selects one of the healthy back-end nodes, rewrites the destination MAC, and tunnels the packet to that node; note that the source IP and destination IP are unchanged. The back-end node must therefore bind the ULB4 virtual IP on its loopback (lo) interface and listen for the service there to process the packet correctly, and it unicasts the return packet directly to the Client. This is a typical DR flow, so with internal network ULB4 the back-end can see the Client's source IP directly.
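On a Linux back-end node, the loopback VIP binding described above is commonly done as follows. This is a sketch of standard DR-mode practice, not SCloud-specific tooling; the VIP 10.9.8.7 is a placeholder:

```shell
# Assumed VIP 10.9.8.7 — replace with the ULB4 virtual IP of your instance.
# Bind the VIP on loopback with a /32 mask so the node accepts DR traffic.
ip addr add 10.9.8.7/32 dev lo

# Standard DR-mode ARP tuning: the node must not answer or announce ARP
# for the VIP, so that only ULB4 attracts the VIP's traffic.
sysctl -w net.ipv4.conf.lo.arp_ignore=1
sysctl -w net.ipv4.conf.lo.arp_announce=2
sysctl -w net.ipv4.conf.all.arp_ignore=1
sysctl -w net.ipv4.conf.all.arp_announce=2
```

With this in place, the service listening on the VIP processes tunneled packets and replies directly to the client, as described above.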

2.7.4.2 Public Network ULB4

The public network ULB4 is similar to the internal network ULB4 and is also self-developed based on DPDK technology. A single server can provide more than 30 million concurrent connections, 10 million pps, and 10G wire-speed forwarding capability. It is deployed as a cluster of at least 4 servers, using ECMP + BGP to achieve high availability. It likewise uses a DR-like forwarding mode. The diagram of public network load balancing forwarding is as follows:

 

 

 

Unlike the internal network ULB4, public network traffic comes in from the internet. Traffic from a Client accessing ULB4 enters an SCloud POP point and then UVER (SCloud Virtual Edge Router). UVER is a public network traffic steering center developed by SCloud: it learns the next-hop information for every EIP from the service database and, after BGP diversion, tunnels EIP traffic to the corresponding next target. A ULB4 EIP is served by all servers in the ULB4 cluster, so UVER load-balances this traffic across the cluster's servers according to a consistent hashing algorithm. The subsequent process is similar to the internal network ULB4: the back-end node must bind the ULB's EIP on its loopback interface and listen for the service, and return packets are sent directly to UVER and returned to the Client via the internet.
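The consistent hashing step can be illustrated with a minimal hash ring. This is a generic sketch, not UVER's actual code; server names and flow keys are placeholders:

```python
import bisect
import hashlib

def _point(key: str) -> int:
    return int(hashlib.md5(key.encode()).hexdigest(), 16)

class HashRing:
    """Minimal consistent-hash ring: each server contributes several virtual
    points, and a flow key (such as an EIP) maps to the first point clockwise
    from its own hash value."""
    def __init__(self, servers, vnodes=64):
        self.ring = sorted((_point(f"{s}#{i}"), s)
                           for s in servers for i in range(vnodes))
        self._keys = [p for p, _ in self.ring]

    def get(self, flow_key: str) -> str:
        idx = bisect.bisect(self._keys, _point(flow_key)) % len(self.ring)
        return self.ring[idx][1]
```

Because removing a server deletes only its own points, flows that were not mapped to it keep their original server, which is why the health-check module can remove an abnormal server with minimal disruption.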

In the public network ULB4, the cluster health check module will periodically detect the health status of the server. If it finds a problem with the server, it will notify UVER to remove the abnormal server to ensure high availability. Similarly, the public network ULB4 cluster is also highly available across availability zones.

2.7.4.3 Public Network ULB7

ULB7 is developed based on HAProxy. A single instance can support more than 400,000 pps, 2 Gbps, and at least 400,000 concurrent connections. ULB7 uses CPU affinity to achieve core isolation and resource control.

 

 

 

Unlike the DR mode used by ULB4, ULB7 uses proxy mode, i.e. full-NAT mode. After receiving the client's request, ULB7 converts the connection from the client to the ULB7 EIP into a connection from a ULB7 proxy IP to the actual back-end IP. The back-end therefore cannot see the Client IP directly. In addition, the health check module is integrated into the ULB7 process, so no separate node health check module is required.

Similarly, in the public network ULB7, the cluster health check module periodically probes the health status of the servers. If it finds a problem with a server, it notifies UVER to remove the abnormal server to preserve high availability. The public network ULB7 cluster is likewise highly available across availability zones.

2.7.4.4 Mode Comparison

Compared with ULB7, ULB4 has stronger forwarding capability and suits scenarios where raw forwarding performance matters. ULB7 can process layer-7 data, perform SSL offloading, domain name forwarding, path forwarding, and other functions, and its back-end nodes do not need to be configured with an additional VIP.

2.7.5 ULB Balancing Algorithm

  1. Round-Robin: When ULB receives a new TCP connection, it forwards it to each back-end service node in turn.
  2. Source Address: ULB hashes the source IP address of the TCP connection to select a service node. As long as the number of service nodes remains unchanged, later visits from the same source IP still reach the same service node.
  3. Source Address (port): ULB hashes the source address and source port of the TCP connection to select a service node. (Only supported by the ULB4 message forwarding mode)
  4. Consistent Hashing: Selects the back-end service node from a consistent hash of the source and destination IP. Adding or deleting back-end service nodes affects only a small portion of connections. (Only supported by the ULB4 message forwarding mode)
  5. Consistent Hashing (port): Selects the back-end service node from a consistent hash of the source and destination IP and the source and destination port. Adding or deleting back-end service nodes affects only a small portion of connections. (Only supported by the ULB4 message forwarding mode)
  6. Weighted Round-Robin: When ULB receives a new TCP connection, it assigns it to a back-end service node with probability proportional to the weight you specify for each node.
  7. Least Connections: When ULB receives a new TCP connection, it counts the connections between ULB and each back-end service node in real time and selects the node with the fewest connections to establish the new connection and send data. (Only supported by the ULB7 request proxy mode)
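The Weighted Round-Robin behavior can be sketched in a few lines. This is a generic illustration with hypothetical node names and weights, not ULB's internal scheduler:

```python
import itertools

def weighted_round_robin(weights: dict):
    """Generic weighted round-robin sketch: expand each node by its integer
    weight and cycle through the expanded list, so new connections are
    assigned in proportion to the configured weights."""
    expanded = [node for node, w in weights.items() for _ in range(w)]
    return itertools.cycle(expanded)

# Hypothetical back-end nodes: rs-1 weighted 3, rs-2 weighted 1.
picker = weighted_round_robin({"rs-1": 3, "rs-2": 1})
first_eight = [next(picker) for _ in range(8)]
```

Over any window of 4 new connections, rs-1 receives 3 and rs-2 receives 1, matching its share of the total weight.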

2.7.6 Performance

New connections per second (CPS): The number of new connections per second, reflecting the ability to handle new connections.

Maximum number of concurrent connections: The total number of connections held at the same time, reflecting the ability to handle connections concurrently.

Processing packets per second (PPS): The amount of packets forwarded per second, which reflects the packet forwarding rate.

Maximum throughput: the bandwidth that can be supported.

Requests per second (QPS): The number of queries per second. QPS is not a core indicator for ULB. What really consumes performance is CPS. In the case of short connection, QPS=CPS; in the case of long connection, QPS>CPS.
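The QPS/CPS relationship above reduces to simple arithmetic: with keep-alive, each connection carries several requests. The numbers below are illustrative, not measured figures:

```python
def qps_from_cps(cps: int, requests_per_connection: int) -> int:
    """QPS = CPS * requests carried per connection: with short connections
    the ratio is 1 (QPS == CPS); with keep-alive it is > 1 (QPS > CPS)."""
    return cps * requests_per_connection

short_conn_qps = qps_from_cps(40_000, 1)    # short connections: QPS == CPS
keepalive_qps = qps_from_cps(40_000, 10)    # long connections: QPS > CPS
```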

Product Mode New connections per second (CPS) Maximum concurrent connections Maximum throughput (bps) Packets per second (PPS)
Public Network ULB Layer 4 600,000 100,000,000 30 Gbps 18,000,000
Public Network ULB Layer 7 40,000 (4,000 SSL) 300,000 800 Mbps 400,000
Internal Network ULB Layer 4 600,000 100,000,000 30 Gbps 21,000,000

 

Database

3.1 UDB Cloud Database

SCloud Cloud Database Service (UDB for short) is a stable and secure fully managed online database service based on mature cloud computing technology, with high availability, high performance, and elastic expansion characteristics.

Compared with traditional self-built databases, UDB improves service availability and makes backup more convenient:

  UDB Traditional Self-Built Database
Service Availability 99.95% or more Must be guaranteed by yourself
Data Security 99.9999% Must be guaranteed by yourself
Backup Automatic scheduled backup, manual backup, backup validity check Do it yourself; backups take extra storage space

Monitoring & Alert Complete monitoring mechanism Build your own monitoring system
Troubleshooting DBA helps locate and resolve any issues Locate and resolve any issue by yourself
Deployment One-Click Deployment on the Console Purchase, IDC hosting, installation, all done by yourself
Safety Internal network isolation, database audit, one-key recovery Must be guaranteed by yourself
Elastic Expansion Supported Not supported
Cost Charged on demand, high resource utilization High equipment cost, high operation and maintenance cost

 

3.1.1 MySQL

The cloud database MySQL is a highly available and high-performance database service based on mature cloud computing technology. It is fully compatible with MySQL 5.1, MySQL 5.5, MySQL 5.6, MySQL 5.7, Percona 5.5, Percona 5.6 and Percona 5.7 protocols.

In addition to supporting dual-active hot-standby architecture and high-performance SSD disks, it also provides a complete set of solutions for disaster recovery, backup, data rollback, monitoring, and database auditing.

3.1.1.1 Basic Concept

Database Type

MySQL instances include the standard version and high-availability version instance types.

The standard version instance provides a basic database single instance, which can create master-slave synchronization according to requirements to realize data redundancy and separation of read and write.

The high-availability version instance adopts the dual-active hot-standby architecture to completely solve the database unavailability caused by downtime or hardware failure.

Database Version The MySQL instance currently supports MySQL 5.1, MySQL 5.5, MySQL 5.6, MySQL 5.7, Percona 5.5, Percona 5.6, Percona 5.7, etc. Users can choose the corresponding cloud database version according to their needs.
Storage Type

MySQL instances currently provide SSD and NVMe storage types.

SSD type is suitable for business scenarios that require high database performance.

NVMe type is a new generation of ultra-high performance cloud disk products suitable for business scenarios with large capacity and low latency requirements.

Memory The memory size of the cloud database. Users can choose according to the hardware requirements of the cloud database.
Disk The hard disk size of the cloud database. Users can choose according to the hardware requirements of the cloud database.
Resource ID After the user creates a cloud database instance, the system will automatically generate a resource ID, which is globally unique.
IP and Port IP is the internal network address for users to access the cloud database. It will be automatically generated after the cloud database is created successfully. Currently, the public IP is not provided. The default port for MySQL and Percona is 3306.
Backup A backup saves all data of the cloud database at a certain point in time. The cloud database provides both automatic backup and manual backup to prevent data loss and avoid risks caused by misoperation. The backup source can be the master library or a slave library, but backing up from a cross-availability-zone slave library is not currently supported.
Log Logs are recording files used to record cloud database operation events. Including binary log, slow query log, error log, operation log.
Database Name Users can customize the name of the cloud database instance.
Administrator The super administrator (root) authority is provided by default, allowing users to customize the administrator password.
Configuration File The configuration file includes various configuration parameters for cloud database operation. Users can customize and modify them as needed. Different cloud database versions provide corresponding default configuration files. Configuration files include default configuration files and custom configuration files. Custom configuration files are created and imported by users.
Master Library and Slave Library The main library supports read and write operations, and the attribute is master. On the one hand, the slave library can be used as the disaster recovery node of the main library, and it can also support read operations, reduce the pressure of the main library, and realize the separation of reading and writing. The attribute of the slave library is slave.
Payment Methods The payment methods of MySQL products are divided into three methods: yearly, monthly, and on-demand; all are prepaid, that is, you pay for the corresponding service period in advance. The price of a MySQL product consists of two parts, memory and disk: MySQL instance price = (memory size * memory unit price + hard disk size * hard disk unit price) * usage time.
Quantity The number of cloud databases that users need to apply for is one by default, and multiple databases can be selected for batch creation.
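The pricing formula in the Payment Methods entry above can be written out directly. The unit prices used here are placeholders for illustration, not actual SCloud tariffs:

```python
def mysql_instance_price(mem_gb: float, disk_gb: float,
                         mem_unit: float, disk_unit: float,
                         hours: float) -> float:
    """The formula stated above:
    price = (memory size * memory unit price
             + hard disk size * hard disk unit price) * usage time."""
    return (mem_gb * mem_unit + disk_gb * disk_unit) * hours

# 8 GB memory, 100 GB disk, placeholder unit prices, 24 hours of use:
cost = mysql_instance_price(8, 100, 0.02, 0.001, 24)
```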

3.1.1.2 Product Advantages

  1. Highly Available Architecture

The high-availability version of the instance adopts a dual-active hot-standby architecture to completely solve the database unavailability caused by downtime or hardware failure, and its stable and reliable performance far exceeds the industry average.

  2. High-performance SSD storage media

Provides SSD models, which can still provide fast and efficient database query and transaction processing capabilities in business scenarios with hundreds of millions of data processing volumes, and easily respond to high concurrency and large-scale data processing requirements.

  3. Safe and Reliable

Provides a master-slave architecture to effectively ensure data redundancy, combines automatic backup strategies with manual backups, and provides a database rollback function to keep valuable data safe.

  4. Rapid Deployment

Instances can be quickly deployed online, saving self-built database work such as purchasing, deployment, and configuration, shortening the project cycle and helping the business go online quickly.

  5. Elastic Expansion

The database resources can be elastically expanded according to business pressure to meet the requirements for database performance and storage space in different business stages.

  6. Flexible and Easy to Use

The database can be monitored, alerted on, and audited through the console; an open API is provided to help make the overall application architecture intelligent and systematic, making system operation and maintenance more efficient and convenient.

  7. Lower Cost

The required resources can be activated immediately according to business needs, without the need to purchase high-cost hardware in the early stage of the business, effectively reducing initial asset investment and avoiding waste of resources.

3.1.1.3 Model Version

3.1.1.3.1 MySQL Product Type

The cloud database MySQL provides SSD and NVMe storage types. It offers a high-availability dual-master hot-standby architecture, supports cross-availability-zone deployment, and supports building slave libraries across availability zones to achieve availability-zone-level disaster recovery and ensure high service availability.

 

3.1.1.3.2 MySQL Capacity Specifications
Memory specification (G) 1 2 4 6 8 12 16 24 32 48 and above
Recommended hard disk specifications (G) 20 50 100 200 300 400 500 800 1000 2000
3.1.1.3.3 MySQL Product Version

The cloud database MySQL supports the MySQL 5.1/5.5/5.6/5.7 and Percona 5.5/5.6/5.7 protocols.

The MySQL high-availability version supports the MySQL 5.5/5.6/5.7 and Percona 5.5/5.6/5.7 protocols.

The MySQL high-availability version + slave library architecture supports the MySQL 5.6/5.7 and Percona 5.6/5.7 protocols.

3.1.1.4 Product Features

  1. Dual-active hot-standby architecture, automatic failover:

The high-availability version of MySQL supports cross-availability zone deployment to achieve high-availability disaster tolerance and failover at the computer room level.

  2. Multi-DC, one master and multiple slaves:

Supports creating read-only slave libraries across availability zones. For a database service with a high-availability master library plus read-only slave libraries, it supports enabling read/write separation with custom read/write ratios, allowing throughput to scale linearly.

The protocol and configuration of the cloud database MySQL are 100% compatible with native MySQL, and the high-availability version of MySQL supports online configuration downgrade.

  3. Backup and Recovery:

Provides both physical backup and logical backup; automatic daily backup is the default, with roughly the last 7 days of backups kept free of charge; manual backup is supported, with 3 copies free. Users can set the backup method, backup time, and backup object, and data recovery is supported.

  4. Log Management:

The console provides the binlog, error log, and slow query log for the MySQL database, which can be packaged and downloaded for management.

  5. Configuration File Management:

The MySQL configuration file supports user-defined management: the corresponding parameter values can be modified, and the configuration file used by the MySQL database can be replaced.

  6. Complete Monitoring and Alarm Settings:

After deploying the MySQL database with one click, it provides monitoring information such as memory usage, disk usage, number of connections, and QPS, and also provides a default alarm template. You can set common alarm thresholds for the main monitoring items, and an alarm is triggered when a threshold is reached.

3.1.1.5 High-Availability version of MySQL

  1. A brief architecture diagram of high-availability MySQL is as follows:

 

  • Dual-master architecture: two DBs are each other’s standby database
  • Route to the main DB through the proxy node
  • The IP of the high-availability instance is bound to the proxy to ensure that there is no need to change the IP after disaster recovery

 

  2. The master library + read-only slave library read/write separation architecture:
  • The read/write ratio of database business can reach 5:1 or even 10:1.
  • The core value is that read performance improves significantly after slave nodes are added, making full use of the slave nodes' read request processing capability.
  • Read performance increases linearly with the number of slave nodes.

 

As shown in the figure, a read/write separation middleware is composed of two high-performance Proxy nodes and the SCloud distributed load balancing product ULB. The middleware identifies the type of each business SQL request: write requests are forwarded 100% to the master library, while read requests are distributed according to configurable distribution rules. In the figure, the rules are set so that 40% of read requests go to the master library and 30% each go to slave 1 and slave 2.
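The routing decision described above can be sketched as follows. This is an illustrative model of the behavior, not the middleware's actual code; node names and the 40/30/30 weights mirror the example in the text:

```python
import random

def route(sql: str, read_weights: dict, master: str = "master") -> str:
    """Route a SQL statement the way the read/write-splitting proxy is
    described: writes always go to the master; reads are spread across
    nodes by configurable weights."""
    verb = sql.lstrip().split(None, 1)[0].upper()
    if verb not in ("SELECT", "SHOW"):
        return master  # write request: forwarded 100% to the master
    nodes = list(read_weights)
    return random.choices(nodes, weights=[read_weights[n] for n in nodes])[0]

# The 40/30/30 distribution from the figure:
weights = {"master": 40, "slave1": 30, "slave2": 30}
```

For example, `route("UPDATE t SET a=1", weights)` always returns the master, while `route("SELECT * FROM t", weights)` lands on the master about 40% of the time and on each slave about 30% of the time.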

3.1.1.6 Backup Management and Recovery

MySQL Cloud Database supports functions such as database backup, restoration from backup, backup file download, and database back file creation.

MySQL backup is divided into two types: automatic backup and manual backup. Automatic backup runs once a day (the default backup window is one hour between 3:00 and 6:00, and backups are retained for 7 days).

SSD type MySQL provides two backup methods: physical backup and logical backup; the default is logical backup. The backup source can be either the master library or a slave library. Users can change the automatic backup strategy according to business needs.

3.1.1.6.1 Manual Backup

MySQL instances support manual backup, so users can preserve important data at key points in time. Currently, up to three manual backups are allowed; beyond three, the earliest manual backup is automatically deleted.
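The retention rule just described (keep at most three manual backups, dropping the earliest) can be sketched as:

```python
def add_manual_backup(backups: list, name: str, limit: int = 3) -> list:
    """Keep at most `limit` manual backups; when a new one would exceed the
    limit, the earliest backup is deleted, as described above.
    `backups` is ordered oldest-first; backup names are hypothetical."""
    backups = backups + [name]
    while len(backups) > limit:
        backups.pop(0)  # drop the earliest manual backup
    return backups

history = []
for snap in ["bk-mon", "bk-tue", "bk-wed", "bk-thu"]:
    history = add_manual_backup(history, snap)
# after four backups, only the three most recent remain
```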

The SSD Type MySQL provides physical backup and logical backup. The default is logical backup. When creating a manual backup, the user only needs to enter the backup name, select the backup method, and the console backend will immediately start the backup work.

3.1.1.6.2 Backup Download

Users can download backup files generated by automatic backup and manual backup.

To view the backup files of all MySQL instances, click “Backup Management” in the navigation bar.

3.1.2 SQL Server

The cloud database SQL Server is a highly available and high-performance database service based on mature cloud computing technology. It provides a complete set of solutions such as backup and monitoring, completely removing the burden of database operation and maintenance. Microsoft's license fee is included, at no additional cost to the user.

3.1.2.1 Product Advantages

Advantage UDB for SQL Server Self-built SQL Server
Convenient management, Fully Hosted The user does not need to care about the installation, deployment, or expansion of SQL Server; these operations are completed automatically by UDB. At the same time, it provides multiple management operations and key monitoring, making your database operation and maintenance worry-free. You need to build your own monitoring and alarm systems, write your own management scripts, and handle failures manually.
Dual-host high availability, stable and reliable By default, it provides a one-primary, one-mirror configuration; failover completes in seconds. It provides automatic backup capability and complete monitoring. You need to build your own mirror server and mirroring, build your own backup system, and handle failures yourself.
Cost-Effective and Cost Saving Genuine Microsoft authorization, purchase on demand, pay for rent, and upgrade at any time to help you effectively reduce your database infrastructure investment. Usually select servers and purchase licenses based on peak performance, and pay most of the cost at one time.

3.1.2.2 Application Scenario

  1. E-commerce/O2O/Travel

Order & transaction system based on C#, ASP.NET and other architecture. SQL Server can provide stable and high-performance database support solutions.

  2. Mobile Office

Quickly deploy mobile office platforms such as enterprise OA/ERP/sales management, and store data in a secure subnet, which is safe and reliable.

  3. Financial Sector

Database for the core applications of banking, insurance, securities, funds, and emerging financial fields such as capital trading, circulation, and accounting.

  4. Gaming

Games developed on the Windows platform.

  5. Data Warehouse & Data Analysis Platform

Build a cloud-based data warehouse and data analysis platform through SQL Server’s own business intelligence, IT dashboard version, and collaboration with SharePoint.

3.1.2.3 Product Version

Version support: SQL Server 2012 Enterprise Edition

Creation Method: Contact technical support to help create

3.1.3 MongoDB

The cloud database MongoDB is a highly available and high-performance database service based on mature cloud computing technology. It is fully compatible with the MongoDB protocol and supports flexible deployment. In addition to the replica set instance architecture, the cloud database MongoDB also provides a sharded cluster architecture to meet massive data business scenarios. It also provides a complete set of solutions such as disaster recovery, backup, monitoring and alarm.

3.1.3.1 Basic Concept

Node Type

MongoDB is divided into Primary (Shardsvr), Secondary, Arbiter, Configsvr, Mongos and other node types.

Primary node: It is the primary node of the replica set. The replica set defaults to a three-node replica set architecture. It supports changing the configuration of replica set nodes and supports increasing or decreasing the number of nodes in the replica set.

Secondary node: It is the slave node of the replica set that can provide read services. Adding a Secondary node can improve the read service performance and availability of the replica set.

Arbiter node: the arbitration node of the replica set. It does not store data and is only responsible for voting during failover.

Mongos node: acts as the service proxy; a single cluster-version instance can support multiple Mongos nodes.

Configsvr node: a required configuration node for the cluster.

Database Version MongoDB currently supports MongoDB 2.4/2.6/3.0/3.2/3.4/3.6/4.0, and users can choose the corresponding cloud database version according to their needs.
Storage Type MongoDB instances currently provide SSD type. SSD models are suitable for business scenarios that require high database performance.
Replica Set By default, a three-node replica set is constructed with one click: one Primary node + one Secondary node + one Arbiter node. Node-creation operations on the replica set support expanding to replica sets with more nodes (for example, five nodes, seven nodes, or more).
Sharded Cluster The console supports building a sharded cluster, consisting of a three-replica Configsvr + N Mongos + data shards (each a three-node replica set: one Primary node + one Secondary node + one Arbiter node). The number of routing nodes and data shards can be increased or decreased according to business needs.
Memory The memory size of the cloud database. Users can choose according to the hardware requirements of the cloud database.
Disk The hard disk size of the cloud database. Users can choose according to the hardware requirements of the cloud database.
Resource ID After the user creates a cloud database instance, the system will automatically generate a resource ID, which is globally unique.
IP and Port IP is the internal network address for users to access the cloud database. It will be automatically generated after the cloud database is created successfully. Currently, the external IP is not provided. The default port of MongoDB is 27017.
Backup The backup saves all the data of the point cloud database at a certain time. Cloud database provides two methods: automatic backup and manual backup to prevent data loss and avoid risks caused by misoperation. The backup source of the replica set can be the primary node or the secondary node.
Log Logs are recording files used to record cloud database operation events. Including binary log, slow query log, error log, operation log
Configuration File The configuration file includes various configuration parameters for cloud database operation. Users can customize and modify them as needed. Different cloud database versions provide corresponding default configuration files. Configuration files include default configuration files and custom configuration files. Custom configuration files are created and imported by users.
Payment Methods The payment methods of MongoDB products are divided into three methods: yearly, monthly, and on-demand, and the payment methods are all prepaid, that is, to pay for the corresponding service cycle in advance. The price of MongoDB products consists of memory and disk. The calculation formula is: node price = (memory size * memory unit price + hard disk size * hard disk unit price) * duration of use

 

3.1.3.2 Product Features and Advantages

3.1.3.2.1 Features

MongoDB products are fully compatible with the MongoDB protocol, support one-click construction of three-node replica sets, and support data sharding clusters. They provide highly reliable data storage; users can upgrade or downgrade configurations and expand the number of replica set nodes online according to business needs, and building replica sets and sharded clusters from the console is simple and easy. Log management, backup and recovery, and monitoring and alarm setting management functions are provided.

Main features:

  1. One-click replica set management:

Console one-click deployment with second-level delivery; a three-node replica set by default. The MongoDB console can deploy a replica set of 3 nodes (1 Primary + 1 Secondary + 1 Arbiter) with one click, while also providing the ability to add nodes (for example, expanding to 5 nodes, 7 nodes, etc.). This suits business scenarios with higher read performance requirements, such as read-heavy, write-light workloads or sudden business needs like event promotions. Users can increase or decrease the number of Secondary and Arbiter nodes on the console according to business needs, and node memory or hard disk configurations can be upgraded or downgraded.

  2. Sharded Cluster Management:

The console supports building a sharded cluster, consisting of a three-replica Configsvr + N Mongos + data shards (each a three-node replica set: one Primary node + one Secondary node + one Arbiter node). The number of routing nodes and data shards can be increased or decreased according to business requirements, and users can also upgrade or downgrade the memory and disk configuration of the nodes in the shards. The sharded cluster with a three-replica Configsvr supports MongoDB 3.4, MongoDB 3.6, and MongoDB 4.0 by default; for MongoDB 3.2 and below, config nodes and routing nodes can be built independently.

  3. Backup, recovery, and log management:

Supports automatic and manual backup. By default the service backs up automatically every day and keeps the last 7 days of backups for free; manual backups are free for up to 3 copies. Users can set the backup method, backup time, and backup objects, and restore data from backups. For log management, the console provides error logs and slow query logs of the MongoDB database, which can be packaged and downloaded.

  4. Configuration files and monitoring:

Supports user-defined configuration management: users can modify parameter values and change the configuration file of the MongoDB database. Complete monitoring and alarm settings are also provided: after one-click deployment, the service exposes monitoring metrics such as memory usage, disk usage, connection count, and QPS, along with default alarm templates that set common thresholds for the major metrics. An alarm is triggered when a threshold is reached.

3.1.3.2.2 Product Advantages
  1. Excellent performance:

High-performance hardware is used to meet demanding database workloads, and the database parameters are specially tuned for high performance and high concurrency.

  2. One-click Replica Set Deployment:

A replica set is composed of a primary node (Primary), replica nodes (Secondary), and arbiter nodes (Arbiter). Users can increase or decrease replica set nodes according to business needs, flexibly matching the deployment to the workload.

  3. Flexible Backup:

Provides both automatic and manual backup to prevent data loss and accidental deletion, ensuring the safety and reliability of user data.

  4. Data Security:

Ensure data security in many aspects, including data access, network isolation, and data disaster recovery.

  5. Convenient Management:

Visual console management: simply select a configuration matching the required data space and performance to create an instance, which is ready to use once initialization completes. Rich cloud platform API interfaces are also provided.

  6. Strong Compatibility:

It is seamlessly compatible with the MongoDB protocol, and applications can be migrated to the cloud database without any changes.

  7. Flexible Expansion:

Online elastic expansion: instance configuration can be upgraded in real time according to the actual database load, providing larger database space and stronger database performance.

  8. Professional DBA service:

The UDB product team provides 7×24 professional DBA services for UDB users to help answer questions related to database operation and maintenance.

3.1.3.3 Replica Set Architecture

SCloud cloud database MongoDB replica set architecture:

 

The default one-click replica set has three nodes: Primary, Secondary, and Arbiter. Customers can add or remove Secondary and Arbiter nodes on the console according to their business needs. The primary node (Primary) handles reads and writes for the whole replica set; users can also direct all or part of the read requests to the replica nodes (Secondary) according to business needs. The replica set synchronizes data in real time.

If the primary node fails or goes down, the replica nodes elect a new primary and the application server is not affected. The arbiter node stores no data and only votes in failover elections, which reduces data replication pressure. In the SCloud MongoDB product, the arbiter node (Arbiter) is free.

3.1.3.4 MongoDB Sharded Cluster Architecture

MongoDB supports sharding: multiple shards form a cluster that provides services externally. To overcome the bottleneck caused by growing data volume, MongoDB uses sharding to quickly build a highly available and scalable distributed cluster, suited to large data volumes, high scalability, high performance, flexible data models, and high availability. The sharded cluster architecture is as follows:

 

The SCloud cloud database MongoDB sharded cluster (MongoDB 3.4 and above) is composed of a three-copy Configsvr + N Mongos routers + N data shards. Once sharding rules are set, the cluster is operated through mongos, which automatically forwards each data operation request to the corresponding shard machine.

  1. Routing node (Mongos):

As the entry point for database cluster requests, all requests are coordinated through mongos. Mongos itself is a request distribution center responsible for forwarding each data request to the corresponding shard server. In production it is recommended to run multiple mongos as entry points to achieve load balancing and to prevent the failure of any single one from making all MongoDB requests inoperable.
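Listing several mongos routers in one connection URI lets a standard MongoDB driver balance across them and survive the loss of any single router. A sketch with placeholder addresses:

```python
def mongos_uri(mongos_hosts, database="admin"):
    """All mongos routers go into one URI; the driver can fail over
    between entry points if one becomes unreachable."""
    return "mongodb://{}/{}".format(",".join(mongos_hosts), database)

# Placeholder addresses; use the routing-node addresses from the console.
print(mongos_uri(["10.9.2.1:27017", "10.9.2.2:27017", "10.9.2.3:27017"]))
```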

  2. Configuration node (Configsvr):

Stores all database meta-information (routing and sharding configuration). A routing node loads the configuration from the configuration nodes on first start or after a restart; if the configuration later changes, the configuration nodes notify all routing nodes to update their state so they can continue to route accurately. For MongoDB 3.4 and above, the three-copy configuration nodes in the SCloud cloud database MongoDB sharded cluster are free.

  3. Shard node (Shard):

Responsible for storing the database data. Each data shard defaults to a three-node replica set (1 Primary + 1 Secondary + 1 Arbiter), and the number of Secondary and Arbiter nodes can be changed according to actual business needs. To scale the cluster's storage and read/write concurrency horizontally, more data shards can also be added from the console.

3.1.3.5 Payment Method

Billing formula:

Instance node cost = (memory specifications × memory unit price + hard disk specifications × hard disk unit price) × usage time.

Replica set cost = Primary node cost + Secondary node cost. (The Arbiter node is free.)

Sharded cluster cost = Mongos (routing node) cost + data shard (replica set) cost. (For sharded clusters of version 3.4 and above, the configuration nodes are free.)
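Putting the three billing formulas together; all unit prices here are hypothetical, and the free Arbiter and 3.4+ Configsvr nodes contribute nothing:

```python
def node_cost(mem_gb, disk_gb, mem_price, disk_price, months):
    """(memory * memory unit price + disk * disk unit price) * duration."""
    return (mem_gb * mem_price + disk_gb * disk_price) * months

def replica_set_cost(primary_cost, secondary_costs):
    # Arbiter nodes are free, so only Primary + Secondary costs are summed.
    return primary_cost + sum(secondary_costs)

def sharded_cluster_cost(mongos_costs, shard_costs):
    # For 3.4+ clusters the three-copy Configsvr is free.
    return sum(mongos_costs) + sum(shard_costs)

# Hypothetical 1-month example: each data node is 4 GB / 100 GB at $5/GB of
# memory and $0.1/GB of disk; one Primary + one Secondary per shard.
node = node_cost(4, 100, 5.0, 0.1, 1)               # 30.0 per node
shard = replica_set_cost(node, [node])              # 60.0 per shard
print(sharded_cluster_cost([10.0, 10.0], [shard, shard]))  # 140.0
```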

Available Zone:

Hong Kong, California, Washington, Jakarta, Seoul, Singapore

3.1.4 PostgreSQL

SCloud cloud database UDB PostgreSQL is a stable, secure, fully managed online PostgreSQL database service based on mature cloud computing technology. It provides a complete set of solutions for disaster recovery, backup, data recovery, and monitoring, with high availability, high performance, and elastic expansion, completely removing the burden of database operation and maintenance.

3.1.4.1 Main Concept

Instance type PostgreSQL instances currently support standard and high-availability versions.
Version PostgreSQL instances currently support PostgreSQL 9.4, PostgreSQL 9.6, and PostgreSQL 10.4. Users can choose the corresponding cloud database version according to their needs.
Storage Type PostgreSQL instances currently provide standard models and SSD models to meet the needs of users.
Memory The memory size of the cloud database. Users can choose according to the hardware requirements of the cloud database.
Disk The hard disk size of the cloud database. Users can choose according to the hardware requirements of the cloud database.
Payment methods The payment methods are yearly, monthly, and on-demand; all are prepaid, i.e. the corresponding service period is paid for in advance. For specific billing instructions, please refer to the “Purchase and Billing” document.
Quantity The number of cloud databases that users need to apply for is one by default, and multiple databases can be selected for batch creation.
Node PostgreSQL currently supports Master nodes and Standby nodes.
Configuration File The configuration file includes various configuration parameters for cloud database operation. Users can customize and modify them as needed. Different cloud database versions provide corresponding default configuration files. Configuration files include default configuration files and custom configuration files. Custom configuration files are created and imported by users.
Administrator The super administrator (root) authority is provided by default, allowing users to customize the administrator password.
Instance Name Users can customize the name of the cloud database instance.
Resource ID After the user creates a cloud database instance, the system will automatically generate a resource ID, which is globally unique.
IP and port IP is the internal network address for users to access the cloud database. It will be automatically generated after the cloud database is created successfully. Currently, the external IP is not provided. The default port for PostgreSQL is 5432.
Backup A backup saves all data of the cloud database at a certain point in time. The cloud database provides automatic and manual backup to prevent data loss and avoid risks caused by misoperation.
Log Logs are files that record cloud database operation events, including database logs.

3.1.4.2 Product Features and Advantages

SCloud cloud database UDB PostgreSQL is a stable and secure fully managed online PostgreSQL database service based on mature cloud computing technology, with high availability, high performance, and elastic expansion characteristics.

3.1.4.2.1 Main Function
  1. Type and Version:

PostgreSQL provides a standard version and a high-availability version, with SSD storage. The instance version currently supports PostgreSQL 9.4, 9.6, and 10.4, and users can choose the cloud database model and version according to their needs.

  2. Backup and Recovery:

Supports automatic and manual backup. By default the service backs up automatically every day and keeps the last 7 days of backups for free; manual backups are free for up to 3 copies. Users can set the backup method, backup time, and backup objects.

  3. Second-level rollback (high-availability version):

The UDB PostgreSQL high-availability version supports a “second-level rollback” function: when a user makes a human error, data can be restored to any second within the past 7 days.

  4. Log management:

The console provides query, packaging, and download functions for PostgreSQL database logs.

  5. Configuration file:

Supports user-defined management: users can modify parameter values and change the configuration file of the PostgreSQL database.

  6. Complete monitoring and alarm settings:

After one-click deployment, the service provides monitoring metrics such as memory usage, disk usage, connection count, and QPS, along with a default alarm template that sets common thresholds for the major metrics. An alarm is triggered when a threshold is reached.

  7. IP and port:

The IP is the internal network address for accessing the cloud database and is generated automatically once the database is created; an external IP is currently not provided. The default PostgreSQL port is 5432.
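A connection string for the internal address can follow the usual libpq keyword form; the host and credentials below are placeholders, not real instance values:

```python
def pg_dsn(host, dbname, user, password, port=5432):
    """Build a libpq-style DSN; 5432 is the default UDB PostgreSQL port."""
    return (f"host={host} port={port} dbname={dbname} "
            f"user={user} password={password}")

# Placeholder values; use the internal IP and credentials from the console.
print(pg_dsn("10.9.3.1", "mydb", "root", "secret"))
```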

3.1.4.2.2 Product Advantages
  1. Fully compatible protocol

UDB PostgreSQL is 100% compatible with the PostgreSQL protocol, and typical PostgreSQL extensions (such as the PostGIS geographic database) are installed by default for an “out of the box” PostgreSQL service experience.

  2. High performance

Each UDB PostgreSQL instance is backed by powerful hardware; users with high performance requirements can choose high-performance SSD disks as the storage medium. SCloud's operating system kernel team has performed kernel-level tuning on every server running a UDB PostgreSQL instance to keep the system running at high performance, and the instance parameters have been tuned by professional DBAs to cope with most usage scenarios.

  3. High Availability

UDB PostgreSQL supports high-availability deployment. A high-availability instance uses master-slave replication: while the primary database serves traffic, another database service continuously synchronizes data and stands by. The automatic disaster recovery module in the UDB backend detects failures of the high-availability instance and performs disaster recovery automatically, ensuring the stability and reliability of the user's PostgreSQL database service. During a switchover, the disaster recovery module promotes the standby PostgreSQL service to primary and demotes the original primary to standby once it restarts. The whole process requires no manual intervention or configuration changes.
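Since only a brief switchover window is visible to clients, applications often pair automatic failover with a simple retry wrapper. A generic sketch, not an SCloud API; `ConnectionError` stands in for the driver-specific disconnect exception:

```python
import time

def with_retry(operation, attempts=5, delay=2.0):
    """Retry a zero-argument database operation across a short failover
    window, re-raising only after the final attempt fails."""
    for i in range(attempts):
        try:
            return operation()
        except ConnectionError:
            if i == attempts - 1:
                raise
            time.sleep(delay)

# Demo: an operation that fails twice (as if mid-switchover), then succeeds.
calls = {"n": 0}
def flaky_query():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("switchover in progress")
    return "ok"

print(with_retry(flaky_query, delay=0))  # ok
```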

  4. Data Security

UDB PostgreSQL uses multiple lines of defense to keep data safe and reliable: highly reliable hardware protects the stored data, high-availability instances keep redundant copies of the data, and users can use the “create slave database” function to create additional backups of the database for even greater security.

UDB PostgreSQL also supports the advanced “second-level rollback” function: when a user makes a human error, data can be restored to any second within the past 7 days, giving users extra peace of mind.

  5. Automated Daily Operation and Maintenance

UDB PostgreSQL greatly simplifies a DBA's daily work. With one click on the console, a database service running the standard PostgreSQL protocol can be created within minutes; the service automatically backs up every day and uploads the backup to a highly reliable backup storage cluster, eliminating routine backup work; and monitoring data is collected in real time, with configurable thresholds so users receive timely alarms for database anomalies.

  6. Flexible Expansion

UDB PostgreSQL can dynamically expand database resources on demand. With a few clicks on the console, users can adjust the memory and disk resources of an instance to meet the performance and storage requirements of different business stages. In particular, for high-availability instances the database keeps serving during resource expansion, with only a brief interruption of a few seconds, greatly reducing the impact of expansion and achieving a true “Hot Scale-Up”.

  7. Lower Cost

Resources can be activated immediately as business requires, without purchasing expensive hardware in the early stage, effectively reducing initial capital investment and avoiding wasted resources.

3.1.4.3 Type Description

UDB-PostgreSQL instances provide standard and high-availability types.

The standard type provides a basic single-instance database, from which a master-slave architecture can be created as required to achieve data redundancy and read/write separation.

The high-availability type provides a dual-node active-standby configuration with automatic switchover during disaster recovery, avoiding database unavailability due to downtime or hardware failure; a master-slave architecture can also be created as required for data redundancy and read/write separation.

UDB-PostgreSQL instances provide standard and SSD storage types.

The standard storage type is suitable for business scenarios with low database I/O performance requirements.

The SSD type is suitable for business scenarios with high database performance requirements.

3.1.4.4 Product Architecture

UDB PostgreSQL supports high-availability deployment. The high-availability instance adopts a master-slave replication architecture: while the primary database serves traffic, another database service continuously synchronizes data and stands by. The architecture is as follows:

 

The automatic disaster recovery module in the UDB backend detects failures of the UDB PostgreSQL high-availability instance and performs disaster recovery automatically, ensuring the stability and reliability of the user's PostgreSQL database service. During a switchover, the module promotes the standby PostgreSQL service to primary and demotes the original primary to standby once it restarts; the whole process requires no manual intervention or configuration changes. The disaster recovery flow is shown in the following figure:

 

3.2 UMem Cloud Memory Storage

UMem (SCloud Memory Storage) is a high-performance, highly available Key-Value memory store, offered in distributed and master-replica versions for both Memcache and Redis, compatible with the commonly used Memcache and Redis protocols. It can serve as storage for persistent data as well as for cached data. Redis itself is an open-source, BSD-licensed, high-performance key-value database.

3.2.1 Redis Products 

Cloud Memory Redis is a Key-Value online storage service compatible with the open-source Redis protocol. It supports data types such as strings (String), linked lists (List), sets (Set), sorted sets (SortedSet), and hash tables (Hash), as well as advanced features such as transactions (Transactions) and message publish/subscribe (Pub/Sub). Cloud Memory Redis provides high-speed data read/write capability while also meeting data persistence requirements.
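Of these data types, the sorted set is the least familiar. A toy in-memory model of its ZADD/ZRANGE semantics, not the actual service client, shows what the type provides:

```python
class SortedSetModel:
    """Minimal model of Redis's SortedSet: members ordered by score."""

    def __init__(self):
        self.scores = {}                      # member -> score, as in ZADD

    def zadd(self, member, score):
        self.scores[member] = score

    def zrange(self, start, stop):
        """Members by ascending score; stop is inclusive and -1 means
        'to the end', mirroring ZRANGE."""
        ordered = sorted(self.scores, key=lambda m: (self.scores[m], m))
        end = None if stop == -1 else stop + 1
        return ordered[start:end]

# A leaderboard-style usage example.
board = SortedSetModel()
board.zadd("player_a", 300)
board.zadd("player_b", 150)
board.zadd("player_c", 225)
print(board.zrange(0, -1))  # ['player_b', 'player_c', 'player_a']
```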

3.2.1.1 Basic Concepts/Terms

Term Definition
Instance name The name of the user-defined cloud memory Redis instance.
Protocol The Cloud Memory Redis instance supports the Redis protocol.
Type The cloud memory Redis instance type: master-replica or distributed.
Master-Replica Type Refers to a Redis instance with an active-standby architecture. The two-node active-standby instance supports capacity expansion and contraction, with limits on expanded capacity and performance.
Distributed Type Refers to a Redis instance with an expandable sharded cluster. Distributed cluster instances have better scalability and performance, but also certain functional limitations.
Resource ID After a user creates a cloud memory Redis instance, the system automatically generates a globally unique resource ID.
IP and port The IP is the internal network address for accessing the cloud memory Redis instance, generated automatically once the instance is created. The default port is 6379.
Expansion When the configuration of the cloud memory Redis instance can no longer support the business, the instance can be expanded and upgraded.
Password The master-replica type supports password access authentication to further ensure instance security.
Configuration upgrade The master-replica type supports elastic expansion.
Slave library The master-replica type supports the creation of read-only instances, each with one active and one standby node to ensure high availability of the read-only slave library.
High availability across availability zones The master-replica type of Redis supports cross-availability-zone high availability: the active and standby nodes are deployed in different availability zones of the same region to achieve availability-zone-level disaster tolerance.
Available zone of standby database The master-replica type of Redis supports deploying the standby database across availability zones; users can choose another availability zone in the same region.
Attributes The attribute of a master-replica Redis instance is master; the attribute of its slave database instance is slave.
Capacity The capacity of the Redis instance created by the user.

3.2.1.2 Product Features and Advantages

SCloud Cloud Memory Redis provides two architectures: master-replica Redis and distributed Redis. Based on a highly reliable dual-machine hot-standby architecture and a smoothly scalable cluster architecture, it meets business needs for high read/write performance as well as flexible expansion and contraction.

In recent years Redis has become the mainstream in-memory storage solution for Internet applications, thanks to its rich data structures and functions, excellent single-core performance, and complete software ecosystem. To address the pain points of native Redis Cluster, which demands much from clients and is inconvenient to expand, SCloud has built its own proxy-based distributed Redis product, refined over many years in pursuit of higher performance, larger capacity, and data security, to provide customers with a first-class distributed cache service.

Main Functions and Advantages:

  1. High Availability

High availability is enabled by default to shield cloud memory services from downtime and other failures: when the master node fails, a slave node is quickly promoted to the new master and continues serving. Master-replica Redis also provides a cross-availability-zone high availability function, which users can enable according to business needs to improve service availability and achieve cross-machine-room disaster recovery. Note that because Redis uses asynchronous replication, under heavy business pressure a slave node may lag slightly behind its master, so a small amount of data inconsistency can occur during a master-slave switchover.

  2. Architecture

Redis provides two architectures: master-replica and distributed.

Master-replica architecture: while the system runs, the standby node (Replica) synchronizes the master node's (Master) data in real time. If the master node fails, the system automatically switches over within seconds and the standby node takes over the service; the whole process is automatic, has no business impact, and keeps the service highly available.

Distributed architecture: distributed instances adopt a distributed cluster architecture in which each node is a high-availability master-slave pair capable of automatic disaster-tolerance switchover and failover. Different capacity configurations of the distributed version suit businesses under different pressure, and the performance of distributed Redis can be expanded on demand.

  3. Complete Monitoring

Provides users with multiple types of monitoring, including usage, connection count, QPS, key count, and more.

  4. Online Scaling

Users can perform Redis expansion operations directly on the console, and the service is unaffected during expansion. The master-replica type of Redis supports scaling.

  5. Low operation and maintenance costs

Users can create and upgrade instances based on business needs, with no need to purchase expensive hardware at the start of the business, effectively reducing initial capital investment and idle-resource waste; convenient, rapid deployment management further reduces deployment and maintenance costs.

  6. Data Security

Persistent storage uses a hybrid memory + hard disk approach, providing high-speed read/write capability while meeting data persistence requirements. With internal network isolation and security protection, the master-replica type supports password access authentication to ensure safe, reliable access.

  7. Backup and Restore

Data is backed up automatically every day, manual backup is supported, and 3 copies are saved for free, giving strong disaster tolerance. One-click recovery from backup and backup download effectively guard against data misoperation and minimize possible business losses.

3.2.1.3 Product Architecture

3.2.1.3.1 Master-Replica Redis Architecture

The master-replica Redis adopts a dual-node architecture: the primary node provides daily service access, and the standby node guarantees high availability. When the primary node fails, the system automatically switches to the standby node to keep the business running smoothly.

 

 

High Service Availability

Adopting a dual-host master-replica architecture, the master node provides external access, and users can create, read, update, and delete data through the Redis command line and common clients. When the master node fails, the system automatically performs a master-replica switchover to keep the business running smoothly.

Highly Reliable Data

The data persistence function is enabled by default, and all data is persisted to disk. Data backup is supported, so users can create an instance restored from a backup, effectively resolving data misoperation and similar problems. Deployment of the active and standby nodes across availability zones is also supported, giving cross-availability-zone disaster tolerance.

Compatibility

The master-replica type of Redis supports Redis 3.0/3.2/4.0 and is compatible with Redis protocol commands; self-built Redis can be migrated smoothly to the master-replica version of Redis.

3.2.1.3.2 Distributed Redis Architecture

The distributed type of Redis adopts a Redis sharding + Proxy architecture. Redis shards are built on master-replica Redis resource pools, easily breaking through the single-threaded bottleneck of Redis itself; combined with online expansion, this meets business demands for large capacity and high performance.

 

The distributed version of Redis provides an access IP by default; users access this IP for normal Redis access and data operations. Each shard server is a master-replica high-availability Redis: after the primary node fails, the system automatically performs an active-standby switchover to ensure high service availability.

Billing method: prepaid, with yearly, monthly, and on-demand options. The Redis price is determined by capacity: instance price = capacity size × capacity unit price × duration.
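The capacity-only Redis formula can be sketched as follows; the unit price is hypothetical, not SCloud's actual rate:

```python
def redis_instance_price(capacity_gb, capacity_unit_price, months):
    """Instance price = capacity size * capacity unit price * duration."""
    return capacity_gb * capacity_unit_price * months

# Hypothetical: an 8 GB instance at $4.0 per GB per month, prepaid for a year.
print(redis_instance_price(8, 4.0, 12))  # 384.0
```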

3.2.2 Memcache Product 

Cloud Memory Memcache is an extremely high-performance, memory-level Key-Value storage service independently developed by SCloud. It can greatly relieve the pressure on back-end storage and improve the response speed of websites and applications. Cloud Memory Memcache supports the Key-Value data structure, and any client compatible with the Memcached protocol can communicate with it. Like self-built Memcached, Cloud Memory Memcache is compatible with the Memcached protocol and the user environment, and can be used directly. The difference is that the hardware and data are deployed in the cloud, with complete infrastructure, network security guarantees, and system maintenance services, none of which require up-front investment; users pay only for what they use.

3.2.2.1 Product Features

  1. Hot Data Access

Provides high-speed caching of hot data; paired with a database, it can greatly improve application response speed and relieve back-end storage pressure.
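Pairing the cache with a database this way is typically done cache-aside. A self-contained sketch; a dict-backed stub stands in for a real Memcached client, and the key naming is illustrative:

```python
def get_user(uid, cache, db, ttl=300):
    """Cache-aside read: try the cache first, fall back to the database
    and populate the cache on a miss."""
    key = f"user:{uid}"
    row = cache.get(key)
    if row is None:
        row = db[uid]              # expensive back-end read
        cache.set(key, row, ttl)   # next read is served from memory
    return row

class StubCache:
    """Dict-backed stand-in for a Memcached client's get/set interface."""
    def __init__(self):
        self._data = {}
    def get(self, key):
        return self._data.get(key)
    def set(self, key, value, ttl):
        self._data[key] = value    # TTL ignored in this toy stub

db = {1: {"name": "alice"}}
cache = StubCache()
print(get_user(1, cache, db))  # miss -> reads db, populates cache
print(get_user(1, cache, db))  # hit  -> served from cache
```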

  2. Compatible with the Memcached protocol

Supports the Key-Value data structure; any client compatible with the Memcached protocol can use Cloud Memory Memcache.

  3. Real-time monitoring

Provides real-time and historical monitoring of multiple data statistics.

  4. Online capacity adjustment

Supports online capacity adjustment; configuration changes take effect instantly.

3.2.2.2 Product advantages

  1. Superior performance

Cached data is stored in memory, and data access returns quickly.

  2. Reliable Service

When a server goes down, the system quickly restores service, and the user's client can automatically reconnect and resume.

  3. Security Guarantee

Cloud Memory Memcache only supports internal network access and is completely isolated from the public network.

  4. Elastic Scalability

When the business scale changes, users can modify the instance configuration at any time as needed.

  5. Features of stand-alone Memcache

The stand-alone version of Memcache supports all Memcache protocol and stats commands; supports restart and the LRU eviction policy; has low single-request latency; supports high availability, but data is cleared after a disaster-tolerance switchover; has explicit limits on key and value sizes (250 bytes and 1 MB respectively); single-node service QPS generally does not exceed 100,000; and the maximum capacity is 16 GB. During expansion, Memcache restarts and data is cleared.

3.2.2.3 Application Scenario

  1. High-frequency access business

Businesses such as social networks, e-commerce, games, and advertising can store frequently accessed data in Cloud Memory Memcache.

  2. Large-scale promotion activity

During a large-scale promotion event, the overall access pressure on the system is very high and an ordinary database simply cannot carry it; Cloud Memory Memcache storage can be used instead.

  3. Inventory system with counter

Cloud Memory Memcache can be used in conjunction with the cloud database UDB: UDB stores the authoritative data, with the specific count kept in a database field, while Cloud Memory Memcache serves the counter reads.
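A sketch of the counter pattern; a dict-backed stub stands in for Memcached's atomic incr/decr commands, the key name is illustrative, and UDB remains the authoritative store:

```python
class CounterStub:
    """Dict-backed stand-in for Memcached's atomic incr/decr counters."""

    def __init__(self):
        self._counts = {}

    def incr(self, key, delta=1):
        self._counts[key] = self._counts.get(key, 0) + delta
        return self._counts[key]

    def decr(self, key, delta=1):
        self._counts[key] = self._counts.get(key, 0) - delta
        return self._counts[key]

stock = CounterStub()
stock.incr("stock:sku:1001", 100)    # seed from the count stored in UDB
print(stock.decr("stock:sku:1001"))  # one item sold -> 99
```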

3.2.2.4 Selection Suggestion

Stand-alone Memcache is suitable:

  • for pure caching business scenarios that can accept data being cleared;
  • for scenarios where QPS is below 100,000;
  • for scenarios that are sensitive to latency.

 

The stand-alone version supports regions:

  • California, Hong Kong, Jakarta (Indonesia)

 

Storage

4.1 UDisk Cloud Disk

UDisk is a block storage device that provides persistent storage space for cloud hosts, as a basic block storage product in cloud computing scenarios. It has an independent life cycle, is accessed over a distributed network, and provides cloud hosts with a large-capacity, highly reliable, scalable, easy-to-use, and low-cost hard drive service.

UDisk is SCloud's cloud hard disk offering that can be created flexibly and provides advanced management functions. Using a multi-copy backup mechanism across cabinets of physical machines within an availability zone, standard cloud disks guarantee 99.99999% data durability, and SSD/RSSD cloud disks guarantee 99.999999%. Customers who want to further improve data durability can enable Data Ark for the UDisk or create snapshots regularly, which is economical and convenient.

UDisk is powerful and easy to use. Users can mount a volume to any cloud host and expand it when space is insufficient. At the same time, the UDisk snapshot and cloning functions facilitate backup, recovery and copying of data, further improving data availability.

4.1.1 Basic Concept

Hard drive name: The user-defined name of a cloud disk.
Hard drive capacity: The size of the cloud disk.
Mount point: The location at which the cloud disk is mounted on the cloud host.
Resource ID: After a user creates a cloud disk, the system automatically generates a globally unique resource ID.
Expansion: When the capacity of the cloud disk can no longer support the business, the disk can be expanded and upgraded.
Mount and unmount: The operations of attaching a cloud disk to, and detaching it from, a cloud host.
Snapshot: A disk management function that effectively prevents data loss and protects data integrity. A snapshot of a cloud disk can be created in seconds to retain the disk's state at a point in time; the disk can later be restored, or a new disk created, from the snapshot.
Offline (Windows): The dismount operation for cloud disks.

4.1.2 Cloud Hard Disk Type

UDisks are divided into three types: standard, SSD and RSSD. Standard cloud drives are SATA media, while SSD cloud drives and RSSD cloud drives are SSD media.

                          Standard            SSD                                      RSSD
Storage Medium            HDD                 NVMe SSD                                 NVMe SSD (network transmission uses RDMA)
Data Persistency          99.99999%           99.999999%                               99.999999%
Maximum Capacity          8,000G              4,000G                                   32,000G
Host Support              Standard            Standard, High Frequency, Outstanding    Outstanding
Data Ark                  Supported           Supported                                Supported
IOPS (single disk)        1,000 (peak)        min{1200 + 30 * capacity, 24,000}        min{1800 + 50 * capacity, 1,200,000}
Throughput (single disk)  100 MB/s (maximum)  min{80 + 0.5 * capacity, 260} MB/s       min{120 + 0.5 * capacity, 4,800} MB/s
Latency                   10ms                0.5-3ms                                  0.1-0.2ms
Applicable Scenario

Standard:
  • Backup and log storage.
  • Sequential read and write of large files where capacity matters (such as Hadoop offline data analysis).
  • Small relational database and development/test scenarios with certain data-reliability requirements.

SSD:
  • I/O-intensive applications.
  • Medium and large relational databases.
  • NoSQL databases and application scenarios whose performance other standard disks cannot meet.

RSSD:
  • High-performance databases.
  • I/O-intensive applications that require low latency, such as Elasticsearch.
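The single-disk IOPS and throughput limits in the table above follow simple min{...} formulas of disk capacity. They can be expressed as small helpers (capacity is the disk size in GB; the constants come directly from the table):

```python
# Single-disk performance formulas from the UDisk type table.

def ssd_iops(capacity_gb: int) -> int:
    return min(1200 + 30 * capacity_gb, 24000)

def rssd_iops(capacity_gb: int) -> int:
    return min(1800 + 50 * capacity_gb, 1200000)

def ssd_throughput_mbps(capacity_gb: int) -> float:
    return min(80 + 0.5 * capacity_gb, 260)

def rssd_throughput_mbps(capacity_gb: int) -> float:
    return min(120 + 0.5 * capacity_gb, 4800)

print(ssd_iops(500))    # 16200
print(rssd_iops(4000))  # 201800
```

For example, a 500G SSD cloud disk gets 1200 + 30 * 500 = 16,200 IOPS, well under the 24,000 cap, while its throughput 80 + 0.5 * 500 = 330 is capped at 260 MB/s.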

 

 

4.1.3 Product Advantages

UDisk adopts multiple cross-cabinet physical machine backup mechanisms within the availability zone and synchronizes data in real time to ensure that it is not affected by single-machine failure. Standard cloud hard drives guarantee data durability of 99.99999%, and SSD and RSSD cloud hard drives guarantee data durability of 99.999999%.

  1. Capacity and flexibility.

UDisk can be freely configured with storage capacity and can be expanded at any time. Currently, RSSD UDisk supports a maximum capacity of 32,000G. Multiple cloud hard disks can be mounted on a single cloud host, enabling virtually unlimited expansion of the cloud host's disk capacity.

  2. Ease of use.

UDisk supports rapid creation, mounting/unmounting, deletion and expansion, which facilitates the deployment and management of UDisk products without restarting the server.

  3. Backup and restore.

UDisk supports 2 backup methods: manual snapshot and Data-Ark.

* Manual snapshots support data disk backup and can restore data, or create a new instance, from a backup image.

* Building on the manual snapshot function, Data-Ark also supports real-time backup of cloud hard disks and can restore, or create a new UHost instance from, a backup at any point in time.

Enabling Data-Ark for a UDisk or creating snapshots regularly is an economical and convenient way to improve the long-term durability of data. If the UDisk fails, you can use the last snapshot or Data-Ark to restore or create a new instance in time.

Note:

  1. Data Ark supports real-time data backup. For product details, please visit: https://docs.scloud.sg/storage_cdn/uda/index.

4.1.4 Support Region

Availability Zone        RSSD           SSD            HDD
Hong Kong Zone A         Not supported  Not supported  Supported
Hong Kong Zone B         Not supported  Supported      Supported
Washington Zone A        Not supported  Supported      Supported
Singapore Zone A         Not supported  Supported      Supported
Taipei Zone A            Not supported  Supported      Supported
Jakarta Zone A           Not supported  Supported      Supported
Seoul Zone A             Not supported  Supported      Supported
Los Angeles Zone A       Not supported  Supported      Supported
Ho Chi Minh City Zone A  Not supported  Supported      Supported
Tokyo Zone A             Not supported  Supported      Not supported
Frankfurt Zone A         Not supported  Supported      Not supported

Note: SSD cloud hard disk supports up to 4T capacity, standard cloud hard disk supports up to 8T capacity, and RSSD cloud hard disk supports up to 32T capacity.

4.2 DataArk

SCloud Data-Ark is a service that provides continuous data protection for UDisks. It supports online real-time backup and can restore data with second-level accuracy, avoiding data loss caused by misoperation or malicious destruction and effectively protecting precious data.

4.2.1 Product Advantages

  1. Online backup without suspending business.

There is no need to suspend business or stop disk reads and writes when backing up a snapshot, so online business and disk I/O performance are unaffected.

  2. Second-level recovery, real-time data protection.

Supports recovery to any second within the last 12 hours and to any hour within the last 24 hours, avoiding data loss to the greatest extent.

  3. Automatic / manual backup to meet individual needs.

In addition to the complete automatic backup strategy, data can be backed up manually according to business needs to meet each customer's backup strategy.

  4. Console self-service, convenient and fast.

The backup strategy and operations are performed entirely through the console, saving the labor cost of data backup and lowering the technical threshold.

4.2.2 Backup Method

Data-Ark backup includes two methods: automatic and manual.

 

 

4.2.2.1 Automatic Backup

The system disk and data disk each have three backup forms:

  1. Second-level backup within 12 hours: data from the last 12 hours can be restored to any one-second state through Data-Ark.
  2. Hourly backup within 24 hours: data between 12 and 24 hours ago can be restored at hourly granularity.
  3. Daily 00:00 backup within 3 days: data between 24 hours and 3 days ago can be restored at daily granularity, taken at 00:00 each day.
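The three retention tiers above can be summarized as a small lookup: given how long ago the desired restore point is, return the finest recovery granularity Data-Ark offers. This is an illustrative sketch of the documented rules, not an SCloud API.

```python
# Automatic-backup granularity rules: second-level within 12 h,
# hourly between 12 h and 24 h, daily (00:00) within 3 days.

def recovery_granularity(hours_ago: float) -> str:
    if hours_ago <= 12:
        return "second"       # any 1-second state in the last 12 hours
    if hours_ago <= 24:
        return "hour"         # hourly points between 12 and 24 hours ago
    if hours_ago <= 72:
        return "day"          # daily 00:00 points within 3 days
    return "unavailable"      # beyond the automatic retention window

print(recovery_granularity(3))    # second
print(recovery_granularity(20))   # hour
print(recovery_granularity(60))   # day
```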

Note: The data backed up automatically cannot be deleted.

4.2.2.2 Manual Backup

When a backup is created manually, the backup point is the time of the current operation. Users can choose to back up the system disk, the data disk, or both, and can customize the backup name and description.

4.3 US3 Object Storage

Object storage (US3) is a service that provides unstructured data storage for internet applications. Compared with traditional hard disk storage, object storage has the advantages of unlimited storage, support for high concurrent access, and lower cost. The data persistency is >= 99.999999999%, and the service availability is >= 99.99%.

4.3.1 Storage Type

US3 provides three storage types: standard, low-frequency, and archive, used respectively for frequently accessed hot data, infrequently accessed backup data, and archived data suited to long-term storage. Together these three types cover data storage scenarios from hot to cold.

                       Standard                      Low Frequency                    Archive
Storage object         Hot data accessed frequently  Backup data kept long-term and accessed infrequently  Rarely accessed archived data stored long-term
Minimum storage time   None                          20 days                          60 days
Minimum billable size  None                          60KB                             60KB
Availability           99.99%                        99.0%                            99.0%
Access                 Real-time access              Real-time access                 Must be thawed (restored) before access; restoring from the frozen state to a readable state takes tens of seconds
Picture Processing     Supported                     Supported                        Supported, but requires thawing first
Usage Scenario         Social and sharing pictures, audio and video applications, large websites, big data analysis, mobile applications, game programs  Long-term backup of mobile-application, smart-device, government and enterprise business data  Compliance archives such as long-term archive data, medical images, scientific data, and video footage
Relative price         100%                          75%                              50%
Mutual conversion      Can convert to low-frequency or archive storage  Can convert to archive storage  -
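The minimum storage time and minimum billable size in the table translate into simple billing floors: an archive object deleted after 10 days is still billed for 60 days, and a 4 KB low-frequency object is billed as 60 KB. A minimal sketch of these rules (the field names are illustrative, not an actual US3 API):

```python
# Billing floors from the storage-type table: low-frequency and archive
# objects have minimum storage times (20 / 60 days) and a 60 KB minimum
# billable size. Names are illustrative only.

RULES = {
    "standard":      {"min_days": 0,  "min_kb": 0},
    "low_frequency": {"min_days": 20, "min_kb": 60},
    "archive":       {"min_days": 60, "min_kb": 60},
}

def billable(storage_class: str, stored_days: int, size_kb: int) -> dict:
    """Return the days and size the object would be billed for."""
    rule = RULES[storage_class]
    return {
        "days": max(stored_days, rule["min_days"]),
        "kb": max(size_kb, rule["min_kb"]),
    }

print(billable("archive", 10, 4))   # {'days': 60, 'kb': 60}
print(billable("standard", 10, 4))  # {'days': 10, 'kb': 4}
```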

4.3.2 Product Advantages

  1. Unlimited storage space

There is no upper limit on cloud storage space, so there is no need to plan storage expansion. A single file of up to 5TB is supported. This suits the storage of massive files in UGC applications such as audio, video and picture sharing.

  2. High concurrency

US3 supports high concurrent access, breaks through the traditional disk I/O limit, and meets the business needs of high access volume and high download volume. It is suitable for the storage of high-download and high-visit application files.

  3. Access acceleration

Combined with CDN distribution acceleration and 500+ acceleration nodes globally, US3 effectively reduces access latency and improves download speed.

  4. Security and high availability

Three copies of each stored file are kept, distributed across different storage clusters. Even if a single copy is damaged, the availability of the stored file is unaffected, ensuring file safety.

  5. Picture processing

Users can perform diversified processing of image files in the cloud storage space, such as thumbnails, rotation, cropping, watermarks and other common functions.

  6. Lower cost

The unit price of US3 is lower than that of cloud hard drives, and downloads are distributed by CDN, reducing storage and network costs. Billing is based on actual usage, no minimum fee, no waste of storage and bandwidth resources.

  7. Development support

US3 supports multiple access methods such as API/SDK, command line tools, and console, suitable for multiple languages, and seamlessly integrates with the original business, which can greatly shorten the development cycle and help the business launch quickly.

  8. Internal/public network access support

Users can access US3 over the internal network from cloud instances provided by SCloud, with stable, high-speed internal upload and download speeds.

4.3.3 Support Region

Region Single-Region Management
Hong Kong Support
Los Angeles Support
Singapore Support
Jakarta Support
Taipei Support
Lagos Support
Sao Paulo Support
Dubai Support
Frankfurt Support
Ho Chi Minh City Support
Washington Support
Mumbai Support

Cloud Distribution

5.1 UCDN Content Delivery Network

The UCDN (SCloud Content Delivery Network) service distributes users' content to nearly 500 service nodes deployed around the world and performs intelligent scheduling and caching, calculating the nearest access node for each user and providing customers with a better, faster network experience.

Value of UCDN:

  1. Solve nearby access for users.

The physical distance between a user and the service server can be long, requiring multiple network hops, so transmission latency is high and unstable.

  2. Solve inter-network interoperability.

When the user's operator differs from the service server's operator, requests must be interconnected and forwarded between operators.

  3. Reduce origin-site pressure.

The service server has limited network bandwidth and processing capacity; when many requests arrive, response speed and availability decrease.

 

5.1.1 Main Concept

Accelerated domain name: The domain name that needs to be accelerated and that users will access.
CDN domain name: After the accelerated domain name is approved, the system generates a CDN domain name; users need to point their CNAME record at this domain name. For the modification method, see "DNS configuration cname record".
Content refresh: Content refresh deletes a specified directory or file from the acceleration nodes so that user requests go back to the origin again. When you want users to see the latest information in time, use this manual cache refresh method.
Prefetch files: Prefetching simulates the user's first request on all acceleration nodes, caching specific content on CDN nodes in advance, which improves the user's first-download experience and reduces back-to-source traffic. (Prefetching distributes the file to all acceleration nodes; the total time to complete depends on the quality of the public network between the origin site and the nodes. A 100M file is estimated to complete within 30 minutes; some small operators and remote regions may occasionally be slower, please be patient.)

5.1.2 Acceleration Type

Features               Website acceleration    Large file download / video-on-demand acceleration
Domestic acceleration  Support                 Support
Global acceleration    Support                 N/A
Content refresh        Support                 Support
Prefetch files         Support                 Support
Cache configuration    Support                 N/A
Operation log          Support                 Support

The difference between website acceleration and large file acceleration or video-on-demand acceleration is mainly determined by the attributes of the files to be accelerated:

  1. If the resources to be accelerated are mainly static small files such as pictures and documents, website acceleration is preferred. Website acceleration refreshes and prefetches once a day by default; small files update quickly, so manual refreshing and prefetching would be inefficient and waste labor.
  2. If the resources to be accelerated are mainly large files and installation packages over 30M, large file acceleration is preferred.
  3. If the resources to be accelerated are mainly video, video-on-demand acceleration is preferred.

5.1.2.1 Website Acceleration

Website acceleration can distribute the static content of the client’s website, such as web pages, pictures, texts, etc., to server nodes around the world, thus significantly improving the response speed and usability of the website. In addition, due to security considerations, page acceleration also provides anti-leech function to protect the website content from being intercepted.

This acceleration type is applicable to portal websites, social networking websites, e-commerce websites, government websites, corporate portal websites, and news media websites.

5.1.2.2 Large File Download Acceleration

Large file download acceleration accelerates content that needs to be downloaded, such as video files, game installation packages, software and patches, providing customers with a download acceleration solution. Users only need to upload large files to the nearest server node, and the UCDN platform intelligently deploys and distributes the files to nodes across the country, which not only solves the problem of insufficient bandwidth but also gives your users a faster download experience.

Large file download acceleration is suitable for the game industry, software download sites, remote education, content providers, etc.

5.1.2.3 Video-on-Demand Acceleration

Video-on-demand acceleration is used to accelerate video files. The content of the origin site is first uploaded to the data center, then distributed to edge servers across the country through an internal private protocol and delivered to end users through streaming media technology, improving the user's viewing experience and alleviating slow access and playback pauses.

Video-on-demand acceleration is applicable to streaming media service websites such as online video websites and remote education.

5.1.3 Main Features

5.1.3.1 Real Time Monitoring

Real-time monitoring includes four parts: bandwidth, request count, hit rate and HTTP status code monitoring.

 

By default the system displays the monitoring data for the last day; the duration can also be selected as required. After the filter conditions are set, confirm them and the monitoring data is displayed.

5.1.3.1.1 Bandwidth Monitoring

On this page, the real-time total bandwidth line graph under the filter conditions will be displayed. It also counts the peak, valley, and total traffic of the total bandwidth of the selected domain name during the time period.

 

5.1.3.1.2 Request Count Monitoring

On this page, a line graph of the number of CDN requests and back-to-origin requests under the filter conditions will be displayed. It also counts the peak, valley and total number of requests for the selected domain name during the period.

 

5.1.3.1.3 Hit Rate Monitoring

On this page, a line chart of traffic hit rate and request hit rate under filtering conditions will be displayed. It also counts the average traffic hit rate and average request hit rate of the selected domain name during the period.
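The two hit-rate metrics monitored above are ratios of cached delivery to total delivery: traffic hit rate divides traffic served from cache by total traffic, and request hit rate does the same with request counts. A minimal sketch of the computation (the counter values are made up for illustration):

```python
# Hit rate = served-from-cache / (served-from-cache + back-to-origin).
# The same formula applies to traffic (bytes) and to request counts.

def hit_rate(served_from_cache: float, back_to_origin: float) -> float:
    total = served_from_cache + back_to_origin
    return served_from_cache / total if total else 0.0

traffic_rate = hit_rate(served_from_cache=930.0, back_to_origin=70.0)  # GB
request_rate = hit_rate(served_from_cache=98000, back_to_origin=2000)  # count
print(f"traffic hit rate: {traffic_rate:.1%}")
print(f"request hit rate: {request_rate:.1%}")
```

A high request hit rate with a lower traffic hit rate typically means small files hit the cache while large files go back to the origin.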

 

5.1.3.1.4 HTTP Status Code Monitoring

On this page, a line graph of HTTP status codes under the filter conditions will be displayed.

 

Note: The request number monitoring, hit rate monitoring and HTTP status code monitoring are controlled by the back-end authority bit. If necessary, please contact the account manager to apply for activation.

5.1.3.2 Statistical Analysis

The statistical analysis is divided into 6 modules: user access distribution, traffic analysis, number of requests analysis, hot URL analysis, hot Refer analysis, and hot IP analysis.

Note:

  1. The statistical analysis function temporarily supports only China accelerated domain names.
  2. The analysis result is obtained from CDN log analysis, and the data is delayed by about 1 hour.
  3. The [Statistical Analysis] function is controlled by back-end authority. If necessary, please contact the account manager to apply for activation.

5.1.3.2.1 User Access Distribution

Under this module, the system will display the corresponding access distribution list and distribution pie chart (in two dimensions: traffic and number of requests) according to the domain name and time period you have filtered.

 

 

The [User Access Distribution] module also supports switching among three display dimensions: province, operator, and province-operator.

5.1.3.2.2 Traffic Analysis

Under this module, the system will display a list of traffic information during the period according to the domain name and period information you have filtered and draw a pie chart of the ratio of hit traffic and return-to-source traffic based on the aggregated data.

 

5.1.3.2.3 Request Count Analysis

Under this module, the system will display a list of the number of requests in the period according to the domain name and period information you have filtered, and at the same time draw a pie chart of the ratio of hit requests and return requests based on the aggregated data.

 

5.1.3.2.4 Hot URL analysis

Under this module, the system will display a list of URLs with the number of requests and traffic TOP100 during that date based on the domain name and date you have filtered.

At the same time, you can download the list locally as an Excel file by clicking the [download] button at the top right of the list.

5.1.3.2.5 Hot Referer Analysis

Under this module, the system will display the TOP100 Referer list for that date according to the domain name and date you have filtered.

 

5.1.3.3 Content Management

5.1.3.3.1 Content Refresh

Content refresh deletes specified files from CDN nodes so that subsequent requests return to the origin for updated content; it takes effect in 5-10 minutes.

 

 

 

Directory to refresh

  1. Must start with http(s):// and fill in the complete directory path, such as http://static.scloud.sg/images/
  2. Please pay attention to uppercase and lowercase letters in the URL; the wrong case will cause the refresh to be invalid.
  3. If the directory has upper and lower levels, this function automatically pushes the lower subdirectories as well; there is no need to fill them in.

File to refresh

  1. Must start with http(s):// and fill in the complete path, such as http://static.scloud.sg/images/test.jpg
  2. Please pay attention to uppercase and lowercase letters in the URL; the wrong case will cause the refresh to be invalid.
  3. Submit up to 30 URLs at a time, one per line.

If both the directory and file fields are filled in, their contents are submitted at the same time. The records and status of content refresh can be viewed in the console.

Large file download and video-on-demand content refresh do not allow Chinese characters, and Chinese is not supported even after transcoding. Website acceleration content refresh allows Chinese input, but it will fail if the Chinese is transcoded.
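The submission rules above (http(s):// prefix, case significance, one URL per line, at most 30 entries) can be validated on the client side before submitting. This is a hedged sketch of such checks, not an SCloud tool:

```python
# Client-side checks for the refresh submission rules: URLs must start with
# http:// or https://, case matters, and at most 30 entries per submission.

MAX_URLS_PER_SUBMIT = 30

def validate_refresh_urls(text: str) -> list:
    """Split one-URL-per-line input and apply the documented rules."""
    urls = [line.strip() for line in text.splitlines() if line.strip()]
    if len(urls) > MAX_URLS_PER_SUBMIT:
        raise ValueError(f"at most {MAX_URLS_PER_SUBMIT} URLs per submission")
    for url in urls:
        if not url.startswith(("http://", "https://")):
            raise ValueError(f"must start with http(s)://: {url}")
    return urls  # note: no case normalization, since case matters on the CDN side

print(validate_refresh_urls("http://static.scloud.sg/images/test.jpg"))
```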

5.1.3.3.2 Content Prefetching

Prefetching files distributes files from the origin site to the secondary cache servers for acceleration.

 

 

Note:

  1. Fill in the full path of the file; it must start with http(s)://, such as http(s)://static.scloud.sg/packages/document.zip.
  2. One URL per line, separated by a line break. A maximum of 30 entries can be submitted at a time.
  3. When the files are large, prefetching may consume a lot of bandwidth. It is recommended to prefetch when the business is not affected.
  4. Prefetching for large file acceleration differs slightly from page acceleration.

For large file acceleration, prefetching actively fetches files from the origin site to the secondary servers for acceleration. Note that the prefetched files cannot be image, html, text or other file formats that web page acceleration can handle. If a prefetch remains displayed as being processed, the path may not be filled in correctly: fill in the full path of the file, which must start with http(s)://, such as http(s)://static.scloud.sg/packages/document.zip.

5.1.3.4 Domain Name Management

On this page, users can switch billing modes, create accelerated domain names, purchase traffic, manage accelerated domain name resources, and perform domain name configuration.

5.1.3.4.1 Billing Mode Switch

Click the button below the current billing mode to pop up the billing mode window. Check the billing method to be selected and confirm to switch.

 

5.1.3.4.2 Domain Name Configuration

Click on an accelerated domain name resource to enter the resource details page, including three modules: basic information, domain name configuration, and operation log.

 

5.1.3.4.3 Basic Information Page

The basic information page displays the accelerated domain name, acceleration area, and test URL of the current resource. You can bind resource-related monitoring alarms on this page; by default there is no binding, and the alarm template needs to be created on the resource monitoring side.

 

5.1.3.4.4 Domain Name Configuration

The domain name configuration page has 3 modules: back-to-source settings, access control, and cache configuration.

Back to source settings

Under this module, the user can modify the origin site and back-to-origin Host of the accelerated domain name. (When the origin site is an IP address, the back-to-origin Host defaults to the accelerated domain name and cannot be modified.)

Access control

Under this module, users can configure Referer anti-leech rules and IP blacklists as needed, implementing access control for accelerated domain names.

Cache configuration

Only web pages and small files support cache configuration.

If you have applied for a special cache configuration through technical support, it must be reset before the general cache configuration can be used.

Cache rule priority: top to bottom.

By default, dynamic files such as php, aspx, asp, jsp, do, dwr, cgi, fcgi, action, ashx, axd, json are not cached and have the highest priority.

Website Elements: Selecting website elements will quickly fill in the path template.

Path template: must start with /.

Cache time: You can freely choose the time unit, the upper limit is 30 days.

Path template description:

  1. /$ represents the homepage URL.
  2. /news/ (beginning with a slash) represents all files in the news directory.
  3. /(news|xiao)/ represents all files in the news and xiao directories.
  4. /* represents all files.
  5. /*.jpg represents all jpg files, including all jpg files in subdirectories such as /xx/.
  6. /*.(html) represents all files of html type in the root directory, and /*.(html|js) represents all files of html and js type in the root directory.
  7. /news/image.jpg represents the specific image.jpg file under the news directory, and /news/(image.jpg|logo.jpg) represents the image.jpg and logo.jpg files under the news directory.

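One way to read the path templates above is as simple patterns where * is a wildcard and (a|b) is alternation. The sketch below translates a template into a regular expression under that interpretation; it is illustrative only, not the CDN's actual matching implementation.

```python
# Illustrative interpretation of cache-rule path templates: '/$' matches only
# the homepage, '*' is a wildcard, and '(a|b)' is alternation.
import re

def template_to_regex(template: str) -> re.Pattern:
    if template == "/$":
        return re.compile(r"^/$")            # homepage only
    pattern = re.escape(template)
    pattern = pattern.replace(r"\*", ".*")   # restore '*' as a wildcard
    # restore alternation groups escaped above
    pattern = pattern.replace(r"\(", "(").replace(r"\)", ")").replace(r"\|", "|")
    return re.compile("^" + pattern)

assert template_to_regex("/news/").match("/news/a.html")
assert template_to_regex("/(news|xiao)/").match("/xiao/pic.jpg")
assert not template_to_regex("/news/").match("/blog/a.html")
```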
5.1.3.4.5 Operation Log

Users can use the operation log to view all related operation records.

 

5.1.3.4.6 Certificate Management

When you need to use https acceleration, you need to upload a certificate.

Go to the certificate management page and click Create Certificate.

Enter the certificate name, upload the authorized certificate (.crt file), authorized private key (.key file), and CA certificate (.crt file).

 

Note: After uploading the certificate, you also need to go to the domain name details page and select the corresponding certificate to enable https acceleration. To replace a certificate, upload the new certificate and bind it on the domain name details page. The review time for enabling https or replacing a certificate is estimated at 2 working days; please be patient.

5.1.3.5 Log Download

Go to the log download page to download related log information. Each day's logs can be downloaded the next day. The console can download logs for the last 7 days; logs older than 7 days can be pulled through the API.

5.1.4 Node Distribution

There are hundreds of distribution servers in each data center of the global nodes, connected to other infrastructure through 10GigE fiber, able to carry 3 to 5 Tbps of traffic and providing uninterrupted service 24 hours a day, 7 days a week.

 

  • United States: Atlanta, Chicago, Dallas, Washington, Los Angeles, San Francisco, Seattle, New York, San Jose, Denver, Houston, Ashburn
  • South America: Buenos Aires, Curitiba, Sao Paulo, Santiago, Rio de Janeiro
  • Amsterdam, The Netherlands
  • Brussels, Belgium
  • Frankfurt, Germany
  • London, England
  • Madrid, Spain
  • Paris, France
  • Sweden: Stockholm
  • Canada: Toronto, Vancouver, Montreal
  • Hong Kong, Macau and Taiwan
  • Japan: Tokyo, Osaka
  • Seoul, Korea
  • Philippines: Manila, Cebu, Quezon
  • Singapore: Singapore
  • Vietnam: Hanoi, Ho Chi Minh
  • Thailand: Bangkok, Nonthaburi
  • Cambodia
  • Saudi Arabia: Jeddah
  • India: Mumbai, Chennai, New Delhi, Kerala
  • Pakistan: Hyderabad
  • Australia and New Zealand: Sydney, Brisbane, Melbourne, Perth, Auckland, Wellington
  • Indonesia: Machen Port, Bataan Island, Denpasar, Jakarta

 

Big Data Analysis

6.1 UHadoop Hosted Hadoop Cluster

Hosted Hadoop cluster (UHadoop) is an integrated big data processing platform based on the Hadoop framework. It provides ready-to-use common components of the big data ecosystem such as Spark, HBase, Presto, and Hive. You can also choose auxiliary tools such as Hue, Sqoop, Oozie, and Pig.

In order to meet the demand for separation of storage and computing, UHadoop supports independently managed HDFS storage clusters, which can read and write data for multiple independent computing clusters.

6.1.1 Product Architecture

 

 

6.1.1.1 HDFS

HDFS is deployed in HA mode by default. Two NameNodes are deployed on master 1 and master 2 respectively. DataNodes are allocated on all Core nodes; no DataNode needs to be deployed on Task nodes.

 

6.1.1.2 Yarn

Yarn is deployed in HA mode by default. Two ResourceManagers are deployed on master 1 and master 2 respectively. NodeManagers are allocated on all Core and Task nodes.

 

 

6.1.1.3 Hive

Currently, Hive supports the On Yarn mode. Two Hive MetaStores are deployed on master 1 and master 2 respectively and connect to the local MySQL, avoiding Hive service failure caused by the downtime of a single master node.

You can connect to Hive services through HiveCli or Beeline.

 

6.1.1.4 HBase

HBase is deployed in HA mode by default. Two HMasters are deployed on master 1 and master 2 respectively. HRegionServers are distributed on all Core nodes.

 

6.1.1.5 Spark

Spark adopts On Yarn mode.

6.1.2 Features

  1. Convenient

You can create a cluster in a few minutes without worrying about node allocation, deployment and optimization; with the help of rich examples and scenario tutorials, you can quickly get started to achieve business goals.

  2. Easy to use
  • According to the selected hardware (CPU, memory, disk) and software combination and version, deployment is carried out automatically;
  • Users can choose where to deploy their clusters according to their own geographic location or that of the data source. The currently supported region is Hong Kong; support will expand to all regions in the near future.
  3. Elastic

The cluster supports dynamic scaling, which effectively avoids resource waste, as well as the separation of computing and storage.

  4. Open source compatible

It is fully compatible with the open source community version of Hadoop/Spark. Customers can use open source standard APIs to write jobs and migrate to the cloud without any modification.

  5. Secure

The user cluster is located in an exclusive virtual private network to achieve complete resource isolation.

  6. Stable

Key components such as Hadoop/Spark/HBase in the cluster all support high-availability features to ensure service availability.

6.1.3 Advantages

  1. Low cost, the same price as self-built Hadoop on cloud hosts

Currently, the cluster price of UHadoop is the same as that of self-built Hadoop clusters, and it is expected to reduce user costs in more availability zones.

  2. Exclusive physical server nodes with lower cost and better performance

In addition to virtual server nodes, physical server nodes are provided, meeting PB-level data storage scenarios with high read-write performance.

  3. Shared HDFS storage, balancing read-write performance and flexibility

Multiple computing clusters can access the data of the same HDFS storage cluster. This mode can reduce the resource cost of large clusters, and the services of computing clusters can run more stably.

  4. Agents monitor the cluster and perform fault recovery

No need to worry about node disk failures or service availability: agents in the cluster monitor services and automatically recover from failures.

  1. Adjust service components at any time to improve the efficiency of cluster service managementby console

The console provides functions for turning on, turning off, and modifying the configuration of cluster service components, such as Spark, Hive, HBase, ResourceManager, etc., without logging in to multiple nodes.

  6. View application status on YARN by console to accelerate application development

Check the status, logs, and other information of applications and their subtasks submitted in the last 15 days from the console to trace the business situation.

  7. Flexible billing methods

Provide a pre-paid model with annual and monthly subscriptions and a flexible pay-as-you-go model.

 

6.2 ElasticSearch

UES (SCloud Elasticsearch) is a log management and analysis service based on Elasticsearch and Kibana. Clusters can be deployed quickly: a new cluster is automatically initialized with an appropriate configuration and a rich set of plug-ins. The security plug-in provides account and role authority management, giving users fast creation, easy management, and linear expansion. UES also provides an indicator monitoring and visual management platform, and uses high-performance SSD disks to improve the efficiency of storing, retrieving, and analyzing massive log data.

6.2.1 Advantages

  1. Role authority management

The security plug-ins support multi-tenant management and index-, document-, and field-level role authority control.

  2. Massive data storage, retrieval and analysis

Perform real-time processing of massive data, full-text search, structured search, and data analysis.

  3. Quick deployment, easy to use

Minute-level deployment and initialization of cluster, and support for operations such as linear expansion of the cluster.

  4. Superior performance, stable and reliable

The underlying nodes use SSD disks and support data backup, data migration, and failure recovery. Running authority verification and the visualization platform inside the private network guarantees the security of cluster data.

  5. Performance monitoring with visual management

Support Kibana, a performance indicator monitoring and visual management platform.

  6. Rich node configuration, flexible billing

Support annual, monthly, and hourly billing. By selecting the appropriate node configuration and billing cycle according to business needs, costs can be controlled effectively.

6.2.2 Scenarios

  1. Log management analysis

Analyze unstructured and semi-structured logs such as URLs, mobile devices, and servers for analysis scenarios such as error troubleshooting, application monitoring, and fraud detection.

  2. Global search

Search and navigation services for e-commerce, O2O, traditional enterprise and other industries.

  3. Document storage

Support for storage and analysis of JSON documents.

  4. Data storage (database function)

Use ES as the data store to keep the design of a new project as simple as possible; note that frequent updates and transactional operations are not supported.

  5. Traditional database data analysis

Adding ES as a search service to the existing complex system in operation is a relatively safe way.

6.2.3 Kibana

Kibana is an open source analysis and visualization platform designed for use with Elasticsearch. You can use Kibana to search, view, and interact with the data stored in Elasticsearch indices. Kibana can easily present advanced data analysis and visualizations by using a variety of charts, tables, maps, etc., making it easy to understand large amounts of data.
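Because UES exposes the standard Elasticsearch query DSL, a log search can be expressed as an ordinary JSON body. The sketch below combines a full-text match with structured filters; the index and field names (`app-logs-*`, `message`, `level`) are illustrative assumptions, not UES specifics.

```python
def build_log_query(keyword, level, since):
    # Standard Elasticsearch bool query: a full-text "must" clause plus
    # exact-value and time-range filters, sorted newest-first.
    return {
        "query": {
            "bool": {
                "must": [{"match": {"message": keyword}}],
                "filter": [
                    {"term": {"level": level}},
                    {"range": {"@timestamp": {"gte": since}}},
                ],
            }
        },
        "sort": [{"@timestamp": "desc"}],
    }

body = build_log_query("timeout", "error", "now-1h")
# With the official client this body would be sent as, e.g.:
#   es.search(index="app-logs-*", body=body)
```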

6.3 Kafka Message Queue

UKafka is a distributed messaging product specializing in processing streaming data. By creating a cluster, the deployment of Kafka and related services can be realized quickly, providing users with a stream-data processing system that is quick to create, easy to manage, and elastic to scale.

6.3.1 Advantages

  1. Rapid deployment

Create in minutes, automated node allocation, service deployment and performance optimization.

  2. Easy to manage

Combined deployment of hardware and software, no need to worry about the maintenance.

  3. High performance

Inherit the performance advantages of the cloud host and optimize the configuration to improve the overall performance of the service, which is much better than self-built clusters.

  4. Elastic scaling

According to performance and capacity needs, the entire cluster can be adjusted by changing node configurations; dynamic scaling is supported, effectively avoiding resource waste.

  5. Safe and stable

The cluster mode is deployed on multiple independent physical servers to ensure no resource preemption and high service availability.

  6. Open source compatible

Fully compatible with the open source community version of Kafka, users can directly use standard APIs for development tasks without changing the code.

6.3.2 Architecture

The UKafka service mainly includes the following components:

  1. Topic
  2. Producer
  3. Consumer
  4. Broker

Its structure is shown in the figure:

 

Noun Definition Description
Topic Message subject A specific message flow or message queue. A message is the effective payload, and the topic is the category or source of messages
Producer Message producer Any service or system that can publish any message
Consumer Message consumer The message receiver or processor who subscribes to a single or multiple messages and obtains the published message data from the Broker
Broker Message node A group of servers that save messages as service nodes in the UKafka cluster
Partition Message partition A topic partition, which distributes a message topic among multiple Brokers to achieve service distribution and high availability
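The last row of the table is the key to Kafka's scalability: a topic is spread over partitions, and keyed messages always hash to the same partition, preserving per-key ordering. The sketch below illustrates the idea only; real Kafka clients use murmur2 hashing, and CRC32 is substituted here purely to keep the example self-contained.

```python
import zlib

def partition_for(key: bytes, num_partitions: int) -> int:
    # Keyed partitioning sketch: equal keys always map to the same
    # partition (real clients use murmur2; CRC32 is a stand-in).
    return zlib.crc32(key) % num_partitions

# All messages for one user land on one partition of a 3-partition topic,
# so their relative order is preserved on that partition.
assert partition_for(b"user-42", 3) == partition_for(b"user-42", 3)
```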

6.3.3 Scenarios

As a distributed message processing system, UKafka is mainly responsible for processing all action streaming data related to message publishing and subscription in a high-throughput environment.

  1. Basic monitoring

Basic resources act as message publishers, sending monitoring indicators related to systems and applications; these data are then collected and processed through a self-built system to monitor the basic resources.

  2. Message push

As a message and signaling delivery system between applications, it provides unified message publishing and subscription management to realise search management or content management.

  3. Real-time data analysis

UKafka can track users’ website behaviors and analyze them in real time to improve user experience through synchronized data analysis. It can also generate data and store it in a UHadoop cluster for asynchronous data processing.

  4. Log collection

It can be used as a log system for distributed applications or platforms to provide core data for analytical databases with unified collection and processing of service logs.

 

Security

7.1 WAF Web Application Firewall

SCloud Web Application Firewall is used to monitor and block common attacks on websites. It supports the discovery of web attacks such as SQL injection and XSS cross-site scripting, reduces the risk of downtime, tampering, and data theft, and hides the origin site to prevent direct attacks on it.

7.1.1 Product Advantages

  1. 7*24 hours expert service

As long as you purchase any version of UEWAF, you can enjoy free 7*24-hour expert services. For some complex attacks, when machines or algorithms cannot make accurate judgments and perform blocking, SCloud’s security experts can conduct customized defenses based on the hackers’ attack methods.

  2. Low latency response

Bandwidth quality differs among application firewalls. UEWAF is based on BGP line access with stable quality and millisecond-level response delay.

  3. Intelligent behavior detection model

The proliferation of malicious web robots is one of the most important threats to application-layer network security, appearing in scenarios such as CC attacks, malicious crawlers, malicious reviews, interface abuse, and fraud. The UEWAF behavior analysis engine is based on machine learning and big-data analysis technology, and can accurately identify malicious robot behaviors such as CC attacks, malicious crawlers, machine scans, and malicious scraping of interface data.

  4. Rule self-improving system

Mainstream WAFs are still based on feature matching, but in the face of increasingly complex and diverse attacks, rule systems based on feature matching often cannot defend well. UEWAF is based on machine learning; its intelligent detection engine, with excellent generalization and automatic learning capabilities, can be organically integrated with the rule system to provide a more solid guarantee for the safety of customer websites.

  5. Automated elastic expansion capability

In the face of business uncertainty, UEWAF’s automated elastic expansion can draw on SCloud’s abundant public cloud resources and quickly expand its services when encountering CC attacks or business emergencies, so there are no performance bottlenecks.

  6. Strong collaborative defense capabilities

SCloud combines its powerful cloud intelligence-collection capabilities with intelligence from other vendors, and uses the intelligence database on UEWAF to filter out massive amounts of malicious access.

  7. KEY-LESS solution

In the traditional WAF scheme, HTTPS decryption requires the client to provide the certificate and private key, but many clients cannot provide the private key due to internal restrictions. In contrast, UEWAF’s KEY-LESS solution allows customers to retain control of the private key while still being protected.

  8. Customized rules that fit the business

Protection rules can be customized according to users’ own business situation to intercept malicious traffic or release legitimate requests accurately.

7.1.2 Main Function

SCloud Enterprise Web Application Firewall (UEWAF) is used to protect against internal and external security threats to website applications. It is compatible with the High Defense Service (UADS) to protect against common CC and SQL injection attacks on the network. Web application attack protection provides comprehensive defense against the following types of attacks: SQL injection, XSS cross-site scripting, WebShell upload, command injection, illegal HTTP protocol requests, common web server vulnerability attacks, unauthorized access to core files, path traversal, etc., with functions such as backdoor isolation protection and scanning protection.

  1. Malicious CC attack protection

Control the access frequency of a single source IP, with support for redirection-jump verification, human-machine identification, etc. Against massive slow-request attacks, comprehensive protection is performed based on the statistical distribution of response codes and URL requests and the identification of abnormal Referer and User-Agent features, combined with precise website access control.
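The per-source-IP frequency control described above can be pictured as a sliding-window counter. The sketch below is our own illustration of the technique, not UEWAF's actual implementation; thresholds and window lengths would come from the configured CC rules.

```python
from collections import defaultdict, deque

class SlidingWindowLimiter:
    """Allow at most `max_requests` requests per source IP within
    `window` seconds; excess requests become candidates for blocking
    or CAPTCHA verification."""
    def __init__(self, max_requests, window):
        self.max_requests = max_requests
        self.window = window
        self.hits = defaultdict(deque)

    def allow(self, ip, now):
        q = self.hits[ip]
        while q and now - q[0] >= self.window:  # drop expired timestamps
            q.popleft()
        if len(q) >= self.max_requests:
            return False
        q.append(now)
        return True

limiter = SlidingWindowLimiter(max_requests=3, window=10.0)
results = [limiter.allow("1.2.3.4", t) for t in (0.0, 1.0, 2.0, 3.0)]
print(results)  # [True, True, True, False]
```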

  2. Accurate access control

Provide a friendly configuration console interface, supporting condition combinations of common HTTP fields such as IP, URL, Referer, User-Agent, etc., to create a powerful and precise access control strategy to support protection scenarios such as hotlink protection and website background protection. With security modules such as Web common attack protection and CC protection, it creates a multi-layer comprehensive protection mechanism to easily identify trusted and malicious traffic based on requirements.

  3. Rich report analysis

Provide accurate attack details and business analysis reports to understand the website status.

  4. Black and white list

Blacklist: block the specified IP or IP segment. Whitelist: allow the specified IP or IP segment.
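In code, such list matching is a containment test against IP networks. The sketch below assumes whitelist entries take precedence over blacklist entries (a common but here merely assumed policy); the example networks are documentation addresses, not real rules.

```python
import ipaddress

WHITELIST = [ipaddress.ip_network("10.0.0.0/8")]      # always allowed
BLACKLIST = [ipaddress.ip_network("203.0.113.0/24")]  # always blocked

def decide(ip):
    addr = ipaddress.ip_address(ip)
    if any(addr in net for net in WHITELIST):
        return "allow"    # whitelist wins (assumed precedence)
    if any(addr in net for net in BLACKLIST):
        return "block"    # blacklisted IP or segment
    return "inspect"      # fall through to the normal WAF rule chain

print(decide("203.0.113.9"))  # block
```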

  5. Alarm management

Send the alarm summary of WAF related domain names to the user’s mailbox to remind the risks regularly.

Send real-time alarms of WAF-related domain names that trigger a large number of rules in a short period of time to the user’s mailbox to remind the risks.

Send real-time alarms of WAF-related domain name source stations that respond abnormally to the user’s mailbox to remind the risks.

  6. Machine behavior detection

Identify machine behaviors such as normal crawlers, malicious crawlers, CC attacks, etc. Customers can refer to the results of machine behavior detection to intercept the sources of malicious crawlers or CC attacks.

  7. Online log query and download

Users can view their access logs online on the console and download the corresponding attack logs.

  8. Website incognito

Hide the site address to prevent direct attacks to the origin site.

  9. 0day patches updated

Update patches for the latest vulnerabilities within 24 hours and update protection rules in time.

  10. Observation mode with unblocked alarm

Provide a non-blocking observation mode that only raises alarms. Users can use this mode to observe the WAF’s false positives and interceptions first.

  11. Customised rules

Define protection rules according to your own business situation to intercept malicious traffic or release legitimate requests accurately.

7.1.3 Product Type

7.1.3.1 Function Comparison

Parameter Description Advanced Version Enterprise Version Flagship Version Customized Version
HTTP Support HTTP(80) port Yes Yes Yes Yes
HTTPS Support HTTPS(443) port Yes Yes Yes Yes
Outside SCloud IDC The website is deployed outside SCloud to access WAF Yes Yes Yes Yes
Basic protection Common web attacks such as SQL injection / command execution Yes Yes Yes Yes
0day patches updated Protect against the latest vulnerabilities Yes Yes Yes Yes
Intelligent CC attack protection Adjusted CC attack protection strategy Yes Yes Yes Yes
Wildcard domain name Add primary domain name No Yes Yes Yes
Keyless Allow customers to retain private key control without uploading it No Yes Yes Yes
Prevent webpage tampering Prevent hackers from tampering with webpages No Yes Yes Yes
Machine behavior detection Intercept the sources of malicious behavior No Yes Yes Yes
Log query and download Online log query and download No Yes Yes Yes

 

7.1.3.2 Performance Comparison

Function Advanced Version Enterprise Version Flagship Version Customized Version
Bandwidth (outside/inside cloud) 20/60Mbps 40/120Mbps 60/200Mbps 100/300Mbps
Domain names (primary and secondary domain names) 10 20 50 70
Exclusive IP points 0 3 5 10
Regions 1 1 3 All regions
CC Protection 10000 20000 50000 100000
QPS 1000 3000 5000 10000
System rules 10 20 40 50
CC rules 5 10 20 30
Malicious IP blocking 5 5 5 5
Regional IP blocking 10 10 10 10
Information security protection 10 10 10 10
Prevent webpage tampering No Yes Yes Yes
Advanced Function No Wildcard domain name/ Keyless/ Prevent webpage tampering/ Machine behavior detection Wildcard domain name/ Keyless/ Prevent webpage tampering/ Machine behavior detection Wildcard domain name/ Keyless/ Prevent webpage tampering/ Machine behavior detection
Customization requirements No No Yes Yes

Parameter Description

  • Bandwidth (outside/inside cloud): if the origin site is an SCloud host deployed in the same region as the WAF, the in-cloud bandwidth applies; otherwise the out-of-cloud bandwidth applies to the public-network back-to-source traffic. If user business traffic exceeds the version limit by 50%, there may be risks of increased request delay and broken business links.
  • Domain names: the maximum number of domain names that can be added in the current version of WAF, without distinguishing between level-1 and level-2 domain names. It can be increased with an expansion pack.
  • Exclusive IP points: the exclusive IP points provided by the current version. They can be independently bound to specified domain names, which are then assigned an exclusive EIP instead of the EIP shared by other domain names. It can be increased with an expansion pack.
  • Region: the main working region of the WAF configuration.
  • CC protection: the maximum number of links in the current version. If this limit is exceeded, request delay may increase and business links may be disconnected.
  • QPS: the maximum number of requests per second in the current version. If this limit is exceeded, request delay may increase and business links may be disconnected.
  • System rules: WAF users can customize rules according to the requested content and method.
  • CC rules: set blocking or pop-up verification-code rules according to the request frequency of the source IP.
  • Malicious IP blocking: users can set blacklist blocking rules themselves according to the type of attack.
  • Regional IP blocking: block or release requests according to the geographic attribution of the source IP.
  • Information security protection: users can set their own rules to replace the response content or interrupt the link according to the response code of the origin site.
  • Prevent webpage tampering: users can add html/htm static pages themselves for corresponding safety protection.
  • Log query and download: provides queries over the latest 10,000 logs within 24 hours, and download of request logs and attack logs within 7 days. For longer storage, please use a log storage service.
  • Customization requirements: if some functions cannot be adjusted in the console, please contact technical staff to handle them.

 

7.2 Host Intrusion Detection

SCloud Host-based Intrusion Detection System (UHIDS) is used to detect whether there are security vulnerabilities on the host, whether hacking has occurred, whether a hacker is brute-forcing the server account and password, whether such logins succeeded, and whether a backdoor Trojan has been installed. By installing the Agent developed by the SCloud security team, it actively monitors security risks and hacker intrusions on the server and raises alerts. It supports cross-platform and multi-region unified management.

7.2.1 Basic Concept

Security Risk Security vulnerabilities in the monitored server’s system and applications are easily exploited by hackers, resulting in security risks.
Malicious Trojan Horse If a Trojan horse is installed after a hacker invades the server, the hacker can control the server through the Trojan program, arbitrarily destroy and steal the victim’s files, and even remotely control the victim’s host.
Remote login If someone logs in to the server from a location the server owner does not use, it is necessary to consider whether the password has been cracked by a hacker.
Brute force successful By exhaustively matching accounts and passwords, the hacker has successfully logged in to the server.
Configuration defect Configuration defects refer to unreasonable server configuration by the administrator, resulting in security risks. It is recommended that the administrator modify the server configuration to enhance server security.
Agent The Agent is a plug-in: a monitoring program installed on the server.

7.2.2 Product Advantages

  1. Unified security management

Support cross-platform management of servers, including SCloud virtual or physical machines and hosts outside the platform, such as those from Aliyun and Tencent Cloud. Regardless of deployment environment and location, everything can be viewed and operated in a unified web console.

  2. Easy and flexible deployment

Only a piece of code needs to be copied to install the Agent. Deployment is flexible, with no mandatory installation, across platforms and regions, as long as the network is connected within the country.

  3. Real-time monitoring of risks

Real-time monitoring of the security risks that occur on the server, such as whether the hacker is brute-forcing the login name and password of the server, whether the brute-force cracking is successful, whether the backdoor software is installed on the server, etc.

  4. Minimal resource occupation

The Agent plug-in occupies only about 1% of CPU and 10 MB of memory.

The Agent is only responsible for information monitoring, collection, and reporting. Analysis is performed in the cloud protection center to minimize system resource usage.

 

  5. Independent research and development

As a neutral third-party public cloud service provider, SCloud performs no operations other than detecting hacker attacks. The UHIDS Agent is independently developed by SCloud and is used only as a software program to detect hacker attacks.

7.2.3 Main Function

  1. Intrusion detection

SSH remote login

UHIDS collects user’s frequently used SSH login source addresses, and if SSH logins from unusual sources are found, it will alert the user.

SSH brute force cracking

UHIDS continuously analyzes SSH login logs, detects successful brute-force cracking behaviors, and alerts users to notify them.
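The detection described above can be thought of as windowed counting over the login log. The following is a simplified sketch of the general technique, not UHIDS's actual algorithm; the threshold and window are arbitrary illustrative values.

```python
from collections import defaultdict

def detect_bruteforce(events, threshold=5, window=60):
    # `events` is an iterable of (timestamp, source_ip, login_ok) tuples
    # parsed from SSH logs. An IP reaching `threshold` failures within
    # `window` seconds is flagged; a later success from a flagged IP is
    # reported as a successful brute-force login.
    failures = defaultdict(list)
    flagged, alerts = set(), []
    for ts, ip, ok in events:
        if not ok:
            failures[ip] = [t for t in failures[ip] if ts - t < window]
            failures[ip].append(ts)
            if len(failures[ip]) >= threshold and ip not in flagged:
                flagged.add(ip)
                alerts.append(("bruteforce-attempt", ip))
        elif ip in flagged:
            alerts.append(("bruteforce-success", ip))
    return alerts

events = [(i, "198.51.100.7", False) for i in range(5)] + [(6, "198.51.100.7", True)]
print(detect_bruteforce(events))
# [('bruteforce-attempt', '198.51.100.7'), ('bruteforce-success', '198.51.100.7')]
```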

Backdoor Trojan

UHIDS detects network characteristics such as the network connection of the process. If a backdoor Trojan horse is found, it will alert the user.

Abnormal process

UHIDS inspects process attributes such as the startup directory and the executed program. If it finds a process suspected of being a Trojan horse, it alerts the user.

  2. Vulnerability detection

System vulnerability detection

UHIDS collects version and configuration information such as the kernel version and dynamic libraries, then compares it with the historical vulnerability library. If a vulnerable version is found, it alerts the user.

Third-party software vulnerability detection

UHIDS collects version information of third-party software such as Nginx, sshd, and MySQL, then compares it with historical third-party software vulnerability libraries. If a vulnerable version is found, it alerts the user.

  3. Baseline check

Weak password verification

UHIDS performs regular weak-password detection on system accounts, MySQL accounts, etc., and alerts the user if a weak password is found.
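A weak-password check typically combines a dictionary of common passwords with simple structural rules. The list and rules below are illustrative assumptions; UHIDS's actual detection dictionary and criteria are not public.

```python
# Illustrative dictionary; a real scanner ships a much larger one.
COMMON_PASSWORDS = {"123456", "password", "admin", "root", "qwerty"}

def is_weak(password, username=""):
    if len(password) < 8:
        return True
    if password.lower() in COMMON_PASSWORDS:
        return True
    if username and username.lower() in password.lower():
        return True  # password contains the account name
    # require at least two character classes (letters, digits, symbols)
    classes = sum([any(c.isalpha() for c in password),
                   any(c.isdigit() for c in password),
                   any(not c.isalnum() for c in password)])
    return classes < 2

print(is_weak("admin2024", "admin"))  # True
print(is_weak("Tr0ub4dor&3"))         # False
```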

Application layer configuration verification

UHIDS has a built-in security baseline library and updates it regularly. It reads and analyzes the configuration of application-layer software (for example PHP, MongoDB, Redis, MySQL, Nginx, httpd, etc.) to determine whether the configuration items meet the security-baseline requirements, and sends alarms to users when they do not.
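Such a baseline check reduces to comparing parsed configuration items against expected values. The entries below are illustrative (the sshd and Redis items are well-known hardening checks), not SCloud's actual baseline library.

```python
BASELINE = {
    "redis": {"requirepass": lambda v: bool(v)},        # auth must be set
    "sshd":  {"PermitRootLogin": lambda v: v == "no"},  # forbid root SSH
}

def check_baseline(software, config):
    # Return the configuration items that violate the baseline.
    return [key for key, ok in BASELINE.get(software, {}).items()
            if not ok(config.get(key))]

print(check_baseline("sshd", {"PermitRootLogin": "yes"}))  # ['PermitRootLogin']
print(check_baseline("redis", {"requirepass": "s3cret"}))  # []
```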

  4. Alarm management

UHIDS provides an alarm management function to help users learn the security status of the cloud host promptly, and provides a whitelist mechanism for user customization.

Login IP whitelist

UHIDS supports users to set a whitelist for login IP addresses.

Login whitelist

UHIDS supports users to set a whitelist of login cities.

Alarm settings

UHIDS supports email, SMS, and other alert methods, so that users can promptly detect and deal with risks or threats to the cloud host, reducing the security risks it faces.

7.2.4 Architecture

UHIDS consists of two parts: the UHIDS-Server side and the UHIDS-Agent side. The Agent is installed on the cloud host and coordinates with the cloud-side UHIDS-Server on rule and log events to monitor the security of the cloud host in real time and ensure its safety.

 

 

The server types supported by UHIDS include: CentOS, Ubuntu, Debian, RedHat, Open Suse, Gentoo

The Agent encrypts and transmits the relevant server data to the interface server, which then routes the risk data and intrusion data to different clusters for analysis; the analysis results are stored in databases. Data from all databases is consolidated into a portal database for the console page to query. If the user has configured alarm reminders, alarm data is sent to the user by email or SMS in real time.