1 Product introduction

1.1 What is UHost

SCloud UHost provides computing services that can be scaled at any time. Each UHost runs as a virtual machine and bundles the most basic computing resources, such as CPU, memory, and disk.

UHost is the core service of SCloud. Some services, such as elastic IPs, images, and cloud disks, must be used in combination with UHost. Others, such as databases, cache, and object storage, can be combined with UHost to build a complete IT architecture.

 

1.2 Product advantages

1.2.1 Flexibility and elasticity

As your business grows or shrinks, you can scale cloud resources horizontally and vertically at any time to avoid waste. Create or release cloud hosts in minutes, upgrade or downgrade host CPU and memory within 5 minutes, adjust public network bandwidth online, and easily replicate host data and environments with the custom image function. Open APIs are also provided for batch and automated management.
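
Batch and automated management is typically driven through these open APIs. The sketch below is illustrative only: the endpoint, action name, and field names are assumptions, not SCloud's real schema, and should be checked against the official API reference.

```python
import json
import urllib.request

# Hypothetical endpoint for illustration only.
API_ENDPOINT = "https://api.scloud.example/"

def create_uhost(public_key, region, cpu, memory_mb, count=1):
    """Build a batch host-creation request (assumed field names)."""
    params = {
        "Action": "CreateUHostInstance",  # assumed action name
        "PublicKey": public_key,
        "Region": region,
        "CPU": cpu,
        "Memory": memory_mb,
        "MaxCount": count,                # batch creation in one call
    }
    return urllib.request.Request(
        API_ENDPOINT,
        data=json.dumps(params).encode(),
        headers={"Content-Type": "application/json"},
    )

# Prepare (but do not send) a request for ten 4-core/8G hosts.
req = create_uhost("my-public-key", "hk", cpu=4, memory_mb=8192, count=10)
print(req.get_method(), json.loads(req.data)["MaxCount"])
```

The request is only assembled here, never sent; in practice the same call shape can be looped or parameterized for automation.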

1.2.2 Stability and reliability

SCloud commits to 99.95% service availability and data reliability of no less than 99.9999%. The local disks of cloud hosts use RAID to protect against data loss; industry-leading kernel optimization and hot-patch technology, non-stop online migration, data snapshots, and other functions are also provided. In case of failure, compensation of one hundred times the loss is offered.

1.2.3 High performance

Host CPU and memory performance is industry-leading, and our storage technology increases the disk's random read/write I/O capacity tenfold over an ordinary SAS disk; SSD-disk cloud hosts are also available for ultra-high IOPS. Excellent network processing capability meets a wide range of business application requirements.

1.2.4 Security

100% network isolation between users ensures data security. A network firewall function strictly controls public network access. With the VPC function, private subnets can be established under a single account to support your internal security management needs. A wealth of monitoring and security tools is also provided.

1.2.5 Data center

SCloud's international data centers, relying on high-quality hardware and infrastructure, provide users with high-quality BGP and international bandwidth resources. According to business needs, you can select the data center that best matches the target user area to be covered.

 

1.3 Function introduction

  • UHosts can be created in all SCloud availability zones;
  • A variety of host models, including standard, SSD high-performance, big data, and GPU models, meet different needs;
  • Flexible host specifications: CPU and memory sizes can be combined freely, from 1 core 1G to 16 cores 64G;
  • 3 types of disks: local standard disk, local SSD disk, and cloud disk (UDisk), with different performance, prices, availability, and capacity limits, can be freely combined;
  • Common Linux and Windows images are supported, as well as custom images;
  • A public network firewall provides network security access control;
  • 2 network modes are supported: basic network and private network;
  • Multiple monitoring metrics comprehensively track the health of the host;
  • High-availability clusters can be built with the load balancing product (ULB);
  • The DataArk function can be activated, allowing the host to be reverted to any second within the last 12 hours;
  • Some hosts support Hot Scale-Up, so there is no need to power off when upgrading CPU/memory;
  • Multiple billing modes: monthly, annual, and hourly prepayment.

 

1.4 Host

1.4.1 Regions and availability zones

An availability zone is a set of resources that are physically and electrically isolated from one another. An availability zone may comprise part of a data center or several data centers. With proper design, the impact of a failure is contained within a single availability zone.

Several availability zones that are geographically close to each other form a region through high-speed, stable network connections. The internal networks of availability zones in the same region can communicate with each other.

The network delay in the same zone is within 0.1ms, and the delay between different zones in the same region is about 0.3ms.

1.4.2 Instance type and CPU platform

1.4.2.1 Instance type

SCloud UHosts are divided into 8 types according to application scenario: Outstanding S (OS), Outstanding PRO (OPRO), Outstanding MAX (OMAX), Outstanding Lite (OLite), Outstanding (O), Standard (N), High Frequency (C), and GPU (G).

Classification | Generation | CPU frequency | Internal network bandwidth | Features | Applicable scenarios
Outstanding S (OS) | Outstanding | Intel CascadelakeR: 3.0GHz-4.0GHz | 25GbE | The Outstanding CPU upgraded again: 3.0GHz-4.0GHz main frequency with an excellent cost-performance ratio | Comprehensive scenarios
Outstanding PRO (OPRO) | Outstanding | AMD EPYC2: 3.5GHz-3.9GHz | 25GbE | 3.5GHz top single-core computing power | High-frequency trading, high-performance databases, EDA, etc.
Outstanding MAX (OMAX) | Outstanding | AMD EPYC2: 2.6GHz-3.3GHz | 100GbE | Up to 240 cores and 100G network | High-performance computing, data analysis, in-memory services, etc.
Outstanding Lite (OLite) | Outstanding | Ampere Altra: 2.8GHz-3.0GHz | 25GbE | Cost-effective cloud host based on the Arm v8.2+ architecture | Internet components, Android development, etc.
Outstanding (O) | Outstanding | Intel Cascadelake: 2.5GHz-3.9GHz; AMD EPYC2: 2.9GHz-3.4GHz | 25GbE | Excellent computing, storage, and network performance with an excellent cost-performance ratio | Comprehensive scenarios
Standard (N) | Previous generation | Intel Skylake: 2.6GHz-3.7GHz; Intel Broadwell: 2.2GHz-2.9GHz | 10GbE | Free and flexible configuration, rich choices | Enterprise applications, in-memory services, data analysis, etc.
High Frequency (C) | Previous generation | Intel Skylake: 3.2GHz-4.2GHz | 10GbE | 3.2GHz CPU with the highest single-core computing performance | High-frequency trading, data processing, EDA, etc.
GPU (G) | Previous generation | Intel Skylake: 2.6GHz-3.7GHz; Intel Broadwell: 2.2GHz-2.9GHz | 10GbE | Equipped with NVIDIA Tesla K80, P40, V100 or T4 GPUs | Artificial intelligence, scientific computing, graphics rendering, etc.

 

  • Outstanding series: the SCloud UHost flagship series; supports NetworkEnhanced 2.0 and RSSD cloud disks, so the maximum network performance reaches 10,000,000 PPS and the maximum storage performance reaches 1,200,000 IOPS.
  • Previous generation series: the older SCloud UHost series; supports local disks, standard cloud disks, and SSD cloud disks. In availability zones where the Outstanding series has launched, the previous generation series will no longer be continuously supplied.

 

Outstanding S (OS)

1) Introduction

  • The Outstanding CPU is upgraded again to the Intel Cascadelake 6248R (3.0GHz-4.0GHz), increasing single-core performance by 15%.
  • Computing performance is better while the price matches the Outstanding (Intel version), maintaining an excellent price-performance ratio.
  • The maximum network performance reaches 10,000,000 PPS, and the maximum storage performance reaches 1,200,000 IOPS.
  • It performs well in most scenarios, such as web services, game services, databases, and data analysis and processing.

2) CPU platform support: Intel CascadelakeR

3) CPU memory combination (support ratio 1:1-1:8):

CPU RAM
1 Core 1G, 2G, 4G, 8G
2 Core 2G, 4G, 8G, 16G
4 Core 4G, 8G, 16G, 32G
8 Core 8G, 16G, 32G, 64G
16 Core 16G, 32G, 64G, 128G
32 Core 32G, 64G, 128G, 256G
64 Core 64G, 128G, 256G, 512G
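
The combinations in the table above follow the stated 1:1 to 1:8 rule. A minimal sketch, assuming RAM options are exactly 1x, 2x, 4x, and 8x the core count as the table shows:

```python
def valid_ram_options(cores):
    """RAM sizes (GB) available for a given core count, per the table above."""
    return [cores * ratio for ratio in (1, 2, 4, 8)]

# Enumerate the valid CPU/RAM combinations for the OS type.
for cores in (1, 2, 4, 8, 16, 32, 64):
    print(cores, "Core:", valid_ram_options(cores))
```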

4) Disk type support: RSSD cloud disk

Specific selection range:

System disk Data disk
RSSD cloud disk (20-500GB) RSSD cloud disk (20-32000GB)

5) Feature support: NetworkEnhanced 2.0

6) Restrictions: Linux Outstanding UHost only supports high-kernel version images (that is, images with kernel version ≥ 4.19).
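
To check whether an existing system meets this restriction, the kernel release string can be compared against 4.19. A small sketch:

```python
import platform

def kernel_at_least(release, required=(4, 19)):
    """Return True if a kernel release string like '5.4.0-90-generic'
    is at or above the required (major, minor) version."""
    major, minor = release.split("-")[0].split(".")[:2]
    return (int(major), int(minor)) >= required

# Check the kernel of the machine this runs on.
print(kernel_at_least(platform.release()))
```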

 

Outstanding PRO (OPRO)

1) Introduction:

  • Adopts the AMD EPYC 7F52 CPU (3.5GHz-3.9GHz), with the highest single-core computing power in the Outstanding series
  • Memory bandwidth is twice that of Outstanding (O), and the average L3 cache per core is 4 times that of Outstanding (O)
  • The maximum network performance reaches 10,000,000 PPS, and the maximum storage performance reaches 1,200,000 IOPS
  • Suitable for high-frequency trading, high-performance databases, EDA, and other businesses that demand single-core computing power

2) CPU platform support: AMD EPYC2

3) CPU memory combination (support ratio 1:1-1:8):

CPU RAM
2 Core 2G, 4G, 8G, 16G
4 Core 4G, 8G, 16G, 32G
8 Core 8G, 16G, 32G, 64G
16 Core 16G, 32G, 64G, 128G
32 Core 32G, 64G, 128G
48 Core 128G

4) Disk type support: RSSD cloud disk

Specific selection range:

System disk Data disk
RSSD cloud disk (20-500GB) RSSD cloud disk (20-32000GB)

5) Feature support: NetworkEnhanced 2.0

6) Restrictions: Linux Outstanding UHost only supports high-kernel version images (that is, images with kernel version ≥ 4.19).

 

Outstanding MAX (OMAX)

1) Introduction

  • Adopts the AMD EPYC 7H12 CPU (2.6GHz-3.3GHz)
  • Top specification up to 240 cores, the largest core count in the Outstanding series
  • Supports 100GbE networking; the maximum network performance reaches 10,000,000 PPS, and the maximum storage performance reaches 1,200,000 IOPS
  • Suitable for many-core scenarios such as high-performance computing, data analysis, and in-memory services

2) CPU platform support: AMD EPYC2

3) CPU memory combination (support ratio 1:1-1:8):

CPU RAM
2 Core 2G, 4G, 8G, 16G
4 Core 4G, 8G, 16G, 32G
8 Core 8G, 16G, 32G, 64G
16 Core 16G, 32G, 64G, 128G
32 Core 32G, 64G, 128G, 256G
64 Core 64G, 128G, 256G, 512G
128 Core 128G, 256G, 512G, 1024G
240 Core 1024G

4) Disk type support: RSSD cloud disk

Specific selection range:

System disk Data disk
RSSD cloud disk (20-500GB) RSSD cloud disk (20-32000GB)

5) Feature support: NetworkEnhanced 2.0

6) Restrictions: Linux Outstanding UHost only supports high-kernel version images (that is, images with kernel version ≥ 4.19).

 

Outstanding Lite (OLite)

1) Introduction

  • Adopts the Ampere® Altra™ processor (2.8GHz-3.0GHz)
  • Based on the Arm v8.2+ architecture (Arm v8.2 plus the Meltdown and Spectre fixes and other important features introduced in Arm v8.5) and the Neoverse N1 platform
  • Supports 25GbE networking; the maximum network performance reaches 10,000,000 PPS, and the maximum storage performance reaches 500,000 IOPS
  • Suitable for Internet components, Android development, and other scenarios

2) CPU platform support: ARM/Altra

3) CPU memory combination (support ratio 1:1-1:8):

CPU RAM
1 Core 1G, 2G, 4G, 8G
2 Core 2G, 4G, 8G, 16G
4 Core 4G, 8G, 16G, 32G
8 Core 8G, 16G, 32G, 64G
16 Core 16G, 32G, 64G, 128G
32 Core 32G, 64G, 128G, 256G
64 Core 64G, 128G, 256G

4) Disk type support: RSSD cloud disk

Specific selection range:

System disk Data disk
RSSD cloud disk (20-500GB) RSSD cloud disk (20-32000GB)

5) Feature support: NetworkEnhanced 2.0

6) Restrictions: Only ARM images are supported; x86 images are not compatible.

 

Outstanding (O)

1) Introduction

  • The flagship SCloud UHost with excellent computing, storage, and network performance.
  • Supports the AMD second-generation EPYC CPU (2.9GHz base frequency) or Intel Cascadelake CPU (2.5GHz base frequency).
  • The maximum network performance reaches 10,000,000 PPS, and the maximum storage performance reaches 1,200,000 IOPS.
  • Flexible specifications, up to a 96-core 768G extra-large instance.
  • It performs well in most scenarios, such as web services, game services, databases, and data analysis and processing, with an excellent cost-performance ratio.

2) CPU platform support: AMD EPYC2 / Intel Cascadelake

3) CPU memory combination (support ratio 1:1-1:8):

CPU RAM
2 Core 2G, 4G, 8G, 16G
4 Core 4G, 8G, 16G, 32G
8 Core 8G, 16G, 32G, 64G
16 Core 16G, 32G, 64G, 128G
32 Core 32G, 64G, 128G, 256G
64 Core 64G, 128G, 256G, 512G
96 Core 96G, 192G, 384G, 768G

4) Disk type support: RSSD cloud disk

Specific selection range:

System disk Data disk
RSSD cloud disk (20-500GB) RSSD cloud disk (20-32000GB)

5) Feature support: NetworkEnhanced 2.0

6) Restrictions: Linux Outstanding UHost only supports high-kernel version images (that is, images with kernel version ≥ 4.19).

 

The following are previous generation instances; they will not be supplied long-term in availability zones where the Outstanding instance types have launched.

 

Standard (N)

1) Introduction: Provides the most flexible combinations of CPU, memory, and disk. Suitable for scenarios with balanced computing, storage, and network demands.

2) CPU platform support: Intel IvyBridge/Haswell/Broadwell/Skylake

3) CPU memory combination (support ratio 1:1-1:8):

CPU RAM
1 Core 1G, 2G, 4G, 8G
2 Core 2G, 4G, 6G, 8G, 12G, 16G
4 Core 4G, 6G, 8G, 12G, 16G, 32G
8 Core 8G, 12G, 16G, 24G, 32G, 48G, 64G
16 Core 16G, 24G, 32G, 48G, 64G, 128G
24 Core 24G, 32G, 64G, 96G, 128G
32 Core 32G, 64G, 96G, 128G

4) Disk type support: cloud disk, Standard local disk, SSD local disk

Specific selection range:

System disk | Data disk
SSD cloud disk (20-500GB) | SSD cloud disk (20-4000GB); Standard cloud disk (20-8000GB)
Standard local disk (20-100GB) | Standard local disk (20-2000GB)
SSD local disk (20-100GB) | SSD local disk (20-1000GB)

5) Feature support: NetworkEnhanced 1.0 / NetworkEnhanced 2.0 (Skylake and above only) and Hot Scale-Up

 

High Frequency (C)

1) Introduction: Instances with CPU frequency ≥ 3.2GHz and excellent single-core performance, suitable for compute-intensive services such as high-frequency trading, rendering, and artificial intelligence.

2) CPU platform support: Intel Skylake

3) CPU memory combination (support ratio 1:1-1:8):

CPU RAM
1 Core 1G, 2G, 4G, 8G
2 Core 2G, 4G, 8G, 16G
4 Core 4G, 8G, 16G, 32G
8 Core 8G, 16G, 32G, 64G
16 Core 16G, 32G, 64G, 128G
32 Core 32G, 64G, 128G

4) Disk type support: cloud disk, SSD local disk

Specific selection range:

System disk | Data disk
SSD cloud disk (20-500GB) | SSD cloud disk (20-4000GB); Standard cloud disk (20-8000GB)
SSD local disk (20-100GB) | SSD local disk (20-1000GB)

5) Feature support: NetworkEnhanced 1.0 and Hot Scale-Up

 

GPU (G)

Instances with GPU cards are suitable for businesses that require GPUs for computing, such as high-performance computing, rendering, and artificial intelligence. Three kinds of GPU cards are currently supported: K80, P40, and V100. The configurations attached to each card differ slightly.

GPU performance comparison

Parameters | Tesla V100 | Tesla P40 | Tesla K80
Number of CUDA cores | 5120 | 3840 | 2496
Single-precision floating point performance | 14 TFLOPS | 12 TFLOPS | 8.7 TFLOPS
INT8 performance | N/A | 47 TOPS | N/A
Tensor performance | 112 TFLOPS | N/A | N/A
Video memory capacity | 16GB | 24GB | 12GB
Architecture | Volta | Pascal | Kepler

 

V100 / P40 GPU

1) CPU platform support: Broadwell

2) GPU-CPU-memory combination support:

GPU | CPU | RAM
1 | 4 Core | 8G, 16G
1 | 8 Core | 16G, 32G
2 | 8 Core | 16G, 32G
2 | 16 Core | 32G, 64G
4 | 16 Core | 32G, 64G
4 | 32 Core | 64G, 128G

3) Disk type support: cloud disk, SSD local disk

System disk | Data disk
SSD cloud disk (20-500GB) | SSD cloud disk (20-4000GB); Standard cloud disk (20-8000GB)
SSD local disk (20-100GB) | SSD local disk (20-1000GB)

4) Feature support: NetworkEnhanced 1.0

 

K80 GPU

1) CPU platform support: Intel Haswell

2) GPU-CPU-memory combination support:

GPU | CPU | RAM
1 or 2 | 4 Core | 8G, 16G
1 or 2 | 8 Core | 16G, 32G
1 or 2 | 16 Core | 32G, 64G

3) Disk type support: SSD local disk

System disk | Data disk
SSD local disk (20-100GB) | SSD local disk (20-1000GB)

4) Feature support: NetworkEnhanced 1.0

1.4.2.2 CPU platform

The CPU platform attribute refers to the CPU microarchitecture of the physical host on which the cloud host runs, and includes the following options:

Intel IvyBridge

Intel Haswell

Intel Broadwell

Intel Skylake

Intel Cascadelake

Intel Cascadelake-Refresh

AMD EPYC2

Arm Altra

 

The main differences between CPU platform generations are the hardware architecture and the instruction set. For the same instance configuration, SCloud UHosts on different CPU platforms are priced the same.

You can specify a minimum CPU platform when creating a host, or let the system backend allocate one automatically. For example, if you select the Standard type with CPU platform ≥ Intel Haswell (V3), the backend may schedule the cloud host onto a Haswell, Broadwell, or Skylake machine.
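
The scheduling example above amounts to a simple filter: among the platforms an instance type offers, any at or above the requested minimum is eligible. A sketch, with the generation order taken from the list in this section:

```python
# Intel platform generations, oldest to newest, as listed in 1.4.2.2.
INTEL_ORDER = ["IvyBridge", "Haswell", "Broadwell", "Skylake",
               "Cascadelake", "CascadelakeR"]

def eligible_platforms(minimum, supported):
    """Platforms at or above the requested minimum that the instance
    type actually offers."""
    floor = INTEL_ORDER.index(minimum)
    return [p for p in supported if INTEL_ORDER.index(p) >= floor]

# Standard (N) supports IvyBridge/Haswell/Broadwell/Skylake; with a
# minimum of Haswell the scheduler may pick any of the last three.
print(eligible_platforms("Haswell",
                         ["IvyBridge", "Haswell", "Broadwell", "Skylake"]))
```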

 

Best practices for CPU platform selection:

1) The CPU platform is an advanced option when creating a host. For ordinary web sites, apps, databases, DevOps, and other non-compute-intensive services (average CPU usage below 30%) with no instruction-set requirements, automatic allocation of the CPU platform is recommended.

 

2) For businesses that require a specific instruction set (such as software explicitly requires AVX instruction set), it is recommended that you refer to the following table for selection:

CPU platform | AVX | AVX-2 | AVX-512
Intel/IvyBridge | ✓ | – | –
Intel/Haswell | ✓ | ✓ | –
Intel/Broadwell | ✓ | ✓ | –
AMD/EPYC2 | ✓ | ✓ | –
Intel/Skylake | ✓ | ✓ | ✓
Intel/Cascadelake | ✓ | ✓ | ✓
Intel/CascadelakeR | ✓ | ✓ | ✓
Arm/Altra | – | – | –

3) For businesses with clear computing performance requirements, choose the latest CPU generation available in the target availability zone.

1.4.3 Characteristics

1.4.3.1 NetworkEnhanced

The network enhancement feature is divided into two different versions, 1.0 and 2.0.

NetworkEnhanced 1.0 is a cloud host feature built on multi-core CPUs and the Linux kernel's NIC multi-queue support. It meets the needs of scenarios with frequent communication and large numbers of small packets. After it is enabled, expected performance rises from 300,000 PPS to 1,000,000 PPS.

NetworkEnhanced 2.0 further upgrades cloud host network performance, up to 10,000,000 PPS, through the host's smart NIC hardware and SR-IOV (single-root I/O virtualization) technology. At present, this feature is only supported on Outstanding series instances.

 

1) Performance comparison

Feature | Maximum internal network bandwidth | Maximum packet rate | Cost | Supported instances
No NetworkEnhanced | 10G [1] | 300,000 PPS | Free of charge | Previous generation SCloud UHost series
NetworkEnhanced 1.0 | 10G | 1,000,000 PPS | Additional charge (see console for details) | Previous generation SCloud UHost series
NetworkEnhanced 2.0 | 22G [2] | 4 cores: 2,500,000 PPS; 8 cores: 5,000,000 PPS; 16 cores and above: 10,000,000 PPS | Free of charge | Outstanding SCloud UHost series

[1] When NetworkEnhanced is not enabled, packet sending and receiving is the main performance bottleneck, so cloud hosts usually cannot reach the 10G bandwidth limit. The maximum bandwidth can be calculated as the packet rate limit (300,000 PPS) × the average packet size. For example, for large packets of 1400 bytes, the bandwidth limit is 300,000 × 1400 × 8 = 3.36 Gbps.
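
The footnote's arithmetic generalizes to any packet size:

```python
def max_bandwidth_gbps(pps_limit, avg_packet_bytes):
    """Bandwidth ceiling implied by a packet-rate cap: PPS x bytes x 8 bits."""
    return pps_limit * avg_packet_bytes * 8 / 1e9

# 1400-byte packets at the 300,000 PPS cap, as in the footnote above.
print(max_bandwidth_gbps(300_000, 1400))   # 3.36 (Gbps)
```

The same formula shows why small packets are the hard case: at 64 bytes, even 1,000,000 PPS moves well under 1 Gbps.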

 

2) Application scenarios

Games: games with strong real-time interaction, such as ARPGs and shooters;

Advertising: DSPs that process requests at high frequency;

Mobile social: real-time social networking, communication, and push platforms;

Web applications: applications based on WebSocket.

 

3) Restrictions

NetworkEnhanced 1.0:

  • NetworkEnhanced 1.0 can be enabled only when the host is created from an image whose Features include "NetEhanced".
  • Currently, only SCloud UHosts with 4 cores and above support the NetworkEnhanced 1.0 feature.

NetworkEnhanced 2.0:

  • NetworkEnhanced 2.0 can be enabled only when the host is created from an image whose Features include "NetEhanced_Ultra".
  • Currently, Windows images, non-high-kernel-version images, and 1-core SCloud UHosts do not support NetworkEnhanced 2.0.

 

4) FAQ

Q: Why can I see the three network cards eth0, eth1, eth2 in the instance with NetworkEnhanced 2.0 enabled? Which network card should I use?

A: eth1 is a non-NetworkEnhanced NIC and eth2 is the NetworkEnhanced 2.0 NIC. Normally the host sends and receives packets through eth2 to keep the high performance of NetworkEnhanced 2.0, but because the NetworkEnhanced 2.0 NIC cannot maintain connectivity during migration, eth1 takes over while the host is migrated online. Since connectivity switches between eth1 and eth2, SCloud provides eth0 as the master NIC, mapped to eth1 when eth1 is connected and to eth2 when eth2 is connected. For business use, simply use eth0.

1.4.3.2 Hot Scale-Up

Both CPU and memory of the host support online (hot) upgrade and expansion. During the upgrade, the cloud host does not need to be powered off or restarted, and there is no performance impact on applications and services running on the host.

1) Applicable scenarios

  • In the initial stage of a business launch, it is usually impossible to accurately assess the required cloud host configuration. You can start with a common specification and use Hot Scale-Up to adjust it later.
  • When business pressure spikes, Hot Scale-Up can serve as an emergency measure while additional cloud hosts are brought up in parallel.

2) Restrictions

  • Currently, only SCloud UHosts on Broadwell and later CPU platforms are supported.
  • Only images featured with Hot Scale-Up can enable this function. Currently CentOS 7.0 and above and Ubuntu 16.04 and above support it.

3) Billing

There is no additional charge for enabling the Hot Scale-Up feature.

1.4.3.3 DataArk

SCloud DataArk is a service that provides continuous data protection for SCloud UHost disks. It supports online real-time backup with data recovery accurate to the second, avoiding data loss caused by misoperation or malicious destruction and effectively protecting your valuable data.

1) Restrictions

Only standard local disks and standard cloud disks are supported.

2) Billing

DataArk is charged by total disk capacity, at an additional fee of 0.4 RMB per GB.

1.4.4 Order composition

1.4.4.1 UHost + local disk

When the system disk is a local disk, the order composition of the host includes:

  • CPU
  • GPU (GPU instance only)
  • RAM
  • Local system disk
  • Local data disk
  • Image (referring to the charged image in the image market)
  • Additional features (such as Network-Enhancement)

 

The elastic IP address (EIP) and the cloud data disk (UDisk) are not included. They are independent orders with independent life cycles.

1.4.4.2 UHost + UDisk

When the system disk is a cloud disk, the order composition of the host includes:

  • CPU
  • RAM
  • Cloud system disk
  • Image (referring to the charged image in the image market)
  • Additional features (such as Network-Enhancement)

The elastic IP address (EIP) and the cloud data disk (UDisk) are not included. They are independent orders with independent life cycles.

1.4.5 Quota

To prevent users from over-applying for resources, SCloud sets a default quota for each resource, which varies by availability zone.

If your demand has exceeded the quota, please contact your account manager to increase the quota.

 

1.5 Disk

SCloud UHost currently offers five types of disks: standard local disk, SSD local disk, standard cloud disk, SSD cloud disk, and RSSD cloud disk.

1.5.1 Type introduction

Cloud disk

As the basic block storage product for cloud computing scenarios, a cloud disk is a block device that provides persistent storage space for cloud hosts. It has an independent life cycle and is accessed over a distributed network, providing cloud hosts with large-capacity, highly reliable, scalable, easy-to-use, and low-cost hard drives. SCloud currently provides two cloud disk options: SSD cloud disk and standard cloud disk.

Local disk

A local disk is a virtual disk on the same physical host as the cloud host's CPU and memory, characterized by low latency. Its life cycle is tied to the cloud host, and it cannot be bound or unbound. RAID is used to protect against data loss. SCloud currently provides two local disk options: SSD local disk and standard local disk.

1.5.2 Selection Guide

Parameters | RSSD cloud disk | SSD cloud disk | Standard cloud disk | SSD local disk | Standard local disk
Description | New cloud disk product with ultra-high performance and only 0.1ms latency; currently only supports the Outstanding series of SCloud UHosts | Excellent, stable overall performance suitable for most scenarios, at a balanced price | Largest optional capacity; suitable for low-IO scenarios with redundancy requirements | Excellent performance and latency; highest unit price | Local disk with IO optimization technology; better IO performance than ordinary SATA disks
Recommended scenarios | High-performance databases, Elasticsearch, and other I/O-intensive, latency-sensitive applications | I/O-intensive applications, relational databases, NoSQL databases, and other scenarios where standard disk performance is insufficient | Log storage, sequential reading/writing of large files, small relational databases, development and test environments, web servers | Large relational databases with application-layer redundancy, NoSQL databases, high-frequency trading, etc. | Small relational databases, various enterprise applications, data analysis, game servers
IO performance reference | Stable 1,200,000 | Stable 24,000 | Peak 1,000 | Peak 80,000 | Peak 8,000
Latency | 0.1ms | 0.5-3ms | 10ms | 0.3ms | 0.3ms
Redundancy mechanism | 3 copies | 3 copies | 3 copies | 2 copies | 2 copies
Capacity | 20-32000GB | 20-8000GB | 20-8000GB | 20-1000GB | 20-2000GB

 

1.5.3 Performance

Parameters | RSSD cloud disk | SSD cloud disk | Standard cloud disk | SSD local disk | Standard local disk
Optional capacity (data disk) | 20-32000GB | 20-8000GB | 20-8000GB | 20-1000GB | 20-2000GB
Random read (IOPS) | min{1800 + 50 × capacity, 1,200,000} | min{1200 + 30 × capacity, 24,000} | 1,000 | 80,000 | 8,000
Random write (IOPS) | min{1800 + 50 × capacity, 1,200,000} | min{1200 + 30 × capacity, 24,000} | 1,000 | 15,000 | 8,000
Sequential read (MBps) | min{120 + 0.5 × capacity, 4800} | min{80 + 0.5 × capacity, 260} | 100 | 2000 | 150
Sequential write (MBps) | min{120 + 0.5 × capacity, 4800} | min{80 + 0.5 × capacity, 260} | 100 | 1000 | 150
Average latency | 0.1ms | 0.5-3ms | 10ms | 0.3ms | 0.3ms

Note: the actual performance of an RSSD cloud disk also depends on the host configuration.
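
The min{} formulas in the table can be applied directly; a small sketch for the two cloud disk types, where capacity is the disk size in GB:

```python
def rssd_iops(capacity):
    """Random read/write IOPS of an RSSD cloud disk of 'capacity' GB."""
    return min(1800 + 50 * capacity, 1_200_000)

def ssd_cloud_iops(capacity):
    """Random read/write IOPS of an SSD cloud disk of 'capacity' GB."""
    return min(1200 + 30 * capacity, 24_000)

def rssd_seq_mbps(capacity):
    """Sequential throughput (MBps) of an RSSD cloud disk."""
    return min(120 + 0.5 * capacity, 4800)

# A 500GB RSSD cloud disk: 1800 + 50 * 500 = 26,800 IOPS.
print(rssd_iops(500))
# The 1,200,000 IOPS cap is reached at large capacities.
print(rssd_iops(32000))
```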

1.5.4 Instance/feature support

Instance support

Instance type | System disk | Data disk
Outstanding (O) | RSSD cloud disk | RSSD cloud disk
Standard (N) | SSD cloud disk | Standard/SSD cloud disk
Standard (N) | Standard local disk | One standard local disk, plus multiple cloud disks
Standard (N) | SSD local disk | One SSD local disk, plus multiple cloud disks
High Frequency (C) | SSD local disk | One SSD local disk, plus multiple cloud disks
GPU (G) | SSD cloud disk | Standard/SSD cloud disk
GPU (G) | SSD local disk | One SSD local disk, plus multiple cloud disks

 

Feature support

Disk type | DataArk | Normal snapshot
RSSD cloud disk | Coming soon | Coming soon
SSD cloud disk | Coming soon | Coming soon
Standard cloud disk | Supported | Supported
SSD local disk | Not available | Not available
Standard local disk | Supported (system disk and data disk must be enabled together) | Not available

 

1.5.5 FAQ

Why can’t I see the disk type I want in some Availability Zones?

Some disk types are not yet online in certain nodes, or purchases are restricted due to inventory. If you cannot select the desired disk, please contact technical support.

Can the SCloud UHost mount multiple local disks?

A SCloud UHost includes a system disk and a data disk by default. One host can mount only one local data disk, which cannot be unmounted and remounted to another host.

However, one host can mount multiple cloud data disks, and these can be unmounted freely.

Does it support system disk expansion?

SSD cloud disk: supports 20-500G capacity expansion; free within 40G, charged at the standard disk fee above 40G.

Standard local disk/SSD local disk: supports 20-100G capacity expansion; free within 40G, charged at the standard disk fee above 40G.

 

1.6 Image

An image is a template of the cloud host instance environment, including the operating system, pre-installed software, and configuration.

There are two types of image:

One is the standard image, which is officially provided by SCloud and covers various operating systems such as Linux and Windows.

The other is the custom image, a dedicated image created by the user from a SCloud UHost and visible only to that user.

 

Outstanding special image

Because the Outstanding SCloud UHost series adopts NetworkEnhanced 2.0 and RSSD cloud disk technology, it requires Linux images to have at least a 4.19 kernel. Therefore, for standard images below CentOS 8.0 / Ubuntu 20.04 / Debian 10.0, SCloud upgrades the native kernel to 4.19. The higher kernel version does not affect upper-layer applications and is usually more secure than a lower version.

For uploaded images that cannot support the Outstanding SCloud UHost, please contact technical support to upgrade the kernel version.

For Windows images, there is no such kernel version requirement.

1.7 Network

1.7.1 Internal network 

1.7.1.1 Private IP

Private IP addresses are uniformly allocated by the system; if you change the private IP inside the operating system, internal network communication will be interrupted. Traffic between hosts in the same data center over private IPs is free. Private IPs can be used for internal communication between cloud host instances, and between cloud hosts and other cloud services such as UDB and UMem.

In addition, SCloud also supports private virtual IPs, which can be configured on the host directly after application.

1.7.1.2 Internal network bandwidth limit rules

Outstanding SCloud UHost Series

The bandwidth limit rules of Outstanding SCloud UHost series (O, OS, and OPRO) are as follows:

The number of cores Internal network bandwidth limit rules
4 cores or below 2 Gb
8 cores 4 Gb
16 cores 7.5 Gb
32 cores 15 Gb
64 cores or above 22 Gb
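
Read as thresholds, the table maps core count to a bandwidth cap. In the sketch below, the behavior for core counts not listed (e.g. 48 cores) is my assumption, interpolated from the listed tiers:

```python
def outstanding_bandwidth_gb(cores):
    """Internal bandwidth cap (Gb) for Outstanding series hosts,
    per the tier table above; unlisted core counts fall into the
    next-lower listed tier (an assumption)."""
    if cores <= 4:
        return 2
    if cores <= 8:
        return 4
    if cores <= 16:
        return 7.5
    if cores <= 32:
        return 15
    return 22

for cores in (4, 8, 16, 32, 64):
    print(cores, "cores ->", outstanding_bandwidth_gb(cores), "Gb")
```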

 

Previous generation SCloud UHost series

The previous generation cloud host series has no per-instance bandwidth limit rule; the maximum internal network bandwidth is 10Gb.

However, when NetworkEnhanced is not enabled, packet sending and receiving capability is the main performance bottleneck, so the cloud host usually cannot reach the 10G bandwidth limit. The maximum bandwidth can be calculated as the packet rate limit (300,000 PPS) × the average packet size. For example, for large packets of 1400 bytes, the bandwidth limit is 300,000 × 1400 × 8 = 3.36 Gbps.

1.7.2 Elastic IP address

1.7.2.1 EIP

The EIP is the main way for users to access cloud hosts and for host instances to provide external services. In SCloud, EIPs can be migrated flexibly: when a host fails, its EIP can easily be moved to another host, hence "elastic IP". When deploying a host, if you purchase an EIP and a public bandwidth quota together, an EIP is allocated and bound to your host at the same time. You can also find this IP resource in the data panel of the network product.

1.7.2.2 Multicast and broadcast

In the basic network mode, SCloud UHost currently supports broadcast, but does not support multicast.

1.7.2.3 Firewall

SCloud provides a firewall function for the host by binding firewall rules to the cloud host. It controls and manages the host's public network access and provides a necessary safeguard for host security. The firewall supports the TCP/UDP/ICMP/GRE protocols. Several sets of default firewalls are created for you: TCP ports 22 and 3389 and PING are open by default. You can adjust these or create more firewall policies according to your business needs.

1.7.3 Accelerated domain name

1.7.3.1 Accelerated domain name list

No additional configuration is required on the cloud host, and access acceleration can be achieved when accessing the following domain names:

Classification   Domain name
github           github.com
                 *.github.com
                 gist.githubusercontent.com
                 raw.githubusercontent.com
dingtalk         *.trans.dingtalk.com
                 trans.dingtalk.com
                 zjk-cdn-trans.dingtalk.com
                 sh-cdn-trans.dingtalk.com
                 sz-cdn-trans.dingtalk.com
baidu.com        *.pcs.baidu.com
                 pcs.baidu.com
                 *.baidupcs.com
                 baidupcs.com
nvidia           developer.download.nvidia.com
                 nvcr.io
                 *.nvcr.io
                 helm.ngc.nvidia.com
                 *.ngc.nvidia.com
docker           download.docker.com
go               golang.org
                 go.googlesource.com

 

1.7.3.2 Acceleration effect

The test results are as follows; acceleration achieves a roughly hundredfold speedup.

Before acceleration

The bandwidth before acceleration is between 10KB/s-50KB/s.

[root@192-168-6-90 ~]# git clone https://github.com/dgraph-io/dgraph.git

Cloning into 'dgraph'...

remote: Enumerating objects: 162, done.

remote: Counting objects: 100% (162/162), done.

remote: Compressing objects: 100% (137/137), done.

Receiving objects:  10% (10713/98227), 5.32 MiB | 27.00 KiB/s

 

After acceleration

The bandwidth after acceleration is between 1MB/s-10MB/s.

[root@192-168-6-90 ~]# git clone https://github.com/dgraph-io/dgraph.git

Cloning into 'dgraph'...

remote: Enumerating objects: 140, done.

remote: Counting objects: 100% (140/140), done.

remote: Compressing objects: 100% (118/118), done.

remote: Total 98205 (delta 60), reused 46 (delta 22), pack-reused 98065

Receiving objects: 100% (98205/98205), 436.16 MiB | 4.32 MiB/s, done.

 

Acceleration principle

Phenomenon

No additional configuration is required on the host. When dig github.com is executed, it resolves to a private IP address, which is a service address provided specifically for domain-name acceleration.

 

Principle

Taking GitHub as an example, there are two methods for cloning code: HTTPS and SSH. With HTTPS, TLS/SSL encryption is performed between the client and the GitHub server; each connection generates a unique encryption key, the transmitted content is integrity-checked, and the client verifies the validity of the server's host name and certificate. SSH uses public/private key pairs for authentication and encrypted transmission. In both cases, the connection from the client to the GitHub server uses asymmetric cryptography to achieve end-to-end encrypted transmission.

 

1.8 Monitor 

We provide monitoring services for your host, network, and other resources, so you can track system operation and resource usage at any time and easily locate and analyze problems.

The following host monitoring metrics are provided by default: CPU usage, disk read/write, and network interface bandwidth and packet volume. If you have installed our monitoring agent, you can also monitor memory usage, disk space usage, and more.


2 Billing instructions

2.1 Billing guide

2.1.1 Billing options

Currently SCloud UHost is charged according to the UHost configuration and usage time.

Billing options   Billing description
UHost             Charged based on instance specifications (CPU, memory) and the length of purchase and usage.
UDisk             The default system disk is 20 GB (purchase required); the minimum increment is 10 GB. A UDisk purchased together with a cloud host shares the host's payment period. In other scenarios, it is also recommended to match the UDisk's payment period to that of the host it will be mounted on.
Image             Public images provided by SCloud are free. If you upload a self-made image through the US3 service, see the US3 product pricing for the billing standard.
Network           Billing based on duration or usage is supported; see the pricing of network products.
Other services    Include snapshots, data ark, and Network Enhancement.

 

2.1.2 Billing modes

2.1.2.1 Prepayment

Annual billing

Annual billing means the billing cycle is one year, paid up front. The default purchase period is one year; to purchase longer, select 2 or 3 years from the drop-down box. At present, the longest single order for SCloud UHost is 3 years. This billing method is suitable for long-term, stable business.

 

If automatic renewal is enabled and you have not deleted the resource when payment is due, the system will by default deduct the next period's fee when the account balance is sufficient, extending the validity period of the resource.

If the account balance is insufficient, the resource status becomes expired when the resource expires. If you still need to keep the cloud host, please recharge in time.

The current annual payment discount strategy is as follows:

           UHost     UDisk     EIP
1 year     83% off   83% off   83% off
2 years    70% off   70% off   83% off
3 years    50% off   50% off   83% off

 

1) When the purchase period of the order is one year, it is processed at an 83% discount; that is, you pay for ten months and get two months free. If you need an order longer than ten months, the annual payment method is recommended.

2) The annual discount is applicable to all SCloud UHost and UDisk types.

3) Annual orders must be paid for the full purchase period before the order starts. If you stop using the host and delete resources during the purchase period, the system deducts the cost of the time you used and refunds the rest. In this case the annual discount no longer applies: the deducted fee is the monthly unit price * duration of use.
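As a sketch of the rules above, assuming a hypothetical monthly unit price of 100:

```shell
# Hypothetical monthly unit price, for illustration only
monthly=100
# 1-year order at 83%: pay for roughly ten months, get twelve
annual=$((monthly * 12 * 83 / 100))
echo "annual order price: ${annual}"        # 996
# Deleting after 4 full months: the annual discount no longer applies,
# so the deducted fee is the monthly unit price * months used
deducted=$((monthly * 4))
echo "refund: $((annual - deducted))"       # 596
```

The refund is the prepaid amount minus the undiscounted monthly fee for the time actually used.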

 

Monthly billing

Monthly billing means that orders are paid with a single month as the billing cycle. This billing method is suitable for start-up or growing business.

If you expect to use the resource for 1-9 months, choose the monthly payment method. If you expect a one-time purchase period of 10 months or more, the annual payment method is recommended so that you enjoy the annual discount.

 

1) To help users align bills, the monthly payment period by default runs from the purchase time to the end of the current month; after that, you can select 1-9 months.

2) Monthly orders must be paid before the order starts. If you stop using the host and delete resources within the purchase period, the system deducts a fee for the time you used and refunds the rest. The refund amount = prepaid amount - deducted fee, where the deducted fee is the monthly unit price * duration of use.

3) You can also turn on automatic renewal. When your account balance is sufficient, the system automatically deducts the next period's fee when the resource expires, ensuring smooth operation. If the balance is insufficient and you do not renew in time, the resource expires.

 

Hourly billing

Hourly billing means that orders are paid with one hour as the billing cycle. This billing method is suitable for temporarily activating services in batches to cope with promotions or business surges.

If you choose hourly billing, you must prepay 1 hour before using the host. During subsequent use, the system deducts each period's fee before it starts.

Hourly orders are generated every hour, that is, one order record is generated per hour.

 

1) For the same usage duration, the hourly unit price is about 1.5 times the monthly unit price.

2) Users who are billed by hour can change the billing method to monthly or annual by themselves.

3) Billing method changes are currently supported only upward: hourly billing can be changed to monthly or annual billing, and monthly billing can be changed to annual billing.

4) If you delete the resource during the usage period, the system deducts a fee for the time you used (the minimum billing unit is one hour; less than one hour counts as one hour) and refunds the remaining fee to your account.

2.1.2.2 Post payment

The post-paid method is billed by the hour, and nothing is charged when the host is purchased. After each full hour of use, a post-paid bill is generated; automatic renewal is turned on by default, and when your account balance is sufficient the system automatically charges the order fee.

If you change the host's configuration, the original configuration stops immediately and the new configuration takes effect immediately. For that period, the system bills separately according to the usage time of the old and new configurations.
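A sketch of how such a split bill could be computed; the unit prices and the 20/40-minute split are hypothetical:

```shell
# Hypothetical hourly unit prices, in cents
old_price=100   # old configuration
new_price=200   # new configuration
# Within one billed hour: 20 minutes on the old config, 40 on the new
old_fee=$((old_price * 20 / 60))   # 33
new_fee=$((new_price * 40 / 60))   # 133
echo "hour total: $((old_fee + new_fee)) cents"   # 166
```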

 

2.2 Configuration changes

Depending on the host's billing mode, bills and refunds/supplementary payments for configuration changes are handled as follows.

2.2.1 Prepayment

If the host's current billing method is prepaid (annual, monthly, or hourly), an order is generated immediately when a configuration upgrade/downgrade occurs. The new order covers the period from the time of the change to the end time of the original order. After you confirm the supplementary payment or refund, the new configuration takes effect immediately.

2.2.2 Post payment

If the host's billing method is post-paid, the new configuration takes effect immediately when a configuration upgrade/downgrade occurs. When the hourly order is generated, the bill is split according to the usage time of the old and new configurations.

 

2.3 Renewal

2.3.1 Prepayment supports the Automatic Renewal function

When purchasing a host, if the billing method you choose is prepaid (annual, monthly, or hourly), automatic renewal is turned on by default. When your account balance is sufficient, the system automatically renews the resource when it is about to expire.

Note that automatic renewal uses the previous configuration and duration. If your account balance is insufficient, the deduction fails when the resource is about to expire, the resource status is displayed as "expired", and you can renew manually after recharging.

If your payment cycle is one month, then to facilitate reconciliation the first automatic renewal extends only to the end of the current month, and subsequent renewals cover whole months.

You can also turn off the automatic renewal switch. After expiration, the resource enters the arrears state; the system reminds you about the expiration and renewal but does not immediately release your resources. You can then use manual renewal, choosing monthly or annual payment. If you do not recharge and renew manually, the deduction fails and the resource status is displayed as "expired".

If the resources expire, the system may reclaim related expired resources, please renew in time.

When the automatic renewal switch is off, the system reminds you by email and SMS 3 days before the product expires, and reminds you of the expiration time and renewal matters every day for 7 days after it expires.

2.3.2 Renewal validity period

If your resource's status is "expired" and you renew manually, the renewal period starts from the expiration time. The resource leaves the expired state once the end of the renewal period is later than the current time.

If you renew within the resource's validity period, the new period is appended to the end of the current period.

 

2.4 Recycle/Delete

2.4.1 Notification channel

All notifications are sent to the recipients you set by email, SMS, and in-site message.

2.4.2 Recycle

Annual/monthly prepayment

 

3 days before your resource expires: a notification that the resource is about to expire is sent to the recipients you set;

On the day your resource expires: a resource-expired notification is sent to the recipients you set;

3 days after your resource expires: a notification that the cloud host is about to be powered off is sent to the recipients you set, and the host is shut down the same day. A host shut down in this way must be renewed before it can be started and used again;

10 days after your resource expires: a reminder that the cloud host is about to be recycled is sent to the recipients you set, and the host is recycled the same day (recycled resources cannot be retrieved, so please renew in time).

 

Hourly prepayment

 

On the day your resource expires: an expiration reminder is sent;

1 day after your resource expires: a notification that the resource is about to be recycled is sent;

2 days after your resource expires: a recycling notification is sent and the host is reclaimed.

 

Hourly post payment

An hourly post-paid cloud host generates a post-paid order every hour. If your account's available balance is sufficient, the fee is deducted automatically; if it is insufficient, an arrears order is generated;

After 3 accumulated post-paid arrears orders: a notification that the resources may be recycled is sent;

After 3 accumulated arrears orders + 24 h: a resource shutdown notification is sent, and the system is forcibly powered off the same day;

After 3 accumulated arrears orders + 48 h: a resource deletion notification is sent, and the resource is reclaimed the same day.

2.4.3 Delete

If you delete a resource within its validity period, the system calculates your bill according to your payment method. When you delete a host, attached resources such as cloud disks and networks are deleted at the same time.

 

Prepayment

If the host you chose is a prepaid model (annual, monthly, or hourly payment) and you delete the resource within its validity period, the system deducts a fee based on your usage time and refunds the remaining prepaid amount to you.

 

Post payment

If the host you chose is a post-paid model, then when you delete it the system stops the resource's service and generates an order for you on the hour.

 

In prepaid mode, if you delete the host partway through an hour, the system rounds the usage up to a full hour and calculates the refundable amount based on that usage.
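The round-up rule can be sketched as follows, with a hypothetical hourly unit price and usage duration:

```shell
# Usage of less than one hour counts as a full hour
price=2            # hypothetical hourly unit price
used_minutes=200   # 3 hours 20 minutes of actual use
billable_hours=$(( (used_minutes + 59) / 60 ))   # round up: 4
echo "billable hours: ${billable_hours}"         # 4
echo "deducted fee: $((billable_hours * price))" # 8
```

The refund is then the prepaid amount minus this deducted fee, as described in the prepaid rules above.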

 


3 Quick start

3.1 Common precautions

3.1.1 Change host configuration

  1. When you select "Change Configuration" in the menu, the host must by default be powered off before the operation can be performed. To upgrade without rebooting, confirm that the host supports Hot Scale-Up (see "Hot Scale-Up" under Model | Features) and select the "Hot Scale-Up" operation in the menu;
  2. If the system disk size is changed, the configuration change takes longer, roughly 30 minutes per 100 GB. No manual adjustment inside the system is needed after the expansion completes;
  3. After expanding a data disk, you must enter the system and adjust it manually;
  4. Any refund incurred by a downgrade is immediately returned to your cash account or gift account;

3.1.2 Re-install the host

  1. Re-installing the host does not change its private or public IP;
  2. Pay attention to file system changes. For example, if CentOS 6.x is re-installed as 7.x, the data disk may not be recognized.

3.1.3 Delete host

  1. After the host is deleted, it cannot be retrieved. Please operate with caution and back up the data in time;
  2. Deleting the host will automatically unbind EIP and UDisk. If you do not need this IP and data disk at all, you can enter the corresponding page to delete;
  3. If the host is already in arrears, deleting the host will not delete the arrears order. You still need to pay for these orders;
  4. If the host has not reached the expiration time, deleting the host will trigger a refund, and immediately return your cash account or gift account.

3.1.4 Create an image

  1. It is recommended to power off the host before creating the image; otherwise there is a certain probability that the resulting image cannot boot;
  2. Only system disk images are supported. To back up a data disk, use the data ark function.

3.1.5 Reset password

  1. The host must be powered off to reset the password;
  2. If the default account on the host has been modified, the password reset may not take effect.

 

3.2 Deploy your first instance

3.2.1 Purchase a UHost

If you have not registered a SCloud account, please register an account first.

Operating procedures

Select availability zone

Choose image, CPU and memory

Configure the network

Configure management settings

Choose payment method and pay the order

 

First log in and go to the console page, select the UHost product, click Create Host, and enter the host configuration information page.

 

3.2.1.1 Select availability zone

 

Internal networks do not interoperate between different regions.

3.2.1.2 Choose image, CPU and memory

Host creation is divided into basic configuration and customized configuration. Basic configuration offers packaged combinations and standard images, letting you complete basic system setup quickly; customized configuration lets you choose the model, CPU platform, image, CPU, memory, and disks, and whether to enable Network Enhancement, Hot Scale-Up, and so on.

 

1) The default model for the basic configuration is Standard (N). Basic configuration currently supports only the standard images provided by SCloud. If you have other requirements, switch to a customized configuration.

2) The recommended configuration currently displays four combinations (1 core 1G, 1 core 2G, 2 cores 4G, 4 cores 8G). If you have other requirements, switch to a customized configuration.

 

1) You can customize the model, CPU, memory, and disk type and size.

2) You can choose whether to enable the Network Enhancement and Hot Scale-Up features. Some images and models may not support the Network Enhancement and Hot Scale-Up features.

3) The CPU platform refers to the CPU micro-architecture version of the host where the cloud host is located. Each generation of the CPU platform mainly upgrades the hardware architecture.

3.2.1.3 Configure the network

There are two modes: standard network and customized network. By default, the standard network binds a public EIP and a firewall for you; the customized network additionally includes your VPC, your subnet segment, a public EIP, and a firewall. The firewall policy list initially shows two types, "Web recommendation" and "Non-web recommendation". If you have special requirements, you can create a new firewall policy.

 

In standard network mode, the default EIP bandwidth is 1 Mb. The firewall policy has two types: "Web recommendation" and "Non-web recommendation". If the UHost needs to provide http or https services, select "Web recommendation". If this configuration does not meet your needs, select a customized network.

 

Bandwidth provides three billing modes: bandwidth billing, traffic billing, and shared bandwidth.

3.2.1.4 Configure management settings

The login user name created the first time is a default value that depends on the image type selected and cannot be changed. Here you set the password and host name, and choose whether to join a business group and whether to enable a hardware isolation group. The password can be set manually or generated randomly by the system. An isolation group is a logical grouping of cloud hosts.

 

3.2.1.5 Choose payment method and pay the order

The system's default payment method is monthly payment, with a default period from purchase to the end of the month. Monthly, annual, and hourly payment are all supported.

All three are prepaid methods, so the system deducts the next period's fee in advance. Click [Purchase Now], complete the payment, and return to the console page; the newly purchased UHost appears in the list.

 

Choosing "Monthly payment" -> "Purchase until the end of the month" means the order is paid through the end of the current month, and the next order starts from 0:00 on the 1st of the following month.

3.2.2 Connect to UHost

There are two ways to connect to a UHost: console login and third-party client login. Third-party client software (e.g. Xshell, PuTTY, SecureCRT) requires the corresponding firewall port to be open and a public IP.

3.2.2.1 Console login

Click the [Login] button on the page, enter the user name and password, and you can log in on the web side.

 

When the status of the UHost changes from “initialized” to “running”, it means that the initialization is complete.

3.2.2.2 Third-party client login

If the image configured on the UHost is a Linux environment, you can connect and log in through tools such as SecureCRT/XShell, as shown in the figure below.

 

The login interface of the UHost will be different due to the different images selected by the system, but the login steps are the same.

3.2.3 Use CLI commands to create a host

SCloud CLI provides a consistent operation interface for managing resources and services on the SCloud platform. It uses scloud-sdk-go to call SCloud OpenAPI to implement operations on resources and services. It is compatible with Linux, macOS and Windows platforms.

 

1) Install SCloud-CLI on macOS or Linux platform

Install via Homebrew (recommended on macOS platform)

Homebrew is a very popular package management tool on the macOS platform. You can easily install or upgrade SCloud-CLI with the following commands

Install SCloud-CLI

brew install scloud

Upgrade to the latest version

brew upgrade scloud

If you encounter an error during the installation process, please execute the following command to update Homebrew

brew update

If the problem persists, execute the following command for more help

brew doctor

 

Compile based on source code (need to install golang locally)

If you have installed git and golang on your platform, you can use the following command to download the source code and compile it

git clone https://github.com/scloud/scloud-cli.git

cd scloud-cli

make install

Upgrade to the latest version

cd /path/to/scloud-cli

git pull

make install

 

2) Install SCloud-CLI on Windows platform

Compile from source code

Download the source code from the SCloud-CLI release page and unzip it, or download it via git: open Git Bash and execute git clone https://github.com/scloud/scloud-cli.git. Switch to the source directory and compile (execute go build -mod=vendor -o scloud.exe), then add the directory containing scloud.exe to the PATH environment variable (see the documentation for details). When finished, open a terminal (cmd or PowerShell) and execute scloud --version to check whether the installation succeeded.

Download binary executable file

Open the scloud-cli release page, find the scloud-cli package for your platform, download and decompress it, and then add the directory containing scloud.exe to the PATH environment variable. For adding environment variables, refer to the documentation.

 

3) Use SCloud-CLI in a Docker container

If you have installed Docker, use the following command to pull the packaged SCloud-CLI image (the image is built from the project's Dockerfile):

docker pull uhub.service.scloud.cn/scloudcli/scloud-cli:source-code

Create a container based on this image

docker run --name scloud-cli -it -d uhub.service.scloud.cn/scloudcli/scloud-cli:source-code

Connect to the container and start using SCloud-CLI

docker exec -it scloud-cli zsh

 

4) Enable command completion (bash or zsh shell)

SCloud-CLI supports auto-completion of commands. After it is turned on, you only need to enter some characters of the command, and then hit the Tab key to automatically complete the remaining characters of the command.

 

Bash shell

Add the following line to "$HOME/.bash_profile" or "$HOME/.bashrc", then source the file (or open a new terminal) for command completion to take effect:

complete -C $(which scloud) scloud

 

Zsh shell

Add the following lines to ~/.zshrc, then source ~/.zshrc (or open a new terminal) for command completion to take effect:

autoload -U +X bashcompinit && bashcompinit

complete -C $(which scloud) scloud

The zsh built-in command bashcompinit may not take effect on some operating systems. If the script above does not work, try replacing it with the following script:

_scloud() {
    read -l;
    local cl="$REPLY";
    read -ln;
    local cp="$REPLY";
    reply=(`COMP_SHELL=zsh COMP_LINE="$cl" COMP_POINT="$cp" scloud`)
}

compctl -K _scloud scloud

 

5) Initial configuration

SCloud CLI supports multiple named configurations, stored in the local files config.json and credential.json under the ~/.scloud directory. You can use the scloud config add command to add configurations, with --profile specifying the configuration name, or edit config.json and credential.json directly. When there is no valid local configuration, the scloud init command adds a configuration named default; it simplifies the configuration process as much as possible and is suitable for first-time setup.

 

There are 10 configuration items in total

  • Profile: The profile name, which must be unique. It can be overridden by the --profile parameter when executing a command
  • Active: Whether this configuration is in effect; only one configuration can be active
  • ProjectID: The default project ID, which can be overridden by --project-id
  • Region: The default region, which can be overridden by --region
  • Zone: The default availability zone, which can be overridden by --zone
  • BaseURL: The default SCloud OpenAPI address, which can be overridden by --base-url
  • Timeout: The default API request timeout in seconds, which can be overridden by --timeout
  • PublicKey: The account's public key, which can be overridden by --public-key
  • PrivateKey: The account's private key, which can be overridden by --private-key
  • MaxRetryTimes: The default maximum number of retries for failed API requests, effective only for idempotent APIs (APIs with no side effects when called multiple times, such as releasing an EIP via ReleaseEIP). It can be overridden by --max-retry-times

 

The commands to add or modify the configuration are as follows.

First use, initial configuration

$ scloud init

View all configurations

$ scloud config list

 

Profile  Active  ProjectID   Region  Zone       BaseURL                 Timeout  PublicKey           PrivateKey          MaxRetryTimes

default  true    org-oxjwoi  cn-bj2  cn-bj2-05  https://api.scloud.cn/  15       YSQGIZrL*****nCRQ=  jtma2eqQ*****+Avms  3

uweb     false   org-bdks4e  cn-bj2  cn-bj2-05  https://api.scloud.cn/  15       4E9UU0Vh*****PWQ==  694581ea*****a0d45  3

Add configuration

$ scloud config add --profile <new-profile-name> --public-key xxx --private-key xxx

Modify the configuration items of a configuration

$ scloud config update --profile xxx --region cn-sh2

For more information, please refer to the command help

$ scloud config --help

 

6) Example

The following example uses SCloud CLI to create a host in the Nigeria data center and bind an external IP.

First, create a cloud host

$ scloud uhost create --cpu 1 --memory-gb 1 --password **** --image-id uimage-fya3qr

 

uhost[uhost-zbuxxxx] is initializing...done

Execute the following command to view the meaning of each parameter for creating a host:

$ scloud uhost create --help

Secondly, we're going to allocate an EIP and then bind it to the UHost created above.

$ scloud eip allocate --bandwidth-mb 1

allocate EIP[eip-xxx] IP:106.75.xx.xx  Line:BGP

 

$ scloud eip bind --eip-id eip-xxx --resource-id uhost-xxx

bind EIP[eip-xxx] with uhost[uhost-xxx]

The above operation can also be done with one command

$ scloud uhost create --cpu 1 --memory-gb 1 --password **** --image-id uimage-fya3qr --create-eip-bandwidth-mb 1

 

 

4 Instructions

4.1 Instance

4.1.1 Manage your instance

After logging in to the UHost console, you can Login, Stop, Restart, and PowerOff a host.

 

Stop and Restart operate at the virtual machine level. If Restart or Stop has no effect, use PowerOff: it shuts down at the host level, equivalent to forcibly cutting the power, and is more thorough.

If you click [Details], you will enter the host details page, displaying the basic information, configuration information, payment information and monitoring information of the currently selected host.

 

 

4.1.1.1 Basic information

 

The basic information module includes Resource ID, Resource Name, UGroup, Availability Zone, Private IP, etc., among which the Resource Name and UGroup can be changed instantly.

4.1.1.2 Configuration information

The configuration information module includes Type, CPU platform, Image, CPU, Memory, etc. Click [Edit Config] to enter the configuration change page, which supports changing memory and CPU specifications, resizing disk, and upgrading public EIP bandwidth.

4.1.1.3 Change the configuration

Operation path

1) Choose a UHost

2) Click to select [Edit Config], then click [Change the configuration], click [Continue] to enter the next page

3) Select the CPU and memory separately to upgrade the data

4) Click [OK], pay the price difference, and complete the payment

 

 

You can also use the uhost resize command (SCloud CLI) to upgrade the configuration of a specified instance. Use the --cpu and --memory-gb parameters to specify the number of CPU cores and the memory size, as follows.

scloud uhost resize --uhost-id uhost-0a3gcvih --cpu 2 --memory-gb 4

Output:

Resize uhost must be after stop it. Do you want to stop this uhost? (y/n):y

uhost[uhost-0a3gcvih] is shutting down…done

UHost:[uhost-0a3gcvih] resized…done

4.1.1.4 Resize disk capacity

Operation path

1) Choose a UHost

2) Click [Change Configuration], then click [Adjust Disk Capacity], select the disk you want to adjust, and click [Continue] to enter the next page

3) Choose to adjust the disk capacity value

4) Click [OK], pay the price difference, and complete the payment

 

 

If the current disk supports online expansion, the host does not need to be restarted after expansion, but you still need to log in to the host to perform the related configuration.

4.1.1.5 Upgrade public EIP bandwidth

Operation path

1) Choose a UHost

2) Click [Change Configuration], then click [Upgrade public EIP bandwidth], select the IP address to be upgraded, and click [Continue] to enter the next page

3) Adjust the bandwidth value you need

4) Click [OK], pay the price difference, and complete the payment

 

 

4.1.1.6 Payment information

 

The payment information module includes creation time, expiration time, and payment method. Click [Pay] to support one-click renewal.

4.1.2 Delete instance

You can log in to the UHost console, select the host, and click [Delete Instance] to delete the current host.

You can also use the uhost delete command (SCloud CLI) to delete a host, specifying the instance ID.

scloud uhost delete --uhost-id uhost-0a3gcvih

Output:

Are you sure you want to delete the host(s)? (y/n):y

uhost[uhost-0a3gcvih] is shutting down…done

uhost[uhost-0a3gcvih] deleted

1) If you delete a cloud host through the console, the EIP and UDisk associated with the host are deleted at the same time, and billing stops. If you use the API, the ReleaseEIP and ReleaseUDisk fields determine whether the EIP and UDisk are deleted synchronously.

2) After the resource is deleted, the system automatically refunds the unused fees remaining in the lease.
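Note 1) can be sketched in code. Apart from the ReleaseEIP and ReleaseUDisk fields named above, the request shape below is hypothetical, built only to illustrate the choice the API exposes:

```python
# Sketch only: the exact API request format is not given in this guide;
# only the ReleaseEIP / ReleaseUDisk fields are. Everything else is illustrative.
def delete_request(uhost_id: str, release_eip: bool = True,
                   release_udisk: bool = True) -> dict:
    """Mirror the console behaviour: deleting via the API only releases the
    bound EIP/UDisk when the corresponding field is set."""
    return {
        "UHostId": uhost_id,
        "ReleaseEIP": release_eip,      # keep the EIP (and its billing) if False
        "ReleaseUDisk": release_udisk,  # keep the data disk if False
    }

req = delete_request("uhost-0a3gcvih", release_eip=False)
assert req["ReleaseEIP"] is False and req["ReleaseUDisk"] is True
```

Unlike the console path, setting either field to False keeps that resource (and its billing) alive after the host is gone.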

4.1.3 Network Configuration

The console host list page provides entries for changing the public network firewall and binding an EIP.

Operation path

1) Select host

2) Click […], select [Related Product] in the drop-down box

3) Click [Edit Firewall]/[Bind EIP] to operate

4.1.4 Disk configuration

Operation path

1) Select host

2) Click […], select [Related Product] in the drop-down box

3) Click [Mount UDisk] to operate

4.1.5 More operations

In addition to the above functions, the console list also supports more operations such as Edit UGroup and Create Image.

 

 

4.1.5.1 Edit UGroup

Operation path

1) Select host

2) Click […], select [More Operations] from the drop-down box

3) Click [Edit UGroup] operation

 

You can group cloud hosts so that certain operations can be performed per business group, eliminating tedious one-by-one operations and making centralized resource management easier. When you later view monitoring data or modify rules, you can operate on a whole group at once.

4.1.5.2 View or Edit Custom Data

Operation path

1) Select host

2) Click […], select [More Operations] from the drop-down box

3) Click [View or Edit Custom Data]

4.1.5.3 Create an image

Operation path

1) Select host

2) Click […], select [More Operations] from the drop-down box

3) Click [Create Image]

 

4.1.5.4 Reset password

Operation path

1) Select host

2) Click […], select [More Operations] from the drop-down box

3) Click [Set/Reset Password]

 

You can also use the uhost reset-password command (SCloud CLI) to reset the host password, specifying the availability zone and instance ID. Use the --password parameter to specify the new password, as follows.

scloud uhost reset-password --zone cn-bj2-05 --uhost-id uhost-0a3gcvih --password test12345

Output:

uhost[uhost-0a3gcvih] will be stopped, can we do this? (y/n):y

uhost[uhost-0a3gcvih] is shutting down…done

uhost[uhost-0a3gcvih] reset password

  1. At present, the host must still be powered off (PowerOff) before the password can be reset;
  2. If the default account on the host has been changed, the password reset may not take effect.

4.1.6 Reinstall the system

Operation path

1) Select host

2) Click […], select [More Operations] from the drop-down box

3) Click [Reinstall System]

 

 

1) If the system disk size selected during reinstallation differs from the original host's, this is equivalent to a system disk resize, and you need to pay the price difference or receive a refund.

2) Reinstalling the system must be performed while the host is shut down; it does not change the private or public IPs.

3) Please pay attention to the file system changes when reinstalling the system. For example, if CentOS 6.x is reinstalled to 7.x, or Linux and Windows are reinstalled to each other, the data disk may not be recognized.

4) A cloud host with Network Enhancement enabled cannot be reinstalled into a system that does not support Network Enhancement (such as Windows).

4.1.7 Host NTP configuration operation guide

NTP server IP in each region

Region | Availability Zone | NTP server 1 | NTP server 2 | NTP server 3
Hong Kong | Availability Zone A | 10.8.255.1 | 10.8.255.2 | 0.cn.pool.ntp.org
Hong Kong | Availability Zone B | 10.8.255.1 | 10.8.255.2 | 0.cn.pool.ntp.org
Los Angeles | Availability Zone A | 10.11.255.1 | 10.11.255.2 | 0.cn.pool.ntp.org
0.cn.pool.ntp.org | Availability Zone A | 10.27.255.101 | 10.27.255.102 | 0.cn.pool.ntp.org
Singapore | Availability Zone A | 10.35.255.1 | 10.35.255.2 | 0.cn.pool.ntp.org
Bangkok | Availability Zone A | 10.31.255.101 | 10.31.255.102 | 0.cn.pool.ntp.org
Kaohsiung | Availability Zone A | 10.37.255.101 | 10.37.255.102 | 0.cn.pool.ntp.org
Taipei | Availability Zone A | 10.41.255.1 | 10.41.255.2 | 0.cn.pool.ntp.org
Jakarta | Availability Zone A | 10.45.255.1 | 10.45.255.2 | 0.cn.pool.ntp.org
Mumbai | Availability Zone A | 10.47.255.1 | 10.47.255.2 | 0.cn.pool.ntp.org
Seoul | Availability Zone A | 10.33.255.101 | 10.33.255.102 | 0.cn.pool.ntp.org
Tokyo | Availability Zone A | 10.40.255.1 | 10.40.255.2 | 0.cn.pool.ntp.org
Frankfurt | Availability Zone A | 10.29.255.101 | 10.29.255.102 | 0.cn.pool.ntp.org
Moscow | Availability Zone A | 10.39.255.1 | 10.39.255.2 | 0.cn.pool.ntp.org
Dubai | Availability Zone A | 10.44.255.1 | 10.44.255.2 | 0.cn.pool.ntp.org
São Paulo | Availability Zone A | 10.49.255.1 | 10.49.255.2 | 0.cn.pool.ntp.org

 

Each machine is configured with at least two SCloud NTP server IPs and one external NTP server, corresponding to upstream1, upstream2, and official_upstream3 in the documents below.
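The configuration pattern above can be sketched as follows. The region mapping is copied from two rows of the table; the helper itself is illustrative, not an SCloud tool:

```python
# Sketch: generate the ntp.conf "server" lines for a given availability zone.
# NTP_SERVERS holds two sample rows copied from the table above; extend as needed.
NTP_SERVERS = {
    "hong-kong-a": ("10.8.255.1", "10.8.255.2", "0.cn.pool.ntp.org"),
    "singapore-a": ("10.35.255.1", "10.35.255.2", "0.cn.pool.ntp.org"),
}

def ntp_server_lines(zone: str) -> list[str]:
    """Two SCloud upstreams (preferred) plus one public fallback,
    with shortened poll intervals as recommended below."""
    up1, up2, official = NTP_SERVERS[zone]
    return [
        f"server {up1} iburst minpoll 3 maxpoll 4 prefer",
        f"server {up2} iburst minpoll 3 maxpoll 4 prefer",
        f"server {official} iburst minpoll 3 maxpoll 4",
    ]

print("\n".join(ntp_server_lines("singapore-a")))
```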

 

Modify NTP for each operating system

1) CentOS/Ubuntu/Redhat/Debian/Gentoo

NTP configuration file location: /etc/ntp.conf

* Modification method

Add the corresponding NTP server IP according to the availability zone

restrict region:

add restrict 10.255.255.1

add restrict 10.255.255.2

 

server region:

original configuration:

server 0.asia.pool.ntp.org

server 1.asia.pool.ntp.org

server 2.asia.pool.ntp.org

server 3.asia.pool.ntp.org (3.gentoo.pool.ntp.org)

replaced by:

server 10.255.255.1 iburst minpoll 3 maxpoll 4 prefer

server 10.255.255.2 iburst minpoll 3 maxpoll 4 prefer

server 0.cn.pool.ntp.org iburst minpoll 3 maxpoll 4

 

Function: Shorten the time polling cycle, and choose SCloud’s NTP service first

Add fine-tuning parameters

CentOS 6.x

add tinker dispersion 100

add tinker step 1800

add tinker stepout 3600

 

Function: Speed up the fine-tuning, control the fine-tuning range

CentOS 7.x

add tinker dispersion 100

add tinker stepout 3600

 

Function: Speed up the fine-tuning, control the fine-tuning range

 

* Test method

Restart NTP service

CentOS/Redhat/Gentoo:

# service ntpd restart

 

Ubuntu:

# sudo service ntp restart

 

Debian:

# service ntp restart

Check NTP server IP

# ntpq -pn

If the SCloud server IP in the table is displayed, it means that the ntp configuration is correct.
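This check can be automated. The sample `ntpq -pn` output below is fabricated for illustration; in practice, feed in the real command output (e.g. via `subprocess.run(["ntpq", "-pn"], ...)`):

```python
# Sketch: confirm that ntpq -pn lists the SCloud NTP upstreams.
# SAMPLE_OUTPUT is an illustrative capture, not real command output.
SAMPLE_OUTPUT = """\
     remote           refid      st t when poll reach   delay   offset  jitter
==============================================================================
*10.35.255.1     10.10.10.10      2 u    5    8  377    0.211   -0.013   0.045
+10.35.255.2     10.10.10.10      2 u    3    8  377    0.198    0.022   0.051
+120.25.115.20   10.137.38.86     2 u    7   16  377   35.302    1.420   2.110
"""

def configured_peers(output: str) -> set[str]:
    peers = set()
    for line in output.splitlines()[2:]:       # skip the two header lines
        if line and line[0] in "*+-#x o.":     # ntpq tally code prefixes each peer
            peers.add(line[1:].split()[0])
    return peers

peers = configured_peers(SAMPLE_OUTPUT)
assert {"10.35.255.1", "10.35.255.2"} <= peers, "SCloud upstreams missing"
```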

If the machine's clock offset is too large for the ntp service to correct automatically, first set the time with the ntpdate or date command, and then start the ntp time synchronization service.

Specific operation:

# service ntpd stop

# ntpdate upstream1   or   # date -s "Y-m-D H:M:S"

# service ntpd start

 

2) OpenSUSE

NTP configuration file location: /etc/ntp.conf

* Modification method

Add restrict parameters

restrict -4 default kod notrap nomodify nopeer noquery

restrict -6 default kod notrap nomodify nopeer noquery

Add the corresponding NTP service IP according to the availability zone

restrict region

add restrict 10.255.255.1

add restrict 10.255.255.2

 

server region

original configurations:

server 0.asia.pool.ntp.org

server 1.asia.pool.ntp.org

server 3.asia.pool.ntp.org (3.gentoo.pool.ntp.org)

replaced by:

server 10.255.255.1 iburst minpoll 3 maxpoll 4 prefer

server 10.255.255.2 iburst minpoll 3 maxpoll 4 prefer

server 0.cn.pool.ntp.org iburst minpoll 3 maxpoll 4

 

Function: Shorten the time polling cycle, and prefer SCloud’s NTP service

 

* Test method

Restart NTP service

# service ntp restart

Check NTP server IP

# ntpq -pn

If the SCloud server IP in the table is displayed, it means that the ntp configuration is correct.

If the machine's clock offset is too large for the ntp service to correct automatically, first set the time with the ntpdate or date command, and then start the ntp time synchronization service.

Specific operation:

# service ntpd stop

# ntpdate upstream1   or   # date -s "Y-m-D H:M:S"

# service ntpd start

You can download and run the script to complete the configuration, see mod_ntp.sh

 

3) Windows

* Modification method

Modify the Windows Time service to start automatically

  1. Enter “services.msc” in the terminal, the service list pops up, find “Windows Time”, change the startup type into “automatic”, and start the service; (if it is already started, ignore it).
  2. For 2008 and 2012 users, 64-bit machines, you need to enter `sc triggerinfo w32time start/networkon stop/networkoff` in the terminal.

(The above commands are cmd commands and cannot be run in powershell).

 

Modify Group Policy

Start the Windows NTP client end

  1. Enter “gpedit.msc” in the terminal, and the group policy editor will pop up.
  2. “Computer Configuration\Administrative Templates\System\Windows Time Service\Time Provider\Configure Windows NTP Client”, change its status to “Enabled”.

Configure Windows NTP client parameters

  1. Configure the “NtpServer” value of the corresponding Availability Zone as “upstream1,0x9 upstream2,0x9 official_upstream3,0x9”.
  2. Modify the “Type” value to NTP.
  3. Modify “SpecialPollInterval” to a value between 30 and 60 seconds.

Enable global configuration (Computer Configuration\Administrative Templates\System\Windows Time Service\Global Configuration Settings)

Modify “MaxAllowPhaseOffset” to 3600

Modify “MaxNegPhaseCorrection” to 3600

Modify “MaxPosPhaseCorrection” to 3600

Modify the “PhaseCorrectRate” value to “7”

Modify the “MinPollInterval” value to “3”

Modify the “MaxPollInterval” value to “4”

 

* Test Methods

1) Execute gpupdate /force on the command line to forcefully update the group policy;

2) After completing the above configuration, execute w32tm /resync in the terminal so that the client sends a clock synchronization request to the server and synchronizes immediately;

3) Enter w32tm /query /status in the terminal command line to view synchronization information.

 

4.2 Disk

4.2.1 View hard disk partitions

After logging in to the cloud host, use the fdisk -l command to view the hard disk partition of the cloud host (root privileges are required in Ubuntu).

 

System disk: /dev/vda

Data Disk 1: /dev/vdb

Data Disk 2: /dev/vdc
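The naming convention above follows the virtio device scheme and can be sketched as an illustrative helper:

```python
# Sketch: virtio disk naming as described above —
# /dev/vda is the system disk, data disks continue from /dev/vdb.
import string

def device_name(index: int) -> str:
    """index 0 = system disk, 1 = data disk 1, 2 = data disk 2, ..."""
    return "/dev/vd" + string.ascii_lowercase[index]

assert device_name(0) == "/dev/vda"   # system disk
assert device_name(1) == "/dev/vdb"   # data disk 1
assert device_name(2) == "/dev/vdc"   # data disk 2
```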

4.2.2 System disk expansion

1) Expansion rules

Different disk types follow different disk expansion rules:

Type | Capacity limit | Supported expansion operations
Local disk (standard local disk, SSD local disk) | 100GB | Change configuration
Cloud disk | 500GB | Create host, change configuration, reinstall system

 

2) Expansion steps

Expansion when creating/reinstalling the host:

1. On the creation/reinstallation page, select the size of the system disk;

2. Wait for the creation/reinstallation to complete; at this point the underlying block device has already been expanded;

3. Log in to the host to check whether the file system has been expanded.

 

Expand the capacity by changing the configuration after creation:

Expanding a local system disk takes a long time; expanding to 100G may require the host to be powered off for about 30 minutes.

 

1. Select "Change Configuration" -> "Change Disk Capacity" -> System Disk;

2. Wait for the expansion to finish and the host to enter the shutdown state; at this point the underlying block device has already been expanded;

3. Power on and log in to the host to check whether the file system has been expanded.

 

Check whether the file system has been expanded:

  1. Linux

df -TH

  2. Windows

Open This PC and check whether the size of the C drive matches the console

If the file system has not been expanded, you need to perform the expansion steps in the system.

 

3) Expansion steps in the system

Linux

Step 1: Install growpart

Growpart is installed by default in images with cloud-init support; for other versions, install it yourself as follows:

CentOS:

yum install -y epel-release

yum install -y cloud-utils

Ubuntu:

sudo apt-get install cloud-initramfs-growroot

 

Step 2: Expand the partition table

LANG=en_US.UTF-8

growpart /dev/vda 1

CentOS6 and Debian8 may encounter the situation that the kernel and tool chain do not support hot reloading of the partition table. In this case, the operating system needs to be restarted after expanding the partition table.

 

Step 3: Expand the file system

resize2fs /dev/vda1 (ext4 file system)

xfs_growfs /dev/vda1 (xfs file system), or xfs_growfs /

 

Step 4: Confirm

Check whether the expansion is complete:

df -TH
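The confirmation step can be scripted. The `df -TH` capture below is fabricated for illustration; in practice parse the real command output:

```python
# Sketch: check whether the root filesystem already reflects the expanded size.
# SAMPLE is an illustrative df -TH capture (df -TH reports decimal units,
# so a 100 GB disk shows roughly 106G).
SAMPLE = """\
Filesystem     Type  Size  Used Avail Use% Mounted on
/dev/vda1      ext4  106G  2.1G   99G   3% /
"""

def root_fs_size_gb(df_output: str) -> float:
    for line in df_output.splitlines()[1:]:    # skip the header row
        cols = line.split()
        if cols[-1] == "/":                    # mount point is the last column
            return float(cols[2].rstrip("G"))
    raise LookupError("root filesystem not found")

# If this still shows the old size, run the in-system expansion steps above.
assert root_fs_size_gb(SAMPLE) > 100
```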

 

Windows

Select “Extend Volume” in “Computer Management” to complete the expansion.

4.2.3 Data disk expansion

1) Expansion step

 

Select “Change Configuration” in the console, then restart the host after the upgrade completes.

If the file system has not been expanded, you need to perform the expansion steps in the system.

 

2) Expansion steps in the system

Linux

Check the file system type of the data disk (the upgrade operation requires different operations for the ext4 and xfs file systems)

df -ihT

 

 

Operating system for ext4 file format (such as CentOS6)

e2fsck -f /dev/vdb

resize2fs /dev/vdb

Operating system for xfs file format (such as CentOS7)

xfs_repair /dev/vdb

xfs_growfs /data

 

Windows

On the host, run diskpart.exe in cmd, enter list volume, select the logical volume to be extended, and enter extend [size=n], or extend alone to extend the selected volume by all unallocated space.

 

3) Hosts without local data disks before expansion

Linux

After the upgrade, you need to do the following operations in the cloud host:

Two file system formats, ext4 or xfs, can be selected to format the data disk

Set the data disk to the ext4 file format (the default file system format of CentOS6):

mkfs -t ext4 /dev/vdb

mount /dev/vdb /data/

Edit /etc/fstab and write the corresponding configuration to fstab

/dev/vdb   /data  ext4  defaults,noatime 0 0

Set the data disk to xfs format (the default file system format of CentOS7):

mkfs.xfs /dev/vdb

mount -t xfs /dev/vdb /data

Edit /etc/fstab and add the following

/dev/vdb /data xfs defaults,noatime 0 0
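The fstab entries above share one six-field layout; a small illustrative helper (not an SCloud tool) can build and sanity-check them:

```python
# Sketch: build the /etc/fstab entry shown above.
def fstab_entry(device: str, mountpoint: str, fstype: str) -> str:
    """Six whitespace-separated fields: device, mountpoint, type,
    options, dump flag, fsck order (0 0 = skip dump and boot-time fsck)."""
    assert fstype in ("ext4", "xfs"), "this guide covers ext4 and xfs only"
    return f"{device} {mountpoint} {fstype} defaults,noatime 0 0"

line = fstab_entry("/dev/vdb", "/data", "xfs")
assert len(line.split()) == 6
print(line)   # /dev/vdb /data xfs defaults,noatime 0 0
```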

 

Windows

Operate on the host, enter diskpart.exe in cmd

1. Enter list disk, then select disk n (fill in the actual value of n) to select the data disk;

2. Enter create partition primary to create a partition;

3. Enter list volume to see the created volume, then enter format fs=ntfs quick to format the partition;

4. Enter assign to assign a drive letter;

5. Enter exit to quit; the created disk is now visible in the system.

4.2.4 Mount cloud disk

On the console Host Management -> Mount cloud disk, perform the mounting operation.

You can also use the udisk attach command (SCloud CLI) to mount a cloud disk, specifying the availability zone and cloud host instance ID. Use the --udisk-id parameter to specify the cloud disk resource ID. For example:

scloud udisk attach --udisk-id bsm-bagfqw5u --zone cn-bj2-05 --uhost-id uhost-bh0fvsnh

Output:

udisk[bsm-bagfqw5u] is attaching to uhost uhost[uhost-bh0fvsnh]…done

 

Then do the following operations in the cloud host:

mkdir /udisk

mount /dev/vdc /udisk

df -h

4.2.5 Delete cloud disk

Only data disks can be deleted; the system disk cannot be deleted. In addition, local disks do not support deletion or separate release.

 

1) Delete the cloud disk in the system

Linux

The Linux operating system executes the following statement:

umount /dev/vdc

 

Windows

First select the cloud disk in the Disk Manager, right-click and select “Offline”.

Then select the cloud disk in Device Manager and right-click “Uninstall”.

These two operations are equivalent to dismounting the cloud disk in the Windows system.

If there is only one C drive, the second disk is the cloud disk. It is recommended to back up the cloud disk (for example, via snapshot or clone) before this operation.

On the console cloud disk list page, select the cloud disk to be expanded and uninstall it. At this time, the status of the cloud disk will change from “Mounted” to “Available”.

2) Console operation

On the console Host Management page -> Cloud Disk Management -> Uninstall, perform the uninstall operation.

You can also use the udisk detach command (SCloud CLI) to detach the cloud disk, specifying the availability zone. Use the --udisk-id parameter to specify the cloud disk resource ID. For example:

scloud udisk detach --udisk-id bsm-bagfqw5u --zone cn-bj2-05

Output:

Please confirm that you have already unmounted file system corresponding to this hard drive,(See “https://docs.scloud.cn/udisk/userguide/umount” for help), otherwise it will cause file system damage and UHost cannot be normally shut down. Sure to detach? (y/n):y

udisk[bsm-bagfqw5u] is detaching from uhost[uhost-bh0fvsnh]…done

4.2.6 Local disk “shrink”

The console does not support “shrinking” a local disk, but the same effect can be achieved indirectly through the following steps.

Please note that this operation completely erases the data; back up your data first!

1) Delete the local disk

The local disk can only be deleted after PowerOff. First unmount it inside the system. Operation steps:

Linux

umount /dev/vdc

 

Windows

First select the cloud disk in the Disk Manager, right-click and select “Offline”.

Then select the cloud disk in the device manager and right-click “Uninstall”.

After the uninstall is completed in the system, please select the designated host->Details->Disk and Restoration->Unmount UDisk in the console.

 

 

2) Add local disk

Please select the designated host->Details->Disk and Restoration->Add local disk in the console, and select the appropriate size.

 

After creating a new one, please enter the host and perform the following operations:

Linux

After the upgrade, you need to do the following operations in the cloud host:

Two file system formats, ext4 or xfs, can be selected to format the data disk

Set the data disk to the ext4 file format (the default file system format of CentOS6):

mkfs -t ext4 /dev/vdb

mount /dev/vdb /data/

Edit /etc/fstab and write the corresponding configuration to fstab

/dev/vdb   /data  ext4  defaults,noatime 0 0

Set the data disk to xfs format (the default file system format of CentOS7):

mkfs.xfs /dev/vdb

mount -t xfs /dev/vdb /data

Edit /etc/fstab and add the following

/dev/vdb /data xfs defaults,noatime 0 0

 

Windows

Operate on the host, enter diskpart.exe in cmd

1. Enter list disk, then select disk n (fill in the actual value of n) to select the data disk.

2. Enter create partition primary to create a partition.

3. Enter list volume to see the created volume, then enter format fs=ntfs quick to format the partition.

4. Enter assign to assign a drive letter.

5. Enter exit to quit; the created disk is now visible in the system.

 

In the same way, cloud disks do not support direct “shrinking”, but after detaching you can create a new cloud disk with a smaller capacity and mount it instead.

4.2.7 Disk Snapshot

Disk Snapshot Service (USnap) is based on Data Ark CDP technology and can create snapshots for the data disks of all cloud disk series (standard/SSD/RSSD). Snapshots are a convenient and efficient means of data disaster recovery, commonly used for data backup, building customized images, and disaster recovery.

 

4.3 Image

4.3.1 Customized image

4.3.1.1 Create an image

Select the host for which an image is needed, then click the Create Image button. Fill in the name and description of the image and click OK; the image is generated and the page jumps to the image management page.

A self-made image is priced the same as the image it was made from. For example, an image made from the CentOS 6.5 image provided by SCloud is free to use when creating hosts; an image made from a paid image in the image market still incurs the market image charge when creating hosts. A user's self-made images are stored for the user in the current region long-term.

Please make sure not to modify key system configuration (for example, network-related configuration) before making a self-made image. Modifying key system configuration may cause problems such as the image failing to be created or the created image failing to start.

4.3.1.2 Image management

Supports modifying the image name and remarks.

4.3.1.3 Create host from image

You can create a new host from a customized image.

An image in one availability zone can only be used to create hosts in that zone. If you need to create hosts across availability zones, please submit a work order to upgrade the image to region-level service.

The models a customized image applies to are limited to those of its parent image. For example, if the parent image is a Windows image, a 1-core host cannot be created from the customized image, and network enhancement cannot be enabled.

4.3.2 Use Packer to create a customized image

Note: Temporary resources are created while Packer builds a customized image and are deleted automatically when the build completes, so a certain amount of fees will be incurred.

4.3.2.1 Overview

Packer is a lightweight open source tool launched by HashiCorp for automated image packaging. Cloud vendors can build their own Builder to integrate with Packer and then efficiently create consistent images across multi-cloud platforms, concurrently, from a single configuration file. Packer runs on the common mainstream operating systems. It is not a substitute for Chef, Puppet, and similar software; rather, it integrates these automated configuration tools to pre-install software into the image. Combined with SCloud Terraform, SCloud CLI, and other tools, infrastructure as code (IaC), continuous integration, and rapid delivery can be realized in multi-cloud DevOps scenarios.

As shown in the figure below, Packer integrates Chef, Shell, Puppet and other tools in Provisioner to make immutable images containing various software for use by cloud hosts, Docker, etc. on multi-cloud platforms.

 

 

Comparison of Packer and traditional console image creation

 | Console image creation | Packer image creation
How to use | Click through the console | Build from configuration files
Reusability | Low. The same operations must be repeated every time, image consistency cannot be guaranteed, and version management is not possible. | High. Configuration files can be copied, modified, and version-managed.
Operational complexity | High. First create a cloud host from a base image, log in to deploy, then manually create the image. | Low. Execute the configuration file; the pre-configured automation scripts run and the image is built automatically.
Creation time | Long. Process-based operations require manual watching and cannot precisely wait for each step. | Short. Automated workflow with a complete polling-and-wait mechanism seamlessly connects each step.

 

Life cycle of creating image by Packer

 

1) The user calls SCloud Builder by building a JSON template and executing the packer build command;

2) Parameters are checked in advance to ensure availability;

3) Create cloud host, EIP and other related temporary resources (if configured as an internal network environment, EIP is not required);

4) Connect to the host via SSH or WinRM and execute the Provisioner process; then shut down the cloud host and create an image;

5) Copy image;

6) Delete temporary resources such as host and EIP;

7) Perform post-processing (such as local image import, etc.).

4.3.2.2 Quick start

1) Related Links

Official reference document: https://www.packer.io/docs (query the parameters of the SCloud Packer Builder)

Official download page: https://www.packer.io/downloads?spm=a2c4g.11186623.2.13.7186682bskvY7M (install Packer)

Open source repository: https://github.com/hashicorp/packer (contributions to the SCloud Packer Builder are welcome)

 

2) Environment configuration

Install Packer

  • Refer to the official installation document to install Packer
  • Install the SCloud CLI tool (optional; convenient for querying base images and other information)
  • Install Terraform (optional; convenient for orchestrating resources with the images Packer builds)

Configure the default user

Set the keys SCLOUD_PUBLIC_KEY and SCLOUD_PRIVATE_KEY and the project ID SCLOUD_PROJECT_ID as global environment variables (recommended), or explicitly specify public_key, private_key, and project_id in the JSON file.

 

3) Write JSON file

Let's take building a customized image with nginx installed as an example. First create a clean empty folder as the workspace, switch to that directory, and write a JSON template file (e.g. test.json), as follows:

{
  "variables": {
    "scloud_public_key": "{{env `SCLOUD_PUBLIC_KEY`}}",
    "scloud_private_key": "{{env `SCLOUD_PRIVATE_KEY`}}",
    "scloud_project_id": "{{env `SCLOUD_PROJECT_ID`}}"
  },
  "builders": [
    {
      "type": "scloud-uhost",
      "public_key": "{{user `scloud_public_key`}}",
      "private_key": "{{user `scloud_private_key`}}",
      "project_id": "{{user `scloud_project_id`}}",
      "region": "cn-bj2",
      "availability_zone": "cn-bj2-02",
      "instance_type": "n-basic-2",
      "source_image_id": "uimage-f1chxn",
      "ssh_username": "root",
      "image_name": "packer-test-basic-bj"
    }
  ],
  "provisioners": [
    {
      "type": "shell",
      "inline": [
        "yum install -y nginx"
      ]
    }
  ]
}

As above, this defines a scloud-uhost builder and a shell provisioner. By executing the command packer build test.json, you can build a customized image with one click.
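Before running the build, a quick structural sanity check of the template can be scripted (illustrative only; Packer's own packer validate command is the authoritative check). The minimal template embedded below is a stripped-down stand-in for test.json:

```python
# Sketch: minimal structural check of a Packer JSON template before
# running `packer build`. TEMPLATE is a cut-down stand-in for test.json.
import json

TEMPLATE = json.loads("""
{
  "builders": [
    {"type": "scloud-uhost", "region": "cn-bj2",
     "instance_type": "n-basic-2", "image_name": "packer-test-basic-bj"}
  ],
  "provisioners": [
    {"type": "shell", "inline": ["yum install -y nginx"]}
  ]
}
""")

# Every builder and provisioner entry must at least carry a "type".
assert TEMPLATE["builders"][0]["type"] == "scloud-uhost"
assert all("type" in p for p in TEMPLATE["provisioners"])
```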

4.3.3 Use Packer to create and import a customized image

4.3.3.1 Overview

Packer is a lightweight open source tool launched by HashiCorp for automated image packaging. Through its Packer integration, SCloud currently supports one-click import of self-made local images into the SCloud platform.

4.3.3.2 Related Links

Image import official reference document: https://www.packer.io/docs (query the parameters of the SCloud import post-processor)

Packer official download page: https://www.packer.io/downloads?spm=a2c4g.11186623.2.13.7186682bskvY7M (install Packer)

Open source repository: https://github.com/hashicorp/packer (contributions to the SCloud Packer Builder are welcome)

4.3.3.3 Image import example

Next, we will use Packer to make and import a CentOS image. As shown below:

 

Packer first uses the QEMU builder to make a RAW image and stores it in a locally configured directory. It then uses the scloud-import post-processor to upload the local image to the user-configured UFile bucket and automatically import it into the SCloud platform.

 

Environment configuration

1) Install Packer

Refer to the official installation document to install Packer

2) Configure the default user

Set the keys SCLOUD_PUBLIC_KEY and SCLOUD_PRIVATE_KEY and the project ID SCLOUD_PROJECT_ID as global environment variables (recommended), or explicitly specify public_key, private_key, and project_id in the JSON file.

3) Install QEMU

Refer to the official installation document and install from the command line. macOS: brew install qemu; CentOS: yum install qemu-kvm; Ubuntu: apt-get install qemu

4) Create a UFile bucket space

 

Write JSON file

Let us take creating and importing a CentOS 6.10 custom image with QEMU on macOS as an example. First create a clean empty folder as the workspace, switch to that directory, and write a JSON template file (e.g. local.json), as follows:

{
  "variables": {
    "scloud_public_key": "{{env `SCLOUD_PUBLIC_KEY`}}",
    "scloud_private_key": "{{env `SCLOUD_PRIVATE_KEY`}}",
    "scloud_project_id": "{{env `SCLOUD_PROJECT_ID`}}",
    "disk_size": "4096",
    "iso_checksum": "0da4a1206e7642906e33c0f155d2f835",
    "iso_checksum_type": "md5",
    "iso_name": "CentOS-6.10-x86_64-minimal.iso",
    "ks_path": "centos-6.10/ks.cfg",
    "mirror": "http://mirrors.ustc.edu.cn/centos",
    "mirror_directory": "6.10/isos/x86_64",
    "template": "centos-6.10-x86_64"
  },
  "builders": [
    {
      "type": "qemu",
      "boot_command": [
        "<tab> text ks=http://{{ .HTTPIP }}:{{ .HTTPPort }}/{{user `ks_path`}}<enter><wait>"
      ],
      "boot_wait": "10s",
      "disk_size": "{{user `disk_size`}}",
      "http_directory": "http",
      "iso_checksum": "{{user `iso_checksum`}}",
      "iso_checksum_type": "{{user `iso_checksum_type`}}",
      "iso_url": "{{user `mirror`}}/{{user `mirror_directory`}}/{{user `iso_name`}}",
      "output_directory": "packer-{{user `template`}}-qemu",
      "shutdown_command": "echo 'packer'|sudo -S shutdown -P now",
      "ssh_password": "scloud_packer",
      "ssh_port": 22,
      "ssh_username": "root",
      "ssh_timeout": "10000s",
      "vm_name": "{{ user `template` }}.raw",
      "net_device": "virtio-net",
      "disk_interface": "virtio",
      "format": "raw",
      "use_default_display": "false",
      "qemuargs": [
        ["-display", "cocoa"]
      ]
    }
  ],
  "post-processors": [
    {
      "type": "scloud-import",
      "public_key": "{{user `scloud_public_key`}}",
      "private_key": "{{user `scloud_private_key`}}",
      "project_id": "{{user `scloud_project_id`}}",
      "region": "cn-bj2",
      "ufile_bucket_name": "packer-test",
      "image_name": "packer_import_test",
      "image_os_type": "CentOS",
      "image_os_name": "CentOS 6.10 64-bit",
      "format": "raw"
    }
  ]
}

As above, this defines a qemu builder and an scloud-import post-processor, configured with the UFile bucket name and other information.

 

Write Kickstart file

According to the http_directory and boot_command configured for QEMU in the JSON file above, you need to create a ./http/centos-6.10/ directory under the JSON file's directory to hold the Kickstart file ks.cfg, as follows (see https://access.redhat.com/documentation/zh-cn/red_hat_enterprise_linux/6/html/installation_guide/ch-kickstart2):

install
cdrom
lang en_US.UTF-8
keyboard us
network --bootproto=dhcp
rootpw scloud_packer
firewall --disabled
selinux --permissive
timezone UTC
unsupported_hardware
bootloader --location=mbr
text
skipx
zerombr
clearpart --all
autopart
auth --enableshadow --passalgo=sha512
firstboot --disabled
reboot

%packages --nobase --ignoremissing
sudo
gcc
make
%end

 

Execute the build command

By executing the command packer build local.json, you can create and import a custom image in one step.

4.3.4 Image upload

4.3.4.1 Operation steps

1) Upload the image to US3. Because the image capacity is usually large, you need to use the client or SDK to upload.

Note: please add the image to the public storage space.

2) Select Console -> Cloud Host -> Image and click "Image upload".

3) Enter the image information and select "OK" to start importing.

4) The import progress can be queried in the image list. After the image status becomes “Available”, the host can be created.

 

Remarks: Some regions do not support image upload at the moment. If you want to upload to such a region, please submit a work order to migrate the image to the target region.

4.3.4.2 Operating instructions

Linux image

System: Images based on CentOS, Red Hat, Ubuntu, OpenSUSE, and SUSE releases. Both 32-bit and 64-bit are supported. (Debian and Gentoo are not currently supported; if you want to import such images, please consult technical support.)

  • Supported image formats: raw, vhd, qcow2, vmdk
  • File system type: ext3 or ext4 file system using an MBR partition table (GPT partitions are not supported)
  • System disk size: recommended not to exceed 50 GB
  • Driver: the virtio driver for the KVM virtualization platform must be installed
  • Kernel restriction: the native kernel is recommended; if the kernel has been modified, the import may fail. If the image is Red Hat, you must purchase the license yourself.
  • Other restrictions: only system disk images are supported, not data disk images.

 

Windows image

Operating system: Microsoft Windows Server 2008/2012; both 32-bit and 64-bit systems are supported, with activation supported.

  • Supported image formats: raw, vhd, qcow2, vmdk
  • File system type: NTFS file system using an MBR partition table (GPT partitions are not supported)
  • System disk size: recommended not to exceed 50 GB
  • Driver: if the image includes the virtio driver, it will boot with virtio; otherwise it will boot with IDE.
  • Other restrictions: only system disk images are supported, not data disk images.

 

Other images

When your image is not in the image range supported by SCloud, you can choose to upload it as an “other” type of image.

Other images have the following limitations:

  • There is no initialization by default. Initialization steps such as password setting, network setting, SSH installation, NTP configuration, and Yum source configuration during host creation/reinstallation will therefore not take effect; users need to log in through the console and perform these configurations themselves.
  • The console password-reset function does not take effect.

 

Manual initialization steps:

  • Before uploading, DHCP must be turned on inside the image to automatically obtain the private IP.
  • Upload the image through the console and create a host through this image.
  • In the console, enter the virtual machine through the cloud host login function.
  • Set password.
  • Configure NTP, Yum source. (Optional)
  • Install SSH. (Optional)

4.3.5 Copy image

If you need to use a customized image across regions or projects, you can use the copy image function. To use this function, please submit a work order or contact your account manager to upgrade the image service to region level.

 

Click the Copy Image button

 

Select the project and region to be copied

 

Check the copy task in the image list of the target project in the target region. The time required varies with the region and the size of the image; please wait patiently.

 

 

4.4 Metadata and UserData

4.4.1 Metadata

Metadata is a collection of basic information about UHost, including host id, configuration, image, IP, etc. All relevant metadata of the instance can be obtained through the metadata server.

4.4.1.1 Metadata Server

The metadata server is an intranet service. Through this service, a cloud host can obtain its own instance information from within the host.

SCloud’s metadata server address is (the same in all zones):

http://100.80.80.80/meta-data/

4.4.1.2 Metadata item

(Relative to: http://100.80.80.80/meta-data/latest/uhost)

Metadata item Explanation
/project-id Project ID
/region Region
/zone Availability zone
/uhost-id UHost ID
/name UHost name
/remark UHost remarks
/tag UHost business group
/image-id Image ID
/os-name Image operating system name
/machine-type Instance type
/cpu Number of CPU cores
/memory Memory capacity (MB)
/gpu Number of GPUs
/isolation-group Hardware isolation group ID
/net-capability Network enhancement
/hotplug Hot scale-up feature
/disks/N/ (Array) Disk
/disks/N/disk-id Disk ID
/disks/N/name Disk name
/disks/N/is-boot Whether it is the system disk
/disks/N/disk-type Disk type
/disks/N/size Disk capacity (GB)
/disks/N/drive Drive letter
/disks/N/encrypted Whether the disk is encrypted
/disks/N/backup-type Backup type
/network-interfaces/N/ (Array) Virtual network interface controller
/network-interfaces/N/vpc-id VPC ID
/network-interfaces/N/subnet-id Subnet ID
/network-interfaces/N/mac MAC address
/network-interfaces/N/ips/N/ (Array) IP address
/network-interfaces/N/ips/N/ip-id EIP ID (valid only for EIP)
/network-interfaces/N/ips/N/ip-address IP address
/network-interfaces/N/ips/N/type IP type
/network-interfaces/N/ips/N/width Bandwidth size (MB)

4.4.1.3 Check metadata

You can obtain the corresponding item information under the relevant directory level of the metadata server through the following command:

[root@192-168-1-1]# curl http://100.80.80.80/meta-data/latest/uhost/uhost-id

 

uhost-vjfsj2db

 

You can get the corresponding directory level of the metadata server through the following command:

[root@192-168-1-1]# curl http://100.80.80.80/meta-data/latest/uhost/disks/0/

 

/backup-type

/encrypted

/disk-id

/disk-type

/drive

/is-boot

/name

/size
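These queries can be wrapped in a small helper (a sketch; the curl call is commented out because the metadata server is only reachable from inside a UHost):

```shell
# Base path from section 4.4.1.2
MD=http://100.80.80.80/meta-data/latest/uhost

# Build the full URL for one metadata item
meta_url() { printf '%s/%s' "$MD" "$1"; }

# On a UHost, an item would then be read with:
#   curl -s "$(meta_url disks/0/size)"
meta_url disks/0/size
```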

4.4.1.4 Combined with Cloud-Init

The following example is a section of user-defined data (UserData). The purpose is to automatically report the host ID information to the server (1.2.3.4) after the host is created and available:

#!/bin/sh

md=http://100.80.80.80/meta-data/v1

myserver=http://1.2.3.4/

ID=$(curl -s $md/instance-id)

curl -s $myserver/?id=$ID

4.4.2 UserData

UserData is a configuration script that the system runs automatically when the host starts for the first time (or, for some script types, on every start). The script can be passed to the metadata server via the console, API, etc., and is retrieved by the cloud-init program in the host.

To determine whether a host supports UserData, confirm that cloud-init is installed inside the image. For an official image provided by SCloud, or a customized image based on an SCloud image, you can check whether the CloudInit item is included in the Feature array of the image; otherwise, check whether the program is installed in the system. When these conditions are met, the host creation page will display the "UserData" option.

4.4.2.1 Cloud-init

Cloud-init is open source software launched by Canonical, the company behind the Linux distribution Ubuntu. It can be installed on mainstream Linux distributions (Ubuntu, CentOS, Debian, etc.) and is mainly used on cloud computing platforms to help users initialize the cloud hosts they create.

UserData is a mechanism provided by Cloud-Init by default, which is universal for multiple clouds.

4.4.2.2 Pass in UserData when creating a UHost

Through the console/API, you can pass in custom data when creating a host. Supported script types include: User-Data, Cloud Config, Include, Gzip compression script, Upstart Job, etc.

Note: The content of the script cannot exceed 16 KB.

 

User-data script

The first line is fixed as #!, such as #!/bin/bash, or #!/bin/python, etc.

It is executed only once when the instance is launched for the first time.

Example 1: Output Hello World after the host is created

#!/bin/sh

echo "Hello World!"

 

After the creation is complete, you will be able to see the words “Hello World!” at the end of the /var/log/cloud-init-output.log log file.

Example 2: The host starts to enable the Httpd service

#!/bin/bash

service httpd start

chkconfig httpd on

 

Cloud Config script

The first line is fixed as #cloud-config

It means that what you provide is a set of configuration data in the YAML format natively defined by Cloud-Init, which covers almost all abstract descriptions related to operating system configuration.

Example 1: Modify Hostname

#cloud-config

hostname: uhost1

 

Example 2: Modify the mount point of the data disk to /opt/data

#cloud-config

mounts:

- [ /dev/vdb, /opt/data ]

 

Example 3: Automatically execute a yum update or apt-get upgrade after the host is created

#cloud-config

package_upgrade: true

 

Example 4: Configure the key when creating the host

#cloud-config

ssh_authorized_keys:

- ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAGEA3FSyQwBI6Z+nCSjUUk8EEAnnkhXlukKoUPND/RRClWz2s5TCzIkd3Ou5+Cyz71X0XmazM3l5WgeErvtIwQMyT1KjNoMhoJMrJnWqQPOt5Q8zWd9qG7PBl9+eiH5qV7NZ mykey@host

 

Other script types

UHost also supports importing script types such as Include scripts, Gzip compression scripts, and Upstart Job.

For details, please refer to https://cloudinit.readthedocs.io/en/latest/topics/format.html

4.4.2.3 Get UserData in the UHost

UserData can be obtained inside the host with the following command:

curl http://100.80.80.80/user-data

 

4.5 KeyPair

KeyPair is a safe and convenient login method commonly used for batch management of enterprise servers. The key pair generates a pair of keys (public key and private key) through an encryption algorithm, and the RSA 4096-bit encryption method is used by default.

4.5.1 Functional advantage

Compared with password login, key login has the following advantages:

  • Security: SSH key authentication is more secure and reliable than passwords
  • Convenience: use the private key to log in to the target instance through an SSH client without entering a password; it is also easy to log in to batches of Linux instances remotely for management

4.5.2 Use restrictions

The use of SSH KeyPair has the following restrictions:

  • Only Linux instances that support cloud-init are supported
  • Only 4096-bit RSA KeyPairs can be created
  • When creating an instance on the console and choosing key login, a Linux instance can only be bound to one KeyPair
  • If a KeyPair was bound when the instance was created, reinstalling the system and binding a new KeyPair will replace the original KeyPair
  • If you need to use multiple KeyPairs to log in to the instance, you can manually edit the ~/.ssh/authorized_keys file inside the instance to add more public keys

4.5.3 Generation method

  • Generated by SCloud through the console interface, using RSA 4096-bit encryption by default.

 

Note: If your KeyPair is generated by the KeyPair creation function of the console, when generating the KeyPair for the first time, please download and save the private key properly. When the KeyPair is bound to an instance, you will not be able to log in to the instance without the private key.

  • Generated by the user with an SSH key generator and then imported. Imported KeyPairs only support the ssh-rsa type.
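For the import path, a key pair can be generated locally with ssh-keygen (a sketch; the file name and comment are illustrative). Upload the contents of the .pub file and keep the private key safe:

```shell
keydir=$(mktemp -d)
# Generate a 4096-bit RSA key pair with no passphrase (-N '')
ssh-keygen -q -t rsa -b 4096 -N '' -C 'mykey@host' -f "$keydir/scloud-test"
# Restrict the private key's permissions, as required by SSH clients
chmod 400 "$keydir/scloud-test"
ls "$keydir"
```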

4.5.4 Instructions

Create/import KeyPair

 

Note that: After the SSH KeyPair is successfully created, SCloud will save the public key part of the SSH KeyPair. In the Linux instance, the content of the public key is placed in the file ~/.ssh/authorized_keys. You need to download and keep the private key in a safe place. The private key uses unencrypted PEM (Privacy-Enhanced Mail) encoded PKCS#8 format.

  • When creating a host or reinstalling the system, choose key login and select the key just created.

 

  • Open the SSH client
  • Find your private key file; for example, the private key file is scloud-test.pem
  • If necessary, run this command to ensure that your private key file is not readable by others. For example:

chmod 400 scloud-test.pem

  • Connect to your instance. For example:

ssh -i ~/Desktop/scloud-test.pem  root@113.31.112.80

 

4.6 Hardware isolation group

The hardware isolation group is a logical grouping of cloud hosts that ensures that each cloud host in the group lands on a different physical machine. Each isolation group can contain up to 7 cloud hosts per availability zone.

4.6.1 Create host specific isolation group

In the process of creating a host, you can optionally join a hardware isolation group.

 

Conditions to join the hardware isolation group: The number of hosts in the isolation group in the current available zone must be less than 7.

If there is no isolation group yet, click "Add isolation group", complete the creation on the isolation group page, and then refresh this pop-up window.

You can also create a host with the uhost create command (SCloud CLI); use the --isolation-group parameter to specify the isolation group, e.g.:

scloud uhost create --isolation-group ig-rhcq22xt/ig --memory-gb 1 --cpu 1 --password test1234 --zone cn-bj2-05 --image-id "uimage-35pn5v/CentOS 7.6 64位"

Output:

uhost[uhost-bh0fvsnh] is initializing…done

4.6.2 Check isolation group

Through the hardware isolation group Tab, you can view all isolation groups.

In the host list, you can also expand the “Hardware Isolation Group” column in the “Custom List”. Support filtering hosts by isolation group.

You can also use the isolation-group list command (SCloud CLI) to view isolation groups, e.g.:

scloud uhost isolation-group list

Output:

ResourceID   Name  Remark  UHostCount

ig-rhcq22xt  ig    ig      cn-bj2-05:1

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

5 Performance data

5.1 Local disk I/O performance test

Note that: This document is only an IO performance benchmark. Since the local disk is a shared disk, its IO will fluctuate to some extent, and performance may fall below the levels tested here. If you want more stable IO, it is recommended that you choose a cloud disk.

5.1.1 Hard disk performance indicators

Sequential read and write (throughput, common unit: MB/s): files are stored at contiguous locations on the disk.

Applicable scenarios: large file copies (such as video and music). Even a high value here has little bearing on database performance.

4K random read and write (IOPS, common unit: operations per second): data is read and written at random locations on the disk, 4 KB at a time.

Applicable scenarios: operating system operation, software operation, databases.

The following test data were obtained from I/O benchmarks on the disks of the "UHost Standard Model" and the "UHost High-Performance SSD Model", using the general-purpose I/O test tool fio with the specified block sizes "4K, 512K" and a queue depth of "128".

5.1.2 Test Results

Test 1. Sequential read/write 512K

 

(Comparison between local standard disk and local SSD disk)

 

Test 2. Random read/write 4K

 

(Comparison between local ordinary disk and local SSD disk)

 

Test details

Tool: fio

Official website:

http://freecode.com/projects/fio

http://brick.kernel.dk/snaps/

 

Note that: For performance testing, it is recommended to test directly by writing to a bare disk, and more realistic data will be obtained. However, testing the bare disk directly will destroy the file system structure and cause data loss. Please make sure that the data in the disk has been backed up before testing.

 

Block size: 4kb / 512kb

Queue depth: 128

fio.conf configuration:

[global]

ioengine=libaio

iodepth=128

time_based

direct=1

thread=1

group_reporting

randrepeat=0

norandommap

numjobs=32

timeout=6000

runtime=120

 

[randread-4k]

rw=randread

bs=4k

; /dev/sdb below is the device name of the target test disk
filename=/dev/sdb

rwmixread=100

stonewall

 

[randwrite-4k]

rw=randwrite

bs=4k

filename=/dev/sdb

stonewall

 

[read-512k]

rw=read

bs=512k

filename=/dev/sdb

stonewall

 

[write-512k]

rw=write

bs=512k

filename=/dev/sdb

stonewall

Usage: $ fio fio.conf

5.2 Network enhancement performance data

Note that: This document is only a performance benchmark; it measures the maximum performance achievable with the UDP protocol using small and large packets. The packet volume in a specific business scenario is affected by the upper-layer application, the communication protocol, and other factors; please refer to your actual business stress test results.

5.2.1 Network performance test indicators

Packet rate (common unit: pps): the number of network packets that can be processed per second. This is the core indicator of network enhancement.

Bandwidth (common unit: Mb/s): network bandwidth is the maximum amount of data that can be transferred in a fixed period of time (1 second).

TCP_RR (common unit: times/second): Test the response efficiency of multiple TCP requests and responses in the same TCP connection. This application scenario often appears in database applications.

TCP_CRR (common unit: times/second): Test the response efficiency of request and response in multiple TCP connections. A new TCP connection is established for each TCP request and response. The most typical application is an HTTP web page access request, and each request response is carried out in a separate TCP connection.

5.2.2 Test Data

Test environment

*Test machine:

Image: CentOS 7.2 64 bit

Specifications: 1) 8-core CPU 16G memory 2) 16-core CPU 32G memory

 

*Auxiliary machine:

Image: CentOS 7.2 64 bit

Specifications: 8-core CPU, 16G memory × 8 units

 

Test 1. Package test

Use UDP_STREAM, small packet (1byte) test.

 

NetworkEnhancement can significantly increase the packet rate. In the UDP test, peak packet processing capacity reached more than 1 million pps, 3 times that of the case without network enhancement.

SCloud's packet processing capacity is independent of the instance specification.

 

Test 2. Bandwidth test

Use UDP_STREAM, large packet (1400bytes) test.

 

NetworkEnhancement can significantly increase bandwidth, but in the outbound test the maximum internal network bandwidth (10G) is reached, so it cannot be improved further.

 

Test 3. TCP_RR and TCP_CRR

 

Since the request/response path is unchanged, a single connection can only use one core and one queue, and there is no difference whether NetworkEnhancement is enabled or not. Therefore, enabling NetworkEnhancement cannot significantly improve TCP_RR or TCP_CRR.

 

 

 

 

 

 

6 Price of UHost

6.1 Billing method

The cost of UHost is the sum of the individual prices of CPU, memory, system disk, data disk, public network bandwidth, GPU, NetworkEnhancement, and Data Ark.

Public network bandwidth is billed as a separate order.

The cloud host adopts a prepaid billing model, which supports payment by hour, month and year.

 

Hourly unit price = monthly unit price × 1.5/(24×30)

Annual unit price = monthly unit price × 10
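The two formulas can be checked with a quick calculation (a sketch; the monthly price of 72000 is an illustrative value in the smallest currency unit):

```shell
monthly=72000
# Hourly unit price = monthly unit price x 1.5 / (24 x 30)
hourly=$(awk -v m="$monthly" 'BEGIN { printf "%.2f", m * 1.5 / (24 * 30) }')
# Annual unit price = monthly unit price x 10 (i.e. two months free)
annual=$((monthly * 10))
echo "hourly=$hourly annual=$annual"
```

With the illustrative value above, this prints hourly=150.00 annual=720000.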

 

 

 

 

 

 

 

 

 

 

 

 

 

 

7 FAQ

What changed between instance concept 1.0 and 2.0?

In order to lower the barrier for new users when choosing a model, in May 2019, along with the launch of the new host creation page, SCloud defined host model concept 2.0, which merged the original models into three.

 

Correspondence between the new version and the old version:

Old version New version
Standard N1 Standard N + IvyBridge/Haswell CPU
Standard N2 Standard N + Broadwell CPU
Standard N3 Standard N + Skylake CPU
High IO I1 Standard N + IvyBridge/Haswell CPU + SSD local disk
High IO I2 Standard N + Broadwell CPU + SSD local disk
High frequency C1 High frequency C
GPU K80 GPU G + K80 GPU
GPU P40 GPU G + P40 GPU
GPU V100 GPU G + V100 GPU

Referring to this table: if your previous model was Standard N2 and you want to create the same or a better model, select Standard N with CPU platform ≥ Broadwell on the console. The backend will allocate Standard N2 or the newer Standard N + Skylake CPU (the original Standard N3) according to actual resource availability.

 

How do I choose the storage type?

SCloud UHost can choose three types of disks: cloud disk (UDisk), local standard disk, and local SSD disk.

Among them, UDisk keeps three redundant copies of data, is highly reliable, and can be restored within seconds after a crash; the local disk series has stronger IO performance but a relatively higher price.

Some of the above disk types are unavailable in certain regions, or restricted due to inventory. If you cannot select the disk you need, please consult your account manager.

 

Is the firewall set in the control panel mapped to the iptables in the system?

This is a separate firewall, different from iptables inside the user's system. We recommend using the platform firewall, which achieves the same effect as iptables and is easier to operate.

 

What should I do if the software I want to install cannot be found in the software source provided by SCloud?

If your cloud host has a public network IP, you can install a third-party source; if many users need it, we will add it to the default software source cache. If there is no third-party source, you can download the source code and compile it yourself. If there is no public network IP, please give us feedback and we will continue to update the contents of the software source.

 

How to achieve the best performance when using MySQL on SSD cloud host?

The following optimizations are required to use MySQL database on SSD cloud host: innodb_io_capacity is set to 2000 (if MyISAM is used, this configuration is not required)

 

How to migrate files to the cloud host?

Package the files on the original server, then transfer them to the cloud host via scp or FTP.

 

Is it still charged when the cloud host is shut down?

The cloud host is still charged as usual while it is shut down. Please make sure to delete unused hosts.

 

How to expand SWAP partition?

swapoff -a

rm /swapfile

dd if=/dev/zero of=/swapfile bs=1M count=1024

chmod 600 /swapfile

mkswap /swapfile

swapon /swapfile
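The commands above only enable the new swap file for the current boot. To keep it across reboots, a line like the following can be added to /etc/fstab (assuming the /swapfile path used above):

```
/swapfile swap swap defaults 0 0
```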

 

There is a test page on the cloud host, but why is the speed so low when using 17ce to get this page?

17ce's Get is a concurrent test. If the bandwidth is only 2M, each of 17ce's roughly 50 nodes only gets a few dozen KB/s, so it appears slow.

 

Yum update is very slow in CentOS. How to deal with it?

Change 6.3 (or other version number) in /etc/yum.repos.d/CentOS-Base.repo to $releasever, and then yum update.
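The edit can be scripted with sed; the following sketch performs the substitution on a temporary copy (the baseurl line is illustrative):

```shell
repo=$(mktemp)
# A hard-coded version in a baseurl line, as it might appear in CentOS-Base.repo
printf 'baseurl=http://mirror.centos.org/centos/6.3/os/$basearch/\n' > "$repo"
# Replace the fixed version with the $releasever yum variable
sed -i 's/6\.3/$releasever/g' "$repo"
cat "$repo"
```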

 

Gem is often very slow and can’t be used. How to change to the source of taobao?

gem sources --remove http://rubygems.org/
gem sources -a http://ruby.taobao.org/
gem sources -l  # Please make sure only ruby.taobao.org is listed
gem install foo

 

I just did a large concurrency test, and then I can’t connect to my UHost. Why?

The firewall in the computer room may determine that this is an attack, thereby prohibiting access to the local IP. If this happens, please contact technical support.

 

Uploading files to the cloud host is slow?

We do not impose speed restrictions on uploading files to the cloud host. If it is slow, check whether your local ISP limits upstream bandwidth.

 

How to use the two IPs in the dual-line computer room for intelligent analysis?

We recommend using DNSPod. After logging in to the background, you need to input the following content:

1) Host record

The host record is the domain name prefix. Common usages are:

www: the resolved domain name is www.scloud.sg

@: directly resolve the main domain name scloud.sg

*: pan resolution, matching all other domain names *.scloud.sg

2) Record type

Here we make A record, just choose A.

 

Can the cloud host update the kernel?

Yes. Just like on a physical server, use yum update or apt-get upgrade linux-image.

 

How do I make a custom kernel rpm package on CentOS?

Confirm the current kernel version number, download the corresponding SRPM package:

# Confirm the current version number

uname -r

 

# Go to vault.centos.org to find the SRPM and download it

# Precautions

# (1) Confirm whether the plus version of the kernel is used, if yes, the SRPM is in /centosplus/Source/SPackages/

# (2) The non-plus version is in the following two directories: /updates/Source/SPackages, /os/Source/SPackages

wget http://vault.centos.org/6.4/updates/Source/SPackages/kernel-2.6.32-358.14.1.el6.src.rpm

Install SRPM and related RPM tools:

# install SRPM

rpm -ivh kernel-2.6.32-358.14.1.el6.src.rpm

 

# install RPM related tools

yum install rpm-build redhat-rpm-config patchutils xmlto asciidoc elfutils-libelf-devel zlib-devel binutils-devel newt-devel python-devel perl-ExtUtils-Embed hmaccalc rng-tools kernel-firmware

 

# start rngd service and provide enough entropy

cat /dev/null >/etc/sysconfig/rngd
echo 'EXTRAOPTIONS="--rng-device /dev/urandom"' >/etc/sysconfig/rngd
service rngd start

Generate the kernel source code and use diff to generate the patch file:

# Generate kernel source code

cd ~/rpmbuild/SPECS

rpmbuild -bp kernel.spec

 

# Modify and generate diff file

cd ~/rpmbuild/BUILD

cp -r kernel-2.6.32-358.14.1.el6 kernel-2.6.32-358.14.1.el6.mine

diff -urpN kernel-2.6.32-358.14.1.el6 kernel-2.6.32-358.14.1.el6.mine > this-patch-to-fix-that-bug.patch

 

# Copy patch to SOURCES

cp this-patch-to-fix-that-bug.patch ~/rpmbuild/SOURCES

 

# Clean up

rm -rf ~/rpmbuild/BUILD/kernel-2.6.32-358.14.1.el6*

Modify the SPEC file to generate a new kernel RPM package:

# Find the following lines, then add a line after

# Source84: config-s390x-generic-rhel

# Source85: config-powerpc64-debug-rhel

# Source86: config-s390x-debug-rhel

 

# New line

Source87: this-patch-to-fix-that-bug.patch

Patch001: this-patch-to-fix-that-bug.patch

 

# (Optional) Modify the changelog, find the %changelog line, and then insert the line:

* Tue Aug 03 2013 Your Name <yourname@company.com> [2.6.32-358.14.1.el6.centos]

-[XXX] path to fix that bug

Pack and generate a new kernel RPM package:

# Execute SPEC, the generation process is very long, please be patient.

# Before execution, please review the above steps carefully to avoid recurring errors

rpmbuild -ba kernel.spec

 

# Several RPM packages will be generated, the most important of which are as follows, use rpm -ivh to install

# kernel-2.6.32-358.14.1.el6.x86_64.rpm

# kernel-devel-2.6.32-358.14.1.el6.x86_64.rpm

# kernel-headers-2.6.32-358.14.1.el6.x86_64.rpm

 

The CentOS system installation package has a dependency on kernel-devel, how should I solve it?

In order to ensure kernel stability, we have added exclude=kernel* centos-release* to the yum configuration file /etc/yum.conf by default, to prevent kernel-related packages from being updated unintentionally when installing software. If you really need to install this package, just comment out this line in /etc/yum.conf.

 

What is the /usr/bin/uga program? What is the uga process in the host?

UGA is a built-in agent program provided by SCloud. It is only used to help users perform automated operations in conjunction with the host console function to improve user experience. It is not possible to check, add, delete, or modify user files through UGA.

 

How to activate Windows Server on UHost?

The Windows UHost created on SCloud is activated automatically by default; no manual action is required. Refer to the KMS activation instructions.

In case of special reasons, you need to manually activate it, the steps are as follows:

1) KMS address

First define the KMS address ($kms_name) of each data center. $kms_name will be used in step 4. Please use the corresponding address instead. If you need other zone addresses, please contact technical support.

2) Enter the command directory

Open cmd as administrator -> cd C:\Windows\system32

 

3) Clear the key and restart

Execute cscript.exe slmgr.vbs /rearm to clear the existing key, and restart the operating system after completion.

 

4) Configure KMS

Open cmd as administrator -> cd C:\Windows\system32. Execute cscript.exe slmgr.vbs /skms $kms_name (see step 1).

 

5) Activate Windows

Execute cscript.exe slmgr.vbs /ato to activate Windows.

 

 

 

 

 

8 Linux service configuration guide

8.1 Configure Keepalived VIP under CentOS 6.X

8.1.1 Apply for internal network VIP

1) After logging in to the console and selecting the data center, click the "Elastic IP" function button under the UHost main tab, then click "Apply for internal network IP" under the "Apply for IP" button.

2) Select the number of private IPs to be applied for, click “OK” to complete the application process.

3) After the page returns, the information of the applied private IP will be displayed.

8.1.2 Install keepalived

Directly install with yum under CentOS

# yum install -y keepalived

8.1.3 Configure VIP

First explain several variables in the following steps (root privileges are used by default):

$node1: Internal IP of server A

$node2: Internal IP of server B

$vip: Internal VIP

 

Server A (node1)

1) Edit /etc/keepalived/keepalived.conf

global_defs {

router_id LVS_DEVEL

}

vrrp_instance VI_1 {

state MASTER

interface eth0

# unicast_peer: the format must match exactly (written on three lines as below), otherwise the VRRP instance will not come up.

unicast_peer {

$node2

}

virtual_router_id 51

priority 100

advert_int 1

authentication {

auth_type PASS

auth_pass 1111

}

virtual_ipaddress {

$vip dev eth0

}

}

2) Start keepalived

# service keepalived start

 

Server B (node2)

1) Edit /etc/keepalived/keepalived.conf

global_defs {

router_id LVS_DEVEL

}

vrrp_instance VI_1 {

state BACKUP

interface eth0

# unicast_peer: the format must match exactly (written on three lines as below), otherwise the VRRP instance will not come up.

unicast_peer {

$node1

}

virtual_router_id 51

priority 90

advert_int 1

authentication {

auth_type PASS

auth_pass 1111

}

virtual_ipaddress {

$vip dev eth0

}

}

2) Start keepalived

# service keepalived start

8.1.4 Test VIP

1) Check the system log to verify whether keepalived started successfully

# tail /var/log/messages

2) Stop keepalived on node1, and then observe the IP information on node1 and node2 respectively

# service keepalived stop

# ip a
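The observation in step 2 can be scripted (a sketch; 10.9.0.100 is an illustrative value for $vip):

```shell
vip=10.9.0.100
# Report whether this node currently holds the VIP
vip_state() {
  if ip -4 addr show 2>/dev/null | grep -qw "$vip"; then
    echo master
  else
    echo backup
  fi
}
vip_state
```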

 

8.2 CentOS KPTI shutdown method

In order to solve the security risks caused by the recent MeltDown vulnerability, SCloud’s official CentOS 6.x and 7.x images have been updated. The new version of the image has KPTI (Kernel Page Table Isolation) enabled by default to fix the vulnerability.

However, according to tests, enabling KPTI may reduce virtual machine performance by 5%-30%, with less impact on compute-bound workloads and greater impact on IO/memory-intensive workloads.

You can decide whether to disable KPTI according to your actual situation, restoring performance at the cost of accepting the security risk.

 

Centos 6.x

1) Close KPTI

vim /boot/grub/grub.conf

Append to the kernel line

nopti

Restart the virtual machine to take effect

2) Verify that it is closed

Input the command:

dmesg | grep isolation

If the following information is displayed, it means that KPTI is still open.

x86/pti: Kernel page table isolation enabled

If the shutdown operation is successful, the above line of information will not be displayed.

 

CentOS 7.x

1) Close KPTI

Input the command:

vim /boot/grub2/grub.cfg

Append the following to the end of the /boot/vmlinuz-* line:

nopti

Restart the virtual machine to take effect

2) Verify that it is closed

Input the command:

dmesg | grep pti

If the following information is displayed, KPTI is still enabled:

x86/pti: Unmapping Kernel while in userspace

If KPTI was disabled successfully, the above line will not be displayed.
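The dmesg ring buffer can rotate on a long-running host, so the check above may come up empty even when KPTI is active. A sketch of an alternative check, assuming the kernel exposes the sysfs vulnerabilities interface (present on kernels carrying the Meltdown mitigation backports):

```shell
# Read the Meltdown mitigation state from sysfs instead of dmesg.
f=/sys/devices/system/cpu/vulnerabilities/meltdown
if [ -r "$f" ]; then
    status=$(cat "$f")   # typically "Mitigation: PTI" while KPTI is enabled
else
    status="sysfs vulnerabilities interface not available on this kernel"
fi
echo "$status"
```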

 

8.3 MariaDB software source configuration

Note that: the MariaDB software source of the SCloud platform currently supports only CentOS 6.x 64-bit and Red Hat 6.x 64-bit operating systems.

8.3.1 MariaDB 5.5

Create a new mariadb.repo file in /etc/yum.repos.d/ and add the following content:

[mariadb5]

name=mariadb5 Repository

baseurl=http://mariadb5.mirror.scloud.cn

gpgcheck=0

enabled=1

After creating the repo file, execute yum clean all && yum makecache to make the new repository take effect.

Note that: the repository domain name differs by data center; see the table below for details.

Data center | Repository URL
Asia Pacific | mariadb5.mirror.hk.scloud.cn or mariadb5.mirrors.hk.scloud.cn
North America | mariadb5.mirror.la.scloud.cn or mariadb5.mirrors.la.scloud.cn
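The repo file can also be created non-interactively, which is convenient for automation; a sketch, demonstrated against a temp directory so it is safe to try (the MariaDB-server/MariaDB-client package names are an assumption to confirm against the mirror with yum search; the same pattern applies to the 10.0 and 10.1 repos below):

```shell
# Write the repo file with a here-doc instead of an editor.
# On a real host set repo_dir=/etc/yum.repos.d instead of the temp dir.
repo_dir=${repo_dir:-$(mktemp -d)}
cat > "$repo_dir/mariadb.repo" <<'EOF'
[mariadb5]
name=mariadb5 Repository
baseurl=http://mariadb5.mirror.scloud.cn
gpgcheck=0
enabled=1
EOF
grep '^baseurl=' "$repo_dir/mariadb.repo"
# Then, on the real host:
#   yum clean all && yum makecache
#   yum install MariaDB-server MariaDB-client
```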

8.3.2 MariaDB 10.0

Create a new mariadb.repo file in /etc/yum.repos.d/ and add the following content:

[mariadb100]

name=mariadb100 Repository

baseurl=http://mariadb100.mirror.scloud.cn

gpgcheck=0

enabled=1

After creating the repo file, execute yum clean all && yum makecache to make the new repository take effect.

Note that: the repository domain name differs by data center; see the table below for details.

Data center | Repository URL
Asia Pacific | mariadb100.mirror.hk.scloud.cn or mariadb100.mirrors.hk.scloud.cn
North America | mariadb100.mirror.la.scloud.cn or mariadb100.mirrors.la.scloud.cn

8.3.3 MariaDB 10.1

Create a new mariadb.repo file in /etc/yum.repos.d/ and add the following content:

[mariadb101]

name=mariadb101 Repository

baseurl=http://mariadb101.mirror.scloud.cn

gpgcheck=0

enabled=1

After creating the repo file, execute yum clean all && yum makecache to make the new repository take effect.

Note that: the repository domain name differs by data center; see the table below for details.

Data center | Repository URL
Asia Pacific | mariadb101.mirror.hk.scloud.cn or mariadb101.mirrors.hk.scloud.cn
North America | mariadb101.mirror.la.scloud.cn or mariadb101.mirrors.la.scloud.cn

8.4 PostgreSQL software source configuration

Note that: the PostgreSQL software source of the SCloud platform currently supports only CentOS 5.x/6.x/7.x 64-bit and Red Hat 6.x/7.x 64-bit operating systems.

Create a new postgresql.repo file in /etc/yum.repos.d/ and add the following content:

PostgreSQL 9.3 version

[PostgreSQL93]

name=postgresql93 Repository

baseurl=http://centos.mirror.scloud.cn/postgresql/9.3/$releasever/$basearch

gpgcheck=0

enabled=1

 

PostgreSQL 9.4 version

[PostgreSQL94]

name=postgresql94 Repository

baseurl=http://centos.mirror.scloud.cn/postgresql/9.4/$releasever/$basearch

gpgcheck=0

enabled=1

After creating the repo file, execute yum clean all && yum makecache to make the new repository take effect.

Note that: the repository domain name differs by data center; see the table below for details.

Data center | Repository URL
Asia Pacific | centos.mirror.hk.scloud.cn or centos.mirrors.hk.scloud.cn
North America | centos.mirror.la.scloud.cn or centos.mirrors.la.scloud.cn
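The $releasever and $basearch tokens in the baseurl are standard yum variables substituted per host; a sketch that previews the URL the repo definition resolves to (the rpm query is an approximation of yum's own logic and falls back to 7 when the release package cannot be queried):

```shell
# $releasever comes from the package providing "redhat-release";
# $basearch is the machine architecture.
releasever=$(rpm -q --qf '%{VERSION}' --whatprovides redhat-release 2>/dev/null) || releasever=7
basearch=$(uname -m)
url="http://centos.mirror.scloud.cn/postgresql/9.3/${releasever}/${basearch}"
echo "$url"
```

On a CentOS 7 x86_64 host this resolves to .../postgresql/9.3/7/x86_64, which is the directory yum will actually fetch from.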

8.5 SCloud Docker Public Image Registry

UHub is a flexible public image registry service launched by SCloud.

UHub allows users to freely create and manage their own private image repositories.

The UHub registry uses a cross-region architecture: an image pushed in one region can be pulled from other regions.

 

8.6 SCloud PyPI private source configuration

PyPI is Python’s official third-party package repository.

The default official source is often slow and subject to concurrency limits, packet loss, and timeouts. The SCloud PyPI private source therefore provides purely internal access: on an SCloud UHost, you can obtain the required Python packages without an external IP address.

Global configuration

Configure the following in the ~/.pip/pip.conf file:

[global]

index-url = https://pypi.internal-mirrors.scloud.cn/simple

Install a Python package from the specified source

pip3 install flask -i https://pypi.internal-mirrors.scloud.cn/simple

 

8.7 Install Cloudera

Install Cloudera-Manager

Download from this link:

http://archive.cloudera.com/cm4/installer/latest/cloudera-manager-installer.bin

Make the installer executable:

chmod u+x cloudera-manager-installer.bin

Create the repo file in /etc/yum.repos.d/:

cat >cloudera-manager.repo <<EOF

[cloudera-manager]

name=Cloudera Manager

baseurl=http://cloudera-manager.mirror.scloud.cn

gpgcheck=0

enabled=1

EOF

Execute:

./cloudera-manager-installer.bin

 

Install Cloudera-CDH

1) “Choose the specific version of CDH you want to install on the host.”

2) Select “Custom Repository”

3) Enter http://cloudera-cdh4.mirror.scloud.cn/ (if you need to install CDH3, enter http://cloudera-cdh3.mirror.scloud.cn)

4) “Choose the specific version of Cloudera Manager you want to install on the host.”

5) Select “Custom Repository”

6) Enter “http://cloudera-manager.mirror.scloud.cn/”

Note that: the default hostname of our hosts has the form 10-4-4-1 (example). Before installing CDH, change the hostname to one that does not contain “-”, as some software is incompatible with it.
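A sketch of deriving a dash-free name from the current hostname (the commented commands show one illustrative way to apply it on CentOS 6; they require root):

```shell
# The default SCloud name (e.g. 10-4-4-1) contains "-", which some
# CDH components reject; strip the dashes to build a valid name.
newname=$( (hostname 2>/dev/null || uname -n) | tr -d '-')
echo "$newname"
# To apply it:
#   hostname "$newname"
#   sed -i "s/^HOSTNAME=.*/HOSTNAME=$newname/" /etc/sysconfig/network
```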

 

8.8 Install Perf

Ubuntu

Enter the following command to install:

sudo apt-get install linux-tools

Enter perf top; if it runs, the installation succeeded. If you are prompted to install a kernel-specific version of the tools, install that package to complete the installation.

 

8.9 Install Systemtap

Ubuntu

Add the Ubuntu ddebs source file and paste the following command on the command line:

codename=$(lsb_release -c | awk '{print $2}')

sudo tee /etc/apt/sources.list.d/ddebs.list << EOF

deb http://ddebs.ubuntu.com/ ${codename}      main restricted universe multiverse

deb http://ddebs.ubuntu.com/ ${codename}-security main restricted universe multiverse

deb http://ddebs.ubuntu.com/ ${codename}-updates  main restricted universe multiverse

deb http://ddebs.ubuntu.com/ ${codename}-proposed main restricted universe multiverse

EOF

On Ubuntu 11.10, the universe and multiverse components were removed from the ddebs repository, so use the following instead:

codename=$(lsb_release -c | awk '{print $2}')

sudo tee /etc/apt/sources.list.d/ddebs.list << EOF

deb http://ddebs.ubuntu.com/ ${codename}      main restricted

deb http://ddebs.ubuntu.com/ ${codename}-security main restricted

deb http://ddebs.ubuntu.com/ ${codename}-updates  main restricted

deb http://ddebs.ubuntu.com/ ${codename}-proposed main restricted

EOF

Ubuntu Key authentication:

sudo apt-key adv --keyserver keyserver.ubuntu.com --recv-keys ECDCAD72428D7C01

Update index:

sudo apt-get update -y

Install Systemtap:

sudo apt-get install -y systemtap gcc

Install dbgsym:

sudo apt-get install linux-image-$(uname -r)-dbgsym

Verify that Systemtap is installed successfully. If hello world is displayed correctly, the installation is successful:

stap -e 'probe kernel.function("sys_open") {log("hello world") exit()}'

 

CentOS

Install Systemtap:

yum install systemtap kernel-devel

Download debuginfo:

First check the system kernel version:

uname -rm

Go to http://debuginfo.centos.org/ to download the corresponding RPM package and install it:

rpm -Uhv kernel-debuginfo-*.rpm

Verify that Systemtap is installed successfully. If hello world is displayed correctly, the installation is successful:

stap -e 'probe kernel.function("sys_open") {log("hello world") exit()}'

 

8.10 Install and configure LNMP

Execute the following commands:

yum --enablerepo=remi install nginx php php-fpm mysql mysql-server unzip

 

service mysqld restart

 

mkdir /opt/op

 

mv lnmp_scloud.zip /opt/op

 

cd /opt/op

 

unzip lnmp_scloud.zip

 

cd lnmp_scloud

 

./lnmp.sh create myapp www.myapp.com /srv/http/myapp

 

service php-fpm restart

 

service nginx restart

 

Test

curl www.myapp.com/demo.php

app: myapp(domain: www.myapp.com) works!

Note that:

1) Download lnmp_scloud.zip in advance;

2) Replace www.myapp.com with your own domain name.

 

8.11 Update CentOS system

The method in this article takes the CentOS series as an example; the system can be updated to the latest version maintained in the SCloud software source.

Via the command:

cat /etc/centos-release

You can view your current system release version. If you need to upgrade, do the following:

Step 1 Edit yum.conf

In /etc/yum.conf, comment out the line: exclude=kernel* centos-release*
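The comment-out can be scripted; a sketch using sed, demonstrated on a throwaway sample file (run the same sed expression against /etc/yum.conf on the real host):

```shell
# Prefix the exclude line with "#" so yum no longer skips kernel
# and centos-release packages.
sample=$(mktemp)
printf '%s\n' '[main]' 'exclude=kernel* centos-release*' > "$sample"
sed -i 's/^exclude=kernel\* centos-release\*/#&/' "$sample"
cat "$sample"
```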

Step 2 Upgrade all packages

Input the command:

yum update

This command will upgrade the CentOS system and all related software packages to the latest version provided by the SCloud software source.

Wait for the upgrade to complete.

Step 3 Confirm that the upgrade is complete

After the upgrade is complete, enter the command again:

cat /etc/centos-release

The upgraded system version will be displayed.

 

8.12 Optimize DNS configuration method

Step 1 Configure redundant DNS Server address

This prevents domain names from becoming unresolvable after a single point of failure of the DNS server.

Take CentOS as an example:

Open the /etc/resolv.conf file on the host.

If only one IP is configured in the file, replace it with the two IPs for your data center from the table below:

Data center/Availability Zone | IP
Hong Kong Availability Zone A | 10.8.255.1, 10.8.255.2
California Availability Zone A | 10.11.255.1, 10.11.255.2
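A sketch of the resulting file for Hong Kong Availability Zone A, written to a temp file here so it is safe to try (substitute your zone's IPs and write to /etc/resolv.conf on the real host):

```shell
# Two nameserver lines give resolver redundancy.
conf=$(mktemp)
cat > "$conf" <<'EOF'
nameserver 10.8.255.1
nameserver 10.8.255.2
EOF
grep -c '^nameserver' "$conf"   # prints 2
```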

Step 2 Enable the NSCD service

Enable the NSCD service in Linux to cache DNS resolution results locally. Within the TTL, repeated lookups need not go back to the DNS server, which speeds up resolution and reduces load on the DNS server.

Take CentOS as an example:

1) Installation

yum install nscd

2) Edit the configuration file /etc/nscd.conf

The content is as follows:

#

# /etc/nscd.conf

#

# An example Name Service Cache config file.  This file is needed by nscd.

#

# Legal entries are:

#

#       logfile                 <file>

#       debug-level             <level>

#       threads                 <initial #threads to use>

#       max-threads             <maximum #threads to use>

#       server-user             <user to run server as instead of root>

#               server-user is ignored if nscd is started with -S parameters

#       stat-user               <user who is allowed to request statistics>

#       reload-count            unlimited|<number>

#       paranoia                <yes|no>

#       restart-interval        <time in seconds>

#

#       enable-cache            <service> <yes|no>

#       positive-time-to-live   <service> <time in seconds>

#       negative-time-to-live   <service> <time in seconds>

#       suggested-size          <service> <prime number>

#       check-files             <service> <yes|no>

#       persistent              <service> <yes|no>

#       shared                  <service> <yes|no>

#       max-db-size             <service> <number bytes>

#       auto-propagate          <service> <yes|no>

#

# Currently supported cache names (services): passwd, group, hosts, services

#

#   logfile                 /var/log/nscd.log

threads                 4

max-threads             32

server-user             nscd

stat-user               somebody

debug-level             5

reload-count            5

paranoia                no

restart-interval        3600

 

 

enable-cache            hosts           yes

enable-cache            passwd          no

enable-cache            group           no

enable-cache            services        no

positive-time-to-live   hosts           5

negative-time-to-live   hosts           20

suggested-size          hosts           211

check-files             hosts           yes

persistent              hosts           yes

shared                  hosts           yes

max-db-size             hosts           33554432

3) Start the service

service nscd start

4) Enable the service to start on boot

chkconfig nscd on

5) If you need to stop the service

service nscd stop

 


9 Windows service configuration guide

9.1 KMS activation method description

This article mainly explains how to automatically activate the Windows host in the SCloud environment.

 

Background

Users must activate the Windows system before they can use it normally: install application software, deploy the business environment, and so on.

Therefore, the system license fee is included whenever a user purchases a UHost running Windows. During host initialization, the SCloud platform activates the system automatically. The specific process is described below.

 

Activate by KMS

In Windows XP and Windows Server 2003, Microsoft used VOL (volume licensing for organizations) to activate the system. Since Windows Vista, it has used the MAK (Multiple Activation Key) and KMS (Key Management Service) activation methods.

The difference between them is that MAK is count-based: each time the MAK key is entered for activation after a system reinstall, it is counted against the key’s maximum number of activations, and once that maximum is reached the key can no longer be used to activate. KMS, by contrast, manages keys in a server-client model: the key is installed on the server, clients communicate with the server over the LAN, and the key management service manages all client hosts that communicate with it, as shown in the figure below:

 

The operating mechanism of KMS is as follows:

1) After installing the key, KMS Server connects to the Microsoft license server through the public network to activate its own KMS service;

2) When the Client is initializing, the KMS address accessed from the intranet is resolved to the KMS Server IP through the DNS Server;

3) DNS Server maintains regular intranet communication with the KMS Server cluster. If an abnormality of one KMS Server is detected, it will automatically resolve the domain name to another KMS Server;

4) The KMS Server communicates with the Client through the default port 1688 and automatically activates the client system. The client contacts the KMS Server again every 180 days to renew its activation.

 

Client activation

UHost activates the system automatically after initialization completes; the whole process requires no user awareness or intervention. After logging in to the host remotely, the user can view the activation information with the slmgr.vbs /dlv command.

 

Frequently Asked Questions and Answers

Q: Are all the Windows cloud hosts you provide genuine?

A: All Windows hosts provided by SCloud carry official licenses purchased from Microsoft. When you purchase our Windows version of the host, the system license fee is already included, so you do not need to purchase a separate system serial number. The host is activated automatically through the intranet KMS service when it is initialized, and re-activated automatically every 180 days. You can view the specific activation information inside the host with the cmd command slmgr.vbs /dlv.

 

Q: Can I install other paid software on these Windows cloud hosts?

A: You can install paid software in the SCloud UHost, but the copyright fee for the software needs to be paid by the user to the software provider. SCloud will not interfere with any operation of the user in the host. If you use pirated software, all legal responsibilities arising therefrom shall be borne by the customer.

 

Q: I created a Windows host in a computer room of SCloud and created an image. After submitting a work order, you helped me migrate the image to another computer room, but now when I use this image to generate a new UHost, the system prompts “The product is not activated”. How can I solve it?

A: Because your image was migrated across data centers, the KMS address inside the system has changed. The KMS information in the image still points at the previous data center, so you need to activate manually once. For specific operations, please refer to our FAQ document.
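A sketch of the manual re-activation from an administrator command prompt (the KMS server address below is a hypothetical placeholder; obtain the real address for your data center from SCloud support):

```
:: Point the client at the KMS server (hypothetical address; 1688 is the KMS default port)
slmgr.vbs /skms kms.service.scloud.cn:1688

:: Trigger activation immediately
slmgr.vbs /ato

:: Confirm the detailed activation status
slmgr.vbs /dlv
```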

 

Summary

Using KMS to manage system activation centrally, with server-client communication maintained over the intranet at all times, greatly facilitates automated deployment and rapid provisioning of hosts in a cloud environment. When resources are scaled up or down in bulk for customers, it eliminates repetitive tasks such as system activation and environment initialization.

 

Reference documents

Microsoft official website: https://support.microsoft.com/en-us/windows/product-activation-for-windows-online-support-telephone-numbers-35f6a805-1259-88b4-f5e9-b52cccef91a0

 

9.2 Optimize DNS configuration method

Step 1 Configure redundant DNS Server address

This prevents domain names from becoming unresolvable after a single point of failure of the DNS server.

In the system, open Control Panel -> Network and Sharing Center -> select the current network connection (such as Ethernet 2) -> Properties -> double-click Internet Protocol Version 4 (TCP/IPv4).

In Preferred DNS server and Alternate DNS server, enter the two IPs for your data center from the table below.

Data center/Availability Zone IP
Hong Kong Availability Zone A 10.8.255.1, 10.8.255.2
California Availability Zone A 10.11.255.1, 10.11.255.2

Step 2 Enable local DNS caching

No additional configuration is required on Windows; the local DNS cache (the DNS Client service) is enabled by default.

 
