
Documentation for Cloudforet - Easy guide for multi-cloud management

Documentation and detailed user guide for Cloudforet contributors.
cloudforet ecosystem

1 - Introduction

Introduction to the Cloudforet Project

1.1 - Overview

Introducing the Cloudforet Project

Main Features

The Need for Multi-Cloud Environments

Cloud computing has become a critical technology for businesses, offering flexibility, scalability, and cost efficiency. However, multi-cloud environments, which use the services of multiple cloud providers, can introduce complexity and management challenges.

Key reasons for adopting a multi-cloud strategy include:

  • Optimal Service Selection: Different cloud providers offer a variety of services, and a multi-cloud environment allows businesses to choose the best service for their specific needs.
  • Cost Optimization: By leveraging multiple providers, businesses can compare offerings and pricing and choose the most cost-effective option for each workload.
  • Avoiding Service Disruption: If one cloud provider experiences an outage, businesses can continue operating without interruption by using another provider's services.
  • Regulatory Compliance: A multi-cloud environment can facilitate regulatory compliance by storing data in multiple regions.

Cloudforet Multi-Cloud Management Platform Features

Cloudforet provides a range of features to effectively manage multi-cloud environments:

  • Centralized Management: It offers a single interface to manage multiple cloud providers, ensuring consistency and efficiency.
  • Automation: Automates tasks such as cloud resource provisioning, deployment, and management, enhancing operational efficiency.
  • Monitoring: Real-time monitoring of performance, usage, and security across the multi-cloud environment helps predict and resolve issues.
  • Cost Management: Analyzes and optimizes cloud usage to reduce costs.
  • Security: Strengthens security across the multi-cloud environment and detects and prevents threats.

Benefits of Cloudforet as a Multi-Cloud Management Platform

Cloudforet offers several benefits:

  • Improved Operational Efficiency: Centralized management and automation of cloud management tasks streamline operations.
  • Cost Reduction: Optimizing cloud usage and eliminating unnecessary resources lowers costs.
  • Enhanced Security: Robust security features fortify the multi-cloud environment against threats.
  • Regulatory Compliance: Facilitates meeting regulatory compliance requirements.
  • Increased Agility: Rapid development and deployment of new services are possible in a multi-cloud environment.

Cloudforet Universe

Cloudforet is expanding across all areas to build a Cloudforet universe that fulfills the requirements of multi-cloud operation and management, based on inventory data, automation, analysis, and more. SpaceONE Universe

1.2 - Integrations

Supported Technologies

Overview

Cloudforet supports Plugin Interfaces, which allow the Core Services to be extended. The supported plugins are listed below.

Managed Plugins (Compatible with Cloudforet Version 1.x)

Managed Plugins are pre-registered plugins that are compatible with Cloudforet Version 1.x.

| name | plugin_id | service_type | provider |
|---|---|---|---|
| API Direct | plugin-api-direct-mon-webhook | monitoring.Webhook | |
| AWS Cloud Service Collector | plugin-aws-cloud-service-inven-collector | inventory.Collector | aws |
| AWS CloudTrail Log DataSource | plugin-aws-cloudtrail-mon-datasource | monitoring.DataSource | aws |
| AWS CloudWatch Metric DataSource | plugin-aws-cloudwatch-mon-datasource | monitoring.DataSource | aws |
| AWS Cost Explorer Data Source | plugin-aws-cost-explorer-cost-datasource | cost_analysis.DataSource | aws |
| AWS EC2 Collector | plugin-aws-ec2-inven-collector | inventory.Collector | aws |
| AWS Personal Health Dashboard Collector | plugin-aws-phd-inven-collector | inventory.Collector | aws |
| AWS SNS | plugin-aws-sns-monitoring-webhook | monitoring.Webhook | aws |
| AWS Trusted Advisor Collector | plugin-aws-ta-inven-collector | inventory.Collector | aws |
| Azure Activity Log DataSource | plugin-azure-activity-log-mon-datasource | monitoring.DataSource | azure |
| Azure Cost Management Data Source | plugin-azure-cost-mgmt-cost-datasource | cost_analysis.DataSource | azure |
| Azure Collector | plugin-azure-inven-collector | inventory.Collector | azure |
| Azure Monitoring Metric DataSource | plugin-azure-monitor-mon-datasource | monitoring.DataSource | azure |
| Email Notification Protocol | plugin-email-noti-protocol | notification.Protocol | email |
| Google Cloud Collector | plugin-google-cloud-inven-collector | inventory.Collector | google_cloud |
| Google Cloud Log DataSource | plugin-google-cloud-log-mon-datasource | monitoring.DataSource | google_cloud |
| Google Cloud Monitoring | plugin-google-monitoring-mon-webhook | monitoring.Webhook | google_cloud |
| Google Cloud Monitoring Metric DataSource | plugin-google-stackdriver-mon-datasource | monitoring.DataSource | google_cloud |
| Grafana | plugin-grafana-mon-webhook | monitoring.Webhook | |
| Keycloak OIDC | plugin-keycloak-identity-auth | identity.Domain | |
| MS Teams Notification Protocol | plugin-ms-teams-noti-protocol | notification.Protocol | microsoft |
| Prometheus | plugin-prometheus-mon-webhook | monitoring.Webhook | |
| Slack Notification Protocol | plugin-slack-noti-protocol | notification.Protocol | slack |
| Telegram Notification Protocol | plugin-telegram-noti-protocol | notification.Protocol | telegram |

Additional Plugins

More plugins are available in the GitHub Plugin Project.

1.3 - Key Differentiators

Core technology of Cloudforet.

Open Source Project

To provide effective and flexible support across various cloud platforms, we pursue an open-source strategy built around a cloud developer community. Open Platform

Plugin Interfaces

A Protocol Buffers-based gRPC framework provides optimization in its own engine, enabling efficient processing of thousands of different cloud schemas on top of an MSA (Microservice Architecture). Plugin Architecture

Dynamic Rendering

Provides a user-customized view of selected items by creating a Custom Dashboard based on JSON metadata. Dynamic Rendering

Plugin Ecosystem

A plugin marketplace for various users, such as MSPs, third parties, and customers, providing the freedom to develop and install plugins according to their own needs. Plugin Ecosystem

1.4 - Release Notes

Cloudforet Release Notes

Development Version

Active Development Version: 2.x

Stable Version

| Date | Version | See Details |
|---|---|---|
| 2023-10-06 | 1.12.0 | Version 1.12.0-english |
| 2023-03-06 | 1.11.0 | Version 1.11.0-english |
| 2022-11-28 | 1.10.4 | Version 1.10.4-english |
| 2022-11-07 | 1.10.3 | Version 1.10.3-english |
| 2022-10-11 | 1.10.2 | Version 1.10.2-english |
| 2022-09-01 | 1.10.1 | Version 1.10.1-english |
| 2022-07-22 | 1.10.0 | Version 1.10.0-english |
| 2022-05-25 | 1.9.7 | Version 1.9.6-english |
| 2022-05-02 | 1.9.6 | Version 1.9.6-english |
| 2022-04-05 | 1.9.4 | Version 1.9.4-english |
| 2022-03-10 | 1.9.3 | Version 1.9.3-english |
| 2022-02-09 | 1.9.1 | Version 1.9.1-english |
| 2021-12-30 | 1.9.0 | Version 1.9.0-english |
| 2021-12-14 | 1.8.7 | Version 1.8.7-english |
| 2021-11-05 | 1.8.5 | Version 1.8.5-english |
| 2021-10-05 | 1.8.4 | Version 1.8.4-english |
| 2021-09-14 | 1.8.3 | Version 1.8.3-english |
| 2021-08-31 | 1.8.2 | Version 1.8.2-english |
| 2021-08-17 | 1.8.1 | Version 1.8.1-english |
| 2021-07-21 | 1.7.4 | Version 1.7.4-english |
| 2021-06-29 | 1.7.3 | Version 1.7.3-english |
| 2021-05-27 | 1.7.2 | Version 1.7.2-english |
| 2021-04-21 | 1.6.7 | Version 1.6.7-english |
| 2021-03-09 | 1.6.4 | Version 1.6.4-english |
| 2021-02-17 | 1.6.2 | Version 1.6.2-english |
| 2021-01-25 | 1.6.1 | Version 1.6.1-english |
| 2020-12-29 | 1.5.3 | Version 1.5.3-english |
| 2020-11-23 | 1.5.1 | Version 1.5.1-english |
| 2020-09-28 | 1.3.2 | Version 1.3.2-english |

2 - Concepts

About Cloudforet Project

2.1 - Architecture

Overall Architecture

Micro Service Architecture

Cloudforet adopts a microservice architecture to provide a scalable and flexible platform. The microservice architecture is a design pattern that structures an application as a collection of loosely coupled services. Each service is self-contained and implements a single business capability. The services communicate with each other through well-defined APIs. This architecture allows each service to be developed, deployed, and scaled independently.

Cloudforet Architecture

The frontend is a service provided for web users, featuring components such as console and console-api that communicate directly with the web browser. The core logic is structured as independent microservices and operates based on gRPC to ensure high-performance and reliable communication.

Each core logic component can be extended by plugin services. Every plugin is developed and deployed independently, and plugins can be added, removed, or upgraded without affecting the core logic.

API-Driven design

API-Driven design in microservice architecture is a pattern where APIs (Application Programming Interfaces) are the primary way that services interact and communicate with each other. This approach emphasizes the design of robust, well-defined, and consistent APIs that serve as the contracts between microservices. Here’s a detailed explanation of the API-Driven design pattern:

gRPC as the Communication Protocol

gRPC is a high-performance, open-source, universal RPC (Remote Procedure Call) framework that is widely used in microservice architectures. It uses HTTP/2 as the transport protocol and Protocol Buffers (protobuf) as the interface definition language. gRPC provides features such as bidirectional streaming, flow control, and authentication, making it an ideal choice for building efficient and reliable microservices.

Loose Coupling

API-Driven design promotes loose coupling between microservices by defining clear and well-documented APIs. Each microservice exposes a set of APIs that define how other services can interact with it. This allows services to evolve independently without affecting each other, making it easier to develop, deploy, and maintain microservices.

Version control

Cloudforet APIs support two types of versioning: a core version and a plugin version. The core version is used for communication between the microservices and the frontend, while the plugin version is used for the internal communication between a single microservice and the plugins that implement its API.

API Documentation https://cloudforet.io/api-doc/

Protobuf API Specification https://github.com/cloudforet-io/api

Service-Resource-Verb Pattern

API-Driven design can be effectively explained using the concepts of service, resource, and verb. Here’s how these concepts apply to microservices:

Service, Resource, Verb

Service

A service in microservice architecture represents a specific business functionality. Each service is a standalone unit that encapsulates a distinct functionality, making it independently deployable, scalable, and maintainable. Services communicate with each other over a network, using lightweight protocols such as gRPC.

  • Example: in Cloudforet, individual services include identity, repository, and inventory.
    • identity service: manages user authentication and authorization.
    • repository service: manages the metadata for plugins and their versions.
    • inventory service: manages the resources and their states.

Resource

A resource represents the entities or objects that the services manage. Resources are typically data entities that are created, read, updated, or deleted (CRUD operations) by the services.

  • Example: in the identity Service, resources include Domain, User, and Workspace.
    • Domain: represents a separate organization or customer.
    • User: represents a user account.
    • Workspace: represents a logically isolated group that contains resources.

Verb

A verb represents the actions or operations that can be performed on resources. These are typically the gRPC methods (get, create, delete, update, list, etc.) in a service. Verbs define what kind of interaction is taking place with a resource.

  • Example: in the User resource, verbs include create, get, update, delete, and list.
    • create: creates a new user.
    • get: retrieves the user information.
    • update: updates the user information.
    • delete: deletes the user.
    • list: lists all users. (A sketch of how these verbs map to spacectl calls follows below.)
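
The same service.Resource.verb pattern is what the spacectl CLI exposes. The commands below are a rough sketch, assuming spacectl is already configured with an endpoint and API key (see the setup guide); user.yaml is a hypothetical parameter file.

# List all users (identity.User.list)
spacectl exec list identity.User

# Create a user from a hypothetical parameter file (identity.User.create)
spacectl exec create identity.User -f user.yaml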

2.2 - Identity

Overall explanation of identity service

2.2.1 - Provider & Service Accounts

A provider is the overarching entity that offers resources, within which multiple service accounts exist. These service accounts are used to securely and efficiently access the resources provided by the provider.

Concept of Provider and Service Accounts

concept

User Experience: Console

console

Provider

In the context of Cloudforet, a provider is a top-level entity that groups a range of resources. Providers can include cloud providers like Amazon Web Services (AWS), Google Cloud Platform (GCP), and Microsoft Azure, as well as any other entity that groups resources together, such as software_licence.

Service Account

A service account functions as an identifier for a group of resources within a provider. This means that the service account is used as the primary key for distinguishing a specific set of resources.
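
For example, providers and service accounts can be browsed through the identity APIs listed below; a minimal sketch with spacectl, assuming an endpoint and API key are already configured:

# List all registered providers (identity.Provider.list)
spacectl exec list identity.Provider

# List the service accounts grouped under those providers (identity.ServiceAccount.list)
spacectl exec list identity.ServiceAccount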

API Reference

| Resource | API Description |
|---|---|
| Provider | https://cloudforet.io/api-doc/identity/v2/Provider/ |
| Service Account | https://cloudforet.io/api-doc/identity/v2/ServiceAccount/ |
| Schema | https://cloudforet.io/api-doc/identity/v2/Schema/ |

2.2.2 - Project Management

About project management

2.2.3 - Role Based Access Control

This page explores the basic concepts of user Role-Based Access Control (RBAC) in SpaceONE.

How RBAC Works

SpaceONE's RBAC (Role-Based Access Control) defines who can access which resources and in which organization (project or domain).

For example, the Project Admin Role can read (inquire) and make changes (Update/Delete) to all resources within the specified Project, while the Domain Viewer Role can read all resources within the specified domain. Resources here include everything from users created within SpaceONE and Projects/Project Groups to individual cloud resources.

Every user has one or more roles, which can be assigned directly or inherited within a project. This simplifies role management in complex project hierarchies.

A Role defines what actions can be performed on resources, as specified through a Policy, and a Role is bound to each user. The diagram below shows the relationships between Users, Roles, and Projects that make up RBAC.

This role management model is divided into three main components.

  • Role. A collection of access policies that can be granted to each user. Every role must have one policy. For a more detailed explanation, please refer to Understanding Role.

  • Project. The project or project group to which the permission is applied.

  • User. Users include users who log in to the console and use the UI, API users, and SYSTEM users. Each user is connected to one or more Roles through the RoleBinding procedure, which grants the permissions needed to access various SpaceONE resources.

Basic Concepts

When a user wants to access resources within an organization, the administrator grants the user a role for the target project or domain. The SpaceONE Identity service verifies the Role/Policy granted to each user to determine whether that user can access the requested resources.

Resource

If a user wants to access a resource in a specific SpaceONE project, you can grant the user an appropriate role and then add the user to the target project as a member to make the resource accessible. Examples of these resources are Server, Project, and Alert.

To make it convenient to use the resources managed within SpaceONE for each service, predefined Roles/Policies are provided. If you want to define your own access scope within your company, you can create a Custom Policy/Custom Role and apply it to your internal organization.

For a detailed explanation of this, refer to Understanding Role.

Policy

A policy is a collection of permissions. A permission defines the allowed access scope for each SpaceONE resource. A policy can be assigned to each user through a role. Policies can be published on the Marketplace for use by other users, or published privately for a specific domain.

A permission is expressed in the form {service}.{resource}.{verb}. For example, inventory.Server.list.

A permission also corresponds to a SpaceONE API method, because each microservice in SpaceONE exposes its functionality through API methods. Therefore, when a user calls a SpaceONE API method, the corresponding permission is required.

For example, if you want to call inventory.Server.list to see the server list of the Inventory service, you must have the corresponding inventory.Server.list permission included in your role.

Permission cannot be granted directly to a user. Instead, an appropriate set of permissions can be defined as a policy and assigned to a user through a role. For more information, refer to Understanding Policy.
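
As a purely illustrative sketch, a custom policy could group such permissions as follows; the field names below are assumptions, not the exact schema, so refer to Understanding Policy and the Managing Custom Policy guide for the actual format.

# Hypothetical custom policy file: read-only access to Inventory servers
# (field names are illustrative, not the exact schema)
cat <<EOF > custom-policy.yaml
name: Inventory Server Read Only
permissions:
  - inventory.Server.list
  - inventory.Server.get
EOF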

Roles

A role is composed of a combination of an access target and a policy. Permissions cannot be granted directly to a user; they can only be granted in the form of a role. All resources in SpaceONE belong to a Project, and access can be separated and managed at the DOMAIN and PROJECT levels.

For example, Domain Admin Role is provided for the full administrator of the domain, and Alert Manager Operator Role is provided for event management of Alert Manager.

Members

All cloud resources managed within SpaceONE are managed in units of projects. Therefore, you can control access to resources by giving each user a role and adding them as project members.

Depending on the role type, the user can access all resources within the domain or the resources within the specified project.

  • Domain: You can access all resources within the domain.
  • Project: You can access the resources within the specified Project.

Project-type users can access resources within a project by being added as a member of that project.

If a user is added as a member of a Project Group, the right to access all subordinate project resources is inherited.

Organization

All resources in SpaceONE can be managed hierarchically through the following organizational structure.

All users can specify their access targets by being connected (RoleBinding) to an organization.

  • Domain: The highest-level organization. It covers all projects and project groups.
  • Project Group: An organization that can integrate and manage multiple projects.
  • Project: The smallest organizational unit in SpaceONE. All cloud resources belong to a project.

2.2.3.1 - Understanding Policy

This page takes a detailed look at Policy.

Policy

Policy is a set of permissions defined to perform specific actions on SpaceONE resources. Permissions define the scopes that can be managed for Cloud Resources. For an overall description of the authority management system, please refer to Role Based Access Control.

Policy Type

Once defined, the policy can be shared so that it can be used by roles in other domains. Depending on whether or not this is possible, the policy is divided into two types.

  • MANAGED: A policy defined globally in the Repository service. The policy is managed and shared directly by the overall system administrator. These are common policies convenient for most users.
  • CUSTOM: A policy with self-defined permissions for each domain. It is useful for managing fine-grained permissions per domain.

Policies can also be classified as follows according to their permission scope.

  • Basic: Includes overall permissions for all resources in SpaceONE.
  • Predefined: Includes granular permissions for specific services (alert manager, billing, etc.).

Managed Policy

The table below is the full list of Managed Policies maintained by the Cloudforet team. Detailed permissions are updated automatically when necessary. Managed Policies are designed to classify policies according to the major roles within an organization.

| Policy Type | Policy Name | Policy ID | Permission Description | Reference |
|---|---|---|---|---|
| MANAGED - Basic | Domain Admin Access | policy-managed-domain-admin | Has all privileges except for the following: create/delete domain; api_type is SYSTEM/NO_AUTH; manage DomainOwner (create/change/delete); manage plug-in identity.Auth (change) | policy-managed-domain-admin |
| MANAGED - Basic | Domain Viewer Access | policy-managed-domain-viewer | Read permissions among the Domain Admin Access permissions | policy-managed-domain-viewer |
| MANAGED - Basic | Project Admin Access | policy-managed-project-admin | Excludes the following permissions from the Domain Admin Access policy: manage providers (create/change/inquire/delete); manage Role/Policy (create/change/delete); manage plug-in inventory.Collector (create/change/delete); manage plug-in monitoring.DataSource (create/change/delete); manage plug-in notification.Protocol (create/change/delete) | policy-managed-project-admin |
| MANAGED - Basic | Project Viewer Access | policy-managed-project-viewer | Read permissions among the Project Admin Access policy permissions | policy-managed-project-viewer |
| MANAGED - Predefined | Alert Manager Full Access | policy-managed-alert-manager-full-access | Full access to Alert Manager | policy-managed-alert-manager-full-access |

Custom Policy

If you want to manage the policy of a domain by yourself, please refer to the Managing Custom Policy document.

2.2.3.2 - Understanding Role

This page takes a detailed look at Roles.

Role structure

A Role consists of a Role Type, which specifies the scope of access to resources as shown below, and the organization (project or project group) to which the authority is applied. Users are granted access rights within SpaceONE through RoleBinding.

Role Example

Example: Alert Manager Operator Role

---
results:
  - created_at: '2021-11-15T05:12:31.060Z'
    domain_id: domain-xxx
    name: Alert Manager Operator
    policies:
      - policy_id: policy-managed-alert-manager-operator
        policy_type: MANAGED
    role_id: role-f18c7d2a9398
    role_type: PROJECT
    tags: {}

Example: Domain Viewer Role

---
results:
- created_at: '2021-11-15T05:12:28.865Z'
  domain_id: domain-xxx
  name: Domain Viewer
  policies:
  - policy_id: policy-managed-domain-viewer
    policy_type: MANAGED
  role_id: role-242f9851eee7
  role_type: DOMAIN
  tags: {}

Role Type

Role Type specifies the range of accessible resources within the domain.

  • DOMAIN: Access is possible to all resources in the domain.
  • PROJECT: Access is possible to all resources in the project added as a member.

Please refer to Add as Project Member for how to add a user as a member of a project.

Add Member

All resources in SpaceONE are managed hierarchically as follows. The domain administrator can give users access to resources within a project by adding them as members of that project. Users who need access to multiple projects can be added as members of the parent project group, which gives them access to all projects in the lower hierarchy. For how to add a member to a Project Group, refer to Add as a Member of Project Group.

Role Hierarchy

If a user has complex RoleBindings within the hierarchical project structure, roles are applied according to the following rules.

For example, as shown in the figure below, the user stark@example.com is bound to the parent Project Group with the Project Admin Role and to the lower-level APAC project with the Project Viewer Role. In this case, the roles for each project are applied in the following way.

  • The role of the parent project/project group is applied to any sub-project/project group that is not directly bound by a RoleBinding.
  • A role that is explicitly bound to a subproject is applied to that subproject, overriding the higher-level role.

Default Roles

All SpaceONE domains automatically include Default Roles when they are created. Below is the list.

| Name | Role Type | Description |
|---|---|---|
| Domain Admin | DOMAIN | Can search/change/delete all domain resources |
| Domain Viewer | DOMAIN | Can search all domain resources |
| Project Admin | PROJECT | Can view/change/delete all resources of projects where the user is added as a member |
| Project Viewer | PROJECT | Can search all resources of projects where the user is added as a member |
| Alert Manager Operator | PROJECT | Can view all resources of projects where the user is added as a member, and has alert handling authority in Alert Manager |

Managing Roles

Roles can be managed by the domain itself through spacectl. Please refer to the Managing Roles document.

2.3 - Inventory

Overall explanation of inventory service

2.3.1 - Inventory Collector

This page explores how to collect Cloud Resources.

How to collect

When a user makes a collect API call, a collecting task is created and pushed onto the queue. An Inventory Worker then fetches the task and executes it, collecting the resources from the plugin.

collect
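
A collection run can be triggered through the same API pattern with spacectl; a minimal sketch, assuming a collector has already been registered (the collector_id below is a placeholder):

# Hypothetical parameter file for a collect call
cat <<EOF > collect.yaml
collector_id: collector-xxxxxxxxxxxx
EOF

# Trigger collection (inventory.Collector.collect); the task is queued and an
# Inventory Worker fetches it and calls the collector plugin
spacectl exec collect inventory.Collector -f collect.yaml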

Collecting Manager

collect

2.3.2 - Monitoring

About monitoring service

2.4 - Monitoring

About monitoring service

2.5 - Alert Manager

About alert manager

2.6 - Cost Analysis

About cost analysis

3 - Setup & Operation

Installation & Administrator Guide

3.1 - Getting Started

How to install Cloudforet for developers

Previous versions of Cloudforet:

| Version | Installation Guide |
|---|---|
| v1.12 (stable) | https://cloudforet.io/v1-12/docs/setup_operation/quick_install/ |
| v2.x (development) | Current page |

Overview

This is a Getting Started installation guide using minikube.

Note: This guide is for developers only.

Cloudforet-Minikube Architecture

Cloudforet-Minikube Architecture


Prerequisites

  • Minimum requirements for development (2 cores, 8 GB memory, 30 GB disk)

| CSP | Possible Instance Type |
|---|---|
| AWS | t3.large, m5.large |
| GCP | n4-standard-2, n1-standard-2 |
| Azure | d2s-v3 |
  • Docker/Docker Desktop
    • If you don't have Docker installed, minikube will return an error as minikube uses docker as the driver.
    • Highly recommend installing Docker Desktop based on your OS.
  • Minikube
    • Requires minimum Kubernetes version of 1.21+.
  • Kubectl
  • Helm
    • Requires minimum Helm version of 3.11.0+.
    • If you want to learn more about Helm, refer to this.

Before diving into the Cloudforet Installation process, start minikube by running the command below.

minikube start --driver=docker --memory=6000mb

Installation

You can install Cloudforet by following the steps below.

1) Add Helm Repository

These commands will register the Helm repository.

helm repo add cloudforet https://cloudforet-io.github.io/charts
helm repo update
helm search repo cloudforet

2) Create Namespaces

kubectl create ns cloudforet
kubectl create ns cloudforet-plugin

3) Create Role and RoleBinding

First, download the rbac.yaml file.

The rbac.yaml file basically serves as a means to regulate access to computer or network resources based on the roles of individual users. For more information about RBAC Authorization in Kubernetes, refer to this.

If you are used to downloading files via the command line, run the first command below to download the file, then execute the second command to apply it.

wget https://raw.githubusercontent.com/cloudforet-io/charts/master/examples-v2/rbac.yaml -O rbac.yaml
kubectl apply -f rbac.yaml -n cloudforet-plugin

4) Install Cloudforet Chart

Download the default YAML file for the Helm chart and execute the following commands.

Since Cloudforet 2.x is currently in development, you need to add the --devel option.

wget https://raw.githubusercontent.com/cloudforet-io/charts/master/examples-v2/values/release-2x.yaml -O release-2x.yaml
helm install cloudforet cloudforet/spaceone -n cloudforet -f release-2x.yaml --devel

After executing the above command, check the status of the pod.

Scheduler pods are in CrashLoopBackOff or Error state. This is because the setup is not complete.

kubectl get pod -n cloudforet


NAME                                      READY   STATUS             RESTARTS      AGE
board-5746fd9657-vtd45                    1/1     Running            0             57s
config-5d4c4b7f58-z8k9q                   1/1     Running            0             58s
console-6b64cf66cb-q8v54                  1/1     Running            0             59s
console-api-7c95848cb8-sgt56              2/2     Running            0             58s
console-api-v2-rest-7d64bc85dd-987zn      2/2     Running            0             56s
cost-analysis-7b9d64b944-xw9qg            1/1     Running            0             59s
cost-analysis-scheduler-ff8cc758d-lfx4n   0/1     Error              3 (37s ago)   55s
cost-analysis-worker-559b4799b9-fxmxj     1/1     Running            0             58s
dashboard-b4cc996-mgwj9                   1/1     Running            0             56s
docs-5fb4cc56c7-68qbk                     1/1     Running            0             59s
identity-6fc984459d-zk8r9                 1/1     Running            0             56s
inventory-67498999d6-722bw                1/1     Running            0             57s
inventory-scheduler-5dc6856d44-4spvm      0/1     CrashLoopBackOff   3 (18s ago)   59s
inventory-worker-68d9fcf5fb-x6knb         1/1     Running            0             55s
marketplace-assets-8675d44557-ssm92       1/1     Running            0             59s
mongodb-7c9794854-cdmwj                   1/1     Running            0             59s
monitoring-fdd44bdbf-pcgln                1/1     Running            0             59s
notification-5b477f6c49-gzfl8             1/1     Running            0             59s
notification-scheduler-675696467-gn24j    1/1     Running            0             59s
notification-worker-d88bb6df6-pjtmn       1/1     Running            0             57s
plugin-556f7bc49b-qmwln                   1/1     Running            0             57s
plugin-scheduler-86c4c56d84-cmrmn         0/1     CrashLoopBackOff   3 (13s ago)   59s
plugin-worker-57986dfdd6-v9vqg            1/1     Running            0             58s
redis-75df77f7d4-lwvvw                    1/1     Running            0             59s
repository-5f5b7b5cdc-lnjkl               1/1     Running            0             57s
secret-77ffdf8c9d-48k46                   1/1     Running            0             55s
spacectl-5664788d5d-dtwpr                 1/1     Running            0             59s
statistics-67b77b6654-p9wcb               1/1     Running            0             56s
statistics-scheduler-586875947c-8zfqg     0/1     Error              3 (30s ago)   56s
statistics-worker-68d646fc7-knbdr         1/1     Running            0             58s
supervisor-scheduler-6744657cb6-tpf78     2/2     Running            0             59s

To execute the commands below, every POD except xxxx-scheduler-yyyy must have a Running status.
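
As an optional sanity check (a simple filter, not part of the official steps), you can list the pods whose STATUS is not Running:

# Scheduler pods may show CrashLoopBackOff or Error here until the setup is finished
kubectl get pod -n cloudforet | grep -v Running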

5) Default Initialization (in spacectl POD)

To use Cloudforet, you have to initialize the root domain, which creates a SYSTEM TOKEN.

Log in to the spacectl POD and execute the commands below.

kubectl exec -it -n cloudforet spacectl-xxxxx -- /bin/sh
spacectl config init -f default.yaml

root domain yaml file (root.yaml)

---
admin:
    user_id: admin@example.com
    password: Admin123!@#
    name: Admin

Execute the command below to create the root domain.

spacectl exec init identity.System -f root.yaml

6) Update Helm Values

Update your Helm values file (e.g. release-2x.yaml) and edit the values. There is only one item that needs to be updated.

For EC2 users: put in your EC2 server's public IP instead of 127.0.0.1 for both CONSOLE_API and CONSOLE_API_V2 ENDPOINT.

  • TOKEN (from the previous step)
console:
  production_json:
    CONSOLE_API:
      ENDPOINT: http://localhost:8081  # http://ec2_public_ip:8081 for EC2 users
    CONSOLE_API_V2:
      ENDPOINT: http://localhost:8082  # http://ec2_public_ip:8082 for EC2 users

global:
  shared_conf:
    TOKEN: 'TOKEN_VALUE_FROM_ABOVE'   # Change the system token

After editing the Helm values file (e.g. release-2x.yaml), upgrade the Helm chart.

helm upgrade cloudforet cloudforet/spaceone -n cloudforet -f release-2x.yaml --devel

After upgrading, delete the pods in the cloudforet namespace that have the label app.kubernetes.io/instance with the value cloudforet.

kubectl delete po -n cloudforet -l app.kubernetes.io/instance=cloudforet

7) Check the status of the pods

kubectl get pod -n cloudforet

8) Create User Domain (In spacectl POD)

Create a user domain yaml file (domain.yaml)

---
name: spaceone
admin:
  user_id: admin@domain.com
  password: Admin123!@#
  name: Admin

Execute the commands below to create the user domain.

spacectl config init -f default.yaml
spacectl config set api_key {SYSTEM_TOKEN} 
spacectl exec create identity.Domain -f domain.yaml

If all pods are in Running state, the setup is complete.

Port-forwarding

Installing Cloudforet on minikube doesn't provide any Ingress objects such as Amazon ALB or NGINX ingress controller. We can use kubectl port-forward instead.

Run the following commands for port forwarding.

# CLI commands
kubectl port-forward -n cloudforet svc/console 8080:80 --address='0.0.0.0' &
kubectl port-forward -n cloudforet svc/console-api 8081:80 --address='0.0.0.0' &
kubectl port-forward -n cloudforet svc/console-api-v2-rest 8082:80 --address='0.0.0.0' &

Start Cloudforet

Log in to the User Domain

For EC2 users: open browser with http://your_ec2_server_ip:8080

Open browser (http://127.0.0.1:8080)

| ID | PASSWORD |
|---|---|
| admin@domain.com | Admin123!@# |

Reference

3.2 - Installation

This section describes how to install Cloudforet.

3.2.1 - AWS

Install Guide of Cloudforet on AWS

Cloudforet Helm Charts

A Helm Chart for Cloudforet 1.12.

Prerequisites

  • Kubernetes 1.21+
  • Helm 3.2.0+
  • Service Domain & SSL Certificate (optional)
    • Console: console.example.com
    • REST API: *.api.example.com
    • gRPC API: *.grpc.example.com
    • Webhook: webhook.example.com
  • MongoDB 5.0+ (optional)

Cloudforet Architecture

Cloudforet Architecture

Installation

You can install Cloudforet using the following steps.

1) Add Helm Repository

helm repo add cloudforet https://cloudforet-io.github.io/charts
helm repo update
helm search repo cloudforet

2) Create Namespaces

kubectl create ns spaceone
kubectl create ns spaceone-plugin

If you want to use only one namespace, you do not need to create the spaceone-plugin namespace.

3) Create Role and RoleBinding

First, download the rbac.yaml file.

wget https://raw.githubusercontent.com/cloudforet-io/charts/master/examples/rbac.yaml -O rbac.yaml

And execute the following command.

kubectl apply -f rbac.yaml -n spaceone-plugin

or

kubectl apply -f https://raw.githubusercontent.com/cloudforet-io/charts/master/examples/rbac.yaml -n spaceone-plugin

4) Install Cloudforet Chart

helm install cloudforet cloudforet/spaceone -n spaceone

After executing the above command, check the status of the pod.

kubectl get pod -n spaceone

NAME                                       READY   STATUS             RESTARTS      AGE
board-64f468ccd6-v8wx4                     1/1     Running            0             4m16s
config-6748dc8cf9-4rbz7                    1/1     Running            0             4m14s
console-767d787489-wmhvp                   1/1     Running            0             4m15s
console-api-846867dc59-rst4k               2/2     Running            0             4m16s
console-api-v2-rest-79f8f6fb59-7zcb2       2/2     Running            0             4m16s
cost-analysis-5654566c95-rlpkz             1/1     Running            0             4m13s
cost-analysis-scheduler-69d77598f7-hh8qt   0/1     CrashLoopBackOff   3 (39s ago)   4m13s
cost-analysis-worker-68755f48bf-6vkfv      1/1     Running            0             4m15s
cost-analysis-worker-68755f48bf-7sj5j      1/1     Running            0             4m15s
cost-analysis-worker-68755f48bf-fd65m      1/1     Running            0             4m16s
cost-analysis-worker-68755f48bf-k6r99      1/1     Running            0             4m15s
dashboard-68f65776df-8s4lr                 1/1     Running            0             4m12s
file-manager-5555876d89-slqwg              1/1     Running            0             4m16s
identity-6455d6f4b7-bwgf7                  1/1     Running            0             4m14s
inventory-fc6585898-kjmwx                  1/1     Running            0             4m13s
inventory-scheduler-6dd9f6787f-k9sff       0/1     CrashLoopBackOff   4 (21s ago)   4m15s
inventory-worker-7f6d479d88-59lxs          1/1     Running            0             4m12s
mongodb-6b78c74d49-vjxsf                   1/1     Running            0             4m14s
monitoring-77d9bd8955-hv6vp                1/1     Running            0             4m15s
monitoring-rest-75cd56bc4f-wfh2m           2/2     Running            0             4m16s
monitoring-scheduler-858d876884-b67tc      0/1     Error              3 (33s ago)   4m12s
monitoring-worker-66b875cf75-9gkg9         1/1     Running            0             4m12s
notification-659c66cd4d-hxnwz              1/1     Running            0             4m13s
notification-scheduler-6c9696f96-m9vlr     1/1     Running            0             4m14s
notification-worker-77865457c9-b4dl5       1/1     Running            0             4m16s
plugin-558f9c7b9-r6zw7                     1/1     Running            0             4m13s
plugin-scheduler-695b869bc-d9zch           0/1     Error              4 (59s ago)   4m15s
plugin-worker-5f674c49df-qldw9             1/1     Running            0             4m16s
redis-566869f55-zznmt                      1/1     Running            0             4m16s
repository-8659578dfd-wsl97                1/1     Running            0             4m14s
secret-69985cfb7f-ds52j                    1/1     Running            0             4m12s
statistics-98fc4c955-9xtbp                 1/1     Running            0             4m16s
statistics-scheduler-5b6646d666-jwhdw      0/1     CrashLoopBackOff   3 (27s ago)   4m13s
statistics-worker-5f9994d85d-ftpwf         1/1     Running            0             4m12s
supervisor-scheduler-74c84646f5-rw4zf      2/2     Running            0             4m16s

Scheduler pods are in CrashLoopBackOff or Error state. This is because the setup is not complete.

5) Initialize the Configuration

First, download the initializer.yaml file.

wget https://raw.githubusercontent.com/cloudforet-io/charts/master/examples/initializer.yaml -O initializer.yaml

And execute the following command.

helm install cloudforet-initializer cloudforet/spaceone-initializer -n spaceone -f initializer.yaml

or

helm install cloudforet-initializer cloudforet/spaceone-initializer -n spaceone -f https://raw.githubusercontent.com/cloudforet-io/charts/master/examples/initializer.yaml

For more information about the initializer, please refer to the spaceone-initializer.

6) Set the Helm Values and Upgrade the Chart

Once the initialization is complete, you can get the system token from the initializer pod logs.

# check pod name
kubectl logs initialize-spaceone-xxxx-xxxxx -n spaceone

...
TASK [Print Admin API Key] *********************************************************************************************
"{TOKEN}"

FINISHED [ ok=23, skipped=0 ] ******************************************************************************************

FINISH SPACEONE INITIALIZE

First, copy this TOKEN, then create the values.yaml file and paste it into the TOKEN field.

console:
  production_json:
    # If you don't have a service domain, refer to the following 'No Domain & IP Access' example.
    CONSOLE_API:
      ENDPOINT: https://console.api.example.com       # Change the endpoint
    CONSOLE_API_V2:
      ENDPOINT: https://console-v2.api.example.com    # Change the endpoint

global:
  shared_conf:
    TOKEN: '{TOKEN}'                                    # Change the system token

For more advanced configuration, please refer to the following links.

After editing the values.yaml file, upgrade the helm chart.

helm upgrade cloudforet cloudforet/spaceone -n spaceone -f values.yaml
kubectl delete po -n spaceone -l app.kubernetes.io/instance=cloudforet

7) Check the status of the pods

kubectl get pod -n spaceone

If all pods are in Running state, the setup is complete.

8) Ingress and AWS Load Balancer

In Kubernetes, Ingress is an API object that provides a load-balanced external IP address to access Services in your cluster. It acts as a layer 7 (HTTP/HTTPS) reverse proxy and can route traffic to other services based on the requested host and URL path.

For more information, see What is an Application Load Balancer? on AWS and ingress in the Kubernetes documentation.

Prerequisite

Install AWS Load Balancer Controller
AWS Load Balancer Controller is a controller that helps manage ELB (Elastic Load Balancers) in a Kubernetes Cluster. Ingress resources are provisioned with Application Load Balancer, and service resources are provisioned with Network Load Balancer.
Installation methods may vary depending on the environment, so please refer to the official guide document below.

How to set up Cloudforet ingress

1) Ingress Type
Cloudforet provisions a total of 3 ingresses through 2 files.

  • Console : Ingress to access the domain
  • REST API : Ingress for API service
    • console-api
    • console-api-v2

2) Console ingress
Setting up the ingress to access the console is as follows.

cat <<EOF> spaceone-console-ingress.yaml
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: console-ingress
  namespace: spaceone
  annotations:
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}]'
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip
    alb.ingress.kubernetes.io/load-balancer-attributes: idle_timeout.timeout_seconds=600
    alb.ingress.kubernetes.io/healthcheck-protocol: HTTP
    alb.ingress.kubernetes.io/success-codes: 200-399
    alb.ingress.kubernetes.io/load-balancer-name: spaceone-console-ingress # Caution!! Must be fewer than 32 characters.
spec:
  ingressClassName: alb
  defaultBackend:
    service:
      name: console
      port:
        number: 80
EOF
# Apply ingress
kubectl apply -f spaceone-console-ingress.yaml

If you apply the ingress, an AWS Load Balancer will be provisioned with the name spaceone-console-ingress. You can connect through the provisioned DNS name using HTTP (port 80).

3) REST API ingress
Setting the REST API ingress for the API service is as follows.

cat <<EOF> spaceone-rest-ingress.yaml
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: console-api-ingress
  namespace: spaceone
  annotations:
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}]'
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip
    alb.ingress.kubernetes.io/load-balancer-attributes: idle_timeout.timeout_seconds=600
    alb.ingress.kubernetes.io/healthcheck-protocol: HTTP
    alb.ingress.kubernetes.io/success-codes: 200-399
    alb.ingress.kubernetes.io/load-balancer-name: spaceone-console-api-ingress # Caution!! Must be fewer than 32 characters.
spec:
  ingressClassName: alb
  defaultBackend:
    service:
      name: console-api
      port:
        number: 80
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: console-api-v2-ingress
  namespace: spaceone
  annotations:
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}]'
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip
    alb.ingress.kubernetes.io/load-balancer-attributes: idle_timeout.timeout_seconds=600
    alb.ingress.kubernetes.io/healthcheck-protocol: HTTP
    alb.ingress.kubernetes.io/success-codes: 200-399
    alb.ingress.kubernetes.io/load-balancer-name: spaceone-console-api-v2-ingress
spec:
  ingressClassName: alb
  defaultBackend:
    service:
      name: console-api-v2-rest
      port:
        number: 80
EOF
# Apply ingress
kubectl apply -f spaceone-rest-ingress.yaml

The REST API ingress provisions two ALBs. The DNS names of the REST API load balancers must be saved as the CONSOLE_API and CONSOLE_API_V2 endpoints in the values.yaml file.

4) Check DNS Name
The DNS name will be generated as http://{ingress-name}-{random}.{region-code}.elb.amazonaws.com. You can check this through the kubectl get ingress -n spaceone command in Kubernetes.

kubectl get ingress -n spaceone

NAME                     CLASS   HOSTS   ADDRESS                                                                      PORTS   AGE
console-api-ingress      alb     *       spaceone-console-api-ingress-xxxxxxxxxx.{region-code}.elb.amazonaws.com      80      15h
console-api-v2-ingress   alb     *       spaceone-console-api-v2-ingress-xxxxxxxxxx.{region-code}.elb.amazonaws.com   80      15h
console-ingress          alb     *       spaceone-console-ingress-xxxxxxxxxx.{region-code}.elb.amazonaws.com          80      15h

Alternatively, you can check it in the AWS Console under EC2 > Load balancers, as shown in the image below.

spaceone-console-ingress-alb

5) Connect with DNS Name
When all ingresses are ready, edit the values.yaml file, restart the pods, and access the console.

console:
  production_json:
    # If you don't have a service domain, refer to the following 'No Domain & IP Access' example.
    CONSOLE_API:
      ENDPOINT: http://spaceone-console-api-ingress-xxxxxxxxxx.{region-code}.elb.amazonaws.com
    CONSOLE_API_V2:
      ENDPOINT: http://spaceone-console-api-v2-ingress-xxxxxxxxxx.{region-code}.elb.amazonaws.com

After applying the prepared values.yaml file, restart the pods.

helm upgrade cloudforet cloudforet/spaceone -n spaceone -f values.yaml
kubectl delete po -n spaceone -l app.kubernetes.io/instance=cloudforet

Now you can connect to Cloudforet with the DNS Name of spaceone-console-ingress.

  • http://spaceone-console-ingress-xxxxxxxxxx.{region-code}.elb.amazonaws.com

Advanced ingress settings

How to register an SSL certificate
This section guides you through registering a certificate in the ingress for SSL communication.
There are two methods for registering a certificate: using ACM (AWS Certificate Manager), or registering an external certificate.

How to register an ACM certificate with ingress
If the certificate was issued through ACM, you can register the SSL certificate by simply registering the ACM ARN in the ingress.

First of all, please refer to the AWS official guide document on how to issue a certificate.

How to register the issued certificate is as follows. Please check the options added or changed for SSL communication in the existing ingress.

Check out the changes in the ingress.
Various settings for SSL are added and changed; check the contents of metadata.annotations.
Also check the added contents, such as the ssl-redirect action and the host entries in spec.rules.

  • spaceone-console-ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: console-ingress
  namespace: spaceone
  annotations:
+   alb.ingress.kubernetes.io/actions.ssl-redirect: '{"Type": "redirect", "RedirectConfig": { "Protocol": "HTTPS", "Port": "443", "StatusCode": "HTTP_301"}}'
+   alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}, {"HTTPS":443}]'
-   alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}]'
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip
    alb.ingress.kubernetes.io/load-balancer-attributes: idle_timeout.timeout_seconds=600
    alb.ingress.kubernetes.io/healthcheck-protocol: HTTP
+   alb.ingress.kubernetes.io/certificate-arn: "arn:aws:acm:..."  # Change the certificate-arn
    alb.ingress.kubernetes.io/success-codes: 200-399
    alb.ingress.kubernetes.io/load-balancer-name: spaceone-console-ingress # Caution!! Must be fewer than 32 characters.
spec:
  ingressClassName: alb
- defaultBackend:
-   service:
-     name: console
-     port:
-       number: 80
+ rules:
+   - http:
+       paths:
+         - path: /*
+           pathType: ImplementationSpecific
+           backend:
+             service:
+               name: ssl-redirect
+               port:
+                 name: use-annotation
+   - host: "console.example.com"  # Change the hostname
+     http:
+       paths:
+         - path: /*
+           pathType: ImplementationSpecific
+           backend:
+             service:
+               name: console 
+               port:
+                 number: 80
  • spaceone-rest-ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: console-api-ingress
  namespace: spaceone
  annotations:
+   alb.ingress.kubernetes.io/actions.ssl-redirect: '{"Type": "redirect", "RedirectConfig": { "Protocol": "HTTPS", "Port": "443", "StatusCode": "HTTP_301"}}'
+   alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}, {"HTTPS":443}]'
-   alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}]'
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip
    alb.ingress.kubernetes.io/load-balancer-attributes: idle_timeout.timeout_seconds=600
    alb.ingress.kubernetes.io/healthcheck-protocol: HTTP
+   alb.ingress.kubernetes.io/certificate-arn: "arn:aws:acm:..."  # Change the certificate-arn
    alb.ingress.kubernetes.io/success-codes: 200-399
    alb.ingress.kubernetes.io/load-balancer-name: spaceone-console-api-ingress # Caution!! Must be fewer than 32 characters.
spec:
  ingressClassName: alb
- defaultBackend:
-   service:
-     name: console-api
-     port:
-       number: 80
+ rules:
+   - http:
+       paths:
+         - path: /*
+           pathType: ImplementationSpecific
+           backend:
+             service:
+               name: ssl-redirect
+               port:
+                 name: use-annotation
+   - host: "console.api.example.com"  # Change the hostname
+     http:
+       paths:
+         - path: /*
+           pathType: ImplementationSpecific
+           backend:
+             service:
+               name: console-api
+               port:
+                 number: 80
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: console-api-v2-ingress
  namespace: spaceone
  annotations:
+   alb.ingress.kubernetes.io/actions.ssl-redirect: '{"Type": "redirect", "RedirectConfig": { "Protocol": "HTTPS", "Port": "443", "StatusCode": "HTTP_301"}}'
+   alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}, {"HTTPS":443}]'
-   alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}]'
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip
    alb.ingress.kubernetes.io/load-balancer-attributes: idle_timeout.timeout_seconds=600
    alb.ingress.kubernetes.io/healthcheck-protocol: HTTP
+   alb.ingress.kubernetes.io/certificate-arn: "arn:aws:acm:..."  # Change the certificate-arn
    alb.ingress.kubernetes.io/success-codes: 200-399
    alb.ingress.kubernetes.io/load-balancer-name: spaceone-console-api-v2-ingress
spec:
  ingressClassName: alb
- defaultBackend:
-   service:
-     name: console-api-v2-rest
-     port:
-       number: 80
+ rules:
+   - http:
+       paths:
+         - path: /*
+           pathType: ImplementationSpecific
+           backend:
+             service:
+               name: ssl-redirect
+               port:
+                 name: use-annotation
+   - host: "console-v2.api.example.com"  # Change the hostname
+     http:
+       paths:
+         - path: /*
+           pathType: ImplementationSpecific
+           backend:
+             service:
+               name: console-api-v2-rest
+               port:
+                 number: 80

SSL application is completed when the changes are reflected through the kubectl command.

kubectl apply -f spaceone-console-ingress.yaml
kubectl apply -f spaceone-rest-ingress.yaml

How to register an SSL/TLS certificate
Certificate registration is also possible if you have a previously issued external certificate. You can register it by adding a Kubernetes secret with the issued certificate and declaring the added secret name in the ingress.

Create SSL/TLS certificates as Kubernetes secrets. There are two ways:

1. Using yaml file
You can add a secret from a YAML file using the commands below.

cat <<EOF > tls-secret.yaml
apiVersion: v1
data:
  tls.crt: {your crt}   # base64-encoded certificate
  tls.key: {your key}   # base64-encoded private key
kind: Secret
metadata:
  name: tls-secret
  namespace: spaceone
type: kubernetes.io/tls
EOF
kubectl apply -f tls-secret.yaml

2. How to use the command if a file exists
If you have a crt and key file, you can create a secret using the following command.

kubectl create secret tls tlssecret --key tls.key --cert tls.crt -n spaceone

Add tls secret to Ingress
Modify the ingress using the registered secret information.

ingress-nginx settings
Using a secret with TLS may require additional setup with ingress-nginx. For more information, please refer to the following links:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: console-ingress
  namespace: spaceone
  annotations:
    alb.ingress.kubernetes.io/actions.ssl-redirect: '{"Type": "redirect", "RedirectConfig": { "Protocol": "HTTPS", "Port": "443", "StatusCode": "HTTP_301"}}'
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}, {"HTTPS":443}]'
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip
    alb.ingress.kubernetes.io/load-balancer-attributes: idle_timeout.timeout_seconds=600
    alb.ingress.kubernetes.io/healthcheck-protocol: HTTP
    alb.ingress.kubernetes.io/success-codes: 200-399
    alb.ingress.kubernetes.io/load-balancer-name: spaceone-console-ingress # Caution!! Must be fewer than 32 characters.
spec:
  tls:
  - hosts:
      - console.example.com        # Change the hostname
    secretName: tlssecret          # Insert secret name
  rules:
    - http:
        paths:
          - path: /*
            pathType: ImplementationSpecific
            backend:
              service:
                name: ssl-redirect
                port:
                  name: use-annotation
    - host: "console.example.com"  # Change the hostname
      http:
        paths:
          - path: /*
            pathType: ImplementationSpecific
            backend:
              service:
                name: console 
                port:
                  number: 80

3.2.2 - On Premise

This section describes how to install Cloudforet in an on-premise environment.

on_premise

Prerequisites

Install Cloudforet

This guide shows how to install Cloudforet using the Helm chart. Related information is also available at: https://github.com/cloudforet-io/charts

1. Download the Helm Chart

# Set working directory
mkdir cloudforet-deployment
cd cloudforet-deployment
wget https://github.com/cloudforet-io/charts/releases/download/spaceone-1.12.12/spaceone-1.12.12.tgz
tar zxvf spaceone-1.12.12.tgz

2. Create Namespaces

kubectl create ns cloudforet 
kubectl create ns cloudforet-plugin

Cautions when creating namespaces
If you need to use only one namespace, you do not need to create the cloudforet-plugin namespace.
If you change the Cloudforet namespace, please refer to the following link: Change K8S Namespace

3. Create Role and RoleBinding

First, download the rbac.yaml file.

The rbac.yaml file basically serves as a means to regulate access to computer or network resources based on the roles of individual users. For more information about RBAC Authorization in Kubernetes, refer to this.

If you are used to downloading files via command-line, run this command to download the file.

wget https://raw.githubusercontent.com/cloudforet-io/charts/master/examples/rbac.yaml -O rbac.yaml

Next, execute the following command.

kubectl apply -f rbac.yaml -n cloudforet-plugin

4. Install

Download the default YAML file for the Helm chart and run the following commands.

wget https://raw.githubusercontent.com/cloudforet-io/charts/master/examples/values/release-1-12.yaml -O release-1-12.yaml
helm install cloudforet spaceone -n cloudforet -f release-1-12.yaml

After executing the above command, check the status of the pod.

Scheduler pods are in CrashLoopBackOff or Error state. This is because the setup is not complete.

kubectl get pod -n cloudforet

NAME                                       READY   STATUS             RESTARTS      AGE
board-64f468ccd6-v8wx4                     1/1     Running            0             4m16s
config-6748dc8cf9-4rbz7                    1/1     Running            0             4m14s
console-767d787489-wmhvp                   1/1     Running            0             4m15s
console-api-846867dc59-rst4k               2/2     Running            0             4m16s
console-api-v2-rest-79f8f6fb59-7zcb2       2/2     Running            0             4m16s
cost-analysis-5654566c95-rlpkz             1/1     Running            0             4m13s
cost-analysis-scheduler-69d77598f7-hh8qt   0/1     CrashLoopBackOff   3 (39s ago)   4m13s
cost-analysis-worker-68755f48bf-6vkfv      1/1     Running            0             4m15s
cost-analysis-worker-68755f48bf-7sj5j      1/1     Running            0             4m15s
cost-analysis-worker-68755f48bf-fd65m      1/1     Running            0             4m16s
cost-analysis-worker-68755f48bf-k6r99      1/1     Running            0             4m15s
dashboard-68f65776df-8s4lr                 1/1     Running            0             4m12s
file-manager-5555876d89-slqwg              1/1     Running            0             4m16s
identity-6455d6f4b7-bwgf7                  1/1     Running            0             4m14s
inventory-fc6585898-kjmwx                  1/1     Running            0             4m13s
inventory-scheduler-6dd9f6787f-k9sff       0/1     CrashLoopBackOff   4 (21s ago)   4m15s
inventory-worker-7f6d479d88-59lxs          1/1     Running            0             4m12s
mongodb-6b78c74d49-vjxsf                   1/1     Running            0             4m14s
monitoring-77d9bd8955-hv6vp                1/1     Running            0             4m15s
monitoring-rest-75cd56bc4f-wfh2m           2/2     Running            0             4m16s
monitoring-scheduler-858d876884-b67tc      0/1     Error              3 (33s ago)   4m12s
monitoring-worker-66b875cf75-9gkg9         1/1     Running            0             4m12s
notification-659c66cd4d-hxnwz              1/1     Running            0             4m13s
notification-scheduler-6c9696f96-m9vlr     1/1     Running            0             4m14s
notification-worker-77865457c9-b4dl5       1/1     Running            0             4m16s
plugin-558f9c7b9-r6zw7                     1/1     Running            0             4m13s
plugin-scheduler-695b869bc-d9zch           0/1     Error              4 (59s ago)   4m15s
plugin-worker-5f674c49df-qldw9             1/1     Running            0             4m16s
redis-566869f55-zznmt                      1/1     Running            0             4m16s
repository-8659578dfd-wsl97                1/1     Running            0             4m14s
secret-69985cfb7f-ds52j                    1/1     Running            0             4m12s
statistics-98fc4c955-9xtbp                 1/1     Running            0             4m16s
statistics-scheduler-5b6646d666-jwhdw      0/1     CrashLoopBackOff   3 (27s ago)   4m13s
statistics-worker-5f9994d85d-ftpwf         1/1     Running            0             4m12s
supervisor-scheduler-74c84646f5-rw4zf      2/2     Running            0             4m16s

To execute the commands below, every pod except the xxxx-scheduler-yyyy pods must be in the Running state.
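
If you prefer a quick filter instead of scanning the whole list, the sketch below (plain kubectl plus grep, nothing Cloudforet-specific) prints only pods that are not Running; at this stage only the xxxx-scheduler-yyyy pods are expected in the output.

# Sketch: list pods that are not yet Running; only the scheduler pods are expected here for now
kubectl get pod -n cloudforet --no-headers | grep -v 'Running'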

5. Initialize the Configuration

First, download the initializer.yaml file.

For more information about the initializer, please refer to the spaceone-initializer.

If you prefer to download files via the command line, run this command to download the file.

wget https://raw.githubusercontent.com/cloudforet-io/charts/master/examples/initializer.yaml -O initializer.yaml

Then execute the following commands.

wget https://github.com/cloudforet-io/charts/releases/download/spaceone-initializer-1.3.3/spaceone-initializer-1.3.3.tgz
tar zxvf spaceone-initializer-1.3.3.tgz
helm install initializer spaceone-initializer -n cloudforet -f initializer.yaml

6. Set the Helm Values and Upgrade the Chart

Once the initialization is complete, you can get the system token from the initializer pod logs.

To figure out the pod name for the initializer, first run this command to show all pod names in the cloudforet namespace.

kubectl get pods -n cloudforet 

Then, among the pods shown, copy the name of the pod that starts with initialize-spaceone.

NAME                                       READY   STATUS      RESTARTS   AGE
board-5997d5688-kq4tx                      1/1     Running     0          24m
config-5947d845b5-4ncvn                    1/1     Running     0          24m
console-7fcfddbd8b-lbk94                   1/1     Running     0          24m
console-api-599b86b699-2kl7l               2/2     Running     0          24m
console-api-v2-rest-cb886d687-d7n8t        2/2     Running     0          24m
cost-analysis-8658c96f8f-88bmh             1/1     Running     0          24m
cost-analysis-scheduler-67c9dc6599-k8lgx   1/1     Running     0          24m
cost-analysis-worker-6df98df444-5sjpm      1/1     Running     0          24m
dashboard-84d8969d79-vqhr9                 1/1     Running     0          24m
docs-6b9479b5c4-jc2f8                      1/1     Running     0          24m
identity-6d7bbb678f-b5ptf                  1/1     Running     0          24m
initialize-spaceone-fsqen-74x7v            0/1     Completed   0          98m
inventory-64d6558bf9-v5ltj                 1/1     Running     0          24m
inventory-scheduler-69869cc5dc-k6fpg       1/1     Running     0          24m
inventory-worker-5649876687-zjxnn          1/1     Running     0          24m
marketplace-assets-5fcc55fb56-wj54m        1/1     Running     0          24m
mongodb-b7f445749-2sr68                    1/1     Running     0          101m
monitoring-799cdb8846-25w78                1/1     Running     0          24m
notification-c9988d548-gxw2c               1/1     Running     0          24m
notification-scheduler-7d4785fd88-j8zbn    1/1     Running     0          24m
notification-worker-586bc9987c-kdfn6       1/1     Running     0          24m
plugin-79976f5747-9snmh                    1/1     Running     0          24m
plugin-scheduler-584df5d649-cflrb          1/1     Running     0          24m
plugin-worker-58d5cdbff9-qk5cp             1/1     Running     0          24m
redis-b684c5bbc-528q9                      1/1     Running     0          24m
repository-64fc657d4f-cbr7v                1/1     Running     0          24m
secret-74578c99d5-rk55t                    1/1     Running     0          24m
spacectl-8cd55f46c-xw59j                   1/1     Running     0          24m
statistics-767d84bb8f-rrvrv                1/1     Running     0          24m
statistics-scheduler-65cc75fbfd-rsvz7      1/1     Running     0          24m
statistics-worker-7b6b7b9898-lmj7x         1/1     Running     0          24m
supervisor-scheduler-555d644969-95jxj      2/2     Running     0          24m

To execute the kubectl logs command below, the status of the pod (e.g., initialize-spaceone-fsqen-74x7v here) must be Completed. Running the command while the pod is still initializing will produce errors.

Get the token from the logs of the pod whose name you found above.

kubectl logs initialize-spaceone-fsqen-74x7v -n cloudforet

...
TASK [Print Admin API Key] *********************************************************************************************
"TOKEN_SHOWN_HERE"

FINISHED [ ok=23, skipped=0 ] ******************************************************************************************

FINISH SPACEONE INITIALIZE
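
If you prefer not to copy the pod name by hand, a small sketch like the one below locates the initializer pod and prints the lines around the token; it assumes the pod name always starts with initialize-spaceone and that the token appears right after the "Print Admin API Key" task, as in the log above.

# Sketch: find the initializer pod and print the token line that follows "Print Admin API Key"
INIT_POD=$(kubectl get pods -n cloudforet --no-headers | awk '/^initialize-spaceone/ {print $1}')
kubectl logs "$INIT_POD" -n cloudforet | grep -A 1 "Print Admin API Key"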

Update your Helm values file (e.g. release-1-12.yaml) and edit the values. There is only one item that needs to be updated.

For EC2 users: use your EC2 server's public IP instead of 127.0.0.1 for both the CONSOLE_API and CONSOLE_API_V2 ENDPOINT values.

  • TOKEN
console:
  production_json:
    CONSOLE_API:
      ENDPOINT: https://console-v1.api.example.com  # Change to your domain (example.com)
    CONSOLE_API_V2:
      ENDPOINT: https://console-v2.api.example.com  # Change to your domain (example.com)

global:
  shared_conf:
    TOKEN: 'TOKEN_VALUE_FROM_ABOVE'   # Change the system token

After editing the Helm values file (e.g. release-1-12.yaml), upgrade the Helm chart.

helm upgrade cloudforet spaceone -n cloudforet -f release-1-12.yaml

After upgrading, delete the pods in the cloudforet namespace that have the label app.kubernetes.io/instance=cloudforet.

kubectl delete po -n cloudforet -l app.kubernetes.io/instance=cloudforet

7. Check the status of the pods

Check the status of the pod with the following command. If all pods are in Running state, the installation is complete.

kubectl get pod -n cloudforet

8. Configure Ingress

Kubernetes Ingress is a resource that manages external access to services in a cluster. Cloudforet is exposed by registering the generated certificate as a secret and adding an ingress, in the order shown below.

Install Nginx Ingress Controller
An ingress controller is required to use ingress in an on-premise environment. Here is a link to the installation guide for Nginx Ingress Controller supported by Kubernetes.
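
For reference, a typical Helm-based installation of the ingress-nginx controller looks like the sketch below; check the linked guide for the command matching your Kubernetes version.

# Sketch: install the ingress-nginx controller with Helm (verify against the official guide)
helm upgrade --install ingress-nginx ingress-nginx \
  --repo https://kubernetes.github.io/ingress-nginx \
  --namespace ingress-nginx --create-namespace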

Case 1) cert-manager with Letsencrypt

If you want to use a free SSL certificate, you can use cert-manager with Letsencrypt.
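
The ingress manifest below references a ClusterIssuer named letsencrypt-prod. If one does not exist yet, a minimal sketch is shown here; it assumes cert-manager is already installed in the cluster, and the email address is a placeholder you must replace.

---
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: admin@example.com        # Change to your email address
    privateKeySecretRef:
      name: letsencrypt-prod
    solvers:
      - http01:
          ingress:
            class: nginx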

  • file: cloudforet-ingress.yaml
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: console-ingress
  namespace: cloudforet
  annotations:
    cert-manager.io/cluster-issuer: "letsencrypt-prod"
    kubernetes.io/ingress.class: "nginx"
spec:
  tls:
  - hosts:
    - console.example.com
    - console-v1.api.example.com
    - console-v2.api.example.com
    - webhook.api.example.com
    secretName: console-tls
  rules:
    - host: "console.example.com"  # Change the hostname
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: console
                port:
                  number: 80
    - host: "console-v1.api.example.com"  # Change the hostname
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: console-api
                port:
                  number: 80
    - host: "console-v2.api.example.com"  # Change the hostname
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: console-api-v2-rest
                port:
                  number: 80
    - host: "webhook.api.example.com"  # Change the hostname
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: monitoring-rest
                port:
                  number: 80

Create the prepared ingress in the cloudforet namespace with the command below.

kubectl apply -f cloudforet-ingress.yaml -n cloudforet

Case 2) Generate self-managed SSL

Create a private SSL certificate using the openssl command below. (If a certificate has already been issued, you can create a Secret using it. For detailed instructions, please refer to the following link: Create secret by exist cert)

openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout console_ssl.pem -out console_ssl.csr -subj "/CN=console.example.com/O=cloudforet" -addext "subjectAltName = DNS:*.api.example.com"
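
If you want to confirm that the subjectAltName entry made it into the certificate, a quick check is shown below (a sketch using standard openssl and grep; note that because of the -x509 flag above, console_ssl.csr actually contains the certificate, not a CSR).

# Sketch: inspect the generated certificate and its Subject Alternative Name
openssl x509 -in console_ssl.csr -noout -text | grep -A1 "Subject Alternative Name"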

Create a secret for SSL

If the certificate is ready, create a secret in the cloudforet namespace using the certificate files.

kubectl create secret tls console-tls --key console_ssl.pem --cert console_ssl.csr -n cloudforet

Create Ingress

The file is as follows. Change the hostnames inside the file to match the domain of the certificate you created.

---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: console-ingress
  namespace: cloudforet
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  tls:
  - hosts:
    - console.example.com
    - console-v1.api.example.com
    - console-v2.api.example.com
    - webhook.api.example.com
    secretName: console-tls
  rules:
    - host: "console.example.com"  # Change the hostname
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: console
                port:
                  number: 80
    - host: "console-v1.api.example.com"  # Change the hostname
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: console-api
                port:
                  number: 80
    - host: "console-v2.api.example.com"  # Change the hostname
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: console-api-v2-rest
                port:
                  number: 80
    - host: "webhook.api.example.com"  # Change the hostname
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: monitoring-rest
                port:
                  number: 80

Create the prepared ingress in the cloudforet namespace with the command below.

kubectl apply -f cloudforet-ingress.yaml -n cloudforet

Connect to the Console

Connect to the Cloudforet Console service.


Advanced Configurations

Additional settings are required for the following special features. Below are examples and solutions for each situation.

  • Set Plugin Certificate: How to set a certificate for each plugin when using a private certificate.
  • Support Private Image Registry: In an environment where outbound communication is blocked for the organization's security reasons, you can operate your own private image registry. In this case, a container image sync operation is required, and Cloudforet suggests a method using the dregsy tool.
  • Change K8S Namespace: Namespace usage may be limited in some environments, or you may want to use your own namespace name. This explains how to change the namespace in Cloudforet.
  • Set HTTP Proxy: In an on-premise environment with no Internet connection, proxy settings are required to communicate with the external world. This explains how to set up an HTTP proxy.
  • Set K8S ImagePullSecrets: If you are using a private image registry, you may need credentials because user authentication is enabled. In Kubernetes, you can use secrets to register credentials with pods. This explains how to set ImagePullSecrets.

3.3 - Configuration

We will introduce the custom settings for using Cloudforet.

3.3.1 - Set plugin certificate

Describes how to set up private certificates for plugins used in Cloudforet.

set_plugin_certificate

If Cloudforet is built in an on-premise environment, it may be accessed through a proxy server without direct communication with the Internet.
In that case, a private certificate is required when communicating with the proxy server.
First, configure a secret with the prepared private certificate and mount it on the private-tls volume.
After that, set the environment variables required to register the certificate in the supervisor's KubernetesConnector to the path of tls.crt in the private-tls volume.




Register the prepared private certificate as a Kubernetes Secret

Parameters and default values:
  • apiVersion: API version of the resource (default: v1)
  • kind: Kind of the resource (default: Secret)
  • metadata: Metadata of the resource (default: {...})
  • metadata.name: Name of the resource (default: private-tls)
  • metadata.namespace: Namespace of the resource (default: spaceone)
  • data: Data of the resource (default: tls.crt)
  • type: Type of the resource (default: kubernetes.io/tls)
kubectl apply -f create_tls_secret.yml
---
apiVersion: v1
kind: Secret
metadata:
  name: spaceone-tls
  namespace: spaceone
data:
  tls.crt: base64 encoded cert  # openssl base64 -in cert.pem -out cert.base64
type: kubernetes.io/tls



Set up the KubernetesConnector of the supervisor

Parameters and default values:
  • supervisor.application_scheduler: Configuration of the supervisor scheduler (default: {...})
  • supervisor.application_scheduler.CONNECTORS.KubernetesConnector.env[]: Environment variables for the plugin (default: [...])
  • supervisor.application_scheduler.CONNECTORS.KubernetesConnector.env[].name: Name of the environment variable (default: REQUESTS_CA_BUNDLE, AWS_CA_BUNDLE, CLOUDFORET_CA_BUNDLE)
  • supervisor.application_scheduler.CONNECTORS.KubernetesConnector.env[].value: Value of the environment variable (default: /opt/ssl/cert/tls.crt)
  • supervisor.application_scheduler.CONNECTORS.KubernetesConnector.volumes[]: Volumes for the plugin (default: [...])
  • supervisor.application_scheduler.CONNECTORS.KubernetesConnector.volumes[].name: Name of the volume (default: private-tls)
  • supervisor.application_scheduler.CONNECTORS.KubernetesConnector.volumes[].secret.secretName: Secret name of the secret volume (default: private-tls)
  • supervisor.application_scheduler.CONNECTORS.KubernetesConnector.volumeMounts[]: Volume mounts for the plugin (default: [...])
  • supervisor.application_scheduler.CONNECTORS.KubernetesConnector.volumeMounts[].name: Name of the volume mount (default: private-tls)
  • supervisor.application_scheduler.CONNECTORS.KubernetesConnector.volumeMounts[].mountPath: Path of the volume mount (default: /opt/ssl/cert/tls.crt)
  • supervisor.application_scheduler.CONNECTORS.KubernetesConnector.volumeMounts[].readOnly: Read-only permission on the mounted volume (default: true)
supervisor:
  enabled: true
  image:
    name: spaceone/supervisor
    version: x.y.z

  imagePullSecrets:
    - name: my-credential

  application_scheduler:
    CONNECTORS:
      KubernetesConnector:
        env:
          - name: REQUESTS_CA_BUNDLE
            value: /opt/ssl/cert/tls.crt
          - name: AWS_CA_BUNDLE
            value: /opt/ssl/cert/tls.crt
          - name: CLOUDFORET_CA_BUNDLE
            value: /opt/ssl/cert/tls.crt
        volumes:
          - name: private-tls
            secret:
              secretName: private-tls
        volumeMounts:
          - name: private-tls
            mountPath: /opt/ssl/cert/tls.crt
            readOnly: true



Update

You can apply the changes through the helm upgrade command and by deleting the pods.

helm upgrade cloudforet cloudforet/spaceone -n spaceone -f values.yaml
kubectl delete po -n spaceone -l app.kubernetes.io/instance=cloudforet

3.3.2 - Change kubernetes namespace

This section describes how to change a core service or plugin service to a namespace with a different name.

When Cloudforet is installed in a K8S environment, the core services are installed in the spaceone namespace and the plugin services for extension functions are installed in the spaceone-plugin namespace. (In v1.11.5 and below, plugins are installed in root-supervisor.)

If you want to install the core services or plugin services in a namespace with a different name, or in a single namespace, the namespace must be changed through options.

In order to change the namespace, you need to write changes in Cloudforet's values.yaml. Changes can be made to each core service and plugin service.

Change the namespace of the core service

To change the namespace of the core services, declare global.namespace in the values.yaml file and set it to the desired namespace (spaceone-namespace in the example below).

#console:
#  production_json:
#    CONSOLE_API:
#      ENDPOINT: https://console.api.example.com        # Change the endpoint
#    CONSOLE_API_V2:
#      ENDPOINT: https://console-v2.api.example.com     # Change the endpoint

global:
  namespace: spaceone-namespace                         # Change the namespace
  shared_conf:

Change the namespace of plugin service

You can change the namespace of the supervisor's plugin services as well as that of the core services. The life cycle of plugin services is managed by the supervisor, so the plugin namespace is also configured in the supervisor.

Below is the part of the values.yaml file where the supervisor is configured to change the namespace of the plugin services. Set supervisor.application_scheduler.CONNECTORS.KubernetesConnector.namespace to the desired namespace (plugin-namespace in the example below).

#console:
supervisor:
  application_scheduler:
    HOSTNAME: spaceone.svc.cluster.local                # Change the hostname
    CONNECTORS:
      KubernetesConnector:
        namespace: plugin-namespace                     # Change the namespace

Update

You can apply the changes through the helm upgrade command and by deleting the pods.

helm upgrade cloudforet cloudforet/spaceone -n spaceone -f values.yaml
kubectl delete po -n spaceone -l app.kubernetes.io/instance=cloudforet

3.3.3 - Creating and applying kubernetes imagePullSecrets

We will explain the process of enabling Cloudforet pods to get private container images using imagePullSecrets.

Due to an organization's security requirements, users may build and operate a private, dedicated image registry to manage private images.

To pull container images from a private image registry, credentials are required. In Kubernetes, Secrets can be used to register such credentials with pods, enabling them to retrieve and pull private container images.

For more detailed information, please refer to the official documentation.

Creating a Secret for credentials.

Kubernetes pods can pull private container images using a Secret of type kubernetes.io/dockerconfigjson.

To do this, create a secret for credentials based on registry credentials.

kubectl create secret docker-registry my-credential --docker-server=<your-registry-server> --docker-username=<your-name> --docker-password=<your-pword> --docker-email=<your-email>

Mount the credentials Secret to a Pod.

You can specify imagePullSecrets in the helm chart values of Cloudforet to mount the credentials Secret to the pods.

WARN: Kubernetes Secrets are namespace-scoped resources, so the secret must exist in the same namespace as the pods that use it.
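
For example, if plugin pods run in a separate namespace (cloudforet-plugin or spaceone-plugin, depending on your setup), the same credential secret must be created there as well; below is a sketch assuming the secret name used in this guide.

# Sketch: create the same registry credential in the plugin namespace as well
kubectl create secret docker-registry my-credential \
  --docker-server=<your-registry-server> --docker-username=<your-name> \
  --docker-password=<your-pword> --docker-email=<your-email> \
  -n spaceone-plugin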

Set imagePullSecrets configuration for the core service

Parameters and default values:
  • [services].imagePullSecrets[]: imagePullSecrets configuration (in each micro service section) (default: [])
  • [services].imagePullSecrets[].name: Name of a secret of type kubernetes.io/dockerconfigjson (default: "")
console:
    enable: true
    image:
      name: spaceone/console
      version: x.y.z

    imagePullSecrets:
      - name: my-credential

console-api:
    enable: true
    image:
      name: spaceone/console-api
      version: x.y.z

    imagePullSecrets:
      - name: my-credential

(...)

Set imagePullSecrets configuration for the plugin

Parameters and default values:
  • supervisor.application_scheduler: Configuration of the supervisor scheduler (default: {...})
  • supervisor.application_scheduler.CONNECTORS.KubernetesConnector.imagePullSecrets[]: imagePullSecrets configuration for the plugin (default: [])
  • supervisor.application_scheduler.CONNECTORS.KubernetesConnector.imagePullSecrets[].name: Name of a secret of type kubernetes.io/dockerconfigjson for the plugin (default: "")
supervisor:
    enabled: true
    image:
      name: spaceone/supervisor
      version: x.y.z

    imagePullSecrets: 
      - name: my-credential

    application_scheduler:
      CONNECTORS:
          KubernetesConnector:
              imagePullSecrets: 
                - name: my-credential

Update

You can apply the changes through the helm upgrade command and by deleting the pods.

helm upgrade cloudforet cloudforet/spaceone -n spaceone -f values.yaml
kubectl delete po -n spaceone -l app.kubernetes.io/instance=cloudforet

3.3.4 - Setting up http proxy

We will explain the http_proxy configuration for a Kubernetes pod to establish a proxy connection.

set_proxy

You can enable communication from pods to the external world through a proxy server by declaring the http_proxy and https_proxy environment variables.

This configuration is done by declaring http_proxy and https_proxy in the environment variables of each container.

The no_proxy environment variable is used to exclude destinations from proxy communication.

For Cloudforet, it is recommended to exclude the in-cluster service domains so that micro services can communicate with each other directly.

Example

Set proxy configuration for the core service

Parameters and default values:
  • global.common_env[]: Environment variables for all micro services (default: [])
  • global.common_env[].name: Name of the environment variable (default: "")
  • global.common_env[].value: Value of the environment variable (default: "")
global:
  common_env:
    - name: HTTP_PROXY
      value: http://{proxy_server_address}:{proxy_port}
    - name: HTTPS_PROXY
      value: http://{proxy_server_address}:{proxy_port}
    - name: no_proxy
      value: .svc.cluster.local,localhost,{cluster_ip},board,config,console,console-api,console-api-v2,cost-analysis,dashboard,docs,file-manager,identity,inventory,marketplace-assets,monitoring,notification,plugin,repository,secret,statistics,supervisor

Set proxy configuration for the plugin

Parameters and default values:
  • supervisor.application_scheduler: Configuration of the supervisor scheduler (default: {...})
  • supervisor.application_scheduler.CONNECTORS.KubernetesConnector.env[]: Environment variables for the plugin (default: [])
  • supervisor.application_scheduler.CONNECTORS.KubernetesConnector.env[].name: Name of the environment variable (default: "")
  • supervisor.application_scheduler.CONNECTORS.KubernetesConnector.env[].value: Value of the environment variable (default: "")

WARN:
Depending on your installation environment, the default local domain may differ, so you need to change the default local domain (such as .svc.cluster.local) to match your environment. You can check the current cluster DNS settings with the following command.

kubectl run -it --rm busybox --image=busybox --restart=Never -- cat /etc/resolv.conf

supervisor:
    enabled: true
    image:
      name: spaceone/supervisor
      version: x.y.z

    imagePullSecrets: 
      - name: my-credential

    application_scheduler:
      CONNECTORS:
        KubernetesConnector:
          env:
            - name: HTTP_PROXY
              value: http://{proxy_server_address}:{proxy_port}
            - name: HTTPS_PROXY
              value: http://{proxy_server_address}:{proxy_port}
            - name: no_proxy
              value: .svc.cluster.local,localhost,{cluster_ip},board,config,console,console-api,console-api-v2,cost-analysis,dashboard,docs,file-manager,identity,inventory,marketplace-assets,monitoring,notification,plugin,repository,secret,statistics,supervisor

Update

You can apply the changes through the helm upgrade command and by deleting the pods.

helm upgrade cloudforet cloudforet/spaceone -n spaceone -f values.yaml
kubectl delete po -n spaceone -l app.kubernetes.io/instance=cloudforet

3.3.5 - Support private image registry

Cloudforet proposes a way to sync container images between private and public image registries.

In organizations operating in an on-premise environment, there are cases where they establish and operate their own container registry within the internal network due to security concerns.

In such environments, when installing Cloudforet, access to external networks is restricted, requiring the preparation of images from Dockerhub and syncing them to their own container registry.

To automate the synchronization of container images in such scenarios, Cloudforet proposes using a Container Registry Sync tool called 'dregsy' to periodically sync container images.

dregsy_for_image_sync

In an environment situated between an external network and an internal network, dregsy is executed.

This tool periodically pulls specific container images from Dockerhub and uploads them to the organization's private container registry.

NOTE:
The dregsy tool described in this guide always pulls container images from Dockerhub, regardless of whether the images already exist in the destination registry.

Also, Docker Hub limits the number of Docker image downloads (pulls) based on the account type of the user pulling the image:

  • For anonymous users, the rate limit is 100 pulls per 6 hours per IP address.
  • For authenticated users, it is 200 pulls per 6-hour period.
  • Users with a paid Docker subscription get up to 5,000 pulls per day.

Install and Configuration

NOTE:
In this configuration, communication with Dockerhub is required, so it should be performed in an environment with internet access.

Also, this explanation is based on the installation of Cloudforet version 1.11.x.

Prerequisite

Installation

Since the tools are executed using Docker, no separate installation is required.

The approach is to pull and run the dregsy image, which includes skopeo (the mirroring tool).

Configuration

  • Create files
touch /path/to/your/dregsy-spaceone-core.yaml
touch /path/to/your/dregsy-spaceone-plugin.yaml
  • Add configuration (dregsy-spaceone-core.yaml)

If authentication to the registry is configured with username:password,
the information is encoded and set in the 'auth' field as shown below (example - lines 19 and 22 of the configuration).

echo '{"username": "...", "password": "..."}' | base64

In the case of Harbor, a Robot Token is not supported for authentication.
Please authenticate by encoding the username:password pair.

relay: skopeo
watch: true

skopeo:
  binary: skopeo
  certs-dir: /etc/skopeo/certs.d

lister:
  maxItems: 100
  cacheDuration: 2h

tasks:
  - name: sync_spaceone_doc
    interval: 21600 # 6 hours
    verbose: true

    source:
      registry: registry.hub.docker.com
      auth: {Token}                 # replace to your dockerhub token
    target:
      registry: {registry_address}  # replace to your registry address
      auth: {Token}                 # replace to your registry token
      skip-tls-verify: true

    mappings:
      - from: spaceone/spacectl
        to: your_registry_project/spaceone/spacectl     # replace to your registry project & repository
        tags: 
          - 'regex: 1\.11\.(?:[0-9]?[0-9]).*'
      - from: spaceone/marketplace-assets
        to: your_registry_project/spaceone/marketplace-assets   # replace to your registry project & repository
        tags: 
          - 'regex: 1\.11\.(?:[0-9]?[0-9]).*'
      - from: spaceone/docs
        to: your_registry_project/spaceone/docs          # replace to your registry project & repository
        tags:
          - 'regex: 1\.11\.(?:[0-9]?[0-9]).*'
      - from: redis
        to: your_registry_project/spaceone/redis       # replace to your registry project & repository
        tags: 
          - 'latest'
      - from: mongo
        to: your_registry_project/spaceone/mongo       # replace to your registry project & repository
        tags: 
          - 'latest'

  - name: sync_spaceone_core
    interval: 21600 # 6 hours
    verbose: true

    source:
      registry: registry.hub.docker.com
      auth: {Token}
    target:
      registry: {registry_address}  # replace to your registry address
      auth: {Token}               # replace to your registry token
      skip-tls-verify: true

    mappings:
      - from: spaceone/console
        to: your_registry_project/spaceone/console     # replace to your registry project & repository
        tags: 
          - 'regex: 1\.11\.(?:[0-9]?[0-9]).*'
      - from: spaceone/inventory
        to: your_registry_project/spaceone/inventory       # replace to your registry project & repository
        tags: 
          - 'regex: 1\.11\.(?:[0-9]?[0-9]).*'
      - from: spaceone/console-api
        to: your_registry_project/spaceone/console-api     # replace to your registry project & repository
        tags: 
          - 'regex: 1\.11\.(?:[0-9]?[0-9]).*'
      - from: spaceone/cost-analysis
        to: your_registry_project/spaceone/cost-analysis       # replace to your registry project & repository
        tags: 
          - 'regex: 1\.11\.(?:[0-9]?[0-9]).*'
      - from: spaceone/statistics
        to: your_registry_project/spaceone/statistics      # replace to your registry project & repository
        tags: 
          - 'regex: 1\.11\.(?:[0-9]?[0-9]).*'
      - from: spaceone/secret
        to: your_registry_project/spaceone/secret      # replace to your registry project & repository
        tags: 
          - 'regex: 1\.11\.(?:[0-9]?[0-9]).*'
      - from: spaceone/file-manager
        to: your_registry_project/spaceone/file-manager        # replace to your registry project & repository
        tags: 
          - 'regex: 1\.11\.(?:[0-9]?[0-9]).*'
      - from: spaceone/monitoring
        to: your_registry_project/spaceone/monitoring      # replace to your registry project & repository
        tags: 
          - 'regex: 1\.11\.(?:[0-9]?[0-9]).*'
      - from: spaceone/supervisor
        to: your_registry_project/spaceone/supervisor      # replace to your registry project & repository
        tags: 
          - 'regex: 1\.11\.(?:[0-9]?[0-9]).*'
      - from: spaceone/identity
        to: your_registry_project/spaceone/identity        # replace to your registry project & repository
        tags: 
          - 'regex: 1\.11\.(?:[0-9]?[0-9]).*'
      - from: spaceone/notification
        to: your_registry_project/spaceone/notification        # replace to your registry project & repository
        tags: 
          - 'regex: 1\.11\.(?:[0-9]?[0-9]).*'
      - from: spaceone/repository
        to: your_registry_project/spaceone/repository      # replace to your registry project & repository
        tags: 
          - 'regex: 1\.11\.(?:[0-9]?[0-9]).*'
      - from: spaceone/plugin
        to: your_registry_project/spaceone/plugin      # replace to your registry project & repository
        tags: 
          - 'regex: 1\.11\.(?:[0-9]?[0-9]).*'
      - from: spaceone/config
        to: your_registry_project/spaceone/config      # replace to your registry project & repository
        tags: 
          - 'regex: 1\.11\.(?:[0-9]?[0-9]).*'
      - from: spaceone/console-api-v2
        to: your_registry_project/spaceone/console-api-v2      # replace to your registry project & repository
        tags: 
          - 'regex: 1\.11\.(?:[0-9]?[0-9]).*'
      - from: spaceone/board
        to: your_registry_project/spaceone/board       # replace to your registry project & repository
        tags: 
          - 'regex: 1\.11\.(?:[0-9]?[0-9]).*'
      - from: spaceone/dashboard
        to: your_registry_project/spaceone/dashboard       # replace to your registry project & repository
        tags: 
          - 'regex: 1\.11\.(?:[0-9]?[0-9]).*'
  • Add configuration (dregsy-spaceone-plugin.yaml)
relay: skopeo
watch: true

skopeo:
  binary: skopeo
  certs-dir: /etc/skopeo/certs.d

lister:
  maxItems: 100
  cacheDuration: 2h

tasks:
  - name: sync_spaceone_plugin
    interval: 21600 # 6 hours
    verbose: true

    source:
      registry: registry.hub.docker.com
      auth: {Token}                 # replace to your dockerhub token
    target:
      registry: {registry_address}  # replace to your registry address
      auth: {Token}                 # replace to your registry token
      skip-tls-verify: true

    mappings:
      - from: spaceone/plugin-google-cloud-inven-collector
        to: your_registry_project/spaceone/plugin-google-cloud-inven-collector     # replace to your registry project & repository
        tags: 
          - 'semver: >=1.0.0 <1.99.0'
          - 'keep: latest 2'
      - from: spaceone/plugin-azure-inven-collector
        to: your_registry_project/spaceone/plugin-azure-inven-collector     # replace to your registry project & repository
        tags: 
          - 'semver: >=1.0.0 <1.99.0'
          - 'keep: latest 2'
      - from: spaceone/plugin-aws-cloudwatch-mon-datasource
        to: your_registry_project/spaceone/plugin-aws-cloudwatch-mon-datasource     # replace to your registry project & repository
        tags: 
          - 'semver: >=1.0.0 <1.99.0'
          - 'keep: latest 2'
      - from: spaceone/plugin-azure-activity-log-mon-datasource
        to: your_registry_project/spaceone/plugin-azure-activity-log-mon-datasource     # replace to your registry project & repository
        tags: 
          - 'semver: >=1.0.0 <1.99.0'
          - 'keep: latest 2'
      - from: spaceone/plugin-aws-cloudtrail-mon-datasource
        to: your_registry_project/spaceone/plugin-aws-cloudtrail-mon-datasource     # replace to your registry project & repository
        tags: 
          - 'semver: >=1.0.0 <1.99.0'
          - 'keep: latest 2'
      - from: spaceone/plugin-aws-ec2-inven-collector
        to: your_registry_project/spaceone/plugin-aws-ec2-inven-collector     # replace to your registry project & repository
        tags: 
          - 'semver: >=1.0.0 <1.99.0'
          - 'keep: latest 2'
      - from: spaceone/plugin-aws-sns-mon-webhook
        to: your_registry_project/spaceone/plugin-aws-sns-mon-webhook     # replace to your registry project & repository
        tags: 
          - 'semver: >=1.0.0 <1.99.0'
          - 'keep: latest 2'
      - from: spaceone/plugin-aws-trusted-advisor-inven-collector
        to: your_registry_project/spaceone/plugin-aws-trusted-advisor-inven-collector     # replace to your registry project & repository
        tags: 
          - 'semver: >=1.0.0 <1.99.0'
          - 'keep: latest 2'
      - from: spaceone/plugin-azure-monitor-mon-datasource
        to: your_registry_project/spaceone/plugin-azure-monitor-mon-datasource     # replace to your registry project & repository
        tags: 
          - 'semver: >=1.0.0 <1.99.0'
          - 'keep: latest 2'
      - from: spaceone/plugin-email-noti-protocol
        to: your_registry_project/spaceone/plugin-email-noti-protocol     # replace to your registry project & repository
        tags: 
          - 'semver: >=1.0.0 <1.99.0'
          - 'keep: latest 2'
      - from: spaceone/plugin-google-stackdriver-mon-datasource
        to: your_registry_project/spaceone/plugin-google-stackdriver-mon-datasource     # replace to your registry project & repository
        tags: 
          - 'semver: >=1.0.0 <1.99.0'
          - 'keep: latest 2'
      - from: spaceone/plugin-telegram-noti-protocol
        to: your_registry_project/spaceone/plugin-telegram-noti-protocol     # replace to your registry project & repository
        tags: 
          - 'semver: >=1.0.0 <1.99.0'
          - 'keep: latest 2'
      - from: spaceone/plugin-keycloak-identity-auth
        to: your_registry_project/spaceone/plugin-keycloak-identity-auth     # replace to your registry project & repository
        tags: 
          - 'semver: >=1.0.0 <1.99.0'
          - 'keep: latest 2'
      - from: spaceone/plugin-prometheus-mon-webhook
        to: your_registry_project/spaceone/plugin-prometheus-mon-webhook     # replace to your registry project & repository
        tags: 
          - 'semver: >=1.0.0 <1.99.0'
          - 'keep: latest 2'
      - from: spaceone/plugin-slack-noti-protocol
        to: your_registry_project/spaceone/plugin-slack-noti-protocol     # replace to your registry project & repository
        tags: 
          - 'semver: >=1.0.0 <1.99.0'
          - 'keep: latest 2'
      - from: spaceone/plugin-grafana-mon-webhook
        to: your_registry_project/spaceone/plugin-grafana-mon-webhook     # replace to your registry project & repository
        tags: 
          - 'semver: >=1.0.0 <1.99.0'
          - 'keep: latest 2'
      - from: spaceone/plugin-aws-cloud-service-inven-collector
        to: your_registry_project/spaceone/plugin-aws-cloud-service-inven-collector     # replace to your registry project & repository
        tags: 
          - 'semver: >=1.0.0 <1.99.0'
          - 'keep: latest 2'
      - from: spaceone/plugin-aws-phd-inven-collector
        to: your_registry_project/spaceone/plugin-aws-phd-inven-collector     # replace to your registry project & repository
        tags: 
          - 'semver: >=1.0.0 <1.99.0'
          - 'keep: latest 2'
      - from: spaceone/plugin-api-direct-mon-webhook
        to: your_registry_project/spaceone/plugin-api-direct-mon-webhook     # replace to your registry project & repository
        tags: 
          - 'semver: >=1.0.0 <1.99.0'
          - 'keep: latest 2'
      - from: spaceone/plugin-azure-cost-mgmt-cost-datasource
        to: your_registry_project/spaceone/plugin-azure-cost-mgmt-cost-datasource     # replace to your registry project & repository
        tags: 
          - 'semver: >=1.0.0 <1.99.0'
          - 'keep: latest 2'
      - from: spaceone/plugin-aws-cost-explorer-cost-datasource
        to: your_registry_project/spaceone/plugin-aws-cost-explorer-cost-datasource     # replace to your registry project & repository
        tags: 
          - 'semver: >=1.0.0 <1.99.0'
          - 'keep: latest 2'
      - from: spaceone/plugin-ms-teams-noti-protocol
        to: your_registry_project/spaceone/plugin-ms-teams-noti-protocol     # replace to your registry project & repository
        tags: 
          - 'semver: >=1.0.0 <1.99.0'
          - 'keep: latest 2'
      - from: spaceone/plugin-google-monitoring-mon-webhook
        to: your_registry_project/spaceone/plugin-google-monitoring-mon-webhook     # replace to your registry project & repository
        tags: 
          - 'semver: >=1.0.0 <1.99.0'
          - 'keep: latest 2'
      - from: spaceone/plugin-http-file-cost-datasource
        to: your_registry_project/spaceone/plugin-http-file-cost-datasource     # replace to your registry project & repository
        tags: 
          - 'semver: >=1.0.0 <1.99.0'
          - 'keep: latest 2'
      - from: spaceone/plugin-google-cloud-log-mon-datasource
        to: your_registry_project/spaceone/plugin-google-cloud-log-mon-datasource     # replace to your registry project & repository
        tags: 
          - 'semver: >=1.0.0 <1.99.0'
          - 'keep: latest 2'

Run

There is no need to pull the Docker image separately; the commands below will pull it automatically if it is not present locally.

docker run -d --rm --name dregsy_spaceone_core -v /path/to/your/dregsy-spaceone-core.yaml:/config.yaml xelalex/dregsy:0.5.0
docker run -d --rm --name dregsy_spaceone_plugin -v /path/to/your/dregsy-spaceone-plugin.yaml:/config.yaml xelalex/dregsy:0.5.0

Management

  • view log
docker logs -f {container_id|container_name}
  • delete docker container
docker rm {container_id|container_name} [-f]

3.3.6 - Advanced configuration guide

Advanced Configuration Guide of Cloudforet

Title and Favicon

Cloudforet has a default title and CI with the Wanny favicon.

You can change them to your own title and favicon.


Component, file path, and description:
  • Title: /var/www/title.txt (the title text)
  • Favicon: /var/www/favicon.ico (the favicon file)

The console supports changing the title and favicon. The default values are in the source code, but you can overwrite them when deploying the pods.

NOTE: Both the title and the favicon must exist together, even if you only want to configure one of them!


This is an example of the console.yaml values:
console:
  production_json:
    DOMAIN_NAME_REF: hostname
    CONSOLE_API:
      ENDPOINT: https://console-v1.api.example.com
    CONSOLE_API_V2:
      ENDPOINT: https://console-v2.api.example.com
    DOMAIN_IMAGE:
      CI_LOGO: https://raw.githubusercontent.com/cloudforet-io/artwork/main/logo/symbol/Cloudforet_symbol--dark-navy.svg
      CI_TEXT_WITH_TYPE: https://raw.githubusercontent.com/kren-ucloud/artwork/main/logo/KREN-logo.png
      SIGN_IN: https://raw.githubusercontent.com/cloudforet-io/artwork/main/illustrations/happy-new-year-2024.png
      CI_TEXT: https://raw.githubusercontent.com/cloudforet-io/artwork/main/logo/wordmark/Cloudforet_wordmark--primary.svg
  volumeMounts:
    application:
      - name: favicon
        mountPath: /var/www/title.txt
        subPath: title.txt
        readOnly: true
      - name: favicon-img
        mountPath: /var/www/favicon.ico
        subPath: favicon.ico
        readOnly: true

  volumes:
    - name: favicon
      configMap:
        name: favicon
    - name: favicon-img
      configMap:
        name: favicon-img
    - name: timezone
      hostPath:
        path: /usr/share/zoneinfo/Asia/Seoul
    - name: log-volume
      emptyDir: {}
      

The actual values come from Kubernetes ConfigMap objects, so you might have to change the values in an existing ConfigMap or create a new one and mount it in your pod.

Title (title.yaml)

apiVersion: v1
kind: ConfigMap
metadata:
  name: favicon
  namespace: spaceone
data:
  title.txt: |
    KREN UCLOUD

Apply at your Kubernetes cluster.

kubectl apply -f title.yaml -n spaceone

Favicon (favicon.yaml)

The ConfigMap manifest for the new Cloudforet favicon is favicon.yaml.

apiVersion: v1
kind: ConfigMap
metadata:
  name: favicon-img
  namespace: spaceone
binaryData:
  favicon.ico: AAABAAEAAAAAAAEAIADxxxxxxx...

NOTE: favicon.ico must be base64 encoded.

# prepare your favicon.ico file, and encode it to base64 (shell command)
cat favicon.ico | base64
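
Alternatively, kubectl can generate the base64-encoded manifest for you. The sketch below assumes favicon.ico is in the current directory and uses the spaceone namespace from this guide; it writes an equivalent ConfigMap manifest to favicon.yaml.

# Sketch: generate the ConfigMap manifest (kubectl base64-encodes the binary file automatically)
kubectl create configmap favicon-img --from-file=favicon.ico \
  -n spaceone --dry-run=client -o yaml > favicon.yaml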

Apply at your Kubernetes cluster.

kubectl apply -f favicon.yaml -n spaceone

Corporate Identity

When you open a Cloudforet page, you can see the default Cloudforet CI, logo, and text. You can replace the default Cloudforet CI with your company CI.

Login Page

Every Page

Update the Helm values of the console (console -> production_json -> DOMAIN_IMAGE)

keyword: DOMAIN_IMAGE

Configuration keys, descriptions, and image formats:
  • CI_LOGO: Custom logo image (56 x 56 px)
  • CI_TEXT_WITH_TYPE: CI text image (164 x 40 px)
  • SIGN_IN: Sign-in page image (1024 x 1024 px)
  • CI_TEXT: CI text image shown on every page (123 x 16 px)

NOTE: The recommended file format is SVG. If you would like to use a PNG file, use a transparent background and double the recommended size.

NOTE: Cloudforet does not support uploading files, so host the CI files on your own web server or in S3.

console:
  production_json:
    DOMAIN_NAME_REF: hostname
    CONSOLE_API:
      ENDPOINT: https://console-v1.api.example.com
    CONSOLE_API_V2:
      ENDPOINT: https://console-v2.api.example.com
    DOMAIN_IMAGE:
      CI_LOGO: https://raw.githubusercontent.com/cloudforet-io/artwork/main/logo/symbol/Cloudforet_symbol--dark-navy.svg
      CI_TEXT_WITH_TYPE: https://raw.githubusercontent.com/kren-ucloud/artwork/main/logo/KREN-logo.png
      SIGN_IN: https://raw.githubusercontent.com/cloudforet-io/artwork/main/illustrations/happy-new-year-2024.png
      CI_TEXT: https://raw.githubusercontent.com/cloudforet-io/artwork/main/logo/wordmark/Cloudforet_wordmark--primary.svg
  volumeMounts:
    application:
      - name: favicon
        mountPath: /var/www/title.txt
        subPath: title.txt
        readOnly: true
      - name: favicon-img
        mountPath: /var/www/favicon.ico
        subPath: favicon.ico
        readOnly: true

  volumes:
    - name: favicon
      configMap:
        name: favicon
    - name: favicon-img
      configMap:
        name: favicon-img
    - name: timezone
      hostPath:
        path: /usr/share/zoneinfo/Asia/Seoul
    - name: log-volume
      emptyDir: {}

Google Analytics

You can apply Google Analytics to Cloudforet Console by following the steps below.

Create accounts and properties

  1. Log in to your Google account after accessing the Google Analytics site.

  2. Click the Start Measurement button.

    ga_start_01

  3. Enter your account name and click the Next button.

    ga_start_02

  4. Enter a property name and click the Next button.

    For the property name, enter the name of the URL you want to track.

    ga_start_03

  5. Click the Create button.

    ga_start_04

  6. Click the Agree button after agreeing to the data processing terms.

    ga_start_05

Set up data streams

  1. Choose Web as the platform for the data stream you want to collect.

    ga_data_stream_01

  2. Enter your Cloudforet Console website URL and stream name and click the Create Stream button.

    ga_data_stream_02

  3. Check the created stream information and copy the measurement ID.

    ga_data_stream_03

Set up the Cloudforet Helm Chart

Paste the copied measurement ID as the value for the GTAG_ID key in the helm chart settings as shown below.

# frontend.yaml
console:
  ...
  production_json:
    ...
    GTAG_ID: {measurement ID}
    ...

3.3.7 - Create secret by exist cert

This explains how to create and apply a secret using an already issued public or private certificate.

If a public or private certificate has already been issued, you can create a secret through the existing certificate. The following is how to create a secret using the certificate_secret.yaml file.

Create Secret from certificate_secret.yaml file

If the certificate is ready, edit the certificate_secret.yaml file. The file can be downloaded from the link below, and the downloaded content is edited and used as follows. https://github.com/cloudforet-io/charts/blob/master/examples/ingress/on_premise/certificate_secret.yaml

cat <<EOF> certificate_secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: spaceone-tls
  namespace: spaceone           # Change the namespace
data:
  tls.crt: base64 encoded cert  # openssl base64 -in cert.pem -out cert.base64
  tls.key: base64 encoded key   # openssl base64 -in key.pem -out key.base64
type: kubernetes.io/tls
EOF

Apply the certificate_secret.yaml file to the spaceone namespace through the following command.

kubectl apply -f certificate_secret.yaml -n spaceone

4 - User Guide

Guides for Cloudforet Users

4.1 - Get started

Cloudforet provides a service that helps you integrate resources spread across many cloud service providers and manage them systematically.

Learn more about Cloudforet through a user guide.

To use Cloudforet's services, the following three prerequisites must be met:

  • User settings
  • Project settings
  • Service account settings

User settings

Cloudforet users are classified into three types: internal users, external users, and API users.

This section only introduces how to add internal users; how to add external users and API users can be found in the [IAM] user guide.

Adding a user

(1) Click the [Create] button on the [Admin > Users] page.

user-page

(2) In the [Create user] modal, select the [Local] tab.

(2-1) After entering the ID, click the [Check ID] button to check if the ID is valid.

user-create-modal-local-id

(2-2) After entering the name, email, and password to identify the user, click the [OK] button to complete the user creation.

user-create-modal-local-filed

Project settings

Create a project and a project group for systematic resource management.

Creating a project group

Since a project must belong to one project group, you must first create a project group before creating a project.

(1) Click the [Create project group] button on the [Project] page.

project-group-create-button

(2) After entering the project group name in the [Create project group] modal dialog, click the [OK] button to create the project group.

project-group-create-modal

Creating a project

After creating a project group, create a project that will belong to it.

(1) Select the previously created project group from the list of project groups on the left and click the [Create project] button at the top right.

project-group-select

(2) After entering the project name in the [Create project] modal dialog, click the [OK] button to create the project.

project-create-modal

Inviting project group members

You can invite users to a project group to register as a Member of the project group.

(1) Select the previously created project group from the [Project group] list on the left.

(2) Click the [Manage project group members] icon button at the top right.

project-member-icon-button

(3) Click the [Invite] button on the [Manage project group members] page to open the [Invite members] modal dialog.

project-member-invite-button

(3-1) Select the member you want to invite. You can select and invite multiple members at once.

project-member-invite-modal-member-added

(3-2) Select the role to be granted to the members to be invited.

project-member-invite-modal-role-added

(3-3) After entering the labels for the members to invite, press the Enter key to add them.

(3-4) Click the [OK] button to complete member invitation.

project-member-invited

Service account settings

A service account is the cloud service account required to collect resources from the cloud service.

Adding cloud service account

(1) On the [Asset Inventory > Service account] page, select the cloud service you want to add.

service-account-provider-menu

(2) Click the [Add] button.

service-account-add-button

(3) Fill out the service account creation form.

(3-1) Enter basic information.

service-account-add-base-info

(3-2) Specify the project to collect resources from according to the service account.

service-account-connect-project

(3-3) Enter encryption key information.

service-account-add-key

(4) Click the [Save] button to complete.


After completing the above steps, if you want to use Cloudforet’s services more conveniently and in a variety of ways, please see the following guide:

4.2 - User permission

It provides a basic role-based permission system, enabling you to assign user-specific access rights and manage the system effectively, tailored to your organization’s structure and objectives.

Role Type

Roles are defined based on three types:

  • Admin: has access to all workspaces, including domain settings and Admin mode.
  • Workspace Owner: has access to all projects within the workspace.
  • Workspace Member: has access only to projects they are invited to or that are public within the workspace.

You can find detailed information about the permissions for each role type below.



Admin Role Type


✓ Domain-Wide Management

  • Manage all users including admins within the domain
  • Invite and manage users across all workspaces
  • Assign roles: Admin, Workspace Owner, Workspace Member
  • Restrict access to specific service menus based on roles

✓ All Workspace Management

  • Create/Delete/Enable/Disable workspaces
  • Access settings for all workspaces

✓ App (Client Secret) Management

  • Create and manage domain-level access apps (Client Secrets)
  • Assign apps (Client Secrets) to Admin roles

✓ Domain Settings

  • Configure domain display, icons, and other white labeling settings
  • Set the domain timezone and language

✓ Service Management

  • Create data collectors or budget allocations at a global level



Workspace Owner Role Type

✓ Specific Workspace User Management

  • Invite and manage users within the workspace
  • Assign roles: Workspace Owner, Workspace Member

✓ Workspace App (Client Secret) Management

  • Create and manage workspace-level access apps (Client Secrets)
  • Assign apps (Client Secrets) to Workspace Owner roles

✓ Project Management

  • Create new projects and project groups, and invite users to them

✓ Service Management

  • Manage each service within a workspace



Workspace Member Role Type

✓ View data within the invited workspace, with limited management capabilities

✓ Access only to projects they are invited to or that are public within the workspace



Workspace Owner vs Workspace Member



4.3 - Admin Guide

Users with the Admin role type have top-level administrative authority within the domain.
Admins can access all workspaces, including domain settings, and adjust detailed configurations.

Learn more about roles here.

Entering Admin Center

Click the 'Admin' toggle at the top right to switch to Admin mode.


4.3.1 - User Management

You can invite new users, view and manage all users across the domain.

Accessing the Menu

(1) Switch to Admin Center

(2) Navigate to [IAM > User]



Inviting Users

(1) Click the [+ Add] button at the top

(2) Invite users with workspaces and roles assigned

(2-1) Add user account

  • Local: Enter in email format
  • For other SSO such as Google, Keycloak, etc., enter according to the format configured in the domain.

(2-2) Select if the user has the Admin role or not

  • Admin Role ON: No need to select a workspace as it grants access to the entire domain
  • Admin Role OFF: Must select one or more workspaces and assign roles within those workspaces

(2-3) Click the [Confirm] button to complete the user invitation


(3) Check the added user list

Clicking on a specific user allows you to see detailed user information and the list of workspaces the user belongs to.



Editing Users

(1) Click on a specific user, then click the [Actions > Edit] button.

(2) Edit user information:

  • Change Name
  • Change Notification Email: Admins can change the email address and verify it directly.
  • Change Password: Admins can either set a new password directly for the user or send a password reset link via email.

(3) Enable/Disable Users

Select one or more users, then click the [Actions > Enable] or [Actions > Disable] button to change their active status.

4.3.2 - App Settings

You can create and manage apps for generating Client Secrets for API/CLI access.

Accessing the Menu

(1) Switch to Admin Center

(2) Navigate to [IAM > App]



Creating Apps

To use Spacectl, the CLI tool provided by Cloudforet (SpaceONE), an accessible Client Secret is required.

In Admin Center, you can create an app with admin roles and provide its Client Secret key to other users.

(1) Click the [+ Create] button at the top right

(2) Enter the required information:

  1. Enter a name.
  2. Select an Admin role: You can find detailed information about roles here.
  3. Enter tags: input in the 'key:value' format.
  4. Click the [Confirm] button to complete the app creation.

(3) Download the generated files



Regenerating Client Secret

(1) Select the app that needs regeneration.

(2) Click [Actions > Regenerate Client Secret] at the top.

  • The Secret will be regenerated, and you can download the updated configuration files.

4.3.3 - Role Settings

Detailed role management is available through user role types, page access permissions, and API connections.

Accessing the Menu

(1) Switch to Admin Center

(2) Navigate to [IAM > Role]



Using Managed Roles

  • Pre-provided 'Managed' roles allow you to easily identify and quickly assign roles to users:

Domain Admin, Workspace Owner, Workspace Member. (Managed roles cannot be modified or deleted.)



Creating Custom Roles

(1) Click the [+ Create] button at the top

(2) Enter the role name

(3) Select a role type

(4) Set page access permissions

  • The Admin role type has access to the entire domain, so no additional page access permissions are needed.
  • Workspace Owner and Workspace Member can have page access permissions set accordingly.

(5) Click the [+ Create] button to complete the role creation



Editing/Deleting Roles

(1) Select a role

(2) Click [Actions > Edit] or [Actions > Delete] at the top

(3) When 'Edit' is clicked, you will be taken to the role editing page as shown below

4.3.4 - Workspace Settings

Create and manage separate workspace environments according to the size and structure of your organization.

Accessing the Menu

(1) Switch to Admin Center

(2) Navigate to [Preferences > Workspaces]



Creating Workspaces & Inviting Users

Creating a Workspace

(1) Click the [+ Create] button at the top

(2) Enter the basic information and create

  • Enter a name
  • Enter a description
  • Select the main color of the workspace
  • Click the [Confirm] button

Once the workspace is created, you can immediately invite users.



Inviting Users to a New Workspace

(1) Enter user accounts to add them to the list

(2) Select a role

(3) Click the [Confirm] button to complete the invitation

  • You can view the user list at the bottom when you select the created workspace.



Editing Workspaces

After selecting a specific workspace, click the [Actions] button at the top to make the following changes:

  • Edit: Edit the workspace name and description.
  • Delete: Delete the workspace
    • Upon deletion, all users associated with that workspace will lose access.
  • Enable or Disable: Change the activation status of the workspace.
    • When deactivated, all users associated with that workspace will lose access.



Switching to a Workspace

  • Clicking on a specific workspace name will switch to that workspace environment.
  • Switching to a workspace will automatically exit the Admin Center.

4.3.5 - Domain Settings

Provides white labeling features allowing you to customize elements such as domain name, icon, and images.

Accessing the Menu

(1) Switch to Admin Center

(2) Navigate to [Preferences > Domain Settings]



Setting Basic Information

Enter the domain display name and click [Save Changes] to reflect the name in the browser tab as shown below.



Setting Brand Assets

You can apply basic brand assets to the system, such as the main icon and login page image.

Enter the appropriate image URL for each asset and click [Save Changes] to apply them as shown below.



Setting Timezone/Language

You can set the default timezone and language for the domain.

4.3.6 - Notices

You can use the notice feature to view system announcements and post important updates or information related to the management and operation of the domain.

Accessing the Menu

(1) Switch to Admin Center

(2) Navigate to [Info > Notice]



Creating a New Notice

(1) Click the [+ Create Notice] button at the top.

(2) Write the notice:

  • Enter the author's name, title, and body text
  • You can set the notice to be pinned at the top or displayed as a popup
  • Click the [Confirm] button to post the notice

4.3.7 - Data Sources

You can view the data collection results for each data source and manage them by linking connected accounts to workspaces.

Accessing the Menu

(1) Switch to Admin Center

(2) Navigate to [Cost Explorer > Data Sources]



Viewing Detailed Information of Data Sources

(1) View the list of data sources

(2) Select a specific data source to view detailed information

  • Basic information of the data source
  • Recent data collection results



Managing Linked Accounts for a Data Source

(1) Click on a specific data source from the [Cost Explorer > Data Sources] page

(2) On the Linked Account tab, reset or update the workspaces linked to each service account

  • Reset: Unlink the workspaces from selected accounts
  • Update: Re-select and link a different workspace to the selected accounts

4.3.8 - Trusted Accounts

You can add and manage top-level organization accounts for each cloud provider, and automatically sync them to create and update workspaces and projects in Cloudforet (SpaceONE).

Accessing the Menu

(1) Switch to Admin Center

(2) Navigate to [Asset Inventory > Service Account]



Managing Global Trusted Accounts

In Admin Center, you can create and manage global Trusted Accounts that can be used across all workspaces.

💡 Trusted Account is used for the following purposes:

1) Higher-level accounts

  • When creating a new General Account, you can attach a Trusted Account to avoid repeatedly entering secret and access keys, thereby simplifying the process and enhancing security in line with the organization’s structure.

2) Automatic Account Synchronization

  • Instead of entering individual accounts one by one, you can use the Auto Sync feature to automatically link the organizational structure configured in the cloud provider with the SpaceONE system, creating and updating workspaces and projects according to SpaceONE's structure. Detailed instructions for setting up account synchronization are provided below.



Setting Up Trusted Account Auto Sync

[ Basic Structure ]

SpaceONE has a management structure of Workspace > Project Group > Project, with Service Accounts attached to Projects. When cloud resources are collected, they are mapped to a Project, which can then be used for grouping purposes.

➊ Workspace

The top-level management structure that separates workspaces. This can be used to separate environments by company or internal organization.

➋ Project Group

Represents a structure for detailed departments. It commonly has a folder structure.

➌ Project

The lower management structure where actual Cloud resources are mapped. It represents a project unit and can map one or more accounts (Service Accounts) used in the project.

  • Service Account: An account used for actual data collection, which is added to the Project.
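
As a simplified illustration (the actual result depends on the provider and the Mapping Method you choose), a provider organization synced through a Trusted Account could be mapped like this:

- Workspace (e.g., the synced cloud organization)
    - Project Group (e.g., an organizational unit or folder in the provider)
        - Project (e.g., a member account)
            - Service Account (the account used for actual data collection)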

[ Set Auto Synchronization ]

1) Select a provider and click the [+ Create] button

2) Enter Base Information & Credentials

3) Turn Auto Sync ON

  • Set the Mapping Method as needed

  • Set Schedule: Select up to 2 times daily for sync


[ Set Auto Synchronization by Cloud Provider ]



Viewing/Editing Trusted Accounts

1) Select a Trusted Account: Go to [Asset Inventory > Service Account] in Admin Center

2) Check or Edit Base Information

3) Check the List of Connected General Accounts:

💡 With auto sync turned on,

  • Accounts are automatically synced and updated according to the CSP's structure.
  • You can also sync and update accounts immediately via the [Sync Now] button.

4) Check and Edit Auto Synchronization Settings:

  • Set details, turn it On or Off, change schedule, etc.

5) Edit Service Account Name or Delete it:

  • Change the service account name with the [✏️] edit button on the top right next to the title.
  • Delete the service account via the [🗑️] delete button on the top right next to the title.

4.3.9 - Global Asset Management

You can view and utilize the detailed features of resources across all workspaces within the domain.

Accessing the Menu

(1) Switch to Admin Center



Creating Global Collectors

➊ Creating a Collector

(1) In Admin Center, go to [Asset Inventory > Collector]

(2) Click the [+ Create] button.

(3) Select a collector appropriate for the data you need to collect

  • Learn more about collector plugins here

(4) Proceed through Steps 1 to 4

  • In the final step (Step 4), you can set the collection schedule and enable 'Collect Data Immediately' upon creation.


➋ Editing/Deleting a Collector

(1) In Admin Center, go to [Asset Inventory > Collector]

(2) Select the collector you want to modify from the list

(3) In the detailed page of the selected collector, you can edit sections such as:

  • Base Information / Schedule / Additional Options

(4) Edit the collector name or delete it:

  • Use the [✏️] edit button next to the collector name at the top to change the name.
  • Use the [🗑️] delete button next to the collector name at the top to delete the collector.


➌ Collecting Data

(1) In Admin Center, go to [Asset Inventory > Collector]

(2) Hovering over a collector card reveals the [Collect Data] button, allowing immediate data collection

(3) Click a collector to go to the detailed page and use the [Collect Data] button at the top for immediate collection.



Viewing All Resources in the Domain

In Admin mode, you can view all resources collected across all workspaces within the domain.

(1) [Asset Inventory > Cloud Service]: Overview of all cloud service resources.

(2) [Asset Inventory > Server]: Overview of servers within cloud service resources.

(3) [Asset Inventory > Security]: Security status and checklist based on the created security plugin frameworks.

4.3.10 - Global Cost Management

You can view the costs of all workspaces within the domain and utilize detailed features.

Accessing the Menu

(1) Switch to Admin Center



Analyzing Costs from All Workspaces

You can view the total costs incurred across all workspaces at once.

(1) In Admin Center, go to [Cost Explorer > Cost Analysis]

(2) Click the 'Workspace' tab from the list of Group By to view costs by workspace.

(3) Set detailed analysis using the [Filters].

(4) Save as new cost analysis page

  • Predefined analysis pages (e.g., Monthly cost by workspace): Only [Save As] is available.
  • Custom cost analysis pages: You can [Save], [Save As], [Edit/Delete].



Setting Budgets by Workspace

You can create and manage budgets based on workspaces relative to the total incurred costs.

(1) In Admin mode, go to [Cost Explorer > Budget]

[How to set a budget]

a. Click the [+ Create] button

b. Set the budget according to the specific workspace and billing data source

  • Enter a name
  • Select a workspace
  • Select a data source
  • Choose a budget plan (total budget or monthly budget)
  • Click the [Confirm] button



Setting Cost Report

You can configure detailed settings to view cost reports incurred across all workspaces.

(1) In Admin mode, go to [Cost Explorer > Cost Report].

(2) In the 'Next Report' widget, click the [Settings] button to configure the report.

  • Select Language/Currency/Monthly Issue date.

(3) In the 'Report Recipients' widget, configure the recipients.

(4) View the overall report:

  • Cost trends for the last 12 months
  • Monthly total cost summary

(5) Click a specific report to view details

4.4 - Project

Design and manage a hierarchical structure according to the size and structure of your organization, where you can systematically manage the collected cloud resources.

Create a Project group and a Project on the project page of Cloudforet, and invite your members.

project-full-page

4.4.1 - Project

Project is a grouping unit for managing resources.

A project must belong to a specific Project group, and no further hierarchy exists below a project.

Invite a Member to the project and assign a Role that differentiates their access privilege to project resources.

Creating a project

(1) From the [Project group] list on the left side of the [Project] page, select a project group for which you will create a project.

project-full-page

(2) Click the [Create project] button at the top right.

(3) After entering a project name in the [Create project] modal dialog, click the [OK] button to create the project.

project-create-modal

Viewing the project list

From the project list, you can easily check the resource status of the major categories of each project.
You can also enter a search word to see a list of project groups and projects that match your criteria.

Getting a list of all projects

You can view a list of all projects by selecting [All projects] from [Project groups] on the left.

project-click-all-project

Viewing a list of projects in a project group

You can select the project group you want from the [Project group] list on the left to view projects belonging to that group only.

project-click-single-project-group

If there are other project groups under the selected project group, the projects belonging to those nested project groups are not displayed here.

- Project Group A
    - Project Group B
        - Project B-1
        - Project B-2
    - Project A-1
    - Project A-2

For example in the above structure, if you select Project Group A, only Project A-1 and Project A-2 would be displayed in the list.

Exploring projects

Select a project from a list of projects to enter the project detail page.

Project Dashboard

In the [Summary information] tab, you can check the aggregated information of the resources belonging to the project through the project dashboard.

project-dashboard-full-page


The project dashboard shows the status of resource usage and trends by category and region.

In addition, multiple widgets present statistical information about the project in diverse formats, helping you manage resources more efficiently and minimize costs.

Below is a list of widgets on the project dashboard.

  • Alert: Information on alerts that occurred in the project
  • Cost: Cost information for the project
  • Today's resource updates: Resource information updated from midnight local time to the present
  • Cloud services: Information on major cloud services among the services
  • AWS Personal Health Dashboard: Information on AWS Personal Health Dashboard
  • AWS Trusted Advisor: Information on AWS Trusted Advisor

Edit project

Changing project name

(1) Click the [Edit] icon button to the right of the project name.

project-name-edit-icon-button

(2) After entering the name to be changed in the [Change project] modal dialog, click the [OK] button to change the project name.

project-name-edit-modal

Managing project tags

You can manage it by adding tags to your project.

(1) Click the [Edit] button inside the [Tag] tab.

project-tag-table

(2) Click the [Add Tag] button on the [Tag] page.

(3) Enter the value to be added in the form of ‘key:value.’

project-tag-add

(3-1) If you want to add more tags, click the [Add tag] button once for each additional tag.

(4) Click the [Save] button to finish adding tags.

Deleting a project

(1) Click the [Delete] icon button to the right of the project name.

project-delete-icon-button

(2) Click the [OK] button in the [Delete project] modal dialog to delete the project.

project-delete-modal

4.4.2 - Member

Invite Members to a Project and a Project group, and assign a Role to them.

Each member is assigned at least one role, which controls their access to the project and project group.

Manage project group members

You can manage members by entering the [Manage project group members] page.

(1) Select the project group whose members you want to manage from the [Project group] list on the left side of the [Project] page.

(2) Click the [Manage project group members] icon button at the top right.

project-member-icon-button

(3) Enter a search word on the [Manage project group members] page to view a list of members that meet the criteria, invite new members, or edit/delete members.

project-member-search

Inviting project group members

(1) Click the [Invite] button on the [Manage project group members] page to open the [Invite members] modal dialog.

project-member-invite-button

(2) Select the member you want to invite. You can select and invite multiple members at once.

project-member-invite-modal-member

(3) Select the roles to be granted to members that you want to invite.

project-member-invite-modal-role

(4) After entering labels for members to invite, press the Enter key to add them.

(5) Click the [OK] button to complete member invitation.

project-member-invite-success

Editing project group members

You can change the roles and labels granted to members for the project group.

(1) In the [Manage project group members] page, select the member you want to edit.

(2) Select [Edit] from the [Action] dropdown.

project-member-edit-menu

(3) In the [Change member information] modal dialog, enter the contents you want to change and click the [OK] button to complete the change.

project-member-edit-modal

Deleting project group members

(1) In the [Manage project group members] page, select the member you want to delete. Multiple selections are possible.

(2) Select [Delete] from the [Action] dropdown.

project-member-delete-menu

(3) Click the [OK] button in the [Remove member] modal dialog to remove the member.

project-member-delete-modal

Managing project members

You can manage members from the [Member] tab of the project detail page; the procedures are the same as those for managing project group members.

(1) On the [Project] page, select the project whose members you want to manage and go to the project detail page.

(2) Select the [Member] tab.

project-member-tab

4.5 - Dashboards

The Dashboards service visually represents multi-cloud data, such as billing and assets, making complex data easy to understand at a glance. With support for various chart types and graphic elements, you can quickly grasp the essentials of your critical data.

You can create customized dashboards by combining specific widgets to gain a quick overview of your desired data in addition to the default provided dashboards. Furthermore, you can have precise control over variables, date ranges, and detailed options for each widget for each dashboard, allowing you to build and manage more accurate and professional dashboards tailored to your organization's requirements.

4.6 - Asset inventory

Asset inventory allows a user to collect resources based on registered cloud service accounts and view the collected resources.

Cloud provider: refers to a cloud provider offering cloud services such as AWS, Google Cloud, Azure, etc.

Cloud service: refers to a cloud service that a cloud provider offers, as in the case of AWS EC2 Instance.

Cloud resource: refers to resources of cloud services, as in the case of servers of AWS EC2 Instance.

4.6.1 - Quick Start

You may want to go over our Asset inventory service for a quick start below.

Creating a service account

Add a cloud service account in the [Asset inventory > Service account] page.

(1) Select a cloud service to add.

service-account-provider-menu

(2) Click the [Add] button.

service-account-add-button

(3) Fill out the service account creation form.

(3-1) Enter basic information.

service-account-add-base-info

(3-2) Specify the project to collect resources from according to the service account.

service-account-connect-project

(3-3) Enter encryption key information.

service-account-add-key

Creating a collector

On the [Asset Inventory > Collector] page, create a collector to collect resources.

(1) Click the [Create] button.

collector-create-button

(2) Select the plugin to use when collecting resources.

collector-plugin-list

(3) Fill out the collector creation form.

(3-1) Enter basic information such as a name and a version.

collector-create-base-info

(3-2) Add tags if necessary.

collector-create-tag

(4) Create a schedule for running the collector.

(4-1) On the [Asset inventory > Collector] page, select one collector from the table, and then click the [Add] button in the [Schedule] tab.

collector-single-select

(4-2) In the [Add schedule] modal dialog, set the time to run the collector and click the [OK] button.

collector-schedule-modal

Verifying collected resources

You can view the collected resources in [Asset inventory > Cloud service].

collector-resource-inquiry

4.6.2 - Cloud service

The Cloud service page lets you view the diverse cloud resources gathered by collectors in an integrated way and understand their usage status.

Viewing a list of cloud services

The cloud service page displays the status of cloud service usage by Provider.

Advanced Search and filter settings allow you to filter the list by refined criteria.

Choosing a Provider

Select a provider to view cloud services provided through a certain provider only.

cloud-service-provider-menu

Filter settings

You can search with more detailed conditions by setting service classification and region filters.

(1) Click the [Settings] button to open the [Filter Settings] modal dialog.

cloud-service-filter-button

(2) After selecting the desired filter, click the [OK] button to apply it.

cloud-service-filter-modal

Exploring Cloud Service

You can check the details of certain cloud services on the cloud service detail page.

Click a card on the cloud service page to go to the detail page.

cloud-service-select

You can check detailed information about the selected cloud service in the cloud service list on the left.

cloud-service-list-lnb

Viewing a list of resources in cloud services

You can enter a search word to see a list of cloud resources that match your criteria.

See here for a detailed description of Advanced search.

Click the [Excel] icon button to export the list of resources as an Excel file, or click the [Settings] icon button to personalize the table fields.

cloud-sevice-detail-full-page

Viewing the status of cloud service usage

You can check statistical information about the selected cloud service.

cloud-service-single-select

For more detailed information, click the [View chart] button on the right.

cloud-service-chart-modal

Opening cloud resources console

Sometimes you need to work in a console provided by a cloud resources provider.

(1) Select the cloud resource to which you want to connect the console.

(2) Click the [Console connection] button.

cloud-service-connect-console

(3) By clicking the button, open the provider's console in a new tab where you can continue working with the cloud resource.

Below is an example of the AWS EC2 Instance console that was opened.

cloud-service-console-opened

Exploring resources in cloud services

If you select an item you want to look at in the list of cloud resources, you can check information about that resource at the bottom.

  • Details
  • Tag
  • Associated member
  • Change history
  • Monitoring

cloud-resource-single-select

Checking cloud resource details

Detailed information about the selected resource is displayed.

The information displayed here is divided into a Basic tab and a More information tab.

  • Basic tab: Provided by default in the cloud resource details; this covers the [Basic information] and [Original data] tabs.
  • More information tab: All tabs other than the basic tabs are determined by the collector plugin that gathers the resources. For detailed information, see here.

cloud-resource-info-tab

The image above is an example of cloud resources details.

Except for the [Basic information] tab and [Original data] tabs, all other tabs (AMI, Permissions, Tags) offer information added by the collector plugin.

Managing cloud resources tags

There are two types of tags for cloud resources: Managed and Custom. For each cloud resource, you can either view the Managed tags added from the provider or add Custom tags.

Each tag in the form of key: value can be useful when searching for specific resources.

cloud-resource-tag-tab

[ Viewing Managed Tags ]

  • The Managed tags can't be directly edited or removed in Cloudforet.

cloud-resource-tag-tab

[ Creating & Viewing Custom Tags ]

(1) Click the [Edit Custom Tags] button

cloud-resource-tag-tab

(2) After entering the tag in the form of key:value on the tag page, click the [Save] button to complete this process.

cloud-resource-tag-tab

Checking members associated with cloud resources

In the [Associated members] tab, you can check user information that meets the conditions below:

cloud-resource-member-tab

Viewing history of changing cloud resources

In the [Change history] tab, you can quickly identify changes by date/time of the selected cloud resource.

(1) You can select a certain date or search for the content you want to check.

cloud-resource-changes-tab

(2) When you click a certain key value or time period, you can check the details of the corresponding history of changes.

(2-1) Contents of changes: You can check the details of which key values of the resource were updated and how.

cloud-history-diff-tab

(2-2) Logs: With detailed logs supported by providers such as AWS CloudTrail, you can check which events occurred within and around the selected time. This is especially useful for identifying the users who made changes to a particular resource.

cloud-history-log-tab

You can check the detailed log by clicking the key value you want to inspect.

cloud-history-log-modal

(2-3) Notes: By adding and managing notes for a selected time, you can freely manage your own workflow, such as recording which person in charge is related to the change or which process will be used to resolve the issue.

cloud-history-note-tab

Checking cloud resource monitoring information

The [Monitoring] tab shows various metrics for cloud resources.

cloud-resource-monitoring-tab

You can also view metrics for different criteria by changing the [Time range] filter, or by selecting a different statistical method from the [Statistics] dropdown.

If you select multiple resources by clicking the checkbox on the left from the list of cloud resources at the top, you can compare and explore metric information for multiple resources.

cloud-resource-multi-monitoring

Metrics information is collected by the Monitoring plugin, and for detailed information, see here.

4.6.3 - Server

The Server page allows you to check server resources among the diverse cloud service resources collected.

Getting a list of server resources

You can check a list of server resources by entering the server page through the [Asset inventory > Server] menu.

Advanced search allows you to filter the list by elaborate criteria.

Click the [Excel] icon button to export the list of resources as an Excel file, or click the [Settings] icon button to personalize the table fields.

server-full-page

Opening the server resources console

Sometimes you need to work on a console site that a server resources provider offers.

(1) Select the server resource to which you want to connect the console.

(2) Click the [Console connection] button.

server-console-connect

(3) Click the button to open the provider's console in a new tab where you can continue working with the server resource.

Below is an example of the AWS EC2 Instance console that was opened.

server-console-opened

Explore server resources

If you select the item you want to look at from a list of server resources, you can check information about the resource at the bottom.

This works the same as exploring cloud service resources in the [Asset inventory > Cloud service] menu.

4.6.4 - Collector

Cloudforet gathers Cloud resources through a Collector, and its schedule can be set up.

Overview

To collect data with a collector, you need two elements:

Collector plugin

This is an element that defines the specifications of what resources to collect from the Cloud provider, and how to display the collected data on the screen.

Since each provider's data has a different structure and content, a collector relies entirely on its Collector plugin to collect resources.

For detailed information on this, see here.

Service account

To collect resources, you need to connect to an account on the Cloud provider.

A Service Account holds the account information used to connect to your provider's account.

A collector accesses the provider account through the service account created for each provider.

For detailed information on this, see here.

Create a collector

(1) Click the [+ Create] button at the top left.

collector-create-button

(2) Follow the steps on the "Create New Collector" page.

(2-1) On the Plugin List page, find a required collector plugin and click the [Select] button.

collector-plugin-lists

(2-2) Enter the name and version of the collector and click the [Continue] button.

(Depending on the collector, you may be required to select a specific cloud provider.)

collector-plugin-create


(2-3) Select additional options for the collector and click the [Continue] button.

(2-3-1) Service Account: Select either "All" or Specific Service Accounts. If you choose "All," the service accounts associated with the provider related to the collector will be automatically selected for data collection.

(2-3-2) Additional Options: Depending on the collector, there may or may not be additional options to select.

collector-plugin-create

(2-4) You can set up a schedule to automatically perform data collection (optional). Once you have completed all the steps, click the [Create New Collector] button to finalize the collector creation.

collector-plugin-create

(2-5) Once the collector is created, you can collect data immediately.

collector-plugin-create


Get a list of collectors

You can view a list of all collectors that have been created on the collector page.

Advanced search allows you to filter the list by elaborate criteria. For a detailed explanation, see here.

collector-list-inquiry


View/Edit/Delete collector

(1) View Details

(1-1) Select a specific collector card from the list to navigate to its detailed page.

collector-list-select

(1-2) You can view the basic information, schedule, additional options, and attached service accounts.

collector-detail-info-tab


(2) Edit or Delete

(2-1) Click on the [Edit] icon at the top and modify the collector name.

collector-detail-edit

(2-2) If you need to edit details such as base information, schedule, additional options or service accounts, click the [Edit] button in each area.

collector-detail-edit

(2-3) After making the changes, click the [Save Changes] button to complete the modification.

collector-detail-edit

(2-4) If you need to delete a collector, click the [Trash] icon on the top.

collector-detail-delete


Set up automated data collection

After creating a collector, you can still modify the automated data collection schedule for each individual collector.

(1) On the collector list page, you can enable or disable automated data collection for each collector by using the schedule toggle button (On/Off) in the collector card section. You can quickly set and modify the frequency by clicking the [Edit] button.

collector-edit-schedule

collector-edit-schedule

collector-edit-schedule

(2) You can also navigate to the detailed page of each collector and change the schedule.

collector-edit-schedule


Start data collection immediately

You can collect data on a one-time basis without setting up automated data collection.

It allows data collection to take place even when the collector does not have an automated data collection schedule.

Data collection works in two ways:

Collect data for all attached service accounts

A collector needs a provider's account information for data collection, which is registered through a Service account.

(1) Click on [Collect Data]

(Collector list Page) Hover over the collector card area for data collection, and then click the [Collect Data] button.

collector-collect-data

(Collector Detail Page) Click the [Collect Data] button located in the top right corner of the detailed page.

collector-collect-data

collector-collect-data


(2) Proceed with data collection.

(3) Whether the collector has completed data collection can be checked in Collector history. You can click the [View details] link of a selected collector to go to that page.


Collect data for a single service account

When collecting data with a collector, you may only collect data from a specific cloud provider’s account.

(1) Select a collector from the collector list page, and go to its detail page.

(2) You can find the list of attached service accounts at the bottom of the detail page.

collector-service-account

(3) To start data collection, click the [Collect Data] button on the right side of the service account for which you want to collect data.


Checking data collection history

You can check your data collection history on the Collector history page.

You can go to the collector history page by clicking the [Collector history] button at the top of the collector page.

collector-history-at-table

collector-history-at-table

Checking the details of data collection history

If you select a collection history from the list of data collections above, you will be taken to the collection history details page.

You can check data collection status, basic information, and Collection history by service account.

collector-history-detail-full-page

Checking collection history for each service account

When you run the collector, collection is performed separately for each associated service account.

Here you can find information about how the collection was performed for each service account.

collector-history-detail-table

Key field Information
  • Created Count: The number of newly added resources
  • Updated Count: The number of imported resources
  • Disconnected Count: The number of resources that were not fetched
  • Deleted Count: Number of deleted resources (in case of a resource failing to fetch multiple times, it is considered deleted.)

Check the content of collection errors

(1) Select the item you want to check for error details from a list of collections for each account.

(2) You can check the details of errors in the [Error list] tab below.

collector-history-error-list

4.6.5 - Service account

On the Service account page, you can easily integrate, manage, and track your accounts for each cloud service.

Add service account

There are two types of service accounts for different needs and better security.

Create General Account

(1) On the [Asset inventory > Service account] page, select the cloud service you want to add.

service-account-provider-menu

(2) Click the [Add] button.

service-account-add-button

(3) Fill out the service account creation form.

(3-1) Select General Account.

service-account-select-general-accout

(3-2) Enter basic information.

service-account-add-base-info

(3-3) Specify the project to collect resources from according to the service account.

service-account-connect-project

(3-4) Enter encryption key information.

  • Option 1) You can create an account with its own credentials. service-account-add-key

  • Option 2) Create an account using credentials from an existing Trusted Account.

  • In the case of AWS, you can easily create an Assume Role by attaching an existing Trusted Account. If you select a certain Trusted Account, its credential key is inserted automatically, and you only need to enter the rest of the information. service-account-add-key

  • Option 3) You can also create an account without credentials. service-account-add-key

(4) Click the [Save] button to complete.

Create Trusted Account

(1) On the [Asset inventory > Service account] page, select the cloud service you want to add.

service-account-provider-menu

(2) Click the [Add] button.

service-account-add-button

(3) Fill out the service account creation form.

(3-1) Select Trusted Account.

service-account-select-trusted-accout

(3-2) Enter basic information.

service-account-add-base-info-2-2

(3-3) Specify the project to collect resources from according to the service account.

service-account-connect-project

(3-4) Enter encryption key information.

service-account-add-key-2-2

(4) Click the [Save] button to complete.

Viewing service account

You can view a list of service accounts that have been added, and when you click a certain account, you can check the detailed information.

service-account-view-list

Editing service account

Select a service account you want to edit from the list.

service-account-detail-page

Editing each part

You can edit each part of the detailed information by clicking the [Edit] button.

service-account-edit-btn service-account-edit

Removing service account

Select a service account you want to remove from the list.

You can delete it by clicking the delete icon button.

service-account-delete-btn

If the service account is a Trusted Account that is currently attached to one or more General Accounts, it can't be removed.

service-account-cannot-delete

4.7 - Cost Explorer

The Cost Explorer feature traces all expenses incurred by the service accounts registered in Cloudforet. The refined cost data can be viewed in Dashboards or Cost analysis.

The amount used in each period can be checked against a Budget set by a user, and budget usage notifications can also be set up.

4.7.1 - Cost analysis

Cost analysis provides detailed analyses of cost data received from cloud providers.

By grouping or filtering data based on diverse conditions, you can view the desired cost data at a glance.

Verifying cost analysis

Selecting a data source

If you have more than one billing data source connected, you can perform a detailed cost analysis by selecting each data source from the "Cost Analysis" section in the left menu.

cost-analysis-data-source

Selecting the granularity

Granularity is the criterion that determines how data is displayed. The form of the chart or table provided varies depending on the selected granularity.

cost-analysis-granularity

  • Daily: You can review daily accumulated data for a specific month.
  • Monthly: You can check monthly data for a specific period (up to 12 months).
  • Yearly: You can examine yearly data for the most recent three years.

Selecting the period

The available options in the "Period" menu vary depending on the granularity you choose. You can select a menu from the [Period] dropdown or configure it directly through the "Custom" menu.

cost-analysis-period


Group-by settings

You can select one or more group-by items. The chart displays only one selected group-by at a time, while the table shows all of the group-by results you selected.

cost-analysis-groupby

cost-analysis-groupby


Filter settings

Like group-by, one or more filters can be selected, and the configured values are combined with an "AND" condition.

(1) Click the [Filter] button at the top of the page.

(2) When the "Filter Settings" window opens, you can choose the desired filters, and the selections will be immediately reflected in the chart and table.

cost-analysis-filter


Creating/managing custom cost analysis

Creating a custom analysis page

To alleviate the inconvenience of having to reset granularity and period every time you enter the "Cost Analysis" page, a feature is provided that allows you to save frequently used settings as custom analysis pages.

(1) Click the [Save As] button in the upper-right corner of a specific cost analysis page.

cost-analysis-save_as

(2) After entering a name and clicking the [Confirm] button, a new analysis page is created.

cost-analysis-save_to

cost-analysis-saved

(3) Custom cost analysis pages can be saved with settings like name, filters, group-by, etc., directly using the [Save] option, and just like the default analysis pages, you can also create new pages by using [Save As].

cost-analysis-save_saveas


Editing the custom analysis name

You can edit the name by clicking the [Edit] button at the top of the page.

cost-analysis-edit

cost-analysis-edit_name


Deleting the custom analysis page

You can delete the page by clicking the [Delete] button at the top of the page.

cost-analysis-delete

4.7.2 - Budget

Budget is a service that helps manage your budget by setting standards on costs incurred by each project.

Creating a budget

(1) Click the [Create budget] button at the top right of the [Cost Explorer > Budget] page.

budget-create-01

(2) Enter basic information

budget-create-02

(2-1) Enter the name of the budget.

(2-2) Select a billing data source.

(2-3) Select the project to be the target of budget management in the target item.

(2-4) Select the cost incurring criteria. If you select all as the cost type, all cost data related to the corresponding project will be imported.

(3) Enter the budget plan

budget-create-03

(3-1) Set a period for managing the budget.

(3-2) Choose how you want to manage your budget.

(3-3) Set the budget amount. If you selected Set total budget, enter the total budget amount. If you selected Set monthly budget, enter the monthly budget amount.

Check the set budget and usage status

The budget page provides a summary of your budget data and an overview of your budget for each project at a glance. You can use filters at the top to specify a period or apply an exchange rate, and you can search for a specific project or name using an advanced search.

budget-full-page-01

Budget detail page

On the budget detail page, you can view specific data for the created budget.

Budget summary

Under [Budget summary], you can check the monthly budget and cost trends through charts and tables.

budget-detail-01

Set budget usage notifications

In [Budget usage notification settings], you can adjust the settings to receive a notification when budget usage exceeds a certain threshold. When the used amount exceeds a certain percentage of the budget, or the actual amount exceeds a set value, a notification is sent through the notifications channel registered in advance. For example, with a monthly budget of $1,000 and an 80% threshold, a notification is sent once actual costs exceed $800.

budget-alert-01

4.8 - Alert manager

Alert manager in Cloudforet is a service to integrate and manage events of diverse patterns that occur in multiple monitoring systems.

4.8.1 - Quick Start

You may want to go over our Alert manager service for a quick start below.

Creating alerts

Alerts can be created in two ways:

  • Create an alert manually in the Cloudforet console.
  • Create automatically through an external monitoring service connection

Creating an alert manually from a console

(1) Go to the [Alert manager > Alert] page and click the [Create] button.

create-alert-step-1

(2) When the [Create alert] modal dialog opens, fill in the input form.

create-alert-step-2

(2-1) Enter an [Alert title] and select [Urgency].

(2-2) Designate the project for which the alert occurred.

(2-3) Write [Comment] if an additional explanation is needed.

(3) Click the [OK] button to complete alert creation.

Connecting to an external monitoring service to receive alerts

When an external monitoring service is connected, an event message occurring in the service is automatically generated as an alert.
To receive alerts from the external monitoring, Webhook creation and Connection settings are required.


Creating a webhook

To receive event messages from an external monitoring service, you need to create a webhook.
Webhooks can be created on the project detail page.

(1) Go to the [Alerts] tab of the project detail page and select the [Webhook] tab.

create-webhook-step-1

(2) Click the [Add] button.

(3) Write a name in an [Add webhook] modal dialog and select the plug-in of the external monitoring service to be connected.

create-webhook-step-3

(4) Click the [OK] button to complete set up.

Escalation policy settings

Whether an alert received via a webhook is sent as a notification to project members is determined by escalation policy.

(1) Inside the [Alert] tab of the project detail page, move to the [Settings] tab.

create-escalation-policy-step-1

(2) Click the [Change] button in the escalation policy area.

create-escalation-policy-step-2

(3) After selecting the [Create new policy] tab, enter the settings to create an escalation policy.

create-escalation-policy-step-4

  • Exit condition (status): Defines the condition under which the generated alert stops escalating.
  • Range: Indicates the scope in which the escalation policy can be used. If "global," the policy can be used in all projects within the domain; if "project," only within the specified project.
  • Escalation Rules: All levels from LV1 to LV5 can be added. Alerts are sent to the notifications channels belonging to the set level, and from step 2 onward a delay between steps can be given in minutes.
  • Number of repetitions: Defines how many times the alert notification is repeated. Notifications can be repeated up to 9 times.
  • Project (if created from the escalation rules page): If the scope is a project, this indicates the targeted project.

(4) When all settings are completed, click the [OK] button to create the escalation policy.
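
As a conceptual illustration only (the field names below are assumptions, not the actual Cloudforet API schema), an escalation policy combining these settings could be represented like this:

```python
# Conceptual illustration of an escalation policy; field names are
# illustrative assumptions, not the actual Cloudforet API schema.
escalation_policy = {
    "name": "default-policy",
    "finish_condition": "RESOLVED",  # stop escalating once the alert is resolved
    "scope": "PROJECT",              # usable only within the specified project
    "repeat_count": 2,               # repeat the rule set up to 2 more times
    "rules": [
        {"level": "LV1", "escalate_minutes": 0},   # notify LV1 channels immediately
        {"level": "LV2", "escalate_minutes": 30},  # escalate to LV2 channels after 30 minutes
    ],
}
```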

Notifications settings

In the [Notification] tab of the project detail page, you can decide whether or not to Create a notifications channel and enable it.
A notifications channel defines who receives notifications and how, including the transmission method and notification level. It transmits alerts according to the level set in the escalation rule.

(1) On the project detail page, select the [Notification] tab and click the [Add channel] button of the desired notifications channel.

notification-step-1

(2) On the notification creation page, enter the settings to create a notifications channel.

(2-1) Enter the basic information about the notifications channel you want to create, such as the required channel name and notification level. The [Channel name] and [Notification level] are the common basic fields; the remaining fields differ depending on the channel type.

notification-step-3-1

(2-2) You can set a schedule to receive notifications only at certain times.

notification-step-3-2

(2-3) Notifications can be received when an alert occurs or when a budget notification threshold is reached. You can set which of these occasions trigger notifications in [Topic].

notification-step-3-3

(3) Click the [Save] button to complete the notifications channel creation.

(4) Notifications channels that have been created can be checked at the bottom of the [Notification] tab.

notification-step-5

You can control whether to activate the corresponding notifications channel through the toggle button at the top left. Even if a level is set up in the escalation policy, notifications will not be sent unless the notifications channel is activated.

4.8.2 - Dashboard

The dashboard lets the currently logged-in user view the alerts that have occurred, at a glance.

You can check alerts in three main areas, as follows:

Check alerts by state

At the top of the dashboard, you can view alerts by State.
Click each item to go to the Alert details page, where you can check detailed information or implement detailed settings.

view-alert-by-status

Alerts history

The history of alerts that occurred in your projects is displayed.
You can see daily data on the chart, and the increase or decrease in alerts compared to the previous month on the card.

alert-history-1

Project dashboard

[Project dashboard] shows the alert information of each project related to a user.

In the case of [Top 5 project activities], projects are displayed in the order of having the most alerts in the [Open] state.

project-board-1

Below the search bar, projects with alerts are displayed in order of highest activity.
Only projects in an issue state are visible; once all alerts in a project are cleared, the project returns to normal status and is no longer shown on the dashboard.

project-board-2

4.8.3 - Alert

An Alert represents an issue that occurs during service operation and is created mainly to send notifications to relevant users.

State

Alerts have one of the following states:

  • OK: State in which an alert has been assigned and is being processed
  • Created: State in which the alert was first registered
  • Resolved: State in which the issue behind the alert, such as a fault or inspection, has been resolved
  • Error: State in which an event was received through a webhook connection but the alert was not registered normally due to an error

Urgency

Alerts in Cloudforet have one of two urgency levels: high or low.

Manually created alerts are assigned either high or low urgency directly, while alerts created automatically through webhook connections have their urgency determined by the event's Severity.

Creating alerts

Alerts can be created in two ways:

  • Manual creation: Create an alert manually in the Cloudforet console.
  • Auto generation: A webhook receives events from the external monitoring service connected to it and automatically generates an alert by refining the received event message.

Creating an alert manually from a console

(1) Go to the [Alert manager > Alerts] page and click the [Create] button.

create-alert-step-1

(2) When the [Create alert] modal dialog opens, fill in the input form.

create-alert-step-2

(2-1) Enter an [Alert title] and select [Urgency].

(2-2) Designate the project for which the alert occurred.

(2-3) Write [Comment] if an additional explanation is needed.

(3) Click the [OK] button to complete alert creation.

Connecting to an external monitoring service to receive alerts

When an external monitoring service is connected, an event message occurring in the service is automatically generated as an alert.
To receive alerts from the external monitoring, Webhook creation and Connection settings are required.


Creating a webhook

To receive event messages from an external monitoring service, you need to create a webhook.
Webhooks can be created on the project detail page.

(1) Go to the [Alerts] tab of the project detail page and select the [Webhook] tab.

create-webhook-step-1

(2) Click the [Add] button.

(3) Write a name in an [Add webhook] modal dialog and select the plug-in of the external monitoring service to be connected.

create-webhook-step-3

(4) Click the [OK] button to complete set up.

Using Alerts

Let's take a brief look at various ways to use the alert features in Cloudforet.

  • Notifications channel: set up how and when to send alerts to which users.
  • Escalation policy: apply step-by-step rules to effectively forward received alerts to project members.
  • Event rules: Events received through webhooks are turned into Alerts according to the conditions you define.
  • Maintenance window: Register regular and irregular system task schedules to announce the tasks and block notifications for Alerts that occur during them.

Getting a list of alerts

You can view alerts from all projects on the [Alert manager > Alerts] page.
You can search for alerts or change the state of an alert.

Searching for alerts

You can enter a search term to see a list of alerts that match your criteria, and click the title of an alert to check it on the alert detail page.

alert-search

Also, the built-in filtering feature makes it convenient to filter alerts.

For a detailed description on advanced search, see here.

Changing alert state in lists

You can edit an alert state right from the list.

(1) Select an alert to edit the state, and click the desired button from among [OK], [Resolved], and [Delete] in the upper right corner.

update-alert-1

(1-1) Click the [OK] button to change the state to OK

The OK state is a state in which the alert has been assigned and is being processed by a person in charge.
As soon as you change the state, you can set the person in charge of the selected alert to yourself, and click the [OK] button to complete.

update-alert-1-1

(1-2) Click the [Resolved] button to change the state to 'Resolved'

The resolved state means that the issue that caused the alert has been processed.
You can write a note as soon as the state changes, and click the [OK] button to complete.

update-alert-1-2

(1-3) Click the [Delete] button to delete an alert

You can check the alert list to be deleted once again, and click the [OK] button to delete it.

update-alert-1-3

Viewing alerts

You can view and manage details and alert history on the alert detail page.

alert-detail-page

  • Duration: The time during which the alert lasted
  • Description: A description of the alert, either written by a user or taken from the event received from the external monitoring service
  • Rules: The conditions alerted by the external monitoring service
  • Severity: The severity of the data received from the webhook event
  • Escalation policy: The applied escalation policy
  • Project: The alerted project(s)
  • Create: The monitoring service that sent the alert
  • Resource name: The target on which the alert occurred

Renaming and deleting alerts

You can change the name of an alert or delete an alert through the [Edit] and [Delete] icon buttons for each.

update-alert-name-or-delete-alert

Changing state/urgency

State and urgency can be easily changed via the dropdown menus.

update-state-urgency

Changing the person in charge

(1) Click the [Assign] button.

update-assignee-1

(2) Select a person in mind and click the [OK] button to complete the assignment of the person in charge.

update-assignee-2

Editing description

Only users with an administrative role for the alert can edit it.

(1) Click the [Edit] button.

update-description-1

(2) Write changes through a form in an alert description field and click the [Save changes] button to complete such changes.

update-description-2

Changing a project

You can change the project linked with an alert.

(1) Click the [Change] button to change a project.

update-project-1

(2) After selecting a project from a [Select project] dropdown menu, click the [Save changes] button to complete the project change.

update-project-2

Updating to a new state

By recording progress in the alert's state update field, you can quickly grasp its current status.
If you change the content, the previous state update will be lost.

(1) Click the [New update] button.

update-status-1

(2) Input the state in the [New state update] modal dialog, and click the [OK] button to complete the state update.

update-status-2

Adding recipients

Alerts are sent to recipients via Escalation policy.

If you need to send an alert to additional users for that alert, set up [Additional recipients].

add-additional-responder-1

You can view and search a list of available users by clicking the search bar, where multiple selections are possible.

add-additional-responder-2

Adding notes

Members can communicate by leaving notes on an alert, registering inquiries and their answers while the alert is being processed.

add-note

Viewing occurred events

You can view the history of events that occurred for a single alert.

view-pushed-event

If you click one event from a list, you can view the details of that event.

view-pushed-event-detail

Notification policy settings

You can set notifications to be sent only when an alert that occurred in the project has high urgency.

(1) Inside the [Alerts] tab of the project detail page, go to the [Settings] tab.

notification-policy-1

(2) Click the [Edit] icon button in the notification policy area.

notification-policy-2

(3) Select the desired notification policy.

notification-policy-3

(4) Click the [OK] button to complete policy settings.

Auto recovery settings

The auto recovery feature automatically changes the alert to the resolved state when the system recovers.

(1) Inside the [Alerts] tab on the project detail page, move to the [Settings] tab.

auto-recovery-1

(2) Click the [Edit] icon button in the auto recovery area.

auto-recovery-2

(3) Select the desired auto recovery settings.

auto-recovery-3

(4) Click the [OK] button to complete auto recovery settings

4.8.4 - Webhook

You can receive events that occurred in external monitoring services through Webhook.

Creating a webhook

To receive event messages from an external monitoring service, you need to create a webhook.
Webhooks can be created on the project detail page.

(1) Go to the [Alerts] tab of the project detail page and select the [Webhook] tab.

create-webhook-step-1

(2) Click the [Add] button.

(3) Write a name in an [Add webhook] modal dialog and select the plug-in of the external monitoring service to be connected.

create-webhook-step-3

(4) Click the [OK] button to complete set up.
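
A webhook is, in effect, an HTTP endpoint that the external monitoring service calls. The sketch below is a minimal illustration assuming a webhook URL and a JSON payload; the actual URL is issued when the webhook is created, and the payload schema depends on the plugin you selected.

```python
# Minimal sketch: deliver an event to a webhook endpoint with an HTTP POST.
# The URL and JSON fields below are illustrative assumptions; the real URL
# is issued by Cloudforet and the payload format depends on the plugin.
import requests

webhook_url = "https://monitoring-webhook.example.com/webhook/<webhook_id>/<access_key>/events"  # hypothetical

event = {
    "title": "CPU utilization over 90%",
    "severity": "CRITICAL",
    "resource": "i-0123456789abcdef0",
    "occurred_at": "2024-01-01T00:00:00Z",
}

response = requests.post(webhook_url, json=event, timeout=10)
print(response.status_code)
```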

Getting a list of webhooks

You can enter a search word in the search bar to see a list of webhooks that match your criteria. For a detailed description on advanced search, see here.

webhook-search

Editing and deleting webhook

You can enable, disable, change, or delete a webhook viewed from the list.

update-webhook

Enabling/disabling a webhook

If you enable a webhook, events from the external monitoring service connected to it are received as Alerts.
Conversely, if you disable a webhook, incoming events are ignored and no alerts are raised.

(1) Select the webhook to enable and choose the [Enable]/[Disable] menu from the [Action] dropdown.

enable-webhook-1

(2) Check the content in the [Enable/disable a webhook] modal dialog and click the [OK] button.

enable-webhook-2 disable-webhook-2

Renaming a webhook

(1) Select the webhook to change from the webhook list, and select the [Change] menu from the [Action] dropdown.

update-webhook-name-1

(2) Write a name to be changed and click the [OK] button to complete the change.

update-webhook-name-2

Deleting a webhook

(1) Select the webhook to delete from the webhook list, and choose the [Delete] menu from the [Action] dropdown.

delete-webhook-1

(2) After entering the accurate name of the selected webhook, click the [Delete] button to delete the webhook.

delete-webhook-2

4.8.5 - Event rule

By setting an event rule, specific actions are performed automatically when an alert occurs, reducing the hassle of managing alerts manually.

Event rules are project dependent and can be managed on the project detail page.

event-rule-full-page

Create event rules

(1) In the [Settings] tab found in the [Alert] tab of the project detail page, click the [Edit] button of the event rule.

create-event-rule-1

(2) Click the [Add event rule] button.

create-event-rule-2

(3) Enter desired setting values on the event rule page.

create-event-rule-3

(3-1) Set the conditions to perform additional actions on the received alert.

At least one condition must be written, and you can add conditions by clicking the [Add] button on the right or delete them by clicking the [Delete] icon button.

create-event-rule-3-1

(3-2) Specify the action to be performed on the alert that meets the conditions defined above.

create-event-rule-3-2

List of event rules settings

Property | Description
Stop notifications | Suppress notifications for alerts matching the conditions
Project routing | Alerts matching the conditions are received not by the current project but by the project selected under project routing (no alert is created in the current project)
Project dependencies | Alerts matching the conditions can be viewed from the projects registered as project dependencies
Urgency | Automatically assign urgency to alerts matching the conditions. High, Low, or Not set can be specified; in the case of Not set, the following rules apply:
• External monitoring alert: urgency of the object
• Direct creation: High (default)
Person in charge | Automatically assign a person in charge to alerts matching the conditions
Additional recipients | When a notification occurs for an alert matching the conditions, also send it to the specified users
Additional information | Automatically add information to alerts matching the conditions
Stop executing further actions | If this event rule is executed, subsequent event rules are ignored (see Ways and order of event rules action)

Edit event rules

(1) Click the [Edit] button on the event rules page.

update-event-rule-1

(2) Enter the setting values you want for the event rule.

update-event-rule-2

(3) Click the [Save] button to complete editing the event rules.

Delete event rules

(1) Click the [Delete] button on the event rules page.

delete-event-rule-1

(2) In the [Delete event rule] modal dialog, click the [OK] button to complete the deletion.

delete-event-rule-2

Ways and order of event rules action

Event rules set by a user are executed sequentially when an alert occurs.

event-working-system

If event rules are created as in the example above, they are executed in the order of [#1], [#2], etc., starting from the highest event rule.
You can easily change the order of the event rules by clicking the [↑] and the [↓] buttons.

4.8.6 - Maintenance window

You may want to block notifications for alerts that occur during regular or irregular system operations.
Setting a Maintenance window allows you to block notifications during that period.

The maintenance window is project dependent and can be managed on the project detail page.

maintenance

Create maintenance window

(1) Click the [Create maintenance window] button at the top right of the project detail page.

create-maintenance-window-1

(2) Enter a [Title] for a maintenance window and set the schedule to limit the occurrence of the alert.

create-maintenance-window-2-1

When you set the schedule, you can start right away or have it start at a scheduled time.
Select the [Start and end now] option if you want to start immediately, or the [Start at scheduled time] option if you want it to start at a scheduled time.

create-maintenance-window-2-2

(3) Click the [OK] button to complete the creation.

Edit maintenance window

You can only edit maintenance windows that have not yet ended.

(1) Select the [Maintenance window] tab under the [Alerts] tab on the project detail page.

(2) Select the object you want to edit and click the [Edit] button.

update-maintenance-window-1

(3) After changing the desired items, click the [OK] button to complete.

update-maintenance-window-2

Closing maintenance window

(1) Select the [Maintenance window] tab under the [Alerts] tab on the project detail page.

(2) Select the maintenance window to close and click the [Exit] button.

delete-maintenance-window

4.8.7 - Notification

Notifications are a means to deliver alerts.

In the Notifications channel page, you can set up how and when to send alerts to which users.

The notifications channel is project dependent and can be managed on the project detail page.

notification-full-page

Creating a notifications channel

In the [Notification] tab of the project detail page, you can decide whether or not to Create a notifications channel and enable it.

A notifications channel is a unit that defines the recipients, the transmission method, and the notification level. It helps deliver alerts according to the level set in the escalation rule.

(1) On the project detail page, select the [Notification] tab and click the [Add channel] button of the desired notifications channel.

notification-step-1

(2) On the notification creation page, enter the settings to create a notifications channel.

(2-1) Enter the basic information about the notifications channel you want to create, such as the required channel name and notification level. The [Channel name] and [Notification level] are the common basic fields; the remaining fields differ depending on the channel.

notification-step-2-1

(2-2) You can set a schedule to receive notifications only at certain times.

notification-step-2-2

(2-3) Notifications can be received when an alert occurs or when a threshold for budget notifications is reached. By setting up topics, you can choose which notifications you want to receive.
If you select [Receive all notifications], you will receive both types of notifications; if you select [Receive notifications on selected topics], you will receive only notifications related to the topics you selected.

notification-step-2-3

(3) Click the [Save] button to complete the notifications channel creation.

Editing and deleting the notifications channel

Editing the notifications channel

Created notifications channels can be checked under each notifications channel selection.

update-notification-channel-1

You can change the active/inactive status through the toggle button at the top left, and you can edit each item by clicking the [Edit] button of each notifications channel.
When you complete inputting the information, click the [Save changes] button to complete the editing.

update-notification-channel-2

Deleting the notifications channel

You can delete the notifications channels by clicking the [Delete icon] button in the upper right corner.

delete-notification-channel

Cloudforet user channel

The [Add Cloudforet user channel] button exists in the [Notifications channel] item in the project.

cloud-foret-user-channel-1

If you add a Cloudforet user channel, alerts are propagated to the personal channels of project members and then forwarded through the Cloudforet user notifications channel of each user who received them.

cloud-foret-user-channel-2

Creating a Cloudforet user notifications channel

A user notifications channel can be created in [My page > Notifications channel].

create-user-channel

Unlike creating a project notifications channel, there are no notification level settings, and other creation procedures are the same as Creating a project notifications channel.

4.8.8 - Escalation policy

By applying stage-by-stage rules to alerts through escalation policies, received alerts are effectively delivered to project members.

Each rule has a set level, and an alert is spread to the corresponding notifications channel for each level.

Whether an alert received via a webhook is to be sent as a notification to project members is determined by Escalation policy.
Escalation policy can be managed in two places:

  • [Alert manager > Escalation policy] page: Manage escalation policy under the scope of global and project
  • [Project] detail page: Manage escalation policy under the scope of project

Create escalation policy

If you are a user with manage permission on the [Escalation policy] page, you can create an escalation policy.

Create in an [Escalation policy] page

(1) Click the [Create] button on the [Alert manager > Escalation policy] page.

escalation-policy-full-page

(2) Enter the settings to create an escalation policy.

escalation-policy-create-modal

Policy | Description
Exit condition (status) | Define the condition that stops the generated alert.
Range | Indicate the scope in which the escalation policy can be used. A global policy can be used in all projects within the domain; a project policy can be used only within the specified project.
Project | When the range is project, indicates the target project.
Escalation rules | Define rules for sending step-by-step notifications. Alerts are sent to the notifications channels of the set level, and a period between steps can be given in minutes from step 2 onward.
Number of repetitions | Define how many times the alert notification is repeated. Notifications can be repeated up to 9 times.

Create in a [Project] detail page

When you create an escalation policy on the [Project] detail page, the project is automatically designated as an escalation policy target.

(1) Inside the [Alert] tab of the project detail page, go to the [Settings] tab.

create-escalation-policy-1

(2) Click the [Change] button in the escalation policy area.

create-escalation-policy-2

(3) Click the [Create new policy] tab.

create-escalation-policy-3

(4) Enter settings to create an escalation policy.

create-escalation-policy-4

Level

A level is the transmission scope used at each stage when an alert is sent stage by stage.

You can set up a notifications channel in the project, and each notifications channel has its own level.

escalation-policy-level

When defining the escalation rule, you set the [Notification level]. At each set stage, an alert is sent to the notifications channel of the corresponding level.

(5) When all settings are completed, click the [OK] button to create the escalation policy.

Set as default policy

After selecting one from the list of escalation policies, you can set it up as a default by selecting the [Set as default] menu from the [Action] dropdown.

When a new project is created and the alert is activated, the corresponding policy is automatically applied.

set-as-default

Modify and delete escalation

Once you select a target from the escalation policy list, [Modify] and [Delete] become available from the [Action] dropdown.

escalation-policy-update-delete

Edit

Editing uses the same form as the modal dialog opened by the [Create] button, and all items except the range can be edited.

update-escalation-policy

Delete

In case of deletion, you can proceed with deletion through the confirmation modal dialog as shown below:

delete-escalation-policy

4.9 - IAM

You can invite/manage users and configure API/CLI access through app settings within a specific workspace.

4.9.1 - User

You can invite and manage users for a workspace.

Accessing the Menu

(1) Select a specific workspace

(2) Go to [IAM > User]



Inviting Users

(1) Click the [Invite] button at the top

(2) Add user accounts and assign workspace roles

(2-1) Enter & Search user accounts

You can invite both existing users within the domain and external users to the workspace.

  • Local: Enter the email format.
  • If SSO such as Google, Keycloak, etc., is added to the domain, enter according to the corresponding format.

(2-2) Select a workspace access role

(2-3) Click the [Confirm] button to complete the user invitation


(3) Check the invited user list

By clicking on a specific user, you can view detailed user information as well as the list of projects the user belongs to.



Editing Users

Workspace Owners can only modify or remove user roles, and cannot edit other user information.

(1) Change roles

  • Click the dropdown button in the user's Role display to change the role.

(2) Remove users from the workspace

  • Click the [Remove] button to remove the user.

4.9.2 - App

You can create and manage apps for issuing Client Secrets for API/CLI access to a workspace.

Accessing the Menu

(1) Select a specific workspace

(2) Go to [IAM > App]



Creating an App

To use Cloudforet (SpaceONE)'s CLI tool, Spacectl, you need an accessible Client Secret.

You can create an app with the Workspace Owner role in a specific workspace and provide the Client Secret key of that app to other users.

(1) Click the [+ Create] button in the upper right corner

(2) Enter Information

  1. Enter a name
  2. Select the Workspace Owner role: You can find detailed information about roles here.
  3. Enter tags in the 'key:value' format
  4. Click the [Confirm] button to complete the app creation.

(3) Download the generated file



Regenerating Client Secret

(1) Select an app

(2) Click [Actions > Regenerate Client Secret]

  • A new secret will be generated, and you can download the configuration file again.

4.10 - My page

My page allows you to manage your personalized data.

4.10.1 - Account & profile

Account & profile is a page where you can view and edit your personal information.

[My page] can be accessed through the submenu that appears when you click the icon on the far right of the top menu.

account-profile-01

Changing settings

You can change your name, time zone, and language settings on the [My page > Account & profile] page.

account-profile-02

Verifying Notification Email

You can enter and verify a notification email. If your notification email has not been verified yet, you won't be able to receive important system notifications or password reset links.

account-profile-03

Changing the password

If you are an internal user (a user signed in with ID/password), you can change your password on this page.

account-profile-04

4.10.2 - Notifications channel

Notification Channel is a service that allows you to receive various alerts and events from Cloudforet's monitoring system or budget service, or notifications from Cloudforet itself, etc.

Creating notifications

On the [My page > Notifications channel] page, there is an [Add channel] button for each protocol.

notification-channel-01

When you click the [Add channel] button, you will enter the following page. The input form for basic information differs for each protocol, whereas the channel name, notification schedule, and topic subscription options are the same for all protocols.

notification-channel-02

If you select anytime as the schedule, you can receive notifications at any time. If you select set time, you can select the desired day and time.

notification-channel-03

You can also choose to receive all notifications for every topic, or only notifications for the topics you select between alert and budget.

notification-channel-04

Verifying the created notifications channel

When you fill out all input forms and create a notifications channel, you can check the newly created channel as follows:

notification-channel-created-01

Editing the notifications channel

Channels you create can be edited directly from the list.

For protocols whose entered data can be edited (e.g., SMS, voice call), the data, channel name, schedule, and topics can all be edited. For protocols whose data cannot be edited (e.g., Slack, Telegram), the [Edit] button is inactive.

notification-channel-edit-01

4.11 - Information

You can check important information such as recent updates or announcements regarding the use of the console.

4.11.1 - Notice

This is a page where you can check notices written by the Cloudforet system administrator or the administrator of the customer company.

Verifying notices

(1) Quick check for recent notices: After clicking the notification button on the top menu, click the [Notice] tab to check the recently registered notices.

gnb-notice-tab

(2) Check the full list: You can move to the full list of notices page through the submenu that appears when you click the icon on the far right of the top menu.

gnb-profile-menu

Registering notice

A user with a role whose type is [Admin] is permitted to directly create announcements within a related domain.

(1) Enter the [Notice] page, and click the [Register new notice] button to write a new post.

  • The updated notice is open to all users assigned a specific role within a related domain.

notice-list

create-notice

(2) The updated notice can be [modified] or [deleted] later.

4.12 - Advanced feature

Advanced features are designed to use Cloudforet more conveniently.

4.12.1 - Custom table

The custom table feature is useful when a table has many fields or when you want to adjust the field order.

If you click the [Settings] icon button from the table, you can directly set up the table fields.

custom-table-01

Getting field properties

You can sort fields in recommended or alphabetical order, or search by field name. You can also search by your tag fields.

custom-table-02

Selecting/deselecting fields

Fields can be freely deselected/selected from the field table. Select the desired field and click the [OK] button.

custom-table-select-01

Sorting fields

Auto sort

If you click the [Recommended order] or [Alphabetical order] button at the top of the field table, the fields are sorted by the corresponding condition. Sorting applies only to the selected fields.

custom-table-sort-01

Manual sorting

You can manually sort fields by dragging and dropping the [Reorder] icon button to the right of the selected field.

custom-table-sort-02

Reverting to default settings

If you want to revert custom fields to their default settings, click the [Return to Default] button.

custom-table-sort-03

4.12.2 - Export as an Excel file

Export as an Excel file allows you to download table data compatible with Excel.

Click the [Export as an Excel file] icon button from the table.

excel-export-01

The data downloaded to Excel looks as follows; if you configured a custom table to show only some fields, only the data of those fields is exported:

excel-export-02

4.12.3 - Search

The search feature lets you easily refine and check your data.

There are two ways to use the search bar from the data tables: advanced and keyword searches.

The search field provided by SpaceONE makes data searches much more convenient. All searchable field names appear as you hover your mouse cursor over the search bar.

search-query-01

After selecting a field, you can manually enter a value for that field or choose it from a list of suggestions.

search-query-02

Use the keyword search if you want to search all fields rather than limit your search to a specific field. If you type the text in the search bar and press the enter key, the data containing the keyword is filtered in and displayed in the table.

search-keyword-01

You can use advanced and keyword searches together, and multiple search terms are possible. Because the data is filtered with an "or" condition, rows are displayed in the table if any of the field values match.

search-keyword-02

4.13 - Plugin

Let us introduce a "Plugin" feature used in Cloudforet.

4.13.1 - [Alert manager] notification

Cloudforet provides plugins as a Notification method to deliver alerts to users.

Overview

Cloudforet provides plugins as a notification method to deliver alerts to users.
For a list of plugins currently supported by Cloudforet, see the Plugin support list.
You can see more detailed descriptions of the Telegram and Slack connections from the links below.
In addition, Email, SMS, and Voice call are available without any additional settings.

Plugin support list

Plugins | Setup guide link
Telegram | https://github.com/cloudforet-io/plugin-telegram-noti-protocol/blob/master/docs/ko/GUIDE.md
Slack | https://github.com/cloudforet-io/plugin-slack-noti-protocol/blob/master/docs/ko/GUIDE.md
Email | Can be used without additional settings
SMS | Can be used without additional settings
Voice call | Can be used without additional settings

4.13.2 - [Alert manager] webhook

Cloudforet supports plugin-type webhooks so you can receive event messages from various monitoring services.

Overview

Cloudforet supports plugin-type webhooks so you can receive event messages from various monitoring services.
For a list of webhook plugins currently supported by Cloudforet, see the Plugin support list.

In particular, event messages generated by AWS CloudWatch and AWS PHD (Personal Health Dashboard) are collected by Cloudforet through the AWS SNS (Simple Notification Service) webhook.

For the settings guide for each monitoring service, see the setup guide links in the plugin support list below.

Plugin support list

Plugins | Setup guide link
AWS SNS | https://github.com/cloudforet-io/plugin-aws-sns-mon-webhook/blob/master/docs/ko/GUIDE.md
Grafana | https://github.com/cloudforet-io/plugin-grafana-mon-webhook/blob/master/docs/ko/GUIDE.md
Prometheus | https://github.com/cloudforet-io/plugin-prometheus-mon-webhook/blob/master/docs/ko/GUIDE.md
Zabbix | https://github.com/cloudforet-io/plugin-zabbix-mon-webhook/blob/master/docs/ko/GUIDE.md

4.13.3 - [Asset inventory] collector

Cloudforet can collect cloud resources in use by each Cloud provider through a collector plugin.

Overview

Cloudforet can collect cloud resources in use by each Cloud provider through a collector plugin.
For a list of collectors currently supported by Cloudforet, see the Plugin support list below.

First, to use a collector plugin, you must register a Service account.
Since the way to register a service account differs for each cloud provider, such as AWS, Google Cloud, and Azure,
see the setup guide links in the plugin support list below for detailed settings.

Plugin support list

Plugins | Setup guide link
AWS Cloud Services collector | https://github.com/cloudforet-io/plugin-aws-cloud-service-inven-collector/blob/master/docs/ko/GUIDE.md
AWS EC2 Compute collector | https://github.com/cloudforet-io/plugin-aws-ec2-inven-collector/blob/master/docs/ko/GUIDE.md
AWS Personal Health Dashboard collector | https://github.com/cloudforet-io/plugin-aws-phd-inven-collector/blob/master/docs/ko/GUIDE.md
AWS Trusted Advisor collector | https://github.com/cloudforet-io/plugin-aws-trusted-advisor-inven-collector/blob/master/docs/ko/GUIDE.md
Azure Cloud collector | https://github.com/cloudforet-io/plugin-azure-inven-collector/blob/master/docs/ko/GUIDE.md
Google Cloud collector | https://github.com/cloudforet-io/plugin-google-cloud-inven-collector/blob/master/docs/ko/GUIDE.md
Monitoring Metric Collector of Collected Resources | https://github.com/cloudforet-io/plugin-monitoring-metric-inven-collector/blob/master/docs/ko/GUIDE.md

4.13.4 - [Cost analysis] data source

Cloudforet collects cost data on cloud services using a plugin.

Overview

Cloudforet collects cost data for cloud services using a plugin.
For a list of plugins currently supported by Cloudforet, see the Plugin support list.
If there is no suitable plugin, you can develop a plugin fit for your company's billing system
and use it in Cloudforet.

Plugin support list

Plugins | Setup guide link
AWS hyperbilling cost datasource | https://github.com/cloudforet-io/plugin-aws-hyperbilling-cost-datasource/blob/master/docs/ko/GUIDE.md

4.13.5 - [IAM] authentication

As a means of user authentication, Cloudforet provides authentication using accounts from other services through a plugin.

Overview

As a means of user authentication, Cloudforet provides authentication using accounts from other services through a plugin.
For a list of authentication plugins currently supported by Cloudforet, see the Plugin support list.

You can use the Google OAuth2 plugin, which authenticates users through your Google account, and the Keycloak plugin, which supports single sign-on (SSO) via standard protocols.
For more detailed settings, see the setup guide links below.

Plugin support list

Plugins | Setup guide link
Google Oauth2 | https://github.com/cloudforet-io/plugin-googleoauth2-identity-auth/blob/master/docs/ko/GUIDE.md
Keycloak | https://github.com/cloudforet-io/plugin-keycloak-identity-auth/blob/master/docs/ko/GUIDE.md

5 - Developers

Guides for Cloudforet Development

5.1 - Architecture

Cloudforet Architecture guide

5.1.1 - Micro Service Framework

Cloudforet Deep Dive

Cloudforet Architecture

Cloudforet consists of a microservice architecture based on identity and inventory. Each microservice provides a plugin interface for implementation flexibility.

Cloudforet Backend Software Framework

The Cloudforet development team has created its own software framework, similar to Python's Django or Java's Spring. The Cloudforet framework provides a structure for implementing business logic, and each piece of business logic can expose its services in various ways, such as a gRPC interface, a REST interface, or a periodic task.

Layer | Description | Base Class | Implementation Directory
Interface | Entry point of a service request | core/api.py | project/interface/{interface type}/
Handler | Pre/post processing around a service call | |
Service | Business logic of the service | core/service.py | project/service/
Cache | Caching for manager functions (optional) | core/cache/ |
Manager | Unit operation for each service function | core/manager.py | project/manager/
Connector | Interface for data sources (e.g., DB, other microservices) | |
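
As a minimal, hypothetical sketch of this layering (the class and method names below are illustrative and do not use the real spaceone.core base classes), a Service delegates unit operations to a Manager, which in turn would use a Connector to reach a data source:

# Hypothetical illustration of the Service/Manager layering described above.
class SampleManager:
    """Manager layer: one unit operation per service function."""

    def get_greeting(self, name: str) -> dict:
        # A real manager would call a Connector here (DB, another microservice, ...).
        return {"message": f"Hello, {name}!"}


class SampleService:
    """Service layer: business logic, entered from an Interface (gRPC/REST/scheduler)."""

    def __init__(self):
        self.manager = SampleManager()

    def say_hello(self, params: dict) -> dict:
        # Validate the request parameters, then delegate to the manager.
        name = params.get("name", "world")
        return self.manager.get_greeting(name)


if __name__ == "__main__":
    service = SampleService()
    print(service.say_hello({"name": "Cloudforet"}))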

Backend Server Type

Based on the interface type, each microservice works as one of the following:

Interface type | Description
gRPC server | gRPC-based API server that receives requests from the console or the spacectl client
rest server | HTTP-based API server, usually receiving requests from external clients such as Grafana
scheduler server | Periodic task creation server, for example collecting inventory resources every hour
worker server | Periodic task execution server that processes requests from the scheduler server

5.1.2 - Micro Service Deployment

Micro Service Deployment

Cloudforet Deployment

Cloudforet can be deployed with Helm charts. Each microservice has its own Helm chart, and the top chart, spaceone/spaceone, contains all sub-charts such as console, identity, inventory, and plugin.

Cloudforet provides its own Helm chart repository. The repository URL is https://cloudforet-io.github.io/charts

helm repo add spaceone https://cloudforet-io.github.io/charts
helm repo list
helm repo update

helm search repo -r spaceone
NAME                          	CHART VERSION	APP VERSION	DESCRIPTION
spaceone/spaceone             	1.8.6        	1.8.6      	A Helm chart for Cloudforet
spaceone/spaceone-initializer 	1.2.8        	1.x.y      	Cloudforet domain initializer Helm chart for Kube...
spaceone/billing              	1.3.6        	1.x.y      	Cloudforet billing Helm chart for Kubernetes
spaceone/billing-v2           	1.3.6        	1.x.y      	Cloudforet billing v2 Helm chart for Kubernetes
spaceone/config               	1.3.6        	1.x.y      	Cloudforet config Helm chart for Kubernetes
spaceone/console              	1.2.5        	1.x.y      	Cloudforet console Helm chart for Kubernetes
spaceone/console-api          	1.1.8        	1.x.y      	Cloudforet console-api Helm chart for Kubernetes
spaceone/cost-analysis        	1.3.7        	1.x.y      	Cloudforet Cost Analysis Helm chart for Kubernetes
spaceone/cost-saving          	1.3.6        	1.x.y      	Cloudforet cost_saving Helm chart for Kubernetes
spaceone/docs                 	2.0.0        	1.0.0      	Cloudforet Open-Source Project Site Helm chart fo...
spaceone/identity             	1.3.7        	1.x.y      	Cloudforet identity Helm chart for Kubernetes
spaceone/inventory            	1.3.7        	1.x.y      	Cloudforet inventory Helm chart for Kubernetes
spaceone/marketplace-assets   	1.1.3        	1.x.y      	Cloudforet marketplace-assets Helm chart for Kube...
spaceone/monitoring           	1.3.15       	1.x.y      	Cloudforet monitoring Helm chart for Kubernetes
spaceone/notification         	1.3.8        	1.x.y      	Cloudforet notification Helm chart for Kubernetes
spaceone/plugin               	1.3.6        	1.x.y      	Cloudforet plugin Helm chart for Kubernetes
spaceone/power-scheduler      	1.3.6        	1.x.y      	Cloudforet power_scheduler Helm chart for Kubernetes
spaceone/project-site         	1.0.0        	0.1.0      	Cloudforet Open-Source Project Site Helm chart fo...
spaceone/repository           	1.3.6        	1.x.y      	Cloudforet repository Helm chart for Kubernetes
spaceone/secret               	1.3.9        	1.x.y      	Cloudforet secret Helm chart for Kubernetes
spaceone/spot-automation      	1.3.6        	1.x.y      	Cloudforet spot_automation Helm chart for Kubernetes
spaceone/spot-automation-proxy	1.0.0        	1.x.y      	Cloudforet Spot Automation Proxy Helm chart for K...
spaceone/statistics           	1.3.6        	1.x.y      	Cloudforet statistics Helm chart for Kubernetes
spaceone/supervisor           	1.1.4        	1.x.y      	Cloudforet supervisor Helm chart for Kubernetes

Helm Chart Code

Each repository should provide its own helm chart.

The code should be at {repository}/deploy/helm

Every Helm chart consists of four components.

File or Directory | Description
Chart.yaml | Information about this Helm chart
values.yaml | Default values of this Helm chart
config (directory) | Default configuration of this microservice
templates (directory) | Helm template files

The directory looks like

deploy
└── helm
    ├── Chart.yaml
    ├── config
    │   └── config.yaml
    ├── templates
    │   ├── NOTES.txt
    │   ├── _helpers.tpl
    │   ├── application-grpc-conf.yaml
    │   ├── application-rest-conf.yaml
    │   ├── application-scheduler-conf.yaml
    │   ├── application-worker-conf.yaml
    │   ├── database-conf.yaml
    │   ├── default-conf.yaml
    │   ├── deployment-grpc.yaml
    │   ├── deployment-rest.yaml
    │   ├── deployment-scheduler.yaml
    │   ├── deployment-worker.yaml
    │   ├── ingress-rest.yaml
    │   ├── rest-nginx-conf.yaml
    │   ├── rest-nginx-proxy-conf.yaml
    │   ├── service-grpc.yaml
    │   ├── service-rest.yaml
    │   └── shared-conf.yaml
    └── values.yaml

3 directories, 21 files

Depending on the microservice type (frontend, backend, or supervisor), the contents of the templates directory differ.

Implementation

values.yaml

The values.yaml file defines the default values for the templates.

Basic information

###############################
# DEFAULT
###############################
enabled: true
developer: false
grpc: true
scheduler: false
worker: false
rest: false
name: identity
image:
    name: spaceone/identity
    version: latest
imagePullPolicy: IfNotPresent

database: {}
  • enabled: true | false defines whether to deploy this Helm chart
  • developer: true | false for developer mode (recommendation: false)
  • grpc: true if you want to deploy the gRPC server
  • rest: true if you want to deploy the rest server
  • scheduler: true if you want to deploy the scheduler server
  • worker: true if you want to deploy the worker server
  • name: microservice name
  • image: docker image and version for this microservice
  • imagePullPolicy: IfNotPresent | Always
  • database: set this if you want to override the default database configuration

Application Configuration

Each server type (gRPC, rest, scheduler, or worker) has its own specific configuration.

application_grpc: {}
application_rest: {}
application_scheduler: {}
application_worker: {}

This section is used in templates/application-{server type}-conf.yaml and saved as a ConfigMap.

The Deployment file references this ConfigMap in its volumes and then mounts it at /opt/spaceone/{ service name }/config/application.yaml.


For example, the inventory scheduler server needs QUEUES and SCHEDULERS configuration.

You can configure this easily by adding settings under application_scheduler, like this:

application_scheduler:
    QUEUES:
        collector_q:
            backend: spaceone.core.queue.redis_queue.RedisQueue
            host: redis
            port: 6379
            channel: collector

    SCHEDULERS:
        hourly_scheduler:
            backend: spaceone.inventory.scheduler.inventory_scheduler.InventoryHourlyScheduler
            queue: collector_q
            interval: 1
            minute: ':00'

Local sidecar

Use this if you want to append a specific sidecar to this microservice.

# local sidecar
##########################
#sidecar:

Local volumes

Every microservice needs a default timezone and a log directory.

##########################
# Local volumes
##########################
volumes:
    - name: timezone
      hostPath:
          path: /usr/share/zoneinfo/Asia/Seoul
    - name: log-volume
      emptyDir: {}

Global variables

All microservices share some common configuration or sidecars.

#######################
# global variable
#######################
global:
    shared: {}
    sidecar: []

Service

A gRPC or rest server needs a Service.

# Service
service:
    grpc:
        type: ClusterIP
        annotations:
            nil: nil
        ports:
            - name: grpc
              port: 50051
              targetPort: 50051
              protocol: TCP
    rest:
        type: ClusterIP
        annotations:
            nil: nil
        ports:
            - name: rest
              port: 80
              targetPort: 80
              protocol: TCP

volumeMounts

Some microservices may need additional files or configuration. In this case, use volumeMounts, which can attach anything.

################################
# volumeMount per deployment
################################
volumeMounts:
    application_grpc: []
    application_rest: []
    application_scheduler: []
    application_worker: []

POD Spec

We can configure specific values in the Pod spec. For example, we can use nodeSelector to deploy the Pod on a specific K8S worker node.

####################################
# pod spec (append more pod spec)
# example nodeSelect
#
# pod:
#   spec:
#     nodeSelector:
#       application: my-node-group
####################################
pod:
    spec: {}

CI (github action)

If you want to build the Helm chart for this microservice, trigger the Make Helm Chart GitHub Action.

5.2 - Microservices

Cloudforet Micro services

5.2.1 - Console

Console microservice

5.2.2 - Identity

Identity microservice

5.2.3 - Inventory

Inventory microservice

5.2.4 - Monitoring

Monitoring microservice

5.2.5 - Notification

Notification microservice

5.2.6 - Statistics

Statistics microservice

5.2.7 - Billing

Billing microservice

5.2.8 - Plugin

Plugin microservice

5.2.9 - Supervisor

Supervisor microservice

5.2.10 - Repository

Repository Microservice

5.2.11 - Secret

Secret Microservice

5.2.12 - Config

Config Microservice

5.3 - Frontend

Frontend Development Guides

5.4 - Design System

Mirinae is Cloudforet’s open-source design system for products and digital experiences.

Overview

In the hyper-competitive software market, design systems have become a big part of a product’s success. So, we built our design system based on the principles below.

A design system increases collaboration and accelerates design and development cycles. Also, a design system is a single source of truth that helps us speak with one voice and vary our tone depending on the situational context.

Principle

User-centered

Design is the “touch point” for users to communicate with the product. Communication between a user and a product is the key activity for us. We prioritize accessibility, simplicity, and perceivability. We are enabling familiar interactions that make complex products simple and straightforward for users to use.

Clarity

Users need to accomplish complex tasks on our multi-cloud platform. We shorten the thinking process by eliminating confusion for a better user experience, aiming to help users achieve tasks more simply and stay motivated to solve them.

Consistency

Language development is supported by a variety of sensory experiences. We aim to have the best and the most perfectly consistent design system and keep improving the design system by checking usability.

Click the links below to open the resources for Mirinae’s development.

Resources

GitHub

Design system repository

Storybook

Component Library

Figma

Preparing For Release

5.4.1 - Getting Started

This page is a guide to getting started with SpaceONE Design System development.

Setting up the development environment

Fork

The SpaceONE console is currently maintained as an open-source project.

To contribute to development, first fork the Design System repository to your personal GitHub account.

Clone

Then clone the forked repository to your local machine.

Because a repository for assets and translations is used as a submodule, initialize it together.

git clone --recurse-submodules https://github.com/[github username]/spaceone-design-system

cd spaceone-design-system

Run Storybook

To run it, install the dependencies with npm and then run the script below.

npm install --no-save

npm run storybook

Build

To create a deployable zip, run the script below.

npm run build

Storybook


The SpaceONE Design System provides Storybook.

When you create a component, document the component's functional definition through Storybook.

By default, a component is organized in the following structure:

- component-name
    - [component-name].stories.mdx
    - [component-name].vue
    - story-helper.ts
    - type.ts

[component-name].stories.mdx and story-helper.ts

These provide the component's description, usage examples, and a Playground.

The MDX format is used; refer to the documentation for how to use it.

Properties shown in the Playground, such as props, slots, and events, should be written separately in the story-helper for readability.

Chart license


The SpaceONE Design System internally uses amCharts for dynamic charts.

Before using the design system, please check the amCharts license.

To purchase an amCharts license suitable for your application and use it, refer to the license FAQ.

Style


For style definitions, the SpaceONE Console uses Tailwind CSS and PostCSS.

Tailwind is customized according to SpaceONE's color palette. (See Storybook for details.)

5.5 - Backend

Backend Core Service Development Guides

5.6 - Plugins

Cloudforet Plugin Development Guide

5.6.1 - About Plugin

Concept of Plugin Interface

About Plugin

A plugin is a software add-on that is installed on a program, enhancing its capabilities.
The plugin interface pattern consists of two types of architecture components: a core system and plug-in modules. Application logic is divided between independent plug-in modules and the basic core system, providing extensibility, flexibility, and isolation of application features and custom processing logic.

Why Cloudforet uses a Plugin Interface

  • Cloudforet wants to accommodate various clouds on one platform: multi-cloud, hybrid cloud, anything.
  • We want Cloudforet to work not only with clouds, but also with various IT solutions.
  • We want to become a platform that can contain various infrastructure technologies.
  • It is difficult to predict the future direction of technology, but we want to be a flexible platform that can coexist with any direction.

Integration Endpoints

Micro Service | Resource | Description
Identity | Auth | Support single sign-on for a specific domain (e.g., OAuth2, Active Directory, Okta, OneLogin)
Inventory | Collector | Any resource objects for inventory (e.g., AWS inventory collector)
Monitoring | DataSource | Metric or log information related to inventory objects (e.g., CloudWatch, Stackdriver, ...)
Monitoring | Webhook | Any event from monitoring solutions (e.g., CPU or memory alerts, ...)
Notification | Protocol | Specific event notifications (e.g., Slack, Email, Jira, ...)

5.6.2 - Developer Guide

Developer Guide

Plugins can be developed in any language because both microservice and plugin communication use Protobuf by default. The basic structure is the same as developing a server with a gRPC interface.

You can develop plugins in any language the gRPC interface supports, but development is easier if you use the Python framework we provide. All currently provided plugins were developed with this Python-based framework.

For basic usage of the framework, refer to the following.

The following are the basic requirements to check when developing a plugin; detailed step-by-step information can be found on each page.

5.6.2.1 - Plugin Interface

Check Plugin Interface

First, check the interface between the plugin to be developed and the core service. The interface structure is different for each service. You can check the gRPC interface information about this in the API document. (SpaceONE API)

For example, suppose we are developing an Auth plugin for Identity authentication. The interface information of the Auth plugin is as follows. (SpaceONE API - Identity Auth)





To develop the Identity Auth plugin, a total of four API interfaces must be implemented. Of these, init and verify are interfaces that all plugins need; the rest depend on the characteristics of each plugin.

Let's take a closer look at init and verify, which must be implemented by all plugins.

1. init

Plugin initialization. In the case of Identity, when a domain is created, you decide which authentication to use and the related Auth plugin is deployed. When the plugin is first deployed (or its version is updated), the core service calls the plugin's init API after the plugin container is created. The plugin then returns the metadata the core service needs to communicate with it. The metadata differs for each core service.

Below is an example of Python code for an Auth plugin's init implementation. Metadata is returned as the return value, with the various information required by Identity added to it.

    @transaction
    @check_required(['options'])
    def init(self, params):
        """ verify options
        Args:
            params
              - options
        Returns:
            - metadata
        Raises:
            ERROR_NOT_FOUND:
        """
        
        manager = self.locator.get_manager('AuthManager')
        options = params['options']
        options['auth_type'] = 'keycloak'
        endpoints = manager.get_endpoint(options)
        capability = endpoints
        return {'metadata': capability}

2. verify

Check the plugin's normal operation. After the plugin is deployed and the init API has been called, a check is performed to see whether the plugin is ready to run; the API called at this point is verify. The verify step confirms that the plugin is ready to operate normally.

Below is an example of Python code for the verify implementation of the Google OAuth2 plugin. The verify action is performed using the values required for Google OAuth2 operation; before the actual logic runs, this verification-level code confirms that the plugin can operate normally.

    def verify(self, options):
        # This is connection check for Google Authorization Server
        # URL: https://www.googleapis.com/oauth2/v4/token
        # After connection without param.
        # It should return 404
        r = requests.get(self.auth_server)
        if r.status_code == 404:
            return "ACTIVE"
        else:
            raise ERROR_NOT_FOUND(key='auth_server', value=self.auth_server)

5.6.2.2 - Plugin Register

Plugin Register

When plugin development is complete, you need to prepare for distribution. Since all SpaceONE plugins are distributed as containers, the developed plugin code must be built as a container image. The container is built with docker build using a Dockerfile, and the resulting image is uploaded to an image registry such as Docker Hub. The image is then registered in the storage managed by the Repository service, a SpaceONE microservice.


Once the image has been uploaded to the registry, you need to register it in the Repository microservice. Registration uses the Repository.plugin.register API. (SpaceONE API - (Repository) Plugin.Register)


The example below is the parameter content delivered when registering the Notification Protocol Plugin. The image value contains the address of the previously built image.

name: Slack Notification Protocol
service_type: notification.Protocol
image: pyengine/plugin-slack-notification-protocol_settings
capability:
  supported_schema:
  - slack_webhook
  data_type: SECRET
tags:
  description: Slack
  "spaceone:plugin_name": Slack
  icon: 'https://spaceone-custom-assets.s3.ap-northeast-2.amazonaws.com/console-assets/icons/slack.svg'
provider: slack
template: {}

Image registration is not yet supported in the web console, so use the gRPC API directly or use spacectl. After creating the YAML file as above, you can register the image with the spacectl command shown below.

> spacectl exec register repository.Plugin -f plugin_slack_notification_protocol.yml

When the image is registered in the Repository, you can check it as follows.

> spacectl list repository.Plugin -p repository_id=<REPOSITORY_ID>  -c plugin_id,name
plugin_id                              | name
----------------------------------------+------------------------------------------
 plugin-aws-sns-monitoring-webhook      | AWS SNS Webhook
 plugin-amorepacific-monitoring-webhook | Amore Pacific Webhook
 plugin-email-notification-protocol_settings     | Email Notification Protocol
 plugin-grafana-monitoring-webhook      | Grafana Webhook
 plugin-keycloak-oidc                   | Keycloak OIDC Auth Plugin
 plugin-sms-notification-protocol_settings       | SMS Notification Protocol
 plugin-voicecall-notification-protocol_settings | Voicecall Notification Protocol
 plugin-slack-notification-protocol_settings     | Slack Notification Protocol
 plugin-telegram-notification-protocol_settings  | Telegram Notification Protocol

 Count: 9 / 9

Detailed usage of spacectl can be found on this page. Spacectl CLI Tool

5.6.2.3 - Plugin Deployment

Plugin Deployment

To actually deploy and use the registered plugin, a pod must be deployed in the Kubernetes environment based on the plugin image. Plugin deployment is performed automatically by the service that wants to use the plugin.

For example, Notification uses an object called Protocol to deliver generated alerts to users. The Protocol.create action automatically triggers installation of the corresponding plugin.

The example below shows the Protocol.create parameters for creating a Slack protocol that sends alerts to Slack through Notification.

---
name: Slack Protocol
plugin_info:
  plugin_id: plugin-slack-notification-protocol_settings
  version: "1.0"
  options: {}
  schema: slack_webhook
tags:
  description: Slack Protocol

In plugin_id, put the ID of the plugin registered in the Repository. In version, put the image tag used when uploading the image to an image registry such as Docker Hub. If the image registry has multiple tags, the plugin is deployed with the image of the specified tag version.

In the case above, because the version was specified as "1.0", the "1.0" tag image is deployed among the tags shown below.



The API takes some time to respond because it goes through the steps of creating and deploying a Service and a Pod in the Kubernetes environment. You can verify the pod deployment in the Kubernetes cluster as follows:

> k get po
NAME                                                              READY   STATUS    RESTARTS   AGE
plugin-slack-notification-protocol_settings-zljrhvigwujiqfmn-bf6kgtqz   1/1     Running   0          1m

5.6.2.4 - Plugin Debugging

How to Debug SpaceONE Plugins

Using Pycharm


We recommend using PyCharm (the most popular Python IDE) to develop and test plugins. The overall setup process is as follows.

1. Open projects and dependencies

First, open the identity, python-core, and api projects one by one.

  • Click Open

  • Select your project directory. In this example '~/source/cloudone/identity'

  • Click File > Open, then select the related projects one by one. In this example, '~/source/cloudone/python-core'

  • Select New Window for an additional project. You might need to do this several times if you have multiple projects. Ex) python-core and api

  • Now we have 3 windows. Just close python-core and API projects.

  • Once you have opened a project at least once, you can attach projects to each other. Let's do it in the identity project. Do this again: Open > select your other project directory. In this example, python-core and api.

But this time, you can ATTACH it to the identity project.

You can attach a project as a module if it was imported at least once.

2. Configure Virtual Environment

  • Add additional python interpreter

  • Click virtual environment section

  • Designate the base interpreter as 'Python 3.8' (Python 3 needs to be installed beforehand)

  • Then click 'OK'

  • Return to 'Python interpreter > Interpreter Settings..'

  • The list of Python packages installed in the virtual environment will be displayed

  • Click the '+' button, then search for and install the packages below via 'Install Package'

  • 'spaceone-core'

  • 'spaceone-api'

  • 'spaceone-tester'

  • Additional libraries are in 'pkg/pip_requirements.txt' in every repository. You also need to install them.

  • Repeat the above process, or install them through the command line:

$> pip3 install -r pip_requirements.txt

3. Run Server

  • Set source root directory
  • Right click on 'src' directory 'Mark Directory as > Resource Root'
  • Set test server configuration
  • Fill in the test server configuration as below, then click 'OK'
Item | Configuration | Etc
Module name | spaceone.core.command |
Parameters | grpc spaceone.inventory -p 50051 | The -p option means port number (can be changed)
  • You can run the test server with the 'play button' on the upper right side of the IDE

4. Execute Test Code

Every plugin repository has its own unit test files in the 'test/api' directory.

  • Right click on 'test_collector.py' file
  • Click 'Run 'test_collector''

Some plugins need credentials to interface with other services. You need to create a credentials file and set it via environment variables before running.

  • Go to test server configuration > test_server > Edit Configurations

  • Click Edit variables

  • Add environment variable as below

Item | Configuration | Etc
PYTHONUNBUFFERED | 1 |
GOOGLE_APPLICATION_CREDENTIALS | Full path of your configuration file |

Finally, you can test-run your server:

  • First, run test server locally

  • Second, run unit test

Using Terminal

5.6.3 - Plugin Designs

Plugin designs for each cloud services

Inventory Collector

With the Inventory Collector plugin, everyone from system engineers without professional development knowledge to professional cloud developers can conveniently collect the cloud asset information they want and manage it systematically. The collected asset information can also be easily presented in the user UI.

Inventory Collector plugins can be developed based on SpaceONE's gRPC framework core modules (spaceone-core, spaceone-api). The documents below show the detailed specifications for each cloud provider.

AWS

Azure

Google Cloud


Google Cloud VM Instance


Google Cloud SQL Instance


Google Cloud Disk


Google Cloud External IP Address


Google Cloud Instance Group


Collecting Google Cloud Instance Template


Collecting Google Cloud Load Balancing


Collecting Google Cloud Machine Image


Collecting Google Cloud Route


Collecting Google Cloud Snapshot


Collecting Google Cloud Storage Bucket


Collecting Google Cloud VPC Network

Identity Authentication

Monitoring DataSources

Alert Manager Webhook

Notifications

Billing

5.6.4 - Collector

Inventory Collector plugin development

Add new Cloud Service Type

To add a new Cloud Service Type, implement the following components:

Component | Source Directory | Description
model | src/spaceone/inventory/model/skeleton | Data schema
manager | src/spaceone/inventory/manager/skeleton | Data merge
connector | src/spaceone/inventory/connector/skeleton | Data collection

Add model
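
As a minimal sketch only (assuming the schematics-based model style used by existing collector plugins; the resource and field names below are made up for illustration), a data-schema model for a new cloud service type could look like this:

# Hypothetical data-schema model for a new cloud service type.
# Existing collector plugins define their models with the schematics library;
# the class and field names here are illustrative only.
from schematics.models import Model
from schematics.types import StringType, IntType, ListType

class SampleBucket(Model):
    name = StringType()
    region = StringType()
    size_bytes = IntType(default=0)
    tags = ListType(StringType, default=[])

# Example usage: validate raw data returned by the connector layer.
raw = {"name": "my-bucket", "region": "us-east-1", "size_bytes": 1024}
bucket = SampleBucket(raw)
bucket.validate()
print(bucket.to_primitive())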

5.7 - API & SDK

API & SDK Guide

https://cloudforet-io.github.io/api-doc/

5.7.1 - gRPC API

API Specification and Build

Developer Guide

This guide explains the new SpaceONE API specification which extends the spaceone-api.

git clone https://github.com/cloudforet-io/api.git

Create new API spec file

Create a new API spec file for the new microservice. The file location must be:

proto/spaceone/api/<new service name>/<version>/<API spec file>

For example, the APIs for the inventory service are defined at:

proto
└── spaceone
    └── api
        ├── core
        │   └── v1
        │       ├── handler.proto
        │       ├── plugin.proto
        │       ├── query.proto
        │       └── server_info.proto
        ├── inventory
        │   ├── plugin
        │   │   └── collector.proto
        │   └── v1
        │       ├── cloud_service.proto
        │       ├── cloud_service_type.proto
        │       ├── collector.proto
        │       ├── job.proto
        │       ├── job_task.proto
        │       ├── region.proto
        │       ├── server.proto
        │       └── task_item.proto
        └── sample
            └── v1
                └── helloworld.proto

If you create a new microservice called sample, create the directory proto/spaceone/api/sample/v1.

Define API

After creating the API spec file, define the gRPC protobuf content.

The content consists of two sections: service and message.

service defines the RPC methods, and message defines the request and response data structures.

syntax = "proto3";

package spaceone.api.sample.v1;

// desc: The greeting service definition.
service HelloWorld {
  // desc: Sends a greeting
  rpc say_hello (HelloRequest) returns (HelloReply) {}
}

// desc: The request message containing the user's name.
message HelloRequest {
  // is_required: true
  string name = 1;
}

// desc: The response message containing the greetings
message HelloReply {
  string message = 1;
}

Build the API spec for a specific language

Protobuf cannot be used directly; it must be translated into a target language such as Python or Go.

If you create a new microservice directory, update the Makefile by appending the directory name to TARGET:

TARGET = core identity repository plugin secret inventory monitoring statistics config report sample

Currently, the API supports Python output.

make python

The generated Python output is located in the dist/python directory.

dist
└── python
    ├── setup.py
    └── spaceone
        ├── __init__.py
        └── api
            ├── __init__.py
            ├── core
            │   ├── __init__.py
            │   └── v1
            │       ├── __init__.py
            │       ├── handler_pb2.py
            │       ├── handler_pb2_grpc.py
            │       ├── plugin_pb2.py
            │       ├── plugin_pb2_grpc.py
            │       ├── query_pb2.py
            │       ├── query_pb2_grpc.py
            │       ├── server_info_pb2.py
            │       └── server_info_pb2_grpc.py
            ├── inventory
            │   ├── __init__.py
            │   ├── plugin
            │   │   ├── __init__.py
            │   │   ├── collector_pb2.py
            │   │   └── collector_pb2_grpc.py
            │   └── v1
            │       ├── __init__.py
            │       ├── cloud_service_pb2.py
            │       ├── cloud_service_pb2_grpc.py
            │       ├── cloud_service_type_pb2.py
            │       ├── cloud_service_type_pb2_grpc.py
            │       ├── collector_pb2.py
            │       ├── collector_pb2_grpc.py
            │       ├── job_pb2.py
            │       ├── job_pb2_grpc.py
            │       ├── job_task_pb2.py
            │       ├── job_task_pb2_grpc.py
            │       ├── region_pb2.py
            │       ├── region_pb2_grpc.py
            │       ├── server_pb2.py
            │       ├── server_pb2_grpc.py
            │       ├── task_item_pb2.py
            │       └── task_item_pb2_grpc.py
            └── sample
                ├── __init__.py
                └── v1
                    ├── __init__.py
                    ├── helloworld_pb2.py
                    └── helloworld_pb2_grpc.py
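
Assuming the generated dist/python package is installed and a gRPC server implementing the sample HelloWorld service is running locally on port 50051 (both are assumptions for illustration), the generated stubs can be used like this:

import grpc

from spaceone.api.sample.v1 import helloworld_pb2, helloworld_pb2_grpc

# Connect to a locally running server (address and port are assumptions).
channel = grpc.insecure_channel("localhost:50051")
stub = helloworld_pb2_grpc.HelloWorldStub(channel)

# Build the request message defined in helloworld.proto and call the RPC.
request = helloworld_pb2.HelloRequest(name="Cloudforet")
reply = stub.say_hello(request)
print(reply.message)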

References

[Google protobuf] https://developers.google.com/protocol-buffers/docs/proto3

5.8 - CICD

Detailed Explanation of Cloudforet CICD Pipeline and Architecture

Actions

Cloudforet has 100+ repositories, 70 of which are application repositories with GitHub Action workflows.

Because of that, it is very difficult to handle them one by one when a workflow needs to be updated or run.

To solve this problem, we created Actions.

The diagram below shows the relationship between Actions and repositories.

What does Actions actually do?

Actions is a control tower that manages and deploys GitHub Action workflows for Cloudforet's core services.
It can also bulk-trigger these workflows when a new version of Cloudforet's core services needs to be released.

1. Manage and deploy GitHub Action workflows for Cloudforet's core services.

All workflows for Cloudforet's core services are managed and deployed in this repository.

We write workflows according to our workflow policy and put them in the workflows directory of Actions.
Then these workflows can be deployed into the repositories of Cloudforet's core services.

Our DevOps engineers can modify workflows according to our policy and deploy them in batches using this feature.

The diagram below shows the process for this feature.

*) If you want to see the Actions script that appears in the diagram, see here.

2. Trigger workflows when a new version of Cloudforet's core services needs to be released.

When a new version of Cloudforet's core services is released, we need to trigger the workflow of each repository.
To do this, we created a workflow in Actions that can trigger the workflows of each repository.

Reference

Cloudforet CICD Project, Actions

5.8.1 - Frontend Microservice CI

Detailed Explanation of Frontend Microservice Repository CI

Frontend Microservice CI process details



The flowchart above describes the four .yml GitHub Action files used in the CI process of frontend microservices. Unlike backend microservices, frontend microservices are not released as packages, so the branch tagging job does not include building and uploading an NPM package. Frontend microservices only build the software and upload the image to Docker, not to NPM or PyPI.


To check the details, go to the .github/workflows directory in each repository. We provide an example of the workflow directory of the frontend microservices at the link below.


5.8.2 - Backend Microservice CI

Detailed Explanation of Backend Microservice Repository CI

Backend Microservice CI process details



The flowchart above describes the four .yml GitHub Action files used in the CI process of backend microservices. Most of the workflow is similar to the frontend microservices' CI. However, unlike frontend microservices, backend microservices are released as packages, so the process includes building and uploading a PyPI package.


To check the details, go to the .github/workflows directory in each repository. We provide an example of the workflow directory of the backend microservices at the link below.


5.8.3 - Frontend Core Microservice CI

Detailed Explanation of Frontend Core Microservice Repository CI

Frontend Core Microservice CI



Frontend Core microservices' code is integrated, built, and uploaded following the flow explained above. Most of the workflows include a set-up process: setting up Node.js, caching node modules, and installing dependencies. After set-up, each repository's workflow moves on to the build process, which runs in NPM. After building, the packages of both repositories are released to NPM with the command npm run semantic-release.


See the semantic-release page on npm (npm: semantic-release) for further details about the release process.


Also, unlike other repositories, which are deployed through the flow from Docker to Spinnaker and Kubernetes, the spaceone-design-system repository is deployed differently, directly through AWS S3.


To check the details, go to the .github/workflows directory in each repository.


5.8.4 - Backend Core Microservice CI

Detailed Explanation of Backend Core Microservice Repository CI

Backend Core Microservice CI process details



The four workflow-related GitHub Action files of the Backend Core microservices are explained in the diagram above. Unlike the other repositories, pushes with tags are monitored and trigger building and uploading the package to PyPI for testing purposes, instead of workflow tasks for master branch pushes.


Also, Backend Core microservices are not built and uploaded to Docker; they are only managed on PyPI.


To check the details, go to the .github/workflows directory in each repository.


5.8.5 - Plugin CI

Detailed Explanation of Plugin Repository CI

Plugin CI process details


Plugin repositories, whose names start with 'plugin-', have a unique CI process managed by a workflow file named push_sync_ci.yaml. As the overall CI architecture differs from that of other repositories, the workflow files of plugin repositories are automatically updated on every code commit.



We can follow the plugin CI process step by step.


Step 1. push_sync_ci.yaml in each plugin repository is triggered by a master branch push or manually.

Step 2. push_sync_ci.yaml runs cloudforet-io/actions/.github/workflows/deploy.yaml.

Step 2-1. cloudforet-io/actions/.github/workflows/deploy.yaml runs cloudforet-io/actions/src/main.py.

  1. cloudforet-io/actions/src/main.py updates each plugin repository's workflow files based on the repository characteristics distinguished by topics (a simplified sketch of this idea follows the steps below). The newest versions of all plugin repository workflow files are managed in cloudforet-io/actions.

Step 2-2. cloudforet-io/actions/.github/workflows/deploy.yaml runs push_build_dev.yaml in each plugin repository.

  1. push_build_dev.yaml performs versioning based on the current date.

  2. push_build_dev.yaml uploads the plugin image to Docker.

  3. push_build_dev.yaml sends a notification through Slack.
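
The sketch below illustrates, in a simplified and hypothetical form, the topic-based workflow sync described in Step 2-1: read a repository's topics, pick a matching workflow template, and push it into the repository's .github/workflows directory through the GitHub REST API. It is only an illustration of the idea, not the actual cloudforet-io/actions/src/main.py; the token, repository names, and helper names are placeholders.

# sync_workflows_sketch.py - a hypothetical illustration of topic-based workflow sync.
# NOTE: this is NOT the actual cloudforet-io/actions/src/main.py; names are placeholders.
import base64

import requests

GITHUB_API = 'https://api.github.com'
HEADERS = {
    'Authorization': 'token ghp_xxx',             # placeholder personal access token
    'Accept': 'application/vnd.github+json',
}


def get_topics(repo_full_name):
    # Repository topics distinguish plugin characteristics (e.g. collector, webhook).
    url = '{}/repos/{}/topics'.format(GITHUB_API, repo_full_name)
    return requests.get(url, headers=HEADERS).json().get('names', [])


def deploy_workflow(repo_full_name, path, content):
    # Create or update a workflow file such as .github/workflows/push_build_dev.yaml.
    url = '{}/repos/{}/contents/{}'.format(GITHUB_API, repo_full_name, path)
    existing = requests.get(url, headers=HEADERS).json()
    payload = {
        'message': 'chore: sync {}'.format(path),
        'content': base64.b64encode(content.encode()).decode(),
    }
    if 'sha' in existing:                          # updating an existing file requires its sha
        payload['sha'] = existing['sha']
    requests.put(url, headers=HEADERS, json=payload)


def sync(repo_full_name, templates):
    # templates maps a topic (e.g. 'inventory-collector') to a workflow template string.
    for topic in get_topics(repo_full_name):
        if topic in templates:
            deploy_workflow(repo_full_name,
                            '.github/workflows/push_build_dev.yaml',
                            templates[topic])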



To build and release the Docker image of plugin repositories, plugins use dispatch_release.yaml.

  1. dispatch_release.yaml in each plugin repository is triggered manually.

  2. dispatch_release.yaml executes the condition_check job to check the version format and for debugging.

  3. dispatch_release.yaml updates the version file on the master branch.

  4. dispatch_release.yaml executes git tagging.

  5. dispatch_release.yaml builds the image and pushes it to Docker Hub with docker/build-push-action@v1.

  6. dispatch_release.yaml sends a notification through Slack.



For further details, you can check our GitHub cloudforet-io/actions.


5.8.6 - Tools CI

Detailed Explanation of Tools Repository CI

Tools CI process details



The spacectl, spaceone-initializer, and tester repositories are tools used for the SpaceONE project. There are some differences from other repositories' CI processes.


The spacectl repository workflow runs test code on each push with a version tag, which is similar to the CI process of the backend core repositories.


The spaceone-initializer repository does not include the workflow file triggered by a master branch push, which most repositories, including spacectl and tester, have.


Tools-category repositories upload to different package registries:

  • spacectl : both PyPI and Docker
  • spaceone-initializer : Docker
  • tester : PyPI


To check the details, go to the .github/workflows directory in each repository.


5.9 - Contribute

Cloudforet Project Contribution Guide

5.9.1 - Documentation

Cloudforet Project Documentation Guide

5.9.1.1 - Content Guide

This page contains guidelines for Cloudforet documentation.

Create a new page

Go to the parent page of the page creation location. Then, click the 'Create child page' button at the bottom right.

Alternatively, you can fork the repository and work locally.

Choosing a title and filename

Create a filename that uses the words in your title, separated by underscores (_). For example, the topic with the title Using Project Management has the filename project_management.md.

Adding fields to the front matter

In your document, put fields in the front matter. The front matter is the YAML block that is between the triple-dashed lines at the top of the page. Here's an example:

---
title: "Project Management"
linkTitle: "Project Management"
weight: 10
date: 2021-06-10
description: >
  View overall status of each project and Navigate to detailed cloud resources.  
---

Description of front matter variables

Variables      Description
title          The title for the content
linkTitle      The title shown in the left sidebar
weight         Used for ordering your content in the left sidebar. Lower weight gets higher precedence, so content with lower weight comes first. If set, weights should be non-zero, as 0 is interpreted as an unset weight.
date           Creation date
description    Page description

If you want to see more details about front matter, click Front matter.

Write a document

Adding Table of Contents

When you add ## headings in the documentation, a table of contents is generated from them automatically.

Adding images

Create a directory for images named file_name_img at the same level as the document. For example, create a project_management_img directory for project_management.md, and put the images in that directory.

Style guide

Please refer to the style guide to write the document.

Opening a pull request

When you are ready to submit a pull request, commit your changes on a new branch.

5.9.1.2 - Style Guide (shortcodes)

This page explains the custom Hugo shortcodes that can be used in Cloudforet Markdown documentation.

Heading tag

It is recommended to use heading tags sequentially, starting from ## (<h2>). This is for style, not just semantic markup.

Link button

Code :

{{< link-button background-color="navy500" url="/" text="Home" >}}
{{< link-button background-color="white" url="https://cloudforet.io/" text="cloudforet.io" >}}

Output :
Home cloudforet.io

Video

Code :

{{< video src="https://www.youtube.com/embed/zSoEg2v_JrE" title="Cloudforet Setup" >}}

Output:

Alert

Code :

{{< alert title="Note Title" >}}
	Note Contents
{{< /alert >}}

Output:

Reference