Documentation for Cloudforet - an easy guide for multi-cloud management
- 1: Introduction
- 1.1: Overview
- 1.2: Integrations
- 1.3: Key Differentiators
- 1.4: Release Notes
- 2: Concepts
- 2.1: Architecture
- 2.2: Identity
- 2.2.1: Project Management
- 2.2.2: Role Based Access Control
- 2.2.2.1: Understanding Policy
- 2.2.2.2: Understanding Role
- 2.3: Inventory
- 2.3.1: Monitoring
- 2.4: Alert Manager
- 2.5: Cost Analysis
- 3: Setup & Operation
- 3.1: Getting Started
- 3.2: Installation
- 3.2.1: AWS
- 3.2.2: On Premise
- 3.3: Configuration
- 4: User Guide
- 4.1: Get started
- 4.2: Dashboards
- 4.2.1: Dashboard Templates
- 4.2.2: Create Dashboard
- 4.2.3: Customize Dashboard
- 4.2.4: Review & Quick Configuration
- 4.3: Project
- 4.4: Asset inventory
- 4.4.1: Quick Start
- 4.4.2: Cloud service
- 4.4.3: Server
- 4.4.4: Collector
- 4.4.5: Service account
- 4.5: Cost Explorer
- 4.5.1: Cost analysis
- 4.5.2: Budget
- 4.6: Alert manager
- 4.6.1: Quick Start
- 4.6.2: Dashboard
- 4.6.3: Alert
- 4.6.4: Webhook
- 4.6.5: Event rule
- 4.6.6: Maintenance window
- 4.6.7: Notification
- 4.6.8: Escalation policy
- 4.7: Administration
- 4.7.1: [IAM] User
- 4.8: My page
- 4.8.1: Account & profile
- 4.8.2: Notifications channel
- 4.9: Information
- 4.9.1: Notice
- 4.10: Advanced feature
- 4.10.1: Custom table
- 4.10.2: Export as an Excel file
- 4.10.3: Search
- 4.11: Plugin
- 4.11.1: [Alert manager] notification
- 4.11.2: [Alert manager] webhook
- 4.11.3: [Asset inventory] collector
- 4.11.4: [Cost analysis] data source
- 4.11.5: [IAM] authentication
- 5: Developers
- 5.1: Architecture
- 5.1.1: Micro Service Framework
- 5.1.2: Micro Service Deployment
- 5.2: Microservices
- 5.2.1: Console
- 5.2.2: Identity
- 5.2.3: Inventory
- 5.2.4: Monitoring
- 5.2.5: Notification
- 5.2.6: Statistics
- 5.2.7: Billing
- 5.2.8: Plugin
- 5.2.9: Supervisor
- 5.2.10: Repository
- 5.2.11: Secret
- 5.2.12: Config
- 5.3: Frontend
- 5.4: Design System
- 5.4.1: Getting Started
- 5.5: Backend
- 5.6: Plugins
- 5.6.1: About Plugin
- 5.6.2: Developer Guide
- 5.6.2.1: Plugin Interface
- 5.6.2.2: Plugin Register
- 5.6.2.3: Plugin Deployment
- 5.6.2.4: Plugin Debugging
- 5.6.3: Plugin Designs
- 5.6.4: Collector
- 5.7: API & SDK
- 5.7.1: gRPC API
- 5.8: CICD
- 5.8.1: Frontend Microservice CI
- 5.8.2: Backend Microservice CI
- 5.8.3: Frontend Core Microservice CI
- 5.8.4: Backend Core Microservice CI
- 5.8.5: Plugin CI
- 5.8.6: Tools CI
- 5.9: Contribute
- 5.9.1: Documentation
- 5.9.1.1: Content Guide
- 5.9.1.2: Style Guide (shortcodes)
1 - Introduction
1.1 - Overview
Main Features
1. Multi-Cloud Management
- IaaS Infra-integration: Automatically discovers and organizes infrastructure information scattered across multiple platforms
- Resource Search: Quickly searches resources, with results ranked by relevance
- Resource Monitoring: Instantly checks the status of resources connected to your infrastructure
2. Cloud Orchestration
- Infrastructure as Code: Code-based infrastructure configuration management
- Remote Command: Batch command execution across multiple remote servers
- Application Catalog: Supports easy installation of applications such as databases and middleware
3. Infra. Analysis
- Security Compliance: Automatically detects and analyzes cloud security vulnerabilities
- Cost Optimization: Detects unused resources and analyzes overinvested infrastructure
- Capacity Planning: Provides infrastructure usage statistics and capacity expansion planning
Cloudforet Universe
Cloudforet is expanding into every area of multi-cloud operation and management, building a Cloudforet universe on top of inventory data, automation, analysis, and more.
1.2 - Integrations
Overview
Cloudforet provides Plugin Interfaces that extend its Core Services. The supported plugins are listed below.
Inventory
Inventory.Collector supports Collection of Assets. Integrate all your cloud service accounts and scan all existing resources. All cloud resources are collected through Cloudforet collector plugins based on the Plugin Interfaces.
AWS Cloud Service Plugin
MS Azure Cloud Service Plugin
Google Cloud Service Plugin
Identity
Identity.Auth supports user management and offers various authentication options. Cloudforet supports everything from local ID/password to external identity services, including Google OAuth2, Active Directory, and Keycloak.
Google oAuth Identity Plugin
KeyCloak Identity Plugin
Monitoring
DataSource
AWS CloudWatch DataSource Plugin
Azure Monitor DataSource Plugin
Google Cloud Monitor DataSource Plugin
Webhook
AWS Simple Notification Webhook Plugin
Zabbix Webhook Plugin
Grafana Webhook Plugin
Notification
API Direct Connect Protocol Plugin
AWS SNS Protocol Plugin
Slack Protocol Plugin
Telegram Protocol Plugin
Email Protocol Plugin
Billing
Megazone Hyperbilling Billing Service
1.3 - Key Differentiators
Open Platform
In order to provide effective and flexible support for various cloud platforms, we pursue an open source strategy centered on the cloud developer community.
Plugin Interfaces
A Protocol Buffers based gRPC framework optimizes the core engine, enabling efficient processing of thousands of diverse cloud schemas on a microservice architecture (MSA).
Dynamic Rendering
Provides a user-customized view of selected items by creating a custom dashboard based on JSON metadata.
Plugin Ecosystem
A plugin marketplace that gives MSPs, third parties, and customers the freedom to develop and install plugins according to their own needs.
1.4 - Release Notes
2 - Concepts
2.1 - Architecture
2.2 - Identity
2.2.1 - Project Management
2.2.2 - Role Based Access Control
How RBAC Works
SpaceONE's RBAC (Role Based Access Control) defines who can access which resources in which organization (project or domain).
For example, the Project Admin Role can inquire (Read) and make several changes (Update/Delete) on all resources within the specified Project. Domain Viewer Role can inquire (Read) all resources within the specified domain. Resources here include everything from users created within SpaceONE, Project/Project Groups, and individual cloud resources.
Every user has one or more roles, which can be assigned directly or inherited within a project. This simplifies role management in complex project relationships.
A Role defines which actions can be performed on resources, as specified through a Policy, and is bound to each user. The diagram below shows the relationships between Users, Roles, and Projects that make up RBAC.
This role management model is divided into three main components.
Role. A collection of access policies that can be granted to each user. Every role must have one policy. For a more detailed explanation, refer to Understanding Role.
Project. The project or project group to which the permission is applied.
User. Users include console (UI) users, API users, and SYSTEM users. Each user is connected to one or more Roles through RoleBinding, which grants the permissions needed to access SpaceONE resources.
Basic Concepts
When a user wants to access resources within an organization, the administrator grants each user a role of the target project or domain. SpaceONE Identity Service verifies the Role/Policy granted to each user to determine whether each user can access resources or not.
Resource
If a user needs to access a resource in a specific SpaceONE project, grant the user an appropriate role and then add the user to the target project as a member. Examples of such resources are Server, Project, and Alert.
In order to conveniently use the resources managed within SpaceONE for each service, we provide predefined Role/Policy. If you want to define your own access scope within the company, you can create a Custom Policy/Custom Role and apply it to the internal organization.
For a detailed explanation of this, refer to Understanding Role.
Policy
A policy is a collection of permissions. Each permission defines the allowed access scope for a SpaceONE resource. A policy can be assigned to a user through a role. Policies can be published on the Marketplace for use by other users, or published privately for a specific domain.
A permission is expressed in the form {service}.{resource}.{verb}, for example inventory.Server.list.
Each permission corresponds to a SpaceONE API method, because each microservice in SpaceONE exposes its functionality as API methods. Therefore, calling a SpaceONE API method requires the corresponding permission.
For example, if you want to call inventory.Server.list to see the server list of the Inventory service, you must have the corresponding inventory.Server.list permission included in your role.
Permission cannot be granted directly to a user. Instead, an appropriate set of permissions can be defined as a policy and assigned to a user through a role. For more information, refer to Understanding Policy.
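For illustration, the permissions listed inside a policy are simply strings of this form. The entries below are examples of Inventory API methods and are shown only to illustrate the naming pattern:
# example permission strings in the {service}.{resource}.{verb} form
inventory.Server.list
inventory.Server.get
inventory.CloudService.list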
Roles
A role is a combination of an access target and a policy. Permissions cannot be granted directly to a user; they are granted in the form of a role. All resources in SpaceONE belong to a project, and access can be managed separately at the DOMAIN and PROJECT levels.
For example, Domain Admin Role is provided for the full administrator of the domain, and Alert Manager Operator Role is provided for event management of Alert Manager.
Members
All cloud resources managed within SpaceONE are managed in units of projects. Therefore, you can control access to resources by giving each user a role and adding them as project members.
Depending on the role type, the user can access all resources within the domain or the resources within the specified project.
- Domain: You can access all resources within the domain.
- Project: You can access the resources within the specified Project.
Users with a PROJECT-type role can access resources within a project only after being added as a member of that project.
If a user is added as a member of a Project Group, the right to access all subordinate project resources is inherited.
Organization
All resources in SpaceONE can be managed hierarchically through the following organizational structure.
All users can specify access targets in such a way that they are connected (RoleBinding) to the organization.
- Domain : This is the highest level organization. Covers all projects and project groups.
- PROJECT GROUP : This is an organization that can integrate and manage multiple projects.
- Projects : The smallest organizational unit in SpaceONE. All cloud resources belong to a project.
2.2.2.1 - Understanding Policy
Policy
Policy is a set of permissions defined to perform specific actions on SpaceONE resources. Permissions define the scopes that can be managed for Cloud Resources. For an overall description of the authority management system, please refer to Role Based Access Control.
Policy Type
Once defined, the policy can be shared so that it can be used by roles in other domains. Depending on whether or not this is possible, the policy is divided into two types.
- MANAGED: A policy defined globally in the Repository service, managed and shared directly by the system administrator. These common policies are convenient for most users.
- CUSTOM: A policy with permissions you define yourself for each domain. Useful for managing fine-grained permissions per domain.
Note
MANAGED Policy is published on the Official Marketplace and managed by the Cloudforet team.
CUSTOM Policy is published in the Private Repository and managed by the administrator of each domain.
Policies can also be classified as follows according to their permission scope.
- Basic: Includes overall permission for all resources in SpaceONE.
- Predefined : Includes granular permission for specific services (alert manager, billing, etc.).
Managed Policy
The table below is the full list of Managed Policies maintained by the Cloudforet team. Detailed permissions are updated automatically when necessary. Managed Policies classify policies according to the major roles within an organization.
Policy Type | Policy Name | Policy Id | Permission Description | Reference |
---|---|---|---|---|
MANAGED-Basic | Domain Admin Access | policy-managed-domain-admin | Has all privileges except for the following: create/delete domain; APIs whose api_type is SYSTEM/NO_AUTH; manage DomainOwner (create/change/delete); manage the identity.Auth plug-in (change) | policy-managed-domain-admin |
MANAGED-Basic | Domain Viewer Access | policy-managed-domain-viewer | Read permission among Domain Admin Access permissions | policy-managed-domain-viewer |
MANAGED-Basic | Project Admin Access | policy-managed-project-admin | Excludes the following permissions from the Domain Admin Access policy: manage providers (create/change/inquire/delete); manage Role/Policy (create/change/delete); manage plug-ins, i.e. inventory.Collector (create/change/delete), monitoring.DataSource (create/change/delete), and notification.Protocol (create/change/delete) | policy-managed-project-admin |
MANAGED-Basic | Project Viewer Access | policy-managed-project-viewer | Read permission among Permissions of Project Admin Access Policy | policy-managed-project-viewer |
MANAGED-Predefined | Alert Manager Full Access | policy-managed-alert-manager-full-access | Full access to Alert Manager | policy-managed-alert-manager-full-access |
Custom Policy
If you want to manage the policy of a domain by yourself, please refer to the Managing Custom Policy document.
2.2.2.2 - Understanding Role
Role structure
A Role consists of a Role Type, which specifies the scope of access to resources as shown below, and the organization (project or project group) to which the authority applies. Users are granted access rights within SpaceONE through RoleBinding.
Role Example
Example: Alert Manager Operator Role
---
results:
- created_at: '2021-11-15T05:12:31.060Z'
domain_id: domain-xxx
name: Alert Manager Operator
policies:
- policy_id: policy-managed-alert-manager-operator
policy_type: MANAGED
role_id: role-f18c7d2a9398
role_type: PROJECT
tags: {}
Example: Domain Viewer Role
---
results:
- created_at: '2021-11-15T05:12:28.865Z'
domain_id: domain-xxx
name: Domain Viewer
policies:
- policy_id: policy-managed-domain-viewer
policy_type: MANAGED
role_id: role-242f9851eee7
role_type: DOMAIN
tags: {}
Role Type
Role Type specifies the range of accessible resources within the domain.
- DOMAIN: Access is possible to all resources in the domain.
- PROJECT: Access is possible to all resources in the project added as a member.
Please refer to Add as Project Member for how to add a user as a member of a project.
Add Member
All resources in SpaceONE are managed hierarchically as follows. The domain administrator can grant users access to resources within a project by adding them as members of that project. Users who need access to multiple projects can be added as members of the parent project group, which gives them access to all projects in the lower hierarchy. For how to add a member to a Project Group, refer to Add as a Member of Project Group.
Role Hierarchy
If a user has complex RoleBindings within the hierarchical project structure, roles are applied according to the following rules.
For example, as shown in the figure below, the user stark@example.com is bound to the parent project group with the Project Admin Role and to the lower-level APAC project with the Project Viewer Role. In this case, roles for each project are applied in the following way.
- The role of the parent is applied to sub-projects/project groups that have no direct RoleBinding.
- The explicitly bound role is applied to sub-projects that have their own RoleBinding (overwriting the higher-level role).
Default Roles
All SpaceONE domains automatically include default roles when they are created. The list is below.
Name | Role Type | Description |
---|---|---|
Domain Admin | DOMAIN | You can search/change/delete all domain resources |
Domain Viewer | DOMAIN | You can search all domain resources |
Project Admin | PROJECT | You can view/change/delete the entire project resource added as a member |
Project Viewer | PROJECT | You can search the entire project resource added as a member |
Alert Manager Operator | PROJECT | You can inquire the entire project resource added as a member, and have the alert handling authority of Alert Manager |
Managing Roles
Roles can be managed by the domain itself through spacectl. Please refer to the Managing Roles document.
2.3 - Inventory
2.3.1 - Monitoring
2.4 - Alert Manager
2.5 - Cost Analysis
3 - Setup & Operation
3.1 - Getting Started
This is a getting-started installation guide using minikube.
Note: this guide is for development use only, not for production.
Verified Environments
Distro | Status | Link(ex. Blog) |
---|---|---|
Ubuntu 20.04 | Not Tested | |
Ubuntu 22.04 | Verified | |
Amazon Linux 2 | Not Tested | |
Amazon Linux 2023 | Not Tested | |
macOS (Apple Silicon, M1) | Verified | |
macOS (Apple Silicon, M2) | Verified | |
Windows | Verified | https://medium.com/@ayushsharma2267410/installation-of-cloudforet-in-windows-8c4a10c9a65f |
Overview
Cloudforet-Minikube Architecture
Prerequisites
- AWS EC2 VM (Intel/AMD/ARM CPU)
Recommended instance type: t3.large (2 cores, 8 GB Memory, 30GB EBS)
- Docker/Docker Desktop
- If you don't have Docker installed, minikube will return an error as minikube uses docker as the driver.
- Highly recommend installing Docker Desktop based on your OS.
- Minikube
- Requires minimum Kubernetes version of 1.21+.
- Kubectl
- Helm
- Requires minimum Helm version of 3.11.0+.
- If you want to learn more about Helm, refer to this.
Before diving into the Cloudforet Installation process, start minikube by running the command below.
minikube start --driver=docker --memory=5000mb
If you encounter the error Unable to resolve the current Docker CLI context "default", check whether the Docker daemon is running.
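Optionally, you can confirm that Docker, minikube, and kubectl are healthy before proceeding. These quick checks are not part of the official steps:
docker info          # the Docker daemon should respond without errors
minikube status      # host, kubelet, and apiserver should be Running
kubectl get nodes    # the minikube node should be Ready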
Installation
You can install Cloudforet by following the steps below.
For Cloudforet v1.12.x, we do not provide Helm charts online. Download the Helm chart from the Cloudforet GitHub repository.
1) Download Helm Chart Repository
The following commands download and extract the Helm chart.
# Set working directory
mkdir cloudforet-deployment
cd cloudforet-deployment
wget https://github.com/cloudforet-io/charts/releases/download/spaceone-1.12.12/spaceone-1.12.12.tgz
tar zxvf spaceone-1.12.12.tgz
2) Create Namespaces
kubectl create ns cloudforet
kubectl create ns cloudforet-plugin
3) Create Role and RoleBinding
First, download the rbac.yaml file.
The rbac.yaml file basically serves as a means to regulate access to computer or network resources based on the roles of individual users. For more information about RBAC Authorization in Kubernetes, refer to this.
If you are used to downloading files via command-line, run this command to download the file.
wget https://raw.githubusercontent.com/cloudforet-io/charts/master/examples/rbac.yaml -O rbac.yaml
Next, execute the following command.
kubectl apply -f rbac.yaml -n cloudforet-plugin
4) Install Cloudforet Chart
Download the default values YAML file for the Helm chart.
wget https://raw.githubusercontent.com/cloudforet-io/charts/master/examples/values/release-1-12.yaml -O release-1-12.yaml
helm install cloudforet spaceone -n cloudforet -f release-1-12.yaml
After executing the above command, check the status of the pod.
Scheduler pods are in a CrashLoopBackOff or Error state. This is because the setup is not yet complete.
kubectl get pod -n cloudforet
NAME READY STATUS RESTARTS AGE
board-5746fd9657-vtd45 1/1 Running 0 57s
config-5d4c4b7f58-z8k9q 1/1 Running 0 58s
console-6b64cf66cb-q8v54 1/1 Running 0 59s
console-api-7c95848cb8-sgt56 2/2 Running 0 58s
console-api-v2-rest-7d64bc85dd-987zn 2/2 Running 0 56s
cost-analysis-7b9d64b944-xw9qg 1/1 Running 0 59s
cost-analysis-scheduler-ff8cc758d-lfx4n 0/1 Error 3 (37s ago) 55s
cost-analysis-worker-559b4799b9-fxmxj 1/1 Running 0 58s
dashboard-b4cc996-mgwj9 1/1 Running 0 56s
docs-5fb4cc56c7-68qbk 1/1 Running 0 59s
identity-6fc984459d-zk8r9 1/1 Running 0 56s
inventory-67498999d6-722bw 1/1 Running 0 57s
inventory-scheduler-5dc6856d44-4spvm 0/1 CrashLoopBackOff 3 (18s ago) 59s
inventory-worker-68d9fcf5fb-x6knb 1/1 Running 0 55s
marketplace-assets-8675d44557-ssm92 1/1 Running 0 59s
mongodb-7c9794854-cdmwj 1/1 Running 0 59s
monitoring-fdd44bdbf-pcgln 1/1 Running 0 59s
notification-5b477f6c49-gzfl8 1/1 Running 0 59s
notification-scheduler-675696467-gn24j 1/1 Running 0 59s
notification-worker-d88bb6df6-pjtmn 1/1 Running 0 57s
plugin-556f7bc49b-qmwln 1/1 Running 0 57s
plugin-scheduler-86c4c56d84-cmrmn 0/1 CrashLoopBackOff 3 (13s ago) 59s
plugin-worker-57986dfdd6-v9vqg 1/1 Running 0 58s
redis-75df77f7d4-lwvvw 1/1 Running 0 59s
repository-5f5b7b5cdc-lnjkl 1/1 Running 0 57s
secret-77ffdf8c9d-48k46 1/1 Running 0 55s
spacectl-5664788d5d-dtwpr 1/1 Running 0 59s
statistics-67b77b6654-p9wcb 1/1 Running 0 56s
statistics-scheduler-586875947c-8zfqg 0/1 Error 3 (30s ago) 56s
statistics-worker-68d646fc7-knbdr 1/1 Running 0 58s
supervisor-scheduler-6744657cb6-tpf78 2/2 Running 0 59s
To execute the commands below, every POD except xxxx-scheduler-yyyy must have a Running status.
5) Initialize the Configuration
First, download the initializer.yaml file.
For more information about the initializer, please refer to the spaceone-initializer.
If you are used to downloading files via command-line, run this command to download the file.
wget https://raw.githubusercontent.com/cloudforet-io/charts/master/examples/initializer.yaml -O initializer.yaml
And execute the following command.
wget https://github.com/cloudforet-io/charts/releases/download/spaceone-initializer-1.3.3/spaceone-initializer-1.3.3.tgz
tar zxvf spaceone-initializer-1.3.3.tgz
helm install initializer spaceone-initializer -n cloudforet -f initializer.yaml
6) Set the Helm Values and Upgrade the Chart
Once initialization is complete, you can get the system token from the initializer pod logs.
To find the initializer pod name, first list all pods in the cloudforet namespace.
kubectl get pods -n cloudforet
Then, among the pods shown copy the name of the pod that starts with initialize-spaceone.
NAME READY STATUS RESTARTS AGE
board-5997d5688-kq4tx 1/1 Running 0 24m
config-5947d845b5-4ncvn 1/1 Running 0 24m
console-7fcfddbd8b-lbk94 1/1 Running 0 24m
console-api-599b86b699-2kl7l 2/2 Running 0 24m
console-api-v2-rest-cb886d687-d7n8t 2/2 Running 0 24m
cost-analysis-8658c96f8f-88bmh 1/1 Running 0 24m
cost-analysis-scheduler-67c9dc6599-k8lgx 1/1 Running 0 24m
cost-analysis-worker-6df98df444-5sjpm 1/1 Running 0 24m
dashboard-84d8969d79-vqhr9 1/1 Running 0 24m
docs-6b9479b5c4-jc2f8 1/1 Running 0 24m
identity-6d7bbb678f-b5ptf 1/1 Running 0 24m
initialize-spaceone-fsqen-74x7v 0/1 Completed 0 98m
inventory-64d6558bf9-v5ltj 1/1 Running 0 24m
inventory-scheduler-69869cc5dc-k6fpg 1/1 Running 0 24m
inventory-worker-5649876687-zjxnn 1/1 Running 0 24m
marketplace-assets-5fcc55fb56-wj54m 1/1 Running 0 24m
mongodb-b7f445749-2sr68 1/1 Running 0 101m
monitoring-799cdb8846-25w78 1/1 Running 0 24m
notification-c9988d548-gxw2c 1/1 Running 0 24m
notification-scheduler-7d4785fd88-j8zbn 1/1 Running 0 24m
notification-worker-586bc9987c-kdfn6 1/1 Running 0 24m
plugin-79976f5747-9snmh 1/1 Running 0 24m
plugin-scheduler-584df5d649-cflrb 1/1 Running 0 24m
plugin-worker-58d5cdbff9-qk5cp 1/1 Running 0 24m
redis-b684c5bbc-528q9 1/1 Running 0 24m
repository-64fc657d4f-cbr7v 1/1 Running 0 24m
secret-74578c99d5-rk55t 1/1 Running 0 24m
spacectl-8cd55f46c-xw59j 1/1 Running 0 24m
statistics-767d84bb8f-rrvrv 1/1 Running 0 24m
statistics-scheduler-65cc75fbfd-rsvz7 1/1 Running 0 24m
statistics-worker-7b6b7b9898-lmj7x 1/1 Running 0 24m
supervisor-scheduler-555d644969-95jxj 2/2 Running 0 24m
To run the kubectl logs command below, the pod (here, initialize-spaceone-fsqen-74x7v) must be in the Completed state. Running it while the pod is still initializing will produce errors.
Get the token by getting the log information of the pod with the name you found above.
kubectl logs initialize-spaceone-fsqen-74x7v -n cloudforet
...
TASK [Print Admin API Key] *********************************************************************************************
"TOKEN_SHOWN_HERE"
FINISHED [ ok=23, skipped=0 ] ******************************************************************************************
FINISH SPACEONE INITIALIZE
Update your Helm values file (e.g. release-1-12.yaml) with the following values:
- TOKEN: the system token obtained above
- For EC2 users: put your EC2 server's public IP in place of localhost in both the CONSOLE_API and CONSOLE_API_V2 ENDPOINT values
console:
production_json:
CONSOLE_API:
ENDPOINT: http://localhost:8081 # http://ec2_public_ip:8081 for EC2 users
CONSOLE_API_V2:
ENDPOINT: http://localhost:8082 # http://ec2_public_ip:8082 for EC2 users
global:
shared_conf:
TOKEN: 'TOKEN_VALUE_FROM_ABOVE' # Change the system token
After editing the Helm values file (e.g. release-1-12.yaml), upgrade the Helm chart.
helm upgrade cloudforet spaceone -n cloudforet -f release-1-12.yaml
After upgrading, delete the pods in cloudforet namespace that have the label app.kubernetes.io/instance and value cloudforet.
kubectl delete po -n cloudforet -l app.kubernetes.io/instance=cloudforet
7) Check the status of the pods
kubectl get pod -n cloudforet
If all pods are in the Running state, the setup is complete.
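If you prefer a single command over scanning the pod list, you can also wait for every deployment in the namespace to become available. This is an optional convenience; the timeout value is arbitrary.
kubectl wait deployment --all -n cloudforet --for=condition=Available --timeout=600s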
Port-forwarding
Installing Cloudforet on minikube doesn't provide any Ingress objects such as Amazon ALB or NGINX ingress controller. We can use kubectl port-forward instead.
Run the following commands for port forwarding.
# CLI commands
kubectl port-forward -n cloudforet svc/console 8080:80 --address='0.0.0.0' &
kubectl port-forward -n cloudforet svc/console-api 8081:80 --address='0.0.0.0' &
kubectl port-forward -n cloudforet svc/console-api-v2-rest 8082:80 --address='0.0.0.0' &
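As an optional sanity check, the console should now answer on the forwarded port:
curl -I http://127.0.0.1:8080    # expect an HTTP response from the console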
Start Cloudforet
Log-In (Sign in for Root Account)
For EC2 users: open browser with http://your_ec2_server_ip:8080
Open browser (http://127.0.0.1:8080)
ID | PASSWORD |
---|---|
admin | Admin123!@# |
Initial Setup for Cloudforet
For your reference, Cloudforet is an open source project (formerly SpaceONE). For additional information, refer to our official website.
3.2 - Installation
3.2.1 - AWS
Cloudforet Helm Charts
A Helm chart for Cloudforet 1.12.
Prerequisites
- Kubernetes 1.21+
- Helm 3.2.0+
- Service Domain & SSL Certificate (optional)
- Console: console.example.com
- REST API: *.api.example.com
- gRPC API: *.grpc.example.com
- Webhook: webhook.example.com
- MongoDB 5.0+ (optional)
Cloudforet Architecture
Installation
You can install Cloudforet using the following steps.
1) Add Helm Repository
helm repo add cloudforet https://cloudforet-io.github.io/charts
helm repo update
helm search repo cloudforet
2) Create Namespaces
kubectl create ns spaceone
kubectl create ns spaceone-plugin
If you want to use only one namespace, you do not need to create the spaceone-plugin namespace.
3) Create Role and RoleBinding
First, download the rbac.yaml file.
wget https://raw.githubusercontent.com/cloudforet-io/charts/master/examples/rbac.yaml -O rbac.yaml
And execute the following command.
kubectl apply -f rbac.yaml -n spaceone-plugin
or
kubectl apply -f https://raw.githubusercontent.com/cloudforet-io/charts/master/examples/rbac.yaml -n spaceone-plugin
4) Install Cloudforet Chart
helm install cloudforet cloudforet/spaceone -n spaceone
After executing the above command, check the status of the pod.
kubectl get pod -n spaceone
NAME READY STATUS RESTARTS AGE
board-64f468ccd6-v8wx4 1/1 Running 0 4m16s
config-6748dc8cf9-4rbz7 1/1 Running 0 4m14s
console-767d787489-wmhvp 1/1 Running 0 4m15s
console-api-846867dc59-rst4k 2/2 Running 0 4m16s
console-api-v2-rest-79f8f6fb59-7zcb2 2/2 Running 0 4m16s
cost-analysis-5654566c95-rlpkz 1/1 Running 0 4m13s
cost-analysis-scheduler-69d77598f7-hh8qt 0/1 CrashLoopBackOff 3 (39s ago) 4m13s
cost-analysis-worker-68755f48bf-6vkfv 1/1 Running 0 4m15s
cost-analysis-worker-68755f48bf-7sj5j 1/1 Running 0 4m15s
cost-analysis-worker-68755f48bf-fd65m 1/1 Running 0 4m16s
cost-analysis-worker-68755f48bf-k6r99 1/1 Running 0 4m15s
dashboard-68f65776df-8s4lr 1/1 Running 0 4m12s
file-manager-5555876d89-slqwg 1/1 Running 0 4m16s
identity-6455d6f4b7-bwgf7 1/1 Running 0 4m14s
inventory-fc6585898-kjmwx 1/1 Running 0 4m13s
inventory-scheduler-6dd9f6787f-k9sff 0/1 CrashLoopBackOff 4 (21s ago) 4m15s
inventory-worker-7f6d479d88-59lxs 1/1 Running 0 4m12s
mongodb-6b78c74d49-vjxsf 1/1 Running 0 4m14s
monitoring-77d9bd8955-hv6vp 1/1 Running 0 4m15s
monitoring-rest-75cd56bc4f-wfh2m 2/2 Running 0 4m16s
monitoring-scheduler-858d876884-b67tc 0/1 Error 3 (33s ago) 4m12s
monitoring-worker-66b875cf75-9gkg9 1/1 Running 0 4m12s
notification-659c66cd4d-hxnwz 1/1 Running 0 4m13s
notification-scheduler-6c9696f96-m9vlr 1/1 Running 0 4m14s
notification-worker-77865457c9-b4dl5 1/1 Running 0 4m16s
plugin-558f9c7b9-r6zw7 1/1 Running 0 4m13s
plugin-scheduler-695b869bc-d9zch 0/1 Error 4 (59s ago) 4m15s
plugin-worker-5f674c49df-qldw9 1/1 Running 0 4m16s
redis-566869f55-zznmt 1/1 Running 0 4m16s
repository-8659578dfd-wsl97 1/1 Running 0 4m14s
secret-69985cfb7f-ds52j 1/1 Running 0 4m12s
statistics-98fc4c955-9xtbp 1/1 Running 0 4m16s
statistics-scheduler-5b6646d666-jwhdw 0/1 CrashLoopBackOff 3 (27s ago) 4m13s
statistics-worker-5f9994d85d-ftpwf 1/1 Running 0 4m12s
supervisor-scheduler-74c84646f5-rw4zf 2/2 Running 0 4m16s
Scheduler pods are in a CrashLoopBackOff or Error state. This is because the setup is not yet complete.
5) Initialize the Configuration
First, download the initializer.yaml file.
wget https://raw.githubusercontent.com/cloudforet-io/charts/master/examples/initializer.yaml -O initializer.yaml
And execute the following command.
helm install cloudforet-initializer cloudforet/spaceone-initializer -n spaceone -f initializer.yaml
or
helm install cloudforet-initializer cloudforet/spaceone-initializer -n spaceone -f https://raw.githubusercontent.com/cloudforet-io/charts/master/examples/initializer.yaml
For more information about the initializer, please refer to the spaceone-initializer.
6) Set the Helm Values and Upgrade the Chart
Once initialization is complete, you can get the system token from the initializer pod logs.
# check pod name
kubectl logs initialize-spaceone-xxxx-xxxxx -n spaceone
...
TASK [Print Admin API Key] *********************************************************************************************
"{TOKEN}"
FINISHED [ ok=23, skipped=0 ] ******************************************************************************************
FINISH SPACEONE INITIALIZE
First, copy this TOKEN, then create a values.yaml file and paste the token into it.
console:
production_json:
# If you don't have a service domain, you refer to the following 'No Domain & IP Access' example.
CONSOLE_API:
ENDPOINT: https://console.api.example.com # Change the endpoint
CONSOLE_API_V2:
ENDPOINT: https://console-v2.api.example.com # Change the endpoint
global:
shared_conf:
TOKEN: '{TOKEN}' # Change the system token
For more advanced configuration, please refer to the following links.
- Documents
- Examples
After editing the values.yaml file, upgrade the Helm chart.
helm upgrade cloudforet cloudforet/spaceone -n spaceone -f values.yaml
kubectl delete po -n spaceone -l app.kubernetes.io/instance=cloudforet
7) Check the status of the pods
kubectl get pod -n spaceone
If all pods are in the Running state, the setup is complete.
8) Ingress and AWS Load Balancer
In Kubernetes, Ingress is an API object that provides a load-balanced external IP address to access Services in your cluster. It acts as a layer 7 (HTTP/HTTPS) reverse proxy and can route traffic to other services based on the requested host and URL path.
For more information, see What is an Application Load Balancer? on AWS and ingress in the Kubernetes documentation.
Prerequisite
Install AWS Load Balancer Controller
The AWS Load Balancer Controller manages Elastic Load Balancers (ELB) for a Kubernetes cluster. Ingress resources are provisioned as Application Load Balancers, and Service resources are provisioned as Network Load Balancers.
Installation methods may vary depending on the environment, so please refer to the official guide document below.
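For reference, on EKS the controller is commonly installed with Helm roughly as follows. This is only a sketch: it assumes the controller's IAM policy and the aws-load-balancer-controller service account (IRSA) have already been prepared as described in the official guide, and the cluster name is a placeholder.
helm repo add eks https://aws.github.io/eks-charts
helm repo update
# <your-cluster-name> is a placeholder; adjust the service account settings for your environment
helm install aws-load-balancer-controller eks/aws-load-balancer-controller \
  -n kube-system \
  --set clusterName=<your-cluster-name> \
  --set serviceAccount.create=false \
  --set serviceAccount.name=aws-load-balancer-controller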
How to set up Cloudforet ingress
1) Ingress Type
Cloudforet provisions a total of 3 ingresses through 2 files.
- Console : Ingress to access the domain
- REST API : Ingress for API service
- console-api
- console-api-v2
2) Console ingress
Set up the ingress to access the console as follows.
cat <<EOF> spaceone-console-ingress.yaml
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: console-ingress
namespace: spaceone
annotations:
alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}]'
alb.ingress.kubernetes.io/scheme: internet-facing
alb.ingress.kubernetes.io/target-type: ip
alb.ingress.kubernetes.io/load-balancer-attributes: idle_timeout.timeout_seconds=600
alb.ingress.kubernetes.io/healthcheck-protocol: HTTP
alb.ingress.kubernetes.io/success-codes: 200-399
alb.ingress.kubernetes.io/load-balancer-name: spaceone-console-ingress # Caution!! Must be fewer than 32 characters.
spec:
ingressClassName: alb
defaultBackend:
service:
name: console
port:
number: 80
EOF
# Apply ingress
kubectl apply -f spaceone-console-ingress.yaml
When you apply the ingress, an AWS load balancer named spaceone-console-ingress is provisioned. You can connect through the provisioned DNS name over HTTP (port 80).
3) REST API ingress
Setting the REST API ingress for the API service is as follows.
cat <<EOF> spaceone-rest-ingress.yaml
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: console-api-ingress
namespace: spaceone
annotations:
alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}]'
alb.ingress.kubernetes.io/scheme: internet-facing
alb.ingress.kubernetes.io/target-type: ip
alb.ingress.kubernetes.io/load-balancer-attributes: idle_timeout.timeout_seconds=600
alb.ingress.kubernetes.io/healthcheck-protocol: HTTP
alb.ingress.kubernetes.io/success-codes: 200-399
alb.ingress.kubernetes.io/load-balancer-name: spaceone-console-api-ingress # Caution!! Must be fewer than 32 characters.
spec:
ingressClassName: alb
defaultBackend:
service:
name: console-api
port:
number: 80
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: console-api-v2-ingress
namespace: spaceone
annotations:
alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}]'
alb.ingress.kubernetes.io/scheme: internet-facing
alb.ingress.kubernetes.io/target-type: ip
alb.ingress.kubernetes.io/load-balancer-attributes: idle_timeout.timeout_seconds=600
alb.ingress.kubernetes.io/healthcheck-protocol: HTTP
alb.ingress.kubernetes.io/success-codes: 200-399
alb.ingress.kubernetes.io/load-balancer-name: spaceone-console-api-v2-ingress
spec:
ingressClassName: alb
defaultBackend:
service:
name: console-api-v2-rest
port:
number: 80
EOF
# Apply ingress
kubectl apply -f spaceone-rest-ingress.yaml
The REST API ingress provisions two ALBs. The DNS names of the REST APIs must be set as console.CONSOLE_API.ENDPOINT and console.CONSOLE_API_V2.ENDPOINT in the values.yaml file.
4) Check DNS Name
The DNS name is generated as http://{ingress-name}-{random}.{region-code}.elb.amazonaws.com. You can check it with the kubectl get ingress -n spaceone command.
kubectl get ingress -n spaceone
NAME CLASS HOSTS ADDRESS PORTS AGE
console-api-ingress alb * spaceone-console-api-ingress-xxxxxxxxxx.{region-code}.elb.amazonaws.com 80 15h
console-api-v2-ingress alb * spaceone-console-api-v2-ingress-xxxxxxxxxx.{region-code}.elb.amazonaws.com 80 15h
console-ingress alb * spaceone-console-ingress-xxxxxxxxxx.{region-code}.elb.amazonaws.com 80 15h
Alternatively, you can check it in the AWS Console under EC2 > Load balancers, as shown in the image below.
5) Connect with DNS Name
When all ingresses are ready, edit the values.yaml file, restart the pods, and access the console.
console:
production_json:
# If you don't have a service domain, you refer to the following 'No Domain & IP Access' example.
CONSOLE_API:
ENDPOINT: http://spaceone-console-api-ingress-xxxxxxxxxx.{region-code}.elb.amazonaws.com
CONSOLE_API_V2:
ENDPOINT: http://spaceone-console-api-v2-ingress-xxxxxxxxxx.{region-code}.elb.amazonaws.com
After applying the prepared values.yaml file, restart the pods.
helm upgrade cloudforet cloudforet/spaceone -n spaceone -f values.yaml
kubectl delete po -n spaceone -l app.kubernetes.io/instance=cloudforet
Now you can connect to Cloudforet with the DNS name of spaceone-console-ingress.
http://spaceone-console-ingress-xxxxxxxxxx.{region-code}.elb.amazonaws.com
Advanced ingress settings
How to register an SSL certificate
This section explains how to register a certificate in the ingress for SSL communication.
There are two methods for registering a certificate: using ACM (AWS Certificate Manager), or registering an external certificate.
How to register an ACM certificate with ingress
If the certificate was issued through ACM, you can register the SSL certificate simply by adding the ACM ARN to the ingress.
For how to issue a certificate, refer to the official AWS guide.
Register the issued certificate as follows. Check the options added or changed for SSL communication in the existing ingress.
Check the changes in the ingress. Various settings for SSL are added and changed; check the contents of metadata.annotations. Also check the additions such as ssl-redirect and spec.rules.host in spec.rules.
- spaceone-console-ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: console-ingress
namespace: spaceone
annotations:
+ alb.ingress.kubernetes.io/actions.ssl-redirect: '{"Type": "redirect", "RedirectConfig": { "Protocol": "HTTPS", "Port": "443", "StatusCode": "HTTP_301"}}'
+ alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}, {"HTTPS":443}]'
- alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}]'
alb.ingress.kubernetes.io/scheme: internet-facing
alb.ingress.kubernetes.io/target-type: ip
alb.ingress.kubernetes.io/load-balancer-attributes: idle_timeout.timeout_seconds=600
alb.ingress.kubernetes.io/healthcheck-protocol: HTTP
+ alb.ingress.kubernetes.io/certificate-arn: "arn:aws:acm:..." # Change the certificate-arn
alb.ingress.kubernetes.io/success-codes: 200-399
alb.ingress.kubernetes.io/load-balancer-name: spaceone-console-ingress # Caution!! Must be fewer than 32 characters.
spec:
ingressClassName: alb
- defaultBackend:
- service:
- name: console
- port:
- number: 80
+ rules:
+ - http:
+ paths:
+ - path: /*
+ pathType: ImplementationSpecific
+ backend:
+ service:
+ name: ssl-redirect
+ port:
+ name: use-annotation
+ - host: "console.example.com" # Change the hostname
+ http:
+ paths:
+ - path: /*
+ pathType: ImplementationSpecific
+ backend:
+ service:
+ name: console
+ port:
+ number: 80
- spaceone-rest-ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: console-api-ingress
namespace: spaceone
annotations:
+ alb.ingress.kubernetes.io/actions.ssl-redirect: '{"Type": "redirect", "RedirectConfig": { "Protocol": "HTTPS", "Port": "443", "StatusCode": "HTTP_301"}}'
+ alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}, {"HTTPS":443}]'
- alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}]'
alb.ingress.kubernetes.io/scheme: internet-facing
alb.ingress.kubernetes.io/target-type: ip
alb.ingress.kubernetes.io/load-balancer-attributes: idle_timeout.timeout_seconds=600
alb.ingress.kubernetes.io/healthcheck-protocol: HTTP
+ alb.ingress.kubernetes.io/certificate-arn: "arn:aws:acm:..." # Change the certificate-arn
alb.ingress.kubernetes.io/success-codes: 200-399
alb.ingress.kubernetes.io/load-balancer-name: spaceone-console-api-ingress # Caution!! Must be fewer than 32 characters.
spec:
ingressClassName: alb
- defaultBackend:
- service:
- name: console-api
- port:
- number: 80
+ rules:
+ - http:
+ paths:
+ - path: /*
+ pathType: ImplementationSpecific
+ backend:
+ service:
+ name: ssl-redirect
+ port:
+ name: use-annotation
+ - host: "console.api.example.com" # Change the hostname
+ http:
+ paths:
+ - path: /*
+ pathType: ImplementationSpecific
+ backend:
+ service:
+ name: console-api
+ port:
+ number: 80
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: console-api-v2-ingress
namespace: spaceone
annotations:
+ alb.ingress.kubernetes.io/actions.ssl-redirect: '{"Type": "redirect", "RedirectConfig": { "Protocol": "HTTPS", "Port": "443", "StatusCode": "HTTP_301"}}'
+ alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}, {"HTTPS":443}]'
- alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}]'
alb.ingress.kubernetes.io/scheme: internet-facing
alb.ingress.kubernetes.io/target-type: ip
alb.ingress.kubernetes.io/load-balancer-attributes: idle_timeout.timeout_seconds=600
alb.ingress.kubernetes.io/healthcheck-protocol: HTTP
+ alb.ingress.kubernetes.io/certificate-arn: "arn:aws:acm:..." # Change the certificate-arn
alb.ingress.kubernetes.io/success-codes: 200-399
alb.ingress.kubernetes.io/load-balancer-name: spaceone-console-api-v2-ingress
spec:
ingressClassName: alb
- defaultBackend:
- service:
- name: console-api-v2-rest
- port:
- number: 80
+ rules:
+ - http:
+ paths:
+ - path: /*
+ pathType: ImplementationSpecific
+ backend:
+ service:
+ name: ssl-redirect
+ port:
+ name: use-annotation
+ - host: "console-v2.api.example.com" # Change the hostname
+ http:
+ paths:
+ - path: /*
+ pathType: ImplementationSpecific
+ backend:
+ service:
+ name: console-api-v2-rest
+ port:
+ number: 80
SSL application is completed when the changes are reflected through the kubectl command.
kubectl apply -f spaceone-console-ingress.yaml
kubectl apply -f spaceone-rest-ingress.yaml
How to register an SSL/TLS certificate
Certificate registration is also possible with a previously issued external certificate. Create a Kubernetes secret from the issued certificate and declare the secret name in the ingress.
Create SSL/TLS certificates as Kubernetes secrets. There are two ways:
1. Using a YAML file
You can define the secret in a YAML file and apply it using the commands below.
cat <<EOF> tls-secret.yaml
apiVersion: v1
data:
  tls.crt: {your crt} # base64-encoded certificate
  tls.key: {your key} # base64-encoded key
kind: Secret
metadata:
  name: tls-secret
  namespace: spaceone
type: kubernetes.io/tls
EOF
kubectl apply -f tls-secret.yaml
2. Using existing certificate files
If you have a crt and key file, you can create the secret with the following command.
kubectl create secret tls tlssecret --key tls.key --cert tls.crt -n spaceone
Add tls secret to Ingress
Modify ingress using registered secret information.
ingress-nginx settings
Using a secret for TLS may require additional setup with ingress-nginx. For more information, refer to the ingress-nginx documentation.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: console-ingress
namespace: spaceone
annotations:
alb.ingress.kubernetes.io/actions.ssl-redirect: '{"Type": "redirect", "RedirectConfig": { "Protocol": "HTTPS", "Port": "443", "StatusCode": "HTTP_301"}}'
alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}, {"HTTPS":443}]'
alb.ingress.kubernetes.io/scheme: internet-facing
alb.ingress.kubernetes.io/target-type: ip
alb.ingress.kubernetes.io/load-balancer-attributes: idle_timeout.timeout_seconds=600
alb.ingress.kubernetes.io/healthcheck-protocol: HTTP
alb.ingress.kubernetes.io/success-codes: 200-399
alb.ingress.kubernetes.io/load-balancer-name: spaceone-console-ingress # Caution!! Must be fewer than 32 characters.
spec:
tls:
- hosts:
- console.example.com # Change the hostname
secretName: tlssecret # Insert secret name
rules:
- http:
paths:
- path: /*
pathType: ImplementationSpecific
backend:
service:
name: ssl-redirect
port:
name: use-annotation
- host: "console.example.com" # Change the hostname
http:
paths:
- path: /*
pathType: ImplementationSpecific
backend:
service:
name: console
port:
number: 80
3.2.2 - On Premise
Prerequisites
Kubernetes 1.21+ : https://kubernetes.io/docs/setup/
Kubectl command-line tool : https://kubernetes.io/docs/tasks/tools/
Helm 3.11.0+ : https://helm.sh/docs/intro/install/
Nginx Ingress Controller : https://kubernetes.github.io/ingress-nginx/deploy/
Install Cloudforet
This guide describes how to install Cloudforet using the Helm chart. Related information is also available at: https://github.com/cloudforet-io/charts
1. Add Helm Repository
helm repo add cloudforet https://cloudforet-io.github.io/charts
helm repo update
helm search repo
2. Create Namespaces
kubectl create ns spaceone
kubectl create ns spaceone-plugin
Cautions when creating namespaces
If you need to use only one namespace, you do not need to create the spaceone-plugin namespace.
If you are changing the Cloudforet namespace, please refer to the following link: Change K8S Namespace
3. Create Role and RoleBinding
In the usual case where the namespaces are not merged, the supervisor deploys plugins from the spaceone namespace into the spaceone-plugin namespace, so a Role and RoleBinding are required as follows. You can check the contents at the following link: https://github.com/cloudforet-io/charts/blob/master/examples/rbac.yaml
Details of the authority are as follows. You can edit the file to specify permissions if needed.
Create file
cat <<EOF> rbac.yaml
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: supervisor-plugin-control-role
  namespace: spaceone-plugin
rules:
- apiGroups:
  - "*"
  resources:
  - replicaSets
  - pods
  - deployments
  - services
  - endpoints
  verbs:
  - get
  - list
  - watch
  - create
  - delete
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: supervisor-role-binding
  namespace: spaceone-plugin
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: supervisor-plugin-control-role
subjects:
- kind: ServiceAccount
  name: default
  namespace: spaceone
EOF
To apply the permission, you can reflect it with the command below. If you have changed the namespace, enter the changed namespace. (Be careful with the namespace.)
kubectl apply -f rbac.yaml -n spaceone-plugin
4. Install
Proceed with the installation using the helm command below.
helm install cloudforet cloudforet/spaceone -n spaceone
After entering the command, you can see that pods are uploaded in the spaceone
namespace as shown below.
kubectl get pod -n spaceone
NAME READY STATUS RESTARTS AGE
board-64f468ccd6-v8wx4 1/1 Running 0 4m16s
config-6748dc8cf9-4rbz7 1/1 Running 0 4m14s
console-767d787489-wmhvp 1/1 Running 0 4m15s
console-api-846867dc59-rst4k 2/2 Running 0 4m16s
console-api-v2-rest-79f8f6fb59-7zcb2 2/2 Running 0 4m16s
cost-analysis-5654566c95-rlpkz 1/1 Running 0 4m13s
cost-analysis-scheduler-69d77598f7-hh8qt 0/1 CrashLoopBackOff 3 (39s ago) 4m13s
cost-analysis-worker-68755f48bf-6vkfv 1/1 Running 0 4m15s
cost-analysis-worker-68755f48bf-7sj5j 1/1 Running 0 4m15s
cost-analysis-worker-68755f48bf-fd65m 1/1 Running 0 4m16s
cost-analysis-worker-68755f48bf-k6r99 1/1 Running 0 4m15s
dashboard-68f65776df-8s4lr 1/1 Running 0 4m12s
file-manager-5555876d89-slqwg 1/1 Running 0 4m16s
identity-6455d6f4b7-bwgf7 1/1 Running 0 4m14s
inventory-fc6585898-kjmwx 1/1 Running 0 4m13s
inventory-scheduler-6dd9f6787f-k9sff 0/1 CrashLoopBackOff 4 (21s ago) 4m15s
inventory-worker-7f6d479d88-59lxs 1/1 Running 0 4m12s
mongodb-6b78c74d49-vjxsf 1/1 Running 0 4m14s
monitoring-77d9bd8955-hv6vp 1/1 Running 0 4m15s
monitoring-rest-75cd56bc4f-wfh2m 2/2 Running 0 4m16s
monitoring-scheduler-858d876884-b67tc 0/1 Error 3 (33s ago) 4m12s
monitoring-worker-66b875cf75-9gkg9 1/1 Running 0 4m12s
notification-659c66cd4d-hxnwz 1/1 Running 0 4m13s
notification-scheduler-6c9696f96-m9vlr 1/1 Running 0 4m14s
notification-worker-77865457c9-b4dl5 1/1 Running 0 4m16s
plugin-558f9c7b9-r6zw7 1/1 Running 0 4m13s
plugin-scheduler-695b869bc-d9zch 0/1 Error 4 (59s ago) 4m15s
plugin-worker-5f674c49df-qldw9 1/1 Running 0 4m16s
redis-566869f55-zznmt 1/1 Running 0 4m16s
repository-8659578dfd-wsl97 1/1 Running 0 4m14s
secret-69985cfb7f-ds52j 1/1 Running 0 4m12s
statistics-98fc4c955-9xtbp 1/1 Running 0 4m16s
statistics-scheduler-5b6646d666-jwhdw 0/1 CrashLoopBackOff 3 (27s ago) 4m13s
statistics-worker-5f9994d85d-ftpwf 1/1 Running 0 4m12s
supervisor-scheduler-74c84646f5-rw4zf 2/2 Running 0 4m16s
If some of the scheduler pods are having problems and the rest of the pods are up, you're in the right state for now. The scheduler problem requires an upgrade operation using the values.yaml file after issuing a token through the initializer.
5. Initialize the configuration
This is a task for Cloudforet's domain creation. A root domain is created and a root token is issued through the initializer.
spaceone-initializer can be found on the following cloudforet-io github site. https://github.com/cloudforet-io/spaceone-initializer
The initializer.yaml file to be used here can be found at the following link. https://github.com/cloudforet-io/charts/blob/master/examples/initializer.yaml
You can change the domain name, domain_owner.id/password, etc. in the initializer.yaml file.
Create file
cat <<EOF> initializer.yaml
main:
  import:
  - /root/spacectl/apply/root_domain.yaml
  - /root/spacectl/apply/create_managed_repository.yaml
  - /root/spacectl/apply/user_domain.yaml
  - /root/spacectl/apply/create_role.yaml
  - /root/spacectl/apply/add_statistics_schedule.yaml
  - /root/spacectl/apply/print_api_key.yaml
  var:
    domain:
      root: root
      user: spaceone
    default_language: ko
    default_timezone: Asia/Seoul
    domain_owner:
      id: admin
      password: Admin123!@# # Change your password
    user:
      id: system_api_key
EOF
After editing the file, execute the initializer with the command below.
helm install initializer cloudforet/spaceone-initializer -n spaceone -f initializer.yaml
After execution, an initializer pod is created in the specified spaceone
namespace and domain creation is performed. You can check the log when the pod is in Completed
state.
6. Set the Helm Values and Upgrade the chart
To customize the default Helm chart installation, a values.yaml file is required. A typical example of a values.yaml file can be found at the following link: https://github.com/cloudforet-io/charts/blob/master/examples/values/all.yaml
To solve the scheduler problem, check the pod log in Completed status as shown below to obtain an admin token.
kubectl logs initializer-5f5b7b5cdc-abcd1 -n spaceone
(omit)
TASK [Print Admin API Key] *********************************************************************************************
"{TOKEN}"
FINISHED [ ok=23, skipped=0 ] ******************************************************************************************
FINISH SPACEONE INITIALIZE
Create a values.yaml file using the token value obtained from the initializer pod log. Inside the file, you can declare application settings, namespace changes, Kubernetes option changes, and so on.
The following describes how to configure the console domain in the values.yaml file and how to use the issued token as a global config.
console:
production_json:
# If you don't have a service domain, you refer to the following 'No Domain & IP Access' example.
CONSOLE_API:
ENDPOINT: https://console.api.example.com # Change the endpoint
CONSOLE_API_V2:
ENDPOINT: https://console-v2.api.example.com # Change the endpoint
global:
shared_conf:
TOKEN: '{TOKEN}' # Change the system token
After setting up the values.yaml file as above, run the helm upgrade operation with the command below. After the upgrade is finished, delete all app instances related to Cloudforet so that all pods are restarted.
helm upgrade cloudforet cloudforet/spaceone -n spaceone -f values.yaml
kubectl delete po -n spaceone -l app.kubernetes.io/instance=cloudforet
7. Check the status of the pods
Check the status of the pods with the following command. If all pods are in the Running state, the installation is complete.
kubectl get pod -n spaceone
8. Configuration Ingress
Kubernetes Ingress is a resource that manages external access to services in a cluster. Cloudforet is exposed by registering the generated certificate as a secret and adding ingresses in the order below.
Install Nginx Ingress Controller
An ingress controller is required to use ingress in an on-premise environment. Here is a link to the installation guide for Nginx Ingress Controller supported by Kubernetes.
- Nginx Ingress Controller : https://kubernetes.github.io/ingress-nginx/deploy/
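For reference, the controller can typically be installed with Helm along these lines; this is a sketch, and environment-specific options should follow the official guide above.
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
helm install ingress-nginx ingress-nginx/ingress-nginx -n ingress-nginx --create-namespace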
Generate self-managed SSL
Create a private SSL certificate using the openssl commands below. (If you already have an issued certificate, you can create a Secret from it; for detailed instructions, refer to the following link: Create secret by existing cert.)
console
- *.{domain}
openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout console_ssl.pem -out console_ssl.csr -subj "/CN=*.{domain}/O=spaceone" -addext "subjectAltName = DNS:*.{domain}"
api
- *.api.{domain}
openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout api_ssl.pem -out api_ssl.csr -subj "/CN=*.api.{domain}/O=spaceone" -addext "subjectAltName = DNS:*.api.{domain}"
Create secret for ssl
If the certificate is ready, create a secret using the certificate file.
kubectl create secret tls console-ssl --key console_ssl.pem --cert console_ssl.csr -n spaceone
kubectl create secret tls api-ssl --key api_ssl.pem --cert api_ssl.csr -n spaceone
Create Ingress
Prepare the two ingress files below. These ingress files can be downloaded from the following link.
console_ingress.yaml : https://github.com/cloudforet-io/charts/blob/master/examples/ingress/on_premise/console_ingress.yaml
rest_api_ingress.yaml : https://github.com/cloudforet-io/charts/blob/master/examples/ingress/on_premise/rest_api_ingress.yaml
Each file is shown below. Change the hostname inside the file to match the domain of the certificate you created, and make sure the secretName matches the secret you created above (e.g. console-ssl or api-ssl).
console
cat <<EOF> console_ingress.yaml
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: console-ingress
  namespace: spaceone
spec:
  ingressClassName: nginx
  tls:
  - hosts:
    - "console.example.com" # Change the hostname
    secretName: spaceone-tls
  rules:
  - host: "console.example.com" # Change the hostname
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: console
            port:
              number: 80
EOF
rest_api
cat <<EOF> rest_api_ingress.yaml
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: console-api-ingress
  namespace: spaceone
spec:
  ingressClassName: nginx
  tls:
  - hosts:
    - "*.api.example.com" # Change the hostname
    secretName: spaceone-tls
  rules:
  - host: "console.api.example.com" # Change the hostname
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: console-api
            port:
              number: 80
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: console-api-v2-ingress
  namespace: spaceone
spec:
  ingressClassName: nginx
  tls:
  - hosts:
    - "*.api.example.com" # Change the hostname
    secretName: spaceone-tls
  rules:
  - host: "console-v2.api.example.com" # Change the hostname
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: console-api-v2-rest
            port:
              number: 80
EOF
Create the prepared ingresses in the spaceone namespace with the commands below.
kubectl apply -f console_ingress.yaml -n spaceone
kubectl apply -f rest_api_ingress.yaml -n spaceone
Connect to the Console
Connect to the Cloudforet Console service.
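If the hostnames you chose (for example console.example.com) are not registered in DNS, a temporary hosts entry pointing at the ingress controller's external IP is enough for testing. The service name below assumes a default ingress-nginx Helm installation, and the hostnames are examples.
kubectl get svc -n ingress-nginx ingress-nginx-controller   # note the EXTERNAL-IP
# replace <EXTERNAL-IP> and the hostnames with your own values
echo "<EXTERNAL-IP> console.example.com console.api.example.com console-v2.api.example.com" | sudo tee -a /etc/hosts
Then open https://console.example.com in a browser and accept the self-signed certificate warning.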
Advanced Configurations
Additional settings are required for the following special features. Below are examples and solutions for each situation.
Name | Description |
---|---|
Set Plugin Certificate | This is how to set a certificate for each plugin when using a private certificate. |
Support Private Image Registry | In an environment where communication with the outside is blocked for organization's security reasons, you can operate your own Private Image Registry. In this case, Container Image Sync operation is required, and Cloudforet suggests a method using the dregsy tool. |
Change K8S Namespace | Namespace usage is limited by each environment, or you can use your own namespace name. Here is how to change Namespace in Cloudforet. |
Set HTTP Proxy | In the on-premise environment with no Internet connection, proxy settings are required to communicate with the external world. Here's how to set up HTTP Proxy. |
Set K8S ImagePullSecrets | If you are using Private Image Registry, you may need credentials because user authentication is set. In Kubernetes, you can use secrets to register credentials with pods. Here's how to set ImagePullSecrets. |
3.3 - Configuration
3.3.1 - Set plugin certificate
If Cloudforet is built in an on-premise environment, it can be accessed through a proxy server without direct communication with the Internet.
In this case, a private certificate is required when communicating with the proxy server.
First, configure a Secret with the prepared private certificate and mount it on the private-tls volume.
After that, set the environment variables required for the certificate in the supervisor's KubernetesConnector to the path of tls.crt in the private-tls volume.
Register the prepared private certificate as a Kubernetes Secret
Parameter | Description | Default |
---|---|---|
apiVersion | API version of resource | v1 |
kind | Kind of resource | Secret |
metadata | Metadata of resource | {...} |
metadata.name | Name of resource | private-tls |
metadata.namespace | Namespace of resource | spaceone |
data | Data of resource | tls.crt |
type | Type of resource | kubernetes.io/tls |
kubectl apply -f create_tls_secret.yml
---
apiVersion: v1
kind: Secret
metadata:
name: private-tls
namespace: spaceone
data:
tls.crt: base64 encoded cert # openssl base64 -in cert.pem -out cert.base64
type: kubernetes.io/tls
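After applying the manifest, a minimal way to confirm the Secret exists and that the stored certificate decodes back to valid PEM is shown below (assuming the Secret name private-tls from the parameter table above).
kubectl get secret private-tls -n spaceone
kubectl get secret private-tls -n spaceone -o jsonpath='{.data.tls\.crt}' | base64 -d | head -n 1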
Set up the KubernetesConnector of the supervisor
Parameter | Description | Default |
---|---|---|
supervisor.application_scheduler | Configuration of supervisor scheduler | {...} |
supervisor.application_scheduler.CONNECTORS.KubernetesConnector.env[] | Environment variables for plugin | [...] |
supervisor.application_scheduler.CONNECTORS.KubernetesConnector.env[].name | Name of environment variable | REQUESTS_CA_BUNDLE, AWS_CA_BUNDLE, CLOUDFORET_CA_BUNDLE |
supervisor.application_scheduler.CONNECTORS.KubernetesConnector.env[].value | Value of environment variable | /opt/ssl/cert/tls.crt |
supervisor.application_scheduler.CONNECTORS.KubernetesConnector.volumes[] | Volumes for plugin | [...] |
supervisor.application_scheduler.CONNECTORS.KubernetesConnector.volumes[].name | Name of volumes | private-tls |
supervisor.application_scheduler.CONNECTORS.KubernetesConnector.volumes[].secret.secretName | Secret name of secret volume | private-tls |
supervisor.application_scheduler.CONNECTORS.KubernetesConnector.volumeMounts[] | Volume mounts of plugins | [...] |
supervisor.application_scheduler.CONNECTORS.KubernetesConnector.volumeMounts[].name | Name of volume mounts | private-tls |
supervisor.application_scheduler.CONNECTORS.KubernetesConnector.volumeMounts[].mountPath | Path of volume mounts | /opt/ssl/cert/tls.crt |
supervisor.application_scheduler.CONNECTORS.KubernetesConnector.volumeMounts[].readOnly | Read permission on the mounted volume | true |
supervisor:
enabled: true
image:
name: spaceone/supervisor
version: x.y.z
imagePullSecrets:
- name: my-credential
application_scheduler:
CONNECTORS:
KubernetesConnector:
env:
- name: REQUESTS_CA_BUNDLE
value: /opt/ssl/cert/tls.crt
- name: AWS_CA_BUNDLE
value: /opt/ssl/cert/tls.crt
- name: CLOUDFORET_CA_BUNDLE
value: /opt/ssl/cert/tls.crt
volumes:
- name: private-tls
secret:
secretName: private-tls
volumeMounts:
- name: private-tls
mountPath: /opt/ssl/cert/tls.crt
readOnly: true
Update
You can apply the changes through the helm upgrade command and by deleting the pods.
helm upgrade cloudforet cloudforet/spaceone -n spaceone -f values.yaml
kubectl delete po -n spaceone -l app.kubernetes.io/instance=cloudforet
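To watch the pods come back up with the new settings, you can follow their status until they are all Running again, for example:
kubectl get pods -n spaceone -w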
3.3.2 - Change kubernetes namespace
When Cloudforet is installed in a K8S environment, the core services are installed in the spaceone
namespace and the plugin services for extension features are installed in the spaceone-plugin
namespace. (In v1.11.5 and below, plugins are installed in root-supervisor.)
If you want to install the core services or plugin services in a namespace with a different name, or in a single namespace, the namespace must be changed through options.
In order to change the namespace, you need to write changes in Cloudforet's values.yaml. Changes can be made to each core service and plugin service.
Change the namespace of the core service
To change the namespace of the core service, declare global.namespace in the values.yaml file and set it to the desired value (spaceone-namespace in the example below).
#console:
# production_json:
# CONSOLE_API:
# ENDPOINT: https://console.api.example.com # Change the endpoint
# CONSOLE_API_V2:
# ENDPOINT: https://console-v2.api.example.com # Change the endpoint
global:
namespace: spaceone-namespace # Change the namespace
shared_conf:
Change the namespace of plugin service
You can change the namespace of the supervisor's plugin services as well as that of the core services. The life cycle of plugin services is managed by the supervisor, so the plugin namespace is also configured in the supervisor.
Below is the part of the values.yaml file where the supervisor is configured to change the namespace of the plugin services. Set supervisor.application_scheduler.CONNECTORS.KubernetesConnector.namespace to the desired value (plugin-namespace in the example below).
#console:
supervisor:
application_scheduler:
HOSTNAME: spaceone.svc.cluster.local # Change the hostname
CONNECTORS:
KubernetesConnector:
namespace: plugin-namespace # Change the namespace
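The supervisor schedules plugin pods into the namespace configured above, so that namespace must exist in the cluster. If it is not created as part of your installation, a sketch of creating it manually (using the example name plugin-namespace) is:
kubectl create namespace plugin-namespace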
Update
You can apply the changes through the helm upgrade command and by deleting the pods.
helm upgrade cloudforet cloudforet/spaceone -n spaceone -f values.yaml
kubectl delete po -n spaceone -l app.kubernetes.io/instance=cloudforet
3.3.3 - Creating and applying kubernetes imagePullSecrets
Due to an organization's security requirements, users can build and use a private, dedicated image registry to manage private images.
To pull container images from a private image registry, credentials are required. In Kubernetes, Secrets can be used to register such credentials with pods, enabling them to retrieve and pull private container images.
For more detailed information, please refer to the official documentation.
Creating a Secret for credentials.
Kubernetes pods can pull private container images using a Secret of type kubernetes.io/dockerconfigjson.
To do this, create a secret for credentials based on registry credentials.
kubectl create secret docker-registry my-credential --docker-server=<your-registry-server> --docker-username=<your-name> --docker-password=<your-pword> --docker-email=<your-email>
Mount the credentials Secret to a Pod.
You can specify imagePullSecrets in the helm chart values of Cloudforet to mount the credentials Secret to the pods.
WARN: Kubernetes Secrets are namespace-scoped resources, so the Secret must exist in the same namespace as the pods that use it.
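Because of this, a sketch of registering the same credential in both the core and the plugin namespaces might look like the following; the namespace names are the chart defaults described earlier and may differ in your environment.
kubectl create secret docker-registry my-credential -n spaceone --docker-server=<your-registry-server> --docker-username=<your-name> --docker-password=<your-pword> --docker-email=<your-email>
kubectl create secret docker-registry my-credential -n spaceone-plugin --docker-server=<your-registry-server> --docker-username=<your-name> --docker-password=<your-pword> --docker-email=<your-email>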
Set imagePullSecrets configuration for the core service
Parameter | description | Default |
---|---|---|
[services].imagePullSecrets[] | imagePullSecrets configuration (* each microservice section) | [] |
[services].imagePullSecrets[].name | Name of secret type of kubernetes.io/dockerconfigjson | "" |
console:
enable: true
image:
name: spaceone/console
version: x.y.z
imagePullSecrets:
- name: my-credential
console-api:
enable: true
image:
name: spaceone/console-api
version: x.y.z
imagePullSecrets:
- name: my-credential
(...)
Set imagePullSecrets configuration for the plugin
Parameter | description | Default |
---|---|---|
supervisor.application_scheduler | Configuration of supervisor scheduler | {...} |
supervisor.application_scheduler.CONNECTORS.KubernetesConnector.imagePullSecrets[] | imagePullSecrets configuration for plugin | [] |
supervisor.application_scheduler.CONNECTORS.KubernetesConnector.imagePullSecrets[].name | Name of secret type of kubernetes.io/dockerconfigjson for plugin | "" |
supervisor:
enabled: true
image:
name: spaceone/supervisor
version: x.y.z
imagePullSecrets:
- name: my-credential
application_scheduler:
CONNECTORS:
KubernetesConnector:
imagePullSecrets:
- name: my-credential
Update
You can apply the changes through the helm upgrade command and by deleting the pods.
helm upgrade cloudforet cloudforet/spaceone -n spaceone -f values.yaml
kubectl delete po -n spaceone -l app.kubernetes.io/instance=cloudforet
3.3.4 - Setting up http proxy
You can enable communication from pods to the external world through a proxy server by declaring the http_proxy and https_proxy environment variables in each container.
The no_proxy environment variable is used to exclude destinations from proxy communication.
For Cloudforet, it is recommended to exclude the in-cluster service domains so that the microservices can communicate with each other directly.
Example
Set proxy configuration for the core service
Parameter | description | Default |
---|---|---|
global.common_env[] | Environment Variable for all micro services | [] |
global.common_env[].name | Name of environment variable | "" |
global.common_env[].value | Value of environment variable | "" |
global:
common_env:
- name: HTTP_PROXY
value: http://{proxy_server_address}:{proxy_port}
- name: HTTPS_PROXY
value: http://{proxy_server_address}:{proxy_port}
- name: no_proxy
value: .svc.cluster.local,localhost,{cluster_ip},board,config,console,console-api,console-api-v2,cost-analysis,dashboard,docs,file-manager,identity,inventory,marketplace-assets,monitoring,notification,plugin,repository,secret,statistics,supervisor
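Once the values are applied (see the Update step below), you can spot-check that the proxy variables reached a container; for example, the command below inspects the console deployment (the deployment name is assumed from the chart defaults and may differ in your installation).
kubectl exec -n spaceone deploy/console -- env | grep -i proxy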
Set proxy configuration for the plugin
Parameter | description | Default |
---|---|---|
supervisor.application_scheduler | Configuration of supervisor scheduler | {...} |
supervisor.application_scheduler.CONNECTORS.KubernetesConnector.env[] | Environment Variable for plugin | [] |
supervisor.application_scheduler.CONNECTORS.KubernetesConnector.env[].name | Name of environment variable | "" |
supervisor.application_scheduler.CONNECTORS.KubernetesConnector.env[].value | Value of environment variable | "" |
WARN:
Depending on your installation environment, the default local domain may differ, so you need to change the default local domain (such as .svc.cluster.local) to match your environment. You can check the current cluster DNS settings with the following command.
kubectl run -it --rm busybox --image=busybox --restart=Never -- cat /etc/resolv.conf
supervisor:
enabled: true
image:
name: spaceone/supervisor
version: x.y.z
imagePullSecrets:
- name: my-credential
application_scheduler:
CONNECTORS:
KubernetesConnector:
env:
- name: HTTP_PROXY
value: http://{proxy_server_address}:{proxy_port}
- name: HTTPS_PROXY
value: http://{proxy_server_address}:{proxy_port}
- name: no_proxy
value: .svc.cluster.local,localhost,{cluster_ip},board,config,console,console-api,console-api-v2,cost-analysis,dashboard,docs,file-manager,identity,inventory,marketplace-assets,monitoring,notification,plugin,repository,secret,statistics,supervisor
Update
You can apply the changes through the helm upgrade command and by deleting the pods.
helm upgrade cloudforet cloudforet/spaceone -n spaceone -f values.yaml
kubectl delete po -n spaceone -l app.kubernetes.io/instance=cloudforet
3.3.5 - Support private image registry
In organizations operating in an on-premise environment, there are cases where they establish and operate their own container registry within the internal network due to security concerns.
In such environments, access to external networks is restricted when installing Cloudforet, so the required images must be prepared from Dockerhub and synced to the organization's own container registry.
To automate this synchronization, Cloudforet proposes using a container registry sync tool called 'dregsy' to sync container images periodically.
dregsy runs in an environment that can reach both the external and the internal network; it periodically pulls the required container images from Dockerhub and pushes them to the organization's private container registry.
NOTE:
The dregsy tool described in this guide always pulls container images from Dockerhub, regardless of whether the images already exist in the destination registry.
Also, Docker Hub limits the number of image pulls based on the account type of the user pulling the image:
- For anonymous users, the rate limit is 100 pulls per 6 hours per IP address.
- For authenticated users, it is 200 pulls per 6-hour period.
- Users with a paid Docker subscription get up to 5,000 pulls per day.
Install and Configuration
NOTE:
In this configuration, communication with Dockerhub is required, so it should be performed in an environment with internet access.
Also, this explanation is based on the installation of Cloudforet version 1.11.x
Prerequisite
- docker (Install Docker Engine)
Installation
Since the tools are executed using Docker, there is no separate installation process required.
The plan is to pull and run the dregsy image, which includes skopeo (mirror tool).
Configuration
- Create files
touch /path/to/your/dregsy-spaceone-core.yaml
touch /path/to/your/dregsy-spaceone-plugin.yaml
- Add configuration (dregsy-spaceone-core.yaml)
If authentication to the registry uses username:password, encode the credentials and set them in the 'auth' fields of the source and target sections in the configuration below.
echo '{"username": "...", "password": "..."}' | base64
In the case of Harbor, a Robot Token is not supported for authentication; please authenticate with the encoded username:password pair.
relay: skopeo
watch: true
skopeo:
binary: skopeo
certs-dir: /etc/skopeo/certs.d
lister:
maxItems: 100
cacheDuration: 2h
tasks:
- name: sync_spaceone_doc
interval: 21600 # 6 hours
verbose: true
source:
registry: registry.hub.docker.com
auth: {Token} # replace to your dockerhub token
target:
registry: {registry_address} # replace to your registry address
auth: {Token} # replace to your registry token
skip-tls-verify: true
mappings:
- from: spaceone/spacectl
to: your_registry_project/spaceone/spacectl # replace to your registry project & repository
tags:
- 'regex: 1\.11\.(?:[0-9]?[0-9]).*'
- from: spaceone/marketplace-assets
to: your_registry_project/spaceone/marketplace-assets # replace to your registry project & repository
tags:
- 'regex: 1\.11\.(?:[0-9]?[0-9]).*'
- from: spaceone/docs
to: your_registry_project/spaceone/docs # replace to your registry project & repository
tags:
- 'regex: 1\.11\.(?:[0-9]?[0-9]).*'
- from: redis
to: your_registry_project/spaceone/redis # replace to your registry project & repository
tags:
- 'latest'
- from: mongo
to: your_registry_project/spaceone/mongo # replace to your registry project & repository
tags:
- 'latest'
- name: sync_spaceone_core
interval: 21600 # 6 hours
verbose: true
source:
registry: registry.hub.docker.com
auth: {Token}
target:
registry: {registry_address} # replace to your registry address
auth: {Token} # replace to your registry token
skip-tls-verify: true
mappings:
- from: spaceone/console
to: your_registry_project/spaceone/console # replace to your registry project & repository
tags:
- 'regex: 1\.11\.(?:[0-9]?[0-9]).*'
- from: spaceone/inventory
to: your_registry_project/spaceone/inventory # replace to your registry project & repository
tags:
- 'regex: 1\.11\.(?:[0-9]?[0-9]).*'
- from: spaceone/console-api
to: your_registry_project/spaceone/console-api # replace to your registry project & repository
tags:
- 'regex: 1\.11\.(?:[0-9]?[0-9]).*'
- from: spaceone/cost-analysis
to: your_registry_project/spaceone/cost-analysis # replace to your registry project & repository
tags:
- 'regex: 1\.11\.(?:[0-9]?[0-9]).*'
- from: spaceone/statistics
to: your_registry_project/spaceone/statistics # replace to your registry project & repository
tags:
- 'regex: 1\.11\.(?:[0-9]?[0-9]).*'
- from: spaceone/secret
to: your_registry_project/spaceone/secret # replace to your registry project & repository
tags:
- 'regex: 1\.11\.(?:[0-9]?[0-9]).*'
- from: spaceone/file-manager
to: your_registry_project/spaceone/file-manager # replace to your registry project & repository
tags:
- 'regex: 1\.11\.(?:[0-9]?[0-9]).*'
- from: spaceone/monitoring
to: your_registry_project/spaceone/monitoring # replace to your registry project & repository
tags:
- 'regex: 1\.11\.(?:[0-9]?[0-9]).*'
- from: spaceone/supervisor
to: your_registry_project/spaceone/supervisor # replace to your registry project & repository
tags:
- 'regex: 1\.11\.(?:[0-9]?[0-9]).*'
- from: spaceone/identity
to: your_registry_project/spaceone/identity # replace to your registry project & repository
tags:
- 'regex: 1\.11\.(?:[0-9]?[0-9]).*'
- from: spaceone/notification
to: your_registry_project/spaceone/notification # replace to your registry project & repository
tags:
- 'regex: 1\.11\.(?:[0-9]?[0-9]).*'
- from: spaceone/repository
to: your_registry_project/spaceone/repository # replace to your registry project & repository
tags:
- 'regex: 1\.11\.(?:[0-9]?[0-9]).*'
- from: spaceone/plugin
to: your_registry_project/spaceone/plugin # replace to your registry project & repository
tags:
- 'regex: 1\.11\.(?:[0-9]?[0-9]).*'
- from: spaceone/config
to: your_registry_project/spaceone/config # replace to your registry project & repository
tags:
- 'regex: 1\.11\.(?:[0-9]?[0-9]).*'
- from: spaceone/console-api-v2
to: your_registry_project/spaceone/console-api-v2 # replace to your registry project & repository
tags:
- 'regex: 1\.11\.(?:[0-9]?[0-9]).*'
- from: spaceone/board
to: your_registry_project/spaceone/board # replace to your registry project & repository
tags:
- 'regex: 1\.11\.(?:[0-9]?[0-9]).*'
- from: spaceone/dashboard
to: your_registry_project/spaceone/dashboard # replace to your registry project & repository
tags:
- 'regex: 1\.11\.(?:[0-9]?[0-9]).*'
- Add configuration (dregsy-spaceone-plugin.yaml)
relay: skopeo
watch: true
skopeo:
binary: skopeo
certs-dir: /etc/skopeo/certs.d
lister:
maxItems: 100
cacheDuration: 2h
tasks:
- name: sync_spaceone_plugin
interval: 21600 # 6 hours
verbose: true
source:
registry: registry.hub.docker.com
auth: {Token} # replace to your dockerhub token
target:
registry: {registry_address} # replace to your registry address
auth: {Token} # replace to your registry token
skip-tls-verify: true
mappings:
- from: spaceone/plugin-google-cloud-inven-collector
to: your_registry_project/spaceone/plugin-google-cloud-inven-collector # replace to your registry project & repository
tags:
- 'semver: >=1.0.0 <1.99.0'
- 'keep: latest 2'
- from: spaceone/plugin-azure-inven-collector
to: your_registry_project/spaceone/plugin-azure-inven-collector # replace to your registry project & repository
tags:
- 'semver: >=1.0.0 <1.99.0'
- 'keep: latest 2'
- from: spaceone/plugin-aws-cloudwatch-mon-datasource
to: your_registry_project/spaceone/plugin-aws-cloudwatch-mon-datasource # replace to your registry project & repository
tags:
- 'semver: >=1.0.0 <1.99.0'
- 'keep: latest 2'
- from: spaceone/plugin-azure-activity-log-mon-datasource
to: your_registry_project/spaceone/plugin-azure-activity-log-mon-datasource # replace to your registry project & repository
tags:
- 'semver: >=1.0.0 <1.99.0'
- 'keep: latest 2'
- from: spaceone/plugin-aws-cloudtrail-mon-datasource
to: your_registry_project/spaceone/plugin-aws-cloudtrail-mon-datasource # replace to your registry project & repository
tags:
- 'semver: >=1.0.0 <1.99.0'
- 'keep: latest 2'
- from: spaceone/plugin-aws-ec2-inven-collector
to: your_registry_project/spaceone/plugin-aws-ec2-inven-collector # replace to your registry project & repository
tags:
- 'semver: >=1.0.0 <1.99.0'
- 'keep: latest 2'
- from: spaceone/plugin-aws-sns-mon-webhook
to: your_registry_project/spaceone/plugin-aws-sns-mon-webhook # replace to your registry project & repository
tags:
- 'semver: >=1.0.0 <1.99.0'
- 'keep: latest 2'
- from: spaceone/plugin-aws-trusted-advisor-inven-collector
to: your_registry_project/spaceone/plugin-aws-trusted-advisor-inven-collector # replace to your registry project & repository
tags:
- 'semver: >=1.0.0 <1.99.0'
- 'keep: latest 2'
- from: spaceone/plugin-azure-monitor-mon-datasource
to: your_registry_project/spaceone/plugin-azure-monitor-mon-datasource # replace to your registry project & repository
tags:
- 'semver: >=1.0.0 <1.99.0'
- 'keep: latest 2'
- from: spaceone/plugin-email-noti-protocol
to: your_registry_project/spaceone/plugin-email-noti-protocol # replace to your registry project & repository
tags:
- 'semver: >=1.0.0 <1.99.0'
- 'keep: latest 2'
- from: spaceone/plugin-google-stackdriver-mon-datasource
to: your_registry_project/spaceone/plugin-google-stackdriver-mon-datasource # replace to your registry project & repository
tags:
- 'semver: >=1.0.0 <1.99.0'
- 'keep: latest 2'
- from: spaceone/plugin-telegram-noti-protocol
to: your_registry_project/spaceone/plugin-telegram-noti-protocol # replace to your registry project & repository
tags:
- 'semver: >=1.0.0 <1.99.0'
- 'keep: latest 2'
- from: spaceone/plugin-keycloak-identity-auth
to: your_registry_project/spaceone/plugin-keycloak-identity-auth # replace to your registry project & repository
tags:
- 'semver: >=1.0.0 <1.99.0'
- 'keep: latest 2'
- from: spaceone/plugin-prometheus-mon-webhook
to: your_registry_project/spaceone/plugin-prometheus-mon-webhook # replace to your registry project & repository
tags:
- 'semver: >=1.0.0 <1.99.0'
- 'keep: latest 2'
- from: spaceone/plugin-slack-noti-protocol
to: your_registry_project/spaceone/plugin-slack-noti-protocol # replace to your registry project & repository
tags:
- 'semver: >=1.0.0 <1.99.0'
- 'keep: latest 2'
- from: spaceone/plugin-grafana-mon-webhook
to: your_registry_project/spaceone/plugin-grafana-mon-webhook # replace to your registry project & repository
tags:
- 'semver: >=1.0.0 <1.99.0'
- 'keep: latest 2'
- from: spaceone/plugin-aws-cloud-service-inven-collector
to: your_registry_project/spaceone/plugin-aws-cloud-service-inven-collector # replace to your registry project & repository
tags:
- 'semver: >=1.0.0 <1.99.0'
- 'keep: latest 2'
- from: spaceone/plugin-aws-phd-inven-collector
to: your_registry_project/spaceone/plugin-aws-phd-inven-collector # replace to your registry project & repository
tags:
- 'semver: >=1.0.0 <1.99.0'
- 'keep: latest 2'
- from: spaceone/plugin-api-direct-mon-webhook
to: your_registry_project/spaceone/plugin-api-direct-mon-webhook # replace to your registry project & repository
tags:
- 'semver: >=1.0.0 <1.99.0'
- 'keep: latest 2'
- from: spaceone/plugin-azure-cost-mgmt-cost-datasource
to: your_registry_project/spaceone/plugin-azure-cost-mgmt-cost-datasource # replace to your registry project & repository
tags:
- 'semver: >=1.0.0 <1.99.0'
- 'keep: latest 2'
- from: spaceone/plugin-aws-cost-explorer-cost-datasource
to: your_registry_project/spaceone/plugin-aws-cost-explorer-cost-datasource # replace to your registry project & repository
tags:
- 'semver: >=1.0.0 <1.99.0'
- 'keep: latest 2'
- from: spaceone/plugin-ms-teams-noti-protocol
to: your_registry_project/spaceone/plugin-ms-teams-noti-protocol # replace to your registry project & repository
tags:
- 'semver: >=1.0.0 <1.99.0'
- 'keep: latest 2'
- from: spaceone/plugin-google-monitoring-mon-webhook
to: your_registry_project/spaceone/plugin-google-monitoring-mon-webhook # replace to your registry project & repository
tags:
- 'semver: >=1.0.0 <1.99.0'
- 'keep: latest 2'
- from: spaceone/plugin-http-file-cost-datasource
to: your_registry_project/spaceone/plugin-http-file-cost-datasource # replace to your registry project & repository
tags:
- 'semver: >=1.0.0 <1.99.0'
- 'keep: latest 2'
- from: spaceone/plugin-google-cloud-log-mon-datasource
to: your_registry_project/spaceone/plugin-google-cloud-log-mon-datasource # replace to your registry project & repository
tags:
- 'semver: >=1.0.0 <1.99.0'
- 'keep: latest 2'
Run
There is no need to pull the Docker image separately; the command below will pull the image automatically if it is not available locally.
docker run -d --rm --name dregsy_spaceone_core -v /path/to/your/dregsy-spaceone-core.yaml:/config.yaml xelalex/dregsy:0.5.0
docker run -d --rm --name dregsy_spaceone_plugin -v /path/to/your/dregsy-spaceone-plugin.yaml:/config.yaml xelalex/dregsy:0.5.0
Management
- view log
docker logs -f {container_id|container_name}
- delete docker container
docker rm {container_id|container_name} [-f]
3.3.6 - Advanced configuration guide
Title and Favicon
SpaceONE ships with a default title and CI (the Wanny favicon), but you can change them to your own title and favicon.
Component | File Path | Description |
---|---|---|
Title | /var/www/title.txt | Name of the title |
Favicon | /var/www/favicon.ico | Favicon file |
The console supports changing the title and favicon. The default values are in the source code, but you can override them when deploying pods.
This is an example snippet of the console.yaml file:
# favicon
volumeMounts:
application:
- name: favicon
mountPath: /var/www/title.txt
subPath: title.txt
readOnly: true
- name: favicon-img
mountPath: /var/www/favicon.ico
subPath: favicon.ico
readOnly: true
volumes:
- name: favicon
configMap:
name: favicon
- name: favicon-img
configMap:
name: favicon-img
The actual values come from Kubernetes ConfigMap objects, so you may have to change the values in the ConfigMap or create a new one and mount it in your pod.
Title
apiVersion: v1
kind: ConfigMap
metadata:
name: favicon
namespace: spaceone
data:
title.txt: |
KB One Cloud
Favicon
apiVersion: v1
kind: ConfigMap
metadata:
name: favicon-img
namespace: spaceone
binaryData:
favicon.ico: AAABAAEAAAAAAAEAIADxxxxxxx...
NOTE: favicon.ico must be base64 encoded.
# cat favicon.ico | base64
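Instead of pasting the base64 string by hand, you can also let kubectl build both ConfigMaps directly from local files, which stores binary content under binaryData automatically; this is a sketch that assumes the same ConfigMap names used above.
kubectl create configmap favicon -n spaceone --from-file=title.txt
kubectl create configmap favicon-img -n spaceone --from-file=favicon.ico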
Corporate Identity
When you open a SpaceONE page, you can see the default SpaceONE CI, logo, and text. You can replace the default SpaceONE CI with your company CI.
Login Page
Every Page
Update helm value of console (console -> production_json -> DOMAIN_IMAGE)
keyword: DOMAIN_IMAGE
Configuration | Description | Format |
---|---|---|
CI_LOGO | Custom Logo Image | Image (56 * 56 px) |
CI_TEXT_WITH_TYPE | CI Text Image | Image (164 * 40 px) |
SIGN_IN | Sign-in page Image | Image (1024 * 1024 px) |
CI_TEXT | CI Text Image On every page | Image (123 * 16 px) |
NOTE: The recommended file format is SVG. If you would like to use a PNG file, use a transparent background and double the recommended size.
NOTE: SpaceONE does not support uploading files, so host the CI files on your own web server or S3.
console:
enabled: true
developer: false
name: console
replicas: 2
image:
name: spaceone/console
version: 1.8.7
imagePullPolicy: IfNotPresent
#######################
# TODO: Update value
# - ENDPOINT
# - GTAG_ID (if you have google analytics ID)
# - AMCHARTS_LICENSE (for commercial use only)
#######################
production_json:
CONSOLE_API:
ENDPOINT: http://console-api.example.com
DOMAIN_IMAGE:
CI_LOGO: https://spaceone-custom-assets.s3.ap-northeast-2.amazonaws.com/console-assets/domain/example/ci-logo.svg
CI_TEXT_WITH_TYPE: https://spaceone-custom-assets.s3.ap-northeast-2.amazonaws.com/console-assets/domain/example/ci-text1.svg
SIGN_IN: https://spaceone-custom-assets.s3.ap-northeast-2.amazonaws.com/console-assets/domain/example/login-img.png
CI_TEXT: https://spaceone-custom-assets.s3.ap-northeast-2.amazonaws.com/console-assets/domain/example/ci-text2.svg
Google Analytics
You can apply Google Analytics to SpaceONE Console by following the steps below.
Create accounts and properties
Log in to your Google account after accessing the Google Analytics site.
Click the Start Measurement button.
Enter your account name and click the Next button.
Enter a property name and click the Next button.
In the property name, enter the name of the url you want to track.
Click the Create button.
Click the Agree button after agreeing to the data processing terms.
Set up data streams
Choose Web as the platform for the data stream you want to collect.
Enter your SpaceONE Console website URL and stream name and click the Create Stream button.
Check the created stream information and copy the measurement ID.
Set up the SpaceONE Helm Chart
Paste the copied measurement ID as the value for the GTAG_ID
key in the helm chart settings as shown below.
# frontend.yaml
console:
...
production_json:
...
GTAG_ID: {measurement ID}
...
3.3.7 - Create a secret from an existing certificate
If a public or private certificate has already been issued, you can create a secret through the existing certificate. The following is how to create a secret using the certificate_secret.yaml
file.
Create Secret from certificate_secret.yaml file
If the certificate is ready, edit the certificate_secret.yaml
file. The file can be downloaded from the link below and edited as shown in the following example. https://github.com/cloudforet-io/charts/blob/master/examples/ingress/on_premise/certificate_secret.yaml
cat <<EOF> certificate_secret.yaml
apiVersion: v1
kind: Secret
metadata:
name: spaceone-tls
namespace: spaceone # Change the namespace
data:
tls.crt: base64 encoded cert # openssl base64 -in cert.pem -out cert.base64
tls.key: base64 encoded key # openssl base64 -in key.pem -out key.base64
type: kubernetes.io/tls
EOF
Apply the certificate_secret.yaml
file to the spaceone
namespace through the following command.
kubectl apply -f certificate_secret.yaml -n spaceone
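To verify the result, you can check that the Secret exists and that the certificate inside it has the expected subject and expiry date, for example:
kubectl get secret spaceone-tls -n spaceone
kubectl get secret spaceone-tls -n spaceone -o jsonpath='{.data.tls\.crt}' | base64 -d | openssl x509 -noout -subject -enddate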
4 - User Guide
4.1 - Get started
Learn more about Cloudforet through a user guide.
To use Cloudforet's services, the following three prerequisites must be met:
- User settings
- Project settings
- Service account settings
User settings
Cloudforet users are classified into three types: internal users, external users, and API users.
This section only introduces how to add internal users; how to add external users and API users is described in the [IAM] user guide.
Adding a user
(1) Click the [Create] button on the [Admin > Users] page.
(2) In the [Create user] modal, select the [Local] tab.
(2-1) After entering the ID, click the [Check ID] button to check if the ID is valid.
(2-2) After entering the name, email, and password to identify the user, click the [OK] button to complete the user creation.
Assign administrator's privileges
If you want to give a user administrator's privileges, you can assign them by selecting a role from the [Assign admin role] dropdown.
If nothing is selected, no privileges will be granted to that user.
For more detailed information on permissions, see here.
Project settings
Create a project and a project group for systematic resource management.
Creating a project group
Since a project must belong to one project group, you must first create a project group before creating a project.
(1) Click the [Create project group] button on the [Project] page.
(2) After entering the project group name in the [Create project group] modal dialog, click the [OK] button to create the project group.
Creating a project
After creating a project group, create a project that will belong to it.
(1) Select the previously created project group from the list of project groups on the left and click the [Create project] button at the top right.
(2) After entering the project name in the [Create project] modal dialog, click the [OK] button to create the project.
Inviting project group members
You can invite users to a project group to register as a Member of the project group.
Roles of project group members
Invited members must have one role for the project group. This role is equally applied to all project groups and projects that fall under that project group.
For more information, see here.
(1) Select the previously created project group from the [Project group] list on the left.
(2) Click the [Manage project group members] icon button at the top right.
(3) Click the [Invite] button on the [Manage project group members] page to open the [Invite members] modal dialog.
(3-1) Select the member you want to invite. You can select and invite multiple members at once.
(3-2) Select the role to be granted to the members to be invited.
Member's role
A project member can only be granted a user type role.
For a detailed description of role types, see here.
(3-3) After entering the labels for the members to invite, press the Enter key to add them.
(3-4) Click the [OK] button to complete member invitation.
Service account settings
A Service Account is the cloud service account required to collect resources from the cloud service.
Adding cloud service account
(1) On the [Asset Inventory > Service account] page, select the cloud service you want to add.
(2) Click the [Add] button.
(3) Fill out the service account creation form.
(3-1) Enter basic information.
(3-2) Specify the project to collect resources from according to the service account.
(3-3) Enter encryption key information.
(4) Click the [Save] button to complete.
Add account by cloud service
Account information required for each cloud service may differ. You can see detailed information from the link below:
- AWS (link)
- Azure (link)
- GCP (link)
- OCI (link)
- Alibaba Cloud (link)
After completing the above steps, if you want to use Cloudforet's services more conveniently and in a variety of ways, please see the following guide:
4.2 - Dashboards
You can create customized dashboards by combining specific widgets to gain a quick overview of your desired data in addition to the default provided dashboards. Furthermore, you can have precise control over variables, date ranges, and detailed options for each widget for each dashboard, allowing you to build and manage more accurate and professional dashboards tailored to your organization's requirements.
4.2.1 - Dashboard Templates
Monthly Cost Summary
This is a dashboard that visualizes cloud cost status and budget utilization based on various group-specific statistical criteria in the form of charts.
It is comprised of cost-related widgets, and among the dashboard variables, one specific Data-Source must be selected.
CDN & Traffic
This is a dashboard that charts the status of CDN and traffic-related costs and usage for a specific cloud product. One specific Data-Source must be selected.
Compliance Overview
This is a dashboard that visualizes compliance configuration audit and monitoring results. One specific Data-Source must be selected.
4.2.2 - Create Dashboard
Creating a new dashboard
You can create a dashboard following the steps below.
(1) To create a new dashboard, you can either click on [Dashboard > Create New Dashboard] in the top menu or click the [+] button at the top of the left-hand menu within the dashboard service to go to the creation page.
(2) On the "Create New Dashboard" page, select the dashboard scope and choose whether it should be public or not.
- Entire Workspace: The data for the entire workspace's projects will be displayed.
- Single Project: The dashboard will be configured using only the data from a chosen project.
(3) You can select from the default templates provided by Cloudforet or choose to duplicate an existing dashboard. After selecting your preferred options, click the [Continue] button.
(4) After entering the dashboard name, you can complete creating the dashboard using the provided widgets. For detailed editing instructions, refer to here.
(5) The created dashboard can be found on the [View All Dashboards] page, categorized based on [viewers] and [scope].
To review the created dashboard and make quick adjustments, please refer to here.
4.2.3 - Customize Dashboard
Customizing your dashboard
Switch to 'Customize' mode
Clicking the [Customize] button on the right side of a dashboard page will take you to the dashboard editing page.
Rename the dashboard
You can click the [Edit] icon button next to the dashboard title to make changes.
Manage labels
You can add or remove labels just below the dashboard title at the top. Labels are used to categorize and differentiate dashboard-related categories and features, making them useful for dashboard searches.
Apply a period range
(1) When you activate the [Date Range Selector] option from the right side panel, a dropdown button for setting the period will be displayed on the dashboard.
(2) You can select a specific month from the drop-down or choose a specific month within the last 3 years using the [Custom] menu.
Configure auto data refresh
You can choose the data refresh interval from the [Refresh] dropdown in the upper right corner of the dashboard.
Add widgets
(1) Click the [+ Add Widget] button on the right-hand dashboard editing page.
(2) Select a specific widget from the list on the left and add it.
(2-2) If you have selected a specific widget, you can enter the [Name] and set the detailed options.
e.g. In the Cost Map widget, the Project option is set to 'Inherit'; if you filter Project to 'Project A' at the dashboard level, the widget will display data relevant to 'Project A' only.
(2-3) If the options you want are not shown, you can click the [+ Add Option] button to add them.
(2-4) When you've finished the configuration, click the [Confirm] button to complete adding widgets to the dashboard.
Rearrange the widget order
You can change the order by dragging and dropping the widget name buttons in the widget list in the right panel.
Enlarge widget size
If you want to view a widget in full-screen, click the [Full Screen] icon button in the top right corner of the widget.
Edit a widget
(1) Click the [Edit] icon button in the top right corner of the widget to edit it.
(2) You can edit the widget name and options, then click the [Confirm] button to save your changes. However, if you don't [Save] the dashboard in [Customize] mode, the edited widget won't be reflected in the final version.
- For information on widget option settings, please refer to here.
4.2.4 - Review & Quick Configuration
Editing a dashboard's name or Deleting/Duplicating
Editing a dashboard's name
(1) Click the [Edit] icon button next to the dashboard name.
(2) After changing the dashboard's name, click the [Confirm] button.
Deleting
(1) Click the [Trash] icon button next to the dashboard name.
(2) Click the [Confirm] button in the [Delete Dashboard] modal to delete the dashboard.
Duplicating
(1) Click the [Clone] button on the right side of a dashboard.
(2) Select the following options and complete the cloning.
- Dashboard name
- Viewers: Select either Private or Public.
Managing labels
Label addition/removal can be done in the same way as described here.
Setting filters
When you set up filters, you can view the dashboard data filtered by the desired conditions.
(1) Choose the specific options of the desired item from the Variables section at the top of the dashboard.
(2) If there are differences from the previously saved settings, the [Save] button on the right side will be activated, allowing you to quickly save the changes.
(3) If you want to revert your changes to the most recently saved values while in the process of changing options, click the [Reset] button.
(4) Click the [+ More] > [Manage Variables] button to review all available variables or add custom variables.
(4-1) If you need to add custom variables, click the [+ Add Variable] button in the upper right corner of the [Manage Variables] window.
(4-2) After entering the basic information for the variable you want to add, click the [Save] button.
Setting the period
In the dashboard's top-right [Period] dropdown, you can select a specific month or choose a specific month within the last 3 years using the [Custom] menu.
Configuring auto data refresh
You can select the data refresh interval from the [Refresh] dropdown in the top right corner of the dashboard.
Viewing and editing widget settings
(1) Click the [Full Screen] icon button in the top-right corner of a specific widget.
(2) In full-screen mode, you can examine detailed widget data. The top filter options are the same as the dashboard variables, and you can explore each option to get a closer look at the widget's data.
(3) If detailed editing of the widget is required, click the [Edit Options] button in the top-right corner.
(4) Similar to [Customize] mode, you can edit and save the specific items of the widget.
- For information on widget option settings, please refer to here.
4.3 - Project
Create a Project group and a Project on the project page of Cloudforet, and invite your member.
4.3.1 - Project
A project must belong to a specific Project group, and there can be no more hierarchies below the project.
Invite a Member to the project and assign a Role that differentiates their access privilege to project resources.
Creating a project
(1) From the [Project group] list on the left side of the [Project] page, select a project group for which you will create a project.
(2) Click the [Create project] button at the top right.
(3) After entering a project name in the [Create project] modal dialog, click the [OK] button to create the project.
Viewing the project list
From the project list, you can easily check the resource status of the major categories of each project.
You can also enter a search word to see a list of project groups and projects that match your criteria.
Getting a list of all projects
You can view a list of all projects by selecting [All projects] from [Project groups] on the left.
Viewing a list of projects in a project group
You can select the project group you want from the [Project group] list on the left to view projects belonging to that group only.
If there are other project groups under the selected project group, the projects belonging to such other project groups are not displayed here.
- Project Group A
  - Project Group B
    - Project B-1
    - Project B-2
  - Project A-1
  - Project A-2
For example, in the above structure, if you select Project Group A, only Project A-1 and Project A-2 would be displayed in the list.
Exploring projects
Select a project from a list of projects to enter the project detail page.
Project Dashboard
In the [Summary information] tab, you can check the aggregated information of the resources belonging to the project through the project dashboard.
The project dashboard shows the status of resource usage and trends by category and region.
In addition, statistical information about the project in diverse formats through multiple widgets helps to manage resources more efficiently and minimize costs.
Below is a list of widgets on the project dashboard.
Project dashboard widget name | Description |
---|---|
Alert | Information on alerts that occurred in the project |
Cost | Cost information for the project |
Today's resource updates | Resource information updated from midnight local time to the present |
Cloud services | Information on major cloud services among the services |
AWS Personal Health Dashboard | Information on AWS Personal Health Dashboard |
AWS Trusted Advisor | Information on AWS Trusted Advisor |
Edit project
Changing project name
(1) Click the [Edit] icon button to the right of the project name.
(2) After entering the name to be changed in the [Change project] modal dialog, click the [OK] button to change the project name.
Managing project tags
You can manage it by adding tags to your project.
(1) Click the [Edit] button inside the [Tag] tab.
(2) Click the [Add Tag] button on the [Tag] page.
(3) Enter the value to be added in the form of 'key:value'.
(3-1) If you want to add more tags, click the [Add tag] button as many as the number of tags you want.
(4) Click the [Save] button to finish adding tags.
Deleting a project
(1) Click the [Delete] icon button to the right of the project name.
(2) Click the [OK] button in the [Delete project] modal dialog to delete the project.
4.3.2 - Member
Members are always assigned at least one role for each, which allows them to manage access to the project and project group.
- Roles of project group members are equally applied to all project groups and projects under that project group.
- Roles of project members are applied only to the corresponding projects.
- If member roles exist for multiple project groups in the upper hierarchy, the roles granted at each level are merged and then applied.
Manage project group members
You can manage members by entering the [Manage project group members] page.
(1) Select the project group whose members you want to manage from the [Project group] list on the left side of the [Project] page.
(2) Click the [Manage project group members] icon button at the top right.
(3) Enter a search word on the [Manage project group members] page to view a list of projects that meet the criteria, invite new members, or edit/delete members.
Inviting project group members
(1) Click the [Invite] button on the [Manage project group members] page to open the [Invite members] modal dialog.
(2) Select the member you want to invite. You can select and invite multiple members at once.
(3) Select the roles to be granted to members that you want to invite.
Roles of members
Roles that can be granted to members of the project can only be of a user type.
For detailed instructions, see here.
(4) After entering labels for members to invite, press the Enter key to add them.
(5) Click the [OK] button to complete member invitation.
Editing project group members
You can change the roles and labels granted to members for the project group.
(1) In the [Manage project group members] page, select the member you want to edit.
(2) Select [Edit] from the [Action] dropdown.
(3) In the [Change member information] modal dialog, enter the contents you want to change and click the [OK] button to complete the change.
Deleting project group members
(1) In the [Manage project group members] page, select the member you want to delete. Multiple selections are possible.
(2) Select [Delete] from the [Action] dropdown.
(3) Click the [OK] button in the [Remove member] modal dialog to remove the member.
Managing project members
You can manage members by entering the [Members] tab of the project detail page, and all methods and contents are the same as the managing project group members (link).
(1) On the [Project] page, select the project whose members you want to manage and go to the project detail page.
(2) Select the [Member] tab.
4.4 - Asset inventory
Cloud provider: refers to a cloud provider offering cloud services such as AWS, Google Cloud, Azure, etc.
Cloud service: refers to a cloud service that a cloud provider offers, as in the case of AWS EC2 Instance.
Cloud resource: refers to resources of cloud services, as in the case of servers of AWS EC2 Instance.
4.4.1 - Quick Start
Creating a service account
Add a cloud service account in the [Asset inventory > Service account] page.
(1) Select a cloud service to add.
(2) Click the [Add] button.
(3) Fill out the service account creation form.
(3-1) Enter basic information.
(3-2) Specify the project to collect resources from according to the service account.
(3-3) Enter encryption key information.
Creating a collector
On the [Asset Inventory > Collector] page, create a collector to collect resources.
(1) Click the [Create] button.
(2) Select the plugin to use when collecting resources.
(3) Fill out the collector creation form. (3-1) Enter basic information such as a name and a version.
(3-2) Add tags if necessary.
(4) Create a schedule for running the collector.
(4-1) On the [Asset inventory > Collector] page, select one collector from the table, and then click the [Add] button in the [Schedule] tab.
(4-2) In the [Add schedule] modal dialog, set the time to run the collector and click the [OK] button.
Verifying collected resources
You can view the collected resources in [Asset inventory > Cloud service].
4.4.2 - Cloud service
Viewing a list of cloud services
The cloud service page displays the status of cloud service usage by Provider.
Advanced Search and filter settings allow you to filter the list by refined criteria.
Choosing a Provider
Select a provider to view cloud services provided through a certain provider only.
Filter settings
You can search with more detailed conditions by setting service classification and region filters.
(1) Click the [Settings] button to open the [Filter Settings] modal dialog.
(2) After selecting the desired filter, click the [OK] button to apply it.
Exploring Cloud Service
You can check the details of certain cloud services on the cloud service detail page.
Click a card on the cloud service page to go to the detail page.
You can check detailed information about the selected cloud service in the cloud service list on the left.
Viewing a list of resources in cloud services
You can enter a search word to see a list of cloud resources that match your criteria.
See here for a detailed description of Advanced search.
Click the [Excel] icon button to [Export as an Excel file](/ko/docs/guides/advanced/excel-export) the list of resources, or click the [Settings] icon button to [Personalize table fields](/ko/docs/guides/advanced/custom-table).
Viewing the status of cloud service usage
You can check statistical information about the selected cloud service.
For more detailed information, click the [View chart] button on the right.
Opening cloud resources console
Sometimes you need to work in a console provided by a cloud resources provider.
(1) Select the cloud resource to which you want to connect the console.
(2) Click the [Console connection] button.
(3) By clicking the button, open the provider's console in a new tab where you can continue working with the cloud resource.
Below is an example of the AWS EC2 Instance console that was opened.
Exploring resources in cloud services
If you select an item you want to look at in the list of cloud resources, you can check information about that resource at the bottom.
- [Details](#check-cloud-resources-details)
- [Tag](#manage-cloud-resources-tag)
- [Associated member](#check-cloud-resource-associated-member)
- [Change history](#check-cloud-resource-associated-member)
- [Monitoring](#check-cloud-resource-monitoring-information)
Checking cloud resource details
Detailed information about the selected resource is displayed.
The information displayed here is divided into a Basic tab and a More information tab.
- Basic tab: This is provided as default in the cloud resources details, and the [Basic information] and [Original data] tabs are applicable.
- More information Tab: All tabs except the main tab are determined by the collector plugin that gathers resources. For detailed information, see here.
The image above is an example of cloud resources details.
Except for the [Basic information] tab and [Original data] tabs, all other tabs (AMI, Permissions, Tags) offer information added by the collector plugin.
Managing cloud resources tags
There are two types of tags for cloud resources: Managed and Custom.
For each cloud resource, you can either view the Managed tags added from the provider or add Custom tags.
Each tag, in the form of key: value, can be useful when searching for specific resources.
[ Viewing Managed Tags ]
- The Managed tags can't be directly edited or removed in Cloudforet.
[ Creating & Viewing Custom Tags ]
(1) Click the [Edit Custom Tags] button
(2) After entering the tag in the form of key:value on the tag page, click the [Save] button to complete this process.
Checking members associated with cloud resources
In the [Associated members] tab, you can check user information that meets the conditions below:
- A user who has access to the cloud resource as a Project member
Viewing history of changing cloud resources
In the [Change history] tab, you can quickly identify changes by date/time of the selected cloud resource.
(1) You can select a certain date or search for the content you want to check.
(2) When you click a certain key value or time period, you can check the details of the corresponding history of changes.
(2-1) Contents of changes: You can check the details of which key values of the resource were updated and how.
(2-2) Logs: Since detailed logs from providers such as AWS CloudTrail are supported, you can check which events occurred within or outside the selected time range. This is a great advantage when identifying users who have made changes to a particular resource.
You can check the detailed log by clicking the value key you want to check.
(2-3) Notes: By adding/managing notes at a selected time, you can freely manage the workflow for each company, such as which person in charge is related to the change, which process you will choose to solve the issue, etc.
Checking cloud resource monitoring information
The [Monitoring] tab shows various metrics for cloud resources.
You can also view metrics for different criteria by changing the [Time range] filter, or by selecting a different statistical method from the [Statistics] dropdown.
If you select multiple resources by clicking the checkbox on the left from the list of cloud resources at the top, you can compare and explore metric information for multiple resources.
Metrics information is collected by the Monitoring plugin, and for detailed information, see here.
4.4.3 - Server
Getting a list of server resources
You can check a list of server resources by entering the server page through the [Asset inventory > Server] menu.
Advanced search allows you to filter the list by elaborate criteria.
Click the [Excel] icon button to [Export as an Excel file](/ko/docs/guides/advanced/excel-export) the list of resources, or click the [Settings] icon button to [Personalize table fields](/ko/docs/guides/advanced/custom-table).
Opening the server resources console
Sometimes you need to work on a console site that a server resources provider offers.
(1) Select the server resource to which you want to connect the console.
(2) Click the [Console connection] button.
(3) Click the button to open the provider's console in a new tab where you can continue working with the server resource.
Below is an example of the AWS EC2 Instance console that was opened.
Explore server resources
If you select the item you want to look at from a list of server resources, you can check information about the resource at the bottom.
It is equivalent to the Explore cloud service resources in the [Asset inventory > Cloud service] menu.
4.4.4 - Collector
Overview
To collect data with a collector, you need two elements:
Collector plugin
This is an element that defines the specifications of what resources to collect from the Cloud provider, and how to display the collected data on the screen.
Since each provider has a different structure and content of data, a collector completely relies on Collector plugin to collect resources.
For detailed information on this, see here.
Service account
To collect resources, you need to connect to an account on the Cloud provider.
Service Account is your account information to link to your provider's account.
A collector accesses the provider account through the service account created for each provider.
For detailed information on this, see here.
Create a collector
(1) Click the [+ Create] button at the top left.
(2) Follow the steps on the "Create New Collector" page.
(2-1) On the Plugin List page, find a required collector plugin and click the [Select] button.
(2-2) Enter the name and version of the collector and click the [Continue] button.
(Depending on the collector, you may be required to select a specific cloud provider.)
Version and auto upgrade
Version refers to the version of the previously selected collector plugin, which can be chosen by disabling auto upgrade. In this case, the data is always collected with the specified version of the plugin.
On the other hand, if you enable auto upgrade, your data will always be collected with the latest version of the plugin.
(2-3) Select additional options for the collector and click the [Continue] button.
(2-3-1) Service Account: Select either "All" or Specific Service Accounts. If you choose "All," the service accounts associated with the provider related to the collector will be automatically selected for data collection.
(2-3-2) Additional Options: Depending on the collector, there may or may not be additional options to select.
(2-4) You can set up a schedule to automatically perform data collection (optional). Once you have completed all the steps, click the [Create New Collector] button to finalize the collector creation.
(2-5) Once collector is created, you can collect data immediately.
Get a list of collectors
You can view a list of all collectors that have been created on the collector page.
Advanced search allows you to filter the list by elaborate criteria. For a detailed explanation, see here.
View/Edit/Delete collector
(1) View Details
(1-1) Select a specific collector card from the list to navigate to its detailed page.
(1-2) You can view the basic information, schedule, additional options, and attached service accounts.
(2) Edit or Delete
(2-1) Click on the [Edit] icon at the top and modify the collector name.
(2-2) If you need to edit details such as base information, schedule, additional options or service accounts, click the [Edit] button in each area.
(2-3) After making the changes, click the [Save Changes] button to complete the modification.
(2-4) If you need to delete a collector, click the [Trash] icon on the top.
Set up automated data collection
After creating a collector, you can still modify the automated data collection schedule for each individual collector.
(1) In the collector list page, you can enable or disable automated data collection for each collector by using the schedule toggle button(Switch On/Off) in the collector card section. You can quickly set and modify the frequency by clicking the [Edit] button.
(2) You can also navigate to the detailed page of each collector and change the schedule.
Start data collection immediately
You can collect data on a one-time basis without setting up automated data collection.
It allows data collection to take place even when the collector does not have an automated data collection schedule.
Data collection works in two ways:
Collect data for all attached service accounts
Collector needs account information from a Provider for data collection, which is registered through Service account.
(1) Click on [Collect Data]
(Collector list Page) Hover over the collector card area for data collection, and then click the [Collect Data] button.
(Collector Detail Page) Click the [Collect Data] button located in the top right corner of the detailed page.
(2) Proceed with data collection.
(3) Whether or not the collector has completed data collection can be checked in Collector history. You can click the [View details] link of a selected collector to go to that page.
Collect data for a single service account
When collecting data with a collector, you may only collect data from a specific cloud provider's account.
(1) Select a collector from the collector list page, and go to detail page.
(2) You can find the list of attached service accounts on the bottom of detail page.
Service account
Service account has access information for the provider account required for data collection.
If no information can be found here, this means there is no account information for accessing the provider, and as a result, no data collection occurs even when the collector is running.
Therefore, to collect data with a collector, you must first register the account information of the provider in the [Service account] menu.
(3) To start data collection, click the [Collect Data] button on the right side of the service account for which you want to collect data.
Checking data collection history
You can check your data collection history on the Collector history page.
You can go to the collector history page by clicking the [Collector history] button at the top of the collector page.
Checking the details of data collection history
If you select a collection history from the list of data collections above, you will be taken to the collection history details page.
You can check data collection status, basic information, and Collection history by service account.
Checking collection history for each service account
When you run the collector, each collection is performed for each associated service account.
Here you can find information about how the collection was performed by the service account.
Key field Information
- Created Count: The number of newly added resources
- Updated Count: The number of imported resources
- Disconnected Count: The number of resources that were not fetched
- Deleted Count: Number of deleted resources (in case of a resource failing to fetch multiple times, it is considered deleted.)
Check the content of collection errors
(1) Select the item you want to check for error details from a list of collections for each account.
(2) You can check the details of errors in the [Error list] tab below.
4.4.5 - Service account
Add service account
There are two types of service accounts for different needs and better security.
General Account:
- Option 1) Create an account with its own credentials.
- Option 2) Create an account using credentials from an existing Trusted Account.
- Option 3) Create an account without credentials.
Trusted Account:
You can create an account that enables trusted access; other General Accounts can then refer to its credential key by attaching it.
Create General Account
(1) On the [Asset inventory > Service account] page, select the cloud service you want to add.
(2) Click the [Add] button.
(3) Fill out the service account creation form.
(3-1) Select General Account.
(3-2) Enter basic information.
(3-3) Specify the project to collect resources from according to the service account.
(3-4) Enter encryption key information.
Option 1) Create an account with its own credentials.
Option 2) Create an account using credentials from an existing Trusted Account. In the case of AWS, you can easily create an Assume Role by attaching an existing Trusted Account. If you select a certain Trusted Account, its credential key is automatically inserted, and you only need to enter the rest of the information.
Option 3) Create an account without credentials.
(4) Click the [Save] button to complete.
Create Trusted Account
(1) On the [Asset inventory > Service account] page, select the cloud service you want to add.
(2) Click the [Add] button.
(3) Fill out the service account creation form.
(3-1) Select Trusted Account.
(3-2) Enter basic information.
(3-3) Specify the project to collect resources from according to the service account.
(3-4) Enter encryption key information.
(4) Click the [Save] button to complete.
Viewing service account
You can view a list of service accounts that have been added, and when you click a certain account, you can check the detailed information.
Editing service account
Select a service account you want to edit from the list.
Editing each part
You can edit each part of detail information by clicking [Edit] button.
Removing service account
Select a service account you want to remove from the list.
You can delete it by clicking the delete icon button.
If the service account is a Trusted Account type and is currently attached to one or more General Accounts, it cannot be removed.
4.5 - Cost Explorer
You can check the amount used in each period against a Budget set by a user, and Budget usage notifications can also be set up.
4.5.1 - Cost analysis
By grouping or filtering data based on diverse conditions, you can view the desired cost data at a glance.
Verifying cost analysis
Selecting a data source
If you have more than one billing data source connected, you can perform a detailed cost analysis by selecting each data source from the "Cost Analysis" section in the left menu.
Selecting the granularity
Granularity is criteria set for how data will be displayed. The form of the provided chart or table varies depending on the detailed criteria.
- Daily: You can review daily accumulated data for a specific month.
- Monthly: You can check monthly data for a specific period (up to 12 months).
- Yearly: You can examine yearly data for the most recent three years.
Selecting the period
The available options in the "Period" menu vary depending on the granularity you choose. You can select a menu from the [Period] dropdown or configure it directly through the "Custom" menu.
Group-by settings
You can select one or more group-by options. The chart displays only one selected group-by at a time, while the table shows all of the group-bys you select.
Filter settings
Filters, similar to group-by, can be selected one or more at a time, and your configured values are used for filtering with an "AND" condition.
(1) Click the [Filter] button at the top of the page.
(2) When the "Filter Settings" window opens, you can choose the desired filters, and the selections will be immediately reflected in the chart and table.
Creating/managing custom cost analysis
Creating a custom analysis page
To alleviate the inconvenience of having to reset granularity and period every time you enter the "Cost Analysis" page, a feature is provided that allows you to save frequently used settings as custom analysis pages.
(1) Click the [Save As] button in the upper-right corner of a specific cost analysis page.
(2) After entering a name and clicking the [Confirm] button, a new analysis page is created.
(3) Custom cost analysis pages can be saved with settings like name, filters, group-by, etc., directly using the [Save] option, and just like the default analysis pages, you can also create new pages by using [Save As].
Editing the custom analysis name
You can edit the name by clicking the [Edit] button at the top of the page.
Deleting the custom analysis page
You can delete the page by clicking the [Delete] button at the top of the page.
4.5.2 - Budget
Creating a budget
(1) Click the [Create budget] button at the top right of the [Cost Explorer > Budget] page.
(2) Enter basic information
(2-1) Enter the name of the budget.
(2-2) Select a billing data source.
(2-3) Select the project to be the target of budget management in the target item.
(2-4) Select the cost incurring criteria. If you select all as the cost type, all cost data related to the corresponding project will be imported.
(3) Enter the budget plan
(3-1) Set a period for managing the budget.
(3-2) Choose how you want to manage your budget.
(3-3) Set the budget amount. If you selected Set total budget, enter the total budget amount. If you selected Set monthly budget, enter the monthly budget amount.
Check the set budget and usage status
The budget page provides a summary of your budget data and an overview of your budget for each project at a glance. You can use filters at the top to specify a period or apply an exchange rate, and you can search for a specific project or name using an advanced search.
Budget detail page
On the budget detail page, you can view specific data for the created budget.
Budget summary
Under [Budget summary], you can check the monthly budget and cost trends through charts and tables.
Set budget usage notifications
In [Budget usage notification settings], you can adjust the settings to receive a notification when the budget has been used up over a certain threshold. When the budget amount used goes over a certain percentage or the actual amount exceeds a certain amount, you can receive a notification through the notifications channel registered in advance.
4.6 - Alert manager
4.6.1 - Quick Start
Creating alerts
Alerts can be created in two ways:
- Create an alert manually in the Cloudforet console.
- Create alerts automatically through an external monitoring service connection
Creating an alert manually from a console
(1) Go to the [Alert manager > Alert] page and click the [Create] button.
(2) When the [Create alert] modal dialog opens, fill in the input form.
(2-1) Enter an [Alert title] and select [Urgency].
(2-2) Designate the project for which the alert occurred.
(2-3) Write [Comment] if an additional explanation is needed.
(3) Click the [OK] button to complete alert creation.
Connecting to an external monitoring service to receive alerts
When an external monitoring service is connected, an event message occurring in the service is automatically generated as an alert.
To receive alerts from the external monitoring, Webhook creation and Connection settings are required.
Webhook creation is performed in the Cloudforet console, but Connection settings must be done directly in the Cloud Service console that provides external monitoring services.
For more on how to connect an external monitoring service, see here.
Creating a webhook
To receive event messages from an external monitoring service, you need to create a webhook.
Webhooks can be created on the project detail page.
(1) Go to the [Alerts] tab of the project detail page and select the [Webhook] tab.
(2) Click the [Add] button.
(3) Write a name in an [Add webhook] modal dialog and select the plug-in of the external monitoring service to be connected.
(4) Click the [OK] button to complete set up.
Escalation policy settings
Whether an alert received via a webhook is sent as a notification to project members is determined by escalation policy.
(1) Inside the [Alert] tab of the project detail page, move to the [Settings] tab.
(2) Click the [Change] button in the escalation policy area.
(3) After selecting the [Create new policy] tab, enter the settings to create an escalation policy.
Policy | Description |
---|---|
Exit condition (status) | Define the condition to stop the generated alarm. |
Range | Indicate the scope in which escalation policy can be used. In case of "global," the policy can be used in all projects within the domain, and in case of "project," within the specified project. |
Escalation Rules | All levels from LV1 to LV5 can be added. Alerts are sent to a notifications channel belonging to a set level, and a period between steps can be given in minutes from step 2 or higher. |
Number of repetitions | Define how many times to repeat an alert notification. Notifications can be repeated up to 9 times. |
Project (if you create it from the escalation rules page) | If the scope is a project, this indicates the project being targeted. |
(4) When all settings are completed, click the [OK] button to create the escalation policy.
Notifications settings
In the [Notification] tab of the project detail page, you can decide whether or not to Create a notifications channel and enable it.
Notifications channel is a unit that expresses the systematic recipient area, including the method and level of notifications transmission. It helps to transmit alerts according to the level set in the escalation rule.
(1) On the project detail page, select the [Notification] tab and click the [Add channel] button of the desired notifications channel.
(2) On the notification creation page, enter the settings to create a notifications channel.
(2-1) Enter the basic information about the notifications channel you want to create, such as the required channel name and notification level. The [Channel name] and [Notification level] comprise the basic setting fields, and afterward, the remaining fields receive different information per channel.
(2-2) You can set a schedule to receive notifications only at certain times.
(2-3) Notifications can be received when an alert occurs or when a threshold for budget notifications was reached. You can set the occasions when you receive notifications in [Topic].
(3) Click the [Save] button to complete the notifications channel creation.
(4) Notifications channels that have been created can be checked at the bottom of the [Notification] tab.
You can control whether to activate the corresponding notifications channel through the toggle button at the top left. Even if there is a level set up under the escalation policy, without activating the notifications channel, notifications will not go out.
4.6.2 - Dashboard
You can check alerts for each of the three main parts, as follows:
Check alerts by state
At the top of the dashboard, you can view alerts by State.
Click each item to go to the Alert details page, where you can check detailed information or implement detailed settings.
Alerts history
The alert history that has occurred in projects is displayed.
You can see the daily data on the chart, and the increase/decrease in alerts on the card compared to the previous month.
Project dashboard
[Project dashboard] shows the alert information of each project related to a user.
In the case of [Top 5 project activities], projects are displayed in the order of having the most alerts in the [Open] state.
At the bottom of the search bar, the alerted projects are displayed in the order of highest activity.
Only projects marked with an issue status are visible, and when all the alerts reach a cleared status, they are changed to normal status and are no longer visible on the dashboard.
4.6.3 - Alert
State
Alerts have one of the following states:
State | Description |
---|---|
OK | State in which an alert has been assigned and is being processed |
Created | State in which alert was first registered |
Resolved | State in which the contents of alerts, such as faults or inspections, have been resolved |
Error | State in which an event has been received through webhook connections but alerts were not normally registered due to error |
Urgency
There are two urgency levels for alerts in Cloudforet: high and low.
When an alert is created manually, you choose one of these two levels; when it is created automatically through a webhook connection, the urgency is determined according to Severity.
Severity
Severity indicates the intensity of the risk of an event coming from an external monitoring webhook.
There are five severity levels: critical, error, warning, info, and not_available. When creating alerts from them, Cloudforet sets the urgency level based on the following criteria:
- High: critical, error, and not_available
- Low: warning and info
Creating alerts
Alerts can be created in two ways:
- Manual creation: create an alert manually in the Cloudforet console.
- Auto generation: create a webhook and receive events from an external monitoring service connected to it. An alert is automatically generated by refining the received event message.
Creating an alert manually from a console
(1) Go to the [Alert manager > Alerts] page and click the [Create] button.
(2) When the [Create alert] modal dialog opens, fill in the input form.
(2-1) Enter an [Alert title] and select [Urgency].
(2-2) Designate the project for which the alert occurred.
(2-3) Write [Comment] if an additional explanation is needed.
(3) Click the [OK] button to complete alert creation.
Connecting to an external monitoring service to receive alerts
When an external monitoring service is connected, an event message occurring in the service is automatically generated as an alert.
To receive alerts from the external monitoring, Webhook creation and Connection settings are required.
Webhook creation is performed in the Cloudforet console, but Connection settings must be done directly in the Cloud Service console that provides external monitoring services.
For more on how to connect an external monitoring service, see here.
Creating a webhook
To receive event messages from an external monitoring service, you need to create a webhook.
Webhooks can be created on the project detail page.
(1) Go to the [Alerts] tab of the project detail page and select the [Webhook] tab.
(2) Click the [Add] button.
(3) Write a name in an [Add webhook] modal dialog and select the plug-in of the external monitoring service to be connected.
(4) Click the [OK] button to complete set up.
Using Alerts
Let's take a brief look at various ways to use the alert features in Cloudforet.
- Notifications channel: set up how and when to send alerts to which users.
- Escalation policy: apply step-by-step rules to effectively forward received alerts to project members.
- Event rules: events received through webhooks are turned into alerts according to the conditions you define.
- Maintenance window: register regular and irregular system maintenance schedules to announce the work and block alerts that occur during those periods.
Getting a list of alerts
You can view alerts from all projects on the [Alert manager > Alerts] page.
You can search for alerts or change the state of an alert.
Searching for alerts
You can enter a search term to see a list of alerts that match your criteria and click the title of an alert you want to check on an alert detail page.
Also, the built-in filtering feature makes it convenient to filter alerts.
For a detailed description on advanced search, see here.
Changing alert state in lists
You can edit an alert state right from the list.
(1) Select an alert to edit the state, and click the desired button from among [OK], [Resolved], and [Delete] in the upper right corner.
(1-1) Click the [OK] button to change the state to OK
The OK state means that the alert has been assigned and is being processed by a person in charge.
As soon as you change the state, you can set the person in charge of the selected alert to yourself; click the [OK] button to complete.
(1-2) Click the [Resolved] button to change the state to resolved
The resolved state means that the issue that caused the alert has been processed.
You can write a note as soon as the state changes; click the [OK] button to complete.
(1-3) Click the [Delete] button to delete an alert
You can check the alert list to be deleted once again, and click the [OK] button to delete it.
Viewing alerts
You can view and manage details and alert history on the alert detail page.
Items | Description |
---|---|
Duration | Time during which an alert lasted |
Description | As a description of an alert, the content written by a user or that of an event received from an external monitoring service |
Rules | Conditions alerted by an external monitoring service |
Severity | Level of seriousness of data received from a webhook event |
Escalation policy | Applied escalation policy |
Project | Alerted project(s) |
Create | Monitoring services that sent alerts |
Resource name | Alert occurrence target |
Renaming and deleting alerts
You can change the name of an alert or delete an alert through the [Edit] and [Delete] icon buttons for each.
Changing state/urgency
State and urgency can be easily changed via the dropdown menus.
Changing the person in charge
(1) Click the [Assign] button.
(2) Select a person in mind and click the [OK] button to complete the assignment of the person in charge.
Editing description
Only users with an administrative role for the alert can edit it.
(1) Click the [Edit] button.
(2) Write changes through a form in an alert description field and click the [Save changes] button to complete such changes.
Changing a project
You can change the project linked with an alert.
(1) Click the [Change] button to change a project.
(2) After selecting a project from a [Select project] dropdown menu, click the [Save changes] button to complete the project change.
Updating to a new state
By recording the progress in the state of alerts field, you can quickly grasp their state.
If you change the content, the previous state history will be lost.
(1) Click the [New update] button.
(2) Input the state in the [New state update] modal dialog, and click the [OK] button to complete the state update.
Adding recipients
Alerts are sent to recipients via Escalation policy.
If you need to send an alert to additional users for that alert, set up [Additional recipients].
You can view and search a list of available users by clicking the search bar, where multiple selections are possible.
Adding notes
Members can communicate by leaving comments on alerts, registering inquiries and answers to those inquiries during processing.
Viewing occurred events
You can view history by logging events that occurred in one alert.
If you click one event from a list, you can view the details of that event.
Notification policy settings
You can set notifications to be sent only when the urgency of an alert that has occurred in the project is urgent.
(1) Inside the [Alerts] tab of the project detail page, go to the [Settings] tab.
(2) Click the [Edit] icon button in the notification policy area.
(3) Select the desired notification policy.
(4) Click the [OK] button to complete policy settings.
Auto recovery settings
The auto recovery feature automatically places an alert into the resolved state when the system is recovered.
How auto recovery works
When an alert in a project with auto recovery configured receives an additional event whose event type value is recovery, the alert state is automatically switched to resolved.
(1) Inside the [Alerts] tab on the project detail page, move to the [Settings] tab.
(2) Click the [Edit] icon button in the auto recovery area.
(3) Select the desired auto recovery settings.
(4) Click the [OK] button to complete auto recovery settings
4.6.4 - Webhook
Creating a webhook
To receive event messages from an external monitoring service, you need to create a webhook.
Webhooks can be created on the project detail page.
(1) Go to the [Alerts] tab of the project detail page and select the [Webhook] tab.
(2) Click the [Add] button.
(3) Write a name in an [Add webhook] modal dialog and select the plug-in of the external monitoring service to be connected.
(4) Click the [OK] button to complete set up.
Connect external monitoring service
To use a webhook, you should connect to an external monitoring service through the URL of the created webhook.
For more on how to connect an external monitoring service, see here.
Getting a list of webhooks
Advanced search
You can enter a search word in the search bar to see a list of webhooks that match your criteria. For a detailed description on advanced search, see here.
Editing and deleting webhook
You can enable, disable, change, or delete a webhook viewed from the list.
Enabling/disabling a webhook
If you enable a webhook, you can receive events from an external monitoring service connected to the webhook at Alerts.
On the contrary, if you disable a webhook, incoming events are ignored and no alerts are raised.
(1) Select the webhook to enable and choose the [Enable]/[Disable] menu from the [Action] dropdown.
(2) Check the content in the [Enable/disable a webhook] modal dialog and click the [OK] button.
Renaming a webhook
(1) Select the webhook to change from the webhook list, and select the [Change] menu from the [Action] dropdown.
(2) Write a name to be changed and click the [OK] button to complete the change.
Deleting a webhook
(1) Select the webhook to delete from the webhook list, and choose the [Delete] menu from the [Action] dropdown.
(2) After entering the accurate name of the selected webhook, click the [Delete] button to delete the webhook.
4.6.5 - Event rule
Event rules are project dependent and can be managed on the project detail page.
Create event rules
(1) In the [Settings] tab found in the [Alert] tab of the project detail page, click the [Edit] button of the event rule.
(2) Click the [Add event rule] button.
(3) Enter the desired setting values on the event rule page.
(3-1) Set the conditions to perform additional actions on the received alert.
At least one condition must be written, and you can add conditions by clicking the [Add] button on the right or delete them by clicking the [Delete] icon button.
(3-2) Specify the action to be performed on the alert that meets the conditions defined above.
List of event rules settings
Property | Description |
---|---|
Stop notifications | Suppress Notification for alerts for the corresponding conditions |
Project routing | Alerts of the corresponding conditions are not received by current project but by project selected under project routing (no alert is created in the current project) |
Project Dependencies | Alerts of the corresponding conditions can be viewed from the projects registered in project dependency. |
Urgency | Automatically assign urgency to alerts of the corresponding conditions. High, Low, or None-set can be specified; in case of None-set, rules are applied as follows: • External monitoring alert: urgency of the object • Direct creation: High (default) |
Person in charge | Automatically assign a person in charge of the alert for the corresponding condition(s) |
Additional recipients | When Notification occurs with the alert of the corresponding condition(s), send a notification to specified users together |
Additional information | Automatically add information to alerts for the corresponding conditions |
Stop executing further actions | If the event rule is executed, subsequent event rules are ignored (See Ways and order of event rules action) |
Edit event rules
(1) Click the [Edit] button on the event rules page.
(2) Enter the setting values you want for the event rule.
(3) Click the [Save] button to complete editing the event rules.
Delete event rules
(1) Click the [Delete] button on the event rules page.
(2) In the [Delete event rule] modal dialog, click the [OK] button to complete the deletion.
Ways and order of event rules action
Event rules set by a user for when an alert occurs will be executed sequentially.
If event rules are created as in the example above, they are executed in the order of [#1], [#2], etc., starting from the highest event rule.
You can easily change the order of the event rules by clicking the [↑] and [↓] buttons.
4.6.6 - Maintenance window
Setting a Maintenance window allows you to block sending notifications during that period.
The maintenance window is project dependent and can be managed on the project detail page.
Create maintenance window
(1) Click the [Create maintenance window] button at the top right of the project detail page.
(2) Enter a [Title] for a maintenance window and set the schedule to limit the occurrence of the alert.
When you set the schedule, you can start right away or have it start at a scheduled time.
Select the [Start and end now] option if you want to start immediately, or the [Start at scheduled time] option if you want to schedule an upcoming task.
(3) Click the [OK] button to complete the creation.
Edit maintenance window
You can only edit maintenance windows that have not yet ended.
(1) Select the [Maintenance window] tab under the [Alerts] tab on the project detail page.
(2) Select the object you want to edit and click the [Edit] button.
(3) After changing the desired items, click the [OK] button to complete.
Closing maintenance window
(1) Select the [Maintenance window] tab under the [Alerts] tab on the project detail page.
(2) Select the object to be edited and click the [Exit] button to exit.
4.6.7 - Notification
Notifications are a means to deliver alerts.
In the Notifications channel page, you can set up how and when to send alerts to which users.
The notifications channel is project dependent and can be managed on the project detail page.
Creating a notifications channel
In the [Notification] tab of the project detail page, you can decide whether or not to Create a notifications channel and enable it.
Notifications channel is a unit that expresses the systematic recipient area, including the method and level of notifications transmission. It helps to transmit alerts according to the level set in the escalation rule.
(1) On the project detail page, select the [Notification] tab and click the [Add channel] button of the desired notifications channel.
(2) On the notification creation page, enter the settings to create a notifications channel.
(2-1) Enter the basic information about the notifications channel you want to create, such as the required channel name and notification level. The [Channel name] and [Notification level] comprise the basic setting fields, and afterward, the remaining fields receive different information per channel.
Notification level
Notification levels correlate to the escalation policy that defines rules for spreading alerts.
Based on the notification level specified in the escalation policy, the alert is spread to the notifications channel belonging to that level.
(2-2) You can set a schedule to receive notifications only at certain times.
(2-3) Notifications can be received when an alert occurs or when a threshold for budget notifications was reached. By setting up topics, you can choose which notifications you want to receive.
If you select [Receive all notifications], you will receive both types of notifications, and if you select [Receive notifications on selected topics], you will receive only notifications related to what you selected.
(3) Click the [Save] button to complete the notifications channel creation.
Editing and deleting the notifications channel
Editing the notifications channel
Created notifications channels can be checked under each notifications channel selection.
You can change the active/inactive status through the toggle button at the top left, and you can edit each item by clicking the [Edit] button of each notifications channel.
When you complete inputting the information, click the [Save changes] button to complete the editing.
Deleting the notifications channel
You can delete the notifications channels by clicking the [Delete icon] button in the upper right corner.
Cloudforet user channel
The [Add Cloudforet user channel] button exists in the [Notifications channel] item in the project.
If you add a Cloudforet user channel, an alert is spread to the personal channels of project members. Afterward, alerts are forwarded via the Cloudforet user notifications channel of the user who has received it.
Creating a Cloudforet user notifications channel
A user notifications channel can be created in [My page > Notifications channel].
Unlike creating a project notifications channel, there are no notification level settings, and other creation procedures are the same as Creating a project notifications channel.
4.6.8 - Escalation policy
By applying stage-by-stage rules to alerts through escalation policies, alerts that have been received are effectively sent to members of the project.
Each rule has a set level, and an alert is spread to the corresponding notifications channel for each level.
Whether an alert received via a webhook is to be sent as a notification to project members is determined by Escalation policy.
Escalation policy can be managed in two places:
- [Alert manager > Escalation policy] page: Manage escalation policies under the scope of global and project
- [Project] detail page: Manage escalation policies under the scope of project
Create escalation policy
If you are a user with manage permission on the [Escalation policy] page, you can create an escalation policy.
Create in an [Escalation policy] page
(1) Click the [Create] button on the [Alert manager > Escalation policy] page.
(2) Enter the settings to create an escalation policy.
Policy | Description |
---|---|
Exit condition (status) | Define the condition to stop the generated alarm. |
Range | Indicate the scope in which the escalation policy can be used. In case of global, the policy can be used in all projects within the domain, and in case of project, within the specified project. |
Project | Scope defined as project indicates the project being targeted. |
Escalation rules | Define rules for sending step-by-step notifications. Alerts are sent to a notifications channel belonging to a set level, and a period between steps can be given in minutes from step 2 or higher. |
Number of repetitions | Define how many times to repeat an alert notification. Notifications can be repeated up to 9 times. |
When creating such items on the [Project] detail page, a project is automatically selected for the scope, and the project is designated as the target.
Create in a [Project] detail page
When you create an escalation policy on the [Project] detail page, the project is automatically designated as an escalation policy target.
(1) Inside the [Alert] tab of the project detail page, go to the [Settings] tab.
(2) Click the [Change] button in the escalation policy area.
(3) Click the [Create new policy] tab.
(4) Enter settings to create an escalation policy.
Level
A level is the transmission range to which an alert is sent at each stage when alerts are sent step by step.
You can set up a notifications channel in the project, and each notifications channel has its own level.
When defining the escalation rule, you set the [Notification level]. At each set stage, an alert is sent to the notifications channel of the corresponding level.
(5) When all settings are completed, click the [OK] button to create the escalation policy.
Set as default policy
After selecting one from the list of escalation policies, you can set it up as a default by selecting the [Set as default] menu from the [Action] dropdown.
When a new project is created and the alert is activated, the corresponding policy is automatically applied.
Only policies whose scope is global can be selected through the [Set as default] menu.
Modify and delete escalation policy
Once you select a target from the escalation policy list, [Modify] and [Delete] become available from the [Action] dropdown.
Edit
In the case of editing, you can use the same form as a modal dialog that is created when the [Create] button is clicked, and all items except the range can be edited.
Delete
In case of deletion, you can proceed with deletion through the confirmation modal dialog as shown below:
4.7 - Administration
You can create a User and designate a Role that is connected to an API policy.
4.7.1 - [IAM] User
You can also grant permissions to users by assigning them roles.
Roles are divided into the admin type and the user type; a user type role can be assigned to a member of a project.
For how to assign roles to project members, see here.
Adding users
Click the [+ Add] button on the [Administration > IAM > User] page.
There are three types of users that can be added as follows:
- Internal user: users who can sign in by using their ID and password on the login page
- External user: users added by following the external user authentication that the domain has
- API Only: users who are only able to use API, and for whom the Cloudforet console is not accessible
1. Adding internal users
Internal users are users who can sign in by using their IDs and passwords on the login page.
(1-1) After the [Add user] modal dialog opens, select the [Local] tab to add an internal user.
(1-2) After entering the ID of an internal user, click the [Check ID] button. The user ID must be in an email form, and not on the list of existing users.
(1-3) Optionally enter the user name and a notification email (for receiving important system-related announcements or a password reset link).
(1-4) Either send the user a password reset link or set the password on the user's behalf. (※ If you set the password manually, you will need to inform the user of the password directly.)
(1-5) To assign admin role to the user, you can activate the 'Admin Role' section at the bottom of the modal window and grant a specific role.
(1-6) Click the [Confirm] button to complete the user addition.
2. Adding external users
Adding an external user follows the external user authentication that the domain has. Without authentication as an external user, one cannot be added as a user.
(2-1) After opening the [Add User] modal, select a specific SSO tab for adding external users (e.g., Google OAuth).
(2-2) Enter an existing authenticated external user account.
(2-3) Optionally enter the user name and a notification email (for receiving important system-related announcements or a password reset link).
(2-4) To assign admin role to the user, you can activate the 'Admin Role' section at the bottom of the modal window and grant a specific role.
(2-5) Click the [Confirm] button to complete the user addition.
3. Adding API only users
API users cannot access the Cloudforet console and can only use the API.
(3-1) After the [Add user] modal dialog opens, select the [API Only] tab.
(3-2) After entering the ID, click the [Check ID] button. The user ID must not be on the list of existing users.
(3-3) Optionally enter user name.
(3-4) To assign admin role to the user, you can activate the 'Admin Role' section at the bottom of the modal window and grant a specific role.
(3-5) Click the [Confirm] button to complete the user addition.
Viewing user details
By selecting a specific user from the table on the user page, you can view detailed information on that user.
Updating users
By selecting a specific user in the table and clicking on [Actions > Edit], you can modify the user's information.
- You can modify the user's ID, name, notification email, password, admin role (role), and tags.
- If the user encounters difficulties with verification for the notification email, you can directly verify it without sending verification code.
- For local users, you can either change the password on their behalf or send them a password reset link for the user to reset it themselves.
4.8 - My page
4.8.1 - Account & profile
[My page] can be accessed through the submenu that appears when you click the icon on the far right of the top menu.
Changing settings
You can change your name, time zone, and language settings on the [My page > Account & profile] page.
Verifying Notification Email
You can enter and verify Notification Email. If your Notification Email has not been verified yet, you won't be able to receive important system notifications or password reset link.
Changing the password
If you are an internal user (a user signed in with ID/password), you can change your password on this page.
4.8.2 - Notifications channel
Creating notifications
On the [My page > Notifications channel] page, there is an [Add channel] button for each protocol.
As you click the [Add channel] button, you will enter the following page. The input form for basic information is different for each protocol, whereas the channel name, notification schedules, and selection boxes for topics able to subscribe to are the same for all protocols.
If you select anytime as the schedule, you can receive notifications at any time. If you select set time, you can select the desired day and time.
You can also select an option to receive all notifications for topics, or receive notifications only for the topic you select between alert and budget.
Verifying the created notifications channel
When you fill out all input forms and create a notifications channel, you can check the newly created channel as follows:
Editing the notifications channel
Notification channels you create can be edited directly from the list.
In the case of a protocol that can edit the entered data (e.g. SMS, voice call), data, channel name(s), schedules, and topics can all be edited. For protocols where data cannot be edited (e.g., Slack, Telegram), the [Edit] button is not active.
4.9 - Information
4.9.1 - Notice
Verifying notices
(1) Quick check for recent notices: After clicking the notification button on the top menu, click the [Notice] tab to check the recently registered notices.
(2) Check the full list: You can move to the full list of notices page through the submenu that appears when you click the icon on the far right of the top menu.
Registering notice
A user with a role whose type is [Admin] is permitted to directly create announcements within a related domain.
(1) Enter the [Notice] page, and click the [Register new notice] button to write a new post.
- The updated notice is open to all users assigned a specific role within a related domain.
(2) The updated notice can be [modified] or [deleted] later.
4.10 - Advanced feature
4.10.1 - Custom table
If you click the [Settings] icon button from the table, you can directly set up the table fields.
Getting field properties
You can sort fields by suggestion/alphabet or search by field name. You can also search by the tag field that you have.
Selecting/deselecting fields
Fields can be freely deselected/selected from the field table. Select the desired field and click the [OK] button.
Sorting fields
Auto sort
If you click the [Recommended order] or [Alphabetical order] button at the top of the field table, the fields are sorted by the corresponding condition. The sorting only applies to the selected field.
Manual sorting
You can manually sort fields by dragging and dropping the [Reorder] icon button to the right of the selected field.
Reverting to default settings
If you want to revert a custom field to its default settings, click the [Return to Default] button.
4.10.2 - Export as an Excel file
Click the [Export as an Excel file] icon button from the table.
The data downloaded to Excel is as follows, and if you set it up to show only some fields as a custom table, you can see the data of that field only:
4.10.3 - Search
There are two ways to use the search bar from the data tables: advanced and keyword searches.
Advanced search
The search field provided by SpaceONE makes data searches much more convenient. All searchable field names appear as you hover your mouse cursor over the search bar.
After selecting a field, you can manually enter a value for that field or choose it from a list of suggestions.
Keyword search
Use the keyword search if you want to search all fields rather than limit your search to a specific field. If you type the text in the search bar and press the enter key, the data containing the keyword is filtered in and displayed in the table.
You can use both advanced and keyword searches together, and multiple searches are possible. Data is filtered with an "OR" condition, so a row is displayed in the table if any of the searched field values match.
4.11 - Plugin
4.11.1 - [Alert manager] notification
Overview
Cloudforet provides plugins as a notification method to deliver alerts to users.
For a list of plugins currently supported by Cloudforet, see the Plugin support list.
You can see more detailed descriptions of the Telegram and Slack connections via the links below.
In addition, Email, SMS, and Voice call are available without any additional settings.
Plugin support list
Plugins | Setup guide link |
---|---|
Telegram | https://github.com/cloudforet-io/plugin-telegram-noti-protocol/blob/master/docs/ko/GUIDE.md |
Slack | https://github.com/cloudforet-io/plugin-slack-noti-protocol/blob/master/docs/ko/GUIDE.md |
Email | Can be used without additional settings |
SMS | Can be used without additional settings |
Voice call | Can be used without additional settings |
4.11.2 - [Alert manager] webhook
Overview
Cloudforet supports plugin-type webhooks so that you can receive event messages from various monitoring services.
For a list of webhook plugins currently supported by Cloudforet, see the Plugin support list.
In particular, event messages generated by AWS CloudWatch and AWS PHD (Personal Health Dashboard) are collected by Cloudforet through the AWS SNS (Simple Notification Service) webhook.
For the settings guide for each monitoring service, see Setup guide link in the plugin support list below.
Plugin support list
4.11.3 - [Asset inventory] collector
Overview
Cloudforet can collect cloud resources in use by each Cloud provider through a collector plugin.
For a list of collectors currently supported by Cloudforet, see the Plugin support list below.
First, to use a collector plugin, you must register a Service account.
However, since the way of registering a service account differs for each cloud provider, such as AWS, Google Cloud, and Azure,
see the Setup guide link in the plugin support list below for detailed settings.
Plugin support list
Plugins | Setup guide link |
---|---|
AWS Cloud Services collector | https://github.com/cloudforet-io/plugin-aws-cloud-service-inven-collector/blob/master/docs/ko/GUIDE.md |
AWS EC2 Compute collector | https://github.com/cloudforet-io/plugin-aws-ec2-inven-collector/blob/master/docs/ko/GUIDE.md |
AWS Personal Health Dashboard collector | https://github.com/cloudforet-io/plugin-aws-phd-inven-collector/blob/master/docs/ko/GUIDE.md |
AWS Trusted Advisor collector | https://github.com/cloudforet-io/plugin-aws-trusted-advisor-inven-collector/blob/master/docs/ko/GUIDE.md |
Azure Cloud collector | https://github.com/cloudforet-io/plugin-azure-inven-collector/blob/master/docs/ko/GUIDE.md |
Google Cloud collector | https://github.com/cloudforet-io/plugin-google-cloud-inven-collector/blob/master/docs/ko/GUIDE.md |
Monitoring Metric Collector of Collected Resources | https://github.com/cloudforet-io/plugin-monitoring-metric-inven-collector/blob/master/docs/ko/GUIDE.md |
4.11.4 - [Cost analysis] data source
Overview
Cloudforet collects cost data for cloud services using a plugin.
For a list of plugins currently supported by Cloudforet, see the Plugin support list.
If there is no suitable plugin, you can develop a plugin fit for your company's billing system
and use it in Cloudforet.
Plugin support list
Plugins | Setup guide link |
---|---|
AWS hyperbilling cost datasource | https://github.com/cloudforet-io/plugin-aws-hyperbilling-cost-datasource/blob/master/docs/ko/GUIDE.md |
4.11.5 - [IAM] authentication
Overview
As a means of user authentication, Cloudforet provides authentication methods that use accounts from other services via plugins.
For a list of authentication plugins currently supported by Cloudforet, see the Plugin support list.
You can use the Google OAuth2 plugin, which authenticates users through your Google account, and the Keycloak plugin, which supports single sign-on (SSO) via standard protocols.
For more detailed settings, see the Setup guide link below.
Plugin support list
5 - Developers
5.1 - Architecture
5.1.1 - Micro Service Framework
Cloudforet Architecture
Cloudforet consists of a microservice architecture based on identity and inventory. Each microservice provides a plugin interface for flexibility of implementation.
Cloudforet Backend Software Framework
The Cloudforet development team has created its own S/W framework, comparable to Python Django or Java Spring. The Cloudforet S/W framework provides a foundation for implementing business logic, and each piece of business logic can expose its services in various ways, such as a gRPC interface, a REST interface, or a periodic task.
Layer | Description | Base Class | Implementation Directory |
---|---|---|---|
Interface | Entry point of service request | core/api.py | project/interface/interface type/ |
Handler | Pre/post processing before service call | | |
Service | Business logic of service | core/service.py | project/service/ |
Cache | Caching for manager functions (optional) | core/cache/ | |
Manager | Unit operation for each service function | core/manager.py | project/manager/ |
Connector | Interface for data sources (e.g. DB, other microservices) | | |
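To make the layering concrete, below is a minimal, plain-Python sketch of the call flow. The class and method names (ServerService, ServerManager, ServerConnector, list_servers) are hypothetical illustrations of the pattern, not the actual spaceone.core base classes listed above.
# Hypothetical sketch of the Interface -> Service -> Manager -> Connector layering.
# Names are illustrative only; real services extend the spaceone.core base classes.

class ServerConnector:
    """Connector layer: talks to a data source such as a DB or another microservice."""
    def fetch_servers(self) -> list:
        # A real connector would query the database or call another service over gRPC.
        return [{"server_id": "server-1", "state": "RUNNING"}]


class ServerManager:
    """Manager layer: one unit of operation per service function (optionally cached)."""
    def __init__(self):
        self.connector = ServerConnector()

    def list_servers(self) -> list:
        return self.connector.fetch_servers()


class ServerService:
    """Service layer: business logic, called by the gRPC/REST interface layer."""
    def __init__(self):
        self.manager = ServerManager()

    def list(self, params: dict) -> dict:
        servers = self.manager.list_servers()
        return {"results": servers, "total_count": len(servers)}


# The interface layer (e.g. a gRPC servicer generated from the API definition)
# would simply validate the request and delegate to ServerService.list().
if __name__ == "__main__":
    print(ServerService().list({}))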
Backend Server Type
Based on the interface type, each microservice works as one of the following:
Interface type | Description |
---|---|
gRPC server | gRPC-based API server that receives requests from the console or the spacectl client |
rest server | HTTP-based API server, usually receiving requests from an external client like Grafana |
scheduler server | Periodic task creation server, for example collecting inventory resources every hour |
worker server | Periodic task execution server that processes requests coming from the scheduler server |
5.1.2 - Micro Service Deployment
Cloudforet Deployment
Cloudforet can be deployed with Helm charts. Each microservice has its own Helm chart, and the top chart, spaceone/spaceone, contains all sub-charts such as console, identity, inventory, and plugin.
Cloudforet provides its own Helm chart repository. The repository URL is https://cloudforet-io.github.io/charts
helm repo add spaceone https://cloudforet-io.github.io/charts
helm repo list
helm repo update
helm search repo -r spaceone
NAME CHART VERSION APP VERSION DESCRIPTION
spaceone/spaceone 1.8.6 1.8.6 A Helm chart for Cloudforet
spaceone/spaceone-initializer 1.2.8 1.x.y Cloudforet domain initializer Helm chart for Kube...
spaceone/billing 1.3.6 1.x.y Cloudforet billing Helm chart for Kubernetes
spaceone/billing-v2 1.3.6 1.x.y Cloudforet billing v2 Helm chart for Kubernetes
spaceone/config 1.3.6 1.x.y Cloudforet config Helm chart for Kubernetes
spaceone/console 1.2.5 1.x.y Cloudforet console Helm chart for Kubernetes
spaceone/console-api 1.1.8 1.x.y Cloudforet console-api Helm chart for Kubernetes
spaceone/cost-analysis 1.3.7 1.x.y Cloudforet Cost Analysis Helm chart for Kubernetes
spaceone/cost-saving 1.3.6 1.x.y Cloudforet cost_saving Helm chart for Kubernetes
spaceone/docs 2.0.0 1.0.0 Cloudforet Open-Source Project Site Helm chart fo...
spaceone/identity 1.3.7 1.x.y Cloudforet identity Helm chart for Kubernetes
spaceone/inventory 1.3.7 1.x.y Cloudforet inventory Helm chart for Kubernetes
spaceone/marketplace-assets 1.1.3 1.x.y Cloudforet marketplace-assets Helm chart for Kube...
spaceone/monitoring 1.3.15 1.x.y Cloudforet monitoring Helm chart for Kubernetes
spaceone/notification 1.3.8 1.x.y Cloudforet notification Helm chart for Kubernetes
spaceone/plugin 1.3.6 1.x.y Cloudforet plugin Helm chart for Kubernetes
spaceone/power-scheduler 1.3.6 1.x.y Cloudforet power_scheduler Helm chart for Kubernetes
spaceone/project-site 1.0.0 0.1.0 Cloudforet Open-Source Project Site Helm chart fo...
spaceone/repository 1.3.6 1.x.y Cloudforet repository Helm chart for Kubernetes
spaceone/secret 1.3.9 1.x.y Cloudforet secret Helm chart for Kubernetes
spaceone/spot-automation 1.3.6 1.x.y Cloudforet spot_automation Helm chart for Kubernetes
spaceone/spot-automation-proxy 1.0.0 1.x.y Cloudforet Spot Automation Proxy Helm chart for K...
spaceone/statistics 1.3.6 1.x.y Cloudforet statistics Helm chart for Kubernetes
spaceone/supervisor 1.1.4 1.x.y Cloudforet supervisor Helm chart for Kubernetes
Installation
Cloudforet can be easily deployed with the single Helm chart spaceone/spaceone.
See https://cloudforet.io/docs/setup_operation/
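As a minimal sketch of that procedure (the release name cloudforet and the namespace spaceone below are assumptions, not fixed values; see the setup guide above for the full configuration), installing the top chart could look like this:
helm install cloudforet spaceone/spaceone -n spaceone --create-namespace
kubectl get pods -n spaceone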
Helm Chart Code
Each repository should provide its own Helm chart.
The code should be located at {repository}/deploy/helm.
Every Helm chart consists of four components.
File or Directory | Description |
---|---|
Chart.yaml | Information of this Helm chart |
values.yaml | Default values of this Helm chart |
config (directory) | Default configuration of this micro service |
templates (directory) | Helm template files |
The directory looks like
deploy
└── helm
    ├── Chart.yaml
    ├── config
    │   └── config.yaml
    ├── templates
    │   ├── NOTES.txt
    │   ├── _helpers.tpl
    │   ├── application-grpc-conf.yaml
    │   ├── application-rest-conf.yaml
    │   ├── application-scheduler-conf.yaml
    │   ├── application-worker-conf.yaml
    │   ├── database-conf.yaml
    │   ├── default-conf.yaml
    │   ├── deployment-grpc.yaml
    │   ├── deployment-rest.yaml
    │   ├── deployment-scheduler.yaml
    │   ├── deployment-worker.yaml
    │   ├── ingress-rest.yaml
    │   ├── rest-nginx-conf.yaml
    │   ├── rest-nginx-proxy-conf.yaml
    │   ├── service-grpc.yaml
    │   ├── service-rest.yaml
    │   └── shared-conf.yaml
    └── values.yaml
3 directories, 21 files
The contents of the templates directory differ based on the microservice type, such as frontend, backend, or supervisor.
Template Samples
Since every backend service has the same template files, spaceone provides a sample templates directory.
Template Sample URL:
https://github.com/cloudforet-io/spaceone/tree/master/helm_templates
Implementation
values.yaml
The values.yaml file defines the default values for the templates.
Basic information
###############################
# DEFAULT
###############################
enabled: true
developer: false
grpc: true
scheduler: false
worker: false
rest: false
name: identity
image:
name: spaceone/identity
version: latest
imagePullPolicy: IfNotPresent
database: {}
- enabled: true | false, defines whether to deploy this Helm chart
- developer: true | false for developer mode (recommendation: false)
- grpc: true if you want to deploy gRPC server
- rest: true if you want to deploy rest server
- scheduler: true if you want to deploy scheduler server
- worker: true if you want to deploy worker server
- name: micro service name
- image: docker image and version for this micro service
- imagePullPolicy: IfNotPresent | Always
- database: if you want to overwrite default database configuration
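For example, a hypothetical override file (the file name and the values chosen below are assumptions, not defaults) could also deploy the scheduler and worker servers and pin the image version, and then be passed to Helm with the -f flag:
# identity-values.yaml (hypothetical override file)
# used e.g. as: helm install identity ./deploy/helm -f identity-values.yaml
name: identity
image:
  name: spaceone/identity
  version: 1.x.y    # pin a released image instead of "latest"
scheduler: true     # also deploy the scheduler server
worker: true        # also deploy the worker server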
Application Configuration
Each server type like gRPC, rest, scheduler or worker server has its own specific configuration.
application_grpc: {}
application_rest: {}
application_scheduler: {}
application_worker: {}
This section is used in templates/application-{server type}-conf.yaml and saved as a ConfigMap.
The deployment file references this ConfigMap in volumes and mounts it as /opt/spaceone/{service name}/config/application.yaml.
For example, the inventory scheduler server needs QUEUES and SCHEDULERS configuration,
so you can easily configure it by adding the configuration under application_scheduler like this:
application_scheduler:
QUEUES:
collector_q:
backend: spaceone.core.queue.redis_queue.RedisQueue
host: redis
port: 6379
channel: collector
SCHEDULERS:
hourly_scheduler:
backend: spaceone.inventory.scheduler.inventory_scheduler.InventoryHourlyScheduler
queue: collector_q
interval: 1
minute: ':00'
Local sidecar
Use this section if you want to append a specific sidecar to this microservice.
# local sidecar
##########################
#sidecar:
Local volumes
Every microservice needs a default timezone and a log directory.
##########################
# Local volumes
##########################
volumes:
- name: timezone
hostPath:
path: /usr/share/zoneinfo/Asia/Seoul
- name: log-volume
emptyDir: {}
Global variables
Every microservice needs some shared configuration or shared sidecars.
#######################
# global variable
#######################
global:
shared: {}
sidecar: []
Service
A gRPC or rest server needs a Service resource:
# Service
service:
grpc:
type: ClusterIP
annotations:
nil: nil
ports:
- name: grpc
port: 50051
targetPort: 50051
protocol: TCP
rest:
type: ClusterIP
annotations:
nil: nil
ports:
- name: rest
port: 80
targetPort: 80
protocol: TCP
volumeMounts
Some microservices may need additional files or configuration. In this case, use volumeMounts, which can attach anything.
################################
# volumeMount per deployment
################################
volumeMounts:
application_grpc: []
application_rest: []
application_scheduler: []
application_worker: []
POD Spec
We can configure specific values in the Pod spec. For example, we can use nodeSelector to deploy the Pod on a specific K8S worker node.
####################################
# pod spec (append more pod spec)
# example nodeSelect
#
# pod:
# spec:
# nodeSelector:
# application: my-node-group
####################################
pod:
spec: {}
CI (github action)
If you want to build the Helm chart for this microservice, trigger the GitHub Action Make Helm Chart.
Make Helm Chart
We usually don't need to build a helm chart for each microservice, since the spaceone/spaceone top-level chart performs all of these steps.
5.2 - Microservices
5.2.1 - Console
5.2.2 - Identity
5.2.3 - Inventory
5.2.4 - Monitoring
5.2.5 - Notification
5.2.6 - Statistics
5.2.7 - Billing
5.2.8 - Plugin
5.2.9 - Supervisor
5.2.10 - Repository
5.2.11 - Secret
5.2.12 - Config
5.3 - Frontend
5.4 - Design System
Overview
In the hyper-competitive software market, design systems have become a big part of a product's success. So, we built our design system based on the principles below.
A design system increases collaboration and accelerates design and development cycles. It is also a single source of truth that helps us speak with one voice and vary our tone depending on the situational context.
Principle
User-centered
Design is the "touch point" for users to communicate with the product. Communication between a user and a product is our key activity. We prioritize accessibility, simplicity, and perceivability, and we enable familiar interactions that make complex products simple and straightforward to use.
Clarity
Users need to accomplish complex tasks on our multi-cloud platform. We shorten the thinking process by eliminating confusion for a better user experience. We aim to help users complete tasks more simply and to improve their motivation to solve them.
Consistency
Language development is supported by a variety of sensory experiences. We aim to have the most consistent design system possible and keep improving it by checking usability.
Click the links below to open the resources for Mirinae's development.
Resources
- GitHub: Design system repository
- Storybook: Component Library
- Figma: Preparing For Release
5.4.1 - Getting Started
Development Environment Setup
Fork
SpaceONE currently maintains the console as an open-source project.
To contribute to development, first fork the Design System repository to your personal GitHub account.
Clone
Then clone the forked repository to your local machine.
Because the assets and translation-related repositories are used as submodules, initialize them together.
git clone --recurse-submodules https://github.com/[github username]/spaceone-design-system
cd spaceone-design-system
Run Storybook
To run Storybook locally, install the dependencies with npm and run the script:
npm install --no-save
npm run storybook
Build
To generate a deployable package, run the script below:
npm run build
Storybook
The SpaceONE Design System provides a Storybook.
When you create a component, document its feature definition through Storybook.
By default, a component is organized in the following structure:
- component-name
- [component-name].stories.mdx
- [component-name].vue
- story-helper.ts
- type.ts
[component-name].stories.mdx and story-helper.ts
These provide the component description, usage examples, and a Playground.
The mdx format is used; refer to the documentation for how to use it.
Properties such as props, slots, and events shown in the Playground should be separated into story-helper for readability.
Chart License
The SpaceONE Design System internally uses amCharts for dynamic charts.
Before using the design system, check the amCharts license.
To purchase an amCharts license suitable for your needs and use it in your application, refer to the license FAQ.
Style
For style definitions, the SpaceONE Console uses Tailwind CSS and PostCSS.
Colors are applied through Tailwind customization according to the SpaceONE color palette (see Storybook for details).
5.5 - Backend
5.6 - Plugins
5.6.1 - About Plugin
About Plugin
A plugin is a software add-on that is installed on a program, enhancing its capabilities.
The plugin interface pattern consists of two types of architectural components: a core system and plug-in modules. Application logic is divided between independent plug-in modules and the basic core system, providing extensibility, flexibility, and isolation of application features and custom processing logic.
Why Cloudforet uses a Plugin Interface
- Cloudforet wants to accommodate various clouds on one platform: Multi-Cloud / Hybrid Cloud / Anything
- We want Cloudforet to be used not only in the cloud, but also with various IT solutions.
- We want to become a platform that can contain various infrastructure technologies.
- It is difficult to predict the future direction of technology, but we want to be a flexible platform that can coexist in any direction.
Integration Endpoints
Micro Service | Resource | Description |
---|---|---|
Identity | Auth | Support Single Sign-On for each specific domain ex) OAuth2, ActiveDirectory, Okta, Onelogin |
Inventory | Collector | Any Resource Objects for Inventory ex) AWS inventory collector |
Monitoring | DataSource | Metric or log information related to Inventory objects ex) CloudWatch, Stackdriver ... |
Monitoring | Webhook | Any Event from Monitoring Solutions ex) CPU, Memory alert ... |
Notification | Protocol | Specific Event notification ex) Slack, Email, Jira ... |
5.6.2 - Developer Guide
Plugins can be developed in any language using Protobuf.
This is because both microservices and plugins communicate over Protobuf by default. The basic structure is the same as the server development process using the gRPC interface.
When developing plugins, any language that supports the gRPC interface can be used,
but if you use the Python framework we provide, development is easier.
All of the currently provided plugins were developed on our Python-based framework.
For the basic usage of the framework, refer to the following.
The following are the development requirements to check when developing a plugin; detailed step-by-step instructions are on each page.
5.6.2.1 - Plugin Interface
First, check the interface between the plugin to be developed and the core service. The interface structure is different for each service. You can check the gRPC interface information in the API document. (SpaceONE API)
For example, suppose we are developing an Auth plugin for Identity authentication.
The interface information of the Auth plugin is as follows. (SpaceONE API - Identity Auth)
To develop an Identity Auth plugin, a total of four API interfaces must be implemented.
Of these, init and verify are interfaces that all plugins require equally;
the rest depend on the characteristics of each plugin.
Let's take a closer look at init and verify, which must be implemented by every plugin.
1. init
Plugin initialization.
In the case of Identity, when creating a domain, it is necessary to decide which authentication to use, and the related Auth plugin is deployed.
When deploying the plugin for the first time (or updating the plugin version), the core service calls the plugin's init API after the plugin container is created.
The plugin then returns the metadata that the core service needs to communicate with it.
The metadata differs for each core service.
Below is an example of the Python init implementation of the Google OAuth2 plugin.
Metadata is returned as the return value; the various information required by Identity is added to it and returned.
@transaction
@check_required(['options'])
def init(self, params):
    """ verify options
    Args:
        params
          - options
    Returns:
        - metadata
    Raises:
        ERROR_NOT_FOUND:
    """
    manager = self.locator.get_manager('AuthManager')
    options = params['options']
    options['auth_type'] = 'keycloak'
    endpoints = manager.get_endpoint(options)
    capability = endpoints
    return {'metadata': capability}
2. verify
Checks the plugin's normal operation.
After the plugin is deployed and the init API has been called, a check procedure verifies that the plugin is ready to run; the API called at this point is verify.
The verify step checks whether the plugin is ready to perform its normal operation.
Below is an example of the Python verify implementation of the Google OAuth2 plugin.
The verify action is performed using the values required for Google OAuth2 operation.
As a preparation stage for actual logic execution, it requires verification-level code for normal operation.
def verify(self, options):
    # This is a connection check for the Google Authorization Server
    # URL: https://www.googleapis.com/oauth2/v4/token
    # After connecting without params, it should return 404
    r = requests.get(self.auth_server)
    if r.status_code == 404:
        return "ACTIVE"
    else:
        raise ERROR_NOT_FOUND(key='auth_server', value=self.auth_server)
5.6.2.2 - Plugin Register
Once plugin development is completed, you need to prepare for plugin distribution. Since all SpaceONE plugins are distributed as containers, the developed plugin code must be built as a container image. The container build is done with docker build using a Dockerfile, and the resulting image is uploaded to an image registry such as Docker Hub. This registry is the storage managed by the Repository service, a SpaceONE microservice.
Once you have uploaded an image to the registry, you need to register the image with the Repository microservice. The registration API is Repository.plugin.register. (SpaceONE API - (Repository) Plugin.Register)
The example below shows the parameters delivered when registering a Notification Protocol plugin. The image value contains the address of the previously built image.
name: Slack Notification Protocol
service_type: notification.Protocol
image: pyengine/plugin-slack-notification-protocol_settings
capability:
  supported_schema:
    - slack_webhook
  data_type: SECRET
tags:
  description: Slack
  "spaceone:plugin_name": Slack
  icon: 'https://spaceone-custom-assets.s3.ap-northeast-2.amazonaws.com/console-assets/icons/slack.svg'
provider: slack
template: {}
Because image registration is not yet supported in the Web Console, use the gRPC API directly or use spacectl. After creating the yaml file as above, you can register the image with the spacectl command shown below.
> spacectl exec register repository.Plugin -f plugin_slack_notification_protocol.yml
When the image is registered in the Repository, you can check it as follows.
> spacectl list repository.Plugin -p repository_id=<REPOSITORY_ID> -c plugin_id,name
plugin_id | name
----------------------------------------+------------------------------------------
plugin-aws-sns-monitoring-webhook | AWS SNS Webhook
plugin-amorepacific-monitoring-webhook | Amore Pacific Webhook
plugin-email-notification-protocol_settings | Email Notification Protocol
plugin-grafana-monitoring-webhook | Grafana Webhook
plugin-keycloak-oidc | Keycloak OIDC Auth Plugin
plugin-sms-notification-protocol_settings | SMS Notification Protocol
plugin-voicecall-notification-protocol_settings | Voicecall Notification Protocol
plugin-slack-notification-protocol_settings | Slack Notification Protocol
plugin-telegram-notification-protocol_settings | Telegram Notification Protocol
Count: 9 / 9
Detailed usage of spacectl can be found on this page. Spacectl CLI Tool
5.6.2.3 - Plugin Deployment
To actually deploy and use the registered plugin, you need to deploy a pod in the Kubernetes environment based on the plugin image.
The plugin deployment is performed automatically by the service that wants to use the plugin.
For example, in the case of Notification, an object called Protocol is used to deliver generated Alerts to the user.
The Protocol.create action automatically triggers installation of the Notification protocol plugin.
The example below shows the Protocol.create parameters for creating a Slack Protocol that sends alerts to Slack via Notification.
---
name: Slack Protocol
plugin_info:
  plugin_id: plugin-slack-notification-protocol_settings
  version: "1.0"
  options: {}
  schema: slack_webhook
tags:
  description: Slack Protocol
In plugin_id, put the ID of the plugin registered in the Repository.
In version, put the image tag that was written when uploading the image to an image registry such as Docker Hub.
If there are multiple tags in the image registry, the plugin is deployed with the image of the specified tag version.
In the case above, because the version was specified as "1.0", the image tagged "1.0" is deployed.
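Assuming the same spacectl pattern used for plugin registration above, the protocol could be created from this yaml file with a command like the following (the file name is hypothetical):
> spacectl exec create notification.Protocol -f slack_protocol.yaml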
This API takes some time to respond because it goes through the steps of creating and deploying a Service and a Pod in the Kubernetes environment.
You can check the pod deployment in the actual Kubernetes environment as follows.
> k get po
NAME READY STATUS RESTARTS AGE
plugin-slack-notification-protocol_settings-zljrhvigwujiqfmn-bf6kgtqz 1/1 Running 0 1m
5.6.2.4 - Plugin Debugging
Using Pycharm
It is recommended to use PyCharm (the most popular Python IDE) to develop and test plugins. The overall setup process is described below.
1. Open projects and dependencies
First, open the identity, python-core, and api projects one by one.
Click Open.
Select your project directory. In this example, '~/source/cloudone/identity'.
Click File > Open, then select each related project one by one. In this example, '~/source/cloudone/python-core'.
Select New Window for an additional project. You might need to do this several times if you have multiple projects, e.g. python-core and api.
Now we have three windows. Just close the python-core and api projects.
Once you have opened a project at least once, you can attach it to another one. Let's do it in the identity project: click Open again and select your other project directory, in this example python-core and api.
This time, you can ATTACH it to the identity project.
You can attach a project as a module if it has been opened at least once.
2. Configure Virtual Environment
Add an additional Python interpreter.
Click the virtual environment section.
Designate the base interpreter as 'Python 3.8' (Python 3 needs to be installed beforehand).
Then click 'OK'.
Return to 'Python Interpreter > Interpreter Settings...'.
The list of Python packages installed in the virtual environment will be displayed.
Click the '+' button, then search for and click 'Install Package' for each of the following:
'spaceone-core'
'spaceone-api'
'spaceone-tester'
Additional libraries are listed in 'pkg/pip_requirements.txt' in every repository. You also need to install them.
Repeat the above process, or install them through the command line:
$> pip3 install -r pip_requirements.txt
3. Run Server
- Set the source root directory
- Right-click the 'src' directory and select 'Mark Directory as > Sources Root'
- Set the test server configuration
- Fill in the test server configuration as below, then click 'OK'
Item | Configuration | Etc |
---|---|---|
Module name | spaceone.core.command | |
Parameters | grpc spaceone.inventory -p 50051 | The -p option is the port number (can be changed) |
- You can run the test server with the 'play' button on the upper right side of the IDE
4. Execute Test Code
Every plugin repository has its own unit test case files in the 'test/api' directory.
- Right-click the 'test_collector.py' file
- Click 'Run 'test_collector''
Some plugins need credentials to interface with other services. You need to create a credential file and set it in the environment before running.
Go to the test server configuration > test_server > Edit Configurations.
Click Edit variables.
Add environment variables as below:
Item | Configuration | Etc |
---|---|---|
PYTHONUNBUFFERED | 1 | |
GOOGLE_APPLICATION_CREDENTIALS | Full path of your configuration file |
Finally, you can test-run your server:
First, run the test server locally.
Second, run the unit tests.
Using Terminal
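This section is a sketch of the equivalent terminal workflow, inferred from the PyCharm settings above: the run configuration with module name spaceone.core.command corresponds to python -m spaceone.core.command, and credentials are passed through environment variables. All paths and file names below are examples.

# Install the framework packages and the plugin's requirements (same as the IDE setup).
pip3 install spaceone-core spaceone-api spaceone-tester
pip3 install -r pkg/pip_requirements.txt

# Make the 'src' directory importable (equivalent to marking it as the sources root).
export PYTHONPATH=src

# Export credentials if the plugin needs them (example variable from the table above).
export GOOGLE_APPLICATION_CREDENTIALS=/path/to/credentials.json

# Run the plugin gRPC server locally (equivalent to the PyCharm run configuration).
python -m spaceone.core.command grpc spaceone.inventory -p 50051

# In another terminal, run the unit test case.
python -m unittest test/api/test_collector.py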
5.6.3 - Plugin Designs
Inventory Collector
With an Inventory Collector plugin, everyone from system engineers without specialized development knowledge to professional cloud developers can conveniently collect the cloud asset information they need and manage it systematically. The collected asset information can also be easily presented in the user UI.
An Inventory Collector plugin can be developed on top of SpaceONE's gRPC framework base modules (spaceone-core, spaceone-api). The documents below describe the detailed specifications for each cloud provider.
AWS
Azure
Google Cloud
Google Cloud External IP Address
Collecting Google Cloud Instance Template
Collecting Google Cloud Load Balancing
Collecting Google Cloud Machine Image
Collecting Google Cloud Snapshot
Collecting Google Cloud Storage Bucket
Collecting Google Cloud VPC Network
Identity Authentication
Monitoring DataSources
Alert Manager Webhook
Notifications
Billing
5.6.4 - Collector
Add a new Cloud Service Type
To add a new Cloud Service Type, implement the following components:
Component | Source Directory | Description |
---|---|---|
model | src/spaceone/inventory/model/skeleton | Data schema |
manager | src/spaceone/inventory/manager/skeleton | Data Merge |
connector | src/spaceone/inventory/connector/skeleton | Data Collection |
Add model
5.7.1 - gRPC API
Developer Guide
This guide explains the new SpaceONE API specification which extends the spaceone-api.
git clone https://github.com/cloudforet-io/api.git
Create new API spec file
Create a new API spec file for the new microservice. The file location must be:
proto/spaceone/api/<new service name>/<version>/<API spec file>
For example, the APIs for the inventory service are defined at:
proto
└── spaceone
    └── api
        ├── core
        │   └── v1
        │       ├── handler.proto
        │       ├── plugin.proto
        │       ├── query.proto
        │       └── server_info.proto
        ├── inventory
        │   ├── plugin
        │   │   └── collector.proto
        │   └── v1
        │       ├── cloud_service.proto
        │       ├── cloud_service_type.proto
        │       ├── collector.proto
        │       ├── job.proto
        │       ├── job_task.proto
        │       ├── region.proto
        │       ├── server.proto
        │       └── task_item.proto
        └── sample
            └── v1
                └── helloworld.proto
If you create a new microservice called sample, create the directory proto/spaceone/api/sample/v1.
Define API
After creating the API spec file, define the gRPC protobuf content.
The content consists of two sections: service and message.
service defines the RPC methods, and message defines the request and response data structures.
syntax = "proto3";
package spaceone.api.sample.v1;
// desc: The greeting service definition.
service HelloWorld {
// desc: Sends a greeting
rpc say_hello (HelloRequest) returns (HelloReply) {}
}
// desc: The request message containing the user's name.
message HelloRequest {
// is_required: true
string name = 1;
}
// desc: The response message containing the greetings
message HelloReply {
string message = 1;
}
Build the API spec for a specific language.
Protobuf cannot be used directly; it must be compiled into a target language such as Python or Go.
If you create a new microservice directory, update the Makefile by appending the directory name to TARGET.
TARGET = core identity repository plugin secret inventory monitoring statistics config report sample
Currently, the API build supports Python output.
make python
The generated Python output is located in the dist/python directory.
dist
└── python
    ├── setup.py
    └── spaceone
        ├── __init__.py
        └── api
            ├── __init__.py
            ├── core
            │   ├── __init__.py
            │   └── v1
            │       ├── __init__.py
            │       ├── handler_pb2.py
            │       ├── handler_pb2_grpc.py
            │       ├── plugin_pb2.py
            │       ├── plugin_pb2_grpc.py
            │       ├── query_pb2.py
            │       ├── query_pb2_grpc.py
            │       ├── server_info_pb2.py
            │       └── server_info_pb2_grpc.py
            ├── inventory
            │   ├── __init__.py
            │   ├── plugin
            │   │   ├── __init__.py
            │   │   ├── collector_pb2.py
            │   │   └── collector_pb2_grpc.py
            │   └── v1
            │       ├── __init__.py
            │       ├── cloud_service_pb2.py
            │       ├── cloud_service_pb2_grpc.py
            │       ├── cloud_service_type_pb2.py
            │       ├── cloud_service_type_pb2_grpc.py
            │       ├── collector_pb2.py
            │       ├── collector_pb2_grpc.py
            │       ├── job_pb2.py
            │       ├── job_pb2_grpc.py
            │       ├── job_task_pb2.py
            │       ├── job_task_pb2_grpc.py
            │       ├── region_pb2.py
            │       ├── region_pb2_grpc.py
            │       ├── server_pb2.py
            │       ├── server_pb2_grpc.py
            │       ├── task_item_pb2.py
            │       └── task_item_pb2_grpc.py
            └── sample
                ├── __init__.py
                └── v1
                    ├── __init__.py
                    ├── helloworld_pb2.py
                    └── helloworld_pb2_grpc.py
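As a quick sanity check of the generated code, the sample service can be called with the standard gRPC Python client. This is a minimal sketch: it assumes the generated package is installed (for example via pip install ./dist/python) and that a server implementing spaceone.api.sample.v1.HelloWorld is listening on localhost:50051; the stub and message names follow from the proto definition above.

import grpc

# Generated from proto/spaceone/api/sample/v1/helloworld.proto by "make python"
from spaceone.api.sample.v1 import helloworld_pb2, helloworld_pb2_grpc

# Open an insecure channel to a locally running sample server (assumed address).
with grpc.insecure_channel('localhost:50051') as channel:
    stub = helloworld_pb2_grpc.HelloWorldStub(channel)

    # Call the say_hello RPC defined in the spec above.
    reply = stub.say_hello(helloworld_pb2.HelloRequest(name='Cloudforet'))
    print(reply.message)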
References
[Google protobuf] https://developers.google.com/protocol-buffers/docs/proto3
5.8 - CICD
SpaceONE CICD Architecture
The CI process is mainly handled by GitHub Actions described in the .github/workflows directory of each repository. The process can be triggered automatically through events such as pull requests or pushes, or manually. Continuous integration includes building the software, uploading images to Docker, and releasing packages to NPM or PyPI. The Docker image is built with all dependency packages included.
The CD process is mainly handled by Spinnaker. Spinnaker deployment is triggered by detecting an image upload to Docker Hub. After the triggering event, Spinnaker automatically deploys the microservice to Kubernetes, with a Helm chart prepared for each repository to provision the infrastructure for the deployment.
Role of cloudforet-io/spaceone Repository
- cloudforet-io/spaceone GitHub Repository : https://github.com/cloudforet-io/spaceone
Before we discuss the CI process of each repository, we should look at the cloudforet-io/spaceone repository (the "root" repository). The root repository serves as a trigger for all repositories to start the CI process: by manually starting one of the root repository's GitHub Actions, most of the repositories detect the action and their own GitHub Actions are triggered.
Repository Categories
SpaceONE repositories can be divided into six categories based on their characteristics in the CI/CD process.
- Frontend microservice
- Backend microservice
- Backend Core microservice
- Frontend Core microservice
- Plugin
- Tools
Core microservices are differentiated from ordinary microservices, since they support the other services by providing functions such as frameworks, libraries, or system components.
Categories | Repository |
---|---|
Frontend microservice | console, console-api, console-assets, marketplace-assets |
Backend microservice | billing, config, cost-analysis, cost-saving, identity, inventory, monitoring, notification, plugin, power-scheduler, secret, spot-automation, statistics, supervisor |
Backend Core microservice | api, python-core |
Frontend Core microservice | console-core-lib, spaceone-design-system |
Plugin | plugin module repositories (excluding βpluginβ repository) |
Tools | spacectl, spaceone-initializer, tester |
** Some repositories might not fit the categories and standards above. For more CI/CD details, check the .github/workflows files in our GitHub repositories.
Versioning System
SpaceONE uses a three-digit versioning system in the format "x.y.z". The version scheme is displayed in the table below.
Category | Format | Description | Example |
---|---|---|---|
Development | x.y.zdev[0-9]+ | api, python-core only | 1.2.3dev1 |
Release Candidate | x.y.zrc[0-9]+ | Every release has RC during QA | 1.2.3rc1 |
Final Release | x.y.z | Official Release version | 1.2.3 |
Hotfix | x.y.z.[0-9]+ | Hotfix of Final Release | 1.2.3.1 |
Released Packages & Images
- NPM : https://www.npmjs.com/search?q=spaceone
- PyPi : https://pypi.org/search/?q=spaceone-
- Docker : https://hub.docker.com/orgs/spaceone/repositories
Packages and images released in CI process can be found in the links above.
Continuous Integration Process
The CI process of each repository can be organized into four kinds of triggering events.
- Master Branch Push: When the master branch is pushed, a GitHub Action runs via the CI_master_push.yml file, which builds the software and uploads it to a registry such as Docker or NPM. After the process, the CloudONE team is notified through Slack.
- Create Release Branch: Each repository can create a release branch manually or via the cloudforet-io/spaceone repository's event. After initialization, a GitHub Action triggers the branch tagging action.
- Branch Tagging: Triggered by the event above or by a push with version tags, each repository tags the branch with a GitHub Action by updating the version in both the local and master branches, building the software, and uploading the output to registries such as Docker or PyPI. After all processes are done, a Slack notification is automatically sent to the CloudONE team.
- Reflect Branch Update: The last CI process is updating the version file in the master branch of each repository. This process is triggered by the branch tagging action or by the cloudforet-io/spaceone repository's GitHub Action.
While most of the process can be explained with the description above, Continuous Integration processes differ by the repository categories described above. To learn about CI of each repository type, visit the document linked below.
- Frontend : Frontend Microservice CI
- Backend : Backend Microservice CI
- Frontend Core : Frontend Core Microservice CI
- Backend Core : Backend Core Microservice CI
- Plugin : Plugin CI
- Tools : Tools CI
5.8.1 - Frontend Microservice CI
Frontend Microservice CI process details
The flowchart above describes the four .yml GitHub Action files for the CI process of frontend microservices. Unlike backend microservices, frontend microservices are not released as packages, so the branch tagging job does not include building and uploading an NPM package. Frontend microservices only build the software and upload it to Docker, not NPM or PyPI.
To check the details, go to the .github/workflows directory in each repository. An example workflow directory of a frontend microservice is linked below.
- console repository : cloudforet-io/console GitHub workflow file link
5.8.2 - Backend Microservice CI
Backend Microservice CI process details
The flowchart above describes the four .yml GitHub Action files for the CI process of backend microservices. Most of the workflow is similar to the frontend microservices' CI. However, unlike the frontend microservices, backend microservices are released as packages, so the process includes building and uploading a PyPI package.
To check the details, go to the .github/workflows directory in each repository. An example workflow directory of a backend microservice is linked below.
- identity repository : cloudforet-io/identity GitHub workflow file link
5.8.3 - Frontend Core Microservice CI
Frontend Core Microservice CI
Frontend Core microservices' code is integrated, built, and uploaded with the flow explained above. Most of the workflows include a set-up process: setting up Node.js, caching node modules, and installing dependencies. After the set-up process, each repository workflow proceeds to the NPM build step. After building, both repositories' packages are released to NPM with npm run semantic-release.
Check the semantic-release site, npm: semantic-release, for further details about the release process.
Also, unlike other repositories, which are deployed via the flow from Docker to Spinnaker and Kubernetes, the spaceone-design-system repository is deployed directly through AWS S3.
To check the details, go to the .github/workflows directory in each repository.
- console-core-lib repository : cloudforet-io/console-core-lib GitHub workflow file link
- spaceone-design-system repository : cloudforet-io/spaceone-design-system GitHub workflow file link
5.8.4 - Backend Core Microservice CI
Backend Core Microservice CI process details
Backend Core microservices' four workflow-related GitHub Action files are explained in the diagram above. Unlike in the other repositories, pushes with tags are monitored and trigger building the package on PyPI for testing purposes, instead of workflow tasks for master branch pushes.
Also, Backend Core microservices are not built and uploaded to Docker; they are only managed on PyPI.
To check the details, go to the .github/workflows directory in each repository.
- api repository : cloudforet-io/api GitHub workflow file link
- python-core repository : cloudforet-io/python-core GitHub workflow file link
5.8.5 - Plugin CI
Plugin CI process details
Plugin repositories whose names start with "plugin-" have a unique CI process managed by a workflow file named push_sync_ci.yaml. As the overall CI architecture differs from other repositories, plugin repositories' workflow files are automatically updated at every code commit.
We can follow the plugin CI process step by step.
Step 1. push_sync_ci.yaml in each plugin repository is triggered by a master branch push or manually.
Step 2. push_sync_ci.yaml runs cloudforet-io/actions/.github/workflows/deploy.yaml.
Step 2-1. cloudforet-io/actions/.github/workflows/deploy.yaml runs cloudforet-io/actions/src/main.py.
cloudforet-io/actions/src/main.py updates each plugin repository's workflow files based on the repository characteristics distinguished by topics. The newest versions of all plugin repository workflow files are managed in cloudforet-io/actions.
Step 2-2. cloudforet-io/actions/.github/workflows/deploy.yaml runs push_build_dev.yaml in each plugin repository.
- push_build_dev.yaml performs versioning based on the current date.
- push_build_dev.yaml uploads the plugin image to Docker.
- push_build_dev.yaml sends a notification through Slack.
To build and release the Docker image of plugin repositories, plugins use dispatch_release.yaml.
- dispatch_release.yaml in each plugin repository is triggered manually.
- dispatch_release.yaml executes the condition_check job to check the version format and debug.
- dispatch_release.yaml updates the master branch version file.
- dispatch_release.yaml executes git tagging.
- dispatch_release.yaml builds and pushes to Docker Hub with docker/build-push-action@v1 (see the sketch after this list).
- dispatch_release.yaml sends a notification through Slack.
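For reference, a build-and-push step using docker/build-push-action@v1 might look roughly like the sketch below. The inputs, secrets, and image name shown are illustrative assumptions; check each plugin repository's dispatch_release.yaml for the actual workflow.

# Illustrative sketch only; real workflows live in each plugin repository.
- name: Build and push plugin image
  uses: docker/build-push-action@v1
  with:
    username: ${{ secrets.DOCKER_USERNAME }}
    password: ${{ secrets.DOCKER_PASSWORD }}
    repository: spaceone/plugin-example        # hypothetical image name
    tags: ${{ env.VERSION }}                   # version tag created in an earlier step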
For further details, you can check our GitHub repository cloudforet-io/actions.
5.8.6 - Tools CI
Tools CI process details
The spacectl, spaceone-initializer, and tester repositories are tools used for the SpaceONE project. There are some differences from other repositories' CI processes.
The spacectl repository workflow includes test code for each push with a version tag, which is similar to the CI process of the backend core repositories.
The spaceone-initializer repository does not include the workflow file triggered by "master branch push", which most repositories, including spacectl and tester, have.
Tools-category repositories upload to different registries:
- spacectl: both PyPI and Docker
- spaceone-initializer: Docker
- tester: PyPI
To check the details, go to the .github/workflows directory in each repository.
- spacectl repository : cloudforet-io/spacectl GitHub workflow file link
- spaceone-initializer repository : cloudforet-io/spaceone-initializer GitHub workflow file link
- tester repository : cloudforet-io/tester GitHub workflow file link
5.9 - Contribute
5.9.1 - Documentation
5.9.1.1 - Content Guide
Create a new page
Go to the parent page of the page creation location. Then, click the 'Create child page' button at the bottom right.
or:
You can also fork from the repository and work locally.
Choosing a title and filename
Create a filename that uses the words in your title separated by underscore (_). For example, the topic with title Using Project Management has filename project_management.md.
Adding fields to the front matter
In your document, put fields in the front matter. The front matter is the YAML block that is between the triple-dashed lines at the top of the page. Here's an example:
---
title: "Project Management"
linkTitle: "Project Management"
weight: 10
date: 2021-06-10
description: >
View overall status of each project and Navigate to detailed cloud resources.
---
Attention
When writing a description, if you start a sentence with a tab instead of a space, the entire site build will fail.
Description of front matter variables
Variables | Description |
---|---|
title | The title for the content |
linkTitle | Left-sidebar title |
weight | Used for ordering your content in left-sidebar. Lower weight gets higher precedence. So content with lower weight will come first. If set, weights should be non-zero, as 0 is interpreted as an unset weight. |
date | Creation date |
description | Page description |
If you want to see more details about front matter, click Front matter.
Write a document
Adding Table of Contents
When you add ## headings in the document, a table of contents is generated automatically.
Adding images
Create a directory for images named file_name_img in the same hierarchy as the document. For example, create project_management_img directory for project_management.md. Put images in the directory.
Style guide
Please refer to the style guide to write the document.
Opening a pull request
When you are ready to submit a pull request, commit your changes with new branch.
5.9.1.2 - Style Guide (shortcodes)
Heading tag
It is recommended to use heading tags sequentially, starting from ## (<h2>). This is for style, not just semantic markup.
Note
When you add ## headings in the documentation, a table of contents is generated automatically.
Link button
Code :
{{< link-button background-color="navy500" url="/" text="Home" >}}
{{< link-button background-color="white" url="https://cloudforet.io/" text="cloudforet.io" >}}
Output :
Home
cloudforet.io
Video
Code :
{{< video src="https://www.youtube.com/embed/zSoEg2v_JrE" title="Cloudforet Setup" >}}
Output:
Alert
Code :
{{< alert title="Note Title" >}}
Note Contents
{{< /alert >}}
Output:
Note Title
Note Contents
Reference
- Learn about Hugo
- Learn about How to use Markdown for writing technical documentation