Guide to installing and running OpenStack on ALT Linux p8

From ALT Linux Wiki

{{Stub}}
This guide is loosely based on the Red Hat installation guide: https://docs.openstack.org/newton/install-guide-rdo/
* On a machine with 2 GB of RAM I ran into memory shortages and crashing processes.
An example installation with the networking service (neutron) on the controller node.

Network interfaces:
* ens19 - the OpenStack management network interface (10.0.0.0/24)
* ens20 - the "provider interface"; [[#Создание сети|this guide]] uses the address range 203.0.113.101-203.0.113.250 from the 203.0.113.0/24 network, with gateway 203.0.113.1
== Installing the controller node ==

  # apt-get update -y
  # apt-get dist-upgrade
 
Removing firewalld.

Installing the software:
  # apt-get install python-module-pymysql openstack-nova chrony python-module-memcached python3-module-memcached python-module-pymemcache python3-module-pymemcache mariadb-server python-module-MySQLdb python-module-openstackclient openstack-glance python-module-glance python-module-glance_store python-module-glanceclient openstack-nova-api openstack-nova-cells openstack-nova-cert openstack-nova-conductor openstack-nova-console openstack-nova-scheduler rabbitmq-server openstack-keystone apache2-mod_wsgi memcached openstack-neutron openstack-neutron-ml2 openstack-neutron-linuxbridge openstack-neutron-l3-agent openstack-neutron-dhcp-agent openstack-neutron-server openstack-neutron-metadata-agent openstack-dashboard spice-html5 openstack-nova-spicehtml5proxy mongo-server-mongod mongo-tools python-module-pymongo
 
 
  
== Time configuration ==

Add to /etc/chrony.conf:

  allow 10.0.0.0/24
  pool pool.ntp.org iburst
  
  # systemctl enable chronyd.service
   Synchronizing state of chronyd.service with SysV service script with /lib/systemd/systemd-sysv-install.
   Executing: /lib/systemd/systemd-sysv-install enable chronyd
  # systemctl start chronyd.service
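Once chronyd is running, `chronyc sources` should show at least one selected (`^*`) source. A small helper to count healthy sources from that output (a hypothetical sketch, not part of the original guide; it only parses what `chronyc` prints, demonstrated here against canned text):

```shell
# Count healthy NTP sources in `chronyc sources` output:
# lines starting with "^*" (selected) or "^+" (candidate) are good.
count_good_sources() {
    grep -c '^\^[*+]'
}
# usage on the controller: chronyc sources | count_good_sources
```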
  
  
=== Configuring the SQL server ===

Comment out the "skip-networking" line in /etc/my.cnf.d/server.cnf
  # mysql_secure_installation
  
== Configuring the RabbitMQ message broker ==

  # systemctl enable rabbitmq.service
Add a user:
  # rabbitmqctl add_user openstack RABBIT_PASS
  # rabbitmqctl set_permissions openstack ".*" ".*" ".*"
  
 
== Configuring memcached ==
 
Create the database and a user with a password:

  # mysql -u root -p
  CREATE DATABASE keystone;
  GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' IDENTIFIED BY 'KEYSTONE_DBPASS';
  GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' IDENTIFIED BY 'KEYSTONE_DBPASS';
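The same CREATE/GRANT pattern repeats below for glance, nova, and neutron. A small generator avoids retyping it (a hypothetical helper, not part of the guide; database names and password placeholders follow the guide's conventions):

```shell
# Emit the CREATE DATABASE / GRANT statements for one OpenStack service DB.
make_db_sql() {
    db="$1"; pass="$2"
    cat <<SQL
CREATE DATABASE ${db};
GRANT ALL PRIVILEGES ON ${db}.* TO '${db}'@'localhost' IDENTIFIED BY '${pass}';
GRANT ALL PRIVILEGES ON ${db}.* TO '${db}'@'%' IDENTIFIED BY '${pass}';
SQL
}
# usage: make_db_sql keystone KEYSTONE_DBPASS | mysql -u root -p
```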
  
 
Back up the original configuration file, then write a new one:

  # cat > /etc/keystone/keystone.conf
  [DEFAULT]
  admin_token = ADMIN_TOKEN
  [assignment]
  [auth]
  # keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
  # keystone-manage credential_setup --keystone-user keystone --keystone-group keystone

The admin user's password is ADMIN_PASS:

  # keystone-manage bootstrap --bootstrap-password ADMIN_PASS \
    --bootstrap-admin-url http://controller:35357/v3/ \
    --bootstrap-internal-url http://controller:35357/v3/ \
    --bootstrap-public-url http://controller:5000/v3/ \
    --bootstrap-region-id RegionOne
 
  
== Configuring apache2 for keystone ==

In /etc/httpd2/conf/sites-available/openstack-keystone.conf remove all lines containing IfVersion:

  <IfVersion >= 2.4>
  </IfVersion>
== Creating domains, users, and roles ==

For further work it is recommended to create a user:

  # adduser admin
  cat >auth
  export OS_TOKEN=ADMIN_TOKEN
  export OS_URL=http://controller:35357/v3
  export OS_IDENTITY_API_VERSION=3
  
  # su - admin
  . auth
  openstack service create --name keystone --description "OpenStack Identity" identity

The password for the admin user is ADMIN_PASS.
 
 
  openstack endpoint create --region RegionOne identity public http://controller:5000/v3
  openstack endpoint create --region RegionOne identity internal http://controller:5000/v3
  openstack endpoint create --region RegionOne identity admin http://controller:35357/v3
  openstack domain create --description "Default Domain" default
  openstack project create --domain default --description "Admin Project" admin
  openstack user create --domain default --password '''ADMIN_PASS''' admin
  openstack role create admin
  openstack role add --project admin --user admin admin
  openstack project create --domain default --description "Service Project" service
  openstack project create --domain default --description "Demo Project" demo
  openstack user create --domain default --password demo demo
  openstack role create user
  openstack role add --project demo --user demo user
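The public/internal/admin endpoint triple above recurs for every service registered later (glance, nova, neutron). A helper that prints the three commands for one service could look like this (a hypothetical sketch, not part of the guide; note keystone is the exception, since its admin endpoint lives on port 35357):

```shell
# Print the three `openstack endpoint create` commands for a service
# whose public, internal and admin endpoints share one URL.
endpoint_cmds() {
    svc="$1"; url="$2"
    for iface in public internal admin; do
        echo "openstack endpoint create --region RegionOne ${svc} ${iface} ${url}"
    done
}
# usage: endpoint_cmds image http://controller:9292 | sh
```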
  
== Environment setup ==

  # systemctl restart httpd2.service
  
  # su - admin
  rm auth

  cat > admin-openrc <<EOF
  export OS_PROJECT_DOMAIN_NAME=Default
  export OS_USER_DOMAIN_NAME=Default
  export OS_PROJECT_NAME=admin
  export OS_USERNAME=admin
  export OS_PASSWORD=ADMIN_PASS
  export OS_AUTH_URL=http://controller:35357/v3
  export OS_IDENTITY_API_VERSION=3
  export OS_IMAGE_API_VERSION=2
  EOF
  
  cat > demo-openrc <<EOF
  export OS_PROJECT_DOMAIN_NAME=Default
  export OS_USER_DOMAIN_NAME=Default
  export OS_PROJECT_NAME=demo
  export OS_USERNAME=demo
  export OS_PASSWORD=demo
  export OS_AUTH_URL=http://controller:5000/v3
  export OS_IDENTITY_API_VERSION=3
  export OS_IMAGE_API_VERSION=2
  EOF
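Before relying on an openrc file, it can be worth checking that it defines everything the openstack client expects. A small sanity-check helper (hypothetical, not part of the guide):

```shell
# Verify that an openrc file exports every variable the openstack
# client needs; print "ok" if so, or name the first missing one.
check_openrc() {
    rc_file="$1"
    for var in OS_PROJECT_DOMAIN_NAME OS_USER_DOMAIN_NAME OS_PROJECT_NAME \
               OS_USERNAME OS_PASSWORD OS_AUTH_URL OS_IDENTITY_API_VERSION; do
        grep -q "^export ${var}=" "$rc_file" || { echo "missing: $var"; return 1; }
    done
    echo "ok"
}
# usage: check_openrc admin-openrc && . admin-openrc
```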
=== Verifying the environment ===

  su - admin
 
 
  
 
== Configuring the glance service ==

  mysql -u root -p
  CREATE DATABASE glance;
  GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' IDENTIFIED BY 'GLANCE_DBPASS';
  GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' IDENTIFIED BY 'GLANCE_DBPASS';

  su - admin
Set a password for the glance service user:

  openstack user create --domain default --password GLANCE_PASS glance
  openstack role add --project service --user glance admin
  openstack service create --name glance --description "OpenStack Image" image
  cd /etc/glance/
  mv glance-api.conf glance-api.conf_orig
  cat >glance-api.conf <<EOF
  [DEFAULT]
  [cors]
  [cors.subdomain]
  memcached_servers = controller:11211
  auth_type = password
  project_domain_name = default
  user_domain_name = default
  project_name = service
  username = glance
  [task]
  [taskflow_executor]
  EOF
  mv /etc/glance/glance-registry.conf /etc/glance/glance-registry.conf.orig
  cat > /etc/glance/glance-registry.conf <<EOF
  [DEFAULT]
  [database]
  memcached_servers = controller:11211
  auth_type = password
  project_domain_name = default
  user_domain_name = default
  project_name = service
  username = glance
  flavor = keystone
  [profiler]
  EOF
  
Populate the glance database and start the services:

  # su -s /bin/sh -c "glance-manage db_sync" glance
  # systemctl enable openstack-glance-api.service openstack-glance-registry.service
  # systemctl start openstack-glance-api.service openstack-glance-registry.service

=== Verification ===
 
  su - admin
  $ . admin-openrc
  $ wget http://download.cirros-cloud.net/0.3.4/cirros-0.3.4-x86_64-disk.img

Upload the image to glance:

  $ openstack image create "cirros" --file cirros-0.3.4-x86_64-disk.img --disk-format qcow2 --container-format bare --public

Check that the upload succeeded:

  $ '''openstack image list'''
  +--------------------------------------+--------+--------+
  | ID                                   | Name   | Status |
  +--------------------------------------+--------+--------+
  | f1008c6a-f86a-4c48-8332-2573321e4be1 | cirros | active |
  +--------------------------------------+--------+--------+
 
  
 
== Installing the compute node ==

=== Initial preparation on the controller node ===

Create the databases.
  
 
Create the nova user with a password that will be used later during configuration:

  openstack user create --domain default --password NOVA_PASS nova

Grant the role:

  openstack role add --project service --user nova admin

Create the nova service:
 
  cd /etc/nova/
 
  cd /etc/nova/
 
  mv nova.conf nova.conf.orig
 
  mv nova.conf nova.conf.orig
cat >nova.conf
+
<pre>
[DEFAULT]
+
cat >nova.conf <<EOF
log_dir = /var/log/nova
+
[DEFAULT]
state_path = /var/lib/nova
+
log_dir = /var/log/nova
connection_type = libvirt
+
state_path = /var/lib/nova
compute_driver = libvirt.LibvirtDriver
+
connection_type = libvirt
image_service = nova.image.glance.GlanceImageService
+
compute_driver = libvirt.LibvirtDriver
volume_api_class = nova.volume.cinder.API
+
image_service = nova.image.glance.GlanceImageService
auth_strategy = keystone
+
volume_api_class = nova.volume.cinder.API
network_api_class = nova.network.neutronv2.api.API
+
auth_strategy = keystone
service_neutron_metadata_proxy = True
+
network_api_class = nova.network.neutronv2.api.API
security_group_api = neutron
+
service_neutron_metadata_proxy = True
injected_network_template = /usr/share/nova/interfaces.template
+
security_group_api = neutron
enabled_apis = osapi_compute,metadata
+
injected_network_template = /usr/share/nova/interfaces.template
transport_url = rabbit://openstack:RABBIT_PASS@controller
+
web=/usr/share/spice-html5
auth_strategy = keystone
+
enabled_apis = osapi_compute,metadata
my_ip = 10.0.0.11
+
rpc_backend = rabbit
use_neutron = True
+
auth_strategy = keystone
firewall_driver = nova.virt.firewall.NoopFirewallDriver
+
my_ip = 10.0.0.11
web=/usr/share/spice-html5
+
use_neutron = True
[api_database]
+
firewall_driver = nova.virt.firewall.NoopFirewallDriver
connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova_api
+
[api_database]
[barbican]
+
connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova_api
[cache]
+
[barbican]
[cells]
+
[cache]
[cinder]
+
[cells]
[conductor]
+
[cinder]
[cors]
+
[conductor]
[cors.subdomain]
+
[cors]
[database]
+
[cors.subdomain]
connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova
+
[database]
[ephemeral_storage_encryption]
+
connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova
[glance]
+
[ephemeral_storage_encryption]
api_servers = http://controller:9292
+
[glance]
[guestfs]
+
api_servers = http://controller:9292
  [hyperv]
+
[guestfs]
[image_file_url]
+
[hyperv]
[ironic]
+
[image_file_url]
[keymgr]
+
[ironic]
[keystone_authtoken]
+
[keymgr]
signing_dir = /var/cache/nova/keystone-signing
+
[keystone_authtoken]
admin_tenant_name = %SERVICE_TENANT_NAME%
+
auth_uri = http://controller:5000
admin_user = nova
+
auth_url = http://controller:35357
admin_password = %SERVICE_PASSWORD%
+
memcached_servers = controller:11211
identity_uri = http://localhost:35357
+
auth_type = password
auth_uri = http://controller:5000
+
project_domain_name = default
auth_url = http://controller:35357
+
user_domain_name = default
memcached_servers = controller:11211
+
project_name = service
auth_type = password
+
username = nova
project_domain_name = Default
+
password = NOVA_PASS
user_domain_name = Default
+
[libvirt]
project_name = service
+
[matchmaker_redis]
username = nova
+
[metrics]
password = NOVA_PASS
+
[neutron]
[libvirt]
+
admin_username = neutron
[matchmaker_redis]
+
admin_password = %SERVICE_PASSWORD%
[metrics]
+
admin_tenant_name = %SERVICE_TENANT_NAME%
[neutron]
+
url = http://localhost:9696
url = http://controller:9696
+
auth_strategy = keystone
auth_url = http://controller:35357
+
admin_auth_url = http://localhost:35357/v2.0
auth_type = password
+
url = http://controller:9696
project_domain_name = Default
+
auth_url = http://controller:35357
user_domain_name = Default
+
auth_type = password
region_name = RegionOne
+
project_domain_name = default
project_name = service
+
user_domain_name = default
username = neutron
+
region_name = RegionOne
password = NEUTRON_PASS
+
project_name = service
service_metadata_proxy = True
+
username = neutron
metadata_proxy_shared_secret =  
+
password = NEUTRON_PASS
[osapi_v21]
+
service_metadata_proxy = True
[oslo_concurrency]
+
metadata_proxy_shared_secret = METADATA_SECRET
lock_path = /var/run/nova
+
[osapi_v21]
[oslo_messaging_amqp]
+
[oslo_concurrency]
[oslo_messaging_notifications]
+
lock_path = /var/lib/nova/tmp
[oslo_messaging_rabbit]
+
[oslo_messaging_amqp]
[oslo_middleware]
+
[oslo_messaging_notifications]
[oslo_policy]
+
[oslo_messaging_rabbit]
[rdp]
+
rabbit_host = controller
[serial_console]
+
rabbit_userid = openstack
[spice]
+
rabbit_password = RABBIT_PASS
spicehtml5proxy_host = ::
+
[oslo_middleware]
html5proxy_base_url = http://controller:6082/spice_auto.html
+
[oslo_policy]
enabled = True
+
[rdp]
keymap = en-us
+
[serial_console]
enabled = true
+
[spice]
[ssl]
+
spicehtml5proxy_host = ::
[trusted_computing]
+
html5proxy_base_url = https://10.10.3.169:6082/spice_auto.html
[upgrade_levels]
+
enabled = True
[vmware]
+
keymap = en-us
[vnc]
+
[ssl]
enabled = false
+
[trusted_computing]
[workarounds]
+
[upgrade_levels]
[xenserver]
+
[vmware]
 +
[vnc]
 +
vncserver_listen = $my_ip
 +
vncserver_proxyclient_address = $my_ip
 +
[workarounds]
 +
[xenserver]
 +
EOF
 +
</pre>
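METADATA_SECRET above is a placeholder shared with the neutron metadata agent; both sides must carry the same value. One way to generate a random one (a sketch using only coreutils, not part of the original guide):

```shell
# Produce a 32-character hex secret for metadata_proxy_shared_secret.
gen_secret() {
    head -c 16 /dev/urandom | od -An -tx1 | tr -d ' \n'
}
# substitute the result for METADATA_SECRET in nova.conf and in the
# neutron metadata agent configuration
```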
  
 
Populate the nova databases:
 
  # systemctl enable openstack-nova-api.service openstack-nova-consoleauth.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service
 
  # systemctl enable openstack-nova-api.service openstack-nova-consoleauth.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service
 
  # systemctl start openstack-nova-api.service openstack-nova-consoleauth.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service
 
  # systemctl start openstack-nova-api.service openstack-nova-consoleauth.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service
 
  
 
=== Setting up the compute node ===

Install the packages:

  apt-get update
  apt-get install openstack-nova-compute libvirt-daemon openstack-neutron-linuxbridge ebtables ipset kernel-modules-ipset-std-def
  apt-get dist-upgrade
 
  cd /etc/nova
  mv nova.conf nova.conf.orig
  cat >nova.conf <<EOF
  [DEFAULT]
  log_dir = /var/log/nova
  transport_url = rabbit://openstack:RABBIT_PASS@controller
  auth_strategy = keystone
  my_ip = '''10.0.0.31'''
  use_neutron = True
  firewall_driver = nova.virt.firewall.NoopFirewallDriver
  agent_enabled = True
  server_listen = ::
  server_proxyclient_address = '''10.0.0.31'''
  keymap = en-us
  [ssl]
  [workarounds]
  [xenserver]
  EOF
Start nova:

  # systemctl enable libvirtd.service openstack-nova-compute.service
  # systemctl start libvirtd.service openstack-nova-compute.service
  
 
=== Finishing the installation ===

  virt_type = kvm
  
=== Verifying the nova installation ===

On the controller node, run:

  '''# su - admin'''
  '''$ . admin-openrc'''
  '''$ openstack compute service list'''
  +----+------------------+-----------+----------+---------+-------+----------------------------+
  | Id | Binary           | Host      | Zone     | Status  | State | Updated At                 |
  +----+------------------+-----------+----------+---------+-------+----------------------------+
  
== Configuring the neutron networking service ==

=== Configuring the controller node ===

  mysql -u root -p
  CREATE DATABASE neutron;
  GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY 'NEUTRON_DBPASS';
  GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY 'NEUTRON_DBPASS';

  su - admin
  . admin-openrc
  openstack user create --domain default --password NEUTRON_PASS neutron
  openstack role add --project service --user neutron admin
  openstack service create --name neutron --description "OpenStack Networking" network
  openstack endpoint create --region RegionOne network public http://controller:9696
  openstack endpoint create --region RegionOne network internal http://controller:9696
  openstack endpoint create --region RegionOne network admin http://controller:9696

  cd /etc/neutron
  mv neutron.conf neutron.conf.dist
  cat >neutron.conf <<EOF
[DEFAULT]
core_plugin = ml2
service_plugins = neutron.services.l3_router.l3_router_plugin.L3RouterPlugin
state_path = /var/lib/neutron
log_dir = /var/log/neutron
rpc_backend = rabbit
notify_nova_on_port_status_changes = True
notify_nova_on_port_data_changes = True
[agent]
root_helper = sudo neutron-rootwrap /etc/neutron/rootwrap.conf
[cors]
[cors.subdomain]
[database]
connection = mysql+pymysql://neutron:NEUTRON_DBPASS@controller/neutron
[keystone_authtoken]
signing_dir = /var/cache/neutron/keystone-signing
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = NEUTRON_PASS
[matchmaker_redis]
[nova]
auth_url = http://controller:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = nova
password = NOVA_PASS
[oslo_concurrency]
lock_path = /var/lib/neutron/tmp
[oslo_messaging_amqp]
[oslo_messaging_notifications]
[oslo_messaging_rabbit]
rabbit_host = controller
rabbit_userid = openstack
rabbit_password = RABBIT_PASS
[oslo_policy]
[qos]
[quotas]
[ssl]
EOF
=== Configuring Modular Layer 2 (ML2) ===

  cd /etc/neutron/plugins/ml2/
  mv ml2_conf.ini ml2_conf.ini.ORIG
  cat > ml2_conf.ini <<EOF
[DEFAULT]
[ml2]
type_drivers = flat,vlan
tenant_network_types =
mechanism_drivers = linuxbridge
extension_drivers = port_security
[ml2_type_flat]
flat_networks = provider
[ml2_type_geneve]
[ml2_type_gre]
[ml2_type_vlan]
[ml2_type_vxlan]
[securitygroup]
enable_ipset = True
EOF

  mv linuxbridge_agent.ini linuxbridge_agent.ini.ORIG
  cat >linuxbridge_agent.ini <<EOF
[DEFAULT]
[agent]
[linux_bridge]
physical_interface_mappings = provider:ens20
[securitygroup]
enable_security_group = True
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
[vxlan]
enable_vxlan = False
EOF
=== Configuring the DHCP agent ===

  cd /etc/neutron
  mv dhcp_agent.ini dhcp_agent.ini_ORIG
  cat >dhcp_agent.ini <<EOF
[DEFAULT]
dhcp_delete_namespaces = True
interface_driver = neutron.agent.linux.interface.BridgeInterfaceDriver
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = True
[AGENT]
EOF
=== Populating the neutron database ===

  ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
  su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head"

  systemctl restart openstack-nova-api.service
  systemctl enable neutron-server.service neutron-linuxbridge-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service neutron-l3-agent.service
  systemctl start neutron-server.service neutron-linuxbridge-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service neutron-l3-agent.service
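Once the agents are up, `neutron agent-list` on the controller should show each agent alive (the table marks live agents with `:-)`). A tiny filter to count them (a hypothetical sketch, shown against canned output since it only parses the printed table):

```shell
# Count alive agents in `neutron agent-list` output.
count_alive_agents() {
    grep -c ':-)'
}
# usage on the controller: neutron agent-list | count_alive_agents
```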
== Configuring neutron on the compute node ==

  cd /etc/neutron
  mv neutron.conf neutron.conf_ORIG
  cat >neutron.conf <<EOF
[DEFAULT]
rpc_backend = rabbit
auth_strategy = keystone
[agent]
root_helper = sudo neutron-rootwrap /etc/neutron/rootwrap.conf
[cors]
[cors.subdomain]
[database]
[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = NEUTRON_PASS
[matchmaker_redis]
[nova]
[oslo_concurrency]
lock_path = /var/lib/neutron/tmp
[oslo_messaging_amqp]
[oslo_messaging_notifications]
[oslo_messaging_rabbit]
rabbit_host = controller
rabbit_userid = openstack
rabbit_password = RABBIT_PASS
[oslo_policy]
[qos]
[quotas]
[ssl]
EOF

cd /etc/neutron/plugins/ml2
mv linuxbridge_agent.ini linuxbridge_agent.ini_ORIG
cat >linuxbridge_agent.ini <<EOF
[DEFAULT]
[agent]
[linux_bridge]
physical_interface_mappings = provider:ens20
[securitygroup]
enable_security_group = True
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
[vxlan]
enable_vxlan = False
EOF

systemctl restart openstack-nova-compute.service
systemctl enable neutron-linuxbridge-agent.service
systemctl start neutron-linuxbridge-agent.service

=== Verifying neutron ===

On the controller node, run:

su - admin
. admin-openrc
neutron ext-list
<pre>
+---------------------------+-----------------------------------------------+
| alias                     | name                                          |
+---------------------------+-----------------------------------------------+
| default-subnetpools       | Default Subnetpools                           |
| network-ip-availability   | Network IP Availability                       |
| network_availability_zone | Network Availability Zone                     |
| auto-allocated-topology   | Auto Allocated Topology Services              |
| ext-gw-mode               | Neutron L3 Configurable external gateway mode |
| binding                   | Port Binding                                  |
| agent                     | agent                                         |
| subnet_allocation         | Subnet Allocation                             |
| l3_agent_scheduler        | L3 Agent Scheduler                            |
| tag                       | Tag support                                   |
| external-net              | Neutron external network                      |
| net-mtu                   | Network MTU                                   |
| availability_zone         | Availability Zone                             |
| quotas                    | Quota management support                      |
| l3-ha                     | HA Router extension                           |
| flavors                   | Neutron Service Flavors                       |
| provider                  | Provider Network                              |
| multi-provider            | Multi Provider Network                        |
| address-scope             | Address scope                                 |
| extraroute                | Neutron Extra Route                           |
| timestamp_core            | Time Stamp Fields addition for core resources |
| router                    | Neutron L3 Router                             |
| extra_dhcp_opt            | Neutron Extra DHCP opts                       |
| dns-integration           | DNS Integration                               |
| security-group            | security-group                                |
| dhcp_agent_scheduler      | DHCP Agent Scheduler                          |
| router_availability_zone  | Router Availability Zone                      |
| rbac-policies             | RBAC Policies                                 |
| standard-attr-description | standard-attr-description                     |
| port-security             | Port Security                                 |
| allowed-address-pairs     | Allowed Address Pairs                         |
| dvr                       | Distributed Virtual Router                    |
+---------------------------+-----------------------------------------------+
</pre>
<pre>
neutron agent-list
+--------------------------------------+--------------------+------------+-------+----------------+---------------------------+
| id                                   | agent_type         | host       | alive | admin_state_up | binary                    |
+--------------------------------------+--------------------+------------+-------+----------------+---------------------------+
| 08905043-5010-4b87-bba5-aedb1956e27a | Linux bridge agent | compute1   | :-)   | True           | neutron-linuxbridge-agent |
| 27eee952-a748-467b-bf71-941e89846a92 | Linux bridge agent | controller | :-)   | True           | neutron-linuxbridge-agent |
| dd3644c9-1a3a-435a-9282-eb306b4b0391 | DHCP agent         | controller | :-)   | True           | neutron-dhcp-agent        |
| f49a4b81-afd6-4b3d-b923-66c8f0517099 | Metadata agent     | controller | :-)   | True           | neutron-metadata-agent    |
+--------------------------------------+--------------------+------------+-------+----------------+---------------------------+
</pre>

== Configuring the web interface ==

Enable the spice console (the required settings are already present in the configs):
systemctl enable openstack-nova-spicehtml5proxy.service
systemctl start openstack-nova-spicehtml5proxy.service

cd /etc/openstack-dashboard
mv local_settings local_settings_ORIG

cat >local_settings <<EOF
# -*- coding: utf-8 -*-
import os
from django.utils.translation import ugettext_lazy as _
from openstack_dashboard import exceptions
from openstack_dashboard.settings import HORIZON_CONFIG
DEBUG = False
TEMPLATE_DEBUG = DEBUG
OPENSTACK_HOST = "controller"
ALLOWED_HOSTS = ['*', ]
SESSION_ENGINE = 'django.contrib.sessions.backends.cache'
CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
        'LOCATION': 'controller:11211',
    }
}
OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST
OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True
OPENSTACK_API_VERSIONS = {
    "identity": 3,
    "image": 2,
    "volume": 2,
}
OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = "default"
OPENSTACK_KEYSTONE_DEFAULT_ROLE = "user"
OPENSTACK_NEUTRON_NETWORK = {
    'enable_router': False,
    'enable_quotas': False,
    'enable_distributed_router': False,
    'enable_ha_router': False,
    'enable_lb': False,
    'enable_firewall': False,
    'enable_vpn': False,
    'enable_fip_topology_check': False,
}
TIME_ZONE = "Europe/Moscow"
WEBROOT = '/dashboard/'
LOCAL_PATH = '/tmp'
SECRET_KEY='da8b52fb799a5319e747'
EMAIL_BACKEND = 'django.core.mail.backends.console.EmailBackend'
OPENSTACK_KEYSTONE_BACKEND = {
    'name': 'native',
    'can_edit_user': True,
    'can_edit_group': True,
    'can_edit_project': True,
    'can_edit_domain': True,
    'can_edit_role': True,
}
OPENSTACK_HYPERVISOR_FEATURES = {
    'can_set_mount_point': False,
    'can_set_password': False,
    'requires_keypair': False,
}
OPENSTACK_CINDER_FEATURES = {
    'enable_backup': False,
}
OPENSTACK_HEAT_STACK = {
    'enable_user_pass': True,
}
IMAGE_CUSTOM_PROPERTY_TITLES = {
    "architecture": _("Architecture"),
    "image_state": _("Euca2ools state"),
    "project_id": _("Project ID"),
    "image_type": _("Image Type"),
}
IMAGE_RESERVED_CUSTOM_PROPERTIES = []
API_RESULT_LIMIT = 1000
API_RESULT_PAGE_SIZE = 20
SWIFT_FILE_TRANSFER_CHUNK_SIZE = 512 * 1024
DROPDOWN_MAX_ITEMS = 30
POLICY_FILES_PATH = '/etc/openstack-dashboard'
LOGGING = {
    'version': 1,
    # When set to True this will disable all logging except
    # for loggers specified in this configuration dictionary. Note that
    # if nothing is specified here and disable_existing_loggers is True,
    # django.db.backends will still log unless it is disabled explicitly.
    'disable_existing_loggers': False,
    'handlers': {
        'null': {
            'level': 'DEBUG',
            'class': 'logging.NullHandler',
        },
        'console': {
            # Set the level to "DEBUG" for verbose output logging.
            'level': 'INFO',
            'class': 'logging.StreamHandler',
        },
    },
    'loggers': {
        # Logging from django.db.backends is VERY verbose, send to null
        # by default.
        'django.db.backends': {
            'handlers': ['null'],
            'propagate': False,
        },
        'requests': {
            'handlers': ['null'],
            'propagate': False,
        },
        'horizon': {
            'handlers': ['console'],
            'level': 'DEBUG',
            'propagate': False,
        },
        'openstack_dashboard': {
            'handlers': ['console'],
            'level': 'DEBUG',
            'propagate': False,
        },
        'novaclient': {
            'handlers': ['console'],
            'level': 'DEBUG',
            'propagate': False,
        },
        'cinderclient': {
            'handlers': ['console'],
            'level': 'DEBUG',
            'propagate': False,
        },
        'keystoneclient': {
            'handlers': ['console'],
            'level': 'DEBUG',
            'propagate': False,
        },
        'glanceclient': {
            'handlers': ['console'],
            'level': 'DEBUG',
            'propagate': False,
        },
        'neutronclient': {
            'handlers': ['console'],
            'level': 'DEBUG',
            'propagate': False,
        },
        'heatclient': {
            'handlers': ['console'],
            'level': 'DEBUG',
            'propagate': False,
        },
        'ceilometerclient': {
            'handlers': ['console'],
            'level': 'DEBUG',
            'propagate': False,
        },
        'swiftclient': {
            'handlers': ['console'],
            'level': 'DEBUG',
            'propagate': False,
        },
        'openstack_auth': {
            'handlers': ['console'],
            'level': 'DEBUG',
            'propagate': False,
        },
        'nose.plugins.manager': {
            'handlers': ['console'],
            'level': 'DEBUG',
            'propagate': False,
        },
        'django': {
            'handlers': ['console'],
            'level': 'DEBUG',
            'propagate': False,
        },
        'iso8601': {
            'handlers': ['null'],
            'propagate': False,
        },
        'scss': {
            'handlers': ['null'],
            'propagate': False,
        },
    },
}
SECURITY_GROUP_RULES = {
    'all_tcp': {
        'name': _('All TCP'),
        'ip_protocol': 'tcp',
        'from_port': '1',
        'to_port': '65535',
    },
    'all_udp': {
        'name': _('All UDP'),
        'ip_protocol': 'udp',
        'from_port': '1',
        'to_port': '65535',
    },
    'all_icmp': {
        'name': _('All ICMP'),
        'ip_protocol': 'icmp',
        'from_port': '-1',
        'to_port': '-1',
    },
    'ssh': {
        'name': 'SSH',
        'ip_protocol': 'tcp',
        'from_port': '22',
        'to_port': '22',
    },
    'smtp': {
        'name': 'SMTP',
        'ip_protocol': 'tcp',
        'from_port': '25',
        'to_port': '25',
    },
    'dns': {
        'name': 'DNS',
        'ip_protocol': 'tcp',
        'from_port': '53',
        'to_port': '53',
    },
    'http': {
        'name': 'HTTP',
        'ip_protocol': 'tcp',
        'from_port': '80',
        'to_port': '80',
    },
    'pop3': {
        'name': 'POP3',
        'ip_protocol': 'tcp',
        'from_port': '110',
        'to_port': '110',
    },
    'imap': {
        'name': 'IMAP',
        'ip_protocol': 'tcp',
        'from_port': '143',
        'to_port': '143',
    },
    'ldap': {
        'name': 'LDAP',
        'ip_protocol': 'tcp',
        'from_port': '389',
        'to_port': '389',
    },
    'https': {
        'name': 'HTTPS',
        'ip_protocol': 'tcp',
        'from_port': '443',
        'to_port': '443',
    },
    'smtps': {
        'name': 'SMTPS',
        'ip_protocol': 'tcp',
        'from_port': '465',
        'to_port': '465',
    },
    'imaps': {
        'name': 'IMAPS',
        'ip_protocol': 'tcp',
        'from_port': '993',
        'to_port': '993',
    },
    'pop3s': {
        'name': 'POP3S',
        'ip_protocol': 'tcp',
        'from_port': '995',
        'to_port': '995',
    },
    'ms_sql': {
        'name': 'MS SQL',
        'ip_protocol': 'tcp',
        'from_port': '1433',
        'to_port': '1433',
    },
    'mysql': {
        'name': 'MYSQL',
        'ip_protocol': 'tcp',
        'from_port': '3306',
        'to_port': '3306',
    },
    'rdp': {
        'name': 'RDP',
        'ip_protocol': 'tcp',
        'from_port': '3389',
        'to_port': '3389',
    },
}
REST_API_REQUIRED_SETTINGS = ['OPENSTACK_HYPERVISOR_FEATURES',
                              'LAUNCH_INSTANCE_DEFAULTS']
EOF
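The SECRET_KEY in the file above is only an example value; each deployment should use its own random string. A possible way to generate one (a sketch; any generator of similar randomness works just as well):

```shell
# Generate a random 20-character hex SECRET_KEY for local_settings
# instead of reusing the example value from this guide.
key=$(tr -dc 'a-f0-9' </dev/urandom | head -c 20)
echo "SECRET_KEY='$key'"
```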

Restart the applications:
a2ensite openstack-dashboard
systemctl restart httpd2.service memcached.service

== Launching a virtual machine ==

=== Creating a network ===
<pre>
su - admin
. admin-openrc
neutron net-create --shared --provider:physical_network provider --provider:network_type flat provider
Created a new network:
+---------------------------+--------------------------------------+
| Field                     | Value                                |
+---------------------------+--------------------------------------+
| admin_state_up            | True                                 |
| id                        | 0e62efcd-8cee-46c7-b163-d8df05c3c5ad |
| mtu                       | 1500                                 |
| name                      | provider                             |
| port_security_enabled     | True                                 |
| provider:network_type     | flat                                 |
| provider:physical_network | provider                             |
| provider:segmentation_id  |                                      |
| router:external           | False                                |
| shared                    | True                                 |
| status                    | ACTIVE                               |
| subnets                   |                                      |
| tenant_id                 | d84313397390425c8ed50b2f6e18d092     |
+---------------------------+--------------------------------------+
</pre>

Replace the values shown in bold with your own address pool, gateway, and DNS server.
 neutron subnet-create --name provider --allocation-pool '''start=203.0.113.101,end=203.0.113.250''' --dns-nameserver '''8.8.4.4''' --gateway '''203.0.113.1''' provider '''203.0.113.0/24'''
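The allocation pool above (the example range from this guide) determines how many addresses instances can receive; a quick sketch of the arithmetic:

```shell
# Number of addresses in the 203.0.113.101-203.0.113.250 pool:
# both endpoints are inclusive.
start=101; end=250
echo $((end - start + 1))   # prints 150
```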

su - admin
. admin-openrc

Create a new flavor for the test image.
 openstack flavor create --id 0 --vcpus 1 --ram 64 --disk 1 m1.nano

Many cloud images support key-based authentication, so create a key pair and import it.

 . demo-openrc
 ssh-keygen -q -N ""
 openstack keypair create --public-key ~/.ssh/id_rsa.pub mykey
Verify the import:
 openstack keypair list
+-------+-------------------------------------------------+
| Name  | Fingerprint                                     |
+-------+-------------------------------------------------+
| mykey | be:68:58:f8:0a:6e:1e:c7:36:1c:8c:ff:c9:30:3f:60 |
+-------+-------------------------------------------------+

Create security group rules:

 openstack security group rule create --proto icmp default
 openstack security group rule create --proto tcp --dst-port 22 default

Verify that the '''cirros''' image, the '''m1.nano''' flavor, and the '''default''' security group exist.
 openstack flavor list
 openstack image list
 openstack security group list

Take the ID of the provider network from here:
<pre>
openstack network list
+--------------------------------------+--------------+--------------------------------------+
| ID                                   | Name         | Subnets                              |
+--------------------------------------+--------------+--------------------------------------+
| 4716ddfe-6e60-40e7-b2a8-42e57bf3c31c | selfservice  | 2112d5eb-f9d6-45fd-906e-7cabd38b7c7c |
| b5b6993c-ddf9-40e7-91d0-86806a42edb8 | provider     | 310911f6-acf0-4a47-824e-3032916582ff |
+--------------------------------------+--------------+--------------------------------------+
</pre>

Create the virtual machine (using the '''m1.nano''' flavor created above):
<pre>
openstack server create --flavor m1.nano --image cirros --nic net-id=PROVIDER_NET_ID --security-group default --key-name mykey provider-instance
+--------------------------------------+-----------------------------------------------+
| Property                             | Value                                         |
+--------------------------------------+-----------------------------------------------+
| OS-DCF:diskConfig                    | MANUAL                                        |
| OS-EXT-AZ:availability_zone          | nova                                          |
| OS-EXT-STS:power_state               | 0                                             |
| OS-EXT-STS:task_state                | scheduling                                    |
| OS-EXT-STS:vm_state                  | building                                      |
| OS-SRV-USG:launched_at               | -                                             |
| OS-SRV-USG:terminated_at             | -                                             |
| accessIPv4                           |                                               |
| accessIPv6                           |                                               |
| adminPass                            | hdF4LMQqC5PB                                  |
| config_drive                         |                                               |
| created                              | 2015-09-17T21:58:18Z                          |
| flavor                               | m1.nano (0)                                   |
| hostId                               |                                               |
| id                                   | 181c52ba-aebc-4c32-a97d-2e8e82e4eaaf          |
| image                                | cirros (38047887-61a7-41ea-9b49-27987d5e8bb9) |
| key_name                             | mykey                                         |
| metadata                             | {}                                            |
| name                                 | provider-instance                             |
| os-extended-volumes:volumes_attached | []                                            |
| progress                             | 0                                             |
| security_groups                      | default                                       |
| status                               | BUILD                                         |
| tenant_id                            | f5b2ccaa75ac413591f12fcaa096aa5c              |
| updated                              | 2015-09-17T21:58:18Z                          |
| user_id                              | 684286a9079845359882afc3aa5011fb              |
+--------------------------------------+-----------------------------------------------+
</pre>

Check the status of the virtual machine:
<pre>
openstack server list
+--------------------------------------+-------------------+--------+------------------------+------------+
| ID                                   | Name              | Status | Networks               | Image Name |
+--------------------------------------+-------------------+--------+------------------------+------------+
| 181c52ba-aebc-4c32-a97d-2e8e82e4eaaf | provider-instance | ACTIVE | provider=203.0.113.103 | cirros     |
+--------------------------------------+-------------------+--------+------------------------+------------+
</pre>

Check that the gateway is reachable:
<pre>
ping -c 4 203.0.113.1

PING 203.0.113.1 (203.0.113.1) 56(84) bytes of data.
64 bytes from 203.0.113.1: icmp_req=1 ttl=64 time=0.357 ms
64 bytes from 203.0.113.1: icmp_req=2 ttl=64 time=0.473 ms
64 bytes from 203.0.113.1: icmp_req=3 ttl=64 time=0.504 ms
64 bytes from 203.0.113.1: icmp_req=4 ttl=64 time=0.470 ms

--- 203.0.113.1 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 2998ms
rtt min/avg/max/mdev = 0.357/0.451/0.504/0.055 ms
</pre>
Check that the virtual machine is reachable:
<pre>
ping -c 4 203.0.113.103

PING 203.0.113.103 (203.0.113.103) 56(84) bytes of data.
64 bytes from 203.0.113.103: icmp_req=1 ttl=63 time=3.18 ms
64 bytes from 203.0.113.103: icmp_req=2 ttl=63 time=0.981 ms
64 bytes from 203.0.113.103: icmp_req=3 ttl=63 time=1.06 ms
64 bytes from 203.0.113.103: icmp_req=4 ttl=63 time=0.929 ms

--- 203.0.113.103 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3002ms
rtt min/avg/max/mdev = 0.929/1.539/3.183/0.951 ms
</pre>
Log in to the virtual machine:

 ssh cirros@203.0.113.103
  
 
{{Category navigation|title=OpenStack|category=OpenStack|sortkey={{SUBPAGENAME}}}}
 

Current version as of 07:34, 29 June 2020

This page is under development. It is not finished yet; the information presented here may be incomplete or incorrect.


This guide is based on the Red Hat installation instructions: https://docs.openstack.org/newton/install-guide-rdo/

The guide is a work in progress.

Minimum hardware requirements

  • One processor core;
  • 4 GB of RAM or more;
  • 20 GB of disk.
  • On a machine with 2 GB of RAM I ran into memory exhaustion and crashing processes.

Example installation with the networking module (neutron) on the controller node

Network interfaces:


  • ens19 - the openstack management network interface (10.0.0.0/24)
  • ens20 - the "provider interface"; this guide uses the address range 203.0.113.101-203.0.113.250 in the 203.0.113.0/24 network, gateway 203.0.113.1

Installing the controller node

On the node, add the following to /etc/hosts (do not remove the 127.0.0.1 entry):

# Controller node
10.0.0.11 controller
# Compute node
10.0.0.31 compute1

Preparing for installation

# apt-get update -y
# apt-get dist-upgrade

Remove firewalld:

apt-get remove firewalld

Install the software:

# apt-get install python-module-pymysql openstack-nova chrony python-module-memcached python3-module-memcached python-module-pymemcache python3-module-pymemcache mariadb-server python-module-MySQLdb python-module-openstackclient openstack-glance python-module-glance python-module-glance_store python-module-glanceclient openstack-nova-api openstack-nova-cells openstack-nova-cert openstack-nova-conductor openstack-nova-console openstack-nova-scheduler rabbitmq-server openstack-keystone apache2-mod_wsgi memcached openstack-neutron openstack-neutron-ml2 openstack-neutron-linuxbridge openstack-neutron-l3-agent openstack-neutron-dhcp-agent openstack-neutron-server openstack-neutron-metadata-agent openstack-dashboard spice-html5 openstack-nova-spicehtml5proxy mongo-server-mongod mongo-tools python-module-pymongo

Configuring time synchronization

Add to /etc/chrony.conf:

allow 10.0.0.0/24

If you have your own NTP server, replace "pool.ntp.org" with it:

pool pool.ntp.org iburst
# systemctl enable chronyd.service
 Synchronizing state of chronyd.service with SysV service script with /lib/systemd/systemd-sysv-install.
 Executing: /lib/systemd/systemd-sysv-install enable chronyd
# systemctl start chronyd.service


Configuring the SQL server

Comment out the "skip-networking" line in /etc/my.cnf.d/server.cnf
# cat > /etc/my.cnf.d/openstack.cnf <<EOF
[mysqld]
bind-address = 10.0.0.11
default-storage-engine = innodb
innodb_file_per_table
max_connections = 4096
collation-server = utf8_general_ci
character-set-server = utf8
EOF
# systemctl enable mariadb
# systemctl start mariadb

Set a password for the SQL root user and remove the test tables.

  • The default password is empty "" (after entering the new password, answer yes to all questions)
# mysql_secure_installation

Configuring the rabbitmq message broker

# systemctl enable rabbitmq.service
# systemctl start rabbitmq.service

Add a user:

# rabbitmqctl add_user openstack RABBIT_PASS
# rabbitmqctl set_permissions openstack ".*" ".*" ".*"
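The three ".*" arguments are regular expressions for the configure, write, and read permissions, matched against resource names. A sketch of the matching semantics (the resource name here is hypothetical):

```shell
# ".*" matches any resource name, granting full access; a narrower
# pattern such as "^nova.*" would limit access to nova-prefixed names.
pattern='.*'
name='nova_exchange'   # hypothetical resource name
echo "$name" | grep -Eq "^(${pattern})$" && echo allowed
```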

Configuring memcached

In /etc/sysconfig/memcached, replace the line LISTEN="127.0.0.1" with

LISTEN="10.0.0.11"


# systemctl enable memcached
# systemctl start memcached

Configuring Keystone

Create the database and a user with a password.

# mysql -u root -p
CREATE DATABASE keystone;
GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' IDENTIFIED BY 'KEYSTONE_DBPASS';
GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' IDENTIFIED BY 'KEYSTONE_DBPASS';

Save the original configuration file.

# mv /etc/keystone/keystone.conf /etc/keystone/keystone.conf.orig


# cat > /etc/keystone/keystone.conf <<EOF
[DEFAULT]
admin_token = ADMIN_TOKEN
[assignment]
[auth]
[cache]
[catalog]
[cors]
[cors.subdomain]
[credential]
[database]
connection = mysql+pymysql://keystone:KEYSTONE_DBPASS@controller/keystone
[domain_config]
[endpoint_filter]
[endpoint_policy]
[eventlet_server]
[eventlet_server_ssl]
[federation]
[fernet_tokens]
[identity]
[identity_mapping]
[kvs]
[ldap]
[matchmaker_redis]
[memcache]
[oauth1]
[os_inherit]
[oslo_messaging_amqp]
[oslo_messaging_notifications]
[oslo_messaging_rabbit]
[oslo_middleware]
[oslo_policy]
[paste_deploy]
[policy]
[resource]
[revoke]
[role]
[saml]
[shadow_users]
[signing]
[ssl]
[token]
provider = fernet
[tokenless_auth]
[trust]
EOF


Populate the keystone database:

# su -s /bin/sh -c "keystone-manage db_sync" keystone
# keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone

Configuring apache2 for keystone

In /etc/httpd2/conf/sites-available/openstack-keystone.conf, remove all lines containing IfVersion:

<IfVersion >= 2.4>
</IfVersion>

Enable the keystone site configuration:

# a2ensite openstack-keystone

Add ServerName to the configuration:

echo ServerName controller >/etc/httpd2/conf/sites-enabled/servername.conf 
systemctl enable httpd2.service
systemctl start httpd2.service


Creating domains, users, and roles

For the following steps it is recommended to create a dedicated user.

# adduser admin
# su - admin
cat > auth <<EOF
export OS_TOKEN=ADMIN_TOKEN
export OS_URL=http://controller:35357/v3
export OS_IDENTITY_API_VERSION=3
EOF

Creating the demo user

# su - admin
. auth

openstack service create --name keystone --description "OpenStack Identity" identity

The password for the admin user is ADMIN_PASS


openstack endpoint create --region RegionOne identity public http://controller:5000/v3
openstack endpoint create --region RegionOne identity internal http://controller:5000/v3
openstack endpoint create --region RegionOne identity admin http://controller:35357/v3
openstack domain create --description "Default Domain" default
openstack project create --domain default --description "Admin Project" admin
openstack user create --domain default  --password ADMIN_PASS admin
openstack role create admin
openstack role add --project admin --user admin admin
openstack project create --domain default --description "Service Project" service
openstack project create --domain default --description "Demo Project" demo
openstack user create --domain default --password demo demo
openstack role create user
openstack role add --project demo --user demo user


Setting up the environment

# systemctl restart httpd2.service


# su - admin
rm auth
cat > admin-openrc  <<EOF
export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=ADMIN_PASS
export OS_AUTH_URL=http://controller:35357/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
EOF
cat > demo-openrc  <<EOF
export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=demo
export OS_USERNAME=demo
export OS_PASSWORD=demo
export OS_AUTH_URL=http://controller:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
EOF
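A quick way to confirm an openrc file exports what you expect is to source it in a subshell, so the real environment stays untouched. A sketch using a minimal temporary fragment:

```shell
# Source a minimal openrc-style fragment in a subshell and print
# the variables it sets (demonstrated on a temporary copy).
tmp=$(mktemp)
cat > "$tmp" <<'EOF'
export OS_PROJECT_NAME=demo
export OS_USERNAME=demo
EOF
( . "$tmp"; echo "$OS_PROJECT_NAME/$OS_USERNAME" )   # prints demo/demo
rm -f "$tmp"
```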

Verifying the environment

su - admin
. admin-openrc
openstack token issue

The output should look something like this:

+------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Field      | Value                                                                                                                                                                   |
+------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| expires    | 2017-05-16T15:48:13.101936Z                                                                                                                                             |
| id         | gAAAAABZGxEtWlJ0eEGve9Y1VvIRk-wQtZN128A92YPFb5iuTJuo2O7G6Gd9IYdnyPZP6xAXDmT2VzIVbuhvOKQi9bItygi2fWRTw7byAZZdKIvR3mAHpsZyLPpS61hM2ydQLsf6g57xhMKy5y1Fw4Z3uXPabK27dZi1aTslIQZB4RA4Q9WZYWM |
| project_id | d22531fa71e849078c44bb1f00117d87                                                                                                                                        |
| user_id    | 7be0608abb9641c5bd8d9f7a3bf519cb                                                                                                                                        |
+------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+

Configuring the glance service

 mysql -u root -p
CREATE DATABASE glance;
GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' IDENTIFIED BY 'GLANCE_DBPASS';
GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' IDENTIFIED BY 'GLANCE_DBPASS';

su - admin
. admin-openrc

Set a password for the glance service:

openstack user create --domain default --password GLANCE_PASS glance
openstack role add --project service --user glance admin
openstack service create --name glance --description "OpenStack Image" image
openstack endpoint create --region RegionOne image public http://controller:9292
openstack endpoint create --region RegionOne image internal http://controller:9292
openstack endpoint create --region RegionOne image admin http://controller:9292
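The three endpoint-create calls differ only in the interface name, so they can be generated in a loop. In this sketch `echo` is prepended so the commands are only printed; drop it to actually run them:

```shell
# Print the three image-service endpoint commands (drop `echo` to execute them).
for iface in public internal admin; do
    echo openstack endpoint create --region RegionOne image "$iface" http://controller:9292
done
```

The same pattern applies to the nova and neutron endpoints later in this guide.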

Configure glance-api:

cd /etc/glance/
mv glance-api.conf glance-api.conf_orig
cat >glance-api.conf <<EOF
[DEFAULT]
[cors]
[cors.subdomain]
[database]
connection = mysql+pymysql://glance:GLANCE_DBPASS@controller/glance
[glance_store]
stores = file,http
default_store = file
filesystem_store_datadir = /var/lib/glance/images/
[image_format]
[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = glance
password = GLANCE_PASS
[matchmaker_redis]
[oslo_concurrency]
[oslo_messaging_amqp]
[oslo_messaging_notifications]
[oslo_messaging_rabbit]
[oslo_policy]
[paste_deploy]
flavor = keystone
[profiler]
[store_type_location_strategy]
[task]
[taskflow_executor]
EOF


Similarly, configure glance-registry:

mv /etc/glance/glance-registry.conf /etc/glance/glance-registry.conf.orig
cat > /etc/glance/glance-registry.conf <<EOF
[DEFAULT]
[database]
connection = mysql+pymysql://glance:GLANCE_DBPASS@controller/glance
[glance_store]
[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = glance
password = GLANCE_PASS
[matchmaker_redis]
[oslo_messaging_amqp]
[oslo_messaging_notifications]
[oslo_messaging_rabbit]
[oslo_policy]
[paste_deploy]
flavor = keystone
[profiler]
EOF
Populate the image service database, then enable and start the glance services:

# su -s /bin/sh -c "glance-manage db_sync" glance
systemctl enable openstack-glance-api.service openstack-glance-registry.service
systemctl start openstack-glance-api.service openstack-glance-registry.service 

Verification

su - admin
$ . admin-openrc
$ wget http://download.cirros-cloud.net/0.3.4/cirros-0.3.4-x86_64-disk.img 

Upload the image to glance.

$ openstack image create "cirros" --file cirros-0.3.4-x86_64-disk.img --disk-format qcow2 --container-format bare --public 
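Glance does not verify that the uploaded file really matches `--disk-format qcow2`; qcow2 files begin with the magic bytes `QFI\xfb`, which is easy to check beforehand. The `is_qcow2` helper name is ours; the demo runs on a scratch file rather than the real image:

```shell
# is_qcow2 FILE: succeeds if FILE starts with the qcow2 magic "QFI\xfb".
is_qcow2() {
    [ "$(head -c 3 "$1")" = "QFI" ]
}

# Demo on a scratch file that carries the qcow2 magic:
demo=$(mktemp)
printf 'QFI\373rest-of-header' > "$demo"
if is_qcow2 "$demo"; then echo "qcow2 image"; fi   # prints "qcow2 image"
```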

Verify that the upload succeeded:

$ openstack image list
+--------------------------------------+--------+--------+
| ID                                   | Name   | Status |
+--------------------------------------+--------+--------+
| f1008c6a-f86a-4c48-8332-2573321e4be1 | cirros | active |
+--------------------------------------+--------+--------+

Installing the compute service (nova)

Initial preparation of the controller node

Create the databases.

mysql -u root -p
CREATE DATABASE nova_api;
CREATE DATABASE nova;
GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' IDENTIFIED BY 'NOVA_DBPASS';
GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%'   IDENTIFIED BY 'NOVA_DBPASS';
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' IDENTIFIED BY 'NOVA_DBPASS';
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY 'NOVA_DBPASS';


Create the nova user with the password that will later be used in the configuration.

openstack user create --domain default --password NOVA_PASS nova

Add the admin role:

openstack role add --project service --user nova admin

Create the nova service:

openstack service create --name nova  --description "OpenStack Compute" compute

Create the API endpoints:

openstack endpoint create --region RegionOne compute internal http://controller:8774/v2.1/%\(tenant_id\)s
openstack endpoint create --region RegionOne compute public http://controller:8774/v2.1/%\(tenant_id\)s
openstack endpoint create --region RegionOne compute admin http://controller:8774/v2.1/%\(tenant_id\)s

Configure nova:

cd /etc/nova/
mv nova.conf nova.conf.orig
cat >nova.conf <<EOF
[DEFAULT]
log_dir = /var/log/nova
state_path = /var/lib/nova
connection_type = libvirt
compute_driver = libvirt.LibvirtDriver
image_service = nova.image.glance.GlanceImageService
volume_api_class = nova.volume.cinder.API
auth_strategy = keystone
network_api_class = nova.network.neutronv2.api.API
service_neutron_metadata_proxy = True
security_group_api = neutron
injected_network_template = /usr/share/nova/interfaces.template
web=/usr/share/spice-html5
enabled_apis = osapi_compute,metadata
rpc_backend = rabbit
my_ip = 10.0.0.11
use_neutron = True
firewall_driver = nova.virt.firewall.NoopFirewallDriver
[api_database]
connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova_api
[barbican]
[cache]
[cells]
[cinder]
[conductor]
[cors]
[cors.subdomain]
[database]
connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova
[ephemeral_storage_encryption]
[glance]
api_servers = http://controller:9292
[guestfs]
[hyperv]
[image_file_url]
[ironic]
[keymgr]
[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = NOVA_PASS
[libvirt]
[matchmaker_redis]
[metrics]
[neutron]
url = http://controller:9696
auth_url = http://controller:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = NEUTRON_PASS
service_metadata_proxy = True
metadata_proxy_shared_secret = METADATA_SECRET
[osapi_v21]
[oslo_concurrency]
lock_path = /var/lib/nova/tmp
[oslo_messaging_amqp]
[oslo_messaging_notifications]
[oslo_messaging_rabbit]
rabbit_host = controller
rabbit_userid = openstack
rabbit_password = RABBIT_PASS
[oslo_middleware]
[oslo_policy]
[rdp]
[serial_console]
[spice]
spicehtml5proxy_host = ::
html5proxy_base_url = http://controller:6082/spice_auto.html
enabled = True
keymap = en-us
[ssl]
[trusted_computing]
[upgrade_levels]
[vmware]
[vnc]
vncserver_listen = $my_ip
vncserver_proxyclient_address = $my_ip
[workarounds]
[xenserver]
EOF
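Configs written as long heredocs make it easy to accidentally define the same option twice in one section, in which case the last value silently wins. A small awk helper (our own sketch, not part of OpenStack) flags such duplicates; the demo runs on a scratch file:

```shell
# check_dup_keys FILE: report INI keys that appear more than once in a section.
check_dup_keys() {
    awk -F' *= *' '
        /^\[/          { s = $0; next }                       # remember current [section]
        /^[a-zA-Z].*=/ { if (++n[s ":" $1] == 2) print "duplicate:", s, $1 }
    ' "$1"
}

# Demo: a section with "url" defined twice.
demo=$(mktemp)
cat > "$demo" <<'EOF'
[neutron]
url = http://localhost:9696
url = http://controller:9696
EOF
check_dup_keys "$demo"   # prints: duplicate: [neutron] url
```

Run it against each generated file under /etc/nova and /etc/neutron before starting the services.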


Populate the nova databases:

 su -s /bin/sh -c "nova-manage api_db sync" nova
 su -s /bin/sh -c "nova-manage db sync" nova

Start the nova services:

# systemctl enable openstack-nova-api.service openstack-nova-consoleauth.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service
# systemctl start openstack-nova-api.service openstack-nova-consoleauth.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service

Installing the compute node

Install the packages:

apt-get update
apt-get install openstack-nova-compute libvirt-daemon openstack-neutron-linuxbridge ebtables ipset kernel-modules-ipset-std-def
apt-get dist-upgrade 

Replace the IP address 10.0.0.31 below with the IP of your own compute node.

cd /etc/nova
mv nova.conf nova.conf.orig
cat >nova.conf <<EOF
[DEFAULT]
log_dir = /var/log/nova
state_path = /var/lib/nova
connection_type = libvirt
compute_driver = libvirt.LibvirtDriver
image_service = nova.image.glance.GlanceImageService
volume_api_class = nova.volume.cinder.API
auth_strategy = keystone
network_api_class = nova.network.neutronv2.api.API
service_neutron_metadata_proxy = True
security_group_api = neutron
injected_network_template = /usr/share/nova/interfaces.template
enabled_apis = osapi_compute,metadata
transport_url = rabbit://openstack:RABBIT_PASS@controller
auth_strategy = keystone
my_ip = 10.0.0.31
use_neutron = True
firewall_driver = nova.virt.firewall.NoopFirewallDriver
[api_database]
[barbican]
[cache]
[cells]
[cinder]
[conductor]
[cors]
[cors.subdomain]
[database]
[ephemeral_storage_encryption]
[glance]
api_servers = http://controller:9292
[guestfs]
[hyperv]
[image_file_url]
[ironic]
[keymgr]
[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = nova
password = NOVA_PASS
[libvirt]
virt_type = qemu
[matchmaker_redis]
[metrics]
[neutron]
url = http://controller:9696
auth_url = http://controller:35357
auth_type = password
project_domain_name = Default
user_domain_name = Default
region_name = RegionOne
project_name = service
username = neutron
password = NEUTRON_PASS
[osapi_v21]
[oslo_concurrency]
lock_path = /var/run/nova
[oslo_messaging_amqp]
[oslo_messaging_notifications]
[oslo_messaging_rabbit]
[oslo_middleware]
[oslo_policy]
[rdp]
[serial_console]
[spice]
spicehtml5proxy_host = ::
html5proxy_base_url = http://controller:6082/spice_auto.html
enabled = True
agent_enabled = True
server_listen = ::
server_proxyclient_address = 10.0.0.31
keymap = en-us
[ssl]
[trusted_computing]
[upgrade_levels]
[vmware]
[vnc]
enabled = false
[workarounds]
[xenserver]
EOF

Start nova:

# systemctl enable libvirtd.service openstack-nova-compute.service 
# systemctl start libvirtd.service openstack-nova-compute.service

Finishing the installation

Check for hardware virtualization support.

egrep -c '(vmx|svm)' /proc/cpuinfo

If the output is not 0, change the following line in /etc/nova/nova.conf

virt_type = qemu

to

virt_type = kvm

and restart the openstack-nova-compute service.
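The check and the edit can be combined into one helper (a sketch of ours; point it at the real /etc/nova/nova.conf on the compute node). The demo below patches a scratch copy so nothing live is touched:

```shell
# choose_virt_type FILE: set virt_type to kvm when the host CPU advertises
# vmx/svm in /proc/cpuinfo, and leave it at qemu otherwise.
choose_virt_type() {
    flags=$(grep -Ec '(vmx|svm)' /proc/cpuinfo 2>/dev/null || true)
    if [ "${flags:-0}" -gt 0 ]; then vt=kvm; else vt=qemu; fi
    sed -i "s/^virt_type = .*/virt_type = $vt/" "$1"
    echo "virt_type set to $vt"
}

# Demo on a scratch copy instead of the live /etc/nova/nova.conf:
demo=$(mktemp)
printf '[libvirt]\nvirt_type = qemu\n' > "$demo"
choose_virt_type "$demo"
```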

Verifying the nova installation

On the controller node, run:

# su - admin
$ . admin-openrc
$ openstack compute service list
+----+------------------+-----------+----------+---------+-------+----------------------------+
| Id | Binary           | Host      | Zone     | Status  | State | Updated At                 |
+----+------------------+-----------+----------+---------+-------+----------------------------+
|  1 | nova-consoleauth | controller | internal | enabled | up    | 2017-05-18T09:09:12.000000 |
|  2 | nova-conductor   | controller | internal | enabled | up    | 2017-05-18T09:09:14.000000 |
|  3 | nova-scheduler   | controller | internal | enabled | up    | 2017-05-18T09:09:19.000000 |
|  6 | nova-compute     | compute3   | nova     | enabled | up    | 2017-05-18T09:09:16.000000 |
+----+------------------+-----------+----------+---------+-------+----------------------------+

Configuring the networking service (neutron)

Configuring the controller node

mysql -u root -p
CREATE DATABASE neutron;
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY 'NEUTRON_DBPASS';
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY 'NEUTRON_DBPASS';

su - admin
. admin-openrc
openstack user create --domain default --password NEUTRON_PASS neutron
openstack role add --project service --user neutron admin
openstack service create --name neutron --description "OpenStack Networking" network
openstack endpoint create --region RegionOne network public http://controller:9696
openstack endpoint create --region RegionOne network internal http://controller:9696
openstack endpoint create --region RegionOne network admin http://controller:9696
cd /etc/neutron
mv neutron.conf neutron.conf.dist
cat  >neutron.conf <<EOF
[DEFAULT]
core_plugin = ml2
service_plugins =
state_path = /var/lib/neutron
log_dir = /var/log/neutron
rpc_backend = rabbit
notify_nova_on_port_status_changes = True
notify_nova_on_port_data_changes = True
[agent]
root_helper = sudo neutron-rootwrap /etc/neutron/rootwrap.conf
[cors]
[cors.subdomain]
[database]
connection = mysql+pymysql://neutron:NEUTRON_DBPASS@controller/neutron
[keystone_authtoken]
signing_dir = /var/cache/neutron/keystone-signing
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = NEUTRON_PASS
[matchmaker_redis]
[nova]
auth_url = http://controller:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = nova
password = NOVA_PASS
[oslo_concurrency]
lock_path = /var/lib/neutron/tmp
[oslo_messaging_amqp]
[oslo_messaging_notifications]
[oslo_messaging_rabbit]
rabbit_host = controller
rabbit_userid = openstack
rabbit_password = RABBIT_PASS
[oslo_policy]
[qos]
[quotas]
[ssl]
EOF

Configuring the Modular Layer 2 (ML2) plug-in

cd /etc/neutron/plugins/ml2/
mv ml2_conf.ini  ml2_conf.ini.ORIG
cat > ml2_conf.ini <<EOF
[DEFAULT]
[ml2]
type_drivers = flat,vlan
tenant_network_types =
mechanism_drivers = linuxbridge
extension_drivers = port_security
[ml2_type_flat]
flat_networks = provider
[ml2_type_geneve]
[ml2_type_gre]
[ml2_type_vlan]
[ml2_type_vxlan]
[securitygroup]
enable_ipset = True
EOF


cd /etc/neutron/plugins/ml2/
mv linuxbridge_agent.ini linuxbridge_agent.ini.ORIG
cat >linuxbridge_agent.ini <<EOF
[DEFAULT]
[agent]
[linux_bridge]
physical_interface_mappings = provider:ens20
[securitygroup]
enable_security_group = True
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
[vxlan]
enable_vxlan = False
EOF

Configuring the DHCP agent

cd /etc/neutron
mv dhcp_agent.ini dhcp_agent.ini_ORIG
cat >dhcp_agent.ini <<EOF
[DEFAULT]
dhcp_delete_namespaces = True
interface_driver = neutron.agent.linux.interface.BridgeInterfaceDriver
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = True
[AGENT]
EOF

Populating the neutron database

ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" 
systemctl restart openstack-nova-api.service
systemctl enable neutron-server.service neutron-linuxbridge-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service
systemctl start  neutron-server.service neutron-linuxbridge-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service

Configuring neutron on the compute node

cd /etc/neutron
mv neutron.conf neutron.conf_ORIG
cat >neutron.conf  <<EOF
[DEFAULT]
rpc_backend = rabbit
auth_strategy = keystone
[agent]
root_helper = sudo neutron-rootwrap /etc/neutron/rootwrap.conf
[cors]
[cors.subdomain]
[database]
[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = NEUTRON_PASS
[matchmaker_redis]
[nova]
[oslo_concurrency]
lock_path = /var/lib/neutron/tmp
[oslo_messaging_amqp]
[oslo_messaging_notifications]
[oslo_messaging_rabbit]
rabbit_host = controller
rabbit_userid = openstack
rabbit_password = RABBIT_PASS
[oslo_policy]
[qos]
[quotas]
[ssl]
EOF


cd /etc/neutron/plugins/ml2
mv linuxbridge_agent.ini linuxbridge_agent.ini_ORIG
cat >linuxbridge_agent.ini <<EOF
[DEFAULT]
[agent]
[linux_bridge] 
physical_interface_mappings = provider:ens20
[securitygroup]
enable_security_group = True
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
[vxlan]
enable_vxlan = False
EOF


systemctl restart openstack-nova-compute.service
systemctl enable neutron-linuxbridge-agent.service
systemctl start neutron-linuxbridge-agent.service


Verifying neutron

On the controller node, run:

su - admin
. admin-openrc
 neutron ext-list
+---------------------------+-----------------------------------------------+
| alias                     | name                                          |
+---------------------------+-----------------------------------------------+
| default-subnetpools       | Default Subnetpools                           |
| network-ip-availability   | Network IP Availability                       |
| network_availability_zone | Network Availability Zone                     |
| auto-allocated-topology   | Auto Allocated Topology Services              |
| ext-gw-mode               | Neutron L3 Configurable external gateway mode |
| binding                   | Port Binding                                  |
| agent                     | agent                                         |
| subnet_allocation         | Subnet Allocation                             |
| l3_agent_scheduler        | L3 Agent Scheduler                            |
| tag                       | Tag support                                   |
| external-net              | Neutron external network                      |
| net-mtu                   | Network MTU                                   |
| availability_zone         | Availability Zone                             |
| quotas                    | Quota management support                      |
| l3-ha                     | HA Router extension                           |
| flavors                   | Neutron Service Flavors                       |
| provider                  | Provider Network                              |
| multi-provider            | Multi Provider Network                        |
| address-scope             | Address scope                                 |
| extraroute                | Neutron Extra Route                           |
| timestamp_core            | Time Stamp Fields addition for core resources |
| router                    | Neutron L3 Router                             |
| extra_dhcp_opt            | Neutron Extra DHCP opts                       |
| dns-integration           | DNS Integration                               |
| security-group            | security-group                                |
| dhcp_agent_scheduler      | DHCP Agent Scheduler                          |
| router_availability_zone  | Router Availability Zone                      |
| rbac-policies             | RBAC Policies                                 |
| standard-attr-description | standard-attr-description                     |
| port-security             | Port Security                                 |
| allowed-address-pairs     | Allowed Address Pairs                         |
| dvr                       | Distributed Virtual Router                    |
+---------------------------+-----------------------------------------------+
 neutron agent-list
+--------------------------------------+--------------------+------------+-------+----------------+---------------------------+
| id                                   | agent_type         | host       | alive | admin_state_up | binary                    |
+--------------------------------------+--------------------+------------+-------+----------------+---------------------------+
| 08905043-5010-4b87-bba5-aedb1956e27a | Linux bridge agent | compute1   | :-)   | True           | neutron-linuxbridge-agent |
| 27eee952-a748-467b-bf71-941e89846a92 | Linux bridge agent | controller | :-)   | True           | neutron-linuxbridge-agent |
| dd3644c9-1a3a-435a-9282-eb306b4b0391 | DHCP agent         | controller | :-)   | True           | neutron-dhcp-agent        |
| f49a4b81-afd6-4b3d-b923-66c8f0517099 | Metadata agent     | controller | :-)   | True           | neutron-metadata-agent    |
+--------------------------------------+--------------------+------------+-------+----------------+---------------------------+

Configuring the web interface

Enable the SPICE console (the required settings are already present in the configuration files):

systemctl enable openstack-nova-spicehtml5proxy.service
systemctl start openstack-nova-spicehtml5proxy.service


cd /etc/openstack-dashboard
mv local_settings local_settings_ORIG
cat >local_settings <<EOF
# -*- coding: utf-8 -*-
import os
from django.utils.translation import ugettext_lazy as _
from openstack_dashboard import exceptions
from openstack_dashboard.settings import HORIZON_CONFIG
DEBUG = False
TEMPLATE_DEBUG = DEBUG
OPENSTACK_HOST = "controller"
ALLOWED_HOSTS = ['*', ]
SESSION_ENGINE = 'django.contrib.sessions.backends.cache'
CACHES = {
   'default': {
        'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
        'LOCATION': 'controller:11211',
   }
}
OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST
OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True
OPENSTACK_API_VERSIONS = {
   "identity": 3,
   "image": 2,
   "volume": 2,
}
OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = "default"
OPENSTACK_KEYSTONE_DEFAULT_ROLE = "user"
OPENSTACK_NEUTRON_NETWORK = {
   'enable_router': False,
   'enable_quotas': False,
   'enable_distributed_router': False,
   'enable_ha_router': False,
   'enable_lb': False,
   'enable_firewall': False,
   'enable_vpn': False,
   'enable_fip_topology_check': False,
}
TIME_ZONE = "Europe/Moscow"
WEBROOT = '/dashboard/'
LOCAL_PATH = '/tmp'
SECRET_KEY='da8b52fb799a5319e747'
EMAIL_BACKEND = 'django.core.mail.backends.console.EmailBackend'
OPENSTACK_KEYSTONE_BACKEND = {
   'name': 'native',
   'can_edit_user': True,
   'can_edit_group': True,
   'can_edit_project': True,
   'can_edit_domain': True,
   'can_edit_role': True,
}
OPENSTACK_HYPERVISOR_FEATURES = {
   'can_set_mount_point': False,
   'can_set_password': False,
   'requires_keypair': False,
}
OPENSTACK_CINDER_FEATURES = {
   'enable_backup': False,
}
OPENSTACK_HEAT_STACK = {
   'enable_user_pass': True,
}
IMAGE_CUSTOM_PROPERTY_TITLES = {
   "architecture": _("Architecture"),
   "image_state": _("Euca2ools state"),
   "project_id": _("Project ID"),
   "image_type": _("Image Type"),
}
IMAGE_RESERVED_CUSTOM_PROPERTIES = []
API_RESULT_LIMIT = 1000
API_RESULT_PAGE_SIZE = 20
SWIFT_FILE_TRANSFER_CHUNK_SIZE = 512 * 1024
DROPDOWN_MAX_ITEMS = 30
TIME_ZONE = "UTC"
POLICY_FILES_PATH = '/etc/openstack-dashboard'
LOGGING = {
   'version': 1,
   # When set to True this will disable all logging except
   # for loggers specified in this configuration dictionary. Note that
   # if nothing is specified here and disable_existing_loggers is True,
   # django.db.backends will still log unless it is disabled explicitly.
   'disable_existing_loggers': False,
   'handlers': {
       'null': {
           'level': 'DEBUG',
           'class': 'logging.NullHandler',
       },
       'console': {
           # Set the level to "DEBUG" for verbose output logging.
           'level': 'INFO',
           'class': 'logging.StreamHandler',
       },
   },
   'loggers': {
       # Logging from django.db.backends is VERY verbose, send to null
       # by default.
       'django.db.backends': {
           'handlers': ['null'],
           'propagate': False,
       },
       'requests': {
           'handlers': ['null'],
           'propagate': False,
       },
       'horizon': {
           'handlers': ['console'],
           'level': 'DEBUG',
           'propagate': False,
       },
       'openstack_dashboard': {
           'handlers': ['console'],
           'level': 'DEBUG',
           'propagate': False,
       },
       'novaclient': {
           'handlers': ['console'],
           'level': 'DEBUG',
           'propagate': False,
       },
       'cinderclient': {
           'handlers': ['console'],
           'level': 'DEBUG',
           'propagate': False,
       },
       'keystoneclient': {
           'handlers': ['console'],
           'level': 'DEBUG',
           'propagate': False,
       },
       'glanceclient': {
           'handlers': ['console'],
           'level': 'DEBUG',
           'propagate': False,
       },
       'neutronclient': {
           'handlers': ['console'],
           'level': 'DEBUG',
           'propagate': False,
       },
       'heatclient': {
           'handlers': ['console'],
           'level': 'DEBUG',
           'propagate': False,
       },
       'ceilometerclient': {
           'handlers': ['console'],
           'level': 'DEBUG',
           'propagate': False,
       },
       'swiftclient': {
           'handlers': ['console'],
           'level': 'DEBUG',
           'propagate': False,
       },
       'openstack_auth': {
           'handlers': ['console'],
           'level': 'DEBUG',
           'propagate': False,
       },
       'nose.plugins.manager': {
           'handlers': ['console'],
           'level': 'DEBUG',
           'propagate': False,
       },
       'django': {
           'handlers': ['console'],
           'level': 'DEBUG',
           'propagate': False,
       },
       'iso8601': {
           'handlers': ['null'],
           'propagate': False,
       },
       'scss': {
           'handlers': ['null'],
           'propagate': False,
       },
   },
}
SECURITY_GROUP_RULES = {
   'all_tcp': {
       'name': _('All TCP'),
       'ip_protocol': 'tcp',
       'from_port': '1',
       'to_port': '65535',
   },
   'all_udp': {
       'name': _('All UDP'),
       'ip_protocol': 'udp',
       'from_port': '1',
       'to_port': '65535',
   },
   'all_icmp': {
       'name': _('All ICMP'),
       'ip_protocol': 'icmp',
       'from_port': '-1',
       'to_port': '-1',
   },
   'ssh': {
       'name': 'SSH',
       'ip_protocol': 'tcp',
       'from_port': '22',
       'to_port': '22',
   },
   'smtp': {
       'name': 'SMTP',
       'ip_protocol': 'tcp',
       'from_port': '25',
       'to_port': '25',
   },
   'dns': {
       'name': 'DNS',
       'ip_protocol': 'tcp',
       'from_port': '53',
       'to_port': '53',
   },
   'http': {
       'name': 'HTTP',
       'ip_protocol': 'tcp',
       'from_port': '80',
       'to_port': '80',
   },
   'pop3': {
       'name': 'POP3',
       'ip_protocol': 'tcp',
       'from_port': '110',
       'to_port': '110',
   },
   'imap': {
       'name': 'IMAP',
       'ip_protocol': 'tcp',
       'from_port': '143',
       'to_port': '143',
   },
   'ldap': {
       'name': 'LDAP',
       'ip_protocol': 'tcp',
       'from_port': '389',
       'to_port': '389',
   },
   'https': {
       'name': 'HTTPS',
       'ip_protocol': 'tcp',
       'from_port': '443',
       'to_port': '443',
   },
   'smtps': {
       'name': 'SMTPS',
       'ip_protocol': 'tcp',
       'from_port': '465',
       'to_port': '465',
   },
   'imaps': {
       'name': 'IMAPS',
       'ip_protocol': 'tcp',
       'from_port': '993',
       'to_port': '993',
   },
   'pop3s': {
       'name': 'POP3S',
       'ip_protocol': 'tcp',
       'from_port': '995',
       'to_port': '995',
   },
   'ms_sql': {
       'name': 'MS SQL',
       'ip_protocol': 'tcp',
       'from_port': '1433',
       'to_port': '1433',
   },
   'mysql': {
       'name': 'MYSQL',
       'ip_protocol': 'tcp',
       'from_port': '3306',
       'to_port': '3306',
   },
   'rdp': {
       'name': 'RDP',
       'ip_protocol': 'tcp',
       'from_port': '3389',
       'to_port': '3389',
   },
}
REST_API_REQUIRED_SETTINGS = ['OPENSTACK_HYPERVISOR_FEATURES',
                              'LAUNCH_INSTANCE_DEFAULTS']
EOF
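The SECRET_KEY in the sample above is a fixed demonstration value; for a real deployment generate a random one instead (a sketch; any sufficiently long random string works):

```shell
# Generate a random 40-hex-character value suitable for Horizon's SECRET_KEY.
key=$(od -An -tx1 -N20 /dev/urandom | tr -d ' \n')
echo "SECRET_KEY='$key'"
```

Paste the printed line into /etc/openstack-dashboard/local_settings in place of the sample key.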

Restart the services:

 a2ensite openstack-dashboard
systemctl restart httpd2.service memcached.service

Launching a virtual machine

Creating the network

su - admin
. admin-openrc
neutron net-create --shared --provider:physical_network provider --provider:network_type flat provider
Created a new network:
+---------------------------+--------------------------------------+
| Field                     | Value                                |
+---------------------------+--------------------------------------+
| admin_state_up            | True                                 |
| id                        | 0e62efcd-8cee-46c7-b163-d8df05c3c5ad |
| mtu                       | 1500                                 |
| name                      | provider                             |
| port_security_enabled     | True                                 |
| provider:network_type     | flat                                 |
| provider:physical_network | provider                             |
| provider:segmentation_id  |                                      |
| router:external           | False                                |
| shared                    | True                                 |
| status                    | ACTIVE                               |
| subnets                   |                                      |
| tenant_id                 | d84313397390425c8ed50b2f6e18d092     |
+---------------------------+--------------------------------------+

Replace the allocation pool, gateway, and DNS server below with values for your own network.

neutron subnet-create --name provider --allocation-pool start=203.0.113.101,end=203.0.113.250 --dns-nameserver 8.8.4.4 --gateway 203.0.113.1  provider 203.0.113.0/24


su - admin
. admin-openrc 

Create a new flavor for the test image.

openstack flavor create --id 0 --vcpus 1 --ram 64 --disk 1 m1.nano

Most cloud images support key-based authentication, so generate an SSH key pair and import it.

. demo-openrc
ssh-keygen -q -N ""
openstack keypair create --public-key ~/.ssh/id_rsa.pub mykey 

Verify the import:

openstack keypair list
+-------+-------------------------------------------------+
| Name  | Fingerprint                                     |
+-------+-------------------------------------------------+
| mykey | be:68:58:f8:0a:6e:1e:c7:36:1c:8c:ff:c9:30:3f:60 |
+-------+-------------------------------------------------+


Create security group rules to allow ICMP and SSH:

openstack security group rule create --proto icmp default
openstack security group rule create --proto tcp --dst-port 22 default

Verify that the cirros image, the m1.nano flavor, and the default security group exist:

openstack flavor list
openstack image list
openstack security group list

Note the ID of the provider network in this output:

openstack network list
+--------------------------------------+--------------+--------------------------------------+
| ID                                   | Name         | Subnets                              |
+--------------------------------------+--------------+--------------------------------------+
| 4716ddfe-6e60-40e7-b2a8-42e57bf3c31c | selfservice  | 2112d5eb-f9d6-45fd-906e-7cabd38b7c7c |
| b5b6993c-ddf9-40e7-91d0-86806a42edb8 | provider     | 310911f6-acf0-4a47-824e-3032916582ff |
+--------------------------------------+--------------+--------------------------------------+

Create the virtual machine:

openstack server create --flavor m1.nano --image cirros --nic net-id=PROVIDER_NET_ID --security-group default --key-name mykey provider-instance
+--------------------------------------+-----------------------------------------------+
| Property                             | Value                                         |
+--------------------------------------+-----------------------------------------------+
| OS-DCF:diskConfig                    | MANUAL                                        |
| OS-EXT-AZ:availability_zone          | nova                                          |
| OS-EXT-STS:power_state               | 0                                             |
| OS-EXT-STS:task_state                | scheduling                                    |
| OS-EXT-STS:vm_state                  | building                                      |
| OS-SRV-USG:launched_at               | -                                             |
| OS-SRV-USG:terminated_at             | -                                             |
| accessIPv4                           |                                               |
| accessIPv6                           |                                               |
| adminPass                            | hdF4LMQqC5PB                                  |
| config_drive                         |                                               |
| created                              | 2015-09-17T21:58:18Z                          |
| flavor                               | m1.tiny (1)                                   |
| hostId                               |                                               |
| id                                   | 181c52ba-aebc-4c32-a97d-2e8e82e4eaaf          |
| image                                | cirros (38047887-61a7-41ea-9b49-27987d5e8bb9) |
| key_name                             | mykey                                         |
| metadata                             | {}                                            |
| name                                 | provider-instance                             |
| os-extended-volumes:volumes_attached | []                                            |
| progress                             | 0                                             |
| security_groups                      | default                                       |
| status                               | BUILD                                         |
| tenant_id                            | f5b2ccaa75ac413591f12fcaa096aa5c              |
| updated                              | 2015-09-17T21:58:18Z                          |
| user_id                              | 684286a9079845359882afc3aa5011fb              |
+--------------------------------------+-----------------------------------------------+

Check the status of the virtual machine:

 openstack server list
+--------------------------------------+-------------------+--------+------------------------+------------+
| ID                                   | Name              | Status | Networks               | Image Name |
+--------------------------------------+-------------------+--------+------------------------+------------+
| 181c52ba-aebc-4c32-a97d-2e8e82e4eaaf | provider-instance | ACTIVE | provider=203.0.113.103 | cirros     |
+--------------------------------------+-------------------+--------+------------------------+------------+
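Right after creation the status is usually BUILD, so the check above may need to be repeated until it shows ACTIVE. A minimal polling sketch follows; the loop is parameterized over a status command so it can be reused, and the real `openstack server show` invocation in the trailing comment is an assumption, not part of this guide:

```shell
# Poll a status command until it reports ACTIVE (success) or ERROR (failure).
# The status source is passed as a string so the helper itself stays generic.
wait_for_active() {
  local get_status="$1" tries="${2:-30}" status
  for _ in $(seq "$tries"); do
    status=$($get_status)
    case "$status" in
      ACTIVE) echo ACTIVE; return 0 ;;
      ERROR)  echo ERROR;  return 1 ;;
    esac
    sleep 2
  done
  return 1
}

# Example use (assumption: standard openstackclient output formatting):
# wait_for_active "openstack server show provider-instance -f value -c status"
```

If the instance ends up in ERROR state, `openstack server show provider-instance` prints the fault details.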

Check that the gateway is reachable:

 ping -c 4 203.0.113.1

PING 203.0.113.1 (203.0.113.1) 56(84) bytes of data.
64 bytes from 203.0.113.1: icmp_req=1 ttl=64 time=0.357 ms
64 bytes from 203.0.113.1: icmp_req=2 ttl=64 time=0.473 ms
64 bytes from 203.0.113.1: icmp_req=3 ttl=64 time=0.504 ms
64 bytes from 203.0.113.1: icmp_req=4 ttl=64 time=0.470 ms

--- 203.0.113.1 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 2998ms
rtt min/avg/max/mdev = 0.357/0.451/0.504/0.055 ms

Check that the virtual machine is reachable:

 ping -c 4 203.0.113.103

PING 203.0.113.103 (203.0.113.103) 56(84) bytes of data.
64 bytes from 203.0.113.103: icmp_req=1 ttl=63 time=3.18 ms
64 bytes from 203.0.113.103: icmp_req=2 ttl=63 time=0.981 ms
64 bytes from 203.0.113.103: icmp_req=3 ttl=63 time=1.06 ms
64 bytes from 203.0.113.103: icmp_req=4 ttl=63 time=0.929 ms

--- 203.0.113.103 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3002ms
rtt min/avg/max/mdev = 0.929/1.539/3.183/0.951 ms
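When scripting these two reachability checks, the interesting figure is the packet-loss percentage in ping's summary line. A small sketch of extracting it (the function name is illustrative, and the sample line is the one from the output above):

```shell
# Extract the packet-loss percentage from a ping summary line on stdin.
packet_loss() {
  grep -o '[0-9]*% packet loss' | cut -d'%' -f1
}

summary='4 packets transmitted, 4 received, 0% packet loss, time 3002ms'
echo "$summary" | packet_loss   # prints 0
```

A non-zero value here usually points at the provider-network bridge or the security-group rules configured earlier in the guide.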

Log in to the virtual machine:

 ssh cirros@203.0.113.103