Wednesday, September 17, 2014

[OpenStack] Understanding ephemeral and persistent volumes

When you ask Nova to boot a VM, nova-compute connects to Glance, "GET"s the image file, and saves it on its local filesystem in "/var/lib/nova/instances/_base".
If Glance is set to use Swift as its backend store, Glance fetches the file from Swift (through the proxy). Otherwise, it streams the file from Glance's own filesystem (check the "filesystem_store_datadir" variable in "glance-api.conf" to see which backend store Glance uses).

So by default an instance's disk is stored on the local filesystem of the server where the instance runs (in "/var/lib/nova/instances/instance-0000000X/disk"). It is called ephemeral because terminating the instance deletes the entire "/var/lib/nova/instances/instance-0000000X" directory, and the virtual disk with it; the base image in the "_base" directory is not touched.

If the virtual disk uses qcow2, only the changes relative to the base image are captured in the virtual disk, so the disk grows as the instance writes more data. The benefit is that five instances can share the same base template without using five times the space on the local filesystem (see http://people.gnome.org/~markmc/qcow-image-format.html for more about qcow2).

Persistent volumes are virtual disks that you attach to a running instance using the nova-volume service. These virtual disks are actually LVM volumes exported over iSCSI by the nova-volume server. They are called persistent because they survive both instance termination and a nova-compute server crash: you can start a new instance, re-attach the volume, and get your data back. nova-volume uses LVM + iSCSI by default, but drivers/plugins exist for Nexenta (and NetApp will release its own soon), so enterprise-grade options are available.


Table 10.1. Flavor parameters

- ID : A unique numeric ID.
- Name : A descriptive name. A name such as xx.size_name is conventional but not required, though some third-party tools may rely on it.
- Memory_MB : Virtual machine memory in megabytes.
- Disk : Virtual root disk size in gigabytes. This is an ephemeral disk that the base image is copied into. It is not used when you boot from a persistent volume. A size of "0" is a special case that uses the native base image size as the size of the ephemeral root volume.
- Ephemeral : The size of a secondary ephemeral data disk in gigabytes. This is an empty, unformatted disk that exists only for the life of the instance.

OpenStack Storage Concepts

Table 6.1, “OpenStack storage” explains the different storage concepts provided by OpenStack.
Table 6.1. OpenStack storage

[Ephemeral storage]
- Used to : run the operating system and provide scratch space
- Accessed through : a file system
- Accessible from : within a VM
- Managed by : OpenStack Compute (nova)
- Persists until : the VM is terminated
- Sizing determined by : administrator configuration of size settings, known as flavors
- Example of typical usage : 10 GB first disk, 30 GB second disk

[Block storage]
- Used to : add additional persistent storage to a virtual machine (VM)
- Accessed through : a block device that can be partitioned, formatted, and mounted (such as /dev/vdc)
- Accessible from : within a VM
- Managed by : OpenStack Block Storage (cinder)
- Persists until : deleted by the user
- Sizing determined by : user specification in the initial request
- Example of typical usage : 1 TB disk

[Object storage]
- Used to : store data, including VM images
- Accessed through : the REST API
- Accessible from : anywhere
- Managed by : OpenStack Object Storage (swift)
- Persists until : deleted by the user
- Sizing determined by : the amount of available physical storage
- Example of typical usage : 10s of TBs of dataset storage

Cloud Image - converting an image from vmdk to qcow2

You can convert a VMware vmdk cloud image to a KVM-based qcow2 image with the following qemu-img command on an OpenStack host. The "-o compat=0.10" option keeps the output image compatible with older QEMU versions.

[Convert Image]
qemu-img convert -o compat=0.10 -f vmdk -O qcow2 CentOS_Cloud-disk1.vmdk centos-cloud-6.5.qcow2

[Compress Image]
sudo virt-sparsify -o compat=0.10 --compress vCDN_DnsTld-disk1.qcow2 vCDN_DnsTld-disk1_compressed.qcow2

Thursday, September 11, 2014

How to restart any service in Devstack

You can manually restart any service you like in devstack by following the procedure below:


1. In a terminal, type "screen -x stack" ("stack" is the screen session name). You will be attached to the screen windows for all devstack services.

2. Move to the window of the desired service with Ctrl+a then n (next window), or Ctrl+a then p (previous window).

3. In the desired window, press Ctrl+c to stop the service.

4. Press the Up arrow key and then Enter to rerun the service's start command.

The service will restart.

For an automated restart, you can copy the start command (from step 4) and write a script that restarts all such services.
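As a sketch of such a script (the session name "stack" and window names like n-cpu are assumptions; check "screen -ls" and your own window list), the keystrokes can be sent non-interactively with screen's -X stuff:

```shell
# Hypothetical restart helper: stops a devstack service window with Ctrl+C,
# then replays its last shell command with Ctrl+P + Enter (bash history).
cat > restart-service.sh <<'EOF'
#!/bin/bash
WINDOW="$1"                                      # e.g. n-cpu, n-api, q-svc
screen -S stack -p "$WINDOW" -X stuff $'\003'    # send Ctrl+C to stop the service
sleep 2
screen -S stack -p "$WINDOW" -X stuff $'\020\n'  # Ctrl+P then Enter to rerun it
EOF
chmod +x restart-service.sh
```

Running "./restart-service.sh n-cpu" would then restart nova-compute, assuming that window exists; loop over a list of window names to restart several services.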

==========================================================================
Devstack does not run services as daemons; it runs them inside screen windows. After stack.sh completes successfully, if you want to restart any OpenStack service, attach to the session using "screen -r". For example, to restart nova-network, go to its window (window 9 here) with Ctrl+a followed by 9, kill the service with Ctrl+c, then restart it with the Up arrow and Enter.

Note: if you reboot the machine running devstack, you need to rerun stack.sh.

Wednesday, September 10, 2014

Connect to a VM instance by using SSH (key pair)

There are two ways to connect to a VM instance using SSH: one uses a key pair, and the other uses config_init (cloud-init).

############################################
# Login to VM instance using key pair
############################################

1. Create a key pair in OpenStack
    ex) name : ubuntu-cloud

2. Download it to your local PC (when you create a key pair in the Horizon GUI, it is downloaded automatically)

3. Create a new VM instance in OpenStack with the key pair

4. Login to the VM instance using SSH as below
     # ssh -i ubuntu-cloud.pem ubuntu@192.168.100.102

##### INFO #####
- The permissions of the key file must be 600, so change the file mode as below:
# chmod 600 ./ubuntu-cloud.pem


############################################
# Login to VM instance using config_init
############################################

* Ref. #1 : http://docs.openstack.org/user-guide/content/user-data.html
* Ref. #2 : http://www.blog.sandro-mathys.ch/2013/07/setting-user-password-when-launching.html
* Ref. #3 : https://help.ubuntu.com/community/CloudInit


1. Provide a customization script when creating a VM instance in Horizon
1) Go to the Post-Creation tab
2) Insert the code below as the Customization Script
3) Hit the Launch button
4) Once the instance is up, you should be able to log in with the configured password

#cloud-config
password: ubuntu
chpasswd: { expire: False }
ssh_pwauth: True
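As an alternative to a Nova key pair, cloud-init can also inject an SSH public key directly via the same customization script (the key below is a placeholder, not a real key):

```yaml
#cloud-config
ssh_authorized_keys:
  - ssh-rsa AAAAB3... user@host
```

The key is appended to the default user's ~/.ssh/authorized_keys on first boot.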



############################################
# Login to CentOS VM instance using config_init
############################################

#cloud-config
chpasswd:
  list: |
    root:stackops
    cloud-user:stackops
  expire: False
ssh_pwauth: True

For CentOS 7.0 images (where the default user is "centos"):

#cloud-config
chpasswd:
  list: |
    root:stackops
    centos:stackops
  expire: False
ssh_pwauth: True

Wednesday, September 3, 2014

Installation of OpenStack Icehouse with a flat network using devstack

* References
http://blog.felipe-alfaro.com/2014/02/03/openstack-with-devstack-in-ubuntu/
https://wiki.openstack.org/wiki/Obsolete:ConfigureOpenvswitch
http://wiki.stackinsider.org/index.php/DevStack_-_Single_Node_using_Neutron_FLAT_-_Havana


1. Network Configuration

- eth0 : NAT
- eth1 : Bridge

- /etc/network/interfaces
====================================================
# The loopback network interface
auto lo
iface lo inet loopback

# The primary network interface
auto eth0
iface eth0 inet static
        address 192.168.186.10
        netmask 255.255.255.0

auto eth1
iface eth1 inet static
        address 10.10.1.190
        netmask 255.255.255.0
        gateway 10.10.1.1
        dns-nameservers 168.126.63.1
====================================================


2. local.conf
====================================================
HOST_IP=10.10.1.190

# network
FLAT_INTERFACE=eth1
FIXED_RANGE=20.1.1.0/24
FIXED_NETWORK_SIZE=256
NETWORK_GATEWAY=20.1.1.1
#FLOATING_RANGE=192.168.100.0/24
#PUBLIC_NETWORK_GATEWAY=192.168.100.1


# Neutron
Q_PLUGIN=ml2
Q_AGENT_EXTRA_OVS_OPTS=(tenant_network_type=vlan)
ENABLE_TENANT_VLANS=True
TENANT_VLAN_RANGE=100:200
#OVS_VLAN_RANGE=physnet1
PHYSICAL_NETWORK=physnet1
OVS_PHYSICAL_BRIDGE=br-eth1


# Services
ENABLED_SERVICES=rabbit,mysql,key
ENABLED_SERVICES+=,n-api,n-crt,n-obj,n-cpu,n-cond,n-sch,n-novnc,n-cauth
ENABLED_SERVICES+=,neutron,q-svc,q-agt,q-dhcp,q-l3,q-meta,q-lbaas
ENABLED_SERVICES+=,s-proxy,s-object,s-container,s-account
ENABLED_SERVICES+=,g-api,g-reg
ENABLED_SERVICES+=,cinder,c-api,c-vol,c-sch,c-bak
ENABLED_SERVICES+=,heat,h-api,h-api-cfn,h-api-cw,h-eng
ENABLED_SERVICES+=,trove,tr-api,tr-tmgr,tr-cond
ENABLED_SERVICES+=,horizon
ENABLED_SERVICES+=,ceilometer-acompute,ceilometer-acentral,ceilometer-collector,ceilometer-api
====================================================


3. Post configuration

[Configuration for network interface]
auto eth1
iface eth1 inet manual
        up ifconfig $IFACE 0.0.0.0 up 
        up ip link set $IFACE promisc on 
        down ip link set $IFACE promisc off 
        down ifconfig $IFACE down 

# The Open vSwitch network interface 
auto br-eth1 
iface br-eth1 inet static 
        address 10.10.1.190
        netmask 255.255.255.0
        gateway 10.10.1.1
        dns-nameservers 168.126.63.1
        up ip link set $IFACE promisc on 
        down ip link set $IFACE promisc off

[Restart network interfaces]
# sudo ifdown eth1
# sudo ifup eth1
# sudo ifup br-eth1


[Configuration for Open vSwitch]
# ovs-vsctl add-port br-eth1 eth1
# ifconfig br-eth1 promisc up


4. Authentication

# vi keystone-admin
# put the configuration below
export OS_TENANT_NAME=admin 
export OS_USERNAME=admin 
export OS_PASSWORD=admin
PS1="\u@\h:\w (keystone-$OS_USERNAME)\$ " 
source openrc

# source ./keystone-admin

5. Create a flat network

# neutron net-create --os-tenant-name admin external_flat --shared --provider:network_type flat --provider:physical_network physnet1
# neutron subnet-create --os-tenant-name admin external_flat 10.10.1.0/24 --gateway 10.10.1.1 --dns-nameserver 168.126.63.1 --allocation-pool start=10.10.1.225,end=10.10.1.254