Monday, December 1, 2014

Clean up (remove the MAC address when you generate a snapshot image from a virtual machine)

Reference : http://docs.openstack.org/image-guide/content/ubuntu-image.html

Clean up (remove MAC address details)

The operating system records the MAC address of the virtual Ethernet card in locations such as /etc/udev/rules.d/70-persistent-net.rules while the instance runs. However, each time the image boots, the virtual Ethernet card will have a different MAC address, so this information must be deleted from the configuration file.
A utility called virt-sysprep performs various cleanup tasks, such as removing MAC address references. It cleans up a virtual machine image in place:
# virt-sysprep -d trusty
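If virt-sysprep is not available, the rule file can be cleared by hand. A minimal sketch, assuming the file lives at the path mentioned above (run inside the guest, or against a mounted image):

```shell
# Truncate the persistent-net rules so udev regenerates them on the
# next boot with the new virtual NIC's MAC address.
RULES=/etc/udev/rules.d/70-persistent-net.rules
if [ -f "$RULES" ]; then
    : > "$RULES"
fi
```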

Monday, November 10, 2014

Increase size of CentOS cloud image (root disk size)

The following is a very easy way to increase the disk size of the default CentOS cloud image.

Reference : https://github.com/flegmatik/linux-rootfs-resize#linux-rootfs-resize



linux-rootfs-resize

Supported Linux distributions: CentOS 6, Debian 6, Debian 7.
Rework of my previous project, which was limited to CentOS 6.
This tool creates a new initrd (initramfs) image with the ability to resize the root filesystem over the available space. Typically you need this when you provision your virtual machine on an OpenStack cloud for the first time (your image becomes flavor aware).
For now, filesystem resize is limited to ext2, ext3 and ext4 (resize2fs), including LVM volumes.
This code was successfully tested on: CentOS 6.5, Debian 6 and Debian 7.2
DEPENDENCIES:
cloud-utils (https://launchpad.net/cloud-utils)
parted (CentOS)
INSTALL:
Install git, clone this project on your machine, run 'install'. 
On CentOS:
cd /opt
rpm -ivh http://ftp-stud.hs-esslingen.de/pub/epel/6/i386/epel-release-6-8.noarch.rpm
yum install git parted cloud-utils
git clone https://github.com/flegmatik/linux-rootfs-resize.git
cd linux-rootfs-resize
./install
Tool is designed in modular fashion, so support for other distributions can be added without much work (I hope).

Wednesday, September 17, 2014

[Openstack] Understanding ephemeral and persistent volumes

When you ask Nova to boot a VM, nova-compute connects to Glance, "GET"s the image file from Glance, and saves it on its local filesystem in "/var/lib/nova/instances/_base".
If Glance is set to use Swift as its backend storage, then Glance will get that file from Swift (through the Proxy). If not, then it will stream the file from Glance's filesystem (check the variable "filesystem_store_datadir" in the file "glance-api.conf" to see what Glance is set to use as backend store). 

So by default the disk of an instance is stored on the local filesystem of the server where the instance is running (in "/var/lib/nova/instances/instance-0000000X/disk"). It is called ephemeral because when you terminate the instance the entire directory "/var/lib/nova/instances/instance-0000000X" is deleted and the virtual disk is gone, but the base image in the "_base" directory is not touched. 

If the virtual disk uses qcow2, only the changes from the baseline are captured in the virtual disk, so the disk grows as the instance changes. The benefit is that five instances can share the same base template without using five times the space on the local filesystem (read http://people.gnome.org/~markmc/qcow-image-format.html for more info about qcow2). 
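The saving works like a sparse file: the apparent size can be far larger than the blocks actually allocated. A quick illustration with a plain sparse file (not a qcow2 image, but the accounting idea is the same):

```shell
# A 1 GiB sparse file reports an apparent size of 1 GiB,
# yet almost no disk blocks are actually allocated for it.
f=$(mktemp)
truncate -s 1G "$f"
stat -c 'apparent: %s bytes, allocated: %b blocks' "$f"
rm -f "$f"
```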

Persistent volumes are virtual disks that you attach to a running instance using the nova-volume service. These virtual disks are actually LVM volumes exported over iSCSI by the nova-volume server. They are called persistent because they are not affected by an instance being terminated or by a nova-compute server crashing. 
You can just start a new instance, re-attach the volume, and get your data back. nova-volume uses LVM + iSCSI, but there are drivers/plugins for Nexenta (and NetApp will release its own soon), so enterprise-grade options are available.


Table 10.1. Flavor parameters

ID: A unique numeric ID.
Name: A descriptive name. A convention such as xx.size_name is common but not required, though some third-party tools may rely on it.
Memory_MB: Virtual machine memory in megabytes.
Disk: Virtual root disk size in gigabytes. This is an ephemeral disk the base image is copied into. It is not used when you boot from a persistent volume. The "0" size is a special case that uses the native base image size as the size of the ephemeral root volume.
Ephemeral: The size of a secondary ephemeral data disk. This is an empty, unformatted disk that exists only for the life of the instance.

OpenStack Storage Concepts

Table 6.1, “OpenStack storage” explains the different storage concepts provided by OpenStack.
Table 6.1. OpenStack storage

Ephemeral storage
- Used to: run the operating system and provide scratch space
- Accessed through: a file system
- Accessible from: within a VM
- Managed by: OpenStack Compute (nova)
- Persists until: the VM is terminated
- Sizing determined by: administrator configuration of size settings, known as flavors
- Typical usage: 10 GB first disk, 30 GB second disk

Block storage
- Used to: add additional persistent storage to a virtual machine (VM)
- Accessed through: a block device that can be partitioned, formatted, and mounted (such as /dev/vdc)
- Accessible from: within a VM
- Managed by: OpenStack Block Storage (cinder)
- Persists until: deleted by the user
- Sizing determined by: user specification in the initial request
- Typical usage: 1 TB disk

Object storage
- Used to: store data, including VM images
- Accessed through: the REST API
- Accessible from: anywhere
- Managed by: OpenStack Object Storage (swift)
- Persists until: deleted by the user
- Sizing determined by: the amount of available physical storage
- Typical usage: 10s of TBs of dataset storage

Cloud Image - converting an image from vmdk to qcow2

We can convert a VMware vmdk cloud image to a KVM-based qcow2 image with the following qemu-img commands.
The "-o compat=0.10" option keeps the output compatible with older qemu versions.

[Convert Image]
qemu-img convert -o compat=0.10 -f vmdk -O qcow2 CentOS_Cloud-disk1.vmdk centos-cloud-6.5.qcow2

[Compress Image]
sudo virt-sparsify -o compat=0.10 --compress vCDN_DnsTld-disk1.qcow2 vCDN_DnsTld-disk1_compressed.qcow2
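When several vmdk files need converting, a small wrapper can derive the output name from the input. This sketch only echoes the qemu-img command (a dry run, with the file name from the example above); drop the echo to actually convert:

```shell
# Build (but do not run) the conversion command for a given vmdk file.
to_qcow2() {
    local src=$1
    local dst="${src%.vmdk}.qcow2"
    echo qemu-img convert -o compat=0.10 -f vmdk -O qcow2 "$src" "$dst"
}

to_qcow2 CentOS_Cloud-disk1.vmdk
# → qemu-img convert -o compat=0.10 -f vmdk -O qcow2 CentOS_Cloud-disk1.vmdk CentOS_Cloud-disk1.qcow2
```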

Thursday, September 11, 2014

How to restart any service in Devstack

You can manually restart any service you like in devstack with the following procedure:


1. In a terminal, type "screen -x stack". Here "stack" is the screen session name; you will be attached to the screen windows for all devstack services.

2. Browse to the desired service by pressing Ctrl+a then n (next screen) or Ctrl+a then p (previous screen).

3. At the desired screen, press Ctrl+c (this stops the service).

4. Press the Up arrow key and press Enter (this re-runs the command that started the service).

The service will restart.

For an automated restart you can copy the command from step 4 and write a script that restarts all such services.

==========================================================================
Devstack does not run services as daemons; it runs them in screen windows. After successfully running stack.sh, if you want to restart any OpenStack service, attach to the session using screen -r. For example, to restart nova network, go to the nova network screen (screen 9) using Ctrl+a followed by 9. Then kill nova network with Ctrl+c and restart it with the Up arrow and Enter.

Note: if you reboot the machine running devstack, you need to rerun stack.sh.

Wednesday, September 10, 2014

Connect VM instance by using SSH (key pair)

There are two ways to connect to a VM instance using SSH: one uses a key pair and the other uses a config_init (cloud-init) customization script.

############################################
# Login to VM instance using key pair
############################################

1. Create a key pair in OpenStack
    ex) name : ubuntu-cloud

2. Download it to your local PC (when you create a key pair in the Horizon GUI, it will be downloaded automatically)

3. Create a new VM instance in OpenStack with the key pair

4. Log in to the VM instance using SSH as below
     # ssh -i ubuntu-cloud.pem ubuntu@192.168.100.102

##### INFO #####
- The permissions on the key file should be 600, so change the file mode as below
# chmod 600 ./ubuntu-cloud.pem
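ssh refuses private keys that are group- or world-readable, so a script can check the mode before connecting. A small sketch using the key file from the example above:

```shell
# Tighten the private key's permissions if they are too open;
# ssh would otherwise reject the key as an unprotected private key file.
key=./ubuntu-cloud.pem
mode=$(stat -c '%a' "$key")
if [ "$mode" != "600" ] && [ "$mode" != "400" ]; then
    chmod 600 "$key"
fi
```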


############################################
# Login to VM instance using config_init
############################################

* Ref. #1 : http://docs.openstack.org/user-guide/content/user-data.html
* Ref. #2 : http://www.blog.sandro-mathys.ch/2013/07/setting-user-password-when-launching.html
* Ref. #3 : https://help.ubuntu.com/community/CloudInit


1. Provide a custom script when creating a VM instance in Horizon
1) Go to the Post-Creation tab 
2) Insert the code below as the Customization Script. 
3) Hit the Launch button 
4) Once the instance is up, you should be able to log in with the configured password.

#cloud-config
password: ubuntu
chpasswd: { expire: False }
ssh_pwauth: True



############################################
# Login to CentOS VM instance using config_init
############################################

#cloud-config
chpasswd:
  list: |
    root:stackops
    cloud-user:stackops
  expire: False
ssh_pwauth: True

CentOS 7.0 images:

#cloud-config
chpasswd:
  list: |
    root:stackops
    centos:stackops
  expire: False
ssh_pwauth: True

Wednesday, September 3, 2014

Installation of OpenStack icehouse with Flat Network using devstack

* References
http://blog.felipe-alfaro.com/2014/02/03/openstack-with-devstack-in-ubuntu/
https://wiki.openstack.org/wiki/Obsolete:ConfigureOpenvswitch
http://wiki.stackinsider.org/index.php/DevStack_-_Single_Node_using_Neutron_FLAT_-_Havana


1. Network Configuration

- eth0 : NAT
- eth1 : Bridge

- /etc/network/interfaces
====================================================
# The loopback network interface
auto lo
iface lo inet loopback

# The primary network interface
auto eth0
iface eth0 inet static
        address 192.168.186.10
        netmask 255.255.255.0

auto eth1
iface eth1 inet static
        address 10.10.1.190
        netmask 255.255.255.0
        gateway 10.10.1.1
        dns-nameservers 168.126.63.1
====================================================


2. local.conf
====================================================
HOST_IP=10.10.1.190

# network
FLAT_INTERFACE=eth1
FIXED_RANGE=20.1.1.0/24
FIXED_NETWORK_SIZE=256
NETWORK_GATEWAY=20.1.1.1
#FLOATING_RANGE=192.168.100.0/24
#PUBLIC_NETWORK_GATEWAY=192.168.100.1


# Neutron
Q_PLUGIN=ml2
Q_AGENT_EXTRA_OVS_OPTS=(tenant_network_type=vlan)
ENABLE_TENANT_VLANS=True
TENANT_VLAN_RANGE=100:200
#OVS_VLAN_RANGE=physnet1
PHYSICAL_NETWORK=physnet1
OVS_PHYSICAL_BRIDGE=br-eth1


# Services
ENABLED_SERVICES=rabbit,mysql,key
ENABLED_SERVICES+=,n-api,n-crt,n-obj,n-cpu,n-cond,n-sch,n-novnc,n-cauth
ENABLED_SERVICES+=,neutron,q-svc,q-agt,q-dhcp,q-l3,q-meta,q-lbaas
ENABLED_SERVICES+=,s-proxy,s-object,s-container,s-account
ENABLED_SERVICES+=,g-api,g-reg
ENABLED_SERVICES+=,cinder,c-api,c-vol,c-sch,c-bak
ENABLED_SERVICES+=,heat,h-api,h-api-cfn,h-api-cw,h-eng
ENABLED_SERVICES+=,trove,tr-api,tr-tmgr,tr-cond
ENABLED_SERVICES+=,horizon
ENABLED_SERVICES+=,ceilometer-acompute,ceilometer-acentral,ceilometer-collector,ceilometer-api
====================================================
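FIXED_NETWORK_SIZE has to agree with the prefix length in FIXED_RANGE: a /24 holds 2^(32-24) = 256 addresses. A quick sanity check in shell, using the values from the local.conf above:

```shell
# Derive the address count from the CIDR prefix and compare it
# with the FIXED_NETWORK_SIZE value set in local.conf.
FIXED_RANGE=20.1.1.0/24
FIXED_NETWORK_SIZE=256
prefix=${FIXED_RANGE#*/}
addresses=$(( 1 << (32 - prefix) ))
[ "$addresses" -eq "$FIXED_NETWORK_SIZE" ] && echo "size matches"
# → size matches
```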


3. Post configuration

[Configuration for network interface]
auto eth1
iface eth1 inet manual
        up ifconfig $IFACE 0.0.0.0 up 
        up ip link set $IFACE promisc on 
        down ip link set $IFACE promisc off 
        down ifconfig $IFACE down 

# The Open VSwitch network interface 
auto br-eth1 
iface br-eth1 inet static 
        address 10.10.1.190
        netmask 255.255.255.0
        gateway 10.10.1.1
        dns-nameservers 168.126.63.1
        up ip link set $IFACE promisc on 
        down ip link set $IFACE promisc off

[Restart network interfaces]
# sudo ifdown eth1
# sudo ifup eth1
# sudo ifup br-eth1


[Configuration for Open vSwitch]
# ovs-vsctl add-port br-eth1 eth1
# ifconfig br-eth1 promisc up


4. Authentication

# vi keystone-admin
# put the configuration below
export OS_TENANT_NAME=admin 
export OS_USERNAME=admin 
export OS_PASSWORD=admin
PS1="\u@\h:\w (keystone-$OS_USERNAME)\$ " 
source openrc

# source ./keystone-admin

5. Create a flat network

# neutron net-create --os-tenant-name admin external_flat --shared --provider:network_type flat --provider:physical_network physnet1
# neutron subnet-create --os-tenant-name admin external_flat 10.10.1.0/24 --gateway 10.10.1.1 --dns-nameserver 168.126.63.1 --allocation-pool start=10.10.1.225,end=10.10.1.254

Tuesday, July 29, 2014

Errors and solutions during OpenStack icehouse installation

#######################################################
ERROR: openstackclient.shell Exception raised: six>=1.6.0

Solution:
Updating setuptools to the latest version right after installing pip appears to fix the issue.

sudo pip install --upgrade setuptools
#######################################################

Saturday, May 3, 2014

Installation of OpenStack icehouse using devstack

###########################
# Network interface configuration
# /etc/network/interfaces
###########################

# The primary network interface
auto eth0
iface eth0 inet dhcp

auto eth1
iface eth1 inet manual
        up ifconfig $IFACE 0.0.0.0 up
        up ip link set $IFACE promisc on
        down ip link set $IFACE promisc off
        down ifconfig $IFACE down


############################
Installation steps
###########################

1. ssh root
2. apt-get -y update
3. apt-get -y install git
4. create user stack

# useradd -U -G sudo -s /bin/bash -m stack
# echo "stack ALL=(ALL) NOPASSWD: ALL" >> /etc/sudoers
# passwd stack

5. log in as user stack
6. git clone https://github.com/openstack-dev/devstack.git -b stable/icehouse
7. cd devstack/
8. copy ./samples/local.conf to the devstack root directory
9. add the contents below into local.conf
10. ./stack.sh 
11. Configure as below so that VM instances can be reached from the local PC
      $ sudo ovs-vsctl show
      $ sudo ovs-vsctl add-port br-ex eth1


###########################
local.conf
###########################

# default 
HOST_IP=10.1.100.12

# network
FLAT_INTERFACE=eth0
FIXED_RANGE=10.0.0.0/24
FIXED_NETWORK_SIZE=256
NETWORK_GATEWAY=10.0.0.1
FLOATING_RANGE=192.168.100.0/24

# This IP will be assigned to the br-ex interface after installation,
# so make sure it does not conflict with the IP of the virtual adapter on the local PC (e.g. VMnet8 or VMnet1 in the case of VMware)
PUBLIC_NETWORK_GATEWAY=192.168.100.1


# Services
ENABLED_SERVICES=rabbit,mysql,key
ENABLED_SERVICES+=,n-api,n-crt,n-obj,n-cpu,n-cond,n-sch,n-novnc,n-cauth
ENABLED_SERVICES+=,neutron,q-svc,q-agt,q-dhcp,q-l3,q-meta,q-lbaas
ENABLED_SERVICES+=,s-proxy,s-object,s-container,s-account
ENABLED_SERVICES+=,g-api,g-reg
ENABLED_SERVICES+=,cinder,c-api,c-vol,c-sch,c-bak
ENABLED_SERVICES+=,heat,h-api,h-api-cfn,h-api-cw,h-eng
ENABLED_SERVICES+=,trove,tr-api,tr-tmgr,tr-cond
ENABLED_SERVICES+=,horizon
ENABLED_SERVICES+=,ceilometer-acompute,ceilometer-acentral,ceilometer-collector,ceilometer-api


Wednesday, March 19, 2014

Linux performance analysis and tools

http://www.slideshare.net/brendangregg/linux-performance-analysis-and-tools

Ubuntu KVM installation

Just follow the guide on the Ubuntu site below.
Note: install as the root account.

https://help.ubuntu.com/community/KVM/Installation

Monday, March 17, 2014

Enabling the Intel VT-x option for CPU virtualization in Parallels on Mac OS X


I spent what felt like hours hunting for this option.

This is how to enable the Intel VT-x option for CPU virtualization in Parallels.

Originally I installed Ubuntu (64-bit 12.04.4) in VirtualBox and, while installing KVM there, got a message that KVM could not be installed because the CPU did not support virtualization, so I googled hard to resolve it.
The conclusion: on an Apple MacBook Pro the VT-x option is enabled by default, so nothing extra is needed. (On Windows, you must enable the Virtualization Technology option in the BIOS.)

Even with the VT-x/AMD-V options all checked in VirtualBox, Ubuntu kept reporting that the CPU did not support virtualization, which was really strange.
I eventually gave up and succeeded with Parallels. I suspect it may be a bug in VirtualBox for Mac OS.



Checking whether Intel VT-x for CPU virtualization is supported on Mac OS X


How to check whether Intel VT-x is supported by the CPU:

# sysctl -a | grep machdep.cpu.features

You may see output similar to the one below:

Mac:~ user$ sysctl -a | grep machdep.cpu.features
kern.exec: unknown type returned
machdep.cpu.features: FPU VME DE PSE TSC MSR PAE MCE CX8 APIC SEP MTRR PGE MCA CMOV PAT CLFSH DS ACPI MMX FXSR SSE SSE2 SS HTT TM SSE3 MON VMX EST TM2 TPR PDCM


If you see the VMX entry, the CPU supports the Intel VT-x feature, but it may still be disabled.

Install all firmware updates from Apple to resolve this issue.
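The grep above can be wrapped into a small check. This sketch takes the feature string as an argument so the same logic works on any machine (note that on Linux the flag appears lowercase as vmx in /proc/cpuinfo; has_vtx is just an illustrative name):

```shell
# Report whether a CPU feature string contains the VT-x flag (VMX).
has_vtx() {
    case " $1 " in
        *" VMX "*|*" vmx "*) echo yes ;;
        *)                   echo no ;;
    esac
}

has_vtx "FPU VME SSE3 MON VMX EST"
# → yes
```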

Installing add-apt-repository

If the add-apt-repository command is reported as missing, install the related packages as below.


sudo apt-get install python-software-properties
sudo apt-get install software-properties-common



Running the commands above installs the add-apt-repository command.

Tuesday, March 11, 2014

OpenStack installation - DevStack

==========================
1. VM setup and Ubuntu 12.04 LTS installation
==========================
- Install Ubuntu
- Configure two network cards in the guest OS (Ubuntu)
- Set one to "Shared Network" (NAT) and the other to "Host Only"

==========================
2. Network configuration
==========================
- Edit the /etc/network/interfaces file

auto eth0
iface eth0 inet static
        address         10.1.100.3
        netmask         255.255.255.0
        gateway         10.1.100.1
        dns-nameservers 168.126.63.1

auto eth1
iface eth1 inet static
        address 192.168.100.3
        netmask 255.255.255.0

- Proxy server configuration (only needed when a proxy server is present)
sudo vi /etc/apt/apt.conf
Acquire::http::proxy "http://xx.xx.xx.xx:8080/";
Acquire::https::proxy "https://xx.xx.xx.xx:8080/";


==========================
3. Install Git
==========================
$ sudo apt-get install -y git


==========================
4. Add a user (stack)
==========================
Add a stack user and grant it sudo privileges so the installation can be performed with devstack.


# useradd -U -G sudo -s /bin/bash -m stack
# echo "stack ALL=(ALL) NOPASSWD: ALL" >> /etc/sudoers
# passwd stack


==========================
5. Download the Devstack installer
==========================
Switch to the stack user account, then download Devstack in the home directory.

$ git clone git://github.com/openstack-dev/devstack.git


==========================
6. Edit local.conf
==========================
    - Copy the ~devstack/samples/local.conf file to the devstack root directory
    - Add the line below at the very bottom. (After the first install, this skips re-installing packages from the internet.)
       OFFLINE=true


==========================
7. Install OpenStack
==========================
~devstack$ ./stack.sh


#############################
Errors occasionally occur.

ImportError “No Module named Setuptools”

sudo apt-get install python-setuptools



error: command 'gcc' failed with exit status 1 


sudo apt-get install python-dev
sudo apt-get install libevent-dev

Thursday, January 9, 2014

Changing the routing table on a Mac (How to add a route in Mac OS X)

Almost identical to the Linux command for adding a route to the routing table.

The example below sets 192.0.254.108 as the gateway for reaching the 192.0.100.0 network.



sudo route -n add 192.0.100.0/24 192.0.254.108