Discussion:
[one-users] error when creating image on ceph datastore
Huynh Dac Nguyen
2014-11-13 07:02:09 UTC
Hi All,

I added a Ceph datastore successfully, but I can't create an image on it.

[***@ho-srv-cloudlab-01 ~]$ oneimage create centos65min.one
--datastore ceph_2 --persistent --type CDROM --disk_type CDROM

Thu Nov 13 01:47:35 2014 [Z0][ImM][I]: cp: Copying local image
/var/lib/one/images/CentOS-6.5-x86_64-minimal.iso to the image
repository
Thu Nov 13 01:47:35 2014 [Z0][ImM][E]: cp: Command " set -e
Thu Nov 13 01:47:35 2014 [Z0][ImM][I]:
Thu Nov 13 01:47:35 2014 [Z0][ImM][I]: FORMAT=$(qemu-img info
/var/tmp/2f6f4e105d2aea3b28c214e005e97e26 | grep "^file format:" | awk
'{print }')
Thu Nov 13 01:47:35 2014 [Z0][ImM][I]:
Thu Nov 13 01:47:35 2014 [Z0][ImM][I]: if [ "$FORMAT" != "raw" ]; then

Thu Nov 13 01:47:35 2014 [Z0][ImM][I]: qemu-img convert -O raw
/var/tmp/2f6f4e105d2aea3b28c214e005e97e26
/var/tmp/2f6f4e105d2aea3b28c214e005e97e26.raw
Thu Nov 13 01:47:35 2014 [Z0][ImM][I]: mv
/var/tmp/2f6f4e105d2aea3b28c214e005e97e26.raw
/var/tmp/2f6f4e105d2aea3b28c214e005e97e26
Thu Nov 13 01:47:35 2014 [Z0][ImM][I]: fi
Thu Nov 13 01:47:35 2014 [Z0][ImM][I]:
Thu Nov 13 01:47:35 2014 [Z0][ImM][I]: rbd import --image-format 2
/var/tmp/2f6f4e105d2aea3b28c214e005e97e26 one/one-29
Thu Nov 13 01:47:35 2014 [Z0][ImM][I]:
Thu Nov 13 01:47:35 2014 [Z0][ImM][I]: # remove original
Thu Nov 13 01:47:35 2014 [Z0][ImM][I]: rm -f
/var/tmp/2f6f4e105d2aea3b28c214e005e97e26" failed: Warning: Permanently
added 'ho-srv-ceph-03,10.10.15.69' (RSA) to the list of known hosts.
Thu Nov 13 01:47:35 2014 [Z0][ImM][I]: sh: line 5: qemu-img: command
not found
Thu Nov 13 01:47:35 2014 [Z0][ImM][I]: sh: line 8: qemu-img: command
not found
Thu Nov 13 01:47:35 2014 [Z0][ImM][E]: Error registering one/one-29 in
ho-srv-ceph-03
Thu Nov 13 01:47:35 2014 [Z0][ImM][I]: ExitCode: 127
Thu Nov 13 01:47:35 2014 [Z0][ImM][E]: Error copying image in the
datastore: Error registering one/one-29 in ho-srv-ceph-03

[***@ho-srv-cloudlab-01 ~]$ onedatastore show 120
DATASTORE 120 INFORMATION

ID : 120
NAME : ceph_2
USER : oneadmin
GROUP : oneadmin
CLUSTER : -
TYPE : IMAGE
DS_MAD : ceph
TM_MAD : ceph
BASE PATH : /var/lib/one//datastores/120
DISK_TYPE : RBD

DATASTORE CAPACITY

TOTAL: : 186.9G
FREE: : 143.3G
USED: : 43.5G
LIMIT: : -

PERMISSIONS

OWNER : um-
GROUP : u--
OTHER : ---

DATASTORE TEMPLATE

BASE_PATH="/var/lib/one//datastores/"
BRIDGE_LIST="ho-srv-ceph-03"
CLONE_TARGET="SELF"
DATASTORE_CAPACITY_CHECK="yes"
DISK_TYPE="RBD"
DS_MAD="ceph"
LN_TARGET="NONE"
POOL_NAME="one"
RBD_FORMAT="2"
SAFE_DIRS="/var/lib/one/images"
TM_MAD="ceph"
TYPE="IMAGE_DS"

IMAGES
29


[***@ho-srv-cloudlab-01 ~]$ cat centos65min.one
NAME = "CentOS-6.5-x86_64-minimal"
TYPE = CDROM
PATH = /var/lib/one/images/CentOS-6.5-x86_64-minimal.iso
DESCRIPTION = "CentOS-6.5-x86_64-minimal.iso"


Note:
ho-srv-ceph-03 is my Ceph MON.
The "one" pool has only just been created; nothing else has been done with it.
OS: CentOS 6.6
OpenNebula 4.8.0-1
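
For context on the log above: with BRIDGE_LIST set, the Ceph datastore driver
runs the registration commands on that host (ho-srv-ceph-03) over SSH rather
than on the front-end, which is why the "command not found" errors come from
there. A quick check from the front-end (as the oneadmin user) could look
like this:

  ssh ho-srv-ceph-03 'which qemu-img rbd'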


Is this a bug, or did I do something wrong?



Regards,
Ndhuynh





Giancarlo De Filippis
2014-11-13 08:50:17 UTC
Hi all,

does anyone have some documentation for a sample private cloud structure
with:

- Two nodes in HA

- Front-end on an HA virtual machine

- Storage on a GlusterFS file system

Thanks all.

GD
Bart
2014-11-13 10:38:58 UTC
Hi,

Do you mean to have the frontend (OpenNebula management) running on the
actual OpenNebula cluster?

If that's the case then I would also be very interested in this scenario :)

As for GlusterFS, we've followed these instructions with success:


- http://docs.opennebula.org/4.10/
- http://docs.opennebula.org/4.10/administration/storage/gluster_ds.html


All the other instructions we found were also on the documentation site.
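
For reference, a Gluster-backed image datastore from that guide boils down to
registering a shared datastore that points at the Gluster volume. A rough
sketch from memory (host and volume names are placeholders; double-check the
attribute names against the gluster_ds page linked above):

$ cat gluster_images.ds
NAME           = "glusterfs_images"
DS_MAD         = fs
TM_MAD         = shared
DISK_TYPE      = GLUSTER
GLUSTER_HOST   = "gluster-rr.example.com:24007"
GLUSTER_VOLUME = "one_images"

$ onedatastore create gluster_images.ds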

With it, we've created the following:


- *GlusterFS:* Two storage hosts (running CentOS 7) in the back-end that
provide the whole image/datastore for OpenNebula. HA is set up via
round-robin DNS. After testing we've found this to work quite well in terms
of performance and redundancy (better than expected). All network ports are
set up as a bond using balance-alb.
- *Hypervisors:* At the moment 4 hypervisors (this will eventually grow to 8)
running CentOS 7. They have two network ports set up in a bond using
balance-alb, connected to the virt/storage VLAN, and two network ports set
up in a bond using active-backup, which is used for the bridge port (with
VLAN tagging enabled). Active-backup seems to be the proper setting for a
bridge port, since a bridge can't work that well with balance-alb.
- *OpenNebula management node:* This includes the OpenNebula daemon,
Sunstone and all other functions. We're still building up this environment:
the hypervisors and storage have been arranged and we're happy with how
those parts are set up, but the management node is something we're not very
sure about yet. We're considering buying two physical servers for this job
and creating an active/backup solution as described in the documentation:
http://docs.opennebula.org/4.10/advanced_administration/high_availability/oneha.html.
However, buying hardware just for this seems a waste, since these servers
won't be doing that much (resource-wise).
   - We do have plans to test this node, in a dual setup, on the actual
   OpenNebula cluster, but something is holding us back: I can't find any
   proof of someone achieving this, and we foresee some issues that might
   occur when OpenNebula is managing itself (e.g. live migration, reboot,
   shutdown, etc.).
- *Virtual machine HA setup:* We haven't actually started with this part
yet, but this document describes how you can create a setup where virtual
machines become HA:
http://docs.opennebula.org/4.10/advanced_administration/high_availability/ftguide.html
For us this is something we'll probably start looking into once we've found
a proper setup for the OpenNebula management node.


Hopefully this info helps you a little. I also hope someone else can
elaborate on running OpenNebula as a VM on its own cluster, or at least
share other best practices on how to run OpenNebula in a VM without buying
two physical servers for this job (any insight on this subject would be
helpful :)


-- Bart
--
Bart G.
Giancarlo De Filippis
2014-11-13 11:08:44 UTC
Thanks so much Bart,

That's the case: a VM running on the cluster itself is closer to the
VMware HA solution (only two hosts, without a separate front-end).

As for GlusterFS, I've already used this solution on my public cloud
with success.

Your notes on the port setup of the hypervisors helped me :)

I hope (like you) that someone (users or the OpenNebula team) has best
practices on how to run OpenNebula in a VM.

Thanks.

Giancarlo
Daniel Dehennin
2014-11-13 11:33:54 UTC
Post by Giancarlo De Filippis
I hope (like you) that someone (users or the OpenNebula team) has best
practices on how to run OpenNebula in a VM.
Hello,

The ONE frontend VM cannot manage itself; you must use something else.

I made a test with pacemaker/corosync and it can be quite easy[1]:

#+begin_src conf
# Fencing device for the frontend VM, driven through libvirt
primitive Stonith-ONE-Frontend stonith:external/libvirt \
    params hostlist="one-frontend" hypervisor_uri="qemu:///system" \
    pcmk_host_list="one-frontend" pcmk_host_check="static-list" \
    op monitor interval="30m"
# The frontend itself, managed as a libvirt domain
primitive ONE-Frontend-VM ocf:heartbeat:VirtualDomain \
    params config="/var/lib/one/datastores/one/one.xml" \
    op start interval="0" timeout="90" \
    op stop interval="0" timeout="100" \
    utilization cpu="1" hv_memory="1024"
# Keep the fencing device and the VM together (collocated and ordered)
group ONE-Frontend Stonith-ONE-Frontend ONE-Frontend-VM
# Placement preferences: nebula1 (40), then nebula3 (30), then nebula2 (20)
location ONE-Frontend-run-on-hypervisor ONE-Frontend \
    rule $id="ONE-Frontend-run-on-hypervisor-rule" 40: #uname eq nebula1 \
    rule $id="ONE-Frontend-run-on-hypervisor-rule-0" 30: #uname eq nebula3 \
    rule $id="ONE-Frontend-run-on-hypervisor-rule-1" 20: #uname eq nebula2
#+end_src
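
For reference, assuming the crm shell, a snippet like the one above can be
loaded from a file and the result checked afterwards (the file name here is
just an example):

  crm configure load update one-frontend.crm
  crm_mon -1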

I have trouble with my cluster because my nodes _and_ the ONE frontend
need to access the same SAN.

My nodes have two LUNs over multipath FC (/dev/mapper/SAN-FS{1,2}); both
are PVs of a clustered volume group (cLVM) with GFS2 on top.

So I need:

- corosync for messaging
- dlm for cLVM and GFS2
- cLVM
- GFS2

I added the LUNs as raw block disks to my frontend VM and installed the
whole stack in it, but I'm facing some communication issues and have only
managed to solve some of them[3].

According to the pacemaker mailing list, having the nodes _and_ a VM in
the same pacemaker/corosync cluster "sounds like a recipe for
disaster"[2].

Hope this helps you get a picture of the topic.

Regards.

Footnotes:
[1] http://clusterlabs.org/doc/en-US/Pacemaker/1.1-crmsh/html-single/Clusters_from_Scratch/index.html

[2] http://oss.clusterlabs.org/pipermail/pacemaker/2014-November/023000.html

[3] http://oss.clusterlabs.org/pipermail/pacemaker/2014-November/022964.html
--
Daniel Dehennin
Fetch my GPG key: gpg --recv-keys 0xCC1E9E5B7A6FE2DF
Fingerprint: 3E69 014E 5C23 50E8 9ED6 2AAD CC1E 9E5B 7A6F E2DF
Daniel Dehennin
2014-11-17 13:08:56 UTC
Hi Daniel,
So basically, to sum it up, there is currently no way of running the
OpenNebula management node (with all functionality inside one VM) on its
own virtualisation cluster (and thus managing itself along with the rest of
the cluster).
Hello,

You can run the VM on the same hardware, but it will not be managed by
OpenNebula, since you need OpenNebula to start OpenNebula VMs.

That's why the documentation[1] explains how to set up an HA system.

If you have two physical servers, you may create two OpenNebula VMs, one
master and one slave, with a replicated database between them.
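
Whichever pair ends up hosting the frontend, the failover described in that
guide boils down to a classic active/passive pair, with a cluster manager
(e.g. Pacemaker) moving a virtual IP and the OpenNebula services between the
two nodes. A minimal sketch only, assuming an /etc/init.d/opennebula init
script and a placeholder address; adjust names and addresses to your
environment:

  crm configure primitive one_vip ocf:heartbeat:IPaddr2 \
      params ip=192.168.100.10 cidr_netmask=24 \
      op monitor interval=30s
  crm configure primitive opennebula_srv lsb:opennebula \
      op monitor interval=60s
  crm configure group one_frontend one_vip opennebula_srv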

Regards.

Footnotes:
[1] http://docs.opennebula.org/4.10/advanced_administration/high_availability/oneha.html
--
Daniel Dehennin
Fetch my GPG key: gpg --recv-keys 0xCC1E9E5B7A6FE2DF
Fingerprint: 3E69 014E 5C23 50E8 9ED6 2AAD CC1E 9E5B 7A6F E2DF
Jaime Melis
2014-11-19 16:00:52 UTC
Hi Ndhuynh,

you have to install qemu-img (qemu-utils package).
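
For anyone hitting the same error: the log above shows the commands failing
on the BRIDGE_LIST host, ho-srv-ceph-03, over SSH, so that is where the
binary is needed. A rough sketch of the install, with the package name
depending on the distribution, and a quick verification afterwards:

  # CentOS / RHEL (the qemu-img binary ships in the "qemu-img" package)
  yum install qemu-img
  # Debian / Ubuntu
  apt-get install qemu-utils
  # verify from the front-end that the bridge host can now see it
  ssh ho-srv-ceph-03 'qemu-img --version'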

cheers,
Jaime
--
Jaime Melis
Project Engineer
OpenNebula - Flexible Enterprise Cloud Made Simple
www.OpenNebula.org | ***@opennebula.org