Configuring H2 database for JUnit Tests with OpenJPA

When it comes time to write unit tests for an application, more often than not there will be a scenario where the tests have to communicate with a database backend. However, using the application's main database during unit test runs can pollute and corrupt the data, so the recommended approach is to use a dedicated database just for the tests.

Provisioning and maintaining a full database like MySQL or PostgreSQL carries a fair bit of overhead. In that regard, the database best suited for unit tests is H2.

The area where H2 excels is that it can be provisioned before the test suite starts and removed at the end, since it is an in-memory database.

Below are the steps necessary for configuring an OpenJPA Entity Manager Factory using the H2 driver:

Add maven dependency
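As a reference, a typical H2 dependency in the pom.xml looks like this (the version shown is only an example; use whatever release is current):

```xml
<dependency>
    <groupId>com.h2database</groupId>
    <artifactId>h2</artifactId>
    <version>1.3.170</version>
    <scope>test</scope>
</dependency>
```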


Create Data Source

PoolConfiguration props = new PoolProperties();  
props.setUrl("jdbc:h2:mem:testdb;DB_CLOSE_DELAY=-1"); // in-memory H2; the database name is illustrative  
props.setDriverClassName("org.h2.Driver");  
DataSource dataSource = new DataSource(props);  

Create Entity Manager Factory

Properties jpaProps = new Properties();  
jpaProps.put("openjpa.ConnectionFactory", dataSource);  
jpaProps.put("openjpa.Log", "log4j");  
jpaProps.put("openjpa.ConnectionFactoryProperties", "true");  
entityManagerFactory = Persistence.createEntityManagerFactory("myfactory", jpaProps);  
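For createEntityManagerFactory("myfactory", jpaProps) to work, a persistence unit with the same name has to be defined in META-INF/persistence.xml. A minimal sketch, where the entity class name is just a placeholder:

```xml
<persistence xmlns="http://java.sun.com/xml/ns/persistence" version="2.0">
    <persistence-unit name="myfactory" transaction-type="RESOURCE_LOCAL">
        <provider>org.apache.openjpa.persistence.PersistenceProviderImpl</provider>
        <class>com.example.MyEntity</class>
    </persistence-unit>
</persistence>
```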

Installing OpenStack, Quantum problems

During the following weeks we plan to expand more on the subject of setting up an OpenStack cloud using Quantum.
For now we have been experimenting with different Quantum functionality and settings.
At first Quantum might look like a black box, not because of its complexity, but because it deals with several different plugins and protocols; without some familiarity with those, it becomes hard to understand why Quantum is there in the first place.

In a nutshell, Quantum's role is to provide an interface for configuring the network of multiple VMs in a cluster.

In the last few years the lines between a system, network, and virtualization admin have become really blurry.
The classical unix admin is pretty much nonexistent nowadays, since most services are offered in the cloud in virtualized environments.
And since everything seems to be migrating over to the cloud, some network principles that applied to physical networks in the past sometimes don't translate very well to virtualized networks.

Later we’ll have some posts explaining what technologies and techniques underlie the network configuration of a cloud, in our case focusing specifically on OpenStack and Quantum.

With that being said, below are a few errors that came up during the configuration of Quantum:

1. ERROR [quantum.agent.dhcp_agent] Unable to sync network state.

This error is most likely caused by a misconfiguration of the rabbitmq server.
A few ways to debug the issue:
Check if the file /etc/quantum/quantum.conf on the controller node (where the quantum server is installed) has the proper rabbit credentials.

By default rabbitmq runs on port 5672, so run:

netstat -an | grep 5672

and check if the rabbitmq server is up and running.

On the network node (where the quantum agents are installed), also check if /etc/quantum/quantum.conf has the proper rabbit credentials.

If you are running a multihost setup, make sure the rabbit_host var points to the IP where the rabbit server is located.
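For reference, the rabbit settings in /etc/quantum/quantum.conf typically look like the following; the host, user, and password values here are just placeholders for whatever your setup uses:

```ini
[DEFAULT]
rabbit_host = 192.168.0.10
rabbit_port = 5672
rabbit_userid = guest
rabbit_password = guest
rabbit_virtual_host = /
```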

Just to be safe, check if you have a connection on the management network by pinging all the hosts in the cluster, and restart both the quantum and rabbitmq servers as well as the quantum agents.

2. ERROR [quantum.agent.l3_agent] Error running l3_nat daemon_loop

This error requires a very simple fix; however, it was very difficult to find information about the problem online.
Luckily, I found one thread on the mailing list of the Fedora project explaining the problem in more detail.

This error is due to the fact that keystone authentication is not working.
A quick explanation: the l3 agent makes use of the quantum http client to interface with the quantum service.
This requires keystone authentication; if that fails, the l3 agent will not be able to communicate with the service.
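For reference, the keystone settings the l3 agent reads live in /etc/quantum/l3_agent.ini and look roughly like this; the URL and credential values are placeholders for whatever your keystone setup uses:

```ini
[DEFAULT]
auth_url = http://192.168.0.10:35357/v2.0
auth_region = RegionOne
admin_tenant_name = service
admin_user = quantum
admin_password = servicepass
```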

To debug this problem, check if the quantum server is up and running.
By default the server runs on port 9696:

root@folsom-controller:/home/senecacd# netstat -an | grep 9696
tcp        0      0*              LISTEN

If nothing shows up, the quantum server is down; try restarting the service to see if the problem goes away:

service quantum-server restart

You can also try to ping the quantum server from the network node(in a multihost scenario):

root@folsom-network:/home/senecacd# nmap -p 9696

Starting Nmap 5.21 at 2013-01-28 08:07 PST
Nmap scan report for folsom-controller
Host is up (0.00038s latency).
9696/tcp open unknown
MAC Address: 00:0C:29:0C:F0:8C (VMware)

Nmap done: 1 IP address (1 host up) scanned in 0.04 seconds

3. ERROR [quantum.agent.l3_agent] Error running l3_nat daemon_loop – rootwrap error

I didn't come across this bug myself, but I found a few people running into the issue.
Kieran already wrote a good blog post explaining the problem and how to fix it.

You can check the bug discussion here

4. Bad floating ip request: Cannot create floating IP and bind it to Port , since that port is owned by a different tenant.

This is just a problem of mixed credentials.
Kieran documented the solution for the issue here

There is also a post on the OpenStack wiki talking about the problem.


This should help fix the problems that might arise during a Quantum installation.
If anybody knows about any other issues with Quantum or has any suggestions about the problems listed above please let us know!

Also check the official guide for other common errors and fixes

OpenStack Overview

Recently we started moving away from our original plan, which was to have an OpenNebula cloud running on CentOS boxes, to an OpenStack cloud running on Ubuntu boxes.

I'll try to give a quick overview of what OpenStack is and talk a little bit about some of its main components.

Before I get started, I really like this diagram that demonstrates what problem OpenStack is trying to solve.

This is a basic diagram demonstrating the basic usage and need of a cloud platform.
OpenStack works at all levels of the cloud setup, from the low-level libraries that interact with the hypervisors to the web dashboard that allows users to interact with the cloud.
In the end, OpenStack aggregates different solutions to offer a simple way to build public and private cloud platforms.

Some OpenStack history

Back in 2010, NASA and Rackspace started the OpenStack project with the goal of having an open standard for both hardware and software cloud solutions.
Currently there are more than 150 companies involved with the project, such as Intel, IBM, Cisco and HP to name a few.

As far as activity goes, OpenStack seems to be very active, with two major releases per year.
Their release history:

  1. Austin 21 October 2010
  2. Bexar 3 February 2011
  3. Cactus 15 April 2011
  4. Diablo 22 September 2011
  5. Essex 5 April 2012
  6. Folsom 27 September 2012
  7. Grizzly 4 April 2013

OpenStack is divided into three different categories:

Compute

  • Nova – OpenStack Compute
  • Glance – OpenStack Image Service

Storage

  • Swift – OpenStack Object Storage
  • Cinder – OpenStack Block Storage

Common Projects

  • Keystone – OpenStack Identity
  • Horizon – OpenStack Dashboard
  • Quantum – OpenStack Networking

Now let's look in more detail at the components listed above:

Nova – Compute

Nova can be summarized as a cloud controller.
It centralizes the management of a cloud and handles the communication between all the different components. A good diagram that illustrates its role:

The main components of Nova are:

  1. API server: handles requests from the user and relays them to the cloud controller.
  2. Cloud controller: handles the communication between the compute nodes, the networking controllers, the API server and the scheduler.
  3. Scheduler: selects a host to run a command.
  4. Compute worker: manages computing instances: launch/terminate instance, attach/detach volumes
  5. Network controller: manages networking resources: allocate fixed IP addresses, configure VLANs

List of features:

  • Manage virtualized commodity server resources
  • Manage Local Area Networks (LAN)
  • API with rate limiting and authentication
  • Distributed and asynchronous architecture
  • Virtual Machine (VM) image management
  • Live VM management
  • Floating IP addresses
  • Security Groups
  • Role Based Access Control (RBAC)
  • Projects & Quotas
  • VNC Proxy through web browser
  • Store and Manage files programmatically via API
  • Least privileged access design
  • Dashboard with fully integrated support for self-service provisioning
  • VM Image Caching on compute nodes

Glance – Image Service

Glance takes care of managing virtual machine images.
Some of its functionality:

  • List Available Images
  • Retrieve Image Metadata
  • Retrieve Raw Image Data
  • Add a New Image
  • Update an Image
  • List Image Memberships
  • List Shared Images
  • Add a Member to an Image
  • Remove a Member from an Image
  • Replace a Membership List for an Image

Glance is only responsible for creating and managing virtual machine images; the deployment and maintenance of the VMs is taken care of by Nova.

Swift – Object Storage

The Swift component is a container/object storage system.
Different from SAN and NAS file systems, Swift cannot be mounted, since it is not a file system itself.
You can think of Swift as a distributed storage system for static data, such as images, videos, archive data, etc.
All the files are exposed through an HTTP interface, typically with a REST API.

A good analogy is to think of containers as folders in a regular OS such as Windows.
Objects are chunks of data living inside containers.
So when you upload a picture to Swift it will be wrapped in an Object and placed inside a container.
The object can contain some metadata in the form of key value pairs.
No encryption or compression is performed on objects upon storage.
List of features:

  • Leverages commodity hardware
  • HDD/node failure agnostic
  • Unlimited storage
  • Multi-dimensional scalability (scale out architecture)
  • Account/Container/Object structure
  • Built-in replication
  • Easily add capacity unlike RAID resize
  • No central database
  • RAID not required
  • Built-in management utilities
  • Drive auditing
  • Expiring objects
  • Direct object access
  • Realtime visibility into client requests
  • Supports S3 API
  • Restrict containers per account
  • Support for NetApp, Nexenta, SolidFire
  • Snapshot and backup API for block volumes
  • Standalone volume API available
  • Integration with Compute

Keystone – Identity

To summarize what Keystone does in one sentence:

keep track of users and what they are permitted to do

Keystone handles the user management of the cloud system, allowing different permissions to be configured for different users.
One of the keywords they use when talking about Keystone is Tenants.
Their explanation of a tenant: something that can be thought of as a project, group, or organization.

A diagram demonstrating the basic flow of Keystone:

Horizon – Dashboard

Horizon is the web interface available for OpenStack; it's very much like what Sunstone is for OpenNebula.

Quantum – Networking

Quantum is a new addition to OpenStack.
Up to the Essex release, networking was built into the Nova component. However, to provide more functionality in terms of network management, Quantum was created with a focus on managing the network of an OpenStack deployment.


With all that said, and based on the image from the beginning of the blog post, we can now have a better idea of where the OpenStack components fit in the overall cloud setup:



Coming from somebody who is just getting started with cloud computing, I would say that between OpenNebula and OpenStack, the latter is the better option. Even though OpenNebula is an older project, OpenStack seems more mature.
A few things I noticed:
There is a lot more up-to-date information about OpenStack, plus it is very easy to navigate their website and find information about the different solutions they provide.
The community appears to be a lot more active around OpenStack than around OpenNebula.
Maybe it is just me, but after a few minutes reading about OpenStack I was led to their official repo on GitHub, where all the related projects are hosted.
I really like that, since it is an easy way to stay in close touch with the project and follow the latest additions; plus it provides a channel of communication between the users and the developers, and it makes it a lot easier for people to contribute back to the project.
I didn't have the same feeling when it came to OpenNebula; the project didn't feel as open as OpenStack, and I felt there were a lot more hurdles to getting started.

Speaking strictly technically, it's hard for me to say which one is better.
Reading about them both, I could definitely see a lot of areas where they overlap; for example, they both use noVNC to view VM consoles and libvirt to communicate with the hypervisors.
They both have a web interface to manage the cloud, and they both have options to configure VM templates.

A big difference is that OpenNebula is written in Ruby while OpenStack is in Python.
I'm sure they differ in other architectural decisions as well, but for the most part they are very similar.

For more information about OpenStack:
OpenStack wiki
Keystone basic concepts
Underlying technologies
Architecture overview
OpenStack Storage
Very good blog about python & openstack
OpenStack Wikipedia
RackSpace Wikipedia

Running OpenNebula Sunstone

To get OpenNebula up and running on my machine I had to gather information from a few different sources, since none of the instructions from any single source worked flawlessly.

The steps I took the first time:
Followed the instructions from the official guide.
I've blogged about that process here.

I got through the setup part, but when I tried to run Sunstone it would not authenticate the user, thus failing to launch the tool.

I started looking for solutions online and I came up with these two tutorials:

I tried them, but they didn't work by themselves; something was always missing.

In the end, after all the tries and failures, my system was a mess; I had modified so many things I didn't even remember what was happening.

I then decided to take a step back and analyze the problem.

I re-read the official guide and the other online articles mentioning the installation of OpenNebula and Sunstone.

I then re-imaged my computer and started fresh.

Below is the list of steps I took to install OpenNebula and Sunstone on a CentOS box.

1 – Add EPEL to the yum repo list

[sourcecode language=”bash”]
yum repolist
mv 0608B895.txt /etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-6
rpm --import /etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-6
rpm -ivh epel-release-6-5.noarch.rpm
rpm -qa gpg*
yum clean all
yum update

A more complete explanation about EPEL

2 – GroupInstall Virtualization and Development Tools

[sourcecode language=”bash”]
sudo yum groupinstall "Development Tools"
sudo yum groupinstall Virtualization

3 – Download latest packages for OpenNebula

Downloads Page

I selected the OpenNebula 3.8.1 CentOS 6.3 tarball.

The contents of the downloaded package are:

  • opennebula
  • opennebula-sunstone
  • opennebula-java

I first installed the OpenNebula rpm, followed by the sunstone and java packages.

4 – Install OpenNebula

To install all the required dependencies for OpenNebula install the package with yum:

[sourcecode language=”bash”]
sudo yum localinstall opennebula-3.8.1-1.x86_64.rpm

5 – Install gems

Before installing gems one dependency is required:

[sourcecode language=”bash”]
sudo yum install redhat-lsb
sudo ./install_gems

6 – I had some problems with this part, so there may be a simpler way to circumvent or even prevent this problem

My guess is that since I didn't have mysql up and running, the creation of the credentials for the oneadmin user was not effective, but I'm not sure.

Another issue I was facing was that I wasn't able to log in as the oneadmin user.
If I tried:

[sourcecode language=”bash”]
su oneadmin

it would ask for the password, but I didn't know the password since the user was created during the installation of OpenNebula.
That was really throwing me off, since the official guide tells you to log in as oneadmin before setting its credentials.

Since it wasn't working, I tried the opposite:

[sourcecode language=”bash”]
echo "oneadmin:yourpasshere" > /var/lib/one/.one/one_auth

I tried to log in again as the oneadmin user.
It didn't work.
I then logged in as root and tried again to log in as oneadmin.
This time it worked.

*If anybody knows why I can log in as the oneadmin user when I'm root but not from my default user account, please leave a comment below :)

Anyway, I was able to log in as the oneadmin user, so I kept moving on.

7 – Configure ssh

This part is pretty straightforward and the official guide covers it very well:

Logged as oneadmin:

[sourcecode language=”bash”]
cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
chmod 700 ~/.ssh/
chmod 600 ~/.ssh/id_dsa.pub
chmod 600 ~/.ssh/id_dsa
chmod 600 ~/.ssh/authorized_keys
chmod 600 ~/.one/one_auth

cd ~/.ssh
vim config
Host *
StrictHostKeyChecking no

8 – Starting OpenNebula

Logged as oneadmin

[sourcecode language=”bash”]
one start

Check if the user is properly configured by listing the VMs:

[sourcecode language=”bash”]
onevm list

Here is where I got the error that I mentioned earlier:

["HostPoolInfo] User couldn't be authenticated, aborting call"]

That was very frustrating, since nowhere in the official documentation was there any mention of this.

Looking online I found one mail thread where somebody suggested a solution:

Before I get to the solution, here is where I started the mysql service; as I said before, the fact that mysql was off might have caused the error in the first place.

[sourcecode language=”bash”]
sudo chkconfig --levels 235 mysqld on
sudo service mysqld start

The solution, as suggested by one of the OpenNebula engineers, was to remove the one.db file, located in the home of the oneadmin user.
The default location would be /var/lib/one.
The next time OpenNebula is started, it will use the credentials copied earlier to the one_auth file and then create the one.db file with the correct credentials.

Indeed that solution worked.

Logged as oneadmin

[sourcecode language=”bash”]
one stop
cd ~
rm one.db
one start
onevm list

You should see something like:

9 – Installing Sunstone

Back in the folder where you extracted the OpenNebula package, install the sunstone and java packages.

Before installing sunstone, these dependencies are needed:

[sourcecode language=”bash”]
sudo yum install ruby rubygems rubygem-nokogiri rubygem-json rubygem-rack rubygem-sequel rubygem-sinatra rubygem-sqlite3-ruby rubygem-thin rubygem-uuidtools json sequel sqlite3

And also noVNC; OpenNebula has a script for that:

[sourcecode language=”bash”]
cd /usr/share/one
sudo ./

Now installing the sunstone and java packages:

[sourcecode language=”bash”]
sudo rpm -Uhv opennebula-sunstone-3.8.1-1.x86_64.rpm
sudo rpm -Uhv opennebula-java-3.8.1-1.x86_64.rpm

10 – Start Sunstone

Logged as oneadmin:

[sourcecode language=”bash”]
one start
sunstone-server start

The sunstone server will be listening on port 9869.
All the settings for sunstone are located at /etc/one/sunstone-server.conf
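That file is plain YAML; the server section looks roughly like this (9869 is the packaged default listen port; adjust the bind address and port as needed):

```yaml
# excerpt from /etc/one/sunstone-server.conf
:host: 127.0.0.1
:port: 9869
```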

By now you should be able to access sunstone in your browser by going to

OpenNebula setup, Part 1

This will be the first of many blog posts to come outlining the process of setting up OpenNebula on a CentOS box and managing a small cluster of VMs.

First, a simple and good definition for OpenNebula:

…the open-source industry standard for data center virtualization, offering the most feature-rich, flexible solution for the comprehensive management of virtualized data centers to enable on-premise IaaS clouds

This diagram clearly shows the role of OpenNebula as far as cloud infrastructure goes:

Even though the OpenNebula website is very complete and provides lots of good documentation and reference guides, a lot of that information is still out of context for me.
I haven't dealt with cloud infrastructure before, so it is hard to put all these features and solutions that OpenNebula provides in context.
I guess once we have a small cloud set up, the problems OpenNebula was designed to solve will start becoming clear.

With all that being said, let's get started!

I'll be following the official OpenNebula tutorial.

There are a few ways to install OpenNebula: they have prebuilt packages for a few Linux distros and also provide the source code for whoever wants to build from source.
Later I plan to build from source; for now I'll be installing from one of their pre-built packages.

Download page

I selected the option:
*OpenNebula 3.8.1 Download Source, RHEL/CentOS, Debian, openSUSE and Ubuntu Binary Packages Now!
That took me to another download page where I selected the distro I was interested in, in my case CentOS.

The downloaded tar file came with three packages and the src code:


At first I tried to install the rpm packages by typing:

[sourcecode language=”bash”]
[[email protected] opennebula-3.8.1]$ sudo rpm -Uvh opennebula-3.8.1-1.x86_64.rpm
error: Failed dependencies:
	rubygem-json is needed by opennebula-3.8.1-1.x86_64
	rubygem-nokogiri is needed by opennebula-3.8.1-1.x86_64
	rubygem-rack is needed by opennebula-3.8.1-1.x86_64
	rubygem-sequel is needed by opennebula-3.8.1-1.x86_64
	rubygem-sinatra is needed by opennebula-3.8.1-1.x86_64
	rubygem-sqlite3-ruby is needed by opennebula-3.8.1-1.x86_64
	rubygem-thin is needed by opennebula-3.8.1-1.x86_64
	rubygem-uuidtools is needed by opennebula-3.8.1-1.x86_64
	rubygems is needed by opennebula-3.8.1-1.x86_64

However, it gave me an error saying the package was missing some dependencies.

Instead of installing each dependency by hand, yum has a nice feature that installs all the dependencies automatically:

[sourcecode language=”bash”]
[[email protected] opennebula-3.8.1]$ sudo yum localinstall opennebula-3.8.1-1.x86_64.rpm

After installing all the three packages the next step was to install the required ruby gems.

[sourcecode language=”bash”]
sudo /usr/share/one/install_gems

install_gems is nothing more than a ruby script.
A snippet of the script:

[sourcecode language=”ruby”]
:debian => {
    :id => ['Ubuntu', 'Debian'],
    :dependencies => {
        SQLITE      => ['gcc', 'libsqlite3-dev'],
        'mysql'     => ['gcc', 'libmysqlclient-dev'],
        'curb'      => ['gcc', 'libcurl4-openssl-dev'],
        'nokogiri'  => %w{gcc rake libxml2-dev libxslt1-dev},
        'xmlparser' => ['gcc', 'libexpat1-dev'],
        'thin'      => ['g++'],
        'json'      => ['make', 'gcc']
    },
    :install_command => 'apt-get install',
    :env => {
        'rake' => '/usr/bin/rake'
    }
},
:redhat => {
    :id => ['CentOS', /^RedHat/],
    :dependencies => {
        SQLITE      => ['gcc', 'sqlite-devel'],
        'mysql'     => ['gcc', 'mysql-devel'],
        'curb'      => ['gcc', 'curl-devel'],
        'nokogiri'  => %w{gcc rubygem-rake libxml2-devel libxslt-devel},
        'xmlparser' => ['gcc', 'expat-devel'],
        'thin'      => ['gcc-c++'],
        'json'      => ['make', 'gcc']
    },
    :install_command => 'yum install'
},
:suse => {
    :id => [/^SUSE/],
    :dependencies => {
        SQLITE      => ['gcc', 'sqlite3-devel'],
        'mysql'     => ['gcc', 'libmysqlclient-devel'],
        'curb'      => ['gcc', 'libcurl-devel'],
        'nokogiri'  => %w{rubygem-rake gcc libxml2-devel libxslt-devel},
        'xmlparser' => ['gcc', 'libexpat-devel'],
        'thin'      => ['rubygem-rake', 'gcc-c++'],
        'json'      => ['make', 'gcc']
    },
    :install_command => 'zypper install'
}

It checks which distro you are running and then installs the correct packages for it.

When I ran the script I bumped into two problems:

[sourcecode language=”bash”]
[[email protected] opennebula-3.8.1]$ sudo /usr/share/one/install_gems
mkmf.rb can't find header files for ruby at /usr/lib/ruby/ruby.h
ruby development package is needed to install gems

[[email protected] opennebula-3.8.1]$ sudo /usr/share/one/install_gems
ruby/1.8/rubygems/custom_require.rb:31: command not found: lsb_release -a
lsb_release command not found. If you are using a RedHat based
distribution install redhat-lsb


To fix those problems I installed the following packages:

  • ruby-devel
  • redhat-lsb

This time, running the install_gems script installed all the dependencies without errors.

The next sections of the official OpenNebula tutorial explained how to configure the oneadmin user on the Front-End and Hosts.

They refer to the machine that has OpenNebula installed as the Front-End, and to the machines belonging to the cloud setup as Hosts.

An important point to mention is that OpenNebula only needs to be installed on the Front-End; the hosts only need an ssh server, hypervisors, and ruby installed.

For the oneadmin user configuration I followed the steps listed in the tutorial, but I didn't have a chance to look deeper into what is actually happening.

I'll go over those configuration steps again once I configure a host computer.
Since right now there are no other computers with hypervisors installed on the network, it is hard to test if the oneadmin user is properly configured.

The last part of the tutorial was to actually start OpenNebula and test if everything was installed:

*All interaction with OpenNebula needs to be done via the oneadmin user

So before running the commands below I needed to switch the terminal session to the oneadmin user:

[sourcecode language=”bash”]
su oneadmin

First, set the credentials for the oneadmin user:

[sourcecode language=”bash”]
$ mkdir ~/.one
$ echo "oneadmin:password" > ~/.one/one_auth
$ chmod 600 ~/.one/one_auth

To start opennebula:

[sourcecode language=”bash”]
one start

However, OpenNebula didn't start; I got an error instead:

[sourcecode language=”bash”]
[[email protected] ~]$ one start
Could not open database.
oned failed to start

Well, it turns out that I couldn't even start mysql, so no wonder OpenNebula wasn't able to open the database:

[sourcecode language=”bash”]
[[email protected] opennebula-3.8.1]$ mysql
ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/lib/mysql/mysql.sock' (2)

Searching for the error above, I found out mysql wasn't fully installed on my machine; I wasn't able to start mysql as a service.

I thought that OpenNebula had installed mysql before, but I guess it didn't; either way, I just installed the mysql packages again:

[sourcecode language=”bash”]
yum install mysql-server mysql mysql-client

Then I configured mysql to start on boot:

[sourcecode language=”bash”]
chkconfig --levels 235 mysqld on
service mysqld start

After running those commands I was able to start mysql.

I then tried one more time to start OpenNebula:

[sourcecode language=”bash”]
[[email protected] ~]$ one start
Could not open database.
oned failed to start

But again I got the same error.

I started looking online for possible solutions for the error I was getting but didn’t have any luck.

I remembered reading at the beginning of the tutorial that all the logs for OpenNebula are saved.
They have a very good diagram explaining all the directories used by OpenNebula:

I took a look at the /var/log/one/oned.log file.
That provided some very good information:

OpenNebula Configuration File
DATASTORE_MAD=ARGUMENTS=-t 15 -d fs,vmware,vmfs,iscsi,lvm,EXECUTABLE=one_datastore

The DB was set to sqlite.
I didn't have sqlite installed; no wonder OpenNebula wasn't able to open the DB.

I went to the configuration file where all the settings for OpenNebula are defined:


Indeed the DB was set to sqlite:

DB = [ backend = "sqlite" ]
# Sample configuration for MySQL
# DB = [ backend = "mysql",
# server = "localhost",
# port = 0,
# user = "oneadmin",
# passwd = "oneadmin",
# db_name = "opennebula" ]

I uncommented the configuration for mysql and commented out the one for sqlite.

The last thing left to do was to create the database and user for mysql:

[sourcecode language=”mysql”]
mysql> CREATE DATABASE opennebula;

mysql> GRANT ALL ON opennebula.* TO 'oneadmin'@'localhost' IDENTIFIED BY 'oneadmin';

This time when I tried to run OpenNebula everything worked as expected!

[sourcecode language=”bash”]
[[email protected] diogogmt]$ one start
[[email protected] diogogmt]$ onevm list

The next step is to understand how the ssh configuration works between the Front-End and Hosts, and to try to set up a small cluster of hypervisors and use OpenNebula to manage them.