Please check that vzdev kernel module is loaded and you have sufficient permissions to access the file


Today I got the following error while trying to create a new VE in OpenVZ:


# vzctl create 101 --ostemplate centos-6-x86_64
Unable to open /dev/vzctl: No such device or address
Please check that vzdev kernel module is loaded and you have sufficient permissions to access the file.

Solution :-

# /etc/init.d/vz restart
Stopping OpenVZ: [ OK ]
Starting OpenVZ: [ OK ]
Applying OOM adjustments: [ OK ]
Bringing up interface venet0: [ OK ]


OpenVZ VE monitoring commands


1) Lists the top ten containers based on CPU usage:

/usr/sbin/vzstat -t -s cpu | awk 'NF==10 {print $0}' | head


Lists containers whose 1-, 5-, or 15-minute load average exceeds 10 (the laverage field is the last column):

/usr/sbin/vzlist -o ctid,hostname,ip,laverage | awk 'BEGIN{MAX=10} {split($NF,arr,"/"); if (int(arr[1])>MAX || int(arr[2])>MAX || int(arr[3])>MAX) print $0}'


Prints the load average of every container:

for i in `/usr/sbin/vzlist -H -o ctid`; do echo "CTID: ${i} `/usr/sbin/vzctl exec ${i} cat /proc/loadavg`"; done

2) List out all containers whose status is not "OK". This is quite helpful when troubleshooting load issues where the load average on the node is super high (above 1000).

/usr/sbin/vzstat -t | awk '{if (NF==10 && $2!="OK" && $1!="CTID") print $0}'

3) Lists the top ten containers based on socket usage:
/usr/sbin/vzstat -t -s sock | awk 'NF==10 {print $0}' | head

4) Lists the top 10 containers based on number of processes running inside the container.
/usr/sbin/vzlist -H -o ctid,numproc|sort -r -n -k2|head

5) Lists the top 10 containers based on TCP send buffer usage:
/usr/sbin/vzlist -H -o ctid,tcpsndbuf | sort -r -n -k2 | head

6) Lists the top 10 containers based on TCP receive buffer usage:
/usr/sbin/vzlist -H -o ctid,tcprcvbuf | sort -r -n -k2 | head

7) Sorts containers by the highest inbound traffic (quite useful when troubleshooting network-related attacks):
/usr/sbin/vznetstat -r | awk '$3 ~ /G/ {print $0}' | sort -r -nk3

8) Sorts containers by the highest outbound traffic (quite useful when troubleshooting network-related attacks):
/usr/sbin/vznetstat -r | awk '$5 ~ /G/ {print $0}' | sort -r -nk5

9) List out containers with a resource shortage.
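One way to do this, sketched here as an assumption about the beancounters layout: in /proc/user_beancounters the last column (failcnt) counts how often a container hit a resource limit, so any row with a non-zero failcnt marks a shortage. The sample data below stands in for the real file so the filter can be shown offline:

```shell
# Sample standing in for /proc/user_beancounters (made-up values):
cat > /tmp/ubc.sample <<'EOF'
       uid  resource       held    maxheld    barrier      limit  failcnt
       101: kmemsize    1836251    1916254   11055923   11377049        0
            numiptent       300        300        300        300       12
       102: kmemsize    1000000    1100000   11055923   11377049        0
EOF
# Print rows whose failcnt (last field) is non-zero, skipping the header:
awk 'NR > 1 && $NF + 0 > 0' /tmp/ubc.sample
```

On a live node, run the same awk against /proc/user_beancounters itself.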


How to clone a VPS in Virtuozzo


Cloning refers to a process of creating an exact copy (or multiple copies) of a Virtuozzo Container on the same Hardware Node. The new Container will have its own private area and root directories but the rest of the configuration parameters will be exactly the same. This means that even the parameters that should be unique for each individual Container (IP addresses, hostname, name) will be copied unchanged. You don’t have an option to specify new values during the cloning operation. Instead, you will have to clone the Container first and then update the configuration of the new Container(s) in a separate procedure.

In Virtuozzo-based systems, you can use the vzmlocal utility to copy a Container within the given Hardware Node.

# vzmlocal -C 4466:4465
Moving/copying CT#4466 -> CT#4465, [], [] …
Check disk space
Tracker started
Syncing private area '/vz/private/4466' -> '/vz/private/4465'
OfflineManagement CT#4466 …
Stopping CT#4466 …

Message from syslogd@vzhost0** at Mar 22 04:17:31 …
ack finished successfully
Syncing tracked files from ‘/vz/private/4466/fs’ to ‘/vz/private/4465/fs’
Copying/modifying config scripts of CT#4466 …
OfflineManagement CT#4466 …
Starting CT#4466 …
vzctl : Hostname of the Container set:
ExecAction CT#4465 …
Successfully completed

Check the cloned VPS list

# vzlist -a

You can optionally specify custom private area and root directories for the new Container. To define custom private area and root paths for Container 4465, you can execute the following command:

# vzmlocal -C 4466:4465:/vz/private/dir_4465/:/vz/root/ct4465

Additional parameters can be used as required:

-s, --fast-sid : speeds up the cloning process
-d, --destroy-source : destroys the source container after making the clone
-l, --skiplock : allows cloning locked containers
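Since the clone keeps the source's IP address and hostname (as noted above), a typical follow-up is to reassign them with vzctl set. The container ID, addresses, and hostname below are made-up examples, and the commands are echoed as a dry run rather than executed:

```shell
# Hypothetical post-clone fixup: the clone inherits the source container's
# IP and hostname, so reassign them before starting it. All values below
# are illustrative examples, not real node data.
DST=4465                        # ID of the freshly cloned container
OLD_IP=192.168.0.66             # IP copied over from the source container
NEW_IP=192.168.0.65             # new, unique IP for the clone
NEW_HOST=clone4465.example.com  # new, unique hostname for the clone
# Echoed as a dry run; remove the leading 'echo' to apply for real.
echo vzctl set "$DST" --ipdel "$OLD_IP" --ipadd "$NEW_IP" --save
echo vzctl set "$DST" --hostname "$NEW_HOST" --save
```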


How to Upgrade from VZ 4.0 to VZ 4.6


Upgrading from VZ 4.0 to VZ 4.6

If your server already has VZ 4.0, it’s very easy to upgrade to 4.6. You pretty much don’t have a choice because if you have other servers running 4.6, you won’t be able to move containers back and forth, and you won’t have many new kernel updates headed your way.

Note: This procedure is only for VZ 4.0 servers. If you run an earlier version, you need to re-OS and reinstall VZ. You can't upgrade to 4.6 from versions prior to 4.0!

1. Migrate all your containers off the server

If this is a live server, you'll want to move all your customers off to avoid lengthy downtime. If you plan on moving them back, you can speed up the reverse process by migrating the containers while retaining their private areas.

for ve in `vzlist -H -o veid,diskspace | sort -rnk2 | awk '{print $1}'`; do vzmigrate -r no 192.168.x.x $ve; done
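The sort in the loop above orders containers by disk usage, descending, so the largest containers are migrated first. The ordering can be demonstrated on sample data (made-up CTIDs and sizes standing in for `vzlist -H -o veid,diskspace` output):

```shell
# Sample vzlist output: CTID and diskspace-in-use (illustrative values).
cat > /tmp/vzlist.sample <<'EOF'
101 524288
102 10485760
103 2097152
EOF
# Sort numerically by column 2, descending, then keep only the CTIDs:
sort -rnk2 /tmp/vzlist.sample | awk '{print $1}'
```

This prints 102, then 103, then 101, i.e. biggest first.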

2. Run a Yum update
Take this opportunity to update your operating system, noting that Virtuozzo already has an exclude file (/etc/yum/swsoft-excludes) to prevent the updates of certain packages.
yum update -y

3) Install VZ 4.6
cd /root
chmod 755 vzinstall-linux-x86_64.bin
./vzinstall-linux-x86_64.bin

4) Run the self update
After the installation, run vzup2date twice (one is a self-update, the other is the kernel update)

5) Reboot the server
6) Reapply customizations

If you had any customizations to VZ scripts (such as files in /etc/sysconfig/vz-scripts), you may need to re-apply them.

7) Re-populate the server
Start migrations back to the server, which will take a fraction of the time since the private areas were retained:

for ve in `vzlist -H -o veid,diskspace | sort -rnk2 | awk '{print $1}'`; do vzmigrate -r yes 192.168.x.x $ve; done

How to recreate the Service Container in Virtuozzo


Recreating the Service Container

If your service container isn’t working or you need to recreate it, follow these steps:

1) Download the autoinstaller for Virtuozzo:

2) Run the autoinstaller and select the download-only option. This downloads the files to /root/virtuozzo:

chmod 755 vzinstall-linux.bin
./vzinstall-linux.bin

3) Extract the source to a temporary location:
mkdir -p /vz/temp
/root/virtuozzo/download/Linux/{arch}/virtuozzo-{version}-{arch}.sfx -d /vz/temp --extract

4) Now create the service container and assign it an IP:
vzsveinstall -v -D /vz/temp -s $ip


Cannot lock Container in Virtuozzo


Cannot lock Container

vim /vz/lock/.lk

ps aux | grep ID

# Check whether vzquota is still running for the container, and kill the stale process

vzquota drop
vzctl stop vpsid
vzquota drop vpsid
vzctl start vpsid

[root@server ~]# vzquota off VEID
[root@server ~]# vzquota on VEID
[root@server ~]# vzctl start VEID
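Before removing a lock, it is worth checking whether it is actually stale. A sketch of that check, under the assumption that each lock file under /vz/lock stores the PID of the locking process (the exact layout varies between Virtuozzo versions); a fake lock directory stands in for /vz/lock here:

```shell
# Fake lock dir standing in for /vz/lock (illustrative only):
LOCKDIR=/tmp/vzlock-demo
mkdir -p "$LOCKDIR"
echo 4999999 > "$LOCKDIR/101.lck"   # a PID larger than any valid pid_max
for f in "$LOCKDIR"/*.lck; do
  pid=$(cat "$f")
  # kill -0 only tests for the process's existence, it sends no signal
  if kill -0 "$pid" 2>/dev/null; then
    echo "$f: process $pid is still running; do not remove the lock"
  else
    echo "$f: stale (process $pid is gone); safe to remove"
  fi
done
```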


Failed to enter Container


ISSUE: I am unable to enter one of the VPSes on a Virtuozzo node using the "vzctl enter CTID" command, getting the error "enter failed. Failed to enter container".

root@virtuozzo# vzctl enter 1330
enter failed
Failed to enter Container 1330

REASON: The VZFS symlinks from the Container private area to the system and application templates have somehow become corrupted.

FIX: Use the vzctl recover CTID option to re-write the original symlinks to the Container private area.

The vzctl recover command restores the original VZFS symlinks of the Container private area to the OS and/or application template(s) as they were at the time when the Container was created and/or when the application template(s) were added to the Container. This command does not deal with any user files on the Container:

root@virtuozzo# vzctl recover 1330
Optimizing Container private area…
vzquota : (warning) Quota is running for id 1330 already
Setting quota …
Container is mounted
Setup slm memory limit
Setup slm subgroup (default)
Container is unmounted
Recover OS template: redhat-el5-x86
Creating Container private area (redhat-el5-x86)

Recovering Container completed successfully

As per the Parallels documentation, the recover option doesn't touch user data files, so no data is lost.


Enable fuse on vps – openvz


To enable FUSE in a VPS, first confirm that the FUSE module is loaded on the Hardware Node hosting the VPS.

1. Load the FUSE module on the Hardware Node:
# modprobe fuse

Check that the module loaded properly:
# lsmod | grep fuse

2. Enable FUSE for the VPS:
# vzctl set vpsid --devnodes fuse:rw --save


VPS iptables rule limit is too low


VPS iptables rule limit is too low.

You may come across a "numiptent" error message while restarting iptables or whichever firewall (say, CSF) you have installed on your VPS. The error appears as follows:

The VPS iptables rule limit (numiptent) is too low (300/450) – stopping firewall to prevent iptables blocking all connections

There is a limit on the number of iptables packet-filtering entries for a VPS; if the iptables rules added on a VPS exceed the "numiptent" value that is set, you will receive this error message.

To make sure iptables works properly on a VPS, increase the "numiptent" value in the VPS configuration file located at /etc/sysconfig/vz-scripts/veid.conf and then restart the VPS.

Or you can increase the numiptent value with this command from the node:

# vzctl set 101 --save --numiptent 400
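To judge how close a container is to its limit before raising it, one can read the held/limit/failcnt columns of user_beancounters. The column positions below are an assumption about the standard beancounters layout, and the sample text stands in for /proc/user_beancounters so the parsing can be shown offline:

```shell
# Sample standing in for /proc/user_beancounters (made-up values):
cat > /tmp/ubc101.sample <<'EOF'
       101: kmemsize     1836251    1916254   11055923   11377049        0
            numiptent        287        300        300        300        5
EOF
# On a continuation row the fields are: resource held maxheld barrier limit failcnt
awk '$1 == "numiptent" {printf "held=%s limit=%s failcnt=%s\n", $2, $5, $6}' \
    /tmp/ubc101.sample
```

A non-zero failcnt means the container has already been denied iptables entries at the current limit.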

Virtuozzo Commands


Virtuozzo Command Line Utilities

General utilities are intended for performing day-to-day maintenance tasks:

- Utility to control Containers.
- Utility to view a list of Containers existing on the Node, with additional information.
- Utility to control Virtuozzo Container disk quotas.
- Utility to display the Virtuozzo license status and parameters.
- Utility to manage Virtuozzo licenses on the Hardware Node.
- Utility to activate the Virtuozzo Containers installation, update the Virtuozzo licenses installed on the Hardware Node, or transfer a Virtuozzo license from the Source Node to the Destination Node.
- Utility for migrating Containers from one Hardware Node to another.
- Utility for the local cloning or moving of Containers.
- Utility to migrate a physical server to a Container on the Node.
- Utility to migrate a Container to a physical server.
- Utility to back up Containers.
- Utility to restore backed-up Containers.
- Utility to back up Hardware Nodes and their Containers. As distinct from vzbackup, this utility requires the Parallels Agent software.
- Utility to restore backed-up Hardware Nodes and Containers. As distinct from vzrestore, this utility requires the Parallels Agent software.
- Utility to manage OS and application EZ templates either inside your Containers or on the Hardware Node itself.
- Utility to create OS and application EZ templates.
- Utility to convert Containers based on Virtuozzo standard templates to EZ template-based Containers.
- Utility to create caching proxy servers for handling OS and application EZ templates.
- Utility to create RHN proxy servers for handling the packages included in the RHEL 4 and RHEL 5 OS EZ templates.
- Utility to get a list of templates available on the Hardware Node and in Containers.
- Utility to get information on any template installed on the Hardware Node.
- Utility to create a new package set from binary RPM or DEB files.
- Utility to add a new template to a Container.
- Utility to replace real files inside a Container with symlinks to these very files on the Node.
- Utility to remove a template from a Container.
- Utility to update a set of preinstalled Container archives after a new template installation.

Supplementary tools perform a number of miscellaneous tasks in the Hardware Node and Container context:

- Utility to update your Virtuozzo software and templates.
- Utility to create local mirrors of the official Virtuozzo repository.
- Utility for VZFS optimization and consistency checking.
- Utility to gain extra disk space by caching files that are identical across Containers.
- Utility to create the Service Container on the Hardware Node.
- Utility to update the packages inside the Service Container.
- vzps and vztop: utilities working like the standard ps and top, with Container-related functionality added.
- Utility to switch some services between standalone and xinetd-dependent modes.
- Utility to print current file space usage from the quota's point of view.
- vzdqdump and vzdqload: utilities to dump the Container user/group quota limits and grace times from the kernel or a quota file, or to load them into a quota file.
- Utility that prints network traffic usage statistics by Container.
- Utility for checking CPU utilization by Containers.
- Utility for checking the current memory parameters of the Hardware Node and Containers.
- Utility to calculate resource usage by a Container.
- Utility to check the current system overcommitment and the safety of the total resource control settings.
- Utility to monitor Hardware Node and Container resource consumption in real time.
- Utility that prints the ID of the Container a process belongs to.
- Utility to generate a sample Container configuration file, "splitting" the Hardware Node into equal parts.
- Utility to scale a Container configuration.
- Utility to validate the correctness of a Container configuration file.
- Utility to convert Virtuozzo 2.0.2 Container configuration files to the Virtuozzo 2.5.x format.
- Utility to analyze the logs collected by vzlmond and to generate statistics reports based on these logs (in text and graphical form).
- Utility to draw up a problem report and automatically send it to the Parallels support team.
- Utility to scan the main resources on any Linux server and create a file where this information is recorded.
- Utility to convert Containers based on Virtuozzo standard OS templates to EZ template-based ones.
- Utility to manage network devices on the Hardware Node.
- Utility to migrate installed OS and application templates from one Hardware Node to another.