{{Short description|Operating-system level virtualization technology}}
{{primary sources|date=December 2013}}
{{notability|software|date=March 2017}}
{{Lead too short|date=March 2024}}

{{Infobox software
| name = OpenVZ
| logo = [[File:OpenVZ-logo.svg|x64px]]
| screenshot = OpenVZ 2.png
| developer = [[Virtuozzo (company)|Virtuozzo]] and OpenVZ community
| released = {{Start date and age|2005||}}
| programming language = C
| operating system = [[Linux]]
| platform = [[x86]], [[x86-64]]
| language = English
| genre = [[Operating-system-level virtualization|OS-level virtualization]]
| license = [[GNU General Public License|GPLv2]]
| website = {{URL|openvz.org}}
}}


'''OpenVZ''' ('''Open [[Virtuozzo]]''') is an [[operating-system-level virtualization]] technology for [[Linux]]. It allows a physical server to run multiple isolated operating system instances, called containers, [[virtual private server]]s (VPSs), or virtual environments (VEs). OpenVZ is similar to [[Solaris Containers]] and [[LXC]].


== OpenVZ compared to other virtualization technologies ==


While virtualization technologies such as [[VMware]], [[Xen]] and [[Kernel-based Virtual Machine|KVM]] provide full virtualization and can run multiple operating systems and different kernel versions, OpenVZ uses a single Linux kernel and therefore can run only Linux. All OpenVZ containers share the same architecture and kernel version. This can be a disadvantage in situations where guests require different kernel versions than that of the host. However, as it does not have the overhead of a true [[hypervisor]], it is very fast and efficient.<ref>{{cite web |url=http://www.hpl.hp.com/techreports/2007/HPL-2007-59R1.html?jumpid=reg_R1002_USEN |url-status=dead |archive-url=https://web.archive.org/web/20090115085242/http://www.hpl.hp.com/techreports/2007/HPL-2007-59R1.html?jumpid=reg_R1002_USEN |archive-date=2009-01-15 |title=Performance Evaluation of Virtualization Technologies for Server Consolidation}}</ref>


Memory allocation with OpenVZ is soft in that memory not used in one virtual environment can be used by others or for [[disk buffer|disk caching]]. While old versions of OpenVZ used a common file system (where each virtual environment is just a directory of files that is isolated using [[chroot]]), current versions of OpenVZ allow each container to have its own file system.<ref>{{cite web |url=http://wiki.openvz.org/Ploop |url-status=dead |archive-url=https://web.archive.org/web/20120326211228/http://wiki.openvz.org/Ploop |archive-date=2012-03-26 |title=Ploop - OpenVZ Linux Containers Wiki}}</ref>
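For illustration, a container with its own ploop-backed file system can be created with <code>vzctl</code> roughly as follows; the container ID, OS template name and disk size are arbitrary example values, and the template must already be installed on the host.

<syntaxhighlight lang="console">
# vzctl create 101 --ostemplate centos-6-x86_64 --layout ploop --diskspace 10G   # root file system stored as a ploop image
# vzctl start 101
</syntaxhighlight>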


== Kernel ==
The OpenVZ kernel is a [[Linux kernel]], modified to add support for OpenVZ containers. The modified kernel provides virtualization, isolation, resource management, and [[Application checkpointing|checkpointing]]. As of vzctl 4.0, OpenVZ can work with unpatched Linux 3.x kernels, with a reduced feature set.<ref>{{cite web | last = Kolyshkin | first = Kir | title = OpenVZ turns 7, gifts are available! | work = OpenVZ Blog | date = 6 October 2012 | url = http://openvz.livejournal.com/42793.html | access-date = 2013-01-17}}</ref>

=== Virtualization and isolation ===
Each container is a separate entity, and behaves largely as a physical server would. Each has its own:
; Files: System [[library (computing)|libraries]], [[application software|applications]], virtualized <code>[[/proc]]</code> and <code>[[/sys]]</code>, virtualized [[lock (computer science)|locks]], etc.
; Users and groups: Each container has its own [[superuser|root user]], as well as other [[user (computing)|users]] and [[group (computing)|groups]].
; Process tree: A container only sees its own [[process (computing)|processes]] (starting from <code>[[init]]</code>). [[Process identifier|PID]]s are virtualized, so that the [[init]] PID is 1 as it should be.
; Network: Virtual [[computer networking device|network device]], which allows a container to have its own [[IP address]]es, as well as a set of [[netfilter/iptables|netfilter (<code>iptables</code>)]], and [[routing]] rules.
; Devices: If needed, any container can be granted access to real devices like [[network interface controller|network interfaces]], [[serial port]]s, [[disk partition]]s, etc.
; IPC objects: [[Shared memory (interprocess communication)|Shared memory]], [[semaphore (programming)|semaphores]], [[message passing|messages]].
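From the host, these per-container resources are managed with the <code>vzctl</code> tool. The session below is an illustrative sketch only, assuming a container with ID 101 already exists; the hostname and IP address are arbitrary example values.

<syntaxhighlight lang="console">
# vzctl set 101 --hostname ct101.example.com --ipadd 192.0.2.101 --save   # per-container hostname and IP address
# vzctl start 101
# vzctl exec 101 ps ax     # the container sees only its own process tree, with init as PID 1
# vzctl enter 101          # open a root shell inside the container
</syntaxhighlight>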


=== Resource management ===
OpenVZ resource management consists of four components: two-level disk quota, fair CPU scheduler, disk I/O scheduler, and user bean counters (see below). These resources can be changed during container [[run time (program lifecycle phase)|run time]], eliminating the need to [[booting|reboot]].


; Two-level disk quota: Each container can have its own [[disk quota]]s, measured in terms of disk blocks and [[inodes]] (roughly number of files). Within the container, it is possible to use standard tools to set UNIX per-user and per-group [[disk quota]]s.
; CPU scheduler: The CPU scheduler in OpenVZ is a two-level implementation of a [[fair-share scheduling]] strategy. On the first level, the scheduler decides which container to give the CPU time slice to, based on per-container ''cpuunits'' values. On the second level, the standard Linux scheduler decides which process to run in that container, using standard Linux process priorities. It is possible to set different ''cpuunits'' values for each container; real CPU time is then distributed proportionally to these values. In addition, OpenVZ provides ways to set strict CPU limits, such as 10% of total CPU time (<code>--cpulimit</code>), to limit the number of CPU cores available to a container (<code>--cpus</code>), and to bind a container to a specific set of CPUs (<code>--cpumask</code>); see the example after this list.<ref>vzctl(8) man page, CPU fair scheduler parameters section, http://openvz.org/Man/vzctl.8#CPU_fair_scheduler_parameters {{Webarchive|url=https://web.archive.org/web/20170414023838/https://openvz.org/Man/vzctl.8#CPU_fair_scheduler_parameters |date=2017-04-14 }}</ref>
; I/O scheduler: Similar to the CPU scheduler described above, the [[I/O scheduling|I/O scheduler]] in OpenVZ is also two-level, utilizing [[Jens Axboe]]'s [[CFQ]] I/O scheduler on its second level. Each container is assigned an I/O priority, and the scheduler distributes the available I/O bandwidth according to the priorities assigned, so that no single container can saturate an I/O channel.
; User Beancounters: User Beancounters is a set of per-container counters, limits, and guarantees, meant to prevent a single container from monopolizing system resources. In current OpenVZ kernels (RHEL6-based 042stab*) there are two primary parameters ('''ram''' and '''swap''', a.k.a. '''physpages''' and '''swappages'''), and others are optional.<ref>{{cite web |url=http://openvz.org/VSwap |url-status=dead |archive-url=https://web.archive.org/web/20130213165243/http://openvz.org/VSwap |archive-date=2013-02-13 |title=VSwap - OpenVZ Linux Containers Wiki}}</ref> Other resources are mostly memory and various in-kernel objects such as [[Shared memory (interprocess communication)|Inter-process communication shared memory]] segments and network buffers. Each resource can be seen in <code>/proc/user_beancounters</code> and has five values associated with it: current usage, maximum usage (for the lifetime of a container), barrier, limit, and fail counter. The meaning of barrier and limit is parameter-dependent; in short, they can be thought of as a soft limit and a hard limit. If any resource hits the limit, its fail counter is increased. This allows the owner to detect problems by monitoring <code>/proc/user_beancounters</code> in the container; an abridged example of this file follows the list.
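The disk quota and CPU parameters above correspond to <code>vzctl set</code> options. The following sketch uses arbitrary example values (barrier:limit pairs for the disk quota, a 10% CPU limit, two cores, and a CPU mask):

<syntaxhighlight lang="console">
# vzctl set 101 --diskspace 10G:11G --diskinodes 200000:220000 --save   # disk quota as barrier:limit
# vzctl set 101 --cpuunits 1000 --cpulimit 10 --cpus 2 --save           # relative CPU share, 10% hard limit, 2 cores
# vzctl set 101 --cpumask 0-1 --save                                    # bind the container to CPUs 0 and 1
</syntaxhighlight>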
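An abridged, illustrative excerpt of <code>/proc/user_beancounters</code> as seen inside a container is shown below; the resource names are real, but the numbers are arbitrary example values.

<syntaxhighlight lang="console">
# cat /proc/user_beancounters
Version: 2.5
       uid  resource      held   maxheld   barrier     limit  failcnt
       101: kmemsize   2718425   4530012  14372700  14790164        0
            numproc         28        46       240       240        0
            physpages    21256     53291         0    262144        0
</syntaxhighlight>

The columns correspond to the current usage, maximum usage, barrier, limit, and fail counter described above.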


=== Checkpointing and live migration ===

A [[live migration]] and [[Application checkpointing|checkpointing]] feature was released for OpenVZ in the middle of April 2006. This makes it possible to move a container from one physical server to another without shutting down the container. The process is known as checkpointing: a container is frozen and its whole state is saved to a file on disk. This file can then be transferred to another machine and a container can be unfrozen (restored) there; the delay is roughly a few seconds. Because state is usually preserved completely, this pause may appear to be an ordinary computational delay.
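On the command line, checkpointing and restoration are exposed through <code>vzctl</code>, and the <code>vzmigrate</code> helper combines them into a live migration. The sketch below uses an arbitrary container ID, dump file path and destination host.

<syntaxhighlight lang="console">
# vzctl chkpnt 101 --dumpfile /vz/dump/ct101.dump    # freeze the container and save its state to disk
# vzctl restore 101 --dumpfile /vz/dump/ct101.dump   # thaw the container from the saved state
# vzmigrate --online dest.example.com 101            # checkpoint, transfer and restore on another host
</syntaxhighlight>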

== Limitations ==

By default, OpenVZ restricts container access to real physical devices (thus making a container hardware-independent). An OpenVZ administrator can enable container access to various real devices, such as disk drives, USB ports,<ref>vzctl(8) man page, Device access management subsection, http://wiki.openvz.org/Man/vzctl.8#Device_access_management</ref> PCI devices<ref>vzctl(8) man page, PCI device management section, http://wiki.openvz.org/Man/vzctl.8#PCI_device_management</ref> or physical network cards.<ref>vzctl(8) man page, Network devices section, http://wiki.openvz.org/Man/vzctl.8#Network_devices_control_parameters</ref>
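Such access is granted per container with <code>vzctl set</code>; the device and interface names below are arbitrary example values.

<syntaxhighlight lang="console">
# vzctl set 101 --devnodes sdb1:rw --save    # allow read-write access to the host's /dev/sdb1
# vzctl set 101 --devnodes ttyS0:rw --save   # grant access to a serial port
# vzctl set 101 --netdev_add eth1 --save     # move a physical network interface into the container
</syntaxhighlight>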

Access to <code>/dev/loopN</code> devices is often restricted in deployments (loop devices rely on kernel threads, which can pose a security issue), which limits the ability to mount disk images inside a container. A workaround is to use [[Filesystem in Userspace|FUSE]].

OpenVZ is limited to providing only some VPN technologies based on PPP (such as PPTP/L2TP) and TUN/TAP. [[IPsec]] has been supported inside containers since kernel 2.6.32.

A [[graphical user interface]] called EasyVZ was attempted in 2007,<ref>[http://www.golem.de/0702/50387.html EasyVZ: Grafische Verwaltung für OpenVZ. Frontend für freie Linux-Virtualisierung]</ref> but it did not progress beyond version 0.1. Up to version 3.4, [[Proxmox Virtual Environment|Proxmox VE]] could be used as an OpenVZ-based server virtualization environment with a GUI, although later versions switched to [[LXC]].

== See also ==
{{Portal|Free and open-source software}}


* [[Comparison of platform virtualization software]]
* [[Operating-system-level virtualization]]
* [[Proxmox Virtual Environment]]


== References ==
{{refs}}


== External links ==


* {{official}}


{{Virtualization software}}
{{Linux kernel}}


{{DEFAULTSORT:Openvz}}
[[Category:Free virtualization software]]
[[Category:Free software programmed in C]]
[[Category:Operating system security]]
[[Category:Virtualization-related software for Linux]]
