Using CGroups with OpenLiteSpeed


CGroups in OpenLiteSpeed

If you are a web server administrator and your users are creating CGI services that are chewing up your server’s CPU, memory, or tasks, you have a powerful tool to deal with it in OpenLiteSpeed: CGroups.

“Cgroup” is short for “control group”. Cgroups is a Linux kernel feature that, according to Wikipedia, “limits, accounts for, and isolates the resource usage (CPU, memory, disk I/O, network, etc.) of a collection of processes.”

The feature is part of the Linux operating system and can be activated and utilized with OpenLiteSpeed version 1.5.1.  The newest incarnation of cgroups is implemented within the systemd subsystem on newer Linux distributions.  We have certified that it operates correctly on RedHat/Centos Linux 7.5 and above, Ubuntu 18.04 and above, and SuSE 15.1 and above.  If you enable cgroups within OpenLiteSpeed on a system where it will not function as expected, OpenLiteSpeed will detect this, log a message, and refuse to attempt to utilize it.

The features that have been certified to work with cgroups are:

  • CPU utilization
  • real memory
  • maximum tasks

There are other features of cgroups, and if you enable them, you do so at your own risk. OpenLiteSpeed will not attempt to stop you.

Things to Know About Cgroups

Cgroups support is only available for true CGI applications at this time.  We discuss the details of enabling it below; note that it operates on the operating system resource limits defined for the owner of the CGI script file.

Containers (like Docker containers) are operating system facilities that run within the base OS, and they rely on cgroups to avoid completely dominating the machine.  This is because cgroups is one of the few mechanisms that can address overuse of resources without killing the offending processes (unlike ulimit and other limiting schemes).  Cgroups helps you better balance the use of your system, keeping it active and functional for all of your users.

Because cgroups is a kernel feature, it affects more than just OpenLiteSpeed.  It also affects tasks running as specific users that start their own user sessions, like SSH or SFTP sessions.  Thus, if you set a CPUQuota of 10% for a user, all of that user’s processes combined, whether started by OpenLiteSpeed or in a login session, may not exceed 10%, and the operating system shares the quota fairly among them.  If you start more processes, the total percentage for the user does not change; each process simply runs a bit slower so as to avoid monopolizing the CPU.

Note that this is managed by systemd, which means that all services within your operating system can be controlled similarly, but only at the service level.  To control limits at the user level, OpenLiteSpeed’s CGI functions utilize specific user-management code.

The reason this method is not more widespread is that not all operating systems ship with systemd.  Also, note that users who are logged in through ‘su’ or a screen window manager (KDE, Gnome, etc.) can’t use cgroups, as their sessions are managed by the service that initialized the slice rather than by the user slice.  OpenLiteSpeed creates its CGI tasks (when configured to) within the user’s session and thus can take advantage of them.

Supported Cgroup Features

The following cgroup facilities are supported by the operating system and have been tested and certified to work with OpenLiteSpeed.  The names specified below are the ones that you will use in systemctl to modify them.

CPUQuota

This is specified as a percentage of a single processor.  By default, the CPUQuota of all tasks is unlimited, which explains why a single user’s task can pretty much monopolize the CPU resources of your system.  This is by far the most useful of the supported features, as CPU time affects not only computation but also access to buffers in buffered I/O.  Tools like top help you monitor CPU utilization for a task and display the user and process involved.  Note that values above 100% are allowed and indicate that multiple CPUs may be used.

TasksMax

TasksMax limits the number of tasks that can be started by a user, ensuring that the number of tasks accounted to the user stays below a specific limit.  It takes an absolute number of tasks for the user on the system.  A typical system default is 12000, which can result in a very large number of tasks, each of which must be managed by the OS.  A reasonable value might be 500.

MemoryLimit

This is the maximum amount of real memory allocated to the executed processes; it is not the amount of virtual memory a process uses.  It takes a memory size in bytes; if the value is suffixed with K, M, G or T, the specified memory size is parsed as kilobytes, megabytes, gigabytes, or terabytes (with the base 1024), respectively.  Remember that this limits real memory, not virtual memory (which is stored both on disk and in real memory).  Real memory pressure affects virtual memory as well, and tasks that take too much of it can weigh a system down.  A reasonable value might be 10M.

Step 1: User Configuration

User configuration is done within the operating system.  Eventually, all Linux operating systems will support the systemctl method fully.  What’s nice about systemctl is that it affects not only the running tasks but is also saved to drop-in configuration files, which are automatically read on reboot.

Currently only Ubuntu 18.04 and SuSE support systemctl correctly for this purpose.  RedHat/Centos 7.5 correctly applies systemctl functions at runtime, but the drop-in files are only saved to a non-persistent file system (/run).  However, once you have configured a user with a persistent directory for drop-in files, even RedHat/Centos will use it correctly, and from then on you can use the systemctl method.

You will need to identify the users you want to limit and obtain their user IDs (uids).  The /etc/passwd file contains the list of users, though there are many ways of finding a user’s uid.  When initially configuring RedHat/Centos, use the drop-in files method described below; after the drop-in files exist, you can edit the values and add other parameters using systemctl.  Note that all configuration must be done either as root or by using the sudo command as a prefix.  The notes below show the entries as if you were running as root; add the sudo prefix if you need to on your system.
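For example, either of the following prints a user’s uid (root is used here only because it exists on every system; substitute the user you actually want to limit, such as user2):

```shell
# Print the numeric uid for a user. "root" is only a stand-in that
# exists on every system; substitute the user you want to limit.
user=root
id -u "$user"                          # prints the uid directly
getent passwd "$user" | cut -d: -f3    # same value, via the passwd database
```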

The systemctl Method

systemctl is the primary tool for accessing services, slices and units on a machine.  It is run from a terminal session.  You can see its options by using:

systemctl --help

You may have noted that this is the same command used to start and stop services on your system.  This is not an accident; these facilities were initially intended for services and have since been extended to users.

You can only modify a service or user that is either active (logged in) or lingering.  Lingering requests that a user slice stay active even when the user is not logged in; there is virtually no overhead in enabling linger for a user.  Linger is one of the very few settings managed with a different program, in this case loginctl.

To turn on linger for user2, enter the following:

loginctl enable-linger user2

You can see if a user is lingered by entering:

loginctl user-status user2

Or you can use systemctl.  systemctl requires that you specify the user slice by user ID (uid), in the form user-UID.slice.  If user2 has a uid of 1001, then you’d specify the user as user-1001.slice:

systemctl show user-1001.slice

The titles and values for the properties are mentioned above (CPUQuota=xx%, TasksMax=xx and MemoryLimit=xx).  To set a property like CPUQuota to 10% for user2 (uid=1001), enter:

systemctl set-property user-1001.slice CPUQuota=10%

Properties set this way take effect immediately for future processes; current ones are not affected.  As mentioned above, they are also saved and may be reapplied on reboot.  If you don’t want them saved, add the modifier --runtime after systemctl and before the command.  For example:

systemctl --runtime set-property user-1001.slice CPUQuota=10%
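The same pattern covers the other two certified properties.  As a sketch (the uid and the limit values are illustrative), the following builds and prints the commands so you can review them before running them as root:

```shell
# Build and print -- rather than run -- the set-property commands for
# one user slice, so they can be reviewed before being applied as root.
# The uid and the limit values are illustrative.
uid=1001
slice="user-${uid}.slice"
for prop in "CPUQuota=10%" "TasksMax=500" "MemoryLimit=10M"; do
    echo "systemctl set-property ${slice} ${prop}"
done
```

Pipe the output to sh (as root) to apply all three at once.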

To see its current value, enter (as you did above):

systemctl show user-1001.slice

Its current value will be buried among all of the other parameters, but it will be there.  The exception is CPUQuota, which a systemctl show command lists as CPUQuotaPerSecUSec (USec means microseconds, though the value is displayed in milliseconds).  A quota of 10% is displayed as 100ms.
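The conversion is simply the percentage applied to one second of wall-clock time: 10% of 1,000 ms is 100 ms of CPU time per second.  A quick check of the arithmetic:

```shell
# A CPUQuota of 10% allows 10% of 1000ms of CPU time per second of
# wall-clock time, which matches CPUQuotaPerSecUSec=100ms.
quota_pct=10
echo "$((quota_pct * 1000 / 100))ms"   # prints 100ms
```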

You can see where the information you update gets stored by using systemctl with the status command and the user slice.  For example:

$ systemctl status user-1001.slice      
● user-1001.slice 
  Loaded: loaded 
 Drop-In: /etc/systemd/system/user-1001.slice.d 
          └─50-CPUQuota.conf, 50-TasksMax.conf 
  Active: active since Wed 2019-01-23 15:56:52 EST; 1 day 22h ago 
   Tasks: 0 (limit: 1000)

Note that it shows its status and the names and locations of the drop-in files.

You will need to repeat the steps above for all of the users you wish to control.

If you are using Ubuntu, proceed to Step 2.

Drop-In Files Method

When initially configuring RedHat/Centos you need to use the drop-in files method.  This requires creating text files, which you can do with a text editor like vi or kwrite, or even with echo.  In the example below, echo is used.  A text editor reduces the amount of typing, and you can copy files from one user to another (recommended, actually, as it avoids typographical errors).

For each user you wish to configure, you will need to create a directory named /etc/systemd/system/user-UID.slice.d.  For example:

mkdir /etc/systemd/system/user-1001.slice.d

You’ll note that it’s the same name as displayed in the systemctl status output above.  In each directory you will need to create a file for each property you wish to modify: 50-CPUQuota.conf, 50-TasksMax.conf, or 50-MemoryLimit.conf.  For example, if I want to set the CPUQuota to 10% for the user with uid 1001, I could do it this way:

mkdir /etc/systemd/system/user-1001.slice.d
cd /etc/systemd/system/user-1001.slice.d
echo "[Slice]" >> 50-CPUQuota.conf
echo "CPUQuota=10%" >> 50-CPUQuota.conf

To validate the contents of the file enter:

cat 50-CPUQuota.conf

If the contents are wrong, it’s best to delete the file (rm 50-CPUQuota.conf in this example) and then repeat the echo process as listed above.  As mentioned, once the system has been rebooted and systemd has found the drop-in files, the user slice configuration options can be modified, and additional ones created, using the systemctl method.
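To extend the example to all three properties, the following sketch writes each drop-in file for one uid.  It defaults to a scratch directory so you can rehearse safely; set base=/etc/systemd/system (and run as root) to write the real files.  The uid and the limit values are illustrative.

```shell
# Write 50-<Property>.conf drop-in files for the three certified
# properties under one user slice directory. By default this rehearses
# in a temporary directory; set base=/etc/systemd/system to do it for
# real (as root). The uid and the limit values are illustrative.
base="${base:-$(mktemp -d)}"
uid=1001
dir="${base}/user-${uid}.slice.d"
mkdir -p "$dir"
for prop in "CPUQuota=10%" "TasksMax=500" "MemoryLimit=10M"; do
    name="${prop%%=*}"                              # e.g. CPUQuota
    printf '[Slice]\n%s\n' "$prop" > "${dir}/50-${name}.conf"
done
ls "$dir"
```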

Step 2: OpenLiteSpeed Setup

To use cgroups with OpenLiteSpeed you need to install glib as root.

  • RedHat/Centos: sudo yum install glib2
  • Ubuntu: sudo apt-get install glib2.0
  • SuSE: sudo zypper install glib2

In OLS WebAdmin, configure OpenLiteSpeed to have CGI apps use the user ID of the owner of the file.  Navigate to Configuration > Virtual Hosts and select the virtual host you wish to manage.  In the Basic tab, modify the Security entry and set the ExtApp Set UID Mode to CGI File UID.  Save the entry.

To enable cgroups from the server level, in OLS WebAdmin, navigate to Server Configuration > Security and edit the CGI Settings group.  The parameter cgroup controls whether cgroups can be used and if so their default:

  • Off (the same as not set):  Allows cgroups to be used by OpenLiteSpeed, but they are not enabled by default – they must be enabled at the vhost level (see below).
  • On:  Allows cgroups to be used by OpenLiteSpeed and enables them by default; they can be disabled at the vhost level.
  • Disabled:  No matter the setting at the vhost level, cgroups are disabled.

Save the group to proceed.  If you do not want to set cgroups at the vhost level, do a graceful restart of OpenLiteSpeed to activate your configuration and proceed to Step 3.

Optional VHost Configuration

If you have not disabled cgroups at the server level and want to further control them at the vhost level, navigate to Configuration > Virtual Hosts and select the virtual host you wish to manage.  In the General tab, modify the General entry and set cgroups:

  • Off: Disables cgroups for this vhost, regardless of the server setting.
  • On: Enables cgroups for this vhost if not disabled at the server level.

Save the group to proceed.  Do a graceful restart of OpenLiteSpeed to activate your configuration and proceed to Step 3.

Modifying the Configuration in Files

If you are using the WebAdmin screens, this section is for reference only.

In $SERVER_ROOT/conf/httpd_config.conf there is a single new parameter in the CGIRLimit section, alongside parameters like procHardLimit:

  • cgroups:  Set to 0 (default) to allow cgroups to be used by OpenLiteSpeed but are not enabled by default – they must be specified at the vhost level.  Set to 1 to enable cgroups by default if no vhost definition exists. Set to 2 to disable cgroup support entirely, regardless of the specification at the vhost level.

You can enable or disable cgroups for specific virtual hosts at the virtual host level unless you have disabled the feature entirely.  That is done in the $SERVER_ROOT/conf/vhosts/<virtual host>/vhconf.conf file.  For example, for the Example virtual host you’d edit $SERVER_ROOT/conf/vhosts/Example/vhconf.conf and set the parameter in the main group, under no specific section, alongside parameters like docRoot and enableGzip:

  • cgroups:  Set to 1 to override the default and turn cgroups on for the virtual host, or 0 to override the default and turn cgroups off for the virtual host.

You will need to gracefully restart OpenLiteSpeed to activate the configuration.
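As a sketch, the relevant lines might look like the following (the values are illustrative, and the comments only indicate where each line belongs):

```
# $SERVER_ROOT/conf/httpd_config.conf, in the CGIRLimit block
# (alongside procHardLimit): enable cgroups by default.
cgroups                 1

# $SERVER_ROOT/conf/vhosts/Example/vhconf.conf, at the top level
# (alongside docRoot and enableGzip): turn cgroups off for this vhost.
cgroups                 0
```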

Step 3: Validation

After configuring users and OpenLiteSpeed to use cgroups you want to make sure that they are applied correctly.  Configure a user to use a small amount of CPU (like 10%).  Then create a CGI file in your virtual host directory which is owned by your configured user (with a command like chown).  Below is an example.

This is a dangerous example, as it uses 100% of one CPU, so it should only be used on test systems.  I’ve named it $SERVER_ROOT/<virtual host>/cgi-bin/pound and changed its owner to user2:
chown user2:users pound

#!/bin/sh

date=`date -u '+%a, %d %b %Y %H:%M:%S %Z'`
results=`/usr/bin/dd if=/dev/zero of=/dev/null` 
cat << EOF
Content-type: text/plain
Expires: $date

$results

EOF

Note that in Ubuntu, the dd command is in /bin/dd rather than /usr/bin/dd (on the results line).

When you request the site in your browser at http://127.0.0.1/cgi-bin/pound, the dd command should be owned by user2 and should take only 10% of the CPU.  You can view this with the top program and kill the process there.  If there are problems, check the $SERVER_ROOT/logs/error.log file for cgroup messages.

Other Useful Tools

There are other useful tools which can help you understand how configuration is implemented in your system:

  • systemd-cgls -a:  Enter this command to display a tree-like structure of the processes and the slice and scope under which they are running.  The dd command in the pound example above should be running under user-1001.slice.
  • /proc/<process-id>/cgroup:  In this file, the line containing name= should show the slice that the process is running in.
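A minimal way to inspect that file for any process (the current shell’s pid is used here as a stand-in; substitute the pid of the dd process, found for example with pgrep -u user2 dd):

```shell
# Print the cgroup membership of a process. $$ (the current shell) is a
# stand-in; substitute the pid of the process you want to inspect.
pid=$$
cat "/proc/${pid}/cgroup"
```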