Containers

Containers = namespaces + cgroups + CoW storage

  • Cgroups = limits how much you can use
  • Namespaces = limits what you can see (and therefore use)
  • CoW storage = considerably reduces footprint and "boot" time

Copy on Write Storage

  • Create a new container instantly, instead of copying its whole filesystem
  • Storage keeps track of what has changed
  • Many options available
    • AUFS, overlay (file level)
    • device-mapper thinp (block level)
    • BTRFS, ZFS (FS level)
  • Considerably reduces footprint and "boot" time
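
For example, overlayfs (one of the file-level options above) merges a read-only lower layer with a writable upper layer; the paths below are made up for this sketch:

    mkdir /tmp/lower /tmp/upper /tmp/work /tmp/merged
    sudo mount -t overlay overlay \
        -o lowerdir=/tmp/lower,upperdir=/tmp/upper,workdir=/tmp/work \
        /tmp/merged
    # writes under /tmp/merged land in /tmp/upper;
    # /tmp/lower (the "image") is never modified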

Namespaces

   A namespace wraps a global system resource in an abstraction that
   makes it appear to the processes within the namespace that they have
   their own isolated instance of the global resource.  Changes to the
   global resource are visible to other processes that are members of
   the namespace, but are invisible to other processes.  One use of
   namespaces is to implement containers.

   Linux provides the following namespaces:

   Namespace   Constant        Isolates
   IPC         CLONE_NEWIPC    System V IPC, POSIX message queues
   Network     CLONE_NEWNET    Network devices, stacks, ports, etc.
   Mount       CLONE_NEWNS     Mount points
   PID         CLONE_NEWPID    Process IDs
   User        CLONE_NEWUSER   User and group IDs
   UTS         CLONE_NEWUTS    Hostname and NIS domain name

The namespaces API

   As well as various /proc files described below, the namespaces API
   includes the following system calls:

   clone(2)
          The clone(2) system call creates a new process.  If the flags
          argument of the call specifies one or more of the CLONE_NEW*
          flags listed above, then **new namespaces are created** for each
          flag, and the **child process is made a member of those
          namespaces**.  (This system call also implements a number of
          features unrelated to namespaces.)

   setns(2)
          The setns(2) system call allows the **calling process to join an
          existing namespace**.  The namespace to join is specified via a
          file descriptor that refers to one of the /proc/[pid]/ns files
          described below.

   unshare(2)
          The unshare(2) system call **moves the calling process to a new
          namespace**.  If the flags argument of the call specifies one or
          more of the CLONE_NEW* flags listed above, then new namespaces
          are created for each flag, and the calling process is made a
          member of those namespaces.  (This system call also implements
          a number of features unrelated to namespaces.)

   Creation of new namespaces using clone(2) and unshare(2) in most
   cases requires the CAP_SYS_ADMIN capability.  User namespaces are the
   exception: since Linux 3.8, no privilege is required to create a user
   namespace.
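
   The unshare(1) and nsenter(1) commands from util-linux wrap unshare(2)
   and setns(2), so the calls can be tried from a shell.  A minimal
   sketch ($TARGET_PID stands for any process whose namespaces you want
   to join):

       # unshare(1) wraps unshare(2): run a shell in new namespaces
       sudo unshare --net --ipc /bin/bash

       # nsenter(1) wraps setns(2): join the namespaces of a running process
       sudo nsenter --target "$TARGET_PID" --net --ipc /bin/bash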

The /proc/[pid]/ns/ directory

   Each process has a /proc/[pid]/ns/ subdirectory containing one entry
   for each namespace that supports being manipulated by setns(2):

       $ ls -l /proc/$$/ns
       total 0
       lrwxrwxrwx. 1 mtk mtk 0 Jan 14 01:20 ipc -> ipc:[4026531839]
       lrwxrwxrwx. 1 mtk mtk 0 Jan 14 01:20 mnt -> mnt:[4026531840]
       lrwxrwxrwx. 1 mtk mtk 0 Jan 14 01:20 net -> net:[4026531956]
       lrwxrwxrwx. 1 mtk mtk 0 Jan 14 01:20 pid -> pid:[4026531836]
       lrwxrwxrwx. 1 mtk mtk 0 Jan 14 01:20 user -> user:[4026531837]
       lrwxrwxrwx. 1 mtk mtk 0 Jan 14 01:20 uts -> uts:[4026531838]

   Bind mounting (see mount(2)) one of the files in this directory to
   somewhere else in the filesystem keeps the corresponding namespace of
   the process specified by pid alive even if all processes currently in
   the namespace terminate.
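
   This is, for instance, how ip-netns(8) keeps network namespaces
   alive; a rough hand-rolled equivalent (the PID 1234 and the target
   path are just examples):

       touch /run/netns-demo
       sudo mount --bind /proc/1234/ns/net /run/netns-demo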

   Opening one of the files in this directory (or a file that is bind
   mounted to one of these files) returns a file handle for the
   corresponding namespace of the process specified by pid.  As long as
   this file descriptor remains open, the namespace will remain alive,
   even if all processes in the namespace terminate.  The file
   descriptor can be passed to setns(2).

IPC namespaces

   IPC namespaces isolate certain IPC resources, namely, **System V IPC
   objects** (see svipc(7)) and (since Linux 2.6.30) **POSIX message queues**
   (see mq_overview(7)).  The common characteristic of these IPC
   mechanisms is that **IPC objects are identified by mechanisms other
   than filesystem pathnames**.

   Each IPC namespace has its own set of System V IPC identifiers and
   its own POSIX message queue filesystem.  Objects created in an IPC
   namespace are visible to all other processes that are members of that
   namespace, but are not visible to processes in other IPC namespaces.

   The following /proc interfaces are distinct in each IPC namespace:

   *  The POSIX message queue interfaces in **/proc/sys/fs/mqueue**.

   *  The System V IPC interfaces in /proc/sys/kernel, namely: msgmax,
      msgmnb, msgmni, sem, shmall, shmmax, shmmni, and shm_rmid_forced.

   *  The System V IPC interfaces in /proc/sysvipc.
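
   The isolation is easy to observe with the System V IPC tools from
   util-linux; a small sketch:

       ipcmk -Q                     # create a message queue on the host
       ipcs -q                      # the queue is listed here
       sudo unshare --ipc ipcs -q   # a fresh IPC namespace sees no queues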

Network namespaces

   Network namespaces provide isolation of the system resources
   associated with networking: **network devices**, **IPv4 and IPv6 protocol
   stacks**, **IP routing tables**, **firewalls**, the **/proc/net directory**, the
   **/sys/class/net** directory, **port numbers (sockets)**, and so on.  **A
   physical network device can live in exactly one network namespace.**  **A
   virtual network device ("veth") pair provides a pipe-like abstraction
   that can be used to create tunnels between network namespaces**, and
   can be used to create a bridge to a physical network device in
   another namespace.
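
   The veth mechanism can be sketched with ip(8); the namespace name,
   interface names, and addresses below are arbitrary:

       sudo ip netns add demo
       sudo ip link add veth0 type veth peer name veth1
       sudo ip link set veth1 netns demo             # move one end inside
       sudo ip addr add 10.0.0.1/24 dev veth0
       sudo ip link set veth0 up
       sudo ip netns exec demo ip addr add 10.0.0.2/24 dev veth1
       sudo ip netns exec demo ip link set veth1 up
       sudo ip netns exec demo ping -c 1 10.0.0.1    # across the "pipe"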

Mount namespace

   Mount namespaces isolate the set of filesystem mount points, meaning
   that processes in different mount namespaces can have different views
   of the filesystem hierarchy.  The set of mounts in a mount namespace
   is modified using mount(2) and umount(2).

   The /proc/[pid]/mounts file (present since Linux 2.4.19) lists all
   the filesystems currently mounted in the process's mount namespace.
   The format of this file is documented in fstab(5).  Since kernel
   version 2.6.15, this file is pollable: after opening the file for
   reading, a change in this file (i.e., a filesystem mount or unmount)
   causes select(2) to mark the file descriptor as readable, and poll(2)
   and epoll_wait(2) mark the file as having an error condition.

   The /proc/[pid]/mountstats file (present since Linux 2.6.17) exports
   information (statistics, configuration information) about the mount
   points in the process's mount namespace.  This file is readable only
   by the owner of the process.

  • Processes can have their own root fs (à la chroot)
  • Processes can also have "private" mounts
    • /tmp (scoped per user, per service...)
    • Masking of /proc, /sys
    • NFS auto-mounts (why not?)
  • Mounts can be totally private, or shared
  • No easy way to pass along a mount from one namespace to another
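
A private /tmp, for example, can be tried with unshare(1); a minimal sketch (recent util-linux makes the new namespace's mounts private by default):

    # the tmpfs mounted on /tmp is invisible outside the new namespace
    sudo unshare --mount sh -c 'mount -t tmpfs tmpfs /tmp; ls /tmp'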

PID namespace

   PID namespaces isolate the process ID number space, meaning that
   processes in different PID namespaces can have the same PID.  PID
   namespaces allow containers to provide functionality such as
   suspending/resuming the set of processes in the container and
   migrating the container to a new host while the processes inside the
   container maintain the same PIDs.
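
   For example, the first process started in a new PID namespace sees
   itself as PID 1 (--mount-proc remounts /proc so ps agrees):

       sudo unshare --pid --fork --mount-proc sh -c 'echo $$; ps'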

User namespace

   User namespaces isolate **security-related identifiers and attributes**,
   in particular, **user IDs and group IDs** (see credentials(7)), **the root
   directory**, **keys** (see keyctl(2)), and **capabilities** (see
   capabilities(7)).  A process's user and group IDs can be different
   inside and outside a user namespace.  In particular, a process can
   have a normal unprivileged user ID outside a user namespace while at
   the same time having a user ID of 0 inside the namespace; in other
   words, the process has full privileges for operations inside the user
   namespace, but is unprivileged for operations outside the namespace.
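
   On kernels that allow unprivileged user namespaces, this is easy to
   try without root; a minimal sketch:

       unshare --user --map-root-user sh -c 'id -u'   # prints 0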

UTS namespace

   UTS namespaces provide isolation of two system identifiers: the
   hostname and the NIS domain name.  These identifiers are set using
   sethostname(2) and setdomainname(2), and can be retrieved using
   uname(2), gethostname(2), and getdomainname(2).
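
   A quick demonstration with unshare(1):

       sudo unshare --uts sh -c 'hostname demo; hostname'   # prints "demo"
       hostname   # the host's name is unchanged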

Control groups

  • Resource metering and limiting
    • memory
    • CPU
    • block I/O
    • network (with cooperation from iptables and tc)
  • Device node (/dev/* ) access control
  • Cgroups are often in /sys/fs/cgroup
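
On a cgroup v1 system, that mount point typically holds one directory per subsystem, something like:

    $ ls /sys/fs/cgroup
    blkio  cpu  cpuacct  cpuset  devices  freezer  memory  net_cls  ...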

Notes:

  • Each subsystem (memory, CPU...) has a hierarchy (tree)
  • Hierarchies are independent: the trees for e.g. memory and CPU can be different
  • Each process belongs to exactly 1 node in each hierarchy
  • Each hierarchy starts with 1 node (the root)
  • All processes initially belong to the root of each hierarchy
  • Each node = group of processes, sharing the same resources

Memory cgroup: accounting

  • Keeps track of pages used by each group:
    • file (read/write/mmap from block devices)
    • anonymous (stack, heap, anonymous mmap)
    • active (recently accessed)
    • inactive (candidate for eviction)
  • Each page is "charged" to a group
  • Pages can be shared across multiple groups, e.g. multiple processes reading from the same files
  • When pages are shared, the groups "split the bill"

Memory cgroup: limits

  • Each group can have (optional) hard and soft limits
  • Soft limits are not enforced, they influence reclaim under memory pressure
  • Hard limits will trigger a per-group OOM killer
  • The OOM killer can be customized (oom-notifier); when the hard limit is exceeded:
    • freeze all processes in the group
    • notify user space (instead of going on a rampage)
    • we can kill processes, raise limits, migrate containers ...
    • when we're in the clear again, unfreeze the group
  • Limits can be set for physical memory, kernel memory, and total memory (physical + swap)
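
With the cgroup v1 interface this looks roughly as follows (the "demo" group name is arbitrary):

    mkdir /sys/fs/cgroup/memory/demo
    echo 100M > /sys/fs/cgroup/memory/demo/memory.limit_in_bytes       # hard limit
    echo 80M  > /sys/fs/cgroup/memory/demo/memory.soft_limit_in_bytes  # soft limit
    echo $PID > /sys/fs/cgroup/memory/demo/tasks                       # move a process in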

Memory cgroup: tricky details

  • Each time the kernel gives a page to a process, or takes it away, it updates the counters
  • This adds some overhead
  • Unfortunately, this cannot be enabled/disabled per process; it has to be done at boot time
  • Cost sharing means that a process leaving a group (e.g. because it terminates) can theoretically cause an out-of-memory condition

Cpu cgroup

  • Keeps track of user/system CPU time
  • Keeps track of usage per CPU
  • Allows setting relative weights (see the sketch after this list)
  • Can't set CPU limits
    • OK, let's say you give N%
    • then the CPU throttles to a lower clock speed
    • now what?
    • same if you give a time slot
    • instructions? their execution speed varies wildly
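
Weights are set through cpu.shares in the v1 interface; a sketch (the "demo" group is arbitrary):

    mkdir /sys/fs/cgroup/cpu/demo
    # relative weight: half of the default 1024
    echo 512 > /sys/fs/cgroup/cpu/demo/cpu.shares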

Cpuset cgroup

  • Pin groups to specific CPU(s)
  • Reserve CPUs for specific apps
  • Avoid processes bouncing between CPUs
  • Also relevant for NUMA systems
  • Provides extra dials and knobs: per-zone memory pressure, process migration costs...
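
A minimal pinning sketch with the v1 interface (cpuset.mems must be set before tasks can join the group):

    mkdir /sys/fs/cgroup/cpuset/demo
    echo 0-1  > /sys/fs/cgroup/cpuset/demo/cpuset.cpus   # CPUs 0 and 1 only
    echo 0    > /sys/fs/cgroup/cpuset/demo/cpuset.mems   # NUMA node 0
    echo $PID > /sys/fs/cgroup/cpuset/demo/tasks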

Blkio cgroup

  • Keeps track of I/Os for each group
    • per block device
    • read vs write
    • sync vs async
  • Set throttles (limits) for each group (see the sketch after this list)
    • per block device
    • read vs write
    • ops vs bytes
  • Set relative weights for each group
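
In the v1 interface, throttles take "major:minor value" pairs; a sketch (8:0 is typically /dev/sda, the "demo" group is arbitrary):

    # limit reads from device 8:0 to 1 MB/s
    echo "8:0 1048576" > /sys/fs/cgroup/blkio/demo/blkio.throttle.read_bps_device
    # relative weight, used by the CFQ scheduler
    echo 500 > /sys/fs/cgroup/blkio/demo/blkio.weight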

Net_cls and net_prio cgroup

  • Automatically set the traffic class or priority for traffic generated by processes in the group
  • Only works for egress traffic
  • Net_cls assigns traffic to a class, which then has to be matched with tc/iptables; otherwise traffic just flows normally (see the sketch below)
  • Net_prio assigns traffic to a priority; priorities are used by queuing disciplines
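
The net_cls flow mirrors the example in the kernel documentation: tag the group with a classid, then create a matching tc class (eth0 and the rate are placeholders):

    # tag traffic from the group as class 10:1
    echo 0x00100001 > /sys/fs/cgroup/net_cls/demo/net_cls.classid

    tc qdisc add dev eth0 root handle 10: htb
    tc class add dev eth0 parent 10: classid 10:1 htb rate 40mbit
    tc filter add dev eth0 parent 10: protocol ip prio 10 handle 1: cgroup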

Devices cgroup

  • Controls what the group can do on device nodes
  • Permissions include read/write/mknod
  • Typical use (see the sketch after this list):
    allow /dev/{tty,zero,random,null} ...
    deny everything else

  • A few interesting nodes:
    • /dev/net/tun (network interface manipulation)
    • /dev/fuse (filesystems in user space)
    • /dev/kvm (VMs in containers, yay inception!)
    • /dev/dri (GPU)
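
In the v1 interface, entries are "type major:minor access" triplets; a deny-by-default sketch (the "demo" group is arbitrary):

    echo a > /sys/fs/cgroup/devices/demo/devices.deny             # drop everything
    echo "c 1:3 rwm" > /sys/fs/cgroup/devices/demo/devices.allow  # /dev/null
    echo "c 5:0 rwm" > /sys/fs/cgroup/devices/demo/devices.allow  # /dev/tty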

Notes

  • PID 1 is placed at the root of each hierarchy
  • When a process is created, it is placed in the same groups as its parent
  • Groups are materialized by one (or multiple) pseudo-fs, typically mounted in /sys/fs/cgroup
  • Groups are created by mkdir in the pseudo-fs
  • To move a process:
    echo $PID > /sys/fs/cgroup/.../tasks
    
  • The cgroup wars: systemd vs cgmanager vs ...

Some missing bits

  • Capabilities
    • break down "root / non-root" into fine-grained rights
    • allow keeping root, but without the dangerous bits
    • however: CAP_SYS_ADMIN remains a big catch-all (see the capsh sketch below)
  • SELinux / AppArmor ...
    • containers that actually contain
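
Capabilities can be explored with capsh(1) from libcap; a sketch that stays root but drops CAP_CHOWN (/tmp/capdemo is just an example file):

    touch /tmp/capdemo
    sudo capsh --drop=cap_chown -- -c 'chown nobody /tmp/capdemo'
    # fails with "Operation not permitted" despite running as root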