Inter-Process Communication (IPC)
A central IPC abstraction in POSIX is the message queue API (mq). On all platforms we studied, applications used some alternate form of IPC; in fact, Android omits the mq APIs altogether. IPC on all of these platforms has evolved divergently beyond (and in some cases parallel to) POSIX.
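For reference, the mq interface that these platforms have moved past is small. The sketch below (queue name, sizes, and message contents are illustrative) shows a queue being created, written to, and read from.

```c
/* Minimal sketch of the POSIX message queue (mq) API referenced above.
 * The queue name and sizes are illustrative. Compile with -lrt on Linux;
 * Android's Bionic libc omits these calls entirely. */
#include <fcntl.h>
#include <mqueue.h>
#include <stdio.h>

int main(void) {
    struct mq_attr attr = { .mq_maxmsg = 8, .mq_msgsize = 64 };
    /* Create (or open) a named queue; names live in a global namespace. */
    mqd_t q = mq_open("/demo_queue", O_CREAT | O_RDWR, 0600, &attr);
    if (q == (mqd_t)-1) { perror("mq_open"); return 1; }

    const char msg[] = "hello";
    mq_send(q, msg, sizeof(msg), /* priority */ 0);

    char buf[64];                       /* must be >= mq_msgsize */
    unsigned prio;
    ssize_t n = mq_receive(q, buf, sizeof(buf), &prio);
    printf("received %zd bytes: %s\n", n, buf);

    mq_close(q);
    mq_unlink("/demo_queue");
    return 0;
}
```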
IPC on Android: Binder
In Android, Binder is the standard method of IPC. Binder APIs are exposed to apps through highly abstracted classes and interface definition files that give structure and meaning to all communication. Some of the insurmountable limitations of traditional POSIX IPC that motivated a new IPC mechanism include: (i) filesystem-based IPC mechanisms, e.g., named pipes, cannot be used in Android (and other similarly sandboxed systems) due to a global security policy that eliminates world-writable directories; (ii) message queues and pipes cannot be used to pass file descriptors between processes (see the sketch below); (iii) POSIX IPC primitives do not support the context management necessary for service handle discovery; (iv) message queues have no support for authorization or credential management between senders and recipients; and (v) there is no support for resource referencing and automatic release of in-kernel structures.
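Limitation (ii) deserves a concrete illustration: in POSIX, the only way to hand an open file descriptor to another process is SCM_RIGHTS ancillary data over a Unix domain socket, a facility that neither pipes nor message queues offer. The sketch below shows the sender side; the connected socket s and the descriptor fd_to_send are assumptions supplied by the caller.

```c
/* Sketch: sending an open file descriptor over a Unix domain socket with
 * SCM_RIGHTS ancillary data -- the one POSIX escape hatch for fd passing.
 * Pipes and mq_send() have no equivalent, which is limitation (ii) above.
 * 's' and 'fd_to_send' are assumed to be set up by the caller. */
#include <string.h>
#include <sys/socket.h>
#include <sys/uio.h>

int send_fd(int s, int fd_to_send) {
    char dummy = '*';                       /* must send at least one byte */
    struct iovec iov = { .iov_base = &dummy, .iov_len = 1 };

    union { struct cmsghdr hdr; char buf[CMSG_SPACE(sizeof(int))]; } ctrl;
    memset(&ctrl, 0, sizeof(ctrl));

    struct msghdr msg = {
        .msg_iov = &iov, .msg_iovlen = 1,
        .msg_control = ctrl.buf, .msg_controllen = sizeof(ctrl.buf),
    };

    struct cmsghdr *cmsg = CMSG_FIRSTHDR(&msg);
    cmsg->cmsg_level = SOL_SOCKET;
    cmsg->cmsg_type = SCM_RIGHTS;           /* "this message carries fds" */
    cmsg->cmsg_len = CMSG_LEN(sizeof(int));
    memcpy(CMSG_DATA(cmsg), &fd_to_send, sizeof(int));

    return sendmsg(s, &msg, 0) < 0 ? -1 : 0;
}
```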
Binder overcomes POSIX IPC limitations and serves as the backbone of Android inter-process communication. Using a custom kernel module, Binder IPC supports file descriptor passing between processes, implements object reference counting, and uses a scalable multi-threaded model that allows a process to consume many simultaneous requests. In addition, Binder leverages its access to processes’ address space and provides fast, single-copy transactions. When a message is sent from one process to another, the in-kernel Binder module allocates space in the destination process’ address space and copies the message directly from the source process’ address space.
Binder exposes IPC abstractions to higher layers of software in appropriately abstract APIs that easily support service discovery (through the Service Manager), and registration of RPCs and intent filtering. Android apps can focus on logical program flow and interact with Binder, and other processes, through what appear to be standard Java objects and methods, without the need to manage low-level IPC details. Because no existing API supported all the necessary functionality, Binder was implemented using ioctl as the singular kernel interface. Binder IPC is used in every Android application, and accounts for nearly 25% of the total measured POSIX CPU time, all of which funnels through the ioctl call.
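Because those details are hidden by libbinder and the AIDL toolchain, apps never see that interface, but the sketch below makes the point concrete: the only kernel entry point is ioctl on /dev/binder. The example merely opens the driver and queries its protocol version; the UAPI header path shown is the mainline kernel's and may differ across Android trees, and real transactions are issued through the BINDER_WRITE_READ ioctl with command buffers assembled by the framework.

```c
/* Sketch: the entire Binder kernel interface is reached through ioctl() on
 * /dev/binder. Here we only open the driver and ask for its protocol
 * version; libbinder issues BINDER_WRITE_READ ioctls for real transactions. */
#include <fcntl.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/android/binder.h>   /* struct binder_version, BINDER_VERSION */

int main(void) {
    int fd = open("/dev/binder", O_RDWR | O_CLOEXEC);
    if (fd < 0) { perror("open /dev/binder"); return 1; }

    struct binder_version ver;
    if (ioctl(fd, BINDER_VERSION, &ver) < 0) { perror("ioctl"); return 1; }
    printf("binder protocol version: %d\n", ver.protocol_version);

    close(fd);
    return 0;
}
```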
IPC on OS X: Mach IPC
IPC in OS X has diverged from POSIX (and its Android/Linux contemporaries) since its inception. Apple’s XNU kernel uses an optimized descendant of CMU’s Mach IPC [16, 44, 61] as the backbone for inter-process communication. Mach comprises a flexible API that supports high-level concepts such as object-based APIs abstracting communication channels as ports, real-time communication, shared memory regions, RPC, synchronization, and secure resource management. Although flexible and extensible, the complexity of Mach has led Apple to develop a simpler, higher-level API called XPC [17]. Most apps use XPC APIs that integrate with other high-level APIs, such as Grand Central Dispatch [18] and launchd, the Mach IPC based init program that provides global IPC service discovery.
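For comparison with the POSIX primitives above, the sketch below exercises the raw Mach port abstraction directly: allocate a port, give ourselves a send right, and bounce a small message off it. The message layout and payload size are illustrative assumptions; applications would normally stay at the XPC layer.

```c
/* Sketch of raw Mach IPC on OS X: allocate a port (receive right), send a
 * small inline message to it, and receive it back in the same process. */
#include <mach/mach.h>
#include <stdio.h>
#include <string.h>

typedef struct {
    mach_msg_header_t header;
    char body[32];                      /* inline payload, illustrative size */
} demo_msg_t;

typedef struct {
    mach_msg_header_t header;
    char body[32];
    mach_msg_trailer_t trailer;         /* appended by the kernel on receive */
} demo_rcv_t;

int main(void) {
    mach_port_t port;
    /* A port is the Mach abstraction of a communication channel. */
    mach_port_allocate(mach_task_self(), MACH_PORT_RIGHT_RECEIVE, &port);
    mach_port_insert_right(mach_task_self(), port, port,
                           MACH_MSG_TYPE_MAKE_SEND);

    demo_msg_t msg;
    memset(&msg, 0, sizeof(msg));
    msg.header.msgh_bits = MACH_MSGH_BITS(MACH_MSG_TYPE_COPY_SEND, 0);
    msg.header.msgh_size = sizeof(msg);
    msg.header.msgh_remote_port = port;        /* destination */
    msg.header.msgh_local_port = MACH_PORT_NULL;
    strcpy(msg.body, "hello");

    mach_msg(&msg.header, MACH_SEND_MSG, sizeof(msg), 0,
             MACH_PORT_NULL, MACH_MSG_TIMEOUT_NONE, MACH_PORT_NULL);

    demo_rcv_t rcv;
    mach_msg(&rcv.header, MACH_RCV_MSG, 0, sizeof(rcv),
             port, MACH_MSG_TIMEOUT_NONE, MACH_PORT_NULL);
    printf("received: %s\n", rcv.body);
    return 0;
}
```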
To highlight key differences between POSIX-style IPC and newer IPC mechanisms, we adapt a simple Android Binder benchmark found within the Android source code to measure pipes and Unix domain sockets as well as Binder transactions. We also use the MPMMTest application found in Apple’s open source XNU [13]. We measure the latency of a round-trip message using several different message sizes, ranging from 32 bytes to 100 (4096-byte) pages. The results are summarized in Table 6. Binder and Mach IPC leverage fast single-copy and zero-copy mechanisms, respectively. Large messages in both Binder and Mach IPC are sent in near-constant time. In contrast, traditional POSIX mechanisms on all platforms suffer from large variation and scale linearly with message size.
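The pipe portion of such a benchmark reduces to a ping-pong loop between two processes; the sketch below shows the idea (the message size and iteration count are illustrative, not the exact parameters of the adapted benchmark).

```c
/* Sketch of a pipe round-trip ("ping-pong") latency measurement: the parent
 * writes a message, the child echoes it back, and the elapsed time is
 * divided by the number of round trips. */
#include <stdio.h>
#include <string.h>
#include <time.h>
#include <unistd.h>

#define MSG_SIZE 4096
#define ITERS    10000

int main(void) {
    int p2c[2], c2p[2];                 /* parent->child and child->parent */
    char buf[MSG_SIZE];
    memset(buf, 'x', sizeof(buf));

    pipe(p2c);
    pipe(c2p);

    if (fork() == 0) {                  /* child: echo everything back */
        for (int i = 0; i < ITERS; i++) {
            ssize_t off = 0, n;
            while (off < MSG_SIZE &&
                   (n = read(p2c[0], buf + off, MSG_SIZE - off)) > 0)
                off += n;
            write(c2p[1], buf, MSG_SIZE);
        }
        _exit(0);
    }

    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (int i = 0; i < ITERS; i++) {   /* parent: send and wait for echo */
        write(p2c[1], buf, MSG_SIZE);
        ssize_t off = 0, n;
        while (off < MSG_SIZE &&
               (n = read(c2p[0], buf + off, MSG_SIZE - off)) > 0)
            off += n;
    }
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double us = (t1.tv_sec - t0.tv_sec) * 1e6 + (t1.tv_nsec - t0.tv_nsec) / 1e3;
    printf("%d-byte round trip: %.2f us\n", MSG_SIZE, us / ITERS);
    return 0;
}
```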
In summary, driven by the common need for feature-rich IPC interfaces, and given the limitations of traditional POSIX IPC, different OSes have created similar, but non-converging and non-standard, IPC mechanisms.
IPC on Linux: D-Bus and kdbus
In Ubuntu, the D-Bus protocol [25] provides apps with high-level IPC abstractions. D-Bus describes an IPC messaging bus system that implements (i) a system daemon monitoring system-wide events, e.g., attaching or detaching a removable media device, and (ii) a per-user login session daemon for communication between applications within the same session. There are several implementations of D-Bus available in Ubuntu. The applications we inspect mostly use the libdbus implementation (38 out of 45 apps). At the lowest level, this library is implemented using traditional Unix domain sockets, and it accounts for less than 1% of the total CPU time measured across our Ubuntu workloads.
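At the libdbus level, a synchronous method call takes only a handful of calls; the sketch below targets the bus daemon's own ListNames method so that no additional services are assumed.

```c
/* Sketch of a synchronous D-Bus method call with libdbus: connect to the
 * per-user session bus and invoke ListNames on the bus daemon itself.
 * Underneath, libdbus marshals the message onto a Unix domain socket.
 * Build with: gcc demo.c $(pkg-config --cflags --libs dbus-1) */
#include <dbus/dbus.h>
#include <stdio.h>

int main(void) {
    DBusError err;
    dbus_error_init(&err);

    DBusConnection *conn = dbus_bus_get(DBUS_BUS_SESSION, &err);
    if (!conn) { fprintf(stderr, "connect: %s\n", err.message); return 1; }

    /* destination, object path, interface, method */
    DBusMessage *call = dbus_message_new_method_call(
        "org.freedesktop.DBus", "/org/freedesktop/DBus",
        "org.freedesktop.DBus", "ListNames");

    DBusMessage *reply = dbus_connection_send_with_reply_and_block(
        conn, call, /* timeout ms */ 1000, &err);
    if (!reply) { fprintf(stderr, "call: %s\n", err.message); return 1; }

    printf("got reply with signature \"%s\"\n",
           dbus_message_get_signature(reply));

    dbus_message_unref(call);
    dbus_message_unref(reply);
    return 0;
}
```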
Another recent evolution in IPC is the Linux Kernel D-Bus, or kdbus. The kdbus system is gaining popularity in GNU/Linux OSes. It is an in-kernel implementation of D-Bus that uses Linux kernel features to overcome inherent limitations of user-space D-Bus implementations. Specifically, it supports zero-copy message passing between processes. In addition, unlike D-Bus, it is available at boot, with no need to wait for the D-Bus daemon to bootstrap, and, because it lives in the kernel, Linux security modules can leverage it directly.
D-Bus is a powerful IPC mechanism, offering:
- Method call transactions
- Signals and properties
- Object-oriented design
- Broadcasting
- Discovery
- Introspection
- Policy
- Activation
- Synchronization
- Type-safe marshalling
- Security
- Monitoring
- Exposes APIs, not byte streams
- Passing of credentials
- File descriptor passing
- Language agnostic
- Network transparency
- No trust required
- High-level error concept
However, D-Bus has limitations:
- Suitable only for control messages, not bulk payload
- Inefficient: 10 copies, 4 complete validations, and 4 context switches per duplex method-call transaction
- The credentials one can send/receive are limited
- No implicit timestamping
- Not available in early boot, the initrd, or late shutdown
- Hook-up with security frameworks happens in userspace
- Activatable bus services are independent of other system services
- The codebase is somewhat baroque (XML, ...)
- No race-free exit-on-idle for bus-activated services
kdbus addresses many of these limitations:
- Suitable for large data (GiB!), zero-copy, optionally reusable
- Efficient: 2 or fewer copies, 2 validations, and 2 context switches per duplex method-call transaction
- The credentials sent along are comprehensive (uid, pid, gid, SELinux label, pid starttime, tid, comm, tid comm, argv, exe, cgroup, caps, audit, ...)
- Implicit timestamping
- Always available, from earliest boot to latest shutdown
- Open for LSMs to hook into from the kernel side
- Activation is identical to activation of other services
- Userspace is much simpler: no XML, ...
- Priority queues, ...
- Race-free exit-on-idle for bus-activated services