An application should sit in a deep dark hole
In a modern webapp, all of the following concerns should be handled by local sidecars:
- inbound request throttling, decapsulation, authentication, and authorization
- outbound service discovery, connection pooling, certificate management, encapsulation, timeouts and retries
- tracking configuration changes and materializing them into a local store
- making requests that use third party API keys (the sidecar is a Level 1 enclave)
- aggregation of logging and statistics
- caching popular data
- health checking and service registration
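To make the inbound side concrete, here is a minimal sketch of the kind of throttling-plus-authentication gate a sidecar might run before forwarding a plaintext request to the app on localhost. The token store, status codes, and `handle_inbound` helper are illustrative assumptions, not a real proxy:

```python
import time

class TokenBucket:
    """Token bucket for inbound request throttling."""

    def __init__(self, rate, burst):
        self.rate = rate            # tokens replenished per second
        self.burst = burst          # maximum tokens held
        self.tokens = burst         # start full
        self.last = time.monotonic()

    def allow(self):
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at the burst size.
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

# Hypothetical credential store; a real sidecar would consult an auth service.
VALID_TOKENS = {"secret-token"}

def handle_inbound(bucket, auth_header):
    """Throttle first, then authenticate. On success a real sidecar would
    forward the decapsulated, plaintext request to the app on 127.0.0.1."""
    if not bucket.allow():
        return 429
    if auth_header not in VALID_TOKENS:
        return 401
    return 200
```

The point is that none of this logic lives in the application: the app only ever sees requests that have already passed the gate.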
So what?
With these concerns offloaded, the web service becomes a boring one. There are no background threads to monitor or manage. There’s no shared state, no mutexes, no chance of race conditions or deadlocks.
There are very few libraries to include from infrastructure teams. There are scant third party libraries to keep updated, and those that do exist are mostly not used in a security-critical context. The service can be heavily seccomped. If an attacker manages to gain an RCE, they’ll also need to convince one of the sidecar processes to help them out.
The operating system tracks each process’s CPU and memory utilization separately, so there’s no ambiguity about which team is responsible for what resource usage, and no one to blame for utilization woes except yourself. There’s no cyclic GC. If a VM runtime starts using too much RAM, you just throw that VM away and start with a fresh zygote.
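That per-process accounting is something the kernel already exposes directly; a minimal sketch of reading it from inside a process (units differ by platform, as noted):

```python
import resource

def peak_rss():
    """Peak resident set size of this process, as the kernel accounts it.
    On Linux ru_maxrss is reported in KiB; on macOS it is in bytes."""
    return resource.getrusage(resource.RUSAGE_SELF).ru_maxrss

# Because the app and each sidecar are separate processes, each one gets
# its own counters; there is no shared runtime to muddy attribution.
print(peak_rss())
```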
There are no in-process credentials; no cryptography to perform. If something goes wrong, an administrator can attach a debugger or take a coredump without the risk of leaking long-lived keys. A syscall tracer can watch plaintext requests as they enter and exit the service’s process boundary.
Sign me up!
Not so fast. Each of the concerns handled by these sidecars is a significant pile of complexity in itself. Mature service meshes, for instance, don’t pop into existence by luck.
In addition, deployment platforms like Kubernetes don’t really provide security isolation within a pod: containers within a pod share a single network namespace and a single Kubernetes service account. Fortunately, Kubernetes does provide the ability to selectively control mounted volumes on a container-by-container basis.
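A sketch of what that per-container control looks like, with image and secret names purely hypothetical: the third party API key is mounted only into the egress sidecar, so the app container can never read it even though both share the pod.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: webapp
spec:
  volumes:
    - name: third-party-keys
      secret:
        secretName: third-party-keys
  containers:
    - name: app
      image: example/app:latest   # no volumeMounts: the app never sees the keys
    - name: egress-sidecar
      image: example/egress:latest
      volumeMounts:
        - name: third-party-keys
          mountPath: /etc/keys
          readOnly: true
```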
I’ll close with Peter Salus’s statement of the unix philosophy:
- Write programs that do one thing and do it well.
- Write programs to work together.
- Write programs to handle text streams, because that is a universal interface.
Modern webapps should be architected according to exactly this philosophy, using OS processes¹ to isolate each concern from all of the others². A well-done realization of this vision would allow the owner of each concern to safely iterate without affecting the others, but that’ll have to be a topic for another post.