The container landscape, once largely synonymous with Docker, has fractured and matured into a complex ecosystem where specialized tools are now the expectation, not the exception. The tectonic shift initiated by Docker Desktop's licensing changes, coupled with a growing industry demand for enhanced security and resource efficiency, has pushed alternatives like Podman, Buildah, and containerd firmly into the mainstream. This isn't merely a rebranding exercise; these tools offer fundamentally different architectural paradigms and workflows that warrant a deep, critical examination. For a broader look at this transition, check out our Deep Dive: Why Podman and containerd 2.0 are Replacing Docker in 2026. Having recently put these updated platforms through their paces, we found that the marketing often simplifies a much more nuanced reality.
The Shifting Sands of Containerization: Beyond the Docker Monolith
For years, "Docker" was the umbrella term for containerization, encapsulating everything from image building and runtime to orchestration. This monolithic approach, while undeniably convenient for rapid adoption, came with inherent trade-offs, particularly in security due to its daemon-centric, root-privileged architecture. The introduction of stricter licensing terms for Docker Desktop merely accelerated an existing trend: developers and organizations seeking more granular control, improved security postures, and leaner resource consumption.
The current landscape, particularly in early 2026, sees a strong push towards Open Container Initiative (OCI) compliant tools. This adherence to OCI Runtime and Image Specifications is the bedrock upon which the interoperability of Podman, Buildah, and containerd rests, allowing them to largely consume and produce the same container images as Docker. However, while OCI compliance ensures basic compatibility, it doesn't magically smooth over the practical differences in how these tools operate, manage resources, or integrate into existing workflows. The promise of "Docker-compatible CLI" often masks underlying complexities, particularly when moving beyond basic run and ps commands.
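That interoperability is easy to demonstrate with skopeo, a companion tool from the same containers ecosystem; in this sketch the image name and registry are placeholders:
# Copy the same OCI image between Docker's daemon store, Podman's local
# storage, and a remote registry without rebuilding it
skopeo copy docker-daemon:my-app:latest containers-storage:my-app:latest
skopeo copy containers-storage:my-app:latest docker://registry.example.com/team/my-app:latest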
Podman's Daemonless Dogma: Security by Design or by Default?
Podman's primary allure remains its daemonless architecture. Unlike Docker's dockerd process, which runs as a privileged background service overseeing all containers, each Podman container is launched as an ordinary process in the user's session, supervised by a lightweight per-container conmon monitor rather than a shared daemon, or managed by systemd for long-running services. This fundamental design choice eliminates a single point of failure and significantly reduces the attack surface, as there's no central daemon with root privileges to compromise.
But here's the catch: the much-lauded "rootless by default" operation, while a genuine security enhancement, isn't a silver bullet. While it's true that containers run as non-root users, preventing an escape from immediately granting root access to the host, configuring rootless environments demands a deeper understanding of Linux user namespaces (user_namespaces), subuid, and subgid mappings. Without proper entries in /etc/subuid and /etc/subgid, users attempting to run rootless containers will hit errors as soon as an image needs more than one UID, because Podman has no unprivileged range on the host to map the container's additional users into. For instance, a simple podman run --rm -it alpine id -u will return 0 inside the container, but by default that container root maps back to the invoking user's own UID on the host, while container UIDs 1 and above map into the high-numbered range defined in subuid (e.g., starting at 100000). This isolation is sturdy, but misconfigurations can lead to opaque failures, requiring a non-trivial amount of troubleshooting for those accustomed to Docker's "just works" rootful defaults.
# Example /etc/subuid entry for user 'developer'
developer:100000:65536
# Example /etc/subgid entry for user 'developer'
developer:100000:65536
# Running a rootless container:
# By default, the container's UID 0 (root) maps to the invoking user's own
# host UID, while container UIDs/GIDs 1-65535 map into the 100000+ subuid/subgid range.
podman run --rm -it --user 0:0 alpine sh -c "id -u && id -g"
# Expected output:
# 0
# 0
# On the host, this container process runs under the 'developer' account,
# with any additional in-container users mapped from the subuid/subgid range.
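The active mapping is easy to verify from inside the rootless user namespace; in this sketch, host UID 1000 for the 'developer' account is an assumption:
# Print the live UID map (columns: container UID, host UID, range length)
podman unshare cat /proc/self/uid_map
# Typical output for the entries above, assuming 'developer' has host UID 1000:
#          0       1000          1
#          1     100000      65536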
This remapping, while secure, fundamentally changes how file permissions and volume mounts behave, often requiring the --userns=keep-id or --userns=auto flags for specific scenarios, or careful use of SELinux labeling with :Z or :z to prevent permission denied errors when interacting with host directories. The learning curve for truly leveraging Podman's security model without hitting operational snags is steeper than often advertised.
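As a concrete illustration of those flags, here is a minimal sketch of a host-directory mount in a rootless container; ./data and the alpine image are arbitrary placeholders:
# Keep the caller's UID/GID inside the container and relabel the host
# directory for SELinux so the bind mount is writable without errors
mkdir -p ./data
podman run --rm --userns=keep-id -v ./data:/data:Z alpine touch /data/hello
ls -ln ./data/hello   # owned by the invoking user's UID, not a high subuid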
Buildah: The Granular Artificer of OCI Images
Buildah carves out a distinct niche by specializing solely in OCI image construction, separating the build process from the runtime concerns handled by Podman or containerd. Its daemonless nature extends to image building, allowing rootless image creation directly from the command line, a significant advantage for CI/CD pipelines where privileged build agents are a security liability.
While Buildah can consume traditional Dockerfiles (often referred to as Containerfiles in the Podman/Buildah ecosystem), its true power lies in its interactive, step-by-step image building capabilities. This allows developers to mount a container's filesystem, make changes, and commit layers explicitly, offering a level of control that docker build (even with BuildKit) simply doesn't provide.
Consider a multi-stage build scenario. With Docker, you define stages in a single Dockerfile. With Buildah, you can execute each stage as a distinct operation:
#!/bin/bash
# Buildah: A more explicit, step-by-step image construction
# 1. Start a new container from a base image
# This creates a "working container" which is essentially a mounted root filesystem
CONTAINER=$(buildah from registry.access.redhat.com/ubi8/ubi)
echo "Working container: $CONTAINER" [Official Docs]
# 2. Install dependencies interactively
# This simulates a RUN instruction but allows for inspection and debugging
buildah run $CONTAINER -- dnf install -y git make gcc [Official Docs]
# 3. Copy application source code
buildah copy $CONTAINER . /app [Official Docs]
# 4. Set working directory and build application
buildah config --workingdir /app $CONTAINER [Official Docs]
buildah run $CONTAINER -- make build [Official Docs]
# 5. Commit the first stage as an image (builder image)
buildah commit $CONTAINER my-app-builder:latest [Official Docs]
# 6. Start a new container for the final, slim image
FINAL_CONTAINER=$(buildah from registry.access.redhat.com/ubi8/ubi-minimal) [Official Docs]
# 7. Copy compiled artifacts from the builder container
# This is analogous to `COPY --from=builder` in a Dockerfile
buildah copy --from $CONTAINER $FINAL_CONTAINER /app/bin/myapp /usr/local/bin/myapp [Official Docs]
# 8. Set entrypoint and commit the final image
buildah config --entrypoint '["/usr/local/bin/myapp"]' $FINAL_CONTAINER [Official Docs]
buildah commit $FINAL_CONTAINER my-app:latest [Official Docs]
# 9. Clean up working containers
buildah rm $CONTAINER $FINAL_CONTAINER
[Red Hat Documentation]
Recent developments in Buildah, such as the --add-file flag introduced in v1.35 (June 2024), allow for adding files directly to committed images, which can be useful for injecting configuration post-build without re-running an entire Dockerfile. More critically, the --sbom flag (also v1.35) for generating Software Bill of Materials during build and commit processes is a pragmatic response to increasing supply chain security demands. While Docker's BuildKit also offers advanced features, Buildah's explicit, command-oriented workflow provides a level of transparency and scriptability that is often more appealing for complex, security-conscious build environments. The buildah farm build feature, enabling distributed builds, also signals its intent to compete with Docker's BuildKit for scaling complex image creation.
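As a rough sketch of what that looks like in practice (the syft scanner preset and output path are illustrative; confirm the available presets with buildah build --help on your version):
# Build from a Containerfile and emit an SBOM alongside the image
buildah build --sbom syft --sbom-output ./my-app.sbom.json -t my-app:latest .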
containerd's Ascendancy: The Unseen Foundation
containerd is not a direct user-facing tool in the same vein as Docker or Podman. Instead, it serves as a robust, low-level runtime that manages the complete container lifecycle, from image transfer and storage to container execution and supervision. It's the engine under the hood for Docker Engine, and crucially, the de facto container runtime interface (CRI) implementation for Kubernetes. This makes containerd a foundational component in nearly all production Kubernetes deployments.
The containerd 2.0 release in late 2024 marked a significant milestone, stabilizing several experimental features and streamlining its API. One notable advancement is the Node Resource Interface (NRI), now enabled by default. NRI provides a standardized plugin mechanism for customizing low-level container configurations, allowing for dynamic resource allocation and policy enforcement at the runtime level. This is critical for advanced scheduling and resource management within Kubernetes, enabling more sophisticated integration with hardware accelerators and specialized resources.
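On containerd 1.7, where NRI still ships disabled, turning it on is a single toggle in /etc/containerd/config.toml; the snippet below is a sketch against the 1.7-era config schema (2.0 enables NRI by default and migrates configs to a new version, so confirm section names with containerd config default):
# /etc/containerd/config.toml (1.7-era schema)
[plugins."io.containerd.nri.v1.nri"]
  disable = false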
For developers, interacting directly with containerd typically involves the ctr CLI, which is notoriously verbose and low-level, serving more as a debugging tool than a daily driver. For a more Docker-like experience, nerdctl has emerged as the preferred client, offering a CLI that closely mirrors Docker's commands while leveraging containerd's capabilities, including features like lazy-loaded images and image encryption.
# Example: Running a container with ctr (verbose)
# Pull image
ctr images pull docker.io/library/nginx:latest
# Create container
ctr containers create docker.io/library/nginx:latest nginx_ctr
# Create and start a task for the container
ctr tasks start nginx_ctr
# Example: Running a container with nerdctl (Docker-compatible)
nerdctl run -d --name web -p 8080:80 nginx:latest
While containerd's role is mostly transparent to end-users (unless you're operating Kubernetes clusters), its continuous development, particularly in areas like CRI User-Namespace Support (experimental in v1.7, likely progressing in v2.x) and an improved Transfer Service for artifact objects, underscores its critical, evolving role at the core of the container ecosystem. Its architecture, built around a robust plugin model for snapshotters and shims, offers immense flexibility for specialized runtimes like runc, crun, or even WebAssembly-based shims (runwasi), which Docker's monolithic design could not easily accommodate.
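To make that plugin and shim flexibility concrete, here is a hedged sketch of registering crun as an additional CRI runtime, again using 1.7-era section names that should be double-checked against containerd config default on a 2.x install:
# /etc/containerd/config.toml (1.7-era schema)
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.crun]
  runtime_type = "io.containerd.runc.v2"
  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.crun.options]
    BinaryName = "crun"
In Kubernetes, such a runtime is then exposed to workloads through a RuntimeClass whose handler matches the name registered here.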
The Performance Conundrum: Benchmarks vs. Reality
Performance benchmarks for container runtimes are notoriously difficult to conduct objectively, and recent comparisons between Podman and Docker are a prime example of conflicting narratives. Some 2025 benchmarks suggest that Podman consistently outperforms Docker in container startup times by 20% to 50% in larger workloads, attributing this to its daemonless, rootless architecture and lower memory footprint (65% less memory when idle due to no daemon). This would naturally lead to more efficient CI/CD pipelines and better resource utilization in automated build environments.
However, other benchmarks from late 2025 indicate Docker might be marginally faster (10-15%) for starting individual containers and image operations because its daemon is always running, thus avoiding the per-invocation process setup that Podman performs. Where Podman generally wins is in idle overhead (no always-on daemon consuming baseline memory) and scalability with many concurrent containers, as there's no central daemon bottleneck. Furthermore, kernel-level improvements have reportedly brought Podman's rootless file I/O performance on par with Docker's native overlay driver.
The reality is that "performance" is workload-dependent. For a single, ephemeral container launch, Docker might indeed feel snappier due to its pre-existing daemon. For a system hosting dozens or hundreds of containers, or in CI/CD where resource efficiency and cold-start times for new builds matter, Podman's daemonless design and lower memory footprint can translate to tangible gains. The critical takeaway is that neither is a universal "winner"; developers must benchmark against their specific use cases and resource constraints rather than relying on generalized claims. The "up to 50% faster" claims, while eye-catching, require scrutiny into the benchmark methodology, including system specs, image sizes, and caching strategies.
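A quick way to produce numbers for your own environment rather than trusting published figures is a cold-start micro-benchmark; this sketch assumes hyperfine is installed and that both engines have already pulled the alpine image:
# Compare startup latency of an identical trivial container on both engines
hyperfine --warmup 3 \
  'docker run --rm alpine true' \
  'podman run --rm alpine true'
This only captures the single, ephemeral launch case; idle memory and behavior under dozens of concurrent containers need to be measured separately.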
Ecosystem Maturity: Podman Desktop and the Missing Links
The user experience for Docker has long been defined by Docker Desktop on macOS and Windows: a polished GUI, integrated Kubernetes, and an extension marketplace. Podman, initially a Linux-first CLI tool, recognized this gap. Podman Desktop has matured rapidly, with versions like 1.25.1 (January 2026) and Podman Engine 5.7.0 (November 2025) bringing significant enhancements.
Podman Desktop now offers a functional GUI for managing containers, images, and pods, along with advanced network creation options (drivers like bridge, macvlan, ipvlan, dual-stack IPv6, custom IP ranges, DNS settings) directly from the UI. Its Kubernetes capabilities have also been enhanced, providing better stability and full Kubernetes API support, and the podman play kube command (which runs Kubernetes YAML files on a local machine) is now cancellable. For macOS and Windows users, Podman Desktop transparently manages a lightweight VM (using WSL2 on Windows, QEMU or native Hypervisor Framework on macOS) to host the Linux container engine, a setup analogous to Docker Desktop's. Podman 5, in particular, improved macOS support by leveraging the native Hypervisor framework and virtiofs for faster I/O.
Despite these strides, Podman Desktop still feels "comparatively newer". While functional, it lacks some of the long-term polish and the vast extension ecosystem of Docker Desktop. More critically, the integration with Docker Compose remains a mixed bag. While podman-compose exists and Podman can run Docker Compose files by pointing to its optional Docker-compatible socket, simply aliasing docker to podman or piping unmodified Docker Compose configurations often bypasses Podman's core security advantages, like user namespace separation per container (UserNS=auto) and robust SELinux integration. To truly leverage Podman's security features with multi-container applications, one is often encouraged to translate Compose files into Kubernetes YAML for podman play kube, or into systemd Quadlet units, which represents a significant shift in workflow and a steeper learning curve (a minimal Quadlet sketch follows below). This is a "missing link" for many developers accustomed to Compose's simplicity for local development.
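For readers weighing that shift, here is a minimal Quadlet sketch for a single rootless service; the unit name, port, and image are placeholders, and Quadlet requires Podman 4.4 or newer:
# ~/.config/containers/systemd/web.container
[Unit]
Description=Nginx served as a rootless Podman container

[Container]
Image=docker.io/library/nginx:latest
PublishPort=8080:80

[Install]
WantedBy=default.target

# Reload systemd and start the generated service:
systemctl --user daemon-reload
systemctl --user start web.service
Unlike an aliased docker compose up, the resulting service is supervised by systemd and runs under the user's rootless Podman defaults.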
Networking and Storage: The Daemonless Maze
Networking and persistent storage in the daemonless world of Podman present a different set of challenges compared to Docker's batteries-included approach. Since Podman 4.0, the default network backend for both rootful and rootless setups has been netavark together with aardvark-dns, replacing the legacy CNI (Container Network Interface) plugins familiar from the Kubernetes ecosystem. In rootless mode the complexity increases: an unprivileged user cannot create host network devices, so outbound connectivity runs through a user-space network stack (slirp4netns, or pasta in newer Podman releases), while netavark manages the network configuration and aardvark-dns provides DNS resolution for rootless containers and pods.
# Creating a custom rootless network with Podman
podman network create my_custom_network --driver bridge --subnet 10.88.0.0/16
# Running a container on the custom network
podman run -d --network my_custom_network --name webserver nginx:latest
# Inspecting the network (note the user-specific storage path)
podman network inspect my_custom_network
# With the netavark backend, rootless network definitions are stored under
# the user's container storage (e.g., ~/.local/share/containers/storage/networks/);
# the legacy CNI backend kept them in ~/.config/cni/net.d/.
While functional, this user-space networking can introduce subtle performance differences or compatibility issues compared to kernel-level networking. Debugging network problems can also be more involved, requiring familiarity with netavark and aardvark-dns logs and configurations, which are less universally understood than Docker's networking primitives. If you are managing complex configurations, you can use this JSON Formatter to verify your structure.
For storage, Podman uses containers/storage, which supports various graph drivers like overlayfs, vfs, and btrfs. Volume mounts behave similarly to Docker, but again, rootless operation introduces permission considerations. Explicitly setting SELinux labels (:Z or :z) when mounting host volumes is often necessary to avoid permission denied errors, especially in hardened Linux environments. While these mechanisms are robust, they demand a more explicit understanding of the underlying Linux security and networking primitives, moving away from Docker's "magic" into a more transparent, yet more demanding, configuration model.
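When a rootless volume does end up owned by the wrong mapped UID, the usual fix is to chown it from inside the user namespace rather than with sudo; ./data and the 1000:1000 IDs below are placeholders for whatever user the containerized process runs as:
# Adjust ownership as the container will see it, via the rootless user namespace
podman unshare chown -R 1000:1000 ./data
On the host, the directory then appears to be owned by the corresponding subuid (e.g., 100999 with the earlier mapping), which is expected rather than a bug.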
Expert Insight: The OCI Specification's Unsung Triumph
The real "game-changer" – if one must use such a term – isn't any single tool, but the quiet, persistent triumph of the Open Container Initiative (OCI) specifications. These standards for container image formats and runtimes have decoupled the concerns of building, distributing, and running containers, enabling the rise of specialized tools like Podman, Buildah, and containerd. Without OCI, we would be locked into proprietary ecosystems, stifling innovation and fostering vendor lock-in.
My prediction for the near future is a continued acceleration towards composable container tooling and build-time security validation. The monolithic container engine is steadily being replaced by a suite of OCI-compliant tools, each excelling at a specific task. Developers will increasingly orchestrate these tools – Buildah for image creation, Podman for local development and pod management, containerd as the robust runtime for production Kubernetes – rather than relying on a single, all-encompassing solution.
A critical trend to watch is the ubiquitous adoption of Software Bill of Materials (SBOM) generation during the build process. Features like Buildah's --sbom flag are not just nice-to-haves; they will become non-negotiable requirements for supply chain security and compliance. Expect to see stricter policies and automated checks that reject images without verifiable SBOMs, pushing developers to integrate these capabilities early in their CI/CD pipelines. This means understanding what goes into your image, not just that it runs. The shift demands a more discerning, security-conscious developer, moving beyond simple docker pull and docker run to a more thoughtful, auditable approach to container lifecycle management.
This article was published by the DataFormatHub Editorial Team, a group of developers and data enthusiasts dedicated to making data transformation accessible and private. Our goal is to provide high-quality technical insights alongside our suite of privacy-first developer tools.
🛠️ Related Tools
Explore these DataFormatHub tools related to this topic:
- YAML to JSON - Convert container configs
- JSON Formatter - Validate container and network config JSON
