# Development Report for May 19, 2017

## Containers and Tasks

Moving more functionality into containerd means more requirements from users. One thing we have run into is the disconnect between our container model and what users expect: `docker create; docker start; docker rm` versus a container that is destroyed when it exits.

To users, a container is more of a metadata object that resources (rw layer, configuration) and information (state, last exit status) are attached to. We have been reworking what we call a "container" today into a "task" plus a Container metadata object. The task only has runtime state: a pid, namespaces, cgroups, etc. A container has an id, root filesystem, configuration, and other metadata from the user.

Managing static state and runtime state in the same object is very tricky, so we chose to keep execution and metadata separate. We are hoping the additional task object does not cause more confusion. You can see a mockup of a client interacting with a container, and how the task is handled, below:
```go
// create the metadata Container object: id, configuration, and root filesystem
container, err := client.NewContainer(id, spec, rootfs)

// create a task for the container; the task holds the runtime state (pid, namespaces, cgroups)
task, err := container.CreateTask(containerd.Stdio())
task.Start()
task.Pid()
task.Kill(syscall.SIGKILL)

// remove the container and its metadata when it is no longer needed
container.Delete()
```
[container metadata PR](https://github.com/containerd/containerd/pull/859)
## Checkpoint && Restore
There is a PR open for checkpoint and restore within containerd. We needed to port this code over from the existing containerd branch, but we are able to do more in terms of functionality since filesystem and distribution support are now built in. The overall behavior is the same: you can still checkpoint containers, but instead of the checkpointed data living in a directory on disk, it is written to the content store.

Having checkpoints in the content store allows you to push a checkpoint, including the container's memory data, its rw layer with the files the container has written, and other resources such as bind mounts (volumes), to a registry so that you can live migrate containers to other hosts in your cluster. And you can do all of this with your existing registries; no other services are required to migrate containers around your data center.
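Below is a very rough sketch of how that flow could look with the new client; the method names (`Checkpoint`, `Push`, `Pull`, `NewContainerFromCheckpoint`) and the registry reference are placeholders for illustration, not the final API from the PR:

```go
// checkpoint a running task; the checkpoint data lands in the content store
checkpoint, err := task.Checkpoint()

// push the checkpoint (memory data, rw layer, attached resources) to an existing registry
err = client.Push("registry.example.com/checkpoints/redis:latest", checkpoint)

// on another host: pull the checkpoint and restore a container from it
checkpoint, err = client.Pull("registry.example.com/checkpoints/redis:latest")
restored, err := client.NewContainerFromCheckpoint(id, checkpoint)
```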
[checkpoint restore PR](https://github.com/containerd/containerd/pull/862)
## Snapshot and Diff Service
The RootFS service has been removed and replaced with a snapshot service and a diff service. The snapshot service provides access to all snapshotter methods and allows clients to operate directly against the snapshotter interface. Clients can still prepare rw layers as they could with the RootFS service, but they now also have full access to commit or remove those snapshots directly. The diff service provides two methods: extract and diff. Extract takes a set of mounts and a descriptor referencing a layer tar in the content store, performs the mounts, and extracts the tar into them. Diff takes two sets of mounts, computes the difference between them, writes the diff to the content store, and returns the content descriptor for the computed diff. The diff service is designed to allow clients to pull content into snapshotters without requiring the privileges to mount and handle root-owned files.
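Conceptually, the diff service boils down to something like the following interface; this is an illustrative paraphrase of the two methods described above, not the actual service definition (see the PRs below for the real types):

```go
// illustrative sketch only: Descriptor refers to content in the content store,
// Mount describes how to mount a snapshot
type DiffService interface {
	// Extract applies a layer tar, referenced by a content descriptor,
	// into the filesystem described by the given mounts
	Extract(ctx context.Context, layer Descriptor, mounts []Mount) error

	// Diff computes the changes between two sets of mounts, writes the
	// result to the content store, and returns its content descriptor
	Diff(ctx context.Context, lower, upper []Mount) (Descriptor, error)
}
```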
[Snapshot and Diff Service PR](https://github.com/containerd/containerd/pull/849)
[Diff implementation PR](https://github.com/containerd/containerd/pull/863)
## Roadmap for May and June
We have a few remaining tasks to finish up in the next few weeks before we consider containerd to be feature complete.
### Namespaces
We want the ability to have a single containerd running on a system while allowing multiple consumers like Docker, swarmkit, and Kube to share the same containerd daemon without stepping on each other's containers and images. We will do this with namespaces.
```go
// each client is scoped to a namespace, keeping consumers' containers and images separate
client, err := containerd.NewClient(address, namespace)
```
### Push Support
Right now we have pull support for fetching content, but we also need the ability to produce content, both for builds and for pushing checkpoints.
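To illustrate the symmetry with pull, a push call could look something like the sketch below; the push API does not exist yet, so `Push` (and this `Pull` signature) are assumptions for illustration only:

```go
// pull already works today: fetch an image and its content into the content store
image, err := client.Pull("docker.io/library/redis:latest")

// push is the missing half: publish locally produced content (a build or a
// checkpoint) back to a registry; `Push` is a hypothetical method name
err = client.Push("registry.example.com/library/redis:latest", image)
```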
### Daemon Wide Events
We need to finish the events service so that consumers have a single service they can subscribe to for handling events produced by containerd, images, and their containers.
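A consumer-side loop might look roughly like this; the `Subscribe` call and the channel shapes are assumptions for illustration, since the events API is not finished:

```go
// hypothetical sketch: receive daemon-wide events from a single stream
events, errs := client.Subscribe()
for {
	select {
	case e := <-events:
		fmt.Println("event:", e) // container, image, and daemon events arrive here
	case err := <-errs:
		log.Fatal(err) // the subscription failed
	}
}
```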
After we have these features completed, the majority of our time in June will be spent reviewing the API and client packages so that we have an API we are comfortable supporting in an LTS release.

We currently have a few integrations in the works for using containerd with Kube, Swarmkit, and Docker, and in June we hope to complete some of these integrations and have testing infrastructure set up for them.

So that is the basic plan for the next six weeks or so: get feature complete with the last remaining tasks, then iterate on our API and clients to make sure we have the features that consumers need and an API that we can support for the long term.