Compare commits

36 Commits

Author SHA1 Message Date
08b30c2eaa chore: release (#279)
Co-authored-by: edera-cultivation[bot] <165992271+edera-cultivation[bot]@users.noreply.github.com>
2024-08-06 00:48:44 +00:00
224fdbe227 fix(idm): process all idm messages in the same frame and use childwait exit notification for exec (fixes #290) (#302) 2024-08-06 00:29:09 +00:00
62569f6c59 build(deps): bump the dep-updates group across 1 directory with 4 updates (#300)
Bumps the dep-updates group with 4 updates in the / directory: [bytes](https://github.com/tokio-rs/bytes), [flate2](https://github.com/rust-lang/flate2-rs), [regex](https://github.com/rust-lang/regex) and [serde_json](https://github.com/serde-rs/json).


Updates `bytes` from 1.7.0 to 1.7.1
- [Release notes](https://github.com/tokio-rs/bytes/releases)
- [Changelog](https://github.com/tokio-rs/bytes/blob/master/CHANGELOG.md)
- [Commits](https://github.com/tokio-rs/bytes/compare/v1.7.0...v1.7.1)

Updates `flate2` from 1.0.30 to 1.0.31
- [Release notes](https://github.com/rust-lang/flate2-rs/releases)
- [Commits](https://github.com/rust-lang/flate2-rs/commits)

Updates `regex` from 1.10.5 to 1.10.6
- [Release notes](https://github.com/rust-lang/regex/releases)
- [Changelog](https://github.com/rust-lang/regex/blob/master/CHANGELOG.md)
- [Commits](https://github.com/rust-lang/regex/compare/1.10.5...1.10.6)

Updates `serde_json` from 1.0.121 to 1.0.122
- [Release notes](https://github.com/serde-rs/json/releases)
- [Commits](https://github.com/serde-rs/json/compare/v1.0.121...v1.0.122)

---
updated-dependencies:
- dependency-name: bytes
  dependency-type: direct:production
  update-type: version-update:semver-patch
  dependency-group: dep-updates
- dependency-name: flate2
  dependency-type: direct:production
  update-type: version-update:semver-patch
  dependency-group: dep-updates
- dependency-name: regex
  dependency-type: direct:production
  update-type: version-update:semver-patch
  dependency-group: dep-updates
- dependency-name: serde_json
  dependency-type: direct:production
  update-type: version-update:semver-patch
  dependency-group: dep-updates
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-08-06 00:14:17 +00:00
0b991f454e build(deps): bump the dep-updates group with 2 updates (#301)
Bumps the dep-updates group with 2 updates: [actions/upload-artifact](https://github.com/actions/upload-artifact) and [MarcoIeni/release-plz-action](https://github.com/marcoieni/release-plz-action).


Updates `actions/upload-artifact` from 4.3.4 to 4.3.5
- [Release notes](https://github.com/actions/upload-artifact/releases)
- [Commits](0b2256b8c0...89ef406dd8)

Updates `MarcoIeni/release-plz-action` from 0.5.62 to 0.5.64
- [Release notes](https://github.com/marcoieni/release-plz-action/releases)
- [Commits](86afd21a7b...92ae919a6b)

---
updated-dependencies:
- dependency-name: actions/upload-artifact
  dependency-type: direct:production
  update-type: version-update:semver-patch
  dependency-group: dep-updates
- dependency-name: MarcoIeni/release-plz-action
  dependency-type: direct:production
  update-type: version-update:semver-patch
  dependency-group: dep-updates
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-08-06 00:14:12 +00:00
75aba8a1e3 build(deps): bump the dep-updates group with 4 updates (#296)
Bumps the dep-updates group with 4 updates: [bytes](https://github.com/tokio-rs/bytes), [indexmap](https://github.com/indexmap-rs/indexmap), [toml](https://github.com/toml-rs/toml) and [clap](https://github.com/clap-rs/clap).


Updates `bytes` from 1.6.1 to 1.7.0
- [Release notes](https://github.com/tokio-rs/bytes/releases)
- [Changelog](https://github.com/tokio-rs/bytes/blob/master/CHANGELOG.md)
- [Commits](https://github.com/tokio-rs/bytes/compare/v1.6.1...v1.7.0)

Updates `indexmap` from 2.2.6 to 2.3.0
- [Changelog](https://github.com/indexmap-rs/indexmap/blob/master/RELEASES.md)
- [Commits](https://github.com/indexmap-rs/indexmap/compare/2.2.6...2.3.0)

Updates `toml` from 0.8.17 to 0.8.19
- [Commits](https://github.com/toml-rs/toml/compare/toml-v0.8.17...toml-v0.8.19)

Updates `clap` from 4.5.11 to 4.5.13
- [Release notes](https://github.com/clap-rs/clap/releases)
- [Changelog](https://github.com/clap-rs/clap/blob/master/CHANGELOG.md)
- [Commits](https://github.com/clap-rs/clap/compare/clap_complete-v4.5.11...v4.5.13)

---
updated-dependencies:
- dependency-name: bytes
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: dep-updates
- dependency-name: indexmap
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: dep-updates
- dependency-name: toml
  dependency-type: direct:production
  update-type: version-update:semver-patch
  dependency-group: dep-updates
- dependency-name: clap
  dependency-type: direct:production
  update-type: version-update:semver-patch
  dependency-group: dep-updates
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-08-01 06:29:02 +00:00
8216ab3602 feature(oci): use local index as resolution cache when appropriate, fixes #289 (#294) 2024-07-31 23:05:15 +00:00
902fffe207 build(deps): bump docker/setup-buildx-action in the dep-updates group (#291)
Bumps the dep-updates group with 1 update: [docker/setup-buildx-action](https://github.com/docker/setup-buildx-action).


Updates `docker/setup-buildx-action` from 3.5.0 to 3.6.1
- [Release notes](https://github.com/docker/setup-buildx-action/releases)
- [Commits](aa33708b10...988b5a0280)

---
updated-dependencies:
- dependency-name: docker/setup-buildx-action
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: dep-updates
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-07-31 21:16:37 +00:00
45cfc6bb27 build(deps): bump toml from 0.8.16 to 0.8.17 in the dep-updates group (#292)
Bumps the dep-updates group with 1 update: [toml](https://github.com/toml-rs/toml).


Updates `toml` from 0.8.16 to 0.8.17
- [Commits](https://github.com/toml-rs/toml/compare/toml-v0.8.16...toml-v0.8.17)

---
updated-dependencies:
- dependency-name: toml
  dependency-type: direct:production
  update-type: version-update:semver-patch
  dependency-group: dep-updates
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-07-31 21:16:24 +00:00
146bda0810 build(deps): bump rust in /images in the dep-updates group (#285)
Bumps the dep-updates group in /images with 1 update: rust.


Updates `rust` from 1.79-alpine to 1.80-alpine

---
updated-dependencies:
- dependency-name: rust
  dependency-type: direct:production
  dependency-group: dep-updates
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-07-29 05:34:37 +00:00
45e7d7515b build(deps): bump the dep-updates group across 1 directory with 6 updates (#288)
Bumps the dep-updates group with 6 updates in the / directory:

| Package | From | To |
| --- | --- | --- |
| [env_logger](https://github.com/rust-cli/env_logger) | `0.11.4` | `0.11.5` |
| [serde_json](https://github.com/serde-rs/json) | `1.0.120` | `1.0.121` |
| [termtree](https://github.com/rust-cli/termtree) | `0.5.0` | `0.5.1` |
| [toml](https://github.com/toml-rs/toml) | `0.8.15` | `0.8.16` |
| [clap](https://github.com/clap-rs/clap) | `4.5.10` | `4.5.11` |
| [tokio](https://github.com/tokio-rs/tokio) | `1.39.1` | `1.39.2` |



Updates `env_logger` from 0.11.4 to 0.11.5
- [Release notes](https://github.com/rust-cli/env_logger/releases)
- [Changelog](https://github.com/rust-cli/env_logger/blob/main/CHANGELOG.md)
- [Commits](https://github.com/rust-cli/env_logger/compare/v0.11.4...v0.11.5)

Updates `serde_json` from 1.0.120 to 1.0.121
- [Release notes](https://github.com/serde-rs/json/releases)
- [Commits](https://github.com/serde-rs/json/compare/v1.0.120...v1.0.121)

Updates `termtree` from 0.5.0 to 0.5.1
- [Changelog](https://github.com/rust-cli/termtree/blob/main/CHANGELOG.md)
- [Commits](https://github.com/rust-cli/termtree/compare/v0.5.0...v0.5.1)

Updates `toml` from 0.8.15 to 0.8.16
- [Commits](https://github.com/toml-rs/toml/compare/toml-v0.8.15...toml-v0.8.16)

Updates `clap` from 4.5.10 to 4.5.11
- [Release notes](https://github.com/clap-rs/clap/releases)
- [Changelog](https://github.com/clap-rs/clap/blob/master/CHANGELOG.md)
- [Commits](https://github.com/clap-rs/clap/compare/clap_complete-v4.5.10...clap_complete-v4.5.11)

Updates `tokio` from 1.39.1 to 1.39.2
- [Release notes](https://github.com/tokio-rs/tokio/releases)
- [Commits](https://github.com/tokio-rs/tokio/compare/tokio-1.39.1...tokio-1.39.2)

---
updated-dependencies:
- dependency-name: env_logger
  dependency-type: direct:production
  update-type: version-update:semver-patch
  dependency-group: dep-updates
- dependency-name: serde_json
  dependency-type: direct:production
  update-type: version-update:semver-patch
  dependency-group: dep-updates
- dependency-name: termtree
  dependency-type: direct:production
  update-type: version-update:semver-patch
  dependency-group: dep-updates
- dependency-name: toml
  dependency-type: direct:production
  update-type: version-update:semver-patch
  dependency-group: dep-updates
- dependency-name: clap
  dependency-type: direct:production
  update-type: version-update:semver-patch
  dependency-group: dep-updates
- dependency-name: tokio
  dependency-type: direct:production
  update-type: version-update:semver-patch
  dependency-group: dep-updates
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-07-29 05:34:16 +00:00
f161b5afd6 build(deps): bump rust in /images in the dep-updates group (#281)
Bumps the dep-updates group in /images with 1 update: rust.


Updates `rust` from `a454f49` to `71c9d7a`

---
updated-dependencies:
- dependency-name: rust
  dependency-type: direct:production
  dependency-group: dep-updates
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-07-25 09:33:07 +00:00
7fe3e2c7cb build(deps): bump the dep-updates group with 3 updates (#282)
Bumps the dep-updates group with 3 updates: [env_logger](https://github.com/rust-cli/env_logger), [clap](https://github.com/clap-rs/clap) and [tokio](https://github.com/tokio-rs/tokio).


Updates `env_logger` from 0.11.3 to 0.11.4
- [Release notes](https://github.com/rust-cli/env_logger/releases)
- [Changelog](https://github.com/rust-cli/env_logger/blob/main/CHANGELOG.md)
- [Commits](https://github.com/rust-cli/env_logger/compare/v0.11.3...v0.11.4)

Updates `clap` from 4.5.9 to 4.5.10
- [Release notes](https://github.com/clap-rs/clap/releases)
- [Changelog](https://github.com/clap-rs/clap/blob/master/CHANGELOG.md)
- [Commits](https://github.com/clap-rs/clap/compare/clap_complete-v4.5.9...v4.5.10)

Updates `tokio` from 1.38.1 to 1.39.1
- [Release notes](https://github.com/tokio-rs/tokio/releases)
- [Commits](https://github.com/tokio-rs/tokio/compare/tokio-1.38.1...tokio-1.39.1)

---
updated-dependencies:
- dependency-name: env_logger
  dependency-type: direct:production
  update-type: version-update:semver-patch
  dependency-group: dep-updates
- dependency-name: clap
  dependency-type: direct:production
  update-type: version-update:semver-patch
  dependency-group: dep-updates
- dependency-name: tokio
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: dep-updates
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-07-25 09:32:54 +00:00
3a5be71db4 build(deps): bump the dep-updates group with 3 updates (#280)
Bumps the dep-updates group with 3 updates: [docker/setup-buildx-action](https://github.com/docker/setup-buildx-action), [docker/login-action](https://github.com/docker/login-action) and [docker/build-push-action](https://github.com/docker/build-push-action).


Updates `docker/setup-buildx-action` from 3.4.0 to 3.5.0
- [Release notes](https://github.com/docker/setup-buildx-action/releases)
- [Commits](4fd812986e...aa33708b10)

Updates `docker/login-action` from 3.2.0 to 3.3.0
- [Release notes](https://github.com/docker/login-action/releases)
- [Commits](0d4c9c5ea7...9780b0c442)

Updates `docker/build-push-action` from 6.4.1 to 6.5.0
- [Release notes](https://github.com/docker/build-push-action/releases)
- [Commits](1ca370b3a9...5176d81f87)

---
updated-dependencies:
- dependency-name: docker/setup-buildx-action
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: dep-updates
- dependency-name: docker/login-action
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: dep-updates
- dependency-name: docker/build-push-action
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: dep-updates
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-07-25 09:32:41 +00:00
d1b910f5c4 fix(workflows): upgrade rustup on darwin as best-effort fix for homebrew regression (#284) 2024-07-25 02:15:03 -07:00
8806a79161 zone: init: mount /proc with hidepid=1 (#277)
Mounting procfs with hidepid=1 denies access to the procfs directories
of processes that are not accessible under the current user's credentials.

Signed-off-by: Ariadne Conill <ariadne@ariadne.space>
2024-07-22 06:11:36 +00:00
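For context on the hidepid=1 change above (#277), here is a minimal sketch of such a mount using the nix crate. It is illustrative only, not the actual krata zone-init code; the function name and flag set are assumptions.

```rust
use nix::mount::{mount, MsFlags};

/// Illustrative sketch: mount /proc with hidepid=1 so that a user can only
/// access the /proc/<pid> directories of their own processes; other pid
/// directories stay visible in listings but their contents are restricted.
fn mount_proc_hidepid() -> nix::Result<()> {
    mount(
        Some("proc"),      // source
        "/proc",           // target
        Some("proc"),      // filesystem type
        MsFlags::MS_NOSUID | MsFlags::MS_NODEV | MsFlags::MS_NOEXEC,
        Some("hidepid=1"), // mount data option enforcing the restriction
    )
}
```

The equivalent shell form is `mount -t proc -o nosuid,nodev,noexec,hidepid=1 proc /proc`.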
c8795fa08d build(deps): bump the dep-updates group with 2 updates (#278)
Bumps the dep-updates group with 2 updates: [async-compression](https://github.com/Nullus157/async-compression) and [oci-spec](https://github.com/containers/oci-spec-rs).


Updates `async-compression` from 0.4.11 to 0.4.12
- [Release notes](https://github.com/Nullus157/async-compression/releases)
- [Changelog](https://github.com/Nullus157/async-compression/blob/main/CHANGELOG.md)
- [Commits](https://github.com/Nullus157/async-compression/compare/v0.4.11...v0.4.12)

Updates `oci-spec` from 0.6.7 to 0.6.8
- [Release notes](https://github.com/containers/oci-spec-rs/releases)
- [Changelog](https://github.com/containers/oci-spec-rs/blob/main/release.md)
- [Commits](https://github.com/containers/oci-spec-rs/compare/v0.6.7...v0.6.8)

---
updated-dependencies:
- dependency-name: async-compression
  dependency-type: direct:production
  update-type: version-update:semver-patch
  dependency-group: dep-updates
- dependency-name: oci-spec
  dependency-type: direct:production
  update-type: version-update:semver-patch
  dependency-group: dep-updates
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-07-22 06:07:29 +00:00
d792eb5439 fix(workflows): tag latest version during release-assets workflow (#275) 2024-07-20 00:55:09 +00:00
398e555bd3 chore: release (#249)
Co-authored-by: edera-cultivation[bot] <165992271+edera-cultivation[bot]@users.noreply.github.com>
2024-07-19 06:34:46 +00:00
75901233b1 feature(kratactl): rework cli to use subcommands (#268) 2024-07-19 06:13:29 +00:00
04665ce690 build(deps): bump step-security/harden-runner in the dep-updates group (#269)
Bumps the dep-updates group with 1 update: [step-security/harden-runner](https://github.com/step-security/harden-runner).


Updates `step-security/harden-runner` from 2.8.1 to 2.9.0
- [Release notes](https://github.com/step-security/harden-runner/releases)
- [Commits](17d0e2bd7d...0d381219dd)

---
updated-dependencies:
- dependency-name: step-security/harden-runner
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: dep-updates
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-07-19 05:38:46 +00:00
481a5884d9 fix(workflows): use full platform name in all names (#267) 2024-07-19 04:46:21 +00:00
5ee1035896 feature(krata): rename guest to zone (#266) 2024-07-19 03:47:18 +00:00
9bd8d1bb1d chore(workflows): make builds faster by only installing necessary tools (#265) 2024-07-19 02:26:26 +00:00
3bada811b2 build(deps): bump docker/build-push-action in the dep-updates group (#262)
Bumps the dep-updates group with 1 update: [docker/build-push-action](https://github.com/docker/build-push-action).


Updates `docker/build-push-action` from 6.4.0 to 6.4.1
- [Release notes](https://github.com/docker/build-push-action/releases)
- [Commits](a254f8ca60...1ca370b3a9)

---
updated-dependencies:
- dependency-name: docker/build-push-action
  dependency-type: direct:production
  update-type: version-update:semver-patch
  dependency-group: dep-updates
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-07-18 22:22:35 +00:00
e08d25ebde fix(root): remove empty file (#264) 2024-07-18 22:06:00 +00:00
2c884a6882 fix(workflows): give id-token write permission to nightly and release-assets oci (#263) 2024-07-18 21:47:35 +00:00
d756fa82f4 build(deps): bump the dep-updates group across 1 directory with 5 updates (#261)
Bumps the dep-updates group with 5 updates in the / directory:

| Package | From | To |
| --- | --- | --- |
| [thiserror](https://github.com/dtolnay/thiserror) | `1.0.62` | `1.0.63` |
| [toml](https://github.com/toml-rs/toml) | `0.8.14` | `0.8.15` |
| [tonic-build](https://github.com/hyperium/tonic) | `0.12.0` | `0.12.1` |
| [tokio](https://github.com/tokio-rs/tokio) | `1.38.0` | `1.38.1` |
| [tonic](https://github.com/hyperium/tonic) | `0.12.0` | `0.12.1` |



Updates `thiserror` from 1.0.62 to 1.0.63
- [Release notes](https://github.com/dtolnay/thiserror/releases)
- [Commits](https://github.com/dtolnay/thiserror/compare/1.0.62...1.0.63)

Updates `toml` from 0.8.14 to 0.8.15
- [Commits](https://github.com/toml-rs/toml/compare/toml-v0.8.14...toml-v0.8.15)

Updates `tonic-build` from 0.12.0 to 0.12.1
- [Release notes](https://github.com/hyperium/tonic/releases)
- [Changelog](https://github.com/hyperium/tonic/blob/master/CHANGELOG.md)
- [Commits](https://github.com/hyperium/tonic/compare/v0.12.0...v0.12.1)

Updates `tokio` from 1.38.0 to 1.38.1
- [Release notes](https://github.com/tokio-rs/tokio/releases)
- [Commits](https://github.com/tokio-rs/tokio/compare/tokio-1.38.0...tokio-1.38.1)

Updates `tonic` from 0.12.0 to 0.12.1
- [Release notes](https://github.com/hyperium/tonic/releases)
- [Changelog](https://github.com/hyperium/tonic/blob/master/CHANGELOG.md)
- [Commits](https://github.com/hyperium/tonic/compare/v0.12.0...v0.12.1)

---
updated-dependencies:
- dependency-name: thiserror
  dependency-type: direct:production
  update-type: version-update:semver-patch
  dependency-group: dep-updates
- dependency-name: toml
  dependency-type: direct:production
  update-type: version-update:semver-patch
  dependency-group: dep-updates
- dependency-name: tonic-build
  dependency-type: direct:production
  update-type: version-update:semver-patch
  dependency-group: dep-updates
- dependency-name: tokio
  dependency-type: direct:production
  update-type: version-update:semver-patch
  dependency-group: dep-updates
- dependency-name: tonic
  dependency-type: direct:production
  update-type: version-update:semver-patch
  dependency-group: dep-updates
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-07-18 04:06:07 +00:00
6e051f52b9 chore(workflows): rework and simplify github actions workflows (#260) 2024-07-18 03:48:54 +00:00
b2fba6400e chore(dependabot): look for dockerfiles in images subdirectory (#259) 2024-07-17 02:44:18 +00:00
b26469be28 chore(workflows): use rustup directly to not depend on external actions (#258) 2024-07-17 02:39:16 +00:00
28d63d7d70 chore(cleanup): remove legacy OS technology demo (#256) 2024-07-17 02:02:47 +00:00
6b91f0be94 chore(dependabot): rename version groups to be more concise (#255) 2024-07-17 01:54:21 +00:00
9e91ffe065 chore(security): pin docker images and improve actions permissions (#253) 2024-07-16 22:25:29 +00:00
b57d95c610 chore(deps): upgrade dependencies, fix hyper io traits issue (#252) 2024-07-16 21:15:07 +00:00
de6bfe38fe build(deps): bump docker/build-push-action (#251)
Bumps the production-version-updates group with 1 update: [docker/build-push-action](https://github.com/docker/build-push-action).


Updates `docker/build-push-action` from 6.3.0 to 6.4.0
- [Release notes](https://github.com/docker/build-push-action/releases)
- [Commits](1a162644f9...a254f8ca60)

---
updated-dependencies:
- dependency-name: docker/build-push-action
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: production-version-updates
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-07-16 12:17:01 +00:00
f6dffd6e17 build(deps): bump bytes in the production-version-updates group (#250)
Bumps the production-version-updates group with 1 update: [bytes](https://github.com/tokio-rs/bytes).


Updates `bytes` from 1.6.0 to 1.6.1
- [Release notes](https://github.com/tokio-rs/bytes/releases)
- [Changelog](https://github.com/tokio-rs/bytes/blob/master/CHANGELOG.md)
- [Commits](https://github.com/tokio-rs/bytes/compare/v1.6.0...v1.6.1)

---
updated-dependencies:
- dependency-name: bytes
  dependency-type: direct:production
  update-type: version-update:semver-patch
  dependency-group: production-version-updates
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-07-15 15:23:30 +00:00
103 changed files with 2264 additions and 2185 deletions

View File

@@ -5,10 +5,10 @@ updates:
schedule:
interval: "daily"
groups:
production-version-updates:
dep-updates:
dependency-type: "production"
applies-to: "version-updates"
development-version-updates:
dev-updates:
dependency-type: "development"
applies-to: "version-updates"
- package-ecosystem: "cargo"
@@ -16,20 +16,20 @@ updates:
schedule:
interval: "daily"
groups:
production-version-updates:
dep-updates:
dependency-type: "production"
applies-to: "version-updates"
development-version-updates:
dev-updates:
dependency-type: "development"
applies-to: "version-updates"
- package-ecosystem: "docker"
directory: "/"
directory: "/images"
schedule:
interval: "daily"
groups:
production-version-updates:
dep-updates:
dependency-type: "production"
applies-to: "version-updates"
development-version-updates:
dev-updates:
dependency-type: "development"
applies-to: "version-updates"

View File

@@ -7,30 +7,196 @@ on:
branches:
- main
jobs:
fmt:
name: fmt
rustfmt:
name: rustfmt
runs-on: ubuntu-latest
steps:
- uses: step-security/harden-runner@17d0e2bd7d51742c71671bd19fa12bdc9d40a3d6 # v2.8.1
- name: harden runner
uses: step-security/harden-runner@0d381219ddf674d61a7572ddd19d7941e271515c # v2.9.0
with:
egress-policy: audit
- uses: actions/checkout@692973e3d937129bcbf40652eb9f2f61becf3332 # v4.1.7
- name: checkout repository
uses: actions/checkout@692973e3d937129bcbf40652eb9f2f61becf3332 # v4.1.7
with:
submodules: recursive
- uses: dtolnay/rust-toolchain@d388a4836fcdbde0e50e395dc79a2670ccdef13f # stable
with:
components: rustfmt
- run: ./hack/ci/install-linux-deps.sh
- name: install stable rust toolchain with rustfmt
run: |
rustup update --no-self-update stable
rustup default stable
rustup component add rustfmt
- name: install linux dependencies
run: ./hack/ci/install-linux-deps.sh
# Temporarily ignored: https://github.com/edera-dev/krata/issues/206
- run: ./hack/build/cargo.sh fmt --all -- --check || true
- name: cargo fmt
run: ./hack/build/cargo.sh fmt --all -- --check || true
shellcheck:
name: shellcheck
runs-on: ubuntu-latest
steps:
- uses: step-security/harden-runner@17d0e2bd7d51742c71671bd19fa12bdc9d40a3d6 # v2.8.1
- name: harden runner
uses: step-security/harden-runner@0d381219ddf674d61a7572ddd19d7941e271515c # v2.9.0
with:
egress-policy: audit
- uses: actions/checkout@692973e3d937129bcbf40652eb9f2f61becf3332 # v4.1.7
- name: checkout repository
uses: actions/checkout@692973e3d937129bcbf40652eb9f2f61becf3332 # v4.1.7
with:
submodules: recursive
- run: ./hack/code/shellcheck.sh
- name: shellcheck
run: ./hack/code/shellcheck.sh
full-build:
runs-on: ubuntu-latest
strategy:
fail-fast: false
matrix:
arch:
- x86_64
- aarch64
env:
TARGET_ARCH: "${{ matrix.arch }}"
name: full build linux-${{ matrix.arch }}
steps:
- name: harden runner
uses: step-security/harden-runner@0d381219ddf674d61a7572ddd19d7941e271515c # v2.9.0
with:
egress-policy: audit
- name: checkout repository
uses: actions/checkout@692973e3d937129bcbf40652eb9f2f61becf3332 # v4.1.7
with:
submodules: recursive
- name: install stable rust toolchain
run: |
rustup update --no-self-update stable
rustup default stable
- name: install linux dependencies
run: ./hack/ci/install-linux-deps.sh
- name: cargo build
run: ./hack/build/cargo.sh build
full-test:
runs-on: ubuntu-latest
strategy:
fail-fast: false
matrix:
arch:
- x86_64
- aarch64
env:
TARGET_ARCH: "${{ matrix.arch }}"
name: full test linux-${{ matrix.arch }}
steps:
- name: harden runner
uses: step-security/harden-runner@0d381219ddf674d61a7572ddd19d7941e271515c # v2.9.0
with:
egress-policy: audit
- name: checkout repository
uses: actions/checkout@692973e3d937129bcbf40652eb9f2f61becf3332 # v4.1.7
with:
submodules: recursive
- name: install stable rust toolchain
run: |
rustup update --no-self-update stable
rustup default stable
- name: install linux dependencies
run: ./hack/ci/install-linux-deps.sh
- name: cargo test
run: ./hack/build/cargo.sh test
full-clippy:
runs-on: ubuntu-latest
strategy:
matrix:
arch:
- x86_64
- aarch64
env:
TARGET_ARCH: "${{ matrix.arch }}"
name: full clippy linux-${{ matrix.arch }}
steps:
- name: harden runner
uses: step-security/harden-runner@0d381219ddf674d61a7572ddd19d7941e271515c # v2.9.0
with:
egress-policy: audit
- name: checkout repository
uses: actions/checkout@692973e3d937129bcbf40652eb9f2f61becf3332 # v4.1.7
with:
submodules: recursive
- name: install stable rust toolchain with clippy
run: |
rustup update --no-self-update stable
rustup default stable
rustup component add clippy
- name: install linux dependencies
run: ./hack/ci/install-linux-deps.sh
- name: cargo clippy
run: ./hack/build/cargo.sh clippy
zone-initrd:
runs-on: ubuntu-latest
strategy:
matrix:
arch:
- x86_64
- aarch64
env:
TARGET_ARCH: "${{ matrix.arch }}"
name: zone initrd linux-${{ matrix.arch }}
steps:
- name: harden runner
uses: step-security/harden-runner@0d381219ddf674d61a7572ddd19d7941e271515c # v2.9.0
with:
egress-policy: audit
- name: checkout repository
uses: actions/checkout@692973e3d937129bcbf40652eb9f2f61becf3332 # v4.1.7
with:
submodules: recursive
- name: install stable rust toolchain with ${{ matrix.arch }}-unknown-linux-gnu and ${{ matrix.arch }}-unknown-linux-musl rust targets
run: |
rustup update --no-self-update stable
rustup default stable
rustup target add ${{ matrix.arch }}-unknown-linux-gnu ${{ matrix.arch }}-unknown-linux-musl
- name: install linux dependencies
run: ./hack/ci/install-linux-deps.sh
- name: initrd build
run: ./hack/initrd/build.sh
kratactl-build:
strategy:
fail-fast: false
matrix:
platform:
- { os: linux, arch: x86_64, on: ubuntu-latest, deps: linux }
- { os: linux, arch: aarch64, on: ubuntu-latest, deps: linux }
- { os: darwin, arch: x86_64, on: macos-14, deps: darwin }
- { os: darwin, arch: aarch64, on: macos-14, deps: darwin }
- { os: freebsd, arch: x86_64, on: ubuntu-latest, deps: linux }
- { os: windows, arch: x86_64, on: windows-latest, deps: windows }
env:
TARGET_OS: "${{ matrix.platform.os }}"
TARGET_ARCH: "${{ matrix.platform.arch }}"
runs-on: "${{ matrix.platform.on }}"
name: kratactl build ${{ matrix.platform.os }}-${{ matrix.platform.arch }}
defaults:
run:
shell: bash
steps:
- name: harden runner
uses: step-security/harden-runner@0d381219ddf674d61a7572ddd19d7941e271515c # v2.9.0
with:
egress-policy: audit
- name: configure git line endings
run: git config --global core.autocrlf false && git config --global core.eol lf
if: ${{ matrix.platform.os == 'windows' }}
- name: checkout repository
uses: actions/checkout@692973e3d937129bcbf40652eb9f2f61becf3332 # v4.1.7
with:
submodules: recursive
- name: install stable rust toolchain
run: |
rustup update --no-self-update stable
rustup default stable
- name: install ${{ matrix.platform.arch }}-apple-darwin rust target
run: "rustup target add --toolchain stable ${{ matrix.platform.arch }}-apple-darwin"
if: ${{ matrix.platform.os == 'darwin' }}
- name: setup homebrew
uses: homebrew/actions/setup-homebrew@4b34604e75af8f8b23b454f0b5ffb7c5d8ce0056 # master
if: ${{ matrix.platform.os == 'darwin' }}
- name: install ${{ matrix.platform.deps }} dependencies
run: ./hack/ci/install-${{ matrix.platform.deps }}-deps.sh
- name: cargo build kratactl
run: ./hack/build/cargo.sh build --bin kratactl

View File

@@ -1,47 +0,0 @@
name: client
on:
pull_request:
branches:
- main
merge_group:
branches:
- main
jobs:
build:
strategy:
fail-fast: false
matrix:
platform:
- { os: linux, arch: x86_64, on: ubuntu-latest, deps: linux }
- { os: linux, arch: aarch64, on: ubuntu-latest, deps: linux }
- { os: darwin, arch: x86_64, on: macos-14, deps: darwin }
- { os: darwin, arch: aarch64, on: macos-14, deps: darwin }
- { os: freebsd, arch: x86_64, on: ubuntu-latest, deps: linux }
- { os: windows, arch: x86_64, on: windows-latest, deps: windows }
env:
TARGET_OS: "${{ matrix.platform.os }}"
TARGET_ARCH: "${{ matrix.platform.arch }}"
runs-on: "${{ matrix.platform.on }}"
name: client build ${{ matrix.platform.os }}-${{ matrix.platform.arch }}
defaults:
run:
shell: bash
steps:
- uses: step-security/harden-runner@17d0e2bd7d51742c71671bd19fa12bdc9d40a3d6 # v2.8.1
with:
egress-policy: audit
- run: git config --global core.autocrlf false && git config --global core.eol lf
if: ${{ matrix.platform.os == 'windows' }}
- uses: actions/checkout@692973e3d937129bcbf40652eb9f2f61becf3332 # v4.1.7
with:
submodules: recursive
- uses: dtolnay/rust-toolchain@d388a4836fcdbde0e50e395dc79a2670ccdef13f # stable
if: ${{ matrix.platform.os != 'darwin' }}
- uses: dtolnay/rust-toolchain@d388a4836fcdbde0e50e395dc79a2670ccdef13f # stable
with:
targets: "${{ matrix.platform.arch }}-apple-darwin"
if: ${{ matrix.platform.os == 'darwin' }}
- uses: homebrew/actions/setup-homebrew@4b34604e75af8f8b23b454f0b5ffb7c5d8ce0056 # master
if: ${{ matrix.platform.os == 'darwin' }}
- run: ./hack/ci/install-${{ matrix.platform.deps }}-deps.sh
- run: ./hack/build/cargo.sh build --bin kratactl

View File

@@ -5,10 +5,8 @@ on:
- cron: "0 10 * * *"
permissions:
contents: read
packages: write
id-token: write
jobs:
server:
full-build:
runs-on: ubuntu-latest
strategy:
fail-fast: false
@@ -18,45 +16,49 @@ jobs:
- aarch64
env:
TARGET_ARCH: "${{ matrix.arch }}"
name: nightly server ${{ matrix.arch }}
CI_NEEDS_FPM: "1"
name: nightly full build linux-${{ matrix.arch }}
steps:
- uses: step-security/harden-runner@17d0e2bd7d51742c71671bd19fa12bdc9d40a3d6 # v2.8.1
- name: harden runner
uses: step-security/harden-runner@0d381219ddf674d61a7572ddd19d7941e271515c # v2.9.0
with:
egress-policy: audit
- uses: actions/checkout@692973e3d937129bcbf40652eb9f2f61becf3332 # v4.1.7
- name: checkout repository
uses: actions/checkout@692973e3d937129bcbf40652eb9f2f61becf3332 # v4.1.7
with:
submodules: recursive
- uses: dtolnay/rust-toolchain@d388a4836fcdbde0e50e395dc79a2670ccdef13f # stable
with:
targets: "${{ matrix.arch }}-unknown-linux-gnu,${{ matrix.arch }}-unknown-linux-musl"
- run: ./hack/ci/install-linux-deps.sh
- run: ./hack/dist/bundle.sh
- uses: actions/upload-artifact@0b2256b8c012f0828dc542b3febcab082c67f72b # v4.3.4
- name: install stable rust toolchain with ${{ matrix.arch }}-unknown-linux-gnu and ${{ matrix.arch }}-unknown-linux-musl rust targets
run: |
rustup update --no-self-update stable
rustup default stable
rustup target add ${{ matrix.arch }}-unknown-linux-gnu ${{ matrix.arch }}-unknown-linux-musl
- name: install linux dependencies
run: ./hack/ci/install-linux-deps.sh
- name: build systemd bundle
run: ./hack/dist/bundle.sh
- name: upload systemd bundle
uses: actions/upload-artifact@89ef406dd8d7e03cfd12d9e0a4a378f454709029 # v4.3.5
with:
name: krata-bundle-systemd-${{ matrix.arch }}
path: "target/dist/bundle-systemd-${{ matrix.arch }}.tgz"
compression-level: 0
- run: ./hack/dist/deb.sh
- uses: actions/upload-artifact@0b2256b8c012f0828dc542b3febcab082c67f72b # v4.3.4
- name: build deb package
run: ./hack/dist/deb.sh
- name: upload deb package
uses: actions/upload-artifact@89ef406dd8d7e03cfd12d9e0a4a378f454709029 # v4.3.5
with:
name: krata-debian-${{ matrix.arch }}
path: "target/dist/*.deb"
compression-level: 0
- run: ./hack/dist/apk.sh
env:
KRATA_KERNEL_BUILD_SKIP: "1"
- uses: actions/upload-artifact@0b2256b8c012f0828dc542b3febcab082c67f72b # v4.3.4
- name: build apk package
run: ./hack/dist/apk.sh
- name: upload apk package
uses: actions/upload-artifact@89ef406dd8d7e03cfd12d9e0a4a378f454709029 # v4.3.5
with:
name: krata-alpine-${{ matrix.arch }}
path: "target/dist/*_${{ matrix.arch }}.apk"
compression-level: 0
- run: ./hack/os/build.sh
- uses: actions/upload-artifact@0b2256b8c012f0828dc542b3febcab082c67f72b # v4.3.4
with:
name: krata-os-${{ matrix.arch }}
path: "target/os/krata-${{ matrix.arch }}.qcow2"
compression-level: 0
client:
kratactl-build:
strategy:
fail-fast: false
matrix:
@@ -71,40 +73,49 @@ jobs:
TARGET_OS: "${{ matrix.platform.os }}"
TARGET_ARCH: "${{ matrix.platform.arch }}"
runs-on: "${{ matrix.platform.on }}"
name: nightly client ${{ matrix.platform.os }}-${{ matrix.platform.arch }}
name: nightly kratactl build ${{ matrix.platform.os }}-${{ matrix.platform.arch }}
defaults:
run:
shell: bash
steps:
- uses: step-security/harden-runner@17d0e2bd7d51742c71671bd19fa12bdc9d40a3d6 # v2.8.1
- name: harden runner
uses: step-security/harden-runner@0d381219ddf674d61a7572ddd19d7941e271515c # v2.9.0
with:
egress-policy: audit
- run: git config --global core.autocrlf false && git config --global core.eol lf
- name: configure git line endings
run: git config --global core.autocrlf false && git config --global core.eol lf
if: ${{ matrix.platform.os == 'windows' }}
- uses: actions/checkout@692973e3d937129bcbf40652eb9f2f61becf3332 # v4.1.7
- name: checkout repository
uses: actions/checkout@692973e3d937129bcbf40652eb9f2f61becf3332 # v4.1.7
with:
submodules: recursive
- uses: dtolnay/rust-toolchain@d388a4836fcdbde0e50e395dc79a2670ccdef13f # stable
if: ${{ matrix.platform.os != 'darwin' }}
- uses: dtolnay/rust-toolchain@d388a4836fcdbde0e50e395dc79a2670ccdef13f # stable
with:
targets: "${{ matrix.platform.arch }}-apple-darwin"
- name: install stable rust toolchain
run: |
rustup update --no-self-update stable
rustup default stable
- name: install ${{ matrix.platform.arch }}-apple-darwin rust target
run: "rustup target add --toolchain stable ${{ matrix.platform.arch }}-apple-darwin"
if: ${{ matrix.platform.os == 'darwin' }}
- uses: homebrew/actions/setup-homebrew@4b34604e75af8f8b23b454f0b5ffb7c5d8ce0056 # master
- name: setup homebrew
uses: homebrew/actions/setup-homebrew@4b34604e75af8f8b23b454f0b5ffb7c5d8ce0056 # master
if: ${{ matrix.platform.os == 'darwin' }}
- run: ./hack/ci/install-${{ matrix.platform.deps }}-deps.sh
- run: ./hack/build/cargo.sh build --release --bin kratactl
- uses: actions/upload-artifact@0b2256b8c012f0828dc542b3febcab082c67f72b # v4.3.4
- name: install ${{ matrix.platform.deps }} dependencies
run: ./hack/ci/install-${{ matrix.platform.deps }}-deps.sh
- name: cargo build kratactl
run: ./hack/build/cargo.sh build --release --bin kratactl
- name: upload kratactl
uses: actions/upload-artifact@89ef406dd8d7e03cfd12d9e0a4a378f454709029 # v4.3.5
with:
name: kratactl-${{ matrix.platform.os }}-${{ matrix.platform.arch }}
path: "target/*/release/kratactl"
if: ${{ matrix.platform.os != 'windows' }}
- uses: actions/upload-artifact@0b2256b8c012f0828dc542b3febcab082c67f72b # v4.3.4
- name: upload kratactl
uses: actions/upload-artifact@89ef406dd8d7e03cfd12d9e0a4a378f454709029 # v4.3.5
with:
name: kratactl-${{ matrix.platform.os }}-${{ matrix.platform.arch }}
path: "target/*/release/kratactl.exe"
if: ${{ matrix.platform.os == 'windows' }}
oci:
oci-build:
runs-on: ubuntu-latest
strategy:
fail-fast: false
@@ -113,31 +124,42 @@ jobs:
- kratactl
- kratad
- kratanet
- krata-guest-init
name: "oci build ${{ matrix.component }}"
- krata-zone
name: nightly oci build ${{ matrix.component }}
permissions:
contents: read
id-token: write
packages: write
steps:
- uses: step-security/harden-runner@17d0e2bd7d51742c71671bd19fa12bdc9d40a3d6 # v2.8.1
- name: harden runner
uses: step-security/harden-runner@0d381219ddf674d61a7572ddd19d7941e271515c # v2.9.0
with:
egress-policy: audit
- uses: actions/checkout@692973e3d937129bcbf40652eb9f2f61becf3332 # v4.1.7
- name: checkout repository
uses: actions/checkout@692973e3d937129bcbf40652eb9f2f61becf3332 # v4.1.7
with:
submodules: recursive
- uses: sigstore/cosign-installer@59acb6260d9c0ba8f4a2f9d9b48431a222b68e20 # v3.5.0
- uses: docker/setup-buildx-action@4fd812986e6c8c2a69e18311145f9371337f27d4 # v3.4.0
- uses: docker/login-action@0d4c9c5ea7693da7b068278f7b52bda2a190a446 # v3.2.0
- name: install cosign
uses: sigstore/cosign-installer@59acb6260d9c0ba8f4a2f9d9b48431a222b68e20 # v3.5.0
- name: setup docker buildx
uses: docker/setup-buildx-action@988b5a0280414f521da01fcc63a27aeeb4b104db # v3.6.1
- name: login to container registry
uses: docker/login-action@9780b0c442fbb1117ed29e0efdff1e18412f7567 # v3.3.0
with:
registry: ghcr.io
username: "${{ github.actor }}"
password: "${{ secrets.GITHUB_TOKEN }}"
- uses: docker/build-push-action@1a162644f9a7e87d8f4b053101d1d9a712edc18c # v6.3.0
- name: docker build and push ${{ matrix.component }}
uses: docker/build-push-action@5176d81f87c23d6fc96624dfdbcd9f3830bbe445 # v6.5.0
id: push
with:
file: ./images/Dockerfile.${{ matrix.component }}
platforms: linux/amd64,linux/aarch64
tags: "ghcr.io/edera-dev/${{ matrix.component }}:nightly"
push: true
- env:
- name: cosign sign ${{ matrix.component }}
run: cosign sign --yes "${TAGS}@${DIGEST}"
env:
DIGEST: "${{ steps.push.outputs.digest }}"
TAGS: "ghcr.io/edera-dev/${{ matrix.component }}:nightly"
COSIGN_EXPERIMENTAL: "true"
run: cosign sign --yes "${TAGS}@${DIGEST}"

View File

@@ -1,37 +0,0 @@
name: os
on:
pull_request:
branches:
- main
merge_group:
branches:
- main
jobs:
build:
runs-on: ubuntu-latest
strategy:
fail-fast: false
matrix:
arch:
- x86_64
- aarch64
env:
TARGET_ARCH: "${{ matrix.arch }}"
name: os build ${{ matrix.arch }}
steps:
- uses: step-security/harden-runner@17d0e2bd7d51742c71671bd19fa12bdc9d40a3d6 # v2.8.1
with:
egress-policy: audit
- uses: actions/checkout@692973e3d937129bcbf40652eb9f2f61becf3332 # v4.1.7
with:
submodules: recursive
- uses: dtolnay/rust-toolchain@d388a4836fcdbde0e50e395dc79a2670ccdef13f # stable
with:
targets: "${{ matrix.arch }}-unknown-linux-gnu,${{ matrix.arch }}-unknown-linux-musl"
- run: ./hack/ci/install-linux-deps.sh
- run: ./hack/os/build.sh
- uses: actions/upload-artifact@0b2256b8c012f0828dc542b3febcab082c67f72b # v4.3.4
with:
name: krata-os-${{ matrix.arch }}
path: "target/os/krata-${{ matrix.arch }}.qcow2"
compression-level: 0

.github/workflows/release-assets.yml (vendored, new file, 172 lines)
View File

@@ -0,0 +1,172 @@
name: release-assets
on:
release:
types:
- published
env:
CARGO_INCREMENTAL: 0
CARGO_NET_GIT_FETCH_WITH_CLI: true
CARGO_NET_RETRY: 10
CARGO_TERM_COLOR: always
RUST_BACKTRACE: 1
RUSTUP_MAX_RETRIES: 10
jobs:
services:
runs-on: ubuntu-latest
strategy:
fail-fast: false
matrix:
arch:
- x86_64
- aarch64
env:
TARGET_ARCH: "${{ matrix.arch }}"
CI_NEEDS_FPM: "1"
name: release-assets services ${{ matrix.arch }}
permissions:
contents: write
steps:
- name: harden runner
uses: step-security/harden-runner@0d381219ddf674d61a7572ddd19d7941e271515c # v2.9.0
with:
egress-policy: audit
- name: checkout repository
uses: actions/checkout@692973e3d937129bcbf40652eb9f2f61becf3332 # v4.1.7
with:
submodules: recursive
- name: install stable rust toolchain with ${{ matrix.arch }}-unknown-linux-gnu and ${{ matrix.arch }}-unknown-linux-musl rust targets
run: |
rustup update --no-self-update stable
rustup default stable
rustup target add ${{ matrix.arch }}-unknown-linux-gnu ${{ matrix.arch }}-unknown-linux-musl
- name: install linux dependencies
run: ./hack/ci/install-linux-deps.sh
- name: build systemd bundle
run: ./hack/dist/bundle.sh
- name: assemble systemd bundle
run: "./hack/ci/assemble-release-assets.sh bundle-systemd ${{ github.event.release.tag_name }} ${{ matrix.arch }} target/dist/bundle-systemd-${{ matrix.arch }}.tgz"
- name: build deb package
run: ./hack/dist/deb.sh
- name: assemble deb package
run: "./hack/ci/assemble-release-assets.sh debian ${{ github.event.release.tag_name }} ${{ matrix.arch }} target/dist/*.deb"
- name: build apk package
run: ./hack/dist/apk.sh
- name: assemble apk package
run: "./hack/ci/assemble-release-assets.sh alpine ${{ github.event.release.tag_name }} ${{ matrix.arch }} target/dist/*_${{ matrix.arch }}.apk"
- name: upload release artifacts
run: "./hack/ci/upload-release-assets.sh ${{ github.event.release.tag_name }}"
env:
GITHUB_TOKEN: "${{ secrets.GITHUB_TOKEN }}"
kratactl:
strategy:
fail-fast: false
matrix:
platform:
- { os: linux, arch: x86_64, on: ubuntu-latest, deps: linux }
- { os: linux, arch: aarch64, on: ubuntu-latest, deps: linux }
- { os: darwin, arch: x86_64, on: macos-14, deps: darwin }
- { os: darwin, arch: aarch64, on: macos-14, deps: darwin }
- { os: freebsd, arch: x86_64, on: ubuntu-latest, deps: linux }
- { os: windows, arch: x86_64, on: windows-latest, deps: windows }
env:
TARGET_OS: "${{ matrix.platform.os }}"
TARGET_ARCH: "${{ matrix.platform.arch }}"
runs-on: "${{ matrix.platform.on }}"
name: release-assets kratactl ${{ matrix.platform.os }}-${{ matrix.platform.arch }}
defaults:
run:
shell: bash
timeout-minutes: 60
permissions:
contents: write
steps:
- name: harden runner
uses: step-security/harden-runner@0d381219ddf674d61a7572ddd19d7941e271515c # v2.9.0
with:
egress-policy: audit
- name: checkout repository
uses: actions/checkout@692973e3d937129bcbf40652eb9f2f61becf3332 # v4.1.7
with:
submodules: recursive
- name: install stable rust toolchain
run: |
rustup update --no-self-update stable
rustup default stable
- name: install ${{ matrix.platform.arch }}-apple-darwin rust target
run: "rustup target add --toolchain stable ${{ matrix.platform.arch }}-apple-darwin"
if: ${{ matrix.platform.os == 'darwin' }}
- name: setup homebrew
uses: homebrew/actions/setup-homebrew@4b34604e75af8f8b23b454f0b5ffb7c5d8ce0056 # master
if: ${{ matrix.platform.os == 'darwin' }}
- name: install ${{ matrix.platform.deps }} dependencies
run: ./hack/ci/install-${{ matrix.platform.deps }}-deps.sh
- name: cargo build kratactl
run: ./hack/build/cargo.sh build --release --bin kratactl
- name: assemble kratactl executable
run: "./hack/ci/assemble-release-assets.sh kratactl ${{ github.event.release.tag_name }} ${{ matrix.platform.os }}-${{ matrix.platform.arch }} target/*/release/kratactl"
if: ${{ matrix.platform.os != 'windows' }}
- name: assemble kratactl executable
run: "./hack/ci/assemble-release-assets.sh kratactl ${{ github.event.release.tag_name }} ${{ matrix.platform.os }}-${{ matrix.platform.arch }} target/*/release/kratactl.exe"
if: ${{ matrix.platform.os == 'windows' }}
- name: upload release artifacts
run: "./hack/ci/upload-release-assets.sh ${{ github.event.release.tag_name }}"
env:
GITHUB_TOKEN: "${{ secrets.GITHUB_TOKEN }}"
oci:
runs-on: ubuntu-latest
strategy:
fail-fast: false
matrix:
component:
- kratactl
- kratad
- kratanet
- krata-zone
name: release-assets oci ${{ matrix.component }}
permissions:
contents: read
id-token: write
packages: write
steps:
- name: harden runner
uses: step-security/harden-runner@0d381219ddf674d61a7572ddd19d7941e271515c # v2.9.0
with:
egress-policy: audit
- name: checkout repository
uses: actions/checkout@692973e3d937129bcbf40652eb9f2f61becf3332 # v4.1.7
with:
submodules: recursive
- name: install cosign
uses: sigstore/cosign-installer@59acb6260d9c0ba8f4a2f9d9b48431a222b68e20 # v3.5.0
- name: setup docker buildx
uses: docker/setup-buildx-action@988b5a0280414f521da01fcc63a27aeeb4b104db # v3.6.1
- name: login to container registry
uses: docker/login-action@9780b0c442fbb1117ed29e0efdff1e18412f7567 # v3.3.0
with:
registry: ghcr.io
username: "${{ github.actor }}"
password: "${{ secrets.GITHUB_TOKEN }}"
- name: capture krata version
id: version
run: |
echo "KRATA_VERSION=$(./hack/dist/version.sh)" >> "${GITHUB_OUTPUT}"
- name: docker build and push ${{ matrix.component }}
uses: docker/build-push-action@5176d81f87c23d6fc96624dfdbcd9f3830bbe445 # v6.5.0
id: push
with:
file: ./images/Dockerfile.${{ matrix.component }}
platforms: linux/amd64,linux/aarch64
tags: "ghcr.io/edera-dev/${{ matrix.component }}:${{ steps.version.outputs.KRATA_VERSION }},ghcr.io/edera-dev/${{ matrix.component }}:latest"
push: true
- name: cosign sign ${{ matrix.component }}:${{ steps.version.outputs.KRATA_VERSION }}
run: cosign sign --yes "${TAGS}@${DIGEST}"
env:
DIGEST: "${{ steps.push.outputs.digest }}"
TAGS: "ghcr.io/edera-dev/${{ matrix.component }}:${{ steps.version.outputs.KRATA_VERSION }}"
COSIGN_EXPERIMENTAL: "true"
- name: cosign sign ${{ matrix.component }}:latest
run: cosign sign --yes "${TAGS}@${DIGEST}"
env:
DIGEST: "${{ steps.push.outputs.digest }}"
TAGS: "ghcr.io/edera-dev/${{ matrix.component }}:latest"
COSIGN_EXPERIMENTAL: "true"

View File

@@ -1,134 +0,0 @@
name: release-binaries
permissions:
contents: write
packages: write
id-token: write
on:
release:
types:
- published
env:
CARGO_INCREMENTAL: 0
CARGO_NET_GIT_FETCH_WITH_CLI: true
CARGO_NET_RETRY: 10
CARGO_TERM_COLOR: always
RUST_BACKTRACE: 1
RUSTUP_MAX_RETRIES: 10
jobs:
server:
runs-on: ubuntu-latest
strategy:
fail-fast: false
matrix:
arch:
- x86_64
- aarch64
env:
TARGET_ARCH: "${{ matrix.arch }}"
name: release-binaries server ${{ matrix.arch }}
steps:
- uses: step-security/harden-runner@17d0e2bd7d51742c71671bd19fa12bdc9d40a3d6 # v2.8.1
with:
egress-policy: audit
- uses: actions/checkout@692973e3d937129bcbf40652eb9f2f61becf3332 # v4.1.7
with:
submodules: recursive
- uses: dtolnay/rust-toolchain@d388a4836fcdbde0e50e395dc79a2670ccdef13f # stable
with:
targets: "${{ matrix.arch }}-unknown-linux-gnu,${{ matrix.arch }}-unknown-linux-musl"
- run: ./hack/ci/install-linux-deps.sh
- run: ./hack/dist/bundle.sh
- run: "./hack/ci/assemble-release-assets.sh bundle-systemd ${{ github.event.release.tag_name }} ${{ matrix.arch }} target/dist/bundle-systemd-${{ matrix.arch }}.tgz"
- run: ./hack/dist/deb.sh
- run: "./hack/ci/assemble-release-assets.sh debian ${{ github.event.release.tag_name }} ${{ matrix.arch }} target/dist/*.deb"
- run: ./hack/dist/apk.sh
- run: "./hack/ci/assemble-release-assets.sh alpine ${{ github.event.release.tag_name }} ${{ matrix.arch }} target/dist/*_${{ matrix.arch }}.apk"
- run: ./hack/os/build.sh
- run: "./hack/ci/assemble-release-assets.sh os ${{ github.event.release.tag_name }} ${{ matrix.arch }} target/os/krata-${{ matrix.arch }}.qcow2"
- run: "./hack/ci/upload-release-assets.sh ${{ github.event.release.tag_name }}"
env:
GITHUB_TOKEN: "${{ secrets.GITHUB_TOKEN }}"
client:
strategy:
fail-fast: false
matrix:
platform:
- { os: linux, arch: x86_64, on: ubuntu-latest, deps: linux }
- { os: linux, arch: aarch64, on: ubuntu-latest, deps: linux }
- { os: darwin, arch: x86_64, on: macos-14, deps: darwin }
- { os: darwin, arch: aarch64, on: macos-14, deps: darwin }
- { os: freebsd, arch: x86_64, on: ubuntu-latest, deps: linux }
- { os: windows, arch: x86_64, on: windows-latest, deps: windows }
env:
TARGET_OS: "${{ matrix.platform.os }}"
TARGET_ARCH: "${{ matrix.platform.arch }}"
runs-on: "${{ matrix.platform.on }}"
name: release-binaries client ${{ matrix.platform.os }}-${{ matrix.platform.arch }}
defaults:
run:
shell: bash
timeout-minutes: 60
steps:
- uses: step-security/harden-runner@17d0e2bd7d51742c71671bd19fa12bdc9d40a3d6 # v2.8.1
with:
egress-policy: audit
- uses: actions/checkout@692973e3d937129bcbf40652eb9f2f61becf3332 # v4.1.7
with:
submodules: recursive
- uses: dtolnay/rust-toolchain@d388a4836fcdbde0e50e395dc79a2670ccdef13f # stable
if: ${{ matrix.platform.os != 'darwin' }}
- uses: dtolnay/rust-toolchain@stable
with:
targets: "${{ matrix.platform.arch }}-apple-darwin"
if: ${{ matrix.platform.os == 'darwin' }}
- uses: homebrew/actions/setup-homebrew@4b34604e75af8f8b23b454f0b5ffb7c5d8ce0056 # master
if: ${{ matrix.platform.os == 'darwin' }}
- run: ./hack/ci/install-${{ matrix.platform.deps }}-deps.sh
- run: ./hack/build/cargo.sh build --release --bin kratactl
- run: "./hack/ci/assemble-release-assets.sh kratactl ${{ github.event.release.tag_name }} ${{ matrix.platform.os }}-${{ matrix.platform.arch }} target/*/release/kratactl"
if: ${{ matrix.platform.os != 'windows' }}
- run: "./hack/ci/assemble-release-assets.sh kratactl ${{ github.event.release.tag_name }} ${{ matrix.platform.os }}-${{ matrix.platform.arch }} target/*/release/kratactl.exe"
if: ${{ matrix.platform.os == 'windows' }}
- run: "./hack/ci/upload-release-assets.sh ${{ github.event.release.tag_name }}"
env:
GITHUB_TOKEN: "${{ secrets.GITHUB_TOKEN }}"
oci:
runs-on: ubuntu-latest
strategy:
fail-fast: false
matrix:
component:
- kratactl
- kratad
- kratanet
- krata-guest-init
name: "release-binaries oci ${{ matrix.component }}"
steps:
- uses: step-security/harden-runner@17d0e2bd7d51742c71671bd19fa12bdc9d40a3d6 # v2.8.1
with:
egress-policy: audit
- uses: actions/checkout@692973e3d937129bcbf40652eb9f2f61becf3332 # v4.1.7
with:
submodules: recursive
- uses: sigstore/cosign-installer@59acb6260d9c0ba8f4a2f9d9b48431a222b68e20 # v3.5.0
- uses: docker/setup-buildx-action@4fd812986e6c8c2a69e18311145f9371337f27d4 # v3.4.0
- uses: docker/login-action@0d4c9c5ea7693da7b068278f7b52bda2a190a446 # v3.2.0
with:
registry: ghcr.io
username: "${{ github.actor }}"
password: "${{ secrets.GITHUB_TOKEN }}"
- id: version
run: |
echo "KRATA_VERSION=$(./hack/dist/version.sh)" >> "${GITHUB_OUTPUT}"
- uses: docker/build-push-action@1a162644f9a7e87d8f4b053101d1d9a712edc18c # v6.3.0
id: push
with:
file: ./images/Dockerfile.${{ matrix.component }}
platforms: linux/amd64,linux/aarch64
tags: "ghcr.io/edera-dev/${{ matrix.component }}:${{ steps.version.outputs.KRATA_VERSION }}"
push: true
- env:
DIGEST: "${{ steps.push.outputs.digest }}"
TAGS: "ghcr.io/edera-dev/${{ matrix.component }}:${{ steps.version.outputs.KRATA_VERSION }}"
COSIGN_EXPERIMENTAL: "true"
run: cosign sign --yes "${TAGS}@${DIGEST}"

View File

@@ -1,7 +1,4 @@
name: release-plz
permissions:
pull-requests: write
contents: write
on:
push:
branches:
@@ -13,24 +10,34 @@ jobs:
release-plz:
name: release-plz
runs-on: ubuntu-latest
permissions:
pull-requests: write
contents: write
steps:
- uses: step-security/harden-runner@17d0e2bd7d51742c71671bd19fa12bdc9d40a3d6 # v2.8.1
- name: harden runner
uses: step-security/harden-runner@0d381219ddf674d61a7572ddd19d7941e271515c # v2.9.0
with:
egress-policy: audit
- uses: actions/create-github-app-token@31c86eb3b33c9b601a1f60f98dcbfd1d70f379b4 # v1.10.3
- name: generate cultivator token
uses: actions/create-github-app-token@31c86eb3b33c9b601a1f60f98dcbfd1d70f379b4 # v1.10.3
id: generate-token
with:
app-id: "${{ secrets.EDERA_CULTIVATION_APP_ID }}"
private-key: "${{ secrets.EDERA_CULTIVATION_APP_PRIVATE_KEY }}"
- uses: actions/checkout@692973e3d937129bcbf40652eb9f2f61becf3332 # v4.1.7
- name: checkout repository
uses: actions/checkout@692973e3d937129bcbf40652eb9f2f61becf3332 # v4.1.7
with:
submodules: recursive
fetch-depth: 0
token: "${{ steps.generate-token.outputs.token }}"
- uses: dtolnay/rust-toolchain@d388a4836fcdbde0e50e395dc79a2670ccdef13f # stable
- run: ./hack/ci/install-linux-deps.sh
- name: install stable rust toolchain
run: |
rustup update --no-self-update stable
rustup default stable
- name: install linux dependencies
run: ./hack/ci/install-linux-deps.sh
- name: release-plz
uses: MarcoIeni/release-plz-action@86afd21a7b114234aab55ba0005eed52f77d89e4 # v0.5.62
uses: MarcoIeni/release-plz-action@92ae919a6b3e27c0472659e3a7414ff4a00e833f # v0.5.64
env:
GITHUB_TOKEN: "${{ steps.generate-token.outputs.token }}"
CARGO_REGISTRY_TOKEN: "${{ secrets.KRATA_RELEASE_CARGO_TOKEN }}"

View File

@@ -1,94 +0,0 @@
name: server
on:
pull_request:
branches:
- main
merge_group:
branches:
- main
jobs:
build:
runs-on: ubuntu-latest
strategy:
fail-fast: false
matrix:
arch:
- x86_64
- aarch64
env:
TARGET_ARCH: "${{ matrix.arch }}"
name: server build ${{ matrix.arch }}
steps:
- uses: step-security/harden-runner@17d0e2bd7d51742c71671bd19fa12bdc9d40a3d6 # v2.8.1
with:
egress-policy: audit
- uses: actions/checkout@692973e3d937129bcbf40652eb9f2f61becf3332 # v4.1.7
with:
submodules: recursive
- uses: dtolnay/rust-toolchain@d388a4836fcdbde0e50e395dc79a2670ccdef13f # stable
- run: ./hack/ci/install-linux-deps.sh
- run: ./hack/build/cargo.sh build
test:
runs-on: ubuntu-latest
strategy:
matrix:
arch:
- x86_64
- aarch64
env:
TARGET_ARCH: "${{ matrix.arch }}"
name: server test ${{ matrix.arch }}
steps:
- uses: step-security/harden-runner@17d0e2bd7d51742c71671bd19fa12bdc9d40a3d6 # v2.8.1
with:
egress-policy: audit
- uses: actions/checkout@692973e3d937129bcbf40652eb9f2f61becf3332 # v4.1.7
with:
submodules: recursive
- uses: dtolnay/rust-toolchain@d388a4836fcdbde0e50e395dc79a2670ccdef13f # stable
- run: ./hack/ci/install-linux-deps.sh
- run: ./hack/build/cargo.sh test
clippy:
runs-on: ubuntu-latest
strategy:
matrix:
arch:
- x86_64
- aarch64
env:
TARGET_ARCH: "${{ matrix.arch }}"
name: server clippy ${{ matrix.arch }}
steps:
- uses: step-security/harden-runner@17d0e2bd7d51742c71671bd19fa12bdc9d40a3d6 # v2.8.1
with:
egress-policy: audit
- uses: actions/checkout@692973e3d937129bcbf40652eb9f2f61becf3332 # v4.1.7
with:
submodules: recursive
- uses: dtolnay/rust-toolchain@d388a4836fcdbde0e50e395dc79a2670ccdef13f # stable
with:
components: clippy
- run: ./hack/ci/install-linux-deps.sh
- run: ./hack/build/cargo.sh clippy
initrd:
runs-on: ubuntu-latest
strategy:
matrix:
arch:
- x86_64
- aarch64
env:
TARGET_ARCH: "${{ matrix.arch }}"
name: server initrd ${{ matrix.arch }}
steps:
- uses: step-security/harden-runner@17d0e2bd7d51742c71671bd19fa12bdc9d40a3d6 # v2.8.1
with:
egress-policy: audit
- uses: actions/checkout@692973e3d937129bcbf40652eb9f2f61becf3332 # v4.1.7
with:
submodules: recursive
- uses: dtolnay/rust-toolchain@d388a4836fcdbde0e50e395dc79a2670ccdef13f # stable
with:
targets: "${{ matrix.arch }}-unknown-linux-gnu,${{ matrix.arch }}-unknown-linux-musl"
- run: ./hack/ci/install-linux-deps.sh
- run: ./hack/initrd/build.sh

View File

@@ -6,6 +6,29 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
## [Unreleased]
## [0.0.14](https://github.com/edera-dev/krata/compare/v0.0.13...v0.0.14) - 2024-08-06
### Added
- *(oci)* use local index as resolution cache when appropriate, fixes [#289](https://github.com/edera-dev/krata/pull/289) ([#294](https://github.com/edera-dev/krata/pull/294))
### Fixed
- *(idm)* process all idm messages in the same frame and use childwait exit notification for exec (fixes [#290](https://github.com/edera-dev/krata/pull/290)) ([#302](https://github.com/edera-dev/krata/pull/302))
### Other
- init: mount /proc with hidepid=1 ([#277](https://github.com/edera-dev/krata/pull/277))
- update Cargo.toml dependencies
## [0.0.13](https://github.com/edera-dev/krata/compare/v0.0.12...v0.0.13) - 2024-07-19
### Added
- *(kratactl)* rework cli to use subcommands ([#268](https://github.com/edera-dev/krata/pull/268))
- *(krata)* rename guest to zone ([#266](https://github.com/edera-dev/krata/pull/266))
### Other
- *(deps)* upgrade dependencies, fix hyper io traits issue ([#252](https://github.com/edera-dev/krata/pull/252))
- update Cargo.lock dependencies
- update Cargo.toml dependencies
## [0.0.12](https://github.com/edera-dev/krata/compare/v0.0.11...v0.0.12) - 2024-07-12
### Added

Cargo.lock (generated, 581 lines changed)

File diff suppressed because it is too large.

View File

@@ -3,7 +3,7 @@ members = [
"crates/build",
"crates/krata",
"crates/oci",
"crates/guest",
"crates/zone",
"crates/runtime",
"crates/daemon",
"crates/network",
@@ -18,7 +18,7 @@ members = [
resolver = "2"
[workspace.package]
version = "0.0.12"
version = "0.0.14"
homepage = "https://krata.dev"
license = "Apache-2.0"
repository = "https://github.com/edera-dev/krata"
@ -26,13 +26,13 @@ repository = "https://github.com/edera-dev/krata"
[workspace.dependencies]
anyhow = "1.0"
arrayvec = "0.7.4"
async-compression = "0.4.11"
async-compression = "0.4.12"
async-stream = "0.3.5"
async-trait = "0.1.81"
backhand = "0.15.0"
backhand = "0.18.0"
base64 = "0.22.1"
byteorder = "1"
bytes = "1.5.0"
bytes = "1.7.1"
c2rust-bitfields = "0.18.0"
cgroups-rs = "0.3.4"
circular-buffer = "0.1.7"
@ -40,13 +40,15 @@ comfy-table = "7.1.1"
crossterm = "0.27.0"
ctrlc = "3.4.4"
elf = "0.7.4"
env_logger = "0.11.0"
etherparse = "0.14.3"
env_logger = "0.11.5"
etherparse = "0.15.0"
fancy-duration = "0.9.2"
flate2 = "1.0"
futures = "0.3.30"
hyper = "1.4.1"
hyper-util = "0.1.6"
human_bytes = "0.4"
indexmap = "2.2.6"
indexmap = "2.3.0"
indicatif = "0.17.8"
ipnetwork = "0.20.0"
libc = "0.2"
@ -56,45 +58,46 @@ krata-advmac = "1.1.0"
krata-tokio-tar = "0.4.0"
memchr = "2"
nix = "0.29.0"
oci-spec = "0.6.7"
oci-spec = "0.6.8"
once_cell = "1.19.0"
path-absolutize = "3.1.1"
path-clean = "1.0.1"
pin-project-lite = "0.2.14"
platform-info = "2.0.3"
prost = "0.12.6"
prost-build = "0.12.6"
prost-reflect-build = "0.13.0"
prost-types = "0.12.6"
prost = "0.13.1"
prost-build = "0.13.1"
prost-reflect-build = "0.14.0"
prost-types = "0.13.1"
rand = "0.8.5"
ratatui = "0.26.3"
ratatui = "0.27.0"
redb = "2.1.1"
regex = "1.10.5"
regex = "1.10.6"
rtnetlink = "0.14.1"
scopeguard = "1.2.0"
serde_json = "1.0.120"
serde_json = "1.0.122"
serde_yaml = "0.9"
sha256 = "1.5.0"
signal-hook = "0.3.17"
slice-copy = "0.3.0"
smoltcp = "0.11.0"
sysinfo = "0.30.13"
termtree = "0.4.1"
termtree = "0.5.1"
thiserror = "1.0"
tokio-tun = "0.11.5"
toml = "0.8.14"
tonic-build = "0.11.0"
toml = "0.8.19"
tonic-build = "0.12.1"
tower = "0.4.13"
udp-stream = "0.0.11"
udp-stream = "0.0.12"
url = "2.5.2"
walkdir = "2"
xz2 = "0.1"
[workspace.dependencies.clap]
version = "4.5.9"
version = "4.5.13"
features = ["derive"]
[workspace.dependencies.prost-reflect]
version = "0.13.1"
version = "0.14.0"
features = ["derive"]
[workspace.dependencies.reqwest]
@ -111,7 +114,7 @@ version = "3.0.0"
default-features = false
[workspace.dependencies.tokio]
version = "1.38.0"
version = "1.39.2"
features = ["full"]
[workspace.dependencies.tokio-stream]
@ -119,7 +122,7 @@ version = "0.1"
features = ["io-util", "net"]
[workspace.dependencies.tonic]
version = "0.11.0"
version = "0.12.1"
features = ["tls"]
[workspace.dependencies.uuid]

DEV.md (57 changed lines)

@ -4,25 +4,25 @@
krata is composed of four major executables:
| Executable | Runs On | User Interaction | Dev Runner | Code Path |
| ---------- | ------- | ---------------- | ------------------------ | ----------------- |
| kratad | host | backend daemon | ./hack/debug/kratad.sh | crates/daemon |
| kratanet | host | backend daemon | ./hack/debug/kratanet.sh | crates/network |
| kratactl | host | CLI tool | ./hack/debug/kratactl.sh | crates/ctl |
| krataguest | guest | none, guest init | N/A | crates/guest |
| Executable | Runs On | User Interaction | Dev Runner | Code Path |
|------------|---------|------------------|--------------------------|----------------|
| kratad | host | backend daemon | ./hack/debug/kratad.sh | crates/daemon |
| kratanet | host | backend daemon | ./hack/debug/kratanet.sh | crates/network |
| kratactl | host | CLI tool | ./hack/debug/kratactl.sh | crates/ctl |
| kratazone | zone | none, zone init | N/A | crates/zone |
You will find the code for each executable in the bin/ and src/ directories inside
its corresponding code path from the above table.
## Environment
| Component | Specification | Notes |
| ------------- | ------------- | --------------------------------------------------------------------------------- |
| Architecture | x86_64 | aarch64 support is still in development |
| Memory | At least 6GB | dom0 will need to be configured with lower memory limit to give krata guests room |
| Xen | 4.17 | Temporary due to hardcoded interface version constants |
| Debian | stable / sid | Debian is recommended due to the ease of Xen setup |
| rustup | any | Install Rustup from https://rustup.rs |
| Component | Specification | Notes |
|--------------|---------------|----------------------------------------------------------------------------------|
| Architecture | x86_64 | aarch64 support is still in development |
| Memory | At least 6GB | dom0 will need to be configured with lower memory limit to give krata zones room |
| Xen | 4.17+ | |
| Debian | stable / sid | Debian is recommended due to the ease of Xen setup |
| rustup | any | Install Rustup from https://rustup.rs |
## Setup Guide
@ -31,8 +31,7 @@ it's corresponding code path from the above table.
2. Install required packages:
```sh
$ apt install git xen-system-amd64 build-essential \
libclang-dev musl-tools flex bison libelf-dev libssl-dev bc \
$ apt install git xen-system-amd64 build-essential musl-tools \
protobuf-compiler libprotobuf-dev squashfs-tools erofs-utils
```
@ -45,10 +44,10 @@ $ rustup target add x86_64-unknown-linux-gnu
$ rustup target add x86_64-unknown-linux-musl
```
4. Configure `/etc/default/grub.d/xen.cfg` to give krata guests some room:
4. Configure `/etc/default/grub.d/xen.cfg` to give krata zones some room:
```sh
# Configure dom0_mem to be 4GB, but leave the rest of the RAM for krata guests.
# Configure dom0_mem to be 4GB, but leave the rest of the RAM for krata zones.
GRUB_CMDLINE_XEN_DEFAULT="dom0_mem=4G,max:4G"
```
@ -64,36 +63,36 @@ $ git clone https://github.com/edera-dev/krata.git krata
$ cd krata
```
6. Fetch the guest kernel image:
6. Fetch the zone kernel image:
```sh
$ ./hack/kernel/fetch.sh -u
```
7. Copy the guest kernel artifacts to `/var/lib/krata/guest/kernel` so they are automatically detected by kratad:
7. Copy the zone kernel artifacts to `/var/lib/krata/zone/kernel` so they are automatically detected by kratad:
```sh
$ mkdir -p /var/lib/krata/guest
$ cp target/kernel/kernel-x86_64 /var/lib/krata/guest/kernel
$ cp target/kernel/addons-x86_64.squashfs /var/lib/krata/guest/addons.squashfs
$ mkdir -p /var/lib/krata/zone
$ cp target/kernel/kernel-x86_64 /var/lib/krata/zone/kernel
$ cp target/kernel/addons-x86_64.squashfs /var/lib/krata/zone/addons.squashfs
```
8. Launch `./hack/debug/kratad.sh` and keep it running in the foreground.
9. Launch `./hack/debug/kratanet.sh` and keep it running in the foreground.
10. Run `kratactl` to launch a guest:
10. Run `kratactl` to launch a zone:
```sh
$ ./hack/debug/kratactl.sh launch --attach alpine:latest
$ ./hack/debug/kratactl.sh zone launch --attach alpine:latest
```
To detach from the guest console, use `Ctrl + ]` on your keyboard.
To detach from the zone console, use `Ctrl + ]` on your keyboard.
To list the running guests, run:
To list the running zones, run:
```sh
$ ./hack/debug/kratactl.sh list
$ ./hack/debug/kratactl.sh zone list
```
To destroy a running guest, copy its UUID from either the launch command or the guest list and run:
To destroy a running zone, copy its UUID from either the launch command or the zone list and run:
```sh
$ ./hack/debug/kratactl.sh destroy GUEST_UUID
$ ./hack/debug/kratactl.sh zone destroy ZONE_UUID
```

FAQ.md (2 changed lines)

@ -12,4 +12,4 @@ Xen is a very interesting technology, and Edera believes that type-1 hypervisors
## Why not utilize pvcalls to provide access to the host network?
pvcalls is extremely interesting, and although it is certainly possible to use pvcalls to get the job done, we chose userspace networking technology to enhance security. Our goal is to drop the use of all xen networking backend drivers within the kernel and have the guest talk directly to a userspace daemon, bypassing the vif (xen-netback) driver. Currently, to develop the networking layer, we rely on xen-netback and then use raw sockets to provide the userspace networking layer on the host.
pvcalls is fascinating, and although it is certainly possible to use pvcalls to get the job done, we chose userspace networking technology to enhance security. Our goal is to drop the use of all xen networking backend drivers within the kernel and have the guest talk directly to a userspace daemon, bypassing the vif (xen-netback) driver. Currently, to develop the networking layer, we rely on xen-netback and then use raw sockets to provide the userspace networking layer on the host.


@ -2,6 +2,10 @@
An isolation engine for securing compute workloads.
```bash
$ kratactl zone launch -a alpine:latest
```
![license](https://img.shields.io/github/license/edera-dev/krata)
![discord](https://img.shields.io/discord/1207447453083766814?label=discord)
[![check](https://github.com/edera-dev/krata/actions/workflows/check.yml/badge.svg)](https://github.com/edera-dev/krata/actions/workflows/check.yml)
@ -22,7 +26,7 @@ krata utilizes the core of the Xen hypervisor with a fully memory-safe Rust cont
## Hardware Support
| Architecture | Completion Level | Hardware Virtualization |
| ------------ | ---------------- | ------------------------------- |
| x86_64 | 100% Completed | None, Intel VT-x, AMD-V |
| aarch64 | 10% Completed | AArch64 virtualization |
| Architecture | Completion Level | Hardware Virtualization |
|--------------|------------------|-------------------------|
| x86_64 | 100% Completed | None, Intel VT-x, AMD-V |
| aarch64 | 10% Completed | AArch64 virtualization |


@ -16,7 +16,7 @@ oci-spec = { workspace = true }
scopeguard = { workspace = true }
tokio = { workspace = true }
tokio-stream = { workspace = true }
krata-oci = { path = "../oci", version = "^0.0.12" }
krata-oci = { path = "../oci", version = "^0.0.14" }
krata-tokio-tar = { workspace = true }
uuid = { workspace = true }


@ -39,7 +39,7 @@ async fn main() -> Result<()> {
);
let image = ImageName::parse(&args().nth(1).unwrap())?;
let mut cache_dir = std::env::temp_dir().clone();
let mut cache_dir = env::temp_dir().clone();
cache_dir.push(format!("krata-cache-{}", Uuid::new_v4()));
fs::create_dir_all(&cache_dir).await?;
@ -50,7 +50,7 @@ async fn main() -> Result<()> {
let (context, _) = OciProgressContext::create();
let service = OciPackerService::new(None, &cache_dir, platform).await?;
let packed = service
.request(image.clone(), OciPackedFormat::Tar, false, context)
.request(image.clone(), OciPackedFormat::Tar, false, true, context)
.await?;
let annotations = packed
.manifest


@ -20,7 +20,7 @@ env_logger = { workspace = true }
fancy-duration = { workspace = true }
human_bytes = { workspace = true }
indicatif = { workspace = true }
krata = { path = "../krata", version = "^0.0.12" }
krata = { path = "../krata", version = "^0.0.14" }
log = { workspace = true }
prost-reflect = { workspace = true, features = ["serde"] }
prost-types = { workspace = true }


@ -1,46 +0,0 @@
use anyhow::Result;
use clap::Parser;
use krata::v1::control::{control_service_client::ControlServiceClient, HostCpuTopologyRequest};
use tonic::{transport::Channel, Request};
fn class_to_str(input: i32) -> String {
match input {
0 => "Standard".to_string(),
1 => "Performance".to_string(),
2 => "Efficiency".to_string(),
_ => "???".to_string(),
}
}
#[derive(Parser)]
#[command(about = "Display information about a host's CPU topology")]
pub struct CpuTopologyCommand {}
impl CpuTopologyCommand {
pub async fn run(self, mut client: ControlServiceClient<Channel>) -> Result<()> {
println!(
"{0:<10} {1:<10} {2:<10} {3:<10} {4:<10} {5:<10}",
"CPUID", "Node", "Socket", "Core", "Thread", "Class"
);
let response = client
.get_host_cpu_topology(Request::new(HostCpuTopologyRequest {}))
.await?
.into_inner();
for (i, cpu) in response.cpus.iter().enumerate() {
println!(
"{0:<10} {1:<10} {2:<10} {3:<10} {4:<10} {5:<10}",
i,
cpu.node,
cpu.socket,
cpu.core,
cpu.thread,
class_to_str(cpu.class)
);
}
Ok(())
}
}


@ -12,7 +12,7 @@ use tonic::transport::Channel;
use crate::format::{kv2line, proto2dynamic, proto2kv};
#[derive(ValueEnum, Clone, Debug, PartialEq, Eq)]
enum ListDevicesFormat {
enum DeviceListFormat {
Table,
Json,
JsonPretty,
@ -24,12 +24,12 @@ enum ListDevicesFormat {
#[derive(Parser)]
#[command(about = "List the devices on the isolation engine")]
pub struct ListDevicesCommand {
pub struct DeviceListCommand {
#[arg(short, long, default_value = "table", help = "Output format")]
format: ListDevicesFormat,
format: DeviceListFormat,
}
impl ListDevicesCommand {
impl DeviceListCommand {
pub async fn run(
self,
mut client: ControlServiceClient<Channel>,
@ -44,26 +44,26 @@ impl ListDevicesCommand {
devices.sort_by(|a, b| a.name.cmp(&b.name));
match self.format {
ListDevicesFormat::Table => {
DeviceListFormat::Table => {
self.print_devices_table(devices)?;
}
ListDevicesFormat::Simple => {
DeviceListFormat::Simple => {
for device in devices {
println!("{}\t{}\t{}", device.name, device.claimed, device.owner);
}
}
ListDevicesFormat::Json | ListDevicesFormat::JsonPretty | ListDevicesFormat::Yaml => {
DeviceListFormat::Json | DeviceListFormat::JsonPretty | DeviceListFormat::Yaml => {
let mut values = Vec::new();
for device in devices {
let message = proto2dynamic(device)?;
values.push(serde_json::to_value(message)?);
}
let value = Value::Array(values);
let encoded = if self.format == ListDevicesFormat::JsonPretty {
let encoded = if self.format == DeviceListFormat::JsonPretty {
serde_json::to_string_pretty(&value)?
} else if self.format == ListDevicesFormat::Yaml {
} else if self.format == DeviceListFormat::Yaml {
serde_yaml::to_string(&value)?
} else {
serde_json::to_string(&value)?
@ -71,14 +71,14 @@ impl ListDevicesCommand {
println!("{}", encoded.trim());
}
ListDevicesFormat::Jsonl => {
DeviceListFormat::Jsonl => {
for device in devices {
let message = proto2dynamic(device)?;
println!("{}", serde_json::to_string(&message)?);
}
}
ListDevicesFormat::KeyValue => {
DeviceListFormat::KeyValue => {
self.print_key_value(devices)?;
}
}
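
The rename from ListDevicesFormat to DeviceListFormat keeps the output-format dispatch the rest of the CLI uses: a clap ValueEnum chooses among table, JSON, pretty JSON, YAML, JSONL, and key-value renderings of the same data. A minimal sketch of that pattern (covering a subset of the formats), assuming clap, serde with derive, serde_json, serde_yaml, and anyhow as dependencies; the `Item` struct and its fields are stand-ins, not krata types:

```rust
use clap::{Parser, ValueEnum};
use serde::Serialize;

#[derive(ValueEnum, Clone, Debug, PartialEq, Eq)]
enum OutputFormat {
    Table,
    Json,
    JsonPretty,
    Yaml,
    Jsonl,
}

#[derive(Parser)]
struct ListArgs {
    #[arg(short, long, default_value = "table", help = "Output format")]
    format: OutputFormat,
}

#[derive(Serialize)]
struct Item {
    name: String,
    claimed: bool,
}

fn main() -> anyhow::Result<()> {
    let args = ListArgs::parse();
    // Stand-in data; the real command fetches devices from the control service.
    let items = vec![
        Item { name: "gpu0".into(), claimed: false },
        Item { name: "nic1".into(), claimed: true },
    ];
    match args.format {
        // The real command renders a comfy-table here; tabs keep the sketch short.
        OutputFormat::Table => {
            for item in &items {
                println!("{}\t{}", item.name, item.claimed);
            }
        }
        OutputFormat::Json => println!("{}", serde_json::to_string(&items)?),
        OutputFormat::JsonPretty => println!("{}", serde_json::to_string_pretty(&items)?),
        OutputFormat::Yaml => print!("{}", serde_yaml::to_string(&items)?),
        // JSONL: one JSON document per line, one line per item.
        OutputFormat::Jsonl => {
            for item in &items {
                println!("{}", serde_json::to_string(item)?);
            }
        }
    }
    Ok(())
}
```

Running the sketch with `--format json-pretty` exercises the same kebab-cased ValueEnum parsing clap performs for the real command.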


@ -0,0 +1,44 @@
use anyhow::Result;
use clap::{Parser, Subcommand};
use tonic::transport::Channel;
use krata::events::EventStream;
use krata::v1::control::control_service_client::ControlServiceClient;
use crate::cli::device::list::DeviceListCommand;
pub mod list;
#[derive(Parser)]
#[command(about = "Manage the devices on the isolation engine")]
pub struct DeviceCommand {
#[command(subcommand)]
subcommand: DeviceCommands,
}
impl DeviceCommand {
pub async fn run(
self,
client: ControlServiceClient<Channel>,
events: EventStream,
) -> Result<()> {
self.subcommand.run(client, events).await
}
}
#[derive(Subcommand)]
pub enum DeviceCommands {
List(DeviceListCommand),
}
impl DeviceCommands {
pub async fn run(
self,
client: ControlServiceClient<Channel>,
events: EventStream,
) -> Result<()> {
match self {
DeviceCommands::List(list) => list.run(client, events).await,
}
}
}


@ -0,0 +1,60 @@
use anyhow::Result;
use clap::{Parser, ValueEnum};
use comfy_table::presets::UTF8_FULL_CONDENSED;
use comfy_table::{Cell, Table};
use krata::v1::control::{
control_service_client::ControlServiceClient, HostCpuTopologyClass, HostCpuTopologyRequest,
};
use tonic::{transport::Channel, Request};
fn class_to_str(input: HostCpuTopologyClass) -> String {
match input {
HostCpuTopologyClass::Standard => "Standard".to_string(),
HostCpuTopologyClass::Performance => "Performance".to_string(),
HostCpuTopologyClass::Efficiency => "Efficiency".to_string(),
}
}
#[derive(ValueEnum, Clone, Debug, PartialEq, Eq)]
enum HostCpuTopologyFormat {
Table,
}
#[derive(Parser)]
#[command(about = "Display information about the host CPU topology")]
pub struct HostCpuTopologyCommand {
#[arg(short, long, default_value = "table", help = "Output format")]
format: HostCpuTopologyFormat,
}
impl HostCpuTopologyCommand {
pub async fn run(self, mut client: ControlServiceClient<Channel>) -> Result<()> {
let response = client
.get_host_cpu_topology(Request::new(HostCpuTopologyRequest {}))
.await?
.into_inner();
let mut table = Table::new();
table.load_preset(UTF8_FULL_CONDENSED);
table.set_content_arrangement(comfy_table::ContentArrangement::Dynamic);
table.set_header(vec!["id", "node", "socket", "core", "thread", "class"]);
for (i, cpu) in response.cpus.iter().enumerate() {
table.add_row(vec![
Cell::new(i),
Cell::new(cpu.node),
Cell::new(cpu.socket),
Cell::new(cpu.core),
Cell::new(cpu.thread),
Cell::new(class_to_str(cpu.class())),
]);
}
if !table.is_empty() {
println!("{}", table);
}
Ok(())
}
}
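
Compared with the deleted cpu_topology.rs, which hand-aligned columns with width-specified format strings, the new host cpu-topology command builds a comfy-table like the other table outputs in the CLI. A standalone sketch of the same table construction, with made-up rows standing in for the control service response (assumes only the comfy-table crate):

```rust
use comfy_table::presets::UTF8_FULL_CONDENSED;
use comfy_table::{Cell, ContentArrangement, Table};

fn main() {
    let mut table = Table::new();
    table.load_preset(UTF8_FULL_CONDENSED);
    table.set_content_arrangement(ContentArrangement::Dynamic);
    table.set_header(vec!["id", "node", "socket", "core", "thread", "class"]);

    // Example rows standing in for the CPU topology returned by the control service.
    let cpus = [
        (0u32, 0u32, 0u32, 0u32, 0u32, "Performance"),
        (1, 0, 0, 0, 1, "Efficiency"),
    ];
    for (id, node, socket, core, thread, class) in cpus {
        table.add_row(vec![
            Cell::new(id),
            Cell::new(node),
            Cell::new(socket),
            Cell::new(core),
            Cell::new(thread),
            Cell::new(class),
        ]);
    }

    if !table.is_empty() {
        println!("{}", table);
    }
}
```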


@ -6,9 +6,9 @@ use tonic::{transport::Channel, Request};
#[derive(Parser)]
#[command(about = "Identify information about the host")]
pub struct IdentifyHostCommand {}
pub struct HostIdentifyCommand {}
impl IdentifyHostCommand {
impl HostIdentifyCommand {
pub async fn run(self, mut client: ControlServiceClient<Channel>) -> Result<()> {
let response = client
.identify_host(Request::new(IdentifyHostRequest {}))


@ -15,7 +15,7 @@ use tonic::transport::Channel;
use crate::format::{kv2line, proto2dynamic, value2kv};
#[derive(ValueEnum, Clone, Debug, PartialEq, Eq)]
enum IdmSnoopFormat {
enum HostIdmSnoopFormat {
Simple,
Jsonl,
KeyValue,
@ -23,12 +23,12 @@ enum IdmSnoopFormat {
#[derive(Parser)]
#[command(about = "Snoop on the IDM bus")]
pub struct IdmSnoopCommand {
pub struct HostIdmSnoopCommand {
#[arg(short, long, default_value = "simple", help = "Output format")]
format: IdmSnoopFormat,
format: HostIdmSnoopFormat,
}
impl IdmSnoopCommand {
impl HostIdmSnoopCommand {
pub async fn run(
self,
mut client: ControlServiceClient<Channel>,
@ -43,16 +43,16 @@ impl IdmSnoopCommand {
};
match self.format {
IdmSnoopFormat::Simple => {
HostIdmSnoopFormat::Simple => {
self.print_simple(line)?;
}
IdmSnoopFormat::Jsonl => {
HostIdmSnoopFormat::Jsonl => {
let encoded = serde_json::to_string(&line)?;
println!("{}", encoded.trim());
}
IdmSnoopFormat::KeyValue => {
HostIdmSnoopFormat::KeyValue => {
self.print_key_value(line)?;
}
}


@ -0,0 +1,54 @@
use anyhow::Result;
use clap::{Parser, Subcommand};
use tonic::transport::Channel;
use krata::events::EventStream;
use krata::v1::control::control_service_client::ControlServiceClient;
use crate::cli::host::cpu_topology::HostCpuTopologyCommand;
use crate::cli::host::identify::HostIdentifyCommand;
use crate::cli::host::idm_snoop::HostIdmSnoopCommand;
pub mod cpu_topology;
pub mod identify;
pub mod idm_snoop;
#[derive(Parser)]
#[command(about = "Manage the host of the isolation engine")]
pub struct HostCommand {
#[command(subcommand)]
subcommand: HostCommands,
}
impl HostCommand {
pub async fn run(
self,
client: ControlServiceClient<Channel>,
events: EventStream,
) -> Result<()> {
self.subcommand.run(client, events).await
}
}
#[derive(Subcommand)]
pub enum HostCommands {
CpuTopology(HostCpuTopologyCommand),
Identify(HostIdentifyCommand),
IdmSnoop(HostIdmSnoopCommand),
}
impl HostCommands {
pub async fn run(
self,
client: ControlServiceClient<Channel>,
events: EventStream,
) -> Result<()> {
match self {
HostCommands::CpuTopology(cpu_topology) => cpu_topology.run(client).await,
HostCommands::Identify(identify) => identify.run(client).await,
HostCommands::IdmSnoop(snoop) => snoop.run(client, events).await,
}
}
}


@ -0,0 +1,44 @@
use anyhow::Result;
use clap::{Parser, Subcommand};
use tonic::transport::Channel;
use krata::events::EventStream;
use krata::v1::control::control_service_client::ControlServiceClient;
use crate::cli::image::pull::ImagePullCommand;
pub mod pull;
#[derive(Parser)]
#[command(about = "Manage the images on the isolation engine")]
pub struct ImageCommand {
#[command(subcommand)]
subcommand: ImageCommands,
}
impl ImageCommand {
pub async fn run(
self,
client: ControlServiceClient<Channel>,
events: EventStream,
) -> Result<()> {
self.subcommand.run(client, events).await
}
}
#[derive(Subcommand)]
pub enum ImageCommands {
Pull(ImagePullCommand),
}
impl ImageCommands {
pub async fn run(
self,
client: ControlServiceClient<Channel>,
_events: EventStream,
) -> Result<()> {
match self {
ImageCommands::Pull(pull) => pull.run(client).await,
}
}
}


@ -10,7 +10,7 @@ use tonic::transport::Channel;
use crate::pull::pull_interactive_progress;
#[derive(ValueEnum, Clone, Debug, PartialEq, Eq)]
pub enum PullImageFormat {
pub enum ImagePullImageFormat {
Squashfs,
Erofs,
Tar,
@ -18,26 +18,27 @@ pub enum PullImageFormat {
#[derive(Parser)]
#[command(about = "Pull an image into the cache")]
pub struct PullCommand {
pub struct ImagePullCommand {
#[arg(help = "Image name")]
image: String,
#[arg(short = 's', long, default_value = "squashfs", help = "Image format")]
image_format: PullImageFormat,
image_format: ImagePullImageFormat,
#[arg(short = 'o', long, help = "Overwrite image cache")]
overwrite_cache: bool,
}
impl PullCommand {
impl ImagePullCommand {
pub async fn run(self, mut client: ControlServiceClient<Channel>) -> Result<()> {
let response = client
.pull_image(PullImageRequest {
image: self.image.clone(),
format: match self.image_format {
PullImageFormat::Squashfs => OciImageFormat::Squashfs.into(),
PullImageFormat::Erofs => OciImageFormat::Erofs.into(),
PullImageFormat::Tar => OciImageFormat::Tar.into(),
ImagePullImageFormat::Squashfs => OciImageFormat::Squashfs.into(),
ImagePullImageFormat::Erofs => OciImageFormat::Erofs.into(),
ImagePullImageFormat::Tar => OciImageFormat::Tar.into(),
},
overwrite_cache: self.overwrite_cache,
update: true,
})
.await?;
let reply = pull_interactive_progress(response.into_inner()).await?;


@ -1,36 +1,21 @@
pub mod attach;
pub mod cpu_topology;
pub mod destroy;
pub mod exec;
pub mod identify_host;
pub mod idm_snoop;
pub mod launch;
pub mod list;
pub mod list_devices;
pub mod logs;
pub mod metrics;
pub mod pull;
pub mod resolve;
pub mod top;
pub mod watch;
pub mod device;
pub mod host;
pub mod image;
pub mod zone;
use crate::cli::device::DeviceCommand;
use crate::cli::host::HostCommand;
use crate::cli::image::ImageCommand;
use crate::cli::zone::ZoneCommand;
use anyhow::{anyhow, Result};
use clap::{Parser, Subcommand};
use clap::Parser;
use krata::{
client::ControlClientProvider,
events::EventStream,
v1::control::{control_service_client::ControlServiceClient, ResolveGuestRequest},
v1::control::{control_service_client::ControlServiceClient, ResolveZoneRequest},
};
use tonic::{transport::Channel, Request};
use self::{
attach::AttachCommand, cpu_topology::CpuTopologyCommand, destroy::DestroyCommand,
exec::ExecCommand, identify_host::IdentifyHostCommand, idm_snoop::IdmSnoopCommand,
launch::LaunchCommand, list::ListCommand, list_devices::ListDevicesCommand, logs::LogsCommand,
metrics::MetricsCommand, pull::PullCommand, resolve::ResolveCommand, top::TopCommand,
watch::WatchCommand,
};
#[derive(Parser)]
#[command(version, about = "Control the krata isolation engine")]
pub struct ControlCommand {
@ -43,26 +28,15 @@ pub struct ControlCommand {
connection: String,
#[command(subcommand)]
command: Commands,
command: ControlCommands,
}
#[derive(Subcommand)]
pub enum Commands {
Launch(LaunchCommand),
Destroy(DestroyCommand),
List(ListCommand),
ListDevices(ListDevicesCommand),
Attach(AttachCommand),
Pull(PullCommand),
Logs(LogsCommand),
Watch(WatchCommand),
Resolve(ResolveCommand),
Metrics(MetricsCommand),
IdmSnoop(IdmSnoopCommand),
Top(TopCommand),
IdentifyHost(IdentifyHostCommand),
Exec(ExecCommand),
CpuTopology(CpuTopologyCommand),
#[derive(Parser)]
pub enum ControlCommands {
Zone(ZoneCommand),
Image(ImageCommand),
Device(DeviceCommand),
Host(HostCommand),
}
impl ControlCommand {
@ -71,84 +45,31 @@ impl ControlCommand {
let events = EventStream::open(client.clone()).await?;
match self.command {
Commands::Launch(launch) => {
launch.run(client, events).await?;
}
ControlCommands::Zone(zone) => zone.run(client, events).await,
Commands::Destroy(destroy) => {
destroy.run(client, events).await?;
}
ControlCommands::Image(image) => image.run(client, events).await,
Commands::Attach(attach) => {
attach.run(client, events).await?;
}
ControlCommands::Device(device) => device.run(client, events).await,
Commands::Logs(logs) => {
logs.run(client, events).await?;
}
Commands::List(list) => {
list.run(client, events).await?;
}
Commands::Watch(watch) => {
watch.run(events).await?;
}
Commands::Resolve(resolve) => {
resolve.run(client).await?;
}
Commands::Metrics(metrics) => {
metrics.run(client, events).await?;
}
Commands::IdmSnoop(snoop) => {
snoop.run(client, events).await?;
}
Commands::Top(top) => {
top.run(client, events).await?;
}
Commands::Pull(pull) => {
pull.run(client).await?;
}
Commands::IdentifyHost(identify) => {
identify.run(client).await?;
}
Commands::Exec(exec) => {
exec.run(client).await?;
}
Commands::ListDevices(list) => {
list.run(client, events).await?;
}
Commands::CpuTopology(cpu_topology) => {
cpu_topology.run(client).await?;
}
ControlCommands::Host(snoop) => snoop.run(client, events).await,
}
Ok(())
}
}
pub async fn resolve_guest(
pub async fn resolve_zone(
client: &mut ControlServiceClient<Channel>,
name: &str,
) -> Result<String> {
let reply = client
.resolve_guest(Request::new(ResolveGuestRequest {
.resolve_zone(Request::new(ResolveZoneRequest {
name: name.to_string(),
}))
.await?
.into_inner();
if let Some(guest) = reply.guest {
Ok(guest.id)
if let Some(zone) = reply.zone {
Ok(zone.id)
} else {
Err(anyhow!("unable to resolve guest '{}'", name))
Err(anyhow!("unable to resolve zone '{}'", name))
}
}
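
This is the heart of the CLI rework visible in this compare: the old flat Commands enum becomes a two-level tree, so `kratactl list` is now `kratactl zone list` and host-level operations live under `kratactl host ...`. A reduced sketch of the nesting with clap's derive API, using stub handlers where kratactl would call the gRPC control service (only a subset of the subcommands is shown, for illustration):

```rust
use clap::{Parser, Subcommand};

#[derive(Parser)]
#[command(version, about = "Control the krata isolation engine")]
struct ControlCommand {
    #[command(subcommand)]
    command: ControlCommands,
}

#[derive(Subcommand)]
enum ControlCommands {
    /// Manage zones
    Zone(ZoneCommand),
    /// Manage the host
    Host(HostCommand),
}

#[derive(Parser)]
struct ZoneCommand {
    #[command(subcommand)]
    subcommand: ZoneCommands,
}

#[derive(Subcommand)]
enum ZoneCommands {
    /// List zones (stub)
    List,
    /// Launch a zone from an OCI image (stub)
    Launch { oci: String },
}

#[derive(Parser)]
struct HostCommand {
    #[command(subcommand)]
    subcommand: HostCommands,
}

#[derive(Subcommand)]
enum HostCommands {
    /// Identify the host (stub)
    Identify,
}

fn main() {
    // In kratactl these arms call into the control service; here they just print.
    match ControlCommand::parse().command {
        ControlCommands::Zone(zone) => match zone.subcommand {
            ZoneCommands::List => println!("would list zones"),
            ZoneCommands::Launch { oci } => println!("would launch {}", oci),
        },
        ControlCommands::Host(host) => match host.subcommand {
            HostCommands::Identify => println!("would identify host"),
        },
    }
}
```

The `kratactl zone launch --attach alpine:latest` and `kratactl zone list` invocations in the updated DEV.md follow directly from this layout.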


@ -7,27 +7,27 @@ use tonic::transport::Channel;
use crate::console::StdioConsoleStream;
use super::resolve_guest;
use crate::cli::resolve_zone;
#[derive(Parser)]
#[command(about = "Attach to the guest console")]
pub struct AttachCommand {
#[arg(help = "Guest to attach to, either the name or the uuid")]
guest: String,
#[command(about = "Attach to the zone console")]
pub struct ZoneAttachCommand {
#[arg(help = "Zone to attach to, either the name or the uuid")]
zone: String,
}
impl AttachCommand {
impl ZoneAttachCommand {
pub async fn run(
self,
mut client: ControlServiceClient<Channel>,
events: EventStream,
) -> Result<()> {
let guest_id: String = resolve_guest(&mut client, &self.guest).await?;
let input = StdioConsoleStream::stdin_stream(guest_id.clone()).await;
let output = client.console_data(input).await?.into_inner();
let zone_id: String = resolve_zone(&mut client, &self.zone).await?;
let input = StdioConsoleStream::stdin_stream(zone_id.clone()).await;
let output = client.attach_zone_console(input).await?.into_inner();
let stdout_handle =
tokio::task::spawn(async move { StdioConsoleStream::stdout(output).await });
let exit_hook_task = StdioConsoleStream::guest_exit_hook(guest_id.clone(), events).await?;
let exit_hook_task = StdioConsoleStream::zone_exit_hook(zone_id.clone(), events).await?;
let code = select! {
x = stdout_handle => {
x??;


@ -3,10 +3,10 @@ use clap::Parser;
use krata::{
events::EventStream,
v1::{
common::GuestStatus,
common::ZoneStatus,
control::{
control_service_client::ControlServiceClient, watch_events_reply::Event,
DestroyGuestRequest,
DestroyZoneRequest,
},
},
};
@ -14,67 +14,67 @@ use krata::{
use log::error;
use tonic::{transport::Channel, Request};
use crate::cli::resolve_guest;
use crate::cli::resolve_zone;
#[derive(Parser)]
#[command(about = "Destroy a guest")]
pub struct DestroyCommand {
#[command(about = "Destroy a zone")]
pub struct ZoneDestroyCommand {
#[arg(
short = 'W',
long,
help = "Wait for the destruction of the guest to complete"
help = "Wait for the destruction of the zone to complete"
)]
wait: bool,
#[arg(help = "Guest to destroy, either the name or the uuid")]
guest: String,
#[arg(help = "Zone to destroy, either the name or the uuid")]
zone: String,
}
impl DestroyCommand {
impl ZoneDestroyCommand {
pub async fn run(
self,
mut client: ControlServiceClient<Channel>,
events: EventStream,
) -> Result<()> {
let guest_id: String = resolve_guest(&mut client, &self.guest).await?;
let zone_id: String = resolve_zone(&mut client, &self.zone).await?;
let _ = client
.destroy_guest(Request::new(DestroyGuestRequest {
guest_id: guest_id.clone(),
.destroy_zone(Request::new(DestroyZoneRequest {
zone_id: zone_id.clone(),
}))
.await?
.into_inner();
if self.wait {
wait_guest_destroyed(&guest_id, events).await?;
wait_zone_destroyed(&zone_id, events).await?;
}
Ok(())
}
}
async fn wait_guest_destroyed(id: &str, events: EventStream) -> Result<()> {
async fn wait_zone_destroyed(id: &str, events: EventStream) -> Result<()> {
let mut stream = events.subscribe();
while let Ok(event) = stream.recv().await {
let Event::GuestChanged(changed) = event;
let Some(guest) = changed.guest else {
let Event::ZoneChanged(changed) = event;
let Some(zone) = changed.zone else {
continue;
};
if guest.id != id {
if zone.id != id {
continue;
}
let Some(state) = guest.state else {
let Some(state) = zone.state else {
continue;
};
if let Some(ref error) = state.error_info {
if state.status() == GuestStatus::Failed {
if state.status() == ZoneStatus::Failed {
error!("destroy failed: {}", error.message);
std::process::exit(1);
} else {
error!("guest error: {}", error.message);
error!("zone error: {}", error.message);
}
}
if state.status() == GuestStatus::Destroyed {
if state.status() == ZoneStatus::Destroyed {
std::process::exit(0);
}
}


@ -4,42 +4,42 @@ use anyhow::Result;
use clap::Parser;
use krata::v1::{
common::{GuestTaskSpec, GuestTaskSpecEnvVar},
control::{control_service_client::ControlServiceClient, ExecGuestRequest},
common::{ZoneTaskSpec, ZoneTaskSpecEnvVar},
control::{control_service_client::ControlServiceClient, ExecZoneRequest},
};
use tonic::{transport::Channel, Request};
use crate::console::StdioConsoleStream;
use super::resolve_guest;
use crate::cli::resolve_zone;
#[derive(Parser)]
#[command(about = "Execute a command inside the guest")]
pub struct ExecCommand {
#[command(about = "Execute a command inside the zone")]
pub struct ZoneExecCommand {
#[arg[short, long, help = "Environment variables"]]
env: Option<Vec<String>>,
#[arg(short = 'w', long, help = "Working directory")]
working_directory: Option<String>,
#[arg(help = "Guest to exec inside, either the name or the uuid")]
guest: String,
#[arg(help = "Zone to exec inside, either the name or the uuid")]
zone: String,
#[arg(
allow_hyphen_values = true,
trailing_var_arg = true,
help = "Command to run inside the guest"
help = "Command to run inside the zone"
)]
command: Vec<String>,
}
impl ExecCommand {
impl ZoneExecCommand {
pub async fn run(self, mut client: ControlServiceClient<Channel>) -> Result<()> {
let guest_id: String = resolve_guest(&mut client, &self.guest).await?;
let initial = ExecGuestRequest {
guest_id,
task: Some(GuestTaskSpec {
let zone_id: String = resolve_zone(&mut client, &self.zone).await?;
let initial = ExecZoneRequest {
zone_id,
task: Some(ZoneTaskSpec {
environment: env_map(&self.env.unwrap_or_default())
.iter()
.map(|(key, value)| GuestTaskSpecEnvVar {
.map(|(key, value)| ZoneTaskSpecEnvVar {
key: key.clone(),
value: value.clone(),
})
@ -52,7 +52,7 @@ impl ExecCommand {
let stream = StdioConsoleStream::stdin_stream_exec(initial).await;
let response = client.exec_guest(Request::new(stream)).await?.into_inner();
let response = client.exec_zone(Request::new(stream)).await?.into_inner();
let code = StdioConsoleStream::exec_output(response).await?;
std::process::exit(code);
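
Both the exec and launch commands pass their `--env` flags through an env_map helper before turning each pair into a ZoneTaskSpecEnvVar. That helper is outside this diff, so the sketch below is only a plausible reading of what it does (split `KEY=VALUE` strings into a map), not krata's actual implementation:

```rust
use std::collections::HashMap;

/// Split `KEY=VALUE` strings into a map; entries without `=` are skipped.
/// Illustrative only: krata's real env_map helper is not shown in this diff.
fn env_map(env: &[String]) -> HashMap<String, String> {
    let mut map = HashMap::new();
    for entry in env {
        if let Some((key, value)) = entry.split_once('=') {
            map.insert(key.to_string(), value.to_string());
        }
    }
    map
}

fn main() {
    let args = vec!["RUST_LOG=debug".to_string(), "HOME=/root".to_string()];
    let map = env_map(&args);
    // The exec and launch commands turn each (key, value) pair into a ZoneTaskSpecEnvVar.
    for (key, value) in &map {
        println!("{}={}", key, value);
    }
}
```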


@ -6,12 +6,12 @@ use krata::{
events::EventStream,
v1::{
common::{
guest_image_spec::Image, GuestImageSpec, GuestOciImageSpec, GuestSpec, GuestSpecDevice,
GuestStatus, GuestTaskSpec, GuestTaskSpecEnvVar, OciImageFormat,
zone_image_spec::Image, OciImageFormat, ZoneImageSpec, ZoneOciImageSpec, ZoneSpec,
ZoneSpecDevice, ZoneStatus, ZoneTaskSpec, ZoneTaskSpecEnvVar,
},
control::{
control_service_client::ControlServiceClient, watch_events_reply::Event,
CreateGuestRequest, PullImageRequest,
CreateZoneRequest, PullImageRequest,
},
},
};
@ -28,61 +28,58 @@ pub enum LaunchImageFormat {
}
#[derive(Parser)]
#[command(about = "Launch a new guest")]
pub struct LaunchCommand {
#[command(about = "Launch a new zone")]
pub struct ZoneLaunchCommand {
#[arg(long, default_value = "squashfs", help = "Image format")]
image_format: LaunchImageFormat,
#[arg(long, help = "Overwrite image cache on pull")]
pull_overwrite_cache: bool,
#[arg(short, long, help = "Name of the guest")]
#[arg(long, help = "Update image on pull")]
pull_update: bool,
#[arg(short, long, help = "Name of the zone")]
name: Option<String>,
#[arg(
short,
long,
default_value_t = 1,
help = "vCPUs available to the guest"
)]
#[arg(short, long, default_value_t = 1, help = "vCPUs available to the zone")]
cpus: u32,
#[arg(
short,
long,
default_value_t = 512,
help = "Memory available to the guest, in megabytes"
help = "Memory available to the zone, in megabytes"
)]
mem: u64,
#[arg[short = 'D', long = "device", help = "Devices to request for the guest"]]
#[arg[short = 'D', long = "device", help = "Devices to request for the zone"]]
device: Vec<String>,
#[arg[short, long, help = "Environment variables set in the guest"]]
#[arg[short, long, help = "Environment variables set in the zone"]]
env: Option<Vec<String>>,
#[arg(
short,
long,
help = "Attach to the guest after guest starts, implies --wait"
help = "Attach to the zone after zone starts, implies --wait"
)]
attach: bool,
#[arg(
short = 'W',
long,
help = "Wait for the guest to start, implied by --attach"
help = "Wait for the zone to start, implied by --attach"
)]
wait: bool,
#[arg(short = 'k', long, help = "OCI kernel image for guest to use")]
#[arg(short = 'k', long, help = "OCI kernel image for zone to use")]
kernel: Option<String>,
#[arg(short = 'I', long, help = "OCI initrd image for guest to use")]
#[arg(short = 'I', long, help = "OCI initrd image for zone to use")]
initrd: Option<String>,
#[arg(short = 'w', long, help = "Working directory")]
working_directory: Option<String>,
#[arg(help = "Container image for guest to use")]
#[arg(help = "Container image for zone to use")]
oci: String,
#[arg(
allow_hyphen_values = true,
trailing_var_arg = true,
help = "Command to run inside the guest"
help = "Command to run inside the zone"
)]
command: Vec<String>,
}
impl LaunchCommand {
impl ZoneLaunchCommand {
pub async fn run(
self,
mut client: ControlServiceClient<Channel>,
@ -117,18 +114,18 @@ impl LaunchCommand {
None
};
let request = CreateGuestRequest {
spec: Some(GuestSpec {
let request = CreateZoneRequest {
spec: Some(ZoneSpec {
name: self.name.unwrap_or_default(),
image: Some(image),
kernel,
initrd,
vcpus: self.cpus,
mem: self.mem,
task: Some(GuestTaskSpec {
task: Some(ZoneTaskSpec {
environment: env_map(&self.env.unwrap_or_default())
.iter()
.map(|(key, value)| GuestTaskSpecEnvVar {
.map(|(key, value)| ZoneTaskSpecEnvVar {
key: key.clone(),
value: value.clone(),
})
@ -140,26 +137,26 @@ impl LaunchCommand {
devices: self
.device
.iter()
.map(|name| GuestSpecDevice { name: name.clone() })
.map(|name| ZoneSpecDevice { name: name.clone() })
.collect(),
}),
};
let response = client
.create_guest(Request::new(request))
.create_zone(Request::new(request))
.await?
.into_inner();
let id = response.guest_id;
let id = response.zone_id;
if self.wait || self.attach {
wait_guest_started(&id, events.clone()).await?;
wait_zone_started(&id, events.clone()).await?;
}
let code = if self.attach {
let input = StdioConsoleStream::stdin_stream(id.clone()).await;
let output = client.console_data(input).await?.into_inner();
let output = client.attach_zone_console(input).await?.into_inner();
let stdout_handle =
tokio::task::spawn(async move { StdioConsoleStream::stdout(output).await });
let exit_hook_task = StdioConsoleStream::guest_exit_hook(id.clone(), events).await?;
let exit_hook_task = StdioConsoleStream::zone_exit_hook(id.clone(), events).await?;
select! {
x = stdout_handle => {
x??;
@ -180,17 +177,18 @@ impl LaunchCommand {
client: &mut ControlServiceClient<Channel>,
image: &str,
format: OciImageFormat,
) -> Result<GuestImageSpec> {
) -> Result<ZoneImageSpec> {
let response = client
.pull_image(PullImageRequest {
image: image.to_string(),
format: format.into(),
overwrite_cache: self.pull_overwrite_cache,
update: self.pull_update,
})
.await?;
let reply = pull_interactive_progress(response.into_inner()).await?;
Ok(GuestImageSpec {
image: Some(Image::Oci(GuestOciImageSpec {
Ok(ZoneImageSpec {
image: Some(Image::Oci(ZoneOciImageSpec {
digest: reply.digest,
format: reply.format,
})),
@ -198,38 +196,38 @@ impl LaunchCommand {
}
}
async fn wait_guest_started(id: &str, events: EventStream) -> Result<()> {
async fn wait_zone_started(id: &str, events: EventStream) -> Result<()> {
let mut stream = events.subscribe();
while let Ok(event) = stream.recv().await {
match event {
Event::GuestChanged(changed) => {
let Some(guest) = changed.guest else {
Event::ZoneChanged(changed) => {
let Some(zone) = changed.zone else {
continue;
};
if guest.id != id {
if zone.id != id {
continue;
}
let Some(state) = guest.state else {
let Some(state) = zone.state else {
continue;
};
if let Some(ref error) = state.error_info {
if state.status() == GuestStatus::Failed {
if state.status() == ZoneStatus::Failed {
error!("launch failed: {}", error.message);
std::process::exit(1);
} else {
error!("guest error: {}", error.message);
error!("zone error: {}", error.message);
}
}
if state.status() == GuestStatus::Destroyed {
error!("guest destroyed");
if state.status() == ZoneStatus::Destroyed {
error!("zone destroyed");
std::process::exit(1);
}
if state.status() == GuestStatus::Started {
if state.status() == ZoneStatus::Started {
break;
}
}
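
wait_zone_started mirrors wait_zone_destroyed earlier in this diff: subscribe to the shared event stream, skip events for other zones, and react to the status transition (bail out on Failed or Destroyed, proceed on Started). A reduced sketch of that wait loop over a tokio broadcast channel; the ZoneChanged and Status types are simplified stand-ins for krata's protobuf messages, and tokio's macros/rt features are assumed:

```rust
use tokio::sync::broadcast;

// Stand-ins for krata's Zone/ZoneStatus protobuf types.
#[derive(Clone, Debug)]
enum Status {
    Starting,
    Started,
    Failed,
}

#[derive(Clone, Debug)]
struct ZoneChanged {
    id: String,
    status: Status,
}

/// Wait until the zone with `id` reports Started, bailing out on Failed.
async fn wait_zone_started(
    id: &str,
    mut events: broadcast::Receiver<ZoneChanged>,
) -> Result<(), String> {
    while let Ok(event) = events.recv().await {
        if event.id != id {
            continue;
        }
        match event.status {
            Status::Started => return Ok(()),
            Status::Failed => return Err(format!("zone {} failed to start", id)),
            _ => continue,
        }
    }
    Err("event stream closed".to_string())
}

#[tokio::main]
async fn main() {
    let (tx, rx) = broadcast::channel(16);
    tx.send(ZoneChanged { id: "a".into(), status: Status::Starting }).unwrap();
    tx.send(ZoneChanged { id: "a".into(), status: Status::Started }).unwrap();
    println!("{:?}", wait_zone_started("a", rx).await);
}
```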


@ -4,9 +4,9 @@ use comfy_table::{presets::UTF8_FULL_CONDENSED, Cell, Color, Table};
use krata::{
events::EventStream,
v1::{
common::{Guest, GuestStatus},
common::{Zone, ZoneStatus},
control::{
control_service_client::ControlServiceClient, ListGuestsRequest, ResolveGuestRequest,
control_service_client::ControlServiceClient, ListZonesRequest, ResolveZoneRequest,
},
},
};
@ -14,10 +14,10 @@ use krata::{
use serde_json::Value;
use tonic::{transport::Channel, Request};
use crate::format::{guest_simple_line, guest_status_text, kv2line, proto2dynamic, proto2kv};
use crate::format::{kv2line, proto2dynamic, proto2kv, zone_simple_line, zone_status_text};
#[derive(ValueEnum, Clone, Debug, PartialEq, Eq)]
enum ListFormat {
enum ZoneListFormat {
Table,
Json,
JsonPretty,
@ -28,41 +28,39 @@ enum ListFormat {
}
#[derive(Parser)]
#[command(about = "List the guests on the isolation engine")]
pub struct ListCommand {
#[command(about = "List the zones on the isolation engine")]
pub struct ZoneListCommand {
#[arg(short, long, default_value = "table", help = "Output format")]
format: ListFormat,
#[arg(help = "Limit to a single guest, either the name or the uuid")]
guest: Option<String>,
format: ZoneListFormat,
#[arg(help = "Limit to a single zone, either the name or the uuid")]
zone: Option<String>,
}
impl ListCommand {
impl ZoneListCommand {
pub async fn run(
self,
mut client: ControlServiceClient<Channel>,
_events: EventStream,
) -> Result<()> {
let mut guests = if let Some(ref guest) = self.guest {
let mut zones = if let Some(ref zone) = self.zone {
let reply = client
.resolve_guest(Request::new(ResolveGuestRequest {
name: guest.clone(),
}))
.resolve_zone(Request::new(ResolveZoneRequest { name: zone.clone() }))
.await?
.into_inner();
if let Some(guest) = reply.guest {
vec![guest]
if let Some(zone) = reply.zone {
vec![zone]
} else {
return Err(anyhow!("unable to resolve guest '{}'", guest));
return Err(anyhow!("unable to resolve zone '{}'", zone));
}
} else {
client
.list_guests(Request::new(ListGuestsRequest {}))
.list_zones(Request::new(ListZonesRequest {}))
.await?
.into_inner()
.guests
.zones
};
guests.sort_by(|a, b| {
zones.sort_by(|a, b| {
a.spec
.as_ref()
.map(|x| x.name.as_str())
@ -71,26 +69,26 @@ impl ListCommand {
});
match self.format {
ListFormat::Table => {
self.print_guest_table(guests)?;
ZoneListFormat::Table => {
self.print_zone_table(zones)?;
}
ListFormat::Simple => {
for guest in guests {
println!("{}", guest_simple_line(&guest));
ZoneListFormat::Simple => {
for zone in zones {
println!("{}", zone_simple_line(&zone));
}
}
ListFormat::Json | ListFormat::JsonPretty | ListFormat::Yaml => {
ZoneListFormat::Json | ZoneListFormat::JsonPretty | ZoneListFormat::Yaml => {
let mut values = Vec::new();
for guest in guests {
let message = proto2dynamic(guest)?;
for zone in zones {
let message = proto2dynamic(zone)?;
values.push(serde_json::to_value(message)?);
}
let value = Value::Array(values);
let encoded = if self.format == ListFormat::JsonPretty {
let encoded = if self.format == ZoneListFormat::JsonPretty {
serde_json::to_string_pretty(&value)?
} else if self.format == ListFormat::Yaml {
} else if self.format == ZoneListFormat::Yaml {
serde_yaml::to_string(&value)?
} else {
serde_json::to_string(&value)?
@ -98,65 +96,63 @@ impl ListCommand {
println!("{}", encoded.trim());
}
ListFormat::Jsonl => {
for guest in guests {
let message = proto2dynamic(guest)?;
ZoneListFormat::Jsonl => {
for zone in zones {
let message = proto2dynamic(zone)?;
println!("{}", serde_json::to_string(&message)?);
}
}
ListFormat::KeyValue => {
self.print_key_value(guests)?;
ZoneListFormat::KeyValue => {
self.print_key_value(zones)?;
}
}
Ok(())
}
fn print_guest_table(&self, guests: Vec<Guest>) -> Result<()> {
fn print_zone_table(&self, zones: Vec<Zone>) -> Result<()> {
let mut table = Table::new();
table.load_preset(UTF8_FULL_CONDENSED);
table.set_content_arrangement(comfy_table::ContentArrangement::Dynamic);
table.set_header(vec!["name", "uuid", "status", "ipv4", "ipv6"]);
for guest in guests {
let ipv4 = guest
for zone in zones {
let ipv4 = zone
.state
.as_ref()
.and_then(|x| x.network.as_ref())
.map(|x| x.guest_ipv4.as_str())
.map(|x| x.zone_ipv4.as_str())
.unwrap_or("n/a");
let ipv6 = guest
let ipv6 = zone
.state
.as_ref()
.and_then(|x| x.network.as_ref())
.map(|x| x.guest_ipv6.as_str())
.map(|x| x.zone_ipv6.as_str())
.unwrap_or("n/a");
let Some(spec) = guest.spec else {
let Some(spec) = zone.spec else {
continue;
};
let status = guest.state.as_ref().cloned().unwrap_or_default().status();
let status_text = guest_status_text(status);
let status = zone.state.as_ref().cloned().unwrap_or_default().status();
let status_text = zone_status_text(status);
let status_color = match status {
GuestStatus::Destroyed | GuestStatus::Failed => Color::Red,
GuestStatus::Destroying | GuestStatus::Exited | GuestStatus::Starting => {
Color::Yellow
}
GuestStatus::Started => Color::Green,
ZoneStatus::Destroyed | ZoneStatus::Failed => Color::Red,
ZoneStatus::Destroying | ZoneStatus::Exited | ZoneStatus::Starting => Color::Yellow,
ZoneStatus::Started => Color::Green,
_ => Color::Reset,
};
table.add_row(vec![
Cell::new(spec.name),
Cell::new(guest.id),
Cell::new(zone.id),
Cell::new(status_text).fg(status_color),
Cell::new(ipv4.to_string()),
Cell::new(ipv6.to_string()),
]);
}
if table.is_empty() {
if self.guest.is_none() {
println!("no guests have been launched");
if self.zone.is_none() {
println!("no zones have been launched");
}
} else {
println!("{}", table);
@ -164,9 +160,9 @@ impl ListCommand {
Ok(())
}
fn print_key_value(&self, guests: Vec<Guest>) -> Result<()> {
for guest in guests {
let kvs = proto2kv(guest)?;
fn print_key_value(&self, zones: Vec<Zone>) -> Result<()> {
for zone in zones {
let kvs = proto2kv(zone)?;
println!("{}", kv2line(kvs),);
}
Ok(())


@ -3,7 +3,7 @@ use async_stream::stream;
use clap::Parser;
use krata::{
events::EventStream,
v1::control::{control_service_client::ControlServiceClient, ConsoleDataRequest},
v1::control::{control_service_client::ControlServiceClient, ZoneConsoleRequest},
};
use tokio::select;
@ -12,39 +12,39 @@ use tonic::transport::Channel;
use crate::console::StdioConsoleStream;
use super::resolve_guest;
use crate::cli::resolve_zone;
#[derive(Parser)]
#[command(about = "View the logs of a guest")]
pub struct LogsCommand {
#[arg(short, long, help = "Follow output from the guest")]
#[command(about = "View the logs of a zone")]
pub struct ZoneLogsCommand {
#[arg(short, long, help = "Follow output from the zone")]
follow: bool,
#[arg(help = "Guest to show logs for, either the name or the uuid")]
guest: String,
#[arg(help = "Zone to show logs for, either the name or the uuid")]
zone: String,
}
impl LogsCommand {
impl ZoneLogsCommand {
pub async fn run(
self,
mut client: ControlServiceClient<Channel>,
events: EventStream,
) -> Result<()> {
let guest_id: String = resolve_guest(&mut client, &self.guest).await?;
let guest_id_stream = guest_id.clone();
let zone_id: String = resolve_zone(&mut client, &self.zone).await?;
let zone_id_stream = zone_id.clone();
let follow = self.follow;
let input = stream! {
yield ConsoleDataRequest { guest_id: guest_id_stream, data: Vec::new() };
yield ZoneConsoleRequest { zone_id: zone_id_stream, data: Vec::new() };
if follow {
let mut pending = pending::<ConsoleDataRequest>();
let mut pending = pending::<ZoneConsoleRequest>();
while let Some(x) = pending.next().await {
yield x;
}
}
};
let output = client.console_data(input).await?.into_inner();
let output = client.attach_zone_console(input).await?.into_inner();
let stdout_handle =
tokio::task::spawn(async move { StdioConsoleStream::stdout(output).await });
let exit_hook_task = StdioConsoleStream::guest_exit_hook(guest_id.clone(), events).await?;
let exit_hook_task = StdioConsoleStream::zone_exit_hook(zone_id.clone(), events).await?;
let code = select! {
x = stdout_handle => {
x??;


@ -3,8 +3,8 @@ use clap::{Parser, ValueEnum};
use krata::{
events::EventStream,
v1::{
common::GuestMetricNode,
control::{control_service_client::ControlServiceClient, ReadGuestMetricsRequest},
common::ZoneMetricNode,
control::{control_service_client::ControlServiceClient, ReadZoneMetricsRequest},
},
};
@ -12,10 +12,10 @@ use tonic::transport::Channel;
use crate::format::{kv2line, metrics_flat, metrics_tree, proto2dynamic};
use super::resolve_guest;
use crate::cli::resolve_zone;
#[derive(ValueEnum, Clone, Debug, PartialEq, Eq)]
enum MetricsFormat {
enum ZoneMetricsFormat {
Tree,
Json,
JsonPretty,
@ -24,37 +24,37 @@ enum MetricsFormat {
}
#[derive(Parser)]
#[command(about = "Read metrics from the guest")]
pub struct MetricsCommand {
#[command(about = "Read metrics from the zone")]
pub struct ZoneMetricsCommand {
#[arg(short, long, default_value = "tree", help = "Output format")]
format: MetricsFormat,
#[arg(help = "Guest to read metrics for, either the name or the uuid")]
guest: String,
format: ZoneMetricsFormat,
#[arg(help = "Zone to read metrics for, either the name or the uuid")]
zone: String,
}
impl MetricsCommand {
impl ZoneMetricsCommand {
pub async fn run(
self,
mut client: ControlServiceClient<Channel>,
_events: EventStream,
) -> Result<()> {
let guest_id: String = resolve_guest(&mut client, &self.guest).await?;
let zone_id: String = resolve_zone(&mut client, &self.zone).await?;
let root = client
.read_guest_metrics(ReadGuestMetricsRequest { guest_id })
.read_zone_metrics(ReadZoneMetricsRequest { zone_id })
.await?
.into_inner()
.root
.unwrap_or_default();
match self.format {
MetricsFormat::Tree => {
ZoneMetricsFormat::Tree => {
self.print_metrics_tree(root)?;
}
MetricsFormat::Json | MetricsFormat::JsonPretty | MetricsFormat::Yaml => {
ZoneMetricsFormat::Json | ZoneMetricsFormat::JsonPretty | ZoneMetricsFormat::Yaml => {
let value = serde_json::to_value(proto2dynamic(root)?)?;
let encoded = if self.format == MetricsFormat::JsonPretty {
let encoded = if self.format == ZoneMetricsFormat::JsonPretty {
serde_json::to_string_pretty(&value)?
} else if self.format == MetricsFormat::Yaml {
} else if self.format == ZoneMetricsFormat::Yaml {
serde_yaml::to_string(&value)?
} else {
serde_json::to_string(&value)?
@ -62,7 +62,7 @@ impl MetricsCommand {
println!("{}", encoded.trim());
}
MetricsFormat::KeyValue => {
ZoneMetricsFormat::KeyValue => {
self.print_key_value(root)?;
}
}
@ -70,12 +70,12 @@ impl MetricsCommand {
Ok(())
}
fn print_metrics_tree(&self, root: GuestMetricNode) -> Result<()> {
fn print_metrics_tree(&self, root: ZoneMetricNode) -> Result<()> {
print!("{}", metrics_tree(root));
Ok(())
}
fn print_key_value(&self, metrics: GuestMetricNode) -> Result<()> {
fn print_key_value(&self, metrics: ZoneMetricNode) -> Result<()> {
let kvs = metrics_flat(metrics);
println!("{}", kv2line(kvs));
Ok(())


@ -0,0 +1,89 @@
use anyhow::Result;
use clap::{Parser, Subcommand};
use tonic::transport::Channel;
use krata::events::EventStream;
use krata::v1::control::control_service_client::ControlServiceClient;
use crate::cli::zone::attach::ZoneAttachCommand;
use crate::cli::zone::destroy::ZoneDestroyCommand;
use crate::cli::zone::exec::ZoneExecCommand;
use crate::cli::zone::launch::ZoneLaunchCommand;
use crate::cli::zone::list::ZoneListCommand;
use crate::cli::zone::logs::ZoneLogsCommand;
use crate::cli::zone::metrics::ZoneMetricsCommand;
use crate::cli::zone::resolve::ZoneResolveCommand;
use crate::cli::zone::top::ZoneTopCommand;
use crate::cli::zone::watch::ZoneWatchCommand;
pub mod attach;
pub mod destroy;
pub mod exec;
pub mod launch;
pub mod list;
pub mod logs;
pub mod metrics;
pub mod resolve;
pub mod top;
pub mod watch;
#[derive(Parser)]
#[command(about = "Manage the zones on the isolation engine")]
pub struct ZoneCommand {
#[command(subcommand)]
subcommand: ZoneCommands,
}
impl ZoneCommand {
pub async fn run(
self,
client: ControlServiceClient<Channel>,
events: EventStream,
) -> Result<()> {
self.subcommand.run(client, events).await
}
}
#[derive(Subcommand)]
pub enum ZoneCommands {
Attach(ZoneAttachCommand),
List(ZoneListCommand),
Launch(ZoneLaunchCommand),
Destroy(ZoneDestroyCommand),
Exec(ZoneExecCommand),
Logs(ZoneLogsCommand),
Metrics(ZoneMetricsCommand),
Resolve(ZoneResolveCommand),
Top(ZoneTopCommand),
Watch(ZoneWatchCommand),
}
impl ZoneCommands {
pub async fn run(
self,
client: ControlServiceClient<Channel>,
events: EventStream,
) -> Result<()> {
match self {
ZoneCommands::Launch(launch) => launch.run(client, events).await,
ZoneCommands::Destroy(destroy) => destroy.run(client, events).await,
ZoneCommands::Attach(attach) => attach.run(client, events).await,
ZoneCommands::Logs(logs) => logs.run(client, events).await,
ZoneCommands::List(list) => list.run(client, events).await,
ZoneCommands::Watch(watch) => watch.run(events).await,
ZoneCommands::Resolve(resolve) => resolve.run(client).await,
ZoneCommands::Metrics(metrics) => metrics.run(client, events).await,
ZoneCommands::Top(top) => top.run(client, events).await,
ZoneCommands::Exec(exec) => exec.run(client).await,
}
}
}


@ -1,26 +1,26 @@
use anyhow::Result;
use clap::Parser;
use krata::v1::control::{control_service_client::ControlServiceClient, ResolveGuestRequest};
use krata::v1::control::{control_service_client::ControlServiceClient, ResolveZoneRequest};
use tonic::{transport::Channel, Request};
#[derive(Parser)]
#[command(about = "Resolve a guest name to a uuid")]
pub struct ResolveCommand {
#[arg(help = "Guest name")]
guest: String,
#[command(about = "Resolve a zone name to a uuid")]
pub struct ZoneResolveCommand {
#[arg(help = "Zone name")]
zone: String,
}
impl ResolveCommand {
impl ZoneResolveCommand {
pub async fn run(self, mut client: ControlServiceClient<Channel>) -> Result<()> {
let reply = client
.resolve_guest(Request::new(ResolveGuestRequest {
name: self.guest.clone(),
.resolve_zone(Request::new(ResolveZoneRequest {
name: self.zone.clone(),
}))
.await?
.into_inner();
if let Some(guest) = reply.guest {
println!("{}", guest.id);
if let Some(zone) = reply.zone {
println!("{}", zone.id);
} else {
std::process::exit(1);
}


@ -24,19 +24,19 @@ use ratatui::{
};
use crate::{
format::guest_status_text,
format::zone_status_text,
metrics::{
lookup_metric_value, MultiMetricCollector, MultiMetricCollectorHandle, MultiMetricState,
},
};
#[derive(Parser)]
#[command(about = "Dashboard for running guests")]
pub struct TopCommand {}
#[command(about = "Dashboard for running zones")]
pub struct ZoneTopCommand {}
pub type Tui = Terminal<CrosstermBackend<Stdout>>;
impl TopCommand {
impl ZoneTopCommand {
pub async fn run(
self,
client: ControlServiceClient<Channel>,
@ -44,14 +44,14 @@ impl TopCommand {
) -> Result<()> {
let collector = MultiMetricCollector::new(client, events, Duration::from_millis(200))?;
let collector = collector.launch().await?;
let mut tui = TopCommand::init()?;
let mut app = TopApp {
metrics: MultiMetricState { guests: vec![] },
let mut tui = ZoneTopCommand::init()?;
let mut app = ZoneTopApp {
metrics: MultiMetricState { zones: vec![] },
exit: false,
table: TableState::new(),
};
app.run(collector, &mut tui).await?;
TopCommand::restore()?;
ZoneTopCommand::restore()?;
Ok(())
}
@ -68,13 +68,13 @@ impl TopCommand {
}
}
pub struct TopApp {
pub struct ZoneTopApp {
table: TableState,
metrics: MultiMetricState,
exit: bool,
}
impl TopApp {
impl ZoneTopApp {
pub async fn run(
&mut self,
mut collector: MultiMetricCollectorHandle,
@ -136,7 +136,7 @@ impl TopApp {
}
}
impl Widget for &mut TopApp {
impl Widget for &mut ZoneTopApp {
fn render(self, area: Rect, buf: &mut Buffer) {
let title = Title::from(" krata isolation engine ".bold());
let instructions = Title::from(vec![" Quit ".into(), "<Q> ".blue().bold()]);
@ -152,12 +152,12 @@ impl Widget for &mut TopApp {
let mut rows = vec![];
for ms in &self.metrics.guests {
let Some(ref spec) = ms.guest.spec else {
for ms in &self.metrics.zones {
let Some(ref spec) = ms.zone.spec else {
continue;
};
let Some(ref state) = ms.guest.state else {
let Some(ref state) = ms.zone.state else {
continue;
};
@ -176,8 +176,8 @@ impl Widget for &mut TopApp {
let row = Row::new(vec![
spec.name.clone(),
ms.guest.id.clone(),
guest_status_text(state.status()),
ms.zone.id.clone(),
zone_status_text(state.status()),
memory_total.unwrap_or_default(),
memory_used.unwrap_or_default(),
memory_free.unwrap_or_default(),


@ -2,53 +2,48 @@ use anyhow::Result;
use clap::{Parser, ValueEnum};
use krata::{
events::EventStream,
v1::{common::Guest, control::watch_events_reply::Event},
v1::{common::Zone, control::watch_events_reply::Event},
};
use prost_reflect::ReflectMessage;
use serde_json::Value;
use crate::format::{guest_simple_line, kv2line, proto2dynamic, proto2kv};
use crate::format::{kv2line, proto2dynamic, proto2kv, zone_simple_line};
#[derive(ValueEnum, Clone, Debug, PartialEq, Eq)]
enum WatchFormat {
enum ZoneWatchFormat {
Simple,
Json,
KeyValue,
}
#[derive(Parser)]
#[command(about = "Watch for guest changes")]
pub struct WatchCommand {
#[command(about = "Watch for zone changes")]
pub struct ZoneWatchCommand {
#[arg(short, long, default_value = "simple", help = "Output format")]
format: WatchFormat,
format: ZoneWatchFormat,
}
impl WatchCommand {
impl ZoneWatchCommand {
pub async fn run(self, events: EventStream) -> Result<()> {
let mut stream = events.subscribe();
loop {
let event = stream.recv().await?;
let Event::GuestChanged(changed) = event;
let guest = changed.guest.clone();
self.print_event("guest.changed", changed, guest)?;
let Event::ZoneChanged(changed) = event;
let zone = changed.zone.clone();
self.print_event("zone.changed", changed, zone)?;
}
}
fn print_event(
&self,
typ: &str,
event: impl ReflectMessage,
guest: Option<Guest>,
) -> Result<()> {
fn print_event(&self, typ: &str, event: impl ReflectMessage, zone: Option<Zone>) -> Result<()> {
match self.format {
WatchFormat::Simple => {
if let Some(guest) = guest {
println!("{}", guest_simple_line(&guest));
ZoneWatchFormat::Simple => {
if let Some(zone) = zone {
println!("{}", zone_simple_line(&zone));
}
}
WatchFormat::Json => {
ZoneWatchFormat::Json => {
let message = proto2dynamic(event)?;
let mut value = serde_json::to_value(&message)?;
if let Value::Object(ref mut map) = value {
@ -57,7 +52,7 @@ impl WatchCommand {
println!("{}", serde_json::to_string(&value)?);
}
WatchFormat::KeyValue => {
ZoneWatchFormat::KeyValue => {
let mut map = proto2kv(event)?;
map.insert("event.type".to_string(), typ.to_string());
println!("{}", kv2line(map),);


@ -7,10 +7,10 @@ use crossterm::{
use krata::{
events::EventStream,
v1::{
common::GuestStatus,
common::ZoneStatus,
control::{
watch_events_reply::Event, ConsoleDataReply, ConsoleDataRequest, ExecGuestReply,
ExecGuestRequest,
watch_events_reply::Event, ExecZoneReply, ExecZoneRequest, ZoneConsoleReply,
ZoneConsoleRequest,
},
},
};
@ -25,10 +25,10 @@ use tonic::Streaming;
pub struct StdioConsoleStream;
impl StdioConsoleStream {
pub async fn stdin_stream(guest: String) -> impl Stream<Item = ConsoleDataRequest> {
pub async fn stdin_stream(zone: String) -> impl Stream<Item = ZoneConsoleRequest> {
let mut stdin = stdin();
stream! {
yield ConsoleDataRequest { guest_id: guest, data: vec![] };
yield ZoneConsoleRequest { zone_id: zone, data: vec![] };
let mut buffer = vec![0u8; 60];
loop {
@ -43,14 +43,14 @@ impl StdioConsoleStream {
if size == 1 && buffer[0] == 0x1d {
break;
}
yield ConsoleDataRequest { guest_id: String::default(), data };
yield ZoneConsoleRequest { zone_id: String::default(), data };
}
}
}
pub async fn stdin_stream_exec(
initial: ExecGuestRequest,
) -> impl Stream<Item = ExecGuestRequest> {
initial: ExecZoneRequest,
) -> impl Stream<Item = ExecZoneRequest> {
let mut stdin = stdin();
stream! {
yield initial;
@ -68,12 +68,12 @@ impl StdioConsoleStream {
if size == 1 && buffer[0] == 0x1d {
break;
}
yield ExecGuestRequest { guest_id: String::default(), task: None, data };
yield ExecZoneRequest { zone_id: String::default(), task: None, data };
}
}
}
pub async fn stdout(mut stream: Streaming<ConsoleDataReply>) -> Result<()> {
pub async fn stdout(mut stream: Streaming<ZoneConsoleReply>) -> Result<()> {
if stdin().is_tty() {
enable_raw_mode()?;
StdioConsoleStream::register_terminal_restore_hook()?;
@ -90,7 +90,7 @@ impl StdioConsoleStream {
Ok(())
}
pub async fn exec_output(mut stream: Streaming<ExecGuestReply>) -> Result<i32> {
pub async fn exec_output(mut stream: Streaming<ExecZoneReply>) -> Result<i32> {
let mut stdout = stdout();
let mut stderr = stderr();
while let Some(reply) = stream.next().await {
@ -106,33 +106,33 @@ impl StdioConsoleStream {
}
if reply.exited {
if reply.error.is_empty() {
return Ok(reply.exit_code);
return if reply.error.is_empty() {
Ok(reply.exit_code)
} else {
return Err(anyhow!("exec failed: {}", reply.error));
}
Err(anyhow!("exec failed: {}", reply.error))
};
}
}
Ok(-1)
}
pub async fn guest_exit_hook(
pub async fn zone_exit_hook(
id: String,
events: EventStream,
) -> Result<JoinHandle<Option<i32>>> {
Ok(tokio::task::spawn(async move {
let mut stream = events.subscribe();
while let Ok(event) = stream.recv().await {
let Event::GuestChanged(changed) = event;
let Some(guest) = changed.guest else {
let Event::ZoneChanged(changed) = event;
let Some(zone) = changed.zone else {
continue;
};
let Some(state) = guest.state else {
let Some(state) = zone.state else {
continue;
};
if guest.id != id {
if zone.id != id {
continue;
}
@ -141,7 +141,7 @@ impl StdioConsoleStream {
}
let status = state.status();
if status == GuestStatus::Destroying || status == GuestStatus::Destroyed {
if status == ZoneStatus::Destroying || status == ZoneStatus::Destroyed {
return Some(10);
}
}
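
A standalone sketch of the detach handling used by the console streams above: stdin is read in small chunks and the loop stops when a lone 0x1d byte (Ctrl+]) arrives. The 60-byte buffer matches the streams above, but the println! forwarding is an illustrative placeholder, not krata code.

use tokio::io::AsyncReadExt;

#[tokio::main]
async fn main() -> std::io::Result<()> {
    let mut stdin = tokio::io::stdin();
    let mut buffer = vec![0u8; 60];
    loop {
        let size = stdin.read(&mut buffer).await?;
        if size == 0 {
            break; // stdin closed
        }
        if size == 1 && buffer[0] == 0x1d {
            break; // Ctrl+] pressed on its own: detach from the console
        }
        // In the real stream this chunk is yielded as a ZoneConsoleRequest.
        println!("would forward {} bytes", size);
    }
    Ok(())
}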


@ -3,7 +3,7 @@ use std::{collections::HashMap, time::Duration};
use anyhow::Result;
use fancy_duration::FancyDuration;
use human_bytes::human_bytes;
use krata::v1::common::{Guest, GuestMetricFormat, GuestMetricNode, GuestStatus};
use krata::v1::common::{Zone, ZoneMetricFormat, ZoneMetricNode, ZoneStatus};
use prost_reflect::{DynamicMessage, ReflectMessage};
use prost_types::Value;
use termtree::Tree;
@ -75,32 +75,31 @@ pub fn kv2line(map: HashMap<String, String>) -> String {
.join(" ")
}
pub fn guest_status_text(status: GuestStatus) -> String {
pub fn zone_status_text(status: ZoneStatus) -> String {
match status {
GuestStatus::Starting => "starting",
GuestStatus::Started => "started",
GuestStatus::Destroying => "destroying",
GuestStatus::Destroyed => "destroyed",
GuestStatus::Exited => "exited",
GuestStatus::Failed => "failed",
ZoneStatus::Starting => "starting",
ZoneStatus::Started => "started",
ZoneStatus::Destroying => "destroying",
ZoneStatus::Destroyed => "destroyed",
ZoneStatus::Exited => "exited",
ZoneStatus::Failed => "failed",
_ => "unknown",
}
.to_string()
}
pub fn guest_simple_line(guest: &Guest) -> String {
let state = guest_status_text(
guest
.state
pub fn zone_simple_line(zone: &Zone) -> String {
let state = zone_status_text(
zone.state
.as_ref()
.map(|x| x.status())
.unwrap_or(GuestStatus::Unknown),
.unwrap_or(ZoneStatus::Unknown),
);
let name = guest.spec.as_ref().map(|x| x.name.as_str()).unwrap_or("");
let network = guest.state.as_ref().and_then(|x| x.network.as_ref());
let ipv4 = network.map(|x| x.guest_ipv4.as_str()).unwrap_or("");
let ipv6 = network.map(|x| x.guest_ipv6.as_str()).unwrap_or("");
format!("{}\t{}\t{}\t{}\t{}", guest.id, state, name, ipv4, ipv6)
let name = zone.spec.as_ref().map(|x| x.name.as_str()).unwrap_or("");
let network = zone.state.as_ref().and_then(|x| x.network.as_ref());
let ipv4 = network.map(|x| x.zone_ipv4.as_str()).unwrap_or("");
let ipv6 = network.map(|x| x.zone_ipv6.as_str()).unwrap_or("");
format!("{}\t{}\t{}\t{}\t{}", zone.id, state, name, ipv4, ipv6)
}
fn metrics_value_string(value: Value) -> String {
@ -116,18 +115,18 @@ fn metrics_value_numeric(value: Value) -> f64 {
string.parse::<f64>().ok().unwrap_or(f64::NAN)
}
pub fn metrics_value_pretty(value: Value, format: GuestMetricFormat) -> String {
pub fn metrics_value_pretty(value: Value, format: ZoneMetricFormat) -> String {
match format {
GuestMetricFormat::Bytes => human_bytes(metrics_value_numeric(value)),
GuestMetricFormat::Integer => (metrics_value_numeric(value) as u64).to_string(),
GuestMetricFormat::DurationSeconds => {
ZoneMetricFormat::Bytes => human_bytes(metrics_value_numeric(value)),
ZoneMetricFormat::Integer => (metrics_value_numeric(value) as u64).to_string(),
ZoneMetricFormat::DurationSeconds => {
FancyDuration(Duration::from_secs_f64(metrics_value_numeric(value))).to_string()
}
_ => metrics_value_string(value),
}
}
fn metrics_flat_internal(prefix: &str, node: GuestMetricNode, map: &mut HashMap<String, String>) {
fn metrics_flat_internal(prefix: &str, node: ZoneMetricNode, map: &mut HashMap<String, String>) {
if let Some(value) = node.value {
map.insert(prefix.to_string(), metrics_value_string(value));
}
@ -142,13 +141,13 @@ fn metrics_flat_internal(prefix: &str, node: GuestMetricNode, map: &mut HashMap<
}
}
pub fn metrics_flat(root: GuestMetricNode) -> HashMap<String, String> {
pub fn metrics_flat(root: ZoneMetricNode) -> HashMap<String, String> {
let mut map = HashMap::new();
metrics_flat_internal("", root, &mut map);
map
}
pub fn metrics_tree(node: GuestMetricNode) -> Tree<String> {
pub fn metrics_tree(node: ZoneMetricNode) -> Tree<String> {
let mut name = node.name.to_string();
let format = node.format();
if let Some(value) = node.value {
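
metrics_flat_internal above walks a ZoneMetricNode tree into a flat map; the following standalone sketch shows the same pattern, assuming slash-joined key paths (as the lookup helper later in this changeset suggests) and using a stand-in Node type instead of the real protobuf message.

use std::collections::HashMap;

struct Node {
    name: String,
    value: Option<String>,
    children: Vec<Node>,
}

fn flatten(prefix: &str, node: &Node, map: &mut HashMap<String, String>) {
    if let Some(ref value) = node.value {
        map.insert(prefix.to_string(), value.clone());
    }
    for child in &node.children {
        let path = if prefix.is_empty() {
            child.name.clone()
        } else {
            format!("{}/{}", prefix, child.name)
        };
        flatten(&path, child, map);
    }
}

fn main() {
    let root = Node {
        name: String::new(),
        value: None,
        children: vec![Node {
            name: "memory".into(),
            value: None,
            children: vec![Node {
                name: "used".into(),
                value: Some("1024".into()),
                children: vec![],
            }],
        }],
    };
    let mut map = HashMap::new();
    flatten("", &root, &mut map);
    assert_eq!(map.get("memory/used"), Some(&"1024".to_string()));
}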


@ -2,10 +2,10 @@ use anyhow::Result;
use krata::{
events::EventStream,
v1::{
common::{Guest, GuestMetricNode, GuestStatus},
common::{Zone, ZoneMetricNode, ZoneStatus},
control::{
control_service_client::ControlServiceClient, watch_events_reply::Event,
ListGuestsRequest, ReadGuestMetricsRequest,
ListZonesRequest, ReadZoneMetricsRequest,
},
},
};
@ -22,12 +22,12 @@ use tonic::transport::Channel;
use crate::format::metrics_value_pretty;
pub struct MetricState {
pub guest: Guest,
pub root: Option<GuestMetricNode>,
pub zone: Zone,
pub root: Option<ZoneMetricNode>,
}
pub struct MultiMetricState {
pub guests: Vec<MetricState>,
pub zones: Vec<MetricState>,
}
pub struct MultiMetricCollector {
@ -72,26 +72,26 @@ impl MultiMetricCollector {
pub async fn process(&mut self, sender: Sender<MultiMetricState>) -> Result<()> {
let mut events = self.events.subscribe();
let mut guests: Vec<Guest> = self
let mut zones: Vec<Zone> = self
.client
.list_guests(ListGuestsRequest {})
.list_zones(ListZonesRequest {})
.await?
.into_inner()
.guests;
.zones;
loop {
let collect = select! {
x = events.recv() => match x {
Ok(event) => {
let Event::GuestChanged(changed) = event;
let Some(guest) = changed.guest else {
let Event::ZoneChanged(changed) = event;
let Some(zone) = changed.zone else {
continue;
};
let Some(ref state) = guest.state else {
let Some(ref state) = zone.state else {
continue;
};
guests.retain(|x| x.id != guest.id);
if state.status() != GuestStatus::Destroying {
guests.push(guest);
zones.retain(|x| x.id != zone.id);
if state.status() != ZoneStatus::Destroying {
zones.push(zone);
}
false
},
@ -111,19 +111,19 @@ impl MultiMetricCollector {
}
let mut metrics = Vec::new();
for guest in &guests {
let Some(ref state) = guest.state else {
for zone in &zones {
let Some(ref state) = zone.state else {
continue;
};
if state.status() != GuestStatus::Started {
if state.status() != ZoneStatus::Started {
continue;
}
let root = timeout(
Duration::from_secs(5),
self.client.read_guest_metrics(ReadGuestMetricsRequest {
guest_id: guest.id.clone(),
self.client.read_zone_metrics(ReadZoneMetricsRequest {
zone_id: zone.id.clone(),
}),
)
.await
@ -132,16 +132,16 @@ impl MultiMetricCollector {
.map(|x| x.into_inner())
.and_then(|x| x.root);
metrics.push(MetricState {
guest: guest.clone(),
zone: zone.clone(),
root,
});
}
sender.send(MultiMetricState { guests: metrics }).await?;
sender.send(MultiMetricState { zones: metrics }).await?;
}
}
}
pub fn lookup<'a>(node: &'a GuestMetricNode, path: &str) -> Option<&'a GuestMetricNode> {
pub fn lookup<'a>(node: &'a ZoneMetricNode, path: &str) -> Option<&'a ZoneMetricNode> {
let Some((what, b)) = path.split_once('/') else {
return node.children.iter().find(|x| x.name == path);
};
@ -149,7 +149,7 @@ pub fn lookup<'a>(node: &'a GuestMetricNode, path: &str) -> Option<&'a GuestMetr
return lookup(next, b);
}
pub fn lookup_metric_value(node: &GuestMetricNode, path: &str) -> Option<String> {
pub fn lookup_metric_value(node: &ZoneMetricNode, path: &str) -> Option<String> {
lookup(node, path).and_then(|x| {
x.value
.as_ref()
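
The collector above wraps each read_zone_metrics call in a five second timeout and collapses both a timeout and an RPC failure into None, so one unresponsive zone cannot stall the whole loop. A small self-contained sketch of that pattern, where fetch_metrics and Metrics are stand-ins rather than krata APIs:

use std::time::Duration;
use tokio::time::timeout;

struct Metrics(u64);

async fn fetch_metrics(zone_id: &str) -> Result<Metrics, String> {
    let _ = zone_id;
    Ok(Metrics(42))
}

#[tokio::main]
async fn main() {
    let root: Option<Metrics> = timeout(Duration::from_secs(5), fetch_metrics("zone-a"))
        .await
        .ok()                  // None if the five second timeout fired
        .and_then(|r| r.ok()); // None if the call itself failed
    println!("{:?}", root.map(|m| m.0));
}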


@ -17,9 +17,9 @@ circular-buffer = { workspace = true }
clap = { workspace = true }
env_logger = { workspace = true }
futures = { workspace = true }
krata = { path = "../krata", version = "^0.0.12" }
krata-oci = { path = "../oci", version = "^0.0.12" }
krata-runtime = { path = "../runtime", version = "^0.0.12" }
krata = { path = "../krata", version = "^0.0.14" }
krata-oci = { path = "../oci", version = "^0.0.14" }
krata-runtime = { path = "../runtime", version = "^0.0.14" }
log = { workspace = true }
prost = { workspace = true }
redb = { workspace = true }


@ -13,7 +13,7 @@ use tokio::{
};
use uuid::Uuid;
use crate::glt::GuestLookupTable;
use crate::zlt::ZoneLookupTable;
const CONSOLE_BUFFER_SIZE: usize = 1024 * 1024;
type RawConsoleBuffer = CircularBuffer<CONSOLE_BUFFER_SIZE, u8>;
@ -24,7 +24,7 @@ type BufferMap = Arc<Mutex<HashMap<u32, ConsoleBuffer>>>;
#[derive(Clone)]
pub struct DaemonConsoleHandle {
glt: GuestLookupTable,
glt: ZoneLookupTable,
listeners: ListenerMap,
buffers: BufferMap,
sender: Sender<(u32, Vec<u8>)>,
@ -84,7 +84,7 @@ impl Drop for DaemonConsoleHandle {
}
pub struct DaemonConsole {
glt: GuestLookupTable,
glt: ZoneLookupTable,
listeners: ListenerMap,
buffers: BufferMap,
receiver: Receiver<(u32, Option<Vec<u8>>)>,
@ -93,7 +93,7 @@ pub struct DaemonConsole {
}
impl DaemonConsole {
pub async fn new(glt: GuestLookupTable) -> Result<DaemonConsole> {
pub async fn new(glt: ZoneLookupTable) -> Result<DaemonConsole> {
let (service, sender, receiver) =
ChannelService::new("krata-console".to_string(), Some(0)).await?;
let task = service.launch().await?;


@ -7,16 +7,16 @@ use krata::{
ExecStreamRequestStdin, ExecStreamRequestUpdate, MetricsRequest, Request as IdmRequest,
},
v1::{
common::{Guest, GuestState, GuestStatus, OciImageFormat},
common::{OciImageFormat, Zone, ZoneState, ZoneStatus},
control::{
control_service_server::ControlService, ConsoleDataReply, ConsoleDataRequest,
CreateGuestReply, CreateGuestRequest, DestroyGuestReply, DestroyGuestRequest,
DeviceInfo, ExecGuestReply, ExecGuestRequest, HostCpuTopologyInfo,
HostCpuTopologyReply, HostCpuTopologyRequest, HostPowerManagementPolicy,
IdentifyHostReply, IdentifyHostRequest, ListDevicesReply, ListDevicesRequest,
ListGuestsReply, ListGuestsRequest, PullImageReply, PullImageRequest,
ReadGuestMetricsReply, ReadGuestMetricsRequest, ResolveGuestReply, ResolveGuestRequest,
SnoopIdmReply, SnoopIdmRequest, WatchEventsReply, WatchEventsRequest,
control_service_server::ControlService, CreateZoneReply, CreateZoneRequest,
DestroyZoneReply, DestroyZoneRequest, DeviceInfo, ExecZoneReply, ExecZoneRequest,
HostCpuTopologyInfo, HostCpuTopologyReply, HostCpuTopologyRequest,
HostPowerManagementPolicy, IdentifyHostReply, IdentifyHostRequest, ListDevicesReply,
ListDevicesRequest, ListZonesReply, ListZonesRequest, PullImageReply, PullImageRequest,
ReadZoneMetricsReply, ReadZoneMetricsRequest, ResolveZoneReply, ResolveZoneRequest,
SnoopIdmReply, SnoopIdmRequest, WatchEventsReply, WatchEventsRequest, ZoneConsoleReply,
ZoneConsoleRequest,
},
},
};
@ -37,9 +37,9 @@ use tonic::{Request, Response, Status, Streaming};
use uuid::Uuid;
use crate::{
command::DaemonCommand, console::DaemonConsoleHandle, db::GuestStore,
devices::DaemonDeviceManager, event::DaemonEventContext, glt::GuestLookupTable,
idm::DaemonIdmHandle, metrics::idm_metric_to_api, oci::convert_oci_progress,
command::DaemonCommand, console::DaemonConsoleHandle, db::ZoneStore,
devices::DaemonDeviceManager, event::DaemonEventContext, idm::DaemonIdmHandle,
metrics::idm_metric_to_api, oci::convert_oci_progress, zlt::ZoneLookupTable,
};
pub struct ApiError {
@ -62,13 +62,13 @@ impl From<ApiError> for Status {
#[derive(Clone)]
pub struct DaemonControlService {
glt: GuestLookupTable,
glt: ZoneLookupTable,
devices: DaemonDeviceManager,
events: DaemonEventContext,
console: DaemonConsoleHandle,
idm: DaemonIdmHandle,
guests: GuestStore,
guest_reconciler_notify: Sender<Uuid>,
zones: ZoneStore,
zone_reconciler_notify: Sender<Uuid>,
packer: OciPackerService,
runtime: Runtime,
}
@ -76,13 +76,13 @@ pub struct DaemonControlService {
impl DaemonControlService {
#[allow(clippy::too_many_arguments)]
pub fn new(
glt: GuestLookupTable,
glt: ZoneLookupTable,
devices: DaemonDeviceManager,
events: DaemonEventContext,
console: DaemonConsoleHandle,
idm: DaemonIdmHandle,
guests: GuestStore,
guest_reconciler_notify: Sender<Uuid>,
zones: ZoneStore,
zone_reconciler_notify: Sender<Uuid>,
packer: OciPackerService,
runtime: Runtime,
) -> Self {
@ -92,8 +92,8 @@ impl DaemonControlService {
events,
console,
idm,
guests,
guest_reconciler_notify,
zones,
zone_reconciler_notify,
packer,
runtime,
}
@ -102,7 +102,7 @@ impl DaemonControlService {
enum ConsoleDataSelect {
Read(Option<Vec<u8>>),
Write(Option<Result<ConsoleDataRequest, tonic::Status>>),
Write(Option<Result<ZoneConsoleRequest, Status>>),
}
enum PullImageSelect {
@ -112,11 +112,11 @@ enum PullImageSelect {
#[tonic::async_trait]
impl ControlService for DaemonControlService {
type ExecGuestStream =
Pin<Box<dyn Stream<Item = Result<ExecGuestReply, Status>> + Send + 'static>>;
type ExecZoneStream =
Pin<Box<dyn Stream<Item = Result<ExecZoneReply, Status>> + Send + 'static>>;
type ConsoleDataStream =
Pin<Box<dyn Stream<Item = Result<ConsoleDataReply, Status>> + Send + 'static>>;
type AttachZoneConsoleStream =
Pin<Box<dyn Stream<Item = Result<ZoneConsoleReply, Status>> + Send + 'static>>;
type PullImageStream =
Pin<Box<dyn Stream<Item = Result<PullImageReply, Status>> + Send + 'static>>;
@ -139,25 +139,25 @@ impl ControlService for DaemonControlService {
}))
}
async fn create_guest(
async fn create_zone(
&self,
request: Request<CreateGuestRequest>,
) -> Result<Response<CreateGuestReply>, Status> {
request: Request<CreateZoneRequest>,
) -> Result<Response<CreateZoneReply>, Status> {
let request = request.into_inner();
let Some(spec) = request.spec else {
return Err(ApiError {
message: "guest spec not provided".to_string(),
message: "zone spec not provided".to_string(),
}
.into());
};
let uuid = Uuid::new_v4();
self.guests
self.zones
.update(
uuid,
Guest {
Zone {
id: uuid.to_string(),
state: Some(GuestState {
status: GuestStatus::Starting.into(),
state: Some(ZoneState {
status: ZoneStatus::Starting.into(),
network: None,
exit_info: None,
error_info: None,
@ -169,21 +169,21 @@ impl ControlService for DaemonControlService {
)
.await
.map_err(ApiError::from)?;
self.guest_reconciler_notify
self.zone_reconciler_notify
.send(uuid)
.await
.map_err(|x| ApiError {
message: x.to_string(),
})?;
Ok(Response::new(CreateGuestReply {
guest_id: uuid.to_string(),
Ok(Response::new(CreateZoneReply {
zone_id: uuid.to_string(),
}))
}
async fn exec_guest(
async fn exec_zone(
&self,
request: Request<Streaming<ExecGuestRequest>>,
) -> Result<Response<Self::ExecGuestStream>, Status> {
request: Request<Streaming<ExecZoneRequest>>,
) -> Result<Response<Self::ExecZoneStream>, Status> {
let mut input = request.into_inner();
let Some(request) = input.next().await else {
return Err(ApiError {
@ -200,7 +200,7 @@ impl ControlService for DaemonControlService {
.into());
};
let uuid = Uuid::from_str(&request.guest_id).map_err(|error| ApiError {
let uuid = Uuid::from_str(&request.zone_id).map_err(|error| ApiError {
message: error.to_string(),
})?;
let idm = self.idm.client(uuid).await.map_err(|error| ApiError {
@ -232,7 +232,7 @@ impl ControlService for DaemonControlService {
loop {
select! {
x = input.next() => if let Some(update) = x {
let update: Result<ExecGuestRequest, Status> = update.map_err(|error| ApiError {
let update: Result<ExecZoneRequest, Status> = update.map_err(|error| ApiError {
message: error.to_string()
}.into());
@ -252,7 +252,7 @@ impl ControlService for DaemonControlService {
let Some(IdmResponseType::ExecStream(update)) = response.response else {
break;
};
let reply = ExecGuestReply {
let reply = ExecZoneReply {
exited: update.exited,
error: update.error,
exit_code: update.exit_code,
@ -269,80 +269,80 @@ impl ControlService for DaemonControlService {
}
};
Ok(Response::new(Box::pin(output) as Self::ExecGuestStream))
Ok(Response::new(Box::pin(output) as Self::ExecZoneStream))
}
async fn destroy_guest(
async fn destroy_zone(
&self,
request: Request<DestroyGuestRequest>,
) -> Result<Response<DestroyGuestReply>, Status> {
request: Request<DestroyZoneRequest>,
) -> Result<Response<DestroyZoneReply>, Status> {
let request = request.into_inner();
let uuid = Uuid::from_str(&request.guest_id).map_err(|error| ApiError {
let uuid = Uuid::from_str(&request.zone_id).map_err(|error| ApiError {
message: error.to_string(),
})?;
let Some(mut guest) = self.guests.read(uuid).await.map_err(ApiError::from)? else {
let Some(mut zone) = self.zones.read(uuid).await.map_err(ApiError::from)? else {
return Err(ApiError {
message: "guest not found".to_string(),
message: "zone not found".to_string(),
}
.into());
};
guest.state = Some(guest.state.as_mut().cloned().unwrap_or_default());
zone.state = Some(zone.state.as_mut().cloned().unwrap_or_default());
if guest.state.as_ref().unwrap().status() == GuestStatus::Destroyed {
if zone.state.as_ref().unwrap().status() == ZoneStatus::Destroyed {
return Err(ApiError {
message: "guest already destroyed".to_string(),
message: "zone already destroyed".to_string(),
}
.into());
}
guest.state.as_mut().unwrap().status = GuestStatus::Destroying.into();
self.guests
.update(uuid, guest)
zone.state.as_mut().unwrap().status = ZoneStatus::Destroying.into();
self.zones
.update(uuid, zone)
.await
.map_err(ApiError::from)?;
self.guest_reconciler_notify
self.zone_reconciler_notify
.send(uuid)
.await
.map_err(|x| ApiError {
message: x.to_string(),
})?;
Ok(Response::new(DestroyGuestReply {}))
Ok(Response::new(DestroyZoneReply {}))
}
async fn list_guests(
async fn list_zones(
&self,
request: Request<ListGuestsRequest>,
) -> Result<Response<ListGuestsReply>, Status> {
request: Request<ListZonesRequest>,
) -> Result<Response<ListZonesReply>, Status> {
let _ = request.into_inner();
let guests = self.guests.list().await.map_err(ApiError::from)?;
let guests = guests.into_values().collect::<Vec<Guest>>();
Ok(Response::new(ListGuestsReply { guests }))
let zones = self.zones.list().await.map_err(ApiError::from)?;
let zones = zones.into_values().collect::<Vec<Zone>>();
Ok(Response::new(ListZonesReply { zones }))
}
async fn resolve_guest(
async fn resolve_zone(
&self,
request: Request<ResolveGuestRequest>,
) -> Result<Response<ResolveGuestReply>, Status> {
request: Request<ResolveZoneRequest>,
) -> Result<Response<ResolveZoneReply>, Status> {
let request = request.into_inner();
let guests = self.guests.list().await.map_err(ApiError::from)?;
let guests = guests
let zones = self.zones.list().await.map_err(ApiError::from)?;
let zones = zones
.into_values()
.filter(|x| {
let comparison_spec = x.spec.as_ref().cloned().unwrap_or_default();
(!request.name.is_empty() && comparison_spec.name == request.name)
|| x.id == request.name
})
.collect::<Vec<Guest>>();
Ok(Response::new(ResolveGuestReply {
guest: guests.first().cloned(),
.collect::<Vec<Zone>>();
Ok(Response::new(ResolveZoneReply {
zone: zones.first().cloned(),
}))
}
async fn console_data(
async fn attach_zone_console(
&self,
request: Request<Streaming<ConsoleDataRequest>>,
) -> Result<Response<Self::ConsoleDataStream>, Status> {
request: Request<Streaming<ZoneConsoleRequest>>,
) -> Result<Response<Self::AttachZoneConsoleStream>, Status> {
let mut input = request.into_inner();
let Some(request) = input.next().await else {
return Err(ApiError {
@ -351,7 +351,7 @@ impl ControlService for DaemonControlService {
.into());
};
let request = request?;
let uuid = Uuid::from_str(&request.guest_id).map_err(|error| ApiError {
let uuid = Uuid::from_str(&request.zone_id).map_err(|error| ApiError {
message: error.to_string(),
})?;
let (sender, mut receiver) = channel(100);
@ -364,7 +364,7 @@ impl ControlService for DaemonControlService {
})?;
let output = try_stream! {
yield ConsoleDataReply { data: console.initial.clone(), };
yield ZoneConsoleReply { data: console.initial.clone(), };
loop {
let what = select! {
x = receiver.recv() => ConsoleDataSelect::Read(x),
@ -373,7 +373,7 @@ impl ControlService for DaemonControlService {
match what {
ConsoleDataSelect::Read(Some(data)) => {
yield ConsoleDataReply { data, };
yield ZoneConsoleReply { data, };
},
ConsoleDataSelect::Read(None) => {
@ -396,15 +396,17 @@ impl ControlService for DaemonControlService {
}
};
Ok(Response::new(Box::pin(output) as Self::ConsoleDataStream))
Ok(Response::new(
Box::pin(output) as Self::AttachZoneConsoleStream
))
}
async fn read_guest_metrics(
async fn read_zone_metrics(
&self,
request: Request<ReadGuestMetricsRequest>,
) -> Result<Response<ReadGuestMetricsReply>, Status> {
request: Request<ReadZoneMetricsRequest>,
) -> Result<Response<ReadZoneMetricsReply>, Status> {
let request = request.into_inner();
let uuid = Uuid::from_str(&request.guest_id).map_err(|error| ApiError {
let uuid = Uuid::from_str(&request.zone_id).map_err(|error| ApiError {
message: error.to_string(),
})?;
let client = self.idm.client(uuid).await.map_err(|error| ApiError {
@ -420,7 +422,7 @@ impl ControlService for DaemonControlService {
message: error.to_string(),
})?;
let mut reply = ReadGuestMetricsReply::default();
let mut reply = ReadZoneMetricsReply::default();
if let Some(IdmResponseType::Metrics(metrics)) = response.response {
reply.root = metrics.root.map(idm_metric_to_api);
}
@ -446,7 +448,7 @@ impl ControlService for DaemonControlService {
let output = try_stream! {
let mut task = tokio::task::spawn(async move {
our_packer.request(name, format, request.overwrite_cache, context).await
our_packer.request(name, format, request.overwrite_cache, request.update, context).await
});
let abort_handle = task.abort_handle();
let _task_cancel_guard = scopeguard::guard(abort_handle, |handle| {
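
create_zone above follows a persist-then-notify shape: the record is written in the Starting state, then the zone reconciler is woken by sending the uuid over an mpsc channel and does the actual work asynchronously. A simplified sketch of that flow, with a HashMap standing in for the real ZoneStore:

use std::collections::HashMap;
use std::sync::Arc;
use tokio::sync::{mpsc, Mutex};
use uuid::Uuid;

struct ZoneRecord {
    status: &'static str,
}

#[tokio::main]
async fn main() {
    let store: Arc<Mutex<HashMap<Uuid, ZoneRecord>>> = Arc::new(Mutex::new(HashMap::new()));
    let (notify, mut receiver) = mpsc::channel::<Uuid>(16);

    // Reconciler side: react to notifications as they arrive.
    let reconciler_store = store.clone();
    let reconciler = tokio::spawn(async move {
        while let Some(uuid) = receiver.recv().await {
            let zones = reconciler_store.lock().await;
            let status = zones.get(&uuid).map(|z| z.status).unwrap_or("missing");
            println!("reconciling {} ({})", uuid, status);
        }
    });

    // API side: persist the new zone, then wake the reconciler.
    let uuid = Uuid::new_v4();
    store.lock().await.insert(uuid, ZoneRecord { status: "starting" });
    notify.send(uuid).await.unwrap();

    drop(notify); // closing the channel lets the reconciler task finish
    reconciler.await.unwrap();
}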


@ -1,66 +1,66 @@
use std::{collections::HashMap, path::Path, sync::Arc};
use anyhow::Result;
use krata::v1::common::Guest;
use krata::v1::common::Zone;
use log::error;
use prost::Message;
use redb::{Database, ReadableTable, TableDefinition};
use uuid::Uuid;
const GUESTS: TableDefinition<u128, &[u8]> = TableDefinition::new("guests");
const ZONES: TableDefinition<u128, &[u8]> = TableDefinition::new("zones");
#[derive(Clone)]
pub struct GuestStore {
pub struct ZoneStore {
database: Arc<Database>,
}
impl GuestStore {
impl ZoneStore {
pub fn open(path: &Path) -> Result<Self> {
let database = Database::create(path)?;
let write = database.begin_write()?;
let _ = write.open_table(GUESTS);
let _ = write.open_table(ZONES);
write.commit()?;
Ok(GuestStore {
Ok(ZoneStore {
database: Arc::new(database),
})
}
pub async fn read(&self, id: Uuid) -> Result<Option<Guest>> {
pub async fn read(&self, id: Uuid) -> Result<Option<Zone>> {
let read = self.database.begin_read()?;
let table = read.open_table(GUESTS)?;
let table = read.open_table(ZONES)?;
let Some(entry) = table.get(id.to_u128_le())? else {
return Ok(None);
};
let bytes = entry.value();
Ok(Some(Guest::decode(bytes)?))
Ok(Some(Zone::decode(bytes)?))
}
pub async fn list(&self) -> Result<HashMap<Uuid, Guest>> {
let mut guests: HashMap<Uuid, Guest> = HashMap::new();
pub async fn list(&self) -> Result<HashMap<Uuid, Zone>> {
let mut zones: HashMap<Uuid, Zone> = HashMap::new();
let read = self.database.begin_read()?;
let table = read.open_table(GUESTS)?;
let table = read.open_table(ZONES)?;
for result in table.iter()? {
let (key, value) = result?;
let uuid = Uuid::from_u128_le(key.value());
let state = match Guest::decode(value.value()) {
let state = match Zone::decode(value.value()) {
Ok(state) => state,
Err(error) => {
error!(
"found invalid guest state in database for uuid {}: {}",
"found invalid zone state in database for uuid {}: {}",
uuid, error
);
continue;
}
};
guests.insert(uuid, state);
zones.insert(uuid, state);
}
Ok(guests)
Ok(zones)
}
pub async fn update(&self, id: Uuid, entry: Guest) -> Result<()> {
pub async fn update(&self, id: Uuid, entry: Zone) -> Result<()> {
let write = self.database.begin_write()?;
{
let mut table = write.open_table(GUESTS)?;
let mut table = write.open_table(ZONES)?;
let bytes = entry.encode_to_vec();
table.insert(id.to_u128_le(), bytes.as_slice())?;
}
@ -71,7 +71,7 @@ impl GuestStore {
pub async fn remove(&self, id: Uuid) -> Result<()> {
let write = self.database.begin_write()?;
{
let mut table = write.open_table(GUESTS)?;
let mut table = write.open_table(ZONES)?;
table.remove(id.to_u128_le())?;
}
write.commit()?;
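
Hypothetical usage of the ZoneStore defined above, assuming it is in scope from the daemon's db module: zones are prost-encoded and keyed by the uuid's little-endian u128, so a round trip looks roughly like this, with every Zone field other than id left at its protobuf default.

use anyhow::Result;
use krata::v1::common::Zone;
use std::path::PathBuf;
use uuid::Uuid;

#[tokio::main]
async fn main() -> Result<()> {
    // ZoneStore is the type defined in this file.
    let store = ZoneStore::open(&PathBuf::from("/tmp/zones.db"))?;
    let uuid = Uuid::new_v4();
    store
        .update(uuid, Zone { id: uuid.to_string(), ..Default::default() })
        .await?;
    let zone = store.read(uuid).await?;
    println!("stored zone id: {:?}", zone.map(|z| z.id));
    println!("total zones: {}", store.list().await?.len());
    Ok(())
}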


@ -31,7 +31,7 @@ impl DaemonDeviceManager {
let mut devices = self.devices.write().await;
let Some(state) = devices.get_mut(device) else {
return Err(anyhow!(
"unable to claim unknown device '{}' for guest {}",
"unable to claim unknown device '{}' for zone {}",
device,
uuid
));
@ -39,7 +39,7 @@ impl DaemonDeviceManager {
if let Some(owner) = state.owner {
return Err(anyhow!(
"unable to claim device '{}' for guest {}: already claimed by {}",
"unable to claim device '{}' for zone {}: already claimed by {}",
device,
uuid,
owner
@ -92,7 +92,7 @@ impl DaemonDeviceManager {
for (name, uuid) in &claims {
if !devices.contains_key(name) {
warn!("unknown device '{}' assigned to guest {}", name, uuid);
warn!("unknown device '{}' assigned to zone {}", name, uuid);
}
}


@ -4,10 +4,12 @@ use std::{
time::Duration,
};
use crate::{db::ZoneStore, idm::DaemonIdmHandle};
use anyhow::Result;
use krata::v1::common::ZoneExitInfo;
use krata::{
idm::{internal::event::Event as EventType, internal::Event},
v1::common::{GuestExitInfo, GuestState, GuestStatus},
v1::common::{ZoneState, ZoneStatus},
};
use log::{error, warn};
use tokio::{
@ -21,8 +23,6 @@ use tokio::{
};
use uuid::Uuid;
use crate::{db::GuestStore, idm::DaemonIdmHandle};
pub type DaemonEvent = krata::v1::control::watch_events_reply::Event;
const EVENT_CHANNEL_QUEUE_LEN: usize = 1000;
@ -45,8 +45,8 @@ impl DaemonEventContext {
}
pub struct DaemonEventGenerator {
guests: GuestStore,
guest_reconciler_notify: Sender<Uuid>,
zones: ZoneStore,
zone_reconciler_notify: Sender<Uuid>,
feed: broadcast::Receiver<DaemonEvent>,
idm: DaemonIdmHandle,
idms: HashMap<u32, (Uuid, JoinHandle<()>)>,
@ -57,15 +57,15 @@ pub struct DaemonEventGenerator {
impl DaemonEventGenerator {
pub async fn new(
guests: GuestStore,
guest_reconciler_notify: Sender<Uuid>,
zones: ZoneStore,
zone_reconciler_notify: Sender<Uuid>,
idm: DaemonIdmHandle,
) -> Result<(DaemonEventContext, DaemonEventGenerator)> {
let (sender, _) = broadcast::channel(EVENT_CHANNEL_QUEUE_LEN);
let (idm_sender, idm_receiver) = channel(IDM_EVENT_CHANNEL_QUEUE_LEN);
let generator = DaemonEventGenerator {
guests,
guest_reconciler_notify,
zones,
zone_reconciler_notify,
feed: sender.subscribe(),
idm,
idms: HashMap::new(),
@ -78,20 +78,20 @@ impl DaemonEventGenerator {
}
async fn handle_feed_event(&mut self, event: &DaemonEvent) -> Result<()> {
let DaemonEvent::GuestChanged(changed) = event;
let Some(ref guest) = changed.guest else {
let DaemonEvent::ZoneChanged(changed) = event;
let Some(ref zone) = changed.zone else {
return Ok(());
};
let Some(ref state) = guest.state else {
let Some(ref state) = zone.state else {
return Ok(());
};
let status = state.status();
let id = Uuid::from_str(&guest.id)?;
let id = Uuid::from_str(&zone.id)?;
let domid = state.domid;
match status {
GuestStatus::Started => {
ZoneStatus::Started => {
if let Entry::Vacant(e) = self.idms.entry(domid) {
let client = self.idm.client_by_domid(domid).await?;
let mut receiver = client.subscribe().await?;
@ -111,7 +111,7 @@ impl DaemonEventGenerator {
}
}
GuestStatus::Destroyed => {
ZoneStatus::Destroyed => {
if let Some((_, handle)) = self.idms.remove(&domid) {
handle.abort();
}
@ -130,18 +130,18 @@ impl DaemonEventGenerator {
}
async fn handle_exit_code(&mut self, id: Uuid, code: i32) -> Result<()> {
if let Some(mut guest) = self.guests.read(id).await? {
guest.state = Some(GuestState {
status: GuestStatus::Exited.into(),
network: guest.state.clone().unwrap_or_default().network,
exit_info: Some(GuestExitInfo { code }),
if let Some(mut zone) = self.zones.read(id).await? {
zone.state = Some(ZoneState {
status: ZoneStatus::Exited.into(),
network: zone.state.clone().unwrap_or_default().network,
exit_info: Some(ZoneExitInfo { code }),
error_info: None,
host: guest.state.clone().map(|x| x.host).unwrap_or_default(),
domid: guest.state.clone().map(|x| x.domid).unwrap_or(u32::MAX),
host: zone.state.clone().map(|x| x.host).unwrap_or_default(),
domid: zone.state.clone().map(|x| x.domid).unwrap_or(u32::MAX),
});
self.guests.update(id, guest).await?;
self.guest_reconciler_notify.send(id).await?;
self.zones.update(id, zone).await?;
self.zone_reconciler_notify.send(id).await?;
}
Ok(())
}
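
The generator above keeps one IDM listener task per domid: ZoneStatus::Started spawns and tracks it through the vacant-entry API, ZoneStatus::Destroyed aborts and drops it. A standalone sketch of that bookkeeping, with placeholder work inside the task:

use std::collections::{hash_map::Entry, HashMap};
use tokio::task::JoinHandle;

struct IdmTasks {
    idms: HashMap<u32, JoinHandle<()>>,
}

impl IdmTasks {
    fn on_started(&mut self, domid: u32) {
        if let Entry::Vacant(entry) = self.idms.entry(domid) {
            entry.insert(tokio::spawn(async move {
                // The real task subscribes to the zone's IDM channel and
                // forwards exit events; this is just a placeholder.
                println!("listening on domid {}", domid);
            }));
        }
    }

    fn on_destroyed(&mut self, domid: u32) {
        if let Some(handle) = self.idms.remove(&domid) {
            handle.abort();
        }
    }
}

#[tokio::main]
async fn main() {
    let mut tasks = IdmTasks { idms: HashMap::new() };
    tasks.on_started(7);
    tasks.on_destroyed(7);
}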


@ -24,14 +24,14 @@ use tokio::{
};
use uuid::Uuid;
use crate::glt::GuestLookupTable;
use crate::zlt::ZoneLookupTable;
type BackendFeedMap = Arc<Mutex<HashMap<u32, Sender<IdmTransportPacket>>>>;
type ClientMap = Arc<Mutex<HashMap<u32, IdmInternalClient>>>;
#[derive(Clone)]
pub struct DaemonIdmHandle {
glt: GuestLookupTable,
glt: ZoneLookupTable,
clients: ClientMap,
feeds: BackendFeedMap,
tx_sender: Sender<(u32, IdmTransportPacket)>,
@ -72,7 +72,7 @@ pub struct DaemonIdmSnoopPacket {
}
pub struct DaemonIdm {
glt: GuestLookupTable,
glt: ZoneLookupTable,
clients: ClientMap,
feeds: BackendFeedMap,
tx_sender: Sender<(u32, IdmTransportPacket)>,
@ -84,7 +84,7 @@ pub struct DaemonIdm {
}
impl DaemonIdm {
pub async fn new(glt: GuestLookupTable) -> Result<DaemonIdm> {
pub async fn new(glt: ZoneLookupTable) -> Result<DaemonIdm> {
let (service, tx_raw_sender, rx_receiver) =
ChannelService::new("krata-channel".to_string(), None).await?;
let (tx_sender, tx_receiver) = channel(100);
@ -136,34 +136,36 @@ impl DaemonIdm {
if let Some(data) = data {
let buffer = buffers.entry(domid).or_insert_with_key(|_| BytesMut::new());
buffer.extend_from_slice(&data);
if buffer.len() < 6 {
continue;
}
if buffer[0] != 0xff || buffer[1] != 0xff {
buffer.clear();
continue;
}
let size = (buffer[2] as u32 | (buffer[3] as u32) << 8 | (buffer[4] as u32) << 16 | (buffer[5] as u32) << 24) as usize;
let needed = size + 6;
if buffer.len() < needed {
continue;
}
let mut packet = buffer.split_to(needed);
packet.advance(6);
match IdmTransportPacket::decode(packet) {
Ok(packet) => {
let _ = client_or_create(domid, &self.tx_sender, &self.clients, &self.feeds).await?;
let guard = self.feeds.lock().await;
if let Some(feed) = guard.get(&domid) {
let _ = feed.try_send(packet.clone());
}
let _ = self.snoop_sender.send(DaemonIdmSnoopPacket { from: domid, to: 0, packet });
loop {
if buffer.len() < 6 {
break;
}
Err(packet) => {
warn!("received invalid packet from domain {}: {}", domid, packet);
if buffer[0] != 0xff || buffer[1] != 0xff {
buffer.clear();
break;
}
let size = (buffer[2] as u32 | (buffer[3] as u32) << 8 | (buffer[4] as u32) << 16 | (buffer[5] as u32) << 24) as usize;
let needed = size + 6;
if buffer.len() < needed {
break;
}
let mut packet = buffer.split_to(needed);
packet.advance(6);
match IdmTransportPacket::decode(packet) {
Ok(packet) => {
let _ = client_or_create(domid, &self.tx_sender, &self.clients, &self.feeds).await?;
let guard = self.feeds.lock().await;
if let Some(feed) = guard.get(&domid) {
let _ = feed.try_send(packet.clone());
}
let _ = self.snoop_sender.send(DaemonIdmSnoopPacket { from: domid, to: 0, packet });
}
Err(packet) => {
warn!("received invalid packet from domain {}: {}", domid, packet);
}
}
}
} else {
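
The receive path above frames packets as 0xff 0xff, a 32-bit little-endian payload length, then the payload, and the new inner loop keeps draining the buffer so that several packets arriving in one read are all decoded. A self-contained sketch of that framing loop over a bytes::BytesMut buffer:

use bytes::{Buf, BytesMut};

fn drain_frames(buffer: &mut BytesMut) -> Vec<Vec<u8>> {
    let mut frames = Vec::new();
    loop {
        if buffer.len() < 6 {
            break; // header is not complete yet
        }
        if buffer[0] != 0xff || buffer[1] != 0xff {
            buffer.clear(); // desynchronized: drop the garbage and wait
            break;
        }
        let size = u32::from_le_bytes([buffer[2], buffer[3], buffer[4], buffer[5]]) as usize;
        if buffer.len() < size + 6 {
            break; // payload is not complete yet
        }
        let mut packet = buffer.split_to(size + 6);
        packet.advance(6); // strip the magic and length header
        frames.push(packet.to_vec());
    }
    frames
}

fn main() {
    let mut buffer = BytesMut::new();
    // Two frames delivered by a single channel read.
    buffer.extend_from_slice(&[0xff, 0xff, 2, 0, 0, 0, b'h', b'i']);
    buffer.extend_from_slice(&[0xff, 0xff, 1, 0, 0, 0, b'!']);
    assert_eq!(drain_frames(&mut buffer).len(), 2);
}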


@ -4,16 +4,15 @@ use anyhow::{anyhow, Result};
use config::DaemonConfig;
use console::{DaemonConsole, DaemonConsoleHandle};
use control::DaemonControlService;
use db::GuestStore;
use db::ZoneStore;
use devices::DaemonDeviceManager;
use event::{DaemonEventContext, DaemonEventGenerator};
use glt::GuestLookupTable;
use idm::{DaemonIdm, DaemonIdmHandle};
use krata::{dial::ControlDialAddress, v1::control::control_service_server::ControlServiceServer};
use krataoci::{packer::service::OciPackerService, registry::OciPlatform};
use kratart::Runtime;
use log::info;
use reconcile::guest::GuestReconciler;
use reconcile::zone::ZoneReconciler;
use tokio::{
fs,
net::UnixListener,
@ -23,6 +22,7 @@ use tokio::{
use tokio_stream::wrappers::UnixListenerStream;
use tonic::transport::{Identity, Server, ServerTlsConfig};
use uuid::Uuid;
use zlt::ZoneLookupTable;
pub mod command;
pub mod config;
@ -31,21 +31,21 @@ pub mod control;
pub mod db;
pub mod devices;
pub mod event;
pub mod glt;
pub mod idm;
pub mod metrics;
pub mod oci;
pub mod reconcile;
pub mod zlt;
pub struct Daemon {
store: String,
_config: Arc<DaemonConfig>,
glt: GuestLookupTable,
glt: ZoneLookupTable,
devices: DaemonDeviceManager,
guests: GuestStore,
zones: ZoneStore,
events: DaemonEventContext,
guest_reconciler_task: JoinHandle<()>,
guest_reconciler_notify: Sender<Uuid>,
zone_reconciler_task: JoinHandle<()>,
zone_reconciler_notify: Sender<Uuid>,
generator_task: JoinHandle<()>,
idm: DaemonIdmHandle,
console: DaemonConsoleHandle,
@ -53,7 +53,7 @@ pub struct Daemon {
runtime: Runtime,
}
const GUEST_RECONCILER_QUEUE_LEN: usize = 1000;
const ZONE_RECONCILER_QUEUE_LEN: usize = 1000;
impl Daemon {
pub async fn new(store: String) -> Result<Self> {
@ -89,40 +89,40 @@ impl Daemon {
generated
};
let initrd_path = detect_guest_path(&store, "initrd")?;
let kernel_path = detect_guest_path(&store, "kernel")?;
let addons_path = detect_guest_path(&store, "addons.squashfs")?;
let initrd_path = detect_zone_path(&store, "initrd")?;
let kernel_path = detect_zone_path(&store, "kernel")?;
let addons_path = detect_zone_path(&store, "addons.squashfs")?;
let seed = config.oci.seed.clone().map(PathBuf::from);
let packer = OciPackerService::new(seed, &image_cache_dir, OciPlatform::current()).await?;
let runtime = Runtime::new(host_uuid).await?;
let glt = GuestLookupTable::new(0, host_uuid);
let guests_db_path = format!("{}/guests.db", store);
let guests = GuestStore::open(&PathBuf::from(guests_db_path))?;
let (guest_reconciler_notify, guest_reconciler_receiver) =
channel::<Uuid>(GUEST_RECONCILER_QUEUE_LEN);
let glt = ZoneLookupTable::new(0, host_uuid);
let zones_db_path = format!("{}/zones.db", store);
let zones = ZoneStore::open(&PathBuf::from(zones_db_path))?;
let (zone_reconciler_notify, zone_reconciler_receiver) =
channel::<Uuid>(ZONE_RECONCILER_QUEUE_LEN);
let idm = DaemonIdm::new(glt.clone()).await?;
let idm = idm.launch().await?;
let console = DaemonConsole::new(glt.clone()).await?;
let console = console.launch().await?;
let (events, generator) =
DaemonEventGenerator::new(guests.clone(), guest_reconciler_notify.clone(), idm.clone())
DaemonEventGenerator::new(zones.clone(), zone_reconciler_notify.clone(), idm.clone())
.await?;
let runtime_for_reconciler = runtime.dupe().await?;
let guest_reconciler = GuestReconciler::new(
let zone_reconciler = ZoneReconciler::new(
devices.clone(),
glt.clone(),
guests.clone(),
zones.clone(),
events.clone(),
runtime_for_reconciler,
packer.clone(),
guest_reconciler_notify.clone(),
zone_reconciler_notify.clone(),
kernel_path,
initrd_path,
addons_path,
)?;
let guest_reconciler_task = guest_reconciler.launch(guest_reconciler_receiver).await?;
let zone_reconciler_task = zone_reconciler.launch(zone_reconciler_receiver).await?;
let generator_task = generator.launch().await?;
// TODO: Create a way of abstracting early init tasks in kratad.
@ -139,10 +139,10 @@ impl Daemon {
_config: config,
glt,
devices,
guests,
zones,
events,
guest_reconciler_task,
guest_reconciler_notify,
zone_reconciler_task,
zone_reconciler_notify,
generator_task,
idm,
console,
@ -158,8 +158,8 @@ impl Daemon {
self.events.clone(),
self.console.clone(),
self.idm.clone(),
self.guests.clone(),
self.guest_reconciler_notify.clone(),
self.zones.clone(),
self.zone_reconciler_notify.clone(),
self.packer.clone(),
self.runtime.clone(),
);
@ -214,20 +214,20 @@ impl Daemon {
impl Drop for Daemon {
fn drop(&mut self) {
self.guest_reconciler_task.abort();
self.zone_reconciler_task.abort();
self.generator_task.abort();
}
}
fn detect_guest_path(store: &str, name: &str) -> Result<PathBuf> {
let mut path = PathBuf::from(format!("{}/guest/{}", store, name));
fn detect_zone_path(store: &str, name: &str) -> Result<PathBuf> {
let mut path = PathBuf::from(format!("{}/zone/{}", store, name));
if path.is_file() {
return Ok(path);
}
path = PathBuf::from(format!("/usr/share/krata/guest/{}", name));
path = PathBuf::from(format!("/usr/share/krata/zone/{}", name));
if path.is_file() {
return Ok(path);
}
Err(anyhow!("unable to find required guest file: {}", name))
Err(anyhow!("unable to find required zone file: {}", name))
}


@ -1,20 +1,20 @@
use krata::{
idm::internal::{MetricFormat, MetricNode},
v1::common::{GuestMetricFormat, GuestMetricNode},
v1::common::{ZoneMetricFormat, ZoneMetricNode},
};
fn idm_metric_format_to_api(format: MetricFormat) -> GuestMetricFormat {
fn idm_metric_format_to_api(format: MetricFormat) -> ZoneMetricFormat {
match format {
MetricFormat::Unknown => GuestMetricFormat::Unknown,
MetricFormat::Bytes => GuestMetricFormat::Bytes,
MetricFormat::Integer => GuestMetricFormat::Integer,
MetricFormat::DurationSeconds => GuestMetricFormat::DurationSeconds,
MetricFormat::Unknown => ZoneMetricFormat::Unknown,
MetricFormat::Bytes => ZoneMetricFormat::Bytes,
MetricFormat::Integer => ZoneMetricFormat::Integer,
MetricFormat::DurationSeconds => ZoneMetricFormat::DurationSeconds,
}
}
pub fn idm_metric_to_api(node: MetricNode) -> GuestMetricNode {
pub fn idm_metric_to_api(node: MetricNode) -> ZoneMetricNode {
let format = node.format();
GuestMetricNode {
ZoneMetricNode {
name: node.name,
value: node.value,
format: idm_metric_format_to_api(format).into(),

View File

@ -1 +1 @@
pub mod guest;
pub mod zone;


@ -7,11 +7,11 @@ use std::{
use anyhow::Result;
use krata::v1::{
common::{Guest, GuestErrorInfo, GuestExitInfo, GuestNetworkState, GuestState, GuestStatus},
control::GuestChangedEvent,
common::{Zone, ZoneErrorInfo, ZoneExitInfo, ZoneNetworkState, ZoneState, ZoneStatus},
control::ZoneChangedEvent,
};
use krataoci::packer::service::OciPackerService;
use kratart::{GuestInfo, Runtime};
use kratart::{Runtime, ZoneInfo};
use log::{error, info, trace, warn};
use tokio::{
select,
@ -25,69 +25,69 @@ use tokio::{
use uuid::Uuid;
use crate::{
db::GuestStore,
db::ZoneStore,
devices::DaemonDeviceManager,
event::{DaemonEvent, DaemonEventContext},
glt::GuestLookupTable,
zlt::ZoneLookupTable,
};
use self::start::GuestStarter;
use self::start::ZoneStarter;
mod start;
const PARALLEL_LIMIT: u32 = 5;
#[derive(Debug)]
enum GuestReconcilerResult {
enum ZoneReconcilerResult {
Unchanged,
Changed { rerun: bool },
}
struct GuestReconcilerEntry {
struct ZoneReconcilerEntry {
task: JoinHandle<()>,
sender: Sender<()>,
}
impl Drop for GuestReconcilerEntry {
impl Drop for ZoneReconcilerEntry {
fn drop(&mut self) {
self.task.abort();
}
}
#[derive(Clone)]
pub struct GuestReconciler {
pub struct ZoneReconciler {
devices: DaemonDeviceManager,
glt: GuestLookupTable,
guests: GuestStore,
zlt: ZoneLookupTable,
zones: ZoneStore,
events: DaemonEventContext,
runtime: Runtime,
packer: OciPackerService,
kernel_path: PathBuf,
initrd_path: PathBuf,
addons_path: PathBuf,
tasks: Arc<Mutex<HashMap<Uuid, GuestReconcilerEntry>>>,
guest_reconciler_notify: Sender<Uuid>,
reconcile_lock: Arc<RwLock<()>>,
tasks: Arc<Mutex<HashMap<Uuid, ZoneReconcilerEntry>>>,
zone_reconciler_notify: Sender<Uuid>,
zone_reconcile_lock: Arc<RwLock<()>>,
}
impl GuestReconciler {
impl ZoneReconciler {
#[allow(clippy::too_many_arguments)]
pub fn new(
devices: DaemonDeviceManager,
glt: GuestLookupTable,
guests: GuestStore,
zlt: ZoneLookupTable,
zones: ZoneStore,
events: DaemonEventContext,
runtime: Runtime,
packer: OciPackerService,
guest_reconciler_notify: Sender<Uuid>,
zone_reconciler_notify: Sender<Uuid>,
kernel_path: PathBuf,
initrd_path: PathBuf,
modules_path: PathBuf,
) -> Result<Self> {
Ok(Self {
devices,
glt,
guests,
zlt,
zones,
events,
runtime,
packer,
@ -95,8 +95,8 @@ impl GuestReconciler {
initrd_path,
addons_path: modules_path,
tasks: Arc::new(Mutex::new(HashMap::new())),
guest_reconciler_notify,
reconcile_lock: Arc::new(RwLock::with_max_readers((), PARALLEL_LIMIT)),
zone_reconciler_notify,
zone_reconcile_lock: Arc::new(RwLock::with_max_readers((), PARALLEL_LIMIT)),
})
}
@ -115,13 +115,13 @@ impl GuestReconciler {
Some(uuid) => {
if let Err(error) = self.launch_task_if_needed(uuid).await {
error!("failed to start guest reconciler task {}: {}", uuid, error);
error!("failed to start zone reconciler task {}: {}", uuid, error);
}
let map = self.tasks.lock().await;
if let Some(entry) = map.get(&uuid) {
if let Err(error) = entry.sender.send(()).await {
error!("failed to notify guest reconciler task {}: {}", uuid, error);
error!("failed to notify zone reconciler task {}: {}", uuid, error);
}
}
}
@ -138,52 +138,52 @@ impl GuestReconciler {
}
pub async fn reconcile_runtime(&self, initial: bool) -> Result<()> {
let _permit = self.reconcile_lock.write().await;
let _permit = self.zone_reconcile_lock.write().await;
trace!("reconciling runtime");
let runtime_guests = self.runtime.list().await?;
let stored_guests = self.guests.list().await?;
let runtime_zones = self.runtime.list().await?;
let stored_zones = self.zones.list().await?;
let non_existent_guests = runtime_guests
let non_existent_zones = runtime_zones
.iter()
.filter(|x| !stored_guests.iter().any(|g| *g.0 == x.uuid))
.filter(|x| !stored_zones.iter().any(|g| *g.0 == x.uuid))
.collect::<Vec<_>>();
for guest in non_existent_guests {
warn!("destroying unknown runtime guest {}", guest.uuid);
if let Err(error) = self.runtime.destroy(guest.uuid).await {
for zone in non_existent_zones {
warn!("destroying unknown runtime zone {}", zone.uuid);
if let Err(error) = self.runtime.destroy(zone.uuid).await {
error!(
"failed to destroy unknown runtime guest {}: {}",
guest.uuid, error
"failed to destroy unknown runtime zone {}: {}",
zone.uuid, error
);
}
self.guests.remove(guest.uuid).await?;
self.zones.remove(zone.uuid).await?;
}
let mut device_claims = HashMap::new();
for (uuid, mut stored_guest) in stored_guests {
let previous_guest = stored_guest.clone();
let runtime_guest = runtime_guests.iter().find(|x| x.uuid == uuid);
match runtime_guest {
for (uuid, mut stored_zone) in stored_zones {
let previous_zone = stored_zone.clone();
let runtime_zone = runtime_zones.iter().find(|x| x.uuid == uuid);
match runtime_zone {
None => {
let mut state = stored_guest.state.as_mut().cloned().unwrap_or_default();
if state.status() == GuestStatus::Started {
state.status = GuestStatus::Starting.into();
let mut state = stored_zone.state.as_mut().cloned().unwrap_or_default();
if state.status() == ZoneStatus::Started {
state.status = ZoneStatus::Starting.into();
}
stored_guest.state = Some(state);
stored_zone.state = Some(state);
}
Some(runtime) => {
self.glt.associate(uuid, runtime.domid).await;
let mut state = stored_guest.state.as_mut().cloned().unwrap_or_default();
self.zlt.associate(uuid, runtime.domid).await;
let mut state = stored_zone.state.as_mut().cloned().unwrap_or_default();
if let Some(code) = runtime.state.exit_code {
state.status = GuestStatus::Exited.into();
state.exit_info = Some(GuestExitInfo { code });
state.status = ZoneStatus::Exited.into();
state.exit_info = Some(ZoneExitInfo { code });
} else {
state.status = GuestStatus::Started.into();
state.status = ZoneStatus::Started.into();
}
for device in &stored_guest
for device in &stored_zone
.spec
.as_ref()
.cloned()
@ -193,16 +193,16 @@ impl GuestReconciler {
device_claims.insert(device.name.clone(), uuid);
}
state.network = Some(guestinfo_to_networkstate(runtime));
stored_guest.state = Some(state);
state.network = Some(zoneinfo_to_networkstate(runtime));
stored_zone.state = Some(state);
}
}
let changed = stored_guest != previous_guest;
let changed = stored_zone != previous_zone;
if changed || initial {
self.guests.update(uuid, stored_guest).await?;
let _ = self.guest_reconciler_notify.try_send(uuid);
self.zones.update(uuid, stored_zone).await?;
let _ = self.zone_reconciler_notify.try_send(uuid);
}
}
@ -212,59 +212,59 @@ impl GuestReconciler {
}
pub async fn reconcile(&self, uuid: Uuid) -> Result<bool> {
let _runtime_reconcile_permit = self.reconcile_lock.read().await;
let Some(mut guest) = self.guests.read(uuid).await? else {
let _runtime_reconcile_permit = self.zone_reconcile_lock.read().await;
let Some(mut zone) = self.zones.read(uuid).await? else {
warn!(
"notified of reconcile for guest {} but it didn't exist",
"notified of reconcile for zone {} but it didn't exist",
uuid
);
return Ok(false);
};
info!("reconciling guest {}", uuid);
info!("reconciling zone {}", uuid);
self.events
.send(DaemonEvent::GuestChanged(GuestChangedEvent {
guest: Some(guest.clone()),
.send(DaemonEvent::ZoneChanged(ZoneChangedEvent {
zone: Some(zone.clone()),
}))?;
let start_status = guest.state.as_ref().map(|x| x.status()).unwrap_or_default();
let start_status = zone.state.as_ref().map(|x| x.status()).unwrap_or_default();
let result = match start_status {
GuestStatus::Starting => self.start(uuid, &mut guest).await,
GuestStatus::Exited => self.exited(&mut guest).await,
GuestStatus::Destroying => self.destroy(uuid, &mut guest).await,
_ => Ok(GuestReconcilerResult::Unchanged),
ZoneStatus::Starting => self.start(uuid, &mut zone).await,
ZoneStatus::Exited => self.exited(&mut zone).await,
ZoneStatus::Destroying => self.destroy(uuid, &mut zone).await,
_ => Ok(ZoneReconcilerResult::Unchanged),
};
let result = match result {
Ok(result) => result,
Err(error) => {
guest.state = Some(guest.state.as_mut().cloned().unwrap_or_default());
guest.state.as_mut().unwrap().status = GuestStatus::Failed.into();
guest.state.as_mut().unwrap().error_info = Some(GuestErrorInfo {
zone.state = Some(zone.state.as_mut().cloned().unwrap_or_default());
zone.state.as_mut().unwrap().status = ZoneStatus::Failed.into();
zone.state.as_mut().unwrap().error_info = Some(ZoneErrorInfo {
message: error.to_string(),
});
warn!("failed to start guest {}: {}", guest.id, error);
GuestReconcilerResult::Changed { rerun: false }
warn!("failed to start zone {}: {}", zone.id, error);
ZoneReconcilerResult::Changed { rerun: false }
}
};
info!("reconciled guest {}", uuid);
info!("reconciled zone {}", uuid);
let status = guest.state.as_ref().map(|x| x.status()).unwrap_or_default();
let destroyed = status == GuestStatus::Destroyed;
let status = zone.state.as_ref().map(|x| x.status()).unwrap_or_default();
let destroyed = status == ZoneStatus::Destroyed;
let rerun = if let GuestReconcilerResult::Changed { rerun } = result {
let event = DaemonEvent::GuestChanged(GuestChangedEvent {
guest: Some(guest.clone()),
let rerun = if let ZoneReconcilerResult::Changed { rerun } = result {
let event = DaemonEvent::ZoneChanged(ZoneChangedEvent {
zone: Some(zone.clone()),
});
if destroyed {
self.guests.remove(uuid).await?;
self.zones.remove(uuid).await?;
let mut map = self.tasks.lock().await;
map.remove(&uuid);
} else {
self.guests.update(uuid, guest.clone()).await?;
self.zones.update(uuid, zone.clone()).await?;
}
self.events.send(event)?;
@ -276,50 +276,50 @@ impl GuestReconciler {
Ok(rerun)
}
async fn start(&self, uuid: Uuid, guest: &mut Guest) -> Result<GuestReconcilerResult> {
let starter = GuestStarter {
async fn start(&self, uuid: Uuid, zone: &mut Zone) -> Result<ZoneReconcilerResult> {
let starter = ZoneStarter {
devices: &self.devices,
kernel_path: &self.kernel_path,
initrd_path: &self.initrd_path,
addons_path: &self.addons_path,
packer: &self.packer,
glt: &self.glt,
glt: &self.zlt,
runtime: &self.runtime,
};
starter.start(uuid, guest).await
starter.start(uuid, zone).await
}
async fn exited(&self, guest: &mut Guest) -> Result<GuestReconcilerResult> {
if let Some(ref mut state) = guest.state {
state.set_status(GuestStatus::Destroying);
Ok(GuestReconcilerResult::Changed { rerun: true })
async fn exited(&self, zone: &mut Zone) -> Result<ZoneReconcilerResult> {
if let Some(ref mut state) = zone.state {
state.set_status(ZoneStatus::Destroying);
Ok(ZoneReconcilerResult::Changed { rerun: true })
} else {
Ok(GuestReconcilerResult::Unchanged)
Ok(ZoneReconcilerResult::Unchanged)
}
}
async fn destroy(&self, uuid: Uuid, guest: &mut Guest) -> Result<GuestReconcilerResult> {
async fn destroy(&self, uuid: Uuid, zone: &mut Zone) -> Result<ZoneReconcilerResult> {
if let Err(error) = self.runtime.destroy(uuid).await {
trace!("failed to destroy runtime guest {}: {}", uuid, error);
trace!("failed to destroy runtime zone {}: {}", uuid, error);
}
let domid = guest.state.as_ref().map(|x| x.domid);
let domid = zone.state.as_ref().map(|x| x.domid);
if let Some(domid) = domid {
self.glt.remove(uuid, domid).await;
self.zlt.remove(uuid, domid).await;
}
info!("destroyed guest {}", uuid);
guest.state = Some(GuestState {
status: GuestStatus::Destroyed.into(),
info!("destroyed zone {}", uuid);
zone.state = Some(ZoneState {
status: ZoneStatus::Destroyed.into(),
network: None,
exit_info: None,
error_info: None,
host: self.glt.host_uuid().to_string(),
host: self.zlt.host_uuid().to_string(),
domid: domid.unwrap_or(u32::MAX),
});
self.devices.release_all(uuid).await?;
Ok(GuestReconcilerResult::Changed { rerun: false })
Ok(ZoneReconcilerResult::Changed { rerun: false })
}
async fn launch_task_if_needed(&self, uuid: Uuid) -> Result<()> {
@ -333,7 +333,7 @@ impl GuestReconciler {
Ok(())
}
async fn launch_task(&self, uuid: Uuid) -> Result<GuestReconcilerEntry> {
async fn launch_task(&self, uuid: Uuid) -> Result<ZoneReconcilerEntry> {
let this = self.clone();
let (sender, mut receiver) = channel(10);
let task = tokio::task::spawn(async move {
@ -346,7 +346,7 @@ impl GuestReconciler {
let rerun = match this.reconcile(uuid).await {
Ok(rerun) => rerun,
Err(error) => {
error!("failed to reconcile guest {}: {}", uuid, error);
error!("failed to reconcile zone {}: {}", uuid, error);
false
}
};
@ -358,15 +358,15 @@ impl GuestReconciler {
}
}
});
Ok(GuestReconcilerEntry { task, sender })
Ok(ZoneReconcilerEntry { task, sender })
}
}
pub fn guestinfo_to_networkstate(info: &GuestInfo) -> GuestNetworkState {
GuestNetworkState {
guest_ipv4: info.guest_ipv4.map(|x| x.to_string()).unwrap_or_default(),
guest_ipv6: info.guest_ipv6.map(|x| x.to_string()).unwrap_or_default(),
guest_mac: info.guest_mac.as_ref().cloned().unwrap_or_default(),
pub fn zoneinfo_to_networkstate(info: &ZoneInfo) -> ZoneNetworkState {
ZoneNetworkState {
zone_ipv4: info.zone_ipv4.map(|x| x.to_string()).unwrap_or_default(),
zone_ipv6: info.zone_ipv6.map(|x| x.to_string()).unwrap_or_default(),
zone_mac: info.zone_mac.as_ref().cloned().unwrap_or_default(),
gateway_ipv4: info.gateway_ipv4.map(|x| x.to_string()).unwrap_or_default(),
gateway_ipv6: info.gateway_ipv6.map(|x| x.to_string()).unwrap_or_default(),
gateway_mac: info.gateway_mac.as_ref().cloned().unwrap_or_default(),
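
Each zone gets its own reconciler task above: it waits for a notification, runs one pass, and immediately reruns while the pass reports more work to do. A rough sketch of that worker shape with a stubbed reconcile body:

use tokio::sync::mpsc;

// Stand-in for ZoneReconciler::reconcile: returns true when another pass
// should run right away.
async fn reconcile_once(passes_left: &mut u32) -> bool {
    *passes_left = passes_left.saturating_sub(1);
    *passes_left > 0
}

#[tokio::main]
async fn main() {
    let (sender, mut receiver) = mpsc::channel::<()>(10);
    let worker = tokio::spawn(async move {
        let mut passes_left = 3;
        while receiver.recv().await.is_some() {
            // Rerun immediately while the previous pass requested it.
            while reconcile_once(&mut passes_left).await {}
        }
    });
    sender.send(()).await.unwrap();
    drop(sender); // no more notifications: the worker loop exits
    worker.await.unwrap();
}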


@ -6,40 +6,40 @@ use std::sync::atomic::{AtomicBool, Ordering};
use anyhow::{anyhow, Result};
use futures::StreamExt;
use krata::launchcfg::LaunchPackedFormat;
use krata::v1::common::GuestOciImageSpec;
use krata::v1::common::{guest_image_spec::Image, Guest, GuestState, GuestStatus, OciImageFormat};
use krata::v1::common::ZoneOciImageSpec;
use krata::v1::common::{OciImageFormat, Zone, ZoneState, ZoneStatus};
use krataoci::packer::{service::OciPackerService, OciPackedFormat};
use kratart::launch::{PciBdf, PciDevice, PciRdmReservePolicy};
use kratart::{launch::GuestLaunchRequest, Runtime};
use kratart::{launch::ZoneLaunchRequest, Runtime};
use log::info;
use crate::config::DaemonPciDeviceRdmReservePolicy;
use crate::devices::DaemonDeviceManager;
use crate::{
reconcile::zone::{zoneinfo_to_networkstate, ZoneReconcilerResult},
zlt::ZoneLookupTable,
};
use krata::v1::common::zone_image_spec::Image;
use tokio::fs::{self, File};
use tokio::io::AsyncReadExt;
use tokio_tar::Archive;
use uuid::Uuid;
use crate::config::DaemonPciDeviceRdmReservePolicy;
use crate::devices::DaemonDeviceManager;
use crate::{
glt::GuestLookupTable,
reconcile::guest::{guestinfo_to_networkstate, GuestReconcilerResult},
};
pub struct GuestStarter<'a> {
pub struct ZoneStarter<'a> {
pub devices: &'a DaemonDeviceManager,
pub kernel_path: &'a Path,
pub initrd_path: &'a Path,
pub addons_path: &'a Path,
pub packer: &'a OciPackerService,
pub glt: &'a GuestLookupTable,
pub glt: &'a ZoneLookupTable,
pub runtime: &'a Runtime,
}
impl GuestStarter<'_> {
impl ZoneStarter<'_> {
pub async fn oci_spec_tar_read_file(
&self,
file: &Path,
oci: &GuestOciImageSpec,
oci: &ZoneOciImageSpec,
) -> Result<Vec<u8>> {
if oci.format() != OciImageFormat::Tar {
return Err(anyhow!(
@ -75,9 +75,9 @@ impl GuestStarter<'_> {
))
}
pub async fn start(&self, uuid: Uuid, guest: &mut Guest) -> Result<GuestReconcilerResult> {
let Some(ref spec) = guest.spec else {
return Err(anyhow!("guest spec not specified"));
pub async fn start(&self, uuid: Uuid, zone: &mut Zone) -> Result<ZoneReconcilerResult> {
let Some(ref spec) = zone.spec else {
return Err(anyhow!("zone spec not specified"));
};
let Some(ref image) = spec.image else {
@ -100,7 +100,7 @@ impl GuestStarter<'_> {
OciImageFormat::Squashfs => OciPackedFormat::Squashfs,
OciImageFormat::Erofs => OciPackedFormat::Erofs,
OciImageFormat::Tar => {
return Err(anyhow!("tar image format is not supported for guests"));
return Err(anyhow!("tar image format is not supported for zones"));
}
},
)
@ -176,7 +176,7 @@ impl GuestStarter<'_> {
let info = self
.runtime
.launch(GuestLaunchRequest {
.launch(ZoneLaunchRequest {
format: LaunchPackedFormat::Squashfs,
uuid: Some(uuid),
name: if spec.name.is_empty() {
@ -201,17 +201,17 @@ impl GuestStarter<'_> {
})
.await?;
self.glt.associate(uuid, info.domid).await;
info!("started guest {}", uuid);
guest.state = Some(GuestState {
status: GuestStatus::Started.into(),
network: Some(guestinfo_to_networkstate(&info)),
info!("started zone {}", uuid);
zone.state = Some(ZoneState {
status: ZoneStatus::Started.into(),
network: Some(zoneinfo_to_networkstate(&info)),
exit_info: None,
error_info: None,
host: self.glt.host_uuid().to_string(),
domid: info.domid,
});
success.store(true, Ordering::Release);
Ok(GuestReconcilerResult::Changed { rerun: false })
Ok(ZoneReconcilerResult::Changed { rerun: false })
}
}


@ -3,18 +3,18 @@ use std::{collections::HashMap, sync::Arc};
use tokio::sync::RwLock;
use uuid::Uuid;
struct GuestLookupTableState {
struct ZoneLookupTableState {
domid_to_uuid: HashMap<u32, Uuid>,
uuid_to_domid: HashMap<Uuid, u32>,
}
impl GuestLookupTableState {
impl ZoneLookupTableState {
pub fn new(host_uuid: Uuid) -> Self {
let mut domid_to_uuid = HashMap::new();
let mut uuid_to_domid = HashMap::new();
domid_to_uuid.insert(0, host_uuid);
uuid_to_domid.insert(host_uuid, 0);
GuestLookupTableState {
ZoneLookupTableState {
domid_to_uuid,
uuid_to_domid,
}
@ -22,18 +22,18 @@ impl GuestLookupTableState {
}
#[derive(Clone)]
pub struct GuestLookupTable {
pub struct ZoneLookupTable {
host_domid: u32,
host_uuid: Uuid,
state: Arc<RwLock<GuestLookupTableState>>,
state: Arc<RwLock<ZoneLookupTableState>>,
}
impl GuestLookupTable {
impl ZoneLookupTable {
pub fn new(host_domid: u32, host_uuid: Uuid) -> Self {
GuestLookupTable {
ZoneLookupTable {
host_domid,
host_uuid,
state: Arc::new(RwLock::new(GuestLookupTableState::new(host_uuid))),
state: Arc::new(RwLock::new(ZoneLookupTableState::new(host_uuid))),
}
}
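
associate and remove are called on this table elsewhere in the changeset, but their bodies are not part of this diff; a plausible sketch is below, simply keeping the two HashMaps in sync under the RwLock (the method names beyond associate/remove and all bodies are assumptions, not krata code).

use std::collections::HashMap;
use std::sync::Arc;
use tokio::sync::RwLock;
use uuid::Uuid;

#[derive(Clone, Default)]
struct LookupTable {
    // (domid -> uuid, uuid -> domid), mirroring ZoneLookupTableState.
    state: Arc<RwLock<(HashMap<u32, Uuid>, HashMap<Uuid, u32>)>>,
}

impl LookupTable {
    async fn associate(&self, uuid: Uuid, domid: u32) {
        let mut state = self.state.write().await;
        state.0.insert(domid, uuid);
        state.1.insert(uuid, domid);
    }

    async fn remove(&self, uuid: Uuid, domid: u32) {
        let mut state = self.state.write().await;
        state.0.remove(&domid);
        state.1.remove(&uuid);
    }

    async fn lookup_uuid(&self, domid: u32) -> Option<Uuid> {
        self.state.read().await.0.get(&domid).copied()
    }
}

#[tokio::main]
async fn main() {
    let table = LookupTable::default();
    let uuid = Uuid::new_v4();
    table.associate(uuid, 5).await;
    assert_eq!(table.lookup_uuid(5).await, Some(uuid));
    table.remove(uuid, 5).await;
}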


@ -1,30 +0,0 @@
use anyhow::{anyhow, Result};
use env_logger::Env;
use krataguest::{death, init::GuestInit};
use log::error;
use std::env;
#[tokio::main]
async fn main() -> Result<()> {
env::set_var("RUST_BACKTRACE", "1");
env_logger::Builder::from_env(Env::default().default_filter_or("warn")).init();
if env::var("KRATA_UNSAFE_ALWAYS_ALLOW_INIT").unwrap_or("0".to_string()) != "1" {
let pid = std::process::id();
if pid > 3 {
return Err(anyhow!(
"not running because the pid of {} indicates this is probably not \
the right context for the init daemon. \
run with KRATA_UNSAFE_ALWAYS_ALLOW_INIT=1 to bypass this check",
pid
));
}
}
let mut guest = GuestInit::new();
if let Err(error) = guest.init().await {
error!("failed to initialize guest: {}", error);
death(127).await?;
return Ok(());
}
death(1).await?;
Ok(())
}


@ -15,6 +15,7 @@ bytes = { workspace = true }
libc = { workspace = true }
log = { workspace = true }
once_cell = { workspace = true }
pin-project-lite = { workspace = true }
prost = { workspace = true }
prost-reflect = { workspace = true }
prost-types = { workspace = true }
@ -27,6 +28,8 @@ tower = { workspace = true }
url = { workspace = true }
[target.'cfg(unix)'.dependencies]
hyper = { workspace = true }
hyper-util = { workspace = true }
nix = { workspace = true, features = ["term"] }
[build-dependencies]


@ -8,29 +8,29 @@ option java_outer_classname = "CommonProto";
import "google/protobuf/struct.proto";
message Guest {
message Zone {
string id = 1;
GuestSpec spec = 2;
GuestState state = 3;
ZoneSpec spec = 2;
ZoneState state = 3;
}
message GuestSpec {
message ZoneSpec {
string name = 1;
GuestImageSpec image = 2;
ZoneImageSpec image = 2;
// If not specified, defaults to the daemon default kernel.
GuestImageSpec kernel = 3;
ZoneImageSpec kernel = 3;
// If not specified, defaults to the daemon default initrd.
GuestImageSpec initrd = 4;
ZoneImageSpec initrd = 4;
uint32 vcpus = 5;
uint64 mem = 6;
GuestTaskSpec task = 7;
repeated GuestSpecAnnotation annotations = 8;
repeated GuestSpecDevice devices = 9;
ZoneTaskSpec task = 7;
repeated ZoneSpecAnnotation annotations = 8;
repeated ZoneSpecDevice devices = 9;
}
message GuestImageSpec {
message ZoneImageSpec {
oneof image {
GuestOciImageSpec oci = 1;
ZoneOciImageSpec oci = 1;
}
}
@ -42,77 +42,77 @@ enum OciImageFormat {
OCI_IMAGE_FORMAT_TAR = 3;
}
message GuestOciImageSpec {
message ZoneOciImageSpec {
string digest = 1;
OciImageFormat format = 2;
}
message GuestTaskSpec {
repeated GuestTaskSpecEnvVar environment = 1;
message ZoneTaskSpec {
repeated ZoneTaskSpecEnvVar environment = 1;
repeated string command = 2;
string working_directory = 3;
}
message GuestTaskSpecEnvVar {
message ZoneTaskSpecEnvVar {
string key = 1;
string value = 2;
}
message GuestSpecAnnotation {
message ZoneSpecAnnotation {
string key = 1;
string value = 2;
}
message GuestSpecDevice {
message ZoneSpecDevice {
string name = 1;
}
message GuestState {
GuestStatus status = 1;
GuestNetworkState network = 2;
GuestExitInfo exit_info = 3;
GuestErrorInfo error_info = 4;
message ZoneState {
ZoneStatus status = 1;
ZoneNetworkState network = 2;
ZoneExitInfo exit_info = 3;
ZoneErrorInfo error_info = 4;
string host = 5;
uint32 domid = 6;
}
enum GuestStatus {
GUEST_STATUS_UNKNOWN = 0;
GUEST_STATUS_STARTING = 1;
GUEST_STATUS_STARTED = 2;
GUEST_STATUS_EXITED = 3;
GUEST_STATUS_DESTROYING = 4;
GUEST_STATUS_DESTROYED = 5;
GUEST_STATUS_FAILED = 6;
enum ZoneStatus {
ZONE_STATUS_UNKNOWN = 0;
ZONE_STATUS_STARTING = 1;
ZONE_STATUS_STARTED = 2;
ZONE_STATUS_EXITED = 3;
ZONE_STATUS_DESTROYING = 4;
ZONE_STATUS_DESTROYED = 5;
ZONE_STATUS_FAILED = 6;
}
message GuestNetworkState {
string guest_ipv4 = 1;
string guest_ipv6 = 2;
string guest_mac = 3;
message ZoneNetworkState {
string zone_ipv4 = 1;
string zone_ipv6 = 2;
string zone_mac = 3;
string gateway_ipv4 = 4;
string gateway_ipv6 = 5;
string gateway_mac = 6;
}
message GuestExitInfo {
message ZoneExitInfo {
int32 code = 1;
}
message GuestErrorInfo {
message ZoneErrorInfo {
string message = 1;
}
message GuestMetricNode {
message ZoneMetricNode {
string name = 1;
google.protobuf.Value value = 2;
GuestMetricFormat format = 3;
repeated GuestMetricNode children = 4;
ZoneMetricFormat format = 3;
repeated ZoneMetricNode children = 4;
}
enum GuestMetricFormat {
GUEST_METRIC_FORMAT_UNKNOWN = 0;
GUEST_METRIC_FORMAT_BYTES = 1;
GUEST_METRIC_FORMAT_INTEGER = 2;
GUEST_METRIC_FORMAT_DURATION_SECONDS = 3;
enum ZoneMetricFormat {
ZONE_METRIC_FORMAT_UNKNOWN = 0;
ZONE_METRIC_FORMAT_BYTES = 1;
ZONE_METRIC_FORMAT_INTEGER = 2;
ZONE_METRIC_FORMAT_DURATION_SECONDS = 3;
}

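The renames above carry through to the prost-generated Rust types used later in this diff. As a hedged sketch only, building one of the renamed specs from Rust might look like this; the krata::v1::common module path comes from imports visible further down, while the zone_image_spec oneof module and the field layout are assumptions based on prost's default codegen, and the digest is a hypothetical placeholder.

use krata::v1::common::{
    zone_image_spec::Image, OciImageFormat, ZoneImageSpec, ZoneOciImageSpec, ZoneSpec,
};

// Sketch: a minimal ZoneSpec with an OCI image reference. Everything not set
// here (kernel, initrd, task, annotations, devices) falls back to Default.
fn example_zone_spec() -> ZoneSpec {
    ZoneSpec {
        name: "example".to_string(),
        image: Some(ZoneImageSpec {
            image: Some(Image::Oci(ZoneOciImageSpec {
                digest: "sha256:0123abcd".to_string(), // hypothetical digest
                format: OciImageFormat::Squashfs as i32,
            })),
        }),
        vcpus: 1,
        mem: 512,
        ..Default::default()
    }
}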

@ -12,17 +12,17 @@ import "krata/v1/common.proto";
service ControlService {
rpc IdentifyHost(IdentifyHostRequest) returns (IdentifyHostReply);
rpc CreateGuest(CreateGuestRequest) returns (CreateGuestReply);
rpc DestroyGuest(DestroyGuestRequest) returns (DestroyGuestReply);
rpc ResolveGuest(ResolveGuestRequest) returns (ResolveGuestReply);
rpc ListGuests(ListGuestsRequest) returns (ListGuestsReply);
rpc CreateZone(CreateZoneRequest) returns (CreateZoneReply);
rpc DestroyZone(DestroyZoneRequest) returns (DestroyZoneReply);
rpc ResolveZone(ResolveZoneRequest) returns (ResolveZoneReply);
rpc ListZones(ListZonesRequest) returns (ListZonesReply);
rpc ListDevices(ListDevicesRequest) returns (ListDevicesReply);
rpc ExecGuest(stream ExecGuestRequest) returns (stream ExecGuestReply);
rpc ExecZone(stream ExecZoneRequest) returns (stream ExecZoneReply);
rpc AttachZoneConsole(stream ZoneConsoleRequest) returns (stream ZoneConsoleReply);
rpc ReadZoneMetrics(ReadZoneMetricsRequest) returns (ReadZoneMetricsReply);
rpc ConsoleData(stream ConsoleDataRequest) returns (stream ConsoleDataReply);
rpc ReadGuestMetrics(ReadGuestMetricsRequest) returns (ReadGuestMetricsReply);
rpc SnoopIdm(SnoopIdmRequest) returns (stream SnoopIdmReply);
rpc WatchEvents(WatchEventsRequest) returns (stream WatchEventsReply);
@ -40,41 +40,41 @@ message IdentifyHostReply {
string krata_version = 3;
}
message CreateGuestRequest {
krata.v1.common.GuestSpec spec = 1;
message CreateZoneRequest {
krata.v1.common.ZoneSpec spec = 1;
}
message CreateGuestReply {
string guest_id = 1;
message CreateZoneReply {
string zone_id = 1;
}
message DestroyGuestRequest {
string guest_id = 1;
message DestroyZoneRequest {
string zone_id = 1;
}
message DestroyGuestReply {}
message DestroyZoneReply {}
message ResolveGuestRequest {
message ResolveZoneRequest {
string name = 1;
}
message ResolveGuestReply {
krata.v1.common.Guest guest = 1;
message ResolveZoneReply {
krata.v1.common.Zone zone = 1;
}
message ListGuestsRequest {}
message ListZonesRequest {}
message ListGuestsReply {
repeated krata.v1.common.Guest guests = 1;
message ListZonesReply {
repeated krata.v1.common.Zone zones = 1;
}
message ExecGuestRequest {
string guest_id = 1;
krata.v1.common.GuestTaskSpec task = 2;
message ExecZoneRequest {
string zone_id = 1;
krata.v1.common.ZoneTaskSpec task = 2;
bytes data = 3;
}
message ExecGuestReply {
message ExecZoneReply {
bool exited = 1;
string error = 2;
int32 exit_code = 3;
@ -82,12 +82,12 @@ message ExecGuestReply {
bytes stderr = 5;
}
message ConsoleDataRequest {
string guest_id = 1;
message ZoneConsoleRequest {
string zone_id = 1;
bytes data = 2;
}
message ConsoleDataReply {
message ZoneConsoleReply {
bytes data = 1;
}
@ -95,20 +95,20 @@ message WatchEventsRequest {}
message WatchEventsReply {
oneof event {
GuestChangedEvent guest_changed = 1;
ZoneChangedEvent zone_changed = 1;
}
}
message GuestChangedEvent {
krata.v1.common.Guest guest = 1;
message ZoneChangedEvent {
krata.v1.common.Zone zone = 1;
}
message ReadGuestMetricsRequest {
string guest_id = 1;
message ReadZoneMetricsRequest {
string zone_id = 1;
}
message ReadGuestMetricsReply {
krata.v1.common.GuestMetricNode root = 1;
message ReadZoneMetricsReply {
krata.v1.common.ZoneMetricNode root = 1;
}
message SnoopIdmRequest {}
@ -184,6 +184,7 @@ message PullImageRequest {
string image = 1;
krata.v1.common.OciImageFormat format = 2;
bool overwrite_cache = 3;
bool update = 4;
}
message PullImageReply {
@ -205,9 +206,9 @@ message ListDevicesReply {
}
enum HostCpuTopologyClass {
CPU_CLASS_STANDARD = 0;
CPU_CLASS_PERFORMANCE = 1;
CPU_CLASS_EFFICIENCY = 2;
HOST_CPU_TOPOLOGY_CLASS_STANDARD = 0;
HOST_CPU_TOPOLOGY_CLASS_PERFORMANCE = 1;
HOST_CPU_TOPOLOGY_CLASS_EFFICIENCY = 2;
}
message HostCpuTopologyInfo {


@ -1,14 +1,10 @@
#[cfg(unix)]
use crate::unix::HyperUnixConnector;
use crate::{dial::ControlDialAddress, v1::control::control_service_client::ControlServiceClient};
#[cfg(not(unix))]
use anyhow::anyhow;
use anyhow::Result;
#[cfg(unix)]
use tokio::net::UnixStream;
#[cfg(unix)]
use tonic::transport::Uri;
use tonic::transport::{Channel, ClientTlsConfig, Endpoint};
#[cfg(unix)]
use tower::service_fn;
pub struct ControlClientProvider {}
@ -52,10 +48,7 @@ impl ControlClientProvider {
async fn dial_unix_socket(path: String) -> Result<Channel> {
// This URL is not actually used but is required to be specified.
Ok(Endpoint::try_from(format!("unix://localhost/{}", path))?
.connect_with_connector(service_fn(|uri: Uri| {
let path = uri.path().to_string();
UnixStream::connect(path)
}))
.connect_with_connector(HyperUnixConnector {})
.await?)
}
}


@ -9,6 +9,7 @@ use std::{
};
use anyhow::{anyhow, Result};
use bytes::{BufMut, BytesMut};
use log::{debug, error};
use nix::sys::termios::{cfmakeraw, tcgetattr, tcsetattr, SetArg};
use prost::Message;
@ -96,10 +97,12 @@ impl IdmBackend for IdmFileBackend {
async fn send(&mut self, packet: IdmTransportPacket) -> Result<()> {
let mut file = self.write.lock().await;
let data = packet.encode_to_vec();
file.write_all(&[0xff, 0xff]).await?;
file.write_u32_le(data.len() as u32).await?;
file.write_all(&data).await?;
let length = packet.encoded_len();
let mut buffer = BytesMut::with_capacity(6 + length);
buffer.put_slice(&[0xff, 0xff]);
buffer.put_u32_le(length as u32);
packet.encode(&mut buffer)?;
file.write_all(&buffer).await?;
Ok(())
}
}
@ -488,7 +491,7 @@ impl<R: IdmRequest, E: IdmSerializable> IdmClient<R, E> {
error!("unable to send idm packet, packet size exceeded (tried to send {} bytes)", length);
continue;
}
backend.send(packet).await?;
backend.send(packet.clone()).await?;
},
None => {

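The rewritten send path above assembles the whole frame in a single buffer before writing, so a frame can no longer be torn across separate writes to the channel file. A minimal sketch of that framing, assuming IdmTransportPacket is a prost Message as the encoded_len and encode calls imply:

use anyhow::Result;
use bytes::{BufMut, BytesMut};
use prost::Message;

// Frame layout used by IdmFileBackend::send above: two magic bytes
// (0xff, 0xff), a little-endian u32 payload length, then the
// protobuf-encoded packet, all placed in one buffer.
fn encode_idm_frame<P: Message>(packet: &P) -> Result<BytesMut> {
    let length = packet.encoded_len();
    let mut buffer = BytesMut::with_capacity(6 + length);
    buffer.put_slice(&[0xff, 0xff]);
    buffer.put_u32_le(length as u32);
    packet.encode(&mut buffer)?;
    Ok(buffer)
}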

@ -12,6 +12,9 @@ pub mod launchcfg;
#[cfg(target_os = "linux")]
pub mod ethtool;
#[cfg(unix)]
pub mod unix;
pub static DESCRIPTOR_POOL: Lazy<DescriptorPool> = Lazy::new(|| {
DescriptorPool::decode(
include_bytes!(concat!(env!("OUT_DIR"), "/file_descriptor_set.bin")).as_ref(),

crates/krata/src/unix.rs (new file, 73 lines)

@ -0,0 +1,73 @@
use std::future::Future;
use std::io::Error;
use std::pin::Pin;
use std::task::{Context, Poll};
use hyper::rt::ReadBufCursor;
use hyper_util::rt::TokioIo;
use pin_project_lite::pin_project;
use tokio::io::AsyncWrite;
use tokio::net::UnixStream;
use tonic::transport::Uri;
use tower::Service;
pin_project! {
#[derive(Debug)]
pub struct HyperUnixStream {
#[pin]
pub stream: UnixStream,
}
}
impl hyper::rt::Read for HyperUnixStream {
fn poll_read(
self: Pin<&mut Self>,
cx: &mut Context<'_>,
buf: ReadBufCursor<'_>,
) -> Poll<Result<(), Error>> {
let mut tokio = TokioIo::new(self.project().stream);
Pin::new(&mut tokio).poll_read(cx, buf)
}
}
impl hyper::rt::Write for HyperUnixStream {
fn poll_write(
self: Pin<&mut Self>,
cx: &mut Context<'_>,
buf: &[u8],
) -> Poll<Result<usize, Error>> {
self.project().stream.poll_write(cx, buf)
}
fn poll_flush(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Result<(), Error>> {
self.project().stream.poll_flush(cx)
}
fn poll_shutdown(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Result<(), Error>> {
self.project().stream.poll_shutdown(cx)
}
}
pub struct HyperUnixConnector;
impl Service<Uri> for HyperUnixConnector {
type Response = HyperUnixStream;
type Error = Error;
#[allow(clippy::type_complexity)]
type Future =
Pin<Box<dyn Future<Output = Result<Self::Response, Self::Error>> + Send + 'static>>;
fn call(&mut self, req: Uri) -> Self::Future {
let fut = async move {
let path = req.path().to_string();
let stream = UnixStream::connect(path).await?;
Ok(HyperUnixStream { stream })
};
Box::pin(fut)
}
fn poll_ready(&mut self, _cx: &mut Context<'_>) -> Poll<Result<(), Self::Error>> {
Poll::Ready(Ok(()))
}
}

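Together with the client.rs change earlier in this diff, this connector replaces the service_fn closure when tonic dials the control socket. A self-contained sketch of that call site as it would look inside the krata crate; the unix://localhost authority is only a placeholder required by Endpoint, and the connector uses just the path component.

use crate::unix::HyperUnixConnector;
use anyhow::Result;
use tonic::transport::{Channel, Endpoint};

// Dial a unix socket through tonic using the connector defined above.
async fn dial_unix(path: String) -> Result<Channel> {
    Ok(Endpoint::try_from(format!("unix://localhost/{}", path))?
        .connect_with_connector(HyperUnixConnector {})
        .await?)
}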

@ -16,7 +16,7 @@ clap = { workspace = true }
env_logger = { workspace = true }
etherparse = { workspace = true }
futures = { workspace = true }
krata = { path = "../krata", version = "^0.0.12" }
krata = { path = "../krata", version = "^0.0.14" }
krata-advmac = { workspace = true }
libc = { workspace = true }
log = { workspace = true }


@ -2,10 +2,10 @@ use anyhow::Result;
use krata::{
events::EventStream,
v1::{
common::Guest,
common::Zone,
control::{
control_service_client::ControlServiceClient, watch_events_reply::Event,
ListGuestsRequest,
ListZonesRequest,
},
},
};
@ -33,7 +33,7 @@ pub struct NetworkSide {
pub struct NetworkMetadata {
pub domid: u32,
pub uuid: Uuid,
pub guest: NetworkSide,
pub zone: NetworkSide,
pub gateway: NetworkSide,
}
@ -60,23 +60,23 @@ impl AutoNetworkWatcher {
}
pub async fn read(&mut self) -> Result<Vec<NetworkMetadata>> {
let mut all_guests: HashMap<Uuid, Guest> = HashMap::new();
for guest in self
let mut all_zones: HashMap<Uuid, Zone> = HashMap::new();
for zone in self
.control
.list_guests(ListGuestsRequest {})
.list_zones(ListZonesRequest {})
.await?
.into_inner()
.guests
.zones
{
let Ok(uuid) = Uuid::from_str(&guest.id) else {
let Ok(uuid) = Uuid::from_str(&zone.id) else {
continue;
};
all_guests.insert(uuid, guest);
all_zones.insert(uuid, zone);
}
let mut networks: Vec<NetworkMetadata> = Vec::new();
for (uuid, guest) in &all_guests {
let Some(ref state) = guest.state else {
for (uuid, zone) in &all_zones {
let Some(ref state) = zone.state else {
continue;
};
@ -88,15 +88,15 @@ impl AutoNetworkWatcher {
continue;
};
let Ok(guest_ipv4_cidr) = Ipv4Cidr::from_str(&network.guest_ipv4) else {
let Ok(zone_ipv4_cidr) = Ipv4Cidr::from_str(&network.zone_ipv4) else {
continue;
};
let Ok(guest_ipv6_cidr) = Ipv6Cidr::from_str(&network.guest_ipv6) else {
let Ok(zone_ipv6_cidr) = Ipv6Cidr::from_str(&network.zone_ipv6) else {
continue;
};
let Ok(guest_mac) = EthernetAddress::from_str(&network.guest_mac) else {
let Ok(zone_mac) = EthernetAddress::from_str(&network.zone_mac) else {
continue;
};
@ -115,10 +115,10 @@ impl AutoNetworkWatcher {
networks.push(NetworkMetadata {
domid: state.domid,
uuid: *uuid,
guest: NetworkSide {
ipv4: guest_ipv4_cidr,
ipv6: guest_ipv6_cidr,
mac: guest_mac,
zone: NetworkSide {
ipv4: zone_ipv4_cidr,
ipv6: zone_ipv6_cidr,
mac: zone_mac,
},
gateway: NetworkSide {
ipv4: gateway_ipv4_cidr,
@ -175,7 +175,7 @@ impl AutoNetworkWatcher {
loop {
select! {
x = receiver.recv() => match x {
Ok(Event::GuestChanged(_)) => {
Ok(Event::ZoneChanged(_)) => {
break;
},


@ -54,11 +54,11 @@ impl NetworkStack<'_> {
match what {
NetworkStackSelect::Receive(Some(packet)) => {
if let Err(error) = self.bridge.to_bridge_sender.try_send(packet.clone()) {
trace!("failed to send guest packet to bridge: {}", error);
trace!("failed to send zone packet to bridge: {}", error);
}
if let Err(error) = self.nat.receive_sender.try_send(packet.clone()) {
trace!("failed to send guest packet to nat: {}", error);
trace!("failed to send zone packet to nat: {}", error);
}
self.udev.rx = Some(packet);
@ -137,7 +137,7 @@ impl NetworkBackend {
.expect("failed to set ip addresses");
});
let sockets = SocketSet::new(vec![]);
let handle = self.bridge.join(self.metadata.guest.mac).await?;
let handle = self.bridge.join(self.metadata.zone.mac).await?;
let kdev = AsyncRawSocketChannel::new(mtu, kdev)?;
Ok(NetworkStack {
tx: tx_receiver,
@ -153,12 +153,12 @@ impl NetworkBackend {
pub async fn launch(self) -> Result<JoinHandle<()>> {
Ok(tokio::task::spawn(async move {
info!(
"launched network backend for krata guest {}",
"launched network backend for krata zone {}",
self.metadata.uuid
);
if let Err(error) = self.run().await {
warn!(
"network backend for krata guest {} failed: {}",
"network backend for krata zone {} failed: {}",
self.metadata.uuid, error
);
}
@ -169,7 +169,7 @@ impl NetworkBackend {
impl Drop for NetworkBackend {
fn drop(&mut self) {
info!(
"destroyed network backend for krata guest {}",
"destroyed network backend for krata zone {}",
self.metadata.uuid
);
}


@ -7,7 +7,7 @@ use hbridge::HostBridge;
use krata::{
client::ControlClientProvider,
dial::ControlDialAddress,
v1::{common::Guest, control::control_service_client::ControlServiceClient},
v1::{common::Zone, control::control_service_client::ControlServiceClient},
};
use log::warn;
use tokio::{task::JoinHandle, time::sleep};
@ -33,7 +33,7 @@ pub const EXTRA_MTU: usize = 20;
pub struct NetworkService {
pub control: ControlServiceClient<Channel>,
pub guests: HashMap<Uuid, Guest>,
pub zones: HashMap<Uuid, Zone>,
pub backends: HashMap<Uuid, JoinHandle<()>>,
pub bridge: VirtualBridge,
pub hbridge: HostBridge,
@ -47,7 +47,7 @@ impl NetworkService {
HostBridge::new(HOST_BRIDGE_MTU + EXTRA_MTU, "krata0".to_string(), &bridge).await?;
Ok(NetworkService {
control,
guests: HashMap::new(),
zones: HashMap::new(),
backends: HashMap::new(),
bridge,
hbridge,
@ -99,7 +99,7 @@ impl NetworkService {
Err((metadata, error)) => {
warn!(
"failed to launch network backend for krata guest {}: {}",
"failed to launch network backend for krata zone {}: {}",
metadata.uuid, error
);
failed.push(metadata.uuid);


@ -37,7 +37,13 @@ async fn main() -> Result<()> {
});
let service = OciPackerService::new(seed, &cache_dir, OciPlatform::current()).await?;
let packed = service
.request(image.clone(), OciPackedFormat::Squashfs, false, context)
.request(
image.clone(),
OciPackedFormat::Squashfs,
false,
true,
context,
)
.await?;
println!(
"generated squashfs of {} to {}",


@ -4,6 +4,7 @@ use crate::{
schema::OciSchema,
};
use crate::fetch::OciResolvedImage;
use anyhow::Result;
use log::{debug, error};
use oci_spec::image::{
@ -50,6 +51,51 @@ impl OciPackerCache {
Ok(index.manifests().clone())
}
pub async fn resolve(
&self,
name: ImageName,
format: OciPackedFormat,
) -> Result<Option<OciResolvedImage>> {
if name.reference.as_deref() == Some("latest") {
return Ok(None);
}
let name_str = name.to_string();
let index = self.index.read().await;
let mut descriptor: Option<Descriptor> = None;
for manifest in index.manifests() {
let Some(name) = manifest
.annotations()
.clone()
.unwrap_or_default()
.get(ANNOTATION_IMAGE_NAME)
.cloned()
else {
continue;
};
if name == name_str {
descriptor = Some(manifest.clone());
}
}
let Some(descriptor) = descriptor else {
return Ok(None);
};
debug!("resolve hit name={} digest={}", name, descriptor.digest());
self.recall(name, descriptor.digest().as_ref(), format)
.await
.map(|image| {
image.map(|i| OciResolvedImage {
name: i.name,
digest: i.digest,
descriptor: i.descriptor,
manifest: i.manifest,
})
})
}
pub async fn recall(
&self,
name: ImageName,


@ -75,13 +75,23 @@ impl OciPackerService {
name: ImageName,
format: OciPackedFormat,
overwrite: bool,
pull: bool,
progress_context: OciProgressContext,
) -> Result<OciPackedImage> {
let progress = OciProgress::new();
let progress = OciBoundProgress::new(progress_context.clone(), progress);
let mut resolved = None;
if !pull && !overwrite {
resolved = self.cache.resolve(name.clone(), format).await?;
}
let fetcher =
OciImageFetcher::new(self.seed.clone(), self.platform.clone(), progress.clone());
let resolved = fetcher.resolve(name.clone()).await?;
let resolved = if let Some(resolved) = resolved {
resolved
} else {
fetcher.resolve(name.clone()).await?
};
let key = OciPackerTaskKey {
digest: resolved.digest.clone(),
format,

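With the new pull flag, a request can now be satisfied straight from the pack cache: when both overwrite and pull are false the service tries cache.resolve() first and only falls back to the registry fetcher on a miss. A hedged usage sketch follows; the krataoci module paths and the &self receiver are assumptions, since this diff only shows the parameter list.

use anyhow::Result;
// Assumed module paths; the diff above shows only the type names.
use krataoci::{
    name::ImageName,
    packer::{service::OciPackerService, OciPackedFormat, OciPackedImage},
    progress::OciProgressContext,
};

// Prefer an already-packed image from the cache; pass pull = true to force
// a fresh registry resolve as before.
async fn pack_from_cache_if_possible(
    service: &OciPackerService,
    image: ImageName,
    progress: OciProgressContext,
) -> Result<OciPackedImage> {
    service
        .request(image, OciPackedFormat::Squashfs, false, false, progress)
        .await
}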

@ -138,7 +138,7 @@ impl VfsNode {
header.set_mode(self.mode);
if let Some(link_name) = self.link_name.as_ref() {
header.set_link_name(&PathBuf::from(link_name))?;
header.set_link_name(PathBuf::from(link_name))?;
}
header.set_size(self.size);
Ok(header)


@ -1,6 +1,6 @@
[package]
name = "krata-runtime"
description = "Runtime for running guests on the krata isolation engine"
description = "Runtime for managing zones on the krata isolation engine"
license.workspace = true
version.workspace = true
homepage.workspace = true
@ -12,20 +12,20 @@ resolver = "2"
anyhow = { workspace = true }
backhand = { workspace = true }
ipnetwork = { workspace = true }
krata = { path = "../krata", version = "^0.0.12" }
krata = { path = "../krata", version = "^0.0.14" }
krata-advmac = { workspace = true }
krata-oci = { path = "../oci", version = "^0.0.12" }
krata-oci = { path = "../oci", version = "^0.0.14" }
log = { workspace = true }
serde_json = { workspace = true }
tokio = { workspace = true }
uuid = { workspace = true }
krata-loopdev = { path = "../loopdev", version = "^0.0.12" }
krata-xencall = { path = "../xen/xencall", version = "^0.0.12" }
krata-xenclient = { path = "../xen/xenclient", version = "^0.0.12" }
krata-xenevtchn = { path = "../xen/xenevtchn", version = "^0.0.12" }
krata-xengnt = { path = "../xen/xengnt", version = "^0.0.12" }
krata-xenplatform = { path = "../xen/xenplatform", version = "^0.0.12" }
krata-xenstore = { path = "../xen/xenstore", version = "^0.0.12" }
krata-loopdev = { path = "../loopdev", version = "^0.0.14" }
krata-xencall = { path = "../xen/xencall", version = "^0.0.14" }
krata-xenclient = { path = "../xen/xenclient", version = "^0.0.14" }
krata-xenevtchn = { path = "../xen/xenevtchn", version = "^0.0.14" }
krata-xengnt = { path = "../xen/xengnt", version = "^0.0.14" }
krata-xenplatform = { path = "../xen/xenplatform", version = "^0.0.14" }
krata-xenstore = { path = "../xen/xenstore", version = "^0.0.14" }
walkdir = { workspace = true }
indexmap = { workspace = true }


@ -99,23 +99,23 @@ impl IpVendor {
continue;
};
let assigned_ipv4 = store
.read_string(format!("{}/krata/network/guest/ipv4", dom_path))
.read_string(format!("{}/krata/network/zone/ipv4", dom_path))
.await?
.and_then(|x| Ipv4Network::from_str(&x).ok());
let assigned_ipv6 = store
.read_string(format!("{}/krata/network/guest/ipv6", dom_path))
.read_string(format!("{}/krata/network/zone/ipv6", dom_path))
.await?
.and_then(|x| Ipv6Network::from_str(&x).ok());
if let Some(existing_ipv4) = assigned_ipv4 {
if let Some(previous) = state.ipv4.insert(existing_ipv4.ip(), uuid) {
error!("ipv4 conflict detected: guest {} owned {} but {} also claimed to own it, giving it to {}", previous, existing_ipv4.ip(), uuid, uuid);
error!("ipv4 conflict detected: zone {} owned {} but {} also claimed to own it, giving it to {}", previous, existing_ipv4.ip(), uuid, uuid);
}
}
if let Some(existing_ipv6) = assigned_ipv6 {
if let Some(previous) = state.ipv6.insert(existing_ipv6.ip(), uuid) {
error!("ipv6 conflict detected: guest {} owned {} but {} also claimed to own it, giving it to {}", previous, existing_ipv6.ip(), uuid, uuid);
error!("ipv6 conflict detected: zone {} owned {} but {} also claimed to own it, giving it to {}", previous, existing_ipv6.ip(), uuid, uuid);
}
}
}
@ -251,13 +251,13 @@ impl IpVendor {
intermediate.ipv6.insert(self.gateway_ipv6, self.host_uuid);
for (ipv4, uuid) in &state.pending_ipv4 {
if let Some(previous) = intermediate.ipv4.insert(*ipv4, *uuid) {
error!("ipv4 conflict detected: guest {} owned (pending) {} but {} also claimed to own it, giving it to {}", previous, ipv4, uuid, uuid);
error!("ipv4 conflict detected: zone {} owned (pending) {} but {} also claimed to own it, giving it to {}", previous, ipv4, uuid, uuid);
}
intermediate.pending_ipv4.insert(*ipv4, *uuid);
}
for (ipv6, uuid) in &state.pending_ipv6 {
if let Some(previous) = intermediate.ipv6.insert(*ipv6, *uuid) {
error!("ipv6 conflict detected: guest {} owned (pending) {} but {} also claimed to own it, giving it to {}", previous, ipv6, uuid, uuid);
error!("ipv6 conflict detected: zone {} owned (pending) {} but {} also claimed to own it, giving it to {}", previous, ipv6, uuid, uuid);
}
intermediate.pending_ipv6.insert(*ipv6, *uuid);
}
@ -271,16 +271,16 @@ impl IpVendor {
domid: u32,
) -> Result<Option<IpAssignment>> {
let dom_path = format!("/local/domain/{}", domid);
let Some(guest_ipv4) = self
let Some(zone_ipv4) = self
.store
.read_string(format!("{}/krata/network/guest/ipv4", dom_path))
.read_string(format!("{}/krata/network/zone/ipv4", dom_path))
.await?
else {
return Ok(None);
};
let Some(guest_ipv6) = self
let Some(zone_ipv6) = self
.store
.read_string(format!("{}/krata/network/guest/ipv6", dom_path))
.read_string(format!("{}/krata/network/zone/ipv6", dom_path))
.await?
else {
return Ok(None);
@ -300,10 +300,10 @@ impl IpVendor {
return Ok(None);
};
let Some(guest_ipv4) = Ipv4Network::from_str(&guest_ipv4).ok() else {
let Some(zone_ipv4) = Ipv4Network::from_str(&zone_ipv4).ok() else {
return Ok(None);
};
let Some(guest_ipv6) = Ipv6Network::from_str(&guest_ipv6).ok() else {
let Some(zone_ipv6) = Ipv6Network::from_str(&zone_ipv6).ok() else {
return Ok(None);
};
let Some(gateway_ipv4) = Ipv4Network::from_str(&gateway_ipv4).ok() else {
@ -315,10 +315,10 @@ impl IpVendor {
Ok(Some(IpAssignment {
vendor: self.clone(),
uuid,
ipv4: guest_ipv4.ip(),
ipv4_prefix: guest_ipv4.prefix(),
ipv6: guest_ipv6.ip(),
ipv6_prefix: guest_ipv6.prefix(),
ipv4: zone_ipv4.ip(),
ipv4_prefix: zone_ipv4.prefix(),
ipv6: zone_ipv6.ip(),
ipv6_prefix: zone_ipv6.prefix(),
gateway_ipv4: gateway_ipv4.ip(),
gateway_ipv6: gateway_ipv6.ip(),
committed: true,


@ -20,13 +20,13 @@ use xenplatform::domain::BaseDomainConfig;
use crate::cfgblk::ConfigBlock;
use crate::RuntimeContext;
use super::{GuestInfo, GuestState};
use super::{ZoneInfo, ZoneState};
pub use xenclient::{
pci::PciBdf, DomainPciDevice as PciDevice, DomainPciRdmReservePolicy as PciRdmReservePolicy,
};
pub struct GuestLaunchRequest {
pub struct ZoneLaunchRequest {
pub format: LaunchPackedFormat,
pub kernel: Vec<u8>,
pub initrd: Vec<u8>,
@ -42,11 +42,11 @@ pub struct GuestLaunchRequest {
pub addons_image: Option<PathBuf>,
}
pub struct GuestLauncher {
pub struct ZoneLauncher {
pub launch_semaphore: Arc<Semaphore>,
}
impl GuestLauncher {
impl ZoneLauncher {
pub fn new(launch_semaphore: Arc<Semaphore>) -> Result<Self> {
Ok(Self { launch_semaphore })
}
@ -54,16 +54,16 @@ impl GuestLauncher {
pub async fn launch(
&mut self,
context: &RuntimeContext,
request: GuestLaunchRequest,
) -> Result<GuestInfo> {
request: ZoneLaunchRequest,
) -> Result<ZoneInfo> {
let uuid = request.uuid.unwrap_or_else(Uuid::new_v4);
let xen_name = format!("krata-{uuid}");
let mut gateway_mac = MacAddr6::random();
gateway_mac.set_local(true);
gateway_mac.set_multicast(false);
let mut container_mac = MacAddr6::random();
container_mac.set_local(true);
container_mac.set_multicast(false);
let mut zone_mac = MacAddr6::random();
zone_mac.set_local(true);
zone_mac.set_multicast(false);
let _launch_permit = self.launch_semaphore.acquire().await?;
let mut ip = context.ipvendor.assign(uuid).await?;
@ -145,7 +145,7 @@ impl GuestLauncher {
}
let cmdline = cmdline_options.join(" ");
let guest_mac_string = container_mac.to_string().replace('-', ":");
let zone_mac_string = zone_mac.to_string().replace('-', ":");
let gateway_mac_string = gateway_mac.to_string().replace('-', ":");
let mut disks = vec![
@ -191,16 +191,16 @@ impl GuestLauncher {
("krata/uuid".to_string(), uuid.to_string()),
("krata/loops".to_string(), loops.join(",")),
(
"krata/network/guest/ipv4".to_string(),
"krata/network/zone/ipv4".to_string(),
format!("{}/{}", ip.ipv4, ip.ipv4_prefix),
),
(
"krata/network/guest/ipv6".to_string(),
"krata/network/zone/ipv6".to_string(),
format!("{}/{}", ip.ipv6, ip.ipv6_prefix),
),
(
"krata/network/guest/mac".to_string(),
guest_mac_string.clone(),
"krata/network/zone/mac".to_string(),
zone_mac_string.clone(),
),
(
"krata/network/gateway/ipv4".to_string(),
@ -240,7 +240,7 @@ impl GuestLauncher {
initialized: false,
}],
vifs: vec![DomainNetworkInterface {
mac: guest_mac_string.clone(),
mac: zone_mac_string.clone(),
mtu: 1500,
bridge: None,
script: None,
@ -248,20 +248,20 @@ impl GuestLauncher {
pcis: request.pcis.clone(),
filesystems: vec![],
extra_keys,
extra_rw_paths: vec!["krata/guest".to_string()],
extra_rw_paths: vec!["krata/zone".to_string()],
};
match context.xen.create(&config).await {
Ok(created) => {
ip.commit().await?;
Ok(GuestInfo {
Ok(ZoneInfo {
name: request.name.as_ref().map(|x| x.to_string()),
uuid,
domid: created.domid,
image: request.image.digest,
loops: vec![],
guest_ipv4: Some(IpNetwork::new(IpAddr::V4(ip.ipv4), ip.ipv4_prefix)?),
guest_ipv6: Some(IpNetwork::new(IpAddr::V6(ip.ipv6), ip.ipv6_prefix)?),
guest_mac: Some(guest_mac_string.clone()),
zone_ipv4: Some(IpNetwork::new(IpAddr::V4(ip.ipv4), ip.ipv4_prefix)?),
zone_ipv6: Some(IpNetwork::new(IpAddr::V6(ip.ipv6), ip.ipv6_prefix)?),
zone_mac: Some(zone_mac_string.clone()),
gateway_ipv4: Some(IpNetwork::new(
IpAddr::V4(ip.gateway_ipv4),
ip.ipv4_prefix,
@ -271,7 +271,7 @@ impl GuestLauncher {
ip.ipv6_prefix,
)?),
gateway_mac: Some(gateway_mac_string.clone()),
state: GuestState { exit_code: None },
state: ZoneState { exit_code: None },
})
}
Err(error) => {


@ -12,7 +12,7 @@ use xenstore::{XsdClient, XsdInterface};
use self::{
autoloop::AutoLoop,
launch::{GuestLaunchRequest, GuestLauncher},
launch::{ZoneLaunchRequest, ZoneLauncher},
power::PowerManagementContext,
};
@ -29,29 +29,32 @@ type RuntimePlatform = xenplatform::x86pv::X86PvPlatform;
#[cfg(not(target_arch = "x86_64"))]
type RuntimePlatform = xenplatform::unsupported::UnsupportedPlatform;
pub struct GuestLoopInfo {
#[derive(Clone)]
pub struct ZoneLoopInfo {
pub device: String,
pub file: String,
pub delete: Option<String>,
}
pub struct GuestState {
#[derive(Clone)]
pub struct ZoneState {
pub exit_code: Option<i32>,
}
pub struct GuestInfo {
#[derive(Clone)]
pub struct ZoneInfo {
pub name: Option<String>,
pub uuid: Uuid,
pub domid: u32,
pub image: String,
pub loops: Vec<GuestLoopInfo>,
pub guest_ipv4: Option<IpNetwork>,
pub guest_ipv6: Option<IpNetwork>,
pub guest_mac: Option<String>,
pub loops: Vec<ZoneLoopInfo>,
pub zone_ipv4: Option<IpNetwork>,
pub zone_ipv6: Option<IpNetwork>,
pub zone_mac: Option<String>,
pub gateway_ipv4: Option<IpNetwork>,
pub gateway_ipv6: Option<IpNetwork>,
pub gateway_mac: Option<String>,
pub state: GuestState,
pub state: ZoneState,
}
#[derive(Clone)]
@ -75,8 +78,8 @@ impl RuntimeContext {
})
}
pub async fn list(&self) -> Result<Vec<GuestInfo>> {
let mut guests: Vec<GuestInfo> = Vec::new();
pub async fn list(&self) -> Result<Vec<ZoneInfo>> {
let mut zones: Vec<ZoneInfo> = Vec::new();
for domid_candidate in self.xen.store.list("/local/domain").await? {
if domid_candidate == "0" {
continue;
@ -112,20 +115,20 @@ impl RuntimeContext {
.store
.read_string(&format!("{}/krata/loops", &dom_path))
.await?;
let guest_ipv4 = self
let zone_ipv4 = self
.xen
.store
.read_string(&format!("{}/krata/network/guest/ipv4", &dom_path))
.read_string(&format!("{}/krata/network/zone/ipv4", &dom_path))
.await?;
let guest_ipv6 = self
let zone_ipv6 = self
.xen
.store
.read_string(&format!("{}/krata/network/guest/ipv6", &dom_path))
.read_string(&format!("{}/krata/network/zone/ipv6", &dom_path))
.await?;
let guest_mac = self
let zone_mac = self
.xen
.store
.read_string(&format!("{}/krata/network/guest/mac", &dom_path))
.read_string(&format!("{}/krata/network/zone/mac", &dom_path))
.await?;
let gateway_ipv4 = self
.xen
@ -143,14 +146,14 @@ impl RuntimeContext {
.read_string(&format!("{}/krata/network/gateway/mac", &dom_path))
.await?;
let guest_ipv4 = if let Some(guest_ipv4) = guest_ipv4 {
IpNetwork::from_str(&guest_ipv4).ok()
let zone_ipv4 = if let Some(zone_ipv4) = zone_ipv4 {
IpNetwork::from_str(&zone_ipv4).ok()
} else {
None
};
let guest_ipv6 = if let Some(guest_ipv6) = guest_ipv6 {
IpNetwork::from_str(&guest_ipv6).ok()
let zone_ipv6 = if let Some(zone_ipv6) = zone_ipv6 {
IpNetwork::from_str(&zone_ipv6).ok()
} else {
None
};
@ -170,7 +173,7 @@ impl RuntimeContext {
let exit_code = self
.xen
.store
.read_string(&format!("{}/krata/guest/exit-code", &dom_path))
.read_string(&format!("{}/krata/zone/exit-code", &dom_path))
.await?;
let exit_code: Option<i32> = match exit_code {
@ -178,37 +181,37 @@ impl RuntimeContext {
None => None,
};
let state = GuestState { exit_code };
let state = ZoneState { exit_code };
let loops = RuntimeContext::parse_loop_set(&loops);
guests.push(GuestInfo {
zones.push(ZoneInfo {
name,
uuid,
domid,
image,
loops,
guest_ipv4,
guest_ipv6,
guest_mac,
zone_ipv4,
zone_ipv6,
zone_mac,
gateway_ipv4,
gateway_ipv6,
gateway_mac,
state,
});
}
Ok(guests)
Ok(zones)
}
pub async fn resolve(&self, uuid: Uuid) -> Result<Option<GuestInfo>> {
for guest in self.list().await? {
if guest.uuid == uuid {
return Ok(Some(guest));
pub async fn resolve(&self, uuid: Uuid) -> Result<Option<ZoneInfo>> {
for zone in self.list().await? {
if zone.uuid == uuid {
return Ok(Some(zone));
}
}
Ok(None)
}
fn parse_loop_set(input: &Option<String>) -> Vec<GuestLoopInfo> {
fn parse_loop_set(input: &Option<String>) -> Vec<ZoneLoopInfo> {
let Some(input) = input else {
return Vec::new();
};
@ -219,7 +222,7 @@ impl RuntimeContext {
.map(|x| (x[0].clone(), x[1].clone(), x[2].clone()))
.collect::<Vec<(String, String, String)>>();
sets.iter()
.map(|(device, file, delete)| GuestLoopInfo {
.map(|(device, file, delete)| ZoneLoopInfo {
device: device.clone(),
file: file.clone(),
delete: if delete == "none" {
@ -228,7 +231,7 @@ impl RuntimeContext {
Some(delete.clone())
},
})
.collect::<Vec<GuestLoopInfo>>()
.collect::<Vec<ZoneLoopInfo>>()
}
}
@ -249,8 +252,8 @@ impl Runtime {
})
}
pub async fn launch(&self, request: GuestLaunchRequest) -> Result<GuestInfo> {
let mut launcher = GuestLauncher::new(self.launch_semaphore.clone())?;
pub async fn launch(&self, request: ZoneLaunchRequest) -> Result<ZoneInfo> {
let mut launcher = ZoneLauncher::new(self.launch_semaphore.clone())?;
launcher.launch(&self.context, request).await
}
@ -259,7 +262,7 @@ impl Runtime {
.context
.resolve(uuid)
.await?
.ok_or_else(|| anyhow!("unable to resolve guest: {}", uuid))?;
.ok_or_else(|| anyhow!("unable to resolve zone: {}", uuid))?;
let domid = info.domid;
let store = XsdClient::open().await?;
let dom_path = store.get_domain_path(domid).await?;
@ -307,7 +310,7 @@ impl Runtime {
if let Some(ip) = ip {
if let Err(error) = self.context.ipvendor.recall(&ip).await {
error!(
"failed to recall ip assignment for guest {}: {}",
"failed to recall ip assignment for zone {}: {}",
uuid, error
);
}
@ -316,7 +319,7 @@ impl Runtime {
Ok(uuid)
}
pub async fn list(&self) -> Result<Vec<GuestInfo>> {
pub async fn list(&self) -> Result<Vec<ZoneInfo>> {
self.context.list().await
}


@ -25,7 +25,7 @@ pub struct CpuTopologyInfo {
pub class: CpuClass,
}
fn labelled_topo(input: &[SysctlCputopo]) -> Vec<CpuTopologyInfo> {
fn labeled_topology(input: &[SysctlCputopo]) -> Vec<CpuTopologyInfo> {
let mut cores: IndexMap<(u32, u32, u32), Vec<CpuTopologyInfo>> = IndexMap::new();
let mut pe_cores = false;
let mut last: Option<SysctlCputopo> = None;
@ -140,9 +140,9 @@ impl PowerManagementContext {
/// If there is a p-core/e-core split, then CPU class will be defined as
/// `CpuClass::Performance` or `CpuClass::Efficiency`, else `CpuClass::Standard`.
pub async fn cpu_topology(&self) -> Result<Vec<CpuTopologyInfo>> {
let xentopo = self.context.xen.call.cpu_topology().await?;
let logicaltopo = labelled_topo(&xentopo);
Ok(logicaltopo)
let xen_topology = self.context.xen.call.cpu_topology().await?;
let logical_topology = labeled_topology(&xen_topology);
Ok(logical_topology)
}
/// Enable or disable SMT awareness in the scheduler.


@ -13,9 +13,9 @@ async-trait = { workspace = true }
indexmap = { workspace = true }
libc = { workspace = true }
log = { workspace = true }
krata-xencall = { path = "../xencall", version = "^0.0.12" }
krata-xenplatform = { path = "../xenplatform", version = "^0.0.12" }
krata-xenstore = { path = "../xenstore", version = "^0.0.12" }
krata-xencall = { path = "../xencall", version = "^0.0.14" }
krata-xenplatform = { path = "../xenplatform", version = "^0.0.14" }
krata-xenstore = { path = "../xenstore", version = "^0.0.14" }
regex = { workspace = true }
thiserror = { workspace = true }
tokio = { workspace = true }

View File

@ -16,7 +16,7 @@ flate2 = { workspace = true }
indexmap = { workspace = true }
libc = { workspace = true }
log = { workspace = true }
krata-xencall = { path = "../xencall", version = "^0.0.12" }
krata-xencall = { path = "../xencall", version = "^0.0.14" }
memchr = { workspace = true }
nix = { workspace = true }
regex = { workspace = true }


@ -143,19 +143,6 @@ pub const XSD_ERROR_EPERM: XsdError = XsdError {
pub const XSD_WATCH_PATH: u32 = 0;
pub const XSD_WATCH_TOKEN: u32 = 1;
#[repr(C)]
pub struct XenDomainInterface {
req: [i8; 1024],
rsp: [i8; 1024],
req_cons: u32,
req_prod: u32,
rsp_cons: u32,
rsp_prod: u32,
server_features: u32,
connection: u32,
error: u32,
}
pub const XS_PAYLOAD_MAX: u32 = 4096;
pub const XS_ABS_PATH_MAX: u32 = 3072;
pub const XS_REL_PATH_MAX: u32 = 2048;


@ -1,6 +1,6 @@
[package]
name = "krata-guest"
description = "Guest services for the krata isolation engine"
name = "krata-zone"
description = "zone services for the krata isolation engine"
license.workspace = true
version.workspace = true
homepage.workspace = true
@ -14,8 +14,8 @@ cgroups-rs = { workspace = true }
env_logger = { workspace = true }
futures = { workspace = true }
ipnetwork = { workspace = true }
krata = { path = "../krata", version = "^0.0.12" }
krata-xenstore = { path = "../xen/xenstore", version = "^0.0.12" }
krata = { path = "../krata", version = "^0.0.14" }
krata-xenstore = { path = "../xen/xenstore", version = "^0.0.14" }
libc = { workspace = true }
log = { workspace = true }
nix = { workspace = true, features = ["ioctl", "process", "fs"] }
@ -30,8 +30,8 @@ sysinfo = { workspace = true }
tokio = { workspace = true }
[lib]
name = "krataguest"
name = "kratazone"
[[bin]]
name = "krataguest"
name = "krata-zone"
path = "bin/init.rs"

crates/zone/bin/init.rs (new file, 19 lines)

@ -0,0 +1,19 @@
use anyhow::Result;
use env_logger::Env;
use kratazone::{death, init::ZoneInit};
use log::error;
use std::env;
#[tokio::main]
async fn main() -> Result<()> {
env::set_var("RUST_BACKTRACE", "1");
env_logger::Builder::from_env(Env::default().default_filter_or("warn")).init();
let mut zone = ZoneInit::new();
if let Err(error) = zone.init().await {
error!("failed to initialize zone: {}", error);
death(127).await?;
return Ok(());
}
death(1).await?;
Ok(())
}


@ -1,7 +1,7 @@
use crate::{
childwait::{ChildEvent, ChildWait},
death,
exec::GuestExecTask,
exec::ZoneExecTask,
metrics::MetricsCollector,
};
use anyhow::Result;
@ -18,20 +18,16 @@ use log::debug;
use nix::unistd::Pid;
use tokio::{select, sync::broadcast};
pub struct GuestBackground {
pub struct ZoneBackground {
idm: IdmInternalClient,
child: Pid,
_cgroup: Cgroup,
wait: ChildWait,
}
impl GuestBackground {
pub async fn new(
idm: IdmInternalClient,
cgroup: Cgroup,
child: Pid,
) -> Result<GuestBackground> {
Ok(GuestBackground {
impl ZoneBackground {
pub async fn new(idm: IdmInternalClient, cgroup: Cgroup, child: Pid) -> Result<ZoneBackground> {
Ok(ZoneBackground {
idm,
child,
_cgroup: cgroup,
@ -43,6 +39,7 @@ impl GuestBackground {
let mut event_subscription = self.idm.subscribe().await?;
let mut requests_subscription = self.idm.requests().await?;
let mut request_streams_subscription = self.idm.request_streams().await?;
let mut wait_subscription = self.wait.subscribe().await?;
loop {
select! {
x = event_subscription.recv() => match x {
@ -89,9 +86,9 @@ impl GuestBackground {
}
},
event = self.wait.recv() => match event {
Some(event) => self.child_event(event).await?,
None => {
event = wait_subscription.recv() => match event {
Ok(event) => self.child_event(event).await?,
Err(_) => {
break;
}
}
@ -132,9 +129,10 @@ impl GuestBackground {
&mut self,
handle: IdmClientStreamResponseHandle<Request>,
) -> Result<()> {
let wait = self.wait.clone();
if let Some(RequestType::ExecStream(_)) = &handle.initial.request {
tokio::task::spawn(async move {
let exec = GuestExecTask { handle };
let exec = ZoneExecTask { wait, handle };
if let Err(error) = exec.run().await {
let _ = exec
.handle


@ -11,7 +11,7 @@ use anyhow::Result;
use libc::{c_int, waitpid, WEXITSTATUS, WIFEXITED};
use log::warn;
use nix::unistd::Pid;
use tokio::sync::mpsc::{channel, Receiver, Sender};
use tokio::sync::broadcast::{channel, Receiver, Sender};
const CHILD_WAIT_QUEUE_LEN: usize = 10;
@ -21,18 +21,19 @@ pub struct ChildEvent {
pub status: c_int,
}
#[derive(Clone)]
pub struct ChildWait {
receiver: Receiver<ChildEvent>,
sender: Sender<ChildEvent>,
signal: Arc<AtomicBool>,
_task: JoinHandle<()>,
_task: Arc<JoinHandle<()>>,
}
impl ChildWait {
pub fn new() -> Result<ChildWait> {
let (sender, receiver) = channel(CHILD_WAIT_QUEUE_LEN);
let (sender, _) = channel(CHILD_WAIT_QUEUE_LEN);
let signal = Arc::new(AtomicBool::new(false));
let mut processor = ChildWaitTask {
sender,
sender: sender.clone(),
signal: signal.clone(),
};
let task = thread::spawn(move || {
@ -41,14 +42,14 @@ impl ChildWait {
}
});
Ok(ChildWait {
receiver,
sender,
signal,
_task: task,
_task: Arc::new(task),
})
}
pub async fn recv(&mut self) -> Option<ChildEvent> {
self.receiver.recv().await
pub async fn subscribe(&self) -> Result<Receiver<ChildEvent>> {
Ok(self.sender.subscribe())
}
}
@ -68,7 +69,7 @@ impl ChildWaitTask {
pid: Pid::from_raw(pid),
status: WEXITSTATUS(status),
};
let _ = self.sender.try_send(event);
let _ = self.sender.send(event);
if self.signal.load(Ordering::Acquire) {
return Ok(());
@ -80,6 +81,8 @@ impl ChildWaitTask {
impl Drop for ChildWait {
fn drop(&mut self) {
self.signal.store(true, Ordering::Release);
if Arc::strong_count(&self.signal) <= 1 {
self.signal.store(true, Ordering::Release);
}
}
}

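Switching ChildWait from an mpsc receiver to a broadcast channel means the zone background loop and each ZoneExecTask can subscribe independently to the same stream of exit events, which is what the exec changes just below rely on. A small consumer sketch against the API defined above:

use crate::childwait::ChildWait;
use anyhow::Result;
use nix::unistd::Pid;

// Wait until a specific child exits. Every caller gets its own broadcast
// receiver, so several tasks can watch at once without stealing events.
async fn wait_for_exit(wait: &ChildWait, pid: Pid) -> Result<i32> {
    let mut events = wait.subscribe().await?;
    loop {
        let event = events.recv().await?;
        if event.pid == pid {
            return Ok(event.status);
        }
    }
}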

@ -1,6 +1,12 @@
use std::{collections::HashMap, process::Stdio};
use anyhow::{anyhow, Result};
use tokio::{
io::{AsyncReadExt, AsyncWriteExt},
join,
process::Command,
};
use krata::idm::{
client::IdmClientStreamResponseHandle,
internal::{
@ -9,17 +15,15 @@ use krata::idm::{
},
internal::{response::Response as ResponseType, Request, Response},
};
use tokio::{
io::{AsyncReadExt, AsyncWriteExt},
join,
process::Command,
};
pub struct GuestExecTask {
use crate::childwait::ChildWait;
pub struct ZoneExecTask {
pub wait: ChildWait,
pub handle: IdmClientStreamResponseHandle<Request>,
}
impl GuestExecTask {
impl ZoneExecTask {
pub async fn run(&self) -> Result<()> {
let mut receiver = self.handle.take().await?;
@ -58,6 +62,7 @@ impl GuestExecTask {
start.working_directory.clone()
};
let mut wait_subscription = self.wait.subscribe().await?;
let mut child = Command::new(exe)
.args(cmd)
.envs(env)
@ -69,6 +74,7 @@ impl GuestExecTask {
.spawn()
.map_err(|error| anyhow!("failed to spawn: {}", error))?;
let pid = child.id().ok_or_else(|| anyhow!("pid is not provided"))?;
let mut stdin = child
.stdin
.take()
@ -150,12 +156,19 @@ impl GuestExecTask {
}
});
let exit = child.wait().await?;
let code = exit.code().unwrap_or(-1);
let _ = join!(stdout_task, stderr_task);
stdin_task.abort();
let data_task = tokio::task::spawn(async move {
let _ = join!(stdout_task, stderr_task);
stdin_task.abort();
});
let code = loop {
if let Ok(event) = wait_subscription.recv().await {
if event.pid.as_raw() as u32 == pid {
break event.status;
}
}
};
data_task.await?;
let response = Response {
response: Some(ResponseType::ExecStream(ExecStreamResponseUpdate {
exited: true,


@ -26,7 +26,7 @@ use std::str::FromStr;
use sys_mount::{FilesystemType, Mount, MountFlags};
use tokio::fs;
use crate::background::GuestBackground;
use crate::background::ZoneBackground;
const IMAGE_BLOCK_DEVICE_PATH: &str = "/dev/xvda";
const CONFIG_BLOCK_DEVICE_PATH: &str = "/dev/xvdb";
@ -57,17 +57,17 @@ const ADDONS_MODULES_PATH: &str = "/addons/modules";
ioctl_write_int_bad!(set_controlling_terminal, TIOCSCTTY);
pub struct GuestInit {}
pub struct ZoneInit {}
impl Default for GuestInit {
impl Default for ZoneInit {
fn default() -> Self {
Self::new()
}
}
impl GuestInit {
pub fn new() -> GuestInit {
GuestInit {}
impl ZoneInit {
pub fn new() -> ZoneInit {
ZoneInit {}
}
pub async fn init(&mut self) -> Result<()> {
@ -127,7 +127,7 @@ impl GuestInit {
}
if let Some(cfg) = config.config() {
trace!("running guest task");
trace!("running zone task");
self.run(cfg, &launch, idm).await?;
} else {
return Err(anyhow!(
@ -147,7 +147,7 @@ impl GuestInit {
self.create_dir("/run", Some(0o0755)).await?;
self.mount_kernel_fs("devtmpfs", "/dev", "mode=0755", None, None)
.await?;
self.mount_kernel_fs("proc", "/proc", "", None, None)
self.mount_kernel_fs("proc", "/proc", "hidepid=1", None, None)
.await?;
self.mount_kernel_fs("sysfs", "/sys", "", None, None)
.await?;
@ -521,7 +521,7 @@ impl GuestInit {
let mut env = HashMap::new();
if let Some(config_env) = config.env() {
env.extend(GuestInit::env_map(config_env));
env.extend(ZoneInit::env_map(config_env));
}
env.extend(launch.env.clone());
env.insert("KRATA_CONTAINER".to_string(), "1".to_string());
@ -540,13 +540,13 @@ impl GuestInit {
return Err(anyhow!("cannot get file name of command path as str"));
};
cmd.insert(0, file_name.to_string());
let env = GuestInit::env_list(env);
let env = ZoneInit::env_list(env);
trace!("running guest command: {}", cmd.join(" "));
trace!("running zone command: {}", cmd.join(" "));
let path = CString::new(path.as_os_str().as_bytes())?;
let cmd = GuestInit::strings_as_cstrings(cmd)?;
let env = GuestInit::strings_as_cstrings(env)?;
let cmd = ZoneInit::strings_as_cstrings(cmd)?;
let env = ZoneInit::strings_as_cstrings(env)?;
let mut working_dir = config
.working_dir()
.as_ref()
@ -566,7 +566,7 @@ impl GuestInit {
async fn init_cgroup(&self) -> Result<Cgroup> {
trace!("initializing cgroup");
let hierarchy = cgroups_rs::hierarchies::auto();
let cgroup = Cgroup::new(hierarchy, "krata-guest-task")?;
let cgroup = Cgroup::new(hierarchy, "krata-zone-task")?;
cgroup.set_cgroup_type("threaded")?;
trace!("initialized cgroup");
Ok(cgroup)
@ -619,7 +619,7 @@ impl GuestInit {
cmd: Vec<CString>,
env: Vec<CString>,
) -> Result<()> {
GuestInit::set_controlling_terminal()?;
ZoneInit::set_controlling_terminal()?;
std::env::set_current_dir(working_dir)?;
cgroup.add_task(CgroupPid::from(std::process::id() as u64))?;
execve(&path, &cmd, &env)?;
@ -640,7 +640,7 @@ impl GuestInit {
cgroup: Cgroup,
executed: Pid,
) -> Result<()> {
let mut background = GuestBackground::new(idm, cgroup, executed).await?;
let mut background = ZoneBackground::new(idm, cgroup, executed).await?;
background.run().await?;
Ok(())
}


@ -13,7 +13,7 @@ pub mod metrics;
pub async fn death(code: c_int) -> Result<()> {
let store = XsdClient::open().await?;
store
.write_string("krata/guest/exit-code", &code.to_string())
.write_string("krata/zone/exit-code", &code.to_string())
.await?;
drop(store);
loop {

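death() now records the exit code under krata/zone/exit-code, matching the renamed path the runtime reads back from each domain's xenstore tree earlier in this diff. A sketch of that host-side read, using the XsdClient calls as they appear here:

use anyhow::Result;
use xenstore::{XsdClient, XsdInterface};

// dom_path is the xenstore domain path, e.g. /local/domain/<domid>.
async fn read_zone_exit_code(dom_path: &str) -> Result<Option<i32>> {
    let store = XsdClient::open().await?;
    let value = store
        .read_string(&format!("{}/krata/zone/exit-code", dom_path))
        .await?;
    Ok(value.and_then(|text| text.parse::<i32>().ok()))
}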

@ -14,7 +14,7 @@ impl MetricsCollector {
pub fn collect(&self) -> Result<MetricNode> {
let mut sysinfo = sysinfo::System::new();
Ok(MetricNode::structural(
"guest",
"zone",
vec![
self.collect_system(&mut sysinfo)?,
self.collect_processes(&mut sysinfo)?,


@ -47,9 +47,6 @@ do
elif [ "${FORM}" = "bundle-systemd" ]
then
asset "${SOURCE_FILE_PATH}" "target/assets/krata-systemd_${TAG_NAME}_${PLATFORM}.tgz"
elif [ "${FORM}" = "os" ]
then
asset "${SOURCE_FILE_PATH}" "target/assets/krata_${TAG_NAME}_${PLATFORM}.qcow2"
else
echo "ERROR: Unknown form '${FORM}'"
exit 1


@ -2,3 +2,4 @@
set -e
brew install protobuf
brew upgrade rustup || true


@ -1,10 +1,28 @@
#!/bin/sh
#!/bin/bash
set -e
sudo apt-get update
sudo apt-get install -y \
build-essential libssl-dev libelf-dev musl-dev \
flex bison bc protobuf-compiler musl-tools qemu-utils gcc-aarch64-linux-gnu
CROSS_RS_REV="7b79041c9278769eca57fae10c74741f5aa5c14b"
FPM_VERSION="1.15.1"
sudo gem install --no-document fpm
cargo install cross --git https://github.com/cross-rs/cross
PACKAGES=(build-essential musl-dev protobuf-compiler musl-tools)
sudo apt-get update
if [ "${TARGET_ARCH}" = "aarch64" ]
then
PACKAGES+=(gcc-aarch64-linux-gnu)
fi
sudo apt-get install -y "${PACKAGES[@]}"
CROSS_COMPILE="$(./hack/build/cross-compile.sh)"
if [ "${CROSS_COMPILE}" = "1" ]
then
cargo install cross --git "https://github.com/cross-rs/cross.git" --rev "${CROSS_RS_REV}"
fi
if [ "${CI_NEEDS_FPM}" = "1" ]
then
sudo gem install --no-document fpm -v "${FPM_VERSION}"
fi


@ -5,4 +5,3 @@ REAL_SCRIPT="$(realpath "${0}")"
cd "$(dirname "${REAL_SCRIPT}")/../.."
find hack -type f -name '*.sh' -print0 | xargs -0 shellcheck -x
find os/internal -type f -name '*.sh' -print0 | xargs -0 shellcheck -x


@ -19,12 +19,12 @@ fi
build_and_run() {
EXE_TARGET="${1}"
shift
sudo mkdir -p /var/lib/krata/guest
sudo mkdir -p /var/lib/krata/zone
if [ "${KRATA_BUILD_INITRD}" = "1" ]
then
TARGET_ARCH="$(./hack/build/arch.sh)"
./hack/initrd/build.sh ${CARGO_BUILD_FLAGS}
sudo cp "target/initrd/initrd-${TARGET_ARCH}" "/var/lib/krata/guest/initrd"
sudo cp "target/initrd/initrd-${TARGET_ARCH}" "/var/lib/krata/zone/initrd"
fi
RUST_TARGET="$(./hack/build/target.sh)"
./hack/build/cargo.sh build ${CARGO_BUILD_FLAGS} --bin "${EXE_TARGET}"

hack/dist/systar.sh (vendored)

@ -38,9 +38,9 @@ else
mv ../krata/kratad.service ../krata/kratanet.service usr/lib/systemd/system/
fi
mkdir -p usr/share/krata/guest
mv ../krata/kernel ../krata/initrd usr/share/krata/guest
mv ../krata/addons.squashfs usr/share/krata/guest/addons.squashfs
mkdir -p usr/share/krata/zone
mv ../krata/kernel ../krata/initrd usr/share/krata/zone
mv ../krata/addons.squashfs usr/share/krata/zone/addons.squashfs
tar czf "${SYSTAR}" --owner 0 --group 0 .


@ -12,9 +12,9 @@ export TARGET_LIBC="musl"
RUST_TARGET="$(./hack/build/target.sh)"
export RUSTFLAGS="-Ctarget-feature=+crt-static"
./hack/build/cargo.sh build "${@}" --release --bin krataguest
./hack/build/cargo.sh build "${@}" --release --bin krata-zone
INITRD_DIR="$(mktemp -d /tmp/krata-initrd.XXXXXXXXXXXXX)"
cp "target/${RUST_TARGET}/release/krataguest" "${INITRD_DIR}/init"
cp "target/${RUST_TARGET}/release/krata-zone" "${INITRD_DIR}/init"
chmod +x "${INITRD_DIR}/init"
cd "${INITRD_DIR}"
mkdir -p "${KRATA_DIR}/target/initrd"


@ -1,119 +0,0 @@
#!/bin/sh
set -e
REAL_SCRIPT="$(realpath "${0}")"
cd "$(dirname "${REAL_SCRIPT}")/../.."
./hack/dist/apk.sh
KRATA_VERSION="$(./hack/dist/version.sh)"
TARGET_ARCH="$(./hack/build/arch.sh)"
TARGET_ARCH_ALT="$(KRATA_ARCH_KERNEL_NAME=1 ./hack/build/arch.sh)"
CROSS_COMPILE="$(./hack/build/cross-compile.sh)"
TARGET_DIR="${PWD}/target"
TARGET_OS_DIR="${TARGET_DIR}/os"
mkdir -p "${TARGET_OS_DIR}"
cp "${TARGET_DIR}/dist/krata_${KRATA_VERSION}_${TARGET_ARCH}.apk" "${TARGET_OS_DIR}/krata-${TARGET_ARCH}.apk"
DOCKER_FLAGS="--platform linux/${TARGET_ARCH_ALT}"
if [ -t 0 ]
then
DOCKER_FLAGS="${DOCKER_FLAGS} -it"
fi
if [ "${CROSS_COMPILE}" = "1" ]
then
docker run --privileged --rm tonistiigi/binfmt --install all
fi
ROOTFS="${TARGET_OS_DIR}/rootfs-${TARGET_ARCH}.tar"
# shellcheck disable=SC2086
docker run --rm --privileged -v "${PWD}:/mnt" ${DOCKER_FLAGS} alpine:latest "/mnt/os/internal/stage1.sh" "${TARGET_ARCH}"
sudo chown "${USER}:${GROUP}" "${ROOTFS}"
sudo modprobe nbd
next_nbd_device() {
find /dev -maxdepth 2 -name 'nbd[0-9]*' | while read -r DEVICE
do
if [ "$(sudo blockdev --getsize64 "${DEVICE}")" = "0" ]
then
echo "${DEVICE}"
break
fi
done
}
NBD_DEVICE="$(next_nbd_device)"
if [ -z "${NBD_DEVICE}" ]
then
echo "ERROR: unable to allocate nbd device" > /dev/stderr
exit 1
fi
OS_IMAGE="${TARGET_OS_DIR}/krata-${TARGET_ARCH}.qcow2"
EFI_PART="${NBD_DEVICE}p1"
ROOT_PART="${NBD_DEVICE}p2"
ROOT_DIR="${TARGET_OS_DIR}/root-${TARGET_ARCH}"
EFI_DIR="${ROOT_DIR}/boot/efi"
cleanup() {
trap '' EXIT HUP INT TERM
sudo umount -R "${ROOT_DIR}" > /dev/null 2>&1 || true
sudo umount "${EFI_PART}" > /dev/null 2>&1 || true
sudo umount "${ROOT_PART}" > /dev/null 2>&1 || true
sudo qemu-nbd --disconnect "${NBD_DEVICE}" > /dev/null 2>&1 || true
sudo rm -rf "${ROOT_DIR}"
}
rm -f "${OS_IMAGE}"
qemu-img create -f qcow2 "${OS_IMAGE}" "2G"
trap cleanup EXIT HUP INT TERM
sudo qemu-nbd --connect="${NBD_DEVICE}" --cache=writeback -f qcow2 "${OS_IMAGE}"
printf '%s\n' \
'label: gpt' \
'name=efi,type=U,size=128M,bootable' \
'name=system,type=L' | sudo sfdisk "${NBD_DEVICE}"
sudo mkfs.fat -F32 -n EFI "${EFI_PART}"
sudo mkfs.ext4 -L root -E discard "${ROOT_PART}"
mkdir -p "${ROOT_DIR}"
sudo mount -t ext4 "${ROOT_PART}" "${ROOT_DIR}"
sudo mkdir -p "${EFI_DIR}"
sudo mount -t vfat "${EFI_PART}" "${EFI_DIR}"
sudo tar xf "${ROOTFS}" -C "${ROOT_DIR}"
ROOT_UUID="$(sudo blkid "${ROOT_PART}" | sed -En 's/.*\bUUID="([^"]+)".*/\1/p')"
EFI_UUID="$(sudo blkid "${EFI_PART}" | sed -En 's/.*\bUUID="([^"]+)".*/\1/p')"
echo "${ROOT_UUID}"
sudo mkdir -p "${ROOT_DIR}/proc" "${ROOT_DIR}/dev" "${ROOT_DIR}/sys"
sudo mount -t proc none "${ROOT_DIR}/proc"
sudo mount --bind /dev "${ROOT_DIR}/dev"
sudo mount --make-private "${ROOT_DIR}/dev"
sudo mount --bind /sys "${ROOT_DIR}/sys"
sudo mount --make-private "${ROOT_DIR}/sys"
sudo cp "${PWD}/os/internal/stage2.sh" "${ROOT_DIR}/stage2.sh"
echo "${ROOT_UUID}" | sudo tee "${ROOT_DIR}/root-uuid" > /dev/null
sudo mv "${ROOT_DIR}/etc/resolv.conf" "${ROOT_DIR}/etc/resolv.conf.orig"
sudo cp "/etc/resolv.conf" "${ROOT_DIR}/etc/resolv.conf"
sudo chroot "${ROOT_DIR}" /bin/sh -c "/stage2.sh ${TARGET_ARCH} ${TARGET_ARCH_ALT}"
sudo mv "${ROOT_DIR}/etc/resolv.conf.orig" "${ROOT_DIR}/etc/resolv.conf"
sudo rm -f "${ROOT_DIR}/stage2.sh"
sudo rm -f "${ROOT_DIR}/root-uuid"
{
echo "# krata fstab"
echo "UUID=${ROOT_UUID} / ext4 relatime 0 1"
echo "UUID=${EFI_UUID} / vfat rw,relatime,fmask=0133,codepage=437,iocharset=ascii,shortname=mixed,utf8,errors=remount-ro 0 2"
} | sudo tee "${ROOT_DIR}/etc/fstab" > /dev/null
cleanup
OS_SMALL_IMAGE="${TARGET_OS_DIR}/krata-${TARGET_ARCH}.small.qcow2"
qemu-img convert -O qcow2 "${OS_IMAGE}" "${OS_SMALL_IMAGE}"
mv -f "${OS_SMALL_IMAGE}" "${OS_IMAGE}"


@ -1,4 +1,4 @@
FROM rust:1.79-alpine AS build
FROM rust:1.80-alpine@sha256:596c7fa13f7458097b8c88ad83f33420da0341e2f5b544e34d9aa18a22fe11d0 AS build
RUN apk update && apk add protoc protobuf-dev build-base && rm -rf /var/cache/apk/*
ENV TARGET_LIBC=musl TARGET_VENDOR=unknown


@ -1,4 +1,4 @@
FROM rust:1.79-alpine AS build
FROM rust:1.80-alpine@sha256:596c7fa13f7458097b8c88ad83f33420da0341e2f5b544e34d9aa18a22fe11d0 AS build
RUN apk update && apk add protoc protobuf-dev build-base && rm -rf /var/cache/apk/*
ENV TARGET_LIBC=musl TARGET_VENDOR=unknown


@ -1,4 +1,4 @@
FROM rust:1.79-alpine AS build
FROM rust:1.80-alpine@sha256:596c7fa13f7458097b8c88ad83f33420da0341e2f5b544e34d9aa18a22fe11d0 AS build
RUN apk update && apk add protoc protobuf-dev build-base && rm -rf /var/cache/apk/*
ENV TARGET_LIBC=musl TARGET_VENDOR=unknown


@ -1,4 +1,4 @@
FROM rust:1.79-alpine AS build
FROM rust:1.80-alpine@sha256:596c7fa13f7458097b8c88ad83f33420da0341e2f5b544e34d9aa18a22fe11d0 AS build
RUN apk update && apk add protoc protobuf-dev build-base && rm -rf /var/cache/apk/*
ENV TARGET_LIBC=musl TARGET_VENDOR=unknown


@ -1,78 +0,0 @@
#!/bin/sh
set -e
TARGET_ARCH="${1}"
apk add --update-cache alpine-base \
linux-lts linux-firmware-none \
mkinitfs dosfstools e2fsprogs \
tzdata chrony
apk add --allow-untrusted "/mnt/target/os/krata-${TARGET_ARCH}.apk"
for SERVICE in kratad kratanet
do
rc-update add "${SERVICE}" default
done
apk add xen xen-hypervisor
rc-update add xenstored default
for MODULE in xen-netblock xen-blkback tun tap
do
echo "${MODULE}" >> /etc/modules
done
cat > /etc/network/interfaces <<-EOF
auto eth0
iface eth0 inet dhcp
EOF
for SERVICE in networking chronyd
do
rc-update add "${SERVICE}" default
done
for SERVICE in devfs dmesg mdev hwdrivers cgroups
do
rc-update add "${SERVICE}" sysinit
done
for SERVICE in modules hwclock swap hostname sysctl bootmisc syslog seedrng
do
rc-update add "${SERVICE}" boot
done
for SERVICE in killprocs savecache mount-ro
do
rc-update add "${SERVICE}" shutdown
done
echo 'root:krata' | chpasswd
echo 'krata' > /etc/hostname
{
echo '# krata resolver configuration'
echo 'nameserver 1.1.1.1'
echo 'nameserver 1.0.0.1'
echo 'nameserver 2606:4700:4700::1111'
echo 'nameserver 2606:4700:4700::1001'
} > /etc/resolv.conf
{
echo 'Welcome to krataOS!'
echo 'You may now login to the console to manage krata.'
} > /etc/issue
echo > /etc/motd
ln -s /usr/share/zoneinfo/UTC /etc/localtime
rm -rf /var/cache/apk/*
rm -rf /.dockerenv
cd /
rm -f "/mnt/target/os/rootfs-${TARGET_ARCH}.tar"
tar cf "/mnt/target/os/rootfs-${TARGET_ARCH}.tar" --numeric-owner \
--exclude 'mnt/**' --exclude 'proc/**' \
--exclude 'sys/**' --exclude 'dev/**' .

Some files were not shown because too many files have changed in this diff.